text (string, lengths 20–57.3k) | labels (class label, 4 classes)
---|---|
Title: [new]: `xml_extract(xml, xpath)`
Body: ### Check the idea has not already been suggested
- [X] I could not find my idea in [existing issues](https://github.com/unytics/bigfunctions/issues?q=is%3Aissue+is%3Aopen+label%3Anew-bigfunction)
### Edit the title above with self-explanatory function name and argument names
- [X] The function name and the argument names I entered in the title above seems self explanatory to me.
### BigFunction Description as it would appear in the documentation
Extract part of xml using xpath.
Inspired by https://github.com/stankiewicz/bigquery-xml-parser-udf
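For illustration, a minimal Python sketch of the intended semantics (using lxml here; the actual BigFunction would presumably be implemented as a BigQuery UDF, so this is only an approximation):
```python
from lxml import etree

def xml_extract(xml, xpath):
    nodes = etree.fromstring(xml.encode()).xpath(xpath)
    if not nodes:
        return None
    node = nodes[0]
    # xpath() may return elements or plain strings (e.g. for text()/attribute selectors)
    return node if isinstance(node, str) else node.text

print(xml_extract("<customer><name>Paul</name></customer>", "/customer/name"))  # Paul
```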
### Examples of (arguments, expected output) as they would appear in the documentation
`xml_extract("<customer><name>Paul</name></customer>", "/customer/name")`
--> `Paul` | 0easy
|
Title: Suite setup and teardown are executed even if all tests are skipped
Body: When all test cases in a suite are skipped because of the `--skip` command line option and a matching tag, the `Suite Setup` is executed, even when no test cases in the suite are executed. If the setup fails, all test cases will get status FAIL, even though they should be skipped.
It doesn't matter if each individual test case is tagged using `[TAGS]` or if the `Force Tags` setting is used.
A workaround is to use both the `--skip` and `--skiponfailure` options with the same tag. The status for all the keywords will then be SKIP. However, the `Suite Setup` will still be executed, which is a problem for us because the failure may leave our system in an undefined state.
We have previously used the `--exclude` option. Then the `Suite Setup` was not executed in the case described above. We switched to the `--skip` option because we want the not executed tests to be visible in the logs.
We are using Robot Framework 6.0.1.
### Example:
```
*** Settings ***
Suite Setup My Setup
*** Test Cases ***
Try skipMe
[Tags] skipMe
Log This should not be executed
*** Keywords ***
My Setup
Log This Setup will fail
Fail
```
### Executing the example:
```
$ robot --skip skipMe example.robot
==============================================================================
Example
==============================================================================
Try skipMe | FAIL |
Parent suite setup failed:
AssertionError
------------------------------------------------------------------------------
Example | FAIL |
Suite setup failed:
AssertionError
1 test, 0 passed, 1 failed
==============================================================================
Output: /home/mbs/rf/output.xml
Log: /home/mbs/rf/log.html
Report: /home/mbs/rf/report.html
``` | 0easy
|
Title: max_digits and decimal_places do not serialize to json_schema for decimal
Body: ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
Not sure if this is expected or not, but my decimal constraints do not appear to make it into the generated JSON schema:
### Example Code
```Python
from pydantic import TypeAdapter, Field
from typing import Annotated
from decimal import Decimal
constraints = {'max_digits': 5, 'decimal_places': 2, 'ge': 0.0, 'le': 10.0}
TypeAdapter(Annotated[Decimal, Field(**constraints)]).json_schema()
# Out[53]: {'anyOf': [{'maximum': 10.0, 'minimum': 0.0, 'type': 'number'}, {'type': 'string'}]}
```
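For what it's worth, the constraints do appear to be enforced at validation time (quick check below), so it seems to be only the schema generation that drops them:
```python
from decimal import Decimal
from typing import Annotated
from pydantic import Field, TypeAdapter, ValidationError

ta = TypeAdapter(Annotated[Decimal, Field(max_digits=5, decimal_places=2, ge=0, le=10)])
print(ta.validate_python("3.14"))   # Decimal('3.14') -- accepted
try:
    ta.validate_python("3.14159")   # violates max_digits / decimal_places
except ValidationError as exc:
    print(type(exc).__name__)       # ValidationError
```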
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: /home/mheiser/src/taxa-backend/.venv/lib/python3.12/site-packages/pydantic
python version: 3.12.7 (main, Oct 8 2024, 00:20:25) [Clang 18.1.8 ]
platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
related packages: fastapi-0.115.4 typing_extensions-4.12.2 pydantic-settings-2.6.1 pyright-1.1.385
commit: unknown
```
| 0easy
|
Title: DVC and syrupy. .dvc tracking files deleted by "pytest --snapshot-update"
Body: **Describe the bug**
[DVC](https://dvc.org/) is a tool that combines git and external storage to track large files that shouldn't be added to the git repository but that need to be tracked and stored. For a PNG file like `__snapshots__/test_image/test_image.jpeg` running
```
dvc add __snapshots__/test_image/test_image.jpeg
```
will
1. add a `.gitignore` entry for the file `__snapshots__/test_image/test_image.jpeg`
2. create a file `__snapshots__/test_image/test_image.jpeg.dvc` to track the file
3. execute `git add __snapshots__/test_image/test_image.jpeg.dvc .gitignore`
Then you use `git commit; dvc push; git push` to copy everything to the storage and repositories.
So the snapshots created by syrupy are still in the working copy of the repository, but now there are .dvc files next to them. A new git clone will not include the syrupy snapshots until `dvc pull` is executed.
When I ran pytest, syrupy reported
```
9 snapshots passed. 7 snapshots unused.
Re-run pytest with --snapshot-update to delete unused snapshots.
```
So I did `pytest --snapshot-update` and the .dvc files under the `__snapshots__` directory
were deleted. `git restore` brought them back.
**To reproduce**
Note: This uses bash commands to create a .dvc file that will be deleted
by running `pytest --snapshot-update`. This simplifies the reproduction steps.
1. mkdir dvc-and-syrupy
2. cd dvc-and-syrupy
3. pipenv --python 3.7
4. pipenv shell
5. git init
6. wget https://raw.githubusercontent.com/tophat/syrupy/main/tests/examples/test_custom_image_extension.py
7. python -m pip install pytest syrupy
8. pytest --snapshot-update
9. git status
```bash
On branch master
No commits yet
Untracked files:
(use "git add <file>..." to include in what will be committed)
Pipfile
__snapshots__/
dvc-and-syrupy.md
test_custom_image_extension.py
nothing added to commit but untracked files present (use "git add" to track)
```
10. echo "this is a test dvc file" > __snapshots__/test_custom_image_extension/test_jpeg_image.jpg.dvc
11. echo "/test_jpeg_image.jpg" >> __snapshots__/test_custom_image_extension/.gitignore
12. git add __snapshots__/ test_custom_image_extension.py
13. git status
```bash
On branch master
No commits yet
Changes to be committed:
(use "git rm --cached <file>..." to unstage)
new file: __snapshots__/test_custom_image_extension/.gitignore
new file: __snapshots__/test_custom_image_extension/test_jpeg_image.jpg.dvc
new file: test_custom_image_extension.py
Untracked files:
(use "git add <file>..." to include in what will be committed)
Pipfile
```
14. git commit -m "dvc tracking of snapshots"
```bash
[master (root-commit) 90fb787] dvc tracking of snapshots
3 files changed, 44 insertions(+)
create mode 100644 __snapshots__/test_custom_image_extension/.gitignore
create mode 100644 __snapshots__/test_custom_image_extension/test_jpeg_image.jpg.dvc
create mode 100644 test_custom_image_extension.py
```
15. pytest --snapshot-update
```bash
=============================================================== test session starts ===============================================================
platform linux -- Python 3.7.13, pytest-7.2.1, pluggy-1.0.0
rootdir: /home/jason/mm/dvc-and-syrupy
plugins: syrupy-3.0.6
collected 1 item
test_custom_image_extension.py . [100%]
------------------------------------------------------------- snapshot report summary -------------------------------------------------------------
1 snapshot passed. 1 unused snapshot deleted.
Deleted unknown snapshot fossil (__snapshots__/test_custom_image_extension/test_jpeg_image.jpg.dvc)
================================================================ 1 passed in 0.02s ================================================================
```
16. git status
```bash
On branch master
Changes not staged for commit:
(use "git add/rm <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
deleted: __snapshots__/test_custom_image_extension/test_jpeg_image.jpg.dvc
Untracked files:
(use "git add <file>..." to include in what will be committed)
Pipfile
no changes added to commit (use "git add" and/or "git commit -a")
```
**Expected behavior**
It would be nice to have a way to tell syrupy that
certain files, or file patterns, under the snapshot folder should be ignored.
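Purely as a strawman, such an option might look something like this in `conftest.py` (this fixture does not exist in syrupy today; the name and shape are entirely hypothetical):
```python
import pytest

@pytest.fixture
def snapshot_exclude():
    # hypothetical: glob patterns under __snapshots__ that syrupy should
    # never treat as unused snapshot fossils (and therefore never delete)
    return ["*.dvc", ".gitignore"]
```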
**Screenshots**
N/A
**Environment (please complete the following information):**
- OS: Ubuntu 18.04 but it's not an OS issue
- Syrupy Version: 2.
- Python Version: 3.7.13
**Additional context**
Right now I have in my repos, .gitignore, .dockerignore, .dvcignore, .jshintignore,
.npmignore, .prettierignore, .eslintignore..... Perhaps it's a sign that a tool has
reached a certain maturity when it accepts guidance on how to ignore unexpected files in
its sandbox.
| 0easy
|
Title: Add argument `optuna_verbose` to `AutoML()` constructor
Body: | 0easy
|
Title: [New feature] Add apply_to_images to HueSaturationValue
Body: | 0easy
|
Title: [Feature request] Add apply_to_images to UnsharpMask
Body: | 0easy
|
Title: http://update6.dedyn.$DOMAIN broken
Body: This bug is invisible if you have received HSTS headers on update6.dedyn.$DOMAIN. However, it's visible using curl:
```
$ curl http://update6.dedyn.io
curl: (52) Empty reply from server
$ curl https://update6.dedyn.io
No username URL parameter provided.
```
The expected behavior would be a redirect to HTTPS.
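A minimal way to check that expectation (sketch using Python's requests; the exact status code and Location value are assumptions):
```python
import requests

r = requests.get("http://update6.dedyn.io", allow_redirects=False)
print(r.status_code)              # expected: 301 (or 302); currently the connection is dropped
print(r.headers.get("Location"))  # expected: the https:// URL
```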
| 0easy
|
Title: [Doc] Update user guide for namespace label
Body: ### What you would like to be added?
Update user guide for the namespace label for metrics collector injection.
### Why is this needed?
As we discussed in https://github.com/kubeflow/katib/pull/2493#discussion_r1923814931, @andreyvelich suggested that the namespace label for metrics collector injection should not be patched in the Python SDK. We need to update the user guide to inform users of the effect of this label.
/cc @kubeflow/wg-automl-leads @helenxie-bit @tariq-hasan @Doris-xm @truc0
### Love this feature?
Give it a 👍 We prioritize the features with most 👍 | 0easy
|
Title: Implement key down & key up actions in the keyboard module
Body: This is requested in StackOverflow question: https://stackoverflow.com/questions/50656751/pywinauto-send-key-down-to-application
`class KeyAction` already contains necessary params: `down = True, up = True`. So just need to use them at a higher level. | 0easy
|
Title: change return type of `find_batched()` utility function
Body: Currently, `find_batched()` has the following signature:
```python
def find_batched(
...
) -> List[FindResult]:
```
In order to align with `DocumentIndex`, it should be switched to:
```python
def find_batched(
...
) -> FindResultBatched:
```
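For reference, a rough sketch of what the moved definition could look like in `utils/find.py` (the tensor union anticipates the tweak mentioned in the TODOs below and is only an assumption about the final design):
```python
from typing import List, NamedTuple, Union

import numpy as np

class FindResultBatched(NamedTuple):
    documents: List["DocList"]  # one DocList of matches per query
    scores: Union[np.ndarray, "torch.Tensor", "tf.Tensor"]
```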
**TODOs:**
- move the definition of `FindResultBatched` from `index/abstract.py` to `utils/find.py`
- change the signature as described above
- fix the imports on all Document Index classes
- Probably `FindResultBatched` needs a tweak as well: `scores: np.ndarray` probably needs to be changed to allow a union of ndarray, torch.Tensor and tf.Tensor | 0easy
|
Title: [BUG] igraph regression
Body: Igraph CI tests failing: https://github.com/graphistry/pygraphistry/actions/runs/3006925527
Maybe some upstream changes we need to work around? Ideally preserve behavior around previous versions | 0easy
|
Title: add info about presentation mode and full screen option
Body: Please add info about presentation mode and full screen option in the side bar | 0easy
|
Title: Schema generation enhancements
Body: Piccolo now has a command for generating a schema from an existing database.
```bash
piccolo schema generate
```
The current implementation doesn't cover every possible edge case and database feature, so there's room for improvement. For example:
## Column defaults
Getting the defaults for a column, and reflecting them in the Piccolo column. For example, if the column has a default of `1` in the database, we want that to be reflected in the Piccolo schema as:
```python
class MyTable(Table):
my_column = Integer(default=1)
```
## On Delete / On Update
The database values aren't currently reflected in the ``ForeignKey`` column definitions.
```python
class MyTable(Table):
my_column = ForeignKey(on_delete=OnDelete.cascade)
```
## Decimal precision
The precision of a decimal column in the database currently isn't reflected in the ``Decimal`` column definitions.
```python
class MyTable(Table):
my_column = Decimal(digits=(5,2))
```
--------
It's a fairly beginner friendly feature to work on, because even though the code is fairly advanced, it's completely self contained, and doesn't require extensive knowledge of the rest of Piccolo.
| 0easy
|
Title: [UX] `sky queue` shows a tuple for cluster name
Body: ```console
$ sky queue test-back-compat-ubuntu-4
Fetching and parsing job queue...
Fetching job queue for ('test-back-compat-ubuntu-4',)
``` | 0easy
|
Title: How do you run tests with an IDE?
Body: seems like pycharm / vscode can't recognise them since they are not prefixed with `test_` | 0easy
|
Title: Lintian suggests adding keywords to autokey-*.desktop
Body: ## Classification:
Enhancement
## Version
AutoKey version: v0.95.10 (probably as early as v0.61.5)
## Summary
Debian’s Lintian reports the following "Info" or suggestion to autokey-{gtk,qt}.desktop files:
> I: autokey-gtk: desktop-entry-lacks-keywords-entry usr/share/applications/autokey-gtk.desktop
I: autokey-qt: desktop-entry-lacks-keywords-entry usr/share/applications/autokey-qt.desktop
N:
N: This .desktop file does either not contain a "Keywords" entry or it
N: does not contain any keywords not already present in the "Name" or
N: "GenericName" entries.
N:
N: .desktop files are organized in key/value pairs (similar to .ini
N: files). "Keywords" is the name of the entry/key in the .desktop file
N: containing keywords relevant for this .desktop file.
N:
N: The desktop-file-validate tool in the desktop-file-utils package is
N: useful for checking the syntax of desktop entries.
N:
N: Refer to
N: https://specifications.freedesktop.org/desktop-entry-spec/latest/ar01s06.html,
N: https://bugs.debian.org/693918, and
N: https://wiki.gnome.org/Initiatives/GnomeGoals/DesktopFileKeywords for
N: details.
N:
N: Severity: wishlist, Certainty: certain
N:
N: Check: menu-format, Type: binary
N:
Thanks! | 0easy
|
Title: Improvement in Documentation
Body: Hey, first of all, thanks for building this library. We have been creating projects on strawberry django since day one.
Just pointing this out: in the documentation, `strawberry.django` and `strawberry_django` are used as interchangeable terms, e.g. in [Mutations](https://strawberry-graphql.github.io/strawberry-graphql-django/guide/mutations/).
I'm not sure what the difference is between the two. It would be good to use just one of them or, if they work differently, to have a section that explains the difference between them. | 0easy
|
Title: Migration from ta.lib to pandas.lib
Body: I recently installed pandas-ta-0.3.14b0 successfully, but for now I am still using the `ta` library (`!pip install ta`, `import ta`).
I am currently using the following code (ultimately to feed an SVR(kernel="rbf", epsilon=0.005)):
```python
df_indicators = ta.add_all_ta_features(
    df, open="open", high="high", low="low", close="close", volume="volume", fillna=True).shift(1)
```
I would like the equivalent line of code with pandas_ta that produces the same df_indicators. Can you help me?
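For what it's worth, the closest pandas-ta equivalent I could find seems to be the built-in "All" strategy, though I'm not sure it yields exactly the same columns (hedged sketch; the toy OHLCV frame is only there so the snippet runs):
```python
import numpy as np
import pandas as pd
import pandas_ta  # noqa: F401 -- registers the DataFrame `.ta` accessor

# toy OHLCV frame just so the sketch runs; replace with your real data
idx = pd.date_range("2022-01-01", periods=200, freq="D")
close = pd.Series(np.random.default_rng(0).normal(0, 1, 200).cumsum() + 100, index=idx)
df = pd.DataFrame({"open": close, "high": close + 1, "low": close - 1,
                   "close": close, "volume": 1000.0}, index=idx)

df.ta.strategy()             # default "All" strategy: appends every available indicator in place
df_indicators = df.shift(1)  # same one-bar shift as in the snippet above
```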
Thanks
| 0easy
|
Title: Add TravisCI badge into README and documentation
Body: A TravisCI build exists and includes code coverage. The results should be added to the README document and also the documentation. | 0easy
|
Title: [k8s] GPU labeler script should skip already labeled nodes
Body: `sky/utils/kubernetes/gpu_labeler.py` is a utility script a user can run to label GPU nodes in their k8s cluster. See https://docs.skypilot.co/en/latest/reference/kubernetes/kubernetes-setup.html#automatically-labelling-nodes for how this script may be used.
The python script labels GPU nodes by finding all nodes with `nvidia.com/gpu` resource on it, and scheduling a pod on each node which adds the necessary gpu label (specifically, `skypilot.co/accelerator: <gpu_name>` label). The relevant logic is copied here:
```
# Get the list of nodes with GPUs
gpu_nodes = []
for node in nodes:
if kubernetes_utils.get_gpu_resource_key() in node.status.capacity:
gpu_nodes.append(node)
... # launch labeling job on each node
```
While this script works, the script launches a labeling job on every node with GPU resource - regardless of if the node has already been labeled.
One could imagine a k8s cluster with GPU nodes that have been labeled in the past, but had additional nodes join the cluster to better scale workloads. In such cases, a user may run the GPU labeler script to label the nodes that have just joined the cluster, but the script will schedule pods even on already labeled nodes. This is inefficient, and we'd like to avoid this.
We could check, in the for loop mentioned above, if the node already has a `skypilot.co/accelerator` label. If the node does, we should not launch a job to label that node.
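A sketch of that check, modifying the loop quoted above (assumption: the Kubernetes client objects here expose labels via `node.metadata.labels`):
```python
# Get the list of GPU nodes that still need labeling
gpu_nodes = []
for node in nodes:
    if kubernetes_utils.get_gpu_resource_key() not in node.status.capacity:
        continue
    labels = node.metadata.labels or {}
    if 'skypilot.co/accelerator' in labels:
        continue  # already labeled; no need to schedule a labeling pod
    gpu_nodes.append(node)
```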
| 0easy
|
Title: Support Image Fill Tiling
Body: Add a new fill mode for image fills, that makes them tile. This is often useful for material-y patterns, such as dots, squiggly separators and such.
I'd propose the following changes:
- Add a new `fill_mode` to `rio.ImageFill`: `"tile"`
- Add another parameter: `tile_size: tuple[float, float]` for controlling how large each individual tile is. This parameter is ignored in other fill modes (add that to the docs). | 0easy
|
Title: server_onupdate don't allow valid types
Body: ### Ensure stubs packages are not installed
- [X] No sqlalchemy stub packages is installed (both `sqlalchemy-stubs` and `sqlalchemy2-stubs` are not compatible with v2)
### Verify if the api is typed
- [X] The api is not in a module listed in [#6810](https://github.com/sqlalchemy/sqlalchemy/issues/6810) so it should pass type checking
### Describe the typing issue
When defining `server_onupdate=<...>` in `Column()` or in `mapped_column()`, valid types are `FetchedValue`, `str`, `ClauseElement`, `TextClause`.
However, the kwarg is typed as `Optional[FetchedValue]` in both `Column()` and `mapped_column()`.
### To Reproduce
```python
from sqlalchemy import Column, DateTime, Table, func, text
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
my_table = Table(
"my_table",
Base.metadata,
Column("updated_at1", DateTime, server_onupdate=func.now()),
Column("updated_at2", DateTime, server_onupdate=text("NOW")),
Column("updated_at3", DateTime, server_onupdate="NOW"),
)
```
### Error
```
main.py:12: error: Argument "server_onupdate" to "Column" has incompatible type "now"; expected "FetchedValue | None" [arg-type]
main.py:13: error: Argument "server_onupdate" to "Column" has incompatible type "TextClause"; expected "FetchedValue | None" [arg-type]
main.py:14: error: Argument "server_onupdate" to "Column" has incompatible type "str"; expected "FetchedValue | None" [arg-type]
Found 3 errors in 1 file (checked 1 source file)
```
### Versions
- OS: Arch Linux
- Python: 3.12.3
- SQLAlchemy: 2.0.31
- Type checker: mypy 1.10.1
### Additional context
_No response_ | 0easy
|
Title: Remove twitter search embedded timeline from gevent.org
Body: Twitter [has deprecated their embedded search timelines](https://twittercommunity.com/t/deprecating-widget-settings/102295) and they will stop functioning on July 27. There are other types of embedded timelines, but the search timeline is going away.
The current embedded widget does not function very well as it is. (I have a "better" search I use in TweetDeck: `gevent lang:en exclude:nativeretweets exclude:retweets -bigeventlv -rockingevents -levent -MEGevent95 -atgevent2014 -'mpi -next -generation'`, but again, search timelines are going away.)
They do not offer a replacement that's suitable. You can embed a user timeline, or a list timeline or collection timeline. The first is extremely limited, and the later two require "curation" which seems like way more work than its worth. | 0easy
|
Title: zlma issue
Body: My environment is:
Pandas-TA version 0.3.14b (current)
TA-Lib version 0.4.25 (current)
MAC OS
I'm not able to use the non-default `mamode` when calling `zlma`
This works
```python
pta_params = {'length': 10, 'mamode': 'ema', 'offset': 0}
df.ta.zlma(**pta_params, append=True)
```
This does not work
```python
pta_params = {'length': 10, 'mamode': 'rma', 'offset': 0}
df.ta.zlma(**pta_params, append=True)
```
All ma-mode options other than ema fail with the error
```sh
Exception has occurred: TypeError
'module' object is not callable
File "ta.lib.tst.py", line 57, in <module>
df.ta.zlma(**pta_params, append=True)
TypeError: 'module' object is not callable
``` | 0easy
|
Title: [Feature]: Overall tests improvement and speedup
Body: ### 🚀 The feature, motivation and pitch
Currently, the vLLM test suite is often very expensive and takes a long time to set up and run (approx. 1 h for the full suite).
The majority of this time is spent in our frontend code.
Given that the API logic is relatively complex (OpenAI vs. API server), we can speed things up by mocking and keeping e2e tests to a minimum.
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 0easy
|
Title: Move constants to top of file
Body: The file `chatterbot/parsing.py` has three variables that should be moved closer to the top of the file to improve readability.
The variables are named: `HASHMONTHS`, `HASHWEEKDAYS`, and `HASHORDINALS`.
They should be placed below the existing constant called `NUMBERS` that is already located near the top of the file. | 0easy
|
Title: Group acquisition function and optimizer arguments
Body: The number of arguments that need passing around is increasing, sooner or later we will start having collisions, and it is getting unwieldy.
[Suggestion](https://github.com/scikit-optimize/scikit-optimize/pull/234/files#r95050797) is to replace them by `acq_func_kwargs` and `acq_optimizer_kwargs` instead of (`n_points`, `n_restarts_optimizer`, `xi`, `kappa`).
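A rough sketch of how a call site could look after such a change (hypothetical; the exact accepted keys would be defined by the implementation):
```python
from skopt import Optimizer

acq_func_kwargs = {"xi": 0.01, "kappa": 1.96}
acq_optimizer_kwargs = {"n_points": 10000, "n_restarts_optimizer": 5}

opt = Optimizer(
    dimensions=[(0.0, 1.0)],
    base_estimator="GP",
    acq_func_kwargs=acq_func_kwargs,
    acq_optimizer_kwargs=acq_optimizer_kwargs,
)
```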
Maybe start this before #115 | 0easy
|
Title: ADX value from plugin vastly different from Tradingview, just me or did I mess up some entry?
Body: Hi,
So first off this problem persists across multiple plugins and code. I love the pandas_ta library thus was hoping we could find a solution here.
First off here is the difference in values:


Here is my code:
```python
def each_copy(ticker):
ticker_data = pd.DataFrame.copy(df[df["ticker"]==ticker],deep=False)
ticker_data.set_index('date',inplace=True)
return ticker_data
test_data = each_copy("MRPL")
test_data["adx"] = ta.adx(high=test_data.high,low=test_data.low,close=test_data.close)["ADX_14"]
test_data["aDMP_14"] = ta.adx(high=test_data.high,low=test_data.low,close=test_data.close)["DMP_14"]
test_data["aDMN_14"] = ta.adx(high=test_data.high,low=test_data.low,close=test_data.close)["DMN_14"]
test_data.loc["2022-04-19 15:15:00+05:30":]
```
Data from the function 'each copy' is returned this way

After adding adx data

I hope this is enough, I did notice that the code on Tradingview is different from what we are using here. Could that be the reason? Does TV use some other formula for ADX?
I am not much of a coder thus relying on your help.
Please help and thank you. | 0easy
|
Title: Enable parallel tests
Body:
Update config (tox and Github CI) to allow for parallel testing.
Tests take ~12 minutes and parallel would be a win
| 0easy
|
Title: Update README with new Demo API
Body: ## Description
We need to update the examples in the README.md file that uses demo.scanapi.dev/api/. We've [changed the API](https://github.com/scanapi/demo-api/issues/16) and the doc should reflect that.
Old API: https://demo.scanapi.dev/api/
New API: https://demo.scanapi.dev/ | 0easy
|
Title: Importance of dataset used during quantisation?
Body: Something I don't yet understand in GPTQ is the significance of the dataset used for quantisation?
In Qwopqwop's GPTQ-for-LLaMa, the examples use `c4`. I've also seen him use `wikitext2` and `ptb`.
But now AutoGPTQ has an example that uses the Alpaca instruction/response data.
Are there benchmarks to indicate which dataset is best to use for quantisation? Or does it depend on the type of model being quantised, or the expected use case of the model?
Thanks in advance! | 0easy
|
Title: feature: document RPC responses in AsyncAPI
Body: ### Discussed in https://github.com/airtai/faststream/discussions/1570
<div type='discussions-op-text'>
<sup>Originally posted by **gri38** July 1, 2024</sup>
Hello.
When we subscribe to a queue to answer a RPC call, it seems we cannot document the return type.
Here is a small example:
```python
import logging
from faststream import FastStream
from faststream.rabbit import RabbitBroker
from pydantic import BaseModel, conint
broker = RabbitBroker()
app = FastStream(broker)
class In(BaseModel):
i: str
j: conint(gt=0)
class Out(BaseModel):
k: str
l: float
@broker.subscriber("RPC",
title="RPC",
description="This queue is used to for RPC",
)
async def rpc_handler(input: In) -> Out:
logging.info(f"{input=}")
return Out(k=input.i, l=input.j)
```
If I run with `faststream docs serve rpc:app` I cannot see Out in the doc (http://localhost:8000).
Is there a way to do so ?
When I use the RabbitRouter, I can see in the router @subscriber decorator:
```python
# FastAPI args
response_model: Annotated[
Any,
Doc(
"""
The type to use for the response.
It could be any valid Pydantic *field* type. So, it doesn't have to
be a Pydantic model, it could be other things, like a `list`, `dict`,
etc.
It will be used for:
* Documentation: the generated OpenAPI (and the UI at `/docs`) will
show it as the response (JSON Schema).
* Serialization: you could return an arbitrary object and the
`response_model` would be used to serialize that object into the
corresponding JSON.
* Filtering: the JSON sent to the client will only contain the data
(fields) defined in the `response_model`. If you returned an object
that contains an attribute `password` but the `response_model` does
not include that field, the JSON sent to the client would not have
that `password`.
* Validation: whatever you return will be serialized with the
`response_model`, converting any data as necessary to generate the
corresponding JSON. But if the data in the object returned is not
valid, that would mean a violation of the contract with the client,
so it's an error from the API developer. So, FastAPI will raise an
error and return a 500 error code (Internal Server Error).
Read more about it in the
[FastAPI docs for Response Model](https://fastapi.tiangolo.com/tutorial/response-model/).
"""
),
```
But even with `response_model`, nothing shows up in the AsyncAPI doc.
Thanks for any tips, even for "it's not possible" ;-)
</div> | 0easy
|
Title: Replace the use of "provide" with a simpler alternative as per Vale recommendation
Body: Vale recommends use of a simpler alternative to "provide"
<img width="606" alt="image" src="https://github.com/user-attachments/assets/0b724546-86e4-4a8c-8063-496c3fa9b5e4">
Task: Search across Vizro docs (vizro-core and vizro-ai) for instances of provide and replace with discretion.
If there are genuine cases where "provide" is more appropriate, use the following to switch Vale off/on in situ
```markdown
<!-- vale off -->
Legitimate use of provide
<!--vale on-->
``` | 0easy
|
Title: Remove assertion call syntax
Body: **Is your feature request related to a problem? Please describe.**
One of the project's motivations is to provide a uniform, idiomatic way to assert snapshots. We're keeping `snapshot.assert_match` for backwards compatibility with snapshottest for the moment, however no one uses the `snapshot(actual)` syntax. It's a remnant of testing out different approaches.
**Describe the solution you'd like**
Remove the snapshot call syntax: https://github.com/tophat/syrupy/blob/7cc40476556cbb7ddb3f60bab81668341550a742/src/syrupy/assertion.py#L123 | 0easy
|
Title: [FEA] luminosity encoding
Body: **Is your feature request related to a problem? Please describe.**
In some scenarios, an extra dimension can help, and luminosity is one of them. Additionally, a secondary key for normalization can also help.
**Describe the solution you'd like**
Something like:
```python
import math
import numpy as np
import pandas as pd
def with_coloring(base_g, point_colors: dict, point_color_default: int):
"""
sets base color by base_g._nodes.type and then luminosity by inferred degree
"""
#log-scale 'g2._nodes.degree'
g2 = base_g.nodes(base_g._nodes[[base_g._node, 'type']].reset_index(drop=True)).get_degrees()
g2 = g2.nodes(g2._nodes.assign(degree=pd.Series(np.log(g2._nodes['degree'].values))))
degs = g2._nodes.degree
colors = g2._nodes['type'].map(point_colors).fillna(point_color_default).astype('int64')
# color multiplier: color_final = color_base * (min_luminosity + (1 - min_luminosity) * multiplier)
min_luminosity = 0.5
type_to_min_degree = g2._nodes.groupby('type').agg({'degree': 'min'}).to_dict()['degree']
type_to_max_degree = g2._nodes.groupby('type').agg({'degree': 'max'}).to_dict()['degree']
mns = g2._nodes['type'].map(type_to_min_degree)
mxs = g2._nodes['type'].map(type_to_max_degree)
multiplier = (degs - mns + 1.0) / (mxs - mns + 1.0)
multiplier = min_luminosity + (1 - min_luminosity) * multiplier
rgbs_arr = [
pd.Series(c & 255, dtype='int')
for c in [colors.values >> 16, colors.values >> 8, colors.values]
]
weighted_arr = [
pd.Series(np.floor((c * multiplier).values)).round().astype('int64').values
for c in rgbs_arr
]
rgb = (weighted_arr[0]<<24) | (weighted_arr[1]<<16) | (weighted_arr[2]<<8)
colors = pd.Series(rgb, dtype='int64')
return (base_g
.nodes(base_g._nodes.reset_index().assign(pc=colors))
.bind(point_color='pc'))
```
**Describe alternatives you've considered**
Via just pygraphistry
- note how this works off of a known computed per-entity color; would be better if not
- with and without log scaling
- luminosity can really be by mixing in black, white, or some other color
- with and without secondary dimension ("degree" vs "degree partitioned by type")
- default on degree
Via backend:
- enables legend support & multi-client
- server likely would reuse this impl, so a good start
| 0easy
|
Title: Implement “refresh_cache” for Request.meta
Body: At the moment, you can set `dont_cache` in `Request.meta` to have your request bypass the cache.
However, what if you want the cache not to be read, but be written? For example, if you detect you got a bad HTTP 200, and you want to retry without the retry request fetching the bad response from the cache again.
Workaround: implement your own middleware:
```python
from scrapy import Spider
from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware as _HttpCacheMiddleware
class HttpCacheMiddleware(_HttpCacheMiddleware):
def process_request(self, request: Request, spider: Spider) -> Optional[Response]:
if request.meta.get("refresh_cache", False):
return None
return super().process_request(request, spider)
class MySpider(Spider):
custom_settings = {
'DOWNLOADER_MIDDLEWARES': {
'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': None,
HttpCacheMiddleware: 900,
},
}
``` | 0easy
|
Title: exit code
Body: Hello, thank you for making this library.
How can I set the exit code to 5 instead of 1 when no tests are collected [exit-codes](https://docs.pytest.org/en/latest/reference/exit-codes.html).
Thanks
<img width="514" alt="image" src="https://user-images.githubusercontent.com/59024965/162152326-9557de85-41fc-4993-9c55-525a65cce434.png">
| 0easy
|
Title: completion of file names with newlines
Body: I accidentally created a file name with an embedded newline. (I wanted to write a systemd mount service and didn't consider that the output of `systemd-escape` includes a newline)
I then wanted to get rid of the file (or rename to remove the trailing newline) and tried to auto-complete the file name, which fails.
## Current Behavior
```xsh
$ touch $(systemd-escape --suffix=mount --path /media/backup-local)
$ ls media[TAB] # Expands to:
$ ls r'media-backup\x2dlocal.mount'
ls: cannot access 'media-backup\x2dlocal.mount': No such file or directory
$ ls -l
-rw-r--r-- 1 root root 112 16. Jun 16:15 'media-backup\x2dlocal.mount'$'\n'
```
## Expected Behavior
Auto-completion should complete to some string that matches the file on disk. For example
```xsh
$ rm media[TAB] # Should expand to:
$ rm r'''media-backup\x2dlocal.mount
'''
$
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: fix provider_data in send_invoice
Body: ## Failure Information (for bugs)
provider_data in the send_invoice method should be encoded to JSON, or should be str as it is in the docs; currently it is Optional[Dict]
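A hedged sketch of the suggested handling (the helper name and placement are illustrative, not the library's actual code):
```python
import json
from typing import Optional, Union

def prepare_provider_data(provider_data: Optional[Union[str, dict]]) -> Optional[str]:
    # serialize dicts to a JSON string; pass through str / None unchanged
    if isinstance(provider_data, dict):
        return json.dumps(provider_data)
    return provider_data
```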
| 0easy
|
Title: lfda not deterministic
Body: See comment https://github.com/metric-learn/metric-learn/pull/193#pullrequestreview-245994079
It seems that `LFDA` is not deterministic because of an eigenvalue decomposition that can return one sign or another as output
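For context, a common way to make the decomposition deterministic is to normalize the sign of each eigenvector, e.g. (generic sketch, not metric-learn's code):
```python
import numpy as np

def normalize_eigenvector_signs(vecs: np.ndarray) -> np.ndarray:
    # flip each eigenvector (column) so its largest-magnitude entry is positive
    idx = np.argmax(np.abs(vecs), axis=0)
    signs = np.sign(vecs[idx, np.arange(vecs.shape[1])])
    signs[signs == 0] = 1
    return vecs * signs
```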
More details to come | 0easy
|
Title: DOC: xgboost user guide
Body: Add user guide on how to use xorbits.xgboost.
You can add a rst file at `doc/source/libraries/xorbits_train` by following the code style on `doc/source/libraries/xorbits_data/numpy.rst` or `doc/source/libraries/xorbits_data/pandas.rst` | 0easy
|
Title: Use gitpod.io for documentation work - round 2
Body: Follow up tasks from #908:
* work out what a "gitpod link" that opens the editor in a documentation file (not README.md) or a file that welcomes people (some instructions) looks like
* mention gitpod as an option for contributing to the docs in CONTRIBUTING
* add a badge to CONTRIBUTING
* they have a badge like mybinder.org does and I am sure it is somewhere in https://www.gitpod.io/docs/ but I can't find it
* add badge to the README(?)
Anyone can help out with these tasks, some are 10min of work (find link, find badge) and we can post the result here. Or if you have more than 10min make a PR that adds the link into one of the documents.
Labelling this as good first issue because it contains several small tasks. | 0easy
|
Title: [BUG] PicklingError: "Can't pickle : attribute lookup on __main__ failed" happend when fit model
Body: **Describe the bug**
When I test a model (maybe any forecast model; I tested TFTModel, NLinearModel, etc.), I set the following model parameters:
```python
force_reset=True,
save_checkpoints=True,
optimizer_kwargs={"lr": results.suggestion()},
pl_trainer_kwargs={"callbacks": [early_stopper]},
add_encoders={
    'cyclic': {'future': ['month']},
    'datetime_attribute': {'future': ['hour', 'dayofweek']},
    'position': {'past': ['relative'], 'future': ['relative']},
    'custom': {'past': [lambda idx: (idx.year - 1950) / 50]},
    'transformer': Scaler(StandardScaler())
}

model.fit(
    series=[train_1, train_2, train_3],
    val_series=[val_1, val_2, val_3],
    past_covariates=[train_1_covariates, train_2_covariates, train_3_covariates],
    val_past_covariates=[val_1_covariates, val_2_covariates, val_3_covariates],
)
```
Training with validation data should trigger saving the best model checkpoint,
but the following error happened:
```
    634 pickler.persistent_id = persistent_id
    635 print(type(obj))
--> 636 pickler.dump(obj)
    637 data_value = data_buf.getvalue()
    638 zip_file.write_record('data.pkl', data_value, len(data_value))

PicklingError: Can't pickle <function <lambda> at 0x2bc431280>: attribute lookup <lambda> on __main__ failed
```
When I remove add_encoders, training works well.
I think add_encoders introduces something that cannot be pickled correctly.
Could anybody take a look at this issue? It is blocking my model performance testing.
Thanks!
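A workaround sketch I am considering (assumption, based on the traceback: the lambda in the `'custom'` encoder cannot be pickled, so a module-level function should work instead):
```python
from darts.dataprocessing.transformers import Scaler
from sklearn.preprocessing import StandardScaler

def encode_year(idx):
    return (idx.year - 1950) / 50

add_encoders = {
    'cyclic': {'future': ['month']},
    'datetime_attribute': {'future': ['hour', 'dayofweek']},
    'position': {'past': ['relative'], 'future': ['relative']},
    'custom': {'past': [encode_year]},   # named function instead of a lambda
    'transformer': Scaler(StandardScaler()),
}
```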
**To Reproduce**
see the description.
**Expected behavior**
model should be train and save the best checkpoint
**System (please complete the following information):**
- Python version: 3.8
- darts version 0.24.0
| 0easy
|
Title: "ploomber install" should add venv directory to .gitignore
Body: (Also create .gitignore if it doesn't exist)
`pip install` automates dependency installation and creating lock files. When using pip, the command first creates a venv in the current directory, such directory must be added to .gitignore to prevent it from being committed.
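A rough sketch of the behaviour being asked for (the helper name and call site below are illustrative, not ploomber's actual code — see the linked location in `install.py`):
```python
from pathlib import Path

def add_to_gitignore(entry: str, path: str = '.gitignore') -> None:
    # create .gitignore if missing, then append the entry if not already present
    gitignore = Path(path)
    lines = gitignore.read_text().splitlines() if gitignore.exists() else []
    if entry not in lines:
        lines.append(entry)
        gitignore.write_text('\n'.join(lines) + '\n')

# e.g. right after the venv is created (directory name is hypothetical):
# add_to_gitignore('venv-my-project/')
```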
https://github.com/ploomber/ploomber/blob/ac915ea998f4da0177204f2189f8f608f3404fc6/src/ploomber/cli/install.py#L83 | 0easy
|
Title: add auto-completion to the CLI
Body: If someone types:
```
ploomber exa
```
And presses the tab key, it should autocomplete to:
```
ploomber examples
``` | 0easy
|
Title: Change default to Paste using clipboard (ctrl+v) for new phrases
Body: ## Classification:
Enhancement
## Reproducibility:
N/A
## Version
AutoKey version: All - longstanding problem
Used GUI (Gtk, Qt, or both): Both
Installed via: All
Linux Distribution: All
## Summary
The current default - Paste using keyboard - has a number of problems
1) Fails on long phrases - especially in slow target windows
2) Fails to handle some keys which are not on the physical keyboard
3) Fails with multibyte character sets (everything except EN-US)
4) Fails for some characters on remapped keyboards
## Steps to Reproduce (if applicable)
Define a phrase with any of the above problem characteristics
Define a long phrase (> 20 characters) and use it with a slow application such as a word processor
## Expected Results
Phrase contents should be accurately reproduced
## Actual Results
Phrases are emitted with missing, out of order, and/or wrong characters
We get unnecessary support questions and issues filed, including a couple in the last few days.
## Notes
This has been discussed at length on our forum, Gitter, and in other issues.
Selecting Paste using clipboard (ctrl+v) fixes most issues
I am not suggesting that we get rid of Paste using keyboard or that we do anything unexpected to change the method used automatically. It should still be an option and continue to work as is for the great many preexisting phrases which use it. | 0easy
|
Title: [FEA] secondary index for node size
Body: **Is your feature request related to a problem? Please describe.**
In categorical settings, default node sizes should often be normalized per category, not globally, as they may be sampled from different distributions and thus relative comparison doesn't make sense
**Describe the solution you'd like**
```python
import numpy as np
import pandas as pd

def with_degree_type_sizing(base_g):
#log-scale 'g2._nodes.degree'
g2 = base_g.nodes(base_g._nodes[[base_g._node, 'type']].reset_index(drop=True)).get_degrees()
g2 = g2.nodes(g2._nodes.assign(degree=pd.Series(np.log(g2._nodes['degree'].values))))
degs = g2._nodes.degree
min_scale = 0.5
type_to_min_degree = g2._nodes.groupby('type').agg({'degree': 'min'}).to_dict()['degree']
type_to_max_degree = g2._nodes.groupby('type').agg({'degree': 'max'}).to_dict()['degree']
mns = g2._nodes['type'].map(type_to_min_degree)
mxs = g2._nodes['type'].map(type_to_max_degree)
multiplier = (degs - mns + 1.0) / (mxs - mns + 1.0)
multiplier = min_scale + (1 - min_scale) * multiplier
sizes = pd.Series(degs * multiplier)
return (base_g
.nodes(base_g._nodes.reset_index().assign(sz=sizes))
.bind(point_size='sz'))
```
**Describe alternatives you've considered**
additional opts
- specify primary dimension (vs degree) and secondary (vs type)
- specify min
- specify preconditioner (log, ..)
server support
- in general, for multi-client
- in legend
- in ui
pygraphistry is a good start as can ideally reuse
| 0easy
|
Title: Tutorial Request: FastAPI
Body: There have been multiple requests for a FastAPI tutorial. For this task, we're looking for a tutorial or blogpost which showcases the use of `supabase` with FastAPI.
There are no restrictions on the type of tutorial and any number of people can take this issue up. This issue is open for creative exploration but feel free to tag/ping me if ideas are needed. | 0easy
|
Title: Support setting system prompt
Body: Support setting the system message.
```python
from magentic import prompt
@prompt(
'Add more "dude"ness to: {phrase}',
system="You are the coolest dude ever.",
)
def dudeify(phrase: str) -> str:
...
```
- `system` should support templating in the same way that `template` does.
- (future) `prompt` should output a warning that `system` is being ignored when it is passed in combination with a completions model (if/when support for those is added). | 0easy
|
Title: Register staff by admin account
Body: Currently, we can use the `REGISTER_MUTATION_FIELDS` to specify which fields we want to be at the registration, but @bzhr pointed out that it can lead to some problems when we need to make sure for example to allow only admin users to create staff users.
An easy fix for this, in my opinion, would be building a new mutation, something like `InternalRegister` that works very similar to the `Register`, but for the admin users use in mind.
Some specification:
- [ ] Only admin or staff users have access
- [ ] New setting to define who has access to it (superuser or staff)
- [ ] New setting to specify behavior, for example, maybe the email verification isn't necessary for staff users.
- [ ] What else?
---
@bzhr can you work on this? | 0easy
|
Title: function to explore object detection datasets
Body: Assuming object detection data in the same format as required by methods in:
https://github.com/cleanlab/cleanlab/tree/master/cleanlab/object_detection
Add functionality to explore basic properties about this data. For instance to compute:
1. The number of annotated objects in each image, the number of predicted objects in each image. Users can sort by these to reveal interesting insights about certain images. Eg. number of images with 0 annotations may be particularly revealing.
2. The sizes of every annotated bounding box. Users could look at boxplot/histogram and outliers here to reveal interesting insights about certain annotations. In particular, looking at the size-distribution of boxes labeled as each particular class.
3. The distribution of classes associated with each annotated bounding box. Users could look for rare classes to better understand their dataset.
Plus other useful things to explore! | 0easy
|
Title: skeletonize_3d feature request: numpy.pad "mode" parameter
Body: Hello everyone,
I'm currently working on a project involving skeletonizing elongated vessel-like shapes as a way to recover their center line. The thing is, many of these vessels get cut by the border of the image. I'm usually working with 3d images, but I'll provide a simpler 2d example here.
## The issue:
With the current padding of the numpy.pad function ('constant' mode), skeletonize_3d gives results that generally do not correspond to my expectations at the borders. The major issue is that, most of the time, the skeleton does not reach the borders. I would like to be able to select different modes (in my case, 'edge' mode in particular), as the results can differ quite a lot.
## Request:
So, is it possible to add such a parameter?
Otherwise, can I use a modified version of the library? (Does the BSD-3-Clause license permit such usage, given that we do not have any commercial objective?)
## Illustration:
Please find below a plot of a simple elongated tube, being cut by the borders with both a low and high incidence angle. As mentioned earlier, both results present different behavior at the border. The current result is the middle image whereas I would like to have the one at the right.

## Code for reproducing the results:
In order to obtain this plot, I had to modify skeletonize_3d the following way:
**Line 575**:
`def skeletonize_3d(image):`
to
`def skeletonize_3d(image, mode='constant'):`
**Line 628**:
`image_o = np.pad(image_o, pad_width=1, mode='constant')`
to
`image_o = np.pad(image_o, pad_width=1, mode=mode)`
Here is my script used as well:
```python
import matplotlib.pyplot as plt
import numpy as np
from skimage.morphology import skeletonize_3d
from skimage.io import imread
img = imread('test_skeletonize.png', as_gray=True).astype('uint8')
res_const = skeletonize_3d(img)
res_edge = skeletonize_3d(img, mode='edge')
fig, ax = plt.subplots(1,3, figsize=(10,10))
ax[0].imshow(img, cmap='Greys')
ax[1].imshow(img-res_const, cmap='Greys')
ax[2].imshow(img-res_edge, cmap='Greys')
ax[0].set_title(label='Original image')
ax[1].set_title(label='Result with constant mode')
ax[2].set_title(label='Result with edge mode')
plt.show()
```
## End word:
I'm not really used to the development community, so if anything is wrong with this ticket, please let me know.
Best Regard | 0easy
|
Title: Add IsMonthEnd primitive
Body: - This primitive determines the `is_month_end` of a datetime column | 0easy
|
Title: [Datalab new issue type] detect incorrect usage of placeholder-values in features
Body: [New Datalab issue type](https://docs.cleanlab.ai/master/cleanlab/datalab/guide/custom_issue_manager.html) called something like `placeholder` that checks `features` for potentially invalid use of placeholder values.
Motivation: placeholder values are sometimes used in place of missing/null observations in numeric data, eg. -99 in a column of values that are otherwise non-negative. These can negatively affect modeling if somebody doesn't realize these aren't real values but rather just placeholders.
The best implementation of this must be compute-efficient and will require some exploration to figure out the best method for detecting placeholders when they exist, but not flagging false positives in datasets where there are real values that just happen to look like placeholders.
One option is a two-stage algorithm:
First see whether there are any candidate placeholders (i.e. is there any column of `features` with continuous numbers across a broad range, but with one suspicious constant repeatedly occurring throughout?). This should be done in a very efficient manner (eg just based on subset of the data) and if not, the check should terminate immediately.
Second consider each candidate placeholder and estimate whether it actually seems problematic or not (is the value of the candidate-placeholder a huge outlier in the other values of this column? is the candidate placeholder a negative number while rest of column is non-negative, etc). Sophisticated variant of this could replace the candidate with missing values, try to impute them based on the other `features`, and see if the imputed values are significantly different than the placeholder values. Want to ensure very few false positives here
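A sketch of the cheap first stage described above (thresholds are arbitrary and purely illustrative):
```python
import numpy as np

def candidate_placeholders(features: np.ndarray, min_frac: float = 0.05, min_unique: int = 20) -> dict:
    """Return {column_index: suspicious_value} for numeric columns that look
    continuous but contain one suspiciously frequent repeated constant."""
    candidates = {}
    for j in range(features.shape[1]):
        col = features[:, j]
        values, counts = np.unique(col, return_counts=True)
        if len(values) < min_unique:   # probably discrete/categorical; skip
            continue
        top = counts.argmax()
        if counts[top] / len(col) >= min_frac:
            candidates[j] = values[top]
    return candidates
```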
| 0easy
|
Title: Example code for using random_flax_module with flax.linen.BatchNorm (mutable)
Body: Hi,
I am trying to use random_flax_module on a class that uses flax.linen.BatchNorm that uses mutable parameters. Is there any example on how to use that? Here is my code:
The model:
```
class NSBlock(nn.Module):
train: bool
dim: int
ks_1: int = 3
ks_2: int = 3
dl_1: int = 1
dl_2: int = 1
mp_ks: int = 3
mp_st: int = 1
@nn.compact
def __call__(self, x):
lpad1 = (self.dl_1 * (self.ks_1 - 1)) // 2
conv_r1 = nn.Conv(features = self.dim,
kernel_size = (self.ks_1, self.ks_1),
kernel_dilation = self.dl_1,
padding= lpad1)
bn_r1 = nn.BatchNorm(momentum = 0.9, use_running_average = not self.train)
lpad2 = (self.dl_2 * (self.ks_2 - 1)) // 2
conv_r2 = nn.Conv(features = self.dim,
kernel_size = (self.ks_2, self.ks_2),
kernel_dilation = self.dl_2,
padding= lpad2)
bn_r2 = nn.BatchNorm(momentum = 0.9, use_running_average = not self.train)
x_0 = conv_r1(x)
x_1 = bn_r1(x_0)
x_2 = nn.relu(x_1)
x_3 = conv_r2(x_2)
x_4 = bn_r2(x_3)
y = x + x_4
rpad = (self.mp_ks - 1) // 2
z = nn.max_pool(y, window_shape = (1, self.mp_ks),
padding = ((0, 0), (rpad, rpad)), strides = (1, self.mp_st))
return z
class NeuSomaticNet(nn.Module):
train : bool
dim: int = 64
nsblocks = [
[3, 5, 1, 1, 3, 1],
[3, 5, 1, 1, 3, 2],
[3, 5, 2, 1, 3, 2],
[3, 5, 4, 2, 3, 2],
]
@nn.compact
def __call__(self, x_0):
conv1 = nn.Conv(features = self.dim, kernel_size = (1, 3),
padding = (0, 1), strides = 1)
bn1 = nn.BatchNorm(momentum = 0.9, use_running_average = not self.train)
x_1 = conv1(x_0)
x_2 = bn1(x_1)
x = nn.relu(x_2)
# x = nn.relu(bn1(conv1(x_0)))
x = nn.max_pool(x, window_shape = (1, 3), padding = ((0, 0), (1, 1)), strides = (1, 1))
for ks_1, ks_2, dl_1, dl_2, mp_ks, mp_st in self.nsblocks:
rb = NSBlock(self.train, self.dim, ks_1, ks_2, dl_1, dl_2, mp_ks, mp_st)
x = rb(x)
# Do a transpose before using the dense layer
x = jnp.transpose(x, (0, 3, 1, 2))
ds = np.prod(list(map(lambda x: x[5], self.nsblocks)))
fc_dim = self.dim * 32 * 5 // ds
x2 = jnp.reshape(x, (-1, fc_dim))
x3 = nn.Dense(features = 240)(x2)
o1 = nn.Dense(features = 4)(x3)
return o1
```
The numpyro code:
```
def neu_model(X, Y = None, D_Y = 1, train = True):
N, H, W, C = X.shape
lclass = 4
module = NeuSomaticNet(train = train)
key1, key2 = random.split(random.PRNGKey(0), 2)
vars_in = module.init(key1, X)
lvals, mutated_vars = module.apply(vars_in, X, mutable=['batch_stats'])
net = random_flax_module("net", module, prior = (
lambda name, shape:
dist.Cauchy() if "bias" in name
else dist.Normal(0, 1)),
input_shape = (N, H, W, C), mutable=['batch_stats'])
lvals = net(X)
with numpyro.plate("batch", X.shape[0]):
numpyro.sample("Y", dist.Categorical(logits = lvals), obs = Y)
```
Part of the inference code:
```
def run_inference_DELTA(model_l, rng_key, X, Y, num_samples, svi_steps):
print("Starting DELTA inference")
guide = AutoDelta(model_l)
svi = SVI(model_l, guide, optim.Adam(0.001), Trace_ELBO())
svi_result = svi.run(random.PRNGKey(100), svi_steps, X, Y, train = True)
# create_loss_plot(args, svi_result.losses)
print("AutoDelta inference done")
params = svi_result.params
samples_1 = guide.sample_posterior(
random.PRNGKey(1), params, sample_shape=(num_samples,)
)
return samples_1
rng_key, rng_key_predict = random.split(random.PRNGKey(10))
samples_1 = run_inference_DELTA(neu_model, rng_key, inputs_f, labels_f, num_samples, svi_steps)
```
Here is the error that I am getting:
```
Starting DELTA inference
ERROR 2022-07-04 11:55:04,285 __main__ Traceback (most recent call last):
File "/home/bandyopn/global/code/neusomatic_uq/neusomatic/python/train_uq.py", line 797, in <module>
checkpoint = train_neusomatic(args.candidates_tsv, args.validation_candidates_tsv,
File "/home/bandyopn/global/code/neusomatic_uq/neusomatic/python/train_uq.py", line 633, in train_neusomatic
samples_1 = run_inference_DELTA(neu_model, rng_key, inputs_f, labels_f, num_samples, svi_steps)
File "/home/bandyopn/global/code/neusomatic_uq/neusomatic/python/train_uq.py", line 673, in run_inference_DELTA
svi_result = svi.run(random.PRNGKey(100), svi_steps, X, Y, train = True)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/svi.py", line 342, in run
svi_state = self.init(rng_key, *args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/svi.py", line 180, in init
guide_trace = trace(guide_init).get_trace(*args, **kwargs, **self.static_kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/handlers.py", line 171, in get_trace
self(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/autoguide.py", line 430, in __call__
self._setup_prototype(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/autoguide.py", line 404, in _setup_prototype
super()._setup_prototype(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/autoguide.py", line 154, in _setup_prototype
) = initialize_model(
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/util.py", line 654, in initialize_model
(init_params, pe, grad), is_valid = find_valid_initial_params(
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/util.py", line 395, in find_valid_initial_params
(init_params, pe, z_grad), is_valid = _find_valid_params(
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/util.py", line 381, in _find_valid_params
_, _, (init_params, pe, z_grad), is_valid = init_state = body_fn(init_state)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/util.py", line 331, in body_fn
model_trace = trace(seeded_model).get_trace(*model_args, **model_kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/handlers.py", line 171, in get_trace
self(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
[Previous line repeated 2 more times]
File "/home/bandyopn/global/code/neusomatic_uq/neusomatic/python/train_uq.py", line 656, in neu_model
net = random_flax_module("net", module, prior = (
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/contrib/module.py", line 355, in random_flax_module
nn = flax_module(
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/contrib/module.py", line 79, in flax_module
assert (nn_state is None) == (nn_params is None)
AssertionError
Traceback (most recent call last):
File "/home/bandyopn/global/code/neusomatic_uq/neusomatic/python/train_uq.py", line 821, in <module>
raise e
File "/home/bandyopn/global/code/neusomatic_uq/neusomatic/python/train_uq.py", line 797, in <module>
checkpoint = train_neusomatic(args.candidates_tsv, args.validation_candidates_tsv,
File "/home/bandyopn/global/code/neusomatic_uq/neusomatic/python/train_uq.py", line 633, in train_neusomatic
samples_1 = run_inference_DELTA(neu_model, rng_key, inputs_f, labels_f, num_samples, svi_steps)
File "/home/bandyopn/global/code/neusomatic_uq/neusomatic/python/train_uq.py", line 673, in run_inference_DELTA
svi_result = svi.run(random.PRNGKey(100), svi_steps, X, Y, train = True)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/svi.py", line 342, in run
svi_state = self.init(rng_key, *args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/svi.py", line 180, in init
guide_trace = trace(guide_init).get_trace(*args, **kwargs, **self.static_kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/handlers.py", line 171, in get_trace
self(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/autoguide.py", line 430, in __call__
self._setup_prototype(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/autoguide.py", line 404, in _setup_prototype
super()._setup_prototype(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/autoguide.py", line 154, in _setup_prototype
) = initialize_model(
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/util.py", line 654, in initialize_model
(init_params, pe, grad), is_valid = find_valid_initial_params(
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/util.py", line 395, in find_valid_initial_params
(init_params, pe, z_grad), is_valid = _find_valid_params(
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/util.py", line 381, in _find_valid_params
_, _, (init_params, pe, z_grad), is_valid = init_state = body_fn(init_state)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/infer/util.py", line 331, in body_fn
model_trace = trace(seeded_model).get_trace(*model_args, **model_kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/handlers.py", line 171, in get_trace
self(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/primitives.py", line 105, in __call__
return self.fn(*args, **kwargs)
[Previous line repeated 2 more times]
File "/home/bandyopn/global/code/neusomatic_uq/neusomatic/python/train_uq.py", line 656, in neu_model
net = random_flax_module("net", module, prior = (
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/contrib/module.py", line 355, in random_flax_module
nn = flax_module(
File "/home/bandyopn/.local/lib/python3.8/site-packages/numpyro/contrib/module.py", line 79, in flax_module
assert (nn_state is None) == (nn_params is None)
AssertionError
```
Could anyone please help me with how to use random_flax_module with BatchNorm? Thanks.
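In case it helps others hitting the same assertion, here is the direction I am exploring: a minimal sketch, assuming a numpyro version whose `flax_module`/`random_flax_module` accept a `mutable` argument for the BatchNorm statistics (this is an assumption on my side, not a confirmed fix):
```python
import flax.linen as nn
import numpyro
import numpyro.distributions as dist
from numpyro.contrib.module import random_flax_module


class Net(nn.Module):
    @nn.compact
    def __call__(self, x, train: bool = False):
        x = nn.Dense(16)(x)
        # BatchNorm keeps its running statistics in the "batch_stats" collection.
        x = nn.BatchNorm(use_running_average=not train)(x)
        x = nn.relu(x)
        return nn.Dense(1)(x)


def model(X, Y=None, train=False):
    net = random_flax_module(
        "net",
        Net(),
        prior=dist.Normal(0.0, 1.0),
        # Assumption: declaring the collection as mutable lets numpyro track
        # nn_state and nn_params together instead of tripping the assertion.
        mutable=["batch_stats"],
        # Example inputs so the module parameters can be initialized.
        x=X,
        train=train,
    )
    mean = net(X, train=train).squeeze(-1)
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    with numpyro.plate("data", X.shape[0]):
        numpyro.sample("obs", dist.Normal(mean, sigma), obs=Y)
```
| 0easy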
|
Title: Badly formatted output when running post-write hooks
Body: **Describe the bug**
There is a small output bug when running `alembic revision ...` with a post-write hook. The name of the hook is not correctly displayed:
```bash
$ alembic revision -m 'my new revision'
Generating myrepo/migrations/versions/202306151538_my_new_revision.py_ ... done
Running post write hook {name!r} ... # Display bug HERE
reformatted myrepo/migrations/versions/202306151538_my_new_revision.py
All done! ✨ 🍰 ✨
1 file reformatted.
done
```
Happy to fix, if approved!
**Expected behavior**
This output
```bash
$ alembic revision -m 'my new revision'
Generating myrepo/migrations/versions/202306151538_my_new_revision.py_ ... done
Running post write hook black ... # Hook name correctly displayed
reformatted myrepo/migrations/versions/202306151538_my_new_revision.py
All done! ✨ 🍰 ✨
1 file reformatted.
done
```
when `black` hook is configured.
**To Reproduce**
```bash
$ mkdir testdir
$ cd testdir
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install alembic
```
Uncomment default `post_write_hooks` parameters in `alembic.ini`
```ini
[post_write_hooks]
black.type = console_scripts
black.entrypoint = black
black.options = -l 79 REVISION_SCRIPT_FILENAME
```
Then run revision command
```bash
alembic revision -m 'my new revision'
```
**Error**
```bash
# Command output
Generating workspace/testdir/migrations/versions/324b3e655f54_my_new_revision.py ... done
Running post write hook {name!r} ...
...
```
**Versions.**
- OS: macOS
- Python: 3.11
- Alembic: 1.11.1
- SQLAlchemy: 2.0.16
- Database: PostgreSQL 15.3
- DBAPI: N/A
**Additional context**
Bug seems to be here: https://github.com/sqlalchemy/alembic/blob/79738c9e971d4bbe6f4845a54fd0f972d15e1035/alembic/script/write_hooks.py#L88
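For context, the output strongly suggests the message template is printed without interpolation. A tiny standalone illustration of the formatting difference (not Alembic's actual code, just the pattern I expect the fix to follow):
```python
name = "black"

# What seems to happen today: the template string is emitted literally.
print("Running post write hook {name!r} ...")

# What is expected: the hook name gets interpolated first.
print(f"Running post write hook {name!r} ...")
# -> Running post write hook 'black' ...
```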
**Have a nice day!**
| 0easy
|
Title: link kaggle kernel using CyclicalTransformer
Body: We would like to prepare a simple Kaggle kernel where we demo how to use the CyclicalTransformer and link it to the CyclicalTransformer.rst file | 0easy
|
Title: Would it be useful to have `req.get_header_as_int()`?
Body: Sometimes it might be handy to easily get a header's value as `int`, where validation errors would automatically result in an HTTP 400, along the lines of [`req.get_header_as_datetime`](https://falcon.readthedocs.io/en/stable/api/request_and_response_wsgi.html#falcon.Request.get_header_as_datetime) & [`req.get_param_as_int`](https://falcon.readthedocs.io/en/stable/api/request_and_response_wsgi.html#falcon.Request.get_param_as_int).
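For illustration, a rough sketch of what this could look like, written as a user-side helper on top of the existing `req.get_header()` and `falcon.HTTPInvalidHeader` (the helper name and signature here are just a suggestion):
```python
import falcon


def get_header_as_int(req, name, required=False, default=None):
    """Parse a header value as an int, turning bad values into an HTTP 400."""
    value = req.get_header(name, required=required)
    if value is None:
        return default
    try:
        return int(value)
    except ValueError:
        raise falcon.HTTPInvalidHeader('The value must be an integer.', name)
```
| 0easy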
|
Title: Add More Blogposts to Multipage Website Example
Body: ### Description
The current Multipage Website includes one blogpost. To enhance its usability and showcase its versatility, we should add more blogposts with the following specifications:
### 1. Support for Mobile and Desktop:
- Ensure that all blogposts support both mobile and desktop screens.
- Maintain a consistent design across devices while adapting layouts as necessary for smaller screens.
### 2. Distinct Blogpost Designs:
- **Each blogpost should have a unique layout to demonstrate various design possibilities.**
- Examples of differences:
-- Varying text-to-image ratios.
-- Diverse image placement.
-- Thematic variations (e.g., minimalist vs. media-rich).
### 3. Content Requirements:
- Use lorem ipsum text for placeholders, but feel free to use actual blog content if available.
- Include representative images or graphics for visual diversity.
As a reference, have a look at `adventure_in_alps.py`. If you need any help or guidance while working on this issue, feel free to join our Discord server and contact me directly. | 0easy
|
Title: How to hide the first row "preview" in data tab
Body: Hi team,
I would like to hide the first row that shows the distribution in the data tab. How can I do that?
I tried setting use_preview = False but it's not working.
```
walker = pyg.to_html(df,default_tab="data",use_preview=False)
```
The error returned:
TypeError: pygwalker.api.pygwalker.PygWalker() got multiple values for keyword argument 'use_preview' | 0easy
|
Title: Clones metric API
Body: The canonical definition is here: https://chaoss.community/?p=3429 | 0easy
|
Title: Crash in Alembic comparator for a custom TypeDecorator column when default impl differs from a specific
Body: **Migrated issue, originally created by frol ([@frol](https://github.com/frol))**
Having a custom column type, e.g.:
```python
from sqlalchemy import types
from sqlalchemy.dialects import sqlite
# ScalarCoercible ships with sqlalchemy_utils:
from sqlalchemy_utils.types.scalar_coercible import ScalarCoercible


class PasswordType(types.TypeDecorator, ScalarCoercible):
    # Excerpt: __init__ (which sets self.length) is omitted for brevity.
    def load_dialect_impl(self, dialect):
        if dialect.name == 'sqlite':
            # Use a NUMERIC type for sqlite
            impl = sqlite.NUMERIC(self.length)
        else:
            # Use a VARBINARY for all other dialects.
            impl = types.VARBINARY(self.length)
        return dialect.type_descriptor(impl)
```
Alembic crashes when it tries to compare the existing column (the first migration is fine) with the custom type:
```
File ".../lib/python3.5/site-packages/alembic/autogenerate/compare.py", line 77, in _autogen_for_tables
inspector, metadata, upgrade_ops, autogen_context)
File ".../lib/python3.5/site-packages/alembic/autogenerate/compare.py", line 177, in _compare_tables
modify_table_ops, autogen_context, inspector):
File ".../lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File ".../lib/python3.5/site-packages/alembic/autogenerate/compare.py", line 256, in _compare_columns
schema, tname, colname, conn_col, metadata_col
File ".../lib/python3.5/site-packages/alembic/util/langhelpers.py", line 314, in go
fn(*arg, **kw)
File ".../lib/python3.5/site-packages/alembic/autogenerate/compare.py", line 624, in _compare_type
conn_col, metadata_col)
File ".../lib/python3.5/site-packages/alembic/runtime/migration.py", line 391, in _compare_type
metadata_column)
File ".../lib/python3.5/site-packages/alembic/ddl/impl.py", line 261, in compare_type
return comparator and comparator(metadata_type, conn_type)
File ".../lib/python3.5/site-packages/alembic/ddl/impl.py", line 335, in _numeric_compare
t1.precision is not None and
File ".../lib/python3.5/site-packages/sqlalchemy/sql/type_api.py", line 913, in __getattr__
return getattr(self.impl, key)
AttributeError: 'VARBINARY' object has no attribute 'precision'
```
I have attempted to fix this in PR: https://bitbucket.org/zzzeek/alembic/pull-requests/64/
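In the meantime, a possible user-side workaround is a custom `compare_type` callable passed to `context.configure()` in `env.py`, which skips comparison for this particular type and otherwise defers to Alembic's default logic (sketch only; `PasswordType` refers to the custom type above, and the surrounding names are the usual env.py ones):
```python
# Inside run_migrations_online() in env.py:
def my_compare_type(context, inspected_column,
                    metadata_column, inspected_type, metadata_type):
    # Returning False means "treat the types as equal", which sidesteps
    # the crashing comparison for the custom TypeDecorator.
    if isinstance(metadata_type, PasswordType):
        return False
    # Returning None falls back to Alembic's default comparison.
    return None


context.configure(
    connection=connection,
    target_metadata=target_metadata,
    compare_type=my_compare_type,
)
```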
| 0easy
|
Title: [3.x] Move update logs from INFO to DEBUG
Body: **Is your feature request related to a problem? Please describe.**
Currently, with logging level INFO, the `aiogram.dispatcher` logger prints every single update to stdout. This makes "default" logging much more cluttered and harder to watch.
**Describe the solution you'd like**
Move update logs to DEBUG level of `aiogram.dispatcher`
**Describe alternatives you've considered**
One solution is to switch `aiogram.dispatcher` logger to WARNING+ levels, however in this case "bot started" and "bot stopped" logs will be missing.
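For completeness, that alternative boils down to plain logging configuration on the user side (which is why a DEBUG-level change in the library itself would be nicer):
```python
import logging

logging.basicConfig(level=logging.INFO)

# Silences the per-update spam, but also hides "bot started/stopped" messages.
logging.getLogger("aiogram.dispatcher").setLevel(logging.WARNING)
```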
| 0easy
|
Title: High-cardinality bar chart text color overridden by color channel encoding
Body: For high-cardinality bar charts, we have a text indicator that shows the "X more..." bars; however, the text's color attribute:
```
text = alt.Chart(visData).mark_text(
x=155,
y=142,
align="right",
color = "#ff8e04",
fontSize = 11,
text=f"+ 230 more ..."
)
```
is overridden by the color encoding:
```
chart = chart.encode(color=alt.Color('occupation',type='nominal'))
```
which is generated by `AltairChart:encode_color`; the color encoding is applied last because it is only added for visualizations that have a color attribute.
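A standalone Altair illustration of the behaviour and one possible fix direction (this is not Lux's actual rendering code): if the color encoding is attached to the bar layer only, rather than the whole layered chart, the text mark keeps its own color.
```python
import altair as alt
import pandas as pd

df = pd.DataFrame({"occupation": ["a", "b", "c"], "count": [5, 3, 2]})

bars = alt.Chart(df).mark_bar().encode(
    x="count:Q",
    y="occupation:N",
    # Color only the bars, not the layered chart as a whole.
    color=alt.Color("occupation", type="nominal"),
)

text = alt.Chart(df).mark_text(
    x=155, y=142, align="right",
    color="#ff8e04", fontSize=11,
    text="+ 230 more ...",
)

chart = bars + text  # the text keeps its #ff8e04 color here
```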
This example can be found in the census dataset.
```
df = pd.read_csv("../lux-datasets/data/census.csv")
df.set_intent(["education"])
df
```



| 0easy
|
Title: DOC: add user guide on lightgbm
Body: Add user guide on how to use xorbits.lightgbm.
You can add an rst file under doc/source/libraries/xorbits_train by following the style of `doc/source/libraries/xorbits_data/numpy.rst` or `doc/source/libraries/xorbits_data/pandas.rst`.
| 0easy
|
Title: Buggy redo when using pinecone
Body: When using pinecone, sometimes when redoing a message we see:
<img width="423" alt="image" src="https://user-images.githubusercontent.com/21161101/211937057-f3840ee0-efa1-40c4-95d4-d2beb53a6b82.png">
Towards the end, there's a blank "GPTie: <|endofstatement|>" message.
Moreover, we need to ensure that embeddings are made for results of a redo. | 0easy
|
Title: Important!!!!!!!! Exposing the author's scam
Body: Our company purchased the source code of simplepro from the author on December 1, 2020, for a total of 6,688 RMB. The author sent it to us as a compressed archive. The author's official website states the following:

However, the author became unreachable on December 10, 2020 and stopped replying to any questions; on December 26, 2020 we discovered that we had been blocked.
Author name: 潘敬
Company: 深圳市十二点科技有限公司
Account number: 7559 4971 5410 902
Below are the payment and chat records:



| 0easy
|
Title: [Feature request] Add apply_to_images to MaskDropout
Body: | 0easy
|
Title: Allow for `high` in TruncatedNormal, TruncatedCauchy
Body: TruncatedDistribution has both `low` and `high`. Why do `TruncatedNormal` and `TruncatedCauchy` only have `low`?
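For anyone needing two-sided truncation today, the generic wrapper already accepts both bounds; a minimal sketch (exact constructor support may depend on the installed numpyro version):
```python
import numpyro.distributions as dist

# Two-sided truncation via the generic wrapper.
both_sides = dist.TruncatedDistribution(dist.Normal(0.0, 1.0), low=-1.0, high=2.0)

# The convenience class currently exposes only the lower bound.
low_only = dist.TruncatedNormal(loc=0.0, scale=1.0, low=-1.0)
```
| 0easy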
|
Title: A team created by an admin in the user facing interface should be hidden by default
Body: Admins have a lot of power in CTFd and they can see things before an event opens. In Teams Mode when an Admin creates their own team the team is not automatically marked as hidden. This can accidentally reveal information.
Instead, when an Admin creates a team through the normal Team Creation flow, that team should default to Hidden. | 0easy
|
Title: Different behaviour between suffixed responders and "normal" ones
Body: I'm developing a generic resource with all the API operations (GET/POST collection, GET/PUT/PATCH/DELETE element). The idea is to reuse this component by passing a DAO object to make it truly generic. The idea is that all the methods should be disabled by some resources, and I did this with this kind of code (minimal reproduction):
```
import falcon
class GenericResource:
def __init__(self, on_get_collection=True):
if on_get_collection:
self.on_get_collection = self._on_get_collection
def _on_get_collection(self, req, resp):
resp.media = {'message': 'on_get_collection'}
app = falcon.App()
res1 = GenericResource()
res2 = GenericResource(on_get_collection=False)
app.add_route('/res1', res1, suffix='collection')
# this add_route is what crashes, because res2 ends up with no on_*_collection responders:
app.add_route('/res2', res2, suffix='collection')
```
As I start the gunicorn server, my app crashes:
```
falcon.routing.util.SuffixedMethodNotFoundError: No responders found for the specified suffix
```
At this point, the resource "res2" is useless because it has no methods, I know... but this happens when I disable all "collection" methods. If I remove the `suffix='collection'` I only get 405s on every request, like this:
```
import falcon
class NoneResource:
pass
app = falcon.App()
app.add_route('/res3', NoneResource())
# the next route crashes...
# app.add_route('/res4', NoneResource(), suffix='collection')
```
The issue is easy to bypass by removing the "collection" add_route, but I still think that both routes `/res3` (normal route) and `/res4` (suffixed route) should behave the same way, with no methods available...
| 0easy
|
Title: Add examples for authentication (Basic, OAuth, etc.)
Body: We need examples of how to perform authentication with Uplink. This includes:
- [ ] [Basic Authentication](https://uplink.readthedocs.io/en/stable/user/auth.html#basic-authentication)
- [ ] [OAuth with `requests-oauthlib`](https://uplink.readthedocs.io/en/stable/user/auth.html#using-auth-support-for-requests-and-aiohttp)
- [ ] [Using other third-party Auth Support for Requests and aiohttp](https://uplink.readthedocs.io/en/stable/user/auth.html#using-auth-support-for-requests-and-aiohttp)
- [ ] [Using the `Query` or `Header` annotations](https://uplink.readthedocs.io/en/stable/user/auth.html#other-authentication)
- [ ] [Setting authentication tokens with the `session` property](https://uplink.readthedocs.io/en/stable/user/auth.html#other-authentication)
These examples should reside under the [`examples`](https://github.com/prkumar/uplink/tree/master/examples) directory.
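To seed the Basic Authentication example, something along these lines could work (hedged sketch; the consumer class, method, and URL are placeholders, not an existing example in the repo):
```python
from uplink import Consumer, get


class GitHub(Consumer):
    """Hypothetical consumer used only to illustrate Basic Auth."""

    @get("user")
    def get_user(self):
        """Fetch the currently authenticated user."""


# Passing a (username, password) tuple enables HTTP Basic Auth.
github = GitHub(base_url="https://api.github.com/", auth=("username", "password"))
response = github.get_user()
```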
| 0easy
|
Title: Sharing my successful setup process (just 4 steps), as a reference for newcomers or anyone whose downloads fail
Body:
**Desktop (please fill in the following information):**
- Operating system: [linux]
- VPN proxy: [off]
- Project version: [1.5.0.0]
- Python version: [3.11.1]
- Download platform: douyin.com
Step 1
```
python3 -m venv py-env
source ./py-env/bin/activate
```
# Note: this [**creates a Python virtual environment**](https://docs.python.org/zh-cn/3/library/venv.html) in the current directory.
Step 2
`pip3 install f2`
# Note: installs f2.
Step 3
Edit the configuration file ./py-env/lib/python3.11/site-packages/f2/conf/app.yaml
```
douyin:
cookie:
cover: false
desc: no
folderize: false
interval: all
languages: zh_CN
max_connections: 5
max_counts: 0
max_retries: 4
max_tasks: 6
mode: post
music: false
naming: '{create}_{aweme_id}'
page_counts: 20
path: ./Download
timeout: 6
```
# Note: do not add stray quotes and do not use "null"; every colon must be followed by a space. Run `f2 dy -h` for an explanation of the options.
Step 4
**Log in** to Douyin in a browser, then **close** it. Run the following command in the terminal. Supported browsers: chrome, firefox, edge, opera
```
f2 dy --auto-cookie firefox
```
# Note: this writes the cookie value into the configuration automatically. Do **not** copy it by hand; if the format is wrong, downloads will fail.
# dy stands for the Douyin platform; see `f2 -h` for the other platforms.
Step 5
Start downloading:
`f2 dy -u https://www.douyin.com/user/MS4wLjABAAAAXXXXXXXXXXXXXXXXXXXXXXXX`
# Note: this downloads the works posted by a Douyin user. In Step 3, "mode: post" refers to the user's posted works.
# The "https://" prefix must not be omitted.
# After restarting the computer or the terminal, only Step 1 and Step 5 are needed.
Done.
| 0easy
|
Title: [BUG] dcc.Upload does not support re-upload the same file.
Body: It appears that if one uses the dcc.Upload.contents property to trigger a callback, it fails to fire when the user uploads the same file again, because the content is unchanged.
I would imagine that a property similar to `n_clicks` should be provided to trigger the callback on each upload event.
Versions:
```
dash 1.12.0
dash-core-components 1.10.0
dash-html-components 1.0.3
dash-renderer 1.4.1
``` | 0easy
|
Title: Raise test coverage above 90% for giotto/time_series/preprocessing.py
Body: Current test coverage from pytest is 84% | 0easy
|
Title: [BUG] new VAR syntax with dynamic variable name throws an error if the variable comes from a lower scope
Body: with RFW v7.0a2
I'm struggling with the new VAR syntax and the possibility of using dynamic variable names.
```robot
*** Test Cases ***
first
VAR ${basename} var
VAR ${${basename}_1} scope=GLOBAL
Log ${var_1}
```
I would assume that the test case creates a new global variable ${var_1}, but I only get the error message from robot:
`[ FAIL ] Setting variable '${${basename}_1}' failed: Variable '${basename}' not found.`
The same bug also occurs if I specify the scope SUITE. Without a scope, or with LOCAL or TEST, it works.
If the scope of the `${basename}` variable is higher than or equal to the scope of the new `${${basename}_1}`, it works.
I would assume that the name of the dynamic variable is resolved in the same scope in which it is written, not in the scope in which it is to be defined. | 0easy
|
Title: Review the GlobaLeaks codebase for Best Practices and Consistency
Body: ## Description:
Globaleaks is a collaborative open-source project that welcomes contributions from developers worldwide. One important aspect of maintaining a healthy codebase is regular code reviews. As a new contributor, you will have the opportunity to help by reviewing the code in recent pull requests (PRs) for best practices, consistency, and potential improvements.
You will also have the option to propose improvements to the code by submitting your own pull request. This is a great way to become more familiar with the project and ensure the code remains clean, efficient, and maintainable.
## Steps:
1. **Explore Open Pull Requests:**
- Access the [pull requests](https://github.com/globaleaks/globaleaks-whistleblowing-software/pulls) or directly to the [codebase](https://github.com/globaleaks/globaleaks-whistleblowing-software).
- Browse through the open pull requests or the codebase to find something that interests you. In case of a pull request look for one that is ready for review, i.e., it has been submitted and is awaiting approval.
2. **Perform the Code Review:**
- Thoroughly review the code in the pull request, checking for the following:
- **Adherence to coding standards:** Make sure the code follows the style guide (e.g., naming conventions, indentation).
- **Code quality:** Ensure the code is clean, understandable, and efficient.
- **Functionality:** Confirm that the code works as expected and doesn’t break existing functionality.
- **Security concerns:** Look for potential security issues, such as SQL injection, XSS vulnerabilities, etc.
- **Documentation:** Check if there are any missing comments or documentation in the code.
3. **Leave Constructive Feedback:**
- If you notice anything that could be improved, leave clear, constructive feedback in the comments section of the pull request.
- Point out any areas where the code can be optimized or made more readable. If something is unclear, ask for clarification.
4. **Suggest Changes or Improvements:**
- If you find something that needs to be changed or fixed, suggest improvements directly in the PR or propose your own changes by submitting a new pull request.
- You can fork the repository, make your changes, and create a new PR that addresses the issues you identified.
5. **Approve or Request Changes:**
- If everything looks good and you’re satisfied with the pull request, you can approve it.
- If you feel that changes are needed, request the author to make those changes and provide guidance on what should be done.
6. **Submit a Pull Request (Optional):**
- If you make any changes or improvements (e.g., fixes or optimizations), create a **pull request** with your changes.
- Make sure your pull request is based on the most recent version of the code to avoid conflicts.
## Prerequisites:
- **Knowledge of Git and GitHub:** Basic understanding of how to clone repositories, create branches, and submit pull requests.
- **Familiarity with the Globaleaks Codebase:** You don’t need to be an expert, but a general understanding of how Globaleaks is structured and its core components will help.
- **Experience with Code Reviews (Optional):** If you have experience reviewing code, this will be a great opportunity to practice and improve your skills.
## Why it's a Good First Issue:
- Reviewing pull requests and proposing changes is an excellent way to get familiar with the Globaleaks codebase.
- You’ll learn best practices for code quality, security, and maintainability while contributing to the project’s success.
- This task encourages collaboration and helps ensure that the codebase remains clean and efficient for all contributors.
- If you prefer, you can explore the codebase independently and contribute your own improvements.
## Helpful Links:
- [Globaleaks GitHub Repository](https://github.com/globaleaks/globaleaks-whistleblowing-software)
- [GitHub Pull Request Workflow](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests) | 0easy
|
Title: reactions_count and all values of the reactions dictionary are 0, and the output is raw Unicode, when the page language is not English
Body: When I change the page_name to scrape, the output data does not change. | 0easy
|
Title: Feature request: Allow selectable fields for export in UI
Body: Feature request:
Allow UI customisation such that users can select which fields are exported.
Example: https://stackoverflow.com/q/67651313/39296
| 0easy
|
Title: Feat: add Header
Body: | 0easy
|
Title: animation after opening docs
Body: When using the search feature, users might be directed to specific sections in the documentation. For example, if you search for "schedule", the top [result is this](https://docs.ploomber.io/en/latest/deployment/batch.html#scheduling).
When clicking on it, the page scrolls down to the "Scheduling" section; however, it isn't clear to the user that this section is the one they should pay attention to. It'd be better to have some animation drawing attention to it; I've seen this on other websites. | 0easy
|
Title: [FEATURE]: automatically find duplicate issues - when user files issue on GitHub
Body: ### Feature summary
_No response_
### Feature description
GitHub does not have a fully automated, out-of-the-box feature that suggests similar issues in real-time as you type, similar to Stack Overflow's system. However, there are some built-in and customizable features that can help streamline the process of identifying related issues:
Currently: GitHub Issues Search
**While it is manual**, GitHub provides a powerful search feature within the Issues section of repositories. Users can search by keywords and use advanced filters (e.g., labels, assignees) to find related issues. This is not automatic; the user needs to take responsibility and do these checks before raising a request.
**Proposal to have GitHub Actions for Automating Suggestions**
While not a default feature, GitHub Actions can be configured to automate parts of the issue-raising process. For example, you can create a workflow that triggers when a new issue is created and automatically searches the repository for similar issues using keywords from the issue title or body. The results could then be posted as a comment on the newly created issue, suggesting related issues for review.
There are a bunch of plugins we could use.
https://github.com/marketplace?query=duplicate+issue
### Motivation
_No response_
### Alternatives considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: Default base_estimator for Optimizer
Body: Currently, the constructor of `Optimizer` expects `base_estimator`.
I think a good default would be to use the same GaussianProcessRegressor as `gp_minimize` when `base_estimator=None`.
(Reproducing that logic is not trivial for the user, hence that would be a nice shortcut.) | 0easy
|
Title: Admin/Accounts/Roles errors: "create a copy" and "publish/unpublish" in "with selected"
Body: Attempting to create a copy of a role fails because the role model has no attribute "slug".
The same goes for publish/unpublish, which also fails due to a missing attribute.
| 0easy
|
Title: chat.petals integration with Hugging Face Spaces
Body: Currently, you can use the model either directly, via [this notebook](https://colab.research.google.com/drive/1DqaqGkI6ab92WeXDGMStWt_l8BPHUaIj#scrollTo=OJ7AFOumIYlY), or through http://chat.petals.ml/ .
The latter is a self-hosted chat app that runs code similar to the colab notebook, but wraps it with a simple web server. The source code is available [here](https://github.com/borzunov/chat.petals.ml).
It would be great if anybody could fork the chat project and reuse it for their pet projects. Thing is, not everyone has a server to host that web server - and it is not always convenient to share jupyter notebooks for everything.
To make it easier to create similar apps, we can create an example with Spaces. [Hugging Face Spaces](https://huggingface.co/spaces/launch) allows you to run simple web apps at the expense of HF.
References:
- @younesbelkada has made a space with the early version of Petals: https://huggingface.co/spaces/ybelkada/petals/tree/main
- There is another related project that integrates petals 1.0.0, https://huggingface.co/spaces/hivemind-personalized-chat/chat-gradio/tree/main
**Task:** create something similar to chat.petals on top of HF spaces. One way to do this is to fork @younesbelkada 's project and update it to the latest stable version of Petals | 0easy
|
Title: [BUG] Can't Use us-east1-gcp Pinecone Server
Body: When using an api key for a pinecone server in the us-east1-gcp environment, I get the following error:
```
2023-02-16 04:03:37 Defaulting to allowing all users to use Translator commands...
2023-02-16 04:03:37 return self.call_with_http_info(**kwargs)
2023-02-16 04:03:37 File "/usr/local/lib/python3.9/site-packages/pinecone/core/client/api_client.py", line 838, in call_with_http_info
2023-02-16 04:03:37 return self.api_client.call_api(
2023-02-16 04:03:37 File "/usr/local/lib/python3.9/site-packages/pinecone/core/client/api_client.py", line 413, in call_api
2023-02-16 04:03:37 return self.__call_api(resource_path, method,
2023-02-16 04:03:37 File "/usr/local/lib/python3.9/site-packages/pinecone/core/client/api_client.py", line 207, in __call_api
2023-02-16 04:03:37 raise e
2023-02-16 04:03:37 File "/usr/local/lib/python3.9/site-packages/pinecone/core/client/api_client.py", line 200, in __call_api
2023-02-16 04:03:37 response_data = self.request(
2023-02-16 04:03:37 File "/usr/local/lib/python3.9/site-packages/pinecone/core/client/api_client.py", line 439, in request
2023-02-16 04:03:37 return self.rest_client.GET(url,
2023-02-16 04:03:37 File "/usr/local/lib/python3.9/site-packages/pinecone/core/client/rest.py", line 236, in GET
2023-02-16 04:03:37 return self.request("GET", url,
2023-02-16 04:03:37 File "/usr/local/lib/python3.9/site-packages/pinecone/core/client/rest.py", line 219, in request
2023-02-16 04:03:37 raise UnauthorizedException(http_resp=r)
2023-02-16 04:03:37 pinecone.core.client.exceptions.UnauthorizedException: (401)
2023-02-16 04:03:37 Reason: Unauthorized
2023-02-16 04:03:37 HTTP response headers: HTTPHeaderDict({'www-authenticate': 'API key is missing or invalid for the environment "us-west1-gcp". Check that the correct environment is specified.', 'content-length': '114', 'date': 'Thu, 16 Feb 2023 04:03:42 GMT', 'server': 'envoy'})
2023-02-16 04:03:37 HTTP response body: API key is missing or invalid for the environment "us-west1-gcp". Check that the correct environment is specified.
2023-02-16 04:03:37
```
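For reference, the response header complains about "us-west1-gcp", which suggests the environment is hardcoded in the bot rather than configurable. With the pinecone client of that generation the region is chosen at init time, roughly like this (sketch; the environment-variable names are placeholders, not necessarily what the bot uses):
```python
import os
import pinecone

pinecone.init(
    api_key=os.environ["PINECONE_TOKEN"],
    # Making this configurable would let us-east1-gcp keys work too.
    environment=os.environ.get("PINECONE_REGION", "us-west1-gcp"),
)
```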
Installed using Docker | 0easy
|
Title: CI: Efficiency improvements to reduce time take by Github Actions workflows
Body: E.g., installation of dependencies may be sped up via proper caching.
For commits to PRs, we can pin the Python version and package versions everywhere so they can be cached and reused rather than being constantly reinstalled in CI.
For commits to master, we do not want to do this to ensure latest package versions are always being used (alerts us if something is broken due to a new version of package being released).
In any case, we can tolerate a more intensive CI process in master; it is the slow CI in PRs that is more cumbersome for developers to deal with. So efforts at improving CI efficiency should primarily focus on the CI jobs executed in commits to PRs. | 0easy
|
Title: If config is empty interpreter crashes
Body: ### Describe the bug
Deleting the contents of the config file crashes the interpreter.
If there is someone who wants to try to contribute to Open Source Software for the first time, here is a good issue to work on.
@ me if you need assistance, I'll help you make your first PR :teacher:
You'll need some basic understanding of python :snake:
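The usual fix pattern is to treat an empty config file as an empty mapping instead of `None`; an illustrative sketch (the real loader lives in Open Interpreter's config code and may differ):
```python
import yaml


def load_config(path):
    with open(path) as f:
        config = yaml.safe_load(f)  # returns None for an empty file
    return config or {}  # fall back to an empty dict instead of crashing
```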
### Reproduce
```
interpreter --config
```
delete content, save and close.
```
interpreter --config
```
### Expected behavior
Open the config file
### Screenshots

### Open Interpreter version
0.1.16
### Python version
3.11.6
### Operating System name and version
Windows 11
### Additional context
_No response_ | 0easy
|
Title: Buggy conversation edits when using pinecone
Body: <img width="842" alt="image" src="https://user-images.githubusercontent.com/21161101/211927058-69bb3508-ed69-4e0d-923d-6f0ebe21c670.png">
Sometimes, when editing a message in a conversation while pinecone is enabled, the bot will not properly delete the previous messages.
As an example, at the bottom: I said that I'm working on a discord bot, and then I said I'm working on a javascript project. The javascript project message was an edit, but the previous message wasn't cleared from the conversation history and was sent up again in the prompt; I'm not sure why this is.
The messages above it were also edits that got duplicated.
It has something to do with when/how embeddings are made, especially in the message edit flow, when encapsulated_send is triggered by on_message_edit. | 0easy
|
Title: Typo in listener interface documentation in the User Guide
Body: "Called when a keyword starts." is written in the documentation for the **end_keyword** method in the listener's V3
In (latest - 7.0 version)
[https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#listener-interface-versions](https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#listener-interface-versions)
for listener version 3. Is this tiny 'copy-paste' bug:

| 0easy
|
Title: [ENH] Add PyODAdapter-implementation for IsolationForest
Body: ### Describe the feature or idea you want to propose
The [`PyODAdapter`](https://github.com/aeon-toolkit/aeon/blob/main/aeon/anomaly_detection/_pyodadapter.py) in aeon allows us to use any outlier detector from [PyOD](https://github.com/yzhao062/pyod), originally proposed for relational data, for time series anomaly detection (TSAD) as well. Not all detectors are equally well suited for TSAD, however. We want to represent the frequently used and competitive outlier detection techniques directly within the `anomaly_detection` module of aeon.
Implement the [IsolationForest method](https://github.com/yzhao062/pyod/blob/master/pyod/models/iforest.py#L49) using the `PyODAdapter`.
### Describe your proposed solution
- Create a new file in `aeon.anomaly_detection` for the method
- Create a new estimator class with `PyODAdapter` as the parent
- Expose the algorithm's hyperparameters as constructor arguments, create the PyOD model and pass it to the super-constructor
- Document your class
- Add tests for certain edge cases if necessary
---
Example for IsolationForest:
```python
class IsolationForest(PyODAdapter):
"""documentation ..."""
    def __init__(self, n_estimators: int = 100, max_samples: int | str = "auto", ..., window_size: int, stride: int):
        model = IForest(n_estimators, max_samples, ...)
        super().__init__(model, window_size, stride)
@classmethod
def get_test_params(cls, parameter_set="default"):
"""..."""
return {"n_estimators": 10, ...}
```
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_ | 0easy
|
Title: Feature: more stronger unique RPC subject names
Body: We should use something stronger than just `str(uuid4())` to generate the unique reply-to topic for RPC requests in Redis (and other future brokers): https://github.com/airtai/faststream/blob/main/faststream/redis/publisher/producer.py#L71
I suggest copying the `nats.NUID` class into **FastStream** and using it the way the NatsBroker does: https://github.com/nats-io/nats.py/blob/main/nats/nuid.py
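Roughly what I mean, reusing the generator that the NATS client already ships (sketch; import path as in nats-py today):
```python
from nats.nuid import NUID

nuid = NUID()

# One shared NUID instance hands out fast, unique, crypto-seeded IDs
# that could replace str(uuid4()) for the reply-to subject.
reply_to = nuid.next().decode()
```
| 0easy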
|
Title: Now that NCR is shutting down, can tcx files exported from NCR be supported?
Body: Hello, NCR is about to shut down, and it currently allows exporting tcx files. Could running page support displaying these exported files? | 0easy
|
Title: Closing connections with RabbitMQ leads to warnings in the broker side
Body: **Describe the bug**
While working on an application that uses faststream, I happened to look at the RabbitMQ logs and found lots of warnings about unexpectedly closed connections.
After a lot of local debugging, initially expecting some bug on my side, I think I'm doing things properly (start & close), and I have reduced the problem to a minimal example.
Note that I'm a noob and maybe I'm missing something, but I haven't been able to find anything in the docs, issues, or discussions about how to properly stop/close a connection. Also note that this problem apparently doesn't cause any trouble and everything seems to be working ok; it's just annoying to get all those warnings in the RabbitMQ logs.
**How to reproduce**
```python
import asyncio
from faststream.rabbit import RabbitBroker
# Usage example
async def main():
broker = RabbitBroker("amqp://user:pass@localhost:5672")
await broker.start()
await asyncio.sleep(3)
await broker.close()
if __name__ == "__main__":
asyncio.run(main())
```
And/Or steps to reproduce the behavior:
1. Run the script.
**Expected behavior**
Broker disconnects properly and no problems are logged in the Rabbit side.
**Observed behavior**
In Rabbit logs, the execution of the script leads to these logs:
```logs
2024-12-15 19:17:05.933765+00:00 [info] <0.10969.0> accepting AMQP connection 172.17.0.1:65170 -> 172.17.0.3:5672
2024-12-15 19:17:05.942399+00:00 [info] <0.10969.0> connection 172.17.0.1:65170 -> 172.17.0.3:5672: user 'user' authenticated and granted access to vhost '/'
2024-12-15 19:17:08.979577+00:00 [warning] <0.10969.0> closing AMQP connection <0.10969.0> (172.17.0.1:65170 -> 172.17.0.3:5672, vhost: '/', user: 'user', duration: '3s'):
2024-12-15 19:17:08.979577+00:00 [warning] <0.10969.0> client unexpectedly closed TCP connection
```
**Screenshots**
N/A
**Environment**
```bash
$ faststream -v
Running FastStream 0.5.33 with CPython 3.12.7 on Darwin
```
**Additional context**
RabbitMQ running locally, docker.
| 0easy
|
Title: Configure pre-commit for this repository and update documentation
Body: In order to help new contributors it could be nice to add [pre-commit](https://pre-commit.com/) hooks to the repository with the following checks:
- flake8
- isort
- ...?
The [CONTRIBUTING.md file](https://github.com/adbar/trafilatura/blob/master/CONTRIBUTING.md) could be updated accordingly.
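A possible starting point for the configuration, to be placed in `.pre-commit-config.yaml` (sketch; the `rev` pins below are placeholders that should be bumped to current releases):
```yaml
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 6.1.0  # placeholder
    hooks:
      - id: flake8
  - repo: https://github.com/PyCQA/isort
    rev: 5.12.0  # placeholder
    hooks:
      - id: isort
```
| 0easy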
|
Title: Ambiguity in docs about ORPCA and RPCA
Body: From the docs:
> Finally, online RPCA includes two alternatives methods
But then the code examples use RPCA, and not ORPCA:
```python
> s.decomposition(algorithm="RPCA",
> output_dimension=3,
> method="SGD",
> subspace_learning_rate=1.1)
```
Do the SGD methods apply to RPCA as well? Or are these just being passed into the function and do nothing?
A second question: Does anyone have a good approach for estimating the output_dimension on big data, where the number of components is somewhere between 3 and 20? How should I perform decomposition on such data? Start with output_dimension=20, see how many components are noise, and then reduce by that number? | 0easy
|
Title: [SDK] Add Some Checks for `metrics` Field in `report_metrics()` Interface
Body: ### What you would like to be added?
We have decided to add some conditional checks in `report_metrics()` to ensure it works as expected when encountering tricky corner cases and to enhance its robustness.
### Why is this needed?
In the AutoML and Training WG Community Call, @tenzen-y asked what would happen if we pushed the same metrics twice in a single container. However, the current version of the `report_metrics()` API does not process this corner case.
Thus, we decided to raise an issue discussing the possible corner cases that `report_metrics()` might run into. Everyone is welcome to share valuable insights and suggestions here!
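To make the discussion concrete, a rough sketch of the kind of validation that could live inside `report_metrics()` (names and placement are assumptions, not the actual SDK code):
```python
def _validate_metrics(metrics):
    """Reject obviously malformed input before any metrics are pushed."""
    if not isinstance(metrics, dict) or not metrics:
        raise ValueError("metrics must be a non-empty dict of name -> value")
    for name, value in metrics.items():
        if not isinstance(name, str) or not name.strip():
            raise ValueError(f"invalid metric name: {name!r}")
        if not isinstance(value, (int, float, str)):
            raise ValueError(f"unsupported value for metric {name!r}: {value!r}")
```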
### Love this feature?
Give it a 👍 We prioritize the features with most 👍 | 0easy
|
Title: Add a `from_fixture` method to `BaseUser`
Body: If you pass a password into the `BaseUser` constructor, it hashes it.
```python
user = BaseUser(username="bob", password="bob123")
>>> user.password
pbkdf2_sha256$10000$bb353d7fc92...
```
This is the expected behaviour, but it's problematic if you're passing in an already hashed password (it will get hashed again).
An example use case is fixtures - you dump the users table, and end up with data like this:
```json
{
"user": {
"BaseUser": [
{
"id": 1,
"username": "bob",
"password": "pbkdf2_sha256$10000$abc123...",
"first_name": "",
"last_name": "",
"email": "[email protected]",
"active": true,
"admin": true,
"superuser": true,
"last_login": null
      }
    ]
  }
}
```
We need a class method on `BaseUser`, called something like `from_fixture` (or some other, better name).
```python
class BaseUser(Table):
@classmethod
    def from_fixture(cls, data: t.Dict[str, t.Any]) -> "BaseUser":
        # Create a BaseUser instance, but bypass the password hashing, as we can assume the password is already hashed.
...
```
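One possible shape for the implementation, just to kick things off (hedged sketch; it assumes plain attribute assignment doesn't re-hash, which needs checking):
```python
import typing as t


class BaseUser(Table):
    @classmethod
    def from_fixture(cls, data: t.Dict[str, t.Any]) -> "BaseUser":
        data = dict(data)
        hashed_password = data.pop("password", None)
        instance = cls(**data)
        if hashed_password is not None:
            # Assumption: assigning the attribute directly bypasses hashing.
            instance.password = hashed_password
        return instance
```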
We should also raise a `ValueError` in the `BaseUser` constructor if someone tries to pass in an already hashed password. | 0easy
|
Title: Tidy up oembed_providers.py
Body: ### Issue Summary
Currently [oembed_providers.py](https://github.com/wagtail/wagtail/blob/ac38343957787b7d7a2ff34164a98538e177ff96/wagtail/embeds/oembed_providers.py) has a list of providers that no longer exist. A quick example would be Revision3.com; it no longer exists as a service.
### Steps to Reproduce
```
from wagtail.embeds.embeds import get_embed
get_embed('http://www.revision3.com/demo')
```
This returns an `EmbedNotFoundException`, but there are other exceptions underneath because the URL/hostname doesn't exist.
Not a huge issue, but if there's a dead provider that isn't accepting connections, it just hangs:
```
get_embed('http://qik.com/demo')
```
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: Any
- Django version: Any
- Wagtail version: 6.1 / main
- Browser version: n/a
### Working on this
I think this is probably a "good first issue", so would love to see someone else give this one a go. | 0easy
|