text (stringlengths 20–57.3k) | labels (class label, 4 classes)
---|---
Title: Add cli tests to ensure args are correctly routed to function args
Body: click has a `CliRunner` to test CLI applications; however, it's limiting (e.g., monkeypatch doesn't work well). So we started to modify the `test_cli.py` tests to call the functions directly (e.g., `install.main(use_lock=True)`). But given this change, we are no longer testing that CLI args are actually routed to the right function arguments (e.g., passing `--use-lock` should imply that we call `install.main(use_lock=True)`), so we now have to add those tests. | 0easy
|
Title: Improve Globaleaks Documentation
Body: ## Description:
Good documentation is key to the success of any open-source project. Globaleaks aims to have comprehensive and user-friendly documentation, but it can always be improved. In this task, you will help improve the Globaleaks documentation by adding missing sections, fixing typos, clarifying instructions, and ensuring everything is up to date.
Your contributions will help new communities adopt Globaleaks and ensure that users and developers can easily understand how to use and contribute to the platform.
## Steps:
1. **Explore the Current Documentation:**
- Visit the [Globaleaks documentation](https://docs.globaleaks.org/) to review the existing documentation.
- Identify areas where improvements can be made, such as:
- Missing or unclear instructions.
- Sections that are outdated or need further explanation.
- Typos, grammar issues, or formatting inconsistencies.
2. **Identify Areas for Improvement:**
- Look for areas that could be enhanced, such as:
- The "Getting Started" section could be made clearer or more detailed.
- Contribution guidelines might need updating or clarification.
- Troubleshooting tips or FAQs could be expanded.
3. **Propose Improvements:**
- Edit the documentation to improve clarity, readability, and organization.
- Fix any typos or grammatical errors.
- Add sections or details to improve understanding.
- Ensure that the tone and style of the documentation are consistent throughout.
4. **Submit a Pull Request:**
- Once you've made improvements, submit a pull request (PR) with your changes.
- Provide a brief description of what you've updated in the PR description.
- Ensure your changes are well-formatted and consistent with the rest of the documentation.
5. **Request Feedback:**
- After submitting your PR, ask for feedback from other contributors or maintainers to ensure your changes are accurate and helpful.
- Be open to suggestions and make any necessary revisions based on the feedback you receive.
6. **Testing the Documentation (Optional but Recommended):**
- If you can, test the instructions or steps you've updated to ensure they work in real-world scenarios. This could include:
- Running the setup process yourself to verify accuracy.
- Ensuring any code snippets are correct and functional.
## Prerequisites:
- **Basic Markdown Knowledge:** You should be familiar with Markdown syntax, as it's commonly used for documentation.
- **Attention to Detail:** A keen eye for spotting unclear sections, typos, or missing information in the documentation.
- **No Code Experience Required:** This task is focused on improving documentation, so no coding knowledge is needed, though familiarity with the Globaleaks project is helpful.
## Why it's a Great Contribution:
- Contributing to documentation is a valuable way to support the Globaleaks project. By improving the clarity and usability of the documentation, you help new communities adopt the software more easily.
- Your work will improve the user experience for both developers and users of the platform, ensuring that the project remains accessible and welcoming to everyone.
## Helpful Links:
- [Globaleaks Documentation](https://docs.globaleaks.org/)
- [reStructuredText Primer (Sphinx)](https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html)
| 0easy
|
Title: [DOC] Add type hints for similarity search
Body: ### Describe the feature or idea you want to propose
Other modules are increasingly using type hints in their function declarations; the similarity search module should do the same.
### Describe your proposed solution
Implement type hints in function declarations, similarly to how it's done in other modules.
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_ | 0easy
|
Title: Improve the release automation process
Body: Probably something more similar to https://github.com/pydantic/logfire/pull/504 where the PR + release creation are automated as well. | 0easy
|
Title: Parafac: set fixed_modes default to None
Body: In parafac, currently `fixed_modes` defaults to an empty list, which is mutable and mutations of that list would be preserved from one call to another. Set the default to None and initialize accordingly in the core of the function.
https://github.com/tensorly/tensorly/blob/ab504452039c3fe4f46351a622dd177b29778bd2/tensorly/decomposition/_cp.py#L176 | 0easy
|
Title: Can not open and score a Word2vec model generated in v3.8.0 in v4.2.0
Body: #### Problem description
Hello,
I'm trying to score a sentence in a script using gensim v4.2.0, with a model trained in v3.8.0. However, I get the error `AttributeError: 'Word2Vec' object has no attribute 'syn1'`.
#### Steps/code/corpus to reproduce
Link to the model file: https://filetransfer.io/data-package/DIMOegMO#link
```python
from gensim.models import Word2Vec
from gensim.utils import SaveLoad
sv = SaveLoad()
model = sv.load('test.mdl')
model.score(['test'], total_sentences=1)
```
#### Versions
```python
>>> import platform; print(platform.platform())
Linux-5.4.0-135-generic-x86_64-with-Ubuntu-18.04-bionic
>>> import sys; print("Python", sys.version)
Python 3.6.9 (default, Nov 25 2022, 14:10:45)
[GCC 8.4.0]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.18.3
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.4.1
>>> import gensim; print("gensim", gensim.__version__)
gensim 4.2.0
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 1
```
Thanks a lot!
| 0easy
|
Title: Note on incomplete downloads of an account's published posts
Body: Without being logged in to an account, the Douyin web version can only show part of an account's published posts. To download all of an account's published posts, please use a Cookie from a logged-in session!
 | 0easy
|
Title: ValueError: cannot find context for 'fork'
Body: ### Description
When importing aiomultiprocess, I got this error.
### Details
from aiomultiprocess import Pool
Traceback (most recent call last):
File "<ipython-input-39-066f50ec84fc>", line 1, in <module>
from aiomultiprocess import Pool
File "C:\Users\patlo\Anaconda3\lib\site-packages\aiomultiprocess\__init__.py", line 11, in <module>
from .core import Pool, Process, Worker
File "C:\Users\patlo\Anaconda3\lib\site-packages\aiomultiprocess\core.py", line 35, in <module>
context = multiprocessing.get_context("fork")
File "C:\Users\patlo\Anaconda3\lib\multiprocessing\context.py", line 238, in get_context
return super().get_context(method)
File "C:\Users\patlo\Anaconda3\lib\multiprocessing\context.py", line 192, in get_context
raise ValueError('cannot find context for %r' % method)
ValueError: cannot find context for 'fork'
* OS:
* Python version:3.6.3
* aiomultiprocess version:0.4.0
* Can you repro on master? Yes
* Can you repro in a clean virtualenv? NO
| 0easy
|
Title: Change <Model>.from_dict() to a @classmethod
Body: **Is your feature request related to a problem? Please describe.**
I want to be able to extend a model generated by this project in my business logic. A basic example would be:
```python
# GeneratedModel generated by openapi-python-client
class EnhancedGeneratedModel(GeneratedModel):
    def id_eq(self, other: GeneratedModel) -> bool:
        return self.id == other.id
```
I cannot use `EnhancedGeneratedModel.from_dict` as that would return a `GeneratedModel`.
**Describe the solution you'd like**
Update the [`from_dict`](https://github.com/triaxtec/openapi-python-client/blob/196a8fc7c6c0abeb0b9b690a4cb9cbfed7fbe994/openapi_python_client/templates/model.pyi#L34) template in `openapi_python_client/templates/model.pyi` to be a `@classmethod` instead of a `@staticmethod` and use the passed class object to instantiate the model.
i.e. change from
```python
@staticmethod
def from_dict(d: Dict[str, Any]) -> "{{ model.reference.class_name }}":
    # ...
    return {{ model.reference.class_name }}(...)
```
to
```python
@classmethod
def from_dict(cls, d: Dict[str, Any]) -> "{{ model.reference.class_name }}":
    # ...
    return cls(...)
```
**Describe alternatives you've considered**
Manually creating a `from_dict` method on the extended class. | 0easy
|
Title: Please allow pulling the image directly from Docker Hub and running it
Body: Building the image locally is not convenient on some machines, such as a Synology NAS (白群) 😂 | 0easy
|
Title: Contribute `Diverging stacked bar` to Vizro visual vocabulary
Body: ## Thank you for contributing to our visual-vocabulary! 🎨
Our visual-vocabulary is a dashboard that serves as a comprehensive guide for selecting and creating various types of charts. It helps you decide when to use each chart type, offers sample Python code using [Plotly](https://plotly.com/python/), and includes instructions for embedding these charts into a [Vizro](https://github.com/mckinsey/vizro) dashboard.
Take a look at the dashboard here: https://huggingface.co/spaces/vizro/demo-visual-vocabulary
The source code for the dashboard is here: https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary
## Instructions
0. Get familiar with the dev set-up (this should be done already as part of the initial intro sessions)
1. Read through the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary) of the visual vocabulary
2. Follow the steps to contribute a chart. Take a look at other examples. This [commit](https://github.com/mckinsey/vizro/pull/634/commits/417efffded2285e6cfcafac5d780834e0bdcc625) might be helpful as a reference to see which changes are required to add a chart.
3. Ensure the app is running without any issues via `hatch run example visual-vocabulary`
4. List out the resources you've used in the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary)
5. Raise a PR
**Useful resources:**
- Data chart mastery: https://www.atlassian.com/data/charts/how-to-choose-data-visualization
- Diverging stacked bar: https://community.plotly.com/t/need-help-in-making-diverging-stacked-bar-charts/34023 | 0easy
|
Title: [BUG] Adding model to existing plugin, existing plugin instances disappear
Body: <!--
Please fill in each section below, otherwise, your issue will be closed.
This info allows django CMS maintainers to diagnose (and fix!) your issue
as quickly as possible.
-->
## Description
I have some simple plugins which are basically just templates to wrap child content, things like rows and columns. Usually they don't need a model. If the requirements of a plugin change and I do need a model for a plugin, adding a model will make all existing instances of a plugin disappear from the editor.
<!--
If this is a security issue stop immediately and follow the instructions at:
http://docs.django-cms.org/en/latest/contributing/development-policies.html#reporting-security-issues
-->
## Steps to reproduce
```python
@plugin_pool.register_plugin
class ColumnPlugin(CMSPluginBase):
    name = _("Column")
    # model = models.Column  <-- adding model to plugin that already exists
    render_template = "cmsplugin_column/cmsplugin_column.html"
    allow_children = True
    require_parent = True
    parent_classes = ["RowPlugin"]

    def render(self, context, instance, placeholder):
        context = super().render(context, instance, placeholder)
        return context
```
1. create the plugin above, initially without a model
2. create a couple instances of the plugin on the page
3. add the model to the plugin ( uncomment the commented line above )
4. plugins will disappear from the page
## Expected behaviour
I would expect that either the plugins show up, but without any settings, or that some error displays, indicating that some plugins don't have corresponding model instances.
## Actual behaviour
Plugins disappear without any warning or indication
## Additional information (CMS/Python/Django versions)
django==3.2.14
django-cms==3.11.0
## Do you want to help fix this issue?
Could this be solved via a manage.py command that creates default model instances for plugins that have none? I could help create such a manage.py command if I had some guidance on how to correctly query for this case.
<!--
The django CMS project is managed and kept alive by its open source community and is backed by the [django CMS Association](https://www.django-cms.org/en/about-us/). We therefore welcome any help and are grateful if people contribute to the project. Please use 'x' to check the items below.
-->
* [ ] Yes, I want to help fix this issue and I will join #workgroup-pr-review on [Slack](https://www.django-cms.org/slack) to confirm with the community that a PR is welcome.
* [x] No, I only want to report the issue.
| 0easy
|
Title: authors can choose if picture comes from gravatar, local or another url
Body: | 0easy
|
Title: Update the Arch Linux AUR link in the README
Body: The Arch AUR package name was updated to be just `autokey`, thus the link is outdated and broken.
TODO: Update the Arch AUR link in the README | 0easy
|
Title: Converting notebook to HTML throws encoding error on windows
Body: ```pytb
====================================================== DAG build failed ======================================================
----- NotebookRunner: fit -> MetaProduct({'model': File('products\\model.pickle'), 'nb': File('products\\report.html')}) -----
----------------------------------------- C:\Users\edubl\Desktop\proj\scripts\fit.py -----------------------------------------
Traceback (most recent call last):
File "c:\users\edubl\desktop\proj\venv-proj\lib\site-packages\ploomber\tasks\abc.py", line 562, in _build
res = self._run()
File "c:\users\edubl\desktop\proj\venv-proj\lib\site-packages\ploomber\tasks\abc.py", line 669, in _run
self.run()
File "c:\users\edubl\desktop\proj\venv-proj\lib\site-packages\ploomber\tasks\notebook.py", line 525, in run
self._converter.convert()
File "c:\users\edubl\desktop\proj\venv-proj\lib\site-packages\ploomber\tasks\notebook.py", line 94, in convert
self._from_ipynb(self.path_to_output, self.exporter,
File "c:\users\edubl\desktop\proj\venv-proj\lib\site-packages\ploomber\tasks\notebook.py", line 160, in _from_ipynb
path.write_text(content)
File "C:\Users\edubl\miniconda3\envs\scaffold\lib\pathlib.py", line 1256, in write_text
return f.write(data)
File "C:\Users\edubl\miniconda3\envs\scaffold\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\ue6c6' in position 233857: character maps to <undefined>
``` | 0easy
|
Title: Set which iteration is used on learning curve
Body: Mark the selected iteration (number of trees) on the plot with learning curves.

| 0easy
|
Title: Migrate from appdirs to platformdirs
Body: ### Is your feature request related to a problem? Please describe.
Appdirs has been [officially deprecated](https://github.com/ActiveState/appdirs). It is recommended to change to [platformdirs](https://pypi.org/project/platformdirs/). The repo for `platformdirs` can be found here: https://github.com/platformdirs/platformdirs
### Describe the solution you'd like
Appdirs is only used in a couple of files, as can be seen here: https://github.com/search?q=repo:KillianLucas/open-interpreter+appdirs&type=code
Please update to `platformdirs` to ensure we use maintained packages.
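Since `platformdirs` is largely API-compatible with `appdirs`, each call site should be close to a drop-in swap; a hypothetical before/after (the actual directory calls used in this project may differ):
```python
# before (deprecated)
import appdirs
config_dir = appdirs.user_config_dir("open-interpreter")

# after
import platformdirs
config_dir = platformdirs.user_config_dir("open-interpreter")
```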
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: Deprecate Skopt Suggestion Service
Body: /kind feature
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
Unfortunately, the scikit-optimize was closed on February 29, 2024.
https://github.com/scikit-optimize/scikit-optimize
So we need to stop supporting the skopt suggestion service as well.
I think that we can follow similar deprecation steps as the [MXNet](https://github.com/kubeflow/training-operator/issues/1996).
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
---
<!-- Don't delete this message to encourage users to support your issue! -->
Love this feature? Give it a 👍 We prioritize the features with the most 👍
| 0easy
|
Title: Feature? Pause after deleting abbreviation.
Body: One of the common issues with expansion is this. When expanding `abc` to `alphabet` often the first letter remains and you get `aalphabet`. I've noticed that actually sometimes what happens is this:
```
aalpabet
```
with the `h` missing (or another letter). So what seems to happen is that autokey sends backspaces to delete the `abc`. However, before the three backspaces have been sent, the keyboard starts typing again, causing the remaining backspace to be mixed with the new letters. I.e.,
```
abc
```
triggers
```
[backspace][backspace][backspace]alphabet
```
but what happens is e.g.
```
[backspace][backspace]alph[backspace]abet
```
Question/feature: Is it at all possible to adjust the speed of the insertion? Or insert a pause?
```
[backspace][backspace][backspace][pause]alphabet
```
?
I guess this could be done via a script... but for the insertion with the keyboard, I guess there's no setting? | 0easy
|
Title: zh_Hans translation file has a problem
Body: I package django-import-export for Debian, and the Debian checkers found an issue with the zh_Hans translation ([report](https://i18n.debian.org/l10n-pkg-status/p/python-django-import-export.html)):
```gettext: import_export/locale/zh_Hans/LC_MESSAGES/django.po: can't guess language``` | 0easy
|
Title: Using Prisma Model in FastAPI response annotation gives warnings
Body: <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
Using a Prisma model in a FastAPI response annotation gives warnings about the model being subclassed, a behaviour the user cannot change.
<!-- A clear and concise description of what the bug is. -->
## How to reproduce
Run a basic FastAPI app and return data from the db using prisma; ensure you use type annotations or the response model (I used fastapi-utils InferringRouter). The output will contain something similar to:
```bash
backend-fastapi-prisma | /usr/local/lib/python3.10/site-packages/fastapi/utils.py:88: UnsupportedSubclassWarning: Subclassing models while using pseudo-recursive types may cause unexpected errors when static type checking;
backend-fastapi-prisma | You can disable this warning by generating fully recursive types:
backend-fastapi-prisma | https://prisma-client-py.readthedocs.io/en/stable/reference/config/#recursive
backend-fastapi-prisma | or if that is not possible you can pass warn_subclass=False e.g.
backend-fastapi-prisma | class Role(prisma.models.Role, warn_subclass=False):
backend-fastapi-prisma | use_type = create_model(original_type.__name__, __base__=original_type)
```
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
## Expected behavior
No Warning to be shown
<!-- A clear and concise description of what you expected to happen. -->
## Prisma information
<!-- Your Prisma schema, Prisma Client Python queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
Any schema
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: All
- Database: All
- Python version: All
- Prisma version: 0.5.0
<!--[Run `prisma py version` to see your Prisma version and paste it between the ´´´]-->
```
```
| 0easy
|
Title: Question Regarding Image Downscaling Method
Body: https://github.com/nerfstudio-project/nerfstudio/blob/9b3cbc79bf239eb3c69e7c288632aab02c4f0bb1/nerfstudio/models/splatfacto.py#L83
Why was the following method chosen for downscaling images instead of directly using [F.resize](https://pytorch.org/vision/main/generated/torchvision.transforms.functional.resize.html)?
```python
def resize_image(image: torch.Tensor, d: int):
"""
Downscale images using the same 'area' method in opencv
:param image shape [H, W, C]
:param d downscale factor (must be 2, 4, 8, etc.)
return downscaled image in shape [H//d, W//d, C]
"""
import torch.nn.functional as tf
image = image.to(torch.float32)
weight = (1.0 / (d * d)) * torch.ones((1, 1, d, d), dtype=torch.float32, device=image.device)
return tf.conv2d(image.permute(2, 0, 1)[:, None, ...], weight, stride=d).squeeze(1).permute(1, 2, 0)
```
My concern is that this method may lead to misaligned coordinates. For instance, if we input an image of size 19x19 and downscale it by a factor of 4, the last 3 pixels would be left empty, whereas ideally, these 3 pixels should be evenly distributed in one row.
https://github.com/nerfstudio-project/nerfstudio/blob/9b3cbc79bf239eb3c69e7c288632aab02c4f0bb1/nerfstudio/data/dataparsers/colmap_dataparser.py#L460
Additionally, I noticed in another part of the code, linear interpolation (FFMPEG default) is used for image downsampling. Therefore, for code consistency, I believe the same interpolation method should be used during dataset preprocessing and training phase downsampling.
| 0easy
|
Title: Schaff Trend Cycle: stc - Cannot Adjust tclen=stc_tclen
Body: **Which version are you running? = 0.3.14b0
The latest version is on GitHub. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
**Do you have _TA Lib_ also installed in your environment?**
YES = TA-Lib 0.4.21
```sh
$ pip list
```
**Did you upgrade? Did the upgrade resolve the issue?** NOP
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
**Describe the bug**
The indicator (stc) appears to be hard-coded for tclen=10, which is the default.
No matter what value I enter, the results are the same. Even when I place quotes around the number, no error is produced. It seems like this parameter is being ignored.
**To Reproduce**
```python
stcy = df.ta.stc(tclen=10, fast=26, slow=12,append=True)
stcy = df.ta.stc(tclen=26, fast=26, slow=12,append=True)
stcy = df.ta.stc(tclen='10', fast=26, slow=12,append=True)
```
**Expected behavior**
STC_10_12_26_0.5
and
STC_26_12_26_0.5 with different values.
**Screenshots**
See attached screenshot

**Additional context**
Add any other context about the problem here.
Thanks for using Pandas TA!
| 0easy
|
Title: [Bug]: LLM.collective_rpc is broken in v1 by default
Body: ### Your current environment
v0.8.1
### 🐛 Describe the bug
see https://github.com/vllm-project/vllm/pull/15324#discussion_r2008716131 for details.
because `self.llm_engine.model_executor` is in a different process.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 0easy
|
Title: refactor: count number of documents using hnswlib
Body: Data storage in `HnswDocumentIndex` works in the following way:
1. Vectors are stored on disk using `hnswlib`.
2. All other types of data are saved in an SQLite database.
One of the operations we frequently perform is determining the total number of documents (`num_docs()`). However, the only way to get the number of documents from SQLite is by scanning the entire table. Even though we've made efforts to reduce the number of times we use this functionality (https://github.com/docarray/docarray/pull/1729), it's still a time-consuming process.
For better performance, let's do the following: instead of scanning the SQLite table, we can use hnswlib's `get_current_count` function to quickly get the number of documents in the index.
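Roughly, the idea is the sketch below; `get_current_count` is a real hnswlib `Index` method, while the attribute used to reach the index is a placeholder for whatever `HnswDocumentIndex` actually stores internally:
```python
def num_docs(self) -> int:
    # Placeholder attribute name; in practice this would be the hnswlib.Index
    # instance that HnswDocumentIndex already keeps for a vector column.
    index = self._hnsw_index
    return index.get_current_count()
```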
But there's a potential issue with this approach. What if documents don't have associated vectors? `get_current_count` would return 0.
We have two potential solutions:
1. Notify/Warn users about this behavior and return 0.
2. Fall back to the older method of counting via the SQL table if vector-less documents are detected. | 0easy
|
Title: Raise test coverage above 90% for giotto/time_series/features.py
Body: Current test coverage from pytest is 46% | 0easy
|
Title: Feature request on native support for Dgraph database.
Body: I see the project already supports Neo4j and Tiger graph for accessing data. I was wondering if folks would be interested in supporting [Dgraph](www.dgraph.io) ([github](https://github.com/dgraph-io/dgraph)) as well which is an open-source fast, distributed, and transactional database.
I work with Dgraph happy to help with this addition to pygraphistry. Some of our users have showed interest in visualization capabilities that pygraphistry offers. | 0easy
|
Title: [ENH] interface to `tsai` package
Body: `tsai` is an excellent package for deep learning models for time series, mostly for time series classification, but some also for forecasting.
https://github.com/timeseriesAI/tsai
It has a unified interface to these, so it should be easy to interface these from `sktime`. Possibly, even `_DelegatedClassifier` or `_DelegatedForecaster` might be used.
As a recipe, I think different classifiers should be different classes; obviously, we also want to credit the authors of the individual classifiers, with their GitHub names, in the `"authors"` tag - these can be gleaned from the commit history.
FYI @oguiza, it would be great to collaborate on this! Ref issue on `tsai`: https://github.com/timeseriesAI/tsai/issues/931
| 0easy
|
Title: TypeError: unary_unary() got an unexpected keyword argument '_registered_method'
Body: ### What happened?
When I run the following script:
```
import kubeflow.katib as katib

def train_mnist_model(parameters):
    import tensorflow as tf
    import kubeflow.katib as katib
    import numpy as np
    import logging

    logging.basicConfig(
        format="%(asctime)s %(levelname)-8s %(message)s",
        datefmt="%Y-%m-%dT%H:%M:%SZ",
        level=logging.INFO,
    )
    logging.info("--------------------------------------------------------------------------------------")
    logging.info(f"Input Parameters: {parameters}")
    logging.info("--------------------------------------------------------------------------------------\n\n")

    # Get HyperParameters from the input params dict.
    lr = float(parameters["lr"])
    num_epoch = int(parameters["num_epoch"])

    # Set dist parameters and strategy.
    is_dist = parameters["is_dist"]
    num_workers = parameters["num_workers"]
    batch_size_per_worker = 64
    batch_size_global = batch_size_per_worker * num_workers
    strategy = tf.distribute.MultiWorkerMirroredStrategy(
        communication_options=tf.distribute.experimental.CommunicationOptions(
            implementation=tf.distribute.experimental.CollectiveCommunication.RING
        )
    )

    # Callback class for logging training.
    # Katib parses metrics in this format: <metric-name>=<metric-value>.
    class CustomCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs=None):
            katib.report_metrics({
                "accuracy": logs["accuracy"],
                "logs": logs["loss"],
            })

    # Prepare MNIST Dataset.
    def mnist_dataset(batch_size):
        (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
        x_train = x_train / np.float32(255)
        y_train = y_train.astype(np.int64)
        train_dataset = (
            tf.data.Dataset.from_tensor_slices((x_train, y_train))
            .shuffle(60000)
            .repeat()
            .batch(batch_size)
        )
        return train_dataset

    # Build and compile CNN Model.
    def build_and_compile_cnn_model():
        model = tf.keras.Sequential(
            [
                tf.keras.layers.InputLayer(input_shape=(28, 28)),
                tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
                tf.keras.layers.Conv2D(32, 3, activation="relu"),
                tf.keras.layers.Flatten(),
                tf.keras.layers.Dense(128, activation="relu"),
                tf.keras.layers.Dense(10),
            ]
        )
        model.compile(
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
            optimizer=tf.keras.optimizers.SGD(learning_rate=lr),
            metrics=["accuracy"],
        )
        return model

    # Download Dataset.
    dataset = mnist_dataset(batch_size_global)

    # For dist strategy we should build model under scope().
    if is_dist:
        logging.info("Running Distributed Training")
        logging.info("--------------------------------------------------------------------------------------\n\n")
        with strategy.scope():
            model = build_and_compile_cnn_model()
    else:
        logging.info("Running Single Worker Training")
        logging.info("--------------------------------------------------------------------------------------\n\n")
        model = build_and_compile_cnn_model()

    # Start Training.
    model.fit(
        dataset,
        epochs=num_epoch,
        steps_per_epoch=70,
        callbacks=[CustomCallback()],
        verbose=0,
    )

# Set parameters with their distribution for HyperParameter Tuning with Katib.
parameters = {
    "lr": katib.search.double(min=0.1, max=0.2),
    "num_epoch": katib.search.int(min=10, max=15),
    "is_dist": False,
    "num_workers": 1
}

# Start the Katib Experiment.
katib_client = katib.KatibClient(namespace="kubeflow")
katib_client.tune(
    name="tune-mnist",
    objective=train_mnist_model,  # Objective function.
    base_image="electronicwaste/tensorflow:git",  # tensorflow/tensorflow:2.13.0 + git
    parameters=parameters,  # HyperParameters to tune.
    algorithm_name="cmaes",  # Algorithm to use.
    objective_metric_name="accuracy",  # Katib is going to optimize "accuracy".
    additional_metric_names=["loss"],  # Katib is going to collect these metrics in addition to the objective metric.
    max_trial_count=12,  # Trial Threshold.
    parallel_trial_count=2,
    packages_to_install=["git+https://github.com/kubeflow/katib.git@master#subdirectory=sdk/python/v1beta1"],
    metrics_collector_config={"kind": "Push"},
)
```
The error happened:
```
Traceback (most recent call last):
File "/tmp/tmp.fGitfCta5x/ephemeral_objective.py", line 97, in <module>
train_mnist_model({'lr': '0.16377224201308005', 'num_epoch': '13', 'is_dist': False, 'num_workers': 1})
File "/tmp/tmp.fGitfCta5x/ephemeral_objective.py", line 89, in train_mnist_model
model.fit(
File "/usr/local/lib/python3.8/dist-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/tmp/tmp.fGitfCta5x/ephemeral_objective.py", line 36, in on_epoch_end
katib.report_metrics({
File "/usr/local/lib/python3.8/dist-packages/kubeflow/katib/api/report_metrics.py", line 61, in report_metrics
client = katib_api_pb2_grpc.DBManagerStub(channel)
File "/usr/local/lib/python3.8/dist-packages/kubeflow/katib/katib_api_pb2_grpc.py", line 19, in __init__
self.ReportObservationLog = channel.unary_unary(
TypeError: unary_unary() got an unexpected keyword argument '_registered_method'
```
### What did you expect to happen?
Run without error.
### Environment
Kubernetes version:
```bash
$ kubectl version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.1
```
Katib controller version:
```bash
$ kubectl get pods -n kubeflow -l katib.kubeflow.org/component=controller -o jsonpath="{.items[*].spec.containers[*].image}"
docker.io/kubeflowkatib/katib-controller:lates
```
Katib Python SDK version:
```bash
$ pip show kubeflow-katib
Name: kubeflow-katib
Version: 0.17.0
Summary: Katib Python SDK for APIVersion v1beta1
Home-page: https://github.com/kubeflow/katib/tree/master/sdk/python/v1beta1
Author: Kubeflow Authors
Author-email: [email protected]
License: Apache License Version 2.0
Location: /home/ws/miniconda3/envs/katib/lib/python3.10/site-packages
Requires: certifi, grpcio, kubernetes, protobuf, setuptools, six, urllib3
Required-by:
```
Python Packages Version in the Training Container:
```bash
$ pip list
Package Version
---------------------------- --------------------
absl-py 1.4.0
astunparse 1.6.3
cachetools 5.3.1
certifi 2019.11.28
chardet 3.0.4
dbus-python 1.2.16
flatbuffers 23.5.26
gast 0.4.0
google-auth 2.21.0
google-auth-oauthlib 1.0.0
google-pasta 0.2.0
grpcio 1.56.0
h5py 3.9.0
idna 2.8
importlib-metadata 6.7.0
keras 2.13.1
kubeflow-katib 0.17.0
kubernetes 30.1.0
libclang 16.0.0
Markdown 3.4.3
MarkupSafe 2.1.3
numpy 1.24.3
oauthlib 3.2.2
opt-einsum 3.3.0
packaging 23.1
pip 23.1.2
protobuf 4.23.3
pyasn1 0.5.0
pyasn1-modules 0.3.0
PyGObject 3.36.0
python-apt 2.0.1+ubuntu0.20.4.1
python-dateutil 2.9.0.post0
PyYAML 6.0.2
requests 2.22.0
requests-oauthlib 1.3.1
requests-unixsocket 0.2.0
rsa 4.9
setuptools 68.0.0
six 1.14.0
tensorboard 2.13.0
tensorboard-data-server 0.7.1
tensorflow-cpu 2.13.0
tensorflow-estimator 2.13.0
tensorflow-io-gcs-filesystem 0.32.0
termcolor 2.3.0
typing_extensions 4.5.0
urllib3 1.25.8
websocket-client 1.8.0
Werkzeug 2.3.6
wheel 0.40.0
wrapt 1.15.0
zipp 3.15.0
```
### Impacted by this bug?
Give it a 👍 We prioritize the issues with most 👍 | 0easy
|
Title: how to crop the template patch
Body: 
I just want to know why it doesn't crop directly using the target's bbox (w*h).
Does the context around the target matter?
| 0easy
|
Title: Change formatter from Black to Ruff
Body: ### Is your feature request related to a problem? Please describe.
_No response_
### Describe the solution you'd like
Changing from Black to Ruff will increase the speed of development and deployment. From [Astral](https://astral.sh/):
`The Ruff formatter is an extremely fast Python formatter, written in Rust. It’s over 30x faster than [Black](https://github.com/psf/black) and 100x faster than [YAPF](https://github.com/google/yapf), formatting large-scale Python projects in milliseconds — all while achieving >99.9% Black compatibility.`
More information can be found on their [blog post](https://astral.sh/blog/the-ruff-formatter)
### Describe alternatives you've considered
_No response_
### Additional context
Lots of projects are switching to Ruff and seeing massive performance gains. Some examples are: Gradio, Jax, Flask, and [many more](https://x.com/charliermarsh) | 0easy
|
Title: Question: How to provide oauth_cb to confluent kafka consumer/producer
Body: We are trying to set up a faststream application which connects via IAM to an MSK cluster (confluent kafka). We got a successful connection to work by directly creating a `Consumer` and now want to use the same settings in faststream.
**Working code**
```python
from aws_msk_iam_sasl_signer import MSKAuthTokenProvider
from confluent_kafka import Consumer
def oauth_cb(oauth_config):
    auth_token, expiry_ms = MSKAuthTokenProvider.generate_auth_token_from_role_arn("us-east-1", "<assumed_role_arn>")
    return auth_token, expiry_ms / 1000

config = {
    "bootstrap.servers": "bootstrap.servers",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "OAUTHBEARER",
    "oauth_cb": oauth_cb,
    "group.id": "our-group",
}
}
c = Consumer(config)
c.poll(0)
print(c.list_topics(timeout=1).topics)
```
A callback is added to the producer via the `oauth_cb` setting, which generates a new token using [aws-msk-iam-sasl-signer-python](https://github.com/aws/aws-msk-iam-sasl-signer-python). The setting isn't listed in the configuration reference of [librdkafka](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md), but it is documented for [confluent_kafka](https://docs.confluent.io/platform/current/clients/confluent-kafka-python/html/index.html#kafka-client-configuration).
> oauth_cb(config_str): Callback for retrieving OAuth Bearer token. Function argument config_str is a str from config: sasl.oauthbearer.config. Return value of this callback is expected to be (token_str, expiry_time) tuple where expiry_time is the time in seconds since the epoch as a floating point number. This callback is useful only when sasl.mechanisms=OAUTHBEARER is set and is served to get the initial token before a successful broker connection can be made. The callback can be triggered by calling client.poll() or producer.flush().
Calling `.poll()` is required to invoke the callback for the first time (see [this bug](https://github.com/confluentinc/confluent-kafka-python/issues/1713)). Without the token, a KafkaException is raised:
```
cimpl.KafkaException: KafkaError{code=_TRANSPORT,val=-195,str="Failed to get metadata: Local: Broker transport failure"}
```
Is there a way to pass the `oauth_cb` to the kafka consumer/producer in faststream? Or should we use a different configuration like `oauthbearer_token_refresh_cb`?
```python
broker = KafkaBroker(
"localhost:9098",
security=security,
config=ConfluentConfig(
{
"sasl.mechanism": "OAUTHBEARER",
"oauthbearer_token_refresh_cb": oauth_cb,
}
),
)
```
Above code fails with a KafkaError:
```
cimpl.KafkaException: KafkaError{code=_INVALID_ARG,val=-186,str="Property "oauthbearer_token_refresh_cb" must be set through dedicated .._set_..() function"}
```
Any help would be very welcome :) | 0easy
|
Title: uniques and distinct_counts in ak.str.*
Body: ### Description of new feature
Although [pyarrow.compute.unique](https://arrow.apache.org/docs/python/generated/pyarrow.compute.unique.html) and [pyarrow.compute.value_counds](https://arrow.apache.org/docs/python/generated/pyarrow.compute.value_counts.html) work for many data types, we could use them on strings only in the `ak.str.*` namespace.
Why not use them in general? Outside of the `ak.str.*` (and possible `ak.dt.*`) namespace, it would be surprising to encounter a function that does not work due to pyarrow not being installed. Also, I don't know how these functions would define equality for lists and records, with or without missing values. We'd want to know what semantics we're imposing.
We can already implement uniqueness and unique counts of primitive types with sorting and `ak.run_lengths`, so that wouldn't be a _new_ ability. Doing uniqueness-counting on strings is an especially useful case; it would be a positive asset to add even that one case.
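For a flat array of primitives, that existing route looks roughly like the sketch below (illustrative only; the string and nested cases are exactly where it gets harder):
```python
import numpy as np
import awkward as ak

arr = ak.Array([3, 1, 3, 1, 1, 2])
s = ak.sort(arr)                # [1, 1, 1, 2, 3, 3]
lengths = ak.run_lengths(s)     # [3, 1, 2] -- counts per distinct value
starts = np.cumsum(np.asarray(lengths)) - np.asarray(lengths)
uniques = s[starts]             # [1, 2, 3]
```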
I must have overlooked it when scanning through lists of string functions; they're categorized differently. Are there any other functions that we could use in a string-only context in `ak.str.*`? | 0easy
|
Title: Run pytest in pre-commit
Body: - Add requirement to pyproject.toml
- Setup `.pre-commit-config.yaml` config
- test that everything is working with `pre-commit run` and in github actions | 0easy
|
Title: Update documentation of WindowedAnomalyScorers
Body: We should make some small corrections to the docs of all `WindowedAnomalyScorer`.
- KMeansScorer
- PyODScorer
- Wassersteinscorer
See #2660 for more info. | 0easy
|
Title: Add an easier way to access Events API retry info
Body: The issue #731 reminds me that https://github.com/slackapi/java-slack-sdk/pull/677 can be added to bolt-python (and possibly to bolt-js).
### Category (place an `x` in each of the `[ ]`)
* [x] **slack_bolt.App** and/or its core components
* [x] **slack_bolt.async_app.AsyncApp** and/or its core components
* [ ] Adapters in **slack_bolt.adapter**
* [ ] Others
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: provide a way to use reproducible results
Body:
### Description
Users should have the flexibility to get/generate reproducible results. In practice, this happens through a seed, which can be set globally or locally in each function, where random parameters are generated.
| 0easy
|
Title: create multipage example app
Body: | 0easy
|
Title: Wrong handling of 0 in expires_at
Body: If the `expires_at` timestamp in a token is 0 the `is_expired()` function erroneously returns `None` instead of `True`.
The check should explicitly check for `None`.
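A sketch of the suggested change (the body is paraphrased from the linked lines, not copied verbatim; `time` is already available at module level there):
```python
def is_expired(self):
    expires_at = self.get('expires_at')
    # Distinguish a missing timestamp from a falsy one such as 0.
    if expires_at is None:
        return None
    return expires_at < time.time()
```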
https://github.com/lepture/authlib/blob/ee4337cf7c825349dd23870822a3cc7df123097f/authlib/oauth2/rfc6749/wrappers.py#L13-L17 | 0easy
|
Title: URLScan
Body: **User story:** I want to submit a URL to URLScan for analysis.
API reference: https://urlscan.io/docs/api/
**Implementation details**
- Two stage API: POST call, then wait, then GET report
- Tracecat is not bounded by the one endpoint per Action constraint
- Action = function. Prioritize usefulness over purity
- Use recommend retries and timeouts:
> The suggested polling logic would be to wait 10-30 seconds directly after submission and then polling in 5-second intervals until the scan is finished or a maximum wait time has been reached.
- There are three `visibility` options: `public`, `private`, `unlisted` (only visible to verified security researchers)
**Key requirements**
- [x] Can scan URL and return report
- [x] `private` visibility by default: https://portswigger.net/daily-swig/urlscan-io-api-unwittingly-leaks-sensitive-urls-data
- [x] Add icon to frontend
Note: we will NOT implement the [search API](https://urlscan.io/docs/search/) for now. There is a massive set of possible search queries. We need more data from URLScan power users of what searches *actually* matter | 0easy
|
Title: value errors on Chaikin Money Flow (cmf)
Body: this is stock independent, use any stock values generated by ta.cmf function are completely different from values in Tradingview CMF20.
| 0easy
|
Title: Fix broken JavaScript reference in the docs
Body: As @Laerte [pointed out](https://github.com/scrapy/scrapy/pull/5072#issuecomment-1483391283), https://github.com/scrapy/scrapy.org/issues/224 also applies to docs.scrapy.org, so changes are needed in this repository as well. | 0easy
|
Title: [docs] Add a verbose explanation on how `tox exec` works
Body: This docs section https://tox.wiki/en/latest/cli_interface.html#tox-exec-(e) does not actually show how to pass a command in (it's not in the signature). And it doesn't explain if the default commands are executed. It documents `--notest` in the signature, which makes it confusing — does the passed command replace the defined ones or does it augment them? What about `commands_pre`?
From what I saw in the wild (https://github.com/jamescooke/flake8-aaa/blob/master/Makefile#L42C2-L42C10), the command to execute should be passed after `--`. But that's a typical syntax for `{posargs}`. So how do these parts play together? | 0easy
|
Title: [FEATURE] Consolidate toml and tomli across preswald
Body: **Is your feature request related to a problem? Please describe.**
We are using both toml and tomli. Why use both, when tomli is known to be faster and more modern?
**Describe the solution you'd like**
Let's consolidate on tomli everywhere. Replace toml with tomli in usage, and then remove toml from setup.py | 0easy
|
Title: OSM attribution in corner of map
Body: The current map is missing any attribution, and should be fixed | 0easy
|
Title: Integrate Algolia (or similar) for docs search
Body: Sphinx's built-in search isn't great. It'd be much better to use Algolia or similar.
Looks like it's possible:
https://stackoverflow.com/q/54872828/709975
https://github.com/readthedocs/sphinx_rtd_theme/issues/761
If anyone wants to take this, I'm happy to help. | 0easy
|
Title: typo in modeltest._on_rows_about_to_be_inserted
Body: Hello,
just to mention that, in modeltest._on_rows_about_to_be_inserted, the following line (549):
`last_data = self._model.data(last_index) if start - 1 > 0 else None`
should be:
`last_data = self._model.data(last_index) if start - 1 >= 0 else None`
since start can be equal to 1.
It is actually like this in the [qt5 source code](https://code.woboq.org/qt5/qtbase/src/testlib/qabstractitemmodeltester.cpp.html#_ZN31QAbstractItemModelTesterPrivate21rowsAboutToBeInsertedERK11QModelIndexii) (line 643):
`c.last = (start - 1 >= 0) ? model->index(start - 1, 0, parent).data() : QVariant();`
Thank you.
| 0easy
|
Title: Deprecate `return_str` parameter in `NLTKWordTokenizer` and `TreebankWordTokenizer`
Body: Hello!
I'd like to discuss a potential enhancements of `NLTKWordTokenizer` and `TreebankWordTokenizer`. For those unaware, the former is the tokenizer that is most frequently used, and is used in the `word_tokenize` function. It's also based on the latter class: `TreebankWordTokenizer`.
An example usage as can be found in the documentation:
```python
>>> from nltk.tokenize import NLTKWordTokenizer
>>> s = '''Good muffins cost $3.88 (roughly 3,36 euros)\nin New York. Please buy me\ntwo of them.\nThanks.'''
>>> NLTKWordTokenizer().tokenize(s)
['Good', 'muffins', 'cost', '$', '3.88', '(', 'roughly', '3,36',
'euros', ')', 'in', 'New', 'York.', 'Please', 'buy', 'me', 'two',
'of', 'them.', 'Thanks', '.']
>>> NLTKWordTokenizer().tokenize(s, convert_parentheses=True)
['Good', 'muffins', 'cost', '$', '3.88', '-LRB-', 'roughly', '3,36',
'euros', '-RRB-', 'in', 'New', 'York.', 'Please', 'buy', 'me', 'two',
'of', 'them.', 'Thanks', '.']
>>> NLTKWordTokenizer().tokenize(s, return_str=True)
' Good muffins cost $ 3.88 ( roughly 3,36 euros ) \nin New York. Please buy me\ntwo of them.\nThanks . '
```
### The enhancement
As you can see from the example, if `return_str` is True, then the `tokenize` method returns a space-separated string. However, the number of spaces is very inconsistent. Perhaps we would be better off stripping spaces on the ends, and replacing all sequences of multiple spaces with just one space.
E.g.
```python
>>> NLTKWordTokenizer().tokenize(s, return_str=True)
'Good muffins cost $ 3.88 ( roughly 3,36 euros ) \nin New York. Please buy me\ntwo of them.\nThanks .'
```
instead of
```python
>>> NLTKWordTokenizer().tokenize(s, return_str=True)
' Good muffins cost $ 3.88 ( roughly 3,36 euros ) \nin New York. Please buy me\ntwo of them.\nThanks . '
```
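For concreteness, the post-processing being proposed amounts to something like the sketch below (purely illustrative of the intended output; where it should live in the tokenizer is a separate question):
```python
import re

def _normalize_spaces(joined: str) -> str:
    # Collapse runs of spaces (leaving newlines alone) and trim outer spaces.
    return re.sub(r" {2,}", " ", joined).strip(" ")
```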
I figured I would create an issue for this to find out if others agree with this idea, before I put in the time to make this change for no reason. So, I'd like to hear your thoughts.
- Tom Aarsen | 0easy
|
Title: GCSFeedStorage does not support feed_options
Body: Since https://github.com/scrapy/scrapy/issues/6105, `feed_options` parameter needs to be supported by the feed storage classes.
Now, as of version 2.12.0, GCSFeedStorage does not accept this parameter, and therefore it fails upon usage.
```
GCSFeedStorage.from_crawler() got an unexpected keyword argument 'feed_options'
``` | 0easy
|
Title: XML: Support ignoring element order with `Elements Should Be Equal`
Body: The XML specification in [RFC3470](https://www.rfc-editor.org/rfc/rfc3470) doesn't require any order of the elements in one level of the XML.
**Expected behaviour:**
`<test><c1/><c2/><c3/></test>` and `<test><c1/><c3/><c2/></test>` therefore contain the same information and should be regarded as equal.
**Current behaviour:**
However, the XML library keyword `Elements Should Be Equal` fails with `Different tag name at 'test/c2'`.
**Setup:**
Robotframework version: 6.1.1.
Python Version: 3.11.3
Windows Version: 22H2
alternative Docker Image: python:3.11-alpine
**Sample Code:**
```
*** Settings ***
Documentation     Test file for XML Bug
Library           XML

*** Test Cases ***
Test
    [Documentation]    Test case displaying XML bug
    Elements Should Be Equal    <test><c1/><c2/><c3/></test>    <test><c1/><c3/><c2/></test>
```
| 0easy
|
Title: Opening a link in a new tab
Body: Howdy,
With Selenium, it is easy to open a link in a new tab.
Is this even possible with Splinter?
I dived deeply into Splinter API documentation but I did not find anything related to tab features.
Thank you in advance for any hints,
Bill BEGUERADJ
| 0easy
|
Title: TST: Make test_sql.py parallelizable
Body: ### Feature Type
- [ ] Adding new functionality to pandas
- [X] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
test_sql.py must be run on a single thread now, because tests re-use the same table names. This can cause a race condition when different parametrizations of a test run on different threads
### Feature Description
Add a uuid or something else to the table names in the test_sql.py module to disambiguate
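As a hypothetical illustration of the idea (the fixture and table names here are made up, not the ones in test_sql.py):
```python
import uuid

import pytest

@pytest.fixture
def table_name() -> str:
    # A unique name per test invocation avoids clashes between
    # parametrizations running concurrently on different threads.
    return f"pandas_test_{uuid.uuid4().hex}"
```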
### Alternative Solutions
status quo
### Additional Context
_No response_ | 0easy
|
Title: Database initialization fails after deploying to Heroku
Body: It shows a 503 error; the application's database initialization failed | 0easy
|
Title: Issues about why it always skips the images that I upload
Body: Hi, recently I tried your Colab demo to restore some of my old images, but here comes the issue. When I follow the steps and run the code, it always skips my uploaded images, printing output like the following:
Running Stage 1: Overall restoration
initializing the dataloader
model weights loaded
directory of testing image: /content/photo_restoration/test_images/upload
processing testScratch.png
You are using NL + Res
Now you are processing testScratch.png
Skip testScratch.png
Finish Stage 1 ...
Running Stage 2: Face Detection
Finish Stage 2 ...
Running Stage 3: Face Enhancement
The main GPU is
0
dataset [FaceTestDataset] of size 0 was created
The size of the latent vector size is [8,8]
Network [SPADEGenerator] was created. Total number of parameters: 92.1 million. To see the architecture, do print(network).
hi :)
Finish Stage 3 ...
Running Stage 4: Blending
Finish Stage 4 ...
All the processing is done. Please check the results.
Therefore, I'd like to know whether there are any requirements that the uploaded images have to satisfy, or whether I made some mistake? Thanks a lot. | 0easy
|
Title: Colorize stderr
Body: I've noticed that in every one of my xonsh scripts I want stderr to stand out.
It would be cool to have a shell feature that shows stdout and stderr in different colors.
Example (the yellow line is stderr):
<img width="318" alt="image" src="https://github.com/xonsh/xonsh/assets/1708680/b55a862b-5581-4058-8e20-1ffd358c687f">
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Compatibility with PyPy
Body: ### Problem
After several days of testing I can confirm that version 3.0 is functional in PyPy, unlike version 2, which crashed after running.
### Possible solution
Please run tests to fully and officially confirm compatibility with PyPy and keep this compatibility in future releases.
### Alternatives
_No response_
### Code example
_No response_
### Additional information
_No response_ | 0easy
|
Title: Enhance documentation related to importing same library multiple times
Body: According to the [user guide](https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#library-scope), "If a library is imported multiple times with different arguments, a new instance is created every time regardless the scope."
But that does not seem to work for me.
Here is the example test case I use:
``` robotframework
*** Settings ***
Library    testlib.py    num=1    AS    Test
Library    testlib.py    num=2    AS    Test
Library    testlib.py    num=3    AS    Test

*** Test Cases ***
Example Test
    Log Params
    BuiltIn.Import Library    ${CURDIR}/testlib.py    num=4    AS    Test
    Log Params
```
with the following example library:
``` python
from robot.api import logger
class testlib():
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'

    def __init__(self, num):
        self.num = num

    def log_params(self):
        logger.console(f'num: {self.num}')
```
As I import the library each time with a different argument, I would expect that a new instance of the library is created that overwrites the current instance. Instead, the very first instance of the library seems to be used, as the output of both calls of `Log Params` is 1.
Did I understand the note in the user guide wrong or is this an unintended behaviour? | 0easy
|
Title: CLI option to update snapshots without running other tests
Body: **Is your feature request related to a problem? Please describe.**
I am using syrupy in a few libraries, and I find myself in a common pattern in which I write a Makefile target like `make snapshot-update` which invokes `pytest --snapshot-update`. But the real intention behind that make target is not necessarily to run all the tests _and_ update snapshots; but to update the snapshots only.
**Describe the solution you'd like**
I think it'd be a neat feature to support a pytest selector which isolates only the tests depending on the snapshot fixture; so that `pytest --snapshot-update` could be extended to only run tests with any snapshots.
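As a local workaround today, something along these lines in a `conftest.py` sketches the selection logic (the hook and the `--snapshot-update` flag are real; gating collection on it this way is just an illustration of the requested behaviour):
```python
def pytest_collection_modifyitems(config, items):
    # Only narrow the run when snapshots are being updated.
    if not config.getoption("--snapshot-update"):
        return
    selected = [item for item in items if "snapshot" in getattr(item, "fixturenames", [])]
    deselected = [item for item in items if item not in selected]
    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected
```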
**Describe alternatives you've considered**
It's not a big deal! I just run all my tests when i run updates. :smile:
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| 0easy
|
Title: Fix the "improve" prompt to make sure that it generates diffs, and parse and apply those diffs to the existing codebase
Body: One way to do this is to write the prompt for gpt-engineer with the `-i` flag so that it annotates each code block with one of:
1. `NEW CODE`
2. `REPLACING ONE FUNCTION`
If 1., the generated code can just be written to a new file (or appended to an existing file).
If it is replacing an existing function, we could make sure to find the name of the function that is being replaced using an AST parser (see how [here](https://chat.openai.com/share/71012377-7ebb-47f2-a8fc-7d1bfd4fabe2))
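A minimal sketch of that lookup with the standard-library `ast` module (illustrative only, not gpt-engineer's actual code):
```python
import ast

def function_names(code_block: str) -> list[str]:
    """Return the names of all functions defined in a generated code block."""
    tree = ast.parse(code_block)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    ]
```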
## Why this is necessary
As an example, I tried to use it on the project itself and got a code block that was just changing one of the functions (so it should not be used to overwrite the entire file).
## How to do it
We can take inspiration from Aider, that generates diffs, or sweep in how they prompt for "<copy_lines>" and [parse the GPT4 output here](https://github.com/sweepai/sweep/blob/e384c9fc3e0278257324c4ce57a888fa64f071b7/sweepai/utils/diff.py#L113)
Should be quite straightforward! | 0easy
|
Title: If the contact info entered was used before, the order can no longer be looked up after placing it.
Body: I updated to the latest Baiyue Faka (佰阅发卡) and there is a bug when querying orders by contact info. If someone enters contact info that was used before, then even their newest order cannot be found; it reports that the order does not exist or has expired.
Logically, even if the contact info entered is an old one, the order should still be queryable as long as it was placed within the last 2 hours! | 0easy
|
Title: invalid import in version 3.0.0 docs
Body: On the [filtering events](https://docs.aiogram.dev/en/dev-3.x/dispatcher/filters/index.html#) page, in the section on writing bound filters, the sample code shows the wrong import:
`from aiogram.filters import BaseFilter`
it has to be :
`from aiogram.dispatcher.filters import BaseFilter`
| 0easy
|
Title: Can it be installed without using Docker?
Body: Can it be installed without using Docker, for example installed directly from source, or deployed via the 宝塔 (BT Panel) panel? | 0easy
|
Title: Show Request Body on Markdown Report
Body: Show the request body, if it exists, on the Markdown report. Currently it shows only the response body. | 0easy
|
Title: Remove deprecated code moved to itemloaders
Body: Deprecated in 2.3.0.
* scrapy.utils.misc.extract_regex()
* scrapy.loader.common
* scrapy.loader.processors | 0easy
|
Title: chooses 15% of token
Body: From paper, it mentioned
> Instead, the training data generator chooses 15% of tokens at random, e.g., in the sentence my
> dog is hairy it chooses hairy.
It means that 15% of the tokens will be chosen for sure.
From https://github.com/codertimo/BERT-pytorch/blob/master/bert_pytorch/dataset/dataset.py#L68,
every single token has a 15% chance of going through the follow-up procedure. Does that align with "15% of tokens will be chosen"? | 0easy
|
Title: Feature: AsyncAPI Kafka partitions support
Body: Now, the following code
```python
from faststream.specification import AsyncAPI
from faststream.kafka import KafkaBroker, TopicPartition
broker = KafkaBroker()
@broker.subscriber(partitions=[TopicPartition("test", 1)])
async def handler(): ...
docs = AsyncAPI(broker)
```
Just has no AsyncAPI representation
We should provide users with a correct schema for all `subscriber` option combinations, so
* [ ] Generate Channels for Kafka Subscriber in partitions case - https://github.com/airtai/faststream/blob/0.6.0/faststream/kafka/subscriber/specified.py#L20
* [ ] Respect partition in Kafka Publisher specifciation - https://github.com/airtai/faststream/blob/0.6.0/faststream/kafka/publisher/specified.py#L31
* [ ] Generate Channels for Confluent Subscriber in partitions case
* [ ] Respect partition in Confluent Publisher specification | 0easy
|
Title: [MNT] Remove Reshape layer from Deep Learning Clusterers
Body: ### Describe the issue
The `tf.keras.layers.Reshape` layer was introduced in `clustering/deep_learning` to make the input shape of the encoder equal to the output shape of the decoder, which is already being tested in `test_all_networks.py`. So, basically, it is unnecessary at the moment.
### Suggest a potential alternative/fix
Remove it. | 0easy
|
Title: Passing kwargs to the underlying fitting function
Body: **Is your feature request related to a current problem? Please describe.**
I wanted to pass some additional arguments to `Prophet`'s fit function (from now on called fit_kwargs), but this is not currently supported by darts. The only current possibility for the user to achieve this is to subclass `Prophet` and overwrite the `._fit` function.
In general, there are multiple different strategies (depending on the model) in darts how passing through of fit_kwargs is currently handled.
Looking through the code, I would summarize the current situation as follows (hope I haven't missed something):
* `RegressionModels` support passing of fit_kwargs to the `.fit` function
* `TorchForecastingModels` use the `pl_trainer_kwargs` argument in the constructor
* `ExponentialSmoothing` supports passing of fit_kwargs to the constructor. It also allows passing of constructor_kwargs (that will be passed to the constructor of the underlying model) as a dict.
* `Prophet` and `AutoARIMA` do not support passing of fit_kwargs (even though the underlying model has meaningful kwargs)
* Other models do not support passing fit_kwargs, but that's fine since the underlying models do not support any meaningful additional arguments
**Describe proposed solution**
I propose to unify the behavior (except for `TorchForecastingModels`, which makes sense to treat differently) to support passing of fit_kwargs through the `.fit` function, i.e.:
* Add passing through of fit_kwargs to `Prophet` and `AutoARIMA`
* Rework `ExponentialSmoothing` constructor and fit function signatures to the same style. This would be a breaking change.
I think having the argument passing in the `.fit` function rather than the constructor function is better for two reasons:
1) Models often also support kwargs that are passed to the underlying model's constructor, making a distinction between constructor_kwargs and fit_kwargs necessary. That means at least one of them has to be passed as a dict, which feels unintuitive.
2) I think the `.fit` method would be the more obvious place where users would look for the possibility to pass such kwargs.
**Describe potential alternatives**
* Add passing through of fit_kwargs to `Prophet` and `AutoARIMA` but keep `ExponentialSmoothing` as it is
* Add a `fit_kwargs` argument to the constructors of `Prophet` and `AutoARIMA` and pass those to the fit function.
**Additional context**
I'm happy to prepare a PR for this issue, once it is decided which of these solutions should be implemented
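For illustration, a minimal sketch of what the `.fit` pass-through could look like for `Prophet` (the extra keyword argument here is hypothetical and is not accepted today; it would simply be forwarded to the underlying `prophet.Prophet.fit`):
```python
from darts.datasets import AirPassengersDataset
from darts.models import Prophet

series = AirPassengersDataset().load()
model = Prophet()

# Hypothetical: any extra kwargs are forwarded to the underlying fit call.
model.fit(series, seed=42)
```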
| 0easy
|
Title: [RunPod] Resource mismatch in catalog causes crash
Body: `sky launch --gpus L4:1 --memory 500+` with RunPod enabled causes a hard crash instead of gracefully removing runpod instances from the candidate resources:
```
(base) ➜ ~ sky launch --gpus L4:1 --memory 500+
D 01-27 18:16:57 skypilot_config.py:228] Using config path: /Users/romilb/.sky/config.yaml
D 01-27 18:16:57 skypilot_config.py:233] Config loaded:
D 01-27 18:16:57 skypilot_config.py:233] {'jobs': {'controller': {'resources': {'cloud': 'gcp',
D 01-27 18:16:57 skypilot_config.py:233] 'cpus': '128+',
D 01-27 18:16:57 skypilot_config.py:233] 'disk_size': 1024,
D 01-27 18:16:57 skypilot_config.py:233] 'region': 'us-central1'}}}}
D 01-27 18:16:57 skypilot_config.py:245] Config syntax check passed.
D 01-27 18:16:58 optimizer.py:294] #### Task<name=sky-cmd>(run=<empty>)
D 01-27 18:16:58 optimizer.py:294] resources: <Cloud>(mem=500+, {'L4': 1}) ####
D 01-27 18:16:58 common.py:232] Updated Lambda catalog lambda/vms.csv.
Traceback (most recent call last):
File "/Users/romilb/tools/anaconda3/bin/sky", line 8, in <module>
sys.exit(cli())
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 366, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/cli.py", line 838, in invoke
return super().invoke(ctx)
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/cli.py", line 1166, in launch
_launch_with_confirm(task,
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/cli.py", line 603, in _launch_with_confirm
dag = sky.optimize(dag)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/optimizer.py", line 135, in optimize
unused_best_plan = Optimizer._optimize_dag(
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/optimizer.py", line 1066, in _optimize_dag
Optimizer._estimate_nodes_cost_or_time(local_topo_order,
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/optimizer.py", line 300, in _estimate_nodes_cost_or_time
_fill_in_launchable_resources(
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/optimizer.py", line 1297, in _fill_in_launchable_resources
feasible_list = subprocess_utils.run_in_parallel(
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/subprocess_utils.py", line 129, in run_in_parallel
return list(ordered_iterators)
File "/Users/romilb/tools/anaconda3/lib/python3.9/multiprocessing/pool.py", line 870, in next
raise value
File "/Users/romilb/tools/anaconda3/lib/python3.9/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/optimizer.py", line 1299, in <lambda>
(cloud, cloud.get_feasible_launchable_resources(r, n)),
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/clouds/cloud.py", line 411, in get_feasible_launchable_resources
return self._get_feasible_launchable_resources(resources)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/clouds/runpod.py", line 240, in _get_feasible_launchable_resources
return resources_utils.FeasibleResources(_make(instance_list),
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/clouds/runpod.py", line 201, in _make
r = resources.copy(
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/resources.py", line 1261, in copy
resources = Resources(
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/resources.py", line 247, in __init__
self._try_validate_cpus_mem()
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/resources.py", line 830, in _try_validate_cpus_mem
raise ValueError(
ValueError: 1x_L4_SECURE does not have enough memory. 1x_L4_SECURE has 24.0 GB memory, but 500+ is requested.
``` | 0easy
|
Title: [DOC] Update pandas example in the README?
Body: # Brief Description of Fix
Right now, the pandas example in the README shows what I (IMHO) would say is "bad" pandas code. I would rather write it as something like
```python
In [15]: df = (
...: pd.DataFrame(company_sales)
...: .drop(columns="Company1")
...: .dropna(subset=['Company2', 'Company3'])
...: .rename(columns={"Company2": "Amazon", "Company3": "Facebook"})
...: .assign(Google=[450.0, 550.0, 800.0])
...: )
...: df
Out[15]:
SalesMonth Amazon Facebook Google
0 Jan 180.0 400.0 450.0
1 Feb 250.0 500.0 550.0
3 April 500.0 675.0 800.0
```
Thoughts on updating that example? I would certainly keep the current one, since it makes it clear that pandas *allows* the "bad" style, while pyjanitor doesn't. But I think it'd be nice to include the "good" style as well :) | 0easy
|
Title: Adding type annotations to Meta class causes TypeError
Body: Adding annotations to the Meta class of an ObjectType causes an Error to be raised.
Minimal reproduction:
```python
import graphene
class Query(graphene.ObjectType):
class Meta:
name: str = 'oops'
hello = graphene.String()
def resolve_hello(self, info):
return 'Hello'
schema = graphene.Schema(query=Query)
```
This causes the following error when importing the file with Graphene 2.2.0 on Python 3.7.1:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/.../minimal_repro.py", line 3, in <module>
class Query(graphene.ObjectType):
File "/.../.pyenv/versions/3.7.1/lib/python3.7/site-packages/graphene/utils/subclass_with_meta.py", line 52, in __init_subclass__
super_class.__init_subclass_with_meta__(**options)
File "/.../.pyenv/versions/3.7.1/lib/python3.7/site-packages/graphene/types/objecttype.py", line 64, in __init_subclass_with_meta__
super(ObjectType, cls).__init_subclass_with_meta__(_meta=_meta, **options)
TypeError: __init_subclass_with_meta__() got an unexpected keyword argument '__annotations__'
```
If you remove the `: str` annotation from the name property of Meta, this works just fine.
Without this, any type annotations for mypy, for example, would need to be done with the comment syntax. Not a huge deal, but took some time to debug. | 0easy
|
Title: missing some attributes for `Iframe`?
Body: - frameborder
- scrolling | 0easy
|
Title: Cannot switch back from pre-defined benefit/condition to a non-pre-defined one
Body: ### Issue Summary
The current benefit and condition forms don't work properly when switching back to using a non-pre-defined benefit/condition from a pre-defined one.
### Steps to Reproduce
1. Create an offer with a non-pre-defined benefit or condition.
2. Go to the benefit/condition form for that offer and change it to a pre-defined one.
3. Go back to the form and try to change it back to a non-pre-defined one.
Instead of switching back, the benefit/condition is still referencing the previous pre-defined one.
#### Notes
- On the public sandbox, the form only has pre-defined benefits, but the saving logic is identical and the bug is present in the `ConditionForm` as well:
https://github.com/django-oscar/django-oscar/blob/04dd391c900537a61dbb0ae5250ca5c2df6bbc4b/src/oscar/apps/dashboard/offers/forms.py#L137-L143
- After switching to a pre-defined benefit/condition, the form also displays the other field data that had to be removed back in step 2.
### Technical details
* Reproducible on the public sandbox: https://latest.oscarcommerce.com/en-gb/dashboard/offers/
| 0easy
|
Title: New exceptions
Body: ### New exceptions
- BadRequest: Have no rights to send a message
- BadRequest: Group chat was deactivated
- BadRequest: Channel_private
- BadRequest: Have no rights to send a message
- Unauthorized: Forbidden: bot is not a member of the supergroup chat
- Unauthorized: Forbidden: bot is not a member of the group chat
- Unauthorized: Forbidden: chat_write_forbidden
### Task
Wrap these exceptions | 0easy
|
Title: [FEATURE] add the HEARTS dataset
Body: The [tensorflow lattice documentation](https://www.tensorflow.org/lattice/tutorials/keras_layers) links to a [dataset](http://storage.googleapis.com/applied-dl/heart.csv) that might be nice to add to our little library. It might be good to demonstrate the merit of adding monotonic constraints. | 0easy
|
Title: [BUG] - Status Code 10204
Body: **Describe the bug**
When running `api.getTikTokById` on a silent video's ID, it returns `{'statusCode': 10204}` when it should return a TikTok object.
**The buggy code**
```
from TikTokApi import TikTokApi
api = TikTokApi()
tiktoks = api.byUsername('penguins')
for tiktok in tiktoks:
if (tiktok['id'] == '6791925545199258886'):
print(tiktok)
newTik = api.getTikTokById('6791925545199258886')
print (newTik)
```
**Expected behavior**
The code above shows what happens when the ID is found from `api.byUsername` and how the result is different when calling `api.getTikTokById` for the same ID.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: Windows 10
- Browser Default is Chrome
- Version 3.1.9
**Additional context**
Silent videos have always been a problem on desktop TikTok, [direct links don't work](https://www.tiktok.com/share/video/6791925545199258886) but [clicking on the video works](https://imgur.com/a/iZkUc6f).
| 0easy
|
Title: Explicit static directive for serving file or dir
Body: See #2132
```python
app.static('/', './resources/web', as="dir")
app.static('/', './resources/web/example.html', as="file")
```
With an explicit kwarg, we would not need to run `path.isfile`. This is not really new behavior. If Sanic 20.12 is not functioning this way, I think that is more a problem on the earlier version than with the current.
---
A bit of behind the scenes info...
Let's look at what these two are doing:
```python
app.static("/", "./resources/web")
# Sanic checks this, sees that it is not a file and then converts to:
# "/<__file_uri__:path>"
# meaning that it will look for anything that matches using the `path` param type
app.static('/', './resources/web/example.html')
# If this is indeed a file, Sanic knows that you meant for an explicit path "/"
# There is no ambiguity, and the path does not need to be altered
# If the file does not exist, then like the first example, it tries to convert it to a `path` type
# But, since you already have a path expansion on that base path, there is now ambiguity
# and Sanic raises RouteExists
```
__Originally posted by @ahopkins in https://github.com/sanic-org/sanic/issues/2132#issuecomment-839477709__ | 0easy
|
Title: Inner bug: TypeError: __init__() got an unexpected keyword argument 'escape_forward_slashes'
Body: ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
I'm not sure if this is a bug; an internal exception occurred when I tried to access the inspector page (http://localhost:6457/).
Package versions:
sanic 23.3.0
sanic-ext 23.3.0
sanic-routing 22.8.0
log:
```
[ERROR] Exception occurred while handling uri: 'http://localhost:6457/'
Traceback (most recent call last):
File "handle_request", line 97, in handle_request
from sanic_ext.extensions.base import Extension # type: ignore
File "D:\apps\Anaconda\envs\sanic\lib\site-packages\sanic\worker\inspector.py", line 82, in _info
return await self._respond(request, self._state_to_json())
File "D:\apps\Anaconda\envs\sanic\lib\site-packages\sanic\worker\inspector.py", line 86, in _respond
return json(
File "D:\apps\Anaconda\envs\sanic\lib\site-packages\sanic\response\convenience.py", line 50, in json
return JSONResponse(
File "D:\apps\Anaconda\envs\sanic\lib\site-packages\sanic\response\types.py", line 361, in __init__
self._encode_body(self._use_dumps(body, **self._use_dumps_kwargs)),
File "D:\apps\Anaconda\envs\sanic\lib\json\__init__.py", line 234, in dumps
return cls(
TypeError: __init__() got an unexpected keyword argument 'escape_forward_slashes'
```
### Code snippet
_No response_
### Expected Behavior
_No response_
### How do you run Sanic?
As a script (`app.run` or `Sanic.serve`)
### Operating System
windows
### Sanic Version
Sanic 23.3.0; Routing 22.8.0
### Additional context
_No response_ | 0easy
|
Title: BUG: incorrect memory info when GPU is available
Body: ### Describe the bug
When GPU is available, the amount of memory shown in the dashboard is not correct. It looks like the GPU memory is included.
Also, the resource page cannot be accessed.
### To Reproduce
Create an Xorbits cluster on a host with GPU.
1. Your Python version
3.9.15
2. The version of Xorbits you use
0.2.0+5.gf204d77
| 0easy
|
Title: Allow protocols to work on raw arrays
Body: **Is your feature request related to a use case or problem? Please describe.**
When working on #7048, was surprised that `unitary(val)` and `apply_unitary(val, args)` didn't accept raw numpy unitaries for `val`. This made it necessary to special case a couple `isinstance(np.ndarray)` checks in the consumers of those functions. There is also an already-existing redundant function in `apply_mixture` that can be removed if `apply_unitary` can accept raw numpy unitaries. If accepted, this could be a good first issue for someone.
**Describe the solution you'd like**
I started this for `apply_unitary` in #7039, but closed the PR because a) I think it's a maintainer decision as to whether the feature is desired, how large the scope should be, and whether it can be delivered incrementally or should go all-at-once, and b) I think it would be a good first issue for someone to learn about quantum operators in general, and cirq's implementation of protocols in particular.
As mentioned in the closing comment https://github.com/quantumlib/Cirq/pull/7039#issuecomment-2645896832, the scope is open-ended.
> Closing for now. Seems like if we do this, then we should also do `unitary` and `has_unitary`. Which, then may as well do `mixture` and `kraus` series, maybe `act_on`, and whatever else comes up too. Support raw Python arrays? Sympy?....
**[optional] Describe alternatives/workarounds you've considered**
Duplicate code, `isinstance` checks.
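As a rough illustration of the special-casing that consumers currently need (a sketch, not cirq's actual code):
```python
import numpy as np
import cirq

def unitary_or_array(val):
    """Return a unitary matrix for `val`, accepting raw ndarrays directly."""
    if isinstance(val, np.ndarray):
        return val  # assumed to already be a unitary matrix
    return cirq.unitary(val)

print(unitary_or_array(cirq.X))                    # resolved via the protocol
print(unitary_or_array(np.eye(2, dtype=complex)))  # passes straight through
```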
**[optional] Additional context (e.g. screenshots)**
As part of this, I noticed the protocols have different techniques. Some loop through an array of strategies, others are more imperative. Some use dynamic invocations of the fallback strategies, whereas others call the fallback protocols directly. It'd be good to improve consistency in these when working through them.
**What is the urgency from your perspective for this issue? Is it blocking important work?**
<!-- Please choose one and remove the others -->
P3 - I'm not really blocked by it, it is a suggestion based on principle
<!-- [optional] additional comment / context -->
| 0easy
|
Title: Support OpenAI API v1
Body: ### 🚀 The feature
Add support for the newest version of the API.
### Motivation, pitch
The old `openai.ChatCompletion` has been replaced by `openai.OpenAI.chat.completions` (a factual class) and `openai.AzureOpenAI.chat.completions` for Azure.
**PR coming soon!** | 0easy
|
Title: Update keras and tf versions
Body: | 0easy
|
Title: Document all the different ways we can interact with reference container objects like dict, set, list
Body: We've [implemented](https://github.com/nteract/testbook/blob/main/testbook/reference.py#L34-L80) a bunch of container methods which help us do the following with reference objects:
| Internal method| Description|
| ------------- |-------------|
|`__len__` | Find length of a container type object (dict, set, list)|
|`__iter__` and `__next__`|Iterate through an container|
|`__getitem__`|Fetch an item by index (or key) from a container|
|`__setitem__`| Set an item's value in a container|
|`__contains__`|Find if a container... contains an item|
It would be nice to have examples that show that users can directly interact with reference objects, for example:
An example for finding length would look like:
```python
from testbook import testbook
@testbook('notebook.ipynb', execute=True)
def test_foo(tb):
my_list = tb.ref('my_list')
assert len(my_list) == 3 # this internally calls __len__
```
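For example, item access and membership checks could be documented with something like this (the notebook contents and variable names here are made up):
```python
from testbook import testbook

@testbook('notebook.ipynb', execute=True)
def test_bar(tb):
    my_dict = tb.ref('my_dict')        # assumes the notebook defines my_dict
    assert 'some_key' in my_dict       # __contains__
    assert my_dict['some_key'] == 42   # __getitem__
    my_dict['other_key'] = 'hello'     # __setitem__
    for key in my_dict:                # __iter__ / __next__
        print(key)
```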
| 0easy
|
Title: Check support for Django 1.11
Body: Django 1.11 is now out.
factory_boy should be compatible out of the box, but this still needs checking.
Steps to follow:
1. Add Django 1.11 to https://github.com/FactoryBoy/factory_boy/blob/master/tox.ini
2. Replace Django 1.10 with Django 1.11 in https://github.com/FactoryBoy/factory_boy/blob/master/.travis.yml
3. Make a pull request, wait for the travis build, and check that it's still working.
Bonus ideas:
- Drop support for Django 1.7 at the same time ;
- Check for Django deprecation warnings, and see if it's possible to not trigger them without breaking support for Django 1.8/1.9/1.10 (Beware: the travis build only tests for the latest Django version). | 0easy
|
Title: feat(api): change SOA RETRY to 1h (but keep REFRESH at 1d)
Body: See https://talk.desec.io/t/decrease-soa-ttl/301/5. | 0easy
|
Title: Help add mypy Type Annotations to all code files
Body: Contributors: we could really use your help with this effort! Adding type annotations (that pass mypy with `--strict` flag) to every function in the codebase is a lot of work, but it will make our code much more robust and help avoid future bugs.
Just start with any one file and help us add type annotations (that pass mypy with `--strict` flag) for the functions in this file. You can make a PR that only covers one file which shouldn't take that long.
To get started, just run:
```
mypy --strict --install-types --non-interactive cleanlab
```
and see which files it complains about. You can add strict type annotations for any one these files. After you have saved some changes, you can run the above command again to see if mypy is now happy with the strict type annotations you have added.
Also check out:
https://github.com/cleanlab/cleanlab/issues/307
https://github.com/cleanlab/cleanlab/issues/351
Example PR that added type annotations for one file: https://github.com/cleanlab/cleanlab/pull/317
You can similarly copy this pattern for other files.
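As a sketch of the kind of change involved (the function below is made up for illustration, not an actual cleanlab function), a fully annotated signature that passes `mypy --strict` looks like:
```python
from typing import Dict

import numpy as np
import numpy.typing as npt

def summarize_confidence(
    labels: npt.NDArray[np.int_],
    pred_probs: npt.NDArray[np.float64],
    verbose: bool = True,
) -> Dict[str, float]:
    # Mean predicted probability of each example's given label.
    score = float(np.mean(pred_probs[np.arange(len(labels)), labels]))
    if verbose:
        print(f"mean self-confidence: {score:.3f}")
    return {"mean_self_confidence": score}
```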
| 0easy
|
Title: Unused or suppressed code path in layers.Base.get_status?
Body: I feel like I should have caught this in #7584 😅. This else path:
https://github.com/napari/napari/blob/aafd25e222fffd7a98d7d883f7e3555ce57aa366/napari/layers/base/base.py#L2224-L2227
is from `if position is not None:` ... `else:`. That means that position is None, which means that `position - self._translate_grid` is an error. I can't trigger it on purpose though. I tried for example `examples/layers.py` and tried to mouse around bits of the viewer that don't have coordinates (as in #4664), and I don't get an error. However, I also don't get layer info in the status bar, so I wonder if the error is suppressed because status calculation is now in a separate thread...
Either way, we should figure out what is going on with this code path and either remove it or fix it.
CC @Czaki @psobolewskiPhD | 0easy
|
Title: License Coverage metric API
Body: The canonical definition is here: https://chaoss.community/?p=3961 | 0easy
|
Title: Show disabled blocks as disabled (greyed out and un-selectable) in the block list with a reason on hover (or similar) and add a separate block bool to hide blocks from the list
Body: Right now, locally, it's challenging to know what blocks exist and what blocks you don't have configured vs. what blocks don't exist. Add this capability so we can still hide blocks on the platform. You may need to play with the wording of hide vs disabled, etc | 0easy
|
Title: Issue with column widths in document chooser when there are long filenames
Body: ### Issue Summary
When there are long filenames (without spaces) for documents, the column widths of the table in the document chooser are stretched (especially when the browser width is narrow) so it is difficult to read the document title.
### Steps to Reproduce
1. Upload a document with a long filename (with no spaces)
2. Edit a page with a document chooser
3. View the modal in a narrow browser window
See attached screengrab

### Technical details
- Python version: unknown, I'm just raising this from an editorial perspective
- Django version: unknown, I'm just raising this from an editorial perspective
- Wagtail version: 6.1.3
- Browser version: Chrome 128
### Working on this
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| 0easy
|
Title: Add public API to query is Robot running and is dry-run active
Body: For libraries it would be helpful to know if RF is running, or if a library is just imported as python package or by libdoc, which also would be the case when language servers would analyze them.
https://robotframework.slack.com/archives/C0K0240NL/p1677318363986369
Proposal would be two boolean properties either in BuiltIn or robot.api?!
`robot_running` or `robot_is_running` would be True if there is an execution context.
`dry_run_active` or `dryrun_is_active` would be True if there is a running context and the dryrun is ongoing.
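A hypothetical usage sketch, assuming the properties end up on `BuiltIn` with the first pair of names (nothing here is final):
```python
from robot.libraries.BuiltIn import BuiltIn

class MyLibrary:
    def connect(self):
        builtin = BuiltIn()
        # Hypothetical properties proposed above; they do not exist yet.
        if builtin.robot_running and not builtin.dry_run_active:
            ...  # talk to the real system only during a normal execution
```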
| 0easy
|
Title: Save instance of results object on to disk
Body: Sometimes at the end of long optimization runs, we would like to store the objects to disk for future use.
One immediate use-case is https://github.com/scikit-optimize/scikit-optimize/issues/180
For pickling lambdas etc., I suggest we backport cloudpickle (https://github.com/apache/spark/blob/master/python/pyspark/cloudpickle.py) to `skopt.externals`, unless others have better ideas.
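A minimal sketch of the intended workflow, using cloudpickle directly rather than the proposed `skopt.externals` backport:
```python
import cloudpickle
from skopt import gp_minimize

result = gp_minimize(lambda x: (x[0] - 0.3) ** 2, [(-1.0, 1.0)], n_calls=15)

# Store the results object (including any lambdas it references) for later use.
with open("result.pkl", "wb") as f:
    cloudpickle.dump(result, f)

with open("result.pkl", "rb") as f:
    restored = cloudpickle.load(f)
```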
| 0easy
|
Title: Async email support
Body: Currently, all emails are sent in the `mixins.py` but would be great to have async support.
I think the easiest way to make an optional plug and play support would be creating a new setting to enter a function that wraps each send email call.
**Actually it's not an async email support, but make easy to integrate with your own async solution.**
```python
# settings
EMAIL_ASYNC_TASK = None
```
Then it accepts a function path:
```python
EMAIL_ASYNC_TASK = "path/to/task"
```
The task need to accept the email send function and its arguments, usage with celery would be something like this:
```python
from celery import task
@task
def graphql_auth_async_email(func, *args):
"""
Task to send an e-mail for the graphql_auth package
"""
return func(*args)
```
Then, in the `mixins.py`, we need to change all send email calls, for example:
```python
# from
user.status.send_activation_email(info)
# to
if app_settings.EMAIL_ASYNC_TASK:
app_settings.EMAIL_ASYNC_TASK(user.status.send_activation_email, info)
else:
user.status.send_activation_email(info)
```
Of course, to make this work we must import the function from its path in the `settings.py`, probably using:
```python
from django.utils.module_loading import import_string
```
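For example, resolving the configured path could look roughly like this (a sketch; the settings access is assumed):
```python
from django.conf import settings
from django.utils.module_loading import import_string

# Resolve the dotted path configured by the user, if any.
task_path = getattr(settings, "EMAIL_ASYNC_TASK", None)
EMAIL_ASYNC_TASK = import_string(task_path) if task_path else None
```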
| 0easy
|
Title: [DOCS] Gremlin Part II demo notebook
Body: **Is your feature request related to a problem? Please describe.**
More examples (and support)
**Describe the solution you'd like**
- [ ] Neptune Sagemaker instance <> graphistry server
- [ ] Passing gremlinpython objects to pygraphistry, including via magics
- [ ] Top queries: sampling, search & expand, connect-the-dots, ...
- [ ] Add transforms & enrichments
- [ ] Writebacks
**Describe alternatives you've considered**
- APIs should support, but a notebook should make top cases clear
- Do as a Part II to current, vs making current overwhelming
**Additional context**
Similar use cases in graph-app-kit
| 0easy
|
Title: Add docs page with links to forks for more specific purposes (so people can PR a link to their projects)
Body: | 0easy
|
Title: [Proposal] Allow to specify dtype for Discrete
Body: ### Proposal
Add dtype argument to spaces.Discrete (similar to MultiDiscrete and Box).
### Motivation
Currently the dtype is fixed to numpy.int64. However, often Discrete spaces are much smaller, resulting in a waste of memory.
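A sketch of what the proposed argument might look like (`dtype` is not accepted by `Discrete` today):
```python
import numpy as np
from gymnasium.spaces import Discrete

# Hypothetical: pick a smaller integer dtype for a small action space.
space = Discrete(4, dtype=np.int8)
assert space.sample().dtype == np.int8
```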
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| 0easy
|
Title: Marketplace - Reduce the margin under the Featured cards and the circular arrow buttons to 16px
Body: ### Describe your issue.
Reduce the margin under the Featured cards and the circular arrow buttons to 16px
<img width="1374" alt="Screenshot 2024-12-13 at 17 07 20" src="https://github.com/user-attachments/assets/cbdc19bd-ce6a-44bb-abe1-b6d08393d093" />
| 0easy
|
Title: add first-timers-welcome badge to readme
Body: Very easy, we need a badge to show that we welcome contributions from first-timers.
The code for the badge can be taken from here:
https://www.firsttimersonly.com/
and it is actually this one:
[](https://www.firsttimersonly.com/)
| 0easy
|
Title: Huge Requirements.txt for pyjanitor
Body: Hi ,
I am using some basic functions from pyjanitor such as - clean_names() , collapse_levels() in one of my code which I want to productionise.
And there are limitations on the size of the production code base.
Currently ,if I just look at the requirements.txt for just "pyjanitor" , its huge .
I don't think I require all the dependencies in my code.
How can I remove the unnecessary ones ?
Or Whats the best approach to productionize the code which uses the above basic pyjanitor functions.
To get requirement.txt file, I did --
python -m venv test_env
pip install pyjanitor
pip freeze > pyjanitor_req.txt
[pyjanitor_req.txt](https://github.com/ericmjl/pyjanitor/files/5878504/pyjanitor_req.txt)
| 0easy
|
Title: Support `alias_generator` callable to return an instance of `AliasChoices`
Body: ### Initial Checks
- [X] I have searched Google & GitHub for similar requests and couldn't find anything
- [X] I have read and followed [the docs](https://docs.pydantic.dev) and still think this feature is missing
### Description
**Context**
_Relates to https://github.com/pydantic/pydantic-settings/issues/406_
When setting individual fields aliases, one can use `AliasChoices` to specify multiple alternatives (for an example, see this [comment](https://github.com/pydantic/pydantic-settings/issues/406#issuecomment-2352788218))
But when using a callable as an `alias_generator`, the callable must return a `str`.
**Feature request**
can `alias_generator` be modified to also support Callables which return `AliasChoices`?
**My current use case**
It would be as described in the linked pydantic-settings issue:
> When using multi-word named settings, I would like to:
>
> 1. for CLI, use dashes for word separation, as it is the most common approach I've seen in CLI programs
> 2. for ENV variables, use underscores for word separation as it is the standard (and dashes are not supported anyway)
Although I managed to achieve this by [setting AliasChoices for each individual field](https://github.com/pydantic/pydantic-settings/issues/406#issuecomment-2352788218), it would be great to remove so much repetition.
**Code example**
I would like to avoid repetition and be able to simply set an alias_generator like so:
```python
model_config = SettingsConfigDict(
cli_parse_args=True,
env_nested_delimiter="__",
alias_generator=lambda x: AliasChoices(x, x.replace('_','-'))
)
```
### Affected Components
- [ ] [Compatibility between releases](https://docs.pydantic.dev/changelog/)
- [X] [Data validation/parsing](https://docs.pydantic.dev/concepts/models/#basic-model-usage)
- [ ] [Data serialization](https://docs.pydantic.dev/concepts/serialization/) - `.model_dump()` and `.model_dump_json()`
- [ ] [JSON Schema](https://docs.pydantic.dev/concepts/json_schema/)
- [ ] [Dataclasses](https://docs.pydantic.dev/concepts/dataclasses/)
- [X] [Model Config](https://docs.pydantic.dev/concepts/config/)
- [ ] [Field Types](https://docs.pydantic.dev/api/types/) - adding or changing a particular data type
- [ ] [Function validation decorator](https://docs.pydantic.dev/concepts/validation_decorator/)
- [ ] [Generic Models](https://docs.pydantic.dev/concepts/models/#generic-models)
- [ ] [Other Model behaviour](https://docs.pydantic.dev/concepts/models/) - `model_construct()`, pickling, private attributes, ORM mode
- [ ] [Plugins](https://docs.pydantic.dev/) and integration with other tools - mypy, FastAPI, python-devtools, Hypothesis, VS Code, PyCharm, etc. | 0easy
|