Title: Verify spearmanr works as expected in Rank2D
Body: The [`scipy.stats.spearmanr`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.spearmanr.html) correlation was added to Rank2D without tests, and on reviewing the documentation for the correlation metric and in comparison to the numpy methods for covariance and Pearson correlation, we should check to ensure things are correct.
The goal of Rank2D is to create a plot of pairwise comparisons between features, e.g. given a dataset `X` with 500 rows of instances and 4 columns A, B, C, and D, we should produce a metric table for each pair of columns as follows:
```
AA AB AC AD
BA BB BC BD
CA CB CC CD
DA DB DC DD
```
The documentation for `spearmanr` says it takes parameters `a`, `b` (`b` is optional), each of which can be 1D or 2D. The behavior in the 2D case depends on the axis: if `axis=0` then each column is a variable, with observations in the rows; if `axis=None` then both arrays are raveled. In the default case, it appears that everything is correct; however, we should ensure that `Rank2D` is not affected by upstream library changes by specifying exactly the axis along which we intend to compute the correlation.
To verify this, we should perform the following steps:
- [x] update `'spearman': lambda X: spearmanr(X)[0]` to `'spearman': lambda X: spearmanr(X, axis=0)[0]` and any other specifications we need for correctness.
- [x] create `TestRank2D.test_pearson`, `TestRank2D.test_covariance`, `TestRank2D.test_spearman`
The tests should:
1. Create a dataset with known correlations
2. Fit the visualizer with the dataset
3. Ensure that the `ranks_` size matches the number of features
4. Ensure that `ranks_` matches the expected correlations
5. Ensure that the image is drawn correctly.
See also #621
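A minimal sketch of such a test against scipy directly (the Rank2D fitting and drawing steps are left out; this only pins down the `spearmanr(..., axis=0)` behavior the tests should rely on):

```python
import numpy as np
from scipy.stats import spearmanr

# Dataset with known rank correlations: column 1 is a monotone increasing
# transform of column 0, column 2 is monotone decreasing, column 3 is noise.
rng = np.random.default_rng(42)
a = rng.normal(size=500)
X = np.column_stack([a, np.exp(a), -a, rng.normal(size=500)])

ranks = spearmanr(X, axis=0)[0]

assert ranks.shape == (4, 4)             # one row/column per feature
assert np.allclose(np.diag(ranks), 1.0)  # self-correlation is 1
assert np.isclose(ranks[0, 1], 1.0)      # monotone increasing -> rho = +1
assert np.isclose(ranks[0, 2], -1.0)     # monotone decreasing -> rho = -1
```

Spearman's rho is invariant under monotone transforms, which is what makes the expected values exact rather than approximate here.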
| 0easy
|
Title: Improve interactions tests
Body: ## Future works for `tests.test_interactions`
- make the nested functions (save_screenshot, perform_dragging, etc.) class methods of Test, and initialize the app in setUpClass (inherited from IntegrationTests).
- break down the interactions tests into separate tests, one for dragging, one for clicking, one for hovering, etc.
- programmatically set the `renderedPositions` of the nodes, rather than hardcoding them.
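The first two suggestions could be sketched as follows (class and helper names are stand-ins for the real dash.testing base classes, not the actual implementation):

```python
import unittest

class IntegrationTests(unittest.TestCase):
    """Stand-in for the real dash.testing base class; the app is created
    once for the whole class instead of once per test."""
    @classmethod
    def setUpClass(cls):
        cls.app = {"name": "cytoscape-demo"}  # placeholder for the Dash app

class TestInteractions(IntegrationTests):
    # Formerly nested helpers become methods, reusable across the split tests
    def perform_dragging(self, node_id, dx, dy):
        return (node_id, dx, dy)  # would issue an ActionChains drag here

    def test_dragging(self):
        self.assertEqual(self.perform_dragging("node1", 10, 20), ("node1", 10, 20))

    def test_hovering(self):
        self.assertIn("name", self.app)
```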
Those suggestions do not currently affect the testing procedures, but should we add supplementary tests or make substantial changes to the dash-cytoscape API, they could end up becoming useful. They might take substantial effort to implement. | 0easy
|
Title: NATR in non TA-lib mode
Body: ```python
# Calculate Result
if Imports["talib"] and mode_tal:
from talib import NATR
natr = NATR(high, low, close, length)
else:
natr = scalar / close
natr *= atr(high=high, low=low, close=close, length=length, mamode=mamode, drift=drift, offset=offset, **kwargs)
```
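For comparison, a self-contained pandas-only sketch of what the fallback branch computes (assuming the default `mamode` is Wilder's RMA; the function name and signature are illustrative, not the pandas-ta API):

```python
import pandas as pd

def natr_fallback(high, low, close, length=14, scalar=100):
    # True range: max of (high-low, |high-prev_close|, |low-prev_close|)
    prev_close = close.shift(1)
    tr = pd.concat(
        [high - low, (high - prev_close).abs(), (low - prev_close).abs()],
        axis=1,
    ).max(axis=1)
    # Wilder's RMA smoothing (alpha = 1/length), then normalize by close
    atr = tr.ewm(alpha=1 / length, min_periods=length).mean()
    return scalar * atr / close
```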
I guess that when `talib` is false, `atr` is still calculated in TA-Lib mode, since `talib=False` is not passed in the `atr` call. | 0easy
|
Title: Good First Issue: Allow `predict` method to accept date values as `steps`
Body: Use branch 0.14.x as base.
**Summary**
Currently, the `steps` parameter in all Forecasters' `predict` methods only accepts an integer value. This integer defines how many observations to forecast into the future. We would like to extend this functionality so that `steps` can also accept a date (e.g., `'2020-01-01'`). If a date is provided, the function should calculate the appropriate number of observations corresponding to the time window between the last observation in the last window and the given date.
**Task**
1. Create an auxiliary function, `_preprocess_steps_as_date(last_window: pd.Series, steps)` in the `utils` module:
- `last_window` is the last window of the series used to forecast the future. This is an argument of the `predict` method in all Forecasters.
- `steps` can be an integer or any datetime format that pandas allows to be passed to a `pd.DatetimeIndex` (e.g., string, pandas timestamp...).
- If the Forecaster was not fitted using a `pd.DatetimeIndex`, raise a `TypeError` with the message: "If the Forecaster was not fitted using a pd.DatetimeIndex, `steps` must be an integer."
- If the Forecaster was fitted using a `pd.DatetimeIndex`, this function will return the length of the time window between the last observation in the last window and the given date as an integer value.
- If the input `steps` is an integer, return the same integer.
- Create unit tests using pytest in the `utils.tests` folder.
```python
# Expected behavior
# ==============================================================================
last_window = pd.Series([1, 2, 3, 4, 5], index=pd.date_range('2020-01-01', periods=5, freq='D'))
_preprocess_steps_as_date(last_window, '2020-01-07') # expected output: 2
last_window = pd.Series([1, 2, 3, 4, 5], index=pd.date_range('2020-01-01', periods=5, freq='D'))
_preprocess_steps_as_date(last_window, 2) # expected output: 2
last_window = pd.Series([1, 2, 3, 4, 5], index=pd.RangeIndex(start=0, stop=5, step=1))
_preprocess_steps_as_date(last_window, '2020-01-07') # expected output: TypeError
```
2. Integrate this function in the `predict` method of the `ForecasterAutoreg` class.
**Acceptance Criteria**
- [ ] The `steps` parameter accepts both integer and date formats.
- [ ] The function correctly calculates the number of steps when a date is provided.
- [ ] Existing tests continue to pass.
- [ ] New test cases are added to verify the correct behavior for both int and date inputs.
**Full Example**
```python
# Expected behavior
# ==============================================================================
data = fetch_dataset(name="h2o", kwargs_read_csv={"names": ["y", "datetime"], "header": 0})
steps = 36
data_train = data[:-steps]
data_test = data[-steps:]
forecaster = ForecasterAutoreg(
regressor = LGBMRegressor(random_state=123, verbose=-1),
lags = 15
)
forecaster.fit(y=data_train['y'])
predictions = forecaster.predict(steps='2005-09-01')  # equivalent to steps=3
```
```
2005-07-01    1.020833
2005-08-01    1.021721
2005-09-01    1.093488
Freq: MS, Name: pred, dtype: float64
``` | 0easy
|
Title: Queuing related guides contain outdated information about `concurrency_count`
Body: ### Describe the bug
These guides related to queuing still refer to `concurrency_count`:
- [Queuing](https://www.gradio.app/guides/queuing)
- [Setting Up a Demo for Maximum Performance](https://www.gradio.app/guides/setting-up-a-demo-for-maximum-performance)
However, as confirmed in #9463:
> The `concurrency_count` parameter has been removed from `.queue()`. In Gradio 4, this parameter was already deprecated and had no effect. In Gradio 5, this parameter has been removed altogether.
Running the code from [Queuing](https://www.gradio.app/guides/queuing) guide results in the error below:
```
Exception has occurred: TypeError
EventListener._setup.<locals>.event_trigger() got an unexpected keyword argument 'concurrency_count'
File "./test_gradio.py", line 23, in <module>
greet_btn.click(fn=greet, inputs=[tag, output], outputs=[
TypeError: EventListener._setup.<locals>.event_trigger() got an unexpected keyword argument 'concurrency_count'
```
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
# Sample code from https://www.gradio.app/guides/queuing
import gradio as gr
with gr.Blocks() as demo:
prompt = gr.Textbox()
image = gr.Image()
generate_btn = gr.Button("Generate Image")
generate_btn.click(image_gen, prompt, image, concurrency_count=5)
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.6.0
gradio_client version: 1.4.3
```
### Severity
I can work around it | 0easy
|
Title: If github check fails, can't 'import interpreter' - Exception: Failed to fetch latest release from GitHub API.
Body: ### Describe the bug
If for some reason the github API is not working, you can't load the library due to the update check failing
### Reproduce
Try to import library when Github API is down:
```
File "/app/openinterpreter/chat.py", line 2, in <module>
import interpreter
File "/usr/local/lib/python3.10/site-packages/interpreter/__init__.py", line 1, in <module>
from .core.core import Interpreter
File "/usr/local/lib/python3.10/site-packages/interpreter/core/core.py", line 6, in <module>
from ..cli.cli import cli
File "/usr/local/lib/python3.10/site-packages/interpreter/cli/cli.py", line 6, in <module>
import ooba
File "/usr/local/lib/python3.10/site-packages/ooba/__init__.py", line 1, in <module>
from .download import download
File "/usr/local/lib/python3.10/site-packages/ooba/download.py", line 2, in <module>
from .utils.ensure_repo_exists import ensure_repo_exists
File "/usr/local/lib/python3.10/site-packages/ooba/utils/ensure_repo_exists.py", line 8, in <module>
TAG = get_latest_release()
File "/usr/local/lib/python3.10/site-packages/ooba/utils/get_latest_release.py", line 8, in get_latest_release
raise Exception("Failed to fetch latest release from GitHub API.")
Exception: Failed to fetch latest release from GitHub API.
```
### Expected behavior
Graceful failure and being able to continue even if update check is not possible
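A hedged sketch of the graceful fallback (the URL and the fallback tag are illustrative, not the actual ooba implementation):

```python
import json
import urllib.request

def get_latest_release(repo="oobabooga/text-generation-webui", fallback_tag="main"):
    """Fetch the latest release tag, but never raise at import time:
    fall back to a pinned tag if the GitHub API is unreachable."""
    url = f"https://api.github.com/repos/{repo}/releases/latest"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.loads(resp.read())["tag_name"]
    except Exception:
        # Graceful degradation: keep `import interpreter` working offline
        return fallback_tag
```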
### Screenshots
_No response_
### Open Interpreter version
0.1.10
### Python version
3.10
### Operating System name and version
Linux
### Additional context
_No response_ | 0easy
|
Title: [FEA] Bill of materials tracking
Body: **Is your feature request related to a problem? Please describe.**
As part of every release, include a bill of materials digest
**Describe the solution you'd like**
Ex: https://cyclonedx.org/use-cases/
| 0easy
|
Title: Switching from columns view to another view loses column metadata
Body: ### Describe the bug
* Create a notebook with a two columns, and arrange cells across the columns
* Switch to a compact view and save.
Expected behavior:
* Column metadata is still persisted in the notebook file.
Actual behavior:
* Column metadata is erased from the file.
### Environment
<details>
```
{
"marimo": "0.10.16",
"OS": "Darwin",
"OS Version": "22.5.0",
"Processor": "arm",
"Python Version": "3.12.4",
"Binaries": {
"Browser": "131.0.6778.265",
"Node": "v21.5.0"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.23.0",
"packaging": "24.2",
"psutil": "6.1.1",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.1",
"pyyaml": "6.0.2",
"ruff": "0.9.2",
"starlette": "0.45.2",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.2"
},
"Optional Dependencies": {}
}
```
</details>
### Code to reproduce
```python
import marimo
__generated_with = "0.10.16"
app = marimo.App(width="columns")
@app.cell(column=0)
def _():
"1"
return
@app.cell(column=1)
def _():
"2"
return
if __name__ == "__main__":
app.run()
```
| 0easy
|
Title: passlib - DeprecationWarning: 'crypt' is deprecated and slated for removal in Python 3.13
Body: If passlib isn't replaced, hashing will not work when python is updated to 3.13. | 0easy
|
Title: Dialogs created by `Dialogs` are not centered and their minimum size is too small
Body: While looking at #4619, I got annoyed that dialogs are not centered, and I also noticed that the minimum size is so small that the `Robot Framework` title isn't always shown. Both are trivial to fix. | 0easy
|
Title: Wrong link in `GDC` page
Body: ### 📚 Describe the documentation issue
Hi, there is a wrong link to the Diffusion Improves Graph Learning paper on `GDC` page.
https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.transforms.GDC.html
### Suggest a potential alternative/fix
Can anyone update the link to [this one](https://arxiv.org/abs/1911.05485)?
Thank you for reading this issue. | 0easy
|
Title: Add `history delete`
Body: We need two things:
1. `history delete <pattern>` command that will delete certain command from history
2. ~The way to mark commands and skip it while flushing history on disk i.e. `echo 123 #skip-history`. In bash this work by [adding the space](https://stackoverflow.com/questions/6475524/how-do-i-prevent-commands-from-showing-up-in-bash-history) before command.~ (implemented, see [$XONSH_HISTORY_IGNORE_REGEX](https://xon.sh/envvars.html#xonsh-history-ignore-regex))
### Discussed in https://github.com/xonsh/xonsh/discussions/4928
<div type='discussions-op-text'>
<sup>Originally posted by **manujchandra** August 28, 2022</sup>
I recently processed some OTP codes URI on the terminal using python qr library. It contains sensitive information regarding token IDS.
After that, I issued the commands history flush and history clear. But there are two problems:
1) When I press the up arrow, it refills the previous command which contains the sensitive information and it keeps going back in history.
2) When I write qr, it tries to autocomplete the full OTP URI with sensitive codes (in gray color).
How do I delete the history in such a manner that there is no autocomplete and also no history when I press the up button, so that the data cannot accidentally get flashed during screen sharing?

</div>
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Parsing the link fails to extract data
Body: Version 2.2.
After starting the server, the request sent is as follows:
```
{
"url":"http://xhslink.com/B/i1uWyT",
"download":false
}
```
The returned JSON is as follows:
```
{
"message": "获取小红书作品数据失败",
"url": "https://www.xiaohongshu.com/discovery/item/66bb735c0000000005022d85?app_platform=ios&app_version=8.52&share_from_user_hidden=true&xsec_source=app_share&type=normal&xsec_token=CBoLSCGJH_K9xehE2Zhanm_R4PH426vlQwamlvq0OT6Pk=&author_share=1&xhsshare=CopyLink&shareRedId=N0w7RDg5SEE2NzUyOTgwNjY0OThISEc_&apptime=1725604335&share_id=62e3c57b62ff458ab398fcb981e97c8f",
"data": null
}
```
Could you explain why this happens? | 0easy
|
Title: [ENH] to_fasta as part of janitor.biology
Body: `janitor.biology` could do with a `to_fasta` function, I think. The intent here would be to conveniently export a dataframe of sequences as a FASTA file, using one column as the fasta header.
strawman implementation below:
```python
import pandas_flavor as pf
from Bio.SeqRecord import SeqRecord
from Bio.Seq import Seq
from Bio import SeqIO

@pf.register_dataframe_method
def to_fasta(df, identifier_column_name, sequence_column_name, filename):
    """Write each row of ``df`` as a FASTA record, using
    ``identifier_column_name`` for the header and
    ``sequence_column_name`` for the sequence."""
    seq_records = []
    for _, row in df.iterrows():
        seq = Seq(row[sequence_column_name])
        seq_records.append(
            SeqRecord(seq, id=row[identifier_column_name], description="", name="")
        )
    SeqIO.write(seq_records, filename, format="fasta")
``` | 0easy
|
Title: Add citation to the readme
Body: About 11 peer-reviewed publications have cited the github repository, rather than the JOSS paper. Adding the citation at the top of the readme would help clarify how to cite the software.
The citation is currently stored in a separate file in the repository root. | 0easy
|
Title: Cleanup Usage of CIMultidict
Body: While reviewing #2097, @ashleysommer pointed out how we have been using `header`, which is a `CIMultiDict` type. We have been using `header.get` instead of explicitly using either `getone` or `getall`.
In order to keep the original PR clean, we are opening a new tracker item to make sure we can clean up this usage across the board in the `sanic` codebase.
1. Replace usage of `get` with the more suitable `getone` or `getall`, as appropriate
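For reference, a small illustration of the difference, using the `multidict` package directly (the header name is made up):

```python
from multidict import CIMultiDict

headers = CIMultiDict([("X-Token", "a"), ("X-Token", "b")])

# .get/.getone return only the first value; .getall returns every value,
# which matters for headers that may legitimately repeat
assert headers.getone("x-token") == "a"        # lookup is case-insensitive
assert headers.getall("X-Token") == ["a", "b"]
assert headers.getone("Missing", None) is None
```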
| 0easy
|
Title: [Feature request] Add apply_to_images to ZoomBlur
Body: | 0easy
|
Title: Document the export_schema command
Body: ## Feature Request Type
- Documentation
## Description
I was figuring out how to export my schema, since the new version of GraphiQL doesn't seem to allow you to. It was a bit of a struggle. With graphene there was a management command to do it, and the penny dropped that this library should have the same.
Little did I know that there was already an `export_schema` command. Let's put it in the docs! | 0easy
|
Title: add requirement for lower jinja2 version
Body: Please add a requirement for jinja2==3.0.3, because it sometimes crashes the notebook conversion to HTML. | 0easy
|
Title: Marketplace - agent page - Update body text to Geist font
Body:
### Describe your issue.
<img width="574" alt="Screenshot 2024-12-17 at 17 24 37" src="https://github.com/user-attachments/assets/58965dda-c4ff-478e-87de-2833b5e546b8" />
**Please update the font to the following style:** (This is the style "p-ui" in the typography style sheet)
```css
font-family: Geist;
font-size: 16px;
font-weight: 400;
line-height: 24px;
text-align: left;
text-underline-position: from-font;
text-decoration-skip-ink: none;
```
**And using the following colors:**
```css
background: var(--neutral-600, #525252);
```
| 0easy
|
Title: 📝 Followup with documentation - Internal Feature
Body: Introducing AuthX V1.0.0, our redesigned authentication system. This version incorporates numerous fresh functionalities and improvements aimed at enhancing security, usability, and performance.
## Core Functionality:
- JWT encoding/decoding for application authentication
- Automatic detection of JWTs in requests:
- JWTs in headers
- JWTs in cookies
- JWTs in query strings
- JWTs in JSON bodies
- Implicit/explicit token refresh mechanism
- Tracking the freshness state of tokens
- Route protection:
- Protection based on token type (access/refresh)
- Protection based on token freshness
- Partial route protection
- Handling custom user logic for revoked token validation
- Handling custom logic for token recipient retrieval (ORM, pydantic serialization...)
- Providing FastAPI-compliant dependency injection API
- Automatic error handling | 0easy
|
Title: Faster tests by deselecting slow tests
Body: Running the complete set of tests takes a long time. We can use [test attributes](http://nose.readthedocs.io/en/latest/plugins/attrib.html) to mark tests that take a long time to run, this gives people a chance to run a lightweight test quickly by deselecting those tests. Travis would still run all tests. | 0easy
|
Title: refactor: consistent logging
Body: We support basic logging for doc indices. It's defined in the abstract class and consists of simple logs for `init`, `index`, `find`, `filter` (`filter_batched`) and `text_search` (`text_search_batched`), etc.
The inconsistency is introduced when a specific doc index overrides method(s) listed above, and thus, doesn't use logs defined in the abstract class. As a result, some doc indices will have logs, and others won't.
**Definition of Done:**
- [ ] Add logs to document index implementations where needed (i.e. inside the functions that override ones in the abstract class) | 0easy
|
Title: ploomber scaffold should accept hyphens if not creating a packaged project
Body: | 0easy
|
Title: FEAT: Support Index.str
Body: Implements `Index.str`, almost same as `Series.str`.
| 0easy
|
Title: Contribution Attribution metric API
Body: The canonical definition is here: https://chaoss.community/?p=3616 | 0easy
|
Title: Fix typos in ```spec_evaluator.py```
Body: File: [spec_evaluator.py](https://github.com/scanapi/scanapi/blob/main/scanapi/evaluators/spec_evaluator.py#L72)
```Python
@classmethod
def filter_response_var(cls, spec_vars):
"""Returns a copy pf ``spec_vars`` without 'response' references.
Any items with a ``response.*`` reference in their value are left out.
Returns:
[dict]: filtered dictionary.
"""
pattern = re.compile(r"(?:(\s*response\.\w+))")
return {k: v for k, v in spec_vars.items() if not pattern.search(v)}
```
Update docstring typo in the classmethod `Returns a copy pf` -> `Returns a copy of` | 0easy
|
Title: update documents to reflect new default format
Body: initially, we were using the `py:light` format as the default one for scripts:
```python
# +
x = 1
# +
y = 2
```
However, we're now migrating to the percent format to be the default, since VSCode, PyCharm, and others support it:
```python
# %%
x = 1
# %%
y = 2
```
Some sections in our documentation need updates:
https://docs.ploomber.io/en/latest/user-guide/jupyter.html
https://docs.ploomber.io/en/latest/get-started/basic-concepts.html
I found these two but there may be others.
| 0easy
|
Title: [GOOD FIRST ISSUE]: Missing secret 'llm_api_key'
Body: ### Issue summary
no recognizing API key
### Detailed description
I've been trying to put my OpenAI API key in for a couple of hours now with no luck. I've confirmed secrets.yaml is up to date, and have tried 5-10 different syntaxes as well as bringing the key in as an environment variable. Here's my error when I try to run python main.py:
(venv) PS C:\Users\ashdo\OneDrive\Desktop\Job Search\Auto_Jobs_Applier_AIHawk> python main.py
2024-10-11 19:42:39.303 | ERROR | __main__:main:205 - Configuration error: Missing secret 'llm_api_key' in file data_folder\secrets.yaml
2024-10-11 19:42:39.304 | ERROR | __main__:main:206 - Refer to the configuration guide for troubleshooting: https://github.com/feder-cr/AIHawk_AIHawk_automatic_job_application/blob/main/readme.md#configuration Missing secret 'llm_api_key' in file data_folder\secrets.yaml
secrets.yaml:
```yaml
llm_api_key: ["${OPENAI_API_KEY}"]
```
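A hedged guess at the fix: the config loader most likely expects a plain string value and does not expand `${...}` environment references, so the key would need to be set literally (the placeholder value below is illustrative):

```yaml
# data_folder/secrets.yaml — a single scalar string, not a list;
# ${OPENAI_API_KEY} interpolation is likely not performed by the loader
llm_api_key: "sk-your-actual-key-here"
```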
config.yaml
remote: true
experienceLevel:
internship: false
entry: false
associate: false
mid-senior level: true
director: true
executive: true
jobTypes:
full-time: true
contract: true
part-time: false
temporary: true
internship: false
other: false
volunteer: true
date:
all time: true
month: true
week: true
24 hours: true
positions:
- Sales
- Account Executive
- AI Process Engineer
- Business Development
- AI Product Manager
- AI Solutions Architect
- AI Specialist
- Automation Engineer
- Cognitive Solutions Architect
- AI Implementation Consultant
locations:
- Cambridge, Massachusetts
- Stanford, California
- Oxford, United Kingdom
- Cambridge, United Kingdom
- London, United Kingdom
- Washington, D.C.
- New York City, New York
- Princeton, New Jersey
- Chicago, Illinois
- New Haven, Connecticut
- Berkeley, California
- Cambridge, Massachusetts
- San Diego, California
- Paris, France
- Toronto, Canada
- Canberra, Australia
- Ann Arbor, Michigan
- Ithaca, New York
- London, United Kingdom
- Tokyo, Japan
- Sydney, Australia
- Beijing, China
- New York City, New York
- Geneva, Switzerland
- Edinburgh, United Kingdom
- Copenhagen, Denmark
- Shanghai, China
- Singapore, Singapore
- St Andrews, United Kingdom
- Lima, Peru
- Bogota, Columbia
- Brazil
apply_once_at_company: true
distance: 100
company_blacklist:
- Ajax Tocco
- Makino SST
- Absolute Machine Tools
- Heartland Machine and Engineering
#title_blacklist:
#- word1
#- word2
job_applicants_threshold:
min_applicants: 0
max_applicants: 50
llm_model_type: openai
llm_model: 'gpt-4o-mini'
llm_api_url: https://api.pawan.krd/cosmosrp/v1
plain_text_resume.yaml
personal_information:
name: "Michael"
surname: "&&&&&&&&&&"
date_of_birth: "12/12/1986" # Update with your actual date of birth
country: "US"
city: "Ferndale"
zip_code: "48220"
address: "580 &&&&&&& St."
phone_prefix: "+1"
phone: "********"
email: "&&&&&&&&&"
#github: "https://github.com/mikeashdown" # Update if applicable
linkedin: "https://www.linkedin.com/in/mikeashdown/"
education_details:
- education_level: "Bachelor of Science"
institution: "YSU College of STEM"
field_of_study: "Mechanical Engineering"
final_evaluation_grade: "N/A" # Update if applicable
year_of_completion: "2025"
start_date: "2019"
# additional_info:
# exam:
# Algorithms: "A"
# Linear_Algebra: "A"
# Database_Systems: "A"
# Operating_Systems: "A-"
# Web_Development: "A"
- education_level: "Bachelor of Science"
institution: "Miami University"
field_of_study: "Manufacturing Engineering"
final_evaluation_grade: "N/A" # Update if applicable
year_of_completion: "2010"
start_date: "2006"
#additional_info:
# exam:
# Relevant_Courses: "N/A" # Add specific courses if applicable
- education_level: "Bachelor of Science"
institution: "Miami University"
field_of_study: "Engineering Management"
final_evaluation_grade: "N/A" # Update if applicable
year_of_completion: "2010"
start_date: "2006"
#additional_info:
# exam:
# Relevant_Courses: "N/A" # Add specific courses if applicable
experience_details:
- position: "Owner/Founder"
company: "MTA Lead Generation"
employment_period: "July 2023 - Present"
location: "Ferndale, MI"
industry: "Sales Training & Process Improvement"
key_responsibilities:
- "Founded a sales training and process improvement organization, specializing in lead generation and AI process enhancement."
- "Implemented AI systems in a medical practice, resulting in $77K annual revenue increase and $91K cost reduction."
- "Successfully executed lead generation and sales strategies across diverse industries including trucking insurance, accounting, and corporate leadership training."
skills_acquired:
- "Sales Strategy"
- "AI Process Enhancement"
- "Lead Generation"
- position: "Account Executive & Business Development Manager"
company: "Sales Insights Lab"
employment_period: "July 2022 - December 2023"
location: "Ferndale, MI"
industry: "B2B Sales"
key_responsibilities:
- "Rapidly promoted from Sales Development to Account Executive within three weeks due to exceptional performance."
- "Achieved highest closing rate of 27%, closing 93 deals and becoming lead sales strategist within five months."
- "Successfully managed dual roles of Sales Development Manager and Account Executive, demonstrating versatility and efficiency."
skills_acquired:
- "Full-Cycle Sales"
- "Relationship Management"
- "Sales Strategy"
- position: "Outside Sales Engineer"
company: "Makino SST"
employment_period: "July 2021 - July 2022"
location: "Auburn Hills, MI"
industry: "Machine Tools & EDM Consumables"
key_responsibilities:
- "Developed and executed comprehensive sales strategy for ~900 new accounts, including cold calling, discovery, quoting, and presentation."
- "Captured new business consistently in a highly competitive machine tool and EDM consumables market."
skills_acquired:
- "Cold Calling"
- "Sales Strategy Development"
- "Market Penetration"
- position: "Outside Sales Manager"
company: "Heartland Machine & Engineering"
employment_period: "April 2020 - June 2021"
location: "Farmington Hills, MI"
industry: "Manufacturing Engineering"
key_responsibilities:
- "Spearheaded company expansion into Michigan territory, building a project pipeline of $15.7 million and over 137 projects."
- "Developed and implemented strategy to engage entire 521-company territory every six weeks, ensuring consistent market presence."
skills_acquired:
- "Territory Expansion"
- "Project Pipeline Development"
- "Strategic Planning"
- position: "Outside Sales Engineer"
company: "Absolute Machine Tools"
employment_period: "March 2019 - March 2020"
location: "Livonia, MI"
industry: "Manufacturing Engineering"
key_responsibilities:
- "Expanded company presence into North Eastern Michigan, managing relationships with ~1213 manufacturers."
- "Discovered and quoted 123 projects within a year, demonstrating strong prospecting and solution selling skills."
skills_acquired:
- "Client Relationship Management"
- "Prospecting"
- "Solution Selling"
- position: "Field Sales Engineer"
company: "Ajax Tocco Magnethermic"
employment_period: "October 2013 - March 2019"
location: "Madison Heights, MI / Warren, OH"
industry: "Manufacturing Engineering"
key_responsibilities:
- "Transitioned from engineering to sales role, managing aftermarket product line for Michigan."
- "Achieved sales growth of 16.3% in 2015 and 101% in 2016, leading to territory expansion to include Illinois."
skills_acquired:
- "Aftermarket Product Management"
- "Sales Growth Strategies"
- "Territory Expansion"
##projects:
# - name: "Sales Automation Tool"
# description: "Developed a Python script to automate LinkedIn job applications, integrating AI-driven resume customization and automated submissions."
# link: "https://github.com/mikeashdown/sales-automation-tool"
achievements:
- name: "Top Sales Performer"
description: "Consistently exceeded sales targets by 20%+ each quarter at Sales Insights Lab."
- name: "Cost Reduction Award"
description: "Awarded for implementing AI-driven processes that decreased annual costs by $77k at client company"
certifications:
- "Sales Insights Lab Certified Sales Professional"
languages:
- language: "English"
proficiency: "Fluent"
- language: "Spanish"
proficiency: "Intermediate"
interests:
- "Artificial Intelligence"
- "Sales Strategy"
- "Process Improvement"
- "Machine Learning"
- "Digital Marketing"
- "Dance"
- "Spanish"
- "Traveling"
availability:
notice_period: "2 weeks"
salary_expectations:
salary_range_usd: "111,573 - 127,311"
self_identification:
gender: "Male"
pronouns: "He/Him"
veteran: "No"
disability: "No"
ethnicity: "Caucasian"
legal_authorization:
eu_work_authorization: "No"
legally_allowed_to_work_in_eu: "No"
legally_allowed_to_work_in_us: "Yes"
requires_eu_sponsorship: "No"
canada_work_authorization: "No"
requires_canada_visa: "Yes"
legally_allowed_to_work_in_canada: "No"
requires_canada_sponsorship: "No"
uk_work_authorization: "No"
requires_uk_visa: "Yes"
legally_allowed_to_work_in_uk: "No"
requires_uk_sponsorship: "No"
work_preferences:
remote_work: "Yes"
in_person_work: "yes"
open_to_relocation: "Yes"
willing_to_complete_assessments: "Yes"
willing_to_undergo_drug_tests: "Yes"
willing_to_undergo_background_checks: "Yes"
### Steps to reproduce (if applicable)
_No response_
### Expected outcome
does not run
### Additional context
_No response_ | 0easy
|
Title: Raise test coverage above 90% for gtda/diagrams/features.py
Body: Current test coverage from pytest is 84% | 0easy
|
Title: `xonfig web` overwrites its own config within the same session (colors vs. prompts)
Body: <!--- Provide a general summary of the issue in the Title above -->
<!--- If you have a question along the lines of "How do I do this Bash command in xonsh"
please first look over the Bash to Xonsh translation guide: https://xon.sh/bash_to_xsh.html
If you don't find an answer there, please do open an issue! -->
## xonfig
No config file to start with:
```
$ xonfig
+------------------+-----------------+
| xonsh | 0.15.1 |
| Python | 3.11.8 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.40 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.17.2 |
| on posix | True |
| on linux | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file | [] |
+------------------+-----------------+
```
## Expected Behavior
Running `xonfig web` and clicking through colors and themes should in the end result in a `.xonshrc` that contains all of the choices made, not just the last one.
## Current Behavior
`xonfig web` seems to overwrite the entire `.xonshrc` file instead of appending to or otherwise modifying it.
## Steps to Reproduce
Start without an `.xonshrc` or other config file:
```xsh
cat .xonshrc
# cat: .xonshrc: No such file or directory
ls -a .config/xonsh/
# . ..
```
Start `xonfig web`, in it, select a non-default color theme, then click on the "Update $XONSH_COLOR_STYLE" button.
```xsh
cat .xonshrc
# XONSH WEBCONFIG START
$XONSH_COLOR_STYLE = 'solarized-dark'
```
Now go to "Prompts", select a non-default one, click "Set". The previously selected color style is now overwritten with the prompt config
```xsh
cat .xonshrc
# XONSH WEBCONFIG START
$PROMPT = '[{localtime}] {YELLOW}{env_name} {BOLD_BLUE}{user}@{hostname} {BOLD_GREEN}{cwd} {gitstatus}{RESET}\n> '
```
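A hedged sketch of the fix on the config-writing side: merge each new setting into the existing `# XONSH WEBCONFIG START` block instead of rewriting the file (the function name and marker handling are illustrative, not xonsh's actual implementation):

```python
MARKER = "# XONSH WEBCONFIG START"

def update_rc(rc_text: str, assignment: str) -> str:
    """Insert or update one `$VAR = ...` line, preserving earlier choices."""
    lines = rc_text.splitlines() if rc_text else [MARKER]
    if MARKER not in lines:
        lines.insert(0, MARKER)
    var = assignment.split("=", 1)[0].strip()
    for i, line in enumerate(lines):
        if line.strip() != MARKER and line.split("=", 1)[0].strip() == var:
            lines[i] = assignment  # re-setting the same variable replaces it
            break
    else:
        lines.append(assignment)  # a new variable is appended, not overwritten
    return "\n".join(lines) + "\n"

rc = update_rc("", "$XONSH_COLOR_STYLE = 'solarized-dark'")
rc = update_rc(rc, "$PROMPT = '{cwd} > '")
assert "$XONSH_COLOR_STYLE = 'solarized-dark'" in rc and "$PROMPT" in rc
```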
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Switch from UUID v4 to v7
Body: **Describe the solution you'd like**
Migrate to using UUID v7 (`uuid6` package, which despite its name provides `uuid7()` fn) as data type for `id` columns.
**Additional context**
These articles outline advantages of v7 (as opposed to v4), and also some pitfalls of not doing so:
[Link 1](https://www.toomanyafterthoughts.com/uuids-are-bad-for-database-index-performance-uuid7) comes with very detailed information about all UUID implementations (benchmarks included), [Link 2](https://www.cybertec-postgresql.com/en/unexpected-downsides-of-uuid-keys-in-postgresql) provides a well-rounded writeup on implications for PostgreSQL in particular!
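The `uuid6` package provides `uuid7()` directly; for illustration only, a stdlib sketch of the v7 bit layout (the leading 48-bit millisecond timestamp is what makes the ids index-friendly):

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """RFC 9562 UUIDv7 sketch: [48 ts_ms][4 ver][12 rand_a][2 var][62 rand_b]."""
    ts_ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big")   # 80 random bits
    value = (ts_ms & ((1 << 48) - 1)) << 80        # time-ordered prefix
    value |= 0x7 << 76                             # version 7
    value |= ((rand >> 62) & 0xFFF) << 64          # rand_a (12 bits)
    value |= 0b10 << 62                            # RFC 4122 variant
    value |= rand & ((1 << 62) - 1)                # rand_b (62 bits)
    return uuid.UUID(int=value)

u = uuid7()
assert u.version == 7 and u.variant == uuid.RFC_4122
```

Because the timestamp occupies the most significant bits, ids generated later sort after earlier ones, which keeps B-tree index pages append-mostly.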
| 0easy
|
Title: CFO (Chande Forecast Oscillator)
Body: Hey @twopirllc,
I'm still using the pandas-ta==0.1.36b0 version at the moment and added one indicator on my own in my project. This indicator is known as the Chande Forecast Oscillator (CFO).
Here is the working code for it. You may alter it according to the current codebase and include it.
```
import pandas_ta as ta_p

# `data` is a DataFrame with a 'close' column
period = 14
data['linreg'] = ta_p.linreg(data['close'], length=period)
data['linreg'] = data['linreg'].round(decimals=2)
data['cfo'] = (data['close'] - data['linreg']) / data['close'] * 100
data['cfo'] = data['cfo'].round(decimals=2)
```
And here is a [link](https://www.fmlabs.com/reference/default.htm?url=ForecastOscillator.htm) for the formula.
Let me know if you need more details before you include it in the master code.
Cheers !!
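For reference, a dependency-free sketch of the same computation (helper names here are hypothetical; the pandas-ta snippet above is the intended integration path):

```python
def linreg_forecast(window):
    """Least-squares forecast of the last point in the window (x = 0..n-1)."""
    n = len(window)
    xs = range(n)
    sx, sy = sum(xs), sum(window)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, window))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept + slope * (n - 1)

def cfo(closes, period=14):
    """CFO = 100 * (close - linear-regression forecast) / close."""
    out = [None] * (period - 1)  # not enough history for the first points
    for i in range(period - 1, len(closes)):
        forecast = linreg_forecast(closes[i - period + 1 : i + 1])
        out.append(100.0 * (closes[i] - forecast) / closes[i])
    return out
```

On a perfectly linear price series the forecast matches the close, so the oscillator stays at zero, which is a quick sanity check for any implementation.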
| 0easy
|
Title: Feat: Color customization
Body: Currently we just have a single main yellow color, and I think there may be requests for color customization from other users. It would be easy to extract the color value into the config file, which would also make it convenient for us to extend in the future. | 0easy
|
Title: Logging APIs do not work if Robot Framework is run on thread
Body: When running a Robot Framework task directly, the logs are generated in the log.html file as expected. However, when the same task is registered as an APScheduler job and executed by the scheduled task, the logs do not appear in log.html.
Here is the code for the scheduled task:
```
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.interval import IntervalTrigger
import robot
from datetime import datetime, timedelta
# Define the function to execute the Robot Framework task
def run_robot_task(robot_file_path):
# Generate report directory
report_dir = "./"
# Run the Robot Framework task
robot.run(robot_file_path, outputdir=report_dir)
# Create the APScheduler background scheduler
scheduler = BackgroundScheduler()
# Define the scheduling time, using IntervalTrigger to execute every 60 seconds
job_trigger = IntervalTrigger(seconds=60)
# Provide the actual Robot Framework file path
robot_file_path = "./3.robot"
# Register the job
scheduler.add_job(run_robot_task, job_trigger, args=[robot_file_path], id="robot_task",
name="Execute Robot Framework Task", next_run_time=datetime.now() + timedelta(seconds=5))
# Start the scheduler
scheduler.start()
# Keep the main thread running to ensure the task continues to schedule
try:
    import time
    while True:
        time.sleep(1)  # idle without busy-waiting
except (KeyboardInterrupt, SystemExit):
# Shutdown the scheduler
scheduler.shutdown()
```
Here is the code for 3.robot
```
*** Settings ***
Library Collections
Library SeleniumLibrary
Library DateTime
Library OperatingSystem
Library BuiltIn
*** Variables ***
${DETAILS_BUTTON_XPATH} //*[@id="details-button"]
${PROCEED_LINK_XPATH} //*[@id="proceed-link"]
${SRARCH_SHOW_HIDE} //*[@id="search_show_hide"]
*** Test Cases ***
Open Browser With Custom Proxy
${version}= Evaluate robot.__version__
Log Robot Framework version: ${version} console=True
Log Robot Framework version: ${version} level=INFO
Log This is an info message level=INFO
Log This is a debug message level=DEBUG
Log This is a trace message level=TRACE
``` | 0easy
|
Title: drop numpy dependency from Python code for cases without vectors
Body: According to the line below, it seems that `numpy` is used as the default math library at runtime even when we do not operate on vectors.
https://github.com/BayesWitnesses/m2cgen/blob/2475f3cddb5b328c8673795ca3cbe4fdc89f6797/m2cgen/interpreters/python/interpreter.py#L30-L31
Let me describe two advantages of dropping `numpy` where it's possible.
The first one is the **extra dependency**. Even though `numpy` is a sort of "classic" dependency and there should be no problems with installing it, it requires additional manipulation on the user's side. Also, there are some companies with very strict security policies, which prohibit using pip (conda, brew, and other package managers). So, I guess, for them raw Python may be a preferable solution in cases where it's possible.
The second one is **speed**. `numpy` is about efficient **vector** math; in other cases it only produces redundant computational cost. Consider the following example. Take [this generated Python code from the repo](https://github.com/BayesWitnesses/m2cgen/blob/master/generated_code_examples/python/classification/svm.py), change the return type from `np.array` to a simple `list`, and replace the following things in the script:
- `numpy` -> `math`
- `np.exp` -> `math.exp`
- `np.power` -> `math.pow`
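The substitutions above can be expressed as a plain textual rewrite (just an illustration of the experiment; m2cgen's interpreter would of course emit `math` calls directly rather than post-process its output):

```python
# The substitution list mirrors the bullets above.
REPLACEMENTS = [
    ("import numpy as np", "import math"),
    ("np.exp", "math.exp"),
    ("np.power", "math.pow"),
]

def strip_numpy(code: str) -> str:
    """Rewrite scalar numpy calls in generated code to math-module calls."""
    for old, new in REPLACEMENTS:
        code = code.replace(old, new)
    return code
```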
Here is what we get after removing `numpy`:
```
import math
def score_raw(input):
var0 = (0) - (0.25)
var1 = math.exp((var0) * ((((math.pow((5.4) - (input[0]), 2)) + (math.pow((3.0) - (input[1]), 2))) + (math.pow((4.5) - (input[2]), 2))) + (math.pow((1.5) - (input[3]), 2))))
var2 = math.exp((var0) * ((((math.pow((6.2) - (input[0]), 2)) + (math.pow((2.2) - (input[1]), 2))) + (math.pow((4.5) - (input[2]), 2))) + (math.pow((1.5) - (input[3]), 2))))
var3 = math.exp((var0) * ((((math.pow((5.0) - (input[0]), 2)) + (math.pow((2.3) - (input[1]), 2))) + (math.pow((3.3) - (input[2]), 2))) + (math.pow((1.0) - (input[3]), 2))))
var4 = math.exp((var0) * ((((math.pow((5.9) - (input[0]), 2)) + (math.pow((3.2) - (input[1]), 2))) + (math.pow((4.8) - (input[2]), 2))) + (math.pow((1.8) - (input[3]), 2))))
var5 = math.exp((var0) * ((((math.pow((5.0) - (input[0]), 2)) + (math.pow((2.0) - (input[1]), 2))) + (math.pow((3.5) - (input[2]), 2))) + (math.pow((1.0) - (input[3]), 2))))
var6 = math.exp((var0) * ((((math.pow((6.7) - (input[0]), 2)) + (math.pow((3.0) - (input[1]), 2))) + (math.pow((5.0) - (input[2]), 2))) + (math.pow((1.7) - (input[3]), 2))))
var7 = math.exp((var0) * ((((math.pow((7.0) - (input[0]), 2)) + (math.pow((3.2) - (input[1]), 2))) + (math.pow((4.7) - (input[2]), 2))) + (math.pow((1.4) - (input[3]), 2))))
var8 = math.exp((var0) * ((((math.pow((4.9) - (input[0]), 2)) + (math.pow((2.4) - (input[1]), 2))) + (math.pow((3.3) - (input[2]), 2))) + (math.pow((1.0) - (input[3]), 2))))
var9 = math.exp((var0) * ((((math.pow((6.3) - (input[0]), 2)) + (math.pow((2.5) - (input[1]), 2))) + (math.pow((4.9) - (input[2]), 2))) + (math.pow((1.5) - (input[3]), 2))))
var10 = math.exp((var0) * ((((math.pow((6.0) - (input[0]), 2)) + (math.pow((2.7) - (input[1]), 2))) + (math.pow((5.1) - (input[2]), 2))) + (math.pow((1.6) - (input[3]), 2))))
var11 = math.exp((var0) * ((((math.pow((5.7) - (input[0]), 2)) + (math.pow((2.6) - (input[1]), 2))) + (math.pow((3.5) - (input[2]), 2))) + (math.pow((1.0) - (input[3]), 2))))
var12 = math.exp((var0) * ((((math.pow((5.1) - (input[0]), 2)) + (math.pow((3.8) - (input[1]), 2))) + (math.pow((1.9) - (input[2]), 2))) + (math.pow((0.4) - (input[3]), 2))))
var13 = math.exp((var0) * ((((math.pow((4.4) - (input[0]), 2)) + (math.pow((2.9) - (input[1]), 2))) + (math.pow((1.4) - (input[2]), 2))) + (math.pow((0.2) - (input[3]), 2))))
var14 = math.exp((var0) * ((((math.pow((5.7) - (input[0]), 2)) + (math.pow((4.4) - (input[1]), 2))) + (math.pow((1.5) - (input[2]), 2))) + (math.pow((0.4) - (input[3]), 2))))
var15 = math.exp((var0) * ((((math.pow((5.8) - (input[0]), 2)) + (math.pow((4.0) - (input[1]), 2))) + (math.pow((1.2) - (input[2]), 2))) + (math.pow((0.2) - (input[3]), 2))))
var16 = math.exp((var0) * ((((math.pow((5.1) - (input[0]), 2)) + (math.pow((3.3) - (input[1]), 2))) + (math.pow((1.7) - (input[2]), 2))) + (math.pow((0.5) - (input[3]), 2))))
var17 = math.exp((var0) * ((((math.pow((5.7) - (input[0]), 2)) + (math.pow((3.8) - (input[1]), 2))) + (math.pow((1.7) - (input[2]), 2))) + (math.pow((0.3) - (input[3]), 2))))
var18 = math.exp((var0) * ((((math.pow((4.3) - (input[0]), 2)) + (math.pow((3.0) - (input[1]), 2))) + (math.pow((1.1) - (input[2]), 2))) + (math.pow((0.1) - (input[3]), 2))))
var19 = math.exp((var0) * ((((math.pow((4.5) - (input[0]), 2)) + (math.pow((2.3) - (input[1]), 2))) + (math.pow((1.3) - (input[2]), 2))) + (math.pow((0.3) - (input[3]), 2))))
var20 = math.exp((var0) * ((((math.pow((6.3) - (input[0]), 2)) + (math.pow((2.7) - (input[1]), 2))) + (math.pow((4.9) - (input[2]), 2))) + (math.pow((1.8) - (input[3]), 2))))
var21 = math.exp((var0) * ((((math.pow((6.0) - (input[0]), 2)) + (math.pow((3.0) - (input[1]), 2))) + (math.pow((4.8) - (input[2]), 2))) + (math.pow((1.8) - (input[3]), 2))))
var22 = math.exp((var0) * ((((math.pow((6.3) - (input[0]), 2)) + (math.pow((2.8) - (input[1]), 2))) + (math.pow((5.1) - (input[2]), 2))) + (math.pow((1.5) - (input[3]), 2))))
var23 = math.exp((var0) * ((((math.pow((5.8) - (input[0]), 2)) + (math.pow((2.8) - (input[1]), 2))) + (math.pow((5.1) - (input[2]), 2))) + (math.pow((2.4) - (input[3]), 2))))
var24 = math.exp((var0) * ((((math.pow((6.1) - (input[0]), 2)) + (math.pow((3.0) - (input[1]), 2))) + (math.pow((4.9) - (input[2]), 2))) + (math.pow((1.8) - (input[3]), 2))))
var25 = math.exp((var0) * ((((math.pow((7.7) - (input[0]), 2)) + (math.pow((2.6) - (input[1]), 2))) + (math.pow((6.9) - (input[2]), 2))) + (math.pow((2.3) - (input[3]), 2))))
var26 = math.exp((var0) * ((((math.pow((6.9) - (input[0]), 2)) + (math.pow((3.1) - (input[1]), 2))) + (math.pow((5.1) - (input[2]), 2))) + (math.pow((2.3) - (input[3]), 2))))
var27 = math.exp((var0) * ((((math.pow((6.3) - (input[0]), 2)) + (math.pow((3.3) - (input[1]), 2))) + (math.pow((6.0) - (input[2]), 2))) + (math.pow((2.5) - (input[3]), 2))))
var28 = math.exp((var0) * ((((math.pow((4.9) - (input[0]), 2)) + (math.pow((2.5) - (input[1]), 2))) + (math.pow((4.5) - (input[2]), 2))) + (math.pow((1.7) - (input[3]), 2))))
var29 = math.exp((var0) * ((((math.pow((6.0) - (input[0]), 2)) + (math.pow((2.2) - (input[1]), 2))) + (math.pow((5.0) - (input[2]), 2))) + (math.pow((1.5) - (input[3]), 2))))
var30 = math.exp((var0) * ((((math.pow((7.9) - (input[0]), 2)) + (math.pow((3.8) - (input[1]), 2))) + (math.pow((6.4) - (input[2]), 2))) + (math.pow((2.0) - (input[3]), 2))))
var31 = math.exp((var0) * ((((math.pow((7.2) - (input[0]), 2)) + (math.pow((3.0) - (input[1]), 2))) + (math.pow((5.8) - (input[2]), 2))) + (math.pow((1.6) - (input[3]), 2))))
var32 = math.exp((var0) * ((((math.pow((7.7) - (input[0]), 2)) + (math.pow((3.8) - (input[1]), 2))) + (math.pow((6.7) - (input[2]), 2))) + (math.pow((2.2) - (input[3]), 2))))
return [(((((((((((((((((((-0.08359187780790468) + ((var1) * (-0.0))) + ((var2) * (-0.0))) + ((var3) * (-0.4393498355605194))) + ((var4) * (-0.009465620856664334))) + ((var5) * (-0.16223369966927))) + ((var6) * (-0.26861888775075243))) + ((var7) * (-0.4393498355605194))) + ((var8) * (-0.4393498355605194))) + ((var9) * (-0.0))) + ((var10) * (-0.0))) + ((var11) * (-0.19673905328606292))) + ((var12) * (0.3340655283922188))) + ((var13) * (0.3435087305152051))) + ((var14) * (0.4393498355605194))) + ((var15) * (0.0))) + ((var16) * (0.28614124535416424))) + ((var17) * (0.11269159286168087))) + ((var18) * (0.0))) + ((var19) * (0.4393498355605194)), (((((((((((((((((((((-0.18563912331454907) + ((var20) * (-0.0))) + ((var21) * (-0.06014273244194299))) + ((var22) * (-0.0))) + ((var23) * (-0.031132453078851926))) + ((var24) * (-0.0))) + ((var25) * (-0.3893079321588921))) + ((var26) * (-0.06738007627290196))) + ((var27) * (-0.1225075748937126))) + ((var28) * (-0.3893079321588921))) + ((var29) * (-0.29402231709614085))) + ((var30) * (-0.3893079321588921))) + ((var31) * (-0.0))) + ((var32) * (-0.028242141062729226))) + ((var12) * (0.16634667752431267))) + ((var13) * (0.047772685163074764))) + ((var14) * (0.3893079321588921))) + ((var15) * (0.3893079321588921))) + ((var16) * (0.0))) + ((var17) * (0.0))) + ((var18) * (0.3893079321588921))) + ((var19) * (0.3893079321588921)), ((((((((((((((((((((((((0.5566649875797668) + ((var20) * (-25.563066587228416))) + ((var21) * (-38.35628154976547))) + ((var22) * (-38.35628154976547))) + ((var23) * (-0.0))) + ((var24) * (-38.35628154976547))) + ((var25) * (-0.0))) + ((var26) * (-0.0))) + ((var27) * (-0.0))) + ((var28) * (-6.2260303727828745))) + ((var29) * (-18.42781911624364))) + ((var30) * (-0.14775026537286423))) + ((var31) * (-7.169755983020096))) + ((var32) * (-0.0))) + ((var1) * (12.612328267927264))) + ((var2) * (6.565812506955159))) + ((var3) * (0.0))) + ((var4) * (38.35628154976547))) + ((var5) * (0.0))) + ((var6) * 
(38.35628154976547))) + ((var7) * (0.0))) + ((var8) * (0.0))) + ((var9) * (38.35628154976547))) + ((var10) * (38.35628154976547))) + ((var11) * (0.0))]
```
And here are some timings:
```
%%timeit -n 10000
score([1, 2, 3, 4])
```
```
310 µs ± 658 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
```
%%timeit -n 10000
score_raw([1, 2, 3, 4])
```
```
39.4 µs ± 136 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
The results seem to be identical:
```
np.testing.assert_allclose(score([1, 2, 3, 4]), score_raw([1, 2, 3, 4]))
```
Please share your thoughts about this refactoring. | 0easy
|
Title: [FEATURE-REQUEST] Support Google Colab installation without runtime restart
Body: **Description**
When running `pip install vaex`, the Colab runtime needs to be restarted (see attachments).
**Is your feature request related to a problem? Please describe.**
The problem is that some dependencies update matplotlib to a version later than 3.2.2 (the version that runs on Google Colab).
**Additional context**
[Here is a Google Colab that shows the problem and a workaround](https://colab.research.google.com/drive/19O8H2n947R4g5PEMKrOHvIb7pvec3VPg?usp=sharing)
Example of runtime restart needed:
<img width="1197" alt="Screenshot 2022-09-20 at 15 27 03" src="https://user-images.githubusercontent.com/18228395/191348197-88ec9c97-d03f-4eb5-b612-2aa3522aa1c7.png">
Example of installing the packages in a way matplotlib is not updated:
<img width="1547" alt="Screenshot 2022-09-20 at 15 27 56" src="https://user-images.githubusercontent.com/18228395/191348378-4da6ee8a-1eb5-4b33-b920-6df407a55052.png"> | 0easy
|
Title: Validate mandatory keys for API spec
Body: ## Description
We need to ensure that the API spec has certain mandatory keys in order to work properly.
The first mandatory key that needs to be checked is the key `api`. The specification should start with it.
https://github.com/scanapi/scanapi/blob/master/scanapi/__init__.py#L68
Under the key `endpoints`, we need to ensure each entry has **at least** a `name` and a `requests` key.
https://github.com/scanapi/scanapi/blob/master/scanapi/tree/endpoint_node.py#L79
Under the key `requests`, we need to ensure each entry has **at least** a `name` and a `path` key.
https://github.com/scanapi/scanapi/blob/master/scanapi/tree/request_node.py#L106
This is an example of a minimal possible structure:
```yaml
api:
endpoints:
- name: scanapi-demo
requests:
- name: health
path: http://demo.scanapi.dev/api/health/
```
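A minimal sketch of such a validation pass (the class and function names here are hypothetical; ScanAPI's real errors live in `scanapi/errors.py`):

```python
class MissingMandatoryKeyError(Exception):
    """Raised when a mandatory key is absent from the API spec."""

def _require(node, keys, scope):
    missing = [key for key in keys if key not in node]
    if missing:
        raise MissingMandatoryKeyError(f"Missing {missing!r} in {scope}")

def validate_spec(spec):
    """Check the mandatory keys described above, top-down."""
    _require(spec, ["api"], "root")
    api = spec["api"] or {}
    for endpoint in api.get("endpoints", []):
        _require(endpoint, ["name", "requests"], "endpoint")
        for request in endpoint["requests"]:
            _require(request, ["name", "path"], "request")
```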
If any of these mandatory keys is missing, an [error](https://github.com/scanapi/scanapi/blob/master/scanapi/errors.py) should be raised. | 0easy
|
Title: Add visualization for Optuna optimization results
Body: In #332 we added support for Optuna. It can be used for tuning:
- Extra Trees
- Random Forest
- Xgboost
- LightGBM
- CatBoost
It would be nice to have a visualization of the Optuna optimization. Optuna study results are saved as joblib files in the `optuna` dir. | 0easy
|
Title: feature request : Batched scale_tril
Body: I would like to do batch transformations using `transforms.LowerCholeskyAffine`. So far I can do a "batched" transform as long as the `loc` and `scale_tril` are the same however if I try to use different `loc`s with the same `scale_tril` then I get the following error message.
>ValueError: Only support 2-dimensional scale_tril matrix. Please make a feature request if you need to use this transform with batched scale_tril. | 0easy
|
Title: [QOL] Add tracking responses per endpoint
Body: Make it easier for testers to sift through endpoint responses. Probably easy to add it to the `stats` module | 0easy
|
Title: Marketplace - search results - change margins between chips and section title
Body:
### Describe your issue.
Change margins to 36px
<img width="1460" alt="Screenshot 2024-12-13 at 21 13 49" src="https://github.com/user-attachments/assets/e0cf638f-256e-4f86-81c2-322471f62f87" />
| 0easy
|
Title: Upgraded from version 4.0.1. to 4.2.0, now KeyedVectors.most_similar() throws error
Body: <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
Using KeyedVectors.most_similar() as follows:
learn_language.most_similar(positive=[class_vectors[classname]], negative=[class_vectors[aclassname]], topn=max_topn)
where class_vectors[X] are some vectors created by averaging a number of KeyedVectors (of size 75, i.e. shape (75,)) from the same KeyedVectors object.
This WORKS perfectly well in version 4.0.1, but in version 4.2.0 it throws the following error:
```
learn_language.most_similar(positive=[class_vectors[classname]], negative=[class_vectors[aclassname]], topn=max_topn)
  File "/usr/local/lib/python3.9/site-packages/gensim/models/keyedvectors.py", line 842, in most_similar
    mean = self.get_mean_vector(keys, weight, pre_normalize=True, post_normalize=True, ignore_missing=False)
  File "/usr/local/lib/python3.9/site-packages/gensim/models/keyedvectors.py", line 512, in get_mean_vector
    mean += weights[idx] * key
ValueError: operands could not be broadcast together with shapes (100,) (75,) (100,)
```
#### Steps/code/corpus to reproduce
```python
import gensim
import numpy as np
learn_language = gensim.models.KeyedVectors.load("embeddings.english", mmap="r")
learn_language.most_similar(positive=[learn_language["great"]], negative=[learn_language["terrible"]], topn=100)
learn_language.most_similar(positive=[learn_language["great"]], negative=[learn_language["terrible"]], topn=100)
File "/usr/local/lib/python3.9/site-packages/gensim/models/keyedvectors.py", line 842, in most_similar
mean = self.get_mean_vector(keys, weight, pre_normalize=True, post_normalize=True, ignore_missing=False)
File "/usr/local/lib/python3.9/site-packages/gensim/models/keyedvectors.py", line 512, in get_mean_vector
mean += weights[idx] * key
ValueError: operands could not be broadcast together with shapes (100,) (75,) (100,)
```
#### Versions
>>> import platform; print(platform.platform())
Windows-10-10.0.19041-SP0
>>> import sys; print("Python", sys.version)
Python 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.22.4
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.8.1
>>> import gensim; print("gensim", gensim.__version__)
gensim 4.2.0
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 0 | 0easy
|
Title: Hide snapshot report summary if there is nothing to report
Body: **Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
No point in outputting a summary when there is nothing to report
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
Patched this myself by modifying the `pytest_terminal_summary` hook to the following,
will happily make a PR if this is wanted.
```py
with __terminal_color(config):
lines = terminalreporter.config._syrupy.report.lines
first = next(lines)
if first:
terminalreporter.write_sep("-", gettext("snapshot report summary"))
terminalreporter.write_line(first)
for line in lines:
terminalreporter.write_line(line)
```
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| 0easy
|
Title: Monitoring: add Flower
Body: Add Flower as a new service in Docker Compose and maybe in Pulumi configuration. | 0easy
|
Title: Add framework adaptor option for Django
Body: A good heuristic for what a Django controller is is that it has request as the first argument. This is easy enough to implement in an hour by looking at https://github.com/python-security/pyt/pull/52/files though, if you're interested. | 0easy
|
Title: [🐛 BUG] Setting width/height with CSS units not working for metrics
Body: ### What went wrong? 🤔
Setting width and height for metrics with CSS units is not working. This won't change the width/height as expected.
This works when just putting a plain number. It took me a while to figure this out (I went looking at the actual PR).
If this fix could be backported in a patch release, that would be great.
### Expected Behavior
This should work with all CSS units supported by Taipy. Or the documentation should change to reflect this behavior.
### Steps to Reproduce Issue
```python
from taipy.gui import Gui
value = 50
page = """
<|{value}|metric|>
## Works
<|{value}|metric|width=300|height=300|>
## Height not working, Width not working
<|{value}|metric|width=150px|height=300px|>
<|{value}|metric|width=300px|height=300px|>
## Height not working, Width not working
<|{value}|metric|width=150rem|height=150rem|>
<|{value}|metric|width=300rem|height=150rem|>
"""
Gui(page).run()
```
### Screenshots

### Version of Taipy
4.0
### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | 0easy
|
Title: Add docs on ArgoCD deep links
Body: Show how to use ArgoCD [deep links](https://argo-cd.readthedocs.io/en/stable/operator-manual/deep_links/) with Robusta.
Docs should go in https://docs.robusta.dev/master/setup-robusta/gitops/argocd.html#configuring-argo-links | 0easy
|
Title: [INFRA] Move to GHA
Body: Travis CI is shutting down in a few weeks; we should move to GitHub Actions:
- [x] runs on PRs + master
- [x] triggerable only by contributors
- [x] test via the new docker test, incl. w/ connectors | 0easy
|
Title: AI Music Generator Block Fails, but Replicate shows Successful Generation
Body: > The AI Music Generator block showed that it failed after 3 attempts, but Replicate shows that it created 3 audio files
This is a user reported issue. It is unknown whether this happens every time or intermittently.<br><br>**Steps to Reproduce:**
1. Add the AI Music Generator Block to an Agent
2. Run the Agent
3. Observe output being Error as pictured below
4. Check the generation logs on the replicate platform
<img src="https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/2e989dd1-479b-40b6-bcbc-55872d9ed1e6/01e97413-6422-431a-9e86-5ee59f0e1390?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi8yZTk4OWRkMS00NzliLTQwYjYtYmNiYy01NTg3MmQ5ZWQxZTYvMDFlOTc0MTMtNjQyMi00MzFhLTllODYtNWVlNTlmMGUxMzkwIiwiaWF0IjoxNzM1NTUzOTk3LCJleHAiOjMzMzA2MTEzOTk3fQ.HtOvHZd5j3Ij2Zuq6rICFUJU5x6WV36AJLSiFG3hEBs " alt="Screenshot_2024-12-29_at_7.40.16_PM.png" width="637" data-linear-height="308" />
<img src="https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/b2de5ffc-e818-4598-bc32-dd7f59c9ce53/65454dca-ee40-43dc-8d70-5e06911a7454?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi9iMmRlNWZmYy1lODE4LTQ1OTgtYmMzMi1kZDdmNTljOWNlNTMvNjU0NTRkY2EtZWU0MC00M2RjLThkNzAtNWUwNjkxMWE3NDU0IiwiaWF0IjoxNzM1NTUzOTk3LCJleHAiOjMzMzA2MTEzOTk3fQ.Cvmv32wSDK7JyltEAAokwRnRNA3EVo3JwITPcgWWOw8 " alt="Screenshot_2024-12-29_at_7.41.18_PM.png" width="761" data-linear-height="151" /> | 0easy
|
Title: xontrib load: add option to skip not installed xontribs
Body: It would be cool to have the ability to load xontribs without errors about ones that are not installed.
For example, this is useful when you run xonsh in a virtual environment and xonsh tries to run `~/.xonshrc`: because the virtual env has no xontribs, xonsh produces errors. Using a skip-missing-xontribs option in `.xonshrc`, you can prevent these errors.
## Current Behavior
Show error and return code 1:
```xsh
xontrib load whole_word_jumping xontrib_not_installed
# The following xontribs are enabled but not installed:
# ['xontrib_not_installed']
# Please make sure that they are installed correctly by checking https://xonsh.github.io/awesome-xontribs/
# Return code 1
```
## Expected Behavior
Add an option `-s` ("(S)ilent" or "(S)kip missing") and return zero code without errors:
```xsh
xontrib load -s whole_word_jumping xontrib_not_installed
# Return code 0
```
## Current workaround
[xontrib-rc-awesome](https://github.com/anki-code/xontrib-rc-awesome/blob/5105050cf2f7318eb8b011f1f7a5897574e6ca05/xontrib/rc_awesome.xsh#L68-L83) has this workaround:
```xsh
from xonsh.xontribs import get_xontribs
_xontribs_installed = set(get_xontribs().keys())
_xontribs_to_load = ('back2dir', 'prompt_bar', 'pipeliner', 'sh',)
xontrib load @(_xontribs_installed.intersection(_xontribs_to_load))
```
## Start point
https://github.com/xonsh/xonsh/blob/3716f3da8636e41482358b61779e274aade8ecf7/xonsh/xontribs.py#L198
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Page listing social connections should use email addresses or usernames, instead of names
Body: Take the page at the URL `/accounts/3rdparty/` (URL name `socialaccount_connections` configured to `allauth.social.views.connections`). If you load that page, it will list all the social connections of the current account, like this:
- ( ) John Smith **Google**
- ( ) John Smith **Facebook**
- ( ) John Smith **Google**
It lists each account by displaying the name of the account, followed by the provider name. Unfortunately, the name of the account is often the first name and last name of the person associated with that account. A user may have multiple accounts with Google under the name "John Smith", and may not be able to discern which one is being referred to here. Even if the user has not connected all their Google accounts that have the name "John Smith", they still cannot tell which one of their Google accounts with the name "John Smith" is being referred to here. To put it succinctly, names are not unique identifiers for Google.
Instead, I suggest using an identifier that is unique to that provider, for example, an email address or a username, like this:
- ( ) **Google** account [email protected]
- ( ) **Facebook** account johnsmithy
- ( ) **Google** account [email protected]
I propose that the method `get_identifier` be added to the `ProviderAccount` class:
```python
class ProviderAccount(object):
    def get_identifier(self):
        """
        Returns a string identifier that uniquely identifies this account
        among the provider's accounts, preferably one that is meaningful
        to human beings. This is usually a username or an email address.
        """
        return None
```
I also propose modifying the `to_str` method on that class so that it uses the `get_identifier` method, falling back to the previously implemented return value if `get_identifier` does not return a truthy value.
```python
def to_str(self):
    return self.get_identifier() or self.get_brand()["name"]
```
Then providers can override the `get_identifier` method to return an email address or username, and `to_str` will continue to work even for providers that haven't implemented the `get_identifier` method yet.
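To illustrate, a provider override might look like this (a standalone sketch: the real base class lives in allauth's provider code, and the `extra_data` field used below is an assumption about the provider response shape):

```python
# Minimal stand-in mirroring the proposal above, for illustration only.
class ProviderAccount:
    def __init__(self, extra_data):
        self.extra_data = extra_data

    def get_identifier(self):
        return None  # providers override this

    def get_brand(self):
        return {"name": "Provider"}

    def to_str(self):
        # Falls back to the brand name when no identifier is available.
        return self.get_identifier() or self.get_brand()["name"]


class GoogleAccount(ProviderAccount):
    """Hypothetical override: Google responses carry the account email."""

    def get_identifier(self):
        return self.extra_data.get("email")
```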
| 0easy
|
Title: `test_resources.py` is too large and can be broken up
Body: The `test_resources.py` module is too large (~1.8k lines). It could be broken up into its own package, with sub-modules.
Additionally, the `ModelResourceTest` test case is too large and should be broken up.
For example:
```
test_resources/
test_modelresource/
test_export.py
test_import.py
...
```
| 0easy
|
Title: Payment API call fails
Body: Using v免签 (VMQ), the call to the payment API fails.

| 0easy
|
Title: Issue is still there Exception: SNlM0e value not found. Double-check __Secure-1PSID value or pass it as token='xxxxx'.
Body: Exception: SNlM0e value not found. Double-check __Secure-1PSID value or pass it as token='xxxxx'. | 0easy
|
Title: Fix capitalisation of Markdown (prefer markdown) as per Vale suggestion
Body: Vale recommends that the word Markdown is not capitalised except at the start of a sentence.
<img width="549" alt="image" src="https://github.com/user-attachments/assets/a2edcb7f-74c6-4622-b833-3bca9c4b51dd">
Task: Search across Vizro docs (vizro-core and vizro-ai) for instances of `Markdown` and replace with discretion.
```markdown
<!-- vale off -->
Legitimate use of "Markdown" rather than "markdown" (within a sentence, since Vale won't flag sentence case as an issue)
<!--vale on-->
``` | 0easy
|
Title: Bug: Serializing TorchTensor with grad fails
Body: **Description:**
When a `TorchTensor` requires grad it cannot be serialized to JSON.
This is particularly bad because:
- outputs of pytorch models require grad
- FastAPI serilaizes to JSON
-> model serving is b0rked
**How to reproduce:**
```python
from docarray import BaseDoc
from docarray.typing import TorchTensor
import torch
class MyDoc(BaseDoc):
tens: TorchTensor
t1 = torch.rand(512, requires_grad=True)
doc = MyDoc(tens=t1)
doc.json()
```
```bash
TypeError: Type is not JSON serializable: TorchTensor
```
But the following works:
```python
from docarray import BaseDoc
from docarray.typing import TorchTensor
import torch
class MyDoc(BaseDoc):
tens: TorchTensor
t1 = torch.rand(512, requires_grad=True)
doc = MyDoc(tens=t1.detach())
doc.json()
```
**TODO**:
- Fix this, probably by calling `detach()` in the serialization logic
- check if other serialization options are also affected
- what about tensorflow? | 0easy
|
Title: Switch WebSockets legacy implementation to Sans-I/O
Body: Given https://github.com/aaugustin/websockets/issues/975 and https://github.com/aaugustin/websockets/issues/1310#issuecomment-1479780072 we should try to follow @aaugustin's recommendation.
We can discuss if we should deprecate the current `WebSocketProtocol`, or if we should replace it, and that's it.
<!-- POLAR PLEDGE BADGE START -->
> [!IMPORTANT]
> - We're using [Polar.sh](https://polar.sh/encode) so you can upvote and help fund this issue.
> - We receive the funding once the issue is completed & confirmed by you.
> - Thank you in advance for helping prioritize & fund our backlog.
<a href="https://polar.sh/encode/uvicorn/issues/1908">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/encode/uvicorn/issues/1908/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/encode/uvicorn/issues/1908/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
| 0easy
|
Title: Move pytest config to pyproject.toml
Body: Suggestion:
By moving the pytest configuration to `pyproject.toml` it would be possible to remove the `pytest.ini` file.
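For illustration, the moved configuration would live under the `[tool.pytest.ini_options]` table; the option values below are placeholders, and the real ones should be copied over from `pytest.ini`:

```toml
[tool.pytest.ini_options]
# Placeholder values — copy the actual settings from pytest.ini.
minversion = "6.0"
addopts = "-ra"
testpaths = ["tests"]
```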
Note that the [pyproject support](https://docs.pytest.org/en/stable/customize.html#pyproject-toml) was included in pytest version 6.0 and I'm not sure if there is a specific reason (e.g. dependency conflict) that makes it impossible to update pytest right now. | 0easy
|
Title: Feature: endless asyncio tasks hypervisor
Body: Now we have some endless `asyncio.Tasks` consuming messages like [here](https://github.com/airtai/faststream/blob/main/faststream/nats/subscriber/usecase.py#L309) or in kafka/confluent/redis as well.
They are not expected to fail, but sometimes things happen anyway, and users don't even see an exception about it. Thus, we should log these task failures and try to restart the tasks.
* [ ] pass `broker.logger` to subscriber through setup method
* [ ] add task done callback to log an exception and restart it
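A minimal sketch of such a hypervisor using plain asyncio (the names and the restart policy here are assumptions, not FastStream code):

```python
import asyncio
import logging

logger = logging.getLogger("hypervisor")

def supervise(coro_factory, *, log=logger):
    """Run an endless consumer task; log and restart it if it dies with an exception."""
    task = asyncio.ensure_future(coro_factory())

    def _on_done(t: asyncio.Task) -> None:
        if t.cancelled() or t.exception() is None:
            return  # deliberate shutdown or clean exit: nothing to restart
        log.error("consumer task failed: %r; restarting", t.exception())
        nonlocal task
        task = asyncio.ensure_future(coro_factory())
        task.add_done_callback(_on_done)

    task.add_done_callback(_on_done)
    return lambda: task  # handle to the currently live task
```

The `add_done_callback` hook is where both requested behaviours live: the failure gets logged instead of disappearing, and a fresh task is scheduled in its place.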
It should help users diagnose the problem, create an issue about the accident, and improve the system's sustainability | 0easy
|
Title: Suggest add in docs the possibility to use selenium special Keys
Body: I'm new to using GitHub tools.
We should add in the docs that we can use selenium special Keys:
e.g. (return key):
from selenium.webdriver.common.keys import Keys
ElementAPI.type(Keys.RETURN)
or add a reference to selenium docs for special keys | 0easy
|
Title: [Feature Request] App Window: standardize on pywebview[qt]
Body: ### Description
This request involves the window icon for local app mode.
According to the [pyproject.toml](https://github.com/rio-labs/rio/blob/main/pyproject.toml), pywebview 4.2 is installed on Windows with cefpython and current (5.3.2) is installed on Linux with QT. However, of these two only the QT backend supports runtime icons, per [the pywebview documentation](https://pywebview.flowrl.com/examples/icon.html).
### Suggested Solution
Standardizing on the QT backend would allow for identical support on Windows, Mac, and Linux, with app icon support. QTPY is arguably better supported in general, as cefpython3 has not been updated in years.
### Alternatives
N/A - see [NiceGUI pull 634](https://github.com/zauberzeug/nicegui/pull/634)
### Additional Context
The [window] subinstall is broken on Ubuntu 22.04, as the unsupported pywebview 4.2 is installed in addition to current. If this feature request is not accepted, please update pyproject.toml line 76 to read:
` "pywebview~=4.2 ; sys_platform == 'win32'",`
Separately, cefpython3 is [available for Mac on Python 3.10]( https://pypi.org/project/cefpython3-v66.1-for-python-3.10.2-unofficial/).
### Related Issues/Pull Requests
_No response_ | 0easy
|
Title: Create Dockerfile, publish images
Body: It would be really useful if we had a simple Dockerfile containing the build instructions for the library. Then, we could create an image which would be easy to distribute and run for the volunteer participants without setting up the environment themselves. | 0easy
|
Title: No way to execute an arbitrary command on a pipeline
Body: Pipeline.execute() shadows ConnectionsPool.execute() and RedisConnection.execute().
As a result, one cannot execute arbitrary commands in a pipeline.
I use the RediSearch module and need to execute multiple commands like "FT.DEL" in a row, but cannot use pipelining.
In redis-py, the method to execute a command is called Redis.execute_command(), which ensures there is no clash.
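A toy illustration of that naming split — queuing via an unambiguous `execute_command` name, flushing via `execute` — as a sketch, not actual aioredis or redis-py code:

```python
class ToyPipeline:
    """Toy model of redis-py's split between queuing commands and running them."""

    def __init__(self):
        self._queue = []

    def execute_command(self, *args):
        # queue one arbitrary command, e.g. ("FT.DEL", "idx", "doc1");
        # the distinct name means it cannot clash with execute()
        self._queue.append(args)
        return self

    def execute(self):
        # flush the queued commands (a real client would send them to the server)
        queued, self._queue = self._queue, []
        return queued

pipe = ToyPipeline()
pipe.execute_command("FT.DEL", "idx", "doc1").execute_command("FT.DEL", "idx", "doc2")
print(pipe.execute())  # [('FT.DEL', 'idx', 'doc1'), ('FT.DEL', 'idx', 'doc2')]
```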
Happy to be told that there is a way of doing this in aioredis.
Thanks
| 0easy
|
Title: [BUG] show_menu template tags pollute global template context
Body: ## Description
When using the `show_menu` or `show_menu_below_id` template tags the context variables it uses internally are added to the global context.
## Steps to reproduce
1. Have a page with one of the `show_menu` family of template tags.
2. Dump/debug the template context after the tag
3. Notice the `children, template, from_level, to_level, extra_inactive, extra_active` keys are littering the context and potentially clobbering existing variables.
## Expected behaviour
Template tags like these are expected to use the context stack to push their variables so they don't leak out.
## Actual behaviour
Context variables for the menu tag clobber the global context.
## Additional information (CMS/Python/Django versions)
django-cms==3.11.1
django==3.2.20
## Do you want to help fix this issue?
Fixing this changes behavior and potentially breaks dependents, so I don't know if this can be fixed.
* [ ] Yes, I want to help fix this issue and I will join #workgroup-pr-review on [Slack](https://www.django-cms.org/slack) to confirm with the community that a PR is welcome.
* [ ] No, I only want to report the issue. | 0easy
|
Title: Remove pkg_resources usage
Body: Importing pkg_resources raises a deprecation warning now so it makes sense to remove it. We can even use importlib without checking if it's available as we no longer support 3.7. I see only `pkg_resources.parse_version` and `pkg_resources.iter_entry_points` used in Scrapy, I think they can be replaced by things from `packaging` and `importlib.metadata` respectively. | 0easy
|
Title: Upgrade manual_legend to match line/marker style of plotted points
Body: **Describe the solution you'd like**
Upgrade `manual_legend` to match line/marker style of plotted points by enabling simple circles and lines with Line2D patches.
**Is your feature request related to a problem? Please describe.**
Right now `manual_legend` draws the patches as rectangles and cannot take into account the line or scatter plot properties (e.g. line style or marker style). It would be nice to add [Line2D patches](https://matplotlib.org/gallery/text_labels_and_annotations/custom_legends.html) that would enable the legend to show simple circles or lines to match the marker/line style the user has selected for the plotted points.

_Note: This issue is a follow-on to #564, where @bbengfort added a `manual_legend` to the newly created `yellowbrick.draw` module, that enabled us to update the visualizers that implement some kind of scatter plot and have an `alpha` param that affects the opacity/translucency of the plotted points. Now with the `manual_legend`, the colors in the legend no longer become translucent when `alpha` is decreased._ | 0easy
|
Title: Add support for synchronous RPC calls
Body: RPC calls are asynchronous as `_execute_monkey_patch` doesn't affect them. This is annoying because all the other calls in supabase-py are synchronous. | 0easy
|
Title: Implement method item_by_path(path) and maybe others for uia_controls.ToolbarWrapper
Body: This is suggested in Gitter chat (Russian room). We have such method for `uia_controls.MenuWrapper`. But some menus are implemented as toolbar with drop-down submenus. It makes sense to add similar methods to choose such submenu items. | 0easy
|
Title: Get costs at any given iteration
Body: I want to visualise the convergence rate of k-modes (and k-prototypes, eventually) by plotting the cost function at each iteration. I've had a look through `kmodes.py` and I can't see a way of extracting that information directly. Could you suggest a way of going about this?
Thanks,
Henry | 0easy
|
Title: GRADIO UI
Body: Adding Gradio for a better GUI for first-time users | 0easy
|
Title: [BUG] Could not retrieve stock videos: generate_subtitles() missing 2 required positional arguments: 'sentences' and 'audio_clips'
Body: **Describe the bug**
Every time I try to generate a video it shows this error:
Could not retrieve stock videos: generate_subtitles() missing 2 required positional arguments: 'sentences' and 'audio_clips'
I'm using my local machine with ubuntu.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: Linux
**Additional context**
I filled all the .env with all the providers but didn't work. | 0easy
|
Title: create user-facing cleanlab.data_valuation module
Body: Move the following method to the cleanlab.data_valuation module, such that it can be a standalone user-facing method:
[cleanlab.datalab.internal.issue_manager.data_valuation._knn_shapley_score](https://docs.cleanlab.ai/master/_modules/cleanlab/datalab/internal/issue_manager/data_valuation.html#DataValuationIssueManager.find_issues) | 0easy
|
Title: Trace log level logs arguments twice for embedded arguments
Body: Using the following code:
```
*** Test Cases ***
Test
    Keyword With Arguments    1    2
    Keyword With Embedded 1
    Keyword With Mixed 1    2

*** Keywords ***
Keyword With Arguments
    [Arguments]    ${arg}    ${arg2}
    Log    ${arg}

Keyword With Embedded ${arg}
    Log    ${arg}

Keyword With Mixed ${arg}
    [Arguments]    ${arg2}
    Log    ${arg}
```
and with log level trace:
```
robot --loglevel TRACE test.robot
```
Arguments are logged twice:

It could be because we have embedded variables, but there is no difference in, or any indication of, the argument's source in the log. | 0easy
|
Title: [DOCS] CosmosDB notebook
Body: We should have an explicit cosmosdb tutorial notebook series:
- [ ] connection info, incl pk usage
- [ ] df -> cosmos
- [ ] query -> cosmos -> graph
- [ ] node/edge enriching
- [ ] and viz, of course :)
An internal hub notebook sketches a bunch of these from before we had the integration, which may be a good start | 0easy
|
Title: Add usage example for fasttext subword embedding in Pre-trained Word Embeddings tutorial
Body: Currently we have the fastText subword embeddings available, but they're not included in the pre-trained word embeddings tutorial, which is often the first tutorial people check out. It would be great to add example usage of fastText subword embeddings and show how they help represent out-of-vocabulary words.
The usage of fastText subword embeddings is to simply specify `load_ngrams` to True when creating [gluonnlp.embedding.FastText](http://gluon-nlp.mxnet.io/master/api/modules/embedding.html#gluonnlp.embedding.FastText). | 0easy
|
Title: Bug: Using 2 LoRA configs with `target_modules='all-linear'` leads to nested LoRA layers
Body: ### System Info
-
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_id)
config0 = LoraConfig(target_modules="all-linear")
config1 = LoraConfig(target_modules="all-linear")
model = get_peft_model(model, config0)#, adapter_name="default")
model.add_adapter("adapter1", config1)
print(model.base_model.model.model.decoder.layers[0].self_attn.k_proj)
```
prints:
```
lora.Linear(
(base_layer): lora.Linear(
(base_layer): Linear(in_features=16, out_features=16, bias=True)
(lora_dropout): ModuleDict(
(adapter1): Identity()
)
(lora_A): ModuleDict(
(adapter1): Linear(in_features=16, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(adapter1): Linear(in_features=8, out_features=16, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(lora_dropout): ModuleDict(
(default): Identity()
)
(lora_A): ModuleDict(
(default): lora.Linear(
(base_layer): Linear(in_features=16, out_features=8, bias=False)
(lora_dropout): ModuleDict(
(adapter1): Identity()
)
(lora_A): ModuleDict(
(adapter1): Linear(in_features=16, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(adapter1): Linear(in_features=8, out_features=8, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
)
(lora_B): ModuleDict(
(default): lora.Linear(
(base_layer): Linear(in_features=8, out_features=16, bias=False)
(lora_dropout): ModuleDict(
(adapter1): Identity()
)
(lora_A): ModuleDict(
(adapter1): Linear(in_features=8, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(adapter1): Linear(in_features=8, out_features=16, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
```
### Expected behavior
Instead of getting nested LoRA layers, the linear layers belonging to a LoRA layer should not be targeted by `all-linear`. | 0easy
|
Title: Removed deprecated function `get_all_colorscale_names()`
Body: Removed deprecated function `get_all_colorscale_names()` in favor of `list_all_colorscale_names()` | 0easy
|
Title: Remove mutable arguments
Body: As mentioned in openjournals/joss-reviews#1315, some functions have mutable arguments.
I found the following instances of this:
https://github.com/scikit-tda/kepler-mapper/blob/8a2a955e2b7302ee3c37e5c6a2c6ebfc1924a39b/kmapper/kmapper.py#L630-L632
These could easily be transformed into something like
```python
def f(param=None):
    param = [] if param is None else param
``` | 0easy
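To illustrate why the change matters, a quick stdlib demonstration of the shared-default pitfall and the suggested fix:

```python
def append_bad(item, bucket=[]):
    """The default list is created once at definition time and shared across calls."""
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    """The safe pattern: allocate a fresh list on every call."""
    bucket = [] if bucket is None else bucket
    bucket.append(item)
    return bucket

print(append_bad(1), append_bad(2))    # both show [1, 2]: state leaks between calls
print(append_good(1), append_good(2))  # [1] [2]
```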
|
Title: [DOCS]: Perplexity API Support - docs update
Body: ### Affected documentation section
_No response_
### Documentation improvement description
related to PR
https://github.com/AIHawk-co/Auto_Jobs_Applier_AI_Agent/pull/811
### Why is this change necessary?
_No response_
### Additional context
_No response_ | 0easy
|
Title: Dev Branch: Chandelier Exit offset direction
Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.**
I am using python 3.9 and pandas_ta 0.4.19b0 (dev branch).
**Do you have TA Lib also installed in your environment?**
no
There is a bug in the development branch of the volatility indicator "Chandelier Exit", see here:
https://github.com/twopirllc/pandas-ta/blob/438f3a97b71b26bc17efb1a204fb0e0fdc48e0aa/pandas_ta/volatility/chandelier_exit.py#L108
```python
direction = uptrend + downtrend
if direction.iloc[0] == 0:
    direction.iloc[0] = 1
direction = direction.replace(0, nan).ffill()

# Offset
if offset != 0:
    long = long.shift(offset)
    short = short.shift(offset)
    direction = short.shift(offset)
```
If you add a nonzero offset, the direction column is replaced with the shifted short column. In reality you would want the direction column to be shifted itself.
Here is what I believe to be the corrected version (only the last line is changed):
```python
direction = uptrend + downtrend
if direction.iloc[0] == 0:
    direction.iloc[0] = 1
direction = direction.replace(0, nan).ffill()

# Offset
if offset != 0:
    long = long.shift(offset)
    short = short.shift(offset)
    direction = direction.shift(offset)
``` | 0easy
|
Title: Feature request - Fixed Quantity Feed
Body: A useful feature where the user specifies the quantity to take in an orderbook and the callback returns the average price paid for that lot size. For example, if the offers in the order book for BTC-USD were the following:
Price | size
11,000 | 1
10,500 | 1
10,000 | 1
If the user specified quantity 3, the callback would return 10,500. | 0easy
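The walk-the-book calculation behind that number can be sketched in a few lines (names and error handling are illustrative, not part of any existing feed API):

```python
def average_fill_price(offers, quantity):
    """Average price to take `quantity` from the ask side of a book (sketch)."""
    remaining = quantity
    cost = 0.0
    for price, size in sorted(offers):  # ascending price = best offers first
        take = min(size, remaining)
        cost += take * price
        remaining -= take
        if remaining == 0:
            return cost / quantity
    raise ValueError("not enough liquidity for requested quantity")

book = [(11_000, 1), (10_500, 1), (10_000, 1)]
print(average_fill_price(book, 3))  # 10500.0, as in the example above
```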
|
Title: Add Field Description and example to Predefined Documents
Body: ### Initial Checks
- [X] I have searched Google & GitHub for similar requests and couldn't find anything
- [X] I have read and followed [the docs](https://docs.docarray.org) and still think this feature is missing
### Description
One nice thing about DocArray is the predefined Document types that we offer that allow people to use for multimodal applications.
Having rich field definitions, with description and examples would help when building services based on FastAPI with them.
### Affected Components
- [ ] [Vector Database / Index](https://docs.docarray.org/user_guide/storing/docindex/)
- [X] [Representing](https://docs.docarray.org/user_guide/representing/first_step)
- [X] [Sending](https://docs.docarray.org/user_guide/sending/first_step/)
- [ ] [storing](https://docs.docarray.org/user_guide/storing/first_step/)
- [X] [multi modal data type](https://docs.docarray.org/data_types/first_steps/) | 0easy
|
Title: Support custom converters that accept only `*varargs`
Body: see https://github.com/robotframework/robotframework/issues/4088#issuecomment-986367405 | 0easy
|
Title: Unused os.path.join in chat_photo.py
Body: I was looking at the aiogram code on github accidentally found that `os.path.join` is not used here, which is suspicious.
https://github.com/aiogram/aiogram/blob/df294e579f104e2ae7e9f37b0c69490782d33091/aiogram/types/chat_photo.py#L62-L63 | 0easy
|
Title: Parsing model: `Documentation.from_params(...).value` doesn't work
Body: Documentation.from_params(...) has problems with multiline values and empty lines.
This must be investigated more closely, and there must be more tests covering it.
https://github.com/robotframework/robotframework/blob/c80464b98f1efff1d966865eddec6120cf95ad18/src/robot/parsing/model/statements.py#L166
Revert this commit 0c5d7f1fcac4ef9bd0178df27528e985af7dafb6 from "custom parsers" #4614 because it should be fixed independently. | 0easy
|
Title: [feat] add marker strategy as a separate repo + example how to load it
Body: As with #91, the `marker` strategy has been officially removed along with all dependencies on `surya-ocr` and `marker`, as those packages imply the `GPL3` license. We changed the license to MIT. `marker`, `surya`, `tabled` and other GPL-licensed products may still be used, BUT only as 3rd-party libraries - included at the local-deployment level and not in the core product.
Looking forward to someone willing to add it as a 3rd-party plugin/example | 0easy
|
Title: Bug: Remove use of `assert` for pydantic import flow control
Body: ### Description
It was a poor choice to use assertions for the Pydantic import version discrimination refactor in #3347, as they can be optimized away.
We should revert to raising some other exception type where this was applied.
https://github.com/litestar-org/litestar/blob/fe73848ad25961f37109ad4049799a64b856c8d3/litestar/contrib/pydantic/pydantic_dto_factory.py#L26-L37
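The concrete hazard: assertions vanish under optimized bytecode, so an `assert`-based import guard silently stops checking anything. A minimal stdlib demonstration (not Litestar code):

```python
import subprocess
import sys

# the same assert-based guard, run normally and under `python -O`
code = "assert False, 'unsupported pydantic version'"
plain = subprocess.run([sys.executable, "-c", code]).returncode
optimized = subprocess.run([sys.executable, "-O", "-c", code]).returncode
print(plain, optimized)  # 1 0 -- under -O the assert is stripped away entirely
```

This is why raising a regular exception (which survives `-O`) is the safer choice here.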
### URL to code causing the issue
_No response_
### MCVE
```python
# Your MCVE code here
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
v2.8.2
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | 0easy
|
Title: Wrong value of default parameter in report
Body: #### I'm submitting a ...
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
This code produces wrong parameters values in report.
```python
@allure.step("abc")
def abc(first_param, default_param=10):
pass
def test_example():
abc(1, 2)
```
```json
"steps": [
{
"name": "abc",
"status": "passed",
"parameters": [
{
"name": "first_param",
"value": "1"
},
{
"name": "default_param",
"value": "10"
}
],
```
#### What is the expected behavior?
I expect to see `default_param` equals 2 instead of 10.
#### Please tell us about your environment:
- Test framework: [email protected]
- Allure adaptor: [email protected]
| 0easy
|
Title: Add Private conversation command
Body: Just one more command, like /gpt private.
The bot will then create a private channel, like a ticket channel, that only the user and the bot can access.
So in the env file, there will be two variables: the private role ID and the private channels category ID.
Only users with the private role can use the /gpt private command, and only the user who used the command can see the channel.
This is the beginning of a new utility for this bot. I will configure it to behave like a counselor of some sort down the line.
I'll be taking another feature to a new issue. | 0easy
|
Title: Better error message when missing __init__.py
Body: If a dotted path fails to import, we should check if the reason is a missing `__init__.py` file and suggest adding it. Current error:
```pytb
Traceback (most recent call last):
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/cli/io.py", line 20, in wrapper
fn(**kwargs)
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/cli/task.py", line 34, in main
dag, args = _custom_command(parser)
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/cli/parsers.py", line 405, in _custom_command
dag, args = load_dag_from_entry_point_and_parser(entry_point, parser,
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/cli/parsers.py", line 451, in load_dag_from_entry_point_and_parser
dag, args = _process_file_dir_or_glob(parser)
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/cli/parsers.py", line 392, in _process_file_dir_or_glob
dag = DAGSpec(dagspec_arg).to_dag()
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/spec/dagspec.py", line 424, in to_dag
dag = self._to_dag()
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/spec/dagspec.py", line 473, in _to_dag
process_tasks(dag, self, root_path=self._parent_path)
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/spec/dagspec.py", line 745, in process_tasks
task, up = task_dict.to_task(dag)
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/spec/taskspec.py", line 298, in to_task
return _init_task(data=data,
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/spec/taskspec.py", line 381, in _init_task
task.on_finish = dotted_path.DottedPath(on_finish,
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/util/dotted_path.py", line 47, in __init__
self._load_callable()
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/util/dotted_path.py", line 54, in _load_callable
self._callable = load_callable_dotted_path(self._spec.dotted_path)
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/util/dotted_path.py", line 169, in load_callable_dotted_path
loaded_object = load_dotted_path(dotted_path=dotted_path,
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/site-packages/ploomber/util/dotted_path.py", line 128, in load_dotted_path
module = importlib.import_module(mod)
File "/Users/Edu/miniconda3/envs/ml-testing/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked
ModuleNotFoundError: An error happened when trying to import dotted path "tests.quality.clean": No module named 'tests.quality'
``` | 0easy
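A sketch of how the check could work — walk the package chain of the dotted path and report directories lacking `__init__.py` (the helper name and signature are hypothetical, not Ploomber API):

```python
from pathlib import Path

def missing_inits(dotted_path, root="."):
    """For e.g. 'tests.quality.clean', list package dirs missing __init__.py."""
    missing = []
    current = Path(root)
    for part in dotted_path.split(".")[:-1]:  # all but the final module segment
        current = current / part
        if current.is_dir() and not (current / "__init__.py").exists():
            missing.append(str(current / "__init__.py"))
    return missing
```

On a `ModuleNotFoundError` like the one above, a non-empty result would justify appending a hint such as "did you forget to add `__init__.py`?" to the error message.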
|
Title: Add Support for Arbitrary Precision Arithmetic with BigFloat
Body: **Is your feature request related to a problem? Please describe.**
I tried running 'pysr' on a 1,000 row array with 4 integer input variables and one integer output variable - a Goedel Number.
From Mathematica:
```
GoedelNumber[l_List] := Times @@ MapIndexed[Prime[First[#2]]^#1 &, l]
```
E.g.
```
Data file:
# 7 1 5 8 6917761200000
julia> 2^7*3^1*5^5*7^8
6917761200000
```
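For reference, that data-file row checks out in plain Python, where integers are arbitrary precision; float64 only loses integer exactness past 2^53 (~9e15), so the representability question is about exactness rather than overflow (primes hardcoded for this sketch):

```python
def goedel_number(exponents, primes=(2, 3, 5, 7)):
    """Gödel number for up to four exponents, mirroring the Mathematica definition."""
    n = 1
    for p, e in zip(primes, exponents):
        n *= p ** e
    return n

print(goedel_number([7, 1, 5, 8]))       # 6917761200000, matching the data file row
print(float(2**53 + 1) == float(2**53))  # True: float64 stops being exact past 2**53
```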
The model returned:
```
Complexity Loss Score Equation
1 Inf NaN 0.22984365
```
I am just learning 'pysr' and maybe it's just 'user error'. However, Inf and NaN suggest that Goedel numbers may exceed Float64.
<img width="1430" alt="Screenshot 2022-12-01 at 8 33 44 AM" src="https://user-images.githubusercontent.com/3105499/205108196-cb0c09d2-ccd6-4a9f-978f-44b3032410ae.png">
**Describe the solution you'd like**
Not sure what happened, because the largest Goedel number in the input is:
1.6679880978201e+23
**Additional context**
I didn't see any parameters to set 'verbose' mode or 'debugging' information.
[GoedelTableFourParameters.txt](https://github.com/MilesCranmer/PySR/files/10134343/GoedelTableFourParameters.txt)
| 0easy
|
Title: Setting current_symbol on a Points layer does not make the next point added use this symbol.
Body: ### 🐛 Bug Report
The current_symbol parameter is expected to set the symbol for the next point in the layer; this is useful when writing callbacks to modify the appearance of points as they are added. The expected behaviour is confirmed by the description line:
`current_symbol : Symbol
Symbol for the next point to be added or the currently selected points.`
Inspecting the code, we see the following in `Points._set_data`
```
if len(self._symbol) > 0:
    new_symbol = copy(self._symbol[-1])
else:
    new_symbol = self.current_symbol
```
which states that the symbol used for the new point is always the previous symbol if one is available. Equivalent logic is present for border width which indicates this bug affects that property as well, but the other point attributes have different logic which works as expected.
I suggest that the new symbol should always be self.current_symbol if one is defined (which I believe is essentially all the time), only perhaps falling back to the previous symbol if appropriate. I could create a pull request but I would like some feedback on whether there is some reason for the current behaviour that I'm not aware of.
### 💡 Steps to Reproduce
(Using code)
1. Create a points layer with `symbol = 'square'`
2. add a point
3. deselect the point (if you skip this step then the next step will modify the current point)
4. set the current_symbol to another value, say `'diamond'`
5. add a new point
### 💡 Expected Behavior
The second point will still be a square when it is expected to be a diamond. The fact that current_symbol is `'diamond'` is of no help, although if we set it again with the point selected we will finally get a diamond point.
### 🌎 Environment
napari: 0.5.3
Platform: Windows-10-10.0.22631-SP0
Python: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:40:08) [MSC v.1938 64 bit (AMD64)]
Qt: 5.15.8
PyQt5: 5.15.9
NumPy: 2.0.2
SciPy: 1.14.1
Dask: 2024.8.2
VisPy: 0.14.3
magicgui: 0.8.3
superqt: 0.6.7
in-n-out: 0.2.1
app-model: 0.2.8
npe2: 0.7.7
OpenGL:
- GL version: 4.6.0 NVIDIA 560.76
- MAX_TEXTURE_SIZE: 32768
- GL_MAX_3D_TEXTURE_SIZE: 16384
Screens:
- screen 1: resolution 1920x1080, scale 1.0
- screen 2: resolution 2048x1152, scale 1.0
Optional:
- numba: 0.60.0
- triangle not installed
- napari-plugin-manager: 0.1.0
Settings path:
- C:\Users\dbarton\AppData\Local\napari\napari_dc6d252616b39fa2639fd92f910f7b9647b9362c\settings.yaml
Plugins:
- napari: 0.5.3 (81 contributions)
- napari-console: 0.0.9 (0 contributions)
- napari-manual-tracking: 0.0.1 (8 contributions)
- napari-skimage-regionprops: 0.10.1 (2 contributions)
- napari-svg: 0.2.0 (2 contributions)
- napari-tools-menu: 0.1.19 (0 contributions)
### 💡 Additional Context
_No response_ | 0easy
|
Title: get_all_images from content
Body: content.get_all_images should return
```
[{identifier: 'foo', path: 'http://.....jpg', thumb: 'http:///.thumb.jpg'}, ...]
```
to be used in template
| 0easy
|
Title: Add Python 3.8 support
Body: | 0easy
|
Title: Bug: passing a dict of object to the factory is ignored
Body: Hello!
When setting a `.build()` override for a dict where the value is a Pydantic model, it seems to be ignored.
**pydantic-factories version 1.6.2**
The following code reproduces the issue; note that I included a plain str dict to show that the problem occurs with model values only:
```
from pydantic import BaseModel
from pydantic_factories import ModelFactory, Use


class MyMappedClass(BaseModel):
    val: str


class MyClass(BaseModel):
    my_mapping_obj: dict[str, MyMappedClass]
    my_mapping_str: dict[str, str]


class MyClassFactory(ModelFactory[MyClass]):
    __model__ = MyClass


obj = MyClassFactory.build(
    my_mapping_obj={"foo": MyMappedClass(val="bar")},
    my_mapping_str={"foo": "bar"},
)

print(obj.json(indent=2))
```
Output is
```
{
  "my_mapping_obj": {
    "xNLsYhvhemQDptZfpDWz": {
      "val": "DzMmWPdDALDUROEsduqT"
    }
  },
  "my_mapping_str": {
    "foo": "bar"
  }
}
```
expected output is
```
{
  "my_mapping_obj": {
    "foo": {        <---
      "val": "bar"  <---
    }
  },
  "my_mapping_str": {
    "foo": "bar"
  }
}
```
Thank you! | 0easy
|
Title: Add `--console-width` value from CLI option to built-in variable `&{OPTIONS}`
Body: I heavily use the built-in keyword [`Log To Console`](https://robotframework.org/robotframework/latest/libraries/BuiltIn.html#Log%20To%20Console) with `no_newline` true and a carriage return (\r) to keep the user informed about what is going on and not to worry that the test is stuck.
To keep it clean and to refresh the line properly, I would need to know the ["console width"](https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#console-width) value.
But it does not seem available from the built-in variables.
Would it be possible to add it to built-in variable &{OPTIONS}, even if the default value is used?
For now, it seems I can only get it by setting an environment variable. | 0easy
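A sketch of that environment-variable workaround, with a terminal-size fallback (the variable name `ROBOT_CONSOLE_WIDTH` is an assumption the user would set alongside `--consolewidth`, not anything Robot Framework provides):

```python
import os
import shutil

def console_width(default=78):
    """Console width from a user-managed env var, else from the terminal."""
    value = os.environ.get("ROBOT_CONSOLE_WIDTH")  # hypothetical variable name
    if value and value.isdigit():
        return int(value)
    return shutil.get_terminal_size((default, 24)).columns
```

With `&{OPTIONS}` exposing the real value, this duplication would no longer be needed.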
|
Title: Issue with results of rma moving average
Body: **Describe the bug**
ta.rma() should behave exactly like ta.ema() - with the only difference being the weighting factor.
The implementation of ta.rma() in pandas_ta calculates the starting value differently than the implementation of ta.ema() does;
specifically, **the first value** in the ta.rma() result should be an SMA() average of the period. If the first value is different, the whole series is different...
**To Reproduce**
```python
df = pd.DataFrame({'close': [1.0,0,-1,0,2,-3,4,5,-8,7,8,9,10,-5,10,0]})
df["rma"]=df.ta.rma(close=df["close"], length=4)
df["ema"]=df.ta.ema(close=df["close"], length=4)
```
the first value that is not a NaN should be the same for both rma() and ema() - but it is not... | 0easy
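A reference sketch of Wilder's RMA seeded with the SMA of the first `length` values — not pandas_ta code, but something both columns above can be compared against:

```python
def rma(values, length):
    """Wilder's RMA: seed with the SMA of the first `length` values, then smooth."""
    seed = sum(values[:length]) / length
    out = [None] * (length - 1) + [seed]
    alpha = 1.0 / length  # RMA weighting; EMA would use 2 / (length + 1)
    for v in values[length:]:
        out.append(out[-1] + alpha * (v - out[-1]))
    return out

closes = [1.0, 0, -1, 0, 2, -3, 4, 5]
print(rma(closes, 4))  # first non-None value is sum([1, 0, -1, 0]) / 4 = 0.0
```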
|
Title: [ENH] Warn about Python 2.7
Body: Some environments default to Python 2.7, which is not expected to work in PyGraphistry, yet may lead to confusing errors.
As part of the import, we should do a Python 3 check and issue a warning. | 0easy
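A minimal sketch of such an import-time guard (the message wording and placement are assumptions, not PyGraphistry's actual check):

```python
import sys
import warnings

def warn_if_python2(version_info=sys.version_info):
    """Warn when imported under Python 2; returns True if a warning was issued."""
    if version_info < (3,):
        warnings.warn(
            "PyGraphistry requires Python 3; running under Python 2.7 is "
            "unsupported and may fail with confusing errors.",
            UserWarning,
        )
        return True
    return False

# would be called once at the top of the package's __init__ (hypothetical placement)
warn_if_python2()
```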
|
Title: Replace Stacked Barchart with Group Bar chart with shared axes
Body: Currently the stacked bar chart is used when 2 categorical variables and 1 quantitative variable are specified. This works when the aggregate function is sum, but the chart is not very interpretable when another aggregate type is used.
**Current Chart used**

**Alternatives**


While the heatmap chart is the more space efficient alternative, it is a bit harder to interpret compared to the grouped bar chart. We will implement the grouped bar chart, but will need to modify the grouped bar chart to share axes, similar to matplotlib's [shared axes grouped bar chart](https://matplotlib.org/3.1.1/gallery/lines_bars_and_markers/barchart.html).
Again for compactness reasons, we may only choose to apply this when the color attribute has fewer than 3 distinct values. | 0easy
|