Title: Python 3.7.3, gensim 3.8.1: UnboundLocalError: local variable 'doc_no2' referenced before assignment
Body: Client code:
```python
model = LogEntropyModel(corpus=data_corpus, normalize=True)
```
Referenced code:
https://github.com/RaRe-Technologies/gensim/blob/44ea7931c916349821aa1c717fbf7e90fb138297/gensim/models/logentropy_model.py#L115
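For what it's worth, looking at the referenced code, `doc_no2` is only assigned inside the second `for ... in enumerate(corpus)` loop, so the exception below presumably fires when that loop body never runs, e.g. when the corpus is empty or is a one-shot generator already exhausted by the first pass. A minimal sketch of a defensive rewrite (not the actual gensim patch):
```python
def initialize_safe(corpus):
    # Pre-assign sentinels so the final consistency check cannot raise
    # UnboundLocalError when the corpus yields no documents.
    doc_no = doc_no2 = -1
    for doc_no, bow in enumerate(corpus):
        pass  # first pass: gather global statistics
    for doc_no2, bow in enumerate(corpus):
        pass  # second pass: compute log-entropy weights
    if doc_no2 != doc_no:
        raise ValueError(
            "corpus changed size between passes (is it a one-shot generator?)")
```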
Exception thrown:
```
File "/anaconda3/lib/python3.7/site-packages/gensim/models/logentropy_model.py", line 76, in __init__
self.initialize(corpus)
File "/anaconda3/lib/python3.7/site-packages/gensim/models/logentropy_model.py", line 115, in initialize
if doc_no2 != doc_no:
UnboundLocalError: local variable 'doc_no2' referenced before assignment
``` | 0easy
|
Title: [DOCS] There's no mention of the datasets in the docs
Body: | 0easy
|
Title: Calculating Probabilities on Zero-Shot Learning
Body: ### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Describe the bug
Hi everyone, first of all I would like to thank @MaartenGr and all the contributors for this amazing project.
Recently I started looking at BERTopic as a method to classify some customer tickets into categories defined in the `zeroshot_topic_list` parameter. After fitting the model by calling `fit_transform`, my goal was to look up, for each document, the probability of it belonging to each of the topics (both predefined and generated ones).
`probs` is `None` after `fit_transform`, as expected and mentioned at the end of this page https://maartengr.github.io/BERTopic/getting_started/zeroshot/zeroshot.html#example. Therefore, I then called `transform` to get the probabilities.
Now I've got two questions I would like to answer:
- is this one the right approach to get the probabilities?
- most of the documents have almost the same (high) probability across all the topics. Does this mean the clustering didn't fit the data well? What do you suggest?
Again, I would like to thank you in advance for your effort, and I look forward to contributing as well if needed.
### Reproduction
```python
topic_model = BERTopic(
calculate_probabilities = True,
vectorizer_model = CountVectorizer(stop_words=default_stopwords + custom_stopwords),
ctfidf_model = ClassTfidfTransformer(reduce_frequent_words=True),
embedding_model = sentence_model,
min_topic_size = 50,
zeroshot_topic_list = taxonomy_list,
zeroshot_min_similarity = .80,
representation_model = KeyBERTInspired(),
verbose = True,
)
topics, _ = topic_model.fit_transform(global_ticket_descriptions, embeddings=embeddings)
_, probs = topic_model.transform(global_ticket_descriptions, embeddings=embeddings)
print(probs)
```
The output is:
```
2024-07-09 14:48:15,691 - BERTopic - Predicting topic assignments through cosine similarity of topic and document embeddings.
[[0.8463203 0.8880229 0.8636132 ... 0.8376566 0.8344382 0.82069004]
[0.92492086 0.8977871 0.91309047 ... 0.9031642 0.90693504 0.9101905 ]
[0.9009018 0.9072951 0.91581297 ... 0.9002963 0.8903491 0.8731015 ]
...
[0.85506856 0.9055494 0.8743948 ... 0.8490986 0.8607102 0.8375366 ]
[0.8783543 0.88571143 0.8983458 ... 0.8795974 0.87251174 0.8653137 ]
[0.8480984 0.8751351 0.8680049 ... 0.8383051 0.8333502 0.8458183 ]]
```
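For what it's worth, the values above are raw cosine similarities rather than normalized probabilities, which is why they all sit in a narrow high band. One way to turn each row into a comparable distribution (a sketch, not part of BERTopic's API) is a temperature-scaled softmax:
```python
import numpy as np

def to_distribution(sims: np.ndarray, temperature: float = 0.05) -> np.ndarray:
    # Row-wise softmax; a small temperature spreads out the narrow
    # 0.82-0.92 similarity range so differences become visible.
    z = sims / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)
```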
### BERTopic Version
0.16.2 | 0easy
|
Title: Integration of EthOS miner API in Antminer-Monitor-Master App
Body: Broadening the horizons of the Antminer-Monitor-Master App makes perfect sense now that GPU mining is becoming more and more popular.
At present, EthOS have over 50,000 miners working with their OS so their software is well developed and regularly maintained.
An example of the EthOS monitor can be found here: 48061f.ethosdistro.com
Example API:
{"rigs":{"91c5eb":{"condition":"high_load","version":"1.2.9","driver":"amdgpu","miner":"claymore-zcash","gpus":"6","miner_instance":"6","miner_hashes":"353.91 352.42 328.95 353.48 351.82 352.11","bioses":"xxx-xxx-xxx xxx-xxx-xxx xxx-xxx-xxx xxx-xxx-xxx xxx-xxx-xxx xxx-xxx-xxx","meminfo":"GPU0:01.00.0:Radeon RX 580:xxx-xxx-xxx:Samsung K4G80325FB:GDDR5:Polaris10\nGPU1:04.00.0:Radeon RX 580:xxx-xxx-xxx:Samsung K4G80325FB:GDDR5:Polaris10\nGPU2:07.00.0:Radeon RX 580:xxx-xxx-xxx:Samsung K4G80325FB:GDDR5:Polaris10\nGPU3:08.00.0:Radeon RX 580:xxx-xxx-xxx:Samsung K4G80325FB:GDDR5:Polaris10\nGPU4:0a.00.0:Radeon RX 580:xxx-xxx-xxx:Samsung K4G80325FB:GDDR5:Polaris10\nGPU5:0b.00.0:Radeon RX 580:xxx-xxx-xxx:Samsung K4G80325FB:GDDR5:Polaris10","vramsize":"8 8 8 8 8 8","drive_name":"Ultra USB 3.0 4C530001130119101042","mobo":"Z270-Gaming K3","lan_chip":"Qualcomm Atheros Killer E2500 Gigabit Ethernet Controller (rev 10)","connected_displays":"","ram":"4","rack_loc":"miner1","ip":"192.168.0.6","server_time":1515628268,"uptime":"287970","miner_secs":287901,"rx_kbps":"0.08","tx_kbps":"0.22","load":"4.3","cpu_temp":"50","freespace":4.3,"hash":2092.69,"pool":"","temp":"70.00 67.00 72.00 48.00 67.00 55.00","powertune":"7 7 7 7 7 7","voltage":"1.150 1.150 1.150 1.150 1.150 1.150","watts":null,"fanrpm":"3282 3282 3282 3282 3282 3282","core":"1450 1450 1400 1450 1450 1450","mem":"2200 2200 2000 2200 2200 2200"},"2af51a":{"condition":"mining","version":"1.2.9","driver":"nvidia","miner":"ewbf-zcash","gpus":"6","miner_instance":"6","miner_hashes":"528.00 518.00 513.00 515.00 513.00 512.00","bioses":"86.04.85.00.72 86.04.85.00.72 86.04.85.00.72 86.04.85.00.72 86.04.85.00.72 86.04.85.00.72","meminfo":"GPU0:01:00.0:GeForce GTX 1070 Ti:86.04.85.00.72:Unknown\nGPU1:02:00.0:GeForce GTX 1070 Ti:86.04.85.00.72:Unknown\nGPU2:04:00.0:GeForce GTX 1070 Ti:86.04.85.00.72:Unknown\nGPU3:05:00.0:GeForce GTX 1070 Ti:86.04.85.00.72:Unknown\nGPU4:07:00.0:GeForce GTX 1070 Ti:86.04.85.00.72:Unknown\nGPU5:08:00.0:GeForce GTX 1070 Ti:86.04.85.00.72:Unknown","vramsize":"8 8 8 8 8 8","drive_name":"Ultra Fit 4C531001570906104421","mobo":"PRIME Z270-A","lan_chip":"Intel Corporation Ethernet Connection (2) I219-V","connected_displays":"1920x1080","ram":"4","rack_loc":"miner3","ip":"192.168.1.95","server_time":1515628360,"uptime":"16121","miner_secs":16057,"rx_kbps":"0.05","tx_kbps":"0.35","load":"1.1","cpu_temp":"30","freespace":4,"hash":3099,"pool":"zhash.pro:3057","temp":"49 50 45 44 40 48","powertune":"2 2 2 2 2 2","voltage":"0.00 0.00 0.00 0.00 0.00 0.00","watts":"146 150 150 144 147 149","fanrpm":"3150 3150 3150 3150 3150 3150","core":"1809 1822 1809 1822 1822 1809","mem":"4374 4374 4374 4374 4374 4374"}},"total_hash":5191.69,"alive_gpus":12,"total_gpus":12,"alive_rigs":2,"total_rigs":2,"current_version":"1.2.9","avg_temp":54.585,"capacity":"100.0","per_info":{"claymore-zcash":{"hash":2093,"per_alive_gpus":6,"per_total_gpus":6,"per_alive_rigs":1,"per_total_rigs":1,"per_hash-gpu":"348.8","per_hash-rig":"2093.0","current_time":1515628434},"ewbf-zcash":{"hash":3099,"per_alive_gpus":6,"per_total_gpus":6,"per_alive_rigs":1,"per_total_rigs":1,"per_hash-gpu":"516.5","per_hash-rig":"3099.0","current_time":1515628434}}}
| 0easy
|
Title: Rerunning a report from the result screen doesn't hide code input
Body: It seems like the "don't generate code" command to nbconvert doesn't get sent when you ask for a rerun of a report which previously had this selected. | 0easy
|
Title: Add docker container(s) to help run examples
Body: **Is your feature request related to a problem? Please describe.**
The friction to getting the examples up and running is installing the dependencies. A docker container with them already provided would reduce friction for people to get started with Hamilton.
**Describe the solution you'd like**
1. A docker container, that has different python virtual environments, that has the dependencies to run the examples.
2. The container has the hamilton repository checked out -- so it has the examples folder.
3. Then using it would be (see the sketch after this list):
- `docker pull <image>`
- `docker run -it <image>` (rather than `docker start`, which only restarts existing stopped containers)
- activate the Python virtual environment
- run an example
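A rough sketch of such an image (base image, repo URL, and package pins are illustrative, not a final design):
```dockerfile
FROM python:3.10-slim

# Check the repository out so the examples folder ships with the image.
RUN apt-get update && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/* \
    && git clone https://github.com/DAGWorks-Inc/hamilton.git /hamilton

# One venv per example could be created here; a single shared one is shown.
RUN python -m venv /venvs/examples \
    && /venvs/examples/bin/pip install sf-hamilton pandas

WORKDIR /hamilton/examples
CMD ["/bin/bash"]
```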
**Describe alternatives you've considered**
Not doing this.
**Additional context**
This was a request from a Hamilton talk.
| 0easy
|
Title: Addition: A SimplifiedClassifierDashboard
Body: The default dashboard can be very overwhelming with lots of tabs, toggles and dropdowns. It would be nice to offer a simplified version. This can be built as a custom `ExplainerComponent` and included in custom, so that you could e.g.:
```python
from explainerdashboard import ClassifierExplainer, ExplainerDashboard
from explainerdashboard.custom import SimplifiedClassifierDashboard
explainer = ClassifierExplainer(model, X, y)
ExplainerDashboard(explainer, SimplifiedClassifierDashboard).run()
```
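A rough sketch of how it could be composed out of existing components (the sub-component class names here are assumptions based on explainerdashboard's component library and may not match exactly; the intended contents are listed below):
```python
import dash_bootstrap_components as dbc
from explainerdashboard.custom import (
    ExplainerComponent, ConfusionMatrixComponent, ImportancesComponent,
    ShapDependenceComponent, ShapContributionsGraphComponent)

class SimplifiedClassifierDashboard(ExplainerComponent):
    def __init__(self, explainer, title="Simplified dashboard", name=None, **kwargs):
        super().__init__(explainer, title=title, name=name)
        self.confusion = ConfusionMatrixComponent(explainer, **kwargs)
        self.importances = ImportancesComponent(explainer, **kwargs)
        self.dependence = ShapDependenceComponent(explainer, **kwargs)
        self.contributions = ShapContributionsGraphComponent(explainer, **kwargs)

    def layout(self):
        # Two rows of two cards, styled with dash_bootstrap_components.
        return dbc.Container([
            dbc.Row([dbc.Col(self.confusion.layout()),
                     dbc.Col(self.importances.layout())]),
            dbc.Row([dbc.Col(self.dependence.layout()),
                     dbc.Col(self.contributions.layout())]),
        ])
```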
It should probably include at least:
- Confusion matrix + one other model quality indicator
- Shap importances
- Shap dependence
- Shap contributions graph
And ideally would add in some `dash_bootstrap_components` sugar to make it look extra nice, plus perhaps some extra information on how to interpret the various graphs. | 0easy
|
Title: Include triple-slash Prisma Schema comments in the generated code
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma gives us the comments like these at generation time:
```prisma
model User {
  /// The user's email address
  email String
}
```
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should include these comments for both model definitions and field definitions:
```prisma
/// The User model
model User {
  /// The user's email address
  email String
}
```
```py
class User(BaseModel):
    """The User model"""

    email: str
    """The user's email address"""
```
## Implementation
- Add a `documentation: str` field to both `Model` and `Field` in `src/prisma/generator/models.py`
- Modify `src/prisma/generator/templates/models.py.jinja` to include the docstring if it exists and if not use a reasonable default
- Add comments to both `tests/test_generation/exhaustive/async.schema.prisma` and `sync.schema.prisma` and update the snapshots (see contributing documentation) | 0easy
|
Title: Injecting malicious code
Body: Add a test to check that the AI is not injecting malicious code outside the workspace @AntonOsika
[../../src/main.py] 👎
````python
def test_files_are_local():
    chat = textwrap.dedent(
        """
        All this will soon be over.
        [../../src/main.py]
        ```python
        print("Goodbye, World!")
        ```
        """
    )
````
| 0easy
|
Title: Add `robot.running.TestSuite.from_string` method
Body: It's already possible to do this:
```python
>>> from robot.api import get_model, TestSuite
>>> suite = TestSuite.from_model(get_model('''
... *** Test Cases ***
... Example
... Log Hello, world!
... '''))
```
Using `get_model` here is somewhat unnecessary and this would be more convenient:
```python
>>> from robot.api import TestSuite
>>> suite = TestSuite.from_string('''
... *** Test Cases ***
... Example
... Log Hello, world!
... ''')
```
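A minimal sketch of the proposed method, written here as a free function (it presumably just delegates to `get_model`, forwarding configuration as discussed below):
```python
from robot.api import TestSuite, get_model

def from_string(string, **config):
    # Any option accepted by get_model passes straight through,
    # so this never needs updating when get_model grows new options.
    return TestSuite.from_model(get_model(string, **config))

suite = from_string('''
*** Test Cases ***
Example
    Log    Hello, world!
''')
```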
The `from_string` method would need to accept configuration options that `get_model` accepts. Instead of accepting individual options, and needing to update them if `get_model` is changed, it's probably better to accept `**config` that is passed to `get_model`. The existing `TestSuite.from_file_system` works that way as well. | 0easy
|
Title: Improve the stability of results from end-to-end tests of Datalab with label error-detection for regression tasks
Body: When running Datalab for regression, the detected issues vary greatly across Python/OS versions in CI, making assertions about the issue masks difficult and slows development down.
We need to figure out how to guarantee more stable results across the platforms/python versions.
_Originally posted by @elisno in https://github.com/cleanlab/cleanlab/pull/902#discussion_r1418324737_
| 0easy
|
Title: New localizations
Body: Robot Framework got localization support in RF 6.0 (#4096). Let's use this issue to track new translations in RF 7.3.
- [ ] Arabic (PR #5356) | 0easy
|
Title: Support reading stdin in `readline` interactive mode (`TERM=dumb xonsh -i` hangs)
Body: ## Current Behavior
<!---
For general xonsh issues, please try to replicate the failure using `xonsh --no-rc --no-env`.
Short, reproducible code snippets are highly appreciated.
You can use `$XONSH_SHOW_TRACEBACK=1`, `$XONSH_TRACE_SUBPROC=2`, or `$XONSH_DEBUG=1`
to collect more information about the failure.
-->
```xsh
echo whoami | TERM=dumb xonsh -i --no-rc
# or the same
echo whoami | xonsh -st dumb -i # dumb is readline in fact
# or the same
echo whoami | xonsh -st readline -i
```
right now, this opens an interactive terminal which hangs indefinitely and does not respond to input.
xonsh has to be manually killed by another process.
## Expected Behavior
xonsh behaves the same as if TERM were unset, or set to a "normal" value (e.g. `linux` or `xterm-256color`).
## Workaround
Set any other `TERM=linux` or `TERM=xterm` or etc.
## xonfig
<details>
```xsh
+-----------------------------+----------------------+
| xonsh | 0.16.0 |
| Git SHA | b5ebcd3f |
| Commit Date | May 30 09:51:25 2024 |
| Python | 3.10.12 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.45 |
| shell type | none |
| history backend | json |
| pygments | 2.18.0 |
| on posix | True |
| on linux | True |
| distro | pop |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file | [] |
| UPDATE_OS_ENVIRON | False |
| XONSH_CAPTURE_ALWAYS | False |
| XONSH_SUBPROC_OUTPUT_FORMAT | stream_lines |
| THREAD_SUBPROCS | True |
| XONSH_CACHE_SCRIPTS | True |
+-----------------------------+----------------------+
```
</details>
## more info
note that setting `--no-env` makes the problem go away, mostly:
```
$ echo whoami | TERM=dumb xonsh -i --no-rc --no-env
Warning: Input is not a terminal (fd=0).
jyn@pop-os ~/src/xonsh title @ whoami
jyn
/usr/bin/stty: 'standard input': Inappropriate ioctl for device
jyn@pop-os ~/src/xonsh title @
```
i think this is a red herring though; it just means that `TERM=dumb` gets ignored by xonsh itself. `echo env | env -i TERM=dumb $(which xonsh) -i --no-rc ` still hangs indefinitely.
originally i thought this was related to [`DumbShell`](https://github.com/xonsh/xonsh/blob/38295a1dd941451362d2e3d14dd29d94ba4f54ce/xonsh/dumb_shell.py#L7) behaving differently than ReadLine shell, but i tried this diff and it didn't fix the issue:
```diff
diff --git a/xonsh/shell.py b/xonsh/shell.py
index f54c5350..4e60b6ae 100644
--- a/xonsh/shell.py
+++ b/xonsh/shell.py
@@ -208,6 +208,4 @@ class Shell:
             from xonsh.ptk_shell.shell import PromptToolkitShell as cls
-        elif backend == "readline":
+        elif backend == "readline" or backend == "dumb":
             from xonsh.readline_shell import ReadlineShell as cls
-        elif backend == "dumb":
-            from xonsh.dumb_shell import DumbShell as cls
         else:
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: [Datalab new issue type] detect ID columns in features
Body: [New Datalab issue type](https://docs.cleanlab.ai/master/cleanlab/datalab/guide/custom_issue_manager.html) called something like `identifier_column` that checks whether `features` contains a column that is entirely sequential integers.
That is, check whether there exists a column `i` in `features` such that `set(features[:, i]) == {c, c+1, ..., c+n-1}` for some integer `c`, where `n` = num-rows of `features`. Note we don't consider the ordering of the column in case the dataset was pre-shuffled.
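A direct way to implement that check (a sketch, not Datalab's eventual API):
```python
import numpy as np

def find_identifier_columns(features: np.ndarray) -> list:
    """Return indices of columns whose values are exactly a run of
    consecutive integers, in any order."""
    candidates = []
    for i in range(features.shape[1]):
        col = np.asarray(features[:, i])
        if not np.issubdtype(col.dtype, np.number):
            continue
        if np.any(col != np.floor(col)):
            continue  # not integer-valued
        vals = np.unique(col)  # sorted unique values
        if len(vals) == len(col) and vals[-1] - vals[0] == len(col) - 1:
            candidates.append(i)
    return candidates
```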
If the condition is met, then we say this dataset has the `identifier_column` issue (similar to the non-iid issue). The overall issue-summary-score can simply be binary = 0 if the dataset has this issue = 1 otherwise.
In this case the `info` attribute of Datalab should specify which column(s) seem like the identifier.
`datalab.issues` does not need to reflect this issue-type at all, since it is not something related to the rows of the dataset.
Motivation: ID columns are common in datasets, but generally should be left out of ML modeling or they risk harming the results. | 0easy
|
Title: Error boundaries
Body: To catch errors e.g. when invalid components are returned from the backed.
https://www.npmjs.com/package/react-error-boundary | 0easy
|
Title: [FEA] hypergraph Directly Follows edges for process mining
Body: **Is your feature request related to a problem? Please describe.**
We should be able to create Event Graphs for process mining: https://towardsdatascience.com/introduction-to-process-mining-5f4ce985b7e5
The main missing piece seems to be a flag for `Directly Follows`
**Describe the solution you'd like**
```python
hypergraph(df, cols, directly_follows=True, direct=False) # links successive event nodes
hypergraph(df, cols, directly_follows=True, direct=True) # links entities of successive event nodes
```
* [ ] Default off hypergraph param `directly_follows`
* [ ] When `direct=False`: creates new event=>event edges of type `directly_follows`
* [ ] When `direct=True`: creates new entity=>entity edges of type `directly_follows`
* [ ] int64 column `directly_follows_step`
* [ ] Should work for all engines
* [ ] Tested
* [ ] Documented
* [ ] Tutorial
**Describe alternatives you've considered**
* We should check libraries for additional common shapes
* Additional options:
```python
directly_follows_step_offset=0 # initial number to count from
directly_follows_attributed=True # control whether to add some/all attrs
directly_follows_ordering_dimension='some_col' # default to current order
directly_follows_max_threshold='30s' # only link if the next event is within a threshold, default=None
directly_follows_predicate=lambda prev, nxt: True # only link when True
```
| 0easy
|
Title: Si.to_df() does not work with parameter groups
Body: bug and solution from Mickaël Trochet. Thank you!
The function `Si.to_df()` currently only works for the `names` key, but not when `groups` is defined.
Proposed fixes below, sent by Mickaël. This looks good and is ready for a PR. It could also be nice to add a unit test about this issue.
```python
from itertools import combinations

import pandas as pd


def Si_to_pandas_dict(S_dict):
    """Convert Si information into Pandas DataFrame compatible dict.

    Parameters
    ----------
    S_dict : ResultDict
        Sobol sensitivity indices

    See Also
    ----------
    Si_list_to_dict

    Returns
    ----------
    tuple : of total, first, and second order sensitivities.
            Total and first order are dicts.
            Second order sensitivities contain a tuple of parameter name
            combinations for use as the DataFrame index and second order
            sensitivities.
            If no second order indices found, then returns tuple of (None, None)

    Examples
    --------
    >>> X = saltelli.sample(problem, 1000)
    >>> Y = Ishigami.evaluate(X)
    >>> Si = sobol.analyze(problem, Y, print_to_console=True)
    >>> T_Si, first_Si, (idx, second_Si) = sobol.Si_to_pandas_dict(Si)
    """
    problem = S_dict.problem
    total_order = {
        'ST': S_dict['ST'],
        'ST_conf': S_dict['ST_conf']
    }
    first_order = {
        'S1': S_dict['S1'],
        'S1_conf': S_dict['S1_conf']
    }
    idx = None
    second_order = None
    if 'S2' in S_dict:
        if problem['groups'] is not None:
            groups = problem['groups']
            groups_uniq = pd.Series(groups).drop_duplicates().tolist()
            idx = list(combinations(groups_uniq, 2))
            second_order = {
                'S2': [S_dict['S2'][groups_uniq.index(i[0]), groups_uniq.index(i[1])]
                       for i in idx],
                'S2_conf': [S_dict['S2_conf'][groups_uniq.index(i[0]), groups_uniq.index(i[1])]
                            for i in idx]
            }
        else:
            names = problem['names']
            idx = list(combinations(names, 2))
            second_order = {
                'S2': [S_dict['S2'][names.index(i[0]), names.index(i[1])]
                       for i in idx],
                'S2_conf': [S_dict['S2_conf'][names.index(i[0]), names.index(i[1])]
                            for i in idx]
            }
    return total_order, first_order, (idx, second_order)


def to_df(self):
    '''Conversion method to Pandas DataFrame. To be attached to ResultDict.

    Returns
    ========
    List : of Pandas DataFrames in order of Total, First, Second
    '''
    total, first, (idx, second) = Si_to_pandas_dict(self)

    if self.problem['groups'] is not None:
        groups = self.problem['groups']
        groups_uniq = pd.Series(groups).drop_duplicates().tolist()
        ret = [pd.DataFrame(total, index=groups_uniq),
               pd.DataFrame(first, index=groups_uniq)]
        if second:
            ret += [pd.DataFrame(second, index=idx)]
        return ret
    else:
        names = self.problem['names']
        ret = [pd.DataFrame(total, index=names),
               pd.DataFrame(first, index=names)]
        if second:
            ret += [pd.DataFrame(second, index=idx)]
        return ret
``` | 0easy
|
Title: ENH: implement `__array__` for series
Body: ### Is your feature request related to a problem? Please describe
Implement `__array__` for series.
The implementation for dataframe could be helpful: #282.
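A minimal sketch of what the series version could look like, assuming the same materialize-then-convert approach as the dataframe implementation (the materialization helper used here is an assumption):
```python
import numpy as np

def __array__(self, dtype=None):
    # Materialize the deferred/distributed series first, then hand
    # numpy the concrete values; forward dtype when one is requested.
    values = self.to_numpy()  # assumed materialization helper
    return np.asarray(values, dtype=dtype)
```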
| 0easy
|
Title: Custom columns?
Body: ### Checklist
- [x] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
In the admin interface, I want to have extra table columns in addition to model properties. For example, if there is a User model in relationship with UserGroups, I want to have each User row with the number of groups in addition to the groups.
Like this:
|id|groups|group count|
|-|-|-|
|123|(A) (B)|2|
### Describe the solution you would like.
We can keep the current API but extend `column_list` and `column_formatters`:
```python
class UserAdmin(ModelView, model=User):
    column_list = [User.name, User.groups, 'group_count']
    column_formatters = {'group_count': lambda m, a: len(m.groups)}
```
### Describe alternatives you considered
- Format model properties which are not in use (not always possible)
- Add the needed info to the existing columns (not convenient to use such tables)
Also, in terms of my example, if I want to show only the number of groups but not the list, I can only format the `groups` column to have the groups loaded by sqladmin. This means putting a single value in a relation column, so sqladmin adds unwanted links and brackets.
### Additional context
Brilliant project! Thank you. | 0easy
|
Title: add meta tags for embedding
Body: Following the example below
http://iframely.com/debug?uri=http%3A%2F%2Ftechcrunch.com%2F2015%2F07%2F07%2Fcitibank-is-working-on-its-own-digital-currency-citicoin%2F
Add all needed meta tags and its administration page on pure_theme
| 0easy
|
Title: Check hook rules for list, dict and set comprehension
Body: Extend #706 with disallowing `[solara.use_state() for i in range(N)]` | 0easy
|
Title: Append=False not working for AllStrategy
Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
version == 0.3.14b0
**Upgrade.**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
**Describe the bug**
A clear and concise description of what the bug is.
when trying to calculate values for all indicators using below code, the data by default gets appended to the dataframe and there is no option to override it. Using "append=False" does not work. This makes it difficult to use this on streaming data.
**To Reproduce**
Provide sample code.
```python
df.ta.strategy('All', append=False, verbose=True, timed=True)
```
**Expected behavior**
A clear and concise description of what you expected to happen.
"append=False" should ensure that output of AllStrategy is not added to source dataframe.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
Thanks for using Pandas TA!
| 0easy
|
Title: Mercury Cloud Can't Mix Notebook Display Types
Body: I want the output of [this](https://github.com/NSC9/Sample_of_Work/blob/Main/mathematics.ipynb) to be translated exactly to [my Mercury Cloud Site](https://nsc9-notebooks.runmercury.com/app/mathematics)
It seems Mercury Cloud ignores the Python strings I generated alongside the LaTeX pretty printer display. The last Python string it displays is "`1)`"
This is a minor issue since I could just rewrite my code by including Markdown notes using `_ = mr.Note(text="Some **Markdown** text")` from [the docs](https://runmercury.com/docs/input-widgets/note/) to replace all the intended Python strings but that would take a few hours since Python strings are deeply embedded with the LaTeX output displays within my source code. | 0easy
|
Title: `Set Variable If` is slow if it has several conditions
Body: We have a test suite that has a 28-case `Set Variable If`, viz.
```
${action} The ${element} At Screen XYZ
${locator} Set Variable If "${element}"=="Element 1" ${ELEMENT_1_LOCATOR}
... "${element}"=="Element 2" ${ELEMENT_2_LOCATOR}
... "${element}"=="Element 3" ${ELEMENT_3_LOCATOR}
etc
```
The performance of this is fine if the `${element}` matches a condition high up in the list, but then starts to become very slow for those lower down. Specifically, according to the logs the time to run the first few comparisons is around 1 ms each, but after a few it starts approximately doubling for each subsequent comparison. In the logs below the time taken for a single comparison reached 9.1 seconds by the time it got to the successful condition, which in this case was the 23rd case of the `Set Variable If`. Some calls in the actual testcase that's using this took even longer as they referenced elements further down the list, with the result that a test suite with fewer than 20 keyword calls overall was nowhere close to finishing even after ten minutes.
I did some profiling by hacking `time.time()` printouts into the Robot Framework source code in my venv. The delay seems to be in the logging code, specifically the `output.listeners.Listeners` and `output.listeners.LibraryListeners` classes:
```
before LOGGER.start_keyword: 1696503795.7360394
<class 'robot.output.logger.LoggerProxy'> took 3.814697265625e-06
<class 'robot.output.logger.LoggerProxy'> took 4.9114227294921875e-05
<class 'robot.output.listeners.Listeners'> took 4.5487611293792725
<class 'robot.output.listeners.LibraryListeners'> took 4.605533123016357
```
(values above are in seconds) - you can see the two classes took 9.1 seconds between them to process the `start_keyword` call.
The first line of the above output comes from a line I added to the very top of the calling function, `output.output.Output.start_keyword`, while the rest of the debug output was generated by changing the `start_keyword` function of the `output.logger.Logger` class to look like this:
```
def start_keyword(self, keyword):
    # TODO: Could _prev_log_message_handlers be used also here?
    self._started_keywords += 1
    self.log_message = self._log_message
    for logger in self.start_loggers:
        s = time.time()
        logger.start_keyword(keyword)
        e = time.time()
        print(f"  {type(logger)} took {e-s}")
```
where my changes were to add the `s = `, `e = ` and `print` lines. Unfortunately I was unable to drill down further as I got lost in the maze of runtime-defined functions. My suspicion was that perhaps the XML/HTML loggers might take a while if they have to reshuffle large amounts of XML/HTML in order to insert a nested log about a keyword call - when you view the HTML log for the `Set Variable If`, it is nested 23 layers deep.
Robot Framework version is 6.1.1 (according to version.py), running in Python 3.10.12 on Ubuntu 22.04. The test suite is invoked using the following command `pabot --pabotlib --processes 1 --variablefile local_variable.py --outputdir Log --include <tag> <testcase root directory>`.
Happy to supply any other info, screenshots etc to help. The problem is readily reproducible on my end and for now I'm going to work around it by splitting up the keyword into several keywords each with a smaller `Set Variable If`. | 0easy
|
Title: [DOCS] datasets are currently undocumented
Body: It would be nice to have a notebook that shows a plot and a few rows of all the datasets that we have. | 0easy
|
Title: Collections: Value of `ignore_case` argument accidentally logged
Body: Some Collections library keywords log info about the ignore_case argument value (e.g. "INFO False") without mentioning what that logged value means. It could be interpreted incorrectly e.g. meaning that the keyword failed.
E.g. "Dictionaries Should Be Equal" keyword logs "INFO False" if the ignore_case argument is False.
That logging seems to come from Collections.py line 1127: print(ignore_case)
**Version information**
Robot Framework 7.0 (Python 3.11.3 on win32)
**Steps to reproduce the problem**
Running this test will log "INFO foo":
```
*** Settings ***
Library           Collections

*** Variables ***
&{DICT1}          a=1
&{DICT2}          a=1

*** Test Cases ***
Compare dictionaries
    Dictionaries Should Be Equal    ${DICT1}    ${DICT2}    ignore_case=foo
``` | 0easy
|
Title: TFTExplainer: getting explanations when trained on multiple time series
Body: I have trained TFT on multiple time series (for example: trained on a retail dataset with 350 time series, each having target, past and future covariates). My understanding was that TFTExplainer would give importance (temporal and variable) based on what it learned from all the time series. To get these, I pass the backgorund_series (and other covariates) that I had used for training to the TFTExplainer. i.e. I pass all 350 series. This gives me the following error
```
Traceback (most recent call last):
  File "/main.py", line 90, in <module>
    results = explainer.explain()
  File "/lib/python3.9/site-packages/darts/explainability/tft_explainer.py", line 224, in explain
    values=np.take(attention_heads[idx], horizon_idx, axis=0).T,
IndexError: index 30 is out of bounds for axis 0 with size 30
```
I found that in TFTExplainer.explain(), the size of attention_heads is 30 and not 350.
When I pass only one series as background series, it works (size of attention_heads is 1).
How can I get global explanations for the TFT model when it is trained on multiple time series?
Thank you.
| 0easy
|
Title: Collections: Make `ignore_order` and `ignore_keys` recursive
Body: `Lists Should Be Equal` accepts `ignore_order` (#2703) and `Dictionaries Should Be Equal` accepts `ignore_keys` (#2717). Currently both of them only work with the list/dict given to them and not recursively. We added recursive `Normalizer` class to support case-insensitive comparisons recursively (#4343), and we can easily extend it to handle also other normalization. Then also they will work recursively which I consider useful. | 0easy
|
Title: [New feature] Add apply_to_images to GaussianBlur
Body: | 0easy
|
Title: Standard docstrings style for the codebase
Body: ## Problem
Some parts of the codebase use Sphinx-style docstrings and others use Google's, and there is not a clear definition about this. Also, with the addition of auto-generated docs (#1058) , the docs render doesn't come out very pretty.
See [here](https://bwanamarko.alwaysdata.net/napoleon/format_exception.html) examples of these (and other) different styles.
## Proposal
- Define Google Style as the default style
- Change all docstrings to the new format
- Use some linter/formatter to enforce this with our pre-commit setup (e.g. https://pypi.org/project/docformatter/); see the example below.
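For reference, a function documented in Google style looks like this:
```python
def scale(values, factor):
    """Multiply every value by a constant factor.

    Args:
        values (list[float]): Numbers to scale.
        factor (float): Multiplier applied to each element.

    Returns:
        list[float]: The scaled values.
    """
    return [v * factor for v in values]
```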
## Alternatives
We could use another style as well (if there is a strong opinion towards that). I would only argue that using a linter/auto-formatter should help get the codebase sharp. | 0easy
|
Title: Some operations involving the identity gate cause an error
Body: ## Describe the issue
If one tries to add or subtract the identity gate to another gate (or vice-versa) on a qubit `q`, for example, `I(q) - Z(q)`, then the following error occurs:
```
Traceback (most recent call last):
File "\misc\cirq_sub_i.py", line 31, in <module>
op(p1, p2)
File "\misc\cirq_sub_i.py", line 9, in add
return x + y
TypeError: unsupported operand type(s) for +: 'GateOperation' and 'GateOperation'
```
One could perhaps add the Paulis beforehand and then apply them to qubits (this doesn't cause the error), but it isn't clear to me how one turns a `LinearCombinationOfGates` into a `PauliSum`. Alternatively, replacing the `-` with `+(-1.0)*` solves the issue in some cases. Otherwise, one may replace the offending `I(q)` by `X(q)*X(q)` or `PauliString()`. Is this very last option the preferred way of denoting the identity?
The following assertions are satisfied (and the individual expressions do not throw an error)
```python
assert PauliString() - Z(q) == I(q) + (-1) * Z(q)
assert X(q) * X(q) - Z(q) == I(q) + (-1) * Z(q)
```
They are all "paraphrases" of the original `I(q) - Z(q)`. As an aside,
```python
assert X(q) ** 2 - Z(q) == I(q) + (-1) * Z(q)
```
also fails, because ` X(q) ** 2` is converted to an incompatible class.
## Explain how to reproduce the bug or problem
The code below goes through all possible combinations of gates and operations, and prints those not implemented. In fact, moving the `op(p1,p2)` out of the try block will reproduce the error above.
```python
from cirq import I, X, Y, Z, LineQubit

q = LineQubit(0)
paulis = (I(q), X(q), Y(q), Z(q))

def add(x, y):
    return x + y

add.sym = "+"

def sub(x, y):
    return x - y

sub.sym = "-"

def addm(x, y):
    return x + (-1.0) * y

addm.sym = "+(-1.0)*"

for p1 in paulis:
    for p2 in paulis:
        for op in (add, sub, addm):
            try:
                op(p1, p2)
                msg = "pass"
            except Exception:
                msg = "error"
            print(f"{p1.gate}{op.sym}{p2.gate}: {msg}")
```
This snippet outputs
```
I+I: error
I-I: error
I-X: error
I-Y: error
I-Z: error
X-I: error
Y-I: error
Z-I: error
```
This happens on cirq 1.4.1. I updated right before running the code.
I assume the error can be fixed by adding the missing `__add__` and `__sub__` in some class, but I am confused by the class structure and have not managed to locate the correct one. | 0easy
|
Title: hardware monitoring
Body: I think it would be helpful to monitor the hardware to check servers raid controller or system fan status.
Is that possible or is it planned in the future ?
| 0easy
|
Title: Close datetimes result in repeated monthly X axis labels
Body: I'm quite enjoying this, but I have come across a serious data interpretation problem.
I have a dataframe read like so:
```
df = pd.read_sql_query(query,conn, parse_dates=['date'])
```
However, the datetime values are all relatively high resolution ones (i.e., 5-15 second samples over many days), and the X axis nearly always shows only "2023-02" instead of showing the date (or the hour if I'm looking at the last 24 hours).
Can we get a way to change the X-axis label resolution, or (even better), a stepwise automatic scale to format those datetimes according to the dataset granularity? | 0easy
|
Title: Add SMTP server for password emails
Body: There should be routes that trigger an email for "forgot" and "reset" password options. These can be celery tasks but probably need a separate container for a simple SMTP server in order to send the emails.
The fullstack Vue project has an example of a containerized SMTP server. | 0easy
|
Title: igel serve, example call and result in igel documentation.
Body: Can we have an example of REST API calls in the documentation?
Examples with CURL, HTTPie or another client and the results would be better for newbies.
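For instance, something along these lines (the `/predict` route and payload shape are assumptions to be checked against igel's actual serve implementation):
```sh
curl -X POST http://localhost:8000/predict \
     -H "Content-Type: application/json" \
     -d '{"data": [[5.1, 3.5, 1.4, 0.2]]}'
```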
Thanks again for your good work.
| 0easy
|
Title: doc2vec's infer_vector has `epochs` and `steps` input parameters - `steps` not in use
Body: Refer to doc2vec.py, `infer_vector` function seems to be using `epochs` for the number of iterations and `steps` is not in used.
However, in the `similarity_unseen_docs` function, `steps` is used when calling the infer_vector function.

| 0easy
|
Title: [Feature request] Add apply_to_images to RandomSnow
Body: | 0easy
|
Title: Fix `test_regressions` tests failing in GitHub Actions
Body: I had to mark the `test_regressions` tests with XFAIL when running the checks in CI because they are currently failing when being run in GitHub Actions. See [here](https://github.com/tpvasconcelos/ridgeplot/actions/runs/7378769595/job/20074296603) and [here](https://github.com/tpvasconcelos/ridgeplot/actions/runs/7378769595/job/20074296398).
This is probably due to small differences in output between environments (local vs. remote macOS, Windows, and Linux).
The test, for reference:
https://github.com/tpvasconcelos/ridgeplot/blob/34498e54b2381ed97ac09f182df5a40f8e4a4943/tests/e2e/test_examples.py#L30-L37
Regardless of the solution, one nice feature to implement here is the ability to upload the plot artefacts that resulted in the failed tests and expose these artefacts to the relevant PR. This discussion might be useful for this:
- https://github.com/actions/upload-artifact/issues/50
other references on this topic:
- https://github.com/orgs/community/discussions/51403
- https://github.com/djthornton1212/djthornton1212/blob/main/.github/workflows/pr-comment-artifact-url.yml
- https://stackoverflow.com/questions/60789862/url-of-the-last-artifact-of-a-github-action-build | 0easy
|
Title: --update-snapshot without overwriting unused ones
Body: **Is your feature request related to a problem? Please describe.**
Working with a large test suites we have some tests that are being skipped depending on environment. When writing new tests with snapshots and then running `--update-snapshot` this overwrites the skipped snapshots which requires reverting the changes.
**Describe the solution you'd like**
A flag `--skip-unused` that skips the unused snapshots which would also hide the error/warning about unused snapshots.
**Describe alternatives you've considered**
Reverting the changed snapshots manually
| 0easy
|
Title: Marketplace - For the items on the menu bar, change the font to "Poppins"?
Body: ### Describe your issue.
For the items on the menu bar (the text that says "Agent store", "Library" and "Build") change the font to "Poppins"
Use the following style:
Font: Poppins
size: 20px
line-height: 28px
| 0easy
|
Title: BUG:xorbits.pandas.DataFrame has no attribute from_dict
Body: ```
File "C:\Users\xldistance\AppData\Local\Programs\Python\Python311\Lib\site-packages\vnpy-3.2.0-py3.11.egg\vnpy\app\portfolio_strategy\backtesting.py", line 452, in calculate_result
self.daily_df = DataFrame.from_dict(results).set_index("date")
^^^^^^^^^^^^^^^^^^^
File "C:\Users\xldistance\AppData\Roaming\Python\Python311\site-packages\xorbits\core\data.py", line 162, in __getattr__
raise AttributeError(item)
AttributeError: from_dict
``` | 0easy
|
Title: [DevX][XXS] Improve Output Formatting for deploy() Function in preswald/cli.py
Body: #### Description
The `deploy()` function in `preswald/cli.py` [Link](https://github.com/StructuredLabs/preswald/blob/ffd15665f503af9bcb797a12c2766fa9491018da/preswald/cli.py) currently outputs information, but the formatting could be improved for better readability. We would like to "pretty print" this output to make it clearer and more user-friendly.
#### Tasks
1. Locate the `deploy()` function in `preswald/cli.py`.
2. Analyze the current output of the `deploy()` function.
3. Update the code to format the output using pretty print techniques (a sketch follows this list), such as:
- Aligning columns or sections.
- Adding headers or separators where necessary.
- Using colors for emphasis if the output is displayed in a terminal (optional, using libraries like `rich` or `colorama`).
4. Ensure the formatting works correctly with a variety of input scenarios.
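For illustration, a minimal sketch of the kind of output formatting these tasks describe, using `rich` (the helper and its fields are hypothetical, not preswald's actual API):
```python
from rich.console import Console
from rich.table import Table

def print_deploy_summary(app_name: str, url: str, status: str) -> None:
    # Hypothetical helper: render deploy() results as an aligned table.
    console = Console()
    table = Table(title="Deployment Summary")
    table.add_column("Field", style="bold")
    table.add_column("Value")
    table.add_row("App", app_name)
    table.add_row("URL", url)
    table.add_row("Status", status)
    console.print(table)
```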
#### Acceptance Criteria
- The output from the `deploy()` function is well-organized and easy to read.
- Changes are tested to ensure they don't break existing functionality.
- If applicable, include screenshots or terminal examples of the updated output.
#### Resources
- [Python's `pprint` module](https://docs.python.org/3/library/pprint.html)
- [Rich Documentation](https://rich.readthedocs.io/en/stable/) (optional for colorful terminal output)
- [Colorama Documentation](https://pypi.org/project/colorama/)(optional for basic terminal color output) | 0easy
|
Title: Polynomial Features + SklearnWrapper weird behavior
Body: **Describe the bug**
A clear and concise description of what the bug is.
When using PolynomialFeaturs + SklearnWrappers the base features are duplicated, when trying to dedup using DropDuplicateFeatures the values are repeated again!!
Using the simple Titanic Dataset you can run something like this:
```python
df = pd.read_csv('https://www.openml.org/data/get_csv/16826755/phpMYEkMl')
X = df[['pclass','sex','age','fare','embarked']]
y = df.survived
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
pipe = Pipeline(steps = [
('ci', CategoricalImputer(imputation_method='frequent')),
('mmi', MeanMedianImputer(imputation_method='mean')),
('od', OrdinalEncoder(encoding_method='arbitrary')),
('pl', SklearnTransformerWrapper(PolynomialFeatures(degree = 2, interaction_only = True, include_bias=False), variables=['pclass','sex'])),
#('drop', DropDuplicateFeatures()),
#('sc', SklearnTransformerWrapper(StandardScaler(), variables=['Age','Fare'])),
#('lr', LogisticRegression(random_state=42))
])
pipe.fit_transform(X_train)
```
This returns the first issue: I'm getting `pclass` and `sex` duplicated, while I expect to get back only the interactions. This is expected from [Sklearn Docs](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html), but why would I want duplicated features?
Looking into Feature Engine Docs I found `DropDuplicateFeatures()`, but if applying into the Pipeline I get this:
```python
# imports as in the first snippet, plus:
from feature_engine.selection import DropDuplicateFeatures

df = pd.read_csv('https://www.openml.org/data/get_csv/16826755/phpMYEkMl')
X = df[['pclass','sex','age','fare','embarked']]
y = df.survived
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
pipe = Pipeline(steps = [
('ci', CategoricalImputer(imputation_method='frequent')),
('mmi', MeanMedianImputer(imputation_method='mean')),
('od', OrdinalEncoder(encoding_method='arbitrary')),
('pl', SklearnTransformerWrapper(PolynomialFeatures(degree = 2, interaction_only = True, include_bias=False), variables=['pclass','sex'])),
('drop', DropDuplicateFeatures()),
#('sc', SklearnTransformerWrapper(StandardScaler(), variables=['Age','Fare'])),
#('lr', LogisticRegression(random_state=42))
])
pipe.fit_transform(X_train)
```

getting tons of repeated features, which is totally unexpected.
**Expected behavior**
Not getting repeated/duplicated features.
**Screenshots**
Shown above.
**Desktop (please complete the following information):**
- Ubuntu 20.04
- Feature Engine 1.4.0
Thanks in Advance,
Alfonso
| 0easy
|
Title: Update the autokey-run man page
Body: ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Documentation
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [ ] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [ ] bug
- [ ] critical
- [X] development
- [X] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [X] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
_No response_
### Which Linux distribution did you use?
_No response_
### Which AutoKey GUI did you use?
None
### Which AutoKey version did you use?
_No response_
### How did you install AutoKey?
_No response_
### Can you briefly describe the issue?
Update the [autokey-run](https://github.com/autokey/autokey/blob/develop/doc/man/autokey-run.1) man page format to be similar to the one that's being used in the in-process updates to the [autokey-gtk](https://github.com/autokey/autokey/blob/develop/doc/man/autokey-gtk.1) and [autokey-qt](https://github.com/autokey/autokey/blob/develop/doc/man/autokey-qt.1) man pages (including the new **SEE ALSO** section that they use).
Also add this to the **script** option since it's a new feature that was added in AutoKey 0.96.0:
Accepts full paths to Python scripts. If no full path is given, will treat the name as an existing AutoKey script name instead.
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
Examine the existing man page.
### What should have happened?
The updates should be there.
### What actually happened?
They're not there yet.
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_ | 0easy
|
Title: Push README to Pypi
Body: Currently, README is in markdown which is not supported by pypi. We need to convert the file to reStructuredText for `pypi`. This can be automated. | 0easy
|
Title: Refactor docs to move 'Admin integration' section to top level
Body: **Describe the issue**
The 'advanced_usage.rst' doc is getting rather lengthy. I propose to move the 'Admin integration' section to its own top level page.
| 0easy
|
Title: Apply black to Python templates
Body: We should look into getting black to apply to https://github.com/scrapy/scrapy/tree/master/scrapy/templates.
Mind that it would break https://github.com/scrapy/scrapy/pull/5808 at the moment, i.e. we might need to make sure template building code works with the new code style of templates. | 0easy
|
Title: Some unit tests cannot be run independently
Body: I cannot run the tests on my local machine by downloading the code going into the RF dir and executing:
`python utest/run.py -q running`
The errors I have are below (after improving the output a bit with the PR: https://github.com/robotframework/robotframework/pull/4610).
The errors seem to happen because the import path isn't setup (so, I guess imports are failing, but I'm not sure how to print that additional info)...
i.e.: in `test_imports.TestImports.test_create` I can't see how the keyword `My Test Keyword` can be loaded from the `robotframework\utest\resources\test_resource.txt`.
Is some additional setup needed in order to run these tests?
```
(py311_tests) λ python utest/run.py -q running
======================================================================
FAIL: test_create (test_imports.TestImports.test_create)
----------------------------------------------------------------------
Traceback (most recent call last):
File "X:\robocorpws\robotframework\utest\running\test_imports.py", line 33, in run_and_check_pass
assert_suite(result, 'Suite', 'PASS')
File "X:\robocorpws\robotframework\utest\running\test_imports.py", line 16, in assert_suite
assert_equal(suite.status, status)
File "X:\robocorpws\robotframework\utest\..\src\robot\utils\asserts.py", line 181, in assert_equal
_report_inequality(first, second, '!=', msg, values, formatter)
File "X:\robocorpws\robotframework\utest\..\src\robot\utils\asserts.py", line 230, in _report_inequality
raise AssertionError(msg)
AssertionError: FAIL != PASS
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "X:\robocorpws\robotframework\utest\running\test_imports.py", line 52, in test_create
self.run_and_check_pass(suite)
File "X:\robocorpws\robotframework\utest\running\test_imports.py", line 41, in run_and_check_pass
raise AssertionError('\n'.join(full_msg)) from e
AssertionError: No keyword with name 'My Test Keyword' found.
======================================================================
FAIL: test_resource (test_imports.TestImports.test_resource)
----------------------------------------------------------------------
Traceback (most recent call last):
File "X:\robocorpws\robotframework\utest\running\test_imports.py", line 33, in run_and_check_pass
assert_suite(result, 'Suite', 'PASS')
File "X:\robocorpws\robotframework\utest\running\test_imports.py", line 16, in assert_suite
assert_equal(suite.status, status)
File "X:\robocorpws\robotframework\utest\..\src\robot\utils\asserts.py", line 181, in assert_equal
_report_inequality(first, second, '!=', msg, values, formatter)
File "X:\robocorpws\robotframework\utest\..\src\robot\utils\asserts.py", line 230, in _report_inequality
raise AssertionError(msg)
AssertionError: FAIL != PASS
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "X:\robocorpws\robotframework\utest\running\test_imports.py", line 67, in test_resource
self.run_and_check_pass(suite)
File "X:\robocorpws\robotframework\utest\running\test_imports.py", line 41, in run_and_check_pass
raise AssertionError('\n'.join(full_msg)) from e
AssertionError: No keyword with name 'My Test Keyword' found.
======================================================================
FAIL: test_variables (test_imports.TestImports.test_variables)
----------------------------------------------------------------------
Traceback (most recent call last):
File "X:\robocorpws\robotframework\utest\running\test_imports.py", line 33, in run_and_check_pass
assert_suite(result, 'Suite', 'PASS')
File "X:\robocorpws\robotframework\utest\running\test_imports.py", line 16, in assert_suite
assert_equal(suite.status, status)
File "X:\robocorpws\robotframework\utest\..\src\robot\utils\asserts.py", line 181, in assert_equal
_report_inequality(first, second, '!=', msg, values, formatter)
File "X:\robocorpws\robotframework\utest\..\src\robot\utils\asserts.py", line 230, in _report_inequality
raise AssertionError(msg)
AssertionError: FAIL != PASS
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "X:\robocorpws\robotframework\utest\running\test_imports.py", line 76, in test_variables
self.run_and_check_pass(suite)
File "X:\robocorpws\robotframework\utest\running\test_imports.py", line 41, in run_and_check_pass
raise AssertionError('\n'.join(full_msg)) from e
AssertionError: Variable '${MY_VARIABLE}' not found.
----------------------------------------------------------------------
Ran 296 tests in 2.281s
FAILED (failures=3)
``` | 0easy
|
Title: [BUG] ai init sequence not pay-as-you-go
Body: **Describe the bug**
When creating Graphistry objects (`graphistry.bind(...)`), the ai mixins (featureutils, ...) init and, for something like umap, call all the way into third-party libs (`umap.UMAP`). This is at minimum a performance regression (unclear how big), but also a complexity issue with the mixin flow. We've seen bugs where init code in a mixin breaks otherwise unrelated code.
**Expected behavior**
Mixins should be pay-as-you-go. For example, umap init can be deferred to the first umap call.
```python
class UMAPMixin(object):
    def __init__(self, ...):
        # remove umap.UMAP(...) calls that impact non-umap calls
        pass

    def umap(self, ...):
        lazy_init()
```
cc + @silkspace
| 0easy
|
Title: Shape of pred & label differs when training siamrpn_alex_dwxcorr/config.yaml
Body: Hello every one, I had trained siamrpn_r50_l234_dwxcorr/config.yaml with my PC successfully.
But when I train with siamrpn_alex_dwxcorr/config.yaml, it always raises Error during computing loss function.
After tracing, in the following function, I found that shape of pred & label is difference, pred is (17x17) and label is (25x25). which is not the case for Alex Backbone. Should I change any other configuration before training with "--cfg=(path)/siamrpn_alex_dwxcorr/config.yaml" ? Thank you
```python
def select_cross_entropy_loss(pred, label):
    pred = pred.view(-1, 2)
    label = label.view(-1)
    pos = label.data.eq(1).nonzero().squeeze().cuda()
    neg = label.data.eq(0).nonzero().squeeze().cuda()
    loss_pos = get_cls_loss(pred, label, pos)
    loss_neg = get_cls_loss(pred, label, neg)
    return loss_pos * 0.5 + loss_neg * 0.5
```
| 0easy
|
Title: Bot Activity metric API
Body: The canonical definition is here: https://chaoss.community/?p=3465 | 0easy
|
Title: Filters for Nested Documents Via Relay Does Not Filter Properly
Body: Filtering works for a base document to a query. But when filtering by a property for a nested document, the query returns all results regardless of the argument value.
Example.
Schema.py
```
class Subgraph(MongoengineObjectType):
class Meta:
model = SubgraphModel
interfaces = (RelayNode,)
class Graph(MongoengineObjectType):
class Meta:
model = GraphModel
interfaces = (RelayNode,)
class Query(graphene.ObjectType):
node = RelayNode.Field()
graph = RelayNode.Field(Graph)
graphs = MongoengineConnectionField(Graph)
```
Model.py
```
class Subgraph(Document):
    meta = {'collection': 'subgraph'}
    key = StringField(required=True)


class Graph(Document):
    meta = {'collection': 'graph'}
    key = StringField(required=True)
    subgraphs = ListField(ReferenceField(Subgraph), required=True)
```
Query that fails:
```
{
  graphs(key:"graph1") {
    edges {
      node {
        id
        subgraphs(key:"none") {
          edges {
            node {
              id
              key
            }
          }
        }
      }
    }
  }
}
```
The first argument to graphs filters properly. But the second argument does not filter at all, because all results are returned.
Result:
```
{
  "data": {
    "graphs": {
      "edges": [
        {
          "node": {
            "id": "R3JhcGg6NWI3NWU1YWYxMGYyMGMxZDliYmZmYzBj",
            "key": "graph1",
            "subgraphs": {
              "edges": [
                {
                  "node": {
                    "id": "U3ViZ3JhcGg6NWI3NWU1YWYxMGYyMGMxZDliYmZmYzAw",
                    "key": "subgraph1"
                  }
                },
                {
                  "node": {
                    "id": "U3ViZ3JhcGg6NWI3NWU1YWYxMGYyMGMxZDliYmZmYzAx",
                    "key": "subgraph2"
                  }
                },
                {
                  "node": {
                    "id": "U3ViZ3JhcGg6NWI3NWU1YWYxMGYyMGMxZDliYmZmYzAy",
                    "key": "subgraph3"
                  }
                }
              ]
            }
          }
        }
      ]
    }
  }
}
```
Here is the produced schema
```
schema {
  query: Query
}

type Graph implements Node {
  id: ID!
  key: String!
  subgraphs(before: String, after: String, first: Int, last: Int, id: ID, key: String): SubgraphConnection
}

type GraphConnection {
  pageInfo: PageInfo!
  edges: [GraphEdge]!
}

type GraphEdge {
  node: Graph
  cursor: String!
}

interface Node {
  id: ID!
}

type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
  startCursor: String
  endCursor: String
}

type Query {
  node(id: ID!): Node
  graph(id: ID!): Graph
  graphs(before: String, after: String, first: Int, last: Int, id: ID, key: String): GraphConnection
}

type Subgraph implements Node {
  id: ID!
  key: String!
}

type SubgraphConnection {
  pageInfo: PageInfo!
  edges: [SubgraphEdge]!
}

type SubgraphEdge {
  node: Subgraph
  cursor: String!
}
```
| 0easy
|
Title: refactor DAG partial build to use shallow copy
Body: `dag.build_partially` uses `deepcopy` internally, however, the DAG may contain references to unserializable objects (this may happen with database clients). I think it's safe to do a shallow copy, so we need to test it and if it's the case, switch the deep copy | 0easy
|
Title: Add `full_name` to replace `longname` to suite and test objects
Body: Both `TestSuite` and `TestCase` objects have a `longname` attribute that contains the name of the suite/test prefixed with the `longname` of the parent suite. The functionality is useful, but the name doesn't follow our coding style and is bad in general. We are enhancing our APIs also otherwise and the result side `Keyword` object just got `full_name` for similar purpose than `longname` (#4884). That name works well also with suites and tests.
The plan is to do the following:
1. In RF 7.0 introduce the `full_name` property.
2. Preserve `longname` in RF 7.0. Mention that it is deprecated in its documentation but don't emit actual deprecation warnings.
3. Deprecate `longname` "loudly" in RF 8.0 or possibly later.
4. In some suitable future release remove `longname`.
This issue covers the two first steps above.
| 0easy
|
Title: [DOC] command line for setup virtual environment is wrong in the 4.1 documentation
Body: python3 -m .venv # create a virtualenv
correct is:
python3 -m venv .venv # create a virtualenv
| 0easy
|
Title: Add more explicit steps to the Getting Started example
Body: Improve the [Getting Started section of the docs](https://testbook.readthedocs.io/en/latest/getting-started/index.html#create-your-first-test).
Right now it's easy to miss that two files are needed: a notebook (`.ipynb` file) to test and a test file (`.py` file) to write the tests.
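For instance, a minimal pair might look like this (file names are illustrative):
```python
# test_notebook.py -- tests the notebook example_notebook.ipynb
from testbook import testbook

@testbook("example_notebook.ipynb", execute=True)
def test_func(tb):
    func = tb.ref("func")  # reference a function defined in the notebook
    assert func(2) == 4
```
Running it is then just `pytest test_notebook.py`.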
It would also be helpful to add an explicit step on how to run the test file. | 0easy
|
Title: [ENH] Make all collection and series transformer files private, access through init
Body: ### Describe the feature or idea you want to propose
Currently, the collection transformer files are public modules. They should follow the series transformers, whose files are private modules accessed through `__init__`.
### Describe your proposed solution
make files private, import through init
| 0easy
|
Title: [Feature] support mistral small vlm
Body: ### Checklist
- [ ] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 2. Please use English, otherwise it will be closed.
### Motivation
https://mistral.ai/fr/news/mistral-small-3-1
### Related resources
_No response_ | 0easy
|
Title: An Error is emitted when the ‘count’field is present in DataFrame
Body: **Describe the bug**
1. I load a local file that contains a `count` column.
2. When I then generate the report, I hit this error:
`cannot insert count, already exists`
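Judging from the traceback below, `get_duplicates` calls `.reset_index(name="count")`, which collides with a pre-existing `count` column. A possible workaround until this is fixed (a sketch, renaming the column before profiling):
```python
import pandas as pd
from pandas_profiling import ProfileReport

df = pd.read_csv("my_file.csv")  # hypothetical input with a 'count' column
report = ProfileReport(df.rename(columns={"count": "count_"}))
```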
The full traceback:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~/anaconda3/lib/python3.7/site-packages/IPython/core/formatters.py in __call__(self, obj)
343 method = get_real_method(obj, self.print_method)
344 if method is not None:
--> 345 return method()
346 return None
347 else:
~/anaconda3/lib/python3.7/site-packages/pandas_profiling/profile_report.py in _repr_html_(self)
434 def _repr_html_(self):
435 """The ipython notebook widgets user interface gets called by the jupyter notebook."""
--> 436 self.to_notebook_iframe()
437
438 def __repr__(self):
~/anaconda3/lib/python3.7/site-packages/pandas_profiling/profile_report.py in to_notebook_iframe(self)
414 with warnings.catch_warnings():
415 warnings.simplefilter("ignore")
--> 416 display(get_notebook_iframe(self))
417
418 def to_widgets(self):
~/anaconda3/lib/python3.7/site-packages/pandas_profiling/report/presentation/flavours/widget/notebook.py in get_notebook_iframe(profile)
63 output = get_notebook_iframe_src(profile)
64 elif attribute == "srcdoc":
---> 65 output = get_notebook_iframe_srcdoc(profile)
66 else:
67 raise ValueError(
~/anaconda3/lib/python3.7/site-packages/pandas_profiling/report/presentation/flavours/widget/notebook.py in get_notebook_iframe_srcdoc(profile)
21 width = config["notebook"]["iframe"]["width"].get(str)
22 height = config["notebook"]["iframe"]["height"].get(str)
---> 23 src = html.escape(profile.to_html())
24
25 iframe = f'<iframe width="{width}" height="{height}" srcdoc="{src}" frameborder="0" allowfullscreen></iframe>'
~/anaconda3/lib/python3.7/site-packages/pandas_profiling/profile_report.py in to_html(self)
384
385 """
--> 386 return self.html
387
388 def to_json(self) -> str:
~/anaconda3/lib/python3.7/site-packages/pandas_profiling/profile_report.py in html(self)
199 def html(self):
200 if self._html is None:
--> 201 self._html = self._render_html()
202 return self._html
203
~/anaconda3/lib/python3.7/site-packages/pandas_profiling/profile_report.py in _render_html(self)
306 from pandas_profiling.report.presentation.flavours import HTMLReport
307
--> 308 report = self.report
309
310 disable_progress_bar = not config["progress_bar"].get(bool)
~/anaconda3/lib/python3.7/site-packages/pandas_profiling/profile_report.py in report(self)
193 def report(self):
194 if self._report is None:
--> 195 self._report = get_report_structure(self.description_set)
196 return self._report
197
~/anaconda3/lib/python3.7/site-packages/pandas_profiling/profile_report.py in description_set(self)
173 if self._description_set is None:
174 self._description_set = describe_df(
--> 175 self.title, self.df, self.summarizer, self.typeset, self._sample
176 )
177 return self._description_set
~/anaconda3/lib/python3.7/site-packages/pandas_profiling/model/describe.py in describe(title, df, summarizer, typeset, sample)
135 # Duplicates
136 pbar.set_postfix_str("Locating duplicates")
--> 137 duplicates = get_duplicates(df, supported_columns)
138 pbar.update()
139
~/anaconda3/lib/python3.7/site-packages/pandas_profiling/model/duplicates.py in get_duplicates(df, supported_columns)
23 .groupby(supported_columns)
24 .size()
---> 25 .reset_index(name="count")
26 .nlargest(n_head, "count")
27 )
~/anaconda3/lib/python3.7/site-packages/pandas/core/series.py in reset_index(self, level, drop, name, inplace)
1247 else:
1248 df = self.to_frame(name)
-> 1249 return df.reset_index(level=level, drop=drop)
1250
1251 # ----------------------------------------------------------------------
~/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py in reset_index(self, level, drop, inplace, col_level, col_fill)
5011 # to ndarray and maybe infer different dtype
5012 level_values = maybe_casted_values(lev, lab)
-> 5013 new_obj.insert(0, name, level_values)
5014
5015 new_obj.index = new_index
~/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py in insert(self, loc, column, value, allow_duplicates)
3758 self._ensure_valid_index(value)
3759 value = self._sanitize_column(column, value, broadcast=False)
-> 3760 self._mgr.insert(loc, column, value, allow_duplicates=allow_duplicates)
3761
3762 def assign(self, **kwargs) -> DataFrame:
~/anaconda3/lib/python3.7/site-packages/pandas/core/internals/managers.py in insert(self, loc, item, value, allow_duplicates)
1189 if not allow_duplicates and item in self.items:
1190 # Should this be a different kind of error??
-> 1191 raise ValueError(f"cannot insert {item}, already exists")
1192
1193 if not isinstance(loc, int):
ValueError: cannot insert count, already exists
```
3. I removed the 'count' field from train_df, since there seemed to be a conflict between my original field and the name used internally.
4. It seems to have worked:

| 0easy
|
Title: [FEATURE] add log_dtypes to pandas_utils
Body: I might also want to log the `dtypes` between each step in a pandas pipeline. It's probably best to add a separate logger for both the `dtype` and the shape.
Should the column names be separate too? Maybe worth a discussion; a rough sketch is below.
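A possible shape for such a decorator (mirroring the spirit of the existing `log_step`, not a final API):
```python
from functools import wraps

def log_dtypes(func):
    """Log the dtypes of the dataframe returned by a pipeline step."""
    @wraps(func)
    def wrapper(df, *args, **kwargs):
        result = func(df, *args, **kwargs)
        print(f"[{func.__name__}] dtypes:\n{result.dtypes}")
        return result
    return wrapper
```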
It would also be nice if these features were documented on the documentation page together with some other `pandas_utils` functions. | 0easy
|
Title: plotting with d3 backend
Body: - [x] soorgeon: update pygraphviz message - it still prints a message saying pygraphviz is required
- [x] clear error message: many examples set the output path to `pipeline.png`, but this won't work when using the d3 backend
- [x] update documentation | 0easy
|
Title: Mapper / Scikit-learn error from precomputed clustering
Body: <!-- Instructions For Filing a Bug: https://github.com/giotto-learn/giotto-learn/blob/master/CONTRIBUTING.rst -->
#### Description
I have a point cloud of 1920 nine-dimensional points. When applying Mapper with `DBSCAN` clustering as in the Christmas Santa notebook, everything works fine. When I apply my own clustering algorithm with a precomputed distance matrix, I get an error. Using Kepler Mapper I make this work by setting the parameter `precomputed=True` when calling `mapper.map()`.
PS! I used the color function from the Santa `.csv` file as a hack to make the code run. It worked for the basic clustering method.
UPDATE: I added `point_cloud.csv` to the gist, I hope it works for reproduction.
#### Steps/Code to Reproduce
https://gist.github.com/torlarse/43604dd09a98cc3f69166659cd6ddf9e
#### Expected Results
A Mapper complex :)
#### Actual Results
Please see gist for traceback.
#### Versions
Python 3.7.5 (tags/v3.7.5:5c02a39a0b, Oct 15 2019, 00:11:34) [MSC v.1916 64 bit (AMD64)]
NumPy 1.17.4
SciPy 1.3.3
joblib 0.14.0
Scikit-Learn 0.22
giotto-Learn 0.1.3
<!-- Thanks for contributing! -->
| 0easy
|
Title: Bug: Arguments precedence inconsistency in CLI `run` command
Body: ### Description
For most options, the env file has higher priority, but there are some for which CLI options take precedence.
```python
reload_dirs = env.reload_dirs or reload_dir
reload_include = env.reload_include or reload_include
reload_exclude = env.reload_exclude or reload_exclude
...
ssl_certfile = ssl_certfile or env.certfile_path
ssl_keyfile = ssl_keyfile or env.keyfile_path
create_self_signed_cert = create_self_signed_cert or env.create_self_signed_cert
```
In the name of consistency, this needs to be fixed!
Also, CLI options should take precedence over env values (CLI > ENV); a consistent version is sketched below.
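A minimal sketch of the consistent form (assuming CLI-over-env precedence for every option):
```python
# the CLI value wins when given; otherwise fall back to the env file
reload_dirs = reload_dir or env.reload_dirs
reload_include = reload_include or env.reload_include
reload_exclude = reload_exclude or env.reload_exclude
ssl_certfile = ssl_certfile or env.certfile_path
ssl_keyfile = ssl_keyfile or env.keyfile_path
create_self_signed_cert = create_self_signed_cert or env.create_self_signed_cert
```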
### Litestar Version
2.7.0
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | 0easy
|
Title: add unit tests for onboarding tutorial
Body: I added a small tutorial (triggered with `python -m ploomber.onboard`) but it doesn't have tests yet | 0easy
|
Title: Bug: constrained string regex + min/max length may produce invalid values
Body: When a `Field` specifies `max_length` together with a regex that includes start- and end-of-string tokens and a repeatable pattern, generation can produce an invalid string value that leads to a `ValidationError`. See the reproduction:
```python
from pydantic import BaseModel, Field
from pydantic_factories import ModelFactory
from pydantic_factories.value_generators.regex import RegexFactory
PATTERN = r'^a+b$'
GOOD_SEED = 0
BAD_SEED = 5
class A(BaseModel):
    a: str = Field(..., regex=PATTERN, min_length=2, max_length=10)

class AF(ModelFactory[A]):
    __model__ = A

AF.seed_random(GOOD_SEED)
print(AF.build())  # a='aaaaaaab'
print(RegexFactory(seed=BAD_SEED)(PATTERN))  # aaaaaaaaaab
AF.seed_random(BAD_SEED)
print(AF.build())  # this breaks
```
```console
Traceback (most recent call last):
File "[redacted]reproduce-bug.py", line 18, in <module>
print(AF.build()) # This breaks
File "[redacted]/factory.py", line 724, in build
return cast("T", cls.__model__(**kwargs)) # pyright: ignore
File "pydantic/main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for A
a
string does not match regex "^a+b$" (type=value_error.str.regex; pattern=^a+b$)
```
As far as I can tell, this is a result of [this piece of code](https://github.com/starlite-api/pydantic-factories/blob/main/pydantic_factories/constraints/strings.py#L57-L63) cutting off the end of string after calling `RegexFactory`. I was surprised that the test suite didn't catch this error earlier, but to my surprise the [test case that is supposed to verify this behavior](https://github.com/starlite-api/pydantic-factories/blob/main/tests/constraints/test_string_constraints.py#L55-L58) will not report any issues if the string produced by `handle_constrained_string` doesn't match the regex. Not sure if that was done on purpose, but adding `assert match is not None` leads to this test failing.
I can't think of a quick solution to this issue; however, I think having a note in the documentation regarding regex fields could be helpful. One possible direction is sketched below.
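A hypothetical regenerate-instead-of-truncate approach (sketch only, not the library's actual code):
```python
def constrained_regex_value(regex, min_length, max_length, factory, max_tries=100):
    # Regenerate until the value satisfies the length constraints,
    # instead of cutting the string and breaking the regex match.
    for _ in range(max_tries):
        value = factory(regex)
        if ((min_length is None or len(value) >= min_length)
                and (max_length is None or len(value) <= max_length)):
            return value
    raise ValueError("could not satisfy both the regex and the length constraints")
```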
I am interested in working on a more structured solution to this issue, unless it's unlikely to be merged. :) | 0easy
|
Title: AlphaTrend Indicator Request
Body: Hi, thank you for all your work. You people are amazing. I was wondering if you could add the AlphaTrend indicator by Kıvanç Özbilgiç?
We are all seeing amazing reviews of this new indicator, and maybe you can help us. Thank you!
Here is the python code: https://github.com/OnlyFibonacci/AlgoSeyri/blob/main/alphaTrendIndicator.py
Tradingview: https://tr.tradingview.com/script/o50NYLAZ-AlphaTrend/ | 0easy
|
Title: Changing Tutorials to Colab Notebooks
Body: ### Feature Description
The source file for all the tutorials on the website (autokeras.com) should be a `.ipynb` file.
### Reason
<!---
Why do we need the feature?
-->
Make it easier for people to try out the tutorials.
### Solution
<!---
Please tell us how to implement the feature,
if you have one in mind.
-->
Please help us change any of the `.md` files in the following directory to a `.ipynb` file.
https://github.com/keras-team/autokeras/tree/master/docs/templates/tutorial
You can refer to the `image_classification.ipynb` as an example.
https://github.com/keras-team/autokeras/blob/master/docs/templates/tutorial/image_classification.ipynb
Each pull request should only replace one file. | 0easy
|
Title: Voila and jupyterlab-github
Body: I tried to use voila with [jupyterlab-github](https://github.com/jupyterlab/jupyterlab-github) but it doesn't work.
The problem is that Voilà wants to save the notebook before rendering it, but that is not possible in this case (the files are read-only).
Do you think it is possible to render the notebook without saving in this case?
Regards
| 0easy
|
Title: ValueError: Field "duration" has type "timedelta64[ns]" which is not supported by Altair.
Body: It worked fine with 2 columns of datetimes and 2 columns of integers. I used df.apply(f(x)) to create a timedelta column and got the following warning. Full text:
/opt/conda/lib/python3.7/site-packages/IPython/core/formatters.py:918: UserWarning:
Unexpected error in rendering Lux widget and recommendations. Falling back to Pandas display.
Please report the following issue on Github: https://github.com/lux-org/lux/issues
/opt/conda/lib/python3.7/site-packages/lux/core/frame.py:632: UserWarning:Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/lux/core/frame.py", line 594, in _ipython_display_
self.maintain_recs()
File "/opt/conda/lib/python3.7/site-packages/lux/core/frame.py", line 451, in maintain_recs
self._widget = rec_df.render_widget()
File "/opt/conda/lib/python3.7/site-packages/lux/core/frame.py", line 681, in render_widget
widgetJSON = self.to_JSON(self._rec_info, input_current_vis=input_current_vis)
File "/opt/conda/lib/python3.7/site-packages/lux/core/frame.py", line 721, in to_JSON
recCollection = LuxDataFrame.rec_to_JSON(rec_infolist)
File "/opt/conda/lib/python3.7/site-packages/lux/core/frame.py", line 749, in rec_to_JSON
chart = vis.to_code(language=lux.config.plotting_backend, prettyOutput=False)
File "/opt/conda/lib/python3.7/site-packages/lux/vis/Vis.py", line 334, in to_code
return self.to_vegalite(**kwargs)
File "/opt/conda/lib/python3.7/site-packages/lux/vis/Vis.py", line 310, in to_vegalite
self._code = renderer.create_vis(self)
File "/opt/conda/lib/python3.7/site-packages/lux/vislib/altair/AltairRenderer.py", line 99, in create_vis
chart_dict = chart.chart.to_dict()
File "/opt/conda/lib/python3.7/site-packages/altair/vegalite/v4/api.py", line 373, in to_dict
dct = super(TopLevelMixin, copy).to_dict(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/altair/utils/schemapi.py", line 328, in to_dict
context=context,
File "/opt/conda/lib/python3.7/site-packages/altair/utils/schemapi.py", line 62, in _todict
for k, v in obj.items()
File "/opt/conda/lib/python3.7/site-packages/altair/utils/schemapi.py", line 63, in <dictcomp>
if v is not Undefined
File "/opt/conda/lib/python3.7/site-packages/altair/utils/schemapi.py", line 58, in _todict
return [_todict(v, validate, context) for v in obj]
File "/opt/conda/lib/python3.7/site-packages/altair/utils/schemapi.py", line 58, in <listcomp>
return [_todict(v, validate, context) for v in obj]
File "/opt/conda/lib/python3.7/site-packages/altair/utils/schemapi.py", line 56, in _todict
return obj.to_dict(validate=validate, context=context)
File "/opt/conda/lib/python3.7/site-packages/altair/vegalite/v4/api.py", line 363, in to_dict
copy.data = _prepare_data(original_data, context)
File "/opt/conda/lib/python3.7/site-packages/altair/vegalite/v4/api.py", line 84, in _prepare_data
data = _pipe(data, data_transformers.get())
File "/opt/conda/lib/python3.7/site-packages/toolz/functoolz.py", line 627, in pipe
data = func(data)
File "/opt/conda/lib/python3.7/site-packages/toolz/functoolz.py", line 303, in __call__
return self._partial(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/altair/vegalite/data.py", line 19, in default_data_transformer
return curried.pipe(data, limit_rows(max_rows=max_rows), to_values)
File "/opt/conda/lib/python3.7/site-packages/toolz/functoolz.py", line 627, in pipe
data = func(data)
File "/opt/conda/lib/python3.7/site-packages/toolz/functoolz.py", line 303, in __call__
return self._partial(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/altair/utils/data.py", line 149, in to_values
data = sanitize_dataframe(data)
File "/opt/conda/lib/python3.7/site-packages/altair/utils/core.py", line 317, in sanitize_dataframe
"".format(col_name=col_name, dtype=dtype)
ValueError: Field "duration" has type "timedelta64[ns]" which is not supported by Altair. Please convert to either a timestamp or a numerical value. | 0easy
|
Title: Hello, this page looks great; is the uncompiled source code available?
Body: 1. For bug reports, please describe the minimal steps to reproduce.
2. For general questions: 99% of the answers are in the help documentation, please read it carefully: https://kmfaka.baklib-free.com/
3. For new feature or new concept submissions: please describe them in text or annotate screenshots.
| 0easy
|
Title: Update readme to say that FX works with outputformat=pandas
Body: > The foreign exchange is just metadata, thus only available as json format (using the 'csv' or 'pandas' format will raise an Error)
This is incorrect; the FX endpoints do work with the pandas output format, for example (sketch, untested):
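```python
from alpha_vantage.foreignexchange import ForeignExchange

fx = ForeignExchange(key="YOUR_API_KEY", output_format="pandas")
data, meta = fx.get_currency_exchange_daily(from_symbol="EUR", to_symbol="USD")
print(data.head())
```
| 0easy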
|
Title: Logo.png is not successfully replaced after upload
Body: The logo cannot be replaced: after uploading the logo.png image, the UI reports that the upload succeeded, but the logo is not actually replaced.
Deployed with Docker on Synology; please help resolve this.
| 0easy
|
Title: Improve efficiency and runtimes or memory usage (performance optimization)
Body: Try to speed up the runtime of certain methods, or reduce how much memory they use.
This is a great issue to get started contributing to this repo! There are many ways to achieve speedup. It is easy to verify speedup via basic benchmarking that also ensures the results remain the same after you have edited a method.
Start by running one specific method on a big dataset and profiling it thoroughly to see where the code bottlenecks are.
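For example, a profiling sketch (the method and the `labels`/`pred_probs` inputs are placeholders for whatever you benchmark):
```python
import cProfile
import pstats

from cleanlab.multiannotator import get_label_quality_multiannotator

# assumes `labels` and `pred_probs` are large benchmark inputs defined in __main__
cProfile.run("get_label_quality_multiannotator(labels, pred_probs)", "profile.out")
pstats.Stats("profile.out").sort_stats("cumtime").print_stats(20)
```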
Known methods where big speedups can likely be obtained include all methods in:
1. [`cleanlab.multiannotator`](https://docs.cleanlab.ai/master/cleanlab/multiannotator.html)
2. [`cleanlab.multilabel_classification`](https://docs.cleanlab.ai/master/cleanlab/multilabel_classification/)
3. [`cleanlab.segmentation`](https://docs.cleanlab.ai/master/cleanlab/segmentation/index.html)
4. [`cleanlab.object_detection`](https://docs.cleanlab.ai/master/cleanlab/object_detection/index.html)
5. [Datalab NonIIDIssueManager](https://docs.cleanlab.ai/master/cleanlab/datalab/internal/issue_manager/noniid.html) | 0easy
|
Title: Install latest spark version automatically
Body: Most of the time we do not pin the version of the software we install.
We should not hardcode this for Spark either.
This is where the hardcoding happens: https://github.com/jupyter/docker-stacks/blob/main/images/pyspark-notebook/Dockerfile
This is how I got rid of the hardcoding for Julia: https://github.com/jupyter/docker-stacks/blob/main/images/minimal-notebook/setup-scripts/setup_julia.py
We will also be able to close https://github.com/jupyter/docker-stacks/issues/1937
I mean, the issue won't be fixed in upstream, but we will no longer be responsible for updating to a new version when it is released - I think it would make sense to close such issues because this project doesn't fix/patch upstream issues most of the time. | 0easy
|
Title: Add GradientBoostingRegressor
Body: | 0easy
|
Title: 'Is active' filter on offers does not include suspended offers
Body: ### Issue Summary
A suspended offer is not displayed when filtering with the 'No' option on is_active.
### Steps to Reproduce
1. Create a suspended offer
2. Select the 'No' option on the is_active filter

### Technical details
* Python 3.8.10
* Django version: Version: 3.1
* Oscar version: Version: 3.1
| 0easy
|
Title: bad defaults for due date date/time picker
Body: <!--
Thanks for helping to improve nbgrader!
If you are submitting a bug report or looking for support, please use the below
template so we can efficiently solve the problem.
If you are requesting a new feature, feel free to remove irrelevant pieces of
the issue template.
-->
### Operating system
ubuntu 22.04
### `nbgrader --version`
0.9.2
### `jupyterhub --version` (if used with JupyterHub)
4.1.5
### `jupyter notebook --version`
jupyterlab 4.1.8
### Expected behavior
I have lots of due dates with random times. Faculty aren't realizing that the defaults are the current date time, so they set the date only. I think it would make more sense to default to midnight for a new due date, although obviously if you're adjusting an existing one it should default to the current due date.
Having UTC as the default is also odd. I'd expect it to default to local time. I've tried a couple of configuration settings and not found one that works.
### Actual behavior
### Steps to reproduce the behavior
| 0easy
|
Title: Delete/remove client method (flask client)
Body: I can only speak for the Flask client as that is what I'm using. It would be great if, alongside the `oauth.register(...)` method, there were an `oauth.remove(client_name)`.
**Is your feature request related to a problem? Please describe.**
I wanted to write a test for something and it involved registering an oauth2 (flask) client for testing purposes. After the tests complete, I want a clean way to remove that testing client from the oauth object, but I cannot see a way to do this.
Without it, running the tests poses the following issues: (1) if I create the client in my test with a fixed name (e.g. 'Test Client'), it will return the first client from the oauth registry for all subsequent tests, instead of creating a new client each time; (2) if I instead create the client with a unique name for each test (e.g. 'Test Client 1', 'Test Client 2', ...), the `oauth` registry will grow by as many clients as are registered until the server restarts. Obviously the latter is the expected behaviour, but a `.remove_client` method seems to be missing.
**Describe the solution you'd like**
Just like with `oauth.register(...)`, maybe something like `oauth.remove_client(client_name)` would be nice. For example, in the [https://github.com/lepture/authlib/blob/master/authlib/integrations/base_client/registry.py](https://github.com/lepture/authlib/blob/master/authlib/integrations/base_client/registry.py), we could add a new method:
```python
def remove(self, client_name):
    if client_name not in self._registry:
        raise KeyError(f"Client {client_name} not found in registry.")
    del self._registry[client_name]
    # clients are created lazily, so an entry may not exist for every registration
    self._clients.pop(client_name, None)
```
But I am not sure if this is sufficient or if it overlooks other issues.
**Describe alternatives you've considered**
As far as I can tell, we might be able to do this by simply editing the internal `oauth._clients` and `oauth._registry` dictionaries. For example `del oauth._clients[my_test_client_name]; del oauth._registry[my_test_client_name]` but I don't know if this fully accomplishes the goal. It definitely doesn't feel nice.
| 0easy
|
Title: [DOCS] Add bayesian methods to GMM density page.
Body: As mentioned https://github.com/koaning/scikit-lego/issues/608#issuecomment-1900005415, we only mention the bayesian GMM classifier in the [API docs](https://koaning.github.io/scikit-lego/api/mixture/). It would be good to add a paragraph/image to the [user guide](https://koaning.github.io/scikit-lego/user-guide/mixture-methods/) as well. | 0easy
|
Title: Mistake in tutorial code
Body: Tutorial: Similarity Queries
https://radimrehurek.com/gensim/auto_examples/core/run_similarity_queries.html#sphx-glr-auto-examples-core-run-similarity-queries-py
Notice the document order in the tutorial:
```
documents = [
"Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
```
The current snippet in the tutorial prints the documents in their original order, rather than the document associated with each similarity score.
```
sims = sorted(enumerate(sims), key=lambda item: -item[1])
for i, s in enumerate(sims):
    print(s, documents[i])
Out:
(2, 0.9984453) Human machine interface for lab abc computer applications
(0, 0.998093) A survey of user opinion of computer system response time
(3, 0.9865886) The EPS user interface management system
(1, 0.93748635) System and human system engineering testing of EPS
```
The code should be
```
sims = sorted(enumerate(sims), key=lambda item: -item[1])
for s in sims:
    print(s, documents[s[0]])
Out:
(2, 0.9984453) The EPS user interface management system
(0, 0.998093) Human machine interface for lab abc computer applications
(3, 0.9865886) System and human system engineering testing of EPS
(1, 0.93748635) A survey of user opinion of computer system response time
``` | 0easy
|
Title: Sphinx autodoc fails to reference objects through intersphinx
Body: As we don't seem to use autodoc anywhere, it seems we've missed this issue.
When using autodoc in a project, it will inspect the fully resolved name of an object, for example `aiohttp.client.ClientSession`. As the submodules are implementation details and we don't want to expose them to users, we only have `aiohttp.ClientSession` etc. in our docs.
To fix this, we should be able to use the `:canonical:` directive to create aliases that intersphinx can use for references. The end result should still not display the submodules to users reading the docs, but will allow intersphinx linking via the submodules.
https://www.sphinx-doc.org/en/master/usage/domains/python.html#directive-option-py-method-canonical
This probably needs to be done for almost all objects in the reference docs. | 0easy
|
Title: InputTextMessageContent must have "entities" field, not "caption_entities"
Body: 

| 0easy
|
Title: update mutation on nested document fails
Body: Hi, I'm trying to update a nested document including an EmbeddedDocumentListField / EmbeddedDocument with graphene-mongo. Creating a new user via create mutation works perfectly fine, but when I try to update a nested document, it fails with the error
`Invalid embedded document instance provided to an EmbeddedDocumentField: ['label'] `
Here is my code:
models.py:
```python
from mongoengine import Document, EmbeddedDocumentListField, EmbeddedDocument
from mongoengine.fields import StringField
class UserLabel(EmbeddedDocument):
    code = StringField()
    value = StringField()

class User(Document):
    meta = {'collection': 'user'}
    first_name = StringField(required=True)
    last_name = StringField(required=True)
    label = EmbeddedDocumentListField(UserLabel)
```
app.py:
```python
from flask import Flask
from flask_graphql import GraphQLView
import graphene
from graphene_mongo import MongoengineObjectType
from mongoengine import connect
from models import User as UserModel, UserLabel as UserLabelModel
app = Flask(__name__)
class UserLabel(MongoengineObjectType):
    class Meta:
        model = UserLabelModel

class User(MongoengineObjectType):
    class Meta:
        model = UserModel

class UserLabelInput(graphene.InputObjectType):
    code = graphene.String()
    value = graphene.String()

class UserInput(graphene.InputObjectType):
    id = graphene.String()
    first_name = graphene.String()
    last_name = graphene.String()
    label = graphene.List(UserLabelInput, required=False)

class Query(graphene.ObjectType):
    users = graphene.List(User)

    def resolve_users(self, info):
        return list(UserModel.objects.all())

class createUser(graphene.Mutation):
    user = graphene.Field(User)

    class Arguments:
        user_data = UserInput()

    def mutate(root, info, user_data):
        user = UserModel(
            first_name=user_data.first_name,
            last_name=user_data.last_name,
            label=user_data.label
        )
        user.save()
        return createUser(user=user)

class updateUser(graphene.Mutation):
    user = graphene.Field(User)

    class Arguments:
        user_data = UserInput()

    def mutate(self, info, user_data):
        user = UserModel.objects.get(id=user_data.id)
        if user_data.first_name:
            user.first_name = user_data.first_name
        if user_data.last_name:
            user.last_name = user_data.last_name
        if user_data.label:
            user.label = user_data.label
        user.save()
        return updateUser(user=user)

class Mutation(graphene.ObjectType):
    create_user = createUser.Field()
    update_user = updateUser.Field()

schema = graphene.Schema(query=Query, mutation=Mutation)
app.add_url_rule('/graphql', view_func=GraphQLView.as_view('graphql', schema=schema, graphiql=True))

if __name__ == '__main__':
    app.run(debug=True, port=1234)
```
Trying to run this update mutation via graphiql:
```graphql
mutation {
  updateUser(userData: {
    id: "5d6f8bbbe3ec841d93229322",
    firstName: "Peter",
    lastName: "Simpson",
    label: [
      {
        code: "DE",
        value: "Peter Simpson"
      }
    ]
  }) {
    user {
      id
      firstName
      lastName
      label {
        code
        value
      }
    }
  }
}
```
I get the error:
```
{
  "errors": [
    {
      "message": "ValidationError (User:5d6f8bbbe3ec841d93229322) (Invalid embedded document instance provided to an EmbeddedDocumentField: ['label'])",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": [
        "updateUser"
      ]
    }
  ],
  "data": {
    "updateUser": null
  }
}
```
Updating without the nested field works perfectly fine.
How can I resolve this?
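(For reference, a likely fix is to convert the graphene input objects, which are plain mappings rather than mongoengine documents, into embedded-document instances before assignment. Sketch, untested:)
```python
if user_data.label:
    user.label = [UserLabelModel(**dict(label)) for label in user_data.label]
```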
| 0easy
|
Title: add len on DocIndex
Body: # Context
len(db) should return db.num_docs when db is a DocIndex; a minimal sketch is below.
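A minimal sketch of the dunder (assuming `num_docs` is exposed as an attribute; if it is a method, call it instead):
```python
class DocIndex:
    # existing implementation elided

    def __len__(self) -> int:
        # delegate to the existing document count
        return self.num_docs
```
| 0easy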
|
Title: [Tracker] Migrate test classes to use `BaseTester` and its functionalities.
Body: Following on from #2745
Our tests are grouped using classes for tests. In the past, we could not adopt our `BaseTester` everywhere, because it had some abstract methods which made it half annoying. With our kinda "new" `BaseTester` - [`from testing.base import BaseTester`](https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/testing/base.py#L67) - we can adopt it everywhere.
What needs to be done:
1. pickup a test file
2. add the `BaseTester` as the parent class of the test classes
3. update the `assert_close` to be `self.assert_close`
4. update the `test_gradcheck` to use `self.gradcheck` instead of `gradcheck` from `torch.autograd`
5. delete the use of `tensor_to_gradcheck_var` on the `test_gradcheck` - this is automatically handled by the `BaseTester` class
6. **extra**: remove the `dtype` argument of the `test_gradcheck` and just run it using `torch.float64`
7. open a PR :)
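A rough sketch of the target shape (the operator and the `device`/`dtype` fixtures are illustrative):
```python
import torch

from testing.base import BaseTester


def my_op(x: torch.Tensor) -> torch.Tensor:  # stand-in for the operator under test
    return x * 2.0


class TestMyOp(BaseTester):
    def test_smoke(self, device, dtype):
        x = torch.rand(2, 3, device=device, dtype=dtype)
        self.assert_close(my_op(x), x * 2.0)

    def test_gradcheck(self, device):
        # BaseTester.gradcheck prepares tensors internally, so no
        # tensor_to_gradcheck_var and no dtype argument are needed
        x = torch.rand(2, 3, device=device, dtype=torch.float64)
        self.gradcheck(my_op, (x,))
```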
You can check #2745, which has done a bunch of these updates across our tests
The goal here is to reorganize our test suite. If you have any questions, you can call us and/or ask on our Slack page too [](https://join.slack.com/t/kornia/shared_invite/zt-csobk21g-2AQRi~X9Uu6PLMuUZdvfjA) | 0easy
|
Title: Add the missing docstrings to the `code_evaluator.py` file
Body: Add the missing docstrings to the [code_evaluator.py](https://github.com/scanapi/scanapi/blob/main/scanapi/evaluators/code_evaluator.py) file
[Here](https://github.com/scanapi/scanapi/wiki/First-Pull-Request#7-make-your-changes) you can find instructions of how we create the [docstrings](https://www.python.org/dev/peps/pep-0257/#what-is-a-docstring).
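A hypothetical example of the style (the function and its description are illustrative, not the actual `code_evaluator.py` contents):
```python
def evaluate(self, element):
    """Evaluate Python code embedded in a spec value.

    Args:
        element: spec string that may contain an embedded expression.

    Returns:
        The element with any embedded expression replaced by its result.
    """
```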
Child of https://github.com/scanapi/scanapi/issues/411 | 0easy
|
Title: Organizational Project Skill Demand metric API
Body: The canonical definition is here: https://chaoss.community/?p=3566 | 0easy
|
Title: [Feature request] Add apply_to_images to Pad
Body: | 0easy
|
Title: Add callback to control number of iterations
Body: Add a simple callback to control the number of iterations; a rough sketch of the idea is below.
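A hypothetical callback, not tied to any final API:
```python
class MaxIterations:
    """Signal a stop once `n` iterations have run."""

    def __init__(self, n):
        self.n = n
        self.count = 0

    def __call__(self, result):
        self.count += 1
        return self.count >= self.n  # True -> stop the loop
```
| 0easy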
|
Title: Invalid use of prefix in suite name is not validated
Body: One can [force execution order](https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#test-suite-name-and-documentation) by naming suite files with prefix.
If a user gives an empty suite name after the prefix (e.g. `01__.robot`), the `name` attribute is missing from output.xml. Although Rebot doesn't mind this (log and report just show an empty suite name), it causes problems for other ecosystem tooling (e.g. the [Jenkins plugin](https://issues.jenkins.io/browse/JENKINS-69807)).
It would be nice if RF would consider this as invalid suite name. | 0easy
|
Title: Documentation of properties required for row_result
Body: **Describe the bug**
The `row_result` object which is generated by each row import has undocumented properties. This was raised [here](https://stackoverflow.com/a/68954762/39296).
**Solution**
I guess attributes should be documented [here](https://django-import-export.readthedocs.io/en/latest/api_results.html#import_export.results.Result)
**To Reproduce**
**Versions (please complete the following information):**
- Django Import Export: 2.5.0
- Python (all)
- Django (all)
| 0easy
|
Title: Undeclared dependency on bs4
Body: ```
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/shop/models/notification.py", line 4, in <module>
from bs4 import BeautifulSoup
```
Add this to `setup.py` and check if it is declared in `requirements.txt`. | 0easy
|
Title: arbitrary value imputer: allow imputation with different numbers in 1 imputer
Body: Allow the user to enter a dictionary with the values to use to replace the different variables, along the lines of the sketch below.
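A possible API (the `imputer_dict` parameter name is hypothetical):
```python
from feature_engine.imputation import ArbitraryNumberImputer  # module path may vary by version

# one imputation value per variable
imputer = ArbitraryNumberImputer(imputer_dict={"age": -1, "income": 0})
imputer.fit(X_train)                    # X_train: your training dataframe
X_train_t = imputer.transform(X_train)
```
| 0easy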
|
Title: NMT Inference: Chunk overlength sequences and translate in sequence
Body: We may add an equivalent functionality to the NMT evaluation scripts
https://github.com/awslabs/sockeye/blob/3736f18e3286895cb1eb29817396a8b508edd802/sockeye/inference.py#L657-L659
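A naive sketch of the idea (helper names are illustrative, not sockeye's API):
```python
def translate_long(tokens, translate_fn, max_len=100):
    # Split an overlength token sequence into chunks, translate each
    # chunk independently, and join the outputs in order.
    chunks = [tokens[i:i + max_len] for i in range(0, len(tokens), max_len)]
    return " ".join(translate_fn(" ".join(chunk)) for chunk in chunks)
```
| 0easy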
|
Title: Add the ability to add a custom date format in datetime rendering
Body: In times.js we hard-code a specific datetime value. Instead, we should probably allow passing in a format specifier so that a template file can control how a datetime gets rendered. | 0easy
|
Title: [FEA] featurize/umap should allow specifying which columns to multilabel
Body: **Is your feature request related to a problem? Please describe.**
Right now multilabel is off by default and only works when there is exactly one column and it contains lists of lists
**Describe the solution you'd like**
It should really be either auto-detect of all columns or explicit opt-in to specific columns
**Describe alternatives you've considered**
`g.featurize(y=['dirty_categorical'], multilabel=[ 'list_of_list_col' ])`
**Additional context**
* common in recommendations
* common in community detection
* ...
| 0easy
|
Title: tox -e expansion seems to parse incorrectly
Body: I'm using tox 4.2.3.
``` ini
[tox]
[testenv]
deps =
    pytest6.x: pytest~=6.0
    pytest7.x: pytest~=7.0
commands = pytest --version
```
``` console
$ tox -e py38-pytest7.x
py38-pytest7.x: commands[0]> pytest --version
pytest 7.2.0
py38-pytest7.x: OK (0.57=setup[0.15]+cmd[0.42] seconds)
congratulations :) (0.87 seconds)
```
``` console
$ tox -e 'py38-pytest{6.x,7.x}'
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/tox_env/api.py", line 248, in setup
self._setup_env()
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/tox_env/python/runner.py", line 107, in _setup_env
self._install_deps()
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/tox_env/python/runner.py", line 110, in _install_deps
requirements_file: PythonDeps = self.conf["deps"]
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/sets.py", line 114, in __getitem__
return self.load(item)
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/sets.py", line 125, in load
return config_definition.__call__(self._conf, self.loaders, ConfigLoadArgs(chain, self.name, self.env_name))
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/of_type.py", line 102, in __call__
value = loader.load(key, self.of_type, self.factory, conf, args)
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/api.py", line 124, in load
raw = self.load_raw(key, conf, args.env_name)
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/ini/__init__.py", line 42, in load_raw
return self.process_raw(conf, env_name, self._section_proxy[key])
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/ini/__init__.py", line 56, in process_raw
factor_filtered = filter_for_env(strip_comments, env_name) # select matching factors
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/ini/factor.py", line 13, in filter_for_env
set(chain.from_iterable([(i for i, _ in a) for a in find_factor_groups(name)])) if name is not None else set()
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/ini/factor.py", line 13, in <listcomp>
set(chain.from_iterable([(i for i, _ in a) for a in find_factor_groups(name)])) if name is not None else set()
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/ini/factor.py", line 65, in find_factor_groups
for env in expand_env_with_negation(value):
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/ini/factor.py", line 83, in expand_env_with_negation
raise ValueError(variant_str)
ValueError: py38-pytest{6.x
py38-pytest{6.x: FAIL ✖ in 0.1 seconds
7.x}: internal error
Traceback (most recent call last):
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/session/cmd/run/single.py", line 45, in _evaluate
tox_env.setup()
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/tox_env/api.py", line 248, in setup
self._setup_env()
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/tox_env/python/runner.py", line 107, in _setup_env
self._install_deps()
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/tox_env/python/runner.py", line 110, in _install_deps
requirements_file: PythonDeps = self.conf["deps"]
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/sets.py", line 114, in __getitem__
return self.load(item)
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/sets.py", line 125, in load
return config_definition.__call__(self._conf, self.loaders, ConfigLoadArgs(chain, self.name, self.env_name))
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/of_type.py", line 102, in __call__
value = loader.load(key, self.of_type, self.factory, conf, args)
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/api.py", line 124, in load
raw = self.load_raw(key, conf, args.env_name)
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/ini/__init__.py", line 42, in load_raw
return self.process_raw(conf, env_name, self._section_proxy[key])
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/ini/__init__.py", line 56, in process_raw
factor_filtered = filter_for_env(strip_comments, env_name) # select matching factors
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/ini/factor.py", line 13, in filter_for_env
set(chain.from_iterable([(i for i, _ in a) for a in find_factor_groups(name)])) if name is not None else set()
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/ini/factor.py", line 13, in <listcomp>
set(chain.from_iterable([(i for i, _ in a) for a in find_factor_groups(name)])) if name is not None else set()
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/ini/factor.py", line 65, in find_factor_groups
for env in expand_env_with_negation(value):
File "/ifs/home/dtucker/.local/pipx/venvs/tox/lib/python3.8/site-packages/tox/config/loader/ini/factor.py", line 83, in expand_env_with_negation
raise ValueError(variant_str)
ValueError: 7.x}
py38-pytest{6.x: FAIL code 2 (0.10 seconds)
7.x}: FAIL code 2 (0.01 seconds)
evaluation failed :( (0.25 seconds)
``` | 0easy
|
Title: re-write the cli using click (or maybe typer?)
Body:
### Description
I'm the creator and only maintainer of the project at the moment. I'm working on adding new features, and thus I would like to leave this issue open for newcomers who want to contribute to the project.
Basically, I wrote the cli using [argparse](https://docs.python.org/3/library/argparse.html) since it is already part of the standard library. However, I'm starting to rethink this choice because it has some issues that the [click](https://click.palletsprojects.com/en/8.0.x/why/#why-not-argparse) library already overcomes.
With that said, it would be great to re-write the cli in click or even in [typer](https://github.com/tiangolo/typer), which also uses click under the hood but adds more features.
If someone wants to work on this, please feel free to start directly, you don't need to ask for permission.
_PS: Feel free to suggest other libraries. I just suggested click since I'm familiar with it_
**I hope not, but if this issue stays open for a long time, I will start working on it myself.**
| 0easy
|
Title: Allow logarithmic plots in the plot_evaluations/plot_objective
Body: When using plot_evaluations/plot_objective with at least one log-uniform dimension, it would be useful to be able to draw a given subplot with one or two logarithmic axes. The default could also be inferred from result.space; a possible API is sketched below.
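A possible call (the `log_scale` parameter is hypothetical):
```python
# request logarithmic axes for specific dimensions
plot_objective(result, log_scale=["learning_rate"])

# or derive a default from the search space
log_dims = [d.name for d in result.space.dimensions
            if getattr(d, "prior", None) == "log-uniform"]
```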
| 0easy
|
Title: Refactoring: jobs.py
Body: This is a meta-issue listing refactoring tasks.
After #5361 I see we need to refactor jobs.py code:
* Make singleton class like `xonsh.built_ins.XSH`
* Maybe: add jobs as `XSH.jobs` to make working with them in tests more elegant
* Optimize the methods that operate on the jobs list: add functions to get a job by pid and to update a job's state or attributes (see the sketch after this list).
* Add a few tests.
* Rename "obj" to "proc" with backwards compatibility (https://github.com/xonsh/xonsh/pull/5442)
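A sketch of such a lookup (assumes a mapping of job number to a job dict with a "pids" list, which is how jobs are tracked today; not a final API):
```python
def get_job_by_pid(jobs, pid):
    """Return (job_number, job_dict) for the job owning `pid`, else None."""
    for num, job in jobs.items():
        if pid in job.get("pids", ()):
            return num, job
    return None
```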
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|