text (string, lengths 20–57.3k) | labels (class label, 4 classes)
---|---|
Title: Recursive CTE has wrong number of columns when combined with joined table inheritance
Body: ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10123
the ORM is emitting the second column in some way that the CTE is not picking up. If I select the literal "Manager.id, Employee.id" columns it works, and if I select from the join() of the two tables it works, so see what's up in the compiler
```py
from sqlalchemy import ForeignKey
from sqlalchemy import create_engine
from sqlalchemy import select
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import Session
engine = create_engine('sqlite:///:memory:', echo=True)
class Base(DeclarativeBase):
pass
class Employee(Base):
__tablename__ = "Employee"
id: Mapped[str] = mapped_column(primary_key=True, index=True)
discriminator: Mapped[str] = mapped_column()
general : Mapped[str] = mapped_column()
__mapper_args__ = {"polymorphic_on": discriminator}
class Manager(Employee):
__tablename__ = "Manager"
id: Mapped[str] = mapped_column(ForeignKey("Employee.id"), primary_key=True, )
special : Mapped[str] = mapped_column()
__mapper_args__ = {
"polymorphic_identity": "Manager",
}
Base.metadata.create_all(engine)
my_cte = (select(Manager)
.filter(Manager.special == 'my special')
.filter(Manager.general == 'my general')
.cte(recursive=True))
session = Session(engine)
session.scalars(select(my_cte)).all()
```
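As a sketch of the workaround noted in the maintainer comment above (same models; `join` is an extra import), building the CTE from an explicit join of the two underlying tables carries both id columns through:
```py
from sqlalchemy import join

# Workaround sketch: select from an explicit join of the underlying tables so
# the CTE contains both id columns (mirrors the "select from the join()" note).
emp, mgr = Employee.__table__, Manager.__table__
joined_cte = (
    select(emp.c.id, mgr.c.id)
    .select_from(join(emp, mgr, emp.c.id == mgr.c.id))
    .where(mgr.c.special == 'my special')
    .where(emp.c.general == 'my general')
    .cte(recursive=True)
)
session.execute(select(joined_cte)).all()
```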
| 2hard
|
Title: Replace lepture editor with boostrap-markdown for markdown
Body: Because it is light, has a code button, (check possibility of base64 image support) and it integrates with flask admin bootstrap theme.
Lepture is not rendering well
http://www.codingdrama.com/bootstrap-markdown/
| 2hard
|
Title: [MNT]: Data size consistency checks in _CollectionsWithSizes
Body: ### Summary
Extracted from https://github.com/matplotlib/matplotlib/issues/12021#issuecomment-530086713. This is a tracking issue so that we can close #12021 but the idea is not lost. It does not need immediate action (and may even be hard to act upon).
There is no built-in size check for the data in _CollectionWithSizes subclasses. For example, for `PathCollection`, one can have 10 paths, 4 sizes and 2 edgecolors.
```
import matplotlib.pyplot as plt
from matplotlib.collections import PathCollection
from matplotlib.path import Path
paths = [Path([(0, 0), (0.5, i), (1, 0)]) for i in range(10)]
sizes = [20, 40, 60, 80]
# 10 paths, 4 sizes, 2 edgecolors:
pc = PathCollection(paths, sizes=sizes, facecolor='none', edgecolors=['r', 'g'])
ax = plt.gca()
ax.add_collection(pc)
ax.set(xlim=(0, 3), ylim=(0, 20))
```

The behavior is largely undocumented (though some plotting functions mention cycling over properties like colors). AFAICS: the paths effectively define the number of elements; sizes, facecolor, etc. are cycled through to match the paths (if there are more sizes than paths, the additional sizes are simply unused; if there are fewer sizes, the sizes are cycled).
Central question: Is this behavior desired? On the one hand, it can be convenient. On the other hand it can be confusing and lead to unnoticed errors.
Note: I suspect that changing the behavior is difficult. (i) It would need deprecation, which is cumbersome but possible. (ii) The *thing* (e.g. paths) and the properties (sizes, facecolors) are currently decoupled; they are brought together at draw time. If we do size checks, they likely can also happen only at draw time. We have the individual `set_*` methods, and size checks in there would prevent any later change of the number of elements: `set_paths(paths); set_sizes(sizes)` would mutually exclude changing the number of elements. Note that this is similar to #26410, but I think we cannot get away with a collective `set_XYUVC` style solution here.
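For illustration, a minimal sketch (not matplotlib's actual code) of the cycling semantics described above, where each property is resized to the number of paths at draw time:
```python
import numpy as np

def cycle_to_paths(values, n_paths):
    # Cycle (or truncate) property values so their length matches the number
    # of paths, mirroring the draw-time behavior described above.
    return np.resize(np.atleast_1d(values), n_paths)

print(cycle_to_paths([20, 40, 60, 80], 10))  # 4 sizes cycled over 10 paths
print(cycle_to_paths(['r', 'g'], 10))        # 2 edgecolors cycled over 10 paths
```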
### Proposed fix
_No response_ | 2hard
|
Title: Experimental support for the Decimal type
Body: ## Why is this experimental?
Currently Prisma Client Python does not have access to the field metadata containing the precision of `Decimal` fields at the database level. This means that we cannot:
- Raise an error if you attempt to pass a `Decimal` value with a greater precision than the database supports, leading to implicit truncation which may cause confusing errors
- Set the precision level on the returned `decimal.Decimal` objects to match the database level, potentially leading to even more confusing errors.
To try and mitigate the effects of these errors you must be explicit that you understand that the support for the `Decimal` type is not up to the standard of the other types. You do this by setting the `enable_experimental_decimal` config flag, e.g.
```prisma
generator py {
provider = "prisma-client-py"
enable_experimental_decimal = true
}
``` | 2hard
|
Title: Optional cache for template rendering
Body: use redis
| 2hard
|
Title: [ENH] Enable chiral morgans by default
Body: # Brief Description
I would like to propose changing `janitor.chemistry.morgan_fingerprints` to pass `useChirality=True` to both `GetMorganFingerprintAsBitVect` and `GetHashedMorganFingerprint `. This should be exposed directly as default to `True` in the Python API. | 2hard
|
Title: Question around pydantic usage and performance
Body: ## Problem
Pydantic is slow, there's no question around that. Pydantic v2 (just released) is faster but still, not as fast [as other libraries](https://gist.github.com/jcrist/d62f450594164d284fbea957fd48b743?permalink_comment_id=4619455#gistcomment-4619455)
I'm curious as to why there is such tight coupling to pydantic, given the performance impact.
## Suggested solution
I'm not in a position where I can comfortably suggest a solution.
## Alternatives
`dataclass` or `msgspec`
## Additional context
I was benchmarking Nodejs's implementation of prisma (Hapi + Prisma) and Python's implementation (FastAPI + Prisma).
Initially, python's implementation was 50x slower in raw throughput, but with lots of optimization was brought to 25x slower.
We would love to use Prisma in our backend python web stack as it's more convenient and has better dev experience than other python ORMs, but there are concerns around performance. | 2hard
|
Title: Add support for metrics
Body: Prisma have recently added support for retrieving metrics relating to the Prisma Client. We should support this too.
https://www.prisma.io/docs/concepts/components/prisma-client/metrics | 2hard
|
Title: Add Wordpress import script
Body: We could build something like wp2octopress.
https://github.com/mlindgren/wp2octopress/blob/master/wp2octopress.py
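A very rough sketch (standard library only; element names follow the WordPress WXR export format) of the parsing such an import script would start from:
```python
import xml.etree.ElementTree as ET

# Namespace used by WXR exports for the full post body.
NS = {'content': 'http://purl.org/rss/1.0/modules/content/'}

tree = ET.parse('wordpress_export.xml')  # hypothetical export file
for item in tree.getroot().iter('item'):
    title = item.findtext('title')
    body = item.findtext('content:encoded', namespaces=NS)
    print(title, len(body or ''))
```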
| 2hard
|
Title: Improve Client observability
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Debugging performance problems can be quite hard, especially due to the Rust black box that Prisma is. We should be able to provide some form of tracing to help users out.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Unclear at the moment, would probably build on top of https://github.com/prisma/prisma/issues/9601 / https://github.com/prisma/client-planning/issues/21
| 2hard
|
Title: Provide a type safe SQL builder
Body: Prisma recently added preview support for typed sql queries: https://www.prisma.io/docs/orm/prisma-client/using-raw-sql/typedsql
| 2hard
|
Title: Add support for third party concurrency libraries
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Not all python frameworks use the standard library `asyncio` module for running coroutines, we should add first class support for such frameworks
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Not clear, don't have enough experience using other concurrency frameworks.
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
- [trio](https://github.com/python-trio/trio)
- [gevent](http://www.gevent.org/)
- [curio](https://github.com/dabeaz/curio)
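One common approach, sketched below with the third-party `anyio` package (an assumption, not a decision), is to write the async internals against an abstraction layer that runs on both asyncio and trio:
```python
import anyio

async def main():
    # Library code written against anyio APIs runs unchanged on either backend.
    await anyio.sleep(0.1)
    print("works on asyncio and trio")

anyio.run(main)                  # default asyncio backend
anyio.run(main, backend="trio")  # trio backend (requires trio to be installed)
```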
| 2hard
|
Title: Add support for `@db.Date` fields
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
We don't currently support `DateTime` fields that represent `Date`s at the database level because we generate a `datetime.datetime` type instead of `datetime.date` which results in failed Pydantic validation.
```prisma
date DateTime @default(dbgenerated("CURRENT_DATE")) @db.Date
```
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
This is essentially blocked on https://github.com/prisma/prisma/issues/3998 unfortunately...
| 2hard
|
Title: separate layout from application
Body: This would enable multiple layouts or dashboard views. This could be used with a tab display or multiple routes for example.
working on this in #158 | 2hard
|
Title: Support variables in format `$var` in addition to `${var}`
Body: Curly braces required by Robot's variable syntax can be considered somewhat distracting and even ugly. Depending on the keyboard layout, writing them can also require annoying finger acrobatics. It would thus be convenient if it would be possible to use `$var` instead of `${var}` and also `@list`, `&dict` and `%env` instead of `@{list}`, `&{dict}` and `%{env}`. For example, all these could be supported:
```
*** Variables ***
$VARIABLE value
*** Test Cases ***
Example
Log $VARIABLE
$var = Keyword
FOR $item IN @stuff
Log $item
END
```
In all the above cases the `$var` syntax is used on its own and that would work fine. This syntax wouldn't work too well when embedded to a string, though. For example, if you have variables `$name` and `$named`, string `Hello, $name!` could be parsed like `Hello, ${name}!`, but `abc$named` could be either `abc${name}d` or `abc${named}`. We could require using the `${var}` syntax in cases where `$var` is ambiguous, that's how [shell scripts](https://learn-bash.org/variables.html) work, but I believe it's better to require it always when variables are embedded. That makes the syntax more consistent and this approach is also considerably easier to implement. We can, however, look at this again later if the `$var` syntax is universally considered better and using it always (when possible) is considered important.
Supporting the `$var` syntax would be a backwards incompatible change, but if we support it only when variables are used on their own, it shouldn't be that bad. It would mean that all arguments starting with `$`, `@`, `&` and `%` would need to be escaped like `\$not_var`. We already require this with `#` (which otherwise starts a comment) so there's precedence for such syntax. Nevertheless, this would break lot of tests/tasks and the change can be made only in a major version. It would also be a good idea to deprecate using arguments starting with these characters already earlier. If this enhancement is considered important, it could be added in RF 7.0 and deprecation added in RF 6.2 or possibly already in RF 6.1.
Although the basic syntax would be simple, there are some design decisions to be made:
1. Should we limit the variable base name to only alphanumeric characters and underscores? In other words, should something like `$! als lkjas` be considered a variable with base name `! als lkjas`, or should the whole thing be considered a literal string? Being more strict with the variable name would mean there's less need for escaping, but then these variables would work inconsistently compared to "normal" variables. It would also mean that some built-in variables like `${/}` couldn't be used like `$/`. I have a feeling that it's better if we don't limit the name.
2. Should this syntax support item access like `$list[0]` and `$dict[key][nested]`? I believe it should.
3. Should this syntax support "extended variable syntax" like `$var.upper()` and `$SPACE * 10`? I don't have too strong opinion on this, but I guess it would be better to support it. Needing to use `${var.upper()}` in this case would be a bit strange and inconsistent.
I'll add this issue tentatively to RF 7.0 scope, but we still need to make a bit more official decision about it. If we decide to include it, then another issue should be submitted about deprecating arguments starting with `$` and other such variable meta characters. | 2hard
|
Title: PyTorch 1.5.0 Upgrade
Body: The PyTorch used in this project is out dated, and some important features have changed, for example, Variable have been deprecated for quite some time.
It is time to move to the next step. | 2hard
|
Title: Performance Improvements Epic
Body: - [ ] #213
- [ ] #27 | 2hard
|
Title: MultiSymbol Read for VersionStore
Body: See #394 | 2hard
|
Title: Add support for full text search for PostgreSQL
Body: [https://www.prisma.io/docs/concepts/components/prisma-client/full-text-search](https://www.prisma.io/docs/concepts/components/prisma-client/full-text-search)
## Suggested solution
Something like the following, `search` should be added to `StringFilter` and must be an instance of `String`.
Should be noted that I'm not stuck on `String` being the name.
```py
# NOTE: actual API is still TODO
from prisma.querying import String
await client.post.find_first(
where={
'content': {
'search': String.contains('cat', 'dog'),
},
},
)
await client.post.find_first(
where={
'content': {
# for anything we don't explicitly support
'search': String.raw('fox \| cat'),
},
},
)
``` | 2hard
|
Title: Query engine panics when given a decimal mantissa larger than u128 on PostgreSQL
Body: <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
```py
await client.types.create({'decimal_': Decimal(2.1234)})
```
```
{"timestamp":"2022-05-15T17:00:38.838484Z","level":"ERROR","fields":{"message":"PANIC","reason":"called `Option::unwrap()` on a `None` value","file":"/Users/runner/.cargo/git/checkouts/quaint-9f01e008b9a89c14/479e08a/src/connector/postgres/conversion/decimal.rs","line":81,"column":39},"target":"query_engine"}
```
## How to reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
```prisma
model Types {
id Int @id @default(autoincrement())
decimal_ Decimal @default(1)
}
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
An error should be returned instead of the query engine crashing.
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]--> MacOS
- Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> PostgreSQL
- Python version: <!--[Run `python -V` to see your Python version]--> 3.9 | 2hard
|
Title: Support TogetherJS and Google Drive Realtime API
Body: We can create some plugins to implement realtime collaboration using TogetherJS [1] and the Google Drive API [2] on the admin.
[1] https://togetherjs.com/
[2] https://developers.google.com/drive/realtime/
| 2hard
|
Title: Update kneed
Body: The [kneed](https://github.com/arvkevi/kneed) library has been updated a couple of times (with bugfixes!) since it was incorporated into the yellowbrick code. I would like to update the knee finding algorithm to be consistent with the [0.7.0 release](https://github.com/arvkevi/kneed/releases/tag/v0.7.0) of kneed.
Would you welcome a PR to update the code? Thanks! | 2hard
|
Title: Metadata propagation not reliable if object type changes
Body: We currently rely on Pandas's [__finalize__](https://github.com/pandas-dev/pandas/blob/master/pandas/core/generic.py#L5179) to propagate the _metadata (including computed metadata, recommendations, and other stored properties). However, there are situations where dataframes don't always stay the same type throughout various operations. For example, in this [issue](https://github.com/lux-org/lux/pull/59#issuecomment-674100795), when we do group by. We end up with a GroupBy object, and the `_metadata` is lost. Similar issues will occur when going from Pandas Dataframe --> Series.
We should find a better strategy for metadata maintenance without having to explicitly pass in __finalize__, or ensure that when metadata properties are retrieved they trigger a metadata recomputation if not fresh (so that the system is slightly slower but doesn't break when the metadata fields are accessed).
```python
df = pd.read_csv("lux/data/car.csv")
groupby_result = df.groupby("Origin").agg("sum")
intermediate = groupby_result.reset_index()
intermediate.cardinality # this returns None
df = intermediate.__finalize__(df)
df.cardinality # cardinality is now populated again
``` | 2hard
|
Title: Add execution errors and statistics to JSON output generated by Rebot
Body: We added support to generate JSON output files with Rebot in RF 7.0 (#4847) as mentioned in the [issue comment](https://github.com/robotframework/robotframework/issues/4847#issuecomment-1861733290), the generated output only contains information about the executed suite, not execution errors or statistics that normal XML outputs contain. Rebot itself doesn't need statistics if it is used for further processing JSON outputs (it always re-calculates them), but execution errors being omitted means that they aren't available in generated logs ether. That is a pretty severe limitation and needs to be fixed.
The initial idea was to add errors and statistics to JSON outputs when adding support to generate JSON outputs already during execution (#3423). That is a pretty big task currently planned for RF 8.0 and I believe we should fix the issue with errors and statistics already earlier.
A problem with the change is that it is backwards incompatible and external tools processing JSON output files need to be updated. It could be argued that such a change should wait for a major release, but I believe this enhancement is important enough to be done sooner. Tools would need to be updated anyway, regardless of the version number, and the longer we wait, the more tools there will be to update.
|
Title: Create a localization for Turkish language docs
Body: I would like to translate some of the yellowbrick documentation into Turkish. Can you please create a localization for Turkish-language docs. | 2hard
|
Title: replace editor with Summernote or Trumbowyg
Body: - http://summernote.org/
- http://alex-d.github.io/Trumbowyg/
Chosen because both are lightweight, MIT-licensed, and have base64 image support.
Which one to choose?
| 2hard
|
Title: [Bug] Linux Web GUI: after pasting more than one command at once, the commands are missing from the command records and from the left panel of the session recording, but are visible in the recording playback
Body: ### Product Version
3.10.17
### Product Edition
- [ ] Community Edition
- [x] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online Installation (One-click command installation)
- [x] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Our company's own test environment: https://js-internal.fit2cloud.cn/, V3.10.17; tested multiple times under the 金笑宇 organization

### 🐛 Bug Description
After copy-pasting more than one command at once in the Linux Web GUI, the commands are missing from the command records and from the left panel of the session recording, though they can be seen in the recording playback.
This can be reproduced consistently.
### Recurrence Steps
Paste the following commands into the Linux Web GUI in one go and execute them:
lsblk
fdisk -l
pvs
vgs
After execution, they cannot be found in the command records and do not appear on the left panel of the session recording, but they can be seen in the recording playback.

### Expected Behavior
_No response_
### Additional Information
_No response_
### Attempted Solutions
_No response_ | 2hard
|
Title: [FEATURE]: Support Command-R model
Body: ### Describe the feature
Support the Command-R model developed by: Cohere and [Cohere For AI](https://cohere.for.ai/)
C4AI Command-R is a research release of a 35 billion parameter highly performant generative model. Command-R is a large language model with open weights optimized for a variety of use cases including reasoning, summarization, and question answering. Command-R has the capability for multilingual generation evaluated in 10 languages and highly performant RAG capabilities.
Corresponding huggingface page: [c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) | 2hard
|
Title: Task Queue (optional) for email and notifications
Body: https://github.com/Robpol86/Flask-Celery-Helper
or
Python-RQ
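A minimal Python-RQ sketch (hypothetical `send_email` helper, assuming a local Redis instance) of what an optional task queue for email could look like:
```python
from redis import Redis
from rq import Queue

def send_email(recipient, subject, body):
    # Hypothetical mailer; the real project would plug in its own.
    print(f"sending {subject!r} to {recipient}")

queue = Queue(connection=Redis())
queue.enqueue(send_email, "user@example.com", "Welcome", "Hello!")
```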
| 2hard
|
Title: Required array fields cannot be used in type safe raw queries
Body: <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
When using `[]` fields that are required, e.g.
```prisma
model Lists {
id String @id @default(cuid())
strings String[]
}
```
Trying to query against this model using Pydantic will cause an error, e.g.
```py
model = await client.lists.create({})
# this will cause an error because `strings` is not allowed to be `None`
found = await client.query_first('SELECT * FROM Lists WHERE id = $1', model.id, model=Lists)
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
An empty list should be set instead.
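Until this is fixed, one possible workaround sketch (a hand-written pydantic model mirroring the generated `Lists` model) is a validator that coerces a NULL array to an empty list:
```py
from typing import List, Optional
from pydantic import BaseModel, validator

class Lists(BaseModel):
    id: str
    strings: List[str]

    @validator('strings', pre=True)
    def _default_strings(cls, value: Optional[List[str]]) -> List[str]:
        # Treat a NULL array coming back from the raw query as an empty list.
        return value if value is not None else []
```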
| 2hard
|
Title: Package binaries in wheels as an alternative to downloading at runtime
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
We currently package only one universal wheel when we could make use of platform-dependent wheels to improve the first-time user experience; when someone installs a Python package they don't expect to have to wait while more binaries are downloaded.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Update `setup.py` wheel building to build platform-dependent wheels for every platform that Prisma supports. We could also fall back to building the Rust binaries on the user's machine if they are on an unsupported platform, but that's outside the scope of this issue.
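A rough sketch of one way to get platform-dependent wheel tags (illustrative `setup.py` fragment, not the project's actual build configuration):
```python
from setuptools import setup
from setuptools.dist import Distribution

class BinaryDistribution(Distribution):
    # Forces bdist_wheel to emit a platform-specific tag instead of "any",
    # which is what we need once the engine binaries are bundled in the wheel.
    def has_ext_modules(self):
        return True

setup(
    name="prisma",  # illustrative; real metadata lives in the project's setup.py
    distclass=BinaryDistribution,
)
```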
| 2hard
|
Title: [FEATURE]: FP8 communication in ShardFormer
Body: ### Describe the feature
Replacing the communication dtype with FP8 is useful in bandwidth-restricted scenarios. Please support FP8 communication in Shardformer (SP and TP).
|
Title: [ENH] Keeping a history of every operation applied to a DataFrame
Body: A whacky idea dreamed up during the sprint.
This would essentially record for you your data processing pipeline (on the DataFrame bit at least). Metadata could go in a `.pj` accessor in the dataframe / series.
Fun challenges include:
* Detecting use of all Pandas functions, not just PyJanitor ones
* Minimizing modification of Pandas objects as much as possible
* Avoiding brittleness to pandas code updates
* Handling multi-dataframe operations without losing computation metadata
Thoughts on how this could be possible would be nice. Dream big.
@szuckerman @ericmjl @HectorM14 | 2hard
|
Title: Fix Warnings in Build and Deploy Process
Body: When we deployed v1.5 we received the following warnings and deprecation errors:
```
python setup.py sdist bdist_wheel
/Users/benjamin/.pyenv/versions/3.10.2/envs/yellowbrick/lib/python3.10/site-packages/setuptools/dist.py:717: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
warnings.warn(
Warning: 'classifiers' should be a list, got type 'tuple'
Warning: 'keywords' should be a list, got type 'tuple'
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
```
```
python setup.py register
/Users/benjamin/.pyenv/versions/3.10.2/envs/yellowbrick/lib/python3.10/site-packages/setuptools/dist.py:717: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
warnings.warn(
Warning: 'classifiers' should be a list, got type 'tuple'
Warning: 'keywords' should be a list, got type 'tuple'
running register
running check
Registering yellowbrick to https://upload.pypi.org/legacy/
Server response (410): Project pre-registration is no longer required or supported, upload your files instead.
twine upload dist/*
Uploading distributions to https://upload.pypi.org/legacy/
Uploading yellowbrick-1.5-py3-none-any.whl
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 294.5/294.5 kB • 00:00 • 1.9 MB/s
Uploading yellowbrick-1.5.tar.gz
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 20.0/20.0 MB • 00:01 • 10.8 MB/s
View at:
https://pypi.org/project/yellowbrick/1.5/
```
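For reference, a sketch of the metadata fixes these warnings point at (list-typed `classifiers`/`keywords`; illustrative fragment, not yellowbrick's actual setup files):
```python
from setuptools import setup

setup(
    name="yellowbrick",
    classifiers=[                                     # list, not tuple
        "Programming Language :: Python :: 3.10",
    ],
    keywords=["visualization", "machine learning"],   # list, not tuple
)
```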
Other notes:
- update classifiers to Python 3.10
- Check to make sure build/deploy is correct (e.g. the wheel build)
- Update to API tokens instead of basic login:
> During your recent upload or upload attempt of yellowbrick to PyPI, we noticed you used basic authentication (username & password). However, your account has two-factor authentication (2FA) enabled.
>
> In the near future, PyPI will begin prohibiting uploads using basic authentication for accounts with two-factor authentication enabled. Instead, we will require API tokens to be used.
>
>What should I do?
>
>First, generate an API token for your account or project at https://pypi.org/manage/account/token/. Then, use this token when publishing instead of your username and password. See https://pypi.org/help/#apitoken for help using API tokens to publish. | 2hard
|
Title: Partial type generation does not include newly generated modules
Body: <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://github.com/RobertCraigie/prisma-client-py/blob/main/docs/logging.md for how to enable additional logging output.
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
When using a partial type generator and a custom output directory, the partially generated package cannot be imported when the partial type generator is run.
## How to reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
`schema.prisma`
```prisma
datasource db {
provider = "postgres"
url = env("DB_URL")
}
generator client {
provider = "prisma-client-py"
partial_type_generator = "partials.py"
output = "my_prisma"
}
model User {
id Int @id
name String
}
```
`partials.py`
```py
from prisma.models import User
```
On a fresh prisma installation (not generated), trying to generate the client will error.
```shell
prisma generate
```
```
Prisma schema loaded from schema.prisma
An exception ocurred while running the partial type generator
Error:
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "partials.py", line 1, in <module>
from prisma.models import User
ModuleNotFoundError: No module named 'prisma.models'
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The `prisma` package should point to the newly generated package.
Should be noted that in the example given above, even trying to import from `my_prisma` leads to an error; but even if it didn't, the point still stands, as sometimes it may be useful to generate the client to a directory that is multiple directories away, and fixing this would require knowledge of where the client was being generated to and some horrible Python path patching.
## Solution
I do not know exactly how this can be solved, as it is caused by Python's import caching mechanism; the [importlib.reload](https://docs.python.org/3/library/importlib.html#importlib.reload) function may be useful here.
In terms of how to integrate a solution into this library you will have to modify the `Module.run` method in `src/prisma/generator/models.py`. | 2hard
|
Title: Add support for including count of relational fields
Body: ## Problem
Prisma supports including the count of a relational field, so we should too.
```js
const users = await prisma.user.findMany({
include: {
_count: {
select: { posts: true },
},
},
})
```
which returns an object like this
```js
{
id: 1,
email: '[email protected]',
name: 'Alice',
_count: { posts: 2 }
}
```
Prisma also have support for filtering by the count of the relation, see https://www.prisma.io/docs/concepts/components/prisma-client/aggregation-grouping-summarizing#filter-the-relation-count | 2hard
|
Title: Refactor type templates to use a schema
Body: # Problem
- Jinja2 templates do not have any form of static type checking, this means there could be holes in our templates that we haven't discovered yet, leading to bugs.
- Implementing a design for the types schema could also help Prisma improve the DMMF to be more client-agnostic.
- The implementation in `src/prisma/generator/models.py` is very tightly coupled to our `types.py.jinja` template, any changes in the template must also be reflected in our DMMF parser, this is very fragile and could lead to bugs being introduced due to duplicated and then mismatched logic.
## Suggested solution
We should design and use a schema for rendering types, similar to how Prisma does it for their TypeScript client.
This schema would be similarly structured to our DMMF schema, however I am undecided on whether or not we should work from the schema that Prisma sends us or build from the ground up.
| 2hard
|
Title: Add support for ARM architecture for CLI binaries
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently the CLI binary targets are: `windows`, `linux` and `darwin`. We should also build for targets such as `node12-linux-arm64`
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We do not currently have control over CLI binary building as this is handled and published by Prisma themselves. We will need to submit a PR in the Prisma Client Go repository.
The list of possible targets can be found here: https://github.com/vercel/pkg#targets.
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
https://github.com/RobertCraigie/prisma-client-py/issues/195#issuecomment-1001350680
This is also more relevant now that we are adding support for ARM engine binaries in #233 | 2hard
|
Title: Generic Thumbnail Proxy/Processor
Body: A template filter or _macro to be used to generate image and thumbnail urls
``` python
_image(image.url, width=x, height=y)
```
By default it should support THUMBOR, src.sencha.io, rasg.us, pure PIL.
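A minimal pure-PIL sketch (a hypothetical helper, not the proposed filter API) of the fallback thumbnail generation:
```python
from PIL import Image

def make_thumbnail(src_path, out_path, width, height):
    # Pure-PIL fallback: downscale in place while preserving aspect ratio.
    img = Image.open(src_path)
    img.thumbnail((width, height))
    img.save(out_path)
```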
| 2hard
|
Title: Improve store get_info with collection data for VersionStore
Body: We need to improve the available meta-data we keep for the data usage of a given library.
Extend the capabilities of get_info with:
- collection size
- size of old versions
- size of orphaned segments
- last access time the collection was accessed
- number (maybe size) of versions accessed each day
The user may create standardized Arctic Prometheus metrics for the above, with the option to start a Prometheus web server ready for scraping.
The goal is to help understand space usage, providing insights into data eligible for deletion.
This could also help with multi-tier storage solutions (e.g. hybrid Mongo (where the most recently accessed data goes) and NFS/S3 (easier to maintain, slower retrieval)).
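A sketch (using the `prometheus_client` package; metric names are made up) of exposing such library statistics for scraping:
```python
from prometheus_client import Gauge, start_http_server

# Hypothetical metric names; real ones would follow Arctic's conventions.
collection_size = Gauge('arctic_collection_size_bytes', 'Collection size', ['library'])
orphaned_size = Gauge('arctic_orphaned_segments_bytes', 'Orphaned segment size', ['library'])

start_http_server(8000)  # optional scrape endpoint
collection_size.labels(library='mylib').set(123456)
orphaned_size.labels(library='mylib').set(7890)
```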
| 2hard
|
Title: FUTURE BUG: reconsider how we deep-copy path objects
Body: We currently use `super()` in the `__deepcopy__` implementation of`Path`
https://github.com/matplotlib/matplotlib/blob/183b04fb43f57ffe66da68c729a179e397cd35f6/lib/matplotlib/path.py#L279-L287
however, on the main branch of CPython this causes infinite recursion (https://github.com/python/cpython/issues/126817). Although upstream may (or may not) sort out how to fix this, we should see if there is another way we can do what we need to do and avoid any fallout from changes in CPython.
----
Good first issue because "make sure deepcopy works on Paths" is a narrowly limited task, but hard because it will require understanding why we currently use `super()` and enough of the deepcopy/pickle protocol details to fix it.
attn @anntzer | 2hard
|
Title: Freezing server and increasing CPU usage when multiple workers using
Body: ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
Hello! It is a fairly rare thing, but it has already happened maybe 4 times in my backend. Normally I run sanic in production with these options:
sanic main:app -H <SOME IP> -p <SOMEPORT> --fast
So `--fast` means creating as many workers as possible.
Most of the time it works, but sometimes, when I reload the server, it just freezes the entire system and I don't know why.
This glitch is only an issue with big, complicated projects like mine, but if you set something like 3-4 workers on 2 cores, it will produce exactly the same CPU usage.
Normally my project works with 2 workers and uses 25-40% of the CPU. But sometimes, and I haven't figured out when, it just increases CPU usage and freezes the entire server. The only way to fix it is to run on a single worker.
I don't know what it is or how to reproduce the error exactly, but I hit this from time to time.
I'm not waiting for a fix of the issue, I'm just reporting behavior which someone else may have run into too. Maybe it's a problem in my backend codebase. But since my code works most of the time, I don't know what to think.
### Code snippet
I'm sorry, but the actual code is made by a private company and I cannot share it. But I can show what the main.py file roughly looks like:
```
import asyncio
import platform
from sanic import Sanic, Request
from sanic.response import json, HTTPResponse
from tortoise.contrib.sanic import register_tortoise
from orm.models import User
from orm.db import TORTOISE_ORM, _DB_URL
from settings.constants import CODES, LOGGER_PATH
from exceptions.logger import add_info_to_logger
"""<60 BLUEPRINT IMPORTS>"""
DB_URL: str = _DB_URL
MODULES_GLOBAL: list[str] = TORTOISE_ORM["apps"]["models"]
PLATFORM = platform.platform()
app = Sanic("app")
if "Windows" in PLATFORM:
app.config.OAS = True
else:
app.config.OAS = False
app.static(BASE_STATIC_PATH, BASE_STATIC_PATH)
app.config.CORS_ORIGINS = "http://localhost:1234,https://productionwebsite.com"
Extend(app)
"""
<REGISTER 60 BLUEPRINT SECTION>
"""
@app.on_request
async def run_before_handler(request: Request):
request.ctx.start_time = asyncio.get_event_loop().time()
is_authenticated = await check_auth(request)
request.ctx.is_authenticated = is_authenticated
if is_authenticated:
request.ctx.user: User = await get_user_from_request(request)
@app.on_response
async def run_after_handler(request: Request, response: HTTPResponse):
end_time = asyncio.get_event_loop().time()
execution_time = end_time - request.ctx.start_time
await write_perfomance_information_to_database(
request.url,
execution_time
)
if "Windows" in PLATFORM:
...
else:
@app.exception(Exception)
async def catch_anything(request, exception):
try:
await add_info_to_logger(
LOGGER_PATH,
str({
"Text": "An error occurred",
"ErrorInfo": exception,
"ErrorUrl": request.raw_url,
"UserGot": request.ctx.user.serialize()
})
)
except Exception as exc:
await add_info_to_logger(
LOGGER_PATH,
str({
"Text": "An error occurred",
"ErrorInfo": exception,
"ErrorUrl": request.raw_url,
"UserGot": ""
})
)
return json({"status": CODES[4002]}, status=500)
@app.route("/")
async def hello_world(request: Request) -> json:
return json({"status": "system is fine"})
register_tortoise(
app,
db_url=DB_URL,
modules={"models": MODULES_GLOBAL["models"]},
generate_schemas=True
)
if __name__ == "__main__":
dev = False
if "Windows" in PLATFORM:
dev = True
app.run(
host=IP,
port=PORT,
dev=dev,
access_log=False
)
```
### Expected Behavior
Like I said, if I use more workers than I have cores, the system just freezes. But sometimes it also freezes with the --fast parameter.
### How do you run Sanic?
Sanic CLI
### Operating System
Ubuntu 22.04
### Sanic Version
Sanic 23.3.0; Routing 22.8.0
### Additional context
_No response_ | 2hard
|
Title: Add support for Prisma Accelerate
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma recently announced a new product, [Prisma Accelerate](https://www.prisma.io/data-platform/accelerate) which is a query caching service built on top of the Prisma Data Platform.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
It's unclear at the moment what exactly would be required to support this but at the very minimum we would need support for the Data Proxy, #76.
| 2hard
|
Title: 2 Duplication problems and a false-positive in a portion of django.nV output, among other things.
Body: So I run `python -m pyt -a E -f example/django.nV/taskManager/upload_controller.py -trim` and out I get:
```python
5 vulnerabilities found:
Vulnerability 1:
File: example/django.nV/taskManager/misc.py
> User input at line 24, trigger word "Flask function URL parameter":
title
File: example/django.nV/taskManager/misc.py
> reaches line 33, trigger word "system(":
¤call_2 = ret_os.system('mv ' + uploaded_file.temporary_file_path() + ' ' + '%s/%s' % (upload_dir_path, title))
Vulnerability 2:
File: example/django.nV/taskManager/upload_controller.py
> User input at line 11, trigger word "get(":
¤call_3 = ret_request.POST.get('name', False)
Reassigned in:
File: example/django.nV/taskManager/upload_controller.py
> Line 11: name = ¤call_3
File: example/django.nV/taskManager/upload_controller.py
> Line 12: temp_4_title = name
File: example/django.nV/taskManager/misc.py
> Line 24: title = temp_4_title
File: example/django.nV/taskManager/misc.py
> reaches line 33, trigger word "system(":
¤call_6 = ret_os.system('mv ' + uploaded_file.temporary_file_path() + ' ' + '%s/%s' % (upload_dir_path, title))
Vulnerability 3:
File: example/django.nV/taskManager/upload_controller.py
> User input at line 3, trigger word "Flask function URL parameter":
request
Reassigned in:
File: example/django.nV/taskManager/upload_controller.py
> Line 12: temp_4_uploaded_file = request.FILES['file']
File: example/django.nV/taskManager/misc.py
> Line 24: uploaded_file = temp_4_uploaded_file
File: example/django.nV/taskManager/misc.py
> reaches line 33, trigger word "system(":
¤call_6 = ret_os.system('mv ' + uploaded_file.temporary_file_path() + ' ' + '%s/%s' % (upload_dir_path, title))
Vulnerability 4:
File: example/django.nV/taskManager/upload_controller.py
> User input at line 11, trigger word "get(":
¤call_3 = ret_request.POST.get('name', False)
Reassigned in:
File: example/django.nV/taskManager/upload_controller.py
> Line 12: temp_4_title = name
File: example/django.nV/taskManager/misc.py
> Line 24: title = temp_4_title
File: example/django.nV/taskManager/misc.py
> Line 41: ret_store_uploaded_file = '/static/taskManager/uploads/%s' % title
File: example/django.nV/taskManager/upload_controller.py
> Line 12: ¤call_4 = ret_store_uploaded_file
File: example/django.nV/taskManager/upload_controller.py
> Line 12: upload_path = ¤call_4
File: example/django.nV/taskManager/upload_controller.py
> reaches line 16, trigger word "execute(":
¤call_8 = ret_curs.execute('insert into taskManager_file ('name','path','project_id') values ('%s','%s',%s)' % (name, upload_path, project_id))
Vulnerability 5:
File: example/django.nV/taskManager/upload_controller.py
> User input at line 3, trigger word "Flask function URL parameter":
request
Reassigned in:
File: example/django.nV/taskManager/upload_controller.py
> Line 12: temp_4_title = name
File: example/django.nV/taskManager/misc.py
> Line 24: title = temp_4_title
File: example/django.nV/taskManager/misc.py
> Line 41: ret_store_uploaded_file = '/static/taskManager/uploads/%s' % title
File: example/django.nV/taskManager/upload_controller.py
> Line 12: ¤call_4 = ret_store_uploaded_file
File: example/django.nV/taskManager/upload_controller.py
> Line 12: upload_path = ¤call_4
File: example/django.nV/taskManager/upload_controller.py
> reaches line 16, trigger word "execute(":
¤call_8 = ret_curs.execute('insert into taskManager_file ('name','path','project_id') values ('%s','%s',%s)' % (name, upload_path, project_id))
```
There are many issues with this output.
(a)
Vulnerability `#1` should not be in the output, or at least, if you would argue it should be, you'd concede it's a good idea to give an option for vulnerabilities like it to not be in the output. When I say 'vulnerabilities like it' I mean, we ran it on a controller file, `upload_controller.py` which calls into `misc.py`, then we reported vulnerabilities as though we ran it on `misc.py`, resulting in a duplicate (vulnerabilities 1 and 2).
To solve this, maybe we should do something with `self.filenames[-1]` inside of `interprocedural.py` or just, at a higher level, grab the file from the -f output and skip any vulnerabilities that don't match it (note the `File: example/django.nV/taskManager/misc.py` in the output). The latter idea sounds cleaner and smoother.
(b) Vulnerability `#3` is not unknown, although we know `uploaded_file` is tainted we don't have any idea if `uploaded_file.temporary_file_path()` is something that leads to a vulnerability.
To solve this, we somehow add the return value of `uploaded_file.temporary_file_path()` to blackbox_assignments. The .args list of the sink might include `uploaded_file`, so we'll need to change this as well when we're visiting BBorBInode arguments.
(c) Vulnerabilities `#4` and `#5` are the same vulnerability, stemming from the same line.
(d) In the Vulnerability `#5` output, it doesn't show the actual `request.whatever` line that led to the vulnerability.
Perhaps these can be solved with the same code, not sure.
(e) If you run it without -trim, and search through the output you'll see `ret_render_to_response('taskManager/upload.html', 'form'form, ¤call_13)` (from the original line `render_to_response('taskManager/upload.html', {'form': form}, RequestContext(request))`), so I take it I don't handle visual_args very well when they're dictionaries. A low-priority issue from where I stand though.
Another thing that I noticed, but I'm not going to implement, is https://github.com/python-security/pyt/issues/71 | 2hard
|
Title: Streamline `lux.config` to regenerate recommendations
Body: Currently, `lux.config` does [not trigger a regeneration of recommendations](https://lux-api.readthedocs.io/en/latest/source/reference/config.html) and requires either explicit expiring or placing the config in the right place. We should streamline the config so that the dataframes are regenerated accordingly.
This is non-trivial because the config is applied to all dataframes so we don't currently have a way of keeping track of all the dataframes applicable in the session. Another thing to consider is regenerating recommendations not "from scratch" but using intermediate products to update the vis, e.g., when the plot style or plotting backend is updated, only the rendering needs to happen but not the recomputation of the recommendations. | 2hard
|
Title: Investigate the potential read performance improvement using forward pointers
Body: Currently the read path of ndarray_store, instead of fetching the segments using a query spec that matches against 'parent' (segment document stores parent as an array of ObjectIDs):
https://github.com/manahl/arctic/blob/master/arctic/store/_ndarray_store.py#L243
Investigate the potential performance improvements:
- storing in the version document a list of segment object IDs and query with {'$in': [objID1, objID2....]}
- storing in the version document a list of segment SHAs and query with {'$in': [sha1, sha2....]}
Investigate using a MongoDB 3.4 or better 3.6 as there have been improvements for the queries using $in: https://jira.mongodb.org/browse/SERVER-30189
When the prototype solution is developed, benchmark and pay attention to the indexes used, as well as understand the query execution via studying the explain() results.
Test against a variety of scenarios: variable document size, variable number of versions and symbols, variable number of shards, consistency of data.
The solution needs to be backwards compatible. | 2hard
|
Title: Rely on .pyx file and dynamically create .c files using cython
Body: see #758 for what lead to this.. I am very interested in @jjhelmus's views on this.
pro:
* Folks can just edit pyx files and only commit those.
* we get the latest bells and whistles from new cython versions
* Lighter weight and true line counts
Cons:
* cython becomes a dependency
* More work: Need to update release and testing to generate .c files on the fly | 2hard
|
Title: [Bug] Chen component: invalid SQL gets split and partially executed
Body: ### Product Version
v3.10.17
### Product Edition
- [ ] Community Edition
- [X] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online Installation (One-click command installation)
- [X] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Standalone deployment
### 🐛 Bug Description
When connecting to an Oracle database through the web GUI and executing a SQL statement with a condition, if the WHERE keyword is misspelled, the part of the SQL before the WHERE condition still gets executed.
### Recurrence Steps
Connect to an Oracle database via the web GUI and run a query with a misspelled WHERE keyword:
select * from FIT2CLOUD.USERS whre age > 28;

### Expected Behavior
The invalid SQL should raise an error and not be split and executed.
### Additional Information
_No response_
### Attempted Solutions
_No response_ | 2hard
|
Title: InfluxDB IO kills an SD card within 3-4 months.
Body: I have already killed 3 SD cards within a one-year period.
### Versions:
Mycodo Version: 7.9.1
Python Version: 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.2.0]
Database Version: 8f9bf3fe5ec2
- Raspberry Pi Version: [3B]
- Raspbian OS Version: [Buster Lite]
### Reproducibility

Is there some way to reduce SD card usage?
| 2hard
|
Title: Support library keywords with both embedded and normal arguments
Body: In https://github.com/robotframework/robotframework/issues/4234 / https://github.com/robotframework/robotframework/pull/4319 support for user keywords with embedded and normal arguments was added.
It would be useful if we could do the same for library keywords in the same manner, e.g.
```python
@keyword('Number of ${animals} should be')
def example(animals, count):
...
```
```
Number of horses should be 2
Number of horses should be count=2
Number of dogs should be 3
| 2hard
|
Title: ENH: Switch from distutils to setuptools
Body: We have planned on switching from distutils to setuptools for some time, but this is just to keep track and reiterate some of the possible benefits:
Can have it compile c and not have to include pyx files as mentioned in #759
Can have install_requires set, which will probably fix the current issue with appveyor.
From what I'm seeing, we can remove the setup.py files in the sub directories. | 2hard
|
Title: contains eager with many levels illustrated not pathing correctly
Body: ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10003
test cases should include single inh as well as joined inh, use assorted_eager
```py
from sqlalchemy import and_
from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
from sqlalchemy import Integer, insert
from sqlalchemy import select
from sqlalchemy import Table
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import aliased
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import joinedload
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import with_polymorphic, contains_eager
class Base(DeclarativeBase):
pass
EmplLinks = Table(
"employee_m2m",
Base.metadata,
Column("id", Integer, primary_key=True),
Column(
"type", Integer
), # 1 - head to manager, 2 - manager to employees, 3 - colleague, 4 - parent to child
Column("left", Integer, ForeignKey("employee.id")),
Column("right", Integer, ForeignKey("employee.id")),
)
class Property(Base):
__tablename__ = "property"
id: Mapped[int] = mapped_column(primary_key=True)
key: Mapped[str] = mapped_column(name="key")
value: Mapped[str] = mapped_column(name="value")
user_id: Mapped[int] = mapped_column(ForeignKey("employee.id"))
class Employee(Base):
__tablename__ = "employee"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str]
type: Mapped[str]
prop1 = relationship(
Property,
primaryjoin="and_(Property.user_id == Employee.id, "
"Property.key=='key1')",
uselist=False,
viewonly=True,
lazy="raise",
)
@classmethod
def __declare_last__(cls):
j = aliased(EmplLinks)
cls.colleagues = relationship(
cls,
secondary=j,
primaryjoin=cls.id == j.c.left,
secondaryjoin=and_(
cls.id == j.c.right, j.c.type.in_([1, 2, 3, 4])
),
foreign_keys=[j.c.left, j.c.right],
join_depth=0,
viewonly=True,
uselist=True,
lazy="raise",
)
@hybrid_property
def prop1_value(self):
return self.prop1.value
@prop1_value.expression
def prop1_value(cls):
return Property.value
@classmethod
def prop1_value_of_type(cls, type_):
return type_.value
__mapper_args__ = {
"polymorphic_on": "type",
"polymorphic_identity": "employee",
}
class Manager(Employee):
__mapper_args__ = {
"polymorphic_identity": "manager",
}
class Engineer(Employee):
__mapper_args__ = {
"polymorphic_identity": "engineer",
}
class Clerk(Employee):
__mapper_args__ = {
"polymorphic_identity": "clerk",
}
class UnitHead(Employee):
managers = relationship(
"Manager",
secondary=EmplLinks,
primaryjoin=and_(
Employee.id == EmplLinks.c.left, EmplLinks.c.type == 4
),
secondaryjoin=and_(
Manager.id == EmplLinks.c.right, Manager.type == "manager"
),
foreign_keys=[EmplLinks.c.left, EmplLinks.c.right],
join_depth=0,
viewonly=True,
uselist=True,
lazy="raise",
)
__mapper_args__ = {
"polymorphic_identity": "unithead",
}
# data
engine = create_engine("sqlite://", echo="debug")
session = sessionmaker(autocommit=False, autoflush=False, bind=engine)()
Base.metadata.create_all(engine)
unithead = UnitHead(type="unithead", name="unithead1")
manager = Manager(type="manager", name="manager1")
engineer = Engineer(type="engineer", name="engineer1")
clerk = Clerk(type="clerk", name="clerk1")
manager2 = Manager(type="manager", name="manager2")
engineer3 = Engineer(type="engineer", name="engineer3")
session.add_all([unithead, manager, engineer, clerk, manager2, engineer3])
session.commit()
session.flush()
#
session.execute(
insert(EmplLinks).values(type=4, left=unithead.id, right=manager.id)
)
session.execute(
insert(EmplLinks).values(type=4, left=manager.id, right=engineer.id)
)
session.execute(
insert(EmplLinks).values(type=4, left=manager.id, right=clerk.id)
)
#
prop = Property(key="key1", value="val engineer", user_id=engineer.id)
prop2 = Property(key="key1", value="val clerk", user_id=clerk.id)
prop3 = Property(key="key1", value="val manager", user_id=manager.id)
prop4 = Property(key="key1", value="val unithead", user_id=unithead.id)
session.add_all([prop, prop2, prop3, prop4])
session.commit()
session.flush()
# query
mgr = aliased(Manager)
clg = aliased(Employee)
clgs_prop1 = aliased(Property, name="clgs_prop1")
# would ideally include
# ma_prop1 = aliased(Property)
# uhead_prop1 = aliased(Property)
query = (
select(UnitHead)
.options(
contains_eager(UnitHead.managers.of_type(mgr))
.contains_eager(mgr.colleagues.of_type(clg))
.contains_eager(clg.prop1.of_type(clgs_prop1)),
)
.outerjoin(UnitHead.managers.of_type(mgr))
.outerjoin(mgr.colleagues.of_type(clg))
.outerjoin(clg.prop1.of_type(clgs_prop1))
# would ideally include
# .outerjoin(UnitHead.prop1.of_type(uhead_prop1))
#.outerjoin(mgr.prop1.of_type(ma_prop1))
#.where(
# UnitHead.prop1_value_of_type(uhead_prop1) == "val unithead",
# mgr.prop1_value_of_type(ma_prop1) == "val manager",
# clg.prop1_value_of_type(clgs_prop1) == "val engineer",
#)
)
result = session.scalars(query).unique()
head = result.one()
print(head.managers[0].colleagues)
print(head.managers[0].colleagues[0].prop1)
```
the contains_eager() fails to add the clg_prop1 columns:
```
SELECT employee_1.id, employee_1.name, employee_1.type, employee_2.id AS id_1, employee_2.name AS name_1, employee_2.type AS type_1, employee_3.id AS id_2, employee_3.name AS name_2, employee_3.type AS type_2
FROM employee AS employee_3 LEFT OUTER JOIN (employee_m2m AS employee_m2m_1 JOIN employee AS employee_1 ON employee_1.id = employee_m2m_1."right" AND employee_1.type = ? AND employee_1.type IN (?)) ON employee_3.id = employee_m2m_1."left" AND employee_m2m_1.type = ? LEFT OUTER JOIN (employee_m2m AS anon_1 JOIN employee AS employee_2 ON employee_2.id = anon_1."right" AND anon_1.type IN (?, ?, ?, ?)) ON employee_1.id = anon_1."left" LEFT OUTER JOIN property AS clgs_prop1 ON clgs_prop1.user_id = employee_2.id AND clgs_prop1."key" = ?
WHERE employee_3.type IN (?)
``` | 2hard
|
Title: Provide performance benchmarks
Body: Currently our performance will not stack up well with other Python ORMs, however we can't work on improving performance without having a baseline to work from.
We should also provide context for these benchmarks by also benchmarking other ORMs. We should include:
- SQLAlchemy
- Django
- Peewee
- Pony ORM
- SQLObject
- Tortoise ORM
- SQLModel
We could also base the benchmarks on these benchmarks: https://github.com/tortoise/orm-benchmarks | 2hard
|
Title: Incorrect naming convention on generated client
Body: <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
Model names are not properly converted to __snake_case__ according to python code styles.
For example, a model "SnakeCase" should be accessible as "snake_case" in python using the prisma client.
## How to reproduce
1. Name a model consisting of two words (the same bug probably occurs when using more than two words).
In my case `OrganisationAffiliation`.
2. Run `prisma generate`
3. Try to access the same model in python code. The model is exposed as `prisma.organisationaffiliation` instead of `prisma.organisation_affiliation.`
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
5. Run '....'
6. See error
-->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Prisma information
<!-- Your Prisma schema, Prisma Client Python queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
```prisma
model OrganisationAffiliation {
userId Int
user User @relation(fields: [userId], references: [id])
organisationId Int
organisation Organisation @relation(fields: [organisationId], references: [id])
is_owner Boolean @default(false)
hourly_rate Float?
@@unique([userId, organisationId], name: "affiliationId")
}
```
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: Alpine Linux (inside a Docker Container).
- Database: Postgres
- Python version: 3.11.2
- Prisma version:
```
prisma : 4.10.1
prisma client python : 0.8.1
platform : linux-musl
expected engine version : aead147aa326ccb985dcfed5b065b4fdabd44b19
install path : /usr/local/lib/python3.11/site-packages/prisma
installed extras : []
```
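For reference, a minimal sketch of the expected conversion (illustrative only, not the generator's implementation):
```py
import re

def to_snake_case(name: str) -> str:
    # "OrganisationAffiliation" -> "organisation_affiliation"
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

print(to_snake_case("OrganisationAffiliation"))
```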
| 2hard
|
Title: Parallel Plot with independent vline
Body: **Describe the solution you'd like**
`ParallelCoordinates` plots each vline using a shared y scale. Instead of normalizing the values (using the parameter normalize), I'd like to keep original values, and allow each vline to have its own scale (scaling / moving the axis in order to fit current figsize)
**Examples**
I believe I've seen this behavior in other packages (i.e. plotly)

| 2hard
|
Title: [FEATURE]: Support T5ForTokenClassification
Body: ### Describe the feature
Transformers' new version 4.39.3 adds support for T5ForTokenClassification. Add support for the new model in Shardformer.
|
Title: TQDM progress bars in jupyter notebooks do not appear in PDF via latex output
Body: - [X] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [X] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [ ] new feature request
- [X] I have visited the [source website], and in particular
read the [known issues]
- [X] I have searched through the [issue tracker] for duplicates
- [X] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
gives:
```
4.31.1 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0] linux
```
When using the jupyter notebook extensions of TQDM, we get nice beautiful progress bars in the notebook, which is awesome. I'm currently writing some lectures for my students, and want to share a PDF version as well. But when I export to PDF via Latex, the progress bars are replaced by raw commands in the background like `HBox` (see attachment for an example)
[tqdm_pdf_example.pdf](https://github.com/tqdm/tqdm/files/4466741/tqdm_pdf_example.pdf)
It would be awesome if there was a way to get tqdm progress bars to show up in the latex output. I'm not sure if this is a tqdm or jupyter issue though.
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
| 2hard
|
Title: Add support for generating default field values using pydantic
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently if you want to manually instantiate values you need to also pass values for all fields that have default values. We should support automatically passing defaults for some default types.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We will be able to support fields like these:
```prisma
model User {
f1 String @default(cuid())
f2 String @default(uuid())
f3 DateTime @default(now())
}
```
But we will not be able to support `autoincrement()` or `dbgenerated()` as they need to talk to the database:
```prisma
model User {
f1 Int @default(autoincrement())
f2 Unsupported("circle")? @default(dbgenerated("'<(10,4),11>'::circle"))
}
```
If you do have fields with these defaults then you can still use them you just won't be able to manually construct model instances, you'll have to use the standard Prisma API to get model instances.
Adding support for `cuid()` and `uuid()` will require us to provide the rust functions that prisma uses to generate these defaults as we should not differ from their behaviour in any way. We should package these into FFI bindings and use a package like `_prisma_defaults` which we could then pass to auto-generated model definitions like so:
```py
class User(BaseModel):
id: str = Field(default_factory=_prisma_defaults.cuid)
name: str
```
| 2hard
|
Title: bind substitution fails for selectinload / lazyload + with_expression
Body: ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10569
all the way in to how selectinload applies bind substitution
```py
from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy import select
from sqlalchemy import String
from sqlalchemy.orm import declarative_base
from sqlalchemy.orm import query_expression
from sqlalchemy.orm import relationship
from sqlalchemy.orm import selectinload
from sqlalchemy.orm import Session
from sqlalchemy.orm import with_expression
Base = declarative_base()
class A(Base):
__tablename__ = 'a'
id = Column(Integer, primary_key=True)
data = Column(String)
bs = relationship("B")
class B(Base):
__tablename__ = 'b'
id = Column(Integer, primary_key=True)
a_id = Column(ForeignKey("a.id"))
boolean = query_expression()
data = Column(String)
e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
s = Session(e)
s.add(A(bs=[B(data="a"), B(data="b"), B(data="c")]))
s.commit()
def go(x):
with Session(e) as sess:
return sess.execute(
select(A).options(
selectinload(A.bs).options(
with_expression(B.boolean, x)
)
)
).scalars().all()
for a in go(B.data < 'c'):
for b in a.bs:
print(f"data: {b.data} boolean: {b.boolean}")
# works, the expression changes from < to ==
for a in go(B.data == 'b'):
for b in a.bs:
print(f"data: {b.data} boolean: {b.boolean}")
# but bind substitution gets stuck
for a in go(B.data == 'c'):
for b in a.bs:
print(f"data: {b.data} boolean: {b.boolean}")
```
same issue for lazyload. subqueryload seems to not have the problem. | 2hard
|
Title: Intercluster Distance Map (MDS)
Body: **Describe the solution you'd like**
Create a cluster visualization that displays the distance between cluster centers and cluster relative sizes by using [multidimensional scaling (MDS)](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.MDS.html). The visualization would be:
1. A scatter plot whose points would be described by the MDS embedding of the cluster centers
2. Each point would be sized according to the number of instances belonging to the cluster.
3. Each point would be labeled with its cluster index
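A rough sketch of the idea using scikit-learn's `MDS` directly (toy data, cluster count, and plot styling are assumptions, not a proposed Yellowbrick API):
```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.manifold import MDS

X, _ = make_blobs(n_samples=500, centers=6, random_state=42)
km = KMeans(n_clusters=6, random_state=42).fit(X)

# 1. MDS embedding of the cluster centers
embedding = MDS(n_components=2, random_state=42).fit_transform(km.cluster_centers_)

# 2. point sizes proportional to the number of instances in each cluster
sizes = np.bincount(km.labels_)

fig, ax = plt.subplots()
ax.scatter(embedding[:, 0], embedding[:, 1], s=sizes, alpha=0.5)

# 3. label each point with its cluster index
for i, (x, y) in enumerate(embedding):
    ax.annotate(str(i), (x, y), ha='center', va='center')
plt.show()
```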
**Is your feature request related to a problem? Please describe.**
Between this and #570, Yellowbrick could provide a reasonable approximation of PyLDAViz in an interactive notebook with ipywidgets.
**Examples**
<img width="590" alt="screenshot 2018-08-21 06 55 38" src="https://user-images.githubusercontent.com/745966/44397681-39f12e80-a50f-11e8-88b8-53a76bbaa1ab.png">
| 2hard
|
Title: `GROUP` syntax for grouping keywords and control structures
Body: The proposed BLOCK keyword would allow users to logically group related steps, improving test case organization, readability, and making the logs clearer and more structured by avoiding clutter at the top level.
Currently, to achieve logical grouping, users often create custom keywords to wrap related steps, leading to extra development and maintenance efforts. With BLOCK, users can group steps directly within test cases, minimizing the need for custom keywords.
Example:
```
*** Test Cases ***
Test Server Configuration
BLOCK Check Network Configuration
Check IP Configuration ${dut1} ${dut1_ip}
Check Subnet Mask ${dut1} ${dut1_subnet}
Check Gateway ${dut1} ${dut1_gateway}
END
BLOCK Check Service Status
Check Service Status ${dut1} HTTP status=active
Check Service Status ${dut1} SSH status=inactive
END
``` | 2hard
|
Title: Add support for composite types
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
In 3.10 Prisma added support for embedded types, this looks like this in the schema
```prisma
model User {
id Int @id @default(autoincrement())
name String
photo Photo
}
type Photo {
width Int
height Int
data Bytes
}
```
This is currently only supported for MongoDB although there are plans to add support for this to other database providers in the future.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
I have not looked into the API that Prisma provides for this yet but I imagine that this will be very similar to relational fields.
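For illustration only, the generated Python types could mirror the schema roughly like this (a hypothetical sketch, not actual generator output):
```py
from pydantic import BaseModel

class Photo(BaseModel):
    width: int
    height: int
    data: bytes

class User(BaseModel):
    id: int
    name: str
    photo: Photo  # composite type embedded directly on the model
```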
https://www.prisma.io/docs/concepts/components/prisma-client/composite-types | 2hard
|
Title: 📝 Fix All typing Problems in codebase
Body: Fix All this related issues for typing:
- [x] #610
- [x] #609
- [x] #608
- [x] #607
- [x] #606
- [x] #605
- [x] #604 | 2hard
|
Title: seal
Body: it smudge
<img src="https://user-images.githubusercontent.com/16495490/131243595-c7ba962c-3c0e-493f-9df7-30db3438ffd0.png" width="500">
<img src="https://user-images.githubusercontent.com/16495490/131243629-0a2d1e1b-0d54-43ef-a942-7aa5894acd01.png" width="500">
| 2hard
|
Title: Permanently corrupted hybrid_property column labeling as a result of selecting column returned by hybrid_property.expression
Body: ### Describe the bug
The issue causes a select of `hybrid_property` to return its value under wrong label. The ORM model becomes "corrupted" by construction of a different specific select, selecting the column returned by `hybrid_property.expression`. The select causes memoization of `_proxy_key`, which is then incorrectly used as label of the column in the corrupted select.
`select(Model.id)` will return the `id` hybrid_property under `_id` key which is the name of the column returned by `hybrid_property.expression` of the `id` hybrid_property.
We have found this issue during migration of a large codebase from SQLAlchemy `1.3` to `1.4`, we will be thankful for a `1.4` fix.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
1.4.52, 2.0.32
### DBAPI (i.e. the database driver)
Any, psycopg2
### Database Vendor and Major Version
Any, PostgreSQL 15
### Python Version
3.11, Any
### Operating system
Windows, Any
### To Reproduce
```python
from sqlalchemy import Column, select
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import declarative_base
Base = declarative_base()
class Model(Base):
__tablename__ = 'model'
_id = Column(UUID(as_uuid=True), primary_key=True, nullable=False)
@hybrid_property
def id(self):
return self._id
@id.expression
def id(cls):
return cls._id
assert select(Model.id).subquery().c[0].key == 'id' and Model.id.expression._proxy_key == 'id'
assert select(Model._id).subquery().c[0].key == '_id'
# Creating above select causes the first select to fail when repeated.
# Selecting the underscored hybrid.expression's "target" Column causes
# memoization of `_proxy_key` in `Model._id.__clause_element__().__dict__`
# As a consequence column `id` is returned under name `_id` in the result rows
# instead of the expected `id`.
# Presence of the memoized `_proxy_key` in __dict__ causes it to be cloned
# in `sqlalchemy_sql_annotation_annotated._with_annotations`
# and it takes precedence when accessed via the Proxy class of hybrid property
# during select construction.
# The broken memoized state can be reverted by deleting memoized `_proxy_key` using following line
# del Model._id.__clause_element__().__dict__['_proxy_key']
assert (
select(Model.id).subquery().c[0].key == 'id'
), "Resulting `id` column would be returned incorrectly under `_id` attribute/key."
assert Model.id.expression._proxy_key == 'id'
```
### Error
```
# No exceptions
```
### Additional context
_No response_ | 2hard
|
Title: Passwords are stored plain text in logs [Bug]
Body: ### Product Version
3.10.13
### Product Edition
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [X] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Operating System: Ubuntu 20.04
Shell: Bash
### 🐛 Bug Description
The issue occurs when users are connected to a Linux system via JumpServer (using SSH key authentication), and then need to input passwords within the session for specific commands. For example:
When switching to a root user using the su - command, the root password is logged in cleartext within the session logs.
When connecting to a database using mysql -u root -p, the password entered is also logged in cleartext.
This poses a significant security risk, as sensitive information such as passwords should not be captured or logged in plain text within session logs.
### Recurrence Steps
Log into JumpServer.
Use SSH key-based authentication to connect to a Linux system.
Once connected, run a command that prompts for a password, such as:
su - (to switch to root)
Enter the password when prompted.
Review the session logs in JumpServer and observe that the entered password is logged in cleartext.
### Expected Behavior
Password inputs, such as those for su - or mysql -u root -p, should not be captured or stored in cleartext within JumpServer session logs. Password fields should be masked or excluded from logs entirely to protect sensitive information.
### Additional Information
_No response_
### Attempted Solutions
_No response_ | 2hard
|
Title: [FEATURE]: Support SP+PP in Llama etc.
Body: ### Describe the feature
Currently most models like Llama does not support SP together with PP. Please add support for this. | 2hard
|
Title: Fix duplication of _cls into 'model' and use class_check
Body: http://reddines.blogspot.com.br/2015/02/how-to-query-on-cls-attribute-on.html
https://github.com/MongoEngine/mongoengine/issues/450
| 2hard
|
Title: Limit Variable Scope within resource files
Body: Presently it's not possible to limit the scope of variables defined within resource files variable section to the resource file itself. This would be helpful especially if a huge amount of xpath variables are defined within different resource files. All this xpath variables pollute the global scope unnecessarily.
Maybe there could be a "private variables" section for resource files in the future? Or at least a private tag?
Rainer | 2hard
|
Title: File Upload service to store media in cloud
Body: https://www.inkfilepicker.com/
| 2hard
|
Title: Write posts in more than one language
Body: One post should support multiple languages; the translations should be stored as subcontents, and there will be a middleware to choose the default one to display.
| 2hard
|
Title: Selecting fields dynamically based on user input
Body: ## Problem
I see that https://github.com/RobertCraigie/prisma-client-py/issues/19 added the ability to select fields to the client, but what if I need to select fields based on user input, so the fields to be selected have to be dynamic? The above solution requires you to regenerate the client. I wonder if there is any pattern for this scenario or any plans to address it in the future.
| 2hard
|
Title: Channel alias
Body: If you have a channel called **articles/food/special** it will be accessible through **http://...../articles/food/special**. You may want to set an alias for this channel (in the same way symbolic links work in a FS).
Lets say you have a content **articles/food/special/delicious.html**
So, go to the channel admin and set:
aliases: ['articles/specialfoods', 'specialfoods', 'deliciousfoods']
The same content will be accessible through:
**articles/specialfoods/delicious.html** or **specialfoods/delicious.html** or **deliciousfoods.html**
As it is the same content, all the pages will have their **canonical_url** set to the real channel url **articles/food/special/delicious.html**
If the alias name conflicts with a real channel name it will have low priority (useful when you want to unpublish a channel and put another one in the place)
| 2hard
|
Title: Port arctic to the S3 key-value store
Body: It would be super useful if arctic could use the S3 API which would allow more flexible scaling into any S3 compatible key-value store.
| 2hard
|
Title: per channel/content access control
Body: Implement a general model **AccessControl** with roles_accepted definition for read/edit/list/delete
This will be applied for Channel and Content
This should have a default **api** method to check the permission
The generic views and api should call this method to define if access is allowed.
The admin should query the fields to define the access control.
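A minimal sketch with plain MongoEngine (the field names, the `can` helper, and the defaults are all assumptions, not existing Quokka code):
```python
from mongoengine import (Document, EmbeddedDocument, EmbeddedDocumentField,
                         ListField, StringField)

class AccessControl(EmbeddedDocument):
    read_roles = ListField(StringField(), default=list)
    edit_roles = ListField(StringField(), default=list)
    list_roles = ListField(StringField(), default=list)
    delete_roles = ListField(StringField(), default=list)

    def can(self, action, user_roles):
        # An empty role list means the action is open to everyone.
        accepted = getattr(self, action + '_roles', [])
        return not accepted or bool(set(accepted) & set(user_roles))

class Channel(Document):
    title = StringField()
    access = EmbeddedDocumentField(AccessControl, default=AccessControl)
```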
| 2hard
|
Title: Implement REST API
Body: Use https://github.com/nicolaiarocci/eve as default API layer, make it optional.
| 2hard
|
Title: [FEATURE]: Add Ulysses Sequence Parallelism support for Command-R, Qwen2 and ChatGLM
Body: ### Describe the feature
Please add Ulysses Sequence Parallelism support for Command-R, Qwen2 and ChatGLM | 2hard
|
Title: incorrect SAWarning when using any together with aliased polymorphic entity
Body: ### Describe the bug
It looks like an incorrect "cartesian product" warning is emitted when using `any` together with an aliased polymorphic entity and a relation configured with a secondary table.
The implicit join does not appear to be taken into account(?)
cheers
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
1.4.49, 2.0.19
### DBAPI (i.e. the database driver)
pysqlite, mysqlclient
### Database Vendor and Major Version
SQLite, MariaDB
### Python Version
3.11
### Operating system
Linux
### To Reproduce
```python
import warnings
import sqlalchemy
import sqlalchemy.orm
Base = sqlalchemy.orm.declarative_base()
class Relation(Base):
__tablename__ = 'relation'
id = sqlalchemy.Column(
sqlalchemy.Integer, primary_key=True
)
one = sqlalchemy.Column(
sqlalchemy.Integer, sqlalchemy.ForeignKey('entity.id')
)
two = sqlalchemy.Column(
sqlalchemy.Integer, sqlalchemy.ForeignKey('entity.id')
)
class Entity(Base):
__tablename__ = 'entity'
id = sqlalchemy.Column(
sqlalchemy.Integer, primary_key=True
)
_type = sqlalchemy.Column(
sqlalchemy.String(50)
)
__mapper_args__ = {
'polymorphic_on': '_type',
'polymorphic_identity': 'entity'
}
links = sqlalchemy.orm.relationship(
'Relation',
secondary=Relation.__table__,
primaryjoin=id == Relation.one,
secondaryjoin=id == Relation.two,
)
class EntityTwo(Entity):
__tablename__ = 'entity_two'
id = sqlalchemy.Column(
sqlalchemy.Integer, sqlalchemy.ForeignKey('entity.id'), primary_key=True
)
__mapper_args__ = {
'polymorphic_identity': 'entity_two'
}
engine = sqlalchemy.create_engine(
'sqlite:///:memory:', echo=True
)
Base.metadata.create_all(engine)
Session = sqlalchemy.orm.sessionmaker(
bind=engine
)
session = Session()
alias = sqlalchemy.orm.aliased(EntityTwo, flat=True)
q = session.query(
EntityTwo.id
).filter(
EntityTwo.links.of_type(alias).any(
alias.id == '1'
)
)
print(q)
# SELECT entity_two.id AS entity_two_id
# FROM entity JOIN entity_two ON entity.id = entity_two.id
# WHERE EXISTS (SELECT 1
# FROM relation AS relation_1, entity AS entity_1 JOIN entity_two AS entity_two_1 ON entity_1.id = entity_two_1.id
# WHERE entity.id = relation_1.one AND entity_1.id = relation_1.two AND entity_two_1.id = ?)
warnings.filterwarnings(
"error", category=sqlalchemy.exc.SAWarning,
message='.*cartesian product.*'
)
q.all()
```
### Error
```
Traceback (most recent call last):
File "genver.py", line 94, in <module>
q.all()
File ".venv/lib64/python3.11/site-packages/sqlalchemy/orm/query.py", line 2773, in all
return self._iter().all()
^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter
result = self.session.execute(
^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/orm/session.py", line 1717, in execute
result = conn._execute_20(statement, params or {}, execution_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/engine/base.py", line 1710, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/engine/base.py", line 1569, in _execute_clauseelement
compiled_sql, extracted_params, cache_hit = elem._compile_w_cache(
^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/elements.py", line 532, in _compile_w_cache
compiled_sql = self._compiler(
^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/elements.py", line 567, in _compiler
return dialect.statement_compiler(dialect, self, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/compiler.py", line 809, in __init__
Compiled.__init__(self, dialect, statement, **kwargs)
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/compiler.py", line 464, in __init__
self.string = self.process(self.statement, **compile_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/compiler.py", line 499, in process
return obj._compiler_dispatch(self, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/visitors.py", line 82, in _compiler_dispatch
return meth(self, **kw)
^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/compiler.py", line 3545, in visit_select
text = self._compose_select_body(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/compiler.py", line 3706, in _compose_select_body
t = self._generate_delimited_and_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/compiler.py", line 1748, in _generate_delimited_and_list
return clauses[0]._compiler_dispatch(self, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/visitors.py", line 82, in _compiler_dispatch
return meth(self, **kw)
^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/compiler.py", line 2025, in visit_unary
return self._generate_generic_unary_operator(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/compiler.py", line 2346, in _generate_generic_unary_operator
return opstring + unary.element._compiler_dispatch(self, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/visitors.py", line 82, in _compiler_dispatch
return meth(self, **kw)
^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/compiler.py", line 1433, in visit_select_statement_grouping
return "(" + grouping.element._compiler_dispatch(self, **kwargs) + ")"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/visitors.py", line 82, in _compiler_dispatch
return meth(self, **kw)
^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/compiler.py", line 3545, in visit_select
text = self._compose_select_body(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/compiler.py", line 3713, in _compose_select_body
from_linter.warn()
File ".venv/lib64/python3.11/site-packages/sqlalchemy/sql/compiler.py", line 359, in warn
util.warn(message)
File ".venv/lib64/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 1640, in warn
_warnings_warn(msg, exc.SAWarning)
File ".venv/lib64/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 1677, in _warnings_warn
warnings.warn(message, category, stacklevel=stacklevel + 1)
sqlalchemy.exc.SAWarning: SELECT statement has a cartesian product between FROM element(s) "relation_1" and FROM element "entity_two_1". Apply join condition(s) between each element to resolve.
```
### Additional context
_No response_ | 2hard
|
Title: Support IPython notebooks posts
Body: https://github.com/iiSeymour/Flasked-Notebooks
On posting you can create IPython notebook content: create the notebook and use nbconvert to save the rendered page as Quokka Content.
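A rough sketch of the conversion step with nbconvert (file name is hypothetical):
```python
import nbformat
from nbconvert import HTMLExporter

nb = nbformat.read('post.ipynb', as_version=4)
body, _resources = HTMLExporter().from_notebook_node(nb)
# `body` is the rendered HTML that could be stored as the Quokka Content body.
```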
| 2hard
|
Title: Support JSON serialization with result model
Body: RF 6.1 added support to serialize the execution model (`robot.running.TestSuite`) into JSON and back (#3902). This support should be extended to the result model (`robot.result.TestSuite`) as well. Supporting writing results directly to a JSON file as part of execution is out of the scope of this issue (we already have #3423 about that), but Rebot should be able to produce `output.json` files.
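For reference, the RF 6.1 API for the execution model looks roughly like this (the path is hypothetical); the request is to offer the same round-trip for `robot.result.TestSuite`:
```python
from robot.api import TestSuite

suite = TestSuite.from_file_system('tests')   # robot.running.TestSuite
data = suite.to_json()                        # serialize to a JSON string
restored = TestSuite.from_json(data)          # and back
```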
A precondition for this issue is cleaning up the result model a bit. That includes removing deprecated attributes (#4846), enhancing how keyword names are stored (#4884) and enhancing timestamp handling (#4258). | 2hard
|
Title: [Bug] Command filter regex matching fails
Body: ### Product Version
v3.10.13
### Product Edition
- [ ] Community Edition
- [X] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online Installation (One-click command installation)
- [X] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
v3.10.13, standalone deployment
Latest Chrome
### 🐛 Bug Description
When a regular expression used for command filtering contains a multi-word phrase separated by whitespace/newlines, such as ^\s*drop\s+database\s+.* (to block database-dropping commands), the command is intercepted as expected when typed with spaces, but typing drop, pressing Enter, and then typing database... is no longer intercepted. Online regex testers do match this scenario.
### Recurrence Steps
Create a regex command group: ^\s*drop\s+database\s+.*
Create a command filter rule that targets the database and rejects the operation
Connect to the asset, type drop, press Enter, then type database test;

### Expected Behavior
The complete command should be matched and intercepted.
### Additional Information
_No response_
### Attempted Solutions
_No response_ | 2hard
|
Title: delete with join and returning clause doesn't work on SQLAlchemy 2
Body: ### Describe the bug
I have a statement like the following, which works on SQLAlchemy 1.4 but not on 2.0
```python
delete(Author).where(Author.id == Book.author_id).returning(Book.title)
```
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
https://docs.sqlalchemy.org/en/20/tutorial/data_update.html#using-returning-with-update-delete
### SQLAlchemy Version in Use
2.0.36
### DBAPI (i.e. the database driver)
psycopg2
### Database Vendor and Major Version
PostgreSQL 12.20
### Python Version
3.11
### Operating system
macOS
### To Reproduce
```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, delete, func
from sqlalchemy.orm import declarative_base, relationship, sessionmaker
Base = declarative_base()
Session = sessionmaker(future=True)
class Book(Base):
__tablename__ = "books"
id = Column(Integer, primary_key=True)
title = Column(String, nullable=False)
author_id = Column(
Integer, ForeignKey("authors.id", ondelete="CASCADE"), nullable=False
)
author = relationship("Author", back_populates="books")
class Author(Base):
__tablename__ = "authors"
id = Column(Integer, primary_key=True)
name = Column(String, nullable=False)
books = relationship(Book, back_populates="author")
def main(engine):
with Session.begin() as session:
session.add(Book(title="Title", author=Author(name="Alice")))
stmt = delete(Author).where(Author.id == Book.author_id).returning(Book.title)
with engine.connect() as conn:
assert conn.scalar(func.count(Author.id)) == 1
check(
conn.execute(stmt),
"conn.execute()",
)
with Session() as session:
assert session.scalar(func.count(Author.id)) == 1
check(
lambda: session.execute(stmt.execution_options(synchronize_session=False)),
"session.execute(synchronize_session=False)",
)
with Session() as session:
assert session.scalar(func.count(Author.id)) == 1
check(
lambda: session.execute(
stmt.execution_options(synchronize_session="fetch")
),
"session.execute(synchronize_session='fetch')",
)
def check(result_or_fn, message):
if callable(result_or_fn):
try:
result = result_or_fn()
except Exception as exc:
print(f"{message}: {type(exc).__name__}: {exc}")
return
else:
result = result_or_fn
expected = ["Title"]
titles = result.scalars().all()
if titles == expected:
print(f"{message}: Got {expected} as expected")
else:
print(f"{message}: Expected {expected} but got {titles}")
if __name__ == "__main__":
engine = create_engine("postgresql:///issue12096", future=True)
Session.configure(bind=engine)
Base.metadata.create_all(engine)
try:
main(engine)
finally:
Base.metadata.drop_all(engine)
```
### Error
SQLAlchemy 1.4 output:
```
conn.execute(): Got ['Title'] as expected
session.execute(synchronize_session=False): Got ['Title'] as expected
session.execute(synchronize_session='fetch'): Expected ['Title'] but got []
```
The 'fetch' case above emits `RETURNING books.title, authors.id` which looks like it should work but doesn't.
SQLAlchemy 2.0 output:
```
conn.execute(): Expected ['Title'] but got [1]
session.execute(synchronize_session=False): NoSuchColumnError: Could not locate column in row for column 'books.title'
session.execute(synchronize_session='fetch'): NoSuchColumnError: Could not locate column in row for column 'books.title'
```
all three cases emit `RETURNING authors.id`...
The full traceback for `session.execute`:
```
Traceback (most recent call last):
File ".venv/lib/python3.11/site-packages/sqlalchemy/engine/cursor.py", line 844, in _index_for_key
rec = self._keymap[key]
~~~~~~~~~~~~^^^^^
KeyError: Column('title', String(), table=<books>, nullable=False)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "issue12096.py", line 82, in <module>
main(engine)
File "issue12096.py", line 44, in main
check(
File "issue12096.py", line 62, in check
result = result_or_fn()
^^^^^^^^^^^^^^
File "issue12096.py", line 45, in <lambda>
lambda: session.execute(stmt.execution_options(synchronize_session=False)),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2362, in execute
return self._execute_internal(
^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2247, in _execute_internal
result: Result[Any] = compile_state_cls.orm_execute_statement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/sqlalchemy/orm/bulk_persistence.py", line 2021, in orm_execute_statement
return super().orm_execute_statement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/sqlalchemy/orm/context.py", line 308, in orm_execute_statement
return cls.orm_setup_cursor_result(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/sqlalchemy/orm/bulk_persistence.py", line 818, in orm_setup_cursor_result
return cls._return_orm_returning(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/sqlalchemy/orm/bulk_persistence.py", line 629, in _return_orm_returning
return loading.instances(result, querycontext)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/sqlalchemy/orm/loading.py", line 132, in instances
with util.safe_reraise():
File ".venv/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File ".venv/lib/python3.11/site-packages/sqlalchemy/orm/loading.py", line 113, in instances
*[
^
File ".venv/lib/python3.11/site-packages/sqlalchemy/orm/loading.py", line 114, in <listcomp>
query_entity.row_processor(context, cursor)
File ".venv/lib/python3.11/site-packages/sqlalchemy/orm/context.py", line 3021, in row_processor
getter = result._getter(column)
^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/sqlalchemy/engine/result.py", line 1173, in _getter
return self._metadata._getter(key, raiseerr)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/sqlalchemy/engine/result.py", line 169, in _getter
index = self._index_for_key(key, raiseerr)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/sqlalchemy/engine/cursor.py", line 846, in _index_for_key
x = self._key_fallback(key, ke, raiseerr)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/sqlalchemy/engine/cursor.py", line 824, in _key_fallback
raise exc.NoSuchColumnError(
sqlalchemy.exc.NoSuchColumnError: Could not locate column in row for column 'books.title'
```
### Additional context
mcve requires you to run `createdb issue12096` first. | 2hard
|
Title: app module gets executed twice when serving app
Body: This has been a known problem for a while, but I figured I'd document it.
Issue:
1. Create an app `app.py`.
2. Run `python app.py serve`
3. app.py gets executed twice: once to execute the module and find the cli, and a second time when the server module imports it
Ideal behavior:
`app.py` only gets executed once. I'm not sure if this is possible however with the current architecture. | 2hard
|
Title: unable to use load_only together with contains_eager and aliased polymorphic entities
Body: ### Describe the bug
I am having issues loading select columns when using relying on `contains_eager` / `joinedload` when using aliased Polymorphic classes .
In the example below I am attempting to only load the `id` column from the `user` relationship but it appears that it will always load the `do_not_load` column from the parent class `Resource`.
When not using aliases the query works as expected.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
1.4.49, 2.0.19
### DBAPI (i.e. the database driver)
pysqlite, mysqlclient
### Database Vendor and Major Version
SQLite, MariaDB
### Python Version
3.11
### Operating system
Linux
### To Reproduce
```python
import warnings
import sqlalchemy
import sqlalchemy.orm
Base = sqlalchemy.orm.declarative_base()
class Resource(Base):
__tablename__ = 'resource'
id = sqlalchemy.Column(
sqlalchemy.Integer, primary_key=True
)
_type = sqlalchemy.Column(
sqlalchemy.String(50)
)
do_not_load = sqlalchemy.Column(
sqlalchemy.Integer
)
__mapper_args__ = {
'polymorphic_on': '_type',
'polymorphic_identity': 'resource'
}
class User(Resource):
__tablename__ = 'user'
id = sqlalchemy.Column(
sqlalchemy.Integer, sqlalchemy.ForeignKey(Resource.id), primary_key=True
)
__mapper_args__ = {
'polymorphic_identity': 'user'
}
class Manager(Base):
__tablename__ = 'manager'
id = sqlalchemy.Column(
sqlalchemy.Integer, primary_key=True
)
parent_id = sqlalchemy.Column(
sqlalchemy.Integer, sqlalchemy.ForeignKey(User.id)
)
user = sqlalchemy.orm.relationship(
User
)
engine = sqlalchemy.create_engine(
'sqlite:///:memory:', echo=True
)
Base.metadata.create_all(engine)
Session = sqlalchemy.orm.sessionmaker(
bind=engine
)
session = Session()
user_alias = sqlalchemy.orm.aliased(User, flat=True)
manager_alias = sqlalchemy.orm.aliased(Manager, flat=True)
q1 = session.query(
Manager
).join(
User, Manager.user.of_type(user_alias)
).options(
sqlalchemy.orm.contains_eager(
Manager.user.of_type(User)
).load_only(
User.id,
)
)
q2 = session.query(
manager_alias
).outerjoin(
user_alias, manager_alias.user.of_type(user_alias)
).options(
sqlalchemy.orm.contains_eager(
manager_alias.user.of_type(user_alias)
).load_only(
user_alias.id,
)
)
q3 = session.query(
manager_alias
).options(
sqlalchemy.orm.joinedload(
manager_alias.user.of_type(user_alias)
).load_only(
user_alias.id,
)
)
for label, query in (
('without_aliases', q1),
('with_aliases_contains_eager', q2),
('with_aliases_joinedload', q3)
):
if str(query).count('do_not_load'):
print('Warning: {0} contains do_not_load: {1}'.format(label, query))
```
### Error
```
Warning: with_aliases_eager_load contains do_not_load: SELECT user_1.id AS user_1_id, resource_1.id AS resource_1_id, resource_1._type AS resource_1__type, resource_1.do_not_load AS resource_1_do_not_load, manager_1.id AS manager_1_id, manager_1.parent_id AS manager_1_parent_id
FROM manager AS manager_1 LEFT OUTER JOIN (resource AS resource_1 JOIN user AS user_1 ON resource_1.id = user_1.id) ON user_1.id = manager_1.parent_id
Warning: with_aliases_joinedload contains do_not_load: SELECT manager_1.id AS manager_1_id, manager_1.parent_id AS manager_1_parent_id, user_1.id AS user_1_id, resource_1.id AS resource_1_id, resource_1._type AS resource_1__type, resource_1.do_not_load AS resource_1_do_not_load
FROM manager AS manager_1 LEFT OUTER JOIN (resource AS resource_1 JOIN user AS user_1 ON resource_1.id = user_1.id) ON user_1.id = manager_1.parent_id
```
### Additional context
_No response_ | 2hard
|
Title: Support sparse matrices
Body: | 2hard
|
Title: create a command to create/update mongo indexes
Body: I'm not sure if MongoEngine supports automatically creating indexes for EmbeddedDocuments in https://github.com/pythonhub/quokka/blob/master/quokka/core/models.py
Example: class Comment has a meta definition that does not produce any result on the main Content class:
    class Comment(db.EmbeddedDocument):
        (...)
        meta = {
            'indexes': ['-created_at', '-available_at'],
            'ordering': ['-created_at']
        }

    class Content(HasCustomValue, Publishable, LongSlugged, Commentable):
        (...)
        meta = {
            'allow_inheritance': True,
            'indexes': ['-created_at', 'slug'],
            'ordering': ['-created_at']
        }

Creates only two indexes: 'created_at' and 'slug'.
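A hypothetical sketch of such a command (Flask-Script style; the registry lookup and command name are assumptions, not existing Quokka code):
```python
from flask_script import Command
from mongoengine.base import _document_registry  # private API, used here only for illustration

class CreateIndexes(Command):
    """Create/update MongoDB indexes for every registered document."""

    def run(self):
        for name, doc_cls in _document_registry.items():
            if hasattr(doc_cls, 'ensure_indexes'):  # skips EmbeddedDocuments
                print('ensuring indexes for %s' % name)
                doc_cls.ensure_indexes()
```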
| 2hard
|
Title: Replace Powermock with Mockito
Body: **Summary**
- Powermock seems no longer maintained https://github.com/powermock/powermock/issues/1117 where the last commit on the master was in 2020
- Using Powermock with Java 11 causing excessive amount of "An illegal reflective access operation has occurred" warnings https://github.com/powermock/powermock/issues/969
This issue is asking to replace Powermock (mostly used for mocking static methods and whiteboxing to access private methods) with the Mockito equivalent.
**Urgency**
Low | 2hard
|
Title: Re-evaluate catch-all exception handling throughout the codebase
Body: **Summary**
In the same category as #9679
During the work of converting from using FindBugs to SpotBugs (#10407) I found that there was a particular class of bugs that we had strewn throughout our codebase - the `REC_CATCH_EXCEPTION`. The bug code, by definition states
> REC: Exception is caught when Exception is not thrown (REC_CATCH_EXCEPTION)
> This method uses a try-catch block that catches Exception objects, but Exception is not thrown within the try block, and RuntimeException is not explicitly caught. It is a common bug pattern to say try { ... } catch (Exception e) { something } as a shorthand for catching a number of types of exception each of whose catch blocks is identical, but this construct also accidentally catches RuntimeException as well, masking potential bugs.
> A better approach is to either explicitly catch the specific exceptions that are thrown, or to explicitly catch RuntimeException exception, rethrow it, and then catch all non-Runtime Exceptions, as shown below:
```java
try {
...
} catch (RuntimeException e) {
throw e;
} catch (Exception e) {
... deal with all non-runtime exceptions ...
}
```
To summarize, essentially what I saw is that there are many places throughout our codebase where we `catch(Exception e)` where `Exception` is never thrown. It's essentially a catch-all. However, most situations we either don't want this, or might want to handle the class of `RuntimeException` differently.
To remedy this we should:
- Remove the `REC_CATCH_EXCEPTION` from the spotbugs exclusion list
- For each instance, carefully evaluate how we want to handle the exceptions and either make a class-wide exception for spotbugs, OR improve the error handling to split between `RuntimeException` and other `Exception`
**Urgency**
I would estimate this to be middle-to-low priority.
| 2hard
|
Title: Timeout for calls to external systems
Body: **Summary**
Currently, we make blocking calls to external systems for information. All these calls should have proper timeout values and recovery mechanisms. We should not be impacted negatively if one of these services is down or has significant service degradation.
Just as importantly, proper logging should enable fast diagnosis of such problems
External systems include:
UFS
Security Plugins (BlueTalon etc)
Anything else?
**Urgency**
It is important for the reliability of our system..
| 2hard
|
Title: Root inode ACL check is not a strong/safe model
Body: **Summary**
Currently, Alluxio has a check on startup that the owner of the root directory in the Alluxio FileSystem matches the user who started the service. This check can lead to breaking scenarios.
Scenario 1: Using HDFS as a UFS
User runs as superuser of Alluxio filesystem (foo). User (foo) changes permission of / to be owner bar:bar. Command succeeds. In the event of a restart, Alluxio will fail to start since the root inode doesn't match the user who started the process.
Scenario 2: Using HDFS as UFS
User runs as superuser of Alluxio filesystem (foo). User (foo) tries to change mode of / to be 777. User mistakenly inputs chown instead of chmod. Command succeeds. In the event of a restart, Alluxio will fail to start since the root inode acl (777:foo) doesn't match the user who started the process. Fixing this requires a format.
A better approach may be to skip the root inode ACL check entirely. The original reason for this check is to prevent users from accidentally starting the Alluxio service as the incorrect user. An alternative would be to restrict the journal directory so that the permissions must be 700. If the journal is a kerberized HDFS, the keytab for the HDFS cluster must be restricted to 700 permissions as well. This will prevent any other user from being able to read the journal location configured in alluxio-site.properties and will therefore fail on startup.
**Urgency**
Can cause breaking scenarios where recovery of the Alluxio process requires creating new user, changing permissions of the Alluxio installation directory, and then starting the service in order to fix the root inode ACL. | 2hard
|
Title: finetune for other languages?
Body: how I can fine-tune for vietnamese for dialog chatbot ? Thanks guys | 1medium
|
Title: Aten arange behavior when dtype is int64 and step size is greater than range
Body: ### 🐛 Describe the bug
While testing corner cases on torch.arange, I see the following behavior when dtype is int64 and the step size is greater than the range.
On CPU, I get the following behavior for arange.
>>> a = torch.arange(0, 0.5, 1, dtype=torch.int64)
>>> a
tensor([], dtype=torch.int64)
>>> a = torch.arange(0, 0.5, 1, dtype=torch.int32)
>>> a
tensor([0], dtype=torch.int32)
Why is it that the size of ‘a’ is 0 when dtype is int64, whereas it is 1 for int32? Logically speaking, the first element is 0 anyway and the size should have been 1 even for the int64 type, isn’t it?
### Versions
2025-03-13 05:10:24 (2.62 MB/s) - ‘collect_env.py’ saved [24353/24353]
Collecting environment information...
PyTorch version: 2.6.0+hpu_1.21.0-202.git603340c
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5 (ssh://[email protected]/habana-internal/tpc_llvm10 6423f90703886aa37631daf63eaf24f24df9ba3d)
CMake version: version 3.29.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] habana-torch-dataloader==1.21.0+git9d09025dd
[pip3] habana-torch-plugin==1.21.0+git9d09025dd
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.6.0+hpu.1.21.0.202.git603340c
[pip3] torch_tb_profiler==0.4.0
[pip3] torchaudio==2.5.1a0+1661daf
[pip3] torchdata==0.9.0+d4bb3e6
[pip3] torchmetrics==1.2.1
[pip3] torchtext==0.18.0a0+9bed85d
[pip3] torchvision==0.20.1a0+3ac97aa
[conda] Could not collect
cc @albanD | 1medium
|
Title: I am training with my own dataset, but the loss is NaN.
Body: My PC:
GTX 1080 Ti, CentOS 7, learning rate 1e-5
Has anyone had the same situation as me? Please help me. Thanks.
iter 310 || Loss: nan || timer: 0.1921 sec.
iter 320 || Loss: nan || timer: 0.2041 sec.
iter 330 || Loss: nan || timer: 0.2006 sec.
iter 340 || Loss: nan || timer: 0.2043 sec.
iter 350 || Loss: nan || timer: 0.2128 sec.
iter 360 || Loss: nan || timer: 0.2072 sec.
iter 370 || Loss: nan || timer: 0.2091 sec.
iter 380 || Loss: nan || timer: 0.2141 sec.
iter 390 || Loss: nan || timer: 0.2486 sec.
iter 400 || Loss: nan || timer: 0.1914 sec.
iter 410 || Loss: nan || timer: 0.2052 sec.
iter 420 || Loss: nan || timer: 0.1976 sec.
iter 430 || Loss: nan || timer: 0.1952 sec.
iter 440 || Loss: nan || timer: 0.1942 sec.
iter 450 || Loss: nan || timer: 0.2101 sec.
iter 460 || Loss: nan || timer: 0.1934 sec. | 2hard
|
Title: Ubuntu CLI commands
Body: Hi, love that Mycroft is working on Kubuntu 18.10. I only had to fiddle with pulseaudio (restart?) to get the audio out working (although the audio test worked). I've been trying to get Voice Command to work for years, starting with WSP, Vocola and Dragonfly on Windows, but the Winapi and such calls are very limited and poorly documented. It's great that Kubuntu can use Python calls.
So the first thing I wanted to do was/is to Voice Control Vim (like in this old video using .NET? https://www.youtube.com/watch?v=TEBMlXRjhZY) or at least run some CLI commands. Unfortunately, it looks like using the KDE Plasmoid is the only way to do this? Please correct me if I'm wrong. I do see there are windows navigation controls with the Desktop Control Skill (is the plasmoid necessary for this? shouldn't there be a way to use the desktop control commands through the command line without the whole plasmoid feature?), which would be handy, but I can't seem to get the plasmoid installed. So if there is a direct way to hook into Mycroft output and redirect it to bash or some other basic form, that would be my simplest solution. I have used the CLI debug tool which is great, but don't see how that could be redirected yet. I realize Mycroft was built to operate on devices without keyboards, but output as text to CLI commands seems like a basic tool that is fundamental, even for workarounds such as a plasmoid not installing.
Installation of the plasmoid hangs on "Installing../lottie/qmldir" and had two package install errors for qtdeclarative5-qtquick2-plugin and qtdeclarative5-models-plugin. Similar to this issue for installing on Debian (https://github.com/MycroftAI/installers/issues/9), except I'm using Kubuntu 18.10 Cosmic which doesn't have these packages in its repos. I'm not sure if I can install them manually. I've been using the appimage installer, but will try the manual install for Debian again. No, actually that ended where the instructions say "sudo chmod +x /usr/share/plasma/plasmoids/org.kde.plasma.mycroftplasmoid/contents/code/startservice.sh" because there is no 'code' directory created, which would have the scripts to run manually even. Not sure if I need those scripts, but after the hung appimage install I do have a plasmoid, which gives these errors:
```
"Error loading QML file: file:///usr/share/plasma/plasmoids/org.kde.plasma.mycroftplasmoid/contents/ui/main.qml:33:34: Type FullRepresentation unavailable
file:///usr/share/plasma/plasmoids/org.kde.plasma.mycroftplasmoid/contents/ui/FullRepresentation.qml:31:1: module "Mycroft" is not installed"
```
I may try a restart after finishing tasks left in browser windows, but would love a path forward that doesn't require any plasmoid and all the dependencies that install required. Thanks for any pointers. | 2hard
|
Title: BUG: `array.asarray` does not respect `dtype` arg
Body: **Describe the issue**:
`dask.array.asarray` does not respect the `dtype` argument.
**Minimal Complete Verifiable Example**:
```python
>>> import numpy as np
>>> import dask.array as da
>>> Zm = da.asarray([[1, 2, 3]])
>>> Zm
dask.array<array, shape=(1, 3), dtype=int64, chunksize=(1, 3), chunktype=numpy.ndarray>
>>> Z = da.asarray(Zm, dtype=da.float64)
>>> Z
dask.array<array, shape=(1, 3), dtype=int64, chunksize=(1, 3), chunktype=numpy.ndarray>
>>> Z.compute().dtype
dtype('int64')
# same issue is present with `np` dtypes directly
>>> Z = da.asarray(Zm, dtype=np.float64)
>>> Z
dask.array<array, shape=(1, 3), dtype=int64, chunksize=(1, 3), chunktype=numpy.ndarray>
>>> Z.compute().dtype
dtype('int64')
```
**Anything else we need to know?**:
**Environment**:
- Dask version: 2024.8.0+3.g65270980
- Python version: 3.12.4
- Operating System: Ubuntu
- Install method (conda, pip, source): `python -m pip install git+https://github.com/dask/dask.git`
| 1medium
|
Title: [MNT]: Add Type Checking to Avoid AttributeError in Functions Handling Units
Body: ### Summary
Currently, in unit-related example scripts, the code raises an AttributeError when numpy.float64 objects are used without being converted to objects with a convert_to method. This can create issues for users who attempt to pass NumPy float values directly, unaware that the function expects specific unit-handling objects.
### Proposed fix
Add a type check at the beginning of functions to ensure that input is of the correct type. If it is not, raise an exception with a clear message (e.g., "Values must be unit-handling objects, not float"). | 1medium
|
Title: 2.24.0: not ready for `pyupgrade --py39-plus` (fails on linking DSO)
Body: Next month python 3.8mwill be EOSed.
I've tested patch generated by `pyupgrade --py39-plus` and looks like with that patch build fails on linking with
```console
+ /usr/bin/python3 -sBm build -w --no-isolation
* Getting build dependencies for wheel...
* Building wheel...
Running `maturin pep517 build-wheel -i /usr/bin/python3 --compatibility off`
📦 Including license file "/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/LICENSE"
🍹 Building a mixed python/rust project
🔗 Found pyo3 bindings
🐍 Found CPython 3.10 at /usr/bin/python3
📡 Using build options features, bindings from pyproject.toml
Compiling proc-macro2 v1.0.86
Compiling unicode-ident v1.0.12
Compiling target-lexicon v0.12.14
Compiling python3-dll-a v0.2.10
Compiling once_cell v1.19.0
Compiling autocfg v1.3.0
Compiling stable_deref_trait v1.2.0
Compiling libc v0.2.155
Compiling heck v0.5.0
Compiling version_check v0.9.5
Compiling litemap v0.7.3
Compiling writeable v0.5.5
Compiling rustversion v1.0.17
Compiling memchr v2.7.4
Compiling icu_locid_transform_data v1.5.0
Compiling radium v0.7.0
Compiling cfg-if v1.0.0
Compiling tinyvec_macros v0.1.1
Compiling static_assertions v1.1.0
Compiling smallvec v1.13.2
Compiling icu_properties_data v1.5.0
Compiling serde v1.0.209
Compiling tap v1.0.1
Compiling serde_json v1.0.128
Compiling indoc v2.0.5
Compiling write16 v1.0.0
Compiling percent-encoding v2.3.1
Compiling unindent v0.2.3
Compiling utf8_iter v1.0.4
Compiling hashbrown v0.14.5
Compiling unicode-bidi v0.3.15
Compiling icu_normalizer_data v1.5.0
Compiling funty v2.0.0
Compiling utf16_iter v1.0.5
Compiling zerocopy v0.7.34
Compiling regex-syntax v0.8.4
Compiling equivalent v1.0.1
Compiling ryu v1.0.18
Compiling itoa v1.0.11
Compiling uuid v1.10.0
Compiling hex v0.4.3
Compiling base64 v0.22.1
Compiling lexical-util v0.8.5
Compiling tinyvec v1.6.1
Compiling wyz v0.5.1
Compiling form_urlencoded v1.2.1
Compiling aho-corasick v1.1.3
Compiling bitvec v1.0.1
Compiling indexmap v2.2.6
Compiling lexical-parse-integer v0.8.6
Compiling unicode-normalization v0.1.23
Compiling lexical-parse-float v0.8.5
Compiling quote v1.0.36
Compiling syn v2.0.68
Compiling ahash v0.8.11
Compiling idna v0.5.0
Compiling getrandom v0.2.15
Compiling num-traits v0.2.19
Compiling memoffset v0.9.1
Compiling url v2.5.2
Compiling regex-automata v0.4.7
Compiling pyo3-build-config v0.22.2
Compiling num-integer v0.1.46
Compiling num-bigint v0.4.6
Compiling regex v1.10.6
Compiling synstructure v0.13.1
Compiling pyo3-ffi v0.22.2
Compiling pyo3-macros-backend v0.22.2
Compiling pyo3 v0.22.2
Compiling jiter v0.5.0
Compiling pydantic-core v2.24.0 (/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0)
error: failed to run custom build command for `pydantic-core v2.24.0 (/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0)`
note: To improve backtraces for build dependencies, set the CARGO_PROFILE_RELEASE_BUILD_OVERRIDE_DEBUG=true environment variable to enable debug information generation.
Caused by:
process didn't exit successfully: `/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/target/release/build/pydantic-core-176bfdeef7d000ae/build-script-build` (exit status: 101)
--- stdout
cargo:rustc-check-cfg=cfg(Py_LIMITED_API)
cargo:rustc-check-cfg=cfg(PyPy)
cargo:rustc-check-cfg=cfg(GraalPy)
cargo:rustc-check-cfg=cfg(py_sys_config, values("Py_DEBUG", "Py_REF_DEBUG", "Py_TRACE_REFS", "COUNT_ALLOCS"))
cargo:rustc-check-cfg=cfg(invalid_from_utf8_lint)
cargo:rustc-check-cfg=cfg(pyo3_disable_reference_pool)
cargo:rustc-check-cfg=cfg(pyo3_leak_on_drop_without_reference_pool)
cargo:rustc-check-cfg=cfg(diagnostic_namespace)
cargo:rustc-check-cfg=cfg(c_str_lit)
cargo:rustc-check-cfg=cfg(Py_3_7)
cargo:rustc-check-cfg=cfg(Py_3_8)
cargo:rustc-check-cfg=cfg(Py_3_9)
cargo:rustc-check-cfg=cfg(Py_3_10)
cargo:rustc-check-cfg=cfg(Py_3_11)
cargo:rustc-check-cfg=cfg(Py_3_12)
cargo:rustc-check-cfg=cfg(Py_3_13)
cargo:rustc-cfg=Py_3_6
cargo:rustc-cfg=Py_3_7
cargo:rustc-cfg=Py_3_8
cargo:rustc-cfg=Py_3_9
cargo:rustc-cfg=Py_3_10
cargo:rustc-check-cfg=cfg(has_coverage_attribute)
cargo:rustc-check-cfg=cfg(specified_profile_use)
cargo:rerun-if-changed=python/pydantic_core/core_schema.py
cargo:rerun-if-changed=generate_self_schema.py
--- stderr
Traceback (most recent call last):
File "/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/generate_self_schema.py", line 247, in <module>
main()
File "/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/generate_self_schema.py", line 217, in main
value = get_schema(s, definitions)
File "/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/generate_self_schema.py", line 57, in get_schema
return type_dict_schema(obj, definitions)
File "/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/generate_self_schema.py", line 156, in type_dict_schema
raise ValueError(f'Unknown Schema forward ref: {fr_arg}')
ValueError: Unknown Schema forward ref: list[CoreSchema]
thread 'main' panicked at build.rs:29:9:
generate_self_schema.py failed with exit status: 1
stack backtrace:
0: 0x560bdf04ef75 - std::backtrace_rs::backtrace::libunwind::trace::h5c85e557799ed486
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/../../backtrace/src/backtrace/libunwind.rs:116:5
1: 0x560bdf04ef75 - std::backtrace_rs::backtrace::trace_unsynchronized::ha97b107185df65bb
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x560bdf04ef75 - std::sys::backtrace::_print_fmt::h490acf9e9b8c6eb2
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/sys/backtrace.rs:65:5
3: 0x560bdf04ef75 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h9c32407e5a23c650
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/sys/backtrace.rs:40:26
4: 0x560bdf06f17b - core::fmt::rt::Argument::fmt::hae324c745842212e
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/core/src/fmt/rt.rs:173:76
5: 0x560bdf06f17b - core::fmt::write::h8e3a6cb8df1f9a95
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/core/src/fmt/mod.rs:1182:21
6: 0x560bdf04ccef - std::io::Write::write_fmt::h83bcab37323a9399
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/io/mod.rs:1827:15
7: 0x560bdf0500c1 - std::sys::backtrace::BacktraceLock::print::hd3c35caa6032e632
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/sys/backtrace.rs:43:9
8: 0x560bdf0500c1 - std::panicking::default_hook::{{closure}}::hd3c6083514eb2656
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:269:22
9: 0x560bdf04fd9c - std::panicking::default_hook::h94d20e9291e6eb42
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:296:9
10: 0x560bdf050691 - std::panicking::rust_panic_with_hook::hfa25182080856bef
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:800:13
11: 0x560bdf050587 - std::panicking::begin_panic_handler::{{closure}}::h2fc3fd5367175cd3
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:674:13
12: 0x560bdf04f439 - std::sys::backtrace::__rust_end_short_backtrace::h877093daaa72bd28
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/sys/backtrace.rs:168:18
13: 0x560bdf050214 - rust_begin_unwind
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:665:5
14: 0x560bdf017f33 - core::panicking::panic_fmt::hfc4c464a0d356173
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/core/src/panicking.rs:74:14
15: 0x560bdf019cd8 - build_script_build::generate_self_schema::hf9f929900624c562
at /home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/build.rs:29:9
16: 0x560bdf019cd8 - build_script_build::main::ha5db4a51bcd9603f
at /home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/build.rs:48:5
17: 0x560bdf018703 - core::ops::function::FnOnce::call_once::h4e11fd2c02563b95
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/core/src/ops/function.rs:250:5
18: 0x560bdf018703 - std::sys::backtrace::__rust_begin_short_backtrace::h0b129e115204002e
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/sys/backtrace.rs:152:18
19: 0x560bdf0186f9 - std::rt::lang_start::{{closure}}::hd691eceb39629f76
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/rt.rs:162:18
20: 0x560bdf0490f0 - core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once::h90440e1dec31addc
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/core/src/ops/function.rs:284:13
21: 0x560bdf0490f0 - std::panicking::try::do_call::h864c0af700b810b6
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:557:40
22: 0x560bdf0490f0 - std::panicking::try::h81dc1c4c7a744be2
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:521:19
23: 0x560bdf0490f0 - std::panic::catch_unwind::hce4947710c9959a6
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panic.rs:350:14
24: 0x560bdf0490f0 - std::rt::lang_start_internal::{{closure}}::hb8ca788eb716154b
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/rt.rs:141:48
25: 0x560bdf0490f0 - std::panicking::try::do_call::hbe20672d94e23c41
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:557:40
26: 0x560bdf0490f0 - std::panicking::try::h08906107fe8c4aae
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panicking.rs:521:19
27: 0x560bdf0490f0 - std::panic::catch_unwind::h20cf014a4ed35f8b
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/panic.rs:350:14
28: 0x560bdf0490f0 - std::rt::lang_start_internal::he74de233149dbe8b
at /builddir/build/BUILD/rust-1.81.0-build/rustc-1.81.0-src/library/std/src/rt.rs:141:20
29: 0x560bdf019e3f - main
30: 0x7f6aa2e461c8 - __libc_start_call_main
31: 0x7f6aa2e4628b - __libc_start_main@GLIBC_2.2.5
32: 0x560bdf018625 - _start
33: 0x0 - <unknown>
warning: build failed, waiting for other jobs to finish...
💥 maturin failed
Caused by: Failed to build a native library through cargo
Caused by: Cargo build finished with "exit status: 101": `env -u CARGO PYO3_ENVIRONMENT_SIGNATURE="cpython-3.10-64bit" PYO3_PYTHON="/usr/bin/python3" PYTHON_SYS_EXECUTABLE="/usr/bin/python3" "cargo" "rustc" "--features" "pyo3/extension-module" "--message-format" "json-render-diagnostics" "--manifest-path" "/home/tkloczko/rpmbuild/BUILD/pydantic-core-2.24.0/Cargo.toml" "--release" "--lib" "--crate-type" "cdylib"`
Error: command ['maturin', 'pep517', 'build-wheel', '-i', '/usr/bin/python3', '--compatibility', 'off'] returned non-zero exit status 1
ERROR Backend subprocess exited when trying to invoke build_wheel
``` | 2hard
|
Title: IN and NOT IN filter
Body: Thanks for this awesome package.
**Is your feature request related to a problem? Please describe.**
It should be possible to filter records using the **IN** and **NOT IN** operators in the **get** and **get_multi** functions.
**Describe the solution you'd like**
```python
db_asset = await crud_users.get(
    db=db,
    schema_to_select=User,
    return_as_model=True,
    id=id,
    filter=User.id.not_in(ids),
    is_deleted=False,
)
```
It would also be great to be able to combine filters using logical operators such as `and_` and `or_`.
**Describe alternatives you've considered**
Currently, one must rely on SQLAlchemy methods to execute such filtering operations.
```python
smt = select(User).where(User.reference_id == ref_id).filter(User.id.not_in(ids))
```
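Not part of the request itself, but for comparison, here is a minimal sketch of how a NOT IN filter combined with `and_`/`or_` can be written with plain SQLAlchemy today (the `User` model, the `db` session, `ref_id`, and `ids` are taken from the snippets above; the helper name `get_active_users` is made up for illustration):

```python
from sqlalchemy import select, and_, or_


async def get_active_users(db, ref_id, ids):
    # `User` is the mapped model from the snippets above; `db` is assumed
    # to be an AsyncSession.
    stmt = select(User).where(
        and_(
            User.reference_id == ref_id,
            User.id.not_in(ids),  # NOT IN
            or_(User.is_deleted.is_(False), User.is_deleted.is_(None)),
        )
    )
    result = await db.execute(stmt)
    return result.scalars().all()
```

Supporting this kind of expression directly in `get` / `get_multi` would remove the need to drop down to raw statements like this.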
| 1medium
|
Title: Stuck on "WARNING:root:Setting up a new session"
Body: I downloaded the `facades` dataset and then ran `python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA`, but the run is stuck at `WARNING:root:Setting up a new session`.
Even after a few hours it stays there and doesn't seem to progress. Why is this? | 1medium
|
Title: There are Chinese characters in my project, but after calling the visualize_document_datamap() method, the characters appear as garbled text.
Body: ### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Describe the bug
```python
fig = topic_model.visualize_document_datamap(
    sentences,
    topics=topics,
    reduced_embeddings=reduced_embeddings,
    #custom_labels=custom_labels,
    title='文档和主题的分布',
    sub_title='基于 BERTopic 的主题建模',
    width=1200,
    height=1200
)
```
Even after setting `plt.rcParams['font.sans-serif'] = ['SimHei']`, I still can't see the characters.
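For reference, a minimal sketch of how a CJK-capable font can be registered with matplotlib before plotting — the SimHei `.ttf` path below is an assumption and will differ per system, and it is unclear whether these rcParams reach the datamap figure if it is rendered by a separate backend library:

```python
import matplotlib.pyplot as plt
from matplotlib import font_manager

# Assumed path to a CJK-capable font file -- adjust for your system.
font_path = "/usr/share/fonts/truetype/simhei/SimHei.ttf"

# Register the font with matplotlib and make it the default family.
font_manager.fontManager.addfont(font_path)
cjk_font = font_manager.FontProperties(fname=font_path)
plt.rcParams["font.family"] = cjk_font.get_name()
plt.rcParams["axes.unicode_minus"] = False  # keep minus signs rendering correctly
```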
### Reproduction
```python
from bertopic import BERTopic
from umap import UMAP

# `sentences`, `embeddings`, `topics` and the fitted `topic_model` are
# defined earlier in my pipeline.
# Reduce the embeddings to 2D for plotting.
reduced_embeddings = UMAP(n_neighbors=15, n_components=2, min_dist=0.0, metric='cosine').fit_transform(embeddings)

fig = topic_model.visualize_document_datamap(
    sentences,
    topics=topics,
    reduced_embeddings=reduced_embeddings,
    #custom_labels=custom_labels,
    title='文档和主题的分布',      # "Distribution of documents and topics"
    sub_title='基于 BERTopic 的主题建模',  # "Topic modeling with BERTopic"
    width=1200,
    height=1200
)
```
### BERTopic Version
0.16.4 | 1medium
|