text (string, lengths 20–57.3k) | labels (class label, 4 classes)
---|---|
Title: Research support for programmatic migrations
Body: # Problem
Many other Python ORMs have support for programmatically running migrations, i.e. each individual step in a migration is written to a python file that can be modified by the user.
While we already support using standard Prisma migrations we should look into whether or not we can support programmatic migrations as well or if this must be supported directly by Prisma.
## Additional context
https://github.com/prisma/prisma/issues/4703 | 2hard
|
Title: JS cleanup
Body: There are a lot of JS files in themes and plugins.
https://github.com/pythonhub/quokka/search?l=javascript&p=1
The idea is to shrink that.
| 2hard
|
Title: add `nrows`
Body: - expose a new argument `nrows=None`
+ default: auto-detect terminal height with some sensible default such as `20` (similar to `ncols`)
- hide bars `if n < total and pos >= nrows`
+ completed/`close()`d bars will ignore `nrows` and just print according to `leave`
- [ ] TODO: link related issues
This fixes issues where there are more nested bars than room for on a screen.
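A minimal sketch of the terminal-height auto-detection, assuming the same best-effort approach `ncols` already uses (the helper name is hypothetical):
```python
import shutil

def _auto_nrows(default: int = 20) -> int:
    """Best-effort terminal height with a sensible fallback (hypothetical helper)."""
    try:
        lines = shutil.get_terminal_size().lines
    except (ValueError, OSError):
        return default
    # get_terminal_size() can report 0 when no real terminal is attached
    return lines if lines > 0 else default
```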
Cross-platform auto-detection compatibility will be at least as annoying as is for `ncols`. | 2hard
|
Title: Add support for transactions
Body: Prisma has preview support for [interactive transactions](https://www.prisma.io/docs/concepts/components/prisma-client/transactions#interactive-transactions).
The base work for this has already been done in the `wip/transactions` branch, some kinks need to be ironed out.
## Status
- [x] Model-based access support
- [ ] Protect against namespace collision
### Model-based access
We could refactor the internal strategy to register and retrieve clients to use contextvars. However we should only use contextvars for transactions as otherwise it would break certain async use cases like the `python -m asyncio` REPL. | 2hard
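A rough sketch of the contextvars-based registration idea described above; none of these names are part of the real client and the fallback behaviour is an assumption:
```python
from contextvars import ContextVar
from typing import Optional

class Client:
    pass

# Set only while a transaction block is active; everywhere else we fall back
# to the globally registered client so async REPL usage keeps working.
_tx_client: ContextVar[Optional[Client]] = ContextVar("tx_client", default=None)
_registered_client: Optional[Client] = None

def get_client() -> Client:
    client = _tx_client.get() or _registered_client
    if client is None:
        raise RuntimeError("No client has been registered")
    return client
```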
|
Title: Redirects for changed urls
Body: Implemented history of changed urls for redirects
| 2hard
|
Title: Embedding Sphinx Extension
Body: This issue was opened in light of trying to embed lux widgets in the documentation. We've tried a variety of solutions listed below, but none of them were able to either import or embed the widgets.
Our most recent version can be found on this [branch](https://github.com/caitlynachen/lux/tree/custom-sphinx-ext), and was based on Altair's [documentation](https://github.com/altair-viz/altair/blob/master/altair/sphinxext/altairplot.py#L254). We were able to show a code block, but not a chart (might be worth investigating more later on).
<img width="766" alt="Screen Shot 2020-07-23 at 7 23 00 PM" src="https://user-images.githubusercontent.com/11529801/88356307-37f80180-cd1c-11ea-915f-9151e1237a01.png">
We have tried various approaches on this front, along with @westernguy2 and @jrdzha .
1. [nbsphinx](http://nbsphinx.readthedocs.io/) approach:
- Code in [our nbsphinx branch](https://github.com/caitlynachen/lux/tree/nbsphinx)
(extension to display entire notebook)
- Pandas was able to display correctly
- Save widgets with notebook
- Nbsphinx_widgets_path
- also limited to embedding full IPython notebooks

2. [jupyter-sphinx](https://jupyter-sphinx.readthedocs.io/en/latest/) approach:
- Code in [our jupyter-sphinx branch](https://github.com/caitlynachen/lux/tree/jupyter-sphinx)
- Tried using old version (to use 'jupyter_sphinx.embed_widgets', `.. ipywidgets-display::`)
- Seems to be able to build (no errors)
- Shows input, but no output
- limited documentation, so hard to reproduce
3. Other ipywidgets projects that also manually build a Sphinx extension
- [bokeh plot](https://github.com/bokeh/bokeh/blob/branch-2.2/bokeh/sphinxext/bokeh_plot.py): compiler was specific to bokeh plot, so we decided not to go for this one
- [nbinteract](https://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-57.pdf): Generate Interactive Web Pages From Jupyter Notebooks: unable to build custom widgets
4. Manually exporting the lux widgets in order to display it on docs (html)
- unable to compile/import lux-widget
As a note for the future, we might need to look into ways that we can make a static rendering of the widget without the need of a Jupyter backend. This requires us to package all the current dependencies into the export. This will also help with embedding to HTML or sharing of Lux widgets. | 2hard
|
Title: Pi-Plates
Body: Can we add the Pi-Plates and Automation HAT to your software? That would be awesome! | 2hard
|
Title: Publish workflow (optional)
Body: Optional workflow for publishing
Once enabled it should:
Have a pipeline for approval
Replace the "publish/save" button with "put up for approval"
Users defined in the pipeline will be able to approve
They will have a dashboard
and will also receive an email with a "one click approval" link
| 2hard
|
Title: Add support for the Prisma Data Proxy
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
The Prisma Data Proxy solves the issues with managing connections in serverless applications. While it is in beta at the moment, we should support it too.
| 2hard
|
Title: Add tests for the synchronous client
Body: We did previously have tests for the synchronous client but the test setup broke with the latest pytest version. We should add these back using the `databases/` tests. | 2hard
|
Title: Investigate removing `prisma()` prefix from model-based queries
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
We currently require that query methods called from model classes must first call `.prisma()`, e.g.
```py
user = User.prisma().find_first(where={'name': 'Robert'})
```
This was implemented this way to avoid any naming conflicts between fields and query methods. However, some people do not like that you have to call `.prisma()` first, we should look into changing this.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
It *might* be possible to remove this and support directly querying, e.g.
```py
user = User.find_first(where={'name': 'Robert'})
```
The only concern I have with this is that fields could be defined with the same name as a query method, this might not matter but if it does I do not want to have to disallow using fields with the same name. | 2hard
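One hedged sketch of how a generation-time guard against such collisions could look; the method names listed here are assumptions, not the full set the client exposes:
```python
QUERY_METHODS = {"find_first", "find_unique", "find_many", "create", "update", "delete", "count"}

def check_field_collisions(model_name: str, field_names: "set[str]") -> None:
    """Fail generation early if any model field would shadow a query method."""
    clashes = sorted(field_names & QUERY_METHODS)
    if clashes:
        raise RuntimeError(
            f"Model {model_name!r} defines fields that would shadow query methods: {clashes}"
        )
```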
|
Title: Upgrade Flask-Admin
Body: https://github.com/flask-admin/flask-admin has changed a lot; Quokka's templating and Quokka-themes now need to be adjusted to work with the new admin.
| 2hard
|
Title: (Not an issue right now) Handle multiple returns
Body: ~~I'll try to work on this relatively soon, but~~ to think out loud..
In interprocedural_cfg.py, we have
```python
def return_handler(self, node, function_nodes):
    """Handle the return from a function during a function call."""
    call_node = None
    for n in function_nodes:
        if isinstance(n, ConnectToExitNode):
            LHS = CALL_IDENTIFIER + 'call_' + str(self.function_index)
            previous_node = self.nodes[-1]
            if not call_node:
                RHS = 'ret_' + get_call_names_as_string(node.func)
                r = RestoreNode(LHS + ' = ' + RHS, LHS, [RHS],
                                line_number=node.lineno,
                                path=self.filenames[-1])
                call_node = self.append_node(r)
                previous_node.connect(call_node)
            else:
                # make the proper connection
                pass
```
which cleaned is
```python
def return_handler(self, call_node, function_nodes):
    """Handle the return from a function during a function call.

    Args:
        call_node(ast.Call) : The node that calls the definition.
        function_nodes(list[Node]): List of nodes of the function being called.
    """
    for node in function_nodes:
        # Only Return's and Raise's can be of type ConnectToExitNode
        if isinstance(node, ConnectToExitNode):
            # Create e.g. ¤call_1 = ret_func_foo RestoreNode
            LHS = CALL_IDENTIFIER + 'call_' + str(self.function_call_index)
            RHS = 'ret_' + get_call_names_as_string(call_node.func)
            return_node = RestoreNode(LHS + ' = ' + RHS,
                                      LHS,
                                      [RHS],
                                      line_number=call_node.lineno,
                                      path=self.filenames[-1])
            self.nodes[-1].connect(return_node)
            self.nodes.append(return_node)
            return
```
Firstly, the for loop and the if statement seem to just serve the purpose of "Is there a node of type Return or Raise in the function?" But I think all functions should have at least one return node, right? I'm not sure if I understand the original intention that well e.g. what was going to be in the else?
Secondly, here is an example to illustrate the problem/need to handle multiple returns:
TODO | 2hard
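As a hypothetical stand-in for the missing example, a program like the following shows why multiple returns matter: only the first return path found would currently get a RestoreNode wired up in the CFG.
```python
# Hypothetical analysed program, not part of the code base.
def sign(x):
    if x > 0:
        return 'positive'
    if x < 0:
        return 'negative'
    return 'zero'
```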
|
Title: Add support for database views
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma added preview feature support for views in `4.9.0`. We should support this too.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Unclear. From reading the release notes it looks like views should just be treated like regular models.
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
https://github.com/prisma/prisma/releases/tag/4.9.0
| 2hard
|
Title: Search
Body: SearchView and Search urls should be implemented
Better if using built in MongoDB full text search feature
modules should have a way to define the index schema, maybe in a new file **search_indexes.py** (remember to add it to the cookiecutter template)
By default Content and Channel should be indexed.
Search results should respect "indexable" attribute in channels and avoid duplications.
For thumbnails it should use the SubContent with the "thumbnail" or "main_image" purpose (adding _thumb)
| 2hard
|
Title: Add support for ordering by relevance
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma has support for ordering full text search results by relevance, we should support it too.
https://www.prisma.io/docs/concepts/components/prisma-client/filtering-and-sorting#sort-by-relevance-postgresql
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Work in progress.
| 2hard
|
Title: Add support for field references in query filters
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma added support for referencing fields in query filters in `v.4.3.0`, we should support it too.
https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#compare-columns-in-the-same-table
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Unknown at this time what the client API will look like. | 2hard
|
Title: Enforce calling of `Case.call` in Python tests
Body: In Python tests generated by Schemathesis, it's crucial to call `case.call` to execute the test case properly. However, users sometimes forget to include this call, leading to potential confusion and incomplete tests.
However, there could be use cases like just collecting the generated tests - we use it in our test suite for further inspection. Probably we need to collect existing use cases first to assess the impact. | 2hard
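An illustrative sketch of one enforcement strategy, not Schemathesis' actual implementation: wrap the generated case so a test can fail loudly when `call` was never invoked.
```python
class CallTracker:
    """Hypothetical wrapper that records whether case.call() was ever used."""

    def __init__(self, case):
        self._case = case
        self.was_called = False

    def call(self, *args, **kwargs):
        self.was_called = True
        return self._case.call(*args, **kwargs)

    def __getattr__(self, name):
        # Delegate everything else to the wrapped case unchanged.
        return getattr(self._case, name)
```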
|
Title: Control installed modules via database / remove the auto loader for blueprints
Body: I like the auto-load feature: you just drop in a module folder and it is done! But it is error prone and I am already having trouble with naming problems.
Now the idea is to keep the /modules folder and the same blueprint structure, but it will not be auto-loaded; to load a blueprint it will need to be activated in the admin.
When you upload/extract a module into /modules it will be automatically added to "available modules" but the default state will be **inactive**, so in the admin you need to click the **activate** button and then it will perform validations, run the tests and check for naming issues.
It is urgent!
| 2hard
|
Title: Generate Async and Sync client together at the same time
Body: ## Problem
In my project I mix sync and async operations, so I have to use both sync and async clients in the same project, though not in the same context.
## Suggested solution
Shall we just generate sync and async clients together?
## Alternatives
For now I have written two generators and generate the clients to different locations, but there are problems:
1. I have to commit the generated code along with my code, which is a little bit annoying.
2. The partial types feature no longer works, because it shares the same context between the Prisma CLI and the generated Python models.
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| 2hard
|
Title: Implement black code formatting as pre-commit
Body: In order to avoid style-related comments when discussing code with contributors, I propose that we implement the [black](https://github.com/ambv/black) python code formatter as a pre-commit requirement for development.
This will potentially have a large impact on our current code-base, so should be undertaken right before a release to ensure that we don't end up with large merge conflicts. Also for discussion:
- what styles do we want to customize with black?
- is black in its beta stage stable enough to use with our project?
- do we set this up as a pre-commit in GitHub/Travis/locally? | 2hard
|
Title: Role based access control to posts in admin
Body: in admin the access to content needs to be controlled by role.
by default we have admin, moderator, editor, viewer
- admin (can access all)
- moderator (can access all content except administration options)
- editor (can access only self-created content or content they contributed to)
- viewer (can access all read only)
| 2hard
|
Title: [Bug] Downloading a file of about 100 KB with sz in the web terminal gets stuck at 99% and the console reports an error
Body: ### Product Version
v4.2.0; v4.1.0
### Product Edition
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [X] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Rocky Linux release 9.4 (Blue Onyx)
Client: Docker Engine - Community
Version: 27.2.0
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.16.2
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.29.2
Path: /usr/libexec/docker/cli-plugins/docker-compose
Server:
Containers: 6
Running: 6
Paused: 0
Stopped: 0
Images: 8
Server Version: 27.2.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 472731909fa34bd7bc9c087e4c27943f9835f111
runc version: v1.1.13-0-g58aa920
init version: de40ad0
Security Options:
seccomp
Profile: builtin
cgroupns
Kernel Version: 5.14.0-427.28.1.el9_4.x86_64
Operating System: Rocky Linux 9.4 (Blue Onyx)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.382GiB
Name: jumpserver
ID: 494993a3-2aaf-4d9e-9fb7-8278d4825beb
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: true
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
https://mirror.ccs.tencentyun.com/
Live Restore Enabled: true
### 🐛 Bug Description
Downloading a file in the web terminal reports an error:
<img width="424" alt="image" src="https://github.com/user-attachments/assets/a7085cdf-2726-4f07-a087-9a64353d658b">
### Recurrence Steps
Downloading a file with sz in the web terminal gets stuck at ninety-something percent.
### Expected Behavior
_No response_
### Additional Information
_No response_
### Attempted Solutions
_No response_ | 2hard
|
Title: Fix ParallelCoordinates tests and add fast parameter
Body: See #420
- [x] fix image comparison tests for parallel coordinates
- [x] add `fast=False` argument and docstring
- [x] create fast and regular drawing methods based on parameter
- [x] add section in documentation explaining fast vs. slow
- [x] update #230 and #59 | 2hard
|
Title: The take argument should affect the return type
Body: For example, the following will not raise an index error in mypy
```py
user = await client.user.find_unique(where={'id': 1}, include={'posts': {'take': 1}})
assert user is not None
post = user.posts[2]
```
One solution would be to modify the return type to a Tuple of the same length as the take argument. I'm not sure how I feel about doing this, as I do not want the runtime types to differ that much from the static types, but I also don't want to have to convert all the lists that we return into tuples at runtime either.
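Purely as an illustration of the tuple idea (with a stand-in `Post` class), a fixed-length tuple type would let mypy flag the bad index that a plain list type cannot:
```py
from typing import List, Tuple

class Post: ...

as_list: List[Post] = [Post()]
as_tuple: Tuple[Post] = (Post(),)

# as_list[2]   # accepted by mypy, fails at runtime with IndexError
# as_tuple[2]  # rejected by mypy: tuple index out of range
```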
There might be some hacky way for overriding the getitem check mypy does. | 2hard
|
Title: Refactor internal query builder to use the JSON protocol
Body: Prisma is working on removing the internal GraphQL protocol and moving to a JSON-based approach instead. This will simplify a lot of internal code and could make it possible to completely remove certain codegen metadata, resulting in smaller package sizes in applications where that matters.
This would also remove the requirement to wrap json values in `Json(...)`!
[Public proposal](https://prismaio.notion.site/Public-proposal-Engine-JSON-protocol-584b96ff3e1541ba9ace50a469195b06) | 2hard
|
Title: Support defining the Prisma Schema using python models
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
One potential source of conflict when transitioning to use Prisma Client Python from other ORMs is that models must be defined in a separate Prisma Schema file.
We should support defining models using a python module.
It should then be possible to query directly from these models.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Notes:
- `prisma.schema.BaseModel` is a re-export of `pydantic.BaseModel`
Example:
```py
from typing import List, Optional
from prisma.schema import BaseModel, Field, Default, Relation, render

class User(BaseModel):
    id: str = Field(is_id=True, default=Default.cuid)
    name: str
    posts: List['Post']

class Post(BaseModel):
    id: str = Field(is_id=True, default=Default.cuid)
    title: str
    author: Optional[User] = Relation('author_id', references=['id'])
    author_id: str

User.update_forward_refs()

if __name__ == '__main__':
    render('schema.prisma', models=[User, Post])
```
Equivalent Prisma Schema file:
```prisma
model User {
  id        String @id @default(cuid())
  name      String
  posts     Post[]
}

model Post {
  id        String @id @default(cuid())
  title     String
  author    User   @relation(fields: [author_id], references: [id])
  author_id String
}
```
Features still to be implemented:
- Datasources (we could make these configurable on the client level?)
- Generators
Implementation notes:
- Error early if a relation is not typed as `Optional` (or automatically include them once #19 is merged)
## Potential difficulties:
- Making these models query-able will be difficult. Solution is still a work in progress.
- Integrating this schema with the standard Prisma commands
## Querying
The key for making these models queryable is replacing code during generation time.
Looks like the solution for this is to dynamically resolve base class to inherit from and use literal overloads to type it post-generation. However, mypy does not support this :/
```py
class User(models.BaseUser):
    pass
```
Where:
```py
# pre-generation
class _Models:
    def __getattr__(self, attr: str) -> BaseModel:
        ...

models = _Models()

# post-generation
class _Models:
    BaseUser: QueryableUser

models = _Models()
```
The only disadvantage to this is that it is very magic and could be potentially confusing, maybe a different naming schema could help?
## Additional questions
- Is it even worth making these models queryable? | 2hard
|
Title: Version control for content
Body: Thinking on using this with different configurable backends.
- Database (versions will be stored as collections in db)
- Git (will use gittle to serialize and store versions to disk/repo)
| 2hard
|
Title: Add benchmarking to tests
Body: Due to the issues of fast vs. slow in `ParallelCoordinates` (see #230 and #448) -- it may be useful to have a benchmarking utility in our tests to ensure that our visualizers run at a near constant speed given the same data. | 2hard
|
Title: Simpler syntax to create lists, dicts, and other collections as arguments to keywords
Body: Robot's automatic argument conversion (#2890) makes it possible to have keywords like
```python
def accept_list(arg: list):
    ...

def accept_dict(arg: dict):
    ...
```
and call them so that the arguments are Python literals
```
Accept List    ['list', 'items', 'here']
Accept Dict    {'key': 'value', 'another': 'item'}
```
This is pretty convenient, but needing to use Python syntax is a bit ugly in general and especially annoying when using embedded arguments that are supposed to enhance readability:
```
Select animals ['cat', 'dog'] from list
```
I propose we enhance argument conversion so that we support also the following syntax:
```
Accept List    list, items, here
Accept Dict    key=value, another=item
```
The high level semantics would be as follows:
- Separator between items is a comma _and_ a space. This allows using commas in values like `first, second,with,commas, third`. Having a comma followed by a space in the value wouldn't be possible.
- We could allow using a semicolon and a space as an alternative separator. In that case we'd use the separator that's encountered first. This would allow usages like `first; second, with, commas followed by spaces; third`.
- With dictionaries the separator between key and value is `=`. This is the syntax we use also when creating `&{dict}` variables.
- Values are considered strings by default. This can be changed by using generics (#4433) and `TypedDict` (#4477).
- Because this would be handled by type convertors, variables would have been resolved already earlier. That means that something like `${1}, ${2}` wouldn't create a list of integers. See the above point for alternatives.
- Also escapes are handled before type conversion. That makes it impossible to use something like `first, second\,with\,commas` for escaping commas in values.
This enhancement would ease using collections a lot, especially if #4433 and #4477 are implemented. If we agree the above semantics are fine, implementation would also be pretty easy; a rough sketch follows below. I thus add this to RF 5.1 scope even though the release is already late. | 2hard
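A minimal Python sketch of the proposed conversion semantics, assuming the first encountered separator wins and values stay strings:
```python
def convert_list(value: str) -> list:
    # Use '; ' if it appears before ', '; otherwise fall back to ', '.
    semi, comma = value.find('; '), value.find(', ')
    sep = '; ' if semi != -1 and (comma == -1 or semi < comma) else ', '
    return value.split(sep)

def convert_dict(value: str) -> dict:
    return dict(item.split('=', 1) for item in convert_list(value))

assert convert_list('first, second,with,commas, third') == ['first', 'second,with,commas', 'third']
assert convert_dict('key=value, another=item') == {'key': 'value', 'another': 'item'}
```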
|
Title: [ENH] Pyjanitor for PySpark
Body: # Brief Description
<!-- Please provide a brief description of what you'd like to propose. -->
I would like to know if there is any interest in creating pyjanitor for PySpark. I'm using PySpark a lot and I would really like to use custom method chaining to clean up my ETL code.
I'm not sure if it is doable or how easy it is, but I would be open to exploring it.
<!-- # Example API
One of the selling points of pyjanitor is the API. Hence, we guard the API very carefully, and want to
make sure that it is accessible and understandable to many people. Please provide a few examples of what the API
of the new function you're proposing will look like. We have provided an example that you should modify.
Please modify the example API below to illustrate your proposed API, and then delete this sentence.
```python
# transform only one column, while creating a new column name for it.
df.transform_columns(column_names=['col1'], function=np.abs, new_column_names=['col1_abs'])
# transform multiple columns by the same function, without creating a new column name.
df.transform_columns(column_names=['col1', 'col2'], function=np.abs)
# more examples below
# ...
```
--> | 2hard
|
Title: Allow resource files and libraries to be imported privately in other resource files
Body: It could be beneficial if users could import resource files and libraries in a "private" way. Currently, resource files are imported in a sort of recursive way where you have access to all keywords that resource file imports as well, as described in the user guide [here](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#resource-files). This can be unhelpful when you are abstracting a library or resource such that other contributors should not make direct calls to the underlying libraries.
This was especially important to me on a recent project where I was refactoring a test suite to be able choose whether to run with Browser library or SeleniumLibrary to compare their pros and cons. You could pick which browser control to use but writing a test that made direct keyword calls to the opposite library would cause that control to crash out. Using an abstracted resource file allows you to pick which library to use on the fly, however with recursive importation, a contributor could still mistakenly call a non-abstracted keyword from one or the other library and cause the opposing library crash the test when they reach the unexpected keyword.
In more biased terms, private importation of libraries and resources feels like a much more familiar mode for project structure for me given that is how python seems to handle resource/library importation. | 2hard
|
Title: Move subtask queueing to workers
Body: ## Motivations
Mars now handles all subtasks within a single supervisor. When the number of subtasks or workers is large, there can be huge load on a supervisor node. What's more, scheduling merely on the supervisor side brings considerable latency between worker tasks. If we move subtask scheduling to workers, these issues can be alleviated.
## Overall design
This design enables workers to schedule the subtasks submitted to them, while supervisors only act as batch assigners and coordinators. Subtasks created from TaskService will be assigned and pushed into workers. Then, inside workers, subtasks are queued and executed according to the priority assigned to them. Results are then fed back to supervisors for further activation of successors.
## Subtask submission
When subtasks are generated, the assigned supervisor assigns and pushes **all** ready subtasks to the corresponding workers. Unlike the previous design, the supervisor no longer decides how many subtasks to submit to workers given global slot information, nor does it maintain queues of subtasks. Workers decide and run subtasks given their own storage, leading to faster reaction and a narrower gap between executions.
## Subtask queueing
Inside workers, we use *queues with latches* to order and control tasks. The queue can be seen as a combination of a priority queue deciding orders of subtasks with a semaphore deciding the number of subtasks to output. The default value of the semaphore is equal to the number of slots of given bands. The basic API of the queue is shown below:
```python
class SubtaskPriorityQueueActor(mo.StatelessActor):
    @mo.extensible
    def put(self, subtask_id: str, band_name: str, priority: Tuple):
        """
        Put a subtask ID into the queue.
        """

    @mo.extensible
    def update_priority(self, subtask_id: str, band_name: str, priority: Tuple):
        """
        Update priority of given subtask.
        """

    async def get(self, band_name: str):
        """
        Get an item from the queue and returns the subtask ID
        and slot ID. Will wait when the queue is empty, or
        the value of semaphore is zero.
        """

    def release_slot(self, subtask_id: str, errors: str = "raise"):
        """
        Return the slot occupied by given subtask and increase
        the value of the semaphore.
        """

    @mo.extensible
    def remove(self, subtask_id: str):
        """
        Remove a subtask from the queue. If the subtask is occupying
        some slot, the slot is also released.
        """
```
More APIs can be added to implement operations like `yield_slot`.
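An illustrative, Mars-agnostic sketch of the "priority queue with a latch" combination described above; all names and details here are assumptions:
```python
import asyncio
import heapq
from typing import Tuple

class LatchedPriorityQueue:
    """A heap orders subtasks; a semaphore caps how many are handed out (one permit per slot)."""

    def __init__(self, n_slots: int) -> None:
        self._heap: list = []
        self._slots = asyncio.Semaphore(n_slots)
        self._not_empty = asyncio.Event()

    def put(self, subtask_id: str, priority: Tuple) -> None:
        heapq.heappush(self._heap, (priority, subtask_id))
        self._not_empty.set()

    async def get(self) -> str:
        await self._slots.acquire()        # wait for a free slot
        while not self._heap:              # wait for an item to be queued
            self._not_empty.clear()
            await self._not_empty.wait()
        _, subtask_id = heapq.heappop(self._heap)
        return subtask_id

    def release_slot(self) -> None:
        self._slots.release()
```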
To parallelize IO and CPU cost, two queues are set up inside the worker.
- `PrepareQueue`: queue of submitted subtasks. A prepare task consumes items of the queue and does quota allocation as well as data moving. When a new subtask starts execution, its slot is released.
- `ExecutionQueue`: queue of prepared subtasks. An execution task consumes items of the queue and does the execution. When a subtask finishes execution, its slots are then released.
## Successor forwarding
When a subtask finishes execution and we need to choose another subtask to run, we have two kinds of subtasks to schedule: subtasks already enqueued in ExecutionQueue, and subtasks whose predecessors were just satisfied by the execution that finished. The latter group often has higher priority but no data preparation, and may not be scheduled promptly because of latencies introduced by the queues. We design a successor forwarding mechanism to resolve this condition.
When pushing ready subtasks to scheduling service, its successors are also pushed for cache. Scheduling service decides and pushes subtasks to correct workers. Subtasks whose successors can be forwarded must satisfy conditions below:
1. Some of the successors are cached in workers
2. All dependent data of the successors are already stored in workers, thus we do not need to consult the Meta or Storage service for remote data retrieval
3. There is enough quota for the successor
When all conditions are met, `PrepareQueue` is skipped and the subtask is inserted into `ExecutionQueue` directly. When the slot is released, the successor will be scheduled as soon as possible.
## Possible impacts
### Autoscale
Current autoscaling is based on the queueing mechanism on the supervisor side, which must be redesigned around worker-side scheduling.
### Fault Injection
As the worker side of the scheduling service is rewritten, fault injection needs to adapt to that change. | 2hard
|
Title: Add support for ordering by relations
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Since Prisma [v3.0.1](https://github.com/prisma/prisma/releases/tag/3.0.1) ordering by relational fields is generally available. We should support it too.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
```py
await client.post.find_many(
    order={
        'author': {
            'name': 'asc',
        },
    },
)
```
| 2hard
|
Title: Multiple Filter Support
Body: Currently when multiple filters are applied it is unclear whether an OR or an AND is applied. Not enough explanation or information is displayed on the visualization to indicate this.
For example, even though clauses are combined via conjunction, in the example below it seems like only one of the filters is being applied.
```python
df.intent = ["Region=New England", "Region=Southeast", "Region=Far West"]
df
```

More work needs to be done to extend the language for supporting OR. | 2hard
|
Title: Custom test settings a.k.a. test metadata
Body: It would be great if we had a keyword that could set `test metadata` in the test report. This is not an issue when you are running tests using robot, but when you're running tests using pabot the `suite metadata` is not enough to cover all the information that should be visible at the end of the run.
Not to mention that a rebot merge will mess up suite metadata when it is added dynamically.
I'm testing insurance systems, and when I run tests I get a lot of different URLs where insurances are created. Each test case has its own URL and I'm looking for a way to `expose` that URL in the report.
For now I have a custom keyword that logs this URL to the report, but it is not sufficient, as you need to open the test steps, open the keyword and look for the URL there. It would be really cool if there were `test metadata` that I could use to extract this information and show it in one place in the report | 2hard
|
Title: Implement permission management on admin for post editing
Body: Suggested by @matheusbrat
> is it possible to apply a prefilter on quokka admin? For example, a editor could see just his publications instead of all?
Desired functionality
1. Admin can see/edit everything
2. On role create it should be possible to define if the role sees everything or only things under its own domain
3. It should be possible to put roles under roles (and child should follow the parent rules)
4. On content creation (channels and posts) it should be possible to select edit permissions
[ ] Editable by any user
[ ] editable by specific roles
[ ] Editable by specific authors
[ ] editable only by creator
5. Apply the same for visibility
6. Visibility is on quokka.core.views
7. Admin is under Admin models (we should override some admin methods)
| 2hard
|
Title: Support heavily nested relational querying
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
We currently define all nested relational input types like: `PostCreateWithoutRelationsInput` when we could instead define them like: `PostCreateWithoutAuthorInput`. This was originally implemented like this due to complexities with pseudo-recursive types.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Supporting this either requires us to switch our type generation to completely rely on the Prisma DMMF or to make our type generation smarter with #122.
| 2hard
|
Title: Python 3 support is not ok
Body: Running with Python 3.x still raises a lot of errors, some of them below:
- [ ] It is missing the pep8 and pyflakes packages (should be added to requirements)
- [ ] flaskhtmlbuilder uses unicode(), which does not exist in py3, leading to an error in admin
| 2hard
|
Title: Windows Image Similarity Tests
Body: A good suggestion came from @jkeung at PyCon 2018: we should investigate creating separate baseline images for Windows machines.
Recommend that we add a check in base.py for whether the machine is Windows and then direct the test to the Windows folder in the baseline images, or otherwise significantly increase the tolerance for them.
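A hedged sketch of what that platform check could look like; the folder layout and helper name are assumptions:
```python
import os
import sys

def baseline_dir(root: str = "tests/baseline_images") -> str:
    """Return a Windows-specific baseline folder on Windows, else the default."""
    if sys.platform.startswith("win"):
        return os.path.join(root, "windows")
    return root
```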
| 2hard
|
Title: Refactor the internal Query Builder to serialise using type information
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently our Client API is tightly coupled to the internal GraphQL API, this is because we just naively send arguments to the query engine with hardly any pre-processing.
While we do transform certain fields, we cannot ***correctly*** transform arguments like `startsWith` -> `startswith` as a model may have a field named `startsWith` which would incorrectly be sent to the query engine as `startswith`.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
The query builder should take an option that represents the expected type of the arguments, this type should then be traversed and the arguments should be transformed / serialised based on the expected type.
We should generate types that look like this:
```py
class StringListFilter(TypedDict, total=False):
    has: str
    has_some: Annotated[List[str], TransformKey('hasSome')]
``` | 2hard
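A hedged sketch of how the serialiser could consume that metadata; `TransformKey` and the traversal below are illustrative assumptions rather than the generated client's real code (and `include_extras` assumes Python 3.9+ or typing_extensions):
```py
from dataclasses import dataclass
from typing import get_type_hints

@dataclass
class TransformKey:
    key: str

def serialise(data: dict, typ: type) -> dict:
    hints = get_type_hints(typ, include_extras=True)
    out = {}
    for key, value in data.items():
        new_key = key
        # Annotated[...] aliases expose their extra metadata via __metadata__
        for meta in getattr(hints.get(key), "__metadata__", ()):
            if isinstance(meta, TransformKey):
                new_key = meta.key
        out[new_key] = value
    return out
```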
|
Title: Add support for the Active Record Pattern
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Popular Python ORMs such as the Django ORM have support for the [Active Record Pattern](https://en.wikipedia.org/wiki/Active_record_pattern), e.g.
```py
user = User(name='Robert')
user.save()
```
Supporting this pattern could help reduce code duplication and make it easier for users to transition from another ORM to Prisma.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
*API proposal is still a work in progress.*
```py
# model objects must still be created using the standard methods
# this is a separate and more difficult issue due to typing and defaults
user = await User.prisma().create({'name': 'Robert'})
# TODO: consider renaming record() to object()
# updating a record
user.name = 'Tegan'
await user.record().update()
# re-fetching data is in-place
await user.record().refresh()
await user.record().refresh(include={'posts': True})
# deleting a record
await user.record().delete()
```
| 2hard
|
Title: Make use of property based testing
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Due to auto-generated nature of the client and the heavily nested query arguments, there are many possible combinations of arguments and methods that we do not have tests for.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should be able to make use of [hypothesis](https://hypothesis.readthedocs.io/en/latest/) and their [`from_type`](https://hypothesis.readthedocs.io/en/latest/data.html#hypothesis.strategies.from_type) strategy builder to greatly increase the coverage of our generated types. | 2hard
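As a rough illustration (the query-argument type here is a stand-in, not the real generated type), `from_type` can turn a `TypedDict` of query arguments into a strategy:
```py
from typing import TypedDict
from hypothesis import given, strategies as st

class UserWhereInput(TypedDict, total=False):
    id: int
    name: str

@given(st.from_type(UserWhereInput))
def test_where_arguments_are_dicts(where: UserWhereInput) -> None:
    # The real test would build and execute a query with these arguments.
    assert isinstance(where, dict)
```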
|
Title: [BUG]: Pytest with a specific config failed after PR #5868
Body: ### Is there an existing issue for this bug?
- [X] I have searched the existing issues
### 🐛 Describe the bug
Main repo test_shard_llama fails for these configs:
```
{'tp_size': 2,
'pp_size': 2,
'sp_size': 2,
'num_microbatches': 2,
'enable_sequence_parallelism': True,
'sequence_parallelism_mode': 'ring',
'enable_flash_attention': True,
'zero_stage': 1,
'precision': 'fp16',
'initial_scale': 1}
```
```
{'tp_size': 2,
'sp_size': 2,
'pp_size': 2,
'num_microbatches': 2,
'enable_sequence_parallelism': True,
'sequence_parallelism_mode': 'split_gather',
'enable_flash_attention': False,
'precision': 'fp16',
'initial_scale': 1}
```
The failure message is :
```
E File "/home/nvme-share/home/zhangguangyao/ColossalAI/colossalai/shardformer/modeling/llama.py", line 530, in forward
E query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
E File "/home/nvme-share/home/zhangguangyao/hf_transformers/src/transformers/models/llama/modeling_llama.py", line 206, in apply_rotary_pos_emb
E q_embed = (q * cos) + (rotate_half(q) * sin)
E RuntimeError: The size of tensor a (16) must match the size of tensor b (8) at non-singleton dimension 2
```
I have found that this failure was introduced after PR #5868 was merged. Please take a look.
### Environment
_No response_ | 2hard
|
Title: Export CI images to S3
Body: Create a TravisCI `after_failure` script that uploads actual images and diffs to S3 so we can diagnose TravisCI failures.
### Proposal/Issue
It seems like every time we push a commit to GitHub the TravisCI tests fail, even if they were working locally. Right now we're messing with the tolerances but with no real insight into what the key differences are in the CI environment vs locally. To better help us address these issues we should push the failed test images and their diffs to S3 for further inspection.
Issues to consider:
1. Limit the number of uploads to S3: only upload failed test actual and diff images
2. Organize by test/commit - multiple travis tests might be running at the same time
3. Log where the images were uploaded and stored
4. Long run clean up of these images over time
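A rough sketch of such an `after_failure` upload script; the bucket name, image paths, and environment variables below are assumptions:
```python
import glob
import os
import boto3

s3 = boto3.client("s3")
bucket = "yellowbrick-ci-artifacts"  # assumed bucket name
prefix = "{}/{}".format(
    os.environ.get("TRAVIS_COMMIT", "local"),
    os.environ.get("TRAVIS_JOB_NUMBER", "0"),
)

# Only failed tests leave actual/diff images behind, so uploading everything
# found here keeps the upload limited to failures.
for path in glob.glob("tests/actual_images/**/*.png", recursive=True):
    key = "{}/{}".format(prefix, path.replace(os.sep, "/"))
    s3.upload_file(path, bucket, key)
    print("uploaded {} -> s3://{}/{}".format(path, bucket, key))  # log destination
```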
### Addendum
Also, export AppVeyor images using artifacts. | 2hard
|
Title: pyee error | exception calling callback for <Future at 0x737d75f0 state=finished raised KeyError>
Body: # How to submit an Issue to a Mycroft repository
When submitting an Issue to a Mycroft repository, please follow these guidelines to help us help you.
## Be clear about the software, hardware and version you are running
For example:
* I'm running Picroft
* With latest version
* With the standard Wake Word
## Try to provide steps that we can use to replicate the Issue
For example:
1. Open mycroft cli
2. wait (it comes every ca. 30 seconds)
## Provide log files or other output to help us see the error
```
20:00:14.407 | ERROR | 949 | concurrent.futures | exception calling callback for <Future at 0x737d7b10 state=finished raised KeyError>
Traceback (most recent call last):
  File "/usr/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/home/pi/mycroft-core/.venv/lib/python3.7/site-packages/pyee/_executor.py", line 60, in _callback
    self.emit('error', exc)
  File "/home/pi/mycroft-core/.venv/lib/python3.7/site-packages/pyee/_base.py", line 111, in emit
    self._emit_handle_potential_error(event, args[0] if args else None)
  File "/home/pi/mycroft-core/.venv/lib/python3.7/site-packages/pyee/_base.py", line 83, in _emit_handle_potential_error
    raise error
  File "/usr/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/pi/mycroft-core/.venv/lib/python3.7/site-packages/pyee/_base.py", line 121, in g
    self.remove_listener(event, f)
  File "/home/pi/mycroft-core/.venv/lib/python3.7/site-packages/pyee/_base.py", line 136, in remove_listener
    self._events[event].pop(f)
KeyError: <bound method MessageWaiter._handler of <mycroft.messagebus.client.client.MessageWaiter object at 0xb2e8fd90>>
20:00:14.408 | ERROR | 949 | concurrent.futures | exception calling callback for <Future at 0x737d79d0 state=finished raised KeyError>
Traceback (most recent call last):
  File "/usr/lib/python3.7/concurrent/futures/_base.py", line 324, in _invoke_callbacks
    callback(self)
  File "/home/pi/mycroft-core/.venv/lib/python3.7/site-packages/pyee/_executor.py", line 60, in _callback
    self.emit('error', exc)
  File "/home/pi/mycroft-core/.venv/lib/python3.7/site-packages/pyee/_base.py", line 111, in emit
    self._emit_handle_potential_error(event, args[0] if args else None)
  File "/home/pi/mycroft-core/.venv/lib/python3.7/site-packages/pyee/_base.py", line 83, in _emit_handle_potential_error
    raise error
  File "/usr/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/pi/mycroft-core/.venv/lib/python3.7/site-packages/pyee/_base.py", line 121, in g
    self.remove_listener(event, f)
  File "/home/pi/mycroft-core/.venv/lib/python3.7/site-packages/pyee/_base.py", line 136, in remove_listener
    self._events[event].pop(f)
KeyError: <bound method MessageWaiter._handler of <mycroft.messagebus.client.client.MessageWaiter object at 0x7514b7b0>>
```

| 2hard
|
Title: Support skipping pydantic validation
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
One reason for slower performance is that we validate the data that the prisma engine gives us using pydantic models. At least providing an option to skip validation would give a decent performance boost.
However, there are a couple concerns with this:
- We make use of custom pydantic validators for custom field types
- This will break subclass behaviour - Users will no longer be able to subclass models and write custom validators
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Add a config option to disable pydantic validation and if enabled, call `BaseModel.construct()` instead.
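For illustration, this is the pydantic (v1) mechanism the option would rely on: `construct()` builds a model without running any validation.
```py
from pydantic import BaseModel

class User(BaseModel):
    id: int
    name: str

raw = {'id': 1, 'name': 'Robert'}
validated = User(**raw)              # runs validators and type coercion
unvalidated = User.construct(**raw)  # skips validation entirely
```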
| 2hard
|
Title: Support discarding action results
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
When doing batch inserts the resulting data is normally completely ignored. However we will still convert the data that Prisma gives us into Pydantic models, this incurs a significant runtime cost which could be avoided.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Add a new parameter to actions to signify that the resulting data can be discarded:
```py
@overload
async def create(
    ...
    ensure_result: Literal[True] = True,
) -> User:
    ...

@overload
async def create(
    ...
    ensure_result: Literal[False] = False,
) -> None:
    ...
```
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
We should also limit the number of fields that are selected when this option is passed however that is out of scope for this issue.
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
See this discussion: https://github.com/RobertCraigie/prisma-client-py/issues/275#issuecomment-1034264260 | 2hard
|
Title: Explore using DMMF types directly
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently, we define our own types based off of the model definitions themselves, however, Prisma also includes the actual type definitions themselves in the DMMF, this is how they generate the types in the JS client.
The fact that we generate our own types independently can result in mismatches between our types and the types that the query engine expects. An example of this is [this issue](https://github.com/prisma/prisma/issues/13892) which is requesting support for `push` for arrays in CockroachDB. Given the current structure, we generate the `push` type for CockroachDB, even though using it will result in an error from the query engine.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Design a prototype implementing this, see how feasible it is.
Known downsides to changing to this approach:
- less flexibility
- unknown how we will support pseudo-recursive types | 2hard
|
Title: Hobbies corpus unicode decoding error on Windows x64 (AppVeyor)
Body: **Describe the bug**
There appears to be a unicode decoding error on Python 3 in Windows when loading the hobbies corpus. This was observed in AppVeyor builds that use the hobbies corpus during testing such as #489 but is probably true for any Windows machine.
**To Reproduce**
```python
from tests.dataset import DatasetMixin
datasets = DatasetMixin()
hobbies = datasets.load_data('hobbies')
```
**Expected behavior**
We should either remove the unicode encoding from the corpus dataset (unless we believe that this is a positive feature for the tests ... though currently the encoding of the data does not matter to visualizer performance) or we should fix the datasets to correctly decode the data from disk.
**Traceback**
```
tests\dataset.py:197: in load_data
return DatasetMixin.load_corpus(name, fixtures)
tests\dataset.py:230: in load_corpus
data.append(f.read())
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <encodings.cp1252.IncrementalDecoder object at 0x0000001C91A0D828>
input = b'The Lonely City\xc2\xa0bristles with heart-piercing wisdom. Loneliness, according to Laing, feels \xe2\x80\x9clike b...an read The\xc2\xa0Rumpus\xe2\x80\x99s review of\xc2\xa0The Lonely City\xc2\xa0right\xc2\xa0here.)\n\nRelated Posts:\n'
final = True
def decode(self, input, final=False):
> return codecs.charmap_decode(input,self.errors,decoding_table)[0]
E UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 164: character maps to <undefined>
C:\Python36-x64\lib\encodings\cp1252.py:23: UnicodeDecodeError
```
**Desktop (please complete the following information):**
- OS: `AppVeyor Winx64`
- Python Version `PYTHON=C:\Python36-x64, PYTHON_VERSION=3.6.4, PYTHON_ARCH=64`
- Yellowbrick Version `develop`
**Additional context**
See [AppVeyor Build 0.7.300](https://ci.appveyor.com/project/districtdatalabs/yellowbrick/build/0.7.300/job/wy9k5x9linoc6tf8) - note that tests _passed_ in `Environment: PYTHON=C:\Python27-x64, PYTHON_VERSION=2.7.14, PYTHON_ARCH=64`
| 2hard
|
Title: Consider: dropping custom SaveLoad in favor of Pickle v.5 or joblib.dump?
Body: [PEP 574](https://www.python.org/dev/peps/pep-0574/) defines pickle-version-5, which supports alternate serialization of things like large numpy arrays. It's natively available in Python 3.8, with a PyPI backport [pickle5](https://pypi.org/project/pickle5/) that works in 3.5, 3.6, and 3.7.
I'm not yet certain it allows the same mmap-on-load possible with `gensim.utils.SaveLoad`, but I suspect it might.
Similarly, `joblib`'s [dump()](https://joblib.readthedocs.io/en/latest/generated/joblib.dump.html) supports serializing large objects with large `numpy` arrays, and it is already what `sklearn` & others use for large models. Its matching [load()](https://joblib.readthedocs.io/en/latest/generated/joblib.load.html#joblib.load) has an `mmap_mode` option.
Gensim should consider whether either of these could be a superior alternative to `gensim.utils.SaveLoad` – in terms of being in sync with Python & related projects, & minimizing gensim-idiosyncratic code. If so, a transition goal could be set in some future release, with perhaps just one version supporting both modes (if continued forward-migration of old models is to be supported). | 2hard
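A minimal illustration of the joblib alternative (the file name and payload are arbitrary):
```python
import numpy as np
import joblib

model_like = {"vectors": np.zeros((1000, 100))}
joblib.dump(model_like, "model.joblib")
loaded = joblib.load("model.joblib", mmap_mode="r")  # memory-maps the large array on load
```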
|
Title: Add support for native query engine bindings
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma has experimental support for natively binding the query engine to the node client, this reduces the overhead between the client and the rust binary, improving performance.
We should look into whether or not this is feasible for us to do as well.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Would probably make use of [PyO3](https://github.com/PyO3/pyo3) and [PyO3-asyncio](https://github.com/awestlake87/pyo3-asyncio). I don't know how feasible this is yet, but if this does end up shipping we would have to bundle the rust binary with the package, either using wheels or a build extension, as providing downloads for rust binaries is not something that would be feasible for me to provide.
Maybe related: we should look into cibuildwheel for packaging; see uvloop for an example using GitHub Actions.
## Status
Moved status to #165 | 2hard
|
Title: Online clustering
Body: Hi,
thanks a lot for the work here!
Do you plan on adding support for online training? Similar to scikit's partial_fit()?
Best,
Olana
| 2hard
|
Title: Supporting export/copyable intent
Body: Vis objects are currently exported as code via `to_Altair` or `to_VegaLite`, but the Lux syntax is not exposed. As suggested by @adityagp, we should extend Lux with a feature that allows users to copy the code that generates the intent via a UI button so that users can paste, edit their intent. | 2hard
|
Title: post_update UPDATE statement will bump version_id a second time, but also not get the correct value if a committed_state is present
Body: ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10798
```py
from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy.orm import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
Base = declarative_base()
engine = create_engine("sqlite://", echo=True)
class User(Base):
    __tablename__ = "user"

    id = Column(Integer, primary_key=True)


class Parent(Base):
    __tablename__ = "parent"

    id = Column(Integer, primary_key=True)
    version_id = Column(Integer)
    updated_by_id = Column(
        Integer,
        ForeignKey("user.id"),
        default=1,
        onupdate=1,
    )
    updated_by = relationship(
        "User",
        foreign_keys=[updated_by_id],
        post_update=True,
    )

    __mapper_args__ = {
        "version_id_col": version_id,
    }


Base.metadata.create_all(engine)

with Session(engine) as session:
    u1 = User(id=1)
    u2 = User(id=2)
    p1 = Parent(id=1, updated_by=u1)

    session.add(u1)
    session.add(u2)
    session.add(p1)

    u2id = u2.id

    session.commit()

with Session(engine) as session:
    p1 = session.get(Parent, 1)

    p1.updated_by

    p1.version_id = p1.version_id
    p1.updated_by_id = u2id

    session.commit()
```
output:
```
2023-12-28 15:39:05,107 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-12-28 15:39:05,108 INFO sqlalchemy.engine.Engine SELECT parent.id AS parent_id, parent.version_id AS parent_version_id, parent.updated_by_id AS parent_updated_by_id
FROM parent
WHERE parent.id = ?
2023-12-28 15:39:05,108 INFO sqlalchemy.engine.Engine [generated in 0.00006s] (1,)
2023-12-28 15:39:05,109 INFO sqlalchemy.engine.Engine SELECT user.id AS user_id
FROM user
WHERE user.id = ?
2023-12-28 15:39:05,109 INFO sqlalchemy.engine.Engine [generated in 0.00006s] (1,)
2023-12-28 15:39:05,110 INFO sqlalchemy.engine.Engine UPDATE parent SET version_id=?, updated_by_id=? WHERE parent.id = ? AND parent.version_id = ?
2023-12-28 15:39:05,110 INFO sqlalchemy.engine.Engine [generated in 0.00007s] (2, 2, 1, 1)
2023-12-28 15:39:05,110 INFO sqlalchemy.engine.Engine UPDATE parent SET version_id=?, updated_by_id=? WHERE parent.id = ? AND parent.version_id = ?
2023-12-28 15:39:05,110 INFO sqlalchemy.engine.Engine [cached since 0.0002746s ago] (2, 2, 1, 1)
2023-12-28 15:39:05,110 INFO sqlalchemy.engine.Engine ROLLBACK
Traceback (most recent call last):
File "/home/classic/dev/sqlalchemy/test4.py", line 61, in <module>
session.commit()
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 1969, in commit
trans.commit(_to_root=True)
File "<string>", line 2, in commit
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/state_changes.py", line 139, in _go
ret_value = fn(self, *arg, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 1256, in commit
self._prepare_impl()
File "<string>", line 2, in _prepare_impl
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/state_changes.py", line 139, in _go
ret_value = fn(self, *arg, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 1231, in _prepare_impl
self.session.flush()
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 4317, in flush
self._flush(objects)
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 4452, in _flush
with util.safe_reraise():
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 4413, in _flush
flush_context.execute()
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/unitofwork.py", line 466, in execute
rec.execute(self)
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/unitofwork.py", line 629, in execute
persistence.post_update(self.mapper, states, uow, cols)
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/persistence.py", line 157, in post_update
_emit_post_update_statements(
File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/persistence.py", line 1382, in _emit_post_update_statements
raise orm_exc.StaleDataError(
sqlalchemy.orm.exc.StaleDataError: UPDATE statement on table 'parent' expected to update 1 row(s); 0 were matched.
```
a naive fix is to clear out committed_state of the version_id after we know we have operated on the row already:
```diff
diff --git a/lib/sqlalchemy/orm/persistence.py b/lib/sqlalchemy/orm/persistence.py
index 3f537fb76..263009532 100644
--- a/lib/sqlalchemy/orm/persistence.py
+++ b/lib/sqlalchemy/orm/persistence.py
@@ -1634,6 +1634,10 @@ def _postfetch(
     ):
         prefetch_cols = list(prefetch_cols) + [mapper.version_id_col]
+        state.committed_state.pop(
+            mapper._version_id_prop.key, None
+        )
+
     refresh_flush = bool(mapper.class_manager.dispatch.refresh_flush)
     if refresh_flush:
         load_evt_attrs = []
```
but I think the fix should be more robust than that | 2hard
|
Title: Add support for selecting fields
Body: ## Problem
A crucial part of modern and performant ORMs is the ability to choose what fields are returned, Prisma Client Python is currently missing this feature.
## Mypy solution
As we have a mypy plugin we can dynamically modify types on the fly, this means we would be able to make use of a more ergonomic solution.
```py
class Model(BaseModel):
    id: str
    name: str
    points: Optional[int]

class SelectedModel(BaseModel):
    id: Optional[str]
    name: Optional[str]
    points: Optional[int]

ModelSelect = Iterable[Literal['id', 'name', 'points']]

@overload
def action(
    ...
) -> Model:
    ...

@overload
def action(
    ...
    select: ModelSelect
) -> SelectedModel:
    ...

model = action(select={'id', 'name'})
```
The mypy plugin would then dynamically remove the `Optional` from the model for every field that is selected, we might also be able to remove the fields that aren't selected although I don't know if this is possible.
The downside to a solution like this is that unreachable code will not trigger an error when type checking with a type checker other than mypy, e.g.
```py
user = await client.user.find_first(select={'name'})
if user.id is not None:
    print(user.id)
```
Will pass type checks even though the if block will never be run.
EDIT: A potential solution for the above would be to not use optional and instead use our own custom type, e.g. maybe something like `PrismaMaybeUnset`. This has its own downsides though.
EDIT: I also think we may also want to support setting a "default include" value so that relations will always be fetched unless explicitly given `False`. This will not change the generated types and they will still be `Optional[T]`.
## Type checker agnostic solution
After #59 is implemented the query builder should only select the fields that are present on the given `BaseModel`.
This would mean that users could generate partial types and then easily use them to select certain fields.
```py
User.create_partial('UserOnlyName', include={'name'})
```
```py
from prisma.partials import UserOnlyName
user = await UserOnlyName.prisma().find_unique(where={'id': 'abc'})
```
Or create models by themselves
```py
class User(BaseUser):
name: str
user = await User.prisma().find_unique(where={'id': 'abc'})
```
This will make typing generic functions that process models more difficult; for example, the following function would not accept custom models:
```py
def process_user(user: User) -> None:
...
```
It could however be modified to accept objects with the correct properties by using a `Protocol`.
```py
class UserWithID(Protocol):
id: str
def process_user(user: UserWithID):
...
``` | 2hard
|
Title: Upgrade MongoDB to v6.x
Body: ## SUMMARY
MongoDB V4.4 is EOL in Feb 2024 (as per https://www.mongodb.com/support-policy/lifecycles ). It would be nice to have a version upgrade to V5.x.
Background:
We are using external MongoDB provided by our internal DB team. They started upgrading the DBs from the previous version to v5.x. DBs used by Stackstorm are currently on an exemption list as we want to stay close to OS implementation.
Thanks!
| 2hard
|
Title: Uploading images via TinyMCE or another WYSIWYG editor
Body: Is it possible?
| 2hard
|
Title: Implement CopyNext model
Body: https://api.semanticscholar.org/CorpusID:225103260
This is similar to [`CopyNet`](https://docs.allennlp.org/models/main/models/generation/models/copynet_seq2seq/#copynetseq2seq) from [Gu et al., 2016](https://api.semanticscholar.org/CorpusID:8174613), but adds an inductive bias for copying contiguous tokens.
~~The implementation is actually just a simple extension of `CopyNet`: `CopyNext` introduces a special symbol in the target vocabulary - the "CopyNext Symbol" (CN) - which corresponds to the operation of copying another token. So the CN token always follows a copied token that is the first in its span or another CN token.~~ -- As @JohnGiorgi pointed out below, CopyNext uses a separate linear layer to calculate the "CN" score. And there are some other differences with this model as well. Most notably, they make a big simplification by treating all tokens in the source sequence as unique, and not worrying about the case where a source token may be part of the target vocabulary. This makes sense for the task that CopyNext was applied to (nested NER), but not general seq2seq tasks. | 2hard
|
Title: Delegate binary downloading to prisma
Body: ## Problem
With many possible configurations and settings, binary downloading is complicated; as such, we should see if it is possible to delegate this task to Prisma, as it already handles all of this for us.
This is an issue as we currently don't respect any binary target options or HTTP proxy options like Prisma [does](https://www.prisma.io/docs/reference/api-reference/environment-variables-reference).
We would still have to download the CLI ourselves at the very least first.
| 2hard
|
Title: Add support for querying from partial models
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently partial models cannot be used with model-based access.
```py
from prisma.partials import UserOnlyId
user = await UserOnlyId.prisma().find_first(
where={
'name': 'Robert',
},
)
```
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
After #19 is implemented, all it would take is adding the `prisma()` classmethod to generated partial types. | 2hard
|
Title: [BUG] Examples for request body ignored during testcase generation
Body: **Describe the bug**
Yesterday I was testing an OAS which had the following pattern `^(?=[[:ascii:]]+$)[^\/\\\s$]{1,36}$` and for some reason this caused schemathesis to ignore the examples I provided for my request body.
**To Reproduce**
Steps to reproduce the behavior:
1. Add the pattern to a header parameter.
2. Add an example request body to one of your requests which uses said header parameter.
3. Run schemathesis (I used CLI version)
**Expected behavior**
Examples being included with my testcases.
**Environment (please complete the following information):**
- OS: [Windows]
- Python version: [3.11.0]
- Schemathesis version: [3.19.5]
- Spec version: [3.0.3]
| 2hard
|
Title: Why is a high tolerance required to pass Classification Report Image Similarity Test???
Body: **Describe the issue**
A clear and concise description of what the issue is.
Since adding the 'support' feature, there is a fairly big jump in the tolerance required for classification reports to pass image tests. We need to investigate why these high tolerances are required.
**Note:**
The classification report in #379 has pretty high tolerances and they should be reviewed.
<!-- If you have a question, note that you can email us via our listserve:
https://groups.google.com/forum/#!forum/yellowbrick -->
<!-- This line alerts the Yellowbrick maintainers, feel free to use this
@ address to alert us directly in follow up comments -->
@DistrictDataLabs/team-oz-maintainers
| 2hard
|
Title: Add a function for casting to magic types
Body: ## Problem
Referencing magic types (types that are modified by a plugin) can be annoying as it is not possible to properly represent them.
We should add a function similar to [`typing.cast`](https://docs.python.org/3/library/typing.html#typing.cast) whose return type the type checker plugins will modify, just like they do for action methods.
```py
def foo(user: User) -> None:
print(user.posts[0]) # error: posts could be none
```
## Suggested solution
```py
def prisma_cast(model: BaseModelT, include: Dict[str, Any]) -> BaseModelT:
return model
```
```py
UserWithPosts = prisma_cast(User, include={'posts': True})
def foo(user: UserWithPosts) -> None:
print(user.posts[0]) # valid
```
| 2hard
|
Title: feat: cockroachdb support
Body: ## Problem
Right now we can't use providers that have been added recently : Cockroachdb & Planetscale.
Any file generation attempt returns:
```
Error: Get config: Schema Parsing P1012
error: Datasource provider not known: "cockroachdb".
--> schema.prisma:2
|
1 | datasource db {
2 | provider = "cockroachdb"
|
Validation Error Count: 1
```
## Suggested solution
Support it
Thanks a lot ! | 2hard
|
Title: Add support for `binaryTargets` generator option
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
After #454 is merged we will not properly generate the current binary platform name. This is fine for now as Prisma gives us the binary paths at generation time, but because we cannot properly detect the current binary platform we have to iterate through all of them to find one that works, which is very hacky.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should copy the implementation from https://github.com/prisma/prisma/tree/main/packages/get-platform. | 2hard
|
Title: Improve support for dynamic query building
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Other ORMs such as Django have support for custom dynamic query building with [managers](https://docs.djangoproject.com/en/4.0/topics/db/managers/), for example:
```py
# First, define the Manager subclass.
class DahlBookManager(models.Manager):
def get_queryset(self):
return super().get_queryset().filter(author='Roald Dahl')
# Then hook it into the Book model explicitly.
class Book(models.Model):
title = models.CharField(max_length=100)
author = models.CharField(max_length=50)
objects = models.Manager() # The default manager.
dahl_objects = DahlBookManager() # The Dahl-specific manager.
# will only return books by Roald Dahl
books = Book.dahl_objects.all()
```
For a real world example, see: https://github.com/saleor/saleor/blob/main/saleor/order/models.py#L36
A similar pattern is very difficult to implement with Prisma Client Python.
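One rough approximation that can be written by hand today is a small wrapper class, sketched below (it assumes a `Book` model exists in the Prisma schema, mirroring the Django example above; nothing here is part of the client API):
```py
from typing import Any, List

from prisma import Prisma
from prisma.models import Book  # assumes a Book model in the schema

class DahlBookManager:
    """Hand-written stand-in for a Django-style manager."""

    def __init__(self, client: Prisma) -> None:
        self._client = client

    async def find_many(self, **kwargs: Any) -> List[Book]:
        # pre-apply the author filter on every query
        where = {'author': 'Roald Dahl', **kwargs.pop('where', {})}
        return await self._client.book.find_many(where=where, **kwargs)
```
This works but has to be maintained by hand for every model, which is exactly the boilerplate a manager-style API would remove.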
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Work in progress.
| 2hard
|
Title: Optimise generated types
Body: In order to circumvent mypy not supporting recursive types we duplicate every would-be recursive type. While this works it leads to a massive number of types being generated which slows down mypy considerably.
This appears to be unavoidable but we can improve performance by only generating types that will actually be used (we currently generate every possible relational type) and potentially re-using types.
We should also add a note to the docs somewhere that mypy performance can be improved by decreasing the depth of generated recursive types. | 2hard
|
Title: Support packaging applications into a single binary
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
We have not investigated whether or not Prisma Client Python can be used with python packagers like pyinstaller and py2exe. It is highly likely packaging will not work out of the box due to how Prisma binaries are handled.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should test common packagers and investigate how we can support them. We should also support resolving binary paths relative to the module, e.g. `prisma/binaries/query-engine-darwin`.
From looking over the documentation for PyInstaller, the easiest way to support this would be to write a custom [hook](https://pyinstaller.readthedocs.io/en/stable/hooks.html).
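For example, a first pass at such a hook might simply collect the package's data files (a sketch only; the exact binary locations still need to be investigated):
```py
# hook-prisma.py - sketch of a PyInstaller hook for bundling the generated client
from PyInstaller.utils.hooks import collect_data_files

# Include the non-Python files shipped inside the prisma package, e.g. the
# generated schema and any engine binaries resolved relative to the module.
datas = collect_data_files('prisma')
```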
| 2hard
|
Title: Add support for filtering by Json values
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma supports [filtering by the values of json fields](https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#json-filters), we should support it too.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
As the syntax for json filtering differs between database connectors, we should either create a wrapper that normalizes values or leave it up to the user like prisma does.
```py
user = await client.user.find_first(
where={
'pets': {
'path': JsonPath('favourite', 'breed'),
'equals': 'Parson Russell Terrier',
},
},
)
```
PostgreSQL
```py
user = await client.user.find_first(
where={
'pets': {
'path': ['favourite', 'breed'],
'equals': 'Parson Russell Terrier',
},
},
)
```
MySQL
```py
user = await client.user.find_first(
where={
'pets': {
'path': '$.favourite.breed',
'equals': 'Parson Russell Terrier',
},
},
)
```
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
[full json filter reference](https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#json-filters)
[working with json fields](https://www.prisma.io/docs/concepts/components/prisma-client/working-with-fields/working-with-json-fields)
[MySQL syntax](https://dev.mysql.com/doc/refman/8.0/en/json.html#json-path-syntax)
[PostgreSQL json](https://www.postgresql.org/docs/11/functions-json.html)
| 2hard
|
Title: Add support for more aggregation features
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
We currently only support aggregating by `count`, prisma [supports aggregation](https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#aggregate) by sum, min, max and avg.
Prisma also supports ordering by the aggregate of relations since [v2.19.0](https://github.com/prisma/prisma/releases/tag/2.19.0)
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Add an `aggregate` action method, reference: [https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#aggregate](https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#aggregate) | 2hard
|
Title: ContentBox to bundle items
Body: | 2hard
|
Title: Settings improvement
Body: - [x] rewrite app.config to get settings from database (overwriting)
- [x] remove the code that updates settings on Config.save()
- [x] add app factory to create_minimum_app (for context)
- [x] Make quokka.utils.lazy_setting usable with create_minimum_app().app_context() to allow use in models
| 2hard
|
Title: User Guide update
Body: DESCRIPTION TODO | 2hard
|
Title: Support query raw annotations
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma recently changed how raw queries work internally, values are now returned alongside meta information, for example:
```json
{
"count": {"prisma__type": "bigint", "prisma__value": "1"}
}
```
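For illustration, unwrapping one of these tagged values could look roughly like the sketch below (the helper name is hypothetical and not part of the client):
```py
from typing import Any, Dict

def unwrap_raw_value(field: Dict[str, Any]) -> Any:
    # map the tagged representation back to a Python value
    type_ = field['prisma__type']
    value = field['prisma__value']
    if type_ == 'bigint':
        return int(value)
    return value

row = {'count': {'prisma__type': 'bigint', 'prisma__value': '1'}}
assert {key: unwrap_raw_value(val) for key, val in row.items()} == {'count': 1}
```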
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We need to figure out how to support this internally as we can't just naively return the results from the raw query anymore, we may have to do some form of parsing using a `BaseModel` / `Field`... | 2hard
|
Title: [FEATURE] Recursion limit to examples
Body: ### Is your feature request related to a problem? Please describe
To have control over the recursion in explicit examples.
### Describe the solution you'd like
From the documentation I see a similar `stateful-recursion-limit=<N>` option when following links;
maybe we can have an `examples-recursion-limit=<N>`.
Let's say we have a tree with branches; each branch can have more branches.
Currently I notice the value is N=7 (counted from the nested 'branches' occurrences in the example below).
```json
{
"id":"415feabd",
"name":"Birch",
"description":"white birch tree",
"branches":[
{
"name":"b1","leaves":[{"name":"l1"}],
"branches":[
{
"name":"b1","leaves":[{"name":"l1"}],
"branches":[
{
"name":"b1","leaves":[{"name":"l1"}],
"branches":[
{
"name":"b1","leaves":[{"name":"l1"}],
"branches":[
{
"name":"b1","leaves":[{"name":"l1"}],
"branches":[
{
"name":"b1","leaves":[{"name":"l1"}],
"branches":[
{
"name":"b1","leaves":[{"name":"l1"}],
"branches":[
{
"name":"b1"
}
]
}
]
}
]
}
]
}
]
}
]
}
]
}
]
}
```
There are two issues with current behavior:
1. unable to control the recursion
2. the final branch has no leaves (nested objects do not work for the final recursive call issue #2358 )
## Expectation:
Considering `examples-recursion-limit=2`:
```json
{
"id":"415feabd",
"name":"Birch",
"description":"white birch tree",
"branches":[
{
"name":"b1","leaves":[{"name":"l1"}],
"branches":[
{
"name":"b1","leaves":[{"name":"l1"}],
"branches":[
{
"name":"b1","leaves":[{"name":"l1"}]
}
]
}
]
}
]
}
``` | 2hard
|
Title: Namespace client methods
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Model names can clash with methods like `connect()`, `query_first()` etc
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should namespace these methods behind an attribute like `prisma` and then validate that model names won't clash with that.
For example
```py
await client.connect()
```
would turn into
```py
await client.prisma.connect()
```
| 2hard
|
Title: [BUG]: Pipeline Parallelism fails when input shape varies
Body: ### Is there an existing issue for this bug?
- [X] I have searched the existing issues
### 🐛 Describe the bug
Pipeline parallelism fails when input size is different. Such as :
```
for batch in iter:
#batch1: bs*seq=1*128
#batch2: bs*seq=1*129
outputs = booster.execute_pipeline(batch, model)
```
Error message:
```
File "/home/zhangguangyao/colossal_llama_sp/ColossalAI/colossalai/booster/plugin/hybrid_parallel_plugin.py", line 809, in backward_by_grad
super().backward_by_grad(tensor, grad)
File "/home/zhangguangyao/colossal_llama_sp/ColossalAI/colossalai/zero/low_level/low_level_optim.py", line 436, in backward_by_grad
torch.autograd.backward(tensor, grad)
File "/home/zhangguangyao/miniconda3/envs/llama_sp/lib/python3.10/site-packages/torch/autograd/__init__.py", line 244, in backward
grad_tensors_ = _make_grads(tensors, grad_tensors_, is_grads_batched=False)
File "/home/zhangguangyao/miniconda3/envs/llama_sp/lib/python3.10/site-packages/torch/autograd/__init__.py", line 88, in _make_grads
raise RuntimeError(
RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([1, 128, 768]) and output[0] has a shape of torch.Size([1, 129, 768]).
```
### Environment
ColossalAI master branch | 2hard
|
Title: Fix Pyright linting in CI
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Our Pyright linting is actually missing files and not checking them....
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| 2hard
|
Title: Add support for filtering by non-unique properties in unique queries
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma added preview feature support for this in `4.5.0`, we should support this too (if the feature flag is set).
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#filter-on-non-unique-fields-with-userwhereuniqueinput
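A hypothetical usage sketch (the model and field names are assumptions, and the `extendedWhereUnique` preview feature must be enabled on the Prisma side):
```py
user = await client.user.find_unique(
    where={
        'email': 'robert@craigie.dev',  # unique field
        'name': 'Robert',  # non-unique field, allowed behind the preview flag
    },
)
```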
| 2hard
|
Title: [Metric]Add metrics
Body: <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
## Background
There are no metrics in Mars, and it's useful and necessary to add a simple and convenient metrics framework.
## Design

`Counter`, `Gauge`, `Meter` and `Histogram` are the most commonly used metrics.
* `Counter` is a cumulative type of data which represents a monotonically increasing number
* `Gauge` is a single numerical value
* `Meter` is the rate at which a set of events occur. we can use it as qps or tps
* `Histogram` records the average value of a window data
And we can use these four types of metrics as follows:
```
# Counter can be declared as `Counter(name, description: str = "", tag_keys: Optional[Tuple[str]] = None)`
# and the others are similar.
c1 = Counter('counter1', 'A counter')
c1.record(1)
c2 = Counter('counter2', 'A counter', ('service', 'tenant'))
c2.record(1, {'service': 'mars', 'tenant': 'test'})
g1 = Gauge('gauge1')
g1.record(1)
m1 = Meter('meter1')
m1.record(1)
h1 = Histogram('histogram1')
h1.record(1)
```
We'll implement 3 types backends of metrics including console which just log the value and can be used to debug, ray and prometheus.
And there will be 4 PRs:
- [x] First pr adds metrics framework and console backend(#2742)
- [x] Second pr adds ray implementation(#2749)
- [x] Third pr adds prometheus implementation(#2752)
- [x] The last pr adds common metrics for mars like tileable graph building time and subtask execution time(#2760)
And we propose a naming convention for metrics as follows:
`[namespace].[component]_name[_units]`
namespace could be mars, component could be supervisor or worker and can be omitted.
For example, we can name a metric `mars.subtask_execution_time_secs` to represent the subtask execution time. | 2hard
|
Title: Better way to manage js/css dependencies
Body: It seems quokka depends on lots of external js libraries, such as
- bootstrap
- lepture/editor
- tinymce
So I'm considering a better way to organize and manage the js dependencies.
These days I'm trying to learn something about nodejs, and some js dependency management projects such as https://github.com/component/component.
So if you think this is the right choice, I can provide some help.
| 2hard
|
Title: Fix mypy plugin
Body: <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
The mypy plugin appears to be broken: https://github.com/RobertCraigie/prisma-client-py/runs/7595314605?check_suite_focus=true
| 2hard
|
Title: Better support for Pandas.Series
Body: We should create a LuxSeries object to handle the sliced version of the LuxDataFrame, following [guidelines](https://pandas.pydata.org/pandas-docs/stable/development/extending.html#override-constructor-properties) for subclassing DataFrames. We need to pass the `_metadata` from LuxDataFrame to LuxSeries so that it is preserved across operations (and therefore doesn't need to be recomputed), related to #65. Currently, this code is commented out since LuxSeries is causing issues compared to the original pd.Series.
```python
class LuxDataFrame(pd.DataFrame):
....
@property
def _constructor(self):
return LuxDataFrame
@property
def _constructor_sliced(self):
def f(*args, **kwargs):
# adapted from https://github.com/pandas-dev/pandas/issues/13208#issuecomment-326556232
return LuxSeries(*args, **kwargs).__finalize__(self, method='inherit')
return f
```
```python
class LuxSeries(pd.Series):
# _metadata = ['name','_intent','data_type_lookup','data_type',
# 'data_model_lookup','data_model','unique_values','cardinality',
# 'min_max','plot_config', '_current_vis','_widget', '_recommendation']
def __init__(self,*args, **kw):
super(LuxSeries, self).__init__(*args, **kw)
@property
def _constructor(self):
return LuxSeries
@property
def _constructor_expanddim(self):
from lux.core.frame import LuxDataFrame
# def f(*args, **kwargs):
# # adapted from https://github.com/pandas-dev/pandas/issues/13208#issuecomment-326556232
# return LuxDataFrame(*args, **kwargs).__finalize__(self, method='inherit')
# return f
return LuxDataFrame
```
In particular the original `name` property of the Lux Series is lost when we implement LuxSeries, see `test_pandas.py:test_df_to_series` for more details.
Example:
```python
df = pd.read_csv("lux/data/car.csv")
df._repr_html_()
series = df["Weight"]
series.name # returns None (BUG!)
series.cardinality # preserved
```
We should also add a `__repr__` to print out the basic histogram for Series objects. | 2hard
|
Title: Standardize benchmark code of arctic
Body: We currently have multiple unrelated benchmarks for various scenarios:
- generic Arctic top level calls
- draft Arctic breakdown solution for keeping track of where time goes ((de)compress, numpy, serialization, MongoDB IO)
- draft Arrow serialization benchmarks
The goal is to create a standard API for benchmarks:
- requirements
- specify experiment scenarios in an easy way (e.g. a DSL or just a dict for fixed steps; see the sketch after this list)
- collection of results
- plotting
- break down to components (e.g. compress, numpy object creation, serialization, mongo IO)
- make sure that when benchmark mode is disabled no impact on performance
- reproducible benchmarks
- goals
- understand our code's bottlenecks
- have a standard way to perform and repeat benchmarks
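As a starting point, one possible shape for the "dict of fixed steps" idea mentioned in the requirements is sketched below; every name here is illustrative and nothing is an existing arctic API:
```python
SCENARIO = {
    'name': 'version_store_numeric_roundtrip',
    'store': 'version_store',
    'data': {'rows': 1000000, 'columns': 10, 'dtype': 'float64'},
    'steps': ['serialize', 'compress', 'mongo_write', 'mongo_read', 'decompress'],
    'repeat': 5,
}

def run(scenario, dispatch):
    """Time each declared step; `dispatch` maps step names to callables."""
    import time
    timings = {}
    for _ in range(scenario['repeat']):
        for step in scenario['steps']:
            start = time.perf_counter()
            dispatch[step](scenario['data'])
            timings.setdefault(step, []).append(time.perf_counter() - start)
    return timings
```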
A skeleton of benchmarks exists in the top level directory, benchmarks.
There are some very basic examples and a readme (https://github.com/manahl/arctic/blob/master/benchmarks.md), but these should be expanded upon to include all the storage engines and some more involved use cases and examples (i.e. chunkstore with numerics only, vs chunkstore with strings, version store with pickled objects, etc). | 2hard
|
Title: deprecate ScatterPlotVisualizer
Body: - ``ScatterPlotVisualizer`` is being moved to contrib in 0.8
- ``DecisionBoundaryVisualizer`` is being moved to contrib in 0.8 | 2hard
|
Title: Support creating variables that are not logged even on TRACE level
Body: Often in automation we need to use passwords or other values that should not be logged. To help with that, various libraries have keywords that avoid logging given values even though their keywords normally do that. For example, SeleniumLibrary has `Input Password` that doesn't log the password and also temporarily disables Robot's logging to avoid the password being logged by the underlying Selenium on DEBUG level. This works fine otherwise, but if tests are run with `--loglevel TRACE`, Robot anyway logs all argument values and the password will be visible. Browser's `Fill Secret` tries to avoid that problem by requiring the value to be passed using a special `$var` syntax so that Robot won't log the actual variable value. Unfortunately that doesn't work if the value is passed to `Fill Secret` via a user keyword and only that keyword uses the `$var` syntax like in the following example:
```robotframework
*** Variables ***
${PASSWORD} secret
*** Test Cases ***
Example
Keyword ${PASSWORD} # This value is logged on TRACE level
*** Keywords ***
Keyword
[Arguments] ${pwd}
Fill Secret selector $pwd
```
To make it easier to use values that should not be logged, I propose we add new variable syntax `*{var}` to create "secret variables". The syntax could be used when creating variables in the Variables section, when creating local variables based on keyword return values, and with `Set Global/Suite/Test/Local Variable` keywords. The syntax would be only used for creating variables, they would be used normally like `${var}`:
```robotframework
*** Variables ***
*{PASSWORD} secret
*** Test Cases ***
Example
Keyword ${PASSWORD}
*{local} = Another keyword
Keyword ${local}
*** Keywords ***
Keyword
[Arguments] ${pwd}
Fill Secret selector ${pwd}
```
Technically the `*{var}` assignment would create a custom object with this kind of implementation:
```python
class Secret:
def __init__(self, value, name=None):
self.value = value
self.name = name
def __str__(self):
return f'<Secret {self.name}>' if self.name else '<Secret>'
```
These objects would be used when the variable is passed from tests to user keywords and from user keywords to other user keywords. The real value would, however, be passed to library keywords so they would get values normally and wouldn't need to be changed to benefit from this new syntax.
Although this new syntax would work with existing keywords, we should make it possible for library keywords to require arguments to be "secret". That's basically what `Fill Secret` does but this new syntax would avoid the problem that the value can be logged on higher level. A convenient way to support this would be type hints:
```python
from robot.api.secrets import Secret
def example(secret: Secret):
...
```
This would work so that Robot would validate that the used value is a `Secret` and fail if it's not. I believe we should pass the real value, not the actual `Secret` object, to the underlying keyword also in this case. That can confuse type checkers but keywords can use [typing.cast](https://docs.python.org/3/library/typing.html#typing.cast) if that's a problem. To support normal type conversion, we should also allow parameterizing the type hint like `Secret[int]`.
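One way the parameterized form could be expressed, purely as an illustrative sketch of the typing side (conversion handling is omitted):
```python
from typing import Generic, TypeVar

T = TypeVar('T')

class Secret(Generic[T]):
    def __init__(self, value: T, name=None):
        self.value = value
        self.name = name

    def __str__(self):
        return f'<Secret {self.name}>' if self.name else '<Secret>'

def example(secret: Secret[int]):
    # Robot would convert the wrapped value to an int before calling the keyword
    ...
```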
Keywords could also return secret values simply by returning `Secret` instances:
```python
from robot.api.secrets import Secret
def example():
return Secret('value')
```
| 2hard
|
Title: tqdm.auto.tqdm problem detecting IPython (Spyder and Jupyter QtConsole)
Body: ## Frontmatter
- [x] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [x] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [ ] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
>>> 4.54.1 3.8.5 (default, Sep 4 2020, 07:30:14)
>>> [GCC 7.3.0] linux
```
## Problem, Example
The `tqdm.auto.tqdm` entry point tries to serve the nicer-looking `tqdm.notebook.tqdm` class to my IPython console. The behaviour was tested using:
- IPython 7.19.0
- Spyder 4.2.0
- Jupyter Console 6.2.0
- Python 3.8.5
all obtained through a conda installation.
The shortest code to recreate the problem is with the following script
```py
# Import packages.
from tqdm.auto import tqdm
from time import sleep
# Keep track of sleep.
for _ in tqdm(range(6, 13)):
sleep(1)
```
Which works from the terminal, using basic class `tqdm.tqdm`:

Works from Jupyter Notebook using the decorated class `tqdm.notebook.tqdm`:

It fails in IPython, via Jupyter Console, Jupyter QtConsole and Spyder: it tries to serve the `tqdm.notebook.tqdm` class even though it cannot be displayed properly, resulting in the following message: `HBox(children=(HTML(value=''), FloatProgress(value=0.0, max=7.0), HTML(value='')))`.


## Background
I happened upon this issue when using the `tqdm.contrib.concurrent.process_map` function. I created an [issue](https://github.com/jupyter/qtconsole/issues/461) on the Jupyter QtConsole repository, where a short discussion pointed out the solution for my specific case. I was also made aware of the difficulties of detecting the frontend and possible ways to solve that.
Because the solution was including the optional key `tqdm_class`,
[see the documentation](https://tqdm.github.io/docs/contrib.concurrent/#process_map), it became apparent that there is an issue in the `tqdm.auto.tqdm` class.
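For reference, the workaround that ended up working in my case looks roughly like this (the function and data are just placeholders):
```py
from time import sleep

from tqdm import tqdm  # force the plain CLI bar instead of tqdm.notebook
from tqdm.contrib.concurrent import process_map

def work(x):
    sleep(0.1)
    return x * 2

if __name__ == '__main__':
    results = process_map(work, range(20), tqdm_class=tqdm)
```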
## Related Issues
The following issues are related, but far older and closed: #747, #645, #394.
| 2hard
|
Title: Add support for `Prisma.JsonNull` equivalent
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
https://discord.com/channels/933860922039099444/1080804462970535996/1080804462970535996
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Implement equivalent of https://www.prisma.io/docs/concepts/components/prisma-client/working-with-fields/working-with-json-fields#using-null-values | 2hard
|
Title: Configuration file support
Body: DESCRIPTION TODO | 2hard
|
Title: [shardformer, pipeline]: add `gradient_checkpointing_ratio` and heterogenous shard policy
Body: ## Description
`gradient checkpointing` is known to be memory efficient yet can slow down the TGS (tokens per second per GPU) to 75% (an estimated value, assuming the backward time is 2x the forward time). Thus, it is a trade-off between memory limitation and throughput.
However, there are times when the user wants to control this trade-off more precisely. For example, suppose `gradient checkpointing=False` results in 40GB of memory consumption while `gradient checkpointing=True` yields 5GB. Those with 30GB of memory headroom may want to enable `gradient checkpointing` only partially so that they can accelerate the training process while avoiding OOM.
**_This leads to the design of `gradient_checkpointing_ratio`, which allows users to control gradient_checkpointing more precisely. (FEATURE 1)_**
Furthermore, there is more potential when `gradient_checkpointing_ratio` is combined with Pipeline Parallelism (PP).
<img width="1545" alt="image" src="https://github.com/hpcaitech/ColossalAI/assets/31888981/05bfdacc-96b9-46d9-b02e-68363478ee7f">
As illustrated in the above figure (copied from http://arxiv.org/abs/2104.04473), different stages of PP store a varying number of micro-batches' gradients, e.g., in 1F1B, device 1 stores 4 micro-batches while device 4 only stores 1 micro-batch. This nature leads to extremely unbalanced memory consumption across devices.
As a common partition strategy, when a 32-layer model is partitioned on 4 devices, each device possesses 8 layers. Let us assume the activation memory of 1 micro-batch passing through 8 layers is 10GB, and with `gradient checkpointing` it reduces to 1GB. When executed on a 20GB accelerator, `gradient checkpointing` must be enabled since device 1 requires 4 \* 10GB, exceeding the 20GB limit. However, device 4 only consumes 10GB even with `gradient checkpointing=False`. The detailed memory consumption is shown in the following table.
- 100% execution time, yet OOM.
| Device | # Layers | # Ckpt Layers | Memory |
| ------ | -------- | ------------- | ------ |
| 1 | 8 | 0 | 40 |
| 2 | 8 | 0 | 30 |
| 3 | 8 | 0 | 20 |
| 4 | 8 | 0 | 10 |
- 121% execution time, no OOM.
| Device | # Layers | # Ckpt Layers | Memory |
| ------ | -------- | ------------- | ------ |
| 1 | 8 | 5 | 17.5 |
| 2 | 8 | 5 | 13.1 |
| 3 | 8 | 5 | 8.8 |
| 4 | 8 | 5 | 4.4 |
**_The key insight is to assign different `gradient_checkpointing_ratio` to different PP devices. (FEATURE 2)_** For PP devices with high memory consumption we can assign a higher `gradient_checkpointing_ratio`. As `gradient_checkpointing` incurs overhead, fewer layers should be assigned to make the execution time of each pipeline stage balanced. Otherwise, the pipeline stage with a higher `gradient_checkpointing_ratio` will be the bottleneck. The following illustrates an example of this strategy.
- **113% execution time**, no OOM.
| Device | # Layers | # Ckpt Layers | Memory |
| ------ | -------- | ------------- | ------ |
| 1 | 7 | 4 | 17 |
| 2 | 8 | 3 | 19.9 |
| 3 | 8 | 0 | 20 |
| 4 | 9 | 0 | 11.3 |
The solution is found by modeling the problem as a mixed-integer programming problem.
```python
import mip
import numpy as np
if __name__ == "__main__":
num_devices = 4
memory_bound = 20
weight = 0
grad = 0
total_layers = 32
activation_mem = np.linspace(40, 0, num_devices + 1).tolist()[:-1]
ckpt_mem = np.linspace(4, 0, num_devices + 1).tolist()[:-1]
std_layers = total_layers // num_devices
model = mip.Model()
num_layers = [model.add_var(var_type=mip.INTEGER) for _ in range(num_devices)]
num_ckpt = [model.add_var(var_type=mip.INTEGER) for _ in range(num_devices)]
# forward_time = model.add_var(var_type=mip.CONTINUOUS)
# backward_time = model.add_var(var_type=mip.CONTINUOUS)
forward_backward_time = model.add_var(var_type=mip.CONTINUOUS)
# Constraints
model += mip.xsum(num_layers) == total_layers
for i in range(num_devices):
model += num_ckpt[i] <= num_layers[i]
for i in range(num_devices):
# model += forward_time >= num_layers[i]
# model += backward_time >= num_layers[i] * 2 + num_ckpt[i]
model += forward_backward_time >= num_layers[i] * 3 + num_ckpt[i]
for i in range(num_devices):
model += activation_mem[i] / std_layers * (num_layers[i] - num_ckpt[i]) + ckpt_mem[i] / std_layers * num_ckpt[i] + (weight + grad) / std_layers * num_layers[i] <= memory_bound
# Objective
# 1. Forward phase: max(num_layers)
# 2. Backward phase: max(2 * num_layers + num_ckpt)
model.objective = mip.minimize(forward_backward_time)
model.optimize()
# print("Forward time:", forward_time.x)
# print("Backward time:", backward_time.x)
# print("Total time:", forward_time.x + backward_time.x)
print("Total time:", forward_backward_time.x)
std_time = std_layers * 3
# print("Slow down:", (forward_time.x + backward_time.x) / std_time)
print("Slow down:", forward_backward_time.x / std_time)
for i in range(num_devices):
weight_and_grad = (weight + grad) / std_layers * num_layers[i].x
activation = activation_mem[i] / std_layers * (num_layers[i].x - num_ckpt[i].x) + ckpt_mem[i] / std_layers * num_ckpt[i].x
print(
f"Device {i+1}: {num_layers[i].x:2.0f} layers, {num_ckpt[i].x:2.0f} checkpoints, "
f"Weight + Grad: {weight_and_grad:2.2f} GB, Activation: {activation:2.2f} GB, Total: {weight_and_grad + activation:2.2f} GB"
)
```
In summary, the two features are proposed to provide users with more flexibility to control the trade-off between memory consumption and throughput. The first feature allows users to control the overall `gradient_checkpointing_ratio`, while the second feature allows users to assign different `gradient_checkpointing_ratio` to different PP devices.
## Methods
#5508 is linked to this issue.
| 2hard
|
Title: Potentially unnecessary binary files on generate
Body: ## Bug description
In a basic Docker container:
```
FROM python:3.8
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY schema.prisma schema.prisma
RUN prisma generate
COPY . .
CMD ["python3", "main.py"]
```
I was inspecting the Docker container file size using `dive`. Looking in the `/tmp` folder it looks like there may be some unused binaries. It appears that the binaries used are in `/tmp/prisma/binaries/engines/<version>` according to [constants.py](https://github.com/RobertCraigie/prisma-client-py/blob/main/src/prisma/binaries/constants.py#L41).
There is another folder `/tmp/prisma-binaries` with 114MB of binaries with the same names; it is missing `prisma-cli-linux` but includes the others, e.g. `prisma-query-engine-debian-openssl-1.1.x`. If I remove this folder, the prisma query engine still appears to work.
It may be that the prisma generate and binary download process is downloading duplicate files.
## How to reproduce
```
docker build --platform linux/amd64 -t test-prisma .
dive test-prisma
```
## Environment & setup
- OS: MacOS Docker building for linux/amd64
- Database:
- Python version: 3.8
- Prisma version: prisma==0.6.4 | 2hard
|
Title: Add user dissimilarity matrix
Body: Hi,
first of all, thanks for your precious work. I have this suggestion: for the categorical variables, is it possible to add a precomputed dissimilarity matrix (like one from the gower function)?
Thanks | 2hard
|
Title: Add Jupyter Notebook Widgets
Body: This is an initial issue to track and research incorporating interactive Jupyter Notebook widgets into yellowbrick.
Likely any outputs from this would be well placed in the `contrib` module
A few potentially bad use cases:
- Feature analysis one or many categories
- Quickly save image
- Change color palettes
There are already a couple existing libraries that some Yellowbrick visualizers could plug into:
- https://github.com/jupyter-widgets/pythreejs
- https://github.com/maartenbreddels/ipyvolume
- https://github.com/bloomberg/bqplot
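To make the research a bit more concrete, a minimal sketch of what a `contrib` wrapper built on plain `ipywidgets` could look like (the visualizer and parameter choice are only illustrative):
```python
from ipywidgets import interact
from yellowbrick.features import RadViz

def explore_radviz(X, y, classes):
    """Re-draw a RadViz plot whenever the alpha slider changes."""
    @interact(alpha=(0.1, 1.0, 0.1))
    def render(alpha=1.0):
        oz = RadViz(classes=classes, alpha=alpha)
        oz.fit_transform(X, y)
        oz.show()  # use poof() on older Yellowbrick releases
```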
This issue was inspired by this article:
https://towardsdatascience.com/bring-your-jupyter-notebook-to-life-with-interactive-widgets-bc12e03f0916 | 2hard
|
Title: Add support for the Unsupported type
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma supports an `Unsupported` type that means types that Prisma does not support yet can still be represented in the schema.
We should support it too.
[https://www.prisma.io/docs/reference/api-reference/prisma-schema-reference#unsupported](https://www.prisma.io/docs/reference/api-reference/prisma-schema-reference#unsupported)
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
As fields of this type are not actually supported in the Client, what we have to do is limit what actions are available in certain situations.
E.g. if a model contains a required `Unsupported` field then `create()` and `update()` are not available | 2hard
|
Title: Animated Feature Visualizers
Body: Without too much work, it appears that we can create an animated image that captures changes to a dataset over time or some series-related domain.
We could accomplish this by using the built-in `matplotlib.animation`.
Here's an example located here:
https://matplotlib.org/2.1.2/gallery/animation/dynamic_image2.html
Here's an example API:
```
from yellowbrick.features import RadViz
from yellowbrick.contrib import AnimateFeatures
X = some_tabular_data_source
time_column = 'timestamp'
oz = AnimateFeatures(RadViz, X, series_column=time_column, outpath='dynamic_images.mp4')
```
| 2hard
|
Title: Topic Saliency and Relevancy Frequency Distribution
Body: **Describe the solution you'd like**
Adapt or extend the `FreqDist` visualizer to show not just the frequency of tokens in the corpus but rank them according to their relationship to a topic model (e.g. LDA). Ranked by *saliency*, the `FreqDist` will show the frequency of the terms that contribute the most information to all topics. Ranked by *relevance* the `FreqDist` will show the terms most relevant to a specific topic and the proportion of their frequency in the topic compared to the rest of the corpus. Metric definitions are:
1. **saliency(term w)** = frequency(w) * [sum_t p(t | w) * log(p(t | w)/p(t))] for topics t; see [Chuang et. al (2012)](http://vis.stanford.edu/files/2012-Termite-AVI.pdf)
2. **relevance(term w | topic t)** = λ * p(w | t) + (1 - λ) * p(w | t)/p(w); see [Sievert & Shirley (2014)](http://nlp.stanford.edu/events/illvi2014/papers/sievert-illvi2014.pdf)
This computation requires a model (so probably our best bet is to extend `FreqDist` to `TopicFreqDist` and make it a model visualizer). It also requires a parameter, lambda.
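A rough NumPy sketch of the two rankings exactly as written above (the input names and shapes are assumptions, not the proposed visualizer API):
```python
import numpy as np

def saliency(word_freq, p_topic_given_word, p_topic):
    """saliency(w) = freq(w) * sum_t p(t|w) * log(p(t|w) / p(t))

    word_freq: (V,) corpus term frequencies
    p_topic_given_word: (T, V) matrix of p(t | w)
    p_topic: (T,) marginal topic probabilities
    """
    distinctiveness = np.sum(
        p_topic_given_word * np.log(p_topic_given_word / p_topic[:, None]), axis=0
    )
    return word_freq * distinctiveness

def relevance(p_word_given_topic, p_word, lam=0.6):
    """relevance(w | t) = lam * p(w|t) + (1 - lam) * p(w|t) / p(w)"""
    return lam * p_word_given_topic + (1 - lam) * p_word_given_topic / p_word
```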
**Is your feature request related to a problem? Please describe.**
Implementing this will bring YB closer to being able to provide a PyLDAViz-like solution to topic modeling and clustering.
**Examples**
Here is the `FreqDist` ranked by saliency to a topic model instead of ranked purely by freqency:
<img width="623" alt="screenshot 2018-08-21 06 47 13" src="https://user-images.githubusercontent.com/745966/44397360-1a0d3b00-a50e-11e8-94bc-e684dd743ff6.png">
Here is a topic's terms ranked by relevance and compared to the rest of the corpus (e.g. red vs. blue):
<img width="635" alt="screenshot 2018-08-21 06 48 56" src="https://user-images.githubusercontent.com/745966/44397420-4923ac80-a50e-11e8-9991-84452a30384f.png">
| 2hard
|
Title: In place edit for posts
Body: If "admin" or "editor" is logged in, should see an "EDIT" icon on the top of each post, it should open a "modal" window with quick-edit form to edit Title, Slug, Body/Description and tags.
The modal should follow the "content_format" to choose the editor.
| 2hard
|
Title: Replace internal commenting system and use as default instead of disqus
Body: Started a new project to handle comments https://github.com/rochacbruno/flasqus
Quokka internal comments should be the default again, but using this Flasqus solution when it's ready.
Disqus, IntenseDebate and Facebook will still be options but not configured by default.
| 2hard
|
Title: Support Firepad collaborative editor (for multiple authors in realtime)
Body: https://github.com/firebase/firepad
| 2hard
|