Dataset columns: text — string, lengths 20 to 57.3k characters; labels — class label, 4 classes
Title: Add support for distinct to filter duplicate content in one field Body: ## Problem Some tasks require me to search for orphaned records in a database. For example, I have two tables, one of which references the other, and I want to find the rows in one table that no row in the other table references. My solution is to scan all rows in a table, deduplicate the values of one column into a Python set, and then search the second table for items that do not belong to the set. The issue arises when I reach for Prisma's find_many function: it has no "distinct" option, so I have to either: - fetch the whole table and then filter in Python - execute a raw query ## Suggested solution Add support for a distinct filter. Prisma already supports distinct filters (see reference in additional context). ## Alternatives Since we can already execute a raw query, I'm not sure there is another alternative that needs to be considered. ## Additional context Prisma reference: https://www.prisma.io/docs/concepts/components/prisma-client/aggregation-grouping-summarizing#select-distinct
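For illustration, the requested API could mirror the TS client's `distinct` option; the models and field names below are hypothetical, and this assumes a connected client `db` inside an async function:

```python
# `distinct` is the proposed addition: one row per distinct parent_id.
rows = await db.child.find_many(distinct=['parent_id'])
referenced_ids = {row.parent_id for row in rows}

# Orphan check: parents that no child references.
parents = await db.parent.find_many()
orphans = [p for p in parents if p.id not in referenced_ids]
```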
1medium
Title: Improve error message when running prisma with an outdated client Body: ## Problem When upgrading to a newer version of prisma python client - let's say, 0.13.1 to 0.14.0, if you forget to run `prisma generate` again, you'll get an error message like this > File "/Users/adeel/Documents/GitHub/my-project/backend/utils/db.py", line 36, in get_db _db = Prisma(auto_register=True, datasource={"url": get_db_uri(application_name)}) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/adeel/Library/Caches/pypoetry/virtualenvs/my-project-cJkWU15t-py3.12/lib/python3.12/site-packages/prisma/client.py", line 156, in __init__ self._set_generated_properties( TypeError: BasePrisma._set_generated_properties() missing 1 required keyword-only argument: 'preview_features' It's not very obvious based on this `preview_features` message that this error actually stems from an outdated client. ## Suggested solution The prisma library already knows `prisma.__version__`. We should also track the version used for generating the client - and if it's missing or mismatched, raise an error. ## Alternatives <!-- A clear and concise description of any alternative solutions or features you've considered. --> ## Additional context <!-- Add any other context or screenshots about the feature request here. -->
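A minimal sketch of the suggested check, assuming `prisma generate` is changed to stamp the generating version into the generated module; the `GENERATED_VERSION` name is hypothetical:

```python
import prisma

# Hypothetical constant written into the generated client by `prisma generate`.
GENERATED_VERSION = '0.13.1'

def check_client_version() -> None:
    if GENERATED_VERSION != prisma.__version__:
        raise RuntimeError(
            f'The client was generated with prisma v{GENERATED_VERSION} but '
            f'v{prisma.__version__} is installed; re-run `prisma generate`.'
        )
```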
1medium
Title: [FEATURE] Support `jsonschema` 4.18.0+ Body: **Is your feature request related to a problem? Please describe.** As of v4.18.0, `jsonschema` emits the following warnings:

```
DeprecationWarning: jsonschema.exceptions.RefResolutionError is deprecated as of version 4.18.0. If you wish to catch potential reference resolution errors, directly catch referencing.exceptions.Unresolvable.

DeprecationWarning: jsonschema.RefResolver is deprecated as of v4.18.0, in favor of the https://github.com/python-jsonschema/referencing library, which provides more compliant referencing behavior as well as more flexible APIs for customization. A future release will remove RefResolver. Please file a feature request (on referencing) if you are missing an API for the kind of customization you need.
```

**Describe the solution you'd like** It would be good if this package could address these warnings before these deprecations turn into removals. **Describe alternatives you've considered** `jsonschema` could be pinned to older versions, but we would need to support 4.18.x at some point. **Additional context** I have raised a similar issue in `hypothesis-jsonschema`: python-jsonschema/hypothesis-jsonschema#102
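For reference, migrating a `RefResolver`-style lookup to the `referencing` library looks roughly like this (the URIs and schemas here are illustrative):

```python
from jsonschema import Draft202012Validator
from referencing import Registry, Resource

child = Resource.from_contents(
    {"$schema": "https://json-schema.org/draft/2020-12/schema", "type": "integer"}
)
registry = Registry().with_resource("urn:example:child", child)

validator = Draft202012Validator({"$ref": "urn:example:child"}, registry=registry)
validator.validate(5)  # passes; validator.validate("x") would raise
```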
1medium
Title: Profile memory usage Body: I have got a simple test (with a LOOP to reproduce my problem):

```
*** Test Cases ***
Mem Leak
    Log    Start Sleep
    ...    console=${True}
    Sleep    3s
    ${l} =    Create List
    FOR    ${i}    IN RANGE    ${30000}
        ${a} =    Set Variable    XXX
    END
    Log    End Test
    ...    console=${True}
    Sleep    3s
```

I see that robot uses more than 60 MB (between the two pauses). I have disabled log saving: `robot --log None --loglevel TRACE Test.robot`. My initial problem: I have a lot of test cases (9k+) and OOM killed my robot process.
1medium
Title: Deprecate setting tags starting with a hyphen like `-tag` in `Test Tags` Body: The plan is to allow using the `-tag` syntax in `Test Tags` like ```robotframework *** Settings *** Test Tags -example ``` for removing tags set earlier, for example, in suite initialization files or by using the `--set-tag` option. That functionality is planned for RF 8.0 (#5250). This issue covers deprecating using tags like `-example` as literal tag names. Such usage should cause a deprecation warning to be emitted with a note that escaping like `\-example` is possible.
1medium
Title: Add type hints to parsing API Body: At least `get_tokens`, `get_model`, and their resource and init file variants need to get type hints and adding them is easy. Adding types to the whole parsing module would be good too, but that may be too much work for RF 6.1. If it is, a separate issue about that should be submitted for RF 6.2. One reason to add type hints is that they can enable using [mypyc](https://mypyc.readthedocs.io/en/latest/index.html) for compiling the parsing module into a C extension. That could help making parsing faster as part of execution and, probably more importantly, when used by IDEs or other external tools. Full `mypy` or `mypyc` compatibility isn't in the scope of this issue, but the better the type hints are, the easier it is to continue to that direction in the future.
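A rough sketch of the kind of signatures this asks for, not the final API (`Token` and `File` are the existing parsing-model classes):

```python
from pathlib import Path
from typing import Iterator, Optional, TextIO, Union

Source = Union[Path, str, TextIO]

def get_tokens(source: Source, data_only: bool = False,
               tokenize_variables: bool = False) -> Iterator['Token']: ...

def get_model(source: Source, data_only: bool = False,
              curdir: Optional[str] = None) -> 'File': ...
```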
1medium
Title: Add support for more options in `update` and `update_many` nested types Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> There a couple of open TODOs for missing fields for these types, https://github.com/RobertCraigie/prisma-client-py/blob/f6b10084901274a02311871e3f3f940e9dd8acab/src/prisma/generator/templates/types.py.jinja#L519. ## Suggested solution <!-- A clear and concise description of what you want to happen. --> TBD.
1medium
Title: Make Robot Framework compatible with `zipapp` Body: Hi, as just shown at the open space after applying some small changes, we can have zipapps that do contain a viable robotframework installation. There is a pull request for this feature. https://github.com/robotframework/robotframework/pull/4612 Best regards, Franz
1medium
Title: Improve import error message when the client is imported before generation Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> Currently it may not be immediately clear to users that they have to run `prisma generate` when they encounter this error: ``` >>> from prisma import Prisma Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'Prisma' from 'prisma' ``` ## Suggested solution <!-- A clear and concise description of what you want to happen. --> Python supports module level `__getattr__`, we could probably make use of this to provide an improved error message.
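A minimal sketch of that approach using PEP 562's module-level `__getattr__` in `prisma/__init__.py`; the exact wording is illustrative:

```python
def __getattr__(name: str):
    if name == 'Prisma':
        raise RuntimeError(
            'The Prisma client has not been generated yet; '
            'run `prisma generate` and try again.'
        )
    raise AttributeError(f"module 'prisma' has no attribute {name!r}")
```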
1medium
Title: Websocket invalid upgrade exception handling b0rkage Body: ### Is there an existing issue for this? - [X] I have searched the existing issues ### Describe the bug A client apparently sent no Upgrade header to a websocket endpoint, leading to an error as it should. An ugly traceback is printed on terminal even though the error eventually gets handled correctly it would seem. It would appear that the websockets module attempts to attach its exception on `request._exception` field which Sanic's Request doesn't have a slot for. This could be hidden if Sanic later used `raise BadRequest(...) from None` rather than `raise SanicException(...)`, suppressing the chain and giving a non-500 error for what really is no server error. Not sure though if that would from this context ever reach the client anyway but at least it could avoid a traceback in server log. If anyone wants to investigate and make a PR, feel free to (I am currently busy and cannot do that unfortunately). ```python Traceback (most recent call last): File "/home/user/.local/lib/python3.10/site-packages/websockets/server.py", line 111, in accept ) = self.process_request(request) File "/home/user/.local/lib/python3.10/site-packages/websockets/server.py", line 218, in process_request raise InvalidUpgrade("Upgrade", ", ".join(upgrade) if upgrade else None) websockets.exceptions.InvalidUpgrade: missing Upgrade header During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/user/sanic/sanic/server/protocols/websocket_protocol.py", line 120, in websocket_handshake resp: "http11.Response" = ws_proto.accept(request) File "/home/user/.local/lib/python3.10/site-packages/websockets/server.py", line 122, in accept request._exception = exc AttributeError: 'Request' object has no attribute '_exception' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "handle_request", line 97, in handle_request File "/home/user/sanic/sanic/app.py", line 1047, in _websocket_handler ws = await protocol.websocket_handshake(request, subprotocols) File "/home/user/sanic/sanic/server/protocols/websocket_protocol.py", line 126, in websocket_handshake raise SanicException(msg, status_code=500) sanic.exceptions.SanicException: Failed to open a WebSocket connection. See server log for more information. ``` ### Code snippet _No response_ ### Expected Behavior 400 Bad Request error reaching the client and being more silent on server side. Including the message of **missing Upgrade header** would be helpful for debugging (e.g. in case Nginx proxy config forgot to forward that header). ### How do you run Sanic? Sanic CLI ### Operating System Linux ### Sanic Version Almost 23.03.0 (a git version slightly before release) ### Additional context _No response_
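A sketch of the suggested suppression around the handshake; whether `BadRequest` is available under that name in this Sanic version is an assumption:

```python
from sanic.exceptions import BadRequest

def accept_or_400(ws_proto, request):
    try:
        return ws_proto.accept(request)  # may raise InvalidUpgrade
    except Exception as exc:
        # `from None` suppresses the chained traceback in the server log, and
        # a 400 (not 500) is what this client-side mistake deserves.
        raise BadRequest(f'Failed to open a WebSocket connection: {exc}') from None
```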
1medium
Title: Warn if the client is instantiated with an outdated schema Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> If this warning was present it would've made #180 easier for the user to solve by themselves. ## Suggested solution <!-- A clear and concise description of what you want to happen. --> Raise a warning if the client is instantiated using an out of date schema.
1medium
Title: [DOC] `groupby_agg` naming and documentation Body: `groupby_agg` is possible to implement with method-chaining in native pandas, contrary to what the documentation seems to imply; native pandas would use `groupby(...).transform`, e.g. ```python df = df.assign(new_column=lambda x: x.groupby("col")["col_to_agg"].transform(agg_func)) ``` Of course, this is a mouthful to type, so it still makes sense to have a function that takes care of this operation, but `groupby_agg` seems like the wrong name for the function, and something like `groupby_transform_new_column` would be more descriptive. I'm not sure whether the implementation should also change to reflect that native pandas supports `groupby(...).transform`, and so the `.merge` would not be necessary.
1medium
Title: Hook to optionally skip specific failures Body: Users sometimes encounter errors during testing that they don't wish to address for various reasons, like server issues out of their control. Current options like disabling checks lack the required granularity, and Schemathesis halts on these failures, disrupting the testing flow. ### Proposed Solution Introduce a new hook named `ignore_failure_if` to allow conditionally skipping certain test cases based on custom logic:

```python
import schemathesis

@schemathesis.hook
def ignore_failure_if(context, error, case, response):
    # Custom logic to determine if this case should be ignored
    context.ignore(error)
```

### Benefits
- Provides users with more control over what constitutes a failure.
- Enables more granular control than simply disabling checks.
- Allows users to dig deeper into their API by ignoring certain superficial errors.

### Reporting Mark intentionally skipped cases as `[IGNORED]` in the CLI output, similar to how flaky tests are marked `[FLAKY]`.
1medium
Title: Support Prisma TS's `findUniqueOrThrow` Body: ## Problem Often when fetching a model by ID I expect the result not to be `None`. If for some reason it is, an error should be raised, rather than always returning an optional type which needs a manual guard. ## Suggested solution Follow the pattern from the TypeScript client, add `find_unique_or_raise` and `find_first_or_raise` methods. https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#finduniqueorthrow ## Alternatives A previous version of the TS client used an argument `findUnique({ throwOnNotFound: true })`, which also solved the problem just fine. I'm not sure why they decided to deprecate that and make the switch to dedicated methods.
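At a call site the proposed methods would read like this (a sketch assuming an initialized client `db` inside an async function):

```python
# No Optional guard needed: a missing record raises instead of returning None.
user = await db.user.find_unique_or_raise(where={'id': user_id})
print(user.name)

# versus today's pattern:
maybe_user = await db.user.find_unique(where={'id': user_id})
if maybe_user is None:
    raise RuntimeError('user not found')
```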
1medium
Title: All iterations of templated tests should be executed even if one is skipped Body: In the documentation, it is written that: > Templated tests are also special so that all the rounds are executed even if one or more of them fails. It is possible to use this kind of [continue on failure](https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#continue-on-failure) mode with normal tests too, but with the templated tests the mode is on automatically. In particular: `all the rounds are executed even if one or more of them fails`. However, if one round is skipped, the following rounds are not executed. I don't know if this was intended or not, but it caused some problems in my tests. Example:

```
*** Test Cases ***
Eat yellow fruits
    [Documentation]    Eat only yellow fruits. Fails if the given item is a fruit that is not yellow and skips if the item is not a fruit.
    [Template]    Check fruit and eat it
    red_apple    # FAIL
    banana       # PASS
    plane        # SKIP => not a fruit
    lemon        # should be PASS but is not executed
```
1medium
Title: Support inline flags in test cases Body: Currently, our test cases are executed with different flags based on some combination of their filename and what directory they are in. I would prefer a system where we define the flags in a directive inline, e.g. like this: ``` # flags: --target-version=py38 --preview 1+1 # output 1 + 1 ```
1medium
Title: Add support for PyPy Body: - [ ] Run tests using PyPy - [ ] Add PyPI classifiers mentioning support
1medium
Title: Recognize library classes decorated with `@library` decorator regardless of their name Body: Project structure:

```shell
my_module
    sub_module
        lib.py
        __init__.py
    __init__.py
test.robot
```

`lib.py`

```python
from robot.api.deco import library, keyword

@library(scope="GLOBAL")
class lib:
    @keyword
    def my_py_kw(self):
        print("foo")
```

`test.robot`

```robot
*** Settings ***
Library    my_module.sub_module.lib

*** Test Cases ***
example
    my py kw
```

When I change the library class from `class lib` to `class Foo`, the keyword `my_py_kw` is not recognized anymore. I think that once the `@library` decorator is used on a library class, Robot should use the decorator to detect the library class instead of relying only on the class name.
1medium
Title: [TST] Investigate Windows Build Body: # Brief Description The Windows build never fails when it should. #738 comments out the windows build in `.azure-pipelines/pipeline-master.yml` so that it doesn't run until properly debugged. # Notes - This issue is a placeholder for reminding the dev team to debug the windows build.
1medium
Title: Feature: Add get_selection() to MycroftSkill class Body: A common need for Skill developers is to present a list of options to the user and ask them to make a selection. This is possible using a combination of existing methods, however it requires each developer to work out their own pattern. Adding a standard method will make this easier for developers and provide a more consistent experience for users. The current method generally involves (see the sketch below):
1. Take a list of items
2. Generate a question dialog using [`join_list()`](https://mycroft-core.readthedocs.io/en/latest/source/mycroft.util.format.html?highlight=list#mycroft.util.format.join_list)
3. [`get_response()`](https://mycroft-core.readthedocs.io/en/latest/source/mycroft.html?highlight=get_response#mycroft.MycroftSkill.get_response) from the user
4. Validate the response, checking for:
   - a fuzzy match of a list item
   - an ordinal or number, e.g. "third item" or "number two"
5. Return the selected item, or execute the `on_fail` callback
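A hedged sketch of what such a method could look like on `MycroftSkill`; the dialog name, cutoff, and matching strategy are illustrative choices, not a settled design:

```python
from difflib import get_close_matches

from mycroft.util.format import join_list
from mycroft.util.parse import extract_number

def get_selection(self, options, dialog='pick.one'):
    """Ask the user to pick one of `options`; return it or None."""
    # Steps 1-3: build the question from the list and ask it.
    response = self.get_response(dialog, data={'options': join_list(options, 'or')})
    if not response:
        return None
    # Step 4: ordinal or number answers, e.g. "the third one", "number two".
    num = extract_number(response, ordinals=True)
    if num and 1 <= int(num) <= len(options):
        return options[int(num) - 1]
    # Step 4: fuzzy match against the option names.
    matches = get_close_matches(response, options, n=1, cutoff=0.6)
    return matches[0] if matches else None
```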
1medium
Title: CLI is not thread-safe Body: <!-- Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output. See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output. --> ## Bug description <!-- A clear and concise description of what the bug is. --> If binaries are missing, the CLI will naively attempt to download them, this can lead to OS errors if multiple processes are started at once. ## How to reproduce <!-- Steps to reproduce the behavior: 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> Ful reproduction is still in progress. Encountered stack trace: ``` lint run-test-pre: commands[0] | python scripts/cleanup.py lint run-test-pre: commands[1] | coverage run -m prisma generate --schema=tests/data/schema.prisma [DEBUG ] prisma.binaries.binaries: query-engine cached at /var/folders/ql/0v8h20s972s6zz4t_3qc49bc0000gp/T/prisma/binaries/engines/1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85/prisma-query-engine-darwin [DEBUG ] prisma.binaries.binaries: migration-engine cached at /var/folders/ql/0v8h20s972s6zz4t_3qc49bc0000gp/T/prisma/binaries/engines/1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85/prisma-migration-engine-darwin [DEBUG ] prisma.binaries.binaries: introspection-engine cached at /var/folders/ql/0v8h20s972s6zz4t_3qc49bc0000gp/T/prisma/binaries/engines/1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85/prisma-introspection-engine-darwin [DEBUG ] prisma.binaries.binaries: prisma-fmt cached at /var/folders/ql/0v8h20s972s6zz4t_3qc49bc0000gp/T/prisma/binaries/engines/1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85/prisma-prisma-fmt-darwin [DEBUG ] prisma.binaries.binaries: prisma-cli-darwin cached at /var/folders/ql/0v8h20s972s6zz4t_3qc49bc0000gp/T/prisma/binaries/engines/1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85/prisma-cli-darwin [DEBUG ] prisma.binaries.binaries: All binaries are cached [DEBUG ] prisma.cli.prisma: Using Prisma CLI at /var/folders/ql/0v8h20s972s6zz4t_3qc49bc0000gp/T/prisma/binaries/engines/1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85/prisma-cli-darwin [DEBUG ] prisma.cli.prisma: Running prisma command with args: ['generate', '--schema=tests/data/schema.prisma'] Traceback (most recent call last): File "/private/tmp/tox/prisma-client-py/lint/lib/python3.9/site-packages/prisma/__main__.py", line 6, in <module> cli.main() File "/private/tmp/tox/prisma-client-py/lint/lib/python3.9/site-packages/prisma/cli/cli.py", line 38, in main sys.exit(prisma.run(args[1:])) File "/private/tmp/tox/prisma-client-py/lint/lib/python3.9/site-packages/prisma/cli/prisma.py", line 44, in run process = subprocess.run( File "/usr/local/Cellar/[email protected]/3.9.1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 501, in run with Popen(*popenargs, **kwargs) as process: File "/usr/local/Cellar/[email protected]/3.9.1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 947, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/usr/local/Cellar/[email protected]/3.9.1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 1819, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) PermissionError: [Errno 13] Permission denied: '/var/folders/ql/0v8h20s972s6zz4t_3qc49bc0000gp/T/prisma/binaries/engines/1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85/prisma-cli-darwin' ``` ## Expected behavior <!-- A clear and 
concise description of what you expected to happen. --> No errors. ## Environment & setup <!-- In which environment does the problem occur --> - OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]--> Mac OS - Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> N/A - Python version: <!--[Run `python -V` to see your Python version]--> Python 3.9.1
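One possible mitigation, sketched below: serialise the download step across processes with an exclusive lock file. This uses POSIX-only `fcntl`; a real fix would need a cross-platform lock, and the lock-file location is illustrative:

```python
import fcntl
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def binary_download_lock(cache_dir: Path):
    lock_path = cache_dir / '.download.lock'
    with open(lock_path, 'w') as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)  # blocks until any other process finishes
        try:
            yield
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)

# Only one process downloads; the rest wait, then find the binaries cached.
```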
1medium
Title: Cache Control: Conditional Request and 304 Response Support Body: A previous [PR](https://github.com/sanic-org/sanic/pull/2447) added the `cache-control` HTTP header to the `file` function in Sanic, but best practice is to also implement the ETag and file validators. **Correct me if I am wrong:** With a cache control value of `no-cache`, a validation is required every time before the client uses the cached value. From my observation, a validation request is usually a conditional request (a request that contains an [If-None-Match](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-None-Match) or an [If-Modified-Since](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-Modified-Since) header) to the resource URL. If the cached content is validated and the HTTP method is GET or HEAD, a [304 Not Modified response](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/304) can be sent to the client, which is faster and smaller because it doesn't have to contain the file in the body. Validation means checking the conditions in a conditional request. I think Sanic already supports 304 responses for static files. I am wondering if we can do the same for the `file` response in a general route handler.
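For illustration, this is roughly the conditional-request handling a route does by hand today and that the `file` helper could absorb (the ETag value is a placeholder):

```python
from sanic import Sanic
from sanic.response import empty, file

app = Sanic('demo')

@app.get('/report')
async def report(request):
    etag = '"v1-abc123"'  # placeholder; normally derived from mtime/size or a hash
    if request.headers.get('if-none-match') == etag:
        # Cached copy validated: 304 with no body is faster and smaller.
        return empty(status=304, headers={'ETag': etag})
    return await file('/srv/report.pdf', headers={'ETag': etag})
```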
1medium
Title: Implement Detection Error Tradeoff Curves (DET) Visualizer Body: A community suggestion from reddit: implement a DET curve visualizer for model comparison. From [wikipedia](https://en.wikipedia.org/wiki/Detection_error_tradeoff): > A detection error tradeoff (DET) graph is a graphical plot of error rates for binary classification systems, plotting the false rejection rate vs. false acceptance rate.[1] The x- and y-axes are scaled non-linearly by their standard normal deviates (or just by logarithmic transformation), yielding tradeoff curves that are more linear than ROC curves, and use most of the image area to highlight the differences of importance in the critical operating region.

Aspirational Wikipedia image comparing multiple models: ![440px-example_of_det_curves](https://user-images.githubusercontent.com/2944777/40618185-9d284cd4-6245-11e8-994d-4edbcdd27344.png)

Sample code for plotting one model with matplotlib (reference link below):

```
import matplotlib.ticker
from matplotlib import pyplot as plt

def DETCurve(fps, fns):
    """
    Given false positive and false negative rates, produce a DET Curve.
    The false positive rate is assumed to be increasing while the false
    negative rate is assumed to be decreasing.
    """
    fig, ax = plt.subplots()
    ax.plot(fps, fns)
    ax.set_yscale('log')
    ax.set_xscale('log')
    ticks_to_use = [0.001, 0.002, 0.005, 0.01, 0.02, 0.05,
                    0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50]
    ax.get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
    ax.get_yaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
    ax.set_xticks(ticks_to_use)
    ax.set_yticks(ticks_to_use)
    ax.axis([0.001, 50, 0.001, 50])
```

**Suggested Visualizer Interface**

```
models = [LogisticRegression(), AnotherModel(), DifferentModel()]
viz = DETCurve(models)
viz.fit(X, y)
viz.poof()
```

Sample Code Reference Link https://jeremykarnowski.wordpress.com/2015/08/07/detection-error-tradeoff-det-curves/ Source Reddit comment https://www.reddit.com/r/MachineLearning/comments/8mbif5/news_new_release_of_python_ml_visualization/dzpford/
1medium
Title: New API for using named arguments programmatically Body: RF 7.0 attempted to provide a more convenient API for setting keyword arguments programmatically (#5000). The idea was to both support named arguments with non-string values and to avoid the need for escaping `\` and other special characters in strings. Unfortunately the selected implementation approach caused backwards incompatibility problems. The implementation done in RF 7.0 was reverted in RF 7.0.1. Because being able to use non-string values with named arguments is important, for example, for the DataDriver tool, RF 7.0.1 also added a new API for exactly that purpose (#5031). This new API has a problem that it's not compatible with the JSON model. A better API is thus needed. I believe it's best to concentrate on named arguments, but we can think about escaping as well and submit a separate issue about that if needed. My proposal is to add new `named_args: dict|None = None` argument to `robot.running.Keyword`. The argument would be `None` by default and it wouldn't be set by Robot when executing tests normally. In such usages `args` would be expected to contain also named arguments using the `name=value` syntax. In programmatic usage `named_args` could be set to a dictionary containing named arguments. In that case `args` would be expected to contain only positional arguments. For execution it would be enough to enhance `robot.running.Keyword`, but it might make sense to add `named_args` also to `robot.result.Keyword`.
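Under this proposal, programmatic construction would read roughly like the sketch below; `named_args` is the proposed addition, not an existing parameter:

```python
from robot.running import Keyword

# `args` carries only positional arguments; `named_args` carries named ones,
# so non-string values and backslashes need no escaping.
kw = Keyword(
    'Login',
    args=['alice'],
    named_args={'retries': 3, 'password_file': 'C:\\secrets\\pw.txt'},
)
```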
1medium
Title: Add support for batching raw queries Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> This is currently not supported. ## Suggested solution <!-- A clear and concise description of what you want to happen. --> It is unknown if Prisma actually supports this internally or not but I would be surprised if they didn't. If it is supported we should add the `execute_raw` query method to the query batcher.
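If the engine supports it, usage could mirror the existing batcher as sketched below; `execute_raw` on the batcher is exactly what this issue proposes and does not exist yet:

```python
# Assumes an initialized, connected client `db` inside an async function.
async with db.batch_() as batcher:
    batcher.execute_raw('UPDATE "User" SET views = views + 1 WHERE id = $1', user_id)
    batcher.execute_raw('DELETE FROM "Session" WHERE user_id = $1', user_id)
# Both statements would be sent to the query engine as a single batch on exit.
```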
1medium
Title: Libdoc doesn't handle parameterized types like `list[int]` properly Body: Like in Browser lib, `List[Permission]` or `Dict[str, int]`. See https://marketsquare.github.io/robotframework-browser/Browser.html#New%20Context
1medium
Title: Make result file paths hyperlinks on terminal Body: Terminal emulators nowadays support hyperlinks pretty well. The [standard emerged in 2017](https://gist.github.com/egmontkob/eb114294efbcd5adb1944c9f3cb5feda) and the current list of [supporting terminal emulators](https://github.com/Alhadis/OSC8-Adoption/?tab=readme-ov-file) is pretty exhaustive. Based on a quick prototype, making the result file paths shown in the console after execution hyperlinks is easy and works very well. The simple solution ought to work fine on all Linux and OSX terminals and the main problem is handling Windows. The traditional [Windows Console](https://en.wikipedia.org/wiki/Windows_Console) isn't a terminal emulator and doesn't support ANSI colors or hyperlinks. We have custom code for handling colors, but I don't think something like that is possible with links. There are, however, various other terminals for Windows and, for example, the Microsoft developed [Windows Terminal](https://en.wikipedia.org/wiki/Windows_Terminal) is a proper terminal emulator that supports both ANSI colors and hyperlinks. The problem is that we don't currently have any code for detecting terminal capabilities. With colors we, by default, simply use ANSI outside Windows and on Windows use the aforementioned custom solution. Using the same approach with hyperlinks would be easy, but then Windows users with "proper" terminals would need to separately enable hyperlinks with `--console-colors ansi` (or `-C ansi`). That's a bit annoying, but I believe it would be fine in the beginning. I consider this such a convenient feature that I have tentatively added it to the RF 7.1 scope and will ask for opinions from others on the #devel channel on our Slack. Because we want RF 7.1 out soon, there's no time for anything bigger, so my proposal is to support hyperlinks when ANSI colors are enabled and to keep them disabled on Windows by default. If we agree this is a good approach, I'll submit a separate issue about enhancing terminal capability testing in RF 7.2 to make the Windows support better.
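The escape sequence itself is tiny; a minimal sketch of emitting an OSC 8 hyperlink from Python (with the terminal support caveats discussed above):

```python
def hyperlink(uri: str, label: str) -> str:
    # OSC 8 ; ; URI ST, then the visible label, then an empty OSC 8 to close.
    return f'\033]8;;{uri}\033\\{label}\033]8;;\033\\'

print('Log:     ' + hyperlink('file:///tmp/outputs/log.html', '/tmp/outputs/log.html'))
print('Report:  ' + hyperlink('file:///tmp/outputs/report.html', '/tmp/outputs/report.html'))
```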
1medium
Title: `Async for` can be used to iterate over a websocket's incoming messages Body: **Is your feature request related to a problem? Please describe.** When creating a websocket server I'd like to use `async for` to iterate over the incoming messages from a connection. This is a feature that the [websockets lib](https://websockets.readthedocs.io/en/stable/) uses. Currently, if you try to use `async for` you get the following error: ```console TypeError: 'async for' requires an object with __aiter__ method, got WebsocketImplProtocol ``` **Describe the solution you'd like** Ideally, I could run something like the following on a Sanic websocket route: ```python @app.websocket("/") async def feed(request, ws): async for msg in ws: print(f'received: {msg.data}') await ws.send(msg.data) ``` **Additional context** [This was originally discussed on the sanic-support channel on the discord server](https://discord.com/channels/812221182594121728/813454547585990678/978393931903545344)
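One plausible way to add the missing protocol methods, sketched below; this is not necessarily how Sanic would implement it, and only the two new methods are shown:

```python
class WebsocketImplProtocol:
    # ... existing implementation ...

    def __aiter__(self):
        return self

    async def __anext__(self):
        message = await self.recv()
        if message is None:  # connection was closed
            raise StopAsyncIteration
        return message
```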
1medium
Title: Collapse long failure messages in log and report Body: Currently long failure messages (over 40 lines by default, configurable with `--max-error-lines` (#2576)) are cut from the middle. This is done to avoid huge messages messing up logs and reports, but the problem is that some valuable information may be lost. Another issue is that even the resulting messages are somewhat long and take a lot of space. The above is an old problem, but the situation is getting worse in RF 7.0 due to failure messages being shown not only with tests, but also with each keyword and control structure. Earlier keywords and control structures in the result model didn't have a message at all, but it was added as part of the result model cleanup (#4883). The motivation was this:
- We are adding a JSON representation to the result model (#4847) and want the model to be as stable and future-proof as possible.
- We likely want to allow running individual keywords outside Robot core in the future. At that point we want the result model to have a message, not only a status as earlier.
- In some special cases (at least with `--flatten-keywords` and `--remove-keywords`) we want to add some extra notes to result objects. Earlier we used documentation for that, but it was odd because control structures such as FOR loops cannot otherwise have documentation. Using the message for this purpose works much better.

Now that keywords and control structures also have a message, the same message can be shown on multiple levels in the log file. That's rather annoying in general, but gets especially irritating if the message is long. To mitigate this issue, and to fix the old issue with long messages, I propose we do the following:
1. Show only the beginning of long failure messages in log and report by default. I believe we should show so much that typical messages are shown fully, but considerably less than the 40 lines that is the current maximum. We could possibly also show more with tests than with keywords and control structures.
2. Have some way to show the full message. Probably a simple "Show more" button/link would be fine.
3. Stop cutting long messages otherwise. This can increase output file sizes, but I doubt the difference is too big.
1medium
Title: [BUG/ENH] deconcatenate_columns should inform users how many columns need to be provided Body: Currently, an `AssertionError` is raised if the number of columns provided to `new_column_names` is not correct, and the error message is not very informative, stating only, "number of new column names not correct." An improved error message would indicate how many columns should be provided. An even better API change might be adding a `autonumber_basename` kwarg, which automatically numbers the column names. For example, if the deconcatenated column adds 4 new columns that ought to share the same base name `serv`, it would automatically name them `serv1, serv2, serv3, serv4` (one-indexing or zero-indexing can be a kwarg as well!)
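A sketch of the improved check; the exact wording and the `JanitorError` type are suggestions:

```python
import pandas as pd
from janitor.errors import JanitorError

def check_new_column_names(df: pd.DataFrame, column_name: str, sep: str,
                           new_column_names: list) -> None:
    expected = df[column_name].str.split(sep, expand=True).shape[1]
    if len(new_column_names) != expected:
        raise JanitorError(
            f'deconcatenating {column_name!r} produces {expected} columns, '
            f'but {len(new_column_names)} new column names were provided'
        )
```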
1medium
Title: Use `jsonschema.Draft202012Validator` in response validation for Open API 3.1 Body: > Hey @Stranger6667! > Since Schemathesis does support `jsonschema` >= 4, would it make sense at least for the time being to make `validate_response` support 3.1 by using `jsonschema.Draft202012Validator`? Maybe as a configurable opt-in option? This would help folks validate their API responses against a 3.1 schema. Without data generation for now. _Originally posted by @thatguysimon in https://github.com/schemathesis/schemathesis/issues/494#issuecomment-1401536089_ I think it is a reasonable improvement for the status quo and won't take much effort to implement
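Roughly what the opt-in would do under the hood (a sketch; how it is wired into `validate_response` is a design question):

```python
from jsonschema import Draft202012Validator

def validate_response_31(response_json, schema) -> None:
    Draft202012Validator.check_schema(schema)  # Open API 3.1 schemas are 2020-12
    Draft202012Validator(schema).validate(response_json)
```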
1medium
Title: Handler is modified after being wrapped by `app.route` Body: **Is your feature request related to a problem? Please describe.** The handler is modified after being wrapped by `app.route`:

```python
from sanic import Sanic
from sanic.response import text

app = Sanic("test")

@app.get("/")
def get(request):
    """ here is my doc """
    return text("123")

print(get.__doc__)
print(get.__name__)
```

The original function has been changed into a tuple:

```sh
Built-in immutable sequence.

If no argument is given, the constructor returns an empty tuple.
If iterable is specified the tuple is initialized from iterable's items.

If the argument is a tuple, the return value is the same object.
Traceback (most recent call last):
  File "/Users/zinklu/code/WebProject/SanicDebug/app.py", line 22, in <module>
    print(get.__name__)
AttributeError: 'tuple' object has no attribute '__name__'
```

**Describe the solution you'd like** `app.route` should not change the original handler:

```python
from sanic import Sanic
from sanic.response import text

app = Sanic("test")

@app.get("/")
def get(request):
    """ here is my doc """
    return text("123")

print(get.__doc__)
print(get.__name__)
```

```sh
here is my doc
get
```

**Additional context**
1medium
Title: Give local keywords in resource files precedence over keywords in test case files Body: Currently if you have a test case file like ```robotframework *** Settings *** Resource example.resource *** Test Cases *** Example Keyword 1 *** Keywords *** Keyword 2 Log Keyword in test case file ``` and the imported resource file contains ```robotframework *** Keywords *** Keyword 1 Keyword 2 Keyword 2 Log Keyword in resource file ``` the `Keyword 2` keyword that ends up being called by `Keyword 1` is from the test case file. This is rather strange and it would be more logical if the keyword from the same resource file as `Keyword 1` would be used. This behavior was deprecated in RF 6.0 when local keywords were given precedence over imported keywords with same name (#4366).
1medium
Title: Allow setting variables with TEST scope in suite setup/teardown (not visible for tests or child suites) Body: **Description** For better readability, I sometimes use variables with TEST scope, which allows passing a value to multiple keywords within a single test, ensuring that the value is not misused in another test. Unfortunately, when I want to reuse the keywords that rely on this scope, in Suite Setup and Suite Teardown, I encounter this error: `Cannot set test variable when no test is started.` **Current Workaround** The workaround I have found is to use SUITE scope. This allows the keywords to be reused in Suite Setup and Teardown. However, this makes the use of the SUITE scope confusing, as one might wonder why SUITE scope is used when TEST scope would be more appropriate. **Enhancement Proposal** Would it be possible to allow TEST scope in Suite Setup and Suite Teardown as well? Variables with TEST scope used in these setups should not be accessible outside of Suite Setup or Suite Teardown. If broader access is required, then SUITE scope should be used. **Example Code** In this [example](https://robotframework.org/code/?codeProject=N4IgdghgtgpiBcIDCBXAzgFwPZQAQGMsATOAGhBLXwCcBLABw1qzARBHIDNaAbGNBAG1Q3PgDlocRBn4YAdNSwAjLBg4hCYGVrYAqfbgDKMDEzABzNLn26AOmEMpaMoyZT1cn3AFEAHtHo+XABpGABPAHcsaiJcABUsXAAlGHQYe0dnGHiYCBisCLAvH38oQOzQyOjYhOTUtHSwONlXDHdivwCgyqiY+MSUtPtmzBy8ogKir06y7vDemoH6xvsbHNGkCAarG2HZew7S8pD56v66obBVgx7qnf0Mk1wAWTDcADU82gglPgOvd4AQSSXgAJMBngBNXAAfSBSQAkoCAEIAGW8AF8vK9cAA3L4-IL4ngobKeKhYegwAC8cW8hji9nsqKw5heb0+dEJjWKLLZtSQLDQWCCnnBUNh8KRaMxTLAM2Otz6tUGDX+nmMGHZHwJvx5Xj52s53z19hAGIAuuQSHRcTAiAAFRQAKxg+DU8E4EB4DXIihUGHeMGoaGYrEQAHY5ABGdT+1SA6iWBDADF+mAARyc1BgsC0AngggtGKAA), I need to use SUITE scope otherwise I get `Cannot set test variable when no test is started.` ```robot *** Settings *** Suite Setup Example Keyword To Reuse Suite Teardown Example Keyword To Reuse Test Setup Example Keyword To Reuse Test Teardown Example Keyword To Reuse *** Test Cases *** Test Example Keyword To Reuse *** Keywords *** Set My Variable VAR ${MY _VARIABLE} My variable value scope=TEST Log My Variable Log To Console ${MY _VARIABLE} Example Keyword To Reuse Set My Variable Log My Variable ```
1medium
Title: Report syntax errors better in log file Body: Currently when you have an invalid setting like `[Setpu]` in a test (or keyword, or task), an error is reported on the console, but the test is nevertheless executed and in the log file the invalid setting isn't visible under the test at all. This isn't great. It would be better if the invalid setting were visible under the test in the log file, and it should also fail the test. A similar situation occurs with invalid syntax like `END` or `ELSE` without an opening `IF`. At the moment such invalid syntax is considered to be a keyword. We have a `Reserved` library that contains matching keywords, so when these "keywords" are run they fail. Invalid syntax reported as failing keywords is odd and the resulting error messages aren't great either. A part of #4210 is detecting this kind of invalid syntax already at parsing time, but we need to also show it in the log file somehow. A solution to both of these error reporting issues is adding a new `Error` object to our `TestSuite` structure. When the parser detects invalid syntax, it can create such objects, and when they are run they simply fail. These errors should create new `<error>` elements in output.xml and they obviously need to be shown in log.html as well. A problem with this enhancement is that it's not fully backwards compatible:
- Invalid settings like `[Setpu]` don't currently affect test status, but in the future they will fail the test. An obvious solution for this kind of problem is not having such invalid settings in tests in the first place.
- External tools parsing output.xml need to take `<error>` elements into account. In general such tools should ignore elements they don't recognize, but there certainly are tools that don't behave like that.
- Tools working with the `TestSuite` model need to take the new `Error` objects into account.

Invalid syntax failing a test isn't a big concern for me, but this change possibly breaking external tools is a much bigger problem. I don't see any other way this could be handled, though. Doing the change in a non-major version is a bit questionable, but I don't think postponing this change to RF 7 (or changing RF 6.1 to RF 7) is a good idea either. We just need to inform external tool developers about this change in the release notes and possibly also otherwise.
1medium
Title: Add support metric to ClassificationReport Body: Currently, the ClassificationReport omits support because it is difficult to put into the heatmap scale of (0.0, 1.0). We should still include it, however and _color_ it as the percent of the total number of records, while _labeling_ it with the actual support number.
1medium
Title: Extend the InterclusterDistance visualizer Body: The `InterclusterDistance` visualizer is our newest cluster visualization, and while it's been implemented completely, there are still a few updates I'd like to make to it: - [ ] Ensure it works for a large range of clustering algorithms (and remove skipped tests) -- see below - [ ] Add custom [Principal Coordinate Analysis (PCoA)](https://github.com/bmabey/pyLDAvis/blob/master/pyLDAvis/_prepare.py#L106) embedding based on an [internal PyLDAViz implementation](https://github.com/bmabey/pyLDAvis/blob/master/pyLDAvis/_prepare.py#L68) of [Jensen-Shannon Divergence](https://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence). - [ ] Add [Vanilla PCoA](https://github.com/bmabey/pyLDAvis/blob/master/pyLDAvis/_prepare.py#L73) embedding (which may already be in Scikit-Learn) - [ ] Investigate other scoring mechanisms besides # of instances (as in PyLDAViz for LDA, possibly Silhouette scores, something using y, or cluster diameter) and create a new issue for them or implement them. - [ ] Allow user to set color of clusters and relative opacity, computing the edge and face color opacities from the specified colors and opacity and setting them correctly as is done with hard coding now. - [ ] Update the notes section of the visualizer with new embedding and scoring when they are complete! - [ ] Create documentation example using either sklearn newsgroups corpus or hobbies corpus vectorized as TF-IDF and clustered with LDA, to show topic modeling approach similar to PyLDAViz. ### Notes on colors Right now the facecolor of the clusters is hard coded to `#2e719344` and the edgecolor of the clusters is hard coded to `#2e719399` note the `44` and `99` on the colors respectively, these set the opacity of the color; the edge is more opaque than the face of the cluster in order to allow better visibility of clusters that overlap. I would like to support the user specifying a color for all clusters or a colormap/colors for each cluster as well as the ability to specify the face opacity. If the user specifies these things, then we have to compute the relative alpha (opacity) for both the edge and the face to maintain the currently hardcoded behavior. ### Notes on supported algorithms Right now we use the `cluster_centers_` attribute of the model to embed the centers into 2 dimensional space and the `labels_` attribute to score/size the clusters. Unfortunately, not all clustering algorithms have these attributes, so we need to extend the `cluster_center_` property on the visualizer to either find a different attribute or to compute the cluster centers some how. Below is a listing of various clustering algorithms and their attributes. 
We would like to ensure support for the following clustering algorithms: ``` AgglomerativeClustering (Ward and Average) - children_ - labels_ - n_components_ - n_leaves_ Birch - dummy_leaf_ - fit_ - labels_ - partial_fit_ - root_ - subcluster_centers_ - subcluster_labels_ FeatureAgglomeration - children_ - labels_ - n_components_ - n_leaves_ decomposition.LatentDirichletAllocation - bound_ - components_ - doc_topic_prior_ - exp_dirichlet_component_ - n_batch_iter_ - n_iter_ - random_state_ - topic_word_prior_ ``` It would be great if we could find support for the following clustering algorithms, but it's not clear if it's possible or not either because there is no obvious centers or labels: ``` DBSCAN - components_ - core_sample_indices_ - labels_ mixture.GaussianMixture - converged_ - covariances_ - lower_bound_ - means_ - n_iter_ - precisions_ - precisions_cholesky_ - weights_ SpectralClustering - affinity_matrix_ - labels_ ``` We already have support for the following clustering algorithms (using the `cluster_centers_` attribute for embedding and the `labels_` attribute for scoring): ``` AffinityPropagation - affinity_matrix_ - cluster_centers_ - cluster_centers_indices_ - labels_ - n_iter_ KMeans - cluster_centers_ - inertia_ - labels_ - n_iter_ MiniBatchKMeans - cluster_centers_ - counts_ - inertia_ - init_size_ - labels_ - n_iter_ MeanShift - cluster_centers_ - labels_ ```
1medium
Title: Listeners cannot set timeouts Body: **Description:** I want to use a Robot Framework listener to dynamically set the execution time for each test case during test execution. For example, in a CI scenario, I can define a total execution time and, during the Robot Framework run, dynamically adjust the remaining execution time for each test case. However, it seems that there is currently no way to modify a test case's timeout parameter through a listener:

```
from robot import result, running
from testlogging import debug_log

logger = debug_log(__file__)

class TimeoutConfig:
    ROBOT_LISTENER_API_VERSION = 3

    def __init__(self, timeout: str):
        self.timeout = timeout

    def start_test(self, data: running.TestCase, result: result.TestCase):
        logger.info('ROBOT LISTENER: Test Case {} has timeout {}', data.name, data.timeout)
        data.timeout = self.timeout
        logger.info('ROBOT LISTENER: Test Case {} has timeout {}', data.name, data.timeout)
```

Test case:

```
*** Settings ***
Suite Teardown    Suite Teardown

*** Test Cases ***
Log Hello World
    [Timeout]    5
    [Teardown]    Testcase Teardown
    Sleep    20
```

The console output shows that `data.timeout` was modified, but the Robot report shows that the original timeout was still used.
1medium
Title: Failed imports should fail suite they belong to Body: We encountered the following scenario while using a custom library: (RF 6.0, Python 3.10.8) When the initialization of the custom library fails, the test execution is continued until the first keyword provided by that library is used. The result is a rather misleading "No keyword with name 'XY' found.". Here is a minimal example: ExampleLib.py:

```python
class ExampleLib:
    ROBOT_LIBRARY_SCOPE = 'SUITE'
    ROBOT_LISTENER_API_VERSION = 3

    def __init__(self):
        # do init stuff
        raise Exception("something bad happened while doing init stuff")

    def sayHello(self):
        return "Hello World"
```

Example.robot:

```robot
*** Settings ***
Library    ExampleLib.py

*** Test Cases ***
Test SayHello
    ${res} =    Set Variable    "This is a String"
    ${res} =    sayHello
    Log To Console    ${res}
```

Result: ![Result](https://github.com/robotframework/robotframework/assets/103201810/6b577e36-61c7-41cd-a359-e4e7d181f01a) It would be great to have the possibility to stop the test execution from within the custom library's constructor.
1medium
Title: [INF] Don't require column name arguments to be strings Body: It occurs to me that there are methods within `pyjanitor` that require column name method arguments to be strings (specified in the docs and/or validated in the actual code). Examples are `engineering.convert_units()` and the two methods in the finance submodule (both of which I've worked on...guilty as charged :)) Pandas does not require column names to be strings, and this could pose a big limitation for one's workflow (depending on the data set). I'm not sure how pervasive this issue is throughout the `pyjanitor` methods; but, it seems like a discussion worth having, potentially resulting in a sweep of the current code base to remove such limitations.
1medium
Title: Extend PCA Visualizer with Component-Feature Strength Body: **Describe the solution you'd like** Provide an optional heatmap and color bar underneath the PCA visualizer (by shifting the lower axes) that shows the magnitude of each feature's contribution to each component. This explains which features contribute the most to which component. **Is your feature request related to a problem? Please describe.** Although we have the biplot mode to plot feature strengths, they can sometimes be visually overlapping or unintelligible, particularly if there is a large number of features. **Examples** ![image from ios](https://user-images.githubusercontent.com/745966/45625627-52dff780-ba5b-11e8-81d5-7f2a82f6b1d6.jpg) Code to generate this (assumes `pca` is a fitted two-component decomposition and `cancer` is scikit-learn's breast cancer dataset):

```python
import numpy as np
from matplotlib import pyplot as plt

fig, ax = plt.subplots(figsize=(8, 4))
plt.imshow(pca.components_, interpolation='none', cmap='plasma')
feature_names = list(cancer.feature_names)
ax.set_xticks(np.arange(-.5, len(feature_names)))
ax.set_yticks(np.arange(0.5, 2))
ax.set_xticklabels(feature_names, rotation=90, ha='left', fontsize=12)
ax.set_yticklabels(['First PC', 'Second PC'], va='bottom', fontsize=12)
plt.colorbar(orientation='horizontal',
             ticks=[pca.components_.min(), 0, pca.components_.max()],
             pad=0.65)
```

Though we will probably want to use `pcolormesh` rather than `imshow` as in `Rank2D`, `ClassificationReport` and `ConfusionMatrix`. Additionally it might be a tad nicer if the color bar was above the feature plot so that the axes names were the last thing in the chart. **Notes** This idea comes from page 55-56 of [Data Science Documentation](https://media.readthedocs.org/pdf/python-data-science/latest/python-data-science.pdf). I would be happy to include a citation to this in our documentation. (HTML version is [here](https://python-data-science.readthedocs.io/en/latest/unsupervised.html#principal-component-analysis)). @mapattacker any thoughts? See also #476 for other updates to the PCA visualizer.
1medium
Title: Add basic testcases for the audioservice interface Body: Add tests for the [audioservice](https://github.com/MycroftAI/mycroft-core/blob/dev/mycroft/skills/audioservice.py) interface for skills. Basically the tests need to create a mock bus and create the audioservice with

```python
from unittest.mock import Mock

bus_mock = Mock()
service = AudioService(bus_mock)
```

Then do calls against the object's methods and ensure that the expected messages are sent on the mock bus. If there are questions just comment here or join [our chat](https://chat.mycroft.ai)!
1medium
Title: Problems with non-string arguments when using `BuiltIn.run_keyword` Body: When using `BuiltIn.run_keyword`, the (positional) arguments that follow the target keyword name should be passed to the keyword "as is". A change in RF 7 changed this behavior in a specific situation: if there are exactly two positional arguments following the keyword name and the first of those arguments is iterable and the second a mapping, the arguments are no longer passed "as is" but are expanded before being passed to the target keyword.
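A minimal illustration of the regression (the keyword name is illustrative):

```python
from robot.libraries.BuiltIn import BuiltIn

# Expected: `My Keyword` receives exactly two positional arguments,
# [1, 2] and {'a': 3}, passed "as is".
# RF 7 regression: the list is expanded to positional arguments and the
# mapping to named arguments instead.
BuiltIn().run_keyword('My Keyword', [1, 2], {'a': 3})
```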
1medium
Title: Result model: Loudly deprecate not needed attributes and remove already deprecated ones Body: The result model (`robot.result.TestSuite`) got separate objects for representing control structures (`For`, `If`, ...) only in RF 4.0 (#3749). Before that, control structures were modeled as keywords with a `type` attribute telling what they actually represent. For backwards compatibility the new objects got keyword-specific attributes like `name`. These attributes were deprecated from the beginning and it's now finally time to remove them. This change will require changing our log file generation code, because it handles control structures as keywords, but that's not too big a problem. The change obviously affects all external tools handling control structures as keywords as well, making the change backwards incompatible. We couldn't do it in a bug fix or even in a feature release, but doing it in a major release ought to be fine. --- **UPDATE:** It turned out that we hadn't properly deprecated all the attributes. Some only used our custom `@deprecated` decorator, but the decorator didn't do anything. Instead of removing these attributes, we decided to change the decorator so that now there's an actual deprecation warning. Those attributes that already emitted deprecation warnings were removed.
1medium
Title: Support setting values for child suites with `VAR` syntax using `scope=SUITES` Body: The current VAR syntax does not support the children argument offered by the `Set Suite Variable` Keyword. https://robotframework.org/robotframework/latest/libraries/BuiltIn.html#Set%20Suite%20Variable ``` Set Suite Variable ${SUITE_VAR} I can be used by child suites. children=True ``` Syntax like this should be possible: ``` VAR ${SUITE_VAR} I should be usable by child suites. scope=SUITE children=True ```
1medium
Title: Keyword timeout is not effective in teardown if keyword uses `Wait Until Keyword Succeeds` Body: When using `Wait Until Keyword Succeeds` in a test teardown, it is possible to have a much longer execution time than allowed by the "user keyword timeout". In practice, in teardown, `Wait Until Keyword Succeeds` does skip the keyword re-execution when the timeout has been reached, but not the `retry_interval`. Outside teardown it works as expected: no re-execution occurs, and therefore no `retry_interval` is included. Example code:

```robotframework
*** Test Cases ***
Test 1: Works as expected when used in test body
    [Documentation]    Does not last much longer than keyword timeout.
    My Keyword

Test 2: Keyword timeout not respected in teardown
    [Documentation]    Lasts much longer than keyword timeout.
    No Operation
    [Teardown]    My Keyword

*** Keywords ***
My Keyword
    [Documentation]    This keyword should stop when reaching timeout
    [Timeout]    2s
    Wait Until Keyword Succeeds    4x    10s
    ...    Fail Slowly    60s

Fail Slowly
    [Documentation]    Sleeps and fails.
    [Arguments]    ${sleep_duration}
    Sleep    ${sleep_duration}
    Fail    failed on purpose
```

Example output: ![image](https://github.com/user-attachments/assets/de341070-a9e1-4088-a39c-6eeb92c6dfef) This feels like a bug when checked against the documentation.
- [Wait Until Keyword Succeeds states](https://robotframework.org/robotframework/latest/libraries/BuiltIn.html#Wait%20Until%20Keyword%20Succeeds): > All normal failures are caught by this keyword. Errors caused by invalid syntax, test or keyword timeouts, or fatal exceptions [...] are not caught.
- User guide on [User keyword timeout](https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#user-keyword-timeout): > User keyword timeouts are applicable also during a test case teardown, whereas test timeouts are not.

I believe the expected behavior would be that retry intervals are also skipped when the timeout has been reached.
1medium
Title: Docs: showcase converting raw JSON into a structured type Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> https://discord.com/channels/933860922039099444/933898994923470848/1132948924915777556 ## Suggested solution <!-- A clear and concise description of what you want to happen. --> ## Alternatives <!-- A clear and concise description of any alternative solutions or features you've considered. --> ## Additional context <!-- Add any other context or screenshots about the feature request here. -->
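A sketch of what the documented example could show, assuming a hypothetical model with a `Json` field named `meta` and pydantic v2:

```python
from pydantic import BaseModel

class Meta(BaseModel):
    theme: str
    notifications: bool

# Raw Json fields come back as plain Python data, so they can be validated
# into a structured type (inside an async function with a connected client):
profile = await db.profile.find_first()
meta = Meta.model_validate(profile.meta)
```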
1medium
Title: Libdoc's default language selection does not support all available languages Body: The CLI help and input validation only allow FI and EN as input for the `--language` switch, but more languages are currently available. Ideally this should be dynamic, but at least both the help text for Libdoc's language switch and the validator should reflect what's available. https://github.com/robotframework/robotframework/blob/986351665e304db25b1a77e60384624e21ea943d/src/robot/libdoc.py#L234
1medium
Title: Failure in suite setup initiates exit-on-failure even if all tests have skip-on-failure active Body: ### Summary I am running `robot` with `--skiponfailure skip-on-failure` and `--exitonfailure`. Test cases I expect to fail are tagged with `skip-on-failure`. The behavior I desire the following: * If a failure happens on any test case not tagged with `skip-on-failure`, I want all remaining tests to be marked as failed due to `--exitonfailure`. (This is working properly.) * If a failure happens on a test case tagged with `skip-on-failure`, I want that test case status changed from "FAIL" to "SKIPPED", and I want subsequent test cases to run. (This works properly most of the time, except for the scenario described in this issue.) I have a suite in which all tests are tagged with `skip-on-failure`. If its suite setup fails, the test cases in the file are correctly marked as skipped. However, subsequent suites do not run, and robot tells me this is because of exit-on-failure mode. This worked properly in Robot Framework 3.x, when Robot used criticality instead of skipping. Once criticality was removed and skipping was introduced in Robot Framework 4.x, this stopped working properly. ### Sample code that demonstrates the problem I have a directory "robot-tests" populated with two children: "test-1.robot" and "test-2.robot": test-1.robot: ``` *** Settings *** Test Tags skip-on-failure Suite Setup This is going to fail *** Test Cases *** Test 1 [Tags] skip-on-failure Log Running test 1 *** Keywords *** This is going to fail Fail This didn't go well ``` test-2.robot: ``` *** Test Cases *** Test 2 Log Running test 2 ``` When running RF 7.0.1, here is the output. Notice that suite test-2 does not run and is marked as a failure. ``` $ robot --version Robot Framework 7.0.1 (Python 3.10.12 on linux) $ robot --skiponfailure skip-on-failure --exitonfailure robot-tests ============================================================================== Robot-Tests ============================================================================== Robot-Tests.Test-1 ============================================================================== Test 1 | SKIP | Test failed but skip-on-failure mode was active and it was marked skipped. Original failure: Failure occurred and exit-on-failure mode is in use. ------------------------------------------------------------------------------ Robot-Tests.Test-1 | SKIP | Suite setup failed: This didn't go well 1 test, 0 passed, 0 failed, 1 skipped ============================================================================== Robot-Tests.Test-2 ============================================================================== Test 2 | FAIL | Failure occurred and exit-on-failure mode is in use. ------------------------------------------------------------------------------ Robot-Tests.Test-2 | FAIL | 1 test, 0 passed, 1 failed ============================================================================== Robot-Tests | FAIL | 2 tests, 0 passed, 1 failed, 1 skipped ============================================================================== ``` If I use RF 3.2.2 (with criticality), the behavior is what I want. Notice that test-2 runs and passes. (I removed the unsupported "Test Tags" line prior to running this.) 
``` $ robot --version Robot Framework 3.2.2 (Python 3.10.12 on linux) $ robot --noncritical skip-on-failure --exitonfailure robot-tests ============================================================================== Robot-Tests ============================================================================== Robot-Tests.Test-1 ============================================================================== Test 1 | FAIL | Parent suite setup failed: This didn't go well ------------------------------------------------------------------------------ Robot-Tests.Test-1 | PASS | Suite setup failed: This didn't go well 0 critical tests, 0 passed, 0 failed 1 test total, 0 passed, 1 failed ============================================================================== Robot-Tests.Test-2 ============================================================================== Test 2 | PASS | ------------------------------------------------------------------------------ Robot-Tests.Test-2 | PASS | 1 critical test, 1 passed, 0 failed 1 test total, 1 passed, 0 failed ============================================================================== Robot-Tests | PASS | 1 critical test, 1 passed, 0 failed 2 tests total, 1 passed, 1 failed ============================================================================== ``` If I move the failure to somewhere else inside of test-1, RF 7.x is able to correctly execute the remaining suites. It is only if the failure is in a suite setup or suite teardown that the incorrect behavior is observed. e.g., changing test-1.robot so that the failure is in the test case setup and not the test suite setup results in a correct running of subsequent tests after the skipped failure: ``` *** Test Cases *** Test 1 [Tags] skip-on-failure [Setup] This is going to fail Log Running test 1 *** Keywords *** This is going to fail Fail This didn't go well ``` ``` $ robot --version Robot Framework 7.0.1 (Python 3.10.12 on linux) $ robot --skiponfailure skip-on-failure --exitonfailure robot-tests ============================================================================== Robot-Tests ============================================================================== Robot-Tests.Test-1 ============================================================================== Test 1 | SKIP | Test failed but skip-on-failure mode was active and it was marked skipped. Original failure: Setup failed: This didn't go well ------------------------------------------------------------------------------ Robot-Tests.Test-1 | SKIP | 1 test, 0 passed, 0 failed, 1 skipped ============================================================================== Robot-Tests.Test-2 ============================================================================== Test 2 | PASS | ------------------------------------------------------------------------------ Robot-Tests.Test-2 | PASS | 1 test, 1 passed, 0 failed ============================================================================== Robot-Tests | PASS | 2 tests, 1 passed, 0 failed, 1 skipped ============================================================================== ```
1medium
Title: [DOC] Create release docs Body: This issue is to remind myself to create docs for doing releases. Quoting from @hectormz: ----- 🤙🤙 @ericmjl I think in the checklist after `bumpversion` we can add: ``` git push <remote> <branch> --follow-tags ``` which according to this [SO answer](https://stackoverflow.com/a/26438076/2337392) is the "sane" way to push tags as it ``` It pushes both commits and only tags that are both: - annotated - reachable (an ancestor) from the pushed commits ``` I don't completely understand the differences between annotated and unannotated tags, but I tested our `bumpversion` tags, and they qualify. _Originally posted by @hectormz in https://github.com/ericmjl/pyjanitor/issues/643#issuecomment-595498724_ -----
1medium
Title: [Actions][7.x] Move saved object fields to references for action_task_params Body: Relates to https://github.com/elastic/kibana/issues/100067 To prepare for the upcoming saved object ID migration, we need to move saved object fields to use references properly so the 8.0 migration will automatically update them. We need to add a migration that will move `action_task_params.relatedSavedObjects` and `action_task_params.actionId` to saved object references. For both, there isn't exactly a 1-1 match between the data structures. For `actionId`, I'd imagine the reference object to look like: `{ id: actionId, type: 'action', name: 'action' }` For `relatedSavedObjects`, it's a little more complicated. The `id` and `type` are straight-forward, but the saved object reference requires a `name` which might map to `relatedSavedObject.typeId` or something similar. The other piece of noteworthy data is `relatedSavedObject.namespace` which I think can safely be dropped, as the `action_task_params` saved object will always exist in the same space as the action that created it. In addition to the migration, we need to change the code that [writes](https://github.com/elastic/kibana/blob/master/x-pack/plugins/actions/server/create_execute_function.ts#L64) and [reads](https://github.com/elastic/kibana/blob/master/x-pack/plugins/actions/server/lib/task_runner_factory.ts#L181). See [the PoC PR](https://github.com/elastic/kibana/pull/107611) for more insight into how this might work.
1medium
Title: Rename code and files to use new rule terminology Body: This issue covers the last part of https://github.com/elastic/kibana/issues/90375 for the new rule terminology. We should go through our codebase and change it to reflect the new terminology of rule.
1medium
Title: Remove deprecated alert HTTP APIs Body: This issue covers the fifth part of https://github.com/elastic/kibana/issues/90375. We should remove any deprecated alert APIs when telemetry from https://github.com/elastic/kibana/issues/108716 shows usage is below 1% or deprecated for > 2.5 years (see: https://github.com/elastic/kibana/issues/90379). Don't forget docs.
1medium
Title: Remove deprecated action HTTP APIs Body: This issue covers the fifth part of https://github.com/elastic/kibana/issues/90375. We should remove any deprecated action APIs when telemetry from https://github.com/elastic/kibana/issues/108716 shows usage is below 1% or deprecated for > 2.5 years (see: https://github.com/elastic/kibana/issues/90378). Don't forget docs.
1medium
Title: Further tweak the alerting functional test to use limited privileges. Body: As a follow up to https://github.com/elastic/kibana/pull/88727, we could further tweak this test to use a limited set of privileges (roles) assigned to the `test_user` instead of giving it `all`. Here is the meta issue linked for this effort: https://github.com/elastic/kibana/issues/60815 It would also be prudent to convert other such tests which fall under the Alerting umbrella cc @LeeDr @elastic/kibana-alerting-services
1medium
Title: The Actions terminology in the code has diverged from the terminology we use verbally and in the UI Body: This issue covers the last part of https://github.com/elastic/kibana/issues/90375 for the new rule terminology. We should go through our codebase and change it to reflect the new terminology of connector. This is also a step towards addressing https://github.com/elastic/kibana/issues/69442 Align the terminology in the code with the terminology we use externally and in the UI by changing "Actions" to "Connectors". This isn't a wholesale find-and-replace, as we'll still have an Actions feature, client, etc. But we'll separate the Connector entity from the Action entity, as they are treated differently in the UI.
1medium
Title: Add UI test to ensure each alert and action type renders in the management app's create flyout Body: Sometimes a new alert or action type is added to the front-end registry and doesn't render in the management app's create flyout (causing errors to be thrown). Adding a UI test that ensures the create flyout will render for each type will help catch these issues in the original PR instead of at a future time.
1medium
Title: [Alerting] We do not have E2E Tests under Basic license Body: We have several features that are specific to limited licensing, such as a banner on the Connectors flyout encouraging users to start a trial or look at our subscriptions pages. However, we do not currently have a functional test suite config that runs tests under such a limited license.
1medium
Title: Create alert and actions security-only test suite Body: Clean up the test suites at the same time. Make it easier to create documents in different spaces when in `security_and_spaces` mode. Also handle the scenario from https://github.com/elastic/kibana/pull/52967#discussion_r357828034 where we have pre-created spaces we can create against.
1medium
Title: Remove the restriction that tier 0 alias has to be "MEM" Body: **Summary** There are some assumptions in the code that our tier 0 alias is "MEM"; we should not rely on that. **Urgency** not urgent
1medium
Title: Inline exception messages inside ExceptionMessage.java and remove this class Body: **Summary** In the Alluxio source code, `ExceptionMessage.java` exists only to hold a large number of exception messages. This was initially aimed at helping unit tests avoid hard-coded exception strings, by using these messages as templates. However, this approach later revealed bigger issues: - when the same exception message is reused in different places in the source code, it is very difficult to locate the actual line in the source when an exception message is seen without a stack trace. - as the source code evolves and some messages are no longer used, it is hard to tell from the `ExceptionMessage` enum which items are unused and to remove them, introducing dead source code. The proposed way is to simply inline all these messages into their callers. If the same message is used multiple times in different exceptions, we need to make the messages differ so we can tell the source of the exception. If one message is used multiple times in a unit test, we can define the template inside the unit test file to simplify the code. **Urgency** Low
1medium
Title: Alluxio documentation defects Body: **Summary** This pinned issue is a running list of all documentation defects. If you find any defects or have any suggestions for our documentation site (https://docs.alluxio.io), you can also post under this issue. Maintainers will periodically clean up issues listed here. ## TODO ### priority-high (Something missing, misleading or incorrect, plus critical content) * Talk about server-side encryption for S3 buckets, e.g. when to set `alluxio.underfs.s3.secure.http.enabled`, `alluxio.underfs.s3.server.side.encryption.enabled` and `alluxio.underfs.s3.endpoint` * Improve the [S3 API page](https://docs.alluxio.io/os/user/stable/en/api/S3-API.html#rest-api); there is no introduction to the setup of "EXAMPLE USAGE". E.g., for the command `curl -i -X PUT http://localhost:39999/api/v1/s3/testbucket`, what is `testbucket`? It is used without any introduction. * The explanation of some commands on the [fsadmin page](https://docs.alluxio.io/os/user/stable/en/operation/Admin-CLI.html#report) is too brief, e.g., `fsadmin report storage` has only one line of explanation * Not all the Alluxio shell fs commands are covered on the page https://docs.alluxio.io/os/user/stable/en/basic/Command-Line-Interface.html, including `distributedLoad`, etc. * Add Quick-Get-started: Running Presto with Alluxio on S3 * Add Quick-Get-started: Running Spark with Alluxio on S3 * Update [Use-Cases.html](https://docs.alluxio.io/os/user/stable/en/Use-Cases.html), which needs to be aligned with the updated messaging on the Alluxio website * Revisit deploy docs, especially Local, Cluster, Docker, K8S, to be more consistent and user-oriented * Add Glue related instructions to the EMR integration documentation * Pros/cons of EMR bootstrap vs YARN integration * Add impersonation configuration for the YARN and Spark pages (e.g., Spark on YARN). [link at Slack](https://alluxio-community.slack.com/archives/CEXGGUBDK/p1565067985154500) * Improvements to the "journal management" document: e.g., it is not clear that a secondary master cannot start with an embedded journal ([link at Slack](https://alluxio-community.slack.com/archives/CEXGGUBDK/p1565270507214800?thread_ts=1565216871.196800&cid=CEXGGUBDK)); e.g., manually trigger checkpoints with the checkpoint command (`bin/alluxio fsadmin checkpoint`) ### priority-medium (changes will dramatically improve the docs) * Improve the [FS API page](https://docs.alluxio.io/os/user/edge/en/api/FS-API.html), e.g. more explanation and examples on this page. * Improve or split [Namespace-Management.html](https://docs.alluxio.io/os/user/stable/en/advanced/Namespace-Management.html). This page is not well structured and currently too long to read. * Improve or split [Alluxio-Storage-Management.html](https://docs.alluxio.io/os/user/stable/en/advanced/Alluxio-Storage-Management.html). This page is not well structured and currently too long to read. * More detailed documentation about the `generate-tarball` script * More detailed instructions or examples on how to connect to Kerberized HDFS as a UFS * Add a link to Spark performance tuning tips on the [Spark page](https://docs.alluxio.io/os/user/stable/en/compute/Spark.html) ### priority-low (changes good to have) * Add a "How Alluxio Works" page, either by restructuring the Architecture and Data Flow page or by adding a new one ([link at Slack](https://alluxio-community.slack.com/archives/CEXGGUBDK/p1565396266284600?thread_ts=1565358306.250300&cid=CEXGGUBDK)) ## DONE
1medium
Title: Add verbose mode to fs mount Body: **Is your feature request related to a problem? Please describe.** When mounting a UFS, a list of UFS factories is checked to find one supporting this specific UFS. If none is found to work with this UFS, a warning is printed (`LOG.warn("No factory implementation supports the path {}", path);`) and eventually an error is given: `java.lang.IllegalArgumentException: No Under File System Factory found for: ...`. This error message is not informative enough for the user to pinpoint the misconfiguration. For example, my user story is that I tried to mount an HDFS UFS with a tarball built using `-ufs-module=hdp-2.6`. I didn't pass the option `--option alluxio.underfs.version=hdp-2.6` when mounting it. It turned out that HdfsUnderFileSystemFactory checks whether it supports the target UFS in the following way: ``` if (!conf.isSet(PropertyKey.UNDERFS_VERSION) || HdfsVersion.matches(conf.get(PropertyKey.UNDERFS_VERSION), getVersion())) { // conf.get(PropertyKey.UNDERFS_VERSION) defaults to 2.2 // getVersion() gives hdp-2.6 instead return true; } ``` Thus HdfsUnderFileSystemFactory thinks it does not support my UFS, and I was not able to mount it. The error `No Under File System Factory found for: ...` is not very helpful. **Describe the solution you'd like** It would be great if a `-v` or `--verbose` option were added to `alluxio fs mount`. In this mode, more information would be given to help the user identify how each UFS option is checked and why each UFS factory implementation does or does not support the target UFS. For example, in my particular use case the desired printout could be: ``` alluxio fs mount /hdfs hdfs://...:8020/data Checking HdfsUnderFileSystemFactory ... Desired HDFS version is 2.2 while the target version is hdp-2.6. Not supported. Checking XXXFactory ... No Under File System Factory found for: ... ``` **Alternatives** An alternative is adding DEBUG logs to the UFS checks so users can help themselves. But one valid counter-argument is that many users may have trouble getting to the debug log and identifying the correct place to check. **Urgency** Low. This only applies to those who build their own tarballs. For release tarballs we always distribute all possible versions.
1medium
Title: Refactor SetAttribute API Body: **Summary** https://github.com/Alluxio/alluxio/pull/9460 was a workaround. As a long term solution, we should separate the `setAttribute` API into 1. APIs that change both Alluxio and the UFS, e.g., `chmod`, `chgrp` and `chown`, and 2. APIs that change attributes applying to Alluxio only. The reason is that they have very different requirements for checking permissions (e.g., whether a mount point is read-only). **Urgency** Low
1medium
Title: Add documentation to explain more on "Safe Mode" Body: **Summary** When seeing messages like "alluxio.exception.status.UnavailableException: Alluxio master is in safe mode. Please try again later.", users don't know what it means or how to resolve it. **Urgency** Medium
1medium
Title: Remove unknown block size logic Body: **Is your feature request related to a problem? Please describe.** Unknown block sizes are no longer required. **Describe the solution you'd like** Remove unknown block size logic. **Describe alternatives you've considered** **Urgency** Adds unnecessary code to the codebase. **Additional context**
1medium
Title: Use JCommander for CLI opts parsing Body: **Is your feature request related to a problem? Please describe.** CLI opts parsing is not consistent across different Alluxio modules. Both `jcommander` and `commons-cli` are used. Proposing to use `jcommander` consistently across modules. **Describe the solution you'd like** Use `jcommander` consistently across modules. **Describe alternatives you've considered** N/A **Additional context** Discussion originally started here https://github.com/Alluxio/alluxio/pull/8494
1medium
Title: Make recursive options consistent in the filesystem shell Body: **Is your feature request related to a problem? Please describe.** We should try to make the usage and options in our filesystem shell mirror the most commonly used commands. For this issue I'm specifically requesting that options dealing with recursive operations are consistent with the most common Bash/POSIX implementations. Examples: - [cp](http://man7.org/linux/man-pages/man1/cp.1.html) can use `-r`, `-R`, or `--recursive` - [rm](http://man7.org/linux/man-pages/man1/rm.1.html) is the same as cp - [scp](https://linux.die.net/man/1/scp) only supports `-r` - [ls](http://man7.org/linux/man-pages/man1/ls.1.html) `-R` or `--recursive` (`-r` is taken by the `--reverse` sorting flag) - [chmod](https://linux.die.net/man/1/chmod) `-R` or `--recursive`. There is no conflict with another `-r` option; it just doesn't take the lowercase flag. **Describe the solution you'd like** Most of our commands only use the `-R` flag to mark an operation as recursive. It would be nice if we supported all the variants that the standard Bash commands support. **Describe alternatives you've considered** None as of now **Additional context** N/A
1medium
Title: Strange margin in the Mesh-ish material. Body: ![ASUS1](https://user-images.githubusercontent.com/42407840/136804377-434ac274-05cc-490c-9c58-08c35c514e01.jpg) ![output1](https://user-images.githubusercontent.com/42407840/136804398-e51d852f-feb6-409e-866e-9bf28f7f62fd.png) Using realesrgan-x4plus-anime. Slices of the input and output are shown above.
2hard
Title: API Discussion for Figures and Axes Body: This issue is to discuss the open letter regarding Yellowbrick's API roadmap. To summarize, we currently attempt to only manage matplotlib `Axes` objects so that visualizers can be embedded into more complex plots and reports. However, many of our visualizers are getting increasingly complex, requiring subplots of their own. For a complete discussion of the API issue please see: https://www.scikit-yb.org/en/develop/api/figures.html The questions at hand are: 1. Like Seaborn, should YB have two classes of visualizer, one that wraps an axes and one that wraps a figure? 2. Should we go all in on the AxesGrid toolkit and continue to restrict our use of the figure, and will this method be supported in the long run?
2hard
Title: [FEATURE] Negative tests for GraphQL Body: ### Checklist - [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation - [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues) - [x] I am using the latest version of Schemathesis ### Describe the bug I am trying to generate negative tests; however, it doesn't work. ### To Reproduce ```python import schemathesis from hypothesis import settings from schemathesis import DataGenerationMethod f = open("schema.graphql") schema = schemathesis.graphql.from_file(f, data_generation_methods=[DataGenerationMethod.negative]) @schema.parametrize() @settings(max_examples=10) def test_api(case): print(case.body) ``` Minimal API schema causing this issue: ``` schema { query: Query mutation: Mutation } type Query { hello: String user(id: Int!): String } type Mutation { createUser(name: String!, age: Int!): String } ``` ### Expected behavior I was expecting negative test cases, but the generated queries are clearly just ordinary (valid) ones: ``` { user(id: 0) } ``` ``` mutation { createUser(name: "", age: 0) { age } } ``` ### Environment ``` - OS: MacOS Ventura - Python version: 3.12 - Schemathesis version: 3.32.2 ```
2hard
Title: Use function summaries instead of inlining Body: We [currently use inlining instead of summaries](https://github.com/python-security/pyt/tree/master/pyt/cfg) for inter-procedural analysis, which makes PyT slower than it needs to be. Here are some videos; the last one in particular explains function summaries well: [#57 Call Graphs](https://www.youtube.com/watch?v=giGqdwuZBKQ) [#58 Interprocedural Data Flow Analysis](https://www.youtube.com/watch?v=TJA7dvAV0ZI) [#59 Procedure Summaries in Data Flow Analysis](https://www.youtube.com/watch?v=LrbPmaLEbwM)
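To make the idea concrete, here is a minimal, hypothetical sketch of what a taint summary could look like; the names and structure are illustrative and do not reflect PyT's actual internals:

```python
# Sketch only: a per-function record of how taint flows from parameters to
# the return value, computed once instead of re-inlining the CFG per call.
from dataclasses import dataclass, field

@dataclass
class FunctionSummary:
    name: str
    # indices of parameters whose taint propagates to the return value
    tainted_params_to_return: set = field(default_factory=set)
    # True if the function sanitises its inputs (acts as a taint barrier)
    sanitizes: bool = False

def apply_summary(summary, arg_taints):
    """Given taint flags per argument, compute taint of the return value."""
    if summary.sanitizes:
        return False
    return any(arg_taints[i] for i in summary.tainted_params_to_return
               if i < len(arg_taints))

# e.g. a summary for an escaping helper would set sanitizes=True, while a
# string-formatting helper would propagate taint from every argument.
```

With summaries like this, each function is analysed once and every call site just applies the recorded mapping, which is where the speedup over inlining comes from.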
2hard
Title: Caption-Based Image Retrieval Model Body: We want to implement the Caption-Based Image Retrieval task from https://api.semanticscholar.org/CorpusID:199453025. The [COCO](https://cocodataset.org/) and [Flickr30k](https://www.kaggle.com/hsankesara/flickr-image-dataset) datasets contain a large number of images with image captions. The task here is to train a model to pick the right image given the caption. The image must be picked from four images, one of which is the real one, and the other three are other random images from the dataset. You will have to write `Step`s that produce a `DatasetDict` for Flickr30k and COCO, including code that can produce the negative examples (a sketch is below). Each instance will consist of a caption with four images. You will also need to write a model that can solve this task. The underlying component for the model will be VilBERT, and the [VQA model](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/vision/models/vision_text_model.py) is probably a good place to steal some code from when getting started.
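A minimal sketch of the negative-example generation, assuming a simple caption-to-image mapping; the dictionary shape and helper name are illustrative, not existing AllenNLP code:

```python
# Sketch only: for each caption, keep the gold image and sample three
# distractors from the rest of the dataset, then record the gold index.
import random

def make_retrieval_instances(caption_to_image, num_distractors=3, seed=13):
    rng = random.Random(seed)
    all_images = list(caption_to_image.values())
    instances = []
    for caption, gold_image in caption_to_image.items():
        distractors = rng.sample(
            [im for im in all_images if im != gold_image], num_distractors)
        images = distractors + [gold_image]
        rng.shuffle(images)
        instances.append({
            "caption": caption,
            "images": images,                    # four candidate images
            "label": images.index(gold_image),   # index of the real image
        })
    return instances
```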
2hard
Title: PCA projections on biplot are not clear for datasets with large number of features Body: **Problem** The PCA projections visualized on a biplot are not clear, and various features' projections can be seen to overlap with each other when working with a dataset with a large number of features, e.g. with the credit dataset from yellowbrick.datasets: ![bug1](https://user-images.githubusercontent.com/20489158/54920524-719ed980-4f29-11e9-9c8e-a64e1f1dc454.png) This could be solved by considering only certain features in a particular dimension, to avoid overlapping vectors. Work is required on the criteria for selecting which features to display; one possible criterion is sketched below.
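One possible criterion, sketched with numpy: keep only the k features whose 2D loading vectors are longest, so that short, overlapping arrows are dropped from the biplot. This is an illustrative helper, not current Yellowbrick API:

```python
# Sketch only: select the k features with the longest projected loadings.
import numpy as np

def top_k_loadings(components, feature_names, k=10):
    """components: array of shape (2, n_features) from a fitted 2D PCA."""
    lengths = np.linalg.norm(components, axis=0)  # arrow length per feature
    idx = np.argsort(lengths)[::-1][:k]           # k longest projections
    return [(feature_names[i], components[:, i]) for i in idx]
```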
2hard
Title: Mapped columns cannot be used with type safe raw queries Body: <!-- Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output. See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output. --> ## Bug description <!-- A clear and concise description of what the bug is. --> If a column is mapped to a different name at the database level then pydantic will raise a ValidationError when attempting to construct the model object: ```prisma model User { id String @id @default(cuid()) name String @map("username") email String? @unique posts Post[] profile Profile? } ``` ```py query = ''' SELECT * FROM User WHERE User.id = ? ''' found = await client.query_first(query, user.id, model=User) ``` ``` /private/tmp/tox/prisma-client-py/py39/lib/python3.9/site-packages/prisma/client.py:308: in query_first results = await self.query_raw(query, *args, model=model) /private/tmp/tox/prisma-client-py/py39/lib/python3.9/site-packages/prisma/client.py:338: in query_raw return [model.parse_obj(r) for r in result] /private/tmp/tox/prisma-client-py/py39/lib/python3.9/site-packages/prisma/client.py:338: in <listcomp> return [model.parse_obj(r) for r in result] pydantic/main.py:511: in pydantic.main.BaseModel.parse_obj ??? _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > ??? E pydantic.error_wrappers.ValidationError: 1 validation error for User E name E field required (type=value_error.missing ``` ## How to reproduce <!-- Steps to reproduce the behavior: 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> - Add the `@map` declaration to the test user model - Migrate the database (`prisma db push`) - Run `tox -e py39 -- -x tests/test_raw_queries.py::test_query_first_model` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> No errors should be raised, the column name should be correctly transformed ## Environment & setup <!-- In which environment does the problem occur --> - OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]--> Mac OS - Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> SQLite - Python version: <!--[Run `python -V` to see your Python version]--> 3.9.9 - Prisma version: 3.8.1 ## Additional context This is not currently solvable by us as Prisma does not give us the field name at the database level. If they did we could simply create an alias when defining the `BaseModel`: ```py class User(BaseModel): ... name: str = Field(alias='username') ... ```
2hard
Title: Improve partial model generation API for dynamic creation Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> A user on the Prisma Python discord showcased their partial model generator, which is highly dynamic and makes use of internal features; we should improve support for this so that no internal features have to be used. Their use case: > build a scaffolding tool for generating fully typed RESTAPI endpoints that are glassboxed Link to discussion: https://discord.com/channels/933860922039099444/933875073448804383/937759740061179935 ## Suggested solution <!-- A clear and concise description of what you want to happen. --> Proposed API is still a work in progress.
2hard
Title: Really remove the 10000-token limit in [Word2Vec, FastText, Doc2Vec] Body: The *2Vec models have an underdocumented implementation limit in their Cython paths: any single text passed to training that's more than 10000 tokens is silently truncated to 10000 tokens, discarding the rest. This may surprise users with larger texts - as much of the text, including words discovered during the vocabulary-survey (which doesn't truncate texts), can thus be skipped. Fixing this would make a warning like the one I objected to in PR #2861 irrelevant. Fixing this would also fix #2583, the limit with respect to Doc2Vec inference. As mentioned in #2583, one possible fix would be to auto-break user texts into smaller chunks (sketched below). Possible fixes thus include: * auto-breaking user texts into <10k token internal texts * using malloc, rather than a stack-allocated array of constant length, inside the Cython routines (might add allocate/free overhead & achieve less cache-locality than the current approach) * use alloca - not an official part of the relevant C standard but likely available everywhere relevant (MacOS, Windows, Linux, BSDs, other Unixes) - instead of the constant stack-allocated array (some risk of overflowing if users provide gigantic texts) * doing some one-time allocation per thread in Python-land that's usually reused in Cython-land for small-sized texts, but when oversized texts are encountered replacing that with a larger allocation. Each of these may need to be done slightly differently in the `corpus_file` high-thread-parallelism codepaths.
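A sketch of the first option, auto-breaking oversized texts before they reach the Cython routines. `MAX_WORDS_IN_BATCH` is gensim's existing constant for the limit; the wrapper itself is illustrative, and note that chunking slightly changes window contexts at chunk boundaries:

```python
# Sketch only: split any text longer than the limit into consecutive
# <=10000-token pieces so nothing is silently discarded during training.
from gensim.models.word2vec import MAX_WORDS_IN_BATCH  # currently 10000

def chunk_text(tokens, max_len=MAX_WORDS_IN_BATCH):
    """Yield consecutive slices of `tokens`, each at most `max_len` long."""
    for start in range(0, len(tokens), max_len):
        yield tokens[start:start + max_len]

def chunked_corpus(corpus):
    """Wrap a token-list corpus so every emitted text fits the limit."""
    for text in corpus:
        yield from chunk_text(text)
```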
2hard
Title: Expand List input types Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> Our query input types use `List` (e.g. `{'string': {'not_in': ['a', 'b']}}`), this severely limits the types that users can pass to these methods. We should aim to be as broad as possible. ## Suggested solution <!-- A clear and concise description of what you want to happen. --> We could switch these types to support taking `Iterable` however this could cause some false positives as `str` is `Iterable` this means that the following would now be statically accepted for string fields: ```py { 'not_in': 'a', } ``` This would cause an error at runtime. I do not know how solvable this is by us until a PEP is drafted for a `Not` type.
2hard
Title: Feature Parity with the TypeScript Client Body: - [ ] #10 - [x] #19 - [ ] #25 - [ ] #26 - [ ] #27 - [x] #28 - [ ] #31 - [x] #39 - [x] #42 - [x] #52 - [x] #53 - [x] #54 - [ ] #64 - [ ] #76 - [ ] #103 - [x] #106 - [ ] #107 - [x] #134 - [ ] #314 - [x] #434 - [ ] #676 - [ ] #714 - [x] #719 - [ ] #816 - [x] #994 - [ ] #127
2hard
Title: Libdoc: Support documentation written with Markdown Body: It seems this was [discussed briefly back in 2016](https://github.com/robotframework/robotframework/issues/2476) but I wanted to see if there are any current thoughts on supporting the Markdown format for Libdoc. With Markdown being the preferred (currently only?) markup [for copilot knowledge bases](https://docs.github.com/en/enterprise-cloud@latest/copilot/customizing-copilot/managing-copilot-knowledge-bases), this would be a helpful enhancement for anyone trying to leverage specific library documentation for their copilot code generation or autocompletes. Due to the complexity of the HTML in Libdoc's output, it doesn't seem to convert well through any HTML-to-Markdown conversion tools, so perhaps a more simplified Libdoc HTML output would work as well?
2hard
Title: Add support for inline conditionals for query building Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> Currently it is annoying to write update mutation data with conditional inputs, for example: ```py data: UserUpdateInput = {} if name: data["name"] = name if colour != 'red': data["colour"] = colour await User.prisma().update( data=data, where={"id": user_id}, ) ``` ## Suggested solution <!-- A clear and concise description of what you want to happen. --> We should support something like this: ```py await User.prisma().update( data={ 'name': name if name else prisma.omit, 'colour': colour if colour != 'red' else prisma.omit, }, where={ 'id': user_id, }, ) ``` We may also want to export the `prisma.omit` special value to the client instance as well to avoid an extra import.
2hard
Title: Add `MATCH/CASE` syntax similar to Python's Body: Hello everyone, I would like to suggest adding a match statement to Robot Framework. The match statement is supported in Python: [Match Statement](https://docs.python.org/3/tutorial/controlflow.html#match-statements) This suggestion is to add another control structure besides the IF/ELSE block.
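For reference, a minimal example of the Python 3.10+ construct this proposal mirrors; what the equivalent Robot Framework syntax would look like is left to the design discussion:

```python
# Python's structural pattern matching, which the proposed MATCH/CASE
# keyword would mirror at the Robot Framework level.
def describe(status: int) -> str:
    match status:            # match on a subject expression
        case 200:
            return "OK"
        case 301 | 302:      # multiple patterns in one case
            return "Redirect"
        case _:              # default branch, like ELSE
            return "Something else"
```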
2hard
Title: Improve type checking experience using a type checker without a plugin Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> For example, using `pyright` to type check would result in false positive errors that are annoying to fix when including 1-to-many relational fields. ```py user = await client.user.find_unique(where={'id': '1'}, include={'posts': True}) assert user is not None for post in user.posts: ... ``` ``` error: Object of type "None" cannot be used as iterable value (reportOptionalIterable) ``` This is a false positive: as we are explicitly including `posts`, it will never be `None`. > NOTE: false positive due to our types, not a bug in pyright ## Suggested solution <!-- A clear and concise description of what you want to happen. --> 1-to-many relational fields should not be typed as optional; instead they should default to an empty list. However, we should still error if the field is accessed without explicit inclusion for supported type checkers. ```py class User(BaseModel): ... posts: List[Post] = Field(default_factory=list) ``` It should be noted that we cannot do this for 1-to-1 relational fields. ## Alternatives <!-- A clear and concise description of any alternative solutions or features you've considered. --> The issue can be circumvented, albeit with ugly/redundant workarounds ```py user = await client.user.find_unique(where={'id': '1'}, include={'posts': True}) assert user is not None for post in cast(List[Post], user.posts): ... ``` ```py user = await client.user.find_unique(where={'id': '1'}, include={'posts': True}) assert user is not None assert user.posts is not None for post in user.posts: ... ```
2hard
Title: Support raw query methods with MongoDB Body: # Problem The internal `executeRaw` and `queryRaw` methods are not available when using MongoDB, we should not include them in the generated client.
2hard
Title: [Doc][Dashboard] Add Documentation about TPU Logs Body: ### Description Tracking issue to add documentation about libtpu logs written to `/tmp/tpu_logs` and how they're exposed on the Ray dashboard. ### Link Related PR: https://github.com/ray-project/ray/pull/47737 The documentation should be added to the general TPU docs (https://docs.ray.io/en/latest/cluster/kubernetes/user-guides/tpu.html) and referenced from the docs on logging (https://docs.ray.io/en/latest/ray-observability/getting-started.html#logs-view)
2hard
Title: Referring Expressions with COCO, COCO+, and COCOg Body: In the referring expressions task, the model is given an image and an expression, and has to find a bounding box in the image for the thing that the expression refers to. Here is an example of some images with expressions: ![Example images with referring expressions](http://bvisionweb1.cs.unc.edu/licheng/referit/refer_example.jpg) To do this, we need the following components: 1. A `DatasetReader` that reads the referring expression data, matches it up with the images, and pre-processes it to produce candidate bounding boxes. The best way to get the referring expressions annotations is from https://github.com/lichengunc/refer, though the code there is out of date, so we'll have to write our own code to read in that data. Other than that, the dataset reader should follow the example of [`VQAv2Reader`](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/vision/dataset_readers/vqav2.py#L239). The resulting `Instance`s should consist of the embedded regions of interest from the `RegionDetector`, the text of one referring expression, in a `TextField`, and a label field that gives the [IoU](https://stackoverflow.com/questions/25349178/calculating-percentage-of-bounding-box-overlap-for-image-detector-evaluation) between the gold annotated region and each predicted region (sketched below). 2. A `Model` that uses VilBERT as a back-end to combine the vision and text data, and gives each region a score. The model computes a loss by taking the softmax of the region scores, and computing the dot product of that with the label field. You might want to look at [VqaVilbert](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/vision/models/vilbert_vqa.py#L23) to steal some ideas. 3. A model config that trains this whole thing end-to-end. We're hoping to get somewhere near the scores in the [VilBERT 12-in-1 paper](https://www.semanticscholar.org/paper/12-in-1%3A-Multi-Task-Vision-and-Language-Learning-Lu-Goswami/b5f3fe42548216cd93816b1bf5c437cf47bc5fbf), though we won't beat the high score since this issue does not cover the extensive multi-task-training work that's covered in the paper. As always, we recommend you use the [AllenNLP Repository Template](https://github.com/allenai/allennlp-template-config-files) as a starting point.
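For the label field, the IoU between the gold box and every candidate box can be computed with a few lines of numpy; a sketch assuming `(x1, y1, x2, y2)` box coordinates:

```python
# Sketch only: vectorised IoU of one gold box against n candidate boxes.
import numpy as np

def iou(gold, boxes):
    """gold: shape (4,); boxes: shape (n, 4); returns (n,) IoU scores."""
    x1 = np.maximum(gold[0], boxes[:, 0])
    y1 = np.maximum(gold[1], boxes[:, 1])
    x2 = np.minimum(gold[2], boxes[:, 2])
    y2 = np.minimum(gold[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_gold = (gold[2] - gold[0]) * (gold[3] - gold[1])
    area_boxes = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_gold + area_boxes - inter)
```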
2hard
Title: Validate query arguments using pydantic Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> Currently if any invalid arguments are passed then a very verbose and potentially confusing error message is raised, for example: ```prisma model Post { id String @id @default(cuid()) title String published Boolean } ``` ```py await client.post.create({}) ``` ``` prisma.errors.MissingRequiredValueError: Failed to validate the query: `Unable to match input value to any allowed input type for the field. Parse errors: [Query parsing/validation error at `Mutation.createOnePost.data.PostCreateInput.published`: A value is required but not set., Query parsing/validation error at `Mutation.createOnePost.data.PostUncheckedCreateInput.published`: A value is required but not set.]` at `Mutation.createOnePost.data` ``` ## Suggested solution <!-- A clear and concise description of what you want to happen. --> Use pydantic's `@validate_arguments` [decorator](https://pydantic-docs.helpmanual.io/usage/validation_decorator/). The above example would then error with something like: ``` pydantic.error_wrappers.ValidationError: 2 validation errors for PostCreateInput title field required (type=value_error.missing) published field required (type=value_error.missing) ``` This feature should however have a schema option and a programmatic method for disabling validation as validation incurs a runtime performance cost and provides no benefits when static type checkers are used. ## Additional context <!-- Add any other context or screenshots about the feature request here. --> I suspect that the `@validate_arguments` decorator doesn't handle forward references properly so we'll probably have to do some horrible monkey patching to get this to work.
2hard
Title: VersionStore: Incorrect number of segments without daterange Body: #### Arctic Store ``` VersionStore ``` #### Description of problem and/or code sample that reproduces the issue Reading a symbol from VersionStore causes an error like: OperationFailure: Incorrect number of segments returned for XXX. Expected: 983, but got 962. XXX But if I try to read the same symbol with a date range that covers the entire dataset, I get the data back, which suggests this might be a bug rather than the data corruption I had assumed until now. ``` # # This succeeds (actual range of data is 20170101-20190423) # m['lib'].read('sym', date_range=dr).data # This raises - "Incorrect number of segments..." m['lib'].read('sym').data ```
2hard
Title: Train a region detector on the features from Visual Genome Body: This is a project in computer vision, rather than natural language processing. It is here because we have found this `RegionEmbedder` to be important for down-stream tasks that combine vision and language features. In AllenNLP, `RegionDetector`s take an image and predict "regions of interest". Each region is represented by some coordinates and a vector expressing the contents of the region. [Visual Genome](http://visualgenome.org) is a dataset containing millions of such regions. This task is about training a new region detector on the Visual Genome dataset. Most of the meat of the model will not be implemented from scratch. Rather, we will use the components that `torchvision` gives us (see the sketch below). Most of the work will be in writing a dataset reader that can read the Visual Genome features, and writing a model that is basically an adapter between the AllenNLP formats and the `torchvision` formats. This project has many moving parts, and will likely be a bit on the difficult side.
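A minimal sketch of the torchvision side of that adapter: load a pre-trained Faster R-CNN and resize its box-predictor head for Visual Genome's label set. The calls follow torchvision's standard fine-tuning recipe; the class count is a commonly used value for Visual Genome and should be verified against the label vocabulary actually used:

```python
# Sketch only: a Faster R-CNN backbone re-headed for Visual Genome labels.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_VG_CLASSES = 1601  # assumed: 1600 VG object labels + background; verify

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_VG_CLASSES)

# In training mode the model takes (images, targets) and returns a loss dict;
# an AllenNLP Model wrapper would mainly translate Instances into that format.
```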
2hard
Title: Use strongly typed IDs Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> Currently the following code (given that user and post have the same type for their ID field) will type check but is an obvious logical error: ```py post = await client.post.find_first() assert post is not None user = await client.user.find_unique( where={ 'id': post.id, }, ) ``` ## Suggested solution <!-- A clear and concise description of what you want to happen. --> ID types should make use of `NewType` to show that they are distinct from the base type. This would, however, make working with non-autogenerated IDs more annoying as you would have to explicitly wrap the ID value with the type, for example: ```py from prisma.types import UserID user = await client.user.find_unique( where={ 'id': UserID('foo') }, ) ```
2hard
Title: Responsive design rendering issue Body: Renders like sh*t. logged in as admin http://demo.quokkaproject.org/admin/contentview/ ![2013-11-09 23 19 07](https://f.cloud.github.com/assets/1472728/1507300/dd469694-4973-11e3-8714-e3d6a499d8c1.png)
2hard
Title: Optimize iterative tiling via pruning intermediate chunks Body: <!-- Thank you for your contribution! Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue. --> ## Background Iterative tiling is used during `tile` of an operand when the input tileable has an unknown shape, or the operand wants to trigger execution of part of the input chunks to perform an algorithm based on the input data. For instance, `df.groupby().agg()` will decide to use `tree` or `shuffle` according to the aggregated size of the first 4 chunks. However, iterative tiling may trigger the execution of all input tileables, and the memory usage may increase significantly. Take `df.groupby().agg()` as an example again. ```python import mars.dataframe as md import mars.tensor as mt df = md.DataFrame(mt.random.rand(1000, 10, chunk_size=100)) df.groupby(0).agg('sum').execute(extra_config={'dump_subtask_graph': True}) ``` The graph shows: ![image](https://user-images.githubusercontent.com/357506/160064742-3d3286b8-9f4d-4c93-a52e-dbebc82c6274.png) Other than the 4 aggregated chunks that are yielded inside `tile`, all the other 6 `DataFrameFromTensor` chunks are submitted as well; this may cause huge memory usage. ## Solution We suggest the following solutions: 1. If a `tile` triggered iterative tiling, the yielded chunks should be included in the result chunks; if no chunks are yielded, the input chunks are treated as the result chunks. 2. If a tileable is in the result tileables, its chunks are included in the result chunks. 3. Otherwise, chunks that are not used by result chunks will be pruned (sketched below).
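The pruning rule in step 3, sketched as a reverse reachability walk over the chunk graph; `predecessors` and `remove_node` are assumed graph operations here, not necessarily Mars' actual API:

```python
# Sketch only: keep every chunk reachable backwards from a result chunk,
# drop everything else from the graph before submission.
def prune(chunk_graph, result_chunks):
    keep = set()
    stack = list(result_chunks)
    while stack:
        chunk = stack.pop()
        if chunk in keep:
            continue
        keep.add(chunk)
        stack.extend(chunk_graph.predecessors(chunk))  # walk towards inputs
    for chunk in list(chunk_graph):
        if chunk not in keep:
            chunk_graph.remove_node(chunk)  # not needed by any result chunk
```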
2hard
Title: Scope Management Body:
2hard
Title: ProjectionVisualizer: unifying functionality of PCA and Manifold Body: One of the basic high-dimensional visualization techniques that Yellowbrick makes use of is to decompose or project a high dimensional space into 2 or 3 dimensions to display the data as a scatter plot. Projections of this kind reduce the amount of space between points (decreasing sparsity) but can still give us some intuition about structures in the higher-dimensional space. Currently, we have three primary decomposition methods that use this technique: - Manifold: wraps a non-linear transformer from `sklearn.manifold` to produce embeddings - PCA: uses linear principal component analysis to decompose to lower dimensionality - Text: TSNE and UMAP visualizers do the same as Manifold but with text-specific helpers These visualizers have a lot of shared functionality that can be combined to streamline these kinds of visualizations and make it easier to extend them (e.g. to add ICA, Fast PCA, etc. to the PCA decompositions, or to extend the text visualizers to use the manifold visualizations). I propose we create a `ProjectionVisualizer` base class or mixin that knows how to: - Wrap a transformer to project `X` into `X'` of shape `(n_instances, 2)` or `(n_instances, 3)` - Create a scatter plot for 2D or 3D plots (implemented in PCA) - Identify the type of the target and add colors (implemented in Manifold) - Subselect the features to use in `X` for the projection This shared functionality could then be easily used by PCA, Manifold, etc. Some notes about the class hierarchy: - The `MultiFeatureVisualizer` produces a `self.features_` attribute on `fit()` which is useful in PCA for biplots and to understand the original feature set. - The `DataVisualizer` produces `self.classes_` from y and is supposed to "provide helper functionality related to target identification" but does not currently implement this yet (it is implemented on `Manifold`) - `yellowbrick.contrib.ScatterVisualizer` _might_ be worth moving to `yellowbrick.draw.scatter` and using as a mixin to handle part of these cases; though I don't necessarily want to confuse things too much. - The `JointPlot` visualizer would also benefit from the target color handling described above. This implies that the `ProjectionVisualizer` is a `DataVisualizer` and that the `DataVisualizer` needs to be updated to handle the target identification stuff that is in `Manifold`. It also implies that `JointPlot` should be a `DataVisualizer` as well. More investigation on this topic is necessary, but I wanted to propose this solution to allow for further discussion by @DistrictDataLabs/team-oz-maintainers and @naresh-bachwani who is working on PCA this summer.
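A rough shape for the proposed base class, assuming `DataVisualizer` grows the target-identification helpers described above; this is illustrative, not a final API:

```python
# Sketch only: the shared wrap-transform-and-scatter flow that PCA, Manifold
# and the text visualizers could all inherit.
from yellowbrick.features.base import DataVisualizer  # existing base class

class ProjectionVisualizer(DataVisualizer):
    """Wraps a transformer that projects X to 2 or 3 dimensions and draws
    the result as a scatter plot colored by the (optional) target."""

    def __init__(self, transformer, projection=2, **kwargs):
        super(ProjectionVisualizer, self).__init__(**kwargs)
        if projection not in (2, 3):
            raise ValueError("projection must be 2 or 3")
        self.transformer = transformer
        self.projection = projection

    def fit_transform(self, X, y=None):
        Xp = self.transformer.fit_transform(X, y)  # (n_instances, 2 or 3)
        self.draw(Xp, y)  # shared scatter drawing + target color handling
        return Xp
```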
2hard
Title: Consider supporting multiple data container frameworks Body: ## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> - Some users may not be fans of pydantic and would prefer to use a different data container framework. - Some frameworks may not be compatible with pydantic but could be with other data container frameworks. ## Suggested solution <!-- A clear and concise description of what you want to happen. --> Add a config option to specify what container framework should be used, e.g. ```prisma generator client { provider = "prisma-client-py" container = "attrs" } ``` We should consider supporting: - [ ] [attrs](https://www.attrs.org/en/stable/) - [ ] [stdlib dataclass](https://docs.python.org/3/library/dataclasses.html) - [ ] raw dictionaries - [ ] [Box](https://github.com/cdgriffith/Box) ## Additional context We should also expose a method for customising what the query methods return, however that should be a separate issue.
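For illustration, a sketch of what a generated model might look like under an `attrs` container; this is a proposed output shape, not something the generator produces today:

```python
# Sketch only: the same User model pydantic currently generates, expressed
# with attrs instead.
from typing import Optional
import attr

@attr.s(auto_attribs=True, frozen=True)
class User:
    id: str
    name: str
    email: Optional[str] = None
```

The raw-dictionary option would skip class generation entirely, trading attribute access and validation for zero construction overhead.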
2hard
Title: Supporting static graphics rendering Body: Scatterplots with a large number of data points take a disproportionately long time to render on the frontend, due to the large number of points we have to draw. We should explore other rendering options that are faster, even if that means the visualizations are non-interactive and static (e.g., just an image). For portability, we should try [other rendering options in Altair](https://altair-viz.github.io/user_guide/custom_renderers.html) first. Our current rendering mechanism is Canvas, and it's not entirely clear whether SVG would be faster or slower (see [[1]](https://www.educba.com/svg-vs-canvas/), [[2]](https://stackoverflow.com/questions/5882716/html5-canvas-vs-svg-vs-div)). If not, we could migrate to matplotlib as an alternative rendering backend and create static images that are then sent to the frontend.
2hard
Title: [FEATURE] Display progress on all API operations simultaneously when multiple workers are used Body: ### Checklist - [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation - [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues) - [x] I am using the latest version of Schemathesis ### Describe the bug Increasing the worker count, with either `-w auto` or `-w 2`, does not display the API operations in the test results ### To Reproduce 1. Run this command `st run tree-openapi30.yaml --dry-run -w auto` 2. See no API operations listed in the output, just the pytest status Please include a minimal API schema causing this issue: [tree-openapi30.txt](https://github.com/user-attachments/files/16366400/tree-openapi30.txt) ### Expected behavior Adding workers should not change the results view of the API ``` GET /trees E [ 20%] POST /trees E [ 40%] GET /trees/{treeId} E [ 60%] PUT /trees/{treeId} E [ 80%] DELETE /trees/{treeId} E [ 100%] ``` Maybe this is as designed; if so, please convert this to a feature request ### Environment platform Linux -- Python 3.9.18, schemathesis-3.33.1, hypothesis-6.108.2, hypothesis_jsonschema-0.23.1, jsonschema-4.23.0
2hard
Title: Installer for Quokka Package Index qpi Body: There is a new repo https://github.com/quokkaproject/qpi and we need to develop a command line app ``` pip install quokka-installer ``` Now users can use this to install themes and modules ``` quokka-installer install theme material ``` The above command reads qpi/themes, looks for the "material" metadata, downloads the theme files, and unpacks them into the /themes folder. If --activate is passed, it sets the new theme as the active one. The same applies to modules.
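A rough sketch of what the installer command could do; the repository layout, URLs, and metadata fields below are hypothetical placeholders, not the actual qpi structure:

```python
# Sketch only: fetch package metadata from the index repo, download the
# archive it points at, and unpack it into the local themes/modules folder.
import io
import zipfile
import requests

QPI_RAW = "https://raw.githubusercontent.com/quokkaproject/qpi/master"  # assumed layout

def install(kind, name, dest="themes"):
    meta = requests.get(f"{QPI_RAW}/{kind}/{name}.json").json()  # read metadata
    archive = requests.get(meta["download_url"])                 # fetch package
    zipfile.ZipFile(io.BytesIO(archive.content)).extractall(dest)
    print(f"Installed '{name}' into {dest}/")
```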
2hard
Title: Provide cython compiled wheels Body:
2hard
Title: Revisit save/load architecture Body: _Originally posted by @gojomo in https://github.com/RaRe-Technologies/gensim/issues/3065#issuecomment-793064538_ > My understanding is that pickle v5 has the hooks needed for the sort of custom-separate-serialization of things like numpy arrays, but it's not yet automatic/standard. So it's not a simple matter of "use v5 and we're done". But, if we thought extending it soon in that manner, perhaps after seeing what other recent v5 users have done with numpy arrays, was likely, then doing a one-hop to v5 now, rather than hop-to-v4 now then hop-to-v5 soon after, might be a way to minimize interim states and catch any potential gotchas sooner rather than later. (AFAICT, 'pickle5' is an official backport by the same Python core contributor who wrote the v5 PEP and Python3.8+ implementation, so it should have fewer risks than relying on other arbitrary external libs.) > Even without 'upgrade-scripts', old-object-cleanup hooks that are analogous to `SaveLoad` 'specials'/etc seem possible via pickle - and if we relied on those instead, we might stay closer to other community Python practices. If we think that's the eventual future direction, we needn't break anything that works and is familiar right away - but we would want to discourage further growth/complexity in `SaveLoad` - so something like the recent lifecycle logging could move to a different superclass/mix-in.
2hard