Lothrazar/Cyclic | 690216142 | Title: Disenchanting Table crash (Forge 33.0.22+ not compatible)
Question:
username_0: Minecraft Version: 1.16.2
Forge Version: 33.0.22+
Mod Version: 0.6.2
Single Player or Server: both
Describe problem (what you were doing; what happened; what should have happened):
Interacting with the disenchanting table crashes the game. Let me make this easier for you, as I already know the issue:
Your code is trying to gather data from `LazyOptional`. This function was changed in Forge 33.0.22 and later and no longer returns the same value. Visit Forge's changelog for details. This should be a very quick and easy fix.
Log file link:
(not required, the function is easy to find in your Disenchantment table object code)
Video/images/gifs (direct upload or link):
If your bug is related to anything causing LAG you need to measure the lag by taking a sample https://minecraft.curseforge.com/projects/sampler
Status: Issue closed
Answers:
username_1: Yep, version 0.7.0 fixes this. Should have at least forge 33.0.59 https://www.curseforge.com/minecraft/mc-mods/cyclic/files/3054309
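For anyone auditing similar code, here is a hedged Java sketch of the defensive capability-access pattern that avoids unwrapping a `LazyOptional` directly (the class, method, and field names here are illustrative, not Cyclic's actual code):
```java
import net.minecraft.tileentity.TileEntity;
import net.minecraftforge.common.util.LazyOptional;
import net.minecraftforge.items.CapabilityItemHandler;
import net.minecraftforge.items.IItemHandler;

final class CapabilityAccessSketch {
    // 'tile' stands in for whatever provider the disenchanting table queries.
    static void useHandler(TileEntity tile) {
        LazyOptional<IItemHandler> cap =
                tile.getCapability(CapabilityItemHandler.ITEM_HANDLER_CAPABILITY);
        // Only runs when the capability is actually present.
        cap.ifPresent(handler -> handler.getStackInSlot(0));
        // When a value is required, resolve() yields a java.util.Optional to check explicitly.
        IItemHandler handler = cap.resolve().orElseThrow(IllegalStateException::new);
    }
}
```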
spacebeam/bw | 486735288 | Title: How to configure the tm.dll
Question:
username_0: still a mystery
Answers:
username_0: "tournamentModuleSettings":
{
"localSpeed" : 0,
"frameSkip" : 256,
"gameFrameLimit": 85714,
"timeoutLimits" :
[
{"timeInMS" : 55, "frameCount": 320},
{"timeInMS" : 1000, "frameCount": 10},
{"timeInMS" : 10000, "frameCount": 1}
],
"drawBotNames" : true,
"drawTournamentInfo": true,
"drawUnitInfo" : true
}
Status: Issue closed
username_0: we are using basil's tm; the mystery is solved: it is configured via environment variables
extrawurst/gitui | 594916543 | Title: make staging/unstaging async
Question:
username_0: currently this is one of the synchronous operations and can block ~1s on large repos (tested on Linux)
Answers:
username_0: staging is much faster via (https://github.com/username_0/gitui/commit/75b729cca93fe828675a4714e65749989774d29b)
now refreshing the index is the problem for showing the file switching to the staged list, but this is not blocking the UI anymore
Status: Issue closed
Autodesk-Forge/forge-ruby-sample-app | 199639654 | Title: read credentials from environment variables
Question:
username_0: could you consider storing and reading the forge credentials in environment variables? they can be retrieved from the OS for local testing, and from the heroku or other host when going live. cf. https://github.com/Autodesk-Forge/forge-boilers.nodejs#project-5---viewerserverossderivatives
Answers:
username_0: https://github.com/Autodesk-Forge/forge-boilers.nodejs#project-5---viewerserverossderivatives |
jamesdanged/Jude | 347294056 | Title: Uncaught Error: watch ~/Application Data EPERM
Question:
username_0: [Enter steps to reproduce:]
1. ...
2. ...
**Atom**: 1.30.0-beta1 x64
**Electron**: 2.0.5
**OS**: Unknown Windows version
**Thrown From**: [Jude](https://github.com/jamesdanged/Jude) package 0.2.9
### Stack Trace
Uncaught Error: watch C:\Users\taeho\Application Data EPERM
```
At events.js:183
Error: watch C:\Users\taeho\Application Data EPERM
at _errnoException (util.js:1024:11)
at FSWatcher.start (fs.js:1386:19)
at Object.fs.watch (fs.js:1412:11)
at createFsWatchInstance (/packages/Jude/node_modules/chokidar/lib/nodefs-handler.js:37:15)
at setFsWatchListener (/packages/Jude/node_modules/chokidar/lib/nodefs-handler.js:80:15)
at FSWatcher.NodeFsHandler._watchWithNodeFs (/packages/Jude/node_modules/chokidar/lib/nodefs-handler.js:228:14)
at FSWatcher.NodeFsHandler._handleDir (/packages/Jude/node_modules/chokidar/lib/nodefs-handler.js:407:19)
at /packages/Jude/node_modules/chokidar/lib/nodefs-handler.js:455:19)
at /packages/Jude/node_modules/chokidar/lib/nodefs-handler.js:460:16)
at FSReqWrap.oncomplete (fs.js:153:5)
```
### Commands
```
-9:00.3.0 debugger:show-attach-dialog (input.hidden-input)
-8:57.9.0 core:cancel (atom-workspace.workspace.scrollbars-visible-always.theme-one-dark-syntax.theme-one-dark-ui.teletype-Authenticated)
-8:07.1.0 core:confirm (textarea.xterm-helper-textarea)
-7:41.3.0 script:run (input.hidden-input)
3x -7:29.5.0 diagnostics:toggle-table (atom-workspace.workspace.scrollbars-visible-always.theme-one-dark-syntax.theme-one-dark-ui.teletype-Authenticated)
-4:33.5.0 ink-terminal:paste (textarea.xterm-helper-textarea)
-4:10.7.0 tree-view:add-file (span.name.icon.icon-file-directory)
-4:05.9.0 core:confirm (input.hidden-input)
-4:00.3.0 tree-view:move (span.name.icon.default-icon)
-3:57.3.0 core:confirm (input.hidden-input)
3x -3:50.1.0 core:backspace (input.hidden-input)
-3:46.6.0 editor:move-to-end-of-screen-line (input.hidden-input)
-3:46.1.0 editor:newline (input.hidden-input)
2x -3:44.4.0 core:backspace (input.hidden-input)
3x -3:35.3.0 core:confirm (input.hidden-input)
3x -0:47.7.0 core:backspace (input.hidden-input)
```
### Non-Core Packages
```
atom-clock 0.1.16
atom-ide-ui 0.13.0
atom-material-syntax 1.0.8
autocomplete-python 1.10.5
busy-signal 1.4.3
[Truncated]
ide-python 1.0.0
indent-detective 0.2.2
ink 0.9.2
intentions 1.1.5
Jude 0.2.9
julia-client 0.7.1
language-julia 0.16.0
latex-completions 0.3.5
linter 2.2.0
linter-flake8 2.3.0
linter-ui-default 1.7.1
minimap 4.29.8
platformio-ide-terminal 2.8.3
python-autopep8 0.1.3
Repl 0.5.0
script 3.18.1
teletype 0.13.3
tool-bar 1.1.7
uber-juno 0.2.0
``` |
MicrosoftDocs/powerbi-docs | 708489780 | Title: Update required for the authentication example. ADAL authentication to MSAL
Question:
username_0: Update required for the authentication example. ADAL authentication to MSAL
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 27fb3418-db2e-54a1-9faa-7ba09c5259dc
* Version Independent ID: 87329ec8-d7d4-f66a-0562-13a9b6032543
* Content: [Get an authentication access token - Power BI](https://docs.microsoft.com/en-us/power-bi/developer/automation/walkthrough-push-data-get-token)
* Content Source: [powerbi-docs/developer/automation/walkthrough-push-data-get-token.md](https://github.com/MicrosoftDocs/powerbi-docs/blob/live/powerbi-docs/developer/automation/walkthrough-push-data-get-token.md)
* Service: **powerbi**
* Sub-service: **powerbi-developer**
* GitHub Login: @username_2
* Microsoft Alias: **kesharab**
Answers:
username_1: Hi, @username_0 -- thanks for your comment! Assigning to the author to take care of.
username_2: Hi @username_0,
This issue is in my backlog and I'll refresh the article as soon as I can.
Thanks,
Kesem
Status: Issue closed
GothenburgBitFactory/taskwarrior | 296886533 | Title: [TW-1003] import/export round trip capability
Question:
username_0: _<NAME> on 2009-08-23T12:02:14Z says:_
With no data loss or modification.
Status: Issue closed
Answers:
username_0: Migrated metadata:
```
Created: 2009-08-23T12:02:14Z
Modified: 2014-02-09T02:13:26Z
```
username_0: _<NAME> on 2010-10-23T10:06:20Z says:_
Achieved via export.yaml. |
kubernetes/ingress-nginx | 726366408 | Title: Vendor specific documentation
Question:
username_0: Hi all.
I was just going through the installation of the controller on an AKS cluster and was having issues seeing the real source IP in the logs and applying ModSecurity rules for specific IPs, because the controller would only see the internal IP of the Service/LoadBalancer.
After some research I found that there is an Azure annotation that must be used. If you use Helm to install the Ingress Controller and want to see the real source IP, you need to set `externalTrafficPolicy` to `Local` and also set this annotation:
`--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"="\"true\""`
Source: https://docs.microsoft.com/en-us/azure/aks/ingress-internal-ip#create-an-ingress-controller
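Equivalently, the same settings expressed as a Helm values file rather than `--set` flags (a sketch following the chart's standard `controller.service` layout):
```yaml
controller:
  service:
    externalTrafficPolicy: Local
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
```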
You should have a list of annotations that are vendor specific, or at least a section explaining that these exist and where to find them. At least I couldn't find any of this information on your web pages.
/kind documentation
Answers:
username_1: /assign |
VijayalakshmiChidambaram/InstagramClone | 842456327 | Title: Project Feedback!
Question:
username_0: It looks like the following features are not reflected on your GIF walkthrough:
- Show the username and creation time for each post.
In order for us to count these towards your submission, please record another GIF that captures these features. Once you do, please push your updates and **submit your assignment again through the course portal within 48 hours from the posted deadline so that we can regrade it**. |
flutter-webrtc/flutter-webrtc | 973204964 | Title: Custom resolution of outcome video do not work!
Question:
username_0: Hi,
I want to share the camera with different resolutions by changing **width**, **height** and **frameRate** according to the code below:
```dart
final Map<String, dynamic> mediaConstraints = {
  'audio': true,
  'video': {
    'width': <width>,
    'height': <height>,
    'frameRate': <frameRate (10-60)>,
    'facingMode': 'user',
    'optional': [],
  }
};
```
but it does not work!
How can I share the camera with different resolutions?
Answers:
username_1: same here; it seems that on Android the media constraint resolutions are ignored, and instead we get a dynamically changing resolution... it may start at a lower resolution and automatically increase up to the full native resolution of the camera.
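One thing worth trying as a workaround is the older `mandatory` constraint form, which some platform implementations read instead. A hedged sketch (values illustrative; API shape as in the project's samples, so verify against your flutter-webrtc version):
```dart
import 'package:flutter_webrtc/flutter_webrtc.dart';

Future<MediaStream> openCamera() async {
  final Map<String, dynamic> mediaConstraints = {
    'audio': true,
    'video': {
      // Older-style mandatory constraints; some Android builds only honour these.
      'mandatory': {
        'minWidth': '640',
        'minHeight': '480',
        'minFrameRate': '30',
      },
      'facingMode': 'user',
      'optional': [],
    }
  };
  return navigator.mediaDevices.getUserMedia(mediaConstraints);
}
```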
blockframes/blockframes | 787023136 | Title: Small offset issue on carousel
Question:
username_0: The offset grows 1/3 of a pixel per slide
For bigger screens it's on the left side, for smaller screens it's on the right side
See blue line at left side of image.

Answers:
username_1: just tried it and I couldn't reproduce the bug. Added like 25 movies to the array but never got something off. Maybe this was because of different cropped images?
@username_0 if it is okay for you, I would close it
Status: Issue closed
username_0: That's good to hear. It's minor anyway so yea let's close it :) |
ponywolf/ponytiled | 193938006 | Title: Image tiles causing error
Question:
username_0: I fixed this by changing line 146 in ponytiled.lua to this
```lua
local image = sheet and display.newImage( objectGroup, sheet, dir .. gid, 0, 0 ) or display.newImage( objectGroup, dir .. gid, 0, 0 )
```
adding the **dir ..** before both **gid** instances
Now my map loads with image tiles.
-Jon
Answers:
username_1: Nice catch. I assume it's because the directory was baked into the GID. I went ahead and made the change, pushed and gave you the credit in the comment :)
Status: Issue closed
username_2: I recently downloaded the .zip, and I had to make the same change on line 157. Was the fix undone on another merge?
username_1: I think this issue is LUA vs. JSON loading... I really need to dig into this at some point. :) |
docker/compose | 222873846 | Title: Proposal: Treat services prefixed with . as hidden or abstract
Question:
username_0: **Problem Description**
Docker Compose files are YAML files. In YAML, anchors can be used to reference and duplicate content across a document. For example, the following YAML declares an anchor named base that is applied to foo.
Source:
```
base: &base
name: Everyone has same name
foo:
<<: *base
age: 10
```
Result:
```
base:
name: Everyone has same name
foo:
name: Everyone has same name
age: 10
```
It is possible to use YAML anchors in Docker Compose files so that one service is used as the base of another service. For example, the following snippet declares a service named `.function` that has an anchor named `function`, which is used as the base of the function1 and function2 services.
```
services:
.function: &function
image: fscm
labels:
function: 'true'
networks:
- functions
function1:
<<: *function
environment:
fprocess: 'mkfscmproj'
function2:
<<: *function
environment:
fprocess: 'mkpcgeneral'
```
In the above example, the `.function` service is meant to be used as an abstract definition for the concrete `function1` and `function2` services, and should not be run as an actual service by Docker Compose. Service names cannot contain `.` in their names, so the above example cannot be deployed at all using `docker stack deploy`.
**Proposed Solution**
I propose that a future Docker Compose file format treat services starting with `.` as hidden, so that they can be easily used with YAML anchors. These hidden services would not be deployed and can be ignored by Docker Compose and `docker stack deploy`. This is similar to [GitLab CI YAML files, where jobs starting with a `.` are considered hidden](https://docs.gitlab.com/ce/ci/yaml/README.html#special-yaml-features).
**Example Usage**
In alexellis/faas, functions are declared as individual services in a Docker Compose file. There can be a large number of functions, all with nearly identical service declarations, resulting in lots of repeated lines. Here is an example of a Docker Compose file with four functions as services written in the Docker Compose 3 file format, and with a hidden service. This example is based on the docker-compose.yml file in alexellis/faas.
Docker Compose version 3 format
```
version: "3"
services:
[Truncated]
<<: *function
environment:
fprocess: "base64"
# Decodes base64 representation of request body.
decodebase64:
<<: *function
environment:
fprocess: "base64 -d"
networks:
functions:
driver: overlay
```
With anchors, the declarations for `echoit`, `wordcount`, `base64`, and `decodebase64` are much more terse, and contain only the parts specific to them.
**Related Issues**
This is partially related to the discussions about the lack of support for `extends` in version 3. See #4315 and docker/docker#31101. However, this does not replace the functionality offered by `extends` because YAML anchors do not offer Docker Compose's behaviour for combining lists and dictionaries.
Answers:
username_1: Duplicate of #2578 ?
username_0: I'd say it's definitely related, with the main difference between the proposed solution(s). Treating `.` prefixed services as hidden (ignored) avoids the need to add a whole new `ignore` to the file format (#2578), or handling arbitrary ignored sections (#1655).
username_1: Fair enough. I'm rather in favor of a solution like the #4461 PR. I think it has some advantages over your proposal, namely:
- Can apply anchors to networks, secrets, volumes... as well.
- Doesn't introduce an arcane notation that might be confusing to newcomers (That applies to the `x-` notation suggested in #1655 as well)
- Better separation of concerns (keep everything "meta" in the `extras` section instead of arbitrarily injecting it in the `services` definitions)
Let me know what you think, and if there are use cases you believe #4461 wouldn't address.
username_0: Cool, I hadn't noticed #4461! That does look like a much more inclusive solution. I'll look into it more closely.
username_0: I'm closing this. #4461 provides opportunities to support more than just anchors for services.
Status: Issue closed
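For context, the Compose file format later gained top-level `x-` extension fields (format version 3.4), which cover this use case without hidden services; a brief sketch based on the example above:
```yaml
version: "3.4"

x-function: &function
  image: fscm
  labels:
    function: 'true'
  networks:
    - functions

services:
  function1:
    <<: *function
    environment:
      fprocess: 'mkfscmproj'
  function2:
    <<: *function
    environment:
      fprocess: 'mkpcgeneral'

networks:
  functions:
    driver: overlay
```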
zalandoresearch/fashion-mnist | 254691454 | Title: Convert dataset in t7 format for Torch research project usage
Question:
username_0: Hi,
I've converted the original Fashion-MNIST dataset format from `Python/Numpy` format to `t7` [format](https://github.com/username_0/train-a-fashion-classifier/blob/master/download/fashion-mnist.t7.tgz) for [Torch](https://github.com/torch/torch7) users.
The demo code for Fashion-MNIST classification can be seen from [train-a-fashion-classifier](https://github.com/username_0/train-a-fashion-classifier).
It could be useful for Torch users.
Answers:
username_0: Motivated by <NAME>'s [mnist](https://github.com/andresy/mnist), I created a [fashion-mnist](https://github.com/username_0/fashion-mnist) package, so Torch users can simply use the Fashion-MNIST dataset in their projects by typing the following command:
```
luarocks install https://raw.github.com/username_0/fashion-mnist/master/rocks/fashion-mnist-scm-1.rockspec
```
I think this could be the easiest way for Torch users to use the Fashion-MNIST dataset.
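If the package mirrors the andresy/mnist API it is modelled on, usage would look roughly like this sketch (function and field names are assumptions borrowed from that package, not verified against this port):
```lua
local fashionmnist = require 'fashion-mnist'

-- Assumed to mirror andresy/mnist: traindataset()/testdataset() return tables
-- with 'data', 'label' and 'size' fields.
local trainset = fashionmnist.traindataset()
local testset = fashionmnist.testdataset()
print(trainset.size, testset.size)
```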
Status: Issue closed
username_1: @username_0 Greatly appreciated! I don't know very much about Torch community and how it works. Is it possible to push this helper to the torch master repo?
danielgindi/Charts | 185591960 | Title: Bar on BarChart and CombinedChart doesn't start one Zero line
Question:
username_0: Hello,
I use MPAndroidChart and Charts in my project and, when I use BarChart or CombinedChart with BarData, I show the zero line with default parameters. When the chart is drawn, bars don't start at zero but before it. See the screenshot.
Can you tell me how to fix it?

Answers:
username_1: Use
`barChartView.leftAxis.axisMinimum = 0.0` (your minimum value)
username_0: The axisMinimum is set to 0.0.
The zero line is drawn at the top of the x-axis and not on the x-axis.
Here is an example where we can see the x-axis and the zero line. I would expect these to be the same line.

Here is a second example where we can see the graduation line for 0 together with the zero line

Sorry for my English. It's hard to explain the problem
username_2: @username_0 I am a bit confused.. what's graduation line? I checked ChartsDemo, not seeing this. Are you able to reproduce with ChartsDemo?
username_2: OK just found the reason:
in `drawZeroLine()`, it draws one point lower, so the 0 axis line and zero line not overlap
```swift
context.move(to: CGPoint(x: viewPortHandler.contentLeft, y: pos.y - 1.0))
context.addLine(to: CGPoint(x: viewPortHandler.contentRight, y: pos.y - 1.0))
context.drawPath(using: CGPathDrawingMode.stroke)
```
@username_3 any purpose for this?
username_3: Yes, it was for correcting an Android rendering anomaly. On iOS it's not supposed to happen...
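Given that, the iOS-side correction is presumably just dropping the one-point offset; a minimal sketch of the adjusted drawing code:
```swift
// Sketch: draw the zero line at the exact axis position, without the Android workaround offset.
context.move(to: CGPoint(x: viewPortHandler.contentLeft, y: pos.y))
context.addLine(to: CGPoint(x: viewPortHandler.contentRight, y: pos.y))
context.drawPath(using: CGPathDrawingMode.stroke)
```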
Status: Issue closed
username_0: Thanks |
ageneau/lila-dockerfiles | 251679559 | Title: Error setting up Docker
Question:
username_0: Hello there,
When I clone this repo and follow the instructions, I get this error:
```
Digest: sha256:870548acf504373f6a275af9ed5f83a2415d2f50b47a40284ee12fefc303758d
Status: Downloaded newer image for ageneau/fishnet:latest
Creating liladockerfiles_db1_1 ...
Creating liladockerfiles_explorer_1 ...
Creating liladockerfiles_explorer_1
Creating liladockerfiles_db1_1 ... done
Creating liladockerfiles_explorer_1 ... error
ERROR: for liladockerfiles_explorer_1 Cannot start service explorer: oci runtime error: container_linux.go:262: starting container process caused "chdir to cwd (\"/lila/explorer\") set in config.json failed: no such file or directory"
Creating liladockerfiles_lila_1 ... error
ERROR: for liladockerfiles_lila_1 Cannot start service lila: oci runtime error: container_linux.go:262: starting container process caused "chdir to cwd (\"/lila/lila\") set in config.json failed: no such file or directory"
ERROR: for lila Cannot start service lila: oci runtime error: container_linux.go:262: starting container process caused "chdir to cwd (\"/lila/lila\") set in config.json failed: no such file or directory"
ERROR: for explorer Cannot start service explorer: oci runtime error: container_linux.go:262: starting container process caused "chdir to cwd (\"/lila/explorer\") set in config.json failed: no such file or directory"
ERROR: Encountered errors while bringing up the project.
```
Running on Debian Jessie, docker-compose: 1.15, docker: 17.06.1-ce
codefordayton/cmrscreen | 285576427 | Title: Add a page for Step 4: How many years has it been since you completed your sentence?
Question:
username_0: You must wait a certain amount of time after the ‘final discharge’ of the sentence for your conviction before you may apply for the record to be sealed. Final discharge means you finished serving any jail or prison sentence, any term of probation or parole, and paid any fines. Court costs, however, are not part of your sentence and unpaid court costs should not be used as a reason to block your sealing application.
* For misdemeanors (including minor misdemeanors) you must wait one year after the final discharge of your conviction to apply to have your conviction record sealed.
* For felonies you must wait three years after the final discharge of your conviction to apply.
If okay, continue to next step.
Otherwise, done. |
attobot/attobot | 197417873 | Title: Attobot misprints closing backticks for REQUIRE diff if user left out final newline in REQUIRE file
Question:
username_0: As can be seen here https://github.com/JuliaLang/METADATA.jl/pull/7391 the formatting for the REQUIRE diff is not completely correct. The problem is that I didn't leave a newline at the end of my REQUIRE file so the closing triple backticks happen on the same line as the last REQUIRE entry.
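A hedged sketch of the kind of guard that fixes this (function and variable names are hypothetical, not the actual attobot patch):
```javascript
const FENCE = '`'.repeat(3); // avoids embedding literal triple backticks in this snippet

// Ensure the diff ends with a newline before appending the closing fence,
// so the closing backticks always land on their own line.
function fenceDiff(diff) {
  const body = diff.endsWith('\n') ? diff : diff + '\n';
  return FENCE + 'diff\n' + body + FENCE;
}
```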
Answers:
username_1: I'm working on a fix for this, PR incoming in a bit...
Status: Issue closed
ClickHouse/ClickHouse | 711471004 | Title: Slightly suboptimal performance on processing of very long strings (+ free dataset).
Question:
username_0: 1. Download the list of Alexa top million domains:
```
wget http://s3.amazonaws.com/alexa-static/top-1m.csv.zip
unzip top-1m.csv.zip
```
Actually it's only about 700,000, for some unknown reason.
2. Download front page content of all the domains:
```
cut -d, -f2 top-1m.csv | xargs -P1000 -I{} wget --continue -O 'pages/{}' '{}'
```
Some domains does not respond, some return empty result. So, let it work for about two days.
It will download about 77 GB.
3. Create ClickHouse table:
```
CREATE TABLE sites
(
`domain` String,
`content` String CODEC(ZSTD),
`size` UInt32
)
ENGINE = MergeTree
ORDER BY domain
```
4. Insert the data:
```
for i in *; do echo $i; clickhouse-client --query "INSERT INTO sites SELECT '$i', content, length(content) FROM input('content String') FORMAT RawBLOB" < $i; done
```
Here we insert one row at a time in a single thread. It's inefficient, but OK. Add `xargs -P` if you like.
The table will take about 13.30 GiB in ClickHouse.
5. Now we can do some content analytics to reinvent the [W3Techs](https://w3techs.com/technologies/overview/traffic_analysis) website:
```
SELECT
count(),
sum(content LIKE '%.google-analytics.%'),
sum(content LIKE '%/mc.yandex.%')
FROM sites
┌─count()─┬─sum(like(content, '%.google-analytics.%'))─┬─sum(like(content, '%/mc.yandex.%'))─┐
│ 617171 │ 186957 │ 48455 │
└─────────┴────────────────────────────────────────────┴─────────────────────────────────────┘
1 rows in set. Elapsed: 10.010 sec. Processed 617.17 thousand rows, 81.01 GB (61.66 thousand rows/s., 8.09 GB/s.)
```
It's pretty good speed for the old E5-2650 v2 server with 16 threads.
That is, until we look at `perf top`.
In `perf top` we see that the bottleneck is zstd decompression (as expected), but in second place is `memcpy`, which takes slightly less than half of the time. It can be cut down.
Answers:
username_0: It's about `avg_value_size_hint` in `DataTypeString::deserializeBinaryBulk`.
Status: Issue closed
username_0: Performance gets much better (20+ GB/sec) when setting `min_bytes_to_use_mmap_io = 1`.
Although this setting is not good in general for production.
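For anyone reproducing this, the workaround can be scoped to a single session rather than set globally; a sketch:
```sql
SET min_bytes_to_use_mmap_io = 1;

SELECT
    count(),
    sum(content LIKE '%.google-analytics.%'),
    sum(content LIKE '%/mc.yandex.%')
FROM sites;
```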
antoyo/relm | 377870470 | Title: Updating model and view from different methods
Question:
username_0: Looking at some examples, I noticed that the update() method in the Win trait is updating both the model and the GTK+ widgets. For example (from examples/buttons.rs):
```rust
fn update(&mut self, event: Msg) {
let label = &self.widgets.counter_label;
match event {
Msg::Decrement => {
self.model.counter -= 1;
// Manually update the view.
label.set_text(&self.model.counter.to_string());
},
Msg::Increment => {
self.model.counter += 1;
label.set_text(&self.model.counter.to_string());
},
Msg::Quit => gtk::main_quit(),
}
}
```
As I understand the Elm architecture, the update layer should update the model based on the events that are generated, and the view layer should then take the updated model and render a new view. As the code stands now, the update layer both updates the model and renders the new view. I think the separation of concerns would be better if it were possible to update the view from the Widget trait instead. This could probably be implemented in a backwards-compatible way by adding a new method update() to the Widget trait and then having the framework call that method after update() in Win is called. What are your thoughts on this?
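In code, the proposed separation would look roughly like this sketch (the split-out `view()` method is hypothetical, not an existing relm API):
```rust
fn update(&mut self, event: Msg) {
    // Update layer: mutate the model only.
    match event {
        Msg::Decrement => self.model.counter -= 1,
        Msg::Increment => self.model.counter += 1,
        Msg::Quit => return gtk::main_quit(),
    }
    // Hypothetical hook the framework would call after update().
    self.view();
}

fn view(&mut self) {
    // View layer: render from the updated model.
    self.widgets.counter_label.set_text(&self.model.counter.to_string());
}
```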
Answers:
username_1: Well, actually the procedural macro does something similar.
Look at [this example](https://github.com/username_1/relm/issues/143).
username_1: Please reopen if you need more explanations.
Status: Issue closed
microsoft/fast | 859413118 | Title: Convert Monaco Adapter unit tests to be async
Question:
username_0: # Description
The tests for the Monaco Adapter service must be async or they will cause other tests to not be run, or to throw errors.
## Requirements
- Convert the tests in `monaco-adapter.service.spec.ts` to `async` and ensure the tests do not skip over other tests
- Remove the file from the ignore glob in karma.conf.js |
biggora/caminte | 315093570 | Title: Can I make nested queries
Question:
username_0: Hi there,
Is there a possibility of nested queries like how loopback provides? https://loopback.io/doc/en/lb3/Include-filter.html
Let me give an example. Let's say I have models Books, Authors and Cities, where
Book belongsTo Author
Author belongsTo City
Is there a query to get all the Books written by authors from Newyork?
What is the best way to go about this?
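No answer was posted here, but absent a built-in include filter, one generic workaround is to chain the lookups manually. A heavily hedged sketch (the `find({ where })` shape and the `in` operator are assumptions, not verified CaminteJS API):
```javascript
// Hypothetical models: City, Author, Book, each with a find({ where }, callback) API.
City.find({ where: { name: 'New York' } }, function (err, cities) {
  var cityIds = cities.map(function (c) { return c.id; });
  Author.find({ where: { cityId: { in: cityIds } } }, function (err, authors) {
    var authorIds = authors.map(function (a) { return a.id; });
    Book.find({ where: { authorId: { in: authorIds } } }, function (err, books) {
      console.log(books); // all books by authors from New York
    });
  });
});
```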
dotnet/runtime | 1148795965 | Title: DateTimeFormat for the en-AU CultureInfo is incorrect on .NET 5 + 6
Question:
username_0: ### Description
Per the title, the patterns seem incorrect. ShortDatePattern is notably wrong and causes my code to parse dates incorrectly.
### Reproduction Steps
.NET 5.0/6.0
```
System.Globalization.CultureInfo.GetCultureInfo("en-AU").DateTimeFormat.ShortDatePattern
d/M/yyyy
```
.NET 4.7.2
```
System.Globalization.CultureInfo.GetCultureInfo("en-AU").DateTimeFormat.ShortDatePattern
d/MM/yyyy
```
### Expected behavior
Expecting: d/MM/yyyy
### Actual behavior
Actual: d/M/yyyy
### Regression?
_No response_
### Known Workarounds
_No response_
### Configuration
_No response_
### Other information
_No response_
Answers:
username_0: 
Inconsistent between a Windows and a Mac too
Status: Issue closed
username_1: .NET 5.0 started to use the ICU globalization library instead of the legacy Windows NLS APIs. ICU picks up the data from the CLDR Unicode standard. You can see that .NET 5/6 are returning the [correct data](https://github.com/unicode-org/cldr/blob/7de60bb8e23cfc3e6129919f9e823c5b13ba8552/common/main/en_AU.xml#L2074). If you disagree with that, please consider [opening a CLDR ticket](https://unicode-org.atlassian.net/jira/software/c/projects/CLDR/boards/12).
If you want to learn more about the change we made during .NET 5.0, please have a look at the [doc](https://docs.microsoft.com/en-us/dotnet/core/extensions/globalization-icu), which mentions how to get back to the old behavior too. Note, we don't recommend going back to the old behavior, as this will cause inconsistency and you will lose other features.
username_1: When creating any culture on Windows, you decide if you want to pick the user override settings or want to get only the built-in data for such culture. Please have a look at the [remark section of CreateSpecificCulture API](https://docs.microsoft.com/en-us/dotnet/api/system.globalization.cultureinfo.createspecificculture?view=net-6.0#remarks).
```
If the culture identifier of the specific culture returned by this method matches the culture identifier of the current Windows culture, this method creates a [CultureInfo](https://docs.microsoft.com/en-us/dotnet/api/system.globalization.cultureinfo?view=net-6.0) object that uses the Windows culture overrides. The overrides include user settings for the properties of the [DateTimeFormatInfo](https://docs.microsoft.com/en-us/dotnet/api/system.globalization.datetimeformatinfo?view=net-6.0) object returned by the [DateTimeFormat](https://docs.microsoft.com/en-us/dotnet/api/system.globalization.cultureinfo.datetimeformat?view=net-6.0) property and the [NumberFormatInfo](https://docs.microsoft.com/en-us/dotnet/api/system.globalization.numberformatinfo?view=net-6.0) object returned by the [NumberFormat](https://docs.microsoft.com/en-us/dotnet/api/system.globalization.cultureinfo.numberformat?view=net-6.0) property. To instantiate a [CultureInfo](https://docs.microsoft.com/en-us/dotnet/api/system.globalization.cultureinfo?view=net-6.0) object that with default culture settings rather than user overrides, call the [CultureInfo(String, Boolean)](https://docs.microsoft.com/en-us/dotnet/api/system.globalization.cultureinfo.-ctor?view=net-6.0#system-globalization-cultureinfo-ctor(system-string-system-boolean)) constructor with a value of false for the useUserOverride argument.
```
While the doc of the [GetCultureInfo API](https://docs.microsoft.com/en-us/dotnet/api/system.globalization.cultureinfo.getcultureinfo?view=net-6.0) states the following:
```
If name or altName is the name of the current culture, the returned objects do not reflect any user overrides. If name is [String.Empty](https://docs.microsoft.com/en-us/dotnet/api/system.string.empty?view=net-6.0), the method returns the invariant culture. This is equivalent to retrieving the value of the [InvariantCulture](https://docs.microsoft.com/en-us/dotnet/api/system.globalization.cultureinfo.invariantculture?view=net-6.0) property. If altName is [String.Empty](https://docs.microsoft.com/en-us/dotnet/api/system.string.empty?view=net-6.0), the method uses the writing system and comparison rules specified by the invariant culture.
```
In your case, it looks like you have set `en-AU` as the default culture and the user settings use a different pattern for the short date. That is why `CreateSpecificCulture` picked up this user setting and differs from what `GetCultureInfo` returns. You have full control here to do whatever satisfies your scenario.
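A short C# sketch of the distinction described above, using only the framework APIs quoted from the docs:
```csharp
using System;
using System.Globalization;

class Demo
{
    static void Main()
    {
        // Cached and read-only; never reflects user overrides:
        var cached = CultureInfo.GetCultureInfo("en-AU");
        // Reflects the current user's customized settings when en-AU is the system culture:
        var withOverrides = new CultureInfo("en-AU");
        // Built-in culture data only (useUserOverride: false):
        var builtIn = new CultureInfo("en-AU", false);

        Console.WriteLine(cached.DateTimeFormat.ShortDatePattern);
        Console.WriteLine(withOverrides.DateTimeFormat.ShortDatePattern);
        Console.WriteLine(builtIn.DateTimeFormat.ShortDatePattern);
    }
}
```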
ipython-contrib/jupyter_contrib_nbextensions | 228782806 | Title: Code folding triangle overlaps code for cells in reloaded notebooks
Question:
username_0: The code folding triangle overlaps the code text it corresponds with and there is no vertical line separating the gutter from the rest of the code cell when a notebook is reloaded.
New cells render the code folding triangle and vertical gutter line as expected. The issue is resolved in preexisting cells if the toggle line numbers extension is activated.
Could this be due to a conflicting extension?

Answers:
username_1: This is (I think) also what's behind #982. I've seen it in the past, but it seems to be working ok for me at the moment in notebook 5.x. This may be a result of the notebook version, or (as you mention) it may be a result of interactions with other nbextensions.
What versions of notebook and the nbextensions repo are you using? What other nbextensions do you have enabled? Does disabling any of them fix this for you?
username_0: I get the issue for both notebook versions 4.3.1 and 5.0.0 (different conda environments).
I have tried disabling all extensions other than codefolding (in notebook 5.0.0), but the issue persists.
I generally have the extensions shown in the attached image enabled.

username_1: Ok. I guess you're using the latest conda release (0.2.7?) rather than the repo master? There's a fix in https://github.com/ipython-contrib/jupyter_contrib_nbextensions/pull/977 which is likely to fix this for you. You can try applying it yourself if you're comfortable with trying that, otherwise, since you're at least the second person to have run into this, I'll roll another release tonight...
username_0: Ya, I am on 0.2.7. I will wait for the release tonight. Thanks.
username_1: Ok, 0.2.8 is live as a [GitHub tag](https://github.com/ipython-contrib/jupyter_contrib_nbextensions/releases/tag/0.2.8) and [on pypi](https://pypi.org/project/jupyter_contrib_nbextensions/0.2.8), conda-forge release to follow once https://github.com/conda-forge/jupyter_contrib_nbextensions-feedstock/pull/17 is merged & the resultant CI builds complete...
username_1: and, now [on conda-forge](https://anaconda.org/conda-forge/jupyter_contrib_nbextensions/files?version=0.2.8) also. Closing this as fixed, but obviously please reopen if you find 0.2.8 doesn't fix it for you.
Status: Issue closed
username_2: I have the same problem running on `Linux 4.4.0-77-generic#98-Ubuntu SMP x86_64 x86_64 x86_64 GNU/Linux`
```
jupyter (1.0.0)
jupyter-client (5.1.0)
jupyter-console (5.1.0)
jupyter-contrib-core (0.3.1)
jupyter-contrib-nbextensions (0.2.8)
jupyter-core (4.3.0)
jupyter-highlight-selected-word (0.0.11)
jupyter-latex-envs (1.3.8.4)
jupyter-nbextensions-configurator (0.2.5)
```
username_3: Sorry to hear that. Unfortunately, there are some subtle race conditions when loading extensions.
Does this happen with a specific notebook (e.g. a large one) ?
username_3: If you checkout PR #1028, you can try setting a delay parameter to see if things improve. |
open-contracting/standard | 177658925 | Title: Contract extension and renewal
Question:
username_0: This issue is under consideration for the 1.1 milestone.
It builds on previous discussions in #200, #179, #184, #274
## The issue
In many systems, a contract can legitimately be extended.
In these cases, there is a need to capture information on:
- What extension options are envisaged at tender time;
- What extensions options are agreed at award or contract time;
- What extension options have been exercised
In other cases, contracts are reviewed at a set point, and renewal processes are undertaken. There is a business use-case for knowing about upcoming renewal processes.
## What we are proposing
Introducing a **period.maxExtent** field which can be used to provide the latest date a contract should be extended to.
Introducing two fields to _tender_, _award_ and _contract_.
- extensionsPermitted: [0 - 100]
- extensionOptionDetails: free text
Introducing the **contract.extendsContractID** field that references a contract.id value from the same contracting process, so that a contract extension can refer back to the contract it extends.
Adding a **relatedProcess** (see #371) codelist entry for **replacementProcess**
## Example
```json
"tender": {
"contractPeriod": {
"startDate": "2017-01-01T00:00:00Z",
"endDate": "2017-12-31T00:00:00Z",
"maxExtent": "2018-12-31T00:00:00Z"
},
"extensionsPermitted": 1,
"extensionOptionDetails": "The contract may be extended for a period of up to 12 months at the buyer's discretion and subject to satisfactory performance from the supplier over the initial term"
},
"awards": [{
"contractPeriod": {
"startDate": "2017-01-01T00:00:00Z",
"endDate": "2017-12-31T00:00:00Z",
"maxExtent": "2018-12-31T00:00:00Z"
},
"extensionsPermitted": 1,
"extensionOptionDetails": "The contract may be extended for a period of up to 12 months at the buyer's discretion and subject to satisfactory performance from the supplier over the initial term"
}],
"contracts": [{
"period": {
"startDate": "2017-01-01T00:00:00Z",
"endDate": "2017-12-31T00:00:00Z",
"maxExtent": "2018-12-31T00:00:00Z"
},
"extensionsPermitted": 1,
"extensionOptionDetails": "The contract may be extended for a period of up to 12 months at the buyer's discretion and subject to satisfactory performance from the supplier over the initial term"
}]
```
## Outstanding questions
Should this be in core or in an extension?
## Engagement
Please indicate support or opposition for this proposal using the +1 / -1 buttons or a comment. If opposing the proposal, please give clear justifications, and where possible, make an alternative proposals.
Please leave a comment indicating whether you think this proposal should form part of the core standard or whether it should be an extension.
Answers:
username_1: Is it worth including an additional field for estimated dates of subsequent notices?
username_0: The new EU forms break this down into a number of fields, separating the concepts of renewal and recurrent procedures.
In OCDS 1.1 we will now be introducing period, and maxExtent for periods, as well as allowing links to a renewal or recurrent procedure's OCDS data using ```relatedProcess```.
However, as there have been no requests in this thread for this to go into core, I would propose that for 1.1 this is handled by an extension.
@username_1 would you be able to include this in your work on extensions for trade and make a first suggestion for this? I keep getting caught up in trying to find a simple structure that can capture the different cases, but we might just need to introduce a couple of different properties here.

username_1: @username_0 Yes, I'll include this in the extensions for trade. I think you could be right about using separate properties as, although they are similar concepts, the requirements for data capture are quite different.
username_0: This was left out of 1.1 on grounds of available time, and as it will be possible to add as a community extension at a later date.
Separate work is taking place which may contribute to this in the near future.
username_2: Noting that this issue is distinct from the question of recurrence/recurring/recurrent contracts.
username_2: This issue is primarily a continuation of #200, which exclusively discusses renewals in an EU context (except where the conversation took tangents that became separate issues). In the EU, however, there is no concept of 'maxExtent' within the published data or standard forms. The forms have only:
* II.2.7 Duration of the contract, framework agreement or dynamic purchasing system
* This contract is subject to renewal (checkbox)
* Description of renewals (free-text)
There's a reference to a standard for English local government having 'Contract maximum extension date', for which I found [an example](https://www.ipswich.gov.uk/sites/default/files/ibc_active_contracts_as_ar_20.03.17.pdf). Given the little evidence for this concept, I propose it be used in a local extension for now.
I can't find any evidence for `extensionsPermitted` being expressed as a number in any jurisdiction. I can only find evidence for expression as a boolean.
As for the information collected in TED, see [this proposal](http://standard.open-contracting.org/profiles/eu/master/en/F01/#/OBJECT_CONTRACT/OBJECT_DESCR/RENEWAL) (which eliminates the need for a boolean) and contribute to its [related issue](https://github.com/open-contracting-extensions/european-union/issues/22).
Status: Issue closed
xxoo/node-fswin | 56147757 | Title: package.json tags
Question:
username_0: I couldn't find the package.json in the repo, so I couldn't send a pull request.
I think it'd be beneficial to you and the library if you added "fs" and "windows" tags to the package.json file
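In package.json terms, the request amounts to a `keywords` entry (npm's name for these tags); a sketch:
```json
{
  "name": "fswin",
  "keywords": ["fs", "windows"]
}
```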
Answers:
username_1: ok, i'll add this tag in the next release.
this file is not necessary in the repo since the npm package does not contain source code anymore.
Status: Issue closed
username_0: Where do you keep the source code now?
username_1: just in the repository of course.
but the npm package is not a part of this repo.
username_0: Oh, right, sorry.
Where is the package.json stored, then? Just on npm servers and you pull it for every release?
username_1: only in npm package. i always have a local copy of that.
and i manually update the binaries every time before pushing to npm server.
the package also can be found here: http://username_1.github.io/node-fswin/fswin.7z |
laravel-admin-extensions/latlong | 654176827 | Title: Error: Trying to access array offset on value of type null
Question:
username_0: **环境:PHP 7.4.7 + LA 1.8.1**
### 报错信息
```
Trying to access array offset on value of type null (View: /code/vendor/laravel-admin-ext/latlong/resources/views/latlong.blade.php) (View: /code/vendor/laravel-admin-ext/latlong/resources/views/latlong.blade.php)
```
经调试,问题出在 [这里](https://github.com/laravel-admin-extensions/latlong/blob/master/resources/views/latlong.blade.php#L11) $value 是 null,直接使用 $value['lat'],下边 14 行同理。

新建的时候,$value 为 null, 编辑的时候才有值。但同样的代码,本人本地环境 PHP 7.3.1 却没有报错,不懂了。
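Note: PHP 7.4 is where "Trying to access array offset on value of type null" became a notice; 7.3 and earlier were silent about it, which explains the difference between the two environments. A minimal sketch of a null-safe guard for the template:
```php
<?php
// Sketch: guard against a null $value before indexing (PHP 7.4 warns here, 7.3 did not).
$lat = $value['lat'] ?? '';
$lng = $value['lng'] ?? '';
```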
Answers:
username_0: @z-song Is anyone looking at this?
username_1: Use a default initial value:
```php
[
    "lng" => /* initial lng value */,
    "lat" => /* initial lat value */
]
```
gbif/portal-feedback | 305636234 | Title: Inversion Latitude et Longitude- BioReCIE
Question:
username_0: **Inversion Latitude et Longitude- BioReCIE**
To whom it may concern,
Latitude and longitude of the occurrence data from the dataset BioReCIE (https://www.gbif.org/dataset/b4be6d3b-6bb0-4d1d-8502-b09bf83ff905) have been inverted.
Thank you in advance for correcting this error,
Sincerely,
Marielle
-----
User provided contact info: <EMAIL>
System: Chrome 64.0.3282 / Windows 10.0.0
User: [See in registry](https://www.gbif.org/api/feedback/user/2fe450f274534be31293bd1725ac17ca:6b9eac221e212bb82ce0239b075658801e351df0649c88dac6cd38fa06fb10c3f8e08d68fc4aae905b329447de9db1e50715037eafa54fd6999ed530fd4cdfdf)
Referer: https://www.gbif.org/occurrence/map?dataset_key=<KEY>
Window size: width 1264 - height 804
[API log](http://elk.gbif.org:5601/app/kibana?#/discover?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2018-03-15T16:49:00.371Z',mode:absolute,to:'2018-03-15T16:55:00.371Z'))&_a=(columns:!(_source),index:'prod-varnish-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'response:%3E499')),sort:!('@timestamp',desc)))
[Site log](http://elk.gbif.org:5601/app/kibana?#/discover?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:'2018-03-15T16:49:00.371Z',mode:absolute,to:'2018-03-15T16:55:00.371Z'))&_a=(columns:!(_source),index:'prod-portal-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'response:%3E499')),sort:!('@timestamp',desc)))
System health at time of feedback: OPERATIONAL
Answers:
username_1: Hi all,
Issue corrected on the INPN IPT, coordinates are ok after re-publishing.
Cheers,
Sophie
username_2: ...although points near Îles Glorieuses, provided with ISO code FR, are now being flagged as mismatched against Madagascar; example http://api.gbif.org/v1/geocode/reverse?lat=-11.61473&lng=47.33861 from https://www.gbif.org/occurrence/1656622749
We use Marine Regions' database, and this area is disputed between France and Madagascar. That should mean we follow the data publisher's lead, and accept either FR or MG.
username_3: We've had previous issues with how best to handle French overseas territories. I don't recall the implementation details for, say, how we handle New Caledonia et al., but I suspect that if these records occur in les Îles Glorieuses, the record(s) should probably show the country code as [TF](https://www.gbif.org/country/TF/summary) instead of FR—
leaving aside any territorial dispute.
Status: Issue closed
username_2: With an improvement to the geocoder, these areas will soon return the overlapping claims as held in Marine Regions:
```json
[
{
"id": "232",
"type": "EEZ",
"source": "http://vliz.be/vmdcdata/marbound/",
"title": "Overlapping claim Glorioso Islands: France / Madagascar",
"isoCountryCode2Digit": "MG"
},
{
"id": "232",
"type": "EEZ",
"source": "http://vliz.be/vmdcdata/marbound/",
"title": "Overlapping claim Glorioso Islands: France / Madagascar",
"isoCountryCode2Digit": "TF"
}
]
```
@username_1, I think your latitudes and longitudes have reversed again.
username_2: This dataset has been deleted, so there's nothing more to do.
The Geocode fix for overlapping claims for EEZs will be deployed this week.
Status: Issue closed
laravel/installer | 706381710 | Title: Fatal error: Uncaught Error: Class 'Symfony\Component\Console\Question\ConfirmationQuestion
Question:
username_0: <!-- DO NOT THROW THIS AWAY -->
<!-- Fill out the FULL versions with patch versions -->
- Installer Version: 4.0.4
### Description:
I try to install a new project with --jet.
And I get this:
Fatal error: Uncaught Error: Class 'Symfony\Component\Console\Question\ConfirmationQuestion' not found in C:\Users\×לעד\AppData\Roaming\Composer\vendor\symfony\console\Style\SymfonyStyle.php on line 283
Error: Class 'Symfony\Component\Console\Question\ConfirmationQuestion' not found in C:\Users\×לעד\AppData\Roaming\Composer\vendor\symfony\console\Style\SymfonyStyle.php on line 283
Call Stack:
0.0004 399672 1. {main}() C:\Users\×לעד\AppData\Roaming\Composer\vendor\laravel\installer\bin\laravel:0
0.0139 1283904 2. Symfony\Component\Console\Application->run() C:\Users\×לעד\AppData\Roaming\Composer\vendor\laravel\installer\bin\laravel:13
0.0520 1547136 3. Symfony\Component\Console\Application->doRun() C:\Users\×לעד\AppData\Roaming\Composer\vendor\symfony\console\Application.php:140
0.0537 1624912 4. Symfony\Component\Console\Application->doRunCommand() C:\Users\×לעד\AppData\Roaming\Composer\vendor\symfony\console\Application.php:264
0.0537 1624912 5. Laravel\Installer\Console\NewCommand->run() C:\Users\×לעד\AppData\Roaming\Composer\vendor\symfony\console\Application.php:916
0.0541 1626720 6. Laravel\Installer\Console\NewCommand->execute() C:\Users\×לעד\AppData\Roaming\Composer\vendor\symfony\console\Command\Command.php:258
2.4053 1873016 7. Symfony\Component\Console\Style\SymfonyStyle->confirm() C:\Users\×לעד\AppData\Roaming\Composer\vendor\laravel\installer\src\NewCommand.php:55
Variables in local scope (#7):
$default = FALSE
$question = 'Will your application use teams?'
Answers:
username_1: Hi there,
Thanks for reporting but it looks like this is a question which can be asked on a support channel. Please only use this issue tracker for reporting bugs with the library itself. If you have a question on how to use functionality provided by this repo you can try one of the following channels:
- [Laracasts Forums](https://laracasts.com/discuss)
- [Laravel.io Forums](https://laravel.io/forum)
- [StackOverflow](https://stackoverflow.com/questions/tagged/laravel)
- [Discord](https://discordapp.com/invite/KxwQuKb)
- [Larachat](https://larachat.co)
- [IRC](https://webchat.freenode.net/?nick=laravelnewbie&channels=%23laravel&prompt=1)
However, this issue will not be locked and everyone is still free to discuss solutions to your problem!
Thanks.
Status: Issue closed
WoWManiaUK/Blackwing-Lair | 467163327 | Title: [Npc] World Bosses (Loot)
Question:
username_0: **What is happening:**
Last week I killed 2 world bosses with my guild and they didn't drop anything.
Sadly I don't remember the exact dates of the kills, but I can tell you that:
Akma'hat > first time he dropped a 359 ilvl item: Belt of a Thousand Gaping Mouths. Second time he dropped only 2 green jc recipes and nothing else. Third time he dropped... nothing.
Mobus > first time he dropped a 359 ilvl item: Mobus's Dripping Halberd. Second time he dropped a 359 ilvl item: Drape of Inimitable Fate. Third time he dropped... nothing.
**What should happen:**
Here the loot of Akma'hat > https://www.wowdb.com/npcs/50063-akmahat
Here the loot of Mobus > https://www.wowdb.com/npcs/50009-mobus
They're supposed to drop a 359 ilvl item at least, but according to some videos they're supposed to drop 2 epic items, some jc recipes and even a nice sum of gold.

https://www.youtube.com/watch?v=Hnd-ZDtpn9s
https://www.youtube.com/watch?v=kZwTY7b5Dcg
Atm I can report only these 2 bosses, but I'll add a comment if we kill some other world bosses and they give no loot.
Answers:
username_1: okay will make it like that: 1 recipe and 2 other items and gold
username_1: Full list
Garr - Everyone's favourite Molten Core boss, relocated to Mount Hyjal. Now with 99% less rage over the drops, guaranteed!
Mobus <The Crushing Tide> - A huge whale shark in the Abyssal Depths of Vashj'ir. Probably wants to eat you.
Xariona - A twilight dragon roaming the center of Deepholm. Sadly, it doesn't sparkle.
Akma'hat <Dirge of the Eternal Sands> - A Tol'vir style colossus located in Uldum. Still no eye lasers... everything's better with eye lasers.
Julak-Doom <The Eye of Zor> - A flesh giant, its mind taken over by a Merciless One, found wandering Twilight Highlands. You could chew through its 30 million hp, but a cork stopper would probably do the trick too.
username_1: Akma'hat fixed
username_1: Mobus fixed
username_1: Xariona fixed
username_1: Julak-Doom path and loot corrected
username_1: okay all 4 bosses fixed so far, will handle the last garr in https://github.com/WoWManiaUK/Blackwing-Lair/issues/2373
Status: Issue closed
tskit-dev/tskit | 850366682 | Title: Can't pickle then unpickle a tree sequence: "no attribute '_table_metadata_schemas'"
Question:
username_0: ```
import pickle
import msprime

ts = msprime.simulate(10)
with open('ts.pickle', 'wb') as handle:
pickle.dump(ts, handle, protocol=pickle.HIGHEST_PROTOCOL)
print(ts.draw_text())
with open('ts.pickle', 'rb') as handle:
ts = pickle.load(handle)
print(ts.draw_text())
```
Gives
```
AttributeError Traceback (most recent call last)
<ipython-input-52-2b8d7332ec21> in <module>
7 with open('ts.pickle', 'rb') as handle:
8 ts = pickle.load(handle)
----> 9 print(ts.draw_text())
10
11
~/anaconda3/lib/python3.9/site-packages/tskit/trees.py in draw_text(self, **kwargs)
5427 def draw_text(self, **kwargs):
5428 # TODO document this method.
-> 5429 return str(drawing.TextTreeSequence(self, **kwargs))
5430
5431 ############################################
~/anaconda3/lib/python3.9/site-packages/tskit/drawing.py in __init__(self, ts, node_labels, use_ascii, time_label_format, position_label_format, order)
1250 position_label_format.format(x) for x in ts.breakpoints()
1251 ]
-> 1252 trees = [
1253 VerticalTextTree(
1254 tree,
~/anaconda3/lib/python3.9/site-packages/tskit/drawing.py in <listcomp>(.0)
1251 ]
1252 trees = [
-> 1253 VerticalTextTree(
1254 tree,
1255 max_tree_height="ts",
~/anaconda3/lib/python3.9/site-packages/tskit/drawing.py in __init__(self, tree, node_labels, max_tree_height, use_ascii, orientation, order)
1455 self.node_labels[node] = label
1456
-> 1457 self._assign_time_positions()
1458 self._assign_traversal_positions()
1459 self.canvas = np.zeros((self.height, self.width), dtype=str)
~/anaconda3/lib/python3.9/site-packages/tskit/drawing.py in _assign_time_positions(self)
1481 # account here. Presumably we need to get the maximum number of mutations
1482 # per branch.
-> 1483 self.time_position, total_depth = node_time_depth(
1484 tree, max_tree_height=self.max_tree_height
1485 )
~/anaconda3/lib/python3.9/site-packages/tskit/drawing.py in node_time_depth(tree, min_branch_length, max_tree_height)
1385 assert max_tree_height == "ts"
1386 ts = tree.tree_sequence
-> 1387 for node in ts.nodes():
[Truncated]
-> 3236 return self.getter(index)
3237
3238
~/anaconda3/lib/python3.9/site-packages/tskit/trees.py in node(self, id_)
4516 individual=individual,
4517 encoded_metadata=metadata,
-> 4518 metadata_decoder=self.table_metadata_schemas.node.decode_row,
4519 )
4520
~/anaconda3/lib/python3.9/site-packages/tskit/trees.py in table_metadata_schemas(self)
3654 The set of metadata schemas for the tables in this tree sequence.
3655 """
-> 3656 return self._table_metadata_schemas
3657
3658 @property
AttributeError: 'TreeSequence' object has no attribute '_table_metadata_schemas'
```
Answers:
username_1: Weird, we're clearly missing a test! Thanks for the report.
username_2: Ah - we tested the tables for pickleability, but apparently not the tree sequence. Good catch @username_0.
We should try the Tree class as well, and see if we get a comprehensible error message when trying to pickle it. We probably don't want to actually support pickling I think, at least not until we have random access for trees.
username_1: We had tests for tree sequence pickling, but they only checked the table content of the tree sequence, not that the class was working properly. |
Automattic/mongoose | 304602468 | Title: Connection options for mongo driver are ignored
Question:
username_0: The expectation therefore is that by setting `ignoreUndefined` to `true`, any undefined query params would be ignored in the actual call to MongoDB.
**If the current behavior is a bug, please provide the steps to reproduce.**
<!-- If you can, provide a standalone script / gist to reproduce your issue -->
```javascript
// connection made with an options object of: { ignoreUndefined: true }
const teamSchema = new mongoose.Schema({
name: String,
location: String
});
const Team = connection.model('Team', teamSchema);
const jedis = new Team({
name: 'Jedis',
location: 'Alderan'
});
jedis.save().then((jedis) => {
const query = Team.find({name: 'Jedis', _id: undefined});
console.log(query.getQuery());
// Looks good here...
// { name: 'Jedis', _id: undefined }
// lets send it!
return query.exec()
// HERE IS THE ISSUE
// { name: 'Jedis', _id: null }
).then((jedis) => console.log);
// []
```
**What is the expected behavior?**
In the preceding repro-script, the expectation is that the query object sent from the MongoDB driver would be `{ name: 'Jedis' }` which would return an array containing one document. Instead the option appears to be ignored as the `_id` is equality checked with `null` which leads to no matching documents.
**Please mention your node.js, mongoose and MongoDB version.**
Node 8.9.4
MongoDB: 3.6
Mongoose: 5.0.10
Answers:
username_1: Thanks for reporting, will investigate ASAP
Status: Issue closed
username_1: Fixed in master, fix will be in 5.0.11
username_2: still not working for me; https://github.com/Automattic/mongoose/commit/cae03d379dda41b123aaeb07ad3c9ffba29fab35
username_1: @username_2 please open up a new issue with code samples. |
EdenServer/community | 393894161 | Title: Linkpearl in Bazaar
Question:
username_0: Got a bazaar glitch and a linkpearl is up for sale for 130K. I don't want to get banned because they specifically say they have no market value in the description. Can't remove price or buy it from myself.
Answers:
username_1: @username_0 you wouldn't happen to know what the conditions were that led to you having a linkpearl in your bazaar?
username_2: I purchased the linkpearl to resolve the issue.
username_3: Can't reproduce.
Status: Issue closed
georchestra/mapstore2-georchestra | 550719022 | Title: Implement Extensions uninstall
Question:
username_0: Actually, we do not have a UI to remove installed extensions.
The simplest option is to add a remove button in the context creator plugins step on plugins that have been installed as extensions.
Answers:
username_0: Needs https://github.com/geosolutions-it/MapStore2/issues/5026
Status: Issue closed
SQLiteFlow/SQLiteFlow-Issues | 436530533 | Title: [macOS] Fix Free Trial kill paid users previous purchase.
Question:
username_0: username_0(macOS) 3.5.1 - Pending Release…
Fix an issue that may cause paid users see getting trial or getting full unlock screen.
For users who experiencing this issue, you can just click ‘Get the Free Trial’ to continue using the app as a workaround before updating to the fixed version.
Answers:
username_0: This issue has been fixed in username_0(macOS) 3.5.1.
username_0: The latest version available on the App Store has this issue fixed.
For users who experience this issue, it's recommended to update to this version.
Status: Issue closed
|
fuse-box/fuse-box | 328807341 | Title: Resolve modules from source folder
Question:
username_0: Hi, I browsed through the issues and I can't find a solution to my problem. Something similar is in #991 but no luck with that.
I have a root folder set to `src`.
I am trying to import a module with `import DataSet from 'client/modules/form/models/dataset_model'`.
I have set the following in my fuse.js:
```
fuse
  .bundle('app')
  .alias('client', '~/client')
```
Yet, I keep getting `WARNING Statement "modules/form/models/dataset_model.js" has failed to resolve in module "client"`
Truth is that I have a .ts file there not .js.
Any luck to solve this? Thanks!
Answers:
username_0: Weird, doing the following works:
```
FuseBox.init({
  ...
  alias: {
    client: '~/client/',
    server: '~/server/',
    shared: '~/shared/'
  },
```
Looks like the `.alias` is not working properly.
username_1: @username_0, `alias` takes in an object literal: https://fuse-box.org/page/configuration#alias, hence your first code snippet is incorrect (and your second works as expected).
username_2: This has been all fixed in `v4`. Please upgrade to the [major version](https://github.com/fuse-box/fuse-box/blob/master/docs/getting-started/get-started.md)
Status: Issue closed
|
ryanmark1867/chatbot | 530783236 | Title: "what lies beneath" errors
Question:
username_0: "poster for what lies beneath" doesn't return error message - just sits there
"genre for what lies beneath" returns error
Status: Issue closed
Answers:
username_0: fixed:
What's your message change?
genre for what lies beneath
Sending message now...
Bot says,
Drama
Horror
Mystery
Thriller |
apache/trafficcontrol | 785166505 | Title: GitHub Actions Workflows with 3rd-party Actions no longer run
Question:
username_0: <!--
************ STOP!! ************
If this issue identifies a security vulnerability, DO NOT submit it! Instead, contact
the Apache Traffic Control Security Team at <EMAIL> and follow the
guidelines at https://www.apache.org/security/ regarding vulnerability disclosure.
- For *SUPPORT QUESTIONS*, use the Traffic Control slack (https://s.apache.org/atc-slack)
or Traffic Control mailing lists (https://trafficcontrol.apache.org/mailing_lists).
- Before submitting, please **SEARCH GITHUB** for a similar issue or PR.
-->
## I'm submitting a ...
<!-- delete all those that don't apply -->
<!--- security vulnerability (STOP!! - see above)-->
- bug report
## Traffic Control components affected ...
<!-- delete all those that don't apply -->
- CI tests
## Current behavior:
<!-- Describe how the bug manifests -->
Our GitHub Actions workflows that use 3rd-party Actions or Docker images no longer run. From [the logs](https://github.com/apache/trafficcontrol/actions/runs/481367446) of a recent Weasel run:
```github-actions
docker://licenseweasel/weasel:v0.4 is not allowed to be used in apache/trafficcontrol. Actions in this workflow must be: created by GitHub, verified in the GitHub Marketplace, within a repository owned by apache or match the following: */*@[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]+, AdoptOpenJDK/install-jdk@*, apache/*, burrunan/gradle-cache-action@*, conda-incubator/setup-miniconda@*, container-tools/kind-action@*, dawidd6/action-download-artifact@*, gradle/wrapper-validation-action@*, julia-actions/julia-runtest@*, julia-actions/setup-julia@*, msys2/setup-msys2@*, peaceiris/actions-gh-pages@*, peaceiris/actions-hugo@*, peter-evans/create-pull-request@*, potiuk/cancel-workflow-runs@*, r-lib/actions/*, scacap/action-surefire-report@*, shivammathur/setup-php@*, shogo82148/actions-setup-perl@*.
```
Workflows affected:
- Documentation Build
- Traffic Router Unit Tests
- Weasel license checks
## Expected behavior:
<!-- Describe what the behavior would be without the bug -->
All workflows should run and pass when appropriate.
## Minimal reproduction of the problem with instructions:
<!--
If the current behavior is a bug, please provide the *STEPS TO REPRODUCE* and
include the applicable TC version.
-->
Push a commit, observe those workflows not running.
Status: Issue closed |
ets-labs/python-dependency-injector | 734980145 | Title: Injection not working for class methods
Question:
username_0: I am not quite sure if this is expected behavior or not. Methods annotated as @classmethod end up getting extra parameters injected. The following code demonstrates. I discovered this while using Closing, but filled out the example a bit as I discovered that it is a general issue for Provide.
```
import sys

from dependency_injector import containers, providers
from dependency_injector.wiring import Provide, Closing

def my_factory():
    return 'test-factory'

def my_resource():
    yield 'test-resource'
    print('Closing')

class Container(containers.DeclarativeContainer):
    factory = providers.Factory(my_factory)
    resource = providers.Resource(my_resource)

def do_function_thing(r: str = Closing[Provide[Container.resource]]) -> None:
    print('from function', r)

class MyClass():
    def do_instance_thing(self, r: str = Closing[Provide[Container.resource]]) -> None:
        print('from instance', r)

    @classmethod
    def do_class_thing(cls, r: str = Closing[Provide[Container.resource]]) -> None:
        print('from class', r)

    @classmethod
    def non_closing_class_thing(cls, r: str = Provide[Container.resource]) -> None:
        print('non-closing from class', r)

container = Container()
container.init_resources()
container.wire(modules=[sys.modules[__name__]])

do_function_thing()
c = MyClass()
c.do_instance_thing()

# both of these end up getting multiple values for r:
c.non_closing_class_thing()
c.do_class_thing()
```
The resulting output is:
```
from function test-resource
Closing
from instance test-resource
Closing
Traceback (most recent call last):
File "clstest.py", line 49, in <module>
c.non_closing_class_thing()
File "/Users/scott/repos/github.com/username_0/Starlight/.venv/lib/python3.8/site-packages/dependency_injector/wiring.py", line 296, in _patched
result = fn(*args, **to_inject)
TypeError: non_closing_class_thing() got multiple values for argument 'r'
```
Answers:
username_1: Hi @username_0. Thanks, I'll take a look.
username_1: @username_0 , reproduced the issue. Working on a fix.
PS: The funny thing. I had a test for `@classmethod`. And this test passes. That test called class method from a class, not from the instance of that class.
- https://github.com/ets-labs/python-dependency-injector/blob/master/tests/unit/samples/wiringsamples/module.py#L21
- https://github.com/ets-labs/python-dependency-injector/blob/master/tests/unit/wiring/test_wiring_py36.py#L64
username_1: @username_0 , fixed in `4.3.3`. Thanks for reporting the issue. I close it for now. Please, comment or re-open if needed.
Status: Issue closed
username_0: Seems to be working. Thanks!!!
username_0: So, FYI. I am not sure if Python dictates that @classmethod should be the final decorator on the method? The injection wiring breaks if it is not the final decorator and is stacked on top of other decorators .... but that may be a moot issue if @classmethod is supposed to be last anyway.
This is not an issue for me, just thought you would want to be aware.
Thanks again.
username_1: Yep, that’s kind of known thing that classmethod should be on the very top of decorators. What actually happens is that classmethod returns an object that is treated special way by the class. If it’s not on the top, the class recognizes it as usual method. What DI does is “undecorating” of the method to get the original, decorating the original with injecting decorator and then decorating it back as classmethod.
username_1: Thank you @username_0 .
username_2: @username_1 Is there any workaround to make it work with decorated functions/methods? I'm trying to integrate it with yet another framework and wiring doesn't work if the method is decorated as:
```
@app.get('/')
async def get_root(some_service: SomeService = Provide[Application.some_service]):
    ...
```
The result is that the service is not injected.
username_1: Hey @username_2. What framework do you use?
username_2: I'm trying to use FastAPI. I tried to declare the route both using the decorator and using `app.add_api_route` (providing the handler directly), but the injections seem to be ignored.
username_1: Ok, got it. I didn't try FastAPI. I'll try to build a sample application with it.
username_1: Hey @username_2 ,
I have just released version `4.3.8` that adds a hotfix to wiring to support `FastAPI`. Here is a working sample:
```python
import sys

from dependency_injector import containers, providers
from dependency_injector.wiring import Provide
from fastapi import FastAPI

class Service:
    def ok(self):
        return 'OK'

class Container(containers.DeclarativeContainer):
    service = providers.Factory(
        Service
    )

async def get_root(service = Provide[Container.service]):
    return service.ok()

container = Container()
container.wire(modules=[sys.modules[__name__]])

app = FastAPI()
app.add_api_route('/', get_root)
```
username_2: @username_1 Thanks a lot for addressing this promptly! Is there any way to make it work with decorated functions as well on top of explicit `app.add_api_route()`?
username_2: @username_1 There is also one bug with the above solution. If you explicitly specify the type of what was injected, the app will fail to boot:
```python
async def get_root(service: Service = Provide[Container.service]):
    return service.ok()
```
will result in:
```
fastapi.exceptions.FastAPIError: Invalid args for response field! Hint: check that <class 'test.Service'> is a valid pydantic field type
```
username_1: Yes, but make sure you put `@inject` decorator before `@app.api_route()`:
```python
from dependency_injector.wiring import inject, Provide
@app.api_route('/')
@inject
async def get_root(service = Provide[Container.service]):
    return service.ok()
```
username_1: I don't know how to fix it. That's something that FastAPI's type checking system does. I'll think about some kind of trick to make it work.
username_1: I believe it's because you use ``Depends``. Dependency Injector can not find the marker.
The idea of using ``Depends`` is good. I will think If I can adapt it to fix type checking issue.
username_1: Hey @username_2 ,
I've released a new version ``4.4.1``. It supports ``Depends``. Now it looks like:
```python
import sys

from dependency_injector import containers, providers
from dependency_injector.wiring import inject, Provide
from fastapi import FastAPI, Depends

class Service:
    def ok(self):
        return 'OK'

class Container(containers.DeclarativeContainer):
    service = providers.Factory(
        Service
    )

app = FastAPI()

@app.get('/')
@inject
async def get_root(service: Service = Depends(Provide[Container.service])):
    return service.ok()

container = Container()
container.wire(modules=[sys.modules[__name__]])
```
Using `Depends` also fixes OpenAPI docs endpoint. It didn't work with a previous version. |
ikedaosushi/tech-news | 440326679 | Title: Source code for a homemade PS/2 keyboard made with a PIC
Question:
username_0: Source code for a homemade PS/2 keyboard made with a PIC
Source code for a homemade PS/2 keyboard made with a PIC. GitHub Gist: instantly share code, notes, and snippets.
http://bit.ly/2UTQn6n
GwtMaterialDesign/gwt-material | 182714134 | Title: MaterialLink enable property look and feel is the same this enabled
Question:
username_0: When I use the MaterialLink component and need to disable it in some situations, there is no look-and-feel difference between enabled and disabled. I would expect the text color to turn grey and the cursor to change, so the user can immediately tell it's disabled. Could this be enhanced?
gwt-material version: 1.6.2
sample code: MaterialLink link = new MaterialLink();
link.setEnabled(false);
Answers:
username_1: Thanks for the catch, will update it in 2.0
username_0: MaterialIcon and MaterialCard have the same issue. Could enhance for them too? Thanks!
username_1: Here is a workaround about this issue.
we will add a patch in GMD 2.0, as this is a minor and cosmetic update.
```css
button.disabled, i.disabled, a.disabled {
    background-color: #DFDFDF !important;
    box-shadow: none;
    -webkit-box-shadow: none;
    -moz-box-shadow: none;
    color: #9F9F9F !important;
    cursor: default !important;
    transition: none !important;
}
i.disabled, a.disabled {
    background-color: #fff !important;
}
```
username_1: Fixed via https://github.com/GwtMaterialDesign/gwt-material/commit/fb7261813a64897b20cd46aff2b53fffb90d419b
Status: Issue closed
username_0: Does this enhancement include the `iconType` ? |
knative/serving | 547244233 | Title: Auto TLS Beta
Question:
username_0: -->
## Describe the feature
This is the issue to tracking the work for Auto TLS Beta.
Below are the work items I think we need for Auto TLS Beta:
1. Support cert-manager 0.11 and above with their v1alpha2 APIs https://github.com/knative/serving/issues/6011
2. E2E tests for Auto TLS feature https://github.com/knative/serving/issues/4066, including testing the following cases:
- Certificate provision per Knative Service
- Certificate provision per namespace
- Certificate provision with HTTP01 challenge
3. Detailed documentation about Auto TLS feature and the modes it supports (DNS challenge/HTTP challenge, per ksvc/per namespace). Related issue: https://github.com/knative/docs/issues/1949
Feel free to comment in this thread if anyone has thoughts about the work items needed for Beta launch.
/cc @username_1 @tcnghia
Answers:
username_1: _(somewhat stream of consciousness)_
Does this feature cover HTTP->HTTPS or is that tracked by something else? If it is, then I can think of at least a couple things that need work.
I know it's not "auto", but it is "TLS": what about manually provisioned wildcard certs per namespace? Should this have coverage / documentation?
Do we expect to make TLS provisioning (when enabled) block readiness of the Route? This is esp. relevant when the HTTP mode is disabled (thus my HTTP->HTTPS question above). cc @dgerd
Are we aware of any users that have adopted this integration "in the wild"?
username_2: /triage accepted
username_0: I wrote a [proposal](https://docs.google.com/document/d/1L3Er41Y8PWI_pTP13QdV-YikMd6QqUkJu5XhKFfJwto/edit?resourcekey=<KEY>) about the launch plan which contains the remaining work. The doc is under review.
username_0: /reopen
username_3: @username_0 it's not clear what the next steps are.
I'm wondering if we should close out this issue in favour of tracking work in a project board here:
https://github.com/knative/networking/projects |
RyanSchuster/vos64 | 120650171 | Title: Source code scanners should be written
Question:
username_0: Source code scanners are needed to automate as many maintenance tasks as possible.
Style scanner - report style guide violations
- [ ] Plan
- [ ] Implement
- [ ] Verify
- [ ] Document
Documentation scanner - scan comments of a certain format, organize, and dump into the wiki
- [ ] Plan
- [ ] Implement
- [ ] Verify
- [ ] Document
Call tree scanner - more documentation; build an image showing caller-callee relationships for functions and modules and dump it into the wiki
- [ ] Plan
- [ ] Implement
- [ ] Verify
- [ ] Document
Status: Issue closed
Answers:
username_0: Saving the TODO scanner for the later documentation scanner milestone, it's good enough to start work now. |
humy2833/FTP-Simple | 170006594 | Title: Use of privateKey
Question:
username_0: Hi,
I'm trying to use the following configuration but for some reason it doesn't launch the connection to the server. Do you see anything wrong?
```json
[
  {
    "name": "c7-1",
    "host": "c7-1",
    "port": 22,
    "type": "sftp",
    "username": "username_0",
    "password": "",
    "privateKey": "/Users/username_0/.ssh/id_rsa.pub",
    "path": "/home/username_0"
  }
]
```
Answers:
username_1: Don't use the '.pub' file.
Use only the private key.
ex) id_rsa, id_rsa.pub => id_rsa
Maybe... => "/Users/username_0/.ssh/id_rsa"
username_0: Thanks for the quick response. I tried that as well but nothing happens
when I correct it.
```json
[
  {
    "name": "myserver.local",
    "host": "myserver.local",
    "port": 22,
    "type": "sftp",
    "username": "username_0",
    "path": "/home/username_0",
    "privateKey": "/Users/username_0/.ssh/id_rsa"
  }
]
```
username_1: Was your public key added to the authorized_keys file on the server?
Can you use ssh command to server?
ex) ssh [email protected]
username_0: Yes I can ssh freely to the target server without a password.
username_2: Hello. I have similar issues getting my sftp connection working. Is there any error output or log I can watch to get a clue what is going on?
username_1: The ssh connection uses the 'ssh2' module.
(https://www.npmjs.com/package/ssh2)
Would you like to try a test using this module?
username_2: Shure, how can I do this?
username_3: I can't get it to work either... looks like it can't access the private key... mine was in "C:/Users/User/id_rsa" and nothing... it says "ftp connecting..."
username_3: ok, I got it to work now.
The problem was that I did not enter any "passphrase" for the key.
added this to connection config and all seems to be working ok:
"passphrase": "",
username_1: ok.
Please try again with the latest version (0.4.11 or higher).
Status: Issue closed
username_4: Are there any plans to support storing the private key in the OS's keychain, or encrypting it at rest in the config file, or at least prompting for it instead of requiring that it be left in plaintext in the config file? |
PaddlePaddle/models | 675845651 | Title: [Baidu Star Competition] Bug when fine-tuning with new classes
Question:
username_0: 
After training for more than 8,000 iterations, I ran into the same problem again!
The training issue:
[2020-08-10 11:33:10] trainbatch 8020, lr 0.000050, loss 0.178964, time 0.08 sec
[2020-08-10 11:34:00] trainbatch 8030, lr 0.000050, loss 0.168911, time 0.10 sec
[2020-08-10 11:34:49] trainbatch 8040, lr 0.000050, loss 0.192038, time 0.10 sec
[2020-08-10 11:35:41] trainbatch 8050, lr 0.000050, loss 0.196493, time 0.08 sec
[2020-08-10 11:36:30] trainbatch 8060, lr 0.000050, loss 0.191304, time 0.08 sec
[2020-08-10 11:37:18] trainbatch 8070, lr 0.000050, loss 0.193833, time 0.08 sec
[2020-08-10 11:38:10] trainbatch 8080, lr 0.000050, loss 0.189738, time 0.08 sec
[2020-08-10 11:39:00] trainbatch 8090, lr 0.000050, loss 0.204269, time 0.08 sec
[2020-08-10 11:39:52] trainbatch 8100, lr 0.000050, loss 0.195286, time 0.09 sec
2020-08-10 11:40:16,391-WARNING: Your reader has raised an exception!
Traceback (most recent call last):
File "train_pair.py", line 238, in <module>
main()
File "train_pair.py", line 234, in main
Exception in thread Thread-1:
Traceback (most recent call last):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1156, in __thread_main__
six.reraise(*sys.exc_info())
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/six.py", line 703, in reraise
raise value
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1136, in __thread_main__
for tensors in self._tensor_reader():
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 203, in __call__
yield self._done()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 191, in _done
return [c.done() for c in self.converters]
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 191, in <listcomp>
return [c.done() for c in self.converters]
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 156, in done
arr = np.array(self.data, dtype=self.dtype)
ValueError: could not broadcast input array from shape (3,64,64) into shape (3)
train_async(args)
File "train_pair.py", line 187, in train_async
for train_batch in train_loader():
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1102, in __next__
return self._reader.read_next()
paddle.fluid.core_avx.EnforceNotMet:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2 paddle::operators::reader::BlockingQueue<std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> > >::Receive(std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> >*)
3 paddle::operators::reader::PyReader::ReadNext(std::vector<paddle::framework::LoDTensor, std::allocator<paddle::framework::LoDTensor> >*)
4 std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<unsigned long>, std::__future_base::_Result_base::_Deleter>, unsigned long> >::_M_invoke(std::_Any_data const&)
5 std::__future_base::_State_base::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>&, bool&)
6 ThreadPool::ThreadPool(unsigned long)::{lambda()#1}::operator()() const
----------------------
Error Message Summary:
----------------------
Error: Blocking queue is killed because the data reader raises an exception
[Hint: Expected killed_ != true, but received killed_:1 == true:1.] at (/paddle/paddle/fluid/operators/reader/blocking_queue.h:141)
Answers:
username_0: [2020-08-10 17:44:27] trainbatch 8000, lr 0.010000, loss 2.633109, acc1 0.0000, acc5 0.0000, time 0.11 sec
Traceback (most recent call last):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/reader/decorator.py", line 412, in order_handle_worker
r = mapper(sample)
File "/home/aistudio/work/metric_learning_traffic/imgtool.py", line 136, in process_image
img = random_crop(img, crop_size)
File "/home/aistudio/work/metric_learning_traffic/imgtool.py", line 64, in random_crop
resized = cv2.resize(img, (size, size), interpolation=cv2.INTER_LANCZOS4)
cv2.error: OpenCV(4.1.1) /io/opencv/modules/imgproc/src/resize.cpp:3720: error: (-215:Assertion failed) !ssize.empty() in function 'resize'
username_1: The data shapes don't match; for reference, see https://stackoverflow.com/questions/43977463/valueerror-could-not-broadcast-input-array-from-shape-224-224-3-into-shape-2
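Regarding the second traceback (`cv2.resize` failing with `!ssize.empty()`), a common cause is an image file that cannot be decoded. A minimal guard, sketched under the assumption of hypothetical `path` and `size` arguments:

```python
import cv2

def safe_resize(path, size):
    img = cv2.imread(path)
    # cv2.imread returns None when the file is missing or cannot be decoded;
    # resizing such an empty image triggers the "!ssize.empty()" assertion.
    if img is None or img.size == 0:
        raise ValueError('unreadable image: %s' % path)
    return cv2.resize(img, (size, size), interpolation=cv2.INTER_LANCZOS4)
```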
username_1: Closing this Issue. Feel free to reopen it if you have any further question.
Status: Issue closed
|
flutter/flutter | 588809567 | Title: [flutter_driver] flutter ui test on flutter module that is running on existing android app
Question:
username_0: Hello.
I've been on investigation on flutter and made a decision that flutter is wonderful to cover legacy applications.
However, when it comes to integration testing (or acceptance testing) with legacy applications, I've been failing to figure out how.
Here is my situation.

I'd like to test "a new feature by flutter". But like I mentioned before, I failed to find a way.
Is it possible to run "flutter drive" on an existing android app that contains flutter module in it?
Answers:
username_1: \cc @username_2
username_2: cc @collinjackson
Somewhat. You can't "flutter drive" (the Dart VM in your app might not even exist until a couple of screens later, the Dart VM can't drive your tests).
The open source espresso now supports Flutter! See https://github.com/flutter/samples/pull/323/files for an example (thanks @RedBrogdon!)
For iOS, it's in progress. You can test some things via standard XCTest like https://cs.opensource.google/flutter/engine/+/master:testing/scenario_app/ios/Scenarios/ScenariosTests/AppLifecycleTests.m. You currently can't assert against the UI state inside Flutter like you could with Espresso but you can assert business logic via platform channel for now.
https://github.com/flutter/flutter/issues/32987 tracking EarlGrey
username_0: @username_2 Is there a way if I can guarantee the Dart VM runs before getting into the flutter driver test?
I can definitely guarantee that, because I know when the Android host app displays Flutter views.
username_2: Could you give us a bit more details?
I'm getting the impression that there are 2 conflicting things. I understood the desire for UIAutomator driving blackbox testing. I'm not sure I understood the flutter drive part. How would you know there's a Flutter implementation of a module inside the blackbox? Do you have existing UIAutomator tests and are asking how to find an equivalent of new UiSelector().description("something") for a Flutter object?
username_3: You're right, I'm looking for a way to access Flutter objects with UIAutomator.
bbc/simorgh | 450253356 | Title: Create component e2e tests for 600 breakpoint
Question:
username_0: **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Testing notes**
[Tester to complete]
Dev insight: Will Cypress tests be required or are unit tests sufficient? Will there be any potential regression? etc
**Additional context**
Add any other context or screenshots about the feature request here.
Status: Issue closed
Answers:
username_0: No longer valid
username_0: It was decided with @dr3 and @jamesdonoh that tests of this level were not necessary for a number of reasons:
1. A check to ensure a particular component is on the page is sufficient for our purposes
2. We do not want to bloat the e2e test time with unnecessary tests
3. Our unit test coverage is sufficient to have confidence in the behaviour of the frontpage
@username_0 is going to become more familiar with the unit tests in order to understand the coverage. |
WyohKnott/image-formats-comparison | 533198275 | Title: In Firefox, some images are rendered without color management
Question:
username_0: I have a profiled display and color management enabled in Firefox. Because of the way some images (BPG, JXR, KDU, OpenJpeg) are displayed - presumably decoded in Javascript on my browser - they manage to bypass the color management system and they display heavily oversaturated. This makes it difficult to compare these images with the ones that are supported natively, since they have the wrong colors.
Possible solutions:
* some way to force Firefox to treat these Javascript loaded images like any other, so that it will color manage them?
* some way to detect whether the user has color management enabled, and load prerendered PNGs instead? or an option to do that? |
nss-evening-cohort-8/home-for-the-holidays | 380023767 | Title: StringBuilder to Display Selected Friend
Question:
username_0: ## User Story
As a User, I need to see the information (name, address, phone number, Am I avoiding, holidays they are invited to, delete button and edit button) of the friend that is selected from the dropdown.
## Development
- write function const friendView which will hold string builder.
- add div in html with id ="friend" for friend card.
- grab divId and do html() to print the string
- write function to set data. Use friendsdata promise and call stringbuilder function to print. chain then to connect holiday promise to display holidays for that friend.
## Acceptance Criteria
When the User visits the page, they should be able to see the information (name, address, phone number, Am I avoiding, holidays they are invited to, delete button and edit button) of the friend that is selected from the dropdown.
Status: Issue closed
irbv-collections/MT-controlled-vocabulary | 86237923 | Title: New Country, Province, MRC or Park
Question:
username_0: [Originally posted on GoogleCode (id 1562) on 2015-01-26]
Country:US
Province/State:MINN
**MRC/District/County:**
Park:Barnesville State Wildlife Management Area
http://www.stateparks.com/barnesville_state_wildlife_management_area_in_minnesota.html
**ID of the specimen(s):**
17992
Your name:Luc |
boostorg/build | 620527262 | Title: -l option (timeout) to b2
Question:
username_0: I just experienced an issue that caused me to lose half a day, and I want to understand where this needs to be reported.
I will start with a question: is it common for make to pass its options to b2? If a user invokes make, is it default that b2 will inherit those commandline options? Because if it is, I need to report a bug here. If it isn't I need to report a bug to the application whose build scripts caused my issue.
The issue: I passed "-l 4" to make, to throttle its scheduler to where load average stays <= NCPUs. But that got passed to b2, where it caused many of the objects I was building to time out, ultimately causing the link stage to fail.
I don't know whose fault that is. Trying to understand. Thanks.
Answers:
username_0: This may have been a horrendous red herring caused by a gigantic coincidence. Closing until I learn more.
Status: Issue closed
username_0: PEBKAC |
MicrosoftDocs/sql-docs | 588848184 | Title: Example for sql restore stopatlogmark is wrong
Question:
username_0: The sample code below is wrong. It is trying to stop at the description of the logmark. It needs to pass the logmark name instead. IE. 'ListPriceUpdate' On the last line.
USE AdventureWorks2012
GO
BEGIN TRANSACTION ListPriceUpdate
WITH MARK 'UPDATE Product list prices';
GO
UPDATE Production.Product
SET ListPrice = ListPrice * 1.10
WHERE ProductNumber LIKE 'BK-%';
GO
COMMIT TRANSACTION ListPriceUpdate;
GO
-- Time passes. Regular database
-- and log backups are taken.
-- An error occurs in the database.
USE master;
GO
RESTORE DATABASE AdventureWorks2012
FROM AdventureWorksBackups
WITH FILE = 3, NORECOVERY;
GO
RESTORE LOG AdventureWorks2012
FROM AdventureWorksBackups
WITH FILE = 4,
RECOVERY,
STOPATMARK = 'UPDATE Product list prices';
Answers:
username_1: @username_0 -- thank you for your feedback. Are you referring to a specific docs.microsoft.com article?
username_0: https://docs.microsoft.com/en-us/sql/t-sql/statements/restore-statements-transact-sql?view=sql-server-ver15#restoring_transaction_log_to_mark
in this link the logmark restore example is wrong.
username_0: https://docs.microsoft.com/en-us/sql/t-sql/statements/restore-statements-transact-sql?view=sql-server-ver15#restoring_transaction_log_to_mark
in this link the logmark restore example is wrong
username_1: @username_0 -- thank you for clarifying, and I believe you are correct here, per [this article](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/recovery-of-related-databases-that-contain-marked-transaction?view=sql-server-ver15#transact-sql-syntax-for-inserting-named-marks-into-a-transaction-log).
@MikeRayMSFT -- please review [private PR 14360](https://github.com/MicrosoftDocs/sql-docs-pr/pull/14360).
Status: Issue closed
|
pypeit/PypeIt | 458851217 | Title: reading files in and out over and over again
Question:
username_0: There are two problems with our current image processing that were not addressed properly:
1) We currently have instrument-specific readers that overload "load_raw_frame". I think this is the wrong approach. Instrument-specific processing goes beyond just reading in the frame and extends to the entire image processing itself. For this reason, I very much dislike the corner we are boxed into in PypeIt ProcessRawImage. Here are a few examples:
-- Some instruments require processing the overscan in different ways. For some instruments a simple median might be okay. For others, you need to fit as a function of one dimension.
--- Instruments like ESI have a big hot spot that varies, and cannot simply be dealt with according to a BPM. In principle one needs to fit the masked region image by image. I believe this is what we did in xidl.
-- For this reason, I think the code that processes the images should be instrument specific. In other words, it should be possible to use the generic reader in spectrograph or wherever when that works, but then a user should be able to overload the entire proc-ing process.
2) We are constantly reading and re-reading the same files, which massively slows down the code for big images, particularly when they are gzipped. Each image should be read in only once.
Answers:
username_1: Addressed
Status: Issue closed
|
banool/recreation-gov-campsite-checker | 875889275 | Title: running in cron and notifications
Question:
username_0: I've got this running well on my Debian system, using this command:
`python3.7 camping.py --start-date 2021-08-01 --end-date 2021-08-13 --nights 1 --stdin < parks-sierras.txt`
I'm able to get it to run in cron (have to use the full paths in the cron command) but cannot get it to notify. I'd like to receive a system notification and bell, as well as an email. Any ideas?
cc. @username_1
Answers:
username_1: Then I'd just set up a bash script that runs the command every few minutes.
https://askubuntu.com/questions/852070/automatically-run-a-command-every-5-minutes
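For example, a crontab entry along these lines (the paths and schedule here are illustrative) would run the check every 5 minutes and append the output to a log:

```sh
*/5 * * * * cd /home/user/recreation-gov-campsite-checker && /usr/bin/python3.7 camping.py --start-date 2021-08-01 --end-date 2021-08-13 --nights 1 --stdin < parks-sierras.txt >> /tmp/camping.log 2>&1
```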
username_0: Ok thanks for the help, this gives me stuff to work on.. Appreciate it...
username_0: @username_1 is it possible using the `--nights` arg to be "any"?
Such as `--nights *` or something like that?
username_0: @username_1 Yes have it running via a shell script. Haven't had a chance to try the notification yet..
Status: Issue closed
|
openfaas/faas-cli | 265463019 | Title: python3 template proposal
Question:
username_0: <!--- Provide a general summary of the issue in the Title above -->
I have proposal for python3 template and this is how I think it should look like
https://github.com/username_0/serverless/tree/master/openfaas/templates/python3
and this is an example of two real functions based on this template
https://github.com/username_0/serverless/tree/master/openfaas/imgmod
https://github.com/username_0/serverless/tree/master/openfaas/yaml2json
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
I've made a couple of changes here:
- Reduced number of layers in `Dockerfile` (image) simply by using `.dockerignore` file with reverse logic (include files), also this way only needed files are sent to Docker build context
- IMHO `run.py` seems like more logical name then `index.py`
- Added a new function `get_qs()` in to the `run.py` (collects query string parameters)
Answers:
username_1: Great idea. Python 3 is important.
Have you got the most recent CLI? I think we have this.. please double check.
username_0: Yes I have and I'm proposing improvements for existing template, cause it doesn't include query string parser function, existing `get_stdin()` function is overcomplicated and `Dockerfile` generates too many layers...
username_1: Probably not going to merge this because it breaks with the convention of our signature. However you may want to propose a client library that we can use on pip to reconstitute http data from the environment
username_1: Breaking convention means broken compatibility for existing users. I'm interested in your ideas around reducing image size
username_1: Also work working with @jockdarock who is combining OpenFaaS with Flask.
username_1: *worth
Status: Issue closed
username_1: We have a Python3 template, please check it out @username_0
Closing issue. |
Canadensys/vascan-data | 67802241 | Title: typo in authority of synonym
Question:
username_0: [Originally posted on GoogleCode (id 2212) on 2014-06-10]
**(This is the template to report a data issue for Vascan. If you want to report another issue, please change the template above.)**
**What is the URL of the page where the problem occurs?**
http://data.canadensys.net/vascan/taxon/5143
**What data are incorrect or missing?**
Carex serotina Merat
**What data are you expecting instead?**
Carex serotina Mérat
**If applicable, please provide an authoritative source.**
IPNI authors
Status: Issue closed |
openbmc/openbmc | 218545029 | Title: IPL: BMC in quiesce state following power on request - filesystem error prior on romulus with 910.1714.20170327n
Answers:
username_1: Some notes from @spinler on this one
```
On Romulus, I don't even see the fsi scan running at all:
Mar 27 22:42:24 romulus systemd[1]: Starting Start Power0...
Mar 27 22:42:24 romulus systemd[1]: Created slice system-start_host.slice.
Mar 27 22:42:24 romulus systemd[1]: Created slice system-avsbus\x2dworkaround.slice.
Mar 27 22:42:24 romulus systemd[1]: Created slice system-fsi\x2dbind.slice.
Mar 27 22:42:24 romulus systemd[1]: Created slice system-pcie\x2dslot\x2ddetect.slice.
Mar 27 22:42:24 romulus power_control.exe[765]: PowerControl: setting power up SOFTWARE_PGOOD to 1
Mar 27 22:42:24 romulus power_control.exe[765]: PowerControl: setting power up BMC_POWER_UP to 1
Mar 27 22:42:24 romulus system_manager.py[729]: Running System State: HOST_POWERING_ON
Mar 27 22:42:24 romulus timemanager[803]: PGOOD has changed..
Mar 27 22:42:24 romulus systemd[1]: Created slice system-cpld_trigger.slice.
Mar 27 22:42:24 romulus systemd[1]: Created slice system-op\x2dwait\x2dpower\x2don.slice.
Mar 27 22:42:24 romulus systemd[1]: Starting Wait for Power0 to turn on...
Mar 27 22:42:24 romulus systemd[1]: Starting Wait for /org/openbmc/watchdog/host0...
Mar 27 22:42:24 romulus systemd[1]: Created slice system-vcs_workaround.slice.
Mar 27 22:42:24 romulus systemd[1]: Created slice system-vrm\x2dcontrol.slice.
Mar 27 22:42:24 romulus systemd[1]: Starting Wait for /xyz/openbmc_project/state/chassis0...
Mar 27 22:42:24 romulus systemd[1]: Created slice system-fsi\x2dscan.slice.
Mar 27 22:42:24 romulus systemd[1]: Created slice system-avsbus\x2denable.slice.
Mar 27 22:42:25 romulus systemd[1]: Started Start Power0.
Mar 27 22:42:25 romulus systemd[1]: Started Wait for /org/openbmc/watchdog/host0.
Mar 27 22:42:25 romulus systemd[1]: Started Wait for /xyz/openbmc_project/state/chassis0.
Mar 27 22:42:25 romulus systemd[1]: Starting Perform AVS bus workaround on VRMs...
Mar 27 22:42:25 romulus systemd[1]: Started Perform AVS bus workaround on VRMs.
Mar 27 22:42:25 romulus systemd[1]: Starting Disable the AVS bus on the VRMs...
Mar 27 22:42:26 romulus systemd[1]: Started Disable the AVS bus on the VRMs.
Mar 27 22:42:26 romulus systemd[1]: Starting Apply voltage overrides to VRMs...
Mar 27 22:42:26 romulus vrm.sh[848]: rail set read current
Mar 27 22:42:26 romulus vrm.sh[848]: ------- ------- ------- -------
Mar 27 22:42:26 romulus vrm.sh[848]: vdna 0.898V 0.898V 0.250A
Mar 27 22:42:27 romulus vrm.sh[848]: vdnb 0.898V 0.004V 0.000A
Mar 27 22:42:27 romulus systemd[1]: Started Apply voltage overrides to VRMs.
Mar 27 22:42:27 romulus systemd[1]: Starting Enable the AVS bus on VRMs...
Mar 27 22:42:27 romulus systemd[1]: Started Enable the AVS bus on VRMs.
Mar 27 22:42:27 romulus systemd[1]: Starting Run VCS workaround on host0...
Mar 27 22:42:27 romulus openpower-proc-control[915]: filesystem error: directory iterator cannot open directory: No such file or directory [/sys/devices/platform/fsi-master/slave@00:00/hub@00/]
So I expect this to be a service ordering problem. I'll ask Lei to take a look since he owns Romulus changes.
```
username_2: It is indeed a service ordering problem: `vcs_workaround` has a dependency on `fsi_scan`, but Romulus' own vcs_workaround service does not add this dependency, so it fails.
I will re-order the services, moving cpld_trigger before vcs_workaround so that PGOOD is asserted before vcs_workaround and fsi_scan will run.
Eventually, the power on related services are:
1. avs_workaround
2. avsbus-disable
3. vrm-control
4. avsbus-enable
5. cpld_trigger (After this, PGOOD is asserted)
6. vcs_workaround
7. start_host
username_2: Patch submitted: https://gerrit.openbmc-project.xyz/#/c/3514/
username_3: 910.1716.20170405t was built with the patch mentioned and this has resolved our problem. Please push this change up ASAP so we can get it into our production builds. Thanks.
username_2: @username_4 Please help review the patch and merge, since it blocks Romulus power on.
Status: Issue closed
|
derailed/k9s | 490731842 | Title: Feature add a shortcut key to activate/deactivate headless mode
Question:
username_0: <img src="https://raw.githubusercontent.com/username_1/k9s/master/assets/k9s_small.png" align="right" width="100" height="auto"/>
<br/>
<br/>
It would be great to have a shortcut key (like CTRL-H) to activate/deactivate headless mode.
Answers:
username_1: @username_0 Totally agree on this. Thank you!!
username_1: @username_0 Please check out v0.9.0! Hopefully close to what you were looking for.
Status: Issue closed
|
situkangsayur/SampleEMforBrainSeg | 258814064 | Title: javax.imageio.IIOException: Can't read input file!
Question:
username_0: I like your project, very nice work. But when I run the project on my Mac it works very well, yet it shows this error: "javax.imageio.IIOException: Can't read input file!", and when I run this project on a Windows machine it shows this error: "java.lang.IllegalArgumentException: URI is not hierarchical".
Can you please help?
tidyverse/tidyverse | 1028008444 | Title: Ggplot 2 labels produced with for loop
Question:
username_0: Hi Good Morning All,
Please, I have some visualizations which I generated with a for loop, but when I try to modify the labels they do not show on the plot. How do I resolve this? Find the code below.
```r
for (i in 2:3) {
  print(paste('----', names(iris[i]), '----'))
  print(ggplot(iris, aes(x = Sepal.Length, y = iris[, i])) + geom_point()) + ylab(colnames(iris[i])) + xlab('Sepal.Length') + ylab(colnames(iris[i]))
  Sys.sleep(1)
}
```
Thank you for your anticipated response.
Answers:
username_1: This sort of question is a better fit for <https://community.rstudio.com>. Do you mind asking it over there? (You might want to read <https://www.tidyverse.org/help/> first to maximise your chances of getting a good answer)
Status: Issue closed
|
imbs-hl/ranger | 233870694 | Title: "Growing trees... Killed" depending on num.trees and data size
Question:
username_0: Hello,
thanks for providing the ranger package for fast RSF.
And thanks for your time reading this.
Depending on the value of num.trees, growing the trees aborts suddenly, even though I have
two strong servers as described below.
There is no error message, but "killed" - please see attached screenshot.
Given `num.trees=5000` this occurred at a growing progress of e.g. 52%, 76%, 86%.
But never at a lower progress rate.
In another dataset I've observed this behaviour at 99%, too.
I've tried using the "dependent.variable" and "status.variable" notation instead of providing
a _survival formula_ or _survival object_, but that didn't help either.
I'm running the R script in bash mode to avoid any overhead or perturbations from RStudio.

The different training datasets I tried have 630 - 1800 observations and 500 features.
The `ranger()` function call is:
```r
ranger(dependent.variable.name = "time",
       status.variable.name = "status",
       data = training,
       num.trees = num.trees,
       save.memory = T
)
```
Whereas the call from the shell is e. g. `R < run.Ranger.R --no-save`
All independent variables are numeric and scale between [0,1].
A workaround is reducing num.trees with a trial-and-error approach.
If using `importance = TRUE` I have to reduce the tree size further to avoid "killed" sessions.
I thought this might be a memory issue, so I tried using `save.memory=T`. But only up to 40% (=98 GB) of memory is occupied, which doesn't make sense under this assumption.
The plot below shows time on x-axis (1 bar = 1 hour) and memory % usage on y-axis:

I'm really wondering what's the reason or bottleneck, since no error message appears but "killed" and memory monitoring reports most of resources are free.
I'd be glad and thankful for any ideas or proposals!
# Finally, here are the hardware / software stats:
## OS System
LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:g raphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: RedHatEnterpriseServer
Description: Red Hat Enterprise Linux Server release 6.8 (Santiago)
Release: 6.8
Codename: Santiago
## Hardware, two servers with each (problem occurs on both, so it's server independent):
CPU: 20 Cores, Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz
RAM: 246 GB
## R
platform x86_64-redhat-linux-gnu
arch x86_64
os linux-gnu
system x86_64, linux-gnu
status
major 3
minor 3.2
year 2016
month 10
day 31
svn rev 71607
language R
version.string R version 3.3.2 (2016-10-31)
nickname <NAME> Patch
Answers:
username_0: I found out that the memory report above is misleading due to aggregation.
The real consumption is much higher as I will show below.
I asked whether the crash is caused by the dimensions (27,100 x 500) or by the inner structure. If it were the former, then simulated data of the same size should cause a crash, too. If not, the simulation would complete successfully.
So I simulated a uniform [0,1] distributed matrix of same dimension, trained a SRF and logged memory consumption. (Code below) Then I did the same with the real data. In both cases the same parameters were used.
**Result:**
The RAM consumption of the real dataset goes through the roof, but for the simulation it is almost constant.
When the RAM consumption reaches the limit, the OS uses swap memory, then terminates the process if
that reaches its limit, too.

## questions:
Is this a normal behaviour?
I'm wondering why the "simulation data" computation takes about 27min to finalize (100% progress), which is about 10min slower than the "real data" (95% progress)? Why is it more expensive in computation, but cheaper in RAM?
## code (simulation) data
(only data argument was replaced when using the real data)
```r
library(survival)
library(ranger)
library(dplyr)

n <- 27100
M <- data.frame(matrix(runif(n*500, 0, 1), nrow = n))
M$time <- round(rbeta(n, 10, 3)*100, 0)
M$status <- (runif(n, 0, 1) + 0.2) %>% round(0)

ntrees <- 5000
ncores <- 15

srf <-
  ranger(
    data = M,
    num.trees = ntrees,
    num.threads = ncores,
    dependent.variable.name = "time",
    seed = 1,
    save.memory = F,
    status.variable.name = "status")
```
username_1: Thanks. This is very strange. Any idea what could still be different in the real dataset? I guess you are not allowed to share it? Could you check the size of the resulting forest in the two cases, e.g., with
`mean(sapply(srf$forest$split.varIDs, length))`
By the way, `save.memory = TRUE` has no effect on survival forests, we should add a warning.
username_0: Main differences in the datasets (real vs. simulated)
- The real dataset has logical and numerical values as features. So columns exist with values in {0,1} or [0,1] respectively.
- The simulation dataset has only feature values in [0,1] and not {0,1}.
I'll check the tree sizes as soon as my servers are working fine again:
yesterday I made some excessive run-and-kill tests, and I've had troubles since then.
Which means
1. I started ranger (with real data) with different parameters (e. g. mtry, trees, save.memory, playing with the data dimensions) and logged the **x** in "Growing trees .... x% Progress" + RAM demand as well.
So I retrieved a table such as below for different parameter sets.
2. Since I didn't want to wait until completion each time, I killed the parent process with SIGTERM.
I did this dozens of times.
This had a bad effect:
Normally the simulation script (code below) runs up to ~ 25mins.
And the "Growing trees ..." progress message will be provided after 30s after starting at latest.
But now... I have to wait **2 hours** and then ranger estimates a runtime of **~12-13 days**.
## progress and memory demand example tables

## simulation script (5-10 minute version, `ntrees <- 5000` costs ~25min)
```r
library(ranger)

n <- 27144
nfeat <- 500
M <- data.frame(matrix(runif(n*nfeat, 0, 1), nrow = n))
M$time <- round(rbeta(n, 10, 3)*100, 0)
M$status <- round((runif(n, 0, 1) + 0.2), 0)

ntrees <- 1000
ncores <- 15

srf <-
  ranger(
    data = M,
    num.trees = ntrees,
    num.threads = ncores,
    dependent.variable.name = "time",
    status.variable.name = "status",
    seed = 1
  )
```
username_0: Please excuse my late response.
`mean(sapply(srf$forest$split.varIDs, length)) := nSplit.mean` results in about 12K each (real + simulated data).
Increasing min.node.size enables regularization so that the problem is less expensive in terms of computing time, but still needs massive RAM.
The picture below shows a benchmark over min.node.size:
x-axis denotes the min.node.size
y-axis denotes time (=computation time in seconds) and nSplit.mean
But nonetheless the memory demand (not shown) for the real data remains massive.
The current workaround is increasing min.node.size (thereby reducing nSplit.mean) so that an SRF with at least 1000 trees can be grown.
This is far short of the desired 5000 trees, but unfortunately there's no time to research how to decrease the RAM hunger.

username_1: The reason is probably that in a survival forest a cumulative hazard function has to be saved in each terminal node. If there are many unique time points in the dataset, these CHFs grow large, and for many deep trees there are a lot of them.
To verify this, could you try to change the splitting rule to "extratrees" and/or "maxstat" and check if this changes the memory usage?
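For reference, a minimal way to try that, reusing the simulated data `M` from the script above (the 1000-tree setting is just an illustrative choice to keep the run short):

```r
# Same call as before, but with an alternative splitting rule; compare
# the memory use and the in-memory size of the resulting forest.
srf <- ranger(
  data = M,
  num.trees = 1000,
  dependent.variable.name = "time",
  status.variable.name = "status",
  splitrule = "maxstat"  # or "extratrees"
)
format(object.size(srf), units = "auto")
```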
username_0: 900 - 1000 unqiue time points do exists.
I'll check the alternative checking rules as soon as my other computations
have finished, this may take up to a month. I'll provide an update as soon
as possible.
Today I trained a SRF in the same manner as before (min.node.size 15,
ntrees = 1000), but with more patients (62K; real data, no simulation data)
than before.
The patients utilized before are contained as a subset in these.
Very surprisingly the memory demand dropped to ~ 50 GB, which is nice, but
confusing.
username_2: Approximating survival times to a restricted grid of time values can greatly improve the performance. 1000 time points is way too many. By the way, in randomForestSRC they have a parameter for facilitating that operation. I don't feel like such a parameter is absolutely needed (I prefer full control in defining the time grid myself), but it might be a useful one to have.
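A minimal sketch of that idea, reusing the simulated data `M` from above (the grid width of 30 is an arbitrary, illustrative choice):

```r
# Round event times onto a coarser grid before training, so far fewer
# unique time points (and thus much smaller per-node CHFs) are stored.
M$time <- round(M$time / 30) * 30
length(unique(M$time))  # far fewer unique event times than before
```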
username_3: Dear all,
I landed on this page looking for explanations on why the Ranger function skyrockets the memory usage. Is there anything from the user side that can be done to avoid/minimize such high usage?
Thank you.
Regards,
Xavier
username_0: Hi @username_3, to point out a few things I did following @username_1 recommendations:
- aggregated the longitudinal representation, e.g. ~1000 days (3y of history) -> 36 months.
- increased the threshold for `min.node.size`, per Marvin's comment [above](https://github.com/imbs-hl/ranger/issues/202#issuecomment-313449199)
- reduced `ntrees`
- more a workaround than a solution: train multiple smaller RFs (whatever fits in RAM), then (a) use an ensemble of RFs or (b) combine them with caution
Besides that, I also played around with other RF implementations. As far as I know, ranger is still the most efficient implementation in R for (survival) random forests.
username_1: @username_3 Please give some details (best with reproducible example). |
svalinn/DAGMC | 951836887 | Title: Housekeeping test may not be correct, but not causing errors
Question:
username_0: **Describe the Bug**<br/>
The housekeeping test should run `clang-format` but there appears to be an error finding it.
**To Reproduce**<br/>
All PR's will launch the Housekeeping test. Here is [one example](https://github.com/svalinn/DAGMC/pull/727/checks?check_run_id=3146406659) (expand the "Housekeeping" job on the right).
The script runs `find` and passes the results through `clang-format`. When it cannot find `clang-format`, it reports an error but also does not produce any output that is later checked for pass/fail.
**Expected Behavior**<br/>
The `find` command should function correctly and possibly generate output.

Answers:
username_1: Addressed in #776
Status: Issue closed
|
umap-project/umap | 115695659 | Title: When setting Debug = False, static content is no longer loaded
Question:
username_0: When I set `DEBUG = False` URLs like http://example.com/static/storage/reqs/leaflet/leaflet.css throw a 404 error. Is this expected behaviour?
Answers:
username_1: Yes. Python should not serve statics. So when in production, you need to set up nginx (or whatever HTTP server you are using) to do that. See: https://docs.djangoproject.com/en/dev/howto/deployment/
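For example, a minimal nginx block of roughly this shape (the paths are placeholders, not umap's actual layout) lets nginx serve the collected static files directly:

```nginx
location /static/ {
    # Point this at the directory Django's collectstatic writes to.
    alias /srv/umap/static/;
}
```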
Status: Issue closed
username_0: thanks once again. I've put an `Alias /static /static/folder` now |
NIFCLOUD-mbaas/UserCommunity | 645630724 | Title: I want to link member information with datastore information
Question:
username_0: I implemented a login feature in JavaScript, but how can I link the data that is entered and saved on the screen after the transition to the member information (email address)?
I think this is the pointer or relation feature described in NIFCLOUD's "Basic Usage" docs. Is that right? If so, I don't really understand what the fruit and food used in the examples represent, so I'd appreciate an explanation.
Thank you in advance.
Answers:
username_1: There are several ways to link the data.
## If you want to add extra information to a user
For example, a phone number or address. In this case, add the data to the user object and update it.
```js
currentUser
  .set('tel', tel)
  .set('address', address)
  .update();
```
## If you want the data to be viewable and updatable only by that user
Set the user in the ACL:
https://mbaas.nifcloud.com/doc/current/datastore/contents_javascript.html
## If you want to express that the data is owned by that user
To link data in the datastore with user information, use a pointer:
https://mbaas.nifcloud.com/doc/current/datastore/basic_usage_javascript.html#%E3%83%9D%E3%82%A4%E3%83%B3%E3%82%BF
fruit and food are just samples, so they have no particular meaning. You can simply do something like `item.set('user', currentUser)` (the `item` here is also just a sample with no particular meaning).
- If user : data = 1 : n, attach the information to the user, like `currentUser.set('item', item)`.
- If data : user = 1 : n, attach the information to the data, like `item.set('user', currentUser)`.
username_0: Thank you!
Sorry for the late reply.
It worked!!
Sorry, I'd like to ask one more question:
how can I display the user data registered above on that user's HTML page?
Is it correct that I should use fetchAll()?
It doesn't display properly...
username_1: The actual display depends on the library or framework you use, so it's hard to say... You can get the data itself with `currentUser.get('tel')` and so on.
username_0: That's true... my apologies.
Thank you for the explanation!
It really helped my understanding!
I'll ask again in the future!!
Status: Issue closed
|
victorquinn/memcache-plus | 175617779 | Title: additional memcached commands
Question:
username_0: Hi,
Could you please implement the additional memcached commands:
add, replace, append, prepend, cas, incr/decr?
thx,
Answers:
username_1: Yes, definitely! They were always on my todo but we didn't use them internally so they got lost a bit. Will try to implement them ASAP though :)
Status: Issue closed
username_1: Booyah |
wallabyjs/public | 133810268 | Title: Default Phantom version in documentation
Question:
username_0: To run in PhantomJS 2, you have to specify the runner:
```
env: {
runner: require('phantomjs2-ext').path
}
```
So I recommend noting in the documentation that wallaby uses PhantomJS **1.9.8** to run tests by default.
Why don't you put the documentation source files on Github so that people can make PRs?
Status: Issue closed
Answers:
username_1: Thanks, added the note to the docs.
Yes, it's a good idea and we are planning to put the documentation source files on Github at some stage. |
rossfuhrman/_why_the_lucky_markov | 464721291 | Title: Can we move on? He’s simplified reading from the underside of the Chief of Police’s invitation to join him tonight at the at symbol as a polite dip of the dead lottery of his mouth.
Question:
username_0: Toot: Can we move on? He’s simplified reading from the underside of the Chief of Police’s invitation to join him tonight at the at symbol as a polite dip of the dead lottery of his mouth.
One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots |
android/connectivity-samples | 679904393 | Title: Connection Failed
Question:
username_0: When I run the code I cannot connect to any Bluetooth device.
It always shows two errors:
BluetoothChatService: Socket Type: Secureaccept() failed
BluetoothChatService: Socket Type: Insecureaccept() failed
Please help to fix this error.
Thanks.
Answers:
username_1: have you been able to resolve the issue you encountered? If not might you be able to clarify?
- The specific project that pertains to the issue (for example: the 'BluetoothLeGatt' project)
- Steps to repro the issue, including the API version and Android device model
- If the issue is not about a sample, would you please file on Stackoverflow and we will take a look there? |
sj26/mailcatcher | 60500558 | Title: Internal Server Error when viewing a message
Question:
username_0: Remote Address:192.168.42.101:1080
Request URL:http://dolibarr.local:1080/messages/1.json
Request Method:GET
Status Code:500 Internal Server Error
Response Headers
Connection:keep-alive
Content-Length:30
Content-Type:text/html;charset=utf-8
Server:thin 1.5.1 codename Straight Razor
X-Content-Type-Options:nosniff
X-Frame-Options:SAMEORIGIN
X-XSS-Protection:1; mode=block
Request Headers
Accept:application/json, text/javascript, */*; q=0.01
Accept-Encoding:gzip, deflate, sdch
Accept-Language:es-ES,es;q=0.8,en;q=0.6,gl;q=0.4
Connection:keep-alive
Cookie:DOLSESSID_2afec227796cda3b39c0f7714dbb561c=o7plgqhm5eqrnmb40ertrdi975; XDEBUG_SESSION=phpstorm
Host:dolibarr.local:1080
Referer:http://dolibarr.local:1080/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.76 Safari/537.36
X-Requested-With:XMLHttpRequest
Status: Issue closed
Answers:
username_0: Duplicated of #201 |
spatie/laravel-analytics | 206110262 | Title: Laravel LTS
Question:
username_0: With the new releases of laravel-analytics potentially breaking older release configs (e.g PHP7, Json vs p.12 etc) is there any possibility you might consider maintaining a release specifically for Laravel 5.1. I guess many users might be locked into 5.1 as it is Laravel's LTS release?
Thanks
Answers:
username_1: I only maintain what I use myself. I'm not using legacy Laravel releases anymore.
You're free to fork this package and maintain your own L5.1 compatible copy.
(Imho sticking on LTS releases for your applications is [a bad idea](https://medium.com/@jasonmccreary/laravel-lts-is-a-trap-97b1d1103961#.l0sl16utj))
Status: Issue closed
|
openpracticelibrary/openpracticelibrary | 1174392676 | Title: Feature request: "I used this practice"
Question:
username_0: 05/03/2022 05:23:44
"I would like to suggest a new functionality within each practice.The functionality would be button or icon (same as like) but informing that I as a user used the practice.For example: I've already used the "Priority Sliders" practice. And Leave it marked that I have already used it. " - @andrecataldo
via feedback form
Answers:
username_0: duplicate of #1040 |
Yismen/dainsys | 452660366 | Title: Employees
Question:
username_0: **Index**
- should not show the other phone column
- should display project and salary info
**Edit**
- Add employee name at the top of the form, linked to details (perhaps a Vue component) you can throw in each tab, or just between the tab controls and tab contents
**Show**
- status
- position should have more details, such as department, project and salary
- should show site
- marital status
- card and punch
- social security number
- bank and bank account
- supervisor
- nationality<issue_closed>
Status: Issue closed |
react-monaco-editor/react-monaco-editor | 662630415 | Title: how to reduce js bundle size in my react app
Question:
username_0: I run `npm run build` to package my React app.
This library is so big that it makes the page slower to load.
How can I produce a smaller bundle?
Answers:
username_1: You can use the webpack Monaco plugin to only select what you need.
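For example, a minimal sketch with `monaco-editor-webpack-plugin` (the language list is just an example):
```js
// webpack.config.js
const MonacoWebpackPlugin = require('monaco-editor-webpack-plugin');

module.exports = {
  // ...your existing webpack config...
  plugins: [
    new MonacoWebpackPlugin({
      languages: ['javascript', 'typescript'], // bundle only the languages you need
    }),
  ],
};
```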
Status: Issue closed
|
go-gitea/gitea | 585691116 | Title: Update docker docs about ssh server
Question:
username_0: - Gitea version (or commit ref): v1.11.3 in docker image from dockerhub same tag
- Git version: docker image
- Operating system: docker image
- Database:
- [X] MySQL
- Can you reproduce the bug at https://try.gitea.io:
- [X] Yes (provide example URL)
## Description
It may be partially my fault, but I didn't get any error or info in the docs:
I've set up Gitea using the official docker image. Then I uploaded some SSH keys to the accounts / as deployment keys. We struggled with the SSH ports, so I updated the config. This is where the fun began :)
As we wanted to use SSH, I thought "hey, cool, enable the built-in server if Gitea has one". SSH then worked fine so far, but uploading or removing SSH keys led to errors like "unauthorized" or "key not found: 1". I spent much time finding out that the docker image ships OpenSSH as its server.
## My suggestions:
• Shouldn't the internal SSH server raise an error that the port (22) is already in use? It didn't raise one.
• Update the docker docs to point out that an SSH server is already included and the internal server shouldn't be used (at least not on port 22).
Answers:
username_1: Yes. Once you decide to use the internal SSH server, you should do two things. One is to check whether the port is already used by another application; the other is to change the config and restart Gitea. And if the internal SSH server fails to start, Gitea as a whole should exit.
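For reference, enabling the built-in server looks roughly like this in `app.ini` (a sketch; pick a port that is actually free):
```ini
[server]
; use Gitea's built-in SSH server instead of the bundled OpenSSH
START_SSH_SERVER = true
SSH_PORT         = 2222
```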
username_0: It currently doesn't look like it does that. That is the issue. All the other things have of course been my fault ;) That is why this issue is just about improving the docs and Gitea's behavior, to avoid others spending hours on this.
username_2: Hmm. This is probably my fault.
https://github.com/go-gitea/gitea/blob/c61b902538e8e4b2bb83136f190e044a6bbcdd9b/modules/ssh/ssh_graceful.go#L19
Probably should be `log.Fatal`
username_2: Ah actually it was originally just a `log.Error` so I didn't introduce the bug.
username_0: Wow that was a fast response :+1:
Thank you
I would also update the docker docs to say that the image contains an OpenSSH server.
username_2: Write what you want as a comment on that PR or here and I'll add it.
Status: Issue closed
|
mpicbg-scicomp/gearshifft | 251936749 | Title: travis integration of gearshifft
Question:
username_0: For Travis integration of gearshifft you need:
- account on [Travis CI](https://travis-ci.org/)
- will be connected to your github login
- organization access of travis to mpicbg-scicomp
- in travis go to your accounts page and if you cannot see your organization, then check "Review and add your authorized organizations." on the left side panel
- you referred to github, where you, as a member, can grant access for travis to mpicbg-scicomp
- in travis on the accounts page -> organizations -> mpicbg-scicomp you can enable travis CI for gearshifft
- if it is enabled, then another commit on master is required to launch the travis build process (it looks for .travis.yml in the repo)
- after succeeded we can insert a status icon in the README.md:
```
[](https://travis-ci.org/mpicbg-scicomp/gearshifft)
```
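For completeness, a minimal `.travis.yml` shape (illustrative only; the real file in this repo defines the actual build):
```yaml
language: cpp
compiler: gcc
script:
  - mkdir build && cd build
  - cmake ..
  - make
```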
Answers:
username_1: see #125
Status: Issue closed
|
cfpb/hmda-frontend | 973060391 | Title: Enable multiple externally configured announcement banners
Question:
username_0: The homepage announcement banner is capable of displaying multiple banners in sequence, but currently only does so with the scheduled banners (i.e. Filing period open).
We want to extend this to utilize multiple banners from the environment configuration (i.e. prod-config.json) as well.<issue_closed>
Status: Issue closed |
quarkusio/quarkus | 519247328 | Title: narayana-jta extension: Improve the way narayana properties are handled
Question:
username_0: The extension has some issues around how the Narayana transaction manager properties are initialised:
1. It records the default properties at build time - Narayana normally sets config properties using reflective access to setters on various beans. Version 5.10.0.Final of Narayana adds a method for directly passing pre-configured beans (and this will avoid the need for reflective access code).
2. It sets the Narayana property factory by reflectively changing the Java access mode of a private field that holds the factory.
3. The extension hard codes the properties in the Narayana ConfigurationInfo bean. The previous version of Narayana set these properties by looking inside the Narayana Java archive which is not efficient in Quarkus. The latest version of Narayana sets the properties directly in the code as constants. Therefore the extension no longer needs to hard code the properties in the ConfigurationInfoSubstitution extension class |
boostorg/python | 251580888 | Title: get_pointer issues on VS2017 in make_ptr_instance.hpp
Question:
username_0: If you alter VS to use unique_ptr instead of auto ptr, in
`get_derived_class_object(boost::python::detail::true_, U const volatile* x)`
U is derived such that x is a regular pointer, which causes linker errors as get_pointer is not defined for regular pointers. Trying to work around this with an ifdef leads to massive errors elsewhere (the errors may not be causal, its not clear). In any event, somewhere in here type deduction/dispatch is not working correctly and it appears to have been masked by auto_ptr. This may or may not be a VS specific issue. This bug report is rather vague, but I was hoping someone might have an idea of what is going on since I'm banging my head against a wall trying to figure it out given I can't get tests to run locally.
Answers:
username_1: Thanks for reporting this. I have seen (seemingly) similar symptoms on Windows, and couldn't figure out what was going on. So thanks for helping to clarify. I have only casual exposure to Windows (and much less to MSVC), so my help will be of limited value. Sorry.
username_2: Hi,
I have no insight in the error, however I Don t think it is the same as in #116 as that error was corrected in msvc 2017, unless it got broken again in the latest update of msvc2017 which was released last week
Having said this I have the impression that support is currently a bit brittle (and I am actually concerned about that) however I have absolutely no suggestions to make...
username_0: Hmm, wait, the auto test suite is not using 2017, does that mean args should fail? Have I been banging my head against a known bug here?
username_0: Ok, so I figured out that AppVeyor does not actually use 2017, see #150, so this is actually not a real bug, as it is always using the 2015 buggy compiler.
Status: Issue closed
|
rails/rails | 1133013217 | Title: NameError: undefined local variable or method `state' for ActiveSupport::IsolatedExecutionState:Module
Question:
username_0: ### Steps to reproduce
<!-- (Guidelines for creating a bug report are [available
here](https://edgeguides.rubyonrails.org/contributing_to_ruby_on_rails.html#creating-a-bug-report)) -->
With 7.0.2.1, call either [key? or delete](https://github.com/rails/rails/blob/v7.0.2.1/activesupport/lib/active_support/isolated_execution_state.rb#L40-L46) in ActiveSupport::IsolatedExecutionState.
<!-- Paste your executable test case created from one of the scripts found [here](https://edgeguides.rubyonrails.org/contributing_to_ruby_on_rails.html#create-an-executable-test-case) below: -->
```ruby
# Minimal repro (hypothetical, but matches the report): either call raises
# NameError: undefined local variable or method `state'
require "active_support/isolated_execution_state"
ActiveSupport::IsolatedExecutionState.key?(:anything)
ActiveSupport::IsolatedExecutionState.delete(:anything)
```
### Expected behavior
State should be defined
### Actual behavior
It appears the newest release includes only part of a change made to this file in active support, [active_support/isolated_execution_state.rb](https://github.com/rails/rails/blob/v7.0.2.1/activesupport/lib/active_support/isolated_execution_state.rb).
In this [commit](https://github.com/rails/rails/commit/449101e753c20af35661afd31ff3fa090b7e919d#diff-258986eb56946168eedb2cbfd0c74bb7ec73ef1c319bdf1d8b8216d205df1219), it looks like there was work underway to update this code to use `state`, but only part of that change is present in the 7.0.2.1 release.
### System configuration
**Rails version**: 7.0.2.1
**Ruby version**: 3.1.0<issue_closed>
Status: Issue closed |
objecthub/swift-markdownkit | 1123711544 | Title: Html inside of code blocks gets rendered as html
Question:
username_0: ## Problem
Using the HTMLGenerator, if there is html inside of code blocks, it will be rendered as html instead of as text. This causes undefined behaviour.
Example: here, a css stylesheet is being rendered, messing up the webpage.

## Proposed solution
Replace all html characters with escape characters
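A minimal sketch of the escaping idea (illustrative; the actual fix that landed later may differ):
```swift
import Foundation

// Sketch: escape the characters HTML treats specially.
// "&" must be replaced first so other escapes are not double-mangled.
func escapeHTML(_ text: String) -> String {
    return text
        .replacingOccurrences(of: "&", with: "&amp;")
        .replacingOccurrences(of: "<", with: "&lt;")
        .replacingOccurrences(of: ">", with: "&gt;")
        .replacingOccurrences(of: "\"", with: "&quot;")
}
```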
Answers:
username_1: Thanks a lot for the bug report! I just submitted a fix for this problem. Release 1.1.3 shouldn't have this problem anymore.
Status: Issue closed
|
EasyNetQ/EasyNetQ | 78214876 | Title: Key already exists in dictionary exception when using publisher confirms
Question:
username_0: I'm performing some stress testing of a producer/consumer solution (0.47.10.380) by pumping a large volume of messages over a simulated poor connection (using netem on an linux box) with intermittent drops in connection. Today for the first time ever I encountered an un-handled exception stating that a dictionary already contains a key which seems to be stemming from producer confirms. I have attached a screenshot of the call stack.
I will continue to monitor this and see if I can find the source and maybe attach directly to see if I can find the issue.

Answers:
username_1: @username_0
Thanks for reporting.
It could happen if a new IModel is recreated after connection restoration while sequence numbers from the previous IModel are still in the dictionary (a concurrency issue), because the new model starts NextPublishSeqNo from 1 again :)
username_2: @username_1 will you investigate this?
username_0: So a possible solution could be to monitor for a disconnect or a new IModel being created. In either case, automatically expire all of the messages contained in the publisher confirm dictionary, as they can't be reliably guaranteed any more.
username_2: would be nice if you test it again the latest version of easynetq and rabbitmq client, this could add more hints thanks!
username_1: @username_2 Yeap, will try to repro...
username_0: Have upgraded to version 0.49.2.389 and the issue persists.
username_1: @username_0 Could you clarify your unstable network scenarios?
Yesterday I tried to repro an issue and actually I have no idea...
username_0: @username_1 Thanks for your efforts into investigating this. I'm still trying to reproduce this while attached to the source but unfortunately am only producing issue #445. I'm continuing to try and reproduce, but is slow as once I produce issue #445 the application dies and must restart the test again. As a side note, the application dying is actually good (not ideal) in my case from a production point of view as I can just restart the app service and continue.
username_0: And just as I hit "Comment" it appeared :smile:. I have mentioned in issue #445 how I'm stress testing my application for durability. Basically: the application runs on a Win7 host, the rabbit broker 3.5.2 sits on an ubuntu vm, and I have a python script running intermittently to take the interface down ("sudo ifconfig eth0 down"). I had mentioned that I use netem, but it was not used in this case. I then attempt to publish a large volume of messages.
System.ArgumentException was unhandled
_HResult=-2147024809
_message=The key already existed in the dictionary.
HResult=-2147024809
IsTransient=false
Message=The key already existed in the dictionary.
Source=mscorlib
StackTrace:
at System.Collections.Concurrent.ConcurrentDictionary`2.System.Collections.Generic.IDictionary<TKey,TValue>.Add(TKey key, TValue value)
at EasyNetQ.Producer.PublisherConfirms.ExecutePublishWithConfirmation(IModel model, Action`1 publishAction, TaskCompletionSource`1 tcs) in c:\Users\user\Documents\EasyNetQ\EasyNetQ\Source\EasyNetQ\Producer\PublisherConfirms.cs:line 139
at EasyNetQ.Producer.PublisherConfirms.OnPublishChannelCreated(PublishChannelCreatedEvent publishChannelCreatedEvent) in c:\Users\user\Documents\EasyNetQ\EasyNetQ\Source\EasyNetQ\Producer\PublisherConfirms.cs:line 49
at EasyNetQ.EventBus.Publish[TEvent](TEvent event) in c:\Users\user\Documents\EasyNetQ\EasyNetQ\Source\EasyNetQ\IEventBus.cs:line 51
at EasyNetQ.Producer.PersistentChannel.OpenChannel() in c:\Users\user\Documents\EasyNetQ\EasyNetQ\Source\EasyNetQ\Producer\PersistentChannel.cs:line 86
at EasyNetQ.Producer.PersistentChannel.ConnectionOnConnected(ConnectionCreatedEvent event) in c:\Users\user\Documents\EasyNetQ\EasyNetQ\Source\EasyNetQ\Producer\PersistentChannel.cs:line 61
at EasyNetQ.EventBus.Publish[TEvent](TEvent event) in c:\Users\user\Documents\EasyNetQ\EasyNetQ\Source\EasyNetQ\IEventBus.cs:line 51
at EasyNetQ.PersistentConnection.OnConnected() in c:\Users\user\Documents\EasyNetQ\EasyNetQ\Source\EasyNetQ\PersistentConnection.cs:line 165
at EasyNetQ.PersistentConnection.TryToConnect(Object timer) in c:\Users\user\Documents\EasyNetQ\EasyNetQ\Source\EasyNetQ\PersistentConnection.cs:line 113
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.TimerQueueTimer.CallCallback()
at System.Threading.TimerQueueTimer.Fire()
at System.Threading.TimerQueue.FireNextTimers()
InnerException:

username_0: Just as a thought, this is one of those cases where I wonder if RabbitMQ would have been better off using a guid instead of a ulong, as it would be easier to confirm client-side that the message was received.
username_0: So after thinking a bit more about my previous comment and doing a bit of research, I wonder: would combining the channel/model id (a guid) with the sequence number guarantee unique entries in the dictionary? I assume on a reconnect the channel/model will be rebuilt with a new guid, allowing the old unacked entries to expire normally without fear of conflicts. Any thoughts?
username_0: This is just a quick rabbit example ...

username_1: @username_0 In my opinion it's ugly hack :smile:
Without any doubts you can do it, but I would like to understand why it is happened...
username_0: Ugly hack is a bit strong :smile: but with a bare ulong, with no uniqueness, in a single dictionary that can hold state from multiple channel/model sessions (i.e. new sequence numbers get generated after a disconnect/reconnect), this will always be a possibility unless the contents (previous session state) of the dictionary get flushed at the first instance of a disconnect/reconnect/new channel/model. Anyway, I'm only throwing ideas out there to try and help, not prescribing how this should be dealt with.
username_1: @username_0 could you test again with 0.51.0.409?
Status: Issue closed
|
parcel-bundler/parcel | 805054467 | Title: Parcel is unable to find module ./fs-search.linux-arm-musl.node
Question:
username_0: Error: Cannot find module './fs-search.linux-arm-musl.node'
Require stack:
- /usr/src/app/frontend/node_modules/@parcel/fs-search/index.js
- /usr/src/app/frontend/node_modules/@parcel/fs/lib/NodeFS.js
- /usr/src/app/frontend/node_modules/@parcel/fs/lib/index.js
- /usr/src/app/frontend/node_modules/parcel/lib/cli.js
- /usr/src/app/frontend/node_modules/parcel/lib/bin.js
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:880:15)
at Function.Module._load (internal/modules/cjs/loader.js:725:27)
at Module.require (internal/modules/cjs/loader.js:952:19)
at require (/usr/src/app/frontend/node_modules/v8-compile-cache/v8-compile-cache.js:159:20)
at Object.<anonymous> (/usr/src/app/frontend/node_modules/@parcel/fs-search/index.js:15:18)
at Module._compile (/usr/src/app/frontend/node_modules/v8-compile-cache/v8-compile-cache.js:192:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1092:10)
at Module.load (internal/modules/cjs/loader.js:928:32)
at Function.Module._load (internal/modules/cjs/loader.js:769:14)
at Module.require (internal/modules/cjs/loader.js:952:19) {
code: 'MODULE_NOT_FOUND',
requireStack: [
'/usr/src/app/frontend/node_modules/@parcel/fs-search/index.js',
'/usr/src/app/frontend/node_modules/@parcel/fs/lib/NodeFS.js',
'/usr/src/app/frontend/node_modules/@parcel/fs/lib/index.js',
'/usr/src/app/frontend/node_modules/parcel/lib/cli.js',
'/usr/src/app/frontend/node_modules/parcel/lib/bin.js'
]
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] production: `parcel build --no-source-maps --public-url . src/index.html`
npm ERR! Exit status 1
```
## 🔦 Context
Packaging a frontend app for distribution on `ARMv7` alpine.
## 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | `2.0.0-beta.1`
| Node | `14.15.4`
| npm/Yarn | `6.14.10`
| Operating System | `arm32v7/node:14.15.4-alpine`
<!-- Love parcel? Please consider supporting our collective:
👉 https://opencollective.com/parcel/donate -->
Answers:
username_1: We only build `fs-search.linux-arm-gnueabihf.node` at the moment, not musl.
- https://unpkg.com/browse/@parcel/[email protected]/
- https://github.com/parcel-bundler/parcel/blob/v2/.github/workflows/nightly-release.yml
username_0: That's fine, any chance a musl build is coming in the future?
I've switched to `arm32v7/node:14.15.5-stretch-slim`, everything works great now, thanks!
username_2: What if I have a FreeBSD build machine? How do I manage without binary modules for FreeBSD?
username_3: Hi there,
Any update on this issue ? I am getting from what I think, a similar one:
`Error: Cannot find module './artifacts/index.linux-arm64-gnu.node'`
I am on `node 14.17.6`, the system is an Nvidia Jetson AGX Xavier.
username_4: heya! Any updates/build for musl?
username_5: Just started getting this error today: `Error: Cannot find module './artifacts/index.freebsd-x64.node'`
Never had this problem with FreeBSD before. What changed?
username_5: Need to emphasize the importance of this for us FreeBSD users. So many Node dependencies rely on this so we are unable to do anything at work until this is resolved.
username_1: Parcel packages which are missing freebsd binaries:
- https://unpkg.com/browse/@parcel/[email protected]/
- https://unpkg.com/browse/@parcel/[email protected]/
- https://unpkg.com/browse/@parcel/[email protected]/
- https://unpkg.com/browse/@parcel/[email protected]/
- https://unpkg.com/browse/@parcel/[email protected]/package.json |
mpt1/netrunner_arena | 181325092 | Title: Crash after choosing a side
Question:
username_0: The program crashes after I click on corp or runner.
The command window tells me "Assertion failed: deck.size() > 0 && "Cannot draw from empty deck.", file main.cpp, line 115"
This happens regardless of what other settings I click beforehand.
Answers:
username_1: Most likely scenario: you copied an executable from a build later than 453 over an existing installation from an earlier build. The NRDB API changed, so existing data files are not compatible.
Try running the newest builds found here https://github.com/mpt1/netrunner_arena/releases
in a clean directory.
username_0: The build I found works, but re-downloads all images on every run.
username_1: I cannot reproduce that. It might try to download not yet available images every run. I have disabled that behavior in the new build 459.
But once you get a '200' return code the image should stay in the 'img' folder and never be downloaded again. Are you sure it is downloading **all previous downloaded** images again?
Status: Issue closed
|
SUSE/zypper-docker | 260589391 | Title: zypper-docker fails when list-patches is called with '--severity'
Question:
username_0: Hi,
```
fbaumanis@linux-pguu:~/go/src/github.com/SUSE/zypper-docker> zypper-docker lp --severity opensuse:latest
```
Then it says: `Error: no image name specified.`
This isn't right, is it?
I'm using the latest version with the latest openSUSE image.
Answers:
username_0: Sorry, it was my fault; I forgot to add the actual severity after the flag.
Issue can be closed.
Status: Issue closed
|
rbkh/angular-wireframing | 244085387 | Title: Phone devices need to be centered in working area
Question:
username_0: Currently the devices are centered in the viewport, but because of the device sidebar they appear off-center, so this needs to be fixed.

Status: Issue closed
Answers:
username_0: This was fixed in the last PR |
department-of-veterans-affairs/caseflow | 308703042 | Title: Styleguide | Delete unused colors!
Question:
username_0: ## Story
As a developer, I want to be able to see only the colors we are using in our code so that I don't get confused and potentially add more colors than needed. Really, we should follow style guide.
## AC
- [ ] Take out any color that is not in the list below:
#212121 - `$color-base`
#323a45 - `$color-gray-dark`
#5b616b - `$color-gray`
#aeb0b5 - `$color-gray-light`
#d6d7d9 - `$color-gray-lighter`
#e4e2e0 - `$color-gray-warm-light`
#0071bc - `$color-primary`
#fff - `$color-white`
#e31c3d - `$color-secondary`
#2e8540 - `$color-green`
#02bfe7 - `$color-primary-alt`
#f9dede - `$color-secondary-lightest`
#e7f4e4 - `$color-green-lightest`
#e1f3f8 - `$color-primary-alt-lightest`
#fff1d2 - `$color-gold-lightest`
#844e9f
### Special instructions
- Anything that is #757575 should be changed to #5b616b `$color-gray`
## Dependent issues
https://github.com/department-of-veterans-affairs/caseflow/issues/4315
Answers:
username_1: `#0872b9` - `#0071bc` - `$color-primary`
editing...
username_1: Looks like the visited link color is `#4c2c92` - am wondering if it should be `#844e9f`
username_1: @lakohl ^
Status: Issue closed
|
iamartyom/ngx-flag-picker | 728681585 | Title: Overriding CSS
Question:
username_0: Hi,
Thx for sharing this great component! I integrated it into one of mine.
I tried to override the color of `.select-flags`, but unfortunately nothing worked. Is there a way to do it? For example, I expected something like the sketch below to work.
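A minimal sketch of the kind of override I mean (piercing Angular's view encapsulation with `::ng-deep` is my assumption about what is needed):
```css
/* In the consuming component's stylesheet */
:host ::ng-deep .select-flags {
  color: #333; /* example color */
}
```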
Thx & Best Regards
Dennis |
mattn/go-isatty | 413910394 | Title: Termux Android cannot find `golang.org/x/sys/unix`
Question:
username_0: On Termux Android, a lib that depends on `go-isatty` is failing. It cannot find `golang.org/x/sys/unix`. It might relate to https://github.com/username_1/go-isatty/pull/28?
Answers:
username_1: What version of Go do you use?
username_0: go version go.1.11.5 android/arm64
username_0: ```
go: finding github.com/stretchr/testify/mock latest
../../../go/pkg/mod/github.com/username_1/[email protected]/isatty_linux.go:6:8: unknown import path "golang.org/x/sys/unix": cannot find package
```
username_1: AFAIK, the `go get` command collects dependencies.
username_2: I think it may be related to the compilation constraints _somewhere_ (either in this library or in `golang.org/x/sys`).
I don't believe it's related to #28. That PR only made the problem more explicit, the code wouldn't compile with `v0.0.4` of this library either.
**`main.go`**
```go
package main
import (
"fmt"
"os"
"github.com/username_1/go-isatty"
)
func main() {
fmt.Printf("%t", isatty.IsTerminal(os.Stdout.Fd()))
}
```
```sh
$ go mod init
go: creating new go.mod: module github.com/username_2/test
$ go get ./...
go: finding github.com/username_1/go-isatty v0.0.5
go: downloading github.com/username_1/go-isatty v0.0.5
go: finding golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223
go: downloading golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223
$ go mod tidy
$ GOOS=android GOARCH=arm64 go build .
/go/pkg/mod/github.com/username_1/[email protected]/isatty_linux.go:6:8: unknown import path "golang.org/x/sys/unix": cannot find package
```
_downgrading_
```sh
$ go get github.com/username_1/[email protected]
go: finding github.com/username_1/go-isatty v0.0.4
go: downloading github.com/username_1/go-isatty v0.0.4
$ GOOS=android GOARCH=arm64 go build .
# github.com/username_2/test
/usr/local/go/pkg/tool/linux_amd64/link: running gcc failed: exit status 1
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/usr/bin/ld: /tmp/go-link-981758022/go.o: Relocations in generic ELF (EM: 183)
/tmp/go-link-981758022/go.o: error adding symbols: File in wrong format
collect2: error: ld returned 1 exit status
```
username_0: Thanks @username_2 for pointing that out. You are correct.
Somehow previously everything built fine, but after updating the dependencies it broke. My initial suspect was `go-isatty`, since it was recently updated.
The `golang.org/x/sys` package has no support for Android. I then found https://github.com/username_1/go-isatty/commit/4684196194d794ae77a4dcad1a1bab9aee275dd7 (committed 28 Aug 2018), which uses `golang.org/x/sys/unix` and was released as `go-isatty v0.0.5` a couple of days ago.
username_1: ```go
package isatty
import (
"syscall"
"unsafe"
)
const ioctlReadTermios = syscall.TCGETS
// IsTerminal return true if the file descriptor is terminal.
func IsTerminal(fd uintptr) bool {
var termios syscall.Termios
_, _, err := syscall.Syscall6(syscall.SYS_IOCTL, fd, ioctlReadTermios, uintptr(unsafe.Pointer(&termios)), 0, 0, 0)
return err == 0
}
```
If you can compile this on Android, I will add this into this repository with build-constraints.
username_0: Successfully compiled 👌
username_1: Could you please try https://github.com/username_1/go-isatty/pull/30 ?
username_0: All good
Status: Issue closed
|
klikli-dev/occultism | 1111762042 | Title: Cant create world:
Question:
username_0: **Describe the bug**
When creating world it says "Incomplete Set of Tags Received from Server Please Contact Server Operator", and then when i go "ok" it takes me to the multiplayer choose server screen. It didn't happened before, started happening now with the new version
**To Reproduce**
Steps to reproduce the behavior:
1. Create world
**Expected behavior**
The world to be created
**Screenshots**
**System (please complete the following information):**
- Occultism Version: 1.25.5
- OS: Windows 10
- Minecraft Version: 1.18.1
- Modpack Link and Version, or list of mods:
57 mods
Answers:
username_0: It works fine with 1.24.3
username_1: Please provide a log file
Status: Issue closed
username_2: Same issue as OP, same mod/MC version, Disabling/removing Occultism fixes/stops issue, and worlds create as normal.
Relevant snippet from end of my log:
[2022-01-26-4.log.gz](https://github.com/username_1/occultism/files/7948204/2022-01-26-4.log.gz)
[21:01:06] [Render thread/WARN]: Failed to load datapacks, can't proceed with server load
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Missing required tags: ResourceKey[minecraft:root / minecraft:block]:occultism:tree_soil
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396) ~[?:?]
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) ~[?:?]
at net.minecraft.client.Minecraft.m_91190_(Minecraft.java:2071) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.Minecraft.doLoadLevel(Minecraft.java:1906) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.Minecraft.m_91200_(Minecraft.java:1873) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.gui.screens.worldselection.WorldSelectionList$WorldListEntry.m_101744_(WorldSelectionList.java:436) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.gui.screens.worldselection.WorldSelectionList$WorldListEntry.m_101704_(WorldSelectionList.java:342) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.gui.screens.worldselection.WorldSelectionList$WorldListEntry.m_6375_(WorldSelectionList.java:273) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.gui.components.AbstractSelectionList.m_6375_(AbstractSelectionList.java:323) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.gui.components.events.ContainerEventHandler.m_6375_(ContainerEventHandler.java:27) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.MouseHandler.lambda$onPress$0(MouseHandler.java:88) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.gui.screens.Screen.m_96579_(Screen.java:527) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.MouseHandler.m_91530_(MouseHandler.java:85) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.MouseHandler.m_168091_(MouseHandler.java:185) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.util.thread.BlockableEventLoop.execute(BlockableEventLoop.java:101) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.MouseHandler.m_91565_(MouseHandler.java:184) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at org.lwjgl.glfw.GLFWMouseButtonCallbackI.callback(GLFWMouseButtonCallbackI.java:36) ~[lwjgl-glfw-3.2.2.jar%2348!/:build 10]
at org.lwjgl.system.JNI.invokeV(Native Method) ~[lwjgl-3.2.2.jar%2344!/:build 10]
at org.lwjgl.glfw.GLFW.glfwPollEvents(GLFW.java:3101) ~[lwjgl-glfw-3.2.2.jar%2348!/:build 10]
at com.mojang.blaze3d.systems.RenderSystem.m_69495_(RenderSystem.java:202) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at com.mojang.blaze3d.platform.Window.m_85435_(Window.java:333) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.Minecraft.m_91383_(Minecraft.java:1062) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.Minecraft.m_91374_(Minecraft.java:660) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.main.Main.main(Main.java:205) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:568) ~[?:?]
at net.minecraftforge.fml.loading.targets.CommonClientLaunchHandler.lambda$launchService$0(CommonClientLaunchHandler.java:45) ~[fmlloader-1.18.1-39.0.59.jar%2323!/:?]
at cpw.mods.modlauncher.LaunchServiceHandlerDecorator.launch(LaunchServiceHandlerDecorator.java:37) [modlauncher-9.1.0.jar%235!/:?]
at cpw.mods.modlauncher.LaunchServiceHandler.launch(LaunchServiceHandler.java:53) [modlauncher-9.1.0.jar%235!/:?]
at cpw.mods.modlauncher.LaunchServiceHandler.launch(LaunchServiceHandler.java:71) [modlauncher-9.1.0.jar%235!/:?]
at cpw.mods.modlauncher.Launcher.run(Launcher.java:106) [modlauncher-9.1.0.jar%235!/:?]
at cpw.mods.modlauncher.Launcher.main(Launcher.java:77) [modlauncher-9.1.0.jar%235!/:?]
at cpw.mods.modlauncher.BootstrapLaunchConsumer.accept(BootstrapLaunchConsumer.java:26) [modlauncher-9.1.0.jar%235!/:?]
at cpw.mods.modlauncher.BootstrapLaunchConsumer.accept(BootstrapLaunchConsumer.java:23) [modlauncher-9.1.0.jar%235!/:?]
at cpw.mods.bootstraplauncher.BootstrapLauncher.main(BootstrapLauncher.java:149) [bootstraplauncher-1.0.0.jar:?]
Caused by: java.lang.IllegalStateException: Missing required tags: ResourceKey[minecraft:root / minecraft:block]:occultism:tree_soil
at net.minecraft.tags.TagManager.m_144592_(TagManager.java:58) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:718) ~[?:?]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:482) ~[?:?]
at net.minecraft.server.packs.resources.SimpleReloadInstance.m_143940_(SimpleReloadInstance.java:71) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.util.thread.BlockableEventLoop.m_6367_(BlockableEventLoop.java:151) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.util.thread.ReentrantBlockableEventLoop.m_6367_(ReentrantBlockableEventLoop.java:23) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.util.thread.BlockableEventLoop.m_7245_(BlockableEventLoop.java:125) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.util.thread.BlockableEventLoop.m_18701_(BlockableEventLoop.java:134) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
at net.minecraft.client.Minecraft.m_91190_(Minecraft.java:2070) ~[client-1.18.1-20211210.034407-srg.jar%23128!/:?]
... 34 more
[21:01:09] [Render thread/INFO]: Stopping! |
clay/clay-kiln | 428829347 | Title: Add custom keydown function to simple-list-input
Question:
username_0: ## Feature
I'm currently using the `simple-list-input` component and I would like to have a way to add custom validations at the moment a key is pressed.
Use cases:
- I would like to only have letters and spaces through the `simple-list-input`
- I would like to only have numbers through the `simple-list-input`
## Possible Solution
Add a prop to the component that allows us to pass a function to be called at the moment `@keydown.down` is called.
If we wrap the current implementation in a function that receives the `event` object, we will be able to prevent default depending on any condition added in the custom function.
What we have:
```js
@keydown.down="autocompleteFocus(false)"
```
Proposed implementation
```js
@keydown.down="handleKeyDown"
```
And call the custom function here
```js
methods: {
  // ...
  handleKeyDown(event) {
    // run the consumer-supplied validation first; it may call event.preventDefault()
    this.customKeyDownFn(event);
    this.autocompleteFocus(false);
  },
  // ...
},
```<issue_closed>
Status: Issue closed |
QubesOS/qubes-issues | 207922394 | Title: i18n regressions
Question:
username_0: Qubes-manager notifications have been broken since https://github.com/QubesOS/qubes-manager/commit/4946d7b8d0dfc945b6da0f37e2219df51ade9dac because .tr() is attempted to be called on self in places where self is not the QApplication. ([[1]](https://github.com/QubesOS/qubes-manager/blob/4946d7b8d0dfc945b6da0f37e2219df51ade9dac/qubesmanager/main.py#L99), [[2]](https://github.com/QubesOS/qubes-manager/blob/4946d7b8d0dfc945b6da0f37e2219df51ade9dac/qubesmanager/main.py#L105), [[3]](https://github.com/QubesOS/qubes-manager/blob/4946d7b8d0dfc945b6da0f37e2219df51ade9dac/qubesmanager/main.py#L120), possibly others)
This results in stack traces like the following:
```
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 804, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/site-packages/pyinotify.py", line 1505, in run
self.loop()
File "/usr/lib/python2.7/site-packages/pyinotify.py", line 1491, in loop
self.process_events()
File "/usr/lib/python2.7/site-packages/pyinotify.py", line 1287, in process_events
self._default_proc_fun(revent)
File "/usr/lib/python2.7/site-packages/pyinotify.py", line 924, in __call__
return _ProcessEvent.__call__(self, event)
File "/usr/lib/python2.7/site-packages/pyinotify.py", line 644, in __call__
return meth(event)
File "/usr/lib64/python2.7/site-packages/qubesmanager/main.py", line 105, in process_IN_CLOSE_WRITE
trayIcon.showMessage(unicode(self.tr(
AttributeError: 'QubesManagerFileWatcher' object has no attribute 'tr'
```
I [tried just `s/self.tr(/app.tr(/` everywhere](https://github.com/username_0/qubes-manager/commit/df0c6235ab9359129377aa9f2bf5a486d185586c), but app isn't available everywhere either, so apparently we need to review the scopes on a call-by-call basis? I feel like there should be a cleaner way to address this, but I don't know what that would be.
Answers:
username_0: There are also more regressions where .tr() is callable, but still missing conversion from QString to something with .format(). https://github.com/QubesOS/qubes-manager/commit/d073b3582db7d9d14431106e323ae42a9178c53a did not hit all cases, for example [here](https://github.com/QubesOS/qubes-manager/blob/ed8f889c9276e3d136c8fde900a01605a746bb93/qubesmanager/main.py#L1455-L1456).
Definitely don't release to qubes-dom0-current yet ;)
username_1: Any other example? Most of the code _is_ in QObject based classes - the only exceptions I remember are:
- few classes at the beginning of `main.py` (you've hit this one)
- `block.py`, `clipboard.py` - excluded from translation for this reason
- `backup_utils.py`, `appmenu_select.py` - nothing to translate
As for wrapping `self.tr` with `unicode` - this is rather ugly, but mostly because QString is incompatible with python builtin `str`/`unicode`... The alternative is to switch to QString completely and use `QString.arg` (`%1`, `%2`, etc) instead of `str.format` (`{}` or named variant). IMHO consistency with our other python modules is more important than consistency with PyQt.
username_0: It seems I was wrong. All .tr()s except those three (and [this](https://github.com/QubesOS/qubes-manager/blob/master/qubesmanager/backup_utils.py#L94)) indeed do seem to operate on self in a class deriving from QSomething.
PR to correct just those 3: https://github.com/QubesOS/qubes-manager/pull/26
Status: Issue closed
|
DoESLiverpool/Tosca | 137543092 | Title: Organise issues into milestones to let us track progress
Question:
username_0: Some of the things listed in the issues aren't vital for the 18th March, but would be nice things to add afterwards. To help me (and anyone else) keep track of what's needed for when, I'm going to create a few milestones and group the issues into them.
(Raising this as an issue is just the easiest way to notify everyone of what I'm up to :-)
Status: Issue closed
Answers:
username_0: Okay, things are now put into one of three milestones:
* **March Performance** - things that *have* to be done before the performance on March 18th at the Music Room
* **Stretch Goals** - things that it would be awesome if there's time to fit them in for the March Performance, but that aren't essential. If you're looking for things to do, items in the March Performance milestone should take precedence. As we get nearer to the performance deadline things from here will get bounced into the Future Development milestone instead (as there won't be time left to do them).
* **Future Development** - great ideas and other things that we want to do with the polargraphs *after* the March performance
You can now get an easy view of the essential stuff to do by looking at [just the issues for the March Performance milestone](https://github.com/DoESLiverpool/Tosca/milestones/March%20Performance). Looking forward to when that's an empty list :-D |
libgdx/libgdx | 115445125 | Title: Remove GdxRuntimeException from AssetManager.unload(String) or add a new function
Question:
username_0: I would be interested in adding an additional method to the AssetManager class which will unload an asset if it exists and will not throw an error otherwise. *AssetManager.tryUnload(String)* would increase the usability of LibGdx by allowing users to unload assets without needing to worry about whether or not the asset was loaded when the unload action takes place.
Currently attempting to unload an asset which is not loaded will throw a *GdxRuntimeException*, and a workaround for this behavior is to simply check if the asset is loaded via *AssetManager.isLoaded(String)* before unloading, however I believe this to be overly verbose for the most common case (when the developer knows an asset is loaded) and *AssetManager.unload* should not throw this exception in the first place. If an asset is not loaded, then why would we care that explicitly unloading it did not work? Alternatively, if that information is valuable to users, then perhaps a boolean return of whether or not the asset existed would be acceptable.
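A sketch of what the proposed method could look like (illustrative only, built on the existing `isLoaded`/`unload` API):
```java
/**
 * Unloads the asset if it is currently loaded; otherwise does nothing.
 * Returns whether an unload actually happened. (Proposed sketch, not libGDX API.)
 */
public boolean tryUnload(String fileName) {
    if (!isLoaded(fileName)) {
        return false;
    }
    unload(fileName);
    return true;
}
```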
Answers:
username_1: Trying to unload something that you haven't loaded might result in unwanted behavior since they're ref counted. It sounds like you're trying to solve the wrong problem. Please provide a valid use-case for when you would like to unload something that you haven't loaded.
username_0: In my use-case, I was disposing of a Music asset during its onCompletion event, and additionally when the parent container was disposed (i.e., Whichever comes first). The unload call in the container's dispose event was throwing the exception
Status: Issue closed
username_1: Yeah that sounds like you're trying to solve the wrong problem, you should fix your code instead. Imagine what would happen if you have two of those cases using the same asset. Better throw an exception instead of to rely on a race condition. Closing this. Please use the forum or IRC if you need help using the API. |
ContinuumIO/anaconda-issues | 514814238 | Title: Build jupyter for python 3.8
Question:
username_0: but https://anaconda.org/anaconda/jupyter/files shows only py35/py36/py37 builds.
### Actual Behavior
<!-- What actually happens? -->
```bash
conda create -n jupyter python=3.8.0 jupyter=1.0.0
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Package python conflicts for:
jupyter=1.0.0 -> ipywidgets -> nbformat[version='>=4.2.0'] -> jsonschema[version='>=2.4,!=2.5.0'] -> functools32 -> python[version='<3']
python=3.8.0
jupyter=1.0.0 -> python[version='2.7.*|3.4.*|3.5.*|3.6.*|>=2.7,<2.8.0a0|>=3.5,<3.6.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0']
Package pip conflicts for:
python=3.8.0 -> pip
jupyter=1.0.0 -> python=3.6 -> pip
Package python-dateutil conflicts for:
jupyter=1.0.0 -> ipykernel -> jupyter_client -> python-dateutil[version='>=2.1']
Package certifi conflicts for:
python=3.8.0 -> pip -> setuptools -> certifi[version='>=2016.09']
jupyter=1.0.0 -> jupyter_console -> pygments -> setuptools -> certifi[version='>=2016.09|>=2016.9.26']
jupyter=1.0.0 -> ipykernel -> tornado[version='>=4.0'] -> certifi
Package wheel conflicts for:
jupyter=1.0.0 -> python=3.6 -> pip -> wheel
python=3.8.0 -> pip -> wheel
Package ca-certificates conflicts for:
jupyter=1.0.0 -> python=3.6 -> ca-certificates
python=3.8.0 -> openssl[version='>=1.1.1d,<1.1.2a'] -> ca-certificates
Package ipython conflicts for:
jupyter=1.0.0 -> notebook -> ipython[version='>=4.0|>=4.0.0|>=5.0']
Package setuptools conflicts for:
jupyter=1.0.0 -> jupyter_console -> pygments -> setuptools
python=3.8.0 -> pip -> setuptools
```
### Expected Behavior
Installs ok
### Steps to Reproduce
<!-- What steps will reproduce the issue? -->
```bash
conda create -n jupyter python=3.8.0 jupyter=1.0.0
```
##### Anaconda or Miniconda version:
[Truncated]
```bash
conda info # (partial)
active environment : base
shell level : 1
conda version : 4.7.12
conda-build version : not installed
python version : 3.7.4.final.0
platform : linux-64
user-agent : conda/4.7.12 requests/2.22.0 CPython/3.7.4 Linux/2.6.32-754.14.2.el6.x86_64 rhel/6.10 glibc/2.12
UID:GID : 200593:200272
netrc file : None
offline mode : False
```
</details>
##### `conda list --show-channel-urls`
main / free / forge
Answers:
username_1: python 3.8 builds of all packages are underway and are being released as they are completed. This will be out soon, but we have no concrete timeline for you - just whenever the build machines get to it.
username_2: @username_1 it seems like it takes longer than few days. Is there some ETA?
username_0: seems pretty slow
* numpy got built for py38 on osx-64 & linux-64 built ~7 days ago (but not windows)
* pandas has no py38 build yet (except on conda-forge 10 days ago)
* ipython has no py38 build yet
so, I'm not holding my breath :-)
username_3: I tried making a new environment with `conda create -n py38 -c conda-forge python=3.8 jupyter`, but when I run `jupyter console` I get Python 3.7.3. Is this because installing `jupyter` doesn't install a 3.8 kernel with it? Do I have to do extra steps after installing python 3.8 and jupyter 1.0.0-py_2 to get a 3.8 kernel? How do I do that?
username_4: The above is happening because nb_conda_kernels is required to open kernels from conda envs other than the one (usually base) from which you started the Jupyter service, and it doesn't support installation on 3.8 as of yet
username_0: Finally!
* jupyter py38 packages were (last) built on February 29.
* `conda create` defaults to python 3.8 as expected
* I am able to access python38 in jupyterlab afterwards
so, closing
Status: Issue closed
|
COVIDAnalytics/website | 596011642 | Title: Prepare ventilator allocation page
Question:
username_0: Quick explanations of the page:
1. At the beginning of the page a text explaining the motivation will be displayed.
2. (i) Then we would show on the left side of the page a US coloured map with the shortage of each state across time (with slide bar or calendar to choose from as before).
(ii) On the right side, we would like to display 2 curves on a graph with the exact same design as previously for active/number of deaths, etc
Note: if it is too hard to display something on the left and on the right, display successively is of course okay.
3. A small graph which is just a x-axis with dots on it (corresponding to the peaks of each state)
Note: Ideally this graph would be displayed just under 2.(ii) on the right side, otherwise okay of course.
4. The exact map we discussed yesterday with the different scenarios possible regarding reallocation.
However, the users should a priori be able to tune 3 parameters.
We propose a very large CSV with the columns: c1: state, c2: date, c3: shortage, c4: parameter_1_value, c5: parameter_2_value, c6: parameter_3_value.
With this, we could easily filter on columns c4, c5, and c6. But there will be a huge number of rows (10 times more than previously). Will that scale to a smooth screen display?
Status: Issue closed |
isawnyu/isaw.web | 171618056 | Title: exhibition batch update: KeyError
Question:
username_0: *reviewing [scrumdo story 121](https://www.scrumdo.com/projects/story_permalink/1098334)*
The problem seems to be that "Plone" is hard-coded in the script as the site root, but that is not how the actual site is configured.
```
plone@isaw4-dev:/srv/isaw.web$ bin/client1 run scripts/batch_update.py --dry-run /home/telliott/exhib-test-2.json
2016-08-17 05:46:25 WARNING ZODB.blob (2914) Blob dir /srv/isaw.web/var/blobstorage/ has insecure mode setting
2016-08-17 05:46:29 WARNING plone.behavior Specifying 'for' in behavior 'Related items' if no 'factory' is given has no effect and is superfluous.
2016-08-17 05:46:29 WARNING plone.behavior Specifying 'for' in behavior 'Dynamic SearchableText indexer behavior' if no 'factory' is given has no effect and is superfluous.
2016-08-17 05:46:29 WARNING collective.quickupload.interfaces Importing interfaces from collective.quickupload.browser.interfaces is deprecated, please import from collective.quickupload.interfaces
2016-08-17 05:46:30 WARNING plone.behavior Specifying 'for' in behavior 'Enable Image Cropping' if no 'factory' is given has no effect and is superfluous.
2016-08-17 05:46:33 WARNING Init Class Products.Five.metaclass.UpgradeFolders has a security declaration for nonexistent method 'canAddPressReleases'
2016-08-17 05:46:33 WARNING Init Class Products.Five.metaclass.UpgradeFolders has a security declaration for nonexistent method 'getContacts'
2016-08-17 05:46:33 WARNING Init Class Products.Five.metaclass.UpgradeFolders has a security declaration for nonexistent method 'canAddPressClips'
2016-08-17 05:46:33 WARNING Init Class Products.Five.metaclass.UpgradeFolders has a security declaration for nonexistent method 'getClips'
2016-08-17 05:46:33 WARNING Init Class Products.Five.metaclass.UpgradeFolders has a security declaration for nonexistent method 'canAddPressContacts'
2016-08-17 05:46:33 WARNING Init Class Products.Five.metaclass.UpgradeFolders has a security declaration for nonexistent method 'showTwoStatePrivateWarning'
2016-08-17 05:46:33 WARNING Init Class Products.Five.metaclass.UpgradeFolders has a security declaration for nonexistent method 'publishPressRoomInfrastructure'
2016-08-17 05:46:33 WARNING Init Class Products.Five.metaclass.UpgradeFolders has a security declaration for nonexistent method 'getReleases'
2016-08-17 05:46:33 WARNING Init Class Products.Five.metaclass.MyFormWrapper has a security declaration for nonexistent method 'RSS'
2016-08-17 05:46:33 WARNING Init Class Products.Five.metaclass.RedirectsView has a security declaration for nonexistent method 'errors'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "scripts/batch_update.py", line 67, in <module>
site = getSite(app)
File "scripts/batch_update.py", line 42, in getSite
site = app.unrestrictedTraverse("Plone")
File "/srv/isaw.web/eggs/Zope2-2.13.24-py2.7.egg/OFS/Traversable.py", line 285, in unrestrictedTraverse
raise e
KeyError: 'Plone'
```
Here's the test file content:
```JSON
{
"items": [
{
"/exhibitions/foobition/checklist": {
"title": "Conical Sundial with Oscan Dedicatory Inscription",
"context": "Stabian Baths, Pompeii",
"date": "2nd century BCE",
"dimensions": "H. 43 cm; W. 50.5 cm; D. 23 cm",
"inventory_num": "2541",
"lender": "Museo Archeologico Nazionale di Napoli",
"medium": "Marble"
}
},
{
"/exhibitions/foobition/checklist": {
"title": "Quadruple Vertical Sundial with Inscription",
"context": "Little Metropolitan Church, Athens",
"date": "ca. 300 CE",
"dimensions": "H. 50.8 cm; W. 99 cm; D. 32 cm",
"inventory_num": "1816,0610.186",
"lender": "British Museum, London",
"medium": "Marble",
"notes": "Phaidros son of Zoilos, the archon who rebuilt the Theater of Dionysos at Athens c. 300 CE"
}
}
]
}
```
Answers:
username_1: Apologies for the lousy testing. Added a site argument for specifying the site, so "--site isaw" will work in this case. Tested it on staging.
Status: Issue closed
username_0: fixed! |
wireservice/csvkit | 472865918 | Title: When reading non-utf8 stdin, emit a more specific warning; for Python 3.7+, use stdin.reconfigure()
Question:
username_0: I've skimmed the relevant parts in the source code but haven't yet dug in too deep, so a couple of quick comments/questions:
#### 1. Have the error message be more explicit when stdin is used?
Is it possible/non-trivial to adjust the warning message to say something specifically about stdin when stdin is the input? I have to admit all this time when piping into a csvkit util and getting an encoding error, I interpreted the message `with the -e flag or with the PYTHONIOENCODING environment variable` to mean that I should use either `-e` **or** set `PYTHONIOENCODING` – i.e. if `-e` wasn't working, it was because I hadn't figured out the proper encoding (though I guess I could have interpreted `Your file is not "utf-8" encoded` to mean that csvkit wasn't seeing my `-e` flag at all)
#### 2. Automatically configure the encoding for stdin for Pythons 3.7+
I saw that Python 3.7 adds a new `stdin` [method to set its encoding](https://stackoverflow.com/a/16549381/160863):
```py
sys.stdin.reconfigure(encoding='windows-1252')
```
I know the 3.7 userbase is probably still a relative minority, but is it worth adding conditional behavior to [cli.py](https://github.com/wireservice/csvkit/blob/master/csvkit/cli.py#L218) when `six` detects version >= 3.7?
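For concreteness, here's a sketch of what that conditional could look like (the helper name and placement are my assumptions, not csvkit's actual internals):
```py
import sys

def configure_stdin_encoding(encoding):
    # Hypothetical helper; `encoding` would come from the -e flag.
    if encoding and sys.version_info >= (3, 7):
        # Python 3.7+ can re-wrap stdin in place with the requested encoding.
        sys.stdin.reconfigure(encoding=encoding)
    # On older Pythons, fall back to the current PYTHONIOENCODING behavior.

configure_stdin_encoding('windows-1252')
```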
Answers:
username_1: Thanks for this! I have scheduled it for the next release (sometime before April 2020).
username_2: Same problem here. On Ubuntu 18 with Python 3.8.5, I get these errors:
```
/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py:400: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
Your file is not "utf-8" encoded. Please specify the correct encoding with the -e flag or with the PYTHONIOENCODING environment variable. Use the -v flag to see the complete error.
``` |
vmprof/vmprof-python | 90956205 | Title: Everything is undefined?
Question:
username_0: I run a script under `vmprof` and `vmprof -n`.
What it does, roughly:
* spawn N worker threads
* wait for these threads
Each worker does:
* make an HTTP PUT using libcurl
* poll for the resource to become readable using HTTP GET, with `time.sleep()` in between (roughly as sketched below).
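A runnable sketch of the script's shape (the stub functions stand in for the real libcurl calls; all names here are made up):
```py
import threading
import time

def http_put(resource):
    pass  # stand-in for the libcurl HTTP PUT

def resource_is_readable(resource):
    return True  # stand-in for the polling HTTP GET

def worker(resource):
    http_put(resource)
    while not resource_is_readable(resource):
        time.sleep(0.5)

threads = [threading.Thread(target=worker, args=(name,)) for name in ("a", "b", "c")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```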

Answers:
username_0: Here's the corresponding JSON from vmprof-server:
```json
{"version": 1, "profiles": ["py:<module>:1:scripts/red_storm.py", "8070590333543882544", 9, {}, [["PyEval_EvalFrameEx", "139801367889136", 9, {}, [["PyEval_CallObjectWithKeywords", "139801367887504", 9, {}, [["PyObject_Call", "139801367292320", 9, {}, [["builtin___import__", "139801367880144", 9, {}, [["PyImport_ImportModuleLevel", "139801367986944", 9, {}, [["ensure_fromlist", "139801367985264", 9, {}, [["import_submodule", "139801367983888", 9, {}, [["load_source_module", "139801367980608", 9, {}, [["PyImport_ExecCodeModuleEx", "139801367980176", 9, {}, [["PyEval_EvalCode", "139801367915456", 9, {}, [["PyEval_EvalCodeEx", "139801367913056", 9, {}, [["py:<module>:1:scripts/put_multifile.py", "8070590333543911856", 9, {}, [["PyEval_EvalFrameEx", "139801367889136", 9, {}, [["PyEval_CallObjectWithKeywords", "139801367887504", 9, {}, [["PyObject_Call", "139801367292320", 9, {}, [["builtin___import__", "139801367880144", 9, {}, [["PyImport_ImportModuleLevel", "139801367986944", 9, {}, [["load_next", "139801367984640", 9, {}, [["import_submodule", "139801367983888", 9, {}, [["load_source_module", "139801367980608", 2, {}, [["PyImport_ExecCodeModuleEx", "139801367980176", 2, {}, [["PyEval_EvalCode", "139801367915456", 2, {}, [["PyEval_EvalCodeEx", "139801367913056", 2, {}, [["py:<module>:67:/usr/lib64/python2.7/httplib.py", "8070590333538386992", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_CallObjectWithKeywords", "139801367887504", 1, {}, [["PyObject_Call", "139801367292320", 1, {}, [["builtin___import__", "139801367880144", 1, {}, [["PyImport_ImportModuleLevel", "139801367986944", 1, {}, [["load_next", "139801367984640", 1, {}, [["import_submodule", "139801367983888", 1, {}, [["load_source_module", "139801367980608", 1, {}, [["PyImport_ExecCodeModuleEx", "139801367980176", 1, {}, [["PyEval_EvalCode", "139801367915456", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:<module>:29:/usr/lib64/python2.7/urlparse.py", "8070590333533732656", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:namedtuple:293:/usr/lib64/python2.7/collections.py", "8070590333568066480", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyRun_StringFlags", "139801368022608", 1, {}, [["run_mod", "139801368018960", 1, {}, [["PyAST_Compile", "139801367942176", 1, {}, [["compiler_body.part.27", "139801367938608", 1, {}, [["compiler_visit_stmt", "139801367930288", 1, {}, [["compiler_body.part.27", "139801367938608", 1, {}, [["compiler_visit_stmt", "139801367930288", 1, {}, [["assemble", "139801367921664", 1, {}, [["PyCode_Optimize", "139801368011056", 1, {}, [["PyCode_Optimize", "139801368011056", 1, {}, []]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], ["py:<module>:1:/dima/tmp/sync_18/module/sync/python/src/application.py", "8070590333520288304", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_CallObjectWithKeywords", "139801367887504", 1, {}, [["PyObject_Call", "139801367292320", 1, {}, [["builtin___import__", "139801367880144", 1, {}, [["PyImport_ImportModuleLevel", "139801367986944", 1, {}, [["load_next", "139801367984640", 1, {}, [["import_submodule", "139801367983888", 1, {}, [["load_source_module", "139801367980608", 1, {}, [["PyImport_ExecCodeModuleEx", "139801367980176", 1, {}, [["PyEval_EvalCode", "139801367915456", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:<module>:45:/usr/lib64/python2.7/uuid.py", "8070590333520412848", 1, {}, [["PyEval_EvalFrameEx", 
"139801367889136", 1, {}, [["PyEval_CallObjectWithKeywords", "139801367887504", 1, {}, [["PyObject_Call", "139801367292320", 1, {}, [["builtin___import__", "139801367880144", 1, {}, [["PyImport_ImportModuleLevel", "139801367986944", 1, {}, [["load_next", "139801367984640", 1, {}, [["import_submodule", "139801367983888", 1, {}, [["load_package", "139801367985840", 1, {}, [["load_source_module", "139801367980608", 1, {}, [["PyImport_ExecCodeModuleEx", "139801367980176", 1, {}, [["PyEval_EvalCode", "139801367915456", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:<module>:4:/usr/lib64/python2.7/ctypes/__init__.py", "8070590333520436912", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:_reset_cache:265:/usr/lib64/python2.7/ctypes/__init__.py", "8070590333520433200", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["0x00007f260507139c:/usr/lib/python2.7/lib-dynload/_ctypes.so", "139801269834651", 1, {}, [["PyObject_CallFunction", "139801367292704", 1, {}, [["call_function_tail", "139801367292544", 1, {}, [["PyObject_Call", "139801367292320", 1, {}, [["type_call", "139801367639392", 1, {}, [["0x00007f260506db25:/usr/lib/python2.7/lib-dynload/_ctypes.so", "139801269820196", 1, {}, [["type_new", "139801367665888", 1, {}, [["PyType_Ready", "139801367656608", 1, {}, [["add_subclass.isra.25", "139801367635664", 1, {}, [["PyWeakref_NewRef", "139801367686592", 1, {}, [["new_weakref", "139801367674368", 1, {}, [["_PyObject_GC_New", "139801368107984", 1, {}, [["_PyObject_GC_Malloc", "139801368107728", 1, {}, [["collect", "139801368104352", 1, {}, [["dict_traverse", "139801367518208", 1, {}, [["PyDict_Next", "139801367518048", 1, {}, [["PyDict_Next", "139801367518048", 1, {}, []]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], ["load_package", "139801367985840", 7, {}, [["load_source_module", "139801367980608", 7, {}, [["PyImport_ExecCodeModuleEx", "139801367980176", 7, {}, [["PyEval_EvalCode", "139801367915456", 7, {}, [["PyEval_EvalCodeEx", "139801367913056", 7, {}, [["py:<module>:58:/dima/plc/lib/python2.7/site-packages/cherrypy/__init__.py", "8070590333526607024", 7, {}, [["PyEval_EvalFrameEx", "139801367889136", 7, {}, [["PyEval_CallObjectWithKeywords", "139801367887504", 6, {}, [["PyObject_Call", "139801367292320", 6, {}, [["builtin___import__", "139801367880144", 6, {}, [["PyImport_ImportModuleLevel", "139801367986944", 6, {}, [["load_next", "139801367984640", 1, {}, [["import_submodule", "139801367983888", 1, {}, [["load_source_module", "139801367980608", 1, {}, [["PyImport_ExecCodeModuleEx", "139801367980176", 1, {}, [["PyEval_EvalCode", "139801367915456", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:<module>:1:/dima/plc/lib/python2.7/site-packages/cherrypy/_cperror.py", "8070590333526573616", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_CallObjectWithKeywords", "139801367887504", 1, {}, [["PyObject_Call", "139801367292320", 1, {}, [["builtin___import__", "139801367880144", 1, {}, [["PyImport_ImportModuleLevel", "139801367986944", 1, {}, [["load_next", "139801367984640", 1, {}, [["import_submodule", "139801367983888", 1, {}, [["load_source_module", "139801367980608", 1, {}, [["read_compiled_module", "139801367974496", 1, {}, [["PyMarshal_ReadLastObjectFromFile", "139801368003568", 1, {}, [["PyMarshal_ReadObjectFromString", "139801368003424", 1, {}, [["r_object", "139801367993472", 1, {}, [["r_object", "139801367993472", 1, {}, [["r_object", "139801367993472", 1, {}, 
[["r_object", "139801367993472", 1, {}, [["r_object", "139801367993472", 1, {}, []]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], ["ensure_fromlist", "139801367985264", 5, {}, [["import_submodule", "139801367983888", 5, {}, [["load_source_module", "139801367980608", 5, {}, [["PyImport_ExecCodeModuleEx", "139801367980176", 5, {}, [["PyEval_EvalCode", "139801367915456", 5, {}, [["PyEval_EvalCodeEx", "139801367913056", 5, {}, [["py:<module>:10:/dima/plc/lib/python2.7/site-packages/cherrypy/_cpdispatch.py", "8070590333526231472", 2, {}, [["PyEval_EvalFrameEx", "139801367889136", 2, {}, [["PyEval_CallObjectWithKeywords", "139801367887504", 2, {}, [["PyObject_Call", "139801367292320", 2, {}, [["builtin___import__", "139801367880144", 2, {}, [["PyImport_ImportModuleLevel", "139801367986944", 2, {}, [["load_next", "139801367984640", 2, {}, [["import_submodule", "139801367983888", 2, {}, [["load_source_module", "139801367980608", 2, {}, [["PyImport_ExecCodeModuleEx", "139801367980176", 2, {}, [["PyEval_EvalCode", "139801367915456", 2, {}, [["PyEval_EvalCodeEx", "139801367913056", 2, {}, [["py:<module>:25:/usr/lib64/python2.7/inspect.py", "8070590333526344624", 2, {}, [["PyEval_EvalFrameEx", "139801367889136", 2, {}, [["PyEval_CallObjectWithKeywords", "139801367887504", 2, {}, [["PyObject_Call", "139801367292320", 2, {}, [["builtin___import__", "139801367880144", 2, {}, [["PyImport_ImportModuleLevel", "139801367986944", 2, {}, [["load_next", "139801367984640", 2, {}, [["import_submodule", "139801367983888", 2, {}, [["load_source_module", "139801367980608", 2, {}, [["PyImport_ExecCodeModuleEx", "139801367980176", 2, {}, [["PyEval_EvalCode", "139801367915456", 2, {}, [["PyEval_EvalCodeEx", "139801367913056", 2, {}, [["py:<module>:23:/usr/lib64/python2.7/tokenize.py", "8070590333526380976", 2, {}, [["PyEval_EvalFrameEx", "139801367889136", 2, {}, [["builtin_map", "139801367870336", 2, {}, [["PyEval_CallObjectWithKeywords", "139801367887504", 2, {}, [["PyObject_Call", "139801367292320", 2, {}, [["function_call", "139801367442240", 2, {}, [["PyEval_EvalCodeEx", "139801367913056", 2, {}, [["py:compile:192:/dima/plc/lib/python2.7/re.py", "8070590333619414192", 2, {}, [["PyEval_EvalFrameEx", "139801367889136", 2, {}, [["PyEval_EvalCodeEx", "139801367913056", 2, {}, [["py:_compile:230:/dima/plc/lib/python2.7/re.py", "8070590333619414704", 2, {}, [["PyEval_EvalFrameEx", "139801367889136", 2, {}, [["PyEval_EvalCodeEx", "139801367913056", 2, {}, [["py:compile:567:/dima/plc/lib/python2.7/sre_compile.py", "8070590333619438768", 2, {}, [["PyEval_EvalFrameEx", "139801367889136", 2, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:parse:706:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467824", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:_parse_sub:317:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467440", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:_parse:395:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467696", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:_parse_sub:317:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467440", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:_parse:395:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467696", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:_parse_sub:317:/dima/plc/lib/python2.7/sre_parse.py", 
"8070590333619467440", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:_parse:395:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467696", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:_parse_sub:317:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467440", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:_parse:395:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467696", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:_parse_sub:317:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467440", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:_parse:395:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467696", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["list_slice", "139801367468544", 1, {}, [["PyList_New", "139801367468208", 1, {}, [["_PyObject_GC_New", "139801368107984", 1, {}, [["_PyObject_GC_Malloc", "139801368107728", 1, {}, [["collect", "139801368104352", 1, {}, [["dict_traverse", "139801367518208", 1, {}, [["dict_traverse", "139801367518208", 1, {}, []]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], ["py:_code:552:/dima/plc/lib/python2.7/sre_compile.py", "8070590333619438640", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:_compile:64:/dima/plc/lib/python2.7/sre_compile.py", "8070590333619416880", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:_compile:64:/dima/plc/lib/python2.7/sre_compile.py", "8070590333619416880", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:_compile:64:/dima/plc/lib/python2.7/sre_compile.py", "8070590333619416880", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:_compile:64:/dima/plc/lib/python2.7/sre_compile.py", "8070590333619416880", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:_compile:64:/dima/plc/lib/python2.7/sre_compile.py", "8070590333619416880", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["lookdict_string", "139801367505872", 1, {}, [["lookdict_string", "139801367505872", 1, {}, []]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], ["py:<module>:23:/dima/plc/lib/python2.7/site-packages/cherrypy/_cptools.py", "8070590333526014256", 2, {}, [["PyEval_EvalFrameEx", "139801367889136", 2, {}, [["PyEval_CallObjectWithKeywords", "139801367887504", 2, {}, [["PyObject_Call", "139801367292320", 2, {}, [["builtin___import__", "139801367880144", 2, {}, [["PyImport_ImportModuleLevel", "139801367986944", 2, {}, [["ensure_fromlist", "139801367985264", 2, {}, [["import_submodule", "139801367983888", 2, {}, [["load_source_module", "139801367980608", 2, {}, [["read_compiled_module", "139801367974496", 1, {}, [["PyMarshal_ReadLastObjectFromFile", "139801368003568", 1, {}, [["PyMarshal_ReadObjectFromString", "139801368003424", 1, {}, [["r_object", "139801367993472", 1, {}, [["r_object", "139801367993472", 1, {}, [["r_object", "139801367993472", 1, {}, [["PyCode_New", "139801367360640", 1, {}, [["PyTuple_GetItem", "139801367619520", 1, {}, [["PyTuple_GetItem", "139801367619520", 1, {}, []]]]]]]]]]]]]]]]]], ["PyImport_ExecCodeModuleEx", "139801367980176", 1, {}, [["PyEval_EvalCode", "139801367915456", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:<module>:1:/dima/plc/lib/python2.7/site-packages/cherrypy/lib/static.py", "8070590333525619120", 1, {}, [["PyEval_EvalFrameEx", 
"139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:init:347:/usr/lib64/python2.7/mimetypes.py", "8070590333525648560", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:read:194:/usr/lib64/python2.7/mimetypes.py", "8070590333525622832", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:readfp:205:/usr/lib64/python2.7/mimetypes.py", "8070590333525623088", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["string_concatenate", "139801367249197", 1, {}, [["PyString_Concat", "139801367578752", 1, {}, [["string_concat", "139801367571072", 1, {}, [["PyObject_Malloc", "139801367545248", 1, {}, [["PyObject_Malloc", "139801367545248", 1, {}, []]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], ["py:<module>:2:/dima/plc/lib/python2.7/site-packages/cherrypy/_cprequest.py", "8070590333520773424", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_CallObjectWithKeywords", "139801367887504", 1, {}, [["PyObject_Call", "139801367292320", 1, {}, [["builtin___import__", "139801367880144", 1, {}, [["PyImport_ImportModuleLevel", "139801367986944", 1, {}, [["load_next", "139801367984640", 1, {}, [["import_submodule", "139801367983888", 1, {}, [["load_source_module", "139801367980608", 1, {}, [["PyImport_ExecCodeModuleEx", "139801367980176", 1, {}, [["PyEval_EvalCode", "139801367915456", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:<module>:203:/usr/lib64/python2.7/Cookie.py", "8070590333520807344", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:compile:192:/dima/plc/lib/python2.7/re.py", "8070590333619414192", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:_compile:230:/dima/plc/lib/python2.7/re.py", "8070590333619414704", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:compile:567:/dima/plc/lib/python2.7/sre_compile.py", "8070590333619438768", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:parse:706:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467824", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:_parse_sub:317:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467440", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:_parse:395:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467696", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:_parse_sub:317:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467440", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["PyEval_EvalCodeEx", "139801367913056", 1, {}, [["py:_parse:395:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619467696", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:get:212:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619466288", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["py:__next:193:/dima/plc/lib/python2.7/sre_parse.py", "8070590333619441328", 1, {}, [["PyEval_EvalFrameEx", "139801367889136", 1, {}, [["lookdict_string", "139801367505872", 1, {}, [["lookdict_string", "139801367505872", 1, {}, 
[]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], ["PyErr_GivenExceptionMatches", "139801367951920", 1, {}, [["PyObject_IsSubclass", "139801367296336", 1, {}, [["PyObject_CallFunctionObjArgs", "139801367294400", 1, {}, [["objargs_mktuple", "139801367279392", 1, {}, [["PyTuple_New", "139801367617120", 1, {}, [["_PyObject_GC_NewVar", "139801368108016", 1, {}, [["_PyObject_GC_Malloc", "139801368107728", 1, {}, [["collect", "139801368104352", 1, {}, [["dict_traverse", "139801367518208", 1, {}, [["visit_decref", "139801368103728", 1, {}, [["type_is_gc", "139801367621392", 1, {}, [["type_is_gc", "139801367621392", 1, {}, []]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], "argv": "scripts/red_storm.py <EMAIL> xxx /xxx/xxx/test-vmperf 5000"}
```
username_0: Silly question: are multithreaded profiles even supported?
username_1: they're not tested, so probably not :-) maybe we should make it more clear in docs? or test it? vmprof has not seen an official 1.0 release, we're ironing issues out (like this one)
username_1: can you retry that one too?
username_0: It's better now; normal functions are shown.
Decorator frames are still blank; I guess that's a separate issue.
I wonder if decorators should be shown at all: most of the time they are not relevant, but once in a while they are (e.g. imagine a buggy retry decorator).
Perhaps vmprof ought to mark frames with some sort of type, and then the server component could offer an option to hide or show "boring" frames.
Status: Issue closed
username_1: It definitely should, probably with heuristics related to the projects you're using. I'm going to close this, feel free to open another one about this topic. |
raiden-network/raiden | 401775977 | Title: Event loop is regularly being blocked for longish periods
Question:
username_0: ## Problem Definition
Using the gevent monitoring thread (enabled by setting the env var `GEVENT_MONITOR_THREAD_ENABLE=1`) shows that we're regularly blocking the event loop for longish periods (the max allowed blocking time can be controlled via `GEVENT_MAX_BLOCKING_TIME`, I used `0.5` as an example).
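For reference, the same monitoring can also be enabled programmatically through gevent's config object (set this before the hub is created):
```py
# Equivalent to GEVENT_MONITOR_THREAD_ENABLE=1 and GEVENT_MAX_BLOCKING_TIME=0.5.
from gevent import config

config.monitor_thread = True    # start the background monitor thread per hub
config.max_blocking_time = 0.5  # report greenlets blocking the hub for > 0.5s
```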
The main blocking culprits seem to be:
- sqlite
- deepcopy calls inside the state machine
## Possible solutions
For sqlite there is [gsqlite3](https://pypi.org/project/gsqlite3/) which on paper sounds like a solution.
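If gsqlite3 doesn't pan out, the underlying idea can also be sketched directly with gevent's threadpool (a minimal illustration, not Raiden's actual storage code):
```py
import sqlite3

from gevent.threadpool import ThreadPool

pool = ThreadPool(1)  # a single worker thread suits sqlite's threading model

def run_query(path, sql):
    # Runs on a real OS thread, so the gevent hub stays responsive.
    with sqlite3.connect(path) as conn:
        return conn.execute(sql).fetchall()

print(pool.apply(run_query, (":memory:", "SELECT 1")))
```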
For the state machine we should look into using persistent / functional datastructures as a way to avoid having to use deepcopy. [Pyrsistent](https://github.com/tobgu/pyrsistent) looks promising.
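A minimal sketch of the pyrsistent approach, using a made-up state record rather than Raiden's actual state classes:
```py
from pyrsistent import PRecord, field

class ChannelState(PRecord):
    our_balance = field(type=int)
    partner_balance = field(type=int)

s1 = ChannelState(our_balance=10, partner_balance=5)
s2 = s1.set(our_balance=7)  # the new state shares structure; no deepcopy needed
assert s1.our_balance == 10 and s2.our_balance == 7
```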
Answers:
username_1: Is there a way to construct a flame graph to support this?
username_2: https://github.com/uber/pyflame
username_2: Actually, I tried pyflame with py3.7 and py3.6 and it doesn't work with either.
For py3.7, the ABI is not supported.
For py3.6, it seems to be a known bug: https://github.com/uber/pyflame/issues/129.
For py3.5, raiden is not compatible.
I may reintroduce one of the tools we had for profiling.
username_0: Gevent already shows where the blocking occurs when running with the variables I mentioned above.
It just needs someone taking the time to look through the generated output.
To make this a bit easier, the [`exception_stream` variable](http://www.gevent.org/monitoring.html#blocking) can be set to a file, for example, so the output isn't intermingled with the raiden logs.
For profiling there is also [py-spy](https://github.com/benfred/py-spy) (which claims to support Python 3.3-3.7).
The gevent docs point to https://github.com/nylas/nylas-perftools
Status: Issue closed
username_2: We have done a lot of work on this issue, since blocking the event loop results in presence problems with the matrix transport, which led to transfer failures because the nodes appeared to be offline.
There are still some longish periods of time where the loop may block; however, they are not a source of problems ATM. The next step should be to add benchmarks and create issues based on that. For now I'm closing this issue. |
sfackler/rust-native-tls | 273248535 | Title: Support LibreSSL
Question:
username_0: There are some distros which do not have packages for OpenSSL at all, but which do provide LibreSSL. [LibreSSL](http://www.libressl.org/) is a fork of OpenSSL in which much legacy code has been removed and which has been reorganized to be easier to maintain. According to [this Reddit thread](https://www.reddit.com/r/rust/comments/7ceihp/tiny_030_released_now_supports_ipv6_and_tls/dpplds6/) some versions of LibreSSL would work due to some amount of compatibility with OpenSSL, but that is not guaranteed.
Since it looks like LibreSSL is the "native tls" on certain platforms, like [Void Linux](https://www.voidlinux.eu/), it would probably be useful to explicitly support and test it.
Alternatively, providing a "non-native TLS" option, like [rustls](https://github.com/ctz/rustls) might make it possible for users who don't have any of the native TLS options to use projects that depend on this, without having to have each downstream project add support for two different TLS crates.
Answers:
username_0: See osa1/tiny#37 for the downstream issue.
username_1: rust-openssl already supports LibreSSL.
username_0: Ah, maybe I filed the issue against the wrong repo then. Probably better for the person with the original issue to file a more detailed one against `rust-openssl`.
Status: Issue closed
|