repo_name (stringlengths 4–136) | issue_id (stringlengths 5–10) | text (stringlengths 37–4.84M)
---|---|---|
ckulka/baikal-docker | 976463277 | Title: Exposing local directories for all directories within /var/www/baikal/
Question:
username_0: As the title says, I've tried mapping ./www to /var/www and creating all the directories manually, and applying the same permissions that are in the example docker-compose.yml file to all the folders, but when I spin the container up it doesn't work.
The reason I need to do this is that I need to edit the .htaccess file (for remapping the default iPhone CardDAV value) within the html directory, but my container doesn't have vim, vi, nano, or any other text editor installed, installing them is at face value a PITA, and I would much rather map everything to a local directory for future config changes if required.
Answers:
username_1: Hi @username_0, just to make sure I understand your scenario correctly: you only need to modify the `/var/www/baikal/html/.htaccess` file?
I would recommend mounting just that modified file into the container and not the entire `/var/www/baikal` folder, for example based on [examples/docker-compose.localvolumes.yaml](https://github.com/username_1/baikal-docker/blob/8781617f179fb7b115a179d47f425c460996c876/examples/docker-compose.localvolumes.yaml):
```yaml
version: '2'
services:
  baikal:
    image: username_1/baikal:nginx
    restart: always
    ports:
      - "80:80"
    volumes:
      - ./config:/var/www/baikal/config
      - ./data:/var/www/baikal/Specific
      # Mount your modified .htaccess file
      - ./my.htaccess:/var/www/baikal/html/.htaccess
```
My main reason is that you then don't have to worry about all the other files in the `/var/www/baikal` folder, especially when later upgrading to a newer version.
If you really (for other reasons) want to mount everything of `/var/www/baikal` from a local directory, you'll have to
1. Copy the Baikal files from the Docker image into your local directory
2. Fix file permissions
3. Add the mount in the Docker Compose file
```bash
# Step 1: create the local directory and get the Baikal files
mkdir /my/local/directory
docker run --rm -it -v /my/local/directory:/tmp username_1/baikal cp -R /var/www/baikal /tmp
# Step 2: fix file permissions (Nginx)
chown -R 101:101 /my/local/directory
# Step 2: fix file permissions (Apache)
chown -R 33:33 /my/local/directory
```
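For step 3, a minimal sketch of the corresponding Compose entry, assuming the files were copied into `/my/local/directory` as above (paths and image tag are illustrative):
```yaml
version: '2'
services:
  baikal:
    image: username_1/baikal:nginx
    restart: always
    ports:
      - "80:80"
    volumes:
      # Step 3: mount the prepared local copy over the whole Baikal directory
      - /my/local/directory/baikal:/var/www/baikal
```
Note that the `cp -R` above places the files in a `baikal` subfolder of the local directory, hence the extra path segment in the mount.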
Let me know if that helps 👍
username_0: I did not know you could mount a single file! That is a much easier option, let me do that now.
username_0: @username_1 OK so I've done the above and it has worked a treat. I am now however having issues with the rewriting for iOS 14. I've found several different links all suggesting different formats, such as:
```
RewriteRule /carddav /card.php [R,L]
RewriteRule /caldav /cal.php [R,L]
```
```
RewriteRule /.well-known/carddav /card.php [R,L]
RewriteRule /.well-known/caldav /cal.php [R,L]
```
```
RedirectPermanent /.well-known/caldav http://servername.tld/cal.php
RedirectPermanent /.well-known/carddav http://servername.tld/card.php
```
```
rewrite ^/.well-known/caldav /dav.php redirect;
rewrite ^/.well-known/carddav /dav.php redirect;
```
And others that I can no longer find.
I have also just stumbled across [this](https://github.com/sabre-io/Baikal/issues/744). Much like the OP, I've tried many of those URIs too and none have worked. One comment mentions that from iOS 12 CardDAV will only work with HTTPS (which I have not configured, because certificates), yet the OP also mentions CardDAV working with HTTP. I cannot get any output from either of the log files in /var/log/nginx/, and I've seen my iPhone autopopulating the URI with dav.php rather than card.dav, so I honestly don't know what to believe any more!
username_1: I'm afraid I can't help you there, I don't have an iPhone - just one thing (and this is to some extent me misleading you): nginx doesn't use the `.htaccess` file; I forgot to take that out when writing the Docker Compose file example.
Can you please retry this with the Apache variant, e.g. just `image: username_1/baikal`?
If that doesn't work either, I think your best bet now is to open an issue with in Baikal repository given this is related to how Baikal works with iOS 14 with Apache. They are maintaining the code + configuration and might have a working setup/example for iOS 14. |
fenom-template/fenom | 252730880 | Title: Custom Filters/Modifiers
Question:
username_0: Does Fenom have a filter where you can use your own custom PHP functions? I see that it was on the To Do list, but I've also seen other things on a different To Do list that seem to be implemented already but couldn't figure out if this one was still pending or not.
Answers:
username_0: Never mind. I tested it out and it works.
Status: Issue closed
|
jfbercher/jupyter_latex_envs | 198163978 | Title: References issue
Question:
username_0: (<a id="cit-gerstner2014neuronal" href="#call-gerstner2014neuronal">Gerstner, 2014</a>) <NAME>, ``_Neuronal dynamics : from single neurons to networks and models of cognition_'', 2014.
I think the last line should be "..." not ``..."
Answers:
username_1: Hi, these kinds of quotes will render correctly in LaTeX as "typographic quotes" instead of typewriter/straight quotes. See the Wikipedia [article](https://en.wikipedia.org/wiki/Quotation_mark) on quotes. Anyway, if you want to change this (e.g. because it looks better in the HTML output), you can edit the citation templates available at the top of the file `latex_env.js`. Thanks for using this extension. I am always happy to know that it is useful for people!
Status: Issue closed
|
godotengine/godot | 361671177 | Title: Allow expression node for visual shaders
Question:
username_0: **Issue description:**
Allow custom shader code inside of the visual shaders.
I made a mockup of what the node will look like.

Answers:
username_1: The inputs could be auto-generated from uniforms in the shader code.
username_0: This is the definition of a function. You mean scan the shader code for variables?
username_1: I meant it like adding something like `uniform vec3 Color = ...;` in the shader code and the node automatically adding an input called `Color` which is passed to the shader.
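A minimal sketch of that idea in Godot's shading language (the uniform name and default value are illustrative):
```glsl
shader_type spatial;

// A uniform like this could be scanned by the expression node
// and exposed automatically as an input slot named "Color".
uniform vec3 Color = vec3(1.0, 0.0, 0.0);

void fragment() {
    ALBEDO = Color;
}
```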
username_2: I've created a similar `Expression` node for my own noise module in:
https://github.com/username_2/godot-anl/commit/facbbcc59e0666c93b4a7877ec61910491eeb0f6
https://github.com/username_2/godot-anl/commit/299bfc61840a6bfec7f711bc8a577f1bdfae941f
I've adapted the visual shader codebase to my own needs; some structures share similarities, so it might be useful...
username_3: I've recently begun implementing it. Not sure if the resulting form will be the same, but I'm always trying to achieve the best result. Demo screenshot -

Status: Issue closed
|
jasonrohrer/OneLifeData7 | 555843014 | Title: Bear cave with bear on it doesn't block movement.
Question:
username_0: 
Basically this bug was abused for two successful apocalypses where anyone who had no idea how the bug worked would have zero chance to stop it.
Basically: Block the tile directly in front of the bear cave with an impassible tile.
Double click the bear cave to wake the bear up into its animation where it stands at the mouth.
While the bear is sticking out of the cave, run through the bear tile, as it doesn't block movement.
????
Profit.
Answers:
username_0: Also to a lesser extent you can do the same by putting a goose on a tree stump. A lot less abusable but the same concept.
username_1: Unfortunately, due to the way moving animals work, they can't cross blocking tiles.
So any block tile that generates a moving animal has to become temporarily unblocking.
I will look at the code and see if there's a way to let an animal that is blocked by the same tile that it's currently standing on "escape", but this would prevent you from slamming a door in a sheep's face.
Blocking tiles, I think, prevent you from walking south, ever ever ever, even if you're standing on that tile (this is why a door can get slammed in your face and block you from "ghosting" through it).
And in the case of the bear and the goose, they NEED to walk south. |
CocoaPods/CocoaPods | 310694570 | Title: git version issue
Question:
username_0: <!--
ℹ Please fill out this template when filing an issue.
All lines beginning with an ℹ symbol instruct you with
what info we expect.
Before you start, are you using the latest CocoaPods release?
A lot changes with Xcode releases that are not backwards compatible.
Not an issue about the CocoaPods command line app? Please file an issue in the appropriate repo - https://github.com/CocoaPods
Issues are for feature requests, and bugs; questions should go to Stack Overflow
Using CocoaPods <= 0.39: http://blog.cocoapods.org/Sharding/
Using Xcode 8: Requires CocoaPods 1.1.0 or above.
Issue with Nanaimo not loading:
Please run `[sudo] gem uninstall nanaimo` and remove all but the latest version.
Issues with `pod search`? Try deleting your cache `rm -rf ~/Library/Caches/CocoaPods` first.
-->
* [ ] I've read and understood the [*CONTRIBUTING guidelines and have done my best effort to follow](https://github.com/CocoaPods/CocoaPods/blob/master/CONTRIBUTING.md).
# Report
## What did you do?
ℹ Please replace these two lines with what you did.
e.g. Run `pod install`
## What did you expect to happen?
ℹ Please replace these two lines with what you expected to happen.
e.g. Install all pod dependencies correctly.
## What happened instead?
ℹ Please replace these two lines with of what happened instead.
e.g. Pod A is missing the subspec B for target C.
## CocoaPods Environment
ℹ Please replace these two lines with the output of `pod env`.
e.g. via `pod env | pbcopy`
## Project that demonstrates the issue
ℹ Please link to a project we can download that reproduces the issue.
You can delete this section if your issue is unrelated to build problems,
i.e. it's only an issue with CocoaPods the tool.
Answers:
username_1: please fill in the template
Status: Issue closed
|
whatwg/html | 104063207 | Title: Consider defining "write-only" form elements.
Question:
username_0: I put together a "write-only" strawman a few months ago that might be worth evaluating: https://username_0.github.io/credentialmanagement/writeonly/
Discussed briefly on https://lists.w3.org/Archives/Public/public-whatwg-archive/2014Oct/0181.html and http://discourse.wicg.io/t/write-only-input-fields/598.
Answers:
username_1: Even though "Removing the writeonly attribute will not clear an element’s write-only value flag. Once that flag is set for an element, it cannot be cleared." is true, it seems you can just as easily replace the `<input>`. Perhaps it should specify a nonce you cannot access from the page?
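A minimal sketch of how the strawman's attribute might appear in markup (the form fields and action are illustrative, not taken from the proposal):
```html
<!-- Sketch: a credential field the page could submit but not read back via script. -->
<form action="/login" method="post">
  <input type="text" name="username" autocomplete="username">
  <input type="password" name="password" autocomplete="current-password" writeonly>
</form>
```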
username_1: I'm going to close this issue as it's mostly noise for us and not helping you get more attention for the idea. Hope that's acceptable. Once there's more interest we can figure out the logistics.
Status: Issue closed
username_0: Yeah, no worries. I've poked some folks on Chrome's password management team (Hi, @vabr!) in the hopes of getting them interested in this. If I can find someone to implement this kind of thing, I'll clean up the spec in the form of PRs.
username_2: @username_1 What is the best way to re-open this discussion?
username_1: @username_2 https://discourse.wicg.io/ perhaps. |
ansible/community | 137046233 | Title: create a label explicitly for greg/robyn tasks
Question:
username_0: Considering making a label to be used as a secondary label for things that only Greg / Robyn can do for the time being. This is stuff like "pay bills / do expenses", meetup-related tasks, etc.
This is mostly because we don't necessarily want to have to assign everything to one or the other of us (particularly when it's in backlog and either of us could do it) immediately; and because we want contributors to be able to easily find things they can do.
Since we're often the ones who create these greg-or-robyn items, it seems like it would be easier for us to label those explicitly, rather than labeling everything else as "adopt me" if it's adoptable (because then we have a combo of unlabeled + adoptme things, even though often unlabeled things are adoptable, particularly if we don't religiously triage / label incoming issues).
Answers:
username_0: Also: I suggest "batmobile" as the label. Only because right now we know that at least greg and robyn are in that car, but the car could have more passengers in the future :)
Only half-jokingly :) (But willing to consider other things)
username_0: Also:
Yes, we could delineate along the lines of "backlog" vs. "ready" (ie: things can only be adopted from the ready column, not the backlog column) -- and maybe we keep our things in backlog until one of us self-assigns it and we put it in-progress.
Arguments against that: backlog is sort of a capacity-management strategy; normally it would be a way to constrain how much a team is working on -- backlog can't move over until time is freed up to do things.
In a contribution environment, that's kind of a weird thing; by definition, when a new contributor wants to contribute by working on a specific task, the capacity of the overall team increases, in a sense.
It almost seems like "adopt me" type things would be in backlog; someone could indicate they want to adopt or ask questions, and once they or we collectively scope out the further plan in the issue and when they think it might be done, it goes into Ready; once they start *actually doing work* (and showing it / linking to it in the issue), it goes to in progress.
Or maybe too philosophical for this early in the morning. :)
username_1: So why would we create a separate tag here, versus just assigning it to one of us?
Status: Issue closed
username_1: Per irc convo:
<•username_1> Not sure why we wouldn't just assign it to one of us if we know up front that it belongs to one of us.
2:50 PM <•username_1> i.e. if it's in that class -- i.e. the kind of thing only Robyn or Greg *could* do -- then we'd know that up front and assign it.
2:51 PM nitzmahone → •nitzmahone_
2:51 PM <•username_1> And yes, that might mean Robyn and Greg reassigning things back and forth and that's fine. :)
2:51 PM nitzmahone_ → •nitzmahone
2:51 PM <•username_1> Which implies that if it's a task we think anyone can do, we leave it unassigned.
2:52 PM <•username_1> (And then we are free to pick it up ourselves once we move it into in_progress). |
Azure/azure-cli | 1010647994 | Title: az vm list-sizes doesn't support location norwaywest
Question:
username_0: **Resource Provider: VM**
<!--- What is the Azure resource provider your feature is part of? --->
**Description of Feature or Work Requested: Location support for Norway West**
<!--- Provide a brief description of the feature or work requested. A link to conceptual documentation may be helpful too. --->
**Minimum API Version Required**
<!--- What is the minimum API version of your service required to implement your feature? --->
**Swagger Link**
<!--- Provide a link to the location of your feature(s) in the REST API specs repo. If your feature(s) has corresponding commit or pull request in the REST API specs repo, provide them. This should be on the master branch of the REST API specs repo. --->
**Target Date**
<!--- If you have a target date for release of this feature/work, please provide it. While we can't guarantee these dates,
it will help us prioritize your request against other requests. --->
Answers:
username_0: Below is the output of the command
```
# az vm list-sizes --location norwaywest -o table
(NoRegisteredProviderFound) No registered resource provider found for location 'norwaywest' and API version '2021-04-01' for type 'locations/vmSizes'. The supported api-versions are '2015-05-01-preview, 2015-06-15, 2016-03-30, 2016-04-30-preview, 2016-08-30, 2017-03-30, 2017-12-01, 2018-04-01, 2018-06-01, 2018-10-01, 2019-03-01, 2019-07-01, 2019-12-01, 2020-06-01, 2020-12-01, 2021-03-01, 2021-04-01, 2021-07-01'. The supported locations are 'eastus, eastus2, westus, centralus, northcentralus, southcentralus, northeurope, westeurope, eastasia, southeastasia, japaneast, japanwest, australiaeast, australiasoutheast, australiacentral, brazilsouth, southindia, centralindia, westindia, canadacentral, canadaeast, westus2, westcentralus, uksouth, ukwest, koreacentral, koreasouth, francecentral, southafricanorth, uaenorth, switzerlandnorth, germanywestcentral, norwayeast, jioindiawest, westus3'.
```
username_1: Compute |
QutEcoacoustics/audio-analysis | 371363479 | Title: Continuous integration build and tests need to be run on other platforms
Question:
username_0: ## Is your feature request related to a problem? Please describe.
We only test and build on Windows platforms. We have no visibility into failures on other platforms until users report it.
## Describe the solution you'd like
Our CI process should run ideally on both Mac and Linux.
## Describe alternatives you've considered
There are no alternatives. We probably don't need to build the project, maybe we could just run unit tests or some more basic black box tests?
## Additional context
I think Azure Pipelines may offer a great way to do this.
Status: Issue closed |
PyMySQL/PyMySQL | 349780555 | Title: Deprecate old password
Question:
username_0: The old password format is not recommended for any version of MySQL that PyMySQL supports.
I don't test it. I don't use it. I don't want to support it. And the existing code is ugly.
Deprecate it in the next version, and remove it eventually.
Answers:
username_1: So how does one use the new password? I looked at the docs trying to understand what I am doing wrong, but couldn't figure it out (I'm using the password= keyword argument in the pymysql.connect function, is this no longer correct?).
PS: I arrived here from https://github.com/PyMySQL/PyMySQL/pull/713, wondering why the fix hadn't been pushed to pypi yet).
username_0: Because making a new release takes my time and energy.
And there is a far better solution: upgrading the password.
Note that the old password is the pre-4.1 format. MySQL 4.1 was released in 2003!
The MySQL 5.7 CLI dropped old password support already.
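A sketch of the server-side upgrade being suggested, assuming a MySQL 5.x server where the account still has a pre-4.1 hash (the user/host and password are illustrative; exact statements vary by server version):
```sql
-- Generate a post-4.1 (41-character) hash for the account instead of the old 16-character one.
SET old_passwords = 0;
SET PASSWORD FOR 'app_user'@'%' = PASSWORD('new-secret');
```
After this, the account authenticates with the standard `mysql_native_password` scheme and the deprecated old-password code path is no longer needed.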
username_1: Believe it or not, I did google, but I was under the assumption that "new password" was a pymysql concept, not a server setting. I absolutely respect your time and realize that you are under no obligation to answer my questions.
That said, I cannot upgrade a 20 year old system like that. So if you stop supporting it, I'll have to use a different package.
username_2: @username_1 - I'm in the same situation. @username_0, why would you remove backwards-compatible support? Users should have the option to use less secure methods if they so choose - not everyone is able to upgrade older systems as @username_1 already pointed out.
username_0: Huh. You have the option to use an old version of PyMySQL, or to fork this project.
Don't forget this is not Oracle's project: I maintain this project almost as a volunteer.
Or do you want to donate $10/month to maintain old password support?
username_0: One more note: even Oracle (MySQL's owner) removed old password support from the MySQL client library.
Why do you think one volunteer maintainer should support 20-year-old legacy feature longer than owner company?
username_3: This is really unfortunate. I have a project that needs to connect to an old server that doesn't use new password yet. I can't force the server maintainer to update it, I just need it to work. Backwards compatibility would have been nice, I'll have to find an alternative library to use.
username_0: You can use an old version of PyMySQL, just like you are using the old password format, an old MySQL CLI, etc.
username_3: I wonder if something else is going on then, because the command-line client works fine for the DB I'm connecting to, but I get the same error from the client. Is there a way to tell (with restricted rights on the DB) what's happening?
username_0: Don't bother me anymore.
MySQL already **removed** (not only deprecated) it.
I haven't tested it in years. I don't want to test or maintain it anymore.
I'm tired of maintaining this project and I must reduce its maintenance cost.
Status: Issue closed
|
chenyee1981/bibizip | 548520783 | Title: Cape breton Community Housing Association -
Question:
username_0: [https://www.bibizip.com/#git](https://www.bibizip.com/#git)
`https://www.bibizip.com/company/view/uXnS/CAREER TRAINING CONCEPTS INC./1` salary @
`https://www.bibizip.com/company/view/vKVX/Kunstverein Trier Junge Kunst/1` salary @
`https://www.bibizip.com/company/view/dYYX/Safe Work Medicina e Segurança do Trabalho /1` salary @
`https://www.bibizip.com/company/view/7bry/MERRION HALL MANAGEMENT COMPANY LTD/1` salary @
`https://www.bibizip.com/company/view/2IUO/P Douglas Mays/1` salary @
`https://www.bibizip.com/company/view/BA9T/Proton Systems - India/1` salary @
`https://www.bibizip.com/company/view/BxOM/PECULIAR SUPPLIERS/1` salary @
`https://www.bibizip.com/company/view/e33I/BEMA RAIL TRAINING ACADEMY LIMITED/1` salary @
`https://www.bibizip.com/company/view/448R/Discover Avalon, the Land of Wonder and Enchantment/1` salary @
`https://www.bibizip.com/company/view/HNQA/FORT WAYNE TURNERS/1` salary @
`https://www.bibizip.com/company/view/QWgB/MARYLAND STREAM RESTORATION ASSOCIATION/1` salary @
`https://www.bibizip.com/company/view/qwRT/Saileela Tapes - India/1` salary @
`https://www.bibizip.com/company/view/8SVx/Telebyte S.A.de C.V./1` salary @
`https://www.bibizip.com/company/view/y2eZ/Eat Tucker/1` salary @
`https://www.bibizip.com/company/view/Epwn/HJELM-PEDERSEN CONSULT/1` salary @
`https://www.bibizip.com/company/view/EcRX/Instituto De Pesquisas Veterinarias Especializada/1` salary @
`https://www.bibizip.com/company/view/5zTO/Worldspeaking/1` salary @
`https://www.bibizip.com/company/view/ql2I/Sekundarschule Hinrich Brunsberg/1` salary @
`https://www.bibizip.com/company/view/81iS/STRATEGOS CONSULTING PRIV LIMITED/1` salary @
`https://www.bibizip.com/company/view/y2PY/EL CLUB DEL ASESOR INTERSOFT SL/1` salary @
`https://www.bibizip.com/company/view/6pMn/Zellar Home Care/1` salary @
`https://www.bibizip.com/company/view/pwKR/AMBER ADVOKATER I HÄSSLEHOLM OCH ÄLMHULT AB/1` salary @
`https://www.bibizip.com/company/view/kFGh/BreaCan/1` salary @
`https://www.bibizip.com/company/view/OlKJ/DELTA OFİS/1` salary @
`https://www.bibizip.com/company/view/7X6J/Maenan Abbey/1` salary @
`https://www.bibizip.com/company/view/aa4J/Agro del mañana/1` salary @
`https://www.bibizip.com/company/view/e5xU/Bait Al Tamur - India/1` salary @
`https://www.bibizip.com/company/view/LqUn/Golden Orange Turizm/1` salary @
`https://www.bibizip.com/company/view/oLD1/dfp.hu/1` salary @
`https://www.bibizip.com/company/view/IIQJ/KSB Service COTUMER/1` salary @
`https://www.bibizip.com/company/view/se8L/Q1033 KTMQ/1` salary @
`https://www.bibizip.com/company/view/DqUZ/Saksham Sales Corporation - India/1` salary @
`https://www.bibizip.com/company/view/UtCV/Audika AG Switzerland/1` salary @
`https://www.bibizip.com/company/view/1QMP/ARTISAN FERN LTD/1` salary @
`https://www.bibizip.com/company/view/U1fA/Angler Lawn & Landscape, Inc/1` salary @
`https://www.bibizip.com/company/view/e5Zx/BABY BELLE EXCLUSIVE/1` salary @
`https://www.bibizip.com/company/view/PUEY/C.C.G., Inc./1` salary @
`https://www.bibizip.com/company/view/EpKG/Honk Product/1` salary @
`https://www.bibizip.com/company/view/sEhS/Rigsurveys/1` salary @
`https://www.bibizip.com/company/view/a2nx/beahead.biz/1` salary @
`https://www.bibizip.com/company/view/V75R/West Feliciana Parish/1` salary @
`https://www.bibizip.com/company/view/U1ky/Ambaji Medical - India/1` salary @
`https://www.bibizip.com/company/view/UtPT/ALUVENT/1` salary @
`https://www.bibizip.com/company/view/1Qxx/afigraficas ltda/1` salary @
`https://www.bibizip.com/company/view/e5YZ/Blog Neoorog/1` salary @
`https://www.bibizip.com/company/view/PHGO/<NAME>/1` salary @
`https://www.bibizip.com/company/view/y2dh/Entretien Dijonnais/1` salary @
`https://www.bibizip.com/company/view/lDxP/The Friesian Connection/1` salary @
`https://www.bibizip.com/company/view/5O8L/SEVEN RANGERS HEALTHCARE PRIVATE LIMITED/1` salary @
`https://www.bibizip.com/company/view/K9xj/Signing Agent/1` salary @
`https://www.bibizip.com/company/view/e3Vi/Be at your Best/1` salary @
`https://www.bibizip.com/company/view/0ghx/Landesjugendchor Sachsen/1` salary @
`https://www.bibizip.com/company/view/BJ1Y/PRISM Resources Group/1` salary @
`https://www.bibizip.com/company/view/UStG/Anthony Passerino/1` salary @
`https://www.bibizip.com/company/view/UKES/Aza INC/1` salary @
`https://www.bibizip.com/company/view/kT39/BD AND P HOTELS (INDIA) PRIVATE LIMITED/1` salary @
`https://www.bibizip.com/company/view/kJFb/brunnebyved/1` salary @
`https://www.bibizip.com/company/view/9kLO/Bhardwaj Hospital - India/1` salary @
`https://www.bibizip.com/company/view/b4pd/CxSAST/1` salary @
[Truncated]
`https://www.bibizip.com/company/view/nSCJ/Route 64/1` salary @
`https://www.bibizip.com/company/view/Rvoz/Megamedia Sp. z o.o./1` salary @
`https://www.bibizip.com/company/view/csuE/Optary Consult GmbH/1` salary @
`https://www.bibizip.com/company/view/gadO/Asian Business Publication Ltd (ABPL Group)/1` salary @
`https://www.bibizip.com/company/view/1stB/Apollo PN Strapping/1` salary @
`https://www.bibizip.com/company/view/PPLT/Chakana Pacific/1` salary @
`https://www.bibizip.com/company/view/wkPP/Inkubator Ås/1` salary @
`https://www.bibizip.com/company/view/VV2z/Thorpe Park/1` salary @
`https://www.bibizip.com/company/view/gMmE/ANTELOPE INVESTMENT INC/1` salary @
`https://www.bibizip.com/company/view/ut4L/Centro Veterinario A Marosa/1` salary @
`https://www.bibizip.com/company/view/3fQR/Casa Green /1` salary @
`https://www.bibizip.com/company/view/oXgJ/DPSG St. Franziskus St. Marien Herne Eickel/1` salary @
`https://www.bibizip.com/company/view/OrPY/Dataflow Computers/1` salary @
`https://www.bibizip.com/company/view/wkOT/Inversiones Luesco SAS/1` salary @
`https://www.bibizip.com/company/view/7SbY/MediBus/1` salary @
`https://www.bibizip.com/company/view/sCiU/Pacific Controls Ltd/1` salary @
`https://www.bibizip.com/company/view/n11P/Referral Marketing & Business Networking/1` salary @
`https://www.bibizip.com/company/view/g2Yx/ARDEN TRANSFORMATION/1` salary @
`https://www.bibizip.com/company/view/9RPM/BIKAPPA S.R.L./1` salary @
`https://www.bibizip.com/company/view/ba8z/Cape breton Community Housing Association/1` salary @ |
gryffon/ringteki | 426987852 | Title: Asahina Takako lag on showing fase
Question:
username_0: A recently placed face-down card in a province is not immediately available to view via Takako's ability. It takes another action or pass (I think) in order to become available.
It's a really small bug, but sometimes having that information fresh can condition your play.
Cheers! |
Lagalt/Lagalt | 846022584 | Title: Bug: Posting new project with userId returns null user
Question:
username_0: Making a POST request to [https://lagalt-server.herokuapp.com/api/v1/projects](https://lagalt-server.herokuapp.com/api/v1/projects) returns user: null. The project is added to the db and mapped to the correct user, but the return body won't show it...
Request body:
```json
{
  "title": "testT2",
  "industry": "testI",
  "description": "testD",
  "gitlink": "testGit",
  "progress": null,
  "skills": [],
  "user": {"id": 1}
}
```
Response HTTP 201
```json
{
  "id": 15,
  "title": "testT2",
  "industry": "testI",
  "description": "testD",
  "gitlink": "testGit",
  "progress": null,
  "skills": [],
  "user": null
}
```
Status: Issue closed
Answers:
username_0: The POST return body now has the user id as it should. A rollback from google id fixed it.
machinezone/IXWebSocket | 574328039 | Title: "Compressed bit must be 0 if no negotiated deflate-frame extension" in Safari
Question:
username_0: Hello!
We're getting an error when trying to make a WebSocket connection from JavaScript in Safari to a C++ server using the latest version of the IXWebSocket library. The error is as follows:
`Compressed bit must be 0 if no negotiated deflate-frame extension`
We don't get this error as of IXWebSocket's SHA 1bb847a51cc54fc6417831a5a6c8b4e245923b4a but we are getting it with master.
Our suspicion is that it may be related to the per-message deflate compression changes that were made recently. Specifically, it seems that Safari/WebKit does not support per-message deflate (they offer their own "deflate-frame" compression). So, we attempted to turn off per-message deflate in our IXWebSocket server, but we're still getting this error. We're not sure if that means that it's not related to per-message deflate after all, or if it means that we aren't correctly turning it off.
Either way, we thought you'd like to know. For now, we're going to keep using 1bb847a51cc54fc6417831a5a6c8b4e245923b4a. Please let us know if you have any questions or if we can provide any additional information!
Answers:
username_1: Hi Maia,
Thanks for the report. Sorry I broke something recently ... I'm pretty sure that the bad commit is this one:
```
commit 4c66a7561e225103efc9caf262d3224700ccb70b
Author: <NAME> <<EMAIL>>
Date: Tue Feb 18 21:38:28 2020 -0800
(WebSocketServer) add option to disable deflate compression, exposed with the -x option to ws echo_server
```
The commit right before is 111475e65c30e23bd215db9dff515f88e680895b / so that one should be safe. I'll try to fix that soon. We now have unittest hooked up to github actions btw.
username_1: The unittest is green now, I believe that 4ef04b8 has fixed the problem.
username_0: Confirmed! Looking good with 4ef04b8. Thanks very much for the amazingly quick turnaround, and for making such a great library! This ticket can be closed from our standpoint.
Cheers!
--maia
username_1: Great, thanks for testing and using !
The 'fix' could be polished a bit, I don't know when I'll get to that.
username_2: Thanks Benjamin!
username_1: My pleasure Sam, undoing my previous mistake was a piece of cake 🎂🎂🎂🎂 !
Status: Issue closed
|
tobischw/notery | 419763181 | Title: Is this Paginated editor?
Question:
username_0: 
Answers:
username_1: Hi Sunjinwu,
this is not the paginated editor but a hackathon submission. I am still working on the paginated editor, I hope to have a working example soon. There are still a couple issues relating to backspaces/deleting blocks.
Status: Issue closed
|
prettier/prettier | 654858261 | Title: Adding prettier-ignore comment in JSX doubles the comment
Question:
username_0: test
</div>
);
}
```
**Expected behavior:**
I would expect prettier to ignore this ugly block entirely. This is probably a pretty uncommon use case, but I was trying to get a `ts-ignore` comment to work in JSX by following instructions here (https://github.com/microsoft/TypeScript/issues/27552#issuecomment-550531540) and ended up in this state where every time I saved the file, prettier auto formatting would double the number of comments I had
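For reference, a hypothetical reduction of the pattern described (the element contents and comments are invented for illustration; on each format, the ignore comment was reportedly duplicated rather than left alone):
```jsx
function Example() {
  return (
    <div>
      {/* prettier-ignore */}
      {/* @ts-ignore */}
      test
    </div>
  );
}
```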
Answers:
username_1: test
</div>
);
}
```
Status: Issue closed
|
alexryd/homebridge-shelly | 567880135 | Title: Please add support for the new Shelly Bulb Duo
Question:
username_0: Hi there, please add support for the new Shelly Bulb Duo. Homebridge does find the device, but it is an unknown device. Please find below the output description. Thank you and best regards
server:~ server$ homebridge-shelly describe 192.168.x.x
Type: SHBDUO-1
CoAP description: {"blk":[{"I":0,"D":"Channel0"}],"sen":[{"I":111,"T":"S","D":"Brightness","R":"0/100","L":0}]}
CoAP status: {"G":[[0,111,1073740964]]}
HTTP Settings:: {"device":{"type":"SHBDUO-1","mac":"98F4ABD16245”,”hostname":"ShellyBulbDuo-D16245”,”num_outputs":1},"wifi_ap":{"enabled":false,"ssid":"ShellyBulbDuo-D16245”,”key":""},"wifi_sta":{"enabled":true,"ssid":"Wireless","ipv4_method":"dhcp","ip":null,"gw":null,"mask":null,"dns":null},"wifi_sta1":{"enabled":false,"ssid":null,"ipv4_method":"dhcp","ip":null,"gw":null,"mask":null,"dns":null},"mqtt":{"enable":false,"server":"192.168.33.3:1883","user":"","id":"ShellyBulbDuo-D16284","reconnect_timeout_max":60,"reconnect_timeout_min":2,"clean_session":true,"keep_alive":60,"max_qos":0,"retain":false,"update_period":30},"coiot":{"update_period":15},"sntp":{"server":"time.google.com"},"login":{"enabled":false,"unprotected":false,"username":"admin","password":"<PASSWORD>"},"pin_code":"AIwp$C","name":"","fw":"20200129-155730/master@a18bfaec","discoverable":true,"build_info":{"build_id":"20200129-155730/master@a18bfaec","build_timestamp":"2020-01-29T15:57:30Z","build_version":"1.0"},"cloud":{"enabled":true,"connected":true},"timezone":"Europe/Madrid”,”lat":41.0123,”lng":13.0123,”tzautodetect":true,"tz_utc_offset":3600,"tz_dst":false,"tz_dst_auto":true,"time":"22:57","hwinfo":{"hw_revision":"prod-2019-12","batch_id":0},"mode":"white","transition":1000,"lights":[{"ison":true,"brightness":70,"white":36,"temp":4068,"default_state":"on","auto_on":0,"auto_off":0,"schedule":false,"schedule_rules":[]}],"night_mode":{"enabled":0,"start_time":"00:00","end_time":"00:00","brightness":0}}
HTTP Status:: {"wifi_sta":{"connected":true,"ssid":"Wireless","ip":"192.168.x.x”,”rssi":-54},"cloud":{"enabled":true,"connected":true},"mqtt":{"connected":false},"time":"22:57","serial":22,"has_update":false,"mac":"98F4ABD16245”,”lights":[{"ison":true,"brightness":70,"white":36,"temp":4068}],"meters":[{"power":4.41,"is_valid":"true"}],"update":{"status":"idle","has_update":false,"new_version":"20200129-155730/master@a18bfaec","old_version":"20200129-155730/master@a18bfaec"},"ram_total":50656,"ram_free":39600,"fs_size":233681,"fs_free":166915,"uptime":18794}
server:~ server$
Answers:
username_1: I +1
username_2: Seems like the Duo is not reporting its state correctly over CoAP. @username_0 could you update your device to the latest firmware and then run the describe command again?
username_1: pi@homebridge:~ $ homebridge-shelly describe 192.168.XXX.XXX
Type: SHBDUO-1
CoAP description: {"blk":[{"I":0,"D":"Channel0"}],"sen":[{"I":121,"T":"S","D":"State","R":"0/1","L":0},{"I":111,"T":"S","D":"Brightness","R":"0/100","L":0},{"I":131,"T":"S","D":"ColorTemperature","R":"2700/6500","L":0},{"I":141,"T":"P","D":"Power","R":"0/9","L":0},{"I":211,"T":"S","D":"Energy counter 0 [W-min]","L":0},{"I":212,"T":"S","D":"Energy counter 1 [W-min]","L":0},{"I":213,"T":"S","D":"Energy counter 2 [W-min]","L":0},{"I":214,"T":"S","D":"Energy counter total [W-min]","L":0}]}
CoAP status: {"G":[[0,121,0],[0,111,37],[0,131,0],[0,141,0],[0,211,0],[0,212,0],[0,213,0],[0,214,3896]]}
HTTP Settings:: {"device":{"type":"SHBDUO-1","mac":"98F4ABD0CF33","hostname":"ShellyBulbDuo-D0CF33","num_outputs":1},"wifi_ap":{"enabled":false,"ssid":"ShellyBulbDuo-D0CF33","key":""},"wifi_sta":{"enabled":true,"ssid":"MySSID","ipv4_method":"static","ip":"192.168.XXX.XXX","gw":"192.168.XXX.1","mask":"255.255.255.0","dns":null},"wifi_sta1":{"enabled":false,"ssid":null,"ipv4_method":"dhcp","ip":null,"gw":null,"mask":null,"dns":null},"mqtt":{"enable":false,"server":"192.168.33.3:1883","user":"","id":"ShellyBulbDuo-D0CF33","reconnect_timeout_max":60,"reconnect_timeout_min":2,"clean_session":true,"keep_alive":60,"max_qos":0,"retain":false,"update_period":30},"coiot":{"update_period":15},"sntp":{"server":"time.google.com","enabled":true},"login":{"enabled":false,"unprotected":false,"username":"admin","password":"<PASSWORD>"},"pin_code":"!w$L#J","name":"","fw":"20200320-123338/v1.6.2@514044b4","discoverable":false,"build_info":{"build_id":"20200320-123338/v1.6.2@514044b4","build_timestamp":"2020-03-20T12:33:38Z","build_version":"1.0"},"cloud":{"enabled":true,"connected":true},"timezone":"Europe/Berlin","lat":53.551102,"lng":9.99368,"tzautodetect":true,"tz_utc_offset":7200,"tz_dst":true,"tz_dst_auto":true,"time":"09:17","unixtime":1586510273,"hwinfo":{"hw_revision":"prod-2019-12","batch_id":0},"mode":"white","transition":1000,"lights":[{"ison":false,"brightness":37,"white":0,"temp":0,"default_state":"on","auto_on":0,"auto_off":0,"schedule":true,"out_on_url":"","out_off_url":"","schedule_rules":["0000bss-0123456-50;0;on","0000bsr-0123456-50;0;off"]}],"night_mode":{"enabled":false,"start_time":"00:00","end_time":"00:00","brightness":0}}
HTTP Status:: {"wifi_sta":{"connected":true,"ssid":"MySSID","ip":"192.168.XXX.XXX","rssi":-66},"cloud":{"enabled":true,"connected":true},"mqtt":{"connected":false},"time":"09:17","unixtime":1586510273,"serial":6846,"has_update":false,"mac":"98F4ABD0CF33","lights":[{"ison":false,"has_timer":false,"timer_remaining":0,"brightness":37,"white":0,"temp":0}],"meters":[{"power":0,"is_valid":"true","timestamp":1586510273,"counters":[0,0,0],"total":3896}],"update":{"status":"idle","has_update":false,"new_version":"20200320-123338/v1.6.2@514044b4","old_version":"20200320-123338/v1.6.2@514044b4"},"ram_total":50376,"ram_free":39440,"fs_size":233681,"fs_free":163652,"uptime":423286}
username_0: Does the code from user username_1 solve it?
username_0: Update done, attached the output:
server:~ server$ homebridge-shelly describe 192.168.x.xx
Type: SHBDUO-1
CoAP description: {"blk":[{"I":0,"D":"Channel0"}],"sen":[{"I":121,"T":"S","D":"State","R":"0/1","L":0},{"I":111,"T":"S","D":"Brightness","R":"0/100","L":0},{"I":131,"T":"S","D":"ColorTemperature","R":"2700/6500","L":0},{"I":141,"T":"P","D":"Power","R":"0/9","L":0},{"I":211,"T":"S","D":"Energy counter 0 [W-min]","L":0},{"I":212,"T":"S","D":"Energy counter 1 [W-min]","L":0},{"I":213,"T":"S","D":"Energy counter 2 [W-min]","L":0},{"I":214,"T":"S","D":"Energy counter total [W-min]","L":0}]}
CoAP status: {"G":[[0,121,0],[0,111,73],[0,131,4828],[0,141,0],[0,211,0],[0,212,0],[0,213,0],[0,214,1601]]}
HTTP Settings:: {"device":{"type":"SHBDUO-1","mac”:”xxx”,”hostname":"ShellyBulbDuo-xxx","num_outputs":1},"wifi_ap":{"enabled":false,"ssid":"ShellyBulbDuo-xxx”,”key":""},"wifi_sta":{"enabled":true,"ssid”:”xxx”,”ipv4_method":"dhcp","ip":null,"gw":null,"mask":null,"dns":null},"wifi_sta1":{"enabled":false,"ssid":null,"ipv4_method":"dhcp","ip":null,"gw":null,"mask":null,"dns":null},"mqtt":{"enable":false,"server":"192.168.33.3:1883","user":"","id":"ShellyBulbDuo-xxx”,”reconnect_timeout_max":60,"reconnect_timeout_min":2,"clean_session":true,"keep_alive":60,"max_qos":0,"retain":false,"update_period":30},"coiot":{"update_period":15},"sntp":{"server":"time.google.com","enabled":true},"login":{"enabled":false,"unprotected":false,"username":"admin","password":"<PASSWORD>"},"pin_code":"AIwp$C","name":"","fw":"20200320-123338/v1.6.2@514044b4","discoverable":false,"build_info":{"build_id":"20200320-123338/v1.6.2@514044b4","build_timestamp":"2020-03-20T12:33:38Z","build_version":"1.0"},"cloud":{"enabled":true,"connected":true},"timezone":"Europe/Madrid”,”lat":46.0564,"lng":14.5081,"tzautodetect":true,"tz_utc_offset":7200,"tz_dst":true,"tz_dst_auto":true,"time":"09:33","unixtime":1586511219,"hwinfo":{"hw_revision":"prod-2019-12","batch_id":0},"mode":"white","transition":1000,"lights":[{"ison":false,"brightness":73,"white":56,"temp":4828,"default_state":"on","auto_on":0,"auto_off":0,"schedule":false,"out_on_url":"","out_off_url":"","schedule_rules":[]}],"night_mode":{"enabled":false,"start_time":"00:00","end_time":"00:00","brightness":0}}
HTTP Status:: {"wifi_sta":{"connected":true,"ssid”:”xxx”,”ip":"192.168.x.xx”,”rssi":-55},"cloud":{"enabled":true,"connected":true},"mqtt":{"connected":false},"time":"09:33","unixtime":1586511219,"serial":16256,"has_update":false,"mac”:”xxx”,”lights":[{"ison":false,"has_timer":false,"timer_remaining":0,"brightness":73,"white":56,"temp":4828}],"meters":[{"power":0,"is_valid":"true","timestamp":1586511219,"counters":[0,0,0],"total":1601}],"update":{"status":"idle","has_update":false,"new_version":"20200320-123338/v1.6.2@514044b4","old_version":"20200320-123338/v1.6.2@514044b4"},"ram_total":50376,"ram_free":39364,"fs_size":233681,"fs_free":164154,"uptime":1008166}
username_0: Thank you for taking the time to look into it.
username_2: Thanks @username_0 @username_1. I've added support for the Duo and published it as a beta version. You can try it out by installing `homebridge-shelly@beta` and then restarting homebridge.
username_0: Alexryd thank you! It works great.
username_1: Yes, it does! Thanks, great job :-)
username_3: Great, thanks, it works.
Any chance of adjusting the light-temp, too?
Have a splendid day =)
username_2: @username_3 You should be able to adjust the color temperature already?
username_3: Hi,
it looks like a local problem, Siri can‘t access the internet, other connections work fine.
I‘m including the screenshot:
Sorry for not checking further before commenting.
>
username_0: Confirmed, I can change the brightness and the temperature of the light with HomeKit.
Status: Issue closed
|
odranoelBR/vue-quasar-admin-example | 212611767 | Title: Infinite-scroll CPU spikes
Question:
username_0: Scrolling down to load more todos makes the CPU spike, according to: http://forum.quasar-framework.org/topic/269/quasar-admin-examples/4
Answers:
username_0: I did not find the reason for the peaks.
Btw I put in the timeout (like the Quasar showcase), and now it's OK.
Fixed: 68b9d277426f7f128b3a38883652807bc2cbd2f3
Thanks for the report on the forum.
Status: Issue closed
|
spring-cloud/spring-cloud-stream-binder-kafka | 261467143 | Title: Consider using KStreamBuilderFactoryBean from Spring Kafka
Question:
username_0: See this comment: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/198#discussion_r140350539
Answers:
username_0: Latest changes in `kafka streams` binder around the usage of `StreamsBuilderFactoryBean` from `spring-kafka` address this issue.
Status: Issue closed
|
naser44/1 | 119252130 | Title: Mozilla: We no longer need Google's money to continue growing
Question:
username_0: <a href="http://ift.tt/1lOKWo3">Mozilla: We no longer need Google's money to continue growing</a> |
hubmapconsortium/portal-ui | 792271820 | Title: Replace Nystrom with Blood in portal information
Question:
username_0: ## Feedback
__Help Desk Issue Tracker URL__:https://on.spiceworks.com/tickets/open/1080/activity
__Message__: Please update contact info for questions about HuBMAP infrastructure from
<NAME> to <NAME> (<EMAIL>). Here is an example, although
there might be others:
https://portal.hubmapconsortium.org/docs/infrastructure
Answers:
username_1: added to next sprint |
sbt/sbt | 519068504 | Title: stopping sbt requires twice CTRL+C
Question:
username_0: ## steps
sbt version: 1.3.3
1. start a akka-http app via sbt
2. CTRL+C => exception but sbt / akka still runs
3. CTRL+C => sbt stops
## problem
The following exception occurs for the first CTRL+C
```
Exception in thread "sbt-bg-threads-1" java.util.concurrent.RejectedExecutionException
at java.base/java.util.concurrent.ForkJoinPool.externalPush(ForkJoinPool.java:1880)
at java.base/java.util.concurrent.ForkJoinPool.externalSubmit(ForkJoinPool.java:1921)
at java.base/java.util.concurrent.ForkJoinPool.execute(ForkJoinPool.java:2453)
at scala.concurrent.impl.ExecutionContextImpl.execute(ExecutionContextImpl.scala:24)
at sbt.internal.BackgroundThreadPool$BackgroundRunnable.$anonfun$cleanup$1(DefaultBackgroundJobService.scala:390)
at sbt.internal.BackgroundThreadPool$BackgroundRunnable.$anonfun$cleanup$1$adapted(DefaultBackgroundJobService.scala:389)
at scala.collection.immutable.List.foreach(List.scala:392)
at sbt.internal.BackgroundThreadPool$BackgroundRunnable.cleanup(DefaultBackgroundJobService.scala:389)
at sbt.internal.BackgroundThreadPool$BackgroundRunnable.run(DefaultBackgroundJobService.scala:359)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
```
## expectation
It should stop immediately
## notes
This behaviour changed when we updated sbt from 1.2.8 to 1.3.0 (maybe with 1.3.2)
Answers:
username_1: This is especially tricky when trying to kill the sbt process via script.
username_2: May be related: after an update to SBT 1.3.3, Play! framework cannot be stopped using CTRL-C when launched using `sbt run`.
It only logs `[warn] Canceling execution...` and does not stop. I am forced to kill the process.
Reverting to 1.2.8 fixes the problem. Going to 1.3.0 makes it happen again.
username_3: Seeing this too on sbt 1.3.3 with AKKA microservices
Status: Issue closed
username_4: @username_0 Thanks for the report. This is one of the sbt 1.3.x features we added: allowing Ctrl-C to cancel the running task.
If you want the older behavior, you can do this:
```scala
Global / cancelable := false
```
username_5: @username_4 This seems to be caused by `StandardMain.executionContext` being shut down by `StandardMain.closeRunnable`
before `BackgroundRunnable.cleanup()` receives its chance to execute the callbacks of `stopListeners` on it:
https://github.com/sbt/sbt/blob/v1.3.8/main/src/main/scala/sbt/Main.scala#L123
https://github.com/sbt/sbt/blob/v1.3.8/main/src/main/scala/sbt/internal/DefaultBackgroundJobService.scala#L390
This leads to an infinite loop in `AbstractBackgroundJobService.shutdown()` because the job is never removed from `jobSet`:
https://github.com/sbt/sbt/blob/v1.3.8/main/src/main/scala/sbt/internal/DefaultBackgroundJobService.scala#L390
Should the `stopListeners` callbacks be executed on the `AbstractBackgroundJobService.pool` instead? It seems to live just as long as all the jobs and their callbacks are executed:
https://github.com/sbt/sbt/blob/v1.3.8/main/src/main/scala/sbt/internal/DefaultBackgroundJobService.scala#L168
I’ll create a PR to address this. |
glotzerlab/signac | 543732545 | Title: H5Store locking issues
Question:
username_0: ### Description
I have been facing two issues that may or may not be related. Both issues are related to HDF5 file locking, a feature that was [implemented in HDF5 1.10.0 and changed in HDF5 1.10.1](https://support.hdfgroup.org/ftp/HDF5/releases/ReleaseFiles/hdf5-1.10.1-RELEASE.txt). This file locking feature is meant to help make SWMR (single writer, multiple reader) operations safer.
### To reproduce
The tests for storing pandas DataFrames in an H5Store fail. The file is already open from the `H5Store` instance, so when the `_h5get` function creates a pandas `HDFStore` object (which uses pytables internally), the file fails to open because it's already locked by the `H5Store`.
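A hypothetical minimal reproduction of that interaction (the file name and key are illustrative; the real failures are in the test suite, as shown below):
```python
import pandas as pd
from signac import H5Store

# The H5Store keeps the HDF5 file open (and locked, with HDF5 >= 1.10)...
with H5Store('signac_test_h5store.h5') as h5s:
    h5s['frame'] = pd.DataFrame({'a': [1, 2, 3]})
    # ...so when _h5get hands the same file to pandas.HDFStore/pytables for reading,
    # the second open can fail with "unable to lock the file".
    print(h5s['frame'])
```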
### Error output
Example of a test failure message:
```python
======================================================================
ERROR: test_clear (__main__.H5StorePandasDataTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/username_0/miniconda3/envs/py38/lib/python3.8/site-packages/pandas/io/pytables.py", line 627, in open
self._handle = tables.open_file(self._path, self._mode, **kwargs)
File "/home/username_0/miniconda3/envs/py38/lib/python3.8/site-packages/tables/file.py", line 315, in open_file
return File(filename, mode, title, root_uep, filters, **kwargs)
File "/home/username_0/miniconda3/envs/py38/lib/python3.8/site-packages/tables/file.py", line 778, in __init__
self._g_new(filename, mode, **params)
File "tables/hdf5extension.pyx", line 492, in tables.hdf5extension.File._g_new
tables.exceptions.HDF5ExtError: HDF5 error back trace
File "H5F.c", line 509, in H5Fopen
unable to open file
File "H5Fint.c", line 1400, in H5F__open
unable to open file
File "H5Fint.c", line 1615, in H5F_open
unable to lock the file
File "H5FD.c", line 1640, in H5FD_lock
driver lock request failed
File "H5FDsec2.c", line 941, in H5FD_sec2_lock
unable to lock file, errno = 11, error message = 'Resource temporarily unavailable'
End of HDF5 error back trace
Unable to open/create file '/tmp/signac_test_h5store_952j661f/signac_test_h5store.h5'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test_h5store.py", line 286, in test_clear
self.assertEqual(h5s[key], d)
File "/home/username_0/code/signac/signac/core/h5store.py", line 405, in __getitem__
return _h5get(self, self._file, key)
File "/home/username_0/code/signac/signac/core/h5store.py", line 137, in _h5get
with _pandas.HDFStore(grp.file.filename, mode='r') as store_:
File "/home/username_0/miniconda3/envs/py38/lib/python3.8/site-packages/pandas/io/pytables.py", line 505, in __init__
self.open(mode=mode, **kwargs)
File "/home/username_0/miniconda3/envs/py38/lib/python3.8/site-packages/pandas/io/pytables.py", line 661, in open
raise IOError(str(e))
OSError: HDF5 error back trace
File "H5F.c", line 509, in H5Fopen
unable to open file
File "H5Fint.c", line 1400, in H5F__open
unable to open file
File "H5Fint.c", line 1615, in H5F_open
[Truncated]
Default FS encoding: utf-8
Default locale: (en_US, UTF-8)
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Summary of the h5py configuration
---------------------------------
h5py 2.10.0
HDF5 1.10.5
Python 3.7.6 | packaged by conda-forge | (default, Dec 27 2019, 00:09:34)
[GCC 7.3.0]
sys.platform linux
sys.maxsize 9223372036854775807
numpy 1.17.3
```
### System configuration
- Operating System: Ubuntu 18.04
- Version of Python: 3.7 or 3.8
- Version of signac: 1.3
Answers:
username_1: @username_0 is there anything we can do about this other than document the usage of `HDF5_USE_FILE_LOCKING` (with all appropriate "here there be dragons" warnings for SWMR users)?
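For reference, the workaround in question is an environment variable recognized by HDF5 1.10+; a sketch of how it would be used (with the caveat that it disables the SWMR safety net):
```bash
# Disable HDF5 file locking before running code that reopens the same .h5 file.
export HDF5_USE_FILE_LOCKING=FALSE
python test_h5store.py   # or however the affected workload is launched
```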
username_0: @username_1 This was partly resolved by #266. Otherwise

Status: Issue closed
|
SunPower/PVMismatch | 249247218 | Title: use lookup table for 2-diode parameters (a la SAM)
Question:
username_0: related to #50 and #51
At PVSC Janine Freeman introduced a great idea to nearly eliminate errors from diode-analog cell models, which was to use a lookup table.
Since we already have the `pvmismatch.contrib.gen_coeffs` methods, all we need is to loop over it so it creates an IEC-61853 table but for Isat1, Isat2, Rs and Rsh instead of Imp, Vmp, Isc, and Voc.
Then we would have to add the interpolation methods into `pvmismatch.pvcell` or at least the option to.
Answers:
username_0: see [**Significant Improvement in Module Performance Prediction Accuracy Based on IEC-61853 Data** by <NAME>an at PVPMC-8](https://www.slideshare.net/sandiaecis/06-20170509-freeman-8th-pvpmc-iec-61853-presentation)
username_0: see [Trello](https://trello.com/c/qpLucqHp)
username_1: Dear Mikofski: I saw the function `gen_coeffs.gen_two_diode()`; it can return cell parameters such as isat1, isat2, rs, rsh. However, some R&D projects also need the cell reverse-bias parameters such as ARBD, BRBD, VRBD, NRBD. Can the current package solve this problem?
username_0: @username_1, no, I didn't add methods to fit reverse breakdown parameters, because AFAIK there isn't an IEC standard to test or measure for this, and often there is little or no published data on reverse bias characteristics of cells or modules.
But I agree that we should at least provide some guidelines or rules of thumb, maybe in the documentation or the wiki. Probably we should get a few other researchers to give their input?
The equations for the reverse breakdown are in issue #25 and in [`pvcells.py`](../blob/master/pvmismatch/pvmismatch_lib/pvcell.py#L169-L174).
Here's what I would suggest:
1. Try to find out what the breakdown voltage is and use that to set `VRBD`. Typically front contact p-type c-Si cells breakdown in reverse bias somewhere between -17.0-volts to -25.0-volts. SunPower modules are back contact n-type and they breakdown between -3.5 and -5.5 volts.
2. Try to get real reverse-bias test data and fit it using [`scipy.optimize.curve_fit`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html) to get ARBD, BRBD, and NRBD using the equation from #25 (see the sketch after this list). Otherwise, just use the defaults for ARBD, BRBD, and NRBD - unfortunately, this model is not great, but there is an issue #26 to make a more flexible reverse bias model
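A generic sketch of step 2 with SciPy; the model function below is only a placeholder (substitute the reverse-bias expression from #25 / `pvcell.py`), and the data are synthetic stand-ins for real measurements:
```python
import numpy as np
from scipy.optimize import curve_fit

def reverse_iv_model(v, arbd, brbd, nrbd, vrbd=-5.5):
    # Placeholder functional form, NOT the PVMismatch equation -- replace with the
    # reverse-breakdown expression referenced in issue #25.
    x = 1.0 - v / vrbd
    return arbd * x ** -nrbd + brbd / x

# Stand-in for measured reverse-bias IV data (voltages in V, currents in A).
v_meas = np.linspace(-0.5, -4.5, 9)
i_meas = reverse_iv_model(v_meas, 2e-3, 1e-3, 3.0)

# vrbd keeps its default; only the three breakdown parameters are fitted.
popt, pcov = curve_fit(reverse_iv_model, v_meas, i_meas, p0=(1e-3, 1e-3, 2.0))
arbd_fit, brbd_fit, nrbd_fit = popt
print(arbd_fit, brbd_fit, nrbd_fit)
```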
username_1: Thanks, Mark.
I will try to follow your idea, and I will also give you feedback on the results, whether good or bad.
Hoping PVMismatch becomes more and more powerful.
username_2: What does your fitting data look like? I have a "global" single-diode model fit already in production, and I'm always curious to compare fit values between single- and double-diode models. |
ClinGen/clincoded | 186000446 | Title: Suppress a few moi terms in GDM pull down (for create GDM page)
Question:
username_0: We need to suppress a few terms in the MOI pull-down on the Create GDM page:

Also, as shown on image, change "Autosomal unknown" to "Unknown"
Answers:
username_0: There is one record that uses one of the "X"d-out MOIs (from Tam). Could we switch this to "X-linked inheritance", and when we add the second pull-down (R9), we can add the adjective "recessive"? I can let Tam know. X-linked inheritance is still correct.

username_1: For reference in production:
**GDMs mode of inheritance (only 3 types have been used, in 21 records)**
- Autosomal recessive inheritance (HP:0000007)
- X-linked recessive inheritance (HP:0001419)
- Autosomal dominant inheritance (HP:0000006)
**Interpretations mode of inheritance ( only 2 types have been used [of 8 records that have mode of inheritance])**
- Autosomal recessive inheritance (HP:0000007)
- Autosomal dominant inheritance (HP:0000006)
username_1: https://1103-kd-modeofinheritancechange-2ee9609-karen.demo.clinicalgenome.org
username_0: Looks/works great in release candidate for both GCI and VCI - thank you for adding.
username_1: Included in last release (R8). Nice job and thanks for your hard work.
Status: Issue closed
|
OpenZeppelin/openzeppelin-test-helpers | 769013313 | Title: patch for "npm audit fix"
Question:
username_0: If I understood correctly, in the latest available version, ```"@openzeppelin/test-helpers": "0.5.9"```, the following updates are required:
```
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ High │ Signature Malleability │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ elliptic │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=6.5.3 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ @openzeppelin/test-helpers [dev] │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ @openzeppelin/test-helpers > @truffle/contract > web3 > │
│ │ web3-eth > web3-eth-abi > ethers > elliptic │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/1547 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ High │ Signature Malleability │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ elliptic │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=6.5.3 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ @openzeppelin/test-helpers [dev] │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ @openzeppelin/test-helpers > @truffle/contract > web3 > │
│ │ web3-eth > web3-eth-contract > web3-eth-abi > ethers > │
│ │ elliptic │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/1547 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ High │ Signature Malleability │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ elliptic │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=6.5.3 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ @openzeppelin/test-helpers [dev] │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ @openzeppelin/test-helpers > @truffle/contract > web3 > │
│ │ web3-eth > web3-eth-ens > web3-eth-contract > web3-eth-abi > │
│ │ ethers > elliptic │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/1547 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ High │ Signature Malleability │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ elliptic │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=6.5.3 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ @openzeppelin/test-helpers [dev] │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ @openzeppelin/test-helpers > @truffle/contract > web3 > │
│ │ web3-eth > web3-eth-ens > web3-eth-abi > ethers > elliptic │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/1547 │
└───────────────┴──────────────────────────────────────────────────────────────┘
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ High │ Signature Malleability │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ elliptic │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ >=6.5.3 │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ @openzeppelin/test-helpers [dev] │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ @openzeppelin/test-helpers > @truffle/contract > │
│ │ web3-eth-abi > ethers > elliptic │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://npmjs.com/advisories/1547 │
└───────────────┴──────────────────────────────────────────────────────────────┘
```
Answers:
username_1: Hi @username_0! I’m sorry that you had this issue.
Thanks so much for reporting it! The project owner will review and triage this issue as soon as they can.
Some of the dependencies could be updated to resolve some of these issues for OpenZeppelin Test Helpers (similar to: https://github.com/OpenZeppelin/openzeppelin-test-environment/issues/152).
username_2: +1
`found 44102 vulnerabilities (42409 low, 24 moderate, 1669 high) in 2328 scanned packages`
very annoying actually :-)
username_3: @username_0 I've released a new version that will allow updating the @truffle/contract dependency to one that doesn't have the vulnerabilities you reported.
---
@username_2 Where did you get that `npm audit` report? After this release the only reported vulnerabilities in `npm audit` for OpenZeppelin Test Helpers should be a couple of low severity ones originating in web3.
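For anyone verifying the fix locally, the usual steps are just standard npm usage (nothing project-specific is assumed here):
```
npm update @openzeppelin/test-helpers
npm ls elliptic   # every remaining copy should be >=6.5.3
npm audit         # the high-severity elliptic entries should be gone
```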
Status: Issue closed
|
HubSpot/hubspot-api-nodejs | 1013815373 | Title: Duplicate parameter names for some API definitions
Question:
username_0: Some .d.ts files have duplicate parameter names.
The examples I've found are in filesApi.d.ts,
where `options` appears twice in the signatures of `replace` and `upload`:
https://unpkg.com/browse/@hubspot/[email protected]/lib/codegen/files/api/filesApi.d.ts
This may be a more systemic issue than just this file.
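For illustration, the broken shape looks roughly like the sketch below (names and types here are invented, not HubSpot's actual API); the compiler flags the duplicated parameter name, so consumers cannot compile against the declaration:
```ts
// Sketch only -- hypothetical names/types, not the real generated signature.
// Reported (invalid) shape: two parameters both named `options`:
//   upload(file: File, options?: UploadOptions, options?: RequestOptions): Promise<Result>;
// A generator-side fix would be to emit distinct parameter names instead:
export interface FilesApiLike {
  upload(file: unknown, uploadOptions?: object, requestOptions?: object): Promise<unknown>;
  replace(fileId: string, file: unknown, uploadOptions?: object, requestOptions?: object): Promise<unknown>;
}
```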
Answers:
username_0: One further note: it seems as though this isn't just an issue with the type definitions. I think the JavaScript is equally affected, since both parameters are named the same thing, so the logic is probably wrong as well. |
mozillazg/python-pinyin | 782581605 | Title: Provide several helper functions for converting between pinyin tone styles
Question:
username_0: Provide several helper functions for converting between pinyin tone styles, implementing conversions such as the following (an illustrative sketch follows the list below):
* `nǐ hǎo` -> `ni hao`
* `nǐ hǎo` -> `ni3 ha3o`
* `ni3 ha3o` -> `nǐ hǎo`
* `ni3 ha3o` -> `ni3 hao3`
* `ni3 ha3o` -> `ni hao`
* `nǐ hǎo` -> `ni3 hao3`
* `ni3 hao3` -> `nǐ hǎo`
* `ni3 hao3` -> `ni3 ha3o`
* `ni3 hao3` -> `ni hao`<issue_closed>
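For illustration only, here is a rough sketch of two of these conversions; it is not tied to pypinyin's eventual API, and note that stripping diacritics this way also removes the umlaut from ü, so a real helper needs more care:
```python
# Rough sketch (not pypinyin's API): "nǐ hǎo" -> "ni hao" and "ni3 hao3" -> "ni hao".
import re
import unicodedata

def tone_marks_to_plain(pinyin):
    # NFD-decompose accented vowels, then drop the combining tone marks.
    decomposed = unicodedata.normalize("NFD", pinyin)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

def tone_numbers_to_plain(pinyin):
    # Drop the tone digits 1-5 used by the numeric styles.
    return re.sub(r"[1-5]", "", pinyin)

print(tone_marks_to_plain("nǐ hǎo"))      # ni hao
print(tone_numbers_to_plain("ni3 hao3"))  # ni hao
```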
Status: Issue closed |
topcoder-platform/TCO21-Regionals-QA-Competition | 961167996 | Title: [Chrome] App shows 2 feedback button in the “Specs” Tab
Question:
username_0: Summary :
[Chrome] App shows 2 feedback button in the “Specs” Tab
Target URL:
https://www.newegg.com/cooler-master-pro/p/2MB-0009-00008
Steps to reproduce:
1. Open the https://www.newegg.com/cooler-master-pro/p/2MB-0009-00008
2. Scroll down and click “Specs”
3. Verify the count of Feedback button
Actual:
App shows 2 feedback button in the “Specs” Tab
Expected:
App should show 1 feedback button in the “Specs” Tab
Environment:
• Device(s): Windows
• Resolution: 1920×1080
• Operating System: Windows 10
• Browser(s): Chrome | Version 92.0.4515.131 (Official Build) (64-bit)
https://user-images.githubusercontent.com/31862600/128264990-446deea1-4be4-4109-8bdf-80a36ec8cf37.mp4
Answers:
username_1: Valid

username_2: Not a bug. If the content is long, the user would otherwise need to scroll all the way up to give feedback, so the application intentionally shows two feedback buttons, one at the top and one at the bottom. Check the `class` names in the code:
```
<div class="product-feedback is-bottom"><span>Question about the product info?</span><a href="javascript:newegg_inhouse_feedback_overview && newegg_inhouse_feedback_overview.show();" class="btn btn-mini" title="How can we improve?"><i class="fas fa-comment"></i> Feedback</a></div>
```
```
<div class="product-feedback is-top"><span>Question about the product info?</span><a href="javascript:newegg_inhouse_feedback_overview && newegg_inhouse_feedback_overview.show();" class="btn btn-mini" title="How can we improve?"><i class="fas fa-comment"></i> Feedback</a></div>
```


**Submitter**: 0 Points
**Challenger**: 0 Points | Invalid challenge,
ref: https://github.com/topcoder-platform/TCO21-Regionals-QA-Competition/issues/193#issuecomment-894592042 |
richterger/Perl-LanguageServer | 747523395 | Title: Question: Disable Symbol list provider
Question:
username_0: Hi @username_1
As the symbol table sometimes gets stuck, is there a possibility to disable it in this extension in order to use another provider for the symbol list?
((enjoy))
cr
Status: Issue closed
Answers:
username_1: You can disable the whole extension, because the whole extension was getting stuck. Hopefully this issue is solved with commit c3e8670c41a28027b27dbe5d9a8394c49ce640c7
phpactor/phpactor | 611644894 | Title: [RPC][transform] Convert transform sub-commands to first-class commands
Question:
username_0: Possible sub-commands of `transform` at the moment:
- complete_constructor
- add_missing_properties
- fix_namespace_class_name
- implement_contracts
I think it would be good to convert these "sub-command" to "first-class" commands because:
- it would allow to run them directly without going through the `transform` command
- it would allow to specify extra arguments for any of them without affecting the others if the need arises later
- it would allow to list them in the context menu directly: currently you need to pick transform from the context menu, then pick a transformation, instead of just picking the command in 1 step (unless you provide a key-binding to trigger the transform-choice menu directly)
- it would allow to show/hide them in the context menu based on the given context: e.g. in the constructor the `complete_constructor` is relevant while the `fix_namespace_class_name` and `implement_contracts` are not
- it would simplify the context menu: having to go multiple choice steps can be annoying
Basically what I am saying is that it is better to make the `context_menu` command a bit more smart and let it provide relevant commands in the given context instead of providing commands which trigger further choice menus (triggering an input callback is still fine if the picked command needs further input).
Note that even if you convert these to "first-class" command as I suggested above you could still keep the current "transform choice menu" if you want to. It would be just a chooser type command like the `context_menu` anyway.
Answers:
username_1: These are "context less" refactorings - they apply to the entire file but there is no reason they cannot be part of the context menu - https://github.com/phpactor/phpactor/blob/develop/lib/Extension/ContextMenu/menu.json (other than running out of space and key bindings ...) |
mbrn/material-table | 529277988 | Title: Problem with update editcomponent by props
Question:
username_0: Hi, I am facing a problem: I built a custom editComponent with React-Select, and its option values come from props. However, Material Table does not re-render after receiving the new props.
I have tried two ways to solve it. First, I set the React-Select options directly to props.selectList.
Second, I tried to reset the whole columns definition, but it is still not working.
Is there any method that can solve this problem?
Following is my code:
```
const updateState = (list) => {
let tempState = state
tempState.columns[4].editComponent = props => (
<Select
name='positiveNumerator'
options={list}
value={props.value.split(',').map(function(elem){
return(elem==""? "": {value:elem, label:elem})
})}
isMulti={true}
onChange={e => props.onChange(e.map(function(elem){return elem.value}).join(','))}
/>
)
}
const [items, setItems] = useState(props.selectList)
useEffect(() => {
// props change
console.log(`items`, items, props.selectList);
setItems(props.selectList);
updateState(props.selectList)
}, [props.selectList]);
const [state, setState] = useState({
columns: [
{ title: 'Positive Numerator', field: 'positiveNumerator'
,editComponent: props => (
<Select
name='positiveNumerator'
options={props.selectList}
value={props.value.split(',').map(function(elem){
return(elem==""? "": {value:elem, label:elem})
})}
isMulti={true}
onChange={e => props.onChange(e.map(function(elem){return elem.value}).join(','))}
/>
),
},
  ],
});
``` |
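One pattern that often fixes this kind of staleness (a general React suggestion, not an official material-table answer) is to rebuild the columns array from the latest props on every render, for example with `useMemo`, instead of keeping it in state:
```jsx
// Sketch: derive `columns` from props so the edit component always sees the latest list.
// Field names mirror the snippet above; adjust them to your own data shape.
const columns = React.useMemo(() => [
  {
    title: 'Positive Numerator',
    field: 'positiveNumerator',
    editComponent: editProps => (
      <Select
        isMulti
        options={props.selectList}
        value={editProps.value
          .split(',')
          .filter(Boolean)
          .map(v => ({ value: v, label: v }))}
        onChange={selected => editProps.onChange(selected.map(s => s.value).join(','))}
      />
    ),
  },
], [props.selectList]);
```
Passing this `columns` value straight to the `MaterialTable` component avoids the stale definition captured inside `useState`.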
HypixelDev/PublicAPI | 505612619 | Title: Json schema
Question:
username_0: Currently it's hard to figure out if something will return a double or an integer. If there were a JSON schema for the returned objects, it would be easy to write a wrapper in any other language by using https://app.quicktype.io/ to convert the schema into the desired language.
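For illustration, even a small schema fragment like this (the property names are only examples) is enough for quicktype to tell `number` and `integer` apart:
```json
{
  "type": "object",
  "properties": {
    "networkExp": { "type": "number" },
    "karma": { "type": "integer" }
  }
}
```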
Status: Issue closed
Answers:
username_1: Unfortunately, the API is mostly data served directly from our database. So creating a schema for all of the data isn't really on the table. |
bigskysoftware/htmx | 892572418 | Title: Input array not send empty fields
Question:
username_0: Hello, I have a form with several optional multi-fields, and I use the input name to turn them into an array that I receive in the PHP backend, something like:
`<input type = "text" name = "subject[]">`
It turns out that, since I can have several of these and the fields are optional, htmx only sends the ones that are filled in and ignores the empty ones, which ends up building the server-side array in the wrong way. With jQuery Ajax it works correctly; is there any way to resolve this?
https://www.php.net/manual/en/faq.html.php#faq.html.arrays
Answers:
username_0: @username_1 can you help me with this problem?
username_1: htmx does it the "normal" way: unchecked checkboxes are ignored. Some frameworks such as rails work around this by including a hidden version of the input too, but I don't think we should support server-side specific needs.
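For reference, the Rails-style workaround mentioned above looks roughly like this for a checkbox (the field name is just an example):
```html
<!-- The hidden input always submits "0", so the key is present even when the box is
     unchecked; when it is checked, both values are sent and the later "1" wins server-side. -->
<input type="hidden" name="subscribed" value="0">
<input type="checkbox" name="subscribed" value="1">
```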
Status: Issue closed
|
Berserker66/MultiWorld-Utilities | 600695703 | Title: Entrance rando removes silvers when swordless and hard/expert pool are set
Question:
username_0: This has been raised several times on the main github, but Berserker told me during a multi to submit it here, so here I am.
When running the entrance randomizer, and swordless and hard/expert pool are set, the logically-hard-required silvers end up removed. This does apply to multi as well; I experienced it personally by way of receiving a green 20 in place of the second progressive bow when it was sent to me.
My understanding, from reading descriptions by others who are more familiar with the code than I, is that it's something to do with the patch order. The interaction of swordless and hard/expert happens correctly, but then entrance comes in, sees the pool setting, and strips the silvers. This results in an effectively unwinnable seed.
Answers:
username_1: Yeah, I don't know enough about the baserom asm to figure out where it is going wrong. I can tell someone already tried to fix it. Relevant bits from rom.py:
```
#Work around for json patch ordering issues - write bow limit separately so that it is replaced in the patch
rom.write_bytes(0x180098, [difficulty.progressive_bow_limit, overflow_replacement])
if difficulty.progressive_bow_limit < 2 and world.swords[player] == 'swordless':
rom.write_bytes(0x180098, [2, overflow_replacement])
rom.write_byte(0x180181, 0x01) # Make silver arrows work only on ganon
rom.write_byte(0x180182, 0x00) # Don't auto equip silvers on pickup
```
^ That if gets triggered. It might just be missing flags. overflow replacement is 20 rupees, this might need to put the bow back in instead of rupees. Would need to try tinkering with it
username_1: Further checking shows that this is where
```
#Work around for json patch ordering issues - write bow limit separately so that it is replaced in the patch
rom.write_bytes(0x180098, [difficulty.progressive_bow_limit, overflow_replacement])
```
came from in the first place, so that won't solve anything.
username_2: That line was solving an issue where the json rom contained 0x180098 = 1, and also 0x180098 = 2. And then on python 3.5 and below, the order of the patch was non-deterministic and the flag would end up as 1. The flag at 0x180098 is the number of bows you're allowed to collect.
So I would have expected this to be fixed in Multiworld Utils 1.9, and in previous versions to only affect hard / swordless / enemizer (including sprite changing) generated with python 3.5 and below.
If that's not the case, then there's another bug with the bow limit and hopefully a way to replicate it.
username_1: Utils 1.9 only supports Python 3.7 and up, since it uses annotation features that were introduced in 3.7.
username_2: Ok, so as an update that seed was generated with Multiworld Utils 1.8 which means that it doesn't have the json patch fix. It might still be another issue though, especially if the 2nd bow doesn't appear in the spoiler log.
username_0: I'm happy to report that a run of similar settings on the current release did not produce the issue. I was able to collect the silvers (in tile room after collecting somaria in DM room, no less) and utilize them against Ganon.
[ER_M482833355_Spoiler.txt](https://github.com/username_1/MultiWorld-Utilities/files/4495901/ER_M482833355_Spoiler.txt)
[ER_M482833355.zip](https://github.com/username_1/MultiWorld-Utilities/files/4495902/ER_M482833355.zip)
username_1: Hm. I'll have to check non-progressive again then, it might spawn two instances of Bow and none of Silvers, which would explain why I didn't see them in my tests. As I was also looking for Progressive Bow (Alt), which is missing from the progressive log.
Status: Issue closed
username_1: This seems to be fixed. Should someone encounter this again on newer versions just open a new issue with a spoiler log please. |
ruby-concurrency/concurrent-ruby | 182235549 | Title: Documentation enhance: Is it possible to upgrade a reentrant rw-lock from read to write?
Question:
username_0: Hi!
On the page http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/ReentrantReadWriteLock.html there is no information about up- or downgrading locks. The parent index page gives some info, but not a lot.
IMHO it would be helpful to have a brief sentence on whether or not it is possible to up- and/or downgrade a lock, as is done in https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html -> "Lock Downgrading"
Answers:
username_1: Thanks for letting us know. Makes sense to add it. Would you open a PR with the update please?
username_0: I'd love to, but I can't write documentation about something I don't know (that's why I asked the question here).
username_1: Ah sorry, I've misunderstood. The ReentrantReadWriteLock supports both downgrade and upgrade.
username_1: Examples can be found in specs: https://github.com/ruby-concurrency/concurrent-ruby/blob/master/spec/concurrent/atomic/reentrant_read_write_lock_spec.rb
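A short sketch of both operations, distilled from those specs (treat the exact method names as something to verify against the current docs):
```ruby
require 'concurrent'

lock = Concurrent::ReentrantReadWriteLock.new

# Downgrade: take the write lock, acquire a read lock inside it,
# then release the write lock while keeping the read lock.
lock.acquire_write_lock
lock.acquire_read_lock
lock.release_write_lock   # still holding the read lock here
lock.release_read_lock

# Upgrade: a thread already holding a read lock may also request the write lock
# (it waits until the other readers are gone).
lock.with_read_lock do
  lock.with_write_lock do
    # exclusive section, still reentrant on the read side
  end
end
```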
username_1: done in 476f59383e6bb357d41416f7e1693ede6350e7bb
Status: Issue closed
|
asmagin/Cake.Sitecore | 331369286 | Title: Deprecation warning for "mapCoverage" option when running unit tests
Question:
username_0: ● Deprecation Warning:
Option "mapCoverage" has been removed, as it's no longer necessary.
Please update your configuration.
Configuration Documentation:
https://facebook.github.io/jest/docs/configuration.html
PASS Project\Shared\client\client.index.test.ts
PROJECTNAME Sample Test
√ sample (3ms)
```
Answers:
username_0: After an offline chat with @asmagin, it turns out that this issue isn't actually part of Cake but rather part of the base solution that we are using, which is a separate repo from the same developer. As such this issue can be closed.
Status: Issue closed
|
vuetifyjs/vuetify | 1107618757 | Title: [Bug Report][3.0.0-alpha.12] RadioGroup with row prop always gets column
Question:
username_0: ### Environment
**Vuetify Version:** 3.0.0-alpha.12
**Last working version:** 2.6.2
**Vue Version:** 3.2.26
**Browsers:** Chrome 97.0.4692.71
**OS:** Windows 10
### Steps to reproduce
No steps needed; just create this form and it becomes a column layout.
### Expected Behavior
Row layout
### Actual Behavior
Column layout
### Reproduction Link
<a href="https://codepen.io/username_0/pen/vYeMMZQ" target="_blank">https://codepen.io/username_0/pen/vYeMMZQ</a>
### Other comments
This also happens on the [document](https://next.vuetifyjs.com/en/components/radio-buttons/) page
<!-- generated by vuetify-issue-helper. DO NOT REMOVE -->
Answers:
username_0: Also, because of my carelessness, I missed that V3 is currently for testing purposes only and created my project with it. Is there a way to downgrade it to V2 instead of recreating the project?
gnab/remark | 152064483 | Title: Code syntax highlighting: multiline comments
Question:
username_0: The syntax highlighting for code (specifically java code) does not handle multiline comments correctly. Only the first line is treated as a comment. For example:
```
<!DOCTYPE html>
<html>
<head>
<title>Title</title>
<meta charset="utf-8">
</head>
<body>
<textarea id="source">
# Data Structure Primitives
## Arrays
int[] array = {1, 3, 5, 2, 6, 9};
/*
* Index: 0 1 2 3 4 5
* Value: 1 3 5 2 6 9
*/
</textarea>
<script src="https://gnab.github.io/remark/downloads/remark-latest.min.js">
</script>
<script>
var slideshow = remark.create({
ratio: "4:3",
highlightLanguage: "java",
highlightStyle: "github"});
</script>
</body>
</html>
```
This produces non-comment styling for the lines beginning with Index and Value. This is probably a bug in whatever library is used for code syntax highlighting, but I wasn't sure where to file this issue.
Answers:
username_1: Still a problem nearly two years later with, for instance, CSS highlighting:
---
## Comments
CSS comments are multi-line, starting with `/*` and ending with `*/`
```css
/* This should apply only to divs */
div {
background-color: red;
/* We turned this off
color: blue
*/
}
```
The code in the block `/* We turned this off` should appear this way:
```css
/* This should apply only to divs */
div {
background-color: red;
/* We turned this off
color: blue
*/
}
```
But in fact, only the line `/* We turned this off` is gray and italicized. |
librespot-org/librespot | 618425707 | Title: Problem playing some playlists
Question:
username_0: I did observe 2 related issues in the current version:
1. I am unable to start playing a song of a playlist (if the playlist is very large) like this one:
`https://open.spotify.com/playlist/0bQjBPRqy5zReJethJy3aP?si=4cIwLUI_RHC47-5kfdbJ_Q`
If I start playing it on the web player, PC or Android and then use Spotify Connect to hand over to librespot, the song will usually play without a problem.
2. A lot of times, when the playlist is finally playing, the next song will start but will continuously hang on the first second and repeat itself. This can go on for several minutes, or resolve itself shortly.
I sometimes get the following WARN message in the log when that happens:
`[2020-05-14T17:50:54Z WARN librespot_playback::player] Player::seek called from invalid state
`
Answers:
username_1: Thanks for that first link, I can reproduce and confirm this issue.
username_1: Zooming into this one, I don't think it's due to the length of the playlist, but the fact that the list starts with a long list of songs that are no longer available. On the desktop clients, it'll look for local files, but obviously that won't work on Connect devices. Are you sure this behaviour isn't the same on official Connect devices?
I've tried a couple of other large playlists (with all songs available) and they play just fine.
Second bullet I am unable to reproduce.
username_0: Actually, this playlist was fully playable at that time. I just realized that the underlying albums may have been reuploaded as they are still available, just not in this playlist.
Are the other playlists you have tried also Audiobooks?
In any case, currently I cannot create a new playlist to try it as Spotify only allows me to add up to 50 Songs to a new Playlist and skips the rest when adding them.
username_1: If you find an existing one to reproduce, do let us know here and we can check it out.
username_2: I've noticed that spotify's Bandsplain podcast won't work. Not sure if this is the same problem?
Not sure how to get the URL of the track since `librespot` won't find any tracks, but here's a link to an episode:
https://open.spotify.com/episode/4pU6iIHgc1fHvtCkLQgxrT?si=bmENXjNjQ1mv6rtOH4C9Ig
If I run `librespot` v0.3.1 as a test, I see this when trying to play an episode (some things redacted):
```
# librespot -n test -u ... -p ... -v --backend pipe --device /dev/null
[2021-11-17T14:28:46Z INFO librespot] librespot 0.3.1 c1ac4cb (Built on 2021-11-14, Build ID: zRwrJ0g6, Profile: release)
[2021-11-17T14:28:46Z DEBUG librespot_playback::mixer::mappings] Volume control is now Log(60.0)
[2021-11-17T14:28:46Z DEBUG librespot_discovery::server] Zeroconf server listening on 0.0.0.0:39431
[2021-11-17T14:28:46Z INFO librespot_core::session] Connecting to AP "guc3-accesspoint-a-krhd.ap.spotify.com:4070"
[2021-11-17T14:28:47Z INFO librespot_core::session] Authenticated as "..." !
[2021-11-17T14:28:47Z DEBUG librespot_core::session] new Session[0]
[2021-11-17T14:28:47Z INFO librespot_playback::mixer::softmixer] Mixing with softvol and volume control: Log(60.0)
[2021-11-17T14:28:47Z DEBUG librespot_connect::spirc] new Spirc[0]
[2021-11-17T14:28:47Z DEBUG librespot_connect::spirc] canonical_username: ...
[2021-11-17T14:28:47Z DEBUG librespot_core::mercury] new MercuryManager
[2021-11-17T14:28:47Z DEBUG librespot_playback::mixer::mappings] Input volume 58958 mapped to: 49.99%
[2021-11-17T14:28:47Z DEBUG librespot_playback::player] new Player[0]
[2021-11-17T14:28:47Z DEBUG librespot_core::session] Session[0] strong=3 weak=2
[2021-11-17T14:28:47Z INFO librespot_playback::convert] Converting with ditherer: tpdf
[2021-11-17T14:28:47Z INFO librespot_playback::audio_backend::pipe] Using pipe sink with format: S16
[2021-11-17T14:28:47Z INFO librespot_core::session] Country: "US"
[2021-11-17T14:28:47Z DEBUG librespot_playback::player] command=AddEventSender
[2021-11-17T14:28:47Z DEBUG librespot_playback::player] command=VolumeSet(58958)
[2021-11-17T14:28:47Z DEBUG librespot_core::mercury] unknown subscription uri=hm://remote/user/.../
[2021-11-17T14:28:47Z DEBUG librespot_core::mercury] unknown subscription uri=hm://remote/user/.../
[2021-11-17T14:28:47Z DEBUG librespot_core::mercury] subscribed uri=hm://remote/user/.../ count=0
[2021-11-17T14:28:47Z DEBUG librespot_connect::spirc] kMessageTypeNotify "..." 68724ecccd67781303655c49a73b74c5968667b1 776788436 1637159327450 kPlayStatusPlay
[2021-11-17T14:28:50Z DEBUG librespot_connect::spirc] kMessageTypeNotify "..." f099bb9fd045bfce259ca0f78898e3ef81827a55 776791518 1637159330532 kPlayStatusPlay
[2021-11-17T14:28:51Z DEBUG librespot_connect::spirc] kMessageTypeLoad "..." f099bb9fd045bfce259ca0f78898e3ef81827a55 776792721 1637159330532 kPlayStatusStop
[2021-11-17T14:28:51Z DEBUG librespot_connect::spirc] State: context_uri: "spotify:show:3uKSgODaDrtWOulcJp57h7" index: 4294967295 position_ms: 0 status: kPlayStatusStop position_measured_at: 1637159331901 context_description: "Bandsplain" shuffle: false repeat: false playing_from_fallback: true
[2021-11-17T14:28:51Z DEBUG librespot_connect::spirc] Frame has 0 tracks
[2021-11-17T14:28:51Z INFO librespot_connect::spirc] No more tracks left in queue
[2021-11-17T14:28:51Z TRACE librespot_connect::spirc] Sending status to server: [kPlayStatusStop]
[2021-11-17T14:28:51Z DEBUG librespot_playback::player] command=SetAutoNormaliseAsAlbum(false)
[2021-11-17T14:28:51Z DEBUG librespot_playback::player] command=Stop
[2021-11-17T14:28:52Z DEBUG librespot_connect::spirc] kMessageTypeNotify "..." f099bb9fd045bfce259ca0f78898e3ef81827a55 776793101 1637159332115 kPlayStatusStop
[2021-11-17T14:29:14Z DEBUG librespot_connect::spirc] kMessageTypeNotify "..." 48ee7e889e6f191a6036e090ce8c76e864163ac6 776815228 1637159354242 kPlayStatusStop
```
The important bits seem to be:
```
[2021-11-17T14:28:51Z DEBUG librespot_connect::spirc] State: context_uri: "spotify:show:3uKSgODaDrtWOulcJp57h7" index: 4294967295 position_ms: 0 status: kPlayStatusStop position_measured_at: 1637159331901 context_description: "Bandsplain" shuffle: false repeat: false playing_from_fallback: true
[2021-11-17T14:28:51Z DEBUG librespot_connect::spirc] Frame has 0 tracks
[2021-11-17T14:28:51Z INFO librespot_connect::spirc] No more tracks left in queue
```
username_3: I have the same problem on the playlist "2000er-Mix". |
libgdx/libgdx | 350192271 | Title: Feature Request: Vibration intensity/amplitude
Question:
username_0: Earlier in the year when Android 8.0 came out, it added support for vibration amplitude: https://developer.android.com/reference/android/os/VibrationEffect. From what I can tell, it is also available in the iOS API: https://stackoverflow.com/questions/19822295/intensity-of-custom-iphone-vibration (maybe I'm wrong).
I think adding this to the API would be very useful so that vibration is not just on or off. I don't think it'd be too hard to add support for amplitude and to check whether amplitude is available on the current device. I'm guessing I'd be able to make a pull request for this, but I wanted to know what you thought first, or whether there's someone more experienced than me who wants to try this or has any ideas.
#### Please select the affected platforms
- [x] Android
- [x] iOS
- [ ] HTML/GWT
- [ ] Windows
- [ ] Linux
- [ ] MacOS
Answers:
username_1: I don't know for iOS but for android there's PWM (pulse width modulation) available already, see https://github.com/libgdx/libgdx/wiki/Vibrator and it works well to simulate vibration intensity.
username_0: Thanks for the PWM suggestion, it works OK, but can be pretty annoying (loudish and noticeable). I'm using a Galaxy S9 and using these patterns:
```OFF(), FULL(), P90(new long[]{2, 30}), P70(new long[]{5, 30}), P50(new long[]{10, 25}), P35(new long[]{15, 30}), P10(new long[]{20, 35});```
where OFF and FULL are special cases.
If anyone has any better patterns for this that'd be cool. I still think using the new (not deprecated) android api would be a good idea.
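For comparison, the Android-side amplitude API mentioned above looks roughly like this (plain Android code, not an existing libGDX backend hook):
```java
// Requires API 26+ for VibrationEffect; falls back to the old on/off vibrate otherwise.
import android.os.Build;
import android.os.VibrationEffect;
import android.os.Vibrator;

public final class AmplitudeVibration {
    public static void vibrate(Vibrator vibrator, long millis, int amplitude /* 1..255 */) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O && vibrator.hasAmplitudeControl()) {
            vibrator.vibrate(VibrationEffect.createOneShot(millis, amplitude));
        } else {
            vibrator.vibrate(millis); // legacy fallback: duration only, no intensity
        }
    }
}
```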
username_2: This feature would be great for iOS. As it currently stands, the only option is a simple fixed-length vibrate (the method ignores the 'milliseconds' input).
jfc3/atehere | 705021129 | Title: Add True Laurel to the SFO and CA JSON Files
Question:
username_0: Need to add True Laurel to the SFO and CA JSON files.
True Laurel
From the Eater website about True Laurel, 'Muddling the distinction between restaurant and bar isn’t a novel concept these days. Neither is the idea of a fine-dining chef moonlighting to create a more casual menu of brainy small plates. But partners <NAME>, chef-owner of the supper club-style phenom Lazy Bear, and bar director <NAME> propel these notions to higher national standards with their new joint project. True Laurel feels like an art installation, with walls and dividers set at odd, beautiful angles and an undulating bas-relief sculpture designed as a tribute to midcentury designer <NAME>. Barzelay and chef de cuisine <NAME> bring the smarts with their riffs on Americana bar food: a patty melt crisped in autumnal beef fat, Dungeness, and cheddar fondue potato chips and vegetables for scooping and fried hen-of-the-wood mushrooms with a riff on sour cream and onion dip. Torres’s cocktails similarly quicken the mind. He uses an arsenal of infusions, tonics, obscure spirits, and local fruits to exquisite, unpretentious effect. We’re living through weird, wild times; this is a sanctuary for comfort with unusual wit and style, a welcome model.'
Address: 753 Alabama St, San Francisco, CA 94110
Phone: (415) 341-0020
URL - truelaurelsf.com
Eater's 18 Best New Restaurants in America - 2018
Status: Issue closed
Answers:
username_0: Added True Laurel to the SFO and CA JSON files. |
acabunoc/open-leadership-zone | 265880233 | Title: Open Graduation Classes
Question:
username_0: <!--- DO NOT MODIFY --->
<!--- Keep everything below to begin submitting your project to the Open Leadership Zone. Add the name of your project in the Title space above, then click 'Submit new issue' --->
Thanks for submitting your project to be featured in the Open Leadership Zone at [MozFest 2017](http://mozillafestival.org/) :sparkles:
This is open to any project in [Mozilla Open Leaders round 4](https://mozilla.github.io/leadership-training/round-4/projects/) -- even if you're not coming to MozFest. This process ensures that you're following open practices and will help set you up for a sustained contributor community afterwards. If you have any questions, feel free to reach out to @username_1 or ask questions in our [chat room](http://gitter.im/mozilla/open-leadership-training).
## Open Project Checklist :clipboard:
Your project must complete the following to be a featured on [Mozilla Pulse](http://mozillapulse.org/) and the Open Leadership Zone at MozFest 2017. As you complete each exercise, check off the box and comment with a link to your completed resource. This [template repository](https://github.com/username_1/mozfest-repo-template) is here to help you if you get stuck!
- [ ] Complete [Open Leadership 101](https://mozilla.teachable.com/p/open-leadership-101)
- [ ] Provide a GitHub repository for work and discussion on your project in a comment
* new to GitHub? Here's a [step-by-step guide on using the #mozfest template](https://docs.google.com/document/d/e/2PACX-<KEY>/pub#h.xx0csqfdsayz)
- [ ] [Create file: README.md](https://mozilla.github.io/open-leadership-training-series/articles/opening-your-project/write-a-great-project-readme/) in your project repository. This file should help newcomers understand what your project is, why it's important, and kinds of help you're looking for.
- [ ] [Create file: LICENSE](http://choosealicense.com/) to give your project an open license, allowing for sharing and remixing.
- [ ] [Create file: CONTRIBUTING.md](https://mozilla.github.io/open-leadership-training-series/articles/building-communities-of-contributors/write-contributor-guidelines/) so others know how they can contribute. If you'd like, you can [remix this template](https://github.com/username_1/mozfest-repo-template/blob/master/CONTRIBUTING.md)
- [ ] [Create file: CODE_OF_CONDUCT.md](https://mozilla.github.io/open-leadership-training-series/articles/building-communities-of-contributors/write-a-code-of-conduct/). This can be the [Mozilla Community Participation Guidelines](https://www.mozilla.org/en-US/about/governance/policies/participation/) or a code of conduct of your choice
- [ ] Turn on your [Issue Tracker](https://docs.google.com/document/d/e/2<KEY>/pub#h.ay6inrqrank7) and [create issues](https://docs.google.com/document/d/e/2<KEY>/pub#h.sb4dv3al8i8a) to describe each task that you need help with and how a contributor can get started on that task. If you're participating in [Hacktoberfest](http://hacktoberfest.digitalocean.com/), [Create a label](https://docs.google.com/document/d/e/2<KEY>TQV6ax/pub#h.ah0e4pnoebnc) called `hacktoberfest` and apply it to your issues.
## Ready to be featured :tada:
- [ ] [Fill out this form](https://www.mozillapulse.org/add) to submit your project to Mozilla Pulse. Add the tag `open leadership zone`.
- [ ] Leave a comment on this issue with the text `This is ready for the Open Leadership Zone`. @username_1 will review this issue. If accepted, your project will be approved on Mozilla Pulse and featured in the Open Leadership Zone.
If you get stuck at any point, feel free to look at the [project templates](https://docs.google.com/document/d/e/2<KEY>/pub#h.xx0csqfdsayz) or reach out to your mentor or @username_1. We're here to help you through this process.
Answers:
username_0: Github repo:
https://github.com/username_0/openclasses
username_0: Hi Abby, this is ready for the Open Leadership Zone.
Pulse project link: https://www.mozillapulse.org/entry/467
Status: Issue closed
username_1: 🎉 👍 looks great, you're public on pulse now!
https://www.mozillapulse.org/entry/467 |
adafruit/Adafruit_CircuitPython_BluefruitSPI | 388974190 | Title: ATI command works ok but errors when debug is used - gives Error (id:0xa180)
Question:
username_0: Writing: ['0x10', '0x0', '0xa', '0x4', '0x41', '0x54', '0x49', '0xa', '0x0', '0x0', '0x0', '0x0', '0x0', '0x0', '0x0', '0x0', '0x0', '0x0', '0x0', '0x0']
Reading: ['0x20', '0x0', '0xa', '0x90', '0x42', '0x4c', '0x45', '0x53', '0x50', '0x49', '0x46', '0x52', '0x49', '0x45', '0x4e', '0x44', '0xd', '0xa', '0x6e', '0x52']
Reading: ['0x20', '0x0', '0xa', '0x90', '0x46', '0x35', '0x31', '0x38', '0x32', '0x32', '0x20', '0x51', '0x46', '0x41', '0x43', '0x41', '0x31', '0x30', '0xd', '0xa']
Reading: ['0x20', '0x0', '0xa', '0x90', '0xzz', '0xzz', '0xzz', '0xzz', '0xzz', '0xzz', '0xzz', '0xzz', '0xzz', '0xzz', '0xzz', '0xzz', '0xzz', '0xzz', '0xzz', '0xzz']
Reading: ['0x20', '0x0', '0xa', '0x90', '0xd', '0xa', '0x30', '0x2e', '0x38', '0x2e', '0x30', '0xd', '0xa', '0x30', '0x2e', '0x38', '0x2e', '0x30', '0xd', '0xa']
Reading: ['0x20', '0x0', '0xa', '0x90', '0x53', '0x65', '0x70', '0x20', '0x32', '0x35', '0x20', '0x32', '0x30', '0x31', '0x37', '0xd', '0xa', '0x53', '0x31', '0x31']
Reading: ['0x20', '0x0', '0xa', '0x90', '0x30', '0x20', '0x38', '0x2e', '0x30', '0x2e', '0x30', '0x2c', '0x20', '0x30', '0x2e', '0x32', '0xd', '0xa', '0x4f', '0x4b']
Reading: ['0x80', '0xa1', '0x80', '0x0', '0xff', '0xff', '0xff', '0xff', '0xff', '0xff', '0xff', '0xff', '0xff', '0xff', '0xff', '0xff', '0xff', '0xff', '0xff', '0xff']
b'BLESPIFRIEND\r\nnRF51822 QFACA10\r\n0102030405060708\r\n0.8.0\r\n0.8.0\r\nSep 25 2017\r\nS110 8.0.0, 0.2\r\nOK'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "adafruit_bluefruitspi.py", line 253, in command_check_OK
File "adafruit_bluefruitspi.py", line 248, in command
File "adafruit_bluefruitspi.py", line 248, in command
RuntimeError: AT command failure: RuntimeError('Error (id:0xa180)',)
`
Appears to do this every time and is completely reproducible for me - the serial connection is running at standard 115200.
A glance at the code shows that there's a sleep xor print in `_cmd()` based on debug mode. Perhaps there's a timing issue here? For someone more familiar with protocol and peripheral the Reading output will probably give away what's going on here.
I've also observed a buffering issue with `read_packet()` but I'll probably put a forum post in for that RSN. I was turning on debug to try and get more info for that.
Answers:
username_0: Forum post illustrating buffering problem: https://forums.adafruit.com/viewtopic.php?f=60&t=144694
username_1: @username_0 Is this still an issue?
username_0: I don't know, I've not run that code for some time. I'll have a look at this when I've finished my latest guide.
oppia/oppia-android | 1137054416 | Title: Add dark mode support to OptionActivity and OptionWithoutDrawerActivity
Question:
username_0: Add dark mode support to OptionActivity and OptionWithoutDrawerActivity as well as the contained views.
(Wiki about dark mode implementation to be updated soon.)
Answers:
username_1: hey @username_0 can you suggest me some PRs that I cant take reference from
username_1: @username_0 can*
username_0: @username_1 You can take a look at some parts of this PR #4032.
username_1: @username_0 hey Ayush, there is no use of color in my PR's Activities... so what am I supposed to do?
username_0: @username_1 You might have to look into the contained fragments and other contained view's layout. Check `options_fragment.xml`, `options_story_text_size.xml` and other contained layouts.
username_1: hey @username_0, I am a bit confused: I have to add dark mode support to **option_activity.xml** and **option_without_drawer_activity.xml**, right?
username_0: @username_1 It applies to everything you see when you open OptionsActivity in the app. Including the contained layouts and views. |
shakacode/react_on_rails | 380994860 | Title: Changes to package.json are not picked up by the rspec test helper
Question:
username_0: Bug: Not rebuilding test files when package.json changes.
To repro:
* Make no JS code changes
* Change the package.json and run yarn
* Run tests.
* Notice that rspec does not automatically result in the test webpack files being built.
Answers:
username_0: Not a bug. The issue seems to be caused by changes to the files inside node_modules, and those are specifically excluded for performance reasons. For example, suppose the packages are installed with one node version, the bundles are built, and that version then changes, but the bundles are not rebuilt. The bundles don't work with the new node version, so rspec does not run correctly.
The fix is to save the node version in the directory where the bundles are built.
If this node version does not match when running rspec, then the bundles should be rebuilt.
The file could be called `.node-version` and saved in the directory where the bundles are created.
This way, there will be no yak shaving or head-scratching about why rspec misbehaves when the node version has changed but the bundles were not rebuilt.
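A rough sketch of that check (the paths, file name, and rebuild command below are placeholders, not react_on_rails API):
```ruby
# Rebuild the test bundles when the recorded node version no longer matches.
bundles_dir  = "public/webpack/test"                 # placeholder path
version_file = File.join(bundles_dir, ".node-version")
current_node = `node --version`.strip

stale = !File.exist?(version_file) || File.read(version_file).strip != current_node
if stale
  system("yarn build:test") || raise("webpack test build failed")  # placeholder command
  File.write(version_file, current_node)
end
```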
username_0: Turns out this is only a big issue if node-sass is in the dependencies. That's common, but not always the case.
Status: Issue closed
|
Movex-Ukraine/secure-array-files | 624221755 | Title: Existing files
Question:
username_0: Hi,
how can I make it work with an existing app that uses "normal" File fields?
thanks
Answers:
username_1: Hi! First, check if the field `$table->text('files')->nullable();` is used in the migration of the framework.
Next, you need to install this package: `composer require movex-ukraine/secure-array-files`
And then, just replace the standard file field with the field from the package:
use MovexUkraine\SecureArrayFiles\SecureArrayFiles;
```
public function fields(Request $request)
{
return [
...
SecureArrayFiles::make('My files', 'files')
->disk('local')
->path('my_attachments'),
...
];
}
```
Good luck!
username_1: Unfortunately, existing files will have to be transferred manually if there are not many of them. You can also try to write an action to move values from an existing field to a new table field, but you will have to write the code for these actions yourself https://nova.laravel.com/docs/3.0/actions/defining-actions.html#overview. After the transfer, the old field can be deleted from the tables.
This is the format into which you need to convert the data about your existing files:
```
[{
"originalName":"favicon-32x32.png",
"name":"my_attachments/2020-05-24_Yagdk7OFLNq3jb3wYp82LWmHtNNPOqc9VHGTdmkb.png",
"url":"/storage/my_attachments/2020-05-24_Yagdk7OFLNq3jb3wYp82LWmHtNNPOqc9VHGTdmkb.png"
},
{
"originalName":"robots.txt",
"name":"my_attachments/2020-05-24_Rt1fHaMvxePXiH1NDqW6jBbwUR2yknVASfQQo0Kq.txt",
"url":"/storage/my_attachments/2020-05-24_Rt1fHaMvxePXiH1NDqW6jBbwUR2yknVASfQQo0Kq.txt"
}]
```
Status: Issue closed
username_0: ok, thank you |
lark-parser/lark | 852616718 | Title: Obtain ParserPuppet directly?
Question:
username_0: **Suggestion**
I'm writing a language server with Lark, and I'm using a recursion-based approach for error handling and inference. For this I make use of `ParserPuppet` extensively, as a helper to store copies of states and try other branches. However, right now `ParserPuppet` can only be created through exceptions during parsing. It would be much more convenient if there is a method that generates a `ParserPuppet` directly based on the initial states of the parser, so that I can start off with a puppet at the root.
This should be as simple as creating a method similar to `_Parser.parse` in `LALR_Parser` but instead returning a `ParserPuppet` instead of parsing.
**Describe alternatives you've considered**
Creating a `ParserPuppet` manually, but this feels quite hacky as `ParserPuppet`, `ParserState` and `ParseConf` are not explicitly exported.
**Additional context**
None
Answers:
username_1: The 'problem' is that ParserPuppet is `lalr` specific:
- We can have a method on `Lark` that fails if `parser!='lalr'`. This would probably mean we would create `get_puppet` methods for all classes in the `.parser` chain.
- We could have a method only on `LALR_Parser`/`lalr_parser._Parser`. This would be accessed with `lalr_instance.parser.parser.parser.create_puppet()` (plus minus a `parser`). I don't like this. It would also kinda force us to fix that part of the internals.
- The method could also be `ParserPuppet.create(lark_instance, text)`. I don't like this either.
username_0: How about an optional keyword argument for `Lark.parse()` (e.g. `get_puppet`), which if True, will return a `ParserPuppet` if we are using an `LALR_Parser`, or throws an exception otherwise?
username_1: @username_0 I created a PR: #869
username_0: Great, thank you!
Status: Issue closed
|
inveniosoftware/flask-menu | 58468280 | Title: Is it possible to know if one of the children of a menu item is active ?
Question:
username_0: Hello,
I have a two-level navigation where the second level is collapsed. Is there a way to know, on the parent item, whether one of its children is active?
menu 1
menu 2 (active, if one of the children is active)
    child 1
    child 2 (active)
Thank you for your help.
Answers:
username_1: +1
username_2: We could change the default condition in https://github.com/inveniosoftware/flask-menu/blob/master/flask_menu/__init__.py#L69
```python
self._active_when = lambda: request.endpoint == self._endpoint or any(
child.active for child in itervalues(self._children)
)
```
or
`active` property definition in https://github.com/inveniosoftware/flask-menu/blob/master/flask_menu/__init__.py#L200-L203
```python
@property
def active(self):
"""Return True if the menu item is active."""
return self._active_when() or any(
child.active for child in itervalues(self._children)
)
```
PS: I'm for 1st solution.
--
cc @lnielsen
username_2: Alternatively we could add new property/method `has_active_child(self[, recursive=True])`. WDYT?
username_0: Thanks for the answers.
I prefer the last solution, has_active_child(…), as it's more flexible and doesn't break existing implementations.
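If that method is added, template usage could look roughly like this (Jinja2 sketch; the entry attributes follow flask-menu's docs but are worth double-checking against your version):
```jinja
{% for item in current_menu.children %}
  <li class="{{ 'active' if item.active or item.has_active_child() else '' }}">
    <a href="{{ item.url }}">{{ item.text }}</a>
    {% if item.has_active_child() %}
      <ul>
        {% for child in item.children %}
          <li class="{{ 'active' if child.active else '' }}">
            <a href="{{ child.url }}">{{ child.text }}</a>
          </li>
        {% endfor %}
      </ul>
    {% endif %}
  </li>
{% endfor %}
```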
username_3: :+1:
Status: Issue closed
|
GoogleCloudPlatform/cloud-builders | 221604068 | Title: Local builder for testing?
Question:
username_0: Would it be possible to run the builder locally for testing purposes?
Answers:
username_1: This feature request is already on our radar. We'll post any updates in this issue.
Status: Issue closed
username_2: You got it: https://github.com/GoogleCloudPlatform/container-builder-local |
department-of-veterans-affairs/va.gov-team | 693482090 | Title: [Design] LIH Design Intent checkpoint
Question:
username_0: ## Background
[Collaboration cycle documentation](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/working-with-vsp/vsp-collaboration-cycle/vsp-collaboration-cycle-visual.pdf)
## Tasks
- [ ] Conduct Design/Intent VSP collaboration cycle meeting
Answers:
username_1: The logged-in homepage design intent will be tracked in [this ticket](https://github.com/department-of-veterans-affairs/va.gov-team/issues/13415).
Hopefully will be scheduled for September 22 at 11 EST (pending approval from platform)
username_2: We are moving this out of Sprint 30 at my request. Removing that milestone. Should be Sprint 31 now.
username_0: @username_2 sounds good
username_1: Scheduled for September 24, 2020 at 1:30 ET
Status: Issue closed
|
opnsense/core | 686257818 | Title: GUI is missing VIP interfaces once cluster got updated to 20.7
Question:
username_0: This afternoon I updated OPNsense on a DEC2670 Deciso appliance.
After the update, the VIP interfaces have disappeared from below "Firewall" in the GUI.
I expected to find the VIP interfaces under 'Firewall'.
The direct link to carp_status is still working,
e.g. https://x.y.z/firewall_virtual_ip.php#Firewall_VIP
Hardware: DEC2670
OPNsense 20.7 (amd64, OpenSSL).
AMD GX-416RA SOC
Network Intel igb

Answers:
username_1: Moved to Interfaces Section
Status: Issue closed
username_2: Accepted "solution" in the forum already. :) |
wessberg/rollup-plugin-ts | 504335894 | Title: Types of this package are missing declare/export
Question:
username_0: reproduce:
```
mkdir wrts
cd wrts
yarn init -y
yarn add @types/node @username_1/rollup-plugin-ts rollup typescript
yarn tsc --init
mkdir src
echo "import ts from '@username_1/rollup-plugin-ts'; ts;" >> src/index.ts
yarn tsc
```
error:
```
yarn tsc
yarn run v1.16.0
$ .../wrts/node_modules/.bin/tsc
node_modules/@username_1/rollup-plugin-ts/dist/esm/index.d.ts:299:1 - error TS1046: Top-level declarations in .d.ts files must start with either a 'declare' or 'export' modifier.
299 function typescriptRollupPlugin(pluginInputOptions?: Partial<TypescriptPluginOptions>): Plugin;
~~~~~~~~
Found 1 error.
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```
Status: Issue closed
Answers:
username_1: Hi there,
Thanks a lot for reporting this issue. This was also related to #24.
I've fixed the issue and released it under v1.1.66. |
libopencm3/libopencm3-template | 393223933 | Title: flashing of a template project without verbose option fails
Question:
username_0: Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
none separate
0x2000
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : clock speed 950 kHz
Info : STLINK v2 JTAG v17 API v2 SWIM v4 VID 0x0483 PID 0x3748
Info : using stlink api v2
Info : Target voltage: 3.229062
Info : STM32F103C8T6.cpu: hardware has 6 breakpoints, 4 watchpoints
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x08000430 msp: 0x20005000
** Programming Started **
auto erase enabled
Info : device id = 0x20036410
Info : flash size = 128kbytes
wrote 2048 bytes from file /home/dem/Code/your-project/my-project/your-project.elf in 0.548013s (3.650 KiB/s)
** Programming Finished **
** Verify Started **
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x2000002e msp: 0x20005000
verified 1232 bytes in 0.380016s (3.166 KiB/s)
** Verified OK **
** Resetting Target **
shutdown command invoked
Status: Issue closed
Answers:
username_1: thanks for reporting this, sorry it took so long to fix it! |
mysociety/verification-pages | 301717197 | Title: after successfully actioning a statement, it usually doesn't appear as "done"
Question:
username_0: This is due to the delay between changes being made in Wikidata and those changes being reflected in query results.
It's bad because people can end up actioning something twice if they get distracted: the interface looks the same after successfully actioning a statement as it did before, and since it won't have been updated with the statement UUID, a new duplicate claim will be created.
We should probably store a "successfully done" state on the client to avoid this. (But there may be better solutions...)<issue_closed>
Status: Issue closed |
typescript-eslint/typescript-eslint | 594837743 | Title: Throw without arguments should be a syntax error
Question:
username_0: ```JSON
{
"rules": {
"no-throw-literal": "error"
}
}
```
```TS
throw
```
Just that in a file somewhere
**Expected Result**
This should be a parse-error and should not be linted on.
**Actual Result**
Parser creates a `ThrowStatement` with `argument` set to null, which breaks rules downstream, see https://github.com/eslint/eslint/pull/13143 for context
**Additional Info**
```
[Error - 1:41:32 PM] TypeError: Cannot read property 'type' of null
Occurred while linting [fileName].ts:132
at Object.couldBeError (eslint/lib/rules/utils/ast-utils.js:1262:22)
at ThrowStatement (eslint/lib/rules/no-throw-literal.js:38:31)
at eslint/lib/linter/safe-emitter.js:45:58
at Array.forEach (<anonymous>)
at Object.emit (eslint/lib/linter/safe-emitter.js:45:38)
at NodeEventGenerator.applySelector (eslint/lib/linter/node-event-generator.js:254:26)
at NodeEventGenerator.applySelectors (eslint/lib/linter/node-event-generator.js:283:22)
at NodeEventGenerator.enterNode (eslint/lib/linter/node-event-generator.js:297:14)
at CodePathAnalyzer.enterNode (eslint/lib/linter/code-path-analysis/code-path-analyzer.js:634:23)
at eslint/lib/linter/linter.js:936:32
```
**Versions**
| package | version |
| --------------------------- | --------- |
| `@typescript-eslint/parser` | `^2.13.0` |
| `TypeScript` | `^3.8.2` |
| `ESLint` | `5.16.0` |
| `node` | `12.16.0` |
| `npm` | `6.13.7` |
Answers:
username_1: cc @username_2 - what do you think about making the `parser` less forgiving in these invalid situations to match errors that are thrown by babel-eslint/acorn/espree?
I think `typescript-estree` is probably fine to parse as is (because typescript parses fine as is), but `parser` is intended to be used in ESLint, so we should probably match the same errors they throw here?
username_2: @username_1 are you talking about building checks ourselves or leveraging TS? IIRC it came up before that almost all of the valuable checks come from semantic diagnostics (even when they seem syntactic in nature) and so if we enable that feedback from TS it forces users to require type information even if they wouldn’t need it for their rules setup
username_1: Probably the former - building just the checks required to achieve some parity with what other parsers do. That way we can be sure that base rules won't error in weird ways (like this issue).
username_0: Not sure if I should create a new issue for this or post here, but
```
import
```
should also be a syntax error, otherwise we get
```
[Error - 8:02:02 PM] TypeError: Cannot read property 'indexOf' of undefined
Occurred while linting file.tsx:1
at isAbsolute (eslint-plugin-import/lib/core/importType.js:48:15)
at typeTest (eslint-plugin-import/lib/core/importType.js:116:7)
at resolveImportType (eslint-plugin-import/lib/core/importType.js:148:10)
at reportIfMissing (eslint-plugin-import/lib/rules/no-extraneous-dependencies.js:130:32)
at ImportDeclaration (eslint-plugin-import/lib/rules/no-extraneous-dependencies.js:208:11)
at eslint/lib/linter/safe-emitter.js:45:58
at Array.forEach (<anonymous>)
at Object.emit (eslint/lib/linter/safe-emitter.js:45:38)
at NodeEventGenerator.applySelector (eslint/lib/linter/node-event-generator.js:254:26)
at NodeEventGenerator.applySelectors (eslint/lib/linter/node-event-generator.js:283:22)
```
username_1: cc @username_3 - if we were to make `typescript-estree` more ESTree spec compliant here, and throw errors for weird/incomplete code; would that cause issues for prettier?
username_3: No problem. These issues mentioned here are actually not easy to reproduce with Prettier. The standalone `throw` parses only if it's inside a block and not followed by a semicolon, and I can't reproduce the issue with standalone `import` at all. I mean it's already a `SyntaxError`.
username_4: Hello @username_1, I've been testing various ESLint plugins for a while now. This bug/functionality in typescript parser is causing crashes in `eslint:all`, `eslint-plugin-node` and `eslint-plugin-testing-library`.
I'm trying to figure out whether this bug should be handled by all of these community plugins or is this really a bug in typescript parser. There's some conflicting comments out there:
- https://github.com/mysticatea/eslint-plugin-node/issues/207#issuecomment-587439053.
- https://github.com/typescript-eslint/typescript-eslint/issues/1616#issuecomment-587546308
[ESLint](https://eslint.org/docs/developer-guide/working-with-custom-parsers#the-ast-specification) states that `The AST that custom parsers should create is based on ESTree`. I think some of these cases do not meet the spec.
So whenever I run into cases caused by typescript-parser-generated AST that doesn't meet the spec, should I just ignore them or report them to the plugin maintainers?
<details>
<summary>Crashing rules</summary>
## Rule: indent
- Message: `Cannot read property 'loc' of undefined Occurred while linting <text>:1`
- Path: `Tejas1510/Awesome-Javascript-and-React-Project/college-website/server/src/server/api/email/emailer.js`
- [Link](https://github.com/Tejas1510/Awesome-Javascript-and-React-Project/blob/HEAD/college-website/server/src/server/api/email/emailer.js#L1)
```js
const
function sendEmail() {
try {
```
```
TypeError: Cannot read property 'loc' of undefined
Occurred while linting <text>:1
at Object.VariableDeclaration [as listener] (/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/rules/indent.js:1422:69)
at /home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/rules/indent.js:1634:55
at Array.forEach (<anonymous>)
at Program:exit (/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/rules/indent.js:1634:26)
at /home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/linter/safe-emitter.js:45:58
at Array.forEach (<anonymous>)
at Object.emit (/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/linter/safe-emitter.js:45:38)
at NodeEventGenerator.applySelector(/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/linter/node-event-generator.js:254:26)
at NodeEventGenerator.applySelectors(/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/linter/node-event-generator.js:283:22)
at NodeEventGenerator.leaveNode(/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/linter/node-event-generator.js:306:14)
```
## Rule: no-manual-cleanup
- Message: `Cannot read property 'match' of undefined Occurred while linting <text>:3`
- Path: `wesleyclzns/Perdita/perdita-app/src/componentes/App.js`
- [Link](https://github.com/wesleyclzns/Perdita/blob/HEAD/perdita-app/src/componentes/App.js#L3)
```js
import React from 'react';
import Pidex from './Pidex'
import
function App() {
return (
<div className="App">
<header className="App-header">
<h1>Pertida Index</h1>
```
```
TypeError: Cannot read property 'match' of undefined
[Truncated]
<header className="App-header">
<h1>Pertida Index</h1>
```
```
TypeError: Cannot read property 'indexOf' of undefined
Occurred while linting <text>:3
at stripImportPathParams(/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint-plugin-node/lib/util/strip-import-path-params.js:8:20)
atExportAllDeclaration,ExportNamedDeclaration,ImportDeclaration,ImportExpression(/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint-plugin-node/lib/util/visit-import.js:55:40)
at /home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/linter/safe-emitter.js:45:58
at Array.forEach (<anonymous>)
at Object.emit (/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/linter/safe-emitter.js:45:38)
at NodeEventGenerator.applySelector(/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/linter/node-event-generator.js:254:26)
at NodeEventGenerator.applySelectors(/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/linter/node-event-generator.js:283:22)
at NodeEventGenerator.enterNode(/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/linter/node-event-generator.js:297:14)
at CodePathAnalyzer.enterNode(/home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/linter/code-path-analysis/code-path-analyzer.js:711:23)
at /home/runner/work/eslint-remote-tester/eslint-remote-tester/ci/node_modules/eslint/lib/linter/linter.js:952:32
```
</details>
username_1: ESLint's default parser is required to adhere to the ESTree spec because that is the spec they are designed to adhere to.
Custom ESLint parsers are not beholden to the same requirement.
As a custom parser - typescript-eslint is not required to exactly adhere to the ESTree spec.
And in fact - we definitely do not adhere to it - there's no way for us to!
- We have new nodes that sit in statement/expression locations that can easily cause crashes on plugins that aren't careful.
- We also have new nodes which add to places where ESTree previously defined a single node child.
- We `null` out properties in certain cases
- We add all manner of new properties to existing nodes
An example of where we have seen this cause problems: the empty body function expression.
For example, this case especially:
```ts
class Foo {
methodWithoutBody();
// ^^ TSEmptyBodyFunctionExpression -> body == null
methodWithBody() {}
// ^^^^^ FunctionExpression -> body == BlockStatement
}
```
A lot of plugins assume that every `MethodDefinition` has a `FunctionExpression` as its `.value`, and that `FunctionExpression` has a `.body` which is a `BlockStatement`.
But `TSEmptyBodyFunctionExpression` breaks that convention at all levels - crashing plugins that just try to do `node.value.body.body`.
There are also countless problems with stylistic rules and things like function generics: because those rules assume generics don't exist, they have no handling to traverse the new properties on existing nodes, which means they create many false positives, or produce broken fixes that either just delete generics or create syntactically broken code.
So the question is - what should a plugin do?
99.999999% of the time - fixing this is as simple as adding an if guard.
```js
MethodDefinition(node) {
  if (!node.value.body) {
    // no function body (e.g. TSEmptyBodyFunctionExpression) - skip this node
    return;
  }
  // continue as normal
}
```
And for your cases - they could similarly be solved with simple `if` checks.
But should they? 🤷 That's up to the plugins themselves as to what they want to handle.
I do want to make our parser more correct, but doing so is not a small task, and it is also a breaking change. It's going to be quite some time before I will be able to tackle this.
username_4: Thanks for the very detailed answer. I think I'm starting to understand this issue now.
These examples of valid TypeScript AST you mentioned, and whether plugins should handle them - that is a different issue. I should have given some examples before commenting on the issue here.
I'm more concerned with how the parser outputs an AST for TypeScript which is not valid. Here are some examples tested with astexplorer and the TypeScript playground.
```ts
import
function HelloWorld() {}
```
- Espree: Unexpected token function
- TS: String literal expected.(1141)
```ts
const
function HelloWorld() { }
```
- Espree: Unexpected keyword 'function'
- TS: Variable declaration list cannot be empty.(1123)
```ts
throw
function HelloWorld() {}
```
- Espree: Illegal newline after throw
- TS: Line break not permitted here.(1142)
The parser is generating an AST for all of these cases. I would expect it to recognize these as syntax errors, or something similar. I'm quite unsure whether third party plugins should be expected to handle these cases. They should however handle cases like `TSEmptyBodyFunctionExpression` if they are expected to work with the TypeScript parser.
username_1: The typescript parser is very forgiving. It does its best to figure out what you meant and not throw errors unless it really really has to.
It does this for a number of reasons, but mainly so that use cases that need to be failure tolerant are so (eg syntax highlighters).
It emits errors separately as what it calls diagnostics.
In order to get these diagnostics, you need to use the program.
Before v3, we didn't create a program unless you configured type-aware linting.
So it meant we had no way to get the syntax errors. So we just didn't emit any...
Now we always have some form of a program, we could leverage TS to do some basic syntax validation.
username_5: that's not true, emitting errors is hidden behind a flag and the user has to opt in to actually get them; consumers of typescript-estree like prettier are not being affected by this
username_1: we can definitely prep a PR to do this (that's kind of the point of #2911 - laying the framework to do this easily), and we could land an option that's default off.
But it wouldn't get used until we set it on by default (which is breaking for both `typescript-estree` and `parser`).
As it stands right now `typescript-estree` is a bit of an unsafe mess that's grown and grown over the years. It's probably worth refactoring it before we start throwing in assertions.
Also worth noting that some of our AST types are inconsistent, so we'd ideally clean those up at the same time. |
home-assistant/core | 975541531 | Title: EDL21: Losing connection after network outage when using ser2net
Question:
username_0: ### The problem
EDL21 Sensor loses connection / doesn't get values after the origin socket (ser2net server) goes down because of network outage.
```yaml
- platform: edl21
  serial_port: socket://192.168.X.X:4001
```
You have to restart HA to collect values again. An internal reconnect would be nice.
### What is version of Home Assistant Core has the issue?
core-2021.8.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Core
### Integration causing the issue
EDL21
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/edl21
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
_No response_
### Additional information
_No response_
Answers:
username_1: Hey,
I created a quick (temporary) fix for the issue by adding a timeout to the original integration:
https://github.com/username_1/edl21
Note that this implementation was only intended to solve the ser2net issue for me and might not meet the code quality of the original integration.
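The general idea is roughly the sketch below. This is not the actual integration code - `connect` and `read_frame` are stand-ins for whatever opens the ser2net socket and reads an SML frame - it just shows the pattern of wrapping each read in a timeout and reopening the connection when it trips:
```python
import asyncio

READ_TIMEOUT = 60   # seconds without a frame before the socket is considered dead
RETRY_DELAY = 10    # seconds to wait before trying to reconnect

async def poll_forever(connect, read_frame, handle):
    """Read frames forever; reconnect after a timeout or connection error."""
    while True:
        try:
            reader = await connect()
            while True:
                frame = await asyncio.wait_for(read_frame(reader), READ_TIMEOUT)
                handle(frame)
        except (asyncio.TimeoutError, OSError):
            # ser2net went away (e.g. network outage) - back off and reconnect
            await asyncio.sleep(RETRY_DELAY)
```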
Regards
username_0: thx @username_1
You could add a PR but this component looks inactive, don't know if it will be added later. Anyway, this solved my problem :-) |
LivelyKernel/lively.next | 907474163 | Title: InteractiveTree: Cannot set font color of collapse icon
Question:
username_0: The color of the collapse icon (triangle) is always black when the item is not selected, regardless of the `nonSelectionFontColor`. On the other hand, `selectionFontColor` colors the icon on selection, as expected.
Possible approach: The color of the icon seems to be set in `recoverOriginalLine` (in `lively.components/tree.js`), which is derived from `attribute`. The latter holds multiple color values, including `nonSelectionFontColor`, but also black. I did not look into why this is and how this could/should be altered.

Answers:
username_1: Yes, I think there are still some things we have not really fleshed out with custom textAttributes + selection inside trees. @username_0 When you have some time to spare, can you write a comprehensive set of tests that illustrate this issue and what does not work? I think that would help us to narrow down where the semantics go off the rails here.
username_1: @username_0 Any updates on this one?
Status: Issue closed
|
quasarframework/quasar | 728820447 | Title: q-select can't show properly in the q-table top slot
Question:
username_0: **Describe the bug**
when the q-table top slot uses a q-select, the select label doesn't show
**Codepen/jsFiddle/Codesandbox (required)**
https://codepen.io/username_0/pen/pobRxPZ
**To Reproduce**
Steps to reproduce the behavior:
1. the q-select label 'Standard' doesn't show
**Expected behavior**
q-select shows properly and the q-select label 'Standard' is visible
**Screenshots**
none
**Platform (please complete the following information):**
none
**Additional context**
none
Answers:
username_1: @username_0 Give your QSelect a width style: https://codepen.io/Hawkeye64/pen/wvWgQgQ
Status: Issue closed
username_0: @username_1 of course, you can use the style attribute to solve it, but I think QSelect should have its own initial width to render itself; in this sample, the QSelect **SHOULD** render the 'Standard' label correctly when it is first shown in the UI
username_2: @username_0 The problem is more complex. Cannot add a default width because it will break other use-cases of QSelect. Just like native input/select, most likely you'll need to set a width for it. |
laravel/passport | 195200264 | Title: Allow custom URL schemes for redirect urls
Question:
username_0: On iOS a common way to authenticate with OAuth 2.0 is to create a custom url scheme for the app. So the redirect url you would register in passport for an app would be something like `my-app://some-url`.
At the moment passport validates URLs using the laravel validators url rule, which does not allow for custom URL schemes. I have posted an [issue](laravel/internals/issues/306) regarding the url validation rule on laravel/internals.
What do you think about removing this validation for passport? Alternatively, just validate that the scheme is valid, but not that it is an IANA registered scheme (i.e. beginning with a letter and followed by any combination of letters, digits, plus `+`, period `.`, or hyphen `-`).
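For illustration, a permissive check of just the scheme along those lines might look like the following (sketched in Python purely to show the pattern; Passport itself would of course do this in PHP):
```python
import re
from urllib.parse import urlsplit

# RFC 3986: scheme = ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )
SCHEME_RE = re.compile(r"[A-Za-z][A-Za-z0-9+.-]*")

def has_valid_scheme(redirect_uri: str) -> bool:
    scheme = urlsplit(redirect_uri).scheme
    return bool(scheme) and SCHEME_RE.fullmatch(scheme) is not None

assert has_valid_scheme("my-app://some-url")
assert has_valid_scheme("https://example.com/callback")
assert not has_valid_scheme("://no-scheme")
```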
Answers:
username_1: @username_0 I agree with you, and I also propose a PR which can help get this issue resolved in Passport.
But I'm not sure `+` or `.` are used in any IANA registered scheme... Please do check http://www.iana.org/assignments/uri-schemes/uri-schemes.xhtml
username_1: I guess it will not work in all browsers as expected. Check out http://stackoverflow.com/questions/8972397/plus-sign-on-subdomains-names
username_0: Made a quick test app and registered a scheme `a++://` and it is in fact valid.
I think the PR looks good 👍
username_1: Okay, sounds good :+1: But in the PR there is no validation for the custom scheme (protocol). A user can add `*` or any special character if he wants.
username_0: Yeah I'm okay with not validating the scheme.
Some user agents could be more liberal with the scheme anyway..
Maybe the **any** rule could be something like `[a-zA-Z][a-zA-Z0-9-.+]+` instead?
username_0: Until url validation is fixed in laravel I suggest we do not use it in passport.
Would a PR that removes url validation or replaces it with a custom validation be accepted?
username_1: Customizing URL validation would be a tedious job and there are so many things which we need to consider.
ping @taylorotwell can you please look into it? I would suggest we go with simple scheme validation and think later about what more features we can add. At least with scheme validation added, custom URL redirection would no longer be held up in Passport.
username_0: What tests are failing?
Using `[a-zA-Z][a-zA-Z.\-+]+` only foo://bar is failing for me, but that test is wrong.
foo://bar should be a valid url if we want to allow custom url schemes..
username_1: I was trying with `[a-zA-Z][a-zA-Z0-9-.+]+`, which you commented earlier, while your recent comment uses `[a-zA-Z][a-zA-Z.\-+]+` with the `\` before `-+` - that's what made the difference 😎
Status: Issue closed
username_2: Closing for lack of activity, hope you got the help you needed :) |
Eugeny/ajenti-v | 104495332 | Title: Support of HTTP/2
Question:
username_0: Nginx has started the process to support http/2, and they have now released the first patch for nginx to support http/2, but ajenti-v can't use http/2 yet. Can it support http/2?
Answers:
username_1: Maybe it's a little bit too soon.
Current distributions will not carry this patch for a while; maybe waiting for a general release would be a better idea.
If you really want this feature, you could modify the ajenti nginx_templates.py to fit your needs.
username_2: the nginx development line in Ubuntu (really, `mainline`) has 1.9.6, which supports http2.
I'm not familiar with Python, but could we conditionally check which version of nginx is present on the system and apply spdy or http2 accordingly?
username_3: In my case (nginx 1.9.7), using SPDY results in a warning and it seems it has already been replaced by http2. Having an HTTP2 option in Ajenti-V would be nice.
(Already changed it myself in nginx.py on my install.)
username_4: username_3 what change did you make to nginx.py out of curiosity? I presume it was in the region of lines 118 to 121?
username_3: on my server, in /var/lib/ajenti/plugins/vh-nginx/nginx.py I replaced SPDY on line 120 with HTTP2:
' http2' if x.spdy else '',
Of course, this is a temporary solution and it does not replace the text "SPDY" in the admin, it just does the job by replacing in the config files.
username_1: It's not outdated, since http2 isn't in the stable branch of nginx development.
Yes, it's on the mainline branch, but that's the current development version, not the truly stable one which will be ready to package on regular, serious distributions like Debian or Ubuntu LTS.
I made a compromise to make everyone happy. My latest PR will add the checkbox and keep the spdy one too. You have the choice, check the one you want :)
Status: Issue closed
|
tidyverse/dplyr | 491945701 | Title: `na_matches` default in join verbs is different for data tables and for data frames
Question:
username_0: Join verbs applied to data frames have `na_matches = "na"` as default, with the option to change to `na_matches = "never"`.
However, join verbs applied to data tables (`left_join` and `inner_join`) have `na_matches = "never"` as default and unique option.
See [this](https://stackoverflow.com/questions/57734832/left-join-for-tbl-na-matches-not-working) Stack Exchange post.
## Setting
```r
library(tidyverse)
library(reprex)
sessionInfo()
#> R version 3.6.1 (2019-07-05)
#> Platform: x86_64-apple-darwin18.6.0 (64-bit)
#> Running under: macOS Mojave 10.14.6
#> ...
#> other attached packages:
#> [1] reprex_0.3.0 forcats_0.4.0 stringr_1.4.0 dplyr_0.8.1
#> [5] purrr_0.3.2 readr_1.3.1 tidyr_0.8.3 tibble_2.1.3
#> [9] ggplot2_3.2.0 tidyverse_1.2.1
#> ...
```
## The data
```
con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
df_1 <- tibble(A = c("a", "aa"), B = c("b", "bb"), D = c("d", NA))
df_2 <- tibble(A = c("a", "aa"), C = c("c", "cc"), D = c("d", NA))
copy_to(con, df_1, overwrite = T)
copy_to(con, df_2, overwrite = T)
dt_1 <- tbl(con, "df_1")
dt_2 <- tbl(con, "df_2")
df_1
#> # A tibble: 2 x 3
#> A B D
#> <chr> <chr> <chr>
#> 1 a b d
#> 2 aa bb <NA>
df_2
#> # A tibble: 2 x 3
#> A C D
#> <chr> <chr> <chr>
#> 1 a c d
#> 2 aa cc <NA>
dt_1
#> # Source: table<df_1> [?? x 3]
#> # Database: sqlite 3.29.0 [:memory:]
#> A B D
#> <chr> <chr> <chr>
#> 1 a b d
#> 2 aa bb <NA>
dt_2
#> # Source: table<df_2> [?? x 3]
#> # Database: sqlite 3.29.0 [:memory:]
#> A C D
#> <chr> <chr> <chr>
#> 1 a c d
#> 2 aa cc <NA>
```
[Truncated]
Browse[3]> f
#> Joining, by = c("A", "D")
#> exiting from: left_join.tbl_df(df_1, df_2)
#> exiting from: left_join(df_1, df_2)
#> # A tibble: 2 x 4
#> A B D C
#> <chr> <chr> <chr> <chr>
#> 1 a b d c
#> 2 aa bb NA cc
```
Here we see that `left_join` calls `left_join.tbl_df` on data frames. Further down we see that `na_matches` is set to `TRUE` before being used as argument in `left_join_impl`. All this makes sense.
Also, a Stack Exchange commenter mentioned this [news link](https://github.com/tidyverse/dplyr/blob/master/NEWS.md#joins) where it is stated "To match NA values, pass na_matches = 'na' to the join verbs; this is only supported for data frames".
So after that much research it is clear now that the default for data frames is `na_matches = "na"` but for data tables it is `na_matches = "never"` with no other option. But this is not at all clear in the [join doc](https://www.rdocumentation.org/packages/dplyr/versions/0.7.8/topics/join), which, for `na_matches` refers to the [join.tbl_df doc](https://www.rdocumentation.org/packages/dplyr/versions/0.7.8/topics/join.tbl_df).
It's always confusing for the user when a default value for the same function is different for different types, so it would be nice if someone could implement `na_matches = "na"` for data tables and set it as default, as it is for data frames.
Not sure how much work this is and I understand there might be other priorities, but in the meantime I think the doc should clearly state: "The defaults is na_matches = 'na' for data frames and na_matches = 'never' (with no other option) for data tables". |
vercel/next.js | 1050767474 | Title: Example with react-native-web is not working
Question:
username_0: ### What version of Next.js are you using?
latest for 11.11.2021
### What version of Node.js are you using?
v16.9.0
### What browser are you using?
Chrome
### What operating system are you using?
Windows
### How are you deploying your application?
npm run dev
### Describe the Bug
I just created a new application from the template and ran "npm run dev" without making any changes to the project :(
Following error was displayed:
```
Warning: React.createElement: type is invalid -- expected a string (for built-in components) or a class/function (for composite components) but got: undefined. You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports.
TypeError: Class constructor MyDocument cannot be invoked without 'new'
at processChild (C:\Personal\NodeProjects\notes-app\node_modules\react-dom\cjs\react-dom-server.node.development.js:3353:14)
at resolve (C:\Personal\NodeProjects\notes-app\node_modules\react-dom\cjs\react-dom-server.node.development.js:3270:5)
at ReactDOMServerRenderer.render (C:\Personal\NodeProjects\notes-app\node_modules\react-dom\cjs\react-dom-server.node.development.js:3753:22)
at ReactDOMServerRenderer.read (C:\Personal\NodeProjects\notes-app\node_modules\react-dom\cjs\react-dom-server.node.development.js:3690:29)
at Object.renderToStaticMarkup (C:\Personal\NodeProjects\notes-app\node_modules\react-dom\cjs\react-dom-server.node.development.js:4314:27)
at Object.renderToHTML (C:\Personal\NodeProjects\notes-app\node_modules\next\dist\server\render.js:739:41)
at async doRender (C:\Personal\NodeProjects\notes-app\node_modules\next\dist\server\next-server.js:1389:38)
at async C:\Personal\NodeProjects\notes-app\node_modules\next\dist\server\next-server.js:1484:28
at async C:\Personal\NodeProjects\notes-app\node_modules\next\dist\server\response-cache.js:63:36
error - TypeError: Class constructor MyDocument cannot be invoked without 'new'
Warning: React.createElement: type is invalid -- expected a string (for built-in components) or a class/function (for composite components) but got: undefined. You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports.
TypeError: Class constructor MyDocument cannot be invoked without 'new'
at processChild (C:\Personal\NodeProjects\notes-app\node_modules\react-dom\cjs\react-dom-server.node.development.js:3353:14)
at resolve (C:\Personal\NodeProjects\notes-app\node_modules\react-dom\cjs\react-dom-server.node.development.js:3270:5)
at ReactDOMServerRenderer.render (C:\Personal\NodeProjects\notes-app\node_modules\react-dom\cjs\react-dom-server.node.development.js:3753:22)
at ReactDOMServerRenderer.read (C:\Personal\NodeProjects\notes-app\node_modules\react-dom\cjs\react-dom-server.node.development.js:3690:29)
at Object.renderToStaticMarkup (C:\Personal\NodeProjects\notes-app\node_modules\react-dom\cjs\react-dom-server.node.development.js:4314:27)
at Object.renderToHTML (C:\Personal\NodeProjects\notes-app\node_modules\next\dist\server\render.js:739:41)
at async doRender (C:\Personal\NodeProjects\notes-app\node_modules\next\dist\server\next-server.js:1389:38)
at async C:\Personal\NodeProjects\notes-app\node_modules\next\dist\server\next-server.js:1484:28
at async C:\Personal\NodeProjects\notes-app\node_modules\next\dist\server\response-cache.js:63:36
```
### Expected Behavior
npm run dev compiles the application
### To Reproduce
create new app from template and run "npm run dev"
Answers:
username_0: I forgot to mention. The error is displayed only when calling the url (opening the app in browser)
Status: Issue closed
username_1: Hi, can you try next@canary? It should be fixed. Closing as a duplicate of #31104
username_0: Thank you! I confirm the solution.
After "npm i --save next@canary" the problem disappeared!
username_2: Thanks @username_1 worked for me as well
username_3: This issue has been automatically locked due to no recent activity. If you are running into a similar issue, please create a new issue with the steps to reproduce. Thank you. |
DrMerfy/vscode-overtype | 780888755 | Title: What is "readme badges" about?
Question:
username_0: If we don't see use in that I suggest to drop it and move the "all contributors" there (and add @AdamMaras and @username_1 to it).
Answers:
username_1: Mhm, I'll remove it now, good idea.
username_0: Hm, if I understand the bot and the [Emoji Key](https://allcontributors.org/docs/en/emoji-key) correctly, may I request an additional code contrib for #11? I'm keen to check if "links to code contributions of this project" is an actual working link...
username_1: Ofc, I'll add it there :D
Status: Issue closed
username_0: ... and the generated link works :-)
Hm... I still think that, if we want to keep the contributors badge, then we should move it to the "badge line" (as the last entry) - but as the counter "1" looks quite ugly, possibly just drop the counter (for now), or maybe add the original author and you to the contributors?
mystor/git-revise | 477114705 | Title: Handle being called from a subdir of the git repo
Question:
username_0: When in a subdir of the git repo I am getting:
```
Traceback (most recent call last):
File "/home/username_0/.local/bin/git-revise", line 10, in <module>
sys.exit(main())
File "/home/username_0/.local/lib/python3.7/site-packages/gitrevise/tui.py", line 191, in main
inner_main(args, repo)
File "/home/username_0/.local/lib/python3.7/site-packages/gitrevise/tui.py", line 182, in inner_main
interactive(args, repo, staged, head)
File "/home/username_0/.local/lib/python3.7/site-packages/gitrevise/tui.py", line 99, in interactive
todos = edit_todos(repo, todos, msgedit=args.edit)
File "/home/username_0/.local/lib/python3.7/site-packages/gitrevise/todo.py", line 213, in edit_todos
""",
File "/home/username_0/.local/lib/python3.7/site-packages/gitrevise/utils.py", line 90, in run_editor
path = repo.get_tempdir() / filename
File "/home/username_0/.local/lib/python3.7/site-packages/gitrevise/odb.py", line 223, in get_tempdir
self._tempdir = TemporaryDirectory(prefix="revise.", dir=str(self.gitdir))
File "/usr/lib64/python3.7/tempfile.py", line 788, in __init__
self.name = mkdtemp(suffix, prefix, dir)
File "/usr/lib64/python3.7/tempfile.py", line 366, in mkdtemp
_os.mkdir(file, 0o700)
FileNotFoundError: [Errno 2] No such file or directory: '.git/revise.99am3enc'
```
cd the toplevel dir works just fine.
Status: Issue closed
Answers:
username_1: Thanks for catching this! Slipped up when changing where the tempdir is created and accidentally registered it relative to the root, rather than the current working directory. Should be fixed in v0.4.2 |
mystasly48/mystasly48.github.io | 190556936 | Title: Let's change the design?
Question:
username_0: We don't need a detailed About, we don't need a detailed Work section, we don't need the Game section, and we don't need a detailed PC section either.
Come to think of it, there isn't much information worth posting anyway, so why not change it to a simple one-page website?
It's trendy these days, after all.
Answers:
username_0: Apparently it's called a Single Page Website or Single Page Design
username_0: Something like https://startbootstrap.com/template-overviews/freelancer/ looks good
username_0: What information should we put on it?
Status: Issue closed
|
kalexmills/github-vet-tests-dec2020 | 762016122 | Title: rootfs/snapshot: pkg/cloudprovider/providers/aws/aws.go; 19 LoC
Question:
username_0: [Click here to see the code in its original context.](https://github.com/rootfs/snapshot/blob/ef9bc7b34e1c4409ac87a5d49b6f4bf3a60b15f9/pkg/cloudprovider/providers/aws/aws.go#L3169-L3187)
<details>
<summary>Click here to show the 19 line(s) of Go which triggered the analyzer.</summary>
```go
for securityGroupID := range securityGroupIDs {
request := &ec2.DeleteSecurityGroupInput{}
request.GroupId = &securityGroupID
_, err := c.ec2.DeleteSecurityGroup(request)
if err == nil {
delete(securityGroupIDs, securityGroupID)
} else {
ignore := false
if awsError, ok := err.(awserr.Error); ok {
if awsError.Code() == "DependencyViolation" {
glog.V(2).Infof("Ignoring DependencyViolation while deleting load-balancer security group (%s), assuming because LB is in process of deleting", securityGroupID)
ignore = true
}
}
if !ignore {
return fmt.Errorf("error while deleting load balancer security group (%s): %v", securityGroupID, err)
}
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: ef9bc7b34e1c4409ac87a5d49b6f4bf3a60b15f9 |
oleronrob/todo-list-app | 419370168 | Title: new todo can overwrite old todo
Question:
username_0: **Describe the bug**
id conflict :
new todo can overwrite old todo
Status: Issue closed
Answers:
username_0: **Describe the bug**
id conflict :
new todo can overwrite old todo
Status: Issue closed
username_0: Change in file store.js:
new ids are now based on time rather than on a random method.
scambra/devise_invitable | 481156093 | Title: Tokens invalid for (seemingly) no reason
Question:
username_0: I'm using this gem in one of my apps and overall it works pretty well. However, I am having an abnormal number of users getting "invalid token" errors when trying to accept their invites. I have expirations turned off, and the only thing the affected invites have in common is that they are from about a month ago. Other users from that time accepted their invites with no problem, but the ones that waited are having issues.
Thoughts?
Answers:
username_1: Only the last invitation token is valid, so is it possible these users were invited again and they are trying to accept the first invitation?
username_0: That's not the issue here. Only 1 invite has been sent out.
username_2: I ran into the same issue. What I finally managed to figure out is that some email providers pre-click all links in all emails as a phishing/spam/malware protection. Unfortunately, that also includes clicking the 'Reject invitation' link I have in the invitation emails. I managed to fix this by changing the link in the email from directly rejecting, to going to a page from which the user then has to click a reject button (the email bots don't do that).
username_0: Interesting, but as I do not have a 'Reject Invitation' link in my emails, that's not my issue.
@username_1, feel free to close this issue for now since I'm not working on this anymore and I have no way of consistently recreating the issue
Status: Issue closed
|
OutpostUniverse/OP2Utility | 328823252 | Title: Define or document exception behaviour of Stream classes
Question:
username_0: We should attempt to define, or document as undefined, the behaviour of the Stream classes when there is an exception during a read or write. This includes exception safety, and EOF behaviour.
A failed read/write operation might leave the file pointer in an unexpected or indeterminate state. For example, the stream position might still be left at the location from before the I/O operation started, part way through the I/O operation, or completely at the end of the I/O operation regardless of whether the full number of bytes was actually processed. Additionally, the contents of the buffer may be unknown, such as during a failed read. It might be the buffer was not modified, that it was partially modified, or that it was completely filled, either with data, or with zeros.
This includes the [exception safety levels](https://en.wikipedia.org/wiki/Exception_safety) of various streams and operations. Exception safety levels are:
1. No-throw guarantee. (a.k.a. failure transparency, or no failure)
Example: Checking the length of a MemoryStreamReader.
2. Strong exception safety. (a.k.a. commit or rollback semantics)
Example: Attempting to read from a MemoryStreamReader, which may or may not extend past the bounds of the MemoryStreamReader buffer, causing the operation to abort before it is started.
3. Basic exception safety (a.k.a. no leak guarantee)
Example: Requesting a read from a FileStreamReader past the end of file, causing the buffer to be partially filled with an unknown amount of data, and the stream pointer adjusted to the end of the file which is neither the starting point nor the expected end point. The stream object can still be destroyed safely, and the file will be closed properly. The buffer might be re-initialized or ignored, and the stream could be used again after seeking to a known location.
4. No exception safety.
Example: Attempting to read past the end of a network socket stream reader, which causes the program to crash, your computer to format the hard drive, the contents of your bank account sent to a Nigerian prince, and the last 3 photos on your phone emailed to your mother, grandmother, and priest.
Let's try to avoid number 4. In particular, we might define at minimum that each derived Stream class have at least basic exception safety or better for each method. Each Stream class can then define how exception safe each of its operations are. We should strive for number 2, Strong exception safety, where possible, though we may need to settle for number 3, Basic exception safety, where performance or correctness issues place limitations on what we can accomplish.
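To make the difference between the strong and basic guarantees concrete for a read call, here is a minimal sketch (written in Python for brevity; the same shape applies to the C++ Stream classes): validating the whole request before touching any state gives commit-or-rollback behaviour, while copying as you go can leave the position and output buffer half-updated on failure.
```python
class MemoryStreamReader:
    def __init__(self, data: bytes):
        self._data = data
        self._pos = 0

    def read_strong(self, count: int) -> bytes:
        """Strong guarantee: bounds are checked up front, so a failed call changes nothing."""
        if self._pos + count > len(self._data):
            raise EOFError("read would run past the end of the buffer")
        chunk = self._data[self._pos:self._pos + count]
        self._pos += count  # state is only mutated once the operation can no longer fail
        return chunk

    def read_basic(self, out: bytearray, count: int) -> None:
        """Basic guarantee only: failing part way leaves pos/out partially updated,
        but the object remains valid and safe to destroy or reuse after a seek."""
        for _ in range(count):
            if self._pos >= len(self._data):
                raise EOFError("hit the end of the buffer part way through the read")
            out.append(self._data[self._pos])
            self._pos += 1
```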
Answers:
username_1: Thanks for the nice writeup on exceptions.
Once we review and implement exception safety for a piece of the Stream code, I'm guessing we would want to write a comment on the public function(s) involved with a short description of the expected exception behaviour. This way a user would understand from a high level what to expect for exception handling when calling the function. Ideally this comment would pop up in IDE tooltips when typing out the function.
Depending on the level of effort involved, it might make sense to tackle this one class at a time, i.e. FileStreamWriter in its own branch, then later MemoryStreamReader, and so on.
php-pm/php-pm | 222572394 | Title: NGINX load balance error
Question:
username_0: Hello:
I am currently setting up an NGINX load balancer with 3 worker instances on my remote server, but apparently the workers all got shut down due to **PHP Fatal error: Uncaught Symfony\Component\Debug\Exception\ContextErrorException: Notice: Undefined index: HTTP_X_PHP_PM_REMOTE_IP in /dir/to/site/vendor/php-pm/php-pm/ProcessSlave.php:403**
my nginx conf:
```
upstream ppm {
    least_conn;
    server unix:/dir/to/site/.ppm/run/5501.sock;
    server unix:/dir/to/site/.ppm/run/5502.sock;
    server unix:/dir/to/site/.ppm/run/5503.sock;
}
location / {
    try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://ppm;
}
```
btw. really cool project tho.
Thanks
Status: Issue closed |
RT-Thread/rt-thread | 588413849 | Title: 'NoneType' object has no attribute 'groups' when switching python2 to python3
Question:
username_0: if I use python3, scons --menuconfig or scons --target=eclipse both fail.

however, using python2, it is OK.

Answers:
username_1: Maybe if you manually upgrade the env scripts this problem will go away
Status: Issue closed
username_0: if I use python3, scons --menuconfig or scons --target=eclipse both fail, and the error is the same.

however, using python2, it is OK.

username_0: I'm on an Ubuntu system
Do you mean like this:
```
source ~/.env/env.sh
pkgs --update
```
I tried it again and it still doesn't work, same result.
username_1: That's not what I meant. I meant pkgs --upgrade, which updates the env feature scripts to the latest version,
or you can switch to the env feature scripts directory and use the git pull command to update them
username_0: It works now. Perhaps apt install scons installs the Python 2 version by default; you need to remove scons (apt autoremove scons) and install the Python 3 one (pip3 install scons), and then scons can be used to build.
Status: Issue closed
|
nakamura-to/Soma | 70712532 | Title: Create a dynamic type programmatically
Question:
username_0: Hi
I have some trouble to create a dynamic type for my unit test project in f# because i have some function that get this type as parameter.
Have you some suggestion?
Thx
Answers:
username_1: Hi
Could you give me sample code ?
I would like to know more about your trouble.
username_0: Of course,
I want to test this simple function:
```F#
let manageSuccess (successResult, message: List<string>) =
{ ReturnType.Return = true
ReturnType.Message = "some message... " + ( if (message.IsEmpty) then "" else message |> List.reduce (+))
ReturnType.UserID = successResult?ATTRIBUTETYPE1
ReturnType.UserRole = successResult?ATTRIBUTETYPE2 }
```
where ReturnType is a simple record type, and successResult is a dynamic type from Soma after a join query.
I want to ask how I can programmatically create a dynamic type, in order to write a unit test for that function. In particular, I don't want to map the join query result to a new type in my project, as I did with regular tables and views; but if you say it is impossible to create a dynamic type without calling the Soma library, I will follow that route to solve this problem. Maybe if I can create a fake type that matches the dynamic type in my unit test project, that would be a good workaround.
Thanks for your support and work,
it helps me very much
username_1: Thank you for your explanation.
Try to use the [dynamic function](https://github.com/username_1/Soma/blob/1.8.0.7/Soma.Core/Db.fs#L1938) as follows.
```fs
open Soma.Core
type ReturnType = { Return : bool; Message : string; UserID : int; UserRole : string }
let manageSuccess (successResult, message: List<string>) =
{ ReturnType.Return = true
ReturnType.Message = "some message... " + ( if (message.IsEmpty) then "" else message |> List.reduce (+))
ReturnType.UserID = successResult?ATTRIBUTETYPE1
ReturnType.UserRole = successResult?ATTRIBUTETYPE2 }
[<EntryPoint>]
let main argv =
// create a dynamic type
let successResult = dynamic (MsSqlDialect())
successResult?ATTRIBUTETYPE1 <- 99
successResult?ATTRIBUTETYPE2 <- "admin"
// use the dynamic type
let returnType = manageSuccess (successResult, [])
printfn "%A" returnType
0
```
Or if you don't want to depend on the Soma library, use an IDictionary instead of the dynamic type, because a Soma dynamic type can be treated as an IDictionary.
```fs
open System.Collections
let (?) (dynamic:IDictionary) (propName:string) :'a =
dynamic.[propName] :?> 'a
let inline (?<-) (dynamic:IDictionary) (propName:string) (value:'a) =
dynamic.[propName] <- value
type ReturnType = { Return : bool; Message : string; UserID : int; UserRole : string }
let manageSuccess (successResult, message: List<string>) =
{ ReturnType.Return = true
ReturnType.Message = "some message... " + ( if (message.IsEmpty) then "" else message |> List.reduce (+))
ReturnType.UserID = successResult?ATTRIBUTETYPE1
ReturnType.UserRole = successResult?ATTRIBUTETYPE2 }
[<EntryPoint>]
let main argv =
// create a Hashtable
let successResult = Hashtable()
successResult?ATTRIBUTETYPE1 <- 99
successResult?ATTRIBUTETYPE2 <- "admin"
// use the Hashtable
let returnType = manageSuccess (successResult, [])
printfn "%A" returnType
0
```
username_0: Awesome,
thank you :+1:
Status: Issue closed
|
jelastic-jps/lets-encrypt | 406134709 | Title: Punycode support
Question:
username_0: Can you add punycode support to addon?
Answers:
username_1: Hi Max, the ticket about your issue has been registered and fixed. It will be in the repository after the testing stage. Thanks in advance!
username_1: Hi Max, the fix has been merged. Please, try to install Let's Encrypt add-on once again. Let us know in case you have any other issues. Thanks!
Status: Issue closed
|
m-m-m/util | 166029921 | Title: NlsBundle: Support for Object methods
Question:
username_0: If an `NlsBundle` instance is created as dynamic proxy via `NlsBundleFactory` it does not properly support methods inherited from `Object` such as `toString()`, `equals(Object)` or `hashCode()`.
This is not very relevant for an `NlsBundle` instance that is just a factory to create message but still this is a bug and the `NlsBundle` instance should satisfy the contract of `Object`.
Answers:
username_0: Seems complete to me. Let me know if you miss anything here (methods like wait and notify are not dispatched to the dynamic proxy invocation handler, would not make any sense, and will throw an exception anyway).
Status: Issue closed
|
jaegertracing/jaeger | 428439390 | Title: Allow agent to use gRPC roundrobin loadbalancing when talking to collector
Question:
username_0: <!--
Welcome to the Jaeger project! 👋🎉
- Please search for existing issues to avoid creating duplicate bugs/feature requests.
- Please be respectful and considerate of others when commenting on issues.
- Please provide as much information as possible so we all understand the issue.
- If you only have a question, you may get a faster response by asking in
- our chat room https://gitter.im/jaegertracing/Lobby, or
- the forum https://groups.google.com/d/forum/jaeger-tracing
(but please don't double post)
-->
## Requirement - what kind of business use case are you trying to solve?
<!-- required section -->
Currently users are only allowed to specify a static list of collector hosts/ports for agents to talk to. We should provide a mechanism to have the agent talk to a gRPC round-robin load balancer backed by an external DNS service
## Problem - what in Jaeger blocks you from solving the requirement?
<!-- required section -->
<!-- If possible, describe the impact of the problem. -->
None
## Proposal - what do you suggest to solve the problem or improve the existing situation?
<!-- It's ok if you don't have one. -->
We add an option for the user to specify an external naming.Resolver for gRPC.
## Any open questions to address
<!-- Questions that should be answered before proceeding with implementation. -->
Answers:
username_1: We have it already: https://www.jaegertracing.io/docs/next-release/deployment/#discovery-system-integration . We even make use of that with the Jaeger Operator: https://github.com/jaegertracing/jaeger-operator/issues/332
username_2: please update title and description, they are not accurate. What you're really trying to do is support custom discovery and load balancing configuration. |
Sikerdebaard/dcmrtstruct2nii | 795183161 | Title: BATCH PROCESSING
Question:
username_0: is there any way of processing batches of RTSTRUCT files?
Answers:
username_1: Not out of the box. You'd have to write some sort of shell-script or python script to loop over your directory structure and then call this tool on every dicom + rtstruct in order to do this.
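For example, a small script along these lines could do it. This is only a sketch: it assumes a layout of one `rtstruct.dcm` plus a `dicom/` folder per case (adjust the paths to your own structure) and calls the package's Python entry point the way the README shows.
```python
from pathlib import Path

from dcmrtstruct2nii import dcmrtstruct2nii

data_root = Path("data")      # assumed layout: data/<case>/rtstruct.dcm and data/<case>/dicom/
output_root = Path("output")  # NIfTI masks end up in output/<case>/

for case_dir in sorted(p for p in data_root.iterdir() if p.is_dir()):
    rtstruct = case_dir / "rtstruct.dcm"
    dicom_dir = case_dir / "dicom"
    out_dir = output_root / case_dir.name
    out_dir.mkdir(parents=True, exist_ok=True)
    print(f"Converting {case_dir.name} ...")
    dcmrtstruct2nii(str(rtstruct), str(dicom_dir), str(out_dir))
```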
Status: Issue closed
username_0: okay Thanks! |
AdChain/AdChainRegistryDapp | 302450486 | Title: MetaMask Overcharging Parameter Proposal
Question:
username_0: When I create a parameter proposal for any parameter value, MetaMask prompts me to pay 10000000000 ADT, when the gMinDeposit is 100 ADT. Also, notice how if I added a decimal point after 9 zeroes in 10000000000 ADT, I get 10.000000000 (10) ADT, not 100.000000000 (100) ADT.
Link to screen recording: https://drive.google.com/open?id=12z8Njd_Vo5r9v1j7BPfhahT9CyjfImMv
Answers:
username_1: I would consider opening an issue with MetaMask about this one
Status: Issue closed
|
dmitryserbin/azdev-release-orchestrator | 669448140 | Title: Dropdown selection with variables
Question:
username_0: This task has been so great! We use it heavily.
We came across a minor issue where we pass the Release Definition in a variable like $(Release). This works when the value of this variable is the definition ID. However, if you pass in the name, even though it actually exists in the dropdown, the task fails. Can this please be fixed?
Answers:
username_1: Thanks for reporting this.
The task interface uses IDs to fetch each subsequent property value in the UI. For example, to get the available target definitions in the dropdown menu you need to know the project ID. This means we are stuck with IDs to enable dropdown menu auto-complete, otherwise it won't work. Nice for classic release pipelines, but not so convenient for YAMLs.
The root cause is an Azure DevOps task API limitation (last time I checked, at least), and I have already made a few attempts to work around this, but none were successful. The best option I have so far (more like a hack) is to introduce an optional boolean parameter to force the task to use names instead of IDs for the target project, definition and release. That may work fine with YAML pipelines, but will be quite ugly in the classic pipeline interface.
I might give it another go and try to solve this problem one more time - I don't like having IDs in my YAMLs either. Will keep you updated.
username_1: This will be addressed in the next release.
Status: Issue closed
|
darktrojan/shrunked | 92607392 | Title: Unmaintained translations to be removed
Question:
username_0: I've decided that it's not worth maintaining the translations which haven't been updated for a long time. They are likely to be removed.
At risk: Catalan, German, French, Italian, Polish, Portuguese (Brazil), Swedish, Turkish, Chinese (Simplified). Each of these is missing a large number of strings.
I don't want to annoy a lot of users by having their software revert to English, but I think they would rather have it all in English than half (or more) in English, half in another language.
If you can help, send a pull request, or visit http://zoo.username_0.net/username_0/shrunked/repo.php, which makes the process easier.
Answers:
username_0: I've just pushed 7f3b3c015518dd9cb9edfc0b557f22e4d2bc75e2, which leads here. If you clicked a button which led you here, it looks like it worked!
username_0: Thanks @ND83 for updating the French translation!
username_1: I can help you with the german translation.
username_0: Thanks @username_1 – although it looks like somebody has [already done it](http://zoo.username_0.net/username_0/shrunked/de/locale.php).
username_1: Ok, but the bars are not completely green, doesn't that mean there is still something missing?
username_0: Not necessarily, some of the strings are just the same as they are in English. That's okay.
Status: Issue closed
|
SlimeKnights/TinkersConstruct | 162837364 | Title: No heart canisters, 1.9.4?
Question:
username_0: IDK if I missed something, but I don't have the armor tab. I even checked the controls and the keybinding isn't there, and I tried hitting "o" as well. There are also no canisters or hearts in the creative menu, and no jeweled apple; now I have necrotic bones. That's about it for that part :/
Answers:
username_1: https://github.com/SlimeKnights/TinkersConstruct/wiki/FAQ
username_0: thanks for the link
username_2: @username_1 Ninja'ed me, but anyway, that isn't at the top of the list to be re-implemented, so don't expect them soon.
username_0: Thanks for the link, just asking since I couldn't find any information about it other than standard Tinkers. What's the latest version with all the stuff, including but not limited to Materials and You 2 and Mighty Smelting? 1.8?
username_2: That will all be in one book most likely.
username_3: so the armor we were used to in 1.7.10 isn't coming back in the next versions?
username_4: Nope.
username_5: As an update on this, while there is nothing from the TiC team for heart canisters, [Baubley Heart Canisters](https://minecraft.curseforge.com/projects/baubley-heart-canisters) now exists.
Status: Issue closed
|
LeaVerou/awesomplete | 59543426 | Title: 청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피
Question:
username_0: 청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피청주오피 & 영통오피 ∝ ≤밤≥》W A R《〔닷〕C O M ヮ신도림오피 |
integrations/slack | 314355410 | Title: Invite new members via slash command
Question:
username_0: Hope this makes sense as its own issue – it is related but out of the scope of [Take action via slash commands](https://github.com/integrations/slack/issues/474).
It would be great to be able to invite some to an org via a command like:
```
/github invite myorg/team username
```
Answers:
username_1: I'd love if this was possible! Especially if `username` could also be a Slack handle.
I wonder if the org admin permissions you'd need for this are going to make this challenging for us to implement
username_0: I think so |
neo4j/neo4j-javascript-driver | 242961823 | Title: Integer.d.ts
Question:
username_0: I just upgraded to the latest npm package and changed the imports accordingly. However...
import Integer from "neo4j-driver/types/v1/integer";
...
if (obj instanceof Integer) return Integer.toNumber(obj);
...results in an error...
Cannot find module 'neo4j-driver/types/v1/integer'
Why?
Answers:
username_1: Hi @username_0,
I do not think you need to change imports. Those TypeScript declaration files are just type declarations, right? They do not contain any actual executable JavaScript code or modules. TypeScript knows how to extract types from the NPM package and should use those defined in the `types` folder. Your IDE should also just pick up the existing TypeScript declarations and make better code completion suggestions.
Hope this helps.
Status: Issue closed
username_0: @username_1 When I make no changes to my imports, I get this error message:
` Could not find a declaration file for module 'neo4j-driver/lib/v1/record'. '/root/app/node_modules/neo4j-driver/lib/v1/record.js' implicitly has an 'any' type.
rudl | Try `npm install @types/neo4j-driver/lib/v1/record` if it exists or add a new declaration (.d.ts) file containing `declare module 'neo4j-driver/lib/v1/record'`
username_1: I just upgraded to the latest npm package and changed the imports accordingly. However...
import Integer from "neo4j-driver/types/v1/integer";
...
if (obj instanceof Integer) return Integer.toNumber(obj);
...results in an error...
Cannot find module 'neo4j-driver/types/v1/integer'
Why?
username_1: @username_0 this definitely does not sound right. Could you please share an example project to reproduce this problem?
username_0: @username_1 I've extracted an example: https://github.com/username_0/neo4j-test
username_1: @username_0 I'm able to make your code compile by changing imports to:
```typescript
import neo4j from "neo4j-driver";
import {Driver} from "neo4j-driver/types/v1/driver";
import Integer from "neo4j-driver/types/v1/integer";
import Record from "neo4j-driver/types/v1/record";
import Transaction from "neo4j-driver/types/v1/transaction";
import Session from "neo4j-driver/types/v1/session";
import {AuthToken} from "neo4j-driver/types/v1";
```
and `DatabaseManager#get()` function to:
```typescript
public static get(): Driver {
const authToken: AuthToken = neo4j.auth.basic("user", "pwd");
const driver: Driver = neo4j.driver(`bolt://`, authToken);
if(!DatabaseManager.neo4jClient) DatabaseManager.neo4jClient = driver;
return this.neo4jClient;
}
```
I think `import {auth, driver} from "neo4j-driver/types/v1";` does not work because this file only contains TS declarations for `auth` and `driver` functions. Rest of imports import only type declarations. So actual executable JS code still needs to be imported via `import neo4j from "neo4j-driver";`.
What do you think?
username_0: I tried this as well. I used that approach when I created this issue. Nevertheless I now pushed the Integer check. And then it does not compile anymore. Actually you are right. There should be no need to change from \*\*/lib/\*\* to \*\*/types/\*\*.
username_1: I'm not entirely sure why this does not work. It has probably something to do with `instanceof` operator which requires right operand to be a function.
However driver's JS code does not export `Integer` as a type constructor. It only exports functions to work with integers like `int` and `isInt`. Could you please use the following code without `instanceof`:
```typescript
if(neo4j.isInt(obj)) {
const i = obj as Integer;
console.log('hooray');
}
```
username_0: But with `Record` we have the same scenario and it did work. Also the types of @SnappyCroissant made this work. Unfortunately the new types do not allow this. However I am going to use your suggestion now. Thank you for your time :)
username_1: Oh, so it used to work with the TypeScript declarations from this PR https://github.com/neo4j/neo4j-javascript-driver/pull/183? I'd expect it to work with the updated types then, otherwise it's a problem. Do you have an example where `Record` does not work?
username_0: Yes, the PR is correct. No, sorry. What I meant was the structure of the declaration file of `Integer` and `Record`. In both files the class is the default export, but for whatever reason it only fails for the Integer class.
username_2: have you tried enabling `allowSyntheticDefaultImports` in tsconfig.json?
username_0: It says: Allow default imports from modules with no default export. This does not affect code emit, just typechecking. But all declaration files do have a default export. But I got it working with a minor change in integer.d.ts.
username_2: having a default export doesn't mean it's an ES module. it's just commonjs with a `default` member on exports.
username_0: @username_2 I tried both variants, both did not work. With `allowSyntheticDefaultImports` it still returns: Cannot find module 'neo4j-driver/types/v1/integer' for
import Integer from 'neo4j-driver/types/v1/integer';
and
import {default as Integer} from 'neo4j-driver/types/v1/integer';
username_0: I am using @username_1 approach now.
if (Neo4j.isInt(obj)) return obj.toNumber();
`obj as Integer` does not work, as I said, but that's not a big issue. I declared `obj` as `any`. This does not feel clean, but for now it's a single line in my project, so I am ok with it. Thank you guys anyway. Nevertheless maybe we can make it more compatible some day.
username_2: oh my bad, I just noticed you're trying to import the types, and that's not meant to happen. Typescript typings are meant to be side-by-side to their .js counterparts, but neo4j team decided to leave them apart, so yes, your only bet was to copy the types to the lib folder (and it will automatically work)
username_1: I can confirm that `Integer` import works when type declarations are in the `lib` directory together with their javascript counterparts. They were moved to a separate folder just for clarity and to not mix transpiled js and ts. Honestly, I expected presence of `types` section in `package.json` to fix all location problems like this. Do you guys know a way to keep type declarations in a different folder and fix this problem with project configuration? For example, Vue has type declarations separately: https://github.com/vuejs/vue/tree/dev/types.
@username_2 I've tried your last suggestion and it does not seem to help.
It is bad to directly import TypeScript declarations, right? Maybe that's another indication they should be moved to `lib`.
username_2: the issue is to try and import the declarations without a side-by-side js file to back it up. if you are importing typings just for the sake of accessing interfaces, you need to include the `.d` in the import, like `import Integer from 'neo4j-driver/types/v1/integer.d'` but that is weird and will be stripped after transpilation, hence why it usually keeps the typings along with the .js files.
when exporting everything, including interfaces, on the main `index.d.ts` inside `types` folder, you don't need to rely on importing types paths manually, just doing `import {v1} from 'neo4j-driver'` then using `v1.Integer` will work. that's also the reason why TS team decided to not include a typings in package.json that could be an array (see Microsoft/TypeScript/issues/6585)
username_3: Do you know when it will make it to npm? Because right now it takes trial and error to figure out the imports in TypeScript; this is how ridiculous my imports for neo4j look
```
import {StatementResult} from 'neo4j-driver/types/v1/result';
import * as neo4j from 'neo4j-driver';
import Record from 'neo4j-driver/types/v1/record';
```
the usage is not much better:
```
const values: any[] = [];
record.forEach((value) => { values.push(!neo4j.v1.isInt(value) ? value
: (neo4j.v1.integer.inSafeRange(value) ? value.toNumber() : value.toString()));
});
```
username_1: @username_3 we plan a release soon to get this fix and https://github.com/neo4j/neo4j-javascript-driver/pull/286 out. The latter waits for feedback from internal testing.
Status: Issue closed
username_1: Fix has been released with 1.4.1 driver which is available on npm. |
tensorflow/tensorflow | 383713606 | Title: transform_graph build error
Question:
username_0: <em>Please make sure that this is a build/installation issue. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em>
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary):Source
- TensorFlow version: Github Master version
- Python version:2.7
- Installed using virtualenv? pip? conda?: None.
- Bazel version (if compiling from source): 0.18.1
- GCC/Compiler version (if compiling from source):7.3.0
- CUDA/cuDNN version:
- GPU model and memory:
**Describe the problem**
Install transform_graph tools from the source using Bazel on my VMware VM Ubuntu 18.04
**Provide the exact sequence of commands / steps that you executed before running into the problem**
cd ~/tensorflow
bazel build tensorflow/tools/graph_transforms:transform_graph
**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
And I got random errors such as this:
ERROR: /home/zpluo/tensorflow/tensorflow/core/kernels/BUILD:2829:1: C++ compilation of rule '//tensorflow/core/kernels:matrix_square_root_op' failed (Exit 4)
gcc: internal compiler error: Killed (program cc1plus)
PS: Sometimes it's a different file that can't be built.
Answers:
username_1: Apologies for the delay in response. Is this still an issue?
username_2: I also came across this problem just now.
**System information**
* OS Platform and Distribution: Linux Ubuntu 18.04
* TensorFlow installed from: Source
* TensorFlow version: Github Master version
* Python version: 2.7
* Installed using virtualenv? pip? conda?: None.
* Bazel version (if compiling from source): 0.22.0
* GCC/Compiler version (if compiling from source): 7.3.0
username_1: Can you please paste the error? Also what are the steps you followed to install TF?
username_1: Automatically closing due to lack of recent activity. Please update the issue when new information becomes available, and we will reopen the issue. Thanks!
Status: Issue closed
|
redhat-cop/containers-quickstarts | 1108972716 | Title: Jenkins-Agent-Zap Error
Question:
username_0: When we are trying to build an image from the jenkins-agent-zap, we are getting the below error.
Step 11/13 : RUN curl -sL https://github.com/zaproxy/zaproxy/releases/download/v${ZAPROXY_VERSION}/ZAP_${ZAPROXY_VERSION}_Linux.tar.gz | tar zx --strip-components=1 && curl -sL https://bitbucket.org/meszarv/webswing/get/${WEBSWING_VERSION}.tar.gz | tar zx --strip-components=1 -C webswing && rm -rf webswing/demo && touch AcceptedLicense && git clone --depth 1 --branch v${ZAPROXY_VERSION} https://github.com/zaproxy/zaproxy /tmp/zaproxy && rsync -av /tmp/zaproxy/docker/{policies,scripts,zap*} /zap/ && rsync -av /tmp/zaproxy/docker/policies /var/lib/jenkins/.ZAP/ && rsync -av /tmp/zaproxy/docker/webswing.config /zap/webswing/webswing.config && rm -rf /tmp/zaproxy && touch /.dockerenv && chown root:root /zap -R && chown root:root -R /var/lib/jenkins && chmod 777 /var/lib/jenkins -R && chmod 777 /zap -R
---> Running in 6fac21dfee38
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
Because of this error, we are stuck creating the Docker image. A quick reply would be very helpful for us.
Answers:
username_1: have just cloned and built the zap image without any issues. The URLs mentioned in step 11, can you access them? or are you behind a proxy/firewall etc?
username_0: I have cloned the repository locally and pushed it to our Bitbucket repository. From the OpenShift console, I used the Bitbucket URL to do the build. Only then does it show the above error.
I can access those GitHub and Bitbucket URLs for ZAP v2.9.0 and webswing from the servers. There is no proxy.
It seems the problem is with this line
tar zx --strip-components=1
and tar zx --strip-components=1 -C webswing
Please check and do the needful.
username_0: Also, I have used the `wget --no-check-certificate` command to download the files instead of curl. If I use curl, I get the error below:
Step 11/15 : RUN curl -sL https://github.com/zaproxy/zaproxy/releases/download/v${ZAPROXY_VERSION}/ZAP_${ZAPROXY_VERSION}_Linux.tar.gz | tar xvf --strip-components=1 && curl -sL https://bitbucket.org/meszarv/webswing/get/${WEBSWING_VERSION}.tar.gz | tar xvf --strip-components=1 -C webswing && rm -rf webswing/demo && touch AcceptedLicense && git clone --depth 1 --branch v${ZAPROXY_VERSION} https://github.com/zaproxy/zaproxy /tmp/zaproxy && rsync -av /tmp/zaproxy/docker/{policies,scripts,zap*} /zap/ && rsync -av /tmp/zaproxy/docker/policies /var/lib/jenkins/.ZAP/ && rsync -av /tmp/zaproxy/docker/webswing.config /zap/webswing/webswing.config && rm -rf /tmp/zaproxy && touch /.dockerenv && chown root:root /zap -R && chown root:root -R /var/lib/jenkins && chmod 777 /var/lib/jenkins -R && chmod 777 /zap -R
---> Running in 9b8569546eb8
tar: --strip-components=1: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
Removing intermediate container 9b8569546eb8
error: build error: The command '/bin/sh -c curl -sL https://github.com/zaproxy/zaproxy/releases/download/v${ZAPROXY_VERSION}/ZAP_${ZAPROXY_VERSION}_Linux.tar.gz | tar xvf --strip-components=1 && curl -sL https://bitbucket.org/meszarv/webswing/get/${WEBSWING_VERSION}.tar.gz | tar xvf --strip-components=1 -C webswing && rm -rf webswing/demo && touch AcceptedLicense && git clone --depth 1 --branch v${ZAPROXY_VERSION} https://github.com/zaproxy/zaproxy /tmp/zaproxy && rsync -av /tmp/zaproxy/docker/{policies,scripts,zap*} /zap/ && rsync -av /tmp/zaproxy/docker/policies /var/lib/jenkins/.ZAP/ && rsync -av /tmp/zaproxy/docker/webswing.config /zap/webswing/webswing.config && rm -rf /tmp/zaproxy && touch /.dockerenv && chown root:root /zap -R && chown root:root -R /var/lib/jenkins && chmod 777 /var/lib/jenkins -R && chmod 777 /zap -R' returned a non-zero code: 2
username_0: If I use the original command, I get the error below:
Step 11/15 : RUN curl -sL https://github.com/zaproxy/zaproxy/releases/download/v${ZAPROXY_VERSION}/ZAP_${ZAPROXY_VERSION}_Linux.tar.gz \| tar zx --strip-components=1 && curl -sL https://bitbucket.org/meszarv/webswing/get/${WEBSWING_VERSION}.tar.gz \| tar zx --strip-components=1 -C webswing && rm -rf webswing/demo && touch AcceptedLicense && git clone --depth 1 --branch v${ZAPROXY_VERSION} https://github.com/zaproxy/zaproxy /tmp/zaproxy && rsync -av /tmp/zaproxy/docker/{policies,scripts,zap*} /zap/ && rsync -av /tmp/zaproxy/docker/policies /var/lib/jenkins/.ZAP/ && rsync -av /tmp/zaproxy/docker/webswing.config /zap/webswing/webswing.config && rm -rf /tmp/zaproxy && touch /.dockerenv && chown root:root /zap -R && chown root:root -R /var/lib/jenkins && chmod 777 /var/lib/jenkins -R && chmod 777 /zap -R
--
| ---> Running in c2cea8a2262b
|
|
| gzip: stdin: unexpected end of file
| tar: Child returned status 1
| tar: Error is not recoverable: exiting now
| Removing intermediate container c2cea8a2262b
| error: build error: The command '/bin/sh -c curl -sL https://github.com/zaproxy/zaproxy/releases/download/v${ZAPROXY_VERSION}/ZAP_${ZAPROXY_VERSION}_Linux.tar.gz \| tar zx --strip-components=1 && curl -sL https://bitbucket.org/meszarv/webswing/get/${WEBSWING_VERSION}.tar.gz \| tar zx --strip-components=1 -C webswing && rm -rf webswing/demo && touch AcceptedLicense && git clone --depth 1 --branch v${ZAPROXY_VERSION} https://github.com/zaproxy/zaproxy /tmp/zaproxy && rsync -av /tmp/zaproxy/docker/{policies,scripts,zap*} /zap/ && rsync -av /tmp/zaproxy/docker/policies /var/lib/jenkins/.ZAP/ && rsync -av /tmp/zaproxy/docker/webswing.config /zap/webswing/webswing.config && rm -rf /tmp/zaproxy && touch /.dockerenv && chown root:root /zap -R && chown root:root -R /var/lib/jenkins && chmod 777 /var/lib/jenkins -R && chmod 777 /zap -R' returned a non-zero code: 2
username_1: That error is saying "tar didn't get given a file to untar": if the cURL command is failing due to certificates, then it is not downloading the tar file.
As the certificate check is failing for github.com, this is an environmental problem on your end, so there is nothing we can do.
username_1: https://www.google.com/search?q=wget+pipe+to+tar&oq=wget+pipe+&aqs=chrome.0.0i512l2j69i57j0i512l6.3985j0j7&sourceid=chrome&ie=UTF-8
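Two separate problems show up in the attempts above. The `tar xvf --strip-components=1` variant fails regardless of the download, because the `f` option consumes the next argument as the archive filename, which is exactly why tar reports `--strip-components=1: Cannot open`. The `gzip: stdin: unexpected end of file` error, on the other hand, means the piped download produced no (or truncated) data, which matches the certificate problem described above. A sketch of a wget-based variant of the same step, with the URL and variables exactly as in the original Dockerfile (`--no-check-certificate` only papers over the broken trust store and is not a real fix):
```
# -qO- writes the download to stdout so tar can read the stream from the pipe
wget -qO- --no-check-certificate "https://github.com/zaproxy/zaproxy/releases/download/v${ZAPROXY_VERSION}/ZAP_${ZAPROXY_VERSION}_Linux.tar.gz" \
  | tar zx --strip-components=1
```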
username_0: Thanks for your help resolving the issue. I have now integrated this new repository into our Groovy pipeline. After adding the code below to our Groovy pipeline script, I get an error and the pipeline eventually fails. Here is the code I added:
sh '''
/zap/zap-baseline.py -r index.html -t http://<some website url> || return_code=$?
echo "exit value was - " $return_code
'''
}
post {
always {
// publish html
publishHTML target: [
allowMissing: false,
alwaysLinkToLastBuild: false,
keepAll: true,
reportDir: '/zap/wrk',
reportFiles: 'index.html',
reportName: 'OWASP Zed Attack Proxy'
]
The error screenshot is attached:

username_2: Hi @username_0, I've attached a recent successful Jenkins run using this image for comparison:

It seems I get the output "_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created." where you get an error, so perhaps this is permissions related. Are you running with the anyuid SCC or similar? I am not running any special privileges.
If it helps the code i'm running is part of some training we run - you can see it here to compare - https://rht-labs.com/tech-exercise/#/3-revenge-of-the-automated-testing/6a-jenkins
username_0: I understand: you are adding the ZAP script separately in the Jenkinsfile of the application that you run from Jenkins. But we are running a pipeline Groovy script to execute the pipeline stages.
Do I need to add the script separately in the application's Jenkinsfile, or is it okay in the pipeline Groovy script?
Status: Issue closed
username_1: closing as the issue is user-related. |
depuleio/aroundMe | 268953396 | Title: [Beta-Test]Flyer is not showing up
Question:
username_0: I used a 1280×800 JPG image as a flyer, and it is not showing up on the site.
If there is a restriction on image size or dimensions, I think you might want to notify the user.
Good luck on your project! |
henry-nazare/llvm-sra | 104792377 | Title: update SAGE submodule for SAGEExpr::getSize()
Question:
username_0: Commit 2e6ad006b842d65030be5fd1fb23900a9a9a3474 introduced two calls to `SAGEExpr::getSize()`. That method was added to the `simplify-minmax` branch of `llvm-sage` in commit henry-nazare/llvm-sage@58cd762a6dacfe5cbbe02ea3a235d7aac6d35885. However, the `master` branch of `llvm-sra` tracks the `simplify-minmax` branch of `llvm-sage` at [an earlier commit](https://github.com/henry-nazare/llvm-sage/commit/fa56f38a9ec56118894f20a990b1c923227735cb). Thus, the current `master` branch of `llvm-sra` does not actually compile with the `llvm-sage` submodule it is configured to use. Please update `llvm-sra` to use a newer `llvm-sage` revision that includes the required `SAGEExpr::getSize()` member function.
Answers:
username_0: Specifically, henry-nazare/llvm-sage@1c932f10e94334db00b50fd7e27c97dea2e69f5e (the current head of the `no-qepcad-on-minmax` branch) seems to be working well for [us](https://github.com/AlisaMaas/CArrayIntrospection). |
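A sketch of what the requested bump amounts to, using the standard git submodule workflow (this assumes the submodule checkout lives in a directory named `llvm-sage` inside the `llvm-sra` tree; adjust the path to the actual submodule location):
```
cd llvm-sage
git fetch origin
git checkout 1c932f10e94334db00b50fd7e27c97dea2e69f5e   # head of no-qepcad-on-minmax at the time
cd ..
git add llvm-sage
git commit -m "Track llvm-sage revision that provides SAGEExpr::getSize()"
```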
jakubkrys/java-wprowadzenie-tablice | 761606892 | Title: Exercise 4
Question:
username_0: Create a 5-element array of digits to store maths grades, and then (a possible solution sketch follows after the list):
- add 5 grades with values from 1 to 3,
- display the last 3 grades,
- correct the second grade by putting the value 5 there,
- arrange the grades in the array so that they are in order from lowest to highest (do not use constructs that were not covered in class).
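A possible solution sketch using only arrays, loops and `if` statements; the class name, the chosen grade values and the simple swap-based sort are illustrative, not part of the exercise statement:
```java
public class Grades {
    public static void main(String[] args) {
        int[] grades = new int[5];

        // add 5 grades with values from 1 to 3
        grades[0] = 1;
        grades[1] = 3;
        grades[2] = 2;
        grades[3] = 3;
        grades[4] = 1;

        // display the last 3 grades
        for (int i = 2; i < grades.length; i++) {
            System.out.println(grades[i]);
        }

        // correct the second grade: put the value 5 there
        grades[1] = 5;

        // order the grades from lowest to highest using plain nested loops and swaps
        for (int i = 0; i < grades.length - 1; i++) {
            for (int j = 0; j < grades.length - 1 - i; j++) {
                if (grades[j] > grades[j + 1]) {
                    int tmp = grades[j];
                    grades[j] = grades[j + 1];
                    grades[j + 1] = tmp;
                }
            }
        }

        // print the sorted grades
        for (int i = 0; i < grades.length; i++) {
            System.out.println(grades[i]);
        }
    }
}
```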
Status: Issue closed |
imiphp/imi | 1067013046 | Title: Polymorphic relations: how to tell which relation inside with()
Question:
username_0: When using a one-to-one or one-to-many polymorphic relation, I define the relation like this:

But when querying:
```php
with('relationName' => function(IModelQuery $query) {
    $query->where(???); // what do I pass here, and how do I tell which relation it applies to?
})
```
Status: Issue closed |
fripig/article_log | 642443649 | Title: 20200620 21 tabs
Question:
username_0: # 20200620 21 tabs
- [The most gorgeous Kubernetes desktop client: Lens](https://mp.weixin.qq.com/s/6O_9wEppjaB8GiqoSDIdWQ)
- [The reentrant distributed lock the boss asked for is finally implemented perfectly~](https://mp.weixin.qq.com/s/3sJ0TfYG3tXLPwBa2AAJBg)
- [A hands-on guide to setting up local virtual machines to deploy microservices](https://mp.weixin.qq.com/s/oRhXaVmlvVf2ho15H8P6Mg)
- [foreach over a collection threw the classic exception again, and this time we get to the bottom of it](https://mp.weixin.qq.com/s/SMF5NeVA7PO8jYLUTxeVZA)
- [On the career-development confusion of technical people](https://mp.weixin.qq.com/s/2Ii0ZHhn2bGftv63mPRe0A)
- [An Elasticsearch question in today's interview left me stumped......](https://mp.weixin.qq.com/s/Y2iZwdN10qJhJbYhGiy0vA)
- [A summary of front-end performance optimization - Juejin](https://juejin.im/post/5ee6d90d518825434566d458)
- [\[Translated\] 5 mainstream software architecture patterns - Juejin](https://juejin.im/post/5eec76dbe51d4573f01d8d67)
- [What front-end storage options are there besides localStorage - Juejin](https://juejin.im/post/5ee83f10e51d4578975a7b8a)
- [Exploring the hot-reload technology behind the Vue CLI - Juejin](https://juejin.im/post/5eedf58fe51d45742441ab7f)
- [Why learn GraphQL? | 小惡魔 (AppleBOY) - computer tech - work notes](https://blog.wu-boy.com/2020/06/why-we-need-to-learn-graphql/)
- [Elastic Daily No. 974 (2020-06-20) - Elastic Chinese Community](https://elasticsearch.cn/article/14006)
- [A bug in PostgreSQL's SERIALIZABLE – G<NAME>'s BLOG](https://blog.gslin.org/archives/2020/06/20/9582/postgresql-%e7%9a%84-serializable-%e7%9a%84-bug/)
- [Notes on the LaravelConf 2018 talk "Elegance and performance: the road to integrating Laravel and Swoole" | Laravel China Community](https://learnku.com/articles/46157)
- [A single Laravel request costs about 13 MB of memory; the overhead is so heavy that performance-critical projects dare not use it | Laravel China Community](https://learnku.com/laravel/t/46148)
- [Transactions and events in Laravel Eloquent | Laravel China Community](https://learnku.com/articles/46151)
- [Pod explained · Kubernetes Handbook - a Chinese guide to Kubernetes / cloud-native application architecture practice handbook by <NAME> (宋净超)](https://jimmysong.io/kubernetes-handbook/concepts/pod.html)
- [Wire up unit tests in thirty minutes, totally worth it](http://github.tiankonguse.com/blog/2020/06/20/cpp-makefile-fast-use-unit-test.html)
- [VS Code 1.46 released - CocoaChina, a one-stop developer growth community](http://www.cocoachina.com/articles/899049?filter=rec)
- [The father of Python: this is the best book for getting started with Python; six years on, it still leads the pack - Zhihu](https://zhuanlan.zhihu.com/p/149637942)
- [How NuGet restores packages - Tuicool](https://www.tuicool.com/articles/fqqQJzF)
blitz-js/blitz | 1056628399 | Title: Cannot use multiple prisma clients
Question:
username_0: ### What is the problem?
I want to develop a project, which will actively use 4 MySQL database connections.
As Prisma doesn't support connections to more than one database in a single schema file, I've decided to follow the second suggestion from this GitHub issue comment: https://github.com/prisma/prisma/issues/2443#issuecomment-630679118
In the project root, I've created `db-one`, `db-two` and `db-three` directories (in reality the names are different, but that doesn't matter) and kept the default `db` directory as well. In each of these directories, I've created one `.prisma` file and one `index.js` file. The contents of these files can be seen below. Then, I've executed these commands:
`blitz prisma db pull --schema ./db-one/schema.prisma && blitz prisma generate --schema ./db-one/schema.prisma`
`blitz prisma db pull --schema ./db-two/schema.prisma && blitz prisma generate --schema ./db-two/schema.prisma`
`blitz prisma db pull --schema ./db-three/schema.prisma && blitz prisma generate --schema ./db-three/schema.prisma`
After that, I started the dev server with `yarn dev` and got the error that can be seen below.
### Paste all your error logs here:
Full log after running `yarn dev` and visiting root page.
```
ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info - Loaded env from /home/andrii/project-name/.env.local
info - Loaded env from /home/andrii/project-name/.env
info - Using webpack 5. Reason: Enabled by default https://nextjs.org/docs/messages/webpack5
warn - ./db/client/runtime/index.js
Critical dependency: require function is used in a way in which dependencies cannot be statically extracted
./db/client/runtime/index.js
Critical dependency: require function is used in a way in which dependencies cannot be statically extracted
info - ready on http://localhost:3000
event - build page: /
wait - compiling...
info - Using external babel configuration from /home/andrii/project-name/babel.config.js
warn - ./db/client/runtime/index.js
Critical dependency: require function is used in a way in which dependencies cannot be statically extracted
./db/client/runtime/index.js
Critical dependency: require function is used in a way in which dependencies cannot be statically extracted
info - ready on http://localhost:3000
event - build page: /_error
wait - compiling...
warn - ./db/client/runtime/index.js
Critical dependency: require function is used in a way in which dependencies cannot be statically extracted
./db/client/runtime/index.js
Critical dependency: require function is used in a way in which dependencies cannot be statically extracted
info - ready on http://localhost:3000
Error: Cannot find module 'os'
at webpackEmptyContext (/home/andrii/project-name/.next/server/pages/_app.js:14:10)
at ../../node_modules/.pnpm/[email protected]/node_modules/supports-color/index.js (/home/andrii/project-name/.next/server/pages/_app.js:768:41115)
at __require2 (/home/andrii/project-name/.next/server/pages/_app.js:768:3298)
at ../../node_modules/.pnpm/[email protected]/node_modules/chalk/source/index.js (/home/andrii/project-name/.next/server/pages/_app.js:768:48014)
at __require2 (/home/andrii/project-name/.next/server/pages/_app.js:768:3298)
at Object../db/client/runtime/index.js (/home/andrii/project-name/.next/server/pages/_app.js:805:750)
at __webpack_require__ (/home/andrii/project-name/.next/server/webpack-runtime.js:25:42)
at Object../db/client/index.js (/home/andrii/project-name/.next/server/pages/_app.js:602:5)
at __webpack_require__ (/home/andrii/project-name/.next/server/webpack-runtime.js:25:42)
at Object../db/index.js (/home/andrii/project-name/.next/server/pages/_app.js:1059:65) {
code: 'MODULE_NOT_FOUND'
}
Error: Cannot find module 'os'
at webpackEmptyContext (/home/andrii/project-name/.next/server/pages/_app.js:14:10)
at ../../node_modules/.pnpm/[email protected]/node_modules/supports-color/index.js (/home/andrii/project-name/.next/server/pages/_app.js:768:41115)
at __require2 (/home/andrii/project-name/.next/server/pages/_app.js:768:3298)
[Truncated]
Memory: 19.73 GB / 25.02 GB
Shell: 3.3.1 - /usr/bin/fish
Binaries:
Node: 14.17.5 - ~/.config/nvm/14.17.5/bin/node
Yarn: 1.22.11 - ~/.config/nvm/14.17.5/bin/yarn
npm: 6.14.14 - ~/.config/nvm/14.17.5/bin/npm
Watchman: Not Found
npmPackages:
@prisma/client: 3.4.2 => 3.4.2
blitz: 0.42.4 => 0.42.4
prisma: 3.4.2 => 3.4.2
react: 18.0.0-alpha-5ca4b0433-20211020 => 18.0.0-alpha-5ca4b0433-20211020
react-dom: 18.0.0-alpha-5ca4b0433-20211020 => 18.0.0-alpha-5ca4b0433-20211020
typescript: Not Found
```
### Please include below any other applicable logs and screenshots that show your problem:
_No response_
Answers:
username_1: Hey @username_0, I don't *think* there's a bug in Blitz causing this. Not sure what's happening, but I would suggest backing out all your changes until it starts working, then one by one make a change to see what breaks it.
Then feel free to post more information here
username_0: Isn't it caused by some kind of Babel config issue? Or maybe my implementation of 4 clients is completely wrong? To use 4 Prisma clients, I've specified `output = "./client"` in the `schema.prisma` files, which, after generation, results in this directory tree _(note the `-L 2` flag in the command to limit how deep subdirectories are listed)_:
```fish
andrii@wsl ~/project-name (master)> tree db/ db-one/ db-two/ db-three/
db/
├── client
│ ├── index-browser.js
│ ├── index.d.ts
│ ├── index.js
│ ├── libquery_engine-debian-openssl-1.1.x.so.node
│ ├── runtime
│ │ ├── esm
│ │ │ ├── index-browser.mjs
│ │ │ ├── index.mjs
│ │ │ └── proxy.mjs
│ │ ├── index-browser.d.ts
│ │ ├── index-browser.js
│ │ ├── index.d.ts
│ │ ├── index.js
│ │ ├── proxy.d.ts
│ │ └── proxy.js
│ └── schema.prisma
├── index.js
├── migrations
│ ├── 20211117171811_changes
│ │ └── migration.sql
│ └── migration_lock.toml
└── schema.prisma
db-one/
├── client
│ ├── index-browser.js
│ ├── index.d.ts
│ ├── index.js
│ ├── libquery_engine-debian-openssl-1.1.x.so.node
│ ├── runtime
│ │ ├── esm
│ │ │ ├── index-browser.mjs
│ │ │ ├── index.mjs
│ │ │ └── proxy.mjs
│ │ ├── index-browser.d.ts
│ │ ├── index-browser.js
│ │ ├── index.d.ts
│ │ ├── index.js
│ │ ├── proxy.d.ts
│ │ └── proxy.js
│ └── schema.prisma
├── index.js
└── schema.prisma
db-two/
├── client
│ ├── index-browser.js
│ ├── index.d.ts
│ ├── index.js
│ ├── libquery_engine-debian-openssl-1.1.x.so.node
│ ├── runtime
│ │ ├── esm
│ │ │ ├── index-browser.mjs
│ │ │ ├── index.mjs
│ │ │ └── proxy.mjs
│ │ ├── index-browser.d.ts
[Truncated]
│ ├── runtime
│ │ ├── esm
│ │ │ ├── index-browser.mjs
│ │ │ ├── index.mjs
│ │ │ └── proxy.mjs
│ │ ├── index-browser.d.ts
│ │ ├── index-browser.js
│ │ ├── index.d.ts
│ │ ├── index.js
│ │ ├── proxy.d.ts
│ │ └── proxy.js
│ └── schema.prisma
├── index.js
└── schema.prisma
14 directories, 66 files
```
I assume that in the `client` directories there are some files that require `os`, and Babel can't find it at runtime.
A good example of such a file is `db/client/runtime/index.js`, where `os` is imported 16 times in my case.
username_0: | ^
27213 | if (common.HTTPParser) {
27214 | module2.exports = common.HTTPParser;
27215 | } else {
Import trace for requested module:
./db/client/index.js
./db/index.js
https://nextjs.org/docs/messages/module-not-found
```
Now I've found this issue https://github.com/prisma/prisma/issues/6899#issuecomment-849126557 (also prisma) and added this snippet to `blitz.config.js`.
And now, I get this error:
```
ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info - Loaded env from /home/andrii/project-name/.env.local
info - Loaded env from /home/andrii/project-name/.env
info - Using webpack 5. Reason: Enabled by default https://nextjs.org/docs/messages/webpack5
info - Using external babel configuration from /home/andrii/project-name/babel.config.js
[BABEL] Note: The code generator has deoptimised the styling of /home/andrii/project-name/db/client/runtime/index.js as it exceeds the max of 500KB.
event - compiled successfully
event - build page: /404
wait - compiling...
event - compiled successfully
event - build page: /_error
wait - compiling...
event - build page: /
TypeError: Cannot destructure property 'components' of 'object null' as it is null.
at DevServer.renderToResponseWithComponents (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:975:90)
at DevServer.renderErrorToResponse (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:1411:35)
at async pipe.req.req (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:1372:30)
at async DevServer.pipe (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:855:25)
event - compiled successfully
TypeError: Cannot destructure property 'components' of 'object null' as it is null.
at DevServer.renderToResponseWithComponents (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:975:90)
at DevServer.renderErrorToResponse (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:1411:35)
at async DevServer.pipe (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:855:25)
at async Object.fn (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:662:21)
at async Router.execute (/home/andrii/project-name/node_modules/next/dist/server/router.js:206:32)
at async DevServer.run (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:831:29)
at async DevServer.handleRequest (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:295:20)
<w> [webpack.cache.PackFileCacheStrategy] Caching failed for pack: Error: ENOENT: no such file or directory, rename '/home/andrii/project-name/.next/cache/webpack/client-development/1.pack_' -> '/home/andrii/project-name/.next/cache/webpack/client-development/1.pack'
TypeError: Cannot destructure property 'components' of 'object null' as it is null.
at DevServer.renderToResponseWithComponents (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:975:90)
at DevServer.renderErrorToResponse (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:1411:35)
at async DevServer.pipe (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:855:25)
at async Object.fn (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:662:21)
at async Router.execute (/home/andrii/project-name/node_modules/next/dist/server/router.js:206:32)
at async DevServer.run (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:831:29)
at async DevServer.handleRequest (/home/andrii/project-name/node_modules/next/dist/server/next-server.js:295:20)
```
And now I'm completely lost...
username_0: I suppose that Babel should not bundle the generated Prisma client if it was generated inside the project directories.
username_2: Did you manage to solve it?
I am using multiple databases so generated 3 clients for it and when I import the generated clients then run the code. It says
```
[BABEL] Note: The code generator has deoptimised the styling of /Users/giva/dev/coinhub/cgw-management/prisma/generated/ex_user/index.js as it exceeds the max of 500KB.
```
and API is very slow on the first call.
username_3: Did you manage to solve it?
I am using multiple databases so generated 3 clients for it and when I import the generated clients then run the code. It says
```
[BABEL] Note: The code generator has deoptimised the styling of /Users/giva/dev/coinhub/cgw-management/prisma/generated/ex_user/index.js as it exceeds the max of 500KB.
```
and API is very slow on the first call.
username_4: This might help, potentially: how I worked through a bunch of the issues above to get multiple Prisma schemas working in an npm workspaces monorepo with multiple packages and Next.js: https://github.com/prisma/prisma/issues/9435#issuecomment-974576800
username_1: @username_0 sorry for the delay here. I recommend generating the clients inside `node_modules`, but using custom names, for example `node_modules/.prisma1/`. That way the generated code is excluded from Babel processing.
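For later readers, a minimal sketch of that recommendation. The `generator` block follows Prisma's documented schema syntax; the exact folder name under `node_modules` is just an example:
```prisma
// e.g. db-one/schema.prisma
generator client {
  provider = "prisma-client-js"
  // generating into node_modules keeps the client out of Babel's transpilation scope
  output   = "../node_modules/.prisma1/client"
}
```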
ProcessMaker/screen-builder | 595313203 | Title: Add the option to switch between returning objects or values in SelectList
Question:
username_0: Currently the option to return an object or the value of an object is enabled only when DataSource = Request Data. Enable this option for "Provide Values" and "Data Connectors" as well.

Status: Issue closed |
threefoldtech/zos | 776895222 | Title: Failed to deploy network on node on mainnet
Question:
username_0: Node: `74DKSsLfnkUX6Qy9FQPQSrNZvAvBt8L76EFJcucQ2GDZ`
`Workload 83144 failed to deploy in time`
workload: https://explorer.grid.tf/api/v1/reservations/workloads/83144
Answers:
username_1: The logs I have here say this reservation has already been decommissioned:

Status: Issue closed
|
reflex-frp/reflex-dom-contrib | 186755671 | Title: Why does routeSite passes route as a (constant) String while using Event for setting browser url?
Question:
username_0: I was trying to create a routable Reflex app using `servant-router` and `reflex-dom-contrib`, but it appears that `routeSite` does routing only once, so I couldn't build an SPA and every route change leads to a complete page reload.
Is there any way to make `routeSite` use a Dynamic for routes?
Answers:
username_1: Yep, I noticed this the other day. It's because this module is a very early proof of concept and I've never actually used it in a real app yet. I'm sure we can make it use a Dynamic. I just haven't had the time yet to work on it. Pull requests welcome!
username_0: My reflex-fu is not that strong, so my hope was that you already had something for this :)
But I'll try.
username_1: I will definitely get to it in the near future. But probably not in the next week or two.
Status: Issue closed
username_0: Resolved in #21 |
mirage/ocaml-hex | 441395469 | Title: Documentation nitpick
Question:
username_0: https://github.com/mirage/ocaml-hex/blob/master/lib/hex.mli#L39
```ocaml
val of_string: ?ignore:char list -> string -> t
(** [of_string s] is the hexadecimal representation of the binary
string [s]. If [ignore] is set, skip the characters in the list
when converting. Eg [of_string ~ignore:[' '] "a f"]. The default
value of [ignore] is [[]]). *)
```
The example uses only characters that are also hexadecimal digits :)
To the casual `merlin-document` reader, it looks like `of_string ~ignore:[' '] " a f "` is going to be `` `Hex "AF" ``
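A small example of the actual behaviour may make the pitfall clearer; the values follow from the documented semantics (`of_string` hex-encodes a binary string, it does not parse hex digits) and the ASCII codes of 'a' (0x61) and 'f' (0x66):
```ocaml
(* "a f" with the space ignored is the two bytes 'a' and 'f',
   so the result is their hex encoding, not `Hex "af". *)
let (`Hex h) = Hex.of_string ~ignore:[' '] "a f"
let () = print_endline h                              (* prints "6166" *)

(* decoding hex digits back into a binary string goes the other way *)
let () = print_endline (Hex.to_string (`Hex "6166"))  (* prints "af" *)
```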
Answers:
username_1: Perhaps also document here how one would parse a string as hex, since it's a sensible place to look, and it's where I looked when I was trying to parse a hex string into bytes.
username_2: Great points both! PRs welcome on this (and any other unclear bits as you used it) |
robolectric/robolectric | 76039338 | Title: AppCompatActivity view inflation error
Question:
username_0: Hey so when I am trying to inflate an AppCompatActivity I am getting a view inflation error. I am seeing the view in my generated sources and have stepped through the error a few times, but have not made any headway resolving it. I am under the impression that this should all be fixed in <https://github.com/robolectric/robolectric/issues/1446> and I am using 3.0-SNAPSHOT.
It may be something to do with my flavors and the fact that I add an `applicationIdSuffix` to my debug builds, but when I tried to confirm this in a separate app I couldn't reproduce this error.
```java
android.view.InflateException: XML file build/intermediates/res/localhost/debug/layout/abc_screen_toolbar.xml line #-1 (sorry, not yet implemented): Error inflating class android.support.v7.widget.Toolbar
at android.view.LayoutInflater.createView(LayoutInflater.java:620)
at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:696)
at android.view.LayoutInflater.rInflate(LayoutInflater.java:755)
at android.view.LayoutInflater.rInflate(LayoutInflater.java:758)
at android.view.LayoutInflater.inflate(LayoutInflater.java:492)
at android.view.LayoutInflater.inflate(LayoutInflater.java:397)
at android.view.LayoutInflater.inflate(LayoutInflater.java:353)
at android.support.v7.app.AppCompatDelegateImplV7.ensureSubDecor(AppCompatDelegateImplV7.java:296)
at android.support.v7.app.AppCompatDelegateImplV7.setContentView(AppCompatDelegateImplV7.java:246)
at android.support.v7.app.AppCompatActivity.setContentView(AppCompatActivity.java:106)
at com.playdraft.draft.ui.SignUpPhoneNumberActivity.onCreate(SignUpPhoneNumberActivity.java:43)
at android.app.Activity.performCreate(Activity.java:5133)
at org.robolectric.util.ReflectionHelpers.callInstanceMethod(ReflectionHelpers.java:195)
at org.robolectric.util.ActivityController$1.run(ActivityController.java:122)
at org.robolectric.shadows.ShadowLooper.runPaused(ShadowLooper.java:305)
at org.robolectric.shadows.CoreShadowsAdapter$2.runPaused(CoreShadowsAdapter.java:45)
at org.robolectric.util.ActivityController.create(ActivityController.java:118)
at org.robolectric.util.ActivityController.create(ActivityController.java:129)
at org.robolectric.util.ActivityController.setup(ActivityController.java:210)
at org.robolectric.Robolectric.setupActivity(Robolectric.java:46)
at com.playdraft.draft.ui.SignUpPhoneNumberActivityTest.setUp(SignUpPhoneNumberActivityTest.java:63)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.robolectric.RobolectricTestRunner$2.evaluate(RobolectricTestRunner.java:245)
at org.robolectric.RobolectricTestRunner.runChild(RobolectricTestRunner.java:185)
at org.robolectric.RobolectricTestRunner.runChild(RobolectricTestRunner.java:54)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.robolectric.RobolectricTestRunner$1.evaluate(RobolectricTestRunner.java:149)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:78)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at android.view.LayoutInflater.$$robo$$createView(LayoutInflater.java:594)
at android.view.LayoutInflater.createView(LayoutInflater.java)
at android.view.LayoutInflater.$$robo$$createViewFromTag(LayoutInflater.java:696)
at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java)
at android.view.LayoutInflater.$$robo$$rInflate(LayoutInflater.java:755)
at android.view.LayoutInflater.rInflate(LayoutInflater.java)
at android.view.LayoutInflater.$$robo$$rInflate(LayoutInflater.java:758)
at android.view.LayoutInflater.rInflate(LayoutInflater.java)
at android.view.LayoutInflater.$$robo$$inflate(LayoutInflater.java:492)
[Truncated]
at org.robolectric.Robolectric.setupActivity(Robolectric.java:46)
at com.playdraft.draft.ui.SignUpPhoneNumberActivityTest.setUp(SignUpPhoneNumberActivityTest.java:63)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.robolectric.RobolectricTestRunner$2.evaluate(RobolectricTestRunner.java:245)
at org.robolectric.RobolectricTestRunner.runChild(RobolectricTestRunner.java:185)
at org.robolectric.RobolectricTestRunner.runChild(RobolectricTestRunner.java:54)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.robolectric.RobolectricTestRunner$1.evaluate(RobolectricTestRunner.java:149)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
... 1 more
```
Status: Issue closed
Answers:
username_0: OK, so I am closing this issue. For future readers, the problem was that I was targeting a newer version of appcompat-v7, specifically com.android.support:appcompat-v7:22.0.0.
username_0: More info on this: I was seeing this work from Android Studio but it was still failing from the command line. I don't have a reason yet, but it appears there is some dependency issue that AS is hiding. By removing fest-android (which I should have done a while ago) I solved this problem.
open-telemetry/opentelemetry-specification | 836516969 | Title: How to rebuild from deltas?
Question:
username_0: An earlier discussion about rebuilding from deltas raised a question: https://github.com/open-telemetry/wg-prometheus/pull/25#discussion_r595010522.
Currently, the delta data points don't have sequence numbers. There is no way to identify duplicates or missing data points. Generally speaking, we don't know how to rebuild from deltas without sequence numbers. The spec/data model should address this issue.
cc @username_1 @username_3
Answers:
username_1: This is a recurring discussion from the days when OpenMetrics and OpenCensus tried to merge.
Sequence numbers allow for detection, but not correction, of missing information.
A rough(!) equivalent would be UDP (sequence only) vs TCP (send windows, ACKs, retransmissions, state per receiver, local deletion after last ACK, etc.). This is significant overhead compared to plain deltas, so it comes down to design goals and constraints. As far as I know, data loss is not acceptable within OTel, so some TCP-like mechanism would be needed. If data stability can be reduced, the additional complexity could be reduced accordingly.
An in-between would be to rebuild the cumulative state directly at the emitter, which lead the discussion in early 2020 back to square one: that overall complexity would be lowest if the internal representation was cumulatives, not deltas, by default.
I don't know off-hand if different receivers may request different delta periods. If that's the case, the system above would need to carry several state sets, one per delta time range. The same is true for any node where deltas can rest, or are recomputed/regrouped/cached, in the overall pipeline graph.
username_2: I'd like to split out a few things to identify/resolve:
1. Delta-Based metrics imported via a collector from a non-OTel source (e.g. StatsD receiver). For these metrics we are limited regarding solutions (e.g. sequence numbers). While I think we should do our best to support this use case, if we can't guarantee 100% consistency with OpenMetrics (or we have some failure scenarios around missing counts) I think this could be acceptable. I.e. I think we should consider this use-case "Best Effort", and am happy to document this scenario, but I don't think we should aim to provide guaranteed transmission on a protocol that wasn't designed with this in mind.
2. Delta-based metrics coming out of an SDK intended for a cumulative-backend (e.g. OTLP => OM). This relates to #731 and I hope our answer is "It's easy to export cumulative or delta metrics out of SDKs" once #731 is resolved and I'd like to push the discussion around this portion there.
3. Delta-Based metrics inside OTLP for which we can't force cumulative-at-SDK time. Here is the crux of the issue as reported. Assuming we have OTLP-push data for Delta sums, how do we convert to a Cumulative for OM export? In this case, I think you outlined the right questions:
- Can we detect missing data points and/or duplicates? (e.g. do we want sequence numbers on deltas?)
- How do we resolve missing data points/duplicates? (e.g. reset the counter?)
Given point 2, I think point 3 is a lower priority issue. That said, I think we have some answers straight away.
First, regarding seqeunce numbers. Assuming the #1574 and the Single-Writer philosophy, we should be able to use timestamp + aggregation temporality to uniquely identify a delta within OTLP (see https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/metrics/v1/metrics.proto#L279)
Second, we *do* need to document what to do when we detect an out-of-order delta sum. In this case my proposal would be to reset the counter on detection. I'm reading @RichH's comment as "this is an ok thing to do, but not ideal" and I agree. If we know we're outputting cumulative metrics, users should use the result of #731. In scenarios where that's not practical, this is the best we can (likely) do. If folks agree, I can write this up into the data model specification.
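To make that proposal concrete, here is a rough sketch, in Python and not tied to any actual SDK or collector API (class and method names are illustrative), of rebuilding a cumulative sum from a single delta stream: the `(start_time, end_time)` pair orders and de-duplicates points, a gap resets the counter as suggested above, and out-of-order or duplicate points are dropped:
```python
class DeltaToCumulative:
    """Rebuild a cumulative sum from the delta points of one stream (single-writer)."""

    def __init__(self):
        self.cumulative = 0
        self.last_end = None  # end timestamp of the last accepted delta

    def add(self, start_time, end_time, value):
        if self.last_end is None:
            self.cumulative = value          # first point starts the series
        elif start_time == self.last_end:
            self.cumulative += value         # contiguous delta: safe to accumulate
        elif start_time > self.last_end:
            # gap: at least one delta was lost; we can detect but not correct,
            # so reset the counter as proposed
            self.cumulative = value
        else:
            return self.cumulative           # out-of-order or duplicate: drop it
        self.last_end = end_time
        return self.cumulative


converter = DeltaToCumulative()
assert converter.add(0, 10, 5) == 5
assert converter.add(10, 20, 3) == 8   # contiguous delta accumulates
assert converter.add(30, 40, 2) == 2   # gap detected: counter reset
```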
Status: Issue closed
|
swoole/swoole-src | 752459857 | Title: Sdebug PHP8.0 support.
Question:
username_0: PHP 8.0 has already been released. Is there any information about Sdebug picking up Xdebug 3.0 and adding PHP 8.0 support?
Answers:
username_1: https://github.com/swoole/yasd
Using it is recommended instead.
Status: Issue closed
username_0: Is it xdebug compatible? Can I use it with PhpStorm?
username_2: Currently only the command line is supported; PhpStorm will be supported later.
username_3: Is there any chance that Sdebug 3.0 could be released in the meantime, until Yasd supports PhpStorm? This problem currently blocks our migration to PHP 8, which is kind of painful. We would gladly migrate to Yasd once there is PhpStorm support, but until then it feels like there is no workaround. Terminal debugging is quite user-unfriendly compared to PhpStorm.
username_4: Yasd supports PHP 8, you can try it.