styled-components/styled-components | 225771137 | Title: Using evaluated calculation from calc() to use Math functions
Question:
username_0: I have a css calc() inside a media query like this:
```
line-height: calc(
${nRows.min}rem + ${nRows.max - nRows.min} *
(100vw - ${opt.minvw}rem) / ${opt.maxvw - opt.minvw}
);
```
The problem is with using calc() with the vw unit. I have no way of knowing what the value of the calculation will be. I need to use Math.ceil() on the evaluated value, but CSS calc() only lets you use + - * /.
The best I could think of is using a resize event listener and updating the computed value. But since this line-height calc is for all the text on the site, like paragraphs and headings, it means a lot of re-rendering.
Is there a better solution for this kind of problem?
Answers:
username_1: I don't see how there could be a different solution: Either you use `calc` and CSS and so on, or you use JS. It's really as simple as that unfortunately ^^
Also, consider that this will only happen *if* the user resizes the browser, and apart from that only once. If you want to optimise this, I suggest you take `calc` and `vw` out of the equation and make it purely JS-calculated. This way you can make sure that it only updates when it has to.
Status: Issue closed
---
minio/minio-js | 205311275 | Title: Uncatchable error on non-valid server response
Question:
username_0: When the server responds with invalid XML and a status != 200 (for example a 504 nginx error with a dead minio backend), the client crashes with an uncatchable error at xml-parsers.js:39.
To reproduce this error, point the client to an address that returns a 504 status code and an invalid XML body.
Status: Issue closed
---
iotexproject/iotex-core | 459334883 | Title: Sporadic TestBroadcast failure
Question:
username_0: <!-- Please only use this template for submitting reports about failing tests in iotex-core CI jobs -->
**Which jobs are failing**:
Local `make`
**Which test(s) are failing**:
TestBroadcast
**Since when has it been failing**:
Not sure
**Testgrid link**:
N/A
**Reason for failure**:
Not sure
**Anything else we need to know**:
Error log:
```
--- FAIL: TestBroadcast (21.99s)
require.go:794:
Error Trace: agent_test.go:73
Error: Received unexpected error:
timed out
github.com/iotexproject/iotex-core/testutil.init.ializers
/Users/zjshen/Projects/go/src/github.com/iotexproject/iotex-core/testutil/wait.go:16
runtime.main
/usr/local/Cellar/go/1.12.5/libexec/src/runtime/proc.go:188
runtime.goexit
/usr/local/Cellar/go/1.12.5/libexec/src/runtime/asm_amd64.s:1337
Test: TestBroadcast
2019-06-21T12:04:07.659-0700 INFO [email protected]/p2p.go:334 P2p host started. {"address": ["/ip4/127.0.0.1/tcp/37186/p2p/12D3KooWQTP7bbdbSUQTRLJ7ZB7Rzi416cT3BGdMd9C4XuepSg83"], "secureIO": true, "gossip": true}
2019-06-21T12:04:07.664-0700 INFO [email protected]/p2p.go:334 P2p host started. {"address": ["/ip4/127.0.0.1/tcp/31993/p2p/12D3KooWDDRnfR7mpMZ2eKk7QD3fqKSoP3o9ZhFvhbtgXLvpbGtt"], "secureIO": true, "gossip": true}
2019-06-21T12:04:07.690-0700 INFO p2p/agent.go:250 Connected bootstrap node. {"address": "/ip4/127.0.0.1/tcp/37186/p2p/12D3KooWQTP7bbdbSUQTRLJ7ZB7Rzi416cT3BGdMd9C4XuepSg83"}
2019-06-21T12:04:07.695-0700 INFO [email protected]/p2p.go:334 P2p host started. {"address": ["/ip4/127.0.0.1/tcp/32394/p2p/12D3KooWNB23Z2ZpgCfL87KpgTCuYnPnXFHRG3sqeV7nKUzas9Bn"], "secureIO": true, "gossip": true}
2019-06-21T12:04:07.713-0700 INFO p2p/agent.go:250 Connected bootstrap node. {"address": "/ip4/127.0.0.1/tcp/37186/p2p/12D3KooWQTP7bbdbSUQTRLJ7ZB7Rzi416cT3BGdMd9C4XuepSg83"}
2019-06-21T12:04:07.719-0700 INFO [email protected]/p2p.go:334 P2p host started. {"address": ["/ip4/127.0.0.1/tcp/38366/p2p/12D3KooWGVLJRV5jifnWTTdRbq6kEjrkHytpdyvgDP4xkw2tc7WL"], "secureIO": true, "gossip": true}
2019-06-21T12:04:07.744-0700 INFO p2p/agent.go:250 Connected bootstrap node. {"address": "/ip4/127.0.0.1/tcp/37186/p2p/12D3KooWQTP7bbdbSUQTRLJ7ZB7Rzi416cT3BGdMd9C4XuepSg83"}
2019-06-21T12:04:07.751-0700 INFO [email protected]/p2p.go:334 P2p host started. {"address": ["/ip4/127.0.0.1/tcp/32303/p2p/12D3KooWAYHYEkrpuCa4Ep88hzo6a89WGrdzCEecCWj14GdX8ByR"], "secureIO": true, "gossip": true}
2019-06-21T12:04:07.804-0700 INFO p2p/agent.go:250 Connected bootstrap node. {"address": "/ip4/127.0.0.1/tcp/37186/p2p/12D3KooWQTP7bbdbSUQTRLJ7ZB7Rzi416cT3BGdMd9C4XuepSg83"}
2019-06-21T12:04:07.813-0700 INFO [email protected]/p2p.go:334 P2p host started. {"address": ["/ip4/127.0.0.1/tcp/36032/p2p/12D3KooWR85e3iitHu6Lfg7Up2Xs4pq6XQJ76pPq6oqnN9mdmgXg"], "secureIO": true, "gossip": true}
2019-06-21T12:04:07.858-0700 INFO p2p/agent.go:250 Connected bootstrap node. {"address": "/ip4/127.0.0.1/tcp/37186/p2p/12D3KooWQTP7bbdbSUQTRLJ7ZB7Rzi416cT3BGdMd9C4XuepSg83"}
2019-06-21T12:04:07.867-0700 INFO [email protected]/p2p.go:334 P2p host started. {"address": ["/ip4/127.0.0.1/tcp/42380/p2p/12D3KooWGGFyAbGyfXdpPWifV2sHQhuLrQzzYa5mfpchgUePCD4w"], "secureIO": true, "gossip": true}
2019-06-21T12:04:07.921-0700 INFO p2p/agent.go:250 Connected bootstrap node. {"address": "/ip4/127.0.0.1/tcp/37186/p2p/12D3KooWQTP7bbdbSUQTRLJ7ZB7Rzi416cT3BGdMd9C4XuepSg83"}
2019-06-21T12:04:07.931-0700 INFO [email protected]/p2p.go:334 P2p host started. {"address": ["/ip4/127.0.0.1/tcp/32670/p2p/12D3KooWQBhn5GuHTdqaPUHLJEE7QecN5hRS4vSZzBre8vtbAVBq"], "secureIO": true, "gossip": true}
2019-06-21T12:04:08.012-0700 INFO p2p/agent.go:250 Connected bootstrap node. {"address": "/ip4/127.0.0.1/tcp/37186/p2p/12D3KooWQTP7bbdbSUQTRLJ7ZB7Rzi416cT3BGdMd9C4XuepSg83"}
2019-06-21T12:04:08.024-0700 INFO [email protected]/p2p.go:334 P2p host started. {"address": ["/ip4/127.0.0.1/tcp/32397/p2p/12D3KooWS5TPppRdNAtEQJaQXc1K46vatRJ6dqQ34kqkqHUFRPEB"], "secureIO": true, "gossip": true}
2019-06-21T12:04:08.084-0700 INFO p2p/agent.go:250 Connected bootstrap node. {"address": "/ip4/127.0.0.1/tcp/37186/p2p/12D3KooWQTP7bbdbSUQTRLJ7ZB7Rzi416cT3BGdMd9C4XuepSg83"}
2019-06-21T12:04:08.099-0700 INFO [email protected]/p2p.go:334 P2p host started. {"address": ["/ip4/127.0.0.1/tcp/39775/p2p/12D3KooWFxSvjFvjF6euxvw7x1e3QpzFcnkBpAF4wWZr6tppKLGZ"], "secureIO": true, "gossip": true}
2019-06-21T12:04:08.219-0700 INFO p2p/agent.go:250 Connected bootstrap node. {"address": "/ip4/127.0.0.1/tcp/37186/p2p/12D3KooWQTP7bbdbSUQTRLJ7ZB7Rzi416cT3BGdMd9C4XuepSg83"}
2019-06-21T12:04:08.236-0700 INFO [email protected]/p2p.go:334 P2p host started. {"address": ["/ip4/127.0.0.1/tcp/36473/p2p/12D3KooWAsVAnAAqD7Td8z44w9QQN9jGqxLv8LVnvZv44vC9tubS"], "secureIO": true, "gossip": true}
2019-06-21T12:04:08.375-0700 INFO p2p/agent.go:250 Connected bootstrap node. {"address": "/ip4/127.0.0.1/tcp/37186/p2p/12D3KooWQTP7bbdbSUQTRLJ7ZB7Rzi416cT3BGdMd9C4XuepSg83"}
[Truncated]
/Users/zjshen/Projects/go/pkg/mod/github.com/iotexproject/[email protected]/p2p.go:424
2019-06-21T12:04:09.881-0700 ERROR [email protected]/p2p.go:424 Error when subscribing a broadcast message. {"error": "subscription cancelled by calling sub.Cancel()"}
github.com/iotexproject/go-p2p.(*Host).AddBroadcastPubSub.func1
/Users/zjshen/Projects/go/pkg/mod/github.com/iotexproject/[email protected]/p2p.go:424
2019-06-21T12:04:09.901-0700 ERROR [email protected]/p2p.go:424 Error when subscribing a broadcast message. {"error": "subscription cancelled by calling sub.Cancel()"}
github.com/iotexproject/go-p2p.(*Host).AddBroadcastPubSub.func1
/Users/zjshen/Projects/go/pkg/mod/github.com/iotexproject/[email protected]/p2p.go:424
2019-06-21T12:04:09.912-0700 ERROR [email protected]/p2p.go:424 Error when subscribing a broadcast message. {"error": "subscription cancelled by calling sub.Cancel()"}
github.com/iotexproject/go-p2p.(*Host).AddBroadcastPubSub.func1
/Users/zjshen/Projects/go/pkg/mod/github.com/iotexproject/[email protected]/p2p.go:424
2019-06-21T12:04:09.930-0700 ERROR [email protected]/p2p.go:424 Error when subscribing a broadcast message. {"error": "subscription cancelled by calling sub.Cancel()"}
github.com/iotexproject/go-p2p.(*Host).AddBroadcastPubSub.func1
/Users/zjshen/Projects/go/pkg/mod/github.com/iotexproject/[email protected]/p2p.go:424
2019-06-21T12:04:09.940-0700 ERROR [email protected]/p2p.go:424 Error when subscribing a broadcast message. {"error": "subscription cancelled by calling sub.Cancel()"}
github.com/iotexproject/go-p2p.(*Host).AddBroadcastPubSub.func1
/Users/zjshen/Projects/go/pkg/mod/github.com/iotexproject/[email protected]/p2p.go:424
2019-06-21T12:04:09.946-0700 ERROR [email protected]/p2p.go:424 Error when subscribing a broadcast message. {"error": "subscription cancelled by calling sub.Cancel()"}
github.com/iotexproject/go-p2p.(*Host).AddBroadcastPubSub.func1
/Users/zjshen/Projects/go/pkg/mod/github.com/iotexproject/[email protected]/p2p.go:424
FAIL
```
Status: Issue closed
Answers:
username_0: Saw another failure again
Status: Issue closed
---
kapelner/bartMachine | 1029252434 | Title: hBART in main branch?
Question:
username_0: Hello,
I was wondering if the heteroskedastic BART (hBART) features are/will be available in the main branch. It seems like the hBART branch is very outdated at this point, and I'm not sure if the features of that branch will work with the more modern features of the bartMachine package (such as visualization and variable selection).
Best,
Jacob
Status: Issue closed
---
department-of-veterans-affairs/va.gov-team | 523084579 | Title: Discovery: User Interview Conversation Guide
Question:
username_0: ## User Story or Problem Statement
As a UX designer, I need to create a user interview conversation guide so I can ensure that all necessary questions are documented in one place and can be referenced during user interviews.
## Goal
_A document to serve as a user interview conversation guide._
## Objectives or Key Results this is meant to further
- This furthers the discovery effort to understand more about the product being built and identify needs and pain points.
## Acceptance Criteria
- [ ] _Create conversation guide (according to template if provided)_
- [ ] _Review conversation guide with design team & PM_
- [ ] _(Optional) Review conversation guide with content team, IA team for additional questions that could be beneficial_
- [ ] _Make edits to conversation guide if necessary_
## Blockers
N/A
## Next Steps
Send both research plan and conversation guide to <NAME> for approval and scheduling with Perigean.
Status: Issue closed
---
rladies/meetupr | 293696352 | Title: get_members() returns meetup join date instead of group join date
Question:
username_0: `find_groups('R-ladies Paris')`
returns
` id name urlname created members status organizer lat lon city`
`1 2.04e⁷ R-Ladi… rladies-… 2016-09-19 17:39:51 298 active Gabriela… 48.9 2.34 Paris`
however
`get_members('rladies-paris')`
returns members with join dates prior to group create date!
` id name bio status joined city country state lat lon`
`1 8.37e⁶ Sarah… I miss skii… active 2008-11-15 22:13:36 Paris fr <NA> 48.9 2.34`
`2 1.45e⁷ Gabri… <NA> active 2011-04-20 00:57:39 San … us CA 37.8 -122 `
Answers:
username_1:
```
[1] "2007-03-13 08:14:36 PDT" "2018-01-30 02:22:53 PST"
```
We do some parsing of the date (maybe it's being parsed incorrectly), and if that's not the issue then either the meetup.com API is returning the wrong values or their documentation is wrong (and it's actually showing the date they joined meetup.com).
You're the first entry in the members table, and it says you joined on "2008-11-15"... is that the date you joined meetup.com, by chance?
username_0: Hi Erin,
Thanks for the quick follow-up, and sorry for the slow reply.
To answer your Q: yes, indeed I joined meetup in 2008. I didn't join Rladies Paris until late 2016.
On a similar note, I see there is a bio column, but it seems to be coming from a different meetup group.
For example, `members[members$id=='10380708','bio']` returns
`1 Finance professional, keen on learning MA, especially classical Japa…`
But if you view the same person's profile from our meetup page <https://www.meetup.com/rladies-paris/members/10380708/>, his intro instead shows:
Eager to learn as much R as possible
Thanks for your help
Sarah
username_2: I started looking at reproducing this awhile ago, I'm digging that back up.
username_2: I have reproduced and demonstrated this issue. The value you want is "group_profile.created".
http://rpubs.com/rladiespdx/meetup-member-dates
I also confirmed, the docs are weird.
Under Sort "joined" refers to sorting the member list by when the member joined the group:
Sort
joined
Time member joined this group
Under Response:
joined
Time member joined, represented as milliseconds since the epoch
Under Group Profile (I think this is what you want):
created
The time this member joined the Group, represented as milliseconds since the epoch
username_2: I need to look at the code to figure out what makes the most sense as far as a patch proposal.
username_2: Removed help wanted since I'm working on this :)
username_2: working on issue #31 so i can have test coverage for this one.
username_3: @username_2, are you still working on this? Might it help if I had a look? I am not skilled with formal testing but I have used this package a great deal (please see http://bit.ly/2NEHVGn). So I could at least explore capturing `group_profile.created`.
username_1: @username_2 @username_3 Anyone have an update on this? Someone ran into this bug again today and so it reminded me to come here and check the status. Let me know and we can re-assign the issue if necessary. Thanks!
username_4: I came to the same problem today,
is there any updates?
username_3: I missed my name mention. This issue was not assigned to me. It looks like @username_2 was going to address the issue.
username_4: @username_3 @username_2 @username_1
I solved it doing this:
1 download json file of members per chapter
2. convert it to a csv
3. mutate the column group_profile.created
```r
mutate(date_joined = lubridate::floor_date(as.POSIXct(
  group_profile.created / 1000, tz = "UTC", origin = "1970-01-01")))
```
I guess the mutate on step 3 should be added to the function `get_members`, and that would be it.
username_5: @username_1 what needs to happen here? Do we need to not use what the API returns for that date?
---
rt2zz/redux-persist | 262163992 | Title: AutoRehydrate doesn't work in IE11
Question:
username_0: Redux-Persist 4.9.1 is working well for us, except for an issue with IE11 browsers, where they do not consistently autorehydrate. It seems to be a timing issue, because it does not reproduce consistently across machines or sessions. Persistence DOES work reliably; the initial autorehydrate just fails to fire. This is causing data loss: user data is not reloaded initially (leaving the app with default blank values), and the blank values are later persisted to storage, which stomps on the original user data.
Works reliably in Chrome and other browsers, however.
Answers:
username_1: This is the first report I have heard of this. Which storage are you using? Is it possible there is some other exception being thrown during rehydration, perhaps from within one of your reducers that is causing the rehydration to fail?
username_2: We have the opposite problem: for us `persist/REHYDRATE` is firing uncontrollably *sometimes*. It also seems to be a timing issue, sometimes it works fine, other times IE11 keeps rehydrating in an endless loop.

We use both local storage and session storage, but the one that keeps re-firing is the session storage one, which we also sync across tabs with `redux-persist-crosstab`, which might also be the source of the problem (not tested yet).
username_2: Alright, yeah, removing `redux-persist-crosstab` solved the problem, will open an issue there instead.
---
ktorio/ktor | 355864414 | Title: Startup exception from GMTDate constructor
Question:
username_0: Server: Netty
jdk: 10
```
"nettyCallPool-7-2@6334" prio=10 tid=0x3b nid=NA runnable
java.lang.Thread.State: RUNNABLE
at sun.util.locale.provider.LocaleServiceProviderPool.getLocalizedObjectImpl(LocaleServiceProviderPool.java:285)
at sun.util.locale.provider.LocaleServiceProviderPool.getLocalizedObject(LocaleServiceProviderPool.java:262)
at sun.util.locale.provider.CalendarDataUtility.retrieveFirstDayOfWeek(CalendarDataUtility.java:75)
at java.util.Calendar.setWeekCountData(Calendar.java:3411)
at java.util.Calendar.<init>(Calendar.java:1611)
at java.util.GregorianCalendar.<init>(GregorianCalendar.java:738)
at java.util.Calendar$Builder.build(Calendar.java:1493)
at sun.util.locale.provider.CalendarProviderImpl.getInstance(CalendarProviderImpl.java:87)
at java.util.Calendar.createCalendar(Calendar.java:1696)
at java.util.Calendar.getInstance(Calendar.java:1675)
at io.ktor.util.date.DateJvmKt.GMTDate(DateJvm.kt:7)
at io.ktor.util.date.DateJvmKt.GMTDate$default(Unknown Source:-1)
at io.ktor.sessions.SessionTransportCookie.send(SessionTransportCookie.kt:25)
at io.ktor.sessions.Sessions$Feature$install$2.doResume(Sessions.kt:55)
at io.ktor.sessions.Sessions$Feature$install$2.invoke(Unknown Source:-1)
at io.ktor.sessions.Sessions$Feature$install$2.invoke(Sessions.kt:21)
at io.ktor.util.pipeline.PipelineContext.proceed(PipelineContext.kt:49)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:23)
at com.molecularinstruments.api.endpoint.UserAuthnEndpoint.uiConfig(ApplicationResponseFunctions.kt:23)
at com.molecularinstruments.api.endpoint.UserAuthnEndpoint$1$3$1.doResume(UserAuthnEndpoint.kt:57)
at com.molecularinstruments.api.endpoint.UserAuthnEndpoint$1$3$1.invoke(Unknown Source:-1)
at com.molecularinstruments.api.endpoint.UserAuthnEndpoint$1$3$1.invoke(UserAuthnEndpoint.kt:37)
at io.ktor.util.pipeline.PipelineContext.proceed(PipelineContext.kt:49)
at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:23)
at io.ktor.routing.Routing.executeResult(Pipeline.kt:202)
at io.ktor.routing.Routing.interceptor(Routing.kt:25)
at io.ktor.routing.Routing$Feature$install$1.doResume(Routing.kt:66)
at io.ktor.routing.Routing$Feature$install$1.invoke(Routing.kt:-1)
at io.ktor.routing.Routing$Feature$install$1.invoke(Routing.kt:51)
at io.ktor.util.pipeline.PipelineContext.proceed(PipelineContext.kt:49)
at io.ktor.util.pipeline.PipelineContext$proceed$1.doResume(PipelineContext.kt:-1)
at kotlin.coroutines.experimental.jvm.internal.CoroutineImpl.resume(CoroutineImpl.kt:42)
at kotlin.coroutines.experimental.jvm.internal.CoroutineImpl.resume(CoroutineImpl.kt:41)
at kotlin.coroutines.experimental.jvm.internal.CoroutineImpl.resume(CoroutineImpl.kt:41)
at kotlin.coroutines.experimental.jvm.internal.CoroutineImpl.resume(CoroutineImpl.kt:41)
at kotlin.coroutines.experimental.jvm.internal.CoroutineImpl.resume(CoroutineImpl.kt:41)
at kotlin.coroutines.experimental.jvm.internal.CoroutineImpl.resume(CoroutineImpl.kt:41)
at kotlinx.coroutines.experimental.DispatchedTask$DefaultImpls.run(Dispatched.kt:149)
at kotlinx.coroutines.experimental.AbstractContinuation.run(AbstractContinuation.kt:19)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute$$$capture(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:-1)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:464)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:844)
"nettyCallPool-7-2@6334" prio=10 tid=0x3b nid=NA runnable
java.lang.Thread.State: RUNNABLE
at sun.util.locale.provider.LocaleServiceProviderPool.getLocalizedObjectImpl(LocaleServiceProviderPool.java:285)
at sun.util.locale.provider.LocaleServiceProviderPool.getLocalizedObject(LocaleServiceProviderPool.java:262)
at sun.util.locale.provider.CalendarDataUtility.retrieveMinimalDaysInFirstWeek(CalendarDataUtility.java:84)
at java.util.Calendar.setWeekCountData(Calendar.java:3412)
at java.util.Calendar.<init>(Calendar.java:1611)
at java.util.GregorianCalendar.<init>(GregorianCalendar.java:738)
[Truncated]
at io.ktor.routing.Routing$Feature$install$1.invoke(Routing.kt:-1)
at io.ktor.routing.Routing$Feature$install$1.invoke(Routing.kt:51)
at io.ktor.util.pipeline.PipelineContext.proceed(PipelineContext.kt:49)
at io.ktor.util.pipeline.PipelineContext$proceed$1.doResume(PipelineContext.kt:-1)
at kotlin.coroutines.experimental.jvm.internal.CoroutineImpl.resume(CoroutineImpl.kt:42)
at kotlin.coroutines.experimental.jvm.internal.CoroutineImpl.resume(CoroutineImpl.kt:41)
at kotlin.coroutines.experimental.jvm.internal.CoroutineImpl.resume(CoroutineImpl.kt:41)
at kotlin.coroutines.experimental.jvm.internal.CoroutineImpl.resume(CoroutineImpl.kt:41)
at kotlin.coroutines.experimental.jvm.internal.CoroutineImpl.resume(CoroutineImpl.kt:41)
at kotlin.coroutines.experimental.jvm.internal.CoroutineImpl.resume(CoroutineImpl.kt:41)
at kotlinx.coroutines.experimental.DispatchedTask$DefaultImpls.run(Dispatched.kt:149)
at kotlinx.coroutines.experimental.AbstractContinuation.run(AbstractContinuation.kt:19)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute$$$capture(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:-1)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:464)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:844)
```
Status: Issue closed
Answers:
username_1: Notice that these stacktraces are not exceptions but part of a thread dump.
---
cedadev/nappy | 1138388154 | Title: API extension: na 2 xarray.DataArray or xarray.Dataset?
Question:
username_0: This is more of a suggestion for a new feature than an "issue".
Background: we still have a lot of data in NASA Ames format. Currently, there's an initiative at our institute to develop a collection of tools that are basically method extensions for `xarray.DataArray` and `xarray.Dataset` (github: [imktk](https://github.com/imk-toolkit/imk-toolkit)). So I was looking for convenient ways to load the NA data into xarray, and since I noted that `nappy` uses xarray internally for the conversion to netCDF, I thought that could be a possibility.
A way to do this with the existing version of nappy could be e.g.
```python
from pathlib import Path
import xarray as xr
import nappy
import nappy.nc_interface.na_to_xarray as na2xr
f = Path('./nappy/example_files/1001a.na') # from the samples collection
xr_converter_class = na2xr.NADictToXarrayObjects(nappy.openNAFile(f))
xr_tuple = xr_converter_class.convert()
arrays = xr_tuple[0] # list of data arrays
new_attrs = {} # we need to combine attributes manually
for a in arrays:
for k, v in a.attrs.items():
new_attrs[a.name + '_' + k] = v # not guaranteed to work with ANY input!
xrds = xr.merge(arrays, combine_attrs="drop")
xrds.attrs = new_attrs
print(xrds)
<xarray.Dataset>
Dimensions: (pressure: 28)
Coordinates:
* pressure (pressure) float64 1.013e+03 540.5 ... 4e-05 2.5e-05
Data variables:
total_concentration (pressure) float64 2.55e+19 1.53e+19 ... 5.03e+11
temperature (pressure) float64 288.0 256.0 223.0 ... 300.0 360.0
Attributes:
total_concentration_units: cm-3
total_concentration_long_name: total_concentration
total_concentration_title: total_concentration
total_concentration_nasa_ames_var_number: 0
temperature_units: degrees K
temperature_long_name: temperature
temperature_title: temperature
temperature_nasa_ames_var_number: 1
```
```
While that works for me, it's not explicitly part of the nappy API - would it be a useful extension?
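For discussion, a rough sketch of what a top-level helper could look like, wrapping the steps above (the function name `open_na_as_dataset` and the attribute-prefixing behaviour are assumptions, not existing nappy API):
```python
import xarray as xr
import nappy
import nappy.nc_interface.na_to_xarray as na2xr

def open_na_as_dataset(path):
    """Load a NASA Ames file and return it as a single xarray.Dataset.

    Hypothetical convenience wrapper around NADictToXarrayObjects; the
    per-variable attributes are copied onto the Dataset under a
    '<variable_name>_' prefix, as in the example above.
    """
    converter = na2xr.NADictToXarrayObjects(nappy.openNAFile(str(path)))
    arrays = converter.convert()[0]  # first element: list of xr.DataArray
    merged_attrs = {
        f"{a.name}_{key}": value
        for a in arrays
        for key, value in a.attrs.items()
    }
    ds = xr.merge(arrays, combine_attrs="drop")
    ds.attrs = merged_attrs
    return ds
```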
Answers:
username_1: @username_0, just checking that I understand what is happening in your example.
Is the main requirement to collect the metadata associated with the `xr.DataArrays` and assign them to the `xr.Dataset`?
username_0: @username_1 honestly, I haven't had the time to work on this further ;-)
My question was whether it would be good to have the export from NA to xarray exposed more directly in the API, to avoid having to go through `nc_interface.na_to_xarray`. But if that point never came up in the past, I guess it's not that important and we might as well leave it as it is; I can live with that.
Regarding metadata, my guess would be that the handling of those is pretty user-specific (see `var_and_units_pattern`...), so I wouldn't touch that.
username_1: @username_0, I agree that it would be useful to bring this up to the API level. It's a sensible proposal.
---
mbj4668/pyang | 444227688 | Title: pyang does not validate Xpath expression
Question:
username_0: I have a must statement like this:
must "current( = 'xpdr'" {
error-message "The only node-type supported is xpdr";
}
Looks like Pyang does not report any error and my yang gets converted to Yin without issues.
<must condition="current( = 'xpdr'">
<error-message>
<value>The only node-type supported is xpdr</value>
</error-message>
</must>
Please clarify if this is expected?
Answers:
username_1: If this is part of an unused grouping, it isn't validated, but I view that as a bug which will be fixed.
Status: Issue closed
---
toanpv/vjreport | 399624998 | Title: Subject
Question:
username_0: Description:
---
Lesson
Device info:
---
<table>
<tr><td>App version</td><td>1.1.7</td></tr>
<tr><td>App version code</td><td>18092701</td></tr>
<tr><td>Android build version</td><td>1477998679</td></tr>
<tr><td>Android release version</td><td>6.0</td></tr>
<tr><td>Android SDK version</td><td>23</td></tr>
<tr><td>Android build ID</td><td>mobell-i7-1101-18-v568</td></tr>
<tr><td>Device brand</td><td>mobell</td></tr>
<tr><td>Device manufacturer</td><td>mobell</td></tr>
<tr><td>Device name</td><td>mobell</td></tr>
<tr><td>Device model</td><td>nova_i7</td></tr>
<tr><td>Device product name</td><td>mobell</td></tr>
<tr><td>Device hardware name</td><td>mt6580</td></tr>
<tr><td>ABIs</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (64bit)</td><td>[]</td></tr>
</table>
---
s-tyda/steamgifts-bot | 1042775013 | Title: Can not start in Docker
Question:
username_0: | Welcome to SteamGifts Bot!
| Created by: github.com/username_1
| Traceback (most recent call last):
| File "/app/cli.py", line 101, in <module>
| run()
| File "/app/cli.py", line 96, in run
| s = SteamGifts(**config.data)
| File "/usr/local/lib/python3.10/functools.py", line 970, in __get__
| val = self.func(instance)
| File "/app/cli.py", line 69, in data
| data = json.load(json_data_file)
| File "/usr/local/lib/python3.10/json/__init__.py", line 293, in load
| return loads(fp.read(),
| File "/usr/local/lib/python3.10/json/__init__.py", line 346, in loads
| return _default_decoder.decode(s)
| File "/usr/local/lib/python3.10/json/decoder.py", line 337, in decode
| obj, end = self.raw_decode(s, idx=_w(s, 0).end())
| File "/usr/local/lib/python3.10/json/decoder.py", line 355, in raw_decode
| raise JSONDecodeError("Expecting value", s, err.value) from None
| json.decoder.JSONDecodeError: Expecting value: line 14 column 13 (char 436)
Status: Issue closed
Answers:
username_1: Hey, what was the problem exactly? Just wanted to know if I should change the README to prevent similar issues in the future.
username_0: It was a failure in the config.json
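The traceback above is just `json.load` failing on malformed JSON. A minimal sketch of a friendlier pre-flight check (the file name comes from the traceback; everything else is illustrative, not the bot's actual code):
```python
import json
import sys

def load_config(path="config.json"):
    """Load the bot config, exiting with a readable message on bad JSON."""
    try:
        with open(path) as f:
            return json.load(f)
    except json.JSONDecodeError as e:
        # e.lineno/e.colno point at the offending spot in the file.
        sys.exit(f"{path} is not valid JSON (line {e.lineno}, column {e.colno}): {e.msg}")

config = load_config()
```
---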
h5py/h5py | 217638314 | Title: Python 3.5 3.6 support
Question:
username_0: Hi, I saw python 3.6 in the commit message.
Does h5py run on python 3.5 and 3.6 too?
Can we add these to
`Python 2.6, 2.7, 3.2, 3.3, or 3.4`
in the README?
Status: Issue closed
Answers:
username_0: I see it is solved in:
https://github.com/h5py/h5py/pull/858/commits/c6af3066e3a5b56808ff84cfca353152fdeccfb9
3.6 is supported.
---
rancher/rancher | 407868691 | Title: Can't create ec2 rke cluster
Question:
username_0: Version - v2.2.0-alpha6
**Steps to reproduce (least amount of steps as possible):**
1. Try to setup a EC2 RKE cluster with 1 node
**Result:** Waiting for SSH to be available...

Status: Issue closed
Answers:
username_1: It appears it was caused by using a subnet that was not publicly routable.
On another note, it seems like we aren't doing a good job of cleaning up VPCs and subnets for EKS clusters: https://github.com/rancher/rancher/issues/17920
---
neuhausi/canvasXpress | 483197986 | Title: Can't change font size in correlation plot and black border lines dominate if lots of features are plotted
Question:
username_0: <img width="611" alt="Screen Shot 2019-08-21 at 12 43 35 AM" src="https://user-images.githubusercontent.com/15882624/63403432-39eb6f00-c3ad-11e9-8460-6ca57d048d83.png">
[cX-canvas_Corr.json.txt](https://github.com/username_1/canvasXpress/files/3523583/cX-canvas_Corr.json.txt)
Answers:
username_1: It will be fixed in version 29.7. Sorry for the delay!
username_1: Preview.
Added the following to the config:
```
"correlationLabelInterval": 3,
"varLabelScaleFontFactor": 0.5
```

---
MattJeanes/TARDIS | 180735704 | Title: Flight without brakes
Question:
username_0: What about a "calm" flight sound which can be enabled by releasing the brake?
As seen in episode S5E4, The Time of Angels.
Answers:
username_0: as seen here:
(SPOILERS)
http://dai.ly/x2cup3u?start=346
username_0: I have noticed that the 2010 seems to be the only TARDIS capable of doing so, so I thought of just adding support for extension developers to change fly sounds.
username_1: In my opinion, he should do flight w/o brakes for all TARDISes; it would make life a lot easier, and players wouldn't come running to destroy your TARDIS. But he could also make it like the sound dampeners, which use up a lot of power, so the only things you could do would be fly and teleport...
username_0: In the series, you always hear the exterior brakes no matter if they are released or not. But for the sake of everybody's comfort, I would just apply the stealth flight sound everywhere.
Status: Issue closed
username_2: Duplicate of #52
---
spboyer/social-linker | 418377320 | Title: UI Improvements
Question:
username_0: * Move settings to the same view as link creator page, perhaps on a side column
* Give an option to create all the links for each platform at the same time.
* Option for creating short link or not for each platform.
Answers:
username_1: A possibility would be to re-envision this as a wizard layout with steps.
Status: Issue closed
---
blakeblackshear/frigate | 789880217 | Title: Beta 1.4 - Still many ffmpeg errors
Question:
username_0: Remade and simplified my whole add-on config based on beta 1.4 documentation.
While the system is working (still testing), I am still getting continuous ffmpeg errors:
* Starting nginx nginx
...done.
frigate.app INFO : Creating directory: /tmp/cache
Starting migrations
peewee_migrate INFO : Starting migrations
There is nothing to migrate
peewee_migrate INFO : There is nothing to migrate
frigate.app INFO : Camera processor started for Garage: 38
detector.coral INFO : Starting detection process: 36
frigate.edgetpu INFO : Attempting to load TPU as usb:0
frigate.app INFO : Capture process started for Garage: 40
frigate.edgetpu INFO : TPU found
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
watchdog.Garage INFO : No frames received from Garage in 20 seconds. Exiting ffmpeg...
watchdog.Garage INFO : Waiting for ffmpeg to exit gracefully...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
watchdog.Garage INFO : No frames received from Garage in 20 seconds. Exiting ffmpeg...
watchdog.Garage INFO : Waiting for ffmpeg to exit gracefully...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
[Truncated]
max_seconds: 300
retain:
default: 10
objects:
person: 15
objects:
track:
- person
filters:
person:
min_area: 5000
max_area: 100000
min_score: 0.5
threshold: 0.7
database:
path: /media/frigate/clips/frigate.db
logger:
default: info
logs:
frigate.mqtt: error
Answers:
username_1: Try the following to see what ffmpeg is complaining about:
```yaml
cameras:
Garage:
ffmpeg:
inputs:
- path: 'rtsp://xxx....'
global_args: -hide_banner -loglevel info
roles:
- detect
- rtmp
```
username_0: Thanks Blake.
Logs show a LOT of this:
ffmpeg.Garage.detect ERROR : frame= 2623 fps= 15 q=-1.0 q=-0.0 size= 8191kB time=00:02:54.80 bitrate= 383.9kbits/s dup=0 drop=1744 speed=1.01x
ffmpeg.Garage.detect ERROR : frame= 2631 fps= 15 q=-1.0 q=-0.0 size= 8193kB time=00:02:55.40 bitrate= 382.7kbits/s dup=0 drop=1750 speed=1.01x
ffmpeg.Garage.detect ERROR : frame= 2639 fps= 15 q=-1.0 q=-0.0 size= 8195kB time=00:02:55.80 bitrate= 381.9kbits/s dup=0 drop=1755 speed=1.01x
ffmpeg.Garage.detect ERROR : frame= 2646 fps= 15 q=-1.0 q=-0.0 size= 8362kB time=00:02:56.40 bitrate= 388.3kbits/s dup=0 drop=1759 speed=1.01x
ffmpeg.Garage.detect ERROR : frame= 2655 fps= 15 q=-1.0 q=-0.0 size= 8365kB time=00:02:57.00 bitrate= 387.2kbits/s dup=0 drop=1766 speed=1.01x
ffmpeg.Garage.detect ERROR : frame= 2663 fps= 15 q=-1.0 q=-0.0 size= 8367kB time=00:02:57.40 bitrate= 386.4kbits/s dup=0 drop=1771 speed=1.01x
ffmpeg.Garage.detect ERROR : frame= 2670 fps= 15 q=-1.0 q=-0.0 size= 8369kB time=00:02:58.00 bitrate= 385.2kbits/s dup=0 drop=1776 speed=1.01x
ffmpeg.Garage.detect ERROR : frame= 2679 fps= 15 q=-1.0 q=-0.0 size= 8371kB time=00:02:58.60 bitrate= 384.0kbits/s dup=0 drop=1781 speed=1.01x
ffmpeg.Garage.detect ERROR : frame= 2687 fps= 15 q=-1.0 q=-0.0 size= 8373kB time=00:02:59.00 bitrate= 383.2kbits/s dup=0 drop=1787 speed=1.01x
ffmpeg.Garage.detect ERROR : frame= 2694 fps= 15 q=-1.0 q=-0.0 size= 8374kB time=00:02:59.60 bitrate= 382.0kbits/s dup=0 drop=1791 speed=1.01x
ffmpeg.Garage.detect ERROR : frame= 2702 fps= 15 q=-1.0 q=-0.0 size= 8541kB time=00:03:00.05 bitrate= 388.6kbits/s dup=0 drop=1797 speed=1.01x
username_1: Try changing to error instead of info.
username_2: So I found that if you log into the container, the /tmp/cache folder is 100% full.
I have managed before to rm everything in there, and then things start running again. Somewhere I read there was a fix for the issue. I'll create an issue.
username_0: Thank you both. Here is the log after changing to error:
username_2: It’s prob the /tmp/cache that’s full inside the docker, please post the
df -h
from inside the container,
also
ls -al /tmp/cache
username_0: Hi username_2, thank you. I ran the commands inside the frigate container:

username_2: OK bud, my recommendation is let's get you to the latest version, aka 8rc2 or similar, then we can have a look from there. I don't see the /tmp/cache or the /cache mounts around, so it could be an issue that came in during the creation of your docker instance.
username_2: It could also be your resolution and such; try and confirm your camera is running h264 and the correct resolution on that stream.
username_0: Thank you username_2. I believe I'm running rc2 already. That's the beta 1.4, right? I saw you guys talking about RC3 coming soon these days?
My cheap cam (EZVIZ CTQ3W) is definitely h264 and the resolution is correct, as it supports only full HD and I cannot even change that (tried in the past). If I change the res settings in the frigate config then the mjpg stream gets messed up badly (pixelated, green, overlapping blurry images).
I think your idea on the missing tmp/cache folders might be the main cause...
username_0: Just noticed this. Not sure this is normal...

username_2: thanks, can you do the df -h again for me please
username_0: Yes of course. Here you go: still don't see the tmp/cache, but frigate logs show that it created it... interesting.

username_2: How are you running Frigate?
username_2: You can use Portainer to set the disk; otherwise I'm not sure if your recordings will work.
username_2: https://github.com/username_1/frigate/tree/release-0.8.0#docker
So you can confirm there that you complied with the
```
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
  target: /tmp/cache
  tmpfs:
    size: 1000000000
```

username_0: Via the supervisor add-on. In the beta 1.2 earlier, my clips were all recording very well. I had tons of clips that I could view...


username_0: No no, I didn't compile with Docker at all. I just installed the add-on...
username_2: Oki, well, monitor it and let's see how it goes with clips and all.
username_2: you can still do a listing of the /tmp/cache folder, there should be some files in there
username_0: Ok thanks a lot. Did I install it correctly? I mean I just installed the add-on that's mentioned under HassOS section and skipped all the rest. Because the add-on installs the docker containers by itself. Doesn't it?
I can't see any tmp/cache folder... Is that not in the frigate container and somewhere else?
Sorry for all the trouble and again thanks a million for helping out.
username_2: Should be inside the frig container; it's more just for interest. If it was built via HASS then I'm sure it's fine, otherwise I'm sure this place would be blowing up with issues.
username_0: found it!

username_1: The error logs from ffmpeg indicate that your connection to the Garage camera is just spotty. Is this a wired camera? I think ffmpeg is exiting periodically and reconnecting. When that happens, I would expect to see these types of error messages:
```
frigate.video INFO : Garage: ffmpeg sent a broken frame. memoryview assignment: lvalue and rvalue have different structures
frigate.video INFO : Garage: ffmpeg process is not running. exiting capture thread...
```
Given that you do have some files in /tmp/cache (those are used to build the clips after a detection event), we know it is working sometimes. When ffmpeg exits from a broken connection, the file it was writing at the time to /tmp/cache will usually end up corrupted.
username_0: Thank you Blake.
That must be it. Garage cam is using WIFI and the signal is definitely weak. So for sure thats the problem then. Thank you!
If i get those errors again, i will test with another camera thats wired.
Btw, still no ffmpeg errors :) fingers crossed.
AGAIN: Thank you for the awesome work with Frigate!!! And thank you for everyone who's contributing.
BIg fan! :)
username_0: Guys, it's been a few hours now since my update and guess what?
- No ffmpeg errors or any other errors in the log. Fully clean.
- Ingress works perfectly well
- Clips recorded and available in the media browser
You did it!!!!
And i probably jinxed it now :)
Status: Issue closed
---
dart-lang/sdk | 84548171 | Title: Invalid CSS property name: -webkit-touch-callout
Question:
username_0: **[user feedback]**
Generated sunflower application from Welcome page.
Ran "sunflower.dart" (run button), received the followng
output message:
"Invalid CSS property name: -webkit-touch-callout"
There was some quick display at the top of the new panel, but I could
not read it. It disappeared too quickly.
Application seemed to work correctly - did not expect Output error message.
////////////////////////////////////////////////////////////////////////////////////
Editor: 0.6.19_r26297 (2013-08-17)
OS: Windows 7 - x86 (6.1)
JVM: 1.7.0_25
# projects: 3
# open dart files: 0
auto-run pub: true
localhost resolves to: 127.0.0.1
mem max/total/free: 967 / 178 / 95 MB
thread count: 27
index: 159557 relationships in 36627 keys in 154 sources
SDK installed: true
Dartium installed: true
Status: Issue closed
Answers:
username_1: I don't think this exists any more.
```
sdk$ git grep webkit-touch
sdk$
```
---
beetbox/beets | 315537905 | Title: Lyrics plugin error when writing lyrics to file
Question:
username_0: ### Problem
In my setup I am sharing my library between Linux and Windows systems. The setup works fine except when I try to write lyrics. With songs imported on my Windows machine, I can't write lyrics from my Linux partition, and vice versa.
For example, I import a record in Windows, fetch the lyrics, and try to write them; it works fine. If I try to do the same in Linux, it doesn't work. Paths seem to be hardcoded.
The same happens with records imported in Linux: the same error in Windows.
It seems the lyrics plugin is trying to write to the path the music was imported to originally, instead of using the music path from the config file, which points to the same directory.
Running this command in verbose (`-vv`) mode from my linux partition:
```sh
beet -vv lyrics -f ABBA
```
Led to this problem:
```
user configuration: /home/chema/.config/beets/config.yaml
data directory: /home/chema/.config/beets
plugin paths:
Sending event: pluginload
lyrics: Disabling google source: no API key configured.
library database: /run/media/chema/Music/musiclibrary.blb
library directory: /run/media/chema/Music
Sending event: library_opened
lyrics: got lyrics from backend: LyricsWiki
lyrics: fetched lyrics: ABBA - Gold: Greatest Hits [1992] - Dancing Queen
Sending event: write
open failed: [Errno 2] No such file or directory: b'E:\\FLAC\\ABBA\\ABBA [1992] Gold_ Greatest Hits {Polydor - 314 517 007-2}\\01 - Dancing Queen.flac'
error reading E:\FLAC\ABBA\ABBA [1992] Gold_ Greatest Hits {Polydor - 314 517 007-2}\01 - Dancing Queen.flac: [Errno 2] No such file or directory: b'E:\\FLAC\\ABBA\\ABBA [1992] Gold_ Greatest Hits {Polydor - 314 517 007-2}\\01 - Dancing Queen.flac'
Sending event: database_change
[...]
```
Path should be /run/media/chema/Music/FLAC/ABBA/ABBA [1992] Gold_ Greatest Hits {Polydor - 314 517 007-2}/01 - Dancing Queen.flac
Answers:
username_0: Thank you for your answer. I am aware that the library can be relocated, I just thought that the lyrics plugin should get the files path based on the music folder stated in the beets configuration file, instead of using the absolute path.
I have currently no problem in querying the database from my windows and unix systems, no matter which system I am when importing new music. That's why I infered no absolute paths were being used, but if that's the way it works I suppose I can live with it.
username_1: Yep! Like everything that accesses files, the `lyrics` plugin uses absolute paths.
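As a workaround sketch for this shared-library setup, the stored paths could be rewritten when switching OSes. The assumptions here (beets keeping item paths as bytes in an SQLite table named `items` with a `path` column) should be verified against your own database, and the `.blb` file backed up first; the prefixes come from the report above:
```python
# One-off rewrite of Windows-style paths to their Linux equivalents.
import sqlite3

DB = "/run/media/chema/Music/musiclibrary.blb"  # library file from this report
OLD = "E:\\FLAC\\"                              # Windows-side prefix (assumed)
NEW = "/run/media/chema/Music/FLAC/"            # Linux-side prefix (assumed)

con = sqlite3.connect(DB)
rows = con.execute("SELECT id, path FROM items").fetchall()
for item_id, path in rows:
    p = path.decode("utf-8", "surrogateescape") if isinstance(path, bytes) else path
    if p.startswith(OLD):
        fixed = NEW + p[len(OLD):].replace("\\", "/")
        con.execute("UPDATE items SET path = ? WHERE id = ?",
                    (fixed.encode("utf-8"), item_id))
con.commit()
con.close()
```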
Status: Issue closed
---
mash-up-kr/Thing-BackEnd | 456621454 | Title: Add Swagger
Question:
username_0: # Add Swagger
## Communicate the API spec with the front end (iOS) via Swagger.
<br>
### Completion criteria
- [ ] Add Swagger to the project (application)
<br>
## Check List
- [ ] Is the issue title meaningful?
- [ ] Is the issue content described in enough detail that someone unfamiliar with it can understand it from the issue alone? (what, when, where...)
- [ ] If there are references, have they been added?
- [ ] If there are related issues, have they been added?
- [ ] Have meaningful labels been added?
- [ ] Have assignees been added?
- [ ] Has an estimate been added?
- [ ] If there is a related milestone, has it been added?
- [ ] If there are related epics, have they been added?
---
Status: Issue closed
---
microsoft/fast | 685712748 | Title: Comments in html templates cause directives to not process
Question:
username_0: **Describe the bug; what happened?**
If there is a comment within an html template, any directives after the comment do not render.
**What are the steps to reproduce the issue?**
Render this component:
```ts
import { FASTElement, customElement, css, html, repeat, observable } from "@microsoft/fast-element";
const template = html<CommentTest>`
<div>
<p>Items:</p>
<ul>
${repeat((x) => x.items, html<string>`<li>${(x) => x}</li>`)}
</ul>
<!-- Here is a test comment. With it the repeat will not render. Remove it and it will. -->
<p>After the comment:</p>
<ul>
${repeat((x) => x.items, html<string>`<li>${(x) => x}</li>`)}
</ul>
</div>
`;
const styles = css`
:host {
display: block;
}
`;
@customElement({
name: "comment-test",
template,
styles,
})
export class CommentTest extends FASTElement {
@observable items = ["red", "green", "blue"];
}
```

**What behavior did you expect?**
The list to render a second time.
Answers:
username_1: Ok, tracked down the issue. Fix is easy (delete 2 unnecessary lines), testing is a bit more difficult. I should have a PR for Thursday's release.
Status: Issue closed
---
drdhaval2785/SanskritVerb | 117078370 | Title: अगिँ लुङ्
Question:
username_0: 1 - अन्ग्+सिच्+ई+त्
इत्संज्ञालोपोत्तरं अन्ग् स् ई त्
आर्धधातुकस्य इड्वलादेः इति इडागमे
अन्ग् इ स् ई त्
इट ईटि इति सस्य लोपे
अन्ग् इ ई त्
सवर्णदीर्घे कृते आन्गीत्
अनुस्वारे, परसवर्णे च कृते आङ्गीत्।
Answers:
username_1: It is too difficult to alter.
Positioning may alter, because we are applying a different algorithm.
There doesn't seem to be any wrong rule application.
Status: Issue closed
username_1: Happens now.
---
quintel/etengine | 121441557 | Title: create new method to calculate required additional network capacity
Question:
username_0: This is part of https://github.com/quintel/etmodel/issues/1961 where I describe the background of the network calculation. The current issue deals with the core of the calculation.
**Current implementation**
Currently we calculate the required additional network capacity in a set of Gqueries (e.g. `network_lv_net_delta_peak_load_max.gql`), in which we compare the additional peak load on that voltage level to the total available capacity on that voltage level. This calculation is performed for five of the six different voltage levels in the ETM.
**Proposed implementation**
We would like to replace the total available capacity by a distribution of the available capacity over the subnets within that voltage level. I have obtained these distributions from Alliander (and will place them in a Gist). I have successfully implemented the following calculation in a Python script and would like to have a similar ETEngine method so that I can implement it in the ETM. Below I describe this method.
*Inputs*
The method would require three inputs:
- The total additional peak load, obtained from e.g. `Q(network_lv_net_peak_load_we)`; this inputs needs to be variable as we would have to perform the same method for different additional peak loads
- The total capacity of the voltage level; this can be obtained from the respective node file and needs to be an input to this method
- The distribution of the available capacity over the subnets in this voltage level. This is a new piece of data (which I will upload to a Gist) and basically consists of a list of 20 data points (between 0 and 1) describing which fraction of the network is still available. Different voltage levels require different distributions which are stored in different files. As an option for this method, I would like to be able to select which of these distributions to use (I propose to use the name of the file as an identifier, but am open to suggestions)
*Calculation*
For ease I have defined the inputs as follows
`L_p`: additional peak load
`C_tot`: total capacity of the voltage network
`a_i`: the i-th entry of the distribution
`n`: the number of entries in the distribution file (for now 20, but this might vary in the future)
Based on the above inputs the following calculation needs to be performed:
`C_add = SUM_i ( MAX ( L_p / n - C_tot / n * a_i , 0 ) )`, where `C_add` is the required additional capacity (the method's output; note it is distinct from the input `C_tot`) and the sum runs over all `n` entries of the distribution
*Output*
The method needs to output a single number, i.e. the required additional capacity over all subnets in that voltage level
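For concreteness, a minimal Python sketch of this calculation (function and variable names are illustrative only):

```python
def required_additional_capacity(l_p, c_tot, distribution):
    """Return the capacity (MW) to add across all subnets of a voltage level.

    l_p          -- additional peak load on the voltage level (MW)
    c_tot        -- total capacity of the voltage level (MW)
    distribution -- fractions (0..1) of capacity still available per subnet
    """
    n = len(distribution)
    # Load and capacity are spread evenly over the n subnets; subnets with
    # spare room (negative shortfall) contribute nothing to the sum.
    return sum(max(l_p / n - c_tot / n * a_i, 0.0) for a_i in distribution)
```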
I will then connect the existing Gqueries to this new method (and will create a separate issue for this).
Two points to be addressed (I will create separate issues for these):
- [ ] where to store the distribution files? (I propose somewhere on ETSource)
- [ ] how to specify which distribution file to use?
Assigning @username_1 .
Answers:
username_1: @username_0 thanks! could you:
* add a list of and link to all the queries that need updating?
With regard to where to store the distribution files: I think they belong in the country's dataset/year directory. Could you put them in a separate folder, and one for every node?
@username_2 I would like to ask you to make an estimate on time required to implement this technically.
username_1: @username_0: could you also add a numeric calculation example? So we can write the spec and be sure it calculates correctly
username_0: The following Gqueries would need to be updated (I will do this once the new method would be available):
- [network_hv_mv_trafo_delta_peak_load_max.gql](https://github.com/quintel/etsource/blob/master/gqueries/modules/network/network_impact_calculations/hv_mv_trafo/network_hv_mv_trafo_delta_peak_load_max.gql)
- [network_lv_net_delta_peak_load_max.gql](https://github.com/quintel/etsource/blob/master/gqueries/modules/network/network_impact_calculations/lv_net/network_lv_net_delta_peak_load_max.gql)
- [network_mv_d_net_delta_peak_load_max.gql](https://github.com/quintel/etsource/blob/master/gqueries/modules/network/network_impact_calculations/mv_d_net/network_mv_d_net_delta_peak_load_max.gql)
- [network_mv_lv_trafo_delta_peak_load_max.gql](https://github.com/quintel/etsource/blob/master/gqueries/modules/network/network_impact_calculations/mv_lv_trafo/network_mv_lv_trafo_delta_peak_load_max.gql)
- [network_mv_t_net_delta_peak_load_max.gql](https://github.com/quintel/etsource/blob/master/gqueries/modules/network/network_impact_calculations/mv_t_net/network_mv_t_net_delta_peak_load_max.gql)
username_0: I have pushed these distribution files for the NL dataset to a separate branch on ETSource, `load-distrbutions` (apologies for the typo in the branch name :smile:). The files are called:
- `network_hv_mv_trafo_distribution.csv`
- `network_lv_net_distribution.csv`
- `network_mv_d_net_distribution.csv`
- `network_mv_lv_trafo_distribution.csv`
- `network_mv_t_net_distribution.csv`
We currently only support the network calculation for NL.
username_1: @username_0 could you add a link to that etsource commit?
username_0: https://github.com/quintel/etsource/commit/25b253bd2bc1bbd74f8d690c267891541640a07c
username_2: Given that all of the data seems readily available, implementing the method seems pretty simple. I would estimate a day, maybe two. I would want to add support to Atlas to read the distribution files (with some simple tests), add a Node attribute to associate the node with the CSV file, and then create the method in ETEngine.
@username_0 Just so I'm clear, would you expect to call the new method like this?
```ruby
V(
some_network_converter,
required_additional_network_capacity(Q(network_lv_net_peak_load_we))
)
```
This provides the additional load calculated with another query. The total capacity of the network could probably be read from the Converter attributes instead of having to be provided explicitly.
username_0: In the table below I provide one numerical calculation example per voltage level, so you can test the individual voltage levels one by one
| Voltage level | input: Additional peak load (`L_p`) (MW) | input: Total capacity of the voltage network (`C_tot`) (MW) | Expected output (MW) |
|---|---|---|---|
|`hv_mv_trafo`| 20,000.0 | 37,830.0 | 8,053.2 |
|`lv_net`| 100,000.0 | 98,179.2 | 17,431.3 |
|`mv_d_net`| 50,000.0 | 72,000.0 | 10,144.0 |
|`mv_lv_trafo`| 50,000.0 | 67,790.4 | 11,510.6 |
|`mv_t_net`| 50,000.0 | 50,921.2 | 19,141.8 |
username_0: Good idea. So far I simply added the `network_capacity_available_in_mw` and `network_capacity_used_in_mw`. If possible, I suggest we do the same in the new method.
username_2: 
(Sorry :laughing:)
In which case, I think one day is fair for the technical implementation. 1.5 if you want to be cautious, but I don't see any complicating factors.
BTW, thanks for the clear and detailed explanation!
username_0: :laughing:
Status: Issue closed
username_2: All done!
@username_0: [Atlas](https://github.com/quintel/atlas/commit/43c3a241b2ba64c4fac82e66823373fb676dd36a) and [ETEngine](https://github.com/quintel/etengine/commit/45bbde8bf4f46fc3b1e7e2610aed137f12394afe) are updated, and [I assigned the appropriate `capacity_distribution` attributes](https://github.com/quintel/etsource/commit/074d4bd79aa678430b51f83194ab4396274732b9) on your load-distrbutions ETSource branch. ETEngine (beta) has not been deployed with these changes yet, but we can do so whenever you're ready.
If the names `capacity_distribution` and `required_additional_network_capacity_in_mw` don't meet with your approval, let me know what you'd prefer and I'll get them changed.
**GQL:** (locally)
```ruby
EACH(
V(energy_power_transformer_mv_hv_electricity, required_additional_network_capacity_in_mw(20_000.0)),
V(energy_power_lv_network_electricity, required_additional_network_capacity_in_mw(100_000.0)),
V(energy_power_mv_distribution_network_electricity, required_additional_network_capacity_in_mw(50_000.0)),
V(energy_power_transformer_lv_mv_electricity, required_additional_network_capacity_in_mw(50_000.0)),
V(energy_power_mv_transport_network_electricity, required_additional_network_capacity_in_mw(50_000.0))
)
```
**Results:**
```ruby
[
8,053.205, # hv_mv_trafo expected: 8,053.2
17,431.292799999996, # lv_net expected: 17,431.3
10,144.0, # mv_d_net expected: 10,144.0
11,510.626400000005, # mv_lv_trafo expected: 11,510.6
19,141.752799999995 # mv_t_net expected: 19,141.8
]
```
*(Thank you for [that table of expected results](https://github.com/quintel/etengine/issues/803#issuecomment-163586090); that was very helpful!)*
username_1: Super fast!
username_0: These certainly meet my approval, so no changes required there.
I'm going to work on implementing it in the relevant Gqueries right away. |
woodpecker-ci/woodpecker | 941325734 | Title: woodpecker cli to support enabling tag based builds
Question:
username_0: When we enable a repo, deployment and push events get automatically enabled. We have automation for onboarding new microservices, where one part includes creating the repo and integrating Woodpecker. We use the drone CLI to enable repos, and at the same time we want to enable tag-based events. Is there any other way I can do this, like an API call?
Answers:
username_1: I plan to update the way woodpecker decides whether a webhook call should be executed in #281. The current plan is to allow all webhooks / events by default and add an option to disable pull requests. I think this would solve your problem somehow.
In addition to that, there already is an API endpoint to change the settings of a project under `POST /api/repos/{owner}/{repo}`. You should find the implementation of that endpoint [here](https://github.com/woodpecker-ci/woodpecker/blob/master/server/repo.go#L101); such a call could look like the sketch below. There is also a swagger documentation of woodpecker, but I don't know if it is up to date ATM.
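For illustration, such a call might look like this (a Python sketch; the payload key and auth header are guesses, so verify them against the handler in `server/repo.go`):

```python
import requests

OWNER, REPO = "octocat", "demo"  # hypothetical repository

resp = requests.post(
    f"https://woodpecker.example.com/api/repos/{OWNER}/{REPO}",
    headers={"Authorization": "Bearer <api-token>"},  # assumed auth scheme
    json={"allow_tags": True},  # assumed field name -- check server/repo.go
)
resp.raise_for_status()
```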
Status: Issue closed
|
asriz7777/FX-Scripts-Functional | 412737101 | Title: NET BANKING APP : ApiV1IssuesPostIssueuserbDisallowAbact7
Question:
username_0: Project : NET BANKING APP
Job : Default
Env : Default
Category : null
Tags : null
Severity : null
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 400
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=NTEwZmIyYmMtM2NlNi00MjIzLTkxZWMtY2RjM2VjNTJhZDZj; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 21 Feb 2019 03:53:08 GMT]}
Endpoint : http://54.215.136.217/api/v1/issues
Request :
{
"assertions" : "gQ5DV3kb",
"assignedTo" : "gQ5DV3kb",
"createdBy" : "",
"createdDate" : "",
"description" : "gQ5DV3kb",
"endpoint" : "gQ5DV3kb",
"env" : "gQ5DV3kb",
"failedAssertions" : "gQ5DV3kb",
"headers" : [ ],
"id" : "",
"inactive" : false,
"issueName" : "gQ5DV3kb",
"issueStatus" : "CLOSED",
"issueType" : "MANUAL",
"method" : "TRACE",
"modifiedBy" : "",
"modifiedDate" : "",
"project" : "",
"requestBody" : "gQ5DV3kb",
"responseBody" : "gQ5DV3kb",
"responseHeaders" : "gQ5DV3kb",
"result" : "gQ5DV3kb",
"statusCode" : "gQ5DV3kb",
"tags" : [ ],
"version" : ""
}
Response :
{
"timestamp" : "2019-02-21T03:53:08.922+0000",
"status" : 400,
"error" : "Bad Request",
"message" : "JSON parse error: Cannot construct instance of `com.fxlabs.issues.dto.project.Project` (although at least one Creator exists): no String-argument constructor/factory method to deserialize from String value (''); nested exception is com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot construct instance of `com.fxlabs.issues.dto.project.Project` (although at least one Creator exists): no String-argument constructor/factory method to deserialize from String value ('')\n at [Source: (PushbackInputStream); line: 19, column: 15] (through reference chain: com.fxlabs.issues.dto.project.Issue[\"project\"])",
"path" : "/api/v1/issues"
}
Logs :
Assertion [@StatusCode == 401 OR @StatusCode == 403 OR @Response.errors == true] resolved-to [400 == 401 OR 400 == 403 OR == true] result [Failed]
--- FX Bot --- |
christofferok/laravel-emojione | 318200905 | Title: emojione v3.1.3 breaks this package
Question:
username_0: Description:
emojione v3.1.3 breaks this package returning `Undefined variable: shortname`
v3.1.2 returns before trying to access the $shortname var
```
$ruleset = $this->getRuleset();
$shortcode_replace = $ruleset->getShortcodeReplace();
$unicode_replace = $ruleset->getUnicodeReplace();
$unicode_replace_greedy = $ruleset->getUnicodeReplaceGreedy();
$unicode = strtoupper($m[0]);
if ( array_key_exists($unicode, $unicode_replace))
{
$shortname = $unicode_replace[$unicode];
}
else if ( $this->greedyMatch && array_key_exists($unicode, $unicode_replace_greedy) )
{
$shortname = $unicode_replace_greedy[$unicode];
}
else
{
return $m[0]; // <-- returns here
}
$filename = $shortcode_replace[$shortname][2];
$category = (strpos($filename, '-1f3f') !== false) ? 'diversity' : $shortcode_replace[$shortname][3];
$titleTag = $this->imageTitleTag ? 'title="'.htmlspecialchars($shortname).'"' : '';
```
https://github.com/emojione/emojione/blob/3.1.2/lib/php/src/Client.php#L466
v3.1.3 no longer returns before setting the $shortname
```
if ( array_key_exists($unicode, $unicode_replace) && !in_array($unicode, $bList) )
{
$shortname = $unicode_replace[$unicode];
}
else if ( $this->greedyMatch && array_key_exists($unicode, $unicode_replace_greedy) && !in_array($unicode, $bList) )
{
$shortname = $unicode_replace_greedy[$unicode];
}
else
{
$unicode; // <-- Gets to this point
}
$filename = $shortcode_replace[$shortname][2]; // <-- shortname is not set
$category = (strpos($filename, '-1f3f') !== false) ? 'diversity' : $shortcode_replace[$shortname][3];
$titleTag = $this->imageTitleTag ? 'title="'.htmlspecialchars($shortname).'"' : '';
```
https://github.com/emojione/emojione/blob/3.1.3/lib/php/src/Client.php#L469
Status: Issue closed
Answers:
username_0: it does thanks! |
theotherp/nzbhydra2 | 603357258 | Title: Drunken Slug: Unexpected error while searching
Question:
username_0: Hello
Following today's update to v2.19.3, I can't access the Drunken Slug indexer, all other indexers are working as expected. I've pasted the log entry message and stacktrace below this message.
Check capabilities for Indexer shows the determined api limit is null and download limit is null as well.
Log Message: Drunken Slug: Unexpected error while searching
Stacktrace:
java.lang.NullPointerException: null
at org.nzbhydra.mapping.newznab.xml.NewznabXmlApilimits.getApiOldestTime(NewznabXmlApilimits.java:73)
at org.nzbhydra.indexers.Newznab.completeIndexerSearchResult(Newznab.java:474)
at org.nzbhydra.indexers.Newznab.completeIndexerSearchResult(Newznab.java:73)
at org.nzbhydra.indexers.Indexer.searchInternal(Indexer.java:194)
at org.nzbhydra.indexers.Indexer.search(Indexer.java:119)
at org.nzbhydra.searching.Searcher.lambda$getIndexerCallable$12(Searcher.java:411)
at java.util.concurrent.FutureTask.run(Unknown Source)
at org.nzbhydra.logging.MdcThreadPoolExecutor$1.run(MdcThreadPoolExecutor.java:114)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Could you please help me out.
Thanks
Answers:
username_1: Fix coming soon.
Status: Issue closed
|
playframework/play-scala-websocket-example | 241097358 | Title: Test web sockets with FakeRequest?
Question:
username_0: I can write unit tests for play controllers if I mix in suite class with **FakeApplicationFactory** and **BaseOneAppPerTest**, providing correct `fakeApplication` method which returns fake application. I can test my controllers with `FakeRequest` object, with GET/POST or other HTTP methods. However, how I can I test web sockets?
Answers:
username_1: https://github.com/playframework/play-scala-websocket-example/blob/2.6.x/test/controllers/FunctionalSpec.scala
Status: Issue closed
|
void-linux/void-packages | 577041107 | Title: ziglang - zig - Update possible?
Question:
username_0: There is a serious bug in version 0.5.0 which makes zig mostly unusable:
[Parse error when anonymous literal is the first element in an array](https://github.com/ziglang/zig/issues/3679)
Would it be possible to update the zig package with the bug fixed?
Answers:
username_1: do they have a released version?
username_0: No, that's the problem. (I know the release policy of Void Linux.)
username_1: well, if they've got a patch we can merge it
username_0: No, not really. No patch. I think we just have to wait for the next release. In the meantime I do a manual installation. Thanks for your quick reply!
Status: Issue closed
username_1: I can't do much otherwise, let's wait for the next release.
Thanks for the heads up :-) |
photopea/photopea | 517418157 | Title: animate button
Question:
username_0: the button doesn't work.
Answers:
username_1: Hi, this button should take you to a website of our sponsor, who provides the animation software.
What happens when you click it?
Status: Issue closed
username_1: Now I see what you mean. When the screen size was small, it was impossible to click. I fixed it.
PlayWithMagic/PlayWithMagic.org | 72044361 | Title: Migration to HTTPS breaks CDN loads
Question:
username_0: After migrating the site to HTTPS, none of the CDN items are loading correctly. Need to fix this so that CDN items load reliably over HTTPS.
Answers:
username_0: Fix appears to be to load all resources using `//path.to.url/resource.name`. Will test and commit.
username_0: See http://stackoverflow.com/questions/18964800/jquery-cdn-secure-insecure-loading-issues
username_0: Fixed as of commit a1902fa.
Status: Issue closed
|
monarch-initiative/mondo | 974563497 | Title: MONDO:0020642 polycystic kidney disease add ExactMatch Mesh:D007690
Question:
username_0: **Mondo term (ID and Label):**
MONDO:0020642 polycystic kidney disease
MONDO:0004691 autosomal dominant polycystic kidney disease
**Xref that should be fixed (ID and label):**
Would it be possible to move the CloseMatch MESH:D007690 from MONDO:0004691 (autosomal dominant polycystic kidney disease) to an ExactMatch on MONDO:0020642 (polycystic kidney disease)?
https://meshb.nlm.nih.gov/record/ui?ui=D007690 Polycystic Kidney Diseases
The 'scope note' includes both polycystic kidney disease forms (AD/AR), so it seems to make sense for it to be associated with the parent term.
**Other comments:**
I think you are planning to review the classification of renal entities, so this might fall under #3282<issue_closed>
Status: Issue closed |
openoakland/woeip | 805994151 | Title: Add navigation controls to map
Question:
username_0: ## Description
Currently, the Mapbox integration on the View Map page doesn't display any visible navigational controls, so the user has to know to move around the map by using click-and-drag and scrolling behavior, which isn't always intuitive. We'd like to add some visible controls similar to those described in the [Mapbox docs](https://docs.mapbox.com/mapbox-gl-js/example/navigation/) to provide more obvious control.<issue_closed>
Status: Issue closed |
cloudio-project/cloudio-services | 530177134 | Title: Performance improvements
Question:
username_0: - [ ] Replace MongoDB as process database by key-value database whereas actual values can be written without the need to read the JSON document each time.
- [ ] Find a solution to the problems using InfluxDB batch options in order to enable batched writing which can improve performance dramatically.<issue_closed>
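For illustration, a value update against a key-value store then becomes a single write (a Python sketch assuming a Redis-like store; client and key layout are illustrative only):

```python
import redis  # requires the redis-py client; any fast key-value store works

r = redis.Redis()

def write_value(endpoint: str, attribute: str, value: float) -> None:
    # One O(1) write per update: no read-modify-write of a JSON document,
    # unlike the current MongoDB-based approach.
    r.set(f"{endpoint}/{attribute}", value)
```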
Status: Issue closed |
ReactiveX/RxJava | 201591319 | Title: 2.0.4: an infinite disposable
Question:
username_0: This fails:
@Test
public void directScheduleOnSingleThreadExecutor() {
Scheduler scheduler = Schedulers.from(Executors.newSingleThreadExecutor());
Disposable disposable = scheduler.scheduleDirect(() -> {
});
long start = nanoTime();
while (!disposable.isDisposed()) {
assertTrue(nanoTime() - start < SECONDS.toNanos(10));
}
}
while replacing `from(...)` with `io()` works
I can't find where the error exactly is, my debugger can't jump in to the right source code line.
Answers:
username_1: Yes, the internal `ExecutorScheduler.BooleanDisposable` doesn't set its state to disposed after the task has run.
How much problem is it for you?
(Sidenote: It is generally not recommended to spin on `isDisposed()` because that nullifies the reason RxJava exists: not blocking on `Future.get()` or spinning on `Future.isDone()`.)
username_0: The bug it was causing is that when user scrolls messages up, I wanted paging to run if it not already running. After the transition to RxJava I started to use this new `scheduleDirect` method. The first paging iteration worked as expected, but after that it just hangs up without any signs of living.
This is my current workaround:
```java
public static Disposable scheduleDirect(Scheduler scheduler, Runnable runnable) {
    Scheduler.Worker worker = scheduler.createWorker();
    worker.schedule(() -> {
        try {
            runnable.run();
        } finally {
            worker.dispose();
        }
    });
    return worker;
}
```
username_0: (As a sidenote: some real-world tasks DO require blocking operations or running tasks depending on other tasks' state.)
username_1: Okay, for consistency with other Schedulers, I'll post a fix for this.
username_0: Cool! :)
username_1: Closing via #5005.
Status: Issue closed
|
suchipi/grep-ast | 481132052 | Title: Option to suppress stderr
Question:
username_0: It would be nice to be able to just get the output that is written to stdout. This way, using this inside vim would be pretty easy:
```vim
set grepprg=grep-ast\ --suppress-stderr
set grepformat^=%f:%l:%c:%m
```
While in the shell this can be done using `2> /dev/null`, this can't be done inside `grepprg` AFAIK.
`--suppress-stderr` might be too ad hoc. Some alternatives that come to mind are `--vimgrep` (like [`ag`](https://github.com/ggreer/the_silver_searcher) does) or a `--formatter` option like in eslint. |
OpenExoplanetCatalogue/open_exoplanet_catalogue | 98240919 | Title: EPIC 201367065 c
Question:
username_0: Recently the planet EPIC 201367065 c was removed, the justification being that it was not in [Montet et al. 2015](http://arxiv.org/abs/1503.07866) (it was not in the previous revision of this either)
Nevertheless that work does describe (in the body text, section 5.3) this system as a 3-planet system, referencing the discovery paper [Crossfield et al. (2015)](http://arxiv.org/abs/1501.03798), even though they do not tabulate the properties of the third planet.
I think that it is a mistake to remove this candidate on the basis that Montet et al. (2015) do not include it in their tables, and the change that removed this planet should be reverted.<issue_closed>
Status: Issue closed |
withfig/fig | 904885731 | Title: Fig is opening Activity Monitor
Question:
username_0: ### Description:
I use `zsh-autosuggestions` in Oh my ZSH to autocomplete past shell commands (it autocompletes the command from my history with the command that has the same prefix).
Screenshot:
<img width="563" alt="Screen Shot 2021-05-28 at 09 51 17" src="https://user-images.githubusercontent.com/67437/119949635-5bbe5080-bf9a-11eb-89ac-9a7cfed7ed9f.png">
At this point, if I press "right arrow" followed by "Enter", the command runs, but Activity Monitor gets open as well. 🤷♂️
<img width="638" alt="Screen Shot 2021-05-28 at 09 51 23" src="https://user-images.githubusercontent.com/67437/119949639-5cef7d80-bf9a-11eb-85e6-0b5e1c71ecf0.png">
This basically made me disable the tool.
Thanks!
### Details:
|macOS|Fig|Shell|
|-|-|-|
|10.15.7|Version 1.0.42 (B207)|/bin/zsh|
<details><summary><code>fig diagnostic</code></summary>
<p>
<pre>Version 1.0.42 (B207)
UserShell: /bin/zsh
Bundle path: /Applications/Fig.app
Autocomplete: true
Settings.json: true
CLI installed: true
CLI tool path: /Users/viktor/.fig/bin/fig
Accessibility: true
Number of specs: 107
SSH Integration: true
Tmux Integration: true
Keybindings path: /Users/viktor/.fig/user/keybindings
iTerm Integration: true
Hyper Integration: false
VSCode Integration: true
Docker Integration: false
Symlinked dotfiles: true
Only insert on tab: false
Installation Script: true
PseudoTerminal Path: <generated dynamically>
SecureKeyboardInput: false
SecureKeyboardProcess: <none>
Current active process: ??? (???) - ???
Current working directory: ???
Current window identifier: ???</pre>
</p>
</details>
Answers:
username_1: This is so strange. To clarify, this only occurs when you first accept a suggestion from `zsh-autosuggestion` and then press enter?
Would you mind sharing a screen recording of this behavior, so I can see if anything stands out?
username_0: Sorry for unnecessary intro/delay in the video:
https://www.dropbox.com/s/ioljlq767emeybm/Screen%20Recording%202021-05-29%20at%2009.18.01.mov?dl=0
All I did was type `cd Dow`, right arrow, enter
<details>
<summary>This is my .zshrc with minimal changes (removed GITHUB_NOTIFICATIONS_TOKEN and comments)</summary>
```sh
export ZSH=$HOME/.oh-my-zsh
ZSH_THEME="viktor"
DISABLE_UPDATE_PROMPT="true"
DISABLE_UNTRACKED_FILES_DIRTY="true"
plugins=(
autojump
zsh-autosuggestions
)
source $ZSH/oh-my-zsh.sh
source $HOME/.rvm/scripts/rvm
source /usr/local/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
eval "$(hub alias -s)"
export PATH="$PATH:$HOME/Dropbox/scripts"
if [ -f /usr/local/bin/trash ]; then alias rm="/usr/local/bin/trash"; fi
. ~/.aliases
export BUNDLER_EDITOR='/Applications/Sublime\ Text.app/Contents/SharedSupport/bin/subl -n -w'
export EDITOR='/Applications/Sublime\ Text.app/Contents/SharedSupport/bin/subl -n -w'
export PATH="/usr/local/opt/node@14/bin:$PATH"
export PATH="/usr/local/sbin:$PATH"
export PATH="$HOME/.rvm/gems/ruby-2.6.6/bin:$PATH"
[ -s ~/.fig/fig.sh ] && source ~/.fig/fig.sh
```
</details>
username_1: Sorry for my delay in responding! This is so bizarre. Would you be open to reinstalling Fig and seeing if this behavior still exists in the newest build (v1.0.45).
I think we'll need to debug this synchronously. Wanna grab some time on my calendar: https://calendly.com/username_2/30min
username_0: None of these times work for me @username_1. :(
The bug is still there in 1.0.45.
This is how I can reproduce the issue every time:
1. Type something that shows up Fig, for example: `cd `
2. Press "Right arrow"; Press "Enter"
username_1: Can you run `fig diagnostic` for me again? There is a chance this could happen if you initially launched the app directly from the DMG, rather than dragging it into /Applications.
Otherwise, we should find a time to chat that works for you because I'm struggling to reproduce this.
username_0: <details>
<summary>~ > fig diagnostic</summary>
```
Version 1.0.45 (B218) [Viktor keyboard layout]
UserShell: /bin/zsh
Bundle path: /Applications/Fig.app
Autocomplete: true
Settings.json: true
CLI installed: true
CLI tool path: /Users/viktor/.fig/bin/fig
Accessibility: true
Number of specs: 133
SSH Integration: true
Tmux Integration: true
Keybindings path: /Users/viktor/.fig/user/keybindings
iTerm Integration: true [Authenticated]
Hyper Integration: false
VSCode Integration: true
Docker Integration: false
Symlinked dotfiles: true
Only insert on tab: false
Installation Script: true
PseudoTerminal Path: /Users/viktor/.rvm/gems/ruby-2.6.6/bin:/usr/local/sbin:/usr/local/opt/node@14/bin:/Users/viktor/.rvm/gems/ruby-2.6.7/bin:/Users/viktor/.rvm/gems/ruby-2.6.7@global/bin:/Users/viktor/.rvm/rubies/ruby-2.6.7/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/viktor/.fig/bin:/Users/viktor/.rvm/bin:/Users/viktor/Dropbox/scripts
SecureKeyboardInput: false
SecureKeyboardProcess: <none>
Current active process: /bin/zsh (55519) - ttys002
Current working directory: /Users/viktor
Current window identifier: 8662/% (com.googlecode.iterm2)
PATH: /Users/viktor/.rvm/gems/ruby-2.6.7/bin:/Users/viktor/.rvm/gems/ruby-2.6.7@global/bin:/Users/viktor/.rvm/rubies/ruby-2.6.7/bin:/usr/local/sbin:/usr/local/opt/node@14/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/viktor/.fig/bin:/Users/viktor/Dropbox/scripts:/Users/viktor/.rvm/bin
FIG_INTEGRATION_VERSION: 3
```
</details>
username_2: Hmm still not sure what's going on. Can you try uninstalling and then reinstalling the app? Also @fwesss might be able to jump on a call with you and help debug!
username_0: I already reinstalled it. I'm on holidays until the end of July. I'd love to jump on a call in August.
username_2: Hey @username_0! Hope your vacation was relaxing! Are you still open to a debugging call with me and @fwesss?
username_0: I'm not sure if the build number changed in the meantime. 🤷♂️
Status: Issue closed
|
xd009642/tarpaulin | 637669885 | Title: export CARGO_HOME inside working directory
Question:
username_0: I'm working on a test where I create an http test server using the "httptest" crate and I create blocking requests to it using "reqwest". The test passes successfully when Cargo home is located in its default location. However, when I alter Cargo home's location by setting the CARGO_HOME environmental variable to a directory inside the working directory (the cargo directory where the source is also located) the reqwest request fails with a timeout and the test fails. "cargo test" works fine btw in both cases.
Answers:
username_1: So I tried setting `CARGO_HOME` on another project and running tarpaulin and it worked without any issue. Could you provide a link to a repo I can reproduce this with?
username_0: Thanks for trying it. I have a created a reprex here: https://github.com/username_0/HttptestAndReqwest
username_1: Awesome, thanks for that I can now reproduce it on my machine :+1: I'll update you when I've made some progress
username_1: Okay, I've cracked this: the CARGO_HOME variable means dependency source code is downloaded into a folder that shows up in the project directory and in the debug info, so tarpaulin identifies it as source to cover. With tons of extra traces, this likely slows the test down to the point where reqwest times out.
I've added home filtering to source analysis and my cargo handling. I just need to add it to the binary instrumentation which is going to be a bit fiddlier, but hopefully I'll have something today.
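Conceptually, the filtering amounts to something like this (a Python sketch of the idea only; tarpaulin itself is written in Rust and the real check lives in its source analysis):

```python
import os
from pathlib import Path

def is_project_source(path: Path, project_root: Path) -> bool:
    """Count a file for coverage only if it is project code rather than a
    dependency unpacked under CARGO_HOME."""
    cargo_home = Path(os.environ.get("CARGO_HOME", Path.home() / ".cargo")).resolve()
    path = path.resolve()
    in_project = project_root.resolve() in path.parents
    in_cargo_home = cargo_home in path.parents
    return in_project and not in_cargo_home
```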
Status: Issue closed
username_1: And it's fixed and tested on your example project! Which is now also part of my integration tests. This is currently in the develop branch if you want to use it right now and it'll be in the next release. |
jlippold/tweakCompatible | 414150897 | Title: `WiiLoveMusic` working on iOS 12.1
Question:
username_0: ```
{
"packageId": "com.spark.wiilovemusic",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.spark.wiilovemusic",
"deviceId": "iPhone10,3",
"url": "http://cydia.saurik.com/package/com.spark.wiilovemusic/",
"iOSVersion": "12.1",
"packageVersionIndexed": true,
"packageName": "WiiLoveMusic",
"category": "Tweaks",
"repository": "OLD - Add https://sparkdev.me",
"name": "WiiLoveMusic",
"installed": "2.0.0-1",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.spark.wiilovemusic",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "Plays wii channel music in your favourite apps!",
"latest": "2.0.0-1",
"author": "Spark",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
department-of-veterans-affairs/va.gov-team | 839727961 | Title: Update Stats for 3/26 Caregiver Meeting
Question:
username_0: Update [this spreadsheet](https://docs.google.com/spreadsheets/d/1OVghvx8yFkOSNnnzpHZulbdDnqQRGohAOr4SELO0L7M/edit#gid=506440641) through 02/26.
Updates include:
- [ ] Attempts
- [ ] Successful
- [ ] Blocked (MPI Search)
- [ ] Failed (Total Errors)
- [ ] Failed (Validation)
- [ ] Failed (Unclassified)
(Page 2)
- [ ] Applications Delivered
- [ ] Secondary Only Applications
- [ ] Attachments Dropped<issue_closed>
Status: Issue closed |
legrego/homeassistant-elasticsearch | 611131011 | Title: failed to index - field name cannot be an empty string
Question:
username_0: **Environment**
Home-Assistant version: 0.108.9
Elasticsearch version: 0.2.2
Relevant `configuration.yml` settings:
```yml
elastic:
# URL should point to your Elasticsearch cluster
url: http://xxxxx:9200
publish_frequency: 2
only_publish_changed: true
#rollover_max_age: 1d
ilm_max_size: 50mb
ilm_delete_after: 30d
exclude:
entities: [ 'sensor.es_publish_queue' ]
domains: [ 'sun' ]
```
**Describe the bug**
Getting the following errors in log. I'm using the Blue Iris community integration.
```
Logger: custom_components.elastic
Source: custom_components/elastic/__init__.py:512
First occurred: 1:15:23 AM (16 occurrences)
Last logged: 1:45:09 AM
Error publishing documents to Elasticsearch: ('2 document(s) failed to index.', [{'index': {'_index': 'hass-events-v4_1-000001', '_type': '_doc', '_id': 'q3NX1HEBdrLqniB6HPoW', 'status': 400, 'error': {'type': 'mapper_parsing_exception', 'reason': 'failed to parse', 'caused_by': {'type': 'illegal_argument_exception', 'reason': 'field name cannot be an empty string'}}, 'data': {'hass.domain': 'binary_sensor', 'hass.object_id': '172_20_0_160_alerts', 'hass.object_id_lower': '172_20_0_160_alerts', 'hass.entity_id': 'binary_sensor.172_20_0_160_alerts', 'hass.entity_id_lower': 'binary_sensor.172_20_0_160_alerts', 'hass.attributes': {'friendly_name': '1192.168.127.12 Alerts', 'Motion': '172.20.0.160 Frontyard-PTZ-cam24 Motion, 172.20.0.160 frontDoor-cam12 Motion', '': '172.20.0.160 Alerts'}, 'hass.value': 1, '@timestamp': datetime.datetime(2020, 5, 2, 7, 43, 29, 756252, tzinfo=<UTC>), 'agent.name': 'My Home Assistant', 'agent.type': 'hass', 'agent.version': '0.108.9', 'ecs.version': '1.0.0', 'host.geo.location': {'lat': 9999, 'lon': -9999}, 'host.architecture': 'x86_64', 'host.os.name': 'Linux', 'host.hostname': 'homeassistant', 'tags': ['hass']}}}, {'index': {'_index': 'hass-events-v4_1-000001', '_type': '_doc', '_id': 'rXNX1HEBdrLqniB6HPoW', 'status': 400, 'error': {'type': 'mapper_parsing_exception', 'reason': 'failed to parse', 'caused_by': {'type': 'illegal_argument_exception', 'reason': 'field name cannot be an empty string'}}, 'data': {'hass.domain': 'binary_sensor', 'hass.object_id': '172_20_0_160_alerts', 'hass.object_id_lower': '172_20_0_160_alerts', 'hass.entity_id': 'binary_sensor.172_20_0_160_alerts', 'hass.entity_id_lower': 'binary_sensor.172_20_0_160_alerts', 'hass.attributes': {'friendly_name': '172.20.0.160 Alerts', 'Motion': '172.20.0.160 Frontyard-PTZ-cam24 Motion, 172.20.0.160 driveway-north-cam11 Motion, 172.20.0.160 frontDoor-cam12 Motion', '': '172.20.0.160 Alerts'}, 'hass.value': 1, '@timestamp': datetime.datetime(2020, 5, 2, 7, 43, 30, 412699, tzinfo=<UTC>), 'agent.name': 'My Home Assistant', 'agent.type': 'hass', 'agent.version': '0.108.9', 'ecs.version': '1.0.0', 'host.geo.location': {'lat': 9999, 'lon': -9999}, 'host.architecture': 'x86_64', 'host.os.name': 'Linux', 'host.hostname': 'homeassistant', 'tags': ['hass']}}}])
Error publishing documents to Elasticsearch: ('1 document(s) failed to index.', [{'index': {'_index': 'hass-events-v4_1-000001', '_type': '_doc', '_id': 'u3NX1HEBdrLqniB6I_r_', 'status': 400, 'error': {'type': 'mapper_parsing_exception', 'reason': 'failed to parse', 'caused_by': {'type': 'illegal_argument_exception', 'reason': 'field name cannot be an empty string'}}, 'data': {'hass.domain': 'binary_sensor', 'hass.object_id': '172_20_0_160_alerts', 'hass.object_id_lower': '172_20_0_160_alerts', 'hass.entity_id': 'binary_sensor.172_20_0_160_alerts', 'hass.entity_id_lower': 'binary_sensor.172_20_0_160_alerts', 'hass.attributes': {'friendly_name': '172.20.0.160 Alerts', 'Motion': '172.20.0.160 Frontyard-PTZ-cam24 Motion, 172.20.0.160 driveway-north-cam11 Motion, 172.20.0.160 frontDoor-cam12 Motion, 172.20.0.160 sideGate-cam15 Motion', '': '172.20.0.160 Alerts'}, 'hass.value': 1, '@timestamp': datetime.datetime(2020, 5, 2, 7, 43, 32, 780605, tzinfo=<UTC>), 'agent.name': 'My Home Assistant', 'agent.type': 'hass', 'agent.version': '0.108.9', 'ecs.version': '1.0.0', 'host.geo.location': {'lat': 9999, 'lon': -9999}, 'host.architecture': 'x86_64', 'host.os.name': 'Linux', 'host.hostname': 'homeassistant', 'tags': ['hass']}}}])
Error publishing documents to Elasticsearch: ('2 document(s) failed to index.', [{'index': {'_index': 'hass-events-v4_1-000001', '_type': '_doc', '_id': 'CHNY1HEBdrLqniB6D_7K', 'status': 400, 'error': {'type': 'mapper_parsing_exception', 'reason': 'failed to parse', 'caused_by': {'type': 'illegal_argument_exception', 'reason': 'field name cannot be an empty string'}}, 'data': {'hass.domain': 'binary_sensor', 'hass.object_id': '172_20_0_160_alerts', 'hass.object_id_lower': '172_20_0_160_alerts', 'hass.entity_id': 'binary_sensor.172_20_0_160_alerts', 'hass.entity_id_lower': 'binary_sensor.172_20_0_160_alerts', 'hass.attributes': {'friendly_name': '172.20.0.160 Alerts', 'Motion': '172.20.0.160 Frontyard-PTZ-cam24 Motion, 172.20.0.160 driveway-north-cam11 Motion, 172.20.0.160 sideGate-cam15 Motion', '': '172.20.0.160 Alerts'}, 'hass.value': 1, '@timestamp': datetime.datetime(2020, 5, 2, 7, 44, 32, 248915, tzinfo=<UTC>), 'agent.name': 'My Home Assistant', 'agent.type': 'hass', 'agent.version': '0.108.9', 'ecs.version': '1.0.0', 'host.geo.location': {'lat': 9999, 'lon': -9999}, 'host.architecture': 'x86_64', 'host.os.name': 'Linux', 'host.hostname': 'homeassistant', 'tags': ['hass']}}}, {'index': {'_index': 'hass-events-v4_1-000001', '_type': '_doc', '_id': 'CnNY1HEBdrLqniB6D_7K', 'status': 400, 'error': {'type': 'mapper_parsing_exception', 'reason': 'failed to parse', 'caused_by': {'type': 'illegal_argument_exception', 'reason': 'field name cannot be an empty string'}}, 'data': {'hass.domain': 'binary_sensor', 'hass.object_id': '172_20_0_160_alerts', 'hass.object_id_lower': '172_20_0_160_alerts', 'hass.entity_id': 'binary_sensor.172_20_0_160_alerts', 'hass.entity_id_lower': 'binary_sensor.172_20_0_160_alerts', 'hass.attributes': {'friendly_name': '172.20.0.160 Alerts', 'Motion': '172.20.0.160 Frontyard-PTZ-cam24 Motion, 172.20.0.160 driveway-north-cam11 Motion', '': '172.20.0.160 Alerts'}, 'hass.value': 1, '@timestamp': datetime.datetime(2020, 5, 2, 7, 44, 32, 577103, tzinfo=<UTC>), 'agent.name': 'My Home Assistant', 'agent.type': 'hass', 'agent.version': '0.108.9', 'ecs.version': '1.0.0', 'host.geo.location': {'lat': 9999, 'lon': -9999}, 'host.architecture': 'x86_64', 'host.os.name': 'Linux', 'host.hostname': 'homeassistant', 'tags': ['hass']}}}])
Error publishing documents to Elasticsearch: ('1 document(s) failed to index.', [{'index': {'_index': 'hass-events-v4_1-000001', '_type': '_doc', '_id': 'FnNY1HEBdrLqniB6F_61', 'status': 400, 'error': {'type': 'mapper_parsing_exception', 'reason': 'failed to parse', 'caused_by': {'type': 'illegal_argument_exception', 'reason': 'field name cannot be an empty string'}}, 'data': {'hass.domain': 'binary_sensor', 'hass.object_id': '172_20_0_160_alerts', 'hass.object_id_lower': '172_20_0_160_alerts', 'hass.entity_id': 'binary_sensor.172_20_0_160_alerts', 'hass.entity_id_lower': 'binary_sensor.172_20_0_160_alerts', 'hass.attributes': {'friendly_name': '172.20.0.160 Alerts', 'Motion': '172.20.0.160 driveway-north-cam11 Motion', '': '172.20.0.160 Alerts'}, 'hass.value': 1, '@timestamp': datetime.datetime(2020, 5, 2, 7, 44, 33, 739323, tzinfo=<UTC>), 'agent.name': 'My Home Assistant', 'agent.type': 'hass', 'agent.version': '0.108.9', 'ecs.version': '1.0.0', 'host.geo.location': {'lat': 9999, 'lon': -9999}, 'host.architecture': 'x86_64', 'host.os.name': 'Linux', 'host.hostname': 'homeassistant', 'tags': ['hass']}}}])
Error publishing documents to Elasticsearch: ('1 document(s) failed to index.', [{'index': {'_index': 'hass-events-v4_1-000001', '_type': '_doc', '_id': '0nNY1HEBdrLqniB6nf9t', 'status': 400, 'error': {'type': 'mapper_parsing_exception', 'reason': 'failed to parse', 'caused_by': {'type': 'illegal_argument_exception', 'reason': 'field name cannot be an empty string'}}, 'data': {'hass.domain': 'binary_sensor', 'hass.object_id': '172_20_0_160_alerts', 'hass.object_id_lower': '172_20_0_160_alerts', 'hass.entity_id': 'binary_sensor.172_20_0_160_alerts', 'hass.entity_id_lower': 'binary_sensor.172_20_0_160_alerts', 'hass.attributes': {'friendly_name': '172.20.0.160 Alerts', '': '172.20.0.160 Alerts'}, 'hass.value': 1, '@timestamp': datetime.datetime(2020, 5, 2, 7, 45, 8, 150655, tzinfo=<UTC>), 'agent.name': 'My Home Assistant', 'agent.type': 'hass', 'agent.version': '0.108.9', 'ecs.version': '1.0.0', 'host.geo.location': {'lat': 9999, 'lon': -9999}, 'host.architecture': 'x86_64', 'host.os.name': 'Linux', 'host.hostname': 'homeassistant', 'tags': ['hass']}}}])
Traceback (most recent call last):
File "/config/custom_components/elastic/__init__.py", line 512, in do_publish
bulk_response = bulk(self._gateway.get_client(), actions)
File "/usr/local/lib/python3.7/site-packages/elasticsearch/helpers/__init__.py", line 257, in bulk
for ok, item in streaming_bulk(client, actions, *args, **kwargs):
File "/usr/local/lib/python3.7/site-packages/elasticsearch/helpers/__init__.py", line 192, in streaming_bulk
raise_on_error, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/elasticsearch/helpers/__init__.py", line 137, in _process_bulk_chunk
raise BulkIndexError('%i document(s) failed to index.' % len(errors), errors)
elasticsearch.helpers.BulkIndexError: ('1 document(s) failed to index.'
```
**To Reproduce**
Install Blue Iris community integration (version d2ba3fc)
Install Elastic Search community integration.
Answers:
username_1: Hey @username_0, thanks for the bug report. Did you happen to see this on any earlier versions of this component, or is `0.2.2` the first version that you tried?
username_0: I think the issue is with the latest version of the Blue Iris integration... It seems to have created a bunch of invalid/unavailable sensors. Here is what it's showing:

I rolled back to an older version of the Blue Iris integration and it seems to be working. I'll leave this issue open for a bit until I can confirm.
username_0: Still occurring with older version of Blue Iris integration, but looking at the attributes, it does seem like the key is missing. The UI shows: `: BlueIris Alerts`. Once a stable version of Blue Iris is released, if the problem continues, I'll open an issue on the Blue Iris integration about the missing key in the attribute.
```
Error publishing documents to Elasticsearch: ('1 document(s) failed to index.', [{'index': {'_index': 'hass-events-v4_1-000001', '_type': '_doc', '_id': 'eYE72HEBdrLqniB6KSmU', 'status': 400, 'error': {'type': 'mapper_parsing_exception', 'reason': 'failed to parse', 'caused_by': {'type': 'illegal_argument_exception', 'reason': 'field name cannot be an empty string'}}, 'data': {'hass.domain': 'binary_sensor', 'hass.object_id': 'blueiris_alerts', 'hass.object_id_lower': 'blueiris_alerts', 'hass.entity_id': 'binary_sensor.blueiris_alerts', 'hass.entity_id_lower': 'binary_sensor.blueiris_alerts', 'hass.attributes': {'friendly_name': 'BlueIris Alerts', 'Motion': 'BlueIris back-porch-cam7 Motion', '': 'BlueIris Alerts'}, 'hass.value': 1, '@timestamp': datetime.datetime(2020, 5, 3, 1, 51, 28, 30219, tzinfo=<UTC>), 'agent.name': 'My Home Assistant', 'agent.type': 'hass', 'agent.version': '0.108.9', 'ecs.version': '1.0.0', 'host.geo.location': {'lat': 9999, 'lon': -9999}, 'host.architecture': 'x86_64', 'host.os.name': 'Linux', 'host.hostname': 'homeassistant', 'tags': ['hass']}}}])
Error publishing documents to Elasticsearch: ('1 document(s) failed to index.', [{'index': {'_index': 'hass-events-v4_1-000001', '_type': '_doc', '_id': 'KoE72HEBdrLqniB6tivx', 'status': 400, 'error': {'type': 'mapper_parsing_exception', 'reason': 'failed to parse', 'caused_by': {'type': 'illegal_argument_exception', 'reason': 'field name cannot be an empty string'}}, 'data': {'hass.domain': 'binary_sensor', 'hass.object_id': 'blueiris_alerts', 'hass.object_id_lower': 'blueiris_alerts', 'hass.entity_id': 'binary_sensor.blueiris_alerts', 'hass.entity_id_lower': 'binary_sensor.blueiris_alerts', 'hass.attributes': {'friendly_name': 'BlueIris Alerts', '': 'BlueIris Alerts'}, 'hass.value': 1, '@timestamp': datetime.datetime(2020, 5, 3, 1, 52, 4, 215443, tzinfo=<UTC>), 'agent.name': 'My Home Assistant', 'agent.type': 'hass', 'agent.version': '0.108.9', 'ecs.version': '1.0.0', 'host.geo.location': {'lat': 9999, 'lon': -9999}, 'host.architecture': 'x86_64', 'host.os.name': 'Linux', 'host.hostname': 'homeassistant', 'tags': ['hass']}}}])
```

username_1: Good to know it’s a bug on the Blue Iris side, thanks for taking the time to research this for me.
Nevertheless, we can make this component more resilient to bad data by simply skipping these invalid attributes.
username_0: I agree on making it more resilient to bad data. However, to help track down bad data, would it be possible to change an empty key to something like "INVALID_KEY" and keep the value? This might be helpful for integration authors, letting them know they have invalid data.
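For illustration, a minimal sketch of that sanitization idea (Python; names are illustrative, not the component's actual code):

```python
def sanitize_attributes(attributes: dict) -> dict:
    """Rename empty attribute keys so Elasticsearch accepts the document,
    while keeping the value visible so bad source data can be tracked down."""
    clean = {}
    for key, value in attributes.items():
        # An empty key ("") makes Elasticsearch reject the whole document.
        clean[key if key != "" else "INVALID_KEY"] = value
    return clean
```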
username_1: Yeah that’s certainly doable too. I’m not opposed to that approach
Status: Issue closed
|
GothamElections2017/RandomThoughts | 268468678 | Title: Solar Farm Powers EPA Environmental Center https://t.co/wJUQO0mfnh
Question:
username_0: Tweet by <NAME> (@Ge_Dawn_Granger):
> Solar Farm Powers EPA Environmental Center https://t.co/wJUQO0mfnh
(https://twitter.com/Ge_Dawn_Granger/status/923227470467338240, October 25, 2017 at 04:39PM, via Twitter) |
serverless/components | 792878563 | Title: Serverless component not deploying
Question:
username_0: I am trying to deploy some components but I get the same error every time I execute `severless` command:
```
TypeError: Cannot redefine property: router
    at Function.defineProperty (<anonymous>)
    at Function.defaultConfiguration (/home/username_0/.serverless/components/registry/npm/[email protected]/node_modules/express/lib/application.js:122:10)
    at Function.init (/home/username_0/.serverless/components/registry/npm/[email protected]/node_modules/express/lib/application.js:62:8)
    at Template.load (/home/username_0/advtechdev/towing-app-serverless/node_modules/@serverless/core/src/Component.js:116:34)
    at async fn (/home/username_0/advtechdev/towing-app-serverless/node_modules/@serverless/template/utils.js:272:25)
    at async Promise.all (index 0)
    at async executeGraph (/home/username_0/advtechdev/towing-app-serverless/node_modules/@serverless/template/utils.js:294:3)
    at async Template.default (/home/username_0/advtechdev/towing-app-serverless/node_modules/@serverless/template/serverless.js:67:38)
    at async Object.runComponents (/home/username_0/advtechdev/towing-app-serverless/node_modules/@serverless/cli/src/index.js:220:17)
```

```yaml
# serverless.yml
website:
  component: "@sls-next/serverless-component"

api:
  component: express
  inputs:
    src:
      src: ./api
      hook: npm run build
      dist: build
```
Answers:
username_1: @username_0 it seems that you are using a very old version of components, which is no longer maintained. Here are the latest docs on the express component using the latest version of components:
https://github.com/serverless-components/express
username_0: I have only copied the way is explained in the documentation of @sls-next/serverless-component
https://www.serverless.com/plugins/serverless-nextjs-plugin/
Thanks for answering
username_2: @username_0 Did you find a way to make `@sls-next/serverless-component` work with the new syntax? I am facing the exact same issue and I can't find anywhere a way where it is explained how to use the Next component with the new Serverless Components syntax
Status: Issue closed
|
mihaimaruseac/HaCoTeB | 613433 | Title: Change the way section headers are treated
Question:
username_0: Right now, section headers are stripped without looking at them. They can provide valuable options and arguments with the highest priority. Using them can totally change the way a specific parser is used (think parser selection inside a parser).<issue_closed>
Status: Issue closed |
mediaelement/mediaelement-plugins | 294004071 | Title: Getting an error while using this player
Question:
username_0: Hi
Thanks For Great work but one issue im getting right now and i dont know what causes it
here an screenshot of chrome console
<img src="https://i.imgur.com/Fk48WfB.jpg"></img>
Can You check and tell what can cause this issue
Answers:
username_0: Here is another one: on the frontend it adds a black frame to the site
<img src="https://i.imgur.com/ouVlfye.jpg"></img>
username_0: This script is in the footer and the CSS in the head.
css
`<link rel="stylesheet" type="text/css" href="https://cdn.jsdelivr.net/npm/[email protected]/build/mediaelementplayer.min.css">`
```
<script src="https://cdn.jsdelivr.net/npm/[email protected]/build/mediaelement-and-player.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/build/renderers/dailymotion.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/build/renderers/facebook.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/build/renderers/soundcloud.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/build/renderers/twitch.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/build/renderers/vimeo.min.js"></script>
<script>
jQuery(function ($) {
$('video, audio, iframe').mediaelementplayer({
stretching: 'responsive'
});
});
</script>
```
username_1: It's tough to tell without seeing your `video` tag.
username_0: im using iframe of YouTube video
username_0: @username_1
Hi This code
<script>
jQuery(function ($) {
$('video, audio, iframe').mediaelementplayer({
stretching: 'responsive'
});
});
</script>
Create the whole mess I use it to add support to youtube and other site on their iframe
How can i only add support to youtube and dailymotion ans other site
instead of adding player to simply any things that has iframe tag like Google ads jetpack-comment and Facebook page box |
AqlaSolutions/runsharp | 104449446 | Title: Instance calls in valuetype
Question:
username_0: ```
I've tried to generate calls to methods of value type.
What I've seen is it was not emitting "Constrained" before "Callvirt" for
instance value type methods. I've fixed it to use "Call" in this case.
Another solution is fixing condition of emitting "Constrained" but "Callvirt"
is not needed to be emitted for value types (exclusion for base type virtual
methods).
Vlad
<EMAIL>
```
Original issue reported on code.google.com by `<EMAIL>` on 30 Nov 2012 at 4:52
Attachments:
* [fixed_codegen.helpers.txt](https://storage.googleapis.com/google-code-attachments/runsharp/issue-26/comment-0/fixed_codegen.helpers.txt)<issue_closed>
Status: Issue closed |
anntzer/defopt | 1110914275 | Title: feature request: better support for options that take multiple arguments and have a default
Question:
username_0: It seems like right now, the best way to achieve something like this (while also ensuring the default shown in the help string is correct) is `def main(*, nums: List[int] = [1, 2, 3])`, which isn't ideal for its use of a mutable default argument. Maybe adding support for variable-length tuples (e.g., `def main(*, nums: Tuple[int, ...] = (1, 2, 3))` would solve this.
Answers:
username_1: I think you can use `Iterable[int]` instead?
Supporting `tuple[int, ...]` would be nice too, let's say that I could consider a PR adding that :)
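For reference, a minimal sketch of the `Iterable` workaround (assuming defopt's usual Sphinx-style docstring handling; the script is illustrative):

```python
from typing import Iterable

import defopt


def main(*, nums: Iterable[int] = (1, 2, 3)):
    """Demo command.

    :param nums: numbers to use
    """
    print(list(nums))


if __name__ == "__main__":
    defopt.run(main)  # exposes --nums with an immutable tuple default
```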
username_0: Ahh yeah didn't see that there was already support for some generic types. Thanks! I can also look into adding Ellipsis support.
Status: Issue closed
|
dmwm/AsyncStageout | 68362908 | Title: Validate of ASO 103pre8
Question:
username_0: The release notes can be found here https://github.com/dmwm/AsyncStageout/releases/tag/1.0.3pre8
Answers:
username_0: @username_1 news? do you have a timescale to get this validated?
username_1: Monitor still has this problem with creating directory Monitor/work/ directory.
Full validation cycle will start together with CMSWeb validation, which should be deployed tomorrow.
username_0: pls could you fix 1) since @dciangot is not able to reproduce the issue. For 2) ok for me even if this ASO release is not tight to CMSWEB deployment cycle. But I understand that you want to validate everything together.
Status: Issue closed
username_0: Done. |
MicrosoftDocs/azure-docs | 549649384 | Title: Error [400] REST protocol for Azure ressource Tokens
Question:
username_0: # Getting SQL resource token
Hello, I am currently trying to obtain an Azure SQL token to authenticate my web app to an Azure SQL server via managed identities. The runtime I'm using is PHP.
This is the cURL request (http request) I'm sending :
```php
# Environment variable
$env_url = $_ENV["MSI_ENDPOINT"];
$env_secret = $_ENV["MSI_SECRET"];
# Resource
$resource = "https://database.windows.net";
# Curl HTTP request
$curl = curl_init();
# Curl parameters
curl_setopt_array($curl, array(
CURLOPT_URL => $env_url."?resource=$resource&api-version=2017-09-01",
CURLOPT_HEADER => true,
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_RETURNTRANSFER => true,
CURLOPT_ENCODING => "",
CURLOPT_MAXREDIRS => 10,
CURLOPT_TIMEOUT => 30,
CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
CURLOPT_CUSTOMREQUEST => "GET",
CURLOPT_HTTPHEADER => array(
"Secret : $env_secret"
)
));
# result from request
$response = curl_exec($curl);
$err = curl_error($curl);
$httpcode = curl_getinfo($curl, CURLINFO_HTTP_CODE);
# close connection
curl_close($curl);
```
The response I get from this API call is the following :
```
"HTTP/1.1 400 Bad Request
Connection: close
Date: Tue, 14 Jan 2020 15:23:23 GMT
Server: Kestrel
Content-Length: 0"
```
I have tried to print every possible variable to check if they are in order, which they are.
I have absolutely no clue what is going on here, thank you for your help.
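For reference, the same token request expressed in Python (a sketch of the documented protocol; note the header name must be exactly `Secret`, whereas the cURL above sends `"Secret : ..."` with a space inside the header name, which can by itself produce a 400):

```python
import os

import requests

resp = requests.get(
    os.environ["MSI_ENDPOINT"],
    params={
        "resource": "https://database.windows.net/",
        "api-version": "2017-09-01",
    },
    headers={"Secret": os.environ["MSI_SECRET"]},  # exact header name, no spaces
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
```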
Answers:
username_1: @username_0 To better assist you, can you please share the Azure document link / URL for which this feedback is applicable to ?
username_0: https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity?tabs=powershell#obtaining-tokens-for-azure-resources
username_1: @username_0 Thanks for the Azure document URL. We are actively investigating and will get back to you shortly with an update.
Status: Issue closed
|
Code4SocialGood/c4sg-services | 241815901 | Title: My Projects: "Close" Option enabled for already closed projects
Question:
username_0: 1. Login as nonprofit user: <EMAIL> / Opensource5social!
2. Click on MY Projects.
3. Select a Closed Project.
Expected Result: The user should not be able to close an already closed project. The "Close" option should be disabled.
Actual Result: The user is able to close already closed projects again and again.

Status: Issue closed
Answers:
username_1: Move to frontend repository |
magicDGS/ReadTools | 331117731 | Title: Non-file FASTQ inputs does fail as inputs
Question:
username_0: The current implementation in our reader factory uses `File.toPath` for opening FASTQ files, but this will fail with non-local `java.nio.Path` (e.g., HDFS). This should be fixed to open the `Path` as a stream instead of using the default `FastqReader` constructor.
Answers:
username_0: Actually this is a bug that should be fixed.
Status: Issue closed
|
RECETOX/galaxytools | 737080891 | Title: Add wrapper for RAMClustR
Question:
username_0: Add a galaxy tool wrapping RAMClustR.
Answers:
username_1: I made PRs for conda packaging of ramclustr (https://github.com/bioconda/bioconda-recipes/pull/25270) and its last non-packaged dependency (https://github.com/bioconda/bioconda-recipes/pull/25267). When they get merged and built we should switch our wrappers to use them instead of the container.
username_0: The container installs directly from github, so it's also not a bad solution IMHO.
username_1: If the overhead of maintaining a container can be avoided I'd go for conda. This also gives us very lightweight biocontainers/singularity images for free.
username_0: This is definitely true and also takes away the burden of versioning ;)
username_0: @username_1 I tried switching to using the bioconda package by changing the requirement from
```xml
<requirements>
<container type="docker">recetox/ramclustr:1.1.0-recetox0</container>
</requirements>
```
to
```xml
<requirements>
<requirement type="package" version="1.09">r-ramclustr</requirement>
<requirement type="package" version="3.12.0">bioconductor-xcms</requirement>
</requirements>
```
but I get the following error: `Conda dependency seemingly installed but failed to build job environment.`
Any ideas?
username_1: @username_0 Where and when are you getting this error? Your local deployment? What environment?
username_0: I get it on local planemo deployment when trying to run the tool.
username_1: I can get `conda create env -n ramclustr r-ramclustr` locally to create fine - can you try that?
username_0: Nope, I get an error using miniconda `conda: error: unrecognized arguments: r-ramclustr` both locally (Windows Laptop using miniconda3) and on abiff machine.
username_1: oh sorry, typo in env creation, correct is:`conda create -n ramclustr r-ramclustr` (it assumes you have bioconda and conda-forge in your channels)
This works on abiff for me.
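For completeness, a minimal channel setup on a fresh conda install would be (standard bioconda ordering, lowest priority first):
```
conda config --add channels defaults
conda config --add channels bioconda
conda config --add channels conda-forge
conda create -n ramclustr r-ramclustr
```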
username_0: Same. Can you maybe check out the PR branch and just run `planemo serve --docker .` in the tools/ramclustr subfolder, with the modified requirements part in the xml as given above, to see if you can reproduce the error?
username_1: @username_0 There seem to be some package conflicts on dependency installation. I am not sure what Planemo is doing differently. Feel free to continue testing with your container for the time being.
username_0: Thank you for investigating!
username_1: The initial implementation has been merged in https://github.com/RECETOX/galaxytools/pull/54
username_0: Updated subtasks and renamed issue
Status: Issue closed
|
Milad-Akarie/auto_route_library | 718006719 | Title: When trying to reload the whole app that exception keeps throwing 'observer.navigator == null': is not true.
Question:
username_0: #101 When trying to reload the whole app that exception keeps throwing 'observer.navigator == null': is not true.
Answers:
username_1: @username_0 How do you reload the App?
username_2: This is my router
```
@MaterialAutoRouter(
routes: <AutoRoute>[
CupertinoRoute(page: '/', initial: true),
CustomRoute(
path: '/main',
page: MainScreen,
),
]
)
```
I have Nested Expanded Navigation on MainScreen.
```
final homeNestedRouter = ExtendedNavigator<HomePageRouter>(
router: HomePageRouter(),
name: NestedRouterName.HOME,
observers: [homeRouterObserver],
);
final loungeNestedRouter = ExtendedNavigator<LoungePageRouter>(
router: LoungePageRouter(),
name: NestedRouterName.LOUNGE,
observers: [loungeRouterObserver],
);
final mypageNestedRouter = ExtendedNavigator<MyPagePageRouter>(
router: MyPagePageRouter(),
name: NestedRouterName.MYPAGE,
observers: [mypageRouterObserver],
);
```
My nested routers are static.
To log out I reset Get.instance
and inject the dependencies again.
After that I call `runApp(MyApp())` and go to `/`.
When I log in again and go to the main screen
`observer.navigator == null': is not true.`
Is there something I am using incorrectly..?
username_2: @username_1
Thank you Milad!
username_1: Could you show me your complete router setup?
The static setup
username_2: My Router Setting
```
final appNavigator = ExtendedNavigator<BleetRouter>(
navigatorKey: Router.key,
router: BleetRouter(),
guards: [DefaultGuard()],
observers: <NavigatorObserver>[appRouterObserver],
);
final homeNestedRouter = ExtendedNavigator<HomePageRouter>(
router: HomePageRouter(),
name: NestedRouterName.HOME,
observers: [homeRouterObserver],
);
final loungeNestedRouter = ExtendedNavigator<LoungePageRouter>(
router: LoungePageRouter(),
name: NestedRouterName.LOUNGE,
observers: [loungeRouterObserver],
);
final mypageNestedRouter = ExtendedNavigator<MyPagePageRouter>(
router: MyPagePageRouter(),
name: NestedRouterName.MYPAGE,
observers: [mypageRouterObserver],
);
class NestedRouterName {
static const HOME = 'homeRouter';
static const LOUNGE = 'loungeRouter';
static const MYPAGE = 'mypageRouter';
static const DEV = 'devUsersRouter';
}
@MaterialAutoRouter(
routes: <AutoRoute>[
CupertinoRoute(path: '/', page: SplashScreen, initial: true),
CustomRoute(
path: '/main',
page: MainScreen,
),
CustomRoute(
path: '/wellcom',
page: WellcomScreen,
),
CustomRoute(
path: '/auth/check',
page: AuthCheckScreen,
guards: [DefaultGuard],
),
CustomRoute(
path: '/auth/social',
page: AuthSocialScreen,
transitionsBuilder: TransitionsBuilders.slideLeft,
),
CustomRoute(
path: '/auth/phone',
page: AuthPhoneVerifyScreen,
transitionsBuilder: TransitionsBuilders.slideLeft,
),
CustomRoute(
path: '/home',
page: HomePage,
[Truncated]
MypageTabPage
```
class MyPagePage extends BaseView {
@override
List<ViewModelContainer<BaseViewModel>> buildViewModels() => [];
@override
Widget buildChild(BuildContext context) {
return Scaffold(
body: Container(
width: double.infinity,
color: AppColors.WHITE,
child: mypageNestedRouter,
),
);
}
}
```
This is all..!!
username_3: Try setting your AppRouter() as `static`.
``` dart
class AppWidget extends StatelessWidget {
static final _rootRouter = AppRouter();
@override
  Widget build(BuildContext context) { ...... }
}
```
bertramdev/asset-pipeline | 192491921 | Title: Gradle plugin: different configurations for different directories
Question:
username_0: I want to set properties such as skipNonDigests, enableSourceMaps, etc. to different values depending on which directory the processed files come from. Is there a way to do that?
Answers:
username_1: The only way to do that is to use separate projects in Gradle at this time. You should almost always use digested asset names anyway, as it's silly not to these days. sourceMaps I could see being easily modified, compared to the other options.
username_0: Thanks for clearing that up. I kind of supposed that it works this way. To clarify, my case was that I had a mix of resources, some of which were external libraries such as ckeditor, which has its own way to manage versions, all residing in one directory. In my war file I wanted to have _only_ digested versions of some of the resources and _only_ original, non-digested versions of the others, e.g. ckeditor. I solved the problem by setting `skipNonDigests = false` and then excluding the unwanted files in `war.eachFile`.
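For reference, a rough sketch of that workaround (the ckeditor path check and the MD5 digest pattern are illustrative, not taken from the actual build):
```groovy
// build.gradle
assets {
    // keep both digested and non-digested copies in the build ...
    skipNonDigests = false
}

war {
    eachFile { details ->
        // ... then drop the digested copies of ckeditor files, since
        // ckeditor manages its own versioning.
        if (details.path.contains('ckeditor') && details.name =~ /-[0-9a-f]{32}\./) {
            details.exclude()
        }
    }
}
```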
Status: Issue closed
|
jlippold/tweakCompatible | 747906409 | Title: `RounderLS` working on iOS 13.5
Question:
username_0: ```
{
"packageId": "com.azzou.rounderls",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.azzou.rounderls",
"deviceId": "iPhone10,6",
"url": "http://cydia.saurik.com/package/com.azzou.rounderls/",
"iOSVersion": "13.5",
"packageVersionIndexed": false,
"packageName": "RounderLS",
"category": "Tweaks",
"repository": "Azzou's repo",
"name": "RounderLS",
"installed": "1.3-2",
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.azzou.rounderls",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "A little tweak to make your LockScreen rounder !",
"latest": "1.3-2",
"author": "Azzou",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
``` |
facebook/react | 408546380 | Title: An update to the state from `useState` is not registered in event handler `onTransitionEnd`
Question:
username_0: **Do you want to request a *feature* or report a *bug*?**
Bug
**What is the current behavior?**
Updating the state from `useState` doesn't actually update the component. In this case it is being updated in an event handler. I used `console.log` to verify that the handler is being called, yet no update to the component is being dispatched. It's like React doesn't register that it wants to update the state.
Might want to throw out that I'm new with this React Hooks and it could be something that I'm missing.
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem. Your bug will get fixed much faster if we can run your code and it doesn't have dependencies other than React.**
I've setup a simple example showcasing my issue, I also included an implementation with the old class syntax and when doing it with classes, it works fine. [Here is the link to the CodeSandbox](https://codesandbox.io/s/mjz3o5xnly)
This example is taken from my project, I'm fading out and translating upwards so I want to keep the text until the animation is done.
**What is the expected behavior?**
The component should update with the new registered state when trying to update state in a event handler.
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
React v16.8
React DOM v16.8
Answers:
username_1: When you include an example, please write the exact reproduction instructions. It's never obvious what an example is supposed to show, what the expected behavior is, and where it deviates from the actual behavior. A list of steps to do would be very helpful!
username_0: Apologies,
*I've updated the example as well to make it easy to see the problem (hopefully)*
When the page is loaded there is no message so the `No message` text is displayed. When clicking on the button the message becomes `My message`. When clicking again then the text should become `No message` after it has transitioned but it doesn't.
The console output can also be checked; the actual output after clicking the toggle-message button twice and waiting for the transition to end is:
```
Console was cleared
Updated Object {currentMsg: null, stagedMsg: null}
Updated Object {currentMsg: "My message", stagedMsg: "My message"}
Updated Object {currentMsg: "My message", stagedMsg: null}
handleTransitionEnd Object {currentMsg: "My message", stagedMsg: null}
```
The expected output is:
```
Console was cleared
Updated Object {currentMsg: null, stagedMsg: null}
Updated Object {currentMsg: "My message", stagedMsg: "My message"}
Updated Object {currentMsg: "My message", stagedMsg: null}
handleTransitionEnd Object {currentMsg: "My message", stagedMsg: null}
Updated Object {currentMsg: null, stagedMsg: null}
```
The reason why I'm expecting another update is because I'm updating the state in the transition handler.
username_1: Aren't you resetting it here?
```js
if (stagedMsg && stagedMsg !== currentMsg) setCurrentMsg(stagedMsg);
```
So even if `currentMsg` is `null`, it gets reset back to `stagedMsg`.
username_0: It checks if the prop (aka `stagedMsg`) is not null, so it doesn't get fired. The purpose of this is for two use cases:
- When currently no message is being displayed and the `stagedMsg` changes to a message, we want to update it immediately.
- When a message is currently displayed and want to change it to another message (e.g. `My message` to `My other message`) then we do that immediately as well.
Checked just in case by adding a console.log to see when it get fired and this is the result
```
Console was cleared
Updated Object {currentMsg: null, stagedMsg: null}
// After first click
Change currentMsg immediately
Updated Object {currentMsg: "My message", stagedMsg: "My message"}
// After second click
Updated Object {currentMsg: "My message", stagedMsg: null}
handleTransitionEnd Object {currentMsg: "My message", stagedMsg: null}
```
username_0: *My guess* is that when the following is executed, it somehow doesn't register any changes and does not update.
```
const handleTransitionEnd = () => {
// Dirty hack to make sure this is only called when it
// has transitioned to the empty state of the message
if (!stagedMsg) {
console.log("handleTransitionEnd", { currentMsg, stagedMsg });
setCurrentMsg(null);
}
};
```
Maybe worth noting, I have tried using timeout instead of `onTransitionEnd` but with no luck.
```
useEffect(() => {
if (!stagedMsg && currentMsg) {
setTimeout(() => {
setCurrentMsg(null)
}, 1000)
}
})
```
username_2: An easy fix seems to be changing this line:
```javascript
if (stagedMsg && stagedMsg !== currentMsg) setCurrentMsg(stagedMsg);
```
to this:
```javascript
useEffect(() => {
if (stagedMsg && stagedMsg !== currentMsg) setCurrentMsg(stagedMsg);
});
```
Which makes the code work for me.
------------------
Doing some more research, I changed
```javascript
if (!stagedMsg) {
console.log("handleTransitionEnd", { currentMsg, stagedMsg });
setCurrentMsg(null);
}
```
to this
```javascript
if (!stagedMsg) {
console.log("handleTransitionEnd", { currentMsg, stagedMsg });
setCurrentMsg(x => {
console.log("setCurrentMsg old value", x);
return null;
});
}
```
so that `setCurrentMsg` prints the previous value of the variable. It outputs the following:
```
handleTransitionEnd Object {currentMsg: "My message", stagedMsg: null}
setCurrentMsg old value: null
```
So apparently `setCurrentMsg` thinks the variable is already 'null', therefore not triggering a render; even though the console.log statement right before showed that the variable is not null but contains "My message".
username_0: I did not realise this; why is it `null`? I even used React developer tools and saw that the state is not `null`.
It explains why there is no re-render, but everything tells me that the state (except for the callback for `setCurrentMsg()`) contains `My message`, while the state is actually `null` somehow, which doesn't make any sense, because my event handler is the only one that sets it to `null`.
username_3: I worked my way through this as @username_2 did, but changed the currentMsg to an object, which (apart from probably being the simplest workaround) gave me the chance to see what the transitionEnd handler really "sees" (as different `null`s are hard to discriminate ;) ).
Here is the updated CodeSandbox: https://codesandbox.io/s/n7no7px18j
The result is: the transitionEnd handler will "see" a stale state: the one it set itself (or on the first iteration the initial state, which is also already stale).
To me this seems to be a bug, as it is too unexpected
username_2: @username_3 Very interesting. I agree your code strongly points at this being a bug.
Meanwhile, I tried finding a minimal example of this problem and ended up with this: https://codesandbox.io/s/7mxz7v8x0
username_0: Thanks for the help, I will have to use this solution from @username_3 for now until the bug has been fixed. It was really hard to decide whether it was a bug from Facebook or just me being stupid and not knowing how to use hooks :)
username_3: Just as a note, this issue was introduced in [this commit (790c8ef04...)](https://github.com/facebook/react/commit/790c8ef04195f0fc11ca3fb08e63f870f81483ac), which introduced a "bailout" when the reducer reduces to the previous (current) state, which it correctly does.
The problem is that the previous (current) state is stale/wrong in that transitionEnd handler's setState call.
I'd be happy to continue my research, but it might take some days, as I am pretty busy at the moment.
username_0: @username_1 What should we do with this issue, do you still need more information?
I tried looking into the commit but it would take too much time for me to look into what introduced the issue. I feel like I should leave that to the experts instead. 😃
username_2: I think it could be the same bug as in: https://github.com/facebook/react/issues/14849
So we should probably wait until that is fixed and see if the issue persists.
username_4: I think I have the same issue: #14910
username_3: This will be fixed by #14902
[Here is a code sandbox to test it with updated dependencies](https://codesandbox.io/s/3x9roqq8r5)
username_3: #14902
username_1: Fixed in 16.8.3.
https://codesandbox.io/s/w50mm4kzw
Status: Issue closed
|
Azure/azure-sdk-for-js | 763203481 | Title: OpenTelemetry Exporter integration with Resource API
Question:
username_0: The Resource API needs to be integrated with the exporter code to allow customers to configure cloud extension data such as roleName and roleInstance, among others.
https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/resource/sdk.md#specifying-resource-information-via-an-environment-variable
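A sketch of what this could look like on the OpenTelemetry JS side (attribute names follow the semantic conventions that typically map to role name / role instance; values and exporter wiring are illustrative):
```typescript
import { Resource } from "@opentelemetry/resources";

// These resource attributes are the usual candidates for mapping
// to cloud roleName / roleInstance in the exporter.
const resource = new Resource({
  "service.name": "my-role-name",
  "service.instance.id": "my-role-instance",
});
```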
Status: Issue closed |
arcus-azure/arcus.observability | 627227522 | Title: LogRequest is limited to HttpRequests
Question:
username_0: It seems that the `LogRequest` extension method for `ILogger` is only suited to deal with HTTP requests. However, what if I have a Function or a background worker that is triggered via a queued message that is received?
It would be nice if there was a more generic approach available to log requests, so that we can deal with these types of requests as well.
Answers:
username_1: Are you looking for HttpRequestMessage support or what are you actually looking for?
LogRequest is purely to track HTTP requests so I'm curious to know what your scenario is around the workers and queued messages?
username_0: I hope I can explain it clearly :)
Suppose you have a software component (an Azure Function, for instance) that is triggered when it receives a message from Service Bus. In that case, the request is not an HTTP request, but another kind of request. You might want to be able to track this as a request in your logging as well.
I have a specific situation for instance:
- a background worker is triggered when a message is received via NServiceBus
- I would like to log this to AppInsights to make it visible that the triggering request was a message that has been received via a specific queue.
This is currently not possible afaik.
username_1: That's not supported indeed because it's aimed at HTTP stack, for example it is reported as a Request in Application Insights.
I'm a bit reluctant to changing this beyond HTTP request so we could rename it to LogHttpRequest but ideally we introduce a new concept such as "Trigger" or "Invocation" maybe which is fully seperate.
Would that make sense?
username_1: In terms of Application Insights, it would be tracked as a trace is that OK or what would you expect?
username_0: I would expect that it is logged as a request, or maybe an event ? Don't know what would make the most sense ?
username_1: Event typically represent business events so I'm not sure if that's a good fit.
Requests tend to be focussed on HTTP stack - https://docs.microsoft.com/en-us/azure/azure-monitor/app/data-model-request-telemetry, but I'm open for a POC if you want to pick this up? I just haven't seen a scenario where requests are used for this.
username_2: @username_0, are you looking for a request per function trigger? Because that should automatically happen if you use the APPINSIGHTS_INSTRUMENTATIONKEY app key. I have several functions that receive batches from Azure Event Hub (and one from Azure Service Bus) that log requests, but one per batch, not message.
username_3: @username_0 any progress on this? Some way we need to take?
username_1: Let's make this happen!
Having this is valuable for sure:

username_3: #250 is already the first thing to add as non-HTTP request tracking. 😉
username_3: I'll close this, as we already have now non-HTTP request tracking.
Status: Issue closed
|
ESA-VirES/WebClient-Framework | 143441404 | Title: Internal Error message
Question:
username_0: 1)
External login from mobile (Drei network) or laptop (ÖBB train router) leads to "Internal Error" message.
I used the following usernames/passwords:
on https://testing.vires.services
test_user dfCsmzhuPE1vP+9t
on https://staging.vires.services
test_user dfCsmzhuPE1vP+9t
validation_user c8tXa0t+aFVKtG3b
2)
Minor typo in "Internal Error" message
"Please contact the server administrator at <EMAIL> to inform them ..."
Please change to plural:
"Please contact the server administrators ..."
Answers:
username_1: This is not a SW bug but rather a misconfiguration of the deployment web server.
Status: Issue closed
|
ikedaosushi/tech-news | 541347997 | Title: ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations
Question:
username_0: ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations<br>
Ever since the advent of BERT a year ago, natural language research has embraced a new paradigm, leveraging large amounts of existing text to pretrain a model’s parameters using self-supervision, with no data annotation required.<br>
https://ift.tt/35KpjNi |
cloudera/hue | 327708377 | Title: make apps error
Question:
username_0: macOS Sierra 10.12.6

Answers:
username_1: I think you'd better use Ubuntu. There is a page on the Hue website about installing on macOS, but I tried it without success, so I moved to a server or Docker instead.
runelite/runelite | 362892627 | Title: Note tab face lift, addition of bullet lists?
Question:
username_0: I keep a lot of things in my notepad, but it can be dreadfully hard to read and keep organized.
I think decreasing the tab spacing would go a long way toward creating clearer notes.
It would also be awesome to add bullet lists with a hot-key to activate.

Answers:
username_1: Use sticky notes
username_2: The notes plugin contains only text, we will not be adding any editor-like features that isnt just plain text, but the tab distance makes sense, idk why it is so horrid atm.
Status: Issue closed
|
thomasloven/lovelace-slider-entity-row | 920060186 | Title: Documentation section for installing without HACS
Question:
username_0: Hi! Can you please add a short manual to README.md about installing without HACS?
Answers:
username_1: Two years and one day ahead of you:
https://github.com/username_1/lovelace-slider-entity-row/commit/1d29e2365f2a843d24ead02b246868c73db27cec#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5R6
Status: Issue closed
|
RoboJackets/robocup-firmware | 503138047 | Title: Port Radio to RTOS
Question:
username_0: We need to port the radio to RTOS so we can take advantage of yielding (instead of just delaying for 1ms).
This should basically look like the following:
```cpp
static SemaphoreHandle_t radio_semaphore;  // binary semaphore, created with xSemaphoreCreateBinary() at init

// ISR: signal the radio task that the radio has data ready.
void radio_interrupt() {
    xSemaphoreGiveFromISR(radio_semaphore, NULL);
}

void radio_task_function() {
    while (true) {
        // TODO: Request data from the ISM
        // Block (yield to other tasks) until the ISR signals, for at most 1 ms.
        if (xSemaphoreTake(radio_semaphore, pdMS_TO_TICKS(1))) {
            // TODO: Get data from the ISM
        }
    }
}
``` |
xamarin/Xamarin.Forms | 854660414 | Title: [Bug]
Question:
username_0: ### Description
There are many examples where an element is placed outside its own stack. For [example](https://youtu.be/8_KOYirQzcw?t=2019).
If you add a click event handler to the "Add to Bookmark" button, then a click outside the stack does not give any reaction.
This problem is very limiting when designing an interface.
Answers:
username_1: Hi, @username_0 - I'm not quite sure I understand the issue you're describing. Would you be able to elaborate a bit more and perhaps share a sample project that reproduces the issue as well? Thanks :)
username_2: No response to our request for more info, so assuming this is figured out, thanks!
Status: Issue closed
|
googleapis/google-api-php-client | 1031761386 | Title: isAccessTokenExpired seems to behave differently from v2.2 to 2.10
Question:
username_0: I upgraded from "google/apiclient": "2.2.2", to "google/apiclient": "2.10.1". in my code curl call, I have been doing a refresh token.
```
public function refreshToken()
{
if ($this->googleClient->isAccessTokenExpired() || empty($this->oauth2Token)) {
$result = $this->googleClient->fetchAccessTokenWithAssertion();
if (!isset($result['access_token'])) {
throw new \Exception('unexpected response: ' . json_encode($result));
}
$this->oauth2Token = $result['access_token'];
}
return $this->oauth2Token;
}
```
Either \Google\Client::isAccessTokenExpired is not returning true
Or \Google\Client::fetchAccessTokenWithAssertion is not returning a token.
Once my process hit the hour mark, I get
```
{"error":{"code":401,"message":"Request had invalid authentication credentials.
Expected OAuth 2 access token, login cookie or other valid authentication credential.
See https://developers.google.com/identity/sign-in/web/devconsole-project.","status":"UNAUTHENTICATED"}}
```
The code would create a new token in google/apiclient 2.2.2.
Status: Issue closed
Answers:
username_0: @LindaLawton Thank you. It's not a big deal, but it used to be possible to check isAccessTokenExpired().
This is how I create my client:
```
/**
* @return \Google_Client
* @throws \Google_Exception
*/
public function getClient()
{
if (empty($this->client)) {
$gmbConfig = $this->settings;
        // check if the JSON config exists
if (!$gmbConfig) {
throw new \Exception("Could not read environment configuration.");
}
$this->client = new \Google_Client([
'retry' => [
// this is how many times it will try after initial attempt
// total will be first attempt + retries (in this case 5 attempts)
'retries' => 4
]
]);
$this->client->setSubject($gmbConfig['userToImpersonate']);
$this->client->setScopes($gmbConfig['scope']);
$this->client->setAuthConfig($gmbConfig['authJson']);
$this->client->setAccessType("offline");
$this->client->setApprovalPrompt("force");
$this->apiUrl = $gmbConfig['apiUrl'];
}
return $this->client;
}
```
This token is only good for 360 seconds. With "google/apiclient": "2.2.2" I could check for token expiry, and it would return false after an hour. Now it does not return true even after the token expires. Perhaps we were using isAccessTokenExpired() incorrectly and getting away with it until we updated the library.
MeredithU/angularjs_greensock_animation | 124681544 | Title: Style.scss @import link not working
Question:
username_0: I just followed your tutorial to implement bootstrap-sass, and when importing like so:
`@import 'app/bower_components/bootstrap-sass-official/assets/stylesheets/bootstrap';`
this path is not found when including it in the file. However, I found this works:
`@import '../../bower_components/bootstrap-sass-official/assets/stylesheets/bootstrap';`
asciidoctor/asciidoctor-diagram | 63876672 | Title: Call ditaa with command line arguments
Question:
username_0: Hi,
is it possible to specify command line arguments or something equivalent in the asciidoc file when running ditaa? I would like to control effects like box shadows and box separations which is possible via ditaa command line arguments `--no-separation` and `--no-shadows`.
Thanks.
Answers:
username_1: No that's not yet possible, but shouldn't be too hard to add. I propose adding support for passing ditaa command line arguments as block attributes. For instance:
```
[ditaa,no-shadows]
```
Status: Issue closed
username_1: Resolved by 887a5771aba1b90ba2fa17c4b7470c0ac298293b
I ended up adding support for an options attribute that takes the ditaa command line arguments as value. You can write, for instance,
```
[ditaa, options="--no-shadows --scale 2.5 --round-corner"]
```
username_0: Awesome, thanks!
However, the image caching mechanism seems to be too aggressive now. Changing the options without changing the ascii image does not regenerate it. Is there some work-around?
username_1: Thanks for catching that one. The options are indeed not taken into account for the cache validation. Oversight on my part. I'll try to fix that this evening.
username_1: 5149ac682d3517155c9868ddacbf1c5a11d4b46f makes the ditaa processor take the options attribute into account when deciding if the image needs to be regenerated or not.
Status: Issue closed
username_2: I'm definitely in support of this approach, but I wonder if we should also have the option of making these customizations globally using document attributes. wdyt? Something like:
```
:ditaa-options: no-shadows,round-corners
```
or even
```
:ditaa-option-no-shadows:
:ditaa-option-round-corners:
```
username_1: Sounds like a good idea. My first implementation actually had the options in the form you're using above; without the double-dashes. That looked more asciidoc-like in my eyes than raw command line arguments. I backed away from that approach thinking that it might be easier for users (and less documentation work) if you could just use the command line arguments of the tool in question as is.
If we go ahead with the above syntax then the same syntax should be supported on the blocks themselves imo.
So you could either do
```
:ditaa-options: no-shadows, round-corners, scale=2.5
```
globally or
```
[ditaa, options="no-shadows,round-corners,scale=2.5"]
```
at the block level.
@username_0 any preference from your side? Make it look asciidoc-ish or stick with the raw command line options?
username_2: I like what you've suggested, esp the document level attribute.
username_0: I would definitely use document attributes if they were available. As a consequence, the syntax should be the same at the block level, as you already said. Your example above looks good to me. I am not an expert asciidoc user (yet) though.
username_1: Reopening for syntax change and document attribute support
username_1: I ended up implementing the following:
- The Ditaa extension supports the options `scale`, `antialias`, `separation`, `round-corners`, `shadows` and `debug`
- Options can be specified per block as `<option>=<value>` or globally as a document attribute of the form `:ditaa-option-<option>: <value>`
- Block level options override global options
- Default values are Ditaa's defaults
The semantics of the options are
- `scale`: scale factor specified as a decimal number (e.g. `scale=1.5`)
- `antialias`: true to enable antialiasing; false to disable it
- `separation`: true to leave spacing between adjacent blocks; false to remove it
- `round-corners`: true to force round corners everywhere; false to use corner style from diagram
- `shadows`: true to enable drop shadows; false to disable them
- `debug`: true to display a debug grid in the diagram; false to remove it
Status: Issue closed
username_2: Huge :+1: !
username_0: This is great, thanks! |
FullHuman/purgecss | 571104227 | Title: grunt-purgecss missing files
Question:
username_0: @username_1 it seems you used a package.json file from another package which had a `lib` folder, but grunt-purgecss has a `tasks` folder. Also, I'm not sure if the other properties make sense in grunt-purgecss, like `main`, `module`, etc.
Thanks in advance!
Answers:
username_1: Yes, the other properties do not make sense for grunt-purgecss. Released 2.1.1 for grunt-purgecss to fix this issue
username_0: @username_1 Thanks! BTW I'd still keep the files property and just adapt it. No point in having the tests in production :)
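Something like this in package.json would ship only the task implementation (a sketch; npm always includes package.json, README and LICENSE regardless):
```json
{
  "files": [
    "tasks"
  ]
}
```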
username_0: BTW, does `@types/grunt` need to be in dependencies?
username_1: Yep, it's not needed. I'll rectify this :)
username_0: Thanks! One issue I noticed after upgrading to the latest version is one specific selector going missing. Let me know if you want me to make a new issue about it.
Basically, the following selector is missing with the new version:
```css
.full-news a[href*="//"]:not([href^="mpc-hc.org"]):not([href*="trac.mpc-hc.org"])::after,
.new a[href*="//"]:not([href^="mpc-hc.org"]):not([href*="trac.mpc-hc.org"])::after {
content: "\f08e"; /* fa-external-link */
display: inline-block;
font-family: FontAwesome;
margin-left: 0.3em;
}
```
If I change the code to this, then it's kept with the new version:
```css
.full-news a:not([href*="mpc-hc.org"])::after,
.new a:not([href*="mpc-hc.org"])::after {
content: "\f08e"; /* fa-external-link */
display: inline-block;
font-family: FontAwesome;
margin-left: 0.3em;
}
```
username_1: yes, if you could make a new issue about it. Would be nice :)
Status: Issue closed
username_0: @username_1 how about excluding tests and any other unneeded files/folders? Currently everything is included. Not a big deal, but you know, better not ship unneeded files 🙂 |
nwjs/nw.js | 426156521 | Title: Phoning home to accounts.google.com
Question:
username_0: ### Expected behavior
No phoning home to google
### Actual behavior
The software phones home to google on startup. The output on the command line is the following: [ERROR:cert_verify_proc_nss.cc(974)] CERT_VerifyCert...for accounts.google.com failed err=-8179
### How to reproduce
Get a firewall that will warn you of any outgoing connections. Start up the software. It will phone home to Google.
I am running the app with --disable-sync --disable-background-networking --disable-component-update and it is still contacting Google. It is basically like a virus trying to send data to Google.
Answers:
username_1: Would you please provide an NW app demo for reproducing this, one which always phones home to Google on startup?
username_2: I'm not sure if this is what you need but:
If I download nwjs v0.37.1 for linux:
Then I run `netstat -tpnc`
I start `./nw` just by itself, so it uses the base plain NW.js page and nothing else. I see in netstat:
`tcp 0 0 10.11.72.249:41016 172.16.17.32:443 ESTABLISHED 12991/./nw`
172.16.17.32 is a google IP.
https://ipinfo.io/172.16.17.32
Status: Issue closed
|
ghiscoding/Angular-Slickgrid | 612493526 | Title: Custom filename is not working.
Question:
username_0: The custom filename is not working in Angular 6.
```
this.gridOptions = {
enableAutoResize: true,
enableColumnReorder: false,
enablePagination: true,
enableAutoSizeColumns: true,
enableMouseHoverHighlightRow: true,
enableExcelExport: true,
enableExport: true,
pagination: {
pageSizes: [10],
pageSize: 10
},
exportOptions: {
format: FileType.xls,
exportWithFormatter: true,
sanitizeDataExport: true,
filename: 'QCT Tool',
},
excelExportOptions: {
format: FileType.xls,
filename: 'QCT Tool',
},
gridMenu: {
hideExportCsvCommand: false,
hideExportTextDelimitedCommand: false
}
};
```
**Slickgrid version:**
`"angular-slickgrid": "^2.17.11",`
Answers:
username_1: Probably doesn't work with spaces in the filename. Next time make sure to fill in the issue template correctly if you don't want the bot to auto-close the issue.
coding-blocks/codingblocks.online.projectx | 623649717 | Title: compare batches page (lite, premium, live) does not scroll down and thus its contents are not visible at the bottom
Question:
username_0: **Describe the bug**
when you click on compare batches to compare lite and premium batches, the page loads up but cannot be scrolled down, and thus its contents are not visible at the bottom
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'buy courses' and click on 'explore' for any course
2. Click on 'compare batches' under the 'choose batch' column
3. See error
**Expected behavior**
the compare page must scroll down in order to view its lower content
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
<img width="951" alt="Capture2" src="https://user-images.githubusercontent.com/46247882/82730583-7d994d80-9d1e-11ea-979a-2e40b7e74005.PNG">
**Would you like to work on the issue?**
<!-- Please let us know if you can work on it or the issue should be assigned to someone else. -->
- Yes
- [ ] No
Answers:
username_0: Please add a bounty to this issue if you feel that this is a bug. I would like to contribute and resolve this issue by participating in the BOSS contest.
username_0: compare batches page (lite, premium, live) does not scroll down and thus its contents are not visible at the bottom #63
username_0: @username_1 can I claim these 20 points for raising this issue? (It is asking for a pull request but I didn't submit any pull request.)
username_1: @username_0 yes you can. Just paste this issue link instead of PR
username_2: I would like to be assigned to this issue
username_1: @username_2 Okay. I've assigned you the issue. please make a WIP PR asap.
username_2: Okay. Thank you.
username_2: I was going through the master branch and after running the server and app, I noticed that the 'Compare Batches' option is not present. I checked the code and searched for something vaguely similar to it but didn't find anything. Kindly guide me so that I can move forward with fixing this bug. I have also attached the screenshot of the aforementioned page with missing feature.
 |
space-wizards/space-station-14 | 1097534904 | Title: Change UplinkAccounts to use Mind instead of body
Question:
username_0: ## Description
<!-- Explain your issue in detail, including the steps to reproduce it if applicable. Issues without proper explanation are liable to be closed by maintainers.-->
Currently the UplinkAccountsSystem uses EntityUid, which means that if the Traitor gets cloned, they aren't registered in the accounts database, and so due to recent access changes are unable to access their uplink.
Answers:
username_1: Aren't these access changes temporary, until #5842?
username_0: That PR doesn't change the Uplink system so even when it gets merged, a cloned traitor still won't have access to their account.
username_1: Wdym? Anyone who knows the code can access the Uplink.
username_2: To clarify, AFAIK the idea was to just have uplink accounts linked to a secret code / login that any player can access if they know it (and have an entity with an uplink component available), with no entity/mind restrictions. The ringtone PR doesn't actually make any changes to uplink stuff, but is intended to be how the secret code is entered.
You could instead just add a "phone" button and recycle the nuke key-code entry UI or something if that were easier than finishing the ringtone PR.
username_3: It's not difficult, I've just been busy lately, but I am currently working on finishing up the changes sloth requested and will push it for review sometime soon.
nicoduj/homebridge-harmony | 505973100 | Title: [BUG] - activities with type VirtualTelevisionN
Question:
username_0: **Describe the bug**
Activities with type "VirtualTelevisionN" can't be activated
**To Reproduce**
Don't know
**Expected behavior**
Activity should be launched like others
**Logs**
[10/6/2019, 10:29:42 PM] [Harmony] (Harmony)INFO - activityCommand : Returned from hub {"cmd":"harmony.activityengine?runactivity","code":401.2,"id":"0.8625857504460019","msg":"Unauthorized policy write haDevice denied"}
[10/6/2019, 10:29:42 PM] [Harmony] (Harmony)ERROR - activityCommand : could not SET status, no data
**Config**
basic config with TV only
**Additional context**
originally reported in https://github.com/username_0/homebridge-harmony/issues/194
Doing the activity config again did not produce this type of activity.
Answers:
username_0: If anyone knows how to produce this kind of activity, please fill in the details, since I won't be able to fix it, or even try to, if I can't reproduce it with my own hub.
username_0: This needs to be confirmed, since the error code means authorization is not OK; maybe there was some other thing in the config in the original report that I did not see.
username_1: I got the same after changed my Harmony configs (removing and creating activities in Harmony App). I tried to clear cache and made sure the "MainActivitiy" is correctly set but still can't fix it. The plugin works fine before I made the changes.
[3/7/2020, 11:06:11 PM] [Master Room Hub] (Master Room Hub)ERROR - activityCommand : could not SET status, no data
My config:
{
"name": "Master Room Hub",
"hubIP": "192.168.1.245",
"cleanCache": false,
"publishAllTVAsExternalAccessory": true,
"TVAccessory": true,
"mainActivity": "Live TV",
"activitiesToPublishAsInputForTVMode": [
"Apple TV",
"BBC",
"Bluetooth Music",
"Live TV",
"my TV Super",
"Netflix"
],
"playPauseBehavior": true,
"remoteOverrideCommandsList": [
{
"ActivityName": "my TV Super",
"CommandsList": [
{
"CommandName": "INFORMATION",
"NewCommand": "TVB STB;TV/VOD"
},
{
"CommandName": "ARROW_UP",
"NewCommand": "TVB STB;DirectionUp"
},
{
"CommandName": "ARROW_DOWN",
"NewCommand": "TVB STB;DirectionDown"
},
{
"CommandName": "ARROW_LEFT",
"NewCommand": "TVB STB;DirectionLeft"
},
{
"CommandName": "ARROW_RIGHT",
"NewCommand": "TVB STB;DirectionRight"
},
{
"CommandName": "SELECT",
"NewCommand": "TVB STB;Select"
},
{
"CommandName": "PLAY",
"NewCommand": "TVB STB;Red"
},
{
"CommandName": "PAUSE",
"NewCommand": "TVB STB;Red"
},
{
"CommandName": "FAST_FORWARD",
[Truncated]
],
"switchAccessories": true,
"showTurnOffActivity": "inverted",
"publishGeneralMuteSwitch": true,
"showCommandsAtStartup": false,
"otherPlatforms": [
{
"name": "Living Room Hub",
"hubIP": "192.168.1.225",
"TVAccessory": true,
"mainActivity": "Live TV",
"playPauseBehavior": true,
"switchAccessories": false,
"showTurnOffActivity": "false",
"publishGeneralMuteSwitch": false,
"showCommandsAtStartup": false
}
],
"platform": "HarmonyHubWebSocket"
}
username_0: Hi, is Living Room Hub OK? Does only the master hub produce the error? Do you still have the permission error right before? Did you try another mainActivity?
username_1: Living Room Hub is fine. Master Room Hub still got the error. I went so far as to clear the cache, uninstall the plugin and reinstall it, and still got the same error. I tried another mainActivity, "Apple TV" and "BBC", and got the same result. But it was all fine before I changed the Harmony activities. Very strange.
Have I successfully cleared all files by uninstalling the plugin, or would there be residuals somewhere?
username_1: By the way, I don't have the permission error. I was not the owner of the original bug. I just found this open issue and I have the same error.
username_0: Hi, there can be some files in the root directory of homebridge; you can try to remove them (they handle renaming of your activities, so maybe the name is not the one you think).
Be aware it is fully case-sensitive.
username_1: Ok, deleted the files, cleared the cache, killed all HomeKit apps, restarted homebridge, and now it works!
username_0: Nice, noted. Maybe there was some confusion in the plugin between old names/new names and the file cache. I will leave this bug open to dig into it when I have some time. Thanks for reporting.
Status: Issue closed
username_0: Can't reproduce, closing since there was no other report |
jiexu10/pinpoint | 127293703 | Title: USER: Devise Omniauth
Question:
username_0: As a customer
I want to use my existing authentication from other sites
So that I don't have too many accounts
ACCEPTANCE CRITERIA
- [ ] If I log in with Facebook, I should be successfully signed in
- [ ] If I do not specify a valid Facebook account, I should get an error message |
wuyifan18/DeepLog | 691982436 | Title: Question regarding the predicted variable
Question:
username_0: Yifan,
**Source**: LogKeyModel_predict.py
In the code below, can you please explain the difference between the output and predicted variables? Is output the same as predicted, except for being sorted as tensors? Also, shouldn't the value of the predicted variable be something binary, so that we can determine whether the predicted outcome is anomalous or not?
```
output = model(seq)
predicted = torch.argsort(output, 1)[0][-num_candidates:]
```
Thanks,
Deep
Answers:
username_1: Deep,
The output is a probability distribution describing the probability for each log key to appear as the next log key value given the history.
username_0: Shouldn't the value of the predicted variable be something binary so that we can determine whether the predicted outcome is anomalous or not?
username_1: Sort the possible log keys based on their probabilities and treat a key value as normal if it’s among the top g candidates. A log key is flagged as being from an abnormal execution otherwise.
You can read the paper for details.
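In code terms, the check in LogKeyModel_predict.py boils down to something like this (a sketch; `label` stands for the actual next log key in the sequence):
```python
output = model(seq)  # one score per possible log key
# indices of the g most probable keys (g = num_candidates)
predicted = torch.argsort(output, 1)[0][-num_candidates:]
# flagged as anomalous if the true next key is not in the top g
is_anomaly = label not in predicted
```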
username_2: @username_1 where can I modify top g in your code?
username_1: @username_2 here
https://github.com/username_1/DeepLog/blob/502aaf05be4c1251b7dc96f6439025c4fc988c66/LogKeyModel_predict.py#L51
username_2: Thank you @username_1. I know that num_candidates here is a hyperparameter that is supposed to be changed according to the dataset. But my question is: if my data has 24297 num_classes (while your HDFS dataset has only 28 num_classes), what can be a reasonable num_candidates? For example, is 1000 too high or too low for num_candidates? I know this is a very vague question, but any pointers are appreciated.
username_1: @username_2 the num_candidates is a hyperparameter, which means you should adjust it according to the metrics, such F1 measure. |
theia-log/selene | 419611748 | Title: Selene should be able to generate events from stdin - read from stdin and publish to Theia server as events.
Question:
username_0: Support for custom event generation is needed. This should be a separate CLI command - `event` which will generate events with values given on the command line as flags.
Special support should be implemented for generating events with content based on whatever comes in on STDIN.
Status: Issue closed |
kubernetes-sigs/kubebuilder | 498140110 | Title: How to define types that unknow
Question:
username_0: <!-- STOP
* If this is an issue with some sort of runtime mechanics, it probably belongs in https://sigs.k8s.io/controller-runtime instead
* If this is an issue with CRD generation or webhook config generation, it probably belongs in sigs.k8s.io/controller-tools instead
* If this is an issue with scaffolding, or is definitely a cross repository effort, it probably belongs here.
-->
/kind bug
Usually in a CR, the Spec structure is fixed values, such as:
```
apiVersion: example.com/v1alpha1
kind: HelloWorld
metadata:
name: default
spec:
replica: 3
```
But in some cases, the CR structure is unknown, such as:
```
apiVersion: example.com/v1alpha1
kind: HelloWorld
metadata:
name: default
spec:
config: object
```
The `object` can be:
```
key1: value1
```
or:
```
nested:
nested:
...
key1: value1
```
Can someone tell me how to write the API types?
Answers:
username_1: Is the ask how to write a one or two layer nested object, or an arbitrarily deep recursively nested object? I dunno about the second, but use a `map[string]structType{}` for the first? e.g. https://github.com/username_1/incendiary-iguana/blob/fd1fd03fabe2863817059094721ada8359f8aad0/api/v1alpha1/secretbundle_types.go#L19
username_2: `map[string]interface{}` should do the trick
username_2: but might have issues
username_0: Thank you, I think I can use `runtime.RawExtension`.
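For reference, a minimal sketch of that approach (field names are illustrative; the pruning marker is what controller-tools uses to keep arbitrary nested content in v1 structural schemas):
```go
import (
    "k8s.io/apimachinery/pkg/runtime"
)

// HelloWorldSpec keeps the free-form part of the CR in a RawExtension,
// so arbitrarily nested YAML/JSON can be stored without a fixed schema.
type HelloWorldSpec struct {
    // +kubebuilder:pruning:PreserveUnknownFields
    Config runtime.RawExtension `json:"config,omitempty"`
}
```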
Status: Issue closed
|
ncbi/blast_plus_docs | 650339764 | Title: segfaults in step 3 of section 3
Question:
username_0: Hi! First of all, thank you so much for this tutorial.
Everything works great until I get to step 3 of section 3 (running blastn with nt and the 59 KB or 422 KB input file). I keep getting segfaults:
`/blast/bin/blastn: line 25: 9 Segmentation fault (core dumped) blastn.REAL "$@"`
I've tried changing the output format and I've tried changing the number of threads to 32, 16, or 1. I've tried Ubuntu 18.04 LTS with 500 GB and Ubuntu 20.04 LTS with 500 GB. I'm using a n1-highmem-32 us-east4c. Do you have any suggestions for what could be going wrong?
Thank you so much!
Answers:
username_1: Hello,
thank you for reporting this. I've been able to reproduce your problem by following the instructions on github. I'm looking at this issue right now.
username_0: Thank you so much!!!!
username_1: Hello,
I've updated the docker file to fix the issue you reported. Please let us know if you continue to have problems or have any comments. Otherwise, we'll consider the issue resolved.
Status: Issue closed
|
Nekmo/amazon-dash | 993262334 | Title: Any way of using as a presence detection for wifi devices on the network?
Question:
username_0: Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with amazon-dash)
- [x ] Feature request (request for a new functionality)
- [x] Question
- [ ] Other
So I was thinking: since this just sniffs the network for specific MAC addresses appearing and then informs Home Assistant (and others), I was wondering if it would be possible to do the same to alert HA that a person has joined the network, and then notify if they leave the network. Effectively, it would become a better network presence sensor than using NMAP/ICMP.
Answers:
username_1: Yes, it can be used as a presence sensor :) However, if the device sends reconnection packets, the function could be executed several times.
senecajs/seneca | 783778657 | Title: built-in message validation inspired by Vue.js props validation
Question:
username_0: *Except* validation is deep. This:
```
seneca.add({
foo: 'bar',
count: Number,
what: String,
obj: {
zed: Boolean,
list: [String]
}
}, action)
```
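A message that would pass this validation might look like (illustrative values):
```
seneca.act({
  foo: 'bar',
  count: 3,
  what: 'hello',
  obj: {
    zed: true,
    list: ['a', 'b']
  }
}, function (err, out) {
  // handle the reply
})
```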
Answers:
username_1: @username_0 Is it currently planned for the implementation of such a message validation system to support union types? E.g. if, as a user, I want to match on `:foo` when it's either 'bar' or 'baz' or null. If yes, then what is the syntax envisioned to be like?
flamelink/flamelink-js-sdk | 438092225 | Title: Difference to other Flamelink js SDK
Question:
username_0: Hi there,
I wanted to ask, why there is a second repository for a flamelink js SDK?
https://github.com/flamelink/flamelink
https://github.com/flamelink/flamelink-js-sdk/
What is going to be the main difference?
I am confused and do not know which of those two will be the one that works in the future?
Why is there one in beta and one in alpha? The one in alpha seems to be the one that's currently maintained. Is the one in beta going to be updated, or can't I trust that it will be improved and fixed in the future?
And what about this one here?
I am confused.
Answers:
username_1: Hi there,
Apologies for the confusion, I will explain.
The repo you see at https://github.com/flamelink/flamelink is the current SDK that only supports the Firebase Realtime Database. It is an oversight on our end to still have the `beta` message on the README (I will remove that today). When installing from NPM (`npm i flamelink`), you get that repo's code.
This repo you commented on (`flamelink-js-sdk`) is the newer version that supports both the RTDB as well as Cloud Firestore. Any new features will be added to this repo. Where necessary, bugfixes will still be made to the current SDK for the foreseeable future. You can install this new `alpha` SDK with `npm i flamelink@next`.
Differences between the two libraries can be seen if you look at the [migration guide](https://flamelink.github.io/flamelink-js-sdk/#/migration-guide) from the current (`v0.x`) to this new (`v1.x`) SDK.
I hope that clears it up. Please ask if you have any other questions and I'd be happy to clarify them.
username_0: Hi there,
thank you very much for that clear answer. I just saw the migration guide, and apart from the change from "set" to "add"/"update", it should be a piece of cake to migrate the project later on.
Best regards!
Status: Issue closed
|
IEEE-NITK/ieee-nitk.github.io | 261599722 | Title: Issues to be fixed to take part in IEEE Website Competition
Question:
username_0: - [ ] All websites must use WordPress themes that are checked in advance of submission and score at least a 70% or higher on themecheck.org.
- [ ] All plugins must explicitly state compatibility with WordPress 4.7.5.

**Note: The following WordPress plugins are not allowed for use by IEEE: http://wpengine.com/support/disallowed-plugins/.**

We have a repository of photos that you may use in your design. To access them, please visit the tab labeled Photos. You may not use either approved photographs or images that you do not own or have a valid license for.

- [ ] Digital Style Guide: ieee.org/about/webteam/styleguide/index.html – This section contains an overview of web-related requirements, best practices, and style guidelines for IEEE employees, volunteers, and partners (vendors, consultants, or contract workers) involved in the development or enhancement of IEEE websites.
- [ ] Global Page Element and Branding Requirements: ieee.org/about/webteam/styleguide/page_elements_and_branding.html – IEEE digital publishers are asked to include IEEE-wide styles when structuring websites in order to create alignment, provide orientation, leverage the IEEE brand, and prevent users from having to learn each website. IEEE styles are detailed here for use of the IEEE enterprise-wide meta-navigation, IEEE Master Brand, site identifier, IEEE favicon, and website headers and footers.
Answers:
username_1: Wordpress themes? We don't fall into that category na?
username_0: But we still have to get 70% or higher in http://themecheck.org
username_1: Alright! No Issues there then!
username_1: @username_0 last three points, how much of it do we satisfy?
Status: Issue closed
username_1: - [x] All websites must use WordPress themes that are checked in advance of submission and score at least a 70% or higher on themecheck.org.
- [x] All plugins must explicitly state compatibility with WordPress 4.7.5.
Note:
The following WordPress plugins are [not allowed for use by IEEE:](http://wpengine.com/support/disallowed-plugins/). **We have a repository of photos that you may use in your design.** To access them, please visit the tab labeled Photos. You may not use either approved photographs or images that you do not own or have a valid license for.
- [ ] [Digital Style Guide:](http://ieee.org/about/webteam/styleguide/index.html) – This section contains an overview of web-related requirements, best practices, and style guidelines for IEEE employees, volunteers, and partners (vendors, consultants, or contract workers) involved in the development or enhancement of IEEE websites.
- [x] Fonts changed according to guidelines
- [ ] Nav Bar changed - Logo should be to the right with padding on top and bottom with a clear background . The nav bar contents should be shifted to the left
- [ ] Footer needs to be modified according to the guidelines
- [ ] [Global Page Element and Branding Requirements:](http://ieee.org/about/webteam/styleguide/page_elements_and_branding.html) – IEEE digital publishers are asked to include IEEE-wide styles when structuring websites in order to create alignment, provide orientation, leverage the IEEE brand, and prevent users from having to learn each website. IEEE styles are detailed here for use of the IEEE enterprise-wide meta-navigation, IEEE Master Brand, site identifier, IEEE favicon, and website headers and footers.
- [x] All student members’ names and IEEE member numbers
- [ ] An outline of goals and objectives of the project
- [ ] Original photos and/or videos
username_1: Nav Bar and Footer will be done by @mahim23
username_1: @anumeha Add navbar and site identifier!
Status: Issue closed
|
Azure/azure-sdk-for-php | 304122721 | Title: Some of the calls return Fatal errors
Question:
username_0: In some cases we get this error
PHP Fatal error: Call to undefined method MicrosoftAzure\Storage\Blob\Models\CreateBlobOptions::getUseTransactionalMD5() in /var/www/recurpost.com/vendor/microsoft/azure-storage-blob/src/Blob/BlobRestProxy.php on line 1941
I see the trait defined just fine and can reach its definition by following my code, but not sure why it fails at times.
It is important to mention that it does not fail every time. Only in some cases. There is nothing special about the cases when it fails. |
lutsik/MeDeCom | 348280008 | Title: startRMSE not defined
Question:
username_0: In ploting.R, at line 669, startRMSE is passed as `h` to the `abline` function. startRMSE is not defined anywhere in the plot.K.selection() function, leading to a bug. On careful analysis it is found that startRMSE is initialized in the commented-out part (lines 619 and 621).
wso2/product-apim | 607310658 | Title: Add REST API document generation maven plugin to throttling service
Question:
username_0: ### Describe your problem(s)
The internal REST service 'throttling service' uses a code generator to generate server-side code skeletons from swagger specs. Currently we are using a proprietary code-generator plugin, and we have to migrate the code generator to the default swagger-codegen [1].
[1] https://github.com/swagger-api/swagger-codegen
Status: Issue closed
Answers:
username_0: Fixed with https://github.com/wso2/carbon-apimgt/pull/8434 |
godotengine/godot-docs | 317772371 | Title: godot-docs/getting_started/step_by_step/ui_main_menu.rst
Question:
username_0: Let's say we want the margin: if all anchors are at 0, the right and bottom margin values are too low.
The anchors should be top 0, left 0, right 1, bottom 1, and the right and bottom margins should be negative.
The rest of the tutorial does not make much sense if it is intended to build a responsive layout.
Answers:
username_1: I wrote this tutorial during Godot 3 beta and the way the UI layout works changed after that. It's been modified by other contributors but this may have slipped. Back 6 months ago Full Rect didn't change the margins.
username_0: Thanks for the clarification. I should try to better understand how Godot control containers work; at first glance the anchors and margins work exactly like a CSS bounding box, and the containers remind me of CSS framework containers, but there is more to it than that, I guess.
username_1: Ah it doesn't work like CSS to me. Anchors act more like percentage-based margin lines (relative to the parent node) and margin like CSS padding, to give you a rough parallel using the web's box model.
In practice it's even more different in that you have a wide range of specific nodes to work with, UI components, so yeah it's really like an app UI framework in that sense. Anyway, Godot's entire editor UI is built with these nodes.
Regarding the docs article, should I change some things? Do you think "responsive" is misleading?
username_0: In the next tutorial I got the point that for games the layout should only be responsive according to a minimum screen size and an aspect ratio, and that makes sense: the game will run on a device with a fixed screen size, and for windowed mode there should be a maximum and a minimum.
I think the title is ok.
In the next tutorial there must be some changes, because some options such as Full rect and fit the parent are not available.

The icon here shouldn't be in a center container as it will break the layout

And I couldn't find a way to resize the background for the ep counter when following the tutorial, but in the end folder in the source files it is resized correctly.
Well, that must have been written for a different Godot version. Thanks for your help, I'll keep reading and figure out the changes.
username_1: 3.0 beta, right before they updated the UI system 😄 I replaced the wrong image yesterday, but it can take time for the docs to rebuild, and actually I forgot to cherry-pick the change to branches other than master.
For these tuts I'm looking to update the demos for the official demos repo, as there's little on UI now, and make updated video tuts for the hybrid UI workflow introduced in 3.0 stable. Maintaining the text version is a pain as it's something that's visual in nature; I feel it doesn't work well.
Status: Issue closed
|
jasred/Capstone | 310261683 | Title: Great job with mobile resizing
Question:
username_0: Great incorporation of mobile resizing! Though it may seem tedious, it is an incredibly important concept, since screen sizes vary tremendously today and it's important that your website looks exactly as you design it to. Great job!
https://github.com/jasred/Capstone/blob/master/resources/css/style.css#L540 |
BCLibraries/bc-libraries-site | 57953753 | Title: Emergency notifications display
Question:
username_0: We need a place to display special notifications (mostly closings). It should be near the top, highly visible, and hidden when there is no event. Ideally it will be AJAX-powered so that Tom or Scott or individual library heads could add a notification. |
flutter/website | 555898286 | Title: Google Codelabs: Build a Photo Sharing app with Google Photos and Flutter - refactoring to Androidx not possible
Question:
username_0: Hi,
I cannot migrate the source code or final source code to AndroidX.
Every time I run the refactor in Android Studio it shows me the message "no usages found in project".
I have tried refactoring only the Android folder, I have changed the SDK to 28, and I have checked that AndroidX is set to true, but still no AndroidX refactoring is possible.
Could someone provide me with the refactored code, or rather tell me what else I can try to get this tutorial up and running?
Thanks in advance!!
[photos-sharing-master.zip](https://github.com/flutter/flutter/files/3985514/photos-sharing-master.zip)
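(For reference, the AndroidX opt-in flags live in android/gradle.properties in a Flutter project; this is the standard pair, nothing specific to this codelab:)
```
android.useAndroidX=true
android.enableJetifier=true
```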
Answers:
username_1: @username_2, can you comment?
username_2: Apologies for not replying earlier. The codelab has been updated use Android X, so you should no longer see a warning message when running the application.
(See this commit if you are curious about the change: https://github.com/googlecodelabs/photos-sharing/commit/cc909390da5385648fc77a19dbbdf3ff984f3b20 )
@username_1 Please close this issue :-)
username_1: Thanks, @username_2! Closing.
Status: Issue closed
|
vaadin/flow | 694834518 | Title: Vaadin For TypeScript: DeferrableResult for caching offline Endpoint requests
Question:
username_0: Part of #8905
To be able to make deferrable Endpoint requests, we need to cache the request metadata when offline, so that the service worker can retrieve the cached requests and submit them when the user comes back online.
```ts
class DeferrableResult<T> {
  isDeferred: boolean = false;
  id?: number;
  result?: T;

  // `client` and `saveToIndexDB` are assumed helpers in this proposal
  constructor(public endpoint: string, public method: string, public args?: any) {
  }

  async _request() {
    if (navigator.onLine) {
      // Online: call the endpoint directly
      this.result = await client.call(this.endpoint, this.method, this.args);
    } else {
      // Offline: persist the request metadata so it can be replayed later
      this.isDeferred = true;
      this.id = saveToIndexDB(this);
    }
    return this;
  }
}
```
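A hypothetical usage sketch (the `Person` type, endpoint name, and method name are invented for illustration):
```ts
// Offline, the call is cached in IndexedDB instead of failing outright
const res = await new DeferrableResult<Person>('PersonEndpoint', 'savePerson', person)._request();
if (res.isDeferred) {
  // res.id identifies the cached request that the service worker will replay later
  console.log(`request ${res.id} deferred until back online`);
}
```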
Status: Issue closed |
Synergex/HarmonyCore | 370460165 | Title: Invalid JSON Response on Empty File
Question:
username_0: When querying an empty file we currently generate an invalid response, like this:
{"@odata.context":"https://localhost:8086/odata/$metadata#Cusmasas","value":[
This issue was reported by a customer but has not been verified. If correct, then we should be generating an empty collection / JSON array instead.
Answers:
username_0: I tested this and found it not to be the case. We do generate an invalid response if a file open fails (see issue #40). But if an empty file is encountered, the response looks like this for a collection:
{
"@odata.context": "https://localhost:8086/odata/$metadata#Customers",
"value": [
]
}
And a 404 (not found) is correctly returned if a specific entity is queried.
Status: Issue closed
|
PyConChina/PyConChina2016 | 174259543 | Title: Speaker bios need to be added under each session topic.
Question:
username_0: Also, we would still like a short introduction after each speaker's name on the schedule page http://cn.pycon.org/2016/shenzhen.html.
潘俊勇, founder of easydo.cn (易度云)
丁来强, Splunk
李力, head of R&D for Tencent Cloud's scheduling and management system
River, entrepreneur, CTO of 优趣工作室, author of QPython, former senior systems R&D engineer at Zynga, former R&D engineer at 绿盟科技 (NSFOCUS), former Sina Mail engineer
何世友, CTO of 爱范儿 (ifanr)
张其川, chief designer of CheungSSH
胡国涛, senior engineer on the WEGO project team at 优趣工作室
馋师, backend developer at an internet company
石恩名, senior technical manager at 广州优亿科技有限公司
汤英康, Python programmer at 广州优亿科技
Status: Issue closed
Answers:
username_1: = |
BluSunrize/ImmersiveEngineering | 103974590 | Title: Compiling
Question:
username_0: When I try to compile Immersive Engineering it comes up with errors to do with NEI.
Answers:
username_1: You probably don't have NEI in the libs folder, or something like that.
username_0: How would I do that? I'm only starting to make Minecraft mods and I'm not that advanced.
username_2: If you don't know how to compile mods then you probably shouldn't compile IE, as the people who need help compiling it are often the ones who provide useless error reports; these builds are often buggy and prone to crashes, and unless you know how to fill out bug reports, telling the devs it doesn't work isn't helpful. Another problem with providing help to these people is that it takes valuable time and detracts from time available to actually write the mod. But if you really want to compile it yourself, the best thing to do is to learn how, e.g. by watching Pahimar's Let's Mod Reboot (which gives a good introduction to the concepts you need to know before you can compile it) and then googling the errors. FYI, it took me 3+ hours to compile IE the first time I did it. If you still can't figure it out, just wait for the release in about a week; Blu is usually quite quick.
username_0: Well, I can compile mods and fix bugs, but NEI and WAILA aren't my strong points, so could you just tell me how to do it? I can wait 3 hours; I'm sick and I have nothing else to do. @username_2 please?
username_2: These are the libs needed: AquaTweaks-1.7.10-1.0.jar, CodeChickenCore-1.7.10-1.0.7.47-dev.jar, CodeChickenLib-1.7.10-1.1.3.140-dev.jar, CoFHCore-[1.7.10]3.0.3-303.jar, EquivalentExchange3-1.7.10-0.3.507.jar, industrialcraft-2-2.2.720-experimental-dev.jar, MineFactoryReloaded-[1.7.10]2.8.0-104.jar, MineTweaker3-Dev-1.7.10-3.0.9C.jar, NotEnoughItems-1.7.10-1.0.5.111-dev.jar, Waila-1.5.10_1.7.10.jar. Just reference them in your compiler and you should be good to go. If you do get it to compile, then that is as far as I got.
username_0: Thanks, I'll ask you if I get stuck.
username_3: This github does not give assistance in compiling IE. Developers should be able to figure out libraries themselves.
Status: Issue closed
|
jokkedk/webgrind | 201875212 | Title: Problem with bin/preproccessor
Question:
username_0: Webgrind does not work after I did `make all`.
It creates an empty file cachegrind.out.xxxx.webgrind.
And it works again after `make clear`.
Answers:
username_1: What happens if you run the binary preprocessor manually?
```
'/var/www/html/webgrind/bin/preprocessor' '/tmp/cachegrind.out.1234' '/tmp/cachegrind.out.1234.webgrind' 'php::call_user_func' 'php::call_user_func_array'
```
(Changing paths as applicable to your system.)
What webserver and OS are you using?
Is there anything in the PHP or server error logs?
username_0: It creates a normal /tmp/cachegrind.out.1234.webgrind and I can see it in webgrind.
But it does not work without manual running.
username_0: In PHP this command returns code 1.
username_1: Does the following patch fix the issue?
```diff
diff --git a/library/Preprocessor.php b/library/Preprocessor.php
index 29668f2..afdce34 100644
--- a/library/Preprocessor.php
+++ b/library/Preprocessor.php
@@ -44,11 +44,9 @@ class Webgrind_Preprocessor
*/
static function parse($inFile, $outFile)
{
- $in = @fopen($inFile, 'rb');
- if (!$in)
+ if (!is_readable($inFile))
throw new Exception('Could not open '.$inFile.' for reading.');
- $out = @fopen($outFile, 'w+b');
- if (!$out)
+ if (!touch($outFile) || !is_writeable($outFile))
throw new Exception('Could not open '.$outFile.' for writing.');
// If possible, use the binary preprocessor
@@ -56,6 +54,13 @@ class Webgrind_Preprocessor
return;
}
+ $in = @fopen($inFile, 'rb');
+ if (!$in)
+ throw new Exception('Could not open '.$inFile.' for reading.');
+ $out = @fopen($outFile, 'w+b');
+ if (!$out)
+ throw new Exception('Could not open '.$outFile.' for writing.');
+
$proxyFunctions = array_flip(Webgrind_Config::$proxyFunctions);
$proxyQueue = array();
$nextFuncNr = 0;
```
username_0: No:
Could not open /tmp/cachegrind.out.4195 for reading.
/opt/lampp/htdocs/webgrind/library/Preprocessor.php, line 48
username_1: Looks like a file permissions error. Can you confirm if the user your webserver (serving webgrind) is running under actually has correct permissions to read the cachegrind file?
username_0: The cachegrind file has permissions 644, so everyone can read it.
And without bin/preprocessor webgrind works correctly (only slowly).
Now I don't get any messages; the out file has size = 0.
But if I run '/opt/lampp/htdocs/webgrind/bin/preprocessor' '/tmp/cachegrind.out.2394' '/tmp/cachegrind.out.2394.webgrind' 'php::call_user_func' 'php::call_user_func_array' from the command line, it works correctly.
And sudo -u daemon '/opt/lampp/htdocs/webgrind/bin/preprocessor' '/tmp/cachegrind.out.2394' '/tmp/cachegrind.out.2394.webgrind' 'php::call_user_func' 'php::call_user_func_array' works correctly too.
username_1: Very strange that PHP's `is_readable()` claims the cachegrind file is not readable, when webgrind was able to read it (and the file permissions also say it can be read). Does `is_readable()` always fail, or does it work on some files (other than cachegrind files)?
Is your webserver allowed to call executables; i.e., does a command like `exec("touch /tmp/testFile")` succeed?
If you simply remove the readable/writable checks, and put `self::binaryParse($inFile, $outFile)` at the top of the `parse()` function, does it succeed?
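(A minimal sketch of that reordering, based on the patch above; untested:)
```php
static function parse($inFile, $outFile)
{
    // Try the binary preprocessor first, before any PHP-side readability checks
    if (self::binaryParse($inFile, $outFile)) {
        return;
    }
    // ...otherwise fall through to the pure-PHP parser as before
}
```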
username_1: No response, assuming resolved. Please re-open if not.
Status: Issue closed
|
ukdtom/ExportTools.bundle | 384594776 | Title: Auto Export Feature
Question:
username_0: Feature for setting up a scheduler that will run the export feature every day/week/year etc.
Currently users having to login and manual kick off a export job. Instead users can setup a job to run, lets say every Sunday at 5am.
Status: Issue closed
Answers:
username_1: Due to the fact, that plugin's are EOL soon, I'll not invest time into this
username_1: Yes and no.
Step 1 is closure of the Official plugins.
Step 2 is removal of the framework => No more plugins |
material-components/material-components-android | 748020161 | Title: [MaterialTimePickerDialog] Cannot inherit default theme overlay
Question:
username_0: **Description:** When using `MaterialTimePickerDialog` I cannot seem to inherit from `ThemeOverlay.MaterialComponents.MaterialTimePicker` if I want to override certain theme attributes and provide my own custom theme overlay for `materialTimePickerTheme`. Is that intentional or am I missing something?
**Expected behavior:** Being able to inherit the default theme overlay
**Source code:** For example I want to override just these two text appearances for material time picker dialog theme while also retaining the default attrs provided from the library.
```xml
<style name="ThemeOverlay.App.MaterialTimePicker" parent="ThemeOverlay.MaterialComponents.MaterialTimePicker">
<item name="textAppearanceHeadline3">@style/TextAppearance.App.Headline3</item>
<item name="textAppearanceSubtitle2">@style/TextAppearance.App.Subtitle2</item>
</style>
```
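For completeness, this is how the overlay is referenced from the app theme (a minimal sketch; `Theme.App` and its parent are placeholder names, while `materialTimePickerTheme` is the attribute mentioned above):
```xml
<style name="Theme.App" parent="Theme.MaterialComponents.DayNight.NoActionBar">
    <item name="materialTimePickerTheme">@style/ThemeOverlay.App.MaterialTimePicker</item>
</style>
```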
**Android API version:** 30
**Material Library version:** 1.3.0-alpha03
**Device:** Pixel 3
Status: Issue closed
Answers:
username_0: closing this, since the change got merged in #1892 |