repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M)
---|---|---|
asus4/tf-lite-unity-sample | 664867929 | Title: WebCamTexture has wrong rotation in mobile phone
Question:
username_0: When I tried running the SSD sample on Windows, the camera view works perfectly fine, but on an Android phone the camera view on the screen is upside down. How do I fix the orientation of the camera view?
Answers:
username_1: A simple workaround is overriding `resizeOptions` in SSD.cs. But we need more tests on Android.
https://github.com/username_1/tf-lite-unity-sample/blob/070e98b4a43ba264d3a0f94a9d66d8bdeb8e765f/Assets/Samples/Common/BaseImagePredictor.cs#L40-L48
username_2: I tried changing the various values, but a significant problem remains: the recognition boxes have wrong positions. Probably the transformations of the texture occur after the position calculations are carried out, creating a discrepancy between the texture and the stale calculated positions.
username_0: Thanks! Was able to get it fixed
username_2: Excuse me, do the recognition boxes also work? That is, are they in the right positions and sizes? Because in my case both the positions and the dimensions are wrong (as if they needed to be mirrored).
username_0: It works perfectly fine for me.

username_2: Could you try recognizing a smaller object? For example, by testing recognition of a smartphone. The problem is evident only with objects in the "smaller" box.
username_0: Still no issue for me.

username_2: I solved my problem. I was not using WebCamTexture, but the video stream retrieved through ARFoundation, in order to join the two systems. By modifying the main flow and using the parameters reported above, I managed to correct everything.
Status: Issue closed
|
Sage/carbon | 748876214 | Title: Unit Test on radio button: Warning appears when we run unit test on radio button.
Question:
username_0: ### Current behaviour
run unit test on a radio button raises this warning: Warning: Failed prop type: Invalid prop `mt` of value `4px` supplied to `RadioButton`, expected one of [0,1,2,3,4,5,7]
### Expected behaviour
run unit test on a radio button does not raise an error
### Reproducible example
https://codesandbox.io/s/dreamy-snyder-o5xl7?file=/src/index.js
### Your environment
<!-- PLEASE FILL THIS OUT -->
| Software | Version(s) |
| ---------------- | ---------- |
| carbon-react | 49.2.0 |
| carbon-factory | |
| react-scripts | |
| React | |
| Browser | |
| npm | |
| Operating System | |
Answers:
username_1: Hi @username_0 I'm afraid that error is correct, 4px is currently not valid for the `mt` prop on `RadioButton` - https://github.com/Sage/carbon/blob/master/src/__experimental__/components/radio-button/radio-button.component.js
This component does not yet have the styled-system spacing props added to it so that prop can only be one of `[0,1,2,3,4,5,7]` which are multipliers of `8px`
username_1: @username_0 if you need the styled-system spacing props added to `RadioButton` now we can create a ticket for that as a feature request?
username_1: I'm going to close this issue as it's not a bug in Carbon, just the current behaviour of the `RadioButton` component
Status: Issue closed
username_0: @username_1 Sorry to be late, I didn't see the notification on it. If you saw the sandbox, I don't use the `mt` prop and it raised a warning on it; that's the problem: we don't fill an optional property and a warning is raised when we run unit tests.
username_1: ### Current behaviour
run unit test on a radio button raises this warning: Warning: Failed prop type: Invalid prop `mt` of value `4px` supplied to `RadioButton`, expected one of [0,1,2,3,4,5,7]
### Expected behaviour
run unit test on a radio button does not raise an error
### Reproducible example
https://codesandbox.io/s/dreamy-snyder-o5xl7?file=/src/index.js
### Your environment
<!-- PLEASE FILL THIS OUT -->
| Software | Version(s) |
| ---------------- | ---------- |
| carbon-react | 49.2.0 |
| carbon-factory | |
| react-scripts | |
| React | |
| Browser | |
| npm | |
| Operating System | |
username_1: Apologies @username_0 I misinterpreted the issue there, I will reopen
username_1: This is caused by https://github.com/Sage/carbon/blob/3acacda960ef02e03577a09f38ea9c374d28c584/src/__experimental__/components/radio-button/radio-button-group.component.js#L103
We either need to change this, or add the styled-system spacing props to RadioButton (which is due to be done anyway)
username_1: FE-3373
username_2: Resolved in https://github.com/Sage/carbon/releases/tag/v66.12.0
Status: Issue closed
|
DoESLiverpool/Tosca | 152031372 | Title: Design cabinet for motors and power supply
Question:
username_0: For use in Dinky but as light and portable for other performances. Needs to allow access to switches, reset buttons etc. but not dangerous parts. Maybe polar bear themed.
Answers:
username_1: *For use in Dinky but as light and portable for other performances. Needs
to allow access to switches, reset buttons etc. but not dangerous parts.
Maybe polar bear themed.*
I appreciate that there is probably a reason for this, but I just love that
it sounds like the request of someone who is a bit obsessed with polar
bears, and is just trying to drop that in casually. |
Azure/azure-xplat-cli | 81208259 | Title: ASM: Prepare design document for local networks and to manage association between local network and vnet
Question:
username_0: Done with design documentation, review in progress. Keeping this issue open until review & implementation is done
Answers:
username_1: @username_0 `network vnet local-network add` & `remove` commands are working fine now.
username_2: @username_3 this can be closed
Status: Issue closed
|
alash3al/httpsify | 568551931 | Title: Please use tags for futher releases in docker
Question:
username_0: Just found out that the whole CLI API changed after updating my cluster, resulting in a dead container. It would be nice to avoid such a problem in the future. There are two main concerns: stability of the CLI API and tags for docker images.
P.S. Also, the readme became obsolete after the removal of the `--redirect` flag.
Answers:
username_1: I'm sorry @username_0 , I just released that major change and deprecated all old versions due to the deprecation of the underlying Let's Encrypt API.
You're right, I should make use of tags so as not to cause issues in any production environment.
Sorry again,
username_0: Okay then
Status: Issue closed
|
danielgindi/Charts | 910420827 | Title: Feature Request: Add gradient colours to bar chart bars
Question:
username_0: Is there a way to add gradient colours to a bar chart?
If not, can we expect a future version with this functionality?
I also saw a pending PR regarding the same issue:
PR #4411
Answers:
username_1: I also needed this feature; finally I found a solution.
```swift
enum GradientDirection {
    case leftToRight
    case rightToLeft
    case topToBottom
    case bottomToTop
}

class GradientBarChartRenderer: BarChartRenderer {
    var gradientColors: [NSUIColor] = []
    var gradientDirection: GradientDirection = .topToBottom

    typealias Buffer = [CGRect]
    fileprivate var _buffers = [Buffer]()

    override func initBuffers() {
        super.initBuffers()
        guard let barData = dataProvider?.barData else { return _buffers.removeAll() }
        if _buffers.count != barData.count {
            while _buffers.count < barData.count {
                _buffers.append(Buffer())
            }
            while _buffers.count > barData.count {
                _buffers.removeLast()
            }
        }
        _buffers = zip(_buffers, barData).map { buffer, set -> Buffer in
            let set = set as! BarChartDataSetProtocol
            let size = set.entryCount * (set.isStacked ? set.stackSize : 1)
            return buffer.count == size
                ? buffer
                : Buffer(repeating: .zero, count: size)
        }
    }

    private func prepareBuffer(dataSet: BarChartDataSetProtocol, index: Int) {
        guard
            let dataProvider = dataProvider,
            let barData = dataProvider.barData
        else { return }
        let barWidthHalf = CGFloat(barData.barWidth / 2.0)
        var bufferIndex = 0
        let containsStacks = dataSet.isStacked
        let isInverted = dataProvider.isInverted(axis: dataSet.axisDependency)
        let phaseY = CGFloat(animator.phaseY)
        for i in (0 ..< dataSet.entryCount).clamped(to: 0 ..< Int(ceil(Double(dataSet.entryCount) * animator.phaseX))) {
            guard let e = dataSet.entryForIndex(i) as? BarChartDataEntry else { continue }
            let x = CGFloat(e.x)
            let left = x - barWidthHalf
            let right = x + barWidthHalf
            var y = e.y
[Truncated]
        default:
            break
        }
        view.layer.insertSublayer(gradient, at: 0)
    }

    func image(with view: NSUIView) -> NSUIImage? {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
        defer { UIGraphicsEndImageContext() }
        if let context = UIGraphicsGetCurrentContext() {
            view.layer.render(in: context)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            return image
        }
        return nil
    }
}
```
use it with
`chartView.renderer = GradientBarChartRenderer(dataProvider: chartView, animator: chartView.chartAnimator, viewPortHandler: chartView.viewPortHandler)` |
rust-lang/rust-clippy | 490599159 | Title: clippy::common-cargo-metadata ignores license-file metadata
Question:
username_0: The lint `cargo::common-cargo-metadata` checks for the `license` property among other things. However, it ignores the presence of the `license-file` property, which can be used instead of `license` ([when the license is proprietary](https://doc.rust-lang.org/cargo/reference/manifest.html)).
I would like to suggest checking for `license-file` as a fallback for a missing `license` before emitting the warning.<issue_closed>
Status: Issue closed |
Azure/autorest | 424662071 | Title: [email protected] and later hang on Linux due to bad shebang
Question:
username_0: `[email protected]` and later versions (including `3.0.5165`, the current `autorest@beta`) hang on direct invocation with any arguments. Example:
```sh
cd "$(mktemp -d)"
npm init -y
npm install autorest@beta
./node_modules/.bin/autorest --help # Consumes 100% CPU indefinitely, no output
```
This is due to the shebang line `#!/usr/bin/env node --max_old_space_size=16384` which was added in 2c9d5a5a3. Since [Linux does not split arguments in shebangs](https://www.in-ulm.de/~mascheck/various/shebang/#splitting), this causes the following sequence:
1. The kernel reads then parses the shebang and executes `/usr/bin/env "node --max_old_space_size=16384" ./node_modules/.bin/autorest`
2. `env` sets an environment variable named `node --max_old_space_size` to `16384` then executes `./node_modules/.bin/autorest`.
3. Goto 1
So the process becomes stuck in this loop indefinitely.
Thanks for considering,
Kevin
Answers:
username_1: O_o.
Hmm. @$%$*)%
That is a real damn pity. I need that.
` node ./node_modules/.bin/autorest` works... um, I gotta go see if there's another way to smack that around.
username_1: I removed it from the line. You'll have to `npm install autorest@beta` again in a few minutes.
Oddly enough, env does the right thing from the shell, but not here.
username_0: Probably because argument splitting does not occur on the shebang. If you'd like help working through the behavior differences, let me know.
username_1: We're boned.
I don't think there is a cross-platform way that this can be solved without changing `cmd-shim` in the `npm` repo.
username_0: Would it be possible to use [`v8.setFlagsFromString`](https://nodejs.org/api/v8.html#v8_v8_setflagsfromstring_flags)? (e.g. `v8.setFlagsFromString('--max_old_space_size=16384');`)
username_1: I was not aware of that.
username_0: Haha! Yes indeed.
I hope it works reasonably well. I haven't used `v8.setFlagsFromString` much, and the warning in the documentation is pretty dire. But, if `--max_old_space_size` is supported, it seems perfect.
username_1: Nope. Didn't work.
The max heap space doesn't change.

username_0: Yep, we're boned. I should have checked that.
I haven't been able to find a portable workaround that doesn't use some sort of wrapper/launcher script yet, but I'll keep thinking. The two workarounds I have come up with so far are pretty gross:
1. If `--max_old_space_size` was not passed, re-launch using `child_process.spawnSync` then exit with its exit code. Similar to [`exec(3)`](https://manpages.debian.org/stretch/manpages-dev/exec.3.en.html), except that the parent process remains until exit (and some additional challenges checking/preserving V8 flags).
2. Instead of setting `bin` in `package.json` and relying on `npm` to generate a wrapper, use a `postinstall` script to install an appropriate wrapper for the platform which includes `--max_old_space_size`. It would require some subtlety finding the bin dir for global installations, but otherwise I think it would be straightforward.
Status: Issue closed
username_1: I fixed that a while back. It now spawns another instance if it needs to add the switch.
username_2: This issue is still applicable. I just installed autorest and it does not work.
username_1: I just installed it and it works fine.
What precisely did you do?
username_3: Make sure you're using the right AutoRest version. The `autorest` command no longer uses that shebang and configures the heap size using v8 APIs:
https://github.com/Azure/autorest/blob/master/autorest/entrypoints/app.js#L1
Try installing the latest build:
```
npm install -g https://github.com/Azure/autorest/releases/download/autorest-3.0.6194/autorest-3.0.6194.tgz
```
I'm running this on the Linux distribution Guix with Node 10.16.0 and it works fine. |
cytoscape/cyREST | 295596787 | Title: Collections Swagger could use more details
Question:
username_0: From a conversation I had with @AlexanderPico, the Collections Swagger API doesn't adequately tell you how it's different from the regular Networks API. It is different, but the docs need to explain fully why.
Answers:
username_0: Addressed in 3ca6036651336ad3ab5e9223575f1e2a6d7d14ba
Awaiting release.
Status: Issue closed
|
traccar/traccar | 366390119 | Title: Installer ends with a getcwd() error
Question:
username_0: ```
# ./traccar.run
Creating directory out
Verifying archive integrity... 100% All good.
Uncompressing traccar 100%
Synchronizing state of traccar.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable traccar
sh: 0: getcwd() failed: No such file or directory
```
Answers:
username_1: It worked for me. Please provide OS details.
username_0: ```
# uname -a
Linux whatever 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u (2018-08-21) x86_64 GNU/Linux
```
```
# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 9.5 (stretch)
Release: 9.5
Codename: stretch
```
Status: Issue closed
username_1: Please try with updated script.
username_0: Please do not close until fix is confirmed.
username_1: Please don't invent your own policies here. I can re-open if you confirm that it's still broken. I'm not going to re-open just for you to test it.
username_0: Well, it's still broken, then. I can confirm.
username_1: Then? Have you tested it in less than 2 minutes?
username_0: No.
username_0: I did NOT confirm that the fix was right. So please keep the issue open, until I confirm.
username_0: NOW this looks good:
```
# ./traccar.run
Creating directory out
Verifying archive integrity... 100% All good.
Uncompressing traccar 100%
Synchronizing state of traccar.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable traccar
#
```
So NOW, you can close this one. Thanks for fixing this. |
codeformilwaukee/hack-night-planning | 556268642 | Title: Find space for next 3-4 hack nights (maybe 2-3 more likely :/)
Question:
username_0: February - (Bolton hall as backup)
March - DS?
April - ?
May - ?
Answers:
username_1: Possibly Ward 4 space with the Commons and possible collab with [CO:Lab](https://www.colabmke.com/)
username_2: February - Clean Slate Milwaukee says we may be able to use the Welford Sanders Enterprise Center in Harambee.
March - Who is on point from Core Team on connecting with Direct Supply on usage of their space?
Ward 4 has not responded to previous attempts to contact them regarding space usage. Collaboration with CO:Lab should be thought out in a separate issue.
username_1: We already have the ask out with Commons. She’s supposed to get back to us
today. Let’s just wait.
The co:lab collab can be brought up once we know it’s a thing.
Romke
username_2: If there is an ask going out for usage of space, the rest of the team needs to be made aware of it for transparency and accountability. Let us know what happens.
username_1: There was on slack. Someone asked to reached out to them so I did.
Romke
username_0: Yeah. Thanks for reaching out Romke. Let's see what whoever Romke asked
says regarding Feb. I'll tell Shanyeill that I'll confirm whether or not we
would like the Welford Sanders Center, which from my recollection has decent
wifi, but I will inquire on that as well.
username_2: > There was on slack. Someone asked to reached out to them so I did. Romke
username_0: I think asking the Commons now is good because we will also need spaces for
pj workshops, internship related sessions (I think), and future hack
nights. So all good in my book
username_1: Commons is out for Feb.
username_2: March - Virtual COVID-19 Contingency
April - The Commons at Ward 4
May - Try United Way again?
username_0: Or we may want to do it at Continuus? Depends on where things are at too.
username_2: Definitely, we'll have to play this by ear depending on how the COVID-19 situation ends up. We'll see how Continuus is for the PF meetup.
username_3: Internal slack chat suggests we might try Virtual for April Hack Night too. |
mrdoob/three.js | 532017312 | Title: can we make alphamap_fragment.glsl sample red channel?
Question:
username_0: A mix of feature request and question. I have pbr model here that is set up like this:

See, to match the 3js `combined OcclusionRoughnessMetallic (RGB) texture` (#16325 ?) I only have to swap G → B and R → G; however, I cannot just move the opacity mask B → R, because [it appears to be sampled from G](https://github.com/mrdoob/three.js/blob/dev/src/renderers/shaders/ShaderChunk/alphamap_fragment.glsl.js), conflicting with roughness.
Now I know that the R channel is supposed to be used by AO in 3js, but this model does not use AO. I also do not know how common this type of material is (i.e. whether they mass-follow some tutorial, or the tool creates it, or they just connected the nodes at random); I just know that it is out there, so please consider whether the alpha sampling can/should be moved.
Status: Issue closed
Answers:
username_0: @username_1 well I guess it is time to write some onBeforeCompile hack ¯\_(ヅ)_/¯ |
coldfix/udiskie | 200721754 | Title: Tray doesn't work as a service
Question:
username_0: From #129, @username_1:
I'm finding that the smart tray doesn't seem to work with the posted service file and the corrected argument. In my config.yml, I also have tray: auto. No tray is displayed with my USB plugged in.
The tray only appears if I start another instance using udiskie --smart-tray in the terminal, after udiskie is already running via the service.
Answers:
username_1: @username_0 Was this reproducible on your machine?
username_0: I hope to have time to check this evening. I believe I have experienced a similar issue in the past when starting via service files (irrespective of the auto/smart flag). It could be related to X not being started up when udiskie creates the `Gtk.StatusIcon` instance. In this case the most straightforward fix would be to start the icon with your window manager (`.xinitrc` or autostart applications). (You could start the icon without automounting, and a background daemon via service file, if you need automounting without X login.)
username_2: I'm also having this problem--any updates?
And might as well ask questions: Is `--eject` option useful at all for hard disk drives or flash drives? Is `unmount` "safe" without also using `--detach` or is `--detach` recommended after an `unmount` in all cases?
username_0: My recommendation for now is: do not use udiskie tray/notifications as a systemd service.
GUI service components should be started with the window manager, i.e. in `.xinitrc` or in your window manager's autostart feature.
If you sometimes log in without window manager but still want udiskie running, you can start an udiskie instance without notifications/tray via systemd. In this case you would start an additional udiskie instance with notifications/tray but without automount with the window manager as described above.
I'm pretty short on time right now, and this is not my top priority, since there is a workaround.
username_0: About your other questions:
- `--eject` should be useful only for CD drives, where it physically ejects the CD. It is sometimes also available for other media. In that case, it somehow disables them but does not power down. Not recommended.
- If you wait for completion, unmount ensures that the disc will be synced and the drive is not left in an inconsistent state. I'm not an expert on hardware and cannot speak to potential damage to the disc from voltage peaks during removal without powering down first, but I *think* `unmount` should be safe without detach in most cases. If you want a better-founded estimate, you should ask someone else. If you get information on this, please notify me:)
Personally, I usually go for detach when I know I'm gonna remove the drive physically. |
PyLadiesCZ/pyladies.cz | 129979261 | Title: Design a layout for temporarily promoting an event on the home page
Question:
username_0: At some point we will need to design a layout for situations where we need to temporarily promote an event on the home page (a new course, a new city, a DjangoGirls workshop).
(Proposed by @encukou)
Answers:
username_1: okay
username_1: So how about making a slider on the home page, even though they're not used much anymore, and promoting these events there? And if there's no event, we leave the static image there?
username_2: That sounds like a good idea to me!
Sent from my iPad
username_1: I'll make a carousel there that will be commented out, and depending on need either the header with the image or the slider gets uncommented. I would work this in later, once the site is live, because I don't think there are any events right now that need promoting in a slider.
username_1: But I'm not sure a slider is suitable if it's supposed to work without JS, and they're usually hidden on mobile. What if we put it here, into the current meetups and courses? Django Girls or another event would be added there when it takes place in that city. http://pyladies.cz/brno_info/
username_1: For example something like this, with a link to the Django site <img width="1280" alt="snimek obrazovky 2016-04-08 v 10 28 32" src="https://cloud.githubusercontent.com/assets/3219358/14378351/b2d886d0-fd74-11e5-967a-e988c9c77e7c.png"> For now I can't think of where we would put it on the homepage, unless a temporary new block were made
username_1: What do you think, @username_2 and @username_0? Alternatively I can come up with something else
username_2: Well, that would also have been useful for today, since there's a change and Django Girls is coming up!!!
Sent from my iPad
username_1: What is today's change? And where? And when will Django Girls be? We really could put that on the site
username_2: DG is only being planned, there's still time.
Today, for example, the location of the beginners' meetup changed. But everyone should
know about it, because it's on Facebook
username_1: Ah, so today it's at Apiary? Then it's OK if it's on FB :) So can we leave it here on the city page, and shall I close this? Three columns are prepared in the HTML, and I wouldn't change the home page; all those events have their own websites anyway and are promoted on FB, TW, etc.
username_2: I wouldn't write about today on the site anymore, and feel free to close this if it bothers you?
Apparently you have nothing to do today :D well, I guess I'll have to give you some tasks!!!!!!
username_1: But I do :D I'm working on the Django backend for czehitas, but when I have a lot of tasks lit up I get unhappy :D
Status: Issue closed
|
stkenny/grefine-rdf-extension | 776601813 | Title: Is there an API for DBpedia that allows me to reconcile?
Question:
username_0: I have reconciled my data on Wikidata, but was looking for an API that I can use with DBpedia.
I found this website: https://stackoverflow.com/questions/30728563/reconciliation-services-for-openrefine-not-working, where they talk about a SPARQL-based reconciliation service against DBPedia. But I don't know which endpoint to load.
Answers:
username_1: Hi, yes, the example at https://github.com/username_1/grefine-rdf-extension/wiki/Example-SPARQL-Endpoint-Reconciliation uses DBPedia. The endpoint is https://dbpedia.org/sparql. You should note though that the public endpoint is quite slow and I think rate limited, so if you have a large number of rows it will either take a long time or might timeout.
Status: Issue closed
username_0: oh, thank you |
amritbhanu/CSC722_Spam | 273908383 | Title: report
Question:
username_0: @username_1 can you start on the report?
And just check the results; I have done the preprocessing and feature extraction with a few basic learners. One thing is left out, which is using SMOTE.
Anything else we should do?
Answers:
username_1: Sure, will get started on it too
username_0: have you created the latex document or something? Can you share?
username_1: You wanna do it in latex or on regular google doc?
username_1: Will get started on it this weekend, will also see if I can add a few extra methods
username_0: I think latex would be better
username_1: It would look better, but also harder to produce the 15+ pages he wants. We can try though, we can either use sharelatex or just put the latex project here
username_0: we can choose 1 column article through latex which will get us to 15+ pages.
username_1: Created the sharelatex doc and invited you, will keep writing on it throughout the day. I chose a relatively simple template, but don't fully love it so we might have to change to something else, let me know if you have any suggestions, and feel free to edit anything too.
Will try to add a few more learners to the ML class today too, since one is always predicting 0.
username_0: Gui, any update on the code and the report? We have got a lot to catch up on.
username_0: the ML which is predicting 0 might be due to class imbalance. Might need to use SMOTE like technique for balancing the class distribution
username_1: Just updated the report with the introduction and the structure of the paper, and am also looking into applying some of the learners we saw in class this semester, which might be a little annoying
username_1: Also, I'm having some trouble with the bibliography (it always f**ks me up); if you can figure it out, just fix one instance there and I'll fix the rest. Gonna get started on the related work, so it's gonna be a mess of unreferenced citations
username_0: Just fixed it. I will work on some of those sections on Sunday; before that I've got some work to do. Yeah, the literature survey is the big deal.
username_0: You were doing it right, you just needed to add the bibliographystyle. |
symfony/symfony | 193535072 | Title: [Logging] Exception logging is not very extendable or readable
Question:
username_0: Currently it is very hard to extend `Symfony\Component\Debug\ErrorHandler`. And at the same time it produces log messages that are not particularly readable. Here's an example of a log message this handler produces:
```
Fatal error: Call to undefined function asdasd()
```
There is no easy way to render the `Symfony\Component\Debug\Exception\FatalThrowableError` exception differently, the only way I see atm is to extend `ErrorHandler`, override `handleException` in it, but I'd need to copy-paste contents of the `handleException` that I'm overriding (and this method is pretty significant). Apart from that, in order for this copy-pasted method to work, I need access to some private fields from the base `ErrorHandler` (`loggedErrors`, `loggers`, `exceptionHandler` to name a few). In order to get access to these fields I'd need to override all methods that can mutate these and keep track of their copies in my custom `ErrorHandler` (effectively copy and pasting all those mutators as well).
All I needed to do was to replace `$this->loggers[$e['type']][0]->log($this->loggers[$e['type']][1], $message, $e);` with more readable `$this->loggers[$e['type']][0]->log($this->loggers[$e['type']][1], (string)$exception);`. Now I'm not saying that we should render exceptions in `ErrorHandler` my way, that's irrelevant to me if I can always change the way they're rendered.
Maybe we should at least create a `protected logThrowable(\Throwable)` method in this class?
Answers:
username_0: Irrelevant after 3.2.0, 8f24549 added exceptions to the context so it's all possible now
Status: Issue closed
|
facebook/chisel | 237079885 | Title: bmessage -[ViewController viewDidLoad] can't make a breakpoint.
Question:
username_0: I use the command `bmessage -[ViewController viewDidLoad]`, and the Xcode console shows
Setting a breakpoint at -[ViewController viewDidLoad] with condition (void*)object_getClass((id)$r0) == 0x001f9738
Breakpoint 2: where = GrowingTest`-[ViewController viewDidLoad] at ViewController.m:23, address = 0x00104940
but it doesn't stop; it just runs as if there were no breakpoint.
Answers:
username_0: It works on iPhone, but doesn't work on the simulator.
username_1: does your view controller happen to be using KVO?
username_0: No, I found that when I hook this method with the runtime, it works.
Status: Issue closed
|
pandas-dev/pandas | 375513794 | Title: netCDF4 / cftime real_datetime object is not supported
Question:
username_0: Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/users/timo/oggm_venv/lib/python3.7/site-packages/pandas/core/series.py", line 183, in __init__
index = _ensure_index(index)
File "/home/users/timo/oggm_venv/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 4974, in _ensure_index
return Index(index_like)
File "/home/users/timo/oggm_venv/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 417, in __new__
name=name, **kwargs)
File "/home/users/timo/oggm_venv/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py", line 399, in __new__
yearfirst=yearfirst)
File "/home/users/timo/oggm_venv/lib/python3.7/site-packages/pandas/core/tools/datetimes.py", line 467, in to_datetime
result = _convert_listlike(arg, box, format)
File "/home/users/timo/oggm_venv/lib/python3.7/site-packages/pandas/core/tools/datetimes.py", line 368, in _convert_listlike
require_iso8601=require_iso8601
File "pandas/_libs/tslib.pyx", line 492, in pandas._libs.tslib.array_to_datetime
File "pandas/_libs/tslib.pyx", line 544, in pandas._libs.tslib.array_to_datetime
AttributeError: 'real_datetime' object has no attribute 'nanosecond'
```
#### Expected Output
Well, not an error would be nice.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.1.final.0
python-bits: 64
OS: Linux
OS-release: 4.14.78-gentoo
machine: x86_64
processor: Intel(R) Xeon(R) CPU E5-2623 v4 @ 2.60GHz
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.23.4
pytest: 3.9.3
pip: 18.1
setuptools: 40.5.0
Cython: 0.29
numpy: 1.15.3
scipy: 1.1.0
pyarrow: None
xarray: 0.10.9
IPython: None
sphinx: None
patsy: None
dateutil: 2.7.5
pytz: 2018.7
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 3.0.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
netCDF4 version is 1.4.2
Answers:
username_1: I’m guessing these subclass datetime.datetime?
username_2: IIUC, this looks like an ask to support another class object for indexing purposes. Not sure if that is something we want to do *per se*...
cc @username_6
username_3: cc @username_5 @username_4 as developers of cftime (https://github.com/Unidata/cftime) - indeed I'm not sure if this is a pandas issue or a cftime issue...
username_1: It looks like `real_datetime` does subclass `datetime.datetime`. The issue is caused by the fact that pandas' timestamp-conversion functions use a heuristic that assumes any datetime subclass is a `pd.Timestamp`.
@username_0 can you try pinning a `nanosecond = 0` attribute to `real_datetime` and see if that solves the problem?
username_4: @username_5 is going to be the definitive solution here. He wrote the [CFTimeIndex](http://xarray.pydata.org/en/latest/generated/xarray.CFTimeIndex.html#xarray.CFTimeIndex) to provide the sort of functionality that @username_0 is trying to achieve here.
username_0: I added "if not hasattr(t, 'nanosecond'): t.nanosecond = 0" and it now runs through smoothly.
An idea I would have to address this on the pandas side would be to replace PyDateTime_CheckExact(val) check at
https://github.com/pandas-dev/pandas/blob/7191af9b47f6f57991abdb20625d9563877370a2/pandas/_libs/tslib.pyx#L539
with a hasattr(val, 'nanosecond'), assuming that's possible in that context.
username_1: We considered that and decided against it because of the performance impact; `hasattr` is a python call whereas `PyDateTime_CheckExact` is a C call.
We may be able to figure out a C-speed way to do a check equivalent to `isinstance(obj, _Timestamp)`.
username_5: Yes, @username_0, I would recommend trying an `xarray.CFTimeIndex`. The one caveat is that it requires using purely objects that inherit from `cftime.datetime` objects (which mimic `datetime.datetime` objects, but don't ultimately inherit from them), but this is easily accomplished in recent versions of cftime which include an `only_use_cftime_datetimes` argument to `num2date`. Here's a short example:
```
In [1]: import cftime; import pandas as pd; import xarray as xr
In [2]: times = cftime.num2date([0,1,2,3,4,5,500,1000], units='days since 1801-10-01',
...: only_use_cftime_datetimes=True)
In [3]: pd.Series(range(len(times)), index=xr.CFTimeIndex(times))
Out[3]:
1801-10-01 00:00:00 0
1801-10-02 00:00:00 1
1801-10-03 00:00:00 2
1801-10-04 00:00:00 3
1801-10-05 00:00:00 4
1801-10-06 00:00:00 5
1803-02-13 00:00:00 6
1804-06-27 00:00:00 7
dtype: int64
```
username_3: We have solved the issue downstream, but it still looks like a regression to me (it used to work - albeit possibly for the wrong reasons). Do you by chance know what broke here? Is it a change in netcdf4/cftime or a change in pandas?
username_6: I suppose we would take a patch to allow arbitrary datetime-like scalars here, but these are not supported in any way.
username_5: @username_3 it looks like a recent change in cftime broke this. In the past for standard calendars, by default `cftime.num2date` would return an array of normal `datetime.datetime` objects. As of https://github.com/Unidata/cftime/pull/66 it now returns an array of `cftime.real_datetime` objects (which subclass `datetime.datetime`).
So I agree with @username_6 that this is not a pandas issue -- it might be worth bringing this up in the cftime issue tracker to discuss options there.
username_3: Thanks pandas team for the quick help!
We moved the discussion to https://github.com/Unidata/cftime/issues/77 instead.
Feel free to close the issue; for some reason I can't do it
Status: Issue closed
|
elastic/azure-marketplace | 334317525 | Title: Switch to using OpenJDK
Question:
username_0: Opening this issue to discuss moving from using Oracle JDK to OpenJDK.
The Oracle JDK is installed using the apt package oracle-java8-installer. When Oracle releases a newer minor/patch version of a JDK, the current version is moved to an archive download URL, causing the apt package installation to break. Updates to the apt package can trail a JDK update by some time, causing the template to break until a new apt package is released, or requiring [the current package to be patched](https://github.com/elastic/azure-marketplace/blob/master/src/scripts/elasticsearch-ubuntu-install.sh#L291-L306) in the meantime.
There is an apt package for OpenJDK on Ubuntu, openjdk-8-jdk, that could be used instead of oracle-java8-installer. Moving to using this package does pose a few questions
1. Is using this package more reliable than using oracle-java8-installer?
2. Is this package regularly updated for Ubuntu?
3. What pros/cons are there to using Oracle JDK vs. Open JDK e.g. tooling?
Another option to consider is building private base VMs to use for the template. These VMs would have any prerequisites such as Oracle JDK already installed.<issue_closed>
Status: Issue closed |
kubernetes-sigs/kind | 368398406 | Title: Accessing host volumes through kind
Question:
username_0: It seems there is no way to request some host volumes to be mounted inside the kind container. So then pods cannot access those host volumes through `hostPath`. This is useful for us to bring testing data in.
Answers:
username_1: definitely, this relates to #28, mostly we need to think about how best to expose this. cc @munnerz
username_1: There is not, that sort of thing would create some interesting stability guarantees, eg at some point I think we may want to move to a docker client library instead of `exec`ing `docker` at which point any option to hook it with arbitrary args would no longer work.
I'd like to avoid breaking changes going forward as much as possible, so I'd like to think about how exactly we expose options like this. I think we can provide config for host path mounts specifically, and think a bit about how that relates to multi-node (which is forthcoming...)
The closest work around right now is patching kind itself...
username_1: /priority backlog
I definitely intend to support this
username_1: /assign
/lifecycle active
I'll file a PR after https://github.com/kubernetes-sigs/kind/pull/77, also going to clean up the node lifecycle stuff a bit along with it.
username_2: +1 noted
username_1: [Using this to track volumes as well IE #246]
Multi-node is pretty solidly landed, experimenting with this again currently. This is actually active again, apologies for the delay.
username_1: [apologies, turns out the config format for this is tricky and very coupled to docker ... stay tuned ...]
The docker client library is imo, even worse for this. Evaluating options to make this better than literally plumbing through extra user provided flags to the container creation.
username_0: Looking forward to this. For now we have to use a fork to pass those arguments through.
username_1: I did some more looking yesterday, what we can do to start is mimic the format used by Kubernetes CRI to describe container mounts, which is of course quite reasonable.
/priority important-soon
username_1: We discussed this in today's meeting, triaging to the next minor release ideally, and following Kubernetes CRI for how to describe this.
username_1: Fixed in https://github.com/kubernetes-sigs/kind/pull/313
Quick example usage here: https://github.com/kubernetes-sigs/kind/pull/313#issuecomment-464967105
Will document more as we finish fleshing it out.
Status: Issue closed
username_0: Awesome! Thanks!
username_1: Sorry for the delay again, this has worked for a while but there was some debate about the best way to borrow from the upstream CRI concepts (vendor vs copy etc.).. we came to agreement in today's meeting so it's in now 😅
username_0: No worries. I have been using a fork in meantime. This is now the last thing which required a fork for me. Which is great. I will test this out soon.
username_1: Great!
I think we will need some UX improvements (relative paths?) but it's a start 😅
username_3: I guess for most testing environments this is not going to be a problem (because AFAIK most envs care about a 1 master + 1 worker setup) but fwiw this is not going to work with multiple workers, right? |
flood-io/element | 475620829 | Title: Ability to mark a step as repeatable
Question:
username_0: ```ts
step.once("Login", b => {})
step.repeat(5, "Increment counter", b => { ... })
step.while(() => {return true}, "Increment counter", b => { ... })
step("regular step", b => {})
```
Answers:
username_0: Prior art: https://gatling.io/docs/current/general/scenario#conditional-statements
Status: Issue closed
|
redux-saga/redux-saga-beginner-tutorial | 994197205 | Title: Uncaught TypeError: (0 , _sagas.helloSaga) is not a function
Question:
username_0: 
I'm having an error on these lines of code. I tried adding the parentheses and removing them, but it seems the error still persists. I also removed the default from sagas.js.
sagaMiddleware.run(helloSaga());
sagaMiddleware.run(rootSaga()); |
andrew09/gitfiles | 273179746 | Title: Add tests, immutable.js, and proptypes to existing components
Question:
username_0: ##Testing
[ ] - Create test environment with (Jest)[https://facebook.github.io/jest/], (Enzyme)[http://airbnb.io/enzyme/], and (Chai)[http://chaijs.com/]
[ ] - Write tests for the couple of initial existing components
##Immutable.js
[ ] - Convert all (if any) data to be immutable
##PropTypes
[ ] - Add `PropTypes` checking for initial existing components
[ ] - install and import `react-immutable-proptypes` for checking immutable props (if applicable)<issue_closed>
Status: Issue closed |
axios/axios | 324741143 | Title: Cookies headers is present but Cookies are not stored in browser
Question:
username_0: I'm making cross-domain request between two subdomains: `sub1.local` and `sub2.local`. Both are on local, as you see.
So I send `POST` from `sub1.local` to `sub2.local`.
I receive a Cookie header: `Set-Cookie: name=value; expires=Sun, 20-May-2018 21:38:08 GMT; Max-Age=3600; path=/; domain=.local`.
But there is no Cookie in browser storage.
All response headers:
```
HTTP/1.1 200 OK
Date: Sun, 20 May 2018 20:43:05 GMT
Server: Apache
Set-Cookie: name=value; expires=Sun, 20-May-2018 21:43:05 GMT; Max-Age=3600; path=/; domain=.nospampls
Cache-Control: no-cache, private
Access-Control-Allow-Origin: http://sub1.local:8080
Vary: Origin
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
Content-Length: 2
Keep-Alive: timeout=10, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
```
What can be wrong?
Answers:
username_1: Hello @username_0 ,
I think you need to indicate whether or not the response to the request can be exposed to the page.
Could you try to add the following middleware on your Express backend API (sub2.local).
```
app.use(function(req, res, next) {
res.header('Access-Control-Allow-Credentials', true);
res.header('Access-Control-Allow-Origin', req.headers.origin);
res.header('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,UPDATE,OPTIONS');
res.header('Access-Control-Allow-Headers', 'X-Requested-With, X-HTTP-Method-Override, Content-Type, Accept');
next();
});
```
Once in production, don't forget to modify req.headers.origin to the exact website you wish to allow to connect.
username_0: @username_1 Hello
I'm using Laravel as backend API.
I'm also using [laravel-cors](https://github.com/barryvdh/laravel-cors) for handling cross-domain requests. At the moment all methods are allowed, credentials are `true`.
If backend is able to send Cookie but Vue app doesn't accept them, there might be a problem not with Vue but with backend?
username_1: _I'm also using laravel-cors for handling cross-domain requests. At the moment all methods are allowed, credentials are true._
=> Did you also allow origins? (and headers, to anticipate issues)
```
'allowedOrigins' => ['*'],
'allowedHeaders' => ['Content-Type', 'X-Requested-With']
```
_If backend is able to send Cookie but Vue app doesn't accept them, there might be a problem not with Vue but with backend?_
=> Agreed, hence my answer: your backend needs to indicate whether or not the response is exposed. Technically the blocker is your browser (if you test it in Postman, I think it should work... a good test to run btw). That's why your backend needs to play by your browser's rules.
username_0: @username_1 yes, I've allowed origins and all headers are allowed too. My current cors config:
```
'supportsCredentials' => true,
'allowedOrigins' => ['*'],
'allowedOriginsPatterns' => [],
'allowedHeaders' => ['*'],
'allowedMethods' => ['*'],
'exposedHeaders' => [],
'maxAge' => 0,
```
I've tested in a Postman analog (Insomnia) - everything was good...
username_1: ... weird. If Insomnia works well (I mean you receive cookies) and you can't see your cookies in your browser, my 2 cents is that we should look for a backend fix. I'm not familiar with Laravel, but I would try doing some fine tuning of the cors config. Sorry, can't help further :(
username_0: I've found an answer on Stack Overflow about Chrome not setting cookies if there is a port in the domain URL. Can I remove the port from the domain URL? I'm using vue-cli.
username_2: @username_0 did you find a solution for it?
username_0: @username_2 can't answer unequivocally. Sometimes I have just restarted server and it worked...
And also [your request might have `withCredentials = true` flag](https://developer.mozilla.org/uk/docs/Web/HTTP/CORS#Requests_with_credentials). Can be various implementations of this, depending on what tool do you use to make AJAX requests.
username_0: Note that `withCredentials` flag might be not in the `data` parameter:
```js
axios.post('domain.com', {
//data parameter
withCredentials: true, //wrong
name: 'name',
password: '<PASSWORD>',
})
```
but
```js
axios.post('domain.com', {
name: 'name',
password: '<PASSWORD>'
}, {
//AxiosRequestConfig parameter
withCredentials: true //correct
})
```
Status: Issue closed
username_3: I faced the same issue in Chrome. My URL is localhost:3000. Looks like Chrome doesn't like it when a port is in the URL. However, I tested with Firefox and it worked fine.
username_4: For any poor SOB who lands on this page as I did, `127.0.0.1` != `localhost`. If you are serving your site from `localhost` (port whatever) and making CORS requests to `127.0.0.1` (port whatever), cookies will not stick! Ensure you use either only `localhost` or only `127.0.0.1`, but do not mix and match.
username_5: same problem
username_6: how to fix this issue?
username_7: MAGIC! Thanks a LOT little crustacean!
username_8: Changing the SPA origin domain fixed it for me.
I was making a SPA request to a Laravel backend, and the origin of the request (SPA) was 127.0.0.1. I found that by adding a vue.config.js file in the root of the project (at the level of package.json) with the snippet below, I was able to align the domains. (SESSION_DOMAIN=.127.0.0.1 in the Laravel env file didn't work for me.)
```
module.exports = {
devServer: {
host: 'projectname.local'
}
}
```
my source; https://stevencotterill.com/snippets/vue-cli-using-a-custom-domain
username_9: I encountered a similar situation. Finally, I solved the problem by modifying the response header. Pay attention to Origin and Access-Control-Allow-Origin. |
mlflow/mlflow | 422568052 | Title: Model Serving in Production
Question:
username_0: Can you help with how we can approach model serving in production? As of now it is running in a single-threaded Flask server, and I get the output below while deploying it using the mlflow pyfunc command:
mlflow pyfunc serve -r <RUN_ID> -m model
```
* Serving Flask app "mlflow.pyfunc.scoring_server" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:1234/ (Press CTRL+C to quit)
```
I saw that mlflow UI and tracking servers are running on gunicorn. Can we do the same for model serving as well?
Answers:
username_1: That would be a very useful features indeed. I have used the `pyfunc serve` CLI but due to the single-threaded nature of the flask server, it isn't a viable solution for production.
Making this scoring server more production-ready will be very appreciated. 👍
username_0: I am currently working on this one. I have looked into the code. Unlike the mlflow UI and tracking servers, for model serving we create a new conda env (based on the user-defined conda.yaml file or the default conda env configuration) for model deployment in a new process. So we need to add the gunicorn installation at the time this conda env is created. I will try to finish as soon as possible and raise a PR for the same.
Please let me know if I missed anything.
username_1: Awesome, that would be great. 👍
Status: Issue closed
username_0: Closing the issue as gunicorn support has been added in MLflow 1.0.0
username_2: When I call the function mlflow.pytorch.save_model(model, "model"), the model save succeeds.
But when I use the command **mlflow models serve -m "model path"** -p 1234 to serve the model,
the following exception happens:
```
Traceback (most recent call last):
File "/home/academy/svn-Center/src/LoadModel.py", line 8, in <module>
mlflow.pytorch.load_model("hdfs://finely:9000/3/b81d968bc5244c879cd2a13935b94b23/artifacts/model")
File "/opt/anaconda3/envs/center/lib/python3.7/site-packages/mlflow/pytorch/__init__.py", line 345, in load_model
return _load_model(path=torch_model_artifacts_path, **kwargs)
File "/opt/anaconda3/envs/center/lib/python3.7/site-packages/mlflow/pytorch/__init__.py", line 296, in _load_model
return torch.load(model_path, **kwargs)
File "/opt/anaconda3/envs/center/lib/python3.7/site-packages/torch/serialization.py", line 367, in load
return _load(f, map_location, pickle_module)
File "/opt/anaconda3/envs/center/lib/python3.7/site-packages/torch/serialization.py", line 538, in _load
result = unpickler.load()
ModuleNotFoundError: No module named 'models.networks'
```
I used this API to test loading the model, but it failed with the same error: **mlflow.pytorch.load_model("hdfs://finely:9000/3/b81d968bc5244c879cd2a13935b94b23/artifacts/model")**
But when I use the original PyTorch API to save the model and then load it using mlflow.pytorch.load_model, it works correctly.
username_3: Add your model.py file's code to the main.py file. It worked for me.
username_3: Hi, can you try pasting your model code into the main.py file and then running? It worked for me. |
BuildTheEarth/main-bot | 903612543 | Title: Add reminder command
Question:
username_0: Hey there bot devs,
would it be possible to get the bot to write weekly reminders?
e.g. once per week I have to write a reminder for the manager to write their Weekly report i would like to automate that so I also works when I am not around or when I forget.
Something like:
create new
`=reminder add <channel ID> | <interval(in days)> | <title>| <message>`
get list of active reminders
`=reminder list`
*showing ID and title*
`=reminder delete <id>`
`=reminder edit <id> <new message>`<issue_closed>
Status: Issue closed |
cebor/angular-highcharts | 469100707 | Title: Maximum call stack size exceeded error throws when subscribe ref$
Question:
username_0: I'm getting "Maximum call stack size exceeded" error as shown in the attachment.
<img width="844" alt="Screen Shot 2019-07-17 at 3 25 06 PM" src="https://user-images.githubusercontent.com/9349781/61366439-3133e480-a8a7-11e9-9f6f-4cd66de6ef10.png"> |
Turfjs/turf-cut | 107474186 | Title: Why is this not part of turf.js
Question:
username_0: Why is this not part of turf.js?
Answers:
username_1: @username_0 we've got a number of pending modules / fixes that we'll be rolling into a release in the future, we just haven't had a chance to wrap everything up yet
username_0: Ah ok. I didn't know that. Thanks for the fast reply!
Status: Issue closed
username_2: @username_1 Any idea when this feature will be added in? Do you have any suggestions on how to implement this using things already in turf.js?
username_0: I also used this module a few months ago. You can just take the turf-cut/index.js file and import it into your existing project. |
WIU-Computer-Science-Association/hacktoberfest-largest-palindrome-product | 509274414 | Title: C Code Golf
Question:
username_0: Complete the challenge from issue #1 but with the least amount of code(measured in bytes) and written in C.
Name your file 'golf.c'
The winner of this challenge will get a squishy Sammy the Shark (great for rubber-ducky debugging!). If you have already completed another code golf challenge in this repository, the runner-up will get the prize.
The winner is decided by who has the smallest code and submitted their pull request earliest. Don't forget to mention this issue in your pull request!
If your submission contains commits from before 5pm on October 18th your pull request will be marked as spam. If you are not at our event your pull request will also be marked as spam. |
oatpp/example-async-api | 479345870 | Title: How to set application type as application/json when using ENDPOINT_ASYNC?
Question:
username_0: Hi Leonid,
I'm using ENDPOINT_ASYNC to support APIs in the project, like an example below.
```cpp
ENDPOINT_ASYNC("POST", "{path-param}/rangedata", getRangeData) {

  ENDPOINT_ASYNC_INIT(getRangeData)

  String pathParam;

  Action act() override {
    pathParam = request->getPathVariable("path-param");
    return request->readBodyToDtoAsync<RangeRequestDto>(controller->getDefaultObjectMapper())
      .callbackTo(&getRangeData::onBodyObtained);
  }

  Action onBodyObtained(const RangeRequestDto::ObjectWrapper& requestDto) {
    OATPP_LOGD("MyApp", "Received clientId value=%s", pathParam->std_str().c_str());
    OATPP_LOGD("MyApp", "Received value for sensorID=%s", requestDto->sensorID->std_str().c_str());
    std::string sensorName = "sensor:";
    sensorName += requestDto->sensorID->std_str();

    /* Get the data from the Redis instance */
    auto redisConnection = RedisHandler::getInstance();
    std::vector<float> valueList;
    redisConnection->redisZRANGE(sensorName, 0, std::stoi(requestDto->timeRange->std_str()), valueList);

    auto responseDto = RangeResponseDto::createShared();
    if (valueList.size() > 0) {
      for (auto i : valueList) {
        responseDto->readings->pushBack(i);
      }
    }

    /* return result */
    return _return(controller->createDtoResponse(Status::CODE_201, responseDto));
  }

};
```
Here, valueList is filled with values, which are temperature readings from a sensor.
The values are then pushed into the `readings` list of the DTO.
The DTO for the same is as below:
```cpp
class RangeResponseDto : public oatpp::data::mapping::type::Object {

  DTO_INIT(RangeResponseDto, Object /* Extends */)

  DTO_FIELD(List<String>::ObjectWrapper, readings) = List<String>::createShared();

};
```
I would like to know how to set the content type to application/json in the header for this response.
- Thanks,
Puneet
Status: Issue closed
Answers:
username_0: The data is sent in JSON format.
username_1: Hello @username_0 ,
As far as you use `controller->createDtoResponse(Status::CODE_201, responseDto)` and your default object mapper for the controller is set to `oatpp::parser::json::mapping::ObjectMapper` - it should already include `Content-Type: application/json` in the response headers.
---
In case you want to add some custom headers:
```cpp
auto response = controller->createDtoResponse(Status::CODE_201, responseDto);
response->putHeader("X-My-Header", "My-Value");
return _return(response);
```
Best Regards,
Leonid |
janastu/ncbs-omeka-backbone-client | 295378579 | Title: How to make this website responsive and how to make a navigation bar with a dropdown menu
Question:
username_0: [Website San Pascual Batangas - Copy.zip](https://github.com/janastu/ncbs-omeka-backbone-client/files/1705682/Website.San.Pascual.Batangas.-.Copy.zip)
I want to make a responsive template but I cannot see the error in the code... please help me |
mapstruct/mapstruct | 101399193 | Title: @MappingTarget & "Ambiguous mapping methods found for factorizing"
Question:
username_0: ```Java
public class ObjectFactory
{
public static MyClass createImpl1()
{
return ...;
}
public static MyClass createImpl2()
{
return ...;
}
}
```
```Java
@Mapper(uses=ObjectFactory.class)
public interface MyMapper
{
...
void map(TheirClass a, @MappingTarget MyClass m);
...
}
```
```
Ambiguous mapping methods found for factorizing MyMapper.MyClass: MyMapper.MyClass ObjectFactory.createImpl1(), MyMapper.MyClass ObjectFactory.createImpl2().
```
The error makes sense if the mapping method was something like "*MyClass map(TheirClass a);*", because in that case a new instance of *MyClass* needs to be created in the generated method.
But I'm using *@MappingTarget*, which means I'm providing an implementation of the target. I shouldn't get an error about not knowing which factory method to use.
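For illustration, here is a rough sketch of the generated implementation one would expect for the `@MappingTarget` variant (hypothetical property names, and note that no factory call is involved):
```java
// Hypothetical generated code: the caller supplies the target instance,
// so no factory method is needed to create it.
public void map(TheirClass a, MyClass m) {
    if ( a == null ) {
        return;
    }
    m.setName( a.getName() ); // assumed property, for illustration only
}
```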
Answers:
username_1: Right, that shouldn't happen. And we should also rephrase the error message (for those cases where it is actually required) to something like "Ambiguous factory methods found for creating XX:...", as "factorizing" doesn't seem to be the right wording here.
Status: Issue closed
username_1: Fixed. |
strykeforce/thirdcoast | 270115626 | Title: Make ClientHandler an interface
Question:
username_0: It probably makes sense at some point to refactor `ClientHandler` into an interface; it really only has two methods:
```java
public interface ClientHandler {
  void start(Subscription sub);
  void shutdown();
}
```
Then we could implement a `GrapherClientHandler` and `SplunkClientHandler` with different payload formats.
At the same time I'd probably just put the client UDP destination port into the subscription since I'll be updating the format for `ClientHandler`. |
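For what it's worth, here is a rough sketch of one implementation under that interface; the class name, transport details, and the `getPort()` accessor are all assumptions:
```java
// Hypothetical implementation sketch: streams subscribed measures to a
// grapher client over UDP, using the destination port carried by the
// Subscription as suggested above.
public class GrapherClientHandler implements ClientHandler {

  private Subscription subscription;

  @Override
  public void start(Subscription sub) {
    this.subscription = sub;
    // open a UDP socket to sub.getPort() and start the send loop
  }

  @Override
  public void shutdown() {
    // stop the send loop and close the socket
  }
}
```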
ungoogled-software/ungoogled-chromium-portablelinux | 777381552 | Title: musl portable must be --no-sandbox (or crash on video)
Question:
username_0: The title explains it all, I guess.
# Crash
```bash
$ chromium-browser https://www.youtube.com/watch?v=tOzwIYQDSbY
https://www.youtube.com/watch?v=tOzwIYQDSbY
[20638:20638:1029/165359.523682:ERROR:sandbox_linux.cc(374)] InitializeSandbox() called with multiple threads in process gpu-process.
WARNING: Kernel has no file descriptor comparison support: Function not implemented
Fontconfig error: Cannot load default config file: No such file: (null)
[20783:15:1029/165406.560372:ERROR:batching_media_log.cc(38)] MediaEvent: {"error":"{\"causes\":[{\"causes\":[],\"data\":{},\"stack\":[{\"file\":\"../../media/filters/decrypting_video_decoder.cc\",\"line\":53}],\"status_code\":264,\"status_message\":\"\"}],\"data\":{\"Decoder name\":\"DecryptingVideoDecoder\"},\"stack\":[{\"file\":\"../../media/filters/decoder_selector.cc\",\"line\":172}],\"status_code\":265,\"status_message\":\"\"}"}
[20783:15:1029/165406.560529:ERROR:batching_media_log.cc(38)] MediaEvent: {"error":"{\"causes\":[{\"causes\":[],\"data\":{},\"stack\":[{\"file\":\"../../media/filters/vpx_video_decoder.cc\",\"line\":135}],\"status_code\":265,\"status_message\":\"\"}],\"data\":{\"Decoder name\":\"VpxVideoDecoder\"},\"stack\":[{\"file\":\"../../media/filters/decoder_selector.cc\",\"line\":172}],\"status_code\":265,\"status_message\":\"\"}"}
[20783:15:1029/165406.560728:ERROR:batching_media_log.cc(38)] MediaEvent: {"error":"{\"causes\":[{\"causes\":[],\"data\":{\"codec\":\"h264\"},\"stack\":[{\"file\":\"../../media/filters/dav1d_video_decoder.cc\",\"line\":165}],\"status_code\":260,\"status_message\":\"\"}],\"data\":{\"Decoder name\":\"Dav1dVideoDecoder\"},\"stack\":[{\"file\":\"../../media/filters/decoder_selector.cc\",\"line\":172}],\"status_code\":265,\"status_message\":\"\"}"}
```
# No crash
```bash
$ chromium-browser --no-sandbox https://www.youtube.com/watch?v=tOzwIYQDSbY
[19698:19698:1029/165324.425148:ERROR:sandbox_linux.cc(374)] InitializeSandbox() called with multiple threads in process gpu-process.
WARNING: Kernel has no file descriptor comparison support: Function not implemented
[19730:19881:1029/165326.280800:ERROR:batching_media_log.cc(38)] MediaEvent: {"error":"{\"causes\":[{\"causes\":[],\"data\":{},\"stack\":[{\"file\":\"../../media/filters/decrypting_video_decoder.cc\",\"line\":53}],\"status_code\":264,\"status_message\":\"\"}],\"data\":{\"Decoder name\":\"DecryptingVideoDecoder\"},\"stack\":[{\"file\":\"../../media/filters/decoder_selector.cc\",\"line\":172}],\"status_code\":265,\"status_message\":\"\"}"}
[19730:19881:1029/165326.280976:ERROR:batching_media_log.cc(38)] MediaEvent: {"error":"{\"causes\":[{\"causes\":[],\"data\":{},\"stack\":[{\"file\":\"../../media/filters/vpx_video_decoder.cc\",\"line\":135}],\"status_code\":265,\"status_message\":\"\"}],\"data\":{\"Decoder name\":\"VpxVideoDecoder\"},\"stack\":[{\"file\":\"../../media/filters/decoder_selector.cc\",\"line\":172}],\"status_code\":265,\"status_message\":\"\"}"}
[19730:19881:1029/165326.281099:ERROR:batching_media_log.cc(38)] MediaEvent: {"error":"{\"causes\":[{\"causes\":[],\"data\":{\"codec\":\"h264\"},\"stack\":[{\"file\":\"../../media/filters/dav1d_video_decoder.cc\",\"line\":165}],\"status_code\":260,\"status_message\":\"\"}],\"data\":{\"Decoder name\":\"Dav1dVideoDecoder\"},\"stack\":[{\"file\":\"../../media/filters/decoder_selector.cc\",\"line\":172}],\"status_code\":265,\"status_message\":\"\"}"}
```
I believe the guys at Alpine are aware of this very issue; they run Chromium unsandboxed as well.
Answers:
username_1: Your running kernel either needs to be compiled with `CONFIG_CHECKPOINT_RESTORE` or `GENTOO_LINUX_INIT_SYSTEMD`. Most likely the former one. Found at [https://forums.gentoo.org/viewtopic-t-1116870-start-0.html](https://forums.gentoo.org/viewtopic-t-1116870-start-0.html).
username_1: BTW according to [this](https://github.com/NixOS/nixpkgs/issues/101211), this is an upstream issue and has been fixed in v86. The binary on the website is v84.
tbeason/financeconferences | 572865219 | Title: [ADD] SAFE Market Microstructure Conference
Question:
username_0: Please fill in with your details.
## Conference Name
SAFE Market Microstructure Conference
## Location
Frankfurt
## Submission Deadline (m/d/y)
3/24/2020
## Conference Date (m/d/y) (start date if multiple days)
8/17/2020
## Submission Fee
0
## Website
https://safe-frankfurt.de/microstructure-submission-portal<issue_closed>
Status: Issue closed |
nativescript-rtl/ui | 650303110 | Title: CSS "direction" property seems to have no effect on GridLayout
Question:
username_0: Hey @username_1 - great plugin!
The "direction" CSS property does not seem to have any effect on GridLayout.
The displayed content for the sample below is always: "RIGHT LEFT". It does not make a difference if the style is inline or in app.css.
The isRtl property works fine and swaps the output, if set to false.
Can you reproduce or am I missing something? Thank you for any help.
Sample xml
```
<Page xmlns="http://schemas.nativescript.org/tns.xsd" xmlns:rtl="@nativescript-rtl/ui" navigatingTo="navigatingTo">
<ActionBar title="My App" icon=""></ActionBar>
<rtl:GridLayout columns="*, *" class="testclass" style="direction:ltr;">
<Label col="0" text="LEFT" class="h1"/>
<Label col="1" text="RIGHT" class="h1"/>
</rtl:GridLayout>
</Page>
```
app.css
```
.testclass {
direction: ltr;
}
```
package.json
```
"dependencies": {
"@nativescript/core": "~6.5.0",
"@nativescript/theme": "~2.3.0",
"tns-core-modules": "6.5.8",
"@nativescript-rtl/ui": "~0.1.6"
},
```
Answers:
username_1: Does it show any error message in the console?
username_0: Sorry, should've mentioned in original post. There is no console output and the valueConverter in the directionProperty is never called. This might well be an upstream issue.
username_1: @username_0 It works on older versions of NativeScript
I don't know what they changed in NativeScript that stopped it from working!
username_0: @username_1 Thank you! Do you have any rough idea which commit/core modules version it was working with last?
username_1: 3.2.0 |
elastic/elasticsearch-php | 432557675 | Title: Add "include_type_name" to Put.php params
Question:
username_0: ### Summary of problem or feature request
I am using "elasticsearch/elasticsearch": 5.4.0 to support Elasticsearch 5.0 and above.
Since Elasticsearch 7.0 was released, types shouldn't be used when creating mappings.
https://www.elastic.co/guide/en/elasticsearch/reference/master/removal-of-types.html
Since elasticsearch/elasticsearch put mapping requires a type for elasticsearch 5:
https://github.com/elastic/elasticsearch-php/blob/v5.4.0/src/Elasticsearch/Endpoints/Indices/Mapping/Put.php#L44
we can't drop it.
So it would be awesome to include the param
```
include_type_name
```
in the
```
getParamWhitelist
```
function of Put.php.
https://github.com/elastic/elasticsearch-php/blob/v5.4.0/src/Elasticsearch/Endpoints/Indices/Mapping/Put.php#L63
This would enable us to be compatible with 5, 6 and 7.
### Code snippet of problem
Change
```php
/**
* @return string[]
*/
public function getParamWhitelist()
{
return array(
'ignore_conflicts',
'timeout',
'master_timeout',
'ignore_unavailable',
'allow_no_indices',
'expand_wildcards',
'update_all_types'
);
}
```
to:
```php
/**
* @return string[]
*/
public function getParamWhitelist()
{
return array(
'ignore_conflicts',
'timeout',
'master_timeout',
'ignore_unavailable',
'allow_no_indices',
'expand_wildcards',
'update_all_types',
'include_type_name'
);
}
```
### System details
- Operating System
Linux/Windows/macOS
- PHP Version
7.2 or higher
- ES-PHP client version
5.4.0
- Elasticsearch version
5, 6 at the moment and hopefully 7
Answers:
username_1: @username_0 I'm working to release elasticsearch-php 7.0.0 soon, I'm still fixing some issues for 6.7. Thanks for reporting this!
username_1: @username_0 I've already added `include_type_name` for `Elasticsearch\Endpoints\Indices\Mapping\Put` and `rest_total_hits_as_int` in `Elasticsearch\Endpoints\Search` for 6.7.0. I'll add the missing `track_total_hits` for 7.0.0.
username_0: Is there any reason for not adding track_total_hits to 6.7.0?
username_1: No reason, I'm considering to add it in 6.7.1.
username_0: That would be awesome!
username_1: @username_0, @shyim just released 6.7.1: https://github.com/elastic/elasticsearch-php/releases/tag/v6.7.1 with `track_total_hits` in search endpoint.
Status: Issue closed
|
dapr/components-contrib | 1146486138 | Title: MySQL state store changes order of transaction
Question:
username_0: Similar to #1209
## Expected Behavior
For the following transaction:
```
upsert key1 value1
delete key1
upsert key2 value2
```
Expectation is that `key1` should be deleted, `key2` should have a value of `value2`.
## Actual Behavior
Both `key1`:`value1` and `key2`:`value2` are present.
## Steps to Reproduce the Problem
Please use the JS snippet from the referenced issue (#1209)
## Details
Under the hood, MySQL store attempts all the deletions first, followed by all the upserts. This changes the actual ordering of operations within the transaction, giving wrong results.
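For illustration, here is a hedged sketch in Go of the order-preserving behavior one would expect; the types and table schema below are invented for the example and are not the actual components-contrib API:

```go
package state

import "database/sql"

type opKind int

const (
	opUpsert opKind = iota
	opDelete
)

type operation struct {
	kind       opKind
	key, value string
}

// multiExec executes each operation in the order the caller gave it,
// inside a single transaction, instead of batching all deletes before
// all upserts (which is what produces the wrong result above).
func multiExec(db *sql.DB, ops []operation) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	for _, op := range ops {
		switch op.kind {
		case opUpsert:
			_, err = tx.Exec(
				`INSERT INTO state (id, value) VALUES (?, ?)
				 ON DUPLICATE KEY UPDATE value = ?`,
				op.key, op.value, op.value)
		case opDelete:
			_, err = tx.Exec(`DELETE FROM state WHERE id = ?`, op.key)
		}
		if err != nil {
			tx.Rollback()
			return err
		}
	}
	return tx.Commit()
}
```

Running the example transaction through this in order leaves only `key2` with `value2`, matching the expected behavior.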
## Release Note
RELEASE NOTE: **FIX** MySQL state store's transaction API to respect order of operations.
Answers:
username_0: /assign |
lexik/LexikTranslationBundle | 480514000 | Title: [Symfony 3.4] Getting error - Dependency on a non-existent service "translator.formatter.default"
Question:
username_0: Hello, after including the bundle into a Symfony 3.4 project on Linux, I'm getting the following error message:
`The service "lexik_translation.translator" has a dependency on a non-existent service "translator.formatter.default".`
I've tried it with a clean 3.4 project too and the result is the same.
Steps to reproduce:
1. Create new project `php symfony new test_project 3.4`
2. Open the project directory `cd test_project`
3. Install LexikTranslationBundle `composer require lexik/translation-bundle`
4. Set up basic config in `config.yml` and add bundle class in `AppKernel.php`
5. Running `./bin/console` or any other command results in the aforementioned error message
I found out that it's referenced here, however I have no clue what this service should do and where it should come from:
https://github.com/lexik/LexikTranslationBundle/blob/b110bc15a0bff04111b0ea12911a9a73dd08569b/DependencyInjection/LexikTranslationExtension.php#L83
Any ideas? |
kata-containers/kata-containers | 975778663 | Title: FYI: image_builder.sh borks if qemu-img not installed
Question:
username_0: This is somewhere between a PEBCAK on my part and a potentially missing FAQ in the documentation, specifically [Developer-Guide.md](https://github.com/kata-containers/kata-containers/blob/main/docs/Developer-Guide.md#build-a-rootfs-image)
When running `image_builder.sh` to build a Pod Sandbox image from a previously created root file-system - without running it in a Docker container - the user is presented with a worrying set of ERROR messages: -
```text
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop0p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop1p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop9p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop10p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop11p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop12p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop13p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop14p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop15p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop16p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop17p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop18p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop19p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop20p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop21p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop22p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop23p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop24p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop25p1 is not a block device
losetup: /tmp/tmp.bHz11oY851: Warning: file is smaller than 512 bytes; the loop device may be useless or invisible for system tools.
ERROR: File /dev/loop26p1 is not a block device
ERROR: Could not calculate the required disk size
INFO: Creating raw disk with size 126M
```
before they see: -
```text
./image-builder/image_builder.sh: line 362: qemu-img: command not found
```
The clue to the problem is in that last message - `qemu-img` isn't installed.
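For anyone else hitting this, installing the QEMU image tools first avoids the failure; a typical fix, assuming a Debian/Ubuntu host (the package name differs on other distros):
```bash
# qemu-img ships in the qemu-utils package on Debian/Ubuntu
sudo apt-get update && sudo apt-get install -y qemu-utils
```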
This is described in the [Developer-Guide.md](https://github.com/kata-containers/kata-containers/blob/main/docs/Developer-Guide.md#build-a-rootfs-image) : -
[Truncated]
```text
containerd github.com/containerd/containerd v1.5.2 36cc874494a56a253cd181a1a685b44b58a2e34a
```
# Description of problem
See above
# Expected result
See above
# Actual result
See above
# Further information
More than happy to PR this into the [Developer-Guide.md](https://github.com/kata-containers/kata-containers/blob/main/docs/Developer-Guide.md#build-a-rootfs-image) if that'd help?
Answers:
username_1: @username_0 go ahead!
username_0: Thanks @username_1 have raised PR #2554 🙏
Status: Issue closed
|
kalexmills/github-vet-tests-dec2020 | 769678878 | Title: golang/gofrontend: libgo/go/go/internal/gccgoimporter/gccgoinstallation_test.go; 10 LoC
Question:
username_0: [Click here to see the code in its original context.](https://github.com/golang/gofrontend/blob/526037336231593939a517b7c0b2892d413adb40/libgo/go/go/internal/gccgoimporter/gccgoinstallation_test.go#L186-L195)
<details>
<summary>Click here to show the 10 line(s) of Go which triggered the analyzer.</summary>
```go
for _, test := range [...]importerTest{
{pkgpath: "io", name: "Reader", want: "type Reader interface{Read(p []byte) (n int, err error)}"},
{pkgpath: "io", name: "ReadWriter", want: "type ReadWriter interface{Reader; Writer}"},
{pkgpath: "math", name: "Pi", want: "const Pi untyped float"},
{pkgpath: "math", name: "Sin", want: "func Sin(x float64) float64"},
{pkgpath: "sort", name: "Ints", want: "func Ints(a []int)"},
{pkgpath: "unsafe", name: "Pointer", want: "type Pointer"},
} {
runImporterTest(t, imp, nil, &test)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: <PASSWORD><issue_closed>
Status: Issue closed |
craftcms/commerce-paypal | 383405212 | Title: The total of the cart amounts do not match order amounts.
Question:
username_0: After clicking the PayPal Express button, it errors with a popup dialogue that says "The total of the cart amounts do not match order amounts.".
I found that it occurs when I set order.paymentCurrency to a currency different from the primary store currency, and also enabled the "Should commerce send cart information to gateway when making transactions." option in the gateway settings (Commerce Settings > Gateways > gateway: paypal express selected).
If I disable that option, I can make the payment with no issue.
Status: Issue closed |
liuyueyi/hexblog | 539387526 | Title: 191217 - Ognl: usage patterns for inner classes and modifying static member properties - 一灰灰Blog
Question:
username_0: https://blog.hhui.top/hexblog/2019/12/17/191217-Ognl%E4%B9%8B%E5%86%85%E9%83%A8%E7%B1%BB%E4%B8%8E%E9%9D%99%E6%80%81%E6%88%90%E5%91%98%E5%B1%9E%E6%80%A7%E4%BF%AE%E6%94%B9%E4%BD%BF%E7%94%A8%E5%A7%BF%E5%8A%BF/
In the 191204 Ognl usage manual, I ran into a problem: assigning a value directly to a static member property throws an exception. Is this problem really unsolvable? Also, the earlier manual missed one usage pattern for inner classes; this post covers that as well.
neo4j/neo4j-ogm | 109442440 | Title: findOne performance issue with depth >= 1
Question:
username_0: Fetching data with the findOne method on an object of class C is impacted by incoming relations of type R, even though this kind of relation is not specified in the Java class C (suppose there is a relation between class A and class C, but the Set<R> is only specified on class A).
I think that fetching related data (via the depth parameter) should not fetch data that the Java classes do not need. Running time should not be so impacted by this kind of relation, since the user does not want it.
I already described my problem on Stack Overflow: http://stackoverflow.com/questions/32884984/sdn4-or-neo4j-ogm-performances-issue in which class A is Element, class C is Attribute and relation R is Value.
Thanks for your help
Answers:
username_1: I agree, the fetching of data should explicitly specify the relationship types that are asked for.
And if two directions are asked for, we could collect them like this:
```
MATCH path = (m:Label)-[:FOO|:BAR]->() WHERE id(m) = {id}
WITH m, collect(path) as paths
MATCH path = (m:Label)<-[:ALPHA|:BETA]-()
RETURN m, collect(path) as paths, path
```
username_2: I also agree, it would be a very welcomed improvement. I encounter the same kind of issue.
username_1: Sure, we'll look into this for one of the next releases.
username_3: There are a few things we can talk about doing here. Maybe we allow users to ask for relationships, maybe we restrict by relationship direction or maybe we look for related nodes labelled in a specific way. Maybe we do all of these things!
That said, what I've just described is exactly what a Cypher query is for. We need to be careful that we don't just end up overloading `findOne` et al to provide the functionality already available through `@Query`. We will investigate using things like fetch depth and optimising the generated MATCH queries to potentially address this issue, but we need to be sure that we're investing our endeavour to tackle a true performance problem and not just focussing specific outlying use cases.
username_4: This question seems to be relevant:
http://stackoverflow.com/questions/33486334/load-an-object-in-neo4j-2-3-ogm-depth-of-2-very-slow
username_0: @username_1 do you have any idea about when this will be fix please ? Thanks a lot
username_5: @username_0 we'll be discussing this at our planning meeting next week- we'll have an update post that.
username_4: Was this ever discussed? is there any plan to solve this problem?
My development team found it impossible to work with OGM and downgraded back down to Spring Data Neo4j 3.4. This bug causes it to be impossible to load entities with their relationships which is the whole reason we use Neo4j. If at least we could load the ids of the related entites then we can at least continue working with AspectJ to load the relationships (not the best solution but at least something).
Is there anyone out there using OGM in production? how do u guys solve the problem that you can't load entities like categories that are very well connected?
username_5: @username_4 It was discussed and it will be addressed in a future release. Curious to know how many relationships your entity is connected to
username_4: We have a category entity which holds the category name, color and some more information about the category. The number of relationships varies between the different categories, but each is connected to around 20,000 nodes (and this number is growing).
The problem arises when loading an entity that has a relationship to a category with a depth bigger than one (since one gets us to the category and the second level gets us to 20,000 nodes).
In our Java code the category has no relationships; only the other entities have relationships to it.
username_5: Just so that I understand your problem better- you're asking that the OGM supports a depth per relationship type? So when loading an entity that has a relationship to a category, the category is loaded to depth 1 but the rest of the related entities are loaded to depth n? Or are you asking that the OGM understands how your domain objects are wired and load entities that match that model?
When custom queries support mapping results back to entities, would this help in the above case?
username_4: Both are good, each has its own advantages. The one I think best addresses my problem is the understanding of the object model, since it is unwanted behavior that my category entity loads all its related nodes even though I will only ever be using the nodes mapped in my object. In this sense, the fact that it doesn't understand my object wiring is a performance bug, while choosing custom relationship depths is more of a feature.
Does this answer your question? Not quite sure about your last question.
username_6: I encountered the same problem and also ended up not upgrading to ogm/spring data neo4j 4.
In my case I have song nodes that are connected to genre nodes but since most genres have thousands of songs I end up touching too much of the graph by selecting more than one level.
If OGM used my object wiring for the relationships there would be no problem since my genre node has no mapped relationships, Only the songs have.
Will this problem be solved soon?
username_7: I was also trying to upgrade to SDN4 but couldn't because of said issue. Also QueryResult with no support for object/Collection of objects mapping was also creating problems in terms of clean code.
Holding back till both of these get solved !!
username_0: @username_5 and @username_4 username_4 said "My development team found it impossible to work with OGM and downgraded back down to Spring Data Neo4j 3.4". I am exactly in the same situation. For us it's a critical issue which lead us to not use SDN4. Maybe it would be great if it would be possible to add a parameter on @Relation (something like autofetch=false|true) wich stop loadding deep entities when it is false (or load only the id)
username_5: Thanks everyone for your comments. At the moment, I can confirm that this is on our to-do list and we'll update this issue when it's been assigned to a release.
username_8: Hi! First I want to thank you all for your wonderful work. This feature is an important requirement for us. We want to retrieve all kids at once (is a tree inside the graph) with depth = -1. Do you have an approximate date of release? Thanks!
username_4: Did this make it in to the 2.0.1 version. If not, is there an ETA on a solution?
username_9: Hello,
I have the same problem. Is that a correction will be made on the bug username_0 ?
username_10: FYI All: We are working on this issue actively at the moment. It is slated for release with OGM 3.0 and SDN 5.0.
username_9: Thanks!
Do you have an idea of the date of availability of the correction?
username_10: @username_9 I would say late April for a release candidate for both the OGM and SDN. We are exploring various options at the moment.
username_11: This has been addressed in 3.0.0 release by providing query load strategy based on schema, see #70 and related commits and is available in RC1 or latest snapshot.
We would love to hear your feedback.
Status: Issue closed
|
meliorence/react-native-render-html | 1121096256 | Title: Migration to v6 issues - Images with relative width
Question:
username_0: ### Decision Table
- [X] My issue does not look like “The HTML attribute 'xxx' is ignored” (unless we claim support for it)
- [X] My issue does not look like “The HTML element `<yyy>` is not rendered”
### Good Faith Declaration
- [X] I have read the HELP document here: https://git.io/JBi6R
- [X] I have read the CONTRIBUTING document here: https://git.io/JJ0Pg
- [X] I have confirmed that this bug has not been reported yet
### Description
I'm trying to migrate to v6 from the older v4 (or v5?). Besides a massive amount of breaking changes that I've been able to work around thanks to the detailed migration steps (thanks a lot for it!), I cannot still manage to work around issues with images.
With the older version, I could use `maxWidth: '100%'` on an image and it would only take the width of the parent's container. With v6, it will always use the screen width, or whatever the `contentWidth` is.
I could not find anywhere in the guide how to achieve image resizing as I used to be able to do with v4.
### React Native Information
```sh
System:
OS: macOS 12.1
CPU: (8) x64 Intel(R) Core(TM) i5-1038NG7 CPU @ 2.00GHz
Memory: 20.38 MB / 16.00 GB
Shell: 5.8 - /bin/zsh
Binaries:
Node: 14.17.6 - /usr/local/bin/node
Yarn: 1.22.5 - /usr/local/bin/yarn
npm: 7.24.0 - /usr/local/bin/npm
Watchman: 2021.06.07.00 - /usr/local/bin/watchman
Managers:
CocoaPods: 1.11.2 - /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms: DriverKit 21.2, iOS 15.2, macOS 12.1, tvOS 15.2, watchOS 8.3
Android SDK: Not Found
IDEs:
Android Studio: 2020.3 AI-203.7717.56.2031.7678000
Xcode: 13.2.1/13C100 - /usr/bin/xcodebuild
Languages:
Java: 11.0.11 - /usr/bin/javac
npmPackages:
@react-native-community/cli: Not Found
react: 17.0.2 => 17.0.2
react-native: 0.66.3 => 0.66.3
react-native-macos: Not Found
npmGlobalPackages:
*react-native*: Not Found
```
```
### RNRH Version
6.3.4
### Tested Platforms
[Truncated]
borderColor: conf.theme.variables.darkBorderColor,
},
th: {
alignItems: 'center',
justifyContent: 'center',
textAlign: 'center',
fontWeight: 'bold',
padding: 3 * scaling,
borderLeftWidth: conf.theme.variables.borderWidth,
borderRightWidth: conf.theme.variables.borderWidth,
borderColor: conf.theme.variables.darkBorderColor,
},
};
```
### Additional Notes
_No response_
Answers:
username_1: @username_0 There is clearly an issue here, where `width: 100%` is interpreted as "100% of the viewport". Before it gets fixed, I suggest you use this custom image renderer:
```tsx
import {
  useWindowDimensions,
  Image,
  SafeAreaView,
  ScrollView,
} from 'react-native';
import RenderHTML, {
useIMGElementProps,
CustomBlockRenderer,
} from 'react-native-render-html';
const ImgRenderer: CustomBlockRenderer = props => {
const imgProps = useIMGElementProps(props);
return (
<Image
resizeMode="contain"
style={imgProps.style}
source={imgProps.source}
/>
)
};
const renderers = {img: ImgRenderer};
export default function App() {
const {width} = useWindowDimensions();
return (
<SafeAreaView style={{flex: 1}}>
<ScrollView style={{flex: 1}}>
<RenderHTML
baseStyle={baseStyle}
tagsStyles={tagsStyles}
source={{html: htmlContent}}
contentWidth={width}
renderers={renderers}
/>
</ScrollView>
</SafeAreaView>
);
}
``` |
cucumber/cucumber-ruby | 26455430 | Title: Refactor lib/runtime.rb to use composition
Question:
username_0: Right now the `Runtime` class, which has a long and gruesome history, is made bigger by a `Runtime::UserInterface` module that's included into the class. If we can break this up and use composition instead, we'll end up with a simpler codebase.
There are a few dependencies that expect the runtime to be passed in, but only use the UserInterface portion of it. Having a separate class will clarify this more.
cc @tooky |
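For illustration, a rough sketch of the composition-based shape this could take (the constructor signature and method names below are assumptions, not the actual Cucumber API):
```ruby
# Hypothetical sketch: Runtime delegates to a collaborator object
# instead of mixing Runtime::UserInterface into the class.
class Runtime
  def initialize(user_interface:)
    @user_interface = user_interface
  end

  # UI concerns are forwarded rather than inherited via a mixin.
  def report(message)
    @user_interface.report(message)
  end
end
```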
martin-lueders/ML_modules | 462299319 | Title: QUANTUM DCD OFFSET
Question:
username_0: Hello Martin
nice to greetings you.
You can see an issue discovered in your quantum module in comparison with others.
Thanks

Answers:
username_1: Hi,
I don't understand you patch. You are sending a audio signal through the
Quantum, and also you have not selected any notes.
In this case, the output of Quantum is basically undefined and will be a
constant, which you seem to be adding to the original output of the VCO.
A Quantizer should be used in the signal path of the pitch CV which goes
into the V/Oct input of the oscillator, not into the the audio signal path
after the VCO.
You can see a nice tutorial on the Quantum by <NAME> on Youtube:
https://www.youtube.com/watch?v=1rz8u0b30mM
Hope it helps,
Martin
username_0: Hello Martin
yes, the patch in the sent image is not using any quantized notes, in order to show you the DC offset.
Please open the patch sent this time and you can "see and listen" to the change by disconnecting/connecting the cable from QUANTUM to PATCH (or any of the inputs in PATCH).
It is generating an audio issue.
Regards
[QUANTUM DC OFFSET.zip](https://github.com/username_1/ML_modules/files/3343957/QUANTUM.DC.OFFSET.zip)
username_1: Hi Jairo,
You are using the module in the wrong way. Please, refer to the
documentation and the video, I linked.
The Quantiser is NOT supposed to be in the Audio path, but in the CV pitch
signal path. Also, you need to select some notes to be allowed in the scale.
The way you have connected the module, it is clear that it produces just a
constant, which then is added to the audio signal, producing the DC offset.
the signal path should be:
(LFO or sequencer or Sample&Hold) -> Quantum IN, Quantum OUT -> VCO V/Oct
in, VCO out -> (mixer, filter, vca, etc.)
Best Regards,
Martin
username_0: Hello Martin
yes, the error was mixing the signals
Thanks for your clarification
In this software all is possible
:)
Status: Issue closed
|
omniauth/omniauth-saml | 232886072 | Title: SHA256 support for fingerprints
Question:
username_0: The `response_fingerprint` method only accepts `SHA1` fingerprints. Any reason not to add support for `SHA256`?
Answers:
username_1: The `response_fingerprint` method is only called when `idp_cert_fingerprint_validator` is being used. The `ruby-saml` library itself supports SHA256 fingerprints by passing `idp_cert_fingerprint_algorithm`.
I personally feel that `idp_cert_fingerprint_validator` should be deprecated and removed from `omniauth-saml`. Assuming we don't want to make that change, I think it would make sense to have the `response_fingerprint` method be aware of `idp_cert_fingerprint_algorithm`.
Are you using `idp_cert_fingerprint_validator`? If so, what is your use case?
username_2: @username_0 @username_1
Hey guys, I seem to have the same problem.
I'm using this module via GitLab and I'm trying to authenticate via SAML to Azure AD.
GitLab still says "Fingerprint mismatch"; below is the decoded answer from Azure.
Could you please help me to debug it?
```
<samlp:Response ID="_c0aae85a-4d5a-4a65-a76f-fb1131334472" Version="2.0" IssueInstant="2017-10-30T16:38:17.354Z" Destination="http://gitlab.localhost/users/auth/saml/callback" InResponseTo="_1f34e0ba-1af4-47a6-b1cd-bd2d5fee12ca" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"><Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">https://sts.windows.net/b5c21891-6f6e-4454-958b-254e0997679a/</Issuer><samlp:Status><samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/></samlp:Status><Assertion ID="_b10005ce-ebcd-4df5-92d4-f209e9e96de1" IssueInstant="2017-10-30T16:38:17.338Z" Version="2.0" xmlns="urn:oasis:names:tc:SAML:2.0:assertion"><Issuer>https://sts.windows.net/b5c21891-6f6e-4454-958b-254e0997679a/</Issuer><Signature xmlns="http://www.w3.org/2000/09/xmldsig#"><SignedInfo><CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/><Reference URI="#_b10005ce-ebcd-4df5-92d4-f209e9e96de1"><Transforms><Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></Transforms><DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/><DigestValue>HpiUuZARXcbb/pw+AjJF7bCQvASly5Av+3VuTmCZtqo=</DigestValue></Reference></SignedInfo><SignatureValue>loong signature</SignatureValue><KeyInfo><X509Data><X509Certificate>CERTIFICATE</X509Certificate></X509Data></KeyInfo></Signature><Subject><NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent">qPDMo2bppJqmqnFOK1kui6bB7uttrDsZ-Fzn3obb6FA</NameID><SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"><SubjectConfirmationData InResponseTo="_1f34e0ba-1af4-47a6-b1cd-bd2d5fee12ca" NotOnOrAfter="2017-10-30T16:43:17.338Z" Recipient="http://gitlab.localhost/users/auth/saml/callback"/></SubjectConfirmation></Subject><Conditions NotBefore="2017-10-30T16:33:17.323Z" NotOnOrAfter="2017-10-30T17:33:17.323Z"><AudienceRestriction><Audience>https://gitlab.localhost</Audience></AudienceRestriction></Conditions><AttributeStatement><Attribute Name="http://schemas.microsoft.com/identity/claims/tenantid"><AttributeValue>
```
username_1: @username_2 Are you using GitLab.com or an on premise installation of GitLab? Regardless, it sounds like you might want to follow up within the GitLab community.
username_3: @username_1 I have gitlab on premise.
I was not sure wether to address the issue here or at gitlab community tracker.
Is there any workaround I could use in the meantime?
Because I also filed another issue regarding oAuth to AAD.
username_2: @username_1 I have gitlab on premise.
I was not sure wether to address the issue here or at gitlab community tracker.
Is there any workaround I could use in the meantime?
Because I also filed another issue regarding oAuth to AAD.
username_1: GitLab has it's own configuration layer on top of `omniauth-saml`, so it's probably best to seek help there. That goes even more if you're considering using OAuth instead of SAML.
One thing I would recommend if you're not doing it already is to use the `idp_cert` or `idp_cert_multi` configuration parameters instead of `idp_cert_fingerprint`. If you do use `idp_cert_fingerprint`, I think you may also need to set `idp_cert_fingerprint_algorithm` if your signature is not SHA1. That could be the source of your fingerprint mismatch error.
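For reference, a hedged sketch of the fingerprint-plus-algorithm form in plain omniauth terms (all values are placeholders; the SHA256 algorithm URI matches the DigestMethod URI visible in the Azure response above):
```ruby
use OmniAuth::Builder do
  provider :saml,
    idp_cert_fingerprint: 'AB:CD:EF:...', # placeholder
    idp_cert_fingerprint_algorithm: 'http://www.w3.org/2001/04/xmlenc#sha256',
    idp_sso_target_url: 'https://idp.example.com/sso' # placeholder
end
```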
username_2: I tried all combinations of algorithms and settings, and the GitLab documentation points here for more detailed documentation.
In the meantime I was able to decode the SAML received from AAD, and the fingerprint in the SAML response looks like this: `<KEY>`
About `idp_cert_multi` and `idp_cert_fingerprint_algorithm` - I did not see any documentation regarding these options, and the GitLab documentation points here for more options.
I'll do some more tests and then create a GitLab issue.
About oAuth there is one issue with it, but I filed the issue already.
username_4: Here's a sample for a valid config within GitLab:
```ruby
gitlab_rails['omniauth_providers'] = [
  {
    "name" => "saml",
    "label" => "my saml provider",
    "args" => {
      "assertion_consumer_service_url" => "https://{{gitlab_external_url}}/users/auth/saml/callback",
      "idp_cert_fingerprint" => "{{saml_fingerprint}}",
      "idp_cert_fingerprint_algorithm" => "http://www.w3.org/2000/09/xmldsig#sha1",
      "idp_sso_target_url" => "{{saml_target_url}}",
      "issuer" => "https://{{gitlab_external_url}}",
      "allowed_clock_drift" => 5,
      "name_identifier_format" => "urn:oasis:names:tc:SAML:2.0:nameid-format:email"
    }
  }
]
```
username_2: @username_4
Thanks for the sample configuration but it doesn't work with Azure.
Did you try it specifically with Azure AD?
my config:
I tried the certificate fingerprint in two formats, `xx:xx` and `xxxxx`, with algorithms sha1 and sha256, and nothing worked.
```
{
name: 'saml',
args: {
assertion_consumer_service_url: 'http://gitlab.localhost/users/auth/saml/callback',
idp_cert_fingerprint: "<certificate fingerprint>",
idp_cert_fingerprint_algorithm: "http://www.w3.org/2000/09/xmldsig#sha1",
idp_sso_target_url: 'https://login.microsoftonline.com/<tenantID>/saml2',
issuer: 'https://gitlab.localhost',
name_identifier_format: 'urn:oasis:names:tc:SAML:2.0:nameid-format:persistent'
},
label: 'SAML'
}
```
Allowed SAML authentication request's NameIDPolicy formats are:
```
urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
urn:oasis:names:tc:SAML:2.0:nameid-format:persistent
urn:oasis:names:tc:SAML:2.0:nameid-format:transient
```
username_4: @username_2 this config is for another SAML provider , I never used the azure saml provider
username_1: @username_2 did you try using `idp_cert` with a full copy of the certificate and removing `idp_cert_fingerprint` and ` idp_cert_ fingerprint_algorithm`?
username_2: @username_1
Yes I did try multiple combinations as of configuration but also certificate
I used it as
```
idp_cert: '---begin certificate----
sadasd
asdasda
asdasdasd
-----end certificate-----
```
also
`idp_cert: '---begin certificate----\nsadasd\nasdasda\nasdasdasd\n-----end certificate-----`
and
`idp_cert: ---begin certificate----\nsadasd\nasdasda\nasdasdasd\n-----end certificate-----`
and
`idp_cer: 'sadasdasdasdaasdasdasd`.
I can create a free Azure account to provide you with real data so we have a better chance of troubleshooting.
Let me know if you are interested.
username_1: @username_2 The first form should be fine. The single-quoted forms with `'\n'` won't work since `\n` isn't recognized as an escape in single quotes.
Are you sure it's the correct certificate?
One approach you might want to take to ensure you're using the correct certificate is to use the Azure-provided metadata file to configure your integration. You can see our docs for this at https://github.com/omniauth/omniauth-saml#idp-metadata
I've never configured Gitlab from an admin side, but from [their docs](https://docs.gitlab.com/ee/integration/saml.html) it looks like you could do something like this (using the IdP metadata URL gleaned from [the Azure docs](https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saml-protocol-reference)):
```ruby
idp_metadata = OneLogin::RubySaml::IdpMetadataParser.new.parse_remote_to_hash(
"https://login.microsoftonline.com/#{YOUR_DOMAIN}/FederationMetadata/2007-06/FederationMetadata.xml"
)
gitlab_rails['omniauth_providers'] = [
{
name: 'saml',
args: idp_metadata.merge(
assertion_consumer_service_url: 'http://gitlab.localhost/users/auth/saml/callback',
issuer: 'https://gitlab.localhost'
),
label: 'Company Login' # optional label for SAML login button, defaults to "Saml"
}
]
```
I think you can possibly leave `assertion_consumer_service_url` out, since it should be auto-configured correctly.
username_5: @username_2 What error are you seeing when you try @username_1's suggested code? GitLab is currently on omniauth-saml 1.7.0 and ruby-saml 1.4.1. As far as I can see, those support the `OneLogin::RubySaml::IdpMetadataParser` approach too.
If you provide `idp_cert` instead of `idp_cert_fingerprint`, what error are you getting? I don't think it would be "Fingerprint mismatch".
username_1: @username_5 The `parse_remote_to_hash` method was added in `ruby-saml` 1.4.3. Here's the PR: https://github.com/onelogin/ruby-saml/pull/393
username_5: @username_1 Ah, right, I missed that one.
Either way, that's a different way to configure omniauth-saml, but not a reason why the "classical" method wouldn't work...
@username_2 Can you please share the error you're getting when your provide `idp_cert` instead of `idp_cert-fingerprint`?
username_2: @username_5
`idp_cert: '-----BEGIN CERTIFICATE-----MIIC8DCCAdigAwIBA...` without linebreaks
it finally seems to no longer give `Fingerprint mismatch`.
then I get this error:
`http://gitlab.localhost/users/auth/saml/omniauth_error?error=Email+can%27t+be+blank%2C+Notification+email+can%27t+be+blank%2C+and+Notification+email+is+invalid`
The user does not exist yet, but this should take care of it
`gitlab_rails['omniauth_block_auto_created_users'] = false`
Here are the saml attributes I'm sending from Azure it might be missing one:
```
givenname
http://schemas.xmlsoap.org/ws/2005/05/identity/claims
…
surname
http://schemas.xmlsoap.org/ws/2005/05/identity/claims
…
emailaddress
http://schemas.xmlsoap.org/ws/2005/05/identity/claims
…
name
http://schemas.xmlsoap.org/ws/2005/05/identity/claims
```
More details about error:
```
gitlab | ==> /var/log/gitlab/gitlab-workhorse/current <==
gitlab | 2017-11-08_16:29:08.42671 gitlab.localhost @ - - [2017-11-08 16:29:07.842706622 +0000 UTC] "POST /users/auth/saml/callback HTTP/1.1" 302 225 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.89 Safari/537.36" 0.583877
gitlab |
gitlab | ==> /var/log/gitlab/nginx/gitlab_access.log <==
gitlab | 172.17.0.1 - - [08/Nov/2017:16:29:08 +0000] "POST /users/auth/saml/callback HTTP/1.1" 302 225 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.89 Safari/537.36"
gitlab |
gitlab | ==> /var/log/gitlab/unicorn/unicorn_stdout.log <==
gitlab | I, [2017-11-08T16:29:07.851060 #4946] INFO -- omniauth: (saml) Callback phase initiated.
gitlab |
gitlab | ==> /var/log/gitlab/gitlab-rails/production.log <==
gitlab | Processing by OmniauthCallbacksController#omniauth_error as HTML
gitlab | Parameters: {"error"=>"Email can't be blank, Notification email can't be blank, and Notification email is invalid", "provider"=>"saml"}
gitlab | Completed 422 Unprocessable Entity in 95ms (Views: 69.5ms | ActiveRecord: 0.0ms)
gitlab |
gitlab | ==> /var/log/gitlab/gitlab-rails/production_json.log <==
gitlab | {"method":"GET","path":"/users/auth/saml/omniauth_error","format":"html","controller":"OmniauthCallbacksController","action":"omniauth_error","status":422,"duration":95.99,"view":69.51,"db":0.0,"time":"2017-11-08T16:29:08.454Z","params":{"error":"Email can't be blank, Notification email can't be blank, and Notification email is invalid","provider":"saml"},"remote_ip":"172.17.0.1","user_id":null,"username":null}
```
username_5: @username_2 All right, now we're back in GitLab territory. :) Let's take this back to https://gitlab.com/gitlab-org/gitlab-ce/issues/39618.
username_1: Glad this is sorted out.
It would be good to get GitLab up to the latest ` ruby-saml` to allow users to configure using a remote metadata file. It's a far less error-prone way to ensure that the right settings are configured for your IdP.
username_6: Guys, sorry to bring this up again, but will it be of interest to add support for SHA256 to omniauth-saml? As far as I can tell, only SHA1 fingerprints are supported.
https://github.com/omniauth/omniauth-saml/blob/v1.10.0/lib/omniauth/strategies/saml.rb#L68 |
prometheus/prometheus | 278577526 | Title: Prometheus v2.0.0 data corruption
Question:
username_0: At SAP we're using Prometheus to monitor our 13+ kubernetes clusters. The recent upgrade to Prometheus v2.0.0 was initially very smooth, but is meanwhile somewhat painful, since we're seeing the following error on a daily basis. At first Prometheus returns inconsistent metric values, which affects alerting, and eventually crashes with:
```
level=error ts=2017-12-01T19:22:24.923269594Z caller=db.go:255 component=tsdb msg="retention cutoff failed" err="read block meta /prometheus/01BZFAM16QFQ3ECY7E09DH7X7H: open /prometheus/01BZFAM16QFQ3ECY7E09DH7X7H/meta.json: no such file or directory"
level=info ts=2017-12-01T19:22:24.923332144Z caller=compact.go:361 component=tsdb msg="compact blocks" count=1 mint=1512129600000 maxt=1512136800000
level=error ts=2017-12-01T19:22:28.906791057Z caller=db.go:260 component=tsdb msg="compaction failed" err="reload blocks: read meta information /prometheus/01BZFAM16QFQ3ECY7E09DH7X7H: open /prometheus/01BZFAM16QFQ3ECY7E09DH7X7H/meta.json: no such file or directory"
```
On restart it fails with
```
level=info ts=2017-11-30T13:53:15.774449669Z caller=main.go:215 msg="Starting Prometheus" version="(version=2.0.0, branch=HEAD, revision=0a74f98628a0463dddc90528220c94de5032d1a0)"
level=info ts=2017-11-30T13:53:15.774567774Z caller=main.go:216 build_context="(go=go1.9.2, user=root@615b82cb36b6, date=20171108-07:11:59)"
level=info ts=2017-11-30T13:53:15.774584415Z caller=main.go:217 host_details="(Linux 4.13.9-coreos #1 SMP Thu Oct 26 03:21:00 UTC 2017 x86_64 prometheus-frontend-4217608546-6mkiw (none))"
level=info ts=2017-11-30T13:53:15.77544454Z caller=web.go:380 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2017-11-30T13:53:15.776060323Z caller=main.go:314 msg="Starting TSDB"
level=info ts=2017-11-30T13:53:15.776080166Z caller=targetmanager.go:71 component="target manager" msg="Starting target manager..."
level=error ts=2017-11-30T13:53:16.931485157Z caller=main.go:323 msg="Opening storage failed" err="read meta information /prometheus/01BZFAM16QFQ3ECY7E09DH7X7H: open /prometheus/01BZFAM16QFQ3ECY7E09DH7X7H/meta.json: no such file or directory"
```
This can only be fixed manually by deleting at least the affected directory.
Memory usage is consistent. Nothing obvious here.
Prometheus stores the data on an NFS mount, which worked perfectly with previous versions.
Since this makes our monitoring setup quite unreliable, I'm thinking about downgrading to Prometheus v1.8.2, which did a fantastic job in the past.
I cannot see where prometheus fails to write the `meta.json`. Hopefully you know more @fabxc?
Similar to #2805.
**Environment**
* System information:
Linux 4.13.16-coreos-r1 x86_64
* Prometheus version:
prometheus, version 2.0.0 (branch: HEAD, revision: 0a74f98628a0463dddc90528220c94de5032d1a0)
build user: root@615b82cb36b6
build date: 20171108-07:11:59
go version: go1.9.2
* Prometheus configuration file:
Configuration can be found [here](https://github.com/sapcc/helm-charts/blob/master/system/kube-monitoring/charts/prometheus-frontend/templates/config.yaml).
Answers:
username_1: NFS is not supported, by any version of Prometheus. We require a POSIX filesystem.
username_0: Thanks for the quick reply.
Please also consider the 2nd part of the issue: in the same setup, while using a Kubernetes PVC, the retention is not honored, so the volume fills up, eventually leading to the error described above. I saw a couple of potentially related commits in prometheus/tsdb. Is this issue known?
username_2: Hi, this is a known issue with NFS and Windows systems, but other POSIX systems should be fine. As you might have noticed, this has been fixed upstream and there will be a new release soon with the fixes.
Thanks,
Goutham.
username_0: Thanks for the answer @gouthamve.
Our Prometheis v2.0.0 instances seem to work fine after manually deleting the data outside of the retention window.
Do you already have a timeline for the next release?
Is the fix you mentioned already in the master branch, so I could build and test it?
username_3: I have encountered the same error messages on a Windows server with local storage (the disk appears as `HP LOGICAL VOLUME SCSI Disk Device`).
In my case I was unable to start Prometheus after a config change until I have deleted the affected folder.
Windows Server 2012 R2
Prometheus: version=2.0.0, branch=HEAD, revision=0a74f98628a0463dddc90528220c94de5032d1a0, go=go1.9.2
Status: Issue closed
username_4: For what it's worth, we built Prometheus against the latest prometheus/tsdb and that solved this particular issue with NFS.
username_5: Related to #3506 and should be fixed by prometheus/tsdb#213 and #3508.
username_6: I still see the above issue with prometheus v2.1.0, which afaict includes #3508. I believe this indicates that the tsdb change was not sufficient.
username_7: I am seeing a similar data corruption issue with Prometheus v2.0.0 when I restart Prometheus.

To clarify, I am not using NFS; it is just an ext4 filesystem. Below is the mount location where the empty meta.json error occurred.
/dev/vda3 on /data-2 type ext4 (rw,relatime,errors=remount-ro,data=ordered)
This can only be fixed manually by deleting at least the affected directory from the mount location. Please let me know whether this issue can be fixed by upgrading Prometheus to a version later than v2.1.0.
username_8: Has this bug been fixed?
username_9: @username_8 if you're asking about running Prometheus on NFS, what Brian [answered](#issuecomment-348598966) hasn't changed.
If you have additional questions, please ask on the prometheus users mailing list or IRC. |
MrRefactoring/jira.js | 585663446 | Title: API Return Types not typed.
Question:
username_0: For example the result of `client.issueSearch.searchForIssuesUsingJqlGet` is typed as `Promise<any>`.
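Until typed responses land, a caller can narrow the type locally; a minimal sketch, where the `SearchResults` interface is hand-written for this example and is not part of the library:
```ts
import { Client } from 'jira.js';

// Hand-written shape for the fields we actually use (an assumption,
// not an official jira.js type).
interface SearchResults {
  total: number;
  issues: Array<{ key: string; fields: { summary: string } }>;
}

const client = new Client({ host: 'https://example.atlassian.net' });

async function search(): Promise<SearchResults> {
  // The library returns Promise<any>, so we assert the shape ourselves.
  const result = await client.issueSearch.searchForIssuesUsingJqlGet({
    jql: 'creator = currentUser()',
  });
  return result as SearchResults;
}
```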
Answers:
username_1: Hi, @username_0!
Yes, I really helped develop the `jira-connector`, but support of the legacy code and the heavy weight of the library inspired me to create `jira.js`, a lighter and more relevant solution for working with Jira api.
I wrote a small script that generates code for this library from the Swagger spec and Postman collection, but I don't have enough time to add the part responsible for the return types :(
I started writing it, but never finished it. If you are interested in helping me, then I am open to communication in this direction.
username_1: Maybe I will add this after a while, but I can’t promise for sure
username_2: Thanks so much for this Mr!
Very impressive and helpful!
Noob question: how do I set arguments in a query, e.g. `client.issueSearch.searchForIssuesUsingJqlGet`?
```ts
import { Client } from "jira.js";
import { Jql } from 'jira.js/out/api';

Jql.arguments = 'creator=QB';
const data = await client.issueSearch.searchForIssuesUsingJqlGet(Jql);
console.log(data);
```
username_1: @username_2 I think that's what you're looking for:
```ts
import { Client } from 'jira.js';
const client = new Client({ host: '...' });
const data = await client.issueSearch.searchForIssuesUsingJqlGet({
jql: 'creator=QB',
});
console.log(data);
```
username_2: Thanks Mr!
1. How do I extract Json keys in a safe way?
(Promise.race does not seem to help)
2. Trying to get all subtask information from all the existing subtasks. (Currently only my own)
It seems like I get a lot of unnecessary data.
Do you know of a query that retrieves the user-inserted data only?
Would you recommend another query?
3. I seem to always get a Promise back even when trying resolve or .json()
eg. https://flaviocopes.com/javascript-promises/#consuming-a-promise
What is the simplest way?
username_1: I can't tell you exactly which query you should use. I think it's better to make a topic in the [discussion](https://github.com/username_1/jira.js/discussions/new), maybe you will get an answer there.
Status: Issue closed
|
ftsf/nico | 603369560 | Title: new error when trying to build platformer.nim on Nim 1.2.0 with latest source
Question:
username_0: ```
nim --version
Nim Compiler Version 1.2.0 [Windows: amd64]
Compiled at 2020-04-03
Copyright (c) 2006-2020 by <NAME>
git hash: 7e83adff84be5d0c401a213eccb61e321a3fb1ff
active boot switches: -d:release
```
```
git clone https://github.com/username_2/nico.git
Cloning into 'nico'...
remote: Enumerating objects: 133, done.
remote: Counting objects: 100% (133/133), done.
remote: Compressing objects: 100% (69/69), done.
remote: Total 703 (delta 83), reused 109 (delta 64), pack-reused 570
Receiving objects: 100% (703/703), 386.35 KiB | 1.54 MiB/s, done.
Resolving deltas: 100% (418/418), done.
```
```
# cd nico/examples
# nim c -r -d:release platformer.nim
Hint: used config file 'C:\Users\unicodex\.choosenim\toolchains\nim-1.2.0\config\nim.cfg' [Conf]
Hint: system [Processing]
Hint: widestrs [Processing]
Hint: io [Processing]
Hint: platformer [Processing]
Hint: nico [Processing]
Hint: common [Processing]
Hint: math [Processing]
Hint: bitops [Processing]
Hint: macros [Processing]
Hint: tables [Processing]
Hint: hashes [Processing]
Hint: algorithm [Processing]
Hint: unicode [Processing]
Hint: times [Processing]
Hint: strutils [Processing]
Hint: parseutils [Processing]
Hint: options [Processing]
Hint: typetraits [Processing]
Hint: winlean [Processing]
Hint: dynlib [Processing]
Hint: time_t [Processing]
Hint: keycodes [Processing]
Hint: controller [Processing]
Hint: sdl [Processing]
Hint: ringbuffer [Processing]
Hint: unittest [Processing]
Hint: streams [Processing]
Hint: sets [Processing]
Hint: sequtils [Processing]
Hint: os [Processing]
Hint: pathnorm [Processing]
Hint: osseps [Processing]
Hint: terminal [Processing]
Hint: strformat [Processing]
Hint: colors [Processing]
[Truncated]
Hint: nimz [Processing]
Hint: filters [Processing]
Hint: osproc [Processing]
Hint: strtabs [Processing]
Hint: cpuinfo [Processing]
Hint: ospaths [Processing]
C:\Users\unicodex\.nimble\pkgs\nico-0.2.0\nico\backends\sdl2.nim(34, 8) Warning: import os.nim instead; ospaths is deprecated [Deprecated]
Hint: parsecfg [Processing]
Hint: sndfile [Processing]
Hint: random [Processing]
C:\Users\unicodex\.nimble\pkgs\nico-0.2.0\nico\backends\sdl2.nim(333, 50) Warning: use `csize_t` instead; csize is deprecated [Deprecated]
C:\Users\unicodex\.nimble\pkgs\nico-0.2.0\nico\backends\sdl2.nim(333, 19) Error: type mismatch: got <ptr RWops, ptr uint8, int literal(1), csize>
but expected one of:
proc rwRead(context: ptr RWops; p: pointer; size: csize_t; maxnum: csize_t): csize_t
first type mismatch at position: 4
required type for maxnum: csize_t
but expression 'csize(1)' is of type: csize
expression: rwRead(fp, addr(buffer[offset]), 1, csize(1))
```
Answers:
username_1: I haven't tried this, but I think doing `nimble install nico` might help, since there were commits that may be unstable after the version bump.
username_0: I completely hosed my nimble packages, Nim compiler and everything, then
reinstalled from scratch, including `nimble install nico` and this is the
error I got.
username_1: I'll investigate this issue sooner or later and try to find a solution.
username_2: Hi, I'll look into this, I was using Nim 1.0.2, I'll upgrade to 1.2 and try to fix any issues.
username_2: I was able to reproduce this issue with a fresh `.nimble`, I've pushed a fix. Let me know if you're still having problems.
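For reference, the error above points at a change of roughly this shape in `nico/backends/sdl2.nim` (a speculative sketch reconstructed from the error message, not the actual commit):
```diff
- rwRead(fp, addr(buffer[offset]), 1, csize(1))
+ rwRead(fp, addr(buffer[offset]), 1, csize_t(1))
```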
Status: Issue closed
username_0: All of the examples work now! Even on windows after installing SDL2 and libsndfile on Windows with scoop.
You're awesome, my friend. Thank you for your dedication. |
blezz23/angular-test | 964458133 | Title: .
Question:
username_0: https://github.com/blezz23/angular-test/blob/fd08655a873dfcd011506ec0d1ce86b3e5b8b4b2/src/app/pages/dashboard/datepicker/datepicker.component.ts#L15
A type annotation on an assignment is usually unnecessary: there is no point in it, since it is already clear what type the variable will be, especially for primitive types. The same goes for `Date`.
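In other words (a minimal TypeScript illustration, not taken from the reviewed code):
```ts
// Redundant: the annotation only repeats what the compiler infers anyway.
const createdAt: Date = new Date();

// Preferred: let inference do the work; `inferredAt` is still typed as Date.
const inferredAt = new Date();
```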
Run2-CMS-BPH/BsToPhiMuMu | 304108523 | Title: Update the selector code
Question:
username_0: Do the following updates for selector code.
1. add a new tag "mc.nogen"
2. add the other needed vars
3. add other (needed) print statements
4. check the counters added recently
Answers:
username_0: @username_1: I see now that the selector code is WRONG; @username_2 at least didn't bother to find the bug. That's why we need to fix it ASAP. If you compare the current code with the BuToKstarMuMu selector code, you can see the difference clearly.
username_1: I see. Okay, we have to fix it.
username_0: @username_1 @username_2: can you guys try to fix this issue? Please take prompt action.
username_1: Hi Niladri,
The internet connection in the TSH hostel is extremely poor. I'm afraid I may only be able to work properly after reaching BBSR.
username_2: @username_0, there is already an mc.nogen tag in my selector.
A few of the variables you suggested are in that selector as well.
I will compare the selector with SingleBuToKstarMuMuSelector.cc and modify it accordingly. In the HBCSE guest house there is no network.
username_1: Deepak, you can use the internet on the TIFR campus.
username_0: @username_2, I know, but try your best to fix it ASAP.
username_2: @username_0 This is the pushed selector.
https://github.com/username_2/Run2-Bs2PhiMM-1/tree/pull-selector
username_0: @username_2: you need to make a PR to the central repo. Can you do that, making sure that you start from the code version which we currently have in the master branch?
username_0: that's right, you have to make a pull request after making the necessary changes to the existing code and make sure that everything is in place. I hope you know how to make PR, I shared the presentation few days back on slack.
Status: Issue closed
username_0: +1 (closed) |
vernemq/vmq_mzbench | 487499128 | Title: problem with loading client cert
Question:
username_0: Hi, I was trying to start the use_client_cert example, but I had a problem loading the client cert.
I'm getting the following error:
```
17:30:19.399 [error] [[email protected]] <0.263.0> Worker <0.268.0> on '[email protected]' has crashed: {badmatch,[{'PrivateKeyInfo',<<48,130,9,67,2,1,0,48,13,6,9,42,134,72,134,247,13,1,1,1,5,0,4,130,9,45,48,130,9,41,2,1,0,2,130,2,1,0,199,133,155,216,115,224,53,14,51,144,191,22,232,162,105,72,0,66,233,81,71,174,185,252,14,144,198,48,170,130,145,253,46,207,241,194,91,96,14,12,78,131,149,169,251,253,144,195,33,226,107,81,119,128,67,194,102,128,79,7,28,188,114,196,233,112,228,31,91,1,146,152,24,19,209,55,3,27,35,170,148,36,97,34,58,254,243,85,85,79,254,203,240,212,189,51,67,216,108,224,123,101,108,185,170,140,179,173,136,246,218,215,127,140,196,193,82,246,183,197,181,36,45,146,21,129,229,108,206,148,140,174,247,173,21,153,203,239,231,35,246,223,187,225,53,167,230,142,182,38,118,95,233,166,106,140,110,66,42,94,151,28,97,42,135,101,244,57,213,139,145,127,75,36,205,74,198,82,173,145,196,116,179,32,118,0,40,199,83,177,42,59,36,67,90,105,240,123,9,193,92,19,87,175,61,171,197,173,176,46,136,45,252,91,127,142,140,77,54,23,226,111,1,203,131,13,203,91,23,56,87,188,251,65,38,219,63,140,85,109,5,60,232,86,51,17,175,21,3,189,212,193,110,229,191,144,238,244,37,170,198,251,183,168,93,31,154,169,135,6,74,123,164,222,119,131,12,29,232,235,253,250,18,92,148,170,151,251,175,249,131,163,131,85,227,165,61,49,227,164,8,248,95,27,180,159,6,255,79,198,9,193,246,184,233,170,135,170,84,20,125,22,179,210,219,172,37,48,...>>,...}]} Stacktrace: [{mqtt_worker,load_client_key,3,[{file,"src/mqtt_worker.erl"},{line,296}]},{mzb_erl_worker,apply,4,[{file,"/tmp/bench_mzbench_api_mirko-Inspiron-3576_1566_715078_803996/deployment_code/node/_build/default/deps/mzbench/src/mzb_erl_worker.erl"},{line,70}]},{mzbl_interpreter,eval,4,[{file,"/tmp/bench_mzbench_api_mirko-Inspiron-3576_1566_715078_803996/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_interpreter.erl"},{line,22}]},{mzbl_interpreter,'-eval_/4-fun-0-',4,[{file,"/tmp/bench_mzbench_api_mirko-Inspiron-3576_1566_715078_803996/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_interpreter.erl"},{line,38}]},{lists,foldl,3,[{file,"lists.erl"},{line,1263}]},{mzbl_interpreter,eval,4,[{file,"/tmp/bench_mzbench_api_mirko-Inspiron-3576_1566_715078_803996/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_interpreter.erl"},{line,22}]},{mzbl_interpreter,eval_std_function,6,[{file,"/tmp/bench_mzbench_api_mirko-Inspiron-3576_1566_715078_803996/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_interpreter.erl"},{line,61}]},{mzbl_interpreter,eval,4,[{file,"/tmp/bench_mzbench_api_mirko-Inspiron-3576_1566_715078_803996/deployment_code/node/_build/default/deps/mzbench_language/src/mzbl_interpreter.erl"},{line,22}]}] State of worker: {mqtt_worker,{state,undefined,undefined}}
--
```
I have ca.crt client.crt and client.key files that are valid for connecting and authenticating to my broker.
client.key looks like this
```
-----<KEY>
-----END PRIVATE KEY-----
```
Do you have any idea how to make this work in my case, or where I may be going wrong?
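Not an official answer, but the `{badmatch,[{'PrivateKeyInfo',...}]}` in `load_client_key` suggests the worker only matches PKCS#1 entries (e.g. `'RSAPrivateKey'`), while a `BEGIN PRIVATE KEY` file is PKCS#8, which `public_key:pem_decode/1` returns as `'PrivateKeyInfo'`. One workaround is converting the key (`openssl rsa -in client.key -out client-pkcs1.key`); alternatively, a more permissive loader could look roughly like this (a sketch, not the actual mzbench code):
```erlang
%% Accept both PKCS#1 ('RSAPrivateKey') and PKCS#8 ('PrivateKeyInfo') keys;
%% Erlang's ssl application understands both tuple forms in its key option.
load_client_key(File) ->
    {ok, Pem} = file:read_file(File),
    [{Type, Der, not_encrypted}] = public_key:pem_decode(Pem),
    {key, {Type, Der}}.
```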
Status: Issue closed |
umbraco/Umbraco-CMS | 439512137 | Title: Object Automatic naming
Question:
username_0: I have "Work" document type.
When I make a new property, like "Title", the automatic name given is "title".
Better is concat Document Type with Property Name es "workTitle"
Answers:
username_1: Hey @username_0 this is more of a support question, make sure to ask your questions on how to use Umbraco on the forum at https://our.umbraco.com
Thanks!
Status: Issue closed
username_0: My English is very bad! I am suggesting a modification to Umbraco.
Now: Umbraco suggests "**title**" as the property alias.
Tomorrow: Umbraco suggests "**workTitle**" as the property alias (Document Type Name + Property Name).
Sample:

username_1: I understand, and this can be done! You just need to click on the little "lock" icon next to the title to give it a better name.
I am recommending you ask your questions on the forum so people can help you there, the issue tracker here is not suitable for questions on how to use Umbraco. :-)
username_0: Yes, MANUALLY I just need to click etc. I mean "AUTOMATICALLY" :)
username_1: @username_0 I understand what you mean. Unfortunately for you, this is not a feature we plan to implement. |
wakatime/vscode-wakatime | 895324324 | Title: Wakatime: Unknown Error (1) ; Check you log files
Question:
username_0: I checked every file, reinstalled the extension about 100 times, and regenerated the API key, but nothing is working.
Please help.
Answers:
username_1: I am also seeing this error message on my Windows 10 work laptop within VSCode when I hover over the WakaTime Error in the status bar (at the bottom of VSCode). When I opened the `C:\Users\<user>\.wakatime.log` file, I see the following line repeating over and over:
```text
{"caller":"/home/runner/work/wakatime-cli/wakatime-cli/cmd/legacy/heartbeat/heartbeat.go:49","func":"Run","level":"fatal","message":"failed to send heartbeat(s): failed to send heartbeats via api client: failed making request to \"https://api.wakatime.com/api/v1/users/current/heartbeats.bulk\": Post \"https://api.wakatime.com/api/v1/users/current/heartbeats.bulk\": x509: certificate signed by unknown authority","now":"2021-05-19T05:26:54-07:00","version":"v1.6.0"}
``` |
veganaut/veganaut | 174905882 | Title: Photos? Photos!
Question:
username_0: Let's test how / whether photos make veganaut.net more attractive for users. Several types of hypotheses can/should be tested:
1. X% more users will **visit** veganaut.net if there are photos (Of what? - locations? products? random vegan-related photos? completely random photos? Where are the photos visible? In the previews of all the veganaut-links people share on Facebook? Only in the previews of location-details-links on fb? On embedded maps in the location previews? Only in the location details views, i.e. not on embedded maps?)
2. X% more new users will **register** if there are photos (Of what? Where?)
3. X% more users will add locations on the map / **add infos** to locations if there are photos (Of what? Where?)
Instead of hypotheses concerning the number of users we could also test hypoteses concerning the **average time** a users visits veganaut.net, the time they take until they complete the registration, the time they stay on the page after completing the registration etc., or the **amount of added infos** per user.
All these types of hypotheses can be tested without letting users add photos themselves. We could just add photos ourselves and see what happens. Which of the hypotheses we can actually test will probably be constrained by where we can gather enough data that is homogenous enough to be of any value, which, especially for type-1 hypotheses, may only be on or via the embedded maps (or maybe we could work with fb-ads to test for other type-1 hypotheses).
Anyway, if the numbers are promising we could go about testing further:
1. X% more users will **visit** veganaut.net if we let them add photos (Of what? How? Not sure whether this makes sense. If they haven't visited yet, how can they know they can add photos?)
2. X% more new users will **register** if we let them add photos (Of what? How? Just by having a button in the location details view which says "add photo of this location / product"? By having a button everywhere on veganaut.net that says "add photos of my vegan discoveries" - with the idea of somehow letting them link those photos to the map? Or is it enough if we just let them have a private collection of photos on their profile page? Or do they want others to be able to see the pictures? How? Where? etc.)
3. X% more users will add locations on the map / **add infos** to locations if we let them add photos (Of what? How? Especially interesting: would people just add photos to locations or products, but then add less information otherwise? Or would photos not affect the amount of other infos entered?)
So, there's a lot to test. How to start? How to decide how to start? How to decide how to decide how to start? Etc.
Answers:
username_0: Maybe users will want to add tons of pics and info if we have a standard photo for every single vegan option: [green salad without dressing] and a button which says [add a more accurate picture]. Hahaha. Why not test things like this? Could be fun. |
gunschu/jitsi_meet | 771653932 | Title: Add mute unmute to feature flags
Question:
username_0: **Add mute, unmute, and mute-everyone-except options to feature flags** so that they can be disabled and enabled as needed, like the other options in feature flags.
Answers:
username_1: @username_0 Could you provide link to where this is supported in the official ios and android sdks? That would help us implement this faster.
Status: Issue closed
username_1: @username_0 These are not feature flags as far as I can find. Mute everyone already exists in the option menu. Mute and Unmute programmatically, are new actions exposed by the API, not feature flags. I've added a new issue #233 to add this action. Closing this. |
anatoliypus/web-development | 596710222 | Title: Comments on lab assignment No. 6
Question:
username_0: - [ ] Why does the index file have a .php extension when it contains HTML code?
- [ ] The headings still have not been fixed (one of the first remarks given in class). They are all written in uppercase in the HTML. You should write ```<h2>Cleen code</h2>``` and apply the uppercase in CSS
- [ ] PHP function naming does not follow the code convention, e.g. getPOSTparameter
- [ ] The layout is broken

Since I cannot see the form, review of the lab assignment is paused until the form is back and its functionality can be checked
**Do not tick the checkboxes, they are for the instructor**
Answers:
username_0: - [ ] there are PHP formatting errors
- [ ] add validation for the email and name fields: the name must not start with a digit, and the email must match the form <EMAIL>
Status: Issue closed
|
anoff/devradar | 478646572 | Title: DISCLAIMER file is changing with every other commit
Question:
username_0: There are [multiple commits](https://github.com/username_0/devradar-static/commits/gh-pages) of DISCLAIMER changes without the actual package-lock being changed
Answers:
username_0: still happening; probably non-deterministic module resolution
frontendbr/vagas | 484716302 | Title: [São Paulo - Alphaville] Front-end Developer at Corebiz
Question:
username_0: ## Descrição da vaga
Front-End Developer
## Local
São Paulo - Barueri
## Benefícios
- Plano de saúde
- Plano odontológico
- Seguro de vida
- VT e VR
- Curso de Espanhol
## Requisitos
**Obrigatórios:**
- HTML, CSS, Java Script, Jquery
- Pré-processadores Sass ou Less
- Automatizadores de Tarefas como: (Grunt ou Gulp)
- Não utilizar framworks bootstrap, foundation, ou afins
**Diferenciais:**
- Boas práticas de SEO
- Vtex
- React
- Angular
## Contratação
CLT - Ótima Remuneração
## Nossa empresa
A Corebiz é uma agência especializada em soluções de ecommerce desde o projeto de implantação, até o relacionamento do cliente, com metodologia própria e experiência aplicada em mais de 150 projetos. Temos escritórios no Brasil, Argentina e México.
## Como se candidatar
Por favor envie um email para <EMAIL> com seu CV anexado e Portfolio se possuir - enviar no assunto: Vaga Front-end Developer
## Labels
- Alocado
- CLT
- Júnior
- Pleno<issue_closed>
Status: Issue closed |
tkem/mbino | 253149373 | Title: Add 16-bit SPI support
Question:
username_0: `SPI.transfer16()` can be used with `spi_master_write()`.
`spi_master_block_write()` seems to simply discard the MSB for native mbed targets...
Answers:
username_0: Also try and get rid of `SPISettings` in `spi_api.c`, in case we ever want to make this `extern "C"`.
Status: Issue closed
|
hectcastro/docker-riak | 53953067 | Title: Ring ownership after start
Question:
username_0: So, starting the Riak cluster as per the documentation:
```
DOCKER_RIAK_AUTOMATIC_CLUSTERING=1 DOCKER_RIAK_CLUSTER_SIZE=5 DOCKER_RIAK_BACKEND=leveldb make start-cluster
```
After stabilization, I check the ring ownership:
```"ring_ownership": "[{'[email protected]',64}]",```
Thats uncool.
So docker-enter into one of the nodes to see what riak-admin cluster plan shows
````
================================= Membership ==================================
Status Ring Pending Node
-------------------------------------------------------------------------------
valid 20.3% 20.3% '[email protected]'
valid 20.3% 20.3% '[email protected]'
valid 20.3% 20.3% '[email protected]'
valid 20.3% 20.3% '[email protected]'
valid 18.8% 18.8% '[email protected]'
-------------------------------------------------------------------------------
Valid:5 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
```
Outstanding changes; I'll commit them and check the status again:
```
root@5ec57bbf8295:~# riak-admin cluster commit
Cluster changes committed
root@5ec57bbf8295:~# riak-admin cluster plan
There are no staged changes
```
All good. And obviously ring ownership also cool now
```"ring_ownership": "[{'[email protected]',13},\n {'[email protected]',13},\n {'[email protected]',13},\n {'[email protected]',13},\n {'[email protected]',12}]",```
So something with automatic_clustering fails here.
Answers:
username_1: @username_0, I had similar issues with unfinished cluster configuration. After digging into it I figured out that sometimes automatic_clustering.sh is executed too soon and 'cluster join' command returns "Node not found!" message. I wasn't able to come up with a quick fix for this issue and chose an alternative option: instead of joining the cluster from inside the container I'm doing it explicitly in start-cluster.sh, see bdb49dd14746b08c27ef3993ee7645f5c3b73d72 and 83ff81f5c5448218b2bbc59d4ba30978bdbc732a.
These changes give me consistently stable behaviour.
Hope this helps. |
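A rough sketch of that host-driven approach (container names and cluster size are illustrative, not taken from the linked commits):
```bash
#!/bin/bash
# Join each secondary node to the seed node, then plan and commit once.
SEED_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' riak01)
for index in $(seq -f "%02g" 2 "${DOCKER_RIAK_CLUSTER_SIZE:-5}"); do
  docker exec "riak${index}" riak-admin cluster join "riak@${SEED_IP}"
done
docker exec riak01 riak-admin cluster plan
docker exec riak01 riak-admin cluster commit
```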
comic/grand-challenge.org | 1026336938 | Title: Challenge status markers
Question:
username_0: For visitors of Grand Challenge it would be nice to know which challenges are still ongoing and which have already ended.
As far as I can tell we currently have no way of knowing whether a challenge is ongoing or not. One way of implementing this would be to add `start` and `end` date fields to the Phase model, which need to be filled in by the challenge organizers for each of their phases.
We could then mark challenges as either closed or ongoing in the top corner of the challenge card, and add information on how many days there are left for submitting solutions to a specific phase in the card body.
Once this is in place, we should also enable filtering challenges by their status and order challenges according to whether they are ongoing (and secondly by creation date).<issue_closed>
Status: Issue closed |
LegionDark/Issues | 193262792 | Title: Grotto Pugil - Incorrect Detect
Question:
username_0: <!--
DO NOT REMOVE OR CHANGE THE PRE-FORMATTED TEXT TO PUT @COMMANDS IN TEMPLATE!!!
GITHUB SEES `@MENTIONS`, NOT `@GMCOMMANDS`!!!
IF YOU STUPIDLY IGNORE THIS WARNING I WILL CLOSE YOUR ISSUE!11eleventytwo!
Issues will also be closed without being looked into if
the following information is missing (unless its not applicable)!!!
-->
**Date & Time**:
12/2/2016
**FFXI Client Version (use `/ver`)**:
**Server's Expected Client Version matches, yes/no? (use `$ecv`)**:
yes
**Character Name**:
Devrin
**Nation**:
Bastok
**Job(level)/Sub Job(level)**:
WAR31/MNK15
**NPC or Monster or item Name**:
Grotto Pugil
**Zone name**:
Sea Serpent Grotto
**Coordinates (use `$where`)**:
H11-I11
**ffxiah.com link (for items issues only)**:
**Multi-boxing? (multiple clients on same connection)**:
**Steps To Reproduce / any other info**:
Simply use character similar to my level and enter while sneaked. There is a single Pugil that will always aggro even when Sneak is applied. Once you apply invisible, he does not aggro. Killed me twice... so much fun.
Answers:
username_1: Can you recheck this pugil today now that maintenance is done? Also you didn't specify your `/ver` output in the template. This is relevant because the server can theoretically think its something other than a pugil, but on the client says mob with that ID should be a pugil. Darkstar itself (the software the server runs) may have an old/outdated entry there even if your version matches the server so it helps me narrow down which bug we have if I know what version it was at a glance without having to dig.
username_1: A bunch of mobs got detection corrections today, so I am hoping this is already resolved.
username_0: I just confirmed that this was fixed! Much obliged. This should make the Ninja quests easier for many.
by the way, my version is 30161004_1
Status: Issue closed
|
sourcegraph/stylelint-config | 1004182383 | Title: Custom Stylelint rule to ban BEM convention in CSS modules
Question:
username_0: #### Problem statement
This summer we started the migration to [CSS modules](https://docs.sourcegraph.com/dev/background-information/web/styling#css-modules) in [the main Sourcegraph monorepo](https://github.com/sourcegraph/sourcegraph). We decided _not_ to use the BEM convention in CSS modules apart from modifier selectors:
```scss
// Global CSS
.repo-header {
&--active { ... }
&__button { ... }
}
// CSS module
.repo-header {
&--active { ... }
}
.button { ... }
```
Notice how we keep a nested modifier selector `&--active`. But the `&__button` selector is bubbled to the top level.
To stick to this convention, we need a Stylelint rule to ban usage on `&__*` selectors in CSS modules.
#### Success criteria
1. A custom Stylelint rule is created that warns if the `&__*` selector is used.
2. It's added to our Stylelint config as [a glob-based rule](https://github.com/stylelint/stylelint/pull/5521) to apply it only to `*.module.scss` files.
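For the first criterion, a rough sketch of such a plugin rule (the rule name, message, and option shape are placeholders; the real version would also need the glob-based scoping from point 2):
```js
const stylelint = require('stylelint');

const ruleName = 'plugin/no-bem-element-in-css-module'; // hypothetical name
const messages = stylelint.utils.ruleMessages(ruleName, {
  rejected: (selector) =>
    `Unexpected BEM element selector "${selector}"; move it to the top level instead`,
});

module.exports = stylelint.createPlugin(ruleName, (primaryOption) => (root, result) => {
  if (!primaryOption) {
    return;
  }
  root.walkRules((rule) => {
    // Ban nested element selectors (`&__foo`) while keeping
    // modifier selectors (`&--foo`) allowed.
    if (rule.selector.includes('&__')) {
      stylelint.utils.report({
        ruleName,
        result,
        node: rule,
        message: messages.rejected(rule.selector),
      });
    }
  });
});
module.exports.ruleName = ruleName;
module.exports.messages = messages;
```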
#### Things to learn/practice during the implementation
Writing custom Stylelint rules to keep the codebase consistent.
#### Estimated amount of work
T-shirt size estimate: **M**.
Answers:
username_1: I absolutely love the title. Will work on it! 😍 |
mendix/MobileFeatures | 246725018 | Title: The spinner isn't always removed properly
Question:
username_0: Reproduction (using the Mendix Mobile app):
- Load a hybrid app with security enabled e.g. using the QR scanner
- Login in to the hybrid app
- Log out and directly do three-finger tap (the gesture to bring up the context menu)
- Sometimes, the spinner remains visible, blocking all input
Answers:
username_1: Can you test [this version](https://github.com/mendix/MobileFeatures/raw/fix_ticket_54440_spinner_on_logout/dist/MobileFeatures.mpk)? This should fix the problem, but I haven't merged it into the widget yet
username_0: Thanks! It would be great if @simo101 could try this with one of the AoP apps on TEST. It's sometimes causing issues there, and that's faster than setting up a repro project.
Status: Issue closed
|
MaartenGr/BERTopic | 861676803 | Title: fit_transform taking exceptionally long to run
Question:
username_0: From the `quickstart`,
```
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
docs = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes'))['data']
topic_model = BERTopic()
topics, _ = topic_model.fit_transform(docs)
```
the `fit_transform()` method call in the last line above is taking exceptionally long to run on this newsgroups dataset.
What could be causing this? There is zero feedback while running that line so it appears to hang for hours.
Answers:
username_1: Sorry to hear you are facing this issue. To start off, it may be helpful if you were to set `verbose` to True when instantiating BERTopic. This enables logging and should give you an idea of where it hangs for you.
Second, if the 20Newsgroups example is running for hours then it is highly likely you are not using a GPU. For example, if I were to run this example on a GPU-enabled Google Colab session it runs without any issues.
Lastly, the `low_memory` parameter does not speed up BERTopic but merely allows you to actually run it in case you have little memory available. In practice, this will often reduce its speed.
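Concretely, for the logging suggestion:
```python
from bertopic import BERTopic

# verbose=True logs each stage (embedding, dimensionality reduction,
# clustering), which shows where a slow run is spending its time.
topic_model = BERTopic(verbose=True)
```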
username_0: Ah yes, I was running on a CPU, not a GPU.
The `fit_transform()` call on fetch_20newsgroups took 1 hour 29 minutes to run, and I'm guessing that was due to using a CPU.
Thanks for the suggestion to set `verbose` parameter to True when instantiating BERTopic; that logging should definitely help.
Similarly, I think supporting a progress-bar such as `tqdm` would help give users progress updates while the model is running.
Status: Issue closed
|
Ziv-Barber/officegen | 200973508 | Title: Support Bookmark For Word
Question:
username_0: It would be nice if there was a specific API for adding bookmarks in Word documents.
Answers:
username_1: Coming....
Testing it now
Status: Issue closed
username_1: Implemented, will be part of version 0.4.2.
You can take the latest from GitHub or wait for the release to npm (soon!).
JuliaLang/julia | 617747790 | Title: Provide startup option to include current active environment in julia prompt
Question:
username_0: If you're changing environments a lot it's easy to make mistakes where you think you're in one environment but you're actually in another. It would be nice if there were a startup option that would change the Julia prompt to include the current active environment, so it would look like this:
```
(@v1.4) julia>
```
or this:
```
(Example.jl) julia>
```
Answers:
username_1: ```
julia> using Pkg
julia> Base.active_repl.interface.modes[1].prompt = () -> string(Pkg.REPLMode.promptf()[1:end-5], "julia> ")
#3 (generic function with 1 method)
(@v1.4) julia> 1+1
2
(@v1.4) pkg> activate Foo
Activating new environment at `~/Documents/JuliaPkgs/TOMLX.jl/Foo/Project.toml`
(Foo) julia> 1+1
2
```
Beautiful API ;) |
DirectoryTree/LdapRecord-Laravel | 928925509 | Title: Error: Option [port] must be an integer.
Question:
username_0: - LdapRecord-Laravel Major Version: 2.3
- PHP Version: [7.4.16]
- Laravel Version: 7.29
**Commands:**
1) composer require directorytree/ldaprecord-laravel
2)php artisan vendor:publish --provider="LdapRecord\Laravel\LdapServiceProvider"
3) Inserted Lines in .env:
LDAP_LOGGING=true
LDAP_CONNECTION=default
LDAP_HOST=ldap.forumsys.com
LDAP_USERNAME=null
LDAP_PASSWORD=<PASSWORD>
LDAP_PORT=389
LDAP_BASE_DN="dc=example,dc=com"
LDAP_TIMEOUT=5
LDAP_SSL=false
LDAP_TLS=false
4) php artisan ldap:test
_ERROR:_
**LdapRecord\Configuration\ConfigurationException**
**Option [port] must be an integer.**
Answers:
username_1: For the moment, you can bypass it by editing the _ldap.php_ file in the _config_ folder:
```
'port' => intval(env('LDAP_PORT', 389)),
'timeout' => intval(env('LDAP_TIMEOUT', 5)),
```
Status: Issue closed
username_2: Sorry @username_0 & @username_1! I just patched this.
I released an accidental breaking change in [v2.5.1](https://github.com/DirectoryTree/LdapRecord/releases/tag/v2.5.1) with the port validation.
It's now working normally as it did previously ([v2.5.2](https://github.com/DirectoryTree/LdapRecord/releases/tag/v2.5.1)).
I've added a test to make sure this doesn't happen again 👍
Status: Issue closed
|
rails/jbuilder | 215105350 | Title: unwanted "is_a?" json key when using extract! with an ActiveRecord object
Question:
username_0: Hi, I'm using Rails 5.0.2 and jbuilder 2.6.3.
I'm having the same problem as #116. What could possibly be happening? This bug was supposedly fixed.
Here is my view.json.jbuilder:
```
json.(@booking, :api_data_schemas, :tickets_api_data_schemas)
json.seats @booking.seats do |seat|
json.extract! seat, :id, :number_with_level, :bus_schedule_id, :seat_klass, :seat_klass_stars
json.price seat.price.to_i
json.price_usd seat.price_usd.to_f
end
```
and here is my json output. Note `is_a?` appears twice: as the first attribute both in the `extract!` output and in the `json.(@booking, ...)` output.
```
{
"is_a?": {},
"api_data_schemas": {
"outbound": {
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"departure_bus_stop": {
"title": "Subida",
"type": "integer",
"values": [
{
"title": "TERMINAL SUR",
"value": 1
},
{
"title": "Cruce Colón",
"value": 8
}
]
},
"destination_bus_stop": {
"title": "Bajada",
"type": "integer",
"values": [
{
"title": "TERMINAL OSORNO",
"value": 5
},
{
"title": "Cruce Los Tambores",
"value": 29
}
]
}
},
"required": [
"departure_bus_stop",
"destination_bus_stop"
]
}
},
"tickets_api_data_schemas": {},
"seats": [
{
"is_a?": {},
"id": 375,
"number_with_level": "1-4",
"bus_schedule_id": 417,
"seat_klass": "2",
"seat_klass_stars": 2,
"price": 25000,
"price_usd": 40.95
}
]
```
Thanks in advance
Answers:
username_1: @username_0 have you found a solution or a workaround?
I'm dealing with the same issue :-/
username_0: @username_1 nope, I'm just ignoring it in the frontend.
username_1: @username_0 ugh.. thanks for the reply.
I've ended up with
```ruby
JSON.parse(json).except(:is_a?).to_json
```
before returining to the API 🤦♂️ |
PaxInstruments/labwiz-board | 175956086 | Title: Extra pin usage
Question:
username_0: We have two pins (PC1, PC4) available that are currently broken out to test pads. What can we do with them? Each is connected to a 12-bit ADC.
- SD_DETECT
- SD_ENABLE to cut power to the SD so the card can be reset
- Add a two-pin header near the modules. A wire jumper could be used to route them to a module.
- May be needed for the ESP8266 board.
- Connect to LEDs
- Buzzer
- Use a voltage divider to measure battery voltage with one of the free pins for the low voltage side. This would be to cut off current while not measuring.
- VBUS_DETECT
- May be needed as we improve the circuit
Answers:
username_0: PC4 is now used for VBUS_DETECT. PC1 is PC1_BOOT0, which I think can serve as double duty for something like SD_DETECT.
Status: Issue closed
username_0: Closing issue |
NPellet/jsGraph | 365479479 | Title: Zoom not working on documentation
Question:
username_0: On http://jsgraph.org/tutorials/tutorial-10-zoom.html
When a users tries to zoom, it zoom on the wrong region.
Tested on the last dev version of Firefox and Chromium.
Status: Issue closed
Answers:
username_1: Yes the website tutorials and examples are stuck the v1.6.
I've updated
http://jsgraph.org/tutorials/tutorial-10_basic.html
http://jsgraph.org/tutorials/tutorial-10-zoom.html
and the examples. The rest is WIP.
nolimitcity/nolimit.js | 618314890 | Title: Swedish jurisdiction links need to conform exactly to logotypes given by spelinspektionen.
Question:
username_0: If a game is loaded with the Swedish jurisdiction parameters for spelpaus, spelgränser and självtest, add a top div vertically above the game container and place the buttons provided below in it, with the same linking as done before. Then remove them from the options object so that the game does not receive them.
https://www.spelinspektionen.se/press/nyhetsarkiv/enklare-for-spelare-att-ta-kontroll-over-sitt-spelande/
Answers:
username_1: New bar created; it "steals" the links and events that would otherwise be sent to the games (to make them show the links) and shows them in the bar instead.
Status: Issue closed
|
microsoft/appcenter | 661463541 | Title: one or more profiles does not contain the provided certificate/
Question:
username_0: **Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here. |
bryanbraun/middleman-navtree | 56725892 | Title: Labels need translations
Question:
username_0: Labels **Previous** and **Next** of Pagination helpers need to be localized
Answers:
username_1: Hmm. Good call. It looks like middleman requires the i18n gem, and we can probably use the `I18n.t()` function for translation, as described here: https://middlemanapp.com/advanced/localization/#localization-(i18n)
I can't get around to this immediately, but be my guest if you (or anybody else) wants to take a crack at it.
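A minimal sketch of what that could look like in the pagination helpers (the `navtree.*` key names are invented for illustration):
```ruby
# Resolve the labels through i18n, falling back to the current English text.
previous_label = I18n.t('navtree.previous', default: 'Previous')
next_label     = I18n.t('navtree.next', default: 'Next')
```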
Status: Issue closed
username_1: Thanks to some help by @musashimm, this issue was fixed by https://github.com/username_1/middleman-navtree/pull/19.
Closing. |
google/oboe | 297160342 | Title: AudioStreamOpenSLES::processBufferCallback returns SL_RESULT_INTERNAL_ERROR
Question:
username_0: It does this if the return from the callback is != DataCallbackResult::Continue.
Don suggested we could use the SL player interface to set the play state to stopped. E.g.
https://github.com/googlesamples/android-ndk/blob/master/audio-echo/app/src/main/cpp/audio_player.cpp#L235
Additionally, we could set the stream state to stopped.
We need to test and verify that it is safe to stop an OpenSL ES stream from a callback.
Answers:
username_0: Fixed by #323
Status: Issue closed
|
omansak/libvideo | 732243757 | Title: Download video/music but i dont listenning the sound
Question:
username_0: I am creating a YouTube-to-MP4 downloader for Windows in .NET Framework / C#. I am using VideoLibrary to download the video.
I use the code below to download, and File.WriteAllBytes to save the file, but when I open the video I don't hear any sound.
My code:
```csharp
private async void DownloadVideo()
{
    string downloadUri;
    string link = TxtUrl.Text;
    // Normalize short youtu.be links into full watch URLs.
    if (TxtUrl.Text.Contains("youtu.be/"))
    {
        string[] dss = TxtUrl.Text.Split(new string[] { "youtu.be/" }, StringSplitOptions.None);
        string dsss = dss[0] + "youtube.com/watch?v=" + dss[1];
        link = dsss;
    }
    var videoInfos = await cli.GetAllVideosAsync(link);
    try
    {
        TxtUrl.Text = link;
        var downloadInfo = videoInfos.Where(i => i.Format == VideoFormat.Mp4 && i.Resolution == int.Parse(cboResolucao.Text)).FirstOrDefault(); // if 720p is possible
        downloadUri = downloadInfo.Uri;
        File.WriteAllBytes(textBox1.Text + @"\" + downloadInfo.FullName, downloadInfo.GetBytes()/*await downloadInfo.GetBytesAsync()*/);
        FULLname = downloadInfo.FullName;
        FULLname = FULLname.Replace(".mp4", "");
    }
    catch (Exception ex) // the original snippet was truncated here; try requires a catch/finally to compile
    {
        MessageBox.Show(ex.Message);
    }
}
```
Thanks for your help!!
Answers:
username_1: 720p and above resolutions have no sound (those streams are video-only).
username_0: It's fun, now I have this error:
```
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at VideoLibrary.VideoClient.<GetBytesAsync>d__10.MoveNext()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at VideoLibrary.Video.<GetBytesAsync>d__11.MoveNext()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at Download_VideoEMusic.Form1.<DownloadVideo>d__21.MoveNext() in C:\Users\andre\source\repos\Download_VideoEMusic\Download_VideoEMusic\Form1.cs:line 352
```
I transferred someone else's code, and it has the same problem as mine: it cannot execute "downloadInfo.GetBytes()".
Status: Issue closed
|
bwssytems/ha-bridge | 280067977 | Title: Backup configuaration to external file
Question:
username_0: Hi, would it be possible to build in a backup configuration option to save externally? To a file?
Answers:
username_1: So, the ha-bridge allows you to backup your devices data and config data already. Is this more of a question to set where the backup directory location is?
username_0: Yes I can make backups "inside" my docker container. But which to "download" via the GUI my config in a file.
username_1: Since I am nto the maintainer of docker, it would be best to post your question here https://github.com/aptalca/docker-ha-bridge
username_0: Ok, will do thanks
Status: Issue closed
|
gchq/gaffer-tools | 294374135 | Title: Filter throws error if there is no description on the schema
Question:
username_0: The view builder errors when searching through the entity and edge types. This is due to a .toLowerCase method in the schemaGroupFilter. This filter should handle undefined description fields.
Status: Issue closed
Answers:
username_1: Merged into develop. |
kstep/rust-mpd | 442958820 | Title: Status::Stop when _playing_ DSF files
Question:
username_0: Not really sure where this problem comes from but I'll try my best to explain.
I've got a basic loop that idles in blocking mode waiting for `Subsystem::Player` events, matches `Status.state` and sets the cover on song change, or unsets it if stopped. It all goes well with other types of files, but if I start playing a DSF file I have a surge of player events and `Status.state` is always `State::Stop`. Meanwhile CPU usage goes sky high while looping through those events. Is this possibly an issue with MPD itself? How can I provide more info? Any way to clear the idle events queue?
Meanwhile `mpc` correctly reports state as `playing`
Answers:
username_0: My bad, I was using `unwrap_or_default` to get the state. So after unwrapping it I get the actual error:
```Result::unwrap()` on an `Err` value: Parse(BadRate(ParseIntError { kind: InvalidDigit }))```
So you can ignore the part that concerns matching against `Status.state`
The result from probing MPD directly:
```
OK MPD 0.20.0
status
volume: 100
repeat: 0
random: 0
single: 0
consume: 0
playlist: 76
playlistlength: 6
mixrampdb: 0.000000
state: play
song: 0
songid: 51
time: 16:629
elapsed: 16.253
bitrate: 2822
duration: 628.993
audio: dsd64:2
nextsong: 1
nextsongid: 52
OK
```
username_0: Investigating why the call to `wait()` doesn't block after starting playing a DSD file I found out this.
The first to calls to `client.wait(&[mpd::Subsystem::Player])` result in `Ok([])`. Why is the list empty and not `Ok([Player])` as expected?
After those two I start getting `Err(Io(Os { code: 32, kind: BrokenPipe, message: "Broken pipe" }))`, so yeah, that's why no blocking. Still need to figure out where this comes from...
username_0: Hi, I had a look at the parser and came up with the following, which works for me:
```rust
impl FromStr for AudioFormat {
    type Err = ParseError;
    fn from_str(s: &str) -> Result<AudioFormat, ParseError> {
        let mut dsd = false;
        let mut it = s.split(':');
        // First field: either a plain sample rate ("44100") or a DSD marker
        // ("dsd64"), which encodes the rate as a multiple of 44100 / 8.
        let rate = it.next().ok_or(ParseError::NoRate).and_then(|v| {
            if v.starts_with("dsd") {
                dsd = true;
                v.trim_start_matches("dsd").parse::<u32>().map(|v| v * 44100 / 8)
            } else {
                v.parse()
            }
            .map_err(ParseError::BadRate)
        })?;
        // Second field: bit depth; "f" means 32-bit float (encoded as 0 here),
        // and DSD streams are always 1 bit per sample.
        let bits = if dsd {
            1
        } else {
            it.next().ok_or(ParseError::NoBits).and_then(|v| match v {
                "f" => Ok(0),
                "dsd" => Ok(1),
                _ => v.parse().map_err(ParseError::BadBits),
            })?
        };
        // Third field: channel count.
        let chans = it
            .next()
            .ok_or(ParseError::NoChans)
            .and_then(|v| v.parse().map_err(ParseError::BadChans))?;
        Ok(AudioFormat { rate, bits, chans })
    }
}
```
I'm new to Rust so I've no idea if it's idiomatic or silly, but that's the gist and it works. I found the info in the MPD user manual (it was missing from the protocol specs), here:
https://www.musicpd.org/doc/html/user.html#configuring-audio-outputs
Anyway, I've still no clue why the connection breaks down on a parse error; I haven't looked at it. Any idea?
username_0: I've found these relevant bits in the MPD code base and I've put them together in a gist so I stop cluttering this thread: https://gist.github.com/username_0/585e0dd20cce521f16cd46da00fa3f68
Basically `FromStr` should be orthogonal to `ToString` in `AudioFormat.cxx`. To achieve this without loss of information I've introduced a small enum, `SampleFormat`; have a look at the last file in the gist: https://gist.github.com/username_0/585e0dd20cce521f16cd46da00fa3f68#file-test-rs. I've used it for testing. The implementation of `FromStr` there is still not completely orthogonal though: it returns an error in case of `SampleFormat::UNDEFINED` and doesn't check whether `rate` or `chans` equal zero, which they shouldn't.
Note that a floating-point PCM signal is defined with 32-bit resolution, not zero; zero is defined as `SampleFormat::UNDEFINED`, so it's an error. You can verify that in the definition of `sample_format_size()` in `SampleFormat.hxx`.
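For reference, the enum described above looks roughly like this (reconstructed from the comments here, not copied from the gist):
```rust
/// How sample values are encoded in MPD's `audio:` status field.
pub enum SampleFormat {
    /// Plain PCM with the given bit depth, e.g. `44100:16:2`.
    Bits(u8),
    /// 32-bit floating point PCM, reported as `f`, e.g. `44100:f:2`.
    Float,
    /// Direct Stream Digital, 1 bit per sample, e.g. `dsd64:2`.
    Dsd,
}
```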
dasilva333/DIM | 65986969 | Title: View only 2 out of 3 characters
Question:
username_0: So in the first iteration I was able to scroll down and see the vault first, then all 3 characters. Now I get the vault, my primary character, and my last created character. I am missing the second alt I created. All 3 are hunters.
Answers:
username_1: Just chiming in to say that I'm experiencing this as well.
username_1: Chrome developer tools window shows the following information (no idea if this is relevant):
The key "target-densitydpi" is not supported.
chrome-extension://gdjndlpockopgjbonnfdmkcmkcikjhge/www/cordova.js Failed to load resource: net::ERR_FILE_NOT_FOUND
app.js:841 false isEmptyCookie true
app.js:847 loadData
bungie.js:2 bungie constructed w null
bungie.js:94 _getToken finished with 6509139896945091855
app.js:684 user finished
6bungie.js:94 _getToken finished with 6509139896945091855
app.js:625 Uncaught TypeError: Cannot read property 'itemName' of undefined
username_1: Saw a comment on issue #20 (https://github.com/username_2/DIM/issues/20) relating to it possibly being an item not supported, so here's an album with 2 shots from DIM showing all my items: http://imgur.com/pQixzgK,7yI5Ac5
username_2: Try 1.3.7 for Chrome I patched it with a fix that should take care of it, let me know thanks. https://github.com/username_2/DIM/blob/div-movile-dist/destiny_item_viewer.zip
username_1: Still seeing just two characters with the 1.3.7 .zip file version you just linked: http://imgur.com/FHcuwkg
username_3: I am only seeing 1 out of 3 characters on my chrome extension and android, it was working fine this morning...
username_1: In the official Bungie Destiny app, my 4th Horseman is on the character that doesn't get displayed in DIV and it's listed weirdly as "CLASSIFIED" for name and "CLASSIFIED" for type of weapon. The image is a white box. Could this be the problem? I was going to test by trying to move this item to the vault, but the Destiny app encounters errors when dealing with it. I'm not near the PS4, either, so I can't manually move it to the vault or another player.
username_3: Hey, that can be the issue, I have 2 4th horsemans, Both of which are stored in each of the characters that I am missing. I am unable to move my 4th horseman in between my characters in anything including DIM, bungie did fix this issue from the past few weeks, and now, it's back.
username_3: Bungie is doing maintenance on the website, I know this since the forum catagories page does look a bit different. https://www.bungie.net/en/Forum/Categories
Bungie probably messed up the 4th horseman again, this issue had originally appeared when bungie introduced item transferring which was fixed less than a week later, hopefully, bungie will fix this issue in under a day.
When I attempt to transfer my weapon with DIM I get
Error #1627
The Vendor you requested was not found.
username_1: I made a Bungie dot net post regarding the 4th Horseman: https://www.bungie.net/en/Forum/Post/113113870/0/0
username_1: Think this can be closed now; fixed in #25
username_2: thanks
Status: Issue closed
|
dask/distributed | 891955369 | Title: gaps between distributed and dask releases to anaconda main channels results in incompatible environments
Question:
username_0: E ImportError: cannot import name 'dumps_msgpack' from 'distributed.protocol.core' (/root/miniconda/envs/test-env/lib/python3.7/site-packages/distributed/protocol/core.py)
```
Caused by the fact that `distributed.protocol.core.dumps_msgpack()` was removed in 2021.4.1 (#4677), but `dask` 2021.4.0 still relies on it.
**What you expected to happen**:
I expected that since `dask` and `distributed` are so tightly connected to each other, new versions of these libraries would be published to the main anaconda channels at the same time.
**Minimal Complete Verifiable Example**:
It's hard to create an MVCE for this since it relies on external state in a package manager, but as of 12 hours ago the steps at https://github.com/microsoft/LightGBM/issues/4285#issuecomment-841000102 could reproduce this issue.
If you need more details than that please let me know and I can try to produce a tighter reproducible example.
**Anything else we need to know?**:
**Environment**:
- Dask version: 2021.4.0
- Python version: 3.7
- Operating System: Ubuntu 20.04
- Install method (conda, pip, source): conda
Answers:
username_1: Thanks for bringing this up @username_0 . We don't have much input on the main anaconda channel.
Still, @seibert do you know who we should talk to about updating the main channel as the current versions are incompatible with one another. FWIW we are planning a release today: https://github.com/dask/community/issues/155
username_2: Thanks for reporting @username_0! FWIW some folks also ran into this with the `2021.04.1` release
on `conda-forge` (see the discussion starting here https://github.com/dask/community/issues/150#issuecomment-826844711). I think the core issue here is that we don't specify maximum allowed versions for our `dask` and `distributed` dependencies.
Over in https://github.com/dask/community/issues/155#issuecomment-841278326 I'm proposing we start pinning `dask` and `distributed` more tightly to avoid these types of version inconsistency issues. If you have any thoughts on the topic, please feel free to engage over in that issue
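In the meantime, a user-side workaround is to request matching versions explicitly so the solver cannot mix them (versions illustrative):
```bash
conda install -c conda-forge "dask=2021.4.0" "distributed=2021.4.0"
```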
username_0: Ok sure, will do! You can close this issue then if you'd like. To keep the discussion focused over in `dask/community`.
username_3: Yeah I think tighter pinnings as James proposed should address this going forward
username_3: cc @anaconda-pkg-build (for awareness)
Status: Issue closed
username_2: Closing as discussion moved over to the `dask/community` issue tracker and the relevant folks have been pinged here for visibility |
2sic/2sxc | 247415909 | Title: Replace RAD-File Manager with ADAM file manager
Question:
username_0: **I'm submitting a ...** <!-- check one with "x" -->
```
- [x] feature request
```
**...about** <!-- check one with "x" -->
```
- [x] edit experience / UI
```
**Current behavior**
Currently needs the RAD / Telerik manager, which many DNNs don't include by default any more, causing a lot of confusion and support requests.
**Expected behavior**
File picking and uploading should work without additional extensions. It's ok if the feature set is limited, mostly to upload / pick file, but it should just work.
Goal is to use the ADAM GUI to replace the Telerik one. @raphael-m is working on it.
* **2sxc version(s):** all till 09.03
* **Browser:** [all]
* **DNN:** all, but especially troubling on 8.x and 9.x
Status: Issue closed |
sbt/zinc | 219844710 | Title: Reorder subproject dependencies
Question:
username_0: The current state of the subproject dependencies affect us in two ways:
1. CI is slow because we cannot easily parallelize compilation and test execution.
2. Cross-compilation of projects is inconsistent.
```
[zinc [email protected]]> crossScalaVersions
[info] zinc/*:crossScalaVersions
[info] List(2.11.8, 2.12.1)
[info] zincTesting/*:crossScalaVersions
[info] List(2.11.8, 2.12.1)
[info] zincBenchmarks/*:crossScalaVersions
[info] List(2.11.8, 2.12.1)
[info] zincCore/*:crossScalaVersions
[info] List(2.11.8, 2.12.1)
[info] zincCompileCore/*:crossScalaVersions
[info] List(2.11.8, 2.12.1)
[info] compilerBridge/*:crossScalaVersions
[info] List(2.12.1, 2.11.8, 2.10.6)
[info] zincClassfile/*:crossScalaVersions
[info] List(2.12.1, 2.11.8, 2.10.6)
[info] zincPersist/*:crossScalaVersions
[info] List(2.11.8, 2.12.1)
[info] zincClasspath/*:crossScalaVersions
[info] List(2.12.1, 2.11.8, 2.10.6)
[info] compilerInterface/*:crossScalaVersions
[info] List(2.12.1)
[info] zincCompile/*:crossScalaVersions
[info] List(2.11.8, 2.12.1)
[info] zincScripted/*:crossScalaVersions
[info] List(2.11.8, 2.12.1)
[info] zincIvyIntegration/*:crossScalaVersions
[info] List(2.11.8, 2.12.1)
[info] zincApiInfo/*:crossScalaVersions
[info] List(2.12.1, 2.11.8, 2.10.6)
[info] zincRoot/*:crossScalaVersions
[info] List(2.11.8, 2.12.1)
```
@dwijnand has provided this graph:

In it, we can see that the compiler bridge tests depend on `zincApiInfo` and `zincClassfile`, and that's the reason why those projects have to cross-compile to 2.10.
A way to solve this issue would be to disentagle the dependencies between the projects by moving the necessary bits to independent projects or even to the dependent projects, depending on the nature of the code.
This issue was pointed out by @username_1.
Answers:
username_1: You could start by documenting what all these projects are supposed to be for, I have no idea what is supposed to be where.
username_0: @username_1 That info is in the `build.sbt`. I'm not the one that made the current distribution, so I'm probably not the best person to do that.
username_0: I took a shot at this, have a look at https://github.com/sbt/zinc/pull/428#pullrequestreview-68908979.
Status: Issue closed
|
yytypescript/book | 1166280603 | Title: Investigate: tsc installed with npm install -g typescript is sometimes not usable?
Question:
username_0: Investigate this, since tsc may end up as "command not found" when following the tutorial flow below.
At [開発環境の準備 | TypeScript入門『サバイバルTypeScript』](https://typescriptbook.jp/tutorials/setup):
<img width="571" alt="20220311_195720" src="https://user-images.githubusercontent.com/855338/157854298-f68fa621-a0c5-4959-8c26-9bd0bb280953.png">
At [開発環境の準備 | TypeScript入門『サバイバルTypeScript』](https://typescriptbook.jp/tutorials/setup):
<img width="658" alt="20220311_195755" src="https://user-images.githubusercontent.com/855338/157854386-f1789a92-234b-4899-b001-2b30e1cc44c7.png">
## Background
https://twitter.com/Yomogi_master/status/1502029034535809024
Answers:
username_0: I ran this part.
<img width="651" alt="20220311_200431" src="https://user-images.githubusercontent.com/855338/157855460-3f014821-e6dc-44a2-8beb-1796790ab421.png">
On a clean Mac, `/usr/local/opt/node@16/bin` is not on the PATH, so `node -v` results in command not found.
This is unrelated to the current investigation, but it would be good to add a proper PATH-setup step to the instructions.
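For example, that PATH step would look roughly like this (the path comes from the screenshot above; Homebrew's node@16 is keg-only, so it is not linked automatically):
```bash
# Append to ~/.zshrc, then restart zsh (or run `exec zsh`).
export PATH="/usr/local/opt/node@16/bin:$PATH"
```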
username_0: Incidentally, even after adding the PATH entry, the node command is not found until zsh is restarted.

username_0: After adding node's path, node and npm could both be run. The tsc installed via npm install -g typescript was on the PATH as well.

username_0: I went through the following part of the tutorial.
<img width="661" alt="20220311_201445" src="https://user-images.githubusercontent.com/855338/157856786-0599226a-9d39-4ed1-b781-b865144b6d9b.png">
It completed without any problems.
 |
nesbox/TIC-80 | 1071499168 | Title: Pull cartridges off of Github or Website for easy collaboration and game sharing
Question:
username_0: I have a proposal, if this isn't already a feature (other than SURF): the ability to host your cart on your own website (e.g., https://untrustedinstaller.github.io/tic80-carts.html [not a real link on my page]) or something like that, so that collaborating on games is easy. It's also a great way to host backups on GitHub and said website.
If this is already a feature, please let me know.
Answers:
username_1: Almost stale.
<!-- @username_2 - USED TO GIVE THE REPO OWNER A NOTIFICATION ABOUT THIS! -->
username_1: Referencing in a discussion.
username_1: Use `export html` instead to publish your game on a third-party site
Status: Issue closed
|
prometheus/prometheus | 562030756 | Title: bug in 2.16-rc0: the step in the query log is not in seconds
Question:
username_0: <!--
Please do *NOT* ask usage questions in Github issues.
If your issue is not a feature request or bug report use:
https://groups.google.com/forum/#!forum/prometheus-users. If
you are unsure whether you hit a bug, search and ask in the
mailing list first.
You can find more information at: https://prometheus.io/community/
-->
**What did you do?**
A query range with query log enabled
**What did you expect to see?**
Step in seconds in the query log
**What did you see instead? Under which circumstances?**
```
"step": 5000000000
```
Answers:
username_0: cc @cstyan
username_0: Merged in 2.16.0-rc.1
Status: Issue closed
|
stellar/js-stellar-sdk | 553222928 | Title: Horizon v1.0.0 BETA Compatibility
Question:
username_0: The upcoming Horizon release (1.0 BETA) is coming, and there are multiple breaking changes plus new features 🎉🎉🎉!
The following are the list of changes required to support this new release:
- [ ] ➕Update `/fee_stats` response.
<details>
- ✂ Remove the following fields:
```json
min_accepted_fee
mode_accepted_fee
p10_accepted_fee
p20_accepted_fee
p30_accepted_fee
p40_accepted_fee
p50_accepted_fee
p60_accepted_fee
p70_accepted_fee
p80_accepted_fee
p90_accepted_fee
p95_accepted_fee
p99_accepted_fee
```
- ➕Add support for `max_fee` and `fee_charged` fields. Each field contains a JSON object that looks like this:
```json
{
"last_ledger": "22606298",
"last_ledger_base_fee": "100",
"ledger_capacity_usage": "0.97",
"fee_charged": {
"max": "100",
"min": "100",
"mode": "100",
"p10": "100",
"p20": "100",
"p30": "100",
"p40": "100",
"p50": "100",
"p60": "100",
"p70": "100",
"p80": "100",
"p90": "100",
"p95": "100",
"p99": "100"
},
"max_fee": {
"max": "100000",
"min": "100",
"mode": "100",
"p10": "100",
"p20": "100",
"p30": "100",
"p40": "100",
"p50": "100",
"p60": "100",
"p70": "100",
[Truncated]
- [ ] 🚨 Update operation types to canonical names (if needed) (see https://github.com/stellar/go/pull/2134).
- [ ] ➕Add support for `/accounts` end-point with `?signer` and `?asset` filters. We recommend a method like `.accounts(queryParams)` (see [documentation for accounts](https://www.stellar.org/developers/horizon/reference/endpoints/accounts.html)).
- [ ] ➕Add support for `/offers` end-point with query parameters. We recommend a method like `.offers(queryParams)` (see [documentation for offers](https://www.stellar.org/developers/horizon/reference/endpoints/offers.html)).
- [ ] ➕Add support for `/paths/strict-send` end-point. See [documentation](https://www.stellar.org/developers/horizon/reference/endpoints/path-finding-strict-send.html).
We recommend a method like `strictSendPaths(sourceAsset, sourceAmount, [destinationAsset])`.
- [ ] ➕ Add support for `/paths/strict-receive` end-point. See [documentation](https://www.stellar.org/developers/horizon/reference/endpoints/path-finding-strict-receive.html).
We recommend a method like `strictReceivePaths(sourceAssets, destinationAsset, destinationAmount)`.
- [ ] ♻ Regenerate the XDR definitions to include [MetaV2](https://github.com/jonjove/stellar-core/blob/b299b3a458a15f592352c67d4da69baa6e8fbb6a/src/xdr/Stellar-ledger.x#L309) support (also see [#1902](https://github.com/stellar/go/issues/1902)).
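For the two path-finding items above, a hypothetical call-site sketch (the method and builder names follow the recommendations here, not a shipped API; keys are placeholders):
```js
const StellarSdk = require('stellar-sdk');

async function findStrictSendPaths() {
  const server = new StellarSdk.Server('https://horizon.stellar.org');
  // "Given 10 USD to send, what could the destination receive?"
  const paths = await server
    .strictSendPaths(
      new StellarSdk.Asset('USD', 'GA...ISSUER'), // source asset
      '10',                                       // source amount
      'GB...DESTINATION'                          // destination account
    )
    .call();
  return paths.records;
}
```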
That's it! If you have any questions feel free to ping us on #dev-discussion in [Keybase](https://keybase.io/team/stellar.public).
Answers:
username_0: WIP https://github.com/stellar/js-stellar-sdk/pull/477
username_0: This needs to be done in js-stellar-base
username_0: Meta V2 support coming through js-base https://github.com/stellar/js-stellar-base/pull/288/files
username_0: Meta V2 is done https://github.com/stellar/js-stellar-base/releases/tag/v2.1.4
username_0: We are missing support for `/offers` and `/accounts` -- if anyone wants to tackle those, please let me know here.
username_0: This was fixed and it's available in 4.0.0
Status: Issue closed
|
kubernetes/kops | 202030620 | Title: AWS ENA Driver Not Enabled On Default AMI
Question:
username_0: Hello,
I noticed that the ENA (Enhanced Networking Adapter) isn't enabled by default in the AMIs (1.4) that kops uses by default:
```
root@ip-172-21-35-87:~# cat /etc/debian_version
8.6
root@ip-172-21-35-87:~# ethtool -i eth0
driver: vif
```
```
$ kops version
Version 1.5.0-alpha3 (git-51b7644)
```
AMI Version: k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-12-05 (ami-03fdf814)
References:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html
https://wiki.debian.org/Cloud/AmazonEC2Image/Jessie
Answers:
username_1: Which instance size was this on? Did it have enhanced networking available?
username_0: It was an R4.xlarge. Should be available.
</reply on the go>
username_1: I see now - this is a separate driver from the ixgbevf driver - my mistake.
We'll have to add a module to the base image:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html
username_1: ```
driver: ixgbevf
version: 2.12.1-k
firmware-version:
bus-info: 0000:00:03.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
```
username_0: Awesome, definitely good to know. I wonder what the real difference is between the ENA and the Intel one. Anyway, I really appreciate your efforts.
username_2: AWS recommends ixgbevf > 2.14 for stability and performance.
ENA driver is needed on those beefy 20Gbit/s instances. This driver is not available on stock Ubuntu images and has to be installed manually.
Does kops/kubernetes provide any 'official' AMIs ? always thought that it utilises 'bare' images.
edit: ok, I see we do, so I guess this should be added (bumped ixgbevf and ENA driver)
username_3: Does anyone know if the ixgbevf 2.12.1-k in Debian 8.6 `k8s-1.5-debian-jessie-amd64-hvm-ebs-2017-01-09 (ami-aaf84aca)` is affected by the stability issues of TCP timeouts and random packet corruption?
I'm getting many of these as well:
```
[22117.455919] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[22117.489707] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
```
username_1: On ixgbevf:
On the "k8s AMIs" (which are debian jessie with a 4.4. kernel), we're running the ixgbevf driver from the linux kernel, not the out of tree version. The versioning numbering appears to not correspond directly. We switched to this as part of the move to the 4.4 kernel; with the jessie kernel we were seeing kernel panics, particularly on m4 instances (_with_ the AWS-recommended driver): https://github.com/kubernetes/kubernetes/issues/30706
I compared the 2.12.1 driver from sourceforge with the 2.14.2 driver (I could not find the upstream version control):
* 2.14.2 introduced `ixgbevf_check_tx_hang`, added here: https://github.com/torvalds/linux/commit/e08400b707739f0eca1645413924743466ea70b8, and in kernel >= 4.0
* 2.14.2 introduced `ixgbevf_set_ivar`, added in the initial commit of the driver into the kernel https://github.com/torvalds/linux/commit/92915f71201b43762fbe05dbfb1a1a0de9c8adb9 . Note that a version somewhere in between 2.12.1 and 2.14.2 was labeled in the kernel as `1.0.0-k0`. This suggests that the `-k` scheme is not comparable to the non-k scheme.
* 2.14.2 introduced an errata check, in the kernel in https://github.com/torvalds/linux/commit/8bae1b2b13beb4cf4c0f119f97640503c2b74b0f .
(there are more differences, but these seemed a reasonable sample of non-trivial changes)
@username_4 I see you do a lot of the work on the ixgbevf driver in the kernel... Is it reasonable to run the ixgbevf driver from the 4.4 LTS kernel on AWS? Any guidance is greatly appreciated!
username_4: Using the in-kernel driver is preferred, unless you are seeing issues. At which point, our first suggestion is to try the sourceforge.net driver, to see if the issue goes away (which would mean we fixed a known issue and have not pushed the fix upstream yet). For the most part, the upstream driver is kept up-to-date on a regular basis, so there should not be a large discrepancy between the in-kernel and out-of-kernel drivers.
username_1: Thanks @username_4 so much for the guidance on the ixgbevf driver :-)
username_5: FWIW the `linux-aws` package in Ubuntu 16.04 is a huge performance win for a number of reasons, as well as providing the ENA/ixgbevf drivers out of the box. Other than that package, nothing is required save marking the image as "SR-IOV" ready. Perhaps kube should install this when it detects it's installing on Ubuntu on AWS?
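Doing that by hand looks roughly like the sketch below, assuming an Ubuntu 16.04 node and the AWS CLI; the instance ID is a placeholder and the instance must be stopped before changing the attributes:
```sh
# On the Ubuntu 16.04 node: install the AWS-tuned kernel (ships ENA/ixgbevf)
sudo apt-get update && sudo apt-get install -y linux-aws
sudo reboot

# From a machine with the AWS CLI: flag the (stopped) instance as
# enhanced-networking capable before baking an AMI from it
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --sriov-net-support simple
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --ena-support
```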
username_6: Another +1 for this.
Running kops 1.5.3, kubernetes 1.5.5 on r4.xlarge shows the `vif` driver in play:
```
# ethtool -i eth0
driver: vif
# cat /etc/debian_version
8.8
```
But I concur that a c4.2xlarge shows:
```
$ sudo ethtool -i eth0
driver: ixgbevf
$ cat /etc/debian_version
8.7
```
(The latter cluster will be updated later this week).
username_7: I have an m4.large instance that supports enhanced networking and thus should run the ixgbevf driver. Can we get kops to set all of this up for us in the k8s debian AMI? Otherwise I'll probably just move over to Ubuntu 16.04.
username_8: From my limited experience the ENA driver is a must.
I had a gRPC service that was experiencing poor throughput on a Kops created cluster and I narrowed it down to the fact that the default Debian image kops uses did not have the ENA installed. After making my own AMI from the kops default Debian image + ENA I have seen a ~7x improvement in throughput on i3.xlarge nodes (single node throughput increased from 1.2Gbps -> 8.03Gbps).
Cluster Setup:
Node Size: i3.xlarge
Topology: private
Networking: weave
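For anyone wanting to reproduce that kind of measurement, a quick node-to-node check with iperf3 looks like the sketch below (the IP is a placeholder, iperf3 must be installed on both nodes, and this is not necessarily the exact methodology used above):
```sh
# on the receiving node
iperf3 -s

# on the sending node (parallel streams to saturate the link)
iperf3 -c 172.20.42.10 -P 4 -t 30
```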
username_9: After seeing this mentioned on HN I double-checked my own cluster, and sure enough my R4.XL machines running the `kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-07-28` AMI are not running with ENA enabled.
```
$ cat /etc/debian_version
8.9
$ sudo modinfo ena
modinfo: ERROR: Module ena not found.
$ sudo ethtool -i eth0
driver: vif
version:
firmware-version:
```
What was your process to build a custom image, @username_8? The official Debian image claims this should already be supported, so I'm wondering what the missing piece is here.
username_10: Driver seems to be vif and not ixgbevf:
```
driver: vif
version:
firmware-version:
bus-info: vif-0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
```
username_8: @username_9 I found a k8s-1.7-debian-jessie AMI that I spun up on an EC2 instance in my k8s VPC. I then followed [this guide](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html) to install and enable ENA on it. Once installed I made an AMI from that instance which is what I'm using now for my kops created nodes.
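For reference, the guide's steps boil down to roughly the following sketch; the package names, paths, and instance ID are assumptions, so follow the linked AWS guide for the authoritative commands:
```sh
# on the instance launched from the kops Debian AMI: build the ENA module
sudo apt-get update && sudo apt-get install -y build-essential "linux-headers-$(uname -r)" git
git clone https://github.com/amzn/amzn-drivers
cd amzn-drivers/kernel/linux/ena && make
sudo cp ena.ko "/lib/modules/$(uname -r)/" && sudo depmod

# stop it, set --ena-support as in the earlier sketch, then bake the AMI
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "k8s-1.7-debian-jessie-ena"
```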
username_11: Related https://github.com/kubernetes/kops/issues/3868
username_12: We are running Kops 1.8 with k8s-1.8-debian-jessie-amd64-hvm-ebs-2017-12-02 (ami-06a57e7e), where it says that ENA is enabled. However, when I run the check on that instance it returns "simple" for the network:
```
aws ec2 describe-instance-attribute --instance-id 123 --attribute sriovNetSupport
{
    "InstanceId": "123",
    "SriovNetSupport": {
        "Value": "simple"
    }
}
```
I also SSHed into the instance and ran `lsmod | grep ixgbevf` to verify that the module needed for ENA is loaded, and it is not there?!!!
username_3: @username_12 have you tried the debian stretch image?
username_12: I have not, but looking at the source code (https://github.com/kubernetes/kube-deploy/blob/master/imagebuilder/templates/1.8-stretch.yml), I don't see the "ixgbevf" module included either.
username_9: @username_12 that is because the base image already has it in-kernel, so there is no need to install it on top.
Your SriovNetSupport looks fine to me, if ENA is not supported that property will be empty, while a value of simple means that enhanced networking is enabled. (see [documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sriov-networking.html))
It all depends on which instance type you are using; there are different ways to do ENA as described [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html). If you use one of the newer types, you need to check for the ENA driver, not the ixgbevf one.
What instance type are you using?
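Concretely, on an ENA-class instance type (m5/r4/i3, etc.) the checks look roughly like this; the instance ID is a placeholder:
```sh
# from the AWS CLI: true means the instance has ENA enabled
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[].Instances[].EnaSupport'

# on the instance itself: the module and the active driver
modinfo ena
ethtool -i eth0   # expect "driver: ena" (not ixgbevf) on these types
```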
username_13: I was using m5.
username_11: @username_13 you probably were, but you need to be using the stretch AMI with m5's |
pankajkumarbij/easy-job-intern | 902681379 | Title: send notification for internship opening (employer backend)
Question:
username_0: ## Is your feature request related to a problem? Please describe.
a user does not get notified when a company he/she has saved has an internship opening
## Describe the solution you'd like
add code to the createInternship route that sends a notification to all the students who have added that particular company to their savedComapnies list<issue_closed>
Status: Issue closed |
MicrosoftDocs/windows-itpro-docs | 484898095 | Title: loadstate parameter /progress the log file name should be loadlog.log
Question:
username_0: Typo:
```
loadstate /i:migapp.xml /i:migdocs.xml \\server\share\migration\mystore /progress:prog.log /l:scanlog.log
```
It should be:
```
loadstate /i:migapp.xml /i:migdocs.xml \\server\share\migration\mystore /progress:prog.log /l:loadlog.log
```
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 560b9a5d-d642-cd87-4d42-8fe4e4bab7c6
* Version Independent ID: ee572330-99ec-89a5-1830-cb8478b1e758
* Content: [LoadState Syntax (Windows 10)](https://docs.microsoft.com/en-us/windows/deployment/usmt/usmt-loadstate-syntax#feedback)
* Content Source: [windows/deployment/usmt/usmt-loadstate-syntax.md](https://github.com/MicrosoftDocs/windows-itpro-docs/blob/master/windows/deployment/usmt/usmt-loadstate-syntax.md)
* Product: **w10**
* Technology: **windows**
* GitHub Login: @greg-lindsay
* Microsoft Alias: **greglin**
Answers:
username_1: @officedocsbot assign @username_2
username_2: Thank you for bringing this to our attention, @username_0. I will get this over to the Windows content writing team to correct the log filename as it currently shows */l:scanlog.log* which is for the [*ScanState*](https://docs.microsoft.com/windows/deployment/usmt/usmt-scanstate-syntax) syntax, not the one for *LoadState* (*/l:load.log*). We always strive to improve the quality of the technical documentation of Microsoft Docs so your feedback is appreciated!
Thank you for being part of the Microsoft Docs Community!
username_2: Hello again @username_0, we appreciate your patience. We'd like to inform you that the requested amendment to this documentation (see pull request #4983) has been approved and merged. The content update will be displayed here at docs.microsoft.com in the next scheduled publishing run. We will now close this issue since it's considered—albeit with pending updates—*resolved*. Feel free to re-open or create another issue through the doc's *feedback feature*, if you have other suggestions or ideas to improve the quality of this documentation.
Thank you for your feedback and for being part of the Microsoft Docs community!
username_2: @officedocsbot close |
baltimorecounty/baltimorecountymd.gov-assets | 428946822 | Title: Update Phone Directory
Question:
username_0: The employee phone directory uses a manual process to update employee records. This hasn't been done in a few years. The process needs to be changed to import directly from AD for employee records. The DBA team has been tasked with creating a SQL Loader to pull this file in to keep the data current.
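For a sense of scale, a purely illustrative sketch of such a pipeline is below; the host, base DN, attribute list, credentials, and control file are all assumptions, not the DBA team's actual design:
```sh
# export current employee records from AD (attribute names are typical, not confirmed)
ldapsearch -LLL -H ldap://ad.example.gov -b "OU=Employees,DC=example,DC=gov" \
  "(objectClass=user)" sAMAccountName displayName department telephoneNumber > employees.ldif

# after converting the export to CSV, load it with SQL*Loader
sqlldr userid=phonedir/secret control=employees.ctl log=employees.log
```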
#### Known Issues:
- AD uses a different ID for Agency than we use in this project
- Our data does not match the source of truth, so both AD and our table are out of sync
- Mail stop is no longer a valid field
Answers:
username_1: @username_2 Tim is working on this, but he really isn't doing anything apart from serving as a poor man's analyst / project manager. The web service will remain the same; the database data is what is impacted.
username_0: @username_1 @username_2 The update on this gem is that we have to wait for a manager to approve wiping out the data in prod so we can then dump in the new data. I mean, this only became an issue because someone saw that the data was wrong and not up to date, so let's make sure we go full circle and get someone to acknowledge that yes, this is still an actual issue.
username_2: @username_0 Purely out of curiosity, what ever happened to this?
username_0: @username_2 absolutely nothing so far. Apparently it's a much more "difficult and complex situation" than previously thought. Instead of just importing the AD file into our table and updating our departments table with the department values from the new database, there needs to be weeks and months and possibly years of analysis on why this might end up being a problem. I believe the DBA side is waiting for the AD team to update all their departments before they can move forward. And AD is saying this is a county-wide change and they can't just do it, they need time to analyze. In the meantime we have stale data from 2016, and the County Executive and all his new staff aren't even in the system. I'm sure it'll be fixed any day now. |
lazychaser/laravel-nestedset | 684023565 | Title: Nested Dynamic Menu
Question:
username_0: Can anybody help me to do this menu with this package?

Thanks
Answers:
username_1:
```php
/**
 * Flatten the tree into a prefixed list (handy for <select> options).
 *
 * @return array
 */
public static function getTree(): array
{
    $categories = self::get()->toTree();
    $allCats = [];
    $traverse = static function ($categories, $prefix = '') use (&$traverse, &$allCats) {
        foreach ($categories as $category) {
            $allCats[] = ["title" => $prefix . ' ' . $category->title, "id" => $category->id];
            $traverse($category->children, $prefix . '-');
        }
    };
    $traverse($categories);

    return $allCats;
}

/**
 * Render the whole tree as a nested <ul>/<li> menu.
 *
 * @return string
 */
public static function getList(): string
{
    $categories = self::get()->toTree();
    $lists = '<ul class="list-unstyled">';
    foreach ($categories as $category) {
        $lists .= self::renderNodeHP($category);
    }
    $lists .= '</ul>';

    return $lists;
}

/**
 * Render a single node and, recursively, its children.
 *
 * @param $node
 * @return string
 */
public static function renderNodeHP($node): string
{
    $list = '<li class="dropdown-item"><a class="nav-link" href="/category/' . $node->slug . '">' . $node->title . '</a>';
    // toTree() already eager-loaded the children relation, so use the collection
    // instead of children()->count(), which would run one query per node.
    if ($node->children->isNotEmpty()) {
        $list .= '<ul class="dropdown-menu">';
        foreach ($node->children as $child) {
            $list .= self::renderNodeHP($child);
        }
        $list .= '</ul>';
    }
    $list .= '</li>';

    return $list;
}
```
Status: Issue closed
|
rubygems/rubygems | 20372644 | Title: https://rubygems.org/ - SSL_connect B: certificate verify failed
Question:
username_0:
```
gem install pry
ERROR:  Could not find a valid gem 'pry' (>= 0), here is why:
        Unable to download data from https://rubygems.org/ - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (https://s3.amazonaws.com/production.s3.rubygems.org/latest_specs.4.8.gz)
```
Answers:
username_1: :+1: @username_13 works on Windows 7 64bits
username_2: :+1: @username_13 @fifahuihua
username_3: @vchervanev That solved the issue for me.
username_4: :+1: for the solution provided by @fifahuihua:
```
gem sources -a http://rubygems.org
```
Now works on `Windows 7 Ultimate` and `rubyinstaller-2.1.5` on 64bit.
username_5: @username_13 your solution works fine!!! (Windows 7 Ultimate with railsinstaller-3.1.0)
Same as username_4, the api solution is not working either!
username_6: @fifahuihua @username_4 switching the source to `http://rubygems.org` allows an attacker to install malicious gems. Use this guide to fix it properly: https://gist.github.com/luislavena/f064211759ee0f806c88
username_7: Perfect! Thanks!
username_8: Hey there, I had the same problem. Put the next line into "Start Command Prompt with Ruby":
`gem sources -a http://rubygems.org`
It will come up with "Do you want to add this insecure source? [yn]". Put 'y' for yes, then run:
`gem install pry`
username_9: guys, I figured out how to fix this problem. Check this out:
```
ERROR:  Could not find a valid gem 'sass' (>= 0), here is why:
        Unable to download data from https://rubygems.org/ no such name (https://rubygems.org/specs.4.8.gz)
```
1. Go to this website and download the latest sass gem file: https://rubygems.org/gems/sass
2. Create a new folder on the C drive named `SASS` and put the downloaded .gem file in it.
3. Back at the command prompt:
```
cd C:\SASS
gem install sass
```
I hope you will succeed.
username_10: @username_9 you're installing a sass you downloaded to your harddrive. That is a workaround, but you will probably be better off by updating your rubygems in the long run.
This link contains the howto: "[Installing using update packages (NEW)](https://gist.github.com/luislavena/f064211759ee0f806c88#installing-using-update-packages-new)". Basically, you do the same thing you did with sass, but instead with rubygems itself. This will fix the problem for all future gems.
username_11: I am getting this error:
```
ERROR:  Could not find a valid gem 'sass' (>= 0), here is why:
        Unable to download data from https://rubygems.org/ - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (https://api.rubygems.org/latest_specs.4.8.gz)
```
Any workaround?
username_10: @username_11: yes, plenty. The short advice is to install a more up-to-date version of rubygems itself. Please have a look at the post directly before yours. It contains a link to the solution. If there's something weird happening, please let us know!
username_12: @username_8 - your solution worked for me on 2.2.2 . Thanks!
username_10: @username_12 it should be noted that this is not a good idea. As indicated by the warning you get from following that workaround. It's a temporary help, at best!
# Public service announcement for everyone wanting a Solution
There's **one** solution to this problem, and it's **updating your rubygems**. Everything else is just a workaround. Please follow the instructions by @luislavena, described in detail here: https://gist.github.com/luislavena/f064211759ee0f806c88#installing-using-update-packages-new
Please don't use the other workarounds, especially not those, that favor insecure HTTP connections.
username_13: The reason people are going for the bad/easy solution is that the right solution is not as easy as it should be, i.e. cut and paste.
If you have rubygems 2.x or newer, this fancy automatic solution should (might) work for unix/osx:
```
ruby -ruri -ropen-uri -e 'v={"2.2"=>"2.2.3","2.0"=>"2.0.15","1.8"=>"1.8.30"}[Gem::VERSION.split(".")[0..1].join(".")];uri=URI.parse("https://github.com/rubygems/rubygems/releases/download/v#{v}/rubygems-update-#{v}.gem");File.write("/tmp/rubygems-update.gem",uri.read)'; gem install --local /tmp/rubygems-update.gem; update_rubygems --no-ri --no-rdoc; rm /tmp/rubygems-update.gem
```
The 1.8.x version would be:
```
curl -L https://github.com/rubygems/rubygems/releases/download/v1.8.30/rubygems-update-1.8.30.gem > /tmp/rubygems-update.gem; gem install --local /tmp/rubygems-update.gem; update_rubygems --no-ri --no-rdoc; rm /tmp/rubygems-update.gem
```
The other versions, broken out, are:
2.2.x:
```
curl -L https://github.com/rubygems/rubygems/releases/download/v2.2.3/rubygems-update-2.2.3.gem > /tmp/rubygems-update.gem; gem install --local /tmp/rubygems-update.gem; update_rubygems --no-ri --no-rdoc; rm /tmp/rubygems-update.gem
```
2.0.x:
```
curl -L https://github.com/rubygems/rubygems/releases/download/v2.0.15/rubygems-update-2.0.15.gem > /tmp/rubygems-update.gem; gem install --local /tmp/rubygems-update.gem; update_rubygems --no-ri --no-rdoc; rm /tmp/rubygems-update.gem
```
After all that, if you made the http (bad) fix, undo it:
```
gem sources --add https://rubygems.org/
gem sources --remove http://rubygems.org/
```
Unless there's a bug/typo in the above, this should put the whole problem to rest. Best of luck.
Dan
username_14: Thank you.
username_15:
```
C:\DevKit>gem install json --platform=ruby
ERROR:  Could not find a valid gem 'json' (>= 0), here is why:
        Unable to download data from https://rubygems.org/ - Errno::ECONNABORTED: An established connection was aborted by the software in your host machine. - SSL_connect (https://api.rubygems.org/specs.4.8.gz)
```
username_15: My issue is a bit different. Can anyone help with how to resolve this error?
```
C:\DevKit>gem install json --platform=ruby
ERROR:  Could not find a valid gem 'json' (>= 0), here is why:
        Unable to download data from https://rubygems.org/ - Errno::ECONNABORTED: An established connection was aborted by the software in your host machine. - SSL_connect (https://api.rubygems.org/specs.4.8.gz)
```
username_6: @username_15 what RubyGems version are you using? If ruby is trying to download `specs.4.8` you likely have an out-of-date version which does not have the correct CA certificates to securely connect to RubyGems.org.
username_16: Updating system gems worked for me:
```
gem update --system
```
username_17: 
ruby version 2.2.5p319
gem version 2.4.5.1
OS: window 7 64bit
Have anyone an idea to solve this?
Thanks you in advance.
username_18: Having this problem too. Wonder what's the cause. I noticed api.rubygems.org has a "too many redirects" error
username_17: @RupW Thanks, that helped!!!
I have to say that document resolved my issues. :+1:
I hope anyone who has the same problem can resolve it too.
FYI: I used the MANUAL SOLUTION TO SSL ISSUE
username_18: I fixed mine by manually installing rubygems and upgrading openssl on osx yosemite
username_19: Neither @RupW or @username_18 's suggestions worked for me (OSX Yosemite 10.10.5). RubyGems version 2.6.7. Still getting the same error message. Does anyone have other suggestions?
username_18: @username_19 As someone pointed out, it's a problem with RVM rubies, they are built with old openssl. Update openssl, reinstall your ruby with `rvm install 2.3.1 --disable-binary`, which will force a recompile instead of using rvm binaries. This should address the issue.
username_19: perfect! Thanks
username_18: https://github.com/rubygems/rubygems/issues/1745
username_20: http://stackoverflow.com/a/28803395/4844439
username_18: That's **not** a solution; you are trading away security (a big deal) if you use `http` instead of `https`. Please refrain from even recommending that.
username_6: @username_18 I redacted it here and flagged it on stack overflow
username_18: Thanks @username_6 !
username_21: Fix is here `http://guides.rubygems.org/ssl-certificate-update/#background`
username_18: @username_21 nope, I applied that fix and it didn't work; the only fix for me was recompiling the rubies
username_22: @username_21 a slight alteration to your instructions: I copied the global.pem, then made sure I removed all sources using
`gem sources --list`
`gem sources --remove ...`
Finally I added back rubygems.org:
`gem sources --add https://rubygems.org/`
username_23: I had to reinstall openssl:
```
brew uninstall --force openssl
brew install openssl
```
username_24: I am still getting this error with a fresh install on windows
username_25: I got this error also. The cause is that gem is outdated, and so are the certificates.
Please follow this guide to update gem & the certificates; it helped me out.
http://guides.rubygems.org/ssl-certificate-update/
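For anyone who cannot run `gem update --system` at all, the manual route in that guide looks roughly like this; the certificate file comes from the guide and the final path depends on your installation, so treat both as placeholders:
```sh
# find where RubyGems lives so you can locate its ssl_certs directory
gem which rubygems
# => /usr/lib/ruby/2.3.0/rubygems.rb  (example output)

# drop the updated root certificate from the guide next to the others
cp GlobalSignRootCA.pem /usr/lib/ruby/2.3.0/rubygems/ssl_certs/
```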
username_26: This solution did not work for me. If you are facing the same, try the steps from https://gist.github.com/eyecatchup/20a494dff3094059d71d
Worked for me!!
username_27: I just removed the https url and replaced it with the http one.
```
gem sources --remove https://rubygems.org/
gem sources --add http://rubygems.org/
```
username_25: @username_27: That is the insecure way; better follow my way if it works. Or username_10's way is OK also.
username_10: I flag this answer whenever I see it on Stack Overflow, but the moderators there usually don't agree with my assessment that a dangerous way should not be on that site. It appears to be quite hard to understand that YOU ARE F*CKING OPEN TO ALL KINDS OF ATTACKS if you miss that tiny little **s** in your gem sources.
username_28: In my case:
- OSX 10.12.6
- Ruby 2.0.0-p247
- rbenv 1.1.1-28-gb943955

Updating Ruby to 2.4.2 did the trick.
username_29: Reinstalling Ruby worked for me:
1. `rvm uninstall 2.7.1`
2. `rvm install 2.7.1` |
ember-cli/ember-cli | 129235985 | Title: [Question]: why would addon unit test options differ from ember application unit test options?
Question:
username_0: Today I was writing some tests for an addon and noticed my models objects kept returning undefined for getOwner (using ember 2.3). The funny part was I extracted those same unit tests from working production ember apps (also running 2.3). Down in the guts of ember (in factoryFor) I found that my production ember unit tests had this flag set to true
```
_emberMetalCore.default.MODEL_FACTORY_INJECTIONS`
```
But ... when the addon unit tests were running this same flag was undefined (the root cause of my issue). The fix was to open test-helper.js, import Ember and set that value to true (like it is for my production ember app unit tests).
The bigger question to the ember-cli core team ... is that not true for a reason or is this just a small mistake and something I could PR / discuss more to better understand what the "right fix" looks like? Or is this flag instead "undefined" for a reason in addons?
Answers:
username_1: I have no knowledge of this or any possible reasons behind it, but I would guess it was a small mistake and a PR would be welcome.
username_0: @username_1 I'm mostly curious where this *should* be applied (hacking test-helper.js seems to work but I'll need to dive into some ember-cli source to understand better what the real fix should be).
thanks for the quick reply!
Status: Issue closed
|
binarywang/weixin-java-miniapp-demo | 1096833223 | Title: Sending template messages fails with error 45103
Question:
username_0: me.chanjar.weixin.common.error.WxErrorException: error code: 45103, error message: This API has been unsupported rid: 61d91326-6f7ab3ff-1c1b5fdb, raw WeChat response: {"errcode":45103,"errmsg":"This API has been unsupported rid: 61d91326-6f7ab3ff-1c1b5fdb"}
Judging by the message, WeChat no longer officially supports this API. Is there an alternative way to send template messages?<issue_closed>
Status: Issue closed |
openwrt/luci | 347215422 | Title: Error compiling package 'luajit'
Question:
username_0: Hello,
I am currently trying to compile OpenWrt for an EasyBox 904 xDSL.
When it comes to the 'luajit' package, it throws an error:
```
Package luajit is missing dependencies for the following libraries:
libc.so.6
libdl.so.2
libm.so.6
Makefile:92: recipe for target '/mnt/Easybox-904-XDSL/bin/packages/mips_24kc/packages/luajit_2017-01-17-71ff7ef-1_mips_24kc.ipk' failed
make[3]: *** [/mnt/Easybox-904-XDSL/bin/packages/mips_24kc/packages/luajit_2017-01-17-71ff7ef-1_mips_24kc.ipk] Error 1
make[3]: Leaving directory '/mnt/Easybox-904-XDSL/feeds/packages/lang/luajit'
Command exited with non-zero status 2
time: package/feeds/packages/luajit/compile#0.25#0.14#0.39
package/Makefile:107: recipe for target 'package/feeds/packages/luajit/compile' failed
make[2]: *** [package/feeds/packages/luajit/compile] Error 2
make[2]: Leaving directory '/mnt/Easybox-904-XDSL'
package/Makefile:103: recipe for target '/mnt/Easybox-904-XDSL/staging_dir/target-mips_24kc_musl/stamp/.package_compile' failed
make[1]: *** [/mnt/Easybox-904-XDSL/staging_dir/target-mips_24kc_musl/stamp/.package_compile] Error 2
make[1]: Leaving directory '/mnt/Easybox-904-XDSL'
/mnt/Easybox-904-XDSL/include/toplevel.mk:216: recipe for target 'world' failed
make: *** [world] Error 2
```
The build machine is running Ubuntu; the packages for glibc etc. are installed and the files exist.
```
Linux 31c31144b45f 4.17.11.a-1-hardened #1 SMP PREEMPT Mon Jul 30 00:47:33 CEST 2018 x86_64 x86_64 x86_64 GNU/Linux
```
(Non-Ubuntu kernel because it is running in Docker on top of Arch Linux.)
```
maker@3<PASSWORD>:/mnt/Easybox-904-XDSL$ find /usr/lib -name "libc.so.6"
/usr/lib/x86_64-linux-gnu/libc.so.6
maker@3<PASSWORD>:/mnt/Easybox-904-XDSL$ find /usr/lib -name "libdl.so.2"
/usr/lib/x86_64-linux-gnu/libdl.so.2
maker@3<PASSWORD>:/mnt/Easybox-904-XDSL$ find /usr/lib -name "libm.so.6"
/usr/lib/x86_64-linux-gnu/libm.so.6
```
I hope that this is the right place to ask.
If not, please tell me where to ask for help.
Thanks in advance!
Status: Issue closed
Answers:
username_1: luajit resides in the packages repository (https://github.com/openwrt/packages/tree/master/lang/luajit). Please open an issue there.
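When you do, it helps to attach a verbose build log of just this package; a standard OpenWrt debugging step looks like:
```sh
# rebuild only luajit with verbose output and capture the full failure
make package/feeds/packages/luajit/{clean,compile} V=s 2>&1 | tee luajit-build.log
```
|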