repo_name | issue_id | text
---|---|---
MicrosoftDocs/windows-powershell-docs | 429674281 | Title: Supported range for LocalASN parameter
Question:
username_0: LocalASN parameter is defined as UInt32 but only values between 1 and 65534 are supported by this cmdlet. It would be helpful to have the valid range documented here.
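For illustration, the boundary in practice (a hedged sketch; the BGP identifier value below is hypothetical):
```powershell
# Accepted: 64512 falls inside the supported 1-65534 range
Add-BgpRouter -BgpIdentifier "10.1.1.10" -LocalASN 64512

# Rejected by the cmdlet, even though the UInt32 parameter type would allow it
Add-BgpRouter -BgpIdentifier "10.1.1.10" -LocalASN 65535
```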
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a6198f91-958e-f682-a09f-50622b790b28
* Version Independent ID: f7955b94-a3e3-5935-6072-7f5363a2f76b
* Content: [Add-BgpRouter (remoteaccess)](https://docs.microsoft.com/en-us/powershell/module/remoteaccess/add-bgprouter?view=win10-ps#feedback)
* Content Source: [docset/windows/remoteaccess/add-bgprouter.md](https://github.com/MicrosoftDocs/windows-powershell-docs/blob/master/docset/windows/remoteaccess/add-bgprouter.md)
* Product: **w10**
* Technology: **powershell-windows**
* GitHub Login: @coreyp-at-msft
* Microsoft Alias: **coreyp**
Answers:
username_1: @officedocsbot close
username_2: @officedocsbot assign @username_2
username_2: @username_0
Upon your feedback, we have updated the content with the relevant changes. Thanks.
username_2: @officedocsbot close |
oasis-open/dita-rng-converter | 281390664 | Title: Module naming convention?
Question:
username_0: None of our domain constraints like _par_highlightDomainConstraint.rng_ got converted. After renaming the file to _par_highlightDomainConstraint**Mod**.rng_ the conversion works.
Are there any other naming conventions regarding file naming that we must keep in mind? Is this documented somewhere?
Best regards,
Frank
Answers:
username_0: I've faced a similar issue with .ent files not being generated from, for example, _par_conceptConstraintMod.rng_.
In this case I had to change the module type from `<moduleType>constraint</moduleType>` to `<moduleType>topic</moduleType>` to make the conversion work.
username_0: I think the underlying problem we are struggling with is as follows:
For topics, there are basically three module types:
| \<moduleType> | \<moduleShortName> | example RNG | generated DTD |
|----------------------|-------------------------------|------------------------------------|----------------------|
| topicshell | concept | concept.rng | concept.dtd |
| | par_concept | par_concept.rng | par_concept.dtd |
| topic | concept | conceptMod.rng | concept.mod |
| | | *no parson equivalent* | *missing for parson customization* |
| constraint | strictTaskbody | strictTaskbodyConstraintMod.rng | strictTaskbodyConstraint.mod |
| | par_concept-c | par_conceptConstraintMod.rng | par_conceptConstraint.mod |
For our customization, we are only using a shell and a constraint file. So it looks as if some of the required entries that go into the DTD .mod file are missing in our generated DTD files.
### Questions
1. Is the problem analysed correctly?
2. Can we fix this in the source RNG files: file name, \<moduleType> or \<moduleShortName>?
3. Do we have to adapt the XSLT scripts?
Any pointer welcome.
Best regards,
Frank
username_1: The naming conventions used in the OASIS-supplied modules are just that, conventions. However, the current RNG-to-DITA code depends on those conventions.
I reviewed the 1.3 RNG coding conventions; we didn't codify those naming conventions, and if we had, they would at most be SHOULD, not MUST (the TC has been moving away from trying to require specific names for files as part of the spec).
I'll review the code that depends on naming conventions and replace it where possible.
But as a workaround, you should use "Mod.rng" for all module files. That's probably a good practice regardless (that is, naming conventions are a good thing and we strongly urge their use even if they are not, technically, required).
username_1: I've corrected the rngfunc:isModuleDoc() function to use the module type defined in the module metadata, not the RNG filename, to determine if a module is or is not a module of some type.
Produces the correct result using the ah-dita RNG, which does not follow the OASIS naming convention.
Pushed to develop branch.
Status: Issue closed
|
wso2/product-is | 455155309 | Title: STS client doesn't return multivalued attributes based on the "MultiAttributeSeparator" property in the user store configuration
Question:
username_0: Moved from https://wso2.org/jira/browse/IDENTITY-6681
sts-client uses AttributeCallbackHandler for token generation. Here the multi-attribute separator is hard-coded to ",,,"; it only gets overridden if the requested claim comes with MultiAttributeSeparator. [1]
Since this is set up with the resident IDP, it is not possible to configure an advanced claim "MultiAttributeSeparator" defaulting to ",".
Even after defining a claim "MultiAttributeSeparator" that is requested in the request, attribute separation only happens for SAML 1.1, not for 2.0. However, when invoking from the travelocity client, attributes are properly sent to the client based on the "MultiAttributeSeparator" property of the user store. Hence, the two clients behave differently.
We should consider the "MultiAttributeSeparator" property of the user store configuration in this flow too, and it should be applied to SAML 2.0 as well.
[1] https://github.com/wso2-extensions/identity-inbound-auth-openid/blob/v5.1.1/components/org.wso2.carbon.identity.provider/src/main/java/org/wso2/carbon/identity/provider/AttributeCallbackHandler.java#L346
[2] https://github.com/wso2-extensions/identity-inbound-auth-openid/blob/v5.1.1/components/org.wso2.carbon.identity.provider/src/main/java/org/wso2/carbon/identity/provider/AttributeCallbackHandler.java#L359
Thanks! |
akkadotnet/akka.net | 250103879 | Title: Remoting serialization bindings may cause issues with persistence serialization
Question:
username_0: I noticed that the remoting config, contains serialization-bindings for certain primitive types.
https://github.com/akkadotnet/akka.net/blob/dev/src/core/Akka.Remote/Configuration/Remote.conf#L55
This effectively means that if someone stores primitive types with the persistence module, those values will not be serialized with the serializer they expect.
One can argue that this is a far-fetched issue, but it's just the kind of thing that someone can spend many hours trying to debug, figuring out why their persistent actor is not working the way they think.
I'd like some discussion on this. At the very least we should add some remarks to the persistence docs about this. |
spring-projects/spring-session | 212320520 | Title: Session expired when server time is wrong
Question:
username_0:
```java
public boolean isExpired() {
    return isExpired(System.currentTimeMillis());
}

boolean isExpired(long now) {
    if (this.maxInactiveInterval < 0) {
        return false;
    }
    return now - TimeUnit.SECONDS
            .toMillis(this.maxInactiveInterval) >= this.lastAccessedTime;
}
```
I found these two methods in the `MapSession` class. Should this depend on each server's time in a cluster?
Sessions will always expire if the server times are not the same.
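A worked illustration of the skew concern (standalone sketch; the timeout and offset numbers are hypothetical):
```java
public class ClockSkewDemo {
    public static void main(String[] args) {
        long maxInactiveMillis = 1800 * 1000L;                  // 30-minute timeout
        long lastAccessedTime = System.currentTimeMillis();     // stamped by server A
        long nowOnServerB = lastAccessedTime + 40 * 60 * 1000L; // server B's clock runs 40 min ahead
        // Same comparison as MapSession.isExpired(long now):
        boolean expired = nowOnServerB - maxInactiveMillis >= lastAccessedTime;
        System.out.println(expired); // true: the session looks expired as soon as server B evaluates it
    }
}
```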
Status: Issue closed
Answers:
username_1: This is expected behavior. You must keep your server times in sync. |
r-lib/later | 711995269 | Title: Await later calls
Question:
username_0: I am trying to write a thread safe wrapper for later and am running into issues with exception handling.
My main goal is to "await" when later calls return.
The following code works well overall, but if an exception occurs in the later call, the mutex never gets unlocked despite the try/catch.
```c++
#include <mutex>
#include <later_api.h>
namespace asynclater {
std::mutex later_mutex;
struct AsyncLaterData
{
void (*func)(void *);
void *data;
} rsdat;
void later(void (*func)(void *), void *data, double secs)
{
later_mutex.lock();
rsdat.data = data;
rsdat.func = func;
later::later([](void *data) {
try
{
auto d = static_cast<AsyncLaterData *>(data);
d->func(d->data);
}
catch (...)
{
} // make sure mutex gets unlocked (does not work)
later_mutex.unlock();
},
&rsdat, secs);
}
void awaitLater()
{
later_mutex.lock();
later_mutex.unlock();
}
}
```
Answers:
username_1: I haven't used lambdas in C++ before, but I think it may not be safe to pass one directly to `later()` like that. `later()` expects a bare function pointer; I don't know what kind of magic the compiler is doing to the lambda to provide the function pointer to `later()`.
Since `later()` takes a bare pointer, I don't think that the compiler is able to track the lifetime of the `std::function` from your lambda, and so I think the lambda will be destroyed before it is invoked.
It is possible to wrap up `function` objects so that `later()` can handle them, but it currently involves adding a bit of code similar to [this](https://github.com/rstudio/httpuv/blob/5bb43428816d175cf66d906599c7569519c67b27/src/callback.cpp). However, now that we're using C++11 in various packages, we could make `later()` take `std::function` objects directly so that extra code isn't necessary (but that won't help you currently).
After dealing with the lifetime issues, here are some ideas:
* Have you tried using RAII to unlock the mutex? I know you can't RAII to both lock and unlock the mutex, but you could use it just for unlocking.
* Does any code after the `catch` execute?
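For reference, a minimal sketch of the unlock-only RAII idea using `std::adopt_lock` (plain C++11; note that destructors only run when the stack unwinds normally, so this would not survive a `longjmp`):
```c++
#include <mutex>

std::mutex later_mutex;  // stands in for asynclater::later_mutex above

void callback(void * /*data*/) {
    // adopt_lock: take ownership of the already-locked mutex, so the guard's
    // destructor unlocks it on every normal exit path -- a plain return or
    // a thrown C++ exception.
    std::lock_guard<std::mutex> guard(later_mutex, std::adopt_lock);
    // ... invoke the user function here ...
}
```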
username_0: As far as I know, lambdas can decay into function pointers as long as they don't capture anything.
Here is some code for reproducing the problem:
```R
Rcpp::cppFunction('
#include <Rcpp.h>
#include "later_api.h"
int later_rferror()
{
later::later([](void *data) {
try
{
Rcpp::Rcout << "A\\n";
Rf_error("some error\\n");
Rcpp::Rcout << "B\\n";
}
catch (...)
{
Rcpp::Rcout << "C\\n";
}
Rcpp::Rcout << "D\\n";
},
0, 0);
return 1;
}', depends=c("later"))
```
Calling `later_rferror()` results in:
```
[1] 1
A
Error: some error
later: exception occurred while executing callback:
```
Here is the same function with `Rcpp::stop(...)`:
```R
Rcpp::cppFunction('
#include <Rcpp.h>
#include "later_api.h"
int later_rcppstop()
{
later::later([](void *data) {
try
{
Rcpp::Rcout << "A\\n";
Rcpp::stop("some error\\n");
Rcpp::Rcout << "B\\n";
}
catch (...)
{
Rcpp::Rcout << "C\\n";
[Truncated]
catch (...)
{
Rcpp::Rcout << "C\\n";
}
Rcpp::Rcout << "D\\n";
},
0, 0);
return 1;
}', depends=c("later"))
```
Results in:
```
[1] 1
raii init
A
Error: some error
later: exception occurred while executing callback:
```
username_1: Oh, the issue is simple. `Rf_error()` is a C function for raising an error at the R level (similar to calling `stop()` in R code); it doesn't throw a C++ exception. My rough understanding is that it's implemented with `longjmp`, and that does not play well with C++ exception handling and destructors. See here for more information about it:
https://developer.r-project.org/Blog/public/2019/03/28/use-of-c---in-packages/index.html
I think the most straightforward way to deal with this is to wrap your code in a `tryCatch`, in R code.
This may help: I have an example [here](https://gist.github.com/username_1/4c54c765d16fdd9328e763db22512c4f) for constructing an expression from C and `eval`-ing it. All you really need to do is create a wrapper closure in R which does the `tryCatch()` around the code you really want to run, and then invoke that.
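A minimal sketch of that wrapper-closure idea on the R side (the `safely` helper is illustrative, not part of later's API):
```r
# Wrap the real work in tryCatch so an R-level error is caught in R code,
# instead of longjmp-ing across the C++ callback frame.
safely <- function(work) {
  function() {
    tryCatch(work(), error = function(e) {
      message("later callback failed: ", conditionMessage(e))
    })
  }
}

later::later(safely(function() stop("some error")), delay = 0)
```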
Status: Issue closed
username_0: Thank you for the information.
My application is performance sensitive so I am looking at [`R_UnwindProtect`](https://cran.r-project.org/doc/manuals/r-devel/R-exts.html#Condition-handling-and-cleanup-code) linked in the article you referenced for a solution with less overhead. (Even if I am not sure that I understand how to use it yet.)
As the issue is with the R C API I will close this issue. Thanks for all your help! |
Hagbuck/poly-compiler | 357980599 | Title: Creating test sheets/test cases
Question:
username_0: To prepare alongside the deliverable, and quite simply for ourselves: prepare a few scenarios and test cases (e.g. operator precedence, invalid characters, etc.)
Edit for ideas:
- 3 * 4 + 9 * 2
- 4 + (6 * 3
Answers:
username_1: `3 - ` must raise an error before the *code_gen*
username_0: The input `5 1 3` must raise a compilation error.
For the moment it generates:
```
[node_const] ~ 5 ~ (1;0) ~ 0 child(s)
[RESULT] -> 5
```
username_0: The bug reported above has been fixed since eaf7d7f
username_1: Using a variable without a declaration:
```
var x;
x = 1;
z = x;
```
Generates the semantic error: `[ERROR] ~ Error : z is not defined`
username_1: Double declaration of the same variable:
```
var x;
var x;
y = x - 3;
```
Generates the semantic error: `[ERROR] ~ Error : x is already defined`
username_1: ```
p1(x){
return x + 1;
}
main(){
var a;
var b;
a = p1(b) = 7;
print a;
print b;
}
```
Must raise an error, because a constant is being assigned to a constant. At the assignment **node** we must check that the first child is indeed a **varRef**.
Status: Issue closed
|
lerna/lerna | 526004399 | Title: Bug: Lerna skipping packages it expects to have been installed in root already
Question:
username_0: Lerna falsly assumes root packages to be present when building non-root packages.
Say a package is listed both in `devDependencies` in the "root" package and also as `dependencies` in a non-root package (let's call it "my-sub-package"). And let's assume one did not install (and hoist) any dependencies of any package in the repo yet so the "root" `node_modules` folder does not exist yet, neither exists any sub-package `node_modules` folder.
Now one executes:
```
npm run lerna -- bootstrap --loglevel silly --hoist --scope "my-sub-package" -- -- --production
```
then the following is expected:
## Expected Behavior
lerna installs all packages registered as dependencies in "my-sub-package" and hoists them to the "root" `node_modules`.
## Current Behavior
lerna skips installing dependencies that are registered as `dependencies` or `devDependencies` in the root package.
So say I have a package `common-tags` registered in a sub-package as well as in `devDependencies` in the root package then I will get the following output:
```
hasDependencyInstalled root common-tags
```
However, as mentioned before, the package (e.g. `common-tags`) has never been installed before and thus it will be missing in the `node_modules` folder and hence break the build during runtime.
## Possible Solution
* Option a) Lerna respects the `--production` flag and distinguishes between `dependencies` and `devDependencies` when checking for `hasDependencyInstalled`.
* Option b) Lerna allows us to explicitly exclude the root package from any checks via `--ignore "root"`.
## Steps to Reproduce (for bugs)
1. Install lerna globally with e.g. `npm i -g lerna`
2. Create a root package.json with e.g. `common-tags` registered in `devDependencies`
3. Create a subpackage named "my-sub-package" with the `common-tags` package registered in `dependencies` (see the manifest sketch below)
4. Execute `npm run lerna -- bootstrap --loglevel silly --hoist --scope "my-sub-package" -- --production`
5. See that the `common-tags` package is missing from the `node_modules` folder
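A minimal pair of manifests matching these steps (package names and versions are illustrative):

Root `package.json`:
```json
{
  "name": "root",
  "private": true,
  "devDependencies": {
    "common-tags": "^1.8.0",
    "lerna": "^3.18.5"
  }
}
```

`packages/my-sub-package/package.json`:
```json
{
  "name": "my-sub-package",
  "version": "1.0.0",
  "dependencies": {
    "common-tags": "^1.8.0"
  }
}
```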
## Context
This affects the automatic building of monorepo package dependencies.
## Your Environment
lerna v3.18.5
Answers:
username_1: Yep, that looks like a bug.
I wouldn't recommend using `lerna bootstrap` when package managers do well with this sort of thing nowadays (Yarn workspaces, [pnpm workspaces](https://pnpm.js.org/en/workspaces), and soon npm v7 workspaces).
username_2: Yet those same package managers say that they don't do what `lerna` does (see: [`yarn` docs on `lerna`](https://classic.yarnpkg.com/en/docs/workspaces/#toc-how-does-it-compare-to-lerna)), which is contradictory.
If workspaces are a "low level primitive" wouldn't _someone_ (`lerna` or otherwise) be responsible for implementing the more complex logic on top of them?
Regardless of that, would a PR that fixes this issue be accepted? |
sensu/sensu-docs | 146433449 | Title: Explain that clients created via the /clients API are "proxy clients"
Question:
username_0: See: https://github.com/sensu/sensu/issues/1203
We need to explain that, by design, it's not possible to tell Sensu about a client that hasn't registered itself yet, and then expect keepalive messages from that client. Clients created via the API are actually proxy clients https://sensuapp.org/docs/latest/clients#proxy-clients
Status: Issue closed
Answers:
username_1: Migrating to https://github.com/sensu/sensu-docs-site/issues/105 |
pytorch/xla | 623759876 | Title: all_reduce different results v1.5 vs nightly (24.05.2020)
Question:
username_0: ## 🐛 Bug
Code using `all_reduce` does not produce the same result for v1.5 and nightly (24.05.2020)
## To Reproduce
In Colab
```
VERSION = "1.5"
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
```
and
```python
import torch
import torch.nn as nn
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
print("PyTorch version:", torch.__version__)
print("PyTorch xla version:", torch_xla.__version__)
# "Map function": acquires a corresponding Cloud TPU core, creates a tensor on it,
# and prints its core
def save_fn(index, flags):
# Barrier to prevent master from exiting before workers connect.
xm.rendezvous('init')
# Acquires the (unique) Cloud TPU core corresponding to this process's index
device = xm.xla_device()
tensor = torch.tensor([1, ], dtype=torch.float).to(device)
xm.all_reduce("sum", [tensor, ])
world_size = int(tensor.item())
print(index, "world_size:", world_size, "vs", xm.xrt_world_size())
# Spawns eight of the map functions, one for each of the eight cores on
# the Cloud TPU
flags = {}
# Note: Colab only supports start_method='fork'
xmp.spawn(save_fn, args=(flags,), nprocs=8, start_method='fork')
```
gives
```
PyTorch version: 1.5.0a0+ab660ae
PyTorch xla version: 1.5
6 world_size: 8 vs 8
1 world_size: 8 vs 8
2 world_size: 8 vs 8
0 world_size: 8 vs 8
5 world_size: 8 vs 8
3 world_size: 8 vs 8
7 world_size: 8 vs 8
4 world_size: 8 vs 8
```
And for nightly:
```
[Truncated]
7 world_size: 1 vs 8
3 world_size: 1 vs 8
5 world_size: 1 vs 8
1 world_size: 1 vs 8
4 world_size: 1 vs 8
2 world_size: 1 vs 8
```
## Expected behavior
Results should be the same?
## Environment
Google Colab
## Additional context
<!-- Add any other context about the problem here. -->
Answers:
username_1: Bug already fixed on HEAD.
https://github.com/pytorch/xla/pull/2122
Status: Issue closed
|
cozy/cozy-ui | 297804115 | Title: Modal: create a ModalFooter to handle the modal footer layout
Question:
username_0: In the case where I want to display a form in a modal with a submission button, I can't just use the `primaryAction` attribute.
So it would be good to have something to handle the footer layout, inside which I could put my submission and cancellation buttons.
We already have something similar in the intent modal that lets the user pick a contact in https://github.com/cozy/cozy-contacts:
the css
https://github.com/cozy/cozy-contacts/blob/c6ad35f6fa2439340bdd4cb14533dac3013fe06d/src/styles/intent.styl#L7-L28
```stylus
.intent-layout
height 100%
display flex
align-items stretch
flex-direction column
.intent-footer
height 4.5rem
background-color paleGrey
box-shadow inset 0 -1px 0 0 silver
display flex
align-items center
padding .5rem 1rem
.intent-footer-label
flex 1
.intent-main
flex 1
overflow-y scroll
width 100%
```
the pseudo-html markup (jsx)
https://github.com/cozy/cozy-contacts/blob/c6ad35f6fa2439340bdd4cb14533dac3013fe06d/src/components/PickContacts.jsx#L11-L19
```jsx
const IntentFooter = ({ label, onSubmit, onCancel, t }) => (
<div className="intent-footer">
<div className="intent-footer-label">{label}</div>
<Button theme="secondary" onClick={onCancel}>
{t("cancel")}
</Button>
<Button onClick={onSubmit}>{t("confirm")}</Button>
</div>
);
```
Answers:
username_1: Kinda the same concern as #320 for fixed header & footer
Already in the todo-list, just need to figure out the smartest way to do it.
username_2: @username_1 Is it closable now that we have the new layout for the modal?
username_1: well, not really. If I understood correctly what @username_0 needed, you still can't pass children and build your own footer. But I'm wondering why you wouldn't be able to right now.
Status: Issue closed
username_0: I managed my needs in cozy-contacts, and will check later to change the code and use the new modal layout. |
mgdm/htmlq | 1109159976 | Title: noscript
Question:
username_0: Trying to get a list of [currently available Invidious instances](https://redirect.invidious.io), I started doing

    curl -s https://redirect.invidious.io | htmlq "noscript"

which gave me a list of all the noscript elements on the page, including the one I was looking for:

    <noscript><div class="instances-list"><h2>Available instances</h2><ul class="list"><li><a href="https://invidious.snopyta.org">invidious.snopyta.org</a></li><li><a href="https://yewtu.be">yewtu.be</a></li><li><a href="https://invidious.kavin.rocks">invidious.kavin.rocks</a></li><li><a href="https://invidious-us.kavin.rocks">invidious-us.kavin.rocks</a></li><li><a href="https://invidious-jp.kavin.rocks">invidious-jp.kavin.rocks</a></li><li><a href="https://vid.puffyan.us">vid.puffyan.us</a></li><li><a href="https://invidious.namazso.eu">invidious.namazso.eu</a></li><li><a href="https://inv.riverside.rocks">inv.riverside.rocks</a></li><li><a href="https://vid.mint.lgbt">vid.mint.lgbt</a></li><li><a href="https://invidious.osi.kr">invidious.osi.kr</a></li><li><a href="https://invidio.xamh.de">invidio.xamh.de</a></li><li><a href="https://yt.artemislena.eu">yt.artemislena.eu</a></li></ul></div></noscript>
But when I tried to dig deeper to only get the list of URLs, it only gave me empty results, no matter what I tried:

    $~ curl -s https://redirect.invidious.io | htmlq "noscript a"
    $~ curl -s https://redirect.invidious.io | htmlq "noscript li"
    $~ curl -s https://redirect.invidious.io | htmlq "noscript ul"
    $~ curl -s https://redirect.invidious.io | htmlq "noscript div"
Is this an issue with `noscript` in general or with that specific site? Why does it find what I am looking for in the first place?
Using `htmlq 0.4.0` from [AUR](https://aur.archlinux.org/packages/htmlq-bin/)
Answers:
username_0: I know I can do
    curl -s https://api.invidious.io/instances.json | jq -r '.[][1].uri'
because that is where the data from outside the `noscript` comes from, but this might still be a valid issue. |
misyero/chat-space | 544883562 | Title: [Request] ChatSpace behavior check
Question:
username_0: # IPアドレス
192.168.3.116
Answers:
username_1: Michishita-san, thank you for your submission!
When checking the behavior, I found the following defect, so please fix it!
**◯ Auto-refresh is not working, so the latest messages (text/images) cannot be retrieved**
To put it the other way around, everything apart from this was perfect. Just a little more to go, keep it up!
Once fixed, please submit the ChatSpace completion report form below again.
https://docs.google.com/forms/d/e/1FAIpQLSfNKbknajjCthhkGSyIkhspu2hOAxnESHweNLX2LQJdlO5vGw/viewform
username_1: Thank you for the resubmission, and good work on the fix!
I checked it, and there was nothing left to correct, so this is LGTM 🎉
Congratulations! |
jwenjian/ghiblog | 628325989 | Title: Lorem Ipsum - All the facts - Lipsum generator
Question:
username_0: Card [Lorem Ipsum - All the facts - Lipsum generator](https://ift.tt/2ZWxKos) was added to the **网络收藏夹** (web bookmarks) list on the **遇见** (encounters) board at `June 1, 2020 at 06:14PM`

> placeholder text generator; makes your placeholder text read like a series of normal English paragraphs, rather than: "this is a piece of text this is a piece of text"

---

[View on Trello](https://ift.tt/2ZWxKos) |
shalotelli/angular-multiselect | 102036715 | Title: Width of drop down is not the same as the input box
Question:
username_0: Hi,
The width of the dropdown is not the same as the input box.
It would be solved by adding the following styles:

    .multi-select { position: fixed; } /* or relative */
    .multi-select-dropdown { width: 100%; }
|
apache/pulsar | 978720246 | Title: Failed to restart pulsar standalone
Question:
username_0: **Describe the bug**
I start a Pulsar standalone using docker-compose and produce some messages. Then I run `docker-compose down` and `docker-compose up -d`. The Pulsar container fails to start.
**To Reproduce**
Steps to reproduce the behavior:
1. docker-compose up -d
2. produce messages
3. docker-compose down
4. docker-compose up -d
4. See error
**Expected behavior**
pulsar container restarts successfully
docker-compose.yaml
```
pulsar:
container_name: milvus-pulsar
image: apachepulsar/pulsar:2.6.1
volumes:
- ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/pulsar:/pulsar/data
command: >
/bin/sh -c "
echo "" >> /pulsar/conf/standalone.conf && \
echo "maxMessageSize=104857600" >> /pulsar/conf/standalone.conf && \
echo "" >> /pulsar/conf/standalone.conf && \
echo "nettyMaxFrameSizeBytes=104857600" >> /pulsar/conf/standalone.conf && \
sed -i 's/^defaultRetentionTimeInMinutes=.*/defaultRetentionTimeInMinutes=10080/' /pulsar/conf/broker.conf && \
bin/pulsar standalone"
```
log:
[pulsar-08-25.log](https://github.com/apache/pulsar/files/7043797/pulsar-08-25.log)
Answers:
username_1: @username_0 Please add the error message and stacktrace to the description. This will help finding the issue if someone else encounters the same issue.
username_1: @username_0 Pulsar 2.6.x isn't actively maintained at the moment. Please check the result with Pulsar 2.7.3 and 2.8.0.
username_0: Ok, I will check it later. I uploaded the full log above which contains the error message and stacktrace.
username_0: I reproduced it with Pulsar 2.8.0.
```
06:55:34.455 [client-scheduler-OrderedScheduler-2-0] ERROR org.apache.bookkeeper.clients.impl.internal.RootRangeClientImplWithRetries - Reason for the failure {}
io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at io.grpc.Status.asRuntimeException(Status.java:533) ~[io.grpc-grpc-api-1.33.0.jar:1.33.0]
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533) ~[io.grpc-grpc-stub-1.33.0.jar:1.33.0]
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:616) ~[io.grpc-grpc-core-1.33.0.jar:1.33.0]
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69) ~[io.grpc-grpc-core-1.33.0.jar:1.33.0]
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:802) ~[io.grpc-grpc-core-1.33.0.jar:1.33.0]
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:781) ~[io.grpc-grpc-core-1.33.0.jar:1.33.0]
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) ~[io.grpc-grpc-core-1.33.0.jar:1.33.0]
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) ~[io.grpc-grpc-core-1.33.0.jar:1.33.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: io.grpc.netty.shaded.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: /172.27.0.2:4181
Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
at io.grpc.netty.shaded.io.netty.channel.unix.Errors.throwConnectException(Errors.java:124) ~[io.grpc-grpc-netty-shaded-1.33.0.jar:1.33.0]
at io.grpc.netty.shaded.io.netty.channel.unix.Socket.finishConnect(Socket.java:243) ~[io.grpc-grpc-netty-shaded-1.33.0.jar:1.33.0]
at io.grpc.netty.shaded.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:672) ~[io.grpc-grpc-netty-shaded-1.33.0.jar:1.33.0]
at io.grpc.netty.shaded.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:649) ~[io.grpc-grpc-netty-shaded-1.33.0.jar:1.33.0]
at io.grpc.netty.shaded.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:529) ~[io.grpc-grpc-netty-shaded-1.33.0.jar:1.33.0]
at io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:465) ~[io.grpc-grpc-netty-shaded-1.33.0.jar:1.33.0]
at io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) ~[io.grpc-grpc-netty-shaded-1.33.0.jar:1.33.0]
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[io.grpc-grpc-netty-shaded-1.33.0.jar:1.33.0]
at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[io.grpc-grpc-netty-shaded-1.33.0.jar:1.33.0]
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[io.grpc-grpc-netty-shaded-1.33.0.jar:1.33.0]
... 1 more
```
Here is the log file
[pulsar-08-25(1).log](https://github.com/apache/pulsar/files/7044481/pulsar-08-25.1.log)
username_1: @username_0 what happens if you append `--no-stream-storage` to your startup command in docker-compose.yaml? (Port 4181 is the stream storage port; that's why it might change the result.)
username_0: @username_1 I got another error after adding the `--no-stream-storage`.
```
07:48:10.267 [main] ERROR org.apache.pulsar.PulsarStandaloneStarter - Failed to start pulsar service.
org.apache.pulsar.broker.PulsarServerException: java.lang.RuntimeException: org.apache.pulsar.client.admin.PulsarAdminException: java.util.concurrent.CompletionException: org.apache.pulsar.client.admin.internal.http.AsyncHttpConnector$RetryException: Could not complete the operation. Number of retries has been exhausted. Failed reason: Maximum redirect reached: 5
at org.apache.pulsar.broker.PulsarService.start(PulsarService.java:821) ~[org.apache.pulsar-pulsar-broker-2.8.0.jar:2.8.0]
at org.apache.pulsar.PulsarStandalone.start(PulsarStandalone.java:296) ~[org.apache.pulsar-pulsar-broker-2.8.0.jar:2.8.0]
at org.apache.pulsar.PulsarStandaloneStarter.main(PulsarStandaloneStarter.java:121) [org.apache.pulsar-pulsar-broker-2.8.0.jar:2.8.0]
Caused by: java.lang.RuntimeException: org.apache.pulsar.client.admin.PulsarAdminException: java.util.concurrent.CompletionException: org.apache.pulsar.client.admin.internal.http.AsyncHttpConnector$RetryException: Could not complete the operation. Number of retries has been exhausted. Failed reason: Maximum redirect reached: 5
at org.apache.pulsar.functions.worker.PulsarWorkerService.start(PulsarWorkerService.java:571) ~[org.apache.pulsar-pulsar-functions-worker-2.8.0.jar:2.8.0]
at org.apache.pulsar.broker.PulsarService.startWorkerService(PulsarService.java:1490) ~[org.apache.pulsar-pulsar-broker-2.8.0.jar:2.8.0]
at org.apache.pulsar.broker.PulsarService.start(PulsarService.java:793) ~[org.apache.pulsar-pulsar-broker-2.8.0.jar:2.8.0]
... 2 more
Caused by: org.apache.pulsar.client.admin.PulsarAdminException: java.util.concurrent.CompletionException: org.apache.pulsar.client.admin.internal.http.AsyncHttpConnector$RetryException: Could not complete the operation. Number of retries has been exhausted. Failed reason: Maximum redirect reached: 5
at org.apache.pulsar.client.admin.internal.BaseResource.getApiException(BaseResource.java:247) ~[org.apache.pulsar-pulsar-client-admin-original-2.8.0.jar:2.8.0]
at org.apache.pulsar.client.admin.internal.BaseResource$1.failed(BaseResource.java:130) ~[org.apache.pulsar-pulsar-client-admin-original-2.8.0.jar:2.8.0]
at org.glassfish.jersey.client.JerseyInvocation$1.failed(JerseyInvocation.java:882) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime.processFailure(ClientRuntime.java:247) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime.processFailure(ClientRuntime.java:242) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime.access$100(ClientRuntime.java:62) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime$2.lambda$failure$1(ClientRuntime.java:178) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:292) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:274) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:244) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:288) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime$2.failure(ClientRuntime.java:178) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.apache.pulsar.client.admin.internal.http.AsyncHttpConnector.lambda$apply$1(AsyncHttpConnector.java:204) ~[org.apache.pulsar-pulsar-client-admin-original-2.8.0.jar:2.8.0]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859) ~[?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?]
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?]
at org.apache.pulsar.client.admin.internal.http.AsyncHttpConnector.lambda$retryOperation$4(AsyncHttpConnector.java:247) ~[org.apache.pulsar-pulsar-client-admin-original-2.8.0.jar:2.8.0]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859) ~[?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?]
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?]
at org.asynchttpclient.netty.NettyResponseFuture.abort(NettyResponseFuture.java:273) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.request.NettyRequestSender.abort(NettyRequestSender.java:473) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.handler.HttpHandler.readFailed(HttpHandler.java:161) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.handler.HttpHandler.handleRead(HttpHandler.java:154) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.channelRead(AsyncHttpClientHandler.java:78) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) ~[io.netty-netty-codec-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
```
username_1: The root cause of the issue has been reported as #11842 .
Status: Issue closed
|
magma/magma | 997085237 | Title: [AMF] : Guti Registration in loop
Question:
username_0: Observed failures during GUTI registrations.
Status: Issue closed
Answers:
username_1: I executed the SA_01_02_03_04_Hyper_Guti test case with 50 loops. The test case passed; please find the pcap below:
[https://app.zenhub.com/files/170803235/031c3161-636e-4d7b-89e4-888fc39b08dc/download](https://app.zenhub.com/files/170803235/031c3161-636e-4d7b-89e4-888fc39b08dc/download) |
purpleducks/TheVigilante | 613720815 | Title: Allow the user to see the music credits
Question:
username_0: ### Game Engine ###
**_Feature:_** Figure out a way to credit the music used in the game, either with end-of-game credits or a button to pull up what's playing and by whom
##### Difficulty - 3 #####
Answers:
username_0: This has now been implemented.
Status: Issue closed
|
greenlion/PHP-SQL-Parser | 88963093 | Title: Support nested join operations in the FROM clause.
Question:
username_0: _From [<EMAIL>](https://code.google.com/u/117149737953896729020/) on April 29, 2011 23:22:10_
What steps will reproduce the problem?
1. use the query ;)

What is the expected output? What do you see instead?

SELECT *
FROM (t1 LEFT JOIN t2 ON t1.a=t2.a)
LEFT JOIN t3
ON t2.b=t3.b OR t2.b IS NULL
(taken from http://dev.mysql.com/doc/refman/5.0/en/nested-join-optimization.html )
is parsed to
[SELECT] => Array
(
[0] => Array
(
[expr_type] => operator
[alias] => `*`
[base_expr] => *
[sub_tree] =>
)
)
[FROM] => Array
(
[0] => Array
(
[table] => (t1 LEFT JOIN t2 ON t1.a=t2.a)
[alias] => (t1 LEFT JOIN t2 ON t1.a=t2.a)
[join_type] => JOIN
[ref_type] =>
[ref_clause] =>
[base_expr] =>
[sub_tree] =>
)
[1] => Array
(
[table] => t3
[alias] => t3
[join_type] => LEFT
[ref_type] => ON
[ref_clause] => t2.b=t3.b OR t2.b IS NULL
[base_expr] =>
[sub_tree] =>
)
)
I expect the (t1... part to be parsed into two tables.

What version of the product are you using? On what operating system?
latest on zend server using php 5.3

Please provide any additional information below.
_Original issue: http://code.google.com/p/php-sql-parser/issues/detail?id=9_
Answers:
username_0: _From [greenlion<EMAIL>](https://code.google.com/u/<EMAIL>/) on April 30, 2011 09:46:14_
Add support for nested table joins. This is a MySQL extension to the ANSI join syntax which is omitted. Since the goal is full MySQL syntax support, this is an important enhancement request.
There are plenty of examples on the listed documentation page to use as test cases, as well as the example included in this report.
**Summary:** Support nested join operations in the FROM clause.
**Status:** Accepted
**Owner:** <EMAIL>
**Labels:** -Type-Defect -Priority-Medium Type-Enhancement Priority-High
username_0: _From [<EMAIL>](https://code.google.com/u/<EMAIL>/) on May 01, 2011 22:15:31_
Nested join support is checked in at revision 42 :
SELECT *
FROM (t1 LEFT JOIN t2 ON t1.a=t2.a)
LEFT JOIN t3
ON t2.b=t3.b OR t2.b IS NULL
Array
(
[SELECT] => Array
(
[0] => Array
(
[expr_type] => operator
[alias] => `*`
[base_expr] => *
[sub_tree] =>
)
)
[FROM] => Array
(
[0] => Array
(
[table] => (t1 LEFT JOIN t2 ON t1.a=t2.a)
[alias] =>
[join_type] => JOIN
[ref_type] =>
[ref_clause] =>
[base_expr] => t1 LEFT JOIN t2 ON t1.a=t2.a
[sub_tree] => Array
(
[0] => Array
(
[table] => t1
[alias] => t1
[join_type] => JOIN
[ref_type] =>
[ref_clause] =>
[base_expr] =>
[sub_tree] =>
)
[1] => Array
(
[table] =>
[alias] => t2
[join_type] => LEFT
[ref_type] => ON
[ref_clause] => t1.a=t2.a
[base_expr] =>
[sub_tree] =>
)
)
)
[1] => Array
(
[table] => t3
[alias] => t3
[join_type] => LEFT
[ref_type] => ON
[ref_clause] => t2.b=t3.b OR t2.b IS NULL
[base_expr] =>
[sub_tree] =>
)
)
)
t/nested.php is added to verify operation
**Status:** Verified
Status: Issue closed
|
awslabs/aws-service-catalog-puppet | 795937874 | Title: Launch Role Constraints are being clobbered on spoke local portfolios
Question:
username_0: Please include a link to your expanded manifest, the full contents of your AWS CodeBuild output (see https://aws-service-catalog-puppet.readthedocs.io/en/latest/puppet/using_the_cli.html#export-puppet-pipeline-logs)
Please ensure you are using the latest version and have run a validate command on your manifest file see (https://aws-service-catalog-puppet.readthedocs.io/en/latest/puppet/using_the_cli.html#validate)
# Steps to reproduce
1. create two spoke local portfolios for the same portfolio
# Expected results
1. both portfolios should be shared and there should be two launch role constraints
# Actual results
1. only one of the launch role constraints are applied
Answers:
username_0: fixed
Status: Issue closed
|
Azure/azure-sdk-for-cpp | 1072581680 | Title: Use CI.yml to set any required env vars that applies for all matrix configurations per service
Question:
username_0: you're right @benbp. We use the `test-resources.json` to define the services and the env vars to be set.
We need to use the ci.yml in each service to set whatever ENV VARs are required. Agreed. I will create an issue to make it like that! :)
_Originally posted by @username_0 in https://github.com/Azure/azure-sdk-for-cpp/pull/3146#discussion_r763364981_<issue_closed>
Status: Issue closed |
orangewit3/supply_chain_simulator | 713951261 | Title: Version should be 0.4.26
Question:
username_0: https://github.com/username_1/supply_chain_simulator/blob/27101de936873e50b476814f0c815cb0c79a16ce/supplyChain.sol#L1
Version number must be changed from 0.4.20 to 0.4.26
Answers:
username_0: @username_1 can you please fix this already?
I had to update it in my own branch, but we should all already have the correct version when making any new branches and forks
username_1: I'll put up a PR for this after Nakul's code gets pushed
username_0: Did you put up a PR for this yet?
Status: Issue closed
username_0: Fixed |
xamarin/java.interop | 565548990 | Title: Incorrectly removing Kotlin members
Question:
username_0: Not sure exactly why, but a member is being removed. It might have to do with the fact that there is a property and a method with the same name...
Message:
```
Kotlin: Hiding internal getter method okhttp3/MultipartBody - contentType
```
Property:
https://github.com/square/okhttp/blob/parent-4.2.2/okhttp/src/main/java/okhttp3/MultipartBody.kt#L38
```kt
private val contentType: MediaType = "$type; boundary=$boundary".toMediaType()
```
Method:
https://github.com/square/okhttp/blob/parent-4.2.2/okhttp/src/main/java/okhttp3/MultipartBody.kt#L51
```kt
override fun contentType(): MediaType = contentType
```
Binlog:
[msbuild.zip](https://github.com/xamarin/java.interop/files/4206685/msbuild.zip)
Project:
[Square.OkHttp3.zip](https://github.com/xamarin/java.interop/files/4206732/Square.OkHttp3.zip)
Answers:
username_0: It appears to be happening for `var` as well:
https://github.com/square/okhttp/blob/parent-4.2.2/okhttp/src/main/java/okhttp3/Cache.kt#L152
```kt
private var hitCount = 0
```
https://github.com/square/okhttp/blob/parent-4.2.2/okhttp/src/main/java/okhttp3/Cache.kt#L380
```kt
@Synchronized fun hitCount(): Int = hitCount
```
username_0: Also happens with setXxx methods:
https://github.com/square/okhttp/blob/parent-4.2.2/okhttp/src/main/java/okhttp3/MultipartBody.kt#L240
```kt
fun setType(type: MediaType) = apply {
```
https://github.com/square/okhttp/blob/parent-4.2.2/okhttp/src/main/java/okhttp3/MultipartBody.kt#L233
```kt
private var type = MIXED
```
Status: Issue closed
username_2: *Release status update*
A new Preview version has now been published that includes the fix for this item. The fix is not yet included in a Release version. I will update this again when a Release version is available that includes the fix.
Fix included in Xamarin.Android 10.2.100.7 and 10.3.0.33.
Fix included on Windows in Visual Studio 2019 version 16.6 Preview 1 and higher. To try the Preview version that includes the fix, [check for the latest updates](https://docs.microsoft.com/visualstudio/install/update-visual-studio?view=vs-2019) in [Visual Studio Preview](https://visualstudio.microsoft.com/vs/preview/).
Fix included on macOS in Visual Studio 2019 for Mac version 8.6 Preview 1. To try the Preview version that includes the fix, check for the latest updates on the **Preview** [updater channel](https://docs.microsoft.com/visualstudio/mac/update).
username_2: *Release status update*
A new Release version of Xamarin.Android has now been published that includes the fix for this item.
Fix included in Xamarin.Android 10.3.1.0.
Fix included on Windows in Visual Studio 2019 version 16.6. To get the new version that includes the fix, [check for the latest updates](https://docs.microsoft.com/visualstudio/install/update-visual-studio?view=vs-2019) or install the latest version from <https://visualstudio.microsoft.com/downloads/>.
Fix included on macOS in Visual Studio 2019 for Mac version 8.6. To get the new version that includes the fix, check for the latest updates on the **Stable** [updater channel](https://docs.microsoft.com/visualstudio/mac/update). |
JacquesCarette/Drasil | 340665693 | Title: Differences in Assumption 4 in GlassBR SRS
Question:
username_0: - [ ] caseStudies (right) should reference Section 11 to reduce duplication
- [ ] Drasil should reference the dependent Assumptions/Instance Model

Answers:
username_1: Agreed. Please make this change.
Status: Issue closed
|
cegui/cegui | 547378018 | Title: List of sample browser bugs
Question:
username_0: **[Original report](https://bitbucket.org/cegui/cegui/issue/1168) by <NAME> (Bitbucket: [Jahndal](https://bitbucket.org/Jahndal), ).**
----------------------------------------
= WidgetsSample =
MultiLineEditbox sets text from MultiLineEditbox::onTextChanged to add \n, effectively doing setText twice. Quite wasteful.
Scrollbars ignore click on bar & have one step for buttons
Scrollbar thumbs can be dragged outside the preview rect and lost until restart
*/MultiColumnList ping text is not colored
*/ImageButton not visible
*/Combobox - if text manually changed from existing to new, opened list shows previous existing text as selected
Generic/Image not visible
OgreTray/ComboDropList, Combobox, DropDownMenu crash "There is no WindowRendererFactory named 'Core/ListWidget' available"
OgreTray/ListHeader not visible
OgreTray/ListHeaderSegment not visible
OgreTray/ListboxItem not visible
OgreTray/Menubar not visible
OgreTray/PopupMenu not visible
TaharezLook/ComboDropList not visible
TaharezLook/InventoryItem not visible
TaharezLook/ListHeader not visible
TaharezLook/PopupMenu not visible
TaharezLook/SliderThumb not visible
Vanilla/ComboDropList not visible
WindowsLook/ComboDropList not visible
WindowsLook/IconButton not visible
WindowsLook/ListHeader not visible
WindowsLook/ListHeaderSegment not visible
WindowsLook/ListboxItem not visible
WindowsLook/MenuItem not visible
WindowsLook/MultiColumnList non-selected item text is white or not rendered at all
WindowsLook/PopupMenu not visible
WindowsLook/Slider has thumb only and is not functional
WindowsLook/SliderThumb not visible
WindowsLook/SystemButton not visible
= FontsSample =
Caret is too wide and causes the text to move
New font bug:
1. Select the last preset
2. Add new font mizufalp 12.0
3. Select it
4. Crash into FreeTypeFontGlyph::getRenderedAdvance(): CEGUI::FontGlyph::getImage(...) returned nullptr
= CustomShapesDrawingSample =
Left widget doesn't render FPS line
= Demo6Sample =
'Quit this demo' button doesn't work
= ModelViewSample =
Add random item in list - crash (ListViewItemRenderedString::setStringAndFormatting) if items exist, nothing if model is empty
Add random item (to root) - the same
Remove selected list item(s) - the same (or ListViewItemRenderedString::~ListViewItemRenderedString in tree) when at least one selected
Update selected list item's name - removes item from list & removes with gap in tree when the name is empty
Enable multiselection - Select multiple - Disable multiselection - Multiple items remain selected
= ScrollablePaneDemo =
No hotkey to create windows
File->Quit button in menu doesn't work
Answers:
username_1: Most of things here are fixed or outdated
Status: Issue closed
|
kiwi-cam/homebridge-broadlink-rm | 922591416 | Title: AirConditioner - ReferenceError: logLevel is not defined
Question:
username_0: **Describe the bug**
```
[homebridge-broadlink-rm-pro] This plugin is taking long time to load and preventing Homebridge from starting. See https://git.io/JtMGR for more info.
```
```
(node:8901) UnhandledPromiseRejectionWarning: ReferenceError: logLevel is not defined
at AirConAccessory.updateAccessories (/homebridge/node_modules/homebridge-broadlink-rm-pro/accessories/aircon.js:160:5)
at /homebridge/node_modules/homebridge-broadlink-rm-pro/base/platform.js:79:50
at Array.forEach (<anonymous>)
at BroadlinkRMPlatform.accessories (/homebridge/node_modules/homebridge-broadlink-rm-pro/base/platform.js:78:17)
```
**To Reproduce**
```
"accessories": [
{
"name": "Air Conditioner",
"type": "air-conditioner",
"autoSwitch": "A/C Auto Switch",
"minTemperature": 17,
"maxTemperature": 30,
"temperatureFilePath": "/homebridge/temp",
"pseudoDeviceTemperature": 23,
"defaultCoolTemperature": 17,
"defaultHeatTemperature": 23,
"autoHeatTemperature": 23,
"autoCoolTemperature": 17,
"data": {...}
},
```
homebridge-broadlink-rm-pro v4.4.4
Answers:
username_1: Thanks for the report @username_0.
The first error you show is being addressed in #335. The other one is a bug. I've just pushed a new BETA (4.4.5-beta.1) which should fix this error. Let me know how you get on.
username_0: @username_1 thanks for the fix.
Installed 4.4.5-beta1; it loaded pretty slowly but looks like it works. Btw, do you know what this warning is about?
```
[6/17/2021, 4:01:46 PM] Homebridge v1.3.4 (ds218+) is running on port 53824.
[6/17/2021, 4:01:46 PM] [homebridge-broadlink-rm-pro] This plugin generated a warning from the characteristic 'Current Relative Humidity': characteristic was supplied illegal value: null! Home App will reject null for Apple defined characteristics. See https://git.io/JtMGR for more info.
[6/17/2021, 4:01:46 PM] [Broadlink RM] [INFO] Discovered Broadlink RM3 Mini (2737) at 10.0.0.61 (c8:f7:42:40:8c:fc)
[6/17/2021, 4:01:47 PM] [homebridge-broadlink-rm-pro] This plugin generated a warning from the characteristic 'Current Relative Humidity': characteristic was supplied illegal value: null! Home App will reject null for Apple defined characteristics. See https://git.io/JtMGR for more info.
wakeUntilAPIReady: try 10
wakeUntilAPIReady: try 20
[6/17/2021, 4:03:02 PM] [homebridge-broadlink-rm-pro] This plugin generated a warning from the characteristic 'Current Relative Humidity': characteristic was supplied illegal value: null! Home App will reject null for Apple defined characteristics. See https://git.io/JtMGR for more info.
wakeUntilAPIReady: try 30
```
username_1: Yeah, you're using "pseudoDeviceTemperature" which won't have a Humidity value but it's trying to set one. The next BETA will sort this.
Status: Issue closed
|
enonic/starter-react4xp | 433786680 | Title: Use webjars instead of custom node module for react?
Question:
username_0: From the code.
```html
<!-- OPTION 1: You can get React and ReactDOM from CDN like this, if you want to skip the react4xp-runtime-externals step in build.gradle:
<script crossorigin src="https://unpkg.com/react@16/umd/react.production.min.js"></script>
<script crossorigin src="https://unpkg.com/react-dom@16/umd/react-dom.production.min.js"></script> -->
```
To me, this seems like the "externals" part was made to serve the vanilla require files?
If these can be served as plain assets, why not rather use precompiled webjars? Similar to how things are done in the Babel starter?
Answers:
username_0: We prefer serving assets directly rather than from CDNs, as this gives us full control and higher security. So this should be our default approach.
We should not "confuse" developers with CDNs etc. in the starter. This could optionally be part of the docs, imho.
username_1: Good point. I can look into that.
username_1: The externals is a transpiled asset that's needed either way - the Nashorn engine runs it as part of the SSR preparations (that's what makes it different from other dependency chunks: they are also run in the backend and frontend, but everything that's packed into externals becomes directly available to _other_ JS code in the client; the rest is encapsulated into the `React4xp` object).
username_1: Updated the examples and some of the comments and the guide. Better?
username_1: Related: https://github.com/enonic/lib-react4xp/issues/22 and https://github.com/enonic/lib-react4xp/issues/18
Status: Issue closed
|
saz/puppet-ssh | 132513347 | Title: Server configuration is suffixed with `[]`
Question:
username_0: I am using puppet 4.3.2 with Ruby 2.1.8 and the latest version (2.8.1) of the SSH module. With the following puppet manifest, I end up with an invalid `/etc/ssh/sshd_config` file:
```
class { 'ssh':
server_options => {
PasswordAuthentication => 'no',
PermitEmptyPasswords => 'no',
PermitRootLogin => 'no',
PubkeyAuthentication => 'yes',
},
client_options => {},
version => 'latest',
storeconfigs_enabled => false,
}
```
The resulting `sshd` configuration is:
```
# File is managed by Puppet
AcceptEnv LANG LC_*
ChallengeResponseAuthentication no
Passwordauthentication[] no
Permitemptypasswords[] no
Permitrootlogin[] no
PrintMotd no
Pubkeyauthentication[] yes
Subsystem sftp /usr/lib/openssh/sftp-server
UsePAM yes
X11Forwarding yes
```
This configuration is invalid (notice the trailing `[]`) and, as such, the `sshd` service fails to start.
Answers:
username_0: OK, it seems that the keys must be quoted:
```
class { 'ssh':
server_options => {
'PasswordAuthentication' => 'no',
'PermitEmptyPasswords' => 'no',
'PermitRootLogin' => 'no',
'PubkeyAuthentication' => 'yes',
},
client_options => {},
version => 'latest',
storeconfigs_enabled => false,
}
```
Status: Issue closed
username_1: This should be fixed in the current master |
hhurz/tableExport.jquery.plugin | 110518466 | Title: Possibility of excluding rows by class name?
Question:
username_0: I have a situation where I may not know the exact numeric indexes of the rows that I want to exclude from exporting, without calculating them manually before calling the export on the table.
Would it be beneficial to have an option to exclude certain rows if given a class name?
```javascript
$('#tableContent').tableExport({
type:'csv',
excludeColumnClass: "skipmeplz"
});
```
I'd be more than happy to open a pull request with this behavior if it would be favorable.
Answers:
username_1: Thanks for your suggestion. I think there is already a similar solution for what you want to do: ```data-tableexport-display="none"``` Use this data attribute for the rows you don't want to export. I added a working example on my [test page](https://github.com/username_1/tableExport.jquery.plugin/blob/master/test/index.html):
```html
<table id="hiddenrows" class="table-striped">
<thead>
<tr>
<th>C1</th>
<th>C2</th>
<th>C3</th>
</tr>
</thead>
<tbody>
<tr data-tableexport-display="none">
<td>A</td>
<td>B</td>
<td>C</td>
</tr>
<tr>
<td>D</td>
<td>E</td>
<td>F</td>
</tr>
<tr data-tableexport-display="none">
<td>G</td>
<td>H</td>
<td>I</td>
</tr>
</tbody>
</table>
```
username_0: Marvelous! Sorry for the needless issue. Consider this closed. :+1:
Status: Issue closed
|
mfogel/django-timezone-field | 315797220 | Title: Strange Offsets
Question:
username_0: <DstTzInfo 'Pacific/Auckland' LMT+11:39:00 STD>
```
Why is the timezone offset by 11 hours **and 39 minutes**? And why is it using Local Mean Time?
This bizarre behaviour is seen when accessing instances through the shell, and when running functional tests, but for some reason not in the live website itself. The website behaves as expected, but in tests this strange offset is seen, which makes them fail.
Other timezones all have arbitrary minute offsets too - Europe/London is 2 minutes for example.
I'm new to handling timezones so maybe I've missed something obvious, but I am very confused by this!
Answers:
username_0: Probably relevant: https://stackoverflow.com/questions/11473721/weird-timezone-issue-with-pytz
Status: Issue closed
username_0: Ok, turns out this isn't an issue in timezone_field, my bad. See [this answer](https://stackoverflow.com/questions/11442183/pytz-timezone-shows-weird-results-for-asia-calcutta/11442571#11442571) on SO.
I have resolved the issue I had by using the `localize` method of the timezone. |
dask/distributed | 211692282 | Title: Bandwidth during dataframe shuffles is low
Question:
username_0: I'm running the following experiment with partition frequencies of both '7d' and '30d'
```python
from time import time
from dask.distributed import Client, wait, progress
c = Client('localhost:8786')
c
import dask.dataframe as dd
df = dd.demo.make_timeseries('2000', '2001',
{'x': float, 'y': float, 'id': int},
freq='1s', partition_freq='7d', seed=2)
df = df.persist()
wait(df)
start = time()
df2 = df.set_index('id').persist()
wait(df2)
end = time()
print(end - start) # trying to optimize this
```
### Profiles
- [Seven day (smaller) partitions](https://cdn.rawgit.com/username_0/9193d8c6fbb73a7b3149b6f9cb2c4e25/raw/4bdda7b8c671eb60cfaad41282b228585a1e75df/dataframe-shuffle-7d.html)
- [Thirty day (larger) partitions](https://cdn.rawgit.com/username_0/9193d8c6fbb73a7b3149b6f9cb2c4e25/raw/4bdda7b8c671eb60cfaad41282b228585a1e75df/dataframe-shuffle-30d.html)
### Analysis
Bandwidths in the small case are poor: they're close to 15 MB/s per chunk. There is a lot of overlap though, so for the whole machine we tend to get somewhere between 50 MB/s and 70 MB/s. In the larger case they're better, around 120 MB/s on average, peaking at 250 MB/s. This is all on localhost though, so we should expect better.
Instantiating a new connection can take a surprising amount of time (100ms or so) with a wide spread, which might contribute a bit to the poor bandwidths.
My first guess on all of this is that our network is less responsive due to GIL-holding computations in Pandas (cc @username_1) but this is just a guess.
### Some things we can do
1. Reduce serialization cost of pandas dataframes (see #931 and https://issues.apache.org/jira/browse/ARROW-376) (cc @username_2)
2. Pool inter-worker connections (will try this next)
3. Investigate gil-free pandas operations (these computations get poor speedups in a thread pool, so this is presumably still a live issue (cc @username_1))
4. Byte-copy-free tornado IOStreams? (cc @pitrou)
The odds of any individual issue here resolving the problem is low so individually these are all probably pretty low priority.
Answers:
username_1: what kinds of ops are ``dd.set_index(...)`` actually doing?
username_0: Looking at the profiles may give some understanding here. In summary:
1. Splitting dataframes into groups
```python
def shuffle_group(df, col, stage, k, npartitions):
if col == '_partitions':
ind = df[col].values % npartitions
else:
ind = partitioning_index(df[col], npartitions)
c = ind // k ** stage % k
g = df.groupby(c)
return {i: g.get_group(i) if i in g.groups else df.head(0) for i in range(k)}
```
2. Splitting those dicts into different components (just getitem)
3. Concatenation along rows
4. Sorting along the index (towards the end)
Step 1 is the only thing that happens during communication heavy periods and lasts for any real duration.
username_1: 
(this is on a local machine). looks like most of the bottlenecks are actually the shuffle or am I reading this wrong?
username_1: 
username_0: I recommend starting a dask-worker with `--nprocs 8 --nthreads 1` to force inter-process communication.
username_0: A couple other anecdata points:
1. Replacing a numeric column with a text column doubles or triples the shuffle duration. To be clear we're not setting this text column as the index, it's just along for the ride.
2. Sorting a Spark DataFrame with the same data is around 30% faster in the numeric case and a few times faster in the text case (at least with my current configuration).
username_1: 
username_1: yep a bit different :>
username_0: You may also enjoy looking at the following diagnostic pages:
- http://localhost:8789/system
- http://localhost:8789/main.html
- http://localhost:8789/crossfilter
These give a good view of what a particular worker is doing
username_1: sorting is pretty much using ``np.argsort``, which is GIL-releasing
username_0: Any thoughts on how to improve the `shuffle_group` function? The goal here is to hash a particular column and then separate it into different sub-dataframe components. It might be simplified as follows
```python
def shuffle_group(df, col, stage, k, npartitions):
    from pandas.tools.hashing import hash_pandas_object  # moved to pandas.util in modern pandas
ind = hash_pandas_object(df[col]) % npartitions
c = ind // k ** stage % k
g = df.groupby(c)
return {i: g.get_group(i) if i in g.groups else df.head(0) for i in range(k)}
```
username_1: if this is my scheduler info, the workers how do I look at the worker?
```
{'address': 'tcp://127.0.0.1:8786',
'id': '98dfdc62-0023-11e7-a34b-6c4008997976',
'services': {'http': 53003},
'type': 'Scheduler',
'workers': {'tcp://127.0.0.1:52979': {'host': '127.0.0.1',
'last-seen': 1488554224.744344,
'last-task': 1488553901.274083,
'local_directory': '/<KEY>',
'memory_limit': 1288490188.0,
'name': 'tcp://127.0.0.1:52979',
'ncores': 1,
'services': {},
'time-delay': 0.006539106369018555},
```
username_0: If you have bokeh installed and start the worker with `dask-worker ...` it should come up automatically. You would have to be using a version of dask.distributed released sometime this year.
username_0: The worker bokeh servers aren't deployed if you're using the Python `LocalClient` or `Client()` convenience setup solutions.
username_1: ahh, ok
``shuffle_group(df, 'A', 10, 10, 10)``
what kind of parameters are you calling this with? what's the dtype (and len) of the column? (IOW is this the actual column from a pandas DataFrame (not dask) above?)
username_0: Yes, this function is called on each partition (pandas dataframe).
The parameter `k` controls how sharded the input dataframe becomes. It is between 1 and 100. In the example above it would have been around 20.
I don't think that the parameters stage and npartitions significantly affect performance, but npartitions is the number of partitions (1 to 10000?). Stage goes between 1 and something like `log(npartitions) / log(k)`
username_0: The column is the column on which we are sorting/shuffling. In the case above it is `'id'`
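To make the staged-shuffle arithmetic above concrete, here is a small illustrative sketch (not dask code) of how `ind // k ** stage % k` peels off successive base-`k` digits of each row's target partition:
```python
import numpy as np

npartitions, k = 100, 10
ind = np.array([0, 7, 42, 99])     # target partition for four rows

for stage in range(2):             # ~ log(npartitions) / log(k) stages
    digit = ind // k ** stage % k  # base-k digit used at this stage
    print(stage, digit)
# stage 0 groups rows by the ones digit, stage 1 by the tens digit,
# so after all stages each row has reached its target partition
```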
username_1: ```
In [1]: def shuffle_group(df, col, stage, k, npartitions):
...: from pandas.tools.hashing import hash_pandas_object
...: ind = hash_pandas_object(df[col]) % npartitions
...: c = ind // k ** stage % k
...: g = df.groupby(c)
...: return {i: g.get_group(i) if i in g.groups else df.head(0) for i in range(k)}
...:
In [2]: df = pd.DataFrame({'A' : np.random.randn(10000000)})
In [3]: %time _ = shuffle_group(df, 'A', 1, 10, 1)
CPU times: user 1.34 s, sys: 482 ms, total: 1.82 s
Wall time: 1.82 s
```
username_1: this is almost all hashing cost (well, some for the groupby indexer computation)
``c = ind // k ** stage % k`` is costly mainly because it's producing a bunch of intermediates
```
3546 function calls (3509 primitive calls) in 1.890 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
2 0.386 0.193 0.461 0.230 hashing.py:71(hash_array)
2 0.221 0.110 0.221 0.110 {built-in method _operator.mod}
2 0.174 0.087 0.174 0.087 {method 'get_labels' of 'pandas.hashtable.Int64HashTable' objects}
6 0.160 0.027 0.160 0.027 {method 'take' of 'numpy.ndarray' objects}
6 0.148 0.025 0.148 0.025 {method 'astype' of 'numpy.ndarray' objects}
2 0.135 0.068 0.135 0.068 {pandas.algos.groupsort_indexer}
1 0.103 0.103 0.103 0.103 {built-in method _operator.floordiv}
2 0.103 0.052 0.214 0.107 algorithms.py:166(safe_sort)
1 0.067 0.067 0.067 0.067 {pandas.algos.take_2d_axis0_float64_float64}
1 0.051 0.051 1.890 1.890 <string>:1(<module>)
4 0.049 0.012 0.049 0.012 {built-in method pandas.algos.ensure_int64}
1 0.037 0.037 0.039 0.039 indexing.py:1857(maybe_convert_indices)
25 0.029 0.001 0.029 0.001 {method 'reduce' of 'numpy.ufunc' objects}
```
username_0: Is that within scope to accelerate within Pandas?
username_1: ```
In [29]: def shuffle_group(df, col, stage, k, npartitions):
...: from pandas.tools.hashing import hash_pandas_object
...: ind = hash_pandas_object(df[col]) % npartitions
...: c = ind // k ** stage % k
...: g = df.groupby(c)
...: return {i: g.get_group(i) if i in g.groups else df.head(0) for i in range(k)}
...:
In [31]: %time _ = shuffle_group(df, 'A', 1, 10, 5)
CPU times: user 1.22 s, sys: 366 ms, total: 1.58 s
Wall time: 1.58 s
In [32]: def shuffle_group(df, col, stage, k, npartitions):
...: from pandas.tools.hashing import hash_pandas_object
...: ind = hash_pandas_object(df[col]) % npartitions
...: c = (ind // k ** stage % k).astype('category')
...: g = df.groupby(c)
...: return g.groups
...:
...:
...:
In [33]: %time _ = shuffle_group(df, 'A', 1, 10, 5)
CPU times: user 909 ms, sys: 309 ms, total: 1.22 s
Wall time: 1.22 s
```
you can get a little faster by doing this
username_0: The categories trick is welcome. Thanks. I'll look into whether Numba can (optionally) help with the arithmetic.
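For reference, a hedged sketch of what a Numba version of that arithmetic might look like (illustrative only; whether it beats the plain NumPy expression would need measuring):
```python
import numba
import numpy as np

@numba.njit(cache=True)
def stage_digit(ind, k, stage):
    # Fuses the floordiv and modulo into one pass, avoiding the
    # intermediate arrays the NumPy expression allocates
    out = np.empty_like(ind)
    d = k ** stage
    for i in range(ind.shape[0]):
        out[i] = ind[i] // d % k
    return out
```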
username_1: actually this is about 2x slower than it should be.
The problem is that hashing a Series *also* hashes the index, which I suspect is the same every time.
So I think you can simply pass ``index=False`` as well.
username_1: ```
In [6]: df = pd.DataFrame({'A' : np.random.randn(10000000)})
In [7]: def shuffle_group(df, col, stage, k, npartitions):
...: from pandas.tools.hashing import hash_pandas_object
...: ind = hash_pandas_object(df[col]) % npartitions
...: c = (ind // k ** stage % k).astype('category')
...: g = df.groupby(c)
...: return g.groups
...:
In [8]: %time _ = shuffle_group(df, 'A', 1, 10, 5)
CPU times: user 941 ms, sys: 302 ms, total: 1.24 s
Wall time: 1.24 s
In [9]: def shuffle_group(df, col, stage, k, npartitions):
...: from pandas.tools.hashing import hash_pandas_object
...: ind = hash_pandas_object(df[col], index=False) % npartitions
...: c = (ind // k ** stage % k).astype('category')
...: g = df.groupby(c)
...: return g.groups
...:
...:
In [10]: %time _ = shuffle_group(df, 'A', 1, 10, 5)
CPU times: user 738 ms, sys: 222 ms, total: 960 ms
Wall time: 960 ms
In [11]:
```
username_0: Ah, I'm actually already doing that in the real/slightly more complex code.
username_1: yeah I think there are some things we could do within hashing to avoid copying (which is now starting to dominate)
username_1: this is using ``numexpr`` internally (the eval) to eliminate some temporaries
```
In [16]: def shuffle_group(df, col, stage, k, npartitions):
...: from pandas.tools.hashing import hash_pandas_object
...: ind = hash_pandas_object(df[col], index=False)
...: c = (pd.eval('(ind % npartitions) / k ** stage % k')).astype('category')
...: g = df.groupby(c)
...: return g.groups
...:
...:
In [17]: %time _ = shuffle_group(df, 'A', 1, 10, 5)
CPU times: user 838 ms, sys: 277 ms, total: 1.12 s
Wall time: 827 ms
In [18]:
```
username_1: so you *might* also wish to do something completely different:
if you return ``g.indices`` rather than ``g.groups``, you avoid group construction altogether.
Then you can simply iterate and ``.take``; the ``indices`` are simply the indexers into the groups, which is all I think you want.
username_0: I like the idea of thinking about what we can do that is entirely different. However the desired end result is a bunch (roughly 20) of separate small dataframes that we can send to different machines. Indexes alone are not sufficient.
username_1: @username_0 no my point is this. This routine simply computes the indexers. I *think* AFAIR you have a different part where you actually construct the separate frames. This would allow you to separate these things.
Not sure if this is completely useful, but it *might* be.
username_0: Maybe. It all happens sequentially on the same machine so I don't think that there would be much benefit here.
It looks like `pd.eval`/numexpr doesn't like floor division:
```python
In [120]: %time _ = pd.eval('(ind % npartitions) // k ** stage % k')
TypeError: unsupported operand type(s) for //: 'OpNode' and 'OpNode'
```
The category trick shaves off 10% from shuffle time. If our data is purely numeric and partition size is large-ish then dask.dataframe shuffles narrowly beat spark dataframe shuffles. If any of those ifs are broken then Spark dataframes pull ahead.
username_0: Rather than use numexpr or numba I'm just using numpy more intelligently. This is no longer a significant bottleneck. The dict comprehension is now probably the worst.
I don't suppose there is some cython-optimized way to shard a dataframe along a pre-computed column? Something like cut?
username_1: can you show an example of what you are looking for?
username_0: This is what I do now. Line seven is slow.
```python
In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [10, 20, 30, 40]})
In [3]: import numpy as np
In [4]: groups = np.array([0, 2, 1, 1])
In [5]: df
Out[5]:
x y
0 1 10
1 2 20
2 3 30
3 4 40
In [6]: g = df.groupby(groups)
In [7]: out = [g.get_group(i) for i in range(3)]
In [8]: out[0]
Out[8]:
x y
0 1 10
In [9]: out[1]
Out[9]:
x y
2 3 30
3 4 40
In [10]: out[2]
Out[10]:
x y
1 2 20
```
username_1: the overhead associated with that is really the construction of each frame. e.g. effectively this does
``{i:df.take(indexer) for i, indexer in enumerate(g.indices.values())}``
It's about even between the costs of: ``.take``, construction of the result frames, and the grouping
```
In [23]: df = DataFrame({'A':np.random.randn(N), 'B':np.random.randint(0,ngroups,size=N)})
In [24]: %%time
...: g = df.groupby('B')
...: _={i:g.get_group(i) for i in range(ngroups)}
...:
...:
CPU times: user 714 ms, sys: 102 ms, total: 816 ms
Wall time: 815 ms
```
username_0: I wonder if it might be faster to make a copy, shuffle the data around in memory using something like set_index, and then shard out the dataframes from that already allocated (and now properly aligned) data.
username_0: Naive timings say no:
```python
In [51]: df = pd.DataFrame({'A' : np.random.randn(1000000)})
In [52]: ind = hash_pandas_object(df['A'], index=False) % 50
In [53]: %time g = df.groupby(ind); groups = {i: g.get_group(i) for i in range(50)}
CPU times: user 76 ms, sys: 0 ns, total: 76 ms
Wall time: 73.7 ms
In [54]: %time df['ind'] = ind; df2 = df.sort_values('ind'); del df['ind']
CPU times: user 128 ms, sys: 100 ms, total: 228 ms
Wall time: 230 ms
```
username_2: @username_0 since you are dealing with fixed cardinality integers you can do O(n) counting sort and get much better performance. Remember also that adding and removing DataFrame incurs internal BlockManager consolidation overhead
```
In [24]: from pandas.algos import groupsort_indexer
In [25]: ind64 = ind.values.view(np.int64)
In [26]: %timeit indexer, _ = groupsort_indexer(ind64, 50); df2 = df.take(indexer)
10 loops, best of 3: 31.2 ms per loop
In [27]: %timeit g = df.groupby(ind); groups = {i: g.get_group(i) for i in range(50)}
10 loops, best of 3: 78.7 ms per loop
```
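For anyone without access to that internal helper, a rough sketch of the same idea using only public NumPy/pandas primitives (names illustrative; assumes `ind64` holds non-negative group ids):
```python
import numpy as np

def split_by_group(df, ind64, ngroups):
    # One stable reorder, then contiguous slices per group
    # (groupsort_indexer does this in O(n); argsort is O(n log n))
    order = np.argsort(ind64, kind='stable')
    sorted_df = df.take(order)
    counts = np.bincount(ind64, minlength=ngroups)
    bounds = np.concatenate(([0], np.cumsum(counts)))
    return {i: sorted_df.iloc[bounds[i]:bounds[i + 1]]
            for i in range(ngroups)}
```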
username_0: That's a nice boost. Thank you @username_2 .
Current solution here: https://github.com/dask/dask/pull/2032/files
Further optimizations welcome, but at least on my current benchmarks this has moved down out of primary-bottleneck status.
username_3: Was curious how things had progressed since this issue was opened. Unfortunately some things like RawGit no longer exist, so I don't have a clear idea where things were.
In any event tidied things up a bit and put them in [this Gist]( https://gist.github.com/username_3/d0a12143c551c5e935403534f6150506 ). Here's [the script I used]( https://gist.github.com/username_3/d0a12143c551c5e935403534f6150506#file-shuffle_script-py ) and [the performance report]( https://gistcdn.githack.com/username_3/d0a12143c551c5e935403534f6150506/raw/4fd93f7e2392163093d5cdafa0af23d96d217ac5/dask-report.html ).
Would be curious to know whether people think this is still slow and if so where they think it can be improved. Of course things have progressed in the past few years so the same improvements may or may not make sense.
username_0: This issue has been silent for years. Closing.
@username_3 I recommend raw.githack.com today
https://gist.githack.com/username_3/d0a12143c551c5e935403534f6150506/raw/4fd93f7e2392163093d5cdafa0af23d96d217ac5/dask-report.html
Status: Issue closed
username_0: Thank you for running the experiment and posting results. That will be helpful the next time someone takes a look at this. |
cinder92/react-native-get-music-files | 330700507 | Title: App Crashes on devices with versions below Marshmallow
Question:
username_0: Hi @username_1 . We experienced an app crash on Android Kitkat (API 19) inside the method: getAll().
We couldn't get the specific line on the stacktrace but we noticed that the error happens inside the new Thread() block. We have managed to bypass the issue by removing the block of code inside the new Thread() block and just let the ContentResolver be executed in the Main Thread.
https://github.com/username_1/react-native-get-music-files/blob/bd9c9b445a10d311b26bae7207aa2b3b9b501e9c/android/src/main/java/com/reactlibrary/RNReactNativeGetMusicFilesModule.java#L124
When we did that, the App crash ceased to exist. Do you know why this happens? Also, I'm not sure about this, but is it safe to execute a ContentResolver inside a background thread? I think this causes the app crash. Note: We couldn't get the specific line (stacktrace) because the block was inside a thread. We noticed this error showing up though:
E/MediaPlayer-JNI: QCMediaPlayer mediaplayer NOT present
What do you think about this issue?
Answers:
username_1: Hello, thanks for submitting the issue. Yes, it's better to run that code in a separate thread to get the song info faster and prevent blocking the UI. I will test on a device with KitKat and check what's happening when I have time.
username_1: fixed on 2.1
Status: Issue closed
|
AngelMariages/RodaliesWidget | 304430498 | Title: ActivityThread.java
Question:
username_0: #### in android.app.ActivityThread.handleCreateService
* Number of crashes: 1
* Impacted devices: 1
There's a lot more information about this crash on crashlytics.com:
[https://fabric.io/angel8/android/apps/org.angelmariages.rodalieswidget/issues/5aa6a25b8cb3c2fa63e1dabc?utm_medium=service_hooks-github&utm_source=issue_impact](https://fabric.io/angel8/android/apps/org.angelmariages.rodalieswidget/issues/5aa6a25b8cb3c2fa63e1dabc?utm_medium=service_hooks-github&utm_source=issue_impact)<issue_closed>
Status: Issue closed |
loafoe/hubot-matteruser | 207511879 | Title: Mattermost moving to API version 4
Question:
username_0: Hey @username_1!
Cross-posting my note in https://github.com/username_1/mattermost-client/issues/31 over here too. Big thanks for sharing this project!
I wanted to let you know about our plans moving to API v4:
To make the Mattermost API web service easier to use and to offer more powerful options for these integrations, Mattermost will be moving to a [new API version](https://docs.google.com/document/d/197JwEBMnK8okFilTfGSpbsrXPY5RZOJ4gG2DXwcbwYE/edit) soon. Highlights include:
* Fully documented API endpoints
* More in-depth access to server functionality
* Wider use of established HTTP verbs
* Consistent endpoint structures
* A new and improved Go driver
We plan to release API version 4 on March 16th, with Mattermost server 3.7. While the current API version 3 will be supported until September 16th, we recommend you begin using API version 4 soon after its release.
#### Contributing
API version 4 is an active and on-going project. If you're interested in helping contribute, [please join our Mattermost community instance](https://pre-release.mattermost.com/signup_user_complete/?id=f1924a8db44ff3bb41c96424cdc20676) and the [APIv4 channel](https://pre-release.mattermost.com/core/channels/apiv4).
We've prepared a [contribution process for APIv4](https://docs.mattermost.com/developer/api4.html) and a [progress tracker for new APIv4 endpoints](https://docs.google.com/spreadsheets/d/1nPoLgwh_9zRFECpqRUZAKIWihCmX27pnDtFGLtG_WnY/edit#gid=0) to help you get started.
We're also open for suggestions on adding new API endpoints to help with your integration.
Answers:
username_1: Thanks for the heads up @username_0 . Preparing a branch to adopt v4 API.
Status: Issue closed
username_1: Switched in `v3.10.0` to API v4 |
reg-viz/reg-suit | 334780146 | Title: [FEATURE REQUEST] plugin for GCP (cloud storage)
Question:
username_0: Hi, Thank you for creating great tools :dog: !!
I'd like to use GCP (cloud storage) instead of AWS S3 when using `reg-suit`.
Is there a possibility to develop a plug-in for `cloud storage` in the future :smiley: ??
Answers:
username_1: It looks cool! And I've planned a GCS feature (FYI see #1).
But I'm not good with the GCS SDK...
If you would not mind, please show me the following code snippets?
- How to push image files to GCS with Node.js
- How to tell GCS credentials information to users’ CI service safely. ( env value or something like .aws/credential file or...
username_0: @username_1 Thanks for the response!
Just like you, I'm not familiar with GCS (and GCP) either :sob:
However, I actually want to use `reg-suit` using GCS.
Therefore, I will try to put together some code snippets, so I'd be glad if you could wait for a while :wink:
username_1: @username_0 I've just published https://www.npmjs.com/package/reg-publish-gcs-plugin. Pls check it out and send me some FBs 👋 .
Status: Issue closed
username_0: Thank you very much!! So fast work... :joy:
When actually using it, I will give feedback :pray: |
markruys/gw2pvo | 1008787127 | Title: gw2pvo.service error status=1/FAILURE
Question:
username_0: Hi,
I am at a fairly beginner level, but I tried to follow the instructions to run the code on an RP4.
When I run `sudo journalctl -u gw2pvo -f`, I get this error.

The config file information such as ID & Pass for SEMS portal is definitely correct.
When I run `gw2pvo --config gw2pvo.cfg --log debug`, there is an error in the middle of the debug process,


However, in the end, the data is uploaded to PVoutput.
What would be the issue here?
I'd love to upload the data to PVoutput automatically using this code..
Thank you!
Chris.
Answers:
username_0: Just to add to my case: when I do `sudo journalctl -u gw2pvo -f`, I get the following message.

I found a closed case where a fresh reset of the RP would solve the issue, but my device is sitting overseas and I will not be able to do a complete reset at the moment.
Is there any way we can fix the issue without a complete reset?
Kind regards,
Chris. |
micromdm/scep | 249163208 | Title: Refactor server & client out of this repository
Question:
username_0: Thanks for the great work on this package! I'd like to suggest either moving the `scep` package to its own repo or moving the server, client and cmd packages out of this one. The main motivations for this are:
* reduce code footprint for code & security audit purposes
* enforce strong separation of concerns between `scep` package primitives and their consumers |
epam/ketcher | 1123799688 | Title: use iframe embed standalone version ketcher init error
Question:
username_0: use iframe embed standalone version ketcher init error
```
var ketcherFrame = document.getElementById('ketcher');
this.ketcher = ketcherFrame.contentWindow.ketcher;
console.log(this.ketcher);
```
and this.ketcher will be undefined,
so I must use setTimeout to delay for 1 second before running the code in order to get this.ketcher successfully.
I want to ask how I can listen for the loading event of Ketcher under the iframe tag and execute some initialization code after loading is complete.
Answers:
username_1: Hi @username_0 ,
Generally speaking, it is not a Ketcher-related problem. 'Example' was created for demo purposes only. For real usage we assume using the 'ketcher-core' and 'ketcher-react' packages.
Nevertheless, we will include required changes into Ketcher example in order to allow users to use it as is.
By the way as a workaround you could try following approach: https://stackoverflow.com/questions/9249680/how-to-check-if-iframe-is-loaded-or-it-has-a-content.
Best Regards,
Andrei
Status: Issue closed
username_0: thanks,I thought the ketcher had a callback event for loading completion, sorry
username_1: Ketcher-react provides the ability to make some changes after initialization by passing the `onInit` property to the `<Editor />` component (please look at https://github.com/epam/ketcher/blob/c9b8109b72448a7de4af20f2f7ad90f07c982a61/example/src/App.tsx#L58).
We plan to extend the 'Example' functionality to allow using it OOTB via the postMessage API (https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage).
More likely this functionality will be included in the next release, 2.5
username_2: <img width="135" alt="Screenshot 2022-02-16 at 11 57 03" src="https://user-images.githubusercontent.com/97884482/154230691-64135b97-fadd-45ad-af54-92598f5aa38c.png">
In the iframe it is still undefined
Galina8181/qajava-module-29-main | 923546179 | Title: Добавьте @BeforeTest и @AfterTest
Question:
username_0: https://github.com/Galina8181/qajava-module-29-main/blob/f2e9ef10b8cab58b23e58793f189b56f92004438/qajava-module-29-main/qajava-module-29-main/maven-testng-example/src/test/java/ru/skillfactory/qajava/PersonTest.java#L9
Пример можете посмотреть тут https://github.com/Danny-sth/qajava-module-29/blob/main/src/test/java/PersonTest.java |
greenplum-db/gpdb | 247087645 | Title: centos vagrant fails during configure
Question:
username_0: vagrant up gpdb or gpdb_without_orca
configure --prefix=/usr/local/gpdb
fails with
==> gpdb_without_gporca: ./configure: line 11942: #include: command not found
https://github.com/greenplum-db/gpdb/blob/master/configure#L11942
Answers:
username_1: If you apply the patch in https://github.com/greenplum-db/gpdb/pull/2659, does that make it work?
Status: Issue closed
username_0: @username_1 sadly, no. But at least I no longer have the '#include: command not found' error.
Thanks! |
MicrosoftDocs/azure-docs | 457138606 | Title: Problem with --subscription parameter in AKS Virtual-Kubelet
Question:
username_0: Hello,
It seems that a problem happens using the --subscription parameter to find the AKS Cluster in another subscription (Also, this parameter doesn't appear in the docs but shows on az aks install-connector -h).
The issue is: if **my cluster is in subscription ABC** and my current az context is set to subscription **XYZ**, and I run the command:
```
az aks install-connector --resource-group my-aks-rg --name my-aks --connector-name my-connector --subscription ABC --os-type Both
```
The deployment runs just fine, everything get created on the cluster, but in the pod configuration it shows:
```
- name: AZURE_SUBSCRIPTION_ID
value: XYZ
```
Because of this, when creating a deployment on this virtual node I get the error:
```
api call to https://management.azure.com/subscriptions/XYZ/resourceGroups/MC_my-cluster-rg/providers/Microsoft.ContainerInstance/containerGroups/default-data-trust-telefone-8b56bdff5-xkmnq?api-version=2018-10-01: got HTTP response status code 403 error code "AuthorizationFailed": The client '***' with object id '***' does not have authorization to perform action 'Microsoft.ContainerInstance/containerGroups/write' over scope '/subscriptions/XYZ/resourceGroups/MC_my-cluster-rg/providers/Microsoft.ContainerInstance/containerGroups/default-data-trust-telefone-8b56bdff5-xkmnq'.
```
Expected Behavior:
When providing the --subscription flag, the value of the environment variable AZURE_SUBSCRIPTION_ID should be the same, and not use the subscription from the az context.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: acfb234b-c0db-3e6a-7938-cab614d530c0
* Version Independent ID: 2adff671-6ccb-d84a-f689-8b0d72b1577c
* Content: [Run Virtual Kubelet in an Azure Kubernetes Service (AKS) cluster](https://docs.microsoft.com/en-us/azure/aks/virtual-kubelet#feedback)
* Content Source: [articles/aks/virtual-kubelet.md](https://github.com/Microsoft/azure-docs/blob/master/articles/aks/virtual-kubelet.md)
* Service: **container-service**
* GitHub Login: @iainfoulds
* Microsoft Alias: **iainfou**
Answers:
username_1: @username_0 Thanks for the question. We are currently investigating and will update you shortly.
username_2: @username_0 Can you provide the version of Azure cli which you are using
username_0: hi @username_2,
I'm using:
azure-cli (2.0.51)
aks (1.12.7)
username_2: @username_0 I am able to reproduce the issue. My cli version is 2.0.65.
The current workaround is to edit the subscription id of the deployment if the connector installation is already done.
If the installation is not done, then set the subscription as the default one and then execute the install-connector command.
We can also remove the connector and deploy again.
This command collects the details and executes helm install to deploy the connector. That subscription id is passed during the helm install command.
Chart template is [here](https://github.com/virtual-kubelet/virtual-kubelet/blob/master/charts/virtual-kubelet-for-aks/templates/deployment.yaml#L24).
I think its the CLI's responsibility to pass the subscription id from the command(if given as an argument)
@iainfoulds Please provide your comments. Can we raise an issue with Azure cli?
username_0: @username_2 I'll follow these steps, thanks.
username_2: @username_0 I will close this issue here. I have created an issue in Azure cli on your behalf.
Status: Issue closed
username_2: This issue is fixed now |
TeamCOMPAS/COMPAS | 1127052560 | Title: Make sure online documentation is up-to-date after sorting runSubmit / yaml upgrade.
Question:
username_0: Make sure online documentation is up-to-date after sorting out the following issues:
- [ ] #740
- [ ] #741
- [ ] #750
This implies removing all the notes suggesting any of the previous issues is pending or that the runSubmit / yaml is not ready.
Answers:
username_0: I suggest reopening, addressing, and then checking the tick mark and closing for real.
username_1: Make sure online documentation is up-to-date after sorting out the following issues:
- [x] #740
- [ ] #741
- [x] #750
This implies removing all the notes suggesting any of the previous issues is pending or that the runSubmit / yaml is not ready.
username_1: Agreed, I think we still need to update the documentation before we can close this. |
esjeon/krohnkite | 876468978 | Title: Manual tiling feature request.
Question:
username_0: Manual tiling is something you can get on most desktops these days, but not KDE. You either get no tiling, or autotiling like with this package.
I'm wondering if this package could fill the gap by providing a manual tiling option for users who just want to snap a couple of windows side by side like on Windows 10.
L3NNYY/AppGame | 494554941 | Title: Polished pause menu
Question:
username_0: Get a better pause menu layout and formatting (this can be done after we finalise game theme/format).
 |
martinm/twitter-like-bot | 362865090 | Title: Twitter don't allow auto liking...
Question:
username_0: Have you seen this in the developer guidelines? Have you had accounts blocked due to auto liking?
<img width="615" alt="screen shot 2018-09-22 at 16 16 46" src="https://user-images.githubusercontent.com/8058892/45918717-2d9c2080-be83-11e8-8b08-310dd712cc0d.png">
Status: Issue closed
Answers:
username_1: Twitter changed all the rules recently; this repo was created a long time ago. I do not plan to update anything here.
Status: Issue closed
|
GoogleCloudPlatform/java-docs-samples | 292241233 | Title: Wrong algorithm name using the KeyFactory
Question:
username_0: java-docs-samples\iot\api-client\manager\src\main\java\com\example\cloud\iot\examples\MqttExample.java
from: KeyFactory kf = KeyFactory.getInstance("ES256");
to: KeyFactory kf = KeyFactory.getInstance("EC");
EC is the right name for the Elliptic Curve algorithm.
https://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#KeyFactory
Answers:
username_1: @username_2 Can you comment?
username_2: I believe the JWT-based ES256 is correct; I may actually have the readme/documentation wrong and should correct that.
username_2: That said, the tests still pass and your pointer to the docs corroborates your point. I'll look into why the tests pass in both cases as we're testing adding EC and RSA devices.
username_2: Tested end-to-end and working for me, approving.
Status: Issue closed
username_3: This appears to have been resolved, closing the issue. |
mathix420/notion-charts | 666522895 | Title: Can't deploy on Vercel on the step by step instructions
Question:
username_0: I get this error

I am putting Notion token

Can you help me out? thanks!!!
Answers:
username_1: Hi !
As far as I can see, Vercel has again changed its interface. The variable `@token_v2` is a secret, not an environment variable. If you know how to install the Vercel CLI, you can add the `token_v2` secret: https://vercel.com/docs/cli#commands/secrets
username_1: The issue is fixed by removing the use of `@token_v2` secrets c<KEY>.
I really think it is a bad thing that Vercel no longer asks for missing secrets in the UI ...
(@username_0 just redeploy a new project on Vercel by clicking the Deploy button in [the readme](https://github.com/username_1/notion-charts#usage))
Status: Issue closed
|
ikedaosushi/tech-news | 347036151 | Title: Chaos Engineering やっていく宣言
Question:
username_0: Declaring our commitment to Chaos Engineering<br>
It has been about two years since Netflix published its Chaos Engineering paper. Cookpad recently decided to adopt Chaos Engineering. This article introduces the background behind that decision.<br>
https://ift.tt/2LOmogM |
xSAVIKx/AndroidScreencast | 165092100 | Title: Get more FPS
Question:
username_0: I have used several of these apps and found that they lower screen quality when the screen is moving.
I think this app could use a way to get more FPS.
Answers:
username_1: @username_0 I have no idea right now how something like that could be done, since there is no client on the smartphone.
The problem here is that I use DDMLIB, which abuses ADB to fetch images from the smartphone, and that's the bottleneck here.
username_2: This seems to be a duplicate of #1.
Status: Issue closed
|
questdb/questdb | 628931183 | Title: dynamic timestamp is not documented
Question:
username_0: ```sql
select * from tab timestamp(x)
```
The `timestamp` clause is not documented.
- when is it needed?
- what is the order of its execution?
- what are the parameters, their types and permitted values?
Answers:
username_1: This has been added to the documentation and will come with the next release.
Status: Issue closed
|
sindresorhus/electron-dl | 223556210 | Title: Use mime type checking for file extension when no extension can be inferred
Question:
username_0: Using `item.getMimeType()`.
Answers:
username_1: What if it's a binary with the MIME type `application/octet-stream`?
username_0: Then we don't do anything. We should only do it for clear matches.
username_1: There's a lot of ambiguous ones (see [ext-list](https://github.com/username_1/ext-list/blob/master/ext-list.json)), but we could remove those that there are more than one of to get a clear match I think.
username_0: Sounds good.
Status: Issue closed
username_1: Fixed in https://github.com/username_0/electron-dl/commit/251c7e29<PASSWORD>0<PASSWORD>c601c3c<PASSWORD>1d769a<PASSWORD>. |
electron/electron | 251032058 | Title: Screen Capture Bug - Blank browser screen in IE 11
Question:
username_0: <!--
Thanks for opening an issue! A few things to keep in mind:
- The issue tracker is only for bugs and feature requests.
- Before reporting a bug, please try reproducing your issue against
the latest version of Electron.
- If you need general advice, join our Slack: http://atom-slack.herokuapp.com
-->
* Electron version: 1.7.5
* Operating system: Windows 10
### Expected behavior
A user selects the IE browser as the screen capture source and begins recording. The recording captures all content in the browser as the user navigates around a web page or web app.
### Actual behavior
After capturing a screen recording in IE 11, the video does not display any of the content in the IE browser except the URL and the tabs (pics below)
### How to reproduce
1. Select Internet Explorer window as the screen capture source
2. Begin recording and navigate to a new tab
3. Open up a website and scroll to bottom of page
4. Stop recording
5. View playback
Example of blank browser on playback:

Example of blank browser in _capture source_ screen:
<img width="962" alt="screen shot 2017-08-17 at 12 18 26 pm" src="https://user-images.githubusercontent.com/7028487/29427167-426cb44e-8346-11e7-8a91-2c051ea4c957.png">
Answers:
username_1: @username_0 colleague here... You can use this app for reference: https://github.com/username_1/electron-sample-apps/tree/master/desktop-capture you get the empty window behavior when launching in electron 1.7.5.
username_2: We are no longer implementing bugfixes for versions of Electron <= `1.7.x`, so i'm going to close this issue but if it is still persisting in more recent versions of Electron we can certainly reopen it!
Status: Issue closed
|
vim/vim | 453968004 | Title: So many issues and pull requests with only one contributor?
Question:
username_0: _Instructions: Replace the template text and remove irrelevant text (including this line)_
**Describe the bug**
A clear and concise description of what the bug is.
(Issues related to the runtime files should be reported to their maintainer, check the file header.)
**To Reproduce**
Detailed steps to reproduce the behavior:
1. Run `vim --clean` (or `gvim --clean`, etc.)
2. Edit `filename`
3. Type '....'
4. Describe the error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, copy/paste the text or add screenshots to help explain your problem.
**Environment (please complete the following information):**
- Vim version [e.g. 8.1.1234] (Or paste the result of `vim --version`.)
- OS: [e.g. Ubuntu 18.04, Windows 10 1809, macOS 10.14]
- Terminal: [e.g. GNOME Terminal, mintty, iTerm2, tmux, GNU screen] (Use GUI if you use the GUI.)
**Additional context**
Add any other context about the problem here.
Answers:
username_1: When a project pre-dates github, don't be surprised to see a non-github workflow
username_2: First, please use the issue template properly. Replace the template text and remove irrelevant text as the instruction says.
If you talking about that you can see only one person (Bram) in https://github.com/vim/vim/graphs/contributors, check each commit. Each contributor name is written in each commit log.
There are several hundreds of contributors in Vim.
Anyway, this is not a place to ask a question and this is not an issue of Vim. So closing.
Status: Issue closed
username_2: See also: #1554
username_3: Seems more like Bram enjoys taking all the credit.
Vim has been on GitHub for a while now, and there has been plenty of opportunity for the right thing to be done.
username_3: Note, I said _seems_. That's important.
Also, people on GitHub may enjoy their stats being visible in various GitHub pages, like the contributor graph, so that people may click on them and see their profiles, etc.
username_4: Yes, people may enjoy it, but for me, I contribute because I want to make Vim better, not to have a nice contribution graph. Also I think it is more important that the main developer of Vim can concentrate on enhancing and improving Vim instead of having to change a workflow that has proven to work well for the past 30 years. Thanks for understanding.
username_5: I would not doubt that;
it's just that some people thought that joining the discussion or giving a patch/advice was a help too, I think.
Sometimes they were treated as 'asking' (though some were), when some were actually 'giving' reports/improvements to Vim.
As for the 'workflow', making some adjustments to fit might be necessary too,
e.g. recording the committer name in a formal/fixed format in the commit messages; then a simple shell script could generate or list it somewhere, and everyone would be happy!
E.g. some runtime file owners have perhaps been gone for a long time / some classic plugins have not been kept up to date for a long time; setting up a heartbeat check to confirm that they are still active (sorry, but that's how it is) or willing might be necessary too.
username_5: https://github.com/vim/vim/issues/7624
username_6: No, I don't understand.
Any software developer doing things exactly the same as 10 years ago is a **bad** software developer, let alone 30 years ago.
It literally takes less than 30 minutes to learn git. That's no excuse.
30 minutes for one person so that thousands of others don't have to jump through hoops is a no-brainer.
username_7: That's not true, either. While basic git operations are reasonably
straightforward, anything beyond the basics is horribly obscure and
inconsistent. To paraphrase <NAME>'s comment about regular
expressions:
Some people, when confronted with a problem, think "I know,
I'll use git." Now they have two problems.
And of course: https://xkcd.com/1597/
Regards,
Gary
username_7: 吕海涛 : When looking for information about Vim, the place to look is
always the built-in help. Not Google, not the manpages, not the github
metadata, just the help. In this case: :help credits
Best regards,
Tony. |
spyder-ide/spyder-notebook | 775076805 | Title: install Spyder-notebook for Spyder standalone application
Question:
username_0: Hi, how do I install Spyder-notebook for the Spyder standalone application?
thx
Answers:
username_1: @dalthviz Is it possible to install plugins like spyder-notebook if Spyder is installed with the stand-alone installer?
username_2: My idea is to add spyder-terminal and/or spyder-notebook to the installers after they've been migrated to the new API and so better integrated with Spyder.
For now they have too many UX issues (especially spyder-notebook), so I think it's better to wait until Spyder 5.
username_1: I agree, at least as far as spyder-notebook is concerned. Perhaps we need to wait even longer. The plugin looks nice, but when using it seriously too many rough edges become apparent. It still needs a bit more work.
@username_2 Is the new API starting to stabilize sufficiently that I can start migrating the plugin to it?
username_2: Not yet. I'd say mid February is a good time to start considering a migration.
username_3: I'm on Windows, with standalone Spyder 5.0.3 and was wondering the same thing. Has it progressed in the last months since?
username_2: We first need to port Spyder-notebook to the Spyder 5 API (which is planned for this month). Then we could create installers on this repo with Spyder-notebook included, but that's a long term goal because right now we're quite busy with other things, sorry.
username_4: Even with spyder-notebook added to the spyder installer, there's still the question of how to install another plugin to standalone spyder. I understand it's not supported yet. Is there a way to install plugins manually for standalone spyder in windows? |
SDWebImage/SDWebImage | 752798490 | Title: error: can't allocate region
Question:
username_0: ### New Issue Checklist
* [x] I have read and understood the [CONTRIBUTING guide](https://github.com/rs/SDWebImage/blob/master/.github/CONTRIBUTING.md)
* [x] I have read the [Documentation](http://cocoadocs.org/docsets/SDWebImage/)
* [x] I have searched for a similar issue in the [project](https://github.com/rs/SDWebImage/issues) and found none
### Issue Info
When I have a lot of large images loaded, I get this crash information
```
malloc: *** mach_vm_map(size=48775168) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
```
Info | Value |
-------------------------|-------------------------------------|
Platform Name | e.g. ios
Platform Version | e.g. 14.0.1
SDWebImage Version | e.g. 5.9.5
Integration Method | e.g. cocoapods
Xcode Version | e.g. Xcode 12.2
Repro rate | e.g. all the time (100%)
Repro with our demo prj | e.g. no
Demo project link | e.g. no
### Issue Description and Steps
Answers:
username_1: OOM issue. Need more context.
username_0: @username_1 I set maxMemoryCost & maxDiskSize, and this problem does not appear in our new version of the app at present. I think this can be closed for now. thx
Status: Issue closed
|
testcontainers/testcontainers-java | 913814634 | Title: disabledWithoutDocker annotation option (for JUnit 5) isn't documented
Question:
username_0: While this is documented at the API level, one would have to know that the `disabledWithoutDocker` parameter was available in order to see it.
Happy to add this to the documentation, but wanted to make sure that was OK.
Seems like `/docs/test_framework_integration/junit_5.md` is the appropriate place for this. |
HubSpot/draft-convert | 329997415 | Title: How to convert Entities
Question:
username_0: How do I convert entities? If I create a LINK entity using the editor and save the exported html, when converting back to the editorState the LINK entities are not rendered. Seems like maybe I'm missing something in `htmlToBlock` to handle entities.
Status: Issue closed
Answers:
username_0: Sorry there was an error in my code. Was using the wrong entity type name. |
BlakeBr0/MysticalAgriculture | 230920592 | Title: AgriCraft Compatibility
Question:
username_0: Hi, I can't seem to get AgriCraft and MysticalAgriculture to work properly together. I was first having an issue with all of the crops randomly spawning inside of cropsticks. After I changed all of the Mystical Ag crops' spawn chance to 0 in the json, I went back into the game. However, regular crop mutations with AgriCraft are not working with vanilla plants. I've only tried it with wheat seeds. If I could get help on this issue, that would be much appreciated.
Status: Issue closed
Answers:
username_1: I don't make agricraft so I cant help you, sorry. |
playgameservices/play-games-plugin-for-unity | 162872504 | Title: Strange behaviour of Cloud Save and Gamer IDs (?)
Question:
username_0: I've been stuck for 2 days with the cloud save feature, as it would always fail to store data due to "INTERNAL_ERROR", even though I had tried to delete existing data from Google Drive, unlink it, and sign out and in again. As I have set a Gamer ID for my account, I eventually figured out that I might want to check Google Play Games for updates and review its settings. It was fully updated, so I launched it and a dialog showed up telling me that my Gamer ID will be used to sign in to games (which I was already told while setting it up a few weeks ago). Then I went to settings and enabled automatic sign-in. Ever since, it works fine, no matter if I turn the automatic sign-in on or off. I suspect that all I needed was to run the app.
Now, the obvious question: if there is any init required, shouldn't it be handled by the library itself internally? How am I supposed to explain this process to an inexperienced user without annoying them out of the game? If it took me 2 days to figure this out, I suppose that normal users would never manage it.
Or did I misunderstood something? Thank you! |
apollographql/apollo-tooling | 907660817 | Title: Extension cannot be used it type is set to module
Question:
username_0: **Intended outcome:**
The extensions should work and load the local schema.
**Actual outcome:**
Cannot load schema.
```
[ERROR] A config file failed to load with options: {"configPath":"/***/***/Desktop/test","requireConfig":true}.
The error was: Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: /***/***/Desktop/test/apollo.config.js
require() of ES modules is not supported.
require() of /***/***/Desktop/test/apollo.config.js from /***/***/.vscode/extensions/apollographql.vscode-apollo-1.19.3/node_modules/import-fresh/index.js is an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which defines all .js files in that package scope as ES modules.
Instead rename apollo.config.js to end in .cjs, change the requiring code to use import(), or remove "type": "module" from /***/***/Desktop/test/package.json.
at Object.loadConfig (/***/***/.vscode/extensions/apollographql.vscode-apollo-1.19.3/node_modules/apollo-language-server/lib/config/loadConfig.js:42:34)
at async GraphQLWorkspace.reloadProjectForConfig (/***/***/.vscode/extensions/apollographql.vscode-apollo-1.19.3/node_modules/apollo-language-server/lib/workspace.js:107:22)
at /***/***/.vscode/extensions/apollographql.vscode-apollo-1.19.3/src/extension.ts:77:17
at handleNotification (/***/***/.vscode/extensions/apollographql.vscode-apollo-1.19.3/node_modules/vscode-jsonrpc/lib/main.js:489:43)
at processMessageQueue (/***/***/.vscode/extensions/apollographql.vscode-apollo-1.19.3/node_modules/vscode-jsonrpc/lib/main.js:260:17)
at Immediate._onImmediate (/***/***/.vscode/extensions/apollographql.vscode-apollo-1.19.3/node_modules/vscode-jsonrpc/lib/main.js:247:13)
```
<!--
A description of what actually happened, including a screenshot or copy-paste of any related error messages, logs, or other output that might be related. Places to look for information include your browser console, server console, and network logs. Please avoid non-specific phrases like “didn’t work” or “broke”.
-->
**How to reproduce the issue:**
The issue can be quickly reproduced by downloading a skeleton project of svelte kit.
1. Download svelte kit `npm init svelte@next test`. You can select skeleton, ts, no eslint, no prettier.
2. Add a config file `apollo.config.js` with an endpoint
3. Reload window and reload schema
```javascript
module.exports = {
client: {
service: {
name: 'foo',
url: 'https://example.org/graphql',
},
},
}
```
**Versions**
v1.19.3
**Additional Information**
https://github.com/apollographql/apollo-tooling/blob/be117f3140247b71b792a6b52b2ed26f2f76da02/packages/apollo-language-server/src/config/loadConfig.ts#L21-L25
I'm not sure, but a possible solution could also be accepting `apollo.config.cjs` as a supported config file option. This should be tested.
Answers:
username_1: https://github.com/apollographql/vscode-graphql/pull/4
Status: Issue closed
|
devconfcz/devconf-cz-web | 202374931 | Title: Redesign of the web schedule page
Question:
username_0: For next version I'd like to discuss redesigning the schedule page. This is just where I'm going to put notes for now.
Answers:
username_0: Note to self: I wonder why we're separating the various talks/workshops into separate tabs anyway rather than just creating an additional 'filter' for 'type' and having everything by default be on a single tab.... something to consider for the future.
username_1: for devconf it makes sense, because there are too many rooms, so it's better to have them split somehow (otherwise there are too many columns with too-narrow blocks)
spring-projects/spring-boot | 151794590 | Title: Use jetty 9.3.x as default version
Question:
username_0: The jetty team has announced that 9.2.x will be EOL as of May 2016.
https://dev.eclipse.org/mhonarc/lists/jetty-dev/msg02726.html
It seems reasonable the default version supported by spring-boot to become 9.3.x. There are already samples for 9.3.x so this should be easy.
Jetty 9.3.x brings support for http/2, but also requires java 8 (while 9.2 was java 7).
The only issue so far that I have experienced with jetty 9.3 is that classpath keystores are not working with spring boot loader. I have raised eclipse/jetty.project#518 to address this.
Answers:
username_2: Thanks for raising the Jetty issue. Ignoring any other concerns, that's a blocker on making 9.3 the default.
username_2: The Jetty issue has been fixed and will be in 9.3.9. Placing this on hold until it's available.
username_0: @username_2 9.3.9 was released today
username_2: Thanks
username_3: I'll merge #5290 right after that one, since I believe jetty examples should be renamed from "jetty" and "jetty93" to "jetty92" and "jetty", 9.3 being the new default version.
username_2: @username_3 Feel free to merge yours first
Status: Issue closed
|
jdb78/pytorch-forecasting | 1119898397 | Title: Alternatives to teacher-forcing in autoregressive models
Question:
username_0: Hey there!
Is there any plan on working on alternatives to the teacher-forcing paradigm to train autoregressive models? This, I believe, might be quite beneficial in closing the gap between training and validation performances.
I believe it would be helpful to at least provide the option of working with
- No teacher forcing;
- Scheduled Sampling to gently change the training process from a fully guided scheme, towards a less guided scheme which mostly uses the generated token instead ([<NAME> et al.](https://arxiv.org/pdf/1506.03099.pdf)).
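As a reference point, here is a minimal sketch of what scheduled sampling could look like inside an autoregressive decode loop. Everything here is illustrative (the `model.step` interface and the linear decay schedule are assumptions, not pytorch-forecasting API):
```python
import random
import torch

def decode(model, state, targets, epoch, total_epochs):
    # Probability of feeding ground truth decays linearly over training
    teacher_prob = max(0.0, 1.0 - epoch / total_epochs)
    outputs, prev = [], targets[:, 0]
    for t in range(1, targets.size(1)):
        pred, state = model.step(prev, state)  # one autoregressive step
        outputs.append(pred)
        # Scheduled sampling: coin flip between target and own prediction
        use_truth = random.random() < teacher_prob
        prev = targets[:, t] if use_truth else pred.detach()
    return torch.stack(outputs, dim=1)
```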
Thanks! |
ikedaosushi/tech-news | 859310080 | Title: エヌビディアがCPU参入アームと組みAI計算10倍速く: 日本経済新聞
Question:
username_0: エヌビディアがCPU参入 アームと組みAI計算10倍速く: 日本経済新聞<br>
<br>
https://ift.tt/3wQ8gqX |
efanovjohn/Test_Case_Extractor | 300202114 | Title: IOS-622: SIGSEGV(SEGV_ACCERR) at -[CRLCrashNXPage crash] (CRLCrashNXPage.m:37) [Bugsee]
Question:
username_0: Application Specific Information:
Selector name found in current argument registers: crash
Reported by Ñinguna
View full Bugsee session at: https://appdev.bugsee.com/#/apps/IOS/issues/IOS-622 |
HaxeFoundation/crypto | 428183052 | Title: Publish on haxelib
Question:
username_0: It would be nice to publish this to haxelib. It'll probably also need a haxelib.json file.
Answers:
username_1: There are some problems which should be fixed before that, but you can use it (for most targets).
The problems which I (and Travis) know about are:
- Hashlink - BCrypt is not working
- PHP - overflow problem (https://github.com/HaxeFoundation/haxe/issues/7533)
- Flash - Pbkdf2Test (it is too slow and not usable at all)
Also BCrypt is too slow, so I should replace all Array with Vector, which is not the best solution for the moment because the JS target gives worse results when using the vector copy() method.
Status: Issue closed
username_1: Ok, so I published an alpha release. Hashlink works locally, but gives an error on Travis. For PHP, I will fix it in another release. Let's see whether anyone reports other problems.
Let see whether anyone will report other problems. |
blue-systems/netrunner-core-releases | 416581165 | Title: C1: Installing in non-english doesnt translate FF
Question:
username_0: not fixed. installing in German still has firefox esr in English.
Answers:
username_1: The issue is it tries to download a package that isn't there anymore as calamares is not refreshing the package cache before trying to install the firefox locale package.
Adding Adrian here.
Just as info. We are using Calamares 3.2.4 from Debian Buster repositories. Is there a way to force an "apt update" run to refresh the package cache in cala before it tries installing a new package?
Status: Issue closed
|
justinmk/vim-sneak | 27912722 | Title: surround.vim: yszab) should surround around b
Question:
username_0: Given this text (cursor at `|`):
```
|fooab
```
`yszab)` should cause:
```
(fooab)
```
but instead it causes:
```
(foo)ab
```
Status: Issue closed
Answers:
username_0: The current behavior is probably most correct, since it matches `{op}/`. |
ant-design/ant-design | 616533903 | Title: Incorrect animations
Question:
username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Reproduction link
[https://ant.design/components/switch/#components-switch-demo-basic](https://ant.design/components/switch/#components-switch-demo-basic)
### Steps to reproduce
1. Open page
2. Collapse browser tab
3. Create new tab
4. Open previous tab
5. Animations not worked
### What is expected?
Animations work; the animation duration does not change after switching browser tabs.
### What is actually happening?
Animations do not work.
| Environment | Info |
|---|---|
| antd | 4.2.2 |
| React | 16.8.6 |
| System | ios |
| Browser | chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Answers:
username_0: GIF

username_0: @username_1
I found where the problem is, but I still do not understand how to solve it.
File: `components/switch/style/index.less`
```diff
&-checked {
// ...
&::after {
- left: 100%;
+ left: any static value not 100%
margin-left: -1px;
- transform: translateX(-100%);
}
}
```
Status: Issue closed
|
clinical-meteor/user-model | 140321743 | Title: user.fullName(), user.givenName() and user.familyName() methods are not existed.
Question:
username_0: Getting "Newly created user should have fullName(), preferredName(), and familyName() methods." error when activeEntryTests.js which is in active-entry package is run .
We have the following code block in activeEntryTests.js in active-entry/tests/gagarin path:
var user = Meteor.users.findOne({'emails.address': '<EMAIL>'});
expect(user.fullName()).to.equal('<NAME>');
expect(user.givenName()).to.equal('Jane');
expect(user.familyName()).to.equal('Doe');
Actually, we do not have these methods for user object and fullName is a property of user.profile. |
clojerl/rebar3_clojerl | 324663844 | Title: Handle protocol compilation correctly
Question:
username_0: To avoid over-writing the original protocol modules from a library or from Clojerl itself, we need to use the `*compile-protocols-path*` dynamic var. This presents the problem that there are now two different destination directories for compiled `.beam` files, plus the fact that there are duplicate modules, which generates errors when generating a release and could also create conflicts if the code paths are not added to the runtime in the right order.
All these issues need to be addressed. |
junydania/content-catalogue | 252013795 | Title: Report of all videos
Question:
username_0: "*Report on total number of videos in grpahical format
* Report on total number of publishers
* Report on total number of comedians
* list of videos with the field name (title, desc, comedian, publisher, date uploaded, release year, video path, category)"<issue_closed>
Status: Issue closed |
zyedidia/micro | 604118278 | Title: Allow disabling default linters with linter.removeLinter
Question:
username_0: ## Description of the problem or steps to reproduce
I was trying to replace the gcc linting for clang with custom flags by customizing the linter through a custom plugin (mostly to see how it works).
I first tried to use the linter.removeLinter function (which isn't documented in the readme but is in the source code) and it didn't disable the gcc linting. To disable the gcc linter, one has to do something like `linter.makeLinter("gcc", "", "", {}, "")`
Weirdly, calling `linter.removeLinter("gcc")` after that line seems to remove the custom gcc linter settings and replace it with the default one. It seems the linters table in the linter plugin is being initialized after the user's custom plugin?
## Specifications
<!-- You can use `micro -version` to get the commit hash. -->
Commit hash: 74523d2
OS: Fedora 31
Terminal: gnome-terminal
Answers:
username_1: I think this is because the order in which plugins are initialized is currently arbitrary, so the linter is initialized after your plugin. It would probably be a good idea to have support for controlling the order plugins are initialized in. In the meantime, you can modify the linter in `onBufPaneOpen` which will happen after plugins are initialized. |
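For anyone hitting the same ordering problem, a workaround sketch in a user plugin (the clang flags and the error-format string are illustrative only; the makeLinter argument order follows the call quoted above):
```
local configured = false

function onBufPaneOpen(bp)
    -- runs after all plugins, including the linter, are initialized,
    -- so the default gcc entry won't clobber the custom one
    if configured then return end
    configured = true
    linter.removeLinter("gcc")
    linter.makeLinter("clang", "c", "clang",
        {"-fsyntax-only", "-Wall", "%f"}, "%f:%l:%c:.+: %m")
end
```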
SzymonPobiega/NServiceBus.Raw | 590851012 | Title: Stopping of Raw endpoint when using RabbitMQ on broken connection not work correctly
Question:
username_0: Hi,
I have a problem when using NServiceBus.Router with the RabbitMQ transport as one interface. In my production environment we had some issues with the network connection, so in one of our acceptance tests I stop the RabbitMQ container and start it again after some random time. When we did that, we got a very strange situation where some messages were taken from the queue but never acked or processed. To avoid this problem we created a mechanism for stopping the router when RabbitMQ is not available and starting/creating it again when RabbitMQ becomes available. But we still got a situation where everything recovered except some messages that hang. There was also an additional connection to RabbitMQ shown in the app. After some digging I found that the problem is in the stopping of the receiver in NServiceBus.Raw. Only the MessagePump is stopped, and for RabbitMQ that only works when the connection is not in a closed state. With a broken connection the message pump is stopped, so message handling is detached from the consumer, but the object is not disposed: the reconnect task keeps working, and when the connection is re-established the consumer downloads messages but never processes them. Another drawback is the additional connection shown in RabbitMQ. I fixed it by disposing the IDisposable object when stopping the receiver (I didn't find any reference to or reuse of the object, so it should be safe). That makes the RabbitMQ message pump dispose its connection and circuit-breaker mechanism. I'm not sure what the situation is with the sending component. I will create a pull request with my fix, and if this solution is wrong please discard it.
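A sketch of the fix described above (member names are illustrative, not the actual NServiceBus.Raw internals):
```
public async Task StopReceiver()
{
    await messagePump.Stop().ConfigureAwait(false);
    // Also dispose the pump: for RabbitMQ this tears down the broken
    // connection, the reconnect task and the circuit breaker, instead of
    // leaving a detached consumer that fetches messages nobody processes.
    (messagePump as IDisposable)?.Dispose();
}
```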
Status: Issue closed
Answers:
username_1: @username_0 Thanks for fixing this one. I've just released it as 3.1.1
username_0: @username_1 No problem:) |
victoryw/learn-base | 445740845 | Title: as-a-leader-time-is-your-most-valuable-resource
Question:
username_0: I started by looking for anything obvious to cut. I switched one weekly one-on-one to biweekly. Then I stared at my calendar some more. Finally I recognized that something drastic had to change. Business as usual was not going to cut it.
## How to cut: cut to the bone and measure the pain, to get time to finish important things
We ask whether something is important, or if we value it. It is far too easy to answer in the affirmative. Instead, we should ask these two questions:
- “What is the worst case result if I cut this?”
- “Is this going to get me where I want to go in the long run?”
I asked one of my managers to attend the weekly operations meeting, then dropped it. I asked one of my senior engineers to take charge of the architecture meeting series, then dropped it. I moved all junior employees to biweekly meetings. I moved a couple of direct employees to report to a manager.
## Examine results and aftermath
- I was delegating work I knew how to do. This wasn't growth work (for the author, that is, not for the people it was delegated to); it was maintenance. I gave people growth opportunities. The new owners changed some processes and made improvements.
- Delegating is a gift with two recipients. You get more time, and someone else gains valuable experience.
## Make regular cuts (conclusion)
“I need 30 minutes more per day, so I’ll drop this single 30-minute task.” It has limited return on investment, because you’re swapping one item for another.
Instead of cutting from the bottom of your stack rank, switch your process. Selectively pick a few things, and cut everything else. Work on only your most important things.
Look at every single thing you’re doing. Determine whether you need each one to achieve your most important long-term goals. If not, ask yourself how much pain you’d feel if you cut it. Consider whether it makes sense to spend that time on your top priorities instead. Your top priorities are almost always the things that move the needle in your life, and time spent there is the most precious.<issue_closed>
Status: Issue closed |
microsoft/PSRule.Rules.GitHub | 861435767 | Title: Community profile is null
Question:
username_0: **Description of the issue**
Data exported for repository community profile is always null.
**Expected behaviour**
Community profile should be populated when valid.
**Module in use and version:**
- Module: PSRule.Rules.GitHub
- Version: **v0.1.0-B2103003**
Captured output from `$PSVersionTable`:
```text
Name Value
---- -----
PSVersion 7.1.3
PSEdition Core
GitCommitId 7.1.3
OS Microsoft Windows 10.0.19042
Platform Win32NT
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
WSManStackVersion 3.0
```<issue_closed>
Status: Issue closed |
flutter/flutter | 185672998 | Title: Flutter doctor wrongly suggests we have a plugin for WebStorm
Question:
username_0: ## Steps to Reproduce
Run `flutter doctor`. The output wrongly suggests that we have a plugin for WebStorm:
```
$ flutter doctor
...
[✓] IntelliJ IDEA Ultimate Edition (version 2016.2.5)
• Dart plugin installed
• Flutter plugin installed
[-] IntelliJ WebStorm (version 2016.2.4)
• Dart plugin not installed; this adds Dart specific functionality.
• Flutter plugin not installed; this adds Flutter specific functionality.
• For information about managing plugins, see
https://www.jetbrains.com/help/idea/2016.2/managing-plugins.html
...
```
## Expected behavior
We only support IntelliJ IDEA (Ultimate & CE), so only that should be printed in flutter doctor.
Answers:
username_1: Fixed in https://github.com/flutter/flutter/commit/310b8053353f403b4a2cccd32bff56301a11ec65 ... or not?
username_0: Ah, yes, sorry my flutter was almost a week old :-)
Status: Issue closed
|
izhangzhihao/intellij-rainbow-brackets | 333241060 | Title: [auto-generated:492934835] In file: file:///Users/vngrs/projects/mohsan_challenge/Twitter-Api-Kotlin-Test/app/build/generated/source/kapt/debug/android/databinding/DataBindingComponent.java
Question:
username_0: - Plugin Name: Rainbow Brackets
- Plugin Version: 5.8.2
- OS Name: Mac OS X
- OS Version: 10.13.5
- Java Version: 1.8.0_152-release
- App Name: Studio
- App Full Name: Android Studio
- Is Snapshot: false
- App Build: AI-173.4819257
- Last Action: Annotate
```
java.lang.NullPointerException
at com.intellij.codeInsight.daemon.impl.analysis.HighlightClassUtil.checkDuplicateTopLevelClass(HighlightClassUtil.java:175)
at com.intellij.codeInsight.daemon.impl.analysis.HighlightVisitorImpl.visitClass(HighlightVisitorImpl.java:412)
at com.intellij.psi.impl.source.PsiClassImpl.accept(PsiClassImpl.java:470)
at com.intellij.codeInsight.daemon.impl.analysis.HighlightVisitorImpl.visit(HighlightVisitorImpl.java:138)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.runVisitors(GeneralHighlightingPass.java:371)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.lambda$collectHighlights$5(GeneralHighlightingPass.java:303)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.analyzeByVisitors(GeneralHighlightingPass.java:330)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.lambda$analyzeByVisitors$6(GeneralHighlightingPass.java:333)
at com.github.izhangzhihao.rainbow.brackets.visitor.RainbowHighlightVisitor.analyze(RainbowHighlightVisitor.kt:33)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.analyzeByVisitors(GeneralHighlightingPass.java:333)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.lambda$analyzeByVisitors$6(GeneralHighlightingPass.java:333)
at com.intellij.codeInsight.daemon.impl.analysis.HighlightVisitorImpl.lambda$analyze$2(HighlightVisitorImpl.java:163)
at com.intellij.codeInsight.daemon.impl.analysis.RefCountHolder.analyze(RefCountHolder.java:336)
at com.intellij.codeInsight.daemon.impl.analysis.HighlightVisitorImpl.analyze(HighlightVisitorImpl.java:162)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.analyzeByVisitors(GeneralHighlightingPass.java:333)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.lambda$analyzeByVisitors$6(GeneralHighlightingPass.java:333)
at com.intellij.codeInsight.daemon.impl.DefaultHighlightVisitor.analyze(DefaultHighlightVisitor.java:86)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.analyzeByVisitors(GeneralHighlightingPass.java:333)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.collectHighlights(GeneralHighlightingPass.java:300)
at com.intellij.codeInsight.daemon.impl.GeneralHighlightingPass.collectInformationWithProgress(GeneralHighlightingPass.java:239)
at com.intellij.codeInsight.daemon.impl.ProgressableTextEditorHighlightingPass.doCollectInformation(ProgressableTextEditorHighlightingPass.java:83)
at com.intellij.codeHighlighting.TextEditorHighlightingPass.collectInformation(TextEditorHighlightingPass.java:70)
at com.intellij.codeInsight.daemon.impl.PassExecutorService$ScheduledPass.lambda$null$1(PassExecutorService.java:437)
at com.intellij.openapi.application.impl.ApplicationImpl.tryRunReadAction(ApplicationImpl.java:1130)
at com.intellij.codeInsight.daemon.impl.PassExecutorService$ScheduledPass.lambda$doRun$2(PassExecutorService.java:430)
at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:543)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:488)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:94)
at com.intellij.codeInsight.daemon.impl.PassExecutorService$ScheduledPass.doRun(PassExecutorService.java:429)
at com.intellij.codeInsight.daemon.impl.PassExecutorService$ScheduledPass.lambda$run$0(PassExecutorService.java:405)
at com.intellij.openapi.application.impl.ReadMostlyRWLock.executeByImpatientReader(ReadMostlyRWLock.java:143)
at com.intellij.openapi.application.impl.ApplicationImpl.executeByImpatientReader(ApplicationImpl.java:229)
at com.intellij.codeInsight.daemon.impl.PassExecutorService$ScheduledPass.run(PassExecutorService.java:403)
at com.intellij.concurrency.JobLauncherImpl$VoidForkJoinTask$1.exec(JobLauncherImpl.java:170)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
``` |
erikw/tmux-powerline | 426293164 | Title: 'powerline-config tmux setup' returned 127
Question:
username_0: Happens when I run `tmux source-file ~/.tmux.conf` because nothing is displayed upon logging into that server via ssh.
The contents of my tmux.conf are:
```
# MH from https://github.com/mtgto/dotfiles/blob/master/home/.tmux.conf
# pip install psutil
# pip install powerline-status
# set Zsh as your default Tmux shell
set-option -g default-shell /bin/zsh
# Tmux should be pretty, we need 256 color for that
set -g default-terminal "screen-256color"
# command delay? We don't want that, make it short
set -sg escape-time 1
# Set the numbering of windows to go from 1 instead
# of 0 - silly programmers :|
set-option -g base-index 1
setw -g pane-base-index 1
set -g mouse on
bind -n WheelUpPane select-pane -t= \; copy-mode -e \; send-keys -M
bind -n WheelDownPane select-pane -t= \; send-keys -M
# Allow us to reload our Tmux configuration while
# using Tmux
bind r source-file ~/.tmux.conf \; display "Reloaded!"
# Sane scrolling
set -g terminal-overrides 'xterm*:smcup@:rmcup@'
set-option -gw window-status-format "#I:#14W#F" # limit the length of inactive window name
if-shell 'env "$POWERLINE_CONFIG_COMMAND" tmux setup' '' 'run-shell "powerline-config tmux setup"'
# TPM
set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'
set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'
set -g @plugin 'tmux-plugins/tmux-yank'
# continuum
set -g @continuum-restore 'on'
# resurrect
set -g @resurrect-strategy-vim 'session'
# yank
if-shell "test expr '$OSTYPE' : 'darwin.*' > /dev/null" "set-option -g default-command 'reattach-to-user-namespace -l $SHELL'"
# Initialize TMUX plugin manager (keep this line at the very bottom of tmux.conf)
run -b '~/.tmux/plugins/tpm/tpm'
```
Any clues?
Answers:
username_0: also have this in my .zpreztorc
```
zstyle ':prezto:module:tmux:auto-start' remote 'yes'
```
Status: Issue closed
username_1: This is the wrong project. Please go to https://github.com/powerline/powerline :)
username_0: Ah sorry :) |
i18next/i18next | 261628217 | Title: missing keys during loading changeLanguage() from backends
Question:
username_0: I made an example of the issue here: https://www.webpackbin.com/bins/-KuxsDAcwJ2azm0S41y-
On the first changeLanguage() (click "de") when it should write to the console t('title') right after changing the language => it returns me "missing key error"
If I am using react + redux + etc... and my App is actively rerendering on every form change, this causes it displays constants (or fallbackLng) during the loading a resorce (0.5s - 1s) which look bad.
I already reported this to i18next-xhr-backend https://github.com/i18next/i18next-xhr-backend/issues/266#issuecomment-332210256 but it looks its an issue of every backend with loading delay.
I know there are ways how to avoid rerendering before the resource from backend is ready, but I think there should not be any moment, during changing language, when `t()` returns an error, because it's loading the resource. There can be many other processes which can run some part of javascript with translate and it should not return the error, but the language which was there before `changeLanguage()` was triggered.
Maybe `changeLanguage()` should not change the `i18n.language` property immediately, but after the resource is loaded or something like that.
Answers:
username_1: might be enough to move https://github.com/i18next/i18next/blob/master/src/i18next.js#L210 to line 197 - but need to check for possible side effects.
username_1: Could you check if that would solve your issue?
username_0: Well, I tried to switch i18next-xhr-backend to this type "3rdParty", but it just stopped working, and there are probably more things to change to properly migrate to this type.
But this will not solve the issue anyway...
username_1: The change mentioned is not related to changing type on the xhr-backend - how do you come to that idea?
Just go to node_modules/i18next/dist/es and find the line mentioned and move it to the callback part
username_0: Ahh sorry... I was confused by the lines, if 210 or 197...
But I put the line 210 into the callback and now it's working well.
Here is the snippet from node_modules/i18next/dist/es:
```
var setLng = function setLng(l) {
if (l) {
_this4.language = l;
_this4.languages = _this4.services.languageUtils.toResolveHierarchy(l);
if (_this4.services.languageDetector) _this4.services.languageDetector.cacheUserLanguage(l);
}
_this4.loadResources(function (err) {
_this4.translator.changeLanguage(l);
done(err, l);
});
};
```
I also tested a nonexistent language and everything looks fine.
Let me know I should do pull-request too. Thanks
username_1: So this solves your issue...no need for a PR...the change is done rather simple...
Just need to think about possible side effects...give me some time for testing a little deeper...but seems like a non breaking change.
Just need to be more confident we do not break stuff for existing users.
username_1: changed in [email protected] - please verify and close if ok for you.
username_0: now it's alright. thanks
Status: Issue closed
|
igvteam/igv | 1123669739 | Title: igvtools totdf with new genome json?
Question:
username_0: Hi,
The igvtools totdf tool online documentation says to use .genome files, but I see they have been deprecated. How should totdf be used now?
I may have missed it somewhere, but how do we generate the new genome json file?
Thanks,
Matt
Answers:
username_1: You create json genomes with a text editor, the fields are described here: https://github.com/igvteam/igv.js/wiki/Reference-Genome
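A minimal genome JSON using the fields described at that link might look like this (the URLs are placeholders):
```
{
  "id": "mygenome",
  "name": "My Genome",
  "fastaURL": "https://example.com/mygenome.fasta",
  "indexURL": "https://example.com/mygenome.fasta.fai"
}
```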
However, for igvtools you can use a "chrom.sizes" file in lieu of a genome; this is a simple 2-column tab-delimited file. The first column is the sequence/chromosome name, the second the sequence length. For example, the first few lines for hg19 are
```
chr1 249250621
chr2 243199373
chr3 198022430
chr4 191154276
```
username_0: Thank you, Jim.
Status: Issue closed
|
nus-cs2103-AY2021S2/pe-dev-response | 859970315 | Title: None
Question:
username_0: # Team's Response
Hi, thank you for your report. However, we have bolded the available options in the second line of the description as we can see in the screenshot you have provided. We believe this is sufficient emphasis for readers to pick up on should they have read the description in full.
## Duplicate status (if any):
-- |
dentarg/pynik | 117414222 | Title: tweet.py: tags (<, >) shouln't be presented escaped/encoded
Question:
username_0: ```
18:19:03 < osund> https://twitter.com/AlecMuffett/status/666665508574502912
18:19:04 < rufwebot> @AlecMuffett: Epic thanks to @twbtwb! #facebookcorewwwi is
now running an experimental Tor feature: reduced Onion hop
count => Lower latency => faster :-)
```

Answers:
username_0: Still a problem
```
16:12:08 <..... username_0> https://twitter.com/AlecMuffett/status/666665508574502912
16:12:09 <.... rufwebot> @AlecMuffett: Epic thanks to @twbtwb!
#facebookcorewwwi is now running an experimental Tor
feature: reduced Onion hop count => Lower latency
=> faster :-)
```
username_0: I suspect the Twitter API does the escaping/encoding
username_0: Same thing in Ruby, so yes, it has to be the API
https://twitter.com/eoinmackenrob/status/698626240895393794
```
[9] pry(main)> status.to_h
=> {:created_at=>"Sat Feb 13 21:54:10 +0000 2016",
:id=>698626240895393794,
:id_str=>"698626240895393794",
:text=>"hej hej & lite mer text < tag slut tag > foo",
```
Interesting story: http://www.hanselman.com/blog/WhyTheAskObamaTweetWasGarbledOnScreenKnowYourUTF8UnicodeASCIIAndANSIDecodingMrPresident.aspx
But there is no problem with that case:
```
22:55:17 <..... username_0> https://twitter.com/eoinmackenrob/status/698626355802607622
22:55:17 <.... rufwebot> @eoinmackenrob: "that's left us deeper..." but he
tweeted "that’s." Note the "smart" apostrophe testing
```
Status: Issue closed
|
FeenixServerProject/Archangel_2.4.3_Bugtracker | 64842697 | Title: [rogue] [puncturing wounds]
Question:
username_0: Puncturing wounds does not grant 15% bonus crit chance to mutilate.
The backstab part works fine.
http://i162.photobucket.com/albums/t243/teddy-s/mutilate_zps0kcmgowt.jpg
Answers:
username_0: current situation : puncturing wounds 3/3 does not increase crit chance of mutilate by 15%
expected situation : puncturing wounds 3/3 does increase crit chance of mutilate by 15%
How to reproduce:
You need 2 clients, on one create an undead rogue
.levelup 69
.learn 1329 (Mutilate)
.learn 674 (Dual wield)
.add 32471 (Shard of azzinoth)
.add 33801 (Vglad mutilator)
spec 3/3 puncturing wounds
.t test
On the other one create a draenei priest
.levelup 69
.mod hp 1000000000
.t test
Use rogue to cast mutilate on priest thousand times and observe that puncturing wounds does not increase mutilate's crit chance by 15%.
http://i162.photobucket.com/albums/t243/teddy-s/mutilate_zpsqyfqz6fc.jpg
The crit chance of mutilate should be a lot higher (I had the agility totem up 99% of the time = +2% crit + Mongoose procs)
username_1: Fixed. coming soon on PTR
username_2: Fix confirmed on PTR.
Status: Issue closed
username_3: On live (15.4) |
Pseudonian/SynergismOfficial | 793063424 | Title: Thrift rune building cost delay is 5x more powerful than it claims to be
Question:
username_0: The description says 0.125%, which implies every 8 levels will give 1% or 800 levels will give 100%):
https://github.com/Pseudonian/SynergismOfficial/blob/70102d85dadd6d0afd85a27ed451c23374e966c9/Javascript/runes.js#L34-L38
However, the effect divides by 160, so in fact every 160 levels gives 100%, making the effect 5x as powerful as it claims to be:
https://github.com/Pseudonian/SynergismOfficial/blob/70102d85dadd6d0afd85a27ed451c23374e966c9/Javascript/buystuff.js#L3
https://github.com/Pseudonian/SynergismOfficial/blob/70102d85dadd6d0afd85a27ed451c23374e966c9/Javascript/buystuff.js#L398
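For reference, the arithmetic behind the 5x figure: the description implies a per-level effect of 0.125% = 1/800, while dividing the level by 160 gives a per-level effect of 1/160 = 0.625%, and 0.625% / 0.125% = 5.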
The author of this issue does not express an opinion on whether the effect should be nerfed to match the description or instead the effect should be maintained and the description should be corrected, but the author of this issue does express the opinion that one of those two should be done. |
aheckmann/m | 452561194 | Title: Problem setting up on Amazon Linux 1
Question:
username_0: I’m trying to install `m` on Amazon Linux 1, with the following steps
1. `npm install -g m --build-from-source`
1. `m 4.0.10`
but then launching `mongo` gives:
```mongo: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by mongo)```
amzn1 seems to be shipping with glibc 2.17 only.
Can I force m to compile mongo rather than just install the binary?
Answers:
username_1: @username_0 Can you confirm the contents of `/etc/os-release`? It looks like the distro detection is pulling the image for Amazon Linux 2 instead of 1.
Thanks,
Stennie
Status: Issue closed
|
KellyIB/Adopt_dont_shop_paired | 559414054 | Title: User Story 39, Sortable Reviews
Question:
username_0: As a visitor,
When I visit a shelter's show page to see their reviews,
I see additional links to sort their reviews in the following ways:
- sort reviews by highest rating, then by descending date
- sort reviews by lowest rating, then by ascending date
Status: Issue closed
Answers:
username_0: User Story Complete. |
jprichardson/node-fs-extra | 111089494 | Title: with clobber true, copy and copySync behave differently if destination file is read only
Question:
username_0: copy succeeds because it will first do an fs.unlink() on the destination file (in ncp.js).
copySync fails.
To align the behaviour, in copy-file-sync.js change:
```
if (fs.existsSync(destFile) && !clobber) {
  throw Error('EEXIST');
}
```
with
```
if (fs.existsSync(destFile)) {
  if (clobber) {
    fs.unlinkSync(destFile);
  } else {
    throw Error('EEXIST');
  }
}
```
Answers:
username_0: Hi @username_1
BTW: thanks for the great library.
I'll submit a pull request to fix this (without the semi-colons ;)
I've also added a test for it in copy-sync-file.test.js.
```
describe('> when clobber is true and dest is readonly', function () {
it('should copy the file and not throw an error', function () {
try {
fs.chmodSync(dest, parseInt('444', 8))
fs.copySync(src, dest, {clobber: true})
destData = fs.readFileSync(dest, 'utf8')
assert.strictEqual(srcData, destData)
} finally {
// destination file is readonly so just remove it so we don't affect other tests
fs.unlinkSync(dest)
}
})
})
```
Tested on Ubuntu (see below).
On Windows 7, there are 7 tests currently failing, so no change as long as this one passes.
If this test fails, it causes 3 additional tests to fail. Not sure why. I'll look into it if I get time.
I will test OS X and FreeBSD next week.
Result after the fix:
```
429 passing (3s)
2 pending
```
Result before the fix (Ubuntu):
```
428 passing (3s)
2 pending
1 failing
1) + copySync() > when the source is a file > when clobber option is passed when destination file does exist > when clobber is true and dest is readonly should copy the file and not throw an error:
Error: EACCES: permission denied, open '/tmp/fs-extra/copy-sync/des-file'
at Error (native)
at Object.fs.openSync (fs.js:549:18)
at copyFileSync (lib/copy-sync/copy-file-sync.js:24:16)
at Object.copySync (lib/copy-sync/copy-sync.js:31:7)
at Context.<anonymous> (lib/copy-sync/__tests__/copy-sync-file.test.js:191:18)
```
username_1: Thank you, your help is much appreciated! If you submit a PR and it passes TravisCI/Appveyor, that'd be awesome :)
username_0: I've worked out why the 7 tests are failing on windows. New issue opened #188.
username_0: I'm about to submit a PR for this. It looks clean. Since I did the PR for #188 I just wanted your feedback on what I did to deal with this in my local repository. If ok, I'll submit a PR.
```
git checkout master
git fetch upstream
git merge upstream/master [did the expected fast-forward merge]
git push origin master
git checkout CopySyncClobberROFile
git rebase master
git push origin CopySyncClobberROFile --force
```
username_1: I'm not a Git expert by any means, but AFAICT, it looks good.
username_0: hmm! Interesting. Node 0.12+ works but Node 0.10 fails. Nodejs must have changed its behavior. Any suggestions before I investigate?
```
1) + copySync() > when the source is a file > when clobber option is passed when destination file does exist > when clobber is true and dest is readonly should copy the file and not throw an error:
Error: EPERM, operation not permitted 'C:\Users\appveyor\AppData\Local\Temp\1\fs-extra\copy-sync\des-file'
at Object.fs.unlinkSync (fs.js:772:18)
at Context.<anonymous> (C:\projects\node-fs-extra\lib\copy-sync\__tests__\copy-sync-file.test.js:196:18)
```
username_1: Hmm. I know that I've had a number of issues in the past with trying to synchronously delete in Windows in my tests.
Possibly related: https://github.com/isaacs/rimraf/issues/72 (I posted in this thread awhile back). However this was a `ENOTEMPTY` error.
If possible, see if you can't dig around a bit. If it's an easy fix, let's do it. If it's difficult, I'm not opposed to dropping support for Node v0.10. But before, I'd at least like to know why.
username_0: After some pain setting up a switchable between versions nodejs on Windows, the good news is a simple fix which works in 0.10.40 and 4.2.1. I will resubmit the PR soonish. I just added a chmodSync call.
Fixed code:
```
if (fs.existsSync(destFile)) {
if (clobber) {
fs.chmodSync(destFile, parseInt('777', 8))
fs.unlinkSync(destFile);
} else {
throw Error('EEXIST');
}
}
```
Status: Issue closed
|
WoWManiaUK/Blackwing-Lair | 805073727 | Title: Fiona
Question:
username_0: **Links:**
from WoWHead or our Armory
**What is happening:**
Fiona, the quest giver in Eastern Plaguelands, isn't showing up.
The mini map shows she is there but the character model isn't anywhere
**What should happen:**
**Is it crashing the server?:**
No
**Other Information:**
Looked all over the general area and still can't seem to find her
Answers:
username_1: need more info @username_0
need to know the quest name
username_1: tested NO issue finding fiona's first quest


username_2: Most likely talking about the caravan quests, which I believe don't function properly and are out-of-order or messed up due to this.
Could be many different reasons why such as, phasing, caravan transport is messed up, etc. @username_1
username_1: exactly @username_2
Status: Issue closed
|
aws-amplify/amplify-js | 462569507 | Title: React Authenticator component and HostedUI-signIn, Auth.federatedSignIn
Question:
username_0:
```
<CustomAwsSignIn />
<CustomAwsForgotPassword />
<App />
</Authenticator>;
```
However, at least one of our customers requires federated sign-in through a 3rd party. We have got that working, but there are a few problems.
1. Sometimes when returning to the App from the 3rd-party login (~20-30% of logins), the Authenticator component doesn't recognize the login without reloading the App (#3195 is related to this). I circumvented this by deleting the 'amplify-signin-with-hostedUI' key from localStorage and looping Auth.currentAuthenticatedUser() until it returns the user (or a certain limit is reached, in which case the login fails and requires a reload to work).
2. Refreshing the session: the Amplify docs say that the session needs to be manually refreshed when using something other than Facebook/Google authentication. I do know how to get the new refreshed tokens, but what would be a good way to let the Authenticator component update its state (authData from the refreshed tokens)? The only way I could think of was to create my own Authenticator extending the Authenticator component, replacing onHubCapsule with my own modified version with a myRefresh channel, like this:
```
import { Authenticator } from 'aws-amplify-react';
import { Hub } from 'aws-amplify';
class MyAuthenticator extends Authenticator {
constructor(props) {
super(props);
Hub.listen('myRefresh', this.onHubCapsule);
}
onHubCapsule(capsule) {
console.log('RAFAELAUTHENTICATOR, capsule', capsule);
const { channel, payload, source } = capsule;
if (channel === 'auth') {
switch (payload.event) {
case 'cognitoHostedUI':
this.handleStateChange('signedIn', payload.data);
break;
case 'cognitoHostedUI_failure':
this.handleStateChange('signIn', null);
break;
case 'parsingUrl_failure':
this.handleStateChange('signIn', null);
break;
case 'signOut':
this.handleStateChange('signIn', null);
break;
case 'customGreetingSignOut':
this.handleStateChange('signIn', null);
break;
case 'parsingCallbackUrl':
localStorage.setItem(super.Constants.SIGNING_IN_WITH_HOSTEDUI_KEY, 'true');
break;
default:
break;
}
}
if (channel === 'myRefresh') {
switch (payload.event) {
case 'refresh':
console.log('MYAUTHENTICATOR, REFRESHING SESSION', payload.event);
this.setState({ authData: payload.data, error: null, showToast: false });
default:
}
}
}
}
[Truncated]
refreshSession = async () => {
const { authData } = this.props;
this.toggleDevOptionsArray(['devRefreshSession', false]);
try {
const cognitoUser = await Auth.currentAuthenticatedUser();
const { refreshToken } = cognitoUser.getSignInUserSession();
cognitoUser.refreshSession(refreshToken, (err, session) => {
const newAuthData = { ...authData, signInUserSession: session };
console.log('NEWAUTHDATA', newAuthData);
Hub.dispatch('myRefresh', { event: 'refresh', data: newAuthData });
});
} catch (e) {
console.log('Unable to refresh Token', e);
}
};
```
This all seems to work, but it feels like a bit of a hacky way to do this. What would be the best way to handle these usage problems?
-Jukka |
claytongroth/Ambul8_Walkability | 344265289 | Title: Ambul8.py Comments
Question:
username_0: It would be a bit easier to understand the Ambul8.py file if you added some very brief comments to explain a little about what the code is doing for people not familiar with the libraries you are using. I would be greatly appreciated.
https://github.com/claytongroth/Ambul8_Walkability/blob/master/Ambul8.py
Thanks, @claytongroth.
Status: Issue closed
Answers:
username_0: Merged a basic Template into the master |
dials/dials | 992925070 | Title: dials.import: if you are passed a log file, do not stop, just ignore
Question:
username_0: ```
graeme@sleeper-service:~/data/i19-1-no-workie/1pc/work$ dials.import ../sugarA*
DIALS (2018) Acta Cryst. D74, 85-97. https://doi.org/10.1107/S2059798317017235
DIALS 3.dev.568-ge7d14238f
Sorry: Unable to handle the following arguments:
../sugarA.run
../sugarA_01.list
../sugarA_01_00001.log
../sugarA_02.list
../sugarA_02_00001.log
../sugarA_03.list
../sugarA_03_00001.log
../sugarA_04.list
../sugarA_04_00001.log
```
is an annoying repeated meme: a better solution would be to print a note that these files were ignored and import what it can.
Answers:
username_0: For clarity
```
dials.import foo.nxs kitten.jpg homework.docx
```
Should give
```
OK so I imported
foo.nxs
And ignored
kitten.jpg homework.docx
```
username_1: Would suggest adding changes decided here to #1863
username_2: The issue here is that `dials.import` generally doesn't care about file extensions, so you could name your CBF files `*.log` and `dials.import` would still work just fine:
```
% cp $(dials.data get -q centroid_test_data)/centroid_0001.cbf centroid_0001.log
% dials.import centroid_0001.log
DIALS (2018) Acta Cryst. D74, 85-97. https://doi.org/10.1107/S2059798317017235
DIALS 3.dev.567-ga0cbfc99c
The following parameters have been modified:
input {
experiments = <image files>
}
--------------------------------------------------------------------------------
format: <class 'dxtbx.format.FormatCBFMiniPilatus.FormatCBFMiniPilatus'>
num images: 1
sequences:
still: 0
sweep: 1
num stills: 0
--------------------------------------------------------------------------------
Writing experiments to imported.expt
```
I think you're proposing ignoring files with a defined set of file extensions (`{jpg, log, docx,...}`) which would be a significant change in behaviour.
An alternative _could_ be to add a PHIL parameter (yes, I know) something along the lines of `error_on_unhandled=False` which would allow the user to control the behaviour by choosing between stopping with an error on encountering unhandled files, or simply printing a warning.
username_1: I think `ignore_unhandled` already exists...
username_2: Yes, you're right:
```
% dials.import -c -e2 -a1 | grep -A 1 ignore
ignore_unhandled = False
.help = "Don't throw exception if some args are unhandled"
```
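In the meantime, passing it explicitly on the command line should work, e.g.:
```
dials.import ../sugarA* ignore_unhandled=True
```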
username_0: What I am probably looking for is to set this to `true` by default then |
MicrosoftDocs/azure-docs | 391485390 | Title: Inconsistency in the examples
Question:
username_0: The display form "cntk" has the spoken form "see n tea k" in the first table and "c n t k" in the Notepad screen shot.
The display form for the first entry in the initial table is "c3po" whereas in the Notepad screen shot it is "3cpo". (I'm guessing these were meant to be the same?) And the spoken forms of these two are different. In the first table it's "see three pea o" and in the Notepad screen shot, it's "3 c p o" (should be "c 3 p o"?)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 15ab661b-de71-2baf-2ae6-2b4e875107df
* Version Independent ID: 966ec357-e9ff-2e2c-1027-ceda734d3a73
* Content: [Customize pronunciation - Speech Services](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-customize-pronunciation)
* Content Source: [articles/cognitive-services/Speech-Service/how-to-customize-pronunciation.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/Speech-Service/how-to-customize-pronunciation.md)
* Service: **cognitive-services**
* GitHub Login: @username_2
* Microsoft Alias: **panosper**
Answers:
username_1: @username_0 Thanks for the feedback! I have assigned the issue to the content author to investigate further and update the document as appropriate.
username_2: both indications are correct. I will correct and raise PR this week.
username_2: PR raised. good to close
username_1: @username_2 Thank you!
@username_0 We will now close this issue. If there are further questions regarding this matter, please reopen it and we will gladly continue the discussion.
Status: Issue closed
|
go-vela/community | 639076861 | Title: Add pipeline visualizations
Question:
username_0: ## Description
<!--
* What is your idea?
* A brief overview of the desired change.
* Examples are ALWAYS welcome!
-->
We should add the ability to "visualize" a pipeline via some type of dashboard and/or graph(s).
This should include the ability to see the order of execution of a pipeline and be distinct when visualizing `stages`, `steps` and `services`.
There are already a few different tools out there that have this concept:
* https://concourse-ci.org/
* https://www.gocd.org/
* https://www.jenkins.io/projects/blueocean/
## Value
<!--
* Why is this important?
* Will this make something better, faster, etc?
* Does this have an impact on security?
-->
* improve user experience when viewing a pipeline
* enhance ability to troubleshoot a running pipeline
* maintain feature parity with existing CI solutions
## Definition of Done
<!--
* What is the expected result after this story is complete?
* Does documentation need to be written?
-->
A user has some mechanism via the UI to visualize a pipeline
## Effort (Optional)
<!--
* What is the estimated effort to complete this story?
* Estimates can be in any format (i.e. 1 hour, 2 days, 3 weeks etc.)
-->
2 weeks
## Impacted Personas (Optional)
<!--
* Who will benefit from completing this story?
* Which personas will this impact? (i.e. users, admins etc.)
* Estimates can be in any format (i.e. 1 hour, 2 days, 3 weeks etc)
-->
everyone
Answers:
username_1: potential libraries:
- [d3-dag/d3](https://github.com/erikbrinkman/d3-dag) - this is the spiritual successor of d3-dagre but it just extends [d3](https://d3js.org/) with additional dag algorithms. this one is promising, and might work well but any layout library combined with `d3` will require a lot of manual code to configure the layout and html.
- [elm-visualization](https://package.elm-lang.org/packages/gampleman/elm-visualization/latest/) - pure elm graph visualizations, it includes force directed graphs. however, like d3, it requires you to create the layout yourself which is a lot of code. the core of the library is just an in-complete d3, which makes it hard to justify not just porting to a more capable library.
- [d3-dagre](https://github.com/dagrejs/dagre-d3) - not going with this one because it is officially deprecated, but it is one of the leading solutions still out there
- [WebCola/cola.js](https://github.com/tgdwyer/WebCola) - this one looks very promising, with support for hierchal grouping of nodes, and force DAGs, open source and maintained. its an interface to d3, much like the others.
username_1: so far I've experimented with [d3-dagre](https://github.com/dagrejs/dagre-d3), [d3-dag/d3](https://github.com/erikbrinkman/d3-dag) and [elm-visualization](https://package.elm-lang.org/packages/gampleman/elm-visualization/latest/).
next up is [cola.js](https://github.com/tgdwyer/WebCola)
username_1: after consideration i feel this might be better off as a spike.
the work will have to be broken down into several smaller efforts that can be performed by the server and the ui. a few that i can think of include:
- converting the data from the database to a format that can be consumed by a graph layout library
- consume the formatted data using a graph layout library like graphviz/viz.js, DOT, d3-dag, dagre, by-hand (ouch), etc to produce a layout
- render the graph layout using a visualization library like d3, elm-visualization, WebCola/cola.js, by-hand (no way), etc.
username_1: some resources:
- [graphviz DOT guide](https://www.graphviz.org/pdf/dotguide.pdf)
username_2: [Mermaid charts](https://mermaid-js.github.io/mermaid/#/) might be another thing to look into.
username_1: one drawback to dagre-d3 was the lack of an active maintainer, i wonder if mermaid forked the library
username_1: note: the [go-graphviz](https://github.com/goccy/go-graphviz) requires c integration (CGO_ENABLED) which is an overhead dependency I would like to avoid.
its possible to use graphviz DOT generation on the client-side but at that point you may as well use a rendering library that is more robust like d3
username_1: [d3-dag](https://github.com/erikbrinkman/d3-dag) has been proving successful in generating numerous layouts like sugiyama. feeding the layout into d3 to render the nodes has been successful, but the graph that is produced for more complex pipelines is pretty messy. i.e., the more nodes, the worse it looks.
it makes me consider implementing a sugiyama/topological layout by hand (which i really don't want to do) and then passing that into d3.
more responsibility on us, but we could control the granularity of node ranking, and we could do more complex rendering like clustered graphs (render the steps within a stage, for example) and services.
username_3: d3-dag was actually not as flexible as I was hoping, mostly with no support for node size, layer spacing, or vertical node size growth without using the `arquint` layout, which was deprecated from the library a few releases ago.
DOT is proving to be much more reliable.
my previous assumption about graphviz support being dependent on CGO was only partially correct. we could use this [Go DOT](https://github.com/emicklei/dot) helper library to write up the graph, or write up the DOT graph string using the [Elm Graph.DOT](https://package.elm-lang.org/packages/elm-community/graph/latest/Graph.DOT) implementation.
IMO it makes sense in the client.
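for illustration, a minimal DOT description of a two-stage pipeline (all names invented), which is the kind of string either of those helpers would emit:
```
digraph pipeline {
  rankdir=LR;
  subgraph cluster_build {
    label="build";
    clone -> compile -> test;
  }
  subgraph cluster_deploy {
    label="deploy";
    publish;
  }
  test -> publish;
}
```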
username_4: i'm partial to d3 myself for the flexibility.
are we sacrificing anything in terms of interactivity going down the DOT path? (the story doesn't have it as a requirement, but that is something to consider, maybe) |
SMRFoundation/NodeXLBasic | 275026370 | Title: Cacluating metrics doesn't work
Question:
username_0: I failed to use the calculate metrics function of NodeXL. I have tried reinstalling the Windows system, Microsoft Office 2007, as well as NodeXL, but nothing helps. The problems are presented as follows:
NodeXL
An unexpected problem occurred. If it occurs again, please copy the details to the clipboard by typing Ctrl-C, then post the details to http://www.codeplex.com/NodeXL/Thread/List.aspx.
Details:
[InvalidCastException]: Unable to cast COM object of type 'System.__ComObject' to interface type 'Microsoft.Office.Interop.Excel._Workbook'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{000208DA-0000-0000-C000-000000000046}' failed due to the following error: 不支持此接口 [No such interface supported] (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).
at System.StubHelpers.StubHelpers.GetCOMIPFromRCW(Object objSrc, IntPtr pCPCMD, Boolean& pfNeedsRelease)
at Microsoft.Office.Interop.Excel._Workbook.get_Sheets()
at Smrf.AppLib.ExcelUtil.TryGetWorksheet(Workbook workbook, String worksheetName, Worksheet& worksheet)
at Smrf.AppLib.ExcelTableUtil.TryGetTable(Workbook workbook, String worksheetName, String tableName, ListObject& table)
at Smrf.NodeXL.ExcelTemplate.PerWorkbookSettings.TryGetPerWorkbookSettingsTable(ListObject& oPerWorkbookSettingsTable)
at Smrf.NodeXL.ExcelTemplate.PerWorkbookSettings.GetAllSettings()
at Smrf.NodeXL.ExcelTemplate.PerWorkbookSettings.TryGetValue(String settingName, Type valueType, Object& value)
at Smrf.NodeXL.ExcelTemplate.PerWorkbookSettings.get_WorkbookSettings()
at Smrf.NodeXL.ExcelTemplate.NodeXLApplicationSettingsBase.GetWorkbookSettings()
at Smrf.NodeXL.ExcelTemplate.NodeXLApplicationSettingsBase.CopyWorkbookSettingsToStandardSettings()
at Smrf.NodeXL.ExcelTemplate.NodeXLApplicationSettingsBase.get_Item(String propertyName)
at Smrf.NodeXL.ExcelTemplate.GraphMetricUserSettings.get_GraphMetricsToCalculate()
at Smrf.NodeXL.ExcelTemplate.GraphMetricUserSettings.ShouldCalculateGraphMetrics(GraphMetrics graphMetrics)
at Smrf.NodeXL.ExcelTemplate.CalculateGraphMetricsContext.ShouldCalculateGraphMetrics(GraphMetrics graphMetrics)
at Smrf.NodeXL.ExcelTemplate.OverallMetricCalculator2.TryCalculateGraphMetrics(IGraph graph, CalculateGraphMetricsContext calculateGraphMetricsContext, GraphMetricColumn[]& graphMetricColumns)
at Smrf.NodeXL.ExcelTemplate.GraphMetricCalculationManager.CalculateGraphMetricsAsyncInternal(CalculateGraphMetricsAsyncArgs oCalculateGraphMetricsAsyncArgs, BackgroundWorker oBackgroundWorker, DoWorkEventArgs oDoWorkEventArgs)
at Smrf.NodeXL.ExcelTemplate.GraphMetricCalculationManager.BackgroundWorker_DoWork(Object sender, DoWorkEventArgs e)
at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e)
at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
#### Attachments
[PV and solar energy.pdf](https://www.codeplex.com/Download/AttachmentDownload.ashx?ProjectName=NodeXL&WorkItemId=63937&FileAttachmentId=7657)
#### This work item was migrated from CodePlex
CodePlex work item ID: '63937'
Vote count: '1' |
mobends/MoBends | 876813842 | Title: Tongue bug on baby wolf
Question:
username_0: Hello I recently used Mo Bends to see the animations it offered.
I reproduced 2 wolves to have a 3rd one and I noticed that the animation texture of the baby wolf's tongue was too big for its size, and that there was also a bug in its feet .
Answers:
username_1: This issue has been fixed and will be rolled out in the upcoming BETA-3 update.

Status: Issue closed
|
probml/pyprobml | 898663292 | Title: differentiable particle filtering
Question:
username_0: [This ICML2021 paper](https://arxiv.org/pdf/2102.07850.pdf) explains how to backprop through the resampling step of a particle filter, using optimal transport methods.
Task: Translate [their TF code](https://github.com/JTT94/filterflow) into JAX and reproduce their initial demo.
Feel free to use the [optimal transport tools](https://pypi.org/project/ott-jax/) library in JAX.
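As a starting point, here is a minimal sketch of differentiable resampling via entropy-regularized OT (Sinkhorn) in plain JAX. It illustrates the general idea only; it is neither the paper's exact ensemble-transform algorithm nor the ott-jax API:
```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def ot_resample(log_weights, particles, eps=0.1, n_iters=100):
    """Differentiable 'resampling': transport weighted particles to uniform."""
    n = particles.shape[0]
    # squared-distance cost between particles
    cost = jnp.sum((particles[:, None, :] - particles[None, :, :]) ** 2, axis=-1)
    log_a = log_weights - logsumexp(log_weights)   # source marginal: current weights
    log_b = jnp.full(n, -jnp.log(n))               # target marginal: uniform
    f = jnp.zeros(n)
    g = jnp.zeros(n)
    for _ in range(n_iters):                       # Sinkhorn iterations in log space
        f = eps * (log_a - logsumexp((g[None, :] - cost) / eps, axis=1))
        g = eps * (log_b - logsumexp((f[:, None] - cost) / eps, axis=0))
    plan = jnp.exp((f[:, None] + g[None, :] - cost) / eps)
    # each output particle is a weighted barycenter of the inputs,
    # so gradients flow through the whole step
    return n * plan.T @ particles
```
Everything here is differentiable, so `jax.grad` can be taken through the resampling step; in practice one would use `jax.lax.fori_loop` and the ott-jax solvers instead of a Python loop.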
Status: Issue closed
Answers:
username_0: moved to hermes |
dresden-elektronik/deconz-rest-plugin | 289954452 | Title: deCONZ backup/restore issues
Question:
username_0: After over half an hour no change. I restored the backup from the test RaspBee. This seemed to go without issues, the lights previously connected to test were found without a problem and are reachable. The test RaspBee again has its original MAC address. deCONZ did crash (deliberately?):
```
Jan 19 12:48:47 pi systemd[1]: deconz-gui.service: Main process exited, code=exited, status=41/n/a
```
There seems to be something broken in the restore logic. After untar-ing the backup file, before the crash, deCONZ saves its database and layout when resetting or leaving/joining the network. This causes the restored database to contain records from the running database, from before the restore.
Answers:
username_0: Restarted deCONZ on my production Raspberry. As expected, the GUI is populated straight away, with the exception of the lights that I had power-cycled to connect to the restored backup on the test Raspberry. The _NWK Update ID_ is 0, as in the backup. The missing lights won't come back after a power-cycle.
username_0: OK take 2. To be sure, new backup (this time from the deCONZ GUI), and new restore (also from the deCONZ GUI). The GUI must do something more than POST `/config/import`, because deCONZ now seems to restart more smoothly. Again, restored configuration is on channel 11 and _NWK Update ID_ is now 7 (so increased from the 6 in the backup file). Changed channel to 25 and _NWK Update ID_ to 8, and left/joined the network. All my lights are found. End devices appear as well, but remain unreachable.
username_1: @ebauuw... not sure if this is related but do you know if you can move the RaspBee between Pi devices easily? I’m guessing that some of the issues you’ve encountered wouldn’t happen since the RaspBee would be the same between devices.
username_0: Yes, of course, you can move the RaspBee and the µSD card to another Raspberry and everything runs smoothly. That won't help you if the RaspBee breaks, though.
username_1: @username_0, yes agreed. Just wanted to make sure I understood what you were saying about backup/restore.
username_2: Just my 2 cents. I used the backup functionality today to move deconz to a Pi running my fhem instance (used 2 before). The old Pi had 2.04.97. The new one was directly installed with 2.05.12. I backed up via the PWA app and restored to the new one via the PWA as well. Worked great. The only thing I lost was the connection between lights and groups. For me that's not a big problem because I only have 1 light. All sensors and switches work great after the move.
Using the conbee USB Stick. I have no rules or something else. Only using it for pushing sensor events and switch events to handle them in fhem.
username_3: I tried today and failed... I wanted to move from deConz 2.0.50 on raspbian jessie to a freshly installed raspbian light stretch running deConz 2.0.52. I had the following results:
- backup/restore through phoscon web-app did not throw any errors
- all items show in Phoscon UI
- all items are visible through REST-API
- any command to an item returns success, but no physical results happen (lights do not turn on)
I looked at the logs but could not find any errors or warnings. Any ideas where I could look?
username_4: Can you please create a second backup with the current installation and send both, the original and the new backup, to <EMAIL>
I can check if something is wrong with configuration.
Meanwhile please don't change or reset anything.
username_4: Also have you configured the serial interface on the new system as described here:
https://github.com/dresden-elektronik/deconz-rest-plugin#software-requirements
Otherwise it won't work.
username_3: Will do as soon as I get home. Just one question: I configured the serial interface, but wondered whether it is required since I am using ConBee and not RaspBee (so USB not gpio). The same goes for disabling Bluetooth.
username_3: @username_4 Interestingly after using the latest version it works. Thanks! (If you want, I still have the two backup files that I could send over) |
emissions-api/emissions-api | 557445952 | Title: Inclusion of other gases - NO2, Ozone, SO2, and aerosol.
Question:
username_0: Just wondering when the other gases analysed by Sentinel 5 will be included in the API?
Great work overall however!
Answers:
username_1: Right now we have a slight problem regarding server resources…
We hope to first get a continuous import of data up and running within the next week. After that's done we can probably tell you more about our roadmap. I'll update this ticket once there is new information.
username_2: Hej @username_0, we have worked on the API in the last weeks and we now have more gases included in the API. Currently we do have carbon dioxide, methane, ozone and nitrogen dioxide. You can check any time which gases are included on our website https://emissions-api.org.
We have automated the process more or less so adding more gases is just a matter of downloading and processing time. Are there any gases you are especially interested in? Next would probably formaldehyde or sulfur dioxide.
username_0: Nitrogen dioxide was the main one :) I'm playing around with the api now, and trying to fit it into the deck.gl framework but in the python version 'pydeck'. I'm having a little trouble on the data format it expects versus what I'm getting out of the api but that's more on my understanding I think. Would be good to see a python version of the deck.gl example if possible! |
pkpkpk/cljs-node-io | 362902271 | Title: Incorrect opts structure has surprising results
Question:
username_0: Dear kind maintainer,
This may simply need none of your attention and can probably be closed, but just in case input validation strikes your fancy.
I somehow decided that the following code was correct.
```
(io/aslurp "afile.txt" {:encoding ""})
```
It looked like all was well: my files were being read and pushed over the wire just as I expected. But I kept having errors reconstructing the files. After finally resorting to hexdumping the file, I noticed lots of hex `0xEF 0xBF 0xBD` (the UTF-8 encoding of U+FFFD, the replacement character) in my output file. After running around in circles for quite some time, I realized the only place this could possibly be happening was actually reading in the file. So I dug back into your docs and found that I was indeed using your library incorrectly. The correct answer was
```
(io/aslurp "afile.txt" :encoding "")
```
Now, of course, this was user error. But just a little sprinkling of input validation could save a soul such as mine.
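For what it's worth, a tiny guard at the top of the kwarg-taking functions would have caught my mistake (a hypothetical sketch, not the library's actual code):
```
(defn check-opts [opts]
  ;; opts should be keyword args like (:encoding "utf8"), not a single map
  (when (and (= 1 (count opts)) (map? (first opts)))
    (throw (js/Error. "pass opts as keyword args, e.g. (aslurp f :encoding \"utf8\")"))))
```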
Anyway, thank you for putting this library together; despite my inability to read, it has made my life quite a bit simpler.
:heart:
Answers:
username_1: Yes, overloaded opts are awful, but unfortunately it's an old Clojure idiom. I will look into validating them. thanks! |