repo_name | issue_id | text
---|---|---|
moo-man/FVTT-DD-Import | 970856124 | Title: Universal Battlemap Importer Not Creating Scene
Question:
username_0: The issue that I'm experiencing is that when I'm importing my .dd2vtt files, it goes through the dialogue but doesn't create the scene. This is only happening with two of my maps. Both file sizes are under 10 MB. It uploads the .webp file to my asset library successfully.
Answers:
username_0: I tried recreating a portion of one of the maps and I ran into the same issue with the other two. I've also tried reducing the PPI down to 75 and it didn't work either.
username_1: Could you open the JavaScript console and look for errors there?
username_0: This is the error message I got:
data.mjs:382 Uncaught (in promise) Error: Invalid WallData coordinates provided which must be a length-4 array of finite numbers
at WallData._validateField (data.mjs:382)
at WallData.validate (data.mjs:273)
at new DocumentData (data.mjs:48)
at new WallData (data.mjs:1953)
at new Document (document.mjs:46)
at new BaseWall (documents.mjs:1253)
at new <anonymous> (foundry.js:9103)
at new <anonymous> (foundry.js:8348)
at new WallDocument (foundry.js:17156)
at Function.makeWall (ddimport.js:541)
_validateField @ data.mjs:382
validate @ data.mjs:273
DocumentData @ data.mjs:48
WallData @ data.mjs:1953
Document @ document.mjs:46
BaseWall @ documents.mjs:1253
(anonymous) @ foundry.js:9103
(anonymous) @ foundry.js:8348
WallDocument @ foundry.js:17156
makeWall @ ddimport.js:541
GetWalls @ ddimport.js:526
DDImport @ ddimport.js:489
(anonymous) @ ddimport.js:338
username_0: I got both of them to upload. I set the offset to 0 and that seemed to work.
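The validation that failed requires each wall's coordinates to be a length-4 array of finite numbers, so the non-zero offset was presumably producing non-finite or missing values. The same check, sketched in Python for illustration (the importer itself is JavaScript, and this helper is hypothetical, not part of the module):
```python
import math

def is_valid_wall(coords):
    # Foundry's WallData wants exactly four finite numbers: x0, y0, x1, y1.
    return (
        isinstance(coords, (list, tuple))
        and len(coords) == 4
        and all(isinstance(c, (int, float)) and math.isfinite(c) for c in coords)
    )

print(is_valid_wall([0, 0, 100, 100]))         # True
print(is_valid_wall([0, 0, float("nan"), 1]))  # False -> raises in Foundry
```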
username_1: If you don't mind sharing the map I could have a look at it, even if I cannot promise when :)
username_0: https://drive.google.com/drive/folders/1ZnFQ4diulLncEMmX_0BQiTd0VE6MTZml?usp=sharing
Status: Issue closed
|
dougnoel/sentinel | 831270826 | Title: Add an OnHover Test Step
Question:
username_0: As a test user I would like to be able to hover over an element so that I can have a screenshot taken and use it to prepare for a verify step.
Answers:
username_0:
```java
@Then("^I verify the (.*?)( tooltip)?( does not)? (has|have|contains?) the text \"([^\"]*)\"$")
public static void verifyElementTextContains(String elementName, String tooltip, String assertion, String matchType, String text) {
    boolean negate = !StringUtils.isEmpty(assertion);
    boolean hasTooltip = !StringUtils.isEmpty(tooltip); // check the tooltip capture group, not the assertion one
    String negateText = negate ? "not " : "";
    boolean partialMatch = matchType.contains("contain");
    String partialMatchText = partialMatch ? "contain" : "exactly match";
    if (elementName.contains("URL")) {
        verifyURLTextContains(text);
    } else {
        String elementText;
        if (hasTooltip) {
            elementText = ""; // TODO: get the element's tooltip text
        } else {
            elementText = getElement(elementName).getText();
        }
        var expectedResult = SentinelStringUtils.format(
                "Expected the {} element to {}{} the text {}. The element contained the text: {}",
                elementName, negateText, partialMatchText, text, elementText.replace("\n", " "));
        log.trace(expectedResult);
        if (partialMatch) {
            if (negate) {
                assertFalse(expectedResult, elementText.contains(text));
            } else {
                assertTrue(expectedResult, elementText.contains(text));
            }
        } else {
            if (negate) {
                assertFalse(expectedResult, StringUtils.equals(elementText, text));
            } else {
                assertTrue(expectedResult, StringUtils.equals(elementText, text));
            }
        }
    }
}
```
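The hover action itself isn't covered by the snippet above; in WebDriver it is typically done with the Actions API. A minimal illustrative sketch (shown in Python Selenium for brevity — Sentinel is Java-based, and the driver setup, URL, and element ID here are placeholders, not Sentinel code):
```python
# Hover over an element with Selenium's ActionChains, then take a
# screenshot of the resulting state (tooltip, flyout, etc.) for a
# later verify step.
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()        # placeholder driver setup
driver.get("https://example.com")  # placeholder page

element = driver.find_element(By.ID, "menu")  # hypothetical element
ActionChains(driver).move_to_element(element).perform()  # the hover step

driver.save_screenshot("hover-state.png")
driver.quit()
```
|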
ga4gh/vrs | 972185444 | Title: Integral vs. continuous copy number variation
Question:
username_0: Representation of copy number variation in cancer is difficult due to the confluence of tumor purity, clonality, and technical error in whole-tumor NGS assays.
To account for this, CNVs are represented not as a discrete number of copies but as an average number of copies across the sample.
I propose relaxing the restriction on CopyNumber to allow continuous as well as integral values.
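For illustration, a relaxed model could look something like this (a hypothetical Python sketch, not the actual VRS schema):
```python
# Hypothetical sketch of a relaxed CopyNumber: `copies` accepts either an
# exact integer count or a continuous, sample-averaged estimate.
from dataclasses import dataclass
from typing import Union

@dataclass
class CopyNumber:
    copies: Union[int, float]

    def __post_init__(self):
        if self.copies < 0:
            raise ValueError("copy number cannot be negative")

print(CopyNumber(copies=3))     # integral: e.g. a germline duplication
print(CopyNumber(copies=3.42))  # continuous: a whole-tumor average
```
|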
mickelus/tetra | 899075127 | Title: Crash report - punching grass
Question:
username_0: ## Bug Report
**Observed Behaviour**
Game crash when punching grass
Crash log:
https://pastebin.com/NyVSnei1
**Expected Behaviour**
No game crash
**Minimal setup needed to reproduce**
- Forge version: 36.1.0
- Tetra version: 3.11.0
- Tetra configuration: Default
- Other mods: See crash log
**Steps to reproduce**
Install mods, punch grass.
Answers:
username_1: fixed in 3.11.1.
Status: Issue closed
|
violet-zct/DeMa-BWE | 435685678 | Title: Using text embeddings instead of binary embeddings
Question:
username_0: Hello,
Thank you for making available this implementation. I have my own monolingual embeddings in the text format (Word2Vec format). I am wondering whether I can use these embeddings for training bilingual embeddings instead of the FastText binary embeddings. Would it be as simple as replacing calls to load_bin_embeddings(...) in main_e2e.py with read_txt_embeddings(...)?
Answers:
username_1: Hi,
DeMa-BWE also requires the word counts of each word, which can be loaded from the binary FastText model. DeMa-BWE then uses the word counts to compute normalized frequencies for each word. If you are loading word2vec vectors, you need to save the word counts as well.
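For instance, a minimal sketch of that preprocessing (the one-`word count`-pair-per-line file format and the file name are assumptions for illustration, not DeMa-BWE's actual format):
```python
# Sketch: pair word2vec-format text embeddings with a separately saved
# word-count file, then compute the normalized frequencies DeMa-BWE uses.
def load_word_counts(path):
    counts = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, count = line.rstrip("\n").rsplit(" ", 1)
            counts[word] = int(count)
    return counts

counts = load_word_counts("en.counts.txt")  # hypothetical file name
total = sum(counts.values())
freqs = {w: c / total for w, c in counts.items()}  # normalized frequencies
```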
username_0: Thank you for the reply. I think it would be fairly easy for me to save the word counts for each word separately in a different file. Could I train the bilingual embeddings after having the word2vec vectors and the word counts for the source and target languages?
Status: Issue closed
|
codelibs/fess | 466777747 | Title: NullPointerException occurs when bind DN is not set at LDAP authentication
Question:
username_0: ```java
env.put(Context.SECURITY_PRINCIPAL, principal);
```
However, since `principal` is set to the input value of the Bind DN, a `NullPointerException` occurs if nothing is entered.
I would like to be able to omit the Bind DN (`SECURITY_PRINCIPAL`) depending on the LDAP server used.
If Bind DN is not input, `SECURITY_PRINCIPAL` should not be set.
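In other words, the request amounts to a guard around the `env.put` call — only set the principal when a bind DN was supplied. A minimal sketch of the logic (in Python for brevity; the actual Fess code is Java, where `Hashtable.put` is what throws the NPE on a null value):
```python
# Sketch of the proposed guard: skip SECURITY_PRINCIPAL/SECURITY_CREDENTIALS
# entirely when no bind DN was entered, allowing an anonymous bind.
def create_environment(provider_url, principal=None, credentials=None):
    env = {"java.naming.provider.url": provider_url}
    if principal:  # empty/None bind DN -> leave the keys unset
        env["java.naming.security.principal"] = principal
        env["java.naming.security.credentials"] = credentials
    return env
```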
Answers:
username_1: Bind DN is required.
What is the full stacktrace?
username_0: Full stacktrace
```
java.lang.NullPointerException
at java.base/java.util.Hashtable.put(Hashtable.java:475)
at org.codelibs.fess.ldap.LdapManager.createEnvironment(LdapManager.java:82)
at org.codelibs.fess.ldap.LdapManager.createAdminEnv(LdapManager.java:91)
at org.codelibs.fess.ldap.LdapManager.validate(LdapManager.java:120)
at org.codelibs.fess.ldap.LdapManager.login(LdapManager.java:139)
at org.codelibs.fess.app.web.base.login.FessLoginAssist.lambda$resolveCredential$4(FessLoginAssist.java:150)
at org.lastaflute.web.login.TypicalLoginAssist$CredentialResolver.resolve(TypicalLoginAssist.java:191)
at org.codelibs.fess.app.web.base.login.FessLoginAssist.resolveCredential(FessLoginAssist.java:145)
at org.lastaflute.web.login.TypicalLoginAssist.findLoginUser(TypicalLoginAssist.java:120)
at org.lastaflute.web.login.TypicalLoginAssist.doLogin(TypicalLoginAssist.java:281)
at org.lastaflute.web.login.TypicalLoginAssist.loginRedirect(TypicalLoginAssist.java:250)
at org.codelibs.fess.app.web.login.LoginAction.login(LoginAction.java:55)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.lastaflute.web.ruts.GodHandableAction.invokeExecuteMethod(GodHandableAction.java:335)
at org.lastaflute.web.ruts.GodHandableAction.actuallyExecute(GodHandableAction.java:306)
at org.lastaflute.web.ruts.GodHandableAction.doExecute(GodHandableAction.java:146)
at org.lastaflute.web.ruts.GodHandableAction.lambda$transactionalExecute$0(GodHandableAction.java:137)
at org.lastaflute.db.jta.stage.JTATransactionStage.performTx(JTATransactionStage.java:102)
at org.lastaflute.db.jta.stage.JTATransactionStage.lambda$requiresNew$1(JTATransactionStage.java:59)
at org.lastaflute.di.tx.adapter.JTATransactionManagerAdapter.requiresNew(JTATransactionManagerAdapter.java:73)
at org.lastaflute.db.jta.stage.JTATransactionStage.requiresNew(JTATransactionStage.java:58)
at org.lastaflute.db.jta.stage.JTATransactionStage.selectable(JTATransactionStage.java:84)
at org.lastaflute.web.ruts.GodHandableAction.transactionalExecute(GodHandableAction.java:136)
at org.lastaflute.web.ruts.GodHandableAction.execute(GodHandableAction.java:117)
at org.lastaflute.web.ruts.ActionRequestProcessor.performAction(ActionRequestProcessor.java:253)
at org.lastaflute.web.ruts.ActionRequestProcessor.fire(ActionRequestProcessor.java:182)
at org.lastaflute.web.ruts.ActionRequestProcessor.process(ActionRequestProcessor.java:114)
at org.lastaflute.web.servlet.filter.RequestRoutingFilter.processAction(RequestRoutingFilter.java:266)
at org.lastaflute.web.servlet.filter.RequestRoutingFilter.routingToAction(RequestRoutingFilter.java:218)
at org.lastaflute.web.servlet.filter.RequestRoutingFilter.lambda$createActionFoundPathHandler$0(RequestRoutingFilter.java:184)
at org.lastaflute.web.path.ActionPathResolver.executeHandlerIfFound(ActionPathResolver.java:342)
at org.lastaflute.web.path.ActionPathResolver.mappingActionPath(ActionPathResolver.java:208)
at org.lastaflute.web.path.ActionPathResolver.handleActionPath(ActionPathResolver.java:113)
at org.lastaflute.web.servlet.filter.RequestRoutingFilter.doFilter(RequestRoutingFilter.java:118)
at org.lastaflute.web.servlet.filter.LastaToActionFilter.viaEmbeddedFilter(LastaToActionFilter.java:153)
at org.lastaflute.web.servlet.filter.LastaToActionFilter.viaInsideHookDeque(LastaToActionFilter.java:144)
at org.lastaflute.web.servlet.filter.LastaToActionFilter.viaInsideHook(LastaToActionFilter.java:128)
at org.lastaflute.web.servlet.filter.LastaToActionFilter.doFilter(LastaToActionFilter.java:120)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.lastaflute.web.servlet.filter.LastaShowbaseFilter.toNextChain(LastaShowbaseFilter.java:171)
at org.lastaflute.web.servlet.filter.LastaShowbaseFilter.lambda$viaEmbeddedFilter$3(LastaShowbaseFilter.java:150)
at org.lastaflute.web.servlet.filter.RequestLoggingFilter.actuallyFilter(RequestLoggingFilter.java:237)
at org.lastaflute.web.servlet.filter.RequestLoggingFilter.doFilter(RequestLoggingFilter.java:209)
at org.lastaflute.web.servlet.filter.LastaShowbaseFilter.viaEmbeddedFilter(LastaShowbaseFilter.java:148)
at org.lastaflute.web.servlet.filter.LastaShowbaseFilter.viaOutsideHookDeque(LastaShowbaseFilter.java:139)
at org.lastaflute.web.servlet.filter.LastaShowbaseFilter.viaOutsideHook(LastaShowbaseFilter.java:123)
at org.lastaflute.web.servlet.filter.LastaShowbaseFilter.doFilter(LastaShowbaseFilter.java:115)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.codelibs.fess.filter.WebApiFilter.doFilter(WebApiFilter.java:51)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
[Truncated]
at org.codelibs.fess.filter.EncodingFilter.doFilter(EncodingFilter.java:119)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:853)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1587)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:834)
```
username_1: Thank you for the info!
For Bind DN, the reason comes from https://github.com/codelibs/fess/issues/741#issuecomment-252801081.
username_0: I understand why this requires BindDN.
However, the LDAP I want to connect with does not need BindDN.
In that case, what should I do?
username_1: Added an option to skip the validation in #2185.
username_0: Thank you for your response!
Status: Issue closed
|
maemo-leste/bugtracker | 302121904 | Title: N900: OOPS when unbinding musb-hdrc
Question:
username_0: Looks similar to the vbus issue - maybe pm_runtime_get/put_sync is required?
```
[ 7232.484985] Unhandled fault: external abort on non-linefetch (0x1028) at 0xfa0ab414
[ 7232.485015] pgd = 9b0f7685
[ 7232.485046] [fa0ab414] *pgd=48011452(bad)
[ 7232.485076] Internal error: : 1028 [#1] PREEMPT ARM
[ 7232.485076] Modules linked in: u_ether u_serial bluetooth ecdh_generic ipv6 omaplfb ctr aes_arm_bs crypto_simd cryptd ccm pvrsrvkm cmt_speech nokia_modem ssi_protocol radio_platform_si4713 mousedev arc4 joydev hsi_char wl1251_spi crc7 wl1251 ir_lirc_codec mac80211 lirc_dev ir_rx51 rc_core smc91x gpio_keys rx51_battery pwm_omap_dmtimer isp1704_charger mii sha256_generic omap3_isp videobuf2_dma_contig v4l2_fwnode cfg80211 videobuf2_memops si4713 videobuf2_v4l2 adp1653 videobuf2_core v4l2_common tsc2005 tsc200x_core videodev bq27xxx_battery_i2c bq27xxx_battery bq2415x_charger leds_lp5523 leds_lp55xx_common media tsl2563 rtc_twl twl4030_vibra ff_memless omap_ssi lis3lv02d_i2c lis3lv02d hsi input_polldev ti_soc_thermal vfat fat [last unloaded: libcomposite]
[ 7232.485412] CPU: 0 PID: 2803 Comm: bash Not tainted 4.15.6+ #1
[ 7232.485412] Hardware name: Nokia RX-51 board
[ 7232.485473] PC is at musb_default_readl+0x4/0xc
[ 7232.485473] LR is at omap2430_musb_exit+0x2c/0x70
[ 7232.485504] pc : [<c05220f8>] lr : [<c052b218>] psr: a0020013
[ 7232.485504] sp : cb2afe70 ip : 00000000 fp : 00000000
[ 7232.485534] r10: 00000000 r9 : 00000051 r8 : 200f0013
[ 7232.485534] r7 : c2a65920 r6 : ce354d10 r5 : 00000000 r4 : ce52e010
[ 7232.485565] r3 : c05220f4 r2 : 00000000 r1 : fa0ab414 r0 : fa0ab000
[ 7232.485595] Flags: NzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none
[ 7232.485595] Control: 10c5387d Table: 8bfa4019 DAC: 00000051
[ 7232.485626] Process bash (pid: 2803, stack limit = 0x5105ec71)
[ 7232.485626] Stack: (0xcb2afe70 to 0xcb2b0000)
[ 7232.485656] fe60: ce52e010 ffffe000 ce388a10 c0521cdc
[ 7232.485687] fe80: ce388a10 ce388a10 c0a325e8 ce388a44 00000034 c04c34d8 ce388a10 00000000
[ 7232.485717] fea0: c0a325e8 c04c2170 00000011 ce388a10 c0a325e8 c0a2fef8 c9cde410 c04c0a8c
[ 7232.485717] fec0: 00000011 cd8f7e00 c9cde400 cb2aff88 c9cde410 c0259554 00000000 00000000
[ 7232.485748] fee0: 00000011 cb6b1600 c0259424 00000000 cb2aff88 000e4408 cb2ae000 c01eda74
[ 7232.485778] ff00: 00000100 00000000 cb3b0180 c020a948 cb3b0cc0 0000000a cb3b0cc0 00000001
[ 7232.485809] ff20: cb3b0cc0 0000000a 00000001 c020aefc 00000001 00000000 cb3b0cc0 c01fd4f4
[ 7232.485839] ff40: cb6b1600 00000002 cb6b1600 00000011 00000000 cb2aff88 000e4408 c01edd30
[ 7232.485870] ff60: cb6b1600 000e4408 00000011 cb6b1600 cb6b1600 000e4408 00000011 c0106fc4
[ 7232.485900] ff80: cb2ae000 c01edee8 00000000 00000000 00000011 00000011 000e4408 b6eb3d60
[ 7232.485900] ffa0: 00000004 c0106de0 00000011 000e4408 00000001 000e4408 00000011 00000000
[ 7232.485931] ffc0: 00000011 000e4408 b6eb3d60 00000004 000e4408 00000011 00000000 00000000
[ 7232.485961] ffe0: 00000000 bedd1eec b6e161bb b6e52b46 00000030 00000001 00000000 00000000
[ 7232.485992] [<c05220f8>] (musb_default_readl) from [<c052b218>] (omap2430_musb_exit+0x2c/0x70)
[ 7232.486022] [<c052b218>] (omap2430_musb_exit) from [<c0521cdc>] (musb_remove+0x110/0x158)
[ 7232.486053] [<c0521cdc>] (musb_remove) from [<c04c34d8>] (platform_drv_remove+0x24/0x3c)
[ 7232.486114] [<c04c34d8>] (platform_drv_remove) from [<c04c2170>] (device_release_driver_internal+0xd4/0x1dc)
[ 7232.486145] [<c04c2170>] (device_release_driver_internal) from [<c04c0a8c>] (unbind_store+0x58/0x8c)
[ 7232.486175] [<c04c0a8c>] (unbind_store) from [<c0259554>] (kernfs_fop_write+0x130/0x1a0)
[ 7232.486206] [<c0259554>] (kernfs_fop_write) from [<c01eda74>] (__vfs_write+0x1c/0x11c)
[ 7232.486236] [<c01eda74>] (__vfs_write) from [<c01edd30>] (vfs_write+0xb8/0x18c)
[ 7232.486267] [<c01edd30>] (vfs_write) from [<c01edee8>] (SyS_write+0x3c/0x74)
[ 7232.486297] [<c01edee8>] (SyS_write) from [<c0106de0>] (ret_fast_syscall+0x0/0x54)
[ 7232.486328] Code: e0801001 e5812000 e12fff1e e0801001 (e5910000)
[Truncated]
[ 7232.497955] [<c0125e00>] (do_exit) from [<c010aa44>] (die+0x234/0x26c)
[ 7232.497985] [<c010aa44>] (die) from [<c0101328>] (do_DataAbort+0xa4/0xb8)
[ 7232.497985] [<c0101328>] (do_DataAbort) from [<c067fa38>] (__dabt_svc+0x58/0x80)
[ 7232.498016] Exception stack(0xcb2afe20 to 0xcb2afe68)
[ 7232.498046] fe20: fa0ab000 fa0ab414 00000000 c05220f4 ce52e010 00000000 ce354d10 c2a65920
[ 7232.498077] fe40: 200f0013 00000051 00000000 00000000 00000000 cb2afe70 c052b218 c05220f8
[ 7232.498077] fe60: a0020013 ffffffff
[ 7232.498107] [<c067fa38>] (__dabt_svc) from [<c05220f8>] (musb_default_readl+0x4/0xc)
[ 7232.498138] [<c05220f8>] (musb_default_readl) from [<c052b218>] (omap2430_musb_exit+0x2c/0x70)
[ 7232.498168] [<c052b218>] (omap2430_musb_exit) from [<c0521cdc>] (musb_remove+0x110/0x158)
[ 7232.498199] [<c0521cdc>] (musb_remove) from [<c04c34d8>] (platform_drv_remove+0x24/0x3c)
[ 7232.498229] [<c04c34d8>] (platform_drv_remove) from [<c04c2170>] (device_release_driver_internal+0xd4/0x1dc)
[ 7232.498260] [<c04c2170>] (device_release_driver_internal) from [<c04c0a8c>] (unbind_store+0x58/0x8c)
[ 7232.498321] [<c04c0a8c>] (unbind_store) from [<c0259554>] (kernfs_fop_write+0x130/0x1a0)
[ 7232.498321] [<c0259554>] (kernfs_fop_write) from [<c01eda74>] (__vfs_write+0x1c/0x11c)
[ 7232.498352] [<c01eda74>] (__vfs_write) from [<c01edd30>] (vfs_write+0xb8/0x18c)
[ 7232.498382] [<c01edd30>] (vfs_write) from [<c01edee8>] (SyS_write+0x3c/0x74)
[ 7232.498413] [<c01edee8>] (SyS_write) from [<c0106de0>] (ret_fast_syscall+0x0/0x54)
[ 7232.498443] ---[ end trace 1dd18c3e3b5270bb ]---
```
Answers:
username_0: Adding the spinlock and/or the pm_runtime_{get,put}_sync does not fix this problem, so perhaps it is something else.
username_0: I could not figure it out. I wrote a mail to the maintainers: https://marc.info/?l=linux-kernel&m=152046250820457&w=2
username_0: OK, figured it out. Will send a patch for review and see what happens with it.
Status: Issue closed
username_0: Patch should be in 4.16, and it's already in our 4.15 tree. |
rrousselGit/freezed | 671835391 | Title: Not clear how to apply custom json serializable converter
Question:
username_0: I am trying to apply my custom json converter. The README states that it is possible to add a custom json serializable converter to a model created with freezed, but it does not seem to provide an example on how to actually apply it to a model.
I tried different strategies, but all seem unsuccessful.
``` dart
// 1. Outside the model
@freezed
@ModelConverter()
abstract class Model with _$Model {
  const factory Model({
    String name,
    String surname,
  }) = _Model;

  factory Model.fromJson(Map<String, dynamic> json) => _$ModelFromJson(json);
}
```
``` dart
// 2. On the factory constructor
@freezed
abstract class Model with _$Model {
  @ModelConverter()
  const factory Model({
    String name,
    String surname,
  }) = _Model;

  factory Model.fromJson(Map<String, dynamic> json) => _$ModelFromJson(json);
}
```
``` dart
// 3. On the factory fromJson method
@freezed
abstract class Model with _$Model {
  const factory Model({
    String name,
    String surname,
  }) = _Model;

  @ModelConverter()
  factory Model.fromJson(Map<String, dynamic> json) => _$ModelFromJson(json);
}
```
As stated, none of these solutions seems to work.
How can I apply my custom converter such that `toJson` and `fromJson` generated with json_serializable utilize my custom converter?
Answers:
username_1: The decorator isn't placed on the `Model` class, but at the places where `Model` is used:
```dart
@freezed
abstract class Another with _$Another {
  const factory Another({
    @ModelConverter() Model model,
  }) = _Another;

  factory Another.fromJson(Map<String, dynamic> json) => _$AnotherFromJson(json);
}
```
username_0: I see, now it works. Thanks.
I was having doubts because the `Model` inside `Another` class was contained in a `List`, so I thought it would not work.
```dart
@freezed
abstract class Another with _$Another {
  const factory Another({
    // Adding the decorator with a list works as well!
    @ModelConverter() List<Model> modelList,
  }) = _Another;

  factory Another.fromJson(Map<String, dynamic> json) => _$AnotherFromJson(json);
}
```
Do you think it would be useful to add this example to the docs (both for the plain `Model` and `List` examples)? Should I open a PR?
username_1: Sure, feel free to make a PR
Status: Issue closed
|
voximplant/react-native-voximplant | 498778921 | Title: Receive Video Call Crash in Samsung Android 5
Question:
username_0: java.lang.RuntimeException: glCreateShader() failed. GLES20 error: 0
at org.webrtc.GlShader.compileShader(GlShader.java:24)
at org.webrtc.GlShader.<init>(GlShader.java:42)
at org.webrtc.GlGenericDrawer.createShader(GlGenericDrawer.java:153)
at org.webrtc.GlGenericDrawer.prepareShader(GlGenericDrawer.java:230)
at org.webrtc.GlGenericDrawer.drawOes(GlGenericDrawer.java:163)
at org.webrtc.GlRectDrawer.drawOes(GlRectDrawer.java:16)
at org.webrtc.VideoFrameDrawer.drawTexture(VideoFrameDrawer.java:42)
at org.webrtc.VideoFrameDrawer.drawFrame(VideoFrameDrawer.java:219)
at org.webrtc.EglRenderer.renderFrameOnRenderThread(EglRenderer.java:655)
at org.webrtc.EglRenderer.lambda$vWDJEj1GWjHSjwoQQjEEK_IVOJE(EglRenderer.java)
at org.webrtc.-$$Lambda$EglRenderer$vWDJEj1GWjHSjwoQQjEEK_IVOJE.run(lambda)
at android.os.Handler.handleCallback(Handler.java:739)
at android.os.Handler.dispatchMessage(Handler.java:95)
at org.webrtc.EglRenderer$HandlerWithExceptionCallback.dispatchMessage(EglRenderer.java:100)
at android.os.Looper.loop(Looper.java:135)
at android.os.HandlerThread.run(HandlerThread.java:61)
Answers:
username_1: Hello!
Thank you for reaching out to us!
Could you please provide the information about Voximplant React Native SDK version the issue is reproducible with?
Best regards,
<NAME>
Status: Issue closed
|
coleifer/django-relationships | 403056139 | Title: This project needs a maintainer.
Question:
username_0: Hi @username_1 ,
This repository seems to lack maintenance, I propose myself as a new maintainer.
Could you give me push rights to your repository, or just transfer it to my account on GitHub?
Thank you.
Answers:
username_1: Just fork it.
Status: Issue closed
username_0: @username_1 can you give me access on PyPI for https://pypi.org/project/django-relationships/? My username is the same. |
flutter/flutter | 308268986 | Title: Flutter analyze ignores analysis_options.yaml glob rules.
Question:
username_0: Flutter analyze does NOT obey [analysis_options.yaml](https://www.dartlang.org/guides/language/analysis-options) glob rules. It allows only for full path excludes. (This may affect #15902 and #12726)
## Steps to Reproduce
```
% flutter create exapp ; cd exapp
% echo -e 'analyzer:\n exclude:\n - OFF/**\n - OFF/baddart.dart'> analysis_options.yaml
% mkdir OFF
% echo 'noneSuchType x = 0;'> OFF/fail.dart
% echo 'noneSuchType x = 0;'> OFF/baddart.dart
% flutter analyze
error • Undefined class 'noneSuchType' at OFF/fail.dart:1:1 • undefined_class
1 issue found.
```
## Logs
N/A
## Flutter Doctor
```
[✓] Flutter (Channel beta, v0.1.5, on Linux, locale pl_PL.utf8)
• Flutter version 0.1.5 at /testbed/lib/flutter
• Framework revision 3ea4d06340 (4 weeks ago), 2018-02-22 11:12:39 -0800
• Engine revision ead227f118
• Dart version 2.0.0-dev.28.0.flutter-0b4f01f759
[✓] Android toolchain - develop for Android devices (Android SDK 26.0.2)
• Android SDK at /testbed/lib/Android/Sdk
• Android NDK at /testbed/lib/Android/Sdk/ndk-bundle
• Platform android-27, build-tools 26.0.2
• ANDROID_HOME = /testbed/lib/Android/Sdk
• Java binary at: /testbed/lib/Android/android-studio/jre/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01)
[✓] Android Studio (version 3.0)
• Android Studio at /testbed/lib/Android/android-studio
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01)
[✓] Connected devices (1 available)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 6.0 (API 23) (emulator)
• No issues found!
```
Answers:
username_1: Which directory contains the `analysis_options.yaml` and what's the exclude rule?
username_0: @username_1 Precisely as stated in the 'steps to reproduce':
file is in the fresh project dir 'exapp' and contains
```
analyzer:
  exclude:
    - OFF/**
    - OFF/baddart.dart
```
The error `noneSuchType at OFF/fail.dart:1:1` tells us that while the analyzer skips `OFF/baddart.dart` as it should, it analyzes the `fail.dart` file it should not. The whole `OFF/` path is excluded by the `- OFF/**` rule; the analyzer should not step into that dir at all.
username_2: Seems to be the same as this one: https://github.com/dart-lang/sdk/issues/28754
username_0: @username_2 The same cause but discussion there digressed.
Possible locations of the culprit are:
```
dart-sdk/lib/analyzer/lib/src/lint/linter.dart#122
dart-sdk/lib/analyzer/lib/source/path_filter.dart#24
dart-sdk/lib/analyzer/lib/src/util/glob.dart#39 <- Glob constructor
```
I am too new to Dart and too short of time to analyze the analyzer further.
username_0: Just checked with `Flutter 0.3.2 • channel dev` and `Framework • revision 44b7e7d3f4`
Now **the analyzer picks up the globs from the opening example.**
P.S. **Beware of YAML** formatting slips (they do not affect the OP example).
Do we really need to fight YAML almost 20 years into the 21st century? TOML or even XML, please.
Status: Issue closed
username_2: It only works partially, as you can test with this project: https://github.com/username_2/rx_command/tree/problem_with_analyzer
If I exclude the whole `example` or `example\lib` folder it works, but if I only exclude `example\lib\json` it doesn't.
username_3: Probably it's this issue -> https://github.com/dart-lang/sdk/issues/28754
username_0: OK. I am reopening this. For me it *mostly* works, until it does not. username_3's pointer seems relevant.
username_0: Flutter analyze does NOT obey [analysis_options.yaml](https://www.dartlang.org/guides/language/analysis-options) glob rules. It allows only for full path excludes. (This may affect #15902 and #12726)
## Steps to Reproduce
```
# create new app exapp; change dir to exapp
% flutter create exapp ; cd exapp
# generate analysis_options.yaml and exclude OFF/** path there
% echo -e 'analyzer:\n exclude:\n - OFF/**\n - OFF/baddart.dart'> analysis_options.yaml
# make the OFF/ dir with two bad dart files in it
% mkdir OFF
% echo 'noneSuchType x = 0;'> OFF/fail.dart
% echo 'noneSuchType x = 0;'> OFF/baddart.dart
# call analyzer
% flutter analyze
error • Undefined class 'noneSuchType' at OFF/fail.dart:1:1 • undefined_class
1 issue found.
```
## Logs
N/A
## Flutter Doctor
```
[✓] Flutter (Channel beta, v0.1.5, on Linux, locale pl_PL.utf8)
• Flutter version 0.1.5 at /testbed/lib/flutter
• Framework revision 3ea4d06340 (4 weeks ago), 2018-02-22 11:12:39 -0800
• Engine revision ead227f118
• Dart version 2.0.0-dev.28.0.flutter-0b4f01f759
[✓] Android toolchain - develop for Android devices (Android SDK 26.0.2)
• Android SDK at /testbed/lib/Android/Sdk
• Android NDK at /testbed/lib/Android/Sdk/ndk-bundle
• Platform android-27, build-tools 26.0.2
• ANDROID_HOME = /testbed/lib/Android/Sdk
• Java binary at: /testbed/lib/Android/android-studio/jre/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01)
[✓] Android Studio (version 3.0)
• Android Studio at /testbed/lib/Android/android-studio
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-915-b01)
[✓] Connected devices (1 available)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 6.0 (API 23) (emulator)
• No issues found!
```
username_1: I think this should be reported in dart-lang/sdk.
I don't think this is specific to Flutter. @username_0 what do you think?
username_3: Isn't this just a dupe of https://github.com/dart-lang/sdk/issues/28754?
My guess is if you put the same code into a non-Flutter `.dart` script and run `dartanalyze` on it you'll see the same thing?
Status: Issue closed
username_0: 1st - AFAIR it was partially **resolved** in some subsequent flutter/master.
*Partially*, because paths specified in `analysis_options.yaml` are resolved from the **`cwd`** of the tool
(from where `flutter analyze` was run), not from the `analysis_options.yaml` file location.
So one either needs to have `analysis_options.yaml` in the top (project) dir with paths written relative to **.** *and* run `analyze` from there, or specify paths relative to `lib` and run `analyze` inside `lib/` (my current usage).
I just checked the above with Dart **2.0.0-dev.61.0.flutter-c95617b19c** and it works as described, so I am closing this particular bug now.
So - while there are annoying quirks - there is a way to exclude a subpath.
P.S. If and when someone starts to refactor the analysis tool, I'd suggest **.toml** for the configuration DSL.
username_1: Bot acted weird - closing again
Status: Issue closed
|
cerebral/url-mapper | 124965991 | Title: Documentation example: urlMapper is not a function
Question:
username_0: `````
const urlMapper = Mapper();
var matchedRoute = urlMapper('/bar/baz/:42',
````
urlMapper is not a function.
````
I believe it should urlMapper.map('/bar/baz/:42', ...
`````
Answers:
username_1: I will let @username_2 answer this one :-)
username_0: BTW I'm still using the require syntax
```js
var Mapper = require('url-mapper');
var urlMapper = Mapper();
var matchedRoute = urlMapper.map(....
```
username_2: Will check and fix as soon as I'm at a PC.
username_2: @username_0 That's very strange, since the default export is in fact a function. Just check out https://tonicdev.com/npm/url-mapper
username_2: You are completely right. Will fix it now :)
Status: Issue closed
|
e2o/vue-element-query | 340316139 | Title: Browser support
Question:
username_0: Do you mind adding browser support info to the README? I will note that it doesn't work in IE11, at least not for me.
Answers:
username_1: Browser support for said browser is fixed in `[email protected]`.
Section added in the README [here](https://github.com/username_1/vue-element-query#browser-support).
Status: Issue closed
|
fluentribbon/Fluent.Ribbon | 629096841 | Title: RibbonTabItem has MouseOver style after touch on touchscreen.
Question:
username_0: **Behaviour:**
- Touch the RibbonTabItem
- RibbonTabItem is selected but has no stroke (probably has MouseOver style)

- Touch somewhere else
- RibbonTabItem has proper style with stroke
 |
moyuanhuang/leetcode | 386610248 | Title: 281. Zigzag Iterator
Question:
username_0: https://leetcode.com/problems/zigzag-iterator/
Tips:
1. To know whether the zigzag iterator `hasNext`, you'll need to keep track of the **elements left** in each array.
2. To map each `iter` to its `left` counter, I stored a tuple in the `self.iters` list.
3. Deleting empty arrays can be done while popping and appending the `iters`: if an iter's `left` counter is not 0, append it back to the list so it will be visited in zigzag order; otherwise continue popping the list until you find one.
```python
# 281. Zigzag Iterator
# medium
class ZigzagIterator(object):
    def __init__(self, v1, v2):
        """
        Initialize your data structure here.
        :type v1: List[int]
        :type v2: List[int]
        """
        self.iters = [(0, iter(v1)), (1, iter(v2))]
        self.left = [len(v1), len(v2)]

    def next(self):
        """
        :rtype: int
        """
        index, iterator = -1, None
        while True:
            # Drop exhausted iterators; stop at the first with elements left.
            index, iterator = self.iters.pop(0)
            if self.left[index] != 0:
                break
        elem = next(iterator)
        # Re-append so the remaining arrays keep alternating.
        self.iters.append((index, iterator))
        self.left[index] -= 1
        return elem

    def hasNext(self):
        """
        :rtype: bool
        """
        return sum(self.left) != 0

# Your ZigzagIterator object will be instantiated and called as such:
# i, v = ZigzagIterator(v1, v2), []
# while i.hasNext(): v.append(i.next())
```
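A quick sanity run of the class above:
```python
# Interleaves until the shorter array runs out, then drains the rest.
i, v = ZigzagIterator([1, 2], [3, 4, 5, 6]), []
while i.hasNext():
    v.append(i.next())
print(v)  # [1, 3, 2, 4, 5, 6]
```
|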
godotengine/godot-demo-projects | 564363419 | Title: What's "SIDING_CHANGE_SPEED" in 2D/platfomer/player/player.gd?
Question:
username_0: <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot-demo-projects/issues?q=is%3Aissue
-->
**Which demo project is affected:**
Platformer 2D
**OS/device including version:**
Not OS/device related
**Issue description:**
I was looking at the player movement code and I can't figure out what "SIDING_CHANGE_SPEED" is. I think it's supposed to be how fast the player must be going before they turn to face that direction. Is it supposed to be "SLIDING_CHANGE_SPEED", or am I missing something?
Answers:
username_1: This demo has been rewritten, and this constant no longer exists in the rewrite.
Status: Issue closed
|
MicrosoftDocs/visualstudio-docs | 573569085 | Title: Unity + Visual Studio for Mac (community): Running and Debugging Unit Tests
Question:
username_0: ### Problem
I'm trying to figure out how to run my Unity game's unit tests on Visual Studio for Mac. I have the following 2 problems:
1. How to run my unit tests from inside of Visual Studio
1. How to debug my unit tests from inside of Visual Studio
For problem 1, I was hoping [visual studio's docs on working with Unity](https://docs.microsoft.com/en-us/visualstudio/mac/using-vsmac-tools-unity?view=vsmac-2019#feedback) would have some answers, but they do not.
For problem 2, I found [this other visual studio thread](https://developercommunity.visualstudio.com/content/problem/125430/how-do-i-debug-unit-tests.html), and what they suggest makes sense. I can see my test framework is being picked up by Visual Studio (see picture below)...

But after I run my tests I don't see any output:

### Request
Could you please add some documentation on the [visual studio docs for unity](https://docs.microsoft.com/en-us/visualstudio/mac/using-vsmac-tools-unity?view=vsmac-2019#feedback) to help with this process?
This question has been asked several times in different ways in the Unity forums but no one has answered it. See the links below:
- https://answers.unity.com/questions/1561447/running-unit-tests-from-visual-code.html
- https://answers.unity.com/questions/1360577/can-i-run-unit-tests-in-visual-studio-using-the-un.html
My hope is that with this in the documentation, there will be no more need to ask this question on the Unity forums.
### Specs:
```
Visual Studio for Mac (Community) Version 8.4.7
Unity Version 2019.2.17f1 Personal
macOS Mojave 10.14.6
MacBook Pro 15-inch, Mid 2015
```
Thank you.
Answers:
username_1: @username_0 VS doesn't currently support running Unit Tests from within the IDE. You would need to use the Unity editor and test runner to execute the tests.
The best next step is to use the Help > Suggest a Feature menu from Visual Studio for Mac and document what you expect to happen. I know the team is already investigating Unit Testing support but this would give you a public way to track it and the request can be updated as soon as it's available.
For future issues, it's best to use the Help > Report a Problem for the fastest response directly from the engineering team. This GitHub is for issues specifically related to the documentation.
username_0: @username_1 thank you for your prompt response.
I will do as you suggested. I thought I might have been missing a step, which is why I posted here.
I will mark this issue as resolved for now, then.
Status: Issue closed
|
transinform/transinform.github.io | 111121808 | Title: V-Nikolaeve-proshel-investizionniy-forum
Question:
username_0: 
Answers:
username_0: 
username_0:  |
jlippold/tweakCompatible | 671220309 | Title: `Filza File Manager` working on iOS 13.5
Question:
username_0: ```
{
"packageId": "com.tigisoftware.filza",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.tigisoftware.filza",
"deviceId": "iPhone12,3",
"url": "http://cydia.saurik.com/package/com.tigisoftware.filza/",
"iOSVersion": "13.5",
"packageVersionIndexed": false,
"packageName": "Filza File Manager",
"category": "Utilities",
"repository": "BigBoss",
"name": "Filza File Manager",
"installed": "3.7.7-15",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.tigisoftware.filza",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "File Manager for iPhone, iPad, iPod Touch. Supports iOS 7+",
"latest": "3.7.7-15",
"author": "TIGI Software",
"packageStatus": "Unknown"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```
Answers:
username_1: This issue is being closed because your review was accepted into the tweakCompatible website.
Tweak developers do not monitor or fix issues submitted via this repo.
If you have an issue with a tweak, contact the developer via another method.
Status: Issue closed
|
playframework/scalatestplus-play | 746663258 | Title: MockitoSugar is not longer part of ScalaTest
Question:
username_0: Examples still contain references to `MockitoSugar`, as can be seen [here](https://github.com/playframework/scalatestplus-play/search?q=MockitoSugar).
`MockitoSugar` is no longer part of ScalaTest as of version 3.1.1, which this project uses.
Therefore there are broken links in the docs, for example [here](https://github.com/playframework/scalatestplus-play/blob/fbe3fcf4d0019f1389e682c1c248ea7a8c876fe5/docs/manual/working/scalaGuide/main/tests/ScalaTestingWithScalaTest.md#mockito), and examples that cannot compile, such as [here](https://github.com/playframework/scalatestplus-play/blob/e420c027d44c93a7b2fc29cbc64eeede3f91f7c7/docs/manual/working/scalaGuide/main/tests/code/UserServiceSpec.scala) and [here](https://github.com/playframework/scalatestplus-play/blob/e420c027d44c93a7b2fc29cbc64eeede3f91f7c7/docs/manual/working/scalaGuide/main/tests/code/ExampleMockitoSpec.scala).
Answers:
username_1: Right, it moved from scalatest itself to scalatestplus-mockito, https://github.com/scalatest/scalatestplus-mockito
The examples definitely build: they are compiled as part of our CI (see https://travis-ci.com/github/playframework/scalatestplus-play/jobs/446711309#L3245 for an example of a manually-introduced failure)
In any case, the documentation definitely needs updating to point to the right places.
Would you be interested in contributing?
username_0: Sure! I hope I'll get to it in the next couple of days. Do you have any specific guidelines for this fix, except for what exists in the contributing part of the readme?
username_1: Great! I don't think I have any particular extra guidelines.
username_0: I can't compile the project in intellij. This is the error I am getting:
```
/Library/Java/JavaVirtualMachines/jdk1.8.0_212.jdk/Contents/Home/bin/java -Djline.terminal=jline.UnsupportedTerminal -Dsbt.log.noformat=true -Dfile.encoding=UTF-8 -Didea.managed=true -Dfile.encoding=UTF-8 -jar /Users/tshetah/Library/Application Support/JetBrains/IntelliJIdea2020.2/plugins/Scala/launcher/sbt-launch.jar
[info] welcome to sbt 1.3.13 (Oracle Corporation Java 1.8.0_212)
[info] loading global plugins from /Users/tshetah/.sbt/1.0/plugins/project/project
[info] loading global plugins from /Users/tshetah/.sbt/1.0/plugins/project
[info] loading settings for project global-plugins from plugins.sbt ...
[info] loading global plugins from /Users/tshetah/.sbt/1.0/plugins
[info] loading settings for project scalatestplus-play-build from plugins.sbt ...
[info] loading project definition from /Users/tshetah/work/workspace/games/scalatestplus-play/project
[info] loading settings for project scalatestplus-play-root from build.sbt ...
[info] set current project to scalatestplus-play-root (in build file:/Users/tshetah/work/workspace/games/scalatestplus-play/)
[error] Failed to derive version from git tags. Maybe run git fetch --unshallow? Version: HEAD+20201126-0124
```
I am using Java 8. @username_1, do you know what it can be?
username_1: Have you tried running `git fetch --unshallow`?
username_0: Yes. I get:
```
TomerShetahMac:scalatestplus-play tshetah$ git fetch --unshallow
fatal: --unshallow on a complete repository does not make sense
```
username_1: Hmm, that's weird. Apparently `dynverGitDescribeOutput.value.hasNoTags` returns 'true' for you, which is weird when you have the whole repo.
You could try `git fetch origin --tags`, if that still doesn't work then just comment out lines 42-49 from `build.sbt` (but remember not to commit that)
username_1: Updated in #296
Status: Issue closed
|
jason-p-pickering/data-pack-importer | 300987883 | Title: Deal with empty tables
Question:
username_0: 
We must write something to the sheet, even if no data exists.
Status: Issue closed
Answers:
username_0: Fixed by writing the sums or zeros to the correct row in all cases, even where there is no data. |
ValveSoftware/steam-for-linux | 1066680567 | Title: Gyro input on Switch Pro controller and controllers using the same protocol does not work.
Question:
username_0: #### Your system information
* Steam client version (build number or date): "Built: Nov 22 2021, at 22:12:42", but this has been happening for a month or so.
* Distribution (e.g. Ubuntu): I've tested on Manjaro and Ubuntu. It probably affects all distributions.
* Opted into Steam client beta?: No
* Have you checked for system updates?: Yes
#### Please describe your issue in as much detail as possible:
Since about a month ago (maybe longer, I can't remember), the gyro on Switch Pro controllers and controllers that use the same protocol (8BitDo SN30 Pro+ and 8BitDo Pro 2 are the ones I've tested) isn't recognized by Steam in Linux. I have the gyro set up to emulate a mouse or a mouse joystick in several games, and they only work in Windows right now. The same controller configuration doesn't work in Linux.
After I've calibrated the controller's gyro in Windows, and reconnected the controller in Linux, Steam's desktop controller configuration will recognize the gyro input (and move the mouse on the screen) until I launch a game, then, again, the gyro input is not recognized.
I have tried this on multiple Windows and Linux machines. Gyro controls work in Windows 10 Home and Pro on the three computers I've tried. Gyro controls do not work in Linux on the four computers (including two of the same computers that do work with Windows) I've tried. This includes both Manjaro Linux and Ubuntu Linux, with various different (recent) kernel versions.
This is most definitely a problem with Steam on Linux, since the exact same setups worked in every case until a recent Steam update, where it broke on every single Linux PC I play on.
#### Steps for reproducing this issue:
1. Connect a Switch Pro controller to Steam on Linux.
2. Set up a game to use the gyro as mouse or mouse joystick.
3. Launch the game and attempt to use the gyro.
Answers:
username_0: Btw, the controllers also have to be re-paired every time they are connected to the machine. I do this in Gnome by going to the Bluetooth settings page, clicking on the already connected controller, putting the controller into pairing mode, waiting for the "Connection" slider to slide to the disabled state, then sliding the toggle back to the enabled state.
When the controller reconnects to the device after being turned on, before I do this whole re-pairing procedure, the Steam client will list the controller as "Unknown Controller" in the Controller Settings page, and the controller simply doesn't work at all in Steam. It does work in other games and software, like Sonic Robo Blast 2 and RetroArch.
username_0: Also, if I can take this opportunity to fanboy a little bit, thank you so much for all you do with regard to gaming on Linux! I used to be a PC gamer in the late 2000s, then I switched to Linux and basically missed out on 12 years of gaming. Now, thanks to all your hard work on Proton, Steam, etc., I'm back into PC gaming and absolutely loving it!
username_1: I just ran into this same issue. Works fine on Windows, but booting into linux and the gyro stops controlling games.
username_0: This appears to be fixed. |
AbelHeinsbroek/chartjs-plugin-crosshair | 699505134 | Title: Hover not working in right side of chart
Question:
username_0: When the chart has a high number of points (256 horizontal in this case), hover is disabled on the right side of the chart. I have tried all modes, including the interpolate one included with the module, but the right side of the chart is still unresponsive.
**This is our configuration:**
```js
crosshair: {
  line: {
    color: '#0f0',
    width: 1
  },
  snap: {
    enabled: true,
  },
  sync: {
    boolean: false,
  },
  zoom: {
    enabled: false,
  },
}
```
Answers:
username_1: @username_0 are you sure there isn't something else overlayed over the right part of the chart? or maybe some overflow?
Try right click -> inspect element
Can you reproduce it in a simple [codepen](http://codepen.io/)/[stackblitz](https://stackblitz.com/)/jsfiddle ?
I am using it with 5000+ points without any issues |
blackbaud/skyux2 | 222487428 | Title: Grids have scrollbars on Windows
Question:
username_0: ### Expected behavior
A grid that doesn't have data outside the viewport should not have scrollbars
### Actual behavior
On Windows, all Sky Grids (both list and non-list) have scrollbars even if the data fits in the view area
### Steps to reproduce
Go to the Sky docs for the Grid and List View Grid on a Windows machine and observe
Answers:
username_1: In 2.0.0-beta.24
Status: Issue closed
|
thezerothcat/LaMulanaRandomizer | 406155363 | Title: Ver 2.11.0 - Unwinnable seed
Question:
username_0: 1263842316
The only things stopping me from continuing in this current seed are a number of different items or areas. Those include:
Hand Scanner
Bronze Mirror
Life Seal
Death Seal
Spring in the Sky
Chamber of Birth
Problem is that all these involve each other. Like some items are in those areas, or require one of those items to get. Hand Scanner for example is in Chamber of Birth's Perfume Location, and Bronze Mirror is also in Chamber of Birth, while Death Seal is in Isis' Room where I need the scanner. Spring in the Sky is also only accessible via Chamber of Birth. And that's where the problem lies. I have no way into Chamber of Birth.
All the entrances.
Chamber of Birth (Vishnu's Room) <==========> Gate of Time (Surface to Gate of Guidance)
Chamber of Birth (Skanda's Room) <==========> Temple of the Sun (Isis' Anterior Chamber)
Chamber of Birth (Saraswati's Room) <==========> Spring in the Sky (Mural of Oannes)
Chamber of Birth (Deva's Room) <==========> Endless Corridor (Second Endless Corridor)
Gate of Time Surface isn't a way in so that's out. Spring in the Sky...well that's the only way in(Outside of Mausoleum Pot, but without water that's not a way in either). And Endless Corridor's only activates after talking to the Philsopher in Spring in the Sky...which we already pointed out I can't get to. The last one is Temple of the Sun, which lead's me to Skanda's room...but then I have no way out except Grail...and it's only a one way since after the 1st time, I can't go back through.
Here's the files.
[gates.txt](https://github.com/username_1/LaMulanaRandomizer/files/2826220/gates.txt)
[items.txt](https://github.com/username_1/LaMulanaRandomizer/files/2826221/items.txt)
[randomizer-config.txt](https://github.com/username_1/LaMulanaRandomizer/files/2826222/randomizer-config.txt)
Seems it might be similar to another issues for Chamber of Birth I saw earlier.
Answers:
username_1: Based on an external conversation, it seems this was actually against 2.9.0. I was unable to reproduce this seed against newer versions, and the logic looks like the bug with Chamber of Birth fixed in 2.10.0. Closing until/unless the problem comes back on a newer version.
Status: Issue closed
|
Edvinas01/meme-grid | 314100020 | Title: Ability to pin liked memes
Question:
username_0: It would be nice if people could pin their loved memes in the grid. For example, it could be cookie-based.
Answers:
username_1: Currently clicking on a meme opens up its page. There could be a drop-down instead which would allow you to select an action. Though I want to keep things simple and lightweight.
@username_0 maybe you have some ideas how this could be implemented and want to give it a go?
username_1: @username_0 could you add more details how'd you imagine this feature? |
kubernetes/kubernetes | 401977383 | Title: Kubernetes namespaces stuck in terminating state
Question:
username_0: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!-->
**What happened**:
I'm currently assessing rook + k8s as a platform, and I was simulating different scenarios, including sporadic node restarts.
Then I decided to remove the installed rook operator + cluster and deleted those, but one of the namespaces got stuck forever:
```yaml
# kubectl get ns/rook-ceph
NAME        STATUS        AGE
rook-ceph   Terminating   20h
root@ip-10-250-45-112:~/src/rook/cluster/examples/kubernetes/ceph# kubectl get ns/rook-ceph -oyaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{},"name":"rook-ceph"}}
  creationTimestamp: "2019-01-22T01:22:10Z"
  deletionTimestamp: "2019-01-22T19:34:44Z"
  name: rook-ceph
  resourceVersion: "556406"
  selfLink: /api/v1/namespaces/rook-ceph
  uid: 243d2cae-1de4-11e9-b2da-0afb69308c7a
spec:
  finalizers:
  - kubernetes
status:
  phase: Terminating
```
As you can see it's stuck in the `kubernetes` finalizer. Editing the `ns/rook-ceph` and removing the finalizers section does not change anything.
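One likely reason editing the object has no effect is that `spec.finalizers` can only be cleared through the namespace's `finalize` subresource. A minimal sketch using the official `kubernetes` Python client (force-clearing finalizers can orphan resources, so use with care):
```python
# Sketch: clear the namespace finalizer via the /finalize subresource;
# editing the Namespace object itself leaves spec.finalizers untouched.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

ns = v1.read_namespace("rook-ceph")
ns.spec.finalizers = []  # drop the "kubernetes" finalizer
v1.replace_namespace_finalize("rook-ceph", ns)
```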
**What you expected to happen**:
It either should delete or expose errors somewhere
**How to reproduce it (as minimally and precisely as possible)**:
Not sure it's simple, sorry :-(
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"<KEY>", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"<KEY>", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
```
- Cloud provider or hardware configuration: aws t3 instances
- OS (e.g. from /etc/os-release): Ubuntu 18.04.1 LTS
- Kernel (e.g. `uname -a`): `4.15.0-1031-aws #33-Ubuntu SMP Fri Dec 7 09:32:27 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux`
- Install tools:
- Others:
Answers:
username_0: /sig api-machinery
username_0: It looks like there is a dupe https://github.com/kubernetes/kubernetes/issues/60807
Please close this one if it really is. |
pingcap/tidb-operator | 697934307 | Title: Backup with BR failed
Question:
username_0: ## Bug Report
**What version of Kubernetes are you using?**
**What version of TiDB Operator are you using?**
**What storage classes exist in the Kubernetes cluster and what are used for PD/TiKV pods?**
**What's the status of the TiDB cluster pods?**
**What did you do?**
**What did you expect to see?**
**What did you see instead?**
S3 storage:
Jinshan Yun (Kingsoft Cloud) S3

Answers:
username_1: When backing up to Kingsoft Cloud object storage (partially compatible with the AWS S3 protocol), retrieving the backup size fails because Kingsoft Cloud does not return an ETag for some objects.
I have contacted the Kingsoft Cloud team about this. They originally planned to ship a fix this Thursday, but when syncing on progress today they said it will slip due to scheduling; I will keep the progress updated here.
username_1: I will also post the Baidu Cloud test results here later.
username_1: Kingsoft Cloud's Qingdao data center has fixed the issue; I have verified it on my side, and Kingsoft Cloud is rolling the fix out network-wide.
username_0: @lichunzhu We may also consider recording the size in the metafile and retrieving it from there later.
username_0: https://github.com/pingcap/br/issues/550 is created to retrieve the backup size from BR.
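The metafile idea could look something like this (illustrative Python — BR and tidb-operator are written in Go, and the JSON layout below is a made-up stand-in for the real backupmeta):
```python
# Sketch: derive the backup size from per-file sizes recorded in the
# backup's own metafile instead of querying object-store metadata
# (which fails when ETag/size isn't returned).
import json

def backup_size_from_metafile(path):
    with open(path) as f:
        meta = json.load(f)
    return sum(item["size"] for item in meta.get("files", []))

print(backup_size_from_metafile("backupmeta.json"))  # hypothetical file
```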
Status: Issue closed
|
typora/typora-issues | 322475812 | Title: Can't open typora window on win10 1803
Question:
username_0: After I open Typora, I can see it on my taskbar and in Task Manager, and if I press Alt+Tab I can see the thumbnail of Typora too. But I can't switch to its window, and I can't open it from the Windows 10 multi-desktop (Task View) either.


Answers:
username_1: Typora version?
username_0: @username_1 0.9.50, and I found that if I open two Typora instances, the latter is all right; just the former has that problem
username_1: Try closing the Typora window and reopening it. Does that solve it?
And could you send `C:\Users\{username}\AppData\Roaming\Typora\typora.log` to us (<<EMAIL>>)
Best Regards
username_0: Thx very much. After I restarted my computer, this problem was solved. I think it may be due to the Win10 1803 version; I will roll back to 1709.
Status: Issue closed
|
enarx/enarx | 564153284 | Title: Migrate Enarx's test suite from Travis to Github Actions
Question:
username_0: Following positive results from #213 and enabling GHA with #229, we'd like to move our entire CI test suite over to GHA. This will allow some nice integration with Github's PR UI and sets up some good longer-term CI opportunities.<issue_closed>
Status: Issue closed |
argoproj/argo-cd | 359636897 | Title: [SPIKE] check compatibility with ArgoCD and nginx-ingress
Question:
username_0: User reported that ELB (passthrough) to nginx-ingress to argocd was not working properly.
Answers:
username_1: To add some more details:
going through nginx, and a tcp elb listener, i get rpc error: code = Unavailable desc = transport is closing
The only way i can get it to work is with a tcp elb listener and a node port service.
username_1: if someone has an example of an nginx ingress config that works, that would be great.
username_0: I figured out how to get nginx-ingress to work with ArgoCD:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  name: argocd-server-ingress
  namespace: argocd
spec:
  rules:
  - host: argocd.example.com
    http:
      paths:
      - backend:
          serviceName: argocd-server
          servicePort: https
```
Note that this requires nginx-controller to be run with the `--enable-ssl-passthrough` option, and a passthrough ELB.
username_0: Being documented as part of PR #616
Status: Issue closed
username_2: why require ssl to terminate at argocd-server level ?
I'd rather continue to handle SSL termination at ELB level like for my other services |
beer-garden/beer-garden | 1078792363 | Title: Cannot schedule jobs against commands with dynamic choices
Question:
username_0: When attempting to schedule a job against a command with a dynamic choices parameter via the UI, the dynamically populated parameter never resolves. The UI will show the spinner as if it is loading, but even after making the selection on the parameter that is used to populate the dynamic choice, it will never populate. |
sergey-brutsky/mi-home | 358479597 | Title: Gateway version 2.61.3
Question:
username_0: After updating the gateway to the latest available version (2.61.3) I can't find any devices anymore... I tried checking whether LAN mode was still active, but there's no way to bring up the two extra menus by tapping on the version number. Any idea?
Answers:
username_1: I will take a look
username_1: Hi,
I've updated gateway plugin up to version 2.61.3 in my MiHome mobile app.
Everything is working fine; I can see all my devices and send them commands.
I cannot reproduce your issue. Could you please elaborate a bit?
Thanks
Status: Issue closed
username_0: I'm really sorry for wasting your time. :-(
In gateway plugin 2.61.3 there's a much easier way to activate the "LAN mode": it is actually called "Wireless Mode" and it's not hidden anymore.
My trouble was unrelated to that: on the same day I installed VirtualBox on my PC, and the VirtualBox virtual network card was somehow conflicting with the UDP requests of Mi-Home-Lib.
Thank you for your patience and help. |
ef-labs/vertx-hk2 | 234238048 | Title: how to pass a parameter to a binder?
Question:
username_0: I am using vertx-jersey with vertx-hk2. I have one problem: is it possible to invoke a custom binder with a parameter?
For example, I have a StartupBinder as below:
```java
public class StartupBinder extends AbstractBinder {
    private JsonObject iEnvConfig;

    public StartupBinder(JsonObject aEnvConfig) {
        iEnvConfig = aEnvConfig;
    }

    @Override
    public void configure() {
        bind(iEnvConfig).to(JsonObject.class).named("EnvConfig");
    }
}
```
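For what it's worth, with plain HK2 the constructor argument can simply be passed when the binder instance is created, since the binder is instantiated by your own code. A minimal sketch (it assumes the config comes from the verticle's `config()`; wiring the locator into vertx-jersey's request handling depends on its configuration, so treat this only as an illustration):
```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;
import org.glassfish.hk2.api.ServiceLocator;
import org.glassfish.hk2.utilities.ServiceLocatorUtilities;

public class MainVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // Assumption: the environment config is the verticle's deployment config
        JsonObject envConfig = config();

        // Register the binder instance, passing the parameter as a plain constructor argument
        ServiceLocator locator = ServiceLocatorUtilities.bind(new StartupBinder(envConfig));

        // The bound instance can now be looked up by contract and name
        JsonObject resolved = locator.getService(JsonObject.class, "EnvConfig");
    }
}
```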
ORNL-CEES/DataTransferKit | 181433346 | Title: Github Projects for DTK
Question:
username_0: If we are interested in Kanban boards to track issues for DTK, we could try out the new GitHub Projects boards. Hopefully, they would be faster than the 3rd-party Waffle app.
Answers:
username_0: I did the initial population. GitHub Projects does not have any features right now, and moving between columns must be done manually. But quoting the GitHub post:
```
Although we’ll quickly be adding to Projects, our initial release currently supports:
- Tools built on top of Projects by some fantastic partners, including Waffle.io and ZenHub
```
intrigues me. Does waffle work on top of Projects? Projects in their current iteration are really fast.
username_1: You can install a bot from waffle which will update github's project
username_0: @username_1 Waffle Bot is not out yet, is it?
username_1: @username_0 I am not sure I haven't tried
username_0: https://github.com/integration/wafflebot
It seems like it's out. Can somebody install it on the repo?
username_0: So @username_3 installed the bot, but it does not seem to do anything (we tried adding a "need review" label to some issue). How does it work again?
username_2: Looked at this some more today. This is a nice basic capability and it will be interesting to see what additions are made in the near future. One thing I would like to see is the ability to only show items assigned to a single person on the board.
username_0: Yes, it is really bare right now, and you have to move all issues by hand as WaffleBot does not seem to work. But the good thing is it's fast (maybe for those very reasons).
-Andrey
username_1: For now we still have the waffle board. Here is the address @username_0 https://waffle.io/ORNL-CEES/Cap/join There you can filter the board by assigned person, etc.
username_0: So, we put this back in backlog to reevaluate later?
username_1: Yeah let's wait to see if github improves their kanban
username_2: I propose one more column named 'new' for the board. This are issues that have just been created but are not yet considered backlog.
username_0: I think new issues should automatically go to backlog unless you manually move them to ready. The only advantage of "new" I see is if we decide to go through them and decide on priority later. But I think we should do the same with backlog anyway.
username_2: I infer "backlog" to mean we know we have to do this work at some point but have no plans on doing it in the near future. You are right in that new vs. backlog is effectively a priority rating for tickets that are not yet ready. It will also prevent us from having too many things that are ready.
username_0: OK. Couple questions:
1. When creating a "new" issue can't you decide at that point whether it should go to backlog or ready? What information is missing for this decision?
2. If we don't want too many things in Ready, they should go to Backlog by definition.
username_2: The difference is the definition of backlog. Backlog should be a place where we put things that we want to keep on our radar but have no plans to work on at this time. New means a new issue that should be addressed in the near future, but a number of in-progress and ready tasks need to be completed before that issue can be addressed.
username_0: Hmm, is that really the definition? My understanding of Backlog was that it is everything we can't work on right now, but it does not say anything about "we have no plans for them".
@username_3 @username_1 What's your take on this?
username_2: It's not that we have no plans to work on backlog issues; it's that we don't have any plans in the near-term future to address them. Perhaps a few months down the road they would be addressed and then moved into ready at that point.
username_3: Here is how I see it: New issues should automatically go to "backlog". We select the ones we are going to work on by moving then to "ready".
username_3: We can always use tags to sort things out in "backlog".
username_2: ok - Damien has convinced me. I would like to see some sort of tag system for 'high priority', 'medium priority', and 'low priority'. That will help us prioritize things sitting in the back log
username_0: Closing for now, as there is no step we can take until github provides more functionality. Will reopen then.
Status: Issue closed
|
Maaz0070/Partsagram | 713623760 | Title: Project Feedback!
Question:
username_0: <img src="https://media.giphy.com/media/xTiTnGeUsWOEwsGoG4/giphy.gif", width=200 />
Looks like you **did not link your gif walkthrough** for this assignment or it is **not rendering (animating)** properly when viewed in the README 😬. The gif helps us to make sure we don't miss any required or optional stories you have completed.
**Render your gif:**
Once you have uploaded your gif to a site like [imgur](http://imgur.com/) you can render it using the following syntax.
```
<img src="my_gif_address.gif", width=250 />
```
**Make sure you have completed the following steps for your README:**
1. Make sure you have the correct README for this assignment; go to the "Setup" section in the Assignment tab for the corresponding week in the [course portal](https://courses.codepath.org).
1. Please mark off all completed stories `[x]`.
1. Add a link to your animated gif walkthrough to your README and make sure it renders (animates) when viewing the README.
Your assignment is incomplete until the README and gif are complete. Once completed, please push your updates and **submit your assignment again so we can regrade it**.
Still confused about how to properly submit your assignment? Check out the [Submitting Coursework](https://courses.codepath.org/snippets/ios_university/submitting_coursework.md) for detailed instructions.
Whenever you make updates to your project that require re-grading, you need to **re-submit** your project using the submit button on the associated assignment page in the course portal. This will flag your project as “updated” on our end so we know to re-grade.
You should re-submit your assignment anytime you:
- Update a previously incomplete assignment
- Add optional and additional features to an already completed assignment
Answers:
username_0: Looks like you're missing the following feature/s in your GIF walkthrough:
- User stays logged in across restarts. (1pt)
- User can log out. (1pt)
- User can add a new comment. (5pts)
/cc @username_0 |
NLNOG/ring-fpingd | 218333557 | Title: make ringfpingd xenial compatible
Question:
username_0: since ringfpingd can't be started by ansible, it seems ansible just quits early on
```
root@casablanca01:/var/lib/systemd# ps axuw | grep ringfping
root 12231 0.0 0.0 14516 972 pts/1 S+ 21:27 0:00 grep --color=auto ringfping
root@casablanca01:/var/lib/systemd# systemctl start ringfpingd
Job for ringfpingd.service failed because the control process exited with error code. See "systemctl status ringfpingd.service" and "journalctl -xe" for details.
root@casablanca01:/var/lib/systemd# ps axuw | grep ringfping
root 12240 0.0 0.0 14516 976 pts/1 S+ 21:28 0:00 grep --color=auto ringfping
root@casablanca01:/var/lib/systemd# systemctl status ringfpingd.service
● ringfpingd.service - LSB: Start ringfping daemon at boot time
Loaded: loaded (/etc/init.d/ringfpingd; bad; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2017-03-30 21:28:02 UTC; 6s ago
Docs: man:systemd-sysv-generator(8)
Process: 12234 ExecStart=/etc/init.d/ringfpingd start (code=exited, status=1/FAILURE)
Mar 30 21:28:02 casablanca01.ring.nlnog.net systemd[1]: Starting LSB: Start ringfping daemon at boot time...
Mar 30 21:28:02 casablanca01.ring.nlnog.net ringfpingd[12234]: starting ringfpingd...
Mar 30 21:28:02 casablanca01.ring.nlnog.net ringfpingd[12234]: daemon already running.
Mar 30 21:28:02 casablanca01.ring.nlnog.net systemd[1]: ringfpingd.service: Control process exited, code=exited status=1
Mar 30 21:28:02 casablanca01.ring.nlnog.net systemd[1]: Failed to start LSB: Start ringfping daemon at boot time.
Mar 30 21:28:02 casablanca01.ring.nlnog.net systemd[1]: ringfpingd.service: Unit entered failed state.
Mar 30 21:28:02 casablanca01.ring.nlnog.net systemd[1]: ringfpingd.service: Failed with result 'exit-code'.
root@casablanca01:/var/lib/systemd#
```
Answers:
username_1: Fixed: https://github.com/NLNOG/ring-fpingd/commit/dd6fcfe3fc1b6fae1c0b9ca1059e6c8c48d5ad87
Status: Issue closed
|
exercism/problem-specifications | 359623682 | Title: protein-translation: Codons/Acids table not rendering properly
Question:
username_0: See https://github.com/exercism/java/issues/1508
Problem might exist in the file https://github.com/exercism/problem-specifications/blob/master/exercises/protein-translation/description.md
However, I'm not sure how to fix it, because I'm not sure how the READMEs are generated (per https://github.com/exercism/website-copy/issues/429).
Answers:
username_1: Can confirm the table doesn't render properly on Chrome and Edge (OS Windows 10).
Would it be better to just have an embedded image of the codon table in the README? There are a lot of those available online. The issue would be that more codons would be listed than are currently used, so I'd definitely understand if writing the table in manually makes more sense.
username_2: The READMEs are just markdown. Each track may use `configlet` to generate a README that is specific to that track; `configlet` pulls from the exercise's description.md from this repo. If the site is not rendering the resulting markdown properly then this needs to be investigated further at the site level.
If the site, for whatever reason, is incapable of rendering the markdown correctly, then it may be necessary to rethink how the information is laid out in this exercise's description.md. I suspect the problem will be found and corrected, so it should not become necessary to make any changes here.
username_2: cc: @username_4
username_3: This is a [known issue](https://github.com/exercism/exercism/issues/4294)
Status: Issue closed
|
SlimeKnights/TinkersConstruct | 277585695 | Title: gravel and grout
Question:
username_0: TConstruct-1.12-2.7.4.34 running on server (not tried single player)
1-2 gravel + 2 sand + 2 clay = 2 or 4 grout
3+ gravel + 3+ sand + 3+ clay = flint
Answers:
username_1: Please try and reproduce this in single player. If it does not work, give details about the type of server.
Also, please give a Forge version and a list of other mods.
username_0: Buildcraft 7.99.12
Journeymap 1.12.2-5.5.2
LLOverlay Reloaded-1.1.4-mc1.12
Mantle 1.12-1.3.1.21
Tconstruct 1.12-2.7.4.34
Forge 1.12.2-14.23.1.2554
new random single player map
Crafting Table: how many gravel are accepted before grout turns to flint varies per attempt:
#1-4, #2-23, #3-30, #4-4, #5-3, #6-2, #7-2
Crafting Station:
all attempts: 2 gravel = grout, 3+ gravel = flint
Took out all mods except Forge and TC/Mantle, recreated the map: same results.
Forge was updated to 2555 after this thread started.
Tested SP: cannot reproduce with only 1 stack of 64 of each at the crafting station and crafting table.
After the server Forge upgrade to 2555, also unable to reproduce with 1 stack of each item in the crafting station and crafting table.
username_2: Sounds like something is messing with recipes, probably serverside. Try reproducing with only TiC, Mantle and Forge version.
Status: Issue closed
username_1: Actually, this is a Forge bug that was recently fixed, which is why we wanted the Forge version. According to the changelog, [2555 fixed shapeless recipes](https://files.minecraftforge.net/maven/net/minecraftforge/forge/1.12.2-14.23.1.2556/forge-1.12.2-14.23.1.2556-changelog.txt).
bcgov/entity | 418975938 | Title: QA - define pre-release checklist and prod smoke test suite
Question:
username_0: ### Improve release planning/execution
## Description:
In order to help formalise and improve the repeatability of our releases, a new issue template will be devised. This template will include the pre-release checklist as well as the basic smoke test checklist for NameX.
This is an experiment and may need refinement going forward.
Acceptance / DoD:
- [N/A] Product Owner advised if task >= 1 full-day, or forms part of the business day
- [N/A] Requires deployments
- [N/A] Test coverage acceptable
- [ ] Peer Reviewed
- [N/A] Production burn in completed<issue_closed>
Status: Issue closed |
IRIS-Solutions-Team/IRIS-Toolbox | 701866715 | Title: Filtering data in non-linear model
Question:
username_0: Hi, I am modeling endogenous monetary credibility, so the model is non-linear.
Here is the part of the code for credibility:
```
%Inflation expectation
e_dl_cpi = #beta*dl_cpi_b + (1-beta)*dl_cpi_f;
dl_cpi_b = #psi_b*dl_cpi_tar + (1-psi_b)*dl_cpi{-1};
psi_b = #sw_1*exp(-dl_cpi_b_dev^2/(2*theta^2));
dl_cpi_b_dev = d4l_cpi{-1} - dl_cpi_tar{-1};
dl_cpi_f = #psi_f*dl_cpi_tar + (1-psi_f)*dl_cpi{+1};
psi_f = #sw_2*exp(-dl_cpi_f_dev^2/(2*theta^2));
dl_cpi_f_dev = d4l_cpi{+4} - dl_cpi_tar{+4};
%Credibility
c = #beta*psi_b + (1-beta)*psi_f;
```
I want to run the Kalman filter on the historical data to back out the unobservable variables, so I use the `filter` command:
```
[mfilt,filtdb,se2,delta,pe] = filter(m_2, data, rng); a = filtdb.mean;
```
But the results of this filtering are very strange. For example (here theta=0.25), the filtered variable psi_b = 1, but when we substitute the filtered data of dl_cpi_b_dev into the formula for psi_b we get a different result. How is this possible? Maybe some additional options are needed in the filter function. Thank you.
```
[a.dl_cpi_dev, a.psi_b]
ans =
Series Object: 72-by-2
2004Q1: 0.47184 1
2004Q2: 0.062653 1
2004Q3: 0.70847 1
2004Q4: 0.50128 1
2005Q1: 0.65682 1
2005Q2: -0.85364 1
2005Q3: -0.76866 1
2005Q4: -1.1383 1
2006Q1: -0.15315 1
2006Q2: 0.58923 1
2006Q3: 0.96273 1
2006Q4: 0.39123 1
2007Q1: 0.73754 1
2007Q2: 1.3775 1
2007Q3: 1.233 1
2007Q4: 1.6434 1
2008Q1: 0.62704 1
2008Q2: -2.2313 1
2008Q3: -1.8699 1
2008Q4: 2.1203 1
2009Q1: 0.30439 1
2009Q2: 1.5575 1
2009Q3: 2.0403 1
2009Q4: 2.0228 1
2010Q1: 0.47708 1
2010Q2: -0.91378 1
2010Q3: -0.29327 1
2010Q4: -0.5697 1
2011Q1: -0.36393 1
2011Q2: -0.10079 1
2011Q3: -0.60599 1
[Truncated]
2017Q1: -0.136 1
2017Q2: -0.31626 1
2017Q3: -0.35046 1
2017Q4: -0.49865 1
2018Q1: -0.0144 1
2018Q2: -1.8557 1
2018Q3: -1.1711 1
2018Q4: -1.2652 1
2019Q1: -1.6101 1
2019Q2: -2.0622 1
2019Q3: -2.6186 1
2019Q4: -2.9902 1
2020Q1: -2.0937 1
2020Q2: -1.9244 1
2020Q3: -1.3508 1
2020Q4: -0.82815 1
2021Q1: -0.43604 1
2021Q2: -0.1905 1
2021Q3: -0.067506 1
2021Q4: -0.029443 1
```
Answers:
username_1: Hi!
If you want to simulate the true non-linear filter with add-in factors via intermediate 'simulate' commands, try using the following option:
```
[mfilt,filtdb,se2,delta,pe] = filter(m_2, data, rng, 'simulate=', {'method=','selective', 'nonlinPer=',20})
```
Here, the filtering process will include intermediate simulations to derive add-in factors for your choice of earmarked equations (those marked with '#='). These add-in factors will be the difference between the linearised form of your equations and their actual non-linear form.
Best,
AndreyO
username_2: Andrey,
Thank you for mentioning how to simulate a model within the Kalman filter steps. However, with any 2020 IRIS I am getting the error below (Matlab 2020b, IRIS 20201008). Thanks for the help.
```
Error using model.simulate
The value of 'D' is invalid. It must satisfy the function: isstruct.
Error in extend.InputParser/parse (line 54)
    parse@inputParser(this, varargin{:});
Error in model/simulate (line 282)
    parser.parse(this, inputData, range);
Error in shared.Kalman/prepareKalmanOptions/herePrepareSimulateSystemProperty (line 295)
    opt.Simulate = simulate( ...
Error in shared.Kalman/prepareKalmanOptions (line 234)
    herePrepareSimulateSystemProperty( );
Error in model/filter (line 370)
    [kalmanOpt, timeVarying] = prepareKalmanOptions(this, filterRange, pp.UnmatchedInCell{:});
```
dKvale/aqi-watch | 226900889 | Title: 1-hr AQI at 97 for OZONE
Question:
username_0: **AQI Watch** </br>1 monitor is reporting a 1-hr AQI above 90. A value of **97** for OZONE was reported at **Sioux Falls** (South Dakota). For more details visit the <a href=http://dkvale.github.io/aqi-watch> AQI Watch</a>. </br>_May 07, 2017 at 17:35 CDT_ </br> </br>Attention: @rrobers |
hyperboria/docs | 226723116 | Title: New Fedora users confused by fedora.md
Question:
username_0: So fedora.md was reverted to instructions for building cjdns yourself, which is fine, but it should at least *mention* that cjdns is a system package in Fedora. New users hear about hyperboria, come here, try to follow the instructions (they generally get stuck at the "edit cjdns.service" step), and give up.
If they aren't specifically setting out to build it themselves, the docs here are unhelpful. I suggest a brief mention at the top to that effect:
> If you are not intending to build cjdns yourself, you are probably better off installing the Fedora system version: `dnf install cjdns cjdns-selinux cjdns-tools`, then see README_Fedora.md in /usr/share/docs/cjdns.
> If you *do* want to build it yourself, proceed.
Answers:
username_1: File a pull request?
Status: Issue closed
username_2: I went ahead and did it myself -- looks like @username_0 had done a lot of work getting it documented before but it got lost in the shuffle. Credit for 99% of the info in my latest commit goes to him.
username_0: Thanks! Looks good. |
ninja-build/ninja | 1164760731 | Title: [](https://app.fossa.com/projects/git%2Bgithub.com%2FPinkDiamond1%2Flightning-rfc?ref=badge_shield)
Question:
username_0: # This file is used to build ninja itself. # It is generated by configure.py. ninja_required_version = 1.3 # The arguments passed to configure.py, for rerunning it. configure_args = root = . builddir = build cxx = g++ ar = ar cflags = -g -Wall -Wextra -Wno-deprecated -Wno-missing-field-initializers $ -Wno-unused-parameter -fno-rtti -fno-exceptions -fvisibility=hidden $ -pipe '-DNINJA_PYTHON="python"' -O2 -DNDEBUG -DUSE_PPOLL $ -DNINJA_HAVE_BROWSE -I. ldflags = -L$builddir rule cxx command = $cxx -MMD -MT $out -MF $out.d $cflags -c $in -o $out description = CXX $out depfile = $out.d deps = gcc rule ar command = rm -f $out && $ar crs $out $in description = AR $out rule link command = $cxx $ldflags -o $out $in $libs description = LINK $out # browse_py.h is used to inline browse.py. rule inline command = "$root/src/inline.sh" $varname < $in > $out description = INLINE $out build $builddir/browse_py.h: inline $root/src/browse.py | $root/src/inline.sh varname = kBrowsePy build $builddir/browse.o: cxx $root/src/browse.cc || $builddir/browse_py.h # the depfile parser and ninja lexers are generated using re2c. rule re2c command = re2c -b -i --no-generation-date -o $out $in description = RE2C $out build $root/src/depfile_parser.cc: re2c $root/src/depfile_parser.in.cc build $root/src/lexer.cc: re2c $root/src/lexer.in.cc # Core source files all build into ninja library. build $builddir/build.o: cxx $root/src/build.cc build $builddir/build_log.o: cxx $root/src/build_log.cc build $builddir/clean.o: cxx $root/src/clean.cc build $builddir/debug_flags.o: cxx $root/src/debug_flags.cc build $builddir/depfile_parser.o: cxx $root/src/depfile_parser.cc build $builddir/deps_log.o: cxx $root/src/deps_log.cc build $builddir/disk_interface.o: cxx $root/src/disk_interface.cc build $builddir/edit_distance.o: cxx $root/src/edit_distance.cc build $builddir/eval_env.o: cxx $root/src/eval_env.cc build $builddir/graph.o: cxx $root/src/graph.cc build $builddir/graphviz.o: cxx $root/src/graphviz.cc build $builddir/lexer.o: cxx $root/src/lexer.cc build $builddir/line_printer.o: cxx $root/src/line_printer.cc build $builddir/manifest_parser.o: cxx $root/src/manifest_parser.cc build $builddir/metrics.o: cxx $root/src/metrics.cc build $builddir/state.o: cxx $root/src/state.cc build $builddir/util.o: cxx $root/src/util.cc build $builddir/version.o: cxx $root/src/version.cc build $builddir/subprocess-posix.o: cxx $root/src/subprocess-posix.cc build $builddir/libninja.a: ar $builddir/browse.o $builddir/build.o $ $builddir/build_log.o $builddir/clean.o $builddir/debug_flags.o $ $builddir/depfile_parser.o $builddir/deps_log.o $ $builddir/disk_interface.o $builddir/edit_distance.o $ $builddir/eval_env.o $builddir/graph.o $builddir/graphviz.o $ $builddir/lexer.o $builddir/line_printer.o $builddir/manifest_parser.o $ $builddir/metrics.o $builddir/state.o $builddir/util.o $ $builddir/version.o $builddir/subprocess-posix.o # Main executable is library plus main() function. build $builddir/ninja.o: cxx $root/src/ninja.cc build ninja: link $builddir/ninja.o | $builddir/libninja.a libs = -lninja # Tests all build into ninja_test executable. 
build $builddir/build_log_test.o: cxx $root/src/build_log_test.cc build $builddir/build_test.o: cxx $root/src/build_test.cc build $builddir/clean_test.o: cxx $root/src/clean_test.cc build $builddir/depfile_parser_test.o: cxx $root/src/depfile_parser_test.cc build $builddir/deps_log_test.o: cxx $root/src/deps_log_test.cc build $builddir/disk_interface_test.o: cxx $root/src/disk_interface_test.cc build $builddir/edit_distance_test.o: cxx $root/src/edit_distance_test.cc build $builddir/graph_test.o: cxx $root/src/graph_test.cc build $builddir/lexer_test.o: cxx $root/src/lexer_test.cc build $builddir/manifest_parser_test.o: cxx $root/src/manifest_parser_test.cc build $builddir/ninja_test.o: cxx $root/src/ninja_test.cc build $builddir/state_test.o: cxx $root/src/state_test.cc build $builddir/subprocess_test.o: cxx $root/src/subprocess_test.cc build $builddir/test.o: cxx $root/src/test.cc build $builddir/util_test.o: cxx $root/src/util_test.cc build ninja_test: link $builddir/build_log_test.o $builddir/build_test.o $ $builddir/clean_test.o $builddir/depfile_parser_test.o $ $builddir/deps_log_test.o $builddir/disk_interface_test.o $ $builddir/edit_distance_test.o $builddir/graph_test.o $ $builddir/lexer_test.o $builddir/manifest_parser_test.o $ $builddir/ninja_test.o $builddir/state_test.o $ $builddir/subprocess_test.o $builddir/test.o $builddir/util_test.o | $ $builddir/libninja.a libs = -lninja # Ancillary executables. build $builddir/build_log_perftest.o: cxx $root/src/build_log_perftest.cc build build_log_perftest: link $builddir/build_log_perftest.o | $ $builddir/libninja.a libs = -lninja build $builddir/canon_perftest.o: cxx $root/src/canon_perftest.cc build canon_perftest: link $builddir/canon_perftest.o | $builddir/libninja.a libs = -lninja build $builddir/depfile_parser_perftest.o: cxx $ $root/src/depfile_parser_perftest.cc build depfile_parser_perftest: link $builddir/depfile_parser_perftest.o | $ $builddir/libninja.a libs = -lninja build $builddir/hash_collision_bench.o: cxx $root/src/hash_collision_bench.cc build hash_collision_bench: link $builddir/hash_collision_bench.o | $ $builddir/libninja.a libs = -lninja build $builddir/manifest_parser_perftest.o: cxx $ $root/src/manifest_parser_perftest.cc build manifest_parser_perftest: link $builddir/manifest_parser_perftest.o | $ $builddir/libninja.a libs = -lninja # Generate a graph using the "graph" tool. rule gendot command = ./ninja -t graph all > $out rule gengraph command = dot -Tpng $in > $out build $builddir/graph.dot: gendot ninja build.ninja build graph.png: gengraph $builddir/graph.dot # Generate the manual using asciidoc. rule asciidoc command = asciidoc -b docbook -d book -o $out $in description = ASCIIDOC $out rule xsltproc command = xsltproc --nonet doc/docbook.xsl $in > $out description = XSLTPROC $out build $builddir/manual.xml: asciidoc $root/doc/manual.asciidoc build $root/doc/manual.html: xsltproc $builddir/manual.xml | $ $root/doc/style.css $root/doc/docbook.xsl build manual: phony || $root/doc/manual.html rule dblatex command = dblatex -q -o $out -p doc/dblatex.xsl $in description = DBLATEX $out build $root/doc/manual.pdf: dblatex $builddir/manual.xml | $ $root/doc/dblatex.xsl # Generate Doxygen. 
rule doxygen command = doxygen $in description = DOXYGEN $in doxygen_mainpage_generator = $root/src/gen_doxygen_mainpage.sh rule doxygen_mainpage command = $doxygen_mainpage_generator $in > $out description = DOXYGEN_MAINPAGE $out build $builddir/doxygen_mainpage: doxygen_mainpage README COPYING | $ $doxygen_mainpage_generator build doxygen: doxygen $root/doc/doxygen.config | $builddir/doxygen_mainpage # Regenerate build files if build script changes. rule configure command = ${configure_env}python $root/configure.py $configure_args generator = 1 build build.ninja: configure | $root/configure.py $root/misc/ninja_syntax.py default ninja # Packaging rule rpmbuild command = misc/packaging/rpmbuild.sh description = Building rpms.. build rpm: rpmbuild build all: phony ninja ninja_test build_log_perftest canon_perftest $ depfile_parser_perftest hash_collision_bench manifest_parser_perftest<issue_closed>
Status: Issue closed |
seattlerb/ruby_parser | 286806464 | Title: Ruby 2.5 support
Question:
username_0: Could you please add support for Ruby 2.5.x? Thx.
Answers:
username_1: done! thanks!
Status: Issue closed
username_0: Thank you!
username_2: Hey @username_1, is there any chance of getting a new release of the gem? The latest version seems to have been released in [mid 2017](https://rubygems.org/gems/ruby_parser), without support for Ruby 2.5.
My lib [fasterer](https://github.com/username_2/fasterer) depends heavily on the ruby parser, and people have problems with the lack of Ruby 2.5 support: https://github.com/username_2/fasterer/issues/46
Thanks! |
Sitecore/Sitecore-Instance-Manager | 991229965 | Title: Change the displaying of the SIM version on the "About" dialog
Question:
username_0: Currently, the version and revision are displayed in the following way:

**Expected look:**
Version: 1.10.2
Revision: 923<issue_closed>
Status: Issue closed |
graphile/graphile-engine | 709262254 | Title: Is it possible with graphql-parse-resolve-info to take the available arguments
Question:
username_0: *my graphql query*
query(
$numberOfDesiredElementsDescriptionOfTours: GetDesiredElementsFromAnArrayInput!
$numberOfDesiredElementsToursDescription: GetDesiredElementsFromAnArrayInput!
) {
my {
getUser {
descriptionOfTours {
tours(
numberOfDesiredElementsDescriptionOfTours: $numberOfDesiredElementsDescriptionOfTours
) {
toursDescription {
tours(
numberOfDesiredElementsToursDescription: $numberOfDesiredElementsToursDescription
) {
id
}
}
}
}
}
}
}
i need to get arguments `numberOfDesiredElementsDescriptionOfTours, numberOfDesiredElementsToursDescription` in getUser.
Can I use the graphql-parse-resolveinfo package to take the arguments numberOfDesiredElementsDescriptionoftours, numberOfDesiredElementsToursDescription into getUser?
Answers:
username_1: Yes, but unless you’re using it for optimisation purposes you absolutely should not do that because it breaks the caching model of GraphQL which could cause your clients to corrupt their stores. Parent fields shouldn’t change what data they resolve to based on child field arguments.
[semi-automated message] Thanks for your question; hopefully we're well on the way to helping you solve your issue. This doesn't currently seem to be a bug in the library so I'm going to close the issue, but please feel free to keep requesting help below and if it does turn out to be a bug we can definitely re-open it 👍
You can also ask for help in the #help-and-support channel in [our Discord chat](http://discord.gg/graphile).
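If you do decide to use it for optimisation, a minimal sketch of reading a nested field's arguments from the resolve info could look like this (the type names `User` and `DescriptionOfTours` are assumptions inferred from the query shape, not known schema names):
```js
const { parseResolveInfo } = require('graphql-parse-resolve-info');

// Resolver for the getUser field (sketch only)
function getUserResolver(parent, args, context, resolveInfo) {
  const parsed = parseResolveInfo(resolveInfo);
  // parsed is a tree of { name, alias, args, fieldsByTypeName }
  const descriptionOfTours = parsed.fieldsByTypeName.User.descriptionOfTours;
  const tours = descriptionOfTours.fieldsByTypeName.DescriptionOfTours.tours;
  console.log(tours.args.numberOfDesiredElementsDescriptionOfTours);
  // ...fetch the user, treating the argument purely as an optimisation hint
}
```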
Status: Issue closed
|
STORM-IRIT/Radium-Engine | 281735420 | Title: main-app "Clear scene" segfault or make the application chaotic
Question:
username_0: On branch master, pushing the "Clear scene" button when the scene is empty or when no scene is selected causes a segfault.
When the scene is not empty, it removes more than the scene: the "Test" object that is marked as
```// FIXME (Florian): this should disappear```
and the FeatureTrackingEntity that appears in the Engine Objects Tree are also removed.
Then the application crashes at the next operation...
Answers:
username_1: Can we close this?
username_2: Crash still there!
Status: Issue closed
username_2: Fixed by PR #269, closing!
log4js-node/log4js-node | 605351419 | Title: how to open debug in source code
Question:
username_0: Hi,
I seem debug() in source code, how can I open it?
thx
Answers:
username_1: See **package.json**: **debug** is a third-party dependency.
If you want to use it in your modules, it must be loaded separately. For example:
```
const messageProducer = require('debug')('smartName');
messageProducer.enabled = true;
messageProducer('this will be written to console');
```
For questions related to it, its own repo is probably more appropriate : https://github.com/visionmedia/debug
Status: Issue closed
|
ria-ee/X-Road | 218679687 | Title: How can I reach the CS and SS config homepage?
Question:
username_0: Hello;
Some questions for you help.
I have installed xroad By using LXD and it shows running.
See below

But I can't ping the ipv4 address and how can I open the config homepage in browser?
the port and the application name?
Answers:
username_1: @username_2
Hi, I have run the command `ansible-playbook -i hosts/lxd_hosts_from_local.txt xroad_dev.yml` successfully.
How can I access the central server and the security server?
username_2: Have you tried accessing https://ip-address:4000?
username_1: Yes, I tried it, but how do I access the security server and central server from a browser on my host computer, not inside the virtual box? My host computer's IP is 192.168.1.109 and it can't ping the security server IP 10.29.35.54. Does LXD need any extra configuration?
username_3: It would be much easier to just use a browser inside your virtual machine. Just use ubuntu-desktop, or if you currently have ubuntu-server, install "Ubuntu desktop" using "sudo tasksel".
If you really need to use a browser on your host machine, it would require some complicated SSH tunneling or routing to make that possible.
LXD creates an internal "lxdbr0" interface with an internal network inside the virtual machine, and your host machine cannot directly use that.
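For illustration, one such tunnel might look like this (a sketch; it assumes SSH access into the VM and uses the security server IP from the comment above):
```
# Forward local port 4000 to the security server UI inside the VM
ssh -L 4000:10.29.35.54:4000 <user>@<vm-ip>
# then browse to https://localhost:4000 on the host machine
```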
username_1: @username_3 Hi, if the host machine cannot directly use that, it is hard for a team to develop X-Road provider and consumer interfaces against X-Road.
username_4: @username_1 You can forward ports from Virtual Machine to Host. If you are using VirtualBox, here is general guide: https://www.howtogeek.com/122641/how-to-forward-ports-to-a-virtual-machine-and-use-it-as-a-server/.
There is also possibility use Vagrantfile to use that for also:
https://github.com/ria-ee/X-Road/blob/develop/src/Vagrant.md
Status: Issue closed
username_5: @username_0 I am in Beijing. Have you ever come across my question? It is the only open issue.
microsoft/TypeScript-Website | 829002306 | Title: Docs: Type Manipulation / Mapped Types / Further Exploration / Twoslash not enabled (handbook v2)
Question:
username_0: **Page URL:** https://www.typescriptlang.org/docs/handbook/2/mapped-types.html#further-exploration
**Issue:**
In the last paragraph, the code sample is not interactive. I guess the "Twoslash" thing is not enabled.

**Recommended Fix:**
Enable Twoslash for the code sample.
Answers:
username_1: You're right! Thanks
Status: Issue closed
username_0: Oh, so easy?!
I was going to take a look at it tonight 😄
username_1: Hah, yeah, sorry! |
ARPASMR/web | 402758801 | Title: tabelle di legenda
Question:
username_0: In the left-hand panel, add the two entries "Destinazioni" and "Classificazioni",
and visually differentiate the first two tables from the legend tables, for example with an empty line or by indenting as follows:
Consultazione Anagrafica
------------------------------------
- Stazioni
- Sensori
- Tipologie
- Destinazioni
- Classificazioni
--------------------------------
The two new pages should show the following fields, respectively (a query sketch follows below):
A_Destinazioni.IDdestinazione concatenated with A_Destinazioni.Destinazione as DBmeteo
A_Destinazioni.IDdestinazioneREM concatenated with A_Destinazioni.DestinazioneREM and with Note as REM
A_Classificazione.IDclasse as Classe
A_Classificazione.Descrizione as Descrizione
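A sketch of the corresponding legend queries (the ' - ' separator and the SQL dialect are assumptions; the table and column names are as listed above):
```sql
-- Destinazioni legend page
SELECT CONCAT(d.IDdestinazione, ' - ', d.Destinazione) AS DBmeteo,
       CONCAT(d.IDdestinazioneREM, ' - ', d.DestinazioneREM, ' ', d.Note) AS REM
FROM A_Destinazioni d;

-- Classificazioni legend page
SELECT c.IDclasse AS Classe, c.Descrizione AS Descrizione
FROM A_Classificazione c;
```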
Status: Issue closed
Answers:
username_1: Renamed the file Destinazione.php to SensoriDestinazione.php; created the files Classificazione.php, Destinazione.php, classificazioni.php and destinazioni.php. Implemented what was requested.
statamic/v2-hub | 212180278 | Title: Feature Request: CP - Remember the last visited folder in Assets
Question:
username_0: It would be great if the last visited folder in assets could be remembered, so that one does not have to navigate to the same folder over and over again.
Thanks, cheers, Sebastian
Status: Issue closed
Answers:
username_1: Alas, we have no plans to add this in 2.x. Perhaps in v3! |
rust-lang/cargo | 219674709 | Title: Cargo fails when git is configured to use SSH for github.com
Question:
username_0: Trying to build a Rust program on [Circle CI](https://circleci.com/):
```
cargo test -v
Updating registry `https://github.com/rust-lang/crates.io-index`
warning: spurious network error (2 tries remaining): [12/-12] Malformed URL 'ssh://[email protected]:/rust-lang/crates.io-index'
warning: spurious network error (1 tries remaining): [12/-12] Malformed URL 'ssh://[email protected]:/rust-lang/crates.io-index'
error: failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
[12/-12] Malformed URL 'ssh://[email protected]:/rust-lang/crates.io-index'
cargo test -v returned exit code 101
```
I'd never seen this error before, but I found this blog post (not English, but you can get the gist): https://andelf.github.io/blog/2016/11/18/circleci-meets-rust/
In case that link 404s, the issue is that Circle CI has something in ~/.gitconfig that configures Git to use the SSH protocol for connections to github.com. The workaround recommended in the blog post is to use sed in a previous build step to make the line a no-op:
```
sed -i 's/github/git-non-exist-hub/g' ~/.gitconfig
```
Is there something Cargo can do to mitigate this?
This is the closest Cargo issue I could find is https://github.com/rust-lang/cargo/issues/2845. Not sure if it's related.
Answers:
username_1: Historically issues with Circle CI and Rust have boiled down to https://github.com/rust-lang/cargo/issues/1851, so maybe this is just a dupe of that?
username_2: I'm not really sure, but it seems that this is more of a libgit2 issue than a cargo issue.
The relevant cargo code seems to be [this](https://github.com/rust-lang/cargo/blob/a5a298f1fd5b5ccf03ccb71c0cb6b97867e26d18/src/cargo/sources/registry/remote.rs#L72-L104). From the output we see that the url it talks about is the https one.
[In the fetch call](https://github.com/rust-lang/cargo/blob/master/src/cargo/sources/git/utils.rs#L549-L573), the url is not changed, but given directly to the git2 crate, which is only a wrapper over libgit2.
Also, while the string "Malformed URL" doesn't appear in the cargo codebase, [it does in the libgit2 one](https://github.com/libgit2/libgit2/search?q=malformed+url&type=Code).
AFAIK it is possible to compile libgit2 without ssh support. Maybe that's what Circle CI has done, and now libgit2 considers the ssh url as malformed?
username_2: Yup, I can't reproduce.
I've added this to `.gitconfig` (like described in #2845):
```
[url "[email protected]:"]
insteadOf = https://github.com/
```
and doing `cargo build` I got:
```
Updating registry `https://github.com/rust-lang/crates.io-index`
error: failed to load source for a dependency on `git2`
Caused by:
Unable to update registry https://github.com/rust-lang/crates.io-index
Caused by:
failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
failed to authenticate when downloading repository
attempted ssh-agent authentication, but none of the usernames `git` succeeded
To learn more, run the command again with --verbose.
```
This is not the error message from above.
Then, after `ssh-add`ing my key, I did `cargo build` again, and it worked. So it's very likely that CircleCI's copy of libgit2 doesn't have ssh support.
username_1: Cargo statically links libgit2, so this may not be related to the version on Circle CI, I think? My guess is it's the colon after `github.com` in the URL; that's invalid, but I'm not sure where it's coming from.
username_2: Yup seems you are right with your guess. According to [this post](https://discuss.circleci.com/t/is-it-ok-for-me-to-delete-gitconfig-in-the-box/5566) (apparently by a person with the same issue), the gitconfig looks like:
```
[url "ssh://[email protected]:"]
insteadOf = https://github.com
```
Note the missing `/` in comparison to the gitconfig entry I had above. Setting it lets me reproduce the issue. Git cloning with it works for some weird reason though, which is why I earlier thought the URL was valid.
username_1: @username_2 oh interesting! Can you clarify the behavior of with-and-without-trailing slash? With it you're getting https://github.com/rust-lang/cargo/issues/2845? (circleci doesn't use ssh-agent, we don't parse ~/.ssh/config for keys on the filesystem). Without it you get the error in the OP?
username_2: This comment describes the behaviour for me with `insteadOf = https://github.com/`, generating an url `ssh://[email protected]:rust-lang/crates.io-index`: https://github.com/rust-lang/cargo/issues/3900#issuecomment-292846424
The behaviour for me with `insteadOf = https://github.com`, generating an url `ssh://[email protected]:/rust-lang/crates.io-index` is consistent with what @username_0 reported.
I never got #2845.
Status: Issue closed
username_1: I'm going to close this in favor of https://github.com/rust-lang/cargo/issues/2078 as I believe that's the root cause here. |
bryanedds/Nu | 190101044 | Title: Design change - Groups become Entities
Question:
username_0: One thing that has been bothering me for a while is how groups can't contain groups, and that entities can't contain entities. This design limitation can be solved by combining the functionality of a group with that of an entity. So, we'll get rid of the Group type and make a special entity dispatcher called GroupDispatcher.
The big change caused by this is that the length of entity addresses is no longer a fixed number, but can vary depending on nesting. Fixing all the places that hard-code this will be somewhat error-prone.
This will also, of course, impact user data files.
It is unknown whether each Screen should be created with a default Group entity or not.
Answers:
username_0: When I think about this change, and I apply the logic behind the change to the rest of the engine, I can't see why it shouldn't apply to all Simulant types, including Game and Screen. Perhaps there should instead just be GameDispatcher, ScreenDispatcher, and GroupDispatcher all derived from EntityDispatcher, and the engine constrains their relationships to one another with run-time checks. I'm getting the feeling that there's no middle ground in this design - either it remains as it is, or goes all in on the design change.
Unfortunately, informal analysis does not seem to inform me as to which all-in design is more appropriate. It may be the case I simply need to privately fork the engine, apply the changes, and see which design is better to model and implement games with.
Status: Issue closed
username_0: After a great deal of thought, I rectified the semantic incoherence by renaming the Group concept to Layer.
pandas-dev/pandas | 269438288 | Title: Error using pandas version 0.21.0
Question:
username_0: However, when I downgraded to pandas 0.20.3, it worked just fine. You might wanna look into this. :)
Answers:
username_1: Can you give a reproducible example?
username_2: I am having the same problem.
At first I used `action = state_action.argmax()`, and it says: `FutureWarning: 'argmax' is deprecated. Use 'idxmax' instead. The behavior of 'argmax' will be corrected to return the positional maximum in the future. Use 'series.values.argmax' to get the position of the maximum now.
action = state_action.argmax()`
So I changed it to `action = state_action.idxmax()`.
When I run it on 0.21.0, it gives the following error:
```
Traceback (most recent call last):
File "/Users/baron/.pyenv/versions/3.6.3/lib/python3.6/tkinter/__init__.py", line 1699, in __call__
return self.func(*args)
File "/Users/baron/.pyenv/versions/3.6.3/lib/python3.6/tkinter/__init__.py", line 745, in callit
func(*args)
File "/Users/baron/PycharmProjects/HelloPython/test_Q.py", line 26, in update
action = RL.choose_action(str(observation))
File "/Users/baron/PycharmProjects/HelloPython/RL_brain.py", line 40, in choose_action
action = state_action.idxmax()
File "/Users/baron/.pyenv/versions/3.6.3/lib/python3.6/site-packages/pandas/core/series.py", line 1357, in idxmax
i = nanops.nanargmax(_values_from_object(self), skipna=skipna)
File "/Users/baron/.pyenv/versions/3.6.3/lib/python3.6/site-packages/pandas/core/nanops.py", line 74, in _f
raise TypeError(msg.format(name=f.__name__.replace('nan', '')))
TypeError: reduction operation 'argmax' not allowed for this dtype
```
username_1: Can you provide a copy-pastable example @username_2?
username_2: Sure,
you can test the code as follows, @username_1:
https://github.com/MorvanZhou/Reinforcement-learning-with-tensorflow/tree/master/contents/2_Q_Learning_maze
username_1: Do you have a minimal test-case, something that could go in a unit test?
username_2: @username_1
```
import pandas as pd
import numpy as np
q_table = pd.DataFrame(columns=['a', 'b', 'c', 'd'])
q_table = q_table.append(pd.Series([0] * 4, index=q_table.columns, name='test1', ))
q_table = q_table.append(pd.Series([0] * 4, index=q_table.columns, name='test2', ))
print(q_table)
state_action = q_table.ix['test2', :]
print(state_action)
state_action = state_action.reindex(
np.random.permutation(state_action.index))
print(state_action)
action = state_action.idxmax()
# action = state_action.argmax()
print('\naction: ', action)
```
username_2: Here is the error message
```
Traceback (most recent call last):
File "/Users/baron/PycharmProjects/HelloPython/pandas_exercise.py", line 13, in <module>
action = state_action.idxmax()
File "/Users/baron/.pyenv/versions/3.6.3/lib/python3.6/site-packages/pandas/core/series.py", line 1357, in idxmax
i = nanops.nanargmax(_values_from_object(self), skipna=skipna)
File "/Users/baron/.pyenv/versions/3.6.3/lib/python3.6/site-packages/pandas/core/nanops.py", line 74, in _f
raise TypeError(msg.format(name=f.__name__.replace('nan', '')))
TypeError: reduction operation 'argmax' not allowed for this dtype
```
username_1: Thanks, simplified a bit:
```python
In [11]: pd.Series([0, 0], dtype='object')
Out[11]:
0 0
1 0
dtype: object
In [12]: pd.Series([0, 0], dtype='object').argmax()
/Users/taugspurger/Envs/pandas-dev/bin/ipython:1: FutureWarning: 'argmax' is deprecated. Use 'idxmax' instead. The behavior of 'argmax' will be corrected to return the positional maximum in the future. Use 'series.values.argmax' to get the position of the maximum now.
#!/Users/taugspurger/Envs/pandas-dev/bin/python3.6
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-12-e0ba19c8565d> in <module>()
----> 1 pd.Series([0, 0], dtype='object').argmax()
~/Envs/pandas-dev/lib/python3.6/site-packages/pandas/pandas/util/_decorators.py in wrapper(*args, **kwargs)
34 def wrapper(*args, **kwargs):
35 warnings.warn(msg, klass, stacklevel=stacklevel)
---> 36 return alternative(*args, **kwargs)
37 return wrapper
38
~/Envs/pandas-dev/lib/python3.6/site-packages/pandas/pandas/core/series.py in idxmax(self, axis, skipna, *args, **kwargs)
1355 """
1356 skipna = nv.validate_argmax_with_skipna(skipna, args, kwargs)
-> 1357 i = nanops.nanargmax(_values_from_object(self), skipna=skipna)
1358 if i == -1:
1359 return np.nan
~/Envs/pandas-dev/lib/python3.6/site-packages/pandas/pandas/core/nanops.py in _f(*args, **kwargs)
72 if any(self.check(obj) for obj in obj_iter):
73 msg = 'reduction operation {name!r} not allowed for this dtype'
---> 74 raise TypeError(msg.format(name=f.__name__.replace('nan', '')))
75 try:
76 with np.errstate(invalid='ignore'):
TypeError: reduction operation 'argmax' not allowed for this dtype
```
Is there a reason you're using object dtype here?
username_1: Seems like https://github.com/pandas-dev/pandas/pull/16449 maybe have been the root issues (cc @username_4)
NumPy will (somehow) handle object arrays in argmax/min, so I suppose `@disallow('O')` is a bit too strict.
username_1: We'll need to think about whether we want to emulate NumPy here though. It's nice to know ahead of time whether you function is valid or not for the type of the values being passed. With `object` dtype there's no way of knowing that.
username_3: I think for object dtype we should not, beforehand, decide whether such an operation works or not, but IMO we should defer that to the actual objects. Eg min/max works on strings, and so it seems logical that `argmax`/`argmin` does as well.
username_1: Fortunately, `argmin/max` didn't work on strings before :)
```
In [1]: import pandas as pd
In [2]: pd.__version__
Out[2]: '0.20.3'
In [3]: pd.Series(['a', 'b']).argmax()
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-4747fce7cbb5> in <module>()
----> 1 pd.Series(['a', 'b']).argmax()
~/miniconda3/envs/pandas-0.20.3/lib/python3.6/site-packages/pandas/core/series.py in idxmax(self, axis, skipna, *args, **kwargs)
1262 """
1263 skipna = nv.validate_argmax_with_skipna(skipna, args, kwargs)
-> 1264 i = nanops.nanargmax(_values_from_object(self), skipna=skipna)
1265 if i == -1:
1266 return np.nan
~/miniconda3/envs/pandas-0.20.3/lib/python3.6/site-packages/pandas/core/nanops.py in nanargmax(values, axis, skipna)
476 """
477 values, mask, dtype, _ = _get_values(values, skipna, fill_value_typ='-inf',
--> 478 isfinite=True)
479 result = values.argmax(axis)
480 result = _maybe_arg_null_out(result, axis, mask, skipna)
~/miniconda3/envs/pandas-0.20.3/lib/python3.6/site-packages/pandas/core/nanops.py in _get_values(values, skipna, fill_value, fill_value_typ, isfinite, copy)
194 values = _values_from_object(values)
195 if isfinite:
--> 196 mask = _isfinite(values)
197 else:
198 mask = isnull(values)
~/miniconda3/envs/pandas-0.20.3/lib/python3.6/site-packages/pandas/core/nanops.py in _isfinite(values)
237 is_integer_dtype(values) or is_bool_dtype(values)):
238 return ~np.isfinite(values)
--> 239 return ~np.isfinite(values.astype('float64'))
240
241
ValueError: could not convert string to float: 'b'
```
username_3: Ah, yes :-) Although in numpy it works:
```
In [118]: a = np.array(['a', 'b', 'c'], dtype=object)
In [119]: a.min()
Out[119]: 'a'
In [120]: a.argmin()
Out[120]: 0
```
username_4: Just refreshing my memory — so in the course of tracking down the bug that prompted #16449, it turned out that `argmax` etc were always trying to coerce their inputs to float, which is why they used to fail with string data. They no longer do that. But, at least at the time, it seemed pretty tricky to get `argmax` etc to behave consistently with arbitrary object dtypes that could also contain nulls, and we decided to disallow that case. If you remove the `disallow` decorator, they currently work as expected with string data, as long as there are no null values, but once you start including null values or possibly using other types of objects things would not work as expected. I think that marking `argmax` as not allowed with object dtypes was done mainly for expediency.
username_5: I'm not sure I understand everything involved here, but in the example given by @username_2 (the MorvanZhou code), is the only solution to downgrade pandas? Is there a simpler solution, like replacing argmax with another function?
(Sorry, I'm very new to Python.)
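One commonly suggested workaround (a sketch, assuming the Series only holds numeric values stored as object dtype) is to convert it to a numeric dtype before calling `idxmax`:
```python
import pandas as pd

# An object-dtype Series like the ones the Q-learning example builds up
state_action = pd.Series([0, 0, 1, 0], index=['u', 'd', 'l', 'r'], dtype='object')

# Converting to a numeric dtype first makes idxmax work on pandas 0.21+
action = pd.to_numeric(state_action).idxmax()
print(action)  # 'l'
```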
username_6: I faced the same issue and tried `pandas 0.19.2 and 0.18.1`. Neither of them worked for me. I was able to run it successfully only after downgrading to `pandas 0.20.3`. Hope this helps someone. (y)
username_7: I'm getting this issue in pandas 1.1.5 with a Series of dtype `object` containing `pd.Timestamp`. Not sure if it was decided to fix this in the end or not, but if there needs to be a reason for why `idxmax` should work in this case, it is that `.max()` does work; if `.max()` works `.idxmax()` must work too.
username_8: Seeing the same issue as @username_7 in pandas 1.3.3, i.e. `.max()` works on `pd.Timestamp`s, but `idxmax()` does not. |
NeilFraser/JS-Interpreter | 236503778 | Title: How can I add an object properly to interpreter scope that has previously defined native getter and setter?
Question:
username_0: I have a simple object with getter and setter defined:
```javascript
var nativeObj = {_name:"test"};
Object.defineProperty(nativeObj, "name", {
get: function(){ return this._name},
set: function(value){ this._name = value}
})
```
And I try to add this object to interpreter:
```javascript
function initContext(interpreter, scope) {
var wrapper = function(text) {
text = text ? text.toString() : '';
return interpreter.createPrimitive(alert(text));
};
interpreter.setProperty(scope, 'alert', interpreter.createNativeFunction(wrapper));
//trying to convert nativeObj to pseudo
interpreter.setProperty(scope, 'nativeObj', interpreter.nativeToPseudo(nativeObj));
}
var code = 'alert(nativeObj.name)';
var myInterpreter = new Interpreter(code, initContext);
myInterpreter.run();
```
It alerts undefined and seems not to execute the getter function for name. When I changed the code to `alert(nativeObj._name)`, because the object has a native property named "_name", it works as expected and alerts "test".
I searched the source code deeply and tried something different when adding to the scope:
```javascript
function initContext(interpreter, scope) {
var wrapper = function(text) {
text = text ? text.toString() : '';
return interpreter.createPrimitive(alert(text));
};
interpreter.setProperty(scope, 'alert', interpreter.createNativeFunction(wrapper));
//create pseudo object from scratch and add native property "_name"
var pseudoObj = interpreter.createObject(interpreter.OBJECT);
interpreter.setProperty(pseudoObj, "_name", interpreter.createPrimitive(nativeObj._name));
//try to add getter & setter according to the Interpreter.setProperty docs; they say the fourth parameter can be a property descriptor and the third parameter (value) should be null
var descriptor = Object.getOwnPropertyDescriptor(nativeObj, "name");
interpreter.setProperty(pseudoObj, "name", null, descriptor);
interpreter.setProperty(scope, 'nativeObj', pseudoObj);
}
var code = 'alert(nativeObj.name)';
var myInterpreter = new Interpreter(code, initContext);
myInterpreter.run();
```
But this time it gives the error "Uncaught TypeError: function is not a function". I am totally lost here :)
Is there a proper way to add these kinds of objects to the interpreter? I don't know exactly whether this is a bug or a not-yet-implemented feature; if I have to implement this feature myself, any suggestions on how to do it properly would be greatly appreciated.
Status: Issue closed
Answers:
username_0: OK, after spending some more hours I realized that the getter and setter methods in the property descriptor must also be converted to pseudo functions. Like this:
```javascript
function initContext(interpreter, scope) {
var wrapper = function(text) {
text = text ? text.toString() : '';
return interpreter.createPrimitive(alert(text));
};
interpreter.setProperty(scope, 'alert', interpreter.createNativeFunction(wrapper));
//create pseudo object from scratch and add native property "_name"
var pseudoObj = interpreter.createObject(interpreter.OBJECT);
interpreter.setProperty(pseudoObj, "_name", interpreter.createPrimitive(nativeObj._name));
var descriptor = Object.getOwnPropertyDescriptor(nativeObj, "name");
var getter = interpreter.createNativeFunction(function(){
return descriptor.get.apply(nativeObj)
})
var setter = interpreter.createNativeFunction(function(){
descriptor.set.apply(nativeObj, arguments)
})
var newDescriptor = {enumerable: false, configurable: false, get: getter, set: setter}
interpreter.setProperty(pseudoObj, "name", null, newDescriptor);
interpreter.setProperty(scope, 'nativeObj', pseudoObj);
}
var code = 'alert(nativeObj.name)';
var myInterpreter = new Interpreter(code, initContext);
myInterpreter.run();
```
This works as expected and alerted "test" at last :) |
streamlit/streamlit | 993842863 | Title: Be able to add classes or ids to the st.text
Question:
username_0: **### Problem**
I've tried to style the rendered text (font size, font family) using variables. As far as I know, it's achievable by [using markdown with the f-string trick](https://discuss.streamlit.io/t/passing-variable-containing-text-to-markdown/16069/2), but my app file is filled with a large number of lines of code. In my use case, one file could contain Pandas data frames, data manipulation, Plotly charts, maps, and HTML components. For now, I can style the `st.title` by using its id within the CSS file, but I have no idea how to do that with my texts.
**### Solution**
**MVP:** **What's the smallest possible solution that would get 80% of the problem out of the way?**
Use `st.markdown`, but it's unclear when the `unsafe_allow_html=True` option will be deprecated (see the sketch below).
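For reference, a sketch of that markdown workaround with a CSS class (the class name and styles are illustrative):
```python
import streamlit as st

# Inject a small stylesheet once, then tag text with the class
st.markdown('<style>.big-note { font-size: 1.4rem; font-family: monospace; }</style>',
            unsafe_allow_html=True)

value = 42
st.markdown(f'<p class="big-note">Result: {value}</p>', unsafe_allow_html=True)
```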
**Possible additions:** **What are other things that could be added to the MVP over time to make it better?**
Maybe the "unsafe" part of the markdown.
**Preferred solution:** **If you don't like the MVP above, tell us why, and what you'd like done instead.**
The ability to use classes or ids that allows the styling to be addressed in the CSS file. The markdown could be hard to sanitize, and it's unsafe.
**### Additional context**
Not for now. |
Pod-Point/countries | 1140356144 | Title: Exchange rates should not be cached forever.
Question:
username_0: I believe exchange rates should not be cached forever since the value is needed for a particular hour. Is there a reason?
https://github.com/Pod-Point/countries/blob/44f01361927577d7dc0315c1510530d055c3fad9/src/Currency/Cache/Service.php#L66 |
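A sketch of what an hour-scoped cache entry could look like instead (illustrative only; it assumes a Laravel-style cache facade, and `fetchRate()` is a hypothetical stand-in for the service's actual lookup):
```php
use Illuminate\Support\Facades\Cache;

// Key the entry to the hour the rate applies to and expire it after an hour,
// instead of remembering it forever.
$key = sprintf('exchange-rate:%s:%s', $currency, now()->format('YmdH'));
$rate = Cache::remember($key, 3600, function () use ($currency) {
    return $this->fetchRate($currency); // hypothetical upstream lookup
});
```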
dotnet/runtime | 822263745 | Title: Microsoft.NET.Sdk.IL nuget package is broken
Question:
username_0: The latest available version of the `Microsoft.NET.Sdk.IL` package on nuget.org is broken, as it tries to resolve the runtime ilasm/ildasm packages with a version of `6.0.0`, which doesn't exist until we ship.
There are different options for how to fix this:
- Generate the targets file which contains the hardcoded version dynamically and embed the product version in it. We do this in a couple of other places and it's fairly easy to implement (a rough illustration follows after this list).
- Change how the Sdk.IL meta package references dependencies. Today the SDK is resolved via the NuGet SDK resolver in a first restore phase, and then later in the overall NuGet restore phase the dependent RID-specific IL package is downloaded, as it's listed as a PackageReference by the IL SDK. There are alternatives to that which could be considered, i.e. FrameworkPackages, or merging the runtime packages which only contain the native ilasm/ildasm into the IL SDK meta package.
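As a rough illustration of the first option, the generated targets fragment could embed the product version as a property at build time (everything here is hypothetical and illustrative, not the package's actual internals):
```xml
<!-- Hypothetical generated fragment; the real SDK targets may differ -->
<Project>
  <PropertyGroup>
    <_ILAsmPackageVersion Condition="'$(_ILAsmPackageVersion)' == ''">6.0.0-preview.3.12345.1</_ILAsmPackageVersion>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="runtime.$(NETCoreSdkRuntimeIdentifier).Microsoft.NETCore.ILAsm"
                      Version="$(_ILAsmPackageVersion)" />
  </ItemGroup>
</Project>
```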
cc @trylek @hoyosjs @ericstj
Answers:
username_0: TODO: Discuss if we wanna continue shipping the package on nuget.org.
username_1: This package is very useful for implementing low-level tests for profilers.
username_2: CC @briansull PTAL.
username_2: @Infra team, please decide on the best option between the two and implement it. The JIT team does not have a preference.
jnacar/hw3 | 295254792 | Title: drawing-with-diameter.js - Diameter change doesn't work
Question:
username_0: Instead of key == 1, etc., use key == "1". Even though you're trying to match numbers, within keyPressed() the `key` variable holds the pressed key as a string, so if you press 1, the value of key will be "1", not 1 (see the sketch below).
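A minimal sketch of that comparison (assuming a p5.js sketch with a global `diameter` variable used by the drawing code):
```js
let diameter = 10; // assumed global that the drawing code reads

function keyPressed() {
  // `key` is a string, so compare against string literals
  if (key === "1") {
    diameter = 10;
  } else if (key === "2") {
    diameter = 20;
  }
}
```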
Answers:
username_0: Problem was on my end, this issue is invalid
Status: Issue closed
|
mi2-warsaw/CzasDojazdu | 155551787 | Title: English version of the application
Question:
username_0: @username_1
In the `App` folder, create an `en` folder with the English version of the application. Copy all files from `App` to `en`.
The files to change/review would be:
- app.R
- elements/
- header -> the header/bar at the top
- sidebar -> on the left [at the very end of your work, I will add a link here for switching between the Polish and English versions of the page]
- body -> the whole body of the application
Answers:
username_0: @username_1 when making commits, you can put `tresc #26` (i.e. your commit message followed by #26) in the commit body; then the history of commits mentioning this task will show up in this issue's thread
username_1: @username_0, did I put the translations in the right places?
username_0: @username_1 I'm posting the review commits one by one :P
Unfortunately, in a Shiny app the `app.R` file and the `www` folder must have exactly those names and cannot have a different extension
username_0: Unfortunately I keep forgetting to add `#26` to the commits :P so only some of them show up here
Status: Issue closed
username_0: @mikolajjj @abrodecka @michalcisek we have the versions:
- Polish at the new address: http://mi2.mini.pw.edu.pl:3838/CzasDojazdu/pl/
- English: mi2.mini.pw.edu.pl:3838/CzasDojazdu/en/ |
containerd/nerdctl | 1033889138 | Title: nerdctl login needs Enter key pressed twice
Question:
username_0: How to repro:
```txt
nerdctl login -u <your username>
```
Now, enter your password and hit enter. Nothing happens until you hit enter again.
Reproduced using macOS Big Sur (Intel).
nerdctl version 0.12.1
Answers:
username_1: Using [pr 641](https://github.com/containerd/nerdctl/pull/641), after running
```
nerdctl login -u <your username>
```
and entering the password, login succeeds.
Used:
- macOS Big Sur (M1)
- nerdctl version: 0.15.0
username_2: This should have been fixed in a recent release
Status: Issue closed
|
shunjizhan/Coinbot | 352862706 | Title: Command line interface error tolerance
Question:
username_0: Currently, if an unsupported command is entered, the program exits.
We should give it error tolerance so that an unsupported command does nothing and the program keeps running.
We just need to test whether the command is in ALL_VALID_COMMANDS; if not, do nothing and print a warning, as sketched below.
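A minimal sketch of that check (only `ALL_VALID_COMMANDS` comes from the issue; the command set and loop structure are assumptions):
```python
ALL_VALID_COMMANDS = {"buy", "sell", "balance", "quit"}  # hypothetical contents

while True:
    command = input("> ").strip()
    if command == "quit":
        break
    if command not in ALL_VALID_COMMANDS:
        # Tolerate bad input: warn and keep running instead of exiting
        print(f"Warning: unsupported command '{command}'")
        continue
    print(f"Executing '{command}'...")  # placeholder for the real dispatch
```
With this pattern, an unsupported command degrades to a warning instead of terminating the program. |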
instaclustr/cassandra-ldap | 490723805 | Title: LDAPS
Question:
username_0: Hi @username_1, can you please update the docs so we can use LDAPS with a self-signed Root CA? We definitely won't use plain LDAP on port 389 because of the plaintext password, and most companies have their own private PKI. Usually the Root CA is passed via some properties and sometimes also needs to be part of the CA certs used by Java. Please let me know if more details are needed.
Regards,
Bruno
Answers:
username_1: Hi @username_0 ,
as far as I know, all it takes to enable secure communication with an LDAP server (if the server itself supports it) is to import the certificate of your LDAP server into the truststore of Cassandra and start that node.
The protocol for the LDAP server will be `ldaps`, and most probably the port will change too (e.g. 636, but it is deployment specific), so you have to reflect this change in the `ldap.properties` file.
In this particular example (2), the certificate is in `/container/service/slapd/assets/certs/ca.crt`, so the import would look like:
```
keytool -importcert -file ca.crt -keystore cassandra-truststore.jks -storepass <PASSWORD>
```
(1) https://github.com/osixia/docker-openldap
(2) https://github.com/osixia/docker-openldap#tls
username_0: ok, let me try it. thx.
Status: Issue closed
|
keystonejs/keystone | 564123325 | Title: Connection to MS SQL Server fails because Keystone checks for PostgreSQL version
Question:
username_0: # Bug report
## Describe the bug
Connection to Microsoft SQL Server fails because the code enters the [checkDatabaseVersion](https://github.com/keystonejs/keystone/pull/2112/commits/54960ba81e67f093f9f5e9e2b15a718888de17aa) function, which checks for the PostgreSQL version.
## To Reproduce
1. Go to the entry point of the KeystoneJS application (`index.js`) and configure [Knex Adapter options](https://www.keystonejs.com/keystonejs/adapter-knex/#optionsknexoptions) to connect to MS SQL Server:
```
const ConnectionString = require('mssql/lib/connectionstring');
const keystone = new Keystone({
name: PROJECT_NAME,
adapter: new Adapter({
knexOptions: {
client: 'mssql',
connection: ConnectionString.resolve(
'mssql://pruebas:pruebaspassword@localhost:8666/KeystoneJS'
),
},
}),
});
```
2. Run the application with `yarn dev`
3. See error

## Expected behaviour
Expected behaviour is to connect to MS SQL Server successfully, as it does if I comment out the call to the `checkDatabaseVersion` function:

## System information
- OS: Windows 10, Version 1709, Build 16299.1087
- Packages versions:
[email protected]
```
"@keystonejs/adapter-knex": "^6.2.0",
"@keystonejs/app-admin-ui": "^5.3.0",
"@keystonejs/app-graphql": "^5.0.1",
"@keystonejs/app-static": "^5.0.0",
"@keystonejs/fields": "^6.0.0",
"@keystonejs/keystone": "^5.3.0",
"cross-env": "^5.2.0",
"dotenv": "^8.2.0",
"msnodesqlv8": "^0.8.6",
"mssql": "^6.0.1"
```
Answers:
username_1: Good catch. We don't officially support MSSQL, in that we do not build and test against it; however, Knex can support this, and we don't want to do anything that would stop it from working. Happy to accept PRs to wrap this in a try/catch or add an option to ignore it.
username_2: A workaround by overriding the Adapter class:
```js
const { KnexAdapter: Adapter } = require('@keystonejs/adapter-knex');
// ref: https://raw.githubusercontent.com/keystonejs/keystone/master/packages/adapter-knex/lib/adapter-knex.js
Adapter.prototype.checkDatabaseVersion = async (...args) => true
```
username_3: See PR ^
Status: Issue closed
|
vueComponent/ant-design-vue | 412267010 | Title: In Safari, a table with fixed header and columns shows double scroll bars when scrolling horizontally
Question:
username_0: - [ ] I have searched the [issues](https://github.com/vueComponent/ant-design-vue/issues) of this repository and believe that this is not a duplicate.
### Version
1.3.4
### Environment
Mac OS 10.14.3, Safari 12.0.3
### Reproduction link
[https://vue.ant.design/components/table-cn/](https://vue.ant.design/components/table-cn/)
### Steps to reproduce
Open the official site documentation in Safari, go to the Table section with fixed header and columns, and scroll the table left and right; double scroll bars appear.
### What is expected?
Remove the `overflow: scroll` css property from the div with class name `ant-table-header`.
### What is actually happening?
After removing the specified property, the scroll bar above no longer appears.
Answers:
username_1: Couldn't reproduce your issue

username_0: This is what I'm seeing here; it still happens, and it happens for my colleagues too
username_0: 
Hi, please take a look at this gif
username_1: ref https://github.com/ant-design/ant-design/issues/13994
Status: Issue closed
|
matbesancon/MathOptSetDistances.jl | 624512976 | Title: Domains and inequality consistency
Question:
username_0: Suppose that you have a set defined as:
`x | x >= 0 & f(x) <= 0`
with `dom(f) = {x | x >= 0}`; this implies `f(x)` cannot be computed if `x < 0`.
A first idea was to compute a distance as
```julia
# Candidate distance: |x| outside dom(f), constraint violation inside it
function set_distance(f, x)
    if x < 0
        return abs(x)        # outside dom(f): distance to {x | x >= 0}
    else
        return max(f(x), 0)  # inside dom(f): violation of f(x) <= 0
    end
end
```
This can result in consistency issues (still haven't formally written down how it appears) |
Sustainsys/Saml2 | 630246697 | Title: logoutUrl not working as expected, IdP is still logged in
Question:
username_0: Hello. I'm trying to implement your connector with logoutUrl, but upon logging out from my application, I'm still logged into the IdP. I'm not sure if this is a bug or just me misinterpreting the documentation. Any input you can provide is appreciated.
Below is the configuration that I currently have within my `Web.Config`.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<sustainsys.saml2 entityId="https://nyw-az03.infra.us" returnUrl="https://nyw-az03.infra.us/Account/ExternalLoginCallback">
<identityProviders>
<add entityId="http://www.okta.com/exkkzrrtw4i3td5F60h7" signOnUrl="https://dev-786854.oktapreview.com/app/dev786854_orchestrator_1/exkkzwwtw4i3td5F60h7/sso/saml" logoutUrl="https://dev-786854.oktapreview.com/app/dev786854_orchestrator_1/exkkzwwtw4i3td5F60h7/slo/saml" allowUnsolicitedAuthnResponse="true" binding="HttpRedirect">
<signingCertificate storeName="My" storeLocation="LocalMachine" x509FindType="FindByThumbprint" findValue="711da9418e31cf1b8c0f3fb2d7a1b2143f64bb76" />
</add>
</identityProviders>
<federations>
<add metadataLocation="https://dev-786854.oktapreview.com/app/exkkzwwtw4i3td5F60h7/sso/saml/metadata" allowUnsolicitedAuthnResponse="true" />
</federations>
<serviceCertificates>
<add storeName="My" storeLocation="LocalMachine" x509FindType="FindByThumbprint" findValue="4a31f75de0d506c4488d4906145ad969d99e3814" />
</serviceCertificates>
</sustainsys.saml2>
```
Answers:
username_1: Please enable logs. There are quite a few requirements that need to be fulfilled for single logout to work. There is a detailed message written to the log about it, indicating which values were found and which were not.
Status: Issue closed
|
digital-thinking/udacity-nanodegree | 221845597 | Title: towerp0/incorrect_vector:0
Question:
username_0: I am trying to train the capstone project on my own PC. I have the latest version of tensorpack and I have made all the necessary changes, but I still get this error, which I am not able to figure out.
```
Traceback (most recent call last):
File "main.py", line 115, in <module>
SimpleTrainer(config).train()
File "/usr/local/lib/python2.7/dist-packages/tensorpack/train/base.py", line 93, in train
self.setup()
File "/usr/local/lib/python2.7/dist-packages/tensorpack/train/base.py", line 118, in setup
self._callbacks.setup_graph(weakref.proxy(self))
File "/usr/local/lib/python2.7/dist-packages/tensorpack/callbacks/base.py", line 44, in setup_graph
self._setup_graph()
File "/usr/local/lib/python2.7/dist-packages/tensorpack/callbacks/group.py", line 77, in _setup_graph
cb.setup_graph(self.trainer)
File "/usr/local/lib/python2.7/dist-packages/tensorpack/callbacks/base.py", line 44, in setup_graph
self._setup_graph()
File "/usr/local/lib/python2.7/dist-packages/tensorpack/callbacks/inference_runner.py", line 116, in _setup_graph
self._hooks = [self._build_hook(inf) for inf in self.infs]
File "/usr/local/lib/python2.7/dist-packages/tensorpack/callbacks/inference_runner.py", line 178, in _build_hook
fetches = self._get_tensors_maybe_in_tower(out_names)
File "/usr/local/lib/python2.7/dist-packages/tensorpack/callbacks/inference_runner.py", line 125, in _get_tensors_maybe_in_tower
return get_tensor_fn(placeholder_names, names, 0, prefix=self._prefix)
File "/usr/local/lib/python2.7/dist-packages/tensorpack/predict/base.py", line 212, in get_tensors_maybe_in_tower
tensors = get_tensors_by_names(names)
File "/usr/local/lib/python2.7/dist-packages/tensorpack/tfutils/common.py", line 112, in get_tensors_by_names
ret.append(G.get_tensor_by_name(varn))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2554, in get_tensor_by_name
return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2405, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2447, in _as_graph_element_locked
"graph." % (repr(name), repr(op_name)))
KeyError: "The name 'towerp0/incorrect_vector:0' refers to a Tensor which does not exist. The operation, 'towerp0/incorrect_vector', does not exist in the graph."
```
Answers:
username_1: Hi username_0,
Yes, I noticed something like this too; as far as I remember, it's related to the naming of the tensors in tensorpack. I used an older tensorpack release, which used different names internally. I think I fixed that in the windows branch somehow, but I am not sure. The issue was related to the name of the error function. Hope that helps. Cya
username_0: Thanks! Yeah, it got resolved by putting my error variable name inside `InferenceRunner(dataset_test, ClassificationError('error'))`. |
ManageIQ/manageiq | 966788664 | Title: RPM upgrade before application setup throws backtrace to terminal
Question:
username_0: If you don't set up the database prior to running `dnf upgrade`, then the script which stops evmserverd dumps this to the terminal:
```
Caused by:
Could not load database configuration. No such file - ["config/database.yml"]
/opt/manageiq/manageiq-gemset/gems/railties-6.0.3.7/lib/rails/application/configuration.rb:241:in `database_configuration'
/var/www/miq/vmdb/lib/patches/database_configuration_patch.rb:32:in `database_configuration'
/opt/manageiq/manageiq-gemset/gems/activerecord-6.0.3.7/lib/active_record/railtie.rb:200:in `block (2 levels) in <class:Railtie>'
/opt/manageiq/manageiq-gemset/gems/activesupport-6.0.3.7/lib/active_support/lazy_load_hooks.rb:71:in `class_eval'
/opt/manageiq/manageiq-gemset/gems/activesupport-6.0.3.7/lib/active_support/lazy_load_hooks.rb:71:in `block in execute_hook'
/opt/manageiq/manageiq-gemset/gems/activesupport-6.0.3.7/lib/active_support/lazy_load_hooks.rb:61:in `with_execution_control'
/opt/manageiq/manageiq-gemset/gems/activesupport-6.0.3.7/lib/active_support/lazy_load_hooks.rb:66:in `execute_hook'
/opt/manageiq/manageiq-gemset/gems/activesupport-6.0.3.7/lib/active_support/lazy_load_hooks.rb:43:in `block in on_load'
/opt/manageiq/manageiq-gemset/gems/activesupport-6.0.3.7/lib/active_support/lazy_load_hooks.rb:42:in `each'
/opt/manageiq/manageiq-gemset/gems/activesupport-6.0.3.7/lib/active_support/lazy_load_hooks.rb:42:in `on_load'
/opt/manageiq/manageiq-gemset/gems/activerecord-6.0.3.7/lib/active_record/railtie.rb:198:in `block in <class:Railtie>'
/opt/manageiq/manageiq-gemset/gems/railties-6.0.3.7/lib/rails/initializable.rb:32:in `instance_exec'
/opt/manageiq/manageiq-gemset/gems/railties-6.0.3.7/lib/rails/initializable.rb:32:in `run'
/opt/manageiq/manageiq-gemset/gems/railties-6.0.3.7/lib/rails/initializable.rb:61:in `block in run_initializers'
/opt/manageiq/manageiq-gemset/gems/railties-6.0.3.7/lib/rails/initializable.rb:60:in `run_initializers'
/opt/manageiq/manageiq-gemset/gems/railties-6.0.3.7/lib/rails/application.rb:363:in `initialize!'
/opt/manageiq/manageiq-gemset/gems/railties-6.0.3.7/lib/rails/railtie.rb:190:in `public_send'
/opt/manageiq/manageiq-gemset/gems/railties-6.0.3.7/lib/rails/railtie.rb:190:in `method_missing'
/var/www/miq/vmdb/config/environment.rb:5:in `<top (required)>'
/opt/manageiq/manageiq-gemset/gems/activesupport-6.0.3.7/lib/active_support/dependencies.rb:324:in `require'
/opt/manageiq/manageiq-gemset/gems/activesupport-6.0.3.7/lib/active_support/dependencies.rb:324:in `block in require'
/opt/manageiq/manageiq-gemset/gems/activesupport-6.0.3.7/lib/active_support/dependencies.rb:291:in `load_dependency'
/opt/manageiq/manageiq-gemset/gems/activesupport-6.0.3.7/lib/active_support/dependencies.rb:324:in `require'
/opt/manageiq/manageiq-gemset/gems/railties-6.0.3.7/lib/rails/application.rb:339:in `require_environment!'
/opt/manageiq/manageiq-gemset/gems/railties-6.0.3.7/lib/rails/application.rb:523:in `block in run_tasks_blocks'
/opt/manageiq/manageiq-gemset/gems/rake-13.0.6/exe/rake:27:in `<top (required)>'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/lib/bundler/cli/exec.rb:63:in `load'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/lib/bundler/cli/exec.rb:63:in `kernel_load'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/lib/bundler/cli/exec.rb:28:in `run'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/lib/bundler/cli.rb:476:in `exec'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor.rb:399:in `dispatch'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/lib/bundler/cli.rb:30:in `dispatch'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor/base.rb:476:in `start'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/lib/bundler/cli.rb:24:in `start'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/exe/bundle:46:in `block in <top (required)>'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/lib/bundler/friendly_errors.rb:123:in `with_friendly_errors'
/opt/manageiq/manageiq-gemset/gems/bundler-2.1.4/exe/bundle:34:in `<top (required)>'
/opt/manageiq/manageiq-gemset/bin/bundle:23:in `load'
/opt/manageiq/manageiq-gemset/bin/bundle:23:in `<main>'
Tasks: TOP => evm:update_stop => environment
```
Answers:
username_0: We might want to check if the database is set up prior to running this command
username_1: I'm curious why evm:update_stop needs environment at all
username_1: @bdunne Can you take a look? |
sitespeedio/sitespeed.io | 320617127 | Title: WebPageReplay container hangs when browser fails to start
Question:
username_0: The container on dashboard.sitespeed.io hangs and the log looks like:
```
Mozilla Firefox 61.0a1
Start WebPageReplay Record
[2018-05-06 12:33:27] Testing url https://en.wikipedia.org/wiki/Sweden iteration 1
[2018-05-06 12:34:27] Browser failed to start, trying one more time: Failed to start browser in 60 seconds.
[2018-05-06 12:34:41] 0
```
<issue_closed>
Status: Issue closed |
sumakokima2/resium-sample2 | 415849603 | Title: Configuring CSS
Question:
username_0: ## Installing styled-components
[https://www.styled-components.com/docs/basics](https://www.styled-components.com/docs/basics)
* Since this module is needed at runtime for rendering, don't append `-dev` when installing it
`npm install --save @types/styled-components`
## Adding the module import statement
`import styled from 'styled-components';`
## Writing the CSS
Write it directly in the .tsx file where you want the CSS applied.
In plain HTML you would write `<div class="LISTMENU">...</div>`; instead, turn that part into a component.
Example:
```
const LISTMENU = styled.div`
font-size: 1.0em;
text-align: left;
color: #ffffff;
display: block;
width:200px;
background: #666666;
position: absolute;
top: 0px;
`;
const Package = styled.div`
display: block;
`;
const Package1 = styled.div`
display: block;
`;
```
```
<LISTMENU>
<Package>
<form id="imagelist1">
<ul>
{this.props.pins.map((d, i) => {
return (
<li>
<label>{d.name}</label>
<input type="checkbox"
id={d.id}
key={d.id}
checked={this.state.pinsshow[i]}
onClick={this.clickAction}
/>
{d.name}
</li>
);
})}
</ul>
</form>
</Package>
<Package1>
<form id="imagelist">
<ul>
[Truncated]
/>
{d.name}
</li>
);
})}
</ul>
</form>
</Package1>
</LISTMENU>
```
## Kinds of styled.***
```
styled.section
styled.div
styled.button
styled.input.attrs
styled.h1
```
among others. See the reference for details. |
Azure/azure-quickstart-templates | 480354159 | Title: Issue with Domain Join ARM template when OU is specified which contains a space
Question:
username_0: [201-vm-domain-join](Template link goes here)
### Issue Details
When using JsonADDomainExtension and specifying an OU, the domain join process fails if the name of the OU contains a space.
Example: "value": "\"OU=Member Servers,DC=contoso,DC=local\""
In the NetSetup log on the machine, the failure reason is: NetpProvisionComputerAccount: Cannot retry downlevel, specifying OU is not supported
Answers:
username_1: I am also getting this issue. I have tried two brand new DC/VM/vnet deployments and it happens across both of them. The OU permissions are all inheriting and DNS appears to be configured correctly. I can always manually join the machines if I logon and do it interactively.
I always get two errors:-
```
2021-08-31T14:30:39.2152508Z [Error]: Try join: domain='mydomain.local', ou='OU=EUC Cloud Hosted,DC=mydomain,DC=local', user='<EMAIL>', option='NetSetupJoinDomain, NetSetupAcctCreate' (#3:User Specified), errCode='2'.
2021-08-31T14:30:39.2152508Z [Error]: Setting error code to 53 while joining domain
2021-08-31T14:30:40.2621044Z [Error]: Try join: domain='mydomain.local', ou='OU=EUC Cloud Hosted,DC=mydomain,DC=local', user='<EMAIL>', option='NetSetupJoinDomain' (#1:User Specified without NetSetupAcctCreate), errCode='1332'.
2021-08-31T14:30:40.2621044Z [Error]: Setting error code to 53 while joining domain
2021-08-31T14:30:40.2621044Z [Error]: Computer failed to join domain 'mydomain.local' from workgroup 'WORKGROUP'.
``` |
project-flogo/catalystml-flogo | 487510168 | Title: valToArray operation
Question:
username_0: The idea is to take a single value and create an array or matrix with all elements equal to that value (see the sketch after the spec below)
* __castToArray__: casts single value to array or array of arrays of given shape
* Input
* value - [int,string,float,etc]
* Optional=False
* shape - [array of ints] - array determines shape of output ([2,3] means a 2x3 matrix)
* Optional=False
* Params
* None
* OutputType - [array of arrays] (same size as inputs)
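A sketch of the intended semantics (numpy is used purely for illustration and may not match the actual implementation language):
```python
import numpy as np

def val_to_array(value, shape):
    """Fill an array/matrix of the given shape with a single value."""
    return np.full(shape, value).tolist()

val_to_array(7, [2, 3])  # -> [[7, 7, 7], [7, 7, 7]]
```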
Answers:
username_0: updating operation name after change in specification
username_0: In PR https://github.com/project-flogo/catalystml-flogo/pull/100
Status: Issue closed
|
Kassensystem/ManagerApplication | 306202205 | Title: Bug when sorting the Waiter table and then switching tabs
Question:
username_0: Sort any column in the Waiter table --> switch to another tab --> switch back to the Waiter tab
-> The entire Waiter table is no longer displayed.
Answers:
username_0: Fixed: when restoring the sort order, the sorting was mistakenly applied to the Order table.
Status: Issue closed
|
hankinsoft/SQLPro | 640983328 | Title: App crashes after editing entry
Question:
username_0: Version 2020.59 (Build 4200.5). Editing data: after entering a value in a row, the app crashes.
Answers:
username_1: I am seeing the same problem with version 2020.59 (Build 4200.5). The app just crashes when entering data into one column value within a row and tabbing out to the next column. If I enter data in a column and press enter (to cease editing but not to move to the next column), the app does not crash, but scrolls back up to row 1 of the table (which is rather annoying when editing a row in the thousands). Looking forward to the fix.
username_2: What type of database are the two of you connecting to (MySQL, Postgres, MSSQL, etc.)? I'm investigating the crash logs and have an idea what's going on, but I haven't been able to reproduce it yet. Just trying to narrow it down a bit further.
username_0: Hello,
I‘m using SQLite,
have a good time
<NAME>apper
username_1: SQLite too.
username_2: FYI I've reproduced this and am working on the fix.
username_2: I've got this fixed for the next build. The updated build should be available in ~24-48 hours.
username_0: Hi Kyle,
that’s cool! Thank you very much!
Kindly
Herbert
username_2: This is available in all the latest builds (2020.61). Is anyone still seeing the issue after updating to that version?
username_1: Appears to be fixed for me. Many thanks for sorting this out. Alan...
Status: Issue closed
|
mmanela/chutzpah | 171269863 | Title: Coverage json file - file names are converted to lower cases
Question:
username_0: The file paths in the coverage json file have partially lower-cased characters; this presents a problem in something I'm working on...
For example, below, the first part is correct, the lower-casing starts after "Site", and the file name is correct:
```
{"D:\\Github\\IsraelHikingMap\\Site\\isrealhiking.web\\services\\mapService.ts"
...
```
Answers:
username_1: I tried a simple repro but it worked fine for me. I created a folder called CATS and put a test file in there. I then generated code coverage and the results correctly contained the casing CATS.
Can you provide me a more detailed repro that I can run locally?
Thanks!
username_0: I think the problem occurs when the references in the chutzpah.json file are set without proper casing.
I can keep my current repository with the issue in case you want to look at it anytime soon, or you can get the following commit and run coverage using my chutzpah json file:
https://github.com/IsraelHikingMap/Site/commit/c9ab9e9ac4401eecb2a8bb6ba3c313765425c04c
Let me know... |
GMOD/Apollo | 70792602 | Title: Long pause when switching organisms
Question:
username_0: When changing organisms, there is a really long wait while the organism data is being downloaded. In the example below, this pause (TTFB) is nearly fifteen seconds, and the actual file download is 0.43 seconds. Have you noticed this? In our test instance that is up now, two assemblies are ~143k scaffolds, and one is ~10k scaffolds.

Answers:
username_1: 1 - Brown recluse . . . awesome!
2 - Great to see you using and testing 2.0 in the mainline!
3 - I think that this is because it's loading all of the ref seq data at once.
We’ll need to fix that and then load on demand on the annotation panel and the ref seq panel in both the drop-down and the table.
We’ll make sure to get that fixed. Thanks for reporting it.
Nathan
username_0: Yeah, we're trying to get to grips with the new back-end and interface. @littlebunch is keeping us up to date. Love the new deployment system!
username_1: @username_0 @littlebunch Should be ready in mainline now. Let me know if this is not much faster, esp. on the sequence and annotator panels (as well as when simply switching between organisms).
username_2: it's fast "enough" for me. :wink: :dancer:
Status: Issue closed
|
json-schema-org/json-schema-org.github.io | 259044228 | Title: Order by draft support?
Question:
username_0: I'd really like to highlight/promote implementations that have updated to support new drafts. How would folks feel about changing the software page to group first by the most recent draft supported, then by the type of implementation, then by the language?
The great work @username_2 has done splitting the data out into structured YAML should make this a lot easier.
@username_1 any thoughts on this? I think it would be good for people to see the availability of support without poring through the list for the draft support notes, and it might encourage implementations to update in order to get a more prominent spot on the page.
Answers:
username_1: This makes sense.
I may even make some GH badges for readmes.
username_0: Badges would be cool!
username_2: I'd say the main downside would be maintenance effort since this would need to reset on every release.
#6 may be relevant or worth considering here.
If doing this, I would recommend two pages. The current software link takes you to the page with current software, then either at the top or bottom of that page there would be a link to a page with older implementations.
username_0: @username_2 that is a good plan. It would give us more flexibility than a strict by-version list. For instance drafts-06 and -07 should both be considered "current" for some time after we get draft-07 out the door. Particularly for validation where the delta is quite small. And we don't really need to keep track of who implements 3 vs 4 at the level of giving them separate pages.
Mostly, I want to highlight implementations that have moved on past draft-04, and make it *REALLY* obvious that the project is active again. This would accomplish that.
username_1: Sounds like a reasonable plan.
If you mainy want to highlight those that support 5 or above vs 4 and below, then we could also on the main page have two groups. I still prefer @username_2's idea, but that's just another one for the sake of argument.
username_0: This has now been done
Status: Issue closed
|
jlippold/tweakCompatible | 415164792 | Title: `Anemone` notworking on iOS 12.1.1
Question:
username_0: ```
{
"packageId": "com.anemonetheming.anemone",
"action": "notworking",
"userInfo": {
"arch32": false,
"packageId": "com.anemonetheming.anemone",
"deviceId": "iPhone9,3",
"url": "http://cydia.saurik.com/package/com.anemonetheming.anemone/",
"iOSVersion": "12.1.1",
"packageVersionIndexed": true,
"packageName": "Anemone",
"category": "Tweaks",
"repository": "BigBoss",
"name": "Anemone",
"installed": "",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Not working based on feedback from users in the community. The current positive rating is 0% with 0 working reports.",
"id": "com.anemonetheming.anemone",
"commercial": false,
"packageInstalled": false,
"tweakCompatVersion": "0.1.0",
"shortDescription": "An awesome theme manager!",
"latest": "2.1.8-2",
"author": "AnemoneTeam",
"packageStatus": "Not working"
},
"base64": "<KEY>",
"chosenStatus": "notworking",
"notes": ""
}
``` |
tendermint/tendermint | 180118484 | Title: consensus: GetRoundState doesn't do a proper copy
Question:
username_0: https://github.com/tendermint/tendermint/blob/master/consensus/state.go#L290
Does not copy the pointer elements, so we can end up with concurrent hash-map access crashes
Answers:
username_0: Trace from related panic:
```
NOTE[09-28|12:02:33] enterNewRound(2028/0). Current: 2028/0/RoundStepNewHeight module=consensus
NOTE[09-28|12:02:33] enterPrecommit: +2/3 prevoted proposal block. Locking module=consensus hash=08317E32D8C6F80B2D131B9433675D8E11AF3704
NOTE[09-28|12:02:33] Finalizing commit of block with 0 txs module=consensus height=2028 hash=08317E32D8C6F80B2D131B9433675D8E11AF3704
NOTE[09-28|12:02:34] enterNewRound(2029/0). Current: 2029/0/RoundStepNewHeight module=consensus
NOTE[09-28|12:02:34] enterPrecommit: +2/3 prevoted proposal block. Locking module=consensus hash=0CCFD7DAFDF635CDA7AAFBA1F168FE07DC71D7D7
NOTE[09-28|12:02:34] Finalizing commit of block with 0 txs module=consensus height=2029 hash=0CCFD7DAFDF635CDA7AAFBA1F168FE07DC71D7D7
NOTE[09-28|12:02:35] enterNewRound(2030/0). Current: 2030/0/RoundStepNewHeight module=consensus
fatal error: concurrent map read and map write
goroutine 19313 [running]:
runtime.throw(0xd2f820, 0x21)
D:/dev/go/go1.6.2/src/runtime/panic.go:547 +0x97 fp=0xc083dc6d90 sp=0xc083dc6d78
runtime.mapaccess1_fast64(0xa2cb60, 0xc08296da70, 0x0, 0xc08212ad48)
D:/dev/go/go1.6.2/src/runtime/hashmap_fast.go:112 +0x61 fp=0xc083dc6db0 sp=0xc083dc6d90
github.com/tendermint/tendermint/consensus.(*HeightVoteSet).StringIndented(0xc082a6eb00, 0xc703f0, 0x4, 0x0, 0x0)
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/tendermint/consensus/height_vote_set.go:175 +0x1b8 fp=0xc083dc6f38 sp=0xc083dc6db0
github.com/tendermint/tendermint/consensus.(*RoundState).StringIndented(0xc083f31900, 0x0, 0x0, 0x0, 0x0)
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/tendermint/consensus/state.go:177 +0x296 fp=0xc083dc7290 sp=0xc083dc6f38
github.com/tendermint/tendermint/consensus.(*RoundState).String(0xc083f31900, 0x0, 0x0)
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/tendermint/consensus/state.go:152 +0x40 fp=0xc083dc72c0 sp=0xc083dc7290
github.com/tendermint/tendermint/rpc/core.DumpConsensusState(0x413709, 0x0, 0x0)
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/tendermint/rpc/core/consensus.go:36 +0x326 fp=0xc083dc73f8 sp=0xc083dc72c0
github.com/tendermint/tendermint/rpc/core.DumpConsensusStateResult(0x0, 0x0, 0x0, 0x0)
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/tendermint/rpc/core/routes.go:111 +0x37 fp=0xc083dc7418 sp=0xc083dc73f8
runtime.call32(0xc082aa9340, 0xdd8370, 0xc083ccbbc0, 0x20)
D:/dev/go/go1.6.2/src/runtime/asm_amd64.s:472 +0x45 fp=0xc083dc7440 sp=0xc083dc7418
reflect.Value.call(0xa4c460, 0xdd8370, 0x13, 0xc71460, 0x4, 0x11a84f0, 0x0, 0x0, 0x0, 0x0, ...)
D:/dev/go/go1.6.2/src/reflect/value.go:435 +0x1214 fp=0xc083dc7790 sp=0xc083dc7440
reflect.Value.Call(0xa4c460, 0xdd8370, 0x13, 0x11a84f0, 0x0, 0x0, 0x0, 0x0, 0x0)
D:/dev/go/go1.6.2/src/reflect/value.go:303 +0xb8 fp=0xc083dc77f0 sp=0xc083dc7790
github.com/tendermint/go-rpc/server.makeJSONRPCHandler.func1(0x18a18a0, 0xc083ccbb40, 0xc083fc3180)
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/go-rpc/server/handlers.go:131 +0x8ca fp=0xc083dc79c0 sp=0xc083dc77f0
net/http.HandlerFunc.ServeHTTP(0xc0821f5170, 0x18a18a0, 0xc083ccbb40, 0xc083fc3180)
D:/dev/go/go1.6.2/src/net/http/server.go:1618 +0x41 fp=0xc083dc79e0 sp=0xc083dc79c0
net/http.(*ServeMux).ServeHTTP(0xc0828b8480, 0x18a18a0, 0xc083ccbb40, 0xc083fc3180)
D:/dev/go/go1.6.2/src/net/http/server.go:1910 +0x184 fp=0xc083dc7a38 sp=0xc083dc79e0
github.com/tendermint/go-rpc/server.RecoverAndLogHandler.func1(0x18a1868, 0xc083f325b0, 0xc083fc3180)
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/go-rpc/server/http_server.go:108 +0x46f fp=0xc083dc7b40 sp=0xc083dc7a38
net/http.HandlerFunc.ServeHTTP(0xc082463ca0, 0x18a1868, 0xc083f325b0, 0xc083fc3180)
D:/dev/go/go1.6.2/src/net/http/server.go:1618 +0x41 fp=0xc083dc7b60 sp=0xc083dc7b40
net/http.serverHandler.ServeHTTP(0xc0828ce300, 0x18a1868, 0xc083f325b0, 0xc083fc3180)
D:/dev/go/go1.6.2/src/net/http/server.go:2081 +0x1a5 fp=0xc083dc7bc0 sp=0xc083dc7b60
net/http.(*conn).serve(0xc0828b5d00)
D:/dev/go/go1.6.2/src/net/http/server.go:1472 +0xf35 fp=0xc083dc7f88 sp=0xc083dc7bc0
runtime.goexit()
D:/dev/go/go1.6.2/src/runtime/asm_amd64.s:1998 +0x1 fp=0xc083dc7f90 sp=0xc083dc7f88
created by net/http.(*Server).Serve
D:/dev/go/go1.6.2/src/net/http/server.go:2137 +0x455
goroutine 1 [select (no cases), 17 minutes]:
github.com/tendermint/go-common.TrapSignal(0xc0821f52f0)
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/go-common/os.go:31 +0x164
github.com/tendermint/tendermint/node.RunNode(0x3df2108, 0xc082463160)
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/tendermint/node/node.go:372 +0xb92
main.main()
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/tendermint/cmd/tendermint/main.go:42 +0x3bf
[Truncated]
github.com/tendermint/go-rpc/server.(*WebsocketManager).WebsocketHandler(0xc0828c01e0, 0x18a18a0, 0xc0833de2c0, 0xc0834b6b60)
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/go-rpc/server/handlers.go:505 +0x39c
github.com/tendermint/go-rpc/server.(*WebsocketManager).WebsocketHandler-fm(0x18a18a0, 0xc0833de2c0, 0xc0834b6b60)
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/tendermint/node/node.go:224 +0x45
net/http.HandlerFunc.ServeHTTP(0xc0821f4f80, 0x18a18a0, 0xc0833de2c0, 0xc0834b6b60)
D:/dev/go/go1.6.2/src/net/http/server.go:1618 +0x41
net/http.(*ServeMux).ServeHTTP(0xc0828b8480, 0x18a18a0, 0xc0833de2c0, 0xc0834b6b60)
D:/dev/go/go1.6.2/src/net/http/server.go:1910 +0x184
github.com/tendermint/go-rpc/server.RecoverAndLogHandler.func1(0x18a1868, 0xc083409380, 0xc0834b6b60)
D:/dev/intellij_workspaces/tendermint/core/src/github.com/tendermint/go-rpc/server/http_server.go:108 +0x46f
net/http.HandlerFunc.ServeHTTP(0xc082463ca0, 0x18a1868, 0xc083409380, 0xc0834b6b60)
D:/dev/go/go1.6.2/src/net/http/server.go:1618 +0x41
net/http.serverHandler.ServeHTTP(0xc0828ce300, 0x18a1868, 0xc083409380, 0xc0834b6b60)
D:/dev/go/go1.6.2/src/net/http/server.go:2081 +0x1a5
net/http.(*conn).serve(0xc08228de00)
D:/dev/go/go1.6.2/src/net/http/server.go:1472 +0xf35
created by net/http.(*Server).Serve
D:/dev/go/go1.6.2/src/net/http/server.go:2137 +0x455
```
Status: Issue closed
username_0: closed by https://github.com/tendermint/tendermint/pull/298 |
snorkel-team/snorkel-tutorials | 531754948 | Title: Weather Dataset Link Broken [non-agg File]
Question:
username_0: ## Issue description
It seems that the link to download the weather dataset for the non-aggregate file is broken
## Code example/repro steps
I can't get the file using wget nor can I get it using the link itself
## Expected behavior
should download the file
Answers:
username_1: Wasn't able to reproduce this issue. Ran the following commands independently to check:
```
wget https://www.dropbox.com/s/94d2wsrrwh1ioyd/weather-non-agg-DFE.csv -P data
wget https://d1p17r2m4rzlbo.cloudfront.net/wp-content/uploads/2016/03/weather-evaluated-agg-DFE.csv -P data
```
Status: Issue closed
username_0: problem on my side |
facebookresearch/habitat-sim | 1112000580 | Title: Determining available positions similar to THOR's GetReachablePositions
Question:
username_0: ## ❓ Questions and Help
Is there any way to get all reachable positions in a navmesh (up to some discretization like `FORWARD_STEP_SIZE`).
Thanks!
Answers:
username_1: Not really. There is `sim.pathfinder.get_topdown_view`, which returns an occupancy grid that may be helpful for your use case, but this isn't exactly all reachable positions.
The set of all reachable positions can be extremely large -- e.g. an agent with a fairly small turn angle (say 30 degrees) can reach hundreds of millions of unique locations within just a 1 meter square, even with a relatively big step size like 0.25m.
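A rough sketch of that approach (`sim` is assumed to be an already-initialized `habitat_sim.Simulator`; the resolution value is arbitrary):
```python
import numpy as np

meters_per_pixel = 0.1  # arbitrary discretization
# Use the height of a sampled navigable point as the floor height
floor_height = sim.pathfinder.get_random_navigable_point()[1]
topdown = sim.pathfinder.get_topdown_view(meters_per_pixel, floor_height)
navigable_cells = np.argwhere(topdown)  # grid indices of navigable cells
```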
username_0: Thanks!
Status: Issue closed
username_0: Am I right to think that if I called `sim.pathfinder.get_topdown_view(meters_per_pixel=0.1, height=0)` I would get an occupancy grid of floor "cells" at 0.1 meter voxel resolution?
username_1: The floor may not be at height zero. If you already have an agent position you can use its height, otherwise you can sample navigable locations via `sim.pathfinder.get_random_navigable_point`. Note that those may not always be on the "floor"
Status: Issue closed
|
ui-schema/ui-schema | 850453756 | Title: Add PluginStack/WidgetRenderer `onErrors` listener
Question:
username_0: When rendering a composed widget for arrays/objects, it is good for performance to use `extractValue` to get only the values at a controlled level, skipping every HTML container which doesn't need to know that data has changed.
When rendering a "proxy component", like the ds-material [Accordions](https://github.com/ui-schema/ui-schema/blob/0.2.1/packages/ds-material/src/Widgets/Accordions/Accordions.tsx), it can't know the calculated `errors` from inside the `PluginStack`.
With `onErrors` added to `PluginStack` and `WidgetRenderer`, where it is executed, this will be possible.
Answers:
username_0: Released together with core 0.2.2
Status: Issue closed
|
wlandau/drake.hasty | 396227173 | Title: Direction for development
Question:
username_0: Right now, `drake.hasty` is an experiment in minimalism. What should be the driving purpose of development? Some possible directions:
## A sandbox
Breeding/testing ground for new `drake` backends.
## An independent front-end scheduler for users
As a scheduler, `drake.hasty` would support production-ready workflows that either
1. Do not need `drake`'s reproducibility features, or
2. Would suffer egregious overhead with the current version of `drake`.
Related:
- https://github.com/ropensci/drake/issues/575
- https://github.com/ropensci/drake/issues/561#issuecomment-433642782
- https://github.com/username_0/workers
## A backend for `drake` official persistent workers
It has been suggested that maybe `drake` could directly call `drake.hasty` for its scheduling needs. I think the idea was to lighten the code base in the same way `devtools` offloaded to `usethis`, `remotes`, etc. However, the more I downsize and reorganize `drake`, the more it seems like this shift might not be worth it.
1. `drake`'s code for scheduling is actually very light and simple when compared with the rest of the internals. Offloading the scheduling may not accomplish much.
2. It is difficult to disentangle `drake`'s internals from its scheduling operations. `drake` makes decisions about checking, building, memory management, etc. using data structures not available to `drake.hasty`. Many of these build operations and decisions happen outside the customizable `config$hasty_build` function.
These options are not mutually exclusive, and my assessment may change as `drake` gets smaller and simpler. Definitely a question to keep revisiting long-term.
cc @krlmlr
Answers:
username_0: I have been spending a lot of time profiling and speeding up `drake` (see this [test case](https://github.com/username_0/drake-examples/tree/master/overhead)) and I am more convinced that `drake.hasty` could have its own serious role beyond just as a sandbox. Unsurprisingly, the bottlenecks in `drake` itself are
1. Storing outputs.
2. Analyzing code.
3. Checking the status of dependencies.
`drake.hasty` does none of those things on its own.
 |
wintershine/GenerateTaskDiscordBot | 465671433 | Title: Update discord names with current task info automatically
Question:
username_0: This should also broadcast a congratulations message on a channel when reaching a certain threshold.
Potentially also when an account is one task short, along with what that task is.
Answers:
username_0: This will also mean we will need to add a link between the Discord account and the in-game account. This link should probably be optional; it needs to be looked into. |
haizlin/fe-interview | 456645240 | Title: [vue] Describe your understanding of the el, template, and render options
Question:
username_0: [vue] Describe your understanding of the el, template, and render options
Answers:
username_1: el: mounts the current instance onto an element
template: the instance template; it can be the template in a `.vue` file or the template option, and it is ultimately compiled into a render function
render: an executable function that doesn't need to go through compilation
template and render each have pros and cons during development, but in production try not to ship templates
username_2: el: the mount target; the DOM generated by Vue will replace el's DOM.
template: this can be the template passed in when creating a new Vue instance, or the external template that el points to.
render: if the render option is passed when creating a new Vue instance, the template option or external template will not be compiled. Without a render option, the template option's template (or the external template el points to) is compiled into a vdom via a render function. |
mail-in-a-box/mailinabox | 132627662 | Title: nginx OCSP stapling does not work
Question:
username_0: In `/var/log/nginx/error.log` several of these messages appear:
`2016/02/09 22:20:21 [error] 2417#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/09 22:28:39 [error] 2417#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/09 22:46:39 [error] 2417#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/09 23:04:39 [error] 2417#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/09 23:23:04 [error] 2417#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 00:05:31 [error] 2417#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 00:16:04 [error] 2417#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 00:19:56 [error] 2418#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 01:16:20 [error] 2418#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 01:31:38 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 01:44:56 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 01:59:32 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 02:16:29 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 02:31:41 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 02:46:51 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 03:17:07 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 03:31:43 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 03:48:03 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 04:30:00 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 05:30:02 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 05:58:00 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: ocsp.startssl.com
2016/02/10 06:02:35 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 06:21:06 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
2016/02/10 06:30:04 [error] 2420#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: gv.symcd.com
`
Examining the nginx config reveals that we never set `ssl_trusted_certificate` though it is needed for OCSP stapling, even the comment in nginx-ssl.conf says so.
https://github.com/mail-in-a-box/mailinabox/blob/master/conf/nginx-ssl.conf#L68
It seems that including the root and intermediate certificates in the `ssl_certificate` is not sufficient.
Perhaps I can work on a PR in the next days if you want.
Answers:
username_1: I'm not exactly sure what's going on. On my box, [ssllabs](https://www.ssllabs.com/ssltest/analyze.html?d=box.occams.info) and openssl (below) seem to report that OCSP is working:
$ openssl s_client -connect box.occams.info:443 -status
...
OCSP Response Data:
OCSP Response Status: successful (0x0)
Maybe because Lets Encrypt is cross-signed something is working by accident for me.
Can you test your box this way?
It also might be difficult to automatically determine the trust path, though, so fixing this (if it needs to be fixed) might not be straightforward.
username_1: (It could also be that your TLS certificate provider doesn't support OCSP?)
username_0: I get this on my server:
```
CONNECTED(00000003)
OCSP response: no response sent
depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
verify error:num=19:self signed certificate in certificate chain
verify return:0
---
Certificate chain
0 s: ... -snip- ...
i:/C=US/O=GeoTrust Inc./CN=RapidSSL SHA256 CA - G3
1 s:/C=US/O=GeoTrust Inc./CN=RapidSSL SHA256 CA - G3
i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
2 s:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
---
```
As you can probably see, I'm using RapidSSL and according to [this](https://www.rapidssl.com/learn-ssl/guides/ocsp-stapling.pdf) they support OCSP.
username_0: Setting `ssl_trusted_certificate /home/user-data/ssl/ssl_certificate.pem;` for the main domain seems to fix the issue?
```
CONNECTED(00000003)
depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
verify error:num=19:self signed certificate in certificate chain
verify return:0
OCSP response:
======================================
OCSP Response Data:
OCSP Response Status: successful (0x0)
```
username_0: Got the same behavior on my other domains too, they use StartSSL certs.
I guess not using `ssl_trusted_certificate` does not always work?
@username_1 What does your config with LetsEncrypt look like?!
username_1: My config is the same as everyone else's, but I've got a Let's Encrypt cert and, I think, its intermediate cert is in the ssl_certificate file.
username_0: I tested `ssl_certificate` with domain+intermediate certs and domain+intermediate+root certs.
OCSP only works for me when `ssl_trusted_certificate` is set.
Do you only have Let's Encrypt certs? I'm surprised your nginx OCSP stapling works even though you never set `ssl_trusted_certificate`, despite the documentation explicitly stating that you have to set it.
username_2: I was going to look into this because I saw the same errors in my log.
```
OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get local issuer certificate) while requesting certificate status, responder: ocsp.startssl.com
```
I use start SSL. I saw however that the last message was on the 25th. Then I ran:
```
zcat -f -- error* | grep -i "OCSP.*unable to get local issuer certificate" | awk '{print $1}' | sort | uniq > ~/dates_with_ocsp_errors.lig
```
This gave me a list of dates where the system experienced ocsp errors. As far as the log goes, there is an error every day, but starting from January the 28th it isn't every day anymore; it is more like every 2 to 3 days.
When I verify the response now all is okay:
```
OCSP response:
======================================
OCSP Response Data:
OCSP Response Status: successful (0x0)
Response Type: Basic OCSP Response
Version: 1 (0x0)
Responder Id: B390A7D8C9AF4ECD613C9F7CAD5D7F41FD6930EA
Produced At: Feb 25 16:28:51 2016 GMT
Responses:
Certificate ID:
Hash Algorithm: sha1
Issuer Name Hash: A5E2344EF5763A9CE2F31E9B9807B0075727A5F9
Issuer Key Hash: B390A7D8C9AF4ECD613C9F7CAD5D7F41FD6930EA
Serial Number: D40AEC190FB5DDA54175967BE136B7DE
Cert Status: good
This Update: Feb 25 16:28:51 2016 GMT
Next Update: Feb 29 16:28:51 2016 GMT
```
The response was fetched on the 25th. The certificate chain is in the server certificate. I find it weird that it sometimes works and sometimes it doesn't. You would think it either validates or it doesn't.
Everything you can find on the net indeed mentions that you should set:
```
ssl_stapling_verify on;
ssl_trusted_certificate ....;
```
I can't find any reference on disabling the verify; the only risk I see is a dns mitm attack on the ocsp service but all that would do is show an error on the client when he verifies the ocsp response against the cert. However I am not an expert on these matters.
username_2: I can't explain why it is working now. The correct way of doing it (at least with my limited understanding) seems to be:
- When the user pastes the intermediate chain with its root, save it to a file next to the domain cert
- Reference the file in the domain-specific section of the nginx config using the ssl_trusted_certificate directive
If the user doesn't paste the chain, we set ssl_stapling_verify off or disable stapling altogether.
Does that make sense? (Or am I just rambling :smiley:)
Status: Issue closed
|
mail-in-a-box/mailinabox | 201076166 | Title: Request For Comment: Debug Log command
Question:
username_0: On several other projects, most notably, [Flynn.io](https://github.com/flynn/flynn), in order to assist the maintainers with support requests, there is a [command](https://github.com/flynn/flynn/blob/master/host/cli/collect-debug-info.go) that produces a gist of all of the pertinent debug info:
- machine configuration and status
- currently installed version
- log files
- queries running service daemons for their status
- etc.
and then posts them as a secret, anonymous gist.
Mailinabox's equivalent would do the above, plus
- response of the mailinabox's admin status checks
- git status of mailinabox local repository
- python modules installed and their versions
- php version installed
- owncloud status & check via `occ` command
- etc.
The key concern I have is around the potential to publish sensitive data, e.g. email addresses, domains, potential security vulnerabilities in mis-configured hosts, etc. I think a warning should suffice, but then again, people don't generally listen to warnings.
I've been going through the support requests at discourse.mailinabox.email and it feels, from my cursory examination, that such a tool could substantially reduce the quantity of back-and-forth communication needed to gather info about a person's issue.
Would the maintainers, @username_1 and @username_2 be interested in such a command?
Answers:
username_1: +1
username_2: I think this would make our life easier! There are a few caveats for me to use this, mostly concerning European law about handling personally identifiable data (e.g. email and IP addresses):
- Consent for publishing needs to be explicit and upfront
- The user should be able to request the removal of the data
- The data isn't kept longer than needed
- The user states that he/she is allowed to send us the data, since the logs might contain personal information about other people.
If that can be arranged it would be really helpful.
username_0: @username_2: Interesting. I am not aware of EU personally identifying information (PII) laws. Thanks for the update!
Perhaps the right move is this: the command does not automatically create a gist. Instead, it simply creates a log file, which the user is free to do whatever they please. When the debug log file is created, we throw a warning that indicates the possibility that the log file contains PII and requires user interaction before proceeding, as well as suggest a few places they can publish it themselves in order to help others assisting them with their MIAB installation.
That way, the responsibility for publishing, removal, and data retention remain with the user and not MIAB.
username_0: Going to leave this open as a discussion for the time being, if that makes sense.
Several diagnostic commands I think we should include:
```
pip3 list
dpkg --list
free -m
ps auxf
ifconfig
lsof -i
ufw status verbose
df -h
uname -a
lsb_release -a
```
Any others you can think of that would be useful?
I think we'd also want to include these files:
```
/etc/resolv.conf
/etc/hosts
/etc/hostname
/var/log/syslog
```
saltstack/salt | 274184687 | Title: SPM fails to install additional modules correctly
Question:
username_0: ### Description of Issue/Question
SPM install does not place the custom modules and states in the correct location (`/srv/spm/salt/_modules` and `/srv/spm/salt/_states`, respectively). It instead places them inside the formula directory. This is not the [documented behavior](https://docs.saltstack.com/en/latest/topics/spm/spm_formula.html#loader-modules), and it is a regression since 2016.11.
I suspect this may be related to #42646, but I didn't notice until now because we haven't deployed a new salt master or changed our custom states between the release of 2017.7.0 and now. I've confirmed that my upgraded salt master now contains copies in both the correct and incorrect locations, but the files in the correct location have not been touched since we upgraded to 2017.7 in July.
### Setup
Assume the existence of an SPM package repository containing a package `foo-formula` which contains a custom state (i.e. `foo-formula/_states/foo.py`)
### Steps to Reproduce Issue
```
spm install foo
ls /srv/spm/salt # Note absence of _states
ls /srv/spm/salt/foo # Note presence of _states
```
### Versions Report
```
Salt Version:
Salt: 2017.7.2
Dependency Versions:
cffi: Not Installed
cherrypy: Not Installed
dateutil: Not Installed
docker-py: Not Installed
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.7.2
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.8
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.5 (default, Aug 4 2017, 00:39:18)
python-gnupg: Not Installed
PyYAML: 3.11
PyZMQ: 15.3.0
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: centos 7.4.1708 Core
locale: UTF-8
machine: x86_64
release: 3.10.0-693.5.2.el7.x86_64
system: Linux
version: CentOS Linux 7.4.1708 Core
```
Answers:
username_1: @username_0 Thanks for the report!
username_2: @username_0 This should be fixed with #50211
Status: Issue closed
|
egulias/EmailValidator | 691034343 | Title: DNS validation succeeds for missing mx records
Question:
username_0: The email address `<EMAIL>` (transposed `o` and `u`) passes DNS validation, but in my opinion it should not. The DNS validation recognizes that the `icluod.com` domain has an `A` record but no `MX` record. The validation succeeds, and a warning is stored that no MX record could be found.
But are these really the correct semantics for the DNS validator? For this domain, it will report that everything is valid and that you can send an email to the recipient. However, if you check the warnings (which not everyone will do), you will see that it is in fact invalid, since there is no MX record for the domain.
One could argue that the caller has to add the logic for checking the warning. The [laravel framework (#34092)](https://github.com/laravel/framework/issues/34092) believes that the return code is the only relevant result for determining whether an email address is correct, and that warnings should not lead to a validation error. That is, it would only fail if this library changed its behavior.
So what is the reason for the current (strange?) behavior?
Answers:
username_1: Hi @username_0, the point you are raising is a valid one. The DNS validation's logic was inherited from Dominic Sayers' "isemail" function/lib.
In that, the non-existence of an MX record produces this same behaviour.
The point is that changing a behaviour would force a major version change. I'm working on a new version and it could fit there.
Meanwhile, I can suggest you add the logic by wrapping the validator in a new validator implementing the interface; I think this ability was merged into Laravel very recently. A sketch follows.
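A minimal sketch of such a wrapper (it assumes the library's v2 `EmailValidation` interface — `isValid()`, `getError()`, `getWarnings()` — and that `DNSCheckValidation` keys its warnings by the warning class's `CODE` constant; verify those names against your installed version):
```php
use Egulias\EmailValidator\EmailLexer;
use Egulias\EmailValidator\Validation\DNSCheckValidation;
use Egulias\EmailValidator\Validation\EmailValidation;
use Egulias\EmailValidator\Warning\NoDNSMXRecord;

// Wraps the stock DNS check and additionally fails when the inner
// validation only produced a "no MX record" warning.
final class StrictDnsValidation implements EmailValidation
{
    private $inner;

    public function __construct()
    {
        $this->inner = new DNSCheckValidation();
    }

    public function isValid($email, EmailLexer $emailLexer)
    {
        if (!$this->inner->isValid($email, $emailLexer)) {
            return false;
        }

        // Assumption: warnings are keyed by the warning class's CODE constant.
        $warnings = $this->inner->getWarnings();

        return !isset($warnings[NoDNSMXRecord::CODE]);
    }

    public function getError()
    {
        return $this->inner->getError();
    }

    public function getWarnings()
    {
        return $this->inner->getWarnings();
    }
}
```
Usage would then be `$validator->isValid($email, new StrictDnsValidation())` in place of passing `new DNSCheckValidation()` directly.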
username_1: The behaviour change has been made for v3, to be a little bit more "real life" friendly.
Status: Issue closed
|
DreamRich/DreamRich | 248270333 | Title: Risk analysis
Question:
username_0: Define a strategy for each user for how asset reserves should be split between a private pension plan and life insurance, to protect the estate that will be inherited after the client's disability or death. This strategy will take into account the client's budget plan, together with basic information about potential heirs and the estate the client expects to leave to those heirs. |
tattali/CalendarBundle | 1093387335 | Title: Performance issue in CalendarEvent class
Question:
username_0: Hi,
After profiling my code, I found a performance issue in this function:
```
public function addEvent(Event $event): self
{
if (!\in_array($event, $this->events, true)) {
$this->events[] = $event;
}
return $this;
}
```
Indeed, the performance of PHP's _in_array()_ function is poor here: it scans the whole array on every call, so adding n events costs O(n²) overall. Can you provide another function using _isset()_, or even one with no duplicate check at all? A sketch of one possible alternative follows.
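A minimal sketch of an identity-keyed alternative (an illustration, not the bundle's actual code; it assumes an extra `$eventIndex` array property on the class, and `spl_object_id()` requires PHP 7.2+):
```php
// Sketch: O(1) duplicate check keyed on object identity, replacing the
// linear in_array() scan over all previously added events.
public function addEvent(Event $event): self
{
    $id = spl_object_id($event); // integer id, stable for the object's lifetime

    if (!isset($this->eventIndex[$id])) {
        $this->eventIndex[$id] = true; // hypothetical private array property
        $this->events[] = $event;
    }

    return $this;
}
```
This turns a quadratic number of strict comparisons into one hash lookup per event.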
You can see it in the Symfony Profiler: 1500 ms to load 27,000 events without the duplicate check in the addEvent() function vs. 3100 ms with it...


Thanks, |
RPi-Distro/pi-gen | 628612560 | Title: Is this the 64-bit raspios pi-gen repository?
Question:
username_0: Will this repository eventually contain the changes used to support generating the 64-bit images for Raspberry Pi OS? If not, does anyone know where the image generator is hosted?
I'm confused, because the 32-bit images, which have been renamed, appear to still be part of this chain, but there's no sign of 64-bit support here.
Answers:
username_1: Yes, I'll add an arm64 branch.
username_0: Thanks. Build tags for the 5/27 build would also be appreciated.
username_2: Any idea when?
username_3: @username_1 I think this issue could be closed since the `arm64` branch is now available.
Status: Issue closed
username_4: Is the arm64 branch ready to use?
username_5: Depends what you mean by "ready", but AFAIK the arm64 branch is what's used to create http://downloads.raspberrypi.org/raspios_arm64/images/ (which is still in beta - see https://github.com/raspberrypi/Raspberry-Pi-OS-64bit ) |
eclipse-ee4j/krazo | 853081925 | Title: Prepare PR to integrate into Glassfish
Question:
username_0: We should file a pull request to integrate Krazo into Glassfish after their 6.1.0 release.
Answers:
username_0: See https://github.com/eclipse-ee4j/glassfish/pull/23479 :tada:
username_1: Think we can close this as the PR was merged?
username_2: Yes, this one is done!
And GlassFish 6.2.0 has been released with Krazo included
https://download.eclipse.org/ee4j/glassfish/glassfish-6.2.0.zip
Status: Issue closed
|
mozilla/pontoon | 995189930 | Title: Document feature development process
Question:
username_0: *This issue was created automatically by a [script](https://github.com/username_1/bugzilla2github/).*
## [Bug 1677601](https://bugzilla.mozilla.org/show_bug.cgi?id=1677601)
Bug Reporter: @username_1
Enforce and document a more strict development process for new features.
Answers:
username_1: Spec in progress:
https://docs.google.com/document/d/1Mgq8QVJ8YvxPtJ<KEY>/edit |
jvandertil/blog | 552240154 | Title: Add cookie policy
Question:
username_0: Currently we store the 'darkmode' preference on the user's computer, which is a functional 'cookie'.
This does not require user consent, but does require a cookie policy unfortunately.
Also, CloudFlare adds the _cfduid cookie, a 'technical' cookie that does not require consent but still needs to be named in the cookie policy.
Link to CloudFlare cookie explanation: https://support.cloudflare.com/hc/en-us/articles/200170156-Understanding-the-Cloudflare-Cookies#12345682
Status: Issue closed |
usnistgov/hl7-igamt | 1070105326 | Title: [BUG] Wrong CS is deleted in profile component
Question:
username_0: **Describe the bug**
I'm trying to delete a conformance statement, but it keeps deleting something else.
**User Account**
rosinc
**URL**
https://hl7v2.igamt-2.nist.gov/igamt/ig/61856cdf8b87bc00071c6113/profilecomponent/618bf1ea8b87bc00068e5e62/message/618beb498b87bc00068e2bbe/conformance-statement
**To Reproduce**
Steps to reproduce the behavior:
1. Go to PC LOI_Common_Component > New and Add-on Order - R4 > Conformance Statements
2. Click on 'delete' for the LOI-79 (the one without a script)
3. LOI-79 is still there and another CS has been deleted (LOI-64 under ORDER.OBSERVATION_REQUEST)
**Expected behavior**
IGAMT should delete the selected conformance statement, not a random one
**Screenshots**
Before I hit delete:

After I hit delete

Status: Issue closed |
slack-ruby/slack-ruby-client | 797080026 | Title: Events are fired twice after a disconnection
Question:
username_0: I'm having an issue where my bot is sending the same message several times after a disconnection.
## How to reproduce:
Here is a sample of my code:
```rb
realtime_client = Slack::RealTime::Client.new token: token,
websocket_ping: 3 # to speed up disconnection detection; not needed to trigger the bug
realtime_client.on :hello do
puts 'Ready to go!'
end
realtime_client.on :message do |message|
puts 'triggered' if message.text == 'My message'
end
realtime_client.start!
```
Then, launch the bot and trigger it: one message should appear.
Disconnect your laptop from the internet, wait for the warning to appear (so that the bot detects the disconnection) and reconnect.
Trigger the bot: it should be triggered twice instead of once.
My logs:
```
Ready to go!
triggered
W, [2021-01-29T19:01:23.481714 #87339] WARN -- id=T03L75GUK, name=__, domain=__: is offline
W, [2021-01-29T19:01:24.484552 #87339] WARN -- id=T03L75GUK, name=__, domain=__: is offline
Ready to go!
triggered
triggered
```
## Hack
Disabling websocket pings fixes the issue:
```
realtime_client = Slack::RealTime::Client.new token: token, websocket_ping: 0
```
Answers:
username_1: What version and async library?
username_0: Hi, I'm on faye-websocket 0.11.0, with slack-ruby-client version 0.15.1
Status: Issue closed
username_1: That's known, dup of #285. We've deprecated faye-websocket in #357, upgrade to 0.16.x with async-websocket and the problem will go away.
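For reference, a minimal sketch of that setup on 0.16.x (the concurrency switch follows the gem's documented async-websocket configuration; the version pins are assumptions):
```ruby
# Gemfile (assumed pins):
#   gem 'slack-ruby-client', '~> 0.16'
#   gem 'async-websocket'

require 'slack-ruby-client'

Slack::RealTime.configure do |config|
  # Use the async-websocket concurrency backend instead of the
  # deprecated faye-websocket one.
  config.concurrency = Slack::RealTime::Concurrency::Async
end

client = Slack::RealTime::Client.new(token: token)
client.on :message do |message|
  puts 'triggered' if message.text == 'My message'
end
client.start!
```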
username_0: Oh ok. Thanks! |