repo_name (string, lengths 4-136) | issue_id (string, lengths 5-10) | text (string, lengths 37-4.84M)
---|---|---
mike42/escpos-php | 517658652 | Title: Can We Use Without Composer?
Question:
username_0: Hi;
Can we use without composer?
Answers:
username_1: Hi,
I'm loading it with autoload.php (file attached; rename it).
Just require the file and use the namespaces:

```php
use Mike42\Escpos\Printer;
use Mike42\Escpos\PrintConnectors\NetworkPrintConnector;
use Mike42\Escpos\EscposImage;
```

Just check the paths for the location of the Mike42 folder.
[autoload.php.txt](https://github.com/username_2/escpos-php/files/3816124/autoload.php.txt)
Status: Issue closed
username_2: You are of course welcome to do whatever you like on your own server, but at least you now know what to expect. |
WhyNotHugo/python-barcode | 325113282 | Title: Using 'raw' without using 'save' gives no output
Question:
username_0: I think the title explains it all.
If I try to use `.raw` to serve an SVG barcode without using `.save` first, I get an empty output...
Example:
Trying to serve this in my django project:
```python
class APIBarcode(View):
    def get(self, request, *args, **kwargs):
        uuid = request.GET.get('uuid')
        C39 = barcode.get_barcode_class('code39')
        toserve = C39(uuid)
        toserve.save('c39code')
        return HttpResponse(toserve.raw, content_type="image/svg+xml")
```
Gives the following output:

But serving this:
```python
class APIBarcode(View):
    def get(self, request, *args, **kwargs):
        uuid = request.GET.get('uuid')
        C39 = barcode.get_barcode_class('code39')
        toserve = C39(uuid)
        # toserve.save('c39code')
        return HttpResponse(toserve.raw, content_type="image/svg+xml")
```
Gives the following output:

If someone can point me to where I should be looking I would gladly attempt to fix it!
Answers:
username_1: You probably want the `render` method, rather than `raw`:
```python
def get(self, request, *args, **kwargs):
    uuid = request.GET.get('uuid')
    C39 = barcode.get_barcode_class('code39')
    toserve = C39(uuid)
    return HttpResponse(toserve.render(), content_type="image/svg+xml")
```
username_0: @username_1 `.render()` method needs a positional argument (which I did not get how to specify from the docs), `.render` attribute gives the same empty result as I showed previously
username_1: `raw` is actually an ugly side effect of poor code (it's been removed in the latest master, and was buggy).
I'm thinking about reintroducing a `raw` property, which returns the raw barcode properly (rather than write it) for situations like this.
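For illustration only, a minimal sketch of what such a property might look like (this is not the actual python-barcode code; the class and the `render()` body below are made-up placeholders):

```python
# Hypothetical sketch: a `raw` property that returns the rendered barcode
# directly, instead of relying on a side effect of save().
class BarcodeSketch:
    def __init__(self, code):
        self.code = code

    def render(self):
        # stand-in for the real SVG rendering logic
        return '<svg><!-- barcode for %s --></svg>' % self.code

    @property
    def raw(self):
        # expose the rendered document without writing anything to disk
        return self.render()
```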
username_0: I can take a shot at that if you'd like!
username_1: Give it a shot if you like; it'll be a while before I can address this myself.
username_1: Given that `render` already returns the rendered barcode, and seems to suffer no limitations, I think it's okay to close this.
Don't hesitate to leave a reply if you consider there's something I might have overlooked here.
Status: Issue closed
|
jcvasquezc/DisVoice | 675558460 | Title: `calc_residual(x_filt,x_emph,...)` instead of `calc_residual(x_filt,x_filt,...)`?
Question:
username_0: https://github.com/username_1/DisVoice/blob/9bda5815df9b05daf329e012379d36f07122f7cc/glottal/GCI.py#L147
In _Line 142_ an **x_emph** is calculated, but it is never used thereafter. Shouldn't it be used in the subsequent lpc-filter process in _Line 147_? i.e.,
`calc_residual(x_filt,x_filt,...)` -> `calc_residual(x_filt,x_emph,...)`
Answers:
username_1: Thanks for finding the bug
This is already fixed in the current version
Status: Issue closed
|
bigdatagenomics/workflows | 268920830 | Title: No call_cannoli function
Question:
username_0: bwa_alignment for cannoli will fail:
```
from bdgenomics.workflows.tools.spark_tools import call_adam, \
call_cannoli, \
call_conductor, \
MasterAddress, \
HDFS_MASTER_PORT, \
SPARK_MASTER_PORT
ImportError: cannot import name call_cannoli
```
Answers:
username_1: I was thinking it might be on a branch somewhere, but I can't find it. |
jembi/bsis | 122740673 | Title: Duplicate Donor merge functionality is broken
Question:
username_0: The merge duplicate donors wizard is broken after the preview.
This was caused by the change in revision 90ead799da6c7b752026ce15f6c56d84f639135b where the DonorService.setDonorDueToDonate method was changed to use a database function to calculate the next donation date for a donor.
The issue is that at the preview step of the merge duplicate donor wizard the new merged donor has not been persisted in the database.
Sadly, there weren't any unit tests for the method `getAllDonationsToMerge` in `DuplicateDonorService` which would (or at least might) have been able to pick up the issue.
Exception seen on the bsis server:
```
15:37:21,547 BSIS_CLICK_LOG bsis:23 - http://localhost:8080/bsis/donors/duplicates/merge/preview?donorNumber=000011
15:37:22,582 INFO BloodTestingRuleEngine:90 - BloodTestingRuleEngine running for donation with id '12' and donor with number '000011' using available test results = {1=A, 17=POS, 18=NEG, 19=NEG, 2=POS, 20=NEG, 23=NEG, 26=POS, 27=POS, 28=POS, 3=LOW, 4=NEG}
java.lang.NullPointerException
at service.DonorService.setDonorDueToDonate(DonorService.java:36)
at service.DonorService$$FastClassBySpringCGLIB$$1d3f3828.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:717)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:98)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:266)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:95)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:653)
at service.DonorService$$EnhancerBySpringCGLIB$$7b360fcf.setDonorDueToDonate(<generated>)
at service.DuplicateDonorService.executeTestsAndUpdate(DuplicateDonorService.java:178)
at service.DuplicateDonorService.getAllDonationsToMerge(DuplicateDonorService.java:135)
at service.DuplicateDonorService$$FastClassBySpringCGLIB$$f4121381.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:717)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:98)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:266)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:95)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:653)
at service.DuplicateDonorService$$EnhancerBySpringCGLIB$$6f418108.getAllDonationsToMerge(<generated>)
at controller.DonorController.findDuplicateDonorsDonations(DonorController.java:419)
at controller.DonorController$$FastClassBySpringCGLIB$$54ff8830.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:717)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:64)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:653)
at controller.DonorController$$EnhancerBySpringCGLIB$$49bfdc7c.findDuplicateDonorsDonations(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:215)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:781)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:721)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:943)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:877)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:863)
[Truncated]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at utils.CORSFilter.doFilter(CORSFilter.java:27)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
```
Status: Issue closed |
sbt/zinc | 609865334 | Title: [1.4.x] BasicVirtualFileRef.id() is an allocation hotspot
Question:
username_0: <img width="860" alt="image" src="https://user-images.githubusercontent.com/65551/80706590-e3355800-8b2b-11ea-9022-716cfde793f1.png">
<img width="941" alt="image" src="https://user-images.githubusercontent.com/65551/80706575-dca6e080-8b2b-11ea-9ad7-88c436361ae6.png">
Answers:
username_0: Probably best to keep the full `_id` as a val. One per virtual source file is okay.
We could also (or otherwise) optimize:
```
final def sourceFile: AbstractFile = {
val file = associatedFile
if ((file eq NoAbstractFile) || (file.path endsWith ".class")) null else file
}
```
To instead use `file.hasExtension` and then make sure that `VirtualFileWrap` doesn't do the string concatenation. This approach would only help when users are on the latest version of `scalac` that includes such an optimization, though.
`VirtualFileRefOrdering` should also be kept allocation free.
Status: Issue closed
|
angular/angular | 344468989 | Title: Dynamic form with i18n
Question:
username_0: ## I'm submitting a...
<pre><code>
[x] Feature request
</code></pre>
## Current behavior
I am creating form inputs dynamically, something like:
```ts
userInputs: Array<InputField<any>> = [
  new TextInputField({
    name: 'firstName',
    id: 'firstName',
    value: this.firstName,
    placeholder: 'Prénom',
    required: true,
    errorMessages: [
      {
        validator: 'required',
        message: 'Le prénom ne peut pas être vide.'
      }
    ]
  }),
  new TextInputField({
    name: 'lastName',
    id: 'lastName',
    value: this.lastName,
    placeholder: 'Nom',
    required: true,
    errorMessages: [
      {
        validator: 'required',
        message: 'Le nom ne peut pas être vide.'
      }
    ]
  }),
  new TextInputField({
    name: 'username',
    id: 'username',
    value: this.username,
    placeholder: 'Nom d\'utilisateur',
    required: true,
    errorMessages: [
      {
        validator: 'required',
        message: 'Le nom d\'utilisateur ne peut pas être vide.'
      }
    ]
  }),
  ...
]
```
## Expected behavior
I want to be able to translate placeholders and validation errors.
Answers:
username_1: I'm sorry but we don't understand the problem you are reporting.
If the problem still exists please open a new issue and provide a plunker reproducing the problem and describing the difference between the expected and current behavior. You can use this plunker template: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5?p=catalogue
Status: Issue closed
|
phetsims/wave-interference | 424968976 | Title: Should we eliminate the separator in the control panel?
Question:
username_0: As part of https://github.com/phetsims/wave-interference/issues/356 I'm wondering if we should eliminate the separator in the control panel.



Answers:
username_1: @username_0 I think the separator line helps keep the sliders and other UI controls separate. For water especially, I think the line draws attention to the check box, which could be overlooked. Another option would be to separate the controls into two separate panels, but I think that would look pretty odd for water, which just has the one checkbox.
If you'd like to discuss further in design meeting @username_0, please go ahead and tag it .
username_0: My initial instinct was this looked odd, but I agree there are good reasons to keep it. Let's close this issue.
Status: Issue closed
|
F5Networks/f5-declarative-onboarding | 999585160 | Title: DO fails to provision LTM/ASM when there is pre-existing route configuration from the nicswap
Question:
username_0: <!--
Github Issues are consistently monitored by F5 staff, but should be considered
as best effort only and you should not expect to receive the same level of
response as provided by F5 Support. Please open a case
(https://support.f5.com/csp/article/K2633) with F5 if this is a critical issue.
When filing an issue please check to see if an issue already exists that matches your's
-->
### Environment
* Declarative Onboarding Version: 1.23.0
* BIG-IP Version: 15.1.2 or 16.1
### Summary
I am seeing the below error when trying to POST a DO declaration on a GCP instance:
"id": "a67ff737-6af2-45df-86f8-57c312a1a427",
"selfLink": "https://localhost/mgmt/shared/declarative-onboarding/task/a67ff737-6af2-45df-86f8-57c312a1a427",
"code": 422,
"status": "ERROR",
"message": "invalid config - rolled back",
"errors": [
"\"type\" may not be specified with \"gateway\"",
"\"type\" may not be specified with \"gateway\""
],
"result": {
"class": "Result",
"code": 422,
"status": "ERROR",
"message": "invalid config - rolled back",
"errors": [
"\"type\" may not be specified with \"gateway\"",
"\"type\" may not be specified with \"gateway\""
]
},
"declaration": {
"schemaVersion": "1.0.0",
"class": "Device",
"async": true,
"label": "Onboard BIG-IP",
"Common": {
"class": "Tenant",
"myProvisioning": {
"class": "Provision",
"ltm": "nominal",
"asm": "nominal"
}
}
}
}
### Steps To Reproduce
Steps to reproduce the behavior:
1. Submit the following declaration to a BIG-IP instance on GCP (after NIC swapping):
```json
{
"schemaVersion": "1.0.0",
"class": "Device",
"async": true,
"label": "Onboard BIG-IP",
[Truncated]
"class": "Device",
"async": true,
"label": "Onboard BIG-IP",
"Common": {
"class": "Tenant",
"myProvisioning": {
"class": "Provision",
"ltm": "nominal",
"asm": "nominal"
}
}
}
}
```
### Expected Behavior
DO should be able to succeed.
### Actual Behavior
Unable to provision ASM via DO, but I can manually provision ASM through the GUI.
Answers:
username_0: Below are the management routes configured by NIC swapping on the GCP instance:

```
bigipuser@(bigip1)(cfg-sync Standalone)(Active)(/Common)(tmos)# list sys management-route
sys management-route default {
    gateway 10.1.0.1
    mtu 1460
    network default
}
sys management-route mgmt_gw {
    mtu 1460
    network 10.1.0.1/32
    type interface
}
sys management-route mgmt_net {
    gateway 10.1.0.1
    mtu 1460
    network 10.1.0.0/16
}
bigipuser@(bigip1)(cfg-sync Standalone)(Active)(/Common)(tmos)#
```
username_1: Are you able to provide trace information from this failure?
https://clouddocs.f5.com/products/extensions/f5-declarative-onboarding/latest/declarations/miscellaneous.html#enabling-traces-in-do-responses
username_1: Thank you for your feedback. I have added this issue to our internal product backlog as AUTOTOOL-2768. |
monounity/karma-typescript | 235323835 | Title: Following instructions doesn't work
Question:
username_0: webpack: Compiling...
```
12 06 2017 11:31:43.460:INFO [compiler.karma-typescript]: Compiling project using Typescript 2.3.4
12 06 2017 11:31:46.352:ERROR [compiler.karma-typescript]: node_modules/@angular/common/src/directives/ng_class.d.ts(48,34): error TS2304: Cannot find name 'Set'.
12 06 2017 11:31:46.353:ERROR [compiler.karma-typescript]: node_modules/@angular/compiler/src/aot/compiler.d.ts(48,32): error TS2304: Cannot find name 'Map'.
12 06 2017 11:31:46.354:ERROR [compiler.karma-typescript]: node_modules/@angular/compiler/src/compile_metadata.d.ts(369,20): error TS2304: Cannot find name 'Set'.
12 06 2017 11:31:46.354:ERROR [compiler.karma-typescript]: node_modules/@angular/compiler/src/compile_metadata.d.ts(371,28): error TS2304: Cannot find name 'Set'.
12 06 2017 11:31:46.355:ERROR [compiler.karma-typescript]: node_modules/@angular/compiler/src/compile_metadata.d.ts(373,15): error TS2304: Cannot find name 'Set'.
12 06 2017 11:31:46.355:ERROR [compiler.karma-typescript]: node_modules/@angular/compiler/src/compile_metadata.d.ts(375,23): error TS2304: Cannot find name 'Set'.
12 06 2017 11:31:46.355:ERROR [compiler.karma-typescript]: node_modules/@angular/compiler/src/compile_metadata.d.ts(377,17): error TS2304: Cannot find name 'Set'.
12 06 2017 11:31:46.356:ERROR [compiler.karma-typescript]: node_modules/@angular/compiler/src/compile_metadata.d.ts(379,25): error TS2304: Cannot find name 'Set'.
12 06 2017 11:31:46.356:ERROR [compiler.karma-typescript]: node_modules/@angular/compiler/src/output/output_ast.d.ts(444,63): error TS2304: Cannot find name 'Set'.
12 06 2017 11:31:46.356:ERROR [compiler.karma-typescript]: node_modules/@angular/core/src/change_detection/differs/default_keyvalue_differ.d.ts(24,16): error TS2304: Cannot find name 'Map'.
12 06 2017 11:31:46.356:ERROR [compiler.karma-typescript]: node_modules/@angular/core/src/change_detection/differs/default_keyvalue_differ.d.ts(32,16): error TS2304: Cannot find name 'Map'.
12 06 2017 11:31:46.356:ERROR [compiler.karma-typescript]: node_modules/@angular/core/src/change_detection/differs/keyvalue_differs.d.ts(23,18): error TS2304: Cannot find name 'Map'.
12 06 2017 11:31:46.356:ERROR [compiler.karma-typescript]: node_modules/@angular/core/src/di/reflective_provider.d.ts(87,123): error TS2304: Cannot find name 'Map'.
12 06 2017 11:31:46.357:ERROR [compiler.karma-typescript]: node_modules/@angular/core/src/di/reflective_provider.d.ts(87,165): error TS2304: Cannot find name 'Map'.
12 06 2017 11:31:46.357:ERROR [compiler.karma-typescript]: node_modules/@angular/http/src/headers.d.ts(52,71): error TS2304: Cannot find name 'Map'.
12 06 2017 11:31:46.357:ERROR [compiler.karma-typescript]: node_modules/@angular/http/src/url_search_params.d.ts(46,16): error TS2304: Cannot find name 'Map'.
12 06 2017 11:31:46.357:ERROR [compiler.karma-typescript]: node_modules/@angular/material/typings/core/overlay/scroll/scroll-dispatcher.d.ts(29,27): error TS2304: Cannot find name 'Map'.
12 06 2017 11:31:46.358:ERROR [compiler.karma-typescript]: node_modules/@angular/material/typings/core/platform/features.d.ts(2,51): error TS2304: Cannot find name 'Set'.
12 06 2017 11:31:46.358:ERROR [compiler.karma-typescript]: node_modules/@angular/material/typings/core/portal/dom-portal-host.d.ts(24,51): error TS2304: Cannot find name 'Map'.
12 06 2017 11:31:46.358:ERROR [compiler.karma-typescript]: node_modules/@angular/material/typings/core/portal/portal-directives.d.ts(43,51): error TS2304: Cannot find name 'Map'.
```
Answers:
username_1: Hey @username_0, could you run your project with Karma in debug mode, `logLevel: config.LOG_DEBUG` and attach the log output here please? It looks like some typings are missing.
username_0: Sure. I did switch to istanbul-instrumenter-loader and made some progress on a proper config. All these related projects should probably list that step; it isn't clear which parts this module controls, since the instructions say to add it to frameworks, add another config section to karma.conf, plus add something entirely different to webpack...
username_1: Which step should "these related projects" list? Which are "these related projects"? Karma-typescript consists of a Karma framework, preprocessor and a reporter, you need to add all of those to the Karma config, otherwise it won't work. Webpack isn't needed at all when running karma-typescript, so you can leave that out of the Karma config.
I take it that you moved on to another way of getting coverage, so I'm closing this.
Status: Issue closed
username_0: Got it working with webpack-istanbul-loader. But the line numbers are off, probably because the source map points to the transpiled version. I'll try to get this working again with more configuration. |
allure-framework/allure-nose | 275157692 | Title: Allure plugin for Nosetest does not support generator tests in reports
Question:
username_0: for the given test_me.py
```python
#!/usr/bin/env python3

def yield_int(i):
    assert i < 10

def test_me():
    for i in range(0, 10):
        yield yield_int, i
```
running Nose with Allure performs three test cases:

```
# nosetests --with-allure --logdir=allure-results
...
----------------------------------------------------------------------
Ran 3 tests in 0.004s

OK
```

and if we browse the resulting allure XML for that run, we discover three test cases; that's OK:
```xml
<ns0:test-suite xmlns:ns0="urn:model.allure.qatools.yandex.ru" start="1511096337473" stop="1511096337474">
    <name>test_me</name>
    <labels/>
    <test-cases>
        <test-case start="1511096337473" status="passed" stop="1511096337473">
            <name>test_me.test_me</name>
            <attachments/>
            <labels/>
            <steps/>
        </test-case>
        <test-case start="1511096337473" status="passed" stop="1511096337473">
            <name>test_me.test_me</name>
            <attachments/>
            <labels/>
            <steps/>
        </test-case>
        <test-case start="1511096337473" status="passed" stop="1511096337473">
            <name>test_me.test_me</name>
            <attachments/>
            <labels/>
            <steps/>
        </test-case>
    </test-cases>
</ns0:test-suite>
```
but as you can see, all `<name>` tags in each `<test-case>` of the report are the same for all yielded tests, and thus if we run

```
# allure serve allure-results
```

we would see only one test case; that's NOT OK.

Probably that's a problem with Nose; is there any workaround for that issue? |
matrix-org/synapse | 1112682254 | Title: Get message "You not allowed to login here" after login with OIDC provider
Question:
username_0: Hi there,
I'm trying using Keycloak OIDC as sso in matrix server. I'm using Element as client to connect to matrix server.
So after I log in with Keycloak in the Element UI, it returns to a page with the message "You are not allowed to login here".
Am I leaving out any important configuration?
Thanks for reading my issue.
Answers:
username_1: @username_0 What exactly is your OIDC configuration?
username_0: @username_1 Oh, I forgot to send my OIDC configuration here.
This is my configuration:
```yaml
oidc_providers:
  # Generic example
  #
  - idp_id: keycloak
    idp_name: "Keycloak provider"
    discover: false
    issuer: "https://domain/auth/realms/{realm}"
    client_id: "synapse"
    client_secret: "{}my secret}"
    client_auth_method: client_secret_post
    scopes: ["openid", "profile"]
    authorization_endpoint: "https://domain/auth/realms/{realm}/protocol/openid-connect/auth"
    token_endpoint: "https://domain/auth/realms/{realm}/protocol/openid-connect/token"
    userinfo_endpoint: "https://domain/auth/realms/{realm}/protocol/openid-connect/userinfo"
    jwks_uri: "https://domain/auth/realms/{realm}/protocol/openid-connect/certs"
    skip_verification: true
    user_mapping_provider:
      config:
        subject_claim: "sub"
        localpart_template: "{{ user.preferred_username }}"
        display_name_template: "{{ user.name }}"
        email_template: "{{ user.email }}"
```
username_2: I don't see that string in any of our templates, so I suspect it is from Keycloak, not Synapse. 🤷
username_3: "Thanks for your reply, I followed document again and it work fine." ??
Could you document what was the misconfiguration / change you did that made it work? |
kaneplusplus/bigmemory | 275813267 | Title: changes to attach.resource?
Question:
username_0: Hello,
I'm debugging some breakage in bigmemoryExtras. It looks like
bigmemory::attach.resources(descriptor, path=dir)
is defunct. Is the new setup stable, or is this still in flux? (I see some commented-out code that looks like this could be an experiment that made it to master early.)
It looks like somewhere along the dispatch path, descriptor should pick up dirname and filename elements. Can you provide a bit more info? I'd be happy to finish the changes in bigmemory or to adapt bigmemoryExtras. Please advise.
Answers:
username_1: Function `attach.resource` is used by `attach.big.matrix`, what do you mean by defunct? Do you mean it shouldn't be exported?
Dirname and filename are indeed stored in the descriptor so that you can reattach without specifying the backingpath (see https://github.com/username_2/bigmemory/issues/61).
Not sure I understood all your questions, hope I didn't answer something else.
username_0: Sorry, "defunct" was vague. I meant that the call
attach.resource(descriptor, path)
ignores the "path" argument and goes straight to the method for "big.matrix.descriptor".
There is a line in setMethod('attach.resource', signature(obj='big.matrix.descriptor')
that used to make the path argument work:
# path <- list(...)[['path']]
but it is commented out now.
has the "path" argument to the method for big.matrix.descriptor been disallowed?
It sounds like all of the path and filename info should now always be in the descriptor object itself. This is a bit tricky for bigmemoryExtras, as it has its own slots for this info, but I can adapt.
username_1: Yeah, when I added the "auto dirname" feature, I disabled the path argument, but left it for backward compatibility.
Status: Issue closed
|
kobaltz/clamby | 133967422 | Title: How complete is this interface?
Question:
username_0: I am planning to use clamav for scanning uploaded files and was planning to create a ruby wrapper for clamav command line utilities. On searching I found your gem.
So I just wanted to know: is this a production-ready setup? I really want to use this on my production servers.
Answers:
username_1: The wrapper is fairly basic and there isn't too much to it. However, it does leverage usage of the daemonized clamscan version; clamdscan. I highly recommend the usage of this as it will reduce the wait time of scanning the files for viruses. I've not had any issues with the gem and have been using it on production for a number of years. As long as you're able to access clamscan or clamdscan from your terminal, you should be fine to use the gem on your server.
username_0: Thanks @username_1, I was not sure it was maintained; happy to hear you are still using it.
Status: Issue closed
|
FelixMo42/hex_game | 906191032 | Title: Status and alerts text area
Question:
username_0: It would be helpful to have a message area (perhaps at the top or bottom) to display score as well as alerts and tips for the player.
Answers:
username_1: Update: made clickable buttons.
username_1: Unless anyone requires any other parts of the GUI, I'm going to make a new branch and set up a better system for the GUI. |
coronalabs/corona | 742777360 | Title: "usesExpansionFile = true" doesn't work on CoronaBuilder
Question:
username_0: "usesExpansionFile = true" doesn't work on CoronaBuilder.
Simulator build works.
Answers:
username_0: As I understand it, CoronaBuilder and Native builds ignore **build.settings**, therefore there's no place where we can request usesExpansionFile = true. Would it be possible to add this as an option to the CoronaBuilder params.lua configuration file?
@username_1
username_1: CoronaBuilder does use `build.settings` to the full extent. There's a bug somewhere. Also, you can use `build.settings` in native builds as well to set up plugins with gradle targets in the Corona group.
username_0: @username_1 Thanks for your reply. Another thing is that "neverStripDebugInfo = true" doesn't work on CoronaBuilder either.
I wonder if there are more options in build.settings that don't work on CoronaBuilder.
username_2: @username_1
We're now using CoronaBuilder to automate builds from the command line, and these two issues are blockers for us. Is there any way we can help fix this?
Status: Issue closed
username_2: Thanks @username_1, we'll test from our end. |
hi-chi/pyHiChi | 1012073248 | Title: [Bug] Field and field mappings
Question:
username_0: After playing around with some of the field solvers and mappings I've come across a few bugs, or in some cases unwanted features. The issues can be replicated using the attached python script (just rename it to .py instead of .txt): [field_bugs.txt](https://github.com/hi-chi/pyHiChi/files/7258591/field_bugs.txt)
1. If mappings are used and fields are combined, the behaviour of different operations varies depending on what is being performed.
As an example, if a single field instance is created

```python
field = hichi.PSATDPoissonField(grid_size, min_coords, grid_step, time_step)
field.set_E(null_value.address, field_value.address, null_value.address)
field.set_B(null_value.address, null_value.address, field_value.address)
```

and a second reference is created

```python
combined_field = field
```

they will be identical. However, after performing some operation, such as addition

```python
combined_field += field.apply_mapping(hichi.RotationMapping(hichi.Axis.Z, hichi.pi/2))
```

`field` and `combined_field` will now reference different objects. But under the hood, they still only depend on a single field instance, as only one has been initialised. This creates a problem primarily when the fields are updated.

```python
field.update_fields()
```

will update the underlying field instance once, which will affect both `field` and `combined_field`, while

```python
combined_field.update_fields()
```

will update the underlying field instance **twice**, affecting both `field` and `combined_field`. This is simply begging for errors to be made by the end user, as it would seem more correct to run `update_fields()` on the `combined_field` rather than on the initial `field`.
It is important that the user can reasonably predict how the different tools function. In my opinion, the current behaviour does not do that. There are a couple of ways, as I see it, for how this could be amended in order to avoid issues.
- If `field` and `combined_field` were to always reference the same object, this would not be an issue. However, that would not allow for the kind of flexibility that we eventually want.
- If the use of `update_fields()` on `combined_field` could be made to ensure that only a single update is performed on the underlying field (see the sketch below). This would keep the wanted flexibility, and the code would act as expected.
This is complicated by the fact that one can also have a combined field of two separate instances, utilising different time steps. Potentially, one should consider updating both fields such that they are synchronised at a given point in time (say, time0+max(time_step)). We probably also want to keep the possibility of updating the two field instances with separate time steps. I see that we currently have an `advance()` and an `update_fields()`, so one or the other could probably be used to update them according to their separate `time_step`, with the other synchronising them to some point in time.
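As a purely illustrative sketch of the second option above (this is not pyHiChi's API; the class and attribute names are made up), a combined field could de-duplicate its underlying instances before advancing them:

```python
# Illustrative sketch only: advance each distinct underlying field instance
# exactly once per call, even if it is referenced multiple times.
class CombinedFieldSketch:
    def __init__(self, parts):
        self.parts = parts  # underlying field instances, possibly repeated

    def update_fields(self):
        seen = set()
        for part in self.parts:
            if id(part) not in seen:
                part.update_fields()
                seen.add(id(part))
```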
2. If a rotation mapping is applied to the field, periodic copies of the field can appear in the original grid region when the result is plotted. While this _may_ be an intended behaviour under some circumstances, I think it should not be the default behaviour.
Would they even be real, simulation wise, to e.g. a particle placed inside the grid?

3. When playing around with the tight focusing field together with rotation mapping I happened upon some strange results along the top-Z border. The field is here non-zero, contrary to what it should be.

I did not see this effect along any other direction (although it could be due to the symmetry axis of the field being in the Z-direction) and it only manifested _after_ running `update_fields()` at least once. I also only noticed the effect when I plotted the result with a resolution higher than the underlying grid resolution (256 vs 1024 in linear dimensions). |
espressif/esp-mqtt | 824659865 | Title: esp-mqtt is a disappointing library
Question:
username_0: esp-mqtt is a disappointing library
Please suggest a working example that reconnects to the MQTT server after Wi-Fi disconnects and does not crash the ESP32.
We are using it for an IoT device that runs 24x7.
Answers:
username_0: <img width="842" alt="Screen Shot 2021-03-08 at 8 47 43 PM" src="https://user-images.githubusercontent.com/60929494/110340694-8a89f200-804f-11eb-81ec-e293150a8495.png">
username_1: Hi, sorry for your trouble.
Can you please add more information so we can help you?
I would suggest you to add `esp_log_level_set("MQTT_CLIENT", ESP_LOG_DEBUG)` to your code, and also share with us the ESP32 crash for better help.
After we solve your issue, we'll work to improve our examples. Thanks for the feedback.
username_2: @username_0 what a nice mqtt library! Open source, simple code, resource saving, how could you say that...
It is suggested to provide a simple example that can be reproduced.
username_0: @username_2
MQTT over SSL is not that reliable. I've seen the ESP32 crash after Wi-Fi disconnects.
We migrated our code to WebSocket (WSS) and it seems to work fine.
username_0: Hi,
Wi-Fi reconnects are an issue.
Sometimes there is no internet on the Wi-Fi.
Can you please let me know how to check and re-establish the MQTT connection in a while loop?
Calling `esp_mqtt_client_reconnect` crashes the system.
username_1: Hi @username_0,
could you please add the logs for the crash?
Also, could you give us more information on the version of idf and esp-mqtt you are using?
username_3: There were indeed some stability/robustness issues with this library in the past, but most of them were fixed over a year ago. I'd say that approximately from [this version](https://github.com/espressif/esp-mqtt/commits/9a5187771a7fae4a45b532a6284f63a66d9d80f7) onwards, esp-mqtt is considered a quite reliable library.
Status: Issue closed
username_3: @username_0 Any update on this issue?
Closing it for now for lack of feedback. Please feel free to reopen with more details. |
EfficientElevator28/Simulation | 602618239 | Title: Fix elevator hanging on floor
Question:
username_0: When you give the step function the same floor it is on, right now it just expires 1 second and continues the simulation. However, we need to check if there are people on the floor. If so, trigger loading/unloading, change ElevatorState, etc.
Status: Issue closed |
retyui/react-native-confirmation-code-field | 457720790 | Title: How to get the keyboard back after dismiss()?
Question:
username_0: I've got a TouchableWithoutFeedback that dismisses the keyboard. If this happens, and I click on the first input box for the code, the keyboard doesn't come back and I can't input again.
What am I missing?
Answers:
username_1: Perhaps you should use '.focus()' or '.blur()' methods
username_0: I got it working.
I had originally wrapped the `<CodeInput>` inside a `<TouchableWithoutFeedback>` but the onPress wasn't getting picked up. I ended up moving the `<TouchableWithoutFeedback>` **below** the `<CodeInput>` and absolutely positioned it so it would be full screen, effectively overlaying a View on top of the `<CodeInput>`, which picks up the onPress now.
I'm getting the effect I want but it's janky, as RN can be.
Status: Issue closed
|
rust-lang/rust | 435060177 | Title: no way to not ship panic strings inside the compiled binary
Question:
username_0: Already have `panic = "abort"` in cargo, and it doesn't seem to leave the panic!() strings out of the compiled binary, even on `--release` build. I don't care about panic strings, I just want a slim generated exe that fails if something goes wrong.
`tokio`, `crossbeam` and `chrono` are adding a lot of useless strings in the `.rdata` on Windows, for example. also `opt-level = "s"` (and `"z"`)
Tried to use `panic::set_hook` as well; it didn't do anything. Tried to remap the long path names using `remap-path-prefix`, but it seemed to have no effect (the paths are still full absolute paths to the current user folder/.cargo/registry). It doesn't matter if I use Windows paths (e.g. C:\\Users or Unix-style C:/Users). And besides, there's #40552, which is two years old, but that's a minor issue when you need to rely on std and size optimizations.
also tried a lot of other stuff (like `link-arg=-s`, `debuginfo=0`, `no-landing-pads`, etc), while the generated exe is smaller than a full blown `cargo build --release`, so it's really an unexpected output from the compiler
Status: Issue closed
Answers:
username_1: You can do this by using a custom panic implementation like [panic-abort](https://github.com/japaric/panic-abort) which doesn't use the strings. Also see [RFC 2070](https://github.com/rust-lang/rfcs/blob/master/text/2070-panic-implementation.md) which introduced this feature and is currently the best source of information on how to use it.
username_0: I've already tried that and it doesn't work on a program that is using std... unless I'm missing something obvious
username_1: Can you provide a reproduction for that? It definitely works on embedded `no_std` code, but it should also work with libstd-using code.
username_0: ```
error: duplicate lang item in crate `panic_abort`: `panic_impl`.
|
= note: first defined in crate `std`.
error: aborting due to previous error
error: Could not compile `bin`.
To learn more, run the command again with --verbose.
```
username_1: Already have `panic = "abort"` in cargo, and it doesn't seem to leave the panic!() strings out of the compiled binary, even on `--release` build. I don't care about panic strings, I just want a slim generated exe that fails if something goes wrong.
`tokio`, `crossbeam` and `chrono` are adding a lot of useless strings in the `.rdata` on Windows, for example. also `opt-level = "s"` (and `"z"`)
tried to use `panic::set_hook` as well, it didn't do anything. tried to make the long path names using `remap-path-prefix` but it seemed to have no effect (the paths are still full absolute paths to current user folder/.cargo/registry). doesn't matter if I use windows paths (eg: C:\\Users or Unix C:/Users) and besides the #40552 that's two years old, but a minor issue when you need to rely on std and size optimizations.
also tried a lot of other stuff (like `link-arg=-s`, `debuginfo=0`, `no-landing-pads`, etc), while the generated exe is smaller than a full blown `cargo build --release`, so it's really an unexpected output from the compiler
username_1: Ah, I misunderstood what you meant. Looks like that RFC indeed only targets `#![no_std]`.
username_2: Linux has a `strip` command that discards symbols from object files. Maybe there is an equivalent under Winows.
username_0: Maybe the default panic implementation should use `#[cfg(...)]` to literally omit the strings on https://github.com/rust-lang/rust/blob/9ebf47851a357faa4cd97f4b1dc7835f6376e639/src/libcore/macros.rs#L12 like `#[cfg(not(panic="abort"))]` (but this doesn't work, of course) before the call to the panic fn, since even the assert_* macros panic with strings. I usually do that for debug strings, with a `#[cfg(debug_assertions)] println!()` (when I actually *do* care about the information, during profile.dev / test).
username_3: You can do this by building your own std (xargo can help with this) with this Cargo feature enabled: https://github.com/rust-lang/rust/blob/master/src/libstd/Cargo.toml#L60
username_4: Did you ever solve this issue? It is easy to make the strings deterministic that are from the crate you are actually compiling. But there are strings that point to dependencies in `/home/<user>/.cargo` which I need to get rid of.
I compile with `opt-level=z` and my panic implementation does not reference the strings:
```
#[panic_handler]
fn panic(info: &PanicInfo) -> ! {
#[cfg(debug_assertions)]
print_debug!(0, "Error: {}", info);
#[cfg(not(debug_assertions))]
print_debug!(0, "Error");
loop {}
}
```
Status: Issue closed
username_5: Duplicate of https://github.com/rust-lang/rust/issues/54981. |
G-Node/odml-ui | 241255480 | Title: enable linking of sections
Question:
username_0: links should only be allowed between sections that share the same type
Answers:
username_1: Use a modifier for linking when dropping from a drag. Since we probably use CTRL for copy, linking should be ALT or CMD (macOS).
It could be ALT everywhere.
username_1: Linking works but we decided to set the modifier to CTRL+SHIFT which seems to be a common default in file browsers.
Status: Issue closed
|
galexrt/docker-sinusbot | 242553128 | Title: cannot allocate memory and download failed for https/www.youtube.com/watch?v=***; youtube-dl download failed (check youtube-dl)
Question:
username_0: Hi,
I've got 2 issues. One of them is that when I create a new image or just turn off the bot, everything works okay (the website and settings work) until the time when I click the button to connect the bot to the server; then I see this message in the "Instance Log":
```
2017-07-13T02:13:29+02:00 Closed.
2017-07-13T02:13:29+02:00 TSClient quit. LogLevel has been increased, please try to connect again to see more details.
2017-07-13T02:13:29+02:00 Error spawning instancefork/exec /sinusbot/TeamSpeak3-Client-linux_amd64/ts3client_linux_amd64: cannot allocate memory
2017-07-13T02:13:29+02:00 Starting instance ts3server://172.16.58.3?port=1337&nickname=%E2%99%AA%20Bot%20Muzyczny%20%23%20Poczekalnia%20%E2%99%AA&password=&channel=&channelpassword=
2017-07-13T02:13:29+02:00 Could not insert into FileTransfer-Tableno such table: FileTransfer
2017-07-13T02:13:29+02:00 Could not delete from FileTransferno such table: FileTransfer
2017-07-13T02:13:29+02:00 Could not insert into FileTransfer-Tableno such table: FileTransfer
2017-07-13T02:13:29+02:00 Could not delete from FileTransferno such table: FileTransfer
2017-07-13T02:13:29+02:00 Could not insert into FileTransfer-Tableno such table: FileTransfer
2017-07-13T02:13:29+02:00 Could not delete from FileTransferno such table: FileTransfer
2017-07-13T02:13:29+02:00 Could not insert into FileTransfer-Tableno such table: FileTransfer
2017-07-13T02:13:29+02:00 Could not delete from FileTransferno such table: FileTransfer
2017-07-13T02:13:29+02:00 Could not insert into FileTransfer-Tableno such table: FileTransfer
2017-07-13T02:13:29+02:00 Could not delete from FileTransferno such table: FileTransfer
2017-07-13T02:13:29+02:00 Could not insert into FileTransfer-Tableno such table: FileTransfer
2017-07-13T02:13:29+02:00 Could not delete from FileTransferno such table: FileTransfer
2017-07-13T02:13:29+02:00 Could not create Notifications-Tabletable Notifications already exists
2017-07-13T02:13:29+02:00 Could not create WhisperReceive-Tabletable WhisperReceive already exists
2017-07-13T02:13:29+02:00 Could not create Chat-Tabletable Chat already exists
2017-07-13T02:13:29+02:00 About to run.
```
Sometimes, to solve that, I had to restart the whole machine, but that doesn't always work.
Yes, I have enough memory (0.3/1.9 GB) at the time when it happens.
And the second issue is that youtube-dl doesn't work. After a reboot it works for 1 minute and then I see this error:
`Download failed for https/www.youtube.com/watch?v=***; youtube-dl download failed (check youtube-dl)`
Updating, rebooting - doesn't help.
Answers:
username_1: To help you with your issue I need the following information from you:
* What OS are you using?
* What Docker image tag are you using?
* Kernel version?
username_0: - Ubuntu 16.10
- 4.4.0-81-generic
- By the Docker image you mean that "username_1/sinusbot:latest"
username_1: I'm looking into it.
username_0: Okay, just for information, I want to say that I was looking for an answer, but I only found information on the sinusbot forum saying that to solve it I have to run the process as administrator, and that was for Windows OS.
username_1: Please repull the Docker image as I'm not able to reproduce this issue.
If you still experience this issue, move your data directory and try with a new data directory/volume.
username_2: I'm still having that issue. I've tried to repull it, and nothing.
username_2: also, same setting as username_0
username_0: What hosting are you using?
username_2: Vultr, but now I'll try on OVH, same Ubuntu 16.04
username_2: nah, same problem on OVH, I have available RAM and storage
username_2:

```
          total     used     free   shared  buff/cache  available
Mem:       992M      77M     531M     2.7M        383M       738M
Swap:      951M      36M     915M
```
username_2: well, I have followed this guide to install the original sinusbot (https://github.com/flyth/ts3soundbot/wiki/Installation---Debian-&-Ubuntu)
So it must be something about the docker, I think you need to downgrade the TS3 version to this one:
Download provided by SinusBot: http://dl.4players.de/ts/releases/3.0.18.2/TeamSpeak3-Client-linux_amd64-3.0.18.2.run
Version: 3.0.18.2
username_1: What command do you use to run the sinusbot container?
username_2: `docker run --restart=always --name sinusbot8087 -d -v /opt/sinusbot/sinusbot8087:/sinusbot/data:z -p 8087:8087 quay.io/username_1/sinusbot:latest`
username_2: I deleted it and it's still happening; do I try a different one?
username_1: The latest docker image on a `4.12.11` kernel with:
```
total used free shared buff/cache available
Mem: 15G 4,8G 4,5G 1,2G 6,3G 9,3G
```
works for me.
Can you try on a server with more than 0.5GB of RAM?
username_2: I've tried on OVH, which has 2GB RAM, and the same thing happens
username_2: I will try to purchase a new, clean VPS with 1GB, so it will be dedicated to sinusbot and its Docker container
username_2: These are the stats of the OVH machine with 2GB:

```
          total     used     free   shared  buff/cache  available
Mem:       1952       88      945       20         918       1638
Swap:         0        0        0
```
username_2: Oh, also "uname -r" from the 2GB OVH machine
`4.4.0-81-generic`
username_1: @username_2 Did it work on the 1GB and/or 2GB machine?
username_2: nope, same error
username_2: Try to update the script to the official one, to a lower version of TS, I think that's the problem on some VPS
username_2: This >>
(https://github.com/flyth/ts3soundbot/wiki/Installation---Debian-&-Ubuntu)
So it must be something about the docker, I think you need to downgrade the TS3 version to this one:
Download provided by SinusBot: http://dl.4players.de/ts/releases/3.0.18.2/TeamSpeak3-Client-linux_amd64-3.0.18.2.run
Version: 3.0.18.2
username_1: @username_2 It's working for me, but if you want you can create a pull request to change the TS3 version to `3.0.18.2`.
username_2: submited
username_1: @username_2 You opened the pull request in the wrong repository. You have to point the pull request to `username_1/docker-sinusbot`.
username_2: when I click submit pull request I go there
username_2: Nah, this is really a bug on some Ubuntu versions; 1.7GB RAM available and it's still popping this:
```
2017-09-18T22:28:55+02:00 Closed.
2017-09-18T22:28:55+02:00 TSClient quit. LogLevel has been increased, please try to connect again to see more details.
2017-09-18T22:28:55+02:00 Error spawning instancefork/exec /sinusbot/TeamSpeak3-Client-linux_amd64/ts3client_linux_amd64: cannot allocate memory
2017-09-18T22:28:55+02:00 Starting instance ts3server://127.0.0.1?port=9987&nickname=SinusBot&password=&channel=&channelpassword=
2017-09-18T22:28:42+02:00 TSClient quit.
2017-09-18T22:28:41+02:00 New connection status 0; Error 0
2017-09-18T22:28:41+02:00 The bot could not connect. This might have several reasons: the server doesn't exist at that address, the server password is wrong, the default channel doesn't exist, the bot has been banned, the server is an illegal installation or the password is wrong , the security level of your identity is too low or the server does not allow the version of the client you are using. See our Wiki / Forums for more help.
```
username_1: Did you only try ubuntu on all servers you have tested on?
username_2: Yes, and I'm thinking to try on Debian, it should be equal or very similar
username_1: @username_2 Yes they are very similar.
username_2: Tomorrow I'll try on inter server paying 0.01$ for Debian, if it works later I will purchase a new one in Aruba (as its 1$/month and inter is 5$/month except the first one)
username_2: Now I'm purchasing the VPS; if it doesn't work on Debian I will make the last attempt on Ubuntu 16.04 on a 32GB DEDICATED server on 1&1 and I will tell you how it's going
username_2: Ok, it's not your bug, it's an UBUNTU bug. It loaded on Debian correctly, no problems (apparently). I will test some more instances and I'll comment the result. Thanks @username_1 for your great great HELP!!!
username_2: also, one question: how many bots can it hold until you get blacklisted or something?
username_1: @username_2 You are only allowed to run as many bots as the license of sinusbot, you hold, allows you to run.
username_2: Ok, I got a new error, opening issue...
username_2: sorry! nothing!!
username_1: I'm going to close this issue for now.
But I will make the change to the lower TeamSpeak version in the next few days to work around the Ubuntu issue.
Status: Issue closed
username_1: I just pushed the teamspeak version change myself.
As I already wrote, please close the PR in the `VeltroGaming` repository. `VeltroGaming` is not the right repository for the PR.
username_1: @username_2 Can you please test if the new image works on Ubuntu now with Teamspeak `3.0.18.2`?
username_2: Nope,
```
2017-09-21T10:59:43+02:00 Closed.
2017-09-21T10:59:43+02:00 TSClient quit. LogLevel has been increased, please try to connect again to see more details.
2017-09-21T10:59:43+02:00 Error spawning instancefork/exec /sinusbot/TeamSpeak3-Client-linux_amd64/ts3client_linux_amd64: cannot allocate memory
```
username_1: You repulled the image, right?
username_2: Yep
username_1: @username_2 Your "fix" changing the TS3 version didn't work then. I suggest you raise an error in the TeamSpeak forums about the `2017-09-21T10:59:43+02:00 Error spawning instancefork/exec /sinusbot/TeamSpeak3-Client-linux_amd64/ts3client_linux_amd64: cannot allocate memory` error.
BTW How did you repull the image?
username_2: with docker pull
username_1: @username_2 Is a normal installation of teamspeak client working on an ubuntu server?
It really seems to be an issue with the TeamSpeak client.
username_1: @username_3 Did you also experience this issue on Ubuntu?
username_3: Yes,
Ubuntu x86_64 4.4.0-64-generic
VPS with 1.7GB free RAM
Web interface works but sinusbot won't start with this error:
Error spawning instancefork/exec /sinusbot/TeamSpeak3-Client-linux_amd64/ts3client_linux_amd64: cannot allocate memory
username_1: The error actually means that TeamSpeak isn't working. Could you try installing different TeamSpeak versions directly on to the server and check if they work (you would just need to run the `ts3client_linux_amd64` binary to test it)
username_1: @username_3 Did you try what I suggested? As I wrote, it could be the TeamSpeak version. But I'm currently not running Ubuntu.
username_3: I installed sinusbot without docker, with teamspeak version suggested in the sinusbot guide (https://github.com/flyth/ts3soundbot/wiki/Installation---Debian-&-Ubuntu), that is 3.0.18.2, it works fine for few days now.
username_3: I suggest you use the same version in your docker if that's the one recommended by sinusbot authors to avoid any problems(after all, version doesn't matter as long as sinusbot can do all it needs to do)
username_1: @username_3 @username_2 I switched the image back to use TS3 version `3.0.18.2` could you please try with the new image.
Run `docker pull quay.io/username_1/sinusbot:latest` or if you use the Docker Hub image `docker pull username_1/sinusbot:latest` and then delete the current Sinusbot container and start it again. |
LSSTDESC/imSim | 201359641 | Title: set GalSimBase.allowed_chips to restrict FITS files that are produced
Question:
username_0: For cases where the `--sensor` option is specified for `imsim.py`, we should set the `allowed_chips` attribute to a list containing the chip name: https://github.com/lsst/sims_GalSimInterface/blob/master/python/lsst/sims/GalSimInterface/galSimCatalogs.py#L168
Answers:
username_1: So the way this works now, we remove sources not on sensors from the source list when we build the dataframe right?
What is the effect of also adding them to this list? Is it related to light coming from sources on other sensors (the buffer variable in PhoSim)?
username_0: The effect should simply be to omit writing the FITS files for any sensors not on that list. Files for the sensors not in that list *could* otherwise be written out if the sims code inferred that some light from sources located on adjacent chips could land on those sensors. @username_2 can describe the details of that inference.
username_2: The relevant method is here
https://github.com/lsst/sims_GalSimInterface/blob/master/python/lsst/sims/GalSimInterface/galSimInterpreter.py#L129
For each object, it generates a postage stamp of that object, finds all of the pixels with more than 0.001 of the maximum pixel's flux, and then checks which detectors overlap with those "active" pixels.
username_1: So, if we wanted to select just single sensors, but still have them "properly" reflect flux from adjacent sources we wouldn't want to do this. Is that correct?
username_2: The GalSimCatalog carries around a list of detectors to simulate. Setting `allowed_chips` truncates that list of detectors. Whenever the GalSimInterpreter draws a source, it uses `findAllDetectors()` to loop over the detectors in the list of allowed detectors and figure out which of those detectors might conceivably receive light from the source. So: any chip that is in `allowed_chips` can still receive light from sources on adjacent chips. Put another way, using `allowed_chips` does not interfere with the ability of adjacent sources to illuminate the chips you have specified.
username_0: otoh, we do down-select the instance catalog based on the chip name:
https://github.com/LSSTDESC/imSim/blob/master/bin.src/imsim.py#L69
so if we want to allow for fluxes from nearby sources not "on" a chip according to the `chipNameFromRaDec` function, we'll need to do something different....unless that function also uses that 0.001 max pixel flux method.
username_2: The method that @username_0 referenced does not do anything clever about bright sources illuminating neighboring chips.
What we could do is use `pixelCoordinatesFromRaDec()` and force the `chipName` kwarg to be the chip we are interested in. This will return the pixel coordinates of the objects as if they were actually on our chosen chip, even if they aren't. Sources that are not on the chip will return pixel coordinates that go above or below the usual [0, 4000] range for x and y pixel coordinates. We could then only keep objects whose pixel coordinates are in some reasonably expanded range (e.g. [-500, 4500]) to make sure that we keep neighboring sources that might illuminate the chip. This, of course, will do nothing for extra bright sources that cast a very large footprint.
If we want to try to reproduce what PhoSim does, I coded up this method
https://github.com/lsst/sims_integrated/blob/feature/SIM-2112/write_end2end_scripts/python/lsst/sims/integrated/createPhoSimCatalogs.py#L88
to try to down-select InstanceCatalogs for only objects that would survive PhoSim's `trim.cpp` step.
The arguments are
name_list -- a numpy array of chipNames representing the actual result of `chipNameFromRaDec` for your sources
xpix0 -- a numpy array of x pixel coordinates for sources as if they were on the R:22 S:11 chip
ypix0 -- same as xpix0 for the y pixel coordinate
magnitude -- a numpy array of PhoSim magNorms for the sources
target_name -- the name of the chip for which you are actually making an InstanceCatalog
center -- a tuple containing the TAN_PIXEL coordinates target_name in R:22 S:11 pixel coordinates system
It returns the result of numpy.where (a list of indexes corresponding to your list of sources) representing the sources that would survive PhoSim's `trim.cpp` for the chip specified by target_name
I haven't had a chance to test it, yet, but this is what I think we would have to do to recreate PhoSim's trim step.
username_1: @username_0 Is the trim_instcat you just added in imsim_deep_pipeline based on the comment above by Scott? Or is it something different/simpler?
username_0: It's a lot simpler. It runs C++ code that filters the instance catalogs based on an acceptance cone on the sky, so if I give it the chip center and some minimal radius, it will filter out almost all of the objects not landing on the chip. It uses streams to process the instance catalog files, so its memory footprint is very small (as opposed to reading all the objects into a data frame), and it's pretty fast. When I was feeding the full focalplane instance catalog into the imsim parser, it was using 10s of GB of memory and crashing on some platforms.
username_1: OK. Thanks. I guess in PhoSim, really the trim step is also a separate executable too even though they are all run by the python driver step. So in principle this could be kept separate and then just made smarter.
Does that mean you are not using the sensor restriction in imsim.py itself now?
username_0: It's unclear whether we want to use that option or not. We might want to revise how it works, actually. Currently it runs chipnamefromradec to assign chipnames to each object and filters on the selected sensor. If we apply an acceptance cone using the new trim code, then we can skip that chipname selection in the code, but I think we might still want to set the `allowed_chips` list so that only the desired chip is written out as a FITS image.
Status: Issue closed
username_0: closed by #55 |
TerryCavanagh/diceydungeons.com | 372449344 | Title: A flame without any animation remains on the bottom of the screen with the Jester
Question:
username_0: The Jester can use the thief card to cut and split a random dice. If that dice is burnt, then a flame remains visible on the bottom of the screen and is not animated anymore.
Answers:
username_1: I encountered this as well. Jester vs. Marshmallow, 2 dice burning. I selected Magic Lockpick, and when the dice went belowdecks, the fire went almost-belowdecks and stopped animating.
username_2: This one's been fixed for a while! Closing.
Status: Issue closed
|
hybridgroup/gobot | 94439340 | Title: PWM exported functions for Intel Edison
Question:
username_0: To be able to properly use the PWM possibilities of the Edison, one needs to set the `period` and `duty cycle` to create a pulse at a given frequency:
```
If you want a 1.6kHz wave that is a 625 microsecond period, or 625000 nanoseconds (assuming a 50% duty cycle).
echo -n "625000" > /sys/class/pwm/pwmchip0/pwm3/period
echo -n "312500" > /sys/class/pwm/pwmchip0/pwm3/duty_cycle
```
Unfortunately, right now, gobot developers don't have access to period and duty cycle, as a matter of fact, they have almost no access to anything pwm related and only `func (e *EdisonAdaptor) PwmWrite` seems exposed (and it does a lot of magic).
My suggestion is to export `pwmPin` and its methods so developers interested in going a bit lower level can have fun ;)
Answers:
username_0: @zankich @username_1 This is currently a blocker for me to generate a proper audio signal via PWM.
For anyone else looking at this issue, here is a good document on PWM: https://www.arduino.cc/en/Tutorial/SecretsOfArduinoPWM
username_0: Adrian and I discussed this issue in RL and he suggested to create a new PWM interface for devices offering access to period and duty cycles
username_1: Finally addressed this is part of the working done in reference to https://github.com/hybridgroup/gobot/pull/410
There is now a PWMPin that can used with all of the Linux SoC boards for more granular control over the PWM on each board.
For example:
```
a := edison.NewAdaptor()
pin := a.PWMPin("3")
// get period
period := pin.Period()
// set to 50% duty cycle
duty := period / 2
pin.SetDutyCycle(duty)
```
This is all available in the `dev` branch. I will close this issue as soon as it lands in `master` as part of the next release.
Status: Issue closed
username_1: This code has now been released as part of Gobot 1.5 so please either open a new issue if needed with any specifics, or reopen this one.
Thanks.
username_0: 😎 thanks |
dunglas/doctrine-json-odm | 360242113 | Title: DateTime::__construct() expects parameter 1 to be string, array given
Question:
username_0: My entity has:
```
/**
* @var \DateTime $createdAt
* @ApiProperty
* @Groups("read")
*/
private $createdAt;
/**
* @var \DateTime $updatedAt
* @ApiProperty
* @Groups("read")
*/
private $updatedAt;
/**
* @return \DateTime
*/
public function getCreatedAt()
{
return $this->createdAt;
}
/**
* @return \DateTime
*/
public function getUpdatedAt()
{
return $this->updatedAt;
}
/**
* @param \DateTime $createdAt
*/
public function setCreatedAt(\DateTime $createdAt)
{
$this->createdAt = $createdAt;
}
/**
* @param \DateTime $updatedAt
*/
public function setUpdatedAt(\DateTime $updatedAt)
{
$this->updatedAt = $updatedAt;
}
```
I added to services.yaml:
```yaml
dunglas_doctrine_json_odm.serializer:
class: 'Symfony\Component\Serializer\Serializer'
arguments:
- ['@serializer.normalizer.datetime', '@dunglas_doctrine_json_odm.normalizer.object']
- ['@serializer.encoder.json']
public: true
```
But I get the error message:
`DateTime::__construct() expects parameter 1 to be string, array given`
How to fix this?<issue_closed>
Status: Issue closed |
bigeasy/locket | 57001044 | Title: Implement an incoming log.
Question:
username_0: Creating an incoming log using Splice and the new synchronous pages in Strata. The log will write to a Strata Scribe, which will flush on close. However, we are going to guard the close using Sequester. All writers will share a lock, write to the same Scribe, but an exclusive lock will close the Scribe, flushing it's contents. This creates the opportunity for multiple writes to share the same Scribe, backing up on the read lock.<issue_closed>
Status: Issue closed |
prometheus/prometheus | 233156134 | Title: up healthmetric not work as expected in 2.0
Question:
username_0: ```
err := sl.scraper.scrape(scrapeCtx, buf)
cancel()
var b []byte
if err == nil {
b = buf.Bytes()
} else if errc != nil {
errc <- err
}
// A failed scrape is the same as an empty scrape,
// we still call sl.append to trigger stale markers.
if total, added, err = sl.append(b, start); err != nil {
sl.l.With("err", err).Error("append failed")
// The append failed, probably due to a parse error.
// Call sl.append again with an empty scrape to trigger stale markers.
if _, _, err = sl.append([]byte{}, start); err != nil {
sl.l.With("err", err).Error("append failed")
}
}
sl.report(start, time.Since(start), total, added, err)
```
The `sl.report` call on the last line may be passed **nil** as the **err** parameter even after a scrape error occurred, because the scrape error is overwritten by the append error. But appending a zero-length byte slice (the failed-scrape case) will not cause an error, so the original scrape error is lost.
sorry for my poor English -_-
Answers:
username_1: #2787 fixes this.
Status: Issue closed
username_2: Merged. |
55Evo/Promesse | 792792232 | Title: Display the promises
Question:
username_0: **As** a user
**I want** to see all of my promises
**so that** I can list my promises.
_Acceptance test 1 :_
**Given** I am a logged-in user
**When** I'm on the home page
**Then** I can see all of my promises.
_Acceptance test 2 :_
**Given** I am a logged-out user
**When** I am logging in
**Then** I'm redirected to the home page and see all of my promises<issue_closed>
Status: Issue closed |
facebook/flow | 229428073 | Title: Flow claims buffers can't be coerced to strings
Question:
username_0: I'm not 100% sure if this is a bug or intended behaviour, but I was surprised by this (see [tryflow](https://flow.org/try/#0MYewdgzgLgBAZiEMC8MwFMDuMBCBXOOdAJwAoByTEYgGwBNyBKAbgChXRIQb0A6GkAHNSAAwAW6GgJgASAN4IQAXxGMgA)):
```js
const thing = new Buffer('world');
console.log(`hello ${thing}!`);
```
```
2: console.log(`hello ${thing}!`);
^ Buffer. This type cannot be coerced to
2: console.log(`hello ${thing}!`);
^ string
```
But Node buffers can be coerced to strings. `String(buffer)` or `buffer.toString()` both work fine, as does interpolating a buffer into a template string.
Status: Issue closed
Answers:
username_1: This is intended behaviour. You should call `toString` explicitly. |
fabnumdef/chatbot-front | 1169450694 | Title: [USER FEEDBACK] Organization of knowledge answers
Question:
username_0: [Sources SKETCH / JPG](https://drive.google.com/drive/folders/1TLRsd2lIVS_J-XnzDYmd7qISr8QjFsTl?usp=sharing)
-----------------
The hierarchy of the answer elements needs to be reworked in the detail view of a knowledge entry:
An answer header:
- detail of the answers
- when it is a multiple-choice answer, the tag color has been unified
<img width="622" alt="Capture d’écran 2020-06-11 à 19 05 15" src="https://user-images.githubusercontent.com/63412351/84417974-bf7b2c80-ac16-11ea-8241-5c80f89a2897.png"><issue_closed>
Status: Issue closed |
luvsound/pippi | 701902752 | Title: Add mul param to Seq instrument lanes
Question:
username_0: Will pointed out it's nice to be able to think about rhythms in terms of multiples of the base pulse too. Sometimes you don't want to think in terms of divisions and dividing by 1/4 to multiply by 4 just feels weird. So adding a `mul` param would be nice:
```
seq.add('instrument1', 'xxx.x', callback1, mul=3)
seq.add('instrument2', 'xx.x..', callback2, mul=2.2)
```
And so on...
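For illustration, a toy sketch of the div/mul equivalence being discussed (not pippi's actual internals; names are made up):
```python
base_pulse = 0.5  # seconds per step

def step_length(pulse, div=1, mul=1):
    # a mul of 4 gives the same beat length as dividing the base pulse by 1/4
    return pulse / div * mul

assert step_length(base_pulse, div=1/4) == step_length(base_pulse, mul=4) == 2.0
print(step_length(base_pulse, mul=3))  # 1.5 -> the 'xxx.x' lane would step every 1.5s
```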
Answers:
username_0: It also begs the question: is there a better level to do this? It would be really cool to allow this to be changed event-to-event, although that may get confusing to manage in a score...
username_0: Both mul & div are a bit confusing. I'm closing this for now to think on another option maybe for 2.1 to simplify this interface.
Status: Issue closed
|
DevOps-Squads-Spring-2018/products | 313467282 | Title: Runs the BDD unit tests to test the pipeline
Question:
username_0: **As a** develop
**I need** to test the pipeline
**So that** ensure the DevOps pipeline is working properly
**Assumptions:**
* BDD unit tests can be useful to verify the pipeline
**Acceptance Criteria:**
```
When runs the BDD integration tests against the service running in dev
Then successfully deploys the service to the Prod space.
``` |
davidsansome/tsurukame | 538059866 | Title: Feature Request: Anonymous Session
Question:
username_0: The idea is simple to explain, but probably hard to implement - allow completely offline lessons and reviewing **without** a WK account.
This could be used for a variety of reasons:
- not wanting to create an account
- testing purposes
- [keeping the App Store gods happy](https://github.com/cplaverty/KeitaiWaniKani/issues/45)
Status: Issue closed
Answers:
username_1: I don't think we'd ever implement this.
All the SRS logic is done by WaniKani so we'd have to reimplement that in the app itself, and I don't really see the purpose...
username_0: Well, that makes it simple. Just wanted to mention it! |
thingsboard/thingsboard | 340519177 | Title: A request for rule chain
Question:
username_0: I want to implement a task that gets the telemetry values from the embedded HSQLDB at 12 o'clock and then sends them to a specified mailbox.
The telemetry has already been posted to ThingsBoard via MQTT.
Is it possible? If so, how do I do it?
Answers:
username_1: Do you want to do it with an external script (like a cron job, for example)? If so, you can use the getTimeseries call from the Telemetry API to do it.
http://<YOUR URL HERE>:8080/swagger-ui.html#!/telemetry-controller/getTimeseriesUsingGET
I have an unfinished Python library that will make this easy. If that would be compatible with your approach, I'd be happy to provide you some sample code.
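For reference, a minimal sketch of that external-script approach using the plain REST API (the host, credentials and device id below are placeholders, and the exact endpoint parameters should be checked against the swagger UI linked above for your version); a cron job could run it at 12:00 and pipe the result into an email:
```python
import requests

TB_URL = "http://localhost:8080"   # placeholder ThingsBoard host
DEVICE_ID = "your-device-uuid"     # placeholder device id

# 1) log in to obtain a JWT token
token = requests.post(
    f"{TB_URL}/api/auth/login",
    json={"username": "tenant@thingsboard.org", "password": "tenant"},
).json()["token"]

# 2) fetch the timeseries values for the chosen keys
resp = requests.get(
    f"{TB_URL}/api/plugins/telemetry/DEVICE/{DEVICE_ID}/values/timeseries",
    params={"keys": "temperature"},
    headers={"X-Authorization": f"Bearer {token}"},
)
print(resp.json())  # e.g. {"temperature": [{"ts": 1531459200000, "value": "24.3"}]}
```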
username_0: Thanks buddy, but I'd like to consider the rule chain or built-in TB functionality first. How would I realize this with the Telemetry API?
username_2: This would be nice to have, using Rule Chain to execute a 'cron job' to prevent having to run externally.
username_3: I confirm. Such a feature is highly recommended fro Rule Chain ;)
Status: Issue closed
|
nginxinc/ansible-role-nginx | 646682346 | Title: Broke backwards compatibility template PID
Question:
username_0: Commit <PASSWORD> broke backwards compatibility. My initial config was, based on `molecule/common/playbook_template.yml`:
```
...
- role: nginxinc.nginx
tags: [ nginx ]
vars:
nginx_main_template_enable: true
nginx_main_template:
template_file: nginx.conf.j2
conf_file_name: nginx.conf
conf_file_location: /etc/nginx
user: nginx
worker_processes: auto
error_log:
location: /var/log/nginx/error.log
level: warn
worker_connections: 1024
http_enable: true
http_settings:
access_log_format:
- name: main
format: |
'$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"'
access_log_location:
- name: main
location: /var/log/nginx/access.log
keepalive_timeout: 65
cache: false
rate_limit: false
keyval: false
server_tokens: "off"
http_global_autoindex: false
stream_enable: false
nginx_http_template_enable: true
...
```
This results in:
```
TASK [nginxinc.nginx : (Setup: All NGINX) Dynamically Generate NGINX Main Configuration File] ****************************************
fatal: [dev1]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'pid'"}
```
The fix on my side is easy: add the `pid` variable. However, I would expect that new development should not break older configurations without a prior warning.
As a more general inquiry, would it be possible to add variable defaults within nested variables, such as this? Or would it be possible to add defaults within the Jinja2 templates?
I would prefer to have the defaults within the templates; it would make sense as a user and developer to have them there because before setting a variable I already inspect the templates on how they are used.
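For illustration, a minimal sketch of the template-side approach asked about here, using Jinja2's `default` filter (the variable names are illustrative, not the role's actual ones):
```python
from jinja2 import Template

# If the nested key is not set, the template falls back to a default
# instead of raising an undefined-variable error.
tpl = Template("pid {{ main.pid | default('/run/nginx.pid') }};")

print(tpl.render(main={}))                              # pid /run/nginx.pid;
print(tpl.render(main={"pid": "/var/run/nginx.pid"}))   # pid /var/run/nginx.pid;
```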
Answers:
username_1: Sorry about that! I created a new PR to restore backwards compatibility, I'll merge it as soon as possible 😄
Status: Issue closed
|
chienmingwu/VVM | 836960606 | Title: lack a include file "definesld.com",what is it?
Question:
username_0: lack a include file "definesld.com",what is it?
Answers:
username_0: Thank you for your help,i do "mkdir EXPDIR in the same directory of RUN" ,and put -I$(EXPDIR) in both CPPFLAGS and INC in Makefile.host.pgi ,but it is still lack of "definesld.com" and the make is stopped.Any suggestion? Thank you.
username_1: Do not make in RUN directory. In t01, vvm.setup and vvm.run are designed for running the model. Try "csh vvm.setup" and then "vvm.run".
username_1: This is the first time that someone we do not know asks questions here. Is it possible to know your name and affiliation? |
kubernetes/website | 318604491 | Title: Issue with k8s.io/docs/tasks/access-application-cluster/web-ui-dashboard/
Question:
username_0: <!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!--Required Information-->
**This is a...**
<!-- choose one by changing [ ] to [x] -->
- [ ] Feature Request
- [ ] Bug Report
**Problem:**
**Proposed Solution:**
**Page to Update:**
https://kubernetes.io/...
Status: Issue closed |
rtrlib/rtrlib | 185351093 | Title: Adapt wiki examples to new API
Question:
username_0: This includes putting the expiration interval above 600 (in the example it is 5xx, which will cause a RTR_INVALID_PARAM).
Answers:
username_1: I think I addressed all pending issue, please review the wiki.
Status: Issue closed
username_0: Everything looks good! thanks |
team-charls/charls | 205697409 | Title: 1.x-master does not compile
Question:
username_0: Hi,
I am trying to use 1.x-master. It lacks several files and therefore does not compile. It is missing:
jpegstreamreader.h
jpegmarkercode.h
jpegstreamreader.cpp
jlscodecfactory.h
The versions from the main branch are not compatible.
Thanks,
Marcel
Answers:
username_1: Hi Marcel,
The 2.x branch and the 1.x branch are not compatible (main reason why major version number was increased). The 1.x branch doesn't have these 4 header files, but the functionality is still there.
The CI server was able to build the last commit:
https://travis-ci.org/team-charls/charls/builds/166053790
Did you do a proper checkout of the 1.x-master branch?
Victor
username_0: I will try again tonight.
Marcel
Status: Issue closed
username_0: Apologies, I must have made an error. 1.x does compile. I will verify that 1.x causes the same issue with the dicom image I sent.
Marcel
username_0: My silly error is that I failed to include header.c in the build... |
eyeinsky/org-anki | 939232456 | Title: Is it possible to sync tags?
Question:
username_0: Wonderful module; is it possible to sync heading tags as well?
Answers:
username_1: At the moment, no. I looked into it hoping it will be quick to implement but it appears that although adding tags when creating a note is straightforward, then changing tags when updating a note [isn't](https://github.com/FooSoft/anki-connect/issues/183).
I'd also like to have this feature but it might take some time until I get to it.
username_1: Pushed what I had to the `tags` branch. The update part isn't done yet.
username_1: 8a9d7d8bc029b11bb487437987995383816b1d4d (tip of `tags` branch) It's possible to now sync tags, org-mode tags are the source of truth: tags not set for a title are removed and tags not currently present are added.
I still need to figure out allowed characters in Anki vs org-mode (there are differences) and also dependency bounds, but you can play with it if you'd like.
username_1: ^ https://orgmode.org/guide/Tags.html
Tested all these characters in Anki and since they work pushed a new master with tag synchronization.
Status: Issue closed
|
Andy77550/curriculum_vitae_base | 744568173 | Title: Add README.md file
Question:
username_0: Add a README.md file that explains the plugin functions.
Answers:
username_1: Hello,
The documents have been added,
I will update them whenever there are changes to make.
Regards,
<NAME>.
Status: Issue closed
|
LinnOk/LinnOk.github.io | 402024040 | Title: Hello World | Lion's Blog
Question:
username_0: https://linnok.github.io/2019/01/23/hello-world/#more
Welcome to Hexo! This is your very first post. Check documentation for more info. If you get any problems when using Hexo, you can find the answer in troubleshooting or you can ask me on GitHub. Quick |
openHPI/codeocean | 408724002 | Title: Show path for filenames in exercise
Question:
username_0: As a backend user of CodeOcean, I would like to see the (full) path of files created for an exercise. The path should be shown when viewing an exercise or when editing it (but is not required for implementing, as we have JSTree there). This suggested change will help me to differentiate between different files in different folders with the same name (see current status below).
<img width="1155" alt="bildschirmfoto 2019-02-11 um 11 48 16" src="https://user-images.githubusercontent.com/7300329/52558533-f0fda100-2df2-11e9-9e78-b7e5bb07cd2f.png"> |
mopidy/mopidy | 54198956 | Title: Use a "global" playlist mapping in MPD dispatch code
Question:
username_0: https://discuss.mopidy.com/t/mpc-clients-are-slow-to-connect-to-git-version-of-mopidy/518 has some of the backstory. But the short version is that redoing this mapping for users with many spotify playlists is costly (and can lead to inconsistencies).
So we should be caching this or just moving it up to the server level instead of doing this once per session / connection.
Answers:
username_1: Fixed by PR #968
Status: Issue closed
|
GeoNode/geonode-mapstore-client | 1169450694 | Title: Adopt Upload API error codes
Question:
username_0: The Upload API now returns [error codes for generic or specific exceptions](geonode/upload/api/exceptions.py).
The client will handle the error codes instead of parsing the error message.
The list of the implemented errors at the moment is:
- `upload_exception` (generic)
- `total_upload_size_exceeded`
- `upload_parallelism_limit_exceeded` |
machpavel/autickax | 60567539 | Title: Finish and Start as visualy shiftable objects
Question:
username_0: Now they have something like a visual shift. Both the finish and start visual shifts are set on restart in the game, but are never set in the editor. This can confuse a level designer. Set the shift in the editor and export the additional data with the VisualyShiftableGameObject class.
Status: Issue closed |
bazelbuild/rules_apple | 855166407 | Title: Release tarball mirrors
Question:
username_0: Hello, I've noticed that several other repos under @bazelbuild publish release tarballs to https://mirror.bazel.build/. For instance, rules\_go has these two equivalent URLs:
```
https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.26.0/rules_go-v0.26.0.tar.gz
https:// github.com/bazelbuild/rules_go/releases/download/v0.26.0/rules_go-v0.26.0.tar.gz
```
However, this is not currently the case for rules\_apple. Would it be possible to mirror rules\_apple as well?
Having a mirror is nice because GitHub could potentially impose rate limits, which is undesirable for CI environments. |
developmentseed/titiler | 1187714509 | Title: remove `exc_info=True` in error logger
Question:
username_0: in https://github.com/developmentseed/titiler/blob/33d3492551aa6198446dc503842ee6c4e6a31332/src/titiler/core/titiler/core/errors.py#L60
we add `exc_info=True` which in theory should print the `the current exception information`, but in our case it's always returning `NoneType: None`.
My understanding is that we had exception handler for known exception which IMO do not require to log the exception information.
cc @geospatial-jeff
Answers:
username_0: note, we could also use `logger.debug` instead of `logger.error` 🤷♂️ |
pytoolz/toolz | 432321671 | Title: Unexpected curry behavior with varargs
Question:
username_0: ```python
from toolz import curry
@curry
def f(x, *args):
return x + sum(args)
```
I would expect `f(0)` to be a function, but instead it returns `1`; the `*args` is interpreted as being provided, but empty.
If there's no fundamental limitation preventing it, I would find the expected behavior useful.
Answers:
username_1: Curry basically keeps creating partials until a sufficient call is provided. `*args` can be empty in Python, so just `f(x)` is enough to satisfy that signature. Given the signature provided, the only way to see any interesting behavior out of `curry `would be to call `f()`, which would return `f` (or something equivalent to `f`). You may just want to use `functools.partial` from the standard library to bind the argument but not call the function.
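For reference, a minimal sketch of the `functools.partial` suggestion (standard library only):
```python
from functools import partial

def f(x, *args):
    return x + sum(args)

g = partial(f, 0)   # binds x=0 without calling f
print(g())          # 0
print(g(1, 2, 3))   # 6
```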
username_0: Thanks—simply using `functools.partial` is a good idea.
Status: Issue closed
|
Quantum-Manager/tracker | 830867449 | Title: thumbs not displayed with Balbooa Joomla Gallery
Question:
username_0: Hi,
I'm using Balbooa Joomla Gallery as gallery manager : https://www.balbooa.com/joomla-gallery
When I use Quantum Manager to browse images manages by Balbooa Joomla Gallery , thumbs are not displayed.
See here : https://prnt.sc/10ke9s3
How can I solve that?
Thanks
L.
Answers:
username_1: @username_0 can you see what is being sent there in the browser console? are there any mistakes
username_0: here it is : http://prntscr.com/10m2vwt
username_1: there is a duplication of the images/images folder. If you remove one image, will it open? As I understand it, the address for the image cache is generated incorrectly.
username_0: I have solved the problem.
was coming from htaccess (generated by akeeba admin tools) to autorise reading quantum cache directory.
works fine now.
thank you
Status: Issue closed
username_2: :+1: |
jonathanj/eliottree | 712808740 | Title: Error occur when using Eliot-tree from docker container
Question:
username_0: Hi, I'm using Eliot-tree and it works very well.
The error occurs when I'm logging into a running docker container, and trying to produce the structured logs with Eliot-tree.
I'm getting the following error:
<img width="951" alt="Screen Shot 2020-10-01 at 15 33 12" src="https://user-images.githubusercontent.com/45912772/94810235-6ec55480-03fc-11eb-9f19-ebd21f22debe.png">
I'm using ubuntu 18.04, Docker version 19.03.12.
Thanks!<issue_closed>
Status: Issue closed |
andyrooger/vs-compat-ts-plugin | 598013692 | Title: Trim published package
Question:
username_0: We don't need to deploy things like the tests or deployment config with the main plugin
Status: Issue closed
Answers:
username_0: :tada: This issue has been resolved in version 1.0.1 :tada:
The release is available on:
- [npm package (@latest dist-tag)](https://www.npmjs.com/package/vs-compat-ts-plugin/v/1.0.1)
- [GitHub release](https://github.com/username_0/vs-compat-ts-plugin/releases/tag/v1.0.1)
Your **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket: |
scottjsimpson/NetCoreAngular | 415020787 | Title: Bad request when including a file inside a Recruiter object
Question:
username_0: Related to PR #1
I had difficulty sending a File up to the upload controller. I looked around and found the exact issue I am hitting, which was [posted on stackoverflow](https://stackoverflow.com/questions/45919932/upload-model-with-iformfile-propertis-from-angular2-to-asp-net-core-webapi).
I'm trying to edit the parent Recruiter object and send an avatar picture to be uploaded to Azure storage. One of the solutions from the question above is to just send the file in a separate request. However, there is an internal relationship with another entity ("FileUpload") used to store a reference to the stored file.
Status: Issue closed
Answers:
username_0: Resolved by separating the file upload as a separate call that updates the FileUpload reference on the Recruiter object at the time a new image selection is made. |
SICKAG/sick_lidar_localization | 1068105784 | Title: rosservice call doesn't work
Question:
username_0: I have LidarLoc 2.0.0.14R installed and the latest release of the sick_lidar_localization package (v5.3.0) launched.
In RViz I can get the correct TF of the robot pose, but I could not use rosservices like LocInitializeAtPose; I get a "False" result, as below.
```
rosservice call /LocInitializeAtPose "x: 100 y: 100 yaw: 0 searchradius: 50"
success: False
```
Is there any configuration required to make the rosservice work?
Answers:
username_1: Thanks for reporting. The services require their arguments in a json-syntax (colon separated and with brackets)l:
```
rosservice call LocInitializeAtPose "{x: 100, y: 100, yaw: 0, searchradius: 50}"
```
This should result in successfull execution.
For further diagnosis in case of errors, it's possible to send commands directly using curl:
```
curl -i -H "Content-Type: application/json" -X POST -d "{\"data\":{\"x\":100,\"y\":100,\"yaw\":0,\"searchradius\":50}}" http://192.168.0.1/api/LocInitializeAtPose
```
Ros services are the preferred way. The native curl command display the SIM response "as is" without any conversion, which can help in case of unexpected results.
username_0: Hi,
I tried both.
1. Using the rosservice call with the json syntax, I received the same result:
```
rosservice call LocInitializeAtPose "{x: 100, y: 100, yaw: 0, searchradius: 50}"
success: False
```
2. Using curl, here is the response I received:
```
data\":{\"x\":100,\"y\":100,\"yaw\":0,\"searchradius\":50}}" http://127.0.0.1/api/LocInitializeAtPose
HTTP/1.1 502 Bad Gateway
Server: nginx/1.14.0 (Ubuntu)
Date: Thu, 02 Dec 2021 16:33:58 GMT
Content-Type: text/html
Content-Length: 182
Connection: keep-alive
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.14.0 (Ubuntu)</center>
</body>
</html>
```
username_1: "Bad Gateway" means the SIM is unreachable. Please check the IP-adress in the curl command: The default SIM-server has IP 192.168.0.1. Argument "http://127.0.0.1/api/LocInitializeAtPose" (i.e. localhost) was probably not intended.
You can also try to query the system state with
`rosservice call LocIsSystemReady "{}"`
resp.
`curl -i -H "Content-Type: application/json" -X POST -d "{}" http://192.168.0.1/api/IsSystemReady`
username_0: what is SIM-server?
I am doing REST request and rosservice call on same device, i.e. IPC with both SICK LidarLoc 2.0 and ROS package running.
The IPC is connecting to SICK TIM571 Lidar Sensor via LAN, [IP_ADDRESS=192.168.1.5].
I tried the ` rosservice call LocIsSystemReady "{}" `, and ` success: False ` received.
Unable to exec `curl` command with IP `192.168.0.1` as this address not found.
username_1: The SIM is the localization controller, which is connected to both the PC and the Lidar sensor. See https;//cdn.sick.com/media/docs%2F0%2F20%2F720%2Foperating_instructions_operating_instructions_en_im0082720.pdf for details. The IP of this localization server (SIM) is the address for the curl commands. Please make sure that the server is reachable for http requests.
username_0: In my case, I don't use a localization controller. I use an industrial PC, as described in the figure below:

username_1: I see, thanks! We'll have to investigate and reply soon.
username_2: For the sake of completeness, the SIM is a controller from SICK to run LiDAR-LOC. Alternatively, as @username_0 has posted, LiDAR-LOC can run on any IPC.
In this case the bad answers might be from a not activated Software. I'll keep this ticket open until we have further information. |
pygraphviz/pygraphviz | 367625231 | Title: AttributeError: module '_graphviz' has no attribute 'AGRAPH_swigconstant'
Question:
username_0: I have installed pygraphviz successfully. But when I test it, there is a problem, and I can't find similar questions. The details of the error are below. Can someone help me? Thanks!!!
F:\>python testPygraphviz.py
Traceback (most recent call last):
File "testPygraphviz.py", line 1, in <module>
import pygraphviz as pgv
File "C:\Python35\lib\site-packages\pygraphviz\__init__.py", line 58, in <module>
from .agraph import AGraph, Node, Edge, Attribute, ItemAttribute, DotError
File "C:\Python35\lib\site-packages\pygraphviz\agraph.py", line 22, in <module>
from . import graphviz as gv
File "C:\Python35\lib\site-packages\pygraphviz\graphviz.py", line 317, in <module>
_graphviz.AGRAPH_swigconstant(_graphviz)
AttributeError: module '_graphviz' has no attribute 'AGRAPH_swigconstant'
Answers:
username_1: I'm also running into this issue, as if the installation wasn't hard enough.
username_1: I'm also running into this. First installation issues and now this..
Anyone knows what's wrong? Any help would be appreciated.
username_2: I have the same problem.
Status: Issue closed
username_0: I forget this issue. Sorry.
Status: Issue closed
username_2: What did you do to resolve this issue? I still have it.
username_3: Running:
Windows 10
Python 3.6
GraphViz_64
Pygraphviz 1.3
I'm also receiving this error. Does anybody have insight into fixing it?
username_1: This shouldn't have been closed. I too still have the issue. |
MozillaFoundation/mozfest-design-2019 | 417550808 | Title: Google Doc template
Question:
username_0: ### Description
Google Docs is used for localized comms cheatsheets and other purposes. Let's create (or reuse) a generic Mozilla branded google doc template for this.
### Reference from last year
**GitHub ticket:** Couldn't find - please edit this comment and link here if found
Answers:
username_1: **Note to self to reference:**
https://docs.google.com/document/d/1qHA73B_1idkAfldOk-wb3ftNfR3Xmg2TTuQhnHLkoOk/edit#heading=h.9dtxhg3nt1qp
https://docs.google.com/document/d/1eQNvNPFoAhOpUf9oCK8vCAqi0At_uXzFWml9vI-e2MQ/edit#heading=h.fs3gtczc7i18
- document generic template in brand home when complete
username_1: I made a template on gdoc here: https://docs.google.com/document/d/1VWB85eRkoOIdUTgiJ6iaTLkM_3ncX_jUggvJJuMnJbQ/edit?usp=sharing
I also documented this in brand home templates: https://foundation.mozilla.org/en/docs/brand/design/templates/
Status: Issue closed
|
jeremykenedy/laravel-auth | 350301454 | Title: profile settings, 'save changes' button always disabled.
Question:
username_0: 
Answers:
username_0: Solved by deleting the button class attribute "disabled" - line 141
username_1: This issue was appeared at me also, and the suggested solving works.
username_2: This issue has been fixed in commit: https://github.com/username_2/laravel-auth/commit/9<PASSWORD>
Thanks!
Status: Issue closed
|
hubmapconsortium/ingest-validation-tools | 605895332 | Title: Ignore CODEX .gci
Question:
username_0: <NAME> 4:47 PM
jesus says: "The 1.gci file is the metadata file from the microscope which we (the TMC) don't have access to" (edited)
username_0 4:54 PM
Ok — From that I’m not clear what the desired outcome is… Is it a requirement / or is it a variation we accept / or is it a mistake you want flagged?
<NAME> 5:05 PM
Waiting for more from Jesus .....
New
<NAME> 5:43 PM
He says we can discard it or just ignore it.
Answers:
username_0: Close in favor of 171.
Status: Issue closed
|
mikaelsvensson/achievements | 291711726 | Title: Separate page for achievement summary
Question:
username_0: Acceptance criteria:
* The "achievement summary" for an organization has its own page.
* It is possible to go to that page directly from the home page ("click here to see the progress for the people you are in charge of").<issue_closed>
Status: Issue closed |
SerenityOS/serenity | 559441859 | Title: Build Broken, makeall.sh fails to build AK/Tests missing header declarations
Question:
username_0: I am building on Ubuntu 18.04 and everything went OK till It tries to build the AK/Tests and fails, I get the following listing:
flock AK/Tests -c "make -C AK/Tests clean all && make -C AK/Tests clean"
make[1]: Entering directory '/mnt/d/Projects/OS/serenity/AK/Tests'
Tests: CLEAN
Tests: C++ TestCircularQueue.host.o
In file included from ../../AK/NonnullRefPtr.h:31,
from ../../AK/ByteBuffer.h:30,
from ../string.h:29,
from ../../AK/Platform.h:57,
from ../../AK/Types.h:30,
from ../../AK/LogStream.h:29,
from ../../AK/NonnullOwnPtr.h:30,
from ../../AK/OwnPtr.h:29,
from ../../AK/Function.h:29,
from ../../AK/TestSuite.h:41,
from TestCircularQueue.cpp:27:
../../AK/StdLibExtras.h:44:50: **error**: variable or field ‘fast_u32_copy’ declared void
44 | [[gnu::always_inline]] inline void fast_u32_copy(u32* dest, const u32* src, size_t count)
| ^~~
../../AK/StdLibExtras.h:44:50: **error**: ‘u32’ was not declared in this scope
../../AK/StdLibExtras.h:44:55: **error**: ‘dest’ was not declared in this scope
44 | [[gnu::always_inline]] inline void fast_u32_copy(u32* dest, const u32* src, size_t count)
| ^~~~
../../AK/StdLibExtras.h:44:61: error: expected primary-expression before ‘const’
44 | [[gnu::always_inline]] inline void fast_u32_copy(u32* dest, const u32* src, size_t count)
| ^~~~~
../../AK/StdLibExtras.h:44:84: error: expected primary-expression before ‘count’
44 | [[gnu::always_inline]] inline void fast_u32_copy(u32* dest, const u32* src, size_t count)
| ^~~~~
../../AK/StdLibExtras.h:59:50: error: variable or field ‘fast_u32_fill’ declared void
59 | [[gnu::always_inline]] inline void fast_u32_fill(u32* dest, u32 value, size_t count)
| ^~~
../../AK/StdLibExtras.h:59:50: error: ‘u32’ was not declared in this scope
../../AK/StdLibExtras.h:59:55: error: ‘dest’ was not declared in this scope
59 | [[gnu::always_inline]] inline void fast_u32_fill(u32* dest, u32 value, size_t count)
| ^~~~
../../AK/StdLibExtras.h:59:61: error: ‘u32’ was not declared in this scope
59 | [[gnu::always_inline]] inline void fast_u32_fill(u32* dest, u32 value, size_t count)
| ^~~
../../AK/StdLibExtras.h:59:79: error: expected primary-expression before ‘count’
59 | [[gnu::always_inline]] inline void fast_u32_fill(u32* dest, u32 value, size_t count)
| ^~~~~
../../AK/StdLibExtras.h:68:18: error: ‘u32’ does not name a type
68 | inline constexpr u32 round_up_to_power_of_two(u32 value, u32 power_of_two)
| ^~~
In file included from ../../AK/ByteBuffer.h:30,
from ../string.h:29,
from ../../AK/Platform.h:57,
from ../../AK/Types.h:30,
from ../../AK/LogStream.h:29,
from ../../AK/NonnullOwnPtr.h:30,
from ../../AK/OwnPtr.h:29,
from ../../AK/Function.h:29,
from ../../AK/TestSuite.h:41,
from TestCircularQueue.cpp:27:
../../AK/NonnullRefPtr.h:265:14: error: ‘LogStream’ does not name a type
265 | inline const LogStream& operator<<(const LogStream& stream, const NonnullRefPtr<T>& value)
| ^~~~~~~~~
In file included from ../../AK/ByteBuffer.h:32,
[Truncated]
69 | using ::ctime;
| ^~~~~
/usr/include/c++/9/ctime:70:11: error: ‘::gmtime’ has not been declared
70 | using ::gmtime;
| ^~~~~~
/usr/include/c++/9/ctime:71:11: error: ‘::localtime’ has not been declared
71 | using ::localtime;
| ^~~~~~~~~
/usr/include/c++/9/ctime:72:11: error: ‘::strftime’ has not been declared
72 | using ::strftime;
| ^~~~~~~~
/usr/include/c++/9/ctime:80:11: error: ‘::timespec_get’ has not been declared
80 | using ::timespec_get;
| ^~~~~~~~~~~~
../../Makefile.common:99: recipe for target 'TestCircularQueue.host.o' failed
make[1]: *** [TestCircularQueue.host.o] Error 1
make[1]: Leaving directory '/mnt/d/Projects/OS/serenity/AK/Tests'
Makefile:28: recipe for target 'test' failed
make: *** [test] Error 2
make: Leaving directory '/mnt/d/Projects/OS/serenity'
Answers:
username_0: This error stream came from Windows Linux Sub-system running ubuntu 18.04. I ran the build again on a real VM ubuntu 19.10 and it worked.
The problem is building on the windows Linux sub-system.
Status: Issue closed
|
eschava/psmqtt | 168290962 | Title: Missing license
Question:
username_0: Hi!
I would like to contribute to this project and i also like to use it in a project i am currently working on.
Would mind licensing this project? Maybe MIT? http://choosealicense.com
Regards and keep up the good work :)
Answers:
username_1: Hi,
Thanks for your attention to my project!
Yes, MIT should work
Are there any steps to specify what license is used by project?
username_0: Hi,
Just create a file named 'LICENSE' with the license text. Example: https://github.com/kipe/enocean
Thank you!
Status: Issue closed
username_1: Added, thank you! |
fritz-marshal/fritz-beta-feedback | 872262445 | Title: Filter sources based on classification ("Sitewide taxonomy: Ia") stopped working
Question:
username_0: Previously, I was able to filter sources based on Classification (and then further sort that table). Since April 25 (approximately), this does not work if you choose classification = 'Sitewide taxonomy: Ia'.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'sources'
2. Click "Filter table"
3. Scroll down to 'Sitewide taxonomy: Ia' in Classifications
4. Source table remains un-filtered and un-responsive (note: this works for any other classification, e.g. "Sitewide classification: Ibn")
**Expected behavior**
Sortable source table only showing saved Ia supernovae. This worked as expected prior to ~April 25 (see note below)
**Screenshots**
<img width="1264" alt="Screenshot 2021-04-30 at 11 42 54" src="https://user-images.githubusercontent.com/71870267/116679983-73121a00-a9ab-11eb-83fe-443dafed1ee3.png">
**Platform information:**
- Fritz version: (find this in the [About](https://fritz.science/about) page)
- Browser: Chrome
- [x] I'm browsing on a desktop
- [x] I'm browsing on my phone
**Additional context**
One lead for the troubleshooting might be that a large number (3000) of Ia supernovae were batch-saved form the Growth marshal to fritz, to groupid=55 around April 25-27 with classification P set to "null" (see e.g. https://fritz.science/source/ZTF18aaajrso). Maybe this is the root of the problem?
Answers:
username_1: Unable to reproduce (see screenshot showing filtered table):

username_1: Please try again, and wait a few seconds after submitting the query
Status: Issue closed
|
pothosware/SoapyRemote | 106916981 | Title: handle sequence errors and drops for acquireRecv()
Question:
username_0: Because the link is over UDP, it's natural to expect drops and reordering. The acquireRecv() implementation should detect this by dropping out-of-order packets and reporting overflow/drop on these events so the caller can respond to the discontinuity.
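A generic sketch of the sequence-tracking logic described above (illustrative Python only, not the actual SoapyRemote C++ implementation):
```python
def on_packet(expected_seq, seq):
    """Return (dropped, next_expected, accept) for an incoming sequence number."""
    if seq == expected_seq:
        return 0, seq + 1, True                    # in order
    if seq > expected_seq:
        return seq - expected_seq, seq + 1, True   # gap: report the drop count
    return 0, expected_seq, False                  # stale/reordered packet: drop it

print(on_packet(5, 5))  # (0, 6, True)
print(on_packet(5, 8))  # (3, 9, True)  three packets were lost
print(on_packet(5, 4))  # (0, 5, False) out-of-order packet is dropped
```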
woocommerce/woocommerce | 698402884 | Title: Introduce GitHub workflow to run e2e tests on multiple hosts
Question:
username_0: This is blocked by some changes required to allow us to run e2e tests on external hosts. One of the blockers: https://github.com/woocommerce/woocommerce/pull/27432
Tasks:
- [ ] A new branch to build e2e tests using https://github.com/woocommerce/action-build
- [ ] Workflow file
Answers:
username_1: @username_0 I pinged you on #27432 so you can review while working on this one.
username_1: Closing this one as we are running tests on external sites.
Status: Issue closed
|
rancher/rancher | 369147152 | Title: Reconsider/fix Catalog related CRD resources and what we store
Question:
username_0: Created from https://github.com/rancher/rancher/issues/14322.
* Discuss if we want to change the way we store catalog related CRD resources
* Security-check the content that is downloaded and don't store the retrieved content 1-on-1.
Answers:
username_1: Any updates on this? Would highly appreciate a solution here - looks like @StrongMonkey came up with something #16473 back in November?
It would really help if the Enable/Disable toggle of the catalog would be permanent. Currently the Library Catalog gets re-enabled everytime we restart the rancher server workload... and thus the etcd gets filled up again.
username_2: This is maybe fixed as part of rancher/rancher/issues/14322 PR (https://github.com/rancher/rancher/pull/17596), everything is now cached to disk and the icons are checked for media type. There are probably extra security checks we can do for verifying Helm tarballs.
Status: Issue closed
username_3: It sounds like the caching was introduced in Rancher 2.2. |
department-of-veterans-affairs/caseflow | 249487578 | Title: [Backend] Hearing Worksheet | Update Hearing Worksheet controller to return real data Part 3
Question:
username_0: We want to show real data on the hearing worksheet page. In order to do that, we need to update the hearing worksheet controller to connect to real data
#### Acceptance Criteria
- Add two new keys to the "deep" `hearing.to_hash` method. Those keys:
- a "total documents in eFolder". This should be a count of all documents in the Veteran's eFolder
- a "total documents since certification date". This should be a count of all documents where the `received_at` date is after the appeal's certification date
- Use the `cache_attribute` method to cache these method values on the hearing (or appeal?) object. This way we don't make extra requests to VBMS on page reload.
Answers:
username_1: **PASSED**
### So how are we scoping? By Receipt date of document or upload date?
- in Preprod, but user whatever UAT data you can get:
```
BFKEY BFCORLID BFDCERTOOL BF41STAT
1 1367888 156292244S 2001-04-02 21:48:40.0 2001-04-02 22:01:29.0
```
- With documents from VBMS:
<img width="526" alt="screen shot 2017-09-05 at 10 02 25 pm" src="https://user-images.githubusercontent.com/18075411/30091152-0b8a7cfa-9286-11e7-977f-19371b68e429.png">
- `Appeal.where(vacols_id:'1367888').last.number_of_documents_after_certification`
```
Document list length: 11
[2017-09-05 22:01:38 -0400] Document list length: 11
=> 3
```
Status: Issue closed
|
woodpecker-ci/woodpecker | 951389378 | Title: Missing v0.14.0-rc.2 on dockerhub
Question:
username_0: Currently I can see `laszlocloud/woodpecker-agent:v0.14.0-rc.1`, however I am unable to find the `laszlocloud/woodpecker-agent:v0.14.0-rc.2` release.
I can see that there is a new repo `woodpeckerci/woodpecker-agent` however this only contains `latest`. Would it be possible to publish `v0.14.0-rc.2` as well?
Answers:
username_1: The tag webhook was not enabled on Woodpecker CI. I will test by tagging v0.14.0; if it doesn't work I'll retag after #257 is merged, else we will switch to goreleaser in the next version.
username_1: yes did worked :)
https://hub.docker.com/layers/woodpeckerci/woodpecker-server/v0.14.0/images/sha256-47b2669eeb3b6c39a475a08e7537d520da435398a8e12c1cfc71c38157041175?context=explore
Status: Issue closed
|
ccorcos/meteor-transitioner | 118435379 | Title: Transitioner breaks Blaze.getView(elem) when route is changed during a transition
Question:
username_0: If you use Blaze.getView(elem) in a template.onRendered-callback where elem is a sub-elemet of the current view, you might get the transitioner-view instead the view of the current template.
This happens if the route is changed during a running transition.
The reason is that elem.$blaze_range is null in this case, but could not track down the root-cause of this problem.
Answers:
username_0: Ok, i found the issue:
If you switch fast between routes, one route may be rendered twice. While ccorcos:transitioner handles this situation without a problem, it may lead to problem, when you refer to an element by id.
In my case, I had an aldeed:autoform which stills relies on ids on forms. Because this form may be rendered multipe times because of a running transition i had this id twice in my dom which resulted in this unexpected behavior.
my fix was to assign a random id to the form
Status: Issue closed
|
projectchrono/chrono | 579469025 | Title: ChParallelSystem does not update sleeping bodies status
Question:
username_0: Hi!
In version 5.0.0, when using `ChParallelSystem`, even with sleeping enabled via `SetUseSleeping()`, the bodies never get marked as sleeping.
Looking at the code, I've noticed that `ChParallelSystem::Integrate_Y()` does nothing equivalent to the call to `ChSystem::ManageSleepingBodies()` that its non-parallel counterpart makes.
Is this feature supported? How else can I check if a subset of bodies has come to rest?
Thanks in advance |
StraboSpot/strabo-mobile | 266958064 | Title: other base maps
Question:
username_0: we need to have the same functionality to zoom to other basemaps that we have for the zoom to offline maps. In other words, when you get the "other basemap" menu it should have the "map" zoom icon as well as the delete icon. |
rust-windowing/winit | 968563550 | Title: Android NDK update
Question:
username_0: For the next release the android ndk related crates should be updated to 0.4.
Alongside the update, re-exporting ndk-glue would be helpful in light of https://github.com/rust-windowing/winit/pull/1993
Answers:
username_0: Some thoughts on the second point regarding re-exporting: https://github.com/rust-windowing/winit/pull/2047#issuecomment-957930111 |
spatie/laravel-medialibrary | 94896312 | Title: if I don't specify registerMediaConversions method in my model then its giving me error
Question:
username_0: If I don't specify the registerMediaConversions method in my model using the trait, it gives me the following error:
Call to undefined method Illuminate\Database\Query\Builder::registerMediaConversions()
Answers:
username_0: I don't want to convert files, but I still have to add a blank registerMediaConversions function to the model.
Any solution?
username_1: I'll fix this in the next release, probably somewhere end of this week.
Status: Issue closed
username_1: Version 2.2.0 adds a `hasMediaWithoutConversions`-interface |
alibaba/hooks | 653985534 | Title: [RFC] useEasyReducer
Question:
username_0: We generally use `useReducer` like this in typescript:
## Demo with useReducer
```typescript
type State = {
count: number;
};
type Actions =
| { type: "increment" }
| { type: "decrement" }
| { type: "update"; payload: number };
const initialState: State = { count: 0 };
function reducer(state: State, action: Actions) {
switch (action.type) {
case "increment":
return { count: state.count + 1 };
case "decrement":
return { count: state.count - 1 };
case "update":
return { count: action.payload };
default:
throw new Error("invalid action type");
}
}
function App() {
const [state, dispatch] = React.useReducer(reducer, initialState);
return (
<div>
<div>{state.count}</div>
<button onClick={() => dispatch({ type: "update", payload: 1 })}>
update
</button>
</div>
);
}
```
However, when the state type is complex or there are many actions, manually coding the state and actions type is very annoying. And for javascript projects, the action type required by the dispatch function is a "magic string", which is not conducive to project maintenance.
So, we can provide a new way to use useReducer: wrap `dispatch` into a function call, and make better use of typescript's type inference ability.
## API
```typescript
function useEasyReducer<T, P extends Processers<T>>(processers: P, initializer: (...args: any[]) => T): [T, Dispatcher<T, P>]
type Processers<T> = Record<string, (state: T, payload?: any) => T>
type Dispatcher<T, P extends Processers<T>> = {
[key in keyof P]: P[key] extends (state: T, ...args: infer TP) => void
? (...args: TP) => void
: never
}
```
[Truncated]
return { count: payload };
}
};
export default function App() {
const [state, { increment, decrement, update }] = useEasyReducer(
processers,
initializer
);
return (
<div>
<span>{state.count}</span>
<button onClick={() => increment()}>increment</button>
<button onClick={() => decrement()}>decrement</button>
<button onClick={() => update(0)}>update to 0</button>
</div>
);
}
```
Answers:
username_1: I haven't used useReducer in a project yet, haha.
username_1: I think it's OK. +1
username_2: This syntax feels a bit half-baked, like a mix of flux and setState. Actually, the demo above can be implemented with `setState`, and the difficulty is about the same:
```ts
import React, {useState} from 'react'
function useMyReducer() {
const [state, setState] = useState({ count: 0 })
function increment() {
setState(state => ({count: state.count + 1}))
}
function decrement() {
setState(state => ({count: state.count - 1}))
}
function update(value: number) {
setState({count: value})
}
return [state, {
increment,
decrement,
update,
}] as const
}
export default function App() {
const [state, { increment, decrement, update }] = useMyReducer()
return (
<div>
<span>{state.count}</span>
<button onClick={() => increment()}>increment</button>
<button onClick={() => decrement()}>decrement</button>
<button onClick={() => update(0)}>update to 0</button>
</div>
);
}
```
Status: Issue closed
|
openframeworks/projectGenerator | 577561078 | Title: cannot compile projectGenerator
Question:
username_0: - openFrameworks 0.11.0
- linux 5.4.23-1 manjaro linux (arch linux based)
since last system update, projectGenerator is not working.
I have multiple linux device(laptop and lenovo p330). both are not work (but different error)
```
[p330]
projectGenerator: error while loading shared libraries: libPocoFoundation.so.64: cannot open shared object file: No such file or directory
[lenovo thinkpad 25Anniverssary]
projectGenerator: error while loading shared libraries: libboost-1.71.0 shared library missing.
```
I tried re compile projectGenerator with `compilePG.sh` where `_OF_/scripts/linux`, but cannot.
here's log
```
Compiling projectGenerator for Release
make[1]: Entering directory '/home/p330/oF/apps/projectGenerator/commandLine'
/home/p330/oF/libs/openFrameworksCompiled/project/makefileCommon/config.addons.mk:210: *** missing separator. Stop.
make[1]: Leaving directory '/home/p330/oF/apps/projectGenerator/commandLine'
make: *** [/home/p330/oF/libs/openFrameworksCompiled/project/makefileCommon/compile.project.mk:129: Release] Error 2
make: Leaving directory '/home/p330/oF/apps/projectGenerator/commandLine'
There has been a problem compiling the command line projectGenerator.
Please report this problem in the forums.
```
Answers:
username_0: By the way, I wrote a thread on the OF forum.
https://forum.openframeworks.cc/t/projectgenerator-is-not-work-with-libboost-1-72-0/34712
username_0: It turns out this is a make v4.3 related issue. I downgraded to 4.2.1 and can compile.
markushedvall/plantuml-viewer | 298344862 | Title: How to save as image file
Question:
username_0: Hi,
how can I save the diagram as png? If I try to right click in the View file it shows the option to save, but nothing actually happens.
Thanks for helping.
Answers:
username_1: Hi, same problem for me here.
My config : Windows 10, Atom 1.24.0 and plantuml-viewer 0.7.2
username_1: This issue will be fixe in Atom's next release I guess : https://github.com/atom/atom/issues/16801
There is also an available workaround: https://github.com/atom-community/markdown-preview-plus/commit/d1f74927daf8af1f7a3db19204bf237c0ec9df2d#comments
username_1: @username_0 : this issue has been solved with the latest atom release (1.24.1) |
nywang16/Pixel2Mesh | 455699702 | Title: data
Question:
username_0: hi,
I want to know the theory behind converting point cloud data into two-dimensional images.
Thanks in advance!
Answers:
username_1: Hi @username_0, you can refer to the pinhole camera model in 3D vision, which is described in detail.
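For illustration, a toy numpy sketch of the pinhole projection the answer refers to (the intrinsics values below are made up, not the ones used for the dataset renderings):
```python
import numpy as np

points = np.array([[0.1, -0.2, 2.0],
                   [0.0,  0.3, 1.5]])      # (N, 3) camera-frame points, z > 0
K = np.array([[250.0,   0.0, 112.0],
              [  0.0, 250.0, 112.0],
              [  0.0,   0.0,   1.0]])      # focal lengths and principal point

uvw = points @ K.T                          # homogeneous image coordinates
pixels = uvw[:, :2] / uvw[:, 2:3]           # u = fx*x/z + cx, v = fy*y/z + cy
print(pixels)
```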
Status: Issue closed
username_2: Hi,
Do you know how to download the dataset and unzip it from the link below?
https://drive.google.com/open?id=131dH36qXCabym1JjSmEpSQZg4dmZVQid |
alastria/alastria-node | 287062272 | Title: No actualiza ENODE en ficheros JSON al inicializar un nodo general
Question:
username_0: When launching the script "init.sh auto general XXXXX", the files ~/alastria/data/permissioned-nodes.json and ~/alastria/data/static-nodes.json do not contain the newly created ENODE. The constellation.conf file is updated correctly.
Answers:
username_1: After reviewing it with @marcossanlab, this is the correct behaviour. On general nodes these files should not be updated.
I'm closing the issue; if you see there is anything else to discuss we can reopen it.
Status: Issue closed
username_0: Well, there is something I don't quite understand about why it is done this way. On general nodes, when initializing, the ENODE of the new node is copied to permissioned-nodes_validator.json, which is then supposedly only used to generate the validators' permissioned-nodes.json. However, permissioned-nodes_general.json is only updated when a validator is initialized (see the code below), and it is that file which the general node then uses to generate its permissioned-nodes.json. The net effect is that none of the general nodes can connect to any node that is not in that permissioned-nodes.json and that is not also a validator (so that it can in turn also recognize this new node). In short, it seems to me that no general node will be able to connect to another general node, since it will never appear in its permissioned-nodes.json. Is this the intended behaviour?
```sh
update_nodes_list() {
echo "Selected $NODE_TYPE node..."
echo "Updating permissioned nodes..."
ENODE=",
\"$1\"
]"
PERMISSIONED_NODES_VALIDATOR=${PERMISSIONED_NODES_VALIDATOR::-2}
PERMISSIONED_NODES_VALIDATOR="$PERMISSIONED_NODES_VALIDATOR$ENODE"
echo "$PERMISSIONED_NODES_VALIDATOR" > ~/alastria-node/data/permissioned-nodes_validator.json
if ( [ "validator" == "$NODE_TYPE" ]); then
PERMISSIONED_NODES_GENERAL=${PERMISSIONED_NODES_GENERAL::-2}
PERMISSIONED_NODES_GENERAL="$PERMISSIONED_NODES_GENERAL$ENODE"
echo "$PERMISSIONED_NODES_GENERAL" > ~/alastria-node/data/permissioned-nodes_general.json
fi
echo "Updating static-nodes..."
cp ~/alastria-node/data/permissioned-nodes_general.json ~/alastria-node/data/static-nodes.json
}
```
username_1: When running the script "init.sh auto general XXXXX", the files ~/alastria/data/permissioned-nodes.json and ~/alastria/data/static-nodes.json do not contain the newly created ENODE. The constellation.conf file, on the other hand, is updated correctly.
username_1: Exactly. General nodes only need to see the validators. All communication between general nodes happens through the blockchain itself, or through events on it. That is why permissioned-nodes_general only includes the validators.
The validators are the ones in charge of propagating new blocks across the whole network, and for private contracts between two regular nodes this is done with the help of the Constellation nodes, so direct visibility between the Quorum nodes is not needed.
I hope this is a bit clearer now. In any case, if you would like us to go over how it works together or dig into any specific point, I can share my contact details and we can talk it through.
Status: Issue closed
|
CodeAway/platform | 225620094 | Title: [feature] Configurable Programming Environments
Question:
username_0: <a href="https://github.com/username_0"><img src="https://avatars0.githubusercontent.com/u/4124733?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [username_0](https://github.com/username_0)**
_Saturday Apr 22, 2017 at 13:24 GMT_
_Originally opened as https://github.com/username_0/ddp/issues/1_
----
Proposal:
Admins should be able to create and add new environments for different softwares/packages/languages. Users can choose the environment they want to work with.
Answers:
username_0: <a href="https://github.com/username_0"><img src="https://avatars0.githubusercontent.com/u/4124733?v=3" align="left" width="48" height="48" hspace="10"></img></a> **Comment by [username_0](https://github.com/username_0)**
_Saturday Apr 22, 2017 at 13:59 GMT_
----
- [x] Reconfigure node environment to include the Dockerfile also in the same repo
- [x] Test Python env
- [x] Modify UI to take envs
- [x] List files from `src` only
username_0: <a href="https://github.com/username_0"><img src="https://avatars0.githubusercontent.com/u/4124733?v=3" align="left" width="48" height="48" hspace="10"></img></a> **Comment by [username_0](https://github.com/username_0)**
_Saturday Apr 29, 2017 at 11:47 GMT_
----
- [x] Check deployments with server envs
- [x] Get logs for non-server envs
username_0: <a href="https://github.com/username_0"><img src="https://avatars0.githubusercontent.com/u/4124733?v=3" align="left" width="48" height="48" hspace="10"></img></a> **Comment by [username_0](https://github.com/username_0)**
_Tuesday May 02, 2017 at 06:34 GMT_
----
- [x] Change logs to output
- [x] Rename Restart as Run
- [x] Only one project per env |
xyn9/xyn9.github.com | 186514749 | Title: 2012.11.10.1157
Question:
username_0: <p><a href="https://github.com/username_0/username_0.github.com/pull/1" class="issue-link js-issue-link" data-url="https://github.com/username_0/username_0.github.com/issues/1" data-id="185290054" data-error-text="Failed to load issue title" data-permission-text="Issue title is private">#1</a></p>
/username_0/username_0.github.com/releases/tag/2012.11.10.1157 |
ned14/outcome | 274877944 | Title: Too powerful implicit conversion
Question:
username_0: The converting constructor for `result<T>` (and probably for `outcome`) is too powerful: it is available even for types `X` for which `T` only provides an explicit constructor. The following for instance compiles and crashes:
```c++
#include "outcome.hpp"
#include <memory>
namespace out = OUTCOME_V2_NAMESPACE;
int main()
{
int i = 1;
int * p = &i;
out::result<std::unique_ptr<int>> r = p;
}
```
`std::unique_ptr` does not convert from a raw pointer exactly to avoid bugs like this. But `result` promotes this to an implicit conversion. Here is the live example: https://wandbox.org/permlink/53Vn1mnWSaLCjZTx
Normally, what you want to do in this case is to provide a "conditionally" explicit converting constructor. I.e., you provide two constructors, explicit and converting, and then disable one or the other with SFINAE tricks based on `std::is_convertible<X, T>`.
Answers:
username_1: This is actually a bug I already knew about, and had forgotten about. Thanks for reminding me! Note that my fix is actually to only implicitly construct from convertible things, not constructible things. The explicit constructors continue to use is_constructible.
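A minimal sketch of that constructor pattern (illustrative only, not Outcome's actual implementation):
```c++
#include <type_traits>
#include <utility>

template <class T>
struct result
{
  // Implicit only when X -> T is itself an implicit conversion.
  template <class X,
            typename std::enable_if<std::is_convertible<X, T>::value, int>::type = 0>
  result(X &&x) : value(std::forward<X>(x)) {}

  // Constructible-but-not-convertible types require an explicit result<T>(...).
  template <class X,
            typename std::enable_if<std::is_constructible<T, X>::value
                                    && !std::is_convertible<X, T>::value, int>::type = 0>
  explicit result(X &&x) : value(std::forward<X>(x)) {}

  T value;
};
```
With this sketch, the original `result<std::unique_ptr<int>> r = p;` no longer compiles, while explicit construction such as `result<std::unique_ptr<int>> r{p};` still works.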
username_1: Fixed in develop branch. Thanks for the report.
Status: Issue closed
|
snipsco/snips-issues | 328128329 | Title: [bug][android] Cannot cast valid SlotValue into TimeIntervalValue
Question:
username_0: org.parceler.ParcelerRuntimeException: Unable to find generated Parcelable class for ai.snips.nlu.ontology.SlotValue$TimeIntervalValue, verify that your class is configured properly and that the Parcelable class ai.snips.nlu.ontology.SlotValue$TimeIntervalValue$$Parcelable is generated by Parceler.
05-31 14:49:29.317 11723-11788/ai.snips.snipsdemo:snipsProcessingService W/System.err: at org.parceler.Parcels$ParcelCodeRepository.get(Parcels.java:153)
at org.parceler.Parcels.wrap(Parcels.java:72)
at org.parceler.Parcels.wrap(Parcels.java:56)
at ai.snips.platform.Parcelables$SlotValueConverter.toParcel(Parcelables.kt:96)
05-31 14:49:29.318 11723-11788/ai.snips.snipsdemo:snipsProcessingService W/System.err: at ai.snips.nlu.ontology.SlotValue$$Parcelable.write(SlotValue$$Parcelable.java:53)
at ai.snips.nlu.ontology.Slot$$Parcelable.write(Slot$$Parcelable.java:54)
at ai.snips.hermes.IntentMessage$$Parcelable.write(IntentMessage$$Parcelable.java:65)
at ai.snips.hermes.IntentMessage$$Parcelable.writeToParcel(IntentMessage$$Parcelable.java:46)
at android.os.Parcel.writeParcelable(Parcel.java:1363)
at android.os.Parcel.writeValue(Parcel.java:1268)
at android.os.Parcel.writeArrayMapInternal(Parcel.java:644)
at android.os.BaseBundle.writeToParcelInner(BaseBundle.java:1313)
at android.os.Bundle.writeToParcel(Bundle.java:1036)
at android.os.Parcel.writeBundle(Parcel.java:669)
at android.os.Message.writeToParcel(Message.java:561)
at android.os.IMessenger$Stub$Proxy.send(IMessenger.java:84)
at android.os.Messenger.send(Messenger.java:57)
at ai.snips.platform.SnipsProcessingService$initMegazord$2.invoke(SnipsProcessingService.kt:100)
at ai.snips.platform.SnipsProcessingService$initMegazord$2.invoke(SnipsProcessingService.kt:21)
at ai.snips.platform.Megazord$_onIntentDetectedListenerJna$1.receive(Megazord.kt:183)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.sun.jna.CallbackReference$DefaultCallbackProxy.invokeCallback(CallbackReference.java:520)
at com.sun.jna.CallbackReference$DefaultCallbackProxy.callback(CallbackReference.java:551)
Status: Issue closed
Answers:
username_1: Fixed in 0.56.2 release |
perliedman/leaflet-control-geocoder | 58523879 | Title: using leaflet-control-geocoded on gridlayers
Question:
username_0: Hi,
I'm wanting to use reverse geocoding on a map and pull data from a grid layer to integrate with the popup message.
I currently use this code to access map box Grid layers:
vfrbiGrid.on('click', function(e) {
if (!e.data) return;
var popup = L.popup({
keepInView: true,
closeOnClick: true
})
.setLatLng(e.latLng)
.setContent('Site Name: ' + e.data.site_desc + '<br>' + 'Site lat: ' + e.data.lat + '<br>' + 'Site long: ' + e.data.long)
.addTo(map);
});
where vfrbiGrid is:
var vfrbiGrid = L.mapbox.gridLayer('jamiebaddeley.VF_RBI_Coverage').addTo(map),
That all works well.
What I want to do though is use your code instead as I'd get reverse geocoding results as well.
I tried this:
------
vfrbiGrid.on('click', function(e) {
geocoder.reverse(e.latlng, map.options.crs.scale(map.getZoom()), function(results) {
var r = results[0];
if (r) {
if (marker) {
marker.
setLatLng(r.center).
setPopupContent(r.html || r.name).
openPopup();
} else {
marker = L.marker(r.center)
.bindPopup(r.name)
.addTo(map)
.openPopup();
}
}
})
})
------
But I get an error in the console:
[Error] TypeError: undefined is not an object (evaluating 'location.lat')
reverse (Control.Geocoder.js, line 702)
(anonymous function) (vfrbi2, line 214)
fireEvent (mapbox.js, line 1)
(anonymous function) (mapbox.js, line 6)
(anonymous function) (mapbox.js, line 1)
(anonymous function) (mapbox.js, line 6)
_getTile (mapbox.js, line 6)
getData (mapbox.js, line 6)
_click (mapbox.js, line 6)
fireEvent (mapbox.js, line 1)
_fireMouseEvent (mapbox.js, line 1)
_onMouseClick (mapbox.js, line 1)
o (mapbox.js, line 3)
-----
it works fine when I make it map.on('click', function(e) {
but that means I don't have access to the data held in the grid that I want to pull from and add to the popup.
Can you help me please?
thanks
jamie
Answers:
username_1: It looks like the `e.latlng` you're passing in to the `reverse` method doesn't contain a `lat` property. It might be that the grid layer (which I'm not familiar with) just passes an array - Leaflet uses arrays and `L.LatLng` object interchangeably, but some parts of the geocoder plugin obviously does not.
A workaround could be to change the first lines of your code to:
```js
vfrbiGrid.on('click', function(e) {
geocoder.reverse(L.latLng(e.latlng), map.options.crs.scale(map.getZoom()), function(results) {
[...]
```
Hope this helps!
username_0: Thanks so much. It wasn't actually that, but when your workaround didn't work it made me focus on that particular area of the code. It turns out it was a case issue, i.e. e.latLng rather than e.latlng, that made it work!
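For reference, a sketch of the working handler based on that fix (everything else is unchanged from the earlier snippet):
```js
vfrbiGrid.on('click', function(e) {
  // The gridLayer click event exposes the coordinate as e.latLng (capital L),
  // unlike plain Leaflet map events, which use e.latlng.
  geocoder.reverse(e.latLng, map.options.crs.scale(map.getZoom()), function(results) {
    var r = results[0];
    // ... same marker/popup handling as in the snippet above
  });
});
```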
Next step from here is trying to make the popup stay active until another click. It's rather a large amount of data I'm now filling the popup with, including hyperlinks, so I need to be able to move the mouse up to them.
Kindest Regards, and thanks again,
jamie
Status: Issue closed
username_1: Nice that it worked out! Thanks. |
protocolbuffers/protobuf | 553541053 | Title: 3.11.2: src/solaris/libstdc++.la in dist tar ball
Question:
username_0: Looks like the dist tarball has not been generated using the dist or distcheck targets, and some leftover Solaris build files are included in the dist tarball.
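(For reference, the usual autotools way to produce and verify a release tarball is shown below; the commands are the standard autotools workflow and may need adjusting for the protobuf build.)
```sh
./autogen.sh && ./configure
make distcheck   # builds the dist tarball and verifies it builds cleanly from the unpacked sources
```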
Answers:
username_1: location/URL of this tarball?
Status: Issue closed
username_0: I've switched to the GitHub auto-generated tarball from the git tag.
Closing |
robert7k/gentoo-overlay | 926853468 | Title: EAPI 4 not supported
Question:
username_0: The packages `dev-util/idea-*` return the error
```
* ERROR: dev-util/idea-2021.1.2.211.7442.40::username_1 failed (depend phase):
* eutils: EAPI 4 not supported
*
* Call stack:
* ebuild.sh, line 645: Called source '/var/lib/layman/username_1/dev-util/idea/idea-2021.1.2.211.7442.40.ebuild'
/ * idea-2021.1.2.211.7442.40.ebuild, line 5: Called inherit 'eutils' 'versionator'
* ebuild.sh, line 329: Called __qa_source '/var/db/repos/gentoo/eclass/eutils.eclass'
* ebuild.sh, line 114: Called source '/var/db/repos/gentoo/eclass/eutils.eclass'
* eutils.eclass, line 32: Called die
* The specific snippet of code:
* *) die "${ECLASS}: EAPI ${EAPI:-0} not supported" ;;
*
* If you need support, post the output of `emerge --info '=dev-util/idea-2021.1.2.211.7442.40::username_1'`,
* the complete build log and the output of `emerge -pqv '=dev-util/idea-2021.1.2.211.7442.40::username_1'`.
* Working directory: '/usr/lib/python3.9/site-packages'
* S: '/var/tmp/portage/dev-util/idea-2021.1.2.211.7442.40/work/idea-2021.1.2.211.7442.40'
```<issue_closed>
Status: Issue closed |
inspire-eu-rdf/inspire-rdf-guidelines | 220596556 | Title: Address: Names can be simplified
Question:
username_0: I believe the address ontology to be overly complex in regards to the names. As an example:
1) Starting from an `ad:Address`, we follow `ad:Address.component` to find a `ad:ThoroughfareName`.
2) From this, we follow `ad:ThoroughfareName.name` to get a `ad:ThoroughfareNameValue`.
3) From this, we follow `ad:ThoroughfareNameValue.name` to get a `ad:GeographicalName`.
4) This `GeographicalName` can, [according to the specs](http://inspire-eu-rdf.github.io/inspire-rdf-guidelines/#ref_cr_mappings_inspireGeographicalNames), be a `rdfs:Literal` or a complex type.
5) In case a complex type is used, the `skos:prefLabel` contains the value.
I believe 3, 4 and 5 can be merged together without loss of expressiveness.
```
ex:Name1 a ad:ThoroughfareName;
ad:ThoroughfareName.transportLink ex:Link1, ex:Link2;
ad:ThoroughfareName.name ex:NameValue1.
ex:NameValue1 a ad:ThoroughfareNameValue;
ad:ThoroughfareName.namePart ex:Part1, ex:Part2, ex:Part3; // from ad:ThoroughfareNameValue.nameParts
skos:prefLabel "rue de la Paix"@fr; // from GeographicalName
skos:altLabel "rue dl Paix"@fr.
ex:Part1 a ad:PartOfName; // Could also be done using properties on
ad:PartOfName.part "rue";
ad:PartOfName.type <http://inspire.ec.europa.eu/codelist/PartTypeValue/type>
```
Note that 2 must remain separate. This is because the names (NameValues) might consist of different parts in different languages:
- Dutch: "Boudewijnlaan" - 1 part
- French: "<NAME>" - 2 parts
Answers:
username_1: This is related to #28 "Encoding of geographical names".
Summary first:
- If we know that the complex information of a GeographicalName (status, pronunciation etc) will never be required by RDF applications (in general, or for certain types of applications corresponding to INSPIRE application schemas) then we can map GeographicalName to a simple type.
- Otherwise, we provide (and use) a complex representation (see the [draft ontology for Geographical Names](https://github.com/inspire-eu-rdf/inspire-rdf-vocabularies/blob/master/gn/gn.ttl)).
- With the use of rdfs:label properties (and sub properties like skos:prefLabel and skos:altLabel) we could support both cases. There might be ambiguity in case that a class has multiple properties with type GeographicalName.
Now the TL;DR:
Regarding step 4: The guidelines state that for a given INSPIRE application schema, properties with type GeographicalName can be represented as a property with rdfs:range either being an rdfs:Literal (if it is known that no RDF applications need the complex information that the conceptual model of GeographicalName supports) or a complex type. At the moment it is not both. However, the use of rdfs:label to convey names would probably support that.
In the draft ontology of the Address application schema, the range of ThoroughfareNameValue.name is the complex type gn:GeographicalName ('ad:GeographicalName' was a bug in the draft ontology). The assumption is that complex information for a GeographicalName, like nameStatus and pronunciation, can be relevant as well.
The guidelines may be misleading regarding the use of skos:prefLabel and skos:altLabel when representing geographical names. A gn:GeographicalName is usually bound to a particular language (see the definition and description of GeographicalName.language in the conceptual schema). skos:prefLabel and skos:altLabel, when used as properties of a gn:GeographicalName, should probably be given in the language stated by that geographical name. However: skos:prefLabel and skos:altLabel can be used on any resource to label it, since the domain of these properties is undefined. The use of pref- and altLabel in ex:NameValue1 in your example would thus be allowed. If the INSPIRE type contained multiple properties with type GeographicalName (like AddressRepresentation), then mapping these properties to RDFS labels would be ambiguous (unless the properties were still represented, but as subPropertyOf rdfs:label). An RDF application that supports ThoroughfareNameValue.name with range gn:GeographicalName would look at the gn:GeographicalName values of ThoroughfareNameValue.name to determine the information it needs. More likely, it will filter the ThoroughfareNameValue resources that are linked by ThoroughfareName.name to identify the relevant ones (e.g. only use geographical names with nameStatus 'official' or 'standardized' [which skos:prefLabel and skos:altLabel would not support], and in a specific language).
Basically, what your example suggests is that a property with type GeographicalName can be represented by rdfs:labels (including skos:prefLabel and skos:altLabel). This is similar to a suggestion from the SmartOpenData project - see https://www.w3.org/2015/03/inspire/ (chapter "The GCM & Geographical Names"). The draft INSPIRE RDF guidelines currently leave a choice: encode properties with type GeographicalName in a particular application schema either with range rdfs:Literal (keeping the semantics of the property) or a complex type (gn:GeographicalName). In your example, if the simple encoding was used, that would mean that ex:NameValue1 would have ad:ThoroughfareName.name "r<NAME> Paix"@fr. In that case, ad:ThoroughfareName.name could be defined as a sub property of rdfs:label.
username_0: Not sure we're talking about the same thing here (in particular, I didn't quite get your last paragraph).
The point I'm making is that I see no need for both `ad:ThoroughfareNameValue` and `GeographicalName`. In my example in the opening post, I dropped `GeographicalName`, but I could as well have dropped `ThoroughfareNameValue` instead. I'm not talking about the simple or complex representation of a GeographicalName.
My question: for what use cases is `ThoroughfareNameValue` needed if we were to change the following:
```
ad:ThoroughfareName.name rdfs:range gn:GeographicalName.
ad:ThoroughfareName.namePart rdfs:domain gn:GeographicalName.
```
username_1: In your initial example you pointed out that step 2 would have to be kept as-is since the name values might consist of different parts in different languages. Therefore I assumed that you were ok with the structure of ThoroughfareNameValue and that the complexity of GeographicalName was the issue.
Your question can be applied to the INSPIRE conceptual model: Why was the `nameParts` property not modelled on the data type `GeographicalName`? Because in that case, `ThoroughfareNameValue` would indeed not have been needed.
I was not involved in the design of the INSPIRE Addresses schema, so I can only make an assumption: `nameParts` belongs to `ThoroughfareNameValue` because the model of that property (and its type `PartOfName`, which itself holds the name of a specific part, but, more importantly for this discussion, also the type of that part) specifically supports the subdivision of a thoroughfare name into parts. The `PartTypeValue` allows a data provider to define the type of each part of a thoroughfare name (type, name prefix, name, qualifier). However, because the nameParts is specific to thoroughfare names, I assume that that's the reason why nameParts does not belong to the more general GeographicalName. Also keep in mind that a GeographicalName is language specific, and so would be the subdivision into parts, meaning that a `ThoroughfareNameValue` provides a language specific name and optionally its subdivision into parts.
As to the use cases that require the name parts of a thoroughfare name, I could only guess (maybe improved searching).
Back to the RDF representation of INSPIRE data: If RDF applications will never need the name parts provided by a `ThoroughfareNameValue`, then we could change the type of `ThoroughfareName.name` to `GeographicalName`.
Unfortunately, `ThoroughfareNameValue` is not modelled as a subtype of `GeographicalName`. This would avoid at least the indirection introduced by the current `ThoroughfareNameValue.name`.
username_0: What prevents you from doing so? If this RDF model has to be a 1:1 conversion of the conceptual INSPIRE model, without trying to take advantage of the modelling features of RDF, there is no point me spending time here to suggest improvements. ;)
username_1: Nobody said that this needs to be a 1:1 conversion. As a matter of fact, the draft ontologies are already different (for example, multiplicity and the stereotype <<voidable>> are not converted). I was just describing the current situation in the conceptual model.
We do want to create guidelines that define a useful RDF encoding of INSPIRE data. Therefore, your input is much appreciated. These discussions help us identify conversion patterns.
So, I think we reached a conclusion here: Make `ThoroughfareNameValue` a subClassOf `GeographicalName` and omit `ThoroughfareNameValue.name`.
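A minimal Turtle sketch of that conclusion (the exact axioms and property spellings are illustrative; the published ad.ttl is authoritative):
```
# ThoroughfareNameValue stays only to carry the optional name parts,
# and inherits language, status, pronunciation etc. from GeographicalName.
ad:ThoroughfareNameValue a owl:Class ;
    rdfs:subClassOf gn:GeographicalName .

# ThoroughfareName.name now points directly at that subclass,
# so the ThoroughfareNameValue.name indirection disappears.
ad:ThoroughfareName.name a owl:ObjectProperty ;
    rdfs:domain ad:ThoroughfareName ;
    rdfs:range  ad:ThoroughfareNameValue .
```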
username_1: The result of this discussion has been implemented in the [revision of the vocabulary for the INSPIRE Addresses schema](https://github.com/inspire-eu-rdf/inspire-rdf-vocabularies/blob/master/ad/ad.ttl).
This issue can be re-opened in the future, if necessary.
Status: Issue closed
|
kbjr/terminus-title-control | 349872726 | Title: Support SSH connection name
Question:
username_0: I just want the title to be the name of my connection setting.
Answers:
username_1: I'm assuming you mean when using the SSH plugin? That shouldn't be too hard to do. What would you expect the title to be if you're not in an SSH connection?
username_0: Yes, I mean the SSH plugin. It should fall back to the default title when not in an SSH connection.
username_2: That would be awesome!
username_3: It would be great if this feature were supported! |
tensorforce/tensorforce | 828493905 | Title: EagerVariableNameReuse when running quickstart.py
Question:
username_0: Hi,
I'm using python 3.8.10.
When I try to run quickstart.py, I get a long stream of errors that I cannot make sense of:
WARNING:root:Infinite min_value bound for state.
Traceback (most recent call last):
File "C:\Users\alberto\Documents\GitHub\tensorforce\examples\quickstart.py", line 67, in <module>
main()
File "C:\Users\alberto\Documents\GitHub\tensorforce\examples\quickstart.py", line 57, in main
runner = Runner(agent=agent, environment=environment, max_episode_timesteps=500)
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\execution\runner.py", line 178, in __init__
self.agent = Agent.create(agent=agent, environment=environment)
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\agents\agent.py", line 112, in create
return Agent.create(agent=agent, environment=environment, **kwargs)
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\agents\agent.py", line 131, in create
return Agent.create(agent=agent, environment=environment, **kwargs)
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\agents\agent.py", line 104, in create
return Agent.create(agent=agent, environment=environment)
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\agents\agent.py", line 71, in create
agent.initialize()
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\agents\agent.py", line 263, in initialize
self.model.initialize()
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\core\models\tensorforce.py", line 521, in initialize
super().initialize()
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\core\models\model.py", line 277, in initialize
super().initialize()
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\core\module.py", line 270, in initialize
module.initialize()
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\core\module.py", line 270, in initialize
module.initialize()
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\core\module.py", line 270, in initialize
module.initialize()
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\core\layers\dense.py", line 76, in initialize
super().initialize()
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\core\layers\layer.py", line 707, in initialize
self.bias = self.variable(
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorforce\core\module.py", line 620, in variable
variable = tf.Variable(
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\variables.py", line 262, in __call__
return cls._variable_v2_call(*args, **kwargs)
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\variables.py", line 244, in _variable_v2_call
return previous_getter(
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\variables.py", line 237, in <lambda>
previous_getter = lambda **kws: default_variable_creator_v2(None, **kws)
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2633, in default_variable_creator_v2
return resource_variable_ops.ResourceVariable(
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\variables.py", line 264, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 1507, in __init__
self._init_from_args(
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 1661, in _init_from_args
handle = eager_safe_variable_handle(
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 242, in eager_safe_variable_handle
return _variable_handle_from_shape_and_dtype(
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 174, in _variable_handle_from_shape_and_dtype
gen_logging_ops._assert( # pylint: disable=protected-access
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\gen_logging_ops.py", line 49, in _assert
_ops.raise_from_not_ok_status(e, name)
File "C:\Users\alberto\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\ops.py", line 6843, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [0] [Op:Assert] name: EagerVariableNameReuse
Answers:
username_1: Hi, can you check the TensorFlow version you have installed?
username_0: Yes, I have tensorflow 2.3.1 (and tensorforce 0.6.2).
username_1: Could you test whether updating to the latest TensorFlow version helps? Also, what's your NumPy version? I think 1.20+ cause some problems with TensorFlow, whereas 1.19.5 seems to work well (although it doesn't look like this is the problem here). Otherwise I have no idea -- for me the quickstart example runs fine.
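(For anyone hitting the same error, that suggestion boils down to something like the following; exact commands and versions are assumed, not prescribed by this thread.)
```
pip install --upgrade tensorflow
pip install "numpy==1.19.5"   # only if a newer NumPy causes problems, as noted above
```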
username_0: It works with the latest tensorflow version. Version 2.3.1 got installed with pip install tensorforce. Can the requirements for tensorforce be updated?
Status: Issue closed
username_1: I assume you've installed via pip. That's a good point, the pip version hasn't been updated for a while. I've updated to a new pip version now. Assume this can be closed then. |
wallabag/wallabag | 150712380 | Title: Can't fetch articles from NYTimes.com
Question:
username_0: When trying to fetch articles from nytimes.com, [like this one](http://www.nytimes.com/politics/first-draft/2016/04/24/charles-koch-says-he-could-possibly-support-hillary-clinton/), it fails to fetch both the title and the content. I am using version 2.0.3 on desktop.
Answers:
username_1: can still reproduce with 2.2.3
also does not work: https://www.nytimes.com/2017/01/06/business/sweden-work-employment-productivity-happiness.html because it gets redirected to the mobile main page.
username_2: Content is well fetched for me on both the 2.3 and 2.2.3 (and it doesn't matter it uses the mobile website):

username_1: Okay, my issue is solved. Thank you for pushing me in the right direction. I removed the vendor folder and ran make update. Afterwards fetching works. Thanks.
But the original issue from username_0 still remains.
Status: Issue closed
|
hrantzsch/keychain | 535776499 | Title: Consider the Boost Software License?
Question:
username_0: Would you consider re-releasing this under the Boost Software License?
Answers:
username_1: (Commenting as not-the-repo-owner)
This can be tricky. At [first](https://web.archive.org/web/20150106221458/http://ideas.opensource.org/ticket/45) [reading](https://softwareengineering.stackexchange.com/questions/44076/whats-the-fundamental-difference-between-the-mit-and-the-boost-open-source-lice) of the [differences](https://law.stackexchange.com/questions/91/is-there-any-difference-in-meaning-between-the-boost-and-mit-software-licenses), the Boost license is *not* a strict superset of MIT (in particular, it loosens requirements of MIT in terms of license copies in binaries) and so one can't just hand-wave over it and relicense.
For the record, as the copyright owner of some of the code this repo is derived from (appreciate the prominent credit, btw), I would have no problem with it. But my work was *also* derivative from https://github.com/atom/node-keytar by GitHub and that's where the bulk of the original code is from.
username_2: Thanks @username_1 for the insights and thanks for the code we reused :)
TBH I don't know that much about licensing law, but I'll read up a bit on occasion. If it turns out to be simple, I don't mind changing the license at all. It seems to be easy [to change from MIT to GPL](https://opensource.stackexchange.com/questions/5832/relicensing-an-mit-licensed-project-under-the-gpl-that-has-non-code-contribution/5833#5833), but I guess GPL _is_ a strict superset of MIT then?
username_0: Yes that's right, it is the primary difference.
username_2: After what I've read so far, we'd need the permission of all copyright holders to change the license -- exactly like @username_1 pointed out. I'm not interested in trying to get that, to be honest.
@username_0 do you have any different information regarding that? Otherwise I guess we'll stay with MIT.
username_0: Well, thank you for considering it !
Status: Issue closed
|
kubernetes/kubernetes | 1175149834 | Title: Nodes become NotReady when time jumps forward
Question:
username_0: https://github.com/kubernetes/kubernetes/blob/e4fe90e6ef796583b81a559364a241579efc3593/pkg/controller/nodelifecycle/node_lifecycle_controller.go#L1065-L1101
When the next `nodeStatusUpdateFrequency` period comes around, `kubelet` runs `syncNodeStatus()` and the node is reported `Ready` again.
Possible fix:
Once `kubelet` has refreshed its lease, `node_lifecycle_controller` should consider the node healthy.

### Kubernetes version
<details>
```console
$ kubectl version
v1.22.1
```
</details>
### Cloud provider
<details>
Not using cloud provider
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
NAME="EulerOS"
VERSION="2.0 (SP9x86_64)"
ID="euleros"
VERSION_ID="2.0"
PRETTY_NAME="EulerOS 2.0 (SP9x86_64)"
ANSI_COLOR="0;31"
$ uname -a
Linux PaaSCore-1 4.18.0-147.5.1.0.h208.eulerosv2r9.x86_64 #1 SMP Sun Sep 27 12:47:01 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
</details>
### Install tools
<details>
I installed with source code, the tag is release-1.22
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
Answers:
username_0: /sig node
username_1: What is the scenario when you need to set time forward by one day?
We tend to close this as this is not a supported scenario and the side effect is minimal in this case. Please re-open if you have a legit scenario for changing time forward.
/close |
BurcSen/sql-scratch-capstone-turn-in | 369792061 | Title: Summary
Question:
username_0: ## Rubric Score
### Criteria 1: Report: Are conclusions clear and supported by evidence in all eight answers?
* _Score Level:_ 3 (Meets Expectations)
* _Comment(s):_ Conclusions are good and make sense but could use a bit more explanation and explanation of the evidence.
### Criteria 2: Query Accuracy: Do queries do what they were supposed to do?
* _Score Level:_ 4 (Exceeds Expectations)
* _Comment(s):_ Queries are perfect, great job!
### Criteria 3: Query Formatting
* _Score Level:_ 3
* _Comment(s):_ Formatting is great, just remember to put the "ON" keyword on a new line.
### Criteria 4: Understanding underlying concepts and terminology
* _Score Level:_ 4
* _Comment(s):_ Understanding and terminology appears to have no issue, great!
### Overall Score: 14/16
Overall really excellent job! Watch the formatting and try to explain the evidence a bit better but other than that perfect, keep it up! |
trailofbits/ebpfpub | 766089763 | Title: 牡丹江西安区哪有特殊服务的洗浴a
Question:
username_0: 牡丹江西安区哪有特殊服务的洗浴【+V:781372524】是影视寒冬年,与此同时,针对网络内容的政策逐渐出台,台网剧审查趋向统一标准,两者的界限越来越模糊。整体剧集市场在呼吁现实题材、更加接地气的内容、弘扬主旋律和主流价值观的作品。在大环境下,响应市场号召的北京时代光影在这一年,产出了电视剧《我怕来不及》和两部网剧《当你恋爱时》《将军家的小娘子》。时代光影董事长王锦不惧影视寒冬的“威胁”,发挥台网联动的优势,拓宽题材类型,在原有小人物创作的基础上,大胆创新,将视角定位在“以小见大”的社会话题,以及融入多种感情的家国情怀。聚焦小人物,反映真实情感《我怕来不及》正在央视八套黄金强档热播,这是一部聚焦大时代背景下的小人物奋斗史的剧集。改革背景下,工人李春生一面实施绿色矿山计划,一面照顾白家老小。李春生越挫越勇的奋斗精神感染了大批观众,《我怕来不及》连续两周收视夺冠,引发讨论。高收视和高关注的背后,也有不少网友在网上开轰。“李春生是圣父”、“不可能有这样的人”……针对这些非议,总制片人王锦特别强调了《我怕来不及》的情感浓度,将“我怕来不及”诠释为“子欲养而亲不待”,传达出正能量的价值观。他从小人物李春生的视角出发,赞扬了李春生牺牲自我、成全大家的勇气和坚忍不拔的品质,强调守住亲情就是守住了家。从年成立以来,时代光影一直选择这些朴实善良的小人物为主角,并将他们身上的隐忍品质和真善美放大,例如前些年的《满仓进城》《俺娘田小草》《我的小姨》,还有正在热播的《我怕来不及》,这些剧集从不同角度阐述了小人物身上的大能量。在一次采访中,被问到为何选择这些“普通人”,王锦表示,一方面是觉得自己就是个小人物,能够抓到小人物的温度,体会得到小人物的情感,比如李春生为了爱与责任,抚养岳母,照顾毫无血缘关系的侄女晶晶,无条件帮助白家度过难关;另一方面,李春生这个人物确实来自于自己的亲身经历,“我爸就属于这种人,就是李春生式的人物。小时候,舅舅过世的早,舅妈改嫁,剩下两个孩子,父亲就把孩子接到家里养。后来叔叔和大伯家也经历了相似的遭遇,父亲除了照顾我之外还一直坚持照顾另外三个家庭的孩子。他付出这么多,甚至于还不被理解。”亲情的回归和担当是王锦在展现小人物和底层人民时尤为关注的部分,王锦觉得父亲是个很伟大的人,所以他自己对李春生式的人物很有认同感。这种精神更不应该被谴责,而应该被弘扬。正如王锦所说:《我怕来不及》并不是要教导每一个人去做李春生,而是想要表达他没有错,不应该被抨击,甚至期望有人能认可这样的平民英雄。我们大多数人的生活都是平淡的,但这并不意味不真实存在。截止目前,《我怕来不及》即将收官,就全国测量仪央卫晚间电视剧、猫眼数据显示来看,《我怕来不及》的收视率一直处于递增状态,我们可以看到,《我怕来不及》的观众一半是通过电视台收看,另一半则是通过网络获取视频。很多网友也通过平台弹幕发表对剧集的看法,话题性增强。从这一角度看,原本的电视受众开始往网络迁移,视频网站也需要更多元的内容来满足,台网剧间的壁垒慢慢消除,电视剧集的投放渠道更加丰富,这更有利于电视剧的影响力,促进制作公司的发展。拓宽类型,促进多元发展究其原因,视频网站受众越来越多,年龄层越来越广,必然要求内容更多元,大众化的内容也有了空间。这也是现在很多电视剧或台网联动的剧同样能在网络上引爆的原因。面对影视寒冬的影响,在综合分析行业优势利弊的基础上,年,时代光影优化了企业布局,向网剧市场进军。目前,时代光影便依托自身的优势在与视频平台打交道,与优酷合作了《将军家的小娘子》这类集爱情、甜宠、轻虐元素为一体的古装剧,以及轻体量的都市迷你偶像剧《当你恋爱时》。无论形式还是题材,都更具有网感和互动感。王锦坦言,其实所有制作公司现在都是在寒冬中爬行,大家都是痛并快乐着,对于内容制作公司,现阶段要拼的地方太多。纵观目前的内容制作市场,优质的内容供应商往往具备许多共性:绝对敏锐的政策嗅觉,优良的制作和宣发。王锦表示:“可能在政策上觉悟比较高是我们唯一小小的优势。我们知道什么东西能做,什么东西不能做,底线在哪里。此外,还要尊重创作规律,不能拔苗助长。”市场一直在变化,观众、制作手法、故事、政策都在变化,制作公司唯一能做的就是不断跟上变化。这几年剧集市场频频波动,无论是政策变化还是受众喜好,抑或平台玩法都对内容制作方提出了更高的要求。但万变不离其宗。对受众保持深刻洞察、有核心制作团队、对项目有足够的把控能力、对市场和政策有敏锐感知,永远是一家内容公司在竞争中脱颖而出的核心竞争力。声明:中华娱乐网刊载此文出于传递更多信息之目的,并非意味着赞同其观点或证实其描述。版权归作者所有,更多同类文章敬请浏览:综合资讯慷压炕芈逝https://github.com/trailofbits/ebpfpub/issues/80?lccaW <br />https://github.com/trailofbits/ebpfpub/issues/929?uFWtK <br />https://github.com/trailofbits/ebpfpub/issues/5512?ffjbv <br />https://github.com/trailofbits/ebpfpub/issues/4132?kixot <br />https://github.com/trailofbits/ebpfpub/issues/2752?ydore <br />https://github.com/trailofbits/ebpfpub/issues/5497?Kkp6p <br />https://github.com/trailofbits/ebpfpub/issues/222?suwsi <br />gonsrbjgzwmlqdtamdvatttdpxyikqpytxn |
tensorflow/serving | 281825680 | Title: Tensorflow Serving model is returning different output for the same inputs,and even negative scores.
Question:
username_0: I trained and exported a TensorFlow Serving model for Chinese text classification. I set the input placeholder as tf.placeholder(tf.float32, [None, sequence_length, embedding_size], name="input_x"),
and output the top 5 of the prediction results.
But I get different output values and scores when I input the same text, and some of the scores are negative, which makes me quite confused about what is wrong with my model or parameters.
**The output may like the following:**
_outputs {
key: "classes"
value {
dtype: DT_STRING
tensor_shape {
dim {
size: 1
}
dim {
size: 5
}
}
string_val: "\347\231\275\351\205\222"
string_val: "\346\264\213\351\205\222"
string_val: "\351\224\200\345\224\256\346\234\215\345\212\241"
string_val: "\350\221\241\350\220\204\351\205\222"
string_val: "\351\273\204\351\205\222"
}
}
outputs {
key: "scores"
value {
dtype: DT_FLOAT
tensor_shape {
dim {
size: 1
}
dim {
size: 5
}
}
float_val: 56.5367240906
float_val: -3.92663836479
float_val: -6.48062610626
float_val: -17.449054718
float_val: -32.129032135
}
}_
**anthoer different output:**
outputs {
key: "classes"
value {
dtype: DT_STRING
tensor_shape {
dim {
size: 1
}
dim {
size: 5
}
}
string_val: "\347\231\275\351\205\222"
string_val: "\351\273\204\351\205\222"
string_val: "\350\221\241\350\220\204\351\205\222"
string_val: "\351\224\200\345\224\256\346\234\215\345\212\241"
string_val: "\346\264\213\351\205\222"
[Truncated]
}
outputs {
key: "scores"
value {
dtype: DT_FLOAT
tensor_shape {
dim {
size: 1
}
dim {
size: 5
}
}
float_val: 38.6646461487
float_val: 6.10928010941
float_val: 2.18265676498
float_val: -1.94573044777
float_val: -3.25391077995
}
}
Answers:
username_1: One thing to check: Make sure your exported model doesn't include any queues (e.g. FifoQueueOp [1]). That's a common mistake and it can lead to puzzling and nondeterministic behavior.
[1] https://www.tensorflow.org/api_docs/python/tf/FIFOQueue
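(A quick way to check an exported TF1 SavedModel for queue ops, sketched with an assumed `export_dir`:)
```python
import tensorflow as tf

export_dir = "/path/to/exported_model"  # stand-in path

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], export_dir)
    queue_ops = [op.name for op in sess.graph.get_operations() if "Queue" in op.type]
    print(queue_ops)  # should be empty for a deterministic serving graph
```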
username_0: @username_1 Thank you very much. I reviewed my code and I believe the key to the problem is that **dropout=0.5** is used at predict time, because I set **dropout** as a constant.
After I changed it to a placeholder, I found TF Serving does not seem to support the extra input. It raises the error _"[[Node: dropout_keep_prob = Placeholder[dtype=DT_FLOAT, shape=<unknown>, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]")"_
Finally, I changed **dropout** to the constant **1**, and now I get the same output every time, but it's not a perfect solution.
https://github.com/tensorflow/serving/issues/602
https://github.com/tensorflow/serving/issues/9
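(A common way to avoid both problems with the TF1-style API used above is `tf.placeholder_with_default`, sketched below; `features` is a stand-in for whatever layer the dropout wraps, not code from this model.)
```python
import tensorflow as tf

features = tf.placeholder(tf.float32, [None, 128], name="features")  # stand-in input layer

# Defaults to 1.0, so the exported/served graph needs no extra feed,
# while the training script feeds {keep_prob: 0.5} explicitly.
keep_prob = tf.placeholder_with_default(1.0, shape=[], name="dropout_keep_prob")
h_drop = tf.nn.dropout(features, keep_prob=keep_prob)
```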
Status: Issue closed
username_2: Using TensorFlow 2.2, when saving a SavedModel that includes Dropouts, TensorFlow serving returns non-deterministic predictions.
How can I disable TF Serving from using the Dropouts layer? Alternatively, how can I remove those Dropout layers from the SavedModel file? |
MicrosoftDocs/windows-powershell-docs | 354239254 | Title: Not connected to PowerShell help
Question:
username_0: This page does not seem to be connected to PowerShell help. When I run "Get-Help Remove-AppxPackage -Online", the browser does not open this page. (It opens the PackageManager Class page instead.)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 36f43873-0e34-6ec2-8bde-479945d49ed9
* Version Independent ID: 914c97a9-7dc2-6ddd-4756-7ec210db3fe8
* Content: [Remove-AppxPackage (appx)](https://docs.microsoft.com/en-us/powershell/module/appx/remove-appxpackage?view=win10-ps)
* Content Source: [docset/windows/appx/remove-appxpackage.md](https://github.com/MicrosoftDocs/windows-powershell-docs/blob/master/docset/windows/appx/remove-appxpackage.md)
* Product: **w10**
* GitHub Login: @coreyp-at-msft
* Microsoft Alias: **coreyp**
Answers:
username_1: @officedocsbot assign @username_3
username_2: Hello @username_0 thanks for your feedback.
I created a PR to improve the article so the online version link points to a valid url.
Thank you
username_3: @username_2 Thank you very much for the contribution and updating the documentation with PR. @username_0 Hope this update is helpful for you. Thanks for taking out some time to open the issue. Appreciate and encourage you to do the same in future also.
username_3: @officedocsbot close |
ros/joint_state_publisher | 540987998 | Title: remapping via <launch>.py and logging is not working as expected
Question:
username_0: Hello ros2 developers,
I am quite new to ros2 (Eloquent), so maybe I am just using the functionality wrong (sorry for that in advance).
# logging
For some reason I don't see logging at "info" level. Changing the logging level does not solve the problem. Logging at "warn" level works for me.
```python
import rclpy
from rclpy.logging import LoggingSeverity

rclpy.init()
node = rclpy.create_node('joint_state_publisher', allow_undeclared_parameters=True,
                         automatically_declare_parameters_from_overrides=True)
my_logger = node.get_logger()
my_logger.set_level(LoggingSeverity.WARN)  # seems to have no influence anyway
my_logger.warn("this works")        # visible in console
my_logger.info("this does not")     # not visible in console
```
# remapping
If I start up the node via a <launch>.py launch file, the remapping does not work for me. But most likely I am just using it wrong.
```python
remappings_robot_description = [((namespace, '/robot_description'), '/robot_description')]

start_joint_state_publisher_cmd = Node(
    ...,
    remappings=remappings_robot_description)
```
Answers:
username_1: For general ROS 2 questions, please first try https://answers.ros.org. It's often the case that the question has already been asked and answered there, and it serves as a central repository for user information about ROS 2. Given that, I'll close this out, but feel free to keep commenting or re-open if you think this is in error.
Status: Issue closed
username_0: remapping works fine btw, was my own mistake |
nathanriojas/inigoglassgallery | 303175225 | Title: Arrow keys should work for carousels
Question:
username_0: In the functional, pendants, and non functional pages, carousels should be able to be controlled by the arrow keys. This will make it easier to scroll between sections, rather than using the mouse.
Status: Issue closed
Answers:
username_0: The left and right arrow keys will now scroll through the carousel on each of the gallery pages to make it easier to scroll through images. |
terraref/brapi | 445139496 | Title: fix pagination for observationunits
Question:
username_0: observationunits does not provide an accurate count of total pages. Pagination does not work: no matter how many results exist in the DB, the total page count is always 1.
Answers:
username_0: https://github.com/terraref/brapi/pull/29
is the associated pull request
Status: Issue closed
|
dotnet/efcore | 1002923873 | Title: FromRawSql should throw when Discriminator is not included in the result set
Question:
username_0: To ensure the correct type is projected for materialization, `FromSqlRaw` wraps the query passed to it, so that this:
```csharp
var contacts = await context.Contacts.FromSqlRaw("select * from Contacts").ToListAsync();
```
Really gets resolved as this:
```sql
SELECT c FROM (SELECT * From Contacts) c WHERE c.Discriminator = "Contact"
```
While this works fine in most scenarios, it causes issues when users project explicit columns. Although this works fine as a raw query and returns multiple records:
```sql
SELECT Id, Name, Phone, Address FROM Contacts
```
This will never return any records and doesn't throw an error or log a warning:
```csharp
var contacts = await context.Contacts.FromSqlRaw("select Id, Name, Phone, Address from Contacts").ToListAsync();
```
Although this behavior is documented, it's not obvious if the documentation hasn't been read. It could lead to a lot of troubleshooting around a non-issue that just requires the user to add one column to their select statement.
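For reference, that mitigation looks like this (reusing the query from above; `Discriminator` is the column name the wrapped SQL filters on):
```csharp
// Including the discriminator column lets the wrapped query's filter match again:
var contacts = await context.Contacts
    .FromSqlRaw("select Id, Name, Phone, Address, Discriminator from Contacts")
    .ToListAsync();
```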
I suggest either logging a warning or throwing an exception when Discriminator isn't included so it's clear to the user it won't behave as expected and what to do for mitigation.
EF Core version: EF Core 6.0
Database provider: All
Target framework: .NET 6.0
Operating system: BeOS
IDE: Visual Studio 2022
Answers:
username_1: @username_2 6.0?
username_2: Yes, otherwise it would be a "breaking" change
username_1: @username_3 Nominating you to do this for 6.0. :-)
username_1: @username_3 Ping.
username_3: On my plate for today promise.
username_3: I've taken a look at this, and I'm not sure there's a good way of doing it. Here's the SQL we send to Cosmos:
```sql
SELECT c
FROM (
SELECT c["id"], c["Region"], c["PostalCode"], c["Phone"], c["Fax"], c["CustomerID"], c["Country"], c["ContactTitle"], c["ContactName"], c["CompanyName"], c["City"], c["Address"] FROM root c
) c
WHERE (c["Discriminator"] = "Customer")
```
We simply compose the discriminator filter over the user's raw SQL; if that SQL happens to not return the discriminator, our filter simply returns no rows. Our shaper does make sure that the discriminator is correct, but because of the discriminator filter there's nothing here to shape (no results are returned).
One theoretical solution is for the filter to let through documents missing the discriminator (`WHERE c["Discriminator"] = xxx OR NOT IS_DEFINED(c["Discriminator"])`); this would make the results reach the shaper, which would promptly throw. However, this modifies pretty much all Cosmos queries and may have an impact on perf, only to detect and throw on an unsupported scenario (plus it pulls back *all* documents from the container). So I'm not too keen on doing this.
If anyone has a bright idea here or I've missed something, let me know.
username_1: @username_2 @username_4 Thoughts on this?
username_2: We should verify whether the proposed change has any measurable impact on perf (and the cost charge), the worst case scenario would be when all items have a discriminator, but with a different value.
An alternative would be to not do any filtering and make the user do it, we would throw if we get a row with no discriminator or an unexpected one.
username_3: Does it really make sense to burden users with adding the discriminator filter for all their raw queries, just because they may forget to project the discriminator, an unsupported/negative scenario which should be relatively easily discoverable?
I feel a bit similarly about adding the `NOT IS_DEFINED`... I can look into the perf for that, but this feels a bit risky/too much to me just to flag this scenario...
username_4: I would be in favor of this given the first proposal complicates the SQL.
Historically relational added discriminator filters to avoid rows with unknown discriminator value in TPH scenario. Additional filtering also turned out to be problematic there at times so we had to introduce complete discriminator mapping.
For Cosmos, we have discriminator because of sharing the collection. If you are using FromSql for non hierarchy
- Is our behavior changing if the collection is not being shared by any other entity type?
- When materializing values, is our check of the discriminator value really accurate? Should we just verify that the discriminator value is the expected one rather than trying to materialize the value from the server? E.g. if the user gave us raw SQL to produce customers, it doesn't have to have the discriminator column projected out, because we know what it is. This also aligns, to an extent, with relational behavior, in that the discriminator does not need to be projected when there is no hierarchy.
username_2: When using raw SQL users should be very conscious of discriminators. I think that letting the user deal with it provides a learning opportunity and it also empowers the user to do it in a more efficient way.
username_3: OK, let's maybe give this a quick discussion in triage tomorrow? For the record I do think it's better to omit the filtering (requiring users to do it) than to add `NOT IS_DEFINED`, although I also think the current situation isn't that bad...
username_4: My only issue with current situation is incorrect results without exception. At least if we drop the ball on user, either it works or they will get error about other entity records.
username_3: It's true we have incorrect results without exception, but the query will never ever return a single result the way it's written - this is very different from some of the more subtle "incorrect results" errors we've seen... It just feels like we shouldn't penalize users or do risky things perf-wise for it.
username_4: Depending on app logic, incorrect result not returning any row can also cause data corruption which is same as any other incorrect result errors. Hence our priority always has been incorrect result > exception > slow query. |