repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M) |
---|---|---|
yoon28/realsr-noise-injection | 673961575 | Title: dataset
Question:
username_0: hi yoon,
thanks for sharing the training code. Could you share the dataset, or the NTIRE2020 dataset?
thx
Answers:
username_1: The dataset, previously, could be downloaded from [here](https://competitions.codalab.org/competitions/22220#participate-get-data), but unfortunately, the download links on the codalab are broken now. |
bcrypto/skzi | 890263365 | Title: Drop audit support on micro-devices
Question:
username_0: In #24 it was proposed to drop audit on micro-SKZI devices (for example, ID cards): micro-devices do not have enough resources to maintain an audit log.
For now, it is hard to agree with the proposal. The AU (audit) package takes into account that SKZI resources may be limited and therefore allows:
- combining several audit events into a single record;
- handling log overflow in different ways, including keeping only the *n* most recent audit records, where the threshold *n* can be small;
- operating without a timer.
It seems that the requirements of the AU package are quite manageable on micro-devices.
If audit is dropped on micro-devices, how are security incidents, or even a simple loss of operability, to be investigated? Micro-devices are, by definition, mass-market devices. Mass attacks, mass failures, a mass stream of complaints. And all that without audit...
Answers:
username_0: No comments were received. The initial position (support audit, including on micro-devices) stands.
Status: Issue closed
|
kubeflow/kubeflow | 370759294 | Title: Errors when starting kubeflow with minikube
Question:
username_0: I'm attempting to get kubeflow up and running on my local machine, and after examining the logs I'm getting several "Container not found in pod" errors.
**Setup**
Machine: Macbook Pro 32GB RAM
VM details: Minikube via virtualbox, 8vcpu / 16gb ram / 50G disk
Kubeflow_tag: master
Install process: https://www.kubeflow.org/docs/started/getting-started-minikube/
**Symptoms**
The quickstart script completes and I can access the dashboard via localhost. However, the kubernetes dashboard is inaccessible and throws an `upstream request timeout`. I can't seem to create TFJobs or JupyterHub resources.
**Errors**
`Oct 16 18:57:35 minikube kubelet[2980]: W1016 18:57:34.987929 2980 pod_container_deletor.go:77] Container "e7f5400864e0cfc0167e750e6e2aafb5c120a03b87f453b11f0b82ced96eacce" not found in pod's containers
Oct 16 18:57:35 minikube kubelet[2980]: W1016 18:57:35.116079 2980 pod_container_deletor.go:77] Container "6019dbdbd71559a80925af8a8e2904ee2c438fd99ffdceec8c6c5cefeff79000" not found in pod's containers
Oct 16 18:57:37 minikube kubelet[2980]: W1016 18:57:37.033395 2980 pod_container_deletor.go:77] Container "8c48edd6e5a6230ea216dd9dbdb42373076cf17d63462ceacc72befec9bc51fc" not found in pod's containers
Oct 16 18:57:37 minikube kubelet[2980]: W1016 18:57:37.917549 2980 container.go:393] Failed to create summary reader for "/system.slice/run-r8b92596a868b4e2c942dc273076bddf8.scope": none of the resources are being tracked.
Oct 16 18:57:46 minikube kubelet[2980]: W1016 18:57:46.894452 2980 pod_container_deletor.go:77] Container "877a1571b6be539a93b4f36185e2c3f01e2bf13568b1b605b7df2671ad1ba606" not found in pod's containers
Oct 16 18:57:46 minikube kubelet[2980]: W1016 18:57:46.928842 2980 pod_container_deletor.go:77] Container "d691ab9df4438044d7013b2c892af05296842b759206c6bf7ffefe1e3824ec30" not found in pod's containers
Oct 16 18:57:46 minikube kubelet[2980]: W1016 18:57:46.934061 2980 pod_container_deletor.go:77] Container "5ea08b59d45e3d8ef85c61629c3adec987a6b80418facdff39a3fb1f79906e33" not found in pod's containers
Oct 16 18:57:46 minikube kubelet[2980]: W1016 18:57:46.945325 2980 pod_container_deletor.go:77] Container "f52b92be48d9b693b7b73bf80d3acecd3fd8098c54eac03ae7f09ba2b10a24d1" not found in pod's containers
Oct 16 18:57:46 minikube kubelet[2980]: W1016 18:57:46.972235 2980 pod_container_deletor.go:77] Container "18098ac0707dd7408b0910db81431d983dd7214884a765eb45658dedf34a364e" not found in pod's containers
Oct 16 18:57:46 minikube kubelet[2980]: W1016 18:57:46.975970 2980 pod_container_deletor.go:77] Container "8923db20f87d55914772032bccf05e706d341957192d6d035339e8125ea92db0" not found in pod's containers
Oct 16 18:57:55 minikube kubelet[2980]: I1016 18:57:55.142852 2980 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-mlhfl" (UniqueName: "kubernetes.io/secret/4ffbed4c-d175-11e8-b6c9-08002753d444-default-token-mlhfl") pod "vizier-db-cc59bc8bd-67rk8" (UID: "4ffbed4c-d175-11e8-b6c9-08002753d444")
Oct 16 18:57:55 minikube kubelet[2980]: I1016 18:57:55.142902 2980 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-49d1ff78-d175-11e8-b6c9-08002753d444" (UniqueName: "kubernetes.io/host-path/4ffbed4c-d175-11e8-b6c9-08002753d444-pvc-49d1ff78-d175-11e8-b6c9-08002753d444") pod "vizier-db-cc59bc8bd-67rk8" (UID: "4ffbed4c-d175-11e8-b6c9-08002753d444")
Oct 16 18:58:05 minikube kubelet[2980]: W1016 18:58:05.784783 2980 pod_container_deletor.go:77] Container "be2b8951ab376fd817bd77d8d783e64c8ab7583bb2902ba49b4c30f8802b944d" not found in pod's containers
Oct 16 19:03:45 minikube kubelet[2980]: E1016 19:03:45.651941 2980 remote_runtime.go:278] ContainerStatus "d058921793e13ca59d223840c0bfcb0eff4163c8ffb8b1cb<KEY>" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: d058921793e13ca59d223840c0bfcb0eff4163c8ffb8b1cb90d9c6b44fd14a65
Oct 16 19:03:45 minikube kubelet[2980]: E1016 19:03:45.651986 2980 kuberuntime_container.go:385] ContainerStatus for d058921793e13ca59d223840c0bfcb0eff4163c8ffb8b1cb90d9c6b44fd14a65 error: rpc error: code = Unknown desc = Error: No such container: d058921793e13ca59d223840c0bfcb0eff4163c8ffb8b1cb90d9c6b44fd14a65
Oct 16 19:03:45 minikube kubelet[2980]: E1016 19:03:45.651996 2980 kuberuntime_manager.go:873] getPodContainerStatuses for pod "tf-job-operator-v1alpha2-6566f45db-xbnn6_kubeflow(4065cbc9-d175-11e8-b6c9-08002753d444)" failed: rpc error: code = Unknown desc = Error: No such container: d058921793e13ca59d223840c0bfcb0eff4163c8ffb8b1cb90d9c6b44fd14a65
Oct 16 19:03:45 minikube kubelet[2980]: E1016 19:03:45.652010 2980 generic.go:241] PLEG: Ignoring events for pod tf-job-operator-v1alpha2-6566f45db-xbnn6/kubeflow: rpc error: code = Unknown desc = Error: No such container: d058921793e13ca59d223840c0bfcb0eff4163c8ffb8b1cb90d9c6b44fd14a65
Oct 16 19:12:16 minikube kubelet[2980]: I1016 19:12:16.950844 2980 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-60c4f34f-d177-11e8-b6c9-08002753d444" (UniqueName: "kubernetes.io/host-path/60c8dd9d-d177-11e8-b6c9-08002753d444-pvc-60c4f34f-d177-11e8-b6c9-08002753d444") pod "jupyter-admin" (UID: "60c8dd9d-d177-11e8-b6c9-08002753d444")
Oct 16 19:12:16 minikube kubelet[2980]: I1016 19:12:16.955799 2980 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "jupyter-notebook-<KEY>" (UniqueName: "kubernetes.io/secret/60c8dd9d-d177-11e8-b6c9-08002753d444-jupyter-notebook-token-zvf6f") pod "jupyter-admin" (UID: "60c8dd9d-d177-11e8-b6c9-08002753d444")
Oct 16 19:12:35 minikube kubelet[2980]: W1016 19:12:35.052714 2980 pod_container_deletor.go:77] Container "311859051f3c3d97560076b479ac07710303749d5352caf730af2ff362bac6b7" not found in pod's containers`
Answers:
username_1: Thanks for opening the issue @username_0!
* Can you make sure the pods are all running with `kubectl get po --all-namespaces`?
* How are you trying to access the kubernetes dashboard in minikube? You should be able to run `minikube dashboard` to port-forward the dashboard to localhost.
* What version of minikube are you running?
username_0: So I found the issue. My virtualbox setup was broken and I wasn't aware until I did some deep-dive into minikube. Long story short, I re-installed virtualbox & minikube and things are working just fine.
Status: Issue closed
|
spring-projects/spring-kafka | 615780616 | Title: Allow update of retention.ms during topic modification via NewTopic
Question:
username_0: **Affects Version(s):** 2.3.7.RELEASE
<!--
🎁 Enhancement
-->
I'm trying to modify topic configuration after the creation of the topic (mainly retention.ms in my case, but why not other properties).
When creating a NewTopic, configuration is set correctly at creation by KafkaAdmin.addTopics. When increasing the partitions in a NewTopic, after creation, they are updated automatically by KafkaAdmin.modifyTopics.
The idea would be to update KafkaAdmin.modifyTopics to also modify retention.ms (or possibly others in NewTopics.configs).
Answers:
username_1: If your broker is 2.3 or later, you can do this today with
```java
try (AdminClient client = AdminClient.create(kafkaAdmin.getConfig())) {
...
client.incrementalAlterConfigs(configs, options).all().get(10, TimeUnit.SECONDS);
}
```
I am not convinced that we should do this unconditionally each time an application starts but I suppose we could consider adding a `describeConfigs` call and comparing them to the current config.
Contributions are welcome.
username_0: Thanks for the answer.
Nothing pressing on this point on my side, I just thought it would be a nice addition to spring-kafka in the long run. I had already gone with a new class that basically does what you're proposing (a class based on KafkaAdmin that reads the current configuration and calls incrementalAlterConfigs).
It is in kotlin and I simplified edge cases a lot though (it assumes that fatalIfBrokerNotAvailable is always true / doesn't allow calls to initialize after startup / only handles retention.ms and not all possibilities in NewTopic.configs).
Sadly, I won't have time to contribute on this point, especially with the edge cases to think about on whether we should take special care when updating some of the configs (like what is done on re-partitioning) / how failure should be handled during modification (is modification failure wanted the same way as creation, or should a new separate property be created as it is going to be called a lot more frequently than creation /...) |
arquivo/BrozzlerAdmin | 267176054 | Title: Error launching a new Collection job when the id of the job
Question:
username_0: Error launching a new Collection job when the id of the job does not have an integer to increment. Usually because the job_id was forced manually to not have one.
For instance:
`job_list[-1] = facebook_pan`
It breaks because of:
`job_number = int(job_list[-1].split('_')[1]) + 1`
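A defensive version of that increment would skip over manually named jobs instead of crashing. A minimal sketch (illustrative only; the helper name and id format are assumptions, not the project's actual code):
```python
import re

def next_job_number(job_list, default=1):
    """Return the last numeric job suffix plus one, or a default when no job id carries a number."""
    for job_id in reversed(job_list):
        match = re.search(r'_(\d+)$', job_id)
        if match:
            return int(match.group(1)) + 1
    return default

print(next_job_number(["collection_1", "collection_2", "facebook_pan"]))  # -> 3
```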
Status: Issue closed |
Canadensys/vascan-data | 67646373 | Title: Vernaculars of Symphoricarpos rotundifolia var. oreophilus
Question:
username_0: [Originally posted on GoogleCode (id 1284) on 2012-04-03]
FNA transfers S. oreophilus var. utahensis to S. rotundifolius var. oreophilus.
I put all the vernacular names there - they need to be evaluated for relevance.
We also need an accepted vernacular name (fr and en) for S. rotundifolius.
Thanks
Luc
Answers:
username_0: [Originally posted on GoogleCode on 2012-04-03 13:59Z]
Correction:
I went too fast. In fact, the taxon is transferred to the following name:
Symphoricarpos rotundifolius A. Gray var. vaccinioides (Rydberg) <NAME>
Voici ce que disent les auteurs: Variety vaccinioides is very often labeled Symphoricarpos oreophilus var. utahensis in herbaria and in floristic literature; however, the type of the latter name is referable to S. rotundifolius var. oreophilus.
So new vernacular names are needed for this taxon.
Luc
username_0: [Originally posted on GoogleCode on 2012-06-22 12:56Z]
Any progress on this issue?
Luc
username_1: [Originally posted on GoogleCode on 2012-06-28 20:26Z]
I used "symphorine des montagnes" for the species as a whole, because the English name "mountain snowberry" is used in CalFlora, and anything with "roundleaf" would risk confusion with the eastern S. orbiculatus, which is already called "symphorine à feuilles rondes".
I used "symphorine fausse-airelle" for the var. myrtilloides.
French names done.
username_2: [Originally posted on GoogleCode on 2012-11-19 19:03Z]
agree with using round-leaved snowberry for S. rotundifolia. Var. vaccinioides as 'mountain snowberry'.
agreed with change for S. orbiculatus as 'coralberry'. Found also FR vernacular as 'symphorine à baies-de-corail' on GRIN if suitable to list?
username_1: [Originally posted on GoogleCode on 2012-11-20 21:28Z]
I added « symphorine à baies de corail » as a French synonym (I removed the hyphens, which are incorrect in this case). Thank you Marilyn.
End of story.
Status: Issue closed
|
wangyonghong/gitalk | 709733830 | Title: [Linux command] bunzip2 | Yonghong's internet notes
Question:
username_0: https://yonghong.tech/linux-command/bunzip2/
Create a bz2 archive. Supplementary notes on the bunzip2 command: it decompresses ".bz2" archives created by the bzip2 command, i.e. it compresses and decompresses files. This command is similar to "gzip/gunzip" and can only compress files; for a directory it only compresses the files under it, and after compression an archive with the ".bz2" suffix is generated in the directory. bunzip2 is actually a symbolic link (soft link) to bzip2, so both compression and decompression can be done through bzip2. Syntax: bunzip2 (options) (par |
racer-rust/racer | 730163448 | Title: Unable to find libstd under RUST_SRC_PATH.
Question:
username_0: % racer complete std::io::B
Unable to find libstd under RUST_SRC_PATH. N.B. RUST_SRC_PATH variable needs to point to the src directory inside a rust checkout e.g. "/home/foouser/src/rust/src". Current value ""/nix/store/fjghknnd4x9zpgx6hxznaiw6c7y0jr2s-rust-1.47.0-2020-10-07-18bf6b4f0/lib/rustlib/src/rust/library/libstd""
I use nix Mozilla-rust-overlay stable
nixpkgs.latest.rustChannels.stable.rust.override {
extensions = [ #
# "lldb-preview"
"clippy-preview"
# "miri-preview"
"rls-preview"
# "rust-analyzer-preview"
"rustfmt-preview"
"llvm-tools-preview"
"rust-analysis"
"rust-std"
"rustc-dev"
# "rustc-docs"
"rust-src"
];
}
Thanks
Answers:
username_1: I solved it with the latest racer version.
THANKS :)) |
mintproject/mint-ui-lit | 503772178 | Title: [Model Catalog Explorer] Reorganize "Overview" Contents
Question:
username_0: It seems like the model catalog is hard to understand because the differences between configuration and setup are not clear. As discussed with @hvarg, we should reorganize the contents in a more hierarchical manner. The picture below shows an example. **Only the metadata fields that differ between each version/config/setup should be shown.** For example, if the processes are the same in config and setup, only show the ones in the config and do not show them on the setup.
Status: Issue closed |
guardian/grid | 182987007 | Title: An exception occurred during Plugin [play.api.GlobalPlugin] initialization
Question:
username_0: Hello,
I'm getting an error message as follows. How can I resolve the issue? Thank you.
"An exception occurred during Plugin [play.api.GlobalPlugin] initialization"
Answers:
username_1: Hey,
Can you post the full stack trace? That looks like it might well be a
configuration problem, but the full trace will be definitive.
Thanks.
username_0: Cannot load plugin
An exception occurred during Plugin [play.api.GlobalPlugin] initialization
No source available, here is the exception stack trace:
->java.lang.NoClassDefFoundError: Could not initialize class lib.Config$
Global$.beforeStart(Global.scala:16)
play.api.GlobalPlugin.<init>(GlobalSettings.scala:214)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
play.api.WithDefaultPlugins$$anonfun$plugins$1$$anonfun$apply$9.apply(Application.scala:132)
play.api.WithDefaultPlugins$$anonfun$plugins$1$$anonfun$apply$9.apply(Application.scala:130)
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
scala.collection.immutable.List.foreach(List.scala:381)
scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
scala.collection.immutable.List.map(List.scala:285)
play.api.WithDefaultPlugins$$anonfun$plugins$1.apply(Application.scala:130)
play.api.WithDefaultPlugins$$anonfun$plugins$1.apply(Application.scala:166)
play.utils.Threads$.withContextClassLoader(Threads.scala:21)
play.api.WithDefaultPlugins$class.plugins(Application.scala:128)
play.api.DefaultApplication.plugins$lzycompute(Application.scala:402)
play.api.DefaultApplication.plugins(Application.scala:402)
play.core.ReloadableApplication$$anonfun$handleWebCommand$1$$anonfun$apply$5.apply(ApplicationProvider.scala:192)
play.core.ReloadableApplication$$anonfun$handleWebCommand$1$$anonfun$apply$5.apply(ApplicationProvider.scala:191)
scala.Option.flatMap(Option.scala:171)
play.core.ReloadableApplication$$anonfun$handleWebCommand$1.apply(ApplicationProvider.scala:191)
play.core.ReloadableApplication$$anonfun$handleWebCommand$1.apply(ApplicationProvider.scala:191)
scala.Option.orElse(Option.scala:289)
play.core.ReloadableApplication.handleWebCommand(ApplicationProvider.scala:189)
play.core.server.Server$$anonfun$getHandlerFor$1.apply(Server.scala:69)
play.core.server.Server$$anonfun$getHandlerFor$1.apply(Server.scala:69)
scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:125)
scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:125)
scala.util.control.Exception$Catch.apply(Exception.scala:103)
scala.util.control.Exception$Catch.either(Exception.scala:125)
play.core.server.Server$class.getHandlerFor(Server.scala:69)
play.core.server.NettyServer.getHandlerFor(NettyServer.scala:39)
play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$10.apply(PlayDefaultUpstreamHandler.scala:157)
play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$10.apply(PlayDefaultUpstreamHandler.scala:157)
scala.util.Either.fold(Either.scala:99)
play.core.server.netty.PlayDefaultUpstreamHandler.messageReceived(PlayDefaultUpstreamHandler.scala:142)
org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
com.typesafe.netty.http.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:62)
org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)
org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
username_1: That looks like missing configuration for whichever app caused that error.
Have a look at the config generator in the docker folder and check whether
your app has the necessary config.
username_0: I will check it.
Thank you.
username_0: How can I configure the config generator file for local network share files?
username_2: Quite a late response... https://github.com/guardian/grid/pull/2927 looks like it'll help here. Would be interested to get your feedback on it.
username_2: Closing - https://github.com/guardian/grid/pull/2927 should solve this.
Status: Issue closed
|
ueberdosis/tiptap | 890388630 | Title: Multiple Cursors in single user mode.
Question:
username_0: **The problem I am facing**
For styling text and repetitive tasks, allowing multiple cursors like in many code editors is a godsend.
**The solution I would like**
VS Code allows setting a second cursor using Alt+Click.
Answers:
username_1: Since this is not possible with ProseMirror itself, we can’t provide such a functionality. Sorry!
Status: Issue closed
|
JuliaPackaging/CondaBinDeps.jl | 541108750 | Title: Not compatible with latest BinDeps version 1.0.0?
Question:
username_0: ```
(v1.3) pkg> add CondaBinDeps
Resolving package versions...
ERROR: Unsatisfiable requirements detected for package CondaBinDeps [a9693cdc]:
CondaBinDeps [a9693cdc] log:
├─possible versions are: 0.1.0 or uninstalled
├─restricted to versions * by an explicit requirement, leaving only versions 0.1.0
└─restricted by compatibility requirements with BinDeps [9e28174c] to versions: uninstalled — no versions left
└─BinDeps [9e28174c] log:
├─possible versions are: [0.7.0, 0.8.9-0.8.10, 1.0.0] or uninstalled
└─restricted to versions 1.0.0 by an explicit requirement, leaving only versions 1.0.0
```
Answers:
username_1: I think the upper version bound is set in the JuliaRegistries General registry:
https://github.com/JuliaRegistries/General/blob/master/C/CondaBinDeps/Compat.toml
username_0: OK. Manually changed that on my computer. Now it works. Thanks!
Status: Issue closed
username_2: Closed by #3 |
ray-project/ray | 310256198 | Title: NAN reward when using customized config
Question:
username_0: <!--
General questions should be asked on the mailing list <EMAIL>.
Before submitting an issue, please fill out the following form.
-->
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 14.04
- **Ray installed from (source or binary)**: source
- **Ray version**: 0.4.0
- **Python version**: 3.5.2 (Anaconda)
- **Exact command to reproduce**: See description
### Describe the problem
Default config works fine for me, but if I change the [default config](https://github.com/ray-project/ray/blob/master/python/ray/rllib/es/es.py#L29) in es.py as follows:
```
DEFAULT_CONFIG = dict(
l2_coeff=0.005,
noise_stdev=0.02,
episodes_per_batch=10,
timesteps_per_batch=500,
eval_prob=0.003,
return_proc_mode="centered_rank",
num_workers=5,
stepsize=0.01,
observation_filter="MeanStdFilter",
noise_size=250000000,
env_config={})
```
then run `python es/es.py --env Swimmer-v1 --run ES --local-dir temp`, and you will get a NaN reward.
### Source code / logs
Plz see this [gist](https://gist.github.com/username_0/0a57d54e0c1374552f92dcc937167447).
Answers:
username_1: Since there are only `episodes_per_batch` episodes per batch, and each episode is used for evaluation with probability `eval_prob`, on average you'll expect `0.03` episodes to be used for evaluation per algorithm iteration. When this happens, the reward is displayed as NAN. There isn't anything really wrong though.
You can test this by setting `eval_prob` to `0.5` or something like that.
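To make the expected-count arithmetic concrete (a tiny illustrative snippet, not part of the RLlib code):
```python
episodes_per_batch = 10
eval_prob = 0.003

# Expected number of evaluation episodes per training iteration.
print(episodes_per_batch * eval_prob)  # 0.03 -> most iterations have no eval episode, so the reward shows as NaN

# With eval_prob = 0.5 you would expect around 5 evaluation episodes per iteration.
print(episodes_per_batch * 0.5)  # 5.0
```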
username_0: Cool, got it, thank you for the quick response!
BTW, I am very curious whether the performance of the RL algorithms implemented here is comparable with other open source implementations given the same settings? Like [here](https://github.com/openai/baselines) for PPO and DQN, and [here](https://github.com/openai/evolution-strategies-starter) and [here](https://github.com/uber-common/deep-neuroevolution) for ES?
username_1: The ES implementation here is pretty similar to the OpenAI ES implementation and uses some of the same code.
I'm not sure about the Uber implementation. Our DQN implementation shares some code with the baselines implementation. Our PPO implementation is different from the baselines implementation but the performance is probably comparable in most settings.
Status: Issue closed
username_0: Very cool, thank you so much. |
ChimpGamer/NetworkManager | 832086281 | Title: Ability to set default language and Theme for TicketSystem
Question:
username_0: **Describe the solution you'd like**
Configuration with the possibility to modify the default language and theme for the TicketSystem.
**Additional context**
Also add the ability to change this directly when running the install. |
sinatra/sinatra | 168685262 | Title: Document sharing sessions
Question:
username_0: Perhaps this is already covered somewhere, but worth looking into.
Especially with regards to Rack::Protection and Sinatra sessions.
/cc sinatra/rack-protection#107 and sinatra/rack-protection#83 |
mvysny/photocloud-frame-slideshow | 566467697 | Title: Adding filter to show only horizontally or vertically taken photos
Question:
username_0: I usually add a bunch of photos (1000+) to show randomly, but since my screen is mounted horizontally it would be nice if the app would have an option to filter and show only horizontally (or vertically) taken photos.
What do you think about adding an option to show only horizontally (width > height) or vertically (height > width) taken photos?
Answers:
username_1: Thanks! This should be doable, but let me take a look first, before making any promises :)
username_1: Implemented in 1.13.16. To enable, head to `Settings / Filters/Moments / Filter on Image Size` and set it to "Landscape Only" or "Portrait Only".
Status: Issue closed
|
MellowYarker/Observer | 451494662 | Title: Detect if bloom filter is missing records.
Question:
username_0: Every record added to the database must have also been added to the bloom filter, so if we were to delete the bloom filter, it's likely any database update would fail. Therefore, we should check to see if the `# of entries in the bloom filter` < `# entries in the database`.
Here's an example
1. Add 5 records to both BF and DB
2. Delete bloom filter
3. Try to add same 5 records; bloom filter will allow it but database tx will fail.
Instead of crashing, we can check whether the bloom filter is behind the database; if so, we repopulate it.
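A sketch of the proposed check (illustrative Python only; it assumes the filter kept an element counter, which, as the answer below notes, the current implementation does not, and the table name is made up):
```python
import sqlite3

def bloom_filter_is_behind(bloom_count, db_path):
    """Compare a hypothetical bloom-filter element counter with the key database's row count."""
    conn = sqlite3.connect(db_path)
    try:
        (db_count,) = conn.execute("SELECT COUNT(*) FROM keys").fetchone()
    finally:
        conn.close()
    return bloom_count < db_count  # True -> repopulate the filter from the database
```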
Answers:
username_0: This doesn't seem like it's possible. There's no way to know how many elements have been added to the bloom filter. I could modify the struct to have a counter but I feel like it's redundant. It should be noted on the wiki or README that the only safe interface for modifying the database of key sets is the `gen_keys` program (*it is dangerous to modify the database via the sqlite3 CLI or other applications/programs that would not also modify the accompanying bloom filter*).
**For reference**: Write a script that lets users delete the bloom filter and database.
Status: Issue closed
|
ipython/ipython | 419516529 | Title: Print generator as lists
Question:
username_0: In IPython, every time a function returns an iterator, it just prints something like this:
```
In [48]: Path("/tmp").glob("*.txt")
Out[48]: <generator object Path.glob at 0x7f7d7dd351b0>
```
So then, you have to do this:
```
In [54]: list(_)
Out[54]:
[PosixPath('/tmp/abc.txt'),
PosixPath('/tmp/def.txt'),
...
```
Shouldn't the pretty printer expand the content of the generator when that's what's returned by the command? That's usually what I want. Now I have to litter my IPython sessions with `list(...)`.
Status: Issue closed
Answers:
username_1: Well, a generator can:
1) be infinite
2) exhaust itself once iterated over.
So printing it as a list cannot be the default option, as in many cases it would break. If a generator wants to have a nice repr, it needs to implement it explicitly.
Example of breaking a generator by printing it:
```
In [1]: gen = (i for i in range(10))
In [2]: gen
Out[2]: <generator object <genexpr> at 0x1083d9db0>
In [3]: list(gen)
Out[3]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
In [4]: next(gen)
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
<ipython-input-4-6e72e47198db> in <module>
----> 1 next(gen)
StopIteration:
```
Closing as I don't believe we can do anything, but feel free to comment and we can reopen.
username_0: 1. When pretty printing a long list, IPython will truncate it; it could do the same for generators.
2. When calling a function that returns a generator without assigning it to a variable, it's already kind of exhausted. Sure, it's in `_`, but it's not that useful.
As things stand, printing a variable prints the `<generator ...>` thing:
```
In [1]: gen = (i for i in range(10))
In [2]: gen
Out[2]: <generator object <genexpr> at 0x1083d9db0>
```
But this prints the result:
```
In [1]: (i for i in range(10))
Out[1]: [0,1,2,3,4,5,6,7,8,9]
```
Or as an option. I'm just annoyed at constantly having to do `list(_)`, especially for things that used to return a list but now return a generator in Python 3+.
username_2: ```python
In [1]: gen = (i for i in itertools.count())
```
This can not be truncated as we can never reach the end of it to decide where to truncate it!
```python
In [1]: gen = (i for i in range(10))
In [2]: list(gen)
Out[2]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
In [3]: list(gen)
Out[3]: []
In [4]:
```
I take a slightly different look at @username_1's example: printing the generator mutates it!
username_0: Why do you need to reach the end to decide to truncate it? Read enough to fill the screen and stop there.
username_1: That's because, for arrays, it is in most cases the middle that is truncated. You usually want to know the length to decide where to truncate. We could iterate over only a few items, but again this still raises the above problem about exhausting iterators. I disagree that assignment to _ is useless anyway; as you proved, you used it in list(). Generators also sometimes represent large computations (dask for example). Laziness is on purpose, and peeking at the first element could have arbitrary side effects.
If you want to register a display hook that does this you can do it without changing IPython source. But it will not be the default.
Historically, the completer tried to be smart about looking into generators and it was more trouble than anything, sometimes even triggering external API calls which broke some prod systems. So we're going to be really careful about this if it is ever done.
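For reference, a rough sketch of such an opt-in display hook (illustrative only, not an official IPython recipe; note that it consumes items from the generator it displays):
```python
import itertools
import types

def _show_generator(gen, p, cycle):
    # Pull at most 11 items; this consumes them from the generator.
    items = list(itertools.islice(gen, 11))
    suffix = ', ...' if len(items) > 10 else ''
    p.text('<generator> [' + ', '.join(repr(x) for x in items[:10]) + suffix + ']')

ip = get_ipython()  # only available inside an IPython session
ip.display_formatter.formatters['text/plain'].for_type(types.GeneratorType, _show_generator)
```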
username_0: Display hook? That sounds good. Thanks for the tip. |
grantjenks/python-diskcache | 716770031 | Title: Purpose of tags if not for search?
Question:
username_0: I've stumbled over this exciting project! Congrats!
Looking at tags I thought they might be very useful for searching items, but I don't see how that would work. And hence, what is their intended purpose right now? Also there is a statement in https://github.com/username_1/python-diskcache/issues/118 that search is not a goal for this project. I understand that adding a search feature would make DiskCache look more like a real database, though.
Answers:
username_1: The tags mainly support eviction: http://www.username_1.com/docs/diskcache/api.html#diskcache.Cache.evict
For example, I used them once when doing dynamic image resizing on the server. The client could request an image of a particular size and the server would grab the image from a blob store and then resize it and cache the resized image. When the resized image was cached, it would use a key like "image-1234-size-1024-by-768" and include "image-1234" in the tag. Later if the client uploaded a new image and replaced "image-1234" then I would evict all items with tag "image-1234" from the cache.
Tags can also be simple metadata (they don't have to be strings). When retrieving an item via get() you can request the tag too. I used that once for an expensive API call. I didn't want to change the key or value but I wanted to include some metadata to record how long the item took to generate. I stored that information as a float in the tag.
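A small sketch of both uses described above (keys, values and the cache path are made up; set(), get(..., tag=True) and evict() are the calls the answer refers to):
```python
from diskcache import Cache

cache = Cache('/tmp/resized-images')  # directory is illustrative

# The tag ties every cached rendition to its source image so they can be evicted together.
cache.set('image-1234-size-1024-by-768', b'...jpeg bytes...', tag='image-1234')
cache.set('image-1234-size-640-by-480', b'...jpeg bytes...', tag='image-1234')

# Tag as lightweight metadata: record how long the value took to generate.
cache.set('expensive-api-call', {'result': 42}, tag=1.73)
value, seconds_to_generate = cache.get('expensive-api-call', tag=True)

# The source image was replaced: drop every rendition tagged with it.
cache.evict('image-1234')
```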
username_0: Makes sense. And having arbitrary tags is cool, but would make using them for search not exactly easier. But I still think it would be helpful to have an analog operation to `cache.iterkeys()` but over the tag values, like `cache.itertags()` so you could walk efficiently over these tags and decide dynamically what to really _get_/_fetch_ from the cache.
Status: Issue closed
username_1: Maybe but I don’t see a compelling use case. Seems easy to use iterkeys() and get() as a start.
username_0: `iterkeys()` returns only keys, but no tags, and to get the tags you must get the entire item from the cache. Which one could avoid if there was a way to get the tags only.
username_1: That’s a good point. What’s your use case?
username_0: I'm mostly interested in searching and ordering for keys, or alternatively for tags. When you work with geohashes e.g. you want to access a hash "next to" another one in your cache.
username_1: I see. That’s helpful. So you’d like to do bounds/prefix queries on both the keys and tags? It seems possible given the indexes are there. But the API for that functionality is not obvious to me.
Maybe like:
irange_keys(lo=None, hi=None, inclusive=(True, True), keys=True, values=False, tags=False)
irange_tags(lo=None, hi=None, inclusive=(True, True), keys=True, values=False, tags=False)
The “prefix” case looks special for strings. I’m not sure it should be included.
username_1: It might also make sense to have two separate caches: key-value and key-tag.
Status: Issue closed
|
papercss/papercss | 278940546 | Title: Create single component with css extension
Question:
username_0: We actually only have `.less` files for single components, but what if we need to import a single component in pure CSS? For example, in [vuejs-paper-css](https://github.com/papercss/vuejs-paper-css) I need to import the styling of each component (in `sass` or `css`, since adding `less` support would increase the complexity of every user's build).
Status: Issue closed
Answers:
username_0: It's actually done on the `separate-components` branch.
pelagios/recogito2 | 155668924 | Title: Account settings page
Question:
username_0: * Reset password
* Delete account (What do we do with associated docs? Offer two options a la Wordpress: delete or transfer to other user?)
* Set a real/full name (optional)?
* Personal homepage link?
Answers:
username_0: Moved the "advanced" topics of reset password and delete account to separate issues (#116, #117), to be addressed later.
Status: Issue closed
|
Smile-SA/mongogento | 182724671 | Title: Magento support version
Question:
username_0: Hello All,
We are using Magento Community Edition 1.9.1. Looking at the performance metrics of MongoGento, we would like to migrate our application to it. Please advise whether MongoGento is compatible with our Magento version. We also have a long list of extensions used on the site; where should we cross-verify their compatibility?
Regards
Abhijit
Answers:
username_1: Hello @username_0 ,
MongoGento should run fine on CE 1.9.1.
But yes, you have to take a look at each of your extensions to check whether any of them are:
- loading product data without using addAttributeToSelect/addAttributeToFilter;
- loading product data directly with SQL;
- implementing a custom product collection that inherits from the standard one: refactor these to inherit from the MongoGento collection (see the configurable product collection in the module).
Regards, and keep in touch if you are using Magento2, we may publish an alpha version of this module soon :) |
sequelize/sequelize | 92168818 | Title: [Question]: Is it possibly to order an include by it's 'through' association?
Question:
username_0: I'm having trouble figuring out how to set the order of the nested data on a query.
My model is as follows:
```
Bucket
WalkthroughBucket //defined as 'through' table, but also has it's own 'displayOrder' column
Walkthrough
```
Then I make a query from the bucket...
```
db.Bucket.findAll({
include: [{
model: db.Walkthrough,
attributes: ['id'],
order: [[db.Walkthrough, db.WalkthroughBucket, 'displayOrder', 'DESC']]
}]
})
```
What I'm expecting is that the first returned result, let's call it `bucketA`, would have a bucketA.WalkthroughBuckets array ordered correctly by the associated `displayOrder`, but it still comes back out of order.
Is there an obvious issue with my query or is this sort of thing not as I'm expecting? I couldn't actually find `order` used inside of an `include` anywhere in the docs, so I came here to confirm that's even doable.
Answers:
username_0: Pardon me, I meant `bucketA.Walkthrough` is not ordered as expected, not `bucketA.WalkthroughBucket`.
username_1: `order` is generally for ordering the main table, although given how SQL works it should *just* work on the include as well. Can you post the SQL for the call?
username_2: This is quite old, but I'm leaving my findings here in case they help someone.
As @username_1 said, `order` for main table, but there is `separate` option to enable `order` in include. ( http://docs.sequelizejs.com/class/lib/model.js~Model.html#static-method-findAll )
```
include: [
{model: Feedback, as: 'feedbacks', separate: true, order: [['updatedAt', 'DESC']]},
]
```
Or, I could just provide `order` on main table query to order associated table.
```
include: [
{model: Feedback, as: 'feedbacks'},
],
order: [[Sequelize.col('feedbacks.updatedAt'), 'DESC']]
```
Hope this helps!
OliverSwift/Promys | 313123875 | Title: prebuilt binaries
Question:
username_0: Hi!
Do you plan on having pre-built, standalone binaries for the client?
Thanks!
Status: Issue closed
Answers:
username_1: Hi Carlos,
It's already the case. You have the client binaries on the Promys device through the embedded web page.
Or available at http://promys.me for early commissioning. These are used when building SD image. |
keptn/enhancement-proposals | 843034080 | Title: Gimlet - *One chart to rule them all*
Question:
username_0: # Gimlet.io
**Success Criteria:**
## Motivation
* *Target*:
* *Pain*:
* *Driver*:
## User Stories
* As a user, can define a remediation process (sequence) in the Shipyard file. |
jraoul2002/duck-duck-clone | 722079000 | Title: About the main-page
Question:
username_0: To accomplish my work I have:
- [x] Created the branch main-page
- [x] filled in the file index.html
- [x] created folder css
- [x] created a file style.css
- [x] created a folder Images
- [x] pushed everything on github
- [x] opened a pull request
- [ ] merged the branch main-page to the master branch
Status: Issue closed |
2sic/2sxc | 176155666 | Title: Feature: Add content-type instructions capability
Question:
username_0: provide an instructions field to describe a content type, so that the instructions will be shown when editing.
Answers:
username_0: Got it all to work, but it doesn't work for ghost-content types yet, as the metadata would need to be loaded from another app...
username_0: works!
Status: Issue closed
|
aws-amplify/amplify-cli | 622376079 | Title: How am I supposed to use Amplify Environment variables in my Lambda @functions?
Question:
username_0: **Which Category is your question related to?**
Environment variables, Amplify and Lambda
This is a pretty straightforward feature and question. It has been mentioned in many issues here
- https://github.com/aws-amplify/amplify-cli/issues/684
- https://github.com/aws-amplify/amplify-cli/issues/678
Or elsewhere:
- https://www.reddit.com/r/aws/comments/dia7bn/how_do_you_access_amplify_environment_variables/
- https://stackoverflow.com/questions/57446615/creating-process-env-variables-using-aws-amplify
And the [terrible documentation](https://docs.aws.amazon.com/amplify/latest/userguide/environment-variables.html) does not help, only the first part of it (setting the variables) makes sense.
Here is from where I started, what I went through and what didn't work. Let me know if I have missed something.
**_Where I started_**
I wanted to remove the api id from dynamoDB generated tables for @model, in order to have a table name e.g. "project-dev" instead of "Project-6ezfre56g4erg-dev". It would have been much easier, to reference it, especially since I have multiple api using the same table.
As I understood it, Amplify is opinionated and it abstracts the implementation details, we then cannot have both custom data sources and an API generated through the @model directive.
I still tried to update the cloudFormation template of my nested stack to remove the apiId from the table name (`GetAttGraphQLAPIApiId`) everywhere it was mentionned, but the update rollback:
``` yaml
"TableName": {
"Fn::If": [
"HasEnvironmentParameter",
{
"Fn::Join": [
"-",
[
"Project",
{
"Ref": "GetAttGraphQLAPIApiId"
},
{
"Ref": "env"
}
]
]
},
```
Again, this is a managed approach, so I guess I should comply with it. However, when you offer a managed approach, it should consider that the AppSync data sources are exposed, can be changed, and that it has no consequence on the API. This is very inconsistent with the managed approach and misleading to say the least.
**_So I needed environment variables_** to store my table names. I found the documentation, switched to the managed hosting of the frontend apps to have the possibility to add environment variables (no idea why it is mandatory) and defined my env variables at the amplify app level as mentioned [here](https://docs.aws.amazon.com/amplify/latest/userguide/environment-variables.html#setting-env-vars).
I expected these global, app level, env variables to be global env variables. However, as far as I know, they are not. In my lambda workers, I cannot find them at the `process.env` level. Why are they not inherited? Am I missing something?
I really don't want to manually add local env variables to my 50+ lambda workers at the lambda level.
I also tried to update the CloudFormation template of the lambda with the same approach as the snippet above, but `GetAttGraphQLAPIApiId` is not available. Since I can't have custom table names, I am back to square one.
Since it seems that Amplify env variables are exposed at the build step, I tried to assign them to the backend at the build level in the `amplify.yml` file. No luck there either; it fails to build if I assign a variable.
I really like Amplify but this issue has made me lose a lot of time and was a real disappointment. I still think I could have missed something so I took the time to write all this. Please consider the effort and let me know how I should proceed to have my env variables available in my lambda workers, or change my tables names to custom names while keeping the @model directive benefits.
Answers:
username_1: @username_0 Apologies for the confusion here. Did you try out the `amplify update function` flow to add the table names generated by the GraphQL transformer (i.e. the data sources) as environment variables to the function? We provide that flow and it's documented here: https://docs.amplify.aws/cli/function#function-templates
Unfortunately, at this time the lambda functions generated by the CLI do not inherit the env variables defined in the Amplify Console. The env variables defined in the Amplify Console are only used in your build configs and are build-time environment variables.
username_0: @username_1 Thank you for your answer. I'll look into the doc you mentioned.
Status: Issue closed
username_2: @username_1 - is there a feature request ticket for this functionality? And is there a workaround - some better way of centrally managing e.g. a cryptographic secret and automatically getting it into lambdas as an environment variable without putting it into a file that will be in version control?
username_3: Any chance this will be picked up in a upcoming release? Becoming a real headache to deal with
username_4: @username_3 we're working on this now, unsure of a date but will try to come back soon with something close.
username_5: hi @username_4 do you have a PR or an Issue to track the status of it? many thanks
username_6: I need so much this feature. Any update about this?
username_7: Any updates? Would be cool to have a native solution
username_8: Why is this closed? We are facing similar issues when we want to set env vars...
username_9: +1. Bit silly not having this, really.
username_10: +1
username_11: +1
username_12: +1
username_13: +1
username_14: +1
username_15: +1
username_16: @username_4 - there are a lotta prayers being sent into the void here. Is there any update on making this crucial feature work? If you could check, I would really appreciate it. Thanks! It would be real nice to have env vars in Lambdas.
username_17: +1
username_18: +1
username_10: Hey gang.
So, FYI, it's totally possible to _manually_ add ENV vars to the Lambda after it has been created by Amplify, if you just need your Lambda to have access to ENV vars. I manually copied mine from the Amplify config into the AWS webtop... Just to get up and running.
Go to **Lambda > Functions > Configuration** and scroll down to the **Environment variables** section. 🤦♂️ And set your env vars by hand. That avoids the hassle of inserting them via a script, or having to integrate with AWS Secrets Manager.
username_8: These will be overwritten by amplify at some point... Trust me, I did this as the first thing, somewhere along the road it will update the CF template and remove them, potentially breaking your applications.
username_19: +1
username_20: Hi
Any update on this?
username_4: Coming back to this since it's a popular topic in order to address some ways to implement environment variables today in Lambda with Amplify (until the CLI implements this feature natively).
There are two main approaches (other than manually adding values in the Lambda console):
1. If your value is secret, you can use [Secrets Manager](https://aws.amazon.com/secrets-manager/). To do so, you first need to create a secret in the [secrets manager](https://console.aws.amazon.com/secretsmanager) console.
Next, add a statement to the `PolicyDocument` in __amplify/backend/function/<function_name>/<function-name-cloudformation-template>.json__ to give the Lambda function permission to use the secret:
```
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue"
],
"Resource": {
"Fn::Sub": [
"arn:aws:secretsmanager:${region}:${account}:secret:key_id",
{
"region": {
"Ref": "AWS::Region"
},
"account": {
"Ref": "AWS::AccountId"
}
}
]
}
}
```
Next, access the token in your function:
```js
const AWS = require('aws-sdk')
const secretsManager = new AWS.SecretsManager()
const secret = await secretsManager.getSecretValue({ SecretId: 'YOUR_KEY' }).promise()
console.log(secret.SecretString)
```
There are a few approaches outlined in [this](https://github.com/aws-amplify/amplify-cli/issues/684#issuecomment-453639271) thread.
2. If your value is just a configuration value you can configure the CloudFormation configuration locally to set the value - in __amplify/backend/function/<function_name>/<function-name-cloudformation-template>.json__
For this purpose there is a section in the template - `Parameters` - that you can set.
```json
"Parameters" : {
"MyKey" : {
"Type" : "String",
"Default" : "my-environment-variable"
}
}
```
And then use these parameters in `Environment` declaration:
```json
"Environment":{
"Variables":{
"MY_ENV_VAR":{
"Ref":"MyKey"
}
}
}
```
You can also specify a CFN param in team-provider-info.json under __<env>.categories.function.<func name>__ if you want to specify environment-specific parameters
username_21: @username_4 , in the case of using team-provider-info.json, how should the lambda function access the env variable value? Thanks! |
igvteam/igv | 298357320 | Title: NumberFormatException while creating index in mac
Question:
username_0: Go to file-> load from file -> select any .sam file and get this error:
<img width="930" alt="screen shot 2018-02-19 at 14 41 59" src="https://user-images.githubusercontent.com/2141907/36390955-4878ad28-1583-11e8-8cbf-a975ab997933.png">
macOS sierra: 10.13.3 (17D47)
IGV 2.4.8
Answers:
username_1: That's not a .sam file. It looks binary.
Status: Issue closed
|
fossasia/susi_android | 399799344 | Title: Improper Information while login
Question:
username_0: **Describe the bug**
When a user tries to login with email id which is not registered he gets an error saying "**Please make sure that you enter the correct custom server URL, the entered URL is incorrect**".
**Expected Behaviour**
Instead of displaying that message, a message like this should be displayed: "**The email you entered is not registered with us. Please try again.**"
**Steps to reproduce it**
1. Go to login page of the app.
2. Try to log in with an email id which is not registered.
**Would you like to work on the issue?**
Yes.
Answers:
username_0: #1846
username_0: https://github.com/fossasia/susi_android/pull/1846/commits/d851359937f2163d21ee9e5f133bf7b1b27e8cd9
username_0: #1846 https://github.com/fossasia/susi_android/pull/1846/commits/d851359937f2163d21ee9e5f133bf7b1b27e8cd9
Status: Issue closed
|
npgsql/npgsql | 356828513 | Title: Arrays/Lists of nullable types are not supported in parameters
Question:
username_0: ### Steps to reproduce
```
using (var cmd = new NpgsqlCommand("select * from meta.well where id = any(@p)", connection))
{
cmd.Parameters.AddWithValue("p", new List<int?> { 1, 2, 3});
using (var reader = cmd.ExecuteReader())//throws NotSupportedException
{
reader.Read();
}
}
```
### The issue
Arrays/Lists of nullable types are not supported in parameters
```
System.NotSupportedException : The CLR array type System.Collections.Generic.List`1[System.Nullable`1[System.Int32]] isn't supported by Npgsql or your PostgreSQL. If you wish to map it to an PostgreSQL composite type array you need to register it before usage, please refer to the documentation.
```
### Further technical details
Npgsql version: 4.0.2
Answers:
username_1: Tracked by #443.
Status: Issue closed
username_2: In the meantime, you can simply use arrays/lists of non-nullable ints. |
ZipArchive/ZipArchive | 520234652 | Title: Appears to be an issue with iOS 13 and AES encryption with v2.2.2
Question:
username_0: I have an app that uploads a zip file of logs periodically. I use AES encryption on the zip file. Starting about 5 weeks ago, the uploaded files are now corrupted and can't be extracted. The messages I'm getting are similar to the bug that prompted v2.2.2, but I'm running v2.2.2 with these issues.
I'm not 100% sure that the issue is iOS 13 related, but we have been using v2.2.2 almost since release, and the corrupted zip files roughly start around the time of iOS 13's release. And given that v2.2.2 uses Apple crypto, it seems possible that Apple may have changed something, or exposed an option that changed the behavior.
Reverting to ZipArchive v2.1.4 (before the Apple crypto) has resolved the problem; The zip files can now be un-zipped again.
Here's a sample of how I use ZipArchive:
```
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
let logsURL = documentsURL.appendingPathComponent("SomeLogs")
let zipURL = documentsURL.appendingPathComponent(fileName ?? "SomeLogs.zip")
let success: Bool = SSZipArchive.createZipFile(atPath: zipURL.path,
withContentsOfDirectory: logsURL.path,
keepParentDirectory: false,
compressionLevel: -1,
password: <PASSWORD>,
aes: true,
progressHandler: nil)
```
If there's anything I can add that would be helpful, please let me know.
Answers:
username_1: Hi, is there any way to reproduce this bug?
username_2: There was a bug in minizip's AES implementation when using AppleCrypto that was fixed. See username_2/minizip#397 and username_2/minizip#398. Perhaps they were using an old version that had the bug and switched to a version where the bug was fixed, and found the old files no longer unzipped properly.
username_1: @username_2 Thanks for the explanation!
@username_0 I will close this issue, and will reopen it if the problem still exists.
Status: Issue closed
|
square/keywhiz | 65971212 | Title: Migrate to JOOQ
Question:
username_0: JDBI is getting relatively painful to use for keywhiz due to complexity and verbosity (see [AclDAO.java](https://github.com/square/keywhiz/blob/master/server/src/main/java/keywhiz/service/daos/AclDAO.java#L226)).
We've used JOOQ for some DAO tests and it'd be nice to move everything to JOOQ and keep things uniform.
Answers:
username_1: r.i.p. jdbi.
It might be worth posting in the google group explaining why we did this. Maybe also give people a heads up in case they run into any issues related to these changes?
Status: Issue closed
|
symfony/symfony | 336221250 | Title: 4.1.1 'Simple service testing' on private services not working after DI changes
Question:
username_0: **Symfony version(s) affected**: 4.1.1
**Description**
This commit made for last SF version `4.1.1` breaks feature brought to in version `4.1.0`.
`Simple service testing` doesn't work anymore. https://symfony.com/blog/new-in-symfony-4-1-simpler-service-testing
https://github.com/symfony/symfony/commit/6764d4e01272b922f6f3c9ab2e3dee2fd19acd43#diff-ce4ed09b11d8fa531159e96df52124f3
**How to reproduce**
```
// class TestX extends \Symfony\Bundle\FrameworkBundle\Test\WebTestCase\WebTestCase
$client = self::createClient();
$container = $client->getContainer();
// Or any private service
$em = $container->get(EntityManagerInterface::class);
```
Answers:
username_1: Thanks for the report. We broke this in #27501 because we had no choice, and the affected feature was young enough that people (you...) could still do something about it. Sorry about that.
The way to go is to use `self::getContainer()` instead of `$client->getContainer()` now.
I'm going to update the blog post.
Status: Issue closed
username_0: Thank you for fast reply
username_0: ```
bin/console debug:container 'Core\Interfaces\Api\UserApiInterface'
// This service is an alias for the service LTTSIntegration\Api\UserApi
Information for Service "LTTSIntegration\Api\UserApi"
=====================================================
---------------- -----------------------------
Option Value
---------------- -----------------------------
Service ID LTTSIntegration\Api\UserApi
Class LTTSIntegration\Api\UserApi
Tags -
Public no
Synthetic no
Lazy no
Shared yes
Abstract no
Autowired yes
Autoconfigured yes
---------------- -----------------------------```
username_0: **Symfony version(s) affected**: 4.1.1
**Description**
This commit made for last SF version `4.1.1` breaks feature brought to in version `4.1.0`.
`Simple service testing` doesn't work anymore. https://symfony.com/blog/new-in-symfony-4-1-simpler-service-testing
https://github.com/symfony/symfony/commit/6764d4e01272b922f6f3c9ab2e3dee2fd19acd43#diff-ce4ed09b11d8fa531159e96df52124f3
**How to reproduce**
```
// class TestX extends \Symfony\Bundle\FrameworkBundle\Test\WebTestCase\WebTestCase
$client = self::createClient();
$container = $client->getContainer();
// Or any private service
$em = $container->get(EntityManagerInterface::class);
```
username_1: Did it work before? Would you be able to provide a reproducer, ie a repository we could clone to reproduce the issue?
username_1: Oh, yes, that's expected behavior: the dump:container shows what's *possible* to use, while the test container gives access to what's effectively used only (because we must clean up unused services to have things work).
Closing therefore, thanks for submitting.
Status: Issue closed
username_0: Great... So in our workflow, where we create `commands` & `handlers` first and don't use them anywhere yet, we can't create tests for them, because everything will fail? We have to create a dummy object with the new `CommandClass` as a type hint just to be able to test it?
Also consider this scenario: we create `ServiceX`, write a unit test for it, and use this service in `ControllerX`. After some time we decide to remove the usage of `ServiceX` in `ControllerX`. Now when we run all our unit tests we will get errors each time. Not because something is wrong with our service, but because it is not used anywhere in the application. What's more, the error will say that the service is not defined/aliased, which is not true, as I defined the alias in `services.yaml`. This service IS defined, so PHPUnit should be able to use it - now it is not, because the service is not used anywhere.
Unit tests should not depend on what is used in the application but on what is defined and allowed to be used.
username_1: There are so many valid reasons for this...
If you want to test unused services, you can and should alias them in your `config_test.yaml` file (or equivalent file configuring the "test" environment.) I'd recommend using a `test.*` prefix in front of their real service name as a convention, and making them public of course.
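For example, something along these lines (a sketch only — the interface name is the one from this thread, and the path of your test-environment config file may differ):
```yaml
# config/packages/test/services.yaml (or your equivalent "test" env config)
services:
    # Public alias with a test.* prefix, so the service stays reachable from
    # tests even when nothing in the app type-hints it yet.
    test.Core\Interfaces\Api\UserApiInterface:
        alias: Core\Interfaces\Api\UserApiInterface
        public: true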
username_0: ```
bin/console debug:container 'Core\Interfaces\Api\UserApiInterface'
// This service is an alias for the service LTTSIntegration\Api\UserApi
```
There IS an alias, and we run tests in `ENV=test`, yet the unit test fails to load the above service as long as it is not type-hinted somewhere in the app. This is the reason why I am still commenting on this topic. I still think this is not proper behavior, especially since all of this worked in `4.1.0` and stopped in `4.1.1`.
username_1: The alias must be public, otherwise it's removed.
username_2: +1 I also think this is wrong behaviour, because it breaks test-driven development. Building service classes, then testing them, and then starting to use them inside other services or controllers is a good pattern, but it does not work this way. And because it's not in the docs, users scratch their heads over why this (good!) feature seems not to be working.
username_3: I was trying to use the Private Services update described in this article:
https://symfony.com/blog/new-in-symfony-4-1-simpler-service-testing
So I suppose that I have the same problem as you; I've been trying to make it work for 2 hours now ...
It would be good to add a note to the article that it was rolled back and is not available for now.
So we still have to manually declare each tested service as public again for tests, then?
username_4: I still get an issue when loading fixtures in tests
services_test.xml
```xml
<service id="RoleFactoryInterface"
         alias="App\Factory\RoleFactoryInterface"
         public="true"
/>
<service id="UserFactoryInterface"
         alias="App\Factory\UserFactoryInterface"
         public="true"
/>
<service id="RoleFixtures"
         alias="App\DataFixtures\RoleFixtures"
         public="true"
/>
<service id="UserFixtures"
         alias="App\DataFixtures\UserFixtures"
         public="true"
/>
```
In the tests' `setUp()`:
```php
$userFactory = $container->get(UserFactoryInterface::class);
$userFixtures = new UserFixtures($userFactory);
$userFixtures->setContainer($container);
$loader->addFixture($userFixtures);

dump(static::$container->has(UserFactoryInterface::class)); // true
dump(static::$container->has('RoleFactoryInterface')); // true
die;
```
Error
`ArgumentCountError: Too few arguments to function App\DataFixtures\RoleFixtures::__construct(), 0 passed in /var/www/test/vendor/doctrine/data-fixtures/lib/Doctrine/Common/DataFixtures/Loader.php on line 193 and exactly 1 expected`
username_5: @username_4 Please open new issues if you think you discovered a bug with some steps that allow to reproduce them.
username_4: @username_5 Not sure if this is a bug. It is kind of a redirection loop, as you can see. Maybe there should be some specific nginx or Symfony configs. https://github.com/symfony/symfony/issues/8257 — I have not tested it yet.
username_5: What I just mean is that we would need some more insight into actual problems to understand what could be improved. And that better fits in a new issue to not get lost. If you think that's something for the documentation, I'd appreciate a new issue in the docs repository. |
junyanz/pytorch-CycleGAN-and-pix2pix | 362390105 | Title: day2night test results look off
Question:
username_0: I followed the process listed and the test results are not looking like what is in the paper, and overall look pretty far off from what I'd expect. I'm wondering if I'm doing something wrong. I've included some images below. Has anyone run into this?

Answers:
username_1: The PyTorch model does not work well for this particular site as it looks quite different from training sites. It works reasonably well for other test sites. For example, this one.

The model is trained with only 90 sites. I think adding more sites might help improve the results.
Also, the PyTorch model was recently trained with the current PyTorch code. The results might be different from the Torch model. Please use the Torch model if you would like to reproduce the results in the paper.
username_2: Where did you get the day2night dataset?
username_1: See the [instruction](https://github.com/username_1/pytorch-CycleGAN-and-pix2pix/blob/master/docs/datasets.md#pix2pix-datasets).
Status: Issue closed
|
pandas-dev/pandas | 208157221 | Title: Indexing MultiIndex with NDFrame indexer fails if index of indexer does not contain 0
Question:
username_0: #### Code Sample, a copy-pastable example if possible
```python
In [2]: d = pd.DataFrame([[1, 1, 3],
...: [1, 2, 4],
...: [2, 2, 5]], columns=['a', 'b', 'c'])
In [3]: d.set_index(['a', 'b']).loc[pd.Series([1, 2])]
Out[3]:
c
a b
1 1 3
2 4
2 2 5
In [4]: d.set_index(['a', 'b']).loc[pd.Series([1, 2], index=[2,0])]
Out[4]:
c
a b
1 1 3
2 4
2 2 5
In [5]: d.set_index(['a', 'b']).loc[pd.Series([1, 2], index=[2,3])]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-5-a23ad44827f6> in <module>()
----> 1 d.set_index(['a', 'b']).loc[pd.Series([1, 2], index=[2,3])]
/home/pietro/nobackup/repo/pandas/pandas/core/indexing.py in __getitem__(self, key)
1339 else:
1340 key = com._apply_if_callable(key, self.obj)
-> 1341 return self._getitem_axis(key, axis=0)
1342
1343 def _is_scalar_access(self, key):
/home/pietro/nobackup/repo/pandas/pandas/core/indexing.py in _getitem_axis(self, key, axis)
1527 if isinstance(labels, MultiIndex):
1528 if (not isinstance(key, tuple) and len(key) > 1 and
-> 1529 not isinstance(key[0], tuple)):
1530 if isinstance(key, ABCSeries):
1531 # GH 14730
/home/pietro/nobackup/repo/pandas/pandas/core/series.py in __getitem__(self, key)
604 key = com._apply_if_callable(key, self)
605 try:
--> 606 result = self.index.get_value(self, key)
607
608 if not is_scalar(result):
/home/pietro/nobackup/repo/pandas/pandas/indexes/base.py in get_value(self, series, key)
2304 try:
2305 return self._engine.get_value(s, k,
-> 2306 tz=getattr(series.dtype, 'tz', None))
2307 except KeyError as e1:
2308 if len(self) > 0 and self.inferred_type in ['integer', 'boolean']:
/home/pietro/nobackup/repo/pandas/pandas/index.pyx in pandas.index.IndexEngine.get_value (pandas/index.c:3992)()
/home/pietro/nobackup/repo/pandas/pandas/index.pyx in pandas.index.IndexEngine.get_value (pandas/index.c:3689)()
[Truncated]
feather: None
matplotlib: 2.0.0rc2
openpyxl: None
xlrd: 1.0.0
xlwt: 1.1.2
xlsxwriter: 0.9.3
lxml: None
bs4: 4.5.1
html5lib: 0.999
httplib2: 0.9.1
apiclient: 1.5.2
sqlalchemy: 1.0.15
pymysql: None
psycopg2: None
jinja2: 2.8
s3fs: None
pandas_datareader: 0.2.1
</details>
Answers:
username_1: ok this looks like a bug. IIRC we had a similar issue (maybe it was yours)?
username_0: There was #14730 (this bug might actually be a side effect of its fix, but I didn't check).
username_1: @username_0 ok, can you add those examples as confirmation tests (they look similar).
username_0: I'm lost... have you seen the PR?
username_1: of course that is where they should be added
username_0: OK, I asked because that issue [_is already_ in the whatsnew note](https://github.com/pandas-dev/pandas/pull/15425/files#diff-52364fb643114f3349390ad6bcf24d8fL502)
So: you suggest adding the examples in this bug report to the tests I already wrote, right? (except for the data content, they are exactly the same...)
username_1: ahh ok then
Status: Issue closed
|
cyu/rack-cors | 299024003 | Title: headers: :any is ignored?
Question:
username_0: Honestly I can't tell if this is a configuration problem on my part or a bug. My efforts at tracking the execution of this code have failed, so I'm not sure. In any case, when I have
```
config.middleware.insert_before 0, Rack::Cors do
allow do
origins %r{\Ahttps://*.myclients.com\z}
resource '*', :headers => :any, :methods => [:post, :options]
end
end
```
I still receive the message:
```
Request header field Content-Type is not allowed by Access-Control-Allow-Headers in preflight response.
```
I've also changed the value of `:headers` to be `['Content-Type', 'Authorization']` and receive the same message.
What's my likely mistake? Is this a bug?
Answers:
username_0: On digging into it, I think I see at least one bug: In `Rack::Cors::Resources#allow_origin?` there is the following block of code:
```
def allow_origin?(source,env = {})
return true if public_resources?
return !! @origins.detect do |origin|
if origin.is_a?(Proc)
origin.call(source,env)
else
origin === source
end
end
end
```
Status: Issue closed
username_1: Running into the same problem here. What was your solution to it? |
CoreWCF/CoreWCF | 817403664 | Title: ServiceKnownType supported?
Question:
username_0: CoreWCF 0.1.0
As I could not get the callback channel to work (#318) I tried a polling model instead.
```
[ServiceContract (SessionMode = SessionMode.Required)]
public interface IMessagePollingContract {
[OperationContract]
[ServiceKnownType (typeof (Message1))]
[ServiceKnownType (typeof (Message2))]
[ServiceKnownType (typeof (Message3))]
MessageBase PullMessage ();
}
```
The message classes:
```
[DataContract]
public class MessageBase {
}
[DataContract]
public class Message1 : MessageBase {
[DataMember]
public int Number { get; set; }
}
[DataContract]
public class Message2 : MessageBase {
[DataMember]
public double Double { get; set; }
}
[DataContract]
public class Message3 : MessageBase {
[DataMember]
public string Text { get; set; }
}
```
The service in this example does this:
```
[ServiceBehavior (InstanceContextMode = InstanceContextMode.PerSession)]
public class MessagePollingService : IMessagePollingContract {
private int _invocationCount = 0;
public MessageBase PullMessage () {
Thread.Sleep (1000);
_invocationCount++;
int k = _invocationCount % 4;
MessageBase msg = k switch {
1 => new Message1 { Number = _invocationCount },
2 => new Message2 { Double = _invocationCount },
3 => new Message3 { Text = _invocationCount.ToString () },
_ => new MessageBase (),
};
return msg;
}
[Truncated]
This works well again with the .Net Framework implementation of the WCF server, but with CoreWCF the client throws an exception on the first PullMessage call.
```
System.ServiceModel.ProtocolException: 'Error while reading message framing format at position 2 of stream (state: Start)'
Inner Exception:
InvalidDataException: More data was expected, but EOF was reached.
```
The trivial case, all derived classes commented out, does work:
```
MessageBase msg = k switch {
//1 => new Message1 { Number = _invocationCount },
//2 => new Message2 { Double = _invocationCount },
//3 => new Message3 { Text = _invocationCount.ToString () },
_ => new MessageBase (),
};
```
Entire example:
[MessagePolling.zip](https://github.com/CoreWCF/CoreWCF/files/6050139/MessagePolling.zip)
Answers:
username_1: @birojnayak will take a look after he has finished TransportWithMessageCredentials work for NetTcp.
username_0: That would be great.
Interestingly, what does work with 0.1.0 is to put the polymorphic type into a container and use `KnownTypeAttribute`:
```
namespace MessagePollingCommon.std {
[DataContract]
public class MessageBase {
}
[DataContract]
public class Message1 : MessageBase {
[DataMember]
public int Number { get; set; }
}
[DataContract]
public class Message2 : MessageBase {
[DataMember]
public double Double { get; set; }
}
[DataContract]
public class Message3 : MessageBase {
[DataMember]
public string Text { get; set; }
}
[DataContract]
[KnownType (typeof (Message1))]
[KnownType (typeof (Message2))]
[KnownType (typeof (Message3))]
public class MessageContainer {
[DataMember]
public MessageBase Message { get; set; }
}
```
```
public interface IMessagePollingContract {
[OperationContract]
MessageContainer PullMessageContainer ();
}
``` |
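For completeness, a sketch of what the service side can look like under this workaround — illustrative only, reusing the classes above rather than the full sample project:
```csharp
// Hedged sketch: return the polymorphic message wrapped in the container, so
// serialization relies on [KnownType] on MessageContainer instead of
// [ServiceKnownType] on the operation.
public class MessagePollingService : IMessagePollingContract {
    private int _invocationCount = 0;

    public MessageContainer PullMessageContainer () {
        _invocationCount++;
        int k = _invocationCount % 4;
        MessageBase msg = k switch {
            1 => new Message1 { Number = _invocationCount },
            2 => new Message2 { Double = _invocationCount },
            3 => new Message3 { Text = _invocationCount.ToString () },
            _ => new MessageBase (),
        };
        return new MessageContainer { Message = msg };
    }
}
```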
Kinto/kinto.js | 178503162 | Title: creating typings to integrate with typescript
Question:
username_0: I think this would explain it better than I can: http://stackoverflow.com/questions/34590168/what-are-typescript-typings
Answers:
username_1: How would it look like?
username_0: I think this would explain it better than I can: http://stackoverflow.com/questions/34590168/what-are-typescript-typings
username_1: Ok I see. Well if you're interested in contributing *and maintaining* such a thing, we can give it a go! But currently I don't think we'll spend some time on this since I haven't heard anybody in the team coding with TypeScript or Angular 2. @username_2 maybe?
username_2: It is a good idea, as it will not take so long to implement a TypeScript typings file descriptor as they are easy to write. See those kind of examples: https://github.com/typings/typings/blob/master/docs/examples.md
Is it useful? I really don't know, but it could be a quick win.
Two possibilities:
- bundling with kinto.js and/or kinto-http.js npm packages ;
- publishing to the [@types organization](https://www.npmjs.com/~types) on npm.
I suggest the latter.
@username_0 do you think you can start a project to create the kinto-typings package?
@username_1 does it look good to you?
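To make "easy to write" concrete, here is a tiny illustrative skeleton of what such a declaration file could start from (the API shape below is approximate, not kinto.js' real surface — the real typings would need to follow the actual options and collection API):
```typescript
// kinto.d.ts — minimal, illustrative skeleton only.
declare module "kinto" {
  // Approximate option names; the real library exposes more settings.
  export interface KintoOptions {
    remote?: string;
    bucket?: string;
    headers?: { [key: string]: string };
  }

  export interface Collection<T = any> {
    create(record: Partial<T>): Promise<{ data: T }>;
    list(): Promise<{ data: T[] }>;
    sync(): Promise<any>;
  }

  export default class Kinto {
    constructor(options?: KintoOptions);
    collection<T = any>(name: string): Collection<T>;
  }
}
```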
username_0: I think it is a bit tricky; from reading around, it is encouraged that definitelytyped.org provide typings for libraries instead of library authors. Since this is not a strict rule, a lot of authors have gone ahead and created typings for their libs.
I think we should take a moment to read this thread, however, before we decide what to do. It's a similar discussion for pouchdb, and a lot of benefits and cons were highlighted: https://github.com/pouchdb/pouchdb/issues/5389
username_0: Now maybe this should go in a different issue, but I tried using browserify to load kinto.js and couldn't for an Angular 2 / Ionic 2 project. `declare var require: any; const db = require('pouchdb')` works for pouchdb but wouldn't find kinto.js.
I tried `import {kintojs} from 'kinto'` and `import * as kinto from 'kinto'`; both failed.
username_1: `import Kinto from "kinto";` should work.
username_2: I had started the work, still in progress. username_0, you can participate if you want to. Any help will be very appreciated.
→ https://github.com/username_2/kinto.js.d.ts
username_0: cool!
username_3: I am very happy to see typings being created for Kinto. I would also be interested in helping out, although I am not entirely sure about the details.
Is my understanding correct that currently the best way to install the typings is
`typings install -GS github:username_2/kinto.js.d.ts/kinto.d.ts` ?
username_2: @username_3 I am a beginner with the typings ecosystem.
`typings` seems to be deprecated within TypeScript 2.0 → http://www.typescriptlang.org/docs/handbook/declaration-files/consumption.html
username_3: I am relatively certain that `typings` are still needed. It has just become much easier to consume typings for non-typescript projects. I will be looking into this further.
username_3: Feel free to check out https://github.com/username_3/kinto.js.d.ts
This is a concatenation of the declaration files generated by the typescript compiler from my typescript port of Kinto at https://github.com/username_3/kinto.js
Feel free to try / enhance
username_2: Much more complete, and it may have been easier to generate the declaration file from a typescript implementation of the library. Very good idea.
username_4: I am able to build TypeScript code that uses PouchDB now. [@types/pouchdb](https://www.npmjs.com/package/@types/pouchdb) seems to be working flawlessly for me now. Should this issue be closed?
username_1: We might come back on Typescript some day, so I'd rather leave it open in case someone is interested to take over :)
username_5: Linking issue from DefinitelyTyped in case there is any movement on that end. DefinitelyTyped/DefinitelyTyped#11660
username_6: I would really like to use this; unfortunately, the code in this repo seems not to work properly on Ionic projects.
There is a project called dts-gen that makes a starting point for typings from JS code, but even with that, webpack couldn't resolve some dependency.
When I simply include the JS file for kinto.js in `index.html`, the app crashes with the message
`t[v].resolve is not a function`
I think the problem is that babel provides a polyfill and there are some ES6 features that TS does not have yet.
With a pure Angular project Kinto.js works as expected.
username_7: Hi All, I have created a hand-written definition based on Kinto.js 11.1.2. The plan is to eventually upload these to DefinitelyTyped. However, before I do that, I would like to ask for some help.
The specific URL for the repository is https://github.com/username_7/DefinitelyTyped/tree/master/types/kinto/ .
- [ ] Please take a look and let me know if I am completely off-base here. I've tried to make things like `options` more specific based on usage I saw in the raw code.
- [ ] Please help me understand how to write tests for this. Should I try to translate the JS tests into TS? Or is there another strategy I should use? I honestly don't know how to proceed on this.
One disclaimer: I do not really know Javascript or Typescript and have never programmed in either. I am also not truly aware of the best practices in either (except what little I read and from seeing definitions for other projects). So if I'm doing something crazy, please do not hesitate to let me know - preferably in the linked repository or in this thread.
Any assistance is greatly appreciated.
username_1: @username_7 awesome!
Unfortunately, I am unable to review this, I'm completely alien to typescript...
@username_0 @username_6 @username_4 would you still be interested in looking at this?
If there's anything to be changed in kinto.js itself, I'd glad to help :)
username_6: Of course, I see a fun job for my free time :)
I'm a fan of this project, haven't used it yet in production.
I like emergent technologies; this moves us forward.
This weekend I'll run some tests, make improvements if possible, and, as ever, report back.
username_7: If anyone wants collaborator privileges into the other repository, just let me know. I'm eager to receive any help I can get.
username_8: As I understand it, DefinitelyTyped tests are not run, merely compiled. The "test pass" is whether or not the test can be compiled. As such, they're supposed to cover all of the use cases (type-wise) of the various classes/interfaces.
username_7: @username_8 Would greatly appreciate any feedback and changes that you may have.
As for the testing, based on your statement, I'll wait to write anything until the basics of the `.d.ts` file are solid.
username_7: @username_8 I know it's been a while, but do you have any changes or feedback on the specification itself? If you have an updated copy, I would love to put that into the repository and submit it to DefinitelyTyped.
username_7: Just updated the typings to 12.2.0 (at least, that's what is published on NPM and I used the latest docs to make the update).
username_1: Thanks for keeping up!
What would you like us to do?
I'm not really planning to maintain the types alongside the lib myself, but maybe better coordination could help..
Do you have ideas?
username_7: Hi, I posted here more as an FYI than anything, since this is where the definition effort started.
My current goal is to write a series of tests that are similar to the tests that you use today. The goal is to determine whether the bindings are functioning as expected.
If everything goes okay, I will be submitting a pull request to DefinitelyTyped. At that point, the only thing to do would be to point people in that direction if they are looking for typescript bindings.
Until I can submit that PR, I would ask that this issue remain open so that any interested parties can find it.
Once the bindings are in DT, maybe we could talk about what type of coordination would be best to keep them up to date? |
MicrosoftDocs/azure-docs | 898125758 | Title: More Details on What Does and Doesn't Work with NTFS ACLs
Question:
username_0: Would it be possible to get more details on AD configurations that won't work with ANF?
- Domain Local groups used in ACLs doesn't appear to work.
- If a user is located in domain A and the computer account is in domain B, I don't think that works. There are some cross-domain access situations that don't work with ANF.
- What access patterns across two way trusts and one way trusts do or don't work.
- Perhaps a list of known AD configurations that do not work? Something like that. There are a few and they don't seem to be documented.
Thanks!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: fc5f5a0e-0ac8-6c79-c13a-5857505314cd
* Version Independent ID: 2e84acd0-241e-1aca-71f9-98bd9dd92b63
* Content: [Create an SMB volume for Azure NetApp Files](https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-create-volumes-smb)
* Content Source: [articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md)
* Service: **azure-netapp-files**
* GitHub Login: @username_1
* Microsoft Alias: **username_1**
Answers:
username_1: @username_0 - The technical team is making several additional changes related to Issue #75337 you brought up. I assume you are an ANF team member. If so, can you reach out through email <EMAIL> or <EMAIL>, so that I can connect you with the team for discussion? (GitHub issues are public and might not be appropriate for team-internal discussions.) Thanks.
username_1: @username_0 - Win, thanks again for reaching out to request this info. Per the discussion with the subject matter experts, it appears that further and thorough testing and investigation will be needed, and it will take time. I've included you in the email. We'll use the email thread for further discussion. For now, I'm closing this GitHub issue. Thanks.
#please-close
Status: Issue closed
|
information-artifact-ontology/ontology-metadata | 294836357 | Title: ensure BFO bugs are addressed
Question:
username_0: _From @GoogleCodeExporter on August 12, 2015 2:2_
```
May be a bit of a stretch re the role of the technical group, but it would be
great if we could either push or implement ourselves some of the changes that
need to be done. I am specifically referring to
http://code.google.com/p/bfo/issues/detail?id=119 which is a blocking issue.
```
Original issue reported on code.google.com by `<EMAIL>` on 8 Nov 2012 at 10:37
_Copied from original issue: OBOFoundry/Operations-Committee-RETIRED#23_
Answers:
username_0: _From @GoogleCodeExporter on August 12, 2015 2:2_
```
From technical WG call 20121109
Take the metadata out of IAO hierarchy and embed into obom.owl (obo metadata).
This will make it easier to not import IAO as a whole by default and will
reduce compatibility issues.
```
Original comment by `<EMAIL>` on 9 Nov 2012 at 6:13
username_0: _From @GoogleCodeExporter on August 12, 2015 2:2_
```
Great idea. Does this help get you started? I did this already for the mental
functioning ontology because getting the IAO class hierarchy was interfering
with my need to show the core MF ontology content to domain experts.
```
Original comment by `<EMAIL>` on 9 Nov 2012 at 8:17
Attachments:
- [ontology-metadata-slim.owl](https://storage.googleapis.com/google-code-attachments/obo-foundry-operations-committee/issue-23/comment-2/ontology-metadata-slim.owl)
username_0: _From @GoogleCodeExporter on August 12, 2015 2:2_
```
<NAME>!
The obom.owl file is at http://purl.obolibrary.org/obo/obom.owl
I removed the IAO import, and am planning on submitting what we need to IAO. I
also positioned the foaf/doap elements under the bfo hierarchy, i.e., under
information content entity
A good entry point is the Project class, which has an instance "AERO project"
which then links to the rest: ontology file, contact etc
```
Original comment by `<EMAIL>` on 9 Nov 2012 at 8:57
username_0: From technical WG call 20121109
Take the metadata out of IAO hierarchy and embed into obom.owl (obo metadata)
The obom.owl file is at http://purl.obolibrary.org/obo/obom.owl (not working now).
The one we are using now is:
http://purl.obolibrary.org/obo/iao/ontology-metadata.owl
Status: Issue closed
|
Mabonzo/prj-rev-bwfs-dasmoto | 241491140 | Title: Summary: Exceeds Expectations
Question:
username_0: Great job on this project - I really loved your addition of a footer, as well as your addition of a background texture. In terms of improvement, I recommend removing the font-weight:bold on your h1s and h2s, as well as implementing IDs for the section headings.
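For example, roughly (the selectors, ids and colour below are illustrative — adapt them to the page's actual markup):
```css
/* Sketch: target section headings via ids added in the HTML,
   e.g. <h2 id="brushes">Brushes</h2>, and omit font-weight: bold,
   since h1/h2 are already bold by default. */
#brushes, #frames, #paint {
  background-color: mediumspringgreen; /* hypothetical per-section colour */
}

h1, h2 {
  font-family: Helvetica, sans-serif; /* keep the font, drop the font-weight rule */
}
```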
If you’re in the mood for a challenge, I would look into adding more items under each section and finding a nice way to lay them out. Here's a link to get you started - https://www.w3schools.com/css/css_rwd_grid.asp. Feel free to reach out and let us know if you have any questions, good luck!
Status: Issue closed
Answers:
username_1: I want to leave this project as is, as a milestone to look back on, but I will try to use more Responsive Web Design techniques as I go through the *CodeAcademy* course. Thank you for the comments and critiques! |
mperham/connection_pool | 464696493 | Title: Poor behavior when the maximum number of connections have been opened
Question:
username_0: The `@resource` condition variable is waited on in a block depending on the same mutex that is used to notify the condition variable.
Unless I'm mistaken, if that condition variable ever gets waited on, this means it'll block every thread for the full timeout, including the threads returning connections to the pool, which could have notified the waiting thread instead.
https://github.com/mperham/connection_pool/blob/master/lib/connection_pool/timed_stack.rb#L87
Answers:
username_0: Never mind — according to Ruby's documentation this should be fine: `ConditionVariable#wait` releases the mutex while the thread sleeps and re-acquires it before returning, so threads returning connections can still take the lock and signal the waiter.
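For anyone else who wonders, the pattern boils down to standard-library behaviour (an illustrative snippet, not connection_pool's actual code):
```ruby
require 'thread'

mutex    = Mutex.new
resource = ConditionVariable.new

consumer = Thread.new do
  mutex.synchronize do
    # wait releases `mutex` while sleeping and re-acquires it before returning,
    # so other threads can enter their own synchronize block and signal us.
    resource.wait(mutex, 5)
    puts "woken up (or timed out)"
  end
end

Thread.new do
  mutex.synchronize { resource.signal } # not blocked by the waiting consumer
end

consumer.join
```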
Status: Issue closed
|
ambethia/recaptcha | 539899520 | Title: Getting the v3 humanity score
Question:
username_0: but I can't get the score, only a boolean.
Is there any way to get the actual number? There is another gem, new_google_recaptcha that can report the score to the controller.
Regards and thanks in advance.
Answers:
username_1: need to call api_verification manually like done here https://github.com/ambethia/recaptcha/blob/43c1a5dd460c6ee2abf4dd6d59775a4f79cd0fd7/lib/recaptcha.rb#L67
PR to optionally return the full response instead of true would also be welcome since that was asked for before
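In practice that manual call looks roughly like the following (a sketch — the token helper, action name and threshold are placeholders, and the `success`/`score` keys are what Google's v3 verify endpoint returns in its JSON response):
```ruby
# Rough sketch of reading the v3 score by calling the verification API directly.
def verify_with_score(action, minimum_score: 0.5)
  result = Recaptcha.api_verification({
    secret: Recaptcha.configuration.secret_key,
    response: recaptcha_response_token(action) # placeholder for your own param lookup
  })

  # result is the parsed JSON from Google, e.g. "success", "score", "action", ...
  result['success'] && result['score'].to_f >= minimum_score
end
```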
username_0: Hi,
I didn't do a PR, but I think maybe [the code in this gist](https://gist.github.com/username_0/c4d08377d5df5ce6b2540358946e16a3) could be added to your project.
Is it helpful?
I used verify_recaptcha as skeleton. In my project, I'm currently using what you suggested as:
`Recaptcha.api_verification({secret: Recaptcha.configuration.secret_key, response: recaptcha_response_token("contact")})`
and it works fine.
Thanks and regards.
username_1: Looks about right. A PR would be nice :)
username_2: I have started working on it in #358, would love your opinion on it so we can make it happen !
username_1: 5.5.0
Status: Issue closed
|
spree-contrib/spree_braintree_vzero | 176433262 | Title: When submitting a payment for capture (settle) that was previously authorized, the cvv_response_code is wiped out
Question:
username_0: Steps to Reproduce:
- When you set up BT, make sure Auto-Capture is set to OFF
- Go through checkout with btree vzero hosted fields checkout
- In your database, look at the Payment record you just created. Confirm that it has a correct value in the cvv_response_code field (usually 'M')
- Once you submit your transaction, go into the Spree Admin interface, find your transaction (in 'pending' state), and click 'Settle' next to it on the Payments screen
- In the database, you will note that the field cvv_response_code is now wiped out (overwritten) by the capture transaction
Technically this bug is due to an interaction between this gem and the Spree core code itself.
This is a partial log output using AR Query trace that is showing me the full callstack of where the errant UPDATE query is called from
https://gist.github.com/username_0/b53ecc58866bdc36508d00a2719ff472
Note that ```spree_braintree_vzero-5c557054d0cc/app/models/spree/payment/processing_decorator.rb:15``` calls into the Spree core code, specifically this line of code:
https://github.com/spree/spree/blob/2-4-stable/core/app/models/spree/payment/processing.rb#L150
This whole part of Spree's code is unfortunate because Braintree has no field for ```avs_response``` (it has 3 other fields for AVS information), so it seems to me that a cleaner way to do this is to delegate to the payment method and ask it for the information that is needed.
Anyway, this should definitely be treated as a bug because I don't want my CVV values wiped out in my database.
Answers:
username_0: fixed with https://github.com/spree-contrib/spree_braintree_vzero/pull/98
Status: Issue closed
|
duzun/URL.js | 938250062 | Title: URL.js Version Bump
Question:
username_0: Hello - we are in need of a version bump in order to resolve an internal CVE. Are there any plans to update URL.js?
Thank you!
Answers:
username_1: Hello @username_0
I don't have any short term plans, but long term I intend to convert the code to ESM or even TS.
Do you have any ideas for improvements?
username_1: @username_0 done, increased the version to `v0.2.6` - the last version of v0.
Status: Issue closed
|
HaliteChallenge/Halite-III | 372222092 | Title: Segmentation fault when a collision coincides with out-of-fuel?
Question:
username_0: This is hard for me to verify, but I was getting a string of these segfaults this morning. They seemed to happen immediately after statements like this:
```
[warn] [31] [P1] entity 0 was directed to use 1 halite to move west, but only 0 halite was available
[warn] [31] [P0] entity 1 was directed to use 1 halite to move east, but only 0 halite was available
[warn] [31] [P0] owned entities 5, 1 collided on cell (9, 16) as the result of moves on this turn
[warn] [31] [P1] owned entities 4, 0 collided on cell (22, 16) as the result of moves on this turn
run_game.sh: line 6: 18498 Segmentation fault: 11 ./halite --replay-directory replays/ -vvv --width 32 --height 32 "java -jar build/libs/halite3-6.jar" "java -jar build/libs/halite3-6.jar"
```
I added some more protection around fuel management, and the errors went away.
Answers:
username_1: Can you by any chance email the copy of the bot that reproduces the segfault to <EMAIL>?
username_0: Done. FYI -- my latest version doesn't collide or run out of fuel, and it's still segfaulting.
username_2: I believe this is a duplicate of #22, after investigating.
username_0: Thanks, that helps a lot to at least work around on my end.
Status: Issue closed
|
usefathom/fathom | 613542922 | Title: Is a self-hosted version of fathom GDPR compliant
Question:
username_0: I am planning to use a self hosted fathom instance. My server will run in Germany and I am having some doubts about Fathom's use of cookies and tracking policies. For my company it is very important that we can offer our customers a clean website on which they do not need to agree or disagree with any cookie consent policies. On fathom's website (https://dev.to/hmhrex/a-comparison-of-the-top-3-privacy-focused-analytics-platforms-209m), it is written that fathom does not use cookies of any kind.
I know that they don't for the paid product, but how is this for the open source code?
From what I can read from this discussion on github: https://github.com/usefathom/fathom/issues/40, Fathom does indeed use a tracker cookie. Granted, only for 30 minutes, but it's still a cookie. So I am not sure anymore if I need a cookie policy to be completely GDPR compliant. So, my question is, if there is a way to use a self hosted Fathom instance, compliant with GDPR norms, and not have a cookie consent policy for the user.
I would be very grateful for any clarification provided. Thank you very much in advance!
Status: Issue closed
Answers:
username_1: GDPR != not using cookies.
We believe that Fathom Lite is GDPR compliant but it isn't PECR Compliant (UK regulation) since it uses cookies.
username_0: Thank you for the clarification! |
nimbella/nimbella-cli | 789240312 | Title: nim project create xxx --config fails to deploy due to invalid key in project.yml
Question:
username_0: ## steps to reproduce
```
$ nim project create project_from_config --config
$ nim project deploy project_from_config
Error: Invalid key 'package' found in 'action' clause in project.yml
```
Here is the created `project.yml` =>
```yml
targetNamespace: ''
cleanNamespace: false
bucket: {}
parameters: {}
packages:
- name: default
shared: false
clean: false
environment: {}
parameters: {}
annotations: {}
actions:
- name: hello
clean: false
binary: false
main: ''
runtime: 'nodejs:default'
web: true
webSecure: false
parameters: {}
environment: {}
annotations: {}
limits: {}
package: default
```
The last line (`package: default`) is causing the issue. The error is thrown from here: https://github.com/nimbella/nimbella-cli/blob/master/deployer/src/util.ts#L305
The sample config created comes from here: https://github.com/nimbella/nimbella-cli/blob/master/src/generator/project.ts#L130
**I'm assuming the `package: default` line is accidental and can be removed from the config template generation?**
If so - I'll submit a PR. It'll be worth maybe adding a test for this command in future.
Answers:
username_1: Yes, that line can be removed. But, I caused myself this problem by not labelling package as an optional field here: https://github.com/nimbella/nimbella-cli/blob/6249ba4188152a0158d556563361cf946bd95181/deployer/src/deploy-struct.ts#L52. This caused a type error when the field was not present so I (thoughtlessly) filled it in. In fact, the field is required in later phases of deployment but is not required (and, as you discovered, can't be specified) in project.yml.
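i.e. something along these lines in the deploy structs (a sketch — apart from the `package` member itself, the interface name and other members here are illustrative, not the file's exact contents):
```typescript
// deploy-struct.ts (sketch): mark `package` optional so generating/parsing
// project.yml doesn't force the field, while later deployment phases can
// still fill it in.
export interface ActionSpec {
    name: string
    // ...other existing members elided...
    package?: string   // set during deployment, not user-specified in project.yml
}
```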
Looking forward to the PR.
username_1: @username_0 since you offered to fix this, I'm assigning the issue to you. Please call on me for any help you need getting started with nimbella cli modifications. Or, if that's not what you bargained for, let me know.
Status: Issue closed
username_1: The usual practice I've followed is to keep issues open (with a final comment indicating that they are resolved) until the change appears in a Nimbella stable version.
username_1: Assigning to myself as a reminder to include it in the change log for the next release and close it then.
Status: Issue closed
username_1: In stable version 1.12 |
polimediaupv/paella | 539319347 | Title: Tabindex omitted from all plugins buttons except volume in UPV6x (re: Feb 4 commit)
Question:
username_0: # Describe the bug
UPV Paella 5x assigned a tabindex to plugins via the getIndex() method so the tab key could be used to navigate them. In UPV Paella 6x (as of the Feb 4th commit), only plugins that have a getAriaLabel() method have their tabindex set. The volume plugin is the only plugin with that new method. Other plugins, like flex skip forward, flex skip back, video qualities, fullscreen, etc., do not have getAriaLabel() and their buttons are not given a tabindex.
The playbackBar has a tabindex because it is manually assigned one in 06_ui_controls.js
# To Reproduce
Steps to reproduce the behavior:
1. Go to a Paella 6x video (https://paellaplayer.upv.es/paella/player-6.3.2/?id=belmar-multiresolution)
2. Click on tab key at least 4 times.
3. Notice that the tab index cycles between browser URL input box, player playback bar, and player volume button. But, not to any of the other control bar controls.
# Workaround (if any)
Use the mouse to navigate control bar plugin controls.
# Environment Information
- OS: MacOS
- Browser: Safari, Chrome, FF
- Paella version 6x
# Additional context
I'm adding a getAriaLabel() method to our site's custom control bar plugins so they are assigned a tabindex by the plugin manager. Is there any reason why I would not want to do this?
Answers:
username_1: This behavior is implemented that way on purpose. The idea is to improve usability for the visually impaired by hiding plugins that are not relevant to them.
We have not carried out usability tests in this sense, but until we have more information, the current behavior seems much more adequate to us. However, we could study the possibility of making the ariaLabel parameter configurable.
username_0: Hi @username_1 enabling plugins to be tabindex accessible via configuration makes sense. Then each site can decide what makes sense for their clients.
I suspect that tabindex-enabled controls assist more people than the vision impaired, as I've known paraplegics who relied heavily on keyboard navigation. I also started using tab navigation, and noticed this change, when I got tired of using the mouse. However, I can see how too much tabindexing could annoy vision-impaired people.
Status: Issue closed
|
RubyFloripa/jobs | 1168970952 | Title: [Remote] Ruby on Rails Developer at C2S - Contact2sale
Question:
username_0: ## Job description:
This is a job opening from a partner of the Coodesh platform; by applying you will have access to the complete information about the company and its benefits.
Watch for the redirect that will take you to a [https://coodesh.com](https://coodesh.com/vagas/ruby-on-rails-developer-220852451?utm_source=github&utm_medium=RubyFloripa-jobs&modal=open) url with the personalized application pop-up. 👋
Contact2Sale is looking for a **Ruby on Rails Developer** to join its team!
We believe that simple solutions are the best ones. They are more intuitive, easier to use and consequently more effective! But, unlike what people generally think, simplicity is not the first step. To reach a simple and effective solution, you need to deeply understand the process and the complexity.
The responsibilities are tied to development focused on web and mobile systems that use React and React Native (expo).
In addition, you will also help evolve and rebuild some screens of the legacy system using Ruby on Rails, HTML and jQuery.
###### Responsibilities:
- Code review
- Working together with the technology team
- Anticipating technical problems and suggesting solutions
- Working on improving system performance
- Developing new features with the development and product teams
- Running POCs to validate concepts and new features
## C2S - Contact2sale:
We believe that simple solutions are the best ones. They are more intuitive, easier to use and consequently more effective! But, unlike what people generally think, simplicity is not the first step. To reach a simple and effective solution, you need to deeply understand the process and the complexity.
In 2016, <NAME> and <NAME> founded the C2S (Contact2sale) lead manager. Both were executives at large companies such as: Apple, Bain & Co, <NAME>, Telefônica, Lexmark and ImovelWeb. [See more on the site](https://coodesh.com/empresas/c2s-contact2sale)
## Skills:
- Ruby
- Ruby on Rails
- MySQL
- PostgreSQL
- Docker Compose
- Docker
## Location:
100% Remote
## Requirements:
- Be curious and always be researching new technologies;
- Think architecturally, always keeping scale and future evolution in mind;
- Have the ability to develop systems from scratch and also evolve existing ones;
- Enjoy writing clean code and simple solutions
- Knowledge of GIT;
- Advanced knowledge of Ruby on Rails
- Knowledge of or experience with background jobs (Resque or Sidekiq);
- Knowledge of SQL
## Nice to have:
- ElasticSearch
- Sidekiq
- Knowledge of AWS infrastructure
- Firebase Authentication and Realtime Database
- Docker
## Benefits:
- Meal allowance
- Health plan
- Home-office allowance
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Ruby on Rails Developer at C2S - Contact2sale](https://coodesh.com/vagas/ruby-on-rails-developer-220852451?utm_source=github&utm_medium=RubyFloripa-jobs&modal=open)
After applying via the Coodesh platform and validating your login, you will be able to follow and receive all interactions of the process there. Use the **Pedir Feedback** (request feedback) option between one stage and the next of the job you applied to. This will notify the **Recruiter** responsible for the process at the company.
## Labels
#### Allocation
Remote
#### Contract type
PJ (contractor)
#### Category
Back-End
matplotlib/matplotlib | 140300844 | Title: polar plot incorrectly handles units of degrees/radians
Question:
username_0: # Setup
`matplotlib.__version__`= 1.5.1 (installed as part of anaconda distribution)
Python 3.4.4
OS X
# Issue
Code from `examples/units/radian_demo.py` with minor modifications:
```
import numpy as np
from basic_units import radians, degrees, cos
from matplotlib.pyplot import figure, show
x = [val*radians for val in np.arange(0, 15, 0.01)]
fig = figure()
ax = fig.add_subplot(121, projection='polar')
line1, = ax.plot(x, np.array(cos(x))+1, xunits=radians)
ax = fig.add_subplot(122, projection='polar')
line2, = ax.plot(x, np.array(cos(x))+1, xunits=degrees)
show()
```

# Notes
Note that the plot on the left is correct, but the angular tick labels aren't incrementing correctly. I can't tell if this is an issue with the basic_units.py example implementation or the units interface.
If it seems like a straightforward fix, I'd be happy to take a look at it: the units interface is really slick and it would be great to get this working with polar coordinates too.
Answers:
username_1: Probably related to #4905. |
Kriechi/aws-s3-reverse-proxy | 612873833 | Title: URL Encoding causes signature to be incorrect
Question:
username_0: A URL that is coming in as `http://myproxy.com/year=2020/month=04/day=28/hour=20/test.txt` is being changed to `http://myproxy.com/year%3D2020/month%3D04/day%3D28/hour%3D20/test.txt`, causing the signature to be incorrect. I corrected this in my copy of **handler.go** by adding the line `proxyURL.RawPath = *&req.URL.Path` in the **assembleUpstreamReq** function.
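For context, the change described amounts to something like this in the upstream-request assembly (a sketch only — apart from the quoted line, the function signature and variable names are illustrative, not the repository's exact code):
```go
package proxy

import (
	"net/http"
	"net/url"
)

// Sketch of the kind of fix described above: copy the incoming path into
// RawPath as well, so '=' in keys like "year=2020" is not re-escaped to
// %3D when the upstream URL is serialized, which would break the signature.
func assembleUpstreamReq(upstream url.URL, req *http.Request) *http.Request {
	proxyURL := upstream
	proxyURL.Path = req.URL.Path
	proxyURL.RawPath = req.URL.Path // keep the original, already-valid encoding

	proxyReq := req.Clone(req.Context())
	proxyReq.URL = &proxyURL
	proxyReq.Host = proxyURL.Host
	return proxyReq
}
```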
Status: Issue closed
Answers:
username_1: Thanks for reporting this! I needed to make a small additional change - see c753875.
Please let me know if this fixes your problem in your environment & use case! |
atomist/sdm-pack-aspect | 485447060 | Title: Code Inspection: npm audit on add-node-engine-aspect
Question:
username_0: ### lodash:<4.17.12
- _(error)_ [Prototype Pollution](https://npmjs.com/advisories/1065) _Update to version 4.17.12 or later._ - [CVE-2019-10744](https://nvd.nist.gov/vuln/detail/CVE-2019-10744)
- `lodash:4.17.11`:
- `@atomist/automation-client>graphql-code-generator>graphql-codegen-core>graphql-toolkit>lodash`
- `@atomist/automation-client>graphql-codegen-core>graphql-toolkit>lodash`
- `@atomist/automation-client>graphql-codegen-typescript-client>graphql-codegen-core>graphql-toolkit>lodash`
- `@atomist/automation-client>graphql-codegen-typescript-client>graphql-codegen-plugin-helpers>graphql-codegen-core>graphql-toolkit>lodash`
- `@atomist/automation-client>graphql-codegen-typescript-client>graphql-codegen-typescript-common>graphql-codegen-plugin-helpers>graphql-codegen-core>graphql-toolkit>lodash`
- `@atomist/automation-client>graphql-codegen-typescript-common>graphql-codegen-plugin-helpers>graphql-codegen-core>graphql-toolkit>lodash`
- `@atomist/automation-client>graphql-codegen-typescript-server>graphql-codegen-typescript-common>graphql-codegen-plugin-helpers>graphql-codegen-core>graphql-toolkit>lodash`
- `@atomist/automation-client>graphql-codegen-typescript-client>graphql-codegen-typescript-common>graphql-codegen-core>graphql-toolkit>lodash`
- `@atomist/automation-client>graphql-codegen-typescript-common>graphql-codegen-core>graphql-toolkit>lodash`
- `@atomist/automation-client>graphql-codegen-typescript-server>graphql-codegen-typescript-common>graphql-codegen-core>graphql-toolkit>lodash`
- `@atomist/automation-client>graphql-code-generator>graphql-toolkit>lodash`
[atomist:code-inspection:add-node-engine-aspect=@atomist/atomist-sdm]
Answers:
username_0: Issue closed because branch `add-node-engine-aspect` was deleted.
Status: Issue closed
|
greiman/SSD1306Ascii | 868409000 | Title: Atmega328PB SSD1306AsciiAvrI2c issues
Question:
username_0: FYI - some end user feedback for you.
I have had huge stability problems when using SSD1306AsciiAvrI2c.h
Display starts, then randomly everything locks up. I tried a wide range of speeds using oled.setI2cClock, which did prove some speeds within a range worked better than others.
Changed code to use Wire SSD1306AsciiWire.h and all issues resolved EXCEPT now have low memory. But it does work.
Answers:
username_1: Wire has lots of timeouts and resets so it tends not to hang with marginal devices. There are two places in AvrI2c that could hang and I plan to add a timeout and reset to these. Unfortunately I don't have a marginal device to verify the fix.
I suspect most devices hang on an I2C stop, so a reset would probably cause limited glitches since it may occur after all data is sent to the device.
username_1: I replaced AvrI2c.h with one that has time-outs and some other mods. It shouldn't hang in a loop waiting for an I2C command to complete.
If there are glitches in the display we can pursue these. Debug prints can be enabled in [AvrI2c.h](https://github.com/username_1/SSD1306Ascii/blob/master/src/utility/AvrI2c.h) at about line 78.
I did some experiments and found a hang was possible if the I2C bus didn't have pull-ups.
username_0: Thank you. Stability issue resolved. I have not managed to lock up even once.
Now the compiled code is bigger than Arduino IVR even with LTO enabled. This is no longer such an issue for me as I reduced the serial TX/RX buffer size to accommodate the Arduino Wire code.
Status: Issue closed
|
auth0/angular-lock | 201659947 | Title: auth0 is not defined; but Auth0 is!
Question:
username_0: Using `[email protected]` and calling `lock.interceptHash()` in my `.run` method, I get an exception stating that `auth0 is not defined`. When I go to the console and see if it exists as a global variable, it is not there, but `window.Auth0` and `window.Auth0Lock` are both there.
I have [email protected] loaded via CDN and am calling `import 'angular-lock'` in my entry file (I'm using webpack).
Any ideas what I might be doing wrong?
Answers:
username_1: Auth0.js doesn't come bundled with Lock anymore, so you'll need to include it on your own. Try including `<script src="https://cdn.auth0.com/js/auth0/8.0/auth0.min.js"></script>` and see if that fixes it.
username_0: That was it. Thanks! That isn't mentioned in any of the tutorials so probably good to mention it somewhere.
Status: Issue closed
username_1: No prob. We have a note here, but I'll add something to the v2 branch readme here as well: https://auth0.com/docs/libraries/lock/v10/auth0js |
EduardManukyan/HWK-API | 692909871 | Title: Review
Question:
username_0: https://github.com/EduardManukyan/HWK-API/blob/84fbbdd5d2b6cd48815c59ec3602159cffd66d7a/favorites/favorites.js#L32
It'd be better to divide `createAndAddElement` into two different functions: `createElement` and `addElement` (and maybe call them inside of `render` function).
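Roughly like this (a sketch — the element details are placeholders, not the project's actual markup):
```js
// Sketch of the suggested split into two smaller functions used by render.
function createElement(item) {
  const li = document.createElement('li');
  li.textContent = item.name;
  return li;
}

function addElement(container, element) {
  container.appendChild(element);
}

function render(container, items) {
  items.forEach(item => addElement(container, createElement(item)));
}
```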
https://github.com/EduardManukyan/HWK-API/blob/84fbbdd5d2b6cd48815c59ec3602159cffd66d7a/home-page/home-page.js#L103
It'd be better if `searchByCriteria` returned an array which would then be rendered somewhere else. Here: https://github.com/EduardManukyan/HWK-API/blob/84fbbdd5d2b6cd48815c59ec3602159cffd66d7a/home-page/home-page.js#L125
jquense/yup | 295149276 | Title: Ability to get all the defaults from schema
Question:
username_0: I think it would be useful to get the defaults for the entire schema, especially for libraries like Formik that require passing an empty set of default values to initialize the form.
Example:
```js
var schema = yup.object().shape({
name: yup.string().required().default('<NAME>'),
email: yup.string().email()
})
// Something like:
schema.defaults()
// returns:
{
name: '<NAME>',
email: '' // or undefined
}
```
Answers:
username_1: This already works, try calling default on the top schema
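A quick sketch with the schema from the question (note that fields without an explicit default may come back as undefined rather than ''):
```js
const yup = require('yup')

var schema = yup.object().shape({
  name: yup.string().required().default('<NAME>'),
  email: yup.string().email()
})

// Calling .default() with no arguments returns the schema's default value;
// for an object schema it is built from the defaults of its fields.
console.log(schema.default())
// => { name: '<NAME>', email: undefined }
```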
Status: Issue closed
username_0: @username_1 awesome, thank you! |
sewenew/redis-plus-plus | 642643300 | Title: TLS support
Question:
username_0: Hi,
Great work on this library. We've started using it and it's been really nice so far. 😃
Have you considered supporting TLS connections to Redis? hiredis supports it, and it looks like it might be a straightforward addition to redis-plus-plus. See https://github.com/redis/hiredis#ssltls-support
Thanks,
Ryan
Answers:
username_1: Hi @username_0
Thanks for using redis-plus-plus!
YES, there's a plan to support TLS as a part of supporting Redis 6.0 which supports SSL natively. If I have any progress on it, I'll let you know :)
Regards
username_2: +1 for TLS support. @username_0 if it's in hiredis, which is a dependency to this project, then does it mean it could be feasible to enable it this way without waiting for support in redis++?
username_1: Guys, sorry for the delay.
I'm busy with another issue and other stuff these days. I'll try this feature ASAP.
Regards
username_0: @username_2 Not in any obvious way. Redis plus plus doesn't seem to offer any hooks to allow the user of the library to override the way that the hiredis library is called under the hood to create/close the connection or send data over the connection.
Thanks for offering to work on this soon, @username_1!
username_1: Guys, I've added TLS support to redis-plus-plus. You can fetch the latest code on [ssl branch](https://github.com/username_1/redis-plus-plus/tree/ssl), and have a try.
I did some tests and it works fine. However, I still need to do more tests, and not sure if the API will be changed in the future (since hiredis just [changed its SSL API](https://github.com/redis/hiredis/commit/190bca88d0635f1e53d9e39fc69f4f21b67a1baf#diff-5931449ee480a798cabbfe5a0df96da8) on 25 May 2020, not sure if it's already stable).
In order to enable TLS, you need to [build the latest hiredis with SSL support](https://github.com/redis/hiredis#ssltls-support), and build redis-plus-plus with `-DREDIS_PLUS_PLUS_USE_TLS=ON` option when running cmake:
```
cmake -DCMAKE_PREFIX_PATH=/path/to/hiredis-ssl -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/path/to/instal/dirl -DREDIS_PLUS_PLUS_USE_TLS=ON ..
```
The following is an example on how to use it:
```
ConnectionOptions opts;
opts.host = "127.0.0.1";
opts.port = 6379;
// enable TLS
opts.tls_enabled = true;
// set TLS options. These options are the same as redis-cli's TLS options.
// You can run `redis-cli --help` to check the explanation of these options.
opts.tls_options.cacert = "CA Certificate file to verify with";
opts.tls_options.cert = "Client certificate to authenticate with";
opts.tls_options.key = "Private key file to authenticate with";
auto redis = Redis(opts);
cout << redis.ping() << endl;
```
If you have any problem on it, feel free to let me know.
Regards
username_2: Thanks. Change looks really good.
I couldn't wait for it, so last week I introduced TLS myself as well (slightly different than you), but it also worked :)
The only little remark I have about your commit is that in line 64 in tls.cpp you can retrieve an error description from the context and add it to an exception message. It's a detail, but could help debugging.
BR,
Konrad
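Something along those lines, perhaps (purely illustrative — this assumes the failure originates from OpenSSL so its error queue holds the detail; it is not the actual redis-plus-plus code):
```cpp
#include <string>
#include <stdexcept>
#include <openssl/err.h>

// Illustrative only: pull the last OpenSSL error off the queue and attach it
// to the exception message instead of throwing a bare "failed to create ssl".
[[noreturn]] void throw_ssl_error(const std::string &what) {
    char buf[256] = {};
    ERR_error_string_n(ERR_get_error(), buf, sizeof(buf));
    throw std::runtime_error(what + ": " + buf);
}
```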
username_1: Hi @username_2
I modified the interface for setting TLS options, and also optimized the CMakeLists.txt file so that the end user doesn't need to specify `-DREDIS_PLUS_PLUS_USE_TLS` when they build application code.
I've updated the example of my last comment, and you can take a look. Also you can check the [doc](https://github.com/username_1/redis-plus-plus/tree/ssl#tlsssl-support) for detail.
Regards
username_2: thanks @username_1
Sorry for the delay, but I spent some of this time testing the new feature and it seems to be working!
If you're curious, the environment I used was: a client on Windows, built with VS2015, x64, connected to Redis Enterprise version 5. Soon I will have an option to test it against Redis 6 too.
I'll take your fresh changes now and see if they work too - any plans on merging to master and tagging?
Thanks!
Konrad
username_1: As I mentioned, I'm not sure if hiredis' API is stable yet. So I'd like to keep it on the ssl branch for a while. If there are any changes on the master branch, I'll merge them into this branch to ensure it has the latest features.
Also, I'll keep this issue open until the code is merged into master.
Thanks again for your work!
Regards
username_2: No worries and thanks for a reply.
I have tested the new implementation as well - works very well, both in terms of build and runtime.
BR,
Konrad
username_3: Hi @username_1,
Is TLS support still only on the ssl branch or has it been merged back to master? I'm a newbie to Redis and have been looking around for a good C++ Redis library. I prefer this library but TLS/SSL support is required for the Redis project I need to work on. Please let me know the latest status.
Regards.
username_1: Hi @username_3
Since hiredis recently released 1.0.0 with TLS support, I think the API is stable now. I'll merge the ssl branch into master this week. Because redis-plus-plus has had some major changes in connection management since the ssl branch, I need to do some more tests. I'll let you know when I finish this work.
Regards
username_3: Thanks @username_1 . I'll touch base with you early next week.
Regards.
username_1: Hi Guys,
The ssl branch has been merged into the master branch. If you have any problems with TLS support, feel free to let me know.
Regards
Status: Issue closed
username_2: @username_1 thanks for the info about the merge. Could you please point me to information on what sort of connection management changes there were on master?
I was about to create an issue here anyway about a similar topic. Some time ago I ran a test where one of the cluster's nodes was restarted, to see if redis-plus-plus would refresh the topology. Unfortunately it didn't go ok.
Code failed at https://github.com/username_1/redis-plus-plus/blob/master/src/sw/redis++/redis_cluster.hpp#L1332 - the problem was that it wasn't even able to send the CLUSTER SLOTS command.
Scenario description:
Redis enterprise 6.0.6-36
Regular DB with 2 shards and standard hashing policy
redis++ client fails to update list of nodes if a shard is migrated to another node (manually or during failover)
Any chance this is something fixed in the meantime?
Thanks!
username_1: @username_2 The connection management changes have nothing to do with cluster management; instead, they make it possible to create pipelines and transactions without creating new connections.
Also I created a new issue to track the problem.
Regards
username_2: ok thanks so much @username_1 |
spacemanspiff2007/HABApp | 851686959 | Title: run_every: time parameter has no effect
Question:
username_0: If I use the example code from the MQTT Rule example, it does not work as expected:
```python
self.run_every(
    time=datetime.timedelta(seconds=60),
    interval=datetime.timedelta(seconds=30),
    callback=self.publish_rand_value
)
```
I would expect self.publish_rand_value() to be called for the first time after 90s and then every 30s. But it is first called after 30s. It seems the time parameter has no effect.
The same problem occurs with run_minutely: it always starts immediately and not at a random second as documented.
Answers:
username_1: I've made many changes and reworked the scheduler.
Would you be willing to try out the dev branch?
Be aware there are many changes so if possible try from another machine!
username_2: I tried it with the latest dev branch from yesterday and it seems to work now, as you can see from the log.
```
[2021-04-16 09:59:44,485] [ Rules] INFO | Energy management: New value = 1738.35m³.
[2021-04-16 10:00:00,208] [ Rules] INFO | Energy management: Hourly update
[2021-04-16 10:02:04,377] [ Rules] INFO | Energy management: New value = 1738.360001m³.
[2021-04-16 10:04:10,113] [ Rules] INFO | Energy management: New value = 1738.379999m³.
[2021-04-16 10:07:04,185] [ Rules] INFO | Energy management: New value = 1738.4m³.
[2021-04-16 10:09:44,342] [ Rules] INFO | Energy management: New value = 1738.42m³.
[2021-04-16 10:12:10,338] [ Rules] INFO | Energy management: New value = 1738.43m³.
[2021-04-16 10:14:22,178] [ Rules] INFO | Energy management: New value = 1738.450001m³.
[2021-04-16 10:16:19,858] [ Rules] INFO | Energy management: New value = 1738.459999m³.
[2021-04-16 10:19:06,123] [ Rules] INFO | Energy management: New value = 1738.48m³.
[2021-04-16 10:21:38,223] [ Rules] INFO | Energy management: New value = 1738.5m³.
[2021-04-16 10:23:56,167] [ Rules] INFO | Energy management: New value = 1738.52m³.
[2021-04-16 10:26:00,197] [ Rules] INFO | Energy management: New value = 1738.530001m³.
[2021-04-16 10:28:52,557] [ Rules] INFO | Energy management: New value = 1738.549999m³.
[2021-04-16 11:13:46,684] [ Rules] INFO | Energy management: New value = 1738.82m³.
[2021-04-16 11:16:21,471] [ Rules] INFO | Energy management: New value = 1738.84m³.
[2021-04-16 11:18:42,100] [ Rules] INFO | Energy management: New value = 1738.85m³.
[2021-04-16 11:20:48,565] [ Rules] INFO | Energy management: New value = 1738.870001m³.
[2021-04-16 11:23:43,609] [ Rules] INFO | Energy management: New value = 1738.889999m³.
[2021-04-16 11:26:24,500] [ Rules] INFO | Energy management: New value = 1738.91m³.
[2021-04-16 11:28:51,223] [ Rules] INFO | Energy management: New value = 1738.93m³.
[2021-04-16 11:31:47,247] [ Rules] INFO | Energy management: New value = 1738.950001m³.
[2021-04-16 11:34:29,359] [ Rules] INFO | Energy management: New value = 1738.969999m³.
[2021-04-16 11:39:11,097] [ Rules] INFO | Energy management: New value = 1738.99m³.
[2021-04-16 11:48:52,888] [ Rules] INFO | Energy management: New value = 1739.0m³.
[2021-04-16 11:58:59,592] [ Rules] INFO | Energy management: New value = 1739.02m³.
[2021-04-16 12:00:00,212] [ Rules] INFO | Energy management: Hourly update
[2021-04-16 12:01:11,670] [ Rules] INFO | Energy management: New value = 1739.030001m³.
```
[Truncated]
self.run.every(datetime.now().replace(minute=0, second=0, microsecond=0), timedelta(hours=1), self.update_hourly_energy_consumption)
self.run.every(datetime.now().replace(hour=0, minute=0, second=0, microsecond=0), timedelta(days=1), self.update_daily_energy_consumption)
self.energy_item = NumberItem.get_item(f'RH_BM_LD_GasMeter_GasEnergyCounter')
self.energy_item.listen_event(self.update_energy_counter, ValueChangeEvent)
log.info(f'Energy management: Initialized.')
def update_energy_counter(self, event: ValueChangeEvent):
log.info(f'Energy management: New value = {event.value}m³.')
pass
def update_hourly_energy_consumption(self):
log.info(f'Energy management: Hourly update')
pass
def update_daily_energy_consumption(self):
log.info(f'Energy management: Daily update.')
pass```
username_2: Interesting effect. As you can see, I added the microseconds option to the replace function. Now the function is called many times.
```
[2021-04-16 16:53:56,550] [ Rules] INFO | Energy management: New value = 1740.200001m³.
[2021-04-16 16:56:19,865] [ Rules] INFO | Energy management: New value = 1740.219999m³.
[2021-04-16 16:58:29,260] [ Rules] INFO | Energy management: New value = 1740.23m³.
[2021-04-16 17:00:00,006] [ Rules] INFO | Energy management: Hourly update
[2021-04-16 17:00:00,010] [ Rules] INFO | Energy management: Hourly update
[2021-04-16 17:00:00,013] [ Rules] INFO | Energy management: Hourly update
[2021-04-16 17:00:00,016] [ Rules] INFO | Energy management: Hourly update
[2021-04-16 17:00:00,019] [ Rules] INFO | Energy management: Hourly update
[2021-04-16 17:00:00,022] [ Rules] INFO | Energy management: Hourly update
[2021-04-16 17:00:00,210] [ Rules] INFO | Energy management: Hourly update```
username_1: There was still a bug in the scheduler which should be fixed now.
Could you please try again, but before that run
``pip install --upgrade eascheduler``
which should install eascheduler 0.0.8, and then restart HABApp.
username_2: Updated the eascheduler, but that didn't fix the problem
```
[2021-04-17 13:53:30,090] [ Rules] INFO | Energy management: New value = 1748.799999m³.
[2021-04-17 13:55:47,298] [ Rules] INFO | Energy management: New value = 1748.82m³.
[2021-04-17 14:00:00,009] [ Rules] INFO | Energy management: Hourly update
[2021-04-17 14:00:00,012] [ Rules] INFO | Energy management: Hourly update
[2021-04-17 14:00:00,017] [ Rules] INFO | Energy management: Hourly update
[2021-04-17 14:00:00,023] [ Rules] INFO | Energy management: Hourly update
[2021-04-17 14:00:00,026] [ Rules] INFO | Energy management: Hourly update
[2021-04-17 14:00:00,029] [ Rules] INFO | Energy management: Hourly update
[2021-04-17 14:00:00,009] [ Rules] INFO | Energy management: Hourly update
[2021-04-17 14:00:00,012] [ Rules] INFO | Energy management: Hourly update
[2021-04-17 14:00:00,017] [ Rules] INFO | Energy management: Hourly update
[2021-04-17 14:00:00,023] [ Rules] INFO | Energy management: Hourly update
[2021-04-17 14:00:00,026] [ Rules] INFO | Energy management: Hourly update
[2021-04-17 14:00:00,029] [ Rules] INFO | Energy management: Hourly update```
username_1: If I try with the latest scheduler and latest HABApp I get a ``FirstRunNotInTheFutureError`` error (which is correct).
Maybe something went wrong while updating? Have you restarted HABApp?
If I modify the rule I get the correct output:
```python
print(eascheduler.__version__)
a = self.run.every(time((datetime.now() + timedelta(hours=1)).hour), timedelta(hours=1), self.update_hourly_energy_consumption)
print(a.get_next_run())
b = self.run.every(time(0), timedelta(days=1), self.update_daily_energy_consumption)
print(b.get_next_run())
```
```
0.0.8
2021-04-17T16:00:00
2021-04-18T00:00:00
```
username_2: After updating to the latest dev branch and fixing some errors in my script with help from @username_1, it is now working.
username_1: Fixed with 0.30.0
Status: Issue closed
|
googleapis/google-cloud-java | 452502280 | Title: Make `LoggingEventEnhancer` more generic so can be used in more implementations
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Currently only `com.google.cloud.logging.logback.LoggingAppender` can enhance log entries using information contained within the logging event. This isn't possible via the JDK logger, although it should be.
**Describe the solution you'd like**
Replace `com.google.cloud.logging.logback.LoggingEventEnhancer` with `com.google.cloud.logging.LoggingEventEnhancer` that has a class structure similar to:
```
@FunctionalInterface
public interface LoggingEventEnhancer<E> {
void enhanceLogEntry(LogEntry.Builder builder, E event);
}
```
For backwards compatibility `com.google.cloud.logging.logback.LoggingEventEnhancer` could then change to:
```
public interface LoggingEventEnhancer extends com.google.cloud.logging.LoggingEventEnhancer<ILoggingEvent> {
...
}
```
**Describe alternatives you've considered**
None
**Additional context**
The JDK logger `com.google.cloud.logging.LoggingHandler` should then be enhanced to perform the enhancement using the new class. The old `com.google.cloud.logging.LoggingEnhancer` could be deprecated as its functionality could be subsumed by the new one. |
jasperdavey/CS490Project | 149616266 | Title: New attribute to Tags table
Question:
username_0: Due to a logical bug, $id causes a duplicate SQL error. We now use $id to identify a unique tag row and $owner to identify the owner of the tag. Please update anything that might need to be fixed on your end.<issue_closed>
Status: Issue closed |
dbnschools/moodle-theme_fordson | 336534648 | Title: LTI Tool Provider fails because of overridden favicon method
Question:
username_0: Hi there!
I just found out in a Moodle 3.5 environment the LTI Tool Provider enrolment does not work when using Fordson. This is because Fordson overrides the default favicon() method.
Context:
The enrolment method LTI Provider tries to "get_icon()" and then call the method out() on the returned object. The error received is ```"Exception - Call to a member function out() on null"``` when Fordson doesn't have a favicon configured, or ```null``` is replaced by ```string``` when the icon is configured.
The original favicon method in $CFG->dirroot . /lib/outputrenderers returns a moodle_url object.
```
4082 public function favicon() {
1 return $this->image_url('favicon', 'theme');
2 }
```
The overruled favicon method in theme_fordson/classes/output/core_renderer.php returns a string or null.
```
1440 public function favicon() {
1 return $this->page->theme->setting_file_url('favicon', 'favicon');
2 }
```
My hotfix is to not override the favicon method. The permafix would be to rework the favicon method so it returns a moodle_url object.
```
1440 public function favicon() {
1 return new \moodle_url($this->page->theme->setting_file_url('favicon', 'favicon'));
2 }
```
However this requires another change and I have not yet figured out where in the plug-in this would need to be done.
Answers:
username_1: If you open up the Fordson core_renderer.php file and locate the favicon function can you change it to this:
```
public function favicon() {
    if (!empty($this->page->theme->setting_file_url('favicon', 'favicon'))) {
        return $this->page->theme->setting_file_url('favicon', 'favicon');
    } else {
        return $this->image_url('favicon', 'theme');
    }
}
```
That should make the function default to the Moodle core function if there is no image in the custom fordson setting.
username_1: Let me know if that fixes the issue.
username_0: This is a workaround; however the problem with LTI will still occur when you load a favicon within the theme settings. This is because the method ```setting_file_url``` returns a ```string```. The core favicon method that's being overridden returns a ```moodle_url``` and as such the core plug-ins expect a moodle_url to be returned. The LTI enrolment method does not work because it calls for a favicon and is given a string instead of an object.
username_1: Do you have an idea on how we can keep the custom favicon and still make this work?
Status: Issue closed
username_2: I want to confirm this is causing issues with LTI connectivity...
username_1: We use LTI with Wordpress and have no issues. I'm not sure it is an issue.
username_2: This could be a:
- Moodle to Moodle thing
- affecting Moodle 3.5.1 (could not test anything lower) provider sites with themes that override favicons
cf. https://moodle.org/mod/forum/discuss.php?d=371252#p1511614 and following
username_1: Thanks. If a solution is found please update this thread and I will implement whatever needs to be done.
username_3: Hello,
This bug affects all themes that have the favicon upload implemented.
I discovered the issue as far back as 3.3.
For Theme Essential we just fixed this issue. See https://moodle.org/mod/forum/discuss.php?d=371252#p1511614 for Gareth's input,
as well as the fix for 3.4 / 3.5: https://moodle.org/mod/forum/discuss.php?d=376177
Gemma
username_1: Hello Gemma,
Can you open up core_renderer.php and replace the favicon function on line 1541 as follows:
```
public function favicon() {
    $favicon = $this->page->theme->setting_file_url('favicon', 'favicon');
    if (empty($favicon)) {
        return $this->page->theme->image_url('favicon', 'theme');
    } else {
        return $favicon;
    }
}
```
Let me know if this fixes the issue.
Status: Issue closed
username_4: @username_1 @username_0 @username_3 @username_2
Hi All, I want to reopen this issue, as LTI publishing from Moodle with Fordson is still not working (or not working again). I tried changing to Boost for testing purposes, and that helped confirm this is a Fordson problem. Changing the theme for a particular course doesn't help; you need to change it for the whole site. I see the latest suggestion from @username_1 is already integrated into Fordson, so it must be another issue or some new bug. Please help; we love Fordson and use it in many Moodle-based projects so far, but this is a serious stopper for future considerations.
username_3: Hi,
Sorry, I don't use Fordson anymore. But I suggest you add the Moodle version and Fordson theme version to this comment; also, have you tried turning on debugging on your site to see what error message appears?
This might give a clue.
Gemma
username_1: @username_4
I don't know what version you are using but I believe it is fixed using the code provided in this thread. What is your Moodle and Fordson versions?
username_4: @username_1 we use latest Moodle 3.8.1+ (Build: 20200124) and Fordson v3.8 release 1.2
username_5: Since moodle is not willing to even provide a temporary fix it seems like you will have to get active sooner or later on this...
https://tracker.moodle.org/browse/MDL-72659 |
red/red | 1178266869 | Title: [Draw] Block corruption on error
Question:
username_0: == [1]
```
**To reproduce**
Any error-generating token leads to this.
**Platform version**
```
Red 0.6.4 for Windows built 11-Mar-2022/2:40:53+03:00 commit #27f71f9
```
Answers:
username_1: And `copy/deep` cures it. 😃 |
C2FO/vfs | 546451211 | Title: S3 backend CopyToFile fails when source contains %
Question:
username_0: Below will fail:
```
//create file with %
file, err := remoteLocation.NewFile("orig%20file.txt")
if err != nil {... }
file.Write([]byte("this\n"))
file.Close()
//setup target file
file2, err := remoteLocation.NewFile("new_file.txt")
if err != nil {... }
err = file.CopyToFile(file2)
if err != nil {
// WILL FAIL "No such key found"
}
```
This is because CopyToFile ultimately calls s3.CopyObject, and s3.CopyObjectInput's CopySource needs to be URL-encoded. Oddly, the target does not. https://docs.aws.amazon.com/sdk-for-go/api/service/s3/#CopyObjectInput
```
// The name of the source bucket and key name of the source object, separated
// by a slash (/). Must be URL-encoded.
//
// CopySource is a required field
CopySource *string `location:"header" locationName:"x-amz-copy-source" type:"string" required:"true"`
```
We should url-encode CopySource in s3 backend's copyFile() function when calling CopyObjectInput.SetCopySource().
Answers:
username_0: Changing:
```
copyInput := new(s3.CopyObjectInput).
SetKey(targetFile.key).
SetBucket(targetFile.bucket).
SetCopySource(path.Join(f.bucket, f.key))
```
to
```
copyInput := new(s3.CopyObjectInput).
SetKey(targetFile.key).
SetBucket(targetFile.bucket).
SetCopySource(url.PathEscape(path.Join(f.bucket, f.key)))
```
should do it.
Status: Issue closed
|
intellij-rust/intellij-rust | 493257974 | Title: Disable the default `--all` flag when building.
Question:
username_0: ## Environment
* **0.2.105.2133-192**
* **1.37.0**
* **CLion 2019.2.1**
* **Windows 10**
## Problem description
Building always uses the `--all` flag even when I've unchecked the option in the Cargo settings.
## Steps to reproduce
I have a very large cargo workspace. Some of the components are generally only able to be built in particular circumstances and environments. I use the `default-members` configuration in the workspace to exclude these.
Unfortunately, IntelliJ-rust seems to only want to run `cargo build --all`.
Answers:
username_0: 
I see the issue that added this option but it apparently has no effect.
https://github.com/intellij-rust/intellij-rust/issues/2604
username_1: `Compile all project targets if possible` relates to `--all-targets` option, not `--all`.
With the new experimental build tool window (https://github.com/intellij-rust/intellij-rust/pull/3926), the "Build" action builds only the current selected configuration instead of all targets.
You can enable the build tool window by setting `cargo.build.tool.window.enabled` variable to `false` (see `Actions > Registry...`).
username_0: Oof. This makes sense. I keep confusing 'targets' and 'projects'. In this case I specifically want to exclude certain projects.
Status: Issue closed
|
bdecon/econ_data | 409400495 | Title: bd CPS: new variable WORKFT
Question:
username_0: Add a new variable `WORKFT` that is based on `WKSTAT` and captures people who work full-time. I had been using `PRFTLF` and not realizing it is assigned even for people who are unemployed and usually FT. The `WKSTAT == 2` is consistent for all years 1989-present and captures people who work 35+ hours and usually work 35+ hours.
Answers:
username_0: Will have to check for 1994 onward whether to include `WKSTAT in [8, 9]` but I suspect yes.
Status: Issue closed
username_0: This is done. I did use `WKSTAT in [2, 8, 9]` for 1994-onward. |
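For reference, a rough pandas sketch of the assignment described in this issue (the DataFrame and column names are assumptions for illustration; only the WKSTAT codes come from the comments above):
```python
import pandas as pd

def add_workft(cps: pd.DataFrame, year: int) -> pd.DataFrame:
    """Flag people working full-time (35+ hours and usually 35+), based on WKSTAT."""
    # WKSTAT == 2 holds for all years 1989-present; codes 8 and 9 are added for 1994 onward.
    codes = [2, 8, 9] if year >= 1994 else [2]
    cps["WORKFT"] = cps["WKSTAT"].isin(codes)
    return cps
```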
the-sett/elm-aws-core | 660857024 | Title: Preferred way of extracting AWS credentials
Question:
username_0: I've been looking at adding auth functionality a la elm-auth-demo to my frontend app on standard AWS infrastructure + some Java backend.
My question is: AFTER the end user has successfully authenticated against Cognito, how do you recommend passing on the auth keys to calls against other AWS services or my Java backend?
Extracting the auth details from AWS.Core.Credentials seems like a real hassle, and elm-aws-core has changed substantially from v3.1.1 to 7.1.0, so upgrading my auth-demo clone was a hornet's nest.
How would you recommend to use your libraries to extract the auth info?
Answers:
username_1: The `elm-auth-aws` package builds on top of Cognito. It can supply the 'id' and 'access' tokens by default, and it can also be configured to establish 'client secret' credentials too.
https://package.elm-lang.org/packages/the-sett/elm-auth-aws/latest/AWS-Auth |
BluSunrize/ImmersiveEngineering | 137861540 | Title: (Bug) NoSuchMethodError with Minetweaker.
Question:
username_0: pastebin crash log : http://pastebin.com/BgWxaPAb
Status: Issue closed
Answers:
username_1: Update to the latest version of Minetweaker.
username_0: Thanks, I swore I had updated to the latest version.
username_1: You'll want 3.0.10b
username_0: Yeah got that version works like a charm now.
username_0: hmm, it still crashes when the world loads up: http://pastebin.com/45k4iQ9Z
username_2: @username_0 you managed to provoke a probably impossible situation
username_2: @username_1 you could use a nullcheck if you do not have 100% assurance that your if block can't fall through https://github.com/username_1/ImmersiveEngineering/blob/master/src/main/java/blusunrize/lib/manual/ManualPages.java#L449-L472 this If-elseIf-elseIf could use a concluding "else: fail with reason or ignore" or just a simple nullcheck :)
username_0: @username_2 explain? so i like poked a thing that cannot so called "exist"?
username_2: the recipe you forced a check on is, for some reason unknown to me, neither a ShapelessRecipes, nor a ShapelessOreRecipe, nor a ShapedOreRecipe, nor a ShapedRecipes
it's more like unsafe handling inside the code here, but you must have some sort of strange recipe in your scripts causing this. Can you show your scripts somehow?
username_0: @username_2 haven't done anything with scripts.
username_2: then it's some other mod tampering with recipes ... you'll probably need to run this with a custom version of IE to get behind this issue, because debugging a whole modpack is a tough or almost impossible task
username_0: @username_2 what kind of special version of IE?
username_0: filed a issue in the minetweaker issues list.
username_1: @username_2 has a point here. This could be avoided with a check, I will definitely include one with the next build, but I don't see myself doing a build just for this one fix.
I'm also re-opening this issue, as a reminder for myself =P
username_3: @username_1 was about to comment here that this has nothing to do with us, I'll close the issue on my end
username_1: Oh yeah, absolutely not your side jared. Just me being oblivious to the fact that anyone can implement IRecipe (even though I do it myself xD)
Status: Issue closed
|
CUL-DigitalServices/avocet-ui | 48638508 | Title: Migrate financial data into ZenDesk
Question:
username_0: To enable the OA Helpdesk to use ZenDesk for all the financial data entry we need to migrate existing financial data from two different spreadsheets. I've done a mapping of fields from each spreadsheet here:
https://docs.google.com/spreadsheets/d/1f1YfUGCtSpn0Dp6EuRFU9ejoYwHe1f7soj47aWGFp5Y/edit?usp=sharing
The latest versions of the spreadsheets can be provided by <NAME>.
The financial fields already exist in ZenDesk.
Answers:
username_0: John has proposed that we do some initial investigation and scoping work on this. Raymond has requested access to the spreadsheets from Philip. |
gagarinbefree/BLink | 292892933 | Title: Error when Regex tries to resolve text with Regular Expression special characters
Question:
username_0: The following line in `Models/Heavy.cs` throws an exception when Regex tries to match text that contains a character like "(":
`ret += Regex.Matches(innerText, target).Count;`
Since `target` is used as the pattern, when the text contains those characters, Regex treats them as regular expression syntax rather than literal characters to match.
I found the error when I was trying to preview https://www.youtube.com/watch?v=2BoMgNqvR_U
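For reference, the usual fix for this class of bug is to escape the literal before handing it to the regex engine; in C# that is what `Regex.Escape(target)` does. A tiny Python sketch of the same idea, with made-up sample strings:
```python
import re

inner_text = "time (s) vs distance (m)"
target = "(s)"  # contains regex metacharacters

# Using the raw literal as a pattern either raises or matches the wrong thing;
# escaping it first makes the count refer to the literal text.
count = len(re.findall(re.escape(target), inner_text))
print(count)  # 1
```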
phpmetrics/PhpMetrics | 988533588 | Title: [Bug] Trying to scan an excluded folder
Question:
username_0: # Bug report
```
php ./vendor/bin/phpmetrics --config=.phpmetrics.json
Executing system analyzes...
Executing composer analyzes, requesting https://packagist.org...
Fatal error: Uncaught UnexpectedValueException: RecursiveDirectoryIterator::__construct(./docker/volumes/data-test): Failed to open directory: Permission denied in /var/www/vendor/phpmetrics/phpmetrics/src/Hal/Component/File/Finder.php:88
Stack trace:
#0 [internal function]: RecursiveDirectoryIterator->__construct('./docker/volume...', 0)
#1 [internal function]: RecursiveDirectoryIterator->getChildren()
#2 /var/www/vendor/phpmetrics/phpmetrics/src/Hal/Component/File/Finder.php(88): FilterIterator->next()
#3 /var/www/vendor/phpmetrics/phpmetrics/src/Hal/Metric/System/Packages/Composer/Composer.php(76): Hal\Component\File\Finder->fetch(Array)
#4 /var/www/vendor/phpmetrics/phpmetrics/src/Hal/Metric/System/Packages/Composer/Composer.php(39): Hal\Metric\System\Packages\Composer\Composer->getComposerJsonRequirements()
#5 /var/www/vendor/phpmetrics/phpmetrics/src/Hal/Application/Analyze.php(144): Hal\Metric\System\Packages\Composer\Composer->calculate(Object(Hal\Metric\Metrics))
#6 /var/www/vendor/phpmetrics/phpmetrics/src/Hal/Application/Application.php(57): Hal\Application\Analyze->run(Array)
#7 /var/www/vendor/phpmetrics/phpmetrics/bin/phpmetrics(27): Hal\Application\Application->run(Array)
#8 {main}
thrown in /var/www/vendor/phpmetrics/phpmetrics/src/Hal/Component/File/Finder.php on line 88
```
My config
```json
{
"includes": [
"app"
],
"exclude": [
"tests",
"vendor",
"docker"
],
"report": {
"html": "./tmp/php-metrics/report/",
"csv": "./tmp/php-metrics/report.csv",
"json": "./tmp/php-metrics/report.json",
"violations": "./tmp/php-metrics/violations.xml"
}
}
```
"docker" folder has access restrictions and added to config exclude. Despite this, the script tries to scan this folder.
If you run the script from another folder, for example from vendor, then there is no such problem. It looks like at startup there is a search for something without checking permission. |
yosupo06/library-checker-problems | 521089170 | Title: [Problem proposal] Add Line Get Min
Question:
username_0: (Optional) Problem ID: add_line_get_min
Problem name: Add Line Get Min
# Problem summary
Process Q queries:
- a b: add the line y = ax + b
- a: find the minimum y coordinate at x = a
## Input
```
Q
que
que
:
que
```
## Output
```
```
## Constraints
Q <= 200,000
Answers:
username_1: I'd also like an Add Segment Get Min problem (Li-Chao tree)
username_0: Since whether a data structure initializes in O(N) or O(N log N) is also a relevant factor, the problem will provide N lines at the start
username_0: Since limiting b to 10^9 is unnatural, make it 10^18
username_0: #205
Status: Issue closed
username_0: The forum link does not point here
username_0: (Optional) Problem ID: add_line_get_min
Problem name: Add Line Get Min
# Problem summary
There are N lines y = a_i x + b_i. Process Q queries:
- 0 a b: add the line y = ax + b
- 1 p: find the minimum y coordinate at x = p.
## Input
```
N Q
a0 b0
a1 b1
:
a{N-1} b{N-1}
que
que
:
que
```
## Output
```
```
## Constraints
- 1 <= N, Q <= 200,000
- |value| <= 1e9 (since handling of signs is fairly likely to cause bugs)
- |b| <= 1e18 (since in ax + b, the term ax can reach 10^18)
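For illustration only, a minimal Python sketch of the Li-Chao tree mentioned above, in the min-query formulation of this proposal (the coordinate bounds follow the |value| <= 1e9 constraint; this is a sketch, not the judge's reference solution):
```python
class LiChaoTree:
    """Stores lines y = a*x + b and answers min queries at integer x in [lo, hi]."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.line = None      # best known (a, b) for this node
        self.left = None
        self.right = None

    @staticmethod
    def _eval(line, x):
        a, b = line
        return a * x + b

    def add_line(self, new):
        if self.line is None:
            self.line = new
            return
        lo, hi = self.lo, self.hi
        mid = (lo + hi) // 2
        # Keep the line that is better at the midpoint of this node's range.
        if self._eval(new, mid) < self._eval(self.line, mid):
            self.line, new = new, self.line
        if lo == hi:
            return
        # Push the losing line down to the half where it may still win.
        if self._eval(new, lo) < self._eval(self.line, lo):
            if self.left is None:
                self.left = LiChaoTree(lo, mid)
            self.left.add_line(new)
        elif self._eval(new, hi) < self._eval(self.line, hi):
            if self.right is None:
                self.right = LiChaoTree(mid + 1, hi)
            self.right.add_line(new)

    def query(self, x):
        best = float("inf")
        if self.line is not None:
            best = self._eval(self.line, x)
        mid = (self.lo + self.hi) // 2
        if x <= mid and self.left is not None:
            best = min(best, self.left.query(x))
        elif x > mid and self.right is not None:
            best = min(best, self.right.query(x))
        return best


tree = LiChaoTree(-10**9, 10**9)
tree.add_line((-1, 7))   # y = -x + 7
tree.add_line((2, -3))   # y = 2x - 3
print(tree.query(1))     # -1 (from y = 2x - 3)
print(tree.query(5))     # 2 (from y = -x + 7)
```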
Status: Issue closed
|
francotiveron/Choreo | 1013885729 | Title: User Levels
Question:
username_0: Paul
I thought you said this was sorted out in the past, but I was never really sure how to implement it.
Create Instruction Document on how to configure User Login
Answers:
username_0: [Choreo User Management.docx](https://github.com/username_0/Choreo/files/7272799/Choreo.User.Management.docx)
Status: Issue closed
|
muayyad-alsadi/docker-jumpshell | 183029518 | Title: it does not support mosh
Question:
username_0: ```
mosh docker-all@myserver
Error: No such container: mosh-server
Error response from daemon: No such container: mosh-server
Error response from daemon: No such container: mosh-server
Connection to mosh closed.
/usr/bin/mosh: Did not find mosh server startup message.
```
Status: Issue closed
Answers:
username_0: fixed in v1.5
https://github.com/username_0/docker-jumpshell/commit/85fa8f088752300e8015fd7a68c54853b15bd9eb |
matrix-org/matrix-spec | 1155988907 | Title: POST /_matrix/client/r0/login only accepts a type of m.login.password, m.login.token.
Question:
username_0: **Link to problem area**: https://spec.matrix.org/unstable/client-server-api/#post_matrixclientr0login
**Issue**
`POST /login` expects the type to be one of `[m.login.password m.login.token]`; however, that looks overly restrictive given the type could be anything supported by the homeserver as listed in `GET /login` (e.g. `m.login.sso`).
**Expected behaviour**
We should probably be less strict here and say that any type given in the homeserver's `GET /login` response is acceptable.
Azure/autorest | 235992441 | Title: AddCredentials argument is incorrect in auth.md docs
Question:
username_0: Was reading through this doc and noticed https://github.com/Azure/autorest/blob/master/docs/client/auth.md lists `-AddCredentials` but the actual argument is `--add-credentials`
Answers:
username_0: Also lots of broken links in this same document.
username_1: thanks, on it!
Status: Issue closed
|
artipie/asto | 732198562 | Title: Cache should return Optional of Content
Question:
username_0: Let's change the signature of the `Cache#load` method to return a `CompletionStage` of an `Optional` of `Content`, as it's possible there will be no content to return in cases where we do not have the requested data in the cache and the remote returns an unsuccessful response.
Status: Issue closed
Answers:
username_2: Job `gh:artipie/asto#286` is not assigned, can't get performer
<!-- https://www.username_2.com/footprint/CT2E6TK9B/f1ac3931-2293-4226-b40e-38089e67df0c, version: 0.54.5, hash: ${buildNumber} -->
username_0: @username_3 release, tag=`0.32`
username_3: @username_0 OK, I will release it now. Please check the progress [here](https://www.username_3.com/t/23657-723640304)
username_3: @username_0 Done! FYI, the full log is [here](https://www.username_3.com/t/23657-723640304) (took me 14min) |
getgrav/grav-plugin-admin | 1150611654 | Title: Attaching CSV File to Page Throws Mime Type Error
Question:
username_0: The issue was resolved by copying `system/mime.yaml` to `user/config/mime.yaml` (thanks, @mahagr!) and editing the CSV mime type as follows:
```
csv:
- text/csv
- application/vnd.ms-excel
```
The Excel->CSV workflow is pretty common, so `system/mime.yaml` should be modified to include MS's vendor-specific mime type. |
imablanco/ImageProvider | 371556329 | Title: NO BITMAP !
Question:
username_0: I tried it and it did not work. Pickers and Camera work, but no Bitmap.
I used your project
Answers:
username_0: companion object {
var mCurrentPhotoPath: String? = null
}
@Throws(IOException::class)
private fun createImageFile(): File {
// Create an image file name
val timeStamp: String = SimpleDateFormat("yyyyMMdd_HHmmss").format(Date())
val storageDir: File = getExternalFilesDir(Environment.DIRECTORY_PICTURES)
return File.createTempFile(
"JPEG_${timeStamp}_", /* prefix */
".jpg", /* suffix */
storageDir /* directory */
).apply {
// Save a file: path for use with ACTION_VIEW intents
mCurrentPhotoPath = absolutePath
}
}
val REQUEST_TAKE_PHOTO = 1
private fun dispatchTakePictureIntent() {
Intent(MediaStore.ACTION_IMAGE_CAPTURE).also { takePictureIntent ->
// Ensure that there's a camera activity to handle the intent
takePictureIntent.resolveActivity(packageManager)?.also {
// Create the File where the photo should go
val photoFile: File? = try {
createImageFile()
} catch (ex: IOException) {
// Error occurred while creating the File
//...
null
}
// Continue only if the File was successfully created
photoFile?.also {
val photoURI: Uri = FileProvider.getUriForFile(
this,
"com.example.android.fileprovider",
it
)
takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, photoURI)
startActivityForResult(takePictureIntent, REQUEST_TAKE_PHOTO)
}
}
}
}
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (resultCode == RESULT_OK && requestCode == REQUEST_TAKE_PHOTO ) {
val file = File(mCurrentPhotoPath)
val bitmap = MediaStore.Images.Media.getBitmap(
contentResolver,Uri.fromFile(file))
if (bitmap !=null) {
Log.d("BITMAP",bitmap.byteCount.toString())
ivPhoto.setImageBitmap(bitmap)
}
}
}
Status: Issue closed
|
istio/istio.io | 617964629 | Title: The same documentation URL for AuthenticationPolicy (namespace-wide) display two resource type different depending on the localisation
Question:
username_0: HI,
About AuthenticationPolicy, we saw a difference in the documentation between 2 URLs. It's the same page, but in the second case it's for the "pt-br" language/country.
*
https://istio.io/docs/tasks/security/authentication/authn-policy/#namespace-wide-policy
--> Talk about **PeerAuthentication** resource type
<img width="1090" alt="image" src="https://user-images.githubusercontent.com/2543381/81900658-933fa200-95bd-11ea-8311-0cbbca7285e5.png">
*
https://istio.io/pt-br/docs/tasks/security/authentication/authn-policy/#namespace-wide-policy
--> Talk about **Policy** resource type
<img width="1090" alt="image" src="https://user-images.githubusercontent.com/2543381/81900726-b0747080-95bd-11ea-813e-234fc3ad5315.png">
So which is the correct resource type?
Thanks for taking a look at this difference.
Thanks :)
Answers:
username_1: Thanks @username_0 for taking the time to report the issue. The docs under `pt-br` are maintained by different language teams, so occasionally it takes time to update them. Meanwhile, you can refer to the `en` docs, which are updated frequently.
As per the [latest updates](https://github.com/istio/istio/pull/20829), `PeerAuthentication` is the right resource type for a namespace-wide policy.
eFrane/vuepress-plugin-mermaidjs | 1097823490 | Title: Duplicated code blocks
Question:
username_0: Hi, I'm seeing a bug where duplicated code blocks are appearing when there is no space before the block in the markdown. It's easily remedied by adding a new line before the code block, but thought it was a bit odd.
I removed your plugin and the duplicates in my example disappear, so it looks like something weird with the plugin perhaps.
Please see example repo [here](https://github.com/username_0/vuepress-bug/blob/main/docs/guide/README.md?plain=1#L3-L6) and [here](https://github.com/username_0/vuepress-bug/blob/main/docs/guide/README.md?plain=1#L8-L19)
Example rendering can be seen at https://vuepress-bug.onrender.com/guide/
Answers:
username_1: I found a similar question with version 1.9.0
# Reproduction link
[<img src="https://c.staticblitz.com/assets/favicon-7453cf0c12d349fb64b7aa2b69cc69c026f083a27f139f0839b1f4948bed6811.png" width = "50" height = "50" alt="#" align=center />Edit on Stackblitz](https://stackblitz.com/edit/node-oop14j?file=docs/README.md)
username_1: I found it was caused by [@mbalex99/markdown-it-fence](https://github.com/mbalex99/markdown-it-fence)
# Reproduction link
[<img src="https://c.staticblitz.com/assets/favicon-7453cf0c12d349fb64b7aa2b69cc69c026f083a27f139f0839b1f4948bed6811.png" width = "30" height = "30" alt="#" align=center />Edit on Stackblitz](https://stackblitz.com/edit/mbalex99markdown-it-fence-bug?file=index.js)
username_2: Hm… maybe using markdown-it-fence wasn't the best of ideas. While it reduced the sloc, it seems to increase the maintenance load. I propose to solve this by moving back to having a built-in markdown-it plugin.
username_2: Fixed in v1.9.1, thanks @username_1.
Status: Issue closed
|
SignetStudios/slackbot-uno | 222484869 | Title: Calling /uno after playing wild crashes bot
Question:
username_0: ```[2017-04-18T17:05:31Z]slapp:info [evt] tm=T0AP3U2MA ch=C4Y9ES4VB usr=undefined message.message_changed
[2017-04-18T17:05:39Z]slapp:info [cmd] tm=T0AP3U2MA ch=C4Y9ES4VB usr=U0HE6BYPR /uno
[2017-04-18T17:05:39Z]/usr/src/app/slack_bot.js:54
[2017-04-18T17:05:39Z] unoGame.setWildColor(msg, state, msg.body.actions[0].value);
[2017-04-18T17:05:39Z] ^
[2017-04-18T17:05:39Z]
[2017-04-18T17:05:39Z]TypeError: Cannot read property '0' of undefined
[2017-04-18T17:05:39Z] at slapp.route (/usr/src/app/slack_bot.js:54:54)
[2017-04-18T17:05:39Z] at self.convoStore.del (/usr/src/app/node_modules/slapp/src/slapp.js:231:24)
[2017-04-18T17:05:39Z] at /usr/src/app/node_modules/slapp-convo-beepboop/node_modules/beepboop-persist/index.js:82:9
[2017-04-18T17:05:39Z] at wrapped (/usr/src/app/node_modules/hoek/lib/index.js:871:20)
[2017-04-18T17:05:39Z] at Object.onceWrapper (events.js:293:19)
[2017-04-18T17:05:39Z] at emitOne (events.js:96:13)
[2017-04-18T17:05:39Z] at ClientRequest.emit (events.js:191:7)
[2017-04-18T17:05:39Z] at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:522:21)
[2017-04-18T17:05:39Z] at HTTPParser.parserOnHeadersComplete (_http_common.js:99:23)
[2017-04-18T17:05:39Z] at Socket.socketOnData (_http_client.js:411:20)
[2017-04-18T17:05:39Z] at emitOne (events.js:96:13)
[2017-04-18T17:05:39Z] at Socket.emit (events.js:191:7)
[2017-04-18T17:05:39Z] at readableAddChunk (_stream_readable.js:178:18)
[2017-04-18T17:05:39Z] at TCP.onread (net.js:559:20)```
When playing a wild, the bot should show the user their hand prior to the buttons, to reduce the need for calling /uno.
Answers:
username_1: oopsies! 👀
Status: Issue closed
|
OpusCapita/supplier | 257312980 | Title: Supplier registration fails in case company name is long
Question:
username_0: supplierId generation can generate ids longer than 30 chars, which doesn't fit in the db field.
The fix is to truncate the id to 27 chars and then, if that id already exists, append a 3-digit counter (as already done for the supplier id).
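A rough Python sketch of the proposed truncate-then-suffix scheme (the function and the set of existing ids are hypothetical; only the 27-character limit and the 3-digit counter come from the description above):
```python
def make_supplier_id(company_name: str, existing_ids: set) -> str:
    """Build an id that always fits the 30-char column, even for long company names."""
    base = company_name[:27]                # leave room for a 3-digit counter
    if base not in existing_ids:
        return base
    for counter in range(1, 1000):
        candidate = f"{base}{counter:03d}"  # e.g. "averyverylongcompanyname001"
        if candidate not in existing_ids:
            return candidate
    raise ValueError("no free supplier id left for this name")
```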
Status: Issue closed
Answers:
username_1: Fixed by https://github.com/OpusCapita/supplier/commit/5236fca51346a9ac25cf4841e39624c8bf36fbe3 |
flat3/lodata | 1148200849 | Title: DeclaredProperty on relationship
Question:
username_0: Hi, I have some troubles with this case :
I have :
```
\Lodata::discover(\App\Models\Offer::class);
\Lodata::discover(\App\Models\Product::class);
$entity = \Lodata::getEntityType('Offer');
$entity->addDeclaredProperty( 'name', Type::string() );
$entity->addDeclaredProperty( 'description', Type::string() );
$entity = \Lodata::getEntityType('Product');
$entity->addDeclaredProperty( 'name', Type::string() );
$entity->addDeclaredProperty( 'description', Type::string() );
```
I can see name and description on Offer, but not on Product. But if I comment out the first line, it works. I think it's because I have a relationship between Offer and Product:
```
Offer.php
#[LodataRelationship]
public function product() {
return $this->belongsTo(Product::class);
}
```
How can I make my case work ?
Thank you
Answers:
username_1: Hi, I created a database here with two tables for offers and products (only with an id primary key), and two empty models for Offer and Product, and added that relationship. Then I added your code to the service provider and it seemed to start up fine.
This is what I see in the OData metadata at `http://localhost:8000/odata/$metadata`, is this what you see?
```xml
<EntityType Name="Offer">
<Key>
<PropertyRef Name="id"/>
</Key>
<Property Name="id" Type="Edm.Int64" Nullable="false">
<Annotation Term="Org.OData.Core.V1.Computed" Bool="true"/>
</Property>
<Property Name="name" Type="Edm.String" Nullable="true"/>
<Property Name="description" Type="Edm.String" Nullable="true"/>
<NavigationProperty Name="product" Type="com.example.odata.Product" Nullable="true"/>
</EntityType>
<EntityType Name="Product">
<Key>
<PropertyRef Name="id"/>
</Key>
<Property Name="id" Type="Edm.Int64" Nullable="false">
<Annotation Term="Org.OData.Core.V1.Computed" Bool="true"/>
</Property>
<Property Name="name" Type="Edm.String" Nullable="true"/>
<Property Name="description" Type="Edm.String" Nullable="true"/>
</EntityType>
```
username_1: That is weird, what database backend are you using?
username_0: mysql.
I think you need to know that I'm using [this package](https://github.com/Astrotomic/laravel-translatable), and name and description in my case are stored in separate translation tables, not in the models' own tables:
- offers
- id
- product_id
- slug
- ...
- products
- id
- ...
- offer_translations
- id
- offer_id
- locale
- **name**
- **description**
- product_translations
- id
- product_id
- locale
- **name**
- **description**
Maybe this has something to do with it?
username_1: Yup, could well be if it's manipulating the model. I will see if I can recreate the issue with this package...
username_1: Okay I have reproduced the issue, it's actually a problem with the second line: `\Lodata::discover(\App\Models\Product::class);`
The first line discovers the Offer class, which has a relationship with the Product class, so both are discovered.
The second line causes the Product class to be rediscovered, creating a reference to a new EntityType. So when you requested the type getEntityType() it didn't give you the type the set was actually attached to.
The correct action is not to have the second "discover" method, but Lodata didn't tell you there was any problem with doing that. I'm going to fix that in the next release.
Can you confirm that removing`\Lodata::discover(\App\Models\Product::class);` fixes the issue?
username_1: Yes, exactly. In order to be able to create the relationship between the two models Lodata needed to eagerly discover the other class first. The next release will have a change that ignores the second "discover" so if you were to double up on that it won't cause the kind of issue you encountered.
Status: Issue closed
username_1: Resolved in https://github.com/flat3/lodata/releases/tag/v5.5.1 |
Merubokkusu/Discord-S.C.U.M | 807578886 | Title: ACCOUNT_PERMANENTLY_DISABLED
Question:
username_0: This package looked great! Unfortunately, I was only able to test it for ~1h before I received an email from Discord saying I was banned.
During these tests I sent 2 test DMS, tried reading messages in different channels etc.
It worked really well. And before you ask, no I didn't spam or did anything weird, I was still in the testing phase.
Please let me know if you're able to circumvent Discord's detection! I only need a bot to read messages and process them privately (in a channel that will not accept bots), nothing else.
Answers:
username_1: Pretty interesting how fast you got banned; anyway, could you provide a rundown of exactly what you did before getting banned?
username_0: I should note though, I got my alt account flagged pretty early, while joining the test server, since the app opened with my main account logged in, and I guess both connections on the same IP were already a flag for Discord. They made me verify with a phone number.
username_0: Also, I just realized I installed through pip, so maybe I got an older version? Will try with a new account now, be more careful not to log in on the same IP, and get the GitHub version.
username_2: Can you do `pip show discum` in the terminal/command line?
It should look very similar to this:
Name: discum
Version: 1.0.1
Summary: A Discord Self-Bot API
Home-page: https://github.com/username_1/Discord-S.C.U.M
Author: username_1
Author-email: <EMAIL>
License: MIT
Location: c:\users\username\appdata\local\programs\python\python39\lib\site-packages
Requires: filetype, websocket-client, random-user-agent, requests-toolbelt, ua-parser, requests
Required-by:
username_3: Sorry I have already pip uninstalled then installed the github version just in case.
New alt has been running smoothly for the past couple of hours, not sure what happened the first time but so far so good!
username_2: He most likely used an older version, early versions usually got my alts disabled but not for the latest versions. |
google/kf | 477610646 | Title: [meta] What samples should we have?
Question:
username_0: We have a few app samples today that are very simple (some are even used for the integration tests). We also have a few docs that deploy a few spring (spring music) and java things (UAA).
What other samples should we add to really highlight kf?
Answers:
username_1: - Something to highlight auto-scaling
- Something that shows a static and dynamic portion with path based routing (SPA with an API--maybe a chat service?)
- Something that shows off east/west routing vs north/south like a traceroute tool
- Something to show off HTTP2 support
username_2: The CloudFoundry acceptance tests mostly use [Dora](https://github.com/cloudfoundry/cf-acceptance-tests/tree/master/assets/dora) app(written in ruby) for [autoscaler](https://github.com/cloudfoundry/app-autoscaler) tests as it has built in API support for load testing. This is the app I used as well. Though I am not quite sure if we have support for ruby-buildpack already.
However, I relied on [hey](https://github.com/rakyll/hey) for actually making concurrent requests:
```
hey -z 30s -c 50 \
-host "knative-demo-app.default.example.com" \
"http://${IP_ADDRESS}"
```
https://github.com/username_2/knative-demo#autoscaling
So we should be able to perform some basic load testing using hey irrespective of the underlying app.
username_3: - deployment: showing rollout strategy - knative's blue/green revisions
- something that shows logs/telemetry
Status: Issue closed
|
kubernetes/kops | 277007427 | Title: Exposing additional ports on bastion host.
Question:
username_0: I would like to set up an OpenVPN server on the bastion host to securely access my network with private topology; unfortunately, it seems that only port 22 is allowed (although with custom CIDR subnets).
Is it possible to add custom security groups to ELB and instance to support any TCP/UDP traffic?
Answers:
username_1: Did you try `additionalSecurityGroups` in the InstanceGroupSpec?
https://github.com/kubernetes/kops/blob/4e284e9966298b082d7b1b1468b83b14ccd4d541/pkg/apis/kops/v1alpha2/instancegroup.go#L90
username_2: Can we close this?
username_0: Unfortunately, additionalSecurityGroups attaches the SG to the instance instead of the bastion's LB. I can simply bypass the LB and connect directly to the bastion, but in that case what's the point of the LB in front of it?
bcgov/entity | 588785974 | Title: Review Back-end Service Setup
Question:
username_0: ## Description:
Meet to give Name request team guidance on back-end service set up and deployment.
Acceptance for a Task:
- [ ] Requires deployments
- [ ] Add/ maintain selectors for QA purposes
- [ ] Test coverage acceptable
- [ ] Linters passed
- [ ] Peer Reviewed
- [ ] PR Accepted
- [ ] Production burn in completed
Status: Issue closed
Answers:
username_0: Had a meeting. Kial will be able to provide more info about GitHub Actions to Lucas after he does our front-end. If we needed to, we could set up the name processing and word classification service using the Jenkins bc, dec and pipelines, using the lear entity-filer as a base. We could copy and paste and change the params files.
uber/okbuck | 229042075 | Title: Issue with resources using okbuck
Question:
username_0: The following resources were not found when processing Pair(//app:res_googleDebug#resources-symlink-tree, buck-out/gen/app/res_googleDebug#resources-symlink-tree/res):
RDotTxtEntry{idType=int, type=drawable, name=home_screen_gradient, idValue=0x00000000, parent=home_screen_gradient}
BUILD FAILED: //app:res_googleDebug failed with exit code 1:
generate_resource_ids
Answers:
username_1: Possible duplicate of #318
username_1: Closing this as duplicate of #318 . See the workaround in https://github.com/uber/okbuck/issues/318#issuecomment-271396450
Status: Issue closed
|
evgeniybelyaev99/BMP_to_black_and_white | 774258864 | Title: BMP to black and white
Question:
username_0: Write a console application that is launched with 2 parameters:
1. An input file containing a BMP image that conforms to the specification https://ru.wikipedia.org/wiki/BMP
2. An output file containing the input image converted to black and white.
While running, the program informs the user about its progress.
If the input data does not conform to the BMP format, or on any other error, the program must prompt the user to re-enter the file paths and inform the user about the problems that occurred.
Verify that the program works correctly by visual inspection in Paint.NET.
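A minimal Python sketch of the conversion step (this leans on Pillow for brevity, whereas the assignment presumably expects a hand-written BMP reader per the linked spec; paths, messages, and error handling are illustrative):
```python
import sys
from PIL import Image

def to_black_and_white(src_path: str, dst_path: str) -> None:
    image = Image.open(src_path)     # raises if the file is not a readable image
    gray = image.convert("L")        # 8-bit grayscale ("black and white")
    gray.save(dst_path, format="BMP")
    print(f"Converted {src_path} -> {dst_path}")

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: bmp_to_bw.py <input.bmp> <output.bmp>")
        sys.exit(1)
    to_black_and_white(sys.argv[1], sys.argv[2])
```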
geerlingguy/ansible-role-haproxy | 787576027 | Title: Manage haproxy config with personal template
Question:
username_0: Hello, I want to manage a personal template with haproxy.
This is my work
```yaml
- name: Copy HAProxy configuration in place.
  template:
    src: "{{ haproxy_template | default('haproxy.cfg.j2') }}"
    dest: /etc/haproxy/haproxy.cfg
    mode: 0644
    validate: haproxy -f %s -c -q
  notify: restart haproxy
```
I change "SRC"
i have tested with this groupvars:
```
cat group_vars/lbservers
---
# HA PROXY #
haproxy_template: "../../../files/haproxy/haproxy.cfg.j2"
```
It works fine.
Regards,
Mathieu
Answers:
username_1: This is exactly what I have been looking for, since I wanted to enable the built-in stats endpoint.
Right now I run an entire copy of the role in my project just for this addition to the `haproxy.cfg.j2`:
```yaml
listen stats
bind :8404
mode http
stats enable
stats hide-version
stats refresh 10s
stats uri /
stats auth {{ haproxy_stats_username }}:{{ haproxy_stats_password }}
```
Maybe this could be added to the role? |
swagger-api/swagger-ui | 220580705 | Title: Circular Reference generate errors
Question:
username_0: Master version
Circular references generate errors.
For instance, when "Node" contains a list of nodes.
```
Errors
Resolver error
too much recursion
Resolver error
too much recursion
```
Parameters view is also broken
Maybe related to
Very slow loading times #1919
Answers:
username_1: Can you share a spec that reproduces it? We've tested circular reference support.
username_0: Here it is
[circular.swagger.json.zip](https://github.com/swagger-api/swagger-ui/files/909591/circular.swagger.json.zip)
username_0: 3.0.3 OK
3.0.5 and master KO
username_1: Interesting. I see a different behavior if the file is embedded vs hosted. No major issues with the embedded version, but once hosted... :boom:.
username_2: Looks like it was introduced into the static build here: https://github.com/swagger-api/swagger-ui/commit/dd8fb58fcd50c0f0639e540d03a273557a66cf9e
username_2: @username_1, thanks for the hint. You just saved me a couple additional hours of debugging 😄
The problem has been isolated to swagger-js, writing regression tests and looking for a fix now.
username_2: Problem introduced here: https://github.com/swagger-api/swagger-js/commit/4e2dff33e75744e9
username_2: Hi everyone - we just merged the PR that fixes this in swagger-js, and you can expect the fix to be in our Friday release tomorrow evening 😄
Status: Issue closed
|
sammchardy/python-binance | 906680148 | Title: cannot import name 'AsyncClient' from partially initialized module 'binance'
Question:
username_0: **Describe the bug**
-- from binance import AsyncClient --
**Environment (please complete the following information):**
- Python version: 3.8
- Virtual Env: virtualenv
- OS: linux
- python-binance version 1.0.1
**Logs or Additional context**
Add any other context about the problem here.
Answers:
username_1: This happened to me; make sure you don't have another module named binance in your code.
username_0: Thanks.
Yes, I found it.
File name: socket.py
It was because of this; I changed it.
Status: Issue closed
|
DougHennig/ProjectExplorer | 533562419 | Title: GIT Integration: Add-in for auto-generation of binaries from updated text equivalent
Question:
username_0: Now that I've had a bit of time to get more comfortable with how PEx integrates with Git, I've got a workflow question to put out there. It appears (quite logically) that PEx integrates with my local Git repo. It's up to me to interact with my remote repo outside of VFP and PEx. It appears that if another dev has pushed changes to the remote, I have to pull those changes to my local branch and manually generate the binaries from the text equivalents. It seems to me it should be possible to use an Add-in to see if the text equivalent is newer and then automatically update the local binary at the time I go to either edit or build. If this is a good assumption, the next question is has somebody already written this Add-in? Or perhaps it's already in Mike Potjer's VFPx Git tools?
TIA
Answers:
username_1: The text equivalent will always be newer than the binary simply because it's generated after the binary is saved. One option would be to do a test: output each binary to a text file in a temp folder, compare that text file to the one in the dev folder (which may be newer due to a repository update; a simple comparison of checksums would be fine), and for any file where the comparison fails, regenerate the binary from the text file.
How does that sound?
username_0: Much better than a timestamp, Doug. Is this a suggestion on how I can write this add-in or an offer? :-)
username_2: I have been thinking of this for some time. I've been having trouble with that process. I have 644 binary files in my project folder that get changed by foxbin2prg. It takes over 2 minutes to process my project. (PEx takes over 20 seconds to open my project.)
I have been looking more at attempting to capture if changes have been made from the git responses more than the processing the project.
Though I have thought about a second folder that holds the text equivalents and comparing checksums against those. There is always the possibility that second folder will be out of date somehow.
username_1: Thinking about this some more. I don't think we'd need to regen all the text files; theoretically, they'd all be there and current (if not, you can use FoxBin2PRG to create them). So, there are a couple of possibilities:
- Run a function that copies all text file to a temp folder. You'd need to do that BEFORE pulling/updating from the remote repo. Then do the pull/update. Then run another function that does the compare and generation of binaries from any changed text files.
- Do the pull/update then generate binaries from text files with today's date.
Thoughts? Any other ideas?
username_0: Hi Doug,
HNY to you and yours! Sorry I let this sit for so long. I've been buried in some new stuff that I had to keep focused on.
I think I like option number 2. It seems like it would be the simplest approach as long as the assumption that the "today's date of the text file" rule holds true. As far as PeX is concerned, hooking this into the BeforeModifyFile(?) event would be good.
It also occurs to me that something like this should also be called from a BeforeBuild event, although that feels like a separate use case which needs to be handled on its own, and is not PeX specific. Much like setting the recompile all option, maybe just automagically regenerating binaries when building would be another way to go for that situation. I've already got some build event code that turns off the readonly attribute (which I really can get rid of now that I'm out of VSS-one-developer-at-a-time mode).
username_1: I've been playing with this a bit and think I have a process that will work.
First, Project Explorer does a GIT FETCH to retrieve changes from the remote repository. It then does GIT DIFF --NAME-ONLY ORIGIN/MASTER to list the changed files from the fetch and saves that list. Then it does GIT MERGE FETCH_HEAD to merge the changes. You'll deal with any conflicts at this point.
Now it can go through the list of changed files and for any that are text equivalents, use FoxBin2PRG to convert them to binary. That way, it only uses FoxBin2PRG on the necessary files rather than all files.
One complication: if the merge fails because of a conflict (e.g. you and I both modified a form), then it can't do the last step, or it could but you'll have to regenerate the binaries from the merged files yourself.
Thoughts?
username_1: I added Update from Remote Repository, Push to Remote Repository, and Resolve Conflict functions to the shortcut menu that implement these ideas. If a conflict occurs during the merge, the TortoiseGit conflict dialog appears (or you're told about the conflict if TortoiseGit isn't installed).
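For illustration, the fetch, diff, merge, and regenerate flow described above sketched in Python rather than VFP (the text-equivalent extensions are an assumption and would need to match whatever FoxBin2PRG produces in your setup):
```python
import subprocess

TEXT_EXTENSIONS = (".pj2", ".sc2", ".vc2")  # assumption: adjust to your FoxBin2PRG text equivalents

def update_from_remote(repo_dir: str) -> list:
    """Fetch, list changed files, merge, and return the text equivalents to regenerate."""
    def run(*args):
        return subprocess.run(args, cwd=repo_dir, check=True,
                              capture_output=True, text=True)

    run("git", "fetch")
    diff = run("git", "diff", "--name-only", "origin/master")
    changed = diff.stdout.splitlines()
    run("git", "merge", "FETCH_HEAD")  # a conflict raises here and needs manual resolution
    return [f for f in changed if f.lower().endswith(TEXT_EXTENSIONS)]
```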
Status: Issue closed
|
spark-jobserver/spark-jobserver | 150350493 | Title: Job Server Not Able To Handle All The Requests At A Time, Even Though Resources Are Available
Question:
username_0: We have 32 cores and 60 GB RAM on the machine where we deployed the job server, but when multiple people execute jobs concurrently, the job server handles only 6 requests at a time. Is there any configuration the job server uses when submitting jobs? We are using the context-per-JVM feature.
We are running an AWS EMR cluster with 10 nodes, 160 vCores and 230 GB RAM.
Answers:
username_1: Just to be clear, are you talking about 6 concurrent contexts (JVMs/applications) at a time, or 6 concurrent SJS jobs within one context?
For the latter case, there is a parameter you can adjust:
spark.jobserver.max-jobs-per-context
The default is 8, which probably translates to about 6 concurrent jobs. You will also want to enable spark.scheduler.mode=FAIR and configure your fair-scheduler.xml properly.
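For reference, the relevant pieces would look roughly like this (values are illustrative; the exact nesting, e.g. whether the spark.* keys belong under a passthrough block, depends on your SJS version):
```
# job server .conf (HOCON): illustrative values only
spark {
  jobserver {
    max-jobs-per-context = 20    # default is 8
  }
  context-settings {
    spark.scheduler.mode = "FAIR"
    spark.scheduler.allocation.file = "/path/to/fair-scheduler.xml"
  }
}
```
```
<!-- fair-scheduler.xml: illustrative pool definition -->
<allocations>
  <pool name="sjs_pool">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>2</minShare>
  </pool>
</allocations>
```
Jobs then opt into the pool with sc.setLocalProperty("spark.scheduler.pool", "sjs_pool").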
-Evan
username_2: If you set max jobs and set up FAIR scheduling in the context as well as in the jobs themselves, I've had over 100 jobs running across 10 contexts at once with no issues at all. Also on much less hardware than yours.
username_0: Hi @username_1, @username_2, thank you very much for responding. In our case we are running each job in a different context, and since context-per-JVM is enabled, every job executes in a different JVM. In this scenario, is there any config that decides how many jobs we can execute concurrently, or does it depend on the number of available cores on the master node?
Also, I am not able to find any property in the job server conf to specify the number of cores for the driver. How many cores will the job server allocate to each driver when running each job under a different context/JVM?
Thanks,
Santhosh.
username_1: So the only thing limiting the number of concurrent contexts is the amount of available memory/CPU in your cluster.
As for the number of cores for the driver: driver apps are typically single-threaded and consume only 1 core. The job server's actor logic uses more threads, but with very low activity.
username_1: @username_0 closing. We can keep chatting but this does not appear to be a SJS issue.
Status: Issue closed
|
MaikuB/flutter_local_notifications | 1133107504 | Title: Potential issue with seeing notification actions on iOS
Question:
username_0: At some point, I stopped being able to see notification actions on iOS. Notification actions do appear on macOS though. Upon digging further, the culprit seems to be around this code https://github.com/username_0/flutter_local_notifications/blob/f126ddb781d6df88e3c89a93962a123153f107b8/flutter_local_notifications/ios/Classes/FlutterLocalNotificationsPlugin.m#L348
Here the code is querying the list of currently registered categories and then appending to that list the categories that were specified via the APIs exposed by the plugin. As far as I can tell from various docs/articles and my own testing, querying and then appending isn't necessary. It's sufficient to replace the logic with a single call to `[center setNotificationCategories:newCategories]`.
If I'm not mistaken, the logic in its current state (i.e. where it merges the categories) could potentially lead to bugs: if apps call `initialize()` multiple times with different categories, the older categories aren't removed, even though the semantics imply that the categories in the last call should be the ones that remain.
@ened have I missed a reason why the plugin should query the list of notification categories and append to it instead of overriding it?
|
stephy/CalendarPicker | 313360879 | Title: RN 0.54 / React 16.3 deprecation warnings for componentWillReceiveProps, componentWillUpdate
Question:
username_0: ```
Warning: componentWillReceiveProps is deprecated and will be removed in the next major version. Use static getDerivedStateFromProps instead.
Please update the following components: CalendarPicker, Swiper
Learn more about this warning here:
https://fb.me/react-async-component-lifecycle-hooks
Warning: componentWillUpdate is deprecated and will be removed in the next major version. Use componentDidUpdate instead. As a temporary workaround, you can rename to UNSAFE_componentWillUpdate.
Please update the following components: CalendarPicker, Swiper
```
The fixes are actually simple and maintain backward compatibility.
Replace `componentWillMount` with `componentDidMount`.
Replace `componentWillReceiveProps` with `componentDidUpdate` and tweak logic appropriately.
In the rare case that `getDerivedStateFromProps` is necessary, a polyfill can be used to maintain backwards compatibility.
See https://github.com/reactjs/react-lifecycles-compat
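As a rough sketch of the `componentWillReceiveProps` to `componentDidUpdate` change (generic component and prop names, not this library's actual code):
```
import React from 'react';

// Illustrative only: the prop/state names are made up for the example.
export default class Picker extends React.Component {
  constructor(props) {
    super(props);
    this.state = { currentDate: props.selectedDate };
  }

  // Before (deprecated):
  // componentWillReceiveProps(nextProps) {
  //   if (nextProps.selectedDate !== this.props.selectedDate) {
  //     this.setState({ currentDate: nextProps.selectedDate });
  //   }
  // }

  // After: works on old and new React versions, guarded to avoid update loops.
  componentDidUpdate(prevProps) {
    if (prevProps.selectedDate !== this.props.selectedDate) {
      this.setState({ currentDate: this.props.selectedDate });
    }
  }

  render() {
    return null; // rendering omitted in this sketch
  }
}
```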
Answers:
username_0: Fixed in f4fbb16d8970eb99ed08da53a612234839e70c05
Status: Issue closed
|
rancher/rancher | 198187908 | Title: Webhook throwing error in logs if no hooks exist
Question:
username_0: **Rancher Version:** v1.3.0-rc3
**Steps to Reproduce:**
1. Setup a new version of rancher
2. SSH into rancher and view logs
3. Log into rancher
**Results:** Webhooks throw errors every time I refresh the page or log in. This only happens if I have no webhooks. As soon as I add one, it goes away.
`time="2016-12-30T19:29:39Z" level=info msg="Listing webhooks"
time="2016-12-30T19:29:39Z" level=warning msg="Skipping webhook client.GenericObject{Resource:client.Resource{Id:\"1go1\", Type:\"register\", Links:map[string]string{\"account\":\"http://localhost:8080/v2-beta/projects/1a5/register/1go1/account\", \"self\":\"http://localhost:8080/v2-beta/projects/1a5/register/1go1\"}, Actions:map[string]string{\"remove\":\"http://localhost:8080/v2-beta/projects/1a5/register/1go1/?action=remove\", \"stop\":\"http://localhost:8080/v2-beta/projects/1a5/register/1go1/?action=stop\"}}, AccountId:\"1a5\", Created:\"2016-12-30T18:58:56Z\", Data:map[string]interface {}(nil), Description:\"\", Key:\"\", Kind:\"register\", Name:\"\", RemoveTime:\"\", Removed:\"\", ResourceData:map[string]interface {}(nil), State:\"active\", Transitioning:\"no\", TransitioningMessage:\"\", TransitioningProgress:0, Uuid:\"75bebdaf-0a3c-437d-ac22-f5dbcc00379a\"} because: Couldn't read webhook data. Bad driver"
time="2016-12-30T19:29:39Z" level=warning msg="Skipping webhook client.GenericObject{Resource:client.Resource{Id:\"1go2\", Type:\"register\", Links:map[string]string{\"self\":\"http://localhost:8080/v2-beta/projects/1a5/register/1go2\", \"account\":\"http://localhost:8080/v2-beta/projects/1a5/register/1go2/account\"}, Actions:map[string]string{\"remove\":\"http://localhost:8080/v2-beta/projects/1a5/register/1go2/?action=remove\", \"stop\":\"http://localhost:8080/v2-beta/projects/1a5/register/1go2/?action=stop\"}}, AccountId:\"1a5\", Created:\"2016-12-30T18:58:56Z\", Data:map[string]interface {}(nil), Description:\"\", Key:\"\", Kind:\"register\", Name:\"\", RemoveTime:\"\", Removed:\"\", ResourceData:map[string]interface {}(nil), State:\"active\", Transitioning:\"no\", TransitioningMessage:\"\", TransitioningProgress:0, Uuid:\"1c438e0f-ad55-4e7e-9a01-29e4997617d7\"} because: Couldn't read webhook data. Bad driver"
time="2016-12-30T19:29:39Z" level=warning msg="Skipping webhook client.GenericObject{Resource:client.Resource{Id:\"1go3\", Type:\"register\", Links:map[string]string{\"account\":\"http://localhost:8080/v2-beta/projects/1a5/register/1go3/account\", \"self\":\"http://localhost:8080/v2-beta/projects/1a5/register/1go3\"}, Actions:map[string]string{\"remove\":\"http://localhost:8080/v2-beta/projects/1a5/register/1go3/?action=remove\", \"stop\":\"http://localhost:8080/v2-beta/projects/1a5/register/1go3/?action=stop\"}}, AccountId:\"1a5\", Created:\"2016-12-30T18:59:12Z\", Data:map[string]interface {}(nil), Description:\"\", Key:\"\", Kind:\"register\", Name:\"\", RemoveTime:\"\", Removed:\"\", ResourceData:map[string]interface {}(nil), State:\"active\", Transitioning:\"no\", TransitioningMessage:\"\", TransitioningProgress:0, Uuid:\"f8b65c0c-f10d-4c85-bfec-14438a90812c\"} because: Couldn't read webhook data. Bad driver"
2016/12/30 19:29:39 http: multiple response.WriteHeader calls`
**Expected:** No errors?
Answers:
username_0: Version - master 1/17
Verified fixed
Status: Issue closed
|
moshfeu/vscode-diff-merge | 688821991 | Title: It's compatible with the SVN extension
Question:
username_0: It doesn't work properly with the SVN extension to show what has been changed.
Answers:
username_1: Thanks for the report.
Can you post the link to the svn extension? I'm not familiar with it.
username_0: Here is the SVN extension: https://marketplace.visualstudio.com/items?itemName=johnstoncode.svn-scm.
SVN is another revision control tool, similar to git; see https://en.wikipedia.org/wiki/Apache_Subversion.
The SVN extension can be used in SVN-controlled projects in place of git.
username_1: Thanks. I'll try it out and see if there's anything I can do.
username_1: I just implemented it. Please download the zip ([from the PR](https://github.com/username_1/vscode-diff-merge/pull/23#issue-486052719)), unzip it (to get the .vsix file) and check if that's what you meant.
Thanks 🙏
Status: Issue closed
|