repo_name | issue_id | text
---|---|---
nfroidure/ttf2woff2 | 522636071 | Title: Install failed after update to node 12
Question:
username_0: Recently I updated my Node to 12; here is the environment list:
Node v12.13.0
Npm v6.12.0
ttf2woff2 v3.0.0
I already installed "windows-build-tools"
This is the error message:
"gyp ERR! stack Error: `C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\MSBuild.exe` failed with exit code: 1
gyp ERR! stack at ChildProcess.onExit (C:\Users\john.zuo\AppData\Roaming\nvm\v12.13.0\node_modules\npm\node_modules\node-gyp\lib\build.js:194:23)
gyp ERR! stack at ChildProcess.emit (events.js:210:5)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)
gyp ERR! System Windows_NT 10.0.18362
gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "C:\\Users\\john.zuo\\AppData\\Roaming\\nvm\\v12.13.0\\node_modules\\npm\\node_modules\\node-gyp\\bin\\node-gyp.js" "build"
gyp ERR! cwd C:\Dev\CCL\Website\Trunk\CCL.Website\node_modules\ttf2woff2
gyp ERR! node -v v12.13.0
gyp ERR! node-gyp -v v5.0.5"
The strangest thing is that I tried creating another clean Angular project and installing ttf2woff2, and it succeeded.
I compared the two projects' node_modules and found that in the failing project some files are missing:
addon.exp, addon.ilk, addon.lib, addon.node, addon.pdb, addon.obj, etc.
Could these missing files be the reason?
Thank you very much.
Answers:
username_1: I have the same issue, but in a Debian Docker container. It does not build with Node 12.
```
In file included from ../../nan/nan_new.h:189:0,
from ../../nan/nan.h:203,
from ../csrc/addon.cc:1:
../../nan/nan_implementation_12_inl.h: In static member function ‘static Nan::imp::FactoryBase<v8::Function>::return_t Nan::imp::Factory<v8::Function>::New(Nan::FunctionCallback, v8::Local<v8::Value>)’:
../../nan/nan_implementation_12_inl.h:105:32: error: no matching function for call to ‘v8::Function::New(v8::Isolate*&, void (&)(const v8::FunctionCallbackInfo<v8::Value>&), v8::Local<v8::Object>&)’
, obj));
```
username_1: This is the related issue: https://github.com/nodejs/nan/issues/849
username_2: @username_0 I'm having the same problem as you here on Win10. Have you been able to figure this out by any chance?
username_3: [email protected] now depends on nan@^2.14.2. Perhaps this can be closed?
Status: Issue closed
|
SC-Networks/evalanche-soap-api-connector | 404301045 | Title: class not found
Question:
username_0: I created a file in the root containing the following code, but when I open it I just get a fatal error:
$connection = \Scn\EvalancheSoapApiConnector\EvalancheConnection::create(
'given host',
'given username',
'given password'
);
Fatal error: Uncaught Error: Class 'Scn\EvalancheSoapApiConnector\EvalancheConnection' not found in /var/www/html/index.php:2 Stack trace: #0 {main} thrown in /var/www/html/index.php on line 2
Can someone tell me how to fix this? Thanks.
Answers:
username_1: The autoload file must be included first.
```php
require 'vendor/autoload.php';
....
```
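For completeness, a minimal sketch of a working index.php, assuming the connector was installed with Composer in the project root (host and credentials are placeholders):
```php
<?php
// Composer's autoloader must be required before any package classes are used.
require __DIR__ . '/vendor/autoload.php';

$connection = \Scn\EvalancheSoapApiConnector\EvalancheConnection::create(
    'given host',      // placeholder
    'given username',  // placeholder
    'given password'   // placeholder
);
```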
Status: Issue closed
username_2: @username_0 Hi. This connector uses the official PHP package manager, `composer`.
Please see the official PHP (https://www.php.net) and Composer (https://getcomposer.org) documentation for PHP and programming-related questions. If you have further questions regarding the usage of the connector, we're happy to help. Thank you for your understanding. |
fabric8io/docker-maven-plugin | 669242335 | Title: docker:push uses wrong repository name in module
Question:
username_0: # Description
The `docker:push` goal fails in my project, which runs the docker-maven-plugin in a module. The immediate cause of failure is that access to the repository (Docker Hub) is denied, but the log messages from the plugin indicate it is pushing (and has previously built) the Docker image with the wrong name. It has used a name constructed from the parent and module names, rather than using the configured name.
# Versions
Maven 3.6.0 on Ubuntu 18.04, with Docker 19.03.12. Plugin version 0.33.0.
# Configuration
My Maven module is called MC-back-end, and its parent is called MC. My build configuration is this:
```xml
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<executions>
<execution>
<id>Docker-build</id>
<phase>package</phase>
<goals>
<goal>build</goal>
</goals>
</execution>
<execution>
<id>Docker-push</id>
<goals>
<goal>push</goal>
</goals>
</execution>
</executions>
<configuration>
<image>
<name>docker.io/benedictadamson/mc-back-end:${project.version}</name>
<build>
<dockerFile>Dockerfile</dockerFile>
</build>
</image>
</configuration>
</plugin>
```
I expect it to attempt to push `docker.io/benedictadamson/mc-back-end:2.4.4-SNAPSHOT`, but it reports trying to push `docker.io/mc/mc-back-end`.
# Log messages
The log messages (with debug messages present):
```
[INFO] --- docker-maven-plugin:0.33.0:push (Docker-push) @ MC-back-end ---
[DEBUG] Configuring mojo io.fabric8:docker-maven-plugin:0.33.0:push from plugin realm ClassRealm[plugin>io.fabric8:docker-maven-plugin:0.33.0, parent: jdk.internal.loader.ClassLoaders$AppClassLoader@55054057]
[DEBUG] Configuring mojo 'io.fabric8:docker-maven-plugin:0.33.0:push' with basic configurator -->
[DEBUG] (f) execution = io.fabric8:docker-maven-plugin:0.33.0:push {execution: Docker-push}
[DEBUG] (f) keepContainer = false
[DEBUG] (f) logStdout = false
[DEBUG] (f) maxConnections = 100
[DEBUG] (f) project = MavenProject: uk.badamson.mc:MC-back-end:2.4.4-SNAPSHOT @ /home/benedict/git/MC/MC-back-end/pom.xml
[DEBUG] (f) removeVolumes = false
[DEBUG] (f) retries = 0
[DEBUG] (f) session = org.apache.maven.execution.MavenSession@30c1da48
[DEBUG] (f) settings = org.apache.maven.execution.SettingsAdapter@314f59b
[DEBUG] (f) skip = false
[DEBUG] (f) skipExtendedAuth = false
[Truncated]
Caused by: io.fabric8.maven.docker.access.DockerAccessException: Unable to push 'mc/mc-back-end:latest' : denied: requested access to the resource is denied
at io.fabric8.maven.docker.access.hc.DockerAccessWithHcClient.pushImage(DockerAccessWithHcClient.java:496)
at io.fabric8.maven.docker.service.RegistryService.pushImages(RegistryService.java:59)
at io.fabric8.maven.docker.PushMojo.executeInternal(PushMojo.java:44)
at io.fabric8.maven.docker.AbstractDockerMojo.execute(AbstractDockerMojo.java:238)
... 22 more
Caused by: io.fabric8.maven.docker.access.DockerAccessException: denied: requested access to the resource is denied
at io.fabric8.maven.docker.access.chunked.PullOrPushResponseJsonHandler.throwDockerAccessException(PullOrPushResponseJsonHandler.java:46)
at io.fabric8.maven.docker.access.chunked.PullOrPushResponseJsonHandler.process(PullOrPushResponseJsonHandler.java:23)
at io.fabric8.maven.docker.access.chunked.EntityStreamReaderUtil.processJsonStream(EntityStreamReaderUtil.java:27)
at io.fabric8.maven.docker.access.hc.DockerAccessWithHcClient$HcChunkedResponseHandlerWrapper.handleResponse(DockerAccessWithHcClient.java:779)
at io.fabric8.maven.docker.access.hc.ApacheHttpClientDelegate$StatusCodeCheckerResponseHandler.handleResponse(ApacheHttpClientDelegate.java:179)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:223)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:165)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:140)
at io.fabric8.maven.docker.access.hc.ApacheHttpClientDelegate.post(ApacheHttpClientDelegate.java:104)
at io.fabric8.maven.docker.access.hc.DockerAccessWithHcClient.doPushImage(DockerAccessWithHcClient.java:704)
at io.fabric8.maven.docker.access.hc.DockerAccessWithHcClient.pushImage(DockerAccessWithHcClient.java:494)
... 25 more
``` |
bottlerocket-os/bottlerocket | 975798610 | Title: Add new instance types to max pods calculation.
Question:
username_0: **What I'd like:**
AWS launched support for m6i instances. They should be added to the max pods calculations [here](https://github.com/bottlerocket-os/bottlerocket/blob/328b50aa489bd6c7d386ea7ad0790ec39d2a9ca9/packages/os/eni-max-pods). More details about m6i can be found [here](https://aws.amazon.com/ec2/instance-types/m6i/) and the corresponding PR for the AL2 based EKS AMI is [here](https://github.com/awslabs/amazon-eks-ami/pull/735/files)
**Any alternatives you've considered:**
None
Answers:
username_1: This was added in #1724
Status: Issue closed
|
Azure/azure-iot-sdks | 155368523 | Title: An error or a warning or just ignore it =>git: 'submodule' is not a git command. See 'git --help'
Question:
username_0: When I was trying to run "git clone --recursive https://github.com/Azure/azure-iot-sdks.git" from the tutorial at "https://github.com/Azure/azure-iot-sdks/blob/master/doc/get_started/yocto-intel-edison-c.md", it came back with the following, as you can see below. I do not know if it is a warning or an error. Does anyone have an idea how to deal with it, or should I just ignore it?
Thanks in advance.
git: 'submodule' is not a git command. See 'git --help'.
Answers:
username_1: Hi @username_0,
Apparently the version of git available to the Edison from its opkg repositories is a version known as "git light". This version omits some of the less commonly used commands and, unfortunately, submodule is one of those. I am looking for a viable circumvention.
_<NAME> MSFT_
username_0: @username_1 I found that this link may be helpful, and I am trying to solve the problem by following it to update my Node.js, which is version 0.12... I will let you know if it works. https://github.com/creationix/nvm
username_2: Linking to GitHub issue: https://github.com/Azure/azure-iot-sdks/issues/562
Status: Issue closed
username_2: Hi @username_0
If you still have the issue, please re-open it. |
Azure/azure-openapi-validator | 280664537 | Title: additionalProperties should not be allowed with x-ms-client-flatten: true extension
Question:
username_0: The following will not be allowed.
`StorageAccountProperties` will be flattened, so we would not know where to put the additionalProperties.
```json5
{
"StorageAccount": {
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"properties": {
"x-ms-client-flatten": true,
"$ref": "#/StorageAccountProperties"
}
}
},
"StorageAccountProperties": {
"additionalProperties": {
"type" : "object"
},
"properties": {
"p1": {
"type": "string"
},
"p2": {
"type": "string"
}
}
}
}
```
Moreover, it gets complicated if the `StorageAccount` object also had `"additionalProperties": {"type": "object"}`. Then the modeler will have a conflict between the top level additionalProperties and flattened additionalProperties.
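For illustration, a hedged sketch of that conflicting case, where the container model also declares `additionalProperties` (abridged; property lists shortened):
```json5
{
  "StorageAccount": {
    "additionalProperties": { "type": "object" }, // top-level bag
    "properties": {
      "properties": {
        "x-ms-client-flatten": true,
        "$ref": "#/StorageAccountProperties" // flattening also pulls in its additionalProperties
      }
    }
  }
}
```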
So it is in the best interest to not allow model flattening along with additionalProperties. |
mat1jaczyyy/apollo-studio | 527649605 | Title: Pattern: Control+Click a frame makes pattern play from that frame
Question:
username_0: ### Is your feature request related to a problem? Please describe.
You can't currently preview a pattern mid-pattern.
### Describe the solution you'd like
Control-clicking a frame could play the pattern from the clicked frame
### Your setup
Apollo 1.3.0
Windows 10 1903<issue_closed>
Status: Issue closed |
iris-hep/func_adl | 691637201 | Title: Add AsParqetFiles
Question:
username_0: Make sure there is an end-point in `ObjectStream` that will render parquet files
<issue_closed>
Status: Issue closed |
rossfuhrman/_why_the_lucky_markov | 397666293 | Title: Striff, ladies and gentlemen. This widens the possibilities for where you can solve it .
My dad touched his chin briefly.
Question:
username_0: Toot: Striff, ladies and gentlemen. This widens the possibilities for where you can solve it .
My dad touched his chin briefly.
One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots |
arcticicestudio/igloo | 247655664 | Title: Auto toggle sign column
Question:
username_0: Vim 8 introduced a new option to configure the state of the sign column to be always visible (`yes`), always hidden (`no`) or to automatically toggle (`auto`) when signs are available to display.
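A minimal vimrc sketch of adopting this option (hedged; the fallback flag is the legacy vim-gitgutter option discussed in the next paragraph):
```vim
" Prefer Vim 8's native signcolumn option over the legacy gitgutter flag.
if exists('&signcolumn')
  set signcolumn=auto
else
  let g:gitgutter_sign_column_always = 1
endif
```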
This improvement is related to the warning message added in airblade/vim-gitgutter@dc73a81, which also advises removing the custom option `g:gitgutter_sign_column_always = 1` and using `set signcolumn=auto` instead.<issue_closed>
Status: Issue closed |
crisp-im/crisp-sdk-android | 512436930 | Title: Stuck on loading
Question:
username_0: Let me know if you need any additional logs or info to solve the issue.
Thanks
Answers:
username_1: Hi there.
I'm having the same problem.
@username_0 did you find any workaround?
username_2: We decided not to integrate Crisp in our Android app and to just use Messenger.
username_3: @username_1 Did you find a workaround for this ? Thanks :)
username_1: Hi @username_3
Sadly, I didn't find any solution :(
username_3: Hi @username_1, just found a fix that works for me :
My first thought was that the WebView was just not being deleted properly, so I tried to ensure the deletion of the fragment (and its WebView). I ran into several issues, including `Duplicate ID, tag null, or parent id with another fragment for CrispFragment`.
So I created my own fragment that contains the `CrispFragment` and handles the `CrispFragment` instance's lifecycle:
```java
@EFragment(R.layout.my_useful_crisp_fragment)
public class MyUsefulCrispFragment extends Fragment {
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
initCrispSDK();
}
public void initCrispSDK() {
//initialize Crisp-SDK
}
@Override
public void onViewCreated(View view, @Nullable Bundle savedInstanceState) {
super.onViewCreated(view, savedInstanceState);
FragmentManager fragmentManager = getActivity().getFragmentManager();
CrispFragment fragment = (CrispFragment) fragmentManager.findFragmentByTag("crispFragment");
if (fragment == null) {
// No existing instance yet: log it, create the CrispFragment, and attach it.
Logs.debug(this, "Creating new CrispFragment");
fragment = new CrispFragment();
FragmentTransaction ft = fragmentManager.beginTransaction();
ft.add(R.id.crispFragmentContainer, fragment, "crispFragment");
ft.commit();
fragmentManager.executePendingTransactions();
} else {
Logs.debug(this, "Reusing existing CrispFragment");
}
}
@Override
public void onDestroyView() {
super.onDestroyView();
if (getFragmentManager() != null) {
FragmentManager fragmentManager = getActivity().getFragmentManager();
CrispFragment fragment = (CrispFragment) fragmentManager.findFragmentByTag("crispFragment");
if (fragment != null) {
fragmentManager.beginTransaction().remove(fragment).commit();
}
}
}
}
```
(layouts/my_useful_crisp_fragment)
```xml
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:id="@+id/crispFragmentContainer"
android:orientation="horizontal">
</FrameLayout>
```
username_4: Hey @username_3, do you think it would be possible to use this approach in my React Native library (https://github.com/username_4/react-native-crisp-chat-sdk) for this SDK? At the moment I just inflate the fragment into a LinearLayout view (https://github.com/username_4/react-native-crisp-chat-sdk/blob/master/android/src/main/java/com/reactnativecrispchatsdk/CrispChatView.java), but if I render the screen a second time I get the `Duplicate ID, tag null, or parent id with another fragment for CrispFragment` error. Android development isn't my expertise, so any help fixing this would be greatly appreciated. I am using this SDK in a React Native project of mine.
username_5: This bug has been open for a year. Why is the Crisp team so uninterested in resolving it? We had to abandon Crisp for this single reason; everything else works perfectly for us. It's so annoying. They rarely respond to email; are they out of business? Great product, zero customer service.
username_3: @username_4 No idea :( The main thing to ensure is to delete everything and recreate everything between two displays of the CrispFragment.
username_6: We are working on a brand new version of the Crisp SDK. It's currently in development and should be released in the coming weeks.
I do agree this current release is obviously imperfect.
The next version will be **fully native**
username_3: Thank you @username_6. Is there an ETA for the new version? Could you also, in the meantime, accept merge requests?
username_4: Great stuff @username_6. I look forward to the new version. Once it's released I will integrate it into the react-native wrapper.
username_4: Thanks for the reply @username_3. I will give it a go and let you know how I get on.
username_7: Did anyone find a solution for that fragment issue?
username_8: @username_6 Thanks! Do you have an ETA on the release? Re: your comment on May 22
username_6: You can expect a beta in 2 weeks
Sent from my iPhone
>
username_9: Hello @username_6, is beta version already released?
username_10: It's been well over a year since this was opened. I'm also getting infinite loading. Is there ANY estimate on when the native version will land? The current version is barely usable. Or at least give us something to fix this infinite loading thing.
username_3: I'm not 100% sure of what I'm about to tell you, but I still hope it helps:
Since they are working on a new SDK, I guess they won't really look at this issue, since it may become irrelevant.
username_4: @username_6 any update on when you are going to release the new version of the android SDK?
username_11: Hello there. We published the early beta of the Crisp Android native SDK. You can find it here: https://github.com/crisp-im/crisp-sdk-android/tree/beta
username_4: Hey @username_11. Thank you. I am now using the new beta version in https://github.com/username_4/react-native-crisp-chat-sdk and it works great. When do you expect to release a stable version? And will the stable version include the API methods just like the iOS sdk does?
username_11: Awesome!
I don't have any ETA for the stable version. The goal is indeed to have the same possibilities between Android and iOS versions.
Status: Issue closed
|
github/docs | 1147431174 | Title: Better explanation for "Used by" section
Question:
username_0: But then when checking the [`About the dependency graph`](https://docs.github.com/en/github/visualizing-repository-data-with-graphs/about-the-dependency-graph#supported-package-ecosystems) link, it only states how to parse DEPENDENCIES of a project, not how to get your project recognized as a possible dependency.
### Additional information
Currently I am trying to get a public repo of a PyPI library to show the `Used by` section, and I have had no luck making it pick up the package.
- [PyPI package](https://pypi.org/project/fal/)
- [Repository](https://github.com/fal-ai/fal)
- [Other repo using said package](https://github.com/fal-ai/fal_dbt_examples/network/dependencies)
<img width="536" alt="image" src="https://user-images.githubusercontent.com/2745502/155229442-ad93fa0b-4c60-46b6-b46c-cdf03cb01225.png">
Answers:
username_1: @username_0
Thanks so much for opening an issue! I'll triage this for the team to take a look :eyes:
username_2: Hi @username_0 - I've asked an internal team about this and will let you know as soon as they get back to me.
username_0: Great, thank you both!
username_2: Hi @username_0 - the internal team confirms that the docs are correct and that you should not need to take any other steps for the dependent to show up as you expect. We've opened an internal issue for the team to investigate why your dependent isn't showing up.
username_2: Since this does not appear to be a problem with the docs, I'm going to close this issue.
Status: Issue closed
username_0: Ok @username_2, thank you very much.
username_0: Is there anywhere I should be reporting this sort of issue instead? Or a way to know about the results of this ticket?
username_2: That's a great question. The [GitHub Community](https://github.community/) is a great place to ask questions. For feedback or to let us know that something seems broken, I think the preferred place is the discussions in our [feedback repository](https://github.com/github/feedback/discussions/).
I've linked this issue to the internal issue and I would hope that they'd update this issue when they resolve the internal issue.
username_0: OK, thank you! I had asked a [GitHub Community question with no answer](https://github.community/t/github-repo-not-showing-used-by-section/229655) and now I just opened a discussion in the feedback repo: https://github.com/github/feedback/discussions/12101
username_3: Hi @username_0! Here's the explanation for why `fal` isn't being recognized with a link to the repository in GitHub:
For the `fal` package, the reason there's no link to the repo in the Dependency Graph is because there is no `Source Code` URL set for the [package on PyPI](https://pypi.org/project/fal/). A button link to the source code repository will be rendered in the left sidebar of the package page when the URL is set. For instance, the `datadog_api_client` package [published in PyPI](https://pypi.org/project/datadog-api-client/) provides a `Source Code` URL to the GitHub repo. We are able to use this information to associate the package with the repository, even though the repo and package use different names.
We rely on information published with the public registries (like PyPI) to associate packages to their repositories. If the package in the registry doesn't provide this metadata, we typically don't try to assume where the source for this repository is (it could be anywhere!) 🙂
The way to fix this would be to update the `fal` package in PyPI with a Source Code URL. The change will take time to take effect in our system (and it helps if there's a new version published in the registry that's used by a project, which triggers our system).
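For reference, a minimal sketch of how a package can declare that URL, assuming a setuptools-style pyproject.toml (the label and URL here are illustrative):
```toml
[project.urls]
"Source Code" = "https://github.com/fal-ai/fal"
```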
I see that it isn't clear that the package needs to specify a URL on its registry page to be used in Dependency Graph. I am curious if @username_2 has any ideas on ways to help people know what to do in this circumstance—it's tricky because there are a lot of registries, and it'd be a lot if we had to document all of them!
I hope this helps!
username_2: @username_0 - I'm going to reopen this issue and next week we can work out how to update the docs to help people who run into the same problem.
username_2: But then when checking the [`About the dependency graph`](https://docs.github.com/en/github/visualizing-repository-data-with-graphs/about-the-dependency-graph#supported-package-ecosystems) link, it only states how to parse DEPENDENCIES of a project, not how to get your project recognized as a possible dependency.
### Additional information
Currently I am trying to get a public repo of a PyPI library to show the `Used by` section and I have had no luck on making it pick up the package.
- [PyPI package](https://pypi.org/project/fal/)
- [Repository](https://github.com/fal-ai/fal)
- [Other repo using said package](https://github.com/fal-ai/fal_dbt_examples/network/dependencies)
<img width="536" alt="image" src="https://user-images.githubusercontent.com/2745502/155229442-ad93fa0b-4c60-46b6-b46c-cdf03cb01225.png">
username_2: @username_0 - I'm wondering if the following changes to the docs would have helped you work out how to get your dependency to show up as expected (I'll also check them with our internal team to make sure that I've correctly understood).
1. [Dependencies view](https://docs.github.com/en/code-security/supply-chain-security/understanding-your-software-supply-chain/exploring-the-dependencies-of-a-repository#dependencies-view) - update the first paragraph to make it clear that links are only shown if the package manager for the dependency links to the source code in a public repository on GitHub.
2. [Changing the "Used by" package](https://docs.github.com/en/code-security/supply-chain-security/understanding-your-software-supply-chain/exploring-the-dependencies-of-a-repository#changing-the-used-by-package) - revise the first paragraph to make it clear that the "Used by" section is shown only if:
- Your repository contains a package that's published on a supported ecosystem.
- In the ecosystem, your package has a link to the public repository on GitHub where the source is stored.
- Dependency graph is enabled for the repository.
username_3: @username_2 Those are good updates. I'm always amazed how technical writers can simplify the mess of reasons that engineering gives out into something readable 😄
@username_0 Our system for trying to correlate repos and dependencies is fairly complex, with a few different fallbacks. It may be that `dbt-core` published a Source Code link in previous versions, and we rely on that historical data to associate the repo.
I see that `fal` now has a link in PyPI! Let me know in a couple of days if you don't see the link update for packages that depend on `fal` (our process to get the metadata is on a schedule).
username_0: @username_2 , that would have helped for sure! Thank you
@username_3 OK, understood! I will let you know here again if it does not connect. Thanks for letting me know about the schedule.
username_2: Thanks @username_0 and @username_3 for your help working out what the problem was and how we can make the docs clearer.
I've updated the issue summary with the proposed docs changes and I'll add a `help wanted` label to it.
Status: Issue closed
|
jonycheung/deadsimple-less-watch-compiler | 284305854 | Title: --run-once
Question:
username_0: Are you using the command line tool `less-watch-compiler`? yes
If you are using the command line tool, which version are you using (`less-watch-compiler --version` to find out)? 1.10.0
Is the issue reproducible after updating to the latest version (` npm update less-watch-compiler`)?
Steps to reproduce:
* git clone <EMAIL>:username_0/battleship-game-gui-react-js.git
* cd battleship-game-gui-react-js; npm i;
* https://github.com/username_0/battleship-game-gui-react-js/pull/186
* npm run generate:css
Actual Results:
- no files created in src/stylesheets/css
Expected Results:
- should create css files in src/stylesheets/css
P.S. If I don't use that option and instead run in watch mode, then change a file, it does save the output :)
P.P.S. It would be nice to preserve the directory structure, e.g. less/component/cell.less -> css/component/cell.css
Answers:
username_0: bump! works on node 8.9.3
username_1: Thanks for the issue and reproduction steps. I'll take a look at this probably after the holidays.
Status: Issue closed
username_1: @username_0 the latest version `1.11.x` should fix this. |
reo7sp/tgbot-cpp | 693078451 | Title: delete message about pinned message
Question:
username_0: I want to delete the service message about pinning a message. I know how to send a message and how to pin it, but I don't know how to remove the pin notification.
I was hoping it would look like this:
```
auto pin_msg = bot.getApi().pinChatMessage(chat_id, msg->messageId, false);
bot.getApi().deleteMessage(chat_id, pin_msg->messageId);
```
but pinChatMessage returns a bool.
Answers:
username_0: In my case the pin-notification message ID is just the pinned message's ID + 1, so it looks like:
```
bot.getApi().pinChatMessage(chat_id, msg->messageId, false);
bot.getApi().deleteMessage(chat_id, msg->messageId + 1);
```
Status: Issue closed
|
lampepfl/dotty | 501102708 | Title: given Conversion cannot infer tupled function type
Question:
username_0: ## minimized code
```Scala
package test
object Main {
given fnToInt1[F,Args <: Tuple,Result](given TupledFunction[F,Args=>Result]) : Conversion[F,Int] = new {
def apply(f : F) : Int = -1
}
given fnToInt2[F,Args <: Tuple,Result](given TupledFunction[F,Args=>Result]) : Conversion[F,Int] {
def apply(f : F) : Int = -1
}
def fnToInt3[F,Args <: Tuple,Result](f : F)(given TupledFunction[F,Args=>Result]) : Int = -1
def main(args: Array[String]): Unit = {
//val i : Int = fnToInt1((a : Int) => 0) // see error 1
//val i : Int = new fnToInt2.apply((a : Int) => 0) // see error 2
val i : Int = fnToInt3((a : Int) => 0) // this works
println(i)
}
}
```
Error 1:
```
-- Error: Test.scala:16:26 -----------------------------------------------------
16 | val i : Int = fnToInt1((a : Int) => 0)
| ^
| F
|
| where: F is a type variable
| cannot be tupled as Args => Result
|
| where: Args is a type variable with constraint <: Tuple
| Result is a type variable
one error found
```
Error 2:
```
-- Error: Test.scala:17:30 -----------------------------------------------------
17 | val i : Int = new fnToInt2.apply((a : Int) => 0)
| ^
| Any cannot be tupled as Tuple => Any
-- Error: Test.scala:17:47 -----------------------------------------------------
17 | val i : Int = new fnToInt2.apply((a : Int) => 0)
| ^^^^^^^^^^^^^^
| too many arguments for constructor Object: (): Object
two errors found
```
## expectation
Both the commented out definitions of `i` in `main` should work and do exactly the same thing. As a result, I should be able to write implicit conversions from arbitrary-arity functions to other things.
Bonus points: all these definitions should also work as inline (except `fnToInt2` which I know is unsupported).
Answers:
username_1: The issue is that `fnToInt1` and `fnToInt2` do not provide enough constraints to infer `F`. Though I found a simple way to turn this around.
```scala
package test
object Main {
given fnToInt1[F,Args <: Tuple,Result] : Conversion[F, (given TupledFunction[F,Args=>Result]) => Int] = new {
def apply(f : F) = -1
}
given fnToInt2[F,Args <: Tuple,Result]: Conversion[F, (given TupledFunction[F,Args=>Result]) => Int] {
def apply(f : F) = -1
}
def fnToInt3[F,Args <: Tuple,Result](f : F)(given TupledFunction[F,Args=>Result]) : Int = -1
def main(args: Array[String]): Unit = {
val i1 : Int = fnToInt1((a : Int) => 0) // this works
val i2 : Int = fnToInt2.apply((a : Int) => 0) // this works
val i3 : Int = fnToInt3((a : Int) => 0) // this works
val i4 : Int = fnToInt3((a : Int, b: Int) => 0) // this works
println(i1)
println(i2)
println(i3)
}
}
```
username_1: It is impossible to support the first two examples due to a complete lack of information on `F`, `Args` and `Result` when trying to infer the `TupledFunction` instance.
Status: Issue closed
username_1: Use the workaround in the [second comment](https://github.com/lampepfl/dotty/issues/7349#issuecomment-537822631) |
dask/community | 849290093 | Title: Tracking code coverage
Question:
username_0: Some projects track code coverage (in terms of lines tested). This can be helpful when identifying what lines are still not covered generally as well as with specific PRs adding new features. Wondering what others think about tracking coverage in Dask, Distributed, and possibly other libraries 🙂
Answers:
username_1: Thanks for raising this issue @username_0. For reference, Dask tracks code coverage today (see https://codecov.io/gh/dask/dask/branch/main). I'd be in favor of adding something similar to Distributed (in fact I might try that out this afternoon).
username_0: TIL 😄
Thanks James! Please let us know if you need another hand 🙂
username_2: Would it make sense to add the same to `dask-ml` as well? If yes, I'll be happy to raise a PR.
username_2: Raised https://github.com/dask/dask-ml/pull/816 to update the `dask-ml` badges, and noticed that, according to Codecov, a `master` branch [exists](https://app.codecov.io/gh/dask/dask-ml/branch/master), but a `main` branch [doesn't](https://app.codecov.io/gh/dask/dask-ml/branch/main). That _might_ be _one of_ the reasons why Codecov hasn't performed recent `dask-ml` checks.
It seems to me that Codecov stopped running on `dask-ml` when it was removed from `posix.yaml` as part of https://github.com/dask/dask-ml/commit/fddc19d563577b206e032cdcc4d06dec3bf76c23 (https://github.com/dask/dask-ml/pull/634).
As a side note, it's not immediately obvious to me what facilitates the generation of the [coverage report](https://dev.azure.com/dask-dev/dask/_build/results?buildId=2024&view=codecoverage-tab) that's part of the Azure Pipelines. Not sure if it's [`posix.yaml`](https://github.com/dask/dask-ml/blob/main/ci/posix.yaml#L64) or [`windows.yaml`](https://github.com/dask/dask-ml/blob/main/ci/windows.yaml#L43) and/or something else.
username_1: Thanks @username_2 for looking into things for `dask-ml`. Let's move the `dask-ml` specific discussion over to the issues you've opened up there.
@username_0 in light of `dask` already tracking coverage and https://github.com/dask/distributed/pull/4670, is it safe to close this issue? Or is there something else to do here? Perhaps folks should open up an issue/PR on the specific sub-project they're interested in. |
standardnotes/forum | 1048505847 | Title: Start Minimized to Tray?
Question:
username_0: I know you can turn on the option for Standard Notes to stay running as a System Tray app, but is there a way to have it start minimized to the tray?
I'm on Linux and I want to add Standard Notes to the list of apps that autostart on login to the desktop, but I want it to start minimized to the tray.
Gbuomprisco/ngx-chips | 338002490 | Title: this._onTouchedCallback is not a function
Question:
username_0: PLEASE MAKE SURE THAT:
- you searched similar issues online (9/10 issues in this repo are solved by googling, so please...do it)
- you provide an online demo I can see without having to replicate your environment
- you help me by debugging your issue, and if you can't, do go on filling out this form
**I'm submitting a ...** (check one with "x")
```
[x] bug report => search github for a similar issue or PR before submitting
[ ] support request/question
Notice: feature requests will be ignored, submit a PR if you'd like
```
**Current behavior**
I use the tag-input component inside a Bootstrap modal (v4.0.0+), and when I open the tag-input-dropdown
I get this error in the console:
TypeError: this._onTouchedCallback is not a function
at TagInputComponent.TagInputAccessor.onTouched (ngx-chips.js:188)
at TagInputComponent.blur (ngx-chips.js:1107)
at Object.eval [as handleEvent] (TagInputComponent.html:52)
at handleEvent (core.js:13589)
at callWithDebugContext (core.js:15098)
at Object.debugHandleEvent [as handleEvent] (core.js:14685)
at dispatchEvent (core.js:10004)
at eval (core.js:12343)
at SafeSubscriber.schedulerFn [as _next] (core.js:4354)
at SafeSubscriber.__tryOrUnsub (Subscriber.js:243)
**Expected behavior**
I'm using reactive forms.
The example code from the docs page works, but when I move it into the Bootstrap modal it doesn't work.
**Minimal reproduction of the problem with instructions (if applicable)**
**What do you use to build your app? Please specify the version**
Webpack, angular-cli,
**Angular version:**
angular 5
**ngx-chips version:**
1.9.2
**Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
Chrome
Answers:
username_1: Please provide an online reproduction.
Status: Issue closed
username_2: same issue here
username_2: Issue resolved after setting formControlName.
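For anyone landing here, a minimal sketch of that fix: binding the component to a reactive form control so its value-accessor callbacks (including `registerOnTouched`) get wired up. Names are illustrative:
```html
<form [formGroup]="form">
  <tag-input formControlName="tags"></tag-input>
</form>
```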
microsoft/SPTAG | 642544478 | Title: SIMD Bug
Question:
username_0: Firstly, apologies: this isn't a functional issue, as I've never used your library. I just like code and am learning SIMD.
**Regarding:**
AnnService/inc/Core/Common/DistanceUtils.h:61:
`return _mm_cvtepi32_ps(_mm_add_epi32(_mm_madd_epi16(xlo, ylo), _mm_madd_epi16(xhi, yhi)));`
The final command there, _mm_cvtepi32_ps, converts the 32-bit summed integers to single-precision floats. Why?
That appears to be quite a strange thing to do; I think it's an error, but it works on my system.
I'd use something like:
```cpp
// Note: Zero() in the original sketch is replaced with the SSE2 intrinsic so this compiles.
__m128i zero = _mm_setzero_si128();
// Widen the 8-bit lanes to 16 bits by interleaving with zero.
__m128i a_low = _mm_unpacklo_epi8(a, zero);
__m128i a_high = _mm_unpackhi_epi8(a, zero);
__m128i b_low = _mm_unpacklo_epi8(b, zero);
__m128i b_high = _mm_unpackhi_epi8(b, zero);
// Multiply-add adjacent 16-bit pairs, then sum; the result stays as 32-bit integers.
return _mm_add_epi32(_mm_madd_epi16(a_low, b_low), _mm_madd_epi16(a_high, b_high));
```<issue_closed>
Status: Issue closed |
wireservice/csvkit | 588240046 | Title: csvclean using on all files in a directory (possibly feature or bug)
Question:
username_0: Apologies, as I am OK on the command line but not an expert, so this may be something silly, or this tool may just not work on multiple CSV files. The files I am using are Google Ads Keyword Planner outputs, which are notoriously difficult to work with as they use UTF-16 and tabs, with some odd quirks and odd formatting (random rows, etc.). All other files I have been using with csvkit have been pretty straightforward.
`csvclean -e utf-16 -t -K2 example.csv` works very well.
`csvclean csvclean -e utf-16 -t -K2 *.csv` doesn't work as expected, unlike other commands such as stacking files.
Also, trying bash code like `for f in *.csv ; do csvclean -e utf-16 -t -K2 $f ; done` doesn't work either, nor does piping into the command or a number of other things I tried.
I get the following:
`csvclean csvclean -e utf-16 -t -K2 *.csv` (returns error and all file names)
csvclean: error: unrecognized arguments: Changed Stats 2020-02-19 at 15_17_14.csv Changed Stats 2020-02-19 at 15_17_35.csv
`for f in *.csv ; do csvclean -e utf-16 -t -K2 $f ; done` (returns error and all file names removing the first word)
csvclean: error: unrecognized arguments: Stats 2020-02-19 at 15_21_33.csv
Many thanks in advance, apologies if this is a silly user error issue.
Answers:
username_1: It seems like your shell isn't automatically quoting $f, so do:
`for f in *.csv ; do csvclean -e utf-16 -t -K2 "$f"; done`
This is not a CSV Kit issue.
I don't understand what `csvclean csvclean -e utf-16 -t -K2 *.csv` is expected to do.
Status: Issue closed
|
StylishThemes/GitHub-Dark | 732379181 | Title: Empty context lines of diffs are unstyled and show up white
Question:
username_0: Empty context parts of diffs show up as white blocks instead of being dark.
* **Browser**: Firefox 82.0
* **Operating System**: Ubuntu 18.04
* **Link to page with the issue**: https://github.com/rust-lang/rust/pull/78516/files
* **Screenshot**:
Vanilla:

With GitHub-Dark:

Status: Issue closed
Answers:
username_1: Fixed by https://github.com/StylishThemes/GitHub-Dark/commit/e332b98339432b8e8fe6480e5c707f9c48c64da8. |
CCALI/CALI-Author-Viewer-5 | 23074974 | Title: Handle an Auto ScoreSave failure
Question:
username_0: If scores aren't being saved (perhaps due to an offline DB), how should we inform the user before they get too far into a lesson?
One option is a warning notice appearing constantly at the top of the lesson letting them know.<issue_closed>
Status: Issue closed |
RIT-Players/PlayersDiscordBot | 554505272 | Title: Create Placeholder Files and Boilerplate Code for on deck features
Question:
username_0: Create placeholder files/directories for the on-deck features:
- [ ] Post Scheduler
- [ ] Picture Storage
- [ ] Image Lookup
- [ ] Channel Archiver
- [ ] Self Assignable Roles
- [ ] Fun Fact Generator<issue_closed>
Status: Issue closed |
gocd/gocd | 228612929 | Title: Encoding issue - changes pop up
Question:
username_0: ##### Issue Type
- Bug Report
##### Summary
<img width="748" alt="screen shot 2017-05-15 at 11 25 01 am" src="https://cloud.githubusercontent.com/assets/4715748/26044348/5b5a8f64-3961-11e7-89c2-0dd82b05c72b.png">
Answers:
username_1: I'm having trouble reproducing this issue on build.gocd.io. This appears to be the same commit, but the comment content renders correctly:

FWIW I'm on OS X, Google Chrome 58.0.3029.110 (64-bit)
username_0: @username_1, I checked on OS X, Chrome 58.0.3029.110 (64-bit). I see that you saw the changes for `build-windows`. Check `build-windows-PR` build 1403 (it's on page 4) in the pipeline history page.
Trying to reproduce it locally.
username_2: Closing since the dashboard has been rewritten since this was raised. Assuming it's fixed, unless I'm corrected. :)
Status: Issue closed
|
aws-amplify/amplify-cli | 493547074 | Title: 'ModelOfferentityOffersCompositeKeyConditionInput' is not present when resolving type
Question:
username_0: ```
type Offer @model
@key(name: "entityOffers", fields: ["entityId", "startDate", "endDate"], queryField: "offerByDate")
@auth(rules: [
{ allow: owner, operations: [update, delete, create] },
{ allow: groups, groups: ["admin"], operations: [update, delete, create] }
])
{
id: ID!
entityId: String!
startDate: AWSDateTime!
endDate: AWSDateTime!
description: String
price: Float
image: String
title: String!
status: String!
createdAt: Float!
updatedAt: Float!
}
```
This is a fresh project, so I can't understand why this is happening.
Answers:
username_1: This is a duplicate of this issue #2239
We have pushed a fix for this as a part of #2246
As a workaround, if you change your @key(name: "entityOffers") to @key(name: "EntityOffers"), your push should work.
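In other words, a hedged sketch of the workaround applied to the schema above:
```graphql
type Offer @model
  @key(name: "EntityOffers", fields: ["entityId", "startDate", "endDate"], queryField: "offerByDate")
  # ...auth rules and fields unchanged
```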
Status: Issue closed
username_1: We released the latest version of the CLI - 3.8.0 with this fix. |
aquariumbio/aquarium | 869019375 | Title: Imported Protocols and Libraries are blank[BUG]
Question:
username_0: **Describe the bug**
Sometimes when using the import function the library or protocol will be created but the content will be blank.
**To Reproduce**
Import libraries and protocols. Some will be blank (not sure what the pattern is).
**Expected behavior**
Would expect the protocols not to be blank lol :)
**Screenshots**
Naw, no screenshot needed. They would just be blank like the ASCII image below.
```
| B |
| L |
| A |
| P N |
| A K|
| G |
| E |
|___________________|
```
**Computer/Device (please complete the following information):**
All of them. Idk I only did it on my mac tho so who knows.
**Additional context**
Blank protocols cause existential dread and contemplation of the human condition (this is unhealthy). |
linjam/linjam | 66269453 | Title: handle local server properly
Question:
username_0: Determine if or when it is best to display a quick-login button for localhost.
Surely the LOCALHOST_2049_URL constant is unnecessary, for example.
possible solutions:
* keep it there static (current solution)
* auto-detect running local NINJAM server (better solution)
* or, dont bother displaying button for this (maybe best solution) |
openSUSE/osem | 140671954 | Title: incorrect number of tickets shown in admin tickets view
Question:
username_0: The admin tickets view shows an incorrect number of tickets purchased by users.
For example, two users buy different numbers of tickets:
<img width="342" alt="screen shot 2016-03-14 at 6 07 59 pm" src="https://cloud.githubusercontent.com/assets/10002834/13745695/79fabc38-ea15-11e5-93ca-971915a31429.png">
<img width="538" alt="screen shot 2016-03-14 at 6 08 23 pm" src="https://cloud.githubusercontent.com/assets/10002834/13745697/7b0664c4-ea15-11e5-979c-fead4a583863.png">
Still, the admin sees that the number of tickets sold = 2 (the number of users who purchased tickets):
<img width="881" alt="screen shot 2016-03-14 at 6 07 34 pm" src="https://cloud.githubusercontent.com/assets/10002834/13745700/7cf275f2-ea15-11e5-8b39-acd4c9de75bf.png">
shouldn't it show the total quantity of tickets sold for each ticket?
Answers:
username_0: working with this bug
username_0: @ChrisBr the sold value in the admin's ticket view for a conference should show the total number of tickets sold, right?
Like in the above example, it should've been:
- 36 for 1$ ticket
- 28 for 10$ ticket
- 33 for 90$ ticket
username_1: The table header says **sold**, but the value it shows is `= ticket.buyers.count`
Perhaps we should have a column for `Sold` and another column for `Buyers`.
username_0: okay @username_1 ... i will do that :)
username_2: Who cares how many Buyers there are?
username_0: I think the main thing is to display the correct total number of tickets bought... but we may display the number of buyers if it is required.
Which would be better?
username_2: You can also display a squirrel eating a nut

The question is **why** would you? ;-) This is an APP. Display data that is relevant for the people using it :-) In my opinion this is
Ticket | Price | Sold | Total Amount
------------ | ------------- | ------------- | -------------
Supporter | $15 | 10 | $150
username_0: that's true @username_2... i will display the required info only :+1:
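In Rails terms, the agreed fix amounts to summing purchased quantities instead of counting buyers. A hypothetical sketch (the model and column names are assumptions, not necessarily OSEM's actual schema):
```ruby
# Hypothetical helper: total tickets sold for a ticket type,
# summing per-purchase quantities rather than counting distinct buyers.
def tickets_sold(ticket)
  ticket.ticket_purchases.sum(:quantity)
end

def total_amount(ticket)
  ticket.price * tickets_sold(ticket)
end
```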
Status: Issue closed
|
pytorch/pytorch | 404324555 | Title: [sparse sum] Sparse sum over dimmension gives unexpected results.
Question:
username_0: ## 🐛 Bug
When summing over dimension 0 of a 2-dimensional tensor, I'm getting a scalar, whereas summing over dimension -2 gives the correct answer. Is that expected?
## To Reproduce
```python
x = norm_adj = torch.tensor([[1., 0., 0., 1.],
[0., 1., 0., 0.],
[0., 1., 1., 0.],
[0., 1., 0., 2.]]).to_sparse()
print('RIGHT')
print(torch.sparse.sum(x,dim=-2))
print('WRONG')
print(torch.sparse.sum(x,dim=0))
```
## Expected behavior
```
RIGHT
tensor(indices=tensor([[0, 1, 2, 3]]),
values=tensor([1., 3., 1., 3.]),
size=(4,), nnz=4, layout=torch.sparse_coo)
WRONG
tensor(8.)
```
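Until the fix lands, the negative-dimension form shown above behaves correctly, so a hedged workaround is:
```python
# Workaround sketch: for a 2-D sparse tensor, dim=-2 addresses the same axis
# as dim=0, but (per this report) produces the expected per-column sums.
col_sums = torch.sparse.sum(x, dim=-2)
```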
## Environment
```
PyTorch version: 1.0.0
Is debug build: No
CUDA used to build PyTorch: None
OS: Mac OSX 10.14.2
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] numpy (1.15.4)
[pip] torch (1.0.0)
[conda] blas 1.0 mkl
[conda] mkl 2018.0.3 1
[conda] mkl_fft 1.0.6 py37hb8a8100_0
[conda] mkl_random 1.0.1 py37h5d10147_1
[conda] pytorch 1.0.0 py3.7_1 pytorch
```
Answers:
username_1: Thanks for the report, https://github.com/pytorch/pytorch/pull/16517 should fix it. |
MicrosoftDocs/powerbi-docs | 456443817 | Title: Page refers to options that are not provided by the product
Question:
username_0: There is no "Color saturation" well available, either in the Power BI web service or the current desktop client (Version: 2.69.5467.2151 64-bit (May, 2019))
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a47d7082-17fb-3b37-de23-020f49991e0c
* Version Independent ID: 4d904a62-6e2e-b05f-b222-2500df079b76
* Content: [Filled Maps (Choropleths) in Power BI - Power BI](https://docs.microsoft.com/en-us/power-bi/visuals/power-bi-visualization-filled-maps-choropleths#feedback)
* Content Source: [powerbi-docs/visuals/power-bi-visualization-filled-maps-choropleths.md](https://github.com/MicrosoftDocs/powerbi-docs/blob/live/powerbi-docs/visuals/power-bi-visualization-filled-maps-choropleths.md)
* Service: **powerbi**
* Sub-service: **powerbi-desktop**
* GitHub Login: @username_2
* Microsoft Alias: **username_2**
Answers:
username_1: Hi, @username_0 -- thanks for your question! I've assigned it to the author to investigate.
username_2: Hi @username_0,
Thank you for your feedback. Yes, the color saturation is now part of the Formatting pane. I'm updating the article now so it should go live no later than next week.
Status: Issue closed
username_0: @username_2 but there's no color saturation option under Formatting either! As far as I can tell, there's currently no way to do a choropleth map with a continuous colour scale (as opposed to categories), which seems like kind of a basic need for an analytics package, and it's frustrating when the docs show examples of it. Or have I missed something that is available somewhere?
username_2: Hi @username_0
Under the Format tab, expand "Data colors". Then select the 3 dots and choose "Conditional formatting". In the pane that appears, add the field you'd like to use for saturation.
I'm adding this to the doc today because it is rather hard to find.
username_2: There is no "Color saturation" well available, either in the Power BI web service or the current desktop client (Version: 2.69.5467.2151 64-bit (May, 2019))
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a47d7082-17fb-3b37-de23-020f49991e0c
* Version Independent ID: 4d904a62-6e2e-b05f-b222-2500df079b76
* Content: [Filled Maps (Choropleths) in Power BI - Power BI](https://docs.microsoft.com/en-us/power-bi/visuals/power-bi-visualization-filled-maps-choropleths#feedback)
* Content Source: [powerbi-docs/visuals/power-bi-visualization-filled-maps-choropleths.md](https://github.com/MicrosoftDocs/powerbi-docs/blob/live/powerbi-docs/visuals/power-bi-visualization-filled-maps-choropleths.md)
* Service: **powerbi**
* Sub-service: **powerbi-desktop**
* GitHub Login: @username_2
* Microsoft Alias: **username_2**
Status: Issue closed
|
jlippold/tweakCompatible | 426183195 | Title: `AnsweringMachineX` working on iOS 12.0
Question:
username_0: ```
{
"packageId": "net.limneos.answeringmachinex",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "net.limneos.answeringmachinex",
"deviceId": "iPhone8,1",
"url": "http://cydia.saurik.com/package/net.limneos.answeringmachinex/",
"iOSVersion": "12.0",
"packageVersionIndexed": true,
"packageName": "AnsweringMachineX",
"category": "Tweaks",
"repository": "limneos.net",
"name": "AnsweringMachineX",
"installed": "1.0-6",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "net.limneos.answeringmachinex",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "AnsweringMachine X",
"latest": "1.0-6",
"author": "<NAME>",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
ChainSafe/ethermint | 464021763 | Title: Implement eth_gasPrice
Question:
username_0: Are we mapping gas cost to the ethermint equivalent of photons?
Answers:
username_0: Will just use the node's configured gas price.
username_0: There is no historical gas-price oracle within Tendermint, and if the value is being pulled from the min gas price in the validator node's config, that config is multi-denom and is not available through the Tendermint RPC client (unless I'm missing a detail).
Status: Issue closed
|
cocos2d/cocos2d-x | 391927448 | Title: Cocos2d-x forum website SSL error
Question:
username_0: I'm not sure if this is the correct location to enter this issue, but for the past 2 days I've been getting the following error when trying to access the discuss.cocos2d-x.org forum:
```
This site can’t provide a secure connection discuss.cocos2d-x.org sent an invalid response.
Try running Windows Network Diagnostics.
ERR_SSL_PROTOCOL_ERROR
```
This happens with any internet browser.<issue_closed>
Status: Issue closed |
webketje/tuxedo-backlight-control | 688421879 | Title: Color setting pink is not pink
Question:
username_0: Hi there!
Just wanted to say that Pink doesn't show a pink color for me. Fuchsia, though, is fuchsia, so it could work.
Pink is more like a yellowish color.
Example images:
- [Pink](https://imgur.com/6IdqtK4) (the pink hint on the right comes from the screen)
- [Fuchsia](https://imgur.com/WO7S6tn)
My model is a Clevo P775TM1-G. If you need any further information, please ask!
Answers:
username_1: Thank you, yes, I must have messed up the color order for 1 or 2 colors in the last release; I'll check them and re-release soon.
Status: Issue closed
username_1: @username_0 I checked and the colors are correct. I've pushed an update to the readme with a screenshot of all the colors and matching names. I know "pink" looks more like "rose" but this is the official color word for it.
username_0: Well yeah, but it certainly shouldn't be yellow. :D
I updated yesterday, as the AUR package is based on your git. But now it doesn't open anymore for me:
```
$ backlight ui
Traceback (most recent call last):
File "/usr/local/bin/backlight", line 7, in <module>
from backlight_control import BacklightControl, backlight
ModuleNotFoundError: No module named 'backlight_control'
$ /usr/share/tuxedo-backlight-control/ui.py
Traceback (most recent call last):
File "/usr/share/tuxedo-backlight-control/ui.py", line 5, in <module>
from backlight_control import backlight
ModuleNotFoundError: No module named 'backlight_control'
```
I haven't had time yet to look further into it, so I might solve it in the coming days (if this isn't a known error anyway).
username_0: Changed from the `tuxedo-backlight-control` package to the `tuxedo-backlight-control-git` package (both from AUR) and now get this error:
```
$ /usr/share/tuxedo-backlight-control/ui.py 8.2s
Traceback (most recent call last):
File "/usr/share/tuxedo-backlight-control/ui.py", line 349, in <module>
init()
File "/usr/share/tuxedo-backlight-control/ui.py", line 344, in init
App(root)
File "/usr/share/tuxedo-backlight-control/ui.py", line 58, in __init__
initial_mode = backlight.mode.capitalize()
AttributeError: 'NoneType' object has no attribute 'capitalize'
``` |
Marus/cortex-debug | 1187541177 | Title: Can not find how to remove default OpenOCD arguments
Question:
username_0: Hello, everyone!
I am facing a problem setting the OpenOCD parameters to enable debugging when working with an STM32H7 custom board. I have set up launch.json the following way:
```
{
"version": "0.2.0",
"configurations": [
{
"name": "openocd",
"request": "launch",
"type": "cortex-debug",
"cwd": "${workspaceRoot}",
"servertype": "openocd",
"armToolchainPath": "C:/ST/STM32CubeIDE_1.9.0/STM32CubeIDE/plugins/com.st.stm32cube.ide.mcu.externaltools.gnu-tools-for-stm32.10.3-2021.10.win32_1.0.0.202111181127/tools/bin", "serverpath":"C:/ST/STM32CubeIDE_1.9.0/STM32CubeIDE/plugins/com.st.stm32cube.ide.mcu.externaltools.openocd.win32_2.2.0.202202231230/tools/bin/openocd.exe",
"serverArgs": [
"-f", "/Device.cfg",
"-s", "D:/Device_Firmware",
"-s", "C:/ST/STM32CubeIDE_1.9.0/STM32CubeIDE/plugins/com.st.stm32cube.ide.mcu.debug.openocd_2.0.200.202202161333/resources/openocd/st_scripts",
"-s", "C:/ST/STM32CubeIDE_1.9.0/STM32CubeIDE/plugins/com.st.stm32cube.ide.mpu.debug.openocd_2.0.200.202202231231/resources/openocd/st_scripts",
"-c", "gdb_report_data_abort enable",
"-c", "gdb_port 3333",
"-c", "tcl_port 6666",
"-c", "telnet_port 4444",
"-c", "st-link serial 52FF72066578555442370767"
],
"executable": "${command:cpptools.activeConfigName}/Device.elf",
"interface": "swd",
"device": "STM32H743VI",
"svdFile": "STM32H743x.svd",
"configFiles": [
"/Device.cfg"
],
"preLaunchTask": "Make All",
"showDevDebugOutput": "both"
}
]
}
```
When launching, it generates the following command:
Launching gdb-server: "C:/ST/STM32CubeIDE_1.9.0/STM32CubeIDE/plugins/com.st.stm32cube.ide.mcu.externaltools.openocd.win32_2.2.0.202202231230/tools/bin/openocd.exe" **-c "gdb_port 50000" -c "tcl_port 50001" -c "telnet_port 50002" -s "D:\\Device_Firmware" -f "c:/Users/<NAME>/.vscode/extensions/marus25.cortex-debug-1.4.4/support/openocd-helpers.tcl" -f /Device.cfg** -f /Device.cfg -s "D:/Device_Firmware" -s "C:/ST/STM32CubeIDE_1.9.0/STM32CubeIDE/plugins/com.st.stm32cube.ide.mcu.debug.openocd_2.0.200.202202161333/resources/openocd/st_scripts" -s "C:/ST/STM32CubeIDE_1.9.0/STM32CubeIDE/plugins/com.st.stm32cube.ide.mpu.debug.openocd_2.0.200.202202231231/resources/openocd/st_scripts" -c "gdb_report_data_abort enable" -c "gdb_port 3333" -c "tcl_port 6666" -c "telnet_port 4444" -c "st-link serial 52FF72066578555442370767"
Here the arguments marked in bold are generated automatically, and they are incorrect. How can I get rid of them so that only the arguments set by the "serverArgs" property are used? I struggled trying various properties described in [https://github.com/Marus/cortex-debug/blob/master/debug_attributes.md](https://github.com/Marus/cortex-debug/blob/master/debug_attributes.md), but could not make any of them work as I needed.
Thank you!
Status: Issue closed
Answers:
username_1: Sorry, you cannot set the tcp ports. This extension will determine those. We support multiple probes and cores which all require port conflict resolution and we do that automatically. For OpenOCD, you should look at options for `searchDir`, `configFiles`, etc.
https://github.com/Marus/cortex-debug/blob/master/debug_attributes.md
Also peruse our Wiki https://github.com/Marus/cortex-debug/wiki/OpenOCD-Specific-Configuration |
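For reference, a hedged sketch of how the same OpenOCD inputs could be expressed through the extension's own attributes instead of duplicating them in `serverArgs` (attribute names taken from the linked debug_attributes.md; the paths are the ones from the question, and whether the `st-link serial ...` command belongs in a config file instead is left open):

```json
{
    "configFiles": ["/Device.cfg"],
    "searchDir": [
        "D:/Device_Firmware",
        "C:/ST/STM32CubeIDE_1.9.0/STM32CubeIDE/plugins/com.st.stm32cube.ide.mcu.debug.openocd_2.0.200.202202161333/resources/openocd/st_scripts"
    ],
    "openOCDLaunchCommands": [
        "gdb_report_data_abort enable"
    ]
}
```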
wso2/product-ei | 371816394 | Title: String output Resource type for JSON doesn't work in data mapper in the Response
Question:
username_0: **Description:**
The final response of a data mapper whose output is defined as a String is returned as a numeric value again when invoked via the ESB. The String output resource type for JSON doesn't work.
**Suggested Labels:**
EI
**Affected Product Version:**
EI 6.4.0 , EI 6.3.0
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
1. Define an input message as follows with XML resource type
Example:
<Helloo>
<LengthsOfStay>
<element>
<MessageType>Hi</MessageType>
<Time>2</Time>
<TimeUnit>Day</TimeUnit>
</element>
<element>
<MessageType>Hi</MessageType>
<Time>1</Time>
<TimeUnit>Day</TimeUnit>
</element>
</LengthsOfStay>
</Helloo>
---------------------------------------------------
2. Design the output for data mapper configurations for JSON resource type with a String
Eg:
---------------------------------------------------
{
"Helloo": {
"LengthsOfStay": [
{
**"Time": "2",**
"TimeUnit": "Day",
"MessageType": "SetMinLOS"
},
{
**"Time": "1",**
"TimeUnit": "Day",
"MinMaxMessageType": "SetMaxLOS"
}
]
}
}
------------------------------------------
**### Time is defined as String here**
3. Design the API/Proxy, add the CApp to the EI server, and invoke it with the input payload.
4. You can see that the output of the particular element designed as a string is not preserved in the final response.
Eg: Response
"LengthsOfStay"
{
**"Time": 2**,
"TimeUnit": "Day",
"MinMaxMessageType": "SetMinLOS"
}
### Finally in the response we can see the time appears as integer/numeric again
Answers:
username_1: Fixed in commit <PASSWORD> in trunk
Status: Issue closed
|
easy-swoole/spl | 516515136 | Title: Subclasses extending EasySwoole\Spl\SplBean may lose defined property values
Question:
username_0: file: src/SplBean.php
method: dataKeyMap()
result: the dataKeyMap() method may cause array elements to be lost
code:
```php
final private function dataKeyMap(array $array): array
{
    foreach ($this->setKeyMapping() as $dataKey => $beanKey) {
        if (array_key_exists($dataKey, $array)) {
            $array[$beanKey] = $array[$dataKey];
            unset($array[$dataKey]);
        }
    }
    return $array;
}
```
analysis:
If
1. $array = ['a' => 'apple', 'b' => 'bus'];
2. $this->setKeyMapping() = ['a' => 'b'];
then the array returned from the method body will only contain $array = ['b' => 'apple'];
$array['a'] is lost
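A standalone sketch reproducing the reported behavior outside the bean (the free function mirrors dataKeyMap() with the mapping passed in explicitly):

```php
<?php
// Mirror of SplBean::dataKeyMap() for demonstration purposes.
function dataKeyMap(array $array, array $keyMapping): array
{
    foreach ($keyMapping as $dataKey => $beanKey) {
        if (array_key_exists($dataKey, $array)) {
            $array[$beanKey] = $array[$dataKey]; // overwrites any existing value under $beanKey
            unset($array[$dataKey]);
        }
    }
    return $array;
}

var_dump(dataKeyMap(['a' => 'apple', 'b' => 'bus'], ['a' => 'b']));
// array(1) { ["b"]=> string(5) "apple" } -- 'a' is renamed onto 'b', clobbering "bus"
```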
Answers:
username_1: That's your own incorrect usage; you mapped 'a' => 'b' yourself.
Status: Issue closed
|
HuangXiZhou/blog | 318229699 | Title: LRU algorithm analysis and a simple implementation
Question:
username_0: 
In the figure above, once A, B, C, and D fill the allocated memory blocks, the newly arriving E replaces the least recently used `A(0)`; the access order is A -> B -> C -> D -> E -> D -> F.
## Principle
A performance bottleneck appears here: when the cache is full, the least recently used data must be evicted to make room for new data. Suppose every entry carries a last-access time; when the cache is full, we would have to traverse all elements to find and delete the one with the smallest access time, which is $O(N)$: the larger the data set, the worse the performance.
So we use a `linked list` instead: each time data is accessed, it is moved to the head of the list, so the tail always holds the oldest unaccessed data. When the cache is full, we delete the required number of entries starting from the tail, which solves the problem.
## Implementation
```javascript
class LruCache {
  constructor(maxsize) {
    this._cache = {}; // cache store
    this._queue = []; // recency queue, most recently used key first
    this._maxsize = maxsize; // maximum number of entries
    // fall back to Infinity if maxsize is missing or invalid
    if (!this._maxsize || !(typeof this._maxsize === 'number') || this._maxsize <= 0) {
      this._maxsize = Infinity;
    }
    // run a timer that periodically evicts expired entries;
    // iterate backwards so splicing does not skip elements
    setInterval(() => {
      for (let idx = this._queue.length - 1; idx >= 0; idx--) {
        const key = this._queue[idx];
        const insertTime = this._cache[key].insertTime;
        const expire = this._cache[key].expire;
        const curTime = +new Date();
        // remove the entry if it has an expiry and is overdue
        if (expire && curTime - insertTime > expire) {
          this._queue.splice(idx, 1);
          delete this._cache[key];
        }
      }
    }, 1000);
  }
  // build a unique symbol for a key
  _makeSymbol(key) {
    return Symbol.for(key);
  }
  // move a key to the front of the queue
  _update(queue, key) {
    // remove the existing occurrence
    queue.forEach((el, idx) => {
      if (el === key) {
        queue.splice(idx, 1);
      }
    });
    // put the key at the front
    queue.unshift(key);
    return queue;
  }
  // insert data
  set(key, value, expire) {
[Truncated]
        this._queue = this._update(this._queue, key);
        return this._cache[key].value;
      } else if (expire && curTime - insertTime > expire) {
        // already expired: evict the key
        this._queue.forEach((el, idx) => {
          if (el === key) {
            this._queue.splice(idx, 1);
            delete this._cache[key];
          }
        });
        return null;
      }
    } else {
      return null;
    }
  }
}
``` |
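A short usage sketch of the class above (the expiry value is arbitrary, and it assumes the truncated portion defines `set`/`get` the way their visible fragments suggest):

```javascript
const cache = new LruCache(100);
cache.set('a', 'apple');           // no expiry
cache.set('b', 'bus', 3000);       // evicted by the timer after ~3 seconds
console.log(cache.get('a'));       // 'apple'; 'a' also moves to the front of the queue
console.log(cache.get('missing')); // null
```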
keleixu/GridImageSearch | 59409947 | Title: [Android Bootcamp] Project 2 GridImageSearch
Question:
username_0: My app is complete, please review. /cc @username_1 @codepath
Answers:
username_1: :+1: Looks good overall. Good job implementing so many optionals, and working on the UI enhancements.
* Great work on organizing source code
* Nice to see you used action bar for searching
* Good job on getting the Share functionality working
* Great job implementing DialogFragment for filters
* Good job using the ConnectivityManager to check for network availability
* Properly reused the single base method in search activity to fetch results for both initial load and the pagination
* Great job using staggered grid view for polishing the app UI
* Consider using Parcelable instead of Serializable([benefits](http://stackoverflow.com/questions/5550670/benefit-of-using-parcelable-instead-of-serializing-object/5551155#5551155))
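A minimal sketch of the Parcelable suggestion, using a hypothetical two-field `ImageResult` model (class and field names are illustrative, not taken from the submission):

```java
import android.os.Parcel;
import android.os.Parcelable;

public class ImageResult implements Parcelable {
    private final String url;
    private final String title;

    public ImageResult(String url, String title) {
        this.url = url;
        this.title = title;
    }

    // Read fields back in the same order they were written.
    protected ImageResult(Parcel in) {
        url = in.readString();
        title = in.readString();
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {
        dest.writeString(url);
        dest.writeString(title);
    }

    @Override
    public int describeContents() {
        return 0;
    }

    public static final Creator<ImageResult> CREATOR = new Creator<ImageResult>() {
        @Override
        public ImageResult createFromParcel(Parcel in) {
            return new ImageResult(in);
        }

        @Override
        public ImageResult[] newArray(int size) {
            return new ImageResult[size];
        }
    };
}
```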
I have provided a detailed [Project 2 Feedback Guide here](http://courses.codepath.com/snippets/intro_to_android/project_2_feedback) which covers the most common issues with this submitted project. Read through the feedback guide point-by-point to determine how you could improve your submission.
Let us know if you have any other thoughts or questions about this assignment. The next assignment (Twitter Client) will be very important since it introduces the majority of the remaining pieces necessary to build a fully functional API client with complex feeds of data and user creation. |
googleapis/java-bigtable-hbase | 739192868 | Title: Improve error reporting for CloudBigtableIO.writeToTable
Question:
username_0: Currently if any permanent errors occur when writing to bigtable the error that is reported to the user is not great:
```
Error message from worker: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 2 actions: StatusRuntimeException: 2 times, servers with issues: batch-bigtable.googleapis.com com.google.cloud.bigtable.hbase.BigtableBufferedMutator.getExceptions(BigtableBufferedMutator.java:187) com.google.cloud.bigtable.hbase.BigtableBufferedMutator.handleExceptions(BigtableBufferedMutator.java:141) com.google.cloud.bigtable.hbase.BigtableBufferedMutator.close(BigtableBufferedMutator.java:85) com.google.cloud.bigtable.beam.CloudBigtableIO$CloudBigtableSingleTableBufferedWriteFn.finishBundle(CloudBigtableIO.java:864) org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 2 actions: StatusRuntimeException: 2 times, servers with issues: batch-bigtable.googleapis.com com.google.cloud.bigtable.hbase.BigtableBufferedMutator.getExceptions(BigtableBufferedMutator.java:187) [...]
```
It would be a lot better if the StatusRuntimeException exceptions were unwrapped with the status and message.
Unfortunately I don't think we can change this exception in the client itself; however, we can catch this exception in CloudBigtableIO and bubble up something more useful for Beam.
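A hedged sketch of the unwrapping idea, using the HBase client's per-action accessors on `RetriesExhaustedWithDetailsException` (the message layout is illustrative):

```java
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;

final class MutationErrors {
    // Surface the wrapped per-row causes instead of the terse summary message.
    static RuntimeException unwrap(RetriesExhaustedWithDetailsException e) {
        StringBuilder details = new StringBuilder("Failed mutations:");
        for (int i = 0; i < e.getNumExceptions(); i++) {
            details.append(String.format("%n  row=%s host=%s cause=%s",
                e.getRow(i), e.getHostnamePort(i), e.getCause(i)));
        }
        return new RuntimeException(details.toString(), e);
    }
}
```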
trentm/node-bunyan | 68643912 | Title: Bunyan 1.3.5 crashed: TypeError: Object.keys called on non-object
Question:
username_0: platform: darwin
node version: v0.12.0
bunyan version: 1.3.5
argv: ["node","/usr/local/bin/bunyan"]
log line: "{\"name\":\"multichannel\",\"hostname\":\"macbook-dev\",\"pid\":44973,\"level\":30,\"res\":{\"statusCode\":401,\"header\":null},\"response\":\"{\\\"error\\\":\\\"InvalidCredentials\\\",\\\"description\\\":\\\"The access token provided has expired.\\\"}\",\"msg\":\"\",\"time\":\"2015-04-15T10:37:39.557Z\",\"v\":0}"
stack:
TypeError: Object.keys called on non-object
at Function.keys (native)
at _res (/usr/local/lib/node_modules/bunyan/bin/bunyan:946:29)
at emitRecord (/usr/local/lib/node_modules/bunyan/bin/bunyan:972:13)
at handleLogLine (/usr/local/lib/node_modules/bunyan/bin/bunyan:723:16)
at Socket.<anonymous> (/usr/local/lib/node_modules/bunyan/bin/bunyan:1124:13)
at Socket.emit (events.js:107:17)
at readableAddChunk (_stream_readable.js:163:16)
at Socket.Readable.push (_stream_readable.js:126:10)
at Pipe.onread (net.js:529:20)
Answers:
username_1: log line:
```
{"name":"multichannel","hostname":"macbook-dev","pid":44973,"level":30,"res":{"statusCode":401,"header":null},"response":"{\"error\":\"InvalidCredentials\",\"description\":\"The access token provided has expired.\"}","msg":"","time":"2015-04-15T10:37:39.557Z","v":0}
```
pretty printed:
```
{
"name": "multichannel",
"hostname": "macbook-dev",
"pid": 44973,
"level": 30,
"res": {
"statusCode": 401,
"header": null
},
"response": "{\"error\":\"InvalidCredentials\",\"description\":\"The access token provided has expired.\"}",
"msg": "",
"time": "2015-04-15T10:37:39.557Z",
"v": 0
}
```
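The `"header": null` value above is what triggers the crash: the CLI's `_res` helper ends up calling `Object.keys` on it, which throws on non-objects under these Node versions. A minimal sketch of the kind of guard that avoids it (variable names illustrative, not necessarily the actual 1.3.6 patch):

```javascript
// Only enumerate the response headers when they are a real object.
var headers = rec.res.header;
if (headers && typeof headers === 'object') {
    Object.keys(headers).forEach(function (name) {
        // ...render each header line
    });
}
```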
username_1: I can repro:
```
$ echo '{"name":"multichannel","hostname":"macbook-dev","pid":44973,"level":30,"res":{"statusCode":401,"header":null},"response":"{\"error\":\"InvalidCredentials\",\"description\":\"The access token provided has expired.\"}","msg":"","time":"2015-04-15T10:37:39.557Z","v":0}' | bunyan
* The Bunyan CLI crashed!
*
* Please report this issue and include the details below:
*
* https://github.com/username_1/node-bunyan/issues/new?title=Bunyan%201.3.6%20crashed%3A%20TypeError%3A%20Object.keys%20called%20on%20non-object
*
* * *
* platform: darwin
* node version: v0.10.28
* bunyan version: 1.3.6
* argv: ["node","/Users/username_1/bin/bunyan"]
* log line: "{\"name\":\"multichannel\",\"hostname\":\"macbook-dev\",\"pid\":44973,\"level\":30,\"res\":{\"statusCode\":401,\"header\":null},\"response\":\"{\\\"error\\\":\\\"InvalidCredentials\\\",\\\"description\\\":\\\"The access token provided has expired.\\\"}\",\"msg\":\"\",\"time\":\"2015-04-15T10:37:39.557Z\",\"v\":0}"
* stack:
* TypeError: Object.keys called on non-object
* at Function.keys (native)
* at _res (/Users/username_1/tm/node-bunyan/bin/bunyan:946:29)
* at emitRecord (/Users/username_1/tm/node-bunyan/bin/bunyan:972:13)
* at handleLogLine (/Users/username_1/tm/node-bunyan/bin/bunyan:723:16)
* at Socket.<anonymous> (/Users/username_1/tm/node-bunyan/bin/bunyan:1124:13)
* at Socket.EventEmitter.emit (events.js:95:17)
* at Socket.<anonymous> (_stream_readable.js:745:14)
* at Socket.EventEmitter.emit (events.js:92:17)
* at emitReadable_ (_stream_readable.js:407:10)
* at emitReadable (_stream_readable.js:403:5)
```
Status: Issue closed
username_1: Will be in bunyan 1.3.6. Thanks for filing it!
username_0: @username_1 Np. Thanks for a quick update.
username_2: Hi there. I still have this issue and I saw 1.3.6 has not been released yet. Any news? Are there any other issues that are blocking it from being released?
username_3: Hi There,
+1 to @username_2 's question, any plans for 1.3.6?
username_1: Really sorry for the delay. bunyan 1.3.6 published to npm now. I dropped the ball after trying to get another bug fix or two into this release.
username_2: Thanks Trent.
And, BTW, Bunyan rocks! :-) |
sequelize/sequelize | 450205338 | Title: findOne adds "ORDER BY" internally breaking MSSQL
Question:
username_0: ## What are you doing?
Trying to get the count of the whole table's records while executing other aggregate functions like "max", when there is no `group by` specified.
**To Reproduce**
Steps to reproduce the behavior:
1. Define model:
```js
sequelize.define('item', {
id: {
type: DataTypes.STRING,
allowNull: false,
primaryKey: true
},
column_1: {
type: DataTypes.DATE,
allowNull: true,
}
});
```
2. Run the following
```js
const result = await model.Item.findOne({
attributes: [
[db.sequelize.fn('max', db.sequelize.col('column_1')), 'max_column_1'],
[db.sequelize.fn('count', '1'), 'count']
],
});
```
3. See error
```
SequelizeDatabaseError: Column "item.id" is invalid in the ORDER BY clause because it is not contained in either an aggregate function or the GROUP BY clause.
at Query.formatError (/project-path/node_modules/sequelize/lib/dialects/mssql/query.js:309:12)
at Request.connection.lib.Request [as userCallback] (/project-path/node_modules/sequelize/lib/dialects/mssql/query.js:69:23)
at Request.callback (/project-path/node_modules/tedious/lib/request.js:37:27)
at Connection.endOfMessageMarkerReceived (/project-path/node_modules/tedious/lib/connection.js:2149:20)
at Connection.dispatchEvent (/project-path/node_modules/tedious/lib/connection.js:1172:36)
at Parser.tokenStreamParser.on (/project-path/node_modules/tedious/lib/connection.js:975:14)
at Parser.emit (events.js:193:13)
at Parser.parser.on.token (/project-path/node_modules/tedious/lib/token/token-stream-parser.js:27:14)
at Parser.emit (events.js:193:13)
at addChunk (/project-path/node_modules/tedious/node_modules/readable-stream/lib/_stream_readable.js:297:12)
at readableAddChunk (/project-path/node_modules/tedious/node_modules/readable-stream/lib/_stream_readable.js:279:11)
at Parser.Readable.push (/project-path/node_modules/tedious/node_modules/readable-stream/lib/_stream_readable.js:240:10)
at Parser.Transform.push (/project-path/node_modules/tedious/node_modules/readable-stream/lib/_stream_transform.js:139:32)
at Parser.afterTransform (/project-path/node_modules/tedious/node_modules/readable-stream/lib/_stream_transform.js:88:10)
at Parser._transform (/project-path/node_modules/tedious/lib/token/stream-parser.js:41:7)
at Parser.Transform._read (/project-path/node_modules/tedious/node_modules/readable-stream/lib/_stream_transform.js:177:10)
at Parser.Transform._write (/project-path/node_modules/tedious/node_modules/readable-stream/lib/_stream_transform.js:164:83)
at doWrite (/project-path/node_modules/tedious/node_modules/readable-stream/lib/_stream_writable.js:405:139)
at writeOrBuffer (/project-path/node_modules/tedious/node_modules/readable-stream/lib/_stream_writable.js:394:5)
at Parser.Writable.write (/project-path/node_modules/tedious/node_modules/readable-stream/lib/_stream_writable.js:303:11)
at Parser.addEndOfMessageMarker (/project-path/node_modules/tedious/lib/token/token-stream-parser.js:45:24)
at Connection.message (/project-path/node_modules/tedious/lib/connection.js:2138:32)
```
## What do you expect to happen?
`SELECT max([column_1]) AS [max_column_1],count(1) AS [count] FROM [item] AS [item];`
## What is actually happening?
`SELECT max([column_1]) AS [max_column_1],count(1) AS [count] FROM [item] AS [item] ORDER BY [item].[id] OFFSET 0 ROWS FETCH NEXT 1 ROWS ONLY;`
## Environment
Dialect:
- mssql
Dialect **tedious** version: 6.1.1
Database version: Microsoft SQL Azure (RTM) - 12.0.2000.8
Sequelize version: both v4 and v5
Node Version: 11.15.0
OS: Mac OS
Tested with latest release:
- Yes, v5.8.7
Status: Issue closed
Answers:
username_1: This is how `findOne` works: it limits records to 1 (which on MSSQL adds an `ORDER BY` to make the limit deterministic). Use `findAll`.
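For reference, a sketch of the suggested workaround with the same attributes as the original query (`raw: true` is optional and just skips model instantiation):

```js
// findAll does not inject the implicit ORDER BY ... OFFSET/FETCH that findOne adds.
const [result] = await model.Item.findAll({
  attributes: [
    [db.sequelize.fn('max', db.sequelize.col('column_1')), 'max_column_1'],
    [db.sequelize.fn('count', '1'), 'count']
  ],
  raw: true
});
```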
username_0: ok `findAll` works. |
google/ground-platform | 568053385 | Title: [Form editor] Add support for multiple choice fields
Question:
username_0: "Select one" or "select many" field types.

Initialized with single empty option field, allows options to be added/removed/modified. Blank options not
Drag to reorder and image cues will be implemented separately.
Answers:
username_0: To me the spacing of the options feels a bit cramped - I'd recommend indenting the options so that the drag affordance appears aligned with the left side of the question field.
username_0: @username_1 Thanks for taking this one! In the first PR feel free ignore the photo, remove option ("x"), and "other" functionality. Those can be added in follow-up PRs linked to this issue. HTH!
username_1: SGTM :)
Status: Issue closed
|
saltstack/salt | 202557041 | Title: [Naming consistency] git.latest "rev" option VS git.detached "ref" option
Question:
username_0: ### Description of Issue/Question
git.latest and git.detached have different key names for commit ID: "ref" vs "rev"
https://docs.saltstack.com/en/latest/ref/states/all/salt.states.git.html#salt.states.git.latest
As this is a bit confusing, one of them would be changed (with a deprecation warning) in my opinion.
Answers:
username_1: @username_2 do you have a preference here?
Thanks,
Daniel
username_2: git.detached is more recent, it should be changed. I'll take care of it.
Status: Issue closed
|
GaryLazerFinz/ECE-411-Capstone-Practicum | 264081001 | Title: Divide and Conquer?
Question:
username_0: How can we divide up the project to make it easier? I can be the webmaster if we can't get GitHub working as advertised. I don't blame you guys; I think it's confusing too. Maybe we need an ordering person, an actuator expert, a sensor person, a programmer, a prototyper, an accountant, a webmaster, a tester, a CAD designer, a motivational speaker, and a writer. I'm just giving examples of jobs that need doing. What do you all think? What would you all like to do?
Answers:
username_1: 1. I think we also need to consider the power supply: do we use a battery or 110 V power?
2. Have we decided which microcontroller we will use?
3. Do we need a digit display?
username_0: I added a section for weekly progress reports. It is in the project schedule section. Please add to them. Just put anything you want. Anything is better than nothing.
SSYC-WebTeam/SSYCmaster | 121621851 | Title: Calendar page sidebar
Question:
username_0: Fix sidebar. It is displaying all the menus and submenus:
<img width="1437" alt="screen shot 2015-12-10 at 6 20 08 pm" src="https://cloud.githubusercontent.com/assets/13126219/11734288/b2a6f9f2-9f6a-11e5-8c36-d6203c879b59.png"><issue_closed>
Status: Issue closed |
Yentis/betterdiscord-emotereplacer | 922057604 | Title: Powercord Support
Question:
username_0: With the BDCompat plugin, everything works except prefixes and actually sending the messages: nothing is sent. I can't find any alternatives and this is a useful plugin.
Answers:
username_1: Sorry, I don't use or know anything about Powercord so I can't help you with this. Feel free to make a pull request if you can figure it out.
Status: Issue closed
|
ant-design/ant-design-mobile | 411016274 | Title: SwipeAction typescript type error (problematic TS type definitions)
Question:
username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design-mobile/issues) of this repository and believe that this is not a duplicate.
### Reproduction link
[https://codepen.io/anon/pen/BMGLbb?editors=0010#0](https://codepen.io/anon/pen/BMGLbb?editors=0010#0)
### Steps to reproduce
Open an editor with TS support and use the SwipeAction component via JSX.
### What is expected?
The TS check should report no errors.
### What is actually happening?
```
[ts]
JSX element type 'SwipeAction' is not a constructor function for JSX elements.
Type 'SwipeAction' is missing the following properties from type 'ElementClass': context, setState, forceUpdate, props, and 2 more. [2605]
```
| Environment | Info |
|---|---|
| antd | 2.2.6 |
| React | 16.7.0-alpha.2 |
| System | macOS |
| Browser | Chrome 71 |
---
version 2.2.8
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Answers:
username_1: try `import React from 'react';`
and `allowSyntheticDefaultImports: true` in tsconfig.json.
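For completeness, the corresponding tsconfig.json fragment (only the relevant option shown):

```json
{
  "compilerOptions": {
    "allowSyntheticDefaultImports": true
  }
}
```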
username_0: It works.
But the module should be ES6 style. https://github.com/Microsoft/TypeScript/issues/10895
username_1: This is a legacy issue. antd-mobile needs the `allowSyntheticDefaultImports` config until we change all components and dependency components to use ES-style imports.
Status: Issue closed
|
Azure/azure-quickstart-templates | 112069838 | Title: TypeHandlerVersion Error
Question:
username_0: [{"code":"VMExtensionProvisioningError","message":"VM has reported a failure when processing extension 'createadforest'. Error message: \"A previous attempt to install WindowsBlue-KB3055381-x64.msu failed; please check the logs on the VM (C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Powershell.DSC\\1.9.0.0)\"."}]}}
This is the error I am getting when the extension is finally processed. I figured out that this was due to the type handler version. But if I put in the latest version (2.7), then the ADDC VM does not get promoted to a domain controller, and then, obviously, the DomainJoin script fails too.
username_1: Hello, I was seeing if you were able to figure this error out? I am trying to do a domain join and getting the same error.
username_0: Hey,
You need to put in version 2.7 instead of 1.9. That will do.
Please note: do not change the type handler versions that are 1.7 to 2.7, as that will create problems. Only change the ones that are 1.9 to 2.7.
username_1: Are you saying the SDK on my computer or on the virtual machine? I have the 2.7 SDK on my local environment.
username_0: In the quickstart template, in the JSON script where the DSC extension is created.
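For reference, a hedged sketch of the fragment in question; only `typeHandlerVersion` matters here, and the resource/extension names are illustrative rather than copied from the quickstart template:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "adVM/createadforest",
  "apiVersion": "2015-06-15",
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.7",
    "autoUpgradeMinorVersion": true
  }
}
```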
username_2: @username_0 I think DSC 2.8 is released now. Can you try with that?
username_0: Surely, will try.
It should work with that too.
Status: Issue closed
|
solo-io/gloo | 532313553 | Title: Gateway gets out of sync when Upstreams change
Question:
username_0: **Describe the bug**
When a VS with a function route references an upstream that doesn't yet have the function, that VS will not be written in the proxy.
When the upstream is updated (for example by FDS), Gateway has no knowledge to resync, and the VS is never added to the proxy (until another event causes a resync).
**To Reproduce**
Create a Swagger or GRPC Deployment/Service and a VS with a function route to the expected upstream at the same time.
The VS will get an error reported to it, and the VS will not be included in the Proxy, even after FDS discovers the functions.
**Expected behavior**
If the US gets updated, the VS will become accepted.
Status: Issue closed |
sopra-fs21-group-17/pictures-server | 873917370 | Title: Lobby Countdown BE
Question:
username_0: time estimate: 1 day
description: implement a timer in backend and update the rest interface of the lobby. Update lobby entity with the needed properties.
Status: Issue closed
Answers:
username_0: 869cbb2ec3a3a5234f497b043890db1e8543ff84
username_0: 5adf1d7b8fda50130feac72c0ad6e197f4b2378d
Status: Issue closed
|
longhorn/longhorn | 735269190 | Title: [FEATURE] Speed up rebuilding by getting checksum simultaneously
Question:
username_0: **Is your feature request related to a problem? Please describe.**
We can speed up the rebuilding by [requesting the server block checksum](https://github.com/longhorn/sparse-tools/blob/bb457e12173d2073ce3f8193cf5e3208c828ac43/sparse/client.go#L248) and [calculating the local block checksum](https://github.com/longhorn/sparse-tools/blob/bb457e12173d2073ce3f8193cf5e3208c828ac43/sparse/client.go#L262) simultaneously.
**Describe the solution you'd like**
See above.
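A minimal Go sketch of the idea, assuming a hypothetical `fetchServerChecksum` network call (the hash choice and helper names are illustrative; the real logic lives in sparse/client.go):

```go
package checksum

import (
	"bytes"
	"crypto/sha512"
)

// checksumsMatch overlaps the server checksum request with local hashing
// instead of performing the two steps sequentially.
func checksumsMatch(localData []byte, fetchServerChecksum func() ([]byte, error)) (bool, error) {
	serverCh := make(chan []byte, 1)
	errCh := make(chan error, 1)

	go func() {
		sum, err := fetchServerChecksum() // network round-trip runs concurrently
		if err != nil {
			errCh <- err
			return
		}
		serverCh <- sum
	}()

	local := sha512.Sum512(localData) // local hashing overlaps the request

	select {
	case err := <-errCh:
		return false, err
	case server := <-serverCh:
		return bytes.Equal(local[:], server), nil
	}
}
```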
Answers:
username_0: The reason that failed replica reuse time and new replica rebuilding time are the same is:
1. Since there is actually no data on the receiver side for the new replica rebuilding case, `client.getServerChecksum` takes less time.
```
[instance-manager-r-3dab8865] [vol-r-793b61f8] time="2020-11-10T06:19:14Z" level=info msg="Sync batchInterval length 131072, total time 3.668229ms, checksum time 1.359858ms, percentage 37.0712406450088%"
[instance-manager-r-3dab8865] [vol-r-793b61f8] time="2020-11-10T06:19:14Z" level=info msg="Sync batchInterval length 131072, total time 2.857143ms, checksum time 741.813µs, percentage 25.963453701827316%"
[instance-manager-r-3dab8865] [vol-r-793b61f8] time="2020-11-10T06:19:14Z" level=info msg="Sync batchInterval length 131072, total time 2.603783ms, checksum time 760.654µs, percentage 29.213417554381454%"
[instance-manager-r-3dab8865] [vol-r-793b61f8] time="2020-11-10T06:19:14Z" level=info msg="Sync batchInterval length 131072, total time 3.549488ms, checksum time 1.706659ms, percentage 48.08183602818209%"
```
2. Though there is no writing for the failed replica reuse case, `client.getServerChecksum` takes a longer time and the client needs to calculate the local checksum.
```
[instance-manager-r-b148c9fb] [vol-r-839c225b] time="2020-11-10T06:33:56Z" level=info msg="Sync batchInterval length 131072, total time 1.479149ms, checksum time 1.478969ms, percentage 99.98783084057116%"
[instance-manager-r-b148c9fb] [vol-r-839c225b] time="2020-11-10T06:33:56Z" level=info msg="Sync batchInterval length 131072, total time 1.729528ms, checksum time 1.729388ms, percentage 99.99190530595631%"
[instance-manager-r-b148c9fb] [vol-r-839c225b] time="2020-11-10T06:33:56Z" level=info msg="Sync batchInterval length 131072, total time 1.980584ms, checksum time 1.980444ms, percentage 99.99293137781584%"
[instance-manager-r-b148c9fb] [vol-r-839c225b] time="2020-11-10T06:33:56Z" level=info msg="Sync batchInterval length 131072, total time 1.397808ms, checksum time 1.397618ms, percentage 99.98640728912697%"
```
As a result, the rebuilding time in both cases are almost the same in general.
---
It seems that removing the checksum mechanism won't speed up the rebuilding compared with using 2 goroutines to handle checksums simultaneously, because the client still needs time to read data from the local file.
---
A simple test result for this enhancement:
The time of rebuilding 10Gi data in ext4 filesystem of a Longhorn volume:
Before introducing this feature: 2min40sec;
After introducing this feature: 2min.
If necessary, we can use a larger data set to test the improvement.
username_1: **Observations:**
Cluster specs:
1 controlplane node (4 vCPUs / 8 GB /160 GB)
3 worker nodes (4 vCPUs / 8 GB /160 GB)
**Rebuilding replica with 10 GB of data**
- Longhorn v1.0.2: 5 min, 47 seconds
- Longhorn master: 5 min, 27 seconds
**Rebuilding replica with 16 GB of data**
- Longhorn v1.0.2: 15 min, 7 seconds
- Longhorn master: 12 min, 27 seconds
username_0: I just tested 50GB data rebuilding today. The result is almost the same as Mohamed's. I will re-check this feature later.
username_0: I just retried the test with some additional debug logs added. Here are the test steps and the results:
1. Deploy Longhorn system with the `master` image
2. Deploy 2 engine images:
- `shuowu/longhorn-engine:master-1948` (based on `longhornio/longhorn-engine:master` with some additional debug logs added)
- `shuowu/longhorn-engine:v1.0.x-1948` (based on `longhornio/longhorn-engine:master` but with the (sparse file sync) PR commits reverted and some additional debug logs added)
3. Create and attach a volume, write 50GB data to it.
4. Crash and delete one replica. Check the rebuilding time of the volume with different engine images.
Result:
1. `shuowu/longhorn-engine:master-1948`
Time: around 42.5min
Log:
```
[vol-r-048aec79] time="2020-12-02T09:01:25Z" level=info msg="Sending file volume-snap-d2fe55bb-fb15-44ae-b0fe-5006554c3671.img to 10.42.1.214:10004"
time="2020-12-02T09:01:25Z" level=info msg="source file size: 59055800320, setting up directIo: true"
......
[instance-manager-r-de3b0700] [vol-r-048aec79] time="2020-12-02T09:45:17Z" level=info msg="syncFileContent succeed: server checksum time 544619 ms, client checksum time 828134 ms, send time 1721255 ms"
```
2. `shuowu/longhorn-engine:v1.0.x-1948`
Time: around 50.7min
Log:
```
[vol-r-048aec79] time="2020-12-02T11:46:32Z" level=info msg="Sending file volume-snap-94e4b90e-3094-4941-b661-a698483135cc.img to 10.42.1.221:10004"
time="2020-12-02T11:46:32Z" level=info msg="source file size: 59055800320, setting up directIo: true"
......
[vol-r-048aec79] time="2020-12-02T12:37:13Z" level=info msg="syncFileContent succeed: server checksum time 510819 ms, client checksum time 638785 ms, send time 1891356 ms"
```
The result is different. The main cause (besides the debug logs) may be that the Longhorn system version is always `master` rather than upgraded from `v1.0.2` to `master`, compared with the previous test. Currently, I am not sure if there are other modifications that degrade the overall rebuilding speed. I will test that as the next step.
username_2: Performance improvements in replica rebuild are very much appreciated. I use Ubuntu nodes that ask for a restart due to updates almost every week, and on every restart, all replicas on the node are rebuilt.
My current metrics when rebuilding a replica on an SSD disk and a 10 Gb/s network:
Size: 50 Gi
Actual Size: 38.8 Gi
Time to rebuild a replica: 648s
Replica rate: (38.8Gi/648) = 61.3 MB/s
username_3: One possible explanation (thanks to the discussion with @username_2 ):
In https://github.com/longhorn/sparse-tools/blob/c1877f4a574f57a461f7a69709defd20bc52718a/sparse/client.go#L95 , we depend on the holes to separate the sections for checksumming.
But if one section (data interval) is very big (hundreds of MB or even GB) and only a small amount of it changed, we will have to resync the whole section because the checksums differ.
We should be able to sync more efficiently by cutting big chunks of the data area into smaller pieces.
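A tiny sketch of that splitting idea (the chunk size is an arbitrary caller-supplied value, not what sparse-tools actually uses):

```go
// splitInterval cuts a data interval [begin, end) into fixed-size chunks so a
// checksum mismatch only forces a resync of one small chunk instead of the
// whole section.
func splitInterval(begin, end, chunkSize int64) [][2]int64 {
	var chunks [][2]int64
	for off := begin; off < end; off += chunkSize {
		chunkEnd := off + chunkSize
		if chunkEnd > end {
			chunkEnd = end
		}
		chunks = append(chunks, [2]int64{off, chunkEnd})
	}
	return chunks
}
```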
username_4: ## Pre Ready-For-Testing Checklist
* [x] Is the reproduce steps/test steps documented?
* [x] Is there a workaround for the issue? If so, is it documented?
* [x] Does the PR include the explanation for the fix or the feature?
* [x] ~~Does the PR include deployment change (YAML/Chart)? If so, have both YAML file and Chart been updated in the PR?~~
* [x] Is the backend code merged (Manager, Engine, Instance Manager, BackupStore etc) (including `backport-needed`)?
The PR is at https://github.com/longhorn/sparse-tools/pull/76
https://github.com/longhorn/longhorn-engine/pull/622
https://github.com/longhorn/longhorn-engine/pull/625
* [x] Which areas/issues this PR might have potential impacts on?
Area replica rebuild, snapshot coalescing,
Issues https://github.com/longhorn/longhorn/issues/2629
* [x] ~~**If labeled: require/LEP** Has the Longhorn Enhancement Proposal PR submitted?
The LEP PR is at ~~
* [x] ~~**If labeled: area/ui** Has the UI issue filed or ready to be merged (including `backport-needed`)?
The UI issue/PR is at~~
* [x] ~~**If labeled: require/doc** Has the necessary document PR submitted or merged (including `backport-needed`)?
The Doc issue/PR is at~~
* [x] ~~**If labeled: require/automation-e2e** Has the end-to-end test plan been merged? Have QAs agreed on the automation test case? (including `backport-needed`)
The automation skeleton PR is at
The automation test case PR is at~~
* [x] ~~**If labeled: require/automation-engine** Has the engine integration test been merged (including `backport-needed`)?
The engine automation PR is at~~
* [x] ~~**If labeled: require/manual-test-plan** Has the manual test plan been documented?
The updated manual test plan is at~~
* [x] ~~**If the fix introduces the code for backward compatibility** Has a separate issue been filed with the label `release/obsolete-compatibility`?
The compatibility issue is filed at~~
username_4: Testing instructions in a couple of comments in this issue: https://github.com/longhorn/longhorn/issues/2507
Split them up into multiple comments for context.
username_5: cc @username_6 @kaxing
username_6: Verified with Longhorn v1.1.2-head `06/21/2021`
Validation - **In Progress**
| Version | Data in Gi | Time taken to rebuild replica |
| -------- | --------- | ---------------------------- |
| V1.1.1 | 30 | ~16 mins |
| v1.1.2-head | 30 | ~5 mins |
Status: Issue closed
|
jOOQ/jOOQ | 1159363854 | Title: Re-usable queries and implicit joins
Question:
username_0: ### Your question:
I’m still trying to wrap my head around how to best leverage jOOQ in our product. One of the things we need to find a good pattern for, is how to make re-usable dynamic queries.
The requirements for this to work well is:
- Discoverability: Easy to discover and understand the canned queries that are there
- Flexibility: The canned queries should serve as a good starting point for solving many different requirements
- Performance: The canned queries should only fetch the data needed for the case in question and should leverage the database as much as possible
While we can write functions that generate parts of a jOOQ query for us, for instance something like `.where(searchByPersonArgs(args))`, this doesn't feel like the optimal approach.
A better solution, as far as I can see, is to make use of derived tables or similar that plug into the implicit join hierarchy that I've come to love. I know how to do this using views in the database, and I suspect we can plug in table functions as well.
My question is whether it is possible to do something similar by creating derived tables and table-returning functions in jOOQ directly. My gut feeling is that this should be possible now, but it's not clear to me how to best do this. It is the implicit joins that stump me.
Status: Issue closed |
OBKoro1/koro1FileHeader | 480458032 | Title: The auto-added Python header comment sits at the very top, pushing the encoding declaration down...
Question:
username_0: The comment automatically added to Python files goes at the very top of the file, pushing the encoding declaration down...
```python
'''
@Description: In User Settings Edit
@Author: <NAME>
@Date: 2019-08-14 10:25:52
@LastEditTime: 2019-08-14 10:25:52
@LastEditors: your name
'''
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import socket
import threading
```
Answers:
username_1: There has been a config option for this for a long time:

Status: Issue closed
|
caohehuan/Test | 626945235 | Title: Java code review violations, part 2
Question:
username_0: _**public class mass {**_Class 'mass' is public, should be declared in a file named 'mass.java'不规范
```java
public static int correct = 0;
public static int front = 0;
public static int back = 0;
public static int end = 0;
public static int[] error = new int[30];
public static int[] errorId = new int[30];
public static int symbol;
public static int sSymbol;
public static int j = 0;
public static int k = 0;
public static int inResult = 0;
public static int corResult = 0;
public static int i = 0;
public static String[] errorSymbol = new String[30];
public static Random random = new Random();
public static Scanner inputNumber = new Scanner(System.in);
public static void calculate(int i){
```
altunyurt/kagni | 441144705 | Title: Add cython support for performance
Question:
username_0: Currently the asyncio + uvloop implementation, while the fastest, still reaches only about 1/5 of Redis's performance, followed by the trio implementation at roughly 1/15 of Redis's performance.
Hopefully, adding Cython support would help achieve performance decent enough to be useful for small to medium scale projects.
Status: Issue closed
libcthorne/edictweb | 286562363 | Title: Compare performance with MongoDB
Question:
username_0: postgresql query times:
"a" search: initial ~6000ms, subsequent ~2000ms
"t" search: initial ~13000ms, subsequent ~2000ms
"to" search: initial ~2000ms, subsequent ~700ms
"cat" search: initial ~30ms, subsequent ~20ms
"walk" search: initial ~50ms, subsequent ~10ms
"violet" search: initial ~10ms, subsequent ~7ms
Answers:
username_0: MongoDB (no significant difference between initial/subsequent):
"a": ~300ms
"t": ~300ms
"to": ~140m
"cat": ~20ms
"walk": ~12ms
"violet": ~5ms
"cat to": ~150ms
"a b": ~450ms
"to dance": ~120ms
"to dance a": ~350ms
"to dance a b": ~600ms
"to dance a b c": ~800ms
"to dance a b c d": ~850ms
"to dance a b c d e": ~1000ms
username_0: Postgres (again):
"a": ~3000ms, ~1500ms
"t": ~8000ms, ~1500ms
"to": ~900ms, ~700ms
"cat": ~80ms, ~40ms
"walk": ~60ms, ~30ms
"violet": ~30ms, ~20ms
"cat to": ~860ms, ~800ms
"a b": ~3300ms, ~2000ms
"to dance": ~800ms, ~700ms
"to dance a": ~1900ms
"to dance a b": ~2500ms
"to dance a b c": ~4000ms, ~2700ms
"to dance a b c d": ~3600ms
"to dance a b c d e": ~3800ms
Measured with timedelta:
```
if paginate:
+ import datetime
+ t1 = datetime.datetime.now()
+
paginator = Paginator(matching_entries, per_page=20)
try:
matching_entries = paginator.page(page)
@@ -45,7 +48,11 @@ def search_entries(query, paginate=True, page=None):
matching_entries = paginator.page(paginator.num_pages)
total_matches = paginator.count
+ _matches = list(matching_entries)
+ t2 = datetime.datetime.now()
+ time_taken = (t2-t1).total_seconds()*1000
```
Status: Issue closed
username_0: SQL plan:
```
QUERY PLAN
--
Sort (cost=107269.49..107795.50 rows=210406 width=143) (actual time=1533.155..1533.155 rows=0 loops=1)
Sort Key: (sum(importer_invertedindexentry.weight)), importer_invertedindexentry.dictionary_entry_id
Sort Method: quicksort Memory: 25kB
-> GroupAggregate (cost=82880.47..88666.63 rows=210406 width=143) (actual time=1533.132..1533.132 rows=0 loops=1)
Filter: (count(DISTINCT importer_invertedindexentry.index_word_text) = 7)
Rows Removed by Filter: 205297
-> Sort (cost=82880.47..83406.48 rows=210406 width=143) (actual time=1201.522..1225.673 rows=205297 loops=1)
Sort Key: importer_invertedindexentry.id, importer_dictionaryentry.id
Sort Method: quicksort Memory: 63294kB
-> Hash Join (cost=12358.39..64277.61 rows=210406 width=143) (actual time=252.705..1038.146 rows=205297 loops=1)
Hash Cond: (importer_invertedindexentry.dictionary_entry_id = importer_dictionaryentry.id)
-> Bitmap Heap Scan on importer_invertedindexentry (cost=5049.40..53023.51 rows=210406 width=23) (actual time=74.260..568.344 rows=205297 loops=1)
Recheck Cond: ((index_word_text)::text = ANY ('{e,dance,b,d,c,a,to}'::text[]))
-> Bitmap Index Scan on importer_invertedindexentry_index_word_text_531933ab (cost=0.00..4996.80 rows=210406 width=0) (actual time=64.717..64.717 rows=205297 loops=1)
Index Cond: ((index_word_text)::text = ANY ('{e,dance,b,d,c,a,to}'::text[]))
-> Hash (cost=5078.44..5078.44 rows=178444 width=120) (actual time=178.052..178.052 rows=178163 loops=1)
Buckets: 32768 Batches: 1 Memory Usage: 26459kB
-> Seq Scan on importer_dictionaryentry (cost=0.00..5078.44 rows=178444 width=120) (actual time=0.021..80.102 rows=178163 loops=1)
Total runtime: 1541.581 ms
```
username_0: Postgres with increased work_mem comes very close to MongoDB times posted above.
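For reference, a sketch of that tweak (64MB is an arbitrary illustrative value):

```sql
-- Per-session override; postgresql.conf sets it globally.
SET work_mem = '64MB';
```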
username_0: MongoDB write speed (with validation): [Request 1] Finished dictionary file import in (entries: 178163, duration: 0:51:19.380600)
username_0: MongoDB write speed (without validation): [Request 3] Finished dictionary file import in (entries: 178163, duration: 0:48:33.635711)
username_0: Postgres write speed: [Request 4] Finished dictionary file import in (entries: 178163, duration: 0:47:18.661757) |
department-of-veterans-affairs/va.gov-team | 704670145 | Title: Monitoring and Alerting — Launch automated error alert triaging
Question:
username_0: ## Problem Statement
FE Tools is not triaging errors that happen on the website, and there are too many errors for us to triage them manually
## Hypothesis or Bet
*How will this initiative impact the quality of VFS or VSP teams' work?*
*How will this initiative be easy for VFS or VSP teams? Or how will it be easier than what they did before?*
## We will know we're done when... ("Definition of Done")
*What requirements does this project need to meet for you to finish this initiative?*
## Launch Checklist
### Guidance (delete before posting)
_This checklist is intended to be used to help answer, "is my VSP initiative ready for launch?". All of the items in this checklist should be completed, with artifacts linked---or have a brief explanation of why they've been skipped---before launching a given VSP initiative. All links or explanations can be provided in **Required Artifacts** sections. The items that can be skipped are marked as such._
_Keep in mind the distinction between **Product** and **Initiative** --- each Product needs specific supporting documentation, but Initiatives to improve existing Products should reuse existing documentation for that Product. [VSP Product Terminology](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/teams/vsp/product-management/product-terminology.md) for details._
### Is this service / tool / feature...
### ... tested?
- [ ] Usability test (_TODO: link_) has been performed, to validate that new changes enable users to do what was intended and that these changes don't worsen quality elsewhere. If usability test isn't relevant for this change, document the reason for skipping it.
- [ ] ... and issues discovered in usability testing have been addressed.
* _Note on skipping: metrics that show the impact of before/after can be a substitute for usability testing._
- [ ] End-to-end [manual QA](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/quality-assurance/README.md) or [UAT](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/research/planning/what-is-uat.md) is complete, to validate there are no high-severity issues before launching
- [ ] _(if applicable)_ New functionality has thorough, automated tests running in CI/CD
### ... documented?
- [ ] New documentation is written pursuant to our [documentation style guide](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/platform/documentation/style-guide)
- [ ] Product is included in the [List of VSP Products](https://docs.google.com/spreadsheets/d/1Fn2lD419WE3sTZJtN2Ensrjqaz0jH3WvLaBtn812Wjo/edit#gid=0)
* _List the existing product that this initiative fits within, or add a new product to this list._
- [ ] Internal-facing: there's a [Product Outline](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/teams/vsp/product-management/product-outline-template.md) checked into [`products/platform/PRODUCT_NAME/`](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/products/platform/)
* _Note: the Product Directory Name should match 1:1 with the List of VSP Products_
- [ ] External-facing: a [VFS-facing README](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/teams/vsp/product-management/product-readme-template.md) exists for this product/feature tool
- [ ] ... and should be located at `platform/PRODUCT_NAME/README.md`
- [ ] External-facing: a [User Guide](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/teams/vsp/product-management/writing-user-guides.md) exists for this product/feature/tool, and is updated for changes from this initiative
- [ ] ... and should be linked from the VFS-facing README for your product
- [ ] ... and should be located within `platform/PRODUCT_NAME/`, unless you already have another location for it
- [ ] _(if applicable)_... and post to [#vsp-content-ia](https://dsva.slack.com/channels/vsp-content-ia) about whether this should be added to the [Documentation homepage](https://department-of-veterans-affairs.github.io/va.gov-team/)
- [ ] _(if applicable)_ Post to [#vsp-service-design](https://dsva.slack.com/channels/vsp-service-design) for external communication about this change (e.g. VSP Newsletter, customer-facing meetings)
### ... measurable
- [ ] _(if applicable)_ This change has clearly-defined success metrics, with instrumentation of those analytics where possible, or a reason documented for skipping it.
* For help, see: [Analytics team](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/platform/analytics)
- [ ] This change has an accompanying [VSP Initiative Release Plan](https://github.com/department-of-veterans-affairs/va.gov-team/issues/new/choose).
## Required Artifacts
### Documentation
* **`PRODUCT_NAME`**: _directory name used for your product documentation_
* **Product Outline**: _link to Product Outline_
* **README**: _link to VFS-facing README for your product_
* **User Guide**: _link to User Guide_
### Testing
* **Usability test**: _link to GitHub issue, or provide reason for skipping_
* **Manual QA**: _link to GitHub issue or documented results_
* **Automated tests**: _link to tests, or "N/A"_
### Measurement
* **Success metrics**: _link to where success metrics are measured, or where they're defined (Product Outline is OK), or provide reason for skipping_
* **Release plan**: _link to Release Plan ticket_
## TODOs
- [ ] Convert this issue to an epic
- [ ] Add your team's label to this epic
Answers:
username_1: Related tickets have been resolved. Closing
Status: Issue closed
|
greenplum-db/gporca | 391685805 | Title: checking Checking ORCA version... configure: error: Your ORCA version is expected to be 3.14.XXX
Question:
username_0: Dear all,
When I configure greenplum-db using the command "./configure --with-perl --with-python --with-libxml --with-gssapi --prefix=/usr/local/gpdb", I get the error "Your ORCA version is expected to be 3.14.XXX", but I think I have already installed gporca v3.14. Is there something I omitted? Or how can I solve this problem? Thanks.
[root@spa-42-185-112 gporca-3.14.0]# rm -rf /usr/local/include/naucrates
[root@spa-42-185-112 gporca-3.14.0]# rm -rf /usr/local/include/gpdbcost
[root@spa-42-185-112 gporca-3.14.0]# rm -rf /usr/local/include/gpopt
[root@spa-42-185-112 gporca-3.14.0]# rm -rf /usr/local/include/gpos
[root@spa-42-185-112 gporca-3.14.0]# rm -rf /usr/local/lib/libnaucrates.so*
[root@spa-42-185-112 gporca-3.14.0]# rm -rf /usr/local/lib/libgpdbcost.so*
[root@spa-42-185-112 gporca-3.14.0]# rm -rf /usr/local/lib/libgpopt.so*
[root@spa-42-185-112 gporca-3.14.0]# rm -rf /usr/local/lib/libgpos.so*
[root@spa-42-185-112 gporca-3.14.0]# cmake -GNinja -H. -Bbuild
-- Build type: RelWithDebInfo
-- Configuring done
-- Generating done
-- Build files have been written to: /root/work/gporca-3.14.0/build
[root@spa-42-185-112 gporca-3.14.0]# ninja install -C build
ninja: Entering directory `build'
[0/1] Install the project...
-- Install configuration: "RelWithDebInfo"
-- Installing: /usr/local/lib/libgpos.so.3.14.0
-- Installing: /usr/local/lib/libgpos.so.3
-- Installing: /usr/local/lib/libgpos.so
-- Installing: /usr/local/include/gpos
-- Installing: /usr/local/include/gpos/error
-- Installing: /usr/local/include/gpos/error/CAutoLogger.h
-- Installing: /usr/local/include/gpos/error/CErrorHandlerStandard.h
-- Installing: /usr/local/include/gpos/error/CException.h
-- Installing: /usr/local/include/gpos/error/CErrorContext.h
-- Installing: /usr/local/include/gpos/error/CAutoExceptionStack.h
-- Installing: /usr/local/include/gpos/error/CMessageRepository.h
-- Installing: /usr/local/include/gpos/error/CLogger.h
-- Installing: /usr/local/include/gpos/error/CMessage.h
-- Installing: /usr/local/include/gpos/error/IErrorContext.h
-- Installing: /usr/local/include/gpos/error/CErrorHandler.h
-- Installing: /usr/local/include/gpos/error/ILogger.h
-- Installing: /usr/local/include/gpos/error/CMessageTable.h
-- Installing: /usr/local/include/gpos/error/CMiniDumper.h
-- Installing: /usr/local/include/gpos/error/CSerializable.h
-- Installing: /usr/local/include/gpos/error/CAutoTrace.h
-- Installing: /usr/local/include/gpos/error/CFSimulator.h
-- Installing: /usr/local/include/gpos/error/CLoggerSyslog.h
-- Installing: /usr/local/include/gpos/error/CLoggerStream.h
-- Installing: /usr/local/include/gpos/base.h
-- Installing: /usr/local/include/gpos/utils.h
-- Installing: /usr/local/include/gpos/types.h
-- Installing: /usr/local/include/gpos/io
-- Installing: /usr/local/include/gpos/io/IOstream.h
-- Installing: /usr/local/include/gpos/io/iotypes.h
-- Installing: /usr/local/include/gpos/io/COstreamFile.h
-- Installing: /usr/local/include/gpos/io/CFileDescriptor.h
-- Installing: /usr/local/include/gpos/io/CFileWriter.h
-- Installing: /usr/local/include/gpos/io/CFileReader.h
-- Installing: /usr/local/include/gpos/io/COstreamBasic.h
-- Installing: /usr/local/include/gpos/io/COstream.h
-- Installing: /usr/local/include/gpos/io/COstreamString.h
-- Installing: /usr/local/include/gpos/io/ioutils.h
-- Installing: /usr/local/include/gpos/test
-- Installing: /usr/local/include/gpos/test/CTimeSliceTest.h
-- Installing: /usr/local/include/gpos/test/CUnittest.h
[Truncated]
checking how to run the C++ preprocessor... g++ -std=c++11 -E
checking gpos/_api.h usability... yes
checking gpos/_api.h presence... yes
checking for gpos/_api.h... yes
checking naucrates/init.h usability... yes
checking naucrates/init.h presence... yes
checking for naucrates/init.h... yes
checking gpopt/init.h usability... yes
checking gpopt/init.h presence... yes
checking for gpopt/init.h... yes
checking gpdbcost/CCostModelGPDB.h usability... yes
checking gpdbcost/CCostModelGPDB.h presence... yes
checking for gpdbcost/CCostModelGPDB.h... yes
checking for strnicmp in -lxerces-c... yes
checking for Xerces-C... checking for gpos_init in -lgpos... yes
checking for main in -lgpdbcost... yes
checking for InitDXL in -lnaucrates... yes
checking for gpopt_init in -lgpopt... yes
checking Checking ORCA version... configure: error: Your ORCA version is expected to be 3.14.XXX
[root@spa-42-185-112 gpdb-5.15.1]#
Answers:
username_1: Hi @username_0,
Thanks for trying out ORCA.
Can you please follow the instructions here https://github.com/greenplum-db/gporca/commit/2a5e9fce472d3f1a29c16e1e53974ad5925ee735
and see if it solves the issue.
Thanks
username_2: Please let us know if this issue persists. If so please re-open it.
Status: Issue closed
username_3: I also met the same issue, but the solution provided by username_1 does not work.
username_4: I tried to compile gporca at 3.42 or install the binary package, both failed. The OS is RHEL 7.
```
$ sudo tar xvzf bin_orca_centos5_release.tar.gz -C /usr/
$ sudo tar xvzf bin_orca_centos5_release.tar.gz -C /usr/local
$ which gcc
/opt/gcc-6.2.0/bin/gcc
$ cat /etc/ld.so.conf
include ld.so.conf.d/*.conf
/usr/local/lib
$ CFLAGS="-O0 -g3" ./configure --prefix=`echo ~`/greenplum-db-devel --with-pgport=5432 --with-perl --with-python --with-libxml --with-gssapi --enable-debug --enable-orca --disable-gpfdist
checking Checking ORCA version... configure: error: Your ORCA version is expected to be 3.42.XXX
```
username_5: @username_4 hey, did you try running `ldconfig` after completing the build of the optimizer? For example:
```
cd depends
./configure
make
make install_local
cd ..
```
Then run `ldconfig`, and then build the database.
Status: Issue closed
username_4: Thanks for the info, however my env has a problem connecting outside, just close this. |
ubershmekel/redditp | 874850357 | Title: Site goes back to first image randomly
Question:
username_0: Hello!
I have a difficult time reproducing this, so I apologize for a vague issue..
I use Firefox on my OnePlus 6T. Whenever I use your (amazing) site, sometimes when I swipe to the next image it just goes back to the first image.
Sometimes it takes 20 images, sometimes only 3.
What other info can I give you to help you figure this out?
Answers:
username_1: Is it reaching the end and rotating back to the start? How often does it happen?
username_0: Hello!
Good news, I just managed to reproduce it.
Let's say x = 35.
If there are x items loaded and I am on item X and scroll to the right before the next items are loaded, it goes back to item 1. It should wait and load x + 1 instead.
username_1: The wrap-around functionality is for situations where the subreddit has no more images. I could perhaps start to load the next set of posts when you hit the slide before the last, not on the last slide. Just so there would be less chance of you missing. I guess you're swiping quickly compared to how fast the loading is.
username_0: I would suggest that wrap around only activates when there are no more images on the subreddit, and perhaps notifying the user they have been brought back to the beginning again.
Currently I am brought back to the start because the next images are not loaded yet, not because there are no more images to be loaded. That last scenario sounds like your intention, but it doesn't really work well.
So when someone scrolls to the next image that is not loaded yet, we have to wait until that API call is done. If the API call returns no images, the subreddit has no more images to show, so we can wrap around. If the API does return images, we load the next one.
I don't know how often images are loaded currently, but perhaps loading them 10-15 images before the current last one is accessed would prevent the user from having to wait?
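A hedged sketch of the combined proposal (function and field names are illustrative, not redditp's actual code):

```javascript
// Prefetch the next page a few slides before the end, and only wrap around
// once the API has confirmed there is nothing left to load.
async function advance(state) {
    const PREFETCH_MARGIN = 10;
    if (!state.exhausted && state.index >= state.slides.length - PREFETCH_MARGIN) {
        const page = await fetchNextPage(state.after); // illustrative API call
        if (page.posts.length === 0) {
            state.exhausted = true; // the subreddit really has no more images
        } else {
            state.slides.push(...page.posts);
            state.after = page.after;
        }
    }
    if (state.index + 1 < state.slides.length) {
        state.index += 1; // normal advance
    } else if (state.exhausted) {
        state.index = 0;  // wrap around only at the true end
        notifyWrappedToStart(); // illustrative user notification
    }
    // Otherwise stay put while the pending page finishes loading.
}
```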
scalaz/scalaz-stream | 22269534 | Title: More principled `Step` type
Question:
username_0: It is sometimes useful / necessary when binding to an external API to grant control over traversing the stream to some external process. For these situations, we could use a more principled way of 'stepping' a stream, such that after each step, the caller receives the current emitted values and a next step, as well as the latest finalizer.
Answers:
username_1: +1
@username_2 that's exactly what I need too... have you had any success? (This is the closest I've come so far, but it does not handle the Task context at all...: https://gist.github.com/username_1/2ed965bde7324cb73325)
There's a "toTask trick" discussed here that looks useful: https://groups.google.com/forum/#!topic/scalaz/gx0eXHpQN48
username_2: @username_1 I ended up with the toTask trick as well
username_3: @username_2 @username_1 This is interesting, because we're actually removing `toTask` in 0.7. In theory, it would be not-horrifically-hard to implement `Process[Task, A] => Iterator[A]` on top of `step` directly (this is in fact what the new `io.toInputStream` function does in effect, though significantly obscured by some complex chunking logic). More importantly, using `step` directly gives you the ability to close the stream if you happen to not iterate over the whole thing. I'm not sure how relevant that is though, given that `Iterator` is `next(): A` and `hasNext(): Boolean`. Honestly, `Iterator` is just a really bad type…
username_1: @username_3 hmm... that's unfortunate. I tried using step, but couldn't get it to work when there was a Task context (e.g. see link earlier) - though this could be because it's a bit beyond me. toTask was much easier. I agree that Iterator isn't a great type, but there's not much choice when it's an integration issue. In my case I need a TraversableOnce, so Iterator was the simplest solution. I think it's a good idea to make it easy to drive a Process externally (i.e. suck on it till the end or close it early). The alternative (running the process and integrating via a blocking queue) is nasty.
username_3: @username_1 There's various things we can do to try to make this situation better. The problem of course is that if we go far enough in this direction, we just reinvent iteratees. :-) Perhaps a `Process[Task, A] => (Iterator[A], () => Unit)` would make sense tucked in some sort of side module somewhere.
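For intuition, the `(Iterator[A], () => Unit)` shape discussed here is the pull-with-cleanup pattern; a sketch of the concept in Python (illustration only, not scalaz-stream code), where generators make early termination a first-class `close()`:

```python
def pull(source, cleanup):
    """Yield items one at a time; run cleanup even if the consumer stops early."""
    try:
        for item in source:
            yield item
    finally:
        cleanup()  # runs on exhaustion, on close(), or on a consumer exception

it = pull(iter([1, 2, 3]), lambda: print("resources released"))
print(next(it))  # 1
it.close()       # early termination still triggers the cleanup
```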
username_1: @username_3 yeah, that's a good idea - with the close operation optional if iteration is to the end. Brainstorm:
- Since it's probably nicer (and closer to the way Process works) to have `def next() : Option[A]` rather than Iterator's interface, could be worth exposing that too (e.g. Iterator implementation might well be built on top of that, since it needs to read ahead to implement `hasNext()`).
- Might also be worth considering `Process1[I,O] => (Iterator[I] => Iterator[O], () => Unit)`? In particular, I have my eye on `RDD.mapPartitions` in Spark :-)
username_3: @username_1 I think it would probably be best to move this to a new issue discussion so we have it tracked as an open feature request. Care to do the honors? :-)
username_4: A `TraversableOnce` only needs to implement `foreach`, and `Process[Task, A] => TraversableOnce[A]` is rather trivial. The whole point of `TraversableOnce` is keeping control of iteration, which is the sticking point here. Alas, it does not allow early termination.
username_3: > `Process[Task, A] => TraversableOnce[A]` is rather trivial.

Actually, `Process[Task, A] => TraversableOnce[A]` is less trivial than you would think if you want to support *incremental* evaluation. Obviously dropping down to `runLog` and throwing it all into memory is easy. Doing things properly with `step` is slightly more complex. Not hard, but I wouldn't classify it as "trivial".
username_5: OT, but I would argue that it is in fact trivial, as long as we understand the engineer's definition of _trivial_: http://fishbowl.pastiche.org/2007/07/17/understanding_engineers_feasibility/
cheers,
jed.
username_4:
```scala
class ProcessTraversable[A](p: Process[Task, A]) extends Traversable[A] {
  def foreach[U](f: A => U): Unit = {
    val sink: Sink[Task, A] = Process.constant((x: A) => Task.delay(f(x)))
    p.to(sink).run.run
  }
}
```
What am I missing?
username_3: @username_4 I hadn't considered implementing it with a sink. That works well.
username_1: @username_4 I wish I knew scalaz well enough to understand that! ;)
username_4: It is rather trivial. `Sink` is a kind of `Process` used to produce side effects: `type Sink[M, A] = Process[M, A => M[Unit]]`, which looks weird, but I'll explain.

A "normal" side effect, like any method in a very imperative Java program, has type `A => Unit`. For example, going back to Java:

```java
void println(Object o) {
  print(o);
  print("\n");
}
```

In other words, it takes parameters, which are represented by type `A`. Then it does stuff. Finally it doesn't return anything, which is represented by type `Unit`.

The element type of that `Sink` process is `A => M[Unit]`, however. This simply means that the side effect represented by `Unit` is going to be executed in the context of the monad `M`, given the parameter `A`:

```scala
(x: A) => Task.delay(f(x))
```

Pretty simple, right? Given the parameter `x` of type `A`, return a `Task` which, when executed, will perform `f(x)`, where `f` is our side effect. The function `f` could be, for example, the `println` method above. Let's call that function "sideEffect". So the sink is:

```scala
Process.constant(sideEffect)
```

That is, it is an infinite stream of "sideEffect", where each "sideEffect" is a function that takes an `A` and produces a side effect, returning nothing. They happen to be identical because that suits our needs.

So that's our "sink". As I said at the beginning, a sink is used to perform side effects. To give an example of how, let's say we have the following two processes:

```scala
val source: Process[Task, Int] = Process(1, 2, 3, 4, 5)
val sink: Sink[Task, Int] = Process(f, g, h, i, j)
```

Then if I write `source.to(sink)`, I'll get this:

```scala
Process(f(1), g(2), h(3), i(4), j(5))
```

And, when I run that process, I'll call the functions f, g, h, i and j with their respective parameters.

In this case, I'm generating an infinite stream of "f", which is the parameter to `foreach`, using that with the sink, then running. That means that if I have:

```scala
val source: Process[Task, Int] = Process(1, 2, 3, 4, 5)
val traversable = new ProcessTraversable(source)
traversable.foreach(println)
```

that will result in:

```scala
Process(println(1), println(2), println(3), println(4), println(5)).run.run
```

with the result of printing each number.

That's a `Traversable` -- it's a more specific type than a `TraversableOnce`, in which `foreach` can only be called once. If all you need is a `TraversableOnce`, this implementation suffices.
godotengine/godot-proposals | 654702270 | Title: [Complex Text Layouts: 1/4] BiDi/Shaping engine API structure, text processing refactoring.
Question:
username_0: *This proposal is follow up to #4, to get some community feedback/preferences on a specific parts of CTL support implementation.*
**Describe the project you are working on:**
The Godot Engine
---
**Describe the problem or limitation you are having in your project:**
Currently, text display is extremely limited, and only supports simple, left-to-right scripts.
---
**Describe the feature / enhancement and how it helps to overcome the problem or limitation:**
Proper display of the text requires multiple steps to be done:
🔹 BiDi reordering (placing parts of the text in the order they are displayed) should be done on the whole paragraph of text, i.e. any part of the text that is logically independent of the rest.
<details>
<summary>Click to expand</summary>

</details>
🔹 Shaping (choosing context dependent glyphs from the font and their relative positions).
<details>
<summary>Click to expand</summary>

</details>
🔹 Since text in each single line should maintain logical order, breaking is done on non-reordered text (but it should be shaped, and shaping requires the direction to be known, hence the text is temporarily reordered back for breaking).
Then each line is reordered again (using a slightly different algorithm); there's no need to shape it again, as results can be taken from step 2.
<details>
<summary>Click to expand</summary>

</details>
🔹 Optionally, some advanced techniques can be used for line justification, but just expanding spaces should be OK in general.
<details>
<summary>Click to expand</summary>

</details>
🔹 For some types of data (URLs/emails/source code), each part should be processed separately.
<details>
<summary>Click to expand</summary>

[Truncated]
***API (Text input, cursor/selection control), should be handled by controls or module?***
🔹 Only complex, font specific functions (e.g. ligature cursors), do everything else in the controls.
🔹 Common cursor control API for all controls (e.g. `ShapedString->move_caret(CursorPos, +/- Magnitude, Type WORD/CHAR/LINE/PARA) -> CursorPos`, `ShapedString->hit_test(..., Coords) -> CursorPos`).
🔹 Something else?
***API (Font, Canvas)***
Currently, we have duplicate string drawing functions both in Canvas and Font, do we need both?
---
**If this enhancement will not be used often, can it be worked around with a few lines of script?:**
It will be used to draw all text in the editor and exported apps.
---
**Is there a reason why this should be core and not an add-on in the asset library?:**
The main implementation can and probably should be a module, with support for custom GDNative implementations, but substantial changes to the core are required anyway.
Answers:
username_1: In the first step, a new line is a paragraph separator, and the text should be split at paragraph separators first before doing BiDi (and, I think, the rest of the layout): https://unicode.org/reports/tr9/#P1
username_0: *Current list of proposed changes:*
---
### Core Changes (String)
1. Change String to use UTF-16 internally.
2. Add UTF-32 string class, similar to `CharString` (`Char32String` ?), for file and macOS/Linux `wchar_t` access.
3. String:
* add UTF-32 functions:
* `Char32String utf32() const;`
* `bool parse_utf32(const char32_t *p_utf32, int p_len = -1);`
* `static String utf32(const char32_t *p_utf32, int p_len = -1);`
* add `wchar_t` macros for `wchar_t` APIs (used almost exclusively on Windows, where it will return ptr directly)
```c++
#ifdef WINDOWS_ENABLED
#define FROM_WC_STR(m_value, m_len) (String((const CharType *)(m_value), (m_len)))
#define WC_STR(m_value) ((const wchar_t *)((m_value).get_data()))
#else
#define FROM_WC_STR(m_value, m_len) (String::utf32((const CharType32 *)(m_value), (m_len)))
#define WC_STR(m_value) ((const wchar_t *)((m_value).utf32().get_data()))
#endif
```
* add `char32_t` (`Char32Type` ?) and `wchar_t` constructors
* rename `c_str` to `get_data`?
* add UTF-16 support functions (or macros):
```c++
static bool is_single(char16_t p_char);
static bool is_surrogate(char16_t p_char);
static bool is_surrogate_lead(char16_t p_char);
static bool is_surrogate_trail(char16_t p_char);
static char32_t get_supplementary(char16_t p_lead, char16_t p_trail);
static char16_t get_lead(char32_t p_supplementary);
static char16_t get_trail(char32_t p_supplementary);
```
* change `ord` to read supplementary chars
* change `size` and `length`
* `size` - return UTF-16 code point count + termination 0
* `length` - return char count
* remove `select_word` and `word_wrap` and use `TextServer` instead
* remove `ucaps.h` and move `_find_upper` and `_find_lower` to `TextServer` (reuse old implementations for fallback `TextServer`)
* use `TextServer` for text to digit decoding to handle non 0123456789 numerals
4. Changes in the code that is using String
* change Windows APIs to use WC cast macros (plain C-cast on Windows, on other OSs `wchar_t` is only used for few debug prints and assimp file names)
* change `Variant` to reflect string changes
* add `Vector<uint8_t>` to/from string functions for all UTF-8 / UTF-16 and UTF-32 variants for file access
* update PCRE config to use UTF-16
* nothing else should be affected
5. Update GDNative API to reflect string changes
6. Update and extend string tests to reflect changes.
---
### TextServer (handles font and text shaper implementations)
1. Fallback, Full(ICU/HB/Graphite) and GDNative wrapper modules
API mock-up for TextServer base class:
```c++
[Truncated]
* `TextEdit` -> default align for text, scroll bar
* tons of custom line handling code should be replaced
* code / line number / breakpoint and tab direction probably should not be mirrored (split to the CodeEdit subclass?)
* text edit will need special handling mode for code (probably can be mixed with highlighting)
* code highlighting code might need some changes to work well with new text rendering
* directly use `TextServer` API for multiline and input
* `Tree` -> align and tree line / expander position
* `RichTextLabel`
* probably should be mostly rewritten for scratch
* directly use `TextServer` API for multiline / rich text and inline objects
* `RichTextEdit` - maybe
* some editor parts should have own sub container with layout always set to LTR (probably there are more)
* Dock position popup
---
### Export
Auto include ICU database to exported project.
* `EditorExportPlugin` for data.
username_2: There's [this asset](https://github.com/username_2/godot-arabic-text) that I wrote, which adds a new label node with simple BiDi reordering and Arabic shaping. It's written in GDScript, so it's probably not useful here, but perhaps something similar could be made for a fallback module, if any.
username_3: Proposal looks great. I love the idea of having a TextServer and optionally making use of platform implementations, to avoid having to include ICU or similar in all the export templates.
username_3: My only feedback is that I am not sure it's worth having String as UTF-16 (as opposed to just using UCS-32 everywhere). Nowadays platforms have too much memory for this saving to be worth it, and strings will never take up that much space.
username_0: The only reason for UTF-16 is ICU, which is using it for its APIs.
username_3: @username_0 but most string manipulation in Godot assumes UCS, from parsers to text editors and all other stuff, so I feel it may be a better idea to, worst case, just convert to UTF16 when calling ICU, I am not sure if this has a cost other than converting the string, though.
username_0: Converting should be fast, and ICU has its own API for it. I guess we can go with UTF-32 (and only convert it to UTF-16 to get BiDi runs).
If it's gonna cause too much trouble, moving to UTF-16 after the rest of the CTL stuff is implemented won't be any harder than doing it first.
And some ICU APIs have already moved to the extensible UText abstraction layer, which can be used with UTF-32 strings directly (BiDi is currently not one of them, but eventually we'll be able to get rid of the conversion).
username_0: Done some testing, conversion cost is quite low, going with UTF-32 should be fine.
Tests done with the Noto font sample texts (about 11 % of the strings have characters outside BMP).
- UTF-32 to UTF-16 conversion ≈ 0.2 % (includes conversion itself and substring range recalculation back to original string).
- BiDi reordering ≈ 5.4 %
- Script detection ≈ 9.0 %
- Shaping ≈ 85.4 % (with single fallback font, about 23 % of glyphs taken from the fallback).
username_3: sounds great then!
username_4: Where would I plug in (multichannel) signed distance field font atlases for normal and complex layouts?
username_0: Proposal is implemented in https://github.com/godotengine/godot/pull/41100 and https://github.com/godotengine/godot/pull/42595.
Status: Issue closed
|
scorelab/senz | 577313183 | Title: update favicon icon
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Currently the favicon for the website is the default React logo
**Describe the solution you'd like**
Senz already has a logo. It would be better to use the same as icon.
**Screenshots:**

**Remarks:** I am working on this issue
Status: Issue closed |
silverstripe/silverstripe-framework | 409007071 | Title: Enable http caching in dev environments by default
Question:
username_0: I've often spent a lot of time trying to figure out why my cache headers haven't changed while testing after implementing http caching as per [the docs](https://docs.silverstripe.org/en/4/developer_guides/performance/http_cache_headers/#global-opt-in-for-page-content). This is because there's some [config in framework](https://github.com/silverstripe/silverstripe-framework/blob/4/_config/config.yml#L17) to disable this behaviour in dev mode.
Looks like it was disabled in dev intentionally as part of https://github.com/silverstripe/silverstripe-framework/pull/8086. Would people be happy for this to be removed? I'm sure there was a reason it was added but for me personally it's been more of a hindrance than a helper. If there are valid reasons to keep it in then I'd be happy to raise a PR to the docs to clarify the default behaviour on dev mode.
Answers:
username_1: That config isn't mentioned anyway, and it's a really messy way to test something in dev (a temporary yml file). This makes sense as an environment variable rather than config, so it can be switched on and off easily.
username_2: I'd be inclined to say "yes" - caching is a touchy thing to implement and it would make sense for it to be easy to test in dev mode, and not behave differently from test/live so you can easily debug issues with it
username_3: I'm inclined to update the docs to mention "caching is disabled in dev". I'd hazard a guess that the number of devs confused by "why is my stuff not updating" is greater than the number of devs diagnosing HTTP caching issues in dev rather than test mode.
username_4: Closed by #8819 (docs update)
Status: Issue closed
|
kohlbrr/threads | 233678364 | Title: Coding standard?
Question:
username_0: Do we want to hash out a very basic coding standard for us all to abide by? This might be something we can just develop organically as we learn each-others coding styles, or something we want to bang out beforehand.
Answers:
username_1: I referenced this issue in #7, which adds an .eslintrc file as a style guide.
username_0: Makes sense - closing
Status: Issue closed
|
Apicurio/apicurio-registry | 861685872 | Title: JSON schema compatibility rule bug: Removing an optional field should be backward compatible, not fully compatible
Question:
username_0: It was tested with the code at this [commit](https://github.com/Apicurio/apicurio-registry/commit/c8c510ed7fd0ca47ef27d55554d8ecfdc84252dd).
```
"original": {
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"postal_code": {
"type": "number"
},
"street": {
"type": "string"
}
}
},
"updated": {
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"street": {
"type": "string"
}
}
}
```
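For reference, the expectation here follows the usual schema-evolution definition: a change is *backward* compatible when data written with the old schema still validates against the new schema, and *full* compatibility additionally requires the reverse (forward) direction. A sketch with Python's `jsonschema` package (not part of Apicurio; just to show why dropping the optional field is backward- but not forward-compatible):

```python
from jsonschema import validate, ValidationError

original = {
    "type": "object",
    "properties": {
        "postal_code": {"type": "number"},
        "street": {"type": "string"},
    },
}
updated = {
    "type": "object",
    "properties": {
        "street": {"type": "string"},
    },
}

old_doc = {"postal_code": 12345, "street": "Main St"}
validate(old_doc, updated)  # backward direction passes: the extra property is allowed

new_doc = {"postal_code": "now a string", "street": "Main St"}
validate(new_doc, updated)  # fine under the updated schema...
try:
    validate(new_doc, original)  # ...but the forward direction fails
except ValidationError as e:
    print("not fully compatible:", e.message)
```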
Answers:
username_1: Hello username_0,
Can you please share the code / test results that you tested this with?
username_2: Hello @username_0 ,
I've run your test case and it works as expected. Please share more info (as @username_1 mentioned) or I'll have to close this issue. Thanks!
See https://github.com/username_2/apicurio-registry/tree/tmp1
username_0: Hi username_1 and username_2,
Thanks for the timely response. I tested with the latest change in the [master branch](https://github.com/Apicurio/apicurio-registry/commit/c8c510ed7fd0ca47ef27d55554d8ecfdc84252dd).
Now I have run the test with the `tmp1` branch. The issues I reported here and in #1443 are gone, but a new issue emerges.
The following change is backward compatible only.
```
"original": {
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"postal_code": {
"type": "number"
}
},
"additionalProperties": {
"type": "number"
}
},
"updated": {
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"postal_code": {
"type": "number"
},
"street": {
"type": "number"
}
},
"additionalProperties": {
"type": "number"
}
}
```
Backward Difference: [Difference(diffType=OBJECT_TYPE_PROPERTY_SCHEMAS_MEMBER_ADDED, pathOriginal=, pathUpdated=/properties, subSchemaOriginal=null, subSchemaUpdated=street)]
Forward Difference: []
aws/aws-cdk | 698521541 | Title: [aws_cdk.core] Providing official string constants for Service Principals, Managed Policies, etc
Question:
username_0: <!-- short description of the feature you are proposing: -->
Similar to this repo here: https://github.com/kevinslin/cdk-constants/blob/master/lib/services.ts
I would love to have an AWS officially maintained set of constants files with all the available Service Principals, Managed Policies, service names, service endpoints, etc.
### Use Case
It makes development on CDK much faster, without the need for me to look up IAM documents on the web; I could simply import a value.
Also, having official constants gives me the comfort that I don't need to worry about the file containing outdated or unsupported strings.
### Proposed Solution
<!-- Please include prototype/workaround/sketch/reference implementation: -->
Should be a simple TS file exporting all the constants.
Would need some AWS internal validations that those are correct.
### Other
<!--
e.g. detailed explanation, stacktraces, related issues, suggestions on how to fix,
links for us to have context, eg. associated pull-request, stackoverflow, gitter, etc
-->
* :wave: I may be able to implement this feature request
---
This is a :rocket: Feature Request
Answers:
username_1: Feels like this should go to [region-info](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_region-info.RegionInfo.html). No?
username_2: Seems reasonable, since there might be differences either across partitions or regions -- principals might be different between regions/partitions, or some regions/partitions may not have some AWS services at all.
username_3: Example of where I would like to use a constant:
```java
ServicePrincipal.Builder.create("cognito-idp.amazonaws.com").build()
``` |
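For illustration, a sketch of what such a constants module could look like on the Python side (the title references `aws_cdk.core`, the Python namespace; the class and constant names below are hypothetical, not an existing CDK API, mirroring the community cdk-constants package):

```python
class ServicePrincipals:
    """Hypothetical officially maintained service principal strings."""
    COGNITO_IDP = "cognito-idp.amazonaws.com"
    LAMBDA = "lambda.amazonaws.com"
    ECS_TASKS = "ecs-tasks.amazonaws.com"

class ManagedPolicies:
    """Hypothetical managed policy names."""
    ADMINISTRATOR_ACCESS = "AdministratorAccess"
    AMAZON_S3_READ_ONLY_ACCESS = "AmazonS3ReadOnlyAccess"

# Usage with the (real) Python CDK IAM construct would then be:
#   from aws_cdk import aws_iam as iam
#   principal = iam.ServicePrincipal(ServicePrincipals.COGNITO_IDP)
```

That would replace the hard-coded `"cognito-idp.amazonaws.com"` string in the Java example above with a validated constant.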
TsudaKageyu/minhook | 260280924 | Title: Make crash in this case:
Question:
username_0: Any way to fix it?
After hook:

Function prototype and pseudocode:

Original asm:

Answers:
username_0: Minhook still does not support this?
username_1: That's a duplicate of:
https://github.com/TsudaKageyu/minhook/issues/44
Status: Issue closed
|
ament/ament_lint | 226226436 | Title: ament_uncrustify formats messages with default values incorrectly
Question:
username_0: Given the following message:
```
int16[2] value_a [1, 2]
int16[2] value_b [3, 4]
```
the C++ generator produces the following code:
```c++
...
TestMsg_()
: value_a({{1, 2}}),
value_b({{3, 4}})
{
}
...
```
however, `uncrustify` with https://github.com/ament/ament_lint/blob/master/ament_uncrustify/ament_uncrustify/configuration/ament_code_style.cfg reformats the generated code as:
```
6: --- /home/username_0/Projects/ros2_java/output/build_isolated_java/rosidl_generator_cpp/rosidl_generator_cpp/rosidl_generator_cpp/msg/various__struct.hpp
6: +++ /home/username_0/Projects/ros2_java/output/build_isolated_java/rosidl_generator_cpp/rosidl_generator_cpp/rosidl_generator_cpp/msg/various__struct.hpp.uncrustify
6: @@ -41,2 +41 @@
6: - value_b({{3, 4}})
6: - {
6: + value_b({{3, 4}}) {
6:
6: 1 files with code style divergence
```
which causes `ament_uncrustify` to fail.
Answers:
username_1: @username_0 what led you to notice this? @username_3 or @username_2 are we running uncrustify on our generated code by default?
username_2: Yes, when explicitly requested (https://github.com/ros2/rosidl/blob/69400938eef1f6886fd65951d18fae7d3780e47d/rosidl_cmake/cmake/rosidl_generate_interfaces.cmake#L36-L38) the generators add linters, e.g. (https://github.com/ros2/rosidl/blob/master/rosidl_generator_cpp/cmake/rosidl_generator_cpp_generate_interfaces.cmake#L121-L144).
username_0: @username_1 this branch (https://github.com/username_0/rosidl/tree/linter-errors) adds a message like the one in the issue description and activates linters to show that the generated message does not pass `ament_uncrustify`
username_2: The current configuration file for uncrustify actually specifies the checked style exactly the way the code generator generates the code:
```
Various_()
: value_a({{1, 2}}),
value_b({{3, 4}})
{
}
```
Maybe you can try a similar example but with a different initialization / types. If that passes this may be a bug in `uncrustify` which gets confused by the `{{` and `}}` from the initialization arguments. You might want to report this upstream with an example.
username_0: @username_2 yeah, the uncrustify configuration seems to be fine, but it doesn't seem to work, at least on my workspace. Have you tried the branch I pushed? https://github.com/username_0/rosidl/tree/linter-errors
username_2: No, I haven't.
username_0: Will you then?
username_2: I do believe your posted result, that `uncrustify` complains about the generated code. But since we pass the "correct" configuration to `uncrustify` (which works in similar examples as intended; please see plenty of existing code which uses the same style and passes the linter) there is not much we can do about it (except marking the lines in the code generation step to be ignored by uncrustify, or fixing the problem upstream in uncrustify).
Since I don't see what I would gain by testing your branch (I would expect to see exactly the result you posted) I am not sure why I should do that?
username_0: From your previous comment, where you pasted a snippet formatted according to the style configuration, one could assume that it was generated by actually running uncrustify. So I wonder what the point of posting that handcrafted snippet was at all, if I had already shown that it wasn't the expected result.
Anyway, we seem to agree that there may be an issue in uncrustify, so I filed a ticket detailing the issue: https://github.com/uncrustify/uncrustify/issues/1142
username_3: waiting for the bug to be addressed upstream
Status: Issue closed
username_2: I will close this since the issue is tracked upstream. Keeping this open doesn't seem to provide a significant "visibility" advantage. |
shadow/shadow | 385899066 | Title: Subject: Shadow v1.6.1 released, adds multi-threading support
Question:
username_0: From: <NAME> <jansen -at- cs.umn.edu>
To: "shadow-support -at- cs.umn.edu" <shadow-support -at- cs.umn.edu>,
Subject: Shadow v1.6.1 released, adds multi-threading support
Date: Mon, 24 Dec 2012 16:32:45 -0500
I'd like to notify everyone of a new Shadow release, v1.6.1. This release
is particularly exciting as it adds support for running multi-threaded
simulations!
Larger topologies are more densely populated with events and are
therefore inherently more parallelizable. Preliminary testing with 6
threads (1 master + 5 workers) indicates a 44% reduction in runtime with
the small-m1.xlarge topology and a 70% reduction in runtime with the
large-m2.4xlarge topology.
See the announcement here:
http://shadow.cs.umn.edu/blog/shadow-release-1-6-1/
Best Regards,
Rob
Status: Issue closed |
Power-Maverick/PCF-CustomControlBuilder | 744777614 | Title: Add Release Configuration Mode
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Need to reduce the size of the bundle.js files.
**Describe the solution you'd like**
I would like to be able to select Development or Release mode for the msbuild command that actually builds the component. Currently I am using this tool to build all my components but the bundle.js files are much larger than they need to be.
```
msbuild /p:configuration=Release
```
**Describe alternatives you've considered**
I could also just use DevOps to create a pipeline for these controls to build them in release mode but it would be nice to have the option in the tool.
**Additional context**
Add any other context or screenshots about the feature request here.
Answers:
username_1: Hi @username_0 the functionality already exists. In CDS Solution Details section, before building the CDS project, check "Managed" and then build it. This will create a production ready file which can be deployed to your environment using the "Deploy" button.

username_0: Perfect, thanks again for all your work on this tool!
Status: Issue closed
|
MarcGamesons/twitch-userscript-use-chitchat | 391883887 | Title: @mentions messages are colored red
Question:
username_0: I think chitchat was designed for streamers, not viewers. So, opening a stream (let's say forsen) with chitchat enabled shows red colored messages for lines that mention @forsen. This would make sense if I were forsen, but I am a viewer, and the red colored messages should be for lines that mention @me, not the streamer. Perhaps there should be some kind of choice.
Example:

Answers:
username_1: This is intended behaviour of ChitChat, there is nothing I can do about that. I only embed ChitChat on Twitch via userscript.
You would have to contact the creator of ChitChat (https://github.com/mape) and ask if they can do something about that.
Status: Issue closed
|
facebook/relay | 161315383 | Title: updateSchema fails on first run (BabelRelayPlugin)
Question:
username_0: When I run `updateSchema` to generate my `schema.json` file for `BabelRelayPlugin` for the first time I get the following error:
`Error: Cannot find module '../schema.json'`
Now I realize that it happens because I have my babel relay plugin set on `.babelrc`, so I tried to remove it, run updateSchema, add it back, and run my application, and it worked.
How can I run updateSchema without including the plugin?
I've tried creating another `.babelrc` file in the updateSchema folder, to override my project's .babelrc which includes the plugin, but that didn't work.
**updateSchema.js:**
```
'use strict';
require('babel-register');
let fs = require('fs');
let path = require('path');
let Schema = require('../schema');
let graphql = require('graphql').graphql;
let introspectionQuery = require('graphql/utilities').introspectionQuery,
printSchema = require('graphql/utilities').printSchema;
(function(){
graphql(Schema, introspectionQuery).then(function(result){
if(result.errors){
console.error('Error inspecting schema: ', JSON.stringify(result.errors, null, 2));
}
else{
fs.writeFileSync(path.join(__dirname, '../schema.json'), JSON.stringify(result, null, 2));
}
});
})();
fs.writeFileSync(path.join(__dirname, '../schema.graphql'), printSchema(Schema));
```
**BabelRelayPlugin.js:**
```
var getBabelRelayPlugin = require('babel-relay-plugin');
var schema = require('../schema.json').data;
module.exports = getBabelRelayPlugin(schema);
```
**.babelrc:**
```
{
"presets": [
"react",
"es2015",
"stage-0"
],
"plugins": [
"./data/plugins/babelRelayPlugin",
"add-module-exports",
"transform-class-properties",
"transform-runtime"
],
"compact": true
}
```
Status: Issue closed
Answers:
username_1: How did you solve this problem?
username_2: For this case I wrote https://github.com/username_2/babel-plugin-transform-relay-hot,
which successfully starts without schema.json. And when the schema is somehow generated, it hot-reloads babelRelayPlugin without restarting your dev server which runs via babel.
microsoft/PowerToys | 541317637 | Title: FancyZone version 0.14.1.0 renders Civilization VI non-responsive
Question:
username_0: FancyZones version 0.14.1.0 renders Civilization VI (started through Steam) completely non-responsive when the fullscreen intro film is skipped.
The issue is not affected by 'Flashing Zones' setting being on or off.
Answers:
username_1: Sorry, I can't reproduce.
Do you run Civ with DX11 or 12? Do you have multiple screens? Does disabling FancyZones solve the issue? Or maybe you have to stop PowerToys? What are your FancyZones settings (a screenshot would be great)?
username_0: Yes, I run multiple screens (BENQ G2420HDB, Acer XR342CKP) connected to a Nvidia Geforce 1060, running 2 different layouts. The BENQ runs portrait mode 3 rows (no space between them) layout, the Acer runs a landscape Priority Grid or a custom layout with 2 windows (no spaces between them).
I have tried to run both DX11 and DX12 with no difference, immediate non-response when hitting space/mouse key when intro film starts.
At the moment, all FZ toggles except 'Flash Zones' are ON:

username_2: @username_1, were you able to repo?
username_1: So, after some more playing with this, I was able to reproduce. The issue happens only if FancyZones zone the Civ window - it breaks both when the Civ is run in fullscreen and when run in a window.
I don't think we can do anything about it, it looks like Civ window does not like to be moved by external tools. I'll investigate this further though to see if we can make FancyZones exclude Civ window by default.
In the meantime, you can add Civ to the excluded apps list:

username_3: If you snap the Civ windows using the built-in Windows Snap, does it also break as when snapped using FZ?
I'm asking because of this https://github.com/microsoft/PowerToys/issues/1050 where FZ snapping causes an unexpected behavior.
username_1: It does not react to the built-in snap - the window ignores both Win + arrows and moving to the edge of the screen.
username_2: does it happen with any other AAA game running window mode? Was randomly reading this, https://devblogs.microsoft.com/directx/demystifying-full-screen-optimizations/, and we may be able to detect AAA games in window mode this way.
if we do ignore apps this way and someone is trying to adjust, i do think we should figure out a way to tell a user we can't move it.
username_1: For the record: tested with 2016 DOOM, the game still works you snap the window to a zone
username_2: @username_1, are we thinking this is fixed with the ignore list / 0.15 work?
username_1: I think yes. The excluded list is the only way to do this, since at leas one other game works and can be zoned without issues.
username_2: I think in docs or wiki we need to start a list of known apps or just maybe just populate by default known apps inside FZ.
username_2: @username_1 / @username_3 did we ever come to a conclusion here?
username_3: Rule of thumb, if an app works correctly with Windows Snap, it should also work with FancyZones, but if it doesn't react to Windows Snap or if it breaks when snapped, it won't work with FancyZones as well and I don't think there is anything we can do to workaround it.
I'm OK with adding known apps to the excluded list by default.
username_2: adding to 0.19 as this should be a small cost item
username_2: @username_3 is this still an issue?
username_3: @username_2
the conclusion was that we can consider adding these problematic apps to a default list of excluded apps, but then we would have maintain the list and verify for every new version of the app if it's still incompatible.
I would say that it's better to have a section in the wiki listing the apps that we know may have issues with FZ.
username_2: Created https://aka.ms/PowerToysAppCompat
Status: Issue closed
|
allegro/allegro-api | 577733335 | Title: Errors related to token generation
Question:
username_0: Since March 7 at 18:00 I have been getting an error. First:
`{"error":"invalid_grant","error_description":"Invalid refresh token: XXXXXXXX","http_code":400}`
And now:
`{"error":"invalid_token","error_description":"Cannot convert access token to JSON","http_code":401}`
Please respond quickly about what might be causing this. It has never happened before.
Answers:
username_1: The first error means you are trying to refresh a token with an invalid refresh token. This situation can occur when:
- the refresh token's validity period has expired;
- the refresh token has already been used earlier, which generates a new access + refresh pair; the old tokens lose validity at that moment;
- the user changes their password.
The second error means you are using an invalid access token, or a sandbox token on production.
Please send sample logs (request and response in cURL format) via our [contact form](https://allegro.pl/pomoc/kontakt?kategoria=4a6df9f8-0556-4b18-8930-68906a58c18e&subjectId=6093913a-efdf-4b94-a444-bde85b8c3d8b), adding GitHub #2927 to the report. Then we will be able to pinpoint what exactly happened in your case.
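To make the second bullet concrete: since a successful refresh invalidates the old pair, a client should persist the new pair immediately. A generic OAuth2 sketch in Python using `requests` (the endpoint URL is a placeholder, not Allegro's documented value):

```python
import requests

TOKEN_URL = "https://example.com/oauth/token"  # placeholder endpoint

def refresh(store, client_id, client_secret):
    """Exchange the stored refresh token and persist the NEW pair right away.

    Re-using an already-consumed refresh token is exactly the
    invalid_grant failure described above.
    """
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "refresh_token",
            "refresh_token": store["refresh_token"],
        },
        auth=(client_id, client_secret),
    )
    resp.raise_for_status()
    tokens = resp.json()
    store["access_token"] = tokens["access_token"]    # save both before use
    store["refresh_token"] = tokens["refresh_token"]
    return store["access_token"]
```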
username_0: I regenerated all the tokens from scratch and it worked. This was the first time such a situation happened to me. The user's password had not been changed.
username_2: I can confirm this error; on the Sandbox something like this definitely occurs from time to time.
And just two hours ago I had the same error.
username_1: @username_2 If you notice such an error, please send sample logs via our [contact form](https://allegro.pl/pomoc/kontakt?kategoria=4a6df9f8-0556-4b18-8930-68906a58c18e&subjectId=6093913a-efdf-4b94-a444-bde85b8c3d8b), adding GitHub #2927 to the report. We will analyze it.
-----
[How do you rate the Allegro API?](https://github.com/allegro/allegro-api/issues/2904) - take part in our survey
moimikey/iso3166-1 | 149561332 | Title: buffer breaks in IE11
Question:
username_0: @username_2
Hola! I'm not sure if this is something that can be fixed in this module, but IE11 seems to have some issues with the `fromString` method of `buffer`. I'm trying to look for a fix and I'll open a PR if I find one, but maybe you have some ideas as well?

Answers:
username_1: @username_0 hi phillip :) taking a look now.
username_0: @username_1
Thanks! :)
username_2: @username_0 could you try pulling down version v0.2.6 and see if that fixes the problem?
username_0: @username_1 getting this error when I build, could be because of my build task though.
```
Error: Cannot find module './zlib' from '/Users/philip/Documents/Websites/sk-marketing/node_modules/iso3166-1/lib'
```
username_2: @username_0 eep! pull 0.2.7 :)...
username_0: @username_1
It still persists. I noticed buffer updated this function. Not sure if it will help though.
```JS
function fromString(string, encoding) {
if (typeof encoding !== 'string' || encoding === '')
encoding = 'utf8';
if (!Buffer.isEncoding(encoding))
throw new TypeError('"encoding" must be a valid string encoding');
var length = byteLength(string, encoding);
if (length === 0)
return Buffer.alloc(0);
if (length >= (Buffer.poolSize >>> 1))
return binding.createFromString(string, encoding);
if (length > (poolSize - poolOffset))
createPool();
var actual = allocPool.write(string, poolOffset, encoding);
var b = allocPool.slice(poolOffset, poolOffset + actual);
poolOffset += actual;
alignPool();
return b;
}
```
Also it seems like `zlib-browserify` is now being searched for outside of the module's node_modules directory.
`Error: Cannot find module 'zlib-browserify' from '/Users/philip/Documents/Websites/sk-marketing/node_modules/iso3166-1/src'`
username_2: @username_0 lets try again >.< -- v0.2.8
after that, then i'll need to get back to you :(...
username_0: @username_2
Still persists :(.. It's not urgent since, I'll disable this for now in IE but it would be great to have a fix. Let me know if it's something on my side as well. Thanks again!
username_2: i still want to revisit this... ie11 hell.
username_0: @username_2 last time I looked at it, the issue seems to be coming from buffer and I think node's newer version should fix these issues but I'm not 100% sure.
Status: Issue closed
username_2: fixed in latest version |
jogboms/flutter_spinkit | 360359237 | Title: Fix outdated example code in the README
Question:
username_0: **Describe the bug**
It appears the example code in the README is outdated.
This doesn't work:
```dart
SpinKitRotatingCircle(
  color: Colors.white,
  width: 50.0,
  height: 50.0,
);
```
This does work:
```dart
SpinKitRotatingCircle(
  color: Colors.white,
  size: 50.0,
);
```
**SpinKit name**
`SpinKitRotatingCircle`
Status: Issue closed |
INM-6/python-neo | 86070294 | Title: duplicate_with_new_array tries to convert units of new array
Question:
username_0: To reproduce:
```python
import quantities as pq
from neo.core import AnalogSignal

a = AnalogSignal([1, 2, 3] * pq.mV)
b = a.duplicate_with_new_array([4, 5, 6] * pq.ms)
# --> Error: cannot convert mV to ms
```
Answers:
username_1: ```python
import quantities as pq
from neo.core import AnalogSignal
sig0 = AnalogSignal(signal=[.01, 3.3, 9.3], units='uV', sampling_rate=1*pq.Hz)
sig1 = sig0.duplicate_with_new_array_units([1,2,3], units='ms')
# sig1 = sig0.duplicate_with_new_array([1,2,3]*pq.ms)
print "sig0 = ",sig0 # sig0 = [ 0.01 3.3 9.3 ] uV
print "sig0 type ",type(sig0) # sig0 type <class 'neo.core.analogsignal.AnalogSignal'>
print "sig1 = ",sig1 # sig1 = [1 2 3] ms
print "sig1 type ",type(sig1) # sig1 type <class 'neo.core.analogsignal.AnalogSignal'>
```
This worked and returned b with the new units,
given some changes in analogsignal.py like the following:
```python
def duplicate_with_new_array_units(self, signal, units=None):
    if units is None:
        new = self.__class__(signal=signal, units=self.units, sampling_rate=self.sampling_rate)
    else:
        new = self.__class__(signal=signal, units=units, sampling_rate=self.sampling_rate)
    new._copy_data_complement(self)
    new.annotations.update(self.annotations)
    return new
``` |
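Usage would then mirror the failing example from the report (a sketch; `duplicate_with_new_array_units` is the method proposed above, not part of released neo):

```python
import quantities as pq
from neo.core import AnalogSignal

a = AnalogSignal([1, 2, 3] * pq.mV, sampling_rate=1 * pq.Hz)

# No units given: behaves like duplicate_with_new_array, keeping mV.
b = a.duplicate_with_new_array_units([4, 5, 6])

# Units given explicitly: no attempt to convert mV to ms.
c = a.duplicate_with_new_array_units([4, 5, 6], units='ms')
```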
BluSunrize/ImmersiveEngineering | 122181657 | Title: [suggestion] Tiered Engineer's Hammer
Question:
username_0: Just a thought for modpacks to have an option for a Tiered Hammer. One that only works for low level multi-block like the coke oven, one that works for mid level machines and lower, and one that works for high level machines and lower.
Answers:
username_1: What purpose would such an addition serve?
username_0: If a modpack has a progressive tech ability, and uses a questing mode, it could come in handy to award tiers to players as they progress higher in questing.
username_2: In a questing mode pack you can achieve something like that by adding recipes for hammers with nbt tags: a nbt tag list with the name "multiblockInterdiction" containing the names of all multiblocks that cant be formed with that hammer.
username_0: hmm OK, could you give me an example with MineTweaker? Just the crafted item nbt for say a coke oven and blast furnace, don't need the whole line.
username_0: I think I have this right for the script after looking at the code. I just did a quick test. It loads fine plus crafts, but the multiblock still forms.
https://gist.github.com/username_0/4884542e78ee9579c59e
username_3: Wrong script as well as wrong names.
All my multiblocks start with "IE:", so it's "IE:Excavator".
And it's a TagList, not a TagCompound, so it's written like the Lore tag:
.withTag({multiblockInterdiction: ["IE:Excavator", "IE:ExcavatorDemo"]});
Status: Issue closed
|
pocoproject/poco | 109726030 | Title: Compile error using XCode 7.0.1 on Mac OSX 10.11
Question:
username_0: Hello,
From the command line I can build a Poco based executable just fine using Poco's builtin build system.
My source directory is outside of POCO_BASE. So I set the POCO_BASE and PROJECT_BASE environment variables and all is well. (Actually I did have to remove config.make from POCO_BASE because it sets POCO_BUILD, which overrides where to put OBJs and EXEs - I believe this to be a bug in the Poco build system.)
Back to the main problem.
When I create an "external build system" project in Xcode and properly set POCO_BASE and PROJECT_BASE, I cannot compile my code. I receive the following error on an inclusion of <iostream>.
----
In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/iostream:37:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/__config:23:10: fatal error: 'unistd.h' file not found
#include <unistd.h>
^
1 error generated.
---
Xcode generates a ton of environment variables and passes them to the shell where make runs. I'm thinking that might be part of the problem.
Any ideas?
I'd like to stick with the external makefiles.
Answers:
username_1: I'm not an Xcode expert, but you may have an [SDK mix](https://forum.qt.io/topic/58926/solved-xcode-7-and-qt-error/3)
username_0: Thanks!
I'll give it a try!
Best regards,
===
<NAME>
Sent from my phone
Status: Issue closed
|
vitoziv/VIPhotoView | 59367928 | Title: imageView like IBOUTLET
Question:
username_0: This is excellent for a storyboard with an imageView object:

```objc
self.imageView.hidden = YES;

UIImage *image = self.imageView.image;
VIPhotoView *photoView = [[VIPhotoView alloc] initWithFrame:self.view.bounds andImage:image];
photoView.autoresizingMask = (1 << 6) - 1;
[self.view addSubview:photoView];
```
Answers:
username_1: Cool. I like that the storyboard shows the result view. But it needs more code; still, I think it's pretty useful in some simple cases.
username_0: yes, I also included these lines... for a better appearance, hiding the status bar like the Photos application.

In the .h:

```objc
@interface YourController : UIViewController {
    BOOL shouldHideStatusBar;
}
```

In the .m:

```objc
-(void) showHideNavbar:(id) sender
{
    // write code to show/hide nav bar here
    // check if the Navigation Bar is shown
    if (self.navigationController.navigationBar.hidden == NO)
    {
        // hide the Navigation Bar
        [self.navigationController setNavigationBarHidden:YES animated:NO];
        //[[UIApplication sharedApplication] setStatusBarStyle:UIStatusBarStyleLightContent];
        [[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:UIStatusBarAnimationNone];

        CATransition* transition = [CATransition animation];
        transition.duration = 0.2;
        transition.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
        transition.type = kCATransitionFade; //kCATransitionMoveIn, kCATransitionPush, kCATransitionReveal, kCATransitionFade
        //transition.subtype = kCATransitionFromTop; //kCATransitionFromLeft, kCATransitionFromRight, kCATransitionFromTop, kCATransitionFromBottom
        [self.navigationController.view.layer addAnimation:transition forKey:nil];
        [[self navigationController] popViewControllerAnimated:NO];
    }
    // if Navigation Bar is already hidden
    else if (self.navigationController.navigationBar.hidden == YES)
    {
        // Show the Navigation Bar
        [self.navigationController setNavigationBarHidden:NO animated:NO];
        //[[UIApplication sharedApplication] setStatusBarStyle:UIStatusBarStyleDefault];
        [[UIApplication sharedApplication] setStatusBarHidden:NO withAnimation:UIStatusBarAnimationNone];

        CATransition* transition = [CATransition animation];
        transition.duration = 0.2;
        transition.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
        transition.type = kCATransitionFade; //kCATransitionMoveIn, kCATransitionPush, kCATransitionReveal, kCATransitionFade
        //transition.subtype = kCATransitionFromTop; //kCATransitionFromLeft, kCATransitionFromRight, kCATransitionFromTop, kCATransitionFromBottom
        [self.navigationController.view.layer addAnimation:transition forKey:nil];
        [[self navigationController] popViewControllerAnimated:NO];
    }
}
```

And in viewDidLoad (after [super viewDidLoad]):

```objc
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc]
    initWithTarget:self action:@selector(showHideNavbar:)];
```
Status: Issue closed
|
Becomebright/LeetCode | 679503593 | Title: 546. Remove Boxes
Question:
username_0: Question: *https://leetcode-cn.com/problems/remove-boxes/*
ref. *https://leetcode-cn.com/problems/remove-boxes/solution/yi-chu-he-zi-by-leetcode-solution/*
## Solution
The DP solution is really clever. Sadly, I could not figure it out by myself.
### S1. DP + Memorization
```python
from typing import List

class Solution:
    def removeBoxes(self, boxes: List[int]) -> int:
        n = len(boxes)
        dp = [[[0] * n for _ in range(n)] for _ in range(n)]  # dp[n][n][n]

        def calc(l, r, k):  # DP + memoization
            if l > r:
                return 0
            if dp[l][r][k] > 0:
                return dp[l][r][k]
            while r > l and boxes[r] == boxes[r - 1]:
                # This loop is not necessary; it reduces the number of recursive calls:
                # when boxes[r-1] == boxes[r], dp[l][r][k] is equivalent to dp[l][r-1][k+1]
                r -= 1
                k += 1
            dp[l][r][k] = calc(l, r - 1, 0) + (k + 1) ** 2
            for i in range(l, r):
                if boxes[i] == boxes[r]:
                    dp[l][r][k] = max(dp[l][r][k], calc(l, i, k + 1) + calc(i + 1, r - 1, 0))
            return dp[l][r][k]

        return calc(0, n - 1, 0)
```
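A quick sanity check against the canonical example from the problem statement (remove the three 2s for 9 points, the single 4 for 1, the three 3s for 9, and finally the two 1s for 4, totalling 23):

```python
if __name__ == "__main__":
    s = Solution()
    assert s.removeBoxes([1, 3, 2, 2, 2, 3, 4, 3, 1]) == 23
    assert s.removeBoxes([1, 1, 1]) == 9  # one group of three: 3 * 3
    assert s.removeBoxes([1]) == 1
    print("all tests passed")
```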
symfony/symfony | 163163225 | Title: ChoiceType error with Object and option multiple set to false
Question:
username_0: Hi, I get an error when I set multiple=false:
`Error: Call to a member function getId() on null`
```
->add('abonnement', ChoiceType::class, [
'choices' =>$this->array_of_object,
'choices_as_values'=>true,
'choice_value'=> function($obj) {
return $obj->getId();;
},
'choice_label' => function( $obj) {
return $obj->getNom();
},
'choice_attr' => function($obj) {
if (in_array($obj->getId(), $this->userHasModules)) {
return ['selected'=>'selected'];
}else{
return array();
}
},
'expanded' =>false,
'multiple' =>false,
'attr' => array('class'=>'select2'),
'mapped' =>false,
'required' =>false
])
```
but if I write in choice_value's function
```
echo "<pre>".print_r($users,true)."</pre>";
throw new \Exception("Error Processing Request", 1);
```
I can see my obj !!!!!
If I set multiple to true -> no error.
Answers:
username_1: Yes, this is expected and not yet documented: `choice_value` callable may get passed `null`.
So you should use:
```php
'choice_value' => function(ObjectClass $obj = null) {
    if (null === $obj) {
        return null;
    }

    return $obj->getId();
},
```
username_0: yeahhh you saved me !
Now it works.
But I don't understand your solution :)
Status: Issue closed
|
magfest/website-issues | 158366868 | Title: labs: /maglabstestchambers page not obvious what it does
Question:
username_0: _From @username_0 on June 3, 2016 12:37_
Is this page going to show off the test chambers that have been selected? if so, probably should have some 'coming soon'
It might be a good idea to describe in slightly more human terms what a test chamber is and why they're important to maglabs, and how to submit one.
_Copied from original issue: magfest/webhooktheme#15_
Answers:
username_1: I have added additional text.
The real question we NEED to ask you, the reader, is "How would you push the boundaries of what is possible at a Music and Gaming Festival?" What does this question mean to you? Is it building the worlds coolest dance space where everyone can get down? Is it getting 1000 people to simultaneously play one game like TwitchPlaysPokemon? Is it making registration a human-free experience? Don't let us limit you. Be as creative as you possibly can. Be as crazy as you possibly can. Seriously, JOIN US.
Currently, we have yet to reveal the first Test Chamber. STAY TUNED. COMING SOON.
Status: Issue closed
username_0: so like, this is still kinda abstract :)
core message: test chambers are events/areas submitted by attendees for other attendees, and they are a large focus of MAGLabs
just make sure that core message gets through in the text if we can
username_1: These events can't possibly happen WITHOUT YOU. Make yourself jealous you aren't an attendee in your own Test Chamber.
I added this to the bottom. Does that do it for you @username_0 ? Does that rock your socks?
username_0: i'll say SHIP IT, though maybe we'll come back to it on a second pass. for today, awesome :) |
ReactiveX/RxJava | 55846749 | Title: How to know when an Observable supports backpressure
Question:
username_0: ```Observable.just(1,2)``` respects backpressure, yet ```Observable.just(1)``` does not. The second case uses ```ScalarSynchronousObservable```, which does not respect backpressure (nor unsubscription prior to onCompleted). This bit me in a unit test; the workaround is to use ```Observable.just(1).onBackpressureBuffer()```, but it struck me as a possible source of confusion, indeed errors, for myself and others.
This might lead into a broader topic which is how can I know programmatically that an ```Observable``` respects backpressure? I've written an ```Operator``` that requires its (multiple) inputs to be backpressure enabled and I would love to be able to perform a check programmatically that the inputs are valid.
We could add a ```boolean Observable.respectsBackpressure()``` method to the API and the various transformations (say ```o.count()```) would pretty easily pass or modify that value as appropriate.
Something like this would be especially useful for doing a *strict* merge that only allowed backpressure aware sources to be used. At the moment I only find out at runtime via a ```MissingBackpressureException``` that I have a problem and the circumstance that brought it about may be rare and not covered by my tests.
What do people think?
Answers:
username_1: How is this knowledge different than not knowing how many elements a sequence is going to emit? If it will terminate? If it is hot or cold?
username_2: Java 8 type annotations might be enough here. Of course, it requires an infrastructure around to perform the validation (a la checker framework).
username_0: Now that you mention it, not very different!
I suppose the biggest problem is that an observable constructed in a flatMap is something that won't be constructed till some arbitrary time in the future. I'm forgetting that and it busts the idea with ```Observable.respectsBackpressure```. @username_2 's suggestion may have legs inasmuch as I could get the type system to ensure that inside a flat map construction I have definitely constructed an observable that respects backpressure. I guess it's something that could be revisited once we are using java 8. Not sure if the type system can offer something prior to Java 8 that just involves non-breaking API additions.
username_0: What about including some backpressure documentation in javadoc for each operator? I guess it needs careful wording because some Operators given backpressure-respecting sources will provide a backpressure-respecting observable but without backpressure-respecting sources perhaps will not. From that point of view the term *backpressure aware* is probably better than *supports backpressure*.
At the moment my check for backpressure awareness is to inspect the source and look for patterns that I'm familiar with involving a Producer etc. Conclusive if you know what to look for but not ideal.
username_3: I could imagine an annotation:
```java
@Backpressure(input = IGNORED, output = RESPECTED)
public final Observable<T> onBackpressureBlock() { }
```
where ```input``` could be one of the following:
- ```IGNORED```: doesn't matter if upstream does or doesn't support backpressure. Example: ```onBackpressureBlock```.
- ```REQUIRED```: upstream has to respect backpressure and the operator will never ask for Long.MAX_VALUE. Example: ```observeOn```
- ```AMBIVALENT```: for intermediate operators that don't themselves do backpressure "airlocking", the downstream's input behavior is reflected upwards. Example: ```map```, ```filter```
and ```output``` could be one of the following:
- ```NONE```: ignores any downstream backpressure requests. Example: ```just(T)```
- ```RESPECTED```: if downstream uses backpressure or not, the operator works accordingly. Example: ```range(x, n) | n > 1```
- ```ALWAYS```: The operator will always work in backpressured mode (i.e., no fast paths). Example: ```zip``` (always buffers sources).
Connection matrix:
Upstream output | Downstream input | Behavior
-----|-----|------
NONE | IGNORED | Firehose
NONE | REQUIRED | Likely MissingBackpressureException
NONE | AMBIVALENT | Depends on deeper downstream's input
RESPECTED | IGNORED | Firehose
RESPECTED | REQUIRED | Batch-like execution
RESPECTED | AMBIVALENT | Depends on deeper downstream's input
ALWAYS| IGNORED | Batch-like execution
ALWAYS | REQUIRED | Batch-like execution
ALWAYS | AMBIVALENT | Depends on deeper downstream's input
Further examples:
```java
@Backpressure(input = REQUIRED, output = ALWAYS)
public final <U, R> Observable<R> zipWith(
@Backpressure(output = REQUIRED)
Observable<U> other,
Func2<T, U, R> func) { }
@Backpressure(input = IGNORED, output = NONE)
public final class PublishSubject { }
@Backpressure(output = NONE)
public static <T> Observable<T> just(T value) { }
@Backpressure(input = AMBIVALENT, output = RESPECTED)
public final <R> Observable<R> map(Func1<T, R> func) { }
```
username_0: Nice idea, I like it. I think *Neutral* might be a better word than *Ambivalent* because ambivalent literally means 'mixed emotions'. There are some grey areas that might be nice to represent, one of them being ```merge``` which tolerates a lack of backpressure awareness in its sources to a certain extent (can tolerate lots if we bump up system property ```rx.ring-buffer-size```) as opposed to a custom operator that I just made that throws an exception immediately when more arrives than is expected.
So in terms of inputs we might have:
* ```IGNORED```
* ```REQUIRED``` - will fail if more arrive than requested
* ```PREFERRED``` - may fail if more arrive than requested
* ```NEUTRAL``` - don't care
username_0: ```IGNORED``` might be dropped as ```NEUTRAL``` covers it.
username_1: How would we use the annotations, just for docs?
We do have notes in the Javadoc about backpressure but they aren't very comprehensive. We call out the temporal operators to say they don't have backpressure support.
username_0: I don't have a use case apart from documentation at the moment. The annotation will suit me fine or some standardized addition to the javadocs (or both).
username_1: The annotations as documentation seem okay to me. The annotations on the parameters are intriguing, though they feel very intrusive on first impression. We'd only need them though on public APIs where an `Observable` is a parameter, correct?
username_0: I'm returning to this issue because it keeps striking me as I browse the codebase. Yep, I think putting the annotations on the public API is the way to go. Are you suggesting putting them on, say, the static version of `merge` rather than the instance methods?
username_1: Moving to 2.0 for further discussions.
username_3: In 2.x, all operators support backpressure but if you aren't requesting fast enough, you'll get a `MissingBackpressureException` from the operator itself (and not from observeOn/flatMap somewhere down the line...)
username_3: I've added [annotations](https://github.com/ReactiveX/RxJava/blob/2.x/src/main/java/io/reactivex/annotations/BackpressureKind.java) in 2.x; I can port it back to 1.x and also update the javadoc where the backpressure text is missing.
username_0: Thanks for pursuing this @username_3.
I just wanted to review the data model underlying this.
Every operator/source can be classified with only *one* of these:
* `BackpressureSupport.FULL_WITH_MBE`
* `BackpressureSupport.FULL_NO_MBE`
* `BackpressureSupport.PASS_THROUGH`
* `BackpressureSupport.NONE`
`MBE` is `MissingBackpressureException`
And further we could offer information about the requesting behaviour of an operator:
* `RequestBehaviour.REQUESTS_MAX_VALUE`
This last is how I'd replace `UNBOUNDED_IN`.
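For illustration, the classification could be declared roughly like this (names taken from the lists above; none of this exists in the codebase):
```java
// Illustrative only: names come from this proposal, not from RxJava itself.
enum BackpressureSupport {
    FULL_WITH_MBE, // honors requests; signals MissingBackpressureException on overflow
    FULL_NO_MBE,   // honors requests; never signals MissingBackpressureException
    PASS_THROUGH,  // reflects the downstream's requesting behavior upstream
    NONE           // ignores requests entirely
}

enum RequestBehaviour {
    REQUESTS_MAX_VALUE // the operator requests Long.MAX_VALUE from its source
}
```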
username_0: We could also specify when backpressure is not required of a source used by an operator, perhaps by annotating a parameter or annotating a method:
```java
public Observable<T> mergeWith(@Backpressure.NOT_REQUIRED Observable<T> source) {
...
}
@Backpressure.NOT_REQUIRED
public Observable<T> onBackpressureBuffer() {
...
}
```
Status: Issue closed
username_3: The current 2.x contains annotations on the operator methods themselves (some may be missing) and up for discussion/PRs. If you have further input on the issue, don't hesitate to reopen this issue or post a new one.
username_0: Thanks, that sounds good. |
flokicker/dorfbauprojekt | 349834662 | Title: Menschen Fortpflanzung II
Question:
username_0: The reproduction of our village is now getting some extra polish.
What should be built in is that the inhabitants actually seek out and find each other.
So there will now be 3 factors that the player can see in the game but doesn't consciously register. Our in-game people should represent "their own interests" and insist on them. So it can happen that a person never finds a suitable partner and, possibly still living at home with their parents, dies alone in old age.
Alright... back to the original idea:
- [ ] 1st factor: **height.**
There are 5 different heights our people can have, which is why people may or may not find each other interesting.
1. 160-169cm / 170-179cm / 180-189cm / 190-199cm & anything taller than 2m (max 2.10m, VERY RARE, 1 in 100)
Women fall into heights 1-3.
Men fall into heights 3-5.
Of course there may occasionally be 1-2 exceptions that are shorter/taller in a given case.
For women this means they are interested in 3-5 (the exceptional cases should be counted as a 3, even though the displayed value may read 162cm).
Men are interested in 1-3 (here too, if a woman is exceptionally 200cm tall, she may be rated as a 3 even though she is 200cm tall).
2. **Hair color:**
Blond / blond-brown / brown / brown-black / black
The rule here: whether woman or man, both are interested in 3 of the hair colors.
So two hair colors always fall outside the pattern, but with the 3 remaining hair colors there is always a chance of finding a partner.
We can simplify this by using only 3 hair colors:
blond / brown / black
3. The profession (in the sense of the same upbringing / the same professional interests, as in an aristocracy).
This way the people of each "era" are interested in one another:
Gatherers / farmers / etc. (only these two for now, since we don't have more in the game yet)
- [ ] Conclusion: Two children from different houses who have just turned 18 are interested in each other.
Tied up by their work, however, they are unable to get to know each other.
If we move them interactively and tell them to chop wood, gather, or hunt together, they get to know each other, become a couple under our direction, and shortly afterwards want to move into / build a house.
- [ ] The information (the 3 factors) is partly visible to us:
The inhabitant's profile window shows whether they are male or female,
whether they are interested in women or men,
what hair color they have,
and how tall they are.
- [ ] A sketch of the inhabitant profile will follow later so you know what it should look like.
- [ ] I would also like to address same-sex relationships among the people.
A man should be allowed to be interested in a woman as well as in a man, and likewise a woman in a man or a woman.
This same-sex relationship feature should only take effect from 100 inhabitants onward, so that we first secure our inhabitants for the up-and-coming village but don't leave the topic unaddressed later on. I also want the game to speak to players who feel this way. Very few games do exactly that, even though today it would be so important to sensitize everyone to it everywhere. Maybe as a socially critical mirror of our society, one that appears hidden later in the game so that some critics overlook it and are confronted with it mid-game; if it ever came to the point where I got to read a review, I would already be happy to read something about that, even though I still consider the topic of same-sex relationships in the game too unripe and keep thinking it over...
**Important question**: is this kind of matchmaking (roughly sketched below) even possible for you to implement, @username_1?
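Roughly, the compatibility check could look like this in code; everything below is a hypothetical sketch and none of these names exist in the project yet:
```csharp
// Hypothetical sketch of the partner-matching rule; all names are invented.
using System.Collections.Generic;

class Villager
{
    public int HeightBucket;                    // 1-5
    public HashSet<int> PreferredHeights;       // e.g. {3, 4, 5} for women
    public string HairColor;                    // "blond" / "brown" / "black"
    public HashSet<string> PreferredHairColors; // each villager likes 3 colors
    public string ProfessionEra;                // "gatherer" / "farmer" / ...
}

static class Matchmaker
{
    public static bool IsCompatible(Villager a, Villager b)
    {
        // Height: both must fall inside the other's preferred range.
        bool heightOk = a.PreferredHeights.Contains(b.HeightBucket)
                     && b.PreferredHeights.Contains(a.HeightBucket);

        // Hair: mutual interest in each other's hair color.
        bool hairOk = a.PreferredHairColors.Contains(b.HairColor)
                   && b.PreferredHairColors.Contains(a.HairColor);

        // Profession: same "era" / upbringing.
        bool professionOk = a.ProfessionEra == b.ProfessionEra;

        return heightOk && hairOk && professionOk;
    }
}
```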
Answers:
username_1: Yes, that's definitely possible! I just have to think it through and create a code concept, but it will work the way you described.
username_0: Great! That will give us a proper world cycle! :-)
username_1: @username_0 I won't be home for another hour (don't have WhatsApp on this phone), so I'll be on TS at 10. |
CadQuery/CQ-editor | 330683349 | Title: Segmentation Fault on First Run
Question:
username_0: I followed the instructions to set the conda environment up, verified that it was the active one, and tried to run CQ-editor. I immediately get a seg fault.
```bash
Segmentation fault (core dumped)
```
There's no way to know where it comes from based on that message, but I've gotten a lot of seg faults with FreeCAD over the years.
I'm running Ubuntu 17.10 64-bit
Answers:
username_1: ```
username_0: ```bash
$ conda env list
# conda environments:
#
cq-occ-testing /home/jwright/.conda/envs/cq-occ-testing
freecad_cq3 /home/jwright/.conda/envs/freecad_cq3
/home/jwright/miniconda
/home/jwright/miniconda/envs/freecad_cq3
base /home/jwright/miniconda3
cqgui * /home/jwright/miniconda3/envs/cqgui
env-name /home/jwright/miniconda3/envs/env-name
freecad-py3 /home/jwright/miniconda3/envs/freecad-py3
```
gdb output:
```bash
(gdb) run
Starting program: /home/jwright/miniconda3/envs/cqgui/bin/python run.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Program received signal SIGSEGV, Segmentation fault.
PyType_IsSubtype (a=0x626f6174656d5f74, b=0x7ffff23e0680 <sipVoidPtr_Type>)
at Objects/typeobject.c:1357
1357 Objects/typeobject.c: No such file or directory
```
Is there maybe a *-dev package I need to install?
username_0: Sorry, didn't see that request in your original post.
```bash
(gdb) bt
#0 PyType_IsSubtype (a=0x626f6174656d5f74, b=0x7ffff23e0680 <sipVoidPtr_Type>)
at Objects/typeobject.c:1357
#1 0x00007ffff21d5d21 in vp_convertor ()
from /home/jwright/miniconda3/envs/cqgui/lib/python3.6/site-packages/sip.so
#2 0x00007ffff21d65a5 in sip_api_convert_to_void_ptr ()
from /home/jwright/miniconda3/envs/cqgui/lib/python3.6/site-packages/sip.so
#3 0x00007ffff60b8dfd in qpycore_init() ()
from /home/jwright/miniconda3/envs/cqgui/lib/python3.6/site-packages/PyQt5/QtCore.so
#4 0x00007ffff5fa19dc in PyInit_QtCore ()
from /home/jwright/miniconda3/envs/cqgui/lib/python3.6/site-packages/PyQt5/QtCore.so
#5 0x00007ffff7a230d7 in _PyImport_LoadDynamicModuleWithSpec (spec=spec@entry=0x7ffff63eb668,
fp=fp@entry=0x0) at ./Python/importdl.c:159
#6 0x00007ffff7a2121b in _imp_create_dynamic_impl (module=<optimized out>, file=<optimized out>,
spec=0x7ffff63eb668) at Python/import.c:1982
#7 _imp_create_dynamic (module=<optimized out>, args=<optimized out>)
at Python/clinic/import.c.h:289
#8 0x00007ffff79641b9 in PyCFunction_Call (func=func@entry=0x7ffff7f77ee8,
args=args@entry=0x7ffff63eb2e8, kwds=kwds@entry=0x7ffff63e94c8) at Objects/methodobject.c:114
#9 0x00007ffff7a04be8 in do_call_core (kwdict=0x7ffff63e94c8, callargs=0x7ffff63eb2e8,
func=0x7ffff7f77ee8) at Python/ceval.c:5089
#10 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3391
#11 0x00007ffff79fd01e in _PyEval_EvalCodeWithName (_co=0x7ffff7fafdb0,
globals=globals@entry=0x7ffff7f6acf0, locals=locals@entry=0x0, args=<optimized out>,
argcount=2, kwnames=0x0, kwargs=0x7ffff7e97ea0, kwcount=0, kwstep=kwstep@entry=1, defs=0x0,
defcount=defcount@entry=0, kwdefs=kwdefs@entry=0x0, closure=0x0,
name=name@entry=0x7ffff7f59990, qualname=0x7ffff7f59990) at Python/ceval.c:4153
---Type <return> to continue, or q <return> to quit---
#12 0x00007ffff79fd332 in fast_function (kwnames=0x0, nargs=<optimized out>, stack=<optimized out>,
func=0x7ffff7f9aea0) at Python/ceval.c:4965
#13 call_function (pp_stack=pp_stack@entry=0x7fffffffb5e0, oparg=oparg@entry=2,
kwnames=kwnames@entry=0x0) at Python/ceval.c:4845
#14 0x00007ffff7a01dc6 in _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>)
at Python/ceval.c:3322
#15 0x00007ffff79fc6b0 in _PyFunction_FastCall (co=<optimized out>, args=<optimized out>, nargs=2,
globals=<optimized out>) at Python/ceval.c:4906
#16 0x00007ffff79fd5d4 in fast_function (kwnames=0x0, nargs=<optimized out>, stack=<optimized out>,
func=0x7ffff7f216a8) at Python/ceval.c:4941
#17 call_function (pp_stack=pp_stack@entry=0x7fffffffb820, oparg=oparg@entry=1,
kwnames=kwnames@entry=0x0) at Python/ceval.c:4845
#18 0x00007ffff7a01dc6 in _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>)
at Python/ceval.c:3322
#19 0x00007ffff79fc6b0 in _PyFunction_FastCall (co=<optimized out>, args=<optimized out>, nargs=1,
globals=<optimized out>) at Python/ceval.c:4906
#20 0x00007ffff79fd5d4 in fast_function (kwnames=0x0, nargs=<optimized out>, stack=<optimized out>,
func=0x7ffff7f799d8) at Python/ceval.c:4941
#21 call_function (pp_stack=pp_stack@entry=0x7fffffffba60, oparg=oparg@entry=1,
kwnames=kwnames@entry=0x0) at Python/ceval.c:4845
#22 0x00007ffff7a01dc6 in _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>)
at Python/ceval.c:3322
#23 0x00007ffff79fc6b0 in _PyFunction_FastCall (co=<optimized out>, args=<optimized out>, nargs=1,
globals=<optimized out>) at Python/ceval.c:4906
#24 0x00007ffff79fd5d4 in fast_function (kwnames=0x0, nargs=<optimized out>, stack=<optimized out>,
func=0x7ffff7f79bf8) at Python/ceval.c:4941
---Type <return> to continue, or q <return> to quit---
#25 call_function (pp_stack=pp_stack@entry=0x7fffffffbca0, oparg=oparg@entry=1,
kwnames=kwnames@entry=0x0) at Python/ceval.c:4845
#26 0x00007ffff7a01dc6 in _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>)
[Truncated]
defcount=defcount@entry=0, kwdefs=kwdefs@entry=0x0, closure=closure@entry=0x0)
at Python/ceval.c:4174
#71 0x00007ffff79fd69b in PyEval_EvalCode (co=co@entry=0x7ffff7ef7ae0,
globals=globals@entry=0x7ffff7f3b288, locals=locals@entry=0x7ffff7f3b288) at Python/ceval.c:730
#72 0x00007ffff7a39092 in run_mod (arena=0x7ffff7f56258, flags=0x7fffffffd7d0,
locals=0x7ffff7f3b288, globals=0x7ffff7f3b288, filename=0x7ffff7e7b298, mod=0x645ba8)
at Python/pythonrun.c:1025
#73 PyRun_FileExFlags (fp=fp@entry=0x69a030,
filename_str=filename_str@entry=0x7ffff7e81050 "run.py", start=start@entry=257,
globals=globals@entry=0x7ffff7f3b288, locals=locals@entry=0x7ffff7f3b288,
closeit=closeit@entry=1, flags=flags@entry=0x7fffffffd7d0) at Python/pythonrun.c:978
#74 0x00007ffff7a391f7 in PyRun_SimpleFileExFlags (fp=fp@entry=0x69a030, filename=<optimized out>,
closeit=closeit@entry=1, flags=flags@entry=0x7fffffffd7d0) at Python/pythonrun.c:420
#75 0x00007ffff7a39693 in PyRun_AnyFileExFlags (fp=fp@entry=0x69a030, filename=<optimized out>,
closeit=closeit@entry=1, flags=flags@entry=0x7fffffffd7d0) at Python/pythonrun.c:81
#76 0x00007ffff7a556cd in run_file (p_cf=0x7fffffffd7d0, filename=0x603550 L"run.py", fp=0x69a030)
at Modules/main.c:340
#77 Py_Main (argc=argc@entry=2, argv=argv@entry=0x602260) at Modules/main.c:810
#78 0x0000000000400bbc in main (argc=2, argv=<optimized out>) at ./Programs/python.c:69
```
username_1: Thanks @username_0, so the crash is either PyQt or SIP related. I don't really get how this occurs; I have exactly the same packages in my env. The only idea I have is updating one of those packages to a different version.
Could you try updating the sip package to a different version:
```conda install -c conda-forge sip=4.18.1```
or using a different pyqt build:
```conda install -c conda-forge pyqt=5.6.0=py36_2```
but definitely not both at once.
I still do not get the big picture, you nominally have the same packages as me but still getting a segfault. Could that be because some of the packages are cached locally and in reality different to what is on the server? Maybe if you have time, you could try running `conda clean --all` and repeating the install.
Anyhow, thanks for trying it out!
username_1: Any luck @username_0 ? I did setup CI, so the env spec seems to be OK in principle. I'll try to modify the conda environment to use the defaults channel for pyqt; maybe this will solve the problem.
username_0: Thanks!
Status: Issue closed
username_1: OK, just in case I added this SIP version to the env file. CI seems to be happy about it too. |
clc/eyes-free | 59718653 | Title: Talkback reacts to side tap when screen is locked
Question:
username_0: ```
What steps will reproduce the problem?
1. set the side tap action to open a context menu
2. lock the screen
3. side tap
What is the expected output? What do you see instead?
The sound for the context menu gets played and the message about navigating in
a circle gets spoken, which should not happen. This is a regression; it was fixed
in a previous beta then reappeared.
What version of the product are you using? On what operating system?
talkback 4.1 beta on android 4.4.4.
Please provide any additional information below.
```
Original issue reported on code.google.com by `<EMAIL>` on 10 Jan 2015 at 7:51 |
smallstep/certificates | 791546236 | Title: Windows says enrolled certificate is invalid
Question:
username_0: Hello, I have a setup where there's an ADCS as root CA and I'm using Step as intermediate.
I have to take CSRs from Windows VMs and bring them to another VM to enroll them properly; the chosen method is ACME, so from there I use a Python script as the client and get the signed certificates. Back on the target VM, Windows says it can't verify the certificate signature. The public key in the cert is different from the one in the CSR and I have no idea how to debug it. The only other difference I noticed is that all other certificates in the chain are signed with RSA while the one at the end is ECDSA (I saw in another issue that it shouldn't be a problem).
How can I figure out where the certificate is being "tampered"?
Answers:
username_1: There shouldn't be an issue for any standard client, but you can also create and use an RSA key as an intermediate. An example of an ECDSA leaf certificate signed by an RSA intermediate (not exactly the same, but close) is facebook.com.
username_0: I took some time to figure out how I send the CSR and load it at the client, and now the public keys are the same, but I still get that the certificate is invalid, which leads me to believe it's a Windows thing. I'm doing some more tests and will update this issue; I also have to test signing everything with RSA keys. It seems weird that Windows Server wouldn't accept something so common, but I'll be damned.
BTW, the exact error I get is `Certificate request processor: The signature of the certificate cannot be verified: 0x80096004 (-2146869244 TRUST_E_CERTIFICATE_SIGNATURE)`, which reminds me how much I love Windows errors: so much information, yet so little of it says what is going on. My research on that error indicated it showed up a lot when Microsoft imposed a new 1024-bit minimum on key sizes.
username_1: @username_0 Perhaps you can use `step fileserver` + `step certificate inspect` to debug:
With the certificate and the key you can run a test fileserver like this:
```shell
step fileserver --address 127.0.0.1:8443 --cert localhost.crt --key localhost.key /path/to/root
```
Then you can inspect the certificate with:
```shell
step certificate inspect --roots root_ca.crt https://localhost:8443
```
You can also sign the CSR using step:
```shell
step ca certificate localhost.csr localhost.crt
```
username_2: If you are using ACME, could you use the Win-ACME tool to request and install the certificates directly on your Windows VMs? |
mika-cn/maoxian-web-clipper | 563249607 | Title: The script provided in the FAQ's folder-mapping section is wrong
Question:
username_0: https://username_1.github.io/maoxian-web-clipper/faq.html
"Command: mklink \D C:\Users\jack\Downloads\mx-wc C:\Users\jack\OneDrive\clips"应改为
"Command: mklink /D C:\Users\jack\Downloads\mx-wc C:\Users\jack\OneDrive\clips"
Answers:
username_1: Can you confirm? If so, I'll correct it in the next update; my own Windows system crashed recently, so I can't verify it myself.
username_2: 创建符号链接。
MKLINK [[/D] | [/H] | [/J]] Link Target
/D 创建目录符号链接。默认为文件
符号链接。
/H 创建硬链接而非符号链接。
/J 创建目录联接。
Link 指定新的符号链接名称。
Target 指定新链接引用的路径
(相对或绝对)。
```
username_1: OK, I'll fix it.
Status: Issue closed
username_1: @username_0 Fixed, thanks for the feedback.
@username_2 Thanks for helping to confirm :)
username_0: font{
line-height: 1.6;
}
ul,ol{
padding-left: 20px;
list-style-position: inside;
}
不好意思才看到邮件,非常感谢你做的这个插件,对我很有帮助!
2425199380
<EMAIL>
签名由
网易邮箱大师
定制 |
rancher/rancher | 776609967 | Title: OKTA Integration Stops Working Randomly
Question:
username_0: **Steps to reproduce (least amount of steps as possible):**
Enable OKTA integration, which had worked for numerous months until we had to restart the Rancher server itself to add more memory. Since then, OKTA integration stops working within a few hours of being re-enabled. We have 2 other identical systems with OKTA integration that are working.
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): 2.4.5
- Installation option (single install/HA): HA
**Cluster information**
- Cluster type (Hosted/Infrastructure Provider/Custom/Imported): Hosted
- Machine type (cloud/VM/metal) and specifications (CPU/memory): 2 vCPU, 12GB Memory
- Kubernetes version (use `kubectl version`): 1.17.9 and 1.15.12
I'm not seeing anything in the logs on the primary Rancher server. This may simply be a case of not knowing what to look for, or how to enable debug logging, to determine why OKTA is failing after having worked for months with no changes made on the OKTA side.
Is there an easy way to reset any and all SSO to default? |
crc-32/BrowseNX | 871624869 | Title: Installation error 12.0.1 ATM 19.1
Question:
username_0: Error: 2356-0002 (0x564)
Module: Goldleaf (356)
Description: NCA metadata is missing
I can install games with sig-patches and everything except BrowseNX NSP
Answers:
username_1: Similar problems on 12.1.0, I guess this needs to be rebuilt for newer versions?
Status: Issue closed
username_2: NSP will no longer be supported going forward, as there's no real advantage on the newest Atmosphere
username_1: The way to go would be to build a forwarder NSP then? |
dereklieu/prose | 167099706 | Title: Directory listing doesn't seem to work.
Question:
username_0: Hi there!
I can't seem to list the contents of directories. For instance: https://prose.openintegrity.org/#frontend/www/tree/master/content
Cheers,
Jun
PS: This has the latest version deployed. |
stealjs/steal | 151011114 | Title: Plugins configuration
Question:
username_0: This is meant to replace #414. I have this partially implemented in the [modules branch](https://github.com/stealjs/steal/tree/modules). Here's a sample of how it works:
```json
{
"system": {
"plugins": {
"*.js": [
"foo-plugin
]
}
}
}
```
Where `foo-plugin` looks like:
```js
var dep = require("some-npm-dep");
exports.locate = function(locate, load){
return 'foo://' + load.name;
};
```
## Goal
The primary goal of this feature is to provide a way to write extensions to Steal that can have NPM dependencies. Currently we have 2 classes of extensions:
1. **Extensions** that can override loader hooks by wrapping those hooks. These either have to be built into Steal (and therefore be a blessed plugin) or added as `configDependencies`. The disadvantage there is that configDependencies cannot have npm dependencies, because they are loaded before npm is configured.
2. **Plugins** are those that are used by the bang syntax or `ext` config. These are good for translating to JavaScript but cannot be used against JavaScript modules.
## Use case
Any community idea could be implemented using this plugin config. An example is steal-conditional which is not (yet) part of Steal core and is awkward to use because of that.
Another example is transpilers. Upgrading our versions of traceur/babel is a real pain right now because you have to update steal-systemjs which uses them, steal which has the transpiler source, and transpile which uses them for the build. If it were an external plugin it would be much easier to maintain, as you'd only have to update that project and would have the same export type in the browser and during the build.
## Plan
I would like to get this out in a prerelease so people can try it out. There needs to be tests added and there's one action point I'm unsure about:
* Should plugins config apply only to the top-level project, or should a dependency be able to load plugins? If a dependency is able to load plugins, do those plugins run against all modules or just the modules within its package?
Answers:
username_0: cc @username_2 @username_1
username_0: My instinct is to make them only be set by top-level packages. There might be use cases for dependencies to load plugins but I cannot think of any right now. My worry is, what if a dependency uses a transpiler plugin, would we want that plugin to be loaded *on top of* the plugin the root project is already using? I don't think we would.
username_1: A dependency should be able to load a plugin. Why wouldn't it? Say I wanted to use bootstrap, which uses the less plugin, I shouldn't have to load less myself.
username_0: My concern is that we'll wind up loading 5 javascript transpilers because 1 dep uses traceur, another uses the babel plugin version 0.1.0, another version 0.2.0, etc. Maybe we could have something like a `devPlugins` config that works like the `plugins` config but only applies when you are the root package.
username_2: @username_0 as far as I can see, there are only a few hooks that can be used by a plugin.
Can you explain (in short) what the hooks are for?
https://github.com/stealjs/steal/blob/8394a50f919041cf01c44c5673edbf01678de257/src/system-extension-plugins.js#L36-L42
username_0: Those are all of the normal loader hooks that our extensions use today. Will need to document them but in short:
* normalize: takes the import string like `'./foo'` and translate it to a full module name like `[email protected]#foo`
* locate: specifies where the module is located like `http://example.com/foo.js`
* fetch: fetches the module using XHR, script inject, fs.readFile in Node, or whatever else
* instantiate: This is a slightly weird one, but you specify the module's dependencies and a function that when executed will get the module's value. See [system-json](https://github.com/stealjs/system-json/blob/master/json.js#L61) for an example usage.
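Putting those together, a minimal plugin shell could look like this (a sketch against the proposed API above, not shipped code):
```js
// Sketch of a plugin wrapping the loader hooks described above (illustrative only).
exports.normalize = function(normalize, name, parentName){
    // delegate to the default normalization
    return normalize(name, parentName);
};
exports.locate = function(locate, load){
    // rewrite where the module will be fetched from
    return "https://example.com/cdn/" + load.name + ".js";
};
exports.fetch = function(fetch, load){
    // fall back to the default fetch (XHR, script inject, fs.readFile in Node)
    return fetch(load);
};
```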
username_1: Better than not being able to load at all.
username_2: @username_0 is this plugin feature reasonably ready? can i try to refactor steal/sass with this new plugin api?
username_0: The API is ready; it's just not going to be released any time soon. But if you want to experiment and see if there are issues, I think that would be valuable.
I created one example plugin here that you can use as reference: https://github.com/stealjs/steal-buble
username_0: This particular API of the "plugins" idea is being dropped. Below is a summary of the issues with this API and a proposed way forward.
# Problems
## Performance
The plugins idea worked by blocking loading so that plugins are loaded after the `steal.loader.configMain` but before `steal.loader.main`. This adds a 3rd layer of loading, causing the main to be loaded more slowly. Due to poor performance of loading steal in development mode, I don't think it's wise to add an API based on slowing it down even more.
## Plugin Inception
A major technical challenge is getting around the fact that plugins block JavaScript from loading. By blocking JavaScript from loading plugins block *themselves* from loading.
Steal has the ability to clone its loaders. This way you can configure different loaders for different purposes. Classic `!` style plugins are loaded through a `pluginLoader`, for example. The way to fix the issue would be to use a plugin loader to load the plugins; that way they are not blocked by plugins config.
However, when you clone the loader you don't get all of the npm configuration with it. The `steal-clone` module provides that capability. To do this we'd need to pull steal-clone into core, which I guess isn't *too* bad. But steal-clone is currently a bit buggy.
# Future
Meanwhile, well after this feature was worked on, we added a `plugins` configuration to provide a way to pre-load plugins (regular bang plugins) like `steal-css`. All this does is fetch the package.json.
I think the future of a plugins sort of idea would be to extend this behavior and provide a way to actually run code, probably by making that module part of the `configDependencies`. For example, let's say you had:
```json
"steal": {
"plugins": ["steal-babel"]
}
```
And then steal-babel had
```json
{
"steal": {
"isPlugin" true
}
}
```
Or something to that effect (almost definitely not that name 😅). Then the plugin would effectively be loaded as a configDependency.
This gets around the performance issue because we are not adding a new layer of preloading, but rather using configDependencies which already exists.
This would mean that plugins can't have npm dependencies. I think it's worth that tradeoff, just bundle your dependencies if needed.
-----
I'm closing this issue as we are not going to pursue this API any further. Since this is a rather large change I would prefer that future APIs go through the [Steal RFCs](https://github.com/stealjs/rfcs) process so that it can be discussed more in depth.
Status: Issue closed
username_2: what do you mean by this? steal-react cant depend on react-dom?
username_0: No, I don't mean that. steal-react, steal-css, any of those types of plugins can continue to work exactly like they do today. This is about the other sort of "super dependency" plugins. These are plugins that have to run before your `main` runs. Anything that operates on `.js` files would need to be a super plugin. |
riotkit-org/infracheck | 890455619 | Title: Optionally specify a cache lifetime per check
Question:
username_0: Not every check must be run every time. Some things only need to be checked, e.g., once a day, such as domain expiration or TLS expiration. Others, like "is the service working", should be checked as often as possible.
Answers:
username_0: Implemented.
Status: Issue closed
|
NorthernMan54/node-red-contrib-homebridge-automation | 498296197 | Title: hb-status node errors upon inject-once after deploy/restart
Question:
username_0: I have a flow I put together to compare homebridge with another homekit node and I'm using the hb-status node to synchronize the 2 upon restart. The inject node works fine when I inject manually, but I've tried up to a 10 second delay and I always get an error for each hb-status node after deploy/restart of flow/node-red. Here's the error from one of them:
```
"Homebridge not initialized: HomebridgeCC:22:3D:E3:CE:22Default-ManufacturerAttic Air Conditioner00000049"
```
Do I need to "initialize" somehow before the first injection?
Answers:
username_1: Connected with NRCHKB/node-red-contrib-homekit-bridged#144
username_2: @username_0 when node-red starts up, my node does homebridge instance discovery for 20 seconds, then initializes everything, so put a 30 second delay and you should be good.
username_0: @username_2 - that worked!
You know, I also noticed that upon deploy, the hb-event nodes all fire. So I may not even need the inject once at all. (I may need manual inject to handle synch issues.)
Status: Issue closed
username_1: @username_0 in case of any doubt feel free to open a new issue. I know that the documentation is sometimes unclear and we will work on this in the near future. |
rabbit-tian/blog | 237427573 | Title: JS Events (Mouse and Keyboard Events)
Question:
username_0: ## JS Events (Mouse and Keyboard Events)
1. #### Mouse events
Mouse position within the visible area (viewport): clientX, clientY
Example: divs that follow the mouse
```javascript
<style>
div{width: 20px;height: 20px;border-radius: 50%;background-color: red;position: absolute;}
</style>
<script>
// helper: get the mouse position, including the page scroll offset
function getPos(ev)
{
var scrollTop=document.documentElement.scrollTop||document.body.scrollTop;
var scrollLeft=document.documentElement.scrollLeft||document.body.scrollLeft;
return{x: ev.clientX+scrollLeft, y:ev.clientY+scrollTop}; // return the coordinates as a plain object
};
document.onmousemove=function (ev)
{
var aDiv=document.getElementsByTagName('div');
var oEvent=ev||event;
var pos=getPos(oEvent);
for(var i=aDiv.length-1;i>0;i--)
{
// every div except the first follows the div before it
aDiv[i].style.left=aDiv[i-1].offsetLeft+'px';
aDiv[i].style.top=aDiv[i-1].offsetTop+'px';
};
// the first div follows the mouse
aDiv[0].style.left=pos.x+'px';
aDiv[0].style.top=pos.y+'px';
};
</script>
</head>
<body>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
<div></div>
</body>
```
2. Keyboard events
keyCode: identifies which key the user pressed
Example: moving a div with the keyboard
[Truncated]
</style>
<script>
document.onkeydown=function (ev)
{
var oEvent=ev||event; // event object that works in both IE and Firefox
var oDiv=document.getElementById('div1');
if(oEvent.keyCode==37) // left arrow
{
oDiv.style.left=oDiv.offsetLeft-10+'px';
}
else if(oEvent.keyCode==39) // right arrow
oDiv.style.left=oDiv.offsetLeft+10+'px';
};
</script>
</head>
<body>
<div id="div1"></div>
</body>
```
raff/godet | 224937051 | Title: Consider exporting sendRequest while the API is incomplete
Question:
username_0: Since a lot of APIs are not yet implemented in godet, I think exposing something like the sendRequest method would be useful.
This way, users can use godet to access APIs that are not yet implemented in godet. For example, I needed to capture the screenshot in a specific size, but I had no way to call `Emulation.setVisibleSize` with godet. If we had `SendRequest`, I could simply call it this way:
```go
godet.SendRequest("Emulation.setVisibleSize", godet.Param{})
```
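For reference, the exported wrapper could be as thin as this; a sketch only, since the receiver and parameter types are assumptions about godet's internals:
```go
// Hypothetical sketch: expose the unexported request machinery as-is.
// RemoteDebugger and Params are assumed names for godet's existing types.
func (remote *RemoteDebugger) SendRequest(method string, params Params) (map[string]interface{}, error) {
	return remote.sendRequest(method, params)
}
```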
Answers:
username_1: Ok, let me think about it.
Status: Issue closed
|
j0k3r/php-imgur-api-client | 228483997 | Title: Error handling when uploading too fast
Question:
username_0: It seems they rate limit how often you can upload.
Error has occurred in file username_1/php-imgur-api-client/lib/Imgur/Middleware/ErrorMiddleware.php on line 74:
"Request to: /3/image failed with: "429,You are uploading too fast. Please wait 16 more minutes.,ImgurException,Array"
Answers:
username_1: What could be an option for you?
Creating a custom exception for the _uploading too fast_?
username_0: Yes, that would be best (I had to debug the code to find out the exact error)
username_2: When response is 429, $responseData['data']['error'] is an array and line 74 produces Notice: Array to string conversion.
username_3: It looks like `$responseData['data']['error']` will be an array when the response is `429`.
Perhaps we have to add a condition for the 429 response and convert the array to a string before concatenating the message.
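For instance, something along these lines might work (a rough sketch only; the surrounding middleware code is omitted):
```php
// Sketch: flatten the error field before building the exception message,
// since a 429 response nests an array under $responseData['data']['error'].
$error = $responseData['data']['error'];
if (is_array($error)) {
    $message = isset($error['message']) ? $error['message'] : json_encode($error);
} else {
    $message = (string) $error;
}
throw new \RuntimeException($message);
```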
username_1: If someone can dump the `$responseData` and provide it here when the response is 429, I'll implement a fix
username_3: @username_1 , my error `response` JSON is as follows:
```
array(3) {
'data' =>
array(3) {
'error' =>
array(4) {
'code' =>
int(429)
'message' =>
string(56) "You are uploading too fast. Please wait 59 more minutes."
'type' =>
string(14) "ImgurException"
'exception' =>
array(0) {
...
}
}
'request' =>
string(13) "/3/image.json"
'method' =>
string(4) "POST"
}
'success' =>
bool(false)
'status' =>
int(400)
}
```
username_3: Here is the formatted JSON response:
```
{
"data": {
"error": {
"code": 429,
"message": "You are uploading too fast. Please wait 57 more minutes.",
"type": "ImgurException",
"exception": []
},
"request": "\/3\/image.json",
"method": "POST"
},
"success": false,
"status": 400
}
```
username_1: Should be fixed in next release.
Thanks for reporting @username_0 and for the data to fix the bug @username_3.
Status: Issue closed
|
rakudo/rakudo | 504883120 | Title: Regression in Cro (fetching a json file eats RAM)
Question:
username_0: Script:
```perl6
#!/usr/bin/env perl6
use Cro::HTTP::Client;
use Telemetry;
start loop {
exit 42 if T<max-rss> > 2_000_000;
sleep 0.5;
}
my $url = ‘https://modules.perl6.org/search.json’;
my $resp = await Cro::HTTP::Client.get: $url,
headers => [
User-Agent => ‘perl6 ecosystem unbitrot’,
],
;
my $x = await $resp.body;
```
If you run it on newer-ish rakudo it'll eat your RAM. This wasn't the case before. Bug confirmed: https://colabti.org/irclogger/irclogger_log/perl6?date=2019-10-09#l679
The code comes from a [real script](https://github.com/perl6/ecosystem-unbitrot/blob/e919c8cba58231c12a75e7473fc8f5f96f677735/lib/Unbitrot/Utils.pm6#L13-L17) that I use for the [ecosystem-unbitrot](https://github.com/perl6/ecosystem-unbitrot) repo.
Bisected with [Blin](https://github.com/perl6/Blin):
```bash
bin/blin.p6 --old=2018.11 --new=HEAD --custom-script=croregression.p6 Cro::HTTP::Client
```
* croregression.p6 – Fail, Bisected: 541a4f1628e4e156f6eefc547938746f7b736104
Answers:
username_1: Does it do this at HEAD of `Cro::HTTP::Client`?
username_0: The HEAD version requires Log::Timeline which doesn't pass tests:
```
===> Testing: Log::Timeline:ver<0.3>
# Failed test 'Log file has expected number of lines'
# at t/output-json-lines.t line 30
# expected: '6'
# got: '42'
reached end of string when looking for something
```
Otherwise it works fine, indeed!
username_0: Hmm that's weird. Log::Timeline is clean in Blin, but it wasn't when I was trying to install it myself.
username_1: I believe there was a `JSON::Fast` version released at some point with a regression, iirc that led to `:!pretty` leaving newlines in, and we relied on it not doing that in order to produce JSONLines format. It's possible Blin runs the latest `JSON::Fast` without that.
username_2: On HEAD the script uses 1.2 GB RSS. That's kinda much, but it stays there for a while until it finishes. With Cro::HTTP 0.8.1 the script bails out when reaching 2GB
username_2: It's indeed Cro::HTTP commit <PASSWORD> that fixes this issue. The commit message fits and it explains why it's triggered by a change in rakudo:
"Fix a memory leak in FrameParser
Timotimo++ for helping with identifying a source of the leak.
It seems we previously relied on a rakudo bug that terminated loop
after `when` branch executing, as it seems to be fixed now, we were
appending an unchanging $data to $buffer in a loop which caused a
severe memory leak. Adding an explicit `last` fixes it."
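The pattern in question looks roughly like this (an illustrative sketch, not the actual Cro source):
```raku
# Illustrative sketch of the leak pattern described in the commit message.
loop {
    given $state {
        when 'frame' {
            $buffer ~= $data;  # $data does not change within the loop...
            last;              # ...so this explicit `last` is what stops the endless appending
        }
    }
}
```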
I guess it's time for a Cro::HTTP release :) Closing this issue as it's not rakudo's fault.
Status: Issue closed
|