getredash/redash | 314837964 | Title: Helm chart to deploy Redash on Kubernetes
Question:
username_0: ### Issue Summary
I have developed a Helm chart that deploys Redash on Kubernetes: https://github.com/kubernetes/charts/pull/5071
### Technical details:
* Kubernetes is a popular open source container cluster orchestration system.
* Helm is an open source package manager for Kubernetes: https://helm.sh/
* The Helm community maintains a directory of charts ("packages") that provides "one command deployment" of a wide range of tools: https://github.com/kubernetes/charts
The above PR is a chart I authored for Redash - it has not yet been merged by the charts repo managers but hopefully will be at some point.
The chart will deploy the Redash server as well as separate ad-hoc and queued worker containers. These are configured to use PostgreSQL and Redis containers, which are managed using the existing Helm charts for those tools. A persistent volume claim is created for PostgreSQL, and an Ingress controller can be enabled to proxy inbound requests.
For more detail please see the README at https://github.com/username_0/charts/blob/redash/incubator/redash/README.md
I would love to see if the Redash team has any feedback on this or is interested in co-maintaining this chart :)
Answers:
username_1: Hi 👋
Apologies it took so long to follow up. Between the V4 release and all the incoming stuff we have, I was a bit behind on GitHub issues/PRs.
Thank you and @valentin2105 for taking a stab at creating a Helm Chart for Redash. I'm sure it will be useful for many!
I'll be happy to chime in on some things to change (if there are) in how the chart deploys Redash -- should I make my comments here or in https://github.com/kubernetes/charts/pull/5071?
Also, is there a point in having this pull request, given we have one for the official repository?
username_0: @username_1 firstly - thanks for making such a great product and thanks for making it FOSS :fireworks:
If you could make comments on kubernetes/charts#5071 that would be great - if they are more substantive requests we can always spin them out to follow-ups.
This issue (it's not a PR itself) was just a way to let your team know about the work happening in the official repository. I don't think there is any action needed other than confirming that you are now aware, so you can probably close this :)
Status: Issue closed
username_1: Oops. I was triaging so many issues/PRs yesterday, that I didn't notice. 😅
username_2: @username_1 Please consider reopening this issue and moving the Helm chart to the getredash org on GitHub. The PR went stale waiting for a contribution agreement and vacations, and now the Helm stable repository is no longer accepting new charts; they now prefer external chart repositories that are submitted to Helm Hub. If @username_0 is no longer working on this I would volunteer. (And after 18 months I wouldn't blame him.)
They suggest Helm chart repos can be hosted as GitHub Pages, which can also host the documentation on variables and usage.
amitshekhariitbhu/Fast-Android-Networking | 172152409 | Title: Add support for custom content-type
Question:
username_0: Thanks for the great library.
Please add support for setting a custom content-type when using RequestBuilder. It would be nice to be able to add bytes as the body, and then a custom content-type.
Something like:
AndroidNetworking.post(url).addByteBody(bytes).addContentType("application/json")
Regards,
Willem
Answers:
username_1: @username_0 : added in development branch
username_0: Thanks, great work. Tested and working. Will provide more feedback if anything comes up.
username_2: Hi, thanks for this awesome library. I have one question: how do I get the status code from the response? I didn't find any way to get it.
username_1: @username_2 : You are welcome.
To get the response code you can call response.code() if you are using getAsOkHttpResponse.
You can also get error details through the callback method onError(ANError error):
```java
if (error.getErrorCode() != 0) {
    // received error from server
    // error.getErrorCode() - the error code from server
    // error.getErrorBody() - the error body from server
    // error.getErrorDetail() - just an error detail
    Log.d(TAG, "onError errorCode : " + error.getErrorCode());
    Log.d(TAG, "onError errorBody : " + error.getErrorBody());
    Log.d(TAG, "onError errorDetail : " + error.getErrorDetail());
} else {
    // error.getErrorDetail() : connectionError, parseError, requestCancelledError
    Log.d(TAG, "onError errorDetail : " + error.getErrorDetail());
}
```
username_3: @username_1
Could you please add support for passing the `OkHttpResponse` object to the `onResponse` method of the `ParsedRequestListener` interface?
That way, we can get the status code from the `OkHttpResponse` object while still enjoying the parsed object in `onResponse`. Thanks in advance!
username_1: I have added a way to get the OkHttpResponse in the development branch, as getAsOkHttpResponse.
As of now I have uploaded it as a beta.
So please compile using the dependency below.
`compile 'com.amitshekhar.android:android-networking:0.2.0-beta-0'`
```java
.getAsOkHttpResponse(new OkHttpResponseListener() {
    @Override
    public void onResponse(Response response) {
        if (response != null) {
            // we can also read headers, like response.headers(), or anything else
            // that can be obtained from the OkHttp Response
            if (response.isSuccessful()) {
                Log.d(TAG, "response is successful");
                try {
                    Log.d(TAG, "response : " + response.body().source().readUtf8());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            } else {
                Log.d(TAG, "response is not successful");
            }
        } else {
            Log.d(TAG, "response is null");
        }
    }

    @Override
    public void onError(ANError error) {
        Utils.logError(TAG, error);
    }
});
```
I think you are asking to provide both the parsed response and the OkHttpResponse in a single response listener.
username_3: Thank you for your quick reply!
I am aware of the `OkHttpResponseListener` functionality. What I requested can be summarized like below:
```java
.getAsParsed(new TypeToken<User>() {}, new ParsedRequestListener<User>() {
    @Override
    public void onResponse(Response response, User user) {
        // we can read headers, like response.headers()
        // we can get user as the parsed object
    }

    @Override
    public void onError(ANError anError) {
        // handle error
    }
});
```
username_1: I will try to add this as soon as possible.
Status: Issue closed
thousandetherhomepage/ketherhomepage | 262552351 | Title: Corner bug with pixel selector
Question:
username_0: 
Answers:
username_0: Unable to select space adjacent to the tiny square in the corner.
username_1: Huh weeeeird! I'm going to look into this. Thank you!
iiitv/Odyssy | 238382117 | Title: Create app for Academics
Question:
username_0: This app would be used to display items under nav-bar item `Academics`
### Features:
Following views are to be added at `academics/view-name/`:
* B.Tech (`academics/btech/course/`)
  * CSE
  * IT
* M.Tech (`academics/mtech/course/`)
  * CSE
  * IT
* PhD
* Academic Calendar
Status: Issue closed
autopkg/autopkg | 820729330 | Title: Unable to use repo-update and repo-delete with local file paths
Question:
username_0: ## Issue description
AutoPkg 2.3 introduces a regression in behavior: the `repo-update` and `repo-delete` verbs don't work with file paths.
The problem was introduced [here](https://github.com/autopkg/autopkg/pull/715/files#diff-2ca451fd9f3d68a9309d94e0f9e0f20e2df6e1ad0faf5852643d41401291170bR712).
## Steps to reproduce
1. `autopkg repo-add recipes`
2. `autopkg repo-update ~/Library/AutoPkg/RecipeRepos/com.github.autopkg.recipes`
3. Observe error — note the URL is an incorrect combination of GitHub + file path.
```
ERROR: Can't find an installed repo for https://github.com/Users/user/Library/AutoPkg/RecipeRepos/com.github.autopkg.recipes
```
## Desired behavior
Repos should be updated if referred to by file path, as they were in AutoPkg 2.2 and earlier.
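A fix needs the verbs to check for a local path before expanding the argument into a GitHub URL. A minimal sketch of that dispatch (a hypothetical helper, not AutoPkg's actual code):

```python
import os


def normalize_repo(arg, github_prefix="https://github.com/autopkg/"):
    """Return a local path if `arg` exists on disk, otherwise expand it to a URL."""
    expanded = os.path.abspath(os.path.expanduser(arg))
    if os.path.isdir(expanded):
        # e.g. ~/Library/AutoPkg/RecipeRepos/com.github.autopkg.recipes
        return expanded
    if arg.startswith(("http://", "https://")):
        return arg  # already a full URL
    return github_prefix + arg  # short name like "recipes"
```

With a check like this, an existing checkout is matched by its path first, and only non-path arguments get the GitHub prefix prepended.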
## Note on AutoPkgr
AutoPkgr uses file paths to perform repo update operations, so this issue has already manifested as a problem for those users. (https://github.com/lindegroup/autopkgr/issues/666)
I suspect it may also be related to the VirusTotalAnalyzer issue. (https://github.com/lindegroup/autopkgr/issues/665)
Status: Issue closed |
newsboat/newsboat | 715658846 | Title: Add help dialog to URL view dialog
Question:
username_0: As pointed out in https://github.com/newsboat/newsboat/pull/1213#issuecomment-703368329, as of 2.21 the URL view (_src/urlviewformaction.cpp_) lacks the help dialog. This is a problem because the URL view supports a lot of operations which aren't mentioned in the "hints" line at the bottom of the screen.
This can be fixed by copying the code that handles `OP_HELP` in any other dialog, like _src/feedlistformaction.cpp_.
Status: Issue closed
manelmendez/classpip-services | 257830471 | Title: [Infrastructure] Create method to upload images from client
Question:
username_0:  [S- #6 [Infrastructure] Create method to upload images from client](https://trello.com/c/9MsPjv2Y/12-s-6-infrastructure-create-method-to-upload-images-from-client)
Status: Issue closed
Answers:
username_0:  [S- #6 [Infrastructure] Create method to upload images from client](https://trello.com/c/9MsPjv2Y/12-s-6-infrastructure-create-method-to-upload-images-from-client)
Status: Issue closed
|
hasit/vscode-gi | 286511527 | Title: Templates personalization
Question:
username_0: Allow the users to save a template (from the GitHub API or a personalized one) and modify it according to their needs.
Answers:
username_1: Hello @username_0,
Could you expand on this a little more? Let's start a conversation and brainstorm about how we can integrate this into the extension. I'd be happy to hear ideas.
Also, sorry for the late response. I haven't looked at this repo's issues in a long time.
username_0: Hi @username_1.
As far as I remember, my suggestion was...
The extension allows the user to get a gitignore file from the GitHub API; these files are written by the GitHub community, which is why the gitignore files can be considered templates that developers modify according to their project's needs.
With that in mind, what if VS Code users could also create their own gitignore files and save them in a local storage folder, or even download a file from the gitignore API, modify it, and save it for future projects.
rollup/rollup-plugin-typescript | 256093600 | Title: Zero-byte in output
Question:
username_0: Hello, I'm experimenting with this plugin, and trying to achieve smooth experience both for frontend and backend builds.
For now I got pretty simple setup for «backend»:
```js
return rollup(
  {
    entry: entry, // '.ts' file
    external: id => id !== entry, // mark everything as external
    plugins: ts(
      {
        typescript: require('typescript')
      }),
  })
  .then(bundle =>
  {
    return bundle.generate(
      {
        format: 'cjs',
        exports: 'auto',
      })
  })
```
Somehow a NUL byte is inserted inside `require('typescript-helpers');`.
I see `'\'use strict\';\n\nrequire(\'\u0000typescript-helpers\');\n…` when I debug output.
What on earth could cause this weird insertion to happen?
* `[email protected]`
* `[email protected]`
* `[email protected]`
* node `v7.5.0`
Answers:
username_0: I believe this happens when `typescript-helpers` is marked as external. I have a similar build but with dependencies bundled (for the frontend target) and it bundles just fine.
username_0: This is not critical, since I just replace the zero-byte with an empty string, but it would definitely be interesting to track this down.
@Rich-Harris
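For reference, the workaround mentioned above is just a post-processing pass over the generated bundle. In Python terms (illustration only; the actual build pipeline is JavaScript):

```python
# The generated CJS output contains a NUL byte inside the module id,
# as seen in the debug output quoted above.
code = "'use strict';\n\nrequire('\u0000typescript-helpers');\n"

# Stripping the zero-byte restores a loadable require() call.
cleaned = code.replace("\u0000", "")
print(cleaned)
```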
username_0: Actually, I found this is done by intent. 🤔
https://github.com/rollup/rollup-plugin-typescript/blob/be1278243c6b5246780e766bc28b7cd405ce8b83/src/index.js#L23-L24
Maybe this works in frontend bundles, but when the target is cjs/node it leads to corruption of every file that includes `typescript-helpers`, which means every file. Is this an ugly workaround?
username_1: Replacing this plugin's own helper functions with `tslib` and `--importHelpers` would get rid of that "virtual module" and the NUL character.
Status: Issue closed
pulibrary/pulfalight | 1057739659 | Title: Location info not routing to Aeon/call slips
Question:
username_0: Top container profiles don't appear if multiple folders from a box are requested, so staff won't know where the item is located. Not sure if this is the result of a character limit? In the example, "elephant size box" is missing.
[containerprofilemissing.docx](https://github.com/pulibrary/pulfalight/files/7565610/containerprofilemissing.docx)
[containerprofilemissing2.docx](https://github.com/pulibrary/pulfalight/files/7565611/containerprofilemissing2.docx)
Answers:
username_1: @username_0 This is marked sudden priority - for the sake of documentation can you put on this ticket why that's the case? Presumably it's impossible to find these materials when this happens?
username_0: This issue will result in items being very difficult to locate.
username_2: To add a follow-up to Faith's comment, items are shelved in size categories at both SC locations. This could result in having to search multiple vaults and/or floors to find where a single box is physically located.
Status: Issue closed
schmir/pytest-twisted | 59432653 | Title: Unhandled Exceptions
Question:
username_0: I ran across the following problem: if you have an unhandled exception somewhere in your Twisted stuff (e.g. a missing errback handler), pytest-twisted "hangs" and needs a ctrl-c to stop the test -- at which point you get a traceback into pytest-twisted internals.
Obviously, having a Deferred without an errback is a problem, but if pytest-twisted can do something nicer here, that'd be amazing. (FWIW, _result ends up being None in this case). I don't know enough about greenlets nor pytest to suggest something.
I do have a SSCCE that I detailed in this blog post: https://username_0.ca/pytest-twisted-blockon (and I hereby declare that's Unlicensed if you want any of it).
Answers:
username_1: Thanks for your bug report. I don't use python anymore and therefore won't take care of the issue.
Maybe I can move the project to the new pytest organization.
username_2: In your case the deferred won't be called, and some kind of timeout is needed, like in https://github.com/eugeniy/pytest-tornado
So it's more like an error in the test, but one which should be caught by the testing tool.
Unhandled errors should be reported too, I think.
I don't know if it is possible, but maybe we need to look into twisted.trial's internals.
username_0: It's possibly enough just for pytest-twisted to add an errback handler to every deferred it sees... but I didn't try that yet.
username_2: But this won't help with deferreds that haven't been called, and callLater doesn't return a deferred.
I am not sure if I'm getting it right, but in your case the best solution would be deferLater:
```python
from twisted.internet import reactor, task
import pytest

@pytest.fixture
def foo():
    def blammo():
        raise RuntimeError('foo')
    d = task.deferLater(reactor, 0.1, blammo)
    return pytest.blockon(d)

def test_meaning(foo):
    assert foo == 42
```
And for pytest-twisted, the best solution would be to have a timeout for deferreds (like trial and pytest-tornado do).
username_0: Well, my example test was meant to illustrate the problem I *really* encountered which is/was that an unhandled exception wasn't getting logged or printed anywhere.
Definitely that's a code problem (i.e. something missing an errback handler) but I filed a ticket in case there's something smart pytest-twisted could do for these cases.
So, yes, your fix above is correct -- but in a complex code-base there can easily be hard-to-find Deferreds that are missing appropriate handling (e.g. someone calls a Deferred-returning method as if it was synchronous).
username_2: Yeah, I understand the problem now.
The tricky part is to find unhandled exceptions;
as far as I understand, the only thing you can get about unhandled errors is that they are logged.
So a possible solution is to add a log handler:
```diff
diff --git a/pytest_twisted/plugin.py b/pytest_twisted/plugin.py
index c3f4c73..954ff6b 100644
--- a/pytest_twisted/plugin.py
+++ b/pytest_twisted/plugin.py
@@ -16,6 +16,16 @@ def blockon(d):
         if greenlet.getcurrent() is not current:
             current.switch(result)
 
+    def error_observer(eventDict):
+        if eventDict['isError'] and 'failure' in eventDict:
+            if not eventDict.get('why'):
+                failure = eventDict['failure']
+                if greenlet.getcurrent() is not current:
+                    current.throw(failure.type, failure.value, failure.tb)
+
+    from twisted.python import log
+    log.addObserver(error_observer)
+
     d.addCallbacks(cb, cb)
     if not result:
         _result = gr_twisted.switch()
```
I'm not really familiar with greenlet, so there might be some edge cases
username_0: Yeah, that looks plausible. I will try against my test-case.
I also am unfamiliar with greenlet, so...
username_3: In case someone wants a copy/paste fixture that asserts on any (noticed) unhandled deferred error, here's what I pieced together from reading here and chatting in #twisted. Noticed as in it banks on `gc.collect()` `__del__`'ing the problem deferred and probably various other caveats.
https://gist.github.com/username_3/b4929da7f414d8173a4d87fa7d2cd29b
```python
import gc

import pytest
import twisted.logger


class Observer:
    def __init__(self):
        self.failures = []

    def __call__(self, event_dict):
        is_error = event_dict.get('isError')
        s = 'Unhandled error in Deferred'.casefold()
        is_unhandled = s in event_dict.get('log_format', '').casefold()
        if is_error and is_unhandled:
            self.failures.append(event_dict)

    def assert_empty(self):
        assert [] == self.failures


@pytest.fixture
def assert_no_unhandled_errbacks():
    observer = Observer()
    twisted.logger.globalLogPublisher.addObserver(observer)
    yield
    gc.collect()
    twisted.logger.globalLogPublisher.removeObserver(observer)
    observer.assert_empty()
```
username_3: @vtitor, would the above thing be considered useful in a PR with test(s)?
@username_0, any chance you've been using it? I just noticed it in my code again and thought 'hey, I should share this...'. I can't say I've been exercising it much though so I was curious if you had any feedback about it 'working' or not in real use.
username_3: Given that it yields nothing, should `assert_no_unhandled_errbacks()` be a wrapping decorator instead of a fixture?
(entirely untested)
```python
import functools
import gc

import twisted.logger


def assert_no_unhandled_errbacks(f):
    @functools.wraps(f)
    def wrapped(*args, **kwargs):
        observer = Observer()
        twisted.logger.globalLogPublisher.addObserver(observer)
        result = f(*args, **kwargs)
        gc.collect()
        twisted.logger.globalLogPublisher.removeObserver(observer)
        observer.assert_empty()
        return result
    return wrapped
```
username_3: Hmm, I guess a fixture lets you auto-use... not sure if that's a good approach or not.
username_4: Hi @username_3:
Using that proposed fixture with a test involves nothing more than including it in the test_ function's signature as a fixture, correct?
I ask because I tried to use it when I encountered the same problem (https://github.com/pytest-dev/pytest-twisted/issues/61); however, it did not seem to change the problem.
username_3: @username_4, that looks to be what I did with it once upon a time.
https://github.com/username_3/stlib/blob/b34796cbba959d9cb2cb843f3cc5fc815c7cb6c6/epyqlib/tests/utils/test_twisted.py#L97
```python
def test_sequence_normal(action_logger, assert_no_unhandled_errbacks):
```
But, it's definitely a 'best effort' sort of situation, not a guaranteed catch. I'll look at #61 now.
username_5: Thanks for the fixture workaround. Using a decorator does not seem to work, but the fixture did. This bug is actually very problematic while developing test cases, and it requires some serious debugging until you find this ticket and realize where the error is.
invertase/react-native-firebase | 941541400 | Title: [🐛] Social auth signin doesn't trigger onathstatechanged on build release
Question:
username_0: <!---
**Note <= v5 is deprecated, v5 issues are unlikely to get attention. Feel free to ask in discussions instead.**
Hello there you awesome person;
Please note that the issue list of this repo is exclusively for bug reports;
1) For feature requests, questions and general support please use [GitHub Discussions](https://github.com/invertase/react-native-firebase/discussions).
2) If this is a setup issue then please make sure you've correctly followed the setup guides, most setup issues such as 'duplicate dex files', 'default app has not been initialized' etc are all down to an incorrect setup as the guides haven't been correctly followed.
-->
<!-- NOTE: You can change any of the `[ ]` to `[x]` to mark an option(s) as selected -->
<!-- PLEASE DO NOT REMOVE ANY SECTIONS FROM THIS ISSUE TEMPLATE -->
<!-- Leave them as they are even if they're irrelevant to your issue -->
## Issue
In debug mode, during development, after signing in with Google, onAuthStateChanged gets triggered and the user gets access to the main app. But after building for release with `gradlew assembleRelease`, clicking the login button shows the Google select-account popup, and after selecting an account the popup closes and nothing happens.
---
## Project Files
### Javascript
<details><summary>Click To Expand</summary>
<p>
#### `package.json`:
```json
# N/A
```
#### `firebase.json` for react-native-firebase v6:
```json
# N/A
```
</details>
### iOS
<details><summary>Click To Expand</summary>
[Truncated]
- [ ] Both
- **`react-native-firebase` version you're using that has this issue:**
- 12.0.0
- **`Firebase` module(s) you're using that has the issue:**
- auth
- **Are you using `TypeScript`?**
- `Y` & `3.8.3`
</p>
</details>
Status: Issue closed
Answers:
username_1: Issues that appear only between debug and release are always project-specific problems. The module works fine in release.
I suspect this is a Google Sign-In module usage issue, nothing to do with react-native-firebase. That said, the react-native Google Sign-In module also works fine in release; it does not have a release-mode issue.
You have also largely ignored the template where we request the information necessary for troubleshooting.
I strongly suggest watching `adb logcat` unfiltered as you attempt this. Since this is your project's problem, combined with the ignored issue template, there will be no change in this repository, so I will close this.
aws/aws-lambda-builders | 996718910 | Title: Python version validation fails in CircleCI cimg/python3.9
Question:
username_0: **Description:**
When using the SAM CLI, it calls this library which then fails when trying to validate the installed Python version despite there being a valid installation. Interestingly the legacy image `circleci/python:3.9.6` works fine.
**Steps to reproduce the issue:**
1. Create a CircleCI pipeline with the executor image `cimg/python:3.9.6`
2. Run `sam build` with any template file
**Observed result:**
The command fails when trying to validate the Python version with these logs:
```
2021-09-10 01:43:20,142 | Invalid executable for python at /home/circleci/.pyenv/shims/python3.9
Traceback (most recent call last):
File "aws_lambda_builders/workflow.py", line 58, in wrapper
File "aws_lambda_builders/workflows/python_pip/validator.py", line 48, in validate
aws_lambda_builders.exceptions.MisMatchRuntimeError: python executable found in your path does not match runtime.
Expected version: python3.9, Found version: /home/circleci/.pyenv/shims/python3.9.
Possibly related: https://github.com/awslabs/aws-lambda-builders/issues/30
2021-09-10 01:43:20,157 | Invalid executable for python at /home/circleci/.pyenv/shims/python
Traceback (most recent call last):
File "aws_lambda_builders/workflow.py", line 58, in wrapper
File "aws_lambda_builders/workflows/python_pip/validator.py", line 48, in validate
aws_lambda_builders.exceptions.MisMatchRuntimeError: python executable found in your path does not match runtime.
Expected version: python3.9, Found version: /home/circleci/.pyenv/shims/python.
Possibly related: https://github.com/awslabs/aws-lambda-builders/issues/30
```
**Expected result:**
The command completes successfully.
**Other notes**
When running the command generated by the validation module (python_pip.validator) manually, like so:
```bash
/home/circleci/.pyenv/shims/python3.9 -c "import sys; assert sys.version_info.major == 3 and sys.version_info.minor == 9"
```
it completes successfully. This works with all Python versions found on the PATH that are checked by the validator.
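For anyone scripting around this, the check the validator performs can be reproduced standalone; a sketch (not the library's actual implementation):

```python
import subprocess
import sys


def runtime_matches(python_path, major, minor):
    """Run the interpreter at python_path and verify it reports the expected version."""
    check = (
        "import sys; "
        "assert sys.version_info.major == {} and sys.version_info.minor == {}"
    ).format(major, minor)
    result = subprocess.run([python_path, "-c", check])
    return result.returncode == 0


# The current interpreter always matches its own version, even through a pyenv shim:
print(runtime_matches(sys.executable, sys.version_info.major, sys.version_info.minor))
```

Running this against the pyenv shim paths from the error above is a quick way to confirm the interpreters themselves are fine and the mismatch is in how the validator compares identifiers.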
Answers:
username_1: I'm unable to reproduce with these steps:
```bash
docker run --rm -it cimg/python:3.9.6
# From inside container
pip install aws-sam-cli
sam init --name sam-app --runtime python3.9 --dependency-manager pip --app-template hello-world
cd sam-app
sam build
```
Anything I'm missing?
username_0: We're installing the SAM CLI via the CircleCI-provided orb [circleci/[email protected]](https://circleci.com/developer/orbs/orb/circleci/aws-sam-serverless) rather than with pip. Their install method looks something like this:
```
curl -L https://github.com/aws/aws-sam-cli/releases/latest/download/aws-sam-cli-linux-x86_64.zip -o aws-sam-cli-linux-x86_64.zip
unzip aws-sam-cli-linux-x86_64.zip -d sam-installation
sudo ./sam-installation/install
```
This may have something to do with it.
username_2: Closing because it's impossible to reproduce.
Status: Issue closed
vipulnaik/daily-updates | 589655075 | Title: Vipul Saturday checklist | 2020-03-28
Question:
username_0: ## Chores
- [x] Clean out trash
- [x] Post rental listing on Trulia/HotPads and strategize about the rental situation
## Other
- [x] Provide consulting on my Wikipedia editing experience to an individual considering it (not for me)
Status: Issue closed
PostgREST/postgrest | 530430892 | Title: [PaaS Deployment] Scalingo one-click button
Question:
username_0: Hello. I'm a developer at Scalingo, a European, Heroku-compatible PaaS.
I'm reaching out because we want to make the project easily deployable on Scalingo via a one-click button.
Would you agree to us proposing our file additions, as well as documentation about deploying with us?
Answers:
username_1: @username_0 Hey there,
Does Scalingo allow creating multiple database roles on a free-tier PostgreSQL?
username_0: @username_1 Hi, thanks for your answer.
Yes, it's possible to create multiple roles on a free tier PostgreSQL database.
username_0: Hello, I haven't heard from you since; is it possible to make this one-click button happen?
username_1: Yes. However this brand exposure would be similar to what we offer on our [Sponsor level on Patreon](https://www.patreon.com/postgrest). If Scalingo would be willing to support us, I could add the one-click button in our [README](https://github.com/PostgREST/postgrest#sponsors) and [docs](http://postgrest.org/en/v6.0/#sponsors).
Let me know if you're interested in this. You can also contact me privately(email on profile).
username_2: I tried it but it seems it's not possible to create custom roles with the default user...
microsoft/winget-cli | 947873326 | Title: flexible handling of GetConsoleWidth method's default value would be useful in scripting
Question:
username_0: <!--
🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
I ACKNOWLEDGE THE FOLLOWING BEFORE PROCEEDING:
1. If I delete this entire template and go my own path, the core team may close my issue without further explanation or engagement.
2. If I list multiple bugs/concerns in this one issue, the core team may close my issue without further explanation or engagement.
3. If I write an issue that has many duplicates, the core team may close my issue without further explanation or engagement (and without necessarily spending time to find the exact duplicate ID number).
4. If I leave the title incomplete when filing the issue, the core team may close my issue without further explanation or engagement.
5. If I file something completely blank in the body, the core team may close my issue without further explanation or engagement.
All good? Then proceed!
-->
# Description of the new feature/enhancement
When listing output, ```winget``` truncates column widths to accommodate the total width of the console, which is determined by a call to ```GetConsoleWidth``` in ```src\AppInstallerCLICore\TableOutput.h```:
```cpp
namespace details
{
    // Gets the column width of the console.
    inline size_t GetConsoleWidth()
    {
        CONSOLE_SCREEN_BUFFER_INFO consoleInfo{};
        if (GetConsoleScreenBufferInfo(GetStdHandle(STD_OUTPUT_HANDLE), &consoleInfo))
        {
            return static_cast<size_t>(consoleInfo.dwSize.X);
        }
        else
        {
            return 120;
        }
    }
}
```
If the console is not wide enough, the output of a column is truncated and a UTF-8 horizontal ellipsis (…) is appended to the end of the column, also in ```src\AppInstallerCLICore\TableOutput.h```; see the first listing in the image below.
```cpp
void OutputLineToStream(const line_t& line)
{
    auto out = m_reporter.Info();
    for (size_t i = 0; i < FieldCount; ++i)
    {
        const auto& col = m_columns[i];
        if (col.MaxLength)
        {
            size_t valueLength = Utility::UTF8ColumnWidth(line[i]);
            if (valueLength > col.MaxLength)
            {
                size_t actualWidth;
```
[Truncated]
Adding native PowerShell support as proposed in [#221](https://github.com/microsoft/winget-cli/issues/221) would be the ideal solution; however, a quick and simple patch to bump the default value up to 180+ would be great in the meantime.
```cpp
namespace details
{
    // Gets the column width of the console.
    inline size_t GetConsoleWidth()
    {
        CONSOLE_SCREEN_BUFFER_INFO consoleInfo{};
        if (GetConsoleScreenBufferInfo(GetStdHandle(STD_OUTPUT_HANDLE), &consoleInfo))
        {
            return static_cast<size_t>(consoleInfo.dwSize.X);
        }
        else
        {
            return 180; // or more if your heart desires 😋
        }
    }
}
```
Answers:
username_1: Why truncate at all? If stdout is a file or a pipe, long lines should never be truncated!
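Tools commonly implement that by applying a width limit only when stdout is an interactive terminal; a sketch in Python (illustration only; winget itself is C++):

```python
import os
import sys


def output_width(default=180):
    """Column limit for interactive terminals; None when stdout is piped or redirected."""
    if sys.stdout.isatty():
        try:
            return os.get_terminal_size().columns
        except OSError:
            return default  # a terminal whose size cannot be queried
    return None  # stdout is a pipe or a file: never truncate
```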
username_0: I agree, but that would require more work, which would become unnecessary once PowerShell is natively supported; hence the simple suggestion to change one default.
Kuwamai/PointCloudShader | 716731805 | Title: Publish the texture generation script
Question:
username_0: I've already written this in a Jupyter notebook, so publishing it via Google Colab or similar should be easy.
* [HIP_VRC/hip2tex.ipynb at master · username_0/HIP_VRC](https://github.com/username_0/HIP_VRC/blob/master/hip2tex.ipynb)
* [Reading CSV files in Google Colaboratory | Fujisan | note](https://note.com/092i034i/n/n76f2c2de1974)
* [Publishing 3D city data - OPEN DATA | 3D City Experience Lab.](https://3dcel.com/opendata/)
alumae/gst-kaldi-nnet2-online | 111633131 | Title: Plugin compile error
Question:
username_0: Hi,
I followed the installation steps and got an error when compiling with Kaldi's root directory specified.
command: "make depend KALDI_ROOT=/home/kaldi-trunk make".
error: g++ -M -msse -msse2 -Wall -I.. -pthread -DKALDI_DOUBLEPRECISION=0 -DHAVE_POSIX_MEMALIGN -Wno-sign-compare -Wno-unused-local-typedefs -Winit-self -DHAVE_EXECINFO_H=1 -rdynamic -DHAVE_CXXABI_H -DHAVE_ATLAS -I/home/georgescu/kaldi-trunk/tools/ATLAS/include -I/home/georgescu/kaldi-trunk/tools/openfst/include -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -g -fPIC -DHAVE_CUDA -I/usr/local/cuda/include -DKALDI_NO_EXPF -I/home/georgescu/kaldi-trunk/src *.cc > .depend.mk
make: *** No rule to make target `make'. Stop.
How to solve it?
Thanks.
Answers:
username_1: 'make depend' and 'KALDI_ROOT=/path/of/kaldi-trunk make' should be separate commands.
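That is, run something like the following two commands instead of a single combined line. (This is a hedged sketch based on the answer above; whether `KALDI_ROOT` also needs to be passed to the `depend` step depends on the Makefile.)

```
make depend KALDI_ROOT=/home/kaldi-trunk
KALDI_ROOT=/home/kaldi-trunk make
```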
Status: Issue closed
|
Affirm/affirm-merchant-sdk-android | 437802034 | Title: PromoRequest should include page_type
Question:
username_0: Ex: https://sandbox.affirm.com/api/promos/v2/MJQ0B6BEVJDPS0YG?field=ala&amount=50000&logo_color=blue&logo_type=logo&page_type=product&promo_external_id=promo_set_default_product&show_cta=true
Should be an optional field.
Answers:
username_1: https://github.com/Affirm/affirm-merchant-sdk-android/pull/14
Status: Issue closed
|
dotnetdevbr/vagas | 837934926 | Title: [Hybrid] .NET Developer - JR/PL
Question:
username_0: ## Descrição da vaga
**Selecionamos para empresa de tecnologia da informação:**
- Nível de experiência: Junior –> Pleno
- Setor: Tecnologia da Informação – Serviços
- Local de trabalho: Vila Olímpia – SP (Regime: 50% on-site e 50% home office)
## Local
**Híbrido**
## Requisitos Técnicos:
- NET - C#, ASP
- Padrão MVC, Webservices REST, SOAP
- PL/SQL – SqlServer, Oracle
- IIS
- TFS, Azure DevOps
## Responsibilities
- Creating and maintaining systems;
- Creating and maintaining bulk data load scripts;
- Creating and maintaining automated task-execution services;
- Collaborating with the development team, in line with our methodology;
- Developing high-quality code following market best practices;
- Optimizing solutions for performance, high availability and low latency;
- Keeping a permanent focus on Privacy and Security;
## Contract
**Contractor (PJ); salary expectations to be discussed**
## How to apply
**Please send an email stating your salary expectations to <EMAIL> with your CV attached - subject line: Vaga Desenvolvedor .Net - JR/PL**
**Or apply through this link:** https://jobs.recrutei.com.br/grupo-dream-work/vacancy/7936-dw1064-desenvolvedor-net-jrpl
#### Level
- Mid-level
- Senior
RSS-Bridge/rss-bridge | 218875143 | Title: Sexactu bridge no more works
Question:
username_0: There was a refactoring last week on GQ magazine, which broke my Sexactu bridge.
Not knowing if anybody else use it, I prefer to add an issue to track it.
I'm gonna work on it soon (today or tomorrow)<issue_closed>
Status: Issue closed |
RolifyCommunity/rolify | 2801565 | Title: undefined method `is_admin?' for #<User:0x00000102de3cc8>
Question:
username_0: Tried everything I can see in this tutorial, readme, etc
```
ruby-1.9.2-p180 :010 > User.first.roles
  User Load (0.1ms) SELECT "users".* FROM "users" LIMIT 1
  Role Load (0.1ms) SELECT "roles".* FROM "roles" INNER JOIN "users_roles" ON "roles"."id" = "users_roles"."role_id" WHERE "users_roles"."user_id" = 1
 => [#<Role id: 1, name: "admin", resource_id: nil, resource_type: nil, created_at: "2012-01-11 07:00:48", updated_at: "2012-01-11 07:00:48">]
```
I only have one user, which works out well for my purpose of this bug....
```
ruby-1.9.2-p180 :009 > User.first.has_role? "admin"
  User Load (0.1ms) SELECT "users".* FROM "users" LIMIT 1
  (0.1ms) SELECT COUNT(*) FROM "roles" INNER JOIN "users_roles" ON "roles"."id" = "users_roles"."role_id" WHERE "users_roles"."user_id" = 1 AND (((name = 'admin') AND (resource_type IS NULL) AND (resource_id IS NULL)))
 => true
```
So...has_role? works...
```
  User Load (0.1ms) SELECT "users".* FROM "users" LIMIT 1
NoMethodError: undefined method 'is_admin?' for #<User:0x00000102de3cc8>
	from /Users/username_004/.rvm/gems/ruby-1.9.2-p180/gems/activemodel-3.1.2/lib/active_model/attribute_methods.rb:385:in 'method_missing'
	from /Users/username_004/.rvm/gems/ruby-1.9.2-p180/gems/activerecord-3.1.2/lib/active_record/attribute_methods.rb:60:in 'method_missing'
	from /Users/username_004/.rvm/gems/ruby-1.9.2-p180/gems/rolify-2.1.0/lib/rolify/role.rb:83:in 'method_missing'
	from (irb):11
	from /Users/username_004/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.1.2/lib/rails/commands/console.rb:45:in 'start'
	from /Users/username_004/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.1.2/lib/rails/commands/console.rb:8:in 'start'
	from /Users/username_004/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.1.2/lib/rails/commands.rb:40:in '<top (required)>'
	from script/rails:6:in 'require'
	from script/rails:6:in '<main>'
```
and there we go: `is_admin?` (re: dynamic shortcuts) fails to work. To me, it seems to be tripping up on something specific to the role's `method_missing`.
To add, there are no gems outside of rolify and cancan. I am rolling my own, very simple authentication at this time, using the User class/model/table.
Answers:
username_1: I've run into the same issue, @amolk 's solution worked for me (added the admin role to a user). This seems like it's [still] a bug to me? |
cloudinary/cloudinary_php | 437931433 | Title: Development
Question:
username_0: Hey,
I see that the only way of uploading files is by specifying a URL with the media. However, how can I do this in development? When I try and upload from development, I get a DNS error, because my localhost is unreachable from Cloudinary.
Is there any way of upload the file contents instead of specifying a url?
Thanks
Answers:
username_1: Can you please share what exactly have you tried and didn't work for you so we can take a closer look?
username_0: Hmm, didn't realise I could pass in the actual base64 encoded version of the file. This means I now have this:
```
\Cloudinary::config([
'cloud_name' => config('readcast.cloudinary.cloud'),
'api_key' => config('readcast.cloudinary.key'),
'api_secret' => config('readcast.cloudinary.secret'),
]);
// Upload to Cloudinary
\Cloudinary\Uploader::upload(base64_encode($response->getBody()), [
'resource_type' => 'video',
'public_id' => config('app.env') . '/' . $slug
]);
```
My only issue now is when I attempt to upload to Cloudinary I get a `cURL error 28: SSL connection timeout (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)` error which suggests there is some SSL/HTTPS security issue somewhere. Is this something on my end, the SDK's end or Cloudinary's end and how can I fix it?
username_1: Hi @username_0. It is likely that you're getting a timeout because your video is too big to be sent in one go. Please use the `upload_large` method instead -
https://github.com/cloudinary/cloudinary_php/blob/f3b084d9e5fa67f3a0ecba931cd9da513ddf665b/src/Uploader.php#L110
For more information -
https://cloudinary.com/documentation/upload_videos#chunked_video_upload
username_0: Hmm, when I use the `upload_large` method I get another error. It starts with `fopen(...`. I can't get the end of the error because the Base64 contents of the file are too long for it to log.
Any idea on what might be the issue?
username_1: Can you please share your cloud name (so I can check your logs) and the exact code you're using that leads to the error?
If you'd rather keep it private, please feel free to open a support ticket at <EMAIL> and share the details there.
username_0: My cloud name is `readcastapp`
username_1: Thanks, @username_0. However, the code you shared lacks the `upload_large` method.
I also can't seem to find any related error in your logs. Can you please also share the exact Base64 string you're trying to upload?
username_0: Oh, sorry, fetched the wrong code.
This is the base64 string that I'm attempting to upload. Maybe it's length is the issue. https://gist.github.com/username_0/a2d7bd1f862538e75d49445cc0f1225c
username_2: Uploading using a Base64 representation is limited to a file size of 60MB. The file that you shared is 1.5MB.
username_1: @username_0, Your Base64 data URI needs to follow this scheme - https://en.wikipedia.org/wiki/Data_URI_scheme.
You might want to refer to [this SO](https://stackoverflow.com/questions/3967515/how-to-convert-an-image-to-base64-encoding/13758760#13758760) thread for help.
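For illustration, here is a hedged sketch of the expected string shape (in Python purely for brevity; the thread's code is PHP, and the `video/mp4` MIME type is an assumption):

```python
import base64

def to_data_uri(raw: bytes, mime: str = "video/mp4") -> str:
    # Data URI scheme: "data:<mime>;base64,<payload>"
    return "data:" + mime + ";base64," + base64.b64encode(raw).decode("ascii")

# base64("abc") is "YWJj", so the full URI is the prefix plus that payload
assert to_data_uri(b"abc") == "data:video/mp4;base64,YWJj"
```

In PHP the same effect comes from prefixing the `base64_encode(...)` result with the `data:<mime>;base64,` header before passing it to the uploader.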
Status: Issue closed
username_0: In the end I didn't implement Cloudinary, as I couldn't get it configured and working the way I want it. Although, thanks for your help! |
Esri/runtime-questions | 61145085 | Title: Is the SQLite database used for the runtime geodatabase easily viewable with tools like SQLite Expert or are the table and field names set up using keys that are stored somewhere and will need to be looked up?
Answers:
username_1: Yes - change the extension to SQLite and dig in. Not sure what the official answer is from Esri but we have done this many times for diagnostics.
username_2: The internals of the mobile geodatabase stored within sqlite can be viewed using many tools that support sqlite. As always we don't encourage direct editing of the tables.
username_3: Whenever I try to open a .geodatabase file in a SQLite browser I get "file is encrypted or is not a database". I've tried Sqlite Administrator, the Firefox SQLite add on, and SQLite Expert personal edition. Does anyone have an example I can open that has worked for them? I've only used .geodatabases that I have found in various Runtime for .NET related gitub repos.
username_4: @username_3 As @username_2 says, you should only use these tools to view/inspect your *.geodatabase file. Having said that, there may be something wrong with your *.geodatabase file, but on many occasions I have had issues opening a *.geodatabase file in [sqlitebrowser](http://sqlitebrowser.org/) yet the same file opens fine using [SQLite Pro](https://www.sqlitepro.com/). But I can open the same *.geodatabase file in SQLite Manager for Firefox which you reported as not working so maybe it is the *.geodatabase file.
BTW: You should not have to change the extension of your *.geodatabase file.
Status: Issue closed
|
victor0210/slim | 428038302 | Title: Error when loading via CDN
Question:
username_0: <script src="https://unpkg.com/slim-store/slim.min.js"></script>
```
slim.min.js:1 Uncaught ReferenceError: require is not defined
at slim.min.js:1
at slim.min.js:1
```
Answers:
username_1: fixed the issue with slim v3.0.6
pr: https://github.com/username_1/slim/pull/36
Status: Issue closed
|
flutter/flutter | 896234262 | Title: [Google One Tap SignIn] Support this feature from flutter
Question:
username_0: ## Use case
I want to suggest to my users to sign in with Google when the app starts, like this:

## Proposal
Maybe adding a method `startOneTapSignIn` that starts it and returns the same as current `signIn` method
```dart
final _googleSignIn =
GoogleSignIn(scopes: ['email'], clientId: oauthClientId);
final result = await _googleSignIn.startOneTapSignIn();
```
Answers:
username_1: This is more relevant now that Google is sending out emails saying they'll be discontinuing the old API (See #88084)

They provide a migration page [here](https://developers.google.com/identity/gsi/web/guides/migration)
username_2: https://pub.dev/packages/google_one_tap_sign_in
One tap sign in google is ready on Flutter
username_2: https://pub.dev/packages/google_one_tap_sign_in
One tap sign in google is ready on Flutter
username_0: It's nice that it exists in another library, but it is not in this one 🤓 |
stellar/go | 434091859 | Title: Issue: Streampayments contains empty memo fields
Question:
username_0: The StreamPayments function in the Stellar Go Horizon client returns an empty memo Type and memo Value. All other fields work fine.
https://godoc.org/github.com/stellar/go/clients/horizon#Client.StreamPayments
https://godoc.org/github.com/stellar/go/clients/horizon#Payment
This issue is tested for memo type - ID, TEXT, HASH
A rough example is below -
```go
const address = "GBD3ECXAO4427NFYIZH6TYSZVX2I76KVUHYYKJIQZUYC3GHA73KHGNNV"

ctx := context.Background()
cursor := horizon.Cursor("98924319576453121")
// StreamPayments also returns an error, elided in this rough example.
horizon.DefaultPublicNetClient.StreamPayments(ctx, address, &cursor, func(payment horizon.Payment) {
	fmt.Println("Payment Memo Type", payment.Memo.Type)
	fmt.Println("Payment Memo Value", payment.Memo.Value)
})
```
Answers:
username_1: Duplicate of #924.
Status: Issue closed
username_0: @username_1:
My suggestion is that this should be documented on the StreamPayments function in the godocs, or that StreamPayments should work without LoadMemo.
username_1: @username_0 I agree this should be clarified. @username_2 lets confirm the behaviour in the new `horizonclient` and modify as needed.
username_2: Memo is a field on transactions, not on operations, so it won't be populated when getting payments. You will always need to get the memo with an extra API call. If we want memo on operations, there needs to be further discussion, as it involves changes to Horizon.
username_0: @username_2 As this will need discussion, I suggest adding a note/comment to the StreamPayments godoc stating that LoadMemo must be used to load the memo after streaming payments.
username_2: See discussion for possible workaround in #1255.
#1260 will be used to track implementation |
iizukanao/node-rtsp-rtmp-server | 200531102 | Title: nginx-rtmp-module
Question:
username_0: What's the difference with 'nginx-rtmp-module'
Answers:
username_1: This is not an nginx's module.
username_0: I see, I just want to know the difference between this and nginx-rtmp-module -_-.
username_1: There are many differences and I can't tell you them briefly. See [README](https://github.com/username_1/node-rtsp-rtmp-server/blob/master/README.md) for the features.
username_0: ok.thanks!
Status: Issue closed
|
Lerondo/Mythe | 59705350 | Title: GetComponent
Question:
username_0: ```c
other.GetComponent<HealthController>().UpdateHealth(-_currentAttackDmg);
other.GetComponent<Unit>().KnockBack(this.transform.position, 2f, 2f);
other.GetComponent<PlayerController>().SetJustHit(true);
```
File: https://github.com/Lerondo/Mythe/blob/master/Assets/Scripts/Enemies/Melee.cs
GetComponent costs performance, so try not to call it all the time when it isn't necessary.
You could store the result in a variable and use that as a reference.<issue_closed>
Status: Issue closed |
lars-sh/parent | 509192836 | Title: New utility class: Read current artifacts meta information
Question:
username_0: A new class should provide the meta information of the currently running artifact, such as name, version, developer ...
Is that possible somehow?
Answers:
username_0: This requires either determining or selecting which JAR has been loaded as the "main" JAR
Status: Issue closed
username_0: This will be possible once #19 is done. Current idea to get the requested information: `Resources.readManifest(MyMainClass.class).getMainAttributes(Name.IMPLEMENTATION_TITLE)` |
pschloss/yowbook | 200818443 | Title: Create pedigree relationships
Question:
username_0: For each animal:
* Show their pedigree
* List siblings (include sex)
* List offspring (include sex)
* Indicate whether animal was a multiple
List the animals' information by their eartag and a link to their page. Will need a way to create records for animals that we don't have data for (e.g. if you bought an animal, you may not know its grandsire or have its information)
Answers:
username_0: Need to represent the relationship between ewe/ram and lambs using has_many/has_many relationships. A lamb has_many (i.e. 2) parents and a ewe/ram has_many lambs. Creating this model will make it easier to traverse genealogies and count offspring.
In addition, I could include a ewe/ram having many relationships with each other, although this might get more complicated than I want to deal with right now. The advantage of this would be that I could set "marriages" to last 6 months, starting at the time of breeding and running through the birth of the lamb. This way, when a user enters the dam's eartag number to create a new lamb, it would automatically populate the eartag number of the sire.
These types of relationships are discussed at https://stackoverflow.com/questions/31614819/parent-child-relationship-in-user-model-self-join
Status: Issue closed
|
rbavery/CropMask_RCNN | 336785790 | Title: Refactor TRAIN notebook into command line script a la matterport nucleus example
Question:
username_0: The notebook already pulls mostly from this script. Now that it is running on notebooks, might be time to refactor for portability and efficiency.
See
https://github.com/matterport/Mask_RCNN/tree/master/samples/nucleus
Status: Issue closed
Answers:
username_0: This is done. Improvements need to be made to generalize crop_mask.py, wv2_config.py and wv2_dataset.py to work with other imagery sources, like Planet. mrcnn/visualize.py has also only been partially changed to work with 8-channel and 3-channel wv2 imagery.
Some cells in the inspect_data and inspect_model notebooks still fail with 8-channel or 3-channel wv2 imagery (3-channel only works on the center-pivot branch). Basically, the mrcnn library and the crop_mask package that runs on top of it still need a modularity overhaul, but I'm closing this issue since things work and more issues are to come.
AndreyZabuyskiy/Flight | 623952229 | Title: Compute the average recommended speed across all dispatchers
Question:
username_0: Calculate the average speed across all dispatchers so it's easier for the pilot to get oriented. At the same time, keep showing each individual dispatcher's recommended speed
Answers:
username_0: The average recommended altitude should not be calculated inside the airplane.
You are hardcoding the number of dispatchers, even though there could be more than two; that's a hack.
Do all these calculations inside Simulator, where you have the list of dispatchers, and pass the result into the airplane's Show() function
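A minimal sketch of that advice (illustrative Python, not the project's own code; the names are hypothetical):

```python
class Dispatcher:
    def __init__(self, recommended_speed):
        self.recommended_speed = recommended_speed

def average_recommended_speed(dispatchers):
    # Computed over the whole list, so nothing is hardcoded to two dispatchers.
    return sum(d.recommended_speed for d in dispatchers) / len(dispatchers)

dispatchers = [Dispatcher(420.0), Dispatcher(460.0), Dispatcher(500.0)]
assert average_recommended_speed(dispatchers) == 460.0
```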
Status: Issue closed
|
Setono/SyliusGiftCardPlugin | 1123349224 | Title: Purchased card has a zero amount
Question:
username_0: On the 0.12.x branch, when a gift card is automatically created during a purchase, the amount is filled in only if the card amount is configurable.
For a fixed amount, the amount is always 0.
I think this is because the card is created in the `AddToCartTypeExtension`, and the cartItem prices are not yet calculated at that step.
I can suggest a pull request, but I am not sure about the method to use
Answers:
username_1: I don't really get the problem here: if the gift card does not have a configurable amount (so a fixed one), then the Sylius mechanics take place and set the OrderItem price itself.
What is your issue exactly ?
username_0: My issue is that the gift card is actually created with a 0 amount; maybe I misconfigured something?
username_1: The gift card amount should be set on OrderProcess. Can you come up with either a test to reproduce it, or a scenario?
username_0: * create a product with "gift-card" ON and "amount configurable" OFF
* add the product to the cart
* a gift card is created with amount 0€
username_1: Thank you @username_0 , I could reproduce with your scenario, and your PR is indeed fixing the issue. I'm just adding tests and we'll be able to proceed with merge |
JOML-CI/JOML | 99913903 | Title: Error in setEulerAnglesXYZ from Quaternionf
Question:
username_0: Currently it rotates around the wrong axis and causes a reflection; the error is at the end of the method
x = cx * cycz + sx * sysz;
y = sx * cycz - cx * sysz;
z = cx * sycz + sx * cysz;
w = cx * cysz - sx * sycz;
should be
w = cx * cycz + sx * sysz;
x = sx * cycz - cx * sysz;
y = cx * sycz + sx * cysz;
z = cx * cysz - sx * sycz;
Maybe the same error exists in Quaterniond or in setEulerAnglesZYX, but I didn't use those.
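As a numerical sanity check (an illustrative Python sketch, not part of the original report): under the Hamilton convention, the corrected closed form above matches composing the three axis rotations as qz * qy * qx:

```python
import math

def hamilton(q1, q2):
    # Hamilton product of quaternions given as (w, x, y, z)
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def corrected_set_euler_xyz(ax, ay, az):
    # The corrected assignments from the report above.
    sx, cx = math.sin(ax/2), math.cos(ax/2)
    sy, cy = math.sin(ay/2), math.cos(ay/2)
    sz, cz = math.sin(az/2), math.cos(az/2)
    cycz, sysz = cy*cz, sy*sz
    sycz, cysz = sy*cz, cy*sz
    w = cx*cycz + sx*sysz
    x = sx*cycz - cx*sysz
    y = cx*sycz + sx*cysz
    z = cx*cysz - sx*sycz
    return (w, x, y, z)

ax, ay, az = 0.3, -1.1, 0.7
qx = (math.cos(ax/2), math.sin(ax/2), 0.0, 0.0)
qy = (math.cos(ay/2), 0.0, math.sin(ay/2), 0.0)
qz = (math.cos(az/2), 0.0, 0.0, math.sin(az/2))
reference = hamilton(hamilton(qz, qy), qx)
assert all(abs(a - b) < 1e-12 for a, b in
           zip(reference, corrected_set_euler_xyz(ax, ay, az)))
```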
Answers:
username_1: You are absolutely right! Thanks for finding and reporting. :)
username_1: The issue has been fixed now in the relevant branches of JOML, and the 1.4.1-SNAPSHOT version is available at the snapshot repository of oss.sonatype.org.
Status: Issue closed
|
numba/numba | 20719949 | Title: Error when using numba.datetime in struct
Question:
username_0: When creating a numba struct containing a numba datetime member, I get an error:
AttributeError: 'function' object has no attribute 'is_pointer'
I'm using numba v0.11.0 under Anaconda. A simple test case that reproduces this error is:
```python
import numba
fields = [('x', numba.i8), ('date', numba.datetime)]
test_struct = numba.struct(fields=fields, name='test_struct')
@numba.jit(argtypes=[test_struct[:]])
def test_struct_func(data):
sz = data.shape[0]
for k in range(sz):
tmp1 = data[k]['x']
tmp2 = data[k]['date']
```
with the resulting error message:
```bash
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-98-dbaf938186b6> in <module>()
5 test_struct = numba.struct(fields=fields, name='test_struct')
6
----> 7 @numba.jit(argtypes=[test_struct[:]])
8 def test_struct_func(data):
9 sz = data.shape[0]
/Users/uname/anaconda/lib/python2.7/site-packages/numba/decorators.pyc in _jit_decorator(func)
222 sig, lfunc, wrapper = compile_function(env, func, argtys,
223 restype=return_type,
--> 224 nopython=nopython, func_ast=func_ast, **kwargs)
225 return numbawrapper.create_numba_wrapper(func, wrapper, sig, lfunc)
226
/Users/uname/anaconda/lib/python2.7/site-packages/numba/decorators.pyc in compile_function(env, func, argtypes, restype, func_ast, **kwds)
131 assert kwds.get('llvm_module') is None, kwds.get('llvm_module')
132
--> 133 func_env = pipeline.compile2(env, func, restype, argtypes, func_ast=func_ast, **kwds)
134
135 function_cache.register_specialization(func_env)
/Users/uname/anaconda/lib/python2.7/site-packages/numba/pipeline.pyc in compile2(env, func, restype, argtypes, ctypes, compile_only, func_ast, **kwds)
142 pipeline = env.get_pipeline(kwds.get('pipeline_name', None))
143 func_ast.pipeline = pipeline
--> 144 post_ast = pipeline(func_ast, env)
145 func_signature = func_env.func_signature
146 symtab = func_env.symtab
/Users/uname/anaconda/lib/python2.7/site-packages/numba/pipeline.pyc in __call__(self, ast, env)
189
190 if self.is_composed:
--> 191 ast = self.transform(ast, env)
192 else:
193 try:
[Truncated]
--> 241 return visitor(node)
242
243 def generic_visit(self, node):
/Users/uname/anaconda/lib/python2.7/site-packages/numba/type_inference/infer.pyc in visit_Assign(self, node)
569 rhs_var = node.value.variable
570 if isinstance(target, ast.Name):
--> 571 node.value = nodes.CoercionNode(node.value, lhs_var.type)
572 elif lhs_var.type != rhs_var.type:
573 if lhs_var.type.is_array: # and rhs_var.type.is_array:
/Users/uname/anaconda/lib/python2.7/site-packages/numba/nodes/coercionnodes.pyc in __init__(self, node, dst_type, name)
26
27 type = getattr(node, 'type', None) or node.variable.type
---> 28 if dst_type.is_pointer and type.is_int:
29 assert type == Py_uintptr_t, type
30
AttributeError: 'function' object has no attribute 'is_pointer'
```
Answers:
username_1: `numba.datetime` doesn't exist anymore, but `np.datetime` should work fine. I'm closing this issue, please open a new one if something goes wrong.
Status: Issue closed
|
smeijer/leaflet-geosearch | 1179886074 | Title: SearchControl.ts:412 Uncaught (in promise) when clicking a result
Question:
username_0: Hi
I'm using geosearch with Leaflet Js
I have included the script after the Leaflet script
and this
```javascript
new GeoSearch.GeoSearchControl({
  provider: new GeoSearch.OpenStreetMapProvider(),
}).addTo(map);
```
The search works fine and produces results but when I click on a search result I get this in the console.
```
SearchControl.ts:412 Uncaught (in promise) TypeError: Cannot convert undefined or null to object
    at Function.keys ()
    at i.showResult (SearchControl.ts:412:28)
    at SearchControl.ts:404:12
```
Any help much appreciated
Answers:
username_1: @username_0 out of curiosity, does that also happen if you click on the map, rather than on the result? As in, run the search as you did, but then instead of clicking on the result, click the map (sorry if that was obvious).
username_0: --
Neil
Lewes.co.uk <http://www.lewes.co.uk>
username_1: Thanks - I thought I'd seen something like this when experimenting with all local versions of this app and its dependencies, must've been something else (it's been a few months, so my memory may have been a bit hazy).
username_0: --
username_1: No, I'm sorry, I don't have any insight into this offhand. The issue sounded similar to something that I'd encountered while working with local copies of this app and its dependencies (as opposed to installing them from NPM), but I'm not really a Node expert so I couldn't debug it. I had _assumed_ that the problem in my case was something going awry in the build process (since the problem went away when I switched to NPM sources), but I that was only a "best guess" on my part.
Though I'm not in a position to help with this particular issue right now, you could definitely help the repo owner or another dev debug this by providing a little more information about your situation. I'd suggest
- noting the installed version of this module
- listing your Node and NPM versions
- noting the browser and browser version where the bug occurs
- providing a sample of your code where this dependency is being used
username_0: Hi Darren
Thanks for that. I'm not using Node though; I'm just including this file:
https://unpkg.com/[email protected]/dist/geosearch.umd.js
The problem occurs in
Safari Version 14.1.2 (16611.3.10.1.6) and
Chrome Version 99.0.4844.74 (Official Build) (x86_64) |
spacemeshos/go-spacemesh | 531916808 | Title: Implement a priority queue for gossip messages
Question:
username_0: Following the investigation of the delay in the hare messages during the broadcast of ATXs, we came to the conclusion that the messages were waiting in the queue of pending gossip messages.
With 300 nodes we found the queue was about 150-180 messages long, at about 100-500ms per single message, which implies at least 40 seconds of delay for the last message in the queue.
We decided that the right way to fix it (considering the time limit) is introducing a priority queue for the messages, in which the hare messages have priority over the ATX messages.<issue_closed>
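The idea can be sketched like this (illustrative Python; the actual implementation is in Go, and the priority values here are invented):

```python
import heapq
import itertools

HARE, ATX = 0, 1  # lower number = higher priority

class PriorityGossipQueue:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # preserves FIFO order within a priority

    def push(self, priority, msg):
        heapq.heappush(self._heap, (priority, next(self._order), msg))

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = PriorityGossipQueue()
q.push(ATX, "atx-1")
q.push(HARE, "hare-1")
q.push(ATX, "atx-2")
q.push(HARE, "hare-2")
# Hare messages drain first, each class in arrival order.
assert [q.pop() for _ in range(4)] == ["hare-1", "hare-2", "atx-1", "atx-2"]
```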
Status: Issue closed |
shish/rpi-router-forumula | 490747635 | Title: Persist routing tables and rules
Question:
username_0: iptables configs are currently persisted using the `iptables-persistent` package, which is ok (and will hopefully be replaced with nftables.conf which has load-on-boot built-in)
But for these configs to do anything, we need some `ip rule` and `ip route` configs, and I can't see how to save those (beyond writing a custom service to run a shell script at boot time)
(Would they even work if run at boot time, or do we need to wait for the USB dongles to come online first?) |
NucleusPowered/Nucleus | 156513132 | Title: Add ability to have "sign" warps
Question:
username_0: This is a much requested feature. We need to:
Add Data for the following:
- [ ] Warp Name
- [ ] Permission Required
- [ ] Custom Warmup
Add Commands for the following:
- [ ] Create Warp
- [ ] Add/Remove Permission from Warp
- [ ] Add/Remove Warmup
Do we just put this on signs, or on any block? If we just put it on signs, do we force the signs to have some sort of marker on the sign to indicate it's a warp?
Answers:
username_1: I like the general format where it says something like [Warp] on the first line, then the warp name on the second, then possibly any extra information on the next lines. I think having it on a block is sort of unnecessary, as there isn't an amazing way to convey that it is a warp block.
It could be good to be able to fill in the sign manually with the warp tag and name but also a command(with a separate permission) that allows you to right click a sign and it changes to that.
This format could be a general automatic format though, and they could maybe specify their own format either by just making any sign which they already formatted a warp with the command and if it doesn't have text add the default, which could be configured in the configuration.
username_2: Would also be nice to have a cost associated with the warp.
In Essentials you can do:
```
[Warp]
warp_name
cost
permission_group
```
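For illustration only, a Python sketch of parsing the Essentials-style layout quoted above (the real plugin would do this against Sponge's sign API; the field handling here is an assumption):

```python
def parse_warp_sign(lines):
    # Expects up to four sign lines: [Warp], name, optional cost, optional group.
    if not lines or lines[0].strip() != "[Warp]":
        return None
    name = lines[1].strip() if len(lines) > 1 else ""
    cost = float(lines[2]) if len(lines) > 2 and lines[2].strip() else 0.0
    group = lines[3].strip() if len(lines) > 3 and lines[3].strip() else None
    return {"name": name, "cost": cost, "group": group}

assert parse_warp_sign(["[Warp]", "spawn", "10", "vip"]) == \
    {"name": "spawn", "cost": 10.0, "group": "vip"}
assert parse_warp_sign(["Hello"]) is None
```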
username_3: So, in the #185 issue, you mentioned you were waiting for "API 5 or API 6" to do something with this. Given we're on API 7, any further news? |
aws-amplify/amplify-flutter | 820643787 | Title: [Datastore] API sync failure on configure
Question:
username_0: Hi. I am using the DataStore API and I get this message when initializing the app.
```
E/amplify:aws-datastore(21773): Failure encountered while attempting to start API sync.
E/amplify:aws-datastore(21773): GraphQLResponseException{message=Subscription error for Video: [GraphQLResponse.Error{message='Validation error of type FieldUndefined: Field '_deleted' in type 'Video' is undefined @ 'onUpdateVideo/_deleted'', locations='null', path='null', extensions='null'}, GraphQLResponse.Error{message='Validation error of type FieldUndefined: Field '_lastChangedAt' in type 'Video' is undefined @ 'onUpdateVideo/_lastChangedAt'', locations='null', path='null', extensions='null'}, GraphQLResponse.Error{message='Validation error of type FieldUndefined: Field '_version' in type 'Video' is undefined @ 'onUpdateVideo/_version'', locations='null', path='null', extensions='null'}], errors=[GraphQLResponse.Error{message='Validation error of type FieldUndefined: Field '_deleted' in type 'Video' is undefined @ 'onUpdateVideo/_deleted'', locations='null', path='null', extensions='null'}, GraphQLResponse.Error{message='Validation error of type FieldUndefined: Field '_lastChangedAt' in type 'Video' is undefined @ 'onUpdateVideo/_lastChangedAt'', locations='null', path='null', extensions='null'}, GraphQLResponse.Error{message='Validation error of type FieldUndefined: Field '_version' in type 'Video' is undefined @ 'onUpdateVideo/_version'', locations='null', path='null', extensions='null'}], recoverySuggestion=See attached list of GraphQLResponse.Error objects.}
E/amplify:aws-datastore(21773): at com.amplifyframework.datastore.appsync.AppSyncClient.lambda$subscription$2(AppSyncClient.java:291)
E/amplify:aws-datastore(21773): at com.amplifyframework.datastore.appsync.-$$Lambda$AppSyncClient$FIbUjC1l68HcabhkQcx9zYkvUI8.accept(Unknown Source:8)
E/amplify:aws-datastore(21773): at com.amplifyframework.api.aws.SubscriptionEndpoint$Subscription.dispatchNextMessage(SubscriptionEndpoint.java:392)
E/amplify:aws-datastore(21773): at com.amplifyframework.api.aws.SubscriptionEndpoint.notifySubscriptionData(SubscriptionEndpoint.java:226)
E/amplify:aws-datastore(21773): at com.amplifyframework.api.aws.SubscriptionEndpoint.access$700(SubscriptionEndpoint.java:60)
E/amplify:aws-datastore(21773): at com.amplifyframework.api.aws.SubscriptionEndpoint$AmplifyWebSocketListener.processJsonMessage(SubscriptionEndpoint.java:570)
E/amplify:aws-datastore(21773): at com.amplifyframework.api.aws.SubscriptionEndpoint$AmplifyWebSocketListener.onMessage(SubscriptionEndpoint.java:476)
E/amplify:aws-datastore(21773): at okhttp3.internal.ws.RealWebSocket.onReadMessage(RealWebSocket.kt:333)
E/amplify:aws-datastore(21773): at okhttp3.internal.ws.WebSocketReader.readMessageFrame(WebSocketReader.kt:245)
E/amplify:aws-datastore(21773): at okhttp3.internal.ws.WebSocketReader.processNextFrame(WebSocketReader.kt:106)
E/amplify:aws-datastore(21773): at okhttp3.internal.ws.RealWebSocket.loopReader(RealWebSocket.kt:293)
E/amplify:aws-datastore(21773): at okhttp3.internal.ws.RealWebSocket$connect$1.onResponse(RealWebSocket.kt:195)
E/amplify:aws-datastore(21773): at okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:519)
E/amplify:aws-datastore(21773): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
E/amplify:aws-datastore(21773): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
E/amplify:aws-datastore(21773): at java.lang.Thread.run(Thread.java:923)
W/amplify:aws-datastore(21773): API sync failed - transitioning to LOCAL_ONLY.
```
My graphQL schema is like below:
```
enum TagStatus {
ACTIVE
INACTIVE
}
type Tag @model {
id: ID!
text: String!
rating: Int!
status: TagStatus!
# New field with @connection
videos: [Video] @connection(keyName: "byTag", fields: ["id"])
}
# New model
type Video @model
@key(name: "byTag", fields: ["tagID", "contentUrl"]) {
id: ID!
tagID: ID!
tag: Tag! @connection(fields: ["tagID"])
contentUrl: String!
thumbUrl : String!
}
```
I executed `amplify codegen models` and compiled successfully.
Also, in Android Studio with the JS GraphQL plugin, I get an error message on my schema.graphql file saying "Unknown directive "model"". It doesn't cause any compile errors, though.
Answers:
username_0: `amplify update api` and selecting `Enable DataStore for entire API` solved the problem
username_1: This is a bug report I just filed that includes a couple of bugs I found and the workarounds for each.
After reading your bug report, I think it could help others who might land on this page:
https://github.com/aws-amplify/amplify-flutter/issues/822 |
loot/skyrimse | 506245979 | Title: Realistic Water Two 2.0 Update - Notes
Question:
username_0: ### [Realistic Water Two](https://www.nexusmods.com/skyrimspecialedition/mods/2182)
###### Sources:
_Whats New.html_ included with RWT 2.0 installer.
###### Added:
- Mod Configuration Menu
- `Realistic Water Two - Landscape Fixes.esp`
###### Removed:
- `RealisticWaterTwo - FlowingLakes.esp`
- `RealisticWaterTwo - Open Cities.esp`
- `RealisticWaterTwo - Waves.esp`
---
- [ ] Update Cleaning Info.
- [ ] Mark removed plugins as incompatible with new version.
- [ ] Update patch messages.
- [ ] Add requires MCM message.<issue_closed>
Status: Issue closed |
mapbox/mapbox-gl-leaflet | 348529624 | Title: this._map._proxy is undefined
Question:
username_0: If you integrate this with webkit, react-leaflet and leaflet when closing the map it is throwing an error on Line 75.
`L.DomEvent.off(this._map._proxy,` L.DomUtil.TRANSITION_END, this._transitionEnd, this);
this._map._proxy is undefined because in leaflet.js it removes the animation proxy right before removing the layers.
Answers:
username_0: Thanks for the comment... we had pulled the file in directly and made a similar change. We will attempt to install at that commit and test.
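For anyone landing here before a release includes the change, a minimal sketch of the guard pattern described above (the function name and return values are hypothetical, not the actual plugin code):

```javascript
// Sketch of a defensive teardown: Leaflet removes the animation proxy before
// removing layers, so this._map._proxy may already be gone by the time the
// layer's removal hook runs. Guard before unbinding.
function detachTransitionHandler(map) {
  if (map && map._proxy) {
    // L.DomEvent.off(map._proxy, L.DomUtil.TRANSITION_END, ...) would go here
    return 'unbound';
  }
  return 'skipped'; // proxy already removed; nothing to unbind
}
```

The same check can be wrapped around the `L.DomEvent.off` call on line 75.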
Status: Issue closed
|
TechEmpower/FrameworkBenchmarks | 223625761 | Title: Remove play2-scala-activate benchmark
Question:
username_0: I think [activate](https://github.com/fwbrasil/activate) is not being maintained anymore.
Answers:
username_1: Thanks @username_0. I'll Prepare a PR tomorrow to get it removed.
username_0: Thank you! It's amazing the effort you guys put into this.
Status: Issue closed
|
emailjs/emailjs.github.io | 611065676 | Title: "Domain emailjs.org is pending renewal or deletion"
Question:
username_0: The current domain "emailjs.org" shows the message "NOTICE: This domain name expired on 4/22/2020 and is pending renewal or deletion." from GoDaddy...
It would be nice, if the domain could be renewed.
Answers:
username_1: cc @andris9 @felixhammerl |
tencentyun/cos-java-sdk-v5 | 309630940 | Title: How to support MultipartFile-type files?
Question:
username_0: How can files of type MultipartFile be supported?
Answers:
username_1: For multipart uploads, use the TransferManager class, which splits the file into chunks automatically. See the demo:
https://github.com/tencentyun/cos-java-sdk-v5/blob/master/src/main/java/com/qcloud/cos/demo/TransferManagerDemo.java
or the SDK documentation:
https://cloud.tencent.com/document/product/436/12263#.E9.AB.98.E7.BA.A7-api-.E6.96.87.E4.BB.B6.E4.B8.8A.E4.BC.A0(.E6.8E.A8.E8.8D.90)
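A rough sketch of the usual bridge from a Spring `MultipartFile` to the file-based high-level API: persist the upload stream to a temp file, then hand that file to `TransferManager`. The helper below is hypothetical and uses only the JDK; the commented `transferManager.upload(...)` call stands in for the SDK call shown in the demo:

```java
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class Main {
    // Hypothetical helper: copy an upload stream (e.g. MultipartFile#getInputStream())
    // to a temp file so it can be given to TransferManager, which chunks it automatically.
    static File toTempFile(InputStream in, String suffix) throws IOException {
        Path tmp = Files.createTempFile("cos-upload-", suffix);
        Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
        return tmp.toFile();
    }

    public static void main(String[] args) throws IOException {
        File f = toTempFile(new ByteArrayInputStream("hello".getBytes()), ".bin");
        // Upload upload = transferManager.upload(bucketName, key, f);
        // upload.waitForCompletion();
        System.out.println(f.length());
        f.delete();
    }
}
```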
Status: Issue closed
|
GhostWriters/DockSTARTer | 647808438 | Title: Add youtube-dl-server
Question:
username_0: # Feature request
**Is your feature request related to a problem? Please describe.**
N/A
**Describe the solution you'd like**
Add [youtube-dl-server](https://github.com/nbr23/youtube-dl-server)
**Describe alternatives you've considered**
N/A
**Additional context**
Simple Web and REST interface for downloading youtube videos onto a server.
Answers:
username_0: I worked on the `yml` files for it. Let me know if I got any wrong.
[youtube-dl-server.zip](https://github.com/GhostWriters/DockSTARTer/files/4849181/youtube-dl-server.zip)
username_0: Re-uploaded ZIP file without `-` and added labels.
Status: Issue closed
username_0: Closing issue and creating a PR. |
kids-first/kf-portal-ui | 857889377 | Title: Studies page: few more things to do for V1
Question:
username_0: - [ ] Add the text search bar at the top of the facets panel. Support search by study codes and study names
- [ ] add Showing 1 - 10 of 28 above the table
- [ ] Facets panel: implement styling as defined by Lucas in the spec
- [ ] Add tooltips on the data category column titles. Use the names showed in the facet Data Categories. SV is Structural Variation
- [ ] Replace facet name Data Categories by Data Category
- [ ] Make sure that facet names are the same as the ones displayed in the query bar. Currently, the facet Domain is in lower case in the query i.e. domain = Cancer
- [ ] Experimental strategy = No Data instead of __missing__ in the query bar
Status: Issue closed
Answers:
username_1: Closing, please see subtasks
username_1: - [ ] Add the text search bar at the top of the facets panel. Support search by study codes and study names #3014
- [ ] add Showing 1 - 10 of 28 above the table #3130
- [ ] Facets panel: implement styling as defined by Lucas in the spec #3131
- [ ] Add tooltips on the data category column titles. Use the names showed in the facet Data Categories. SV is Structural Variation #3125
- [ ] Replace facet name Data Categories by Data Category #3132
- [ ] Make sure that facet names are the same as the ones displayed in the query bar. Currently, the facet Domain is in lower case in the query i.e. domain = Cancer #3131
- [ ] Experimental strategy = No Data instead of __missing__ in the query bar #3135
- [ ] ~~Use short name instead of name from dataservice~~ (Moved to #3108)
- [ ] ~~Change the way family data is set : if there is family count = 0, then family data should be true. Otherwise, it should be false~~ (Moved to #3108) |
ZKjellberg/dark-souls-3-cheat-sheet | 153633460 | Title: Call Over gesture
Question:
username_0: The checklist says "Trade any possible item with Pickle-Pee, Pump-a-Rum crow to receive the Call Over gesture". This appears to be incorrect; I didn't get the gesture until my second trade. I traded in a Firebomb first, followed by a Homeward Bone.
A related issue: #28
Answers:
username_1: This gesture has confused me, as many report it differently. I originally had this as Homeward Bone, which both wikis state. Others state it can also be received with any of the carvings. I may revert this to Homeward Bone when I return from travel at the end of the week. If anyone can confirm the carvings, I'll add that as well.
Status: Issue closed
username_1: I've corrected the walkthrough and Gesture page to both state it is available by trading a Homeward bone.
Thanks for reporting. |
nao-pon/elFinder | 64213574 | Title: [Feature Wanted] Multiple thumbnails
Question:
username_0: Hi,
First of all, thanks for your fork and your support :+1:
I was wondering if it could be possible to have multiple thumbnails generation (with different settings) instead of only one.
For now, it's 1 image => 1 thumbnail; what about having 1 image => n thumbnails?
The thumbnail config could be parameterizable: as now, we could choose tmb size, tmb directory, tmb crop, etc...
On the JS side, the `getFile` callback would return an array of thumbnails (or an object whose keys are the tmb setting names and whose values are the tmb URLs) instead of a single `tmb` field.
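For illustration, a hypothetical `getFile` response under that scheme (all setting names and paths here are invented):

```json
{
  "name": "photo.jpg",
  "tmb": {
    "small": "/tmb/small/photo.jpg",
    "medium": "/tmb/medium/photo.jpg",
    "cropped": "/tmb/cropped/photo.jpg"
  }
}
```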
Could it be possible ? :yum:
Answers:
username_1: @username_0 It's an interesting idea. Let me think about it. :thought_balloon: |
achiu8/natandp | 101323509 | Title: nav bar not centered
Question:
username_0: @username_1 yo can you take a look at this?
<img width="1229" alt="screen shot 2015-08-16 at 7 12 53 pm" src="https://cloud.githubusercontent.com/assets/1305229/9296926/c2dd7dc0-444b-11e5-88c7-38e7056b2528.png"><issue_closed>
Status: Issue closed |
fourkitchens/scrummy-react-dom | 386957629 | Title: Reconnection Issues
Question:
username_0: Frequently, scrummy shows the reconnection error
https://github.com/fourkitchens/scrummy-react-dom/blob/master/src/actions/ScrummyAPI.js#L24
This _could_ be a server issue, but it may be a good idea to try upgrading https://www.npmjs.com/package/reconnecting-websocket first to see if that fixes it.
Answers:
username_0: Addressed by #41
Status: Issue closed
|
quilljs/quill | 226978170 | Title: Is there a guide for how to use with reactjs?
Question:
username_0: I just want a simple WYSIWYG editor with a minimal set of features.
I'd prefer not to use a third party reactjs integration project as it seems these projects are often not fully up to date and maintained.
Is there some instructions somewhere for a simple way to run Quill in ReactJS?
thanks
Answers:
username_1: Hi @username_0,
I don't know of such guide. Have you seen [zenoamaro/react-quill](https://github.com/zenoamaro/react-quill)? Would that be helpful to check an approach to use Quill with React?
Cheers.
Status: Issue closed
username_3: </div>
</div>
</div>
);
}
```
username_4: @username_3 do you happen to have a working example of the above code you could share?
username_3: </div>
</div>
</div>
);
}
}
```
username_5: Thank you username_3, but your code is a little hard to understand because it imports other custom-made components. Does anyone have code on GitHub that uses Quill with React, with or without Redux? |
TheTabletop/CityGate | 225199062 | Title: Invalid login info creates internal server
Question:
username_0: Description:
Sending an invalid uhid to the login/ endpoint causes an internal server error
Severity:
Low.
Comments:
It seems like a lot of the login functionality is still in the works. There does not seem to be any support for passwords.
Answers:
username_1: Has been fixed, login now returns information pertinent to accessing site
Status: Issue closed
|
commonplaceworld/udacity-hugo-theme | 335482069 | Title: deploy.sh hangs
Question:
username_0: I'm having trouble using deploy.sh (or raw git, either one). deploy.sh hangs when doing a push, never resolves.
Status: Issue closed
Answers:
username_0: I fixed/resolved this issue by re-cloning the repo and following the setup and deploy scripts. I think I had gotten something out of whack by using git add/commit/push a la carte.
username_1: Glad that you were able to fix it. I think that we should start working on branches to avoid being behind when we both work at the same time. What do you think?
username_0: Yes, I agree. Just so I remind myself how this should work, let me outline a process—please give me your feedback and suggestions! We could post such an outline as part of a suggested collaborative project process outline.
1. First, before doing new work, I create a branch (labeled appropriately for that work), and I do the work and test it on the branch.
2. When I'm ready to merge and deploy, I first do a fetch and pull any new commits by others to the master (to avoid collisions).
3. Then I merge my branch into the master and resolve any issues. Then commit and push.
4. Rinse and repeat.
username_1: I like this outline. One thing I keep having issues with is that the deploy script doesn't pull the changes for the public repo (commonplaceworld.github.io), which creates lots of conflicts, so instead I rerun the submodule script. I'll test adding it to a test repo and then add it here.
Regarding pull requests, I think that we can use branches once we have a stable theme. Then when we want to add a feature, we can open a branch, work on it, then open a pull request to have a more organized and visible workflow. Another workflow could be added for writing posts.
username_0: Actually, I was kind of thinking about how nice it would be to make that script smarter and take care of likely cases as you describe. That would be a big productivity help. Thanks, if you're able to do it!
username_1: I'm having trouble using deploy.sh
It's not pushing my changes. I've run "hugo" first.
When I try "git push" in the public dir it hangs.
I have made changes to about/_index.md and about/contact.md and can't push them.
I'm also confused about the message "On branch master
Your branch is ahead of 'origin/master' by 3 commits.
(use "git push" to publish your local commits)
nothing to commit, working tree clean"
This may have something to do with my ssh key which I may be having trouble with?

username_1: I added the pull command to the deploy script but I don't have something to test it with. Let me know if it still causes any issues. I'll keep this issue open for a week till we make sure it's not causing any problems.
username_0: Ok, I will deploy a blog post and a jupyter notebook in the next couple of days and see how it goes. **Question**: It sounds like the pull request would become part of the workflow, then, right? I'm good with that. I've always liked the idea that code gets talked about (at least just accepted if it's minor, or discussed otherwise to whatever degree necessary.)
username_1: Yes I think that is the best way to go. At least we can manage the conflicts that arise from working on the same files at the same time.
Status: Issue closed
|
jeremylong/DependencyCheck | 1074050041 | Title: How to run dependencycheck aggregate for gradle projects ?
Question:
username_0: Hi,
Please bear with me.
I am a complete beginner in using the dependencycheck.
I am planning to run the dependencycheck on my project and here we have a significant number of child gradle projects under one main project.
I am able to run the scan and it generates individual reports for each project.
Can some one point me as to how we can run the scan so that a single aggregate report will be generated for all set of projects ?
Thanks in advance
Answers:
username_1: Use the `dependencyCheckAggregate` task: https://username_1.github.io/DependencyCheck/dependency-check-gradle/configuration-aggregate.html
username_0: So, say if i have one base build.gradle file and multiple build.gradle files for individual sub projects
In base build.gradle file i will:
Add the dependency org.owasp:dependency-check-gradle:${project.version}
Add the line: check.dependsOn dependencyCheckAggregate
Define the dependencyCheck task.
Is it right ?
Because i don' see anywhere in the page where we are defining the dependencyCheckAggregate task like we are doing for dependencyCheckAnalyze
and how do we invoke the scan in that case ?
username_1: ```
./gradlew dependencyCheckAggregate
```
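To make that concrete, a minimal root `build.gradle` sketch (the plugin version and the `dependencyCheck` settings are illustrative only; check the plugin docs for current values). The `dependencyCheckAggregate` task is contributed by the plugin itself, so no task definition is needed:

```groovy
// Root build.gradle (illustrative sketch)
buildscript {
    repositories { mavenCentral() }
    dependencies {
        classpath 'org.owasp:dependency-check-gradle:6.5.0' // example version only
    }
}

apply plugin: 'org.owasp.dependencycheck'

// Optional configuration; aggregate scanning itself needs nothing extra.
dependencyCheck {
    failBuildOnCVSS = 7
}
```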
username_0: Thank you @username_1, I will check on this.
I had another question:
Is it possible to run the dependency-check command-line tool on Gradle projects? Will it show all the vulnerabilities, as it does when we run the dependencycheck Gradle task?
username_1: No - the CLI cannot be used to run against a gradle project. Some do use the ODC CLI on the `./build` directory - but honestly the results are not as accurate; also depending on what the project is you may need to add a command to download the dependencies (similar to `mvn dependency:copy-dependencies`). Using the CLI on a gradle build (or maven build) without downloading the dependencies can cause you to miss transitive dependencies (again, this entirely depends on the build itself though).
Times when you would not need to download the dependencies when using the CLI to scan a build directory would be if you are building an EAR, WAR, or a spring-boot JAR. In those cases all the transitive dependencies are bundled into the build artifact already. In almost every other case if using the CLI you need to download the transitive dependencies. For gradle, I believe this requires adding a task - so I would highly recommend just using the gradle plugin as the results will be more accurate.
username_0: Thanks a lot @username_1
Wishing you a very Happy New year..!
Status: Issue closed
|
sackerman-usgs/UAS_processing | 371247411 | Title: Improve file creation and output structure
Question:
username_0: Currently, the processing ends with a new image folder that contains 4 versions of the photos: the original filenames geotagged and the originals; and the new filenames with the WHSC exif tags and the originals.
Answers:
username_0: Addressed this in commit #4
Status: Issue closed
|
ThreeTen/threetenbp | 223435085 | Title: DateTimeFormatter not properly formatting month and day of week string
Question:
username_0: I'm using ThreeTen on Android via [ThreeTenABP](https://github.com/JakeWharton/ThreeTenABP). I'm trying to format a ZonedDateTime as `Tuesday, March 19`. But I'm only ending up with `Tue, 3 19`. I believe this is different than #50. It's happening on all devices, and the actual date is correct, just the formatting is wrong.
I'm pretty sure I'm following the [formatting string guidelines](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html) correctly.
```
DateTimeFormatter dateFormat = DateTimeFormatter.ofPattern("EEEE, MMMM d");
String dateString = zonedDateTime.format(dateFormat);
```
The bizarre part is that when I unit test this code it works fine, but running on an actual Android device truncates the day of the week and uses an integer for the month.
Answers:
username_1: I'm afraid as I don't use the library on Android I don't have anything to add. Ultimately, this will only happen if it can't find the text for the relevant value.
username_0: Can you clarify what you mean by "this will only happen if it can't find the text for the relevant value"? I can try testing different values/configurations.
username_1: The code to find the text is [here](https://github.com/ThreeTen/threetenbp/blob/master/src/main/java/org/threeten/bp/format/SimpleDateTimeTextProvider.java#L66). If text is not found, [the numeric value is output as a fallback](https://github.com/ThreeTen/threetenbp/blob/master/src/main/java/org/threeten/bp/format/DateTimeFormatterBuilder.java#L2852).
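As a point of comparison, the expected behaviour can be reproduced with the JDK's `java.time` (the `org.threeten.bp` API has the same shape); pinning an explicit `Locale` is a reasonable hedge, since the numeric output described above is the fallback when localized text cannot be found:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class Main {
    public static void main(String[] args) {
        LocalDate date = LocalDate.of(2019, 3, 19);
        // Explicit locale so the formatter can resolve day/month names
        // instead of falling back to numeric output.
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("EEEE, MMMM d", Locale.US);
        System.out.println(date.format(fmt));
    }
}
```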
username_2: @username_0, I've made a test for this in https://github.com/username_2/lazythreetenbp/commit/6866656607ed4c91dc32e9b06fc8351f76543ca8
Could you build it and run all Instrumentation tests on devices on which you have that problem?
Status: Issue closed
|
dotnet/arcade | 299059151 | Title: [Feed package] Turn off publishing of Feed package from BuildTools
Question:
username_0: Once Feed package is publishing from Arcade, disable publishing from BuildTools. Note, there is still work to be done (official builds, signing, etc...) before we're ready for that.
Answers:
username_1: FYI @jcagme
username_1: @username_0 you mean each repo will need to start consuming the Feed package in Arcade instead of the Feed package in Buildtools?
username_1: For now, we should start by adding documentation in BuildTools to point people to the code over here.
Once repos that are using Buildtools start consuming the Arcade version, then we can disable the package publish there
username_2: why can we not disable package publish from buildtools now? Disabling the publish doesn't delete already published versions.
username_1: Until repos start consuming the Arcade version, they might need to make fixes or modifications to the package in BuildTools so they get the change.
I prefer to wait until they no longer actively depend on it.
Status: Issue closed
username_3: I'm going to let @weshaggard make the call regarding buildtools |
freefq/free | 849822235 | Title: Q
Question:
username_0: If I only know the node name, how can I find its address? (tearful-cat-face.jpg
Answers:
username_1: I'm not sure what address you're referring to; please write a bit more and describe it clearly.
username_0: !
Emm, my description was sloppy... sorry.
Here's what I mean:
(screenshot)
For example, say I know a node called "美国 16" ("US 16"),
and I want to find its URL so that I can import it.
(I reinstalled earlier and lost a node that I thought worked well, and I only know its name and want to get it back.)
Can this be done? The links provided all seem to be random, so I couldn't manage it.
(That said, after fumbling around for a few hours I did get it back; I'd still like to ask whether this is possible when you only know the node name.)
Status: Issue closed
username_0: If I only know the node name, how can I find its address? (tearful-cat-face.jpg
username_1: ...These nodes are all free nodes collected from around the internet; they are generally valid for two or three days at most.
The names are generated from the physical location of the node's IP plus a random number.
username_0: I see, thanks for the explanation! ୧( ⁼̴̶̤̀ω⁼̴̶̤́ )૭ |
portainer/portainer | 787202776 | Title: Add feature to do bulk image pulls from image list view
Question:
username_0: <!--
Thanks for opening a feature request for Portainer !
Do you need help or have a question? Come chat with us on Slack https://portainer.slack.com/
Before opening a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
Also, be sure to check our FAQ and documentation first: https://documentation.portainer.io/
-->
**Describe the solution you'd like**
It would be really awesome to be able to go to the Images tab/view and, from the table of images, select multiple images, then click a `Pull` button to issue a pull command to all selected images.

**Describe alternatives you've considered**
Currently what the only way to pull images that I'm aware of is to click the image, and from the image details page then click the `Pull` button. So I have to do this individually for each image.
 |
Azure/azure-cli-extensions | 858476175 | Title: Prompt new version when there is a new version of the running extension
Question:
username_0: - If the issue is to do with Azure CLI 2.0 in-particular, create an issue here at [Azure/azure-cli](https://github.com/Azure/azure-cli/issues)
### Extension name (the extension in question)
### Description of issue (in as much detail as possible)
The author of the quantum extension asked whether there is a way to prompt users to update the extension version.
This issue is a feature request.
My preliminary thought is to prompt about a new version of the running extension at a frequency of at most once a week.
When users run
```
az quantum ...
```
If there is a new version, give a warning like
```
A new version *** of quantum extension is available. Current version is ***. Run *** to install latest version.
```
-----
Answers:
username_1: Was this a generic feature request besides quantum?
username_0: It is a generic feature.
username_2: CLI support: https://github.com/Azure/azure-cli/pull/14803/files#diff-818d51706b268b283c41fef3ee4436df9356c57e657c6e0f0859b1b9445d3e04R164 |
BuyAware/buyaware.org | 160011135 | Title: Make about page
Question:
username_0: This page should be accessible from the navbar. The navbar will be modified anyway. It'll probably be included in the base.html which is the parent template to home.html.
References:
[Plain Bootstrap about page template](http://startbootstrap.com/template-overviews/round-about/)
[Random cool about page of a project I know](http://octanis.org/about/)<issue_closed>
Status: Issue closed |
opendatacube/datacube-core | 973197613 | Title: Datacube breaks geopandas s3 reading
Question:
username_0: Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/env/lib/python3.6/site-packages/geopandas/io/file.py", line 129, in _read_file
req = _urlopen(filename)
File "/usr/lib/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.6/urllib/request.py", line 526, in open
response = self._open(req, data)
File "/usr/lib/python3.6/urllib/request.py", line 549, in _open
'unknown_open', req)
File "/usr/lib/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/usr/lib/python3.6/urllib/request.py", line 1395, in unknown_open
raise URLError('unknown url type: %s' % type)
urllib.error.URLError: <urlopen error unknown url type: s3>
```
Note that without the datacube import, this code works fine.
### Environment information
1.8.4.dev81+g80d466a2 on Sandbox
Answers:
username_1: My guess is it's because of this:
https://github.com/opendatacube/datacube-core/blob/3a49f78ead159da505cd78803d6710e7762b3a7e/datacube/utils/uris.py#L219-L238
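A toy illustration of that interaction (neither datacube nor geopandas code, just the mechanism as I read it): geopandas builds its set of URL-like schemes from `urllib.parse`'s global scheme lists at import time, and the linked datacube code registers `s3` into those same lists on import:

```python
from urllib.parse import uses_netloc, uses_params, uses_relative

def geopandas_like_valid_urls():
    # geopandas computes roughly this set when it is imported
    return set(uses_relative + uses_netloc + uses_params) - {""}

print("s3" in geopandas_like_valid_urls())  # False before registration

# What importing datacube effectively does (register_scheme in utils/uris.py):
for lst in (uses_netloc, uses_relative, uses_params):
    lst.append("s3")

# If geopandas is imported *after* this, s3:// paths look like URLs to it,
# and it hands them to urlopen(), which cannot open the s3 scheme.
print("s3" in geopandas_like_valid_urls())  # True
```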
@username_0 note that importing geopandas BEFORE importing datacube would allow you to keep using `.read_file`, so that's a work around. But it seems like an error in geopandas to be honest. It somehow decides to use normal http for s3 access
username_1: A dodgy fix, when the import order cannot be controlled, is to do this before attempting to read s3 resources:
```python
gpd.io.file._VALID_URLS.discard("s3")
```
username_1: https://github.com/geopandas/geopandas/issues/2068
username_0: Thanks for following this up with Geopandas!
username_2: In a strange coincidence, I just encountered this problem today, and can verify that it is still an issue.
w3c/csswg-drafts | 168006448 | Title: [css-custom-properties] Document :host, :host-context styling, scoping
Question:
username_0: https://drafts.csswg.org/css-variables/
http://www.html5rocks.com/en/tutorials/webcomponents/shadowdom-201/#toc-style-host
Answers:
username_1: These are all defined in the [Scoping spec](https://drafts.csswg.org/css-scoping/). If you ever have trouble finding something, the [CSS Index](https://drafts.csswg.org/indexes/) can often help with that; in particular, a Ctrl+F for ":host" would lead you right to where it's defined. ^_^
Status: Issue closed
|
sveltejs/svelte | 391423730 | Title: Preserve script/style attributes in preprocess
Question:
username_0: In v3 (and in some edge cases in v2) it's important that `<script>` attributes are preserved by any `preprocess` steps.
Whether or not the preprocessor should be able to control the attributes is an open question. One possibility: if it returns `{ attributes: [...], ... }` then they replace whatever's there initially (shape of that array's contents TBD), otherwise existing attributes are left alone.
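For illustration, a hypothetical preprocessor result under that possibility (the attribute entry shape is invented; the issue leaves it TBD):

```javascript
// Hypothetical shapes only; nothing here is final API.
const replaced = {
  code: 'let count = 0;',
  attributes: [{ name: 'lang', value: 'ts' }], // present: replaces existing attrs
};
const untouched = {
  code: 'let count = 0;', // no `attributes` key: existing attrs left alone
};

console.log('attributes' in replaced, 'attributes' in untouched);
```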
Status: Issue closed
Answers:
username_0: closed via #1954 |
DeepRegNet/DeepReg | 670952487 | Title: Bug when using multiple test folders
Question:
username_0: # Issue description
An incorrect "sample is repeated" error is raised when testing on multi-folder data with the paired dataset loader.
## Type of Issue
Please delete options that are not relevant.
- [ ] Bug report
- [ ] Test request
#### For bug report or feature request, steps to reproduce the issue, including (but not limited to):
1. OS
2. Environment to reproduce
3. What commands run / sample script used:
on branch 12, run the demo, when using the predict (see readme)
Traceback (most recent call last):
File "/home/yipenghu/miniconda3/envs/deepreg/bin/predict", line 33, in <module>
sys.exit(load_entry_point('deepreg', 'console_scripts', 'predict')())
File "/home/yipenghu/git/DeepReg/deepreg/predict.py", line 352, in main
args.config_path,
File "/home/yipenghu/git/DeepReg/deepreg/predict.py", line 264, in predict
save_dir=log_dir + "/test",
File "/home/yipenghu/git/DeepReg/deepreg/predict.py", line 136, in predict_on_dataset
"Sample is repeated, maybe the dataset has been repeated."
ValueError: Sample is repeated, maybe the dataset has been repeated.
#### Additional details / screenshot
Answers:
username_1: so the reason is that:
we only check for each folder, the id of images (indexes) are unique, but we didn't check across multiple folders, therefore, we need to add one more index for all data loaders to indicate the folder.
username_0: is it better to just accumulate one single index?
username_1: What do you mean accumulate, for instance, with grouped data, the ID of a pair is
- moving group index
- moving image index
- fixed group index
- fixed image index
- label index
so it would be hard to increase the number on just one of them.
username_1: Oops, this bug might be a bit more complicated:
if the training dataset has multiple folders, then for grouped or unpaired data we should mix between folders,
but for now one dataset is created per folder and then they are concatenated together.
I need to think a bit more about this.
Status: Issue closed
|
nicolafranchini/VenoBox | 707868711 | Title: mp4 video isn't supported !!!
Question:
username_0: It only supports YouTube or Vimeo videos.
But I need support for local/HTML5 video.
Could you please add this option as well?
Answers:
username_1: See https://github.com/username_2/VenoBox/issues/104#issuecomment-716195983
Status: Issue closed
|
ndharasz/onthefly-ios | 210255824 | Title: Delete Passenger Functionality
Question:
username_0: ## Story/Task Details
- [ ] As a user, I need to be able to drag a user into a "trash can" to delete them from a particular flight
- [ ] Visual feedback to let me know I'm about to delete them (e.g. red cell)
## Acceptance Scenarios
- Given: A user drags a passenger into a trash can section of the flight layout scene
- When: They release the user after the cell turns red
- Then: The info for the passenger is deleted and set to an empty configuration<issue_closed>
Status: Issue closed |
nidi3/code-assert | 301739139 | Title: [Question] Why I got these forbidden dependencies?
Question:
username_0: Hi, I am trying to write some rules for my project, but I get some dependency violations that I don't understand:
The project is at https://github.com/username_0/games-monolith and the test is at https://github.com/username_0/games-monolith/blob/master/gamepage/impl/src/test/java/org/username_0/games/game/DependencyTest.java
When it runs, I get:
```
NOT_EXISTING org.username_0.games.details.impl There is a rule for this element, but it has not been found in the code.
NOT_EXISTING org.username_0.games.reviews.impl There is a rule for this element, but it has not been found in the code.
UNDEFINED io.reactivex There is no rule given for this element.
DENIED org.username_0.games.game -> This dependency is forbidden.
io.reactivex (by org.username_0.games.game.GameResource, org.username_0.games.game.GameResource$1, org.username_0.games.game.GameResource$2)
DENIED org.username_0.games.game.impl -> This dependency is forbidden.
io.reactivex (by org.username_0.games.game.impl.SchedulerProducer)
org.username_0.games.game (by org.username_0.games.game.impl.TemplateProducer)
```
But why? I am not sure, since `io.reactivex`, for example, is added as an exclusion.
Thank you very much.
Answers:
username_1: Closing by https://twitter.com/alexsotob/status/969679734476025856
Status: Issue closed
|
kagkarlsson/db-scheduler | 565804115 | Title: Created two tasks but only one is stored in table
Question:
username_0: I use Spring to instantiate the `Scheduler with:
```java
@Bean
public Scheduler scheduler() {
List<Task<?>> tasks = schedulerTasks(); // this method returns two tasks
Scheduler scheduler = Scheduler
.create(dataSource(), tasks)
.tableName("scheduled_task")
.threads(1)
.build();
return scheduler;
}
```
In the `scheduled_task` database table, only the first task is present.
I tried to debug the code but without success. I cannot identify why the second task is not persisted/triggered.
Answers:
username_0: I forgot to specify that I use `Java 11` and DbScheduler `6.2`.
username_1: Hi, sorry for the late reply!
If you have recurring tasks that you want to start, you have to add them via the builder method `startTasks`. Or are these one-time tasks?
username_1: `startTasks` makes sure the initial execution is stored in the table.
username_0: I changed the code according to your suggestions, but I receive a compile error.
I use Java 11.
username_1: Your list is a bit too generic, it needs to match the `startTasks` signature:
`<T extends Task<?> & OnStartup>`
Change it to `List<RecurringTask>` if you have only recurring ones.
Status: Issue closed
|
dillingham/nova-items-field | 449133464 | Title: Create button overflow issue
Question:
username_0: Text may overflow from create button, but I'm not sure how it should be fixed.

Otherwise the package works great!
Answers:
username_1: Open to a PR
Status: Issue closed
|
quartznet/quartznet | 717921238 | Title: Quartz as a Service
Question:
username_0: Hi
I can schedule jobs to both ramstore and database, but am having problems using Quartz as a service, the install of the service seems fine, attachments are testing the service when installed with service running (issues) not running seem ok I am new to Quartz so any help would be appreciated


TIA
Answers:
username_1: So I presume that you need the remoting functionality to invoke the scheduler remotely. This might be caused by some permission requirements that the principal running your Windows service is missing. Have you tried running the service with different permissions (a different user), like Network Service?
If you don't need the remoting bits, you can just remove them from your configuration.
Status: Issue closed
|
wemake-services/wemake-python-styleguide | 560203040 | Title: Support `AssignExpr` where possible
Question:
username_0: We use a lot of assignment-related checks in our code. We fetch the number of variables in a function, calculate name overlap, etc. `python3.8` introduced a new way to assign things with `:=`.
And while I dislike the general approach, we need to be sure that everything works correctly even when users have these structures inside their code.
However, we still do not allow using `:=` inside the code, with this rule: #34
This is only for better user experience.
Blocked by:
- #1136
- #1135
Answers:
username_0: We also don't currently know where this should be added. As a first step, we can find these places and create a list of them.
username_0: One interesting case:
```python
if x := 1:
print(x)
```
Status: Issue closed
|
pnp/cli-microsoft365 | 690972640 | Title: [Bug] Large file downloading from share point raise RangeError
Question:
username_0: ### Category
- [x] Enhancement
- [x] Bug
- [x] Question
### Version
Please specify what version of the library you are using: [[email protected]]
Please specify what version(s) of SharePoint you are targeting: [SharePoint online]
### Expected / Desired Behavior / Question
Download the zip file (about 4.5GB) into the specified path location.
### Observed Behavior
```
buffer.js:224
throw err;
^
RangeError: "size" argument must not be larger than 2147483647
at Function.Buffer.allocUnsafe (buffer.js:253:3)
at Function.Buffer.concat (buffer.js:441:23)
at Request.<anonymous> (/usr/local/lib/node_modules/@pnp/office365-cli/node_modules/request/request.js:1133:30)
at emitOne (events.js:116:13)
at Request.emit (events.js:211:7)
at IncomingMessage.<anonymous> (/usr/local/lib/node_modules/@pnp/office365-cli/node_modules/request/request.js:1083:12)
at Object.onceWrapper (events.js:313:30)
at emitNone (events.js:111:20)
at IncomingMessage.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1064:12)
```
### Steps to Reproduce
Hi guys, first time raising an issue here. Hope I'm doing it correctly.
I was trying to make a pipeline for downloading a large zip data file from SharePoint Online.
I tested with smaller zip files and it worked fine.
But when I tried a larger 4.5 GB zip file, it failed.
I searched online and it seems this error comes from the JS Buffer, which has a size limit of about 2.14 GB.
I'm wondering: will there be a fix soon? Or maybe some method to bypass this 2 GB limitation?
Thank you very much!
(related stackoverflow I found: https://stackoverflow.com/questions/44994767/javascript-read-large-files-failed)
Answers:
username_1: We'll have a look at it asap. Thank you for reporting @username_0 and sorry for the trouble
username_1: @username_0 could you give us the exact command that you're calling so that we have clear repro steps?
username_0: Hi!
Yes, the terminal command is this (calling from Python):
```
cmd = f"spo file get --webUrl '{self.__args.office365_url}' --url '{target_path}' --asFile --path '{download_path}'"
```
username_1: @username_2 didn't you have some big files we could test this issue with?
username_2: @username_1 Yes, I do have some test scenarios for the move issue we are having. Tested with 10 MB, 25 MB, 100 MB, 1 GB. Those seem to download perfectly with version 3.0.0 of the CLI on Windows. But after reading the issue it might have to do with the fs.readFileSync which has a 2 GB buffer limit in Node x64. I will try and find a 5 GB file to test it with later today.
username_2: A little more detail; using `m365 spo file get --webUrl 'https://tenant.sharepoint.com/sites/crisismanagement' --url '/sites/crisismanagement/Docs2/Test/5GB.db' --asFile --path 'C:/temp/5GB.db'`
I had the following experience:
- the node process downloads at about 66 MB/s and its memory grows accordingly to around *5.2 GB* (given the 8 GB limit I started killing off some other processes like Teams)
- At 5.2GB memory consumption it started writing to disk
- It did fail with the following error:
```
internal/validators.js:85
throw new ERR_OUT_OF_RANGE(name, `>= ${min} && <= ${max}`, value);
^
RangeError [ERR_OUT_OF_RANGE]: The value of "length" is out of range. It must be >= 0 && <= 2147483647. Received 5_368_709_120
at Function.concat (buffer.js:540:5)
at Request.<anonymous> (C:\Git\pnp\office365-cli\node_modules\request\request.js:1133:30)
at Request.emit (events.js:223:5)
at IncomingMessage.<anonymous> (C:\Git\pnp\office365-cli\node_modules\request\request.js:1083:12)
at Object.onceWrapper (events.js:312:28)
at IncomingMessage.emit (events.js:228:7)
at endReadableNT (_stream_readable.js:1185:12)
at processTicksAndRejections (internal/process/task_queues.js:81:21) {
code: 'ERR_OUT_OF_RANGE'
}
```
Going through the code of the request.js file, it seems to fail when putting the response from the buffer into the body that it returns. The only thing I can come up with is to use a file stream instead of the body response, using the pipe option `request('http://google.com/doodle.png').pipe(fs.createWriteStream('doodle.png'))`, but I am not sure if that would work.
username_1: Nice findings! Have you tried using the `.pipe()` you mentioned? Does it work as intended?
username_2: Not yet, Will try today or Monday!
username_2: So minor update; managed to test some more ;), but not quite there yet. We are using a custom implementation for all our requests in the `request.ts` file. I hacked in a `.pipe()` there and now I **do get the file written to disk** but somewhere the response is still cast, or transformed, as a string. I still get the same error (but now only after the file is saved to disk). Debugging is a bit hard since it does require at least a 2.1 GB file 😊. Will try to see if I can come up with a better approach on Monday.
username_0: Hi @username_2 ! How did the alternative approach go?
I'm happy as far as we can have the file written to disk!
username_2: @username_0 it looks like I have managed to solve the issue in [this repo](https://github.com/username_2/cli-microsoft365/tree/feature/bug-1698). It took me a bit more time than hoped for. As it turns out, the `request-promise-native` library we are using is doing something with the body: it will always try to translate the body to a string and is loading everything into memory. So after giving up on fixing that I did include the `request` library, as that is a dependency of `request-promise-native`, and lo and behold things are making more sense now.
The whole thing is also way better on memory consumption, as it is no longer trying to download the full 2.5 GB of my test file into memory. Would you have a chance to test this repo against your scenario? I did manage to test with various file sizes (from 10 MB to 5 GB) but before creating a PR and cleaning up a bit I would love some feedback! Thanks in advance.
username_0: Hi @username_2 ! Silly question... how do I test this repo against my scenario?
Shall I clone the repo and paste over some already existing files? (I'm not sure how to specify versions via installation of yarn/npm)
username_2: @username_0 no problem! The quickest way is to clone my repo and follow the steps in our [guide](https://github.com/pnp/cli-microsoft365/wiki/Minimal-Path-to-Awesome). Let me know if that helps!
username_0: @username_2 Hmm...! I think I'm doing something wrong. I got two errors when running `npm test`
Could you help me with that please!
What I did:
1. started docker without `RUN npm i -g @pnp/office365-cli` in my Dockerfile
(I ran the pip list to ensure office365-cli
2. inside the docker terminal, I clone the git repo, and `cd cli-microsoft365`
3. git checkout from master to `git checkout feature/bug-1796`
4. running the install from the guide:
```
npm i
npm run build
npm test
```
The error message from npm test:
```
8342 passing (10s)
2 failing
1) spo file get
writeFile called when option --asFile is specified:
AssertionError [ERR_ASSERTION]: false == true
+ expected - actual
-false
+true
at cmdInstance.action (dist/m365/spo/commands/file/file-get.spec.js:348:17)
at SpoFileGetCommand.handleRejectedODataJsonPromise (dist/Command.js:242:21)
at request_1.default.getLargeFile.then (dist/m365/spo/commands/file/file-get.js:70:30)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
2) spo file get
writeFile called when option --asFile is specified (debug):
AssertionError [ERR_ASSERTION]: false == true
+ expected - actual
-false
+true
at cmdInstance.action (dist/m365/spo/commands/file/file-get.spec.js:379:17)
at SpoFileGetCommand.handleRejectedODataJsonPromise (dist/Command.js:242:21)
at request_1.default.getLargeFile.then (dist/m365/spo/commands/file/file-get.js:70:30)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
```
Is it due to a permission issue?
username_2: Oh that's my bad! I never ran the tests; those are not updated yet. Could you skip the tests for now? After `npm run build`, just do the `npm link`. The tests are failing because I have changed the logic of saving with the `--asFile` parameter. I will go through the tests asap, but the code itself is correct, so you can ignore the tests for the first run :). Sorry for the inconvenience!
username_0: Hi @username_2 ! I was able to get the package installed in docker.
But when I'm using the same script that worked before, it gives this login error.
Shall I add a `filename` attribute to the `args` when I call `client = O365Client(args)`?
```
Traceback (most recent call last):
File "download_data.py", line 67, in <module>
client.login()
File "/code/util/o365_cli.py", line 126, in login
r = self.run(input=b"status")
File "/code/util/o365_cli.py", line 115, in run
p = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
File "/usr/lib/python3.6/subprocess.py", line 729, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.6/subprocess.py", line 1278, in _execute_child
executable = os.fsencode(executable)
File "/usr/lib/python3.6/os.py", line 800, in fsencode
filename = fspath(filename) # Does type-checking of `filename`.
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
username_3: Will the next package release include this fix?
username_1: As soon as @username_2 submits a PR, we'll try to get it in as quickly as possible.
username_0: Hi @username_1 @username_2 ! Any news on this fix? Thank you!
username_2: Hi @username_0, I have a few days off in the run-up to Ignite; I hope to work on it at the end of the week based on your replies. Sorry for the delay! Been busy remodeling our home.
username_3: :) Hope that the remodeling is going smoothly!
Is there an ETA for the next release that includes the bugfix? Or would it be better for us to find a workaround in the meantime, @username_2 ?
username_2: hey @username_0, the current implementation requires that you have a filename specified if you are trying to download large files. We are thinking of making the path required if you are using `--asFile`; how would you feel about that? If we can get agreement on that topic, I am sure I can wrap up the PR somewhere this week :).
username_1: @username_2 if I look at the current documentation, it already states that `path` is required when using `asFile`
username_2: Darn, you are absolutely right! Will update tests and do PR (have to do some rework due to our latest changes but will get them in asap)
Status: Issue closed
username_1: @username_0 we have just merged the fix for this issue and it's available in the latest beta version v3.2. You can try it by installing the beta using: `npm i -g @pnp/cli-microsoft365@next`. Once again, thank you for reporting! 👏
username_4: Great work on this @username_2 👏🏻🚀❤️ |
lots0logs/gh-action-get-changed-files | 568634053 | Title: TypeError: Cannot read property 'filter' of undefined
Question:
username_0: Hey guys,
seems to be the same error as in #2, just a different file.
```
/home/runner/work/_actions/username_3/gh-action-get-changed-files/2.0.6/dist/main.js:3567
const commits = context.payload.commits.filter(c => c.distinct);
^
TypeError: Cannot read property 'filter' of undefined
```
Full runthrough with error (just trying your action out here ;P): https://github.com/python-telegram-bot/python-telegram-bot/pull/1786/checks?check_run_id=459027793#step:6:1
Your action is implemented [here](https://github.com/python-telegram-bot/python-telegram-bot/blob/type_hinting_master/.github/workflows/mypy.yml#L27), should be correct.
Answers:
username_1: I got this same error as well when attempting to use this Action.
username_2: Hi is there any update on this? Also got the same error on version 2.0.6. Is there a version that does work in the meantime that we can use?
username_3: Sorry for not noticing this issue before now. I can confirm that v2.0.5 works without issue so long as it's used on repos owned by an organization. Not sure when I will have time to look into this new error but I will do it asap.
username_4: Can you explain what you mean by this please? This is the repo I'm working on, definitely owned by an org https://github.com/alan-turing-institute/the-turing-way
username_0: Maybe he meant PRs from inside the organization
username_4: Hi @username_0! I've branched, not forked, and I'm a member of that org ☺️
username_0: Destroying my theories in an instant, pfff :(
username_3: What I meant by that was that the repo is owned by an org account and not a personal user account.
username_0: My PR situation is also owned by a company/collective
username_3: I'll have to dig deeper into this. Not sure off hand what is causing the failure.
username_5: Hi,
this happens when the action is used in a pull request (at least for me) and is resolved when using it on push. I assume the payload used in https://github.com/username_3/gh-action-get-changed-files/blob/master/main.js#L6 isn't available in the same way for pull requests.
username_3: Thanks. That is helpful!
username_6: Happens for me as well and makes this action useless for me - which is a shame, because it's the only action to get a list of changed files that actually looks decent.
username_3: I'm happy to review a pull request from anyone who has the time to add support for events other than `push`. I just don't have time right now. Especially since we only use this action on the `push` event at my job so it works fine.
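A starting point for such a pull request could be a guard on the failing line. The helper below is hypothetical; properly supporting `pull_request` events would also need an API call to list the PR's files:

```javascript
// context.payload.commits only exists for `push` events; on
// `pull_request` events it is undefined, which is what caused
// "TypeError: Cannot read property 'filter' of undefined".
function distinctCommits(payload) {
  return (payload.commits || []).filter(c => c.distinct);
}

const pushPayload = { commits: [{ id: 'a', distinct: true }, { id: 'b', distinct: false }] };
const prPayload = { pull_request: { number: 1 } };

console.log(distinctCommits(pushPayload).length); // 1
console.log(distinctCommits(prPayload).length);   // 0
```

With a guard like this the action would at least fail gracefully on non-push events instead of crashing.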
username_7: I encountered the same problem, this action does not work for pull requests. For what it's worth, https://github.com/marketplace/actions/file-changes-action has almost identical functionality and works fine for me on pull requests.
Status: Issue closed
|
facebook/redex | 197175724 | Title: can't build integ tests on mac osx
Question:
username_0: on mac osx, I can't link the 4 integ tests without hacking.
I think we need to link some individual objects in opt.
I made a patch here. does it look reasonable?
https://github.com/username_0/redex/commit/0a253564d849cceb7fbdd53a6cbe0f06097bebe2 |
github-vet/rangeloop-pointer-findings | 774150910 | Title: vallard/drone-kube: vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go; 42 LoC
Question:
username_0: [Click here to see the code in its original context.](https://github.com/vallard/drone-kube/blob/80daccfd417f170d7b48ac32cb4d4a84da0a19f7/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go#L2751-L2792)
<details>
<summary>Click here to show the 42 line(s) of Go which triggered the analyzer.</summary>
```go
for _, node := range test.nodes {
var podsOnNode []*v1.Pod
for _, pod := range test.pods {
if pod.Spec.NodeName == node.Name {
podsOnNode = append(podsOnNode, pod)
}
}
testFit := PodAffinityChecker{
info: nodeListInfo,
podLister: algorithm.FakePodLister(test.pods),
failureDomains: priorityutil.Topologies{DefaultKeys: strings.Split(v1.DefaultFailureDomains, ",")},
}
nodeInfo := schedulercache.NewNodeInfo(podsOnNode...)
nodeInfo.SetNode(&node)
nodeInfoMap := map[string]*schedulercache.NodeInfo{node.Name: nodeInfo}
fits, reasons, err := testFit.InterPodAffinityMatches(test.pod, PredicateMetadata(test.pod, nodeInfoMap), nodeInfo)
if err != nil {
t.Errorf("%s: unexpected error %v", test.test, err)
}
if !fits && !reflect.DeepEqual(reasons, affinityExpectedFailureReasons) {
t.Errorf("%s: unexpected failure reasons: %v", test.test, reasons)
}
affinity := test.pod.Spec.Affinity
if affinity != nil && affinity.NodeAffinity != nil {
nodeInfo := schedulercache.NewNodeInfo()
nodeInfo.SetNode(&node)
nodeInfoMap := map[string]*schedulercache.NodeInfo{node.Name: nodeInfo}
fits2, reasons, err := PodSelectorMatches(test.pod, PredicateMetadata(test.pod, nodeInfoMap), nodeInfo)
if err != nil {
t.Errorf("%s: unexpected error: %v", test.test, err)
}
if !fits2 && !reflect.DeepEqual(reasons, selectorExpectedFailureReasons) {
t.Errorf("%s: unexpected failure reasons: %v, want: %v", test.test, reasons, selectorExpectedFailureReasons)
}
fits = fits && fits2
}
if fits != test.fits[node.Name] {
t.Errorf("%s: expected %v for %s got %v", test.test, test.fits[node.Name], node.Name, fits)
}
}
```
</details>
<details>
<summary>Click here to show extra information the analyzer produced.</summary>
```
The following paths through the callgraph could lead to a goroutine:
(SetNode, 1) -> (Value, 0) -> (String, 0) -> (StatusText, 1) -> (Run, 1) -> (startHTTP, 1) -> (SetBlockProfileRate, 1) -> (CreateFromKeys, 3) -> (on100, 0)
(SetNode, 1) -> (GetTaintsFromNodeAnnotations, 1) -> (Unmarshal, 2) -> (convertMapNumbers, 1) -> (Unmarshal, 1) -> (Local, 0) -> (Unscaled, 0) -> (newIntValue, 2) -> (getChallenges, 1)
[Truncated]
(SetNode, 1) -> (MilliValue, 0) -> (ScaledValue, 1) -> (infScale, 0) -> (, 0) -> (NewGauge, 1) -> (NewCloseWaitServer, 0) -> (CreateFromKeys, 3) -> (scheduleFrameWrite, 0) -> (take, 0)
(SetNode, 1) -> (Value, 0) -> (String, 0) -> (StatusText, 1) -> (Run, 1)
(SetNode, 1) -> (MilliValue, 0) -> (ScaledValue, 1) -> (infScale, 0) -> (, 0) -> (NewGauge, 1) -> (NewCloseWaitServer, 0) -> (CreateFromKeys, 3) -> (PortForward, 3)
(SetNode, 1) -> (Value, 0) -> (String, 0) -> (StatusText, 1) -> (init, 1) -> (handleCmdResponse, 2) -> (Errorf, 1) -> (newQueueMetrics, 1) -> (Init, 4)
(SetNode, 1) -> (Value, 0) -> (String, 0) -> (StatusText, 1) -> (Build, 0) -> (finishRunning, 1) -> (Kill, 2) -> (kill, 3) -> (DeleteCollection, 4) -> (setListSelfLink, 3)
(SetNode, 1) -> (GetTaintsFromNodeAnnotations, 1) -> (Unmarshal, 2) -> (convertMapNumbers, 1) -> (, 0) -> (NewGauge, 1) -> (NewCloseWaitServer, 0) -> (CreateFromKeys, 3) -> (Dial, 1) -> (NewConnection, 1)
(SetNode, 1) -> (Value, 0) -> (Unlock, 0) -> (Flock, 2) -> (New, 1)
(SetNode, 1) -> (Value, 0) -> (String, 0) -> (StatusText, 1) -> (New, 2) -> (run, 0) -> (waitForTerminationSignal, 0) -> (read, 2) -> (PortForward, 3)
(SetNode, 1) -> (GetTaintsFromNodeAnnotations, 1) -> (Unmarshal, 2) -> (convertMapNumbers, 1) -> (, 0) -> (NewGauge, 1) -> (NewCloseWaitServer, 0) -> (CreateFromKeys, 3) -> (forkCall, 5)
(SetNode, 1) -> (GetTaintsFromNodeAnnotations, 1) -> (Unmarshal, 2) -> (convertMapNumbers, 1) -> (Reader, 1) -> (init, 2) -> (setBytes, 1) -> (DialTimeout, 4) -> (newMux, 1)
(SetNode, 1) -> (GetTaintsFromNodeAnnotations, 1) -> (Unmarshal, 2) -> (convertMapNumbers, 1) -> (Count, 2) -> (CountP, 3)
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 80daccfd417f170d7b48ac32cb4d4a84da0a19f7
Status: Issue closed |
ikeq/hexo-theme-inside | 473583034 | Title: Hello, could you add a setting for local images?
Question:
username_0: First of all, thank you very much for providing such an excellent Hexo theme.
While using it, I wanted to customize the local image paths for my posts, but the theme's settings seem to automatically add a CDN-style prefix and suffix. How should I modify things to use local images?
Answers:
username_1: Can't you just configure the prefix under assets as a local path?
username_0: For example, the image in my md file is xx.gif
assets:
prefix: /img/
suffix: ''
I left the suffix empty, but the actual link is /img//.cf//xx.gif
I don't know why a domain suffix gets added
username_0: After running clean the problem is solved, thanks again!
Status: Issue closed
|
PaddlePaddle/Paddle | 286659839 | Title: What is the difference between the groups and num_channels parameters in img_conv?
Question:
username_0: When using Paddle's img_conv layer, I found that three parameters affect the channels passed between layers: num_channels, num_filters, and groups.
From my previous experience with TensorFlow and MXNet, you specify input_channels and output_channels, and input_channels * output_channels filters are generated automatically. For example, with 16 input channels and 32 output channels, 16 * 32 kernels are generated; each output channel corresponds to 16 kernels, which are convolved with the 16 input-channel images and combined into one image, so the layer outputs 32 images.
In Paddle it seems that **groups** determines how many channels the previous layer feeds in, groups * num_filters determines how many kernels this layer produces, and the number of output channels is set to **num_filters**. But then, what is the difference between **groups** and **num_channels**?
It feels a bit odd to implement this with three parameters. I'm not sure whether my understanding is correct and hope an expert can clarify.
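Independent of Paddle's parameter naming, the kernel-count arithmetic described above can be checked directly. A framework-free sketch of standard versus grouped convolution weight counts:

```python
def conv_weight_count(in_channels, out_channels, kernel=3, groups=1):
    """Number of weights in a 2-D convolution.

    With grouping, each output channel only sees in_channels/groups
    input channels, so the weight tensor shrinks by a factor of `groups`.
    """
    assert in_channels % groups == 0 and out_channels % groups == 0
    return out_channels * (in_channels // groups) * kernel * kernel

# Standard conv, 16 -> 32 channels: 16 * 32 = 512 kernels of 3x3.
standard = conv_weight_count(16, 32)            # 4608 weights
# Grouped conv with groups=16: each group maps 1 input channel to 2 outputs.
grouped = conv_weight_count(16, 32, groups=16)  # 288 weights
print(standard, grouped)
```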
Answers:
username_1: These two actually have nothing to do with each other. Our current documentation has an error; sorry for the inconvenience this caused, we will fix it as soon as possible. Thanks!
username_0: Understood, thanks for the explanation.
Status: Issue closed
|
HaveAGitGat/Tdarr | 820273047 | Title: Small rare file locking issue
Question:
username_0: 

**Please provide the following information:**
- OS: Windows
**Additional context**
A vanilla Windows 10 installation, nothing else installed
Answers:
username_1: Thanks, haven't had any other reports of this. Seems like some process locked the file preventing it from being deleted. Will re-open if it's a more widespread issue.
Status: Issue closed
|
daviscook477/BaseMod | 309597885 | Title: Can you give some tips about mvn package the example_mod in the tutorial?
Question:
username_0: First, thanks for your excellent work on this mod/tool.
I followed your tutorial in "Getting Started (For Modders)", everything is good, but since I'm new to java..it's very confusing using the mvn, I know how to create new project and package a project followed a tutorial on youtube..But for this specific mod tutorial, I can't figure out how to package the example_mod, I tried to copy the pom.xml from basemod project to the example_mod and change some tags manually, but it didn't work, there are problems about the dependencies I guess.
Is there a way to run the launcher in eclipse (like modding in minecraft) or can you offer some tips on how to mvn package the example_mod? I see there is a build guide about basemod, problem is the example_mod is not a mvn project from the start :(
Answers:
username_1: Hey! Glad to see you were able to get started with it. I literally just got the example mod part of the tutorial done earlier today. I'm aware that it doesn't explain maven yet. I'm hopefully going to do that tomorrow. For now the best advice I have is to use `pom.xml` from a mod like FruityMod or something and try and modify the tags. You should be able to use the export to jar option that Eclipse has for this example mod. Maven is necessary for when your mod includes assets like images, localization, etc...
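For reference, here is a minimal `pom.xml` of the shape such mods typically use. Every groupId, version, and path below is a placeholder to adapt; the game and BaseMod jars are referenced as system-scope dependencies since they are not in any Maven repository:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>example_mod</artifactId>
  <version>0.0.1</version>

  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <dependencies>
    <!-- The game jar; installed locally, so referenced via a system path. -->
    <dependency>
      <groupId>com.megacrit</groupId>
      <artifactId>cardcrawl</artifactId>
      <version>1.0</version>
      <scope>system</scope>
      <systemPath>${project.basedir}/lib/desktop-1.0.jar</systemPath>
    </dependency>
    <!-- BaseMod, same idea. -->
    <dependency>
      <groupId>basemod</groupId>
      <artifactId>basemod</artifactId>
      <version>1.0</version>
      <scope>system</scope>
      <systemPath>${project.basedir}/lib/BaseMod.jar</systemPath>
    </dependency>
  </dependencies>
</project>
```

`mvn package` then produces `target/example_mod-0.0.1.jar`, with anything under `src/main/resources` (images, localization files) bundled in.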
username_0: Thanks for the tips. And I just realized why I didn't find the "Getting Started (For Modders)" page earlier; I thought something was wrong with my eyes lol.
Ok, I will try using the pom.xml from FruityMod haha. Btw, I was trying to add a zh_CN localization file for FruityMod last night.
username_0: Success! Using the pom.xml from FruityMod worked, thanks :)
username_1: Just as a heads-up about some slightly disappointing information: BaseMod currently doesn't load localization files based on the language the game is set to, so mods themselves are required to check the game language and load the appropriate localization file. Getting localization support done right is on my todo list.
Status: Issue closed
|
bang-olufsen/create | 348207561 | Title: Upgrade 'BeoCreate Essentials' into a proper Node module
Question:
username_0: Currently BeoCreate Essentials, the collection of Node modules handling interaction with the system (DSP, sources, network setup, etc.) is structured like a module but is missing some things to be properly treated as such – *package.json* for example. Adding in these missing things would allow *npm* to automatically make sure that any dependencies are installed, improving the portability of the module.
Answers:
username_0: With Beocreate 2, this issue is now resolved.
Status: Issue closed
|
numba/llvmlite | 590362429 | Title: python setup.py clean --all does not remove ffi/build
Question:
username_0: If one calls `python setup.py build` the directory `ffi/build` is created. `python setup.py clean --all` does not remove it however.
This is a problem because stale builds can persist. For example `pip install .` fails because of this if `python setup.py build` was previously executed.
Answers:
username_1: @username_0 thanks for submitting this. Can you discern, if this caused by the code Numba developers wrote in the `setup.py` or is this a failure of the Python build system?
username_0: @username_1 I have never really written a non-trivial `setup.py` but I think `setup.py` is to blame. - At least indirectly: From what I can tell at a glance `ffi/build` is created by `ffi/build.py` - which is somehow called by `setup.py`. But unless `setup.py` is told about the directory created by `ffi/build.py` it has no way to know about it. - That is just my guess though. I have not really looked into it.
username_2: @sklam wrote this originally, and it was doing some hacky things to try to handle building a standalone shared library that isn't a Python extension. Not sure if he recalls how this was supposed to work.
username_0: Fix in #569 |
CELAR/app-orchestrator | 77652498 | Title: Use ss-scale-resize and ss-scale-disk to request vertical scaling actions on VMs from SlipStream
Question:
username_0: The corresponding issue on SS side: https://github.com/slipstream/SlipStreamClient/issues/103
NB! The implementation is not yet there. It will be scheduled for weeks W22 and W23.
I'll update this ticket when https://github.com/slipstream/SlipStreamClient/issues/103 is implemented.<issue_closed>
Status: Issue closed |
daoseng33/DAOSearchBar | 322896361 | Title: Problem with cocoapod download
Question:
username_0: I'm trying to download the component, but the following error is occurring: Remote branch 1.0.4 not found in upstream origin
Thanks
Answers:
username_1: Hello, I fixed the problem, plz try it again, thanks
username_0: Hello, the download problem has been fixed. But I identified another problem: in the downloaded Pods, the DAOSearchBar class is missing the `open` modifier, and I cannot access it.
Thanks
Status: Issue closed
|
rust-lang/rustup | 662499839 | Title: Error on update leading to rollback
Question:
username_0: <!-- Thanks for filing a 🐛 bug report 😄! -->
**Problem**
Receiving an error on update (stable/nightly) leading to rollback
**Steps**
```
❯ rustup update
info: syncing channel updates for 'stable-x86_64-apple-darwin'
info: syncing channel updates for 'nightly-x86_64-apple-darwin'
info: latest update on 2020-07-21, rust version 1.47.0-nightly (f9a308636 2020-07-20)
info: downloading component 'rust-analysis'
info: downloading component 'rust-src'
info: downloading component 'rustc'
48.1 MiB /  48.1 MiB (100 %)  43.6 MiB/s in  1s ETA:  0s
info: downloading component 'rust-std'
info: downloading component 'cargo'
info: downloading component 'rust-docs'
info: removing previous version of component 'rust-analysis'
info: removing previous version of component 'rust-src'
info: removing previous version of component 'rustc'
info: removing previous version of component 'rust-std'
info: removing previous version of component 'cargo'
info: removing previous version of component 'rust-docs'
info: installing component 'rust-analysis'
info: Defaulting to 500.0 MiB unpack ram
info: installing component 'rust-src'
info: rolling back changes
error: could not rename component file from '/Users/chetanconikee/.rustup/tmp/gx57en2l_r5p93hz_dir/bk' to '/Users/chetanconikee/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin'
error: tar entry kind 'Symlink' is not supported
info: checking for self-updates
stable-x86_64-apple-darwin unchanged - rustc 1.45.0 (5c1f21c3b 2020-07-13)
nightly-x86_64-apple-darwin update failed - rustc 1.47.0-nightly (d7f945163 2020-07-19)
info: cleaning up downloads & tmp directories
```
Answers:
username_1: Likewise on Win10 x64 (MSVC ABI, if it matters).
username_2: This looks like toolchains have started including symlinks, which they only
can on some platforms, and definitely not on windows, so I think we should
reassign this bug to rust itself.
username_3: There's already a PR to fix this at https://github.com/rust-lang/rust/pull/74578
username_4: I am also seeing this on linux (Ubuntu 18.04)
username_5: Just ran into this as well on Manjaro 20.0.3
username_6: Seen on macOS 10.15.6
username_7: It's not surprising, it's a bug in the channel generation, not a bug in Rustup. See https://github.com/rust-lang/rustup/issues/2434#issuecomment-661655854
username_0: Seems like https://github.com/rust-lang/rust/pull/74578 is merged to master and yet I see failures on update
```
12.8 MiB / 12.8 MiB (100 %) 2.7 MiB/s in 4s ETA: 0s
info: removing previous version of component 'rust-analysis'
info: removing previous version of component 'rust-src'
info: removing previous version of component 'rustc'
info: removing previous version of component 'rust-std'
info: removing previous version of component 'cargo'
info: removing previous version of component 'rust-docs'
info: installing component 'rust-analysis'
info: Defaulting to 500.0 MiB unpack ram
info: installing component 'rust-src'
info: rolling back changes
error: could not rename component file from '/Users/xxx/.rustup/tmp/uq2qki3860ngfh1s_dir/bk' to '/Users/xxx/.rustup/toolchains/nightly-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin'
error: tar entry kind 'Symlink' is not supported
info: checking for self-updates
stable-x86_64-apple-darwin unchanged - rustc 1.45.0 (5c1f21c3b 2020-07-13)
nightly-x86_64-apple-darwin update failed - rustc 1.47.0-nightly (d7f945163 2020-07-19)
info: cleaning up downloads & tmp directories
```
username_8: It was merged like 10 minutes ago, what do you expect? Nightlies are built once a day - well, night :).
username_7: Also, you can expect to encounter this if the next few nightlies lack a component you depend on; unless you remove those components, rustup will be unable to update nightly until it has everything you need.
username_9: More accurately, around midnight GMT/UTC (not exactly sure what timezone) the last PR that landed is promoted to "nightly".
(Effectively every time we merge a PR we build a "nightly" from it - [`rustup-toolchain-install-master`](https://github.com/kennytm/rustup-toolchain-install-master) lets you install any PR build, although I believe we delete old ones after a number of days. you could use it to test that the bug is fixed, maybe?)
So you have to wait another 13h-14h or so before a new nightly is published.
username_2: This was fixed in rust itself
Status: Issue closed
|
wp-net/WordPressPCL | 788507650 | Title: WP 5.6 Application Passwords (basic Auth)
Question:
username_0: With the latest Release 5.6 Wordpress introduced application Passwords.
(https://make.wordpress.org/core/2020/11/05/application-passwords-integration-guide/)
While it has close to no advantages over JWT auth in normal WP setups, it can be a great alternative when it comes to WP multisites.
Since JWT encodes the domain it is valid for, I frequently run into issues maintaining a management app that needs to access multiple subdomain sites in a multisite, and I wonder if enabling Basic Auth in conjunction with revokable app passwords would be a better solution in an https world.
Any thoughts on having Basic Auth as an alternative to token auth?
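For context, an application password is sent as an ordinary HTTP Basic credential. A sketch with made-up credentials:

```python
import base64

# WordPress displays application passwords with spaces; these
# values are placeholders, not a real account.
username = "admin"
app_password = "abcd EFGH 1234 ijkl MNOP 5678"

# Basic auth is just base64("user:password") in the Authorization header.
token = base64.b64encode(f"{username}:{app_password}".encode("utf-8")).decode("ascii")
headers = {"Authorization": f"Basic {token}"}
print(headers["Authorization"])
```

Over plain HTTP this would expose the credential, which is why application passwords are only sensible in an https world, as noted above.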
Answers:
username_1: Hey,
yes I definitely want to support this since it's a baked-in feature in the REST API (I thought I had already created an issue for that 🤔). I can't make any promises how quickly I could add this, but I'm always happy to look at Pull Requests. :)
username_1: I just published a new version 1.9.0 that supports Application Passwords, feel free to try it out.
Status: Issue closed
|
compbiocore/conda-recipes | 231656255 | Title: update biobambam2 recipe to compile from source
Question:
username_0: Currently, the recipe has been forked from Bioconda, where they pull a binary release compiled on Debian etch. It appears to be functional for marking duplicates, but long term it will be safer and more reliable to compile both the programs and the dependencies from source.
CloverHackyColor/CloverBootloader | 930854733 | Title: To load Clover after enabled secure boot, could you give a secure boot key ?
Question:
username_0: To load Clover after enabling Secure Boot, could you provide a Secure Boot key?
Clover cannot be loaded unless Secure Boot is disabled.
Answers:
username_1: If you mean Apple Secure Boot, then it is disabled in Clover forever, as it is not compatible with the latest Monterey.
If you mean UEFI Secure Boot, then you should sign Clover with some utility and write the signature into your BIOS, which is not the same as mine.
Be a developer to use the secure boot or forget it.
Status: Issue closed
|
abstiles/deluminate | 64311945 | Title: Bad text antialiasing
Question:
username_0: I don't know the reason for this, but I am looking at the page
http://www.infoq.com/articles/designing-restful-http-apps-roth
and the text is noticeably more difficult to read with Deluminate, than other alternatives:
- Chrome's Hacker Vision smoothing looks much better.
- Firefox's NoSquint also looks better, and lets me choose the color of the text.
Compare:
Deluminate

NoSquint:

Answers:
username_0: It seems that text is not properly optimized for aligning with screen pixels, so fuzzy edges and lines are more frequent.
username_1: I think this is a natural consequence of the strategy I use to invert the luminance. Normally text rendering tries to take into account the difference between light-on-dark and dark-on-light text, but when Deluminate performs an inversion and hue-rotate that logic may be defeated.
If you tweak a page's CSS to force it to use `-webkit-font-smoothing: antialiased` or `-webkit-font-smoothing: subpixel-antialiased`, does either help?
username_0: `-webkit-font-smoothing` doesn't seem to have any effect.
I have come to a good compromise by setting
`-webkit-filter: hue-rotate(180deg) invert() brightness(144%) saturate(135%) contrast(88%)`
Status: Issue closed
username_2: Hey! I'm having a similar situation.
Sometimes the inverted text is harder to read than the original one.
I've tried both your suggestions: `-webkit-font-smoothing` doesn't seem to change anything on my Chrome as far as I can tell, no matter what value I set it to. Can you perhaps point me to a working example?
`-webkit-filter: hue-rotate(180deg) invert() brightness(144%) saturate(135%) contrast(88%)` as a workaround does seem to work, although it does have an effect on other elements besides text as well, which I suppose is undesirable.
username_2: Addendum: I've quickly looked for an example, and I've found this:
http://maxvoltar.com/sandbox/fontsmoothing/
If I take a screenshot and magnify I see this:
http://screencast.com/t/AYnyyg0eV
To me all three of these seem to use `subpixel-antialiasing` -- judging from the colored pixels at the font edges.
@username_1 Does this website render differently for you, perhaps?
username_1: I have had to disable subpixel rendering system-wide on my own machines due to similar problems with other inversion software, so when I suggested those settings, it was in hope that they'd work for others, not because I used it myself. I don't have a better solution, I'm sorry to say. I think the text-rendering engines just don't handle inversion correctly.
username_2: I agree. So potentially this could be fixed in Chromium.
I intend to write a bit longer reply later. |
internetee/registry | 213310688 | Title: Registrar: Credit/debit card payment solution
Question:
username_0: Foreign registrars have only the bank transfer option, plus TransferWise if they know about and use it, for payments to their credit accounts in the .ee registry. The goal is to make this process simpler and more flexible.
Answers:
username_0: For the initial intermediary we have chosen EveryPay.
Test account access is ordered.
username_0: Access to the test account has been created for Maciej.
username_1: I spent some time investigating how EveryPay works, and there are a few questions that we should answer before moving forward, mostly related to compatibility with banklink and the ability to use different intermediaries in the future (i.e. Stripe), especially since it's an open source project.
Given that we have refactoring for banklink planned in issue #646, I think we should plan for the following URL schema and controller inheritance to follow the 7-action Rails convention:
```
## Payments creation and return from intermediary:
/registrar/invoices/:invoice_id/payments/:payment_channel/
A complete controller dedicated to a single intermediary payment creation and return.
## Callbacks handling
/registrar/invoices/:invoice_id/payment_callbacks/:payment_channel/
A complete controller for callbacks handling from one single intermediary.
```
In case of EveryPay, we would use:
```
GET /registrar/invoices/1233/payments/every_pay/new
To display info and redirect the user to CC format page.
PUT /registrar/invoices/1233/payments/every_pay/4687ee51e9e1cca682f4b81e961d28
To redirect the customer back to our system, and display relevant info about
the payment. The last part of the URL is a nonce we set ourselves.
POST /registrar/invoices/1233/payment_callbacks/every_pay/
For callbacks, as they send them as POST requests
```
Similarly, Swedbank, would use:
```
GET /registrar/invoices/1233/payments/swedbank/new - for the payment redirect
POST /registrar/invoices/1233/payment_callbacks/swedbank/ - for the payment return
```
I haven't yet figured out how to separate them properly, but I'm thinking we might put all payment gateways that we use here into it's own separate mountable engine with its own git repo. It will be EIS specific, other people can write their own that covers stripe/paypal/etc.
We however need to create a clear interface on handling the payments, which we currently do in a method on [BankLink::Request](https://github.com/internetee/registry/blob/master/app/models/bank_link.rb#L53-L61).
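As a rough illustration, the URL schema above might map to Rails routes along these lines. This is only a sketch: the controller names and module layout are assumptions, not the actual registry code.

```ruby
# config/routes.rb -- illustrative sketch only; controller names are assumptions.
Rails.application.routes.draw do
  scope '/registrar/invoices/:invoice_id' do
    # Payment creation and return from the intermediary (one controller per channel).
    get 'payments/every_pay/new',    to: 'payments/every_pay#new'
    put 'payments/every_pay/:nonce', to: 'payments/every_pay#update'

    # Callback handling (EveryPay sends callbacks as POST requests).
    post 'payment_callbacks/every_pay', to: 'payment_callbacks/every_pay#create'

    # A bank link channel would follow the same pattern.
    get  'payments/swedbank/new',      to: 'payments/swedbank#new'
    post 'payment_callbacks/swedbank', to: 'payment_callbacks/swedbank#create'
  end
end
```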
---
We also need to decide if and how we want to store info about payments. Most payment providers require some kind of unique identifier to be set on the merchant side to isolate each transaction. It cannot be the invoice reference number, as there might be two payments attached to the same invoice, with one being failed or not completed. We can:
- [ ] Create a new table (i.e `payments`) with a foreign key on `invoices` and create a new record there every time someone wants to pay an invoice. It should contain only the data that is relevant to us (created_at, updated_at, status, payment_channel/intermediary)
- [ ] Don't create anything, and rely on BankTransactions <--> Invoice binding to provide this information. However, this does not give us the information about payments that were not successful.
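For the first option, a minimal migration could look roughly like this. The table and column names here are assumptions for illustration, not a final design:

```ruby
# Hypothetical migration sketch for a `payments` table; names are assumptions.
class CreatePayments < ActiveRecord::Migration[5.1]
  def change
    create_table :payments do |t|
      t.references :invoice, foreign_key: true, null: false
      t.string :status,          null: false, default: 'initialized'
      t.string :payment_channel, null: false # e.g. 'every_pay', 'swedbank'
      t.timestamps
    end
  end
end
```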
username_0: The proposal sounds and looks good. Let's proceed with the trackable solution with the extra table.
username_2: So it should be possible to create a new payment in the UI, not a `BankTransaction` (which should probably be used only for storing transactions associated with imported bank statements). Another question is where we store other payment gateways' transactions. I don't think they are `BankTransaction`s.
btw, the `AccountActivity` model seems to act as a collection of payments, so the complete billing model should probably be changed according to the needs introduced with this (#419) ticket. Also check #447.
username_1: Partial payments introduce complexity we probably don't want. I'd avoid them at all cost, otherwise we need to figure out what to do with invoices that are partially paid, which is pretty big deal accounting-wise. Plus, as a customer you are not allowed to change the amount in the credit card payment. If you do that with Javascript, HMAC will be broken and will not be authorized in EveryPay.
I am against `payments/confirm` as you are not confirming all payments. Some of them are denied, cancelled, etc. Plus, it mixes REST with RPC, which is not a good idea IMHO.
username_2: I don't think there is some accounting software that wouldn't support partial payments or an accountant who would ignore them, but that's up to @username_0.
"confirm" was just an example. It's up to you how exactly it should be called.
username_0: we do not support partial payments in the registry software at the moment, nor do I see any reason to implement them. We have handled and will continue to handle these rare cases manually, usually by refunding the partial payment and asking the registrar to make a new payment for the correct sum.
username_2: Perhaps other registries may benefit from this feature?
username_0: I agree, but at this moment it seems to introduce too many complexities. Also as registrars are in control of the billing - creating, paying and canceling the invoices as necessary it is hard to see what is the added value here.
username_0: web UIs do not work after configuring EveryPay: Passenger will not start.
error message:
```
[ E 2018-04-26 11:31:51.0187 4709/Tw age/Cor/App/Implementation.cpp:305 ]: Could not spawn process for application /home/registrar/registrar/current: An error occurred while starting up the preloader.
Error ID: 398791ec
Error details saved to: /tmp/passenger-error-QaoJgK.html
Message from application: undefined method `strip' for nil:NilClass (NoMethodError)
/home/registrar/registrar/releases/433/app/models/payments.rb:2:in `<module:Payments>'
/home/registrar/registrar/releases/433/app/models/payments.rb:1:in `<top (required)>'
```
username_1: Please add the following to `config/application.yml`:
```
# You should list other payment intermediaries here. Each one of them needs their own class in /app/models/payments/
payments_intermediaries: >
every_pay
```
It's in the branch's example file as well, I assumed that those are added automatically:
https://github.com/internetee/registry/pull/829/files#diff-318c31f2f8c28eecdbe0119432a98af0
username_0: * logged in with mobile id, made the payment and automatic redirect back to merchant took me back to login screen - the user session was ended at some point. Same problem with credit card and bank link payments
username_2: Wouldn't it be better to call it "payment gateway" instead? If I understand correctly, the entity in question is a payment facility, not an amount of money paid.
- https://en.wikipedia.org/wiki/Payment_gateway
username_1: This is essentially what we do here, we ask the customer to authorize a payment order to pay a specific invoice and then given the state of that payment order, make changes in our system. Plus, it's quite natural for an invoice to have multiple payment orders, differentiated by channels/banks.
username_1: That's not true, please read BankLink documentation. Usage of service 1011 is exactly that: creating a payment order that customer cannot change, and sending it over the wire to the bank.
https://www.seb.ee/sites/default/files/web/files/pangalingi_tehniline_specENG.pdf
The same goes for EveryPay, the customer can only authorize or cancel the pre-filled payment order.
username_1: I'll change it to `PaymentOrder`, since `Payment` is ambiguous in this context. `PaymentGateway` is not a good name for this abstraction because it gravitates towards a singleton, or otherwise it does not model the real world. Let me use an example to illustrate that:
Imagine that you have a company that sells physical goods and needs to ship them to customers using different transport companies (Postal, DHL, UPS, etc.). You have an `Order` object that represents items to be sent to a customer. Now, you have a module called `ShippingCarriers` that represents person or entity that transports the goods from you to the customer. Each `ShippingCarrier` knows how to prepare a manifest, deliver a package and cancel a delivery. That's roughly analogous to using `PaymentGateway` as an abstraction to process payments.
A typical interface for that would look close to the following:
```ruby
order = Order.find_by(params)
shipping_carrier = ShippingCarriers::UPS.new(order)
shipping_carrier.prepare_shipping_manifest
shipping_carrier.deliver_package
shipping_carrier.complete_delivery
shipping_carrier.cancel_delivery
```
Looks good, but if we were to model the real world, it should be replaced with a singleton, since there's only one UPS and one Omniva, not many. Now we have a bunch of non-OO procedures:
```ruby
order = Order.find_by(params)
ShippingCarriers::UPS.prepare_shipping_manifest(order)
ShippingCarriers::UPS.deliver_package(order)
ShippingCarriers::UPS.complete_delivery(order)
ShippingCarriers::UPS.cancel_delivery(order)
```
IMHO a better choice is to have an object called `Shipment` that takes an `Order` and `carrier` arguments:
```ruby
order = Order.find_by(params)
# A carrier can also be an injectable object, depending on how complicated the logic is.
shipment = Shipment.new(order, :omniva)
shipment.prepare_manifest
shipment.deliver
shipment.mark_as_delivered
shipment.cancel
```
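Applied back to payments, a bare-bones, framework-free sketch of that idea might look like the following. The class name, channels, and statuses are illustrative assumptions, not the eventual implementation:

```ruby
# Hypothetical sketch of a PaymentOrder-style abstraction: an object that
# takes an invoice plus a channel, and tracks the state of one payment attempt.
class PaymentOrder
  SUPPORTED_CHANNELS = %i[every_pay swedbank seb].freeze

  attr_reader :invoice, :channel, :status

  def initialize(invoice, channel)
    unless SUPPORTED_CHANNELS.include?(channel)
      raise ArgumentError, "unsupported channel: #{channel}"
    end

    @invoice = invoice
    @channel = channel
    @status  = :initialized # failed/cancelled attempts are kept for auditing
  end

  # Called when the intermediary reports a successful authorization.
  def complete!
    @status = :paid
  end

  # Called when the customer cancels or the intermediary denies the payment.
  def cancel!
    @status = :cancelled
  end
end
```

Under this sketch, an invoice can have many payment orders (one per attempt and channel), which also records the unsuccessful attempts that a pure `BankTransaction` binding would miss.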
username_1: I'm pretty certain that's how Estonian ID should work (smart/mobile/ID card), otherwise it would be unsafe. When you exit the page to go to pay, you end the session. You need to authenticate again to resume. You can check it at your internet bank: login, close the page, try to reopen the page. You'll see a cache missing message and be redirected back to the login screen.
username_0: I cannot name any services using bank link or credit card payments that require user to login again after the payment is processed. So purely from user experience point of view we should try to retain the session during the credit card and bank link payments.
That said the last update that keeps the action in one tab still arrives to the same result of ending the session and requiring to log in again. Tested with both mobile-id and id-card logins.
username_0: users logged in using PKI auth are logged out as well on the back-to-merchant action, same as with Mobile-ID and ID-card users.
username_2: It's worth making sure that `secure_session_cookies` and/or `same_site_session_cookies` in `application.yml` are `false`, just in case.
username_1: As @username_2 suggested, we need to change the staging values, as they currently don't allow the session to be retained across non-GET requests. They need to be updated to the same values as in production, where it works properly.
username_0: in production
Status: Issue closed
|
sul-dlss/preservation_catalog | 713823962 | Title: C2M audit doesn't play nicely with pres robots update_catalog step
Question:
username_0: **Describe the bug**
We have had a number of occurrences of C2M running (currently kicked off Thurs night/early Friday morning) while a bunch of objects are going through accessioning. The problem seems to be a sort of race condition:
- when pres robots update_catalog step is in progress (https://github.com/sul-dlss/preservation_robots/blob/master/lib/robots/sdr_repo/preservation_ingest/update_catalog.rb), it is trying to add a new version to the object in prescat,
WHILE
- when C2M is running, (https://github.com/sul-dlss/preservation_catalog/blob/master/app/lib/audit/catalog_to_moab.rb), it is trying to verify the version in the catalog is what is on disk.
This can land us in the soup:

Note that the same number of objects have errored in both workflows. :-(

These also show up in Honeybadger:
pres robots:

The most recent occurrence was this morning:
https://app.honeybadger.io/projects/55564/faults/68322527 (6,882 occurrences)
But it happened the previous 2 weeks as well:
https://app.honeybadger.io/projects/55564/faults/59562582 (roughly 30 occurrences last week and 900 the week before)
Answers:
username_1: When this has happened in the past, I've followed this remediation plan:
1. Run preservation audit from prescat to make sure the moabs are ok. This will flip the status from "unexpected version" to "ok" if the audits pass.
2. Make sure the version numbers shown in Argo match the version numbers for the moabs.
3. Set update-catalog to completed manually
4. Watch the items complete accessioning
username_2: This issue touches many parts of the preservation code, and the reason for the failure is unknown. Moving this out of the first responder queue; this may be better addressed in a maintenance work cycle.
username_3: bracketed notes and some emphasis/formatting are @username_3's.
There are two broad categories of fix that I can think of off-hand, but we should discuss as a team. I'm not wild about either of these ideas:
1. When looking at a specific druid, pres cat asks pres robots whether that druid is queued for processing.
   - advantages
     - keeps things event driven, keeps us biased towards updating the catalog based on known events.
   - disadvantages
     - introduces a circular dependency in terms of network API calls between pres cat and pres robots. Currently, pres robots calls pres cat, but not vice versa (pres robots has no API available to other apps over the network).
     - I'm not sure if Redis supports this sort of query across the queue in a performant way.
2. If a pres robots worker says to `update-catalog` with a version it just added to a Moab, and it's the version pres cat has in the catalog, respond with a non-`500` HTTP code (`200`? some other `2xx` that indicates the resource already exists?). Still send a Honeybadger alert? This is unusual, but now a known condition, and I think there's a good chance we'd ignore the alerts anyway.
   - advantages
     - simpler, avoids circular dependency
   - disadvantages
     - reduces strictness of expectations around update calls from pres robots to pres cat; allows more opportunity for a bug in pres robots update to overwrite existing info about a preserved version of something, without a scary alert.

Actually, idea 2b:
- Similar to 2 above, but the pres robots worker sends as one of its request params a hash of `manifestInventory.xml` from the Moab version it's calling `update-catalog` on. If pres cat sees that it has that Moab version already, it calculates a hash of `manifestInventory.xml` for that Moab version. If it matches the one sent by pres robots, all good: respond with a `200`, maybe log a warning and/or send a clearly informational, non-scary HB message.
  - advantages
    - avoids circular service dependency
    - gives assurance that the pres robots worker really was trying to tell pres cat about the exact content pres cat was already aware of. We get this assurance because `manifestInventory.xml` has signatures (checksums and sizes) for all the other manifest files in the Moab version ([example](https://github.com/sul-dlss/preservation_catalog/tree/34755248115348bce3e0de97cd45dde3a2ba053f/spec/fixtures/storage_root01/sdr2objects/bj/102/hs/9687/bj102hs9687/v0003/manifests/manifestInventory.xml)), and one of those manifest files is `signatureCatalog.xml`, which has signatures for all content files in that version and prior versions. Through that chain of signatures, we can have pretty good assurance that pres robots and pres cat are in agreement about the content in question.
    - probably no more complex than option 1 above
    - should be performant: the file to be checksummed is typically a few kB.
  - disadvantages
    - more complex than option 2 above
    - would require touching the dreaded `CompleteMoabHandler` in pres cat 😄, though I think this is a relatively approachable thing to have to do with it.

Now that I've thought it through a bit more, I actually like option 2b. But I would still love feedback, whether on other approaches or on refinements to any of the above.
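A tiny plain-Ruby sketch of the 2b comparison (the method and parameter names are assumptions; any real implementation would live somewhere near `CompleteMoabHandler` or its collaborators):

```ruby
require 'digest'

# Hypothetical check for idea 2b: the pres robots worker reports a SHA-256
# digest of the Moab version's manifestInventory.xml; pres cat recomputes
# the digest from its own copy of the file and compares the two.
def manifest_inventory_matches?(reported_digest, local_manifest_xml)
  Digest::SHA256.hexdigest(local_manifest_xml) == reported_digest
end
```

If they match, respond `200` and log/notify at an informational level; if not, keep treating it as the scary "unexpected version" case.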
thomasclaudiushuber/Wpf-Hosting-Uwp-MapControl | 482259893 | Title: XDG0062 Catastrophic failure error in Visual Studio 2019 designer.
Question:
username_0: Hello Thomas
I followed along as best I could but I'm still getting the error:
WindowsXamlManager and DesktopWindowXamlSource are supported for apps targeting Windows version 10.0.18226.0 and later. Please check either the application manifest or package manifest and ensure the MaxTestedVersion property is updated. FriendsApp.Wpf MainWindow.xaml 26 Catastrophic failure
WindowsXamlManager and DesktopWindowXamlSource are supported for apps targeting Windows version 10.0.18226.0 and later. Please check either the application manifest or package manifest and ensure the MaxTestedVersion property is updated.
This happens when looking at your sample application's MainWindow.xaml. Am I missing something?
Answers:
username_1: Having the same issue using the current dependencies |
sustainers/authentic-participation | 620410591 | Title: Write Annotated Principles: a detailed overview of the Principles with added context
Question:
username_0: _Originally posted by @username_0 in https://github.com/sustainers/authentic-participation/issues/6#issuecomment-630351616_
---
# Summary
Create an annotated version of the Principles, à la [Annotated OSD](https://opensource.org/osd-annotated), to explain them in more context
# Background
An annotated copy of the Principles is a no-nonsense, simple extension to the Principles. Just like the Annotated OSD offers more context to the Open Source Initiative's Open Source Definition, the Annotated Principles would serve a similar function for us.
# Details
It does not need to be long. We should focus on simplicity and starting small to get something published, and then iterate further with feedback and a community, if we can build one up. :slightly_smile:
So, closing criteria might be 3-4 sentences for each Principle to offer more context and explanation of meaning.
# Outcome
Better assurance that the meaning of the Principles of Authentic Participation will not be corrupted by decentralization
Answers:
username_0: For now, this is blocked on the Advocate Kit in #6. Ideally, this page fits in as part of the Advocate Kit in the future. |
Anuken/Mindustry-Suggestions | 741253683 | Title: Add a way for players to join each other without having to portfoward or go to the other persons house. (non steam players)
Question:
username_0: It has come to my attention that a lot of players have been getting annoyed with "how to make servers" and "how to port forward" to play with each other without having to join a server run by someone else, so here's what I have to say: add a feature like Among Us, where players could have public/private lobbies that people could join to play with other players without having to do all that hard work. Basically a substitute for co-op.
Answers:
username_1: 1. RIP Template
2. IF YOU DON'T WANT TO DO PORT FORWARDING OR OTHER FREE WAYS TO HOST A GAME AND LET OTHER PEOPLE CONNECT. ***THEN FUCKING PAY SOME COMPANY TO HOST IT FOR YOU***. NOTHING IS FUCKING FREE NOR CHEAP FOR AN AD-LESS, NO IN-APP PURCHASES, MOSTLY FREE TO DOWNLOAD AND PLAY GAME LIKE, I DON'T KNOW, LIKE FUCKING MINDUSTRY, THIS GAME?
username_2: @username_1 a) Chill out, b) valid points, c) there might be *some options* to simplify the existing system, which we should consider.
Obviously it’s not going to change from the current system, where one person plays host, since servers aren’t free, but there (might?) have been some innovations in internet technology in the last 20 years. Perhaps something as simple as a guide/wizard for setting up port forwarding would be a valid option.
username_2: And by that I mean a guide that ships with the game (as opposed to searching the internet for guides)
username_0: Sleep deprivation is a thing give me a break
username_1: That would be hard, as each router manufacturer has a different interface for their devices.
username_3: ok, then use a VPN if you don't have your router password
as for the concept of public servers that anyone can join... did you try public servers?
username_4: ZeroTier
dotnetcore/Home | 431816786 | Title: SmartSql applies to join NCC
Question:
username_0: SmartSql = MyBatis + Cache (Memory | Redis) + R/W Splitting + Dynamic Repository + Diagnostics ...
Project address: https://github.com/Smart-Kit/SmartSql
Answers:
username_1: Agreed.
username_2: I support it joining.
I am really looking forward to this excellent project, which benchmarks itself against [MyBatis-Plus](https://mybatis.plus/), taking another big step forward after delivering its 2019 roadmap.
username_3: Thank you for the contribution to the open source community! Agreed, +1.
username_4: Agreed.
username_5: Agreed.
username_6: I support it joining.
The only .NET ORM that can stand up against MyBatis. Its performance is comparable to Dapper's, yet it is much more powerful than Dapper. Highly recommended!
Status: Issue closed
|
japkit/japkit | 383005910 | Title: japkit does not compile with Java 11
Question:
username_0: ... waits for https://github.com/eclipse/xtext/issues/1182
Answers:
username_1: @username_0 is this a with or a against?
username_0: @username_1 , thanks for asking. I am not actually sure about the precise meaning of with vs against, so I describe steps to reproduce instead ;-)
1. Clone japkit
2. Run mvn clean install with a Java Home pointing to a JDK 11
3. See lots of exceptions in log
To be clear: I don't change the source or target level in the pom to 11. It is still 8.
username_1: it works fine for me with Xtext 2.16 as expected.
- you have to update maven javadoc plugin too.
- you have to deal with the removed javax.annotation.*
username_0: @username_1 thank you for this helpful hints! Will try asap.
I think I mainly matched your "Xtext should not "explode" with Java 11" to my experience when switching to JDK 11. That's why I assumed I should wait for eclipse/xtext#1182 before trying again.
username_1: Xtext 2.16 works with Java 11, but only against target <= 10.
Xtext 2.17 will work with Java 11 as a target too.
username_0: After upgrading to xtend 2.16 it builds with Oracle JDK 11: https://travis-ci.org/japkit/japkit/jobs/471320344
Status: Issue closed
|
inspirehep/inspirehep | 716550978 | Title: titles which include `<z<` are not displayed in holdingpen
Question:
username_0: **Describe the bug**
Articles with titles that include `<z<` are displayed without a title in the holding pen brief or detailed view.
Hence there is no link to the detailed view from the brief listing.
**To Reproduce**
Steps to reproduce the behavior:
1. harvest 1912.05206 or 2002.02968 from arXiv
2. look at the resulting workflow in the holding-pen
3. See error
**Examples**
2010.02710 `Investigating the Effect of Galaxy Interactions on AGN Enhancement at 0.5<z<3.0`
1912.05206 `The Strength of the 2175Å Feature in the Attenuation Curves of Galaxies at 0.1<z<3`
2002.02968 `Interpreting the Spitzer/IRAC Colours of 7<z<9 Galaxies: Distinguishing Between Line Emission and Starlight Using ALMA`
**Expected behavior**
The title should be visible. In brief view it should be clickable leading to the detailed view.
**Screenshots**

**Additional context**
In addition there **might** be a problem terminating the WF, they are still 'waiting' after 2 hours although the record is in INSPIRE.
Status: Issue closed
|
DLHub-Argonne/home_run | 382817014 | Title: Converting To/From Numpy Arrays
Question:
username_0: Some types of functions expect to receive and return numpy arrays. We could make the shims for these functions resilient to whether a user provides a list object or a numpy array, and transform data back into lists from ndarrays if required.
Answers:
username_0: This was fixed a while ago.
Status: Issue closed
|