repo_name (string, length 4-136) | issue_id (string, length 5-10) | text (string, length 37-4.84M)
---|---|---|
florisboard/florisboard | 816386527 | Title: Adding user defined layouts
Question:
username_0: It would be nice if we could not only add user defined symbols, but also rearrange the entire layout of the keyboard
Maybe this could be done by importing a user-made xml file or even by an in-app layout editor (the former is obviously easier to implement (?))
Answers:
username_1: Thanks for your suggestion! I think what you are asking is like #196, but that you also ask to import layout json files. Will add the import request to my todo list, as it is already planned sometime in the future.
username_2: I think just letting the user add a layout.json file would be a lot easier to implement than a GUI to do it in the app like in #196
username_3: Agreed, this would be awesome! It would be great to be able to make a layout.json file that we can import for a custom layout. |
daattali/gslides-betternotes-extension | 76279998 | Title: Enlarge "upcoming slide" thumbnail as well
Question:
username_0: A few people requested this feature, look into it when I have a free day
Answers:
username_1: This feature would be much appreciated. It boggles the mind that Google hasn't fixed this. The next slide is so small it's not even remotely useful by default.
username_0: Currently scrambling to write my thesis, but I completely forgot about this. I'll try to take a look in an upcoming weekend!
Status: Issue closed
username_0: @username_1 done. Please check to make sure this is what you wanted. There is a new option when you click on the icon in the URL toolbar that allows you to resize the next slide
username_1: Works, thanks a lot :) Google really should redesign the left side of the speaker notes window...
username_0: Yeah, it's horrible. The new version also lets you manually drag the sidebar to select the size, should be more useful!
username_2: Any chance you could do the same for the previous slide? If it's too complicated don't worry about it, already a great extension! |
crystal-lang/crystal | 622917705 | Title: Cannot use aliases to link to methods/macros in docs
Question:
username_0: ```cr
alias FB = Foo::Bar

module Foo::Bar
  # Using `FB.baz` does not link, while `Foo::Bar.baz` does.
  class Baz
  end

  def self.baz
  end
end
```
 |
ocaml-community/awesome-ocaml | 514550437 | Title: Add a section about Data Science/ML/DL/RL/etc
Question:
username_0: Basically reuse information from these posts https://discuss.ocaml.org/t/is-it-possible-to-use-machine-learning-and-deep-learning-frameworks-through-ocaml/4570
Answers:
username_1: It's a good idea, would you like to submit a PR?
username_0: Yes, I will.
Status: Issue closed
|
Magenic/MAQS | 871383920 | Title: Logging Config Stack overflow issue
Question:
username_0: When trying to unit test the Logging Config, a stack overflow occurred.
Answers:
username_1: @TroyWalshProf The issue is inside of
``` csharp
public static LoggingEnabled GetLoggingEnabledSetting()
{
    switch (Config.GetGeneralValue("Log", "NO").ToUpper())
    {
        case "YES":
            return LoggingEnabled.YES;
        case "ONFAIL":
            return LoggingEnabled.ONFAIL;
        case "NO":
            return LoggingEnabled.NO;
        default:
            throw new ArgumentException(StringProcessor.SafeFormatter($"Log value '{Config.GetGeneralValue("Log", "NO")}' is not a valid option"));
    }
}
```
When the default case is hit, the exception is thrown and the FirstChanceHandler catches it. Inside the FirstChanceHandler, the same call to GetLoggingEnabledSetting is performed again, which causes a stack overflow.
My proposal is to have NO and default perform the same action. Thoughts?
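A minimal sketch of that proposal (hypothetical; the actual fix in MAQS may differ) would let the default case return the same value as "NO" instead of throwing:
``` csharp
// Sketch of the proposed change: treat unrecognized values like "NO" instead of
// throwing, so the FirstChanceHandler can never re-enter this method.
public static LoggingEnabled GetLoggingEnabledSetting()
{
    switch (Config.GetGeneralValue("Log", "NO").ToUpper())
    {
        case "YES":
            return LoggingEnabled.YES;
        case "ONFAIL":
            return LoggingEnabled.ONFAIL;
        case "NO":
        default:
            return LoggingEnabled.NO;
    }
}
```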
Status: Issue closed
|
pangeo-data/pangeo | 476233348 | Title: Schmidt Futures VESRI Funding Opportunity
Question:
username_0: The nature of this call (specifically targeted towards Climate model biases) precludes a proposal with Pangeo as the primary focus. However, Pangeo-style infrastructure would make an excellent "methodological advance" as well as something that this "core team" might help provide. My understanding is that the core team composition and activities will be determined by what the proposals request--if proposals request lots of Pangeo-style cloud infrastructure, that is what they will provide.
If anyone out there is considering applying to this call, please don't hesitate to mention Pangeo in your proposal and leverage the concepts we have openly developed regarding cloud computing and machine learning in the context of climate data. It is not necessarily required that you include Pangeo collaborators on your proposal, since these are potentially services that the core team could provide. Pangeo technologies are all open source and freely available for anyone to use and build on.
Full disclosure: I am part of a team that plans to submit to this call. I am posting this here because I see Pangeo as very broadly applicable to this sort of work, beyond any one specific proposal. My goal here is to influence the core team.
Answers:
username_0: This issue was moved to https://discourse.pangeo.io/t/schmidt-futures-vesri-funding-opportunity/42
Status: Issue closed
|
primefaces/primevue | 728267960 | Title: Calendar dateFormat not working
Question:
username_0: per the docs:
`dateFormat="dd.mm.yyyy"` in the Calendar is used to change the date format.
It is not working.
```html
<Calendar v-model="launch.date" v-bind:id="'dateInput_'+index" class="form-control"
          placeholder="Click to set date" view="month" dateFormat="dd.mm.yy" :yearNavigator="true"
          yearRange="2020:2030"></Calendar>
```

Answers:
username_1: Should be:
dateFormat="dd.mm.yy"
Status: Issue closed
|
googleads/googleads-mobile-unity | 939024921 | Title: IronSource Rewarded callbacks not fired
Question:
username_0: <!-- DO NOT DELETE
validate_template=true
template_path=.github/ISSUE_TEMPLATE/bug_report.md
-->
### Step 0: Are you in the right place?
* For general technical questions, or help with project-specific issues like setting up ads in
your app, reach out to our support team on the
[Developer Forum](https://groups.google.com/forum/#!categories/google-admob-ads-sdk/game-engines).
* For assistance with your AdMob account, reach out to
[AdMob Support](https://support.google.com/admob/?hl=en#topic=7383088).
* For feedback on [our documentation](https://developers.google.com/admob/unity/start),
send your feedback by pressing the **Send Feedback** button at the top right of the
documentation page you are on.
* For issues related to __the code in this repository__, continue filing this GitHub issue.
* Once you've read this section and determined that your issue is appropriate for
this repository, **please delete this section**.
### [REQUIRED] Step 1: Describe your environment
* Unity version: 2020.3.13
* Google Mobile Ads Unity plugin version: 6.0.1
* Platform: _____ Android
* Platform OS version: Android 9
* Any specific devices issue occurs on: Asus ZenPad 3S
* Mediation ad networks used, and their versions: IronSource, UnityAds
### [REQUIRED] Step 2: Describe the problem
When I load and show rewarded ads, sometimes callback events are not fired when ads are shown from IronSource
#### Steps to reproduce:
Initialize admob, load rewarded ads, show rewarded
#### Relevant Code:
```csharp
// load
AdRequest request = new AdRequest.Builder().Build();
_rewardedAd = new RewardedAd(_adUnit);
_rewardedAd.LoadAd(request);
_rewardedAd.OnUserEarnedReward += OnUserEarnedRewardEvent;
_rewardedAd.OnAdLoaded += OnAdMobAdLoadedEvent;
_rewardedAd.OnAdFailedToLoad += OnFailedToLoadEvent;
_rewardedAd.OnAdFailedToShow += OnAdFailedToShowEvent;
_rewardedAd.OnAdOpening += OnAdOpeningEvent;
_rewardedAd.OnAdClosed += OnAdClosedEvent;
// Show method
if (_rewardedAd != null && _rewardedAd.IsLoaded())
    _rewardedAd.Show();
```
<issue_closed>
Status: Issue closed |
deepimagej/models | 762648993 | Title: Linking to external objects
Question:
username_0: Some of our models are trained using ZeroCostDL4Mic notebooks. Those notebooks are also in the Bioimage.IO. It would be nice to have the connection between objects in the bioimage.io that do not necessarily belong to the same GitHub repository.
Example of our model:
https://github.com/deepimagej/models/blob/6f65fb15f062ed2e4de3de7605aeb414c294a63e/manifest.bioimage.io.yaml#L81
which refers to this notebook:
https://github.com/HenriquesLab/ZeroCostDL4Mic/blob/master/manifest.bioimage.io.yaml#L563
@username_1<issue_closed>
Status: Issue closed |
joelberkeley/spidr | 1147498696 | Title: An `Isomorphic` type for isomorphic shapes?
Question:
username_0: In some cases it's useful to be able to say "this thing works for this shape and any shapes isomorphic to it". Isomorphic shapes include those with extra or fewer dimensions of length 1, e.g. [3, 1, 2], [3, 2] and [1, 3, 2]. It is essentially a cross between `Squeezable` and a subset of `Broadcastable`.
A use case I came across was `ClosedFormDistribution` for `Gaussian`. This is implemented for event shape [1], but could equally be implemented for [] or [1, 1], with something like
```
Isomorphic shape [] => ClosedFormDistribution shape Gaussian where
...
```
and we'd presumably reshape within the implementation as appropriate. Similar could be done for `GaussianProcess` so that we don't need `targets = [1]` |
GKhankles/move-thrower | 864479921 | Title: Design Suggestion - Put User Sign Up on a Separate Page
Question:
username_0: **Summary**:
Put the User Sign Up on a different page to declutter the UI
**Description**:
It’s a design decision, but keeping the User Log-in and Sign Up on the main page makes the UI feel cluttered, especially with all of the other boxes on the main page.
**Severity**:
Trivial<issue_closed>
Status: Issue closed |
firebase/firebase-tools | 748064353 | Title: Firebase Firestore rules path in production
Question:
username_0: <!-- DO NOT DELETE
validate_template=true
template_path=.github/ISSUE_TEMPLATE/bug_report.md
-->
<!--
Thank you for contributing to the Firebase community!
Think you found a bug?
=======================
Yeah, we're definitely not perfect! Please use this template and include a minimal repro when opening the issue. If you know how to solve the issue, please create a Pull Request, and we'd be happy to review it!
Have a feature request?
========================
Great, we love hearing how we can improve our products! However, do not use this template to submit a feature request. Please submit your feature requests to: https://firebase.google.com/support/contact/bugs-features/
Have a usage question?
=======================
We get lots of those and we love helping you, but GitHub is not the best place for them and they will be closed. Please take a look at the guide first: https://firebase.google.com/docs/cli/
If the official documentation doesn't help, try asking through our official support channel: https://firebase.google.com/support/
Additional locations to check for solutions or assistance from the community:
- Stack Overflow: https://stackoverflow.com/
- Firebase Slack Community: https://firebase.community/
*Please avoid duplicate posting across multiple channels!*
-->
### [REQUIRED] Environment info
<!-- What version of the Firebase CLI (`firebase-tools`) are you using? Note that your issue may already be fixed in the latest versions. The latest version can be found at https://github.com/firebase/firebase-tools/releases -->
<!-- Output of `firebase --version` -->
**firebase-tools:** 8.16.2
<!-- e.g. macOS, Windows, Ubuntu -->
**Platform:** macOS
### [REQUIRED] Test case
<!-- Provide a minimal, complete, and verifiable example (http://stackoverflow.com/help/mcve) -->
### [REQUIRED] Steps to reproduce
Following the [docs](https://firebase.google.com/docs/reference/rules/rules.debug), I tried to debug [the path](https://firebase.google.com/docs/reference/rules/rules.firestore.Resource#__name__)
When I'm using `the emulator` _locally_, the path is like `/databases/$(database)/documents/col1/doc1/col2/doc2/...`
so this `resource['__name__'][0]` will be `databases`
and `resource['__name__'][4]` will be `doc1`
for example, this `test` function will return `true` _(locally, using the emulator)_:
```
function test() {
  return string(resource['__name__'][4]).matches('doc1');
[Truncated]
```
Or:
```
function test() {
  return string(resource['__name__'][0]).matches('databases');
}
```
But, when I `deploy` the rules, the `test` function becomes `false`
Is it because the path in production is different? If so, how can I see it? Is there any example of a production path?
### [REQUIRED] Expected behavior
Expected the path to be the same
### [REQUIRED] Actual behavior
Seems there's a difference in the path between the emulator and production
Answers:
username_0: @google-oss-bot updated
username_1: @username_0 thank you for the detailed report!
@username_2 can you assign this to someone who knows enough about the emulated rules engine to look into it?
username_2: @username_0 Could you please try this against the Rules Playground in Firebase Console too? Once you test it against a mock request, you should also be able to inspect the results of the expression in the right panel.
username_0: @username_2 The `test` function is `false` in the Rules Playground in the Firebase Console, but it should be `true`
username_0: @username_2 @username_1 I updated the issue for full details. Thanks
Status: Issue closed
username_2: I believe you're experiencing the error because `resource` variable is `null` in production in the request you're testing. Remember in rules that the `resource` variable is only available if the document already exists in the database -- e.g. for a `get` request, `resource` will be null unless the document `col1/doc1/col2/doc2/col3/doc3` exists in your database. You may have this already in your emulator but not in the production database.
Since you only need the path of the document, I'd suggest you use `request.path` instead. That variable is ALWAYS available and works just like `resource['__name__']`. For example, `string(request.path[0])` gives you `database`.
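For illustration, a minimal sketch of the original `test()` function rewritten on top of `request.path` (reusing the same segment index and document name as above) could look like this:
```
// Sketch: the same check as the original test(), but based on request.path,
// which is available even when the target document does not yet exist.
function test() {
  return string(request.path[4]).matches('doc1');
}
```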
----
Also, FYI, you can use the right panel to see the detailed sub-expressions in the Rules Playground. For example, the following screenshot shows the value returned by the `string(...)` call.

username_0: @username_2 @username_1 `col1/doc1/col2/doc2/col3/doc3` exists in production and it's not null!
And yes I know that `request.path` works fine, but I need `resource['__name__']`, which throws a `null value` error only if it's used for the 3rd sub-collection.
Why did you close it without even validating or simulating the issue? Did you at least try the full example I provided?
username_0: @username_2 do I need to create a new issue for `resource['__name__']` being null if it's used in the 3rd sub-collection in production only?
username_2: @username_0 I did try your example in Firebase Console in the Rules Playground and I tested against all three levels. I found that if `cols1/doc1/cols2/doc2/cols3/doc3` does not exist, then the function will throw a null value error. Once I create it in the Data tab in console, running the simulation again gives me `true`. This is consistent with top and second levels.
I don't want to jump to conclusions, but please make sure to double check the spelling of the paths in the Playground and Data tab to make sure they match exactly. If the issue persists, the best way for us to move forward is for you to create a minimal repro (e.g. a GitHub repository with sample rules and a script that reads data from the 3rd collection, on web or Node.js client (not admin)). Since there is nothing to be done in the Firebase CLI or Emulator Suite, I've also closed this issue -- production issues are better handled by [Firebase Support](https://firebase.google.com/support). |
micrometer-metrics/micrometer | 523735516 | Title: Thread metrics should be whole numbers, not decimals
Question:
username_0: [ExecutorServiceMetrics](https://github.com/micrometer-metrics/micrometer/blob/master/micrometer-core/src/main/java/io/micrometer/core/instrument/binder/jvm/ExecutorServiceMetrics.java) creates gauges with double values, causing metrics like pool size to be exported as a double. This doesn't make sense and creates problems with averages over time.
Answers:
username_1: All metrics are doubles. See https://stackoverflow.com/questions/58068832/why-is-a-counter-a-double-and-not-a-long
Feel free to add specific details of your use case, and specifics for your averages-over-time problem. Are you referring to potential loss of precision?
username_0: Perhaps the following screenshot from Grafana would make the problem clearer.

username_1: 
That is a Grafana behavior. The values were reported at their whole values (1.0, 2.0, etc). The avg is computed by Grafana based on the window size you are displaying.
username_0: It seems it's possible to cast the values to whole numbers on Grafana as shown below. I'll close this ticket.

Status: Issue closed
|
microsoft/DeepSpeed | 739590945 | Title: Question about the definitions in block sparse attention
Question:
username_0: Hi, I have some questions regarding the block sparse attention.
If I understand the description of the API correctly, `block` is the block size (i.e., number of tokens in a block) while `num_local_blocks` denotes the number of blocks (`#tokens_per_window = block * num_local_blocks`) in a local window. So no matter which value (`unidirectional` or `bidirectional`) I choose for `attention`, the tokens within a block will attend to each other?
Answers:
username_1: Yes, that is correct. Tokens within a block always attend to each other, no matter if it is uni/bi-directional. However, if you look at a local window, in the unidirectional case you can consider that tokens within a block only attend to other tokens in their own block and in the blocks before them in the same local window, while in the bidirectional case all tokens in the local window (no matter which block they are in) attend to each other.
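To make the distinction concrete, here is a small standalone sketch (plain numpy for illustration only, not the DeepSpeed API; the function and parameter names are assumptions) that builds the token-level mask for one local window as described above:
```python
import numpy as np

def local_window_mask(block, num_local_blocks, unidirectional):
    """Token-level attention mask for a single local window.

    block            -- number of tokens per block
    num_local_blocks -- number of blocks in the local window
    A True entry at [i, j] means query token i may attend to key token j.
    """
    tokens_per_window = block * num_local_blocks
    mask = np.zeros((tokens_per_window, tokens_per_window), dtype=bool)
    for i in range(tokens_per_window):
        for j in range(tokens_per_window):
            if unidirectional:
                # A token attends to every token in its own block and in the
                # blocks before it within the same local window.
                mask[i, j] = (j // block) <= (i // block)
            else:
                # Bidirectional: every token in the window attends to every other.
                mask[i, j] = True
    return mask

# Example: block=2, num_local_blocks=3 gives a 6x6 window mask.
print(local_window_mask(block=2, num_local_blocks=3, unidirectional=True).astype(int))
```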
username_0: So if we want to use it for machine translation, how do we set these hyperparameters? For machine translation, tokens within a block cannot attend to each other in the decoder.
username_1: You can use the attention mask to neutralize it; in such cases, the attention mask is of dimension [leading dimensions, S, S], in which S stands for sequence length.
username_0: I see. That is indeed one option. |
flutter/engine | 111938895 | Title: Call NetworkInterface.list() then crash application!!
Question:
username_0: The following code causes the application to crash.
[source]
```dart
import 'dart:io';
import 'dart:convert';
import 'dart:async';

main() async {
  List<NetworkInterface> interfaces = await NetworkInterface.list(
      includeLoopback: true, includeLinkLocal: true);
}
```
[project]
https://github.com/username_0/hello_skyengine/tree/master/dartio_networkinterface
Answers:
username_1: This API is not implemented in Dart for Android. See Socket::ListInterfaces in dart/runtime/bin/socket_android.cc.
Socket::ListInterfaces is also not setting the OSError. CObject::NewOSError then consumes the null OSError, resulting in a segfault.
Status: Issue closed
|
22116/debian_bridge | 524116570 | Title: IO Error occured
Question:
username_0: No matter what command I issue except for -h and -v, it shows this error and quits.
`ERROR debian_bridge_cli::starter > IO errors occured: EOF while parsing a value at line 1 column 0`
Answers:
username_1: @username_0 could you provide more details by running it with extended verbosity level `-vvv`? It would also help if you mention your OS.
Status: Issue closed
username_1: Closed as there was no response
FriendsOfPHP/security-advisories | 46839314 | Title: What constitutes a Security Advisories?
Question:
username_0: I think it would be a good idea to draw up a document that can help people decide what a Security Advisory is.
Answers:
username_1: Closing as there are some discussions in the FIG group about that topic and some PSRs are on their way.
Status: Issue closed
|
modin-project/modin | 375672751 | Title: TypeError when importing modin
Question:
username_0: ### System information
- **Amazon linux (Centos)**:
- **Modin installed from binary**:
- **Modin version**: 0.2.2:
- **Python version**: 3.7.0:
- **import modin.pandas**:
### Describe the problem
I am getting an import error because of an unexpected argument `use_raylet`
### Source code / logs
```
In [1]: import modin.pandas
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-d42df328a6c7> in <module>()
----> 1 import modin.pandas
~/anaconda3/lib/python3.7/site-packages/modin/pandas/__init__.py in <module>()
78 include_webui=False,
79 redirect_worker_output=True,
---> 80 use_raylet=True,
81 )
82 except AssertionError:
TypeError: init() got an unexpected keyword argument 'use_raylet'
```
Answers:
username_1: What is your Ray version?
```python
import ray
ray.__version__
```
username_1: Hi @username_0, I did a bit more digging. It looks like you may have recently built Ray from source. We depend on the pypi release of Ray, and the `ray.init` API changed slightly since the last Ray release. If you want to resolve this issue, you will have to install Ray using pip (`pip install ray`).
cc @robertnishihara
username_0: I have updated the original issue report with the version. It is 0.5.3
username_0: installing `ray` via pip doesn't seem to resolve the issue either. I will try to check with an older version of ray and report back
username_1: You may need to `pip uninstall ray; pip install ray`.
username_0: This is weird. I am unable to install `ray` at all using `pip`.
```bash
dev-machine % pip install ray
Collecting ray
Could not find a version that satisfies the requirement ray (from versions: )
No matching distribution found for ray
(18-10-31 20:11:08) <1> [~]
dev-machine % pip install ray==0.5.3
Collecting ray==0.5.3
Could not find a version that satisfies the requirement ray==0.5.3 (from versions: )
No matching distribution found for ray==0.5.3
(18-10-31 20:11:10) <1> [~]
```
username_0: Was able to install using `pip install -U https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-0.5.3-cp37-cp37m-manylinux1_x86_64.whl` based on instructions [here](https://ray.readthedocs.io/en/latest/installation.html)
But the problem of the `use_raylet` persists and like it says in the URL, it installs version 0.5.3
username_1: Oh, it is a Python 3.7 issue. Ray 0.5.3 on pypi did not ship with Python 3.7 support.
There is a PR that will resolve this `use_raylet` issue: https://github.com/ray-project/ray/pull/3176, and once it gets merged the nightly wheels (where you installed from) will reflect that change.
In the meantime, it is possible to install this way:
```
pip install git+https://github.com/username_1/modin/@no_raylet
```
I pushed a branch that removes the `use_raylet` option to my fork of Modin. Everything else is identical to current master.
Status: Issue closed
username_0: Success @username_1
```bash
python -c "import modin.pandas"
Process STDOUT and STDERR is being redirected to /tmp/ray/session_2018-10-31_20-41-01_29838/logs.
Waiting for redis server at 127.0.0.1:12739 to respond...
Waiting for redis server at 127.0.0.1:17503 to respond...
Starting the Plasma object store with 6.4900694009999995 GB memory using /dev/shm.
``` |
ampproject/amphtml | 892042483 | Title: Reduce size of v0.mjs
Question:
username_0: **summary**
There are a few straightforward steps we could take to reduce the binary size of `v0.mjs` from 61kb --> 48kb (20% drop).
1. Move `HistoryBindingVirtual` from v0.mjs to viewer-integration: 2kb saving.
2. Move `Performance` and `Viewer` services to viewer-integration: 3kb savings.
3. Completely migrate from `R0` to `R1` for all 0.1 components: at least 6kb savings, and likely significant runtime speedup.
4. Enabling `ADVANCED_OPTS` in Closure when we are fully type checked: 2kb savings.
Answers:
username_1: Hi username_0, I am confused about 'migrate from R0 to R1 for all 0.1 components'. Does it mean upgrading components with Preact? Will it benefit performance, and how? I'm looking forward to your answer.
username_0: Hey @username_1, it does not mean that. It's converting all of our components to using the R1 Resources System which was created as part of https://github.com/ampproject/amphtml/issues/31915. By default all components use R0.
https://github.com/ampproject/amphtml/blob/27a4ef3c53b21044d82973b20891a95179d1075f/src/base-element.js#L108-L124 |
ioBroker/ioBroker.zigbee | 925086156 | Title: Discussion: Breaking change and Version 2.0
Question:
username_0: Hello all,
the ioBroker dev team introduced a new functionality for the ioBroker which, if implemented, will force adapters which deal with physical devices to correctly set roles and types for all states in order for their device detector to be able to identify the devices automatically.
This may need a breaking change for the Zigbee Adapter as not all devices currently follow this method, and for the most case, the Zigbee devices do not expose some of the "suggested" states as "battery Low" or "unreachable" in the way the device detector requires it.
Due to this, I would propose to set up a dev branch towards ioBroker.zigbee 2.0 with 3 goals in mind:
- Auto-configuration on new installs - only type and port should be needed to configure, the remaining configuration data should be possible to obtain from defaults or from the herdsman internal backup structure. Implementing this will require a change in the way the herdsman is initialised, paired with some critical changes in the Zigbee-herdsman to support this. The benefit for the ioBroker user would be that default new installations would automatically use "safe" network id, pan id and ext-pan id, and, on supporting coordinators allow for an autoselection of the Zigbee channel based on a network scan.
- Use of Herdsman Exposes for as Many devices as possible. The Plan here would be to flat out remove all non-essential states and devices definitions from the Zigbee adapter when generating the development branch and to rely on people with devices with issues to help us to get the most out of the exposes functionality, leaving explicit device configurations for devices which do not play well with the expose setup (again, the PTVO is a good example for this)
- Compliance with the new device definitions for all devices which are handled completely by exposes.
Thoughts on this ?
A.
Answers:
username_1: Very important ... nothing will be "enforced" and also "battery low" and such are mostly optional fields ...
In fact it should better read: It would be beneficial if created devices and channels use the correct roles and types for relevant states ... :-))
username_0: Thank you for the clarification. I updated the post above to reflect this.
The way I see it, the benefits of setting correct roles and states for the device detector to use are so high that they outweigh the downsides of a breaking change in the Zigbee adapter - which was the reason for me to open this issue.
A.
username_1: I completely agree!! Especially for inexperienced users the benefits are outstanding! I just wanted to make it clear :-)
Mainly means: If a device can not offer a certain (optional) state then you do not need to struggle around and try to find something ... just leave it out. The required ones are the important ones
username_2: The ability to use the detector has been around for a long time in iobroker.devices and iobroker.iot, so I'm surprised that something new awaits us :)
OK. We try to set the correct roles and types when creating states. Not for devices though. Or am I wrong?
If somewhere the role and type is set incorrectly - let's set it correctly! I don't see a problem with this, it can be done in the current codebase.
You can refuse to use the descriptions of the devices specified in the adapter. But it is better to do it gradually, to make sure that we do not lose some functionality.
I do not see anything in the proposed changes that seriously changes the logic of the adapter, so I suggest not bumping the version to 2.0, as users will look for something new for themselves but will not find it :)
username_0: One of the ideas was to go away from explicit device definitions with this, so that we don’t have to deal with editing each and every device. That would swap a lot of states, hence the 2.0 suggestion.
I don’t mind not to use that Version number, but I really would love to drop most devices from devices.JS and clear out the states.JS as well
a.
username_2: You can delete it gradually, making sure that the new states of the device to be deleted remain functional. Or it can be tough - deleting everything and then adding the necessary :)
What we are discussing now are our internal problems :) Better, let's think about some innovations that users will really need. Any ideas?
* I see the need for a correct backup-restore procedure so that you can unload and transfer it to another computer or another stick.
* It is also necessary to revise the "binding" and add "reporting".
* I would also add the ability to change the picture (along with renaming the device so that you can insert your picture)
* In the dashboard the ability to sort and filter by the date of addition, by the signal level, by the battery level.
One more innovation, I would attribute as a whole to the iobroker - event handlers. Now, to see what function is executed when the button is pressed, you need to go to the scripts and look for the code there. It would be nice to see the handler code like "history" in admin states list - right next to the state on which it is used. And so that a new handler can be added right next to it.
username_1: In fact, yes, it has been there for some time, and we clearly see that its use is kind of limited because of the described fact that many adapters are not structuring the states as needed ... And that's why the topic was discussed in the last developer meeting, with the call to the devs to check their adapters.
That change also might require splitting devices meaningfully into channels or such - but this is a big "it depends". So yes, ideally it is "just" a fix in "role" and "datatype" (if relevant). So whether it really becomes breaking depends on where the mismatches are.
To yourt last innovation idea: The current structure with each adapter allowing to have own event handlers on state-ids and such this will not really work that well. For customer logic it is mainly the JavaScript Adapter relevant, bt also here you can use regex and other stuff to subscribe a script to many object-ids or also a subscribe can be "once" ... and I do not start with Aliasses and subscribes to them which in the end is also on the read object... so this is tooo complex and also the information is too spreaded into all adapter processes that I do not see a good way to achieve this.
username_1: PS: another option could be to leave existing states and add new compatible ones. Could especially work for unreach and lowbat, maybe. Would then also not be breaking
username_0: I am aware of this. Unfortunately, there are a number of strange implementations for specific devices which are not mirrored to others of a similar type, and the zigbee Adapter already has a large number of states which are of questionable use in the long run.
I would rather go ahead and make one breaking change and then stick with the correct definitions than inflate the adapter with duplicates.
A.
username_0: This sounds interesting. I would want to add 2 things to that:
- Display the room on the dash card
- Offer an option to make the device card the default card rather than the dash card.
Alongside this we would need a fix for device cards without content on OS X devices (I have seen this for Safari, Firefox under certain conditions, Chrome under certain conditions) but I lack the skillset to fix it.
A.
username_2: I'll take it upon myself
username_2: Is it similar to external converters? There is already such functionality in the adapter now.
Status: Issue closed
|
kivy/python-for-android | 158500339 | Title: Window pans after calling activity
Question:
username_0: After calling the zxing QR code scanner, the window pans, exactly as if a soft keyboard was required.
intent = Intent("com.google.zxing.client.android.SCAN")
This occurs with new_toolchain.
It occurs whether a QR code was scanned or the activity was cancelled.
Answers:
username_1: To add, this works as expected in the old toolchain.
username_2: Does this happen specifically with SDL2? Or with pygame as well?
username_1: only tested with sdl2 bootstrap |
hakimel/reveal.js | 670867006 | Title: How to control slides over network
Question:
username_0: Hello
Is there a way to control the slides presented over the network from only one computer?
I mean anyone who has the IP address can view it in the browser, but they could also navigate the slides. I wonder if there is a way to make the presentation read-only over the network, so that anyone who has the IP address can only view it (and can't navigate themselves).
Thank you<issue_closed>
Status: Issue closed |
i18next/react-i18next | 377103453 | Title: Future of the NextJs example
Question:
username_0: We can discuss the future of the NextJs example here, as it has gotten a lot of attention lately, and has some shortcomings.
**Current problems:**
1. Language subpaths are causing a lot of issues
2. Initial language and the translation HOCs are not working well together
**Future goals:**
1. Refactor server and `Link` usage to match [this example](https://github.com/zeit/next.js/issues/2833#issuecomment-414919347), thus preventing 404s and fixing #569
2. Unify translation workflow, and make it less prone to user error: expose a single `withNamespaces` HOC that supports tree traversal, and make it abundantly clear that this is the _only_ way to translate content. Fixes issues like #603.
I believe the best way forward is a complete rewrite to simplify a few things. The first step is to decide if we want to support language subpaths at all.
Are there any other issues people are aware of currently?
**I would like to compile a full list of current issues we are facing here, as well as any new features, or features we plan to remove, before proceeding with any work on the NextJs example.**
Only then can we have a clear plan and execute it.
Answers:
username_0: @username_1 Funny, as we were discussing privately - there _is_ a `with-react-i18next` example in the NextJs repo. I see you contributed there as well. Is this supposed to be identical/synced with our example here?
username_1: @username_0 the one on next.js is a lot older (v7) and I planned to update it as soon as we got this one here stable (in the past we had just a readme here pointing to the sample on next.js) - but I decided to fix issues here - as I don't follow all the issues on next.js and think issues / questions related to it just pollute the next.js repo.
In the future I would prefer having a link from next.js to the sample here as it would make it easier to maintain. But I'm not yet sure what the opinion of the next.js team is - as they seem to prefer having all the samples there??!
username_1: Important note: This is an example not a boilerplate.
Possibly one problem we have is that we try to solve some stuff very conveniently, e.g. by providing the withI18next HOC - which magically does most of what needs to be done to make it work. My bet is that most people landing here with issues come from another example (withApollo, withRedux, withMobx, whatever) and add the stuff shown in our sample, which in most cases just won't work without proper adjustments - so eventually we should focus more on showing what needs to be done to solve it (teaching) - instead of trying to hide away the more complicated things... but I'm not completely sure - this is just something that came to my mind in the morning.
username_2: I agree with @username_1. Teaching users to understand how things work is the hardest and most important thing to be done. That way more users could help out with issues, as well as contribute back to the library.
Once users understand how things work basically, they can customize it themselves for their needs.
username_3: Hi,
I've been testing this sample in my project for a few days, it is mostly working really great, so thank you !
I might have some input regarding a few issues :
- Concerning language subpath, in your _app.js, you do use router.replace, but you provide the same parameter to both url and :as . As far as I understand this, this is wrong, you need to provide the actual path to your file in url (meaning /index.js) and the "visual" one in :as (meaning /en/index.js).
The way you do it triggers a server reload since the client can't find /en/index.js.
You may want to provide some routing there as well to handle conversion from /post/945 to /post?id=945
- I corrected this behavior in my project for changing language quite simply as follows, and the language change is working as I suspect it should (no server reload):
```js
i18n.on("languageChanged", lng => {
  if (process.browser) {
    const originalRoute = window.location.pathname;
    const correctedVisualPath = originalRoute.replace(/\/.*?\//, `/${lng}/`);
    if (correctedVisualPath !== originalRoute) {
      let actualPath = correctedVisualPath.replace(/\/.*?\//, "/");
      if (correctedVisualPath.startsWith(`/${lng}/test`)) {
        actualPath = '/test?search=' + correctedVisualPath.match(/[^/]*?$/)[0];
      }
      Router.replace(actualPath, correctedVisualPath, {
        shallow: true
      });
    }
  }
});
```
- I didn't yet find a way to solve the similar issue on a normal Link, so for now I just write every Link as follows, and it works quite nicely :
`<Link url='/mypage' as=/${lng}/mypage><a>Link</a></Link>`
- I am also experiencing the issue described in #603. It seems to only happen for the default language and on "first load": if I run `npm run build / start`, the very first connection won't display the default language and generates a lot of .missing.json files. If I hit Ctrl-F5, the default language is displayed. After that, any connection from any computer displays the default language correctly.
- Finally, loading a page with a new translation file seems to trigger some kind of client-side rerender, resulting in a "white flash". I suppose this has something to do with the way translation files are loaded. This doesn't seem to break anything, but it's not "smooth".
Thank you again,
username_4: Since your example is easier for you to maintain, you should link from nextjs examples to your example. I suspect the nextjs team will be fine with that since it reduces their maintenance and is a better user experience to see the most up to date example. There is some precedent for that here: https://github.com/zeit/next.js/tree/master/examples/page-transitions
Is there a way to detect broken ssr and throw exception in dev mode? It could be a good idea to consider doing that if it is possible so that people know something is not working as expected before shipping the product to production.
The example _app.js needs a comment linking to a page describing how to add i18next to a project already using something like Apollo. I tried adding i18next to my nextjs apollo project and could not fix the broken ssr despite reading all the relevant issues I could find only hinting I needed to merge something. I finally had to give up and tried adding react-intl example pattern to my apollo project and it worked fine in ssr. If you could try and add i18next to the nextjs with-apollo example and document exactly what it took to get it to work, it would help a ton of people be able to use i18next rather than be forced to react-intl.
username_0: Couple things here:
1. The new rewrite will hopefully not contain an `_app.js` file.
2. This is a NextJs example. We are not doing **anything** else. No Apollo.
Please, let's stay on topic!
username_0: @username_4 I have written many times that SSR is an advanced topic, and we expect users to understand and be aware of this.
This issue is to discuss the future of the NextJs example. It has nothing to do with Apollo, nor anything to do with some `next-i18next` repo. Unless your comments are _directly related_ to that, please post them elsewhere.
username_4: My comments are all directly related to the future of the example from my perspective...I get that we both come at things from very different angles. Adding some supporting documentation referencing common issues the examples choose not to cover would be so helpful...I am not sure why the hostility.
Please try and understand the value of abstracting out dependencies and code to make it easier for users to onboard and maintainers to roll out fixes to the "glue" code between react-i18next and consumer projects. This is a pattern that nextjs users are used to: next-seo, next-offline, etc each wrapping sensible defaults and dependencies of things. Doing this would make a huge difference to the way you structure a future example so I just wanted to put it out there as an idea before you started just in case.
username_0: For those that want to follow the development of the new example, it can be found here (temporarily):
https://github.com/username_0/react-i18next-with-nextjs
username_1: @username_4 @username_0 I have not read through the full thread -> but just my 50 cents on why it works with react-intl without issues. => You bundle the translations with your application - therefore there is no async loading (namespace splitting) going on - all synchronous - all easy as a piece of cake. Just add the translations in the react-intl sample as a lazy import and watch everything start breaking.
So we could just remove i18next-xhr-backend and language detection -> then we get free from even doing custom stuff in the server.js. Everything gets simple - no problems.
But this sample is about adding i18next in a production-ready way - not making it dead simple for getting-started demos by oversimplifying the real challenges of SSR and i18n. My problem is what I see in all the errors, issues and questions... those people using next.js most of the time have not the slightest idea what they are doing - how initialProps work - because it's so damn easy to get started with one of the samples.
username_1: @username_0 @username_4 so I really would say the sample should be rather straightforward for learning what needs to be done - but integrations with all the Apollo, MobX, Redux, whatever need to be solved in userland. Extending the react-i18next documentation with samples / hints for doing more complex stuff will be very welcome - but would go too far for the sample.
username_0: @username_1 Yes, we're on the same page and have been from the start. **It's a sample, not a boilerplate.** We intentionally do not want to "abstract out code". That would defeat the purpose of an example.
The new example I'm working on should be simpler and more clear, and will contain a _lot_ of comment-based documentation.
username_0: It seems like everyone (at least so far) thinks the language subpath functionality should remain in the example, so I will rewrite that as well.
username_0: The new example is coming along very nicely, and is in fact nearly done.
@username_1 I've hit one interesting problem that I need your help with - something we did not even realise was occurring in the original example:
Let's assume we've got a namespace called `contact-us` that contains some content for the contact page. If a user arrives at the root URL which does not rely on the `contact-us` namespace, and then the NextJs app navigates to the contact page, the `i18next-xhr-backend` will indeed fetch the relevant `contact-us` namespace JSON file. This was clear in the original example.
However, NextJs has no idea about i18next translation JSON network requests, or any of that. The result is that the incoming route (in this case `contact-us`) will be displayed before the translation data is loaded over the network.
The place to block a page transition while waiting for data is inside the NextJs `getInitialProps`. We can `await` any promise. However, I cannot seem to find anything useful to await.
I tried `i18n.hasResourceBundle` and `i18n.getResourceBundle`, but both are sync functions that return information about data already in the store.
I tried polling `i18n.hasResourceBundle` in a loop, but it seems that `i18next-xhr-backend` doesn't actually start making network requests until the React tree that requires namespaces is rendered, so that won't work either.
In short, it seems we're going to have to trigger network requests manually. It should be trivial, assuming there is a way to do this. Is there?
username_0: @username_1 I was able to come up with a hacky solution that requires one to initialise the `i18next-xhr-backend` via class construction so that we can have a reference to it and later invoke class methods:
```js
if (!i18n.nsFromReactTree.every(ns => i18n.hasResourceBundle(i18n.languages[0], ns))) {
  await Promise.all(
    i18n.nsFromReactTree.map(ns => {
      if (!i18n.hasResourceBundle(lng, ns)) {
        return new Promise(resolve => {
          i18nextXHRBackend.read(lng, ns, (error, loadedNS) => {
            i18n.addResourceBundle(lng, ns, loadedNS);
            resolve();
          });
        });
      }
      return Promise.resolve();
    })
  );
  initialI18nStore = i18n.store.data;
  initialLanguage = i18n.language;
}
```
Hopefully you can understand this. It's a quick solution and is nasty nested code. Basically it's calling the `read` method on the `i18next-xhr-backend` class instance manually, and then after receiving the JSON, manually calling `i18n.addResourceBundle`.
This works 100%. However, it's clearly a workaround and I feel it's bug prone and likely to break outright in the future.
Let me know if there's an official way to achieve this that I'm missing, or if you would suggest a different approach.
This is the final hurdle to releasing the new example.
username_1: that is very hacky and won't work in production -> as soon as you've got two components that both need that namespace, it will load it twice. i18next handles resource loading very nicely by doing proper deduplication. => https://github.com/i18next/react-i18next/blob/master/src/NamespacesConsumer.js#L116 is how it is solved in the current version
The loadNamespaces function has a callback that will be called when translations were already loaded or got loaded.
The render prop or HOC can both be set to wait before rendering their inner content, or to pass down a prop signaling readiness - if that helps.
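As a side note, a minimal sketch of awaiting that callback-based API from `getInitialProps` could look like the following (the helper name and surrounding variables are assumptions for illustration, not part of the actual example):
```js
// Hypothetical helper: wrap the callback-based i18next API in a promise so that
// NextJs' getInitialProps can await it before rendering the incoming page.
function loadNamespacesAsync(i18n, namespaces) {
  return new Promise((resolve, reject) => {
    i18n.loadNamespaces(namespaces, err => (err ? reject(err) : resolve()));
  });
}

// Usage inside getInitialProps (sketch):
// await loadNamespacesAsync(i18n, i18n.nsFromReactTree);
```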
username_0: Thanks, I'll take a look at `loadNamespace` tomorrow. I think I tried it, but quickly passed as I didn't realise it did not return a promise. That's good news though.
By the way, none of this is happening inside the HOC, and will take place once per page render - I've refactored things quite a bit.
username_0: So, the [new example](https://github.com/username_0/react-i18next-with-nextjs) is now ready for review.
Some noteworthy points:
1. In the end, it was necessary to use `_app.js`. I thought we could get away with just using `_document.js`, but it's a bit more complicated than that. The way `_app.js` is set up, most users should never have to touch/change anything for basic use cases.
2. Related to `_app.js`, the new HOC approach has completely abandoned `NamespacesConsumer` in favour of `withNamespaces`, and the `I18nextProvider`.
3. In fact, we could have gotten away with using `withNamespaces` and `I18nextProvider` directly out of `react-i18next` core, were it not for the need for our `nsFromReactTree` array, which keeps track of all required namespaces, given a React component tree - irrespective of whether components are NextJS pages or not (this is crucial).
4. I also had to make modifications to our usage of `Link`. @timneutkens example was very helpful - we need to parameterise our route params and also pass an `as` argument to achieve true SPA with multi-langs. In the new example, you can switch languages in a fully-SPA manner without any hard reloads.
For anyone that has time, please do review the code, experiment with the app, and let me know about any issues you might find. I've put the example on a [free Heroku dyno here](https://react-i18next-with-nextjs.herokuapp.com/) (it can take some time to boot).
username_1: @username_0 looks good to me...one thing I ask myself shouldn't we just add that https://github.com/username_0/react-i18next-with-nextjs/blob/master/lib/withNamespaces.js#L23 to the current implementation in the main repo -> does not hurt and for v10 we planned the same to better support SSR out of the box.
username_0: @username_1 Yes, that's exactly what I was thinking as I wrote it. It's just a matter of pushing into a simple array. The thing is, there are definitely use cases where users would want to clear that array - new route, etc.
For this example (and probably in real projects) I decided that clearing the array on a new route doesn't make much sense as it only applies to clientside navigation, and the browser is already going to have the previous-route's namespaces anyways.
Anyways, after getting to know the `react-i18next` library better, in my opinion `I18nextProvider` and `withNamespaces` is the best approach currently available, and `nsFromReactTree` works flawlessly if you stick to it. I'm not sure how it would work if people are mixing in `NamespacesConsumer` etc.
username_1: @username_0 fun fact withNamespaces uses NamespacesConsumer: https://github.com/i18next/react-i18next/blob/master/src/withNamespaces.js#L33
So doing same `nsFromReactTree ` in the sample could be done in consumer https://github.com/i18next/react-i18next/blob/master/src/NamespacesConsumer.js#L25 (just in the new way - not using reportNS)
username_0: Ah yes, I remember seeing that now. Must have slipped out of my head immediately.
As far as I can tell, the new example solves all open related bugs/issues.
@username_1 Is there anyone else whose review you'd like to request before we think about merging?
username_1: @username_0 guess we could merge it...and from there improve with the promise based i18next,...
username_0: I think the promise-based release would only affect [this line](https://github.com/username_0/react-i18next-with-nextjs/blob/master/pages/_app.js#L84) of the NextJs example.
But sounds good, I will put this work into a PR soon.
username_0: #613
username_5: @username_0 I'm not really a pro at i18next config, but I had to set `fallbackLng` to null because in production the front-end always switched back to the `DEFAULT_LANGUAGE` when the server-side language was right. Otherwise, the new example looks great, thank you very much.
username_2: The last example failed on my project. I decided to roll back to the old working version.
I'm afraid of trying out new things here. Such lengthy and hopeless hours of debugging without any chance to get things done.
Breaking without any chance for you to fix it. It's this library now.
I'm not sure...
username_0: @username_5 That sounds like a bug. Can you help me to reproduce that? If setting `fallbackLng` to `null` fixes your problem, that potentially sounds like an issue with `i18next-browser-languagedetector`.
username_0: @username_2 I am sorry to hear you are frustrated. I understand - creating SSR React apps with localisation and code splitting is not an easy task.
We are doing our best to improve the example and help other developers as much as we can.
I will personally help you with your project if you'd like - just email me directly.
username_1: @username_0 @username_5 could it be that initialLanguage does not get passed down? But we really need some steps to reproduce.
@username_2 sorry for the frustration - if the old system worked, there is no need to update, I would say. We just try to make the new sample more of a learning point - looking at the last issues related to next.js and react-i18next, most cases were related to changes made and combining other next.js samples without the know-how to "glue" things together -> the sample here is no boilerplate, and it is not a "just put everything in and it will not break under any changes" kind of thing.
username_2: @username_0 @username_1 Really appreciated all your efforts. A thank you is not enough.
But we should stop all the pain once and for all by introducing a `next-i18next` package.
I think you have enough deep knowledge to make such thing happen.
Our goal is to minimize the interface to integrate/update to existing project.
username_0: @username_2 I believe this is what @username_4 was asking for as well. Now that we have discussed and completed the example here, maybe it is time to consider this.
What would a `nextjs-i18next` package be? There is _way_ more going on than being able to simply do a `yarn add nextjs-i18next` and being "done". If you look through the new example, you will see how many areas we need to touch. I suppose a package _could_ be possible, but we would need to create a HOC to wrap `_app.js` so that we have access to `getInitialProps`, as well as some other things.
username_5: @username_0 @username_1
In the `config.js`
```
...
detection: {
order: ['header', 'querystring'],
},
...
```
And set `fallbackLng: DEFAULT_LANGUAGE`,
In my case, my Google Chrome language settings is `fr` but `DEFAULT_LANGUAGE` is `en`. So what happens is:
1. Reload the page `cmd + R` (back-end refresh)
2. I see `french` translations
3. It changes to `english` translations (not good, I should see french)
Both my `i18n.language` and `initialLanguage` say `fr` when I debug.
Let me know if you need more info
username_5: Note that set `fallbackLng` to `null` solves the problem.
username_0: @username_5 We only have English and German content in the example. What do you mean you see French translations? Do you have a repo I can check out and reproduce this with?
username_3: @username_0 Thanks for the work; the new example, after some work to make it as I wished, is working almost perfectly.
Small question: is it normal that the language change event is fired at every page change? Seems weird (maybe something I did wrong, I "glued" a few things on top of i18n)
username_5: @username_0 I set up my own website following the example but I can't share the code. I changed the detection because it's not what I wanted. When I change my settings in my browser, I want it to change right away when I refresh. I'm not sure what value the cookie setting brings; can you explain this please?
username_0: @username_5 Please open a separate issue if you believe it's a bug, or post your question on StackOverflow. It sounds like a custom-implementation concern.
username_5: I won't open anything since the example works fine for me and I have no idea what to asks, I'm just giving you feedback. I can try to reproduce the issue with the example maybe?
username_0: @username_3 That is a great point, I had not noticed that. I believe it's because `initialLanguage` is [being set on a new page render](https://github.com/i18next/react-i18next/blob/master/example/nextjs/pages/_app.js#L111).
@username_1 Is this expected/normal? Should we avoid resetting `initialLanguage` on the clientside?
If so, should be an easy fix.
username_0: @username_5 If you can reproduce the issue with the example, that would definitely be a bug. In general, use of cookies is the most reliable method I've found of sharing lang state between client and server. It would make sense in your case that your client reverts to English, because it isn't reading headers.
username_5: It's probably the cause, I guess not many people will change their chrome language settings.
username_0: It's irrespective of your browser settings.
username_5: What do you mean? Google Chrome uses language settings to set the languages header
username_1: @username_0 yes...i guess we should run that only once -> https://github.com/i18next/react-i18next/blob/master/src/utils.js#L22
doing that on the hooks version (meant it was in the <v10 too) -> https://github.com/i18next/react-i18next/blob/master/src/hooks/useSSR.js#L3
username_0: Ah, so it's [this line](https://github.com/i18next/react-i18next/blob/master/src/utils.js#L30) that's causing the change function to fire?
Do you think performing a check against multiple `initialLanguage` calls belongs in `react-i18next` core, or user land (the example)?
username_1: @username_5 https://github.com/i18next/react-i18next/blob/master/example/nextjs/config.js#L29 -> caches on cookie and detect is needed to sync -> you still can have the header setting on second position (eg. server not yet getting language via cookie - inital set will be from header)
username_1: @username_0 i would say core - easy fix in utils
username_0: Something like:
```js
if (props.initialLanguage && !props.i18n.language) {
  props.i18n.changeLanguage(props.initialLanguage);
}
```
Is that what you had in mind? Can create a quick PR, or if you want to commit directly.
Thanks again @username_3, will be fixed shortly.
username_1: @username_0 will set a variable to check it gets set once and only once. Will make a patch version in the next 5min.
username_1: ```js
let initializedLanguageOnce = false;
let initializedStoreOnce = false;
export function initSSR(props, setIsInitialSSR) {
// nextjs / SSR: getting data from next.js or other ssr stack
if (!initializedStoreOnce && props.initialI18nStore) {
props.i18n.services.resourceStore.data = props.initialI18nStore;
if (setIsInitialSSR) props.i18n.options.isInitialSSR = true;
if (props.i18nOptions) props.i18nOptions.wait = false; // we got all passed down already
initializedStoreOnce = true;
}
if (!initializedLanguageOnce && props.initialLanguage) {
props.i18n.changeLanguage(props.initialLanguage);
initializedLanguageOnce = true;
}
}
```
looks ok?
username_3: damn that was quick ! thx ! 👍
Status: Issue closed
username_0: I think we have accomplished everything necessary/related to this topic. Future bugs/problems with the example can be opened as new issues.
If anyone is indeed interested in a `next-i18next` boilerplate/npm package, please do add your suggestions over at [the new repo](https://github.com/username_0/next-i18next).
username_6: Well done! 🥳👍
username_0: Hello everyone - I've just published the initial release of [next-i18next](https://github.com/username_0/next-i18next) on npm.
It accomplishes everything the example does, but should be much easier and more flexible to incorporate into your existing projects, or new projects.
If anyone wants to start using the package as an early tester, I would be very appreciative of any feedback.
Also @username_1 perhaps after a few minor releases over there and a feeling of stability, it could be worthwhile to link to the package from somewhere in the `react-i18next` repo.
username_7: Easier maybe but what makes it "more flexible"? We don't know how `next-i18next` works internally and because of this I don't see how to tweak its behavior.
Maybe the old example should also be there besides a reference to `next-i18next` package?
username_0: Hello again @username_7. The `next-i18next` package is entirely open source - you're welcome to read through and see how it works internally. As I mentioned in https://github.com/username_0/next-i18next/issues/347, incremental features/changes via PR are very welcome from anyone!
Part of the problem we had with having an example in the `react-i18next` repo is that it became very difficult and time-consuming to maintain. If you are feeling up to taking on this task, I'm sure @username_1 would be happy to consider the possibility.
username_1: @username_7 @username_0 sure always open for suggestions...honestly I would suggest @username_7 looking deeper into it by making an example without using next-i18next (getting it right is harder than it seems).
In the past, we just had a sample on react-i18next and there were tons of issues from people applying what they saw in the sample to their own project, but some small yet very important piece was left out.
So next-i18next gets my full respect - for the work that was put into it by @username_0 and the community behind it, and also for what it achieved. |
jmazzi/crypt_keeper | 263233806 | Title: Ruby 2.4 - can't modify frozen String
Question:
username_0: Ruby: 2.4
Rails: 5.1.4
Testframework: Minitest
Crypt_keeper: 1.1.1
I'm running into some issues using the above configuration. The error occurs when running the following ActiveSupport::TestCase setup, but not during application usage.
```ruby
# frozen_string_literal: true
class Revenue < ApplicationRecord
crypt_keeper :field_1, :field_2,
encryptor: :aes_new,
key: ENV['CRYPT_KEEPER_KEY'],
salt: ENV['CRYPT_KEEPER_SALT'],
encoding: 'UTF-8'
end
```
```ruby
# frozen_string_literal: true
require 'test_helper'
class DestroyServiceTest < ActiveSupport::TestCase
setup do
params = {
field_1: 'Value 1',
field_2: 'Value 2'
}
@object = Model.create(params)
end
test 'some test' do
# perform some test
end
end
```
The error (```RuntimeError: can't modify frozen String```) gets triggered on ```@object = Model.create(params)``` .
I was able to track it down to the ``` force_encodings_on_fields``` method inside ```lib/crypt_keeper/model.rb ``` .
When I change
```ruby
send(field).force_encoding(crypt_keeper_encoding)
```
To
```ruby
send(field).dup.force_encoding(crypt_keeper_encoding)
```
The error no longer gets triggered, but only when you disable spring as well.
Answers:
username_0: Another observation is that these errors only pop up when using binary as the column type.
When changing the column types to "text", we're all in the green again.
Currently using Postgres 10.
username_1: @username_0 Could you provide the backtrace?
username_0: @username_1 : Sorry for the late reply. I'll provide one later today.
Status: Issue closed
|
rmpestano/cukedoctor | 427937099 | Title: maven plugin build failures in jenkins pipeline
Question:
username_0: I'm using cukedoctor in a maven project. I've added cukedoctor-converter (r1.2.1) as a dependency with the 'test' scope.
I've also added cukedoctor-maven-plugin (r1.2.1) and configured it according to the Readme, so that it runs during the verify phase.
Running _mvn clean install_ generates the documentation as would be expected locally. However, when pushing to our pipeline app the following error breaks the build:
`
Caused by: java.lang.NoClassDefFoundError: com/github/cukedoctor/api/DocumentAttributes
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.getDeclaredMethods(Class.java:1975)
....
`
Further down the stacktrace we get this:
`
Caused by: org.apache.maven.plugin.PluginContainerException: A required class was missing while executing com.github.cukedoctor:cukedoctor-maven-plugin:1.2.1:execute: com/github/cukedoctor/api/DocumentAttributes
`
This seems to be a result of an import statement which looks for com.github.cukedoctor.api.Document.attributes, which it cannot find, which is just as well because it does not exist in the source code or locally in my .m2 directory.
There does not appear to be any documentation available for troubleshooting this issue. If there is another way we can get the documentation to generate any time a developer builds we'd be open to hearing about it.
Oddly, this error only manifests occasionally when we commit to a remote branch, but always fails when we merge to master.
Any advice?
Answers:
username_0: Forgot to ask: are there any other dependencies or libraries we might have forgotten to include that would cause this?
username_0: It seems there is a 'Jenkins Pipeline Plugin' for cukedoctor. I'm assuming we need to install this to get this to work, although it's odd that our build sometimes works without it. I'm going to close the issue as this seems to be the likely culprit.
Status: Issue closed
|
kriasoft/universal-router | 1081563245 | Title: Simple Change: Make resolveRoute a class member of UniversalRouter instead of privately scoped
Question:
username_0: **I'm submitting a ...**
<!-- (check one with "x") -->
- [ ] bug report
- [x] feature request
- [ ] other (Please do not submit support requests here (below))
## Feature request
Would there be any backwards compatibility issues with moving the resolveRoute function to be inside the UniversalRouter class? The reason for this is that when extending UniversalRouter, you cannot override the resolveRoute function, as it's a privately scoped function in the UniversalRouter.js file, outside of the UniversalRouter class.
Currently I have my resolveRoute passed in as a parameter, but I have a custom class for my router to add in functionality like the history API. Being able to override resolveRoute inside that class would be a cleaner interface, keeping like logic together. I don't see any conflicts with this change and it would improve class support. Is there a good reason for this to stay as a private function?
## Proposed Change
```
class UniversalRouter {
constructor(routes, options) {
...
}
resolve(pathnameOrContext) {
....
const resolve = this.options.resolveRoute || this.resolveRoute;
...
}
resolveRoute(context, params) {
if (typeof context.route.action === 'function') {
return context.route.action(context, params);
}
return undefined;
}
}
```
Then custom router classes can override it or it can still be passed in as an option.
```
class MyRouter extends UniversalRouter {
resolveRoute(context, params) {
// custom logic
}
}
```
Answers:
username_1: You can override `resolveRoute` in your custom class like this:
```js
class YourRouter extends UniversalRouter {
constructor(routes, options) {
super(routes, { ...options, resolveRoute: this.resolveRoute })
}
resolveRoute(context, params) {
// custom logic
}
}
```
But could you tell us a little about why you need to extend the UniversalRouter class, i.e. what functionality are you missing? |
openshift/installer | 703688516 | Title: Openstack Parameter for Bootstrap Floating IP
Question:
username_0: # Version
```console
$ openshift-install version
openshift-install 4.5.8
built from commit 0d5c871ce7d03f3d03ab4371dc39916a5415cf5c
release image quay.io/openshift-release-dev/ocp-release@sha256:ae61753ad8c8a26ed67fa233eea578194600d6c72622edab2516879cfbf019fd
```
# Platform:
Openstack UPI
# What happened?
Bootstrap Floating IP is automatically created and destroyed.
# What you expected to happen?
I need a parameter in `install-config.yaml` to assign a Floating IP address for the Bootstrap Instance. The IP addresses in my public network do not all have permission to access the internet due to our company's security restrictions. So if it's automatically created, there is a great chance that the bootstrap is not able to pull images successfully from the internet.
I know it's a rare case, but I think being able to specify the floating IP for bootstrap, like we already do for API/Ingress, is a useful feature anyway. If it's OK, I would like to implement it.
# How to reproduce it (as minimally and precisely as possible)?
N/A
# Anything else we need to know?
N/A
# References
N/A
Answers:
username_1: /label platform/openstack
/close
The bootstrap floating IP is just a convenience for you to retrieve the logs from the bootstrap node in the event the deployment fails at the bootstrap stage. It is unrelated to pulling container images on the bootstrap node.
You may want to [use a proxy](https://github.com/openshift/installer/blob/master/docs/user/customization.md#proxy) or have a look at the installation on restricted networks. Note that we have supported installation on restricted networks for the OpenStack platform since 4.3, but the documentation is [still in the works](https://github.com/openshift/openshift-docs/pull/22478). You can find a rendered version of the WIP documentation at https://osp-restricted-ipi-osdocs1021--ocpdocs.netlify.app/openshift-enterprise/latest/installing/installing_openstack/installing-openstack-installer-restricted.html.
gseidler/The-MSX-Red-Book | 614051135 | Title: Error in sprites data order in Figure 24
Question:
username_0: Hi Gustavo!
I have detected an error in section 2 (Video Display Processor), in the Sprites section, in the table detailing the data order for 16x16 sprites (Figure 24).
https://github.com/username_1/The-MSX-Red-Book/blob/master/the_msx_red_book.md#sprites
The data from the four sections flows vertically: Up-left + Down-left + Up-right + Down-right,
while the table is displayed with a horizontal flow.
You can verify this with the following example in MSX BASIC:
```
10 screen 1,2
20 for i=0 to 31
30 read A:vpoke base(9)+i,A
40 next
50 put sprite 0,(100,100),1,0
60 end
100 data 255,255,255,255
110 data 255,255,255,255
120 data 0,0,0,0
130 data 0,0,0,0
140 data 170,85,170,85
150 data 170,85,170,85
160 data 255,255,255,255
170 data 255,255,255,255
```
Shown at MSXPen > WEB MSX:
https://msxpen.com/?code=-M6jDV5vBLfZPATn3tGx
Answers:
username_1: Thank you! Verified and committed the change.
Status: Issue closed
|
jmcnamara/XlsxWriter | 388214515 | Title: 'num_format': '0' not handled correctly when checking for duplicate formats
Question:
username_0: Duplicate formats are not resolved correctly when one of them contains `'num_format': '0'`. It is handled the same as `'num_format': 0` which is the general format:
```python
import xlsxwriter
with xlsxwriter.Workbook('test.xlsx') as workbook:
worksheet = workbook.add_worksheet()
format_number = workbook.add_format({'num_format': 1, 'border': 1})
format_common = workbook.add_format({'border': 1})
worksheet.write(1, 0, 'test', format_common)
worksheet.write(0, 0, 1234567, format_number)
```
The workaround, for now, is to use `'num_format': 1` (1 as a number) which is equivalent to `'num_format': '0'` ('0' as a string).
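For contrast, a minimal sketch of the failing configuration described in the title (identical to the snippet above, except that the string form `'0'` is used, which the duplicate-format check confuses with the general format `0`):
```python
import xlsxwriter

with xlsxwriter.Workbook('test_bug.xlsx') as workbook:
    worksheet = workbook.add_worksheet()
    # '0' (string) is meant to be the integer number format, but the
    # duplicate-format check treats it like the general format 0, so this
    # format collides with the plain border-only format defined below.
    format_number = workbook.add_format({'num_format': '0', 'border': 1})
    format_common = workbook.add_format({'border': 1})
    worksheet.write(1, 0, 'test', format_common)
    worksheet.write(0, 0, 1234567, format_number)
```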
Answers:
username_0: CC @kamalk-github
Status: Issue closed
|
yuex/chrome-secure-shell-solarized | 148658570 | Title: Compare colors with other gists and use Javascript console for easy "install"?
Question:
username_0: Your screenshots look great. I haven't tested all the options yet, but I've found a couple of other gists that show how you can set the terminal settings using the JavaScript console (open the Secure Shell settings, then Ctrl+Shift+J) and typing commands. Perhaps this would improve your install method?
Answers:
username_1: Here you go.
```
var color_scheme = {
'base00': '#073642',
'base01': '#dc322f',
'base02': '#859900',
'base03': '#b58900',
'base04': '#268bd2',
'base05': '#d33682',
'base06': '#2aa198',
'base07': '#eee8d5',
'base08': '#002b36',
'base09': '#cb4b16',
'base0A': '#586e75',
'base0B': '#657b83',
'base0C': '#839496',
'base0D': '#6c71c4',
'base0E': '#93a1a1',
'base0F': '#fdf6e3',
};
term_.prefs_.set('background-color', "#002b36");
term_.prefs_.set('foreground-color', "#eee8d5");
term_.prefs_.set('cursor-color', "#eee8d5");
term_.prefs_.set('color-palette-overrides',
[color_scheme.base00,
color_scheme.base01,
color_scheme.base02,
color_scheme.base03,
color_scheme.base04,
color_scheme.base05,
color_scheme.base06,
color_scheme.base07,
color_scheme.base08,
color_scheme.base09,
color_scheme.base0A,
color_scheme.base0B,
color_scheme.base0C,
color_scheme.base0D,
color_scheme.base0E,
color_scheme.base0F]);
``` |
scalameta/scalameta | 156110351 | Title: performance degradation when Tree.tokens is called repeatedly
Question:
username_0: v0.20.0 represents a significant performance hit in comparison with v0.1.0-RC4. We need to check what's going on and fix it. My hypothesis is the malfunction of the newly introduced tokenization cache.
Status: Issue closed
Answers:
username_0: Fixed in https://github.com/scalameta/scalameta/pull/409. |
dotnet/roslyn-analyzers | 118228364 | Title: Port FxCop rule RS1006: #N/A
Question:
username_0: **Title:** Invalid type argument for DiagnosticAnalyzer's Register method
**Description:**
DiagnosticAnalyzer's language-specific Register methods, such as RegisterSyntaxNodeAction, RegisterCodeBlockStartAction and RegisterCodeBlockEndAction, expect a language-specific 'SyntaxKind' type argument for its 'TLanguageKindEnumName' type parameter. Otherwise, the registered analyzer action can never be invoked during analysis.
**Proposed analyzer:** Microsoft.CodeAnalysis
**Notes:**<issue_closed>
Status: Issue closed |
onnx/onnx-tensorrt | 885126846 | Title: error: Could not find suitable distribution for Requirement.parse('six>=1.9')
Question:
username_0: I am running a Yocto build and sometime during the build, I see the following error
```
Processing dependencies for protobuf==3.3.0
Searching for six>=1.9
Reading https://pypi.python.org/simple/six/
Couldn't find index page for 'six' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.python.org/simple/
No local packages or working download links found for six>=1.9
error: Could not find suitable distribution for Requirement.parse('six>=1.9')
ERROR: python setup.py install execution failed.
```
The package itself is installed it seems:
```
$ pip --version
pip 20.3.4 from /usr/local/lib/python2.7/dist-packages/pip (python 2.7)
$ pip show six
Name: six
Version: 1.15.0
Summary: Python 2 and 3 compatibility utilities
Home-page: https://github.com/benjaminp/six
Author: <NAME>
Author-email: <EMAIL>
License: MIT
Location: /home/qct/.local/lib/python2.7/site-packages
```
Another thought was the possibility of no internet access but `ping google.com` does work
Answers:
username_1: I'm not too familiar with Yocto; maybe a setup issue somewhere?
rust-lang/crates.io | 269641462 | Title: Yanked crate shows in search list but is not installable
Question:
username_0: I learned about `rustfmt` today and learned that it has cargo integration to enable `cargo fmt`. I searched cargo from the command line and got this output:
```
± (master *%) $ cargo search rustfmt
Updating registry `https://github.com/rust-lang/crates.io-index`
rustfmt = "0.9.0" # Tool to find and fix Rust formatting issues
rustfmt-nightly = "0.2.13" # Tool to find and fix Rust formatting issues
rfmt = "0.1.0" # Another Rust source code formatter.
scout = "1.1.0" # Small fuzzy finder for the command line
cargo-make = "0.7.3" # Rust task runner and build tool.
bcc-sys = "0.3.1" # Rust binding to BPF Compiler Collection (BCC)
cargo-fmt = "0.1.0" # Allows `rustfmt` to be called through `cargo`
imgui = "0.0.16" # High-level Rust bindings to dear imgui
harbor = "0.1.0" # Project manager for CoreOS fleets.
aries = "0.1.0" # a web framwork for rust-lang.
... and 29 crates more (use --limit N to see more)
```
Ah ha! `cargo-fmt` is what I want, right? RIGHT?
```
± (master *%) $ cargo install cargo-fmt
Updating registry `https://github.com/rust-lang/crates.io-index`
error: could not find `cargo-fmt` in `registry https://github.com/rust-lang/crates.io-index`
```
That's odd, it just showed up in the search!
I resort to the browser: https://crates.io/crates/cargo-fmt
<img width="795" alt="screen shot 2017-10-30 at 10 59 10 am" src="https://user-images.githubusercontent.com/197224/32177726-67382b8c-bd61-11e7-9e59-33871c0f86a9.png">
Yanked. I think what I want is `rustfmt`. I installed it, and sure enough:
```
Installing /Users/colin/.cargo/bin/cargo-fmt
Installing /Users/colin/.cargo/bin/rustfmt
```
I think my confusion could be avoided by:
1. Convey in `cargo search` results that a package has been yanked. Maybe by highlighting the name in an ANSI color and outputting a colored message at the bottom that says something like "Packages in yellow have been yanked and are not directly installable. See https://somewhere.else for more information."
1. Have the crate page for `cargo-fmt` have some kind of text explaining _why_ it was yanked and perhaps suggesting a next step (e.g. "Package `X` has superseded this package" or "This package was erroneously published" or something like that).
That second one is probably the task for crates.io to tackle, although [searching through the cargo repo for "yanked" shows](https://github.com/rust-lang/cargo/search?q=yanked&type=Issues&utf8=%E2%9C%93) that I'm not the first person to encounter a problem like this:
1. [Start requiring a reason for all yanks #2608](https://github.com/rust-lang/cargo/issues/2608)
1. [Mismatch against yanked version doesn't inform user about yank #4260](https://github.com/rust-lang/cargo/issues/4260)
I think I'm going to file a similar issue with cargo since there's potentially work to be done there, too.
Answers:
username_1: 1. is a duplicate of https://github.com/rust-lang/crates.io/issues/145 as far as the crates.io side, and then your rust-lang/cargo#4679 would cover displaying the yanked info in the results of `cargo search` (that change would need to be made in cargo).
I'm going to leave this issue open for crates.io displaying a yank reason to go with rust-lang/cargo#2608.
username_2: since https://github.com/rust-lang/cargo/issues/2608 does not seem to have any movement I'm going to close this issue for now. should such a mechanism ever be implemented in cargo we can reopen the issue :)
Status: Issue closed
|
EndPointCorp/end-point-blog | 500590108 | Title: Comments for Google Drive for virtual machine images
Question:
username_0: Comments for https://www.endpoint.com/blog/2019/09/30/google-drive-for-vm-images
To enter a comment:
1. Log in to GitHub
1. Leave a comment on this issue.
Answers:
username_1: It seems that the gdrive script has some issues that prevent logging in at present.

However, for those looking for a replacement: rclone seems to be a very feature-rich program that accesses not only Google Drive but quite a number of cloud providers, with options to use it with everything from simple FTP servers to Amazon S3 storage.
You can find their main website here: https://rclone.org/
github page here: @rclone |
shadowsocks/shadowsocks-org | 213078070 | Title: Follow IANA registry for AEAD ciphers
Question:
username_0: IANA has a dedicated AEAD registry defining a numeric ID, a common name, and reference specification for AEAD ciphers. Compliant Shadowsocks implementations should follow the scheme http://www.iana.org/assignments/aead-parameters/aead-parameters.xhtml
In particular, AEAD_CHACHA20_POLY1305 must be implemented as a common cipher. Implementations intending to run on devices with hardware AES acceleration should also implement AEAD_AES_128_GCM, AEAD_AES_192_GCM, and AEAD_AES_256_GCM.
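Purely as an illustration of the AEAD seal/open interface these registry entries describe (not part of the proposal itself), here is a minimal sketch using Python's `cryptography` package; AEAD_CHACHA20_POLY1305 uses a 32-byte key and a 12-byte nonce:
```python
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()   # 32-byte key
aead = ChaCha20Poly1305(key)

nonce = b'\x00' * 12                    # 12-byte nonce; must never repeat for the same key
plaintext = b'payload'
associated_data = None                  # no associated data in this illustration

ciphertext = aead.encrypt(nonce, plaintext, associated_data)
assert aead.decrypt(nonce, ciphertext, associated_data) == plaintext
```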
Answers:
username_1: LGTM. Let's keep the alias to ensure backward compatibility.
username_1: Updated via https://github.com/shadowsocks/shadowsocks-org/commit/3e7a7075ddbd2b74e1fdbcb5beb7ee6fda550161
Status: Issue closed
|
eubr-bigsea/citron | 313679918 | Title: Execution parameter from Intersection Operation empty
Question:
username_0: I don't know if it is necessary to show the **execution** parameter when it has no options.

Answers:
username_1: @waltersf
Status: Issue closed
|
open-mmlab/mmcv | 739747349 | Title: mmcv.cnn.ConvModule is expected to work with conv3d, right?
Question:
username_0: https://github.com/open-mmlab/mmcv/blob/1d5678c9fc6e96561b4ee5a9079d4a19e3da53c5/mmcv/cnn/bricks/conv_module.py#L29-L35
In these lines, the author mentions conv2d, but it seems to work with conv3d too. We might need to update the doc.
Answers:
username_0: I found that mmaction2 is using it, see also https://github.com/open-mmlab/mmaction2/blob/303b62bc9dbd81a90bc3890da3a72231c060fbe9/mmaction/models/backbones/resnet3d.py#L95-L118
So, I believe we might need to update the doc.
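For reference, here is a small sketch of the usage that already works in practice, mirroring the mmaction2 code linked above (the shapes are arbitrary placeholders):
```python
import torch
from mmcv.cnn import ConvModule

# ConvModule builds its conv/norm layers from the cfg dicts, so passing
# Conv3d/BN3d yields a 3D conv block even though the docstring says conv2d.
conv3d_block = ConvModule(
    in_channels=3,
    out_channels=16,
    kernel_size=3,
    padding=1,
    conv_cfg=dict(type='Conv3d'),
    norm_cfg=dict(type='BN3d'),
    act_cfg=dict(type='ReLU'))

x = torch.randn(1, 3, 8, 32, 32)  # (N, C, D, H, W)
out = conv3d_block(x)             # -> (1, 16, 8, 32, 32)
```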
username_1: Update it in https://github.com/open-mmlab/mmcv/pull/651, it has been merged.
username_1: This issue can be closed @hellock.
Status: Issue closed
|
GeotrekCE/Geotrek-admin | 286242663 | Title: Should require hostname
Question:
username_0: ```
[settings]
host = ...
```
Should be added to `conf/settings.ini.sample` and should be mandatory.
If not, some attackers use `*` as the hostname and Django complains.
Moreover, I think
```
[nginx-conf]
default = True
```
Should not be the default value.<issue_closed>
Status: Issue closed |
flathub/com.skype.Client | 890617615 | Title: Cannot screen share on Wayland
Question:
username_0: Skype currently cannot screen share on Wayland in the Deb or in the Flatpak.
This is as Skype does not show the share screen button on Wayland, despite Electron supporting Wayland screen sharing.
Skype not showing the screen share button on Wayland is entirely arbitrary, given one can successfully screen share from the Skype web app in Chromium (ignoring minor Chromium WebRTC PipeWire bugs).
So if Skype were to remove this restriction, and ship with a semi recent version of Electron, screen sharing would likely work on wayland. Working Electron screen sharing apps include [Jitsi Meet](https://github.com/jitsi/jitsi-meet-electron/issues/567#issuecomment-839917891) and [Slack](https://github.com/flathub/com.slack.Slack/issues/101#issuecomment-808430530).
When Skype enables the share screen button on wayland a change to the manifest will be needed. We should build in PipeWire support similar to what was done for [Slack Flatpak](https://github.com/flathub/com.slack.Slack/pull/118).
The combination of Skype making the button visible, the Flatpak building with PipeWire, and passing `--enable-features=WebRTCPipeWireCapturer` should make screen sharing work for Skype on Wayland. |
dotnet/efcore | 699030936 | Title: Logging in with connection string of existing connection fails with EFCore somewhere between versions > 3.1.4 and 3.1.8
Question:
username_0: After updating from EFCore 3.1.4 to 3.1.8 (the latest stable version), I keep getting login errors when I try to log in with the connection string of an existing connection.
### Steps to reproduce
This is the code I'm using that worked before:
https://gist.github.com/username_0/aac08397e4a4a6949c3dd5c76d6e3a73
Here's the exception that I'm getting:
```
Microsoft.Data.SqlClient.SqlException (0x80131904): Fehler bei der Anmeldung für den Benutzer "sa".
at Microsoft.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at Microsoft.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at Microsoft.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at Microsoft.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean ignoreSniOpenTimeout, TimeoutTimer timeout, Boolean withFailover)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString connectionOptions, SqlCredential credential, TimeoutTimer timeout)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(TimeoutTimer timeout, SqlConnectionString connectionOptions, SqlCredential credential, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, SqlCredential credential, Object providerInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString userConnectionOptions, SessionData reconnectSessionData, Boolean applyTransientFaultHandling, String accessToken, DbConnectionPool pool, SqlAuthenticationProviderManager sqlAuthProviderManager)
at Microsoft.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, DbConnectionPoolKey poolKey, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection, DbConnectionOptions userOptions)
at Microsoft.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnectionPool pool, DbConnection owningObject, DbConnectionOptions options, DbConnectionPoolKey poolKey, DbConnectionOptions userOptions)
at Microsoft.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject, DbConnectionOptions userOptions, DbConnectionInternal oldConnection)
at Microsoft.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject, DbConnectionOptions userOptions, DbConnectionInternal oldConnection)
at Microsoft.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection)
at Microsoft.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection)
at Microsoft.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection)
at Microsoft.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
at Microsoft.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
at Microsoft.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
at Microsoft.Data.SqlClient.SqlConnection.Open()
at IKON_2_0.Server.Data.IKONDataAccessLayer.GetGZWerteAnSTHF(Int32 IdSthf, Int32 IdTeiln, Int32 werteTyp, Int32 jahr, Nullable`1 jahr2) in G:\source\repos\IKON 2.0\IKON_2_0\Server\Data\IKONDataAccessLayer.cs:line 734
at IKON_2_0.Server.Controllers.WerteErfassungController.GetWerteErfassungBySTHF(Int32 idTree, Int32 idTeilnehmer, Int32 werteTyp, Int32 jahr, Int32 endJahr) in G:\source\repos\IKON 2.0\IKON_2_0\Server\Controllers\WerteErfassungController.cs:line 670
at lambda_method1632(Closure , Object )
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.AwaitableObjectResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeActionMethodAsync>g__Awaited|12_0(ControllerActionInvoker invoker, ValueTask`1 actionResultValueTask)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeNextActionFilterAsync>g__Awaited|10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResourceFilter>g__Awaited|24_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResourceExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.InvokeFilterPipelineAsync()
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Logged|17_1(ResourceInvoker invoker)
at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)
at Microsoft.AspNetCore.Session.SessionMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Session.SessionMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Authorization.Policy.AuthorizationMiddlewareResultHandler.HandleAsync(RequestDelegate next, HttpContext context, AuthorizationPolicy policy, PolicyAuthorizationResult authorizeResult)
at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Builder.Extensions.MapWhenMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Builder.Extensions.MapMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)
ClientConnectionId:7e434a15-4323-4f91-b39f-9628145fb7c3
Error Number:18456,State:1,Class:14
```
### Further technical details
EF Core version:
Database provider: Microsoft.EntityFrameworkCore.SqlServer 3.1.8
Target framework: .NET 5.0
Operating system: Windows 10
IDE: Microsoft Visual Studio Community 2019 Preview
Version 16.8.0 Preview 2.1
Answers:
username_1: Duplicate of https://github.com/dotnet/efcore/issues/22083
Status: Issue closed
|
clarinsi/babushka-bench | 817369297 | Title: Macedonian dataset
Question:
username_0: Hi @username_1 ,
as far as I understand this commit message:
https://github.com/clarinsi/babushka-bench/commit/841c47d5630e1a55cf21659874c5e3af9575b0a6#diff-fd8b5fda8a45abe08c7b3247d4abb7b1395dd3bf6008738388f42ff052bef9fe
The Macedonian dataset comes from the 1984 Multext-east data, but I still have some questions 😅
* Who was responsible for the NE annotation process and is there a paper out that can be cited :thinking:
* Universal PoS tags were recently added. However, the original Macedonian dataset from Multext-east does not come with disambiguated tags and also the PoS tags used in Multext-east are not identical to universal PoS tags. Could you explain the mapping scheme a bit :thinking:
Many thanks,
Stefan :heart:
Answers:
username_1: @username_0, hi.
Recently we managed to disambiguate the 1984 Macedonian corpus on the level of morphosyntax, but it is still not officially published as some improvements of the annotation are being performed as we speak (and they do not go along very fast).
We were eager to start experimenting with this notoriously under-resourced language, also wanted to add at least basic support for it to our [CLASSLA pipeline](https://pypi.org/project/classla/), therefore we performed a train:dev:test split of the preliminary data here on babushka-bench.
As far as I know, the corpus still does not contain any NE annotations. We would like to add some for Macedonian in general (not sure whether the 1984 corpus is the best for this task), inter alia, to add Macedonian NER support to [CLASSLA](https://pypi.org/project/classla/). If you happen to know people interested in the task, we have some decent annotation guidelines from other South Slavic languages and quite probably funding available as well. I would not mind hearing about your wider motivation regarding Macedonian, as we are eager to improve support for it on all levels. We will do so also in the MaCoCu project which starts this June (crawling top-level domains of different South-Eastern-European countries, Turkey included, curating / selecting data, building pre-trained language models).
Nice work with the dbmdz models btw, we use them primarily for processing German data. We recently published [BERTić](https://huggingface.co/CLASSLA/bcms-bertic), if you happen to be in need of processing Croatian, Serbian etc.
Nikola
username_0: Hi Nikola,
thanks for your detailed answers!
Sorry for my misunderstanding, the dataset of course has no NE annotations 😅 But talking about NE, you may have noticed that the recent spacy version comes with (better) support for Macedonian, including a trained model for NER. The author sent me that dataset (see https://twitter.com/_inesmontani/status/1356280197746606099). They plan to release it publicly, so maybe it could also be integrated here for benchmarking.
My colleague and I are working on Macedonian-focussed LMs, so we're primarily looking for datasets for our evaluations. E.g. WikiANN as silver standard is ok, but better datasets are heavily needed :)
I just had a look at the BERTić model, results are really looking good! Have you considered working on an ELECTRA model as well :thinking: For mono-lingual models, I could clearly see a performance boost (did a lot of ELECTRA pre-training for our DBMDZ models recently). However, I tried to train multilingual ELECTRA models (same languages as mBERT), but the performance was not really good, so I'm not sure if this would also be the same for 4 languages :thinking:
username_1: Stefan, hi.
Busy period. I just contacted Borijan. I will motivate him to publish the dataset if it is CC-BY-SA. No need for all the e-mail writing. :-)
What textual data do you use for building the Macedonian LM? I have a ~320M tokens crawl of the .mk domain (used for building these static embeddings https://www.clarin.si/repository/xmlui/handle/11356/1359) if you can profit from that?
For evaluation of the model, I guess, part-of-speech tagging on our dataset might be a proper way to evaluate? I would be very much interested in hearing details of what exactly you are doing for Macedonian, to coordinate efforts as much as possible. We can also switch to e-mail (nikola tod ljubesic ta jsi tod si).
Regarding BERTić, these are officially four languages, but purely linguistically speaking, these are variants of a single pluricentric language. In other words, I actually did not do any multilingual pre-training via Electra. Good to know that your results were not that good if the need for multilingual training ever arises! |
junegunn/vim-emoji | 169800119 | Title: Install with pathogen?
Question:
username_0: Would it be trivial to make this installable via [pathogen](https://github.com/tpope/vim-pathogen)?
```
cd ~/.vim/bundle
git clone git://github.com/username_1/vim-emoji.git
```
Would be nice in case we don't otherwise use Plug
Answers:
username_1: It already works with pathogen. All major Vim plugin managers (vim-plug, vundle, neobundle, etc.) are compatible with pathogen.
Status: Issue closed
username_0: Bummer. It didn't seem to work for me then. I'll investigate when I'm back at my machine
username_2: Any update on this? |
robinrodricks/FluentFTP | 540990821 | Title: Authentication failed because the remote party has closed the transport stream
Question:
username_0: **FTP OS:** Unix / Windows / Embedded
Windows
**FTP Server:** Pure-FTPd / DrFTPD / Vsftpd / ProFTPD / Vax / VMS / OpenVMS / Tandem / HP NonStop Guardian / IBM OS400 / AS400 / Windows CE
FileZillaServer
**Computer OS:** ?
Windows Server 2016
I'm using FluentFtp v192.168.127.12 and it was working until the 13th of December. After this date, the explicit FTPS server closes the connection when I upload a file or try to download one, but the FileZilla client still works.
The error:
"FluentFTP.FtpException: Error while uploading the file to the server. See InnerException for more info. ---> System.IO.IOException: Error de autenticación porque la parte remota cerró la secuencia de transporte."
I tested with FluentFtp v.172.16.17.32 and I get the same error.
**Logs :**
```
# Connect()
Status: Connecting to 10.11.45.194:990
Response: 220-FileZilla Server 0.9.60 beta
Response: 220-written by <NAME> (<EMAIL>)
Response: 220 Please visit https://filezilla-project.org/
Status: Detected FTP server: FileZilla
Command: AUTH TLS
Response: 234 Using authentication type TLS
Status: FTPS Authentication Successful
Status: Time to activate encryption: 0h 0m 0s. Total Seconds: 0,0680652.
Command: USER FTPscanPRD01
Response: 331 Password required for <PASSWORD>
Command: PASS ***
Response: 230 Logged on
Command: PBSZ 0
Response: 200 PBSZ=0
Command: PROT P
Response: 200 Protection level set to P
Command: FEAT
Response: 211-Features:
Response: MDTM
Response: REST STREAM
Response: SIZE
Response: MLST type*;size*;modify*;
Response: MLSD
Response: AUTH SSL
Response: AUTH TLS
Response: PROT
Response: PBSZ
Response: UTF8
Response: CLNT
Response: MFMT
Response: EPSV
Response: EPRT
[Truncated]
Response: 215 UNIX emulated by FileZilla
Command: TYPE I
Response: 200 Type set to I
# OpenPassiveDataStream(AutoPassive, "STOR TestFtp/prueba.txt", 0)
Command: EPSV
Response: 229 Entering Extended Passive Mode (|||50001|)
Status: Connecting to 10.11.45.194:50001
Command: STOR TestFtp/prueba.txt
Response: 150 Opening data channel for file upload to server of "/TestFtp/prueba.txt"
Status: There is stale data on the socket, maybe our connection timed out or you did not call GetReply(). Re-connecting...
Status: Disposing FtpSocketStream...
Status: Not sending QUIT because the connection has already been closed.
Status: Disposing FtpSocketStream...
# Dispose()
Status: Disposing FtpClient object...
Status: Disposing FtpSocketStream...
Status: Disposing FtpSocketStream...
Status: Disposing FtpSocketStream...```
Answers:
username_1: I think it's probably a problem with TLS session resumption between FileZilla Server and .NET (probably caused by some system update).
Try unchecking this on FileZilla Server: https://prnt.sc/qewsjc. This will probably solve it. But:
"Not requiring session resumption allows session stealing attacks. The problem with FTP is that the data connection does not authenticate the client: Imagine you a want to upload a new version of your website. To initiate the transfer your client sends the PASV command followed by the STOR command. The server opens a port and waits for the client to connect to it and upload the file. Now an attacker comes along and figures out the port the server listens on. He connects to the port before you can and uploads a piece of malware to your website." (https://forum.filezilla-project.org/viewtopic.php?t=36903)
FileZilla people will probably solve this problem with a rewrite of the server which will be using GnuTLS
Learn a discussion about this problem here: https://forum.filezilla-project.org/viewtopic.php?t=51601
username_2: https://github.com/username_2/FluentFTP/issues/311
https://github.com/username_2/FluentFTP/issues/335
https://github.com/username_2/FluentFTP/issues/26
username_0: Hi Robinicks,
Thanks for your help; I read all the threads and the solution is to disable session resumption on the FileZilla Server; but this server is inaccessible to us because it is maintained by another company and we can't change these settings.
Of course, we don't know for sure whether the FTP server was upgraded with one of these KBs or not.
We use .NET Framework 4.6.2 and we need a solution on the client side of the application.
Is it possible?
username_3: I also have recently been affected by this issue `System.IO.IOException: Authentication failed because the remote party has closed the transport stream` on Connect().
I don't have details on or control of the SFTP server.
Client details: Windows Server 2012, .NET v4.5.2, FluentFTP v16.2.1.
username_2: @username_0 It's a highly technical problem. I'm sorry I cannot help. Perhaps you can check the other threads I linked to. Try disabling SSL and just use plain FTP and see if it works.
Status: Issue closed
|
schubergphilis/mercury | 291568380 | Title: Can you add TLS1.3 support?
Question:
username_0: Can you add TLS1.3 support to mercury?
Answers:
username_0: TLS1.3 is currently planned for the golang version 1.11 milestone.
https://github.com/golang/go/issues/9671
Once the new golang with TLS1.3 is available we'll investigate if we need any additional coding to support this.
username_1: Will this feature be included in golang 1.11 or 1.12 (as https://github.com/golang/go/issues/9671 says)?
username_0: Once the golang developers add support for TLS1.3 in the crypto libraries, Mercury will also support this.
The golang timeline at the time of writing suggests that this might be added in 1.11 or 1.12, but since the target seems to be moving, it will probably be golang 1.12. The draft for TLS1.3 got passed though, so I hope it won't be much longer...
username_0: implemented in de6f131, will be enabled by default in go1.13
Status: Issue closed
|
gobstones/gs-console | 151028420 | Title: Linter for CoffeeScript
Question:
username_0: Clarification:
During the project definition we talked about __not__ using CoffeeScript. Among other reasons, because as the features of newer ES versions advance, coffee will be less used and less maintained, while ES is maintained by all browsers and, as ES2015 gets implemented, it will run without needing to be transpiled.
That said, if CoffeeScript is going to be used, a [linter](http://www.coffeelint.org/) tool must be added to the pipeline to maintain a homogeneous style and quality.
This also serves as a style guide for anyone who wants to contribute to the project via PR.<issue_closed>
Status: Issue closed |
egorsmkv/simple-django-login-and-register | 317821189 | Title: Add checks when a user authenticated
Question:
username_0: Like in SignInView
```
def dispatch(self, request, *args, **kwargs):
# Do not show the sign in page if user already authenticated
if request.user.is_authenticated:
return redirect('index')
```<issue_closed>
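A self-contained sketch of how such a guard could look on another auth-related view (the view name, template path, and the 'index' URL name are placeholders):
```python
from django.shortcuts import redirect
from django.views.generic import TemplateView


class SignUpView(TemplateView):  # hypothetical example view
    template_name = 'accounts/sign_up.html'

    def dispatch(self, request, *args, **kwargs):
        # Do not show the sign up page if the user is already authenticated
        if request.user.is_authenticated:
            return redirect('index')
        return super().dispatch(request, *args, **kwargs)
```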
Status: Issue closed |
kata-containers/tests | 640001819 | Title: Issues to build CRI-O with CentOS 8
Question:
username_0: We are trying to install CRI-O with CentOS 8, however, we are hitting an issue
```
+ make 'BUILDTAGS=exclude_graphdriver_devicemapper libdm_no_deferred_remove'
touch "/home/gabycentos/go/.gopathok"
GO111MODULE=on go build --mod=vendor -ldflags '-s -w -X github.com/cri-o/cri-o/internal/version.buildInfo=1592345023 -X github.com/cri-o/cri-o/internal/version.buildDate=2020-06-16T22:03:43Z -X github.com/cri-o/cri-o/internal/version.GitCommit=<PASSWORD> -X github.com/cri-o/cri-o/internal/version.gitTreeState=dirty' -tags "exclude_graphdriver_devicemapper libdm_no_deferred_remove" -o bin/crio github.com/cri-o/cri-o/cmd/crio
GO111MODULE=on go build --mod=vendor -ldflags '-s -w -X github.com/cri-o/cri-o/internal/version.buildInfo=1592345038 -X github.com/cri-o/cri-o/internal/version.buildDate=2020-06-16T22:03:58Z -X github.com/cri-o/cri-o/internal/version.GitCommit=<PASSWORD> -X github.com/cri-o/cri-o/internal/version.gitTreeState=dirty' -tags "exclude_graphdriver_devicemapper libdm_no_deferred_remove" -o bin/crio-status github.com/cri-o/cri-o/cmd/crio-status
make -C pinns
make[1]: Entering directory '/home/gabycentos/go/src/github.com/cri-o/cri-o/pinns'
cc -std=c99 -Os -Wall -Wextra -static -c -o pinns.o pinns.c
cc -o ../bin/pinns pinns.o -std=c99 -Os -Wall -Wextra -static
/bin/ld: cannot find -lc
collect2: error: ld returned 1 exit status
make[1]: *** [Makefile:9: ../bin/pinns] Error 1
make[1]: Leaving directory '/home/gabycentos/go/src/github.com/cri-o/cri-o/pinns'
```
any ideas of how to solve this?
/cc @username_1
Answers:
username_1: @username_0, which version are you trying to build?
username_1: Anyways, just gave it a try using a centos8 container and cri-o master:
```
[root@3e948f29c160 cri-o]# export BUILDTAGS="exclude_graphdriver_devicemapper exclude_graphdriver_btrfs libdm_no_deferred_remove"
[root@3e948f29c160 cri-o]# export GO111MODULE=off
[root@3e948f29c160 cri-o]# make
GO111MODULE=on go build --mod=vendor -ldflags '-s -w -X github.com/cri-o/cri-o/internal/pkg/criocli.DefaultsPath="" -X github.com/cri-o/cri-o/internal/version.buildDate='2020-06-16T23:11:29Z' -X github.com/cri-o/cri-o/internal/version.gitCommit=<PASSWORD>0a8e -X github.com/cri-o/cri-o/internal/version.gitTreeState=clean ' -tags "exclude_graphdriver_devicemapper exclude_graphdriver_btrfs libdm_no_deferred_remove" -o bin/crio github.com/cri-o/cri-o/cmd/crio
GO111MODULE=on go build --mod=vendor -ldflags '-s -w -X github.com/cri-o/cri-o/internal/pkg/criocli.DefaultsPath="" -X github.com/cri-o/cri-o/internal/version.buildDate='2020-06-16T23:11:39Z' -X github.com/cri-o/cri-o/internal/version.gitCommit=<PASSWORD> -X github.com/cri-o/cri-o/internal/version.gitTreeState=clean ' -tags "exclude_graphdriver_devicemapper exclude_graphdriver_btrfs libdm_no_deferred_remove" -o bin/crio-status github.com/cri-o/cri-o/cmd/crio-status
make -C pinns
make[1]: Entering directory '/root/go/src/github.com/cri-o/cri-o/pinns'
cc -std=c99 -Os -Wall -Werror -Wextra -static -c -o pinns.o pinns.c
cc -o ../bin/pinns pinns.o -std=c99 -Os -Wall -Werror -Wextra -static
strip -s ../bin/pinns
strip: ../bin/pinns[.gnu.build.attributes__libc_freeres_fn]: Warning: version note missing - assuming version 3
make[1]: Leaving directory '/root/go/src/github.com/cri-o/cri-o/pinns'
./bin/crio -d "" --config="" config > crio.conf
INFO Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL
(/root/go/src/github.com/cri-o/cri-o/build/bin/go-md2man -in docs/crio-status.8.md -out docs/crio-status.8.tmp && touch docs/crio-status.8.tmp && mv docs/crio-status.8.tmp docs/crio-status.8) || \
(/root/go/src/github.com/cri-o/cri-o/build/bin/go-md2man -in docs/crio-status.8.md -out docs/crio-status.8.tmp && touch docs/crio-status.8.tmp && mv docs/crio-status.8.tmp docs/crio-status.8)
(/root/go/src/github.com/cri-o/cri-o/build/bin/go-md2man -in docs/crio.conf.5.md -out docs/crio.conf.5.tmp && touch docs/crio.conf.5.tmp && mv docs/crio.conf.5.tmp docs/crio.conf.5) || \
(/root/go/src/github.com/cri-o/cri-o/build/bin/go-md2man -in docs/crio.conf.5.md -out docs/crio.conf.5.tmp && touch docs/crio.conf.5.tmp && mv docs/crio.conf.5.tmp docs/crio.conf.5)
(/root/go/src/github.com/cri-o/cri-o/build/bin/go-md2man -in docs/crio.conf.d.5.md -out docs/crio.conf.d.5.tmp && touch docs/crio.conf.d.5.tmp && mv docs/crio.conf.d.5.tmp docs/crio.conf.d.5) || \
(/root/go/src/github.com/cri-o/cri-o/build/bin/go-md2man -in docs/crio.conf.d.5.md -out docs/crio.conf.d.5.tmp && touch docs/crio.conf.d.5.tmp && mv docs/crio.conf.d.5.tmp docs/crio.conf.d.5)
(/root/go/src/github.com/cri-o/cri-o/build/bin/go-md2man -in docs/crio.8.md -out docs/crio.8.tmp && touch docs/crio.8.tmp && mv docs/crio.8.tmp docs/crio.8) || \
(/root/go/src/github.com/cri-o/cri-o/build/bin/go-md2man -in docs/crio.8.md -out docs/crio.8.tmp && touch docs/crio.8.tmp && mv docs/crio.8.tmp docs/crio.8)
```
Your specific error makes me think you're missing some dependency. Do you have a list of packages you're installing as dependencies?
username_0: @username_1 the version that I am using is https://github.com/kata-containers/runtime/blob/master/versions.yaml#L192, the dependencies that are installed in CentOS are https://github.com/kata-containers/tests/blob/master/.ci/setup_env_centos.sh#L61, and I also include the package `btrfs-progs-devel`.
username_0: Closing this issue as I fixed it
Status: Issue closed
|
HE-Arc-Indus/sp18-recap-ci-bm_np | 322743647 | Title: Where is jenkins?
Question:
username_0: Where is your Jenkinsfile?
Answers:
username_1: We couldn't log in to Jenkins (the password wasn't showing up). It's now OK after going through the machine's bash command line to retrieve the admin password.
Status: Issue closed
|
mkhstar/suneditor-react | 596708579 | Title: Issue with align and list buttonList options
Question:
username_0: Getting the following error while using list and align in buttonList.
align.js:83 Uncaught TypeError: this.util.changeElement is not a function
at Object.active (align.js:83)
at Object._applyTagEffects (main.js:15186)
at _onChange_historyStack (main.js:16309)
at pushStack (main.js:10630)
at Object.push (main.js:10651)
at Object.setContents (main.js:14563)
at Object.setContents (main.js:16683)
at SunEditor.componentDidUpdate (main.js:17168)
at commitLifeCycles (react-dom.development.js:19835)
at commitLayoutEffects (react-dom.development.js:22803)
at HTMLUnknownElement.callCallback (react-dom.development.js:188)
at Object.invokeGuardedCallbackDev (react-dom.development.js:237)
at invokeGuardedCallback (react-dom.development.js:292)
at commitRootImpl (react-dom.development.js:22541)
at unstable_runWithPriority (scheduler.development.js:653)
at runWithPriority$1 (react-dom.development.js:11039)
at commitRoot (react-dom.development.js:22381)
at finishSyncRender (react-dom.development.js:21807)
at performSyncWorkOnRoot (react-dom.development.js:21793)
at react-dom.development.js:11089
at unstable_runWithPriority (scheduler.development.js:653)
at runWithPriority$1 (react-dom.development.js:11039)
at flushSyncCallbackQueueImpl (react-dom.development.js:11084)
at flushSyncCallbackQueue (react-dom.development.js:11072)
at flushPassiveEffectsImpl (react-dom.development.js:22883)
at unstable_runWithPriority (scheduler.development.js:653)
at runWithPriority$1 (react-dom.development.js:11039)
at flushPassiveEffects (react-dom.development.js:22820)
at react-dom.development.js:22699
at workLoop (scheduler.development.js:597)
at flushWork (scheduler.development.js:552)
at MessagePort.performWorkUntilDeadline (scheduler.development.js:164)
Status: Issue closed
Answers:
username_0: Was looking at old version! It got resolved with latest. |
cda-group/arc | 935877123 | Title: First Class Streams
Question:
username_0: Streams in Arc-Script are currently second class since they cannot be passed around as values inside of tasks.
```
task Identity(): ~i32 -> ~i32 {
on event => emit event;
}
```
instead of something like:
```
task Identity(in: ~i32, out: ~i32) {
on event from in => emit event into out;
}
```
Having first-class streams would increase the language's expressiveness (which could be a double-edged sword), by allowing for example the definition of loops in the dataflow:
```
task Identity(stream: ~i32) {
on event from stream => emit event into stream;
}
```
A solution is to introduce the concept of *capabilities* that restrict streams to being read-only or write-only.
Similar to this, Go supports first-class channels with capabilities. Channels support two operations:
* push a data item into the channel
* pull a data item from the channel
```go
package main
import "fmt"
func main() {
messages := make(chan string)
go func() { messages <- "ping" }()
msg := <-messages
fmt.Println(msg)
}
```
I will fill in more information below. I think a problem here is going to be that first-class streams impose requirements on the runtime which we might not support in our current model.
Answers:
username_1: If we plan a transform-to-FSM approach in order to make arc-script integrate with Arcon's event handling, won't first-class streams greatly complicate the conversion to FSMs? Instead of just the top-level task, any function, arbitrarily deep in the CFG, can now need conversion.
username_0: You're correct. It is maybe possible to restrict it so it's only possible to block inside of tasks. Blocking inside of a function would be a compilation error. In the FSM you also need to store the stream as part of the state.
username_2: I thought we decided against passing streams around in the last dev meeting. What is the main need apart from support for expressing loops? Can you provide some motivation on why you want this level of decomposition for something that has only static meaning?
username_0: I put it here mainly to document the design decision. First-class streams are a tempting thing to add to the language because they add a lot of expressiveness. Though they would likely make the language "too expressive" and make it possible to express programs which break correctness. I will investigate what applications there are in Go which use this feature.
username_0: I think you hit the nail on the head. The main problem here is to decide how to support cycles in the dataflow graph. In my view there are a couple of solutions:
# Solution 1: Builtin iterate operator
This approach represents cycles as a builtin higher-order operator named `iterate`. This operator is a function with a signature of:
```
iterate: A -> ((A, B) -> (B, C)) -> C
```
In other words, you give it a stream `A` and an iteration function and get a stream `C` back. The iteration function takes two streams of type `A` and `B` and returns two streams of type `B` and `C`. The system will apply `A` to the iteration function to get `C` and also create an implicit feedback-channel for `B`. In other words, the output `B` stream of the iteration function is fed as input to the iteration function. For example, here is an "identity loop" where events loop once through the function:
```
fun loop_once(a: ~i32): ~i32 {
val c: ~i32 = iterate(a, fun(a: ~i32, b: ~i32): (a, b));
c
}
```
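To make the feedback behaviour concrete, here is a tiny simulation of this operator in Python (purely illustrative, not Arc code): events emitted on the `B` output are queued and fed back into the iteration function until both the input and the feedback channel are drained.
```python
from collections import deque

def iterate(a_events, step):
    """`step(a, b)` gets one event from A and/or one from the feedback
    channel B (None if absent) and returns (events for B, events for C).
    Events emitted on B are fed back into `step`."""
    pending, feedback, output = deque(a_events), deque(), []
    while pending or feedback:
        a = pending.popleft() if pending else None
        b = feedback.popleft() if feedback else None
        b_out, c_out = step(a, b)
        feedback.extend(b_out)
        output.extend(c_out)
    return output

# The "identity loop" above: every A event makes one trip through the
# feedback channel before appearing on C.
loop_once = lambda a, b: ([a] if a is not None else [], [b] if b is not None else [])
print(iterate([1, 2, 3], loop_once))  # [1, 2, 3]
```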
Pros:
* Easy to support by the system since it only requires a builtin construct.
* Restricts all loops to follow the same sound structure. It is impossible to write unsound loops which violate event time or for example deadlock (unless memory runs out).
Cons:
* Specialized constructs complicate the language syntax and semantics which make proofs, explanation, and optimisation more difficult. Ideally there should be no high-level black-boxes. Instead, syntactic abstractions should desugar into simpler terms.
* Comes at a cost of generality as all loops now need to follow the same structure. We cannot for example have a loop with an arbitrary number of input/output streams.
# Solution 2: Channels
(TODO)
# Solution 3: Channels and Streams
(TODO)
# Solution 4: Single Assignment Streams
(TODO) |
tensorflow/tfjs | 833378718 | Title: Detecting more than 1 hand with HandPose
Question:
username_0: Hello Team,
Great work with hand pose detection.
Wish to know when the feature of detecting more than one hand will be made available. Also, is there a workaround/tweak now to enable detection of both hands of a person?
Regards,
Jay
Answers:
username_0: Hi @pyu10055 , @lina128 ,
Any update on this issue please?
Regards
Jay
username_1: Thank you for feature request @username_0! @lina128 and team are actually working on a multi-hand extension to HandPose. Stay tuned!
Status: Issue closed
username_2: Please check out latest handpose with multiple poses here https://github.com/tensorflow/tfjs-models/tree/master/pose-detection , thank you |
SED-ML/sed-ml | 798938932 | Title: Require `SubTask.order` to be unique within a task
Question:
username_0: For reproducible executions, there should be a well-defined order for the execution of subtasks. This is substantially addressed by the `order` attribute. However, the execution order is not well-defined when multiple subtasks have the same `order`.
There's at least a couple ways to address this
- Require (or at least recommend) that `order` be unique within a task
- Define a canonical ordering algorithm which orders subtasks based on one of
- The order of their definition in the SED-ML file or
- Their task attribute (e.g., alphabetical sorting of `subtask.task.id`)
I propose requiring `order` to be unique within a task. This is straightforward to understand and easy to validate. In my opinion, using a canonical ordering algorithm is more complicated, both to understand and implement.
Answers:
username_1: I agree with making the order unique.
I don't think a canonical order is necessary (or I don't understand what this should be good for). Multiple subtasks can be executed concurrently using multiprocessing (if there are no dependencies) so such a canonical order would not imply execution order.
username_0: If one desires to execute sub-tasks in parallel, I think more clarity is needed. To me, the order attribute suggests that sub-tasks are intended to be executed serially. Two subtasks with the same value of the order attribute, for example, could be used to indicate that those sub-tasks are intended to be independent and could be executed in parallel.
A simpler way to facilitate parallel computation could be for each sub-task to indicate the other sub-tasks that it must follow (its dependencies). This would provide modelers the ability to control dependencies.
username_2: Given that implementation of this is likely to use the 'sort' function newly added to libsedml, I don't see any need to require 'order' attribute values to be unique. Nobody's going to get anything wrong if the list is sorted in order value.
As I understand it, this comes down to the following suggested addition to the L1v4 spec:
"If the order is not defined or is identical to another subtask, 'executed in any order' includes the ability to execute them in parallel, if desired."
username_1: I agree. Same order states parallelization is possible between subtasks.
username_0: I don't think we should rely on everyone using a specific function in libSED-ML. Already, some people are using jLibSED-ML instead.
Text similar to what Lucian wrote would help clarify the intended interpretation of the order attribute.
If subtasks are executed starting from the endpoint of a previous subtask, what is intended to happen for subtasks with higher values of their order attribute? It seems to me that simulation programs would need to raise an error. Keeping track of a dependency tree rather than a linear order would allow more flexibility here.
username_2: Having a dependency tree is a reasonable suggestion for L2, but for now, everything would have to be harmonized and all data shared between 'order' values.
username_2: To follow up from the meeting yesterday, I finally thought of a way to describe the dependency tree with respect to the 'order' attribute:
* Every subtask is dependent on every other subtask with a lower 'order' value than its own.
* Every subtask is independent of every other subtask with the same or higher 'order' value as its own.
* Every subtask with no 'order' attribute is completely independent of every other subtask.
There will be many dependency trees that will be impossible to construct with this scheme, but it is at least true that an explicit dependency tree can be constructed from these rules.
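A small sketch of how an interpreter could expand those rules into an explicit dependency map — plain Python for illustration only, not anything normative from the spec; subtasks are represented as (id, order) pairs, with None meaning the order attribute is absent:
```python
def subtask_dependencies(subtasks):
    """Expand the order-attribute rules into an explicit dependency map.

    `subtasks` is a list of (id, order) pairs; order is None when the
    attribute is absent, in which case the subtask is fully independent.
    """
    deps = {}
    for sid, order in subtasks:
        if order is None:
            deps[sid] = set()
            continue
        deps[sid] = {other_id
                     for other_id, other_order in subtasks
                     if other_order is not None and other_order < order}
    return deps

# A and B share order 1, so neither depends on the other; C (order 2) depends
# on both of them; D has no order attribute and is completely independent.
print(subtask_dependencies([("A", 1), ("B", 1), ("C", 2), ("D", None)]))
```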
username_0: When multiple sub-tasks have the same order, the semantics of how their result should be applied to the input of sub-tasks with higher order is under-defined (e.g., should their result be summed? averaged? differenced?). As a result, one of two additional rules is needed:
- It should not be possible to define sub-tasks with higher order than the lowest order that has multiple sub-tasks.
- All tasks with higher order than the lowest order that has multiple sub-tasks should be executed independently.
The latter rule is just a straw argument. While it outlines the semantics, this would be non-intuitive. Therefore, this should not be the rule.
username_2: There is no concept in SED-ML of 'the result should be applied to the input'. This does not happen, ever. Nothing is ever summed; nothing is ever averaged; nothing is ever differenced while executing subtasks. There is only model state. You perform a subtask, and any given model state might change, or it might not. The SED-ML interpreter doesn't need to worry about it at all.
We did just vote to change this slightly with the addition of ComputeChange elements to SubTasks, but even there, it does not have access to anything other than the current model state. In the past, it was required to take the model state unchanged, however it was received from other subtasks. Now it has the ability to modify the model state if it wishes. But again, it cannot sum anything nor average anything nor difference anything that is not part of the static model state that it is given (i.e. variables and current values).
username_1: Many of the constraints we discussed yesterday and also here in the issue become relevant when we move to a more "workflow" like SED-ML system in level 2. At the moment things which can happen with tasks are very limited. All the concepts of chaining simulations or using outputs of one task as input of another task do not really exist at the moment.
username_0: When there are multiple sub-tasks with the same order, what is the starting model state for sub-tasks with higher order? This is the "output" that I'm referring to. Which model state from previous sub-tasks should be used?
username_2: Oh! I think I finally understand the question! Phew.
Let's say that a given RepeatedTask involves two models, mod1 and mod2. Subtask A performs a timecourse on mod1, and subtask B performs a timecourse on mod2. So far, so good: it clearly doesn't matter in which order A and B are performed, and indeed they could be executed in parallel if need be, so they are both given an 'order' of '1'.
However, now we have subtask C, which calculates the Jacobian of mod1. As such, it must have a higher order, so we give it an 'order' of 2.
But! For the first time, it suddenly matters if we performed things in parallel! Because the RepeatedTask doesn't necessarily know anything about what A and B are doing; it just hands them each a plate with the model states of mod1 and mod2 on it, and lets them change things. If you run A and then B, or if you run B and then A, you're fine: subtask C will still be handed mod1 in the state it was in after A ran. But! If you run A and B in parallel, each changes the 'suite' of models on the plate differently: mod1 and not mod2 is changed after A runs, and mod2 and not mod1 is changed after B runs. When it's time to execute C, how do you synchronize your suite again?
This does indeed need clarification in the spec. For me, my first instinct is to say that when you run things in parallel, you must collect a set of 'diffs' (essentially) on the model states, and apply them all to the set of initial model states. In this example, we'd apply the A diff to the suite, and mod1 would change, then apply the B diff to the suite, and mod2 would change. Then you'd run C, and everything will work.
But what do you do if A and B both make changes to the same element of mod1? The options would seem to be:
1) Error out.
2) Pick one diff arbitrarily and warn the user.
3) Pick one diff arbitrarily and silently continue.
4) Try to sum/average the diffs
Number 1 is obviously the safest, but I can imagine some scenarios where it would not be appropriate:
* If there were no more subtasks to perform anyway, and the repeated task was set 'reset=true'.
* If any subsequent subtasks (or repeated tasks) behaved the same regardless of which diff was applied.
which might indicate that 2 or 3 would be OK, at the tool's discretion.
I think 4) is a fool's errand, and should never be attempted.
My upshot: tell simulators that when executing subtasks in parallel, they must combine any changes those subtasks make on the model states, unless both a) there are no more subtasks, and b) the repeated task is set 'reset=true'.
username_0: For L1V4, I suggest scenario (1).
For L2, I suggest we change the way that dependencies among sub-tasks are specified to allow more flexibility. The order attribute largely only supports a linear flow of sub-tasks (with parallelization at individual orders). If SED-ML instead captured the dependency of each sub-task (i.e., id of the sub-task that it has to run after), SED-ML could support more complex trees of sub-tasks. Such trees would cover scenarios (2) and (3) by allowing the modeler to explicitly state the dependencies they want. If someone wanted to do (4), this could be encoded into a chain of sub-tasks, albeit opaquely; arguably this could be done already, although not with parallelization.
Such dependencies trees would make SED-ML increasingly look like a workflow language. For example, such sub-task dependencies could be specified similar to how SLURM, PBS, and GitHub Actions capture dependencies between jobs.
As part of the L2, we could try to capture merging of "results" of sub-tasks directly in SED-ML rather than something that has to be encoded into a simulation task. To think about how to capture this, it would be helpful to see a few concrete examples. Its seems that specific simulations may want to have sub-tasks merged in different ways. Its not clear to me (yet) what should be captured in a more general way.
username_1: I don't see this as an issue
It is clear from the definition that if you run a task with higher order, all lower-order tasks must already have been executed and the models are in the state they were left in at the end of executing all lower-order tasks.
I.e. as soon as C (order 2) runs, mod1 has the end state of A's execution and mod2 has the end state of B's execution. It does not matter at all if you run A and then B, B and then A, or A and B at the same time, because the two tasks change different models!
You cannot parallelize two tasks with the same order in the listOfSubTasks if they change the same model and if the reset is False on later subtasks using the model. There is no way to know what the end state of the model is. This should be clearly written in the specification.
username_0: I agree that if subtasks with the same order change the states of different models, the semantics of their parallel execution of these subtasks is clear. Where is this constraint that sub-tasks that have the same order must reference different models described in the specifications?
I don't see how `resetModel` is relevant here. My interpretation is that `resetModel` applies to the whole iteration of a repeated task, not between individual subtasks.
username_2: I have added the following to address this issue. First, in the listOfSubTasks section:
"In some cases, it may be possible to run some subtasks in parallel if the following conditions are met:
The order attribute of both subtasks must be the same or unset.
The subtasks must not change the model state of the same model.
If there are subtasks to run with a higher order value, or if the resetModel attribute of the
RepeatedTask is "false", it must be possible to obtain and apply the changed model states from
all parallel subtasks before running the next subtask."
and then in the 'order' section:
"Leaving the order undefined for a SubTask implies that the SubTask may be executed before, after, or
in parallel with any other SubTask. Giving the same order to multiple SubTask elements is an explicit
statement that each SubTask in the group may be executed before, after, or in parallel with any other
SubTask in the group. It is recommended that users always explicitly set the order attribute for this
reason."
Leaving this open to ensure that the wording is OK and that nothing else is missing (or wrong!)
username_0: I think these conditions are more restrictive than necessary. I would separate two issues:
(a) whether a description of a set of subtasks is valid (i.e., semantically well-defined)
(b) whether subtasks can be executed in parallel
Regarding (a), subtasks are ill-defined if (i) there are multiple subtasks with the same order that make (different) changes to the same model and (ii) there are additional subtask(s) with higher order that involve the same model. In this case, it's semantically unclear what model to use for the later subtask(s). In my opinion, a validation tool should identify such cases and raise errors.
Assuming simulation tools raise errors when (a) is true, subtasks with the same order could always be run in parallel. In addition, subtasks with successive orders that don't share models can also be executed in parallel.
username_2: Hmm, I think you're right that the issues need to be separated.
For a), I think that it would actually be valid for two tasks to change the same model and have the same order if they could be executed in either order (i.e. if 'A then B' and 'B then A' both resulted in the same model state). However, this would be a situation that could not be parallelized. Do we want to say that it is therefore illegal?
For b), if you want to relax the restrictions, perhaps the best thing to say is simply 'if you analyze the tasks and determine they can be parallelized, then go for it'. This would allow people to ignore the 'order' if they could analytically prove that the order didn't matter, and would also save them from the situation in a).
(As an extra note: if task1 modifies model1 and task2 modifies model2, that's insufficient to know if they can be parallelized; one could be parameterized with the other, i.e. with a Variable for task1 that targets model2.)
username_2: As in https://github.com/SED-ML/sed-ml/issues/54:
The latest updates to the spec call out the possibility of parallelization in a few places. In listOfTasks:
"Each top-level task is defined such that its execution is independent of the others: if one task is executed
after another, the states of the models must be completely reset so there’s no cross-contamination of one
task to the next. This means that the top-level tasks are particularly well suited to being executed in
parallel, should that be desired."
in 'resetModel':
"When the resetModel attribute is set to "true", the individual repeats of the task may be paralellizable,
assuming any child Range of the RepeatedTask does not depend on the results of any individual repeat
(as is theoretically possible for the FunctionalRange, but not for the other Range types.)"
in 'listOfSubmodels':
"In some cases, it may be possible to run some subtasks in parallel. Interpreters may use the order
attribute as a hint in making the decision about what to parallelize, but in general, should be prepared
to examine which Model each SubTask is modifying and which model or models are being used as input."
(I changed to this from my previous list of requirements, which was too restrictive. Instead, I'm basically just punting to interpreters to determine this on their own.)
in 'order':
"Leaving the order undefined for a SubTask implies that the SubTask may be executed before or after
any other SubTask. Giving the same order to multiple SubTask elements is an explicit statement that
each SubTask in the group may be executed before or after any other SubTask in the group. It is
recommended that users always explicitly set the order attribute for this reason.
Any order value does not imply whether the SubTask may be executed in parallel with other SubTask
elements. Interpreters who wish to parallelize subtasks should operate from the assumption that in the
default case, each SubTask would be executed in some order, and adjust accordingly."
username_0: attribute as a hint in making the decision about what to parallelize, but in general, should be prepared
to examine which Model each SubTask is modifying and which model or models are being used as input.
I think this can be simplified:
"Subtasks can also be executed in parallel when they do not share any state. Interpreters can determine this from the descriptions of the subtasks."
username_2: Oh, hmm. I guess I don't know for sure either way when FunctionalRanges are supposed to be evaluated. I was remembering when they were sort of proposed to have 'while' functionality (instead of 'for'), but then that didn't quite work.. or at least we thought it didn't work?
Let's say that they're evaluated at the beginning of the run, and that's probably simplest, not to mention the most likely to have implementations.
Also, I like your changes; I made the following changes:
Added to FunctionalRange:
"The value of any Variable child of a FunctionalRange should be calculated before the first simulation
begins, and will not be affected by any SubTask in the RepeatedTask."
'resetModel':
"When the resetModel attribute is set to "true", the individual repeats may be executed in parallel."
And then I used your text for 'listofSubTasks'
username_0: Sounds good.
Status: Issue closed
|
DevExpress/testcafe | 579194913 | Title: Don't finalize the pending command in case of the 'ExecuteSelectorCommand' command
Question:
username_0: Example (pseudo code):
```js
await t
.click('a[target=blank]')
.click('link on child page') // a few page redirects here
  .expect(Selector('selector-on-child-page').textContent).ok()
```
Answers:
username_0: Fixed in https://github.com/DevExpress/testcafe/commit/36337d3afcc2c2954530b1e5d8a271fef8ed5dbb
Status: Issue closed
|
tableau/server-client-python | 306531836 | Title: Fullname and email
Question:
username_0: Hello,
I have a python script utilizing the Tableau Server Client (TSC). I am trying to pull down information about the users. Below is the code, slightly modified from the example given online from the Tableau Github page (https://tableau.github.io/server-client-python/docs/api-ref#requests).
Using the link above as a reference, scroll down to “Users”. Just below will show the attributes available in the UserItem class. You will see 'name', site_role','fullname' and 'email' are included in that list.
When I try to query the API for 'fullname' and 'email' I get “None”. According to documentation Tableau populates the email address and display name with info from AD during import. When I log into Tableau, the two fields are populated as expected. But as stated, when I do a pull using TSC, it pulls back nothing. I don’t think it’s my code as I am not getting errors. And I am using two other attributes (name and site_role) that both pull back the correct information when queried. See below for the code I am using.
---Works---
```python
with server.auth.sign_in(tableau_auth):
    all_users, pagination_item = server.users.get()
    UsersDB = [(user.name, user.site_role) for user in all_users]
```
What I want to do is include 'user.fullname' and 'user.email' in the above query. But when I do I get "None" as my results for those two attributes only, 'user.name' and 'user.site_role' both pull back data.
---Does not work correctly---
```python
with server.auth.sign_in(tableau_auth):
    all_users, pagination_item = server.users.get()
    UsersDB = [(user.fullname, user.name, user.site_role, user.email) for user in all_users]
```
SIDE NOTE: I have been reviewing the user_item.py file to see if something was overlooked. I am NOT an expert with classes so I am not really sure what I am looking at. However, I don't think properties were declared for email and fullname. I tried adding them and got errors saying things like "inconsistent use of tabs and spaces in indentation". When I replace an existing property to avoid this error I get an error saying "can't set attribute". So I obviously don't understand this file. I need this information for a work project.
Here is the file I was reviewing....
https://github.com/tableau/server-client-python/blob/master/tableauserverclient/models/user_item.py
I was messing with "def __init__" and the '@properties' below it.
Please advise.
Answers:
username_0: SIDE NOTE cleared up - I figured out the tabs and spacing issue. Apparently it didn't like it when I used tabs for indentation, so I switched to 4 spaces everywhere and that solved it.
Once I cleared that up, I get an error like "fullname must not be empty".
username_1: Seems like the problem is that user.fullname and user.email are the two fields that are not returned by our Rest API by default. Using the Rest API, you could get those two fields by using the 'fields' parameter to expand your query (https://onlinehelp.tableau.com/current/api/rest_api/en-us/help.htm#REST/rest_api_concepts_fields.htm%3FTocPath%3DConcepts%7C_____8)
Currently, however, TSC does not support using the 'fields' parameter, so there is no way to get user.fullname and user.email. I'll add this to our backlog.
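Until then, one possible workaround is to call the REST API directly and ask for all fields. A rough sketch with the `requests` library — the server URL, API version, site id, and auth token below are placeholders, not values from this thread:
```python
import requests

# Placeholders - substitute your own server, REST API version, site luid and token
# (the token comes from a REST API signin call or an existing TSC session).
server_url = "https://my-tableau-server"
api_version = "3.7"
site_id = "my-site-luid"
auth_token = "my-auth-token"

resp = requests.get(
    f"{server_url}/api/{api_version}/sites/{site_id}/users",
    params={"fields": "_all_"},               # expand the query to every available field
    headers={"X-Tableau-Auth": auth_token, "Accept": "application/json"},
)
resp.raise_for_status()
for user in resp.json()["users"]["user"]:
    print(user.get("name"), user.get("fullName"), user.get("email"))
```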
username_2:
```python
req_option = tsc.RequestOptions()
req_option.filter.add(tsc.Filter(tsc.RequestOptions.Field.Name,
                                 tsc.RequestOptions.Operator.Equals,
                                 'xyz'))
newuid = server.users.get(req_option)
print('newuid', newuid)
newid = newuid[0][0].id
newname = newuid[0][0].name
fullnewname = newuid[0][0].fullname
print('newname', newname, fullnewname)
```
Output:
```
newname xyz None
```
Please could you update us when the fullname attribute is fixed to display the full name? For ID xyz, I expect to see Zen XY.
username_3: Hello,
Any news about this enhancement ?
I've been struggling with user_item.py to find a solution to bring up all the fields from a specific user but without any success for now :(
Thanks.
username_3: It seems that a lot of fields aren't returned by the Rest API by default.
In fact, I can only see :
- name
- last_login
- id
- site_role
Thanks.
username_4: Hi,
I also have the same problem, any enhancement updated?
Regards,
Shen
username_5: I had the same problem. It's ok if this feature isn't programmed yet, but the documentation should match current production. Please remove `.email` and `.fullname` from the `users.update` example:
https://tableau.github.io/server-client-python/docs/api-ref#users
username_6: server.users.get_by_id() returns `fullname` but not `email`
username_7: +1 for being able to return email address
username_8: This seems like an obvious thing to add in. Please have email address return for a given user item!
username_9: For `users.get()`, it should be returning full name and email address now with the changes from #713 (this was included in release 0.14.0).
As for `users.get_by_id()`, I think email is being returned there but I'm not sure if it's just recent Tableau Server versions or not.
username_1: For `users.get_by_id()`, email should now be returned if you are using Tableau Online or the [latest Tableau Server releases](https://www.tableau.com/support/releases/server) (December) from 2019.3 through 2020.3. 2020.4 was just released, but won't have the fix until 2020.4.1.
username_10: In Tableau Server 2019.4 it still seems to return none.
username_9: @username_10 could you confirm which specific version of 2019.4 you are using? The latest maintenance release there is 2019.4.17 and that _should_ have the server-side part of this fix Chris mentioned.
username_10: We are running Tableau Server Version: 2019.4.3 (20194.20.0128.2054) 64-bit Windows.
```
user1 = server.users.get_by_id('9f9e9d9c-8b8a-8f8e-7d7c-7b7a6f6d6e6d')
print(user1.name)
print(user1.email)
```
I used this (but with an ID from my server) as a test case.
The name was returned but the e-mail still returned "None".
Also confirmed there was indeed a name on the server for that user.
username_9: @username_10 ahh okay, thanks for confirming. The fix here requires both using the latest TSC library _and_ being on the latest maintenance releases for Tableau Server. So for your case that would be **2019.4.17**. When your server is upgraded to that version then this should work for you.
username_10: Oh, whoops! Thanks for clearing that up for me @username_9 .
username_11: #565
Upgrading to v.15 confirmed works! Thanks guys!
- Tableau Server Version: 2020.4.2 (20204.21.0217.1203) 64-bit Windows
- tableauserverclient v0.15
```python
import tableauserverclient as TSC
tableau_auth = TSC.TableauAuth('USERNAME', 'PASSWORD')
server = TSC.Server('https://SERVERURL')
with server.auth.sign_in(tableau_auth):
    all_users, pagination_item = server.users.get()
    for user_item in all_users:
        print("User name: '{}' fullname: '{}', User email: '{}'".format(user_item.name,
                                                                        user_item.fullname,
                                                                        user_item.email))
# =>
# User name: 'username123' fullname: 'User Name123', User email: '<EMAIL>'
```
username_9: I'll go ahead and close this one based on the recent confirmations.
Status: Issue closed
|
EndPointCorp/end-point-blog | 273098359 | Title: Comments for Continuing an interrupted git-svn clone
Question:
username_0: Comments for https://www.endpoint.com/blog/2010/05/13/continuing-interrupted-git-svn-clone
To enter a comment:
1. Log in to GitHub
2. Leave a comment on this issue.
Answers:
username_1: Thanks so much, but in my case it doesn't work. I just joined a company to help with the migration of a 10-year-old SVN project with 70k revisions to git. Sadly the svn clone breaks, and when I try to fetch it starts over again... I have been working on this and the most successful run only gave me 25k commits.
username_2: I love you 💖 |
thingsboard/thingsboard | 596137044 | Title: I was working with thingsboard locally and now I'm not able to access localhost:8080 anymore!
Question:
username_0: Hi,
I was using Thingsboard without any problem, but after a power outage I'm not able to access localhost:8080 anymore and I get:
`"curl: (7) Failed connect to localhost:8080; Connection refused"`
What should I do now?!!
Answers:
username_0: After executing "cat /var/log/thingsboard/thingsboard.log | grep ERROR" here is what I got:
```
2020-04-07 19:35:19,213 [main] ERROR o.s.boot.SpringApplication - Application run failed
2020-04-07 19:59:27,061 [main] ERROR o.s.boot.SpringApplication - Application run failed
2020-04-07 20:06:03,409 [main] ERROR o.s.boot.SpringApplication - Application run failed
2020-04-07 20:20:03,710 [Thingsboard Cluster-reconnection-0] ERROR c.d.driver.core.ControlConnection - [Control connection] Cannot connect to any host, scheduling retry in 1000 milliseconds
2020-04-07 20:20:03,873 [Akka-akka.actor.default-dispatcher-2] ERROR o.t.s.a.ruleChain.RuleChainActor - Could not save EventEntity
2020-04-07 20:20:03,939 [Akka-akka.actor.default-dispatcher-2] ERROR o.t.s.actors.ruleChain.RuleNodeActor - Could not save EventEntity
2020-04-07 20:20:03,939 [Akka-akka.actor.default-dispatcher-2] ERROR o.t.s.actors.ruleChain.RuleNodeActor - Could not save EventEntity
2020-04-07 20:20:03,980 [Akka-akka.actor.default-dispatcher-2] ERROR o.t.s.actors.ruleChain.RuleNodeActor - Could not save EventEntity
2020-04-07 20:20:03,986 [Akka-akka.actor.default-dispatcher-2] ERROR o.t.s.actors.ruleChain.RuleNodeActor - Could not save EventEntity
2020-04-07 20:20:04,008 [Akka-akka.actor.default-dispatcher-2] ERROR o.t.s.a.ruleChain.RuleChainActor - Could not save EventEntity
2020-04-07 20:20:04,081 [Akka-akka.actor.default-dispatcher-2] ERROR o.t.s.actors.ruleChain.RuleNodeActor - Could not save EventEntity
2020-04-07 20:20:04,082 [Akka-akka.actor.default-dispatcher-2] ERROR o.t.s.actors.ruleChain.RuleNodeActor - Could not save EventEntity
2020-04-07 20:20:04,085 [Akka-akka.actor.default-dispatcher-2] ERROR o.t.s.actors.ruleChain.RuleNodeActor - Could not save EventEntity
2020-04-07 20:20:04,089 [Akka-akka.actor.default-dispatcher-2] ERROR o.t.s.a.ruleChain.RuleChainActor - Could not save EventEntity
2020-04-07 20:21:11,380 [main] ERROR o.s.boot.SpringApplication - Application run failed
```
username_1: Post the logs from before the application run failed.
It can be because of a port conflict or because the database is not accessible; make sure to check both.
username_0: I know the problem is because of the port number, because I'm trying to run two ThingsBoard instances on my computer. Shouldn't I change the port in thingsboard.yml? I did that but still have the problem! Which ports should I change? Is the HTTP port enough or not?
Best Regards,
<NAME>ovat
username_1: the issue seems not to persist
Status: Issue closed
|
Dav1dde/glad | 411735929 | Title: GladLoadGL on NIM
Question:
username_0: What should the gladLoadGL proc look like?
```nim
proc load(): bool =
  # what should I write here?

if(not gladLoadGL(load)): echo "error!"
```
Answers:
username_1: @johnnovak maybe you can help here if you have a sec
Status: Issue closed
username_1: Closing for no activity, I hope your problem was resolved. |
nojb/ocaml-macaroons | 217291556 | Title: interoperability with other macaroons implementations
Question:
username_0: Actually we came across this problem when trying to verify some macaroons produced by macaroons.js. Related issue: https://github.com/me-box/databox-arbiter/issues/27
After experimenting and referring to some other implementations with @yousefamar, here are three points that may need to be fixed:
1. packet header encoding/decoding: [libmacaroons' implementation](https://github.com/rescrv/libmacaroons/blob/6d125f7c23ba2a3f8d91e302e1e6a119a055cfa0/packet.c#L51) assumes a max value of 65535 (uint16) and hexifies it as the four-byte header
2. alphabet used by base64 codec: it seems that [an uri-safe alphabet](https://github.com/rescrv/libmacaroons/blob/6d125f7c23ba2a3f8d91e302e1e6a119a055cfa0/base64.c#L58) should be used
3. the primitives of the HMAC algorithm used to produce the signature: ocaml-sodium's auth function uses the primitive [hmacsha512256](https://github.com/username_2/ocaml-sodium/blob/master/lib_gen/sodium_bindgen.ml#L22), which on the C side will eventually call [hmacsha512](https://github.com/jedisct1/libsodium/blob/master/src/libsodium/crypto_auth/hmacsha512256/auth_hmacsha512256.c#L47). But both the C [libmacaroons](https://github.com/rescrv/libmacaroons/blob/6d125f7c23ba2a3f8d91e302e1e6a119a055cfa0/port.c#L103) and Python [pymacaroons](https://github.com/ecordell/pymacaroons/blob/master/pymacaroons/utils.py#L51) implementations use sha256 explicitly.
This lib is a nice implementation itself, with clear interfaces, and really handy to use. But it would be better if it could inter-operate with other macaroon libs.
I'll submit a PR dealing with the first two points. But I'm not sure about the third one; it seems there is no way to demand usage of sha256 for auth functions at the moment @username_2. For now, I can work around this by making a local pin to use sha256 primitives.
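As a quick illustration of point 2, the two alphabets differ only in which characters encode values 62 and 63 ('+'/'/' versus '-'/'_'); this Python snippet just shows the alphabet difference, not the full macaroon packet encoding:
```python
import base64

raw = bytes([0xfb, 0xff, 0xfe])            # bytes that exercise the two differing code points
print(base64.standard_b64encode(raw))      # b'+//+' -> standard alphabet uses '+' and '/'
print(base64.urlsafe_b64encode(raw))       # b'-__-' -> URI-safe alphabet uses '-' and '_'
```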
Answers:
username_1: Hi @username_0, thanks a lot for your contribution. I am not working actively on this project but will gladly and happily merge any good contribution and will do my best to make any further improvements possible.
username_2: There is no explicit sha256 binding at the moment. libsodium calls `hmacsha512256` `crypto_auth` so that is what we use for the `Auth` module. I would welcome a PR for an `Auth_hmacsha256` module (and `Auth_hmacsha512` and `Auth_hmacsha512256` modules) which should be just different invocations of the same `Gen_auth` functor plus some interface and documentation.
Status: Issue closed
username_0: Thanks for your quick response! :+1:
I'll close this up for now and see what I can do about the libsodium bindings later.
Thanks! :smiley: |
RunestoneInteractive/RunestoneServer | 673789710 | Title: 7-3-5 Activity 8 Autograder broken
Question:
username_0: **What Course are you in**
boston-pd-2020
**What Page were you on**
7.3
**What is your username**
<EMAIL>
**Describe the bug**
The Autograder crashes when I try to run the program because it cannot call WordPairsList
**Traceback**
Error
./RunestoneTests.java:45: error: cannot find symbol
WordPairsList list = new WordPairsList(words);
^
symbol: class WordPairsList
location: class RunestoneTests
./RunestoneTests.java:45: error: cannot find symbol
WordPairsList list = new WordPairsList(words);
^
symbol: class WordPairsList
location: class RunestoneTests
./RunestoneTests.java:57: error: cannot find symbol
WordPairsList list = new WordPairsList(words);
^
symbol: class WordPairsList
location: class RunestoneTests
./RunestoneTests.java:57: error: cannot find symbol
WordPairsList list = new WordPairsList(words);
^
symbol: class WordPairsList
location: class RunestoneTests
4 errors
**Additional context**
Answers:
username_1: Hi Ashok,
Thanks for reporting this. For some reason it's using the junit test from activity 9 instead of activity 8, but I'm not sure why. I'll look into it.
-Beryl
Status: Issue closed
username_1: This is now fixed! |
dart-lang/sdk | 128060153 | Title: VM: Snapshots don't properly handle ephemerons.
Question:
username_0: ```
import 'dart:isolate';
import 'dart:developer';
class Box {}
main() {
var port = new RawReceivePort();
port.handler = ((reply) {
print("Got reply: $reply");
var box1 = reply[0];
var colors = reply[1];
if (colors[box1] != "red") {
throw "Missing color of box1";
}
var box3 = new Box();
if (colors[box3] != null) {
throw "Bogus color of box3";
}
port.close();
});
Isolate.spawn(child, port.sendPort);
}
child(replyPort) {
var box1 = new Box();
var box2 = new Box();
var tag = new UserTag("blue-tag");
var colors = new Expando();
colors[box1] = "red";
colors[box2] = "green";
colors[tag] = "blue";
// Note that 'tag' is not reachable from this message.
replyPort.send([box1, colors]);
}
```
yields
```
Invalid argument(s): Illegal argument in isolate message : (object is a UserTag)
#0 _SendPortImpl._sendInternal (dart:isolate-patch/isolate_patch.dart:193)
#1 _SendPortImpl.send (dart:isolate-patch/isolate_patch.dart:177)
#2 child (file:///usr/local/google/home/rmacnak/dart1/sdk/a.dart:36:13)
#3 _startIsolate.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:264)
#4 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:148)
```
Answers:
username_1: You are sending an `Expando` across a `SendPort`, along with one of the keys. That should work - we guarantee that the same object sent twice in the same message will be deserialized to the same object in both instances, so the key in the expando should match the accompanying object.
I guess this is the VM's internal implementation of `Expando` that fails to serialize in some way.
username_0: Two implementation issues:
- The serializer writes ephemerons as if they were strong references
- The deserializer neglects to rehash the Expando
username_2: The question really is how you access the Expando object in the receiving isolate and how you preserve the identity of the Expando object across multiple messages.
username_1: You should probably not preserve the identity of the expando object. Like most other objects, it should be copied when sent. That could mean that you will eventually send multiple expando objects, each with potentially different keys. Only objects sent multiple times as part of the same message need to have their (relative) identity preserved.
As an optimization, any non-constant key of an expando that isn't sent in the same message can be dropped as well, since it will be unreachable on the other end (or you can just wait for GC).
(This assumes constants are converted to similar constants on the receiving end).
Status: Issue closed
|
downshift-js/downshift | 571688472 | Title: MenuItems not reading in Windows with NVDA/JAWS after upgrade from v1 to v5
Question:
username_0:
- `downshift` version: v5.0.3
- `node` version: 12.16.1
- `npm` (or `yarn`) version: 1.21.1
**Relevant code or config**
https://github.com/username_0/carbon/blob/upgrade-downshift-dependency/packages/react/src/components/Dropdown/Dropdown.js
Dropdown has a ListBox component as a dependency
https://github.com/username_0/carbon/tree/upgrade-downshift-dependency/packages/react/src/components/ListBox
**What you did**:
Upgraded Downshift from v1 straight to v5 :hurtrealbad:
**What happened**:
After the upgrade this Dropdown component's menu items are no longer being read by NVDA or JAWS in Windows 10
**Reproduction repository**:
https://deploy-preview-5373--carbon-components-react.netlify.com/iframe.html?id=dropdown--default
**Problem description**:
When using NVDA or JAWS 2020 in Windows 10 the menu items are not being read. It seems like focus never moves from the input field on the combobox. The a11y announcer doesn't seem to be changing the subtree to menu items as expected. This is the markup output with a menu option receiving focus:

Honestly any help with this would be great. The problem seems to be isolated to Windows environments which is interesting. Testing with VoiceOver on macOS the menuitems read as expected.
Answers:
username_1: First things first I don't see any input field. It seems that you actually have a select rather than a combobox. Maybe give [useSelect](https://github.com/downshift-js/downshift/tree/master/src/hooks/useSelect) a try if you have a function component.
We are not narrating using that aria-status-message anymore as this is supposed to be done by the screen reader automatically. In your case it's not happening because there is no aria-activedescendant on that button you keep focus on.
2 solutions: 1 - use the hook I mentioned above (recommended but I don't know what's your implementation, it may take longer than the second suggestion). 2 - get aria-activedescendant from `getInputProps` and use that on the button.
Solution 2 is simpler but it will not make your dropdown accessible. Screen readers on Windows will not enter in Forms Mode when interacting with the dropdown. That is why `useSelect` should be used. It's designed for this specific case. When the dropdown opens, `useSelect` will put the focus on the menu, so the screen reader will go into Forms/Application mode, because the menu has role="listbox".
Strongly suggest you to use the hook :). Good luck!
username_0: After some testing implementing solution 2 today I can confirm you're right. No menu-items are spoken with either NVDA/JAWS 2020 in Windows 10. Going to dig into solution 1 then 👍
username_1: Let me know how it goes, it will be interesting to find out how you migrated from v1 to a hook (y)
username_0: We finished our upgrade from v1 to v5 which included the above mentioned React Hooks refactors of various select components 🏄
https://github.com/carbon-design-system/carbon/pull/5373
Status: Issue closed
|
leoding86/webextension-pixiv-toolkit | 552405289 | Title: v3.7.2 Rename Format
Question:
username_0: v3.7.2
Remove page number in filename when downloading illustration has only one image
but my rename format is {title}({id}_p{pageNum})
now I have to add "0" to the filename....
Can you add the original filename? Or add another tag like {originId}?
example : {tag}=79012732_p0.jpg
Answers:
username_1: I'll improve the logic for removing the "pageNum".
What does {originId} mean? Do you mean the original filename?
username_0: Yes, I mean the original filename.
Originally I was hoping {id} could look like the original filename (e.g. 79012732_p0),
or that a new tag could be added to represent the original filename, e.g. a new tag like {originId}.
username_1: You can achieve this with the rule {id}_p0. Version 3.7.3 adds a setting to keep the page-number parameter even when an illustration has only one image (planned for release today).
username_0: Right XD, but considering the case of multiple images, it can't be hard-coded as {id}_p0.
Thank you for releasing the new version today.
Status: Issue closed
|
NervJS/taro | 417669095 | Title: Custom components cannot extend the AtComponent component.
Question:
username_0: **Problem description**
A custom component extends the `AtComponent` component from `Taro-UI`. It compiles fine in `weapp` mode, but WeChat DevTools reports an error.
**Steps to reproduce**
``` bash
1. git clone <EMAIL>:username_0/taro-issue-reproduce.git
2. npm install
3. npm start
4. Open the project with WeChat DevTools
5. Check the console
```
**Expected behavior**
WeChat DevTools does not report an error and the custom component works correctly.
**Error message**
``` js
VM2680:1 thirdScriptError
sdk uncaught third Error
Unexpected token import
SyntaxError: Unexpected token import
console.error @ VM2680:1
(anonymous) @ WAService.js:1
(anonymous) @ WAService.js:1
e @ appservice?t=1551858416718:1630
window.onerror @ VM2680:1
VM2680:1 thirdScriptError
sdk uncaught third Error
module "npm/taro-ui/dist/weapp/index.js" is not defined
Error: module "npm/taro-ui/dist/weapp/index.js" is not defined
at require (http://127.0.0.1:56439/appservice/__dev__/WAService.js:1:912710)
at http://127.0.0.1:56439/appservice/__dev__/WAService.js:1:912460
at http://127.0.0.1:56439/appservice/npm/taro-ui/dist/index.js:2:20
at require (http://127.0.0.1:56439/appservice/__dev__/WAService.js:1:912851)
at http://127.0.0.1:56439/appservice/__dev__/WAService.js:1:912460
at http://127.0.0.1:56439/appservice/components/panel/index.js:23:15
at require (http://127.0.0.1:56439/appservice/__dev__/WAService.js:1:912851)
at <anonymous>:47:7
at HTMLScriptElement.scriptLoaded (http://127.0.0.1:56439/appservice/appservice?t=1551858416718:1752:21)
at HTMLScriptElement.script.onload (http://127.0.0.1:56439/appservice/appservice?t=1551858416718:1764:20)
```
**System information**
```
Taro CLI 1.2.15 environment info:
System:
OS: Windows 10
Binaries:
Node: 10.13.0 - D:\nodejs\node.EXE
npm: 6.4.1 - D:\nodejs\npm.CMD
```
**Additional information**
None
Answers:
username_1: I'm running into this issue too
username_2: close via https://github.com/NervJS/taro/commit/49a97e9d9e57376fa82389975a253683eac1d577
Status: Issue closed
|
Wikidata/Wikidata-Toolkit | 273193544 | Title: Create wrapper for wbsetclaimvalue
Question:
username_0: I would like to use wbsetclaimvalue to change values for existing statements, like what is being done with the Pywikibot tutorial: https://www.wikidata.org/wiki/Wikidata:Pywikibot_-_Python_3_Tutorial/Changing_Items
I can write the code for this. I just need some guidance where to do it. Thanks!
Answers:
username_1: Hello! You should have a look at https://github.com/Wikidata/Wikidata-Toolkit/blob/master/wdtk-examples/src/main/java/org/wikidata/wdtk/examples/EditOnlineDataExample.java#L174
Side remark: in the latest released version (0.7), the login feature is broken. It has been fixed in the master branch.
username_2: Hi @username_0, I have started to work on the implementation of specific API actions like wbsetclaimvalue, but it turns out to be a fairly big challenge… I need to work my way through a jungle of bureaucracy. If you are still interested in this, grab a katana and join me! 👺 :hocho: |
nfelly/Umbaska | 161672255 | Title: Error 1.10
Question:
username_0: Hi, sorry, I can't register on your website because I don't know what "What is Umbaska written in?" means (I'm from Poland xd).
I use Spigot 1.10 and bensku's Skript for 1.10.
http://pastebin.com/nzXe4M0D
Answers:
username_1: The answer is Java
Status: Issue closed
|
reapit/foundations | 819320819 | Title: Images for Sales/App Review team build are not saving correctly
Question:
username_0: The RES sales build is not saving images correctly through the API. When images are added via `POST /propertyImages` they are not visible in Agency Cloud yet the API returns a 201
Answers:
username_0: Liaising with desktop team and infrastructure team as we just need to switch this build over to use S3. The problem mentioned in the description only affects this build and not customer databases
Status: Issue closed
username_1: Not a platform issue - handed over as appropriate |
fkleon/math-expressions | 664774560 | Title: [Bug] Wrong result for numbers more than 16 digits
Question:
username_0: Wrong results occur for numbers with more than 16 digits. For example:
```dart
Expression expression = Parser().parse('88888888888888888');
print(expression.toString());
```
Expected result should be `88888888888888888` but we get `88888888888888900.0` which is wrong.
**NB: This happens for each and every number.**
If possible, please provide a resolution ASAP as I'm using it in my production app.
Answers:
username_1: Hi @username_0, thanks for the report.
You are running into a limitation of the datatype that is used to represent numbers within this library. Numbers are handled as regular [double values](https://api.dart.dev/stable/2.8.4/dart-core/double-class.html) which are 64 bit double-precision floating point numbers. They only offer roughly up to 15 digits of precision.
This could be improved by utilising a big decimal representation, for example as implemented by [decimal](https://pub.dev/packages/decimal).
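The same ceiling shows up in any language that stores numbers as IEEE-754 doubles; a quick check in Python (whose float is the same 64-bit type), just to illustrate the ~15-16 digit limit rather than this library's exact output:
```python
from decimal import Decimal

x = float("88888888888888888")         # 17 significant digits
print(f"{x:.1f}")                       # 88888888888888896.0 -> the exact value cannot be stored
print(Decimal("88888888888888888"))     # 88888888888888888   -> a big-decimal type keeps it exact
```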
I don't have plans to implement this anytime soon, but contributions are welcome. |
grpc/grpc | 162781032 | Title: Python memory leaks on exit with outstanding calls
Question:
username_0: When python exits with outstanding calls, a cyclic reference at the Cython level prevents proper cleanup, leading to a memory leak warning.
```
D0628 16:18:07.661735081 10465 iomgr.c:99] Waiting for 3 iomgr objects to be destroyed
D0628 16:18:08.663269688 10465 iomgr.c:99] Waiting for 3 iomgr objects to be destroyed
D0628 16:18:09.664535323 10465 iomgr.c:99] Waiting for 3 iomgr objects to be destroyed
D0628 16:18:10.665863923 10465 iomgr.c:99] Waiting for 3 iomgr objects to be destroyed
D0628 16:18:11.667386620 10465 iomgr.c:99] Waiting for 3 iomgr objects to be destroyed
D0628 16:18:12.668871835 10465 iomgr.c:99] Waiting for 3 iomgr objects to be destroyed
D0628 16:18:13.670009823 10465 iomgr.c:99] Waiting for 3 iomgr objects to be destroyed
D0628 16:18:14.671078765 10465 iomgr.c:99] Waiting for 3 iomgr objects to be destroyed
D0628 16:18:15.672284821 10465 iomgr.c:99] Waiting for 3 iomgr objects to be destroyed
D0628 16:18:16.673620721 10465 iomgr.c:118] Failed to free 3 iomgr objects before shutdown deadline: memory leaks are likely
D0628 16:18:16.673701411 10465 iomgr.c:80] LEAKED OBJECT: tcp-client:ipv4:192.168.3.11:443 fd=6 0x7fea300045a8
D0628 16:18:16.673723755 10465 iomgr.c:80] LEAKED OBJECT: tcp-client:ipv4:172.16.31.10:443 fd=12 0x7fea30005168
D0628 16:18:16.673746282 10465 iomgr.c:80] LEAKED OBJECT: tcp-client:ipv4:172.16.17.32:443 fd=23 0x7fea30006888
D0628 16:18:16.673830891 10465 metadata.c:238] WARNING: 1 metadata elements were leaked
D0628 16:18:16.673872448 10465 metadata.c:238] WARNING: 3 metadata elements were leaked
D0628 16:18:16.673903482 10465 metadata.c:238] WARNING: 2 metadata elements were leaked
D0628 16:18:16.673927907 10465 metadata.c:238] WARNING: 2 metadata elements were leaked
D0628 16:18:16.673955019 10465 metadata.c:238] WARNING: 1 metadata elements were leaked
D0628 16:18:16.673982805 10465 metadata.c:238] WARNING: 9 metadata elements were leaked
D0628 16:18:16.674006559 10465 metadata.c:238] WARNING: 3 metadata elements were leaked
D0628 16:18:16.674031913 10465 metadata.c:238] WARNING: 3 metadata elements were leaked
D0628 16:18:16.674055336 10465 metadata.c:238] WARNING: 2 metadata elements were leaked
D0628 16:18:16.674124369 10465 metadata.c:238] WARNING: 2 metadata elements were leaked
D0628 16:18:16.674160625 10465 metadata.c:238] WARNING: 2 metadata elements were leaked
D0628 16:18:16.674188014 10465 metadata.c:238] WARNING: 3 metadata elements were leaked
D0628 16:18:16.674212776 10465 metadata.c:238] WARNING: 2 metadata elements were leaked
D0628 16:18:16.674313468 10465 metadata.c:238] WARNING: 7 metadata elements were leaked
D0628 16:18:16.674342804 10465 metadata.c:238] WARNING: 4 metadata elements were leaked
D0628 16:18:16.674363545 10465 metadata.c:238] WARNING: 2 metadata elements were leaked
D0628 16:18:16.674384519 10465 metadata.c:251] WARNING: 2 metadata strings were leaked
D0628 16:18:16.674404965 10465 metadata.c:255] LEAKED: Tue, 28 Jun 2016 20:17:40 GMT
D0628 16:18:16.674425889 10465 metadata.c:255] LEAKED: 3
D0628 16:18:16.674449029 10465 metadata.c:251] WARNING: 1 metadata strings were leaked
D0628 16:18:16.674470734 10465 metadata.c:255] LEAKED: 443:quic
D0628 16:18:16.674491956 10465 metadata.c:251] WARNING: 2 metadata strings were leaked
D0628 16:18:16.674518349 10465 metadata.c:255] LEAKED: Tue, 28 Jun 2016 20:18:05 GMT
D0628 16:18:16.674541054 10465 metadata.c:255] LEAKED: quic=":443"; ma=2592000; v="34,33,32,31,30,29,28,27,26,25"
D0628 16:18:16.674566538 10465 metadata.c:251] WARNING: 5 metadata strings were leaked
D0628 16:18:16.674587736 10465 metadata.c:255] LEAKED: Tue, 28 Jun 2016 20:17:55 GMT
D0628 16:18:16.674609252 10465 metadata.c:255] LEAKED: /google.logging.v2.ConfigServiceV2/CreateSink
D0628 16:18:16.674633129 10465 metadata.c:255] LEAKED: x-goog-api-client
D0628 16:18:16.674654447 10465 metadata.c:255] LEAKED: Tue, 28 Jun 2016 20:17:35 GMT
D0628 16:18:16.674677346 10465 metadata.c:255] LEAKED: Tue, 28 Jun 2016 20:17:29 GMT
D0628 16:18:16.674699780 10465 metadata.c:251] WARNING: 1 metadata strings were leaked
D0628 16:18:16.674719762 10465 metadata.c:255] LEAKED: Requested entity was not found.
D0628 16:18:16.674741234 10465 metadata.c:251] WARNING: 2 metadata strings were leaked
D0628 16:18:16.674763658 10465 metadata.c:255] LEAKED: logging.googleapis.com:443
D0628 16:18:16.674786116 10465 metadata.c:255] LEAKED: Tue, 28 Jun 2016 20:18:02 GMT
D0628 16:18:16.674809218 10465 metadata.c:251] WARNING: 1 metadata strings were leaked
D0628 16:18:16.674831358 10465 metadata.c:255] LEAKED: 29600m
D0628 16:18:16.674853004 10465 metadata.c:251] WARNING: 3 metadata strings were leaked
D0628 16:18:16.674875729 10465 metadata.c:255] LEAKED: /google.logging.v2.LoggingServiceV2/ListLogEntries
D0628 16:18:16.674933951 10465 metadata.c:255] LEAKED: Bearer ya29.CjAPA_i1yKBinTNZ2b9wS5FgZElPA8fZG4Xg24nF9mkKcyVlTSNZvNGKGLQVKah_X0U
D0628 16:18:16.674954541 10465 metadata.c:255] LEAKED: 5
D0628 16:18:16.674974456 10465 metadata.c:251] WARNING: 2 metadata strings were leaked
D0628 16:18:16.674993382 10465 metadata.c:255] LEAKED: alt-svc
D0628 16:18:16.675010531 10465 metadata.c:255] LEAKED: Tue, 28 Jun 2016 20:17:36 GMT
D0628 16:18:16.675029725 10465 metadata.c:251] WARNING: 1 metadata strings were leaked
[Truncated]
D0628 16:18:16.675722238 10465 metadata.c:255] LEAKED: Tue, 28 Jun 2016 20:17:31 GMT
D0628 16:18:16.675740482 10465 metadata.c:251] WARNING: 1 metadata strings were leaked
D0628 16:18:16.675758867 10465 metadata.c:255] LEAKED: /google.logging.v2.ConfigServiceV2/DeleteSink
D0628 16:18:16.675778098 10465 metadata.c:251] WARNING: 1 metadata strings were leaked
D0628 16:18:16.675794892 10465 metadata.c:255] LEAKED: /google.logging.v2.ConfigServiceV2/UpdateSink
D0628 16:18:16.675813321 10465 metadata.c:251] WARNING: 1 metadata strings were leaked
D0628 16:18:16.675831682 10465 metadata.c:255] LEAKED: gax/0.12.1 gapic/0.1.0 gax/0.12.1 python/2.7.11
D0628 16:18:16.675852964 10465 metadata.c:251] WARNING: 6 metadata strings were leaked
D0628 16:18:16.675870778 10465 metadata.c:255] LEAKED: /google.logging.v2.ConfigServiceV2/ListSinks
D0628 16:18:16.675891221 10465 metadata.c:255] LEAKED: Requested sink was not found
D0628 16:18:16.675909202 10465 metadata.c:255] LEAKED: Tue, 28 Jun 2016 20:18:01 GMT
D0628 16:18:16.675927607 10465 metadata.c:255] LEAKED: Bearer ya29.CjAPA5-Bwf<KEY>
D0628 16:18:16.675945394 10465 metadata.c:255] LEAKED: Tue, 28 Jun 2016 20:17:27 GMT
D0628 16:18:16.675963916 10465 metadata.c:255] LEAKED: /google.logging.v2.MetricsServiceV2/DeleteLogMetric
D0628 16:18:16.675984275 10465 metadata.c:251] WARNING: 2 metadata strings were leaked
D0628 16:18:16.676002156 10465 metadata.c:255] LEAKED: Tue, 28 Jun 2016 20:17:53 GMT
D0628 16:18:16.676017818 10465 metadata.c:255] LEAKED: Tue, 28 Jun 2016 20:17:39 GMT
D0628 16:18:16.676033360 10465 metadata.c:251] WARNING: 1 metadata strings were leaked
D0628 16:18:16.676048260 10465 metadata.c:255] LEAKED: /google.logging.v2.MetricsServiceV2/GetLogMetric
```
Answers:
username_1: What's the reference cycle?
username_0: Sorry, I mispoke. It is a hanging reference. When we put an operation on the wire, we manually call ```Py_INCREF()``` on the operation tag which holds a reference to the call. The call never gets cancelled when it goes out of context, and so we never retrieve the operation tag on the other side of the completion queue.
username_1: I think I'm still not understanding "When python exits with outstanding calls" - doesn't the use of a cleanup thread inside the channel mean that calls will be cancelled when that thread is joined?
Are we seeing this leak-on-exit in the output of our unit tests? If so, which ones? If not, what would a unit test to exercise this defect look like? Or a sample user story, if a unit test would be too weird?
How certain are we that leak-on-exit-and-spew-some-logging is a GA blocker? To me it sounds like a minor annoyance that would be merely nice to fix.
username_0: Yes, the _exit_tests.py show this behavior.
gcloud + gax feels like a tier 1 use case to me. I'm amenable to pushing this off the GA milestone if you disagree.
username_1: I'm still not understanding the defect itself - why doesn't the call get cancelled? It's a managed call, so why doesn't the cleanup thread cancel it? You mention a `Py_INCREF` on the operation tag and the tag holding a reference to the call, but why isn't there a corresponding `Py_DECREF` on that same tag? What's happening that the completion queue isn't getting drained at exit? Or is it, but somehow the tags that come out of it aren't getting `Py_DECREF`ed as they should be?
username_0: Sorry, to clarify this is for synchronous calls.
username_1: Synchronous response-unary calls?
username_0: Yes.
username_1: ... and what's the interpreter exit mechanism? Ctrl-C at the keyboard, or a signal, or something the application initiates?
username_0: Ctrl-C at the keyboard. In the tests it is simulated as a SIGINT sent to the process, but the intent is for Ctrl-C interpreter exit. I haven't tested calling ```sys.exit(0)``` from another thread, but I imagine it would have a similar effect.
username_1: I'm just not making the connections I need to make. I'm not getting "the operation tag which holds a reference to the call", because for synchronous (blocking) response-unary calls we use `None` as the tag (right?). `None` certainly doesn't hold a reference to the call, right?
username_0: The operation tag reference is done at the Cython layer:
https://github.com/grpc/grpc/blob/master/src/python/grpcio/grpc/_cython/_cygrpc/call.pyx.pxi#L46
username_0: This issue could probably be solved by removing only the Cython reference, and leaving our reference to managed calls to be cleaned up by the CleanupThread.
I'd like to move in the direction of less state tracking because of possible performance gains, but I'd be fine with making a minimal change to fix the issue pre-GA and revisiting un-needed state tracking post-GA
username_2: I'd originally added in the tag-as-GC-root thing to attempt to ensure memory safety whether or not things went wrong in the Python layers (attempting to ensure no use-after frees from Python). That said, I can't immediately come up with a situation where removing that reference could threaten that, so, I think I'm (tentatively) fine with that reference going away.
username_2: Although... regardless of that reference being there (or not), I'm feeling like a more proper solution would be pumping all queues on exit...
username_0: AFAICT, it's only safe to pump a queue after shutting down (so you know when to stop), and that should only be done on completion queue destruction. The issue there is that calls have references to their completion queues:
https://github.com/grpc/grpc/blob/master/src/python/grpcio/grpc/_cython/_cygrpc/channel.pyx.pxi#L69
I actually played around with removing that reference to do what you are suggesting https://github.com/grpc/grpc/pull/7122
As it turns out, it is possible for the completion queue to go out of scope before the calls, and if that happens, nothing cancels the calls, and then the completion queue hangs trying to shut down, so I abandoned that idea.
username_1: As I read and reread this issue, I get the sense that the invariant that's being violated is "calls that have operations started on them either get those operations finished or get cancelled", and that it's being violated just in the blocking sections of `_channel.py`, and that an appropriate fix would be to wrap the blocking synchronous RPC invocation code in a `try`/`finally` that cancels the call in the `finally`. Would that not be a small fix to make? Would it remedy this issue? Would it also remedy #7090?
username_0: I don't believe that will work. We still need to get the call off the completion queue by calling ```grpc_completion_queue_next()```. The completion queues get pumped as part of their destruction, but outstanding calls hold references to the completion queues. We might be able to make this work, but it would end up being a larger change than what is proposed.
As far as #7090, the issue is much harder to address. #7090 is a result of a cancellation before connection. An AVL tree is set up with "possible" connections, and gets destroyed when one of the connections is actually used. When ```grpc_shutdown()``` attempts to free the AVL tree, it needs a valid "ext_ctx" to do so, but no such context is available during final cleanup. Ensuring the cancelled call gets destroyed seems to fix the issue, and I've verified that ```grpc_call_destroy()``` never gets called when the error occurs.
username_0: @username_3
#7275 should have resolved the client side leaks with outstanding calls. Could you verify that gcloud + gax no longer has this leak on the master branch?
username_3: @username_0 I'd be glad to try it. Given that the repository is not Python-only, can you suggest a `pip` command-line that would install from the master?
username_0: @username_3
From the repo root you could try:
```python setup.py install```
If that doesn't work you could try,
```
python setup.py sdist
pip install dist/grpcio-0.16.0.dev0.tar.gz
```
username_2: Quick note:
`GRPC_PYTHON_BUILD_WITH_CYTHON=1` needs to be set and you must have Cython installed if you're going from a fresh repository check-out, e.g.:
`GRPC_PYTHON_BUILD_WITH_CYTHON=1 pip install .`
username_0: I've verified that this is fixed with the helloworld example.
Status: Issue closed
username_4: Has this change been released? If not, is there an ETA? I'd also prefer to use pip and whatnot. |
pulumi/pulumi | 650688111 | Title: Resource ignored when an input is an undefined variable
Question:
username_0: When creating a resource and defining its inputs:
If one of the inputs is a variable which is undefined, the resource is ignored on deployment.
Furthermore, all other resources defined after the troubled resource are also ignored and, if they already exist, will be deleted.
No error is thrown.
This happens in Python with the pulumi_aws module.
For example, everything after `myApi` will be ignored (or deleted if it already exists), as the variable assigned to `description` is undefined:
```python
class APIendpoint(pulumi.ComponentResource):
    def __init__(self, name, opts=None):
        super().__init__('APIendpoint:myEndpoint', name, None, opts)
        myApi = pulumi_aws.api_gateway.RestApi('myApi', description=someUndefinedVariable)
``` |
Sylius/Sylius | 63189347 | Title: [RFC][ResourceBundle] Services definition loading
Question:
username_0: What do you think about creating a new loader? I don't know how to name it for now, perhaps `ServicesLoader`? This class should be a factory: its method `createLoader` should return `$this`, but it should create the right loader depending on the chosen format. Why `$this`? Because we can add a `__call` method which directly calls the corresponding method on the loader and allows us to add extra methods.
```php
public function createLoader($format)
{
    // create the loader matching the given format
    $this->loader = new ...();

    return $this;
}

// __call forwards the call to the corresponding method on the Symfony loader
public function __call(...)
{
    $this->loader->method();
}

// You can easily add custom methods, for example to load an array of files instead of a single file
public function myCustomMethod();
```
With this loader, we can extract the loading logic from the Extension and make the Extension thinner.
@Sylius/core-team @stloyd What do you think ?
Answers:
username_0: @Sylius/core-team feedback ?
Status: Issue closed
|
vercel/next.js | 1158936512 | Title: Build times with swc are the same speed as babel
Question:
username_0: ### Verify canary release
- [X] I verified that the issue exists in Next.js canary release
### Provide environment information
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP Wed Feb 19 06:37:35 UTC 2020
Binaries:
Node: 16.14.0
npm: 8.5.2
Yarn: N/A
pnpm: N/A
Relevant packages:
next: 12.1.0
react: 17.0.2
react-dom: 17.0.2
### What browser are you using? (if relevant)
_No response_
### How are you deploying your application? (if relevant)
Vercel
### Describe the Bug
I was expecting the swc compiler to greatly improve my build times compared to babel. But after testing the build times on vercel, they seem to be almost identical.
In the reproduction repo below, the swc version deploys in 1m 47s. The babel version deploys in 1m 55s, which seems to be within the margin of error.
### Expected Behavior
Building with swc should be much faster.
### To Reproduce
I originally discovered this on one of my own repos, but I can't share it publicly. However, I was able to reproduce by forking the [Supabase website](https://github.com/username_0/supabase).
1) Deploy the project on vercel without a `.babelrc`
2) Add a `.babelrc` file with the following:
```
{
"presets": ["next/babel"],
"plugins": []
}
```
3) Deploy on vercel
4) Redeploy the babel version, making sure to not use the build cache.
5) Compare build times
Answers:
username_1: The repo you link to seems to be private. Could you add a reproduction so this can be investigated? :pray:
You mention deployment, which is not the same as compilation time. Are you comparing the compilation?
username_0: You're right, I am comparing deployment. Is that a mistake? I assumed that compiling would be a significant part of the deployment, so a 5x improvement in compilation time should lead to a noticeable difference in deploy time. I just figured it's easier to compare Vercel deployments since they're standardised and logged.
I also tested by running `next build` on my local machine, but saw similar times there too.
Am I thinking about this correctly? Maybe it's working as intended, and I just have incorrect assumptions.
Status: Issue closed
username_1: Depending on project size, dependencies, plugins, etc., compiling could be the smallest or biggest part of a deployment.
I cannot confirm any "bugs" here. If you are having trouble with your project or the build speed, I suggest opening a [question in Discussions](https://github.com/vercel/next.js/discussions/new?category=Help) to get help figuring out what could cause a slow-down in your particular case. :+1: |
Azure/azure-cli | 224448912 | Title: Public IPv6 address returned as null
Question:
username_0: az --version is 2.0.3
az network public-ip list retrieves Public IP addresses but those enabled for IPv6 show up as null. Below is the output.
{
"dnsSettings": null,
"etag": "W/\"0dbc0843-57f0-4bce-8e60-3e46813378d6\"",
"id": "/subscriptions/fed7f475-6055-4e3c-8529-c1345df70589/resourceGroups/IPv6v2/providers/Microsoft.Network/publicIPAddresses/LBIPv6PubAddressvw4konls2vfbw0",
"idleTimeoutInMinutes": 4,
"ipAddress": null,
"ipConfiguration": {
"etag": null,
"id": "/subscriptions/fed7f475-6055-4e3c-8529-c1345df70589/resourceGroups/IPv6v2/providers/Microsoft.Network/loadBalancers/IPv6v4LBvw4konls2vfbw0/frontendIPC
onfigurations/XRELoadBalancerFrontEndIPv6",
"name": null,
"privateIpAddress": null,
"privateIpAllocationMethod": null,
"provisioningState": null,
"publicIpAddress": null,
"resourceGroup": "IPv6v2",
"subnet": null
},
"location": "centralus",
"name": "LBIPv6PubAddressvw4konls2vfbw0",
"provisioningState": "Succeeded",
"publicIpAddressVersion": "IPv6",
"publicIpAllocationMethod": "Dynamic",
"resourceGroup": "IPv6v2",
"resourceGuid": "a4b819aa-4963-4775-94c5-9d2222a5e389",
"tags": null,
"type": "Microsoft.Network/publicIPAddresses"
}
Answers:
username_1: Same issue here. It seems to happen randomly, because it worked when I tried creating a new one.
username_2: The service does not return a value for the IP address, and that is why it is deserialized to `null`. @username_3, is this the expected behavior?
username_1: After more digging on my end I realized I was wrong in my prior report, or at least it was not the cause of the problem. I was getting a resource not found error for a NIC I had made, which led me to look into the components of the NIC, public IP included. It seemed odd that the IP was null, leading me to believe it was the culprit. However, now I believe you are right, it doesn't get assigned an IP until later when actually attached to a VM perhaps?
Either way, I resolved my problem because the error had more to do with using the wrong region (server template specified eastus, but NIC was located in southcentralus leading to the resource not found error).
Thanks.
username_3: Yes, that's correct. The IP address doesn't get assigned until it's either associated with the VM for the NIC it's attached to, or with a Load Balancer.
username_3: Is this resolved? Can it be closed?
username_2: When I create an IPv6 public IP and then create a VM using that IP, the value that displays for that public IP is an IPv4 address:
```
{
"ipAddress": "172.16.31.10",
"ipConfiguration": {
"etag": null,
"id": "/subscriptions/xxxxx/resourceGroups/tjp-ip/providers/Microsoft.Network/networkInterfaces/vm1VMNic/ipConfigurations/ipconfigvm1",
"privateIpAddress": null,
"privateIpAllocationMethod": null,
"provisioningState": null,
"publicIpAddress": null,
"resourceGroup": "tjp-ip",
"subnet": null
},
"location": "westus",
"name": "ip1",
"provisioningState": "Succeeded",
"publicIpAddressVersion": "IPv6",
"publicIpAllocationMethod": "Dynamic",
"resourceGroup": "tjp-ip",
"resourceGuid": "3f3e3ef6-17fa-4ecd-907f-e93f631b92be",
"tags": null,
"type": "Microsoft.Network/publicIPAddresses"
}
```
username_0: As far as I know IPv6 addresses can / should only be assigned to load balancers. VM's don't yet support native IPv6 addresses. But I'm no longer seeing the issue:
{
"dnsSettings": null,
"etag": "W/\"dcc7a9ac-75b3-4fbc-8729-597be38dae5c\"",
"id": "/subscriptions/fed7f475-6055-4e3c-8529-c1345df70589/resourceGroups/IPv6/providers/Microsoft.Network/publicIPAddresses/XREMemcacheLBIPv6PubAddress45ekwzeohxvjm1",
"idleTimeoutInMinutes": 4,
"ipAddress": "fc00:e968:6179::de52:7100",
"ipConfiguration": {
"etag": null,
"id": "/subscriptions/fed7f475-6055-4e3c-8529-c1345df70589/resourceGroups/IPv6/providers/Microsoft.Network/loadBalancers/IPv6v4LBMemcache45ekwzeohxvjm1/frontendIPConfigurations/XREMemcacheLoadBalancerFrontEndIPv6",
"name": null,
"privateIpAddress": null,
"privateIpAllocationMethod": null,
"provisioningState": null,
"publicIpAddress": null,
"resourceGroup": "IPv6",
"subnet": null
},
username_3: Our announcement last September was for native IPv6 for VM's so we do support it.
@username_2 I'm researching this now. I've recreated this and also cannot find how to query for the v6 address. Will revert back once I get an answer.
username_0: Right. But you can't assign a public IPv6 address directly to a VM (or you shouldn't be able to). The 'right' way for the VM to get an IPv6 address (today) is through DHCP. The IPv6 communication / connectivity to other IPv6 addresses is through the LB.
username_3: So I talked to some folks, and you can indeed only do this as a Public IP for a Load Balancer.
See the article here. In particular the "details" section and limitations. Including the DHCP assigned IPv6 address for the VMs.
https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-ipv6-overview
Sorry for the confusion. I was also under the impression that we enabled this directly on the VMs, meaning you could assign it directly, since we announced it as providing "native" IPv6 support for VMs. The support is correct, but how to get there is a bit more nuanced.
thanks.
Status: Issue closed
username_2: In light of this, I'm closing this issue.
username_4: Please, I have no IP address

What's wrong?
username_5: Same issue as @username_4, I don't see the private IPv6 address for the VM. Any fixes?
username_2: According to the Networking team, there is nothing to fix. The IPv6 address is not for the VM itself but for the load balancer.
https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-ipv6-overview |
LeRoger/web-design-prework | 343576139 | Title: prework feedback
Question:
username_0: Hey Roger! Thanks for submitting the pre-work for the CoderSchool Web Design course.
Here is some feedback after going over your prework assignments:
- **Assignment 1**: Good job in 'almost' wrapping all the HTML content using HTML elements. Some people left out some of the raw content and that is considered to be bad practice (You left the raw text `<NAME> ... Chess Champion` at the end). I would use `h2` instead of `h1` for the two second largest headings below the picture. Other than that, your HTML blog looks great!
- **Assignment 2**: The animal card looks slick! You know your CSS well! Your final result looks exactly like the prototype. I have no further feedback except for an aesthetic point that maybe you should not give the same border style to the inner image and inner container.
- **Assignment 3**: Good work on using the 'get' method of the form correctly and for styling the GOOGLE word. You got the assignment on point.
Overall, great work on the prework assignments! Class starts next Tuesday July 31st, hope to see you there. And let me know if you have further questions. |
carbon-design-system/carbon | 697039481 | Title: Proposal: Allow "ExpandableTile" to accept a label for the expansion trigger
Question:
username_0: Hey, was wondering if we might be able to extend the `<ExpandableTile />` component a bit. What we’re looking to do is add the ability to label the expansion trigger, as well as set the positioning/color of the trigger (see image).
Happy to contribute this back if there are no objections.
<issue_closed>
Status: Issue closed |
multiformats/go-multiaddr | 366108242 | Title: ValueForProtocol() and multiple occurrences of the same protocol
Question:
username_0: There can be multiple occurrences of the same protocol in one multiaddr. Example that's already happening: `/ipfs/QmRelay/p2p-circuit/ipfs/QmRelay`
`Protocols()` looks like it can handle it, but `ValueForProtocol()` will only return the first occurrence.
Answers:
username_1: https://github.com/multiformats/go-multiaddr/pull/81/files#diff-e2cd38f38509c9033189fbf0317e414eR46
username_0: You code faster than your own shadow, Sir.
Status: Issue closed
|
marigold-ui/marigold | 672820735 | Title: Container component - Layout
Question:
username_0: Like a Box as a primitive to add margin, padding and colors.
- [ ] create component
- [ ] add CSS normalization
- [ ] add theme styles
- [ ] create story file
- [ ] create test file
- [ ] create index file<issue_closed>
Status: Issue closed |
jussmaki/pathfinding-algorithms | 811018101 | Title: Peer review
Question:
username_0: Project downloaded: Monday 15.2.2021 at 20:20:41 EET.
Hi Jussi, here are a few thoughts on the project :)
Good points about the project:
- Sensible structure; the algorithms, the application interface and the file handling are clearly separated from the UI.
- A working UI that already finds a route and draws it on the map. I liked that the route search can be repeated many times; I should add this to my own project as well.
- Good, comprehensive tests; I got ideas for my own implementations from these too. For example, using the contains() method to inspect the elements of a route.
- The documentation also seemed to be in order. I did wonder whether it is necessary to write "Not applicable yet" etc. in so many places, since from the process point of view that is the default. I recommend writing the documentation as each particular part gets finished. At this point of the project testing etc. has already been started, so you can easily write about those the same things that will also go into the final report.
Things to improve:
- The most striking development target at the moment is that the algorithms do not find the shortest route. I noticed that when estimating distances, routes are currently examined only for straight moves. However, the distance of a diagonal move and a straight move is different, so this should also be implemented in the algorithms. Moving straight the distance is 1, and diagonally it is sqrt(2). Otherwise the structure of the algorithms looks correct.
- As far as I understand, the main focus of the course is on data structures and algorithms, so in my opinion it is not worth spending time testing Java's built-in classes. I am mainly thinking of that map builder in the file folder; it could possibly be excluded.
- Maybe not such an essential suggestion for the project, but the folder structure is a bit confusing. Right now everything lives in the project root; you could possibly create a separate folder for the code, the same way as for the documentation.
- You could upload that "AR0017SR.map" map to the root of the GitHub project, so that running the program is easier while there is still no user manual ;)
- The UI nicely printed those distances in the console, good! You could perhaps state more clearly which algorithm is being run, and also print the exact distance once you get the diagonal movement working.
- Once diagonal movement has also been added to the program, the tests could also check the lengths of the shortest routes, not just the number of nodes (this could also be added to the UI). |
karma-runner/karma-jasmine | 309362216 | Title: Misleading status message when using `fit` or `fdescribe`
Question:
username_0: ```
[email protected]
[email protected]
[email protected]
[email protected]
```
Hit me up if there's additional information required :)
Answers:
username_1: Steps to reproduce:
- clone https://github.com/username_1/karma-bug
- npm install
- karma start karma.conf
I'm running:
- win 10
- karma-cli version 1.0.1
- chrome version 65.0.3325.181
This issue might be related to
https://github.com/karma-runner/karma/issues/2587
username_1: After a downgrade to the latest Version of 2.x ( `npm install --save [email protected]` ) the output is ok:
```
karma start karma.conf
02 04 2018 10:51:05.481:WARN [karma]: No captured browser, open http://localhost:9876/
02 04 2018 10:51:05.492:INFO [karma]: Karma v2.0.0 server started at http://0.0.0.0:9876/
02 04 2018 10:51:05.493:INFO [launcher]: Launching browser Chrome with unlimited concurrency
02 04 2018 10:51:05.503:INFO [launcher]: Starting browser Chrome
02 04 2018 10:51:07.368:INFO [Chrome 65.0.3325 (Windows 10.0.0)]: Connected on socket n8zN-9nwxlg8Nca9AAAA with id 40420476
Chrome 65.0.3325 (Windows 10.0.0) ERROR: 'DEPRECATION: fit and fdescribe will cause your suite to report an 'incomplete' status in Jasmine 3.0'
Chrome 65.0.3325 (Windows 10.0.0): Executed 1 of 2 (skipped 1) SUCCESS (0.014 secs / 0 secs)
```
username_0: Makes me wonder: we're using 3.1.0 and none of the above reports any different status (which would be highly appreciated).
username_0: It appears this error has been reported to `jasmine` already, but responsibility for the bug is being pushed back towards `karma-jasmine`.
https://github.com/jasmine/jasmine/issues/1532
username_2: There's a PR for it https://github.com/karma-runner/karma-jasmine/pull/192 and it was even merged, but I was unable to find the new version of it. In my opinion it's a critical bug, so I would like to have a fix ASAP.
username_3: @username_9 Any ETA on a release? This bug is stopping us from using Jasmine 3, and the fix is already merged.
username_4: Posting to follow
username_5: Posting to follow
username_6: Posting to follow
username_7: Please do not "post to follow". Click the Subscribe button on the right sidebar instead.
<img width="268" alt="screen shot 2018-04-25 at 5 36 23 pm" src="https://user-images.githubusercontent.com/2053478/39279674-4c64798a-48af-11e8-81ff-78ca76249cba.png">
Sorry for adding to the noise.
username_0: Maybe people keep posting and add "noise" so that @dignifiedquire or other users with write-access do read and bump the version :)
username_8: New release created 🎉 https://github.com/karma-runner/karma-jasmine/releases/tag/v1.1.2 🎉
Works great, fixes this issue 🎖
username_9: Fixed by #192
Status: Issue closed
|
jchristn/WatsonWebsocket | 615529485 | Title: Communication with JS websockets
Question:
username_0: _Thanks for your work with this library. Really appreciate it._
I'm trying to communicate with a JS WebSocket. Roughly like:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Document</title>
</head>
<body>
<!-- <script src="app.js" type="text/javascript"></script> -->
<script type="text/javascript">
var socket = new WebSocket('http://localhost:8083/socket');
socket.onopen = function() {
// alert('handshake successfully established. May send data now...');
socket.send("Hi there from browser.");
};
socket.onmessage = function (evt) {
//alert("About to receive data");
var received_msg = evt.data;
alert("Message received = "+received_msg);
};
socket.onclose = function() {
alert('connection closed');
};
</script>
</body>
</html>
```
But `http://` is not supported: `SyntaxError: An invalid or illegal string was specified`
[This SO question](https://stackoverflow.com/questions/52757238/c-sharp-httplistener-prefixes-not-accepting-ws-preventing-js-websocket-to-conn) seems to be related, but I still couldn't make it work.
Do you have any hints?
Answers:
username_1: Hi @username_0 please try changing ```var socket = new WebSocket('http://localhost:8083/socket');``` to ```var socket = new WebSocket('ws://localhost:8083/socket');```. I took a screenshot below showing it working on mine using the ```TestServer``` project.

Please let me know if this works!
username_0: Interestingly, it only works in Chrome, not in Firefox.
username_1: Is this what you get in Firefox?
```
var socket = new WebSocket('ws://localhost:8000/socket');
[Exception... "<no message>" nsresult: "0x805e0006 (<unknown>)" location: "JS frame :: debugger eval code :: <TOP_LEVEL> :: line 1" data: no] debugger eval code:1:14
<anonymous> debugger eval code:1
Content Security Policy: The page’s settings blocked the loading of a resource at ws://localhost:8000/socket (“connect-src”). debugger eval code:1:13
```
It may be related to this:

Discovered these: https://stackoverflow.com/questions/37298608/content-security-policy-the-pages-settings-blocked-the-loading-of-a-resource
and
https://github.com/eclipse/che/issues/13736
username_1: Oh! Try going to another page. When I first went to my own site (www.joelchristner.com) i.e. any site that doesn't have this content security policy meta tag, it worked!

username_1: Also interesting to note that Chrome starts outbound websocket connections using IPv6 whereas Firefox is using IPv4.
Status: Issue closed
username_1: Going to close this, hope it helps. Seems the library is behaving as it should. Please re-open if you have further issues @username_0 !
username_0: Yep. Thanks for your response! I use this only in combination with CEF, so I’m good :) |
playgameservices/play-games-plugin-for-unity | 177508385 | Title: Provide more information on why authorization failed
Question:
username_0: Right now the `Authorize` method only accepts a callback that can receive a `boolean` that states whether the authentication was successful or not. So with the current API I cannot differentiate between an error (no internet connection) and the user explicitly declining auth (cancelling).
According to the [Quality Checklist for Google Play Games Services](https://developers.google.com/games/services/checklist#required) I should not show the sign-in prompt if the user declined, but I currently cannot implement that properly.
Answers:
username_1: Have you tried the silent sign-in process when your game starts?
`PlayGamesPlatform.Instance.Authenticate((bool success) => {}, true);`
username_2: Starting Auth Transition. Op: SIGN_IN status: ERROR_NOT_AUTHORIZED
is the error I am getting. I have 2 devices: one signs in fine and the other keeps getting an unauthorized error.
What is going on? I even have debug on, but it tells me nothing!
username_3: In my game I do not try to sign the user into GPS unless the user has said that they want to via a dialog box they get when they first run the game. If they say no to that dialog I leave it up to the user to sign in through a menu that deals with GPS, and give them the option to sign in automatically if they do want to sign in. This allows me to follow that checklist and not worry about the result from the Authorize method as the user has expressly stated they want to sign in by accepting the dialog box.
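For illustration, that kind of opt-in flow might look roughly like this in Unity; the `PlayerPrefs` key is made up, and the `Authenticate` call is the one shown earlier in the thread:
```csharp
using UnityEngine;
using GooglePlayGames;

public static class GpsSignIn
{
    private const string OptInKey = "gps_opt_in"; // hypothetical preference key

    // Call on startup: only attempt the silent sign-in if the user opted in before.
    public static void TrySilentSignIn()
    {
        if (PlayerPrefs.GetInt(OptInKey, 0) == 1)
        {
            PlayGamesPlatform.Instance.Authenticate((bool success) => { /* update UI */ }, true);
        }
    }

    // Call when the user accepts the sign-in dialog from the GPS menu.
    public static void SignInFromMenu()
    {
        PlayerPrefs.SetInt(OptInKey, 1);
        PlayGamesPlatform.Instance.Authenticate((bool success) => { /* update UI */ }, false);
    }
}
```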
username_4: I have the same problem that was mentioned by username_2. I successfully sign in on my phone, but on other devices I get this ERROR_NOT_AUTHORIZED (even with the same account). I have an alpha version of the app published, the player services settings are published too, the linked apps are configured correctly, and my Google account is listed in both test-user lists (as an alpha tester of the app and as a tester of Google Play games services). Can someone suggest what the problem is here? Any help will be very appreciated.
ReimarFinken/org-git-link | 537124113 | Title: how to conditionally disable org-git-link
Question:
username_0: org-git-link is exactly what I was looking for -- or so I thought. After using it for a few days, I discovered that I want ordinary org links about half the time.
How can I call the standard org-store-link sometimes and org-git-link at other times?
[Other people](https://stackoverflow.com/questions/56158827/how-do-i-disable-or-rein-in-org-git-link-org-plus-contrib-20190513) want this too. |
swaywm/sway | 264831836 | Title: Spontaneous segfaults
Question:
username_0: I am running sway-0.14.0-2.fc26.x86_64 from the fedora repositories. Periodically, it will segfault and dump me back to a VT. I haven't seen any kind of pattern to the segfaults, although it often happens while launching an xwayland application. More specifically, I have noticed it multiple times while I am using a python module called streamlink (https://github.com/streamlink/streamlink) to launch mpv. At a guess, it happens about once every 50 times I use it. I've also seen crashes while using firefox 57 beta.
The only other unusual thing I am running is redshift compiled from https://github.com/giucam/redshift.git, and launched by a systemd user daemon that executes `redshift -m wayland`
Log output from a crash triggered by streamlink is attached. This includes firefox subsequently crashing as well. Beyond what I've attached, it goes on to list many crashing xwayland applications over the course of the next few seconds.
[journalctl.txt](https://github.com/swaywm/sway/files/1378337/journalctl.txt)
Answers:
username_1: https://github.com/username_1/sway/wiki#how-do-i-report-issues
username_1: A stack trace as well, please
Status: Issue closed
|
LLPDNNX/NANOX | 318713749 | Title: Investigate jet id efficiency for heavily displaced jets
Question:
username_0: The current loose jet ID (https://twiki.cern.ch/twiki/bin/viewauth/CMS/JetID13TeVRun2016) may be suboptimal when jets are heavily displaced since e.g. no charged hadrons are reconstructed anymore
Answers:
username_1: @username_0 Is this the right page?
https://twiki.cern.ch/twiki/bin/view/CMS/JetID13TeVRun2016
username_0: yes
username_1: Can we say this is done? @username_0
username_0: yes - so conclusion:
* jet inefficiency only seen for ctau=10m;
* only ISR jet reconstructed for ctau=100m
=> "small" increase of jet acceptance at edge of parameter space using "own" jet id will be too much work considering that also jet energy scale needs to be retuned/validated
Status: Issue closed
|
danielgindi/Charts | 399218147 | Title: How to get a value from a line graph if there is more than one line
Question:
username_0: * [ ] I've read, understood, and done my best to follow the [*CONTRIBUTING guidelines](https://github.com/jjatie/Charts/blob/master/CONTRIBUTING.md).
## What did you do?
I'm using one custom scroller to show the value of the graph. I'm also using
`chartView.getEntryByTouchPoint(point: scrollerView.center)`.
## What did you expect to happen?
This works perfectly for a line graph with a single line (one dataSet), but when there is more than one line, what should I do to get every line's data at that point?
## What happened instead?
It only returns the entry of one of the lines and not both of them, so what should I do to get all of them, either at the same time or one by one?
## Charts Environment
**Charts version/Branch/Commit Number:3.2.1**
**Xcode version:10.1**
**Swift version:4.2**
**Platform(s) running Charts:ios**
**macOS version running Xcode:10.13.6**
## Demo Project
ℹ Please link to or upload a project we can download that reproduces the issue.
Answers:
username_1: ask on stack overflow please.
Status: Issue closed
|
alphagov/govuk-design-system | 1014853233 | Title: Update accessibility statement when iterated accordion goes live
Question:
username_0: ## What
Update [our accessibility statement](https://design-system.service.gov.uk/accessibility/) to remove mention of the [current WCAG fail](https://www.w3.org/WAI/WCAG21/Understanding/info-and-relationships) for the [accordion component](https://design-system.service.gov.uk/components/accordion/).
## Why
The iterated component will fix the accessibility issue, where section headings can be mistaken for links, but are treated as buttons.
## Who needs to know about this
Technical Writer
## Done when
- [ ] Technical Writer updates accessibility statement
Status: Issue closed
Answers:
username_0: This duplicates [#1823](https://github.com/alphagov/govuk-design-system/issues/1823), so I'm closing it. |
vazco/uniforms | 214811205 | Title: Wrapping a uniforms component with `createContainer` ?
Question:
username_0: <TextField
name="name"
floatingLabelText="Name"
/>
<SelectFieldContainer
name="songId"
hintText="Related Song"
valueKey="songId"
displayField="title"
/>
</AutoForm>
);
};
```
```javascript
// The container
const SelectFieldContainer = createContainer((props) => {
let data = [];
const songsSubscription = Meteor.subscribe('songs.list');
const loading = !songsSubscription.ready();
if (!loading) {
data = Songs.find().fetch().map(song => ({
songId: song._id,
title: song.title,
}));
}
return {
...props,
isLoading: loading,
options: data,
};
}, SelectField);
export default SelectFieldContainer;
```
```javascript
// SelectField
const Select = ({
options,
value, // current value
onChange,
disabled,
id,
name,
hintText, // display when nothing is selected,
inputRef,
multi,
valueKey,
...props
}) => {
const key = valueKey || '_id';
return (
<SelectField
[Truncated]
disabled={disabled}
id={id}
name={name}
onChange={(event) => {
const newValue = multi ? event : event && event[key];
onChange(newValue);
}}
placeholder={hintText}
ref={inputRef}
value={value}
options={options}
multi={multi}
valueKey={key}
/>
);
};
export default connectField(Select);
```
I might have made a typo somewhere while pasting, so please tell me if I should clarify anything!
Answers:
username_1: Your tree is like this
```
+ form
+ container
+ connect
+ field
```
whereas it should have been
```
+ form
+ connect
+ container
+ field
```
because AFAIK the connected component should be a direct descendant of a form, or a descendant of a component that's wrapped by connectField.
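A rough sketch of that ordering, reusing the names from the code above (import paths depend on your uniforms/Meteor versions, so treat them as assumptions):
```javascript
import { createContainer } from 'meteor/react-meteor-data';
import connectField from 'uniforms/connectField';

// Plain presentational field -- note: no connectField here.
const Select = ({ options = [], value, onChange }) => null; // render your SelectField here

// The container only injects data props (options, isLoading, ...).
const SelectContainer = createContainer(props => ({ ...props, options: [] }), Select);

// connectField wraps the outermost component, so the tree becomes
// form > connect > container > field, as described above.
export default connectField(SelectContainer);
```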
username_0: You're the champ, @username_1 😀
Status: Issue closed
|
mscdex/ssh2 | 64922188 | Title: Question regarding verifying host against its fingerprint
Question:
username_0: Why do we need to provide the hostHash in the config while instantiating a client? Can't that be figured out automatically on the basis of "hostkey_format" in ssh.js?
Answers:
username_1: `hostkey_format` is unrelated. It's the host key type, such as `ssh-dsa` or `ssh-rsa`.
There is no standard hashing algorithm for generating host fingerprints, so the `hostHash` parameter simply allows you to choose what you want to use. SHA-1 and MD5 are the most common.
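For reference, this is roughly how the option pairs with `hostVerifier` in the connection config; the host, credentials and fingerprint below are placeholders:
```javascript
var Client = require('ssh2').Client;

var conn = new Client();
conn.on('ready', function () { console.log('host verified, connected'); });
conn.connect({
  host: 'example.com',
  username: 'user',
  password: 'secret',
  hostHash: 'md5', // or 'sha1'
  hostVerifier: function (hashedKey) {
    // hashedKey is the host key digested with the chosen algorithm
    return hashedKey === '0123456789abcdef0123456789abcdef'; // known fingerprint (placeholder)
  }
});
```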
Status: Issue closed
username_0: Thanks for explaining. I was curious how FileZilla shows the fingerprints without taking any parameter.
username_1: Software applications like that just force a hashing method. Some use MD5 and some use SHA-1.
username_0: Great thanks, that explains it :) |
keep-network/website | 511401998 | Title: Update center border on "Media Kit" section of Press Page
Question:
username_0: Center border on "Media Kit" section of Press Page should be 50% transparent
Answers:
username_1: Current implementation:
<img width="1188" alt="Keep_Network" src="https://user-images.githubusercontent.com/57226633/69092455-69486100-0a1a-11ea-9e74-d03c4d4ee704.png">
Requested update:
<img width="871" alt="Keep_Website" src="https://user-images.githubusercontent.com/57226633/69093155-e0322980-0a1b-11ea-8c6c-f7a6d11c8cbf.png">
Status: Issue closed
|
onekiloparsec/SwiftAA | 688082559 | Title: Calculating the rise and set times for a star
Question:
username_0: Hi there,
thank you for this awesome framework!
Is it possible to use SwiftAA to calculate when a star will rise and set for a given date and location on earth?
I tried the following:
```
let star = AstronomicalObject(
name: "Sirius",
coordinates: EquatorialCoordinates(
rightAscension: Hour(.plus, 6, 45, 9.25),
declination: Degree(.minus, 16, 42, 47.3)
),
julianDay: JulianDay(Date())
)
let result = star.riseTransitSetTimes(
for: GeographicCoordinates(
positivelyWestwardLongitude: Degree(.plus, 7, 46, 42),
latitude: Degree(.plus, 49, 9, 3),
altitude: 210)
)
```
Unfortunately it doesn't work since riseTransitSetTimes will call the unimplemented initializer of AstronomicalObject and cause a fatalError. Is there another way to get the rise and set times of a star?
Answers:
username_1: Oh that's a very good point. Indeed `AstronomicalObject` conforms to the `CelestialBody` protocol, which gives access to the `riseTransitSetTimes` function. This is an inconsistency, and I thank you for pointing it out!
Now, the computation of Rise and Set times of a planet is based on an interpolation between 3 days where coordinates are not the same. This condition is not satisfied by an Astronomical object...
Hence, one must search a bit for a solution.
username_1: @username_0 I have implemented a solution (which also homogenizes the return types of twilights).
But I would prefer to make a release with unit tests validating the result. I couldn't find any reliable source so far. Do you have, for a given location on Earth, (fairly) precise expected rise and set times (< 1 minute) of an object?
username_1: Note that I've pushed my code, and now the project includes a Swift Playground. Your example is inside it. You can use it to test the results.
username_0: @username_1 thank you for the solution! :)
Unfortunately I do not have examples with highly accurate rise and set times, I use Stellarium most of the time.
The playground example gives me plausible results as far as I can tell.
username_0: Testing a bit further, I found that for Polaris I do not get rise and set times for my current location (which is expected), but there is no transitError either:
```
let polaris = AstronomicalObject(
name: "Polaris",
coordinates: EquatorialCoordinates(rightAscension: Hour(.plus, 2, 31, 47.08), declination: Degree(.plus, 89, 15, 50.9)
),
julianDay: JulianDay(year: 2020, month: 9, day: 6)
)
let results = polaris.riseTransitSetTimes(
for: GeographicCoordinates(
positivelyWestwardLongitude: Degree(.plus, 7, 46, 42),
latitude: Degree(.plus, 49, 9, 3),
altitude: 210)
)
```
If I understand the API correctly, there should be a CelestialBodyTransitError of type alwaysAboveAltitude?
username_1: Hello @username_0 You are perfectly right. This mistake is due to an inconsistency in the use of `RiseTransitSetTimes` in the Earth `twilights` function. I have corrected the code. I've added a test, and will try to add some more, then I'll make a release with all of this fixed.
Status: Issue closed
username_1: SwiftAA v2.3 released, with an updated interface for twilights and rise/set times. As far as I can tell, I can now close this issue. Feel free to open a new one if you think something's still missing / incorrect. |
dotnet/arcade | 489449150 | Title: None
Question:
username_0: @username_1 this failed because dotnet/toolset build-def doesn't specify ArtifactsCategory for the build. Can we set their artifact category to .NetCore ?
Answers:
username_1: Sure, feel free to open a PR. Must have been missed when onboarding the repo.
Status: Issue closed
username_2: I presume this is fixed now. |
oanc/anc2go | 447156209 | Title: Running the app locally
Question:
username_0: Hello, I am trying to export the OANC content to an NLTK-compatible output. Reading this, https://www.anc.org/software/anc2go/, I've seen that your project can help. Can you please give details on how I can run it locally?
Answers:
username_1: Hi @username_0,
Sorry, I did not see this when you created the issue. The best way to generate NLTK compatible output is with the online service (http://anc.org:8080/ANC2Go), which should be back online now.
Unfortunately the code is not easy to build and run locally as it depends on modules that lived in Maven repositories that are no longer online. I hope to deploy those dependencies to Maven Central when I get a chance. |
metakirby5/zenbu | 1147468268 | Title: zenbu.py issue
Question:
username_0: Package installed from AUR [zenbu-git](https://aur.archlinux.org/packages/zenbu-git)
```
python --version
Python 3.10.2
```
After running zenbu in the terminal I got this error:
```
Traceback (most recent call last):
File "/usr/bin/zenbu", line 33, in <module>
sys.exit(load_entry_point('zenbu==1.0.5', 'console_scripts', 'zenbu')())
File "/usr/lib/python3.10/site-packages/zenbu.py", line 770, in main
zenbu = Zenbu(
File "/usr/lib/python3.10/site-packages/zenbu.py", line 328, in __init__
self.refresh()
File "/usr/lib/python3.10/site-packages/zenbu.py", line 343, in refresh
deep_update_dict(
File "/usr/lib/python3.10/site-packages/zenbu.py", line 171, in deep_update_dict
if isinstance(d, collections.Mapping):
AttributeError: module 'collections' has no attribute 'Mapping'
```
And nothing more. Running with any functional flag, e.g. `-l`, `-w`, `--dry`, prints the same error. Zenbu only runs without errors with the help flag (`-h`).
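For context: in Python 3.10 the long-deprecated `collections.Mapping` alias was removed, so the check has to go through `collections.abc` instead. A minimal illustration of the change (not zenbu's actual code):
```python
# Python 3.10+: the ABCs live in collections.abc, not collections.
from collections.abc import Mapping

def deep_update_dict(d, u):
    for key, value in u.items():
        if isinstance(value, Mapping):  # was: isinstance(value, collections.Mapping)
            d[key] = deep_update_dict(d.get(key, {}), value)
        else:
            d[key] = value
    return d
```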
Status: Issue closed
Answers:
username_1: Should be fixed now, can you give it a try?
username_0: Fixed! Thanks!
Zenbu the best utility, long live! 😉 |
google/clif | 225924899 | Title: Build fails on Ubuntu 16.04
Question:
username_0: I tried building on Ubuntu 16.04 using ninja and it failed with the following error:
```
CMake Error at tools/clif/backend/CMakeLists.txt:15 (include_directories):
include_directories given empty-string as include directory.
```
I installed protobuf from source following the directions [here](https://github.com/google/protobuf/blob/master/src/README.md).
Here's the output from the full run:
```
username_0@mew:~/Workspace/clif/clif$ ./INSTALL.sh
+ INSTALL_DIR=/home/username_0/opt
+ CLIFSRC_DIR=/home/username_0/Workspace/clif/clif
+ LLVM_DIR=/home/username_0/Workspace/clif/clif/../clif_backend
+ BUILD_DIR=/home/username_0/Workspace/clif/clif/../clif_backend/build_matcher
++ cmake --version
++ head -1
++ cut -f3 '-d '
+ CV=3.5.1
+ CV=(${CV//./ })
+ (( CV[0] < 3 || CV[0] == 3 && CV[1] < 5 ))
++ protoc --version
++ cut -f2 '-d '
+ PV=3.3.0
+ PV=(${PV//./ })
+ (( PV[0] < 3 || PV[0] == 3 && PV[1] < 2 ))
++++ which protoc
+++ dirname /usr/bin/protoc
++ dirname /usr/bin
+ PROTOC_PREFIX_PATH=/usr
+ declare -a CMAKE_G_FLAG
+ declare -a MAKE_PARALLELISM
+ which ninja
/usr/bin/ninja
+ CMAKE_G_FLAGS=(-G Ninja)
+ MAKE_OR_NINJA=ninja
+ MAKE_PARALLELISM=()
+ echo 'Using ninja for the clif backend build.'
Using ninja for the clif backend build.
+ [[ '' =~ ^-?-h ]]
+ PYTHON=python
+ [[ -n '' ]]
+ echo -n 'Using Python interpreter: '
Using Python interpreter: + which python
/usr/bin/python
+ CLIF_VIRTUALENV=/home/username_0/opt/clif
+ CLIF_PIP=/home/username_0/opt/clif/bin/pip
+ virtualenv -p python /home/username_0/opt/clif
Running virtualenv with interpreter /usr/bin/python
New python executable in /home/username_0/opt/clif/bin/python
Installing setuptools, pkg_resources, pip, wheel...done.
+ /home/username_0/opt/clif/bin/pip install --upgrade pip
Requirement already up-to-date: pip in /home/username_0/opt/clif/lib/python2.7/site-packages
+ /home/username_0/opt/clif/bin/pip install --upgrade setuptools
Requirement already up-to-date: setuptools in /home/username_0/opt/clif/lib/python2.7/site-packages
Requirement already up-to-date: appdirs>=1.4.0 in /home/username_0/opt/clif/lib/python2.7/site-packages (from setuptools)
Requirement already up-to-date: packaging>=16.8 in /home/username_0/opt/clif/lib/python2.7/site-packages (from setuptools)
Requirement already up-to-date: six>=1.6.0 in /home/username_0/opt/clif/lib/python2.7/site-packages (from setuptools)
[Truncated]
-- Targeting NVPTX
-- Targeting PowerPC
-- Targeting RISCV
-- Targeting Sparc
-- Targeting SystemZ
-- Targeting X86
-- Targeting XCore
-- Could NOT find Z3 (missing: Z3_LIBRARIES Z3_INCLUDE_DIR) (Required is at least version "4.5")
-- Looking for sys/resource.h
-- Looking for sys/resource.h - found
-- Clang version: 5.0.0
-- Performing Test CXX_SUPPORTS_NO_NESTED_ANON_TYPES_FLAG
-- Performing Test CXX_SUPPORTS_NO_NESTED_ANON_TYPES_FLAG - Failed
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.1")
-- Checking for module 'protobuf'
-- Found protobuf, version 3.3.0
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython2.7.so (found version "2.7.12")
CMake Error at tools/clif/backend/CMakeLists.txt:15 (include_directories):
include_directories given empty-string as include directory.
```
Answers:
username_1: Please share /usr/lib/pkgconfig/protobuf.pc (if it's not there we have a problem).
We suspect it lacks the line includedir=${prefix}/include
What is `pkg-config --cflags-only-I protobuf` output?
username_2: This also happens for me on Debian 9.0 when I'm using a protobuf package I've built myself and installed in my own location rather than system wide (where it could clobber the distro managed stuff). This appears to fix it:
`pkg-config --cflags-only-I protobuf` is empty. But if I set `PKG_CONFIG_PATH` to point to the location where I installed protobuf properly, it works.
```
~/oss/clif$ export PKG_CONFIG_PATH=/home/greg/oss/tool_env/lib/pkgconfig
~/oss/clif$ pkg-config --cflags-only-I protobuf
-I/home/greg/oss/tool_env/include
```
But `cmake` doesn't appear to be picking that up from the environment and still fails the same way. Perhaps it needs another flag?
username_2: Ah... nevermind. *Solved* for me: The `../clif_backend/build_matcher/CMakeConfig.txt` file was caching the old empty value so cmake wasn't re-running pkg-config protobuf. Delete that before re-running `INSTALL.sh` (or rerunning cmake if done manually) and it found everything via the `PKG_CONFIG_PATH` environment variable and is happily on its way to building.
username_2: Context from an in-person discussion: we found that pkg-config is being too smart for our own good versus our current cmake setup. When you have protobuf installed in /usr on the system, pkg-config reads the protobuf.pc file and determines that the necessary -I flag is -I/usr/include, which it then removes (thus the empty output) since that is already an implied default include path, and that is what triggers the "empty-string" error.
We should be able to fix that by modifying `tools/clif/backend/CMakeLists.txt` or something related.
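A rough sketch of what such a guard could look like (illustrative only; `GOOGLE_PROTOBUF_INCLUDE_DIRS` is the variable referenced later in this thread):
```cmake
# Only forward the pkg-config include dirs when pkg-config actually returned
# something; protobuf headers in /usr/include yield an empty string, which is
# what trips include_directories().
if(GOOGLE_PROTOBUF_INCLUDE_DIRS)
  include_directories(${GOOGLE_PROTOBUF_INCLUDE_DIRS})
endif()
```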
username_0: @username_2 Gotcha - thanks for looking into it for me. I think your assessment is correct. For completeness, here's the output @username_1 asked for:
```
username_0@mew:~$ pkg-config --cflags-only-I protobuf
username_0@mew:~$ cat /usr/lib/pkgconfig/protobuf.pc
prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include
Name: Protocol Buffers
Description: Google's Data Interchange Format
Version: 3.3.0
Libs: -L${libdir} -lprotobuf -pthread -lpthread
Libs.private: -lz
Cflags: -I${includedir} -pthread
Conflicts: protobuf-lite
```
Also, for anyone who comes across this thread later, I was able to workaround this by commenting out lines containing "{GOOGLE_PROTOBUF_INCLUDE_DIRS}" in the relevant CMakeLists.txt (specifically, lines 33 and 37 in [tools/clif/backend/CMakeLists.txt](https://github.com/google/clif/blob/4e412c07cf7ab0351925e4aad4f5af806a0b0b98/clif/backend/CMakeLists.txt#L33-L37)).
username_3: I'm facing the same problem on F25, with a fresh clone of clif. Here are my outputs for what @username_1 asked for:
$ cat /usr/lib64/pkgconfig/protobuf.pc
prefix=/usr
exec_prefix=/usr
libdir=/usr/lib64
includedir=/usr/include
Name: Protocol Buffers
Description: Google's Data Interchange Format
Version: 2.6.1
Libs: -L${libdir} -lprotobuf -lpthread
Libs.private: -lz
Cflags: -I${includedir}
# Commented out because it crashes pkg-config *sigh*:
# http://bugs.freedesktop.org/show_bug.cgi?id=13265
# Conflicts: protobuf-lite
The output of `pkg-config --cflags-only-I protobuf` is empty, understandably so since the headers are in `/usr/include`.
username_1: Sorry about the breakage. This will be fixed in the next "release" this month. Was busy at PyCon.
Meanwhile "workaround this by commenting out lines containing
"{GOOGLE_PROTOBUF_INCLUDE_DIRS}" in the relevant CMakeLists.txt (specifically, lines 33 and 37 in tools/clif/backend/CMakeLists.txt)."
username_3: @username_1 Thanks! Took me a while to find the right `CMakeLists.txt` file. I tend to get very confused with cmake :-p; and now I'm noticing the previous comment linked to the appropriate lines in the source!
Sorry about the noise.
username_2: I suggest opening a new issue against the current version if build issues are still happening. I run into a few myself depending on what platform I am on, I'm still trying to sort through them to find things that are my own errors vs things we could improve in `INSTALL.sh`.
username_4: This repo was unmaintained from December 2017 to September 2019.
It was recently updated, see https://github.com/google/clif/discussions/36#discussioncomment-85120 for some background.
If this issue is still relevant please reopen it or ask questions at https://github.com/google/clif/discussions/.
Status: Issue closed
|
bispojr/observatorio-ufj-covid19 | 622906891 | Title: [Simulation] Write a text describing how to run the simulation
Question:
username_0: As a prerequisite for this task, we need to evaluate what the final configuration of the simulator will be, since there are several ways to run the simulation.
Status: Issue closed
Answers:
username_0: As a prerequisite for this task, we need to evaluate what the final configuration of the simulator will be, since there are several ways to run the simulation. |
zsqk/zsqk | 398511468 | Title: Getting Started with Docker
Question:
username_0: # Getting Started with Docker
## Applicable version
Docker version 1.12.3, build 6b644ec
That version was fairly old; Docker has since switched to year.month version numbers.
The version number is now around 18.09.
## List Docker images
```
docker images
```
https://docs.docker.com/engine/reference/commandline/images/
## List running Docker containers
```
docker ps
```
https://stackoverflow.com/questions/16840409/how-to-list-containers-in-docker
## Enter a running Docker container
```
docker attach ${CONTAINER ID}
```
Control + D will stop the container.
The docs say Control + C sends a close signal to the container; in my testing, Control + D runs the `exit` command to close it.
To detach without stopping the container, use Control + P followed by Q.
Reference: https://stackoverflow.com/questions/19688314/how-do-you-attach-and-detach-from-dockers-process
Docs: https://docs.docker.com/engine/reference/commandline/attach/#extended-description
## Start/stop a container
```
docker start ${CONTAINER ID}
docker stop ${CONTAINER ID}
```
https://docs.docker.com/engine/reference/commandline/start/
## Writing a Dockerfile
https://docs.docker.com/develop/develop-images/dockerfile_best-practices
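A minimal example for orientation (the base image tag and the script name are just placeholders):
```dockerfile
# Build a small image that runs a single script.
FROM alpine:3.8
WORKDIR /app
COPY app.sh .
RUN chmod +x app.sh
CMD ["./app.sh"]
```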
Answers:
username_0: Practice: https://labs.play-with-docker.com/
username_0: Orchestration
docker stack
Two orchestration schemes are supported:
Swarm, Kubernetes
Swarm example:
```yaml
# https://raw.githubusercontent.com/docker-library/docs/9efeec18b6b2ed232cf0fbd3914b6211e16e242c/postgres/stack.yml
# Use postgres/example user/password credentials
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: <PASSWORD>
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
```
username_0: # Docker for Beginners - Linux notes
Notes:
- Items in `[]` are optional.
- Anything prefixed with `$` is a variable.
## Container-related commands
### docker container run [options] $image_name [command to run]
Creates a container from the given image.
Options:
`--interactive` say you want an interactive session.
`--tty` allocates a pseudo-tty.
`--rm` tells Docker to go ahead and remove the container when it’s done executing.
`--detach` will run the container in the background.
`--name $name` gives the container a name.
`-e $env_var_assignment` will use an environment variable.
An image may have a default command; if you don't specify one, the default command is executed.
### docker container ls
### docker container logs $container_name
### docker container top $container_name
### docker exec -it $container_name $command
Executes a command inside a running container.
Tip: set $command to `sh` to open a new interactive shell.
username_0: Docker Hub
The official image registry.
The `FROM` in our Dockerfiles pulls from Docker Hub by default.
A non-OS project usually ships several image variants; the main difference is the base system. Some of the differences are covered here:
https://medium.com/swlh/alpine-slim-stretch-buster-jessie-bullseye-bookworm-what-are-the-differences-in-docker-62171ed4531d
In general, the following base systems are common:
- Debian buster (the current Debian release)
- Debian buster-slim (a slimmed-down Debian)
- Alpine (a very lightweight Linux)
You may also see:
- Debian stretch (the previous Debian release)
- windowsservercore
- nanoserver https://hub.docker.com/_/microsoft-windows-nanoserver |
Megan-OMA/Public-OMA-Documents | 193584607 | Title: AVSystem TestFest Docs
Question:
username_0: Singapore TestFest 2016
Results:
Client:
[AVSystem-Client-Results-Post.pdf](https://github.com/username_0/Public-OMA-Documents/files/632055/AVSystem-Client-Results-Post.pdf)
Server:
[AVSystem-Server-Results-Post.pdf](https://github.com/username_0/Public-OMA-Documents/files/632057/AVSystem-Server-Results-Post.pdf)
Logo:

Answers:
username_0: Smaller logo:

Status: Issue closed
|
Zulko/moviepy | 70920607 | Title: Invalid Syntax Error
Question:
username_0: I'm trying to run a script, that worked perfectly on my Macbook, on a CentOS VPS and I'm seeing the following error:
```
Traceback (most recent call last):
File "edit.py", line 3, in <module>
from moviepy.editor import *
File "/usr/lib/python2.6/site-packages/moviepy/editor.py", line 22, in <module>
from .video.io.VideoFileClip import VideoFileClip
File "/usr/lib/python2.6/site-packages/moviepy/video/io/VideoFileClip.py", line 3, in <module>
from moviepy.video.VideoClip import VideoClip
File "/usr/lib/python2.6/site-packages/moviepy/video/VideoClip.py", line 22, in <module>
from .io.gif_writers import (write_gif,
File "/usr/lib/python2.6/site-packages/moviepy/video/io/gif_writers.py", line 5, in <module>
from moviepy.decorators import (requires_duration,use_clip_fps_by_default)
File "/usr/lib/python2.6/site-packages/moviepy/decorators.py", line 88
for (k,v) in kw.items()}
^
SyntaxError: invalid syntax
```
Answers:
username_1: Thanks for the report, that's scary. I guess it's because you are using an old python version which doesn't understand this syntax. I may fix this in the future, but the fastest fix would be for you to install Python 2.7 or Python 3. Tell me if that works.
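For context, the construct Python 2.6 cannot parse on that line is a dict comprehension; a 2.6-compatible spelling passes a generator to `dict()` instead. Illustration only, not the actual moviepy code:
```python
kw = {'fps': 24, 'bitrate': '500k'}

# Python 2.7+ / 3.x syntax (what decorators.py uses):
new_kw = {k: v for (k, v) in kw.items()}

# Python 2.6-compatible equivalent:
new_kw = dict((k, v) for (k, v) in kw.items())
```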
username_0: Seems to work fine with 2.7. Thanks!
Status: Issue closed
username_2: How did you upgrade from 2.6 to 2.7? Could you please tell me?
ctuning/ck-quantum | 368683467 | Title: Write engaging VQE documentation
Question:
username_0: We should create engaging documentation for our open VQE challenge! Something that would convey our enthusiasm for the subject even to those who sit alone in front of their screens in the dark, not mingling with us at a nice venue!


Answers:
username_0: @tomparks, @oc251, @stevebrierley Many thanks for your contributions to [Wiki](https://github.com/ctuning/ck-quantum/wiki/)! Now we can launch the [1st Open QCK Challenge](https://github.com/ctuning/ck-quantum/tree/master/module/challenge.vqe).
username_0: If you find some time after the New Year, please update formulae [here](https://github.com/ctuning/ck-quantum/wiki/Measuring-Performance).
username_0: Closing as the documentation is probably as engaging as it gets.
Status: Issue closed
|
quintel/etmodel | 521460179 | Title: Is it correct that flh of PV are smaller for weather years than for default settings?
Question:
username_0: I notice that for all weather years the flh of PV is smaller than for 2015.
Are the same sources and methods used for sun?

Answers:
username_1: Yes, I noticed that too. What exactly do you mean with "the same sources and method"? The flh are calculated for the weather years based on the "fit factor" I derived from the NL2015 curve. This "fit factor" is the factor between the total irradiation and the number of flh. In 2015 the total irradiation was higher than in the other weather years, resulting in a higher number of flh.
username_0: That explains the low flh for the weather years.
Is my reasoning correct?
- We always use 867 flh for PV
- If 2015 is a year with high total irradiation we underestimate flh for weather years
If it is, is it worth checking whether 2015 is indeed a year with high total irradiation and using a different value to calculate the "fit factor"?
username_1: Yes, your reasoning is correct. It would indeed be interesting to investigate whether 2015 was a very sunny year. A few ways to go could be:
1. Check if there's any source providing information about (calculated) year-specific full load hours for solar PV in NL based on the irradiation. If this information is available, we could calculate the "fit factor" based on these numbers.
2. Use another (less sunny) year than 2015 to base the "fit factor" on.
3. Take the average of irradiation for a couple of years to base the "fit factor" on.
Option 1 would be the most thorough way to go. However, I am not sure if such a source is available which is why I'm not sure if this option is worth the (investigation) time. Hence, option 3 could be an easy fix for now?
username_0: I think I found a useful source ([KNMI jaaroverzicht 2015](https://www.knmi.nl/nederland-nu/klimatologie/maand-en-seizoensoverzichten/2015/jaar)).
This source states that 2015 was very sunny with 1894 sun hours.
It also states that the average number of sun hours is 1639.
So, if we scale the 867 flh of a standard year by the ratio of sun hours (867 × 1894 / 1639), 2015 would have roughly 1002 flh.
We can use 1002 to determine the fit factor we use for the weather years.
*Screenshot KNMI Jaaroverzicht 2015*

username_1: Nice, thanks for investigating, @username_0!
I can update the solar pv curves based on this new fit factor tomorrow morning. Is that still in time for the deploy, @username_2?
username_2: Yes :)
Status: Issue closed
|