cake-build/cake
115399437
Title: OpenCover: Code example is wrong Question: username_0: The following example is wrong. There is no `SetOutputFile` extension method for `OpenCoverSettings`. ![image](https://cloud.githubusercontent.com/assets/357872/10984968/b2947cd6-841d-11e5-9515-a05acd583075.png)<issue_closed> Status: Issue closed
bokeh/bokeh
321963321
Title: Major Label Overrides not accepting dictionary Question: username_0: #### Description of expected behavior and the observed behavior
When overriding ticks I'm unable to pass a dict unless wrapping it in eval(str()). Although I found a fix I'm kinda curious why this is happening... Newbie coder 👍
#### Complete, minimal, self-contained example code that reproduces the issue
```
from bokeh.layouts import row
from bokeh.plotting import figure, show, output_file

factors = range(0, 4)
labels = ['Ed', 'Jon', 'Christian', 'Ed']
x = [50, 40, 65, 10]

dot = figure(title="Categorical Dot Plot", tools="", toolbar_location=None,
             y_range=[0 - 1, max(factors) + 1], x_range=[0, 100])

dot.segment(0, factors, x, factors, line_width=2, line_color="green")
dot.circle(x, factors, size=15, fill_color="orange", line_color="green", line_width=3)

factor_labels = dict(zip(factors, labels))
dot.yaxis.major_label_overrides = factor_labels
# dot.yaxis.major_label_overrides = eval(str(factor_labels))
```
Gives
#### Stack traceback and/or browser JavaScript console output
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-224-fea246dbd58f> in <module>()
----> 1 show(dot)

~\Anaconda3\lib\site-packages\bokeh\util\api.py in wrapper(*args, **kw)
    188     @wraps(obj)
    189     def wrapper(*args, **kw):
--> 190         return obj(*args, **kw)
    191
    192     wrapper.__bkversion__ = version

~\Anaconda3\lib\site-packages\bokeh\io\showing.py in show(obj, browser, new, notebook_handle, notebook_url)
    135     if obj not in state.document.roots:
    136         state.document.add_root(obj)
--> 137     return _show_with_state(obj, state, browser, new, notebook_handle=notebook_handle)
    138
    139 #-----------------------------------------------------------------------------

~\Anaconda3\lib\site-packages\bokeh\io\showing.py in _show_with_state(obj, state, browser, new, notebook_handle)
    162
    163     if state.notebook:
--> 164         comms_handle = run_notebook_hook(state.notebook_type, 'doc', obj, state, notebook_handle)
    165         shown = True
    166

~\Anaconda3\lib\site-packages\bokeh\util\api.py in wrapper(*args, **kw)
    188     @wraps(obj)
    189     def wrapper(*args, **kw):
--> 190         return obj(*args, **kw)
    191
    192     wrapper.__bkversion__ = version

[Truncated]
    237         separators=separators, default=default, sort_keys=sort_keys,
--> 238         **kw).encode(obj)
    239
    240

~\Anaconda3\lib\json\encoder.py in encode(self, o)
    197         # exceptions aren't as detailed.  The list call should be roughly
    198         # equivalent to the PySequence_Fast that ''.join() would do.
--> 199         chunks = self.iterencode(o, _one_shot=True)
    200         if not isinstance(chunks, (list, tuple)):
    201             chunks = list(chunks)

~\Anaconda3\lib\json\encoder.py in iterencode(self, o, _one_shot)
    255                 self.key_separator, self.item_separator, self.sort_keys,
    256                 self.skipkeys, _one_shot)
--> 257         return _iterencode(o, 0)
    258
    259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,

TypeError: keys must be a string
```
Answers: username_1: Your code works as-is for me on master with no errors: <img width="224" alt="screen shot 2018-05-10 at 08 08 42" src="https://user-images.githubusercontent.com/1078448/39876924-61124076-5429-11e8-871a-5190635e30ab.png"> If there ever was a bug, it's been fixed since (your issue did not state any version info, please **always** provide version info). In any case there is no reason to do things this way, Bokeh has built-in support for real categorical factors: https://bokeh.pydata.org/en/latest/docs/user_guide/categorical.html Status: Issue closed
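The traceback ends in the JSON serializer rejecting the dictionary keys. If plain `dict` keys trip the serializer on your Bokeh version, one hedged workaround (a sketch, not the fix that actually landed in Bokeh) is to build `major_label_overrides` with string keys up front:

```python
factors = range(0, 4)
labels = ['Ed', 'Jon', 'Christian', 'Ed']

# Build the override mapping with string keys so the JSON
# serializer accepts it on versions where integer-like keys fail.
factor_labels = {str(k): v for k, v in zip(factors, labels)}

# dot.yaxis.major_label_overrides = factor_labels  # on the figure above
```

As the answer notes, on recent Bokeh versions this is unnecessary, and real categorical ranges are the better approach.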
simonbengtsson/jsPDF-AutoTable
334205281
Title: The afterPageContent, beforePageContent and afterPageAdd hooks are deprecated. Question: username_0: First of all, awesome plugin! It seems that it is using deprecated jsPDF hooks, though. I get this error in the console: `The afterPageContent, beforePageContent and afterPageAdd hooks are deprecated. Use addPageContent instead.` Status: Issue closed Answers: username_1: Did you resolve the problem? I'm getting the same error.
dlr-eoc/ukis-pysat
618837727
Title: Provide documentation for datahub log in Question: username_0: Connecting to a datahub requires user credentials to be set according to the target datahub, e.g. Copernicus SciHub. Calling `src = Source(source=Datahub.Scihub)` looks for user credentials but fails if they are not set in the environment:
```
File "<env_dir>\lib\site-packages\ukis_pysat\data.py", line 57, in __init__
    self.user = env_get("SCIHUB_USER")
File "<env_dir>\lib\site-packages\ukis_pysat\file.py", line 23, in env_get
    raise KeyError(f"No environment variable {key} found")
KeyError: 'No environment variable SCIHUB_USER found'
```
The documentation on this is rather sparse and should be more detailed. There are different ways of providing user credentials, depending on the OS and development tools.
- User credentials could be provided as plain text, which is not recommended.
- PyCharm offers the possibility to store the credentials in an environment in the Run/Debug configuration dialog.
- Credentials could be set in the following way (example in plain text, but they can be loaded from files):
```
os.environ["SCIHUB_USER"] = "Tim"
os.environ["SCIHUB_PW"] = "<PASSWORD>"
```
<issue_closed> Status: Issue closed
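The "loaded from files" option mentioned in the list above can be sketched like this (the `credentials.env` filename and the KEY=VALUE format are my assumptions for illustration, not part of ukis-pysat):

```python
import os

def load_env_file(path):
    """Export KEY=VALUE lines from a file as environment variables,
    leaving any variables that are already set untouched."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# load_env_file("credentials.env")  # e.g. a file containing SCIHUB_USER=Tim
```

This keeps the secrets out of the script itself, which addresses the "not recommended" plain-text option.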
rossfuhrman/_why_the_lucky_markov
563720162
Title: I probably cried. Often you’ll use other classes in the arguments for the first monster in the list and goes through each item in a little colorblind girl with a colon. Question: username_0: Toot: I probably cried. Often you’ll use other classes in the arguments for the first monster in the list and goes through each item in a little colorblind girl with a colon. One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots
sympy/sympy
185453192
Title: assume_integer_order in bessel.py should be more liberal Question: username_0: Right now things like `jn(nu, x).rewrite(besselj)` won't work unless you explicitly set `nu = Symbol('nu', integer=True)`. It should be more liberal, only failing if the argument is not an integer. In general, SymPy should not require the correct assumptions to be true, only for the incorrect assumptions to not be True. As a side note: I'm unclear why this has to require nu to be an integer. Aren't Bessel functions defined for any value?
Answers: username_1: Some tests are failing as the assumption of having an integer parameter is removed.
```python
assert jn(2.57082614100218 - 0.580811206370047*I, z) == sqrt(2)*sqrt(pi)*sqrt(1/z)*besselj(3.07082614100218 - 0.580811206370047*I, z)/2
```
Should I change the test in this case?
username_0: We should double-check the result.
username_2: Hello, I would like to work on this. Can you guide me on where to start? I'm new here.
username_0: I believe there is already a fix at https://github.com/sympy/sympy/pull/11785
Status: Issue closed
username_3: @username_0, regarding your question about the integer order in the description of this issue: for spherical Bessel functions (jn, yn) the order has to be an integer AFAIU. This is in contrast to normal Bessel functions, where the order can even be complex-valued (modified Bessel functions).
username_0: So is the merged PR #11785 incorrect?
username_3: I totally agree with what you said in https://github.com/sympy/sympy/issues/11777#issue-185453192: "It should be more liberal, only failing if the argument is not an integer. In general, SymPy should not require the correct assumptions to be true, only for the incorrect assumptions to not be True." Since in PR #11785 there is no check for the incorrect assumptions to not be True, i.e., it does not check whether `nu != Symbol('nu', integer=False)`, this PR does not address your statement. I'll work on a new PR to fix this.
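The "fail only when the assumption is known to be False" rule the thread converges on maps directly onto SymPy's three-valued assumptions, where a predicate like `is_integer` evaluates to `True`, `False`, or `None` (unknown). A sketch of the liberal check (the `order_is_acceptable` helper is hypothetical, for illustration only):

```python
from sympy import Symbol

nu_int = Symbol('nu', integer=True)   # known integer
nu_non = Symbol('nu', integer=False)  # known non-integer
nu_any = Symbol('nu')                 # unknown

# The liberal guard rejects only the known-False case, so symbols with
# unknown assumptions are allowed through.
def order_is_acceptable(nu):
    return nu.is_integer is not False

assert order_is_acceptable(nu_int)
assert order_is_acceptable(nu_any)      # unknown is allowed
assert not order_is_acceptable(nu_non)  # only known non-integers fail
```

This is the distinction username_3 draws: `nu.is_integer is True` demands the correct assumption, while `nu.is_integer is not False` only excludes incorrect ones.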
S0NN1/advent-of-code-2020
774725990
Title: Cannot compile solution for day1 part 2 Question: username_0: I'm having issues with compiling code for day 1 part 2. Steps to reproduce: Clone the repo Run `cargo run` inside the folder `day_1` The last command executes correctly but only prints the solution for part 1 I believe this is a critical issue and should be handled ASAP. Thank you Answers: username_1: Fixed in latest commit Status: Issue closed
alexedwards/scs
292160942
Title: stale sessions in data store - `deadline` plus `saved` property? Question: username_0: How are folks handling stale sessions in the data store? For example, redis has its [expire](https://redis.io/commands/expire) command but by default it is set to never expire. If a vacuuming routine compares `time.Now()` > `deadline`, then the session should be deleted. That's the logic, yes? Is there any use case for including a `saved` property (the date when a session was last saved) - in addition to the deadline? Right now I can't think of any. Maybe I just answered my own question but am double-checking with others. Answers: username_0: Wow, this is great! I see the stores already include checks for expired tokens. Excellent library! Thanks, @alexedwards! Status: Issue closed
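The vacuuming logic agreed on in this thread — delete a session once `time.Now()` passes its `deadline` — could be sketched as follows (in Python for brevity, since the actual scs stores are Go; the flat token-to-deadline layout here is hypothetical):

```python
import time

def vacuum(store):
    """Delete sessions whose deadline has passed.
    `store` maps token -> deadline as a Unix timestamp (hypothetical layout)."""
    now = time.time()
    # Collect first, then delete, to avoid mutating the dict while iterating.
    expired = [token for token, deadline in store.items() if now > deadline]
    for token in expired:
        del store[token]
    return len(expired)

sessions = {"live": time.time() + 3600, "stale": time.time() - 3600}
removed = vacuum(sessions)  # removes only "stale"
```

As the thread concludes, no separate `saved` timestamp is needed for this: the deadline alone decides expiry.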
fluffysquirrels/GGJ2015
55533909
Title: Player can move through wall if collision volume is not clamped to grid unit size Question: username_0: This is currently the case on Level1 if you move up to the back wall. It is a result of the position clamp fix forcing the player through the wall, meaning that Unity temporarily disregards collisions between the player and the wall - allowing the player to escape. A simple fix for now is to ensure that all art assets adhere to the unit grid layout. Status: Issue closed Answers: username_1: I'm not quite clear on how to reproduce this. Can you show a video or an annotated screenshot demonstrating which wall to jump over? username_0: Sure - will do at 12, I'll update you then.... username_0: Uploaded a video to /testing username_1: Ah! Got it. I thought you meant the bottom left edge / wall. I especially like when the player crouches at the end and falls into infinity! I'll take a look at why this is happening and try to stop it. username_0: Haha. I think I know why - it's kind of my fault but possibly unavoidable in a way (the fix being to fix the art and collision volumes). At the moment the collision geometry of the level prevents the player from intersecting it (due to the rigid body with isKinematic ticked on the player object). However, if you adjust the transform through code (er.... significantly on a single frame I guess?) as we are currently (and necessarily) with my clamp position function, then you can force the volumes to intersect. On the next FixedUpdate the collision system will then ignore collisions from intersecting volumes so the player can then escape.
All because I hastily modelled the environment and have messed up the size! I guess this can be fixed from an art standpoint but we could possibly prevent it from happening from a code stand point as well? By snapping the player BACK to the previous square or something along those lines? We definitely do not want to allow the player to not adhere to the grid system as then nothing works well. username_0: Oops didn't realise you can only assign one person at a time? I'll reassign you for now... username_0: Fixed environment model - you'll need to pull as I had to update the Level1 scene as a result! username_0: Not sure whether to close this one for now? Status: Issue closed username_1: Just tested this after your fix. I tried to hop through all four walls of the grid and couldn't. I think it's fine to close this issue now with the environment fix and we can open a new one if the problem re-occurs.
notify-run/notify.run
1043201688
Title: Subscribed channel says: Not found Question: username_0: Subscribed channel says: Not found. Notifications (pop-ups and sounds) ok, but the channel doesn't show list and says (in red): Not found. Phone and pc have the same problem. Thank you very much. Answers: username_1: Sorry about that, I found the bug and am deploying a fix. Should be live in < 10 minutes. username_0: Working. Tnx. Status: Issue closed
ray-project/ray
606303483
Title: fine grained CUDA / GPU controls? Question: username_0: ### What is your question?
Hi, I have multiple systems with diverse hardware in my portfolio. For example, this system with 3 GPUs:
- CUDA device 0 has 1 GB VRAM
- CUDA device 1 has 16 GB VRAM
- CUDA device 2 has 48 GB VRAM
How can I configure Ray to only do calculations on CUDA devices 1 and 2? My current example model has a size of 11 GB, so I can load it once on CUDA device 1 but four times on CUDA device 2. How do I configure for this with fractional GPUs? Is there a way to allocate memory on an absolute GB scale? Or is there a flag so Ray automatically figures out the parallelization under the hood? I had an intense look through the docs but failed to understand: https://ray.readthedocs.io/en/latest/using-ray-with-gpus.html#fractional-gpus
*Ray version and other system information (Python version, TensorFlow version, OS):* latest
Answers: username_1: Ray doesn't innately support this for now (it currently does not support resource isolation). You should probably find a way to pin each task to a GPU. For example, when you want to pin a CPU, you can do something like this: https://stackoverflow.com/questions/61051911/how-to-ensure-each-worker-use-exactly-one-cpu. (I am not that familiar with GPUs, but I assume there should be a way to do it.) @robertnishihara Do you know any way to do this?
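One common workaround for the pinning suggested in the answer — not a documented Ray feature, just the standard CUDA mechanism of hiding devices — is setting `CUDA_VISIBLE_DEVICES` before any CUDA context is created, so that only the 16 GB and 48 GB cards are visible to the process:

```python
import os

# Hide the 1 GB card (device 0); CUDA renumbers the remaining devices
# as 0 and 1. This must be set before CUDA is initialized in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2"

# ray.init(num_gpus=2)  # Ray would then only ever schedule onto the two big cards
```

This only restricts visibility per process; it does not give the absolute-GB memory accounting the question asks about.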
jlippold/tweakCompatible
350097067
Title: `InteliX` partial on iOS 11.3.1 Question: username_0: ``` { "packageId": "com.ioscreatix.intelix", "action": "working", "userInfo": { "arch32": false, "packageId": "com.ioscreatix.intelix", "deviceId": "iPhone10,3", "url": "http://cydia.saurik.com/package/com.ioscreatix.intelix/", "iOSVersion": "11.3.1", "packageVersionIndexed": true, "packageName": "InteliX", "category": "Tweaks", "repository": "Packix", "name": "InteliX", "installed": "1.3.8", "packageIndexed": true, "packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 90% with 19 working reports.", "id": "com.ioscreatix.intelix", "commercial": true, "packageInstalled": true, "tweakCompatVersion": "0.1.0", "shortDescription": "Grouped Notifications for iOS 11", "latest": "1.3.8", "author": "iOS Creatix", "packageStatus": "Working" }, "base64": "<KEY> "chosenStatus": "partial", "notes": "Notifications don’t open" } ```
department-of-veterans-affairs/va.gov-team
967155077
Title: AWS Staging: VAOS Direct Scheduling clinic Availability Calendar >Next button not enabled Question: username_0: 1. Log in as Cecil. 2. Select Primary Care. Select Cheyenne. 3. Enter April 1, 2022 for 'when do you want to schedule this appointment'. Select continue. 4. Expected response: The month of April displays and I can advance to the next month (using the >Next button). 5. Actual Response: The month of April displays and I CAN NOT advance to the next month (using the >Next button). I have to go back and change the date to May 1, 2022 if I want to see May availability. 6. I can use < Previous just fine. It is the >Next that is all in grey. april 1 selected call: https://staging-api.va.gov/vaos/v0/facilities/983/available_appointments?type_of_care_[…]&clinic_ids[]=455&start_date=2022-04-01&end_date=2022-05-31 may 2 selected call: https://staging-api.va.gov/vaos/v0/facilities/983/available_appointments?type_of_care_[…]&clinic_ids[]=455&start_date=2022-05-01&end_date=2022-06-30 (edited) Max future days to the clinic is 450 days. So that is not the issue. Status: Issue closed Answers: username_1: Appears to be fixed now. @username_0 feel free to validate on your end as well
Way2CU/Ranger-Site
240007442
Title: Site doesn't pass W3C validation. Question: username_0: [As seen here](https://validator.w3.org/nu/?doc=http%3A%2F%2Flp.berghoff.co.il%2F) site doesn't pass validation. Notable issues are using `li` elements as direct children of `div`. @iliyaM you just make sure this passes afterwards. @username_1 this bug is solely yours. What you can't do, write here. Answers: username_1: Errors for LI elements fixed. Error: Attribute seamless not allowed on element iframe at this point. Spoke with @username_0 about this. It should be done on system level. Status: Issue closed
FCC-Alumni/alumni-network
217080624
Title: User Profile Route Question: username_0: A route specifically for viewing a user's profile, that really highlights their profile and is less of a preview. e.g. `/dashboard/users/:username_1` Could link to this from community, mentorship, and chat, to give users a nice way of seeing details about another user. Thoughts? Can discuss. Answers: username_1: @username_0 Yes, this is the next big step - I think once we have this done, we will have a pretty solid working app, and can think about cleaning things up and an initial potential release. I have more in mind for the future, but once this, chat, and search are fully done, I think we will be in good shape for an MVP (oh yeah, plus copy revisions and looking at what to do with both protected and unprotected home pages) username_1: @username_0 should this be an unprotected route? so even non-users could see it? If so we'd have to hide email and such - if we even include that. Status: Issue closed
AzureAD/microsoft-authentication-library-for-objc
637428512
Title: Can MSALWebviewParameters be used with SwiftUI to fetch a token interactively? Answers: username_1: SwiftUI should be compatible with existing UIKit frameworks. Apple has some guidance here: https://developer.apple.com/tutorials/swiftui/interfacing-with-uikit. Let us know if that still doesn't address your issue. Thanks.
ballerina-platform/ballerina-lang
588694425
Title: Not possible to access json payload when payload is a json array **Description:** I have a POST resource and I want to pass a JSON array as the request payload.
````ballerina
@http:ResourceConfig {
    methods: ["POST"],
    path: "/news-articles/validatetest",
    cors: {
        allowOrigins: ["*"],
        allowHeaders: ["Authorization, Lang"]
    },
    produces: ["application/json"],
    consumes: ["application/json"]
}
resource function validateArticlesTest(http:Caller caller, http:Request req) {
    var x = req.getJsonPayload();
    io:println(x);
    io:println("test");
}
````
But it seems that when I run this and invoke it, `x` is always null.
````
curl -X POST http://localhost:9090/news-articles/validatetest -H "Content-Type: application/json" --data '[{"aaa":"amaval", "bbb":"bbbval"},{"ccc":"amaval", "ddd":"bbb val"}]'
````
**Affected Versions:** 1.1.3
Answers: username_0: I could get this to work with the below approach, by casting.
```
resource function validateArticlesTest(http:Caller caller, http:Request req) {
    json[]|error jsonarray = <json[]>req.getJsonPayload();
    io:println(jsonarray);
}
```
Hence closing the issue.
Status: Issue closed
username_1: @username_0, when casting, the resultant value is always the target type if successful. So the following (without `error`) will work.
```ballerina
json[] jsonarray = <json[]>req.getJsonPayload();
```
But if unsuccessful, it will result in a panic. For example, in case `req.getJsonPayload()` could evaluate to an error or could even be a `json` payload but not a JSON array (e.g., JSON object, digit, null, etc.) the cast to `json[]` will result in a panic, which in an HTTP resource will result in a 500 internal server error. If that's not the intended behaviour, and you want to specifically handle the error scenarios, it would be better to do something like
```ballerina
json|error x = req.getJsonPayload();
if x is json[] {
    // Valid, received a JSON array.
} else if x is json {
    // Invalid - JSON, but not an array.
} else {
    // Invalid - `error`
}
```
username_0: Thank you very much for the insight @username_1
HBiSoft/HBRecorder
918554116
Title: Screen recording not working in android version 9 Question: username_0: **HBRecorder version** for example 2.0.0 **Device information** Samsung - SDK version 29 **Screenshots** If applicable, add screenshots to help explain your problem. ![WhatsApp Image 2021-06-11 at 3 17 20 PM](https://user-images.githubusercontent.com/41869999/121671713-57308680-cac8-11eb-8ecc-69c79eb63464.jpeg) ![WhatsApp Image 2021-06-11 at 3 17 06 PM](https://user-images.githubusercontent.com/41869999/121671717-5861b380-cac8-11eb-8ccd-e59fb96fb2d8.jpeg)
Neztore/save-server
994334801
Title: (413) Payload Too Large. Question: username_0: **Describe the bug** When I try uploading a file around 1MB or more it does not let me, some screenshots also give this error. **To Reproduce** 1. Get a 1MB file or bigger. 2. Upload the file and it will give you the error. **Expected behavior** The file should upload normally. **Screenshots** ![image](https://user-images.githubusercontent.com/46667568/133008826-bd98b71d-ed1c-4c1b-8463-6ab007961963.png) **Additional context** Latest ShareX version. Answers: username_1: This is a common issue, and is usually not due to Save-Server itself. The default limit within save-server is around 10mb Nginx, by default, applies a low payload limit. [Follow these steps](https://www.tecmint.com/limit-file-upload-size-in-nginx/) to fix this - most proxy softwares, I imagine, apply a similar restriction by default. username_0: Hey, I have tried that and it has worked. I have also found how to change the default save-server limit. Thanks! Status: Issue closed
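The steps linked in the answer boil down to raising Nginx's `client_max_body_size`, whose default of 1m is what makes uploads of roughly 1 MB and larger fail with a 413. A sketch of the change (the 20M value is an example, not a recommendation):

```nginx
# In the http, server, or location context of the Nginx config
server {
    client_max_body_size 20M;
}
```

After editing the config, reload Nginx for the new limit to take effect.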
Yaruno292/EndlessRunner
269596145
Title: Naming and English Question: username_0: You have received feedback from **username_0** on:
```csharp
public class DIE : MonoBehaviour
{
    private void OnTriggerEnter2D(Collider2D collision)
    {
        animationPlayer.ded = true;
    }
}
```
URL: https://github.com/Yaruno292/EndlessRunner/blob/master/code/Player/DIE.cs
Feedback: Class names are always PascalCasing; use correct English.[](http://www.studiozoetekauw.nl/codereview-in-het-onderwijs/ '#cr:{"sha":"fb4e6075badd5775e05b549059fa4523e333bf98","path":"code/Player/DIE.cs","reviewer":"username_0"}')
github-vet/rangeloop-pointer-findings
777117186
Title: keybase/client: go/kbtest/chat.go; 6 LoC Question: username_0: [Click here to see the code in its original context.](https://github.com/keybase/client/blob/a144e0ce38ee9e495cc5acbcd4ef859f5534d820/go/kbtest/chat.go#L1139-L1144) <details> <summary>Click here to show the 6 line(s) of Go which triggered the analyzer.</summary> ```go for _, inboxItem := range inboxItems { c.InboxCb <- NonblockInboxResult{ ConvRes: &inboxItem, ConvID: inboxItem.GetConvID(), } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: a144e0ce38ee9e495cc5acbcd4ef859f5534d820
dotvanilla/vanilla
937225842
Title: Getting started Question: username_0: Hey @username_1. Sorry to disturb you again. I just cloned this repo but am unable to open the project in my VS. Can you publish a release? A compiled version would be very helpful. Thanks. Answers: username_1: Hi, please wait for me a few more days. I need time to relearn WebAssembly; there are some things I forgot about WebAssembly compiler development. username_1: WebAssembly is not designed for win32 programming, and COM technology is designed for win32, so WebAssembly probably will not work for COM... but you can search for a nodejs package that may support the COM interface. A WebAssembly application can utilize the nodejs API. username_1: There is an unknown bug in this compiler project; I need time to solve it.
JulzCryptoMyriad/JulzWidget
987230209
Title: Create 2 options for wallets Question: username_0: ![image](https://user-images.githubusercontent.com/8561085/131925714-ac0f10b8-a8c0-4fff-8743-a8a1aaf35c11.png) Users must be able to choose between using a mobile wallet (via WalletConnect) or identifying which wallet they have on the webpage they are using. Answers: username_0: Depends on #7
johnfairh/RubyGateway
1033258879
Title: Ruby version problem Question: username_0: I have a Swift project. In it, $LOAD_PATH is:
/Library/Ruby/Site/2.6.0
/Library/Ruby/Site/2.6.0/x86_64-darwin20
/Library/Ruby/Site/2.6.0/universal-darwin20
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/vendor_ruby/2.6.0
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/vendor_ruby/2.6.0/x86_64-darwin20
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/vendor_ruby/2.6.0/universal-darwin20
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/x86_64-darwin20
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/universal-darwin20
In the terminal:
/Users/alexandr/.rvm/rubies/ruby-2.7.0/lib/ruby/site_ruby/2.7.0
/Users/alexandr/.rvm/rubies/ruby-2.7.0/lib/ruby/site_ruby/2.7.0/x86_64-darwin20
/Users/alexandr/.rvm/rubies/ruby-2.7.0/lib/ruby/site_ruby
/Users/alexandr/.rvm/rubies/ruby-2.7.0/lib/ruby/vendor_ruby/2.7.0
/Users/alexandr/.rvm/rubies/ruby-2.7.0/lib/ruby/vendor_ruby/2.7.0/x86_64-darwin20
/Users/alexandr/.rvm/rubies/ruby-2.7.0/lib/ruby/vendor_ruby
/Users/alexandr/.rvm/rubies/ruby-2.7.0/lib/ruby/2.7.0
/Users/alexandr/.rvm/rubies/ruby-2.7.0/lib/ruby/2.7.0/x86_64-darwin20
How can I make Ruby look in the same places from the project as it does from the terminal?
Answers: username_1: Closing, answered.
Status: Issue closed
glotzerlab/hoomd-blue
541195435
Title: No error issued when different particle vertices are specified for different ranks Question: username_0: ## Description
When different vertices are applied to each rank in a HOOMD run, no error is produced, meaning that different particles can be run on each rank. This has been shown to be a problem if the user is generating a new particle configuration each time the code is run, meaning that the particle vertices will be different for each rank. The issue becomes clear during compression of a simulation because the system is unable to properly run overlap checks. To reproduce the problem, initialize the signac workspace and run the signac project file submit_job.
## Script
You can reproduce this problem on Great Lakes with the script provided here. The reduce-overlap code is what generates a slightly different particle for each rank, based on the isovalue file, which is just the file for the vertices of the particle. [Problem_Reproduction.zip](https://github.com/glotzerlab/hoomd-blue/files/3990248/Problem_Reproduction.zip)
## Output
## Expected output
## Configuration
Platform:
- Mac
- Linux
- GPU
- CPU
Installation method:
- Conda package
- Compiled from source
- glotzerlab-software container
Versions:
- Python version: 3.7
- **HOOMD-blue** version: 2.8.1
## Developer
Answers: username_1: @b-butler This issue is general to all param and TypeParam dict values. The user can break things by specifying different values (e.g. temperatures) on different ranks.
In v2.x I have strategically inserted broadcast calls on parameters where I expect users to pass different values (such as seed which many users set to a clock value). The broadcast call passes the value from rank 0 out to all the ranks and any values set on other ranks are ignored. For v3 we could consider a general solution for all parameters and type parameters. This would require writing and exposing broadcast methods for all possible types that users may pass to params and TypeParams. Given that these may be user-defined classes (e.g. particle filters) or nested dicts, we may need to rely on pickle to pack and unpack the objects. Alternately, we could just not broadcast custom types and assume that users pass them in correctly. username_1: I'm not sure a general solution is possible given the nature of the data model. We will need to address topics like this in a future MPI tutorial. Status: Issue closed
Septima/spatialsuite-google-analytics
154417549
Title: Haderslev: GoogleAnalytics lacks further configuration Question: username_0: We are missing a setup for our profiles. Two main profiles have been created: Basis and Maxi, but they are completely empty... What we can currently see is a total number of users. Our expectation has been to track the usage of:
- **all** existing profiles
- **all** existing layers.
Is this something we could set up ourselves?
NYCPlanning/labs-geosearch-docker
564213913
Title: 1912 Ditmars fails Question: username_0: @username_0 commented on [Fri Jan 19 2018](https://github.com/NYCPlanning/labs-geosearch-pad-normalize/issues/27) 19-12 Ditmars works --- @chriswhong commented on [Mon Jan 22 2018](https://github.com/NYCPlanning/labs-geosearch-pad-normalize/issues/27#issuecomment-359464865) Should be addressed by #11
space-wizards/space-station-14
267477189
Title: We need sprite offsets. Question: username_0: Just a general way to offset sprites would be nice. Answers: username_1: I think we have this though, we have offset parameters that load in? I might be mistaken username_0: Not that I can see, at least not on the basic `SpriteComponent` Status: Issue closed
DimensionDataResearch/packer-plugins-ddcloud
271743181
Title: Unable to find uploaded source_image Question: username_0: Hi, I am experiencing an issue with packer-plugins-ddcloud. I have uploaded a customer image to use as a source_image, however the plugin does not seem to find it. Expected result is that it should find the customer image and use this as a source_image to modify (provisioners e.g. Ansible) and build and upload a new customer image on the Dimension Data cloud. Packer output: ```bash # packer build debian9.json ddcloud-customerimage output will be in this color. ddcloud-customerimage: Resolving datacenter 'EU7'... ddcloud-customerimage: Resolved datacenter 'EU7'. ==> ddcloud-customerimage: Image 'Debian9-base' not found in datacenter 'EU7'. Build 'ddcloud-customerimage' errored: One or more steps failed to complete ``` Used builder configuration: ```json "builders": [{ "type": "ddcloud-customerimage", "mcp_region": "EU", "mcp_user": "MYLOGIN", "mcp_password": "<PASSWORD>", "datacenter": "EU7", "networkdomain": "develop", "vlan": "prod-vlan-120", "source_image": "Debian9-base", "target_image": "Debian 9 test", "use_private_ipv4": "true", "communicator" : "ssh" } ``` Thanks in advance! Answers: username_1: Hi looks like it should work - I'll check this out first thing tomorrow (am in AU and it's almost bedtime) username_1: Ok, had a glance at the code and it looks like you've hit a bug - will sort it out first thing tomorrow; sorry for the inconvenience :) username_0: Thank you! username_1: No problem :) In the meanwhile, would you mind running packer again with the following environment variables set and then attaching the resulting log file to this issue? I'm off to bed but will have a look tomorrow. 
```bash export PACKER_LOG=1 export PACKER_LOG_PATH=$PWD/packer.log export MCP_EXTENDED_LOGGING=1 ``` username_0: [packer.log](https://github.com/DimensionDataResearch/packer-plugins-ddcloud/files/1449860/packer.log) I have installed your latest release beta4 which shows that it is unable to find an OS Image or Customer Image named Debian9-base. Which I did upload as a customer image. username_1: From the log, it looks like the CloudControl API says there is no image, but I suspect that this relates to guest OS customisation (older API doesn't return images with GOC disables), had to deal with this in our Terraform provider a while back. Will check it out tomorrow username_1: Ok, I'm going to try updating to the current version of the CloudControl client library (should be able to see images that have GOC disabled) and rebuild. Will post here when a new release is out. username_1: Ok, so I'm about to create the release (thanks for bearing with me). This won't actually _fix_ your problem but will at least confirm for us that the problem relates to the newer style of images (where you can disable GOC). If that's the case, it should only take a couple of hours to implement. Sorry about that, BTW - it looks like a case of our Packer plugins not keeping up with new features in CloudControl (I didn't realise anyone was using the plugins, but if you are then I'm happy to keep them up to date). username_1: Ok @username_0, could you try running [v1.0.3-beta5](https://github.com/DimensionDataResearch/packer-plugins-ddcloud/releases/tag/v0.1.3-beta5) with logging and posting the log, please? username_0: Just woke up and about to go to work, will run it immediately once I get there. username_0: I have run beta5, it seems to work alright. But cleanup on failure goes wrong. I still have a network issue as ddcloud does not seem to support DHCP for some reason, hoping to figure out how to get around that... 
[packer.log](https://github.com/DimensionDataResearch/packer-plugins-ddcloud/files/1452747/packer.log) username_1: Yeah, they don't support DHCP unfortunately - I've been caught out by that one too :grin: Fortunately, I built one that'll do what you want: https://github.com/DimensionDataResearch/mcp2-dhcp-server username_1: (sorry, I know it's slightly more awkward than native support within CloudControl but once it's set up, it's pretty much set-and-forget) username_1: (just turn off PXE / iPXE features as needed) username_0: Thanks :+1: username_1: No worries - give us a yell if you have any trouble with it. username_0: I am a bit disappointed as to needing to run a VM with a DHCP server in order to deploy VMs naturally. How do the Dimension Data images work to circumvent this? username_1: Yeah, it's not the best :-/ As I understand it (I'm not part of the team that does the MCP and CloudControl), CloudControl uses VMWare's [Guest OS Customisation](https://docs.mcp-services.net/display/CCD/Introduction+to+Cloud+Server+Provisioning%2C+OS+Customization%2C+and+Best+Practices) facility to achieve configuration of stuff like IP addresses (more [here](https://docs.mcp-services.net/display/CCD/Best+Practices+and+Tips+around+Linux+Guest+OS+Customization+Client+Images) and distro support matrix [here](https://docs.mcp-services.net/pages/viewpage.action?pageId=3015255)). When you initially imported your image, were you presented with a choice to enable / disable guest OS customisation? I could be wrong, but if not then you might be using a distro that VMWare doesn't know how to customise. username_1: I'll reach out to the relevant team to confirm this, BTW. username_0: Thanks I'll give that a try, yes I did disable guest OS customisation as I did not have vmware tools installed. 
username_1: Ah, sorry, just looked through the support matrix myself - looks like Debian is supported, but not for GOC :(
username_1: So if you're using Debian, you'll probably need either DHCP or static IPs baked into the image (yuck).
username_0: I tried to fake it being Ubuntu 16.04 instead, giving the following error:
```
ddcloud-customerimage output will be in this color.

ddcloud-customerimage: Resolving datacenter 'EU7'...
ddcloud-customerimage: Resolved datacenter 'EU7'.
ddcloud-customerimage: Deploying server 'packer-build-bddaa7002f' in network domain 'develop' ('e7e74af0-6dcd-41a9-8c12-9e60ad24c6a5')...
==> ddcloud-customerimage: Request to deploy server 'packer-build-bddaa7002f' failed with status code 400 (INVALID_INPUT_DATA): administratorPassword must not be provided if the imageId corresponds to a Linux Customer Image or Windows 2003 Customer Image.
Build 'ddcloud-customerimage' errored: unexpected EOF

==> Some builds didn't complete successfully and had errors:
--> ddcloud-customerimage: unexpected EOF
```
Not quite sure what this is supposed to mean.
username_0: I did not provide "initial_admin_password" as part of the builder in Packer.
username_1: Ah - ok, that might be a bug (or undefined behaviour at least); let me have a look.
username_1: Creating a new release now...
username_1: Ok, would you mind giving it one more try on [beta7](https://github.com/DimensionDataResearch/packer-plugins-ddcloud/releases/tag/v0.1.3-beta7) (with logging)?
username_0: [packer.log](https://github.com/DimensionDataResearch/packer-plugins-ddcloud/files/1453284/packer.log)
Issue is still present unfortunately.
username_1: LOL ok, sorry, I know what the problem is now. Give me a minute.
username_1: Creating a new release (sorry about this, but I think we've got it right this time)...
username_1: (it's been close to 12 months since I've touched this codebase and it's taking me a while to get familiar with the design again) username_1: https://github.com/DimensionDataResearch/packer-plugins-ddcloud/releases/tag/v0.1.3-beta8 username_0: Thanks! It seems to be doing something right now. username_0: Right now my VM is stuck on "Wait for Guest Ip Address" username_1: Hmm - you may need to follow [these](https://docs.mcp-services.net/display/CCD/Best+Practices+and+Tips+around+Linux+Guest+OS+Customization+Client+Images ) instructions to make sure the original source image is customisation-compatible. I'm not sure if you actually need `open-vm-tools` or equivalent to be present in the image... You could also try just deploying a server from the source image and then logging into the console to see the effects of customisation (assuming the image was marked as supporting customisation). username_1: Probably worth seeing if server deployed via the UI from that source image boots correctly; it would at least make it easier to see what IP it winds up with (otherwise, perhaps the DHCP option may wind up being easier to use). username_0: Thanks for the help so far. I am still trying to figure out why the IP is not assigned, would be nice if someone from Dimension Data could tell me what's causing the it. username_1: Hi, sorry to hear you're still having problems :-/ I think the best way to get support on this is to try deploy your image using the UI and then log a new ticket that doesn't mention Packer but just the server deployment problem(s). That way it'll get routed to the right people. username_1: I take it following the instructions in that page I linked didn't help? username_1: If the deployment fails I can probably show you how to use `curl` or `httpie` to view the full server details (including reason for failure) via the CloudControl API, which may help the CloudControl folks diagnose the issue... 
username_0: If you could provide me with that info, that would be most helpful. Thanks!
username_1: Sure - no problem: Do you prefer curl, HTTPie, or POSTMAN (personally I prefer HTTPie or POSTMAN but any of those is fine)?
username_0: I prefer httpie or postman as well.
username_1: Try this:
```bash
http get 'https://api-EU.dimensiondata.com/caas/2.5/{orgId}/server/server/{serverId}' Accept:application/json --auth-type basic --auth 'user:password'
```
Where `{orgId}` is, from the logs, `799dda5b-93ba-411c-a2a3-61c4c6e20c54`, and `{serverId}` is the Id of the server (you can see that in the CloudControl UI).
username_0: Thanks, that resulted in giving me the following information.
```json
"progress": {
    "action": "DEPLOY_SERVER",
    "numberOfSteps": 14,
    "requestTime": "2017-11-09T07:54:52.000Z",
    "step": {
        "name": "WAIT_FOR_GUEST_IP_ADDRESS",
        "number": 11
    },
    "updateTime": "2017-11-09T08:29:00.000Z",
    "userName": "x"
},
```
Doesn't really say why, but I guess I should make a ticket with this information.
username_1: Yeah - that part of the process is a little opaque, but essentially it means that the server has booted but has not picked up the configured IP address. I believe the customisation process modifies stuff in `/etc` and if your distro has unexpected stuff in there it may not be successful in doing so. I'd say at this stage yes, the best option is to raise a ticket; someone with knowledge of the system internals will need to take a look at it.
username_1: BTW, you can see a list of the steps and what they do [here](https://docs.mcp-services.net/display/CCD/Introduction+to+Cloud+Server+Provisioning%2C+OS+Customization%2C+and+Best+Practices).

(sorry I couldn't be more directly helpful but I have no deeper access to the system than you do)
username_1: BTW, it looks like you do need to have VMWare tools or open-vm-tools installed for GOC to work correctly.
username_0: I did have them installed, I am seeing the following issue still when using Ubuntu 16.04 as a base image.
```
==> ddcloud-customerimage: Request to delete server failed with unexpected status code 400 (SERVER_STARTED): Server with id 6a2c6ef9-ddd0-4e41-88a5-0717e9ae7053 is started but must be stopped to perform this operation. Please Power Off or Shutdown the Server (as appropriate) and try again.
Build 'ddcloud-customerimage' errored: unexpected EOF
```
Could you make it so that it forces deletion, or stops and then deletes?
username_1: Oops, I would have thought it would already do that - I'll look into it first thing tomorrow. Thanks for sticking with it!
username_1: I've made the change (turns out it was done correctly in another step, but not this one), but have to get to bed; will create a release as soon as I wake up (sorry about that).
username_1: Never mind - just created `beta9` release. Go for it. Will be back online tomorrow morning :)

PS. If you still have any problems, attach a log and I'll see what's going on.
username_0: Thanks!! Sleep well :)
username_0: [packer.log](https://github.com/DimensionDataResearch/packer-plugins-ddcloud/files/1457819/packer.log)
Getting a segmentation fault now.
username_1: 😭
username_1: Believe it or not, we're making progress! The server deploy / destroy is working now and it's the firewall rule deletion that is causing problems. Looking into it now.
username_1: Turns out the code path for `use_private_ipv4` hasn't been used before and it was trying to delete a non-existent firewall rule. Fixing now.
username_1: Right, `beta10` is ready to go - fingers crossed this is the last release you'll have to try.
username_0: Thanks, sorry for the lack of response. You are a day ahead of us and by the time you are at work I am home and unable to run Packer. :smile: I am running it now and waiting for the results!
username_1: No worries - I'm used to the time-zone thing and besides, it seems like you're the one who's having to wait for me rather than the other way around :) username_0: The build has finished succesfully now! Build 'ddcloud-customerimage' finished. ==> Builds finished. The artifacts of successful builds are: --> ddcloud-customerimage: Customer image 'Ubuntu 16.04 test' ('ea3e0f47-d598-4135-8943-0f5b60c11d67') in datacenter 'EU7'. username_1: Sorry again for all the trouble; I built this more than a year ago, and nobody used it so it didn't receive much testing. I'll give it a little love in the coming months if I can (will probably get it merged into Packer as a built-in module). username_1: (let me know if the image deploys successfuly, BTW) username_0: No worries! Glad to be able to help test it! In fact used to do a lot of testing for companies, and worked in software development myself. I'll test if the image deploys now, I didn't do much to it besides install python and run Ansible debug returning which network it's in. username_1: If it helps, BTW, there's a libcloud driver for CloudControl, and some (rather basic because we've had a couple of PRs stuck in limbo for a while) Ansible modules for it too. username_0: Haven't heard of it before, what is it's use case? username_1: Libcloud's a Python library that provides an abstraction over most of the cloud providers out there. If you can write Python it's not a bad way to automate things (it's how we wrote those Ansible modules, for example). https://docs.mcp-services.net/display/LPC/LibCloud+Python+Client username_0: Ah cool, thanks I'll give it a look! Yes I do write Python.. for quite a few years already, but you probably checked my profile? :+1: username_1: Yep, that's why I suggested it :wink: Status: Issue closed
mrdoob/three.js
1054838890
Title: Drag control needs to access its raycaster
Question:
username_0: **Is your feature request related to a problem? Please describe.**
I need to access the raycaster of the drag control to handle objects when they are not in the default layer. I see that the transform control can access its raycaster to set it to a different layer, so why isn't the same possible with the drag control?

**Describe the solution you'd like**
How about this?
```
getRaycaster() {
    return _raycaster;
}
```
Answers:
username_1: Sounds good! Would you like to make a PR?
Status: Issue closed
ContinualAI/avalanche
761278573
Title: Add (Lopez-paz, 2017) Metrics: ACC, FWT & BWT Question: username_0: We should add the metrics proposed in (Lopez-paz, 2017) within the **beta version** of Avalanche. Answers: username_1: I'd love to work on this issue username_0: Perfect @username_1, refer to @username_2 for any problem/doubts you may encounter! :) Status: Issue closed
Neeke/PHP-Druid
197592130
Title: PHP 7 extension is broken Question: username_0: Build time: ``` /dev/shm/BUILD/php-pecl-druid-0.6.0/ZTS/druid.c: In function 'zim_DRUID_NAME_getData': /dev/shm/BUILD/php-pecl-druid-0.6.0/ZTS/druid.c:559:21: warning: implicit declaration of function 'Z_TYPE_PP' [-Wimplicit-function-declaration] if (argc > 1 && Z_TYPE_PP(content) != IS_ARRAY) ^~~~~~~~~ ``` Runtime: ``` + /usr/bin/php -n -d extension=json.so -d extension=curl.so --define extension=/dev/shm/BUILDROOT/php-pecl-druid-0.6.0-1.fc25.remi.7.0.x86_64/usr/lib64/php/modules/druid.so --modules + grep Druid PHP Warning: PHP Startup: Unable to load dynamic library '/dev/shm/BUILDROOT/php-pecl-druid-0.6.0-1.fc25.remi.7.0.x86_64/usr/lib64/php/modules/druid.so' - /dev/shm/BUILDROOT/php-pecl-druid-0.6.0-1.fc25.remi.7.0.x86_64/usr/lib64/php/modules/druid.so: undefined symbol: Z_TYPE_PP in Unknown on line 0 ``` Answers: username_1: Can you fix it? username_0: Sorry, not in a short delay. username_0: See pr #6 (untested, only build) Status: Issue closed
owncloud/ocis-reva
698802455
Title: [eos] Cannot set mtime in file upload Question: username_0: <d:prop><d:getetag/><d:getlastmodified/></d:prop> </d:propfind>' -k | xmllint --format - % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 649 100 415 100 234 387 218 0:00:01 0:00:01 --:--:-- 605 <?xml version="1.0" encoding="utf-8"?> <d:multistatus xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns" xmlns:oc="http://owncloud.org/ns"> <d:response> <d:href>/remote.php/webdav/file.txt</d:href> <d:propstat> <d:prop> <d:getetag>"2452963196928:062c0215"</d:getetag> <d:getlastmodified>Fri, 11 Sep 2020 03:43:45 +0000</d:getlastmodified> </d:prop> <d:status>HTTP/1.1 200 OK</d:status> </d:propstat> </d:response> </d:multistatus> ```
tidyverse/ggplot2
810435363
Title: ragg device functions not working with ggsave Question: username_0: It's not working for me however, the following code results in an empty image being saved. Related to https://github.com/r-lib/ragg/issues/69 ``` r library(ggplot2) library(ragg) p <- ggplot(iris, aes(Sepal.Length, Sepal.Width, col = Species)) + geom_point() ggsave("gg_test.png", p, device = agg_png) #> Saving 7 x 5 in image ``` <sup>Created on 2021-02-17 by the [reprex package](https://reprex.tidyverse.org) (v1.0.0)</sup> <details style="margin-bottom:10px;"> <summary> Session info </summary> ``` r sessioninfo::session_info() #> ─ Session info ─────────────────────────────────────────────────────────────── #> setting value #> version R version 4.0.3 (2020-10-10) #> os macOS Mojave 10.14.6 #> system x86_64, darwin17.0 #> ui X11 #> language (EN) #> collate en_GB.UTF-8 #> ctype en_GB.UTF-8 #> tz Europe/Paris #> date 2021-02-17 #> #> ─ Packages ─────────────────────────────────────────────────────────────────── #> package * version date lib source #> assertthat 0.2.1 2019-03-21 [1] standard (@0.2.1) #> backports 1.2.0 2020-11-02 [1] standard (@1.2.0) #> cli 2.2.0 2020-11-20 [1] standard (@2.2.0) #> colorspace 2.0-0 2020-11-11 [1] standard (@2.0-0) #> crayon 1.3.4 2017-09-16 [1] standard (@1.3.4) #> digest 0.6.27 2020-10-24 [1] standard (@0.6.27) #> dplyr 1.0.2 2020-08-18 [1] standard (@1.0.2) #> ellipsis 0.3.1 2020-05-15 [1] standard (@0.3.1) #> evaluate 0.14 2019-05-28 [1] standard (@0.14) #> fansi 0.4.1 2020-01-08 [1] standard (@0.4.1) #> farver 2.0.3 2020-01-16 [1] standard (@2.0.3) #> fs 1.5.0 2020-07-31 [1] standard (@1.5.0) #> generics 0.1.0 2020-10-31 [1] standard (@0.1.0) #> ggplot2 * 3.3.3 2020-12-30 [1] standard (@3.3.3) #> glue 1.4.2 2020-08-27 [1] standard (@1.4.2) #> gtable 0.3.0 2019-03-25 [1] standard (@0.3.0) #> highr 0.8 2019-03-20 [1] standard (@0.8) #> htmltools 0.5.1.1 2021-01-22 [1] standard (@0.5.1.1) #> knitr 1.30 2020-09-22 [1] standard (@1.30) #> labeling 0.4.2 
2020-10-20 [1] standard (@0.4.2) #> lifecycle 0.2.0 2020-03-06 [1] standard (@0.2.0) #> magrittr 2.0.1 2020-11-17 [1] standard (@2.0.1) #> munsell 0.5.0 2018-06-12 [1] standard (@0.5.0) [Truncated] #> scales 1.1.1 2020-05-11 [1] standard (@1.1.1) #> sessioninfo 1.1.1 2018-11-05 [1] standard (@1.1.1) #> stringi 1.5.3 2020-09-09 [1] standard (@1.5.3) #> stringr 1.4.0 2019-02-10 [1] standard (@1.4.0) #> styler 1.3.2 2020-02-23 [1] standard (@1.3.2) #> systemfonts 1.0.1 2021-02-09 [1] standard (@1.0.1) #> textshaping 0.3.0 2021-02-10 [1] standard (@0.3.0) #> tibble 3.0.4 2020-10-12 [1] standard (@3.0.4) #> tidyselect 1.1.0 2020-05-11 [1] standard (@1.1.0) #> vctrs 0.3.5 2020-11-17 [1] standard (@0.3.5) #> withr 2.3.0 2020-09-22 [1] standard (@2.3.0) #> xfun 0.19 2020-10-30 [1] standard (@0.19) #> yaml 2.2.1 2020-02-01 [1] standard (@2.2.1) #> #> [1] /Library/Frameworks/R.framework/Versions/4.0/Resources/library ``` </details> Thanks. Answers: username_1: Thanks, I too faced this problem a while ago. As the default value of `ragg::agg_png()`'s `unit` is `px`, which is the same as `grDevices::png()`, I guess ggplot2 needs some wrappers like this: https://github.com/tidyverse/ggplot2/blob/dbd7d79aa35c49b4eab200cf5f7a084a7748e776/R/save.r#L181-L185 username_2: This is the lookup table that converts file extensions into graphic devices, so it wouldn't have an effect if users specify the graphics device as a function. Instead, if they do so, they will have to set `res` and `units` manually. However, we should change the lookup table so it uses ragg devices by default if available. username_2: See also: #4086 username_3: Yeah — the plan is to use ragg if available for all file types where it makes sense in the next release. If someone is eager to implement the logic they should feel free but otherwise I'm on it the next time I look at ggplot2 username_1: Yes, what I meant was the users need to define some wrapper function like the one below. 
I guess a lot of people will use AGG as RStudio graphic device (as it's great) and they'll want to use the same rendering on `ggsave()` as well. Probably the next release will not be very soon, so I think it's meaningful to show a workaround for this here. ``` r library(ggplot2) p <- ggplot(iris, aes(Sepal.Length, Sepal.Width, col = Species)) + geom_point() # 300 is the same dpi as ggsave()'s default png <- function(...) ragg::agg_png(..., res = 300, units = "in") ggsave("gg_test.png", p, device = png) ``` username_3: The behaviour of `ggsave()` is quite unfortunate... ragg uses the exact same default values as `grDevices::png()` but still does not behave the same way username_2: The following works just fine, though. If people want to use custom graphics devices they need to be prepared to set a few parameters correctly, I think. Maybe the real issue is to document `ggsave()` better, and to explain that when using a device function, some settings such as `dpi` or `units` may not work and/or may have to be set explicitly. ``` r library(ggplot2) p <- ggplot(iris, aes(Sepal.Length, Sepal.Width, col = Species)) + geom_point() ggsave("~/Desktop/gg_test.png", p, device = ragg::agg_png, res = 300, units = "in") #> Saving 7 x 5 in image ``` <sup>Created on 2021-02-22 by the [reprex package](https://reprex.tidyverse.org) (v1.0.0)</sup> ![gg_test](https://user-images.githubusercontent.com/4210929/108750769-0db33000-7507-11eb-8f01-349d8cd1f4a8.png) username_4: @username_2, when I run your example, my `gg_test.png` file is 7x7 pixels big and 116 bytes big. `p` prints to Viewer just fine, it's `ggsave` with `agg_png` that saves a tiny 7x7 pixels file. 
My session info:
```
R version 4.0.3 (2020-10-10)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Big Sur 10.16

Matrix products: default
LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] ggplot2_3.3.3

loaded via a namespace (and not attached):
 [1] magrittr_2.0.1    tidyselect_1.1.0  munsell_0.5.0     colorspace_2.0-0  R6_2.5.0
 [6] ragg_1.1.1.9000   rlang_0.4.10      fansi_0.4.2       dplyr_1.0.5       tools_4.0.3
[11] grid_4.0.3        gtable_0.3.0      utf8_1.1.4        DBI_1.1.1         withr_2.4.1
[16] systemfonts_1.0.1  ellipsis_0.3.1    digest_0.6.27     assertthat_0.2.1  tibble_3.0.6
[21] lifecycle_1.0.0   crayon_1.4.1      textshaping_0.3.1 farver_2.0.3      purrr_0.3.4
[26] vctrs_0.3.6       glue_1.4.2        labeling_0.4.2    compiler_4.0.3    pillar_1.5.0
[31] generics_0.1.0    scales_1.1.1      pkgconfig_2.0.3
```
username_5: Hello,

In addition, it seems that `ggsave()` does not close the device when using `agg_png()` as device (it does when specifying `'png'` as device). The following reprex ends with the `Too many open devices` error.
```
library(ggplot2)
library(ragg)
library(glue)

data(iris)
for (i in 1:200){
  p <- ggplot(iris) + geom_point(aes(Sepal.Length,Sepal.Width,color=Species))
  ggsave(glue("Plot_{i}.png"), plot=p, device=agg_png(), path="./tmp/")
}
```
(This is solved by explicitly calling `dev.off()` after `ggsave()`)
username_1: @username_5 What the `device` argument accepts is a **function** (e.g. `agg_png`), not a function call (e.g. `png()`).
username_6: I can confirm w/ @username_4 that using your code Claus with `agg_png` + friends as the device returns small images, 116 bytes, 7x7 *pixels*.

![Screen Shot 2021-03-24 at 6 29 02 PM](https://user-images.githubusercontent.com/29187501/112396787-d46d0c00-8cce-11eb-87db-2b2bc1ab6b2e.png)

`ragg` version 1.1.2
username_2: Ah, this makes sense.
`ggsave()` has a `units` argument so it doesn't get captured by `...` and thus doesn't get handed off to `agg_png()`. Workaround is to specify the resolution directly in px, not in inch. username_6: Yup! You can get the "expected" behavior as so, without specifying arguments inside `agg_png`: ``` ggsave( "gg_test.png", p, device = ragg::agg_png, res = 300, units = "in", height = 1200, # this is ACTUALLY pixels width = 1400, # also pixels limitsize = FALSE # because ggplot2 thinks it's inches ) ``` Status: Issue closed
JulienNigon/SpellcasterUniversityPrivateBeta
876679001
Title: Typeface for Polish titles
Question:
username_0: ![image](https://user-images.githubusercontent.com/14259450/117183325-b4edf680-add7-11eb-9dfc-abc0ec3b6bf1.png)

Could you make it so that when the game is launched in Polish it uses the Mops typeface for the titles instead of Berry Rotunda?
Status: Issue closed
Answers:
username_0: The Mops typeface has been set as the replacement for the missing letters.
espressif/esp-adf
794918763
Title: How to play back MP3 files from an SD card using the ESP32-S2?
Question:
username_0: Hi all,
I'm working on MP3 playback from an SD card using the ESP32-S2 (a WROOM module is used). I'm able to create/save a text file and open/edit that file, but I'm stuck at MP3 playback. I'm requesting suggestions and guidance from anyone with experience. I will be happy if there is a sample code. Thanks in advance.
Answers:
username_1: Hi, @username_0
Currently ADF supports the SDIO interface as the FatFs driver, but the S2 has no SDIO interface. So you need to choose SPI as the FatFs interface; the app code does not need to change.
username_0: @username_1 Sir, can you please provide a sample code?
username_0: Any updates? I'd highly appreciate your suggestions...
username_1: Hi, @username_0
The log provided shows some "undeclared" errors. It may be something like a missing header file include. You need to be careful about some dependency issues when porting.
username_0: The i2s header files seem to be OK.
username_0: The ESP32 works perfectly with the internal DAC. When using the same code on the Espressif ESP32-S2-Saola-1_V1.2 (MP3 playback from SPIFFS), it gives the same I2S error as in my previous post. The issue is still not resolved. Has any example code been released so far, for playback from SPIFFS or from an SD card? Expecting a solution at the earliest. Thank you.
username_2: Hi there! I've just encountered this issue as well, albeit with another example (the "Play MP3 file from SD Card" one, to be precise). I ended up just using the sdspi host in the audio_board_sdcard_init function of my custom board. It ended up looking (and working) like this, just like username_1 mentioned.
``` c++
esp_err_t audio_board_sdcard_init(esp_periph_set_handle_t set, periph_sdcard_mode_t mode){
    esp_err_t ret;

    esp_vfs_fat_sdmmc_mount_config_t mount_config = {
        .format_if_mount_failed = SDCARD_FORMAT_ON_MOUNT_FAIL,
        .max_files = SDCARD_OPEN_FILE_NUM_MAX,
        .allocation_unit_size = 16 * 1024
    };

    sdmmc_card_t* card;
    const char mount_point[] = SDCARD_MOUNT_POINT;
    ESP_LOGI(TAG, "Initializing SD card");

    gpio_set_pull_mode(SDCARD_MISO_GPIO, GPIO_PULLUP_ONLY);   // TODO: Replace with separate pull-ups.
    gpio_set_pull_mode(SDCARD_MOSI_GPIO, GPIO_PULLUP_ONLY);
    gpio_set_pull_mode(SDCARD_SCLK_GPIO, GPIO_PULLUP_ONLY);
    gpio_set_pull_mode(SDCARD_CS_GPIO, GPIO_PULLUP_ONLY);

    ESP_LOGI(TAG, "Using SPI peripheral");

    sdmmc_host_t host = SDSPI_HOST_DEFAULT();
    spi_bus_config_t bus_cfg = {
        .mosi_io_num = SDCARD_MOSI_GPIO,
        .miso_io_num = SDCARD_MISO_GPIO,
        .sclk_io_num = SDCARD_SCLK_GPIO,
        .quadwp_io_num = -1,
        .quadhd_io_num = -1,
        .max_transfer_sz = 4000,
    };
    ret = spi_bus_initialize(host.slot, &bus_cfg, SPI_DMA_CHAN);
    if (ret != ESP_OK) {
        ESP_LOGE(TAG, "Failed to initialize bus.");
        return ret;
    }

    sdspi_device_config_t slot_config = SDSPI_DEVICE_CONFIG_DEFAULT();
    slot_config.gpio_cs = SDCARD_CS_GPIO;
    slot_config.host_id = host.slot;

    ret = esp_vfs_fat_sdspi_mount(mount_point, &host, &slot_config, &mount_config, &card);

    if (ret != ESP_OK) {
        if (ret == ESP_FAIL) {
            ESP_LOGE(TAG, "Failed to mount filesystem. "
                     "If you want the card to be formatted, set the EXAMPLE_FORMAT_IF_MOUNT_FAILED menuconfig option.");
        } else {
            ESP_LOGE(TAG, "Failed to initialize the card (%s). "
                     "Make sure SD card lines have pull-up resistors in place.", esp_err_to_name(ret));
        }
        return ret;
    }
    return ret;
}
```
I hope this helps you and maybe others in the future as well!

Kind regards,
Jochem
username_0: Thanks @username_2, that helps me. I2S shows some errors while using the internal DAC; I think the I2S function is not portable from the ESP32 to the ESP32-S2.
Status: Issue closed
sixteenmillimeter/mcopy
459255131
Title: Frame counting method takes a long time on large videos Question: username_0: Improve the frame counting feature for filmout so that it determines frame count using ``` ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 "${video}" ``` Status: Issue closed Answers: username_0: This has been resolved with commit f239f862e8b44f925d5ec8c7ab943d622c73ff56. For every type of supported video format or container *except* .mkv (Matroska) there is a new ffprobe command which runs in a fraction of the time. For .mkv files, the old command is used.
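For reference, the dispatch described in the fix can be sketched as follows. This is a hypothetical Python sketch, not the actual mcopy implementation (and the helper name `frame_count_args` is mine): the issue only quotes the slow `-count_frames` command, so the assumption here is that the "new" fast command reads the `nb_frames` metadata field instead of decoding the stream, falling back to the exact scan for `.mkv`, where Matroska containers often omit that field.

```python
import os

# Exact but slow: decode the stream and count frames (the command quoted
# in the issue). Used only for .mkv here.
COUNT_ARGS = [
    '-v', 'error', '-count_frames', '-select_streams', 'v:0',
    '-show_entries', 'stream=nb_read_frames',
    '-of', 'default=nokey=1:noprint_wrappers=1',
]

# Fast: read the frame count straight from container metadata, no decoding.
METADATA_ARGS = [
    '-v', 'error', '-select_streams', 'v:0',
    '-show_entries', 'stream=nb_frames',
    '-of', 'default=nokey=1:noprint_wrappers=1',
]

def frame_count_args(video):
    """Build the ffprobe argument list for counting frames in `video`."""
    ext = os.path.splitext(video)[1].lower()
    args = COUNT_ARGS if ext == '.mkv' else METADATA_ARGS
    return ['ffprobe'] + args + [video]
```

The returned list could then be handed to `subprocess.run(..., capture_output=True, text=True)` and the stdout parsed as an integer.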
kubernetes/kubernetes
117488864
Title: Rename RawPodStatus/PodStatus in kubelet
Question:
username_0: Forked from #17259

We want to rename some types and functions in the container runtime interface.
- type RawPodStatus to PodStatus
- GetPodStatus() to GetAPIPodStatus() (should eventually be deprecated).
- GetRawPodStatus() to GetPodStatus()

@username_1, I am assigning this to you so that you can do it with, or after, your pending PRs to avoid rebases.
Answers:
username_0: /cc @dchen1107
Status: Issue closed
username_1: Closing this issue, because it has been done in #17420. :)
ajency/F-BCircle
240932751
Title: Single View of Listing html - Issues
Question:
username_0: Tested on: http://staging.fnbcircle.com/single-view.html
Tested on: Chrome

1) The start of the breadcrumb is not aligned with the next section.
2) The header elements should be more prominent - maybe bold.
3) The location and map below the listing title look faded.
4) A view should be shown when "+10" is clicked for areas of operation.
5) Change the text "Send Enquiry" to "Send an Enquiry".
6) For the count of contacts and enquiries, there will be buckets set. So we have the following values: "Fewer than 10", "10+ Contact Requests/Enquiries", "50+ Contact Requests/Enquiries", "100+ Contact Requests/Enquiries" and so on.
7) Change "Is this your business? Claim it now. or Report Now" to "Is this your business? Claim it now or Report"
8) Photos & Documents - On click of the cover image there is no way to go to the next images.
9) "Article" should be "Articles"
10) "Also listed In" should be "Also Listed In"
11) Browse Categories should display the categories with the higher count first.
12) When the map link below the title is clicked, it should take the user to the Map section, with the address displayed as well.
13) When "Listed In" is clicked, it should scroll exactly above the Also Listed In section. Similarly for Overview and Similar Businesses.
14) The scrollable menu should appear as soon as the menu section disappears.
15) On mouse hover of the categories under the Browse Categories section, they should be highlighted. Similarly for the download links for documents.
16) Hours of operation - See more view?
17) "Updates" should be inline with the next section
18) Add share icons - Whatsapp, Linkedin, Facebook, Twitter, Google+
Answers:
username_1: Points fixed, leaving 18 and 21.
username_0: @username_1
6. For the count of contacts and enquiries, there will be buckets set. So we have the following values: "Fewer than 10", "10+ Contact Requests/Enquiries", "50+ Contact Requests/Enquiries", "100+ Contact Requests/Enquiries" and so on.
12.
When the map link below the title is clicked, it should take the user to the Map section, with the address displayed as well.
On mouse hover, the download links for documents should be highlighted.
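The count bucketing described in point 6 amounts to a simple threshold lookup. A sketch of one way to implement it (the exact label copy and the behaviour above 100, where the issue only says "and so on", are assumptions):

```python
def enquiry_bucket(count, label='Enquiries'):
    """Map a raw count to the display bucket described in point 6."""
    # Thresholds from the issue: fewer than 10, then 10+, 50+, 100+.
    # Counts above 100 are assumed to keep the "100+" label here.
    for threshold in (100, 50, 10):
        if count >= threshold:
            return '%d+ %s' % (threshold, label)
    return 'Fewer than 10'
```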
uwsampl/relay-bench
548356106
Title: Including telemetry graphs in the webpage Question: username_0: Now that we have the `vis_telemetry` subsystem, we should include the generated graphs in the generated webpage. @AD1024 can you try to modify the website generator to include the last run's telemetry graphs in the generated web page (say, under a header for each experiment that has a graph)? It would be valuable for usability
trondr/DriverTool
1112442361
Title: DriverTool.exe missing from package.
Question:
username_0: The Script\DriverTool folder of a package is missing DriverTool.exe when using the PowerShell module to create the package.

Reproduced with:
```powershell
$models = @("10RS")
$models | Foreach-Object{
    Write-Host "Getting driver updates for model $_";
    Get-DtDriverUpdates -Manufacturer Lenovo -ModelCode "$_" -OperatingSystem "WIN10X64" -OsBuild "21H2" -ExcludeDriverUpdates @("BIOS","Firmware") -Verbose
} | Invoke-DtDownloadDriverUpdates
```

Workaround: Copy DriverTool.exe from the PowerShell module manually. Example:
```powershell
copy "C:\Program Files\WindowsPowerShell\Modules\DriverTool.PowerCLI\1.0.22023\internal\tools\DriverTool\DriverTool.exe" "C:\temp\DU\10RS\2021-11-12-1.0\Script\DriverTool\DriverTool.exe"
```
<issue_closed>
Status: Issue closed
coleifer/peewee
88295364
Title: [Support request] No primary key for a model Question: username_0: Hi Charles, I've got a table which logs keywords; the schema is:

```python
class Keyword(Model):
    name = CharField(null=False)
    created = DateTimeField(null=False, default=datetime.utcnow)

    class Meta:
        indexes = (
            (('created', 'name'), False),  # trailing comma needed for a tuple of tuples
        )
```

In short, I want to drop the auto-generated primary key column (id). How do I prevent it from being re-added in subsequent db creations? I've taken a look in the manual and not found the syntax.

-----

In long: in the database, however, it has added an id column and an associated pkey index. This table now has 35 million rows and that pkey index is 10gb. I've taken a look into the code and it looks safe to drop the column; we only perform aggregate queries on the table in any case:

`Keyword.select(Keyword.created, Keyword.name).where(Keyword.created > 3_days)`
OR
`Keyword.select().where(Keyword.created > 3_days).count()`

Neither of these should need a primary key, as we can get exact duplicates (35 million rows are for 3 days of data).

Many thanks,
Alex

PS. It would also be awesome to be able to name index constraints; I'm getting a few problems when running schema migrations because of name mismatches between environments.

Status: Issue closed
Answers: username_1: You need a primary key of some sort, so your best bet is to:

```python
class Keyword(Model):
    name = CharField(null=False)
    created = DateTimeField(null=False, default=datetime.utcnow)

    class Meta:
        primary_key = CompositeKey('name', 'created')
```

Unfortunately peewee does not support "no primary key" at all.
username_0: Hmm, but surely this will imply a unique constraint, which isn't what I'm after because the dataset can contain duplicate values. Are there any plans for peewee to support no-primary-key setups?
username_1: Oh wait, my mistake - I forgot that this *is* possible! Edited my comment above.
username_1: I've also updated the docs to make this more clear.
username_0: Awesome, thanks Charles :)
username_2: Thanks @username_1 for this answer of yours - it helped me greatly now, three years later. :+1: (I was struggling with peewee trying to retrieve data from a non-existent sequence as below)

```
psycopg2.ProgrammingError: relation "issue_assignees_id_seq" does not exist
LINE 1: SELECT CURRVAL('"issue_assignees_id_seq"')
```

Applying `primary_key = False` to the inner class fixed the issue.
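For readers wondering what `primary_key = False` means at the SQL level: peewee then creates a plain table with no declared PRIMARY KEY column, so exact duplicate rows are allowed. A stdlib `sqlite3` sketch of that behaviour (table and index names chosen to mirror the model above; this is an illustration of the generated SQL, not peewee itself):

```python
import sqlite3

# A model with `class Meta: primary_key = False` maps to a table with no
# PRIMARY KEY column: duplicates are fine, and no pkey index is created.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keyword (name TEXT NOT NULL, created TEXT NOT NULL)")
# The (created, name) non-unique index from the model's Meta.indexes:
conn.execute("CREATE INDEX keyword_created_name ON keyword (created, name)")

row = ("python", "2015-06-15 12:00:00")
conn.execute("INSERT INTO keyword VALUES (?, ?)", row)
conn.execute("INSERT INTO keyword VALUES (?, ?)", row)  # exact duplicate, accepted

count = conn.execute("SELECT COUNT(*) FROM keyword").fetchone()[0]
print(count)  # 2
```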
os-js/osjs-server
752990458
Title: Dynamic mount point
Question: username_0: Hi, is it possible to create a mountpoint at runtime? For example, in an application, can I create a new mountpoint with one of the adapters registered before?

server.js: can we do something like this? `{....core.configuration.mountpoints, newMountpoint}` and use it as the mountpoints list in the application. Or just create a mountpoint entry and use it:

```js
const mountpoint = ({
  adapter: 'monster',
  name: 'Run time mountpoint',
  attributes: .............
})
```

and then use `mountpoint` for creating the mountpoint in the application.

Also, how can I do this independently of an application? Creating a `mountpoint` at OS.js runtime under some condition, and accessing it in the whole of OS.js.

Answers: username_1: Create a service provider. You could wire it together either via signals (events) or a service contract.

```js
class MyServiceProvider {
  constructor(core, options = {}) {
    this.core = core
    this.options = options
  }

  async init() {
    this.core.singleton('my-service', this.createMyContract())
  }

  createMyContract() {
    return {
      mount: () => this.createSomeMountpoint()
    }
  }

  createSomeMountpoint() {
    // This is where you would add stuff to do actual operations
  }
}
```

Then via your apps do `core.make('my-service').mount()`.
username_1: Which does not have to be an app, of course. And as for conditions, you can create your own state(s) and such and, as I mentioned, wire it together with events. If you have some idea how you need it to work, then show me and I can write a more complete example.
username_0: I still haven't written the service provider. I think one approach may help: when a user logs in with username and password, after validation he gets a token and an endpoints list. Then we could save the tokens in session storage for every adapter method query (checking the user's privileges), and save the endpoints and names in OS.js settings.
In this way we certainly need to implement `server.js` for each application that needs these mountpoints (passing `token` and `endpoint` to `server.js`), call the VFS there and return the response. I hope I have explained it clearly.
username_1: With the scenario you just described, it sounds like you might not need any runtime dynamic mount/unmount on the client side, but rather make the login process manipulate the mountpoint list (upon login) that usually comes from the statically generated configuration.

As for the server side, since you're using the same adapter for everything, I think a nice way to approach this is to make a mountpoint adapter support RegEx in the `name` property. I think this solves the issue in a fast and efficient way for your needs - at least if you know what name(s) you're going to use for your mountpoints.

So, to summarize:

1. Change the Filesystem component in the client to initially store the mountpoint list from config
   * Add an API method to add to this list
   * Add this API method to the VFS service contract
2. Change the Filesystem component in the server to support RegEx matching on mountpoint names

username_0: As you said before, we must write a service provider to decouple this operation from login and make it depend on the auth SP. But our new issue is creating client mountpoints to show the user! Do we need to create a client service provider for this purpose? We store the mountpoint name in the server session. (Actually, we need to create the mountpoint list in the client after user login too.)
username_0: It wasn't clear to me why I need to modify the Filesystem component. What do you mean by "VFS service contract"? Is it necessary to modify the server filesystem when we could add dynamically? Where might it cause a problem when pushing to mountpoints?
username_1: > We store the mountpoint name in the server session. (Actually, we need to create the mountpoint list in the client after user login too.)
Please use the following issue for client-side stuff: https://github.com/os-js/osjs-client/issues/134

In any case, I would recommend creating a service provider for this. Store the data required to set up the mountpoints in the user data, then just use the new methods that will be implemented by https://github.com/os-js/osjs-client/issues/134. The reason for this is that the VFS is not available until *after* a user has logged in (this can be customized, but I would not recommend it).
Status: Issue closed
username_1: Why did you close this?
username_0: I thought I shouldn't close an issue when someone has self-assigned it.. right?
username_1: But I don't think this is solved :smile: There still is no proper way to dynamically add/remove mountpoints on the server. Pushing into that array is a very brute-force way of doing this, and is not guaranteed to work forever if some internal mechanics change slightly.
I wrote here what I feel needs to be done to solve this correctly: https://github.com/os-js/osjs-server/issues/42#issuecomment-736073632
username_1: I just realized there's a way to dynamically add mountpoints in the server already :blush:

```js
// Example using the standard home mountpoint setup
core.make('osjs/fs').mount({
  name: 'example',
  attributes: {
    root: '{vfs}/{username}'
  }
})
```

I might actually make it so the client also supports using `mount()` with an object, just like the new `addMountpoint` I made there, so that the API signatures are similar.
username_1: If that does not solve your issue, please re-open :blush:
Status: Issue closed
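To make the service-provider pattern from this thread runnable outside OS.js, here is a dependency-free sketch. `FakeCore` is a simplified stand-in for the real OS.js core (which provides `singleton()`/`make()`); the provider registers a `mount` contract the way the earlier example does. The names mirror the thread, but the internals here are assumptions, not the actual @osjs implementation:

```javascript
// Simplified stand-in for the OS.js core's service registry:
// singleton() registers a contract, make() resolves it by name.
class FakeCore {
  constructor() { this.services = new Map(); }
  singleton(name, contract) { this.services.set(name, contract); }
  make(name) { return this.services.get(name); }
}

// A provider exposing a mount() contract; `mounted` records what was mounted.
class MountServiceProvider {
  constructor(core) { this.core = core; this.mounted = []; }
  async init() {
    this.core.singleton('my-service', {
      mount: (name) => { this.mounted.push(name); return name; }
    });
  }
}

// usage: register the provider, then resolve the contract from anywhere
const core = new FakeCore();
const provider = new MountServiceProvider(core);
provider.init();
console.log(core.make('my-service').mount('example')); // 'example'
```

The point of the contract is that any application (or non-application code) can call `core.make('my-service').mount(...)` without knowing how mounting is implemented.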
doyeka/final_project
318751744
Title: All the edges have the same weight
Question: username_0:
### Problem

I'm not sure if it's just because of some strange Alloy default, but right now all of the edges have the same edge weight. When I change the constraints under `positiveEdgeWeight` so that all weights must be greater than some number, e.g. 5, all the edge weights will switch to a different number, e.g. 6, but they will still all be the *same* weight.

### Proposed solution

Delete predicates until the graph once again has random edge weights and modify the code accordingly so that the graph is always like this.
skpm/dialog
793439087
Title: `showMessageBox` segfaults
Question: username_0:

```ts
import dialog from "@skpm/dialog";
import iconDefault from "./assets/icon.png";

// https://stackoverflow.com/a/58718960/5572146
export function impl<T>(struct: T): T {
  return struct;
}

await dialog.showMessageBox(impl<dialog.MessageBoxOptions>({
  type: "info",
  title: `title`,
  message: `message`,
  icon: iconDefault,
}));
```

* sketch: v70.3
* @skpm/dialog: v0.4.1

Answers: username_0: It seems to have something to do with the `icon` field. After making the `icon` field optional (the readme says it should be optional, [but it's not](https://github.com/skpm/dialog/blob/56e4e23af47683633396bf678abf6b609d94ef6e/type.d.ts#L77)) and omitting it, a dialog is shown.
username_1: What's the error? The icon is optional (https://github.com/skpm/dialog/blob/master/lib/message-box.js#L75-L91), it looks like a mistake in the types. Happy to accept a PR to fix it
JabRef/jabref
343122033
Title: No visual difference between static and dynamic groups
Question: username_0: In the current 4.3.1 I don't see a difference between static and dynamic groups. In 3.8.1 dynamic groups were shown in an italic font.
Answers: username_1: I think we had a discussion about this a few months ago. From the technical side there is no problem in re-adding a visual distinction between the two modes for a group. However, we couldn't come up with a use case where knowledge of the mode is important and thus makes it worthwhile to show it to the user. Why do you want such a visual indication? "It was there in 3.8.1" is not a good enough argument to bring it back ;-).
username_0: A use case is hard to describe here. It depends on how an individual brain looks at information. ;) For me it is absolutely different whether a group is created out of a keyword (dynamic) or whether I created it statically and added entries to it manually. For me it makes absolute sense to use different visual styles for those groups. But I lack the words to describe it in a generally understandable way. Sorry.
username_0: Take it like this: the tree structure in JabRef represents the ways of thinking in the brain of a user. Each user is individual as a person, individual in their use cases, and comes from a different scientific area. For my brain it is absolutely important to see at first glance a difference between real groups and dynamic keyword groups. ;)
username_1: Thanks for the follow-up. I understand that some users (or their brains) find the distinction between dynamic and static groups helpful. However, for most users such a distinction is not helpful and/or they don't understand it (e.g. previously dynamic groups were displayed in italics, but this is anything but self-explanatory). Moreover, in my opinion, it is not worth the hassle to add a preference to toggle a visual distinction on or off. However, you can edit the icon and/or the color of the group yourself.
Hence, it is very easy for you to introduce a color scheme in which all dynamic groups are, say, red and all static ones are displayed in blue.
Status: Issue closed
username_0: "Color scheme"? Can you explain that?
username_1: When you edit a group, you can specify a color for the icon (e.g. 0x5eba7dff gives green, see https://www.w3schools.com/colors/colors_hexadecimal.asp?color=5eba7dff). By using colors consistently for dynamic vs static groups you get a color-coded representation of this distinction.
username_0: I can specify the color by an RGB hex value? This is far from ergonomic. I am the user - not a machine. This is a design bug. If you offer the freedom to choose a color, then also offer a color chooser. There is one step missing. So step back (delete the color field) or forward (color picker). But this wouldn't solve my problem. I don't want to set up the color for each keyword group again and again. I want to set up the "color for all existing and future keyword groups" **one time**.
username_2: Please be patient. We are all working on JabRef in our free time. The edit and add group dialog will be redesigned when converted to JavaFX. There is already an issue with some ideas about how it might look. Feel free to add your ideas https://github.com/JabRef/jabref/issues/2630
username_0: Please stop discussing resources here. Issues are not about that. An issue/bug report/feature request is only there to find a solution, or to document whether the core developers would accept patches for something. An issue is **NOT**: fix this now! Because of #2630 I would say this is an issue. Just connect it to #2630 and leave it open to prevent other users opening an issue for the same topic again. That's all. It doesn't matter whether you fix it now or in five years, whether it's me or another person.
username_0: #2630
softloud/happypillpain
916266436
Title: pipeline
Question: username_0:
## objective

Clean data pipeline and document in such a way that Hollie can understand and check it.

## tasks

## notes

- will need to merge all issues that pertain to this

Answers: username_0:

```r
source("R/scale_match.R")
library(glue)

df <- tar_read(w_obs_long_metapar)
tar_load(w_scales)

for (i in 1:nrow(df)) {
  message(glue(" *** Row {i} *** "))
  scale_match(
    o = df$outcome[i],
    desc = df$covidence_desc[i],
    m = df$model_type[i],
    scale_df = w_scales
  )
}
```

![image](https://user-images.githubusercontent.com/13408706/130749681-5e474b6f-5544-43e0-b58c-d56fba872ed6.png)
KovDimaY/SimpleChat-Socket.io
308348070
Title: DEV-8: Improve verification process
Question: username_0: At the moment there are two things that can be improved in the verification process before the user joins a room:

1) The user comes to the room and only after this does he/she see the error message. All other members of the room will see a new user with a blank name join and leave the room:
![image](https://user-images.githubusercontent.com/26466644/37875215-74c480ae-303c-11e8-90e1-a9235c9683c8.png)
I suppose it would be much better if the user could not come to the room if the name is blank. Also, the Users class can be improved so it does not let a User be created if the name or room provided is blank.

2) The alert is the default one from the browser and it looks a bit ugly:
![image](https://user-images.githubusercontent.com/26466644/37875157-a43dafc8-303b-11e8-98ac-418390cc5a32.png)
Maybe it is a good idea to use some library for showing a nicer alert window, like **sweetalert.js**, or **Zebra_Dialog**, or **bootboxjs**, or any other library.

To work on this issue, please use the branch **DEV-8**. When the work is finished, create a PR to the **devel** branch. At the end of each development cycle, code from the **devel** branch (if everything is ok) goes to the **master** branch and deploys to production.

Answers: username_0: Added some simple validation:
![image](https://user-images.githubusercontent.com/26466644/37929032-160a8dde-313f-11e8-8310-919398cb3e7d.png)
Added a preventing mechanism, so now the user does not join the room if he/she has incorrect credentials. Also, the look-and-feel of the alert was improved with the library **sweetalert.js**:
![image](https://user-images.githubusercontent.com/26466644/37929129-60978852-313f-11e8-8e9a-586f9402df8e.png)
I pushed my changes to **DEV-8** and created a PR to **devel**. When the changes are **merged** and **tested** in **devel**, I will close the issue.
username_0: Everything was tested on devel and works as expected. The issue can be closed.
Status: Issue closed
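The blank-name check described in point 1 can run before the join ever happens. A minimal, framework-free sketch of such a validator (the function name `isValidJoin` is hypothetical, not part of the actual project):

```javascript
// Reject a join request whose name or room is missing, empty, or whitespace-only,
// so a user with a blank name never reaches the room in the first place.
function isValidJoin(params) {
  const ok = (s) => typeof s === 'string' && s.trim().length > 0;
  return ok(params.name) && ok(params.room);
}

console.log(isValidJoin({ name: 'Alice', room: 'general' })); // true
console.log(isValidJoin({ name: '   ', room: 'general' }));   // false
```

The same check can back both the client-side form validation and the server-side Users class, so a blank User can never be created.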
flyimg/flyimg
306891616
Title: Replace the current demo image used in the Documentation
Question: username_0: The current image used for the documentation is not available anymore (https://m0.cl/t/resize-test_1920.jpg); we should replace it with a new one.

Suggestion:
- https://www.mozilla.org/media/img/firefox/template/page-image.4b108ed0b8d8.png

Answers: username_0: Closing this issue; there's an issue for rewriting the documentation which includes replacing the demo image. #159
Status: Issue closed
ythy/blog
293011795
Title: Exposing jQuery with expose-loader
Question: username_0:

```js
{
  test: require.resolve('jquery'),
  use: [{
    loader: 'expose-loader',
    options: 'jQuery'
  }, {
    loader: 'expose-loader',
    options: '$'
  }]
}
```
IntelRealSense/realsense-ros
911341887
Title: L515 configuration failing
Question: username_0: Hi, I use the L515 with ROS melodic. I use the following launch file to get maximum resolution for the depth and RGB data.

```xml
<launch>
  <arg name="serial_no" default=""/>
  <arg name="usb_port_id" default=""/>
  <arg name="device_type" default=""/>
  <arg name="json_file_path" default=""/>
  <arg name="camera" default="l515"/>
  <arg name="tf_prefix" default="$(arg camera)"/>
  <arg name="external_manager" default="false"/>
  <arg name="manager" default="realsense2_camera_manager"/>
  <arg name="fisheye_width" default="1280"/>
  <arg name="fisheye_height" default="720"/>
  <arg name="enable_fisheye" default="false"/>
  <arg name="depth_width" default="1024"/>
  <arg name="depth_height" default="768"/>
  <arg name="enable_depth" default="true"/>
  <arg name="infra_width" default="1280"/>
  <arg name="infra_height" default="720"/>
  <arg name="enable_infra1" default="false"/>
  <arg name="enable_infra2" default="false"/>
  <arg name="color_width" default="1920"/>
  <arg name="color_height" default="1080"/>
  <arg name="enable_color" default="true"/>
  <arg name="fisheye_fps" default="30"/>
  <arg name="depth_fps" default="30"/>
  <arg name="infra_fps" default="30"/>
  <arg name="color_fps" default="6"/>
  <arg name="gyro_fps" default="400"/>
  <arg name="accel_fps" default="250"/>
  <arg name="enable_gyro" default="false"/>
  <arg name="enable_accel" default="false"/>
  <arg name="enable_pointcloud" default="false"/>
  <arg name="pointcloud_texture_stream" default="RS2_STREAM_COLOR"/>
  <arg name="pointcloud_texture_index" default="0"/>
  <arg name="enable_sync" default="false"/>
  <arg name="align_depth" default="true"/>
  <arg name="publish_tf" default="true"/>
  <arg name="tf_publish_rate" default="0"/>
  <arg name="filters" default=""/>
  <arg name="clip_distance" default="-2"/>
  <arg name="linear_accel_cov" default="0.01"/>
  <arg name="initial_reset" default="false"/>
  <arg name="unite_imu_method" default=""/>
  <arg name="topic_odom_in" default="odom_in"/>
  <arg name="calib_odom_file" default=""/>
  <arg name="publish_odom_tf" default="true"/>
  <arg name="allow_no_texture_points" default="false"/>
  <group ns="$(arg camera)">
    <include file="$(find realsense2_camera)/launch/includes/nodelet.launch.xml">
```

[Truncated]

```
[ INFO] [1622800462.586704954]: Motion Module was found.
[ INFO] [1622800462.587185384]: num_filters: 1
[ INFO] [1622800462.587217767]: Setting Dynamic reconfig parameters.
[ INFO] [1622800462.890762268]: Done Setting Dynamic reconfig parameters.
[ WARN] [1622800462.890956379]: Given stream configuration is not supported by the device! Stream: Depth, Stream Index: 0, Width: 1024, Height: 768, FPS: 30, Format: Z16
[ WARN] [1622800462.891018674]: Using default profile instead.
[ INFO] [1622800462.891306240]: depth stream is enabled - width: 320, height: 240, fps: 30, Format: Z16
[ WARN] [1622800462.891427744]: Given stream configuration is not supported by the device! Stream: Color, Stream Index: 0, Width: 1920, Height: 1080, FPS: 6, Format: None
[ WARN] [1622800462.891474792]: Given stream configuration is not supported by the device! Stream: Confidence, Stream Index: 0, Width: 640, Height: 480, FPS: 30, Format: None
[ INFO] [1622800462.891838174]: setupPublishers...
[ INFO] [1622800462.895660053]: Expected frequency for depth = 30.00000
[ INFO] [1622800462.934061382]: setupStreams...
[ INFO] [1622800462.945827959]: insert Depth to L500 Depth Sensor
[ INFO] [1622800462.986689405]: SELECTED BASE:Depth, 0
[ INFO] [1622800462.987343673]: RealSense Node Is Up!
```

Finally, only a depth image of resolution 240 x 320 is published. RGB data is missing completely. Furthermore, it says it is using USB port 2.1, although it is attached to USB 3.0. What is the problem here? I appreciate any help.
Answers: username_1: Hi, The Linux system can sometimes accidentally detect a USB 3.1 connection as a USB 2.1 connection if the cable was attached slowly or loosely, but usually the issue is caused by a low-quality USB cable.
In any case, the fact that the device is recognized by the system as being connected through a USB 2.1 port can explain the issues with the configuration. Status: Issue closed username_0: Indeed, when the connection was USB 3.1, the configuration worked.
kahlan/kahlan
235060688
Title: Unexpected results from toBeCalled
Question: username_0: Given the following example from the [Matchers documentation](https://kahlan.github.io/docs/matchers.html):

```php
it("expects `time()` to be called and followed by `rand()`", function() {
    $foo = new Foo();
    expect('time')->toBeCalled()->ordered;
    expect('rand')->toBeCalled()->ordered;
    $foo->date();
    $foo->random();
});
```

To create the following reduced test case:

```php
it('expects to be sorted before checking if its an array', function () {
    function _generate_antimatter () {
        $antiparticles = [];
        array_multisort( $antiparticles );
        is_array( reset( $antiparticles ) );
        return $antiparticles;
    }

    expect('array_multisort')->toBeCalled()->ordered;
    expect('is_array')->toBeCalled()->ordered;

    _generate_antimatter($this->dependencyWithoutQuery);
});
```

Results in the following unexpected output in Kahlan 3.1.15:

```
It expect actual to be called.
actual: (string) "array_multisort()"
actual called times: (integer) 0
expected to be called: (string) "array_multisort()"
expect->toBeCalled() failed in `./spec/hyperdrive.spec.php` line 81

It expect actual to be called.
actual: (string) "is_array()"
actual called times: (integer) 0
expected to be called: (string) "is_array()"
```

Answers: username_1: Your `_generate_antimatter()` function must be inside a class that is registered in the Composer autoload to do that test.
username_0: PHP introduced function imports in PHP 5.6 near the end of 2015. I realize not everyone is jumping on procedural programming, but I chose Kahlan because it has gotten me extremely far in my procedural codebase thus far. Based on my survey of the PHP testing landscape, Kahlan was the best I could do to achieve the 74% test coverage so far, and as discussed in related issues I'm using `file` autoload, which I understand is a kludge in Composer. That said, I do not plan to do any OOD in my codebase. That means no `class` or `extends` keywords unless I'm doing mocks and have no other choice.
I suppose it's up to the creators of Kahlan whether it's within the purview of their vision for the progression of Kahlan as to whether or not it will at some point facilitate procedural programming using PHP 5.6 features.
username_0: Closing as this usage is not covered under the current codebase. I may raise it in a future issue against Composer if that's the hold-up. To me it's not clear. But what I do know is that many use classes in PHP as faux namespaces, and that doesn't seem right given PHP's new digs.
Status: Issue closed
filecoin-project/lotus
955567022
Title: makefile subtly broken when GOCC is unset
Question: username_0: The makefile is subtly broken with the introduction of GOCC; when unset, the version check fails.

```
lotus $ make
bash: version: command not found
expr: syntax error: unexpected argument '1016000'
lotus $ GOCC=go make
... # everything normal
```

<issue_closed> Status: Issue closed
hasherezade/IAT_patcher
983373511
Title: rsp error
Question: username_0: Windows 10 API hooking crash!!!

1. ![image](https://user-images.githubusercontent.com/5162637/131431350-ccc84fb7-d72e-4fd8-9649-8e25fa5416de.png)
0x00007FF737D20A34 is the pre-main function.
2. ![image](https://user-images.githubusercontent.com/5162637/131431401-7c851a01-04b7-4123-80ac-38f667e0404b.png)
0x00007FF73AD01013 jumps to the main function.
3. The jmp return address on the stack will be changed:
![image](https://user-images.githubusercontent.com/5162637/131431522-6af01983-b243-4b8a-971c-9c42f3c0e5ed.png)
4. It has been changed:
![image](https://user-images.githubusercontent.com/5162637/131431610-857f420d-be57-4200-828e-30ae7d6f12f3.png)
00007FF73AD010A0 should be 0x00007FF73AD01013.
5. Execution will go to a wrong rsp:
![image](https://user-images.githubusercontent.com/5162637/131431714-0155e3cf-fe40-4d3a-b3f2-6d5da11866d8.png)

Answers: username_1: Hi, under which circumstances did the crash happen? Did it happen on return from your replacement function? Are you sure that the function that you used as a replacement has **exactly the same API** as the function that you were hooking? Not only do parameter numbers and types have to match, but also the **calling convention has to be identical** (i.e. a `_stdcall` function cannot be used as a replacement for a `_cdecl` function, and vice versa). If there is a mismatch, the return will not be good.
username_0: Yes! I am sure! I modified the asm like this, and it tests OK! I am not sure whether it will work well on all Windows versions!
![image](https://user-images.githubusercontent.com/5162637/131474184-77515fa5-b9a5-48e6-8985-b38a01c3d73c.png)
username_0: Also, there is a bug in the hexf.cpp file. I build the asm with yasm, and when the code contains a 0xff byte, hexf.cpp breaks, so when I copy the generated code to stubdata.h it is wrong. I modified the file like this:
![image](https://user-images.githubusercontent.com/5162637/131475034-f2081bd8-3580-4a82-a244-92660769cdd8.png)
username_0: #5
username_1: Thank you for your contribution, the new release is ready: https://github.com/username_1/IAT_patcher/releases/tag/v0.3.5.4
username_0: Thank you for your project. It saved me time.
Status: Issue closed
redis-store/redis-rails
158252576
Title: Trouble installing the redis-rails gem
Question: username_0: ![6](https://cloud.githubusercontent.com/assets/16144214/15762937/69325c2c-28d5-11e6-9a88-0a700d25db2f.png)
Answers: username_1: What happens after you follow the instructions in the error message? This is not related to redis-rails. Please install a not shitty version of Ruby. :-)
username_2:
```ruby
gem 'redis-rails', git: 'https://github.com/redis-store/redis-rails.git'
```
@username_0
username_2: some errors in the Gemfile @username_1
username_1: @username_2 and @username_0, can you folks try to install redis-rails under the latest Ruby version? It seems your version is outdated and might be causing issues on Windows.
username_3: I had the same issue on Rails 5, Ruby 2.3 on OSX. Thanks @username_2 for the solution.
username_1: Closing, and working with the team to add the minimum requirements on issue posting so that non-issues like this and/or duplicates aren't reported.
Status: Issue closed
username_0: How do I do it?
username_0: @username_3 did you make it?
username_1: @username_0 @username_3 @username_2 this should be solved by specifying the version:
```ruby
gem 'redis-rails', '~> 5'
```
You shouldn't need the github dependency anymore.
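For reference, the `'~> 5'` in the suggested fix is RubyGems' pessimistic version constraint: it allows any 5.x release and rejects the 4.x line. This can be verified with the stdlib `Gem::Requirement` class:

```ruby
require 'rubygems'

# gem 'redis-rails', '~> 5' means: >= 5 and < 6
req = Gem::Requirement.new('~> 5')

puts req.satisfied_by?(Gem::Version.new('5.0.2')) # true  (any 5.x is fine)
puts req.satisfied_by?(Gem::Version.new('4.0.0')) # false (pre-Rails-5 line rejected)
```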
Fody/Fody
387165399
Title: Add configuration to turn off legacy weaver lookup
Question: username_0: Scenario: A weaver is configured in FodyWeavers.xml, but not included as a NuGet package. The build may succeed, because Fody finds any weaver by legacy lookup. To make this more strict, there should be a flag in the configuration that allows turning off legacy lookup.
Alternative: turn off legacy lookup by default, and have a flag to turn it on again.
@username_2 @username_1 What's your opinion about this?
Answers: username_1: Seems reasonable. I wouldn't mind having legacy support off by default (most weavers should be up to date), as long as the error message is clear and references the config option.
username_2: Isn't legacy lookup prone to bugs via side effects? And if it doesn't find one, isn't the fix trivial for the user?... add the package?
username_0: Yeah, the default option for legacy lookup should probably be OFF
username_2: What I don't understand is why we would want anyone to enable it?
username_0: There might be some legacy weavers around, and it's not a big deal keeping it for a while...
username_2: OK, I am a little confused. What do you define as "legacy lookup" versus "legacy weaver"?
Status: Issue closed
betagouv/e-controle
545655705
Title: I am an auditor but not an agent of the Cour des comptes; I don't understand how to access the responses.
Question: username_0: On Mon, 23 Dec 2019 at 16:41, <NAME> <<EMAIL>> wrote:

I'm sending a short memo.
- The user is probably not on the Cour's network - no VPN?
- She cannot access the files via Windows Explorer.
- She had not understood that by going to the questionnaire page and clicking "consulter" (view), she could see the responses. That is because she saw the "Voir les réponses" (see the responses) button, and nothing suggested there was another place to "see the responses".
- She suggests changing "voir les réponses" to "comment voir les réponses" (how to see the responses).

---------- Forwarded message ---------
From: <NAME> <<EMAIL>>
Date: Mon, 23 Dec 2019 at 16:06
Subject: Re: problème accès doc sur e.controle
To: xxx
Cc: e-controle-beta <<EMAIL>>

Hello Madam,

Thank you for your message. You may need to check that your PC is connected to the Cour's network and that the connection works. Also, are you on Windows 10 or on Windows 7? I could call you back by phone if you wish.

Thanking you,
Kind regards,
<NAME>
For the e.contrôle team

On Mon, 23 Dec 2019 at 14:53, Anne

Hello,

I cannot access the documents uploaded to e.controle for several enquiries by several organisations. E.g. the screenshot of the error message: I followed the PPT for creating a network location... I get this same error message whatever link I try to download to a network location. Can you help me?

Best regards and happy holidays.

Answers: username_0: Explanation: this "auditor" user is not an agent of the Cour des comptes. It can happen, but it is extremely rare. When she logs in, she sees the "voir les réponses" button.
![Capture](https://user-images.githubusercontent.com/43180136/71812861-8c85a900-3078-11ea-9b2d-2fd668fcd374.JPG)
She can access the WebDAV tutorials, but that cannot work for her because she is not in the Cour's AD.
username_0: Suggestion:
- take up her idea: "Comment voir les réponses" (how to see the responses)
- add a notice: "Prerequisite: be connected to the Cour's networks (from your office, JF wifi or VPN)" "The responses are also accessible from each questionnaire by clicking 'consulter'"
jlippold/tweakCompatible
469047878
Title: `IPACE` working on iOS 12.2 Question: username_0: ``` { "packageId": "null.leptos.ipace", "action": "working", "userInfo": { "arch32": false, "packageId": "null.leptos.ipace", "deviceId": "iPhone10,3", "url": "http://cydia.saurik.com/package/null.leptos.ipace/", "iOSVersion": "12.2", "packageVersionIndexed": true, "packageName": "IPACE", "category": "Addons (Activator)", "repository": "(null)", "name": "IPACE", "installed": "0.0.1", "packageIndexed": true, "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.", "id": "null.leptos.ipace", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.5", "shortDescription": "Internet Protocol Address Change Event (Activator, configured for IPv4 and IPv6)", "latest": "0.0.1", "author": "Leptos", "packageStatus": "Unknown" }, "base64": "<KEY> "chosenStatus": "working", "notes": "" } ```<issue_closed> Status: Issue closed
NicolasCARPi/jquery_jeditable
913691691
Title: How do we catch the "id" of the edited control when submitting to a custom function? Question: username_0: # Description How do we catch the "id" of the edited control when submitting to a custom function? Thanks! Answers: username_1: See: https://github.com/username_1/jquery_jeditable/#submitting-to-function-instead-of-url Status: Issue closed username_0: Thanks for the quick response... I implemented that, but I can't find the original element's id in any of the function-accessible objects... neither "this", "value" nor "settings". I realize I might be missing something. I'm just stumped. Great work by the way, thanks for sharing! username_1: That would be `$(this).attr('id')`. username_0: Great, thanks! username_0: Hi Nicolas, just wanted to let you know I tried it, and I'm still not able to get the id, or any other attribute, from the node being edited. Here is a quick snippet showing you everything I tried. Hope you can shed some light on this or publish a basic example showing this succeeding. Thanks!!

```javascript
$('.editable').editable( (value, settings) => {
    const new_value = value.trim();
    console.log(sn, "- this:", $(this));
    console.log(sn, "- this 2:", this);
    console.log(sn, "- this id:", $(this).attr('id'));
    console.log(sn, "- this id v2:", this.parentNode.getAttribute('id'));
    console.log(sn, "- this id v2:", $(this).parentNode.getAttribute('id'));
    console.log(sn, "- this data-assignment-id:", $(this).attr('data-assignment-id'));
```

username_1: I suspect it's because you're using an arrow function, and it impacts what "this" is. Try with a normal function. username_0: That was it! Thank you. I appreciate your follow-up. Great library, thanks again!
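The arrow-function pitfall username_1 points to is easy to reproduce without jQuery: an arrow function captures `this` from its enclosing scope, so the receiver a caller binds (the way jQuery binds the edited element when invoking the submit callback) is ignored. A minimal Node.js sketch, with the `node` object and handler names invented for illustration:

```javascript
const node = { id: "price-cell" };

// A normal function picks up the receiver it is called on,
// which is how jQuery hands the edited element to callbacks.
function normalHandler() {
  return this.id;
}

// An arrow function keeps the `this` of its enclosing scope
// and silently ignores the receiver set via .call().
const arrowHandler = () => (this ? this.id : undefined);

console.log(normalHandler.call(node)); // "price-cell"
console.log(arrowHandler.call(node));  // undefined
```

This is why switching the jeditable callback from an arrow function to a plain `function` makes `$(this).attr('id')` work.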
pulumi/docs
431161598
Title: Document dependsOn a Component resource Question: username_0: Document dependsOn a Component resource. (A new feature in 0.17.1). See [https://pulumi-community.slack.com/archives/C84L4E3N1/p1551945676230900](https://pulumi-community.slack.com/archives/C84L4E3N1/p1551945676230900). Status: Issue closed Answers: username_1: Closing out stale issue. This is documented in https://www.pulumi.com/docs/intro/concepts/programming-model/
miguelgrinberg/python-socketio
1025419766
Title: Could you make a new release please? :-) Question: username_0: Hey @miguelgrinberg, I'm a new user who got confused with the docs for catch-all '*' being out of sync with the last pip release. I'm gonna grab it using the github url now (excited to use it!), but if you're able to make a release sometime soon I'm sure it'd be appreciated by others. Thanks! Answers: username_0: Thank you Miguel!
outflanknl/RedELK
1032718505
Title: Erroneous Password Generation Question: username_0: In a number of places the setup generates random passwords, like here: https://github.com/outflanknl/RedELK/blob/master/elkserver/install-elkserver.sh#L354 In one of our automated deployments this generated a Kibana password starting with `-` which caused the container to get stuck in a boot loop with an error like this: ``` ERROR Extra serve options "--elasticsearch.password" must have a value ``` The assumption here is that the password bricked the install. I think you tried to account for this when you generate them since you use `tr` to strip out all characters except `........` but the regex is incorrectly formatted. Example using current `tr` pipe: ``` b33f@DESKTOP-7RNOI72:~$ echo sG-RxQmym6l4xfacfxem6r0jxvM02X1Q |tr -dc _A-Z-a-z-0-9 sG-RxQmym6l4xfacfxem6r0jxvM02X1Q ``` I think what you want is this: ``` b33f@DESKTOP-7RNOI72:~$ echo sG-RxQmym6l4xfacfxem6r0jxvM02X1Q |tr -dc _A-Za-z0-9 sGRxQmym6l4xfacfxem6r0jxvM02X1Q ``` Please change all the instances like the example at the top.<issue_closed> Status: Issue closed
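The character-class mistake can also be checked outside of `tr`, since regular-expression classes treat a stray `-` between ranges the same way: as a literal hyphen. A quick Node.js sketch using the example password from the report:

```javascript
const pw = "sG-RxQmym6l4xfacfxem6r0jxvM02X1Q";

// Mirrors `tr -dc _A-Z-a-z-0-9`: the extra "-" between the ranges is a
// literal hyphen in the class, so hyphens survive the strip.
const buggy = pw.replace(/[^_A-Z-a-z-0-9]/g, "");

// Mirrors the corrected `tr -dc _A-Za-z0-9`: hyphens are removed.
const fixed = pw.replace(/[^_A-Za-z0-9]/g, "");

console.log(buggy); // "sG-RxQmym6l4xfacfxem6r0jxvM02X1Q"
console.log(fixed); // "sGRxQmym6l4xfacfxem6r0jxvM02X1Q"
```

The buggy class keeps the leading-`-` password intact, which is exactly what lets a generated password start with `-` and be mistaken for a CLI flag.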
SeleniumHQ/seleniumhq.github.io
624345878
Title: Looks like ActionChains is not working with iframe in Python Question: username_0: ## 🐛 Bug Report I am trying to move an element from outside of iframe to inside iframe but it seems that never worked. ## To Reproduce This is a website URL where I executed below code: https://grapesjs.com/demo.html ``` element_source = self._selenium.find_element(By.XPATH, '//div[@class="gjs-block-label"][contains(.,"Tooltip")]') element_target=self._selenium.find_element(By.CLASS_NAME,'gjs-frame') actions = ActionChains(self._driver) actions.drag_and_drop(element_source, element_target).perform() time.sleep(5) ``` ## Expected behavior Expecting that element from the right panel should move to left iframe. ## Environment * OS: Windows * Browser: Chrome * Browser version: 81.0.4044.138 Answers: username_1: Hi @username_0 , Thanks for raising an issue. This room is for Selenium documentation issues. This issue is best suitable to raise in Selenium/issues. Can you please raise the same issue in [here](https://github.com/SeleniumHQ/selenium/issues) Thanks, Harsha. Status: Issue closed username_0: Thanks @username_1 for a quick guide. Done.
xem/sheet
534995953
Title: 3,14 Question: username_0: First: It's (again) a very impressive code. So, I tested it with A1: 3,14 B1: 3,00 C1: =A1+B1 On "full with headers" it results in 3,143,00 On "minimal" it results in a NaN. On "Full" it results in 3,143,00. It's not really a bug, just my two cents. Keep up your impressive work! :)
Answers: username_1: Hey! Sorry about that, you need to use "." in floating-point numbers, not "," I know there are languages where "," is used as the decimal separator, like "1 234,56" in French, But in English, the comma is a thousands separator, and the dot is a decimal separator: "1,234.56" And in JavaScript (the language in which this app was coded), the decimal separator is also "." (ex: 1234.56)
username_1: Oh, thanks a lot for the kind words! a whole team contributed to it :)
username_0: Hey again, yeah this makes sense. I tested it again and the results are now: 6.14000000000000**1** in full & full with headers. In minimal there are no results. 😄 So, thanks to your impressive team, it's always a nice time to surf through your code!
username_1: Ah, that's another particularity of floating-point numbers in today's computers. In 99% of programming languages (the ones relying on the underlying architecture), floating-point addition is error-prone. The most famous example is 0.1 + 0.2 != 0.3. More info at: https://0.30000000000000004.com/ Sorry, not our fault :)
username_1: (also, "minimal" only supports integers)
username_0: Thanks again for the very helpful explanation! :)
Status: Issue closed
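Both behaviors in this thread, the comma not parsing and the rounding artifact, can be reproduced in a few lines of plain JavaScript, the language the sheet is written in:

```javascript
// "," is not a decimal separator in JavaScript
console.log(Number("3.14")); // 3.14
console.log(Number("3,14")); // NaN

// binary floating point cannot represent these values exactly
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false

// the A1+B1 case from above: not exactly 6.14
// (the sheet shows 6.140000000000001)
console.log(3.14 + 3.00);

// compare with a tolerance instead of exact equality
const approxEqual = (a, b, eps = 1e-9) => Math.abs(a - b) < eps;
console.log(approxEqual(0.1 + 0.2, 0.3)); // true
```

The tolerance comparison is the usual workaround when a spreadsheet-style app needs to test floating-point results for "equality".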
julia-vscode/julia-vscode
537977244
Title: Missing reference error for `local` var Question: username_0: Variables declared with `local` ends up with a missing reference warning. Note that `global` works fine. ![image](https://user-images.githubusercontent.com/1159782/70855641-15972f00-1e83-11ea-999d-8cc99c7dd0be.png) ![image](https://user-images.githubusercontent.com/1159782/70855650-44150a00-1e83-11ea-9151-48ead6d4fe9a.png) https://github.com/julia-vscode/julia-vscode/releases/tag/v0.13.0-alpha.1 VSCode version: ``` Version: 1.40.2 Commit: <PASSWORD> Date: 2019-11-25T14:52:45.129Z Electron: 6.1.5 Chrome: 76.0.3809.146 Node.js: 12.4.0 V8: 7.6.303.31-electron.0 OS: Darwin x64 18.7.0 ``` Answers: username_1: https://github.com/julia-vscode/StaticLint.jl/pull/43 Status: Issue closed
dokku/dokku
163767844
Title: Non-deployed applications cannot be renamed Question: username_0: Description of problem: If you attempt to rename an application that has not been deployed, you will get an error from the new application being "deployed". Seems to be related to cache clearing, which we probably do not need if the application has not been deployed. ``` -----> uname: Linux ci 3.13.0-85-generic #129-Ubuntu SMP Thu Mar 17 20:50:15 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux -----> memory: total used free shared buffers cached Mem: 3953 2652 1300 1 578 1512 -/+ buffers/cache: 561 3391 Swap: 0 0 0 -----> docker version: Client: Version: 1.11.2 API version: 1.23 Go version: go1.5.4 Git commit: b9f10c9 Built: Wed Jun 1 21:47:50 2016 OS/Arch: linux/amd64 Server: Version: 1.11.2 API version: 1.23 Go version: go1.5.4 Git commit: b9f10c9 Built: Wed Jun 1 21:47:50 2016 OS/Arch: linux/amd64 -----> docker daemon info: Containers: 10 Running: 1 Paused: 0 Stopped: 9 Images: 31 Server Version: 1.11.2 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 139 Dirperm1 Supported: false Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: host bridge null Kernel Version: 3.13.0-85-generic Operating System: Ubuntu 14.04.4 LTS OSType: linux Architecture: x86_64 CPUs: 2 Total Memory: 3.861 GiB Name: ci ID: VO2Q:4VU7:5ISL:EPIN:K5PO:SORR:7O4B:HBZS:4URM:TNWQ:5DUK:V74A Docker Root Dir: /var/lib/docker Debug mode (client): true Debug mode (server): false Registry: https://index.docker.io/v1/ WARNING: No swap limit support -----> sigil version: 0.4.0 -----> herokuish version: herokuish: 0.3.13 [Truncated] +++ [[ -n 1 ]] +++ set -x +++ source /var/lib/dokku/core-plugins/available/common/functions ++++ set -eo pipefail ++++ [[ -n 1 ]] ++++ set -x + apps_rename_cmd apps:rename node-js-app io-js-app + declare 'desc=renames an app' + local cmd=apps:rename + [[ -z node-js-app ]] + [[ -d /home/dokku/io-js-app ]] + local OLD_APP=node-js-app + local 
NEW_APP=io-js-app + mkdir -p /home/dokku/io-js-app + docker run --label=dokku --rm -v /home/dokku/node-js-app/cache:/cache dokku/node-js-app chmod 777 -R /cache Unable to find image 'dokku/node-js-app:latest' locally Pulling repository docker.io/dokku/node-js-app docker: Error: image dokku/node-js-app not found. See 'docker run --help'. ```<issue_closed> Status: Issue closed
yurafuca/nicotap
467949432
Title: Apply a launcher icon Question: username_0: **Request summary** Apply a launcher icon to nicotap. **Motivation** The current launcher icon is the default Android Studio icon. **Additional notes (optional)** If there is any information or images related to this request, add them here. Answers: username_0: ![icon](https://user-images.githubusercontent.com/5700824/61197200-93a5ad00-a70e-11e9-9a5c-776ef1d33efc.png) username_0: Lowered the lightness. The touch-effects part looked cramped and felt off, so I removed it. ![ic_launcher_round](https://user-images.githubusercontent.com/5700824/61202501-9e6a3d00-a722-11e9-8cb1-ba370216cefc.png) username_0: Provisionally addressed in v1.0.0-alpha26. The design still feels cheap, so I want to keep improving it over time. I would be glad to get input from designers or anyone with design expertise. Status: Issue closed
Chia-Network/chia-blockchain
964699076
Title: High virtual memory used in Windows 10 Question: username_0: Hello! Is anybody here having problems with virtual memory increasing when farming in Windows 10? I verified that when an eligible plot is found and read, the virtual memory grows a bit. I have more than 4500 plots. When I open the Chia GUI, the virtual memory begins to grow. ![virtualmemory](https://user-images.githubusercontent.com/88332935/127878212-b89e5f7b-b0c6-4089-b8a9-11ff4167eafd.PNG)
MicrosoftDocs/azure-docs
582284505
Title: Preview or GA? Question: username_0: 'Control egress traffic' as described in https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic#egress-traffic-overview seems to depend on this feature. Since 'limit egress trafic' is GA https://azure.microsoft.com/en-us/updates/egress-lockdown-in-azure-kubernetes-service-aks-is-now-generally-available/ I would assume egress-outboundtype is also GA, unlike what's indicated in this page. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 7d721081-0c1c-abf3-a398-386a718446d0 * Version Independent ID: 92b5504d-e25a-be94-e5e5-151083a70607 * Content: [Customize user-defined routes (UDR) in Azure Kubernetes Service (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/egress-outboundtype) * Content Source: [articles/aks/egress-outboundtype.md](https://github.com/Microsoft/azure-docs/blob/master/articles/aks/egress-outboundtype.md) * Service: **container-service** * GitHub Login: @mlearned * Microsoft Alias: **mlearned** Answers: username_1: Thanks for the feedback! I have assigned the issue to the content author to investigate further and update the document as appropriate. username_0: @username_1 @mlearned Do you have any update on this? Is `outboundType` GA already or is it still only part of the preview? username_1: @username_0 outboundType is in Preview currently. There is no ETA for GA as of now. I will share here as I get an update on GA timelines. username_2: Hi. Any update on the GA time plan ? Zone redundancy, private cluster and multiple nodepools features require standard loadbalancer and without outboundType feature SLB require public IP address, which is against our security policy. username_3: The outboundtype details can be found on this AKS issue. Could you follow up with us there so you get the latest? 
https://github.com/Azure/AKS/issues/1384 username_3: @karishma-Tiwari-MSFT can you please close this issue in favor of the tracker I shared previously? username_1: We will now close this issue. If there are further questions regarding this matter, please tag me in a comment. I will reopen it and we will gladly continue the discussion. Status: Issue closed
galaxyproject/training-material
797113416
Title: Video on scrna-intro is 1 slide out of sync due to Requirements slide Question: username_0: * [the Video](https://training.galaxyproject.org/training-material/topics/transcriptomics/videos/#video-transcriptomics-scrna-intro) If you look at the video at 6 seconds in, there is an extra slide added "Requirements", which is not defined [in the markdown](https://github.com/galaxyproject/training-material/blob/12853cc254201681cbdb700843cc36bcaf9be70f/topics/transcriptomics/tutorials/scrna-intro/slides.html#L31). I believe this throws off the video captions by 1 slide. Is this an easy fix? Status: Issue closed Answers: username_1: It's already fixed :) we're rebuilding soon username_1: #2314 , #2315 username_0: Oh sweet!
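The drift described here is a plain off-by-one: once an extra slide is injected, any index-based pairing of slides to captions shifts by one from that point on. A small sketch (the slide titles are invented for illustration; they are not the actual scrna-intro slides):

```javascript
// Slides as rendered, including the auto-inserted "Requirements" slide
const rendered = ["Title", "Requirements", "Overview", "Methods"];
// Captions authored against the markdown, which has no "Requirements" slide
const captions = ["Title narration", "Overview narration", "Methods narration"];

// Index-based pairing drifts by one after the inserted slide
const paired = captions.map((caption, i) => [rendered[i], caption]);

console.log(paired[0]); // ["Title", "Title narration"] - still aligned
console.log(paired[1]); // ["Requirements", "Overview narration"] - off by one
```

Removing the extra slide (or generating it in the markdown too) restores the alignment, which matches the fix referenced in the linked PRs.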
microsoft/winget-cli
621252829
Title: "error C2039: 'back_inserter': is not a member of 'std'" when building Question: username_0: # Environment Windows SDK: 10.0.18362.0 Visual Studio: Microsoft Visual Studio Enterprise 2019 Preview Version 16.7.0 Preview 1.0 Platform Toolset: Visual Studio 2019 (v142) Windows Version: 2004 (19628.1) Answers: username_1: This is very odd, no idea why its happening. The only thing that comes to mind is updating our C++/WinRT reference. username_2: @username_0 are you still having problems building this project? username_0: I've just tested the master branch with the latest Visual Studio 2019 Preview version and it worked! Status: Issue closed
aws/sagemaker-xgboost-container
1073543167
Title: TypeError: predict() got an unexpected keyword argument 'pred_contribs' with xgboost v0.90 Question: username_0: Hi @eitansela I am using the `inference.py` file and have trained my model using xgboost v0.90. from xgboost import XGBRegressor model = XGBRegressor() However, when I run the script and invoke the endpoint to make prediction, I run into the error. Here's what my `inference.py` code looks like: ``` import json import os from io import BytesIO import pickle as pkl import numpy as np import sagemaker_xgboost_container.encoder as xgb_encoders import xgboost as xgb from os import listdir from scipy import sparse # Load your model def model_fn(model_dir): """ Deserialize and return fitted model. """ model_file = "xgboost-model" booster = pkl.load(open(os.path.join(model_dir, model_file), "rb")) return booster def input_fn(request_body, request_content_type): """ The SageMaker XGBoost model server receives the request data body and the content type, and invokes the `input_fn`. Return a DMatrix (an object that can be passed to predict_fn). """ if request_content_type == "text/csv": values = [i for i in request_body.split(',')] values = [val.strip() for val in values] # to 2-d numpy array npa = np.array(values).reshape(-1,1) return npa if request_content_type == "text/libsvm": return xgb_encoders.libsvm_to_dmatrix(request_body) else: raise ValueError("Content type {} is not supported.".format(request_content_type)) [Truncated] names = model.get_booster().feature_names prediction = model.predict(input_data, validate_features=False) feature_contribs = model.predict(input_data, preds_contribs=True, validate_features=False) output = np.hstack((prediction[:, np.newaxis], feature_contribs)) return output def output_fn(predictions, content_type): """ After invoking predict_fn, the model server invokes `output_fn`. 
""" if content_type == "text/csv": return ",".join(str(x) for x in predictions[0]) else: raise ValueError("Content type {} is not supported.".format(content_type)) ```
Azure/azure-functions-durable-extension
948239804
Title: Durable Function & Azure Storage degradation Question: username_0: ### Description My setup involves using an event hub trigger that starts new orchestrations. An orchestration will call different activity functions to process the events until eventually the events are sent to various Event Grid topics using output bindings. I've never had a problem with this until recently where I'm noticing performance degradation. I have a setup where the same data is being sent through both a prod environment and non-prod environment. My prod environment works as expected but my non-prod environment degrades very quickly. The only way I can solve the issue is by deleting the storage account, but after about an hour it degrades to a point where none of my events are being sent in a timely manner. This is a huge issue for us as we cannot deploy anything to production until we figure out why this started happening. In our prod env, the time it takes for an event to be sent to event hub and be routed to my endpoint via event grid is about ~1 second. It's extremely fast. It's been working great for almost a year, but in the last few months I've been noticing a degradation problem in the non-prod env which has prevented us from being able to deploy to prod. ### Expected behavior Orchestrations finish executing within a timely manner. The prod environment does exactly what I would expect. I can batch process 512 events at a time with ease. ### Actual behavior The non-prod environment quickly degrades to a point where I will not see events come through my webhook for 20+ minutes or more. It continues to degrade as time goes on. When I recreate the storage account everything works as expected, but after about an hour I notice degradation occurring. 
### App Details non-prod env: - **Durable Functions extension version**: 2.5.0 - **Azure Functions runtime version**: ~3.0 - **Programming language used**: c# - **Event Hub extension version**: 4.2.0 - **Functions SDK version**: 3.0.13 prod env: - **Durable Functions extension version**: 2.4.1 - **Azure Functions runtime version**: ~3.0 - **Programming language used**: c# - **Event Hub extension version**: 4.0.1 - **Functions SDK version**: 3.0.11 ### Screenshots This is what I'm seeing in Azure Storage: ![image](https://user-images.githubusercontent.com/419283/126261043-232e06c6-4c4d-4bfe-a09f-92738427e225.png) Answers: username_1: Just going to follow along here but have you tried rolling back any of your package updates in your non-prod environment to help isolate the issue? username_2: @username_0, it looks like you are hitting a regression introduced in v2.5.0 in the DurableTask Azure Storage implementation. The good news is that we published a hotfix version of that package with [the fix](https://github.com/Azure/durabletask/pull/568). Just add the following reference to your .csproj file:
```
<PackageReference Include="Microsoft.Azure.DurableTask.AzureStorage" Version="1.8.8" />
```
The version of the package with the fix will be included by default in v2.5.1 of the Durable Functions extension that will be released in the next two weeks or so. Status: Issue closed username_0: I included the package reference to my project, published the changes, and let it run for about an hour now. I'm sorry to say that it doesn't look like it has fixed the problem I'm seeing. Is there anything else I should try doing? Should I recreate the storage account to give it a clean slate, for instance? Side note: I only recently updated to version 2.5.0 for the durable task extension, but I've been experiencing this problem for a couple months now. username_2: @username_0, I think you just may need to wait a bit to go through your queue backlog (or start with a fresh storage account). Here is the median queue message age by the time we process it (age is measured in milliseconds): ![image](https://user-images.githubusercontent.com/8934380/126374647-362a1c25-4893-469f-b71c-e51589e530dc.png) username_2: As for the previous few months, if you have a time range (ideally in UTC) in the last 60 days that was before you upgraded to v2.5.0 that you would like me to investigate, please let us know and I can take a look. username_0: I ended up deleting the storage account and let it run for a couple hours now. This is what I'm seeing: ![image](https://user-images.githubusercontent.com/419283/126395943-970c6616-4b6a-4a3e-9f8e-371662bb763b.png) ... about an hour ago is when it starts to degrade. Some instances are created but never finish. Some instances are created and finish very quickly. 
The top record from the above screen shot has a completed status and I can see the history here: ![image](https://user-images.githubusercontent.com/419283/126396149-3895cc2f-322a-4aa2-a01c-d6913cb3f578.png) The record right below that has a pending status does not have any history: ![image](https://user-images.githubusercontent.com/419283/126396231-4c596b35-f0bb-419d-8f10-b3809a67396d.png) From there, all instances are in a pending state ![image](https://user-images.githubusercontent.com/419283/126396416-d1f93f76-bba6-4c93-8fcc-654ab1a50360.png) There is not a single completed instance in the last hour: ![image](https://user-images.githubusercontent.com/419283/126396573-c72ba8b6-9980-4f04-9fc0-28d648237895.png) ... and none of these instances stuck in a pending state have any history. username_2: Thanks for the update, I'm taking a deep dive into this now. username_0: Oops, I realize now that instances in a pending state will not have history. I did another query that filters out completed and pending statuses and here is what I see: ![image](https://user-images.githubusercontent.com/419283/126397826-e6975531-4905-4953-9109-6f63d6319b57.png) ... right around the same time I noticed the degradation occurring (not sure if I should consider this as a degradation problem at this point), I see many instances stuck in a running state. Here's history for some of the instances that are stuck in the running state: ![image](https://user-images.githubusercontent.com/419283/126398040-cf518ae3-20a9-49cf-aae2-806ebcb8dcfe.png) ![image](https://user-images.githubusercontent.com/419283/126398088-2b14ef8b-a0fe-4b97-9236-a31ad44551f0.png) ![image](https://user-images.githubusercontent.com/419283/126398148-97eee300-5ab4-4525-b206-2eb65fbf7ab0.png) username_2: @username_0 So I dug into this and I am confused. 
From what I can tell, occasionally you have workers that, despite successfully renewing both the intent and the ownership lease, stop listening to the control queue (this should only happen when a lease is lost). Then, because such a worker still thinks it owns the partition, it keeps renewing the leases so no other worker can grab the partition. Our telemetry is a bit sparse on how this could happen, so I will have to closely examine the code to see how we got stuck in this edge case (once for each partition, no less). The weird thing is that these all happened at approximately the same time (between 20:10-20:12 UTC). There was a brief blip at 20:05 UTC, but it appears that was just a blip where partitions moved between workers (this generally takes 1-2 minutes to ensure that no two workers execute the same partition at the same time). I will definitely need to dive further into this, but in the meantime, have you tried seeing if v2.4.1 of the extension has these issues in your debug environment? username_0: I downgraded to 2.4.1. I'll let it run overnight and check on it again in the morning. I'll let you know what happens. username_0: Downgrading to 2.4.1 seems to have fixed the issue. username_1: Hmmm ... OK this has me nervous now since we are on 2.5.0. :/ username_2: This is the first report I have seen for this issue on v2.5.0, so I don't think this is a widespread issue. I did notice the app was using the app lease feature, which had some changes in our partition management flow in v2.5.0 that could be potentially related. I'll investigate those new code paths first. username_2: Ok, I finally understand what is going on here. 1. Durable Functions v2.5.0 uses v1.8.6 of DurableTask.AzureStorage, which has [a bug](https://github.com/Azure/durabletask/pull/568) that can slow down orchestration processing for the case where customers have lots of ExecutionStarted events in the same batch. 2. 
v1.8.8 of DurableTask.AzureStorage has a fix for this bug, but it takes a dependency on DurableTask.Core v2.5.5 that introduced a [separate null reference exception](https://github.com/Azure/durabletask/issues/554). This null reference bug encountered when an orchestration completes can [cause us to lose a partition](https://github.com/Azure/durabletask/issues/575) until app restart. What I need to do is release a version (v2.5.6) of DurableTask.Core with [the fix](https://github.com/Azure/durabletask/issues/560), and a version (v1.8.9) of DurableTask.AzureStorage that references that version of DurableTask.Core. v2.5.1 of the extension that will release in the next week or two will have a version of DurableTask.AzureStorage with _neither_ of these bugs. I hope to publish the DurableTask packages with the fix by the end of the day. Thank you so much for your patience with this! username_2: v1.8.9 should have both fixes and has been released. username_0: Sorry for the delay here. I had some other things come up that I needed to take care of. I pulled down 1.8.9 of DurableTask.AzureStorage and updated WebJobs.Extensions.DurableTask back to 2.5.0 but I'm experiencing the same issue as before. Here's the list of package references I'm using for the function app: ![image](https://user-images.githubusercontent.com/419283/127075744-18e74fe2-28a1-4bcc-906d-f279a0dcd7c5.png) username_3: I believe I'm facing a similar issue (reported in https://github.com/Azure/durabletask/issues/590) username_4: @username_0 (or @username_3), can you share an orchestration instance ID that got stuck so we can investigate further? username_3: I'm afraid that I'm running this in our own environment, and not in Azure Functions, so an instance ID from me won't help you.
aws/aws-sdk-js
938086447
Title: URL to view CloudWatch Log Group Question: username_0: Confirm by changing [ ] to [x] below: - [x] I've gone through [Developer Guide](https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/welcome.html) and [API reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/) - [x] I've checked [AWS Forums](https://forums.aws.amazon.com) and [StackOverflow](https://stackoverflow.com/questions/tagged/aws-sdk-js) for answers **Describe the question** I couldn't find an [API](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_Operations.html) which can give us a link to access the logs for a given log group. **Is there a way to get a URL for a given ARN of a CloudWatch Log Group?** As a workaround, I can form the URL by looking at the pattern for a given ARN. ARN: `arn:aws:logs:us-east-1:XXXXXXXXXXXXXX:log-group:dummy:*` URL: `https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logsV2:log-groups/log-group/dummy` Please suggest any workaround/API which can be helpful. Answers: username_1: Hey @username_0 I believe there is no API that'll return a URL with all the logs; the workaround you are using should work. To get the data without the URL I would use [`filterLogEvents`](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatchLogs.html#getLogEvents-property) and filter using a pattern. https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html Would mark this as a feature request and do some more research in the meantime.
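The workaround can be sketched as a small helper that derives the console URL from a log-group ARN. This is not an SDK API: the function name is made up, the account ID below is a dummy, and the URL pattern simply follows the example in the question (the real console additionally percent-encodes some characters in log-group names):

```javascript
// Hypothetical helper, not part of aws-sdk-js.
// ARN shape: arn:aws:logs:<region>:<account-id>:log-group:<name>:*
function logGroupConsoleUrl(arn) {
  const parts = arn.split(":");
  const region = parts[3]; // e.g. "us-east-1"
  const name = parts[6];   // e.g. "dummy"
  return `https://console.aws.amazon.com/cloudwatch/home?region=${region}#logsV2:log-groups/log-group/${encodeURIComponent(name)}`;
}

console.log(logGroupConsoleUrl("arn:aws:logs:us-east-1:123456789012:log-group:dummy:*"));
// https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logsV2:log-groups/log-group/dummy
```

Since the console URL scheme is not a documented contract, a helper like this may need updating if the console changes its routing.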
west-tandon/ReSearch
205437521
Title: Change script execution/installation procedure Question: username_0: Replace Makefile method of building and installing command-line utilities with the method described here: http://python-packaging.readthedocs.io/en/latest/command-line-scripts.html Answers: username_0: Possibly, still use Makefile: it would be better to use **venv** anyway. However, maybe the the mechanism linked above could be used somehow—would be nice. **To be discussed.**
thii/FontAwesome.swift
254738547
Title: This is not working on interface builder. I see a question mark. Any help? Question: username_0: I dragged a UILabel unto storyBoard. Then i changed the font to font awesome. Then i pasted the icon ->  It displays correctly on story board. But when i run the app, i am seeing a question mark. Answers: username_1: Probably the font was not loaded into memory before your storyboard was loaded. Do you have a sample project which reproduces this? username_2: For me this happens whenever the alpha of a previously working image changes or is set initally. See gif (buttons that turn to ? were clicked on which changes alpha by default) ![2017-09-21 12 43 49](https://user-images.githubusercontent.com/1874352/30694223-feaa793a-9eca-11e7-841e-15c9c8a8e181.gif) username_2: `let barButtonItem = UIBarButtonItem.init(title: "", style: .plain, target: target, action: action) barButtonItem.setTitleTextAttributes([NSAttributedStringKey.font: UIFont.fontAwesome(ofSize: iconSize)], for: .normal) barButtonItem.title = " \(String.fontAwesomeIcon(name: name)) " barButtonItem.tintColor = color return barButtonItem` username_3: I have the same problem. I ended up just making them images :( I am using 1.2.0 username_2: @username_1 Is this a quick fix? username_1: I'll try to have a look at it sometime this weekend. username_4: Haven't tested it, but im pretty sure you have to set title text attributes for selected (and other) states too. When selecting / pressing the button it changes to selected state and the title text attributes will be unset for that state. username_0: yea.. maybe for a UIButton. But it doesnt work for a UILabel. username_5: @username_2 @username_4 This is a change in iOS 11 where setting textAttributes for themes or for controls requires setting the text attributes for both .normal and .highlighted. username_6: I had the same issue, shows in storyboard and ? in app. 
I walked the FontFamily and noticed the font wasn't loading because I had only used it in the storyboard. I added a call to `UIFont.fontAwesome(ofSize: 16)`, which installs the font via the private FontLoad class. Worked after that. username_7: Could someone confirm if in fact #170 provides a solution to this issue using `UIFont.loadFontAwesome()`? username_2: This has mostly been fixed by applying attributes for multiple UIControlState enums, as mentioned by @username_5, but the issue remains when a control is set to disabled (even though .disabled is used when setting the attributes).
```swift
barButtonItem.setTitleTextAttributes([NSAttributedStringKey.font: UIFont.fontAwesome(ofSize: iconSize)], for: .normal)
barButtonItem.setTitleTextAttributes([NSAttributedStringKey.font: UIFont.fontAwesome(ofSize: iconSize)], for: [.selected, .highlighted, .disabled, .reserved])
```
username_2: @username_1 Any progress on this issue? To summarise my recent findings:
- Holding down on a bar button doesn't cause the ? mark effect
- Setting a bar button to disabled does
- This occurs despite setting all of the UIControlStates as in my last comment
- It seems that the UIControlStates addition fixes holding down, but not disabling a button

Many thanks for your hard work! username_7: @username_2 Are you using `FontAwesomeBarButtonItem` as a custom `UIBarItem`? If not, you should try that. I managed to reproduce what you're describing by not using `FontAwesomeBarButtonItem` and fixed it by setting the custom `UIBarItem`. username_2: Could you provide sample code? I get the same outcome with the same setup of FontAwesomeBarButtonItem, and with just setting the title to cssCode I always get ? username_5: @username_2 Can you setTitleTextAttributes for each of the states separately, without using the option set? That's how I've done it.
When you tap on the barButtonItem, as shown in your video, the state of the barButtonItem changes to highlighted or selected, and apparently it doesn't know what to do in those cases. username_2: @username_5 That worked!
```swift
let titleTextAttributes = [NSAttributedStringKey.font: UIFont.fontAwesome(ofSize: iconSize)]
barButtonItem.setTitleTextAttributes(titleTextAttributes, for: .normal)
barButtonItem.setTitleTextAttributes(titleTextAttributes, for: .selected)
barButtonItem.setTitleTextAttributes(titleTextAttributes, for: .highlighted)
barButtonItem.setTitleTextAttributes(titleTextAttributes, for: .disabled)
```
It seems that setting UIControlStates as an array of options has issues, which is a shame because it means less DRY code. Would this be an issue with Swift or this library? Many thanks @username_5! username_5: I think it's a UIKit issue and I think it's new in iOS 11. Never filed a bug though. Before that you could set the theme for .normal and it applied to all the states. I think it worked that way since the beginning. username_8: This also displays a question mark instead of an icon when I assign this to the text of a UILabel: `let phoneIconString = String.fontAwesomeIcon(name: .phone)` Status: Issue closed username_0: Closing this issue. Apple released glyphs; I'll use those instead of a lib.
hathach/tinyusb
487901310
Title: LPC54114 has issue with cdc_msc_hid example Question: username_0: **Describe the bug** MSC is working, but CDC and HID don't work well. There seems to be an issue with control transfer when the host sends HID SET_IDLE and HID GET_REPORT_DESCRIPTOR. **Set up (please complete the following information):** - OS: Ubuntu 18.04 - Board: LPCXpresso54114 - Firmware Code: examples/device/cdc_msc_hid
FreeProving/free-compiler
624174377
Title: Add Agda as a dependency Question: username_0: The Agda backend will use the AST and pretty printer from the [agda compiler](https://github.com/agda/agda) [package](https://hackage.haskell.org/package/Agda). Add it as a new dependency to the compiler's `.cabal` file. Create a new branch called `dev-agda` and add it to the CI pipeline configuration. Use it to test the effects of adding Agda on the CI pipeline's run time and cache size. Furthermore, use this branch as the `development` branch for the future Agda backend issues.
kalabox/kalabox
110092498
Title: If no session file available "kbox start" fails Question: username_0: Received this error on running "kbox start": ``` Cannot read property 'name' of undefined TypeError: Cannot read property 'name' of undefined at /Users/alec/kalabox/apps/cvt-working/plugins/kalabox-plugin-pantheon/lib/env.js:254:44 at /usr/local/bin/lib/core/events.js.jx:147:11 at /usr/local/bin/lib/core/events.js.jx:146:24 at /usr/local/bin/lib/core/events.js.jx:130:23 at process._tickCallback (node.js:812:13) ``` Doing a little debugging, it looks like kalabox-plugin-pantheon fails to retrieve a session file and can't retrieve the "name" attribute to assign to the git info. I'm guessing this was because a session file wasn't available in ~/.kalabox/terminus/session and kbox start doesn't re-auth (unlike kbox pull, which appears to). Adding a reAuthSession() to getSession() in client.js seems necessary, but I think that'd necessitate promise-ifying getSession(), which is called numerous times, so wanted to get professional opinions from @username_1 and @bcauldwell before leaping into that and making a hot mess... **To Replicate** - Remove session file from ~/.kalabox/terminus/session - Try "kbox start" Answers: username_1: Nice. solid bug report. username_1: @username_0 when you first got this did your session file no longer exist? username_1: This will be fixed in 0.10.3. Status: Issue closed
IPS-LMU/emuR
115526302
Title: _emuDB suffix for the database directory Question: username_0: I think it would be a good idea to allow the database root dir to have the extension `_emuDB`, so that the user automatically sees that this folder is an emuDB. This would be optional, but we should then gradually enforce its usage. Just to be clear: `ae` and `ae_emuDB` would both be permitted names for the `ae` database root directory. Status: Issue closed Answers: username_0: done in emuDBhandle refactor
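The proposed rule (accept both `ae` and `ae_emuDB` as names for the `ae` database root) amounts to stripping an optional suffix before comparing names. A minimal shell sketch of that check, purely illustrative since emuR itself is written in R:

```shell
# Resolve a directory name to its database name: a folder counts as the
# "ae" database root whether or not it carries the optional _emuDB suffix.
db_name() {
  dir="$1"
  # ${dir%_emuDB} removes the suffix if present, otherwise leaves the name alone.
  printf '%s\n' "${dir%_emuDB}"
}

db_name ae          # -> ae
db_name ae_emuDB    # -> ae
```

Both spellings resolve to the same database name, which is exactly the behaviour the issue asks to permit.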
pnpm/pnpm
396027020
Title: PNPM update is confusing Question: username_0: ### pnpm version: Not sure (see below) ### Code to reproduce the issue: n/a ### Expected behavior: If I have the latest version, I shouldn't be seeing a message prompting me to update. ### Actual behavior: When installing a package, pnpm exits with a message saying: `Update available! 2.18.2 → 2.23.1` However, `pnpm -v` says `2.23.1`. `which pnpm` says `/usr/local/bin/pnpm`, as I installed it using the recommended method of `npm install -g pnpm`. ### Additional information: - `node -v` prints: `v11.6.0` - Windows, OS X, or Linux?: macOS Answers: username_0: I figured out part of the reason behind this, and I think it has something to do with Homebrew. ``` $ which pnpm /usr/local/bin/pnpm $ pnpm i -g pnpm Nothing to stop. No server is running for the store at /Users/username_0/.pnpm-store/2 Packages: +1 -1 +- Downloading registry.npmjs.org/pnpm/4.3.0: 5.57 MB/5.57 MB, done Resolving: total 1, reused 0, downloaded 1, done /usr/local/Cellar/node/12.3.1/pnpm-global/2: - pnpm 3.8.1 + pnpm 4.3.0 $ pnpm -v 3.8.1 $ npm i -g pnpm /usr/local/bin/pnpx -> /usr/local/lib/node_modules/pnpm/bin/pnpx.js /usr/local/bin/pnpm -> /usr/local/lib/node_modules/pnpm/bin/pnpm.js + [email protected] updated 1 package in 3.426s $ pnpm -v 4.3.0 $ ``` So basically: * npm and pnpm are not installing pnpm to the same place. * `pnpm` points to the instance installed by `npm`. * PNPM's recommendation to update itself using `pnpm i -g pnpm` ends up not working for me. Status: Issue closed
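The confusion above boils down to two copies of `pnpm` shadowing each other on `PATH`. The effect can be reproduced in isolation with two stub executables (the directory names and versions below are illustrative, nothing here is pnpm-specific):

```shell
# Two hypothetical install locations, each holding its own "pnpm" stub
# that just reports a version (stand-ins for the npm and Homebrew copies).
mkdir -p npm_bin brew_bin
printf '#!/bin/sh\necho 4.3.0\n' > npm_bin/pnpm
printf '#!/bin/sh\necho 3.8.1\n' > brew_bin/pnpm
chmod +x npm_bin/pnpm brew_bin/pnpm

# Whichever directory comes first on PATH wins, no matter which copy
# was upgraded last -- exactly the mismatch reported in this issue.
PATH="$PWD/brew_bin:$PWD/npm_bin:$PATH"
resolved="$(command -v pnpm)"
version="$(pnpm)"
echo "$resolved reports $version"
```

On most systems `which -a pnpm` lists every copy on `PATH`, which makes a stale install easy to spot and remove.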
widdix/aws-cf-templates
670319174
Title: IAM::Role Permission Boundary Question: username_0: TemplateID: `*/*` Region: `us-east-1` Our client's security policy requires that every Role they create be subject to a [permission boundary](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html). I'm hoping you'll accept a PR passing a `PermissionBoundary` into any module that creates an IAM Role (FWIW both here and `cfn-modules`). Unfortunately, they were reminded of this requirement while deploying their production environment, so I need to make it available ASAP somewhere. I can certainly maintain a fork while you consider the proposal, but I wanted to get the ball rolling in an effort to minimize/avoid changing dependencies in our build pipeline. Answers: username_1: None of our IAM roles should grant IAM access that would make it possible to create/modify policies or attach them? I don't see how permission boundaries help here? What's the reason behind this requirement? username_2: I've never used a permission boundary before so I'm not terribly qualified to speculate, but I assume it's this: [How can I use permissions boundaries to limit the scope of IAM users and roles and prevent privilege escalation?](https://aws.amazon.com/premiumsupport/knowledge-center/iam-permission-boundaries/) I know the role *does not* grant the policy, but I assume they cascade the requirement to ensure no role is created which could logically do so. username_2: I received confirmation that the permission boundary must be attached to every Role that gets created. username_1: Work is in progress: see #465 Status: Issue closed username_1: merged
jlippold/tweakCompatible
435473671
Title: `stunnel 5.50 (HTTPS Proxy)` working on iOS 11.3.1 Question: username_0: ``` { "packageId": "com.thireus.stunnel", "action": "working", "userInfo": { "arch32": false, "packageId": "com.thireus.stunnel", "deviceId": "iPhone8,2", "url": "http://cydia.saurik.com/package/com.thireus.stunnel/", "iOSVersion": "11.3.1", "packageVersionIndexed": false, "packageName": "stunnel 5.50 (HTTPS Proxy)", "category": "Networking", "repository": "repo.thireus.com", "name": "stunnel 5.50 (HTTPS Proxy)", "installed": "4.5.0", "packageIndexed": false, "packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.", "id": "com.thireus.stunnel", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.5", "shortDescription": "The stunnel program is designed to work as an SSL/TLS encryption wrapper between remote client and local (inetd-startable) or remote server.", "latest": "4.5.0", "author": "Thireus", "packageStatus": "Unknown" }, "base64": "<KEY> "chosenStatus": "working", "notes": "" } ```<issue_closed> Status: Issue closed
vuejs/vuepress
556847463
Title: Use different indexes for nested numbered list Question: username_0: <!-- Please don't delete this template or we'll close your issue --> <!-- Before creating an issue please make sure you are using the latest version of VuePress. --> ## Feature request <!-- Please ask questions on StackOverflow. --> <!-- https://stackoverflow.com/questions/ask?tags=vuepress --> <!-- Issues which contain questions or support requests will be closed. --> #### What problem does this feature solve? Currently, VuePress does not number nested lists with other symbols. This can lead to a bit of confusion when creating step-by-step instructions. Consider the following block of Markdown:
~~~
1. A numbered list
   1. A nested numbered list
   2. Which is numbered
2. Which is numbered
~~~
Plain numbers are used for both the outer and the nested list, and VuePress renders this correctly, restarting the numbering on the indented block. Using GitHub rendering here:
1. A numbered list
   1. A nested numbered list
   2. Which is numbered
2. Which is numbered

We can see that they use roman numbering for the inner list. I have not been able to do this using the built-in renderer in VuePress. Currently testing with the default template. #### Proposed changes A numbered list indented under another list should be notated using other symbols, or by appending the outer list's number as well (1.1, 1.2, etc.). The inner list can use letters (1.a, 1.b), roman numerals (i, ii, iii, iv), or something else. #### Are you willing to work on this yourself? I am not proficient in the inner workings of this package and therefore I might struggle with the implementation. Answers: username_1: Thanks for filing the issue @username_0! From just preliminary research, it sounds like the best solution to solve this would be for us to implement a markdown extension that detects alpha-ordered lists with roman numerals as well as letters.
For now though, I wanted to at least suggest that a temporary workaround for you (while we figure out our strategy as a team) is to use CSS to target sub-list items:
```css
/* Not my preferred way of writing CSS, but for a simple fix */
li > ol > li {
  list-style: lower-roman;
}
```
We'll take your feedback into consideration as we plan out our roadmap. Thank you, and let us know if you have any questions in the meantime! username_0: Thanks for the reply! I can confirm that the workaround is working. I will use that and look forward to the feature update.
MyIntervals/emogrifier
654847697
Title: Allow PHP 8 installs Question: username_0: … a bit like this (but with GitHub Actions): https://github.com/tijsverkoyen/CssToInlineStyles/pull/198/files Answers: username_1: Is the PHP 8 alpha available via GitHub Actions now? Yes please, let's be on the ball about this and future PHP versions, so we don't end up having to make patch releases to support a new PHP version :) username_0: Yes, the setup PHP action [provides the PHP 8 nightly](https://github.com/marketplace/actions/setup-php-action#tada-php-support). Status: Issue closed
mjansson/rpmalloc
1108098543
Title: Do custom memory_map/memory_unmap callbacks need to be thread-safe? Question: username_0: I can't find anything on this in the README. I guess it is necessary? Answers: username_1: Yes, the functions can be called simultaneously from multiple threads. I'll add a note about it in the docs. username_1: Done in https://github.com/username_1/rpmalloc/commit/00070f7a3070306cb315934667f57c9569f1fd47 Status: Issue closed
kiaev/Project1
994766055
Title: Code style in tests Question: username_0: https://github.com/username_1/Project1/blob/1a4e44bc9b38b74801f1680593b72700683dbe75/src/test/java/TestUnit.java#L7-L16 These variables are better declared inside the test methods, because when you run a batch of tests and some of them fail, you want to open the failing method and see right there what data comes in and what should be returned, instead of scrolling to the top of the page to look up the value of each variable. Test methods should rather be treated as separate entities with their own variables. https://github.com/username_1/Project1/blob/1a4e44bc9b38b74801f1680593b72700683dbe75/src/test/java/TestUnit.java#L31-L33 There is no need to print to the console after each test: in IDEA you will see anyway which methods passed, which failed, and with what data. Try deliberately breaking some method to see how IDEA displays failing methods. Answers: username_1: According to issue "Code style in tests #8": removed the console output in class ReverserTest and moved the variables into the methods. Status: Issue closed
riseupnet/riseup_help
132020144
Title: Usage Agreement for Mail Accounts page pertinence Question: username_0: Hi! While browsing through the git repo looking for pages to translate, I found the [Usage Agreement for Mail Accounts](https://help.riseup.net/en/email/usage-agreement) page. It is not accessible by browsing menus on h.r.n and seems to contain quite redundant information. If it is to be kept, you should add a nav link to it somewhere (and update the information on it). If not, it should be deleted. Answers: username_1: The usage agreement is linked on the pages [email](https://help.riseup.net/en/email#frequently-asked-questions-faq) and [creating lists](https://help.riseup.net/en/lists/list-admin/configuration/creating-lists/#who-can-create-a-list). To be honest it is not very prominently linked. I could add a nav_link, but it may be better to keep the navigation simple. What do you (and others) think? username_1: Meanwhile I vote for closing this in favor of more urgent issues Status: Issue closed username_0: Not relevant anymore, closing!
UGS-GIO/LakeBonnevilleSM
795397051
Title: Filling the Lake Question: username_0: Suggested change to remove "but" from the beginning of the sentence: Volcanic events changed the land surface, which redirected the river southward, at first pooling by Grace, Idaho, then later eroding out the Oneida Narrows, and flowing south through Cache Valley into the young Lake Bonneville. Answers: username_1: Done Status: Issue closed
JetBrains/rider-theme-pack
955395906
Title: Icons from the Rider IDE are not included Question: username_0: The Rider theme looks pretty nice. However, the icons are not the same as in the Rider IDE. Will they be included in future versions? For example, the directory icon in PyCharm: ![image](https://user-images.githubusercontent.com/16919236/127421566-3e1bbde8-0d4e-4496-bbef-0e93ce29bb5d.png) and the directory icon in Rider: ![image](https://user-images.githubusercontent.com/16919236/127421697-bc10374e-17d2-40df-9a5e-b2d3688ad1c2.png)
novucs/factions-top
345481389
Title: Error with EpicSpawners 5.2.0.6 Legacy 1.12.jar Question: username_0: INFO [FactionsTop] Enabling FactionsTop v1.2.0 28.07 22:49:12 [Server] ERROR Error occurred while enabling FactionsTop v1.2.0 (Is it up to date?) 28.07 22:49:12 [Server] INFO java.lang.NoClassDefFoundError: com/songoda/epicspawners/EpicSpawners 28.07 22:49:12 [Server] INFO at net.novucs.ftop.hook.EpicSpawnersHook.initialize(EpicSpawnersHook.java:30) ~[FactionsTop%20(4).jar:?] 28.07 22:49:12 [Server] INFO at net.novucs.ftop.FactionsTopPlugin.loadSpawnerStackerHook(FactionsTopPlugin.java:285) ~[FactionsTop%20(4).jar:?] 28.07 22:49:12 [Server] INFO at net.novucs.ftop.FactionsTopPlugin.onEnable(FactionsTopPlugin.java:130) ~[FactionsTop%20(4).jar:?] 28.07 22:49:12 [Server] INFO at org.bukkit.plugin.java.JavaPlugin.setEnabled(JavaPlugin.java:321) ~[flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at org.bukkit.plugin.java.JavaPluginLoader.enablePlugin(JavaPluginLoader.java:332) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at org.bukkit.plugin.SimplePluginManager.enablePlugin(SimplePluginManager.java:404) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at org.bukkit.craftbukkit.v1_8_R3.CraftServer.loadPlugin(CraftServer.java:313) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at org.bukkit.craftbukkit.v1_8_R3.CraftServer.enablePlugins(CraftServer.java:272) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at net.minecraft.server.v1_8_R3.MinecraftServer.s(MinecraftServer.java:408) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at net.minecraft.server.v1_8_R3.MinecraftServer.k(MinecraftServer.java:372) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at net.minecraft.server.v1_8_R3.MinecraftServer.a(MinecraftServer.java:327) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at net.minecraft.server.v1_8_R3.DedicatedServer.init(DedicatedServer.java:267) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] 
INFO at net.minecraft.server.v1_8_R3.MinecraftServer.run(MinecraftServer.java:563) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at java.lang.Thread.run(Unknown Source) [?:1.8.0_161] 28.07 22:49:12 [Server] INFO Caused by: java.lang.ClassNotFoundException: com.songoda.epicspawners.EpicSpawners 28.07 22:49:12 [Server] INFO at java.net.URLClassLoader.findClass(Unknown Source) ~[?:1.8.0_161] 28.07 22:49:12 [Server] INFO at org.bukkit.plugin.java.PluginClassLoader.findClass(PluginClassLoader.java:102) ~[flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at org.bukkit.plugin.java.PluginClassLoader.findClass(PluginClassLoader.java:87) ~[flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at java.lang.ClassLoader.loadClass(Unknown Source) ~[?:1.8.0_161] 28.07 22:49:12 [Server] INFO at java.lang.ClassLoader.loadClass(Unknown Source) ~[?:1.8.0_161] 28.07 22:49:12 [Server] INFO ... 14 more 28.07 22:49:12 [Server] INFO [FactionsTop] Disabling FactionsTop v1.2.0 28.07 22:49:12 [Server] INFO [FactionsTop] Preparing shutdown... 28.07 22:49:12 [Server] INFO [FactionsTop] Shutting down chunk worth task... 28.07 22:49:12 [Server] INFO [FactionsTop] Saving everything to database... 28.07 22:49:12 [Server] INFO [FactionsTop] Terminating plugin services... 28.07 22:49:12 [Server] ERROR Error occurred while disabling FactionsTop v1.2.0 (Is it up to date?) 28.07 22:49:12 [Server] INFO java.lang.NullPointerException 28.07 22:49:12 [Server] INFO at net.novucs.ftop.FactionsTopPlugin.onDisable(FactionsTopPlugin.java:177) ~[FactionsTop%20(4).jar:?] 
28.07 22:49:12 [Server] INFO at org.bukkit.plugin.java.JavaPlugin.setEnabled(JavaPlugin.java:323) ~[flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at org.bukkit.plugin.java.JavaPluginLoader.disablePlugin(JavaPluginLoader.java:360) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at org.bukkit.plugin.java.JavaPluginLoader.enablePlugin(JavaPluginLoader.java:336) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at org.bukkit.plugin.SimplePluginManager.enablePlugin(SimplePluginManager.java:404) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at org.bukkit.craftbukkit.v1_8_R3.CraftServer.loadPlugin(CraftServer.java:313) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at org.bukkit.craftbukkit.v1_8_R3.CraftServer.enablePlugins(CraftServer.java:272) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at net.minecraft.server.v1_8_R3.MinecraftServer.s(MinecraftServer.java:408) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at net.minecraft.server.v1_8_R3.MinecraftServer.k(MinecraftServer.java:372) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at net.minecraft.server.v1_8_R3.MinecraftServer.a(MinecraftServer.java:327) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at net.minecraft.server.v1_8_R3.DedicatedServer.init(DedicatedServer.java:267) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at net.minecraft.server.v1_8_R3.MinecraftServer.run(MinecraftServer.java:563) [flux.jar:FluxSpigot-04511af6] 28.07 22:49:12 [Server] INFO at java.lang.Thread.run(Unknown Source) [?:1.8.0_161] Answers: username_1: Duplicate of #95 Status: Issue closed
benjaminbear/docker-ddns-server
961925205
Title: Mounting volumes does not work Question: username_0: Sorry for the question, but I can't get the image to work. It seems that mounting the volumes does not work. With both docker-compose and docker run I get the error:
```
ERROR: for ddns Cannot start service ddns: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/home/docker/ddns/database" to rootfs at "/root/dyndns/database" caused: mount through procfd: open o_path procfd: open /var/lib/docker/overlay2/ef39d03481702de8c0a78268fdbc289a44817d9a0c1bb2ec4f50c381a3b382ee/merged/root/dyndns/database: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
```
OS: Alpine Linux 3.14.1 Docker version: 20.10.7 Answers: username_1: Could you add the whole "docker run" command and a "ls -la" on the mounting point "/home/docker/ddns/database" for further investigations? username_0: Sure, here's my docker run command:
```
docker run -it -d \
-p 8080:8080 \
-p 5353:53 \
-p 5353:53/udp \
-v /home/docker/ddns/bind-data:/var/cache/bind \
-v /home/docker/ddns/database:/root/dyndns/database \
-e DDNS_ADMIN_LOGIN=admin:123455546. \
-e DDNS_DOMAINS=dyndns.example.com \
-e DDNS_PARENT_NS=ns.example.com \
-e DDNS_DEFAULT_TTL=3600 \
--name=dyndns \
bbaerthlein/docker-ddns-server:latest
```
I have of course adjusted the domain and parent NS; the data here serves as an example. Here's my directory:
```
docker01:/home/docker/ddns# ls -la
drwxr-xr-x 4 root root 4096 .
drwxr-xr-x 10 root root 4096 ..
drwxr-xr-x 2 root root 4096 bind-data
drwxr-xr-x 2 root root 4096 database
```
username_2: Same issue here. For me it looks like the path inside of the container doesn't exist.
![image](https://user-images.githubusercontent.com/44546543/129480936-c0a4d547-aa45-4dc0-8657-1e94f0fd584c.png) There is a file called dyndns in the root directory but not a folder. However, there is a database folder with a ddns.db file in it. Could the database folder be the actual folder to bind to the volume? I'm gonna try around a little bit myself and will post again if I find something useful. username_1: Looks like the database is created at the wrong place, have to test this out. A temporary fix could be using /root/database as the mounting point instead Status: Issue closed username_1: merged
cdhart/cdhart-html
1025913474
Title: Ziggynahh May 07 2021 at 12:40PM Question: username_0: <blockquote class="twitter-tweet"> <p dir="ltr">3 years ago yall was tweeting this... add this to your photo gallery https://t.co/Uz9vIWwUKp</p> &mdash; naj (@Ziggynahh) <a href="https://twitter.com/Ziggynahh/status/1390723160828489733">May 7, 2021</a> </blockquote> <br> <br> May 07, 2021 at 12:40PM<br> via Twitter
i18next/i18next
191299121
Title: escape->escapeValue in docs Question: username_0: I was scratching my head quite a bit at http://i18next.com/translate/interpolation/ until I realized you probably mean `escapeValue: false` rather than `escape: false`, right? I'm not sure which repository this is held in so I'm just creating an issue. Feel free to point me to the repo and I'll give you a PR. Status: Issue closed Answers: username_1: argh....missed that. absolutely right...changed and published update. sorry for that...
urbanairship/ios-library
486875254
Title: ITMS-90809: Deprecated API Usage - UIWebView Question: username_0: # Preliminary Info ### What Airship dependencies are you using? v10.0.3 via Cocoapods ### What are the versions of any relevant development tools you are using? Xcode 10.3 # Report Just submitted my app to TestFlight and got an email with the error: ITMS-90809: Deprecated API Usage - Apple will stop accepting submissions of apps that use UIWebView APIs. See https://developer.apple.com/documentation/uikit/uiwebview for more information. I was wondering whether all the code related to UIWebView will have to be removed, keeping only WKWebView. It was just a warning, and the build ended up being available on TestFlight, but I think this is something that will have to be addressed at some point. Thanks Answers: username_1: @username_0 : SDK 12.0.0-beta.2 is available with all usage of UIWebView removed. Please take a look at it and let us know if it meets your needs. Thanks. Status: Issue closed username_2: I have tried updating UrbanAirship to 12.0.0 (pod 'UrbanAirship-iOS-SDK', '~> 12.0.0'). I can see that all the UIWebViews are removed, but I am not able to compile my project. I am getting the error **No such module 'AirshipKit'**. I have tried deleting the derived data and changed the framework search paths, but with no help. I still can't run my app. Any help would be appreciated. username_3: @username_1 any way to get the UIWebView removal into the 11.1.x series? I need to support iOS 10 but also remove UIWebView references to make Apple happy.
MeGysssTaa/ReflexIssueTracker
257052231
Title: Console Error 1.7.x Question: username_0: When I run my server, it shows this error:
```
[Craft Scheduler Thread - 4/INFO]: [Reflex] (i) | Reflex: Loading Reflex 10.1-2U (J1.8.0_144S1.7.X)
[19:24:11] [Craft Scheduler Thread - 4/WARN]: Exception in thread "Craft Scheduler Thread - 4"
[19:24:11] [Craft Scheduler Thread - 4/WARN]: org.apache.commons.lang.UnhandledException: Plugin Reflex v10.1-2U generated an exception while executing task 23
at org.bukkit.craftbukkit.v1_7_R4.scheduler.CraftAsyncTask.run(CraftAsyncTask.java:56)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.util.HashMap.resize(HashMap.java:703)
at java.util.HashMap.putVal(HashMap.java:662)
at java.util.HashMap.put(HashMap.java:611)
at java.util.HashSet.add(HashSet.java:219)
at org.bukkit.plugin.java.JavaPluginLoader.createRegisteredListeners(JavaPluginLoader.java:242)
at org.bukkit.plugin.SimplePluginManager.registerEvents(SimplePluginManager.java:539)
at rip.reflex.Reflex.valueOf(ge:366)
at rip.reflex.Reflex$$Lambda$19/926062703.run(Unknown Source)
at org.bukkit.craftbukkit.v1_7_R4.scheduler.CraftTask.run(CraftTask.java:71)
at org.bukkit.craftbukkit.v1_7_R4.scheduler.CraftAsyncTask.run(CraftAsyncTask.java:53)
... 3 more
```
I can't run /reflex bug or any other Reflex command.
arq5x/lumpy-sv
301552980
Title: Insert size calculations differ between LUMPY Express and recommendations for calling LUMPY directly Question: username_0:
This takes reads starting with the 1st up to the 1,000,000th (`gawk '{ if (NR<=1000000) print > "/dev/stdout" ; else print > "/dev/null" }'`) and runs the distribution based on up to 1,000,000 (`-N 1000000`) of them that meet the criteria for inclusion in [pairend_distro.py](https://github.com/arq5x/lumpy-sv/blob/213a4171131cc12b20b8fb9f0428886aaead6344/scripts/pairend_distro.py).

Each of these solutions seems to indicate different goals in generating the distribution. The LUMPY recommendations make an explicit effort to skip the first 100,000 reads in calculating the distribution, while the LUMPY Express implementation uses these 100,000. The LUMPY Express implementation suggests that up to 1,000,000 reads are necessary to build the distribution, which is 100x more reads than used in the LUMPY recommendations. Is either or both of these goals important in achieving a representative distribution? Neither distribution will be well distributed throughout the genome, since the bams have already been sorted at this point. In my own quick testing, the distributions were more similar when using 1,000,000 reads than 10,000.

It's also worth noting that (at least in my testing) `gawk '{ if (NR<=1000000) print > "/dev/stdout" ; else print > "/dev/null" }'` was much slower than `head -n 1000000`, which achieves the same goal of capturing only the first 1,000,000 lines.
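The `gawk` truncation and `head -n` select exactly the same records; only how much of the input they consume differs. A small sanity check of that equivalence, using POSIX `awk` in place of `gawk` and tiny counts in place of 1,000,000 for illustration:

```shell
# Build a fake 100-line read stream, then take the "first 10 records" both ways.
seq 1 100 > stream.txt

# awk keeps reading the whole stream even after the cutoff
# (the original command just routes the remainder to /dev/null).
awk 'NR <= 10' stream.txt > by_awk.txt

# head stops reading as soon as the cutoff is reached.
head -n 10 stream.txt > by_head.txt

# Both selections are byte-for-byte identical.
cmp -s by_awk.txt by_head.txt && echo "identical"
```

Because `head` exits at the cutoff, an upstream producer in a pipeline is terminated early via SIGPIPE rather than streaming the whole file, which plausibly accounts for the speed gap observed above.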
cgeroux/DHSI-cloud-course
234951898
Title: Make note to keep track of IP address of VM Question: username_0: During the CLI demo to create an image of a VM, we terminate the VM and create an image of the volume. When we then need to re-create the VM, because of SSL certs and redirects from HTTP to HTTPS, we need to make sure the newly created VM keeps the same IP for those to continue working correctly.
crocodic-studio/crudbooster
237940578
Title: Transferring the project to Keenetic Entware error 500 Question: username_0: I created the project on localhost. Now I want to transfer it to the server on Keenetic Entware. The server is running nginx, php7, mariadb. I transfer the project to a folder, and when I try to go to the page I get: `http://192.168.2.1:88/insta/public/ 500 (Internal Server Error)` Help me run the project. Nginx settings:
```nginx
user nobody;
worker_processes 1;
#error_log /opt/var/log/nginx/error.log;
#error_log /opt/var/log/nginx/error.log notice;
#error_log /opt/var/log/nginx/error.log info;
pid /opt/var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    index index.php index.html index.htm;
    sendfile on;
    keepalive_timeout 65;
    gzip on;
    server {
        listen 88;
        root /opt/share/www;
        location ~ [^/]\.php(/|$) {
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            if (!-f $document_root$fastcgi_script_name) {
                return 404;
            }
            root /opt/share/www;
            fastcgi_pass unix:/opt/var/run/php7-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }
    }
}
```
Answers: username_1: Looks like nginx mis-configuration. Turn on logging and see `/opt/var/log/nginx/error.log` to find out what's wrong. Status: Issue closed
bcgov/wps
564443039
Title: Documentation Displayed Question: username_0: **As a** *User* **I want** *to see the documentation for the 90% weather calculation* **So That** *I can explain how the values were generated* and **So That** *my work can be validated*

**Additional Context**
- Provide a clear output of input parameters used
- Provide a clear description of how the 90% value is calculated

**Acceptance Criteria**
- [ ] Given (Context), When (action carried out), Then (expected outcome)
- [ ] Given output documentation is reviewed, When entered back into the calculator, Then the 90% weather values can be re-created