Columns: repo_name (string, 4–136 chars), issue_id (string, 5–10 chars), text (string, 37–4.84M chars)
go-delve/delve
450587516
Title: go get dlv fails Question: username_0: Please answer the following before submitting your issue: Note: Please include any substantial examples (debug session output, stacktraces, etc.) as linked gists. 1. What version of Delve are you using? (master version) 2. What version of Go are you using? (go version go1.10 linux/ppc64le) 3. What operating system and processor architecture are you using? (ppc64le) 4. What did you do? (build dlv) 5. What did you expect to see? 6. What did you see instead? ``` go get -u github.com/go-delve/delve/cmd/dlv # github.com/go-delve/delve/pkg/proc src/github.com/go-delve/delve/pkg/proc/disasm.go:12:14: undefined: archInst ``` Answers: username_1: @username_0 Maybe this? The same happens on linux i386. https://github.com/go-delve/delve/issues/1323 username_0: I installed it on Ubuntu 16.04 ppc64le. username_2: That architecture is not currently supported. username_3: Can anyone suggest how to add Delve support for PowerPC (ppc64le) on Ubuntu Linux?
bozimmerman/CoffeeMud
523974297
Title: 5.9.8.1 (fresh install, not upgrade) seems to have issues creating new characters Question: username_0: I hadn't upgraded for a while, so I decided to move my working install aside and start fresh. The mud built (though with complaints about the Mozilla JS packages in the doc-building phase) and started up fine, but when I telnetted in to create my archon, I got the following: Name: username_0 'Z3ndrag0n' is not recognized. That name is also not available for new players. (And it didn't seem to matter what name I chose.) In the end, I hit "*" and it successfully created my initial character, but I'm wondering if there's a mistake here that most people are missing because they upgrade rather than install fresh? Answers: username_1: Z3ndrag0n is not a legal name, because it contains numbers. Since you didn't include any other names you chose, I will assume they all contained invalid characters, which would explain the problem. Status: Issue closed
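For illustration, the rule username_1 describes could be checked like this (a hypothetical Python sketch; CoffeeMud's actual validation lives in its Java codebase and may be stricter):

```python
def is_legal_name(name: str) -> bool:
    # letters only: digits and symbols are rejected
    return name.isalpha()

assert not is_legal_name("Z3ndrag0n")  # rejected: contains digits
assert is_legal_name("Zendragon")      # accepted: letters only
```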
wongjiahau/TTAP-Bug-Report
276917734
Title: Bug report #-507299196 Question: username_0: Object reference not set to an instance of an object. ==================== at Time_Table_Arranging_Program.Pages.Page_Login.<<Browser_OnLoadCompleted>g__ExtractData14_3>d.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.AsyncMethodBuilderCore.<>c.<ThrowAsync>b__6_0(Object state) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler) ==================== <HEAD><TITLE>myUTAR - The Universiti Tunku Abdul Rahman Web Portal</TITLE> <SCRIPT language=javascript> function MM_openBrWindow(theURL,winName,features) { window.open(theURL,winName,features); } function mypopup(url, sbar, resize, width, height, top, left){ tit='' reWin=window.open(url, tit, 'toolbar=no,location=no,directories=no,status=no,menubar=no,scrollbars=' + sbar + ',resizable=' + resize + ',width=' + width + ',height=' + height + ',top=' + top + ',left=' + left) } function checkPhone(evt){ evt = (evt) ? evt : window.event var charCode = (evt.which) ? evt.which : evt.keyCode if ((charCode > 46 && charCode < 58) || charCode==45 || charCode==13){ return true } else{ alert("You can only key in numeric number") return false } } function checkNumeric(evt){ evt = (evt) ? evt : window.event var charCode = (evt.which) ? evt.which : evt.keyCode if((charCode > 47 && charCode < 58) || charCode==13){ return true } else{ alert("You can only key in numeric number") return false } } function IsNumeric(strString) { var strValidChars = "0123456789.-/"; var strChar; var blnResult = true; if (strString.length == 0) return false; for (i = 0; i < strString.length && blnResult == true; i++) { strChar = strString.charAt(i); if (strValidChars.indexOf(strChar) == -1) { blnResult = false; } } return blnResult; } function logout(myPath,logoutURL){ //alert(myPath+logoutURL) [Truncated] <TD>KB520</TD> <TD></TD></TR> <TR align=center> <TD>236</TD> <TD>L</TD> <TD>3</TD> <TD align=right>90</TD> <TD>Thu</TD> <TD>12:00 PM - 01:00 PM</TD> <TD>1.0</TD> <TD>1-14</TD> <TD>KB520</TD> <TD></TD></TR></FORM></TBODY></TABLE><BR><BR><BR></DIV> <FORM method=get name=frmRefresh action=masterSchedule.jsp><INPUT type=hidden value=2 name=reqCPage> <INPUT type=hidden name=reqUnit> <INPUT type=hidden name=reqDay> <INPUT type=hidden value=Any name=reqFrom> <INPUT type=hidden value=Any name=reqTo> </FORM> <DIV><FONT id=notDisplay color=black>Page Loaded In 15 miliseconds </FONT></DIV><!-- End Content --></TD></TR></TBODY></TABLE></TD> <TD rowSpan=2 width=10><IMG src="https://unitreg.utar.edu.my/portal/courseRegStu/images/clear.gif" width=10></TD></TR></TBODY></TABLE></TD></TR><!--<script src="https://unitreg.utar.edu.my/portal/publicFunction.js"></script>--> <TR id=notDisplay> <TD class=footerFont vAlign=top> <HR align=center SIZE=1 width="99%" noShade> Copyright © 2017, <NAME>. All rights reserved. <BR>Info Optimized for Internet Explorer 5.0 and above. Best viewed with 1024 x 768 pixels.<BR>Terms of Usage </TD></TR></TBODY></TABLE></BODY>
OpenRoberta/openroberta-lab
455348866
Title: Add HTTP as a communication protocol to share messages between EV3s (and maybe other robots) Question: username_0: **Is your feature request related to a problem? Please describe.** I think that robotic projects where several robots and devices talk together are the most intriguing ones. But one problem is that the communication methods offered by the blocks are often limited to Bluetooth. This is okay if we want simple communication between a small number of robots, but it's not sufficient when the number increases or we add other kinds of devices (like a Raspberry). **Describe the solution you'd like** One of the solutions that came to my mind is to use HTTP to share messages using two blocks: - a block to send a message to a specific robot/device: this block will have a URL as an argument (hostname, port and path) and the data to send; - a block to receive a message from a robot/device: this block will have only the URL parameter and will start a server listening on the specified port for the specific path; To simplify things further it would be possible to choose a fixed port and remove it from the block arguments, leaving only the hostname (or IP) and the path. The URL parameter could be used to select the target robot and the path could be used as a subject/channel of the message. **Additional context** I'm part of an association that teaches children the basics of programming using robots, and sometimes we present projects at fairs. Our last one included 6 EV3s and a Raspberry; the configuration was the following: - 3 robots were the players of a game; - 3 robots were the controllers of the players; - the Raspberry was connected to a monitor to show the status of the game; All the connections were based on Bluetooth: controllers were paired to their respective robots and each robot was connected to the Raspberry. The programs were written using the official graphical language. This worked, but it wasn't very reliable: sometimes one of the robots disconnected, went out of range, or the received message was corrupted. If we were using HTTP over WiFi we could have set up an access point, connected all the robots to it, and then configured the blocks with the right URLs. Some advantages of this approach are: - all the robots would be able to talk to every other robot without adding new connections; - there shouldn't be corrupted messages; - hopefully a longer range; Here's a possible example of programs to send a message from robot A to B: Program of robot A (controller): ![immagine](https://user-images.githubusercontent.com/15476739/54311531-10940f00-45d5-11e9-9423-ac15b4bb40f5.png) Program of robot B (player): ![immagine](https://user-images.githubusercontent.com/15476739/54311769-987a1900-45d5-11e9-98c3-514e1469556b.png) **Proof of concept** I tried to implement a proof of concept and I have a working solution for the EV3 (using leJOS). The code of my example needs refactoring because currently it adds the HTTP functionality to the Bluetooth blocks (which I renamed to "CommunicationSend" and "CommunicationReceive"): the biggest disadvantage of this is that now there is code related to HTTP even for robots that don't have WiFi. Clearly, if OpenRoberta is interested in this feature it is necessary to design and define the details better. 
Anyway, you can try my proof of concept using the following branches: https://github.com/username_0/robertalab/tree/http-messages https://github.com/username_0/robertalab-ev3lejos-v1/tree/http-messages https://github.com/username_0/blockly/tree/http-messages Answers: username_1: @username_0 I like the idea. I don't like the URL parameters though. This would make it hard to use for kids. If the robots are connected to the internet anyway, we could use a group chat protocol that is run by the server. Then all messages would go through the server. This would then not require any URLs, just a group name. An alternative to using the server would be a peer-to-peer chat protocol (like https://en.wikipedia.org/wiki/Tox_(protocol)) if we can find libraries for the ev3. In any case we'd need a way to configure a chat-group name (in the robot config) and maybe an invite code (when using the server). Also, ideally we could configure the robot's name in the configuration.
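A rough Python sketch of the two proposed blocks (the proof of concept above targets EV3/leJOS; the hostname, port and path here are placeholders): the receive block starts a server listening on a port, and the send block POSTs a message to a specific robot's URL, with the path acting as the subject/channel.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

messages = []  # messages received so far, as (path, payload) pairs

class ReceiveHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        # the path acts as the subject/channel of the message
        messages.append((self.path, self.rfile.read(length).decode()))
        self.send_response(200)
        self.end_headers()

def receive_block(port):
    # "receive" block: start a server listening on the specified port
    server = HTTPServer(("", port), ReceiveHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def send_block(url, data):
    # "send" block: POST the data to a specific robot/device URL
    urlopen(Request(url, data=json.dumps(data).encode(), method="POST"))

receive_block(8080)
send_block("http://localhost:8080/game/score", {"points": 3})
print(messages)  # [('/game/score', '{"points": 3}')]
```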
idealley/cloudcms-manage-menus
239808345
Title: Improve Breadcrumb for items that are not in the menu Question: username_0: When an article is not linked to a menu, but to a category only (the example is `Article 2`) it works correctly for the navigation but we need: 1. to add the relation `a:category-association': 'OUTGOING` to the breadcrumb traverse 2. to manage the edge case it produces: * The category does not have the property `parent` with an `id` * The breadcrumbParser passes a root of `undefined` * The `parent.length` check in the `parseBreadcrumb()` method is `false` 3. it might be worth to change the type and add a mapping to keep the code simple Answers: username_0: The breadcrumb displays but not yet correctly. (`parseBreadCrumb()`). The types have been changed (adding some mappings). Status: Issue closed
vuejs/vetur
293173206
Title: Component mixin methods/members are not recognized by VSCode Intellisense Question: username_0: - [x] I have searched through existing issues - [x] I have read through [docs](https://vuejs.github.io/vetur) - [x] I have read [FAQ](https://github.com/vuejs/vetur/blob/master/docs/FAQ.md) ## Info - Platform: Win - Vetur version: 0.11.7 - VS Code version: 1.19.3 ## Problem Component mixin methods/members are not recognized by VSCode Intellisense fullscreen.js ![image](https://user-images.githubusercontent.com/5399223/35626953-07bbb25e-06a0-11e8-87a7-bc284a65ed60.png) component.vue ![image](https://user-images.githubusercontent.com/5399223/35626972-17e69cc0-06a0-11e8-979f-96a762ddcddf.png) Message shown when hovering 'makeFullscreen': ![image](https://user-images.githubusercontent.com/5399223/35626982-265c863e-06a0-11e8-94c3-8da24b6ff1c6.png) No error messages in Panel -> Output -> Vue Language Server ## Reproducible Case In VSCode, 1. create jsconfig.json file: ` { "compilerOptions": { "alwaysStrict": true, "checkJs": true, "target": "es2016", "module": "es2015", "noImplicitAny": true, "noImplicitReturns": true, "baseUrl": "./", "paths": { "@/*" : ["*"], "~/*" : ["*"] } }, "include": [ "**/*", "**/*.vue" ] } ` 2. create a mixinTest.vue file: ` <script> const fullscreen = { [Truncated] export default { el: '#app', template: '<div id="page">' + ' <button @click="click"> Click me </button>' + ' <div ref="fullscreen"> Content </div>' + '</div>', mixins: [fullscreen], methods: { click(e) { this.makeFullscreen(this.$refs.fullscreen); } } }; </script> ` 3. the method call "makeFullscreen" in the "this.makeFullscreen(this.$refs.fullscreen);" line will be underlined with red. Answers: username_1: It's a known issue. We cannot support mixins for now in TypeScript, which is the engine under the hood. If you use mixins a lot, you can consider using the class-style API provided by other libraries. Status: Issue closed
Talesoft/tale-jade
157134945
Title: div not closing properly even with proper indentation while using loops and conditions. Question: username_0: ``` article.home div div.column img.cover(src=$home['cover']) div.stack-item a.subscribe(href="/subscribe") p Subscribe Now p 1 post p.rupees-only 1 only - $count = 0 each $story, $key in $post['stories'] - $slug = explode('/', $story['slug']) div.post-story a.headline(href="/".$slug[0]."/".$slug[1])= $story["headline"] a.author(href="/author/".$story['author-id'])= $story["author-name"] if $count == 3 - break - $count++ div.comment-group div.comment-wrapper - $count1 = 1 each $value, $key in $latest - $slug = explode('/', $value['slug']) - $commentClass = 'comment--blue' if $count1 % 2 == 0 - $commentClass = 'card--red' - $imageUrl = "background-image:url('http://example.org/".$value['hero-image']."')" div(class='card '.$commentClass) a.image-wrapper div.image(alt=$value['headline'], width='300', style=$imageUrl, href='/'.$slug[0].'/'.$slug[1]) div.content-wrapper div.content a.headline(href='/'.$slug[0].'/'.$slug[1])= $value['headline'] a.author(href='/author/'.$value['author-id']) span= $value['author-name'] - $count1++ ``` I expect div.comment-group to be parallel to article.home div, but when rendering, the actual result is that it's parallel to div.column. How can I fix that? Answers: username_1: Give a code example, please. username_0: In Sublime, pressing Enter inserted a space, and because of that it behaved like that. After switching to tabs it works fine. Status: Issue closed username_2: You can indent with either tabs or spaces. Just make sure that you don't mix them (and even that works in most cases) and that you always have the same amount of spaces for each level (don't indent level 1 with 2 spaces and then, suddenly, level 2 more lines below with 4 spaces). Closing this. If you have further complications, please open a new issue :)
joindin/joindin-platformsh
425294100
Title: Cache files/images for the duration we did at Combell Question: username_0: The old Apache cache conf was as follows: ``` ExpiresActive On ExpiresDefault A0 <FilesMatch "\.(ico|flv|pdf|mov|mp3|wmv|ppt)$"> ExpiresDefault A300 Header append Cache-Control "public" </FilesMatch> <FilesMatch "\.(gif|jpg|jpeg|png|swf|txt|js|css)$"> ExpiresDefault A3600 Header append Cache-Control "public" </FilesMatch> ``` We'll need something else for platform.sh: https://docs.platform.sh/configuration/routes/cache.html#http-cache If we set headers correctly, Cloudflare will cache this stuff for us, and we'll get a nice boost in page load speed as a bunch of our traffic will be served from their edge rather than Platform.sh. Answers: username_1: I got this. Status: Issue closed username_0: Not fixed. If you request e.g. https://joind.in/inc/img/event_icons/icon-7174-small.png you'll get back a no-cache header, and an Expires header of right now. username_0: EDIT: Looks like this is what we're looking for: https://docs.platform.sh/configuration/app/web.html#locations Status: Issue closed username_1: Should be working now.
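A hedged sketch of how the same rules might be expressed in a `.platform.app.yaml` `web.locations` block, per the docs link in the EDIT above; the exact keys, document root and TTL syntax are assumptions to verify against the Platform.sh documentation:

```yaml
web:
  locations:
    "/":
      root: "web"              # assumption: the app's document root
      passthru: "/index.php"
      rules:
        # heavy media gets the short TTL the Apache conf used (A300 = 5 minutes)
        '\.(ico|flv|pdf|mov|mp3|wmv|ppt)$':
          expires: 5m
        # images, scripts and styles get the longer TTL (A3600 = 1 hour)
        '\.(gif|jpe?g|png|swf|txt|js|css)$':
          expires: 1h
```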
kairosdb/kairosdb
415164161
Title: Cassandra/Scylla compression Question: username_0: A nice-to-have would be the kairos Cassandra module exposing the ability to create/alter the tables in order to compress the SSTables. Right now I'm doing this manually after the keyspace and tables are created. Answers: username_1: Can you post an example of what you are doing? username_0: You stated [here](https://github.com/kairosdb/kairosdb/issues/23#issuecomment-66723592) that the best compaction strategy would be `DateTieredCompactionStrategy`. So I'm trying out that compaction strategy with compression too: ``` ALTER TABLE data_points WITH compression = { 'sstable_compression': 'LZ4Compressor' } AND compaction = { 'class': 'DateTieredCompactionStrategy' }; ``` I got between 55% and 70% disk space savings. Status: Issue closed username_0: :heart:
wmbeers/cmv-app
571566328
Title: Saved maps with no layers show misleading error message when loaded Question: username_0: <!-- If you're filing a bug, please provide the following information: --> __How often can you reproduce it?__ <!-- Use [x] to mark your choice. --> - [ ] Always - [X] Sometimes - [ ] Rarely - [ ] Unable - [ ] I didn't try <!-- Please provide a detailed description of the issue. Include specific details to help us understand the problem. --> __Description:__ When loading a saved map that has no layers defined, the message "No savedMap passed to _loadMap function" is growled. ![image](https://user-images.githubusercontent.com/379889/75374079-9a49f480-5899-11ea-943d-ac2ae6b3d49a.png) <!-- List the step-by-step process to reproduce the issue. --> __Steps to reproduce:__ 1. Load a map that has no saved layers, e.g. https://stage.fla-etat.org/est/secure/map/index.html?loadMap=341 <!-- Describe what you expected to have happen after completing the steps above. --> __Expected results:__ A clearer warning, rather than a vague error. <!-- Describe what actually happened after completing the steps above. --> __Actual results:__ The error message shown above. Answers: username_0: A couple of things: I don't think loadMap and _loadMap should return deferreds; they should instead handle errors internally (moving code from the getSavedMapBeanById callback function in loadMap to _loadMap), and we shouldn't worry about having loadMap return a deferred at all, because we don't use it anywhere. username_0: Related: <NAME> 8 minutes ago Also, if I try to share a map and I have not saved my changes, I get the 'Unsaved Changes' dialog asking me to save or cancel --- which is expected. However, if I close the dialog or click cancel, the 'Share Map' dialog containing the link still displays. Bill 6 minutes ago Oh, good catch. :+1: 1 Bill 3 minutes ago Do you think changing the "You have unsaved changes" dialog from showing "OK" and "Cancel" buttons to "Yes" and "No" buttons would be a good way to fix that last issue? Bill 2 minutes ago The user might be thinking, "I don't want to save these changes, but share what I originally had" <NAME> < 1 minute ago That could work. Should we also add a note or more detail that mentions that by clicking No, you can still share, but it will be whatever the map was when it was last saved? username_0: Related: when saving the very first saved map, it doesn't show up in the user's list of available maps to open until the map/Index.html page is reloaded in the browser.
blockframes/blockframes
747636368
Title: Add a hint on document upload to warn that it will be downloadable Question: username_0: Thanks > closed Answers: username_1: Hints are added everywhere a document can eventually be downloaded (even for the images and poster / banner). All screens are available [here](https://www.figma.com/file/qpLBi85x3y0s4fLpJPACOm/%5B-Wireframing-Elements-%5D?node-id=0%3A1) Examples: Hint_1 <img width="549" alt="hint_1" src="https://user-images.githubusercontent.com/55499036/100857686-49e96e00-348d-11eb-9a03-c94a1464d60d.png"> Hint_2 <img width="509" alt="hint_2" src="https://user-images.githubusercontent.com/55499036/100857783-64bbe280-348d-11eb-8978-c21ed90b81df.png"> Hint_3 <img width="684" alt="hint_3" src="https://user-images.githubusercontent.com/55499036/100857803-6ab1c380-348d-11eb-8988-a7eeb29884b0.png"> username_0: Thanks > closed
openshift/origin
181518336
Title: Provision test users and token explicitly for extended tests Question: username_0: Currently the tests rely on AllowAll auth and request a token: https://github.com/openshift/origin/blob/master/test/util/client.go#L67 Talking with Jordan in https://github.com/openshift/origin/pull/11254 he mentioned it would be best to provision the user and api token explicitly. That should allow tests to run regardless of auth config.
apollographql/apollo-server
310541138
Title: info.cacheControl not passed when using mergeSchemas Question: username_0: <!--**Issue Labels** While not necessary, you can help organize our issues by labeling this issue when you open it. To add a label automatically, simply [x] mark the appropriate box below: - [x] has-reproduction - [ ] feature - [ ] docs - [ ] blocking - [ ] good first issue To add a label not listed above, simply place `/label another-label-name` on a line by itself. --> ## Problem `info.cacheControl` is not passed to resolvers when using `mergeSchemas`. ## Reproduction https://github.com/username_0/apollo-server-cachecontrol-bug ## The gist of it ```js const schema = makeExecutableSchema({ typeDefs: `...truncated...`, resolvers: { Query: { books: (_, args, __, info) => { // Logs `undefined` when using mergeSchemas console.log(info.cacheControl); return books; } } } }); const merged = mergeSchemas({ schemas: [schema] }); const app = new Koa(); app.use(koaBody()); const router = new Router(); router.post('/graphql', graphqlKoa({ schema, // works; logs an object for info.cacheControl // schema: merged, // does not work; logs undefined for info.cacheControl context: { foo: 'bar' }, tracing: true, cacheControl: true })); ``` ## Relevant dependencies: ``` $ yarn list --pattern apollo yarn list v1.5.1 ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] └─ [email protected] $ yarn list --pattern graphql yarn list v1.5.1 ├─ [email protected] ├─ [email protected] └─ [email protected] ``` Answers: username_1: We are having similar issues, but we are constructing our schema using `makeExecutableSchema`. Our dependencies are $ yarn list --pattern apollo; yarn list --pattern graphql yarn list v1.6.0 ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] └─ [email protected] ✨ Done in 1.20s. yarn list v1.6.0 ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] ├─ [email protected] └─ [email protected] ✨ Done in 1.19s. If we could get in-memory caching working, we would start investigating moving to Redis in the future. However, the current situation does not increase my level of confidence. username_2: We are having the same problem. Any news on this issue? username_2: It turns out I implemented a solution on my own side. I am basically building the local schema, then merging it with the remote schema with `mergeSchemas` (because it's easier to get the merged schema definition) and finally enhancing the merged schema definition with my resolvers + a resolver that is responsible for delegating queries to the remote graphql service. This way, `info.cacheControl` is available in all resolvers. 
```javascript // index.js import { printSchema } from 'graphql' import { makeExecutableSchema, mergeSchemas } from 'graphql-tools' import typeDefs, { resolvers } from '../my-local-schema' import getRemoteSchema, { withDelegationResolvers } from './my-remote-schema' export default () => getRemoteSchema().then((remoteSchema) => { const localSchema = makeExecutableSchema({ typeDefs, resolvers }) const mergedSchema = mergeSchemas({ schemas: [remoteSchema, localSchema] }) const mergedTypeDefs = printSchema(mergedSchema) const enhancedResolvers = withDelegationResolvers(resolvers, remoteSchema) return makeExecutableSchema({ typeDefs: mergedTypeDefs, resolvers: enhancedResolvers }) }) // my-remote-schema.js import fetch from 'node-fetch' import { HttpLink } from 'apollo-link-http' import { delegateToSchema, FilterRootFields, introspectSchema, makeRemoteExecutableSchema, transformSchema } from 'graphql-tools' const remoteResolver = schema => (parent, args, context, info) => delegateToSchema({ schema, operation: info.operation.operation, fieldName: info.fieldName, args, context, info }) export const withDelegationResolvers = (resolvers, remoteSchema) => { const { Query = {}, ...otherResolvers } = resolvers // the delegating resolver needs its own name: redeclaring `remoteResolver` // here would shadow the outer const and throw a ReferenceError const delegate = remoteSchema ? remoteResolver(remoteSchema) : f => f return { ...otherResolvers, Query: { ...Query, remoteSchemaRootField: delegate }, } } export default () => { const link = new HttpLink({ uri: remoteGraphqlHost, fetch }) return introspectSchema(link).then((schema) => { const remoteSchema = makeRemoteExecutableSchema({ schema, link, fetch }) return transformSchema(remoteSchema, [ new FilterRootFields((operation, rootField) => rootField === 'remoteSchemaRootField') ]) }) } ``` username_3: @username_2 Nice solution! Can you show what you use for the remoteSchemaRootField in your typeDefs? I keep getting this error: `Query.remoteSchemaRootField defined in resolvers, but not in schema` And this makes sense, since I have not defined it anywhere in the schema. Any advice is welcome :) username_2: hey @username_3. Glad you liked it! This `remoteSchemaRootField` is an abstract name of a root field coming from the introspection of the other graphql backend. Imagine I have 2 graphql APIs: Projects (local) and Users (remote, which I will run the introspection query against). Then this would be the root query of my local API: ```graphql type Query { project(projectId: ID!): Project } ``` And this would be the root query of the remote API: ```graphql type Query { user(id: ID!): User } ``` In this case, the enhancedResolvers would look something like this: ```javascript const withDelegationResolvers = (resolvers, remoteSchema) => { const { Query = {}, ...otherResolvers } = resolvers const delegate = remoteSchema ? remoteResolver(remoteSchema) : f => f return { ...otherResolvers, Query: { ...Query, user: delegate }, } } ``` And the function to fetch the remote schema would filter the root query to include only the root fields that you want to expose through your API (in this case, user): ```javascript export default () => { const link = new HttpLink({ uri: 'http://user.com/graphql', fetch }) return introspectSchema(link).then((schema) => { const remoteSchema = makeRemoteExecutableSchema({ schema, link, fetch }) return transformSchema(remoteSchema, [ new FilterRootFields((operation, rootField) => rootField === 'user') ]) }) } ``` What do you think? 
username_3: @username_2 Thanks for the explanation, now I got it :) For some weird reason, I had not thought about the fact that `remoteSchemaRootField` has to be defined in the remote schema. I am looking for a solution where I can pass down the cache setting from the Apollo server to the remote schemas. This may be a good solution with some more tweaking. Do you have any idea what would be the best way to pass down all the remote resolvers to the main schema? username_4: This bug extends further than cacheControl -- it affects all extensions. It looks like it does call the extension eventually, but it is out of order (some time after the resolver). username_5: This is a known issue with how `mergeSchemas` works and is also why we have deprecated graphql-tools in favor of federation which *will* support cache control and tracing 🎉 Status: Issue closed
Lullabot/amp-library
360779797
Title: AMP Live Blog (Live-List) Support Question: username_0: Hi, I am trying to implement the live-list feature of AMP. https://ampbyexample.com/components/amp-live-list/ Is it supported by this library? As soon as I add the amp-live-list JS to my header, this library removes it.
Dav1dde/glad
899581428
Title: Where is the documentation Question: username_0: I can't find documentation for glad2 to refer to; some functions are a little hard to understand Answers: username_1: Have you seen https://github.com/username_1/glad/tree/glad2#documentation? It links to the [wiki](https://github.com/username_1/glad/wiki/C) here. Status: Issue closed username_0: All right, I got it! THANK YOU A LOT.
jhipster/generator-jhipster
91092852
Title: Allow managing users and authorities Question: username_0: @jdubois, I'd like to start a PR to create a feature to manage users and roles. Besides using Liquibase to insert authorities (roles) and users and to associate users with authorities, I think it would be interesting to have a function in the menu to manage users and roles. What do you think? Answers: username_1: I guess somebody is working on that PR already; please check the open issues username_1: it's #1525 username_0: Thanks! Status: Issue closed username_2: yes, but not merged yet. (@moifort, not ready?) username_3: And there was a separate issue about creating new authorities: https://github.com/jhipster/generator-jhipster/issues/780 and a pull request: https://github.com/jhipster/generator-jhipster/pull/851 which was rejected; the change now lives on my fork ( https://github.com/username_3/generator-jhipster/commit/a9a9780931d1e4f6b2a32c41843f9982b7e6faa8 ) username_0: @username_3 @username_1, let's concentrate the conversation in #1525
r-spatial/stars
786080047
Title: na.rm ignored in st_apply on proxy layers Question: username_0: It looks like the `...` arguments don't propagate all the way through when computing on proxy layers. That's how I interpret it, at least. Reprex below. ``` r library(stars) #> Loading required package: abind #> Loading required package: sf #> Linking to GEOS 3.8.0, GDAL 3.0.4, PROJ 6.3.1 tif = system.file("tif/olinda_dem_utm25s.tif", package = "stars") x <- read_stars(tif) x <- x*10 x[[1]][5000:6000] <- NA out <- tempfile(fileext = ".tif") write_stars(x, dsn = out) x <- read_stars(out, proxy = T) plot(x) ``` ![](https://i.imgur.com/ew0Gqmb.png) ``` r y <- read_stars(tif, proxy = T) new <- c(x, y, along = "band") plot(new) ``` ![](https://i.imgur.com/uKtNoLi.png) ``` r res <- st_apply(new, MARGIN = c("x", "y"), FUN = mean, na.rm = TRUE) plot(res) ``` ![](https://i.imgur.com/8Av2ArQ.png) ``` r x <- read_stars(tif) y <- x*10 x[[1]][5000:6000] <- NA new <- c(x, y, along = "band") plot(new) ``` ![](https://i.imgur.com/FMekBvx.png) ``` r res <- st_apply(new, MARGIN = c("x", "y"), FUN = mean, na.rm = TRUE) plot(res) ``` ![](https://i.imgur.com/WKy0TQD.png) <sup>Created on 2021-01-14 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)</sup> <details> <summary>Session info</summary> [Truncated] #> stringi 1.5.3 2020-09-09 [3] CRAN (R 4.0.2) #> stringr 1.4.0 2019-02-10 [3] CRAN (R 4.0.0) #> testthat 3.0.1 2020-12-17 [3] CRAN (R 4.0.3) #> tibble 3.0.4 2020-10-12 [1] CRAN (R 4.0.3) #> tidyselect 1.1.0 2020-05-11 [1] CRAN (R 4.0.2) #> units 0.6-7 2020-06-13 [1] CRAN (R 4.0.2) #> usethis 2.0.0 2020-12-10 [1] CRAN (R 4.0.3) #> vctrs 0.3.6 2020-12-17 [3] CRAN (R 4.0.3) #> withr 2.3.0 2020-09-22 [1] CRAN (R 4.0.2) #> xfun 0.20 2021-01-06 [3] CRAN (R 4.0.3) #> xml2 1.3.2 2020-04-23 [3] CRAN (R 4.0.0) #> yaml 2.2.1 2020-02-01 [3] CRAN (R 4.0.0) #> #> [1] /home/au206907/R/x86_64-pc-linux-gnu-library/4.0 #> [2] /usr/local/lib/R/site-library #> [3] /usr/lib/R/site-library #> [4] /usr/lib/R/library ``` </details> Answers: username_1: [1] 1 ``` username_0: Ehh well, the problem is that I expected exactly that, but currently don't get that when using proxy layers (third image from the top in the issue here). It works perfectly on the non-proxy layers as per the image just above here. username_0: I've edited the reprex to be more concise - now only the two strictly relevant plots are shown. username_1: Helpful! Status: Issue closed username_1: Thanks, that should work now. username_0: Thanks Edzer! Works now. (Btw, I noticed that `par(mfrow = c(1,2))` doesn't appear to have any effect when plotting the stars objects here - wasn't that supposed to give us two plots arranged side by side?) username_1: Yes/no: it does so in the following example: ```r x = read_stars(system.file("tif/L7_ETMs.tif", package = "stars")) par(mfrow = c(1,2)) plot(x[,,,1], key.pos = NULL, reset = FALSE) plot(x[,,,2], key.pos = NULL, reset = FALSE) ``` i.e. with single layer rasters without key; otherwise, the multi-raster and/or key placement use the multi-plot mechanism (`layout`). username_0: Ah, I see. Thanks for the clarification!
naser44/1
103017690
Title: Coming up shortly... the most important celebrations of August 25. Question: username_0: <a href="http://ift.tt/1hE8tqc">Coming up shortly... the most important celebrations of August 25.</a>
ARMmbed/greentea
125201224
Title: It does not release the serial port when it times out Question: username_0: It's been happening lately that I have to restart a board. I could not find this reported (open or closed); I recall it was already reported. Env: Target: k64f, OS: windows, mbedgt 0.1.14 ``` // mbedgt times out because of the test block // running mbedgt again mbedgt: mbed-host-test-runner: started MBED: Instrumentation: "COM64" and disk: "D:" HOST: Copy image onto target... 1 file(s) copied. HOST: Initialize serial port... ........................................mbedgt: mbed-host-test-runner: stopped mbedgt: mbed-host-test-runner: returned 'TIMEOUT' ``` Answers: username_1: ARM Internal Ref: IOTSYST-719 username_2: Good, I will investigate this ASAP. username_3: @username_2 this issue may be related to issue #48; restarting the target may be helping mbed host test to exit as well. Anyway, closing the serial port should be part of mbed host test when mbedgt times out username_2: Yes, let's see how this issue behaves when #48 is fixed. username_3: It should be fixed by: ARMmbed/htrun#54 #63 Status: Issue closed
graphql-nexus/nexus
840025518
Title: N+1 issues when getting nested types Question: username_0: Let's say I have a stock with historical pricing data and a list of certain stocks, with a `getList` query that gets all stocks on the list with their latest price. ``` query getListWithPrices($listId: String!) { getList(listId: $listId) { id name stocks { id name latestPrice } } } ``` Here `latestPrice` is a query that grabs pricing from the historical pricing table, orders by date `desc`, and grabs the first result. Structuring this with Nexus + Prisma is leading to an N+1 issue, and I was hoping to find some help. Here's how things are so far: 1. The list resolver grabs the correct list by ID (1 query) 2. The list has 100 stocks (1 query, all stocks that are members of the list), 3. For each stock, query the priceHistory table, order by `desc` date, and grab the first one (100 queries, as the field exists on the stock resolver) Is there a way to do this more efficiently without needing to create a completely separate query for handling lists like this? Thanks in advance. Answers: username_1: Same issue. username_2: This isn't really a nexus problem but rather a challenge when creating nested graph-like structures in GraphQL in general. There are a few ways to deal with this, but the most common approach is to use the dataloader pattern. This pattern was made popular by Facebook's [dataloader](https://github.com/graphql/dataloader) library for NodeJS. The gist of the library is rather simple. While resolving your query it will collect all the `Stock.id`s that need a `latestPrice`; it will then give you all those ids in a batch so you can resolve them. Generally in SQL-like queries this would result in something like: `SELECT * FROM priceHistory WHERE stock_id IN (...)`. This makes it so that when you resolve 100 stocks in the list you only issue 1 query to the priceHistory table. If you want to know more, check out the dataloader library on GitHub :)
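A language-agnostic sketch of that batching idea, here in Python with asyncio rather than the NodeJS dataloader package (the Loader class and the fake batch query are illustrative only): loads requested in the same tick are collected and resolved with a single batched lookup.

```python
import asyncio

class Loader:
    """Collect keys requested in one tick, then resolve them in a single batch."""
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # async: list of keys -> list of values
        self.pending = []         # (key, future) pairs awaiting the next flush

    def load(self, key):
        fut = asyncio.get_running_loop().create_future()
        if not self.pending:  # first key of this tick schedules the flush
            asyncio.get_running_loop().call_soon(
                lambda: asyncio.ensure_future(self._flush()))
        self.pending.append((key, fut))
        return fut

    async def _flush(self):
        batch, self.pending = self.pending, []
        values = await self.batch_fn([key for key, _ in batch])
        for (_, fut), value in zip(batch, values):
            fut.set_result(value)

async def latest_prices(stock_ids):
    # stand-in for one batched query, e.g.
    # SELECT ... FROM priceHistory WHERE stock_id IN (...)
    return [{"stock_id": s, "price": 1.0} for s in stock_ids]

async def main():
    loader = Loader(latest_prices)
    # two resolver calls, but latest_prices runs only once, with both ids
    print(await asyncio.gather(loader.load(1), loader.load(2)))

asyncio.run(main())
```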
talos-org/client
380086949
Title: Screen 2: Blockchain configuration Question: username_0: # Second Screen - allow the user to configure advanced settings for the blockchain (fields in the MultiChain conf file) - allow the user to accept defaults for advanced settings and skip to Finish - allow the user to go back to the first onboarding screen, exit onboarding, and continue
ibest/HTStream
208934846
Title: output file parameter thoughts/opinion Question: username_0: It seems weird to have to include a trailing '_' when you specify the output file prefix (e.g. myoutput_); can't the '_' be added by default (that seems to be the norm)? Cleaned reads should have a postfix similar to Illumina's, so instead of PE1/PE2 how about R1/R2 (maybe even include the _001, so cleaned_reads_R1_001.fasta.gz)? It seems more apps will expect R1/R2 within the read id than PE1/PE2 (our use of PE is, I think, legacy). gz output by default helps with good behavior ;) but of course only when outputting a file. Answers: username_0: Fixed, default is R1, R2, and SE Status: Issue closed
Peripli/service-manager-cli
411451800
Title: smctl lb returns no brokers Question: username_0: # smctl lb returns no brokers ## Description When `smctl lb` is executed, no brokers are returned (even though there are some in the Service Manager). When `smctl curl /v1/brokers` is executed, a non-empty result is returned. ## Steps to reproduce 1. Register a valid broker in SM (smctl rb ...) 2. Execute smctl lb ## Observed Results No brokers are displayed. ## Expected Results: At least the registered broker is returned. ## Known affected versions: pre-1.4.1 ## Additional remarks This seems to be a regression, as old versions work as expected (tested with v1.0.0). Status: Issue closed Answers: username_0: The issue appears only when using an older version of SM.
miguelcobain/ember-leaflet
169504977
Title: L.Icon.Default.imagePath - Integration Testing Question: username_0: In case anyone runs into this issue during integration testing: ``` Source: Error: Couldn't autodetect L.Icon.Default.imagePath, set it manually. ``` Just add this to the top of your integration test: ```js import hbs from 'htmlbars-inline-precompile'; // Needed to silence the leaflet autodetection error L.Icon.Default.imagePath = 'some-path'; moduleForComponent('... ``` Should we maybe add a note in the docs somewhere? Thanks Answers: username_1: Thanks for sharing. Status: Issue closed
rbdannenberg/o2
163715173
Title: OSC send integration Question: username_0: There is a command, o2_delegate_to_osc() to redirect O2 messages to OSC clients. This needs to be implemented fully and tested. I think we should handle timestamps by delaying the message locally until the timestamp time, then send the message without timestamps. (It would be better to translate timestamp to an OSC timestamp, form a bundle, and send a timestamped message immediately, but I do not think it is common for OSC systems to use timestamps; therefore, timestamped bundles would probably not be received properly.) Answers: username_0: Done. Timestamp translation is performed according to liblo timestamps, but only for incoming and outgoing bundles. Status: Issue closed
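A minimal sketch of the approach described above (hold each timestamped message locally and forward it, untimestamped, once its time arrives); this is a Python illustration of the scheduling idea, not O2's actual C implementation:

```python
import heapq
import time

pending = []  # min-heap of (timestamp, message), soonest first

def schedule(timestamp, message):
    heapq.heappush(pending, (timestamp, message))

def poll(send_osc, now=None):
    # forward every message whose time has arrived, stripped of its timestamp
    now = time.time() if now is None else now
    while pending and pending[0][0] <= now:
        _, message = heapq.heappop(pending)
        send_osc(message)

schedule(time.time() + 0.5, ("/synth/freq", 440.0))
poll(print)       # too early: nothing is sent yet
time.sleep(0.6)
poll(print)       # due: the message is forwarded without a timestamp
```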
MicrosoftDocs/azure-docs
838236572
Title: Wrong commands in the Official Documentation Question: username_0: The issue was encountered by CSS engineer <NAME> <EMAIL>. While delivering training for new hires, they found that the commands in the official documentation fail, due to updates to the AzureAD module. The commands the documentation suggests using are: ![image](https://user-images.githubusercontent.com/61716458/112077324-e1a8c000-8b41-11eb-8342-0abc602ad516.png) Connect-AzureAD **New-AzureADMSAdministrativeUnit** -Description "West Coast region" -DisplayName "West Coast" **Example of the error:** PS C:\Users\jarojasm> New-AzureADMSAdministrativeUnit -Description "West Coast region" -DisplayName "West Coast" New-AzureADMSAdministrativeUnit : The term 'New-AzureADMSAdministrativeUnit' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + New-AzureADMSAdministrativeUnit -Description "West Coast region" -Dis ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (New-AzureADMSAdministrativeUnit:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException **PS C:\Users\jarojasm> New-AzureADAdministrativeUnit** cmdlet New-AzureADAdministrativeUnit at command pipeline position 1 Supply values for the following parameters: DisplayName: hola ObjectId DisplayName Description -------- ----------- ----------- be4ca889-956f-47d8-887d-4ae71472ee61 hola ![image](https://user-images.githubusercontent.com/61716458/112077200-97bfda00-8b41-11eb-8a0a-e8f5912d574a.png) In brief, the commands in the doc are New-AzureADMSAdministrativeUnit, but what worked was New-AzureADAdministrativeUnit --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 10d0bacc-f835-c3d2-5b13-7491a190a3c3 * Version Independent ID: 2cfc9c21-b9b7-2a2c-348f-dd6fb2cdc9e8 * Content: [Add and remove administrative units - Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/roles/admin-units-manage) * Content Source: [articles/active-directory/roles/admin-units-manage.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/roles/admin-units-manage.md) * Service: **active-directory** * Sub-service: **roles** * GitHub Login: @username_3 * Microsoft Alias: **username_3** Answers: username_1: @username_0 Thanks for the feedback. We will engage the content development team for further investigation; they will verify and perform the necessary changes in the document as needed. username_2: Hi @username_1, Jazmin and I have reviewed this in further detail and found that the commands work as documented, but the AzureAD module needs to be updated beforehand. Our personal recommendation would be to simply add instructions on how to update the AzureAD module; we have confirmed the below method works: Uninstall-Module AzureADPreview Uninstall-Module AzureAD Install-Module AzureAD -Force Once the latest version of the AzureAD module is installed, the commands will work as documented: ![image](https://user-images.githubusercontent.com/81253238/112182909-54a64b00-8bc3-11eb-8ce8-27e47129b52c.png) username_3: Hi @username_0 and @username_2, Thanks very much for your feedback. We've made several updates to the administrative unit docs that hopefully are clearer. 
We've also added a Prerequisites doc with steps to install the AzureAD and AzureADPreview modules: https://docs.microsoft.com/en-us/azure/active-directory/roles/prerequisites Let us know if you have any problems or additional feedback. Thanks username_3: #please-close Status: Issue closed
tibel/Weakly
300067700
Title: weak key dictionary? Question: username_0: Hi, is there a reason why there is no weak-key dictionary? Answers: username_0: It would be useful for caching, for example username_1: Do you mean something like https://docs.microsoft.com/en-us/dotnet/api/system.runtime.compilerservices.conditionalweaktable-2? username_0: ah yes, but we are still using .NET 3.5 and this only came in 4.0 username_1: Also, Weakly does not support .NET 3.5, only 4.5 and newer. Status: Issue closed
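For illustration, Python's standard library ships the same concept as weakref.WeakKeyDictionary; this sketch shows the caching use case username_0 mentions (the .NET analogue the thread points to is ConditionalWeakTable):

```python
import gc
import weakref

class Resource:
    """Keys must be weak-referenceable, hence a class instance."""

cache = weakref.WeakKeyDictionary()
key = Resource()
cache[key] = "expensive derived value"
print(len(cache))  # 1 while `key` is alive
del key            # once the key is collected, its entry disappears
gc.collect()
print(len(cache))  # 0: the cache did not keep the key alive
```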
Grammarsalad/Proficiencies
330444850
Title: Mechanics Question: username_0: - Set snares (+10%, possibly 15, to set snares; implement something like 'trap revisions', which gives automatic success but options based on total skill level) - Bonuses to detect/disable traps and open locks - Craft and use mechanical devices: advanced crossbows, bombs, tech ammo Answers: username_0: # Mechanics (Intelligence) **Requirements**: None **Set Snares** This skill allows the character to set snares to trap and damage <PRO_HISHER> enemies. <PRO_HESHE> gains a bonus to set snares equal to 10 plus <PRO_HISHER> Wisdom Modifier. Additionally, <PRO_HESHE> can set one additional trap per day. (change that Wisdom bonus to just +15%/rank)
kipoi/kipoiseq
368401810
Title: Additional transforms Question: username_0: - k-mer or gapped k-mer counter - simple reverse complementation function ## transforms/augmentation.py - random sub-sequence augmentation - random reverse-complementation Answers: username_0: - experiment with using Numba to implement k-mer counting or other transforms. Have a fallback method if the user doesn't have numba installed locally ```python try: import numba @numba.jit def kmers(seq, k): ... # numba-compiled implementation except ImportError: def kmers(seq, k): ... # pure-Python fallback with the same signature ```
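A self-contained sketch of the two simplest transforms proposed above (plain Python, not kipoiseq's actual API; function names are illustrative):

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    # complement each base, then reverse the sequence
    return seq.translate(COMPLEMENT)[::-1]

def kmer_counts(seq: str, k: int) -> dict:
    # slide a window of width k over the sequence and tally each k-mer
    counts = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts

print(reverse_complement("ACGTT"))  # AACGT
print(kmer_counts("ACGTACGT", 2))   # {'AC': 2, 'CG': 2, 'GT': 2, 'TA': 1}
```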
jrgp/linfo
332712736
Title: Use internal css and js Question: username_0: Currently, to load the HTML output, linfo uses 5 files by default (3 CSS, 1 JS and 1 image), and sometimes more for extra images etc. We can use internal CSS and JS to remove the extra requests. Someone may want to use the linfo HTML preview as part of another service, and then they must customize the location of the media files. Because of that we can use internal CSS and JS. For the images I recommend merging them into one file and converting it to base64 for use in the CSS. Answers: username_0: Also, for changing themes, I think it's better to simply add a theme name as a class on body and use one file for all themes, because it's very simple and doesn't need a separate file with all the styles for each theme! username_1: Most browsers will cache assets and are also faster at requesting 5 smaller files instead of one big one. Base64-encoded images are also a lot larger. The next reason against it is maintainability. And for themes - it's in no way better/faster to request one css file with all themes instead of a single, replaceable one with the current theme. And most people who use this in another project will build customized HTML and just use the API of this package. username_2: We *could* make the images a sprite sheet and minify/combine the JS and CSS files down, but I'm not sure what the gain would be, as the size of JS and CSS for linfo is minuscule compared to other projects. All of it together is just a few KB. username_1: And as a last reason against inlining: "all" projects with an enabled content security policy will fail, because enabling `inline-unsafe` is like not having a CSP at all. username_0: All of the images in the icons folder are about 27.8 kb; combined and compressed they are maybe less than 10 kb. For example, compressing logo.png reduces its file size by about 65% via [compressor.io](https://compressor.io/compress). Also, all of the CSS is about 6.1 kb, and combining the theme colors in one file is less than 8 kb; it's nothing! At the least you could use one CSS file and one image in total. Also, namespaces are not used correctly in the PHP code and need to be fixed. For example, to create a new instance of `Timer`, you must use a backslash at the start of the class name, like `\Timer`, because of namespace conflicts. username_2: This might be worth thinking about. Regarding that namespace thing, does using it shorthand (eg `Timer`) matter when we have the relevant `use` statements at the top of the file? username_0: No, it's not about using the shorthand. For example, to create a new instance of the plain DateTime class, you must write `new \DateTime` instead of `new DateTime` (with a backslash before the class name), or to use try/catch you must reference the exception with a backslash because it's a global class, like `catch (\Exception $e)`. This lets PHP know that the call should be resolved from the global space instead of approaching it relatively. Without the backslash it's relative, and it may not work in some frameworks using global namespaces. Maybe [this article about Namespacing in PHP](https://code.tutsplus.com/tutorials/namespacing-in-php--net-27203) helps in understanding this problem. username_1: As an example: https://github.com/username_2/linfo/blob/686a35498694217ae9bc66800f87de3d949b2712/src/Linfo/OS/BSDcommon.php#L151 In all other places it's used with the correct `use \Linfo\Meta\Timer;` statement https://github.com/username_2/linfo/search?l=PHP&p=1&q=Timer username_1: @username_2 I would recommend using PHP CS Fixer in Travis to prevent things like this in the future. 
*I'm not 100% sure if it detects missing use statements, but it can detect unused ones* Related to the theme thing and so on, it would be an idea to switch to less/sass and gulp/grunt/webpack to keep the code simple and readable but compress everything during the build. It also could make theming a lot easier if I just have to adjust some variables. username_3: Sprite sheets would not be an optimization. Vanilla Linfo generally shows three images. If we do the math, adding a generous 1460 bytes of HTTP request overhead: * logo.png - 1646 bytes (+1460 bytes http req overhead) * os_linux.gif - 586 bytes (+1460 bytes http req overhead) * distro_debian.gif - 638 bytes (+1460 bytes http req overhead) **= 7250 bytes (7.3 kb)** 7.3 kb is still less than the proposed 10 kb sprite sheet. That means that page load time would be slower with a sprite sheet, presumably defeating the purpose. It doesn't appear as though either of these changes would necessarily improve Linfo, at least in its current state. username_0: That's correct, but it's only 7.5 kb, **not mb!** username_0: I did it and created one sprite image for the icons. 10 kb was a guess; the real size is 4610 bytes (**4.6 kb**). That's the size of one image file containing all of the images :) [check image size](https://github.com/username_2/linfo/pull/94) username_1: So you add *2.26 KB* and *185 lines* to reduce the number of requests by *2*, for an administrative application that will never be seen by any search bot or anything else that's not the admin!? This whole "issue" is optimization to death in the wrong place. I have no usage stats, but so far as I have used this package, the delivered HTML is only an example and never used in production. If you want a simple plug'n'play tool to monitor your server you will end up with Munin or something similar. This package is in PHP (who the hell wants to monitor a whole server in PHP to see simple numbers?) - so its primary use case is to get system stats for dashboards or calculations in PHP. So in all cases you will need the API and not the delivered HTML. (not against you @username_2) but the delivered HTML is ugly as hell - so no one will use this in a productive environment included in any other application without changes. And as standalone this is a pretty bad choice (like said above). username_2: Sorry for the late reply everyone. @username_3 thanks for the info. I was hoping you'd reply :) @username_1 I fully agree. The most _practical_ use for Linfo is likely its JSON API, and people will likely use something other than PHP with a webserver for gathering metrics, like collectd or something else. I still spin Linfo's web UI up on servers I run and use it to look at them periodically, but it's definitely a niche use case. As for the UI, I started this when I was 16 and wanted to create a lighter, faster, simpler and more functional version of phpsysinfo, which led to a deliberately minimal and lightweight UI. If it wasn't for @username_3's blue/white theme he added over 7 years ago, Linfo likely wouldn't even be usable.
Caterinacrisponi/programmazione
444828567
Title: the exercise code is broken Question: username_0: @Caterinacrisponi you made a mistake when copying and pasting from the Faust editor: you also pasted numbers and words that have nothing to do with the Faust code. Below is the corrected code: ``` import("stdfaust.lib"); pan1 = vslider("p1 [style:knob]", 0.5,0,1,0.01); frq = vslider("f1 [style:knob] [unit:Hz]", 440,100,20000,1); pan2 = vslider("p2 [style:knob]", 0.5,0,1,0.01); pan3 = vslider("p3 [style:knob]", 0.5,0,1,0.01); pan4 = vslider("p4 [style:knob]", 0.5,0,1,0.01); process = os.oscsin(frq*1), os.oscsin(frq*2), os.oscsin(frq*3), os.oscsin(frq*4) <: _ * (sqrt(1-pan1)), _ * (sqrt(1-pan2)), _ * (sqrt(1-pan3)), _ * (sqrt(1-pan4)), _ * (sqrt(pan1)), _ * (sqrt(pan2)), _ * (sqrt(pan3)), _ * (sqrt(pan4)) : _+_, _+_, _+_, _+_ : _+_, _+_ : _ *(0.25), _ *(0.25); ``` Status: Issue closed
International-Data-Spaces-Association/InformationModel
947447920
Title: Title or Description field on representation or artifact level Question: username_0: DCAT supports a title and a description property at the distribution level. Since I need to represent (map) from DCAT to IDS, I was looking for those properties at the representation or artifact level. It seems that IDS does not foresee title and description fields on artifacts and representations? If that is right, I would suggest considering this, since mapping from DCAT seems to be a common requirement; at least in MDP it is. Answers: username_1: Good point. Should be easy to achieve and definitely a benefit. username_0: Just saw that representation is a subclass of dcat:Distribution. According to the DCAT spec, title and description are formally already part of ids:Representation, but to my knowledge they are not part of the generated Java classes. username_2: I think the solution would be as easy as ```turtle ids:Representation rdfs:subClassOf ids:Described ``` username_1: Added the suggested changes to the `ids:Representation` class in #483. The IDS Information Model splits dcat:Distribution into ids:Representation and ids:Artifact. Therefore, it is not necessary to add title / description information to an artifact. Status: Issue closed
CTeX-org/ctex-kit
837665842
Title: In ctex v2.5.6, zhmap under the \tex\generic\ctex path inside vtex.tds.zip is a file, whereas in previous versions it was a folder, which breaks ctex updates Question: username_0: As described in the title. Answers: username_1: Are you talking about the release on GitHub? Or the one updated via tlmgr? username_2: The files in Releases have now been updated: <https://github.com/CTeX-org/ctex-kit/releases/tag/ctex-v2.5.6> username_0: I'm using MiKTeX, and the package was updated automatically. I suggest publishing an update so that everything gets fixed in sync. username_3: You could upload to CTAN again and explain the reason in the notes; the CTAN team will update the files but won't send out a new update announcement (probably because no version increment is involved). TeX Live and MiKTeX may each need to be notified by email. I went through this once before when uploading thmtools, also because of a packaging error (at the time it made generating the sty from the dtx impossible; it was TeX Live that emailed me first). username_3: MiKTeX already has a corresponding report: https://github.com/MiKTeX/miktex/issues/756. After the update lands on CTAN, replying under that issue (as a reminder to update) should be enough. username_2: I uploaded the fixed version to CTAN a few days ago. TeX Live was delayed by a few days, but [it was handled yesterday too](https://www.tug.org/svn/texlive?view=revision&revision=58583). username_2: According to the feedback in https://github.com/MiKTeX/miktex-packaging/issues/236#issuecomment-804079552, MiKTeX has repackaged it; the update should be available within a few days.
sql-machine-learning/sqlflow
527938450
Title: does sqlflow support hive2 Question: username_0: **Is your feature request related to a problem? Please describe.** Does SQLFlow support Hive 2? If so, how do I connect to it? **Describe the solution you'd like** I failed to connect to Hive 2 in the following way: SQLFLOW_DATASOURCE='hive://root:root@hostname:10010/iris' It says "failed to ping database". **Describe alternatives (optional)** **Additional Notes** Answers: username_1: Hi @username_0, what Hive version are you using? The Hive version we use in our CI is `Hive 2.3.2`; SQLFlow should work with Hive 2.x. "Failed to ping database" is usually due to an incorrect server configuration or `SQLFLOW_DATASOURCE`. There are several ways to debug this. As a first step, I would suggest trying to connect to your Hive server by writing a Python script using `impyla==0.16.0`. You can find an example [here](https://github.com/cloudera/impyla#usage). If you can't connect to your Hive server using impyla, then it is likely that additional authentication configuration is needed. Status: Issue closed
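As a sketch of that first debugging step, a minimal impyla connectivity check might look like the following; the host, port, credentials and auth_mechanism are placeholders mirroring the data source above, and should be adjusted to the server's actual authentication setup:

```python
from impala.dbapi import connect

# placeholders mirroring hive://root:root@hostname:10010/iris
conn = connect(host="hostname", port=10010,
               user="root", password="root",
               auth_mechanism="PLAIN")  # assumption: SASL/PLAIN auth
cur = conn.cursor()
cur.execute("SHOW DATABASES")
print(cur.fetchall())  # if this fails, SQLFlow cannot connect either
```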
numba/numba
377020869
Title: Scalars raise typing errors for ndenumerate, reshape, transpose, min, etc. Question: username_0: ## Reporting a bug - [x] I am using the latest released version of Numba (most recent is visible in the change log (https://github.com/numba/numba/blob/master/CHANGE_LOG). - [x] I have included below a minimal working reproducer (if you are unsure how to write one see http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports). Sorry if this was reported elsewhere or dismissed as something that numba will not support, but I tried searching and couldn't find anything general on this (only some related issues such as #846, #1825). Basically, numpy will cast scalars to zero-dimensional arrays, while numba will throw a typing error for a few of its functions. Here are a few examples of how this bug manifests: ```python @njit def f(): return list(np.ndenumerate(1)) f() ``` ```python @njit def f(): return np.reshape(1, (1,)) f() ``` ```python @njit def f(): return np.transpose(1) f() ``` ```python @njit def f(): return np.min(1) f() ``` ```python @njit def f(): return np.max(1) f() ``` There are likely a few more instances (`sum`, `prod`, `argmin`, `argmax`, `median`), although I did check the list of supported methods somewhat thoroughly. Each case produces the same error: ``` Invalid usage of Function(<class '...'>) with parameters (int64) ``` and, in all cases, removing the `@njit` decorator gives a reasonable result, as does replacing the scalar with a 1-d array. To motivate why this can matter, a user might call `np.clip(a, a_min=0, a_max=1)` to clip a scalar value for `a`. And a numba implementation of `np.clip` (#3468) might use `np.ndenumerate(a)` which would then run into this corner case. It seems unusual to handle this case within the `np.clip` implementation. But more to the point this is inconsistent with numpy, when I presume one of the goals is for existing `numpy` code to work with `numba` off-the-shelf. Answers: username_1: Thanks for the report. This is an unsupported feature, noted in https://github.com/numba/numba/issues/3175, but seems to be more prevalent. I'll have a think about whether there's a quick way to fix this!
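For contrast, a quick check (not part of the original report) confirms that all of these calls succeed in plain NumPy, which casts the scalar to a zero-dimensional array:

```python
import numpy as np

# Each of these raises a typing error under @njit but works in plain NumPy
print(list(np.ndenumerate(1)))  # [((), 1)]
print(np.reshape(1, (1,)))      # [1]
print(np.transpose(1))          # 1
print(np.min(1), np.max(1))     # 1 1
```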
cfpb/hmda-platform
513371461
Title: 2020 Edits: Q652 Question: username_0: Edit #: Q652 Edit Type: Quality Category: LAR Data Fields: Debt-to-Income Ratio Description: Instructional Text for Users: Please review the information below and update your file, if needed. Edit Logic: The DTI reported is greater than 0 but less than 1, which may indicate a misplaced decimal point. Status: Issue closed
ivarptr/yu-writer.site
1135817259
Title: ๆš‚ๅœๆ›ดๆ–ฐ 1.6.0 ็‰ˆๆœฌ็š„้€š็Ÿฅ Question: username_0: ๅคงๅฎถๅฅฝ๏ผŒ็›ฎๅ‰ Yu Writer ๆ˜ฏไฝฟ็”จ web ๆŠ€ๆœฏๅผ€ๅ‘ๅนถ่ฟ่กŒๅœจ Electron.js ๅนณๅฐไน‹ไธŠ๏ผŒไฝœ่€…็ป่ฟ‡ๅคง้‡็š„ๅฐ่ฏ•ๅ’ŒๅŠชๅŠ›ๅŽไปๆœช่ƒฝ่ฎฉๆ–ฐ็‰ˆๆœฌๆต็•…ๅœฐ่ฟ่กŒ๏ผŒ็‰นๅˆซๆ˜ฏ็ผ–่พ‘่พƒๅคง็š„ๆ–‡ๆกฃๆ—ถ๏ผˆๆฏ”ๅฆ‚่ถ…่ฟ‡ 8k ็š„ๆ–‡ๆกฃ๏ผ‰๏ผŒๅ…ถๅก้กฟๆ„Ÿ่ฎฉไฝœ่€…ๆ— ๆณ•ๆŽฅๅ—๏ผŒ่€Œไฝœ่€…ๅˆๆฐๅฅฝ็ปๅธธ้œ€่ฆ็ผ–่พ‘่ถ…ๅคง็š„ๆ–‡ๆกฃใ€‚ ่™ฝ็„ถๅก้กฟ็š„ๅŽŸๅ› ไธ่ƒฝๅฎŒๅ…จ่ต– Electron.js ๅ’Œ JS๏ผŒไธ่ฟ‡ๅœจๆญค่ฟ‡็จ‹ไธญไฝœ่€…้€ๆญฅ่Œๅ‘่‡ชๅทฑๅผ€ๅ‘่ทจๅนณๅฐ็š„ GUI ็จ‹ๅบๆก†ๆžถ๏ผˆ็”จไบŽๆ›ฟๆข Electron.js๏ผ‰ไปฅๅŠ่ฏญ่จ€๏ผˆๆ›ฟๆข JS๏ผ‰็š„ๆƒณๆณ•๏ผŒ็ป่ฟ‡ๅทฎไธๅคšไธ€ๅนดๅคš็š„ๆ—ถ้—ดๆ‘ธ็ดข็Žฐๅœจไฝœ่€…ๅคง่‡ดๆœ‰็‚นไธ็‚นๅ„ฟๆ–นๅ‘๏ผŒๅฆ‚ๆžœ่ƒฝ้กบๅˆฉๅฎŒๆˆ๏ผŒๅˆฐๆ—ถไผšไปฅๅผ€ๆบๅ…่ดน็š„ๅฝขๅผๅˆ†ไบซๅ‡บ่ฟ™ไธ€ๅฅ—ๅทฅๅ…ท้“พ๏ผŒๅฅฝ่ฎฉ่ขซ Electron.js ๆˆ–่€… JS ่™่ฟ‡็š„ๅผ€ๅ‘่€…ไธๅ†ๆމๅคดๅ‘ใ€‚ไธ่ฟ‡ๅ› ไธบไธชไบบ็š„ๆ—ถ้—ดๅ’Œ็ฒพๅŠ›ๆœ‰้™๏ผŒ็›ฎๅ‰ๅช่ƒฝๆš‚ๅœ Yu Writer ็š„ๆ›ดๆ–ฐ๏ผŒ่ฝฌ่€Œไธ“ๅฟƒๅผ€ๅ‘่ฟ™ๅฅ—ๆก†ๆžถๅ’Œ่ฏญ่จ€๏ผŒๅธŒๆœ›ๅ„ไฝ็†่งฃใ€‚ ๅœจ่ฟ™ๆˆ‘ไนŸๅˆ†ไบซไธ€ไธ‹็›ฎๅ‰็ผ–่พ‘ๆ–‡ๆกฃ็š„ๅทฅๅ…ท๏ผšVSCode๏ผˆhttps://code.visualstudio.com/๏ผ‰ + Markdown Preview Enhanced๏ผˆhttps://marketplace.visualstudio.com/items?itemName=shd101wyy.markdown-preview-enhanced๏ผ‰ VSCode ่™ฝ็„ถไนŸๆ˜ฏไฝฟ็”จ web ๆŠ€ๆœฏ + Electron.js ๅผ€ๅ‘๏ผŒไธ่ฟ‡่ทŸๅคง้ƒจๅˆ†ไปฃ็ ็ผ–่พ‘ๅทฅๅ…ทไธ€ๆ ท๏ผŒๅฎƒ็š„็ผ–่พ‘ๆก†ๆ˜ฏ่‡ชๅฎšไน‰็š„๏ผˆ่‡ชๅทฑ็”ป็š„๏ผ‰๏ผŒ้€š่ฟ‡่ฎก็ฎ—็ญ‰ๅฎฝๅญ—ไฝ“ๆ–‡ๆœฌ็š„้ซ˜ๅบฆๅ’Œๅฎฝๅบฆไปฅๅฎž็Žฐๅ…‰ๆ ‡็š„ๆญฃ็กฎไฝ็ฝฎ็š„ๆ˜พ็คบๅ’Œๆ–‡ๆœฌ้€‰ๅ–๏ผŒ็„ถๅŽ็ผ–่พ‘ๆก†ไป…ไป…็”ปๅ‡บๆญฃๅœจ็ผ–่พ‘็š„ๅฏ่ง†ๅŒบๅŸŸ็š„ๆ–‡ๆœฌๅณๅฏ๏ผŒไธๅœจ็ผ–่พ‘ๅŒบๅŸŸ็š„ๆ–‡ๆœฌๅชไฟ็•™ๅœจๅ†…ๅญ˜้‡Œ๏ผŒๅ…ถไธญ็š„็†่ฎบๅ’Œๅฎž่ทตๆ–นๆณ•ๆœ‰ๅ…ด่ถฃ็š„ๅ„ไฝๅฏไปฅ็ฟป้˜… VSCode ็š„ Blogใ€‚็†่ฎบไธŠ๏ผŒๅฆ‚ๆžœไธ่€ƒ่™‘ๆ–‡ๆœฌๅŠ ไบฎ๏ผˆๅณ่ฏญๆณ•ไธŠ่‰ฒ๏ผ‰็š„่ฏ๏ผŒๆ˜ฏๅฏไปฅ็ผ–่พ‘ๆ— ้™ๅคง็š„ๆ–‡ๆœฌใ€‚ไธ่ฟ‡็ผบ็‚นๆ˜ฏๅฆ‚ๆžœ้€‰็”จ้ž็ญ‰ๅฎฝๅญ—ไฝ“๏ผŒๅˆ™ๅ…‰ๆ ‡ๅ’Œ็ผ–่พ‘ๅฏ่ƒฝไผšๅ…จ้ƒจ้ƒฝไนฑไบ†ๅฅ—๏ผŒไธ่ฟ‡็”จๆฅ็ผ–่พ‘ไธญๆ–‡ๅด็ขฐๅทงๆฒก้—ฎ้ข˜๏ผˆๅ› ไธบไธญๆ–‡ๆ–นๅ—็ญ‰ๅฎฝ๏ผŒๅ“ˆๅ“ˆ๏ผ‰ใ€‚ๅฆๅค–ๅฏไปฅ็”จ Kateใ€Atom Editor ๆ›ฟไปฃ VSCode๏ผŒๅฆ‚ๆžœไธ้œ€่ฆ โ€œ้กน็›ฎโ€ ๆฆ‚ๅฟต๏ผŒ็›ดๆŽฅ็”จ KWriterใ€GEdit ๆˆ–่€… Vim ไนŸ่กŒใ€‚ ๅฆๅค– VSCode ๅ’Œ Yu Writer ไธ€ๆ ท๏ผŒๅ…ถๆ–‡ๆกฃ๏ผˆ้กน็›ฎ๏ผ‰้ƒฝๆ˜ฏ็›ดๆŽฅๅŸบไบŽๆœฌๅœฐ็š„ๆ–‡ไปถๅคนๅ’Œๆ–‡ไปถ๏ผŒๆ‰€ไปฅๅฆ‚ๆžœไฝ ไน‹ๅ‰ๅทฒ็ปๆœ‰ๅพˆๅคš็”จ Yu Writer ็ผ–่พ‘็š„ๆ–‡ๆกฃ๏ผŒๅช้œ€็›ดๆŽฅ็”จ VSCode ๆ‰“ๅผ€ โ€œๆˆ‘็š„ๆ–‡ๆกฃ > Yu Writer Librariesโ€ ๆ–‡ไปถๅคนๅณๅฏ๏ผŒๅฎŒๅ…จไธ้œ€่ฆๅฏผๅ‡บๅฏผๅ…ฅใ€‚ ๅฅฝไบ†๏ผŒ้’ๅฑฑไธๆ”น๏ผŒ็ปฟๆฐด้•ฟๆต๏ผŒๅ’ฑไปฌๅŽไผšๆœ‰ๆœŸใ€‚ Answers: username_1: ็จๆœ‰ๅฏๆƒœ๏ผŒๅ…œๅ…œ่ฝฌ่ฝฌ่ฟ˜ๆ˜ฏ่ง‰ๅพ—Yu Writer็š„ๅŠŸ่ƒฝ้€‚ๅˆๆˆ‘ใ€‚็ฅๆ„ฟไฝœ่€…็š„่ทจๅนณๅฐGUIๆก†ๆžถๆ—ฉๆ—ฅ้ขไธ– username_2: ๅฏไปฅ่€ƒ่™‘ไฝฟ็”จ `Rust` ็‰ˆๆœฌ็š„ `Tauri`๏ผŒ[github](https://github.com/tauri-apps/tauri) username_3: @username_2 ๆ˜ฏ็š„๏ผŒๆญฃๆ˜ฏๅŸบไบŽ่ฟ™ไธช๏ผŒ็„ถๅŽๆญ้…ไธŠ่‡ชๅทฑ็š„่„šๆœฌ่ฏญ่จ€ใ€‚ username_4: ๅŠ ๆฒน๏ผŒๆ—ถ้—ด่ฟ˜้•ฟ๏ผŒ็ฅไธ€ๅˆ‡ๅฎ‰ๅฅฝ username_5: ไฝ ่ฟ™ๆ˜ฏ่‡ชๅทฑๆŠŠ่‡ชๅทฑ้€ผไธŠๆขๅฑฑไบ†ใ€‚ๅŠ ๆฒน๏ผ username_6: ่ฟ™ๆ ท็œŸ็š„ๅฅฝๅ—๏ผŸ
LDSSA/wiki
840488751
Title: How to set delivery deadlines Question: username_0: What should be the rule for delivery dates? Answers: username_1: The differences from last year would be: - in the specialization, instead of the deadlines being per BLU they are per spec, so all BLUs need to be ready 2 months before the first one is released - the capstone would have to be prepared at the same time as spec 6 These values are still not fixed, because in the other issue it's more about the format of the metrics themselves username_1: My current proposal, a bit better defined for this batch:
- All Units of a specialization should be delivered 2 months before the first unit is released:
  - Spec 2: 16th June 2021
  - Spec 3: 13th July 2021
  - Spec 4: 4th August 2021
  - Spec 5: 1st September 2021
  - Spec 6: 29th September 2021
- Hackathon to be delivered 2 months before hackathon (problem + instructor solution)
  - Hckt 1: 15th August 2021
  - Hckt 2: 12th July 2021
  - Hckt 3: 3rd August 2021
  - Hckt 4: 31st August 2021
  - Hckt 5: 28th September 2021
  - Hckt 6: 16th November 2022
- Capstone to be delivered at same time as hckt 6 - 16th November
username_1: @username_2 am going to share in both #teaching and #qa channels but would love your feedback here username_0: Clarification: when we say delivered do we mean A. Delivered to QA B. Delivered and fully QA'd (closed) username_2: @username_1 I agree with the proposed dates; I never really thought of QAing all the spec BLUs at the same time, and I think it's a great idea. @username_0 I would say that we mean option A, delivered to QA. What needs to happen after that is that the QA team needs to commit to reviewing the materials within a certain period of time. I would suggest that for the SLUs we make that period the following two weeks, and for the BLUs/Hackathons a little longer, between two and three weeks. The reason for the longer time for BLUs is that we will try to have the same people review the same spec materials. username_1: @username_0 I meant delivered to QA username_3: Hey @username_0 @username_1 @username_2 ! 👋 This issue has been inactive for more than 28 days now, and is now in `rotten` state. 😢 Can you give it some love? ❤️ If no action is taken this issue will be closed on 16 July 2021. 🧹✨ username_1: Added in the private calendar: https://calendar.google.com/calendar/u/2?cid=Y19vcWhqYmU5cjNjdjVrcWtvdGtlczM3M2RvZ0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t username_1: I'm going to leave this for the weekend in case of last-minute feedback and close it at the end of the weekend Status: Issue closed username_1: Alright, closing since it was added on the calendar
dlemstra/Magick.NET
329295011
Title: iTXt: chunk data is too large after removing XMP data using Adobe XMP Toolkit Question: username_0: Hi, I have a PNG with 8 MB+ of XMP metadata. After removing some of the metadata using [Adobe XMP Toolkit](https://www.adobe.com/devnet/xmp.html) I get the following exception: **Unhandled Exception: ImageMagick.MagickCoderErrorException: iTXt: chunk data is too large `E:\images\blank_after.png' @ error/png.c/MagickPNGErrorHandler/1711** Here is a [link](https://drive.google.com/open?id=1DBHStmY_ZsxXzHQiWrU8lzxGekg9Yy-i) to the file prior to removing the XMP metadata, and a [link](https://drive.google.com/open?id=1kvCPxNa6OHovfRSnEdCvHbwvOUIO-jAz) to the file after removing the XMP metadata. I am able to open the file using Adobe Photoshop and other image editors. Adobe XMP Toolkit does not reduce the file size after removing metadata, so I wanted to open/save it using Magick.NET to reduce the file size. Looks like a dup of #105. I am using Magick.NET-Q16-HDRI-AnyCPU 7.4.6.0 Answers: username_1: Sorry for the late response but I completely forgot about this issue. It looks like you are running into a limit inside the libpng library: the size of your iTXt chunk is larger than that limit. At this moment we have no method to change/remove this limit, but I will try to add that later this week. I was about to publish a new release but I will hold off and add this feature to ImageMagick (and Magick.NET) first. username_1: The feature has been added and you can read the file in the next release with the following code:
```C#
var settings = new MagickReadSettings()
{
    Defines = new PngReadDefines()
    {
        ChunkMallocMax = 0, // Unlimited
    },
};

// The constructor reads the file with the PNG read defines applied.
using (var image = new MagickImage("YourFile.png", settings))
{
    // work with the image here
}
```
username_1: The new release has been published, can you give it another try @username_0? Status: Issue closed username_0: @username_1 Thanks for the fix. I am able to read PNGs w/ large XMP payloads. Keep up the good work!
Enlcxx/angular2-resizing-cropping-image
372197606
Title: customization Question: username_0: Hi, I've really enjoyed using your plugin. Could you please give a bit of information about the following? 1. How can I use 'cropping' within a component, I mean not only in templates? For example, I would like to wrap cropping.fit() with my own function, something like this:
```ts
someMethod() {
  // do something;
  this.cropping.fit();
}
```
2. Is there any possibility to reduce the step of zoomIn/zoomOut? 3. Is there any possibility to limit the output of the picture beyond the borders of ly-img-container? 4. Is there any possibility to make 'fit to screen' the initial (default) state of a picture? Thanks again, and looking forward to your reply Answers: username_1: Hi, to use it in the component you can use ViewChild, for example:
```ts
@Component({
  ...
})
export class MyComponent implements AfterViewInit {
  @ViewChild(LyResizingCroppingImages) imgCropper: LyResizingCroppingImages;
  ngAfterViewInit() {
    console.log(this.imgCropper);
  }
  someMethod() {
    this.imgCropper.fit();
  }
}
```
Everything else is not possible for now; I will keep it in mind for the new features in the next version. username_0: Thank you for the explanation, and thanks for the intention to improve. Please also take a rotation feature into account if you can. I'll keep track of your updates. Thanks again username_2: About the rotation, I was thinking that I could rotate by any degree, for example rotate 45 degrees, 35 degrees, 175 degrees... Obviously, without the image going beyond the boundary of the cropping area. demo: ![ss](https://user-images.githubusercontent.com/26355793/47485481-c667e800-d803-11e8-9ab1-142e4bb98ba7.png) That will be in another pull request. username_1: new version [1.7.5](https://alyle-ui.firebaseapp.com/components/resizing-cropping-images) username_0: Thank you again. Is setScale(scale: number) a setter for the private property zoomScale? Also, is it possible to get access to the picture after it is loaded? username_0: Regarding rotation, I think if it could rotate 90 degrees it would be enough for avatar cropping username_0: Wow, thanks a lot for the new version, it is much better now username_1: Yes, it is available on [npm](https://www.npmjs.com/package/@alyle/ui); [here](https://stackblitz.com/edit/resizing-cropping-image-p2mwnp?file=src%2Fapp%2Fapp.component.html) is a demonstration of `setScale()` with an input range, but there is a bug when the scale is set to `0` username_0: @username_1 Thank you for the example and for the great improvement. Can I ask you one more question? Is there a possibility to get the image size on 'loaded', for example to show the user a message if the size is too small? username_1: Now `loaded` emits an event with `width & height`; here it shows some data of the current image. ![ss](https://user-images.githubusercontent.com/8032887/47540005-acc4b000-d898-11e8-9c8f-c0b02cdd406c.png) This will be available in the next version; if you want to try it now you can install the latest build with `yarn add @alyle/ui@nightly`. Regarding `scale`, is the idea that it would be like this?
```html
<ly-cropping [config]="myConfig" [scale]="scale">
```
username_0: @username_1 I have installed npm i @alyle/ui@latest -s. After running ng serve I got this list of errors:
ERROR in node_modules/@alyle/ui/src/theme/theme2.service.d.ts(86,38): error TS1005: ')' expected.
node_modules/@alyle/ui/src/theme/theme2.service.d.ts(86,81): error TS1005: ')' expected.
node_modules/@alyle/ui/src/theme/theme2.service.d.ts(86,94): error TS1005: '(' expected.
node_modules/@alyle/ui/src/theme/theme2.service.d.ts(86,96): error TS1005: ';' expected.
node_modules/@alyle/ui/src/theme/theme2.service.d.ts(86,104): error TS1005: ';' expected.
node_modules/@alyle/ui/src/theme/theme2.service.d.ts(86,105): error TS1005: ';' expected.
node_modules/@alyle/ui/src/theme/theme2.service.d.ts(86,106): error TS1128: Declaration or statement expected.
node_modules/@alyle/ui/src/theme/theme2.service.d.ts(86,115): error TS1109: Expression expected.
node_modules/@alyle/ui/themes/minima/base.d.ts(50,19): error TS1005: ';' expected.
node_modules/@alyle/ui/themes/minima/base.d.ts(64,1): error TS1128: Declaration or statement expected.
username_1: Thanks for reporting this error; can you send me the result of `ng -v`? username_1: Alyle UI has some requirements for it to work properly: typescript >= 2.9.x, angular >= 6.1.10. But I still found an error; it will be fixed in the next version username_1: new version [1.7.6](https://alyle-ui.firebaseapp.com/) username_0: @username_1 I've updated Angular and it looks like it works OK. Thank you. There are a couple of things left: 1. When I try to take and crop a photo from the camera on my cell phone, the cropper rotates the picture 90 degrees. Could you please check whether this is a bug? 2. The rotation feature is still needed for rotating avatars by 90 degrees; do you plan to add it in future versions? username_1: Yes, use `clean()`, available in version 1.7.7 username_0: Yes, I think it can be an autocrop after each change. I'd like to say thanks again and again for your work and explanations. I couldn't find a decent cropper for Angular before I found your plugin. With your help and the improvements, the plugin now suits the project I work on. I also really like having your lightweight library (which makes my development life easier) in my project. Can I ask you more questions, please? I use the plugin for avatar processing and I'd like to give users the opportunity to edit avatars whenever they want (not only when they upload and save the cropped result, but later as well). For this purpose I need to keep the original pictures and also all the changes that were applied to them before the crop, such as scaling, moving, etc. So when a user presses the edit button I need to display the cropper with the original picture in the position that matches the cropped result; I mean the picture in the cropper should be in the same position as on the cropped avatar. The question is: is there a way to 1) get the original picture, 2) get all the data about the changed picture position, 3) assign pictures to the cropper from an external resource (database or CDN), and 4) assign the data described in point 2 to the cropper? username_1: Getting the original image will be available in the next version. Everything else will possibly be available in version 1.8.0. Thanks for helping to improve this component. username_1: New version [1.7.9](https://alyle-ui.firebaseapp.com/components/resizing-cropping-images) username_3: Is there any way to modify the clean() method? username_3: Is it possible to achieve this kind of task, with selected coordinates and multi-color selection boxes? ![Capture](https://user-images.githubusercontent.com/26071027/94991464-c4386780-059c-11eb-9560-9b77e07fdfc4.PNG) username_1: @username_3 It's not possible, I'm sorry. username_3: How do I set width=500px and height=400px of an image on image upload?
username_1: @username_3 I don't understand what you mean, but you can find the demos [here](https://alyle.io/components/image-cropper).
bfgssr/announce
638313378
Title: ๆฐธไน…ๅ›žๅฎถq็พค 250544326 ๆฐธไน…ไธ‹่ฝฝ็ฆๅˆฉๅœฐๅ€ http://suo.nz/5gSZ8T ๅคงๅฎถ่ฎฐๅพ—ๅŠ ็พค๏ผŒๅˆซ่ตฐไธขไบ†๏ผŒ็พค้‡Œ็ฆๅˆฉๅคšๅคšใ€‚ใ€‚ Question: username_0: ๆฐธไน…ๅ›žๅฎถq็พค 250544326 ๆฐธไน…ไธ‹่ฝฝ็ฆๅˆฉๅœฐๅ€ http://suo.nz/5gSZ8T ๅคงๅฎถ่ฎฐๅพ—ๅŠ ็พค๏ผŒๅˆซ่ตฐไธขไบ†๏ผŒ็พค้‡Œ็ฆๅˆฉๅคšๅคšใ€‚ใ€‚ Answers: username_0: ๆฐธไน…ๅ›žๅฎถq็พค250544326 ๅŠ ็พคๆฐธไธ่ตฐไธข ๅผบ็ƒˆๆŽจ่ไธ‹่ฝฝๆณจๅ†Œใ€‚ๆœ€ๆ–ฐ็ฆๅˆฉ http://534899.com?channel=VKC4Z ๆœ€ๆ–ฐๅคง็ง€ๅนณๅฐ http://asf281-4.7136oe.com:555/?channel=WG789 ๅ…จๆ–ฐๅคง็ง€่ทณ่›‹ๅนณๅฐๅ“ฆ http://weixin.eyalvyou.cn/45XRh3UN_G ๅ•็‹ฌๅคซๅฆป็›ดๆ’ญๅนณๅฐ https://g1b.xyz/1186l.html http://1.172tu1.com/u/1436988 username_0: ๅผบ็ƒˆๆŽจ่ไธ‹่ฝฝๆณจๅ†Œใ€‚ๆœ€ๆ–ฐ็ฆๅˆฉ http://1.172tu1.com/u/1436988 https://plp.meiqww.cn/lp/4?c=1296MY4 ๅ…จๆ–ฐๅคง็ง€่ทณ่›‹ๅนณๅฐๅ“ฆ http://weixin.eyalvyou.cn/45XRh3UN_G ๆœ€ๆ–ฐๅคง็ง€ๅนณๅฐ http://534899.com?channel=VKC4Z Status: Issue closed
kubernetes/kubeadm
1173392357
Title: how to skip "kubeadm config images pull" ? Question: username_0: ## What keywords did you search in kubeadm issues before filing this one?
We want to skip the image pull. https://github.com/kubernetes/kubernetes/blob/0ade4678a7ba527d22f6baa81034dea423267608/cmd/kubeadm/app/util/runtime/runtime.go#L118-L122 We built a new OCI image shim, but the image-existence check lives in that function, so the command cannot tell that an image exists when the user supplies their own OCI shim.
## Is this a BUG REPORT or FEATURE REQUEST?
FEATURE REQUEST
## Versions
**kubeadm version** (use `kubeadm version`): **Environment**: - **Kubernetes version** (use `kubectl version`): - **Cloud provider or hardware configuration**: - **OS** (e.g. from /etc/os-release): - **Kernel** (e.g. `uname -a`): - **Container runtime (CRI)** (e.g. containerd, cri-o): - **Container networking plugin (CNI)** (e.g. Calico, Cilium): - **Others**:
## What happened?
During kubeadm init, we find that images must be pulled whenever the `crictl` check says an image does not exist, even when image handling is implemented by a user-defined shim.
## Anything else we need to know?
https://github.com/fanux/sealos/issues/883 Answers: username_1: If you want to use a private repository, you can use the `--image-repository` flag.
```sh
kubeadm init --image-repository sealos.hub:5000
```
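If the goal is simply to let `kubeadm init` proceed even though the runtime's image check fails, one possible workaround (my assumption, not something confirmed by the maintainers in this thread) is to skip the image-pull preflight check:

```sh
# Ignores only the ImagePull preflight check; all other preflight checks still run
kubeadm init --ignore-preflight-errors=ImagePull
```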
cdnjs/cdnjs
159768502
Title: [Request] Add parallaxify by hwthorn Question: username_0: **Library name:** parallaxify **Git repository url:** https://github.com/username_3/parallaxify **npm package url(optional):** nope **License(s):** Released under the MIT license. http://username_3.mit-license.org **Official homepage:** http://username_3.github.io/parallaxify/ **Wanna say something? Leave message here:** ===================== Notes from cdnjs maintainer: You are welcome to add a library by sending a pull request; it'll be faster than just opening a request issue. And please don't forget to read the guidelines for contributing, thanks!! Answers: username_1: @username_2 It has no tag and no response for a long time, but I can trace the commit log to find out that it has two versions, `v0.0.1` and `v0.0.2`: https://github.com/username_3/parallaxify/commit/da67338e1ecbfd75259f66006bffd7dfae8a14bb#diff-9e1a7db63605440b0fdc61611347bbf5R2 https://github.com/username_3/parallaxify/commit/993793fd318b379a6ed4ec5db05948d04f6ce426#diff-9e1a7db63605440b0fdc61611347bbf5R2 Should I add it without a git tag? Thanks! username_2: Just do it :+1: Status: Issue closed username_3: let me know and we can add the tag username_2: @username_3 didn't see the tag yet, would be good to have it! Is there any plan on this? @username_1 @pvnr0082t please take care of this.
NLua/NLua
54241801
Title: Lua and IIS/WCF no-go Question: username_0: I'm working with Aerospike, which uses Lua to do map/reduce and other things. Anyway, when I try to run the Aerospike sample code in a simple WCF service I get the dreaded dll errors: Unable to load DLL 'lua52': The specified module could not be found. (Exception from HRESULT: 0x8007007E) at KeraLua.NativeMethods.LuaLNewState() at NLua.Lua..ctor() at Aerospike.Client.LuaInstance..ctor() in c:\AerospikeClient\aerospike-client-csharp-3.0.12\AerospikeClient\Lua\LuaInstance.cs:line 32 at Aerospike.Client.LuaCache.GetInstance() in c:\AerospikeClient\aerospike-client-csharp-3.0.12\AerospikeClient\Lua\LuaCache.cs:line 34 at Aerospike.Client.QueryAggregateExecutor.RunThreads() in c:\AerospikeClient\aerospike-client-csharp-3.0.12\AerospikeClient\Query\QueryAggregateExecutor.cs:line 65 at Aerospike.Client.QueryAggregateExecutor.Run(Object obj) in c:\AerospikeClient\aerospike-client-csharp-3.0.12\AerospikeClient\Query\QueryAggregateExecutor.cs:line 55 I've made sure that all the OS stuff is the same (explicitly importing the 64-bit version for a 64-bit build, etc.), and when I get it wrong the build fails. When I get it right the build succeeds and everything's fine until it tries to invoke Lua. I've also tested this outside of Aerospike (i.e. a simple WCF service that tries to run some Lua script), same issue. I can post sample code if you'd like, but basically this happens any time I instantiate a Lua instance (i.e. Lua li = new Lua()) in a WCF or .net Web Application. Answers: username_0: Adding a bit more detail from the constructor call: at KeraLua.NativeMethods.LuaLNewState() at KeraLua.Lua.LuaLNewState() at NLua.LuaLib.LuaLNewState() at NLua.Lua..ctor() My next step is to make sure the correct permissions are assigned to the Lua .dlls, unless there are any other (and probably better) ideas. username_1: I could take a look at this later if you could post a sample project. I'm guessing either the Lua DLL couldn't be found or can't be loaded for some reason. Maybe an architecture mismatch, or a missing dependency (e.g. Visual C runtime?) username_0: I've looked and I'm pretty sure I have the Visual C runtime installed (I'm using VS2013 btw), but just to be sure, which dll should be in System32? I noticed that when I built my application the Lua52.dll didn't end up in the temporary .net folders (i.e. [sysroot]/Microsoft.Net/[version]/Temporary ASP.NET Files...). I think VS might be having an issue copying the .dll? username_0: Here's a link to download all the code (this includes the aerospike stuff and all the relevant nuget packages including NLua): https://www.dropbox.com/s/p8qaijxoe2axh9g/aerospike-client-csharp-3.0.12.zip?dl=0 username_1: I'm not sure which version of the C runtime the Lua DLL requires. If you add a DLL (as a file) to a project in Visual Studio, by default that doesn't copy it. You might need to set the DLL to copy to output. username_0: Yeah, I have that option set to "true". username_2: Hi @username_0, are you able to create a Console Application with NLua? Did you download the library or install it from NuGet? Thank you. username_0: Yes, console apps work fine. In both instances (console and web) I used nuget to install the package. Once I started getting errors, I added the source projects so I could see if there was anything else going on, but it looks like it mostly boils down to some issue with IIS and Lua51.dll/Lua52.dll.
On the permissions front, I tried building my code in "release" mode (so it doesn't use the temporary .net folders), put the Lua52.dll in the bin directory manually, and set permissions manually. That, so far anyway, hasn't worked either. username_0: ...Adding, I'm running Windows 8.1, 64 bit. username_1: I got the same issue, Windows 7, 64 bit, and everything seems to be OK except lua52.dll. I don't know where to get the right dll; I have many versions and copied them to windows/ and windows/system32 etc., but got the same error: luanet_registryindex username_2: Hi @username_3, you can't use the lua52.dll from Lua Binaries; you need to use the lua52 from NLua. It is basically the same code, but the NLua version adds a few methods to work with NLua username_2: @username_0 I am not sure if this could be the issue, but check whether your service is allowed to do P/Invoke. If you are sure you can do P/Invoke: * Check that you have installed vcredist 2013 x86/x64. If you want to make NLua work with both x64 and x86, you can add x64 and x86 folders right next to NLua.dll and KeraLua.dll and copy the lua52.dll (from the NLua README) into those folders. (This is exactly what the NuGet version does.) If you can't P/Invoke (due to security restrictions) you will need to use NLua_Safe (a C#-only implementation; it is about 10x slower) username_0: I think I'll start with NLua_Safe and, assuming it works, if performance is really an issue (the Aerospike cluster should be doing most of the heavy lifting) then I'll hammer some more on getting P/Invoke to work. Thanks! username_0: NLua_Safe is working for me now. I will revisit if, for some reason, performance becomes an issue. Thanks! username_2: Good :) Status: Issue closed
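To make the folder arrangement username_2 describes concrete, here is a rough sketch of the layout under the application's bin directory (file names per the NLua README; the exact tree is an assumption based on the comment above):

```
bin\
├── NLua.dll
├── KeraLua.dll
├── x86\
│   └── lua52.dll    <- 32-bit NLua build of lua52
└── x64\
    └── lua52.dll    <- 64-bit NLua build of lua52
```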
rauenzi/BetterDiscordAddons
1111818745
Title: [Bug] AccountDetailsPlus No settings menu Question: username_0: The new update of AccountDetailsPlus does not have a settings option even after a manual update to the plugin. Answers: username_1: Can you elaborate? username_0: ![image](https://user-images.githubusercontent.com/69104450/150673478-96546933-b357-4a6d-b3c4-f308cfbb3ae8.png) no settings button on the plugin info Status: Issue closed username_0: Is there supposed to be one? Before the update there was one. Was this removed? username_1: I just pushed an update to resolve this. username_0: I updated it, thanks for the help. ๐Ÿ‘
aws-samples/cloudfront-authorization-at-edge
641199914
Title: [Question] Accessing S3 bucket from within the SPA? Question: username_0: Hi, sorry if this is slightly off topic, but I was wondering if it is possible to access the private S3 bucket hosting the SPA with a jQuery `.get` call from within the SPA? Background: - I'm trying to set up authentication to access a private S3 bucket, and I'm using this open source repository (https://github.com/rufuspollock/s3-bucket-listing) to list all items within the S3 bucket once authenticated. (I have also opened a ticket there, https://github.com/rufuspollock/s3-bucket-listing/issues/101, where I go through some of the steps I've done.) - I've run the cloudformation template and replaced the react SPA with the `index.html` mentioned in the above ticket. Logging in works great and I can nearly see the entire `index.html` page, but it doesn't display anything from the S3 bucket itself. I have some theories about this, but wasn't sure who to ask, so I'm hoping someone can point me in the right direction. - In [list.js](https://github.com/rufuspollock/s3-bucket-listing/blob/gh-pages/list.js#L122), it uses `$.get(s3_rest_url)`; however, this returns an empty string (from what I can tell), and I think this is because `list.js` does not pass in the authentication tokens required to access the S3 bucket (in the same way the user needed to sign in to access the `index.html`). If this is the case, would a solution be to read the cookies from the browser after the user logs in, in the `list.js` file, and pass one of the auth tokens in the `$.get(s3_rest_url)` so that it can access the S3 bucket? Please let me know if you need any more information; any advice would be greatly appreciated! Thank you. Answers: username_1: Hi @username_0 Access a private S3 bucket? Like the private S3 bucket from the solution here? That should work seamlessly. For other buckets, you could front them with this CloudFront distribution too and thus protect them with Lambda@Edge. But that bucket needs to add the CloudFront OAI from this solution to its bucket policy, so that CloudFront can read the items. From the looks of it, rufuspollock/s3-bucket-listing is coded against being run as an S3 website (not CloudFront), which is another ball game. username_0: Hi @username_1, thank you for the response! So the cloudformation template sets up the correct permissions to access the S3 bucket, namely:
```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <cloudfront_id>"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<bucket_name>/*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <cloudfront_id>"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::<bucket_name>"
        }
    ]
}
```
And I am able to access the files within the bucket when I go to something like `https://<cloudfront_url>.cloudfront.net/test/hello-world.txt`; it's just that the `index.html` file doesn't seem able to list the files out, so I assumed this could be a jQuery/JavaScript authentication issue in the `list.js` file. Should the solution be using the S3 bucket in website mode? username_0: In case it's useful, these are the steps I've done so far: - Run the cloudformation template provided by this repository, however removing the HTTPHeaders for debugging reasons.
- With the newly created S3 bucket, I removed the react application that was in there and placed the `index.html` and `list.js` from the `s3-bucket-listing` repository in it, with `index.html` edited to be:
```
<!DOCTYPE html>
<html>
<head>
    <title>S3 Bucket Listing Generator</title>
</head>
<body>
    <div id="navigation"></div>
    <div id="listing"></div>

    <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>
    <script type="text/javascript">
      var BUCKET_URL = 'https://<cloudfront_url>.cloudfront.net';
    </script>
    <script src="list.js"></script>
</body>
</html>
```
- I also added a `hello-world.txt` file to the bucket for test purposes. - Navigating to `https://<cloudfront_url>.cloudfront.net` works fine, and I can log in correctly with the details provided. The `index.html` just doesn't seem to be able to get the list of files in the S3 bucket correctly, however. username_2: I think the issue is that accessing a file in an app, or the app itself (e.g. index.html), is an entirely different matter than having the **app itself** access a file in the bucket. In the first case, the security principal is a browser attempting to read a file in a bucket, and CloudFront is all set up with permissions for the bucket; the principal authenticates through a Cognito **user pool**, gets directed through CloudFront to the bucket file requested and, as you point out, all is golden. In the second case it's an entirely different permissions landscape. The security principal is code inside the SPA or regular app itself (not the currently accessing user from the user pool), and that code has **no permissions whatsoever** to reach outside the app and do anything, including read from the bucket it is running in. By the way, that presents an interesting bootstrapping problem. Any permissions CloudFront had via OAI to let a browser read an app file do not transfer to the app code's own permissions. There are some insecure ways to solve this; one more secure possibility is to give the app itself the desired role using a mechanism like a Cognito identity pool. username_0: Hi @username_2, thank you for the detailed response and explanation of the problem I'm having. Can you please explain how you'd go about giving the app the necessary role using a Cognito identity pool within the `index.html` file? I suppose the idea is that once the user has authenticated, the `index.html` page already has the correct permissions to access the bucket and doesn't need additional authentication? I'm not too sure how to go about this, as this is all quite new to me. username_2: An index.html might be *part* of an app, but it's just HTML, not executable code. You'd be doing the identity assumption in the JavaScript code components in the React or Angular app. This [AWS doc](https://github.com/amazon-archives/amazon-cognito-identity-js) gives a decent overview of the bits and pieces. You'll need to google around for more examples though. AWS seems very big on promoting their [Amplify framework](https://docs.amplify.aws/lib/auth/getting-started/q/platform/js) for this purpose, but the projects I work on access the identity pool directly for authorization, without Amplify.
A very quick look turned up the very [last paragraph](https://aws.amazon.com/blogs/mobile/accessing-your-user-pools-using-the-amazon-cognito-identity-sdk-for-javascript/) in this article and its [associated repo](https://github.com/amazon-archives/amazon-cognito-identity-js), which illustrates in a cursory way how you'd bring an identity pool into an app that is already using a user pool. It's a pre-amplify example, or you could bite the bullet and go down the AWS-favored amplify path, which seems to be better documented. username_0: Thanks @username_2 for the links and explanations. I'll have a go looking through them and see if I can get something together. Thank you! username_3: @username_0 Actually @sathed came up with this one, because we just had a really similar issue where we're using mkdocs. While we could authenticate and log in to the index page fine (even subsequent html pages if you had the direct link), none of the links worked and the search bar that searched for pages didn't work. We actually fixed this by deleting the `Content Security Policy` that the lambdas use. You can change this policy to your needs in the template. You will find a section in the template that looks like this; modify it to your needs to allow the urls you will be using.
```
...
...
  HttpHeaders:
    Type: String
    Description: The HTTP headers to set on all responses from CloudFront. To be provided as a JSON object
    Default: |-
      {
        "Content-Security-Policy": "default-src 'none'; img-src 'self'; script-src 'self' https://code.jquery.com https://stackpath.bootstrapcdn.com; style-src 'self' 'unsafe-inline' https://stackpath.bootstrapcdn.com; object-src 'none'; connect-src 'self' https://*.amazonaws.com https://*.amazoncognito.com",
        ...
        ...
      }
...
...
```
Be sure to also update the version parameter of the lambdas so they will be redeployed. Since you updated the version of the lambdas, be sure to update that version reference in your CloudFront behaviors as well. (We were doing it manually, so those behaviors were not set by this template.)
Finally, you can verify whether the content policy is in your deployed lambdas by opening up the lambda and looking at `configuration.json`. Now with all of this said, I find it a little silly to have the `Content-Security-Policy` at all, since there is authorization in front of this anyway; anyone scanning this will get an authorization requirement sent back to them. So you can just delete this header altogether, and then you can list pages from other pages that live in the same s3 bucket (or other buckets) just fine. username_4: @username_0 I'm also trying to implement this with https://github.com/rufuspollock/s3-bucket-listing but unfortunately I'm unable to see the bucket contents. Were you able to make this work? I also have "var BUCKET_URL = 'https://<cloudfront_url>.cloudfront.net';" in my index.html and disabled "Content-Security-Policy" as @username_3 suggested. I would appreciate it if you could share your solution. Thanks ![image](https://user-images.githubusercontent.com/94445331/142408817-658b2a3f-9f2f-4c66-981c-5333cda965bb.png) username_1: This really is a question for the author of https://github.com/rufuspollock/s3-bucket-listing: to support S3 bucket access via CloudFront, instead of going to S3 directly. Accessing an S3 bucket via CloudFront is a whole different thing than accessing an S3 bucket directly. CloudFront in front of S3 is supposed to abstract S3, so that the client does not even know the origin is S3. A call like Bucket.ListObjectsV2 can be done directly against the S3 bucket, but not against CloudFront in front of S3. @username_2 is right that what you need to make direct S3 access work with the solution here is a Cognito Identity Pool. With that, you need to create a web app that uses the JWTs that this solution places in the cookies and trades them in for AWS credentials using the Cognito Identity Pool. With those credentials you can then use e.g. the s3-bucket-listing solution you mention. Since this question comes by more often, it'd be great to document in this repo how to set up an identity pool and do direct bucket access. Maybe even add a parameter "CreateIdentityPool" to make it easier. That's a fair amount of work though.
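A rough sketch of the JWT-for-credentials exchange username_1 describes, using the AWS SDK for JavaScript v2; every identifier here (region, pool IDs, bucket name, and how you obtain `idToken` from the solution's cookies) is a placeholder assumption, not something this repo ships:

```js
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000',
  Logins: {
    // key = your user pool's provider name; value = the ID token from the auth cookies
    'cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE': idToken,
  },
});

AWS.config.credentials.get(function (err) {
  if (err) { console.error(err); return; }
  var s3 = new AWS.S3();
  // Direct bucket access with the temporary credentials from the identity pool
  s3.listObjectsV2({ Bucket: 'my-private-bucket' }, function (err, data) {
    if (err) { console.error(err); return; }
    data.Contents.forEach(function (obj) { console.log(obj.Key); });
  });
});
```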
chakra-ui/chakra-ui
626943608
Title: [Icon] backface-visibility: hidden; Question: username_0: What is the purpose of this line? https://github.com/chakra-ui/chakra-ui/blob/14b0ad5fb5c821032168793406d5a0c2c3cfc849/packages/chakra-ui/src/Icon/index.js#L9 It creates a new compositing layer, which has performance implications! Answers: username_1: Thanks for pointing this out, @username_0. We can do without that attribute. It's been removed for the next release. Status: Issue closed
ruslanskorb/RSKImageCropper
63272873
Title: hey. i need a little help Question: username_0: If I have a picture taken in iPhone landscape mode, the cropper view shows the image turned 90 degrees to the left. How can I make it never rotate the image, under any condition? By the way, sorry if my English is bad. Answers: username_0: Also, the cropped version is a square, not a circle-shaped image. I would really appreciate your help Status: Issue closed username_1: [1] You need to rotate the image to the desired angle before passing it to `RSKImageCropViewController`. [2] You need to set the property `applyMaskToCroppedImage` to `YES`.
benperk/ASA
794325426
Title: Pg. 610 question # 4, answer A appears to be incorrect. Question: username_0: Pg. 610 question # 4, answer A appears to be incorrect. There is no such tool as _Database Management Assistant (DMA)_; there is, however, a _Database Migration Assistant (DMA)_. Status: Issue closed Answers: username_1: This looks like a name change, because there was a feature called Database Management Assistant (DMA) which does the same as the Database Migration Assistant (DMA). I agree that the current name is Database Migration Assistant (DMA). I will get the question updated, as well as the discussion about DMA starting on page 559. With that noted, the answer on page 689 for this question is correct, in that only D is not a correct answer; A, B and C are valid.
OIEau/uwwtd
182567563
Title: SK UWWTPs that do not appear in the home map Question: username_0: There are agglomerations with very small UWWTPs that do not appear in the map http://webnuxdev.rnde.tm.fr/uwwtd_sk/treatment-plant/skcao30603445137412/2014 http://webnuxdev.rnde.tm.fr/uwwtd_sk/treatment-plant/skcao30603445137411/2014 Normally, if it belongs to an agglomeration of 2000 p.e. or more, it has to be displayed. Only agglomerations of less than 2000 p.e. do not have to appear. Answers: username_1: It does effectively appear in the tool, but as the 2 TPs have exactly the same coordinates and size in the same category, the two markers are drawn one on top of the other and overlap, with the non-compliant one appearing on top: again a data provision issue. Status: Issue closed
reuters-graphics/action_covid-stringency-index
619966907
Title: Data updated only till march end Question: username_0: The endpoints were updated on 28 April 2020. You may want to migrate to Version 2. https://covidtracker.bsg.ox.ac.uk/about-api The data migration is complete, and /data/OxCGRT_latest.csv now has the latest data from the new database. Please note there is a completely different data structure here. Old legacy data can be found in /legacy_data_20200425. https://github.com/OxCGRT/covid-policy-tracker#update-28-april-2020-1300-utc
Jeffail/gabs
135654976
Title: Outer array? Question: username_0: Can you provide examples of how to access values when the top-level json is an array? i.e.
```
[{"hello":"friend"},{"good":"morning"}]
```
Maybe it'd be nice to be able to get a slice of jsonParsed? Or be able to search via index like "0.hello"? Answers: username_1: Hey @username_0, to access a particular index you can use `jsonParsed.Index(i)`, so for your example of "0.hello" you would need `jsonParsed.Index(0).S("hello")`. You can also iterate the elements in an array by accessing a slice of the children with `jsonParsed.Children()`. Status: Issue closed
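Putting username_1's answer together into a runnable sketch (gabs v1 API, where `Children()` also returns an error; the JSON literal comes from the question):

```go
package main

import (
	"fmt"

	"github.com/Jeffail/gabs"
)

func main() {
	jsonParsed, err := gabs.ParseJSON([]byte(`[{"hello":"friend"},{"good":"morning"}]`))
	if err != nil {
		panic(err)
	}

	// "0.hello" equivalent: index into the outer array, then search by key
	fmt.Println(jsonParsed.Index(0).S("hello").Data()) // friend

	// Iterate all elements of the outer array
	children, _ := jsonParsed.Children()
	for _, child := range children {
		fmt.Println(child.String())
	}
}
```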
kubernetes-sigs/kubespray
695981094
Title: kubelet_flexvolumes_plugins_dir undefined in v2.14.0 Question: username_0: **Environment**:
- **Cloud provider or hardware configuration:** openstack
- **OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`):**
```
Linux 3.10.0-1127.19.1.el7.x86_64 x86_64
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
```
- **Version of Ansible** (`ansible --version`):
```
ansible 2.9.6
config file = /Users/v.fidunin/workdir/kubespray/test_spray/ansible.cfg
configured module search path = ['/Users/v.fidunin/workdir/kubespray/test_spray/library']
ansible python module location = /Users/v.fidunin/workdir/kubespray/test_spray/env/lib/python3.8/site-packages/ansible
executable location = /Users/v.fidunin/workdir/kubespray/test_spray/env/bin/ansible
python version = 3.8.5 (default, Jul 21 2020, 10:48:26) [Clang 11.0.3 (clang-1103.0.32.62)]
```
- **Version of Python** (`python --version`):
```
Python 3.8.5
```
**Kubespray version (commit) (`git rev-parse --short HEAD`):**
```
a1f04e98
```
tag v2.14.0
**Network plugin used**: calico
**Command used to invoke ansible**:
```
ansible-playbook -i inventory/k8s-vf/hosts.yaml --become --become-user=root cluster.yml
[Truncated]
+{% endif %}
 resources:
 requests:
 cpu: 200m
@@ -69,10 +71,12 @@ spec:
 value: /etc/config/cloud.conf
 hostNetwork: true
 volumes:
+{% if kubelet_flexvolumes_plugins_dir is defined %}
 - hostPath:
 path: "{{ kubelet_flexvolumes_plugins_dir }}"
 type: DirectoryOrCreate
 name: flexvolume-dir
+{% endif %}
 - hostPath:
 path: /etc/kubernetes/pki
 type: DirectoryOrCreate
```
If this is OK, I can make a PR. Answers: username_1: Looks fine by me; if you could submit a PR, that'd be very welcome @username_0 username_0: @username_1 done
openshift/cluster-monitoring-operator
433605643
Title: Use specific template for altertmanager Question: username_0: Hello, is there any way to use a custom template for alertmanager?
```
.../opsgenie.tmpl:/etc/alertmanager/templates/opsgenie.tmpl:ro
```
and then using it
```
- name: opsgenie
  opsgenie_configs:
  - api_key: ...
    send_resolved: true
    teams: SuperTeam
    tags: '{{ template "opsgenie.default.tags" . }}'
    message: '{{ template "opsgenie.default.message" . }}'
    source: '{{ template "opsgenie.default.source" . }}'
    description: '{{ template "opsgenie.default.description" . }}'
    priority: '{{ template "opsgenie.default.priority_mapper" . }}'
```
Answers: username_0: Nvm, I have found a way to do it. I don't think it's the best solution, but if anyone has a better one, let me know. 1. Add persistence to the alertmanagers 2. Copy your template file to .../alertmanager1/alertmanager-db/templates/opsgenie.tmpl 3. Add this to the alertmanager config:
```
templates:
- /alertmanager/templates/opsgenie.tmpl
```
username_1: Closing out as there seems to be a fix. Status: Issue closed
MicrosoftDocs/powerbi-docs
1149742736
Title: Add Removing permissions Question: username_0: Please add a section "Removing permissions from a user": - When removing users from the Permissions tab, you also need to remove the user manually from the dataset's Manage permissions. - Note: if there are many datasets, you may not see the Update button anymore. Please change the screen resolution or the zoom setting of the browser. Example: the Update button is hidden below the last message of the highlighted window: ![image](https://user-images.githubusercontent.com/98977972/155606336-dc57c15f-6837-4ef4-b698-e94293a1df98.png) Thanks, Martina
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 72ff3e88-8793-8e2b-bdb2-0e3d09372b93
* Version Independent ID: 0e1245cb-d7af-c73c-39c4-39f5665ae51a
* Content: [Publish an app in Power BI - Power BI](https://docs.microsoft.com/en-us/power-bi/collaborate-share/service-create-distribute-apps)
* Content Source: [powerbi-docs/collaborate-share/service-create-distribute-apps.md](https://github.com/MicrosoftDocs/powerbi-docs/blob/main/powerbi-docs/collaborate-share/service-create-distribute-apps.md)
* Service: **powerbi**
* Sub-service: **pbi-collaborate-share**
* GitHub Login: @maggiesMSFT
* Microsoft Alias: **maggies**
Answers: username_1: @username_0 - I have assigned a writer to track this issue. You may also submit a direct edit as a contributor (preferred) or create a work item for this change. Details can be found in the engineering hub. username_2: Hi @username_0, thanks for bringing this back to my attention. What you describe is also true in the opposite direction: when you remove someone's permissions on the manage permissions page, it doesn't remove the permissions they get if they have them through an app. The product team is aware of this inconsistency but doesn't have a fix yet. I think the manage permissions -> app direction of the problem is documented somewhere, but not the direction that you mentioned. I will check the docs and make sure this situation is documented adequately in both directions. Thanks again for taking the trouble to reach out to us! @username_1 username_2: Hi @username_0, I have updated the doc to provide info about removing permissions given through apps. The updates should be online in a day or so. Thanks for your input!
mozilla/addons-server
207615482
Title: Support authentication with addons-frontend in development workflow Question: username_0: ### Describe the problem and steps to reproduce it: Run addons-server with docker and addons-frontend AMO with its proxy server. Logging in from addons-frontend works but logging in from addons-server on localhost:3000 does not. ### What happened? FxA redirected to http://olympia.dev. ### What did you expect to happen? FxA redirected to http://localhost:3000 and log in was successful. ### Anything else we should know? This is really a configuration problem. If addons-server was setup to use the "local" FxA config then this would work. There should be some way to tell addons-server to use this configuration instead of the default. The code path for the log in link starts from [olympia.accounts.helpers](https://github.com/mozilla/addons-server/blob/4a59ee4b4e7bf425fdc9e64afbc3bbe57a942bc9/src/olympia/accounts/helpers.py). Status: Issue closed Answers: username_1: This works now.
koocyton/reactor-guice
1142233788
Title: ้กน็›ฎๅผ•็”จไบ†mysql:[email protected]็ญ‰19ไธชๅผ€ๆบ็ป„ไปถ๏ผŒๅญ˜ๅœจ1ไธชๆผๆดž๏ผŒๅปบ่ฎฎๅ‡็บง Question: username_0: ๅคงไฝฌ๏ผŒ็œ‹ไฝ ่ฟ™ไธช้กน็›ฎ่ฐƒ็”จไบ†mysql:[email protected]็ญ‰19ไธชๅผ€ๆบ็ป„ไปถ๏ผŒๅญ˜ๅœจ1ไธชๅฎ‰ๅ…จๆผๆดž๏ผŒๅปบ่ฎฎไฝ ๅ‡็บงไธ‹ใ€‚ ``` ๆผๆดžๆ ‡้ข˜๏ผšOracle MySQL ่พ“ๅ…ฅ้ชŒ่ฏ้”™่ฏฏๆผๆดž ๆผๆดž็ผ–ๅท๏ผšCVE-2021-2471 ๆผๆดžๆ่ฟฐ๏ผš Oracle MySQLๆ˜ฏ็พŽๅ›ฝ็”ฒ้ชจๆ–‡๏ผˆOracle๏ผ‰ๅ…ฌๅธ็š„ไธ€ๅฅ—ๅผ€ๆบ็š„ๅ…ณ็ณปๆ•ฐๆฎๅบ“็ฎก็†็ณป็ปŸใ€‚ Oracle MySQL ็š„ MySQL Connectors ไบงๅ“ไธญๅญ˜ๅœจ่พ“ๅ…ฅ้ชŒ่ฏ้”™่ฏฏๆผๆดž๏ผŒ่ฏฅๆผๆดžๅ…่ฎธ้ซ˜็‰นๆƒๆ”ปๅ‡ป่€…้€š่ฟ‡ๅคš็งๅ่ฎฎ่ฎฟ้—ฎ็ฝ‘็ปœๆฅ็ ดๅ MySQL ่ฟžๆŽฅๅ™จใ€‚ๆˆๅŠŸๆ”ปๅ‡ปๆญคๆผๆดžไผšๅฏผ่‡ดๅฏนๅ…ณ้”ฎๆ•ฐๆฎ็š„ๆœชๆŽˆๆƒ่ฎฟ้—ฎๆˆ–ๅฏนๆ‰€ๆœ‰ MySQL ่ฟžๆŽฅๅ™จๅฏ่ฎฟ้—ฎๆ•ฐๆฎ็š„ๅฎŒๅ…จ่ฎฟ้—ฎ๏ผŒไปฅๅŠๅฏผ่‡ด MySQL ่ฟžๆŽฅๅ™จๆŒ‚่ตทๆˆ–้ข‘็น้‡ๅคๅดฉๆบƒใ€‚ ๆผๆดž็บงๅˆซ๏ผšไธญๅฑ ๅฝฑๅ“่Œƒๅ›ด๏ผš[0, 8.0.27) ๆœ€ๅฐไฟฎๅค็‰ˆๆœฌ๏ผš8.0.27 ๅผ•ๅ…ฅ่ทฏๅพ„๏ผš com.doopp:reactor-guice:0.14.1:->com.doopp:[email protected]>mysql:[email protected] ``` ่ฟ˜ๆœ‰ๅ…ถๅฎƒๅ‡ ไธชๆผๆดž๏ผŒไฟกๆฏๆœ‰็‚นๅคšๆˆ‘ๅฐฑไธ่ดดไบ†๏ผŒไฝ ่‡ชๅทฑ็œ‹ไธ‹ๅฎŒๆ•ดๆŠฅๅ‘Š๏ผšhttps://www.mfsec.cn/jr?p=i9b03d
HDFGroup/h5pyd
736727753
Title: `hsload --link` fails for chunked datasets Question: username_0: I create a simple file with a single chunked dataset
```python
import h5py

with h5py.File("chunked.h5", "w") as h5file:
    h5file["/dset"] = h5file.create_dataset("chunked", (1000, 1000), chunks=(100, 100))
```
Then, when running `hsload --link` on `chunked.h5`, I get an error:
```
  File ".../h5pyd/h5pyd/_apps/utillib.py", line 341, in create_dataset
    num_chunks = dsetid.get_num_chunks(spaceid)
AttributeError: 'h5py.h5d.DatasetID' object has no attribute 'get_num_chunks'
```
Apparently, `get_num_chunks` is part of the `h5py` API only since 3.0 (https://docs.h5py.org/en/stable/whatsnew/3.0.html?highlight=get_num_chunks#new-features), but `h5pyd` needs version 2.9 (https://github.com/HDFGroup/h5pyd/commit/a9eb75bd8ce71953b464f3e28efdd8153d2d79ee)? Answers: username_0: Using `hsload --link` with `h5py==3.0.0` works without problems. username_1: Yes, the change of the h5py major version number was tripping up my version checking. This should be working now with either h5py 2.9.0 or >3.0, see: https://github.com/HDFGroup/h5pyd/commit/5cfaa5badd01d2cd359104ca652bb0527697d00d. Status: Issue closed
newrelic/infra-integrations-sdk
688749900
Title: Update metrics API endpoint for FedRAMP - POMI Question: username_0: Currently, dimensional metrics go through CloudFlare and are routed to a Cell. Neither of these mechanisms is FedRAMP approved. For FedRAMP customers there is a special gov-infra-api.newrelic.com domain that uses neither CloudFlare nor cells. In order to move Infra to dimensional metrics in a FedRAMP-approved way, we need to make sure the dimensional metrics capability follows the same approach: going straight to CHI and avoiding CloudFlare and cells. Instead of using the current domain, metric-api.newrelic.com, we will be using infra-api.newrelic.com. For POMI we need to update the default URL. Answers: username_1: The SDK does not have a concept of endpoints, so it does not understand what _FedRAMP_ is. This is used to communicate with the agent, so the changes are most likely needed there (if there are any to be made). Status: Issue closed
rodjek/puppet-lint
420098775
Title: Check for bare words outside of attribute values Question: username_0: Bare words are common for setting attribute values. However, if you simply forget a $ on a variable you get a bare word in a place you probably didn't want it. ``` $my_variable = 'test' if my_variable != test { fail('you forgot the $') } ``` We should emit a warning for any bare words not on the right hand side of an attribute. Answers: username_1: It's very common for bare words to be used after `include`, `require` and `contain`. Any implementation should probably also allow this.
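For contrast with the buggy snippet in the report, here is a hypothetical corrected version (not taken from puppet-lint's docs) that keeps the `$` and quotes the string, so no bare word appears outside an attribute value:

```puppet
$my_variable = 'test'

if $my_variable != 'test' {
  fail('this branch is never taken')
}
```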
darrenstarr/VtNetCore.UWP
304724338
Title: showing some sample images from the demo app on readme will be helpful Question: username_0: If we could have a little explanation, along with images, of what exactly a UWP terminal control is, that would be helpful. Thanks, and keep up the good work. Answers: username_1: Hey @username_0, thanks for the feedback. It's still quite early in development, and while it's out in the wild, it's really not polished as of yet. I'm in the process of adding a scripting engine as well as support for Windows Mixed Reality. There's also a chance that instead of using UWP, it will use either Xamarin Forms or Avalon UI. Let's see where it goes from here. I think I'm about a month away from investing in setting the fixed direction the VtNetCore library will go. username_0: @username_1 Just out of curiosity, is it a command prompt terminal? Or a XAML UI control with Fluent Design, or something else?
redisson/redisson
674856883
Title: InternalThreadLocalMap memory leak Question: username_0: **Expected behavior** **Actual behavior** After running for a long time, the application gets an OOM. **Steps to reproduce or test case** Our app had been running for about 1 month. **Redis version** Cloud service, not sure. **Redisson version** 3.11.2 **Redisson configuration** There is a very big array (size 1048576) in each ThreadLocal; the new arrays are created in the old gen and allocation always fails. [stacktrace.log](https://github.com/redisson/redisson/files/5040176/stacktrace.log) ![heap-dump1](https://user-images.githubusercontent.com/22172401/89625959-3fe2a480-d8cb-11ea-8fa3-32567db5ee15.png) ![heap_dump2](https://user-images.githubusercontent.com/22172401/89625969-44a75880-d8cb-11ea-8ba1-7a7d1bbe5bfa.png) Answers: username_1: Duplicate of https://github.com/redisson/redisson/issues/1975 Status: Issue closed
neo4j-graphql/neo4j-graphql-js
594009439
Title: Nested Filters do not work as expected for Interface types Question: username_0: **Version**: 2.13.0 **Problem**: Nested filters no longer appear to work properly with respect to interfaces. **Description**: If I use the concrete type for a property in my Schema, I can use that property's generated Filter type as expected, but if I refer to the type by interface, the filter no longer works. **Example** Given a Schema: ``` type Person implements SentientBeing { id: ID! email: String! } type Ship { id: ID! name: String! captain: SentientBeing! @relation(name: "PILOTS", direction: "IN") } ``` The following query will not return any Ships: ``` query GetShipsByCaptainEmail ( $email: String! ) { Ships(filter: { captain: { email: $email } }) { id } } ``` However, if in the schema one changes the value of Ship.captain from SentientBeing! to Person!, the query will return the expected Ships. Status: Issue closed Answers: username_1: This should now be fixed, please reopen if you're still seeing this problem.
pinpoint-apm/pinpoint
904860408
Title: Plugin integration test fails to recognize local maven repositories Question: username_0: Plugin integration test fails to recognize the jar files in local maven repositories specified with <repositories><repository></repository></repositories> tags in pom.xml. It seems like the maven-resolver plugin skips scanning when the repository id is null, while DependencyResolver.java in pinpoint-test uses null as repository ids.<issue_closed> Status: Issue closed
xguan2014/xguan2014.github.io
835750113
Title: ใ€.Net Coreใ€‘ๆถˆๆฏ้˜Ÿๅˆ—Channel | ๆทฑๅบฆไผ˜ๅ…ˆ Question: username_0: https://blog.bfsdfs.com/2021/03/14/%E3%80%90.Net%20Core%E3%80%91%E6%B6%88%E6%81%AF%E9%98%9F%E5%88%97Channel/ ่ฝฌ่‡ช๏ผšhttps://www.cnblogs.com/tiger-wang/p/14068973.html ๅ‰่จ€ไปŠๅคฉ็ป™ๅคงๅฎถๅˆ†ไบซไธ€ไธชๅพฎ่ฝฏๅฎ˜ๆ–น็š„็”Ÿไบง่€…/ๆถˆ่ดน่€…ๆ–นๆกˆ็š„็‰นๆ€ง่งฃๅ†ณ๏ผšChannelใ€‚ ChannelๅœจSystem.Threading.Channelsๅ‘ฝๅ็ฉบ้—ดไธ‹๏ผŒCore 2.1ไฝฟ็”จๆ—ถ๏ผŒ้œ€่ฆไปŽNugetไธŠๅฎ‰่ฃ…ใ€‚ 1%ย dotnetย addย packageย System.Threading.C
StyraHem/ShellyForHASS
611223857
Title: switches wait until polling interval to update state Question: username_0: I'm using Shelly 1 switches. I just started using them a couple of days ago. When I use the lovelace "Picture Elements Card" with an entity on my floor plan, I can click on the light and the real switch immediately turns on. The GUI, however, doesn't get an indication of the state change until the next polling cycle. The default is 60 seconds, so the switches appear to be very non-responsive. Since the GUI state doesn't change, clicking the element again will not change the state. So in other words I cannot turn a light on and then back off before the polling cycle. Watching the database state_change events, the database record is also only updated on this polling cycle. Ideally the state of the switch would update when the switch is toggled and not wait until the next polling cycle. I also use different brands of switches and the Shelly is the only one that works in this fashion.
Answers: username_1: Please check wiki: your network doesn't play well with CoAP messages due to some restrictions on multicast packets. Simone
username_0: tcpdump only shows outbound packets from HA to the multicast address 192.168.3.11 as identified in the wiki. Multicast from the switch goes to 224.0.0.251. From tcpdump:
15:25:28.723151 IP 192.168.10.81.5353 > 172.16.31.10.5353: 0*- [0q] 6/0/0 PTR _http._tcp., PTR shelly1-98F4ABD0B919._http._tcp.local., (Cache flush) SRV shelly1-98F4ABD0B919.local.:80 0 0, (Cache flush) TXT "id=shelly1-98F4ABD0B919" "fw_id=20200320-123430/v1.6.2@514044b4" "arch=esp8266", (Cache flush) A 192.168.10.81, (Cache flush) NSEC (449)
Any chance this is the CoAP message?
username_0: I updated the firmware on my wifi router and now I at least get the following
16:29:17.693402 IP 192.168.10.81 > 192.168.3.11: igmp v2 report 192.168.3.11
but this still doesn't seem to get the messages desired. I tried using the mqtt capability of the switch and that works great. I'll probably just switch to using that until I hit a need to upgrade the wifi. Thanks for the information on this.
username_1: Please check our wiki: [FAQ2:-Slow-updates-of-lovelave-UI](https://github.com/StyraHem/ShellyForHASS/wiki/FAQ2:-Slow-updates-of-lovelave-UI) [FAQ6:-Troubleshoot-CoAP-messages](https://github.com/StyraHem/ShellyForHASS/wiki/FAQ6:-Troubleshoot-CoAP-messages) Simone
username_0: I never got this to work, but I buy the fact that it's probably a problem with my wifi router not passing the messages.
Status: Issue closed
kiwi-cam/homebridge-broadlink-rm
995647217
Title: Heater Cooler auto fan mode toggle like oscillation. Question: username_0: **Describe the solution you'd like** I currently have coolers set up around my house with temperature control using the auto fan mode. Oscillation is set up as well. I would like to set up the different fan modes, but the fan modes do not have an automatic setting. It would be nice to have a fan mode setting (auto or manual) underneath the oscillation toggle.
**Describe alternatives you've considered** I'm currently thinking about having a fan speed of 1-6 (my cooler has 5 fan speeds) and making the 6th speed automatic, but it's not elegant.
**Additional context**
![image](https://user-images.githubusercontent.com/85386859/133207506-2ae2495a-c2be-486f-9b23-ca6df963cbd9.jpeg)
Here is what I have in mind.
realm/realm-java
274272621
Title: Unrecoverable error. msync() failed Question: username_0:
#### Version of Realm and tooling
Realm version(s): ? Realm gradle plugin 3.5
Android Studio version: 3.0
Which Android version and device: 2.3.3 Gingerbread | custom android device
Answers: username_1: We call `msync` with `MS_SYNC` to flush data to the disk. From the exception message, it seems the `msync` system call failed with `EIO`. Since it is the first time we get this kind of crash reported and it was happening on a custom build of 2.3.3, I doubt there is a bug with the `msync` call on that device. I think there is not too much we can do about it.
Status: Issue closed
username_0: Thanks @username_1! Good to know. We think that it's a hardware issue as well.
JBetts97/TimeGraph
782797551
Title: Add and remove timezones from graph Question: username_0: **Is your feature request related to a problem? Please describe.**
The application does not support more than 2 timezones. For example, if I wanted to set up a meeting between my timezone and two others, the 'timegraph' would only show my timezone and one other which I have selected.
**Describe the solution you'd like**
A way to add and remove different timezones.
mskcc/vcf2maf
244005095
Title: How to skip the VCF filtering step? Question: username_0: Hi Cyriac, Even when I don't provide any vcf file, the script "vcf2maf.pl" still looks for the file "~/.vep/ExAC_nonTCGA.r0.3.1.sites.vep.vcf.gz" (as default), which I don't have and don't want to use, since I would like to retain all my variants from the VCF file. Is there a way to skip the VCF filtering step altogether? Thanks for your help, Silvia Status: Issue closed
Answers: username_1: Hi Silvia. No variants are removed by the filtering step. They are only tagged as "common_variant" under a column named `FILTER`. Please see the documentation for the `FILTER` column at https://github.com/mskcc/vcf2maf/blob/master/docs/vep_maf_readme.txt#L125
username_1: I'll leave this issue open, since it may be useful to people who don't want to add the "common_variant" tag.
username_2: It would be nice if the vcf filter option could be made optional, especially for genomes of other species.
```sh
vcf2maf.pl --input-vcf variants_filt_vcftools.snpeff.sorted.vcf --output-maf variants_filt_vcftools.snpeff.sorted.vc2maf.maf --ref-fasta CanFam3_1.fa --species canis_familiaris
ERROR: Provided --filter-vcf is missing or empty: /~/.vep/ExAC_nonTCGA.r0.3.1.sites.vep.vcf.gz
```
username_1: As of commit 3bbee40 @username_0 `--filter-vcf` can now be set to `0` or `""` to disable `common_variant` tags under the FILTER column. @username_2 vcf2maf ignores `--filter-vcf` when `--ncbi-build ne "GRCh37"`. However, your command doesn't specify `--ncbi-build` so it remains at the default value of `GRCh37`. As a fix, vcf2maf will also now require `--species eq "homo_sapiens"` before using the default `filter-vcf`.
Status: Issue closed
Palking/TopDownTiles
204443853
Title: (drastically) improve map Question: username_0: make the map way bigger and let view follow the player Status: Issue closed Answers: username_0: implement camera class (most likely); research first tho
username_0: - [ ] Need to split UI into static and dynamic (eg PLAYER text) or let dynamic render elsewhere
username_0: - [ ] add clamping
Status: Issue closed
username_0: Main features done
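Since the camera class was flagged as needing research first, here is a minimal, engine-agnostic sketch (written in Python purely for illustration; the repo's actual language and engine may differ) of a camera that centers on the player and clamps the view to the map bounds, covering the "add clamping" item above:
```python
class Camera:
    """Follows a target and clamps the view to the map rectangle."""

    def __init__(self, view_w, view_h, map_w, map_h):
        self.view_w, self.view_h = view_w, view_h
        self.map_w, self.map_h = map_w, map_h
        self.x = self.y = 0

    def follow(self, target_x, target_y):
        # Center the view on the target...
        self.x = target_x - self.view_w // 2
        self.y = target_y - self.view_h // 2
        # ...then clamp so the camera never shows space outside the map.
        self.x = max(0, min(self.x, self.map_w - self.view_w))
        self.y = max(0, min(self.y, self.map_h - self.view_h))

    def world_to_screen(self, wx, wy):
        # Tiles/sprites are drawn at their world position minus the camera offset.
        return wx - self.x, wy - self.y
```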
go-check/check
555788305
Title: Is there an easy way to make this work with VSCode Question: username_0: Using vanilla gotest testing, VSCode is able to identify that a function is a test and provides a link above the test function to run the test. By default, it does not do this for check.v1 test methods. Is there a way to make it work? If so, documentation would be nice.
Answers: username_0: Looked into this a little bit more and noticed there is still a test function up at the top that works with VSCode, which I can use to run all the tests in the suite. It would be nice to have a way to run the individual tests in the suite using that same ctrl-p action. Maybe I will write an extension for this.
commons-app/apps-android-commons
353842322
Title: Date not ok Question: username_0: When uploading several files at once the date is missing. It is also sometimes wrong. The app added the upload date instead of the date "according to Exif data".
https://commons.wikimedia.org/w/index.php?title=File:View_of_Leh_from_Tsemo_castle,_Ladakh.jpg&oldid=315035923
https://commons.wikimedia.org/w/index.php?title=File:View_of_Leh_from_Tsemo_castle,_Ladakh_,_2.jpg&oldid=315036119
**Device and Android version:** Samsung J510FM Android 7.1.1
**Commons app version:** 2.8.1
**Would you like to work on the issue?** No.
Answers: username_1: Just checking, did it happen only when you uploaded multiple files at once? If so, we should probably include that in the issue title. For the "View_of_Leh_from_Tsemo_castle,_Ladakh" set, both [File 1](https://commons.wikimedia.org/w/index.php?title=File:View_of_Leh_from_Tsemo_castle,_Ladakh.jpg&oldid=315035923) and [File 2](https://commons.wikimedia.org/w/index.php?title=File:View_of_Leh_from_Tsemo_castle,_Ladakh_,_2.jpg&oldid=315036119) seemed to have the dates wrong (upload dates instead of EXIF dates), indeed.
username_2: I can confirm that for all my uploads the date is missing. This happens for single uploads such as https://commons.wikimedia.org/wiki/File:Grab_Hackpf%C3%BCffel.jpg and batch uploads https://commons.wikimedia.org/wiki/File:Kirche_Hackpf%C3%BCffel_-_1.jpg.
username_3: @username_4 Do you think this could have been caused by our recent upload change at #1720 ?
username_4: Maybe @username_3 , I will try to check this on Monday.
username_3: Just talked to @ashishkumar468 about the potential cause of this bug. It seems that the temp file URI changes made at #1749 to fix a different bug may have caused this. It isn't limited to just the date, but much of the Exif metadata would have been omitted as well, unfortunately. For instance, see the difference between https://commons.wikimedia.org/wiki/File:Miss_Claudes_creperie_in_Brisbane.jpg and https://commons.wikimedia.org/wiki/File:Nepalese_pagoda_in_south_bank_Brisbane.jpeg Fortunately, this should be fixed soon. :) Thanks for your patience everyone.
username_3: Hi @username_0 , we just released this bugfix with v2.9 in beta. Could you please let us know if the problem is solved for you?
Status: Issue closed
username_5: Fixed in https://github.com/maskaravivek/apps-android-commons/pull/1 and #1968
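For anyone wanting to verify which date actually ended up in a file's metadata, a quick check of the EXIF capture date is possible with Pillow (one option among many EXIF readers; assumes a reasonably recent Pillow version, 0x8769 is the standard Exif sub-IFD pointer and 36867 the DateTimeOriginal tag):
```python
from PIL import Image

EXIF_IFD = 0x8769          # pointer to the Exif sub-IFD
DATETIME_ORIGINAL = 36867  # DateTimeOriginal tag inside that sub-IFD

def exif_capture_date(path):
    """Return the EXIF capture date of an image, or None if it is absent."""
    with Image.open(path) as img:
        return img.getexif().get_ifd(EXIF_IFD).get(DATETIME_ORIGINAL)

print(exif_capture_date("photo.jpg"))  # e.g. '2018:08:18 10:41:23'
```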
StrimBagZ/StrimBagZ
143849531
Title: Fix NullPointerException in ui.activities.PlayerActivity.initDescriptionUI Question: username_0: ### Version: 1.1.6-1 (3121011) | net.lubot.strimbagz ### ### Stacktrace ### <pre>ui.activities.PlayerActivity;initDescriptionUI;; ui.activities.PlayerActivity;onCreate;;</pre> ### Reason ### java.lang.NullPointerException ### Link to HockeyApp ### * [https://rink.hockeyapp.net/manage/apps/245422/crash_reasons/115943094](https://rink.hockeyapp.net/manage/apps/245422/crash_reasons/115943094)
Legolaszstudio/novynotifier
540837309
Title: Multiple widgets have the same globalKey [BUG] Question: username_0: **Describe the bug**
With auto-login, the login button is not deactivated, so 2 widgets use the same globalKey.
**How to reproduce**
Steps to reproduce the bug:
1. Go to the login tab
2. Press the login button before auto-login completes
**Expected behavior**
The login button is inactive.
**Screenshots**
![Screenshot_20191218-112534](https://user-images.githubusercontent.com/40274710/71239447-de070780-2306-11ea-9b82-0a370da9ada4.jpg)
**Phone:**
- Device: samsung galaxy j5 2017
- OS: android 9.0
- App version v0.0.8
Status: Issue closed
Answers: username_0: Fixed in version v0.0.9 ([c5d8f4d](https://github.com/NovySoft/novyNaplo/commit/c5d8f4dc18225fc848af0f70681b2d5ea9d88a0c))
saltstack/salt
615168061
Title: Add extra tag info for Beacons that produce multiple status messages Question: username_0: **Is your feature request related to a problem? Please describe.**
The Beacon facility of Salt publishes events when conditions are met (or just periodically). Some beacon providers generate at most a single message every time they do their work. But some beacon providers monitor multiple objects and can generate multiple messages in each interval. One example is `diskusage`, which potentially generates a message per disk.
Most beacon providers add additional information to the beacon message tag so that the information maintains its context. However, some beacon providers do not add this information, e.g. the same `diskusage`. This causes generic monitoring software to treat these events as updates of previous information about the same object, while they are actually about different objects.
**Describe the solution you'd like**
Extend some of the beacon tags with additional information. At least for the following providers:
* diskusage: TAG based on MOUNT should be added
* haproxy: TAG based on SERVER should be added
* network_info: TAG based on INTERFACE should be added
* pkg: TAG based on PKG should be added
* ps: TAG based on PROCESS should be added
The other providers do not need action because they produce:
1. only single messages (aix_account, avahi_announce, bonjour_announce, cert_info, glx_info, load, memusage, proxy_example, telegram_bot_msg, twilio_txt_msg); or
2. properly extend the tag (adb, inotify, journald, log_beacon, napalm_beacon, network_settings, sense_hat, service, sh, smartos_imgadm, status); or
3. are producing discrete events (btmp, wtmp, watchdog).
Answers: username_1: @username_0 could you please clarify your case for me? I've configured a beacon like this:
```
beacons:
  network_info:
    - interfaces:
        wlp4s0:
          type: greater
          bytes_recv: 0
        tap0:
          type: greater
          bytes_recv: 0
        lo:
          type: greater
          bytes_recv: 0
```
And I'm seeing these events in the master log:
```
[DEBUG ] Sending event: tag = salt/beacon/alpha/network_info/; data = {'interface': 'wlp4s0', 'network_info': {'bytes_sent': 124732545, 'bytes_recv': 1164847263, 'packets_sent': 648340, 'packets_recv': 2317795, 'errin': 0, 'errout': 0, 'dropin': 0, 'dropout': 0}, 'id': 'alpha', '_stamp': '2020-05-14T00:13:57.957564'}
[DEBUG ] Sending event: tag = salt/beacon/alpha/network_info/; data = {'interface': 'tap0', 'network_info': {'bytes_sent': 144413, 'bytes_recv': 30832769, 'packets_sent': 731, 'packets_recv': 164656, 'errin': 0, 'errout': 0, 'dropin': 357, 'dropout': 0}, 'id': 'alpha', '_stamp': '2020-05-14T00:13:57.958089'}
[DEBUG ] Sending event: tag = salt/beacon/alpha/network_info/; data = {'interface': 'lo', 'network_info': {'bytes_sent': 70909518, 'bytes_recv': 70909518, 'packets_sent': 787539, 'packets_recv': 787539, 'errin': 0, 'errout': 0, 'dropin': 0, 'dropout': 0}, 'id': 'alpha', '_stamp': '2020-05-14T00:13:57.958535'}
```
So for each event we can't determine the interface from the event id. But we see it in the `event['interface']` field. Do you want to be able to filter these events by event id? Say `salt/beacon/<minion_id>/network_info/wlp4s0`?
username_0: @username_1 The beacon messages may be handled by a generic message processor, for display or to keep the 'latest' state.
In a generic message processor you do not want to have knowledge of the beacon message structure, specifically about which field is a "key" field for each beacon type. The uniform way is to append that knowledge to the beacon tag, as already done by several beacon providers. But in these cases that is not done. And for the "standard" beacons in the current release, it is just the 5 ones mentioned. But any newly introduced beacon, whether standard or custom, may have the same problem again. Your example `salt/beacon/<minion_id>/network_info/wlp4s0` is exactly the solution I had in mind. And it can be realised easily because the beacon framework already supports extending the tag; see the sketch after this thread.
username_1: @username_0 thank you for the explanation! It's a good idea!
username_0: @username_1 the next thing would be to make the "event" type beacons identifiable as such, to distinguish them from "status" type beacons. For "event" beacons, every instance may be important. For "status" beacons, typically only the latest one is important. The standard ones are `btmp`, `wtmp` and `watchdog` (note: the `watchdog` beacon uses its own package name to add to the tag, barely useful). For events, I propose a new payload field `beacon-type: event`. The opposite would be `beacon-type: status`, but that can be the default. The field name is unique enough to not conflict/overlap with any field in any beacon provider.
username_1: @username_0 looks reasonable to me. @team-core I want someone else to take a look at these ideas.
username_0: When applicable, I'm volunteering to create a PR for both the code updates and the [`doc/topics/beacons/index.rst`](https://docs.saltstack.com/en/latest/topics/beacons/) update.
username_1: @username_0 could you please put your comment https://github.com/saltstack/salt/issues/57174#issuecomment-628460030 into the issue text so it doesn't get lost.
username_0: Not impatient, but maybe I should mention: "done"
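To make the tag-extension proposal concrete, here is a hedged sketch of what a patched provider could look like. The helper `_gather_interface_stats` and the exact return convention are assumptions for illustration; beacons such as `inotify` already use a per-event `tag` key in this spirit:
```python
# Hypothetical sketch of a network_info beacon that tags each event with
# the interface it describes, so consumers can tell
# salt/beacon/<minion_id>/network_info/wlp4s0 apart from .../lo.
def beacon(config):
    ret = []
    for interface, stats in _gather_interface_stats(config):  # assumed helper
        ret.append({
            'tag': interface,   # appended to the beacon tag by the framework
            'interface': interface,
            'network_info': stats,
        })
    return ret
```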
OllieBoyne/sslap
855417804
Title: Rectangular problems Question: username_0: First of all, thank you for your great piece of code! I'm wondering whether it is possible or not to handle rectangular sparse problems, i.e. people < objects. Status: Issue closed Answers: username_0: Sorry I entered this issue twice because Github experienced a temporary issue when I submitted this first one, resulting in an error. Please drop this issue.
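For readers with the same question: rectangular assignment problems can usually be handled either by padding the cost matrix to square with high-cost dummy rows, or by using a solver that accepts rectangular input directly. A small illustration with SciPy (dense, shown only to demonstrate the idea; it does not use this library's sparse solver):
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# 3 people, 5 objects: a rectangular cost matrix.
cost = np.random.rand(3, 5)

# SciPy accepts rectangular matrices directly: every row (person) is
# matched to a distinct column (object); the surplus objects stay unassigned.
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)))
```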
Cloud-CV/EvalAI
251135808
Title: Add support for different versions of same packages for different Challenges Question: username_0: ## Problem
The current challenges on EvalAI use the same versions of libraries like `numpy`, `scipy`, `matplotlib` etc. But this needs to be changed since different challenges will want different versions of the same libraries.
Possible solutions that I can think of are:
- Different submission workers for each challenge
- Run the submission worker in different docker containers
Discussion/suggestions are welcome!
Answers: username_1: @username_0, Whenever we make a new challenge, the versions of the software required are given in the `requirements.txt` inside `evaluation_script`, right?
username_1: @username_4 I'd like to work on this. :smile:
username_0: @username_1: do you suggest a way to implement this? We have been brainstorming about this. Let us know if you have some ideas about how to do this.
username_1: @username_4 I will definitely look into this issue and get back to you within a few days.
username_1: @username_0 Since the functions of the major libraries like `numpy`, `scipy` and `matplotlib` won't change much across versions, how about maintaining one submission worker and then importing the libraries in the `submission_worker` according to the challenge we are running?
username_1: @username_0 I think splitting the submission_worker would be the best way to go. What do you think?
username_0: @username_1: correct. If you want to work on it and would like to propose the workflow, feel free to do it.
username_2: Hey, I'd like to work on this. I was thinking that instead of creating a docker container or running a different worker, while a submission is being evaluated, the `cwd` of the worker could be changed and the required packages locally installed in the submission directory using the `pip install numpy -t .` command, which causes the specified version to be installed in the submission directory. These should then get used instead of the global packages.
username_3: @username_0 One solution can be using multiple workers for all challenges. If one worker goes down, other challenges won't be affected.
username_4: Closing this since we now support challenge-based workers.
Status: Issue closed
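A hedged sketch of the `pip install -t` idea from the thread (all paths and the `evaluation_script` module name are assumptions): install each challenge's pinned requirements into a private directory and push it to the front of `sys.path` before importing the evaluation code, so each challenge resolves its own library versions. Note this does not help for modules the worker process has already imported, which is part of why separate workers/containers were also proposed.
```python
import importlib
import subprocess
import sys

def load_challenge_evaluator(challenge_dir):
    # Install the challenge's pinned dependencies into a per-challenge
    # directory (a real worker would cache and sandbox this step).
    pkg_dir = f"{challenge_dir}/site-packages"
    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "-r", f"{challenge_dir}/requirements.txt",
        "--target", pkg_dir,
    ])
    # Prefer the challenge-local packages over the globally installed ones.
    sys.path.insert(0, pkg_dir)
    sys.path.insert(0, challenge_dir)
    return importlib.import_module("evaluation_script")  # assumed module name
```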
sequelize/umzug
394155705
Title: Custom storage jsdoc in constructor Question: username_0: I'm passing a custom storage object in the umzug constructor, which works perfectly. However, the jsdocs for the constructor only allow strings:
```
 * @param {String} [options.storage='json'] - The storage. Possible values:
 * 'json', 'sequelize', 'mongodb', an argument for `require()`, including absolute paths.
```
My IDE (IntelliJ) warns about this during type inspection.
Status: Issue closed
Answers: username_1: Fixed by https://github.com/sequelize/umzug/commit/328172b7be2e1a25193f6875f6dfcb35a6ff2f3e
kintesh/containerise
582596618
Title: Syntax to Assign Multiple Domains to Same Container? Question: username_0: Forgive my ignorance. I daily use a site that uses single sign-on, and its control panel page is on a separate domain from its primary domain. Is this possible? If so, what is the correct syntax? I tried the following thus far without success.
!*.domain1.com, !*.domain2.com , Domain
!*.domain1.com,!*.domain2.com , Domain
!*.domain1.com; !*.domain2.com , Domain
!*.domain1.com;!*.domain2.com , Domain
Answers: username_1: Hmm.. don't you have the ![icon](https://github.com/kintesh/containerise/blob/master/static/icons/icon.png) ? If you click on it, you won't need to edit the lines. Just click on the "+" and add the other website.
username_0: @username_1 Your suggestion helped me think differently. Many thanks. I had been using containerise only through the CSV editor, but then I noticed the container dropdown at the top of the GUI, which reminded me that that is one way multiple domains can be added to the same container. That made me realize I just need to use the same syntax in the CSV file, but make a point of referencing the same container name when adding multiple domains. I was foolishly thinking multiple domains linked to the same container would be noted on the same line. So it's as simple as the following, on multiple lines. E.g.
!.google.com , GoogleSingleSignOnContainer
!.gmail.com , GoogleSingleSignOnContainer
!.youtube.com , GoogleSingleSignOnContainer
@username_1 @kintesh Thanks for all the amazing contributions you guys have made to this extension!
username_1: I'm glad you figured it out :+1: Have a nice day :slightly_smiling_face:
Status: Issue closed
OpenNebula/one
331566226
Title: OpenNebula information when creating database in MySQL Question: username_0: <!--////////////////////////////////////////////--> <!-- COMPLETE THIS SECTION FOR FEATURE REQUESTS --> <!--////////////////////////////////////////////-->
# Enhancement Request
## Description
OpenNebula should show more information when creating the database in MySQL if the creation is not possible.
## Use case
Try to start OpenNebula with MySQL and a user that does not have creation privileges on MySQL. OpenNebula reports that it cannot create the database, but it should give some reason.
<!--////////////////////////////////////////////--> <!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM --> <!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS --> <!-- PROGRESS WILL BE REFLECTED HERE --> <!--////////////////////////////////////////////-->
# Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
UniversalRobots/Universal_Robots_ROS_Driver
924636330
Title: got TF_OLD_DATA warning from time to time Question: username_0: # Summary *Introduction to the issue* I got TF_OLD_DATA from time to time, only restart the robot launch file can solve it. error msg is like: ``` [ WARN] [1624001595.379673101]: TF_OLD_DATA ignoring data from the past for frame arm_upper_arm_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.379702295]: TF_OLD_DATA ignoring data from the past for frame arm_shoulder_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.379725272]: TF_OLD_DATA ignoring data from the past for frame arm_wrist_1_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.379750800]: TF_OLD_DATA ignoring data from the past for frame arm_wrist_2_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.379775379]: TF_OLD_DATA ignoring data from the past for frame arm_wrist_3_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.380370301]: TF_OLD_DATA ignoring data from the past for frame arm_forearm_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.380401027]: TF_OLD_DATA ignoring data from the past for frame arm_upper_arm_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.380426203]: TF_OLD_DATA ignoring data from the past for frame arm_shoulder_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.380452244]: TF_OLD_DATA ignoring data from the past for frame arm_wrist_1_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.380478272]: TF_OLD_DATA ignoring data from the past for frame arm_wrist_2_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.380502892]: TF_OLD_DATA ignoring data from the past for frame arm_wrist_3_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.380555666]: TF_OLD_DATA ignoring data from the past for frame arm_forearm_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.380582803]: TF_OLD_DATA ignoring data from the past for frame arm_upper_arm_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained [ WARN] [1624001595.380611405]: TF_OLD_DATA ignoring data from the past for frame arm_shoulder_link at time 1.624e+09 according to authority /robot_state_publisher Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained ``` My understanding is 
that the joint states of the UR robot are published with a timestamp that is older than the current timestamp tf has.
# Versions
- ROS Driver version: master
- Affected Robot Software Version(s): master
- Affected Robot Hardware Version(s): UR10e
- Robot Serial Number:
- UR+ product(s) installed: UR10e
- URCaps Software version(s):
# Impact
No impact; everything is working except the warning.
## Use Case and Setup
The UR driver is on master; the client is from Debian.
## Project status at point of discovery
*When did you first observe the issue?*
- The issue comes up not so often, several times per day.
## Steps to Reproduce
It's hard to reproduce; I guess there is some network communication drop?
## Expected Behavior
No warning, or a warning sometimes that recovers soon. In my case, once the warning happens, it won't disappear unless I restart the robot launch file.
## Workaround Suggestion
Restart the robot launch file.
Answers: username_1: This should not happen, but I also cannot imagine how this should happen. Do you run a multi-machine ROS setup? If there are multiple machines as part of one ROS network, there could be issues if their times are not synchronized exactly. But I would expect a different output in that case.
username_0: Thanks for the reply. There is only one ROS machine.
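As a way to quantify the lag before restarting anything, here is a small diagnostic node; a sketch only, where the topic name `/joint_states` and the 0.1 s threshold are assumptions. It warns whenever an incoming joint-state stamp is noticeably older than the node's current time:
```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import JointState

def callback(msg):
    lag = (rospy.Time.now() - msg.header.stamp).to_sec()
    if lag > 0.1:  # arbitrary threshold for "suspiciously old"
        rospy.logwarn("joint_states stamp lags wall clock by %.3f s", lag)

rospy.init_node("stamp_lag_probe")
rospy.Subscriber("/joint_states", JointState, callback)
rospy.spin()
```
If the lag grows steadily once the TF_OLD_DATA warnings start, that points at a stalled time source or dropped packets rather than at robot_state_publisher itself.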
ballerina-platform/ballerina-lang
339722077
Title: enhancement: allow config file path as a URL Question: username_0: **Description:** It would be nice to be able to provide a config file as a URL! If the config API can use a URL path to the file, it would be easy to use the Spring Cloud Config Server combined with Ballerina, and it would rock. Or any URL-based config system.
**Suggested Labels (optional):** config integration usability
Thanks Cyril
Answers: username_1: @username_0 Thanks for the suggestion; we will consider this for one of our future releases. In case you are interested, you can consider contributing to this feature.
[1] https://ballerina.io/open-source/
[2] https://github.com/ballerina-platform/ballerina-lang/blob/master/CONTRIBUTING.md
louisianatiger/CloudMigration
954039931
Title: Vulnerability - Email Address Disclosure Question: username_0: **URL:** http://php.testsparker.com/process.php?file=Generics/contact.nsp **Name:** Email Address Disclosure **Severity:** Information **Certainty:** 95% **Email Address(es) :** <EMAIL> You can see vulnerability details from the link below: https://www.netsparkercloud.com/issues/detail/1d6df94cc3074397b060ad6404197c63
cypress-io/cypress
580644649
Title: cy.exec does not work on Shippable CI when using our cypress/base image Question: username_0: When using the `cypress/base:12.16.1` image on Shippable CI, `cy.exec` gives the following error (Cypress v4.1.0, Linux Debian):
```
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)' to be empty
```
mlkrauz/Coin-Compare
940006373
Title: result.html construction part 1 (Coin Chart) Question: username_0: **User Story**
AS a potential cryptocurrency investor
I WANT specific details on a cryptocurrency of my choosing
SO THAT I can compare it against others, and the entire crypto market.
**Acceptance Criteria**
WHEN I select a cryptocurrency from the search bar or coinlist,
THEN I am presented with a chart of that specific cryptocurrency's performance over the last 7 days.
THEN at least three market performance statistics are populated below.
nextflow-io/nextflow
99409411
Title: Channel.fromPath throws an exception when curly brackets pattern is specified Question: username_0: When `Channel.fromPath` argument is a non-default file system including a curly brackets glob pattern an exception is raised. For example: Channel.fromPath('s3:///cbcrg-eu/nmdpflow-data/raw/**_R1*{fastq,fq,fastq.gz,fq.gz}') It throws: java.lang.IllegalArgumentException: Illegal character in path at index 39: s3:///cbcrg-eu/nmdpflow-data/raw/**_R1*{fastq,fq,fastq.gz,fq.gz} at java.net.URI.create(URI.java:852) ~[na:1.8.0_40] at nextflow.file.FileHelper.asPath(FileHelper.groovy:241) ~[nxf-commons-0.15.1.jar:na] at nextflow.extension.Bolts.asType(Bolts.groovy:340) ~[nxf-commons-0.15.1.jar:na] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_40] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_40] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_40] at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_40] at org.codehaus.groovy.runtime.metaclass.ReflectionMetaMethod.invoke(ReflectionMetaMethod.java:51) ~[groovy-2.3.11.jar:2.3.11] at org.codehaus.groovy.runtime.metaclass.NewInstanceMetaMethod.invoke(NewInstanceMetaMethod.java:54) ~[groovy-2.3.11.jar:2.3.11] at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324) ~[groovy-2.3.11.jar:2.3.11] at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1206) ~[groovy-2.3.11.jar:2.3.11] at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1015) ~[groovy-2.3.11.jar:2.3.11] at groovy.runtime.metaclass.NextflowDelegatingMetaClass.invokeMethod(NextflowDelegatingMetaClass.java:73) ~[nextflow-0.15.1.jar:na] at org.codehaus.groovy.runtime.InvokerHelper.invokePojoMethod(InvokerHelper.java:889) ~[groovy-2.3.11.jar:2.3.11] at org.codehaus.groovy.runtime.InvokerHelper.invokeMethod(InvokerHelper.java:880) ~[groovy-2.3.11.jar:2.3.11] at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodN(ScriptBytecodeAdapter.java:166) ~[groovy-2.3.11.jar:2.3.11] at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.asType(ScriptBytecodeAdapter.java:589) ~[groovy-2.3.11.jar:2.3.11] at nextflow.Channel.fromPath(Channel.groovy:163) ~[nextflow-0.15.1.jar:na] at nextflow.Channel.fromPath(Channel.groovy) ~[nextflow-0.15.1.jar:na] at nextflow.Channel$fromPath.call(Unknown Source) ~[na:na] at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45) ~[groovy-2.3.11.jar:2.3.11] at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108) ~[groovy-2.3.11.jar:2.3.11] at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116) ~[groovy-2.3.11.jar:2.3.11] at main.run(main.nf:35) ~[na:na] at nextflow.script.ScriptRunner.run(ScriptRunner.groovy:290) ~[nextflow-0.15.1.jar:na] at nextflow.script.ScriptRunner.execute(ScriptRunner.groovy:142) ~[nextflow-0.15.1.jar:na] at nextflow.cli.CmdRun.run(CmdRun.groovy:183) ~[nextflow-0.15.1.jar:na] at nextflow.cli.Launcher.run(Launcher.groovy:377) [nextflow-0.15.1.jar:na] at nextflow.cli.Launcher.main(Launcher.groovy:520) [nextflow-0.15.1.jar:na] Caused by: java.net.URISyntaxException: Illegal character in path at index 39: s3:///cbcrg-eu/nmdpflow-data/raw/**_R1*{fastq,fq,fastq.gz,fq.gz} at java.net.URI$Parser.fail(URI.java:2848) ~[na:1.8.0_40] at java.net.URI$Parser.checkChars(URI.java:3021) ~[na:1.8.0_40] at java.net.URI$Parser.parseHierarchical(URI.java:3105) ~[na:1.8.0_40] at 
java.net.URI$Parser.parse(URI.java:3053) ~[na:1.8.0_40] at java.net.URI.<init>(URI.java:588) ~[na:1.8.0_40] at java.net.URI.create(URI.java:850) ~[na:1.8.0_40] Answers: username_0: Fixed in version `0.15.2` Status: Issue closed
CemOezcan/metalfi
568112043
Title: `Evalution::vectorAddition()` might be unnecessary Question: username_0: Without having looked deeply into the purpose/inner workings of `Evalution::vectorAddition()`: there is a lot of existing support in Python for vector operations (the `numpy` package), so I don't know if it is really necessary to implement such stuff on your own.
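To make the suggestion concrete, here is a minimal comparison (the hand-rolled signature is hypothetical; the actual `Evalution::vectorAddition()` may differ) of a custom element-wise addition versus the NumPy equivalent:
```python
import numpy as np

def vector_addition(a, b):
    # Hand-rolled element-wise addition, as a custom helper might do it.
    return [x + y for x, y in zip(a, b)]

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
assert vector_addition(a, b) == [5.0, 7.0, 9.0]

# The NumPy equivalent is a single vectorized expression, and it
# generalizes to sums of many vectors: np.sum(np.stack(vectors), axis=0).
assert np.allclose(np.array(a) + np.array(b), [5.0, 7.0, 9.0])
```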
home-assistant/operating-system
771460816
Title: Hassos 5.8 Question: username_0: [s6-init] making user provided files available at /var/run/s6/etc...exited 0. [s6-init] ensuring user provided files have correct perms...exited 0. [fix-attrs.d] applying ownership & permissions fixes... [fix-attrs.d] done. [cont-init.d] executing container initialization scripts... [cont-init.d] udev.sh: executing... [12:16:51] INFO: Update udev information [cont-init.d] udev.sh: exited 0. [cont-init.d] done. [services.d] starting services [services.d] done. [12:16:51] INFO: Starting local supervisor watchdog... 20-09-12 12:16:55 INFO (MainThread) [__main__] Initializing Supervisor setup 20-09-12 12:16:55 INFO (MainThread) [supervisor.docker.network] Can't find Supervisor network, creating a new network 20-09-12 12:16:56 INFO (MainThread) [supervisor.bootstrap] Initializing Supervisor Sentry 20-09-12 12:16:56 INFO (MainThread) [supervisor.bootstrap] Seting up coresys for machine: raspberrypi4-64 20-09-12 12:16:56 INFO (SyncWorker_0) [supervisor.docker.supervisor] Attaching to Supervisor homeassistant/aarch64-hassio-supervisor with version 2020.12.6 20-09-12 12:16:56 INFO (SyncWorker_0) [supervisor.docker.supervisor] Connecting Supervisor to hassio-network 20-09-12 12:16:56 INFO (MainThread) [supervisor.resolution.evaluate] Starting system evaluation with state CoreState.INITIALIZE 20-09-12 12:16:56 INFO (MainThread) [supervisor.resolution.evaluate] System evaluation complete 20-09-12 12:16:56 INFO (MainThread) [__main__] Setting up Supervisor 20-09-12 12:16:56 INFO (MainThread) [supervisor.api] Starting API on 172.30.32.2 20-09-12 12:16:57 INFO (MainThread) [supervisor.host.info] Updating local host information 20-09-12 12:16:57 INFO (MainThread) [supervisor.host.services] Updating service information 20-09-12 12:16:57 INFO (MainThread) [supervisor.host.network] Updating local network information 20-09-12 12:16:58 INFO (MainThread) [supervisor.host.sound] Updating PulseAudio information 20-09-12 12:16:58 INFO (MainThread) [supervisor.host] Host information reload completed 20-09-12 12:16:58 INFO (MainThread) [supervisor.host.apparmor] Loading AppArmor Profiles: {'hassio-supervisor'} 20-09-12 12:16:58 INFO (MainThread) [supervisor.host.services] Reloading local service hassos-apparmor.service 20-09-12 12:16:58 INFO (SyncWorker_0) [supervisor.docker.interface] Found homeassistant/aarch64-hassio-dns versions: ['2020.11.0'] 20-09-12 12:16:58 INFO (SyncWorker_0) [supervisor.docker.interface] Attaching to homeassistant/aarch64-hassio-dns with version 2020.11.0 20-09-12 12:16:58 INFO (MainThread) [supervisor.plugins.dns] Starting CoreDNS plugin 20-09-12 12:17:00 INFO (SyncWorker_0) [supervisor.docker.dns] Starting DNS homeassistant/aarch64-hassio-dns with version 2020.11.0 - 172.30.32.3 20-09-12 12:17:00 INFO (MainThread) [supervisor.plugins.dns] Updated /etc/resolv.conf 20-09-12 12:17:00 INFO (SyncWorker_0) [supervisor.docker.interface] Found homeassistant/aarch64-hassio-audio versions: ['17'] 20-09-12 12:17:00 INFO (SyncWorker_0) [supervisor.docker.interface] Attaching to homeassistant/aarch64-hassio-audio with version 17 20-09-12 12:17:00 INFO (MainThread) [supervisor.plugins.audio] Starting Audio plugin 20-09-12 12:17:01 INFO (SyncWorker_1) [supervisor.docker.audio] Starting Audio homeassistant/aarch64-hassio-audio with version 17 - 172.30.32.4 20-09-12 12:17:01 INFO (SyncWorker_0) [supervisor.docker.interface] Found homeassistant/aarch64-hassio-cli versions: ['2020.11.1'] 20-09-12 12:17:01 INFO (SyncWorker_0) [supervisor.docker.interface] Attaching to 
homeassistant/aarch64-hassio-cli with version 2020.11.1 20-09-12 12:17:01 INFO (MainThread) [supervisor.plugins.cli] Starting CLI plugin 20-09-12 12:17:03 INFO (SyncWorker_0) [supervisor.docker.cli] Starting CLI homeassistant/aarch64-hassio-cli with version 2020.11.1 - 172.30.32.5 20-09-12 12:17:03 INFO (SyncWorker_1) [supervisor.docker.interface] Found homeassistant/aarch64-hassio-observer versions: ['2020.10.1'] 20-09-12 12:17:03 INFO (SyncWorker_0) [supervisor.docker.interface] Attaching to homeassistant/aarch64-hassio-observer with version 2020.10.1 20-09-12 12:17:03 INFO (MainThread) [supervisor.plugins.observer] Starting observer plugin 20-09-12 12:17:05 INFO (SyncWorker_0) [supervisor.docker.observer] Starting Observer homeassistant/aarch64-hassio-observer with version 2020.10.1 - 172.30.32.6 20-09-12 12:17:05 INFO (SyncWorker_1) [supervisor.docker.interface] Found homeassistant/aarch64-hassio-multicast versions: ['3'] 20-09-12 12:17:05 INFO (SyncWorker_0) [supervisor.docker.interface] Attaching to homeassistant/aarch64-hassio-multicast with version 3 20-09-12 12:17:05 INFO (MainThread) [supervisor.plugins.multicast] Starting Multicast plugin 20-09-12 12:17:06 INFO (SyncWorker_0) [supervisor.docker.multicast] Starting Multicast homeassistant/aarch64-hassio-multicast with version 3 - Host 20-09-12 12:17:06 INFO (MainThread) [supervisor.updater] Fetching update data from https://version.home-assistant.io/stable.json 20-09-12 12:17:06 INFO (MainThread) [supervisor.homeassistant.secrets] Loaded 0 Home Assistant secrets 20-09-12 12:17:06 INFO (SyncWorker_0) [supervisor.docker.interface] Found homeassistant/raspberrypi4-64-homeassistant versions: ['landingpage'] 20-09-12 12:17:06 INFO (SyncWorker_1) [supervisor.docker.interface] Attaching to homeassistant/raspberrypi4-64-homeassistant with version landingpage 20-09-12 12:17:06 INFO (MainThread) [supervisor.homeassistant.core] Starting HomeAssistant landingpage 20-09-12 12:17:06 INFO (MainThread) [supervisor.homeassistant] Update pulse/client.config: /data/tmp/homeassistant_pulse 20-09-12 12:17:07 INFO (SyncWorker_1) [supervisor.docker.homeassistant] Starting Home Assistant homeassistant/raspberrypi4-64-homeassistant with version landingpage 20-09-12 12:17:07 INFO (MainThread) [supervisor.hassos] Detect HassOS 5.8 / BootSlot A 20-09-12 12:17:07 INFO (MainThread) [supervisor.store.git] Cloning add-on https://github.com/hassio-addons/repository repository 20-09-12 12:17:07 INFO (MainThread) [supervisor.store.git] Cloning add-on https://github.com/home-assistant/addons repository [Truncated] **Kernel logs:** <!-- - use this command: dmesg - Enable SSH on OS level and login, then use `dmesg`. --> **Description of problem:** <!-- - Is the problem reproducible? Yes - Has this been working before (is this a regression?) Yes - Has there been attempt to rule out harware issues? (different SD card etc.) 
Same problem on SSD or SD on both of my Pi 4s. OS 5.6, 5.7, 5.8 and 5.9 all have the same issue. If I install 5.5 and, when Home Assistant is up, install my snapshot and then update to 5.8, this works fine. All perfect except when trying to add new add-ons, which gives this issue:
500 Server Error for http+docker://localhost/v1.40/images/create?tag=5.2.0&fromImage=homeassistant%2Faarch64-addon-configurator: Internal Server Error ("Get "https://registry-1.docker.io/v2/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)")
Looks like a similar problem to me. -->
Answers: username_1: Hm, this is not the only report I saw with this error:
```
20-12-19 21:29:20 ERROR (SyncWorker_0) [supervisor.docker.interface] Can't install homeassistant/raspberrypi4-64-homeassistant:2020.12.1 -> 500 Server Error for http+docker://localhost/v1.40/images/create?tag=2020.12.1&fromImage=homeassistant%2Fraspberrypi4-64-homeassistant: Internal Server Error ("Get "https://registry-1.docker.io/v2/": context deadline exceeded").
```
I cannot reproduce this here, so it depends on the network environment. But since it used to work before 5.6, I assume it is related to the upgrade to Buildroot 2020.11 and/or systemd-resolved. I just updated to the latest stable version of systemd-resolved and created a build, can you try with this build? https://os-builds.home-assistant.io/6.0.dev20201220/
username_0: This is the way for me to reproduce:
------------------
I have a strange problem to share. My network: cable ISP to WiFi router -> client (IOGEAR GWU637) + 50 feet of Cat6 to a Netgear R6020 router. The Pi 4 is connected by wire to the Netgear. With OS version 5.5 and before, everything works fine. With 5.6 and up: not able to do a clean install (stuck at "Preparing Home Assistant") and not able to update or install new add-ons. I get this message: 500 Server Error .......bla bla bla :.... context deadline exceeded (Client.Timeout exceeded while awaiting headers)"). If I connect the Pi 4 directly to the IOGEAR + 50 feet of Cat6 (bypassing the Netgear), the trouble disappears. It looks like version 5.6 and up is more sensitive to timing. (Has the timing been changed?)
----------------------
I will try this build, but I need instructions to do it. `ha core update --version 2020.12.20`, maybe????? Thanks for the quick answer.
username_1: You can install it using balenaEtcher (if you do a fresh install, just download the `tar.xz` file). If you want to use the built-in update mechanism you need to switch to the `dev` channel:
```
ha supervisor options --channel dev
ha supervisor reload
ha os update --version 6.0.dev20201220
```
Don't forget to switch back to the stable channel afterwards.
```
ha supervisor options --channel stable
```
username_0: I did a clean install from SD.... FAIL, with the same result. You have the complete logs... The logs say retrying in 30 sec; it never did. If you need more clean installs from SD, you're welcome.
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] udev.sh: executing...
[17:25:33] INFO: Update udev information
[cont-init.d] udev.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[17:25:34] INFO: Starting local supervisor watchdog...
20-12-16 17:25:37 INFO (MainThread) [__main__] Initializing Supervisor setup 20-12-16 17:25:37 INFO (MainThread) [supervisor.docker.network] Can't find Supervisor network, creating a new network 20-12-16 17:25:38 INFO (MainThread) [supervisor.bootstrap] Initializing Supervisor Sentry 20-12-16 17:25:38 INFO (MainThread) [supervisor.bootstrap] Seting up coresys for machine: raspberrypi4-64 20-12-16 17:25:38 INFO (SyncWorker_0) [supervisor.docker.supervisor] Attaching to Supervisor homeassistant/aarch64-hassio-supervisor with version 2020.12.7 20-12-16 17:25:38 INFO (SyncWorker_0) [supervisor.docker.supervisor] Connecting Supervisor to hassio-network 20-12-16 17:25:38 INFO (MainThread) [supervisor.resolution.evaluate] Starting system evaluation with state CoreState.INITIALIZE 20-12-16 17:25:38 INFO (MainThread) [supervisor.resolution.evaluate] System evaluation complete 20-12-16 17:25:38 INFO (MainThread) [__main__] Setting up Supervisor 20-12-16 17:25:38 INFO (MainThread) [supervisor.api] Starting API on 172.30.32.2 20-12-16 17:25:39 INFO (MainThread) [supervisor.host.info] Updating local host information 20-12-16 17:25:39 INFO (MainThread) [supervisor.host.services] Updating service information 20-12-16 17:25:39 INFO (MainThread) [supervisor.host.network] Updating local network information 20-12-16 17:25:40 INFO (MainThread) [supervisor.host.sound] Updating PulseAudio information 20-12-16 17:25:40 INFO (MainThread) [supervisor.host] Host information reload completed 20-12-16 17:25:40 INFO (MainThread) [supervisor.host.apparmor] Loading AppArmor Profiles: {'hassio-supervisor'} 20-12-16 17:25:40 INFO (MainThread) [supervisor.host.services] Reloading local service hassos-apparmor.service 20-12-16 17:25:40 INFO (SyncWorker_0) [supervisor.docker.interface] Found homeassistant/aarch64-hassio-dns versions: ['2020.11.0'] 20-12-16 17:25:40 INFO (SyncWorker_0) [supervisor.docker.interface] Attaching to homeassistant/aarch64-hassio-dns with version 2020.11.0 20-12-16 17:25:41 INFO (MainThread) [supervisor.plugins.dns] Starting CoreDNS plugin 20-12-16 17:25:42 INFO (SyncWorker_0) [supervisor.docker.dns] Starting DNS homeassistant/aarch64-hassio-dns with version 2020.11.0 - 172.30.32.3 20-12-16 17:25:42 INFO (MainThread) [supervisor.plugins.dns] Updated /etc/resolv.conf 20-12-16 17:25:42 INFO (SyncWorker_0) [supervisor.docker.interface] Found homeassistant/aarch64-hassio-audio versions: ['17'] 20-12-16 17:25:42 INFO (SyncWorker_0) [supervisor.docker.interface] Attaching to homeassistant/aarch64-hassio-audio with version 17 20-12-16 17:25:42 INFO (MainThread) [supervisor.plugins.audio] Starting Audio plugin 20-12-16 17:25:43 INFO (SyncWorker_0) [supervisor.docker.audio] Starting Audio homeassistant/aarch64-hassio-audio with version 17 - 172.30.32.4 20-12-16 17:25:44 INFO (SyncWorker_0) [supervisor.docker.interface] Found homeassistant/aarch64-hassio-cli versions: ['2020.11.1'] 20-12-16 17:25:44 INFO (SyncWorker_0) [supervisor.docker.interface] Attaching to homeassistant/aarch64-hassio-cli with version 2020.11.1 20-12-16 17:25:44 INFO (MainThread) [supervisor.plugins.cli] Starting CLI plugin 20-12-16 17:25:45 INFO (SyncWorker_0) [supervisor.docker.cli] Starting CLI homeassistant/aarch64-hassio-cli with version 2020.11.1 - 172.30.32.5 20-12-16 17:25:45 INFO (SyncWorker_1) [supervisor.docker.interface] Found homeassistant/aarch64-hassio-observer versions: ['2020.10.1'] 20-12-16 17:25:45 INFO (SyncWorker_0) [supervisor.docker.interface] Attaching to homeassistant/aarch64-hassio-observer with version 
2020.10.1 20-12-16 17:25:45 INFO (MainThread) [supervisor.plugins.observer] Starting observer plugin 20-12-16 17:25:47 INFO (SyncWorker_0) [supervisor.docker.observer] Starting Observer homeassistant/aarch64-hassio-observer with version 2020.10.1 - 172.30.32.6 20-12-16 17:25:47 INFO (SyncWorker_1) [supervisor.docker.interface] Found homeassistant/aarch64-hassio-multicast versions: ['3'] 20-12-16 17:25:47 INFO (SyncWorker_0) [supervisor.docker.interface] Attaching to homeassistant/aarch64-hassio-multicast with version 3 20-12-16 17:25:47 INFO (MainThread) [supervisor.plugins.multicast] Starting Multicast plugin 20-12-16 17:25:48 INFO (SyncWorker_0) [supervisor.docker.multicast] Starting Multicast homeassistant/aarch64-hassio-multicast with version 3 - Host 20-12-16 17:25:48 INFO (MainThread) [supervisor.updater] Fetching update data from https://version.home-assistant.io/stable.json 20-12-16 17:25:48 INFO (MainThread) [supervisor.homeassistant.secrets] Loaded 0 Home Assistant secrets 20-12-16 17:25:48 INFO (SyncWorker_0) [supervisor.docker.interface] Found homeassistant/raspberrypi4-64-homeassistant versions: ['landingpage'] 20-12-16 17:25:48 INFO (SyncWorker_1) [supervisor.docker.interface] Attaching to homeassistant/raspberrypi4-64-homeassistant with version landingpage [Truncated] 20-12-21 00:05:17 INFO (MainThread) [supervisor.addons] Phase 'AddonStartup.SYSTEM' starting 0 add-ons 20-12-21 00:05:17 INFO (MainThread) [supervisor.addons] Phase 'AddonStartup.SERVICES' starting 0 add-ons 20-12-21 00:05:17 INFO (MainThread) [supervisor.core] Skiping start of Home Assistant 20-12-21 00:05:17 INFO (MainThread) [supervisor.addons] Phase 'AddonStartup.APPLICATION' starting 0 add-ons 20-12-21 00:05:18 INFO (MainThread) [supervisor.misc.tasks] All core tasks are scheduled 20-12-21 00:05:18 INFO (MainThread) [supervisor.misc.hwmon] Started Supervisor hardware monitor 20-12-21 00:05:18 INFO (MainThread) [supervisor.core] Supervisor is up and running 20-12-21 00:05:18 INFO (MainThread) [supervisor.homeassistant.core] Home Assistant setup 20-12-21 00:05:18 INFO (MainThread) [supervisor.host.info] Updating local host information 20-12-21 00:05:18 INFO (SyncWorker_1) [supervisor.docker.interface] Updating image homeassistant/raspberrypi4-64-homeassistant:landingpage to homeassistant/raspberrypi4-64-homeassistant:2020.12.1 20-12-21 00:05:18 INFO (SyncWorker_1) [supervisor.docker.interface] Downloading docker image homeassistant/raspberrypi4-64-homeassistant with tag 2020.12.1. 
20-12-21 00:05:18 INFO (MainThread) [supervisor.updater] Fetching update data from https://version.home-assistant.io/stable.json 20-12-21 00:05:18 INFO (MainThread) [supervisor.resolution.fixup] Starting system autofix at state CoreState.RUNNING 20-12-21 00:05:18 INFO (MainThread) [supervisor.resolution.fixup] System autofix complete 20-12-21 00:05:18 INFO (MainThread) [supervisor.host.services] Updating service information 20-12-21 00:05:18 INFO (MainThread) [supervisor.host.network] Updating local network information 20-12-21 00:05:20 INFO (MainThread) [supervisor.host.sound] Updating PulseAudio information 20-12-21 00:05:20 INFO (MainThread) [supervisor.host] Host information reload completed 20-12-21 00:05:33 ERROR (SyncWorker_1) [supervisor.docker.interface] Can't install homeassistant/raspberrypi4-64-homeassistant:2020.12.1 -> 500 Server Error for http+docker://localhost/v1.40/images/create?tag=2020.12.1&fromImage=homeassistant%2Fraspberrypi4-64-homeassistant: Internal Server Error ("Get "https://registry-1.docker.io/v2/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"). 20-12-21 00:05:33 WARNING (MainThread) [supervisor.homeassistant.core] Error on Home Assistant installation. Retry in 30sec
username_0: I have an idea (maybe not so good?): just install 2 or more routers, WiFi in and WiFi out, until you have this condition with 5.6 and up... after that, try with 5.5. If 5.5 works and 5.6 fails, you have my condition. Just an idea... Or I will run a few tests for you.
username_1: Just to be clear, your network looks like this, right?
```
|--------|          |----------------------|              |---------------|              |-----------------|
|  ISP   | - WiFi - | WiFi to Eth (GWU637) | - Ethernet - | Netgear R6020 | - Ethernet - | RPi 4 with HAOS |
|--------|          |----------------------|              |---------------|              |-----------------|
```
And when you take the Netgear R6020 out of the equation, things start to work? It could be an MTU issue; can you try logging onto the OS using ssh (port 22222), see https://developers.home-assistant.io/docs/operating-system/debugging/
```
sysctl net.ipv4.tcp_mtu_probing=1
```
username_0: Yes, my configuration is exactly that. When I was at the `ha >` prompt I had to `login` and, at the `#` prompt, issue the command, and it fixed the problem. Will this fix stay across reboots, and be implemented in the next release? P.S. This test was done on OS 5.9. Thanks
username_0: Is the MTU setting persistent? Because after a few hours the trouble came back. I reissued the command and it works again???? Strange?
username_1: The command is not persistent! Currently there is no way to make it persistent. We might consider setting it by default in HAOS. So the problem went away as soon as that option was set?
username_0: I am not so sure anymore, because if I reboot and wait 1 hour I am not able to install new add-ons; if I try 2, 3 times it will go even without the option. For me the option seemed to help, but it's not a fix. After the first add-on, all the next ones work fine.
This issue NEVER happened with OS 5.5 and before. username_1: Hm, it could be related to #1113 then. I have a fix for that which will eventually go into 5.10. Would be good if you can give it a try with 5.10 again. username_0: Yes as soon 5.10 ready or if I have access I will. username_1: @username_0 I made a pre-release, you can find it here: https://os-builds.home-assistant.io/6.0.dev20201222/ If you want to use the update system, you need to switch to the dev channel: ``` ha supervisor options --channel dev ha supervisor reload ha os update --version 6.0.dev20201222 ``` Don't forget to switch back to the stable channel afterwards: ``` ha supervisor options --channel stable ``` username_0: Same Issue and message username_0: I try a few more tests. I was able for the first time since (5.5 ) to make a clean install .On the observer logs 3 time retry in 30 sec but get on finishing install . After trying to install addons it fail twice and get ok after... When first addons get ok all others get ok. after a few hours add new addons fail on first try but get ok on multiple retry. Le mar. 22 dรฉc. 2020, ร  18 h 10, <NAME> <<EMAIL>> a รฉcrit : > Same Issue and message > username_2: I've also got this error when I try to update to the latest Home Assistant Core. I am running on a virtual machine for test purposes. But since a while (i am running 0.118.4) I can't update anymore. I get to following error: 20-12-23 08:21:36 ERROR (SyncWorker_4) [supervisor.docker.interface] Can't install homeassistant/qemux86-64-homeassistant:2021.1.0.dev20201223 -> 500 Server Error for http+docker://localhost/v1.40/images/create?tag=2021.1.0.dev20201223&fromImage=homeassistant%2Fqemux86-64-homeassistant: Internal Server Error ("Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"). I tried to update to the latest dev build by executing the following commands but that didn't solve the problem for me. ``` ha supervisor options --channel dev ha supervisor reload ha os update --version 6.0.dev20201222 ``` Some others have the same problem: https://www.reddit.com/r/homeassistant/comments/kejpyg/2020121_broke_my_system/ username_2: I tried to create a new virtual machine with the latest dev build (6.0.dev20201222). However, when install it, it gets stucked at the "Preparing Home Assistant" screen with the same error as mentioned above. Thus, a fresh install with the latest HASS OS doesn't work work me. Changing the DNS server is the solution for me. On the forum there is a thread about this problem: https://community.home-assistant.io/t/cant-install-homeassistant-2020-12-0/256131 username_0: Might help..... If I try to add new addons will fail most of the time. After few retry will add OK. But ( very strange ) If I log in before ssh [email protected] -p 22222 will add Ok in the first attempt. I make at least 20 test .. Hope will help but it's not logical for me that when your login it will make a difference. username_0: I did 20+ more tests... on both on my Pi 4 4Gb Running HassoS 5.9 HA 2020.12.1 If I try to add any new addons it's ALWAYS fait on first attempt .. If I just ssh [email protected] -p 22222 to Pi 4 doing nothing prompt ( ha >) ...The add any addons will ALWAYS install on first try. Strange but true. Le mer. 23 dรฉc. 2020, ร  16 h 53, <NAME> <<EMAIL>> a รฉcrit : > Might help..... > If I try to add new addons will fail most of the time. After few retry > will add OK. 
username_1: > a clean install

@username_0 Was that with the development build? You can also download the latest development build for a fresh installation from https://os-builds.home-assistant.io/6.0.dev20201228/.
username_0: Yes, it was with the dev build, but it still retried 3 times. This build has the same issue. I changed my Netgear ($25) for a TP-Link ($100) and the problem is gone. That does not mean 5.6 and up work OK, but with a better router it's fine. P.S. I tried the dev 20201228 on my old setup, of course. What do you think of my last 2 messages (strange, but real)?
username_3: I was helping someone troubleshoot this. We verified he had fully working DNS and internet connectivity and could manually pull the image from docker.io; reverting to an older build fixed the issue. Did anything change in the code with respect to timeouts (which might explain why some hit it and some don't)? Is this really just timing out as the error indicates, or is there some other issue with the create function being called?
username_4: Is there a developer version of the OS? Above 5.2 my system freezes. I tried 5.10 again and it froze after a little over 3 hours. I did post the logs in the 1119 issue. Not sure what to try next on my Pi 4 64-bit with SSD. Thanks
username_0: I tried 20201228 2 weeks ago. Are you sure you want to try 20201220 and not 20210105?
username_4: I am talking about the OS version. There are lots of different versions (Supervisor, OS, HA), so it is easy to confuse them.
username_0: Me too, OS version 20201228. If I try again, it will be with the latest version.
username_0: I tried this one 2 weeks ago: os-builds.home-assistant.io/6.0.dev20201228
username_4: Were you having the same RPi 4 freeze issues? If so, did this fix it?
username_0: No freeze issue; my RPi 4 boots from SSD.
username_5: It's not just the RPi. I have a virtual machine and I can't get it to work either! On my last instance I couldn't update, and now, after a reinstall, I can't get to the CLI at all because it is unable to download the Docker image. Please fix!
username_6: I also see Post "http://supervisor/core/check": context deadline exceeded (Client.Timeout exceeded while awaiting headers) when issuing a ha core check since 2 weeks or so. Home Assistant is seemingly unfazed by this and the UI check is working. Reboots are working. I have no idea what to do.
username_7: I have the same timeout, while the command `docker exec homeassistant python -m homeassistant --script check_config --config /config` works properly, without a timeout.
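A quick way to probe for the path-MTU problem discussed above, from the host shell (a minimal sketch, assuming a Linux box with iputils ping; the registry hostname is simply the one from the supervisor error):

```sh
# 1472 bytes of ICMP payload + 28 bytes of IP/ICMP headers = 1500 bytes on the wire.
# If this fails while a smaller payload (e.g. -s 1400) succeeds, the path MTU
# between this host and the registry is below 1500.
ping -c 3 -M do -s 1472 registry-1.docker.io
```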
glmmTMB/glmmTMB
218486828
Title: systematic segfault
Question:
username_0: Hi,

After having installed `glmmTMB` without any (apparent) problem, I get a segfault every time I try to run the examples from `glmmTMB()`... I'm not really sure what to do here...

For what it's worth, it's an Arch Linux box with r-mkl.
Answers:
username_1: Well, that's not good. Hopefully the TMB experts here can suggest some diagnostic steps ...
username_2: @username_0 Try to run

```R
library(TMB)
```

from a clean R session. Do you get a warning about the Matrix package version?

Try to run a simple example:

```R
runExample("simple")
```

Do you get a segfault here as well? If yes, please post the output.
username_0:
```
+ parameters=list(u=u*0, beta=beta*0, logsdu=1, logsd0=1),
+ ....

[TRUNCATED]

TMB has received an error from Eigen. The following condition was not met:
index >= 0 && index < size()
Please check your matrix-vector bounds etc., or run your program through a debugger.
zsh: abort (core dumped)  R --no-save --quiet --no-init-file
```
username_2: We need a little more:

* ```R
  packageVersion("TMB")
  packageVersion("Matrix")
  packageVersion("RcppEigen")
  sessionInfo()
  ```
* Do you get the same segfault with:
  ```R
  runExample("randomregression")
  ```
* I'd also like to see the compilation output.
* If you are willing to debug a bit, try this from a terminal:
  ```shell
  R -d gdb
  run
  TMB:::runExample("simple", flags="-O0 -g", clean=TRUE)
  bt
  ```
  and post the backtrace. Relevant lines are probably in `#0-#5`.

username_1: Is this a 32-bit system? I'm not sure how widely this has been acknowledged, but there is a problem with Matrix 1.2-8 on 32-bit OSs: see [this StackOverflow question](http://stackoverflow.com/questions/42374635/r-lme4-error-in-usr-lib-rstudio-bin-rsession-malloc-memory-corruption/42674915#42674915). If you're on a 32-bit system, try downgrading to Matrix 1.2-7 (e.g. `devtools::install_version("Matrix","1.2-7")`) and see if that helps ...
username_0: @username_1: This is a 64-bit system (Arch Linux with kernel 4.10.6).

@username_2: TMB: version 1.7.8, Matrix: 1.2.8, RcppEigen: 0.3.2.9.1. R is version 3.3.3 (svn rev 72310), built with MKL from [r-mkl](https://aur.archlinux.org/packages/r-mkl), although I am not entirely certain that the linking with MKL is completely right...

```r
sessionInfo()
R version 3.3.3 (2017-03-06)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Arch Linux

locale:
 [1] LC_CTYPE=en_US.utf8       LC_NUMERIC=C
 [3] LC_TIME=en_US.utf8        LC_COLLATE=en_US.utf8
 [5] LC_MONETARY=en_US.utf8    LC_MESSAGES=en_US.utf8
 [7] LC_PAPER=en_US.utf8       LC_NAME=C
 [9] LC_ADDRESS=C              LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.utf8 LC_IDENTIFICATION=C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] TMB_1.7.8

loaded via a namespace (and not attached):
[1] colorspace_1.3-2 Matrix_1.2-8     DBI_0.6          Rcpp_0.12.10
[5] grid_3.3.3       lattice_0.20-35
```

Running `runExample("randomregression")` returns the same error/segfault as `runExample("simple")`. Finally, running the command through gdb gives the following output (which is quite beyond me, to be honest): [link to pastebin](https://paste.scratchbook.ch/view/1a89322c)
username_2: @username_0 Interesting. The model ran without a segfault using compilation flags `-O0 -g`. This could indicate some sort of over-optimization in your R configuration. That's why I'd be curious to see the output from the *compilation* of e.g. `runExample("simple", clean=TRUE)`.
username_0: [Here](https://paste.scratchbook.ch/view/4bc7b57e) is the compilation output without specifying any flag.
username_2: @username_0 That explains it. At present TMB is not very tolerant of setting the openmp flag for non-parallel implementations. It uses the `_OPENMP` preprocessor flag to signify that the template is parallelized; non-parallel code like glmmTMB will crash. Having `-fopenmp` as part of `R CMD config CXXFLAGS` is not standard; none of the configurations in https://cran.r-project.org/doc/manuals/r-release/R-admin.html do that. Why not use `SHLIB_OPENMP_CXXFLAGS`? An easy workaround is to make your own `~/.R/Makevars` with modified `CXXFLAGS`.
username_0: Right. It turns out `-fopenmp` was "hard set" by the AUR script building the package. Once that was adjusted, both `TMB` and `glmmTMB` seem to run fine. Thank you both @username_2 and @username_1 for your time and dedication!
Status: Issue closed
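For reference, the workaround username_2 describes could look like this (a minimal sketch; the optimization flags shown are only examples and should match your toolchain, the key point being that `-fopenmp` is absent from `CXXFLAGS`):

```
# ~/.R/Makevars -- overrides the flags baked into the distribution's R build
CXXFLAGS = -O2 -g
```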
gofiber/fiber
658243900
Title: ๐Ÿค” Am I correct to organize middlewares like this? Question: username_0: **Question description** I'm trying fiber, but I have a little bit confusion, or is there a better way to do like this? Thanks! ```go package main import ( "github.com/gofiber/fiber" "github.com/gofiber/cors" "github.com/gofiber/fiber/middleware" ) func CheckAdmin(c *fiber.Ctx) { if !ok { c.SendStatus(403) return } c.Next() } func main() { app := fiber.New() app.Use(middleware.Favicon()) app.Use(middleware.RequestID()) app.Use(middleware.Compress()) app.Use(middleware.Logger()) app.Use(middleware.Recover()) app.Use(cors.New( // ... )) app.Post("/everyone-1", func(c *fiber.Ctx) { // ... }) app.Post("/everyone-2", func(c *fiber.Ctx) { // ... }) app.Post("/everyone-3", func(c *fiber.Ctx) { // ... }) app.Post("/admin-1", CheckAdmin, func(c *fiber.Ctx) { // ... }) app.Post("/admin-2", CheckAdmin, func(c *fiber.Ctx) { // ... }) app.Post("/admin-3", CheckAdmin, func(c *fiber.Ctx) { // ... }) // 404 app.Use(func(c *fiber.Ctx) { c.SendStatus(404) }) // start server // ... } ``` Answers: username_1: @username_0 hi! Yes, it's correct too. Also, you can do it, like this: ```go // ... app.Use( middleware.Favicon(), middleware.RequestID(), middleware.Compress(), middleware.Logger(), middleware.Recover(), cors.New(), ) // ... ``` username_0: Thanks @username_1 , Is it working as expected included checkAdmin and 404 middleware? username_2: Yes correct, everything looks fine :+1: you could simplify the admin check using the prefix ```go // dont forget to call c.Next() in your middleware admin := app.Group("/admin", CheckAdmin) app.Post("/admin-1", func(c *fiber.Ctx) { // ... }) app.Post("/admin-2", func(c *fiber.Ctx) { // ... }) app.Post("/admin-3", func(c *fiber.Ctx) { // ... }) ``` Status: Issue closed username_0: @username_2 Thanks!!!
leifeld/btergm
320866749
Title: data helper functions Question: username_0: Hi, One of the pain points that I've encountered when using `btergm` is converting panel data from its more common representations to objects on which we can run the network analysis. I wrote a few helper functions with lots of sanity checks. Not sure if this would be of any interest to the community, but I thought I would post it here in case someone has comments or suggestions. There's an example in the README. Cheers! https://github.com/username_0/btergmHelper Answers: username_1: Thanks! I'll close this as there are no follow-up posts. Status: Issue closed
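For anyone landing here later, the general shape of such a conversion (independent of the linked helper package, whose exact API I have not verified) is to split a long-format edge list by time period and build one network object per period, since btergm models a list of networks. A minimal sketch in R, assuming a data frame `panel_df` with columns `from`, `to`, and `year`:

```r
library(network)

# One network per year; the result is a list of network objects,
# which is the shape btergm expects on the left-hand side of its formula.
nets <- lapply(split(panel_df, panel_df$year), function(d) {
  network(as.matrix(d[, c("from", "to")]),
          matrix.type = "edgelist", directed = TRUE)
})
```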
JordanMartinez/purescript-veither
838065694
Title: Implement `genVeither`
Question:
username_0:
```purescript
genVeither
  :: forall a errorRows allGenRows rowList
   . MapRows UseGen ("_" :: a | errorRows) allGenRows
  => RL.RowToList allGenRows rowList
  => Record allGenRows
  -> Gen (Veither errorRows a)
```
Status: Issue closed
Answers:
username_0: Closed in https://github.com/username_0/purescript-veither/commit/eff53802ef76020d81931fd2530ac562926a4c65

I found that I needed two variants so that one can control the probability with which a given label's generator is selected.
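A hypothetical usage sketch of the proposed signature above, for concreteness (the generator names here are made up, and the merged implementation reportedly differs from this exact shape):

```purescript
-- Generate a Veither whose success value lives under the "_" label
-- and whose failures live under the `timeout` and `parse` labels.
genRemoteString :: Gen (Veither (timeout :: TimeoutError, parse :: ParseError) String)
genRemoteString = genVeither
  { "_": genString
  , timeout: genTimeoutError
  , parse: genParseError
  }
```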
rancher/rancher
485134675
Title: [UI] Support tolerations for Istio tracing
Question:
username_0: UI for https://github.com/rancher/rancher/issues/21487

After bumping Istio to v1.2.4, tracing can configure tolerations.
Answers:
username_1: Version: master-head (v2.3) (8/29/19)

This adds tolerations support for the Tracing pod.

![image](https://user-images.githubusercontent.com/45179589/63982183-805c6000-ca76-11e9-9353-cb36d1b4ebe1.png)

I tested by setting a label on one of my nodes on a cluster, then setting the toleration in Istio settings under Tracing accordingly. This worked as expected. I tested again with labels on a couple of nodes and some different options; they all worked as expected.
Status: Issue closed
username_2: Re-opening to test specifically with node taints.
Status: Issue closed
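For context, the toleration the UI configures ends up as standard Kubernetes pod-spec fields. A generic illustration (not the exact YAML Rancher generates) that would match a taint applied with `kubectl taint nodes <node> dedicated=tracing:NoSchedule`:

```yaml
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "tracing"
    effect: "NoSchedule"
```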
GunterOdimm/Java_Study
453989581
Title: Java Test Study Code 035 - A program that prints a JSON file
Question:
username_0: Study.java.helper code

```java
package Study.java.helper;

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UnsupportedEncodingException;

public class FileHelper {
	private static FileHelper FC1;

	public static FileHelper getInstance() {
		if (FC1 == null) {
			FC1 = new FileHelper();
		}
		return FC1;
	}

	private FileHelper() {
	}

	public boolean write(String filePath, byte[] data) {
		boolean result = false;
		OutputStream out = null;
		try {
			out = new FileOutputStream(filePath);
			out.write(data);
			result = true;
			System.out.println("[INFO] Saved successfully. >> " + filePath);
		} catch (FileNotFoundException e) {
			// Check whether the file save path exists
			e.printStackTrace();
		} catch (IOException e) {
			// Check whether there is enough storage space
			e.printStackTrace();
		}
		// The stream must be closed whether or not the save succeeded
		// (otherwise errors occur).
		finally {
			if (out != null) {
				try {
					out.close();
				} catch (IOException e) {
					e.printStackTrace();
				}
			}
		}
		return result;
	}

	public byte[] read(String filePath) {
		byte[] data = null;
		InputStream in = null;
		try {

[Truncated]

		// Call the read() method prepared above
		// and return the contents as a string.
		try {
			content = new String(data, encType);
			content = content.trim(); // Leading/trailing whitespace sometimes appears
		} catch (UnsupportedEncodingException e) {
			System.out.println("[ERROR] Encoding specification error");
			e.printStackTrace();
		} catch (Exception e) {
			System.out.println("[ERROR] An unknown error occurred");
			e.printStackTrace();
		}
		return content;
	}
}
```
Answers:
username_0: Java bean for JSON

```java
package Study.java.model;

public class News {
	private String title;
	private String description;
	private String pubDate;

	public News(String title, String description, String pubDate) {
		super();
		this.title = title;
		this.description = description;
		this.pubDate = pubDate;
	}

	public String getTitle() {
		return title;
	}

	public void setTitle(String title) {
		this.title = title;
	}

	public String getDescription() {
		return description;
	}

	public void setDescription(String description) {
		this.description = description;
	}

	public String getPubDate() {
		return pubDate;
	}

	public void setPubDate(String pubDate) {
		this.pubDate = pubDate;
	}

	@Override
	public String toString() {
		return "News [title=" + title + ", description=" + description + ", pubDate=" + pubDate + "]";
	}
}
```
username_0: main file

```java
package Study.java.program;

import org.json.JSONArray;
import org.json.JSONObject;

import Study.java.helper.FileHelper;

public class Main03 {
	public static void main(String[] args) {
		String source = FileHelper.getInstance().readString("res/03.json", "utf-8");
		JSONObject json = new JSONObject(source);

		// Extract the array structure as a JSONArray object
		JSONArray array = json.getJSONArray("item");

		for (int i = 0; i < array.length(); i++) {
			String item = String.valueOf(array.get(i));
			System.out.println(item);
		}
	}
}
```
username_0: main file 2

```java
package Study.java.program;

import org.json.JSONArray;
import org.json.JSONObject;

import Study.java.helper.FileHelper;
import Study.java.model.News;

public class Main05 {
	public static void main(String[] args) {
		String source = FileHelper.getInstance().readString("res/05.json", "utf-8");
		JSONObject json = new JSONObject(source);
		JSONObject rss = json.getJSONObject("rss");
		JSONArray item = rss.getJSONArray("item");

		for (int i = 0; i < item.length(); i++) {
			JSONObject temp = item.getJSONObject(i); // Take the i-th JSON object out of the array
			String title = temp.getString("title");
			String description = temp.getString("description");
			String pubDate = temp.getString("pubDate");
			News news = new News(title, description, pubDate);
			System.out.println(news.toString());
		}
	}
}
```
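For reference, `Main05` above expects `res/05.json` to have roughly this shape (a hypothetical sample, since the actual data files are not shown in the post):

```json
{
  "rss": {
    "item": [
      {
        "title": "Sample headline",
        "description": "Sample article summary",
        "pubDate": "Mon, 10 Jun 2019 09:00:00 +0900"
      }
    ]
  }
}
```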
explosion/spaCy
439978776
Title: POS tagging shows different results in the spaCy online POS tagger and in code
Question:
username_0: ## How to reproduce the behaviour

```python
nlp = spacy.load("en_core_web_sm")
print(nlp.meta['version'])
nerval = nlp("face intense")
for token in nerval:
    print(token.pos_)
```

This gives me the output:

```
2.1.0
VERB
ADJ
```

The spaCy online POS tagger, when given the same phrase "face intense", classifies "face" as a NOUN, while the code using the en_core_web_sm model classifies it as a VERB. I want to get NOUN as the result.

## Your Environment

## Info about spaCy

* **spaCy version:** 2.1.3
* **Platform:** Windows-10-10.0.15063-SP0
* **Python version:** 3.7.0
* **en_core_web_sm model version:** 2.1.0 (as printed above)

I'm using the PyCharm IDE. Please clarify. TIA
Answers:
username_1: By the online POS tagger, you mean the displaCy demo, right? https://explosion.ai/demos/displacy

If so, that'd explain what's going on: In your code, you're using spaCy v2.1 and the 2.1 models. The displaCy demo we're hosting currently still uses the 2.0 models (see the versions in the dropdown). Specific predictions can differ across models and versions, especially for your case, where you're only processing a single phrase `"face intense"` instead of a sentence, and a pretty ambiguous phrase at that (to be honest, I don't think I understand what it's supposed to mean?).

Part-of-speech tags are predicted based on the **context**. Whether something is a verb or a noun depends on the context of that word within a sentence. The text you're processing has no context, so what the model predicts is essentially arbitrary.
Status: Issue closed
username_2: +1, in the Portuguese (pt) models. displaCy tagged a preposition correctly as ADP, but the current spaCy lib tagged it as VERB. Installing spaCy v2.0.18 solved the problem here too.
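To illustrate the point about context, here is a quick sketch (the exact tags you get will depend on your model and version, so treat the output as indicative only):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# The same surface form "face" can come out as NOUN or VERB
# depending on the sentence around it.
for text in ("His face was intense.", "They face intense pressure."):
    doc = nlp(text)
    print([(token.text, token.pos_) for token in doc])
```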
creachadair/jrpc2
744558196
Title: Enabling loose unmarshaling of requests
Question:
username_0: I just wanted to reopen the discussion originally started in https://github.com/username_1/jrpc2/pull/5, as it is still a source of friction for us in `hashicorp/terraform-ls` and the only reason we keep using a fork of this library; it worked mostly well for us otherwise! 👍

Here is what happened since closure of that PR:

- I tried generating custom unmarshalers for `go-lsp`: https://github.com/sourcegraph/go-lsp/pull/8
- That work made me realize this approach doesn't work very well with embedded structs (parent structs inherit unmarshalers of the embedded ones), so this would require some more significant overhaul of the library to avoid embedding or otherwise work around that
- The maintainer of `go-lsp` [suggested](https://github.com/sourcegraph/go-lsp/pull/8#issuecomment-632536771) that it would be great for the `gopls` folks to expose their structs (currently at [`internal/lsp/protocol`](https://github.com/golang/tools/tree/master/internal/lsp/protocol))
- (via Gopher Slack) gopls maintainers expressed their hesitation about externalizing `internal/lsp/protocol` at this point, due to their experience with the generator based on the TypeScript implementation of the spec. Breaking changes are very common there, unfortunately.

All of the above, but most importantly the conversation with the gopls folks, is what made me realize that LSP really needs to be treated as a moving target and something that is constantly changing. If this library intends to support LSP as one of its common use cases, then I think it should reflect this fact, and strict unmarshalling perhaps should not be the default behaviour.

Based on the conversations in https://github.com/username_1/jrpc2/pull/5 I understand it's not easy to solve this problem in a backwards-compatible way, but I hope the context above helps in understanding why it should be solved.
Answers:
username_1: Your reasoning makes sense. I had another idea as to how we might tackle this problem, and would be interested in your thoughts.

As I recall, the crux of the problem previously (cf. #5) was how to plumb strictness information in through the `handler` package, which I suppose you are probably using. Obviously one _can_ bypass that using a custom `UnmarshalJSON` method as you mentioned (the `jrpc2.NonStrict` helper is meant to make this a little easier to write). But it sounds like that approach is not serving you well.

It occurred to me that we could introduce a new interface to control strictness, without requiring a fully-custom decoder. For example, suppose we define:

```go
package jrpc2

// A Stricter reports whether its receiver should be unmarshaled strictly (disallowing unknown fields).
// N.B. Name is provisional.
type Stricter interface {
	Strict() bool
}
```

and then modify the decoding logic to check for this, e.g.,

```go
if s, ok := v.(Stricter); ok && s.Strict() {
	dec.DisallowUnknownFields()
}
```

With this formulation, a type that does not implement the interface gets the default behaviour. We can discuss what the default should be, though I suspect you'd prefer non-strict as the default, which is why I wrote the example that way.
In this construction, a caller that wants strict unmarshaling can simply define:

```go
type T struct { /* … */ }

func (T) Strict() bool { return true }
```

Regardless of which direction is the default, the nice thing about this formulation is that it addresses the issue you raised with nested types: If the top-level type is non-strict, the whole tree will be; and vice versa.
username_1: Omitted from my previous comment are various bikeshed issues, including:

1. Maybe the interface method can be completely void; its presence alone could be enough.
2. Maybe there should be one interface for each direction, so the default can change without breaking stuff.
3. Names and defaults should be settled.
username_0: Yes, that matches my recollection. I think the interface approach would make it possible to avoid the problem with embedded structs, but I would still wish for non-strict (loose) mode to become the default.

If at some point in the future there is a package similar to `go-lsp` which becomes "official" (either maintained by MSFT, or by the gopls folks, or by anyone else), then I reckon it would need to have a `Strict()` method on every struct returning `false` (or `NotStrict()`, depending on whether we go with (2) from your comment). This isn't too hard to achieve, especially if the rest of the code is already generated. The question is, however, whether there would be interest in adding something like that to account for what is perceived as "default behavior" from LSP's perspective. I just think it would be great if it worked out of the box.

I think the question of the default boils down to:

- Is LSP a common use case for consumers of this library?
- Would it hurt the rest of the existing/new consumers if this wasn't strict?

I am obviously biased as someone using it just in the context of LSP, but you may be able to answer these more objectively.
username_1: You folks are the only other consumers of the package who have made themselves known to me, so from my perspective I'm the other main customer of this library. LSP is the only protocol I've seen in widespread use that is based on JSON-RPC, so I consider it an important use case.

I do not think it would cause any great harm for the default to be loose. I foresee there are probably two mental models that a developer might take:

1. "This is JSON, in which anything goes, like `json.Unmarshal`".
2. "This is RPC, in which the parameters should match the schema, and I shouldn't have to check manually."

I lean toward (2), but am sympathetic to (1).
username_1: I created #32 as a possible solution for this issue; I welcome your comments there as well as here.
username_0: AFAIK most implementations which have reached a certain level of maturity ignore unknown fields, just because the spec changes, new fields are added, and the protocol does not have any version negotiation capabilities, so you have to expect that the client or server can speak any version of the protocol and you don't have any way of knowing what version it is.

I'm guessing one reason this might not seem like a topic for the LSP maintainers is that the spec and the canonical implementation are written in TypeScript, where you'd probably more often just decode the data into an arbitrary object (similar to decoding to `map[string]interface{}` in Go) and attempt to access whatever is available? I admit I'm not a TypeScript developer, so that's just a guess. 🤷‍♂️

However, I will raise this on the LSP GitHub repo to see if this can be documented more explicitly.
I doubt they will say strict is preferred, as that's practically impossible given the context I explained above.
username_1: I agree, that seems unlikely.
username_0: As promised, I opened an issue about this upstream: https://github.com/microsoft/language-server-protocol/issues/1144
Status: Issue closed
username_1: I merged #32 and have tagged it as v0.11.0. Please let me know if it addresses your concerns!
username_0: Just to follow up on this: https://github.com/microsoft/language-server-protocol/issues/1144#issuecomment-730188844

I still believe we made the right pragmatic decision here. It's possible that a feature-based "stricter" unmarshaling could be implemented, but that would require somehow injecting the context with enabled/disabled features into custom unmarshalers, which I _think_ would be very difficult with the current reflection-based model, where data is generally unmarshaled out of context. We'd probably have to avoid all the reflection and explicitly unmarshal each request inside the handler dynamically, which also means we would lose most of the useful compile-time checks.

Also, even if we went down that route, it would require modelling all these relationships in an LSP library somehow, presumably by hand, because I don't see how this could be expressed in the spec in a machine-readable way.
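For readers skimming this thread, the strict/loose split being discussed maps directly onto the standard library's decoder options. A minimal sketch (plain `encoding/json`, independent of the jrpc2 API that #32 ultimately added):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

type Params struct {
	Name string `json:"name"`
}

func main() {
	data := []byte(`{"name": "x", "newField": true}`)

	var loose Params
	_ = json.Unmarshal(data, &loose) // loose: unknown fields are silently ignored

	var strict Params
	dec := json.NewDecoder(bytes.NewReader(data))
	dec.DisallowUnknownFields() // strict: unknown fields become a decode error
	err := dec.Decode(&strict)

	fmt.Println(loose.Name, err) // prints: x json: unknown field "newField"
}
```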
google/ExoPlayer
69821704
Title: ExoPlayer is not working for some devices (Micromax A190, Micromax A77, Doodle A102)
Question:
username_0: Hi, I tried your ExoPlayer demo code on some devices, but video is not playing; it is not working. Can you please look into this?
Answers:
username_1: Please provide more exact issues than "not working", and re-open. Thanks.
Status: Issue closed
lirantal/Riess.js
249839877
Title: Feature: Change MongoDB session store to Redis
Question:
username_0: Yep. I didn't describe the required change at all, but in short, the idea is to avoid needing to update the session storage in multiple places (right now in express and socket.io). If you could share how you plan to update the code, it would be great, so we are aligned and you don't need to incur a lot of changes in the PR.
Answers:
username_1: @username_0 Can I take this issue?
username_0: Yep. I didn't describe the required change at all, but in short, the idea is to avoid needing to update the session storage in multiple places (right now in express and socket.io). If you could share how you plan to update the code, it would be great, so we are aligned and you don't need to incur a lot of changes in the PR.
username_1: Ok. I was thinking to change express.js & socket.io.

File express.js:

```
config.sessionStore ?
  new MongoStore({
    url: config.db.uri,
    mongoOptions: config.db.options,
    collection: config.sessionStore.sessionCollection
  })
```

File socket.io.js:

```
// Create a MongoDB storage object
var mongoStore = new MongoStore({
  url: config.db.uri,
  mongoOptions: config.db.options,
  collection: config.sessionStore.sessionCollection
});
```

and

```
// Use the mongoStorage instance to get the Express session information
mongoStore.get(sessionId, function (err, session) {
  if (err) return next(err, false);
  if (!session) return next(new Error('session was not found for ' + sessionId), false);

  // Set the Socket.io session information
  socket.request.session = session;

  // Use Passport to populate the user details
  passport.initialize()(socket.request, {}, function () {
    passport.session()(socket.request, {}, function () {
      if (socket.request.user) {
        next(null, true);
      } else {
        next(new Error('User is not authenticated'), false);
      }
    });
  });
});
```

The idea is that I will just use a redisStore instead of the mongoStore. I was thinking of using **connect-redis**. What do you think?
username_0: Right, but how would you actually implement the Redis session store integration? I'm good with Redis, but how about we abstract away the actual lib (redis or mongoose) so that it's using a generic utility, and we can choose which one to use, and whether to enable sessions, based on config?
username_1: I was thinking to use 2 strategies (mongoStoreStrat.js & redisStoreStrat.js).
username_2: Is this relevant if we move away from using sessions to using strictly JWTs?
username_0: Even if we do JWTs, you might want to make use of long-term refresh tokens and maintain blacklists for revoked tokens, so you'd probably still need some kind of persistence, Redis or not, to manage that.
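A rough sketch of what the connect-redis swap in express.js could look like (illustrative only; option names vary across connect-redis versions, and newer versions expect a Redis client instance instead of host/port options):

```
var session = require('express-session');
var RedisStore = require('connect-redis')(session);

app.use(session({
  // Assumes a `config.redis` section analogous to the existing `config.db`
  store: new RedisStore({
    host: config.redis.host,
    port: config.redis.port
  }),
  secret: config.sessionSecret,
  resave: false,
  saveUninitialized: false
}));
```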
doctrine/DoctrineModule
44097841
Title: Hydrator manyToMany for new objects
Question:
username_0: Is it possible to insert a new element in the ManyToMany case?
Answers:
username_1: Hi @username_0, I guess you have to create a new method which implements the ManyToMany relation (the same as the `toMany` and `toOne` methods in `DoctrineModule\Stdlib\Hydrator\DoctrineObject`).
Status: Issue closed