source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10) |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 2299037 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: Why can’t I create an abstract constructor on an abstract C# class? Why I can't declare abstract an constructor of my class like this: public abstract class MyClass { public abstract MyClass(int param);}
Now provide the response and nothing else.
| Constructors are only applicable to the class in which they are defined, that is, they are not inherited. Base class constructors are used (you have to call one of them, even if only calling the default one automatically) but not overridden by deriving classes. You can define a constructor on an abstract base class -- it can't be used directly, but can be invoked by deriving classes. What you can't do is force a derived class to implement a specific constructor signature. It is perfectly reasonable to have a constructor defined, typically as protected, in order to define some common set up code for all derived classes. This is especially true, perhaps, when the abstract class provides some other default behavior which relies on this set up. For example: public abstract class Foo{ public string Name { get; private set; } protected Foo( string name ) { this.Name = name; }}public class Bar : Foo{ public Bar() : base("bar") { ... }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2299037', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/184842/']} | jdg_82241 |
stackexchange | llm_judgeable_groundtruth_similarity | 20419415 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Could you guys please help me solve the following issue?I have a C++ function dll, and it will be called by another C# application.One of the functions I needed is as follow: struct DataStruct{ unsigned char* data; int len;};DLLAPI int API_ReadFile(const wchar_t* filename, DataStruct** outData); I wrote the following code in C#: class CS_DataStruct{ public byte[] data; public int len;}[DllImport("ReadFile.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Unicode)]private static extern int API_ReadFile([MarshalAs(UnmanagedType.LPWStr)]string filename, ref CS_DataStruct data); Unfortunately, the above code is not working... I guess that is due to the C++ func takes a pointer-to-pointer of DataStruct, while I just passed a reference of CS_DataStruct in. May I know how can I pass a pointer-to-pointer to the C++ func? If it is not possible, is there any workaround? (the C++ API is fixed, so changing API to pointer is not possible) Edit: Memory of DataStruct will be allocated by c++ function. Before that, I have no idea how large the data array should be. (Thanks for the comments below)
Now provide the response and nothing else.
| I used the following test implementation: int API_ReadFile(const wchar_t* filename, DataStruct** outData){ *outData = new DataStruct(); (*outData)->data = (unsigned char*)_strdup("hello"); (*outData)->len = 5; return 0;}void API_Free(DataStruct** pp){ free((*pp)->data); delete *pp; *pp = NULL;} The C# code to access those functions are as follows: [StructLayout(LayoutKind.Sequential)]struct DataStruct{ public IntPtr data; public int len;};[DllImport("ReadFile.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Unicode)]unsafe private static extern int API_ReadFile([MarshalAs(UnmanagedType.LPWStr)]string filename, DataStruct** outData);[DllImport("ReadFile.dll", CallingConvention = CallingConvention.Cdecl)]unsafe private static extern void API_Free(DataStruct** handle);unsafe static int ReadFile(string filename, out byte[] buffer){ DataStruct* outData; int result = API_ReadFile(filename, &outData); buffer = new byte[outData->len]; Marshal.Copy((IntPtr)outData->data, buffer, 0, outData->len); API_Free(&outData); return result;}static void Main(string[] args){ byte[] buffer; ReadFile("test.txt", out buffer); foreach (byte ch in buffer) { Console.Write("{0} ", ch); } Console.Write("\n");} The data is now transferred to buffer safely, and there should be no memory leaks. I wish it would help. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20419415', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2607505/']} | jdg_82242 |
stackexchange | llm_judgeable_groundtruth_similarity | 24841855 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why can’t I access the component methods from “outside” in ReactJS? Why is it not possible and is there any way to solve it? Consider the code: var Parent = React.createClass({ render: function() { var child = <Child />; return ( <div> {child.someMethod()} // expect "bar", got a "not a function" error. </div> ); }});var Child = React.createClass({ render: function() { return ( <div> foo </div> ); }, someMethod: function() { return 'bar'; }});React.renderComponent(<Parent />, document.body);
Now provide the response and nothing else.
| React provides an interface for what you are trying to do via the ref attribute . Assign a component a ref , and its current attribute will be your custom component: class Parent extends React.Class { constructor(props) { this._child = React.createRef(); } componentDidMount() { console.log(this._child.current.someMethod()); // Prints 'bar' } render() { return ( <div> <Child ref={this._child} /> </div> ); }} Note : This will only work if the child component is declared as a class, as per documentation found here: https://facebook.github.io/react/docs/refs-and-the-dom.html#adding-a-ref-to-a-class-component Update 2019-04-01: Changed example to use a class and createRef per latest React docs. Update 2016-09-19: Changed example to use ref callback per guidance from the ref String attribute docs. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/24841855', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1518183/']} | jdg_82243 |
stackexchange | llm_judgeable_groundtruth_similarity | 563901 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
So I have this fasta (biology) file that looks like this: >m64093_191209_130050/133911/ccs_64TTCAGGCTGTGTTCCATTTGATTTAAAATCAAATAATTTCATTCGCGTCAGAACACCTGGTTTCACGACCATAAATAATTTACCAGTGAATCGAGGCTCAATTATAGATCCTCGGACGCGAGTTCTCGGTTGACGAGTGGGATTCGAATTATTTTTCACCGAAAATTTTAGTCGACGAGTTCAGATAAATTTGTTCGGGATAAAATCATCTGAGTAGGTCGGGCTTCTGAATTTCGTATTCTTGCGAGCAATGAATTTTAAATAATCATCGGACATACCAATTTTTGGAACAATAATGTTCCGAACATCCCGAAAATATAGGAAGAGCCCGGATAGATAAAAATAAACAC Each line is max 70 chars long. Usually, if I want to format it to max 50 characters long, I use: fold -50 input.fasta > output.fasta # also tried -b and -w args But somehow this is not working. The file looks exactly the same as many others I've seen. The output now looks like this: >m64093_191209_130050/133911/ccs_64TTCAGGCTGTGTTCCATTTGATTTAAAATCAAATAATTTCATTCGCGTCAGAACACCTGGTTTCACGACCATAAATAATTTACCAGTGAATCGAGGCTCAATTATAGATCCTCGGACGCGAGTTCTCGGTTGACGAGTGGGATTCGAATTATTTTTCACCGAAAATTTTAGTCGACGAGTTCAGATAAATTTGTTCGGGATAAAATCATCTGAGTAGGTCGGGCTTCTGAATTTCGTATTCTTGCGAGCAATGAATTTTAAATAATCATCGGACATACCAATTTTTGGAACAATAATGTTCCGAACATCCCGAAAATATAGGAAGAGCCC It cuts overhanging 20 characters and correctly places them bellow, but then it's not joining the next line and cutting it on max 50 chars as it should. I went back to previous fasta files I created and the fold command still works normally. The problem persists if I copy a segment of the new file and past it in another file. I think there might be an encoding problem that I'm not aware of. Can anyone help? Cheers, EDIT: Great answers, thanks!!
Now provide the response and nothing else.
| Your issue has nothing to do with the encoding of your file. The fold utility is quite primitive and breaks lines at particular lengths, but it does not join lines. You may also want to be careful with retaining the fasta header lines as they are (i.e., not fold these). awk -v W=50 ' /^>/ { if (seq != "") print seq; print; seq = ""; next } { seq = seq $1 while (length(seq) > W) { print substr(seq, 1,W) seq = substr(seq, 1+W) } } END { if (seq != "") print seq }' file.fa This awk command would reformat your sequence to 50 characters, leaving the header lines intact. The width, 50, is adjustable through the W variable and may be set to any positive integer. The first block in the code handles header lines and will output the accumulated sequence bit from the previous sequence, if there is any left to output, before passing along the header line unmodified to the output. The second block accumulates a line of sequence and will possibly output the accumulated sequence if it's long enough, in appropriate chunks. The last block ( END ) outputs any left-over sequence when reaching the end of input. Running this on a file consisting of two copies of your sequence will produce >m64093_191209_130050/133911/ccs_64TTCAGGCTGTGTTCCATTTGATTTAAAATCAAATAATTTCATTCGCGTCAGAACACCTGGTTTCACGACCATAAATAATTTACCAGTGAATCGAGGCTCAATTATAGATCCTCGGACGCGAGTTCTCGGTTGACGAGTGGGATTCGAATTATTTTTCACCGAAAATTTTAGTCGACGAGTTCAGATAAATTTGTTCGGGATAAAATCATCTGAGTAGGTCGGGCTTCTGAATTTCGTATTCTTGCGAGCAATGAATTTTAAATAATCATCGGACATACCAATTTTTGGAACAATAATGTTCCGAACATCCCGAAAATATAGGAAGAGCCCGGATAGATAAAAATAAACAC>m64093_191209_130050/133911/ccs_64TTCAGGCTGTGTTCCATTTGATTTAAAATCAAATAATTTCATTCGCGTCAGAACACCTGGTTTCACGACCATAAATAATTTACCAGTGAATCGAGGCTCAATTATAGATCCTCGGACGCGAGTTCTCGGTTGACGAGTGGGATTCGAATTATTTTTCACCGAAAATTTTAGTCGACGAGTTCAGATAAATTTGTTCGGGATAAAATCATCTGAGTAGGTCGGGCTTCTGAATTTCGTATTCTTGCGAGCAATGAATTTTAAATAATCATCGGACATACCAATTTTTGGAACAATAATGTTCCGAACATCCCGAAAATATAGGAAGAGCCCGGATAGATAAAAATAAACAC Changing W to 30 gives >m64093_191209_130050/133911/ccs_64TTCAGGCTGTGTTCCATTTGATTTAAAATCAAATAATTTCATTCGCGTCAGAACACCTGGTTTCACGACCATAAATAATTTACCAGTGAATCGAGGCTCAATTATAGATCCTCGGACGCGAGTTCTCGGTTGACGAGTGGGATTCGAATTATTTTTCACCGAAAATTTTAGTCGACGAGTTCAGATAAATTTGTTCGGGATAAAATCATCTGAGTAGGTCGGGCTTCTGAATTTCGTATTCTTGCGAGCAATGAATTTTAAATAATCATCGGACATACCAATTTTTGGAACAATAATGTTCCGAACATCCCGAAAATATAGGAAGAGCCCGGATAGATAAAAATAAACAC>m64093_191209_130050/133911/ccs_64TTCAGGCTGTGTTCCATTTGATTTAAAATCAAATAATTTCATTCGCGTCAGAACACCTGGTTTCACGACCATAAATAATTTACCAGTGAATCGAGGCTCAATTATAGATCCTCGGACGCGAGTTCTCGGTTGACGAGTGGGATTCGAATTATTTTTCACCGAAAATTTTAGTCGACGAGTTCAGATAAATTTGTTCGGGATAAAATCATCTGAGTAGGTCGGGCTTCTGAATTTCGTATTCTTGCGAGCAATGAATTTTAAATAATCATCGGACATACCAATTTTTGGAACAATAATGTTCCGAACATCCCGAAAATATAGGAAGAGCCCGGATAGATAAAAATAAACAC You may also be interested in the FASTX-Toolkit from CSHL. I've never use this myself, but it seems to include a "FASTA Formatter (changes the width of sequences line in a FASTA file)". The latest release of the tools are from 2014 (quite old), so you may want to compile them yourself from source rather than using one of the provided precompiled binaries, unless your particular Unix distribution provides a package (check your package repository). | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/563901', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/286592/']} | jdg_82244 |
stackexchange | llm_judgeable_groundtruth_similarity | 63494479 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a menu component that appears globally. What is the best practice for getting data into that component? I'm trying to take advantage of static generation that Next.js offers but all data fetching guidance from the Next.js team relates to pages. getStaticProps and getStaticPaths seem to pertain to page generation, not data for components. Is their SWR package the right answer, or Apollo Client? Typically in hooks-based React, I'd just put my data call into useEffect . I'm not sure how to reason this out being that everything is rendered at build time with Next.
Now provide the response and nothing else.
| This is such a tricky problem, I think we need to lay out some background before a solution comes into focus. I'm focusing in the React.js world but a lot of this would apply to Vue/Nuxt I'd imagine. Background / Static Generation Benefits: Gatsby and Next are focused on generating static pages, which vastly improves performance and SEO in React.js sites. There is a lot of technical overhead to both platforms beyond this simple insight but let's start with this idea of a digital machine pumping out fancy HTML pages for the browser. Data Fetching for Pages In the case of Next.js (as of v9.5 ), their data fetching mechanism getStaticProps does most of the heavy lifting for you but it's sandboxed to the /pages/ directory. The idea is that it does the data fetching for you and tells the Next.js page generator in Node about it during build time (instead of doing it component-side in a useEffect hook - or componentDidMount ). Gatsby does much the same with their gatsby-node.js file, which orchestrates the data fetching for page building in concert with a Node server. What about Global Components that need data? You can use both Gatsby and Next to produce any kind of website but a huge use case are CMS-driven websites, because so much of that content is static. These tools are an ideal fit to that use case. In typical CMS sites, you will have elements that are global - header, footer, search, menu, etc. This is where static generation faces a big challenge: how do I get data into dynamic global components at build time? The answer to this question is... you don't. And if you think about this for a minute it makes sense. If you had a 10K page site, would you want to trigger a site-wide rebuild if someone adds a new nav item to a menu? Data Fetching for Global Components So how do we get around this? The best answer I have is apollo-client and to do the fetch client side. This helps us for a number of reasons: For small size queries, the performance impact is negligible. If we need to rebuild pages for changes at the CMS layer, this slides by Next/Gatsby's detection mechanisms, so we can make global changes without triggering gigantic site-wide rebuilds. So what does this actually look like? At the component level, it looks just like a regular Apollo-enhanced component would. I usually use styled-components but I tried to strip that out so you can could better see what's going on. import React from 'react'import { useQuery, gql } from '@apollo/client'import close from '../public/close.svg'/** * <NavMenu> * * Just a typical menu you might see on a CMS-driven site. It takes in a couple of props to move state around. * * @param { boolean } menuState - lifted state true/false toggle for menu opening/closing * @param { function } handleMenu - lifted state changer for menuState, handles click event */const NAV_MENU_DATA = gql` query NavMenu($uid: String!, $lang: String!) { nav_menu(uid: $uid, lang: $lang) { main_menu_items { item { ... on Landing_page { title _linkType _meta { uid id } } } } } }`const NavMenu = ({ menuState, handleMenu }) => { // Query for nav menu from Apollo, this is where you pass in your GraphQL variables const { loading, error, data } = useQuery(NAV_MENU_DATA, { variables: { "uid": "nav-menu", "lang": "en-us" } }) if (loading) return `<p>Loading...</p>`; if (error) return `Error! 
${error}`; // Destructuring the data object const { nav_menu: { main_menu_items } } = data // `menuState` checks just make sure out menu was turned on if (data) return( <> <section menuState={ menuState }> <div> { menuState === true && ( <div>Explore</div> )} <div onClick={ handleMenu }> { menuState === true && ( <svg src={ close } /> )} </div> </div> { menuState === true && ( <ul> { data.map( (item) => { return ( <li link={ item }> { item.title } </li> ) })} </ul> )} </section> </> )}export default NavMenu Set Up for Next to Use Apollo This is actually really well documented by the Next.js team, which makes me feel like I'm not totally hacking the way this tool is supposed to work. You can find great examples of using Apollo in their repo. Steps to get Apollo into a Next app: Make a custom useApollo hook that sets up the connection to your data source (I put mine in /lib/apollo/apolloClient.js within Next's hierarchy but I'm sure it could go elsewhere). import { useMemo } from 'react'import { ApolloClient, InMemoryCache, SchemaLink, HttpLink } from '@apollo/client'let apolloClient// This is mostly from next.js official repo on how best to integrate Next and Apollofunction createIsomorphLink() { // only if you need to do auth if (typeof window === 'undefined') { // return new SchemaLink({ schema }) return null } // This sets up the connection to your endpoint, will vary widely. else { return new HttpLink({ uri: `https://yourendpoint.io/graphql` }) }}// Function that leverages ApolloClient setup, you could just use this and skip the above function if you aren't doing any authenticated routesfunction createApolloClient() { return new ApolloClient({ ssrMode: typeof window === 'undefined', link: createIsomorphLink(), cache: new InMemoryCache(), })}export function initializeApollo(initialState = null) { const _apolloClient = apolloClient ?? createApolloClient() // If your page has Next.js data fetching methods that use Apollo Client, the initial state // gets hydrated here if (initialState) { // Get existing cache, loaded during client side data fetching const existingCache = _apolloClient.extract() // Restore the cache using the data passed from getStaticProps/getServerSideProps // combined with the existing cached data _apolloClient.cache.restore({ ...existingCache, ...initialState }) } // For SSG and SSR always create a new Apollo Client if (typeof window === 'undefined') return _apolloClient // Create the Apollo Client once in the client if (!apolloClient) apolloClient = _apolloClient return _apolloClient}// This is goal, now we have a custom hook we can use to set up Apollo across our app. Make sure to export this!export function useApollo(initialState) { const store = useMemo(() => initializeApollo(initialState), [initialState]) return store} Modify _app.js in the /pages/ directory of Next. This is basically the wrapper that goes around every page in Next. We're going to add the Apollo provider to this, and now we can globally access Apollo from any component. import { ApolloProvider } from '@apollo/react-hooks'import { useApollo } from '../lib/apollo/apolloClient'/** * <MyApp> * * This is an override of the default _app.js setup Next.js uses * * <ApolloProvider> gives components global access to GraphQL data fetched in the components (like menus) * */const MyApp = ({ Component, pageProps }) => { // Instantiates Apollo client, reads Next.js props and initialized Apollo with them - this caches data into Apollo. 
const apolloClient = useApollo(pageProps.initialApolloState) return ( <ApolloProvider client={ apolloClient }> <Component {...pageProps} /> </ApolloProvider> )}export default MyApp And now you can get dynamic data inside of your components using Apollo! So easy right ;) HA! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/63494479', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/241153/']} | jdg_82245 |
stackexchange | llm_judgeable_groundtruth_similarity | 4325154 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This one made me think: class X;void foo(X* p){ delete p;} How can we possibly delete p if we do not even know whether X has visible destructor? g++ 4.5.1 gives three warnings: warning: possible problem detected in invocation of delete operator:warning: 'p' has incomplete typewarning: forward declaration of 'struct X' And then it says: note: neither the destructor nor the class-specific operator delete will be called, even if they are declared when the class is defined. Wow... are compilers required to diagnose this situation like g++ does? Or is it undefined behavior?
Now provide the response and nothing else.
| From the standard [expr.delete]: If the object being deleted has incomplete class type at the point of deletion and the complete class has a non-trivial destructor or a deallocation function, the behavior is undefined. So, it's UB if there's nontrivial stuff to do, and it's ok if there isn't. Warnings aren't neccessary for UB. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4325154', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/252000/']} | jdg_82246 |
stackexchange | llm_judgeable_groundtruth_similarity | 56877331 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have what I call a tricky problem. I want to align two buttons with the panel-title and have them justified right. However, when I put them in I get this: https://jsfiddle.net/mwoods98/fo58qswn/3/ <!DOCTYPE html><html><head> <title></title></head><body> <div class="row"> <div class="col-md-12"> <div class="panel panel-default"> <div class="panel-heading"> <h3 class="panel-title"><strong>Order Summary Details</strong></h3><span class="pull-right" style="position: relative;top: -110%;"><button class="btn btn-primary pull-right" type="button"><span class="pull-right" style="position: relative;top: -110%;">Print Summary</span> <a href="#event.buildlink(rc.printLabel)#?label=#order_id#" onclick="window.open(this.href,'_blank','toolbar=no,location=no,status=no,menubar=no,scrollbars=yes,resizable=yes,width=700,height=300'); return false;" role="button"><button class="btn btn-primary" type="button">Print Label</button></a></button></span> </div> </div> </div> </div></body></html> The result I want is the buttons aligned on the same line as the title. Thanks!
Now provide the response and nothing else.
| I think the thing you're missing is forkJoin will return an observable, so the standard map isn't what you're looking for: you'd need to use one of the maps that will handle the observable result in the appropriate manner (i.e. switchMap , exhaustMap , mergeMap ): .pipe( switchMap(value => forkJoin(getThingOne(value), getThingTwo(value))).subscribe(([thing1, thing2]) => {}) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/56877331', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1720548/']} | jdg_82247 |
stackexchange | llm_judgeable_groundtruth_similarity | 10045466 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
All, I'm trying to do some unit testing in some archaic java code (no interfaces, no abstraction, etc.) This is a servlet that uses a ServletContext (which I'm assuming is set up by Tomcat) and it has database information is set up in the web.xml/context.xml file. Now, I've figured out how to make a Fake ServletContext, but the code has InitialContext _ic = new InitialContext(); all over the place (so it isn't feasible to replace it). I need to find a way to make a default InitialContext() able to do the _ic.lookup(val) without throwing an exception. I'm assuming there is some way that the context.xml is getting loaded, but how that magic works, I'm drawing a blank. Anyone have any ideas?
Now provide the response and nothing else.
| Take advantage of the fact that InitialContext uses an SPI to handle its creation. You can hook into its lifecycle by creating an implementation of javax.naming.spi.InitialContextFactory and passing that to your tests via the system property javax.naming.factory.initial ( Context.INTITIAL_CONTEXT_FACTORY ). It's simpler than it sounds. Given this class: public class UseInitialContext { public UseInitialContext() { try { InitialContext ic = new InitialContext(); Object myObject = ic.lookup("myObject"); System.out.println(myObject); } catch (NamingException e) { e.printStackTrace(); } }} And this impl of InitialContextFactory : public class MyInitialContextFactory implements InitialContextFactory { public Context getInitialContext(Hashtable<?, ?> arg0) throws NamingException { Context context = Mockito.mock(Context.class); Mockito.when(context.lookup("myObject")).thenReturn("This is my object!!"); return context; }} Creating an instance of UseInitialContext in a junit test with -Djava.naming.initial.factory=initial.context.test.MyInitialContext on the command line outputs This is my object!! (easy to set up in eclipse). I like Mockito for mocking and stubbing. I'd also recommend Micheal Feather's Working Effectively with Legacy Code if you deal with lots of legacy code. It's all about how to find seams in programs in order to isolate specific pieces for testing. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10045466', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/663306/']} | jdg_82248 |
stackexchange | llm_judgeable_groundtruth_similarity | 46126977 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As title, how can I detect async/await es7 support in browser? Is that possible?
Now provide the response and nothing else.
| As any other syntactic feature, it should be evaluated in order to be detected. Since eval can be restricted, this may be impossible when CSP is enabled: let isAsync = true;try { eval('async () => {}');} catch (e) { if (e instanceof SyntaxError) isAsync = false; else throw e; // throws CSP error} If there's a chance that target browsers don't support a feature, the code should be transpiled. The alternative that allows to avoid CSP restrictions on eval is to use external script to detect syntactic features, as described here . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46126977', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6885801/']} | jdg_82249 |
stackexchange | llm_judgeable_groundtruth_similarity | 18293594 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that this = null is illegal. I'm wondering if there's some other way to have an object clean itself up. my desire is to be able to do something like this: A a = new A();a.doStuffAndDisappear();if(a == null){ //this is true after my doStuffAndDisappear() method.} I suspect there's no way to make this happen, but thought it would be worth asking.
Now provide the response and nothing else.
| No, because a is a reference (not an object as in this question's title) and no method can modify the value of a reference except the method in which it is defined (I assume from the code context that a is a local variable). Since Java doesn't have pass-by-reference, what you ask cannot be done: there's no way to collect addresses-of references in order to manage the addresses pointed to. You might use a wrapper object, but not sure what'd be the point. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18293594', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/59947/']} | jdg_82250 |
stackexchange | llm_judgeable_groundtruth_similarity | 15530850 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use <input type="date"> fields that gracefully fallback to jQuery when the browser does not have support for the field. Relatively recently, Chrome started offering a native date picker, which is great. But I have found that many users miss the little arrow that brings up the calendar and therefore miss the fact that there is an easy calendar for date selection. To make this more intuitive, it would be great if I could bring up the native datepicker UI when the user moves the focus to the input or clicks another element (like a small calendar icon). Is there a method that I can use in Chrome to trigger the datepicker UI?
Now provide the response and nothing else.
| Here is a way to trigger the datepicker when users click on the field itself, because computer illiterate users don't know where they should click: input[type="date"] { position: relative;}/* create a new arrow, because we are going to mess up the native onesee "List of symbols" below if you want another, you could also try to add a font-awesome icon.. */input[type="date"]:after { content: "\25BC"; color: #555; padding: 0 5px;}/* change color of symbol on hover */input[type="date"]:hover:after { color: #bf1400;}/* make the native arrow invisible and stretch it over the whole field so you can click anywhere in the input field to trigger the native datepicker*/input[type="date"]::-webkit-calendar-picker-indicator { position: absolute; top: 0; left: 0; right: 0; bottom: 0; width: auto; height: auto; color: transparent; background: transparent;}/* adjust increase/decrease button */input[type="date"]::-webkit-inner-spin-button { z-index: 1;} /* adjust clear button */ input[type="date"]::-webkit-clear-button { z-index: 1; } <input type="date"> Links: List of symbols codepen example with font-awesome icon | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15530850', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/383539/']} | jdg_82251 |
stackexchange | llm_judgeable_groundtruth_similarity | 22389 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given the algorithm above (taken from the slides (p. 35) of the Coursera course “Algorithms Part I” by Robert Sedgewick and Kevin Wayne), look at the scenario where i is at "X", the following happens: Scenario: i -> "X", "X" > "P" 1. swap("X", "Z"), gt--; // the value at i is now "Z", which is still > "P"2. swap("Z", "Y"), gt--; // the value at i is now "Y", which is still > "P"3. swap("Y", "C"), gt--; // Now we finally get a value at i "C" which is < "P"// Now we can swap values at i and lt, and increrement them4. swap("P", "C"), i++, lt++; Why don't we just decrement gt until gt points to a value that is < the value at lt ("P" in this case), then we swap this value with the value at i. This will save swapping operations. So if we do that for the scenario mentioned above, we'll do: 1. gt--2. gt--3. swap("X", "C"), gt--; // Now we can swap values at i and lt, and increrement them4. swap("P", "C"), i++, lt++; Is this excessive swapping needed for the algorithm? does it improve performance in some way?If it does improve performance, how? If it doesn't affect performance, please give a proper explanation or a proof as to why this it does not affect performance. Also, would the second method I mentioned affect performance in any way? please explain why. P.S. "Affect performance" as used above means either improve/degrade performance.
Now provide the response and nothing else.
| In order to know that you may just decrement gt , you have to compare the element at position gt to the pivot. For illustration, here is the relevant part of the code from the slides you took your illustration from : while (i <= gt){ int cmp = a[i].compareTo(v); if (cmp < 0) exch(a, lt++, i++); else if (cmp > 0) exch(a, i, gt--); else i++;} Your modification would look like this: while (i <= gt){ int cmp = a[i].compareTo(v); if (cmp < 0) exch(a, lt++, i++); else if (cmp > 0) { while (a[gt].compareTo(v) > 0) gt--; exch(a, i, gt--); } else i++;} If the element is larger, we save a swap, as you illustrated. Furthermore we won't have to compare that element again. If, however, the element is not larger, we will compare it to the pivot twice, once before the swap and once after the swap. So with your modification we save a swap for each element that is larger than the pivot and on a position that ends up to the right of the pivot, while we introduce an additional comparison for each element in such a position that is not larger than the pivot. Averaging over all inputs we can expect the number of elements smaller resp. larger than the pivot to be equal. So the performance will depend on the number of elements equal to the pivot (and the cost of a swap vs. a comparison). Your modification will have the better relative performance, the smaller that number is. But 3-way partitioning was developed with inputs with many equal keys in mind, so the original choice seems reasonable. Taking a second look at the modified algorithm, we note that if we compare an element twice, there is no other comparison happening between these two. So it seems worthwhile to remember the result of the comparison. However, if we want to keep the structure of the code, this means a lot of control structure overhead: Boolean doComp = true;int cmp;while (i <= gt){ if (doComp) cmp = a[i].compareTo(v); doComp = true; if (cmp < 0) exch(a, lt++, i++); else if (cmp > 0) { while (cmp = a[gt].compareTo(v) > 0) { gt--; doComp = false; } exch(a, i, gt--); } else i++;} My gut feeling is that even though it combines the advantages of the previous two versions in terms of number of comparisons and swaps, it will perform worse due to the overhead of the additional control structures. Not only have we introduced additional operations (checking conditions, maintaining the flag), we also introduced (compared to Dijkstra's version) two additional conditional jumps (for if and while), that will often but irregularly switch between taken and not taken. The latter is a problem on modern processors, since they rely on branch predictions for optimal performance and such branches will lead to a high number of mispredictions. (The branch corresponding to the outer loop is an opposite example: Always predicting that the loop will be entered is wrong only once.) With these considerations in mind, we can try to optimize the code. The following should perform quite well: while (i <= gt){ int cmp = a[i].compareTo(v); if (cmp > 0) { while ((cmp = a[gt].compareTo(v)) > 0) gt--; exch(a, i, gt--); } if (cmp < 0) exch(a, lt++, i++); else i++;} This version makes as many comparisons as Dijkstra's version and as many swaps as your suggested version without introducing too much overhead: For each comparison we have one branch based on cmp > 0 and possibly one based on cmp < 0 . This is the same as for Dijkstra's version except that the role of the two comparisons is exchanged. 
So, summing up / answering your final questions explicitly: The additional swaps you point out can be avoided without introducing other costs. So, as you suspected, they degrade performance compared to the final version in this answer. The version you suggest removes the unnecessary swaps, but introduces additional comparisons. So it will too perform worse than the final version in this answer. As I pointed out, the realitve performance of the original and your version will depend on the input at hand and the relative costs of swaps and compares. (Since usually compares are faster than swaps, I'd expect your version to be faster on average.) | {} | {'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/22389', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/15391/']} | jdg_82252 |
stackexchange | llm_judgeable_groundtruth_similarity | 575205 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The method of separation of variables is used in many occasions in the upper level physics courses such as QM and EM. But when it is used there is no clear reason why using it is permitted it except that it works, or that let us try and see. Is there a theorem which provides the condition(s) that if satisfied then the method of separation of variables will be guaranteed to work in PDEs?
Now provide the response and nothing else.
| To the best of my knowledge: No , there is no general theorem that tells you how to start from an arbitrary partial differential equation and conclude whether that partial differential equation can be solved by separation of variables. (I should note here that one of the problems is to start from an arbitrary partial differential equation, and deduce a change of variables relative to which one can perform the separation of variables. The conditions given by naryb below is certainly sufficient [use the trivial change of variables], but definitely not necessary.) On the other hand, if you looked through the literature, there are a lot of criteria given for individual partial differential equations of specific forms . A particularly well-known example is that of Eisenhart's classification of potential functions for which the associated Schrodinger operator is separable. See this link . It is undeniable that separation of variables have something to do with symmetries of the (system of) partial differential equations you are looking at; but the connection is still poorly understood. For example, it was observed long ago that the linear wave equation on certain curved space-times (more precisely the Kerr black hole backgrounds) are separable, but the space-time does not appear to have enough infinitesimal symmetries for this to have been possible. (By an infinitesimal symmetry I mean that which is associated to a Lie group action on the PDE.) It turns out that this separability has to do with the so-called Carter constant , which is defined by some notion of a higher order symmetry . As far as I know, the connection between these higher order symmetries and separability of PDEs is still a field of active investigation. I don't know if there is even a universally accepted definition of what "separation of variables" mean! Some suggested further readings: Miller's 1977 treatise studying the relation between symmetries and separation of variables for second order partial differential equations. Koornwinder wrote a very detailed book review about the previous treatise, which has many historical points and shows, at least 30 years ago, how muddled the notions are. If you have access to it, a brief survey of Miller's book is available as a short paper . And here is the article alluded to in Koornwinder's review, which addresses the issue of defining the notion of separation of variables. For an explanation of the notions of symmetry and generalised symmetry for differential equations, Olver's book is by now a somewhat standard resource. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/575205', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/19717/']} | jdg_82253 |
stackexchange | llm_judgeable_groundtruth_similarity | 26520 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Skip Garibaldi asks if there is an elementary proof of the following fact that "accidentally" fell out of some high-powered machinery he was working on. Say that two matrices $A$ and $B$ over the rationals are rationally congruent ifthere exists a nonsingular matrix $S$ over the rationals such that $S^t A S = B$. Theorem (Garibaldi). Suppose $n \equiv 0 \pmod 4$. Then the diagonal matrices$$A = diag\left[\binom{n}{0}, \binom{n}{2}, \binom{n}{4}, \ldots, \binom{n}{n/2 - 2}\right]$$and$$B = diag\left[\binom{n}{1}, \binom{n}{3}, \binom{n}{5}, \ldots, \binom{n}{n/2 - 1}\right]$$are rationally congruent. Similarly, suppose $ n \equiv 2 \pmod 4$. Then the matrices$$A = diag\left[\binom{n}{0}, \binom{n}{2}, \binom{n}{4}, \ldots, \binom{n}{n/2 - 1}\right]$$and$$B = diag\left[\binom{n}{1}, \binom{n}{3}, \binom{n}{5}, \ldots, \binom{n}{n/2 - 2}, \frac{1}{2}\binom{n}{n/2}\right]$$are rationally congruent.
Now provide the response and nothing else.
| Wadim, isn't that 95% of the proof? First let me correct your first displayed equation (thanks to fherzig for pointing this out): It is not sufficient for the proof, but$$\sum_{i=0}^{n-1}\binom{4n}{2i}P_i(t)P_i(s)=\sum_{i=0}^{n-1}\binom{4n}{2i+1}\hat P_i(t)\hat P_i(s)$$is, where $t$ and $s$ are two independent variables. Let me rename your $P_i$ as $Q_{2i}$ and your $\hat{P_i}$ as $Q_{2i+1}$, so that your equation$$\sum_{i=0}^{n-1}\binom{4n}{2i}P_i(t)P_i(s)=\sum_{i=0}^{n-1}\binom{4n}{2i+1}\hat P_i(t)\hat P_i(s)$$becomes$$\sum_{i=0}^{2n-1}\left(-1\right)^i\binom{4n}{i}Q_i(t)Q_i(s)=0.$$Now let $Q$ be the polynomial $Q\left(t\right)=t^{n-1}$. (With some work, the proof below works just as well if $Q$ is any polynomial of degree $n-1$ (not less!), but let me use $t^{n-1}$ for simplicity's sake.) Let $Q_i\left(t\right)=\left(2n-i\right)Q\left(t-\left(2n-i\right)^2\right)$ for every $i\in\mathbb Z$. For any fixed $t$ and $s$, the term $Q_i\left(t\right)Q_i\left(s\right)$ is a polynomial in $i$ of degree $2\left(2\left(n-1\right)+1\right)<4n$, and thus satisfies$$\sum_{i=0}^{4n}\left(-1\right)^i\binom{4n}{i}Q_i(t)Q_i(s)=0,$$since the $4n$-th finite difference of a polynomial of degree $< 4n$ is zero. Due to the symmetry of the function $i\mapsto Q_i\left(t\right)Q_i\left(s\right)$ around $i=2n$, and due to $Q_{2n}\left(t\right)=0$, this becomes$$\sum_{i=0}^{2n-1}\left(-1\right)^i\binom{4n}{i}Q_i(t)Q_i(s)=0.$$Now it remains to prove that each of the families $\left(Q_1,Q_3,...,Q_{2n-1}\right)$ and $\left(Q_0,Q_2,...,Q_{2n-2}\right)$ spans the space of all polynomials in $t$ of degree $< n$. This is a particular case of a more general fact: If $x_1$, $x_2$, ..., $x_n$ are $n$ distinct reals, then the polynomials $\left(t-x_1\right)^{n-1}$, $\left(t-x_2\right)^{n-1}$, ..., $\left(t-x_n\right)^{n-1}$ are linearly independent. In order to prove this, assume that they are linearly dependent, take their derivatives of all possible orders, evaluate at $t=0$ (or alternatively, just take their coefficients), and get a contradiction because Vandermonde's determinant is nonzero. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/26520', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3106/']} | jdg_82254 |
stackexchange | llm_judgeable_groundtruth_similarity | 701923 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The following problem is taken from the book generatingfunctionology (P.28) by Herbert S. Wilf. Let $f(n,m,k)$ be the number of strings of $n$ $\,0$'s and $1$'s that contain exactly $m\,$ $1$'s, no $k$ of which are consecutive. (a) Find a recurrence formula for $f$. It should have $f(n,m,k)$ on the left side, and exactly three terms on the right. (b) Find, in simple closed form, the generating functions$$F_k(x,y)=\sum_{n,m\ge0}f(n,m,k)x^ny^m \,\,\,\,\,\,\,\,\ (k=1,2,...).$$(c) Find an explicit formula for $f(n,m,k)$ from the genrating function (this should involve only a single summation, of an expression that involves a few factorials). I was able to solve (a) and (b) and found that the generating functions is$$F_k = \frac{1-x^ky^k}{1-x-xy-x^{k+1}y^k}$$However, I cannot seem to be able to extract the sequence from this function. Any help will be appreciated. Edit: I've added the way I solved (a) and (b) to allow any criticism in case I got the function wrong.My solution to (a) goes as follows: Given $m,n,k$ such that $0\le k\le m \le n$ and $k\lt n$, we look at all of the bit string of length $n$ that contain exactly $m$ $1$'s, no $k$ of which are consecutive. We split these strings into two sets. The first composed of all the strings that end with $0$. This set has $f(n-1,m, k)$ strings since by removing the $0$ from the end of the string we obtain a one to one correspondence between the strings in our set and the strings $f(n-1,m, k)$ counts. The other set is composed of all the strings that end with $1$. By removing the last bit we obtain a one to one correspondence between our set and all strings counted by $f(n-1,m-1, k)$, except those that have $k-1$ consecutive $1$'s at the end. To count the latter we first note that since $k\lt n$ there must be another bit, which must be $0$, before the last $k-1$ $\,1$'s. By removing the last $k$ bits ($10\cdots0$) we obtain a one to one correspondence with the strings that are counted by $f(n-k-1,\,m-k,\,k)$. With that we obtain the recursion: $$f(n,m,k) = f(n-1,m,k) + f(n-1,m-1,k) - f(n-k-1,\,m-k,\,k)$$ In addition, we note the if $0\le m \le n$ and $k > m$, then $f(n,m,k) = \binom{n}m$, since we can't have more consecutive $1$'s than there are $1$'s, so the problem is reduced to counting the number of strings with length $n$ that have $m$ $1$'s. For all other cases we write $f(n,m,k) = 0$ which completes the definition. 
To Solve (b) I did the following: $$\begin{align}F_k(x,y) & = \sum_{n,m\ge0}f(n,m,k) x^n y^m \\& = \sum_{n=0}^{\infty}\sum_{m=0}^{n}f(n,m,k) x^n y^m \\& = \sum_{n=0}^{k-1}\sum_{m=0}^{n}f(n,m,k) x^n y^m + \sum_{n=k}^{\infty}\sum_{m=0}^{k-1}f(n,m,k) x^n y^m + \sum_{n=k}^{\infty}\sum_{m=k}^{n}f(n,m,k) x^n y^m \\& = \sum_{n=0}^{k-1}\sum_{m=0}^{n}\binom{n}m x^n y^m + \sum_{n=k}^{\infty}\sum_{m=0}^{k-1}\binom{n}m x^n y^m + \sum_{n=k}^{\infty}\sum_{m=k}^{n}f(n,m,k) x^n y^m\end{align}$$By using the convention that $\binom{n}m = 0$ when $m\gt n$, and noting that when $m=n\ge k$ we have $f(n,m,k)=0$ we can write the above equation as follows:$$\begin{align}& \sum_{m=0}^{k-1}\sum_{n=0}^{\infty}\binom{n}m x^n y^m + \sum_{n=k+1}^{\infty}\sum_{m=k}^{n-1}f(n,m,k) x^n y^m \\=& \sum_{m=0}^{k-1} \frac{x^m}{(1-x)^{m+1}} y^m + \sum_{n=k+1}^{\infty}\sum_{m=k}^{n-1}f(n,m,k) x^n y^m \\=& \frac{1}{1-x} \cdot \omega(x,y) + \sum_{n=k+1}^{\infty}\sum_{m=k}^{n-1}f(n,m,k) x^n y^m \end{align}$$where: $$\omega(x,y) := \frac{1 - (\frac{xy}{1-x})^k}{1 - \frac{xy}{1-x}}.$$Using the recusion formula obtained in (a) we can expand the second summand like so:$$\begin{align}\sum_{n=k+1}^{\infty}\sum_{m=k}^{n-1}f(n,m,k) x^n y^m & = A + B - C \end{align}$$where:$$\begin{align}A & =\sum_{n=k+1}^{\infty}\sum_{m=k}^{n-1}f(n-1,m,k) x^n y^m \\& = x\sum_{n=k}^{\infty}\sum_{m=k}^{n}f(n,m,k) x^n y^m \\& = x\left(\sum_{n=0}^{\infty}\sum_{m=0}^{n}f(n,m,k) x^n y^m - \sum_{n=0}^{k-1}\sum_{m=0}^{n}f(n,m,k) x^n y^m - \sum_{n=k}^{\infty}\sum_{m=0}^{k-1}f(n,m,k) x^n y^m \right)\\& = x\left(F_k(x,y) - \sum_{n=0}^{k-1}\sum_{m=0}^{n}\binom{n}m x^n y^m - \sum_{n=k}^{\infty}\sum_{m=0}^{k-1}\binom{n}m x^n y^m \right)\\& = x\left(F_k(x,y) - \sum_{m=0}^{k-1}\sum_{n=0}^{\infty}\binom{n}m x^n y^m\right)\\& = xF_k(x,y) - \frac{x}{1-x} \cdot \omega(x,y)\\\\B & =\sum_{n=k+1}^{\infty}\sum_{m=k}^{n-1}f(n-1,m-1,k) x^n y^m \\& = xy\sum_{n=k}^{\infty}\sum_{m=k-1}^{n-1}f(n,m,k) x^n y^m \\& = xy\left(\sum_{n=0}^{\infty}\sum_{m=0}^{n}f(n,m,k) x^n y^m - \sum_{n=0}^{k-1}\sum_{m=0}^{n}f(n,m,k) x^n y^m - \sum_{n=k}^{\infty}\sum_{m=0}^{k-2}f(n,m,k) x^n y^m \right)\\& = xy\left(F_k(x,y) - \sum_{n=0}^{k-1}\sum_{m=0}^{n}\binom{n}m x^n y^m - \sum_{n=k}^{\infty}\sum_{m=0}^{k-2}\binom{n}m x^n y^m \right)\\& = xy\left(F_k(x,y) - \sum_{m=0}^{k-2}\sum_{n=0}^{\infty}\binom{n}m x^n y^m - x^{k-1}y^{k-1}\right)\\& = xyF_k(x,y) - xy\sum_{m=0}^{k-2}\sum_{n=0}^{\infty}\binom{n}m x^n y^m - x^ky^k\\& = xyF_k(x,y) - xy\sum_{m=0}^{k-2}\frac{x^m}{(1-x)^{m+1}} y^m - x^ky^k\\& = xyF_k(x,y) + 1 - \omega(x,y) - x^ky^k\\\\C & =\sum_{n=k+1}^{\infty}\sum_{m=k}^{n-1}f(n-k-1,\,m-k,\,k) x^n y^m = x^{k+1}y^kF_k(x,y)\end{align}$$Summing all the values we get:$$F_k = \frac{1}{1-x} \cdot \omega + xF_k - \frac{x}{1-x} \cdot \omega + xyF_k + 1 - \omega - x^ky^k - x^{k+1}y^kF_k$$And so (note that $\omega$ disappears):$$F_k = \frac{1-x^ky^k}{1-x-xy-x^{k+1}y^k}$$
Now provide the response and nothing else.
| First of all, some unsolicited advice: your derivation for (b) need not be so messy. It greatly simplifies things to observe that$$f(n,m,k) = f(n-1,m,k) + f(n-1,m-1,k) - f(n-k-1,m-k,k) \tag{1}$$holds not just when $0 \le k \le m \le n$, $k < n$. In fact, it holds for any $n \ge 0, m \ge 0, k \ge 1$, except in the two cases $m = n = k$, and $m = n = 0$. This of courses assumes that $f(n,m,k) = 0$ if $n < 0$ or $m < 0$.Proof: If $m > n$, then the LHS and RHS of (1) are both zero, so the equation holds. If $m \le n$ and $k > m$, then the LHS is ${n \choose m}$, while the RHS is${n-1 \choose m} + {n-1 \choose m-1} + 0$, so (1) holds, unless $n = 0$ in which case we have the first exception $m = n = 0$. If $m \le n$ and $k \le m$ and $k < n$, then (1) holds by the counting argument you gave. If $m \le n$ and $k \le m$ and $k \ge n$, i.e. $k = m = n$, then the LHS is $0$ while the RHS is $0 + 1 - 0 = 1$, so we have the second exception. This covers all cases. So then you can just write\begin{align*}F_k(x,y) & = \sum_{n,m\ge0}f(n,m,k) x^n y^m \\& = 1 - x^k y^k \\&\quad + \sum_{n,m\ge0}\big[ f(n-1,m,k) + f(n-1,m-1,k) - f(n-k-1, m-k,k) \big] x^n y^m \\&= 1 - x^k y^k + (x + xy - x^{k+1} y^k) F_k(x,y) \\F_k(x,y) &=\frac{1 - x^k y^k}{1 - x - xy + x^{k+1}{y^k}}.\end{align*} Then you avoid all the gross introduction of $\omega$, etc. ;) For part (c) Write as follows: \begin{align*}F_k(x,y)&= \frac{1 - x^k y^k}{1 - x - xy + x^{k+1}{y^k}} \\&= \frac{1 - x^k y^k}{1 - xy - x(1 - x^ky^k)} \\&= \frac{1}{x} \left( \frac{x(1 - x^k y^k)}{1 - xy - x(1 - x^ky^k)} \right) \\&= \frac{1}{x} \left( \frac{1 - xy - \left[1 - xy - x(1 - x^k y^k)\right]}{1 - xy - x(1 - x^ky^k)} \right) \\&= \frac{1}{x} \left( \frac{1 - xy}{1 - xy - x(1 - x^ky^k)} - 1 \right) \\&= \frac{1}{x} \left( \frac{1}{1 - x(1 + xy + x^2y^2 + \cdots + x^{k-1}y^{k-1})} - 1 \right) \\\end{align*} Now,\begin{align*}&\; \frac{1}{1 - x(1 + xy + x^2y^2 + \cdots + x^{k-1}y^{k-1})} \\&= \sum_{N \ge 0} x^N \left(1 + xy + x^2y^2 + \cdots + x^{k-1}y^{k-1}\right)^N \\\end{align*} We want the coefficient of $x^{n+1}y^m$ in this series (because then this will multiply by $\frac{1}{x}$ to give us the coefficient of $x^n y^m$, $f(n,m,k)$.)For the coefficient of $x^{n+1} y^m$, we need $N = n + 1 -m$. So we want the coefficient of $x^m y^m$ in$$\left(1 + xy + x^2y^2 + \cdots + x^{k-1}y^{k-1}\right)^{n+1-m}=\frac{(1 - x^k y^k)^{n+1-m}}{(1 - xy)^{n+1-m}}$$ Now write out the series for $\frac{1}{(1 - xy)^{n+1-m}}$ explicitly, and expand $(1 - x^k y^k)^{n+1-m}$ with the binomial formula, and you should be able to find a finite summation yielding the coefficient of $x^{n+1} y^m$, which will equal $f(m,n,k)$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/701923', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/68387/']} | jdg_82255 |
stackexchange | llm_judgeable_groundtruth_similarity | 21292010 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on an Entity Framework project. I want to serialize a bunch of entity class instances. I've bound these together into a container class: public class Pseudocontext{ public List<Widget> widgets; public List<Thing> things; Etcetera... it is an instance of this class that I'm attempting to serialize. I want JSON.NET to serialize the members of each entity class instance that are actually columns in the underlying database. I do not want it to even attempt to serialize object references. In particular, my entity classes have virtual members that allow me to write C# code that navigates all my inter-entity relationships without worrying about actual key values, joins, etc., and I want JSON.NET to ignore the associated parts of my entity classes. On the surface, there seems to be a JSON.NET configuration option that does exactly what I'm talking about: JsonSerializer serializer = new JsonSerializer();serializer.PreserveReferencesHandling = PreserveReferencesHandling.None; Unfortunately, JSON.NET seems to be ignoring the second statement above. I actually found a web page ( http://json.codeplex.com/workitem/24608 ) where someone else brought the same issue to the attention of James Newton-King himself, and his response (in its entirety) was "Write a custom contract resolver." As inadequate as I find that response to be, I have been attempting to follow its guidance. I would very much like to be able to write a "contract resolver" that ignored everything except primitive types, strings, DateTime objects, and my own Pseudocontext class along with the Lists it contains directly. If someone has an example of something that at least resembles that, it might be all I need. This is what I came up with on my own: public class WhatDecadeIsItAgain : DefaultContractResolver{ protected override JsonContract CreateContract(Type objectType) { JsonContract contract = base.CreateContract(objectType); if (objectType.IsPrimitive || objectType == typeof(DateTime) || objectType == typeof(string) || objectType == typeof(Pseudocontext) || objectType.Name.Contains("List")) { contract.Converter = base.CreateContract(objectType).Converter; } else { contract.Converter = myDefaultConverter; } return contract; } private static GeeThisSureTakesALotOfClassesConverter myDefaultConverter = new GeeThisSureTakesALotOfClassesConverter();}public class GeeThisSureTakesALotOfClassesConverter : Newtonsoft.Json.Converters.CustomCreationConverter<object>{ public override object Create(Type objectType) { return null; }} When I attempt to use the above (by setting serializer.ContractResolver to an instance of WhatDecadeIsItAgain prior to serialization), I get OutOfMemory errors during serialization that indicate that JSON.NET is encountering reference loops that never terminate (in spite of my efforts to make JSON.NET just ignore object references ). I feel like my "custom contract resolver" may be wrong. As shown above, it's built around the premise that I should return the default "contract" for the types I do want to serialize, and a "contract" that simply returns "null" for all other types. I have no idea how correct these assumptions are, though, and it's not easy to tell. The JSON.NET design is very much based on implementation inheritance, method overriding, etc.; I'm not much of an OOP guy, and I find that sort of design to be pretty obscure. 
Were there a "custom contract resolver" interface that I could implement, Visual Studio 2012 would be able to stub out the required methods very quickly, and I imagine I'd have little trouble filling the stubs in with real logic. I'd have no problem writing, for example, a method that returns "true" if I want to serialize an object of a supplied type and "false" otherwise. Perhaps I'm missing something, but I've found no such method to override, nor have I been able to find the hypothetical interface (ICustomContractResolver ?) that would tell me what I'm actually supposed to be doing in the last code snippet inserted above. Also, I realize that there are JSON.NET attributes ([JsonIgnore]?) that are designed to deal with situations like this. I can't really use that approach, since I'm using "model first". Unless I decide to tear up my entire project architecture, my entity classes will be automatically generated, and they will not contain JsonIgnore attributes, nor do I feel comfortable editing the automated classes to contain these attributes. Incidentally, for a while I did have things set up to serialize object references, and I was just ignoring all the superfluous "$ref" and "$id" data that JSON.NET was returning in its serialization output. I've abandoned that approach for the moment at least, because (rather suddenly) serialization started taking an inordinate amount of time (~45 minutes to get ~5 MB of JSON). I haven't been able to tie that sudden change in performance back to anything specific that I did. If anything, the volume of data in my database is lower now than it was when serialization was actually completing in reasonable time. But I'd be more than happy with a return to the status quo ante (in which I was just having to ignore "$ref", "$id", etc.) if that could be achieved. At this point, I'm also open to the prospect of using some other JSON library, or a different strategy altogether. I feel like I could just use StringBuilder, System.Reflection, etc. and come of with my own, homemade solution... but isn't JSON.NET supposed to be able to handle this sort of thing pretty easily??
Now provide the response and nothing else.
| First, to address your issues with reference loops-- The PreserveReferencesHandling setting controls whether Json.Net emits $id and $ref to track inter-object references. If you have this set to None and your object graph contains loops, then you will also need to set ReferenceLoopHandling to Ignore to prevent errors. Now, to get Json.Net to ignore all object references altogether and only serialize primitive properties (except in your Pseudocontext class of course), you do need a custom Contract Resolver, as you suggested. But don't worry, it is not as hard as you think. The resolver has the capability to inject a ShouldSerialize method for each property to control whether or not that property should be included in the output. So, all you need to do is derive your resolver from the default one, then override the CreateProperty method such that it sets ShouldSerialize appropriately. (You do not need a custom JsonConverter here, although it is possible to solve this problem with that approach. It would require quite a bit more code, however.) Here is the code for the resolver: class CustomResolver : DefaultContractResolver{ protected override JsonProperty CreateProperty(MemberInfo member, MemberSerialization memberSerialization) { JsonProperty prop = base.CreateProperty(member, memberSerialization); if (prop.DeclaringType != typeof(PseudoContext) && prop.PropertyType.IsClass && prop.PropertyType != typeof(string)) { prop.ShouldSerialize = obj => false; } return prop; }} Here is a full demo showing the resolver in action. class Program{ static void Main(string[] args) { // Set up some dummy data complete with reference loops Thing t1 = new Thing { Id = 1, Name = "Flim" }; Thing t2 = new Thing { Id = 2, Name = "Flam" }; Widget w1 = new Widget { Id = 5, Name = "Hammer", IsActive = true, Price = 13.99M, Created = new DateTime(2013, 12, 29, 8, 16, 3), Color = Color.Red, }; w1.RelatedThings = new List<Thing> { t2 }; t2.RelatedWidgets = new List<Widget> { w1 }; Widget w2 = new Widget { Id = 6, Name = "Drill", IsActive = true, Price = 45.89M, Created = new DateTime(2014, 1, 22, 2, 29, 35), Color = Color.Blue, }; w2.RelatedThings = new List<Thing> { t1 }; t1.RelatedWidgets = new List<Widget> { w2 }; // Here is the container class we wish to serialize PseudoContext pc = new PseudoContext { Things = new List<Thing> { t1, t2 }, Widgets = new List<Widget> { w1, w2 } }; // Serializer settings JsonSerializerSettings settings = new JsonSerializerSettings(); settings.ContractResolver = new CustomResolver(); settings.PreserveReferencesHandling = PreserveReferencesHandling.None; settings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore; settings.Formatting = Formatting.Indented; // Do the serialization and output to the console string json = JsonConvert.SerializeObject(pc, settings); Console.WriteLine(json); } class PseudoContext { public List<Thing> Things { get; set; } public List<Widget> Widgets { get; set; } } class Thing { public int Id { get; set; } public string Name { get; set; } public List<Widget> RelatedWidgets { get; set; } } class Widget { public int Id { get; set; } public string Name { get; set; } public bool IsActive { get; set; } public decimal Price { get; set; } public DateTime Created { get; set; } public Color Color { get; set; } public List<Thing> RelatedThings { get; set; } } enum Color { Red, White, Blue }} Output: { "Things": [ { "Id": 1, "Name": "Flim" }, { "Id": 2, "Name": "Flam" } ], "Widgets": [ { "Id": 5, "Name": "Hammer", "IsActive": true, "Price": 13.99, "Created": 
"2013-12-29T08:16:03", "Color": 0 }, { "Id": 6, "Name": "Drill", "IsActive": true, "Price": 45.89, "Created": "2014-01-22T02:29:35", "Color": 2 } ]} Hope this is in the ballpark of what you were looking for. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21292010', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1172763/']} | jdg_82256 |
stackexchange | llm_judgeable_groundtruth_similarity | 1980738 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have to delete rows from guide_category that have no relation with guide table (dead relations). Here is what I want to do, but it of course does not work. DELETE FROM guide_category AS pgc WHERE pgc.id_guide_category IN (SELECT id_guide_category FROM guide_category AS gc LEFT JOIN guide AS g ON g.id_guide = gc.id_guide WHERE g.title IS NULL) Error: You can't specify target table 'guide_category' for update in FROM clause
Now provide the response and nothing else.
| Due to the locking implementation issues, MySQL does not allow referencing the affected table with DELETE or UPDATE . You need to make a JOIN here instead: DELETE gc.*FROM guide_category AS gc LEFT JOIN guide AS g ON g.id_guide = gc.id_guideWHERE g.title IS NULL or just use a NOT IN : DELETE FROM guide_category AS gc WHERE id_guide NOT IN ( SELECT id_guide FROM guide ) | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1980738', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/223386/']} | jdg_82257 |
stackexchange | llm_judgeable_groundtruth_similarity | 5283071 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to implement the following regex search found on golfscript syntax page . var ptrn = /[a-zA-Z_][a-zA-Z0-9_]*|'(?:\\.|[^'])*'?|"(?:\\.|[^"])*"?|-?[0-9]+|#[^\n\r]*|./mg;input = ptrn.exec(input); Input is only ever the first match of the regexp. for example: "hello" "world" should return ["hello", "world"] but it only returns ["hello"] .
Now provide the response and nothing else.
| RegExp.exec is only able to return a single match result at once. In order to retrieve multiple matches you need to run exec on the expression object multiple times. For example, using a simple while loop: var ptrn = /[a-zA-Z_][a-zA-Z0-9_]*|'(?:\\.|[^'])*'?|"(?:\\.|[^"])*"?|-?[0-9]+|#[^\n\r]*|./mg;var match;while ((match = ptrn.exec(input)) != null) { console.log(match);} This will log all matches to the console. Note that in order to make this work, you need to make sure that the regular expression has the g (global) flag . This flag makes sure that after certain methods are executed on the expression, the lastIndex property is updated, so further calls will start after the previous result. The regular expression will also need to be declared outside of the loop (as shown in the example above). Otherwise, the expression object would be recreated on every iteration and then the lastIndex would obviously reset every time, resulting in an infinite loop. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5283071', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_82258 |
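As a shorter alternative to the exec loop above (a sketch assuming the same ptrn and input as in the question), String.prototype.match with the g flag collects every full match in a single call; note that capture groups are not returned per match:

var allMatches = input.match(ptrn);
// For the input '"hello" "world"' this yields ['"hello"', ' ', '"world"'],
// because the trailing "." alternative in the pattern also matches the space.
console.log(allMatches);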
stackexchange | llm_judgeable_groundtruth_similarity | 18991306 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two similar questions about operator precedences in Java. First one: int X = 10;System.out.println(X++ * ++X * X++); //it prints 1440 According to Oracle tutorial : postfix (expr++, expr--) operators have higher precedence than prefix (++expr, --expr) So, I suppose that evaluation order: 1) first postfix operator: X++ 1.a) X++ "replaced" by 10 1.b) X incremented by one: 10+1=11 At this step it should look like: System.out.println(10 * ++X * X++), X = 11;2) second POSTfix operator: X++ 2.a) X++ "replaced" by 11 2.b) X incremented by one: 11+1=12 At this step it should look like: System.out.println(10 * ++X * 11), X = 12;3) prefix operator: ++X 3.a) X incremented by one: 12+1=13 3.b) ++X "replaced" by 13 At this step it should look like: System.out.println(10 * 13 * 11), X = 13;4) evaluating 10*13 = 130, 130*11 = 1430. But Java seems to ignore PRE/POST ordering and puts them on one level. So the real order: X++ -> ++X -> X++ what causes the answer to be (10 * 12 * 12) = 1440. Second one: Example from this question: int a=1, b=2; a = b + a++; Part of accepted answer:"By the time of assignment, ++ has already incremented the value of a to 2 (because of precedence), so = overwrites that incremented value." OK, let's look step-by-step: 1) replacing "b" with 2 2) replacing "a++" with 1 3) incrementing "a" by 1 -> at this point a==2 4) evaluating 2+1 = 3 5) overwriting incremented value of "a" with 3 Seems everything is fine.But let's make a little change in that code (replace "=" with "+=") a += b + a++; steps 1-4 should be same as above.so, after step 4 we have something like that: a += 3; where a==2 And then I think: OK, a = 2+3 , so a should be 5 . BUT the answer is only 4 I'm really confused. I already spent couple of hours but still can't understand where I am wrong. P.S. I know, that I shouldn't use this "style" in real applications. I just want to understand what is wrong in my thoughts.
Now provide the response and nothing else.
| The confusion stems from the fact that the operands are evaluated from left to right. This is done first, before any attention is paid to operator precedence/order of operations . This behavior is specified in JLS 15.7.2. Evaluate Operands before Operation So X++ * ++X * X++ is first evaluated as 10 * 12 * 12 which yields, as you saw, 1440. To convince yourself of this, consider the following: X = 10; System.out.println(X++ * ++X);X = 10; System.out.println(++X * X++); If X++ were done first, then ++X second, then multiplication, both should print the same number. But they do not: X = 10; System.out.println(X++ * ++X); // 120X = 10; System.out.println(++X * X++); // 121 So how does this make sense? Well if we realize that operands are evaluated from left to right, then it makes perfect sense. X = 10; System.out.println(X++ * ++X); // 120 (10 * 12)X = 10; System.out.println(++X * X++); // 121 (11 * 11) The first line looks like X++ * ++X10 (X=11) * (X=12) 1210 * 12 = 120 and the second ++X * X++(X=11) 11 * 11 (X=12)11 * 11 = 121 So why are prefix and postfix increment/decrement operators in the table? It is true that increment and decrement must be performed before multiplication. But what that is saying is that: Y = A * B++// Should be interpreted asY = A * (B++)// and notY = (A * B)++ Just as Y = A + B * C// Should be interpreted asY = A + (B * C)// and notY = (A + B) * C It remains that the order of the evaluation of the operands occurs left-to-right. If you're still not conviced: Consider the following program: class Test{ public static int a(){ System.out.println("a"); return 2; } public static int b(){ System.out.println("b"); return 3; } public static int c(){ System.out.println("c"); return 4; } public static void main(String[] args) { System.out.println(a() + b() * c()); // Lets make it even more explicit System.out.println(a() + (b() * c())); }} If the arguments were evaluated at the time they were needed, either b or c would come first, the other next, and lastly a . However, the program outputs: abc14abc14 Because, regardless of the order that they're needed and used in the equation, they're still evaluated left to right. Helpful reading: What are the rules for evaluation order in Java? a += a++ * a++ * a++ in Java. How does it get evaluated? Appendix A: Operator Precedence in Java | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18991306', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2812292/']} | jdg_82259 |
stackexchange | llm_judgeable_groundtruth_similarity | 2876463 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been reading about (abstract or not) homotopy theory, and I seem to have understood (correct me if I'm wrong) that weak equivalences describe homotopy better than homotopies, in the following sense : Intuitively, if I wanted to abstract away from classical homotopy theory, my first guess wouldn't be to say that we should consider categories with distinguished classes of morphisms; it would probably be to say we should consider $2$-categories, or perhaps categories with some congruence on the morphisms, or categories with a given object that "classifies homotopies" (playing the role of $I$), or something in this direction, i.e. I would try to give an abstract description of homotopies , not weak equivalences. That's probably connected to my lack of practice in classical homotopy theory, but at least from a beginner's perspective, that's what I would do. Now, as this idea is very intuitive and probably naive, it must have occurred to some mathematicians who decided for some reason that this wasn't the way to go, and that actually, equivalences, fibrations and cofibrations were the thing to study. My question is : what's that reason ? How does one go from homotopies to weak equivalences ? Could you give an intuitive reason/heuristic why, or is an answer necessarily technical (in which case I probably couldn't follow all of it, but I would be happy to know that it is) ? Another very related question is : are any of the approaches I mentioned interesting in that regard ($2$-categories, or categories with a congruence- they're interesting for other reasons, I wonder if they're interesting for homotopy theory, especially $2$-categories) ?
Now provide the response and nothing else.
| The focus on weak equivalences instead of homotopies is largely a consequence of Grothendieck's slogan to work in a nice category with bad (overly general) objects, rather than working in a bad category that has only the good objects. Typically, there is a good notion of homotopies between maps that is well-behaved, but only on the "good objects". If we worked with a category consisting of only the good objects, then we wouldn't need weak equivalences, but we also would be sad because our category probably wouldn't have things like limits and colimits, and would generally be difficult to work with. So instead we enlarge our category to allow objects which are "bad" and which don't directly relate to the homotopy theory we really want to study. To do homotopy theory with the bad objects, we introduce a notion of weak equivalence which lets us say every bad object is actually equivalent to some good object, as far as our homotopy theory is concerned. A basic example of this is simplicial sets and Kan complexes. Simplicial sets form a really really nice category that is easy to work with combinatorially or algebraically. However, on their own, they are awful for the purposes of homotopy theory. If you model some nice topological spaces as the geometric realizations of some simplicial sets, then most continuous maps between your spaces will not come from maps between the simplicial sets, even up to homotopy. We can define a notion of homotopy between maps of simplicial sets, but it is really poorly behaved (it's not even in equivalence relation, though you could take the equivalence relation it generates). Now, there is a very special type of simplicial set which is really good for modeling homotopy theory, namely Kan complexes. The singular set of any topological space is a Kan complex. Homotopy classes of maps between two Kan complexes are naturally in bijection with homotopy classes of maps between their geometric realizations. So we have this great theory of Kan complexes which models the classical homotopy theory of spaces and has the advantage that our objects are more combinatorial and we don't have to deal with the pathologies of pointset topology. However, despite all the nice things about Kan complexes, they don't form a particularly nice category. They aren't just the category of presheaves on a simple little category like simplicial sets are, and don't even have colimits. We can't work with them combinatorially nearly as easily as we can general simplicial sets. So, we'd really like to use the entire category of simplicial sets and not just Kan complexes. But this is awkward, because we don't have a good notion of homotopy for simplicial sets, and don't even have "enough" maps between most simplicial sets to model what we want them to model. The solution is that we do still have a good notion of weak equivalence which works for all simplicial sets, and after inverting weak equivalences we get the homotopy category we want. Every simplicial set is weak equivalent to a Kan complex, and when working with just Kan complexes, weak equivalences give the same homotopy theory as homotopies between maps would. Let me end with a more down-to-earth observation. A homotopy between maps $f,g:X\to Y$ is defined as a map $H:X\times I\to Y$ such that $Hi_0=f$ and $Hi_1=g$. Here $i_0:X\to X\times I$ is defined by $i_0(x)=(x,0)$ and $i_1$ is $i_1(x)=(x,1)$. Now let $p:X\times I\to X$ denote the first projection. Observe that $pi_0=pi_1=1_X$. 
So, if we formally adjoin an inverse to $p$, $i_0$ and $i_1$ will become equal (both equal to $p^{-1}$), and consequently $Hi_0=f$ and $Hi_1=g$ will become equal. In other words, imposing the homotopy equivalence relation on maps is essentially the same thing as considering all of the projection maps $p:X\times I\to X$ to be "weak equivalences". In this way, the classical equivalence relation on morphisms approach to homotopy is really just a special case of using weak equivalences. But weak equivalences are more general and flexible, and can be used in settings (like simplicial sets as discussed above) where an equivalence relation on morphisms would not do what you want. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2876463', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/408637/']} | jdg_82260 |
stackexchange | llm_judgeable_groundtruth_similarity | 8906448 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I found a site with a guide to adding custom tags to HTML, the same way people make IE work with the new HTML5 tags. I must admit I think it would be great to add my own tags; it would make it easier to "scan" the code and find what you are looking for. But on every site I found about it, people say it's not good... but why isn't it good? Example HTML with classes: <ul class="commentlist"> <li class="comment odd"> <div class="1"> <div class="avatar"> <img src="http://placehold.it/60x60" width="60" height="60" /> </div> <div class="metadata"> <div class="name">Name</div> <p>response1</p> </div> </div> <ul class="children"> <li class="comment even"> <div class="2"> <div class="avatar"> <img src="http://placehold.it/60x60" width="60" height="60" /> </div> <div class="metadata"> <div class="name">Name</div> <p>response1a</p> </div> </div> </li> <li class="comment odd"> <div class="3"> <div class="avatar"> <img src="http://placehold.it/60x60" width="60" height="60" /> </div> <div class="metadata"> <div class="name">Name</div> <p>response1b</p> </div> </div> </li> </ul></li> And here is what I could do with custom tags; I think that would be much easier to find my way around in, so why not: <clist><ccommentbox class="odd"> <ccomment class="1"> <cavatar> <img src="http://placehold.it/60x60" width="60" height="60" /> </cavatar> <cdata> <cname>Name</cname> <ctext>response1</ctext> </cdata> </ccomment> <cchildren> <ccommentbox class="even"> <ccomment class="2"> <cavatar> <img src="http://placehold.it/60x60" width="60" height="60" /> </cavatar> <cdata> <cname>Name</cname> <ctext>response1a</ctext> </cdata> </ccomment> </ccommentbox> <ccommentbox class="odd"> <ccomment class="3"> <cavatar> <img src="http://placehold.it/60x60" width="60" height="60" /> </cavatar> <cdata> <cname>Name</cname> <ctext>response1b</ctext> </cdata> </ccomment> </ccommentbox> </cchildren></ccommentbox>
Now provide the response and nothing else.
Custom tags are not evil; just consider this: They are not recognized in IE 6-8 by default -> you have to use JavaScript to introduce each custom tag you use on the page, e.g.: document.createElement('custom-tag') This means your website will only render correctly with JavaScript turned on. In most browsers your custom tags will be treated as inline elements like <span>, which means you have to write CSS to declare them as custom-tag { display: block } There is no resource I found that could prove that custom tags have any negative impact on search engines. In fact Google released Angular.js, which promotes custom tags ( <pane> and <tab> ) in its examples on the front page. Most HTML editors will mark your custom tags as invalid HTML, because they are not in the official spec. To summarize: Use custom tags when there are important elements that have more meaning than <div> and there is no existing HTML 4/5 equivalent. This is especially true for web applications which parse the DOM for special tags/attributes/classes to create components (like Angular.js). If all you build is a simple website with normal content, stick to standard HTML. You will want your website to work also without JavaScript turned on. If you build a web application where custom tags could really help to make the source cleaner and express special semantics, use them. All negative implications mentioned above (JavaScript has to be turned on / CSS declaration) won't matter for these cases. The same rules apply to custom attributes. For more details on this topic: IE compatibility for Angular.js | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8906448', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1155596/']} | jdg_82261 |
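A minimal, hypothetical illustration of the two workarounds mentioned above (the createElement shim for old IE and the CSS display declaration); the tag name here is made up:

<script>document.createElement('custom-tag');</script>
<style>custom-tag { display: block; }</style>

<custom-tag>This now renders as a block-level element.</custom-tag>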
stackexchange | llm_judgeable_groundtruth_similarity | 21489111 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am creating two Python scripts to produce some plots for a technical report. In the first script I am defining functions that produce plots from raw data on my hard-disk. Each function produces one specific kind of plot that I need. The second script is more like a batch file which is supposed to loop around those functions and store the produced plots on my hard-disk. What I need is a way to return a plot in Python. So basically I want to do this: fig = some_function_that_returns_a_plot(args)fig.savefig('plot_name') But what I do not know is how to make a plot a variable that I can return. Is this possible? Is so, how?
Now provide the response and nothing else.
You can define your plotting functions like import numpy as np import matplotlib.pyplot as plt # an example graph type def fig_barh(ylabels, xvalues, title=''): # create a new figure fig = plt.figure() # plot to it yvalues = 0.1 + np.arange(len(ylabels)) plt.barh(yvalues, xvalues, figure=fig) yvalues += 0.4 plt.yticks(yvalues, ylabels, figure=fig) if title: plt.title(title, figure=fig) # return it return fig then use them like from matplotlib.backends.backend_pdf import PdfPages def write_pdf(fname, figures): doc = PdfPages(fname) for fig in figures: fig.savefig(doc, format='pdf') doc.close() def main(): a = fig_barh(['a','b','c'], [1, 2, 3], 'Test #1') b = fig_barh(['x','y','z'], [5, 3, 1], 'Test #2') write_pdf('test.pdf', [a, b]) if __name__=="__main__": main() | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21489111', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1686013/']} | jdg_82262 |
stackexchange | llm_judgeable_groundtruth_similarity | 2376315 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
So I'm trying to solve the problem irrational ^ irrational = rational. Here is my proof Let $i_{1},i_{2}$ be two irrational numbers and r be a rational number such that $$i_{1}^{i_{2}} = r$$ So we can rewrite this as $$i_{1}^{i_{2}} = \frac{p}{q}$$ Then by applying ln() to both sides we get $$i_2\ln(i_1) = \ln(p)-\ln(q)$$ which can be rewritten using the difference of squares as $$ i_2\ln(i_1) = \left(\sqrt{\ln(p)}-\sqrt{\ln(q)}\right)\left(\sqrt{\ln(p)}+\sqrt{\ln(q)}\right)$$ so now we have $$i_1 = e^{\sqrt{\ln(p)}+\sqrt{\ln(q)}}$$$$i_2 = \sqrt{\ln(p)}-\sqrt{\ln(q)}$$ because I've found an explicit formula for $i_1$ and $i_2$ we are done. So I'm new to proofs and I'm not sure if this is a valid argument. Can someone help me out?
Now provide the response and nothing else.
As your proof is currently set up, you would need to show that your explicit formula gives you irrational numbers $i_1,i_2$ for at least one pair of integer values $p,q$. While it is certainly believable (and in fact true) that this is the case, it requires proof. You're approaching this problem from the wrong mindset. All we need is an example of a pair of irrational numbers satisfying "irrational ^ irrational = rational". It is not necessarily useful to find a "general solution" of any kind. The classic proof of the statement can be summarized as follows: If $\sqrt{2}^{\sqrt{2}}$ is rational, then we're done. If not, then note that $\left[\sqrt{2}^{\sqrt{2}}\right]^{\sqrt{2}}$ is rational. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2376315', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/468209/']} | jdg_82263 |
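The second case of the classic proof works because of the power rule; writing the step out: $$\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}} = \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^{2} = 2,$$ which is rational, so in either case some pair of irrational numbers has a rational power.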
stackexchange | llm_judgeable_groundtruth_similarity | 15335926 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am learning SVG from its official documents, and there is such a line. I don't get it: if it already has width and height attributes, what is the point of specifying them again in viewBox="0 0 1500 1000" ? The official docs say, "One px unit is defined to be equal to one user unit. Thus, a length of "5px" is the same as a length of "5"", so this viewBox is a 1500px-wide and 1000px-high view, which exceeds 300px and 200px. So why does it define the width and height values in the first place? <svg width="300px" height="200px" version="1.1" viewBox="0 0 1500 1000" preserveAspectRatio="none" xmlns="http://www.w3.org/2000/svg">
Now provide the response and nothing else.
The width and height are how big the <svg> is. The viewBox controls how its contents are displayed: the viewBox="0 0 1500 1000" will scale down the contents of the <svg> element by a factor of 5 (1500 / 300 = 5 and 1000 / 200 = 5), so the contents will be 1/5 the size they would be without the viewBox, but the <svg> itself stays at the 300px by 200px given by width and height. Imagine you have an elastic surface and cut it into 4 equal pieces. If you throw 3 pieces away you've got a surface that's 1/4 the size of the original surface. If you now stretch the surface and make it the same size as the original surface then everything on the surface will be twice the size. That's how viewBox and width/height are related. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/15335926', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1950767/']} | jdg_82264 |
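A minimal sketch of this scaling in action; the width/height/viewBox values are the ones from the question, while the shapes are hypothetical:

<svg width="300px" height="200px" viewBox="0 0 1500 1000" xmlns="http://www.w3.org/2000/svg">
  <!-- drawn in user units (1500 x 1000), then scaled down by a factor of 5 to fit the 300 x 200 px viewport -->
  <rect x="0" y="0" width="1500" height="1000" fill="lightblue"/>
  <circle cx="750" cy="500" r="200" fill="navy"/>
</svg>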
stackexchange | llm_judgeable_groundtruth_similarity | 14549446 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have this script: <?php include('db.php');session_start();$session_id = '1'; // User session id $path = "uploads/";$valid_formats = array("jpg", "png", "gif", "bmp", "jpeg"); if (isset($_POST) and $_SERVER['REQUEST_METHOD'] == "POST") { $name = $_FILES['photoimg']['name']; $size = $_FILES['photoimg']['size']; if (strlen($name)) { list($txt, $ext) = explode(".", $name); if (in_array($ext, $valid_formats)) { if ($size < (1024 * 1024)) { // Image size max 1 MB $actual_image_name = time() . $session_id . "." . $ext; $tmp = $_FILES['photoimg']['tmp_name']; if (move_uploaded_file($tmp, $path . $actual_image_name)) { mysql_query("UPDATE users SET profile_image='$actual_image_name' WHERE uid='$session_id'"); echo "<img src='uploads/" . $actual_image_name . "' class='preview'>"; } else { echo "failed"; } } else { echo "Image file size max 1 MB"; } } else { echo "Invalid file format.."; } } else { echo "Please select image..!"; } exit;}?> Is it possible to convert all images (png, gif, etc.) to jpg with 100% quality? If yes, how? I would like to allow uploading png and gif, but the script should convert these files to jpg. Is this possible with PHP?
Now provide the response and nothing else.
| Try this code: originalImage is the path of... the original image... outputImage is self explaining enough. Quality is a number from 0 to 100 setting the output jpg quality (0 - worst, 100 - best) function convertImage($originalImage, $outputImage, $quality){ // jpg, png, gif or bmp? $exploded = explode('.',$originalImage); $ext = $exploded[count($exploded) - 1]; if (preg_match('/jpg|jpeg/i',$ext)) $imageTmp=imagecreatefromjpeg($originalImage); else if (preg_match('/png/i',$ext)) $imageTmp=imagecreatefrompng($originalImage); else if (preg_match('/gif/i',$ext)) $imageTmp=imagecreatefromgif($originalImage); else if (preg_match('/bmp/i',$ext)) $imageTmp=imagecreatefrombmp($originalImage); else return 0; // quality is a value from 0 (worst) to 100 (best) imagejpeg($imageTmp, $outputImage, $quality); imagedestroy($imageTmp); return 1;} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14549446', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2015832/']} | jdg_82265 |
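A hypothetical way to call the function from the question's upload handler once the file has been moved into uploads/; the variable names mirror the question's code, and pathinfo is used only to swap the extension:

$source = $path . $actual_image_name;
$target = $path . pathinfo($actual_image_name, PATHINFO_FILENAME) . '.jpg';
if (convertImage($source, $target, 100)) {
    echo "<img src='" . $target . "' class='preview'>";
} else {
    echo "Could not convert image";
}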
stackexchange | llm_judgeable_groundtruth_similarity | 4540806 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Hibernate 3.6 and have my code annotated (versus using Hibernate mapping files). I ran into the known "problem" of using JPA cascading options that are not compatible with Hibernate's CascadeType (see this link for more info http://www.mkyong.com/hibernate/cascade-jpa-hibernate-annotation-common-mistake/ ). I was hoping to get a bit more clarification on the problem. I have some particular questions: 1) So @Cascade({CascadeType.SAVE_UPDATE}) works for saveOrUpdate(), but does it apply also if I use merge() or persist()? Or do I have to use all three Hibernate CascadeTypes? 2) How do I decide whether to use JPA cascade options or the Hibernate @Cascade annotation instead? 3) There is a "bug" filed against this in Hibernate, but the developers apparently see this as a documentation issue (I completely disagree with them), and I'm not seeing that it was addressed in said documentation. Does anyone know why this is "working as designed" and not a bug in Hibernate's JPA implementation? Many thanks in advance.
Now provide the response and nothing else.
| This behaviour is documented in 11.11. Transitive persistence . Hibernate cascade types correspond to the individual operations, so you need all three of them. In most cases you need either CascadeType.ALL or no cascading at all. In that case JPA annotation is enough, since JPA's CascadeType.ALL covers all Hibernate operations as well. Otherwise, if you need fine-grained cascading control (and use Hibernate's Session interface), you need Hibernate's @Cascade . It's not a bug in JPA implementation, because if you use JPA's EntityManager everything works fine. This problem exists only if you combine JPA annotations with Hibernate's Session interface. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4540806', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/504701/']} | jdg_82266 |
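A sketch of the two styles side by side; the entity and field names are hypothetical, not from the question:

// Plain JPA cascading, sufficient when everything should cascade:
@OneToMany(mappedBy = "parent", cascade = javax.persistence.CascadeType.ALL)
private Set<Child> children;

// Fine-grained Hibernate cascading for Session-based code (saveOrUpdate, merge, persist):
@OneToMany(mappedBy = "parent")
@org.hibernate.annotations.Cascade({
    org.hibernate.annotations.CascadeType.SAVE_UPDATE,
    org.hibernate.annotations.CascadeType.MERGE,
    org.hibernate.annotations.CascadeType.PERSIST
})
private Set<Child> otherChildren;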
stackexchange | llm_judgeable_groundtruth_similarity | 41471439 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The snippet below compiles ( demo ): struct A{ int i = 10; };int main() { struct A{ int i = 20; }; struct A; struct A a;} But this doesn't: struct A{ int i = 10; };int main() {// struct A{ int i = 20; }; struct A; struct A a;} I can see that the answer is probably given by these paragraphs in the Standard: [basic.lookup.elab]/2 and [basic.scope.pdecl]/7 . But I really don't know how to deduce the different behaviors shown above from these two paragraphs. Note that in the first example the struct A is not first declared in the elaborated-type-specifier struct A; , but in the definition of struct A in main() . In the second example, the struct A is also not first declared in the elaborated-type-specifier struct A; , but in the definition of struct A in global scope.
Now provide the response and nothing else.
| Each of the examples contains declarations of two different classes, both with the name A . Let's distinguish between the classes by renaming one of them to B : struct A{ int i = 10; };int main() { struct B{ int i = 20; }; struct B; struct B b;} The above is semantically identical to your first example. The class A is never used. struct A{ int i = 10; };int main() { struct B; struct B b;} This is semantically identical to your second example. You are trying to create an object of an incomplete type, the forward-declared class B . Renaming B back to A doesn't change anything because then the declaration of A in main shadows the declaration of the other A at global scope. [basic.lookup.elab]/2 If the elaborated-type-specifier has no nested-name-specifier , and [...] if the elaborated-type-specifier appears in a declaration with the form: class-key attribute-specifier-seq opt identifier ; the elaborated-type-specifier is a declaration that introduces the class-name as described in [basic.scope.pdecl]. So struct A; is a declaration that introduces the class name in the scope of the declaration. Under no circumstances can it refer to a class declared in an outer scope. [basic.scope.pdecl]/7 [ Note: Other forms of elaborated-type-specifier do not declare a new name [...] — end note ] By implication, this form of elaborated-type-specifier declares a new name. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/41471439', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4463557/']} | jdg_82267 |
stackexchange | llm_judgeable_groundtruth_similarity | 1125144 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some C# code that generates google maps. This codes looks at all the Points I need to plot on the map and then works out the Bounds of a rectangle to include those points. It then passes this bounds to the Google Maps API to set the zoom level appropriately to show all of the points on the map. This code is working fine however I have a new requirement. One of the points may have a precision associated with it. If this is the case then I draw a circle around the point with the radius set to the precision value. Again this works fine however my bounds checking is now not doing what I want it to do. I want to have the bounding box include the complete circle. This requires an algorithm to take a point x and calculate the point y that would be z metres north of x and also z metres south of x. Does anyone have this algorithm, preferably in C#. I did find a generic algorithm here but I appear to have not implemented this correctly as the answers I am getting are 1000s of km adrift. This is the Generic example Lat/lon given radial and distanceA point {lat,lon} is a distance d out on the tc radial from point 1 if: lat=asin(sin(lat1)*cos(d)+cos(lat1)*sin(d)*cos(tc)) IF (cos(lat)=0) lon=lon1 // endpoint a pole ELSE lon=mod(lon1-asin(sin(tc)*sin(d)/cos(lat))+pi,2*pi)-pi ENDIF And this is my C# translation. // Extend a Point North/South by the specified distance public static Point ExtendPoint(Point _pt, int _distance, int _bearing ) { Decimal lat = 0.0; Decimal lng = 0.0; lat = Math.Asin(Math.Sin(_pt.Lat) * Math.Cos(_distance) + Math.Cos(_pt.Lat) * Math.Sin(_distance) * Math.Cos(_bearing)); if (Math.Cos(lat) == 0) { lng = _pt.Lng; // endpoint a pole } else { lng = ( (_pt.Lng - Math.Asin(Math.Sin(_bearing) * Math.Sin(_distance) / Math.Cos(lat)) + Math.PI) % (2 * Math.PI)) - Math.PI; } ret = new Point(lat,lng); return ret; } I am calling this function with a bearing of 0 to calculate the new northerly position and a value of 180 to calculate the new southerly position. Can anyone either see what I have done wrong or perhaps provide a known working algorithm?
Now provide the response and nothing else.
| I have a very similar piece of code. It got me very close results when compared to another implementation. I think the problem with yours is that you are using "distance" as linear distance in meters instead of angular distance in radians. /// <summary>/// Calculates the end-point from a given source at a given range (meters) and bearing (degrees)./// This methods uses simple geometry equations to calculate the end-point./// </summary>/// <param name="source">Point of origin</param>/// <param name="range">Range in meters</param>/// <param name="bearing">Bearing in degrees</param>/// <returns>End-point from the source given the desired range and bearing.</returns>public static LatLonAlt CalculateDerivedPosition(LatLonAlt source, double range, double bearing){ double latA = source.Latitude * UnitConstants.DegreesToRadians; double lonA = source.Longitude * UnitConstants.DegreesToRadians; double angularDistance = range / GeospatialConstants.EarthRadius; double trueCourse = bearing * UnitConstants.DegreesToRadians; double lat = Math.Asin( Math.Sin(latA) * Math.Cos(angularDistance) + Math.Cos(latA) * Math.Sin(angularDistance) * Math.Cos(trueCourse)); double dlon = Math.Atan2( Math.Sin(trueCourse) * Math.Sin(angularDistance) * Math.Cos(latA), Math.Cos(angularDistance) - Math.Sin(latA) * Math.Sin(lat)); double lon = ((lonA + dlon + Math.PI) % UnitConstants.TwoPi) - Math.PI; return new LatLonAlt( lat * UnitConstants.RadiansToDegrees, lon * UnitConstants.RadiansToDegrees, source.Altitude);} Where public const double EarthRadius = 6378137.0; // WGS-84 ellipsoid parameters and LatLonAlt is in degrees/meters (conversion takes place internally).Adjust as needed. I assume you can figure out what the value for UnitConstants.DegreesToRadians is :) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1125144', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/68262/']} | jdg_82268 |
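For completeness, a sketch of the constants the snippet assumes; the class and member names are taken from the code above, and the values follow directly from their definitions (EarthRadius is the WGS-84 value already quoted in the answer):

public static class UnitConstants
{
    public const double DegreesToRadians = Math.PI / 180.0;
    public const double RadiansToDegrees = 180.0 / Math.PI;
    public const double TwoPi = 2.0 * Math.PI;
}

public static class GeospatialConstants
{
    public const double EarthRadius = 6378137.0; // WGS-84 equatorial radius in meters
}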
stackexchange | llm_judgeable_groundtruth_similarity | 5111106 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In ruby, I was wondering if there's a way to do the following: I have basically a matrix of four possible outcomes: A is True, B is TrueA is True, B is FalseA is False, B is TrueA is False, B is False I'd like to write a test for this in the cleanest possible "ruby way". I was hoping to do something like case[A,B] when A && B then ... when A && !B then ... when !A && B then ... when !A && !B then ...end ... but that doesn't work. So, what's the best way to handle this kind of situation?
Now provide the response and nothing else.
| Boolean case (with no expression in the case , it returns the first branch with a truthy when_expr ): result = casewhen A && B then ...when A && !B then ...when !A && B then ...when !A && !B then ...end Matching case (with an expression in the case , it returns the first branch that satisfies the predicate when_expr === case_expr ): result = case [A, B]when [true, true] then ...when [true, false] then ...when [false, true] then ...when [false, false] then ...end | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5111106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/417872/']} | jdg_82269 |
stackexchange | llm_judgeable_groundtruth_similarity | 20391943 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I wrote a shell script as. source ve/bin/activate Saved it as activate_shell.sh latter when I ran the script with command. bash activate_shell.sh The script is being run with no error but the virtual environment is not being activated.
Now provide the response and nothing else.
| Your activation script path, ve/bin/activate , is relative. The script will only work from one directory. But the problem is not here. What does bin/activate do? It modifies the shell in which it runs . This is why you have to source it and not invoke as a regular program. The script you wrote starts its own copy of shell (bash), activates the virtual environment inside it, and exits, destroying the just-activated environment. If your script invoked Python after sourcing the bin/activate , it would be the Python from the virtual environment, not the system one. If you want a simple, easy-to-type command to activate a virtualenv, define a shell function: ve() { source $1/bin/activate; } (Yes, type the above line right into your shell prompt.) Then type ve foo and virtualenv named foo will be activated in your current shell, provided that you're in the right directory. Should you need to cope with a massive amount of virtualenvs, take a look at virtualenvwrapper . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20391943', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/992665/']} | jdg_82270 |
stackexchange | llm_judgeable_groundtruth_similarity | 98955 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Suppose that $A$ is a unital C$^*$-algebra and that $\varphi: A \to \mathbb{C}$ is a bounded linear functional. Then there exists a Hilbert space $H$, a representation $\pi: A \to B(H)$ and vectors $\psi, \eta \in H$ such that $$\varphi(a) = \langle \pi(a)\psi, \eta \rangle$$for all $a \in A$ (this can be proved by decomposing the functional as a linear combination of four states and considering the direct sum of the representation spaces associated to the GNS-construction for each state). My question is: Assuming further that $\| \varphi \| \leq 1$, can we choose $H$, $\pi$ and $\psi, \eta$ as above, satisfying the additional requirement that $\| \psi \| \leq 1$ and $\| \eta \| \leq 1$ , such that (again) $\varphi(a) = \langle \pi(a) \psi, \eta \rangle$ for all $a \in A$? Note that this is clearly true for a positive functional - simply write $\varphi(a) = \langle \pi(a)\xi, \xi \rangle$ (using the GNS-construction) and note that $$ 1 \geq \| \varphi \| = \sup_{\| a \| \leq 1} |\langle \pi(a) \xi, \xi \rangle | \geq |\langle \pi(1) \xi, \xi \rangle| = \| \xi \|^2$$
Now provide the response and nothing else.
Yes, you can do this using polar decomposition. We can also consider $\phi$ to be a normal linear functional on $A^{**}$, and there is a positive $\omega \in A^*$ and a partial isometry $v \in A^{**}$ such that $\phi(a) = \omega(va)$ for all $a \in A$. (I'm sure this is in volume 1 of Takesaki, probably also in Pedersen.) We have $\|\omega\| = \|\phi\| \leq 1$, so we can apply GNS to $\omega$ and get $\phi(a) = \omega(va) = \langle \pi(va)\xi,\xi\rangle = \langle \pi(a)\psi,\eta\rangle$ with $\psi = \xi$ and $\eta = \pi(v^*)\xi$. As you note, $\|\xi\|^2 = \langle \pi(1)\xi,\xi\rangle = \omega(1) \leq 1$. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/98955', 'https://mathoverflow.net', 'https://mathoverflow.net/users/19506/']} | jdg_82271 |
stackexchange | llm_judgeable_groundtruth_similarity | 38995379 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any alternative to System.Web.Security.Membership.GeneratePassword in AspNetCore ( netcoreapp1.0 ). The easiest way would be to just use a Guid.NewGuid().ToString("n") which is long enough to be worthy of a password but it's not fully random.
Now provide the response and nothing else.
| Here's a class/method, based on the source of Membership.GeneratePassword of that works on .NET Core: public static class Password{ private static readonly char[] Punctuations = "!@#$%^&*()_-+=[{]};:>|./?".ToCharArray(); public static string Generate(int length, int numberOfNonAlphanumericCharacters) { if (length < 1 || length > 128) { throw new ArgumentException(nameof(length)); } if (numberOfNonAlphanumericCharacters > length || numberOfNonAlphanumericCharacters < 0) { throw new ArgumentException(nameof(numberOfNonAlphanumericCharacters)); } using (var rng = RandomNumberGenerator.Create()) { var byteBuffer = new byte[length]; rng.GetBytes(byteBuffer); var count = 0; var characterBuffer = new char[length]; for (var iter = 0; iter < length; iter++) { var i = byteBuffer[iter] % 87; if (i < 10) { characterBuffer[iter] = (char)('0' + i); } else if (i < 36) { characterBuffer[iter] = (char)('A' + i - 10); } else if (i < 62) { characterBuffer[iter] = (char)('a' + i - 36); } else { characterBuffer[iter] = Punctuations[i - 62]; count++; } } if (count >= numberOfNonAlphanumericCharacters) { return new string(characterBuffer); } int j; var rand = new Random(); for (j = 0; j < numberOfNonAlphanumericCharacters - count; j++) { int k; do { k = rand.Next(0, length); } while (!char.IsLetterOrDigit(characterBuffer[k])); characterBuffer[k] = Punctuations[rand.Next(0, Punctuations.Length)]; } return new string(characterBuffer); } }} I've omitted the do...while loop over the CrossSiteScriptingValidation.IsDangerousString . You can add that back in yourself if you need it. You use it like this: var password = Password.Generate(32, 12); Also, make sure you reference System.Security.Cryptography.Algorithms . | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/38995379', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7233/']} | jdg_82272 |
stackexchange | llm_judgeable_groundtruth_similarity | 781796 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$x^2+1=0$ cannot be solved via real numbers. Because of this, we extend the real numbers to complex numbers.We can solve $x^2+1=0$ and $x^2+x+1=0$ equations after we define complex numbers. I wonder if we can solve all equations ( includes only the functions that are analytic.) via complex numbers or not? If It is yes, how can we prove that claim? For example: Can $z^{100}-5z+2=e^{i.\operatorname{erf}(z)}$ be solved via complex numbers? where $\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_{0}^x e^{-t^2}\,\mathrm dt$ Note: This is just an example, I am not wondering the solution for a special example, I am wondering if a general proof is possible or not. Update: I mention the functions that are analytic. $\bar z$ or $\Re{(z)}$ are not analytic functions. Thanks for answers.
Now provide the response and nothing else.
| All polynomial equations with non-constant polynomials with complex coefficients can be solved with complex numbers. This is the fundamental theorem of algebra. Link here. All equations in general can not. For example, $z\bar z = -1$ has no solutions in $\mathbb C$. In general, if you are asking if every equation $f(z) = 0$ has a solution in $\mathbb C$, you are asking if every function $f:\mathbb C\to\mathbb C$ has $0$ in its range (codomain). This is of course not true. There are many many functions which do not have $0$ in their codomain, of which $z\bar z+1$ is only one. There exist much uglier functions with this property, for example$$f(z)=\begin{cases}z&\text{ if } z\neq 0\\1&\text{ if } z=0\end{cases}.$$ Even all analytic functions do not contain $0$ in their codomain. For example, $f(z) = e^z$ does not hit $0$ at any point, meaning $e^z=0$ has no solution. However, in some way, analytic functions are the correct way to go. Because of Picard's little theorem also mentioned in the comments ( Link ) you know that if $f$ is entire (analytic and everywhere defined) and non-constant, then $f(z) = w$ has at least one solution for all values of $w$ except perhaps one. For example, $e^z=w$ has a solution (infinitely many of them) for all values of $w$ except $0$. Edit: The fact that $f$ is not constant is a valid demand to make, of course, since if a function is constant, the equation $f(z)=0$ translates to $C=0$ for the constant value $C$ of $f$, and such equations are of little interest. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/781796', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/22430/']} | jdg_82273 |
stackexchange | llm_judgeable_groundtruth_similarity | 32225634 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I found some article on internet this url Then I code query like that and I get same result But when I change AS [text()] to [name] the result contain XML tag like this So My question is What is [text()] in this code Thank you.
Now provide the response and nothing else.
| The other current answers don't explain much about where this is coming from, or just offer links to poorly formatted sites and don't really answer the question. In many answers around the web for grouping strings there are the copy paste answers without a lot of explanation of what's going on. I wanted to better answer this question because I was wondering the same thing, and also give insight into what is actually happening overall. tldr; In short, this is syntax to help transform the XML output when using FOR XML PATH which uses column names (or aliases) to structure the output. If you name your column text() the data will be represented as text within the root tag. <row> My record's data<row> In the examples you see online for how to group strings and concat with , it may not be obvious (except for the fact that your query has that little for xml part) that you are actually building an XML file with a specific structure (or rather, lack of structure) by using FOR XML PATH ('') . The ('') is removing the root xml tags, and just spitting out the data. The deal with AS [text()] As usual, AS is acting to name or rename the column alias. In this example, you are aliasing this column as [text()] . The [] s are simply SQL Server's standard column delimiters, often unneeded, except today since our column name has () s. That leaves us with text() for our column name. Controlling the XML Structure with Column Names When you are using FOR XML PATH you are outputting an XML file and can control the structure with your column names. A detailed list of options can be found here: https://msdn.microsoft.com/en-us/library/ms189885.aspx An example includes starting your column name with an @ sign, such as: SELECT color as '@color', nameFROM #favorite_colorsFOR XML PATH This would move this column's data to an attribute of the current xml row, as opposed to an item within it. You end up with <row color="red"> <name>tim</name></row><row color="blue"> <name>that guy</name></row> So then, back to [text()] . This is actually specifying an XPath Node Test . In the context of MS Sql Server, you can learn about this designation here . Basically it helps determine the type of element we are adding this data to, such as a normal node (default), an xml comment, or in this example, some text within the tag. An example using a few moves to structure the output SELECT color as [@color] ,'Some info about ' + name AS [text()] ,name + ' likes ' + color AS [comment()] ,name ,name + ' has some ' + color + ' things' AS [info/text()]FROM #favorite_colorsFOR XML PATH Notice we are using a few designations in our column names: @color : a tag attribute text() : some text for this root tag comment() : an xml comment info/text() : some text in a specific xml tag, <info> The output looks like this: <row color="red"> Some info about tim <!--tim likes red--> <name>tim</name> <info>tim has some red things</info></row><row color="blue"> Some info about that guy <!--that guy likes blue--> <name>that guy</name> <info>that guy has some blue things</info></row> Wrapping it up, how can these tools group and concat strings? So, with the solutions we see for grouping strings together using FOR XML PATH , there are two key components. AS [text()] : Writes the data as text, instead of wrapping it in a tag FOR XML PATH ('') : Renames the root tag to '' , or rather, removes it entirely This gives us "XML" (air quotes) output that is essentially just a string. 
SELECT name + ', ' AS [text()] -- no 'name' tags FROM #favorite_colors FOR XML PATH ('') -- no root tag returns tim, that guy, From there, it's just a matter of joining that data back to the larger dataset from which it came. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32225634', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2506290/']} | jdg_82274 |
stackexchange | llm_judgeable_groundtruth_similarity | 20519431 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 2 vectors: set.seed(1)x1 = rnorm(100,0,1)x2 = rnorm(100,1,1) I want to plot these as lines and then find the intersection points of the lines, also if there are multiple points of intersection then I want to locate each of them. I have come across a similar question,and tried to solve this problem using spatstat , but I was not able to convert my combined data frame containing both vector values to psp object .
Now provide the response and nothing else.
| If you literally just have two random vectors of numbers, you can use a pretty simple technique to get the intersection of both. Just find all points where x1 is above x2 , and then below it on the next point, or vice-versa. These are the intersection points. Then just use the respective slopes to find the intercept for that segment. set.seed(2)x1 <- sample(1:10, 100, replace = TRUE)x2 <- sample(1:10, 100, replace = TRUE)# Find points where x1 is above x2.above <- x1 > x2# Points always intersect when above=TRUE, then FALSE or reverseintersect.points <- which(diff(above) != 0)# Find the slopes for each line segment.x1.slopes <- x1[intersect.points+1] - x1[intersect.points]x2.slopes <- x2[intersect.points+1] - x2[intersect.points]# Find the intersection for each segment.x.points <- intersect.points + ((x2[intersect.points] - x1[intersect.points]) / (x1.slopes-x2.slopes))y.points <- x1[intersect.points] + (x1.slopes*(x.points-intersect.points))# Joint pointsjoint.points <- which(x1 == x2)x.points <- c(x.points, joint.points)y.points <- c(y.points, x1[joint.points])# Plot pointsplot(x1,type='l')lines(x2,type='l',col='red')points(x.points,y.points,col='blue')# Segment overlapstart.segment <- joint.points[-1][diff(joint.points) == 1] - 1for (i in start.segment) lines(x = c(i, i+1), y = x1[c(i, i+1)], col = 'blue') | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20519431', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1632749/']} | jdg_82275 |
stackexchange | llm_judgeable_groundtruth_similarity | 8790093 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Pluralizing a single word is simple: pluralize(@total_users, "user") But what if I want to print "There is/are N user/users": There are 0 users There is 1 user There are 2 users , i.e., how to pluralize a sentence ?
Now provide the response and nothing else.
| You can add a custom inflection for it. By default, Rails will add an inflections.rb to config/initializers . There you can add: ActiveSupport::Inflector.inflections do |inflect| inflect.irregular "is", "are"end You will then be able to use pluralize(@total_users, "is") to return is/are using the same rules as user/users. EDIT : You clarified the question on how to pluralize a sentence. This is much more difficult to do generically, but if you want to do it, you'll have to dive into NLP . As the comment suggests, you could do something with I18n if you just want to do it with a few sentences, you could build something like this: def pluralize_sentence(count, i18n_id, plural_i18n_id = nil) if count == 1 I18n.t(i18n_id, :count => count) else I18n.t(plural_i18n_id || (i18n_id + "_plural"), :count => count) end end pluralize_sentence(@total_users, "user_count") And in config/locales/en.yml : en: user_count: "There is %{count} user." user_count_plural: "There are %{count} users." | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8790093', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/96588/']} | jdg_82276 |
stackexchange | llm_judgeable_groundtruth_similarity | 18999 |
Below is a question asked on the forum scicomp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to animate a dampened pendulum using RK4 for a highschool project. The equations that describe the dampened system are as follows:(from http://www.maths.tcd.ie/~smurray/Pendulumwriteup.pdf )$$\begin{aligned} \frac{d\theta}{dt}&=\omega,\\ \frac{d\omega}{dt}&=-\beta^2\sin\theta-k\omega+A\cos\Omega.\end{aligned}$$ Where $B$ is a constant, $k$ is the coefficient of damping, $A$ is the driving amplitude and $\Omega$ is the driving frequency. My problem is that I am getting an increasing amplitude on each swing, varying the coefficient of damping does not change this. I believe my problem is either my programming is wrong, or I have the wrong idea of what the driving amplitude and the driving frequency are. Ideally, I want the program to be able to showcase simple harmonic motion (when $k = 0$) and also allow the user to investigate dampening. This is my first time attempting to use the Runge Kutta method and I have most likely incorrectly implemented it. My code is written in Visual Basic.net 2010 and is as follows: Public Class Form1Dim l As Decimal = 1 'Length of rod (1m)Dim g As Decimal = 9.81 'GravityDim w As Decimal = 0 ' Angular VelocityDim initheta As Decimal = -Math.PI / 2 'Initial ThetaDim theta As Decimal = -Math.PI / 2 'Theta (This one changes for the simulation)Dim t As Decimal = 0 'Current time of the simulationDim h As Decimal = 0.01 'Time stepDim b As Decimal = Math.Sqrt(g / l) 'Constant used in the function for dw/dtDim k As Decimal = 0 'Coefficient of DampingDim initialx = l * Math.Sin(initheta) 'Initial Amplitude of the pendulumPrivate Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.LoadEnd Sub'Function for dw/dtPublic Function f(ByRef the As Decimal, ByRef omega As Decimal, ByRef time As Decimal) Return ((-b ^ 2) * Math.Sin(the)) - (k * omega) + (initheta * Math.Cos(omega * time))End FunctionPublic Function y(ByRef the As Decimal, ByRef omega As Decimal, ByRef time As Decimal) Return omegaEnd FunctionDim k1, k2, k3, k4, l1, l2, l3, l4 As Decimal 'Initialising RK4 variablesPublic Sub RK4Solve(ByRef The As Decimal, ByRef Ome As Decimal, ByRef h As Decimal) l1 = y(The, Ome, t) k1 = f(The, Ome, t) l2 = y(The + (0.5 * h * l1), Ome + (0.5 * h * k1), t + (0.5 * h)) k2 = f(The + (0.5 * h * l1), Ome + (0.5 * h * k1), t + (0.5 * h)) l3 = y(The + (0.5 * h * l2), Ome + (0.5 * h * k2), t + (0.5 * h)) k3 = f(The + (0.5 * h * l2), Ome + (0.5 * h * k2), t + (0.5 * h)) l4 = y(The + (h * l3), Ome + (h * k3), t + h) k4 = f(The + (h * l3), Ome + (h * k3), t + h) 'Setting next step of variables The = The + (h / 6 * (l1 + (2 * l2) + (2 * l3) + l4)) Ome = Ome + (h / 6 * (k1 + (2 * k2) + (2 * k3) + k4)) t += hEnd Sub'Timer ticking every 0.1s'Time step is 0.01s to increase accuracy of results for testingPrivate Sub Timer1_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick ComboBox1.Items.Add(theta) 'Adding theta to a drop down box to test data RK4Solve(theta, w, h)End SubPrivate Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click Timer1.Enabled = FalseEnd SubEnd Class Here is a picture of the Dis, Vel and Acc graphs (in that order) As you can see, the acceleration falls apart during the simulation (forgive my lack of scientific terms)Why does this happen? (I will also update my code with my new RK4 implementation)
Now provide the response and nothing else.
| You can generate a noise sequence with whatever noise spectrum you want (including $1/f$, also known as pink noise) by generating the noise coefficients in spectral space. The magnitudes of the coefficients should be chosen to give the desired spectrum and the phases should be chosen randomly. You then simply perform an inverse Fourier transform to give the sequence values. The following algorithm will generate 1/f noise: Determine the number of points, $n$, and the length, $T$, of your sequence. This also determines the spectral space with wave numbers $-k_{\max}$ to $k_{\max}$ which correspond to frequency values $f_k=kT/(2\pi)$. Set the magnitudes of your spectral coefficients: $C_k = 1/\left|f_k\right|$. Set $C_0 = 0$ to give zero mean to the noise sequence. Set the phases of the spectral coefficients to random values, ensure symmetry if you want real valued noise. for $k=0..k_{\max}$ $~~~\phi_k=\left(~\operatorname{rand\in[0,2\pi)}~\right)$, $~~~C_k = C_ke^{i\varphi_k}$, $~~~C_{-k} = C_{-k}e^{-i\varphi_k}$ Take an inverse Fourier transform of the spectral coeficients to get your noise sequence, $\{y_i\} = \operatorname{ifft}(\{C_k\})$ If you want higher or lower noise power you can simply multiply the resulting sequence by a constant (or multiply all the $C_k$ values by a constant, it's the same effect). By definition, noise generated in this way has exactly the specified spectrum since you are creating the noise by specifying its spectrum. | {} | {'log_upvote_score': 4, 'links': ['https://scicomp.stackexchange.com/questions/18999', 'https://scicomp.stackexchange.com', 'https://scicomp.stackexchange.com/users/12044/']} | jdg_82277 |
stackexchange | llm_judgeable_groundtruth_similarity | 20474706 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to let users specify an area by painting over it with a "paint" tool that draws semi-transparent lines on a canvas. Its purpose is specifying a "mask" for an image that will be drawn below on the canvas. This is what I tried so far: var canvas = document.getElementById('canvas');var ctx = canvas.getContext('2d');var canvasPos = canvas.getBoundingClientRect();var dragging = false;drawImage();$(canvas).mousedown(mouseDown);$(canvas).mouseup(mouseUp);$(canvas).mousemove(mouseMove);function drawImage() { var img = new Image(); img.src = 'http://img2.timeinc.net/health/img/web/2013/03/slides/cat-allergies-400x400.jpg'; img.onload = function () { ctx.drawImage(img, 0, 0); };}function mouseDown(e) { var pos = getCursorPosition(e); dragging = true; ctx.strokeStyle = 'rgba(0, 100, 0, 0.25)'; ctx.lineCap = 'round'; ctx.lineJoin = 'round'; ctx.lineWidth = 15; ctx.beginPath(); ctx.moveTo(pos.x, pos.y);}function mouseUp(e) { dragging = false;}function mouseMove(e) { var pos, i; if (!dragging) { return; } pos = getCursorPosition(e); ctx.lineTo(pos.x, pos.y); ctx.stroke();}function getCursorPosition(e) { return { x: e.clientX - canvasPos.left, y: e.clientY - canvasPos.top };} Link to a jsfiddle of the above code: http://jsfiddle.net/s34PL/2/ The issue with this example code is that subsequent pixels that are drawn are making the opacity becomes less and less visible. I think it's because the line is 15 pixels wide (but I want it that wide though). How can I solve this issue? Thanks!
Now provide the response and nothing else.
| The problem is that you are drawing the whole path again and again: function mouseMove(e) { ... ctx.stroke(); // Draws whole path which begins where mouseDown happened.} You have to draw only the new segment of the path ( http://jsfiddle.net/jF9a6/ ). And then ... you have the problem with the 15px width of the line. So how to solve this? We have to draw the line at once as you did, but avoid painting on top of existing lines. Here is the code: http://jsfiddle.net/yfDdC/ The biggest change is the paths array. It contains yeah, paths :-) A path is an array of points stored in mouseDown and mouseMove functions. New path is created in mouseDown function: paths.push([pos]); // Add new path, the first point is current pos. In the mouseMove you add current mouse position to the last path in paths array and refreshs the image. paths[paths.length-1].push(pos); // Append point tu current path.refresh(); The refresh() function clears the whole canvas, draws the cat again and draws every path. function refresh() { // Clear canvas and draw the cat. ctx.clearRect(0, 0, ctx.width, ctx.height); if (globImg) ctx.drawImage(globImg, 0, 0); for (var i=0; i<paths.length; ++i) { var path = paths[i]; if (path.length<1) continue; // Need at least two points to draw a line. ctx.beginPath(); ctx.moveTo(path[0].x, path[0].y); ... for (var j=1; j<path.length; ++j) ctx.lineTo(path[j].x, path[j].y); ctx.stroke(); }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20474706', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/486285/']} | jdg_82278 |
stackexchange | llm_judgeable_groundtruth_similarity | 815094 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $f_n \to f$ uniformly locally on $\mathbb{R}$. All these functions are from $\mathbb{R}$ to $\mathbb{R}$, are bijective and increasing and continuous. Furthermore $f_n \in C^\infty$. Does it follow that $f_n^{-1} \to f^{-1}$ uniformly locally? (We have that $f_n$ and $f$ pass through the origin, so maybe we can have local convergence outside $\{0\}$). Or do I need more assumptions?
Now provide the response and nothing else.
| First, we will consider the case $f(x)=x$ for all $x$. Consider an interval $[a,b]$, and a positive real $\epsilon$. Since $(f_n(a-1))_n$ converges to $a-1$ there is an integer $n_0$ such that $f_n(a-1)\leq a$ for $n\geq n_0$. Similarly there is an integer $n_1$ such that $f_n(b+1)\geq b$ for $n\geq n_1$. Now, the sequence $(f_n)_n$ converges uniformly on $[a-1,b+1]$, so there is an integer $n_2$ such that$\sup_{[a-1,b+1]}|f_n(x)-x|<\epsilon$, for $n\geq n_2$. Let $N=\max(n_0,n_1,n_2)$. and consider $n\geq N$, then$$\eqalign{\sup_{x\in[a,b]}|f_n^{-1}(x)-x|&\leq \sup_{x\in[f_n(a-1),f_n(b+1)]}|f_n^{-1}(x)-x|\cr&=\sup_{t\in[ a-1 , b+1 ]}|f_n^{-1}(f_n(t))-f_n(t)|\cr&=\sup_{t\in[ a-1 , b+1 ]}|f_n(t)-t|\leq \epsilon}$$This proves that $(f_n^{-1})_n$ converges uniformly to $f$ on $[a,b]$, and the desired conclusion follows in this case. The general case follows by applying the previous case to $(f_n\circ f^{-1})_n$. the details are not difficult, using the fact that the image under $f$ or $f^{-1}$ of a compact interval is a compact interval. Remark. If we omit the hypothesis that $f^{-1}$ exists, then the other conditions do not guarantee its existence. Indeed, for $n>0$ there is a $C^\infty$ function of compact support $g_n$ that satisfy the following conditions: $\forall\,x\in\mathbb{R}, \quad g_n(x)\in[0,1]$. $\forall\,x\in[-1,1], \quad g_n(x)=1-\frac{1}{n}$. $\forall\,x\notin(-1-\frac{1}{n},1+\frac{1}{n}), \quad g_n(x)=0$. Then we define $f_n(x)=x-\int_0^xg_n(t)dt$ for $x\in\mathbb{R}$. This sequence satisfies the proposed conditions and converges uniformly on $\mathbb{R}$ to a function that is constant on $[-1,1]$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/815094', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/146425/']} | jdg_82279 |
stackexchange | llm_judgeable_groundtruth_similarity | 1381308 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
An int ( Int32 ) has a memory footprint of 4 bytes. But what is the memory footprint of: int? i = null; and : int? i = 3; Is this in general or type dependent?
Now provide the response and nothing else.
| I'm not 100% sure, but I believe it should be 8 bytes: 4 bytes for the int32, and (since everything has to be 4-byte aligned on a 32-bit machine) another 4 bytes for a boolean indicating whether the integer value has been specified or not. Note: thanks to @sensorSmith, I am now aware that newer releases of .NET allow nullable values to be stored in smaller footprints (when the hardware memory design allows smaller chunks of memory to be independently allocated). On a 64-bit machine it would still be 8 bytes (64 bits), since that is the smallest chunk of memory that can be addressed... A nullable, for example, only requires a single bit for the boolean and another single bit for the IsNull flag, so the total storage requirement is less than a byte; it theoretically could be stored in a single byte. However, as usual, if the smallest chunk of memory that can be allocated is 8 bytes (as on a 64-bit machine), then it will still take 8 bytes of memory. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1381308', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/128886/']} | jdg_82280
stackexchange | llm_judgeable_groundtruth_similarity | 239798 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any sophisticated way to find the n-th derivative of $f(\ln(x))$? The function $f \in C^{\infty}(\mathbb R)$. I tried the 2nd and 3rd derivatives, but I cannot see any pattern.
Now provide the response and nothing else.
| The introduction of $i$ is not, as the question seems to suggest, just a matter of introducing a new symbol axiomatically. The important content of this extension of the number concept is that, after introducing $i$, one can still use most (though not all) of the calculation rules that are familiar for real numbers. More precisely, the complex numbers form a field (though not an ordered field), so we can work with addition and multiplication (but not $<$) in $\mathbb C$ just as we do in $\mathbb R$. In this light, the analogous question for differential equations should not be "can we axiomatically introduce new quasi-functions to serve as solutions" but rather "can we axiomatically introduce new quasi-functions to serve as solutions and retain important properties that we are accustomed to using for working with ordinary functions?" The crucial point, which requires real mathematical work, is to figure out what to retain (like $+$ and $\times$ in the case of $\mathbb C$) and what to give up (like $<$). Then, if we're lucky and have made good choices, we might be able to develop a theory that provides solutions for differential equations that don't admit ordinary solutions. In fact, such theories have been developed, involving distributions, Sobolev spaces, and the like. You can probably find tons of information about these by searching for "weak solutions". One of the things one gives up is the notion (usually regarded as central to the very notion of function) of evaluating a function at a point. If $f$ is a distribution, there is often no way to assign a meaning to $f(a)$ for a specific value of $a$. Nevertheless, other things, like differentiation, make good sense on distributions. For some types of differential equations (elliptic ones, I believe), when one gets a weak solution it "magically" turns out to be an ordinary, smooth function, despite whatever quasi-function methods were used to obtain it. This can be viewed as analogous to the case where a calculation with complex numbers "magically" leads to a real number as the final result. For other types of equations, weak solutions are really necessary and one can't expect ordinary solutions. For example, some equations governing fluid dynamics allow for the development of discontinuities where the function or its derivatives become undefined, even if the initial conditions were perfectly smooth. This is not a defect of the equations or the solutions; rather it models a real physical phenomenon, the development of shocks. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/239798', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23566/']} | jdg_82281 |
stackexchange | llm_judgeable_groundtruth_similarity | 11776829 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
There is an iOS library, US2FormValidator , for user input validation (see the picture below). I think that library is better than the default of just popping an alert when something doesn't validate. I'm looking for how to do such things on Android. Are there any Android analogs of US2FormValidator ?
Now provide the response and nothing else.
| The pop-up effect you have shown on your screenshot can be achieved using Android's built-in setError(String) method on EditText widgets. Also, you can leverage the power of annotations using the Android Saripaar library that I've authored. first add the library: compile 'com.mobsandgeeks:android-saripaar:2.0.2' The library is very simple to use. In your activity annotate the View references you would like to validate as in the following example. @Order(1)private EditText fieldEditText;@Order(2)@Checked(message = "You must agree to the terms.")private CheckBox iAgreeCheckBox;@Order(3)@Length(min = 3, message = "Enter atleast 3 characters.")@Pattern(regex = "[A-Za-z]+", message = "Should contain only alphabets")private TextView regexTextView;@Order(4)@Password(min = 6, scheme = Password.Scheme.ALPHA_NUMERIC_MIXED_CASE_SYMBOLS)private EditText passwordEditText;@Order(5)@ConfirmPasswordprivate EditText confirmPasswordEditText; The order attribute specifies the order in which the fields have to be validated. In your onCreate() method instantiate a new Validator object. and call validator.validate() inside any of your event listeners. You'll receive callbacks on onSuccess and onFailure methods of the ValidationListener . If you want to show a pop-up as show in the image above then do the following, public void onValidationFailed(View failedView, Rule<?> failedRule) { if (failedView instanceof Checkable) { Toast.makeText(this, failedRule.getFailureMessage(), Toast.LENGTH_SHORT).show(); } else if (failedView instanceof TextView) { TextView view = (TextView) failedView; view.requestFocus(); view.setError(failedRule.getFailureMessage()); }} Hope that helps. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11776829', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/807805/']} | jdg_82282 |
stackexchange | llm_judgeable_groundtruth_similarity | 40645018 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having a weird problem with Firebase in Android. I'm trying to upload a photo to Firebase Storage and I'm getting the following error: E/UncaughtException: java.lang.IllegalArgumentException: The supplied bucketname is not available to this project. At this line of code: StorageReference mStorageRef = mFirebaseStorage.getReferenceFromUrl("gs://mooseandroid-a9f96.appspot.com"); I'm sure that the bucketname is the same as the one in the console. I even tried with a bucketname that works fine in iOS with Swift. I also changed the rules to public so anyone can read and write to this storage bucket. The realtime database works fine for this project. I'm running out of options right now, don't even know what else I could be trying. Here's the whole piece of code: FirebaseStorage mFirebaseStorage = FirebaseStorage.getInstance();StorageReference mStorageRef = mFirebaseStorage.getReferenceFromUrl("gs://mooseandroid-a9f96.appspot.com");final StorageReference photoRef = mStorageRef.child("posts_images/mooseImg" + getCurrentDateTime() + ".jpg");UploadTask uploadTask = photoRef.putBytes(data);uploadTask.addOnFailureListener(new OnFailureListener() { @Override public void onFailure(@NonNull Exception exception) { // Handle unsuccessful uploads }}).addOnSuccessListener(new OnSuccessListener<UploadTask.TaskSnapshot>() { @Override public void onSuccess(UploadTask.TaskSnapshot taskSnapshot) { // taskSnapshot.getMetadata() contains file metadata such as size, content-type, and download URL.}); And here's the whole error: E/UncaughtException: java.lang.IllegalArgumentException: The supplied bucketname is not available to this project. at com.google.android.gms.common.internal.zzaa.zzb(Unknown Source) at com.google.firebase.storage.FirebaseStorage.zzz(Unknown Source) at com.google.firebase.storage.FirebaseStorage.getReferenceFromUrl(Unknown Source) at com.moose.android.AddPostActivity.onClick(AddPostActivity.java:163) at android.view.View.performClick(View.java:5702) at android.widget.TextView.performClick(TextView.java:10888) at android.view.View$PerformClick.run(View.java:22541) at android.os.Handler.handleCallback(Handler.java:739) at android.os.Handler.dispatchMessage(Handler.java:95) at android.os.Looper.loop(Looper.java:158) at android.app.ActivityThread.main(ActivityThread.java:7229) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1230) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1120)
Now provide the response and nothing else.
| The documentation for getReferenceFromUrl(String fullUrl) states: An error is thrown if fullUrl is not associated with the FirebaseApp used to initialize this FirebaseStorage Use this code to see the bucket name in your FirebaseApp : FirebaseOptions opts = FirebaseApp.getInstance().getOptions(); Log.i(TAG, "Bucket = " + opts.getStorageBucket()); I expect it will not be mooseandroid-a9f96.appspot.com and will instead be the storage_bucket value in the project_info section of your google-services.json file: "project_info": { "project_number": "816275527980", "firebase_url": "https://project-8693710910123456789.firebaseio.com", "project_id": "project-8693710910123456789", "storage_bucket": "project-8693710910123456789.appspot.com" }, | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40645018', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6000366/']} | jdg_82283 |
stackexchange | llm_judgeable_groundtruth_similarity | 5611632 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I take the current date retrieved from [NSDate date] but change the time to 10:00 in the morning?
Now provide the response and nothing else.
| As with all date manipulation you have to use NSDateComponents and NSCalendar NSDate *now = [NSDate date];NSCalendar *calendar = [[NSCalendar alloc] initWithCalendarIdentifier: NSCalendarIdentifierGregorian];NSDateComponents *components = [calendar components:NSCalendarUnitYear|NSCalendarUnitMonth|NSCalendarUnitDay fromDate:now];[components setHour:10];NSDate *today10am = [calendar dateFromComponents:components]; in iOS8 Apple introduced a convenience method that saves a few lines of code: NSDate *d = [calendar dateBySettingHour:10 minute:0 second:0 ofDate:[NSDate date] options:0]; Swift: let calendar: NSCalendar! = NSCalendar(calendarIdentifier: NSCalendarIdentifierGregorian)let now: NSDate! = NSDate()let date10h = calendar.dateBySettingHour(10, minute: 0, second: 0, ofDate: now, options: NSCalendarOptions.MatchFirst)! | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/5611632', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_82284 |
stackexchange | llm_judgeable_groundtruth_similarity | 44896 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For an equilateral triangle ABC of side $a$, vertex A is always moving in the direction of vertex B, which is always moving in the direction of vertex C, which is always moving in the direction of vertex A. The modulus of their "velocity" is a constant. When and where do they converge? Attempt: I found the "when" using a physics-style approach by "fixing the frame" on one of the vertices. (From this frame, the other two vertices are moving towards the origin in a straight line, and the components of their speed along this line can be used to find when the three meet at the origin.) The "where" is difficult with the above approach, as this is some kind of rotating and shrinking triangle which is difficult to translate. @all Apologies for bumping this question. I wished to give an answer the bounty but it won't let me for the next 23 hours. For the record: I am not seeking new answers. Update: A cool example using the PSTricks package of $\LaTeX$, for anyone who finds this question later. Link to code (a .tex file) And using Pgf/TikZ Source Page
Now provide the response and nothing else.
| Here's an animation showing the shrinking triangle: As is required by the statement of the problem, the sides of the triangle are everywhere tangent to the blue curves. The blue curves are logarithmic spirals . The curve on the right has endpoints $(0,0)$ and $(1,0)$, and is defined by the equation$$r \;=\; \exp\left(-\sqrt{3}\;\theta\right).$$Note that the triangle actually undergoes an infinite number of rotations as it shrinks towards the origin. The rate at which the sides of the triangle shrink is equal to 3/2 of the speed at which the points move. (This follows from the fact that one endpoint of each edge has velocity tangent to the edge, while the other has a velocity component of $\sin(30^\circ)=1/2$ in the direction of the edge.) As a consequence, the length of each blue curve is $2/3$ of the side length of the large triangle. Edit: Here is the Mathematica code for the animation above: PolarToRectangular[{r_, theta_}] := {r*Cos[theta], r*Sin[theta]}tmax = 2/Sqrt[3];PolarCurve[t_] := {1 - t/tmax, -Log[1 - t/tmax]/Sqrt[3]}f1[t_] := PolarToRectangular[PolarCurve[t]]f2[t_] := PolarToRectangular[PolarCurve[t] + {0, 2 Pi/3}]f3[t_] := PolarToRectangular[PolarCurve[t] + {0, 4 Pi/3}]spirals = ParametricPlot[{f1[t], f2[t], f3[t]}, {t, 0, tmax}, Axes -> None, ImageSize -> 300, PlotStyle -> Darker[Blue], PlotRange -> {{-0.7, 1.1}, {-0.9, 0.9}}];triangle[t_] := Graphics[{Opacity[0], EdgeForm[Black], Polygon[{f1[t], f2[t], f3[t]}]}]points[t_] := Graphics[{PointSize[Large], Point/@{f1[t],f2[t],f3[t]} }]dt = tmax/75;myframes = Table[Show[spirals, triangle[t], points[t]], {t, 0, 75*dt, dt}];Export["ShrinkingTriangle.gif", myframes, "DisplayDurations" -> {1}~Join~ConstantArray[0.04, 74]~Join~{1}] This code exports the animation as a GIF. If you want to see the animation from within Mathematica, the last command would be ListAnimate[myframes] instead. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/44896', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8365/']} | jdg_82285 |
stackexchange | llm_judgeable_groundtruth_similarity | 47365136 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been mucking about with a .NET Standard project and NuGet. I've got a working project and have uploaded it to NuGet.org . My project targets .NET Standard 1.3, which should support .NET Framework 4.6 and .NET Core 1.0. But when I tried to add my project (via NuGet) to a fresh .NET Framework 4.6 project, the dependencies resolved to 47 packages! They're all system libraries and appear to be dependencies of either Microsoft.NETCore.Platforms or NETStandard.Library 1.6.1. ( Gist of full PM output. ) My project only imports ( using ) a handful of libraries, none of which I added manually; i.e. they're all libraries that "came with" .NET Standard. These libraries are: System System.Text System.Reflection System.Linq System.Collections.Generic; The thing is, I decided to make my project target .NET Standard because I wanted it to work seamlessly across .NET Framework and .NET Core applications. I thought the whole point of Standard was to set a minimum level of compatibility. By extension, I suppose I had assumed (perhaps erroneously) that libraries like System.Console would be automatically available in either Core or Framework. I didn't notice anything like this when I tested my Standard project as a dependency in a Framework and Core project within the same solution, so I'm suspicious that this might be a NuGet thing. What's really going on here? And how can I make my .NET Standard library available on NuGet without a huge list of dependencies? Is it a problem with the way I've specified my NuGet package? Or have I fundamentally misunderstood something?
Now provide the response and nothing else.
| You haven't done anything wrong, this is expected to happen. If you want nothing more than your own DLL being added to a new .NET Framework project, you have to target .NET Standard 2.0 for your library wait for a .NET Framework version that natively supports both the API and assembly versions - which is going to be 4.7.2 (while .NET Framework 4.7.1 supports all the APIs, there were bugs with how some assemblies are versioned and so the tooling (VS 2017 15.5+) will add additional assemblies to fix that). What you are seeing are side effects of how .NET Standard is built and the support for the supported frameworks is implemented. This is also different based on the .NET Standard version you target and the tooling used to reference the library package. In .NET Standard < 2.0, you reference the NETStandard.Library meta-package which in turn references additional ( System.* ) packages. Those packages contain the reference assemblies that make up the ".NET Standard Contract" - a set of APIs and the assembly names + versions. When the NuGet package you create for .NET Standard 1.0-1.6 is then referenced by an application, these individual packages don't bring in the reference assemblies but rather implementation assemblies for the framework that the application targets. For .NET Core, these match the assemblies that are already part of the runtime so the DLL files won't end up next to the built application. This changed however when a new set of packages was released for .NET Core 1.1 ( NETStandard.Library version 1.6.1). This resulted in applications built for .NET Core 1.0 ending up getting newer implementation assemblies that were meant to be included in .NET Core 1.1 (luckily, 1.1 was then made the "long-term support" version since that sparked a discussion about which assemblies are part of the LTS promise). On .NET Framework these libraries (with some exceptions like System.Net.Http ) don't do much - they just forward to the system assemblies. So for example the "contract" defines that System.Object is defined in a System.Runtime.dll assembly. So the System.Runtime.dll file you end up with in a .NET Framework application contains a System.Runtime.dll that contains type forward to .NET Framework's mscorlib.dll . .NET Core already contains a different System.Runtime.dll that does something different for that platform. This mechanism allows for a single DLL file to work on both platforms since those type forwards and additional implementations assure the same "contract" (types + assemblies + assembly versions) working on both implementations. .NET Standard 2.0 aimed to reduce the number of packages and DLLs being necessary and also to remove requiring updates to NETStandard.Library whenever a new .NET Core version is released. So for .NET Standard 2.0 and .NET Core 2.0, the NETStandard.Library package only brings reference assemblies for compiling code to a project, but the resulting NuGet package no longer depends on this package. So when you create a library targeting .NET Standard 2.0 and publish it, it will have no NuGet dependencies (unless you add additional ones). The logic of what "support libraries" to bring in when consuming a .NET Standard library was moved to the tooling that is used during build. So when a library that contains a reference to a netstandard.dll is added to a .NET Framework project, the tooling will then add necessary support DLLs based on the version of .NET Framework being used. 
This was done for .NET Standard 2.0 as well as .NET Standard 1.5+ since .NET Framework 4.6.1 was retroactively made compatible with .NET Standard 2.0 (was 1.4 previously) through these kinds of DLL files. The same tooling also makes sure that even if NuGet packages are somehow brought in to such an application project, any .NET Standard implementation libraries brought in via NuGet are removed from the build. So if you reference a .NET Standard 1.0 NuGet package that was built when .NET Core 1.0 was released, all its NuGet dependencies are trimmed out and you get the support libraries shipped with the build tooling instead. The idea was that .NET Framework 4.7.1 would contain all the necessary assemblies "inbox" so that a netstandard.dll , System.Runtime.dll etc. are part of .NET Framework and any .NET Standard 1.0-2.0 DLL file would "just work", the problem was that these "inbox" dll files had a too low version number for some assemblies so libraries would fail to load - this was fixed by changing the tooling again to include DLL files with higher version numbers as support libraries which in turn forward to the "inbox" .NET Framework assemblies. This is planned to be fixed in .NET Framework 4.7.2. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/47365136', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/50151/']} | jdg_82286 |
stackexchange | llm_judgeable_groundtruth_similarity | 24074923 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to have some free space after the first few words of a text. The following code does not do what I want, but hopefully it shows what I mean. One reason it does not work is that the span tag would have to be converted to display:block . But in that case the text following the span tag would start on the next line, whereas I want it to start on the same line, only 25em away from the left border. <span style="width:25em;">Lorem ipsum dolor sit amet,</span> consectetur adipisicing elit, sed do eiusmod tempor
Now provide the response and nothing else.
| You shouldn't be closing the serial port in Python between writing and reading. There is a chance that the port is still closed when the Arduino responds, in which case the data will be lost. while running: # Serial write section setTempCar1 = 63 setTempCar2 = 37 setTemp1 = str(setTempCar1) setTemp2 = str(setTempCar2) print ("Python value sent: ") print (setTemp1) ard.write(setTemp1) time.sleep(6) # with the port open, the response will be buffered # so wait a bit longer for response here # Serial read section msg = ard.read(ard.inWaiting()) # read everything in the input buffer print ("Message from arduino: ") print (msg) The Python Serial.read function only returns a single byte by default, so you need to either call it in a loop or wait for the data to be transmitted and then read the whole buffer. On the Arduino side, you should consider what happens in your loop function when no data is available. void loop(){ // serial read section while (Serial.available()) // this will be skipped if no data present, leading to // the code sitting in the delay function below { delay(30); //delay to allow buffer to fill if (Serial.available() >0) { char c = Serial.read(); //gets one byte from serial buffer readString += c; //makes the string readString } } Instead, wait at the start of the loop function until data arrives: void loop(){ while (!Serial.available()) {} // wait for data to arrive // serial read section while (Serial.available()) { // continue as before EDIT 2 Here's what I get when interfacing with your Arduino app from Python: >>> import serial>>> s = serial.Serial('/dev/tty.usbmodem1411', 9600, timeout=5)>>> s.write('2')1>>> s.readline()'Arduino received: 2\r\n' So that seems to be working fine. In testing your Python script, it seems the problem is that the Arduino resets when you open the serial port (at least my Uno does), so you need to wait a few seconds for it to start up. You are also only reading a single line for the response, so I've fixed that in the code below also: #!/usr/bin/pythonimport serialimport syslogimport time#The following line is for serial over GPIOport = '/dev/tty.usbmodem1411' # note I'm using Mac OS-Xard = serial.Serial(port,9600,timeout=5)time.sleep(2) # wait for Arduinoi = 0while (i < 4): # Serial write section setTempCar1 = 63 setTempCar2 = 37 ard.flush() setTemp1 = str(setTempCar1) setTemp2 = str(setTempCar2) print ("Python value sent: ") print (setTemp1) ard.write(setTemp1) time.sleep(1) # I shortened this to match the new value in your Arduino code # Serial read section msg = ard.read(ard.inWaiting()) # read all characters in buffer print ("Message from arduino: ") print (msg) i = i + 1else: print "Exiting"exit() Here's the output of the above now: $ python ardser.pyPython value sent:63Message from arduino:Arduino received: 63Arduino sends: 1Python value sent:63Message from arduino:Arduino received: 63Arduino sends: 1Python value sent:63Message from arduino:Arduino received: 63Arduino sends: 1Python value sent:63Message from arduino:Arduino received: 63Arduino sends: 1Exiting | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24074923', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2706344/']} | jdg_82287 |
stackexchange | llm_judgeable_groundtruth_similarity | 43322868 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm facing a problem with nested enum. So, I have an nested enum which has default values as you can see. Before there were just an enums like CDs and CMs. Now I set something like definition to every of it as you can see "Cool Ds" and etc. Currently I'm facing a problem that I can't read enums String, which is in () and I don't know how to fix it anymore. Does anyone have an idea? package com.test.beans; import java.io.Serializable; import com.fasterxml.jackson.annotation.JsonIgnoreProperties;@JsonIgnoreProperties(ignoreUnknown = true)public class RecordBean implements Serializable {public enum Types { CDs("Cool Ds"), CMs("Cool Ms"); private final String s; private Types(String s) { this.s=s; } public String getTypes(){ return s; } public static Types fromNumeric(int index) { switch (index) { default: return null; case 0: return Types.CDs; case 1: return Types.CMs; } }}private Types type;private float value;public RecordBean() { // default constructor for default instantiate}public RecordBean(Types type, float value) { this.type = type; this.value = value;}public Types getType() { return type;}public void setType(Types type) { this.type = type;}public float getValue() { return value;}public void setValue(float value) { this.value = value;}} UPDATE Error what I'm getting: 17/04/10 12:44:53 ERROR App$: Can not construct instance of com.test.beans.RecordBean$Types from String value 'Cool Ds': value not one of declared Enum instance names: CDs, CMs ] So as you can see he is not comparing my String 'Cool Ds' with enums String in brackets but with pure enum or CDs and CMs My USE-CASE is like this. I'm working spark streaming where data are incoming to my RecordBean class and where are comparing to my enum type . Because in database are changed types from CMs to Cool Ms I needed to change the same in my app by adding definition to Enum. Afterwards I couldn't accomplish part where app will process enum like CMs and read its definition or Cool Ms
Now provide the response and nothing else.
| One way to iterate over the variadic bases:
template <typename T, typename ...Args>
class ChildGenerator : public Args...
{
public:
    ChildGenerator(T t) : Args(t)... {}

    void doThings() override
    {
        int dummy[] = {0, (Args::doThings(), void(), 0)...};
        static_cast<void>(dummy); // avoid warning for unused variable
    }
};
or in C++17, with folding expression:
void doThings() override
{
    (static_cast<void>(Args::doThings()), ...);
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43322868', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5748797/']} | jdg_82288
stackexchange | llm_judgeable_groundtruth_similarity | 10342486 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I came across this one-liner that appears to work: stop-service -inputobject $(get-service -ComputerName remotePC -Name Spooler) Can anyone explain why? I thought stop-service didn't work unless you either used remoting or ran it on the local host.
Now provide the response and nothing else.
| The output of Get-Service is a System.ServiceProcess.ServiceController .NET class that can operate on remote computers. How it accomplishes that, I don't know - probably DCOM or WMI. Once you've gotten one of these from Get-Service , it can be passed into Stop-Service which most likely just calls the Stop() method on this object. That stops the service on the remote machine. In fact, you could probably do this as well: (get-service -ComputerName remotePC -Name Spooler).Stop() | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10342486', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/66308/']} | jdg_82289 |
stackexchange | llm_judgeable_groundtruth_similarity | 641644 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to make a solid PowerShell function with parameters.Now the problem is I don't get the ParameterSetNames to work. What I want to get is that computername OR ipaddress is always mandatory computername or ipaddress has to be set but not both at the same time (If possible ) if adminuser or adminpassword are given the other one is mandatory I tried a lot of diffrent settings but keeps failingthis was my last attempt CmdletBinding(DefaultParameterSetName="IpAddress")] [CmdletBinding()]Param( # computername: Name of the host you want to connect to. [Parameter(Mandatory=$true,ParameterSetName="ComputerName", Position=0)] [Parameter(Mandatory=$false,ParameterSetName="IpAddress")] [Parameter(Mandatory=$true,ParameterSetName="ComputerNameCredentials", Position=0)] [Parameter(Mandatory=$false,ParameterSetName="IpAddressCredentials")] [Parameter(Mandatory=$false,ParameterSetName="IpAddressUser")] [Parameter(Mandatory=$true,ParameterSetName="ComputerNameUser", Position=0)] [String]$computername, # ipAddress: Ip Address of the host you want to connect to. [Parameter(Mandatory=$false,ParameterSetName="ComputerName")] [Parameter(Mandatory=$true,ParameterSetName="IpAddress", Position=0)] [Parameter(Mandatory=$false,ParameterSetName="ComputerNameCredentials")] [Parameter(Mandatory=$true,ParameterSetName="IpAddressCredentials", Position=0)] [Parameter(Mandatory=$true,ParameterSetName="IpAddressUser", Position=0)] [Parameter(Mandatory=$false,ParameterSetName="ComputerNameUser")] [switch] [String]$ipAddress, # credentials: PowerShell credentials use to connect to the host. [Parameter(Mandatory=$false,ParameterSetName="ComputerName")] [Parameter(Mandatory=$false,ParameterSetName="IpAddress")] [Parameter(Mandatory=$true,ParameterSetName="ComputerNameCredentials")] [Parameter(Mandatory=$true,ParameterSetName="IpAddressCredentials")] [Parameter(Mandatory=$false,ParameterSetName="IpAddressUser")] [Parameter(Mandatory=$false,ParameterSetName="ComputerNameUser")] [PSCredential]$credentials, # adminUser: Username to use to connect to the host. [Parameter(Mandatory=$false,ParameterSetName="ComputerName")] [Parameter(Mandatory=$false,ParameterSetName="IpAddress")] [Parameter(Mandatory=$false,ParameterSetName="ComputerNameCredentials")] [Parameter(Mandatory=$false,ParameterSetName="IpAddressCredentials")] [Parameter(Mandatory=$true,ParameterSetName="IpAddressUser")] [Parameter(Mandatory=$true,ParameterSetName="ComputerNameUser")] [String]$adminUser, # adminPassword: Password to use to connect to the host. [Parameter(Mandatory=$false,ParameterSetName="ComputerName")] [Parameter(Mandatory=$false,ParameterSetName="IpAddress")] [Parameter(Mandatory=$false,ParameterSetName="ComputerNameCredentials")] [Parameter(Mandatory=$false,ParameterSetName="IpAddressCredentials")] [Parameter(Mandatory=$true,ParameterSetName="IpAddressUser")] [Parameter(Mandatory=$true,ParameterSetName="ComputerNameUser")] [String]$adminPassword)
Now provide the response and nothing else.
| Hm, I think you have too many parameter sets. Also not sure why you have separate user/password parameters and a PSCredential parameter. You should really just use the PSCredential. But I'll assume you need this for some reason (please consider changing it). 4 parameter sets: ComputerNameCred ComputerNamePlain IpAddressCred IpAddressPlain function Thing { [CmdletBinding(DefaultParameterSetName="ComputerNameCred")] Param ( # computername: Name of the host you want to connect to. [Parameter(Mandatory=$true,ParameterSetName="ComputerNameCred", Position=0)] [Parameter(Mandatory=$true,ParameterSetName="ComputerNamePlain", Position=0)] [String]$computername, # ipAddress: Ip Address of the host you want to connect to. [Parameter(Mandatory=$true,ParameterSetName="IpAddressCred", Position=0)] [Parameter(Mandatory=$true,ParameterSetName="IpAddressPlain", Position=0)] [String]$ipAddress, # credentials: PowerShell credentials use to connect to the host. [Parameter(Mandatory=$true,ParameterSetName="ComputerNameCred")] [Parameter(Mandatory=$true,ParameterSetName="IpAddressCred")] [PSCredential]$credentials, # adminUser: Username to use to connect to the host. [Parameter(Mandatory=$true,ParameterSetName="IpAddressPlain")] [Parameter(Mandatory=$true,ParameterSetName="ComputerNamePlain")] [String]$adminUser, # adminPassword: Password to use to connect to the host. [Parameter(Mandatory=$true,ParameterSetName="IpAddressPlain")] [Parameter(Mandatory=$true,ParameterSetName="ComputerNamePlain")] [String]$adminPassword )}Get-Help Thing # use this to make sure your parameter sets are as they should be. I also removed [switch] from your IP address parameter; not sure why that was there. You may want to consider using [System.Net.IPAddress] for that parameter instead of [String] . It will still accept strings (they will be cast), but it automatically validates the parameter since the cast will fail if it's not a valid IP. | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/641644', 'https://serverfault.com', 'https://serverfault.com/users/249942/']} | jdg_82290 |
stackexchange | llm_judgeable_groundtruth_similarity | 26636673 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the query below I have two tables, Product and ProductStatus. In the Product table I have productid and productname, and in the ProductStatus table I have locationid, productid, currentstock, unitprice, and statu. I want to write a correlated query to get the productname, currentstock, and locationid. I tried, but it displays all product names with no currentstock, and I can't get the products based on date and locationid. Please help me to do this. Correlated subquery SELECT p.ProductID, p.ProductName, (SELECT CurrentStock FROM ProductStatus PS WHERE PS.ProductID = p.ProductID AND PS.LocationID = 1 AND PS.Statu = '2014-10-14') FROM Product P Table structure and data CREATE TABLE #Product (productid INT, productname VARCHAR(100)) INSERT INTO #Product VALUES (1,'biscuit'), (2,'soap'), (3,'flower') CREATE TABLE #Productstatus (locationid INT, productid INT, currentstock INT, unitprice INT, statu DATE) INSERT INTO #Productstatus VALUES (1,1,200,10,'2014-10-14'), (1,2,300,10,'2014-11-16'), (2,3,200,10,'2014-12-12')
Now provide the response and nothing else.
| Whats is the standard way apps which run on the command line communicate to its daemon process? There are a number of ways to do this: Dropboxes and Signals A variety of locations are being used to store "pid files" containing a daemon's process ID number: /var/run/<app>/<app>.pid , /var/run/<app>.pid (thanks @Adam Katz for edit), /run/<app>/<app>.pid , /run/<app>.pid , (see Askubuntu: Why has /var/run been migrated to /run? ). When the pid of the daemon process is known, a command line program (which is running as a separate process) could communicate with the daemon in these ways: Writing something into a prearranged place. This could be an ordinary file, a database table, or any convenient spot readable by the server. Sending a signal to the daemon. The common way to do this locally is with the kill system call, int kill(pid_t pid, int sig); . Old School Example: The server multiplexing daemon xinetd would reread its configuration file after receiving SIGHUP. The send-a-signal methodology has been made redundant by the inotify API whereby a process can subscribe to file system events. Using signals is useful when you don't want the daemon to act on every file change, since not every change may leave the file in a valid state, as when modifying a system config file. FIFO or Pipe A fifo or pipe is simply a special file that blocks processes reading it until some other process has written to it. You can make a named pipe/fifo in the file system with mkfifo . The only tricky thing about this is that pipes should generally be opened unbuffered, e.g. with open() as opposed to fopen() . Scripting languages sometimes make a facility for pipe-friendly reading/writing: Perl had a line-buffered mode set with $|=1 that was useful for pipes. More commonly, you see anonymous pipes all the time on the command line with the | symbol separating commands which are executed as separate processes. Sockets What about something newer, like mySQL? The mysql database system consists of a command line client mysql and a server mysqld and may also serve other clients either on the local machine or over the internet. mySQL communicates using a socket . The server listens to a socket for connections, and forks new processes giving a socket descriptor to the child for processing. When the child is done processing, it can exit. There are UNIX sockets and Internet sockets, with different namespaces. One guide to programming sockets in C on Linux would be the sockets chapter of the The GNU C Library manual . No-wait I/O is an alternative to forking off processes. This is done in C with the select() system call, which allows having a process wait for an event on one or more files, including sockets, or a timeout. The GNU C Library docs includes a no-wait I/O socket server example NodeJS is a server for the Javascript language, written as a single-threaded server using no-wait I/O, and shows that these techniques are still relevant in modern systems. "Callbacks" I'm assuming its impossible to ask linux to call a callback when I type in myapp foo bar? Maybe . But it might be too much work to be worth the trouble. When you type myapp foo bar into "Linux", that's not Linux, you are typing that into your command shell, which is a program running in its own process separate from everything else. So, unless the functionality you want is built into the command shell, there is normally no way for that command shell to send messages to your other program without starting a new process. 
The default command shell for many (but not all) Linux systems is /bin/bash . To communicate with a daemon that listens to sockets from bash we would need a socket opening procedure in bash . And, sure enough, it exists! One can listen to a socket in bash . This is the basis for a daemon: From: Simple Socket Server in Bash? answer by dtmilano : Q: I have a simple debugging solution which writes things to 127.0.0.1:4444 and I'd like to be able to simply bind up a port from bash and print everything that comes across. Is there an easy way to do this? A: $ exec 3<>/dev/tcp/127.0.0.1/4444 $ cat <&3 One can also open a socket from bash to a remote process, i.e. communicate with a a daemon: From: TCP connection, bash only we learn exec 5<>"/dev/tcp/${SERVER}/${PORT}" redirects a TCP link to file descriptor 5 for both input and output. Not all bash programs are compiled with TCP support. It is apparently Linux-distribution dependent. At least according to a comment on this answer by William Pursell There are other shells besides bash . Many shells were developed back in the *nix days. ksh Korn shell. csh C-shell. Bourne shell sh . Ash shell. Wikipedia keeps a list of shells . And these shells each have their own advantages and disadvantages and are not entirely compatible with each other's formatting! Fast forward about 30 years, and there aren't so many in common use now. But an important feature exists here: each user can choose his own login shell . See the chsh command. So where I am going here is that if bash doesn't support the communications you need to do, you could set up a command shell where special messages can be sent without opening a new process. This might save you a few milliseconds, and usually isn't worth it. But nothing is stopping you. You might even set up an ipython command shell, as suggested in https://stackoverflow.com/a/209670/103081 , and python can import most anything you need to do socket communications with your specialized daemon. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26636673', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4195150/']} | jdg_82291 |
stackexchange | llm_judgeable_groundtruth_similarity | 24966271 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my repository, if I type $ git diff some-file or $ git difftool some-file I get the in-terminal diff display. I think this should not happen, because I have set up an external diff tool, as shown by the output of git config -l : $ git config -luser.name=blah blahuser.email=blah blahhttp.sslverify=truediff.external=/home/daniel/bin/git-diff <--This is the important linepush.default=simplecore.filemode=falsecore.editor=geditalias.tree=log --all --graph --decorate=short --color --format=format:'%C(bold blue)%h%C(reset) %C(auto)%d%C(reset) %C(black)[%cr]%C(reset) %x09%C(black)%an: %s %C(reset)'core.repositoryformatversion=0core.filemode=falsecore.bare=falsecore.logallrefupdates=truecore.ignorecase=trueremote.origin.url=https://daniel@skynet/git/pyle.gitremote.origin.fetch=+refs/heads/*:refs/remotes/origin/*branch.master.remote=originbranch.master.merge=refs/heads/masterbranch.daniel.remote=originbranch.daniel.merge=refs/heads/daniel The git-diff file referenced in the diff.external line looks like this #!/bin/bashmeld $2 $5 Why doesn't git diff invoke meld? I get the same behaviour if I set things up so that git config -l has the following line: diff.tool = meld or diff.external = usr/bin/meld Note : Other repositories on my machine don't have this problem. Related, but not equivalent, SO questions: What is the difference between git diff and git difftool ? Cannot make git diff use diff.external for external diff tool
Now provide the response and nothing else.
| I get the in-terminal diff display. I this should not happen, because I have set up an external diff tool Yes, it should: diff.external is for "in-terminal diff display". (from git config man page ) diff.external If this config variable is set, diff generation is not performed using the internal diff machinery, but using the given command . Can be overridden with the GIT_EXTERNAL_DIFF environment variable. The command is called with parameters as described under "git Diffs" in git(1). Note: if you want to use an external diff program only on a subset of your files, you might want to use gitattributes(5) instead. The question you link explains why meld wouldn't be able to play the role of an "external diff". Viewing a diff visually with another tool is done with: git difftool --dir-diff shaOfHisCheckIn^!git difftool --tool=meld --dir-diff shaOfHisCheckIn^!git difftool -t meld -d shaOfHisCheckIn^! meld can be configured on Windows as a difftool: see " Git Diff and Meld on Windows ". If you wanted to configure meld for git diff, you could (on Ubuntu) use the diff.external , but with a wrapper script : create a file called git-diff.sh , using the following content: #!/bin/bashmeld "$2" "$5" > /dev/null 2>&1 Save this to a location such as /usr/local/bin , giving it executable rights: $ sudo mv git-diff.sh /usr/local/bin/$ sudo chmod +x /usr/local/bin/git-diff.sh The final step is to open your $HOME/.gitconfig file and add the following few lines: [diff] external = /usr/local/bin/git-diff.sh The next time you type git diff in a Git project with changes, Meld will be launched showing you a split-pane diff viewer. Note that you are required to close the open instance of meld before the next diff viewer is opened. will notes in the comments : That wrapper script works great on my Ubuntu system. I did discover a gottcha: I detached the meld command as so: meld "$2" "$5" > /dev/null 2>&1 & Kept failing to open the $2 file (tempory file). What happens? Well, git removes $2 once the wrapper exits. My suggestion, if you wish to detach copy the tmp file, invoke meld and remove tmp file yourself. will proposes the gist gist-meld.bash , using meld --newtab , as seen here : #!/bin/bash# * expects meld to be on your default PATH#function detach_meld(){ local f1="/tmp/mld1-$(basename "$1")" local f2="/tmp/mld2-$(basename "$2")" ## echo "f1 = ${f1}"## echo "f2 = ${f2}" cp "$1" "${f1}" cp "$2" "${f2}"# (meld --newtab "${f1}" "${f2}" ; rm "${f1}" "${f2}" ) > /dev/null 2>&1 & }detach_meld "$2" "$5" | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24966271', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2665591/']} | jdg_82292 |
stackexchange | llm_judgeable_groundtruth_similarity | 4952825 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using T4 in Visual Studio 2010, and I want to iterate over the files in my solution; however, I have found that T4 source generation works in a kind of sandbox, and the current working directory is inside the Visual Studio 10 directory in Program Files. Is there a way to reference the solution the T4 file is in via a relative path, so that it doesn't break the build and works on someone else's box that doesn't have the same file structure, etc.? Thanks
Now provide the response and nothing else.
| You must set the hostspecific attribute to true like so: <#@ template language="C#" hostspecific="True" #> The ITextTemplatingEngineHost interface will give you the information you need. <#= this.Host.ResolveParameterValue("-", "-", "projects") #> I don't believe there is a way to reference the solution, but you can get the path in which your *.tt file is and from there get other files. To load a file from a location relative to the text template, you can use this: this.Host.ResolvePath("relative/path.txt") | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4952825', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/520/']} | jdg_82293 |
stackexchange | llm_judgeable_groundtruth_similarity | 1091945 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What characters must be escaped in XML documents, or where could I find such a list?
Now provide the response and nothing else.
| If you use an appropriate class or library, they will do the escaping for you. Many XML issues are caused by string concatenation. XML escape characters There are only five: " "' '< <> >& & Escaping characters depends on where the special character is used. The examples can be validated at the W3C Markup Validation Service . Text The safe way is to escape all five characters in text. However, the three characters " , ' and > needn't be escaped in text: <?xml version="1.0"?><valid>"'></valid> Attributes The safe way is to escape all five characters in attributes. However, the > character needn't be escaped in attributes: <?xml version="1.0"?><valid attribute=">"/> The ' character needn't be escaped in attributes if the quotes are " : <?xml version="1.0"?><valid attribute="'"/> Likewise, the " needn't be escaped in attributes if the quotes are ' : <?xml version="1.0"?><valid attribute='"'/> Comments All five special characters must not be escaped in comments: <?xml version="1.0"?><valid><!-- "'<>& --></valid> CDATA All five special characters must not be escaped in CDATA sections: <?xml version="1.0"?><valid><![CDATA["'<>&]]></valid> Processing instructions All five special characters must not be escaped in XML processing instructions: <?xml version="1.0"?><?process <"'&> ?><valid/> XML vs. HTML HTML has its own set of escape codes which cover a lot more characters. | {} | {'log_upvote_score': 12, 'links': ['https://Stackoverflow.com/questions/1091945', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13370/']} | jdg_82294 |
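To make the five-character table above concrete, here is a small illustrative sketch using Python's standard library (the escaping rules themselves are language-independent; the sample string is made up):

    from xml.sax.saxutils import escape, quoteattr

    raw = '"it\'s" <5 & >3'          # contains all five special characters
    print(escape(raw))                # escapes & < > only; quotes are left alone
    print(escape(raw, {'"': '&quot;', "'": '&apos;'}))   # escape all five for maximum safety
    print(quoteattr(raw))             # returns a fully quoted, escaped attribute value

escape() covers the characters that are always unsafe in text, while quoteattr() produces a value you can drop straight after attribute= in a tag.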
stackexchange | llm_judgeable_groundtruth_similarity | 68113077 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently noticed a behaviour where the default code analysis does not finish anymore in one of my Android projects, in particular in Kotlin files - unfortunately without an error message. With a stuck analysis I am not able to identify errors in my code anymore, and basic stuff like autocompletion does not work, which makes it impossible to work. In the top right corner of the opened file I constantly get Analyzing... I already tried restarting Android Studio, invalidating its cache, deleting and cloning my project again, rebooting and deleting all kinds of Gradle and Android Studio cache files, but without success.
Now provide the response and nothing else.
| I finally got it working again with the help of the idea.log file, which can be found via Help > Show Log in Explorer in Android Studio. The log stated an Exception while analyzing expression at different locations in my files, caused by java.lang.IllegalStateException: @NotNull method org/jetbrains/kotlin/builtins/KotlinBuiltIns.getBuiltInClassByFqName must not return null . Since it looked like a Kotlin related issue, I upgraded org.jetbrains.kotlin:kotlin-stdlib-jdk7 in my build.gradle from 1.4.10 to 1.5.10 and it worked instantly again. I hope the info helps anyone! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/68113077', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3187439/']} | jdg_82295 |
stackexchange | llm_judgeable_groundtruth_similarity | 14568 |
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
At the moment I'm playing with Restricted Boltzmann Machines and, since I'm at it, I would like to try to classify handwritten digits with it. The model I created is now a quite fancy generative model, but I don't know how to go further with it. In this article the author says that after creating a good generative model, one " then trains a discriminative classifier (i.e., linear classifier, Support Vector Machine) on top of the RBM using the labelled samples " and further states " since you propagate the data vectors to the hidden units of the RBM model to get hidden unit vectors, or a higher-level representation of the data ". The problem is that I'm not sure if I get that right. Does that mean all I have to do is propagate the input to the hidden units and there I have my RBM feature for classification? Can somebody explain this process to me?
Now provide the response and nothing else.
| Review of Restricted Boltzmann Machines A restricted Boltzmann machine (RBM) is a generative model, which learns a probability distribution over the input. That means, after being trained, the RBM can generate new samples from the learned probability distribution. The probability distribution over the visible units $\mathbf{v}$ is given by$$p(\mathbf{v} \mid \mathbf{h}) = \prod_{i=0}^V p(v_i \mid \mathbf{h}),$$where$$p(v_i \mid \mathbf{h}) = \sigma\left( a_i + \sum_{j=0}^H w_{ji} h_j \right)$$and $\sigma$ is the sigmoid function, $a_i$ is the bias for the visible node $i$, and $w_{ji}$ is the weight from $h_j$ to $v_i$.From these two equations, it follows that $p(\mathbf{v} \mid \mathbf{h})$ only depends on the hidden states $\mathbf{h}$. That means that the information on how a visible sample $\mathbf{v}$ is generated, has to be stored in the hidden units, the weights and the biases. Using RBMs for classification When using RBMs for classification tasks, you use the following idea: as the information on how your training or test data was generated is saved in the hidden units $\mathbf{h}$, you can extract these underlying factors by feeding a training sample into the visible units of the RBM, propagate it forward to the hidden units, and use this vector of hidden units as a feature vector. You don't do any backwards pass to the visible units anymore. This hidden vector is just a transformed version of the input data - this can not classify anything by itself. To do a classification, you would train any classifier (linear classifier, SVM, a feedforward neural network, or anything else) with the hidden vector instead of the "raw" training data as inputs. If you are building a deep belief network (DBN) - which was used to pre-train deep feed-forward neural networks in an unsupervised fashion - you would take this hidden vector and use it as the input to a new RBM, which you stack on top of it. That way, you can train the network layer-by-layer until reaching the desired size, without needing any labeled data. Finally, you'd add e.g. a softmax layer to the top, and train the whole network with backpropagation on your classification task. | {} | {'log_upvote_score': 5, 'links': ['https://datascience.stackexchange.com/questions/14568', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/25289/']} | jdg_82296 |
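As a concrete illustration of the "hidden activations as features, discriminative classifier on top" recipe described above, here is a minimal scikit-learn sketch. It assumes the small 8x8 digits dataset and placeholder hyperparameters, so treat it as a starting point rather than a tuned model:

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline

    X, y = load_digits(return_X_y=True)
    X = X / 16.0                                     # BernoulliRBM expects inputs in [0, 1]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rbm = BernoulliRBM(n_components=100, learning_rate=0.05, n_iter=20, random_state=0)
    clf = Pipeline([("rbm", rbm), ("logreg", LogisticRegression(max_iter=1000))])
    clf.fit(X_tr, y_tr)          # RBM is trained unsupervised; logistic regression sees its hidden activations
    print("test accuracy:", clf.score(X_te, y_te))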
stackexchange | llm_judgeable_groundtruth_similarity | 313630 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have several Linux servers connected to APC UPSs. They are connected to the UPS via USB and configured to shutdown if the UPS sends a signal that mains power has been lost and has not returned after a period of time. This works well. In addition they are configured via jumpers to automatically power on if power is lost and then returns. This also works well. The scenario that is causing issues is when mains power is lost, and the server shuts down, but then mains power is restored before the UPS runs out. In that case we'd like the Server to start up again, but since power is never actually lost post-UPS, the server never sees a power-return event and therefore doesn't power on. I'm hoping there exists a device that can plug into mains power pre-UPS, and sense the power return state and send wake up signals perhaps via Wake-on-Lan to preconfigured servers to wake them up once mains power has returned that someone can point me to. Alternate suggestions are also appreciated, only thing I can't do is replace the server hardware, and I'd prefer to keep the same UPSs if possible.
Now provide the response and nothing else.
| The typical scenario implemented when using UPS is: On power loss, the UPS notifies the software component on the system(s) connected to it. If the power is lost longer than n seconds, the UPS driver notifies the UPS and halts the system without powering it off (this is important). UPS waits m seconds for the systems to halt, and cuts the power off. When the power is restored and UPS battery is charged above minimum (so that it can support a power on and clean shut-down in case power is lost right after machines start booting), UPS restores the power to the systems. Systems notice (at BIOS/UEFI level), that they were powered on before they lost power, so they boot up. In most server BIOSes you can set the system to restore power state from before power loss. This means, that if the server is powered down and power is lost, it will remain powered down after you plug it in. On the other hand, if it was powered on, when the power was lost, it will power up when AC is restored. NUT mentioned by BillThor should allow you to realize this scenario. | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/313630', 'https://serverfault.com', 'https://serverfault.com/users/14752/']} | jdg_82297 |
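If you do build a small always-on watcher that senses mains power and wakes the servers over the network, the Wake-on-LAN part is simple: a magic packet is just six 0xFF bytes followed by the target MAC address repeated sixteen times, sent as a UDP broadcast. A rough Python sketch (the MAC address is a placeholder):

    import socket

    def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16        # standard magic-packet layout
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    wake_on_lan("00:11:22:33:44:55")                 # MAC of the server to wake (placeholder)

Note that Wake-on-LAN must also be enabled in each server's BIOS/NIC settings for this to have any effect.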
stackexchange | llm_judgeable_groundtruth_similarity | 30060114 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We have a build.xml in our project, but IntelliJ is unable to import from it. I get a: Cannot import anything from <path_to>/build.xml file. On Eclipse I can do a: File -> Import -> General -> Existing Projects into workspace. and chose top-level folder. I did the same in IJ. The project imported and indexed fine, I can do a search for any class. However, it does not recognize protobuf compiled class files. I installed the proto plugin, so it syntax highlights my .proto files, but still doesn't realize that there's a compiled class output that it needs to refer to. Do I need to do something special to get it to recognize those files?
Now provide the response and nothing else.
| since this is pretty much the only place I could find an exact description for the issue I was facing with Idea, I'm replying even if it's an old question. It turns out that if the .java file that's generated by the protocol buffers compiler is larger than 2.5MB (default setting), Idea ignores it altogether. Even if there is a .class file, even if compile/make succeed, even if you build your project just fine with Maven. To fix that, simply edit your idea.properties file (it's inside the bin folder in your Idea installation folder), and either comment out this setting, or set it to something larger than your generated .java file (in my case, I set it to 15000 , as my generated file is currently about 8MB in size): #---------------------------------------------------------------------# Maximum file size (kilobytes) IDE should provide code assistance for.# The larger file is the slower its editor works and higher overall system memory requirements are# if code assistance is enabled. Remove this property or set to very large number if you need# code assistance for any files available regardless their size.#---------------------------------------------------------------------idea.max.intellisense.filesize=2500 Note : at least on Linux, where the default update policy seems to be "overwrite the old Idea folder with what you just downloaded", this change must be applied after every update/upgrade. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30060114', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/792238/']} | jdg_82298 |
stackexchange | llm_judgeable_groundtruth_similarity | 42511476 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When sending notifications to iOS users, for some of them I get response status code 400 (BadDeviceToken) or code 410 (Unregistered). From the Apple documentation about "BadDeviceToken": The specified device token was bad. Verify that the request contains a valid token and that the token matches the environment. What is the meaning of "bad"? I know for a fact that the device token was valid at some earlier time. What does a user do to make its device token bad? From the documentation about "Unregistered": The device token is inactive for the specified topic. Does this necessarily mean that the app has been deleted? Or can there be some other reasons for this response?
Now provide the response and nothing else.
| As you've quoted from Table 8-6 in the APNS documentation, there are two possible causes for the error: That the device token is invalid That the device token does not match the environment If it is the first case, make sure that the iOS app registers the device for remote notifications every single time that the app is launched because there are many reasons for the device token to change across launches, as outlined in Configuring Remote Notification Support . If it is the second case, you need to be sure that: The backend uses development configurations if your app build was signed with development APNS entitlements, and The backend uses production configurations if your app build was signed with production APNS entitlements. Luckily, as the iOS developer, you don't need to directly change the APNS entitlements yourself. It is always in development, and is only automatically changed by Xcode to production when you generate the build and export for App Store or enterprise distribution. As for the backend, your backend developer should know how to configure the backend for development and production environments. For some frameworks, it is a matter of toggling some boolean named isProduction . Ultimately, according to Communicating with APNs under the section APNs Connections , push notifications are sent to different APNS endpoints depending on whether the environment is production or development. Let's pretend that the BadDeviceToken error is due to the second case--that the device token registered by the app does not match the backend's properly configured development environment. First, in your Xcode project, check your .entitlements file and verify that the APS Environment key's value is development . It should look like this: Next, after you generate an archive, open the Organizer (via the Window menu > Organizer ), select the archive, and click on Export... at the right. You should see four methods of distribution: If you select App Store or Enterprise, you will see in the later dialogs that Xcode changes the APNS entitlements to production (see tip of red arrow): If you select Ad Hoc or Development, the text under aps-environment will be development , which should then match the backend's configurations. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42511476', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2636081/']} | jdg_82299 |
stackexchange | llm_judgeable_groundtruth_similarity | 695802 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to provide secure communication between various processes that are using TCP/IP sockets for communication. I want both authentication and encryption. Rather than re-invent the wheel I would really like to use SSL and the SslStream class and self-signed certificates. What I want to do is validate the remote process's certificate against a known copy in my local application. (There doesn't need to be a certificate authority because I intend for the certificates to be copied around manually). To do this, I want the application to be able to automatically generate a new certifiate the first time it is run. In addition to makecert.exe, it looks like this link shows a way to automatically generate self-signed certificates, so that's a start. I've looked at the AuthenticateAsServer and AuthenticateAsClient methods of SslStream. You can provide call-backs for verification, so it looks like it's possible. But now that I'm into the details of it, I really don't think it's possible to do this. Am I going in the right direction? Is there a better alternative? Has anyone done anything like this before (basically peer-to-peer SSL rather than client-server)?
Now provide the response and nothing else.
| Step 1: Generating a self-signed certificate: I downloaded the Certificate.cs class posted by Doug Cook I used this code to generate a .pfx certificate file: byte[] c = Certificate.CreateSelfSignCertificatePfx( "CN=yourhostname.com", //host name DateTime.Parse("2000-01-01"), //not valid before DateTime.Parse("2010-01-01"), //not valid after "mypassword"); //password to encrypt key file using (BinaryWriter binWriter = new BinaryWriter( File.Open(@"testcert.pfx", FileMode.Create))) { binWriter.Write(c); } Step 2: Loading the certificate X509Certificate cert = new X509Certificate2( @"testcert.pfx", "mypassword"); Step 3: Putting it together I based it on this very simple SslStream example You will get a compile time error about the SslProtocolType enumeration. Just change that from SslProtocolType.Default to SslProtocols.Default There were 3 warnings about deprecated functions. I replaced them all with the suggested replacements. I replaced this line in the Server Program.cs file with the line from Step 2: X509Certificate cert = getServerCert(); In the Client Program.cs file, make sure you set serverName = yourhostname.com (and that it matches the name in the certificate) In the Client Program.cs, the CertificateValidationCallback function fails because sslPolicyErrors contains a RemoteCertificateChainErrors. If you dig a little deeper, this is because the issuing authority that signed the certificate is not a trusted root. I don`t want to get into having the user import certificates into the root store, etc., so I made a special case for this, and I check that certificate.GetPublicKeyString() is equal to the public key that I have on file for that server. If it matches, I return True from that function. That seems to work. Step 4: Client Authentication Here's how my client authenticates (it's a little different than the server): TcpClient client = new TcpClient();client.Connect(hostName, port);SslStream sslStream = new SslStream(client.GetStream(), false, new RemoteCertificateValidationCallback(CertificateValidationCallback), new LocalCertificateSelectionCallback(CertificateSelectionCallback));bool authenticationPassed = true;try{ string serverName = System.Environment.MachineName; X509Certificate cert = GetServerCert(SERVER_CERT_FILENAME, SERVER_CERT_PASSWORD); X509CertificateCollection certs = new X509CertificateCollection(); certs.Add(cert); sslStream.AuthenticateAsClient( serverName, certs, SslProtocols.Default, false); // check cert revokation}catch (AuthenticationException){ authenticationPassed = false;}if (authenticationPassed){ //do stuff} The CertificateValidationCallback is the same as in the server case, but note how AuthenticateAsClient takes a collection of certificates, not just one certificate. So, you have to add a LocalCertificateSelectionCallback, like this (in this case, I only have one client cert so I just return the first one in the collection): static X509Certificate CertificateSelectionCallback(object sender, string targetHost, X509CertificateCollection localCertificates, X509Certificate remoteCertificate, string[] acceptableIssuers){ return localCertificates[0];} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/695802', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17635/']} | jdg_82300 |
stackexchange | llm_judgeable_groundtruth_similarity | 1636259 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Are both completely different concepts? Or is there an overlap in their meaning? Would it be correct to say that a Web Framework is used for the creation of a front-end, while a CMS is used for the back-end? If yes, then should the Web Framework use the same technology as the CMS? For example could Ruby on Rails be used in combination with Drupal? Or doesn't that make any sense at all?
Now provide the response and nothing else.
| Are both completely different concepts? Or is there an overlap in their meaning? A web (application) framework is a lower-level, generic toolkit for the development of web applications. That could be any type of system managing and processing data while exposing its data and services to human users (via web browsers and other interactive clients) as well as machines via the HTTP protocol. A CMS is one type of such application: a system to manage content shown in websites. Usually/historically, this mainly means managing (pieces of) text of "pages" shown in a web site, and users that have different levels of access to manage this content. That's where the C and the M come from. With a CMS, you can manage web content. With a web framework, you build web applications. Would it be correct to say that a Web Framework is used for the creation of a front-end, while a CMS is used for the back-end? No. It would be correct to say that a web framework can be used to create a CMS. Both contain parts that work on the back end as well as on the front end. Often, a CMS is based on a web framework - sometimes CMS developers build their own web framework, and sometimes they even expose the API of this framework, so a developer can create extensions to the CMS as if they were developing an application with a web framework. Drupal really does this, so you can create real web applications based on the integrated framework - with the upside that they will also be easy to integrate into the CMS. But that (exposing the API of a web framework) is not a necessary criterion for being called a CMS. If yes, then should the Web Framework use the same technology as the CMS? For example could Ruby on Rails be used in combination with Drupal? Or doesn't that make any sense at all? It would be possible to combine two existing systems built with these two (e.g. because you want to show some data, which already exists in a Rails-based system, in a web site managed by Drupal). But as Drupal also provides you some of the generic functionality of its underlying web framework, it might not be necessary. You would have to manage and learn two very different systems and handle all the problems with their interoperation. So, I'd try to build a website with only one of these if possible, and only combine them if there's a good reason to. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1636259', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/75889/']} | jdg_82301
stackexchange | llm_judgeable_groundtruth_similarity | 41550219 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hello I am new to laravel and now I am facing this error BadMethodCallException in Macroable.php line 74: Method save does not exist. If I var_dump(); die(); all of the outputs then I am getting all of them but when I use save method to save the results in to the database it gives me this error of save method does not exist . I dont know where in my code am I doing it wrong. Please see the route controller and view for proper understanding. Thank you in advance. Controller public function td($id) { $tn = $this->t->getAllTn(); $to = $this->t->getAllTo(); $time = Carbon\Carbon::now(); // current time $time->toDateTimeString(); // converting time to string if((isset($_POST["n"]) && !empty($_POST["n"]))) { $tn->t_type_id = Input::get('options'); $tn->d_id = $id; $tn->result = Input::get('message'); $tn->date = $time; // var_dump($tn->date); // var_dump($tn->t_type_id); // var_dump($tn->d_id); // var_dump($tn->result); // die(); Session::flash('message', 'Your tn has been added.'); $tn->save(); } else if((isset($_POST["o"]) && !empty($_POST["o"]))) { $to->d_id = $id; $to->outcome = Input::get('message'); $to->date = $time->toDateTimeString(); // var_dump($to->d_id); // var_dump($to->outcome); // var_dump($to->date = $time->toDateTimeString()); // die(); Session::flash('message', 'Your to has been added.'); $to->save(); } return redirect('/t');} Route Route::get('/t/{id}', 'TController@td');Route::post('/t/{id}', 'TController@td'); View <div class="form-group"><form action="/t/{{ $d['id'] }}" method="post"> {{ csrf_field() }}<div class="panel-body"><h4>Heading here</h4></div> <select class="form-control" id="options" name="options" style="width:100%" type="checkbox"> @foreach($t as $t) <option value="{{ $t->id }}">{{ $t->type }}</option> @endforeach </select> </div> <div class="col-md-4" id="value" align="center"> <div class="panel panel-warning"> <div class="panel-heading"> Enter text below </div> <div class="form-group has-success"> <textarea class="form-control" id="message" name="message" placeholder="Please enter your message here..." rows="5"></textarea> <input type="submit" class="btn btn-primary" name="n" value="A-N"> <input type="submit" class="btn btn-primary" name="o" value="A-O"> </div> </form> <!-- Notes: <br/>--> </div><script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script><script>function hide() {$("#value").hide();$("#h").hide(); $("#search").hide();}function show() {$("#value").show();$("#h").show(); $("#search").show();}function initHandlers() {$("#options").on('click', function() { show();});}hide();initHandlers();</script> Interface Implementation public function getAllTn() { return TN::all();}public function getAllTO(){ return TO::all();}
Now provide the response and nothing else.
| Alright, I hope I haven't misunderstood the question. Angular's router supports as many resolvers per route as you want. In the route declaration, the resolve property is an object and it can have as many keys as you'd like: { path: '', component: DashboardComponent, resolve: { foo: Resolver1 bar: Resolver2, // more resolves here... }} Then retrieve the resolved data from your component: @Component({ ... })export class MyComponent { constructor(private route: ActivatedRoute) { } ngOnInit() { const foo = this.route.snapshot.data['foo']; const bar = this.route.snapshot.data['bar']; }} The route won't be activated until ALL resolves are complete/fulfilled. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41550219', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4573336/']} | jdg_82302 |
stackexchange | llm_judgeable_groundtruth_similarity | 1113556 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While thinking about another question I found out that this equality might be useful there:$$n\cdot 1 + (n-1)\cdot 2 + \dots + 2\cdot (n-1) + 1\cdot n = \frac{n(n+1)(n+2)}6$$To rewrite it in a more compact way:$$\sum_{k=1}^n k(n+1-k)=\frac{n(n+1)(n+2)}6.$$ This equality is relatively easy to prove:$$\sum_{k=1}^n k(n+1-k)=(n+1)\sum_{k=1}^n k - \sum_{k=1}^n k^2 =(n+1) \frac{n(n+1)}2 - \frac{n(n+1)(2n+1)}6 = n(n+1) \left(\frac{n+1}2-\frac{2n+1}6\right) = n(n+1)\frac{3(n+1)-(2n+1)}6 = \frac{n(n+1)(n+2)}6.$$(We only used the known formulas for the sum of the first $n$ squares and the sum of the first $n$ numbers .) Are there some other nice proofs of this equality? (Induction, combinatorial arguments, visual proofs, ...) EDIT: Now I found another question which asks about the same identity: Combinatorial interpretation of a sum identity: $\sum_{k=1}^n(k-1)(n-k)=\binom{n}{3}$ (I have tried to search before posting. But the answers posted here so far gave me some new ideas for good keywords to search which lead me to finding that question.) The questions are, in my opinion, not exact duplicates since the other question asks specifically about combinatorial proofs and my question does not have that restriction. But I agree that this is a very minor distinction. In any case, if you think that one of them should be closed as a duplicate, then you can vote to close. I will refrain from voting to close/reopen on this question. (If one of the two questions is voted to be a duplicate of the other one, they probably cannot be merged, since the summation variables are off by one.)
Now provide the response and nothing else.
| Let us choose three numbers from $\{0,1,2,\ldots, n+1\}$, beginning with the middle one, which has to be some $k\in \{1,\ldots,n\}$. We then have $k$ choices for the smallest and $n+1-k$ choices for the largest of the three. It follows that$${n+2\choose3}=\sum_{k=1}^n k(n+1-k)\ .$$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1113556', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8297/']} | jdg_82303 |
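A quick numerical sanity check of the identity, purely for illustration:

    from math import comb

    for n in range(1, 200):
        assert sum(k * (n + 1 - k) for k in range(1, n + 1)) == comb(n + 2, 3)
    print("sum_{k=1}^n k(n+1-k) == C(n+2, 3) for n = 1..199")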
stackexchange | llm_judgeable_groundtruth_similarity | 603078 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a directory (e.g. /home/various/ ) with many subdirectories (e.g. /home/various/foo/ , /home/various/ber/ , /home/various/kol/ and /home/various/whatever/ ). Is there a command I can run, which will breakdown the contents per file extension showing totals like total size number of files Let's say, I don't want to manually type each file extension in the terminal, in part because I don't know all the file extensions inside (recursively) /various/ . An output like this, would be great: *.txt 23 files, 10.2MB*.pdf 8 files, 23.2MB*.db 3 files, 2.3MB*.cbz 24 files, 2.3GB*.html 2,508 files, 43.9MB*.readme 13 files, 4KB
Now provide the response and nothing else.
| First, you should anchor your regular expression to only match at the beginning of the line ( ^chr1 ) to avoid finding lines that contain chr1 but it isn't the first string (this can easily happen with an annotated VCF file, for example). Next, you can use the -w option for (GNU) grep : -w, --word-regexp Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore. This option has no effect if -x is also specified. If your grep doesn't support that, then use this: grep -v '^chr1\s' file The \s matches whitespace (including both tabs and spaces), so that will exclude any lines that start with chr1 and then any kind of whitespace character. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/603078', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/427061/']} | jdg_82304 |
stackexchange | llm_judgeable_groundtruth_similarity | 8851164 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to loop through a QML Grid's children and destroy each of them using Javascript. Grid { id: contentGrid spacing: 10 ImageItem { imageSource: "file:/foo.jpeg" } // destroy this ImageItem { imageSource: "file:/bar.jpeg" } // destroy this as well} I tried to do something like this but it's not working so far. for(var i = 0; contentGrid.children.length() < i; i++) { contentGrid.childAt(i).destroy();}
Now provide the response and nothing else.
| You have a number of problems in your attempt above... First, you'll need to iterate backwards because you'd be shifting the contents of the children down as you advance (ie, if you delete #1, number #2 would become child #1 and then you'd go to delete #2 which would be the old child #3). Second, you need to access the children differently. The childAt() function is for locating a child at a particular x,y on the screen, not a position in a list. Try this instead: import QtQuick 1.0Rectangle { width: 400 height: 400 Grid { id: contentGrid spacing: 10 Text { text: "foo" } // destroy this Text { text: "bar" } // destroy this as well } MouseArea { anchors.fill: parent onClicked: { for(var i = contentGrid.children.length; i > 0 ; i--) { console.log("destroying: " + i) contentGrid.children[i-1].destroy() } } }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8851164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1000569/']} | jdg_82305 |
stackexchange | llm_judgeable_groundtruth_similarity | 3630594 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't understand what the Transformation Matrix is and how to work with it. The following will draw a circle at 0, 0 of my canvas: (generated from an svg converted with svg2canvas.jar) drawPoints: function(ctx, max_points) { ctx.save(); ctx.setTransform(1, 0, 0, 1, -551.23701, -368.42499); ctx.fillStyle="#0066ab"; ctx.globalAlpha="0.7"; ctx.beginPath(); ctx.moveTo(584.50,387.96); ctx.bezierCurveTo(584.50,397.14,577.05,404.59,567.87,404.59); ctx.bezierCurveTo(558.68,404.59,551.24,397.14,551.24,387.96); ctx.bezierCurveTo(551.24,378.77,558.68,371.33,567.87,371.33); ctx.bezierCurveTo(577.05,371.33,584.50,378.77,584.50,387.96); ctx.closePath(); ctx.fill(); ctx.restore(); } I would like to pass in arguments for setTransform() to draw on any part of my canvas, however I don't understand how to use it at all.
Now provide the response and nothing else.
| The transformation matrix they are referring to is the common transformation matrix found in linear algebra. Those arguments form the transformation matrix you wish to apply to your coordinates for the given shapes or paths. This page describes the transformation method. Please look specifically at the matrix they define under the method signature for transformation. It shows you which parameters go where in transformation matrix. Now please also refer to the following link . If you scroll down you will see what each element in the transformation matrix means. For instance the [0,0] element (parameter a from the HTML5 transform method signature) of the transformation matrix represents how the coordinate will scale in the X direction. Hope this helps, | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3630594', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/437654/']} | jdg_82306 |
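To see what the six arguments of setTransform(a, b, c, d, e, f) do, it helps to write the matrix out and apply it to a point by hand. A small NumPy sketch, using the pure-translation values from the question's generated code (illustrative only):

    import numpy as np

    a, b, c, d, e, f = 1, 0, 0, 1, -551.237, -368.425   # setTransform(a, b, c, d, e, f)
    M = np.array([[a, c, e],
                  [b, d, f],
                  [0, 0, 1.0]])                          # x' = a*x + c*y + e,  y' = b*x + d*y + f

    point = np.array([567.87, 387.96, 1.0])              # a point from the original SVG coordinates
    x_new, y_new, _ = M @ point
    print(x_new, y_new)                                  # where that point lands on the canvas

Scaling lives in a and d, shearing in b and c, and translation in e and f, which matches the description in the linked pages.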
stackexchange | llm_judgeable_groundtruth_similarity | 89189 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently learning about I/O buffering in operating systems, and according to the book I use: When a user process issues an I/O request, the OS assigns a buffer in the system portion of main memory to the operation. I understand how this method avoids the swapping problem present in the non-buffering case. But is it assumed that the OS buffer created for the process will never be swapped out? To extend my question, I was wondering whether there is any mechanism by which the kernel portion of an OS in memory may also be swapped?
Now provide the response and nothing else.
| Indeed, it wouldn't make sense to swap out an I/O buffer. The point of that buffer is that it's in RAM. The code to perform the I/O and the code to manage swap must not be swapped out either. Other kernel code can be swapped out, in principle. However, this can be problematic for several reasons. Hardware drivers that aren't involved in swap could be swapped out, but this means the kernel has to be very careful about the whole data path to swap (which may involve a disk over some external communication bus or even the network, as well as filesystem code when swapping to a file). Furthermore, a lot of driver code is very performance-sensitive: the kernel has to read or send data when the hardware is ready, and may not be able to afford the time to bring code and data back out of swap. In addition, RAM is usually assumed to be reliable (it's acceptable for the system to crash if the RAM fails), but swap (on disks or over the network) isn't, and it's difficult to design a kernel that can recover if part of it becomes unavailable. Most kernels take the simple approach and never swap out kernel code and data. | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/89189', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/81369/']} | jdg_82307
stackexchange | llm_judgeable_groundtruth_similarity | 11643513 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My app is rejected by apple and they give a reason We found that your app does not use the HTTP Live Streaming protocol, with a baseline stream of 64 kbps, to broadcast streaming video Then I search for a solutions and over internet all the solutions suggest that I have to use .U8F8 and .ts extension for live streaming but my app has mp4 format. I already used MPMoviePlayerController for streaming. So my question is: Do I have to convert mp4 video to M8U8? Do I have to use MPMoviePlayer ? Can I convert mp4 video to M8U8 format during run-time and how to implement live streaming Is there any example code available for HTTP live streaming? Thanks in advance.
Now provide the response and nothing else.
| I can only comment on pre-recorded video, not live streaming... Do I have to convert mp4 video to M8U8? I think you mean .m3u8. Basically, you need to take your .mp4 file and: Encode it to a friendly bitrate for mobile bandwidths. You can use Apple's Compressor* app for this, it has presets for HTTP Live Streaming. Pick something around 1000kbps if you're playing around. Slice it up the output using Apple's mediafilesegmenter. You'll end up with lots of small .ts files and a manifest (.m3u8) which lists them. Hit the .m3u8 file in your player (initWithContentURL...) and you're off. Repeat steps 1 and 2 above and specify differing bandwidths. We went for the presets in Compressor. You'll end up with different versions of your video, 1 for each bandwidth, use the variantplaylistcreator tool from Apple to create a master playlist file which will point your player to each bandwidth so it can switch automatically. Stick all your .ts files and .m3u8 files on the net and use the mediastreamvalidator tool to check your master playlist file is ok and points to each version correctly. Make sure a decent quality bitrate is first in the master playlist file as this is the version that's played first. We went for something around the 1000kbps mark. Do I have to use MPMoviePlayer? I can't comment on other libraries, we used MPMoviePlayer. Can I convert mp4 video to M8U8 format during run-time and how to implement live streaming You can for live streams but for pre-recorded video do it all before. Put all your files online, validate your playlist (.m3u8) and play your videos using the master .m3u8 file. Is there any example code available for HTTP live streaming? In our implementation the client in iOS does all the work (e.g. switching streams depending on the available bandwidth). As such you just need to make sure all your files are in the right place before hand. Compressor - https://www.apple.com/final-cut-pro/compressor/ The mediafilesegmenter and mediastreamvalidator tools command lines tools available to download from the Apple developer network site. These articles have everything you need to know: https://developer.apple.com/library/archive/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html https://developer.apple.com/library/archive/technotes/tn2288/_index.html#//apple_ref/doc/uid/DTS40012238 Open this up in Safari: https://developer.apple.com/streaming/examples/advanced-stream.html - Each 'Gear' is a different bitrate stream Crack open this .m3u8 file in a text editor for an example master playlist file: https://devimages.apple.com.edgekey.net/resources/http-streaming/examples/bipbop_16x9/bipbop_16x9_variant.m3u8 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11643513', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1550588/']} | jdg_82308 |
stackexchange | llm_judgeable_groundtruth_similarity | 162829 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a 4 by 4 array which has a probability associated with each point. {{0., 0., 0., 0.9}, {0., 0.05, 0., 0.}, {0., 0., 0., 0.}, {0., 0., 0.05, 0.}} I want to sample the indices according to the probability/value at those indices in the matrix. I should find point {1,4} many more times than I should {4,3} or {2,2} . How can I sample the index where the matrix values correspond to the probability of finding that index?
Now provide the response and nothing else.
| You can use Position to find the non-zero probabilities: m = {{0., 0., 0., 0.9}, {0., 0.05, 0., 0.}, {0., 0., 0., 0.}, {0., 0., 0.05, 0.}};Position[m, _?Positive] {{1, 4}, {2, 2}, {4, 3}} If you have large matrices, then you may be interested in using SparseArray to save memory and possibly also gain in performance: sa = SparseArray[m];sa["NonzeroPositions"] {{1, 4}, {2, 2}, {4, 3}} Here is how to sample the indices in proportion to the corresponding matrix elements: pos = Position[m, _?Positive];values = Extract[m, pos];RandomChoice[values -> pos, 10] {{1, 4}, {1, 4}, {4, 3}, {1, 4}, {1, 4}, {1, 4}, {1, 4}, {1, 4}, {1, 4}, {1, 4}} | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/162829', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/36939/']} | jdg_82309 |
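For readers coming from Python, the same "collect the non-zero positions, then sample them with their values as weights" idea looks like this in NumPy (a parallel illustration, not part of the Mathematica answer):

    import numpy as np

    m = np.array([[0, 0, 0, 0.9],
                  [0, 0.05, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0.05, 0]])

    positions = np.argwhere(m > 0)                        # index pairs with non-zero probability
    weights = m[m > 0]                                    # matching values, in the same (row-major) order
    picks = np.random.choice(len(positions), size=10, p=weights / weights.sum())
    print(positions[picks] + 1)                           # +1 to match Mathematica's 1-based indexing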
stackexchange | llm_judgeable_groundtruth_similarity | 935405 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't have much (good) math education beyond some basic university-level calculus. What do "analytical" and "numerical" mean? How are they different?
Now provide the response and nothing else.
| Analytical approach example: Find the root of $f(x)=x-5$ . Analytical solution: $f(x)=x-5=0$ , add $+5$ to both sides to get the answer $x=5$ Numerical solution: let's guess $x=1$ : $f(1)=1-5=-4$ . A negative number. Let's guess $x=6$ : $f(6)=6-5=1$ . A positive number. The answer must be between them. Let's try $x=\frac{6+1}{2}$ : $f(\frac{7}{2})<0$ So it must be between $\frac{7}{2}$ and $6$ ...etc. This is called bisection method. Numerical solutions are extremely abundant. The main reason is that sometimes we either don't have an analytical approach (try to solve $x^6-4x^5+\sin (x)-e^x+7-\frac{1}{x} =0$ ) or that the analytical solution is too slow and instead of computing for 15 hours and getting an exact solution, we rather compute for 15 seconds and get a good approximation. | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/935405', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/51537/']} | jdg_82310 |
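The bisection idea sketched above, written out as a short loop (a Python illustration):

    def bisect(f, lo, hi, tol=1e-9):
        # Assumes f(lo) and f(hi) have opposite signs, so a root lies between them.
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid          # the sign change is in the left half
            else:
                lo = mid          # the sign change is in the right half
        return (lo + hi) / 2

    print(bisect(lambda x: x - 5, 1, 6))   # converges numerically to 5, the analytical answer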
stackexchange | llm_judgeable_groundtruth_similarity | 5971267 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose you have a result set such as: DATE ID cost --------------------------------------- 01/01/2011 1 10 01/01/2011 1 10 01/01/2011 2 10 01/01/2011 2 10 I want a way to sum the values of cost, but only once for every distinct ID, so that when I group by date I get a result such as DATE cost 01/01/2011 20 I first tried something like sum(distinct cost) but that of course only returns 10. I also tried: sum(case when distinct id then cost else 0 end) but that is not a valid query.
Now provide the response and nothing else.
| I will assume that the same ID will always have the same cost on the same day. I will also assume your RDBMS supports derived tables. In that case, this is what you want: select date, sum(cost) from (select distinct date, id, cost from YourTable) group by date Updated: Oracle derived tables do not require an alias. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5971267', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/135683/']} | jdg_82311
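For what it's worth, the same "deduplicate first, then aggregate" shape appears outside SQL too; here it is in pandas, purely as an illustration of the idea (the accepted approach for the question is the SQL above):

    import pandas as pd

    df = pd.DataFrame({
        "date": ["01/01/2011"] * 4,
        "id":   [1, 1, 2, 2],
        "cost": [10, 10, 10, 10],
    })

    result = (df.drop_duplicates(["date", "id", "cost"])   # the inner SELECT DISTINCT
                .groupby("date")["cost"].sum())            # the outer GROUP BY ... SUM(cost)
    print(result)                                          # 01/01/2011    20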
stackexchange | llm_judgeable_groundtruth_similarity | 46044521 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to read a json object that is returned from a JSONP API call on Angular 4, but I keep getting "undefined" in the console when I try to print it. This is my SearchService.ts file: import {Injectable} from '@angular/core';import {Jsonp} from '@angular/http';import 'rxjs/add/operator/map';@Injectable()export class SearchService { apiRoot = 'this/is/my/api'; results: any; loading: boolean; constructor(private jsonp: Jsonp) { this.loading = false; } search(term: string) { const apiUrl = `${this.apiRoot}?search=${term}&rows=10&callback=JSONP_CALLBACK`; return this.jsonp.request(apiUrl).map(results => { this.results = results.json().data }); }} And this is the search.component.ts file that I'm using to carry out the search: import {Component, OnInit} from '@angular/core';import 'rxjs/add/operator/toPromise';import {SearchService} from '../../services/SearchService';@Component({ selector: 'app-search', templateUrl: './search.component.html', styleUrls: ['./search.component.css']})export class SearchComponent implements OnInit { loading = false; public result; constructor(private uniDirectory: SearchService) { } doSearch(term: string) { this.loading = true; this.uniDirectory.search(term).subscribe(results => this.result = results); this.loading = false; console.log('Result: ' + this.result); } ngOnInit() { }} If I try and print the result in the SearchService (i.e. console.log(results.json()); ), the json object is printed out. However, if I try to print out the same in the doSearch() method, then it prints undefined. Any suggestions are appreciated.
Now provide the response and nothing else.
| Assuming that you know how to add images at an absolute position (see Joris' answer), but looking at how to add text, then the answer to your question is: use ColumnText . If you only need to add a single line that doesn't need to be wrapped, you can use the ShowTextAligned() method: ColumnText.showTextAligned(writer.DirectContent, Element.ALIGN_CENTER, new Phrase("single line"), x, y, rotation); In this line of code, x and y are the coordinates for the middle of the text (other possible alignment values are ALIGN_LEFT and ALIGN_RIGHT ). The rotation parameter defines a rotation in degrees. Note that the text "single line" won't be wrapped. You can add text that "falls off the page" this way if the text you're adding is too long. If you want to add text inside a specific rectangle, then you need to define the column using a Rectangle object: ColumnText ct = new ColumnText(writer.DirectContent);ct.setSimpleColumn(new Rectangle(0, 0, 523, 50));ct.addElement(new Paragraph("This could be a very long sentence that needs to be wrapped"));ct.go(); If you provide more text than fits the rectangle, that text will not be rendered. However, it will still be available in the ct object so that you can add that remaining text at another position. All of this has been asked and answered before: Single line: http://stackoverflow.com/questions/16370428/how-to-write-in-a-specific-location-the-zapfdingbatslist-in-a-pdf-document-using http://stackoverflow.com/questions/17998306/rotating-text-using-center-in-itext Multiple lines: http://stackoverflow.com/questions/33609447 http://stackoverflow.com/questions/31152874/how-to-add-text-in-pdfcontentbyte-rectangle-using-itextsharp http://stackoverflow.com/questions/15414923/rotate-paragraphs-or-cells-some-arbitrary-number-of-degrees-itext Did I have to search long for these examples? No, I found them on the official web site under Absolute Positioning of text . Wisdom is there for those who search... | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46044521', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4840056/']} | jdg_82312 |
stackexchange | llm_judgeable_groundtruth_similarity | 32559 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How do you show that the equation $x^3+y^3+z^3=1$ has infinitely many solutions in integers? How about $x^3+y^3+z^3=2$?
Now provide the response and nothing else.
| You can reduce the first equation to $$x^3 = -y^3, z = 1$$ with obvious infinite solutions. This paper details other families of solutions. The second equation has solutions $(x,y,z)\equiv (6t^3+1, 1-6t^3, -6t^2)$ which (AFAIK) you find by construction (i.e you have to guess it). | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/32559', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/6715/']} | jdg_82313 |
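It is worth verifying the quoted family directly. Writing $a=6t^3$, the odd powers of $a$ cancel: $$(6t^3+1)^3+(1-6t^3)^3+(-6t^2)^3=\left[(1+a)^3+(1-a)^3\right]-216t^6=(2+6a^2)-216t^6=2+216t^6-216t^6=2.$$ Likewise, for the first equation the family $(x,y,z)=(t,-t,1)$ gives $t^3+(-t)^3+1^3=1$ for every integer $t$, which is the "$x^3=-y^3$, $z=1$" reduction above.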
stackexchange | llm_judgeable_groundtruth_similarity | 23296031 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using express 4 in order to create a json API service.I can't seem to define it to send a simple json without trying to render the view. var express = require('express');var router = express.Router();module.exports = function (app, namespace) { router.get('/', function(req, res) { res.json({'body': 123}); }); app.use(namespace + '/v1', router);}; when I access the route it's Error: Failed to lookup view "error" in views directory "/Volumes/api_service/init/views" I tried to remove the views engine all together //app.set('views', path.join(__dirname, 'views'));//app.set('view engine', 'jade'); but it yells with this error No default engine was specified and no extension was provided
Now provide the response and nothing else.
| If you're making any calls to res.render such as in an error handler that are generated by the 'express generate', then you'll see the error you described. For a json API service you probably don't need to render anything so just don't call render() , instead call res.send() with the status res.status set to 404 or 500. So basically, replace this: app.use(function(err, req, res, next) { res.render('error', { message: err.message, error: err });}); with this: app.use(function(err, req, res, next){ res.status(err.status || 500); res.send({ message: err.message, error: err }); return;}); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23296031', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1574104/']} | jdg_82314 |
stackexchange | llm_judgeable_groundtruth_similarity | 3219114 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Greetings, I want to store some data in a redis db and don't know which way I should go. The data is equivalent to something like an address with the variables name , street and number . They will be stored under the lower cased name as key, there won't be doublets. Now, should I save it as a list or should I serialize the hash ( {:name => 'foo', :street => 'bar', :number => 'baz'} for example) with JSON/Marshall and simply store that? Regards Tobias
Now provide the response and nothing else.
| If you mean the characters of the numbers (such as 1927 and 9721), there are (at least) a couple of approaches. If you were allowed to sort, one approach is to simply sprintf them to two buffers, sort the characters in the buffers, then see if the strings are equal. However, given your desire to not sort the digits, another alternative is to set up a ten-element array, with all elements initially set to zero, then process each digit in the first number, incrementing the relevant element. Then do the same with the second number but decrementing. If, at the end, it's still all zeros, the numbers were a permutation of each other. This is efficient in that it's an O(n) algorithm where n is the number of digits in the two numbers. The pseudo-code for such a beast would be something like: def arePermutations (num1, num2): create array count, ten elements, all zero. for each digit in num1: increment count[digit] for each digit in num2: decrement count[digit] for each item in count: if item is non-zero: return false return true In C, the following complete program illustrates how this can be done: #include <stdio.h>#include <stdlib.h>#define FALSE (1==0)#define TRUE (1==1)int hasSameDigits (long num1, long num2) { int digits[10]; int i; for (i = 0; i < 10; i++) // Init all counts to zero. digits[i] = 0; while (num1 != 0) { // Process all digits. digits[num1%10]++; // Increment for least significant digit. num1 /= 10; // Get next digit in sequence. } while (num2 != 0) { // Same for num2 except decrement. digits[num2%10]--; num2 /= 10; } for (i = 0; i < 10; i++) if (digits[i] != 0) // Any count different, not a permutation. return FALSE; return TRUE; // All count identical, was a permutation.} int main (int c, char *v[]) { long v1, v2; if (c != 3) { printf ("Usage: %s <number1> <number2>\n", v[0]); return 1; } v1 = atol (v[1]); v2 = atol (v[2]); if (hasSameDigits (v1, v2)) { printf ("%d and %d are permutations\n", v1, v2); } else { printf ("%d and %d are not permutations\n", v1, v2); } return 0;} Simply pass it two (positive) numbers and, assuming they fit in a long , it'll tell you whether they have the same digit counts. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3219114', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/384293/']} | jdg_82315 |
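The digit-count idea in the answer above is a one-liner in higher-level languages; for instance, a Python parallel (illustrative only, not part of the original C answer):

    from collections import Counter

    def has_same_digits(a: int, b: int) -> bool:
        # Two numbers are digit permutations of each other iff their digit counts match.
        return Counter(str(abs(a))) == Counter(str(abs(b)))

    print(has_same_digits(1927, 9721))   # True
    print(has_same_digits(1927, 9722))   # False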
stackexchange | llm_judgeable_groundtruth_similarity | 32459325 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dataframe which was created via a df.pivot: type start endF_Type to_date A 20150908143000 345 316B 20150908140300 NaN 480 20150908140600 NaN 120 20150908143000 10743 8803C 20150908140100 NaN 1715 20150908140200 NaN 1062 20150908141000 NaN 145 20150908141500 418 NaN 20150908141800 NaN 450 20150908142900 1973 1499 20150908143000 19522 16659D 20150908143000 433 65E 20150908143000 7290 7375F 20150908143000 0 0G 20150908143000 1796 340 I would like to filter and return a single row for each 'F_TYPE' only returning the row with the Maximum 'to_date'. I would like to return the following dataframe: type start endF_Type to_date A 20150908143000 345 316B 20150908143000 10743 8803C 20150908143000 19522 16659D 20150908143000 433 65E 20150908143000 7290 7375F 20150908143000 0 0G 20150908143000 1796 340 Thanks..
Now provide the response and nothing else.
| A standard approach is to use groupby(keys)[column].idxmax() .However, to select the desired rows using idxmax you need idxmax to return unique index values. One way to obtain a unique index is to call reset_index . Once you obtain the index values from groupby(keys)[column].idxmax() you can then select the entire row using df.loc : In [20]: df.loc[df.reset_index().groupby(['F_Type'])['to_date'].idxmax()]Out[20]: start endF_Type to_date A 20150908143000 345 316B 20150908143000 10743 8803C 20150908143000 19522 16659D 20150908143000 433 65E 20150908143000 7290 7375F 20150908143000 0 0G 20150908143000 1796 340 Note: idxmax returns index labels , not necessarily ordinals. After using reset_index the index labels happen to also be ordinals, but since idxmax is returning labels (not ordinals) it is better to always use idxmax in conjunction with df.loc , not df.iloc (as I originally did in this post.) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/32459325', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/636322/']} | jdg_82316 |
stackexchange | llm_judgeable_groundtruth_similarity | 8917056 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm writing windows service which will process "something" every couple minutes. Here is some code: public Service() { this.InitializeComponent(); this.ServiceName = Name; this.CanPauseAndContinue = true; this.CanShutdown = true; this.eventLog.Source = Name; // initialize timer this.timer.Elapsed += this.TimerElapsed; } private void TimerElapsed(object sender, ElapsedEventArgs e) { eventLog.WriteEntry("Starting syncronization...", EventLogEntryType.Information); if (this.processor.PrepareToRun()) { this.processor.Run(); } } I wonder what will happen if this.processor.Run() will take long time and next TimerElapsed event will be raised? Will it skip? Will it wait and run ASAP after finished? Should I consider those scenarios and code for them? I'm using System.Timers.Timer EDIT : private void TimerElapsed(object sender, ElapsedEventArgs e) { eventLog.WriteEntry("Starting syncronization...", EventLogEntryType.Information); try { this.timer.Stop(); if (this.processor.PrepareToRun()) { this.processor.Run(); } } catch (Exception ex) { LoggingAndNotifications.LogAndNotify(ex); } finally { this.timer.Start(); } } EDIT 2 public Service() { this.InitializeComponent(); this.ServiceName = Name; this.CanPauseAndContinue = true; this.CanShutdown = true; this.eventLog.Source = Name; // initialize timer this.timer.AutoReset = false; this.timer.Elapsed += this.TimerElapsed; } private void TimerElapsed(object sender, ElapsedEventArgs e) { eventLog.WriteEntry("Starting syncronization...", EventLogEntryType.Information); try { if (this.processor.PrepareToRun()) { this.processor.Run(); } } catch (Exception ex) { LoggingAndNotifications.LogAndNotify(ex); throw; } finally { this.timer.Start(); } }
Now provide the response and nothing else.
| It'll call it again on another thread. Depending on the nature of the operation you will want to either: Ignore this, if the code called is safe for multiple simultaneous calls then this may be fine. Of course, you have to know that it's fine. Lock on the timer-triggered operation. Be aware that you can end up with a queue of lots of pending operations, which is very bad. Lock on the timer-triggered operation, try to obtain the lock with a timeout of zero and if you fail then skip it - there's a thread still here from the last time. Have the timer as a one-off timer that you restart at the end of each call. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8917056', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/509600/']} | jdg_82317 |
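A minimal C# sketch of the third option above (try the lock with a zero timeout and skip the tick if a previous one is still running), with hypothetical names and assuming using System.Threading; and using System.Timers; are in scope:

private readonly object _syncRoot = new object();

private void TimerElapsed(object sender, ElapsedEventArgs e)
{
    // Non-blocking: returns false immediately if a previous tick still holds the lock.
    if (!Monitor.TryEnter(_syncRoot))
        return; // skip this tick rather than queue up behind the running one
    try
    {
        DoLongRunningWork(); // hypothetical stand-in for processor.PrepareToRun()/Run()
    }
    finally
    {
        Monitor.Exit(_syncRoot);
    }
}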
stackexchange | llm_judgeable_groundtruth_similarity | 8759 |
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to understand the logic of Latin nouns derived from pluperfect verbs. For example, we have facta , things done, and scripta , things written, but I thought the pluperfect gerundive would be something like scribenda , things having been written. What exactly is the syntactical logic behind Latin pluperfect gerundives?
Now provide the response and nothing else.
| A lot of confusion here... Let's try to clear it up: Forms like facta, scripta are neuter plural forms of the perfect passive participle. The meaning of the perfect passive participle is "having been (verb)ed", so factus, -a, -um means "having been done"; factum as a neuter used nominally means "thing having been done, thing done", and facta is the plural of this. The gerundive, a.k.a. future passive participle, is something completely different. Its meaning is "that will be (verb)ed" or more often "needing/deserving/etc. to be (verb)ed". So scribendus, -a, -um means "(something) to be written", and scribenda are "things to be written*. The pluperfect plays no part here at all: neither of these are derived from the pluperfect, and there's no pluperfect gerundive. | {} | {'log_upvote_score': 4, 'links': ['https://linguistics.stackexchange.com/questions/8759', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/4998/']} | jdg_82318 |
stackexchange | llm_judgeable_groundtruth_similarity | 1762088 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What are the typical reasons for bugs and abnormal program behavior that manifest themselves only in release compilation mode but which do not occur when in debug mode?
Now provide the response and nothing else.
| Many times, in debug mode in C++ all variables are null-initialized, whereas the same does not happen in release mode unless explicitly stated. Check for any debug macros and uninitialized variables. Does your program use threading? Then optimization can also cause some issues in release mode. Also check for all exceptions; for example, not directly related to release mode, but sometimes we just ignore some critical exceptions, like memory access violations in VC++, but the same can be an issue at least on other OSes like Linux and Solaris. Ideally your program should not catch such critical exceptions, like accessing a NULL pointer. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1762088', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/124061/']} | jdg_82319
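A minimal C++ illustration (an assumed example, not from the answer) of the uninitialized-variable point: the code below often appears to work in a debug build because the stack memory happens to be zeroed or pattern-filled, while an optimized release build leaves count holding garbage — formally it is undefined behaviour in both cases.

#include <iostream>

int main() {
    int count;                    // bug: never initialized
    for (int i = 0; i < 10; ++i)
        count += i;               // reads an indeterminate value on the first iteration
    std::cout << count << '\n';   // may print 45 in debug, anything in release
    return 0;
}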
stackexchange | llm_judgeable_groundtruth_similarity | 14694992 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I defined a new variable Name Value Descriptioncategories (1, 2, 3, 4, 5) my categories ids and in my path i want to get a random value from categories: category_id=my_random_value . I even tried this category_id=${__StringFromArrayAtRandomindex('1', '2', '3', '4', '5'} but it doesn't work.
Now provide the response and nothing else.
| To obtain a random variable value from a list, first declare as User variables the list of available values, with a prefix and an incremental index: country_1 Spain country_2 France country_3 Portugal country_4 Italy country_5 England Then you can obtain a random value from the list by concatenating the prefix with a random index in the interval: ${__V(country_${__Random(1,6,)})} --> "Spain", "France", "Portugal", etc... Explanation The __Random function will give you an index for your interval. To obtain values from 1 to 5, you have to call __Random(1,6,) , as it will never reach the MAX value. The __V function will obtain the value of the variable with the given name. ${__Random(1,6,)} --> 1, 2, 3, 4, 5 country_${__Random(1,6,)} --> "country_1", "country_2", etc... ${__V(country_${__Random(1,6,)})} --> "Spain", "France", "Portugal", etc... As an example, to use the random variable as JSON body for a request, in Body Data: { "country":"${__V(country_${__Random(1,6,)})}"} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14694992', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1209810/']} | jdg_82320
stackexchange | llm_judgeable_groundtruth_similarity | 49042459 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm in a weird situation where I need to instantiate a new class from a string stored in a variable, but even though I'm sure the class name is correct, I get an error that the given class name is not a constructor. Here is a dummy code that doesn't work: class Foo { constructor(){ console.log('Foo!'); }};const foo = 'Foo';const bar = new window[foo]();console.log(bar); This throws this error: Uncaught TypeError: window[foo] is not a constructor
Now provide the response and nothing else.
| One possibility is to use eval . class Foo { constructor() { console.log('Foo!'); }};const foo = 'Foo';const bar = eval(`new ${foo}()`);console.log(bar); You will have to evaluate the safety of using eval() in your particular circumstances. If you know the origin of the string you are inserting into the code that you run eval() on or you can sanitize it first, then it may be safe. I personally would prefer a lookup table. If you have a known number of classes that you want to map by string, then you can make your own lookup table and use that. This has the advantage of there can be no unintended consequences if the string has weird stuff in it: class Foo { constructor() { console.log('Foo!'); }};class Goo { constructor() { console.log('Goo!'); }};// construct dict object that contains our mapping between strings and classes const dict = new Map([ ['Foo', Foo], ['Goo', Goo]]);// make a class from a stringconst foo = 'Foo';let bar = new(dict.get(foo))()console.log(bar); If you were really going to go this route, you may want to encapsulate it in a function and then add error handling if the string is not found in the dict . This should be better than using the global or Window object as your lookup mechanism for a couple reasons: If I recall, class definitions in ES6 are not automatically put on the global object like they would with other top level variable declarations (Javascript trying to avoid adding more junk on top of prior design mistakes). So, if you're going to manually assign to a lookup object, you might as well use a different object and not pollute the global object. That's what the dict object is used for here. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/49042459', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2187918/']} | jdg_82321 |
stackexchange | llm_judgeable_groundtruth_similarity | 48577158 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to kotlin programming. What I want is that I want to remove a particular data from a list while iterating through it, but when I am doing that my app is crashing. for ((pos, i) in listTotal!!.withIndex()) { if (pos != 0 && pos != listTotal!!.size - 1) { if (paymentsAndTagsModel.tagName == i.header) { //listTotal!!.removeAt(pos) listTotal!!.remove(i) } } } OR for ((pos,i) in listTotal!!.listIterator().withIndex()){ if (i.header == paymentsAndTagsModel.tagName){ listTotal!!.listIterator(pos).remove() } } The exception which I am getting java.lang.IllegalStateException
Now provide the response and nothing else.
| It's forbidden to modify a collection through its interface while iterating over it. The only way to mutate the collection contents is to use Iterator.remove . However, using Iterator s can be unwieldy, and in the vast majority of cases it's better to treat the collections as immutable, which Kotlin encourages. You can use a filter to create a new collection that keeps everything except the matching elements in the middle, like so: listTotal = listTotal.filterIndexed { ix, element -> ix == 0 || ix == listTotal.lastIndex || element.header != paymentsAndTagsModel.tagName} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/48577158', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9042954/']} | jdg_82322
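A minimal sketch of the in-place alternative mentioned above (MutableIterator.remove), reusing the names from the question and assuming listTotal is a MutableList:

val lastIndex = listTotal!!.size - 1          // positions refer to the original list
val iterator = listTotal!!.iterator()         // MutableIterator, so remove() is allowed
var pos = 0
while (iterator.hasNext()) {
    val item = iterator.next()
    if (pos != 0 && pos != lastIndex && item.header == paymentsAndTagsModel.tagName) {
        iterator.remove()                     // safe structural removal during iteration
    }
    pos++
}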
stackexchange | llm_judgeable_groundtruth_similarity | 12654085 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having troubles entering text into an UITextField under a SVProgressHUD (Basically an UIView with a full-screen transparent UIWindow and some UIView subviews showing text and a custom-drawn progress bar). My problem is that the UITextField displays the blinking cursor and the keyboard shows, but when I tap a key, nothing is entered into the text field.I only see that the cursor's blinking interrupts just like normal.Interestingly enough, the back (delete) key works (it really deletes the last letter in the UITextView), but not any other key. I'm using iOS 6. Any help would be highly appreciated. EDIT:The same UITextField works fine when there's no SVProgressHUD displayed. This makes me think it has something to do with the first responder, but I have already tried calling resignFirstResponder on every window and subview of the SVProgressHUD and it still does not work.
Now provide the response and nothing else.
| I finally found the problem:SVProgressHUD calls makeKeyAndVisible when it's initialized, because it wants to receive keyboard notifications for repositioning. I looked up what the "Key Window" actually is and found out: ...The key window responds to user input... Now, as the UIWindow of the SVProgressHUD was the keyWindow, my other window, which contained the UITextField did not get the user input. I finally call makeKeyWindow on the AppDelegate's window and everything is working fine. I hope this helps anyone with similar problems. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12654085', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1691231/']} | jdg_82323 |
stackexchange | llm_judgeable_groundtruth_similarity | 11774822 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have created a histogram with matplotlib using the pyplot.hist() function. I would like to add a Poisson error, i.e. the square root of the bin height ( sqrt(binheight) ), to the bars. How can I do this? The return tuple of .hist() includes return[2] -> a list of Patch objects. I could only find out that it is possible to add errors to bars created via pyplot.bar() .
Now provide the response and nothing else.
| Indeed you need to use bar. You can use to output of hist and plot it as a bar: import numpy as npimport pylab as pltdata = np.array(np.random.rand(1000))y,binEdges = np.histogram(data,bins=10)bincenters = 0.5*(binEdges[1:]+binEdges[:-1])menStd = np.sqrt(y)width = 0.05plt.bar(bincenters, y, width=width, color='r', yerr=menStd)plt.show() | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11774822', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1565662/']} | jdg_82324 |
stackexchange | llm_judgeable_groundtruth_similarity | 50436910 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hey I just started using EF core and everything works fine. I call the the context.Database.Migrate() method and it creates a database. But even though my context object has a DBSet<T> , it doesn't create any tables except for the migration history. Can anyone help me with this issue?
Now provide the response and nothing else.
| context.Database.Migrate() in itself does not generate migrations . Instead, it processes your created migrations. For each database change, you should call Add-Migration {sensibleName} . Your startup class would continue to call context.Database.Migrate() which will check your database and process any outstanding migrations. For example once you have created your database, a general rule is to call Add-Migration Initial . Calling context.Database.Migrate() once will check your database exists, create it if not, check if Initial migration is applied, and apply it if not. If you then call Add-Migration SmallChange , the same will happen on next startup, similar to the following: Does database exist? Yes Has migration Initial been applied? Yes Has migration SmallChange been applied? No Apply Migration SmallChange Your first migration should look a little something like this: public partial class Initial : Migration{ protected override void Up(MigrationBuilder migrationBuilder) { migrationBuilder.CreateTable( name: "HelloWorld", columns: table => new { Id = table.Column<int>(nullable: false) .Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn), MyString = table.Column<string>(nullable: true), }); }} If your migration doesn't look like that, it may be that your DbContext isn't correctly configured. If your tables still aren't being applied, try running the database update from your Package Manager Console and see what exactly is happening with Update-Database -Verbose | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50436910', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6060749/']} | jdg_82325 |
stackexchange | llm_judgeable_groundtruth_similarity | 2671518 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to create a temporary table in a Oracle database something like Declare table @table (int id) In SQL server And then populate it with a select statement Is it possible? Thanks
Now provide the response and nothing else.
| Yep, Oracle has temporary tables. Here is a link to an AskTom article describing them and here is the official oracle CREATE TABLE documentation. However, in Oracle, only the data in a temporary table is temporary. The table is a regular object visible to other sessions. It is a bad practice to frequently create and drop temporary tables in Oracle. CREATE GLOBAL TEMPORARY TABLE today_sales(order_id NUMBER)ON COMMIT PRESERVE ROWS; Oracle 18c added private temporary tables, which are single-session in-memory objects. See the documentation for more details. Private temporary tables can be dynamically created and dropped. CREATE PRIVATE TEMPORARY TABLE ora$ptt_today_sales ASSELECT * FROM orders WHERE order_date = SYSDATE; Temporary tables can be useful but they are commonly abused in Oracle. They can often be avoided by combining multiple steps into a single SQL statement using inline views. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2671518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/281465/']} | jdg_82326 |
stackexchange | llm_judgeable_groundtruth_similarity | 50610049 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using material-ui to do a form, using the Grid system I'd like to do the following: <Grid container> <Grid item xs={4} /> <Grid item xs={4} /> <Grid item xs={4} /></Grid> And be able to put the first 2 items, on the first row and the third on a second row, the only way I found to do it is: <Grid container> <Grid item xs={4} /> <Grid item xs={4} /></Grid><Grid container> <Grid item xs={4} /></Grid> What is the better way to stack material-ui Grid into rows (like the row class concept in Bootstrap grid)? I also thought about these options: filling the first row with empty Grid item? putting vertical container?
Now provide the response and nothing else.
| You are close with the second block of code. I found that you could simply create 2 distinct Grid sections such as: <div> <Grid id="top-row" container spacing={24}> <Grid item xs={4}> <Paper className={classes.paper}>Grid cell 1, 1</Paper> </Grid> <Grid item xs={4}> <Paper className={classes.paper}>Grid cell 2, 1</Paper> </Grid> </Grid> <Grid id="bottom-row" container spacing={24}> <Grid item xs={4}> <Paper className={classes.paper}>Grid cell 1, 2</Paper> </Grid> <Grid item xs={4}> <Paper className={classes.paper}>Grid cell 2, 2</Paper> </Grid> </Grid></div> You can play with it in my sandbox It might also be work checking out the official documentation for Grid, as it shows a few ways to use it and also links to each exapmle hosted on codesandbox.io so you can play with it yourself. From the docs, it seems the best way to have multi-layered grid systems is to define the width of the overall grid and then to define the width of each cell, as this will push cells later in the series onto the other rows. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50610049', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9871537/']} | jdg_82327 |
stackexchange | llm_judgeable_groundtruth_similarity | 34475367 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an action / reducer / components. In one of my components (a dumb/presentational component) I have a Select. From it I get information on what type of filter to apply to my store. Where should I handle this: in the action, or in the reducer?
Now provide the response and nothing else.
| I save the items, sortKey and sortKind (asc/desc) in the Redux Store. In my Angular component (I believe would be same for React), I get the store state as an Observable so that I can display the items, sortKey and sortOrder in the UX. When the user clicks on the table columns to change sort key (order) I dispatch the new keys/sort order to reducer for the state. The reducer then performs the new sorting, and returns the new state with the updated values. The Observable in the component thus sparks an event which updates the UX. Advantage: keep sorting logic out of the component by saving the sortKey and sortKind in the state, you can restore precisely the UX if the user refreshes the browser (I use Redux-LocalStorage to sync) as the store has the sorted items, you'll only perform sorting when the user actively wants it. the sorted items are remembered for when the user might return to the component. My reducer ( "bizzes" is my items list, and I use Immutable.List to store the items) import { List } from 'immutable';import { IBizz, IBizzState } from './bizz.types';import { BIZZES_SET, BIZZES_SORT} from 'store/constants';const SORT_ASC = 'asc';const SORT_DESC = 'desc';const defaultSortKey = 'serialNo';const defaultSortOrder = SORT_ASC;const INITIAL_STATE: IBizzState = { bizzes: List([]), sortKey: defaultSortKey, sortOrder: defaultSortOrder};export function bizzReducer(state: IBizzState = INITIAL_STATE, action: any): IBizzState { switch (action.type) { case BIZZES_SET: return { bizzes: List(action.payload.bizzes), sortKey: action.payload.sortKey || defaultSortKey, sortOrder: action.payload.sortOrder || defaultSortOrder }; case BIZZES_SORT: let sortKey = action.payload.sortKey || defaultSortKey; if(sortKey === state.sortKey) { state.sortOrder = state.sortOrder === SORT_ASC ? SORT_DESC : SORT_ASC; } return { bizzes: List(state.bizzes.sort( (a, b) => { if( a[sortKey] < b[sortKey] ) return state.sortOrder === SORT_ASC ? -1 : 1; if( a[sortKey] > b[sortKey] ) return state.sortOrder === SORT_ASC ? 1: -1; return 0; })), sortKey: sortKey, sortOrder: state.sortOrder }; default: return state; }} And my component ( I use Ng2-Redux to get the store as Observables): import { Component, OnInit, OnDestroy, ChangeDetectionStrategy } from '@angular/core';import { select } from 'store';import { BizzActions } from 'actions/index';@Component({ selector: 'bizzlist', templateUrl: './bizz-list.html', changeDetection: ChangeDetectionStrategy.OnPush})export class BizzListComponent implements OnInit { @select([ 'bizzState']) bizzState$; public sortOrder: string; public sortKey: string; public bizzes = []; private bizzStateSubscription; constructor( public bizzActions: BizzActions ) { } ngOnInit() { this.bizzStateSubscription = this.bizzState$.subscribe( bizzState => { this.bizzes = bizzState.bizzes; this.sortKey = bizzState.sortKey; this.sortOrder = bizzState.sortOrder; }); } ngOnDestroy() { this.bizzStateSubscription.unsubscribe(); } public sortBizzes(key) { this.bizzActions.sortBizzes(key); }} As you can see, I am using an Action (called BizzActions) to do the actual Redux dispatch. You could do it in your component, but I prefer to separate these things. 
For good measure, here's my BizzActions (a Service): import { Injectable } from '@angular/core';import { NgRedux, IAppState } from 'store';import { BIZZES_SET, BIZZES_SORT } from 'store/constants';@Injectable()export class BizzActions { constructor (private ngRedux: NgRedux<IAppState>) {} public setBizzes = (bizzes: any) => { return this.ngRedux.dispatch({ type: BIZZES_SET, payload: { bizzes: bizzes } }); }; public sortBizzes = (key:string) => { return this.ngRedux.dispatch({ type: BIZZES_SORT, payload: { sortKey: key } }); };} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34475367', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5242690/']} | jdg_82328 |
stackexchange | llm_judgeable_groundtruth_similarity | 4084328 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $M$ is any model of ZFC then by Infinity it has a set $\omega^M$ . Also $\text{ZFC} \vdash (\omega \models \text{PA})$ , so $M \models (\omega \models \text{PA})$ . However, what can we say externally about $\omega^M$ ? Does it satisfy PA? Basically, what can we say externally about the natural numbers of an arbitrary model of ZFC? Also, given a model of ZFC, can we get another model of ZFC with (externally) natural numbers isomorphic to any chosen nonstandard model of PA?
Now provide the response and nothing else.
| Yes, $\omega^M$ satisfies PA. More generally, suppose $X$ is any first-order structure internal to $M$ . Then a simple induction on formulas shows that for any first-order formula $\varphi(x_1,\dots,x_n)$ and any $a_1,\dots,a_n\in X$ , $M\models(X\models\varphi(a_1,\dots,a_n))$ iff $X\models\varphi(a_1,\dots,a_n)$ . (To be clear, when I say $X\models\varphi(a_1,\dots,a_n)$ , I really mean $X'\models\varphi(a_1,\dots,a_n)$ where $X'=\{a\in M:M\models a\in X\}$ equipped with the first-order structure defined using the internal first-order structure of $X'$ .) However, $\omega^M$ cannot be an arbitrary nonstandard model of PA. For instance, since ZFC proves the consistency of PA, $M\models(\omega\models Con(PA))$ and thus $\omega^M$ must satisfy $Con(PA)$ . (There is an important subtlety here, which is that the argument of the first paragraph only applies when $\varphi$ is an actual first-order formula, i.e. an external one in the real universe, rather than a first-order formula internal to $M$ . If $\omega^M$ is nonstandard, then $M$ will have "first-order formulas" whose length is nonstandard and therefore are not actually formulas from the external perspective. This means, for instance, that if $X$ is a structure internal to $M$ which is externally a model of PA, then $M$ may not think $X$ is a model of PA, since $M$ has nonstandard axioms of PA which $X$ may not satisfy. See this neat paper for some dramatic ways that things like this can occur.) | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4084328', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/79127/']} | jdg_82329 |
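For concreteness, here is one step of that induction (the existential case), writing $X' = \{a \in M : M \models a \in X\}$; this is a sketch of the standard argument rather than a quotation: $$M \models (X \models \exists y\,\varphi(y,\bar a)) \iff \exists b\,\big(M \models b \in X \text{ and } M \models (X \models \varphi(b,\bar a))\big) \iff \exists b \in X'\; X' \models \varphi(b,\bar a) \iff X' \models \exists y\,\varphi(y,\bar a),$$ where the middle equivalence uses the induction hypothesis and the fact that the elements of $X'$ are exactly those $M$ thinks are in $X$.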
stackexchange | llm_judgeable_groundtruth_similarity | 44377713 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have read the official AWS docs and several forums, still I cant find what I am doing wrong while adding item to string_set using Python/Boto3 and Dynamodb. Here is my code: table.update_item( Key={ ATT_USER_USERID: event[ATT_USER_USERID] }, UpdateExpression="add " + key + " :val0" , ExpressionAttributeValues = {":val0" : set(["example_item"]) }, ) The error I am getting is: An error occurred (ValidationException) when calling the UpdateItem operation: An operand in the update expression has an incorrect data type\"
Now provide the response and nothing else.
| It looks like you figured out a method for yourself, but for others who come here looking for an answer: Your 'Key' syntax needs a data type (like 'S' or 'N') You need to use "SS" as the data type in ExpressionAttributeValues , and You don't need "set" in your ExpressionAttributeValues . Here's an example I just ran (I had an existing set, test_set, with 4 existing values, and I'm adding a 5th, the string 'five'): import boto3db = boto3.client("dynamodb")db.update_item(TableName=TABLE, Key={'id':{'S':'test_id'}}, UpdateExpression="ADD test_set :element", ExpressionAttributeValues={":element":{"SS":['five']}}) So before, the string set looked like ['one','two','three','four'] , and after, it looked like ['one','two','three','four','five'] | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44377713', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7143346/']} | jdg_82330 |
stackexchange | llm_judgeable_groundtruth_similarity | 1664242 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As you know, we can geometrically interpret the set of all reals $\Bbb R$: every point on the line can be uniquely expressed by a real number x (it's coordinate). However, we could go a step further and use the line to geometrically express operations between real numbers, such as addition. In order for us to do this we introduce vectors and now every point on the line and every coordinate associated with that point represent a vector starting from the origin and ending at the given point. Thus every real number either represents a point on the line or a vector on the line. Now, in the definition of the derivative$$ \frac{{\rm d} }{{\rm d}x} y(x) =\lim_{h\rightarrow0} \frac{1}{h}( y(x+h)-y(x)) $$ we have x+h, which only makes sense geometrically if x and h are both vectors (addition of points on the line makes no sense). My question is: In order to have a meaningful definiton of the derivative, should the function y(x) be geometrically interpreted as mapping vectors to vectors?
Now provide the response and nothing else.
| You seem to be regarding $\mathbb{R}$ as an affine space that includes both points and vectors. With this view of the world, points and vectors are two different things, and it doesn't make sense to add together two points, as you say. The Wikipedia article on affine spaces says: In an affine space, there is no distinguished point that serves as an origin. Hence, no vector has a fixed origin and no vector can be uniquely associated to a point. In an affine space, there are instead displacement vectors, also called translation vectors or simply translations, between two points of the space. Thus it makes sense to subtract two points of the space, giving a translation vector, but it does not make sense to add two points of the space. Likewise, it makes sense to add a vector to a point of an affine space, resulting in a new point translated from the starting point by that vector. In your definition of the derivative, I think $x$ should be regarded as a point, and $h$ as a vector. The function $x \mapsto y(x)$ should be regarded as mapping points to points. Then $x+ h$ is a point (adding a vector to a point) $y(x+h)$ is a point (a value of the function $y$) $y(x+h) - y(x)$ is a vector (difference of two points) $y'(x)$ is a vector This point of view makes sense (I think), and it's consistent with the concept of directional derivative in higher dimensions. As mentioned in one of the comments, there is a very good discussion of the point-versus-vector issue in the answers to this question . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1664242', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/293981/']} | jdg_82331 |
stackexchange | llm_judgeable_groundtruth_similarity | 457269 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the Kalb-Ramond field $B_{\mu\nu}$ which is basically a massless $2$ -form field with the Lagrangian $$\mathcal L = \frac{1}{2}P_{\alpha\mu\nu}P^{\alpha\mu\nu}\,,$$ where $P_{\alpha\mu\nu} \equiv \partial_{[\alpha}B_{\mu\nu]}$ is the field strength, invariant under the gauge transformation $$B_{\mu\nu} \to B_{\mu\nu} + \partial_{[\mu}\epsilon_{\nu]}\,.$$ I am trying to calculate the number of degrees of freedom the theory has. A general $4\times4$ antisymmetric matrix has $6$ independent entries. Let us try to fix the redundancy by choosing a gauge $\epsilon_\mu$ such that the gauge-fixed field is divergence free. \begin{align}\partial^\alpha\left( B_{\alpha\beta} + \partial_{[\alpha}\epsilon_{\beta]} \right) &= 0 \\\Rightarrow \left(\delta^\alpha_\beta\square- \partial^\alpha\partial_\beta\right)\epsilon_\alpha &= -\partial^\alpha B_{\alpha\beta}\,.\end{align} The above is nothing but Maxwell's equations, and hence $\epsilon_\alpha$ has $3$ off-shell degrees of freedom. This means we are left with $6-3=3$ degrees of freedom for $B_{\mu\nu}$ . However, we should be able to kill $2$ more degrees of freedom because we know that a massless $2$ -form field is physically equivalent to a massless scalar field which has only $1$ degree of freedom. Do you see where the remaining gauge redundancy is?
Now provide the response and nothing else.
| It is natural to generalize to an Abelian $p$ -form gauge field $$A~=~\frac{1}{p!} A_{\mu_1\mu_2\ldots\mu_p} \mathrm{d}x^{\mu_1}\wedge\ldots\wedge \mathrm{d}x^{\mu_p}\tag{1}$$ with $\begin{pmatrix} D \cr p \end{pmatrix}$ real component fields $A_{\mu_1\mu_2\ldots\mu_p}$ in a $D$ -dimensional spacetime. I) Massless case: There is a gauge symmetry $$ \delta A ~=~\mathrm{d}\Lambda , \qquad \Lambda~=~\frac{1}{(p\!-\!1)!} \Lambda_{\mu_1\mu_2\ldots\mu_{p-1}} \mathrm{d}x^{\mu_1}\wedge\ldots\wedge \mathrm{d}x^{\mu_{p-1}}, \tag{2}$$ with $\begin{pmatrix} D \cr p\!-\!1 \end{pmatrix}$ gauge parameters $\Lambda_{\mu_1\mu_2\ldots\mu_{p-1}}$ ; and a gauge-for-gauge symmetry $$ \delta \Lambda ~=~\mathrm{d}\xi , \qquad \xi~=~\frac{1}{(r\!-\!2)!} \xi_{\mu_1\mu_2\ldots\mu_{p-2}} \mathrm{d}x^{\mu_1}\wedge\ldots\wedge \mathrm{d}x^{\mu_{p-2}}, \tag{3}$$ with $\begin{pmatrix} D \cr p\!-\!2 \end{pmatrix}$ gauge-for-gauge parameters $\xi_{\mu_1\mu_2\ldots\mu_{p-1}}$ ; and a gauge-for-gauge-for-gauge symmetry $\ldots$ ; and so forth. Lemma: There are $\begin{pmatrix} D\!-\!1 \cr p\!-\!1 \end{pmatrix}$ independent gauge symmetries; there are $\begin{pmatrix} D\!-\!1 \cr p\!-\!2 \end{pmatrix}$ independent gauge-for-gauge symmetries; there are $\begin{pmatrix} D\!-\!1 \cr p\!-\!3 \end{pmatrix}$ independent gauge-for-gauge-for-gauge symmetries; and so forth. Sketched proof: This is correct for $p=1$ . Now use induction $\begin{pmatrix} D\!-\!1 \cr p\!-\!1 \end{pmatrix}=\begin{pmatrix} D \cr p\!-\!1 \end{pmatrix}-\begin{pmatrix} D\!-\!1 \cr p\!-\!2 \end{pmatrix}$ for $p\geq 2$ while keeping $D$ fixed. $\Box$ From the EL equations $$\sum_{\mu_0=0}^{D-1}d_{\mu_0}F^{\mu_0\mu_1\ldots\mu_p}~= ~0,\tag{4}$$ we see that the temporal gauge fields $$A^{0i_1i_2\ldots i_{p-1}},\qquad i_1, i_2, \ldots, i_{p-1}~\in~ \{1,\ldots,D\!-\!1\},\tag{5}$$ are not propagating, i.e. their time derivatives don't appear. They are fixed by boundary conditions (up to non-trivial topology). This leaves us with the spatial gauge fields $$A^{i_1i_2\ldots i_p},\qquad i_1, i_2, \ldots, i_p~\in~ \{1,\ldots,D\!-\!1\},\tag{6}$$ which are $$\fbox{$\begin{pmatrix} D\!-\!1 \cr p \end{pmatrix} \text{ massless propagating off-shell DOF,}$}\tag{7}$$ which have $\begin{pmatrix} D\!-\!2 \cr p\!-\!1 \end{pmatrix}$ remaining independent gauge symmetries, cf. the Lemma. The Lorenz gauge conditions $^1$ $$\sum_{\mu_0=0}^{D-1}d_{\mu_0}A^{\mu_0i_1\ldots i_{p-1}}~=~ 0, \qquad i_1, i_2, \ldots, i_{p-1}~\in~ \{1,\ldots,D\!-\!1\},\tag{8}$$ or equivalently (since there are no temporal gauge fields left), the $\begin{pmatrix} D\!-\!2 \cr p\!-\!1 \end{pmatrix}$ Coulomb gauge conditions $$\sum_{i_0=1}^{D-1}d_{i_0}A^{i_0a_1\ldots a_{p-1}}~=~0,\qquad a_1, \ldots, a_{p-1}~\in~ \{1,\ldots,D\!-\!2\},\tag{9}$$ (which match the number of remaining independent gauge symmetries) can be used to eliminate polarizations along one spatial direction, say $x^{D-1}$ . Therefore there are only $$\begin{pmatrix} D\!-\!1 \cr p \end{pmatrix}-\begin{pmatrix} D\!-\!2 \cr p\!-\!1 \end{pmatrix}~=~\fbox{$\begin{pmatrix} D\!-\!2 \cr p \end{pmatrix} \text{ massless on-shell DOF,}$} \tag{10}$$ given by transversal component fields $$A^{a_1a_2\ldots a_p},\qquad a_1, a_2, \ldots, a_p\in \{1,\ldots,D\!-\!2\},\tag{11}$$ which each satisfies a decoupled wave eq. $$\Box A^{a_1a_2\ldots a_p}~=~0.\tag{12}$$ For the 4D Kalb-Ramond 2-form field, this leaves just 1 component, cf. OP's question. 
II) Massive case: There is no gauge-symmetry, so all field components are $$\fbox{$\begin{pmatrix} D \cr p \end{pmatrix} \text{ massive propagating off-shell DOF.}$}\tag{13}$$ The massive EL equations imply $\begin{pmatrix} D \cr p\!-\!1 \end{pmatrix}$ Lorenz conditions $$\sum_{\mu_0=0}^{D-1}d_{\mu_0}A^{\mu_0\mu_1\ldots \mu_{p-1}}~=~ 0, \qquad \mu_1, \mu_2, \ldots, \mu_{p-1}~\in~ \{0,\ldots,D\!-\!1\}.\tag{14}$$ They follow from $\begin{pmatrix} D\!-\!1 \cr p\!-\!1 \end{pmatrix}$ spatial Lorenz conditions $$\sum_{\mu_0=0}^{D-1}d_{\mu_0}A^{i_0i_1\ldots i_{p-1}}~=~ 0, \qquad i_1, i_2, \ldots, i_{p-1}~\in~ \{1,\ldots,D\!-\!1\},\tag{15}$$ which can be used to eliminate polarizations along the temporal direction $x^{0}$ . Therefore there are only $$\begin{pmatrix} D \cr p \end{pmatrix}-\begin{pmatrix} D\!-\!1 \cr p\!-\!1 \end{pmatrix}~=~\fbox{$\begin{pmatrix} D\!-\!1 \cr p \end{pmatrix} \text{ massive on-shell DOF,}$} \tag{16}$$ given by spatial component fields $$A^{i_1i_2\ldots i_p},\qquad i_1, i_2, \ldots, i_p\in \{1,\ldots,D\!-\!1\},\tag{17}$$ which each satisfies a decoupled wave eq. $$\Box A^{i_1i_2\ldots i_p}~=~0.\tag{18}$$ III) Alternatively, the massive $p$ -form in $D$ spacetime dimensions can be gotten from dimensional reduction of the massless $p$ -form in $D\!+\!1$ spacetime dimensions by eliminating $x^D$ -components of the gauge field $A$ via gauge symmetry, and identifying momentum $p^D=m$ . -- $^1$ One may show that the Lorenz conditions (9) not directly listed in eq. (8) still follow indirectly from eq. (8). | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/457269', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/76347/']} | jdg_82332 |
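As a quick numerical check of the boxed formulas for the case in the question ($D=4$, $p=2$, the Kalb-Ramond field): $$\binom{D-1}{p}=\binom{3}{2}=3 \text{ massless propagating off-shell DOF (matching the count of 3 reached in the question after the divergence-free gauge fixing)},\qquad \binom{D-2}{p}=\binom{2}{2}=1 \text{ massless on-shell DOF},$$ $$\binom{D}{p}=\binom{4}{2}=6 \text{ massive off-shell DOF},\qquad \binom{D-1}{p}=\binom{3}{2}=3 \text{ massive on-shell DOF}.$$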
stackexchange | llm_judgeable_groundtruth_similarity | 415588 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How to separate strings and numbers from one line using a bash command. Example: I have a string containing string123anotherstr456thenanotherstr789 The output should be: string123anotherstr456thenanotherstr789
Now provide the response and nothing else.
| GNU grep or compatible solution: s="string123anotherstr456thenanotherstr789"grep -Eo '[[:alpha:]]+|[0-9]+' <<<"$s" [[:alpha:]]+|[0-9]+ - regex alternation group, matches either alphabetic character(s) or number(s); both will be considered as separate entries on output The output: string123anotherstr456thenanotherstr789 | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/415588', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/112308/']} | jdg_82333 |
stackexchange | llm_judgeable_groundtruth_similarity | 4885584 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an idea of how to use action listeners and implementing them but I was wondering if anyone could tell me how do they listen to the events? Is there some kind of polling mechanism?
Now provide the response and nothing else.
| Action listeners register for Events using the Observer pattern and they are notified, by the main event loop, of any events they are registered for. So no, it's not a polling (pull) mechanism, but the opposite - a (push) callback. This is an example of 'don't call us, we'll call you' programming. Because everything in your code runs off a single thread (the event loop) you don't have to worry about synchronizing between different events - so your code is threadsafe. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4885584', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/593418/']} | jdg_82334 |
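A minimal Java/Swing sketch of the register-then-callback flow described above (the component names are illustrative): the listener is registered once, nothing polls, and the event dispatch thread later pushes the event into the callback.

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class ListenerDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("listener demo");
            JButton button = new JButton("Click me");
            // Registration (Observer pattern): the button keeps a reference to this listener.
            button.addActionListener(event ->
                    // Callback: invoked by the event dispatch thread when the click occurs.
                    System.out.println("clicked: " + event.getActionCommand()));
            frame.add(button);
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}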
stackexchange | llm_judgeable_groundtruth_similarity | 18668897 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
getChildAt(i) on gets only the direct children of a ViewGroup , is it possible to access to all children without doing nested loops?
Now provide the response and nothing else.
| ( source ) If you want to get all the child views, as well as the views within children ViewGroups , you must do it recursively, since there is no provision in the API to do this out of the box. private ArrayList<View> getAllChildren(View v) { if (!(v instanceof ViewGroup)) { ArrayList<View> viewArrayList = new ArrayList<View>(); viewArrayList.add(v); return viewArrayList; } ArrayList<View> result = new ArrayList<View>(); ViewGroup viewGroup = (ViewGroup) v; for (int i = 0; i < viewGroup.getChildCount(); i++) { View child = viewGroup.getChildAt(i); ArrayList<View> viewArrayList = new ArrayList<View>(); viewArrayList.add(v); viewArrayList.addAll(getAllChildren(child)); result.addAll(viewArrayList); } return result;} This will give you an ArrayList with all the Views in the hierarchy which you can then iterate over. Essentially, this code call itself if it finds another ViewGroup in the hierarchy, and then returns an ArrayList to be added to the bigger ArrayList. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18668897', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_82335 |
stackexchange | llm_judgeable_groundtruth_similarity | 1865819 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
OSGi allows for dependencies to be determined via Import-Package , which just wires up a single package (exported from any bundle), and Require-Bundle , which wires up to a specific named bundle's exports. In building a greenfield OSGi application, which approach should I use to represent dependencies? Most of the bundles will be internal, but there will be some dependencies on external (open-source) bundles.
Now provide the response and nothing else.
| I believe Require-Bundle is an Eclipse thing (that has now made it in the OSGi spec to accommodate Eclipse). The "pure" OSGi way is to use Import-Package , as it specifically decouples the package from the bundle that provides it. You should be declaring dependencies on functionality that you need (the Java API provided by a certain version of a certain package) instead of where that functionality is coming from (which should not matter to you). This keeps the composition of bundles more flexible. JavaScript analogy: This is like detecting whether a web browser supports a certain API versus inferring from what the user-agent string says what kind of browser it is. Peter Kriens of the OSGi Alliance has more to say about this on the OSGi blog . Probably the only case where you need to use Require-Bundle is if you have split packages, that is a package that is spread across multiple bundles. Split packages are of course highly discouraged. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1865819', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/130659/']} | jdg_82336 |
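A rough illustration (bundle and package names are made up) of how the two headers look in a MANIFEST.MF:

Import-Package: com.example.logging;version="[1.2,2.0)"

Require-Bundle: com.example.logging.provider;bundle-version="1.2.0"

With the first header, any bundle exporting a compatible com.example.logging package can satisfy the dependency; the second ties you to one specific provider bundle.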
stackexchange | llm_judgeable_groundtruth_similarity | 41935 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I ask, because I have to come up with a first-order logic sentence that shows that there are exactly N objects in the universe. What I've been able to come up with is: $$ \forall x \; \exists y_1, y_2, \dots , y_{n-1} \; (x \neq y_1) \land (x \neq y_2) \land \dots \land (x \neq y_{n-1})$$ I'm not sure if this is a valid first-order logic sentence or not.
Now provide the response and nothing else.
| Strictly speaking, your statement is invalid because $\ldots$ is not part of the syntax of first-order logic. However, your statement is an abbreviation of a statement in first-order logic. For example, when $n = 3$, your statement is an abbreviation of the bona fide statement$$ \forall x, \exists y_1, y_2 (x \neq y_1) \land (x \neq y_2). $$As long as you understand this difference, you can use the statement. Your statement is stated in the metalanguage , and it corresponds to a bona fide statement of the language . Here is a similar example. The following defines a number, for every integer $n \geq 1$:$$\overbrace{11\cdots 1}^{\text{$n$ times}}$$For example, when $n = 5$, this number is $11111$. While the notation above, which belongs to the metalanguage, is not in itself a valid number, for every specific $n$ it satisfies a valid number. You can think of it as a pattern , a number scheme (in your case, a statement scheme ), or a function that takes $n$ and outputs a number (in your case, a statement). That said, your statement (for every $n > 1$) states that there are at least two different objects in the universe. So while it is a valid statement, it doesn't state what you intended it to. | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/41935', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/31147/']} | jdg_82337 |
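For comparison, the standard first-order scheme that does assert there are exactly $n$ objects (again a statement of the metalanguage that expands to a genuine formula for each fixed $n$) is: $$\exists x_1 \cdots \exists x_n \Big( \bigwedge_{1 \le i < j \le n} x_i \neq x_j \;\land\; \forall y\, \bigvee_{i=1}^{n} y = x_i \Big).$$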
stackexchange | llm_judgeable_groundtruth_similarity | 7863956 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: Why is a C++ bool var true by default? Say I were to do something like this: class blah{ public: bool exampleVar;};blah exampleArray[4];exampleArray[1].exampleVar = true; In exampleArray, there are now 3 unset instances of exampleVar, what are their default values without me setting them?
Now provide the response and nothing else.
| The default value depends on the scope that exampleArray is declared in. If it is local to a function the values will be random, whatever values those stack locations happened to be at. If it is static or declared at file scope (global) the values will be zero initialized. Here's a demonstration. If you need a member variable to have a deterministic value always initialize it in the constructor. class blah{ public: blah() : exampleVar(false) {} bool exampleVar;}; EDIT: The constructor in the above example is no longer necessary with C++11. Data members can be initialized within the class declaration itself. class blah{ public: bool exampleVar = false;}; This inline default value can be overridden by a user-defined constructor if desired. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7863956', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/434419/']} | jdg_82338 |
stackexchange | llm_judgeable_groundtruth_similarity | 709 |
Below is a question asked on the forum devops.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Introduction On private CIs and source control repositories like Gitlab-ce it is possible to copy a ~/.vault_pass.txt to the server and let it be used by the CI to decrypt files using Ansible. Problem On public CIs and source control repositories like Bitbucket it is not possible to copy a ~/.vault_pass.txt to the CI server itself. Discussion In Bitbucket it is possible to define encrypted variables, but when this is checked the only VAULT-related variables are: ANSIBLE_ASK_VAULT_PASS ANSIBLE_VAULT_PASSWORD_FILE These variables are not an option to solve the issue, as when ANSIBLE_ASK_VAULT_PASS is set ansible-vault still prompts: user@host $Vault password: When the same password is entered it can open the encrypted file, but the aim is to open the file without needing a file or entering a password in a prompt. Another attempt to solve the issue was running export ANSIBLE_ASK_VAULT_PASS=<ansible-vault-password>, but the interactive mode persists. Another option is export ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt , but then this file needs to be pushed to the repository, and source control repositories should not contain passwords.
Now provide the response and nothing else.
| --vault-password-file can instead point to an executable script that writes to stdout. This little-known feature should solve your problem. First, write a simple, executable script that prints a system environment variable, and check that into your source control. Then, use Bitbucket's encrypted variable feature to set that environment variable to your ansible-vault secret. Finally, execute like so: ansible-playbook site.yml --vault-password-file ./mypass.sh . References: http://docs.ansible.com/ansible/playbooks_vault.html#running-a-playbook-with-vault https://groups.google.com/forum/#!topic/ansible-devel/1vFc3y6Ogto | {} | {'log_upvote_score': 5, 'links': ['https://devops.stackexchange.com/questions/709', 'https://devops.stackexchange.com', 'https://devops.stackexchange.com/users/210/']} | jdg_82339 |
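A minimal sketch of what the ./mypass.sh script could contain, assuming the Bitbucket secured variable is exposed to the build as an environment variable named ANSIBLE_VAULT_PASSWORD (the variable name is arbitrary); remember to chmod +x the script before checking it in:

#!/usr/bin/env bash
# Print the vault password that the CI injected as a secured environment variable.
echo "$ANSIBLE_VAULT_PASSWORD"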
stackexchange | llm_judgeable_groundtruth_similarity | 666165 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have long-running programs that can restart their internal state. I want to see the log file entries only for the most recent state (to load into vim 's quickfix). How can I show all lines after the last occurrence of the string STARTING SESSION ? My current solution (log files are sometimes gigabytes long, so I never look at more than the last 5000 lines): tail -n5000 logfile.log | grep -B5000 -v -e 'STARTING SESSION'> shortened.log This works well when sessions produce a lot of logging, but if I have shorter logs with many restarts, it includes multiple sessions. Essentially, I want something like a --reverse flag that would make grep search from the end of the file instead of the start: grep --reverse --after-context=5000 --max-count=1 'STARTING SESSION' logfile.log Notes: The question is similar to Print line after nth occurrence of a match , but I want the last occurrence. The problem is almost the same as Getting text from last marker to EOF in POSIX.2 except that I don't have a POSIX requirement and my files are large. I'd prefer efficient solutions with GNU utils (I'm using mingw64 ).
Now provide the response and nothing else.
| Reverse the file, display it until the first occurrence, and reverse the output again: tac logfile.log | sed '/STARTING SESSION/q' | tac tac is efficient when given a regular (seekable) file to process , and since sed exits as soon as it sees the start line, the whole pipeline will only process the end of the log file as far as necessary (rounded up to tac ’s, sed ’s, and the kernel’s buffer sizes). This should scale well to large files. tac is a GNU utility. On non-GNU systems, you can often use tail -r to do the same. If the log file doesn’t have a “STARTING SESSION” line at all, this won’t produce the same behaviour as your grep : it will output the complete log file. To avoid this, a variant of Kusalananda’s approach can be used instead: tac logfile.log | sed -n '/STARTING SESSION/{H;x;p;q;};H' | tail -n +2 | tac The sed expression looks for “STARTING SESSION”, and when matched, append the current line to the hold space, swaps the hold space with the pattern space, outputs it and exits; any other line is appended to the hold space. tail -n +2 is used to skip the first blank line (appending the pattern space to the hold space adds a leading newline). | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/666165', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/21401/']} | jdg_82340 |