Dataset fields:
- source: stringclasses (1 value)
- task_type: stringclasses (1 value)
- in_source_id: stringlengths (1 to 8)
- prompt: stringlengths (209 to 40.4k)
- gold_standard_solution: stringlengths (0 to 56.7k)
- verification_info: stringclasses (1 value)
- metadata: stringlengths (138 to 225)
- problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
267540
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In my new project, I decided to try TDD, and at the very beginning I ran into a problem. The first thing I want my application to do is read data from a data source, and for this I want to use the repository pattern. And now: If the tests are for the real implementation of the repository interface, I will be testing a class that has access to the database, which I know I should avoid. If the tests are for a fake implementation of the repository, I will be testing, well... just a mock. No production code would be exercised by those unit tests. I've been thinking about this for two days and still cannot come up with a reasonable solution. What should I do? Now provide the response and nothing else.
What a repository does is translate from your domain onto your DAL framework, such as NHibernate or Doctrine, or onto your SQL-executing classes. This means that your repository will call methods on said framework to perform its duties: your repository constructs the queries needed to fetch the data. If you're not using an ORM framework (I hope you are...), the repository would be the place where raw SQL statements are built. The most basic of these methods is the save: in most cases this will simply pass the object from the repository onto the unit of work (or the session).

```csharp
public void Save(Car car)
{
    session.Save(car);
}
```

But let's look at another example, for instance fetching a car by its ID. It might look like

```csharp
public Car GetCarWithId(string id)
{
    return Session.QueryOver<Car>()
                  .Where(x => x.Id == id)
                  .SingleOrDefault();
}
```

Still not too complex, but you can imagine that with multiple conditions (get me all the cars made after 2010 for all brands in the 'Volkswagen' group) this gets tricky. So in true TDD fashion you need to test this. There are several ways to do so.

Option 1: Mock the calls made to the ORM framework

Sure, you can mock the Session object and simply assert that the right calls are made. While this tests the repository, it is not really test-driven, because you are just asserting that the repository internally looks the way you want it to. The test basically says 'the code should look like this'. Still, it is a valid approach, but this kind of test tends to have very little value.

Option 2: (Re)build the database from the tests

Some DAL frameworks give you the ability to build the complete structure of the database based on the mapping files you create to map the domain onto the tables. For these frameworks, the way to test repositories is often to create the schema in an in-memory database in the first step of the test and add objects to it using the DAL framework. After this, you can use the repository on the in-memory database to test whether the methods work. These tests are slower, but they are perfectly valid and they do drive your code. They do require some cooperation from your DAL framework.

Option 3: Test on an actual database

Another approach is to test on an actual database and isolate each unit test. You can do this in several ways: surround your tests with a transaction, clean up manually (not recommended, as it is very hard to maintain), completely rebuild the database after each step... Depending on the application you are building this may or may not be feasible. In my applications I can completely build a local development database from source control, and my unit tests on repositories use transactions to fully isolate the tests from each other (open transaction, insert data, test repository, roll back transaction). Every build first sets up the local development database and then performs transaction-isolated unit tests for the repositories on that database. It's a little slower than a pure unit test, but the tests are extremely valuable and catch a lot of issues.

Don't test the DAL

If you are using a DAL framework such as NHibernate, avoid the need to test that framework. You could test your mapping files by saving, retrieving and then comparing a domain object to make sure everything is okay (be sure to disable any sort of caching), but it's not as essential as a lot of other tests you should be writing. I tend to do this mostly for collections on parents with conditions on the children.
When testing the return values of your repositories, you could simply check whether some identifying property on your domain object matches. This can be an id, but in tests it's often more useful to check a human-readable property. In the 'get me all the cars made after 2010...' example this could simply check that five cars are returned and that the license plates are 'insert list here'. An added benefit is that it forces you to think about sorting, and your test automatically enforces the sorting. You'd be surprised how many applications either sort multiple times (return sorted from the database, sort before creating a view object and then sort the view object, all on the same property, just in case) or implicitly assume the repository sorts and accidentally remove that somewhere along the way, breaking the UI.

'Unit test' is just a name

In my opinion, unit tests should mostly not hit the database. You build an application so that every piece of code that needs data from a source does so through a repository, and that repository is injected as a dependency. This allows for easy mocking and all the TDD goodness you want. But in the end you want to make sure that your repositories perform their duties, and if the easiest way to do that is to hit a database, well, so be it. I've long let go of the notion that 'unit tests should not touch the database' and learned that there are very real reasons to do this, but only if you can do it automatically and repeatably. And whether we call such a test a 'unit test' or an 'integration test' is moot.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/267540', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/48334/']}
jdg_83141
stackexchange
llm_judgeable_groundtruth_similarity
21731112
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to dynamically execute a function for every item of an array, but Array.forEach executes in sequence and I need asynchronous execution.

```js
items.forEach(function(item) {
    doSomething(item);
});
```

I tried this:

```js
var promises = [];
items.forEach(function(item) {
    var promise = function() {
        return Q.fcall(function() {
            doSomething(item);
        });
    };
    promises.push(promise());
});
Q.all(promises).then(function () {
    otherFunction(datacontext.mainList); // use datacontext.mainList once filled
});
```

But the execution is still always in sequence, and I need it to run in parallel. The doSomething(item) method:

```js
function doSomething(item) {
    var children = getChildren(item); // get data locally with manager.executeQueryLocally
    var total = getTotal(children);   // simple calculations
    datacontext.mainList.push({
        name: item.firstName() + ' ' + item.lastName(),
        total: total
    });
}
```

Please help me. Now provide the response and nothing else.
This answer assumes that doSomething is itself an asynchronous operation. This means that it will have to yield to the event loop and wait for another event at least once in a while. If doSomething is synchronous, there is no benefit to composing it asynchronously. Within the realm of composite asynchronous jobs, there are serial and parallel variations. The serial model causes job(n + 1) to begin only after job(n) finishes. The parallel model starts all jobs initially and finishes when all jobs are finished. In both of these regards, I can give you tips.

In parallel, you can use Array map and Q.all, assuming doSomething accepts a value from jobs and returns a promise:

```js
return Q.all(jobs.map(doSomething));
```

To do jobs in serial order, use Array reduce:

```js
return jobs.reduce(function (previous, job) {
    return previous.then(function () {
        return doSomething(job);
    });
}, Q());
```

If you want to perform jobs in serial, but only proceed to the next job depending on the result of the previous one, you can use reduceRight to compose a function:

```js
return jobs.reduceRight(function (next, job) {
    return function (previous) {
        return doSomething(job).then(function (result) {
            if (result.isGood(previous)) return result;
            return next(result);
        });
    };
}, function fallthrough(previous) {
    throw new Error("No result was satisfactory");
})();
```

If you have an array that contains functions that need to be executed in order, feeding the output of the previous into the input of the next, you can use reduce even more succinctly:

```js
return functions.reduce(Q.when, Q());
```

The tutorial in Q’s readme covers more cases and I am told it has been helpful: https://github.com/kriskowal/q#tutorial
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21731112', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2182975/']}
jdg_83142
stackexchange
llm_judgeable_groundtruth_similarity
674153
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need to prove that $\sin(x) > \frac{x}{2}$ if $0<x<\pi/2$ I've started working with the derivative, but if it's possible, I'd rather something simpler than that. Now provide the response and nothing else.
Let's rewrite the desired inequality as $${\theta\over2}\lt\sin\theta\quad\text{for }0\lt\theta\le{\pi\over2}$$ To prove this, draw the portion of the unit circle in the first quadrant, draw a typical angle $\theta$ from the origin $O$ to a point $P=(\cos\theta,\sin\theta)$, and let $Q=(1,0)$. The wedge of the circle for this angle has area $\theta/2$. It consists of two pieces: the triangle $\triangle OPQ$, of area ${1\over2}\sin\theta$, and a lens-shaped piece outside the triangle. Here's the key point to convince yourself of: If you reflect the lens-shaped region across the line $PQ$, it stays inside the triangle $\triangle OPQ$, and therefore has smaller area. The upshot is that the area of the wedge is less than twice the area of the triangle, which is exactly what we want. Added later : Here's a second proof based on the same picture. In addition to the points $P=(\cos\theta,\sin\theta)$ and $Q=(1,0)$, let $R=(1,y)$ be the point of intersection of the two tangents to the circle at $P$ and $Q$. Now the length of the circular arc from $Q$ to $P$ is, by definition, $\theta$ (when the angle is measured in radians ). But this length is less than the sum of the lengths $QR$ and $PR$. By symmetry, those two lengths are the same. Thus, since the length of $PR$ is obviously just $y$, we have $\theta\lt2y$. But $R$ clearly lies closer to the $x$-axis than $P$, hence $y\lt\sin\theta$.
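For completeness, the second argument compresses into a single chain of inequalities. Here $y=\tan(\theta/2)$ by the standard tangent-length fact (not spelled out above): $$\theta \;<\; QR+PR \;=\; 2y \;=\; 2\tan\frac{\theta}{2} \;=\; \frac{2\sin\theta}{1+\cos\theta} \;\le\; 2\sin\theta \quad\text{for } 0<\theta\le\frac{\pi}{2},$$ and since the first inequality is strict, dividing by $2$ gives $\theta/2<\sin\theta$.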
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/674153', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/114107/']}
jdg_83143
stackexchange
llm_judgeable_groundtruth_similarity
2644
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In other words, are the censorship & ratings of movies and other forms of art based on something more than nothing? ;) Or, in a broader sense, I am questioning the very widespread belief that kids should not be allowed to watch anything sex-related. I am talking only the ratings/censorship based on the nudity, sex, explicitness, etc., i.e. not violence, profanity, drugs and other things movies can get bad MPAA rating (or an equivalent). Please no anecdotal evidences, nor any discussions about whether kids should be interested in these things, or do they (should they?) understand them, and so forth. I am only interested in the direct and measurable harm aspect. Now provide the response and nothing else.
It seems that all of the answers have gravitated towards depictions of sex. Either because nudity equals sex (while in Hollywood, this appears to generally be the case - it's extremely rare to see a nude scene that isn't purely for titillation, rather than because someone just woke up or got out of the shower), or because it hadn't occurred to them that they might be separate. If you want to know if social nudity - namely nudity that is purely nonsexual - has any effect on children, just ask a nudist. This document explains one nudist club's stance on the effects of social nudity on children, and cites several studies. The conclusion they reach is that there is absolutely no negative effect, and there may be a positive effect on the psyche. The basic nudist philosophy is that the extreme modesty of the Victorian era was in fact harmful (which was proven by many studies before and since), and as such, perhaps its inverse - a total lack of modesty - is beneficial. Studies aside, nudists themselves have observed no obvious negative effects in children - very young children especially like to be naked, and generally the idea that there's nothing inherently wrong with the human body in its natural state promotes better body image in older children as well. See the bottom of the document above about Casler's study, and the older children's reactions in his interviews. It's generally parents' negative reactions to nudity in TV and film that are most harmful to children. Especially if that reaction is particularly unhinged and panicked - see also: Super Bowl XXXVIII halftime show controversy Edit: There was a paper written on this subject (actually, of children witnessing parental nudity and sexual activity, which of course, is different from what you see on TV, since parents have such a central and authoritative role in a child's life) in the Journal of Sex Research. The paper can be found on here . Consistent with the cross-sectional retrospective literature (and with our expectations), no harmful main effects of these experiences were found at age 17-18. Indeed, trends in the data that were significant at p [less than] 0.05 but did not reach significance following the Bonferonni correction indicated primarily beneficial correlates of both of these variables. Exposure to parental nudity was associated with positive, rather than negative, sexual experiences in adolescence, but with reduced sexual experience overall. Boys exposed to parental nudity were less likely to have engaged in theft in adolescence or to have used various psychedelic drugs and marijuana. Taken as a whole then, effects are few, but generally beneficial in nature. Thus, results of this study add weight to the views of those who have opposed alarmist characterizations of childhood exposure both to nudity and incidental scenes of parental sexuality. This is very likely the kind of thing (historically speaking, of course, which the paper touches on) that the MPAA and the government bases such things upon. Mostly this seems to harken back to Freud and early students thereof, who assumed that nearly every psychological disorder originated from witnessing such things as a small child. The paper linked above questions this assumption and tries to find actual clinical research to support the claim.
{}
{'log_upvote_score': 6, 'links': ['https://skeptics.stackexchange.com/questions/2644', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/-1/']}
jdg_83144
stackexchange
llm_judgeable_groundtruth_similarity
39985856
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to figure out how to use Alamofire 4.0 with Swift 3.0 to send a p12 (I also have the PEM cert and key if need be) to a website for authentication. All the examples I have seen are for Swift 2.0 and not exactly what I'm looking for. In Safari on my Mac I can access the site by putting the p12 in the keychain and sending it when Safari asks, so I know that portion works. I don't know if anyone can help me with an example of how to do so in Alamofire 4.0 and Swift 3.0 in an application. The certificates are self-signed as well. Any thoughts or help? I am not just looking to pin the certificate, as the client key and cert need to be sent to the server for access... Now provide the response and nothing else.
I was able to get it to work. A few issues got into the way. First, you have to allow IOS to accept self signed certificates. This requires to set up AlamoFire serverTrustPolicy: let serverTrustPolicies: [String: ServerTrustPolicy] = [ "your-domain.com": .disableEvaluation ]self.sessionManager = Alamofire.SessionManager( serverTrustPolicyManager: ServerTrustPolicyManager(policies: serverTrustPolicies) ) From there, you have to override the sessionDidRecieveChallenge to send the client certificate. Because i wanted to use a p12 file I modified some code I found elsewhere (sorry i don't have the source anymore) to make is Swift 3.0 to import the p12 using foundation classes: import Foundationpublic class PKCS12 { var label:String? var keyID:Data? var trust:SecTrust? var certChain:[SecTrust]? var identity:SecIdentity? let securityError:OSStatus public init(data:Data, password:String) { //self.securityError = errSecSuccess var items:CFArray? let certOptions:NSDictionary = [kSecImportExportPassphrase as NSString:password as NSString] // import certificate to read its entries self.securityError = SecPKCS12Import(data as NSData, certOptions, &items); if securityError == errSecSuccess { let certItems:Array = (items! as Array) let dict:Dictionary<String, AnyObject> = certItems.first! as! Dictionary<String, AnyObject>; self.label = dict[kSecImportItemLabel as String] as? String; self.keyID = dict[kSecImportItemKeyID as String] as? Data; self.trust = dict[kSecImportItemTrust as String] as! SecTrust?; self.certChain = dict[kSecImportItemCertChain as String] as? Array<SecTrust>; self.identity = dict[kSecImportItemIdentity as String] as! SecIdentity?; } } public convenience init(mainBundleResource:String, resourceType:String, password:String) { self.init(data: NSData(contentsOfFile: Bundle.main.path(forResource: mainBundleResource, ofType:resourceType)!)! as Data, password: password); } public func urlCredential() -> URLCredential { return URLCredential( identity: self.identity!, certificates: self.certChain!, persistence: URLCredential.Persistence.forSession); }} This will allow me to import the file, and send it back to the client. let cert = PKCS12.init(mainBundleResource: "cert", resourceType: "p12", password: "password");self.sessionManager.delegate.sessionDidReceiveChallenge = { session, challenge in if challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodClientCertificate { return (URLSession.AuthChallengeDisposition.useCredential, self.cert.urlCredential()); } if challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust { return (URLSession.AuthChallengeDisposition.useCredential, URLCredential(trust: challenge.protectionSpace.serverTrust!)); } return (URLSession.AuthChallengeDisposition.performDefaultHandling, Optional.none); } Now you can use the sessionManager to create as many calls as you need to. As a note, i've also added the following to the info.plist as recomended to get around the new security features in newer iOS features: <key>NSAppTransportSecurity</key> <dict> <key>NSAllowsArbitraryLoads</key> <true/> <key>NSExceptionDomains</key> <dict> <key>your-domain.com</key> <dict> <key>NSIncludesSubdomains</key> <true/> <key>NSExceptionRequiresForwardSecrecy</key> <false/> <key>NSExceptionAllowsInsecureHTTPLoads</key> <true/> </dict> </dict> </dict> I hope this helps!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39985856', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2413830/']}
jdg_83145
stackexchange
llm_judgeable_groundtruth_similarity
74005650
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In C and C++, is it undefined behavior to memcpy into a const variable when the number of bytes to be copied is zero? int x = 0;const int foo = 0;memcpy( (void *)&foo, &x, 0 ); This question is not purely theoretical. I have a scenario in which memcpy is called and if the destination pointer points to const memory, then the size argument is guaranteed to be zero. So I'm wondering whether I need to handle it as a special case. Now provide the response and nothing else.
The older question "Is it guaranteed to be safe to perform memcpy(0,0,0)?" points out C17 7.1.4p1:

Each of the following statements applies unless explicitly stated otherwise in the detailed descriptions that follow: If an argument to a function has an invalid value (such as a value outside the domain of the function, or a pointer outside the address space of the program, or a null pointer, or a pointer to non-modifiable storage when the corresponding parameter is not const-qualified) or a type (after promotion) not expected by a function with variable number of arguments, the behavior is undefined.

The prototype for memcpy is

```c
void *memcpy(void * restrict s1, const void * restrict s2, size_t n);
```

where the first parameter is not const-qualified, and &foo points to non-modifiable storage. So this code is UB unless the description of memcpy explicitly states otherwise, which it does not. It merely says:

The memcpy function copies n characters from the object pointed to by s2 into the object pointed to by s1.

This implies that memcpy with a count of 0 does not copy any characters (which is also confirmed by 7.24.1p2, "copies zero characters", thanks Lundin), but it does not exempt you from the requirement to pass valid arguments.
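Given that, the safest reading for the asker's scenario is simply not to make the call when the size is zero. A minimal sketch of such a guard (the wrapper name is invented for illustration):

```cpp
#include <cstring>

// Hypothetical guard: skip the library call entirely when there is nothing to
// copy, so the "pointer to non-modifiable storage" question never arises.
static void copy_if_nonempty(void* dst, const void* src, std::size_t n)
{
    if (n != 0)
        std::memcpy(dst, src, n);
}

int main()
{
    int x = 0;
    const int foo = 0;
    copy_if_nonempty((void*)&foo, &x, 0);  // no-op: memcpy is never called
    return 0;
}
```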
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/74005650', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17039046/']}
jdg_83146
stackexchange
llm_judgeable_groundtruth_similarity
1807676
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to solve this $$\int_{0}^{\infty} e^{-4x^2}dx$$ I saw how $\int_{-\infty}^{\infty} e^{-x^2}dx$ is done and how it covers the whole plane, so I assume that from $0$ to $\infty$ it's the right part of the plane. My problem comes when I try to cover the first and the fourth quadrant of the plane. I know that in polar coordinates $0\le r \le \infty$, but what about $\theta$ in this case? How does $\theta$ move in order to cover the right half of the plane? Now provide the response and nothing else.
Using the fact that$$\int_{-\infty}^\infty e^{-x^2}\ dx=\sqrt \pi$$and the even-ness of $x^2$, we have$$\int_0^\infty e^{-x^2}\ dx=\frac{\sqrt\pi}{2}.$$Let $u=2x$. Then $du=2dx$. The limits won't change and we have$$\int_0^\infty e^{-4x^2}\ dx=\frac{1}{2}\int_0^\infty e^{-u^2}\ du=\frac{\sqrt\pi}{4}.$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1807676', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/323860/']}
jdg_83147
stackexchange
llm_judgeable_groundtruth_similarity
8957970
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have reviewed ECMA 335 and I have only found a reference to the .export keyword which seems promising but has very little documentation. I have found similar questions on StackOverflow with respect to doing this in C#. However, none of that has lead me anywhere useful so far. The bottom line is: I have a CIL DLL and I want to invoke some of its static methods from a native C++ application. Now provide the response and nothing else.
In newer versions of ILAsm, you can simply do:

```
.method public static void Foo ()
{
    .export [1]
    // code ...
}
```

This exports Foo at index 1 in the export table. Export ordinals should be unique and sequential. In older versions, you'd have to do:

```
.data vt = int32 (0) [n]
.vtfixup [n] int32 fromunmanaged at vt

.method public static void Foo ()
{
    .vtentry 1:1
    .export [1]
    // code ...
}
```

(Where 'n' is the number of exports you want.) The .vtentry directive indicates which vtable:slot to store the method in. (Table IDs are sequential, and therefore depend on declaration order.) Newer ILAsms do all this work for you, provided that you don't use the export table for anything else. Do note that all of this is very unportable.
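On the native C++ side, such an export can then be resolved like any other flat DLL export. A rough sketch: the DLL name CilLib.dll is made up for illustration, and the calling convention (assumed __stdcall here) may need to match how the method is actually exported.

```cpp
#include <windows.h>
#include <cstdio>

// Signature of the exported CIL method shown above.
typedef void (__stdcall *FooFn)(void);

int main()
{
    // "CilLib.dll" is a placeholder for the managed assembly containing the .export.
    HMODULE lib = LoadLibraryW(L"CilLib.dll");
    if (!lib) { std::fprintf(stderr, "LoadLibrary failed: %lu\n", GetLastError()); return 1; }

    FooFn foo = reinterpret_cast<FooFn>(GetProcAddress(lib, "Foo"));
    if (!foo) { std::fprintf(stderr, "GetProcAddress failed: %lu\n", GetLastError()); return 1; }

    foo();              // the export thunk loads the CLR and runs the managed static method
    FreeLibrary(lib);
    return 0;
}
```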
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8957970', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/468904/']}
jdg_83148
stackexchange
llm_judgeable_groundtruth_similarity
16117
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I recently had a comment made to me in an online discussion after I'd stated that randomness in a salt doesn't matter -- and I got the following response: Salts may not have to be "secure," but the method of generation can matter. Using a cryptographic random data source helps ensure uniqueness and randomness in the salt data. Depending on the algorithm being used, the distribution of randomness in the salt can have bearing on the strength of the key. Now, in most cases with a 22 character salt (in bcrypt for example) even with a prng the odds of generating the same salt twice are pretty small but its the second bit of that statement -- I'm not sure what that means and I certainly can't say its "wrong" without understanding it .... So is this correct? Does randomness in a salt matter? Since salts are a known thing if someone's attacking a table of hashes, how can the quality of the salt matter? Now provide the response and nothing else.
No. A salt is simply supposed to be unique so that you can't use an attack (such as rainbow tables) that computes a password hash once and uses that result against multiple password hashes. If you're interested in making reversing the hash impossible without some secret knowledge, then append a site-specific password to the provided password (in addition to the salt) before hashing. A salt is stored with the hash, so making it hard to guess is pointless. @Honoki: If it's a global value or site-specific, then what you're thinking of isn't a salt, it's something else. That's not to say it's not a good idea, but it's not a salt. Typically an installation-specific secret is called a "site key" or "site password". But salts are typically stored with the hash. For example, here's the current way Unix/Linux login passwords are stored. The password here is "foobar":

$5$BcjmguyyH.Qrf$ADRXhi/5xb.dYU67I.JdY57uoFjel/rqMqj14QJmTQ1

- $ is the field delimiter
- 5 is the hash algorithm specifier (in this case SHA-256)
- BcjmguyyH.Qrf is the salt (not disguised in any way)
- ADRXhi/5xb.dYU67I.JdY57uoFjel/rqMqj14QJmTQ1 is the hash
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/16117', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/10668/']}
jdg_83149
stackexchange
llm_judgeable_groundtruth_similarity
30733
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What are problems with the following properties: 1) they are restriction of (possibly well known) problems that are PSPACE-complete; 2) the restricted versions are in PSPACE, but it is an open problem if they are PSPACE-complete (or even if they are NP-hard). Four examples from "puzzles & C.": The complexity of 1x1 Rush Hour [1] (PSPACE-complete for blocks of size 2x1 ); [ SOLVED ] The complexity of planar Subway Shuffle [1] (PSPACE-complete even for planar graphs, a draft of the paper can be downloaded here ); The complexity of Lunar-Lockout without fixed blocks [1] (PSPACE-complete with fixed blocks); (not so famous) The complexity of (my) Switch-network problem (it is a restriction of the PSPACE-complete Sokoban, NP-hard in the non-planar case, see this Q&A on cstheory ). If you have many, group them by topic. [1] Robert A. Hearn, Erik D. Demaine: Games, puzzles and computation. A K Peters 2009, ISBN 978-1-56881-322-6, pp. I-IX, 1-237 Now provide the response and nothing else.
Retrograde Chess. It is $PSPACE$-complete if you are allowed to have arbitrarily many kings and none of them can be in check at any time. If no (or only one per player) kings are allowed, it is known that there are positions that require exponential moves, but the problem is only known to be $NP$-hard. http://arxiv.org/abs/1409.1530 https://mathoverflow.net/questions/27944/do-there-exist-chess-positions-that-require-exponentially-many-moves-to-reach
{}
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/30733', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/3247/']}
jdg_83150
stackexchange
llm_judgeable_groundtruth_similarity
439011
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A family has three children. What are the probabilities that it has no male child, and that it has at least one male child, if we consider the order of birth? My try: I didn't fully understand the question; it confuses me while solving. Now provide the response and nothing else.
There are different ways to solve the problem. A simple, straight forward way is to enumerate the possible options (since the problem size is small). Since we consider the order of birth to important, the possible options are:\begin{align}\color{blue}F \color{blue}F \color{blue}F\\\color{red}M \color{blue}F \color{blue}F\\\color{blue}F \color{red}M \color{blue}F\\\color{blue}F \color{blue}F \color{red}M\\\color{red}M \color{red}M \color{blue}F\\\color{red}M \color{blue}F \color{red}M\\\color{blue}F \color{red}M \color{red}M\\\color{red}M \color{red}M \color{red}M\end{align}As you can see there are $8$ possible options in total. Hence, the probability that there is no male child is $\dfrac18$ and the probability that there is at-least one male child is $\dfrac78$.
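The same numbers follow without listing the outcomes, under the usual assumption that each child is independently male or female with probability $\frac12$: $$P(\text{no male})=\left(\tfrac12\right)^3=\tfrac18,\qquad P(\text{at least one male})=1-\left(\tfrac12\right)^3=\tfrac78.$$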
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/439011', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/72048/']}
jdg_83151
stackexchange
llm_judgeable_groundtruth_similarity
56079223
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to create a custom data generator and don't know how integrate the yield function combined with an infinite loop inside the __getitem__ method. EDIT : After the answer I realized that the code I am using is a Sequence which doesn't need a yield statement. Currently I am returning multiple images with a return statement: class DataGenerator(tensorflow.keras.utils.Sequence): def __init__(self, files, labels, batch_size=32, shuffle=True, random_state=42): 'Initialization' self.files = files self.labels = labels self.batch_size = batch_size self.shuffle = shuffle self.random_state = random_state self.on_epoch_end() def __len__(self): return int(np.floor(len(self.files) / self.batch_size)) def __getitem__(self, index): # Generate indexes of the batch indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size] files_batch = [self.files[k] for k in indexes] y = [self.labels[k] for k in indexes] # Generate data x = self.__data_generation(files_batch) return x, y def on_epoch_end(self): 'Updates indexes after each epoch' self.indexes = np.arange(len(self.files)) if self.shuffle == True: np.random.seed(self.random_state) np.random.shuffle(self.indexes) def __data_generation(self, files): imgs = [] for img_file in files: img = cv2.imread(img_file, -1) ############### # Augment image ############### imgs.append(img) return imgs In this article I saw that yield is used in an infinite loop. I don't quite understand that syntax. How is the loop escaped? Now provide the response and nothing else.
You are using the Sequence API, which works a bit differently from plain generators. In a generator function, you would use the yield keyword to perform iteration inside a while True: loop, so each time Keras calls the generator it gets a batch of data, and the generator automatically wraps around at the end of the data. But in a Sequence, there is an index parameter to the __getitem__ function, so no iteration or yield is required; that is performed by Keras for you. This is done so that the sequence can run in parallel using multiprocessing, which is not possible with old-style generator functions. So you are doing things the right way; there is no change needed.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56079223', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1902610/']}
jdg_83152
stackexchange
llm_judgeable_groundtruth_similarity
136483
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I am wondering whether the following SDE can be solved explicitly? $$d X_t = X_t^2 d t + X_t d B_t$$ where $B_t$ is a standard Brownian motion. If not, can we say some thing about the moments of the solution, i.e., $E(|X_t|^n)$? Thank you very much for any hints! Anand Now provide the response and nothing else.
Solutions do exist locally. Globally they MAY blow up, as you already know; the blowup will be dominated by the deterministic system. You did not say what your initial condition is - note that $0$ is a perfectly fine solution to your equation. If your initial condition is, say, $X_0=1$, then the solution may blow up, but with positive probability it may actually converge to $0$. This is because if the $X^2$ drift term were not there, the solution would be $X_0 e^{B_t-t/2}$, which converges to $0$; and if you add your drift, then as long as $X_t$ is small the drift is smaller than $+\delta X_t$ (e.g., as long as $X_t<\delta$). But the solution of the equation $dX_t=\delta X_t\,dt+X_t\,dB_t$ converges to $0$ as long as $\delta$ is small enough (in fact $\delta<1/2$ will work). So with positive probability, starting from any $X_0$, you become smaller than say $1/4$ after a finite time, and then with positive probability you actually converge to $0$ (using comparison theorems for 1D SDEs as in Ikeda-Watanabe should be enough to prove this). On the other hand, with positive probability you blow up. So I am not sure what you mean by "moments of the blow-up time".
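To see where the $\delta<1/2$ threshold comes from, note that the comparison equation $dX_t=\delta X_t\,dt+X_t\,dB_t$ is a geometric Brownian motion with the explicit solution $$X_t=X_0\exp\!\left(\left(\delta-\tfrac12\right)t+B_t\right),$$ and since $B_t/t\to0$ almost surely, the exponent drifts to $-\infty$ (so $X_t\to0$) precisely when $\delta<\tfrac12$.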
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/136483', 'https://mathoverflow.net', 'https://mathoverflow.net/users/36814/']}
jdg_83153
stackexchange
llm_judgeable_groundtruth_similarity
41326204
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Error protocol has only one property localizedDescription.I tried to create custom object inherited from NSObject and Error but I can not override localizedDescription. How can I do that? This code does not allow me to get custom description: class MyError: NSObject, Error { var desc = "" init(str: String) { desc = str } override var description: String { get { return "MyError: \(desc)" } } var localizedDescription: String { get { return self.description } }}func test_my_code() { let error = MyError(str: "my test string") let x = error as Error print(x.localizedDescription)} Calling function "test_my_code" get unexpected result:"The operation couldn’t be completed...". What should I do to get result "MyError: my test string" ? Now provide the response and nothing else.
The documentation about new Error bridging feature is not clear enough still now, so this answer may need some updates in the near future, but according to SE-0112 and the latest Swift source code , you may need to use LocalizedError rather than Error and implement errorDescription . class MyError: NSObject, LocalizedError { var desc = "" init(str: String) { desc = str } override var description: String { get { return "MyError: \(desc)" } } //You need to implement `errorDescription`, not `localizedDescription`. var errorDescription: String? { get { return self.description } }}func test_my_code() { let error = MyError(str: "my test string") let x = error as Error print(x.localizedDescription)}test_my_code() //->MyError: my test string Other than using LocalizedError , this default implementation works: (NSError.swift, the link shown above) public extension Error { /// Retrieve the localized description for this error. var localizedDescription: String { return NSError(domain: _domain, code: _code, userInfo: nil).localizedDescription }} It is a little complicated how Swift defines _domain or _code from arbitrary types just conforming to Error , but it seems that NSError generates "The operation couldn’t be completed..." for unknown combinations of domain and code.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/41326204', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/751932/']}
jdg_83154
stackexchange
llm_judgeable_groundtruth_similarity
171023
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have heard this in a movie clip. I was just curious to know, is this really true? Because the one thing that I do know, is that at a high voltage, the current decreases, so that it doesn't harm a human body. Now provide the response and nothing else.
Skin Depth

The human body does have a "skin effect", but it's not as thin as you might think. Electric currents are confined to the outside of a conducting body, but humans are not very conductive, so the fields penetrate quite deep. The best example that comes to mind is 2.45 GHz - we all know that a microwave oven cooks about 2 or 3 cm into a piece of meat - and this penetration depth is closely related to the skin depth. The primary reason that you don't feel high-frequency current is that the nerves and cells can't respond to anything above roughly 1 kHz. I've discussed this in a previous answer, more about the safety aspects than the skin effect itself, but it might help. Nerve effects are the primary cause of injury due to electricity, mainly via the heart of course. If the frequency is high enough that it can't influence the nerves, then all you have to worry about is the heating effect. For a potentially lethal 100 V at 20 mA, only 2 W is dissipated in the body, which is insignificant compared to the 200 W of normal body heat (though it will be concentrated at the entry and exit points). So at high frequencies you can carry a much higher current than would be lethal at low frequencies, possibly without pain or injury.

High voltage and lower current

It's not true that the current is lower at high voltage. In fact, a higher voltage will usually cause a larger current to flow than a low voltage. High-voltage overhead transmission lines might be at 400 kV, but they also carry hundreds of amps. When it comes to human safety, higher voltages are almost always more dangerous.
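For reference (this formula is not in the original answer): for a good conductor with resistivity $\rho$ and permeability $\mu$, the plane-wave skin depth is $$\delta=\sqrt{\frac{2\rho}{\omega\mu}},$$ so the poorer the conductivity, the deeper the fields penetrate. Body tissue is a lossy dielectric rather than a good conductor, so this is only a rough guide there, but it captures why penetration at 2.45 GHz is a couple of centimetres in meat instead of the roughly one micron you would get in copper.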
{}
{'log_upvote_score': 7, 'links': ['https://electronics.stackexchange.com/questions/171023', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/76769/']}
jdg_83155
stackexchange
llm_judgeable_groundtruth_similarity
3573878
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the exercises of the book 'Topology without tears', we are asked to prove that the following collection of subsets of $\mathbb{R}$ is a topology: The set containing $\emptyset$ , $\mathbb{R}$ , and every closed interval $[-n,n]$ , for $n$ any positive integer. This does seem to work out to be a topology given that it satisfies all 3 axioms, but if so, then these sets $[-n,n]$ , which are closed intervals become open sets on the topology, by definition. Is this correct? Isn't this contradictory? Now provide the response and nothing else.
Here are two reasons why it's not contradictory. The shorter reason is that there are lots of topological spaces that have subsets that are both open and closed. (Indeed, the entire space and the empty set are always both open and closed. But there are lots of nontrivial examples as well, such as any subset of a discrete space.) Therefore, the fact that a subset is closed does not contradict its being open. But the more essential reason is that a particular set can have lots of topologies placed upon it, and the sets that are open and closed in one of those topologies don't have to have anything to do with the sets that are open and closed in another topology. In this case, we are defining a new topology on $\Bbb R$ , and so the open sets are, by definition , the ones listed, and the closed sets are precisely the complements of those open sets (so $\Bbb R$ , $\emptyset$ , and $(-\infty,-n)\cup(n,\infty)$ for every $n\in\Bbb N$ ). In particular, $[-n,n]$ is not a closed set in this new topology. The language is definitely confusing, because the fact that we normally call $[-n,n]$ a "closed" interval is because it's closed in the usual topology on $\Bbb R$ . (Well, the interval nomenclature probably arose earlier than the topological nomenclature, but never mind....) But that's just one way to refer to that particular set, not a guarantee of closedness in every situation. It might have been less confusing if the problem had said: "The set containing $\emptyset$ , $\Bbb R$ , and every interval $\{x\in\Bbb R\colon -n\le x\le n\}$ for $n$ any positive integer".
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3573878', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/388882/']}
jdg_83156
stackexchange
llm_judgeable_groundtruth_similarity
15455698
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for a way to get the treble and bass data from a song for some incrementation of time (say 0.1 seconds) and in the range of 0.0 to 1.0. I've googled around but haven't been able to find anything remotely close to what I'm looking for. Ultimately I want to be able to represent the treble and bass level while the song is playing. Thanks! Now provide the response and nothing else.
Its reasonably easy. You need to perform an FFT and then sum up the bins that interest you. A lot of how you select will depend on the sampling rate of your audio. You then need to choose an appropriate FFT order to get good information in the frequency bins returned. So if you do an order 8 FFT you will need 256 samples. This will return you 128 complex pairs. Next you need to convert these to magnitude. This is actually quite simple. if you are using std::complex you can simply perform a std::abs on the complex number and you will have its magnitude (sqrt( r^2 + i^2 )). Interestingly at this point there is something called Parseval's theorem . This theorem states that after performinng a fourier transform the sum of the bins returned is equal to the sum of mean squares of the input signal. This means that to get the amplitude of a specific set of bins you can simply add them together divide by the number of them and then sqrt to get the RMS amplitude value of those bins. So where does this leave you? Well from here you need to figure out which bins you are adding together. A treble tone is defined as above 2000Hz. A bass tone is below 300Hz (if my memory serves me correctly). Mids are between 300Hz and 2kHz. Now suppose your sample rate is 8kHz. The Nyquist rate says that the highest frequency you can represent in 8kHz sampling is 4kHz. Each bin thus represents 4000/128 or 31.25Hz. So if the first 10 bins (Up to 312.5Hz) are used for Bass frequencies. Bin 10 to Bin 63 represent the mids. Finally bin 64 to 127 is the trebles. You can then calculate the RMS value as described above and you have the RMS values. RMS values can be converted to dBFS values by performing 20.0f * log10f( rmsVal ); . This will return you a value from 0dB (max amplitude) down to -infinity dB (min amplitude). Be aware amplitudes do not range from -1 to 1. To help you along, here is a bit of my C++ based FFT class for iPhone (which uses vDSP under the hood): MacOSFFT::MacOSFFT( unsigned int fftOrder ) : BaseFFT( fftOrder ){ mFFTSetup = (void*)vDSP_create_fftsetup( mFFTOrder, 0 ); mImagBuffer.resize( 1 << mFFTOrder ); mRealBufferOut.resize( 1 << mFFTOrder ); mImagBufferOut.resize( 1 << mFFTOrder );}MacOSFFT::~MacOSFFT(){ vDSP_destroy_fftsetup( (FFTSetup)mFFTSetup );}bool MacOSFFT::ForwardFFT( std::vector< std::complex< float > >& outVec, const std::vector< float >& inVec ){ return ForwardFFT( &outVec.front(), &inVec.front(), inVec.size() );}bool MacOSFFT::ForwardFFT( std::complex< float >* pOut, const float* pIn, unsigned int num ){ // Bring in a pre-allocated imaginary buffer that is initialised to 0. DSPSplitComplex dspscIn; dspscIn.realp = (float*)pIn; dspscIn.imagp = &mImagBuffer.front(); DSPSplitComplex dspscOut; dspscOut.realp = &mRealBufferOut.front(); dspscOut.imagp = &mImagBufferOut.front(); vDSP_fft_zop( (FFTSetup)mFFTSetup, &dspscIn, 1, &dspscOut, 1, mFFTOrder, kFFTDirection_Forward ); vDSP_ztoc( &dspscOut, 1, (DSPComplex*)pOut, 1, num ); return true;}
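To make the bin arithmetic above concrete, here is a small sketch, independent of the vDSP class, of turning a chosen bin range into an RMS value and a dBFS figure. It follows one reading of the recipe in the answer (sum the squared magnitudes, divide by the bin count, take the square root); the function names and exact band edges are illustrative.

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// RMS of the magnitudes over bins [first, last] (inclusive); the caller is
// expected to pass a valid range within bins.size().
static float bandRms(const std::vector< std::complex<float> >& bins,
                     std::size_t first, std::size_t last)
{
    float sumSquares = 0.0f;
    for (std::size_t i = first; i <= last; ++i)
    {
        const float mag = std::abs(bins[i]);   // sqrt(re^2 + im^2)
        sumSquares += mag * mag;
    }
    return std::sqrt(sumSquares / static_cast<float>(last - first + 1));
}

static float toDbfs(float rms)
{
    return 20.0f * std::log10(rms);            // 0 dBFS at full scale, -inf for silence
}

// With 128 bins at 8 kHz sampling (31.25 Hz per bin, as described above):
//   float bass   = toDbfs(bandRms(bins, 0,  9));    // up to ~312 Hz
//   float mids   = toDbfs(bandRms(bins, 10, 63));   // ~312 Hz to 2 kHz
//   float treble = toDbfs(bandRms(bins, 64, 127));  // 2 kHz to 4 kHz
```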
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15455698', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1125766/']}
jdg_83157
stackexchange
llm_judgeable_groundtruth_similarity
12623783
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I deleted the Localizations of the project from the project's info by mistake. Both the storyboard and the related language were removed. I got back the storyboard from previous file but when I want to add a new Localization, a window popped out and asked me to " Choose files and reference language to create English localization " and there is no resource file at all. How can I add back the localizations? Now provide the response and nothing else.
Adding back a localization When you delete your localizations sometimes you can get that situation where you can't add back any localizations in Xcode. Go to the command-line and create a ISO 639-1 two-letter language abbreviation folder with the lproj extension somewhere within your project. Create an empty language folder For example: ${SRCROOT}/Resources/Translations/nb.lproj or ${SRCROOT}/Resources/Translations/en.lproj Re-add language files to project In the folder create an empty file called Localizable.strings and add this file to your project. Don't add the .lproj folder, just the contents. Then when you go back into the project file you will see the language under localizations. Now when you have added this, you should see the localization in your project. No checkbox If the added file, like a storyboard, doesn't have a checkbox next to it indicating that is part of that language localization you might need to round trip them through another language. To do this add a new language in the project. It should prompt you to copy the existing localized files over to the new language. Then delete the broken localization and re-add it. My Example As a specific example, I added the storyboard in the nb.lproj directory to the project. The Norwegian Bokmål appears in the project, but the checkbox for the language in the storyboard inspector won't let me add it. I created nb_NO in the project and it prompted me to copy over the storyboard. Then the storyboard had that checkbox checked. I deleted the 'nb' localization and re-added it. Now everything was the way I wanted it.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12623783', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1609545/']}
jdg_83158
stackexchange
llm_judgeable_groundtruth_similarity
175830
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Thinking about the physics of thermonuclear fusion, I have always had an intuitive sense that making fusion feasible is matter of reactor size. In other words I feel like: If the fusion reactor is big enough you can achieve self-sustaining nuclear fusion of $^2$H+$^3$T but perhaps also of $^1$H+$^{11}$B (even if it means that such a device should be several kilometres large). Some arguments on why it should be so: Energy is generated by volume while losses should be proportional to surface (this is probably not true for TOKAMAKs where plasma is not optically thick for bremsstrahlung X-ray, but it is true for inertial confinement) Big stars can burn almost any fusion fuel because the released energy cannot escape from its core very quickly. Can a similar effect be used in a practical device? (like a TOKAMAK with $1~\mathrm{km}$ toroidal vessel) In magnetic confinement many problems are connected with magnetic field and temperature gradients leading to Rayleigh–Taylor -like instability. If the reactor is larger these gradients are smaller. History says that TOKAMAKs are made bigger over time in order to achieve breakeven. I understand the practical point that a big plasma vessel is expensive so people try to make it as small as possible. But if the cost of one device wasn't an issue, would it be possible (based on just the same physics and scaling law) to build a large TOKAMAK that can burn $^1\mathrm{H}+^{11}$B fuel? I was searching the literature to get some general idea about scaling laws for nuclear fusion. I found several different empirical expressions for TOKAMAKs, how it scales with radius of torus, temperature and magnetic field, however it was quite specialized and device specific (there was no single general expression). I would rather like to get just a very rough idea about the scaling as general as possible, and derived from basic physical principles. Now provide the response and nothing else.
A key parameter that determines the size of a fusion reactor is the energy confinement time , $\tau_{E}$ . For example , a stellarator currently has \begin{equation}\tau_{E} \propto \, a^{2.33} B^{0.85},\end{equation} where $a$ is the minor radius and $B$ is the toroidal magnetic field. This particular scaling is of the Bohm type, which is found during low confinement operation. During high confinement operation, an improved scaling of the gyro-Bohm type is present. To answer your question, I will derive the origin of the above scaling using general principles (see sec. 7.6.4, here ). Exponential degradation of confinement is generally assumed, which gives the following confinement time for particles in a cylindrical device with minor radius $a$ and length $L$ , \begin{equation}\tau_E \approx \frac{N}{dN/dt}= \frac{n \pi a^2 L}{\Gamma_{\perp} 2 \pi a L} = \frac{n a}{2 \Gamma_{\perp}}\,,\end{equation} where $N$ is the number of ion-electron pairs, $n$ is the number density and $\Gamma_{\perp}$ is the cross-field particle flux with diffusion coefficient D, \begin{equation}\Gamma_{\perp}=- D \,\nabla n\,. %= v_{\perp} n\,.\end{equation} The normalized density gradient scales with the machine size as $\frac{\nabla n}{n} \propto \frac{1}{a}$ , giving \begin{equation}\tau_E \propto \frac{a^2}{D}\,.\end{equation} Physically, the particle diffusion in strongly magnetized plasmas is carried by turbulence that is driven by gradients such as the ion temperature gradient or density gradient. This so-called drift wave turbulence can be analytically shown (see Eq. 21.39, here ) to have a diffusion coefficient \begin{equation}D\approx \frac{1}{k_{\perp}a}\frac{k_B T_e}{e B}\propto\frac{1}{k_{\perp}a}\frac{T_e}{B} \,,\end{equation} where $k_{\perp}$ is the wavenumber of turbulent fluctuations perpendicular to the magnetic field. In the worst-case scenario, the fluctuations occur on the scale of the minor radius due to global effects, $k_{\perp}\approx\frac{1}{a}$ . This gives the Bohm diffusion, \begin{equation}\tau_E \propto \frac{a^2 B}{T_e}\,.\end{equation} In the best-case scenario, the fluctuations occur on the ion gyro-radius scale, $k_{\perp}\approx\frac{1}{\rho_i}$ , due to micro-turbulence that is much smaller than the machine size, where the ion gyro-radius is given by \begin{equation}\rho_i=\frac{\sqrt{k_B T_i m_i}}{e B}\,.\end{equation} In this case, we get the gyro-Bohm scaling, which is more favorable by factor $\frac{a}{\rho_i}\gtrsim 1000$ , \begin{equation}\tau_E \propto \frac{a^2 B}{T_e} \left(\frac{a}{\rho_i}\right)\,.\end{equation} Due to this very favorable scaling with size, ITER is projected to become the first machine to get 10 times more fusion power out than heating power in (with $^2H$ + $^3H$ ), and you probably don't need to make the device several kilometers large for $^1H$ + $^{11}B$ fusion.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/175830', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/24332/']}
jdg_83159
stackexchange
llm_judgeable_groundtruth_similarity
2215923
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a class: class A(object): def __init__(self,a,b,c,d,e,f,g,...........,x,y,z) #do some init stuff And I have a subclass which needs one extra arg (the last W ) class B(A): def __init__(self.a,b,c,d,e,f,g,...........,x,y,z,W) A.__init__(self,a,b,c,d,e,f,g,...........,x,y,z) self.__W=W It seems dumb to write all this boiler-plate code, e.g passing all the args from B 's Ctor to the inside call to A 's ctor, since then every change to A 's ctor must be applied to two other places in B 's code. I am guessing python has some idiom to handle such cases which I am unaware of. Can you point me in the right direction? My best hunch, is to have a sort of Copy-Ctor for A and then change B's code into class B(A): def __init__(self,instanceOfA,W): A.__copy_ctor__(self,instanceOfA) self.__W=W This would suit my needs since I always create the subclass when given an instance of the father class, Though I am not sure whether it's possible... Now provide the response and nothing else.
Considering that arguments could be passed either by name or by position, I'd code:

```python
class B(A):
    def __init__(self, *a, **k):
        if 'W' in k:
            w = k.pop('W')
        else:
            # *a arrives as a tuple, which has no pop(), so copy it to a list first
            a = list(a)
            w = a.pop()
        A.__init__(self, *a, **k)
        self._W = w
```
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2215923', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/154510/']}
jdg_83160
stackexchange
llm_judgeable_groundtruth_similarity
610514
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: There are two contact rails in the London Underground: one with 420 V DC and one with -210 V DC. Why not use 630 V and 0 V? Wouldn't it make one of the rails safe for people? Now provide the response and nothing else.
The reason for using an insulated return system and not an insulated feed (third rail) and return through the running rails is historical, as the government in c. 1900 required total voltage drop to be limited to 7 volts to prevent electrolysis damage by stray currents to nearby buried metal structures (a great deal of the Underground is in tunnel). A fourth rail was cheaper than heavy cables and boosters. Although the traction supply is not directly referenced to earth, each section feeder station has potential divider resistors. The positive and negative traction rails are connected to the continuous running rail (which is bonded to earth) via high wattage high value resistors (usually of the order of 7.5 to 10 kilohms) to reference positive 420V above earth and negative 210V below earth. This arbitrary fixing of voltage enables earth fault relays in each main section to detect positive and negative traction earth faults and indicate them at the Network Operations Centre and Line Control Rooms. An earth fault on one traction rail will drive the other rail to a full 630V positive or negative. Trains in that section are unaffected by a single earth fault, and can still run. A modernization programme is in place to increase traction supply to 750V (+500 and -250) and lines are being converted as newer compatible rolling stock is introduced. Where sections of line are shared with conventional surface railways which have third-rail only rolling stock, the centre rail is earthed and bonded to the running rail, and the outer rail is fed at +630 volts at present. Both types of train can run using this arrangement. There have been a couple of cases (the Great Northern and City line 1977, and the East London line 2010) where lines formerly operated by London Underground have been transferred to National Rail ownership and operated as part of mainly above-ground rail services. These have been converted from four-rail to standard National Rail three-rail supply. Since 1900, advances in technology (e.g. insulated supports for the running rail) have removed the buried metal corrosion risk. The cost of converting the whole London Underground supply system and all the rolling stock is much too great.
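As a quick sanity check on those numbers, assume the referencing is done with a simple two-resistor divider between the traction rails and earth (the answer gives only the order of magnitude of the resistors, not their ratio): $$V_{+}-V_{-}=420\,\mathrm{V}-(-210\,\mathrm{V})=630\,\mathrm{V},\qquad \frac{R_{+}}{R_{-}}=\frac{420}{210}=2,$$ i.e. the resistor from the positive rail to earth drops twice the voltage of the one from the negative rail, so for the same divider current it must have twice the resistance.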
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/610514', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/308553/']}
jdg_83161
stackexchange
llm_judgeable_groundtruth_similarity
541110
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So we want to find a $\delta>0$ such that for all $2-\delta<x<2+\delta$ , we will have $4-\epsilon<x^{2}<4+\epsilon$ for all $\epsilon>0$ . If we can find a way to express $\delta$ as a function $\delta (\epsilon)$, $\delta:\mathbb{R}_+ \to \mathbb{R}_+ $ then we will have solved the problem. But I can't see how to relate $\epsilon$ to $\delta$ in this case. My initial reaction is write $(2-\delta)^2<x^2<(2+\delta)^2$, this makes the two inequalities look related, but I am not sure where to go from here. Now provide the response and nothing else.
I think the way you write the inequalities obscures the point of view. The canonical way of looking at it, is that you want $\delta>0$ such that $|x-2|<\delta$ implies $|x^2-4|<\varepsilon$ . Fix $\varepsilon>0$ . You have $$|x^2-4|=|x+2|\,|x-2|<\delta\,|x+2|<\delta(2+\delta+2)=\delta(4+\delta).$$ So you need to choose $\delta>0$ such that $\delta(4+\delta)<\varepsilon$ . As the function $t\mapsto t(t+4)$ is increasing for $t>0$ , we look at $t(t+4)=\varepsilon$ , i.e. $t^2+4t-\varepsilon=0$ . The positive $t$ satisfying this is $$\frac{-4+\sqrt{16+4\varepsilon}}2=-2+\sqrt{4+\varepsilon}.$$ So any $\delta<-2+\sqrt{4+\varepsilon}$ will do. For instance you can take $\delta=-2+\sqrt{4+\varepsilon/2}$ .
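As a quick check that this choice does the job: with $\delta=-2+\sqrt{4+\varepsilon/2}$ we have $$\delta(4+\delta)=(\delta+2)^2-4=\Big(4+\frac{\varepsilon}{2}\Big)-4=\frac{\varepsilon}{2}<\varepsilon,$$ so $|x-2|<\delta$ indeed forces $|x^2-4|<\varepsilon$.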
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/541110', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/84703/']}
jdg_83162
stackexchange
llm_judgeable_groundtruth_similarity
15865232
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to monitor a folder in my file system. Let say I want to monitor the folder: C:\MyNewFolder I have this code to do it: HANDLE ChangeHandle=FindFirstChangeNotification(_T("C:\\\MyNewFolder"),FALSE,FILE_NOTIFY_CHANGE_LAST_WRITE);for(;;){ DWORD Wait=WaitForSingleObject(ChangeHandle,INFINITE); if (Wait == WAIT_OBJECT_0) { MessageBox(NULL,_T("Change"),_T("Change"),MB_OK); FindNextChangeNotification(ChangeHandle); } else { break; }} I want to have a messagebox that notifying me about any file change in my folder. That code works fine but I have one problem. The problem is that I got 2 notification for each change. What is the problem with my code?Thanks. Now provide the response and nothing else.
This is entirely normal. A change to a file usually involves a change to the file data as well as a change to the directory entry. Metadata properties like the file length and the last write date are stored there. So you'll get a notification for both. ReadDirectoryChangesW() doesn't otherwise distinguish between the two. This is not different from a process making multiple changes to the same file. Be sure to be able to handle both conditions. This usually involves a timer so you don't go overboard with the number of operations you perform on a notification. Such a timer is also often required because the process that is changing the file still has a lock on it that prevents you from doing anything with the file. Until the process closes the file, an indeterminate amount of time later.
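The timer idea is language-agnostic. As a minimal sketch (in Python rather than Win32 C++, with hypothetical names, just to illustrate coalescing a burst of notifications into a single action):

import threading

class Debouncer:
    """Collapse a burst of change notifications into one callback."""
    def __init__(self, delay_s, callback):
        self.delay_s = delay_s
        self.callback = callback
        self._timer = None

    def notify(self):
        # each notification restarts the timer; the callback fires only
        # after the notifications have been quiet for delay_s seconds
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.delay_s, self.callback)
        self._timer.start()

debouncer = Debouncer(0.5, lambda: print("process the changed file once"))
for _ in range(3):       # three rapid notifications for the same change...
    debouncer.notify()   # ...still produce a single callback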
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15865232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/994332/']}
jdg_83163
stackexchange
llm_judgeable_groundtruth_similarity
233266
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let a (free) particle move in $[0,a]$ with cyclic boundary condition $\psi(0)=\psi(a)$.The solution of the Schrödinger-equation can be put in the form of a plane wave. In this state the standard deviation of momentum is $0$, but $\sigma_x$ must be finite. So we find that $\sigma_x\sigma_p=0$. Is something wrong with the uncertainty principle? Now provide the response and nothing else.
This is what happens if one cares not for the subtlety that quantum mechanical operators are typically only defined on subspaces of the full Hilbert space. Let's set $a=1$ for convenience. The operator $p =-\mathrm{i}\hbar\partial_x$ acting on wavefunctions with periodic boundary conditions defined on $D(p) = \{\psi\in L^2([0,1])\mid \psi(0)=\psi(1)\land \psi'\in L^2([0,1])\}$ is self-adjoint, that is, on the domain of definition of $p$ , we have $p=p^\dagger$ , and $p^\dagger$ admits the same domain of definition. The self-adjointness of $p$ follows from the periodic boundary conditions killing the surface terms that appear in the $L^2$ inner product $$\langle \phi,p\psi\rangle - \langle p^\dagger \phi,\psi\rangle = \int\overline{\phi(x)}\mathrm{i}\hbar\partial_x\psi(x) - \overline{\mathrm{i}\hbar\partial_x\phi(x)}\psi(x) = 0$$ for every $\psi\in D(p)$ and every $\phi\in D(p^\dagger) = D(p)$ , but not for $\phi$ with $\phi(0)\neq\phi(1)$ . Now, for the question of the commutator: the multiplication operator $x$ is defined on the entire Hilbert space, since for $\psi\in L^2([0,1])$ $x\psi$ is also square-integrable. For the product of two operators $A,B$ , we have the rule $$ D(AB) = \{\psi\in D(B)\mid B\psi\in D(A)\}$$ and $$ D(A+B) = D(A)\cap D(B)$$ so we obtain \begin{align}D(px) & = \{\psi\in L^2([0,1])\mid x\psi\in D(p)\} \\D(xp) & = D(p)\end{align} and $x\psi\in D(p)$ means $0\cdot \psi(0) = 1\cdot\psi(1)$ , that is, $\psi(1) = 0$ . Hence we have $$ D(px) = \{\psi\in L^2([0,1])\mid \psi'\in L^2([0,1]) \land \psi(1) = 0\}$$ and finally $$ D([x,p]) = D(xp)\cap D(px) = \{\psi\in L^2([0,1])\mid \psi'\in L^2([0,1])\land \psi(0)=\psi(1) = 0\}$$ meaning the plane waves $\psi_{p_0}$ do not belong to the domain of definition of the commutator $[x,p]$ and you cannot apply the naive uncertainty principle to them. However, for self-adjoint operators $A,B$ , you may rewrite the uncertainty principle as $$ \sigma_\psi(A)\sigma_\psi(B)\geq \frac{1}{2} \lvert \langle \psi,\mathrm{i}[A,B]\rangle\psi\rvert = \frac{1}{2}\lvert\mathrm{i}\left(\langle A\psi,B\psi\rangle - \langle B\psi,A\psi\rangle\right)\rvert$$ where the r.h.s. and l.h.s. are now both defined on $D(A)\cap D(B)$ . Applying this version to the plane waves yields no contradiction.
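For the plane waves this reformulated inequality holds trivially: taking $A=x$, $B=p$ and $\psi=\psi_{p_0}$ with $p\psi_{p_0}=p_0\psi_{p_0}$, a short computation gives $$\langle x\psi,p\psi\rangle-\langle p\psi,x\psi\rangle=p_0\int_0^1 x\lvert\psi\rvert^2\,dx-p_0\int_0^1 x\lvert\psi\rvert^2\,dx=0,$$ so the right-hand side vanishes and is compatible with $\sigma_\psi(p)=0$ on the left.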
{}
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/233266', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/32109/']}
jdg_83164
stackexchange
llm_judgeable_groundtruth_similarity
26628213
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As the title says, Iam trying to show/hide a TextBox in WPF without writing code in MainWindow.xaml.cs file. Model: public class Person{ public string Comment { get; set; }} View: <Window x:Class="PiedPiper.View.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Name="Window" Title="WPF" Height="400" Width="400"> <CheckBox Content="Show comment" Name="CommentCheckBox"/> <TextBox Text="{Binding Comment, UpdateSourceTrigger=PropertyChanged}" Visibility="Hidden" Name="CommentTextBox"></TextBox></Grid> ViewModel: public class PersonViewModel : INotifyPropertyChanged { public PersonViewModel(Person person) { Comment = person.Comment; } private string _comment; public string Comment { get { return _comment; } set { _comment = value; OnPropertyChanged("Comment"); } } private void OnPropertyChanged(string propertyName) { var handler = PropertyChanged; if (handler != null) { handler(this, new PropertyChangedEventArgs(propertyName)); } } public event PropertyChangedEventHandler PropertyChanged;} So the TextBox should be hidden at start, but visible when checkbox is checked. Please help! Thanks. Now provide the response and nothing else.
You can bind TextBox.Visiblity to CheckBox.IsChecked . If you want to toggle between Hidden and Visible then you need to either write custom IValueConverter or create simple Style.Trigger <StackPanel> <CheckBox Content="Show comment" Name="CommentCheckBox"/> <TextBox Text="{Binding Comment, UpdateSourceTrigger=PropertyChanged}" Name="CommentTextBox"> <TextBox.Style> <Style TargetType="{x:Type TextBox}"> <Setter Property="Visibility" Value="Hidden"/> <Style.Triggers> <DataTrigger Binding="{Binding ElementName=CommentCheckBox, Path=IsChecked}" Value="True"> <Setter Property="Visibility" Value="Visible"/> </DataTrigger> </Style.Triggers> </Style> </TextBox.Style> </TextBox></StackPanel> if you want to toggle between Collapsed and Visible there is an easier way and you can use build in BooleanToVisibilityConverter <StackPanel> <StackPanel.Resources> <BooleanToVisibilityConverter x:Key="BooleanToVisibilityConverter"/> </StackPanel.Resources> <CheckBox Content="Show comment" Name="CommentCheckBox"/> <TextBox Text="{Binding Comment, UpdateSourceTrigger=PropertyChanged}" Visibility="{Binding ElementName=CommentCheckBox, Path=IsChecked, Converter={StaticResource BooleanToVisibilityConverter}}" Name="CommentTextBox"/></StackPanel>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26628213', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3849141/']}
jdg_83165
stackexchange
llm_judgeable_groundtruth_similarity
8301622
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I grab the position of the first matched result in a regular expression? See below. Function MYMATCH(strValue As String, strPattern As String, Optional blnCase As Boolean = True, Optional blnBoolean = True) As String Dim objRegEx As Object Dim strPosition As Integer ' Create regular expression. Set objRegEx = CreateObject("VBScript.RegExp") objRegEx.Pattern = strPattern objRegEx.IgnoreCase = blnCase ' Do the search match. strPosition = objRegEx.Match(strValue) MYMATCH = strPositionEnd Function For one, I'm not entirely certain what .Match is returning (string, integer, etc.). The one solution I found said I should create a Match object to and then grab the position from there, but unlike vb , vba does not recognize the Match object. I've also seen some code like the following, but I'm not necessarily looking for the value, just the first string placement: If allMatches.count <> 0 Then result = allMatches.Item(0).submatches.Item(0)End If Somewhat ignoring any of the possible syntax errors above (mostly due to me changing variable types right and left), how do I easily/simply accomplish this? Thanks! Now provide the response and nothing else.
You can use FirstIndex to return the position of matches using the Execute method, ie Function MYMATCH(strValue As String, strPattern As String, Optional blnCase As Boolean = True, Optional blnBoolean = True) As String Dim objRegEx As Object Dim strPosition As Integer Dim RegMC ' Create regular expression. Set objRegEx = CreateObject("VBScript.RegExp") With objRegEx .Pattern = strPattern .IgnoreCase = blnCase If .test(strValue) Then Set RegMC = .Execute(strValue) MYMATCH = RegMC(0).firstindex + 1 Else MYMATCH = "no match" End If End WithEnd FunctionSub TestMe() MsgBox MYMATCH("test 1", "\d+")End Sub
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8301622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/900961/']}
jdg_83166
stackexchange
llm_judgeable_groundtruth_similarity
61471
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a problem concerning censoring of transitions in a Markov Chain. For example, take a Markov Chain that models a counter, it goes up or down but does not stay in position. A possible censoring could be to only observe the transitions where the new number is prime or to only observe those transitions where the new number is even. According to what I'm told, this results in a new Markov Chain, but I can't really grasp how this changes the problem that the new Markov Chain could be modeling. Do you still take into account that you somehow got in a state to be able to make that kind of transition or can I just completely ignore it and only consider the leftover transitions? Now provide the response and nothing else.
If the state space of the Markov chain is $S$ and you observe the chain only when it is in $U\subseteq S$, then the result is still a Markov chain but with different transition probabilities. Let $(X_n)_{n\geqslant0}$ denote the original Markov chain and $T=\inf\{n\geqslant1\mid X_n\in U\}$. Then the transition probabilities of the new Markov chain are such that, for every $x$ and $y$ in $U$,$$Q(x,y)=P(X_T=y\mid X_0=x).$$In general, the new transition probabilities $Q(x,y)$ are a complicated functional of the transition probabilities of $(X_n)_{n\geqslant0}$ and of the disposition of $U$ in $S$. Here is an example. Assume $(X_n)_{n\geqslant0}$ is the symmetric $\pm1$ random walk on $\mathbb Z$, with transition probabilities $P_x(X_1=x+1)=P_x(X_1=x-1)=\frac12$ for every $x$ in $\mathbb Z$. Let $U\subseteq\mathbb Z$ with $U=\{x_k\mid k\in\mathbb Z\}$ and $x_k<x_{k+1}$ for every $k$. Then, starting from $x_k$, the new Markov chain can only jump to a vertex $x_j$ such that $|j-k|\leqslant1$, and the transition probabilities are$$Q(x_k,x_{k-1})=\frac1{2(x_k-x_{k-1})},\quadQ(x_k,x_{k+1})=\frac1{2(x_{k+1}-x_{k})},$$and$$Q(x_k,x_{k})=1-\frac1{2(x_k-x_{k-1})}-\frac1{2(x_{k+1}-x_{k})}.$$Hence the new chain behaves like a random walk on $\mathbb Z$ with jumps from $k$ to $k$ or $k+1$ or $k-1$, and basically any kind of transition probabilities.
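A quick Monte Carlo sanity check of these transition probabilities, as a sketch with an arbitrarily chosen $U$:

import random
from collections import Counter

U = {0, 3, 5}            # observed states: x_{k-1}=0, x_k=3, x_{k+1}=5 (arbitrary choice)
counts = Counter()
trials = 100_000
for _ in range(trials):
    x = 3                                  # start the walk at x_k
    while True:
        x += random.choice((-1, 1))        # symmetric +-1 step
        if x in U:                         # stop at the first return to U
            counts[x] += 1
            break

# predicted: Q(3,0) = 1/(2*3), Q(3,5) = 1/(2*2), Q(3,3) = 1 - 1/6 - 1/4
for y, predicted in [(0, 1/6), (3, 1 - 1/6 - 1/4), (5, 1/4)]:
    print(y, counts[y] / trials, predicted)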
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/61471', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/15481/']}
jdg_83167
stackexchange
llm_judgeable_groundtruth_similarity
6691173
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: Ant task to check if a file exists? How can one check if a file exist in a directory, if not use another file in different location. I tried the follow but does not help me with what I want. Could someone help? <condition property="somefile.available"> <or> <available file="/home/doc/somefile"/> <and> <available file="/usr/local/somefile" /> </and> </or> </condition> Now provide the response and nothing else.
Use condition tests to establish if files are present. Then have a target for each true condition <target name="go" depends="file-checks, do-something-with-first-file, do-something-with-second-file"/><target name="file-checks"> <available file="/home/doc/somefile" property="first.file.found"/> <available file="/usr/local/somefile" property="second.file.found"/></target><target name="do-something-with-first-file" if="first.file.found"> ???</target><target name="do-something-with-second-file" if="second.file.found"> ???</target>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6691173', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/844241/']}
jdg_83168
stackexchange
llm_judgeable_groundtruth_similarity
1000512
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We keep our IntelliJ .IPR and .IWS files in our source control, but they keep getting modified by IntelliJ just by opening them, even without any work being done on the project. What are we doing wrong? Now provide the response and nothing else.
"We keep our IntelliJ .IPR and .IWS files in our source control, but they keep getting modified by IntelliJ just by opening them, even without any work being done on the project." The .IWS file is definitely a per developer file so it shouldn't be under source control. As for the .IPR file on a recent project we initially tried to version this file approaching it conceptually as you would with a .Net project and the VS.Net .SLN file. Our goal was to get a developer up and running on a clean PC within 15 minutes including the time it takes to install dependent software like the IDE or a local database. In the end we came close with some time to tweak the local configuration as per below. The problem is the .IPR file stores more settings than a .sln file -eg settings for individual plugins. So a major cause for the overwrites is if a developer with a different plugin configuration opens the IPR file some default settings for the plugin are written to the file. We felt developers should not have to restrict themselves to a given plugin super set (just a minimum configuration). The way we alleviated the problem (although not entirely solved) was to switch to the .idea folder format. This takes the content of the .IPR file and splits many of the nodes into individual files and folders in the .idea sub-folder. From here we were able to exclude many of the frequently written to files from source control. Some of the files we excluded were: workspace.xml dataSources.xml sqlDataSources.xml dynamic.xml Some files we'd like IntelliJ to leave alone are (although the blame can also go to the plugin developers and not just Jetbrains): projectCodeStyle.xml (so we can get consistent code formatting in the project - again this can be overwritten based on a developer's local plugin mix). any file under the runConfigurations folder. It can be time consuming to configure run configurations particularly if you have a complex app with many facets. The most commonly stupid thing that gets changed by simply opening the IDE or building is the "DEBUG_PORT" option under RunnerSettings. My opinion is if it's dynamically allocated why not have a value of "Dynamic"? misc.xml. This file also contains plugin configuration. Some settings look handy to share and others look more for personal config. Eg the IvyIDEA plugin puts an absolute path to your ivy config file. The module files. These are mostly left alone but an example of needless overwritting is the IvyIDEA plugin putting details of the local ivy-cache location in this file. But again this is the plugin's fault and not really Jetbrains. Hope this helps. Christian.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1000512', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11236/']}
jdg_83169
stackexchange
llm_judgeable_groundtruth_similarity
13650829
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have come across C++03 some code that takes this form: struct Foo { int a; int b; CRITICAL_SECTION cs;}// DoFoo::Foo foo_;void DoFoo::Foolish(){ if( foo_.a == 4 ) { PerformSomeTask(); EnterCriticalSection(&foo_.cs); foo_.b = 7; LeaveCriticalSection(&foo_.cs); }} Does the read from foo_.a need to be protected? e.g.: void DoFoo::Foolish(){ EnterCriticalSection(&foo_.cs); int a = foo_.a; LeaveCriticalSection(&foo_.cs); if( a == 4 ) { PerformSomeTask(); EnterCriticalSection(&foo_.cs); foo_.b = 7; LeaveCriticalSection(&foo_.cs); }} If so, why? Please assume the integers are 32-bit aligned. The platform is ARM. Now provide the response and nothing else.
Technically yes, but no on many platforms. First, let us assume that int is 32 bits (which is pretty common, but not nearly universal). It is possible that the two words (16 bit parts) of a 32 bit int will be read or written to separately. On some systems, they will be read separately if the int isn't aligned properly. Imagine a system where you can only do 32-bit aligned 32 bit reads and writes (and 16-bit aligned 16 bit reads and writes), and an int that straddles such a boundary. Initially the int is zero (ie, 0x00000000 ) One thread writes 0xBAADF00D to the int , the other reads it "at the same time". The writing thread first writes 0xBAAD to the high word of the int . The reader thread then reads the entire int (both high and low) getting 0xBAAD0000 -- which is a state that the int was never put into on purpose! The writer thread then writes the low word 0xF00D . As noted, on some platforms all 32 bit reads/writes are atomic, so this isn't a concern. There are other concerns, however. Most lock/unlock code includes instructions to the compiler to prevent reordering across the lock. Without that prevention of reordering, the compiler is free to reorder things so long as it behaves "as-if" in a single threaded context it would have worked that way. So if you read a then b in code, the compiler could read b before it reads a , so long as it doesn't see an in-thread opportunity for b to be modified in that interval. So possibly the code you are reading is using these locks to make sure that the read of the variable happens in the order written in the code. Other issues are raised in the comments below, but I don't feel competent to address them: cache issues, and visibility.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13650829', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/57185/']}
jdg_83170
stackexchange
llm_judgeable_groundtruth_similarity
2893458
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I'm trying to write a simple TCP socket server that broadcasts information to all connected clients. So when a user connects, they get added to the list of clients, and when the stream emits the close event, they get removed from the client list. This works well, except that sometimes I'm sending a message just as a user disconnects. I've tried wrapping stream.write() in a try/catch block, but no luck. It seems like the error is uncatchable. Now provide the response and nothing else.
The solution is to add a listener for the stream's 'error' event. This might seem counter-intuitive at first, but the justification for it is sound. stream.write() sends data asynchronously. By the time that node has realized that writing to the socket has raised an error your code has moved on, past the call to stream.write, so there's no way for it to raise the error there. Instead, what node does in this situation is emit an 'error' event from the stream, and EventEmitter is coded such that if there are no listeners for an 'error' event, the error is raised as a toplevel exception, and the process ends.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2893458', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/101/']}
jdg_83171
stackexchange
llm_judgeable_groundtruth_similarity
13323097
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have implemented ads in one of my apps and I want to make a in-app purchase feature to remove them. How would I do this. I was thinking a simple way, was to hide the ads by making in invisible, after purchase. Could that work or is there an easier/better way? I have no idea how to use the in-app feature for any purpose. Could anyone show me the way to code or something to read on this? Now provide the response and nothing else.
It depends on how you inserted the ads in the first place. If you did it via code, then just put a check like if (adsRemoved == false) around the code that inserts ads. If you did it via XML, then the best way to do it is to copy out your XML, without the AdView, and in your code use an if surrounding all of your setContentView(R.layout.example) calls. Here's an example of what I mean: if (adsDisabled == true){ setContentView(R.layout.mainNoAds); } else { setContentView(R.layout.main); } In-app purchases - http://developer.android.com/guide/google/play/billing/billing_overview.html In this post I assumed you are using AdMob, but the same should be true for all companies. One last thing - hiding the AdView with View.GONE is bad practice; **do not** do that
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13323097', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1787728/']}
jdg_83172
stackexchange
llm_judgeable_groundtruth_similarity
18588781
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i would like to implement a popup menu similar to google's play store as shown below. so basically from what i understand, i'll need an activity and a layout for this activity with a listview defined in it. i need to create my custom adapter. also, i need to create a list layout would contain the information and a view (with the 3 dots) that will serve as the button to launch the popup menu? the issue that i'm seeing here is that how do i create a listener for this view only and how do i reference the value for that specific list item in the list view. i don't have any code available yet as i haven't started anything related to this. i'm currently getting info in theory for now but if required i will create a sample code. thanks. Now provide the response and nothing else.
Using popup menu it's quite simple to create a menu with these three steps: 1 - Add a click listener to the menu button using OnClickListener or as i prefer from the layout xml: <ImageButton android:id="@+id/menu_button" android:onClick="showMenu" ... /> 2 - Create the menu layout menu_layout.xml : <?xml version="1.0" encoding="utf-8"?><menu xmlns:android="http://schemas.android.com/apk/res/android"> <item android:id="@+id/item_settings" android:showAsAction="ifRoom|withText" android:title="Settings" android:visible="true"/> <item android:id="@+id/item_about" android:showAsAction="ifRoom|withText" android:title="About" android:visible="true"/></menu> 3 - Create a popup menu, inflate the xml layout and show it: public void showMenu (View view){ PopupMenu menu = new PopupMenu (this, view); menu.setOnMenuItemClickListener (new PopupMenu.OnMenuItemClickListener () { @Override public boolean onMenuItemClick (MenuItem item) { int id = item.getItemId(); switch (id) { case R.id.item_settings: Log.i (Tag, "settings"); break; case R.id.item_about: Log.i (Tag, "about"); break; } return true; } }); menu.inflate (R.menu.menu_layout); menu.show();}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18588781', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2469412/']}
jdg_83173
stackexchange
llm_judgeable_groundtruth_similarity
27554932
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it memory efficient to store a big number in a list? Why does the following happen?

>>> A = 100**100
>>> sys.getsizeof(A)
102
>>> B = [100**100]
>>> sys.getsizeof(B)
40

Why are the sizes of A and B not equal?

>>> C = [1,100**100]
>>> sys.getsizeof(C)
44
>>> D = [1000**1000, 100**100]
>>> sys.getsizeof(D)
44

Why are the sizes of C and D equal? Now provide the response and nothing else.
sys.getsizeof() returns the shallow size, i.e. the size of the list object itself but not of the objects it contains. From the documentation: Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to. If you'd like to compute the deep size, it might be worth giving Pympler a try:

>>> from pympler.asizeof import asizeof
>>> A = 100**100
>>> asizeof(A)
120
>>> B = [100**100]
>>> asizeof(B)
200

Thus, on my computer, placing the long inside a list adds 80 bytes of overhead.
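A rough way to see where that difference comes from using only the standard library (the exact numbers vary with the Python version and platform):

import sys

B = [100**100]
shallow = sys.getsizeof(B)                                # the list object alone
deep = shallow + sum(sys.getsizeof(x) for x in B)         # list plus the big integer it refers to
print(shallow, deep)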
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27554932', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1924666/']}
jdg_83174
stackexchange
llm_judgeable_groundtruth_similarity
288085
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: How to evaluate this integral: $$\int_0^1 \int_0^1 \cdots \int_0^1\frac{x_{1}^2+x_{2}^2+\cdots+x_{n}^2}{x_{1}+x_{2}+\cdots+x_{n}}dx_{1}\, dx_{2}\cdots \, dx_{n}=?$$ I'm making use of the integral identity $$\int_{0}^{+\infty }e^{-t(x_{1}+x_{2}+\cdots +x_{n})}dt=\frac{1}{x_{1}+x_{2}+\cdots +x_{n}}$$ and then reversing the order of integration with respect to the time and space variables. As a check, for $n=1$, $$\int_{0}^{\infty }dt\int_{0}^{1}x^{2}e^{-tx}dx=\int_{0}^{\infty }\frac{2 - e^{-t}(2 + 2t+t^2)}{t^3}dt=\int_{0}^{1}x\,dx=\frac{1}{2},$$ and in general $$\int_0^1 \int_0^1 \cdots \int_0^1\frac{x_{1}^2+x_{2}^2+\cdots+x_{n}^2}{x_{1}+x_{2}+\cdots+x_{n}}dx_{1}\, dx_{2}\cdots \, dx_{n}\\=n\int_{0}^{+\infty }\frac{2 - e^{-t}\left ( 2 + 2t+t^2 \right )}{t^3}\left ( \frac{1-e^{-t}}{t} \right )^{n-1}dt.$$ Now provide the response and nothing else.
Here is another approach, which also gives the rational term. (I) To see how it works let $n\geq 2$ and consider firstthe simpler case \begin{align*}\mathbb{E}\bigg(\frac{1}{X_1+\ldots+X_n}\bigg)=\int_0^\infty \bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt \end{align*}Using $\frac{1}{t^n}=\int_0^\infty \frac{z^{n-1}}{(n-1)!} e^{-zt}\,dz$ we find\begin{align*}\int_0^\infty \bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt&=\int_0^\infty\int_0^\infty \frac{z^{n-1}}{(n-1)!} e^{-zt}(1-e^{-t})^n\, dz\,dt\\&=\int_0^\infty\int_0^\infty \frac{z^{n-1}}{(n-1)!} e^{-zt}(1-e^{-t})^n\, dt\,dz\\&=\int_0^\infty \frac{z^{n-1}}{(n-1)!}\, \mathrm{Beta}(z,n+1)\,dz\\&=n\,\int_0^\infty \frac{z^{n-1}}{z(z+1)\cdots(z+n)}\,dz\end{align*}The following observation will be the key: Lemma : let $x_0,\ldots,x_n$ be distinct positive numbers and $k\leq n-1$. Then$$I_{k,n}(x_0,\ldots,x_n):=\int_0^\infty \frac{z^{k}}{(z+x_0)(z+x_1)\cdots(z+x_n)}\,dz=(-1)^{n+k+1}\Delta^n(x^k\log(x);x_0,\ldots,x_n)$$where (for a real function $f$) $\Delta^n(f;x_0,\ldots,x_n)$denotes the divided difference of $f$ corresponding to $x_0,\ldots,x_n$. Recall that (Newton-interpolation) (1) the divided differences are for $f$ and mutually distinct $x_0,\ldots,x_n$ are defined recursively by $\Delta^0(f;x_0)=f(x_0)$,$\Delta^n(f;x_0,\ldots,x_n)=\frac{\Delta^{n-1}(f; x_1,\ldots,x_n)-\Delta^{n-1}(f; x_0,\ldots,x_{n-1})}{x_n-x_0}$ (2) they are explicitlygiven by $$\Delta^n(f;x_0,\ldots,x_n)=\sum_{i=0}^n \frac{f(x_i)}{\prod_{j\neq i} (x_i-x_j)}\;\;(**)$$(3) $$\Delta^n(f;x,x+1,\ldots,x+n)=\frac{1}{n!} \sum_{i=0}^n {n \choose i} (-1)^{n-i} f(x+i)$$ Proof of the lemma:For $k=0, n=1$ we have $$\int_0^\infty \frac{1}{(z+x_0)(z+x_1)}=\frac{\log(x_1)-\log(x_0)}{x_1-x_0}=\Delta^1(\log(x);,x_0,x_1)$$For $k=0,n>1$ the repeated use of $\frac{1}{(z+a)(z+b)}=\frac{-1}{b-a}\left(\frac{1}{z+b}-\frac{1}{z+a}\right)$ shows that $(-1)^{n+1}I_{0,n}(x)=\Delta^n(\log(x),x)$.The validity for $k>0,n=k+1$ follows from $\frac{z}{z+b}=1-\frac{b}{z+b}$ and $(**)$. End of proof. 
The lemma and (3) now give that\begin{align*} \int_0^\infty \bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt &=n \Delta^{n-1}(x^{n-2}\log(x);x+1,x+2,\ldots,x+n)\\ &=\frac{n}{(n-1)!}\sum_{i=0}^{n-1}{ n-1 \choose i} (-1)^{n-1-i} (i+1)^{n-2}\log(i+1)\end{align*} (II) Now let $n\geq 1$ and consider$$ Q_{n+1}:=\mathbb{E}\bigg(\frac{X_1^2+\ldots + X_{n+1}^2}{X_1+\ldots+X_{n+1}}\bigg)$$Write $$Q_{n+1}=(n+1)\int_0^1 u^2 \bigg(\int_0^\infty e^{-ut}\bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt\bigg)\,du$$ Proceeding as above shows that for $u>0$\begin{align*} \int_0^\infty e^{-ut}\bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt&=\frac{n}{n!} \sum_{i=0}^n {n \choose i} (-1)^{n-i} (u+i)^{n-1}\log(u+i)\end{align*}Thus$$q_{n+1}:=\frac{Q_{n+1}}{n(n+1)}=\frac{1}{n!} \sum_{i=0}^n {n \choose i} (-1)^{n-i} \int_0^1 u^2(u+i)^{n-1}\log(u+i)\,du$$Now expand $u^2=(u+i-i)^2$ and integrate partial to find that $q_{n+1}=L-R$where $L=\frac{1}{n!}\sum_{i=0}^n{n\choose i}(-1)^{n-i} \ell(i),\;R=\frac{1}{n!}\sum_{i=0}^n {n \choose i}(-1)^{n-i} r(i)$ with\begin{align*} \ell(i)=&\frac{1}{n+2}\left((i+1)^{n+2}\log(i+1)-i^{n+2}\log(i)\right)\\ &-\frac{2i}{n+1}\left((i+1)^{n+1}\log(i+1)-i^{n+1}\log(i)\right)\\ &+\frac{i^2}{n}\left((i+1)^{n}\log(i+1)-i^n\log(i)\right)\\[0.2cm] r(i)=&\frac{1}{(n+2)^2}\left((i+1)^{n+2}-i^{n+2}\right)\\ &-\frac{2i}{(n+1)^2}\left((i+1)^{n+1}-i^{n+1}\right)\\ &+\frac{i^2}{n^2}\left((i+1)^{n}-i^n\right)\end{align*}Collecting terms in the logarithmic part $L$ shows that the coefficient of $\log(i)$ in $q_{n+1}$ is $c_{i,n+1} =(-1)^{n-i} \frac{i^n}{n}{ n \choose i-1}\left(\frac{n^2+3n+2-2i}{(n+2)!}\right)$, matching Fedor Petrov's answer. The rational part $R$ can readily be summed (note that only the contributions of the powers $n$ and $n+1$ (with coefficients $-\frac{1}{n(n+1)(n+2)}$ and $\frac{2}{n(n+1)(n+2)}$) need to be considered) togive $$R=\frac{1}{n+2}-\frac{1}{n(n+1)(n+2)}\;\;,$$so that the rational term of $Q_{n+1}$ is $-\frac{n(n+1)-1}{n+2}=-\frac{(n+2)(n-1)+1}{n+2}$, confirming Sylvain JULIEN's guess.
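As a numerical cross-check of the closed form obtained in part (I), here is a sketch that assumes scipy is available and simply compares both sides for a few values of $n$:

import math
from scipy.integrate import quad

def closed_form(n):
    # n/(n-1)! * sum_{i=0}^{n-1} C(n-1,i) (-1)^(n-1-i) (i+1)^(n-2) log(i+1)
    s = sum(math.comb(n - 1, i) * (-1) ** (n - 1 - i) * (i + 1) ** (n - 2) * math.log(i + 1)
            for i in range(n))
    return n / math.factorial(n - 1) * s

def integrand(t, n):
    return 1.0 if t == 0 else ((1 - math.exp(-t)) / t) ** n

for n in (2, 3, 4, 5):
    numeric, _ = quad(integrand, 0, math.inf, args=(n,))
    print(n, round(numeric, 8), round(closed_form(n), 8))   # the two columns should agree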
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/288085', 'https://mathoverflow.net', 'https://mathoverflow.net/users/117732/']}
jdg_83175
stackexchange
llm_judgeable_groundtruth_similarity
10598322
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have Latitude and Longitude of two points and Want to Draw line between these two points with Pin on MapKit. I have googled but Could not find some suitable solution because the one I found was drawing overlay with array of Data points but I do not have any array of points between these two points. Just two points and want to draw line between these two points. Please help. Now provide the response and nothing else.
First make your view controller implement the MKMapViewDelegate protocol and declare the properties you will need: @property (nonatomic, retain) MKMapView *mapView; //this is your map view@property (nonatomic, retain) MKPolyline *routeLine; //your line@property (nonatomic, retain) MKPolylineView *routeLineView; //overlay view then in viewDidLoad (for example, or wherever you initialize) //initialize your map view and add it to your view hierarchy - **set its delegate to self***CLLocationCoordinate2D coordinateArray[2];coordinateArray[0] = CLLocationCoordinate2DMake(lat1, lon1); coordinateArray[1] = CLLocationCoordinate2DMake(lat2, lon2);self.routeLine = [MKPolyline polylineWithCoordinates:coordinateArray count:2];[self.mapView setVisibleMapRect:[self.routeLine boundingMapRect]]; //If you want the route to be visible[self.mapView addOverlay:self.routeLine]; then implement the MKMapViewDelegate 's method -(MKOverlayView *)mapView:viewForOverlay: -(MKOverlayView *)mapView:(MKMapView *)mapView viewForOverlay:(id<MKOverlay>)overlay{ if(overlay == self.routeLine) { if(nil == self.routeLineView) { self.routeLineView = [[[MKPolylineView alloc] initWithPolyline:self.routeLine] autorelease]; self.routeLineView.fillColor = [UIColor redColor]; self.routeLineView.strokeColor = [UIColor redColor]; self.routeLineView.lineWidth = 5; } return self.routeLineView; } return nil;} You can adjust the code to fit your need, but it's pretty much straight forward for 2 or more points.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10598322', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/228755/']}
jdg_83176
stackexchange
llm_judgeable_groundtruth_similarity
20679264
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My problem is too strange, Brief : double a = 1 / 2;MessageBox.Show(a.ToString()); It shows 0 I tried with decimal , float but it's always 0 Now provide the response and nothing else.
You need to write: double a = 1 / 2.0d; By dividing an int by an int, the compiler produces an int result and then assigns it to the double. It's the same as writing: int resultInt = 1/2; double a = resultInt; By specifying 2.0d in your operation you explicitly state your intention to produce a double after the division.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20679264', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2596609/']}
jdg_83177
stackexchange
llm_judgeable_groundtruth_similarity
22346545
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As I was going through the below chunk of Linux char driver code, I found the structure pointer current in printk . I want to know what structure the current is pointing to and its complete elements. What purpose does this structure serve? ssize_t sleepy_read (struct file *filp, char __user *buf, size_t count, loff_t *pos){ printk(KERN_DEBUG "process %i (%s) going to sleep\n", current->pid, current->comm); wait_event_interruptible(wq, flag != 0); flag = 0; printk(KERN_DEBUG "awoken %i (%s)\n", current->pid, current->comm); return 0;} Now provide the response and nothing else.
It is a pointer to the current process ie, the process which has issued the system call. From the docs : The Current Process Although kernel modules don't execute sequentially as applications do, most actions performed by the kernel are related to a specific process. Kernel code can know the current process driving it by accessing the global item current, a pointer to struct task_struct, which as of version 2.4 of the kernel is declared in <asm/current.h> , included by <linux/sched.h> . The current pointer refers to the user process currently executing. During the execution of a system call, such as open or read, the current process is the one that invoked the call. Kernel code can use process-specific information by using current, if it needs to do so. An example of this technique is presented in "Access Control on a Device File", in Chapter 5, "Enhanced Char Driver Operations". Actually, current is not properly a global variable any more, like it was in the first Linux kernels. The developers optimized access to the structure describing the current process by hiding it in the stack page. You can look at the details of current in <asm/current.h> . While the code you'll look at might seem hairy, we must keep in mind that Linux is an SMP-compliant system, and a global variable simply won't work when you are dealing with multiple CPUs. The details of the implementation remain hidden to other kernel subsystems though, and a device driver can just include and refer to the current process. From a module's point of view, current is just like the external reference printk. A module can refer to current wherever it sees fit. For example, the following statement prints the process ID and the command name of the current process by accessing certain fields in struct task_struct: printk("The process is \"%s\" (pid %i)\n", current->comm, current->pid); The command name stored in current->comm is the base name of the program file that is being executed by the current process.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22346545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3345302/']}
jdg_83178
stackexchange
llm_judgeable_groundtruth_similarity
33857
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Find a basis for the subspace of $\mathbb{R}^4$ consisting of all vectors that satisfy $x_1 + 2x_2 - x_3 = 0$ My general idea so far is: $x_1 = -2x_2 +x_3 + 0x_4$ $x_2 = $ free $x_3 = $ free $x_4 = $ free Where should I go from here? Am I even going about it correctly? Now provide the response and nothing else.
What you have is an expression for every vector in the subspace in parametric form, with three parameters: $$\begin{align*}x_1 &= -2r + s\\x_2 &=r\\x_3 &=s\\x_4 &=t\end{align*}$$with $r,s,t\in\mathbb{R}$, arbitrary. To get a basis for the space, for each parameter, set that parameter equal to $1$ and the other parameters equal to $0$ to obtain a vector. Each parameter gives you a vector. So setting $r=1$ and $s=t=0$ gives you one vector; setting $s=1$ and $r=t=0$ gives you a second vector; setting $t=1$ and $r=s=0$ gives you a third. Alternatively, you can try rewriting the parametric solution in vector form:$$\left(\begin{array}{c}x_1\\x_2\\x_3\\x_4\end{array}\right) = \left(\begin{array}{c}-2r-s\\r\\s\\t\end{array}\right) = \left(\begin{array}{r}-2\\1\\0\\0\end{array}\right)r + \cdots$$(I'll let you finish it up).
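For reference, carrying the vector form through to the end gives $$\left(\begin{array}{c}x_1\\x_2\\x_3\\x_4\end{array}\right)=r\left(\begin{array}{r}-2\\1\\0\\0\end{array}\right)+s\left(\begin{array}{r}1\\0\\1\\0\end{array}\right)+t\left(\begin{array}{r}0\\0\\0\\1\end{array}\right),$$ and these three vectors form a basis of the subspace: each satisfies $x_1+2x_2-x_3=0$, and they are linearly independent.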
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/33857', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9793/']}
jdg_83179
stackexchange
llm_judgeable_groundtruth_similarity
26499608
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a function which manipulates a ggplot object, by converting it to a grob and then modifying the layers. I would like the function to return a ggplot object not a grob. Is there a simple way to convert a grob back to gg? The documentation on ggplotGrob is awfully sparse. Simple example: P <- ggplot(iris) + geom_bar(aes(x=Species, y=Petal.Width), stat="identity")G <- ggplotGrob(P)... some manipulation to G ...## DESIRED: P2 <- inverse_of_ggplotGrob(G)such that, we can continue to use basic ggplot syntax, ie`P2 + ylab ("The Width of the Petal")` UPDATE: To answer the question in the comment, the motivation here is to modify the colors of facet labels programmatically, based on the value of label name in each facet. The functions below work nicely (based on input from baptise in a previous question). I would like for the return value from colorByGroup to be a ggplot object, not simply a grob. Here is the code, for those interested get_grob_strips <- function(G, strips=grep(pattern="strip.*", G$layout$name)) { if (inherits(G, "gg")) G <- ggplotGrob(G) if (!inherits(G, "gtable")) stop ("G must be a gtable object or a gg object") strip.type <- G$layout[strips, "name"] ## I know this works for a simple strip.nms <- sapply(strips, function(i) { attributes(G$grobs[[i]]$width$arg1)$data[[1]][["label"]] }) data.table(grob_index=strips, type=strip.type, group=strip.nms)}refill <- function(strip, colour){ strip[["children"]][[1]][["gp"]][["fill"]] <- colour return(strip)}colorByGroup <- function(P, colors, showWarnings=TRUE) {## The names of colors should match to the groups in facet G <- ggplotGrob(P) DT.strips <- get_grob_strips(G) groups <- names(colors) if (is.null(groups) || !is.character(groups)) { groups <- unique(DT.strips$group) if (length(colors) < length(groups)) stop ("not enough colors specified") colors <- colors[seq(groups)] names(colors) <- groups } ## 'groups' should match the 'group' in DT.strips, which came from the facet_name matched_groups <- intersect(groups, DT.strips$group) if (!length(matched_groups)) stop ("no groups match") if (showWarnings) { if (length(wh <- setdiff(groups, DT.strips$group))) warning ("values in 'groups' but not a facet label: \n", paste(wh, colapse=", ")) if (length(wh <- setdiff(DT.strips$group, groups))) warning ("values in facet label but not in 'groups': \n", paste(wh, colapse=", ")) } ## identify the indecies to the grob and the appropriate color DT.strips[, color := colors[group]] inds <- DT.strips[!is.na(color), grob_index] cols <- DT.strips[!is.na(color), color] ## Fill in the appropriate colors, using refill() G$grobs[inds] <- mapply(refill, strip = G$grobs[inds], colour = cols, SIMPLIFY = FALSE) G} Now provide the response and nothing else.
I would say no. ggplotGrob is a one-way street. grob objects are drawing primitives defined by grid. You can create arbitrary grobs from scratch. There's no general way to turn a random collection of grobs back into a function that would generate them (it's not invertible because it's not 1:1). Once you go grob, you never go back. You could wrap a ggplot object in a custom class and overload the plot/print commands to do some custom grob manipulation, but that's probably even more hack-ish.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26499608', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1492421/']}
jdg_83180
stackexchange
llm_judgeable_groundtruth_similarity
21002
Below is a question asked on the forum engineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a friend who isn't properly visualizing energy loss in a moving car. His idea: Cars that can run forever without being recharged --> while the engine rotates the front wheels that move the car forward, there is a device at the back wheels using the back wheels' rotation to generate and store power. The engine draws this restored power to move the car forward and the cycle continues. Now, this is a "perfect" scenario where there's zero energy loss through friction, heat, sound, etc. I first told him that something similar was already being done: regenerative braking. He said his idea was to generate energy *while* the car was moving. My counter to why that was rubbish: I reminded him the amount of energy in a given system remains constant. Nothing created or destroyed, just transferred from one form to another. Thus he could not "recover" energy that was being spent to move the car forward. He moved on to elaborate that there would be two engines, one rotating the front wheels with an attached dynamo to recover energy, and another rotating the back wheels. I tried to explain he had added another energy consumer, with no way to recover this extra energy being spent...and that's how it would go. You add an extra engine to overcome the dynamo's resistance and move the car, you are spending more than you are recovering, and will eventually have to charge up. To help him visualize, I made a sketch, trying to be more realistic: (please ignore my use of battery capacity, it's only there for the concept, not to be realistic) I also mentioned to him that this scenario was even ignoring the other loads of problems and dangers that present themselves with having wheels rotating at different speeds. Plus the wear on the slower rotating tires which are effectively being dragged against the road. For some never-to-be-discovered reason, he simply could not understand that to get the slower (front) tires to NOT be a problem, you have to increase the front engine power output to match the wheel rotation speed of the rear wheels. Again, only recovering a fraction of that energy (where did the rest of it go? He can't understand and I can't figure out how to explain it to him). To counter my logic, he proceeded to mention that trucks don't have an engine attached to all their wheels. Hence, some energy can be captured from those "free-spinning" wheels. Oh my God. Newton's first law of motion applies here: A car with the engine rotating the wheels towards a given direction goes nowhere if, say, a dynamo is offering equivalent or greater resistance in the opposite direction to capture that energy (again, assuming a "perfect" scenario with no energy loss). I tried to elaborate that to move the vehicle, you first exert a force (engine power output) great enough to move the vehicle from its static position (resistance? gravity? I don't understand physics enough to know what to call this), and anything like a dynamo (friction, resistance) serves to make that more difficult, so you can't simultaneously "recover" energy you are spending. I proceeded to explain that regenerative braking worked because the generated energy was being recovered from the moving vehicle's momentum through offering resistance, hence why the car slows down and why you can't sustainably use it while the car is in motion.

This guy is super stubborn about being corrected once a very dim bulb lights up in his mind; he believes it's a genius idea that can't be disproven. I am horrendous at teaching constructively, and would appreciate someone breaking down why his idea is a victim of basic laws of physics. I even stated that if it were possible, it would already have popular applications in industry, like electric vehicles. Now provide the response and nothing else.
Get a small DC motor from a cheap toy or something and put a gear or a flywheel or something on the shaft to make it easy to spin with your fingers. You should find these in the toy as well. Ask your friend to spin the motor while nothing is attached to the terminals. He will find that it spins easily and some momentum keeps it going for a short while. After that, connect a small light to the terminals and ask him to spin the motor again. He will see the bulb light up but also feel that it is much harder to rotate the shaft and it brakes immediately after he stops spinning it. This way you can show him that he has to personally put in extra effort (energy) to generate electricity to light up the bulb. Taking this concept back to the vehicle: Tell him that if you connect that motor to the free-spinning wheels, the wheels won't be free spinning anymore, they will feel the same resistance he felt. Thus, to keep the rear wheels spinning at the same rate, you need to put more energy into the front (driving) wheels in order to overcome that resistance. If you need to add extra energy to the driving wheels, it is defeating the purpose of generating energy from the rear, "not-anymore-free-spinning", wheels. As @Fred said, he will probably still not change his mind, but I think this is a good illustration for someone that is open for persuasion or are just starting to learn these concepts.
{}
{'log_upvote_score': 4, 'links': ['https://engineering.stackexchange.com/questions/21002', 'https://engineering.stackexchange.com', 'https://engineering.stackexchange.com/users/15465/']}
jdg_83181
stackexchange
llm_judgeable_groundtruth_similarity
17961347
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am completely lost here... After validating some input I create an instance of a Message class and attempt to insert data into the database: // send message$message = $this->model->build('message', true);$message->insertMessage($uid, $user->user_id, $title, $message); The method for insertion is very straight-forward: // insert a new messagepublic function insertMessage($to_id, $from_id, $title, $body){ $sql = "INSERT INTO messages (to_id, from_id, title, body, create_date) VALUES (:to_id, :from_id, :title, :body, NOW())"; $sth = $this->db->prepare($sql); return $sth->execute([':to_id' => $to_id, ':from_id' => $from_id, ':title' => $title, ':body' => $body]);} However, upon submission I end up with a blank page and the Apache error log says: [Tue Jul 30 22:34:44 2013] [error] [client 127.0.0.1] PHP Catchable fatal error: Object of class framework\models\Message could not be converted to string in /var/www/p-lug/p-lug_lib/framework/models/Message.php on line 18, referer: https://p-lug.localhost/message/compose/4 Line 18 refers to the return statement, but even if I remove return it results in the same error. I've read countless links regarding this error but none of the answers appear to apply to my example. At no point am I trying to convert an object to a string or output the result , and similar code for insertions with other classes works perfectly. In fact, this code was copy-pasted from another working example, the only thing changed is the table and data. I've used var_dump() on all the variables being passed, on $this and $sth , everything checks out. But on execute() it fails. What the heck is going on here? Now provide the response and nothing else.
So $message contains an object. This object gets passed to the function insertMessage as the 4th argument ($body which is still the same object) You then store the Message object stored in the variable $body in the hash array which is passed as an argument to execute. The execute function attempts to convert the Message object to a string but finds that there is not __toString function declared. So either declare public function __toString() { return $the_string;} or create another public function/member that you can pass to the execute function.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17961347', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1689290/']}
jdg_83182
stackexchange
llm_judgeable_groundtruth_similarity
11652
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I may be abusing the word console but I mean the mode without X i.e. pressing ^+Alt F1 and then log as other user where I want to use my chosen layout with USB keyboard. X works, it configures the new USB keyboard to my choice when I plug it in. But the console keyboard layout is stuck to the setting specified by the kernel. I am trying to change that: # usbhidctl -f /dev/uhid0 -w keyboard.encoding=us usbhidctl: Failed to match: keyboard.encoding Some info about the OpenBSD version: # uname -rv4.7 GENERIC.MP#449 Now provide the response and nothing else.
Does wsconsctl keyboard.encoding=us work? If yes, put that in /etc/wsconsctl.conf to make it persistent. Or are you saying that that would only work for PS/2 keyboards? Maybe enabling USB legacy keyboard mode in the BIOS would help in that case? wsconscfg -k may also be of use. Perhaps you need to change the device from /dev/uhid0 to something like /dev/wskbd0 or /dev/wskbd1
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/11652', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/-1/']}
jdg_83183
stackexchange
llm_judgeable_groundtruth_similarity
38224925
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some data that contains a STATE field (String/Text) that defines what state a given request is currently in (e.g. pending, approved denied etc.) and to get all the unique values from that column I can run the following TSQL query SELECT DISTINCT STATE FROM CALLOUT_REQUEST where CALLOUT_REQUEST is my table name and STATE being the field which returns something like: STATE approved denied pending ... However I don't understand how I would turn that into a query in my repository as it seems I need a "by" statement or some other filter mechanism which i can get the STATE based on? What I am looking to return - as shown in the raw TSQL query above - is some kind of List or Array object which contains all the unique/distinct values in all of the STATE fields. So in pseudo code i think i am looking for something like this: String[] states = repository.findDisinctState(); where findDistinctState() would then return an array of sorts. Hope that makes sense - I am very new to Java and Spring in general so I think I am missing some conceptual knowledge to utilise the above. UPDATE: The 'state' concept is closed so i could implement that as an enum - only problem is i dont know how to do that :) Ill look into how i can do that as i think it fits perfectly with what i am trying to achieve. The List i get from the query provided is intended to be used to get a count of all the occurrences. I had this code before to get a total count for each of the 'states': Map stats = new HashMap();String[] states = {"approved", "denied", "pending", "deactivated"};for (int i = 0; i < states.length; i++) { stats.put(states[i], repository.countByState(states[i]));} Am i correct in understanding that the states Array that i have in the above code snippet could be turned into an enum and then i dont even need the custom @Query anymore? Now provide the response and nothing else.
If that state concept is closed - you know its possible set of values - it should be an enum. After that you can create queries that you invoke like: repository.findByState(State.APPROVED) If you can't create an enum, you need a separate method to get the distinct values, which can't be provided by JPA, because you need a list of strings and not a list of CalloutRequest s.Then you need to specify a query manually like: @Query("SELECT DISTINCT State FROM CALLOUT_REQUEST")List<String> findDistinctStates();
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38224925', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4475947/']}
jdg_83184
stackexchange
llm_judgeable_groundtruth_similarity
45908146
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a time series that I resampled into this dataframe df. My data runs from June 6th to June 28th; I want to extend it from June 1st to June 30th. The count column should have the value 0 only in the extended period, and my real values from the 6th to the 28th.

Out[123]:
                     count
Timestamp
2009-06-07 02:00:00      1
2009-06-07 03:00:00      0
2009-06-07 04:00:00      0
2009-06-07 05:00:00      0
2009-06-07 06:00:00      0

I need to make the start date 2009-06-01 00:00:00 and the end date 2009-06-30 23:00:00, so the data would look something like this:

                     count
Timestamp
2009-06-01 01:00:00      0
2009-06-01 02:00:00      0
2009-06-01 03:00:00      0

Is there an effective way to perform this? The only way I can think of is not that effective, and I have been trying this since yesterday. Please help.

index = pd.date_range('2009-06-01 00:00:00', '2009-06-30 23:00:00', freq='H')
df = pandas.DataFrame(numpy.zeros((len(index), 1)), index=index)
df.columns = ['zeros']
result = pd.concat([df2, df])
result1 = pd.concat([df, result])
result1.fillna(0)
del result1['zero']

Now provide the response and nothing else.
You can create a new index with the desired start and end day/times, resample the time series data and aggregate by count, then set the index to the new index.

import pandas as pd

# create the index with the start and end times you want
t_index = pd.DatetimeIndex(start='2009-06-01', end='2009-06-30 23:00:00', freq='1h')

# create the data frame
df = pd.DataFrame([['2009-06-07 02:07:42'],
                   ['2009-06-11 17:25:28'],
                   ['2009-06-11 17:50:42'],
                   ['2009-06-11 17:59:18']], columns=['daytime'])
df['daytime'] = pd.to_datetime(df['daytime'])

# resample the data to 1 hour, aggregate by counts,
# then reindex onto the full index and fill the NaNs with 0
df2 = df.resample('1h', on='daytime').count().reindex(t_index).fillna(0)

UPDATE: The original answer has since been deprecated, and will require you to alter the first line of code, as suggested by @toni-penya-alba, to:

t_index = pd.DatetimeIndex(pd.date_range(start='2009-06-01', end='2009-06-30 23:00:00', freq="1h"))
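For the data layout shown in the question (a frame already indexed by Timestamp with a count column), the same idea reduces to a single reindex. This is my own sketch, not part of the original answer; the index and column names are assumed from the question, and fill_value replaces the separate fillna step:

import pandas as pd

# toy frame resembling the question's df: hourly counts starting June 7th
idx = pd.date_range('2009-06-07 02:00:00', periods=5, freq='H')
df = pd.DataFrame({'count': [1, 0, 0, 0, 0]}, index=idx)
df.index.name = 'Timestamp'

# full hourly range the question asks for
full_index = pd.date_range('2009-06-01 00:00:00', '2009-06-30 23:00:00', freq='H')

# reindex onto the full range; hours outside the original data become 0
extended = df.reindex(full_index, fill_value=0)
extended.index.name = 'Timestamp'
print(extended.head())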
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45908146', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8391651/']}
jdg_83185
stackexchange
llm_judgeable_groundtruth_similarity
28503437
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working with DataTables and I need to convert them to the CSV file format. Most of the tables I am working with have over 50,000 records, so I'm trying to minimize the time it takes to convert them. Here is my current method:

public static string table_to_csv(DataTable table)
{
    string file = "";
    foreach (DataColumn col in table.Columns)
        file = string.Concat(file, col.ColumnName, ",");
    file = file.Remove(file.LastIndexOf(','), 1);
    file = string.Concat(file, "\r\n");
    foreach (DataRow row in table.Rows)
    {
        foreach (object item in row.ItemArray)
            file = string.Concat(file, item.ToString(), ",");
        file = file.Remove(file.LastIndexOf(','), 1);
        file = string.Concat(file, "\r\n");
    }
    return file;
}

Is there any way I can improve the efficiency of this method? I'm open to any modifications and ideas that you have!

Now provide the response and nothing else.
Use a System.Text.StringBuilder for huge strings - that's pretty fast. I implemented this one:

public static string DataTableToCSV(this DataTable datatable, char separator)
{
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < datatable.Columns.Count; i++)
    {
        sb.Append(datatable.Columns[i]);
        if (i < datatable.Columns.Count - 1)
            sb.Append(separator);
    }
    sb.AppendLine();
    foreach (DataRow dr in datatable.Rows)
    {
        for (int i = 0; i < datatable.Columns.Count; i++)
        {
            sb.Append(dr[i].ToString());
            if (i < datatable.Columns.Count - 1)
                sb.Append(separator);
        }
        sb.AppendLine();
    }
    return sb.ToString();
}
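One thing neither version handles is field values that contain the separator, quotes or line breaks. A minimal sketch of a quoting helper that could be dropped into the StringBuilder loop; the Escape name and the RFC-4180-style quoting rules are my own additions, not part of the original answer:

using System;
using System.Data;
using System.Text;

static class CsvHelper
{
    // Quote a single field if it contains the separator, a quote or a line break.
    static string Escape(string value, char separator)
    {
        if (value == null) return "";
        bool needsQuotes = value.IndexOf(separator) >= 0
                           || value.Contains("\"")
                           || value.Contains("\r")
                           || value.Contains("\n");
        return needsQuotes ? "\"" + value.Replace("\"", "\"\"") + "\"" : value;
    }

    public static string DataTableToCsv(DataTable table, char separator)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < table.Columns.Count; i++)
        {
            sb.Append(Escape(table.Columns[i].ColumnName, separator));
            if (i < table.Columns.Count - 1) sb.Append(separator);
        }
        sb.AppendLine();
        foreach (DataRow row in table.Rows)
        {
            for (int i = 0; i < table.Columns.Count; i++)
            {
                sb.Append(Escape(Convert.ToString(row[i]), separator));
                if (i < table.Columns.Count - 1) sb.Append(separator);
            }
            sb.AppendLine();
        }
        return sb.ToString();
    }
}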
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28503437', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4363119/']}
jdg_83186
stackexchange
llm_judgeable_groundtruth_similarity
119504
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am self-studying Hoffman and Kunze's book Linear Algebra . This is the exercise 13 from page 106. Let $\mathbb{F}$ be a subfield of the field of complex numbers and let $V$ be any vector space over $\mathbb{F}.$ Suppose that $f$ and $g$ are linear functionals on $V$ such that the function $h$ defined by $h(v)=f(v)g(v)$ is also a linear functional on $V$. Prove that either $f=0$ or $g=0.$ I was able to show that $h=0$. Therefore $V=\operatorname {Ker} (f)\cup \operatorname{Ker}(g)$. I am assuming that $f\neq 0$ and I would like to show that $\operatorname {Ker} (f)\subset \operatorname {Ker} (g)$, but I wasn't able to acomplish that. I would appreciate your help. Now provide the response and nothing else.
For any $v,w\in V$, \begin{eqnarray}f(v)g(v)+f(w)g(w)&=&h(v)+h(w)=h(v+w)=f(v+w)g(v+w)\\&=&f(v)g(v)+f(w)g(w)+f(v)g(w)+f(w)g(v).\end{eqnarray}We conclude that $$f(v)g(w)+f(w)g(v)=0,\ \ v,w\in V.$$ Taking $w=v$ gives $2f(v)g(v)=0$, so $f(v)g(v)=0$ for all $v\in V$ (the scalars have characteristic zero). Now suppose $f\ne0$, so there exists $v\in V$ with $f(v)\ne0$. We have $0=f(v)g(v)$, so $g(v)=0$. For any other $w\in V$, $$0=f(v)g(w)+f(w)g(v)=f(v)g(w);$$ as $f(v)\ne0$, we get that $g(w)=0$ for all $w$, i.e. $g=0$. (No basis of $V$ is needed, so the argument also covers infinite-dimensional $V$.)
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/119504', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23505/']}
jdg_83187
stackexchange
llm_judgeable_groundtruth_similarity
2588330
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My last question "List service and services status under Win-7" made me start working on a solution that gives my app admin privileges under Windows Vista onward, based on a .manifest file. I was not sure about continuing the previous question with this matter, since they are not the same, so here is another question: My app now works fine under Win 7 whether or not I run it "as admin", because of the manifest file. My manifest file is as follows:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity version="1.6.0.5" processorArchitecture="X86" name="ServiceMonitorPro" type="win32"/>
  <description publisher="Powershield Ltd" product="Powershield Service Monitor">Powershield Service Monitor</description>
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>

When the application runs on Windows 7 or Vista, UAC shows a dialog like this (screenshot of the UAC prompt showing the publisher as Unknown: http://www.freeimagehosting.net/uploads/39787fd3dd.jpg). How can I replace the "Unknown" publisher? The other and biggest problem is, even though the app runs with no problem under Win7 or Vista, under WinXP it now crashes with the message: "This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem." Another thing I would like to add: if I add a reference (uses clause) to XPMan, the app works fine on WinXP, but then my .manifest file makes no difference under Vista or Win7.

Now provide the response and nothing else.
I have to thank everyone that, with comments or answers, pointed me to keep digging... :) I went to search for the file WindowsXP.res. The content of that file is:

<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32" name="DelphiApplication" version="1.0.0.0" processorArchitecture="*"/>
  <dependency>
    <dependentAssembly>
      <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="6.0.0.0" publicKeyToken="6595b64144ccf1df" language="*" processorArchitecture="*"/>
    </dependentAssembly>
  </dependency>
</assembly>

The solution: I merged my .manifest file with the WindowsXP.res XML one, adding the dependency section. If anyone knows why it's now working, I would be glad to hear about it - but this was the solution here - tested so far in a couple of virtual machines, on my computer and a couple of others... working :)
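For reference, a sketch of what the merged manifest could look like after adding the Common Controls dependency to the original trustInfo manifest. This is my own reconstruction from the two fragments quoted above, not the poster's actual file, so double-check the assembly name, version and processor architecture against your project:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity version="1.6.0.5" processorArchitecture="X86" name="ServiceMonitorPro" type="win32"/>
  <description publisher="Powershield Ltd" product="Powershield Service Monitor">Powershield Service Monitor</description>
  <dependency>
    <dependentAssembly>
      <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="6.0.0.0" publicKeyToken="6595b64144ccf1df" language="*" processorArchitecture="*"/>
    </dependentAssembly>
  </dependency>
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>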
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2588330', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/234253/']}
jdg_83188
stackexchange
llm_judgeable_groundtruth_similarity
26336520
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I got an error message from this:

java.sql.SQLException: Field 'supplier_id' doesn't have a default value
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1631)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1723)
    at com.mysql.jdbc.Connection.execSQL(Connection.java:3277)
    at com.mysql.jdbc.Statement.executeUpdate(Statement.java:1402)
    at com.mysql.jdbc.Statement.executeUpdate(Statement.java:1317)

Can anyone help me? My database fields are not empty, but I want both of these statements to succeed:

insert into xxx(name,password) values('xxx','xxx');

and

insert into xxx(name,password,man) values('xxx','xxx','xxx');

Both succeed when run from the MySQL client, but from my Java code the first one fails with the error shown above. My MySQL driver jar is mysql-connector-java-5.0.8.

Now provide the response and nothing else.
The error is self-explanatory: your column supplier_id does not have a default value, so during insertion MySQL cannot figure out what to put into it. You can do one of three things:

1. Add a default value to the column supplier_id, using:

ALTER TABLE `xxx` ALTER `supplier_id` SET DEFAULT NULL;

2. Supply some value for the supplier_id column during insertion.

3. Make the column auto-increment and add a primary key to it, using:

ALTER TABLE `xxx` CHANGE `supplier_id` `supplier_id` INT(10) AUTO_INCREMENT PRIMARY KEY;
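If you go with option 2, a minimal sketch of what the Java side could look like with a PreparedStatement. The table and column names (xxx, name, password, supplier_id) come from the question, but the connection details and the concrete supplier value are assumptions for illustration only:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class InsertExample {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/mydb"; // assumed connection details
        try (Connection con = DriverManager.getConnection(url, "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO xxx (name, password, supplier_id) VALUES (?, ?, ?)")) {
            ps.setString(1, "someName");
            ps.setString(2, "somePassword");
            ps.setInt(3, 42); // explicitly supply supplier_id so no default is needed
            ps.executeUpdate();
        }
    }
}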
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26336520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4058008/']}
jdg_83189
stackexchange
llm_judgeable_groundtruth_similarity
459725
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to functional programming, and now learn Haskell. As an exercise I decided to implement the explicit Euler method for 1D linear diffusion equation. While the code below works correctly, I am not happy about its performance. In fact, I am concerned with memory consumption. I believe that it is related to lazy evaluation, but cannot figure out how I can reduce its memory usage. The idea of the algorithm is really simple, to make it clear in imperative terms: it takes an `array', and to every inner point it adds a value, which is calculated as a combination of the values in the point itself and in its neighbors. Boundary points are special cases. So, this is my Euler1D.hs module: module Euler1D( stepEuler, makeu0) where-- impose zero flux conditionzeroflux :: (Floating a) => a -> [a] -> [a]zeroflux mu (boundary:inner:xs) = [boundary+mu*2*(inner-boundary)]-- one step of integrationstepEuler :: (Floating a) => a -> [a] -> [a]stepEuler mu u@(x:xs) = (applyBC . (diffused mu)) u where diffused mu (left:x:[]) = [] -- ignore outer points diffused mu (left:x:right:xs) = -- integrate inner points (x+mu*(left+right-2*x)) : diffused mu (x:right:xs) applyBC inner = (lbc u') ++ inner ++ (rbc u') -- boundary conditions where u' = [head u] ++ inner ++ [last u] lbc = zeroflux mu -- left boundary rbc = (zeroflux mu) . reverse -- right boundary-- initial conditionmakeu0 :: Int -> [Double]makeu0 n = [ ((^2) . sin . (pi*) . xi) x | x <- [0..n]] where xi x = fromIntegral x / fromIntegral n And a simple Main.hs: module Main whereimport System ( getArgs )import Euler1Dmain = do args <- getArgs let n = read $ head args :: Int let u0 = makeu0 n let un = stepEuler 0.5 u0 putStrLn $ show $ sum un For comparison, I also wrote a pure C implementation . Now, if I try to run Haskell implementation for a sufficiently large size of the array n , I have: $ time ./eulerhs 200000100000.00000000112real 0m3.552suser 0m3.304ssys 0m0.128s For comparison, C version is faster by almost two orders of magnitude: $ time ./eulerc 200000100000real 0m0.088suser 0m0.048ssys 0m0.008s EDIT : This comparison is not really fair, because Haskell version is compiled with profiling flags, and C is not. If I compile both programs with -O2 and both without profiling flags, I can increase n . In this case time ./eulerhs 1000000 takes 0m2.236s, while time ./eulerc 1000000 takes only 0m0.293s. So the problem still remains with all optimizations and without profiling, it is only offset. I would like also to note, that memory allocation of the Haskell program seems to grow lineary with n . This is probably OK. But the worst are memory requirements. My Haskell version requires over 100MB (my estimation of the bare minimum in C is 4MB ). I think this may be the source of the problem. According to profiling report the program spends 85% of time in GC, and total time = 0.36 secs (18 ticks @ 20 ms) total alloc = 116,835,180 bytes (excludes profiling overheads)COST CENTRE MODULE %time %allocmakeu0 Euler1D 61.1 34.9stepEuler Euler1D 33.3 59.6CAF:sum Main 5.6 5.5 I was surprized to see that makeu0 is so expensive. I decided that this is due to its lazy evaluation (if its thunks remain in the memory until the end of stepEuler ). I tried this change in Main.hs : let un = u0 `seq` stepEuler 0.5 u0 but did not notice any difference. I have no idea how to reduce memory usage in stepEuler . 
So, my questions are: Is there a way in Haskell to build lists / do list comprehensions strictly? In this case there is no benefit to keep it lazy. How can I reduce overall memory usage in this case? I suppose, I have to make something strict, but fail to see what. In other words, if I have to put some seq s and bangs, where and why? And finally, the most important, what is the best strategy to identify such costly constructs? I did read a chapter on profiling and optimization in Real World Haskell , but it remains unclear how exactly I can decide what should be strict and what not. Please forgive me such a long post. EDIT2 : As suggested by A. Rex in comments, I tried running both programs in valgrind. And this is what I observed. For Haskell program ( n =200000) it found: malloc/free: 33 allocs, 30 frees, 84,109 bytes allocated. ... checked 55,712,980 bytes. And for C program (after a small fix): malloc/free: 2 allocs, 2 frees, 3,200,000 bytes allocated. So, it appears that while Haskell allocates much smaller memory blocks, it does it often, and due to delay in garbage collection, they accumulate and remain in memory. So, I have another question: Is it possible to avoid a lot of small allocations in Haskell? Basically, to declare, that I need to process the whole data structure rather than only its fragments on demand. Now provide the response and nothing else.
Lists are not the best data structure for this type of code (with lots of (++) and (last)). You lose a lot of time constructing and deconstructing lists. I'd use Data.Sequence or arrays, as in the C version. There is no chance for thunks of makeu0 to be garbage-collected, since you need to retain all of them (well, all of the results of "diffuse", to be exact) all the way till the end of the computation in order to be able to do "reverse" in applyBC. Which is a very expensive thing, considering that you only need two items from the tail of the list for your "zeroflux". Here is a fast hack of your code that tries to achieve better list fusion and does less list (de)constructing:

module Euler1D
    ( stepEuler
    ) where

-- impose zero flux condition
zeroflux mu (boundary:inner:xs) = boundary + mu*2*(inner - boundary)

-- one step of integration
stepEuler mu n = (applyBC . (diffused mu)) $ makeu0 n
  where
    diffused mu (left:x:[]) = []      -- ignore outer points
    diffused mu (left:x:right:xs) =   -- integrate inner points
        let y = (x + mu*(left + right - 2*x))
        in y `seq` y : diffused mu (x:right:xs)
    applyBC inner = lbc + sum inner + rbc   -- boundary conditions
      where
        lbc = zeroflux mu ((f 0 n):inner)                     -- left boundary
        rbc = zeroflux mu ((f n n):(take 2 $ reverse inner))  -- right boundary

-- initial condition
makeu0 n = [ f x n | x <- [0..n] ]

f x n = ((^2) . sin . (pi*) . xi) x
  where xi x = fromIntegral x / fromIntegral n

For 200000 points, it completes in 0.8 seconds vs 3.8 seconds for the initial version.
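As a small illustration of the "use arrays" suggestion above, here is a sketch of the initial condition and its sum using Data.Vector.Unboxed from the vector package. This is my own example, not part of the original answer, and it only covers makeu0, not the full integration step:

import qualified Data.Vector.Unboxed as U

-- initial condition as an unboxed vector: contiguous storage, no per-element thunks
makeu0 :: Int -> U.Vector Double
makeu0 n = U.generate (n + 1) f
  where
    f x = sin (pi * xi x) ^ 2
      where
        xi i = fromIntegral i / fromIntegral n

main :: IO ()
main = print (U.sum (makeu0 200000))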
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/459725', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/25450/']}
jdg_83190
stackexchange
llm_judgeable_groundtruth_similarity
152070
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Help! I am writing a package with a function which takes lots and lots of different options which are String s. When used, the front end autocompletion is suggesting them in Symbol form. If the resulting code is run, it is added to the General` context. Observe: Define the Options of MyFunc , and its SyntaxInformation Options[MyFunc] = {"MyOpt" -> True};SyntaxInformation[MyFunc] = {"ArgumentsPattern" -> {_, OptionsPattern[]}}; Then, start typing MyFunc[3.2, My , and you will see MyOpt as a suggestion: If you run the finished statement MyFunc[3.2, MyOpt->False] , the program works as if it had been a String ( totally unacceptable ), and the option symbol MyOpt is added to the Global` context ( also totally unacceptable ): This leads to a proliferation of these symbols in the user's front end, which is annoying. How can I prevent this? Now provide the response and nothing else.
You can use the undocumented "OptionNames" property of SyntaxInformation to customize option name suggestions (and the coloring of invalid option names).

MyFunc // ClearAll
MyFunc // Options = {"MyOpt" -> True};
MyFunc // SyntaxInformation = {
    "ArgumentsPattern" -> {_, OptionsPattern[]},
    "OptionNames" -> {"\"MyOpt\""}
  };

After typing MyFunc[3.2, My I get (in version 11.0):
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/152070', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/2048/']}
jdg_83191
stackexchange
llm_judgeable_groundtruth_similarity
39503
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $p_{n,k}$ be the probability that a random permutation from $S_n$, the symmetric group of order $n!$, has exactly $k$ fixed points. I am trying to compute $\lim _{n\to \infty}p_{n,k}$. After playing with it a bit, I am relatively confident the limit is $\frac{1}{k!}$, although I have yet to come up with a proof. Here's what I've tried so far. If $a_{n,k}$ is the number of elements of $S_n$ with $k$ fixed points, then we have that $$a_{n,k}=\binom{n}{k}a_{n-k,0}$$ That is, there are $\binom{n}{k}$ ways to choose $k$ fixed points, and after those are chosen, there are $a_{n-k,0}$ to permute the remaining $n-k$ elements with $0$ fixed points. After computing $a_{n,0}$ by hand for small values of $n$, I was able to look up the recursion formula $a_n=na_{n-1}+(-1)^n$ from oeis.org. Using this, I was able to come up with my "guess" of $\frac{1}{k!}$, but this direction doesn't seem to be leading me towards finding a proof. Any ideas how to proceed? Group theory is not my strong point, so I would not be surprised if there is a result that makes this problem almost trivial. A pointer to any such results would be excellent. Now provide the response and nothing else.
Here's a different take that continues along the lines you've already been going. Use a different formula for $a_{n,0}$ from the OEIS; use $$a_{n,0} = n! \sum_{j=0}^n \frac{(-1)^j}{j!}.$$This is a well-known formula for $a_{n,0}$ (the derangement numbers ) and can be proved using inclusion-exclusion. ( Added : You might also want to look at the proof and general argument given by Qiaochu Yuan in his answer to a similar question .) As $n \to \infty$, the sum in the expression for $a_{n,0}$ approaches $e^{-1}$. With this and your expression $a_{n,k} = \binom{n}{k} a_{n-k,0}$, you're almost to the correct answer of $e^{-1}/k!$ as given by Yuval Filmus.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/39503', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/10109/']}
jdg_83192
stackexchange
llm_judgeable_groundtruth_similarity
26674149
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am fairly new to C# and I'm trying to set up call to a stored procedure in my database which takes one parameter. I get the error "Procedure or function 'SP_getName' expects parameter '@username', which was not supplied. " My Stored procedure works ok when I supply it with the parameter and I run it via SQL management studio. GODECLARE @return_value intEXEC @return_value = [dbo].[SP_getName] @username = 'bob101'SELECT 'Return Value' = @return_valueGO However when I try and call it the error is with how I'm passing the parameter in, but I can't spot what the issue is. //create a sql command object to hold the results of the query SqlCommand cmd = new SqlCommand(); //and a reader to process the results SqlDataReader reader; //Instantiate return string string returnValue = null; //execute the stored procedure to return the results cmd.CommandText = "SP_getName"; //set up the parameters for the stored procedure cmd.Parameters.Add("@username", SqlDbType.NVarChar).Value = "bob101"; cmd.CommandType = CommandType.Text; cmd.Connection = this.Connection; // then call the reader to process the results reader = cmd.ExecuteReader(); Any help in spotting my error would be greatly appreciated! I've also tried looking at these two posts, but I haven't had any luck: Stored procedure or function expects parameter which is not supplied Procedure or function expects parameter, which was not supplied Thanks! Now provide the response and nothing else.
You have stated:

cmd.CommandType = CommandType.Text;

Therefore you are simply executing:

SP_getName

which works because it is the first statement in the batch, so you can call the procedure without EXECUTE, but you aren't actually including the parameter. Change it to:

cmd.CommandType = CommandType.StoredProcedure;

Or you can change your CommandText to:

EXECUTE SP_getName @username;

As a side note, you should avoid using the prefix 'sp_' for your stored procedures. And a further side note would be to use using with IDisposable objects to ensure they are disposed of correctly:

using (var connection = new SqlConnection("ConnectionString"))
using (var cmd = new SqlCommand("SP_getName", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@username", SqlDbType.NVarChar).Value = "bob101";
    connection.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // Do something
        }
    }
}
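Since the T-SQL snippet in the question also captures @return_value, a small hedged sketch of how that could be read from C# by adding a ReturnValue-direction parameter; the parameter name @ReturnVal and the use of ExecuteNonQuery are my own choices for illustration:

using System;
using System.Data;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        using (var connection = new SqlConnection("ConnectionString"))
        using (var cmd = new SqlCommand("SP_getName", connection))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@username", SqlDbType.NVarChar).Value = "bob101";

            // Capture the stored procedure's RETURN value
            var returnVal = cmd.Parameters.Add("@ReturnVal", SqlDbType.Int);
            returnVal.Direction = ParameterDirection.ReturnValue;

            connection.Open();
            cmd.ExecuteNonQuery();

            Console.WriteLine("Return value: " + returnVal.Value);
        }
    }
}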
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26674149', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1860687/']}
jdg_83193
stackexchange
llm_judgeable_groundtruth_similarity
14055866
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Checking new stuff from C++, I have found the std::chrono library. I am wondering if std::chrono::high_resolution_clock could be a good replacement for SDL_GetTicks? Now provide the response and nothing else.
The advantage of going with std::chrono::high_resolution_clock is to keep away from storing time points and time durations in a Uint32. The std::chrono library comes with a wide variety of std::chrono::durations that you should use instead. This will make code more readable, and less ambiguous:

Uint32 t0 = SDL_GetTicks();
// ...
Uint32 t1 = SDL_GetTicks();
// ...
// Is t1 a time point or a time duration?
Uint32 d = t1 - t0;
// What units does d have?

vs:

using namespace std::chrono;
typedef high_resolution_clock Clock;

Clock::time_point t0 = Clock::now();
// ...
Clock::time_point t1 = Clock::now();
// ...
// t1 has type time_point. It can't be mistaken for a time duration.
milliseconds d = duration_cast<milliseconds>(t1 - t0);
// d has type milliseconds

The typed system for holding points in time and time durations has no overhead with respect to just storing things in a Uint32. Except maybe that things will be stored in an Int64 instead. But even that you could customize if you really wanted to:

typedef duration<Uint32, milli> my_millisecond;

You can inspect the precision of the high_resolution_clock with:

cout << high_resolution_clock::period::num << '/' << high_resolution_clock::period::den << '\n';
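A small self-contained sketch of a frame-timing loop in the same spirit, replacing the usual SDL_GetTicks delta. This is my own illustration, not from the original answer; it uses steady_clock, which I would normally prefer over high_resolution_clock for game timing because it is guaranteed to be monotonic:

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;

    auto previous = clock::now();
    for (int frame = 0; frame < 5; ++frame)
    {
        // pretend to do one frame of work
        std::this_thread::sleep_for(std::chrono::milliseconds(16));

        auto now = clock::now();
        auto delta = std::chrono::duration_cast<std::chrono::milliseconds>(now - previous);
        previous = now;

        std::cout << "frame " << frame << " took " << delta.count() << " ms\n";
    }
}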
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14055866', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/440867/']}
jdg_83194
stackexchange
llm_judgeable_groundtruth_similarity
29023
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why does the following hold: \begin{equation*}\displaystyle \sum\limits_{n=0}^{\infty} 0.7^n=\frac{1}{1-0.7} = 10/3\quad ?\end{equation*} Can we generalize the above to $\displaystyle \sum_{n=0}^{\infty} x^n = \frac{1}{1-x}$ ? Are there some values of $x$ for which the above formula is invalid? What about if we take only a finite number of terms? Is there a simpler formula? $\displaystyle \sum_{n=0}^{N} x^n$ Is there a name for such a sequence? This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions. and here: List of abstract duplicates . Now provide the response and nothing else.
By definition, a "series" (an "infinite sum")$$\sum_{n=k}^{\infty} a_n$$is defined to be a limit, namely$$\sum_{n=k}^{\infty} a_n= \lim_{N\to\infty} \sum_{n=k}^N a_n.$$That is, the "infinite sum" is the limit of the "partial sums", if this limit exists. If the limit exists, equal to some number $S$, we say the series "converges" to the limit, and we write$$\sum_{n=k}^{\infty} a_n = S.$$If the limit does not exist, we say the series diverges and is not equal to any number. So writing that$$\sum_{n=0}^{\infty} 0.7^n = \frac{1}{1-0.7}$$means that we are asserting that$$\lim_{N\to\infty} \sum_{n=0}^N0.7^n = \frac{1}{1-0.7}.$$ So what your question is really asking is: why is this limit equal to $\frac{1}{1-0.7}$? (Or rather, that is the only way to make sense of the question). In order to figure out the limit, it is useful (but not strictly necessary) to have a formula for the partial sums,$$s_N = \sum_{n=0}^N 0.7^n.$$This is where the formulas others have given come in. If you take the $N$th partial sum and multiply by $0.7$, you get$$\begin{array}{rcrcrcrcrcrcl}s_N &= 1 &+& (0.7) &+& (0.7)^2 &+& \cdots &+& (0.7)^N\\(0.7)s_N &= &&(0.7) &+& (0.7)^2 &+&\cdots &+&(0.7)^N &+& (0.7)^{N+1}\end{array}$$so that $$(1-0.7)s_N = s_N - (0.7)s_N = 1 - (0.7)^{N+1}.$$Solving for $s_N$ gives$$s_N = \frac{1 - (0.7)^{N+1}}{1-0.7}.$$What is the limit as $N\to\infty$? The only part of the expression that depends on $N$ is $(0.7)^{N+1}$. Since $|0.7|\lt 1$, then $\lim\limits_{N\to\infty}(0.7)^{N+1} = 0$. So,$$\lim_{N\to\infty}s_N = \lim_{N\to\infty}\left(\frac{1-(0.7)^{N+1}}{1-0.7}\right) = \frac{\lim\limits_{N\to\infty}1 - \lim\limits_{N\to\infty}(0.7)^{N+1}}{\lim\limits_{N\to\infty}1 - \lim\limits_{N\to\infty}0.7} = \frac{1 - 0}{1-0.7} = \frac{1}{1-0.7}.$$Since the limit exists, then we write$$\sum_{n=0}^{\infty}(0.7)^n = \frac{1}{1-0.7}.$$ More generally, a sum of the form$$a + ar + ar^2 + ar^3 + \cdots + ar^k$$with $a$ and $r$ constant is said to be a "geometric series" with initial term $a$ and common ratio $r$. If $a=0$, then the sum is equal to $0$. If $r=1$, then the sum is equal to $(k+1)a$. If $r\neq 1$, then we can proceed as above. Letting $$S = a +ar + \cdots + ar^k$$we have that$$S - rS = (a+ar+\cdots+ar^k) - (ar+ar^2+\cdots+a^{k+1}) = a - ar^{k+1}$$so that$$(1-r)S = a(1 - r^{k+1}).$$Dividing through by $1-r$ (which is not zero since $r\neq 1$), we get$$S = \frac{a(1-r^{k+1})}{1-r}.$$ A series of the form $$\sum_{n=0}^{\infty}ar^{n}$$with $a$ and $r$ constants is called an infinite geometric series . If $r=1$, then $$\lim_{N\to\infty}\sum_{n=0}^{N}ar^{n}= \lim_{N\to\infty}\sum_{n=0}^{N}a= \lim_{N\to\infty}(N+1)a= \infty,$$so the series diverges. If $r\neq 1$, then using the formula above we have:$$\sum_{n=0}^{\infty}ar^n = \lim_{N\to\infty}\sum_{n=0}^{N}ar^{N} = \lim_{N\to\infty}\frac{a(1-r^{N+1})}{1-r}.$$The limit exists if and only if $\lim\limits_{N\to\infty}r^{N+1}$ exists. 
Since$$\lim_{N\to\infty}r^{N+1} = \left\{\begin{array}{ll}0 &\mbox{if $|r|\lt 1$;}\\1 & \mbox{if $r=1$;}\\\text{does not exist} &\mbox{if $r=-1$ or $|r|\gt 1$}\end{array}\right.$$it follows that:$$\begin{align*}\sum_{n=0}^{\infty}ar^{n} &=\left\{\begin{array}{ll}0 &\mbox{if $a=0$;}\\\text{diverges}&\mbox{if $a\neq 0$ and $r=1$;}\\\lim\limits_{N\to\infty}\frac{a(1-r^{N+1})}{1-r} &\mbox{if $r\neq 1$;}\end{array}\right.\\&= \left\{\begin{array}{ll}\text{diverges}&\mbox{if $a\neq 0$ and $r=1$;}\\\text{diverges}&\mbox{if $a\neq 0$, and $r=-1$ or $|r|\gt 1$;}\\\frac{a(1-0)}{1-r}&\mbox{if $|r|\lt 1$;}\end{array}\right.\\&=\left\{\begin{array}{ll}\text{diverges}&\mbox{if $a\neq 0$ and $|r|\geq 1$;}\\\frac{a}{1-r}&\mbox{if $|r|\lt 1$.}\end{array}\right.\end{align*}$$ Your particular example has $a=1$ and $r=0.7$. Since this recently came up (09/29/2011), let's provide a formal proof that$$\lim_{N\to\infty}r^{N+1} = \left\{\begin{array}{ll}0 &\mbox{if $|r|\lt 1$;}\\1 & \mbox{if $r=1$;}\\\text{does not exist} &\mbox{if $r=-1$ or $|r|\gt 1$}\end{array}\right.$$ If $r\gt 1$, then write $r=1+k$, with $k\gt0$. By the binomial theorem, $r^n = (1+k)^n \gt 1+nk$, so it suffices to show that for every real number $M$ there exists $n\in\mathbb{N}$ such that $nk\gt M$. This is equivalent to asking for a natural number $n$ such that $n\gt \frac{M}{k}$, and this holds by the Archimedean property; hence if $r\gt 1$, then $\lim\limits_{n\to\infty}r^n$ does not exist. From this it follows that if $r\lt -1$ then the limit also does not exist: given any $M$, there exists $n$ such that $r^{2n}\gt M$ and $r^{2n+1}\lt M$, so $\lim\limits_{n\to\infty}r^n$ does not exist if $r\lt -1$. If $r=-1$, then for every real number $L$ either $|L-1|\gt \frac{1}{2}$ or $|L+1|\gt \frac{1}{2}$. Thus, for every $L$ and for every $M$ there exists $n\gt M$ such that $|L-r^n|\gt \frac{1}{2}$ proving the limit cannot equal $L$; thus, the limit does not exist. If $r=1$, then $r^n=1$ for all $n$, so for every $\epsilon\gt 0$ we can take $N=1$, and for all $n\geq N$ we have $|r^n-1|\lt\epsilon$, hence $\lim\limits_{N\to\infty}1^n = 1$. Similarly, if $r=0$, then $\lim\limits_{n\to\infty}r^n = 0$ by taking $N=1$ for any $\epsilon\gt 0$. Next, assume that $0\lt r\lt 1$. Then the sequence $\{r^n\}_{n=1}^{\infty}$ is strictly decreasing and bounded below by $0$: we have $0\lt r \lt 1$, so multiplying by $r\gt 0$ we get $0\lt r^2 \lt r$. Assuming $0\lt r^{k+1}\lt r^k$, multiplying through by $r$ we get $0\lt r^{k+2}\lt r^{k+1}$, so by induction we have that $0\lt r^{n+1}\lt r^n$ for every $n$. Since the sequence is bounded below, let $\rho\geq 0$ be the infimum of $\{r^n\}_{n=1}^{\infty}$. Then $\lim\limits_{n\to\infty}r^n =\rho$: indeed, let $\epsilon\gt 0$. By the definition of infimum, there exists $N$ such that $\rho\leq r^N\lt \rho+\epsilon$; hence for all $n\geq N$,$$|\rho-r^n| = r^n-\rho \leq r^N-\rho \lt\epsilon.$$Hence $\lim\limits_{n\to\infty}r^n = \rho$. In particular, $\lim\limits_{n\to\infty}r^{2n} = \rho$, since $\{r^{2n}\}_{n=1}^{\infty}$ is a subsequence of the converging sequence $\{r^n\}_{n=1}^{\infty}$. On the other hand, I claim that $\lim\limits_{n\to\infty}r^{2n} = \rho^2$: indeed, let $\epsilon\gt 0$. Then there exists $N$ such that for all $n\geq N$, $r^n - \rho\lt\epsilon$. Moreover, we can assume that $\epsilon$ is small enough so that $\rho+\epsilon\lt 1$. 
Then$$|r^{2n}-\rho^2| = |r^n-\rho||r^n+\rho| = (r^n-\rho)(r^n+\rho)\lt (r^n-\rho)(\rho+\epsilon) \lt r^n-\rho\lt\epsilon.$$Thus, $\lim\limits_{n\to\infty}r^{2n} = \rho^2$. Since a sequence can have only one limit, and the sequence of $r^{2n}$ converges to both $\rho$ and $\rho^2$, then $\rho=\rho^2$. Hence $\rho=0$ or $\rho=1$. But $\rho=\mathrm{inf}\{r^n\mid n\in\mathbb{N}\} \leq r \lt 1$. Hence $\rho=0$. Thus, if $0\lt r\lt 1$, then $\lim\limits_{n\to\infty}r^n = 0$. Finally, if $-1\lt r\lt 0$, then $0\lt |r|\lt 1$. Let $\epsilon\gt 0$. Then there exists $N$ such that for all $n\geq N$ we have $|r^n| = ||r|^n|\lt\epsilon$, since $\lim\limits_{n\to\infty}|r|^n = 0$. Thus, for all $\epsilon\gt 0$ there exists $N$ such that for all $n\geq N$, $| r^n-0|\lt\epsilon$. This proves that $\lim\limits_{n\to\infty}r^n = 0$, as desired. In summary,$$\lim_{N\to\infty}r^{N+1} = \left\{\begin{array}{ll}0 &\mbox{if $|r|\lt 1$;}\\1 & \mbox{if $r=1$;}\\\text{does not exist} &\mbox{if $r=-1$ or $|r|\gt 1$}\end{array}\right.$$ The argument suggested by Srivatsan Narayanan in the comments to deal with the case $0\lt|r|\lt 1$ is less clumsy than mine above: there exists $a\gt 0$ such that $|r|=\frac{1}{1+a}$. Then we can use the binomial theorem as above to get that$$|r^n| = |r|^n = \frac{1}{(1+a)^n} \leq \frac{1}{1+na} \lt \frac{1}{na}.$$By the Archimedean Property, for every $\epsilon\gt 0$ there exists $N\in\mathbb{N}$ such that $Na\gt \frac{1}{\epsilon}$, and hence for all $n\geq N$, $\frac{1}{na}\leq \frac{1}{Na} \lt\epsilon$. This proves that $\lim\limits_{n\to\infty}|r|^n = 0$ when $0\lt|r|\lt 1$, without having to invoke the infimum property explicitly.
{}
{'log_upvote_score': 9, 'links': ['https://math.stackexchange.com/questions/29023', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8706/']}
jdg_83195
stackexchange
llm_judgeable_groundtruth_similarity
2892591
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the following problem: Prove that if $k > 1$ , then $$\dfrac{1}{(k - 1)^2} - \dfrac{1}{(k + 1)^2} = \dfrac{4k}{(k^2 - 1)^2}$$ Hence simplify $$\sum\limits_{k = 2}^n \dfrac{k}{(k^2 - 1)^2}$$ My Work: $$\dfrac{1}{(k - 1)^2} - \dfrac{1}{(k + 1)^2} = \dfrac{(k + 1)^2 - (k - 1)^2}{(k - 1)^2 (k + 1)^2} = \dfrac{(k + 1)^2 - (k - 1)^2}{(k^2 - 2k + 1)(k^2 + 2k + 1)}$$ I could have continued and multiplied the denominator out, but, at this point, I'm thinking that I'm actually expected to have proceeded differently, rather than multiplying the denominator out fully? And I don't have solutions for this problem, so I can't check anything. I would appreciate it if someone could please take the time to explain this problem. Now provide the response and nothing else.
So the question is about the shape of $\{ (x,y)\in[0,2\pi]^2 : \cos(x+y)=\cos(x)+\cos(y) \}$. The equation can be written as $2\cos^2\left(\frac{x+y}{2}\right)-1 = 2\cos\left(\frac{x+y}{2}\right)\cos\left(\frac{x-y}{2}\right) $, so the problem boils down to studying the shape of $\cos^2(v)+\cos(u)\cos(v)=\frac{1}{2}$ for $u\in[0,\pi]$ and $v\in\left[0,\frac{\pi}{2}\right]$. By Maclaurin series and interpolation, such locus is extremely close to the ellipse having equation$$ 3v^2+u^2=\frac{7\pi^2}{16} $$which is represented by the purple curve below: where the external blue curve is part of the locus $\cos^2(v)+\cos(u)\cos(v)=\frac{1}{2}$. On its turn the blue curve is tangent to the ellipse $3v^2+u^2=\frac{4\pi^2}{9}$. By comparing the enclosed areas we should get a non-trivial inequality for a complete elliptic integral.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2892591', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/356308/']}
jdg_83196
stackexchange
llm_judgeable_groundtruth_similarity
15739029
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using a simple python script to get reservation results for my CID : simple.py : data = {"minorRev":"current minorRev #","cid":"xxx","apiKey":"xxx","customerIpAddress":" ","creationDateStart":"03/31/2013","}url = 'http://someservice/services/rs/' req = requests.get(url,params=data) print req print req.text print req.status_code Now on the command prompt if I do python simple.py it runs perfectly and prints the req.text variable However when I try to do python simple.py | grep pattern I get UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 1314: ordinal not in range(128) Now provide the response and nothing else.
print needs to encode the string before sending it to stdout, but when the process is in a pipe, the value of sys.stdout.encoding is None, so print receives a unicode object and then tries to encode it using the ascii codec -- if there are non-ASCII characters in that unicode object, an exception will be raised. You can solve this problem by encoding all unicode objects before sending them to the standard output (but you'll need to guess which codec to use). See these examples:

File wrong.py:

# coding: utf-8
print u'Álvaro'

Result:

alvaro@ideas:/tmp$ python wrong.py
Álvaro
alvaro@ideas:/tmp$ python wrong.py | grep a
Traceback (most recent call last):
  File "wrong.py", line 3, in <module>
    print u'Álvaro'
UnicodeEncodeError: 'ascii' codec can't encode character u'\xc1' in position 0: ordinal not in range(128)

File right.py:

# coding: utf-8
print u'Álvaro'.encode('utf-8')  # unicode object encoded == `str` in Python 2

Result:

alvaro@ideas:/tmp$ python right.py
Álvaro
alvaro@ideas:/tmp$ python right.py | grep a
Álvaro
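In the same Python 2 setting, a hedged sketch of how the script from the question could encode its output once near the top instead of on every print; this is my own illustration and assumes UTF-8 is the right codec for the terminal or the downstream tool:

# coding: utf-8
import codecs
import sys

# If stdout is a pipe, sys.stdout.encoding is None; wrap it so every
# unicode object printed afterwards is encoded as UTF-8 automatically.
if sys.stdout.encoding is None:
    sys.stdout = codecs.getwriter('utf-8')(sys.stdout)

print u'Álvaro'   # now safe even when piped through grep

Alternatively, setting the PYTHONIOENCODING environment variable (e.g. running PYTHONIOENCODING=utf-8 python simple.py | grep pattern) achieves a similar effect without code changes.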
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15739029', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1222542/']}
jdg_83197
stackexchange
llm_judgeable_groundtruth_similarity
9229213
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can I convert iso date to milliseconds?for example I want to convert this iso 2012-02-10T13:19:11+0000 to milliseconds. Because I want to compare current date from the created date. And created date is an iso date. Now provide the response and nothing else.
Try this:

var date = new Date("11/21/1987 16:00:00"); // some mock date
var milliseconds = date.getTime();
// This will return the number of milliseconds elapsed since January 1, 1970.
// If your date is before that date, the value will be negative.
console.log(milliseconds);

EDIT: You've provided an ISO date. It is also accepted by the constructor of the Date object:

var myDate = new Date("2012-02-10T13:19:11+0000");
var result = myDate.getTime();
console.log(result);

EDIT: The best I've found is to get rid of the offset manually:

var myDate = new Date("2012-02-10T13:19:11+0000");
var offset = myDate.getTimezoneOffset() * 60 * 1000;
var withOffset = myDate.getTime();
var withoutOffset = withOffset - offset;
console.log(withOffset);
console.log(withoutOffset);

Seems to work. As far as problems with converting an ISO string into a Date object go, you may refer to the links provided.

EDIT: Fixed the bug with the incorrect conversion to milliseconds, according to Prasad19sara's comment.
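Since the original goal was to compare the current date with the created date, a short sketch of that comparison in plain JavaScript; the variable names are my own. Note that both getTime() and Date.now() return UTC-based milliseconds, so a plain difference needs no manual offset handling:

var createdAt = new Date("2012-02-10T13:19:11+0000").getTime(); // ms since epoch, UTC
var now = Date.now();                                           // also ms since epoch

var ageInMs = now - createdAt;
var ageInDays = ageInMs / (1000 * 60 * 60 * 24);

console.log("created " + ageInDays.toFixed(1) + " days ago");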
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/9229213', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/987447/']}
jdg_83198
stackexchange
llm_judgeable_groundtruth_similarity
21907845
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I saw some examples like in the base class function is declared as pure function (as virtual) and in the derived class it is declared (as virtual) and implemented. In the second case (which I am currently doing), base class does not declare the function and only derived class declare (not virtual) and implement. What is the difference between these two options? Now provide the response and nothing else.
Building on what rbanning said, instead of adding the event handler you can just, after replacing the anchor tags with span tags, replace the href attribute with a "data-target" attribute and bootstrap takes care of the rest. Like so: <div id="Carousel" class="carousel slide"><ol class="carousel-indicators"> <li data-target="Carousel" data-slide-to="0" class="active"></li> <li data-target="Carousel" data-slide-to="1"></li> <li data-target="Carousel" data-slide-to="2"></li></ol><div class="carousel-inner"> <div class="item active"> <img src="images/c.png" class="img-responsive"> </div> <div class="item"> <img src="images/b.png" class="img-responsive"> </div> <div class="item"> <img src="images/a.png" class="img-responsive"> </div></div><span class="left carousel-control" data-target="#Carousel" data-slide="prev"> <span class="glyphicon glyphicon-chevron-left"></span></span><span class="right carousel-control" data-target="#Carousel" data-slide="next"> <span class="glyphicon glyphicon-chevron-right"></span></span></div>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21907845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2560304/']}
jdg_83199
stackexchange
llm_judgeable_groundtruth_similarity
53908025
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I'm starting to learn Flutter and would like to use a material design drag and drop list just like the one seen on the material guidelines website. https://storage.googleapis.com/spec-host-backup/mio-design%2Fassets%2F1dtprsH4jZ2nOnjBCJeJXd7n4U-jmWyas%2F03-list-reorder.mp4 All of the libraries I have tried out so far look like garbage compared to that. Is there a good library for this that I am missing or a native Flutter widget? Now provide the response and nothing else.
Check knopp/flutter_reorderable_list. It accomplishes just that. It's really smooth and has no performance issues, being able to handle thousands of items. However, its implementation is not as easy as with the usual Flutter widgets. If you struggle with that, I'd recommend using a widget I created to port flutter/ReorderableListView to knopp/ReorderableList. This widget makes it really easy to use; however, it doesn't provide the same flexibility, and as it works with a children List, it's not as scalable as the original. Here's the code for ReorderableListSimple and this is the demo.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53908025', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6122491/']}
jdg_83200
stackexchange
llm_judgeable_groundtruth_similarity
283716
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: This question is inspired by some others on MathOverflow. Hecke operators are standardly defined by double cosets acting on automorphic forms, in an explicit way. However, what bother me is that Hecke operators are also mentioned in the automorphic representations language. Those remain very mysterious to me: what are they exactly? Is there a way to define them more explicitly than using the coefficients appearing in the Dirichlet series of the L-function associated to the representation? (What is, even if it appears explicit, unpractical). My main aim is to grasp the Hecke eigenvalues of a representation, for instance using trace formulas. In the "function" language, since Hecke operators are defined as the action of double classes, we can take the Hecke operators themselves as test-functions and obtain some "Eichler-Selberg"-type trace formulas. But on the "representation" language, is there something doing the similar work? More precisely my question could be: is there a function $f_p$ in the Hecke algebra of a local/global group $G(F)$ such that $\mathrm{tr} \ \pi(f_p) = \lambda_\pi(p)$ where $\pi$ is an admissible/automorphic representation of $G(F)$ and $\lambda_p(\pi)$ the Hecke eigenvalue of $\pi$? Any clue or reference is welcome! Now provide the response and nothing else.
If $K$ is a local field and $\mathcal O$ is its ring of integers, we say an irreducible representation of $G(K)$ is unramified if it contains a vector invariant under $G(\mathcal O)$. It is known that such a representation naturally has a unique vector invariant under $G(\mathcal O)$. The Hecke eigenvalue of a double coset in $G(\mathcal O)\backslash G(K)/G(\mathcal O)$ on this representation is simply the eigenvalue of the operator defined by averaging over that double coset on this vector. (Because the vector is the only $G(\mathcal O)$-invariant vector in the representation, it is necessarily sent to a multiple of itself by this operator). In fact, this operator vanishes on the orthogonal complement of this $G(\mathcal O)$-invariant vector, so its trace is equal to this eigenvalue. On any ramified representation, the trace of this operator is zero. I think this answers your last question. Of course when we say "the" Hecke operator we mean a particular double coset. I think this is only ambiguous in $GL_n$, where we take the coset generated by a diagonal matrix where one entry is a uniformizer and the rest are units - as I'm guessing you know, based on your familiarity with the double coset theory in the functions world.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/283716', 'https://mathoverflow.net', 'https://mathoverflow.net/users/116092/']}
jdg_83201
stackexchange
llm_judgeable_groundtruth_similarity
8560865
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to find the best way to uncouple messageboxes from my logic so I can properly unittest it. Now I was wondering if it would be enough if I just made a seperate helper class (C#) which I can stub later for my messagebox. For instance: static class messageBoxHelper{ public static void msgBoxAlg(string message, string title, MessageBoxButtons buttons, MessageBoxIcon icons, bool show) { if (show) { MessageBox.Show(message, title, buttons, icons); } } Then everytime I'd need to use a messagebox i'd just use messageboxHelper/msgBoxAlg(...) instead of messagebox.show(...). Using the bool show I could enable or disable it during testing. I'm just wondering if this is the "right way". By which I mean, is there an easier or better way to do this properly? I can't just ditch the messageboxes, they relay "vital" info to the user ("Do you want to close this windows?" YES/NO etc.). It could also just be I'm not using proper software engineering, and I should decouple my messageboxes from my bussinesslogic more? Now provide the response and nothing else.
Yes, it is the right way. But instead of a static class, you should implement an IDialogService and inject it into the classes that should display dialogs:

public interface IDialogService
{
    void ShowMessageBox(...);
    ...
}

public class SomeClass
{
    private IDialogService dialogService;

    public SomeClass(IDialogService dialogService)
    {
        this.dialogService = dialogService;
    }

    public void SomeLogic()
    {
        ...
        if (ok)
        {
            this.dialogService.ShowMessageBox("SUCCESS", ...);
        }
        else
        {
            this.dialogService.ShowMessageBox("SHIT HAPPENS...", ...);
        }
    }
}

During testing of SomeClass you should inject a mock object of IDialogService instead of the real one. If you need to test more UI logic, consider using the MVVM pattern.
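To make the testing side concrete, a minimal hand-rolled fake of the interface and a test skeleton. The concrete method signature (message plus title, returning nothing) and the test-framework assertion shown in the comment are assumptions for illustration, not part of the original answer:

using System.Collections.Generic;

public interface IDialogService
{
    void ShowMessageBox(string message, string title);
}

// Test double that records every call instead of showing a real MessageBox.
public class FakeDialogService : IDialogService
{
    public List<string> ShownMessages { get; } = new List<string>();

    public void ShowMessageBox(string message, string title)
    {
        ShownMessages.Add(title + ": " + message);
    }
}

// Example usage inside a unit test (framework-dependent, e.g. NUnit):
// var dialogs = new FakeDialogService();
// var sut = new SomeClass(dialogs);
// sut.SomeLogic();
// Assert.That(dialogs.ShownMessages, Has.Count.EqualTo(1));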
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8560865', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1083224/']}
jdg_83202
stackexchange
llm_judgeable_groundtruth_similarity
12852
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is a quote from Dirac's Principles of Quantum Mechanics: "(...) if an atomic system has its equilibrium disturbed in any way and is then left alone, it will be set in oscillation and the oscillations will get impressed on the surrounding electromagnetic field, so that their frequencies may be observed with a spectroscope. Now whatever the laws of force governing the equilibrium, one would expect to be able to include the various frequencies in a scheme comprising certain fundamental frequencies and their harmonics. This is not observed to be the case. Instead, there is observed a new and unexpected connexion between the frequencies, called Ritz's Combination Law of Spectroscopy, according to which all the frequencies can be expressed as differences between certain terms, the number of terms being much less than the number of frequencies. This law is quite unitelligible from the classical standpoint." I'm having trouble understanding this paragraph. Assuming that the atom is a system in equilibrium that emits e-m waves when perturbed and these e-m waves are product of the oscillations of the atom about its equilibrium configuration that result from the perturbation, does it follow that the Ritz's law is in contradiction with classical mechanics? Why? Thanks. Now provide the response and nothing else.
In classical mechanics, you can make up a complicated system with many different natural frequencies. In general, these frequencies are completely independent of each other. Due to non-linearities in the coupling forces, it may happen that when two modes are vibrating simultaneously, you get a new frequency appearing in the spectrum as the sum or difference or the two primary modes. But in quantum mechanics, you never see the primary modes at all...you only see the sum or difference frequencies. Furthermore, if you try to explain them by non-linear forces, you should also expect to see multiples of the fundamental frequencies. These are absent in, for example, the spectra of atoms. It's hard to explain by a classical model involving things like masses and springs. It manifests itself in QM, of course, because the "fundamental" frequencies, the natural modes, evolve in time without any oscillating charges associated with them. The oscillating charges only appear when you have the superposition of two fundamental modes. This is how quantum mechanics is very different from classical.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/12852', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/3064/']}
jdg_83203
stackexchange
llm_judgeable_groundtruth_similarity
1614769
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to I find the filename of a library via the library name? In otherwords, when I use "-lc", I know it is /lib/libc.so.6 (or something similar.) I want to be able to type some command where "-lc" is the input and "/lib/libc.so.6" is the output. To extend this idea futher, I wanted to specify my own search path so I can use this library resolver for different toolchains... Any help would be awesome, ThanksChenz Now provide the response and nothing else.
If you want to find out where a given GCC will find libc.a or libc.so, do this:

gcc --print-file-name=libc.a
gcc --print-file-name=libc.so

The reason -lc translates into libc.so.6 is somewhat complicated: for glibc, libc.so is a linker script, which usually contains:

/* GNU ld script
   Use the shared library, but some functions are only in
   the static library, so try that secondarily.  */
OUTPUT_FORMAT(elf32-i386)
GROUP ( /lib/libc.so.6 /usr/lib/libc_nonshared.a  AS_NEEDED ( /lib/ld-linux.so.2 ) )

or something similar.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1614769', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/71865/']}
jdg_83204
stackexchange
llm_judgeable_groundtruth_similarity
410642
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $\mathcal L_Q$ denote the logic obtained from adding the quantifier $\newcommand{\almost}{\forall^\infty}\almost$ to the usual first-order logic, where the semantic interpretation of $\almost x\varphi$ is "All but finitely many $x$ satisfy $\varphi$", or formally: $$M\models\almost x\varphi(x)\iff\Big|\{m\in M\mid M\not\models\varphi[m]\}\Big|\text{ is finite}.$$ It's not very hard to show that this logic is not compact$^*$, and does not satisfy the upward Skolem-Löwenheim theorem (e.g. the order $(\Bbb N,\leq)$ has a categorical axiomatization). But what about its downward counterpart? According to Lindström theorem either compactness fails, or the downward Skolem-Löwenheim theorem should fail. One fails, what about the other? (*) Please don't discuss the failure of the compactness theorem for $\mathcal L_Q$ here before June 17th, 2013. I gave that part as a homework assignment to my students - some of whom are reading this site. Now provide the response and nothing else.
I think the following idea should give a proof of the downward Löwenheim-Skolem theorem for this logic, but I haven't checked it carefully, so I apologize if it contains a stupid mistake. Suppose I have an uncountable structure $\mathfrak A$ for a countable language and I want a countable substructure that is elementary with respect to your logic $\mathcal L_Q$. For each $\mathcal L_Q$-formula $\phi(\vec x)$ where $\vec x$ represents a sequence of free variables, create a new predicate symbol $P_\phi$ with arity equal to the length of $\vec x$, and let $\mathfrak A^+$ be the expansion of $\mathfrak A$ to the enlarged language, obtained by interpreting $P_\phi$ as synonymous with $\phi$. Note that the enlarged language is still countable, so $\mathfrak A^+$ has a countable elementary substructure $\mathfrak B^+$, where "elementary" means in the usual sense of just first-order logic. Now it seems to me that the reduct of $\mathfrak B^+$ to the original language serves as an $\mathcal L_Q$-elementary substructure of the original $\mathfrak A$. The point is that any specific use of $\forall^\infty$ amounts to first-order information. More precisely, suppose $\phi(x)$ is $(\forall^\infty y)\,\psi(x,y)$. Then whenever $\phi$ holds in $\mathfrak A$ of a particular element $a$, the number of values for $y$ that don't satisfy $\psi(a,y)$ is a specific finite number, and it is expressible in first-order logic in $\mathfrak A^+$ that the number of values of $y$ violating $P_\psi(a,y)$ is this specific number. So that information remains true in $\mathfrak B^+$. Similarly, if $\phi(a)$ fails in $\mathfrak A$, then for every natural number $n$ it is true in $\mathfrak A^+$ that there are more than $n$ values of $y$ violating $P_\psi(a,y)$; this is, for each $n$, first-order information and therefore still true in $\mathfrak B^+$. These observations should yield an inductive proof that the interpretations in $\mathfrak B^+$ of all the new $P_\phi$ predicates agree with the corresponding $\mathcal L_Q$-formulas $\phi$, just as in $\mathfrak A^+$. And that should imply that $\mathfrak B$ is an $\mathcal L_Q$-elementary submodel of $\mathfrak A$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/410642', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/622/']}
jdg_83205
stackexchange
llm_judgeable_groundtruth_similarity
14708189
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: <head> <script type="text/javascript"> function include(filename, status){ if(status == 'on'){ var head = document.getElementsByTagName('head')[0]; script = document.createElement('script'); script.src = filename; script.type = "text/javascript"; head.appendChild(script); } else { // The code that wipes the script tag above } } </script></head><body> <input type="button" value="OPEN" onclick="include('script.js', 'on')"> <input type="button" value="CLOSE" onclick="include('', 'off')"></body> I want to remove the specific tag in tag by onclick event.What code should be written in the ELSE area, When I click the "CLOSE" botton? Now provide the response and nothing else.
The easiest way would be to somehow maintain a link to the created element. For example, you could put the include function into a closure and have a private variable to hold the reference: var include = (function(){ // the reference to the script var theScript; return function (filename, status){ if(status == 'on'){ // adding a script tag var head = document.getElementsByTagName('head')[0]; theScript = document.createElement('script'); theScript.src = filename; theScript.type = "text/javascript"; head.appendChild( theScript ) }else{ // removing it again theScript.parentNode.removeChild( theScript ); } }})(); One important note: by removing the <script> tag, you do not remove any of its objects, functions etc. from the DOM. So any action started within that <script> tag will prevail, even if you delete the element that started it in the first place!
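As a variation (my own sketch, not part of the answer above): instead of keeping a variable around, you can mark the injected tag and look it up again when removing it. A TypeScript version with the same include signature as the question, where the data-dynamic-include marker is an invented name:

    function include(filename: string, status: 'on' | 'off'): void {
      if (status === 'on') {
        const script = document.createElement('script');
        script.src = filename;
        script.type = 'text/javascript';
        script.dataset.dynamicInclude = 'true'; // marker used later for removal
        document.getElementsByTagName('head')[0].appendChild(script);
      } else {
        // Find the previously injected tag by its marker and remove it.
        const old = document.querySelector('script[data-dynamic-include="true"]');
        old?.remove();
      }
    }

The same caveat applies: removing the tag does not undo anything the loaded script already did.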
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14708189', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1725198/']}
jdg_83206
stackexchange
llm_judgeable_groundtruth_similarity
2952377
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How would I estimate the sum of a series of numbers like this: $$1^n+2^n+\cdots+n^n$$ Now provide the response and nothing else.
This statement is false in general. Consider the following polynomials in $\mathbb{C}[X,Y]$: the polynomials $XY$ and $X+Y$ both have infinitely many zeros in $\mathbb{C}^{2}$. The zero-set of $XY$ is the union of $\{0\}\times\mathbb{C}$ and $\mathbb{C}\times\{0\}$, while the zero-set of $X+Y$ is $\{(a,-a) \mid a\in \mathbb{C}\}$. Both are uncountable sets!
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2952377', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/603363/']}
jdg_83207
stackexchange
llm_judgeable_groundtruth_similarity
62885266
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Was updating a bit of code today that returns optional props to a React component. I discovered that even though the function sometimes returns null , it wouldn't error when the return value was immediately unpacked. Pedantic summary of the code: const returnPropsOrDont = (condition) => { if (condition) return null; return { optionalProp: 'foo' };}...render() { const { condition } = props; return <SomeComponent staticProp="staticProp" {...returnPropsOrDont(condition)} />} Upon realizing this was cool, I ran to the console and tried it out on Objects and Arrays. Alas - > {...null} // {} > {...undefined} // {}, which is interesting because null is an object but undefined is not > [...null] // Uncaught TypeError: object null is not iterable I did some light Googling and found one article which suggests that TypeScript considers it a feature to make sure optionally defined values don't haunt an unsuspecting developer. Fine, but a) I'm not using TypeScript, and b) I have no idea why JS wouldn't guard Arrays in the same way. Since this does seem like a guard against optionally defined values, why is {...null} fine and [...null] not? Now provide the response and nothing else.
Normally, uses of ...x require x to be iterable, because the point of ... is normally to flatten an iterable into its components. An array is a prime example of an iterable. null is not iterable. It has no components, so it doesn't make sense to iterate over null. for (const e of null) will similarly fail, since of also requires an iterable. However, {...x} requires x to be enumerable, because it needs not just values but keys along with them. An object is a prime example of an enumerable. null is enumerable. null is sometimes treated as an object, and this is one of those cases. Objects are enumerable because they can have properties. for (const p in null) will similarly succeed, since in requires an enumerable.
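A small TypeScript sketch of the same contrast (illustrative only; it mirrors the returnPropsOrDont pattern from the question, and returnItemsOrDont is an invented counterpart for the array case):

    // Object spread swallows a null return: null contributes no own enumerable properties.
    const returnPropsOrDont = (condition: boolean) =>
      condition ? null : { optionalProp: 'foo' };

    const merged = { staticProp: 'staticProp', ...returnPropsOrDont(true) }; // { staticProp: 'staticProp' }

    // Array spread needs an iterable, so the null case needs an explicit guard.
    const returnItemsOrDont = (condition: boolean): number[] | null =>
      condition ? null : [1, 2, 3];

    const flattened = [...(returnItemsOrDont(true) ?? [])]; // [] — without the ?? guard this throws

    console.log(merged, flattened);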
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/62885266', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10429990/']}
jdg_83208
stackexchange
llm_judgeable_groundtruth_similarity
24307440
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a class (let's say Vehicle) with a rather large number of subclasses (Car, Bike, ..). Every subclass stores specific information on the type of the subclass (# of tires, ..). I want to ensure that all this information is already enforced (at compile-time). So I don't want to specify this data in the constructor of the subclass (because I might forget to set some of them). I could put this info in the constructor of Vehicle, but this would clutter code quite a lot, since I have many of those parameters. public class Vehicle { int numberOfTires; public Vehicle(int numberOfTires, ...) { ... }}public class Bike { public Bike() { super(2,...); ... }} I end up with completely unreadable constructors. It would also store this info per instance, even though it is specific to the subclass. An alternative way is to introduce abstract static getters/setters and store the info in the subclasses. public class Vehicle { ... abstract public int getNumberOfTires();}public class Bike { static int numberofTires = 2; ... public int getNumberOfTires() { return numberOfTires; }} This seems way cleaner and also stores the info per subclass and not per instance but there will be a lot of code duplication in the subclasses. Right now, all the subclasses contain ~20 setters/getters but virtually no real functionality. Is there a clean way of avoiding this? Perhaps using the Factory method or alike? Now provide the response and nothing else.
The short answer is that history.pushState (not History.pushState, which would throw an exception; the window. prefix is optional) will never do what you suggest. If pages are refreshing, then it is caused by other things that you are doing (for example, you might have code running that navigates to a new location whenever the address bar changes). history.pushState({urlPath:'/page2.php'},"",'/page2.php') works exactly like it is supposed to in the latest versions of Chrome, IE and Firefox for me and my colleagues. In fact you can put whatever you like into the function: history.pushState({}, '', 'So long and thanks for all the fish.not a real file'). If you post some more code (with special attention to code near the history.pushState call and anywhere document.location is used), then we'll be more than happy to help you figure out where exactly this issue is coming from. If you post more code, I'll update this answer (I have your question favourited) :).
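For illustration only (none of this is from the answer above): a minimal sketch of the usual pattern. pushState changes the URL and the history stack; any content change — and therefore any accidental reload — comes from your own code, typically a render step plus a popstate listener. The render function here is a hypothetical placeholder.

    // Hypothetical helper: swap the visible content for a path without any page load.
    function render(path: string): void {
      document.body.dataset.route = path; // placeholder for real view logic
    }

    function goTo(path: string): void {
      history.pushState({ urlPath: path }, '', path); // updates the address bar and history only
      render(path);                                   // no request, no reload
    }

    window.addEventListener('popstate', (event) => {
      // Fired on back/forward; still no reload unless your handler triggers one.
      const state = event.state as { urlPath?: string } | null;
      render(state?.urlPath ?? location.pathname);
    });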
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24307440', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3756466/']}
jdg_83209
stackexchange
llm_judgeable_groundtruth_similarity
33410724
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I use bootstrap, it removes the background color from everthing when I try to print my page.Almost everything on my website is using bootstrap classes so I want to avoid a lot of manual CSS outside bootstrap.I've found out that bootstrap uses @media print to remove the background color. I'm using a bootstrap theme as well (theme united) which is removing the background color as well. theme-united.css @media print*, *:before, *:after { background: rgba(0, 0, 0, 0) !important; color: rgb(0, 0, 0) !important; -webkit-box-shadow: none !important; box-shadow: none !important; text-shadow: none !important; bootstrap.min.css @media print*, :after, :before { color: rgb(0, 0, 0)!important; text-shadow: none!important; background: 0 0!important; -webkit-box-shadow: none!important; box-shadow: none!important; Is there a way to make sure that the background color is not removed when printing without editing these 2 CSS files? For example:When I use .alert-danger, I want that alert danger printed as it is displayed on screen, so would be printed as a red box. See JSFiddle: http://jsfiddle.net/7mtk7wrh/ Now provide the response and nothing else.
Unfortunately there is not a good answer to your question - but maybe if you understand the why's then you can choose a way forward. Why? It's true that Bootstrap uses @media print { * { color: #000; background: transparent; }} -- but there is a very solid reason. This bit of code is actually derived from the normalize.css project (by a then colleague of @mdo's, @necolas) - its intent is to make all browsers behave the same. These guys chose to "normalise" the css for a very good reason: with most browsers one can choose to include or exclude background color, so the behaviour is not standard across even the same browser. Imagine for a sec a website with a very dark background and white text - when printing with backgrounds off, it will look like you're printing nothing - when actually you're printing white text on no (white) background. There was no way to account for all the different uses of color, so they chose to go black (font) and white (background, actually 'transparent'). Even the choice of black was well thought out -- it's a better print solution, as most color printers have more black "ink/toner" (more economical) and they don't need to mix color to make black (so faster). Remember that Bootstrap is also a "framework" - so a starting point if you will - and kudos to @mdo and @necolas for having the foresight to think of this in terms of establishing a predictable baseline. (No, I don't know them.) Nope... So the thinking here is: "what if we could 'go back' and unset this?" Unfortunately CSS does not work like that - yes, browsers load the CSS declarations in order and the last declaration wins, but I'm not aware of a way to remove declarations from that stack. So CSS developers just add more to the end... So one would assume that we can go back that way --- add a * { background-color: inherit }. The problem is that inherit reverts to the parent property, but * is the root, so it has nothing to revert to. Same goes for initial! Maybe! So we're left with 4 options, none of them is what you were hoping for, but it is what it is. In order of difficulty: Download the BS (less or sass) source, edit the offending code, and then compile it. (You need to use a local copy; CDNs will not work.) Download the CSS variant of your choice, search and delete the offending code. (No CDNs again.) Use getbootstrap.com/customize to create a new variant - exclude "Print media styles" under "Common CSS". (Again, no CDNs.) Override the specific items whose color you want to print: e.g. @media print { .alert-danger { color: yellow !important; background-color: red !important; } } CDN copies of BS will now work, but then you have the problem of the user possibly not printing backgrounds and having the output white (yellow in the e.g.) on white! Finally Well, I hope learning the why's was at the very least a way for you to think of a workaround. A general rule of thumb I follow is that when printing, the background is (should be) always white. When constrained that way you start thinking of novel ideas, like exclamation icons around the text that only "print" ( @media only screen { .hidden-screen { display: none; }} )
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/33410724', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_83210
stackexchange
llm_judgeable_groundtruth_similarity
6787899
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to combine the contents of 2 separate canvas elements into a single canvas element? Something like an equivalent of 'Flattening' two or more layers in Photoshop...? I can think of a round about way, but am not so sure about it. I export the contents of both the canvi (lol) in the form of .png's, and then have a third canvas element draw both images with some sort of blending algorithm (xor, blend, negative, etc.). Now provide the response and nothing else.
Of course you can, and you don't need any funny libraries or anything, just call drawImage with a canvas as the image. Here is an example where I combine two canvas elements onto a third: var can = document.getElementById('canvas1');var ctx = can.getContext('2d');ctx.fillStyle = 'rgba(255,0,0,.4)';ctx.fillRect(20, 20, 20, 80);ctx.fillStyle = 'rgba(205,255,23,.4)';ctx.fillRect(30, 30, 40, 50);ctx.fillStyle = 'rgba(5,255,0,.4)';ctx.fillRect(40, 50, 80, 20);var can2 = document.getElementById('canvas2');var ctx2 = can2.getContext('2d');ctx2.beginPath();ctx2.fillStyle = "pink";ctx2.arc(50, 50, 50, 0, Math.PI * 2, 1);ctx2.fill();ctx2.beginPath();ctx2.clearRect(20, 40, 60, 20);var can3 = document.getElementById('canvas3');var ctx3 = can3.getContext('2d');ctx3.drawImage(can, 0, 0);ctx3.drawImage(can2, 0, 0); <canvas id="canvas1" width="200" height="200" style="border: 1px solid black"></canvas><canvas id="canvas2" width="200" height="200" style="border: 1px solid black"></canvas><canvas id="canvas3" width="200" height="200" style="border: 1px solid black"></canvas> http://jsfiddle.net/bnwpS/878/ Of course you can do it with just two (one onto the other), but three makes for a better example. You can always change the globalCompositeOperation if you want an XOR effect or something.
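A small follow-up sketch (my own addition, picking up the last remark): set globalCompositeOperation on the destination context before the drawImage calls to get xor- or blend-style effects. The element ids match the answer's example; the 'xor' choice is just one of the standard compositing modes.

    const target = document.getElementById('canvas3') as HTMLCanvasElement;
    const targetCtx = target.getContext('2d');
    if (targetCtx) {
      targetCtx.globalCompositeOperation = 'xor'; // or 'multiply', 'screen', 'lighter', ...
      targetCtx.drawImage(document.getElementById('canvas1') as HTMLCanvasElement, 0, 0);
      targetCtx.drawImage(document.getElementById('canvas2') as HTMLCanvasElement, 0, 0);
    }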
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6787899', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/662108/']}
jdg_83211
stackexchange
llm_judgeable_groundtruth_similarity
1359326
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Prove that $\mathbb Z^n$ is not isomorphic to $\mathbb Z^m$ for $m\neq n$ . My try : Let $\mathbb Z^n\cong \mathbb Z^m $ . To show that $m=n$ . Case 1: Let $m>n$ . Now that $\mathbb Z^m$ has $m$ generators whereas $\mathbb Z^n$ has $n$ generators and an isomorphism takes a generator to generator ; that is the contradiction. Please correct me if I am wrong. The case $m<n$ also follows similarly. Now provide the response and nothing else.
$\mathbb{Z}^n \cong \mathbb{Z}^m$ implies $(\mathbb{Z}/2)^n \cong \mathbb{Z}^n / 2 \mathbb{Z}^n \cong \mathbb{Z}^m / 2 \mathbb{Z}^m \cong (\mathbb{Z}/2)^m$. By comparing the number of elements, we get $2^n=2^m$, i.e. $n=m$. (No linear algebra is necessary here!)
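A small addition for completeness: the same argument works with any integer $k\ge 2$ in place of $2$, since $$(\mathbb{Z}/k)^n \cong \mathbb{Z}^n/k\mathbb{Z}^n \cong \mathbb{Z}^m/k\mathbb{Z}^m \cong (\mathbb{Z}/k)^m \implies k^n = k^m \implies n = m.$$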
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1359326', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/294365/']}
jdg_83212
stackexchange
llm_judgeable_groundtruth_similarity
850748
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the function $$f(x)=\ln\sqrt{8+\cos^2x}$$ so $$1.f(x)=\ln(8+\cos^2x)^\frac{1}{2}$$ so $$2.f(x)=\frac{1}{2}\ln(8+\cos^2x)$$ so $$3.f'(x)=\frac{1}{2}\left[\frac{-2 \cos x^{\sin x}}{8+\cos ^2x}\right]$$ so $$4.f'(x)=\frac{1}{2}\left[\frac{-\cos x^{\sin x}}{8+\cos ^2x}\right]$$ I'm having problems with everything after step 2, a step by step explanation would be greatly appreciated. Thanks.Here is the image of the website and how it goes about doing the steps, I thought it's be better for me to try and type it out unless anybody is curious. ! Now provide the response and nothing else.
The concept of asymptotes is quite common for curved graphs, although somehow the terminology is not much used outside of the context of lines. The way in which the concept is used is that if one is given a function $f(x)$, it is interesting to study other functions $g(x)$ that are "asymptotic to $f(x)$" in various ways. One meaning of this phrase would be that $$(1) \quad \lim_{x \to +\infty} |f(x)-g(x)|=0$$which is exactly what "asymptotic" means in the ordinary sense when the graph of $f(x)$ is a line. Another somewhat different notion is that $$(2) \quad \lim_{x \to +\infty} \frac{f(x)}{g(x)} = 1$$which only really makes sense when $f(x)$ and $g(x)$ are nonzero near $+\infty$.There are many other variations on this concept. This discussion falls under the name of "growth types of functions", which are important in computer science and other places; these notes look like a good basic discussion, for example. And regarding your question of whether $g(x) = x^2 + \sin(x)$ is asymptotic to $y=x^2$, it is asymptotic in sense (2) but not in sense (1).
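As a worked check of that last claim (added for illustration): with $f(x)=x^2+\sin x$ and $g(x)=x^2$, $$\lim_{x \to +\infty}\frac{f(x)}{g(x)} = \lim_{x \to +\infty}\Big(1+\frac{\sin x}{x^2}\Big) = 1,$$ so sense (2) holds, while $|f(x)-g(x)| = |\sin x|$ keeps returning to $1$ and has no limit at $+\infty$, so sense (1) fails.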
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/850748', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/159613/']}
jdg_83213
stackexchange
llm_judgeable_groundtruth_similarity
31405205
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: //viewcontrollerA.h @class viewA;@protocol viewADelegate<NSObject>- (void)emailidgetmethod:(NSString *)string;@end@interface Login : UIViewController{}@property (nonatomic,assign) id <viewADelegate> delegate;@end //viewcontrollerA.m #import "viewcontrollerA.h" @synthesize delegate; - (void)viewDidLoad { [self.delegate emailidgetmethod:@"myString"]; } //viewcontrollerB.h #import "viewcontrollerA.h" @interface viewcontrollerB : UIViewController<viewADelegate> @end //viewcontrollerB.m #import "viewcontrollerB.h" - (void)viewDidLoad { viewcontrollerA *viewA= [[viewcontrollerA alloc]init]; [viewA setDelegate:self]; } - (void)emailidgetmethod:(NSString *)string { NSLog(@"Delegatehomemethod %@",string); // This is not calling. } Now provide the response and nothing else.
First of all, I would probably not do this, unless I had very good reasons. In most cases, the difference between structs and reference types is not really that big - in my experience, it only matters when you have a very large array of them (then structs let you allocate one big memory chunk). That said, it looks like F# does not like the constructor code in your example. I'm really not sure why (it seems to be doing some check that does not quite work for overlapping structs), but the following does the trick: [<Struct; StructLayout(LayoutKind.Explicit)>]type MyStruct = [<DefaultValue; FieldOffset 0>] val mutable Val1 : float [<DefaultValue; FieldOffset 0>] val mutable Int1 : int static member Int(a:int) = MyStruct(Int1=a) static member Float(f:float) = MyStruct(Val1=f) If I actually wanted to use this, I would add another field Tag containing 1 or 0 depending on which case your struct represents. Then you could pattern match on it using an active pattern and get some of the safety of discriminated unions back: let (|Float|Int|) (s:MyStruct) = if s.Tag = 0 then Float(s.Val1) else Int(s.Int1)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31405205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5113539/']}
jdg_83214
stackexchange
llm_judgeable_groundtruth_similarity
198513
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let's assume the modeling of User model in a context of a social network. User concept is composed of two notions: Authentication elements like userName/Password/Email etc... Extra data information sometimes called "User profile" like firstName, birthday, pictures etc.. At first glance, this analysis involves separation of tasks/responsibilities if we want to keep SRP. However, typically in the case of a social network, userName may be seen as a pure information belonging to a user's profile rather than a pure element of authentication. Thus, there is three ways, according to me, to model the User concept. First, the whole in one class: User (userName, password, email, firstName, birthday, picture etc...) Second, a one-to-one relationship between User and UserProfile : User (userName, password, email) UserProfile (firstName, birthday, picture etc...) Third, a one-to-one but with a redundancy of the common fields (being as focused on authentication as a visible user information on the website): User (userName, password, email) UserProfile (userName, firstName, birthday, picture etc...) Why repetition here? Of course for consitency and at the same time to avoid joins in cases of relational database when one want to retrieve each Use's profile data. What is a good practice to model these both concepts? Where should I place userName field? Dilemma being: keeping KISS (Keep it simple stupid!) or SRP ;) Now provide the response and nothing else.
IMO this is a false dichotomy. If you follow SRP, you keep your system simple overall. Multiple small classes tend to be simpler (in many cases) than fewer large classes. Plus it seems like you are conflating two issues: how you design your classes vs. how you design your database. The two do not need to be related. In your given case, it sounds like you probably have 3 classes: User (username, password, email), UserProfile (name, birthday, etc.), and UserPictures (the pictures, because why would they be part of the "profile"?). Then you can create your User class to include (via composition) the other two. Of course, this is just a rough attempt at design without any broader overall picture of requirements. There are plenty of different valid, simple, and SRP ways to break down your classes based on the actual need. The main point is just that KISS and SRP aren't inherently at odds. Edited much later after learning more: Back when I wrote this answer, I was thinking about SRP in terms of "a class should only have one responsibility", which has a lot to do with cohesion and coupling. This is good, but I have since learned more about what SRP really is. The book Clean Architecture has a good description of SRP (and all of SOLID and more). Originally the definition was "a module should have one, and only one, reason to change." But Uncle Bob revised it to be "a module should be responsible to one, and only one, actor." So it's more about the use-cases of why a particular persona might want things to change. To tie this back to the OP, classes that control behaviors for things like passwords, birthdays, and pictures might all be responsible to different actors. Your security team cares about passwords and may cause you to change those behaviors separately from product managers who want to edit how birthdays are handled. So then, to follow SRP it would be wise to understand who those stakeholders are and create separations (e.g. separate classes) based on those stakeholders. Thus, you might have a UserCredential class for managing passwords instead of putting it in another class along with data that changes because of other stakeholders' reasons. Following SRP keeps things simple in many of the same ways I mentioned originally, but it also makes things simple when you don't mix changes made for different reasons in the same class. So even with this revision, SRP and KISS can still live happily together.
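To make the composition from the first paragraph concrete, here is a hedged sketch in TypeScript (class and field names are illustrative assumptions, not prescribed by the answer or the question):

    class UserCredentials {
      constructor(
        public userName: string,
        public email: string,
        private passwordHash: string, // the security team's concern lives here
      ) {}
    }

    class UserProfile {
      constructor(public firstName: string, public birthday: Date) {}
    }

    class UserPictures {
      constructor(public pictureUrls: string[] = []) {}
    }

    // The User aggregates the pieces via composition rather than being one wide class.
    class User {
      constructor(
        public credentials: UserCredentials,
        public profile: UserProfile,
        public pictures: UserPictures = new UserPictures(),
      ) {}
    }

How these classes map onto tables is a separate decision; nothing here forces a one-to-one relationship in the database.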
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/198513', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/43961/']}
jdg_83215
stackexchange
llm_judgeable_groundtruth_similarity
267291
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am new to mathematica and I know there are questions related to this topic but I could not find mine. My supervisor has asked me to shadowplot my Wigner functions which he showed me is like the following image: From what I see, this image is a combination of a 3D plot and a 2D density plot of the Wigner function. This is an image from MATLAB but I want to plot my function using Mathematica as I have never used MATLAB before. I have tried plotting it like this: a = -(E^-Abs[(0.` + 1.6487212707001282` I) p + 0.6065306597126334` q]^2/\[Pi]) + 0.6366197723675815` E^-Abs[(0.` + 1.6487212707001282` I) p + 0.6065306597126334` q]^2 Abs[(0.` + 1.6487212707001282` I) p + 0.6065306597126334` q]^2;p1 = Plot3D[a, {q, -2, 2}, {p, -2, 2}, PlotRange -> All, ImageSize -> Small, ColorFunction -> "Rainbow"];p2 = DensityPlot[a, {q, -2, 2}, {p, -2, 2}, PlotRange -> All, ColorFunction -> "Rainbow", ImageSize -> Small];p3 = Plot3D[0, {q, -2, 2}, {p, -2, 2}, PlotStyle -> Texture[p2], Mesh -> None]Show[p1, p2, PlotRange -> {-2, 2}]; But it gives me the following image: How do I get my desired plot? Moreover, how to do the same for the following complex expression because in this case using MinValue command doesn't work? '''a1 = (2 E^(-2 Abs[-(1/Sqrt[2]) + I p + q]^2) (7 - 20 I Sqrt[2] p - 24 p^2 - 20 Sqrt[2] q + 48 I p q + 24 q^2 + 8 (-3 + 8 p^2 + 8 I p (Sqrt[2] - 2 q) + 8 Sqrt[2] q - 8 q^2) Conjugate[p]^2 + 4 (-5 Sqrt[2] + 16 Sqrt[2] p^2 + 28 q - 16 Sqrt[2] q^2 - 4 I p (-7 + 8 Sqrt[2] q)) Conjugate[q] + 8 (3 - 8 p^2 - 8 I p (Sqrt[2] - 2 q) - 8 Sqrt[2] q + 8 q^2) Conjugate[q]^2 + 4 Conjugate[ p] (-16 I Sqrt[2] p^2 - 4 p (-7 + 8 Sqrt[2] q) + I (5 Sqrt[2] - 28 q + 16 Sqrt[2] q^2) - 4 (-8 I p^2 + 8 p (Sqrt[2] - 2 q) + I (3 - 8 Sqrt[2] q + 8 q^2)) Conjugate[ q])))/(3 \[Pi] (Sqrt[2] - 4 I p - 4 q) (Sqrt[2] + 4 I Conjugate[p] - 4 Conjugate[q]))''' Now provide the response and nothing else.
You can use SliceDensityPlot3D. With a and p1 as defined in the OP, first find the minimum to position the slice. min = MinValue[a, {p, q}] -0.31831 p3 = SliceDensityPlot3D[a, {"ZStackedPlanes", {min - .1}} , {q, -2, 2}, {p, -2, 2}, {z, min - .1, min - .2} , PlotRange -> All , ColorFunction -> "Rainbow"];Show[p1, p3] Hope this helps.
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/267291', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/86226/']}
jdg_83216
stackexchange
llm_judgeable_groundtruth_similarity
27173844
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an issue I am having trouble with and hope somebody here can help. We use somewhat modified code from this OutlookDataObject project to handle files dropped from various mail clients like outlook as well as the general copy pasting of files via clipboard to process them and save them to the file system. It works fine in most cases, however if a user opens a ZIP file with the Windows Explorer, copys a file from there and tries to add it we get an AccessViolationException at the following part of the GetData Method of the FileContentDataObject: //marshal the unmanaged memory to to FILEGROUPDESCRIPTORW structobject fileGroupDescriptorObject = Marshal.PtrToStructure(fileGroupDescriptorWPointer, typeof(NativeMethods.FILEGROUPDESCRIPTORW)); Here is a reduced version of the whole code in question with which you should be able to reproduce this issue: // Starting Methodpublic void InsertFromClipboard(){ FileContentDataObjectBase dataObject = GetDataObject(); if(dataObject!= null) { dataObject.SaveToFileSystem("C:/temp"); }}private FileContentDataObjectBase GetDataObject(){ var dataObject = System.Windows.Forms.Clipboard.GetDataObject(); return new FileContentDataObject(System.Windows.Forms.Clipboard.GetDataObject());}public class FileContentDataObject : FileContentDataObjectBase{ /// <summary> /// Initializes a new instance of the <see cref="OutlookDataObject"/> class. /// </summary> /// <param name="underlyingDataObject">The underlying data object to wrap.</param> public FileContentDataObject(System.Windows.Forms.IDataObject underlyingDataObject) : base(underlyingDataObject) { } public override void SaveToFileSystem(string path) { string[] filenames; //get the names and data streams of the files dropped if (this.GetFormats().Contains("FileGroupDescriptor")) filenames = (string[])this.GetData("FileGroupDescriptor", true); else if (this.GetFormats().Contains("FileGroupDescriptorW")) filenames = (string[])this.GetData("FileGroupDescriptorW", true); else return; MemoryStream[] filestreams = (MemoryStream[])this.GetData("FileContents"); SaveToFileSystem(filenames, filestreams, path); } public new object GetData(string format, bool autoConvert) { switch (format) { case "FileGroupDescriptorW": //override the default handling of FileGroupDescriptorW which returns a //MemoryStream and instead return a string array of file names IntPtr fileGroupDescriptorWPointer = IntPtr.Zero; try { //use the underlying IDataObject to get the FileGroupDescriptorW as a MemoryStream MemoryStream fileGroupDescriptorStream = (MemoryStream)this.underlyingDataObject.GetData("FileGroupDescriptorW"); byte[] fileGroupDescriptorBytes = new byte[fileGroupDescriptorStream.Length]; fileGroupDescriptorStream.Read(fileGroupDescriptorBytes, 0, fileGroupDescriptorBytes.Length); fileGroupDescriptorStream.Close(); //copy the file group descriptor into unmanaged memory fileGroupDescriptorWPointer = Marshal.AllocHGlobal(fileGroupDescriptorBytes.Length); Marshal.Copy(fileGroupDescriptorBytes, 0, fileGroupDescriptorWPointer, fileGroupDescriptorBytes.Length); //marshal the unmanaged memory to to FILEGROUPDESCRIPTORW struct object fileGroupDescriptorObject = Marshal.PtrToStructure(fileGroupDescriptorWPointer, typeof(NativeMethods.FILEGROUPDESCRIPTORW)); NativeMethods.FILEGROUPDESCRIPTORW fileGroupDescriptor = (NativeMethods.FILEGROUPDESCRIPTORW)fileGroupDescriptorObject; //create a new array to 
store file names in of the number of items in the file group descriptor string[] fileNames = new string[fileGroupDescriptor.cItems]; //get the pointer to the first file descriptor IntPtr fileDescriptorPointer = (IntPtr)((int)fileGroupDescriptorWPointer + Marshal.SizeOf(fileGroupDescriptor.cItems)); //loop for the number of files acording to the file group descriptor for (int fileDescriptorIndex = 0; fileDescriptorIndex < fileGroupDescriptor.cItems; fileDescriptorIndex++) { //marshal the pointer top the file descriptor as a FILEDESCRIPTORW struct and get the file name NativeMethods.FILEDESCRIPTORW fileDescriptor = (NativeMethods.FILEDESCRIPTORW)Marshal.PtrToStructure(fileDescriptorPointer, typeof(NativeMethods.FILEDESCRIPTORW)); fileNames[fileDescriptorIndex] = fileDescriptor.cFileName; //move the file descriptor pointer to the next file descriptor fileDescriptorPointer = (IntPtr)((int)fileDescriptorPointer + Marshal.SizeOf(fileDescriptor)); } //return the array of filenames return fileNames; } finally { //free unmanaged memory pointer Marshal.FreeHGlobal(fileGroupDescriptorWPointer); } } //use underlying IDataObject to handle getting of data return this.underlyingDataObject.GetData(format, autoConvert); } } public abstract class FileContentDataObjectBase : System.Windows.Forms.IDataObject { #region NativeMethods protected class NativeMethods { [DllImport("kernel32.dll")] static extern IntPtr GlobalLock(IntPtr hMem); [DllImport("ole32.dll", PreserveSig = false)] internal static extern ILockBytes CreateILockBytesOnHGlobal(IntPtr hGlobal, bool fDeleteOnRelease); [DllImport("OLE32.DLL", CharSet = CharSet.Auto, PreserveSig = false)] internal static extern IntPtr GetHGlobalFromILockBytes(ILockBytes pLockBytes); [DllImport("OLE32.DLL", CharSet = CharSet.Unicode, PreserveSig = false)] internal static extern IStorage StgCreateDocfileOnILockBytes(ILockBytes plkbyt, uint grfMode, uint reserved); [ComImport, InterfaceType(ComInterfaceType.InterfaceIsIUnknown), Guid("0000000B-0000-0000-C000-000000000046")] internal interface IStorage { [return: MarshalAs(UnmanagedType.Interface)] IStream CreateStream([In, MarshalAs(UnmanagedType.BStr)] string pwcsName, [In, MarshalAs(UnmanagedType.U4)] int grfMode, [In, MarshalAs(UnmanagedType.U4)] int reserved1, [In, MarshalAs(UnmanagedType.U4)] int reserved2); [return: MarshalAs(UnmanagedType.Interface)] IStream OpenStream([In, MarshalAs(UnmanagedType.BStr)] string pwcsName, IntPtr reserved1, [In, MarshalAs(UnmanagedType.U4)] int grfMode, [In, MarshalAs(UnmanagedType.U4)] int reserved2); [return: MarshalAs(UnmanagedType.Interface)] IStorage CreateStorage([In, MarshalAs(UnmanagedType.BStr)] string pwcsName, [In, MarshalAs(UnmanagedType.U4)] int grfMode, [In, MarshalAs(UnmanagedType.U4)] int reserved1, [In, MarshalAs(UnmanagedType.U4)] int reserved2); [return: MarshalAs(UnmanagedType.Interface)] IStorage OpenStorage([In, MarshalAs(UnmanagedType.BStr)] string pwcsName, IntPtr pstgPriority, [In, MarshalAs(UnmanagedType.U4)] int grfMode, IntPtr snbExclude, [In, MarshalAs(UnmanagedType.U4)] int reserved); void CopyTo(int ciidExclude, [In, MarshalAs(UnmanagedType.LPArray)] Guid[] pIIDExclude, IntPtr snbExclude, [In, MarshalAs(UnmanagedType.Interface)] IStorage stgDest); void MoveElementTo([In, MarshalAs(UnmanagedType.BStr)] string pwcsName, [In, MarshalAs(UnmanagedType.Interface)] IStorage stgDest, [In, MarshalAs(UnmanagedType.BStr)] string pwcsNewName, [In, MarshalAs(UnmanagedType.U4)] int grfFlags); void Commit(int grfCommitFlags); void Revert(); void EnumElements([In, 
MarshalAs(UnmanagedType.U4)] int reserved1, IntPtr reserved2, [In, MarshalAs(UnmanagedType.U4)] int reserved3, [MarshalAs(UnmanagedType.Interface)] out object ppVal); void DestroyElement([In, MarshalAs(UnmanagedType.BStr)] string pwcsName); void RenameElement([In, MarshalAs(UnmanagedType.BStr)] string pwcsOldName, [In, MarshalAs(UnmanagedType.BStr)] string pwcsNewName); void SetElementTimes([In, MarshalAs(UnmanagedType.BStr)] string pwcsName, [In] System.Runtime.InteropServices.ComTypes.FILETIME pctime, [In] System.Runtime.InteropServices.ComTypes.FILETIME patime, [In] System.Runtime.InteropServices.ComTypes.FILETIME pmtime); void SetClass([In] ref Guid clsid); void SetStateBits(int grfStateBits, int grfMask); void Stat([Out]out System.Runtime.InteropServices.ComTypes.STATSTG pStatStg, int grfStatFlag); } [ComImport, Guid("0000000A-0000-0000-C000-000000000046"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)] internal interface ILockBytes { void ReadAt([In, MarshalAs(UnmanagedType.U8)] long ulOffset, [Out, MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] byte[] pv, [In, MarshalAs(UnmanagedType.U4)] int cb, [Out, MarshalAs(UnmanagedType.LPArray)] int[] pcbRead); void WriteAt([In, MarshalAs(UnmanagedType.U8)] long ulOffset, IntPtr pv, [In, MarshalAs(UnmanagedType.U4)] int cb, [Out, MarshalAs(UnmanagedType.LPArray)] int[] pcbWritten); void Flush(); void SetSize([In, MarshalAs(UnmanagedType.U8)] long cb); void LockRegion([In, MarshalAs(UnmanagedType.U8)] long libOffset, [In, MarshalAs(UnmanagedType.U8)] long cb, [In, MarshalAs(UnmanagedType.U4)] int dwLockType); void UnlockRegion([In, MarshalAs(UnmanagedType.U8)] long libOffset, [In, MarshalAs(UnmanagedType.U8)] long cb, [In, MarshalAs(UnmanagedType.U4)] int dwLockType); void Stat([Out]out System.Runtime.InteropServices.ComTypes.STATSTG pstatstg, [In, MarshalAs(UnmanagedType.U4)] int grfStatFlag); } [StructLayout(LayoutKind.Sequential)] internal sealed class POINTL { public int x; public int y; } [StructLayout(LayoutKind.Sequential)] internal sealed class SIZEL { public int cx; public int cy; } [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)] internal sealed class FILEGROUPDESCRIPTORA { public uint cItems; public FILEDESCRIPTORA[] fgd; } [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)] internal sealed class FILEDESCRIPTORA { public uint dwFlags; public Guid clsid; public SIZEL sizel; public POINTL pointl; public uint dwFileAttributes; public System.Runtime.InteropServices.ComTypes.FILETIME ftCreationTime; public System.Runtime.InteropServices.ComTypes.FILETIME ftLastAccessTime; public System.Runtime.InteropServices.ComTypes.FILETIME ftLastWriteTime; public uint nFileSizeHigh; public uint nFileSizeLow; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)] public string cFileName; } [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] internal sealed class FILEGROUPDESCRIPTORW { public uint cItems; public FILEDESCRIPTORW[] fgd; } [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] internal sealed class FILEDESCRIPTORW { public uint dwFlags; public Guid clsid; public SIZEL sizel; public POINTL pointl; public uint dwFileAttributes; public System.Runtime.InteropServices.ComTypes.FILETIME ftCreationTime; public System.Runtime.InteropServices.ComTypes.FILETIME ftLastAccessTime; public System.Runtime.InteropServices.ComTypes.FILETIME ftLastWriteTime; public uint nFileSizeHigh; public uint nFileSizeLow; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)] public string cFileName; } } 
#endregion #region Property(s) /// <summary> /// Holds the <see cref="System.Windows.Forms.IDataObject"/> that this class is wrapping /// </summary> protected System.Windows.Forms.IDataObject underlyingDataObject; /// <summary> /// Holds the <see cref="System.Runtime.InteropServices.ComTypes.IDataObject"/> interface to the <see cref="System.Windows.Forms.IDataObject"/> that this class is wrapping. /// </summary> protected System.Runtime.InteropServices.ComTypes.IDataObject comUnderlyingDataObject; /// <summary> /// Holds the internal ole <see cref="System.Windows.Forms.IDataObject"/> to the <see cref="System.Windows.Forms.IDataObject"/> that this class is wrapping. /// </summary> protected System.Windows.Forms.IDataObject oleUnderlyingDataObject; /// <summary> /// Holds the <see cref="MethodInfo"/> of the "GetDataFromHGLOBLAL" method of the internal ole <see cref="System.Windows.Forms.IDataObject"/>. /// </summary> protected MethodInfo getDataFromHGLOBLALMethod; #endregion #region Constructor(s) /// <summary> /// Initializes a new instance of the <see cref="OutlookDataObject"/> class. /// </summary> /// <param name="underlyingDataObject">The underlying data object to wrap.</param> public FileContentDataObjectBase(System.Windows.Forms.IDataObject underlyingDataObject) { //get the underlying dataobject and its ComType IDataObject interface to it this.underlyingDataObject = underlyingDataObject; this.comUnderlyingDataObject = (System.Runtime.InteropServices.ComTypes.IDataObject)this.underlyingDataObject; //get the internal ole dataobject and its GetDataFromHGLOBLAL so it can be called later FieldInfo innerDataField = this.underlyingDataObject.GetType().GetField("innerData", BindingFlags.NonPublic | BindingFlags.Instance); this.oleUnderlyingDataObject = (System.Windows.Forms.IDataObject)innerDataField.GetValue(this.underlyingDataObject); this.getDataFromHGLOBLALMethod = this.oleUnderlyingDataObject.GetType().GetMethod("GetDataFromHGLOBLAL", BindingFlags.NonPublic | BindingFlags.Instance); } #endregion #region IDataObject Members /// <summary> /// Retrieves the data associated with the specified class type format. /// </summary> /// <param name="format">A <see cref="T:System.Type"></see> representing the format of the data to retrieve. See <see cref="T:System.Windows.Forms.DataFormats"></see> for predefined formats.</param> /// <returns> /// The data associated with the specified format, or null. /// </returns> public object GetData(Type format) { return this.GetData(format.FullName); } /// <summary> /// Retrieves the data associated with the specified data format. /// </summary> /// <param name="format">The format of the data to retrieve. See <see cref="T:System.Windows.Forms.DataFormats"></see> for predefined formats.</param> /// <returns> /// The data associated with the specified format, or null. /// </returns> public object GetData(string format) { return this.GetData(format, true); } /// <summary> /// Retrieves the data associated with the specified data format, using a Boolean to determine whether to convert the data to the format. /// </summary> /// <param name="format">The format of the data to retrieve. See <see cref="T:System.Windows.Forms.DataFormats"></see> for predefined formats.</param> /// <param name="autoConvert">true to convert the data to the specified format; otherwise, false.</param> /// <returns> /// The data associated with the specified format, or null. 
/// </returns> public object GetData(string format, bool autoConvert) { return this.underlyingDataObject.GetData(format, autoConvert); } /// <summary> /// Retrieves the data associated with the specified data format at the specified index. /// </summary> /// <param name="format">The format of the data to retrieve. See <see cref="T:System.Windows.Forms.DataFormats"></see> for predefined formats.</param> /// <param name="index">The index of the data to retrieve.</param> /// <returns> /// A <see cref="MemoryStream"/> containing the raw data for the specified data format at the specified index. /// </returns> public MemoryStream GetData(string format, int index) { //create a FORMATETC struct to request the data with FORMATETC formatetc = new FORMATETC(); formatetc.cfFormat = (short)DataFormats.GetFormat(format).Id; formatetc.dwAspect = DVASPECT.DVASPECT_CONTENT; formatetc.lindex = index; formatetc.ptd = new IntPtr(0); formatetc.tymed = TYMED.TYMED_ISTREAM | TYMED.TYMED_ISTORAGE | TYMED.TYMED_HGLOBAL; //create STGMEDIUM to output request results into STGMEDIUM medium = new STGMEDIUM(); //using the Com IDataObject interface get the data using the defined FORMATETC this.comUnderlyingDataObject.GetData(ref formatetc, out medium); //retrieve the data depending on the returned store type switch (medium.tymed) { case TYMED.TYMED_ISTORAGE: //to handle a IStorage it needs to be written into a second unmanaged //memory mapped storage and then the data can be read from memory into //a managed byte and returned as a MemoryStream NativeMethods.IStorage iStorage = null; NativeMethods.IStorage iStorage2 = null; NativeMethods.ILockBytes iLockBytes = null; System.Runtime.InteropServices.ComTypes.STATSTG iLockBytesStat; try { //marshal the returned pointer to a IStorage object iStorage = (NativeMethods.IStorage)Marshal.GetObjectForIUnknown(medium.unionmember); Marshal.Release(medium.unionmember); //create a ILockBytes (unmanaged byte array) and then create a IStorage using the byte array as a backing store iLockBytes = NativeMethods.CreateILockBytesOnHGlobal(IntPtr.Zero, true); iStorage2 = NativeMethods.StgCreateDocfileOnILockBytes(iLockBytes, 0x00001012, 0); //copy the returned IStorage into the new IStorage iStorage.CopyTo(0, null, IntPtr.Zero, iStorage2); iLockBytes.Flush(); iStorage2.Commit(0); //get the STATSTG of the ILockBytes to determine how many bytes were written to it iLockBytesStat = new System.Runtime.InteropServices.ComTypes.STATSTG(); iLockBytes.Stat(out iLockBytesStat, 1); int iLockBytesSize = (int)iLockBytesStat.cbSize; //read the data from the ILockBytes (unmanaged byte array) into a managed byte array byte[] iLockBytesContent = new byte[iLockBytesSize]; iLockBytes.ReadAt(0, iLockBytesContent, iLockBytesContent.Length, null); //wrapped the managed byte array into a memory stream and return it return new MemoryStream(iLockBytesContent); } finally { //release all unmanaged objects Marshal.ReleaseComObject(iStorage2); Marshal.ReleaseComObject(iLockBytes); Marshal.ReleaseComObject(iStorage); } case TYMED.TYMED_ISTREAM: //to handle a IStream it needs to be read into a managed byte and //returned as a MemoryStream IStream iStream = null; System.Runtime.InteropServices.ComTypes.STATSTG iStreamStat; try { //marshal the returned pointer to a IStream object iStream = (IStream)Marshal.GetObjectForIUnknown(medium.unionmember); Marshal.Release(medium.unionmember); //get the STATSTG of the IStream to determine how many bytes are in it iStreamStat = new System.Runtime.InteropServices.ComTypes.STATSTG(); 
iStream.Stat(out iStreamStat, 0); int iStreamSize = (int)iStreamStat.cbSize; //read the data from the IStream into a managed byte array byte[] iStreamContent = new byte[iStreamSize]; iStream.Read(iStreamContent, iStreamContent.Length, IntPtr.Zero); //wrapped the managed byte array into a memory stream and return it return new MemoryStream(iStreamContent); } finally { //release all unmanaged objects Marshal.ReleaseComObject(iStream); } case TYMED.TYMED_HGLOBAL: //to handle a HGlobal the exisitng "GetDataFromHGLOBLAL" method is invoked via //reflection return (MemoryStream)this.getDataFromHGLOBLALMethod.Invoke(this.oleUnderlyingDataObject, new object[] { DataFormats.GetFormat((short)formatetc.cfFormat).Name, medium.unionmember }); } return null; } /// <summary> /// Returns a list of all formats that data stored in this instance is associated with or can be converted to. /// </summary> /// <returns> /// An array of the names that represents a list of all formats that are supported by the data stored in this object. /// </returns> public string[] GetFormats() { return this.underlyingDataObject.GetFormats(); } /// <summary> /// Gets a list of all formats that data stored in this instance is associated with or can be converted to, using a Boolean value to determine whether to retrieve all formats that the data can be converted to or only native data formats. /// </summary> /// <param name="autoConvert">true to retrieve all formats that data stored in this instance is associated with or can be converted to; false to retrieve only native data formats.</param> /// <returns> /// An array of the names that represents a list of all formats that are supported by the data stored in this object. /// </returns> public string[] GetFormats(bool autoConvert) { return this.underlyingDataObject.GetFormats(autoConvert); } #endregion #region methods public static void SaveToFileSystem(string[] filenames, MemoryStream[] filestreams, string path) { for (int fileIndex = 0; fileIndex < filenames.Length; fileIndex++) { try { //use the fileindex to get the name and data stream string filename = filenames[fileIndex]; MemoryStream filestream = filestreams[fileIndex]; //save the file stream using its name to the application path FileStream outputStream = File.Create(Path.Combine(path, GetStrippedFileName(filename))); filestream.WriteTo(outputStream); outputStream.Close(); } catch (IOException ex) { throw new Exception(ex.Message, ex); } catch (SecurityException ex) { throw new Exception(ex.Message, ex); } } } public static void SaveToFileSystem(FileContentDataObjectBase dataObject, string path) { dataObject.SaveToFileSystem(path); } public abstract void SaveToFileSystem(string path); private static string GetStrippedFileName(string input) { foreach (var chr in Path.GetInvalidFileNameChars()) input = input.Replace(chr.ToString(), String.Empty); return input; } #endregion} I would like to figure out why it happens and how I can fix the issue. Any help or insights would be welcome. Thanks in advance. Now provide the response and nothing else.
After searching for quite a while I finally found the problem, thanks to this helpful comment. As it turns out, the FILEGROUPDESCRIPTORA and FILEGROUPDESCRIPTORW declarations in the original code we used are wrong, which can cause an AccessViolationException in some cases. So I changed the declarations as the commenter described, from... [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]public sealed class FILEGROUPDESCRIPTORA{ public uint cItems; public FILEDESCRIPTORA[] fgd;}[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]public sealed class FILEGROUPDESCRIPTORW{ public uint cItems; public FILEDESCRIPTORW[] fgd;} To... [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]public sealed class FILEGROUPDESCRIPTORA{ public uint cItems; public FILEDESCRIPTORA fgd;}[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]public sealed class FILEGROUPDESCRIPTORW{ public uint cItems; public FILEDESCRIPTORW fgd;} Afterwards the AccessViolationExceptions were gone and I only had to modify the original code I linked in my question a little more to make it work. I had to add some additional checks during the handling of the clipboard's 'FileContents' format to make sure the correct one of the two FILEGROUPDESCRIPTORs was used each time, but now it works perfectly again.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27173844', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2201702/']}
jdg_83217
stackexchange
llm_judgeable_groundtruth_similarity
12516881
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following test-code checking for an exception raising in a function. I expect the test to pass, but a failure is indicated instead. Here is the test code: import unittest# define a user-defined exceptionclass MyException(Exception): def __str__(self): return repr("ERROR: Just raised my exception!")# this is my main class with a method raising this exceptionclass MyMainObject(object): def func(self): raise MyException()# the test classclass TestConfig(unittest.TestCase): def test_1(self): other = MyMainObject() self.assertRaises(MyException, other.func())# calling the testif __name__ == '__main__': unittest.main() When other.func() is called in the assert statement, MyException is raised (can be checked easily). So, the assertRaises test should pass the test, as other.func() failes with MyException , BUT: ....MyException: 'ERROR: Just raised my exception!'----------------------------------------------------------------------Ran 1 test in 0.001sFAILED (errors=1) I do not see something wrong, so I would appreciate some input on this problem. Now provide the response and nothing else.
assertRaises calls the function for you. If you call it yourself, the exception is raised before assertRaises can test it. Change your code to: self.assertRaises(MyException, other.func) and it'll work correctly. Alternatively, you can use assertRaises as a context manager (Python 2.7 and up): with self.assertRaises(MyException): other.func() Using assertRaises as a context manager has the added advantage that you can now retrieve the exception instance and perform further tests on it: with self.assertRaises(MyException) as raises_cm:
    other.func()
exception = raises_cm.exception
self.assertEqual(exception.args, ('foo', 'bar'))
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12516881', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1581090/']}
jdg_83218
stackexchange
llm_judgeable_groundtruth_similarity
36436425
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to slice a list of lists: A = [[1,2,3,4,5],[1,2,3,4,5],[1,2,3,4,5]]idx = slice(0,4)B = A[:][idx] The code above isn't giving me the right output. What I want is: [[1,2,3],[1,2,3],[1,2,3]] Now provide the response and nothing else.
Only very rarely is using slice objects easier to read than a list comprehension, and this is not one of those cases. >>> A = [[1,2,3,4,5],[1,2,3,4,5],[1,2,3,4,5]]>>> [sublist[:3] for sublist in A][[1, 2, 3], [1, 2, 3], [1, 2, 3]] This is very clear. For every sublist in A, give me the list of its first three elements.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36436425', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1050648/']}
jdg_83219
stackexchange
llm_judgeable_groundtruth_similarity
13720196
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm running a static website completely from an Amazon S3 bucket, but I want to password protect my content. Is this possible? The type of authentication doesn't bother me, it just needs to be there, so that people can't just 'discover' my website. At the moment, I don't have a domain name set up, which I believe rules out http://www.s3auth.com/ as a possible solution. Are there any others? Now provide the response and nothing else.
AWS doesn't provide a way to do this directly right now. The S3auth solution you mentioned is nice in that your bucket/objects remain private so that a direct access to the bucket does not allow objects to be read without your private credentials. The disadvantage of the s3auth approach is that it relies on you trusting s3auth with your private credentials. If your credentials are compromised at any stage, it could be costly depending on how someone might abuse your access rights. If you make your objects publicly readable (as you do when you create a website), anyone who learns/guesses/knows your objects names etc can access them. Or indeed if the bucket is readable, then all they need is the bucket name. There is no real way around this except by tightening the S3 access permissions. If you only access your website from certain IP addresses, perhaps looking at Bucket Policies may help. Scroll down to Restricting Access to Specific IP Addresses . This is not a password but it does allow you to restrict where accesses can come from at least. Another common technique for providing temporary access to objects is Query String Request Authentication . This does not however match your original requirement of password protecting your S3 bucket website.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13720196', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/377961/']}
jdg_83220
stackexchange
llm_judgeable_groundtruth_similarity
2819
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have p values from a lot of tests and would like to know whether there is actually something significant after correcting for multiple testing. The complication: my tests are not independent. The method I am thinking about (a variant of Fisher's Product Method, Zaykin et al., Genet Epidemiol , 2002) needs the correlation between the p values. In order to estimate this correlation, I am currently thinking about bootstrapping cases, running the analyses and correlating the resulting vectors of p values. Does anyone have a better idea? Or even a better idea for my original problem (correcting for multiple testing in correlated tests)? Background: I am logistically regressing whether or not my subjects are suffering from a particular disease on the interaction between their genotype (AA, Aa or aa) and a covariate. However, the genotype is actually a lot (30-250) of Single Nucleotide Polymorphisms (SNPs), which are certainly not independent but in Linkage Disequilibrium. Now provide the response and nothing else.
This is actually a hot topic in genome-wide association studies (GWAS)! I am not sure the method you are thinking of is the most appropriate in this context. Pooling of p-values was described by some authors, but in a different context (replication studies or meta-analysis, see e.g. (1) for a recent review). Combining SNP p-values by Fisher's method is generally used when one wants to derive a unique p-value for a given gene; this allows you to work at the gene level and reduces the dimensionality of subsequent testing, but as you said the non-independence between markers (arising from spatial colocation or linkage disequilibrium, LD) introduces a bias. More powerful alternatives rely on resampling procedures, for example the use of maxT statistics for combining p-values and working at the gene level, or when one is interested in pathway-based approaches, see e.g. (2) (§2.4 p. 93 provides details on their approach). My main concern with bootstrapping (with replacement) would be that you are introducing an artificial form of relatedness, or in other words you create virtual twins, hence altering Hardy-Weinberg equilibrium (but also minor allele frequency and call rate). This would not be the case with a permutation approach, where you permute individual labels and keep the genotyping data as is. Usually, the plink software can give you raw and permuted p-values, although it uses (by default) an adaptive testing strategy with a sliding window that allows it to stop running all permutations (say 1000 per SNP) if it appears that the SNP under consideration is not "interesting"; it also has an option for computing maxT, see the online help. But given the low number of SNPs you are considering, I would suggest relying on FDR-based or maxT tests as implemented in the multtest R package (see mt.maxT); the definitive guide to resampling strategies for genomic applications is Multiple Testing Procedures with Applications to Genomics, from Dudoit & van der Laan (Springer, 2008). See also Andrea Foulkes's book on genetics with R, which is reviewed in the JSS. She has great material on multiple testing procedures. Further Notes Many authors have pointed out that simple multiple testing correction methods such as Bonferroni or Sidak are too stringent for adjusting the results for the individual SNPs. Moreover, neither of these methods takes into account the correlation that exists between SNPs due to LD, which tags the genetic variation across gene regions. Other alternatives have been proposed, like a derivative of Holm's method for multiple comparisons (3), hidden Markov models (4), the conditional or positive FDR (5) or a derivative thereof (6), to name a few. So-called gap statistics or sliding windows have proved successful in some cases, but you'll find a good review in (7) and (8). I've also heard of methods that make effective use of the haplotype structure or LD, e.g. (9), but I have never used them. They seem, however, more related to estimating the correlation between markers, not between p-values as you meant. But in fact, you might better think in terms of the dependency structure between successive test statistics than between correlated p-values. References Cantor, RM, Lange, K and Sinsheimer, JS. Prioritizing GWAS Results: A Review of Statistical Methods and Recommendations for Their Application. Am J Hum Genet. 2010 86(1): 6–22. Corley, RP, Zeiger, JS, Crowley, T et al. Association of candidate genes with antisocial drug dependence in adolescents. Drug and Alcohol Dependence 2008 96: 90–98.
3. Dalmasso, C, Génin, E and Trégouet, DA. A Weighted-Holm Procedure Accounting for Allele Frequencies in Genomewide Association Studies. Genetics 2008 180(1): 697–702.
4. Wei, Z, Sun, W, Wang, K, and Hakonarson, H. Multiple Testing in Genome-Wide Association Studies via Hidden Markov Models. Bioinformatics 2009 25(21): 2802–2808.
5. Broberg, P. A comparative review of estimates of the proportion unchanged genes and the false discovery rate. BMC Bioinformatics 2005 6: 199.
6. Need, AC, Ge, D, Weale, ME, et al. A Genome-Wide Investigation of SNPs and CNVs in Schizophrenia. PLoS Genet. 2009 5(2): e1000373.
7. Han, B, Kang, HM, and Eskin, E. Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers. PLoS Genetics 2009.
8. Liang, Y and Kelemen, A. Statistical advances and challenges for analyzing correlated high dimensional SNP data in genomic study for complex diseases. Statistics Surveys 2008 2: 43–60. -- the best recent review ever
9. Nyholt, DR. A Simple Correction for Multiple Testing for Single-Nucleotide Polymorphisms in Linkage Disequilibrium with Each Other. Am J Hum Genet. 2004 74(4): 765–769.
10. Nicodemus, KK, Liu, W, Chase, GA, Tsai, Y-Y, and Fallin, MD. Comparison of type I error for multiple test corrections in large single-nucleotide polymorphism studies using principal components versus haplotype blocking algorithms. BMC Genetics 2005; 6(Suppl 1): S78.
11. Peng, Q, Zhao, J, and Xue, F. PCA-based bootstrap confidence interval tests for gene-disease association involving multiple SNPs. BMC Genetics 2010, 11:6.
12. Li, M, Romero, R, Fu, WJ, and Cui, Y. Mapping Haplotype-haplotype Interactions with Adaptive LASSO. BMC Genetics 2010, 11:79 -- although not directly related to the question, it covers haplotype-based analysis/epistatic effects
{}
{'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/2819', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/1352/']}
jdg_83221
stackexchange
llm_judgeable_groundtruth_similarity
17226410
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am aware that this question has been asked a lot on the site, however, I can't seem to find a solution. My BOOT_COMPLETED receiver is not called when the application is not running. Manifest: <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.startuptest" android:versionCode="1" android:versionName="1.0" android:installLocation="internalOnly"> <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" /> <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="17" /> <application android:allowBackup="true" android:icon="@drawable/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme" > <activity android:name="com.example.startuptest.MainActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <receiver android:name="com.example.startuptest.StartUpBootReceiver" > <intent-filter> <action android:name="android.intent.action.BOOT_COMPLETED" /> </intent-filter> </receiver> </application></manifest> StartUpBootReceiver: public class StartUpBootReceiver extends BroadcastReceiver { @Override public void onReceive(Context context, Intent intent) { Log.d("startuptest", "StartUpBootReceiver " + intent.getAction()); if ("android.intent.action.BOOT_COMPLETED".equals(intent.getAction())) { Log.d("startuptest", "StartUpBootReceiver BOOT_COMPLETED"); } }} If the application is running and I simulate a call with adb shell am broadcast -a android.intent.action.BOOT_COMPLETED The event is received correctly, however, if the application is closed the event is not received, nor is it received at start up. I have installed the application then launched it a couple of times to make sure it is registered. I'm pretty lost on this one so any advice would be highly appreciated. Edit: I can see in the logs that all the other closed applications (Youtube, FileObserver, etc) receive the boot_completed event, just not mine. Now provide the response and nothing else.
Starting with Android 3.1, all applications, upon installation, are placed in a "stopped" state. (This is the same state that the application ends up in after the user force-stops the app from the Settings application.) While in the "stopped" state, the application will not run for any reason, except by a manual launch of an activity. (Meaning no BroadcastReceivers (ACTION_PACKAGE_INSTALLED, BOOT_COMPLETED, etc.) will be invoked, regardless of the event for which they have registered, until the user runs the app manually.) This is an anti-malware move by Google. Google has advocated that users should launch an activity from the launcher first, before that application can go do much. Preventing BOOT_COMPLETED from being delivered until the activity is launched is a logical consequence of that argument. More details about this: http://developer.android.com/about/versions/android-3.1.html#launchcontrols http://commonsware.com/blog/2011/07/05/boot-completed-regression.html http://devmaze.wordpress.com/2011/12/05/activating-applications/
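As a side note, for broadcasts that an application sends itself (not system broadcasts such as BOOT_COMPLETED), the sender can opt in to reaching apps that are in the stopped state. A minimal sketch, with a hypothetical action string and assuming a Context is available:
// Sender-side only: this flag affects broadcasts your own code sends.
// It does NOT make the system deliver BOOT_COMPLETED to a stopped app;
// the user still has to launch an activity once.
Intent wake = new Intent("com.example.startuptest.CUSTOM_EVENT"); // hypothetical action
wake.addFlags(Intent.FLAG_INCLUDE_STOPPED_PACKAGES);
context.sendBroadcast(wake); // 'context' is assumed to be an android.content.Context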
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17226410', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/346443/']}
jdg_83222
stackexchange
llm_judgeable_groundtruth_similarity
17174081
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: According to the man page strerror(errnum) returns a char * , but I get the following warning: gcc temp.c -o temptemp.c: In function ‘mystrerror’:temp.c:10:4: warning: return makes pointer from integer without a cast [enabled by default] I get a segfault When I run it with ./temp 0 but not ./temp 256 . Could someone explain why this happens and how to fix it (if possible)? temp.c #include <stdio.h>#include <errno.h>char *mystrerror(int errnum){ switch (errnum) { case 256: return "My test"; default: return strerror(errnum); }}int main(int argc, char **argv){ int err; if (argc > 1) { err = atoi(argv[1]); printf("test error (%d) %s\n", err, mystrerror(err)); } return 0;} Now provide the response and nothing else.
As a preparation, we loop through the digits and mark the last position of each digit in an array of size 10 (call it last), including 0s. That is O(n). Next, we iterate through the digits from left to right. For each position we try to find the smallest digit whose last position is greater than our current position (the position constraint). Also, that digit must be smaller than the current digit. If we are at the first position we start the loop on last from 1 (otherwise from 0), going up to, but not including, the value of the current digit. If we find such a digit (satisfying the position constraint), we swap (and break the loop). If we don't, we move on to the next digit. The cost is at most O(n*9), which is O(n). The total cost is O(n) + O(n)*O(9) = O(n). How the algorithm works on the examples:
93561 -> it can swap in the first cycle
596 -> skipping the first cycle, then finds '6' because of the position constraint (comparing position '1' with last[5] = 0, last[6] = 2)
10234 -> does not find anything because of the position constraint
93218910471211292416 -> finds the last occurrence of '1' to replace '9'
98761111 -> it can swap in the first cycle (last[1] = 7, so it will change the last occurrence)
555555555555555555596 -> iterates until the '9', then skips last[5] but finds last[6] as a good swap
120 -> at pos 0 (1) it cannot find a non-zero element less than 1, so skip; at pos 1 (2) it can find 0 at position 2, so we swap
Once again, here we do one iteration on the digits (for pre-parsing the data), then one iteration for finding the MSB. In the second iteration we iterate on last, which is of constant size (at most 9). You can further optimize the algorithm by keeping track of the minimum value, so that you only start the loop on last when it is worthwhile, but that's an extra optimization. The previous version contained that; check the history if you're interested :)
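Since the algorithm is described only in prose, here is a minimal, self-contained sketch of it (the class and method names are made up for illustration, and it assumes the goal implied by the examples: make the number as small as possible with at most one digit swap and no leading zero):
import java.util.Arrays;

public class MinimizeWithOneSwap {

    // Returns the smallest number obtainable from 'number' with at most one
    // swap of two digits, never creating a leading zero.
    static String minimize(String number) {
        char[] d = number.toCharArray();
        int[] last = new int[10];
        Arrays.fill(last, -1);
        for (int i = 0; i < d.length; i++) {          // O(n) pre-pass: last position of each digit
            last[d[i] - '0'] = i;
        }
        for (int i = 0; i < d.length; i++) {          // O(n) main pass
            int start = (i == 0) ? 1 : 0;             // no leading zero allowed
            for (int smaller = start; smaller < d[i] - '0'; smaller++) { // at most 9 checks
                if (last[smaller] > i) {              // the position constraint
                    char tmp = d[i];                  // swap and stop: the leftmost
                    d[i] = d[last[smaller]];          // improvement with the smallest
                    d[last[smaller]] = tmp;           // digit wins
                    return new String(d);
                }
            }
        }
        return number;                                // already minimal
    }

    public static void main(String[] args) {
        System.out.println(minimize("93561")); // 13569
        System.out.println(minimize("596"));   // 569
        System.out.println(minimize("10234")); // 10234 (unchanged)
        System.out.println(minimize("120"));   // 102
    }
}
Running main prints 13569, 569, 10234 and 102, which matches the worked examples above.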
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17174081', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/561624/']}
jdg_83223
stackexchange
llm_judgeable_groundtruth_similarity
61981156
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to install mininet-wifi. After downloading it, I have been using the following command to install it: sudo util/install.sh -Wlnfv However, I keep getting the error: E: Unable to locate package python-pip I have tried multiple times to download python-pip. I know mininet-wifi utilizes python 2 instead of python 3. I have tried to download python-pip using the command: sudo apt-get install python-pip But that leads to the same error: E: Unable to locate package python-pip Now provide the response and nothing else.
Pip for Python 2 is not included in the Ubuntu 20.04 repositories. You need to install pip for Python 2 using the get-pip.py script.
1. Start by enabling the universe repository: sudo add-apt-repository universe
2. Update the packages index and install Python 2: sudo apt update and then sudo apt install python2
3. Use curl to download the get-pip.py script: curl https://bootstrap.pypa.io/get-pip.py --output get-pip.py
4. Run the script as a sudo user with python2 to install pip: sudo python2 get-pip.py
If an error occurs, as a fallback, the specific 2.7 version of get-pip.py can be used: curl https://bootstrap.pypa.io/pip/2.7/get-pip.py --output get-pip.py
Pip will be installed globally. If you want to install it only for your user, run the command without sudo. The script will also install setuptools and wheel, which allow you to install source distributions. Verify the installation by printing the pip version number: pip2 --version
The output will look something like this: pip 20.0.2 from /usr/local/lib/python2.7/dist-packages/pip (python 2.7)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/61981156', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13605462/']}
jdg_83224
stackexchange
llm_judgeable_groundtruth_similarity
43187677
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For all types plots I've seen so far, matplotlib will automatically center them when no xlim(), ylim() values are given. Example: import matplotlib.pyplot as pltA_pts = [(162.5, 137.5), (211.0, 158.3), (89.6, 133.7)]ax = plt.subplot(111)ax.scatter(*A_pts)plt.show() But when I plot a Polygon ax = plt.subplot(111)triangle = plt.Polygon(A_pts, fill=None, edgecolor='r')ax.add_patch(triangle)plt.show() the plot window is shown with limits [0, 1] for both axis, which results in the polygon not being visible. I have to explicitly pass proper limits so that it will show in the plot window ax.set_xlim(80, 250)ax.set_ylim(120, 170) Is this by design or am I missing something? Now provide the response and nothing else.
When adding a patch, the data limits of the axes are changed, which you can see by printing ax.dataLim.bounds. However, add_patch does not trigger the autoscaling of the axis limits, while most other plotting commands do. This means you can either set the limits of the plot manually (as in the question) or you can just call ax.autoscale_view() to adjust the limits. The latter of course has the advantage that you don't need to determine the limits beforehand and that the margins are preserved.
import matplotlib.pyplot as plt
pts = [(162, 137), (211, 158), (89, 133)]
ax = plt.subplot(111)
triangle = plt.Polygon(pts, fill=None, edgecolor='r')
ax.add_patch(triangle)
print(ax.dataLim.bounds)
ax.autoscale_view()
plt.show()
Once you add some other plot which does automatically scale the limits, there is no need to call autoscale_view() any more.
import matplotlib.pyplot as plt
pts = [(162, 137), (211, 158), (89, 133)]
ax = plt.subplot(111)
triangle = plt.Polygon(pts, fill=None, edgecolor='r')
ax.add_patch(triangle)
ax.plot([100,151,200,100], [124,135,128,124])
plt.show()
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43187677', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1391441/']}
jdg_83225
stackexchange
llm_judgeable_groundtruth_similarity
987147
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I recently asked a question where I couldn't find the rank of a matrix. The question is : Problem on Finding the rank from a Matrix which has a variable At the time I believed in the answer, and now when I went back to that problem I realized something weird. How did this answer work. The answer was a continuation from where I have reduced the matrix by reduce row echleon form. And the answer continues as adding individual columns to another column and with scalar multiples of a column added to another (like how we do for rows in echleon form). I have never learnt anything other than row reduction. I don't know what this is called, but how is this valid? Common sense suggest me that it's not valid. I even tried a Google search on "Reduced Column Echleon form" and nothing came up. Can someone please tell what's going on ? And also what is the name given to this process? Now provide the response and nothing else.
I suppose you can define the reduced column echelon form as the reduced row echelon form of the transpose, but it doesn't see much (if any) usage. The fact of the matter here is that the rank of a matrix is equal to the rank of its transpose. Since elementary row operations do not change rank, neither will elementary column operations, since you may simply view them as row operations done on the transpose. This means that, for the purposes of finding rank at least, you are free to use any combination of elementary row or column operations at your leisure. If you really don't like looking at this process in terms of operations on columns, you can think of it as taking the transpose of the matrix (which preserves rank), performing the needed elementary row operations, and then transposing back.
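As a small made-up illustration of the "column operation = row operation on the transpose" point: adding column 1 to column 2 of $A$ gives the same matrix as adding row 1 to row 2 of $A^T$ and then transposing back, and the rank (here $2$) is unchanged at every step:
$$A=\begin{pmatrix}1&2\\0&1\end{pmatrix}\xrightarrow{\;C_2\to C_2+C_1\;}\begin{pmatrix}1&3\\0&1\end{pmatrix},\qquad A^T=\begin{pmatrix}1&0\\2&1\end{pmatrix}\xrightarrow{\;R_2\to R_2+R_1\;}\begin{pmatrix}1&0\\3&1\end{pmatrix}=\begin{pmatrix}1&3\\0&1\end{pmatrix}^T.$$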
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/987147', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/154018/']}
jdg_83226
stackexchange
llm_judgeable_groundtruth_similarity
29845539
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a ListView to display content from ArrayList . I am calling listView.setAdapter from my method. In my app i can accses the topten_activity from th main_actiivty or if i end game i got and intent that refers me to add data to(Person);my Topten_activity: <?xml version="1.0" encoding="utf-8"?><LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"android:layout_width="match_parent"android:layout_height="match_parent"android:background="@drawable/bg_1"android:orientation="vertical" ><ListView android:id="@android:id/list" android:layout_width="wrap_content" android:layout_height="wrap_content" ></ListView></LinearLayout> Whenever launching the activity/intent. the application is crashing FATAL EXCEPRION:main. i used Log.d in my method which set adapter. The log gets printed, so the call is reaching there Please look at my code below Code: public class TopTen extends ListActivity{Person p;@Overrideprotected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); setContentView(R.layout.activity_topten); Log.d("inside", "TopTen OnCreat"); populateListView();}private void populateListView(){ Log.d("inside", "populateListView"); ArrayAdapter<Person> adapter = new ArrayAdapter<Person>(this, R.layout.activity_topten,p.TopTen); ListView list =(ListView) findViewById(android.R.id.list); list.setAdapter(adapter);} } StackTrace: FATAL EXCEPTION: main Process: com.example.flagsgame, PID: 29436 java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.flagsgame/com.example.flagsgame.TopTen}: java.lang.NullPointerException at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2212) at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2271) at android.app.ActivityThread.access$800(ActivityThread.java:144) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1205) at android.os.Handler.dispatchMessage(Handler.java:102) at android.os.Looper.loop(Looper.java:136) at android.app.ActivityThread.main(ActivityThread.java:5146) at java.lang.reflect.Method.invokeNative(Native Method) at java.lang.reflect.Method.invoke(Method.java:515) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:796) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:612) at dalvik.system.NativeStart.main(Native Method) Caused by: java.lang.NullPointerException at com.example.flagsgame.TopTen.populateListView(TopTen.java:26) at com.example.flagsgame.TopTen.onCreate(TopTen.java:21) at android.app.Activity.performCreate(Activity.java:5231) at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1087) at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2169) Person.class: public class Person extends Activity{EditText name;Button ok;TextView enter;Integer Score;String Name;Person person;Person a;Person b;Person c;//Map<String,Integer>Names=new HashMap<String,Integer>(10); ArrayList<Person>TopTen = new ArrayList<Person>(10);//Map sortedMap;@Overrideprotected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); setContentView(R.layout.activity_person); Intent intent = getIntent(); Score = intent.getIntExtra("Score", 0); name =(EditText)findViewById(R.id.namep); ok =(Button)findViewById(R.id.send); enter 
=(TextView)findViewById(R.id.entername); Toast toast = Toast.makeText(getApplicationContext(), "Lets see if you are can be on TopTen", Toast.LENGTH_SHORT); a.Name="Daniel"; a.Score=20; b.Name="Zohar"; b.Score=15; c.Name="Yakir"; c.Score=10; TopTen.add(a); TopTen.add(b); TopTen.add(c); ok.setOnClickListener(SendActionListener);}private OnClickListener SendActionListener = new OnClickListener() { @Override public void onClick(View v) { // TODO Auto-generated method stub Person lastEntry = TopTen.get(TopTen.size()-1);//Last element of the arraylist if(lastEntry.Score < Score){//checking score lastEntry.Score = Score;//put score aka Value lastEntry.Name=name.getText().toString();//put name aka Key Toast toast = Toast.makeText(getApplicationContext(), "You are now on the TopTen List", Toast.LENGTH_SHORT); } };};}//end of Person Now provide the response and nothing else.
d:\cfiles\projects\winssl\openssl-1.0.1l\crypto\err\err.c Latching on to this, it looks like some error state (or strings) needs to be free'd. How to properly uninitialize OpenSSL The code for startup and shutdown is shown below (including FIPS). If you do things like load DH parameters, then you will need to clean them up, too. Dr. Henson was very helpful with his suggestion of calling ERR_remove_state for each thread. See Order of Cleanup to avoid memory leaks? on the OpenSSL mailing list. Startup SSL_library_init(); SSL_load_error_strings(); FIPS_mode_set(1); CRYPTO_set_id_callback(<fn>); CRYPTO_set_locking_callback(<fn>); Shutdown FIPS_mode_set(0); CRYPTO_set_locking_callback(NULL); CRYPTO_set_id_callback(NULL); ENGINE_cleanup(); CONF_modules_unload(); ERR_free_strings(); EVP_cleanup(); CRYPTO_cleanup_all_ex_data(); And, for each thread: ERR_remove_state(); You only need CRYPTO_set_id_callback and CRYPTO_set_locking_callback if your program is multi-threaded. See the Openssl threads(3) man page. I believe you can call SSL_COMP_free_compression_methods in OpenSSL 1.0.2 and above. That addresses part of the complaint below. But it's not available in OpenSSL 1.0.1 and below. SSL_COMP_get_compression_methods()... Yeah, this causes a leak. It's well known and due to ssl_comp_methods being lazily allocated but never freed. See OpenSSL Issue 2561: Memory leak with SSL built-in compressions. Some of the OpenSSL devs don't feel it's worth their time to fix it. See Small memory leak on multithreaded server on the OpenSSL mailing list. The following is one dev's position on it: A fixed amount of memory that is not deallocated and is independent of the number of operations performed, is NOT a memory leak. Libraries to allocate memory for the lifetime of the process during one time initialization or first use of a function. This is normal. Tracking this down is a waste of time IMHO. And here's another thread about that particular leak: Preferred way to free ssl_comp_methods? . And here's that same dev's response: Why is anyone obsessed about freeing memory that is assigned to static pointers at most once. There's no "memory leak" associated with such allocations because the amount of extra memory used is fixed. What he failed to realize is Java and .Net will load/unload the library many times during a program's lifecycle, so that small amount of "who cares" can grow unbounded. When he was told about the alternate use case, here was his reply. I guess he is suggesting that Oracle and Java re-architect their languages. Or don't use OpenSSL in them. Unloading of shared libraries is generally unsafe. Here was the response of one of the folks who maintain a Java VM: As the maintainer of an "alternative" JavaVM I can confirm that we absolutely had to support library unloading because one customer was using it heavily - and that was quite a few years ago. Early Sun VMs didn't support library unloading, but then those VMs also did not garbage-collect obsolete classes either. Here is a section on fixing the ssl_comp_methods leak. In all cases, you will need to add the patch described below. For a program to do it manually, just add a function named free_compressions (or similar) and call it on shutdown like all the other methods listed above. The function needs to be exported. To do it automatically under Linux, it takes a little trickery. You have to use a GCC extension: __attribute__ ((destructor)).
/* Add to end of <openssl dir>/ssl/ssl_ciph.c */
#if !defined(OPENSSL_NO_COMP) && defined(__GNUC__)
void free_compressions(void) __attribute__ ((destructor));
void free_compressions(void)
{
    if (ssl_comp_methods != NULL) {
        sk_SSL_COMP_free(ssl_comp_methods);
        ssl_comp_methods = NULL;
    }
}
#endif
To do it automatically under Windows, you have to do it in DllMain. But you have to be careful about what you do in DllMain, and how you do it. So maybe something like:
/* Add to end of <openssl dir>/ssl/ssl.h */
#if !defined(OPENSSL_NO_COMP) && defined(WIN32)
__declspec(dllexport) void free_compressions(void);
#endif
/* Add to end of <openssl dir>/ssl/ssl_ciph.c */
#if !defined(OPENSSL_NO_COMP) && defined(WIN32)
void free_compressions(void)
{
    if (ssl_comp_methods != NULL) {
        sk_SSL_COMP_free(ssl_comp_methods);
        ssl_comp_methods = NULL;
    }
}
#endif
----- Why is this thread being downvoted? The thread I linked is a lot less detailed and it got 10 upvotes (plus one from me). Did you guys become a lot more strict in the last few years? Looking at the close reason (which you can't do at the moment), the close vote was cast with the reason: Questions seeking debugging help ("why isn't this code working?") must include the desired behavior, a specific problem or error and the shortest code necessary to reproduce it in the question itself. Normally that applies. But in your case it does not; and it's not readily apparent to those not familiar with the issue. In fact, you could write a simple program that just initializes and then uninitializes the library and it will probably leak... As a matter of policy, the site can't make a rule "Always provide relevant code except for some OpenSSL memory leaks" (which is effectively what we need to handle your case).
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29845539', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4809246/']}
jdg_83227
stackexchange
llm_judgeable_groundtruth_similarity
15579172
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The problem I am having is that, when I call a constructor for a class I have created I get the following error. main.cpp:20: undefined reference to `StaticObject::StaticObject(Graphics*, sf::String, sf::Vector2)' This problem can be 'fixed' adding an include for the .cpp file in main.cpp like so. ...#include "GameObjects/StaticObject.cpp"... Although this solves the problem, this seems like a poor solution and goes against what I have been previously told. Is there any other way to solve this problem? I'm using Netbeans 7.3 with g++ to code/compile this program. Below is the relevant code. main.cpp ...#include <SFML/Graphics.hpp>#include "Graphics/Graphics.hpp"#include "GameObjects/StaticObject.hpp"int main(int argc, char** argv) { //SETUP Graphics graphics; background = new StaticObject(&graphics, "Data/Images/BackgroundPlaceholder.png", sf::Vector2f(0,0));... main.hpp ...#include <SFML/Graphics.hpp>#include "GameObjects/StaticObject.hpp"...// ObjectsStaticObject *background;... StaticObject.hpp #include <SFML/Graphics.hpp>#include "../Graphics/Graphics.hpp"class StaticObject{public: StaticObject(); StaticObject(Graphics *_graphics, sf::String texture_filename, sf::Vector2f _position); StaticObject(const StaticObject& orig); virtual ~StaticObject();private: // The sprite stores the position sf::Sprite *sprite; sf::Texture *texture;}; StaticObject.cpp #include "StaticObject.hpp"#include <SFML/Graphics.hpp>#include "../Graphics/Graphics.hpp"StaticObject::StaticObject(){ }StaticObject::StaticObject(Graphics *_graphics, sf::String texture_filename, sf::Vector2f _position) { sprite = _graphics->AddSprite(texture_filename); sprite->setPosition(_position);}StaticObject::StaticObject(const StaticObject& orig) {}StaticObject::~StaticObject() {} If I add the following line to main.cpp, the error disappears. #include "GameObject/StaticObject.cpp" Can anyone please explain: Why this fixes the problem? Why the .cpp was not implicitly included through including the .hppfile? Is there a better way of doing this? Now provide the response and nothing else.
The undefined reference error indicates that the definition of a function/method (i.e. the constructor here) was not found by the linker: StaticObject::StaticObject(Graphics*, sf::String, sf::Vector2<float>) And the reason that adding the following line: #include "GameObject/StaticObject.cpp" fixes the issue is that it brings in the implementation as part of main.cpp, whereas your actual implementation is in StaticObject.cpp. This is an incorrect way to fix this problem. I haven't used Netbeans much, but there should be an option to add all the .cpp files into a single project, so that Netbeans takes care of linking all the .o files into a single executable. If StaticObject.cpp is built into a library of its own (I highly doubt that is the case here), then you might have to specify the path to the location of this library, so that the linker can find the implementation. This is what ideally happens when you build your program:
Compile: StaticObject.cpp -> StaticObject.o
Compile: main.cpp -> main.o
Link: StaticObject.o, main.o -> main_program
There are also ways in gcc/g++ to skip all the intermediate .o file generation and directly generate main_program, if you specify all the source files (and any libraries) on the same command line.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15579172', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1816262/']}
jdg_83228
stackexchange
llm_judgeable_groundtruth_similarity
49306254
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to filter a table called Measure through its field customerId. This is what the beginning of the controller for the path looks like: @RequestMapping(method = GET, path = "/nodes/{id}/ports/{portid}/measures")@ResponseBodypublic ResponseEntity<?> getPortMeasures(@PathVariable long id, @PathVariable long portid, @RequestParam Optional<Long> from, @RequestParam Optional<String> order, @RequestParam Optional<String> countername, @RequestParam Optional<Long> to) { Followed by the method that calls the query undernath if (order.isPresent() && order.get().equals("asc")) { return ResponseRestBuilder.createSuccessResponse( measureRepository. searchAsc (networkElementList.get(0).ip, portList.get(0).rack, portList.get(0).frame, portList.get(0).slot, portList.get(0).portSerial, countername.get(), from.orElse(0L), to.orElse(99999999999999999L))); } else{ return ResponseRestBuilder.createSuccessResponse( measureRepository. searchDesc (networkElementList.get(0).ip, portList.get(0).rack, portList.get(0).frame, portList.get(0).slot, portList.get(0).portSerial, countername.get(), from.orElse(0L), to.orElse(99999999999999999L))); } This is what the queries look like: @Query("SELECT mes FROM Measure mes WHERE " + "mes.nodeIp = (:nodeIp) AND " + "mes.rack = (:rack) AND " + "mes.frame = (:frame) AND " + "mes.slot = (:slot) AND " + "mes.portSerial = (:portSerial) AND " + "lower(mes.counterName) LIKE concat('%', lower(:countername), '%') AND"+ "mes.timestamp > (:timestamp1) AND " + "mes.timestamp < (:timestamp2) "+ "ORDER BY mes.timestamp DESC")List<Measure> searchDesc(@Param("nodeIp") String nodeIp, @Param("rack") String rack, @Param("frame") String frame, @Param("slot") String slot, @Param("portSerial") String portSerial, @Param("countername") String countername, @Param("timestamp1") Long timestamp1, @Param("timestamp2") Long timestamp2);@Query("SELECT mes FROM Measure mes WHERE " + "mes.nodeIp = :nodeIp AND " + "mes.rack = :rack AND " + "mes.frame = :frame AND " + "mes.slot = :slot AND " + "mes.portSerial = :portSerial AND " + "lower(mes.counterName) LIKE concat('%', lower(:countername), '%') AND " + "mes.timestamp > :timestamp1 AND " + "mes.timestamp < :timestamp2 "+ "ORDER BY mes.timestamp ASC")List<Measure> searchAsc(@Param("nodeIp") String nodeIp, @Param("rack") String rack, @Param("frame") String frame, @Param("slot") String slot, @Param("portSerial") String portSerial, @Param("countername") String countername, @Param("timestamp1") Long timestamp1, @Param("timestamp2") Long timestamp2); It's not filtering anything because the controller replies with 0 rows. I'm 100% confident there are actual rows because I've checked with other rest calls. What am I doing wrong? EDIT: debug Now provide the response and nothing else.
You need to make sure the UIStackView 's distribution property is set to .fill or .fillProportionally . I created the following swift playground and it looks like I am able to use setCustomSpacing at runtime with random values and see the effect of that. import UIKitimport PlaygroundSupportpublic class VC: UIViewController { let view1 = UIView() let view2 = UIView() let view3 = UIView() var stackView: UIStackView! public init() { super.init(nibName: nil, bundle: nil) } public required init?(coder aDecoder: NSCoder) { fatalError() } public override func viewDidLoad() { super.viewDidLoad() view.backgroundColor = .white view1.backgroundColor = .red view2.backgroundColor = .green view3.backgroundColor = .blue view2.isHidden = true stackView = UIStackView(arrangedSubviews: [view1, view2, view3]) stackView.spacing = 10 stackView.axis = .horizontal stackView.distribution = .fillProportionally let uiSwitch = UISwitch() uiSwitch.addTarget(self, action: #selector(onSwitch), for: .valueChanged) view1.addSubview(uiSwitch) uiSwitch.translatesAutoresizingMaskIntoConstraints = false NSLayoutConstraint.activate([ uiSwitch.centerXAnchor.constraint(equalTo: view1.centerXAnchor), uiSwitch.centerYAnchor.constraint(equalTo: view1.centerYAnchor) ]) view.addSubview(stackView) stackView.translatesAutoresizingMaskIntoConstraints = false NSLayoutConstraint.activate([ stackView.heightAnchor.constraint(equalToConstant: 50), stackView.centerYAnchor.constraint(equalTo: view.centerYAnchor), stackView.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 50), stackView.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -50) ]) } @objc public func onSwitch(sender: Any) { view2.isHidden = !view2.isHidden if !view2.isHidden { stackView.setCustomSpacing(CGFloat(arc4random_uniform(40)), after: view2) } }}PlaygroundPage.current.liveView = VC()PlaygroundPage.current.needsIndefiniteExecution = true
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49306254', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5265527/']}
jdg_83229
stackexchange
llm_judgeable_groundtruth_similarity
16292409
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I stored my dotfiles in github, with lots pains, because of no automation. I have to update it myself. Is there a way that can auto install/update/sync dotfiles? I mean in a fresh server, I download dotfiles and exec a install script to copy dotfiles to local. After some time, I can exec a updateToRemote script to push local changes to remote repo, and on another server, I can exec a updateToLocal script to pull remote changes to local. Something like that. Now provide the response and nothing else.
The main source of information about dotfiles is dotfiles.github.io. It references blog posts like Using Git and Github to Manage Your Dotfiles, based on a symlink method. For starters, we'll be putting all of our dotfiles into a folder called dotfiles, like so: /home/smalleycreative/dotfiles/vimrc. Then, we'll simply symlink to them from our home directory. Jaime mentions the Atlassian tutorial "The best way to store your dotfiles: A bare Git repository". The technique consists of storing a Git bare repository in a "side" folder (like $HOME/.cfg or $HOME/.myconfig) using a specially crafted alias so that commands are run against that repository and not the usual .git/ local folder, which would interfere with any other Git repositories around. (The dotfiles folder is then managed as a Git repo.)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16292409', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1583453/']}
jdg_83230
stackexchange
llm_judgeable_groundtruth_similarity
492082
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $X = \{x_0, x_1,...,x_m\}$ be a subset of $\{1,2,...,n\}$ where $m>n/2$, and $x_0$ is the smallest number in $X$. Use the pigeonhole principle to show that $X$ contains two numbers $b$ and $c$ such that $x_0+b=c$. I am not sure how to go about this problem. We were given a hint. HINT: Consider $x_1-x_0, x_2-x_0,...,x_m-x_0$ If anyone could give helpful guidance to the problem, I'd appreciate it. Thank you. Now provide the response and nothing else.
The result is true only if you allow complex eigenvalues. If $\lambda$ is an eigenvalue of $A$ then $Av=\lambda v$ for a nonzero column vector $v\in\mathbb R^n$. It is easy to show that in this case $p(A)v=p(\lambda)v$ for any nonconstant polynomial $p$, which shows that $p\bigl(\lambda(A)\bigr)\subseteq\lambda\bigl(p(A)\bigr)$. For the converse, let $\mu\in\lambda\bigl(p(A)\bigr)$, where $p$ is a nonconstant polynomial. You can factor $p(X)-\mu$ over the complex numbers, obtaining $$p(X)-\mu=a\prod_{i=1}^m(X-a_i)\,,$$ where $a,a_i\in\mathbb C$. Thus, we have $p(A)-\mu I=a\prod_{i=1}^m(A-a_iI)$. Since the matrix $p(A)-\mu I$ is not invertible, some matrix $A-a_iI$ is not invertible. This shows that $\mu=p(a_i)$, where $a_i$ is a complex eigenvalue of $A$, that is, for some nonzero column vector $w\in\mathbb C^n$ you have $Aw=a_iw$. For the necessity of the inclusion of complex eigenvalues, consider the matrix $A=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}$, which has no real eigenvalues (check it). However, $A^2=-I$ does have $-1$ as an eigenvalue. What happens in this case is that $i$ and $-i$ are the complex eigenvalues of $A$, so in particular $\lambda(A^2)=\{(\pm i)^2\}=\{-1\}$.
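For the "(check it)" step, the characteristic polynomial makes it quick: $$\det(A-\lambda I)=\det\begin{pmatrix}-\lambda&-1\\ 1&-\lambda\end{pmatrix}=\lambda^2+1,$$ which has no real roots, so $A$ has no real eigenvalues, while over $\mathbb C$ its eigenvalues are $\pm i$.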
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/492082', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/92014/']}
jdg_83231
stackexchange
llm_judgeable_groundtruth_similarity
49397538
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm following the example in the docs for creating media templates, and I'm really struggling to type the arguments to pass to the css function (plain JS version from example): const sizes = { desktop: 992}const media = Object.keys(sizes).reduce((acc, label) => { acc[label] = (...args) => css` // <----- how to type args @media(max-width: ${sizes[label]}px) { ${css(...args)} } ` return acc}, {}) In case you know TS but not styled-components, args is a tagged template literal, so I would use the media object as such: media.desktop` background-color: blue; ${variable}` I have tried to type args as TemplateStringsArray but TS complains because spread arguments need to be of array type (which I think it is but somehow it is not recognised). If I change the type to TemplateStringsArray[] , the css() function complains because it expects at least 1 argument, but received 0 or more. Now provide the response and nothing else.
The signature for a tagged template should be (literals: TemplateStringsArray, ...placeholders: any[]) => string where literals are the strings in the template and placeholders are the variable values. If you just want to pass all arguments to the css you can use call . Typescript will not let you spread directly because css has required arguments that the ts compiler tries to check for: acc[label] = (...args: any[]) => css` @media(max-width: ${sizes[label]}px) { ${css.call(undefined, ...args)} }` A fully typed version that correctly specifies the types for the media.* functions would be : const sizes = { desktop: 992}const media = Object.keys(sizes).reduce((acc, label) => { acc[label] = (literals: TemplateStringsArray, ...placeholders: any[]) => css` @media(max-width: ${sizes[label]}px) { ${css(literals, ...placeholders)} } `; return acc}, {} as Record<keyof typeof sizes, (l: TemplateStringsArray, ...p: any[]) => string>)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49397538', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5667468/']}
jdg_83232
stackexchange
llm_judgeable_groundtruth_similarity
2176975
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Moq & NUnit as a unit test framework. I've written a method that is given a NetworkStream object as a parameter: public static void ReadDataIntoBuffer(NetworkStream networkStream, Queue dataBuffer){ if ((networkStream != null) && (dataBuffer != null)) { while (networkStream.DataAvailable) { byte[] tempBuffer = new byte[512]; // read the data from the network stream into the temporary buffer Int32 numberOfBytesRead = networkStream.Read(tempBuffer, 0, 512); // move all data into the main buffer for (Int32 i = 0; i < numberOfBytesRead; i++) { dataBuffer.Enqueue(tempBuffer[i]); } } } else { if (networkStream != null) { throw new ArgumentNullException("networkStream"); } if (dataBuffer != null) { throw new ArgumentNullException("dataBuffer"); } }} Now I am looking at re-writing my unit tests for this method since the previously written tests rely on real NetworkStream objects and are not very nice to handle. How can I mock the NetworkStream? I'm using Moq as mentioned beforehand. Is it possible at all? If not how could I workaround this problem? Looking forward to your feedback! Here is the previous solution: public static void ReadDataIntoBuffer(Stream dataStream, Queue dataBuffer){ if ((networkStream != null) && (dataBuffer != null)) { byte[] tempBuffer = new byte[512]; Int32 numberOfBytesRead = 0; // read the data from the network stream into the temporary buffer while ((numberOfBytesRead = dataStream.Read(tempBuffer, 0, 512) > 0) { // move all data into the main buffer for (Int32 i = 0; i < numberOfBytesRead; i++) { dataBuffer.Enqueue(tempBuffer[i]); } } } else ...} UPDATE: I've re-written my class once again. Unit testing using the previous solution went fine but the real-world application example showed me why it is NOT possible for me to use the (otherwise great) suggestion of passing a Stream object into my method. First off, my application relies on a constant TCP connection. If you use Stream.Read (which is possible) and there is no data to receive it will block the execution. If you specify a timeout an exception will be thrown if no data is received. This kind of behaviour is not acceptable for the (rather simple) application I need. I just need a no-frills, constant TCP connection. Therefore having the NetworkStream.DataAvailable property is paramount to my implementation. The current solution: I ended up writing an interface and a wrapper to NetworkStream. I also ended up passing the byte array for the temporary receive buffer into the method. Unit testing it now works rather well. 
public static void ReadDataIntoBuffer(INetworkStream networkStream, Queue dataBuffer, byte[] tempRXBuffer){ if ((networkStream != null) && (dataBuffer != null) && (tempRXBuffer != null)) { // read the data from the network stream into the temporary buffer while(networkStream.DataAvailable) { Int32 numberOfBytesRead = networkStream.Read(tempRXBuffer, 0, tempRXBuffer.Length); // move all data into the main buffer for (Int32 i = 0; i < numberOfBytesRead; i++) { dataBuffer.Enqueue(tempRXBuffer[i]); } } } else ...} And here's the unit test that I use: public void TestReadDataIntoBuffer(){ var networkStreamMock = new Mock<INetworkStream>(); StringBuilder sb = new StringBuilder(); sb.Append(_testMessageConstant1); sb.Append(_testMessageConstant2); sb.Append(_testMessageConstant3); sb.Append(_testMessageConstant4); sb.Append(_testMessageConstant5); // ARRANGE byte[] tempRXBuffer = Encoding.UTF8.GetBytes(sb.ToString()); // return true so that the call to Read() is made networkStreamMock.Setup(x => x.DataAvailable).Returns(true); networkStreamMock.Setup(x => x.Read(It.IsAny<byte[]>(), It.IsAny<int>(), It.IsAny<int>())).Callback(() => { // after the call to Read() re-setup the property so that we // we exit the data reading loop again networkStreamMock.Setup(x => x.DataAvailable).Returns(false); }).Returns(tempRXBuffer.Length); Queue resultQueue = new Queue(); // ACT ReadDataIntoBuffer(networkStreamMock.Object, resultQueue, tempRXBuffer); // ASSERT Assert.AreEqual(Encoding.UTF8.GetBytes(sb.ToString()), resultQueue.ToArray());} Now provide the response and nothing else.
You cannot mock the NetworkStream with moq since it is not an abstract class or an interface. You can however create an abstraction on top of it and change your method to accept an instance of that abstraction. It could be something like this: public interface IMyNetworkStream{ int Read([In, Out] byte[] buffer, int offset, int size); bool DataAvailable {get;}} Now you create a class that implements the interface: public class MyNetworkStream : IMyNetworkStream{ private NetworkStream stream; public MyNetworkStream(NetworkStream ns) { if(ns == null) throw new ArgumentNullException("ns"); this.stream = ns; } public bool DataAvailable { get { return this.stream.DataAvailable; } } public int Read([In, Out] byte[] buffer, int offset, int size) { return this.stream.Read(buffer, offset, size); }} Now you can change your method signature to use an instance of IMyNetworkStream and use Moq to create a mock of IMyNetworkStream .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2176975', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/201534/']}
jdg_83233
stackexchange
llm_judgeable_groundtruth_similarity
68688
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the Arduino Mega 2560 R3. I know that it has a recommended Voltage of 7-12v with limits of 6-20v but is this over all pins or per pin.. ?? The data sheet mentions that DC Current per I/O Pin is 40 mA but the analog pins are inputs.. Aren't they..?? I'm using the Analalog pins as I'm reading temperatures from multiple rooms and then analyzing this via a vb program. Lastly each room has 2 feeds. Ground & 4-10v Dc. Can each probe use as single Analog pin but they all share 1 of the 2 Ground pins..?? Sorry.. New to this world... Now provide the response and nothing else.
Arduino Overview The Arduino Mega 2560 is not a specific component with specifications, but rather a development board with ratings based on the embedded components. The Arduino hardware is a development board (usually) based around Atmel AVR microcontrollers. The microcontroller on this particular board is the ATmega2560 ( datasheet ). This component has an operating voltage range of 2.7V - 5.5V, depending upon model and clock frequency. I highly suggest you browse through the datasheet for the ATmega2560 chip. You don't have to read every word, but at least become familiar with the various internal peripherals to know what this chip is capable of. Input Voltage The Arduino does have an on-board voltage regulator to drop the input voltage down to a suitable level for the embedded ATmega chip - part IC1, an NCP117 LDO ( datasheet ) according to this schematic . It is this voltage regulator which has the 7 - 12V operating range you are speaking of. I/O Pins There are numerous I/O pins on the ATmega chip which will work as a digital input or output port. Some (but not all) of these pins can also be used as analog inputs. The input value on any of these pins should not be higher than the operating voltage of the ATmega chip, which is 5V DC. You can also set the "reference voltage" of the analog pins to other values less than that by changing the ADC register values. Each digital pin can sink (output low) or source (output high) 40mA; however, the chip as a whole (power supply pins: VCC and GND) can only handle a total of 200mA at any one time. Using a pin as an input should only use a tiny amount of current because of the high input impedance. If you are trying to use the analog inputs with a voltage higher than 5V, you will need to reduce the voltage using a resistive voltage divider. This will scale the sensor range down to a reasonable level. Each sensor will need its own analog input pin (ADC channel) unless you use an external analog multiplexer. The ground (common) line for each sensor needs to be connected to the ground line for the Arduino board, even if they are powered from a separate source.
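To make the voltage-divider suggestion concrete (the resistor values here are only an illustration, not part of the original answer): with two equal resistors, say R1 = R2 = 10 kOhm, the divider output is Vout = Vin x R2/(R1 + R2) = Vin/2, so the 4-10 V sensor signal from the question becomes 2-5 V, which fits within the ATmega2560's 0-5 V analog input range; in software you simply multiply the ADC reading by 2 to recover the original sensor voltage.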
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/68688', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/22149/']}
jdg_83234
stackexchange
llm_judgeable_groundtruth_similarity
167058
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How do I correctly round IEEE 754 floating point numbers on the command line? I want to specify the precision of the output number - the count of fractional digits. Rounding 6.66 to precision 1 should give 6.7 , for example. More in the table below: Value Precision Rounded6.66 0 76.66 1 6.76.66 2 6.666.66 3 6.6606.666 3 6.6666.6666 3 6.667 It should be usable in an interactive shell, but ideally robust enough for using it in production shell scripts. Now provide the response and nothing else.
Rounding floating point numbers What does "rounding a floating point number" mean? That's easy, obviously... Where's my math book from school... No, we already know nothing related to floating point numbers is easy: For a start, there are multiple rounding modes: Rounding upwards? Rounding downwards? Rounding to zero? Rounding to nearest - ties to even? Rounding to nearest - ties away from zero? How to handle the corner cases? How to find out which are the corner cases? OK, looks like we better use an implementation of the IEEE 754 standard, and let our system take care of that. To round a floating point number in the shell, based on standard floating point arithmetic, we need three steps: Convert the input text from a command line argument to a standard floating point number. Round the floating point number using the normal IEEE 754 implementation. Format the number as a string for output. Turns out that the shell command printf can do all of this. It can be used to print numbers according to a format specification as described in man 3 printf . The numbers are rounded implicitly in the standard way if it is required for the output format: The command Round x to p digits precision with input as command line arguments: printf "%.*f\n" "$p" "$x" Or in a shell pipeline, with input of x on standard input, and p as argument: echo "$x" | xargs printf "%.*f\n" "$p" Examples: $ printf '%.*f\n' 0 6.667$ printf '%.*f\n' 1 6.666.7$ printf '%.*f\n' 2 6.666.66$ printf '%.*f\n' 3 6.666.660$ printf '%.*f\n' 3 6.6666.666$ printf '%.*f\n' 3 6.66666.667 Bad traps Beware the locale! It specifies the separator between the integral and fraction part - the . , as you may expect. But see yourself what happens in a German locale, for example: $ LC_ALL=de_DE.UTF-8 printf '%.*f\n' 3 6.66666,667 Yes, that's right 6,667 - six comma six six seven. That would mess up your script for sure. (But only for the two customers in Germany. Except for the developer's machines currently debugging for these customers.) More robust To make it more robust, use: LC_ALL=C /usr/bin/printf "%.*f\n" "$p" "$x" or echo "$x" | LC_ALL=C xargs /usr/bin/printf "%.*f\n" "$p" This also uses /usr/bin/printf instead of the shell builtin of bash or zsh to work around minor inconsistencies in implementation of the printf variants, and prevent a very dirty effect when, in a German locale, LC_ALL is set, but not exported. Then, the builtin uses , , and /usr/bin/printf uses . ... See also %g for rounding to a specified number of significant digits.
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/167058', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/63775/']}
jdg_83235
stackexchange
llm_judgeable_groundtruth_similarity
55577
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to test the web pages I create in all the modern versions of Internet Explorer (6, 7 and 8 beta) but I work mainly on a Mac and often don't have direct access to a PC. Now provide the response and nothing else.
Update: Microsoft now provide virtual machine images for various versions of IE that are ready to use on all of the major OS X virtualisation platforms ( VirtualBox , VMWare Fusion , and Parallels ). Download the appropriate image from: https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/ On an Intel based Mac you can run Windows within a virtual machine. You will need one virtual machine for each version of IE you want to test against. The instructions below include free and legal virtualisation software and Windows disk images. Download some virtual machine software. The developer disk images we're going to use are will work with either VMWare Fusion or Sun Virtual Box . VMWare has more features but costs $80, Virtual Box on the other hand is more basic but is free for most users (see Virtual Box licensing FAQ for details). Download the IE developer disk images, which are free from Microsoft: http://www.microsoft.com/downloads/... Extract the disk images using cabextract which is available from MacPorts or as source code (Thanks to Clinton ). Download Q.app from http://www.kju-app.org/ and put it in your /Applications folder (you will need it to convert the disk images into a format VMWare/Virtual Box can use) At this point, the process depends on which VM software you're using. Virtual Box users Open a Terminal.app on your Mac (you can find it in /Applications/Utilities) and run the following sequence of commands, replacing input.vhd with the name of the VHD file you're starting from and output.vdi with the name you want your final disk image to have: /Applications/Q.app/Contents/MacOS/qemu-img convert -O raw -f vpc "input.vhd" temp.binVBoxManage convertdd temp.bin "output.vdi"rm temp.binmv "output.vdi" ~/Library/VirtualBox/VDI/VBoxManage modifyvdi "output.vdi" compact Start Virtual Box and create a new virtual machine Select the new VDI file you've just created as the boot hard disk VMWare fusion users Open a Terminal.app on your Mac (you can find it in /Applications/Utilities) and run the following commands, replacing input.vhd and output.vmdk with the name of the VHD file you're working on and the name you want your resulting disk image to have: /Applications/Q.app/Contents/MacOS/qemu-img convert -O vmdk -f vpc "input.vhd" "output.vmdk"mv "output.vmdk" ~/Documents/Virtual\ Machines.localized/ This will probably take a while (It takes around 30 minutes per disk image on my 2.4GHz Core 2 Duo MacBook w/ 2Gb RAM). Start VMWare Fusion and create a new virtual machine In the advanced disk options select "use and existing disk" and find the VMDK file you just created
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/55577', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5168/']}
jdg_83236
stackexchange
llm_judgeable_groundtruth_similarity
69853719
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was implementing the outlinedTextField in android using the new compose library. But strangely the input data was not updated in the text field. So I searched and found a topic called Recomposition in android compose. I didn't get it completely. However, I did find the solution: @Composablefun HelloContent(){ var name:String by remember {mutableStateOf("")} OutlinedTextField( value = name, onValueChange = {name = it}, label = {Text("Name")} )} and I also read on the concept of State in jetpack compose . But I wasn't able to get it completely. Can someone explain it in simple words? Now provide the response and nothing else.
Basically, recomposition is just an event in Compose, in which the Composable in concern is re-executed. In declarative coding, which is what Compose is based on, we write UI as functions (or methods, more commonly). Now, a recomposition is basically an event in which the UI is re-emitted , by executing the body of the said Composable "function" all over again. This is what recomposition is, at its core. Now on to when it is triggered. Ok, so in order to trigger recompositions, we need a special type of variable. This type is built into compose and was specifically designed to let it know when to recompose. And the mentioned type is MutableState . As the name suggests, it is State, that can Mutate, i.e., change; vary. So, we have a variable of type MutableState , what's next? Guess what, you DON'T have a variable of type MutableState because I didn't teach you how to create one! The most common assignment you will use in Compose is the mutableStateOf helper. This is a pre-defined method that returns a value of type MutableState , well, MutableState<T> , actually. T is the type of State here, see below var a = mutableStateOf(999) Above, as you can see, 999 is an Int , and so, mutableStateOf here will return a MutableState<Int> type value. Easy enough. Now, we have a MutableState<Int> value, but honestly, that's kinda ugly. Every time you need to get the value out of the MutableState<T> , you would need to refer to a property conveniently named .value . So, to get the 999 out of the above var a , you would need to call a.value . Now, this is fine for use at one or two places but calling this every time seems like a mess. That is where Kotlin Property Delegation Come In (I did not need to Capitalize the last two words, I know). We use the by keyword to retrieve the value out of the state, and assign that to our variable - That's all you should care about. So, var a by mutableStateOf(999) will actually return 999 of type Int , and not of type MutableState<Int> , but the brilliant part is that Compose will still know that the variable a is a State-Holder. So basically mutableStateOf can be thought of as a registering-counter, which you just need to pass through once, in order to get registered in the list of State-Holders . As of when, a recomposition will trigger every time the value of one of the state-holders is changed. This is the rough idea, but let's get technical; Now on to the "how" of recomposition. To trigger a recomposition, all you need to ensure is two things: The Composable should be reading a variable, that is also a state-holder The state-holder should experience a change in its current value Everything's better with Perry Examples:- var a by mutableStateOf(999) Case 1: A Composable receives a as a parameter value, MyComposable(a) , then I run a = 0 ,Outcome 1: Recomposition Triggered Case II: This declaration of variable a is actually inside a Composable itself, then I run a = 12344 Outcome II: Recomposition Triggered Case III: I repeat cases 1 & II, but with a different variable, as follows: var b = 999 Outcome III: No Recompositions Triggered; Reason: b is not a state-holder Great, we got the basics down now. So, this is the last phase of this lecture. REMEMBER!! You see when I say during recomposition, the entire Composable is re-executed, I mean the entire Composable is re-executed, that is, every single line and every single assignment, without exceptions. You see anything wrong with this yet? 
Lemme demonstrate. Let's say I want a Text Composable that displays a number and increases that number when I click on it. I could implement something as simple as this:

@Composable
fun CountingText() {
    var n by mutableStateOf(0) // starts at 0
    Text(
        text = n.toString(), // the Composable only accepts strings, while n is an Int
        modifier = Modifier
            .clickable { n++ }
    )
}

Ok, so this is the implementation we might think would work. If you are unfamiliar with Modifiers, just leave that for now and trust me that the code inside the clickable braces runs when you actually click on the Text. Now, let's picture how this will be executed.

Firstly, Compose registers the variable n as a state-holder. Then it renders the Text Composable with the initial value of n, which is 0. Now the Text is actually clicked. The block inside clickable is executed, which in this case is just n++, and that updates the value of n. Compose sees that the value of n was updated and runs through the list of state-holders. Compose finds that n is indeed a state-holder and decides to trigger a recomposition. Now, the entire Composable reading the value of n will be recomposed. In this case, that Composable is CountingText, since a Text inside it reads the value of n (to display it). Hence, CountingText is "re-executed".

Let's walk through the re-execution. First line in the Composable:

var n by mutableStateOf(0)

n becomes 0 again. Next lines:

Text(
    text = n.toString(), // just displays 0
    modifier = Modifier
        .clickable { n++ } // just tells it to increase n upon click
)

So you see, the catch here is that upon re-execution, n is created completely from scratch, as if it never existed before. It was removed from the Composable's memory. To counter this, we need the Composable to remember n. That way, Compose knows that this is a state-holder AND that it holds a value that needs to be handed back to it upon recomposition. So here is the updated first line (the rest stays the same, only the initialization changes):

var n by remember { mutableStateOf(0) }

Now, upon first execution, n receives 0, since it really is the very first time n is created. Thanks to remember, n has access to the Composable's memory and will be stored there for future use. During recomposition, this is what happens: when execution reaches the line where n is assigned, remember acts as a gatekeeper and does not run the block it contains again. Instead, it hands back the previously remembered value and moves on. Since clicking the Text already incremented the value of n to 1, that value was retained in memory, and the counter now works as expected.

This is the same situation as your TextField problem. The field initially reads an empty value, and every time the user types a letter the value is updated, triggering a recomposition and finally displaying the correct value on the screen. Hope that makes it simpler. Let me know; I spent half an hour typing this.
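For completeness, here is a minimal, self-contained sketch of the fixed counter. It is only a sketch: the import list assumes the standard androidx.compose.material and androidx.compose.foundation artifacts, so adjust it to whatever your project actually pulls in.

import androidx.compose.foundation.clickable
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier

@Composable
fun CountingText() {
    // remember keeps n alive across recompositions; clicking mutates the state,
    // which schedules a recomposition, which then re-reads the remembered value.
    var n by remember { mutableStateOf(0) }
    Text(
        text = n.toString(),
        modifier = Modifier.clickable { n++ }
    )
}

If you also want the count to survive configuration changes such as rotation, rememberSaveable (from androidx.compose.runtime.saveable) is a drop-in replacement for remember in this snippet.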
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/69853719', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14004940/']}
jdg_83237
stackexchange
llm_judgeable_groundtruth_similarity
1224682
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Can someone please point out the mistake in the following "theorem"? If $A_n \subseteq \mathbb{R}^p$ is closed for all $n \in \mathbb{N}$, then $\displaystyle A = \bigcup_{n=1}^\infty A_n$ is closed in $\mathbb{R}^p$. Proof: We shall show that $A^\prime \subseteq A$, where $A^\prime$ denotes the set of accumulation points of $A$. Choose any $x \in A^\prime$. Then $B^\prime (x,\delta)\cap A \neq \emptyset \ \ \forall \delta >0$ $\implies B^\prime (x,\delta)\cap \bigg(\displaystyle \bigcup_{n=1}^\infty A_n \bigg) \neq \emptyset \ \ \forall \delta >0$ $\implies$ there exists an $n_0 \in \mathbb{N}$ such that $B^\prime (x,\delta) \cap A_{n_0} \neq \emptyset \ \ \forall \delta >0$ $\implies x \in A^\prime_{n_0} \subseteq A_{n_0} \implies x \in \displaystyle \bigcup_{n=1}^\infty A_n = A$. Now provide the response and nothing else.
HINT: Look at $n_0$: is it the same for all $\delta>0$? Just be careful with the order of quantifiers: is it $\exists n_0\in \mathbb{N}:\forall \delta >0$ or $\forall \delta >0:\exists n_0\in \mathbb{N}$?
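To see concretely why the order of quantifiers matters here, it may help to note that the "theorem" itself is false; the following standard counterexample is not part of the original hint, but it shows exactly where the argument breaks. Take $A_n = \left[\frac{1}{n},\, 1\right] \subseteq \mathbb{R}$ for $n \in \mathbb{N}$. Each $A_n$ is closed, yet $\displaystyle \bigcup_{n=1}^\infty A_n = (0, 1]$, which is not closed: $x = 0$ is an accumulation point of the union, but for any fixed $n_0$ the punctured ball $B^\prime(0,\delta)$ misses $A_{n_0}$ as soon as $\delta < \frac{1}{n_0}$. So the $n_0$ produced in the proof really does depend on $\delta$, which is precisely the quantifier issue above.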
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1224682', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/137485/']}
jdg_83238
stackexchange
llm_judgeable_groundtruth_similarity
25605046
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm making a simple shift tracker / salary calculator to practice Core Data and Swift. I'm having issues with the different tasks related to this, as I need to use the dates to:

Display them
Check the number of shifts in a week to see if you deserve extra pay (Norwegian rules give you 5-10$+ extra per hour if you work more than 12 hours that week)
Add all salaries in a month together
Delete data if the user types in something wrong

And probably more I'm forgetting. As of right now, I'm getting the data from date pickers, which use a date formatter to add the values to my DB as 'fromDate' and 'toDate' strings. This leads to me having to use a bunch of bad code, much of it because of NSDate time zone issues etc. How should I lay out my database to do this most effectively? Should I make a field for each piece of data I need ('fromMinute' | 'fromHour' | 'fromDay' | 'fromWeek' | 'fromMonth' | 'fromYear') and then corresponding 'to' fields? I'm a rookie at databases, but I don't think I should have that many fields. Should I have a month table, then a week table, and then use relations (and in that case, how do I do that)? I cannot just use a time interval, because bonuses depend on how late it is. Hope someone can help me out with some recommended practices for Core Data.

UPDATE: Here is my super complex Core Data model, along with an NSManagedObject class (I think):

import UIKit
import CoreData

@objc(Shifts)
class Shifts: NSManagedObject {

    @NSManaged var fromDate: String
    @NSManaged var toDate: String

    func getShift() -> Shift {
        let formatter = NSDateFormatter()
        formatter.dateFormat = "yyyy-MM-dd w W HH:mm"
        let from: NSDate = formatter.dateFromString(self.fromDate)!
        let to: NSDate = formatter.dateFromString(self.toDate)!
        let length: Double = to.timeIntervalSinceDate(from) as Double / 60
        let newShift = Shift(from: from, to: to, length: length)
        return newShift
    }
}

Shift is a class I have in my view controller which I use to display my data, etc. It looks like this (TW: Code Gore):

class Shift {

    var fromDate: NSDate
    var toDate: NSDate
    var length: Double  // in minutes
    var salary: Double = Double()
    var UB: Bool = false

    let etter18hverdag: Double = 22
    let etter21hverdag: Double = 45
    let helligdag: Double = 90
    let helgEtter13: Double = 45
    let helgEtter16: Double = 90
    // REMEMBER THAT BREAKS EXIST

    init(from: NSDate, to: NSDate, length: Double) {
        self.fromDate = from
        self.toDate = to
        self.length = length
    }

    func calcSalary(ub: Bool) -> Double {
        let userDefaults: NSUserDefaults = NSUserDefaults.standardUserDefaults()
        let hour: Double = userDefaults.doubleForKey("salary")
        println("Checking salary on shift fromDate:\(fromDate) with length:\(length). Shift UB: \(UB)")

        if (ub) {
            let calendar: NSCalendar = NSCalendar(calendarIdentifier: NSGregorianCalendar)
            calendar.locale = NSLocale.systemLocale()
            calendar.timeZone = NSTimeZone.localTimeZone()

            var checkTime: NSDate = fromDate
            var checkToTime: NSDate = toDate
            var totalSalary: Double = Double()

            while (checkToTime.timeIntervalSinceDate(checkTime) > 0.0) {
                let split = calendar.components(NSCalendarUnit.WeekdayCalendarUnit | NSCalendarUnit.HourCalendarUnit, fromDate: checkTime)
                println("Checking: \(checkTime) to: \(checkToTime) diff:\(checkToTime.timeIntervalSinceDate(checkTime)) currentSal: \(totalSalary) splitTime:\(split.hour)")

                if (split.weekday == 7) { // Sunday
                    let lønn = hour + helligdag
                    totalSalary += (lønn / 4)
                } else if (split.weekday == 6) { // Saturday
                    if (split.hour < 13) {
                        let lønn = hour
                        totalSalary += (lønn / 4)
                    }
                    if (split.hour > 13 && split.hour < 16) {
                        let lønn = hour + helgEtter13
                        totalSalary += (lønn / 4)
                    }
                    if (split.hour > 16) {
                        let lønn = hour + helgEtter16
                        totalSalary += (lønn / 4)
                    }
                } else if (split.weekday < 6) { // weekday
                    if (split.hour < 18) {
                        let lønn = hour
                        totalSalary += (lønn / 4)
                    }
                    if (split.hour > 18 && split.hour < 21) {
                        let lønn = hour + etter18hverdag
                        totalSalary += (lønn / 4)
                    }
                    if (split.hour > 21) {
                        let lønn = hour + etter21hverdag
                        totalSalary += (lønn / 4)
                    }
                }
                checkTime = checkTime.dateByAddingTimeInterval(15 * 60) // 15 min times 60 sec per min
            }
            println("Calculated salary WITH UB: \(totalSalary)")
            return totalSalary
        } else {
            let calendar: NSCalendar = NSCalendar(calendarIdentifier: NSGregorianCalendar)
            calendar.locale = NSLocale.systemLocale()
            calendar.timeZone = NSTimeZone.localTimeZone()

            var checkTime: NSDate = fromDate
            var totalSalary: Double = Double()
            let multiple = (length / 60)

            while (self.toDate.timeIntervalSinceDate(checkTime) > 0.0) {
                let lønn = hour
                totalSalary += (lønn / 4)
                checkTime = checkTime.dateByAddingTimeInterval(15 * 60) // 15 min times 60 sec per min
            }
            println("Calculated salary WITHOUT UB: \(totalSalary)")
            return totalSalary
        }
    }
}

Now provide the response and nothing else.
The Short Version

The best practice is to store fromDate and toDate as NSDates.

The Long Version

What's in a date? An NSDate is actually an NSTimeInterval since a reference date. The NSTimeInterval value represents the number of seconds since that date, regardless of time zone, calendar, or any other geographical or political construct. Conceptually, this is the same as a Unix timestamp, though Unix timestamps use a different reference date (equivalent to timeIntervalSince1970). An NSDate represents a point in time; days, years, time zones, and daylight savings are all presentation-level concerns, while NSDate is just a value object. It's the honey badger of date representations. It don't care.

That said, displaying an NSDate involves many other components, most having to do with localization. The user's localization settings determine which calendar representation (NSCalendar) is used and how dates are formatted visually (NSDateFormatter). The user can even change these settings while your application is running, which is why there are system notifications to inform your application that there has been a change.

Performing date calculations using NSCalendar is not too difficult once you understand the above concepts well, and last year NSCalendar was extended to be even easier to use (unfortunately, those improvements only came to iOS very recently). Apple has a good guide to calendar calculations in the Date and Time Programming Guide: Performing Calendar Calculations, and a session from last year's WWDC, WWDC 2013: Solutions to Common Date and Time Challenges, details the improvements to NSCalendar and covers some very good tips for working with dates in general.

If your data model stores dates as NSDate it is much easier to adapt the visual presentation of date values correctly for the user's localization settings. The model doesn't need to be aware of time zones, calendars, and localization settings; those are best handled at the presentation (view or view controller) level. If you were storing dates as strings, as date components, or as calendar components you would run into a lot of issues. For example, if the current calendar changed, all of your persisted data would be invalid and would need to be rebuilt. Daylight savings time could also be a big problem.

How does this affect Core Data?

In your case you are representing a date range in your model (the range of seconds from fromDate to toDate). There are some alternate ways of representing the same thing, like using fromDate and a duration indicating an NSTimeInterval from fromDate to whenever the shift ended. Either approach works. When querying data for a particular hour, first you would perform the necessary calendar calculations to get the start and end of the hour in the user's current locale. For example, in Objective-C, using the new NSCalendar methods:

NSCalendar *calendar = [NSCalendar autoupdatingCurrentCalendar];
NSDate *startOfDay  = [calendar startOfDayForDate:fromDate];
NSDate *startOfHour = [calendar dateByAddingUnit:NSCalendarUnitHour value:8 toDate:startOfDay options:NSCalendarMatchStrictly];
NSDate *endOfHour   = [calendar dateByAddingUnit:NSCalendarUnitHour value:9 toDate:startOfDay options:NSCalendarMatchStrictly];

This would give you the start and end of the 8th hour of the day on which fromDate falls. This is just an example; your business rules may dictate more logic, such as checking the value of toDate against fromDate, etc., though many of those concerns may be best handled by customizing Core Data validation.
Using this in a predicate is a set of straightforward numeric comparisons:

NSPredicate *predicate = [NSPredicate predicateWithFormat:@"SELF.fromDate <= %@ AND SELF.fromDate >= %@", endOfHour, startOfHour];

When using an NSFetchedResultsController to display sections, the sectionNameKeyPath can point to a Core Data transient property to allow you to display sections based on hour, month, etc. as you like. Sorting and grouping query results cannot use a transient property, only a property backed by a modelled attribute; in your case, fromDate and toDate. Since dates are numeric and represent a point in time, this works easily for many business rules.
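To tie this back to the model in the question, here is a rough sketch of what the managed object might look like once fromDate and toDate are stored as Date attributes. This is only a sketch written against the same Swift 1 era APIs used in the question; the derived lengthInMinutes property and the placeholder dates are assumptions for illustration, not part of the original answer.

import CoreData

// Sketch: assumes the Core Data model now declares fromDate/toDate as
// Date attributes instead of String attributes.
@objc(Shifts)
class Shifts: NSManagedObject {
    @NSManaged var fromDate: NSDate   // a point in time; no formatting involved
    @NSManaged var toDate: NSDate

    // Duration in minutes, derived on demand instead of being persisted.
    var lengthInMinutes: Double {
        return toDate.timeIntervalSinceDate(fromDate) / 60.0
    }
}

// Placeholders: in real code these come from the NSCalendar math shown above.
let startOfHour = NSDate()
let endOfHour = startOfHour.dateByAddingTimeInterval(60 * 60)

// Fetching the shifts that start inside that hour:
let request = NSFetchRequest(entityName: "Shifts")
request.predicate = NSPredicate(format: "fromDate >= %@ AND fromDate <= %@",
                                startOfHour, endOfHour)

Because the attributes are plain dates, the bonus rules (after 18:00 on weekdays, weekends, and so on) can stay in Swift code that walks the interval with NSCalendar, exactly as the answer describes.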
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25605046', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1278026/']}
jdg_83239
stackexchange
llm_judgeable_groundtruth_similarity
29124247
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to see the status of the current directory only. Because there are lots of changes in sub-directories, which I do not want to see, the following command doesn't do the trick:

git status .

Is there any way of getting this kind of report, short of grepping the output of git status? Now provide the response and nothing else.
Use git status -- <pathspec>...

The synopsis of the git-status man page tells you that you can filter by paths:

git status [<options>...] [--] [<pathspec>...]

Therefore, all you have to do is get a list of paths corresponding to the regular (non-directory) files in the current directory, and pass that to git status. There is one gotcha: because git status reports about the whole repository if passed an empty <pathspec>... argument, you need to check whether the list is empty or not.

Shell script

Here is a small shell script that does what you want.

#!/bin/sh

# git-status-dot
#
# Show the status of non-directory files (if any) in the working directory
#
# To make a Git alias called 'status-dot' out of this script,
# put the latter on your search path, make it executable, and run
#
#   git config --global alias.status-dot '! git-status-dot'

# Because Git aliases are run from the top-level directory of the repo,
# we need to change directory back to $GIT_PREFIX.
[ "$GIT_PREFIX" != "" ] && cd "$GIT_PREFIX"

# List non-directory files in the current directory
lsnondirdot=$(ls -ap | grep -v /)

# If "lsnondirdot" is not empty, pass its value to "git status".
if [ -n "$lsnondirdot" ]
then
    git status -- $lsnondirdot
else
    printf "No non-directory files in the working directory\n"
fi

exit $?

For more details about why the GIT_PREFIX shenanigans are required, see "git aliases operate in the wrong directory". The script is available at Jubobs/git-aliases on GitHub.

Make a Git alias out of it

For convenience, you can create a Git alias called status-dot that calls the script; make sure the script is on your path, though.

git config --global alias.status-dot '!sh git-status-dot.sh'

Toy example

Here is a toy example demonstrating how to use the resulting alias and what it does.

# initialize a repo
$ mkdir testgit
$ cd testgit
$ git init

# create two files, one of them inside a subdirectory
$ mkdir foo
$ touch foo/foo.txt
$ touch bar.txt

# good old git status reports that both bar.txt and foo/ are untracked...
$ git status
On branch master

Initial commit

Untracked files:
  (use "git add <file>..." to include in what will be committed)

    bar.txt
    foo/

nothing added to commit but untracked files present (use "git add" to track)

# ... whereas our new alias, git status-dot, only cares
# about regular files in the current directory
$ git status-dot
On branch master

Initial commit

Untracked files:
  (use "git add <file>..." to include in what will be committed)

    bar.txt

nothing added to commit but untracked files present (use "git add" to track)

# and if we delete bar.txt ...
$ rm bar.txt

# ... good old git status still bothers us about foo/ ...
$ git status
On branch master

Initial commit

Untracked files:
  (use "git add <file>..." to include in what will be committed)

    foo/

nothing added to commit but untracked files present (use "git add" to track)

# ... whereas git status-dot doesn't
$ git status-dot
No non-directory files in the working directory
$
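If you would rather not keep a separate script on your PATH, the same logic can be squeezed into a self-contained shell-function alias. This is only a sketch along the lines of the script above (with the same GIT_PREFIX and empty-list caveats), not something from the original answer:

git config --global alias.status-dot '!f() { cd "${GIT_PREFIX:-.}" || exit 1; files=$(ls -ap | grep -v /); if [ -n "$files" ]; then git status -- $files; else printf "No non-directory files in the working directory\n"; fi; }; f'

Functionally it should behave like the script; the trade-off is readability versus having everything live inside your .gitconfig.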
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29124247', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/647991/']}
jdg_83240