source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10) |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 1899579 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I understand how to show that $$\frac{\sin x}{x}$$is improperly integrable, but I'm having troubles with showing that it is not absolutely integrable. Here is my work: $$\frac{|\sin x|}{x}\leq \frac{1}{x}$$ and I know $\int_{1}^{\infty}\frac{1}{x}dx=\lim_{d\to \infty} \log(d)\to \infty$ but that doesn't prove that $\frac{|\sin x|}{x} \to \infty$ since it is less than $\frac{1}{x}$. I was thinking about doing squeeze theorem
Now provide the response and nothing else.
| Hint :$$\int_{(k-1)\pi}^{k\pi}\frac{|\sin x|}{x}\,dx \ge\int_{(k-1)\pi}^{k\pi}\frac{|\sin x|}{k\pi}\,dx =\frac1{k\pi}\left|\int_{(k-1)\pi}^{k\pi}\sin x\,dx\right| =\frac2{k\pi}\ .$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1899579', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/362867/']} | jdg_83941 |
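The hint above can be pushed one step further; summing the interval bound gives a harmonic-series lower bound (this continuation is a sketch, not part of the quoted answer):

$$\int_{\pi}^{N\pi}\frac{|\sin x|}{x}\,dx=\sum_{k=2}^{N}\int_{(k-1)\pi}^{k\pi}\frac{|\sin x|}{x}\,dx\ge\sum_{k=2}^{N}\frac{2}{k\pi}=\frac{2}{\pi}\sum_{k=2}^{N}\frac{1}{k}\xrightarrow[N\to\infty]{}\infty,$$

so the integral of $\frac{|\sin x|}{x}$ over $[1,\infty)$ cannot be finite, which is exactly the non-absolute-integrability being asked for.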
stackexchange | llm_judgeable_groundtruth_similarity | 2651907 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a project written for Android devices. It generates a large number of files, each day. These are all text files and images. The app uses a database to reference these files. The app is supposed to clear up these files after a little use (perhaps after a few days), but this process may or may not be working. This is not the subject of this question. Due to a historic accident, the organization of the files are somewhat naive: everything is in the same directory; a .hidden directory which contains a zero byte .nomedia file to prevent the MediaScanner indexing it. Today, I am seeing an error reported: java.io.IOException: Cannot create: /sdcard/.hidden/file-4200.html at java.io.File.createNewFile(File.java:1263) Regarding the sdcard, I see it has plenty of storage left, but counting $ cd /Volumes/NO_NAME/.hidden$ ls | wc -w9058 Deleting a number of files seems to have allowed the file creation for today to proceed. Regrettably, I did not try touch ing a new file to try and reproduce the error on a commandline; I also deleted several hundred files rather than a handful. However, my question is: are there hard limits on filesize or number of files in a directory? am I even on the right track here? Nota Bene:The SD card is as-is - i.e. I haven't formatted it, so I would guess it would be a FAT-* format. The FAT-32 format has hard limits of filesize of 2GB (well above the filesizes I am dealing with) and a limit of number of files in the root directory. I am definitely not writing files in the root directory.
Now provide the response and nothing else.
| There's a limit on 512 entries in the root directory of FAT filesystem. This limit comes about because the root directory lives in a particular place on FAT filesystems. For other directories this limit is not in place. Additionally, FAT32 removed the 512 entry limit for the root filesystem by treating the root directory the same as any other directory. Using long filenames - i.e. not in 8.3 format - means than a single file uses multiple directory entries . Some Googling finds some people claiming that a FAT32 directory can have a maximum of 65,536 entries (which would be fewer files if they had long file names). However, none of the sources that mentioned this limit seemed that reliable so I thought I'd test this. I wrote a script which creates files with 30 character filenames, meaning each file would need 4 directory entries. When the script got to file 16,384 it failed with an IO error and I couldn't create more files in my test directoy. So this does seem to validate the 65,536 entry limit. If you're hitting this limit at 9,000 files then your files must be using at least 7 entries each which corresponds to filenames that are at least 66 characters long. Does this match what you're doing? (Or you could have some short filenames and some very, very long ones, but you get the idea.) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2651907', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4737/']} | jdg_83942 |
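A small back-of-the-envelope check of the arithmetic in that answer (a sketch; the 65,536-entry ceiling and the 13-characters-per-LFN-entry rule are taken as assumptions from the answer and from typical VFAT behaviour, and the example filenames are hypothetical):

```python
import math

DIR_ENTRY_LIMIT = 65_536      # assumed max entries in a FAT32 (non-root) directory
CHARS_PER_LFN_ENTRY = 13      # characters stored per long-file-name (VFAT) entry

def entries_per_file(name: str) -> int:
    """One short 8.3 entry plus enough LFN entries to hold the long name."""
    return 1 + math.ceil(len(name) / CHARS_PER_LFN_ENTRY)

for name in ("file-4200.html",                      # 14 chars, as in the question
             "a-file-with-a-30-char-name.txt"):     # 30 chars, as in the answer's test
    n = entries_per_file(name)
    print(f"{name!r}: {n} entries per file -> at most {DIR_ENTRY_LIMIT // n} files")
```

The 30-character case reproduces the 16,384-file ceiling the answerer hit, and it supports the answer's inference that hitting the wall at roughly 9,000 files implies considerably longer names.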
stackexchange | llm_judgeable_groundtruth_similarity | 554148 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider some known physical fact, e.g. $\nabla \cdot \mathbf B = 0$ for the magnetic induction $\mathbf B$ . Now, is it possible that a mathematical theorem exists, which yields a wrong prediction? E.g. a hypothetical - correctly proven - theorem that goes: "If $\nabla\cdot\mathbf B=0$ then some new planet should be between Earth and Mars." If this theorem was right, and after deep research we were sure that no such planet exists, one obvious possibility is that the previously known fact was incorrect i.e. perhaps $\nabla \cdot \mathbf B \neq 0$ under some strange conditions. But is this the only possibility? In other words, is it possible that both the premise and the theorem were right, but the mathematically obtained prediction is not true for physics? Please note that I picked just a silly example to make myself clear about a question regarding the relationship between mathematics and physics, but of course it is not this particular example that I am interested in. Also, I am not looking to discuss the existence of planets between Earth and Mars and, lastly, I am certainly not questioning the truth of Gauss' law.
Now provide the response and nothing else.
| Here is a mathematical theorem: the internal angles of a triangle add up to 180 degrees (i.e. half a complete rotation). To be a little more thorough, let's define a triangle: it is a closed figure consisting of three straight lines, and a straight line is the line of shortest distance between two points. Ok so we have a nice mathematical theorem. Now we go out into the world and start measuring triangles. They all have internal angles adding up to 180 degrees, to the precision of our instruments, so we are reassured. But then we get more precise instruments and larger triangles, and something happens: the angles are no longer adding up right! Oh no! What has happened? Is it a contradiction? Or perhaps our lines were not straight? We check that the lines were indeed of minimum distance. Eventually we go back to our mathematical theorem and realise that it had a hidden assumption. It was an assumption lying in a subtle way right at the heart of geometry and it turns out that it is an assumption that need not necessarily hold. One to do with parallel lines, called Euclid's fifth postulate. Then we discover a more general way of doing geometry and we can make sense of our measurements again---using the theory of general relativity and the geometry of curved spaces. So, to answer your question, what happens when physical observations contradict a mathematical statement has, up to now, always turned out to be like the above. What happens is that we find the mathematical statement was true in its own proper context, with the assumptions underling the concepts it was using, but that context is not the one that applies to the physical world. So, up till now at least, physics has never contradicted mathematics, but it has repeatedly shown that certain mathematical ideas which were thought to apply to the physical world in fact do not, or only do in a restricted sense or in some limiting case. | {} | {'log_upvote_score': 8, 'links': ['https://physics.stackexchange.com/questions/554148', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/190100/']} | jdg_83943 |
stackexchange | llm_judgeable_groundtruth_similarity | 8030363 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 3 tables: audioFormats, videoFormats, and fileInfo.I have a transaction such that when I insert into the fileInfo table, that insert includes an FK from audioFormats and videoFormats. An insertion into the latter tables takes place if the audio format or video format are not already in those tables, then the generated (or existing) ID value is inserted into fileInfo. How do I efficiently insert a value only if that value does not exist, but get the ID of the value whether it already exists or was freshly inserted using only SQL (and perhaps a transaction). I can insert a value if it does not already exist: INSERT INTO audioformats (audioformat)VALUES(@format)WHERE NOT EXISTS (SELECT 1 FROM audioformats WHERE audioformat = @format) I can get the inserted ID from an insertion: INSERT INTO audioFormats (audioFormat)VALUES ('Test')SET @audioFormatId = SCOPE_IDENTITY() SCOPE_IDENTITY won't give me an ID value if no insertion took place.I can execute a scalar query to get the identity after a possible insertion, but it seems like I should be able to do all of this with at most one SELECT and INSERT.
Now provide the response and nothing else.
| You can use an IF statement to do this IF NOT EXISTS(SELECT TOP 1 1 FROM audioformats WHERE audioformat = @format)BEGIN INSERT INTO audioFormats (audioFormat) VALUES ('Test') SET @audioFormatId = SCOPE_IDENTITY()ENDELSEBEGIN SELECT @audioFormatID = ID FROM audioformats WHERE audioformat = @formatEND or you could do it like this: INSERT INTO audioformats (audioformat) SELECT @format FROM audioFormats WHERE NOT EXISTS (SELECT TOP 1 1 FROM audioformats WHERE audioformat = @format)SELECT @audioFormatID = ID FROM audioformats WHERE audioformat = @format | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8030363', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/161816/']} | jdg_83944 |
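The accepted answer is T-SQL; purely as an illustration of the same get-or-create pattern, here is a self-contained sketch against SQLite (an assumption: `INSERT OR IGNORE` is SQLite syntax, not SQL Server's, and this ignores the locking questions a real concurrent transaction would raise):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audioFormats (id INTEGER PRIMARY KEY, audioFormat TEXT UNIQUE)")

def get_or_create(fmt: str) -> int:
    # Insert only if the value is not already there, then read the id back,
    # whether it was freshly inserted or already existed.
    conn.execute("INSERT OR IGNORE INTO audioFormats (audioFormat) VALUES (?)", (fmt,))
    (row_id,) = conn.execute(
        "SELECT id FROM audioFormats WHERE audioFormat = ?", (fmt,)).fetchone()
    return row_id

print(get_or_create("Test"))   # -> 1 (inserted)
print(get_or_create("Test"))   # -> 1 (found, not re-inserted)
```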
stackexchange | llm_judgeable_groundtruth_similarity | 12621 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to understand the algorithms by Peterson and Dekker which are very similar and display a lot of symmetries. I tried to formulate the algorithms in informal language like follows: Peterson's: "I want to enter." flag[0]=true; "You can enter next." turn=1; "If you want to enter and while(flag[1]==true&&turn==1){ it's your turn I'll wait." } Else: Enter CS! // CS "I don't want to enter any more." flag[0]=false;Dekker's: "I want to enter." flag[0]=true; "If you want to enter while(flag[1]==true){ and if it's your turn if(turn!=0){ I don't want to enter any more." flag[0]=false; "If it's your turn while(turn!=0){ I'll wait." } "I want to enter." flag[0]=true; } } Enter CS! // CS "You can enter next." turn=1; "I don't want to enter any more." flag[0]=false; The difference seems to be the point where "You can enter next." occurs and the fact that "if it's your turn I don't want to enter any more." occurs in Dekker's. In Peterson's algorithm, the two processes seem to be dominant. A process seems to force his way in into the critical section unless it's the other one's turn. Conversely, in Dekker's algorithm, the two processes seem to be submissive and polite. If both processes want to enter the critical section, and it's the other one's turn, the process decides to no longer want to enter. (Is this needed for starvation-freedom? Why?) How exactly do these algorithms differ? I imagine that when both processes try to enter the critical section, in Peterson's, the process says "I enter", while in Dekker's the process says "You may enter". Can someone clear up the way the processes behave in each algorithm? Is my way of putting it in informal terms correct?
Now provide the response and nothing else.
| Your informal descriptions of the algorithms is wonderful. I think in both cases the author was trying to come up with the simplest solution they could think of that guaranteed both mutual exclusion and deadlock freedom. Neither algorithm is starvation free or fair. [ed: as pointed out in the comments, both algorithms are starvation free, and Peterson's algorithm is also fair]. Dekker's solution was the first mutual exclusion algorithm using just load and store instructions. It was introduced in Dijkstra, Edsger W.; "Cooperating sequential processes", in F. Genuys, ed., Programming Languages: NATO Advanced Study Institute , pp. 43-112, Academic Press, 1968 . If you read through the paper you see Dijkstra work through a number of attempts, recognizing the problem with each, and then adding a little bit more for the next version. Part of the inefficiency of his algorithm comes from the fact that he starts with a turn-taking algorithm and then tries to modify it to allow the processes to progress in any order. (Not just 0,1,0,1,...) Peterson's algorithm was published in 1981, after more than a decade of experience and hindsight about Dekker's algorithm. Peterson wanted a much simpler algorithm than Dekker so that the proof of correctness is much easier. You can see that he was feeling some frustration with the community from the title of his paper. Peterson, G.L.; "Myths about the mutual exclusion problem," Inf. Proc. Lett. , 12(3): 115-116, 1981 . Very quick read and very well written. (And the snide remarks about formal methods are priceless.) Peterson's paper also discusses the process by which he built his solution from simpler attempts. (Since his solution is simpler, it required fewer intermediate steps.) Note that the main difference (what you call "dominance" rather than "submissiveness") is that because Peterson started out fresh (not from the turn-taking algorithm Dijkstra started with) his wait loop is simpler and more efficient. He realizes that he can just get away with simple looped testing while Dijkstra had to backoff and retry each time. I feel I must also mention Lamport's classic Bakery algorithm paper: Lamport, Leslie; "A New Solution of Dijkstra's Concurrent Programming Problem", Comm ACM 17(8):453-455, 1974 . The Bakery algorithm is arguably simpler than Dekker's algorithm (and certainly simpler in the case of more than 2 processors), and is specifically designed to be fault tolerant. I specifically mention it for two reasons. First, because it gives a little bit of history about the definition of the mutual exclusion problem and attempts to solve it up to 1974. Second because the Bakery algorithm demonstrates that no hardware atomicity is required to solve the mutual exclusion problem. Reads that overlap writes to the same location can return any value and the algorithm still works. Finally, a particular favorite of mine is Lamport, Leslie; "A Fast Mutual Exclusion Algorithm," ACM Trans. Comp. Sys. , 5(1):1-11, 1987 . In this paper Lamport was trying to optimize a solution to the mutual exclusion problem in the (common) case that there is little contention for the critical section. Again, it guarantees mutual exclusion and deadlock freedom, but not fairness. It is (I believe) the first mutual exclusion algorithm using only normal reads and writes that can synchronize N processors in O(1) time when there is no contention. (When there is contention, it falls back on an O(N) test.) 
He gives an informal demonstration that the best you can do in the contention free case is seven memory accesses. (Dekker and Peterson both do it with 4, but they can only handle 2 processors, when you extend their algorithms to N they have to add an extra O(N) accesses.) In all: I'd say Dekker's algorithm itself is interesting mainly from a historical perspective. Dijkstra's paper explained the importance of the mutual exclusion problem, and demonstrated that it could be solved. But with many years of hindsight simpler (and more efficient) solutions than Dekker's have been found. | {} | {'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/12621', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/8623/']} | jdg_83945 |
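To make the "dominant vs. submissive" contrast in the question concrete, here is a minimal Peterson entry/exit protocol in Python (a sketch only: it leans on CPython's GIL making plain loads and stores appear sequentially consistent, which real hardware does not promise; the comment marks exactly where Dekker's algorithm would differ):

```python
import threading

flag = [False, False]   # "I want to enter."
turn = 0                # who should wait if both want in
counter = 0             # shared state touched only inside the critical section

def worker(me: int) -> None:
    global turn, counter
    other = 1 - me
    for _ in range(50_000):
        flag[me] = True          # I want to enter.
        turn = other             # You can enter next.
        while flag[other] and turn == other:
            # Busy-wait. Dekker's algorithm would instead lower flag[me] here
            # while it is the other's turn, and only re-raise it afterwards.
            pass
        counter += 1             # critical section
        flag[me] = False         # I don't want to enter any more.

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # 100000 if mutual exclusion held
```

The only structural difference sits inside that wait loop: Peterson keeps its claim raised and enters as soon as either condition fails, while Dekker withdraws its claim until the turn variable comes back around.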
stackexchange | llm_judgeable_groundtruth_similarity | 31080284 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm doing an iOS app project in Swift 1.2, using Cocoapods 0.37.2, Xcode 6.3.2. After adding around 8 pods in my project, app launch time greatly increases (around 10 seconds more) on device (iPhone 5). (Note: launch time means the time when you tap the app icon to open the app) It is so slow iOS terminates it because it doesn't launch in time. The top of the crash log is as follows... Application Specific Information: com.tryslowappswift failed to launch in timeElapsed total CPU time (seconds): 27.720 (user 27.720, system 0.000), 68% CPU Elapsed application CPU time (seconds): 0.074, 0% CPUThread 0:0 dyld 0x1ff0f4c8 ImageLoaderMachOCompressed::rebase(ImageLoader::LinkContext const&) + 4561 dyld 0x1ff087be ImageLoader::recursiveRebase(ImageLoader::LinkContext const&) + 1742 dyld 0x1ff07dca ImageLoader::link(ImageLoader::LinkContext const&, bool, bool, bool, ImageLoader::RPathChain const&) + 1863 dyld 0x1ff012fc dyld::link(ImageLoader*, bool, bool, ImageLoader::RPathChain const&) + 2044 dyld 0x1ff022d6 dyld::_main(macho_header const*, unsigned long, int, char const**, char const**, char const**, unsigned long*) + 23625 dyld 0x1fefe222 dyldbootstrap::start(macho_header const*, int, char const**, long, macho_header const*, unsigned long*) + 3946 dyld 0x1fefe03c _dyld_start + 60No thread state (register information) available... You can test this by: Note: I have created an example Swift project with all the pods setup in my github repo . You can just clone and run it on your device and see the delay for yourself. Create a new blank project, nothing in the application:didFinishLaunchingWithOptions: method Run the app on device and see the app launch very fast. Stop. Now try adding a Podfile with about 8 pods (no matter big or small the pods are), do pod install . Just for clarity, this is the Podfile I used... Podfile source 'https://github.com/CocoaPods/Specs.git'platform :ios, '8.0'use_frameworks! # required for Swift podspod 'Alamofire', '~> 1.2.1'pod 'NPReachability', '~> 0.2.0'pod 'ActionSheetPicker-3.0', '~> 1.6.1'pod 'SDWebImage', '~> 3.7.2'pod 'KVNProgress', '~> 2.2.1'pod 'KeychainAccess'pod 'JazzHands', '~> 0.2.1'pod 'DGActivityIndicatorView' Then run on device again. This time you will notice the 10+ seconds delay , even before application:didFinishLaunchingWithOptions: method is called. No import , no bridging header files, just install the pod into the project and it slows down the launch (-- rage guy meme !!! ---) I suspect this has to do with Swift 1.2 so I tried on a Objective-C project, but I still experience the same delay. It seems to happen right after a normal pod installation, I have no idea how I can optimize or fix this. :( UPDATE 1 (27 July 2015): As pointed by Bryan Musial, I tried adding flags in my schema settings to log load time for each library. I run this on my iPhone 5. Here is the result in Xcode's 'Devices' Window: Jul 27 13:56:02 Hlung SpringBoard[43] <Warning>: Installed apps did change. 
Added: {( )} Removed: {( )} Modified: {( "th.in.hlung.TrySlowAppSwift" )}Jul 27 13:56:03 Hlung com.apple.debugserver-@(#)PROGRAM:debugserver PROJECT:debugserver-320.2.89 [4964] <Warning>: debugserver-@(#)PROGRAM:debugserver PROJECT:debugserver-320.2.89 for armv7.Jul 27 13:56:03 Hlung com.apple.debugserver-@(#)PROGRAM:debugserver PROJECT:debugserver-320.2.89 [4964] <Warning>: Connecting to com.apple.debugserver service...Jul 27 13:56:03 Hlung com.apple.debugserver-@(#)PROGRAM:debugserver PROJECT:debugserver-320.2.89 [4964] <Warning>: Got a connection, waiting for process information for launching or attaching.Jul 27 13:56:03 Hlung com.apple.debugserver-@(#)PROGRAM:debugserver PROJECT:debugserver-320.2.89 [4964] <Warning>: About to launch process for bundle ID: th.in.hlung.TrySlowAppSwiftJul 27 13:56:03 Hlung com.apple.xpc.launchd[1] (UIKitApplication:th.in.hlung.TrySlowAppSwift[0x578b]) <Error>: The DisableASLR key is no longer respected. Please remove it.Jul 27 13:56:03 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:04 Hlung kernel[0] <Notice>: xpcproxy[4965] Container: /private/var/mobile/Containers/Data/Application/6C097544-9C1E-4B73-ACF8-43701FDFC8C2 (sandbox)Jul 27 13:56:04 Hlung com.apple.debugserver-@(#)PROGRAM:debugserver PROJECT:debugserver-320.2.89 [4964] <Warning>: In completion handler, got pid for bundle id, pid: 4965.Jul 27 13:56:04 Hlung com.apple.debugserver-@(#)PROGRAM:debugserver PROJECT:debugserver-320.2.89 [4964] <Warning>: Got a connection, launched process /private/var/mobile/Containers/Bundle/Application/9F233F77-63BC-479E-827A-F08C964DE38C/TrySlowAppSwift.app (pid = 4965).Jul 27 13:56:04 Hlung SpringBoard[43] <Warning>: LICreateIconForImage passed NULL CGImageRef imageJul 27 13:56:04 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:04 Hlung locationd[4692] <Notice>: Gesture EnabledForTopCLient: 0, EnabledInDaemonSettings: 0Jul 27 13:56:05 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:05 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:06 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:07 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:08 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:08 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:09 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:10 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:10 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:11 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:12 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:12 Hlung MobileMail[139] <Warning>: Attempting to badge the application icon but haven't received permission from the user to badge the applicationJul 27 13:56:12 Hlung MobileMail[139] <Warning>: Attempting to badge the application icon but haven't received permission from the user to badge the applicationJul 27 13:56:12 Hlung MobileMail[139] <Warning>: Attempting to badge the application icon but haven't received permission from the 
user to badge the applicationJul 27 13:56:12 Hlung MobileMail[139] <Warning>: Attempting to badge the application icon but haven't received permission from the user to badge the applicationJul 27 13:56:12 Hlung MobileMail[139] <Warning>: Attempting to badge the application icon but haven't received permission from the user to badge the applicationJul 27 13:56:12 Hlung MobileMail[139] <Warning>: Attempting to badge the application icon but haven't received permission from the user to badge the applicationJul 27 13:56:12 Hlung MobileMail[139] <Warning>: Attempting to badge the application icon but haven't received permission from the user to badge the applicationJul 27 13:56:12 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:13 Hlung assistant_service[4931] <Warning>: the local store doesn't allow tasks and we have no default calendar :(Jul 27 13:56:13 Hlung assistant_service[4931] <Warning>: Error getting NanoAppRegistry workspace info: Error Domain=NSCocoaErrorDomain Code=4099 "The operation couldn’t be completed. (Cocoa error 4099.)" (The connection to service named com.apple.nanoappregistry.workspace was invalidated.) UserInfo=0x17ebf490 {NSDebugDescription=The connection to service named com.apple.nanoappregistry.workspace was invalidated.}Jul 27 13:56:13 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:14 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:14 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:15 Hlung amfid[4918] <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName]Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: dyld: loaded: /usr/lib/libcupolicy.dylibJul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: dyld: loaded: /usr/lib/libTelephonyUtilDynamic.dylibJul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total time: 13.1 seconds (100.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total images loaded: 149 (128 from dyld shared cache)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total segments mapped: 60, into 1700 pages with 112 pages pre-fetchedJul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total images loading time: 12.8 seconds (97.9%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total dtrace DOF registration time: 0.17 milliseconds (0.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total rebase fixups: 32,622Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total rebase fixups time: 34.74 milliseconds (0.2%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total binding fixups: 121,320Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total binding fixups time: 116.36 milliseconds (0.8%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total weak binding fixups time: 5.10 milliseconds (0.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total bindings lazily fixed up: 0 of 0Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total initializer time: 118.97 milliseconds (0.9%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: libSystem.B.dylib Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: : 37.57 milliseconds (0.2%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: libBacktraceRecording.dylib Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: : 0.77 milliseconds (0.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: libc++.1.dylib Jul 27 13:56:17 Hlung 
TrySlowAppSwift[4965] <Notice>: : 0.09 milliseconds (0.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: libobjc.A.dylib Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: : 0.10 milliseconds (0.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: CoreFoundation Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: : 0.88 milliseconds (0.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: vImage Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: : 0.02 milliseconds (0.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: libGLImage.dylib Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: : 0.12 milliseconds (0.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: libFosl_dynamic.dylib Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: : 0.04 milliseconds (0.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: CoreImage Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: : 0.02 milliseconds (0.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: libswiftCore.dylib Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: : 2.14 milliseconds (0.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total symbol trie searches: 43149Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total symbol table binary searches: 0Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total images defining weak symbols: 18Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total images using weak symbols: 44 The most important part is probably this: Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total time: 13.1 seconds (100.0%)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total images loaded: 149 (128 from dyld shared cache)Jul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total segments mapped: 60, into 1700 pages with 112 pages pre-fetchedJul 27 13:56:17 Hlung TrySlowAppSwift[4965] <Notice>: total images loading time: 12.8 seconds (97.9%) It takes a whole 12.8 seconds (97.9%) to load the images. But this empty project doesn't have any image file. I skimmed through the pods and I think they have no significant amount of image file as well. I'm still stuck here. Comparing to Bryan's result, the same code but run on iPhone 6. The images loading time percentage is also high. Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total time: 1.9 seconds (100.0%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total images loaded: 148 (127 from dyld shared cache)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total segments mapped: 60, into 1756 pages with 164 pages pre-fetchedJul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total images loading time: 1.5 seconds (81.6%) In addition, there's another clue. During the splash screen, there are multiple log lines saying <Error>: SecTrustEvaluate [leaf CriticalExtensions IssuerCommonName] . Googling it just reveal that it is some enterprise app problem, which doesn't help me much. Overall, I'm still stuck. T_T
Now provide the response and nothing else.
| There are a host of reasons that you might be observing slow application start situations such as low memory or disk space conditions, jailbroken and/or modded device, failed software update in need of a clean install, or even hardware failure. While there isn't a lot to go on with the info you've provided, there are a few things you can take a look at to try and eliminate potential causes. I've cloned your sample project and tested on devices from iPhone 5 through iPhone 6 Plus and while I have not been able to replicate slow behavior you've observed locally, I have been in situations where both external and internal factors have caused slow startup performance. First things first, given that we have only a portion of your crashlog you should do some quick verification to make sure we are heading down the right investigative path (Ideally it would be helpful to see the full crashlog) -- As you may or may not know, iOS employs a watchdog process to ensure that iOS apps respond in a reasonable amount of time. When debugging watchdog restrictions are not enforced to allow Xcode the time it needs to establish a live debugging session. Launching as a standalone app, that is, outside of a debugger, Watchdog restrictions are in full effect. Take a look at your crashlog, and check that the Exception Code is the 0x8badf00d (Read: "Ate bad food") -- on 64-bit devices this code will be padded by leading zeros: 0x000000008badf00d Via TN2151 > Exception Codes : The exception code 0x8badf00d indicates that an application has been terminated by iOS because a watchdog timeout occurred. The application took too long to launch, terminate, or respond to system events. One common cause of this is doing synchronous networking on the main thread. Whatever operation is on Thread 0: needs to be moved to a background thread, or processed differently, so that it does not block the main thread. As you've pointed out that application:didFinishLaunchingWithOptions: isn't hit until after waiting that 10+ seconds, it suggests that the delay is occurring while the app's bootstrapping is happening -- your crashlog excerpt generally seems to agree. TrySlowSwiftApp.app's Thread 0 contains only stack frames for the dynamic link editor dyld . You also want to make sure that your crashlog is indicating that Thread 0 is the frame triggering the crash (I can't recall encountering a watchdog crash where Thread 0 wasn't blamed, but I suppose it is possible!). If another thread is being blamed, then we would need to see more about the crashlog you have on-hand. TN2239 speaks to a host of iOS debugging tools, and includes a section of Environment Variables for the Dynamic Linker - We want to add DYLD_PRINT_STATISTICS with a value of YES to the current run scheme: We should also enable 'Log Library Loads' in the Scheme's Diagnostic's Editor: Finally, Xcode's console does not include timestamp information in the In-Xcode Console. You can, however, use Xcode's 'Devices' screen to view the live console with timestamps: The Environment Variable we added will give you statistical information about what dyld spent its time doing, while the the 'Log Library Loads' options will show you the specific libraries that are attempting to be loaded. Because you are viewing this information in the Device's console you are able to see timestamps associated with each log entry. 
Within the dyld statistics output, look for operations that are taking unusually long on your device -- for reference, here's one load on my iPhone 6: Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total time: 1.9 seconds (100.0%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total images loaded: 148 (127 from dyld shared cache)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total segments mapped: 60, into 1756 pages with 164 pages pre-fetchedJul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total images loading time: 1.5 seconds (81.6%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total dtrace DOF registration time: 0.06 milliseconds (0.0%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total rebase fixups: 32,521Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total rebase fixups time: 24.03 milliseconds (1.2%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total binding fixups: 120,894Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total binding fixups time: 190.36 milliseconds (9.8%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total weak binding fixups time: 1.76 milliseconds (0.0%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total bindings lazily fixed up: 0 of 0Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total initializer time: 137.82 milliseconds (7.1%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: libSystem.B.dylib Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: : 60.11 milliseconds (3.1%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: libBacktraceRecording.dylib Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: : 0.39 milliseconds (0.0%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: libc++.1.dylib Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: : 0.27 milliseconds (0.0%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: libobjc.A.dylib Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: : 0.03 milliseconds (0.0%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: CoreFoundation Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: : 3.40 milliseconds (0.1%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: vImage Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: : 0.31 milliseconds (0.0%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: libGLImage.dylib Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: : 0.08 milliseconds (0.0%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: libFosl_dynamic.dylib Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: : 0.01 milliseconds (0.0%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: CoreImage Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: : 0.57 milliseconds (0.0%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: libswiftCore.dylib Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: : 1.74 milliseconds (0.0%)Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total symbol trie searches: 42394Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total symbol table binary searches: 0Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total images defining weak symbols: 17Jul 22 16:44:02 iPhone-6 TrySlowAppSwift[939] <Notice>: total images using weak symbols: 44 Within the 'dyld: loaded:' lines, take a look at the timestamp accompanying each item that is loaded -- you are looking for places where it takes longer to load the resource than it does with surrounding resources. 
Depending on what you find by using these diagnostic tools will help determine what the next diagnostic step should be -- This is left for you to interpret in light of additional information gathered with these steps. As suggested by others, I'd start by double checking the behavior on a second identical model and OS device -- just to rule out something device specific. If you can replicate it there then you should direct more time to a software investigation, however if it doesn't replicate there, you should direct your time to diagnosing your affected device. A more drastic diagnostic step could involve wiping the device and performing a clean install of iOS. If you do this, I would be prepared to do it at least twice -- the first time not restoring from iCloud or iTunes backup and retesting the launch behavior, then reloading a second time to restore your content to the device. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31080284', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/467588/']} | jdg_83946 |
stackexchange | llm_judgeable_groundtruth_similarity | 9777 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Sometimes I connect to the Internet using a VPN connection to be able to access the blocked websites. I've got a few questions regarding this: Are my important data (like my credentials) vulnerable to be eavesdropped on the VPN server? What if the protocol of web site I am connecting to is HTTPS ? (like gmail) Is the owner of VPN server able to access anything on my computer (files and folders) via this connection? If the answer to any of the above questions is Yes, is there any counter measures to prevent it? Any advice would be appreciated!
Now provide the response and nothing else.
| You need to think of connecting to a VPN like walking over to a physical network and plugging your computer into a switch there. So: If you use HTTP, everything is sent in plain text. The VPN server can read everything. If you use HTTPS, only the domain of the server will be known by the VPN server. The first step in a HTTPS session is to establish an SSL connection, everything is encrypted from there. If you have open ports on your computer (sharing files, music, etc.) this will be visible to the VPN server. Other than this, nothing is revealed. Again, it's as if you were connected to a physical network. The VPN server doesn't gain any other special privileges. There is one exception to this, what VPN software are you using? Do you verify the source? If you received the VPN software from a questionable source, it could contain a trojan which allows outside attackers access to your computer. Have you considered Tor or Freenode? These can run by themselves, or in addition to a VPN to give you privacy when visiting standard HTTP websites. | {} | {'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/9777', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/6415/']} | jdg_83947 |
stackexchange | llm_judgeable_groundtruth_similarity | 4603 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm studying for qualifying exams and ran into this problem. Show that if $\{a_n\}$ is a nonincreasing sequence of positive realnumbers such that $\sum_n a_n$ converges, then $\lim_{n \rightarrow \infty} n a_n = 0$ . Using the definition of the limit, this is equivalent to showing \begin{equation}\forall \varepsilon > 0 \; \exists n_0 \text{ such that }|n a_n| < \varepsilon \; \forall n > n_0\end{equation} or \begin{equation}\forall \varepsilon > 0 \; \exists n_0 \text{ such that }a_n < \frac{\varepsilon}{n} \; \forall n > n_0\end{equation} Basically, the terms must be bounded by the harmonic series. Thanks, I'm really stuck on this seemingly simple problem!
Now provide the response and nothing else.
| Some hints: If $S_{n} = \sum_{k=1}^{n} a_{k}$ then what is $\lim_{n \to \infty} S_{2n} - S_{n}$? Now can you use the fact that $a_{n}$ is non-increasing to upper bound a certain term of the sequence $na_{n}$ with a multiple of $S_{2n} - S_{n}$? | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/4603', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1761/']} | jdg_83948 |
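Spelling the hint out (a sketch, not part of the quoted answer): by the Cauchy criterion $S_{2n}-S_{n}\to 0$, and monotonicity bounds a tail term of $na_n$ by that difference:

$$S_{2n}-S_{n}=\sum_{k=n+1}^{2n}a_{k}\ge n\,a_{2n}\quad\Longrightarrow\quad 0\le 2n\,a_{2n}\le 2\,(S_{2n}-S_{n})\xrightarrow[n\to\infty]{}0,$$

$$(2n+1)\,a_{2n+1}\le(2n+1)\,a_{2n}=2n\,a_{2n}+a_{2n}\xrightarrow[n\to\infty]{}0,$$

so $na_n\to 0$ along even and odd indices alike.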
stackexchange | llm_judgeable_groundtruth_similarity | 2023222 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am facing difficulty with the following limit. $$ \lim_{n\to\infty}\left(\binom{n}{0}\binom{n}{1}\dots\binom{n}{n}\right)^{\frac{1}{n(n+1)}} $$ I tried to take log both sides but I could not simplify the resulting expression. Please help in this regard. Thanks.
Now provide the response and nothing else.
| We see that $$\prod_{k=0}^n\binom{n}{k}=\frac{n!^{n+1}}{\prod_{k=0}^nk!^2}=\frac{n!^{n+1}}{\left(\prod_{k=0}^nk^{n+1-k}\right)^2}=\frac{H(n)^2}{n!^{n+1}}.$$where $H(n)=\prod_{k=1}^nk^k$. Now we see that$$\log(H(n))=\sum_{k=1}^nk\log(k)≥\int_{1}^nx\log(x)dx=\frac{n^2}{2}\log(n)-\frac{n^2}{4}$$as well as$$\log(H(n))=\sum_{k=1}^nk\log(k)≤\int_{1}^{n+1}x\log(x)dx=\frac{(n+1)^2}{2}\log(n+1)-\frac{(n+1)^2}{4}$$This gives$$-\frac{\log(n)}{2(n+1)}-\frac{n}{4(n+1)}≤\frac{1}{n(n+1)}\log(H(n))-\frac{1}{2}\log(n)=\frac{1}{n(n+1)}\log(H(n))-\frac{1}{2}\log(n+1)+\frac{1}{2}\log(1+1/n)≤\frac{\log(n+1)}{2n}-\frac{n+1}{4n}+\frac{1}{2}\log(1+1/n).$$As both the lower and the upper bound tend to $-\frac{1}{4}$ as $n\to\infty$ we get by the squeeze theorem$$\lim_{n\to\infty}\left[\frac{1}{n(n+1)}\log(H(n))-\frac{1}{2}\log(n)\right]=-\frac{1}{4}\iff\\\lim_{n\to\infty}\frac{H(n)^{\frac{1}{n(n+1)}}}{\sqrt{n}}=e^{-\frac{1}{4}}$$Using Stirlings approximation we notice$$\lim_{n\to\infty}\frac{n!^{\frac{1}{n}}}{n}=e^{-1}$$and thus$$\lim_{n\to\infty}\left[\prod_{k=0}^n\binom{n}{k}\right]^{\frac{1}{n(n+1)}}=\lim_{n\to\infty}\frac{H(n)^{\frac{2}{n(n+1)}}}{n!^{\frac{1}{n}}}=\lim_{n\to\infty}\left(\frac{H(n)^{\frac{1}{n(n+1)}}}{\sqrt{n}}\right)^2\left(\frac{n}{n!^{\frac{1}{n}}}\right)=(e^{-1/4})^2\cdot\frac{1}{e^{-1}}=\sqrt{e}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2023222', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/356886/']} | jdg_83949 |
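A quick numerical sanity check of the closed form above (a sketch using Python's `math.lgamma` to keep the product in log space; the approach to $\sqrt{e}\approx 1.6487$ is slow, roughly on the order of $\log n/n$):

```python
import math

def nth_term(n: int) -> float:
    """(prod_{k=0}^n C(n,k))^(1/(n(n+1))) computed via log-gamma to avoid overflow."""
    log_sum = sum(
        math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        for k in range(n + 1)
    )
    return math.exp(log_sum / (n * (n + 1)))

for n in (10, 100, 1000, 5000):
    print(n, nth_term(n))
print("sqrt(e) =", math.exp(0.5))
```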
stackexchange | llm_judgeable_groundtruth_similarity | 24858215 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have controls that are model tied to ASP.net MVC5 @Html.TextBoxFor(model => model.OriginLocation.City, new { @class = "form-control", data_ng_model = "address1.City", test_change = "" }) So when the page loads the value of text box input is bound and should display value coming from service with Razor bound controls, latter i can manipulate that value which changes angular model for this control.What i have is textbox loads empty.I can see the value when I view source but its not displayed. <input class="form-control ng-pristine ng-valid" data-ng-model="address1.City" data-val="true" data-val-length="The field City must be a string with a maximum length of 50." data-val-length-max="50" id="OriginLocation_City" name="OriginLocation.City" test-change="" type="text" value="Manheim"> js fragment app.controller('LocationCtrl', ["$scope",function ($scope) { $scope.address1 = { Label: 'address1' };
Now provide the response and nothing else.
| ngModel has precedence over the value that is originally set (it's setting the value to "" because the model doesn't exist). Take a look here... http://jsfiddle.net/yApeP/ But you can specify a value using ngInit ... http://jsfiddle.net/D7vh7/ Which means you can use ngInit when generating the textbox... @Html.TextBoxFor(model => model.OriginLocation.City, new { @class = "form-control", data_ng_model = "address1.City", test_change = "", data_ng_init = string.Format("address1.City = '{0}'", Model.OriginLocation.City.Replace("'", @"\'")) }) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24858215', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/627340/']} | jdg_83950 |
stackexchange | llm_judgeable_groundtruth_similarity | 8034566 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have gotten round to implementing the ADD A,r set of opcodes on my Z80 core. I had a bit of confusion about the carry and overflow flags which I think I've nailed, but I wanted to put it to the community to check that I'm right. Basically, from what I can see, the ALU in the Z80 doesn't care about signed/unsigned operations, it just adds bits. This means that if two 8-bit values are added together and cause a 9-bit value as a result of their addition, the carry flag will be set. This includes adding two negative two's complement numbers, for example -20 (11101100) and -40 (11011000), as although the result is -60 (11000100), the result is actually a 9-bit value 1 1100 0100. This surely means if adding two negative two's complement values, the carry flag will always be set, even when there is no overflow condition - am I right? Secondly, I decided that to detect an overflow in this instruction, I would XOR bit 7 of both operands, and if the result was 10000000, then there is definitely no overflow - if the result of this is 00000000 then there could be an overflow as the signs are the same, and I would therefore XOR bit 7 of the result of the addition with bit 7 of either operand, and if the result of this is 10000000 then an overflow has occurred and I set the P/V overflow flag. Am I right here also? Sorry for such a convoluted question, I'm pretty sure I'm right but I need to know before I carry on with countless more instructions based on this logic. Many thanks.
Now provide the response and nothing else.
| The bits of the result are obtained from the truncated sum of unsigned integers. The add instruction doesn't care about the sign here nor does it care about your own interpretation of the integers as signed or unsigned. It just adds as if the numbers were unsigned. The carry flag (or borrow in case of subtraction) is that non-existent 9th bit from the addition of the 8-bit unsigned integers. Effectively, this flag signifies an overflow/underflow for add/sub of unsigned integers. Again, add doesn't care about the signs here at all, it just adds as if the numbers were unsigned. Adding two negative 2's complement numbers will result in setting of the carry flag to 1, correct. The overflow flag shows whether or not there's been an overflow/underflow for add/sub of signed integers. To set the overflow flag the instruction treats the numbers as signed (just like it treats them as unsigned for the carry flag and the 8 bits of the result). The idea behind setting the overflow flag is simple. Suppose you sign-extend your 8-bit signed integers to 9 bits, that is, just copy the 7th bit to an extra, 8th bit. An overflow/underflow will occur if the 9-bit sum/difference of these 9-bit signed integers has different values in bits 7 and 8, meaning that the addition/subtraction has lost the result's sign in the 7th bit and used it for the result's magnitude, or, in other words, the 8 bits can't accommodate the sign bit and such a large magnitude. Now, bit 7 of the result can differ from the imaginary sign bit 8 if and only if the carry into bit 7 and the carry into bit 8 (=carry out of bit 7) are different. That's because we start with the addends having bit 7=bit 8 and only different carry-ins into them can affect them in the result in different ways. So overflow flag = carry-out flag XOR carry from bit 6 into bit 7. Both my and your ways of calculating the overflow flag are correct. In fact, both are described in the Z80 CPU User's Manual in section "Z80 Status Indicator Flags". Here's how you can emulate most of the ADC instruction in C, where you don't have direct access to the CPU's flags and can't take full advantage of the emulating CPU's ADC instruction: #include <stdio.h>#include <limits.h>#if CHAR_BIT != 8#error char expected to have exactly 8 bits.#endiftypedef unsigned char uint8;typedef signed char int8;#define FLAGS_CY_SHIFT 0#define FLAGS_OV_SHIFT 1#define FLAGS_CY_MASK (1 << FLAGS_CY_SHIFT)#define FLAGS_OV_MASK (1 << FLAGS_OV_SHIFT)void Adc(uint8* acc, uint8 b, uint8* flags){ uint8 a = *acc; uint8 carryIns; uint8 carryOut; // Calculate the carry-out depending on the carry-in and addends. // // carry-in = 0: carry-out = 1 IFF (a + b > 0xFF) or, // equivalently, but avoiding overflow in C: (a > 0xFF - b). // // carry-in = 1: carry-out = 1 IFF (a + b + 1 > 0xFF) or, // equivalently, (a + b >= 0xFF) or, // equivalently, but avoiding overflow in C: (a >= 0xFF - b). // // Also calculate the sum bits. if (*flags & FLAGS_CY_MASK) { carryOut = (a >= 0xFF - b); *acc = a + b + 1; } else { carryOut = (a > 0xFF - b); *acc = a + b; }#if 0 // Calculate the overflow by sign comparison. carryIns = ((a ^ b) ^ 0x80) & 0x80; if (carryIns) // if addend signs are the same { // overflow if the sum sign differs from the sign of either of addends carryIns = ((*acc ^ a) & 0x80) != 0; }#else // Calculate all carry-ins. // Remembering that each bit of the sum = // addend a's bit XOR addend b's bit XOR carry-in, // we can work out all carry-ins from a, b and their sum. 
carryIns = *acc ^ a ^ b; // Calculate the overflow using the carry-out and // most significant carry-in. carryIns = (carryIns >> 7) ^ carryOut;#endif // Update flags. *flags &= ~(FLAGS_CY_MASK | FLAGS_OV_MASK); *flags |= (carryOut << FLAGS_CY_SHIFT) | (carryIns << FLAGS_OV_SHIFT);}void Sbb(uint8* acc, uint8 b, uint8* flags){ // a - b - c = a + ~b + 1 - c = a + ~b + !c *flags ^= FLAGS_CY_MASK; Adc(acc, ~b, flags); *flags ^= FLAGS_CY_MASK;}const uint8 testData[] ={ 0, 1, 0x7F, 0x80, 0x81, 0xFF};int main(void){ unsigned aidx, bidx, c; printf("ADC:\n"); for (c = 0; c <= 1; c++) for (aidx = 0; aidx < sizeof(testData)/sizeof(testData[0]); aidx++) for (bidx = 0; bidx < sizeof(testData)/sizeof(testData[0]); bidx++) { uint8 a = testData[aidx]; uint8 b = testData[bidx]; uint8 flags = c << FLAGS_CY_SHIFT; printf("%3d(%4d) + %3d(%4d) + %u = ", a, (int8)a, b, (int8)b, c); Adc(&a, b, &flags); printf("%3d(%4d) CY=%d OV=%d\n", a, (int8)a, (flags & FLAGS_CY_MASK) != 0, (flags & FLAGS_OV_MASK) != 0); } printf("SBB:\n"); for (c = 0; c <= 1; c++) for (aidx = 0; aidx < sizeof(testData)/sizeof(testData[0]); aidx++) for (bidx = 0; bidx < sizeof(testData)/sizeof(testData[0]); bidx++) { uint8 a = testData[aidx]; uint8 b = testData[bidx]; uint8 flags = c << FLAGS_CY_SHIFT; printf("%3d(%4d) - %3d(%4d) - %u = ", a, (int8)a, b, (int8)b, c); Sbb(&a, b, &flags); printf("%3d(%4d) CY=%d OV=%d\n", a, (int8)a, (flags & FLAGS_CY_MASK) != 0, (flags & FLAGS_OV_MASK) != 0); } return 0;} Output: ADC: 0( 0) + 0( 0) + 0 = 0( 0) CY=0 OV=0 0( 0) + 1( 1) + 0 = 1( 1) CY=0 OV=0 0( 0) + 127( 127) + 0 = 127( 127) CY=0 OV=0 0( 0) + 128(-128) + 0 = 128(-128) CY=0 OV=0 0( 0) + 129(-127) + 0 = 129(-127) CY=0 OV=0 0( 0) + 255( -1) + 0 = 255( -1) CY=0 OV=0 1( 1) + 0( 0) + 0 = 1( 1) CY=0 OV=0 1( 1) + 1( 1) + 0 = 2( 2) CY=0 OV=0 1( 1) + 127( 127) + 0 = 128(-128) CY=0 OV=1 1( 1) + 128(-128) + 0 = 129(-127) CY=0 OV=0 1( 1) + 129(-127) + 0 = 130(-126) CY=0 OV=0 1( 1) + 255( -1) + 0 = 0( 0) CY=1 OV=0127( 127) + 0( 0) + 0 = 127( 127) CY=0 OV=0127( 127) + 1( 1) + 0 = 128(-128) CY=0 OV=1127( 127) + 127( 127) + 0 = 254( -2) CY=0 OV=1127( 127) + 128(-128) + 0 = 255( -1) CY=0 OV=0127( 127) + 129(-127) + 0 = 0( 0) CY=1 OV=0127( 127) + 255( -1) + 0 = 126( 126) CY=1 OV=0128(-128) + 0( 0) + 0 = 128(-128) CY=0 OV=0128(-128) + 1( 1) + 0 = 129(-127) CY=0 OV=0128(-128) + 127( 127) + 0 = 255( -1) CY=0 OV=0128(-128) + 128(-128) + 0 = 0( 0) CY=1 OV=1128(-128) + 129(-127) + 0 = 1( 1) CY=1 OV=1128(-128) + 255( -1) + 0 = 127( 127) CY=1 OV=1129(-127) + 0( 0) + 0 = 129(-127) CY=0 OV=0129(-127) + 1( 1) + 0 = 130(-126) CY=0 OV=0129(-127) + 127( 127) + 0 = 0( 0) CY=1 OV=0129(-127) + 128(-128) + 0 = 1( 1) CY=1 OV=1129(-127) + 129(-127) + 0 = 2( 2) CY=1 OV=1129(-127) + 255( -1) + 0 = 128(-128) CY=1 OV=0255( -1) + 0( 0) + 0 = 255( -1) CY=0 OV=0255( -1) + 1( 1) + 0 = 0( 0) CY=1 OV=0255( -1) + 127( 127) + 0 = 126( 126) CY=1 OV=0255( -1) + 128(-128) + 0 = 127( 127) CY=1 OV=1255( -1) + 129(-127) + 0 = 128(-128) CY=1 OV=0255( -1) + 255( -1) + 0 = 254( -2) CY=1 OV=0 0( 0) + 0( 0) + 1 = 1( 1) CY=0 OV=0 0( 0) + 1( 1) + 1 = 2( 2) CY=0 OV=0 0( 0) + 127( 127) + 1 = 128(-128) CY=0 OV=1 0( 0) + 128(-128) + 1 = 129(-127) CY=0 OV=0 0( 0) + 129(-127) + 1 = 130(-126) CY=0 OV=0 0( 0) + 255( -1) + 1 = 0( 0) CY=1 OV=0 1( 1) + 0( 0) + 1 = 2( 2) CY=0 OV=0 1( 1) + 1( 1) + 1 = 3( 3) CY=0 OV=0 1( 1) + 127( 127) + 1 = 129(-127) CY=0 OV=1 1( 1) + 128(-128) + 1 = 130(-126) CY=0 OV=0 1( 1) + 129(-127) + 1 = 131(-125) CY=0 OV=0 1( 1) + 255( -1) + 1 = 1( 1) CY=1 OV=0127( 127) + 0( 0) + 1 = 128(-128) CY=0 
OV=1127( 127) + 1( 1) + 1 = 129(-127) CY=0 OV=1127( 127) + 127( 127) + 1 = 255( -1) CY=0 OV=1127( 127) + 128(-128) + 1 = 0( 0) CY=1 OV=0127( 127) + 129(-127) + 1 = 1( 1) CY=1 OV=0127( 127) + 255( -1) + 1 = 127( 127) CY=1 OV=0128(-128) + 0( 0) + 1 = 129(-127) CY=0 OV=0128(-128) + 1( 1) + 1 = 130(-126) CY=0 OV=0128(-128) + 127( 127) + 1 = 0( 0) CY=1 OV=0128(-128) + 128(-128) + 1 = 1( 1) CY=1 OV=1128(-128) + 129(-127) + 1 = 2( 2) CY=1 OV=1128(-128) + 255( -1) + 1 = 128(-128) CY=1 OV=0129(-127) + 0( 0) + 1 = 130(-126) CY=0 OV=0129(-127) + 1( 1) + 1 = 131(-125) CY=0 OV=0129(-127) + 127( 127) + 1 = 1( 1) CY=1 OV=0129(-127) + 128(-128) + 1 = 2( 2) CY=1 OV=1129(-127) + 129(-127) + 1 = 3( 3) CY=1 OV=1129(-127) + 255( -1) + 1 = 129(-127) CY=1 OV=0255( -1) + 0( 0) + 1 = 0( 0) CY=1 OV=0255( -1) + 1( 1) + 1 = 1( 1) CY=1 OV=0255( -1) + 127( 127) + 1 = 127( 127) CY=1 OV=0255( -1) + 128(-128) + 1 = 128(-128) CY=1 OV=0255( -1) + 129(-127) + 1 = 129(-127) CY=1 OV=0255( -1) + 255( -1) + 1 = 255( -1) CY=1 OV=0SBB: 0( 0) - 0( 0) - 0 = 0( 0) CY=0 OV=0 0( 0) - 1( 1) - 0 = 255( -1) CY=1 OV=0 0( 0) - 127( 127) - 0 = 129(-127) CY=1 OV=0 0( 0) - 128(-128) - 0 = 128(-128) CY=1 OV=1 0( 0) - 129(-127) - 0 = 127( 127) CY=1 OV=0 0( 0) - 255( -1) - 0 = 1( 1) CY=1 OV=0 1( 1) - 0( 0) - 0 = 1( 1) CY=0 OV=0 1( 1) - 1( 1) - 0 = 0( 0) CY=0 OV=0 1( 1) - 127( 127) - 0 = 130(-126) CY=1 OV=0 1( 1) - 128(-128) - 0 = 129(-127) CY=1 OV=1 1( 1) - 129(-127) - 0 = 128(-128) CY=1 OV=1 1( 1) - 255( -1) - 0 = 2( 2) CY=1 OV=0127( 127) - 0( 0) - 0 = 127( 127) CY=0 OV=0127( 127) - 1( 1) - 0 = 126( 126) CY=0 OV=0127( 127) - 127( 127) - 0 = 0( 0) CY=0 OV=0127( 127) - 128(-128) - 0 = 255( -1) CY=1 OV=1127( 127) - 129(-127) - 0 = 254( -2) CY=1 OV=1127( 127) - 255( -1) - 0 = 128(-128) CY=1 OV=1128(-128) - 0( 0) - 0 = 128(-128) CY=0 OV=0128(-128) - 1( 1) - 0 = 127( 127) CY=0 OV=1128(-128) - 127( 127) - 0 = 1( 1) CY=0 OV=1128(-128) - 128(-128) - 0 = 0( 0) CY=0 OV=0128(-128) - 129(-127) - 0 = 255( -1) CY=1 OV=0128(-128) - 255( -1) - 0 = 129(-127) CY=1 OV=0129(-127) - 0( 0) - 0 = 129(-127) CY=0 OV=0129(-127) - 1( 1) - 0 = 128(-128) CY=0 OV=0129(-127) - 127( 127) - 0 = 2( 2) CY=0 OV=1129(-127) - 128(-128) - 0 = 1( 1) CY=0 OV=0129(-127) - 129(-127) - 0 = 0( 0) CY=0 OV=0129(-127) - 255( -1) - 0 = 130(-126) CY=1 OV=0255( -1) - 0( 0) - 0 = 255( -1) CY=0 OV=0255( -1) - 1( 1) - 0 = 254( -2) CY=0 OV=0255( -1) - 127( 127) - 0 = 128(-128) CY=0 OV=0255( -1) - 128(-128) - 0 = 127( 127) CY=0 OV=0255( -1) - 129(-127) - 0 = 126( 126) CY=0 OV=0255( -1) - 255( -1) - 0 = 0( 0) CY=0 OV=0 0( 0) - 0( 0) - 1 = 255( -1) CY=1 OV=0 0( 0) - 1( 1) - 1 = 254( -2) CY=1 OV=0 0( 0) - 127( 127) - 1 = 128(-128) CY=1 OV=0 0( 0) - 128(-128) - 1 = 127( 127) CY=1 OV=0 0( 0) - 129(-127) - 1 = 126( 126) CY=1 OV=0 0( 0) - 255( -1) - 1 = 0( 0) CY=1 OV=0 1( 1) - 0( 0) - 1 = 0( 0) CY=0 OV=0 1( 1) - 1( 1) - 1 = 255( -1) CY=1 OV=0 1( 1) - 127( 127) - 1 = 129(-127) CY=1 OV=0 1( 1) - 128(-128) - 1 = 128(-128) CY=1 OV=1 1( 1) - 129(-127) - 1 = 127( 127) CY=1 OV=0 1( 1) - 255( -1) - 1 = 1( 1) CY=1 OV=0127( 127) - 0( 0) - 1 = 126( 126) CY=0 OV=0127( 127) - 1( 1) - 1 = 125( 125) CY=0 OV=0127( 127) - 127( 127) - 1 = 255( -1) CY=1 OV=0127( 127) - 128(-128) - 1 = 254( -2) CY=1 OV=1127( 127) - 129(-127) - 1 = 253( -3) CY=1 OV=1127( 127) - 255( -1) - 1 = 127( 127) CY=1 OV=0128(-128) - 0( 0) - 1 = 127( 127) CY=0 OV=1128(-128) - 1( 1) - 1 = 126( 126) CY=0 OV=1128(-128) - 127( 127) - 1 = 0( 0) CY=0 OV=1128(-128) - 128(-128) - 1 = 255( -1) CY=1 OV=0128(-128) - 129(-127) - 1 = 254( -2) CY=1 OV=0128(-128) - 255( 
-1) - 1 = 128(-128) CY=1 OV=0129(-127) - 0( 0) - 1 = 128(-128) CY=0 OV=0129(-127) - 1( 1) - 1 = 127( 127) CY=0 OV=1129(-127) - 127( 127) - 1 = 1( 1) CY=0 OV=1129(-127) - 128(-128) - 1 = 0( 0) CY=0 OV=0129(-127) - 129(-127) - 1 = 255( -1) CY=1 OV=0129(-127) - 255( -1) - 1 = 129(-127) CY=1 OV=0255( -1) - 0( 0) - 1 = 254( -2) CY=0 OV=0255( -1) - 1( 1) - 1 = 253( -3) CY=0 OV=0255( -1) - 127( 127) - 1 = 127( 127) CY=0 OV=1255( -1) - 128(-128) - 1 = 126( 126) CY=0 OV=0255( -1) - 129(-127) - 1 = 125( 125) CY=0 OV=0255( -1) - 255( -1) - 1 = 255( -1) CY=1 OV=0 You can change #if 0 to #if 1 to use the sign-comparison-based method for overflow calculation. The result will be the same. At first glance it's a bit surprising that the sign-based method takes care of the carry-in too. Please note that by using my method in which I calculate all carry-ins into bits 0 through 7, you also get for free the value of the half-carry flag (carry from bit 3 to bit 4) that's needed for the DAA instruction. EDIT: I've added a function for subtraction with borrow (SBC/SBB instruction) and results for it. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8034566', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/902544/']} | jdg_83951 |
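As a cross-check of the tables above, here is a small Python sketch (my own illustration, not the C program the answer refers to) that recomputes the result, carry flag and overflow flag of an 8-bit add-with-carry. CY is the carry out of bit 7, and OV is the XOR of the carry into bit 7 with the carry out of bit 7, i.e. the same carry-in-based definition the answer describes:

def adc8(a, b, carry_in):
    # 8-bit add-with-carry: returns (result, CY, OV)
    total = a + b + carry_in
    result = total & 0xFF
    cy = (total >> 8) & 1                                        # carry out of bit 7
    carry_into_7 = ((a & 0x7F) + (b & 0x7F) + carry_in) >> 7     # carry out of the low 7 bits
    ov = carry_into_7 ^ cy                                       # signed overflow
    return result, cy, ov

# reproduces e.g. the row "127 + 1 + 1 = 129(-127) CY=0 OV=1"
print(adc8(127, 1, 1))    # -> (129, 0, 1)
# and "128 + 128 + 1 = 1( 1) CY=1 OV=1"
print(adc8(128, 128, 1))  # -> (1, 1, 1)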
stackexchange | llm_judgeable_groundtruth_similarity | 123620 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Are they related to harmonic series in any way? Or something else? Wikipedia didn't help.
Now provide the response and nothing else.
| Consider a sheet of skin stretched into a flat drum head and drummed upon. When the drum head is in vibration, let $f(x,y,t)$ be the height of the drum head at position $(x,y)$ and time $t$. Then $f$ obeys the wave equation :$$\frac{\partial^2}{\partial t^2} f = c^2 \left( \frac{\partial^2}{\partial x^2} f + \frac{\partial^2}{\partial y^2} f \right) \quad (\ast) $$where $c$ is a physical constant related to things like how tight the skin is stretched and what it is made out of. Such a solution must also obey the physical constraint that there is no motion at the boundary of the drum, where the skin is nailed down. Every sound can be composed into its overtones . A pure overtone with frequency $\omega$ corresponds to a solution to the wave equation which looks like $f(x,y,t) = g(x,y) \cos(\omega t+b)$ where $$- \frac{\omega^2}{c^2} g =\frac{\partial^2}{\partial x^2} g + \frac{\partial^2}{\partial y^2} g \quad (\ast \ast).$$Therefore, to understand the sound of a drum, one should figure out for which $\omega$ the PDE $(\ast \ast)$ has solutions which are zero on the boundary of the drum. This is called computing the spectrum of the drum, and a property of the drum which depends only on these $\omega$'s is called a property which one "can hear". The lowest frequency, which will give the fundamental tone of the drum, will correspond to the lowest nonzero $\omega$ for which $(\ast \ast)$ has solutions. Of course, $(\ast \ast)$ always has the solution that $g$ is a constant and $\omega =0$. OK, so far that made sense. Now the terminology does something illogical. The name "harmonic" is attached not to the lowest nontrivial frequency, but to the zero frequency. That is to say, $g$ is called "harmonic" if it obeys$$0=\frac{\partial^2}{\partial x^2} g + \frac{\partial^2}{\partial y^2} g \quad (\ast \ast \ast).$$I don't know the actual history here, but I think of this as a form of mathematical obtuseness. "You musicians want to study the lowest frequency of vibration? Well you can't get lower than zero!" The actual physical question addressed by $(\ast \ast \ast)$ is "what are the possible stable shapes for a drumhead, if the boundary is not planar? So, if the rim of my drum varies in height, but I tack the drumhead to it anyway, what shape will the drumhead sit at when we're not pounding on it? This is the Dirichlet problem for the Laplace equation ; if I give you the values of a harmonic function on the boundary, what does the interior look like? | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/123620', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/6690/']} | jdg_83952 |
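To make $(\ast\ast)$ concrete, here is one worked eigenmode; the rectangular shape is my own choice for illustration, not something taken from the answer above. For a rectangular membrane $[0,L]\times[0,W]$ clamped along its edge, the function $$g_{mn}(x,y)=\sin\left(\frac{m\pi x}{L}\right)\sin\left(\frac{n\pi y}{W}\right),\qquad m,n=1,2,\dots$$ vanishes on the boundary and satisfies $$\frac{\partial^2 g_{mn}}{\partial x^2}+\frac{\partial^2 g_{mn}}{\partial y^2}=-\pi^2\left(\frac{m^2}{L^2}+\frac{n^2}{W^2}\right)g_{mn},$$ so comparing with $(\ast\ast)$ gives the allowed frequencies $$\omega_{mn}=c\,\pi\sqrt{\frac{m^2}{L^2}+\frac{n^2}{W^2}},$$ with the fundamental tone at $m=n=1$. The only member of this family with $\omega=0$ is the zero function, which matches the remark that the "harmonic" case $(\ast\ast\ast)$ sits at zero frequency and is instead governed by the boundary data.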
stackexchange | llm_judgeable_groundtruth_similarity | 273161 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
New to special relativity, I am trying to understand and compute an example of the paradox of symmetrical time dilation explained in http://en.wikipedia.org/wiki/Minkowski_diagram , that "A second observer, having moved together with the clock from O to B, will argue that the other clock has reached only C until this moment and therefore this clock runs slower." I've tried to check for this already being answered, but couldn't find quite the same problem. Sorry if I missed it... If two observers move in opposite directions at the same speed (say 0.5c) from a resting observer, how will the clock of the symmetrical system seem to tick for the opposite observer ? All observers start at t0=0, x=0. I apply c=1At times resp t1=0,5 and t2=1, the positions of the two moving obs are + and - 0,25 and 0,5 in the resting frame of ref.When I compute their position and time in their own frames, I logically obtain:x'=0 at all times (observers at rest in own ref), t'1=0,433 ; t'2=0,86. Their clocks run slower than the clock in rest frame. All fine (i think). So now I proceed to compute the relative speed of one of the moving observers with respect to the other one. I get 0,8c, which seems to be correct. Subsequently, I apply Lorentz transform once more to compute the x''s and t''s of the observer moving at 0,8c in the frame of ref of its companion, using [x'1,t'1], and [x'2,t'2]. I obtain x''1 = 0,577; t''1 = 0,72x''2 = 1,154; t''2 = 1,44 Being new to relativity, I would have expected the clock of the observer moving away at 0.8c to be slower, but it doesn't seem to work that way... Moreover, I checked the Minkowsky invariants, and they are invariant, so I surmise that my calculations could be right, but that my brains don't interpret the result correctly. Where is my mistake? In the way I apply the Lorentz transform, or in my interpretation of the result?
Now provide the response and nothing else.
| Say you have a gas of atoms moving around in a box with some velocity. If the laser is placed at one side of the box, the photons' frequencies will be blue-shifted w.r.t. the laser if they're moving toward it, and red-shifted if they're moving away. So if you tune the laser to just above an atomic transition, the laser light will be transparent to the red-shifted atoms, and absorbed by the blue-shifted ones. When an atom absorbs a photon, it is slowed down, since the photon and the atom make a head-on collision (sort of; the point being the momentum vectors have components that oppose each other along the laser-atom axis). At some later time it will reemit the photon in some random direction. If it emits the photon in the direction opposite its velocity (backwards), its momentum will be boosted to the same value it had before the absorption. However, any other direction of emission will give it a boost that is <= the momentum lost through the absorption. So to summarize, absorption will only occur for atoms moving toward the laser. Once absorbed, they will be slowed down. With a very high probability, the subsequent emission be in some direction such that the net effect of absorption + emission is a negative change in momentum. Since the doppler shift is less with lower velocities, you have to keep detuning the laser as you cool. Edit : @QuantumAI, I somehow didn't see your answer when I wrote mine. Edit 2 : @Rococo gave a very nice, concise explanation. Since you're still confused, let's take a look at a concrete example. Say your atom initially has momentum $\mathbf{p}_{at,0} = p_i\mathbf{\hat x}$. And say, for the sake of simplicity that the photons have momentum $\mathbf{p}_{ph} = -\frac{p_i}{4}\mathbf{\hat x}$. Here the photon is incident from $-\mathbf{x}$, yielding a head-on collision with the atom traveling in $\mathbf{x}$ direction. The initial magnitude of the momentum of the atom is clearly $$\|\mathbf{p}_{at,0}\| = \sqrt{p_i^2} = p_i$$ The momentum after the absorption is $$\mathbf{p}_{at,1} = \frac{3p_i}{4}\mathbf{\hat x}$$ with magnitude $$\|\mathbf{p}_{at,1}\| = \frac{3p_i}{4}$$ Let's say now the emission occurs in some other direction, measured by an angle $\theta$ w.r.t. the horizontal. The momentum has the same magnitude as the photon, but the direction is different. $$\mathbf{p}_{ph}' = \frac{p_i}{4}\left[\cos{\theta}~\mathbf{\hat x} + \sin{\theta}~\mathbf{\hat y}\right]$$ The final momentum of the atom is now $$\mathbf{p}_{at,2} = \frac{p_i}{4}\left[ (3 + \cos{\theta}) ~\mathbf{\hat x} + \sin{\theta} ~\mathbf{\hat y}\right]$$ and the magnitude is $$\|\mathbf{p}_{at,2}\| = \sqrt{\frac{p_i^2}{16}\left[ (9 + 6\cos{\theta} + \cos^2{\theta}) + \sin^2{\theta} \right]} = \sqrt{\frac{p_i^2}{16}(10 + 6\cos{\theta})}$$ Since $\cos{\theta}$ takes on values between $-1$ and $1$, you can see that the magnitude can take on values between $\frac{p_i}{2}$ and $p_i$. The bottom line here is that for any angle of emission $\neq 0$ , corresponding to an emission in the direction opposite the atom's velocity, the magnitude of the final momentum is less than that of the initial . Note that because this example looks at the case of a perfect head-on collision, it is idealized and therefore fairly unrealistic. Nevertheless, even for a different angle of incidence the concept is the same. As mentioned in the comments, if you average over a large number of events, the net effect is a reduction in momentum. 
If this is still not clear, please ask a specific question about what is unclear to you. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/273161', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/126612/']} | jdg_83953 |
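As a quick numerical sanity check of the closed-form magnitude derived in the worked example above, the following Python sketch (my own addition, with $p_i$ set to 1) confirms that $\|\mathbf{p}_{at,2}\|$ stays between $p_i/2$ and $p_i$, and is smaller than $p_i$ on average over the emission angle:

import numpy as np

p_i = 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 100001)

# |p_at,2| = sqrt(p_i^2/16 * (10 + 6*cos(theta))), as derived above
p_final = np.sqrt(p_i**2 / 16.0 * (10.0 + 6.0 * np.cos(theta)))

print(p_final.min(), p_final.max())   # approximately 0.5 and 1.0, i.e. p_i/2 and p_i
print(p_final.mean() < p_i)           # True: averaged over angles the atom ends up slower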
stackexchange | llm_judgeable_groundtruth_similarity | 317760 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that on the client side (JavaScript) you can use window.location.hash, but I could not find any way to access it from the server side. I'm using ASP.NET.
Now provide the response and nothing else.
| We had a situation where we needed to persist the URL hash across ASP.Net post backs. As the browser does not send the hash to the server by default, the only way to do it is to use some Javascript: When the form submits, grab the hash ( window.location.hash ) and store it in a server-side hidden input field Put this in a DIV with an id of " urlhash " so we can find it easily later. On the server you can use this value if you need to do something with it. You can even change it if you need to. On page load on the client , check the value of this this hidden field. You will want to find it by the DIV it is contained in as the auto-generated ID won't be known. Yes, you could do some trickery here with .ClientID but we found it simpler to just use the wrapper DIV as it allows all this Javascript to live in an external file and be used in a generic fashion. If the hidden input field has a valid value, set that as the URL hash ( window.location.hash again ) and/or perform other actions. We used jQuery to simplify the selecting of the field, etc ... all in all it ends up being a few jQuery calls, one to save the value, and another to restore it. Before submit: $("form").submit(function() { $("input", "#urlhash").val(window.location.hash);}); On page load: var hashVal = $("input", "#urlhash").val();if (IsHashValid(hashVal)) { window.location.hash = hashVal;} IsHashValid() can check for " undefined " or other things you don't want to handle. Also, make sure you use $(document).ready() appropriately, of course. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/317760', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4191/']} | jdg_83954 |
stackexchange | llm_judgeable_groundtruth_similarity | 6023549 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If i have an ImageView that fills the screen. The ImageView background is set to green color. I place a bitmap in the ImageView, keeping bitmap proportions. A portrait photo in this layout will show green on both left and right side (Phone orientation= portrait). Now, How do i get the left side x/y position of the edge when the green ends and bitmap begins. The background for this endeavor project is that i want to write text on the image and save the image back to a new image with the text. The problem is.. Since I scale the image inSampleSize = 4; and the ImageView shrinking it even more, saving this new picture will give a small small approx 250x350 image. What i want is to use the x/y positions and transfer the written text to the original inSampleSize = 4 image or to the sdcard 1500x3000 image I know and read other questions about this that i have to "Do the math calculations" myselfI just need this small answer. I forgot i can take a screenshot to clarify. this is what it look like: (I get a new pen on pressing the button "pen" each pen hold its own unique text and position on the screen Here is the imageview import java.util.HashMap; import java.util.UUID; import android.app.Activity; import android.content.Context; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; import android.graphics.Point; import android.graphics.Rect; import android.graphics.Typeface; import android.graphics.drawable.BitmapDrawable; import android.graphics.drawable.Drawable; import android.os.Environment; import android.util.AttributeSet; import android.view.Display; import android.view.MotionEvent; import android.widget.EditText; import android.widget.ImageView; public class DrawView2 extends ImageView { private HashMap<String, ColorBall2> HashBall ; private String balID = ""; // variable to know what ball is being dragged public final String PTPPSERVICE_DERECTORY = "/PTPPservice/"; private Bitmap bitmap; private EditText ed; private Paint paint = new Paint(); private Paint paint2 = new Paint(); private Paint pTouch = new Paint(); private EditText addtext; private Context ctx; private String imagePath; private boolean removeBall = false; int viewWidth = 0; int viewHeight = 0; double bitmapHight =0; double bitmapWidth =0; double ratio =0; double startX = 0; double endX= 0; double startY= 0; double endY = 0; public DrawView2(Context context, AttributeSet atts,String image1) { super(context, atts); this.ctx = context; this.imagePath = image1; setFocusable(true); paint.setStyle(Paint.Style.FILL_AND_STROKE); paint.setColor(Color.BLACK); paint.setTypeface(Typeface.DEFAULT_BOLD); paint2.setStyle(Paint.Style.FILL_AND_STROKE); paint2.setColor(Color.RED); addtext = (EditText) ((Activity) ctx).findViewById(R.id.edittextaddtext); String filePath = Environment.getExternalStorageDirectory().toString() + imagePath; BitmapFactory.Options options = new BitmapFactory.Options(); options.inSampleSize = 4; bitmap = BitmapFactory.decodeFile(filePath,options); // SAVE RATIO int x = bitmap.getWidth(); int y = bitmap.getHeight(); if(y>x) ratio = ((double)y)/x; if(x>y) ratio = ((double)x)/y; if(y==x) ratio = 1; Drawable bit = new BitmapDrawable(bitmap); setImageDrawable(bit); } public double getRatio() { return ratio; } public HashMap<String, ColorBall2> getHashBall() { return HashBall; } // RETURN THE ON SCREEN RESIZED BITMAP public double getOnScreenBitmapHight(){ return bitmapHight; } public double 
getOnScreenBitmapWidth(){ return bitmapWidth; } // BITMAP SIZE public int getBitmapHight(){ return bitmap.getHeight(); } public int getBitmapWidth(){ return bitmap.getWidth(); } // GET IMAGEVIEW HIGHT WIDTH public int getViewWidth() { return viewWidth; } public int getViewHeight() { return viewHeight; } // START END X Y public double getStartX() { return startX; } public double getEndX() { return endX; } public double getStartY() { return startY; } public double getEndY() { return endY; } // SET BALL TEXT public void addTextToBall(String text) { if(balID != "") HashBall.get(balID).setText(text); } // PATH public String getImagePath() { return imagePath; } // THE ORIGINAL INSAMPELSIZE=4 BITMAP public Bitmap getBitmap() { return bitmap; } // STOP DRAWAING THE BALL public void removeBall(boolean value) { removeBall = value; } // THE RECT THAT RETURN WRONG VALUE public Rect getRect(){ Rect r = getDrawable().copyBounds(); int drawLeft = r.left; int drawTop = r.top; int drawRight = r.right; int drawBottom = r.bottom; return r; } @Override protected void onSizeChanged(int xNew, int yNew, int xOld, int yOld){ super.onSizeChanged(xNew, yNew, xOld, yOld); viewWidth = xNew; viewHeight = yNew; } public void addBall(){// HERE I TRY TO CALCULATE THE BOUNDS LEFT,RIGHT,TOP AND BOTTOM EDGE OF THE BITMAP//NOT GOING THAT GOOD if(HashBall == null) HashBall = new HashMap<String,ColorBall2>(); //X double drawAbleWidth = viewWidth/ratio; startX = (viewWidth-drawAbleWidth)/2; double drawAbleHight = viewHeight/ratio; startY = drawAbleHight/2; int ballY = (viewHeight/2); int ballX = (viewWidth/2); Point point1 = new Point(); point1.x = (int) ballX; point1.y = (int) ballY; String uuId = UUID.randomUUID().toString(); HashBall.put(uuId,(new ColorBall2(ctx,R.drawable.pen1, point1,uuId))); } @Override protected void onDraw(Canvas canvas) { super.onDraw(canvas); //canvas.drawCircle(10,10,10,null); if(!removeBall && HashBall != null){ for (String key : HashBall.keySet()) { //System.out.println("Key: " + key + ", Value: " + map.get(key)); if(addtext!=null) //canvas.drawCircle(HashBall.get(key).getX(), HashBall.get(key).getY(), 10, paint2); canvas.drawBitmap(HashBall.get(key).getBitmap(), HashBall.get(key).getX()-10, HashBall.get(key).getY()-80, null); canvas.drawText (HashBall.get(key).getText() + " X="+HashBall.get(key).getX() + " Y="+HashBall.get(key).getY() , HashBall.get(key).getX(), HashBall.get(key).getY(), paint); } } } // events when touching the screen @Override public boolean onTouchEvent(MotionEvent event) { int eventaction = event.getAction(); int X = (int)event.getX(); int Y = (int)event.getY(); switch (eventaction ) { case MotionEvent.ACTION_DOWN: // touch down so check if the finger is on a ball balID = ""; for (String key : HashBall.keySet()) { // check if inside the bounds of the ball (circle) // get the center for the ball int centerX = HashBall.get(key).getX() + 15; int centerY = HashBall.get(key).getY() + 15; // calculate the radius from the touch to the center of the ball double radCircle = Math.sqrt( (((centerX-X)*(centerX-X)) + (centerY-Y)*(centerY-Y))); // if the radius is smaller then 23 (radius of a ball is 22), then it must be on the ball if (radCircle < 33){ balID = HashBall.get(key).getID(); addtext.setText(HashBall.get(key).getText()); break; } } break; case MotionEvent.ACTION_MOVE: // touch drag with the ball // move the balls the same as the finger if (balID != "") { HashBall.get(balID).setX(X-25); HashBall.get(balID).setY(Y-25); } break; case MotionEvent.ACTION_UP: // touch drop - just do 
things here after dropping break; } // redraw the canvas invalidate(); return true; } }
Now provide the response and nothing else.
| If you know the ratios you can just derive the width of the margin that will be placed to the side of the image. // These holds the ratios for the ImageView and the bitmapdouble bitmapRatio = ((double)bitmap.getWidth())/bitmap.getHeight()double imageViewRatio = ((double)imageView.getWidth())/imageView.getHeight() Now, if the bitmapRatio is larger than the imageViewRatio, you know that this means that the bitmap is wider than the imageview if they have an equal height. In other words, you'll have blanks on the top & bottom. Conversely, if bitmapRatio is smaller than imageViewRatio then you'll have blanks to the left and right. From this you can get one of the co-ordinates pretty trivially as it'll be 0! if(bitmapRatio > imageViewRatio){ drawLeft = 0;}else{ drawTop = 0;} To get the other co-ordinate, think about the second case where you have space left & right. Here the heights of the the bitmap and imageView are equal and thus the ratio between the widths is equal to ratio between the ratios. You can use this to figure out width of the bitmap as you know the width of the imageView. Similarly you can figure out the heights if the width are equal, except that you have to use the inverse of the ratio between the ratios as the width is inversely proportional the the ratio: if(bitmapRatio > imageViewRatio){ drawLeft = 0; drawHeight = (imageViewRatio/bitmapRatio) * imageView.getHeight();}else{ drawTop = 0; drawWidth = (bitmapRatio/imageViewRatio) * imageView.getWidth();} Once you have the width or height of the bitmap, getting the space to the side is simple, it is just half the difference between the bitmap and imageView width or height: if(bitmapRatio > imageViewRatio){ drawLeft = 0; drawHeight = (imageViewRatio/bitmapRatio) * imageView.getHeight(); drawTop = (imageView.getHeight() - drawHeight)/2;}else{ drawTop = 0; drawWidth = (bitmapRatio/imageViewRatio) * imageView.getWidth(); drawLeft = (imageView.getWidth() - drawWidth)/2;} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6023549', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/538837/']} | jdg_83955 |
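The same arithmetic can be written out as a short stand-alone function; this is a Python sketch of the answer's formulas (the names follow the answer, and the 1000x2000-in-480x800 example is my own):

def fit_center_bounds(bitmap_w, bitmap_h, view_w, view_h):
    # offsets and size of a bitmap drawn centered and aspect-fitted inside a view
    bitmap_ratio = bitmap_w / bitmap_h
    view_ratio = view_w / view_h
    if bitmap_ratio > view_ratio:
        # bitmap relatively wider: blank bars on top and bottom
        draw_w = view_w
        draw_h = (view_ratio / bitmap_ratio) * view_h
        draw_left = 0.0
        draw_top = (view_h - draw_h) / 2.0
    else:
        # bitmap relatively taller: blank bars left and right
        draw_h = view_h
        draw_w = (bitmap_ratio / view_ratio) * view_w
        draw_top = 0.0
        draw_left = (view_w - draw_w) / 2.0
    return draw_left, draw_top, draw_w, draw_h

# a 1000x2000 portrait bitmap inside a 480x800 view leaves bars left and right
print(fit_center_bounds(1000, 2000, 480, 800))   # (40.0, 0.0, 400.0, 800.0)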
stackexchange | llm_judgeable_groundtruth_similarity | 37546656 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is mostly a lack of understanding of oauth2 and probably not specific to electron, however I'm trying to wrap my head around how someone would handle an oauth2 redirect url from a desktop platform, like electron? Assuming there is no webservice setup as part of the app, how would a desktop application prompt a user for credentials against a third party oauth2 service, and then authenticate them correctly?
Now provide the response and nothing else.
| Electron JS runs a browser instance on your localhost. Therefore, you can handle an oauth2 redirect url by supplying a callback url of https:localhost/whatever/path/you/want. Just be sure to white list it on the oauth2 app registration page for whatever service you are using. Example: var authWindow = new BrowserWindow({ width: 800, height: 600, show: false, 'node-integration': false, 'web-security': false});// This is just an example url - follow the guide for whatever service you are usingvar authUrl = 'https://SOMEAPI.com/authorize?{client_secret}....'authWindow.loadURL(authUrl);authWindow.show();// 'will-navigate' is an event emitted when the window.location changes// newUrl should contain the tokens you needauthWindow.webContents.on('will-navigate', function (event, newUrl) { console.log(newUrl); // More complex code to handle tokens goes here});authWindow.on('closed', function() { authWindow = null;}); A lot of inspiration taken from this page: http://manos.im/blog/electron-oauth-with-github/ | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37546656', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/449688/']} | jdg_83956 |
stackexchange | llm_judgeable_groundtruth_similarity | 609993 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Prove the range of the following two functions. 1) f(x)=$(\frac{1-x^2}{1+x^2})$ for each x $ \in$ R. 2) g(x)=$\sqrt\frac{1-x}{1+x}$ for each x $ \in$ (-1,1]. Since f(x) is not one to one f inverse can't be computed.In this sort of situations how can I prove what the range is ? In g(x) since this is one to one I computed the inverse which I get as g$^{-1}$(x)=$\pm$$\sqrt\frac{1-x^2}{1+x^2}$. Here should I take the negative square root also or should it only be the positive one? When I consider the domain of this inverse function since $\frac{1-x^2}{1+x^2}$>=0. I get 1>=x$^2$. Thus x $\in$ [-1,1]. Hence the rsnge of g is [-1,1] But I think this is wrong because in g , x $ \in$ (-1,1].When x=1 g(x)=0 and when x is close to -1, g(x) goes to infinity.Thus the range of g should be [0, infinity). What have I done wrong here ? Please help to find the range of these two functions.
Now provide the response and nothing else.
 | Let the characteristic of $D$ be $p$, so that $pa = 0$ for all $a \in D$ and $p$ is the smallest such positive integer. Suppose that $p$ is not a prime; then we can write $p = rs$ for some positive integers $r$ and $s$, both different from $1$. Take $a \in D$ with $a \neq 0$; then $a^2 \in D$ too. We have $pa^2 = 0$, which means $(rs)(aa) = 0$, i.e. $(ra)(sa) = 0$. Since $D$ is an integral domain, this implies $ra = 0$ or $sa = 0$. Suppose $ra = 0$. Then for every $x \in D$ we have $(ra)x = 0$, that is, $(a+a+\cdots+a)x = 0$ with $a$ added $r$ times; hence $(ax+ax+\cdots+ax) = 0$ with $ax$ added $r$ times, and likewise $a(x+x+\cdots+x) = 0$, i.e. $a(rx) = 0$. But $a \neq 0$ and $D$ is an integral domain, so $a(rx) = 0$ forces $rx = 0$. Thus $rx = 0$ for all $x \in D$ with $1 < r < p$, which contradicts the fact that $p$ is the smallest such integer. Similarly, $sa = 0$ leads to a contradiction. Hence $p = rs$ is not possible, and $p$ must be prime. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/609993', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/28077/']} | jdg_83957 |
stackexchange | llm_judgeable_groundtruth_similarity | 26944432 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am developing an app in Swift and I am using the following code to show a view controller on a button click:

let storyboard = UIStoryboard(name: "Main", bundle: nil)
var viewName: NSString = "websiteView"
let vc = storyboard.instantiateViewControllerWithIdentifier(viewName) as WebsiteViewController
self.showViewController(vc, sender: self)

It works perfectly when I test it on iOS 8, but on iOS 7, no matter what I do, I get the following error message. I read on a forum that self.showViewController was only available for iOS 8, but the compiler doesn't throw an error. Does anyone know if I can use it?

showViewController:sender:]: unrecognized selector sent to instance 0x7f9552e7cc50 2014-11-15 09:25:49.565 Throw[80595:613] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[Throw.test showViewController:sender:]: unrecognized selector sent to instance 0x7f9552e7cc50'
Now provide the response and nothing else.
 | The compiler didn't throw an error because you have the iOS 8.0 (or 8.1) SDK selected. That method is indeed iOS 8.0+ only. You can choose to use it if you call it conditionally, like so:

if (self.respondsToSelector(Selector("showViewController:sender:"))) {
    self.showViewController(vc, sender: self)
} else {
    // An example of the alternative if you are using a navigation controller
    self.navigationController?.pushViewController(vc, animated: true)
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26944432', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/852438/']} | jdg_83958 |
stackexchange | llm_judgeable_groundtruth_similarity | 1189439 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose we have $f : \mathbb{R} \to \mathbb{R}$ continuous and $$\lim\limits_{x \to \pm \infty} f(x) = \infty.$$ Then must $f$ have a minimum on $\mathbb{R}$? Intuitively it seems so, and it's easy to prove for some arbitrary closed interval in $\mathbb{R}$ with the Extreme Value Theorem, but how can we apply that to an open interval like $(-\infty, \infty)$?
Now provide the response and nothing else.
| There exists a number $M$ with the property that $|x| > M$ implies $f(x) > f(0)$. This follows from the definition of the limit at $\pm \infty$. $f$ attains a minimum on the interval $[-M,M]$. If this minimum occurs at $x_0$ you have $f(x_0) \le f(x)$ for all $|x| \le M$ and $f(x_0) \le f(0) \le f(x)$ for all $|x| > M$. Thus $f$ has a global minimum at $x_0$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1189439', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/223536/']} | jdg_83959 |
stackexchange | llm_judgeable_groundtruth_similarity | 29183503 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm coding a server-client application in Java and I need to implement a local database on the server side, and I decided to go for the H2 database engine. One more thing to add is that I use a TCP connection to start and run the database. This is what I put together so far:

Class.forName("org.h2.Driver");
Server server = Server.createTcpServer(DB_PATH).start();
Connection currentConn = DriverManager.getConnection(DB_PATH, DB_USER, DB_PASSWORD);

The connection string is jdbc:h2:tcp://localhost/~/test. That piece of code throws an exception: Feature not supported: "jdbc:h2:tcp://localhost/~/test" [50100-176]. I followed this article.
Now provide the response and nothing else.
| Something like this should work Server server = null; try { server = Server.createTcpServer("-tcpAllowOthers").start(); Class.forName("org.h2.Driver"); Connection conn = DriverManager. getConnection("jdbc:h2:tcp://localhost/~/stackoverflow", "sa", ""); System.out.println("Connection Established: " + conn.getMetaData().getDatabaseProductName() + "/" + conn.getCatalog()); } catch (Exception e) { e.printStackTrace(); } and the output is Connection Established: H2/STACKOVERFLOW This has been tested with h2-1.4.184 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29183503', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1950812/']} | jdg_83960 |
stackexchange | llm_judgeable_groundtruth_similarity | 2617931 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to find a method to include some files based on the current file path. For example: I have "website.com/templates/name1/index.php", and this "index.php" should be a unique file that I will use in many different directories at different depths, so I want to make the code universal so I don't need to change anything inside this file. So if this "index.php" is located at "website.com/templates/name1/index.php", then it should include the file located here: "website.com/content/templates/name1/content.php". Another example: "website.com/templates/name2/index.php" should then include the file located here: "website.com/content/templates/name2/content.php". Also I want to avoid the " Warning: include_once() [function.include-once]: http:// wrapper is disabled in the server configuration by allow_url_include=0 " kind of error, because it is disabled and unsafe. Is there a way to achieve this? Thank you!
Now provide the response and nothing else.
| I think you need to use __FILE__ (it has two underscores at the start and at the end of the name) and DIRECTORY_SEPARATOR constants for working with files based on the current file path. For example: <?php // in this var you will get the absolute file path of the current file $current_file_path = dirname(__FILE__); // with the next line we will include the 'somefile.php' // which based in the upper directory to the current path include(dirname(__FILE__) . DIRECTORY_SEPARATOR . '..' . DIRECTORY_SEPARATOR . 'somefile.php'); Using DIRECTORY_SEPARATOR constant is more safe than using "/" (or "\" ) symbols, because Windows and *nix directory separators are different and your interpretator will use proper value on the different platforms. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2617931', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/302577/']} | jdg_83961 |
stackexchange | llm_judgeable_groundtruth_similarity | 27386177 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to validate if string has numeric or not. I want to see if string has character or more that are not allowed such as not numeric and/or one character "." my codes are //this code is call function (is_number). sTempArray[3] is amount such as $00.00if(!is_number(sTempArray[3])){ cout << "Your amount have letter(s) are not allowed!;}//the is_number is function and will run if anyone call this function.bool MyThread::is_number(const string& data){ string::const_iterator it = data.begin(); while (it != data.end() && std::isdigit(*it)) { ++it; } return !data.empty() && it == data.end();} I want to validate the string is allowed. For example, string has a value, it is 500.00 and it will be allowed but it always be denied because period character is in the string. Another example, string has a value, it is 500.00a and it should be not allowed.
Now provide the response and nothing else.
| This is a common problem between different Jekyll environments. Some explanations We need to understand site.url and site.baseurl and in which situation we need them.Those variables don't serve the same purpose. site.url By default, this variable is only used in page head for the canonical header and the RSS link . It's also used in the xml feed to point to site resources as the software that will manage this feed doesn't know resource's urls. This variable is only necessary for external systems. site.baseurl This variable indicates the root folder of your Jekyll site. By default it is set to "" (empty string). That means that your Jekyll site is at the root of http://example.com . If your Jekyll site lives in http://example.com/blog , you have to set site.baseurl to /blog ( note the slash ). This will allow assets (css, js) to load correctly. See how assets are loaded in you head : <link rel="stylesheet" href="{{ "/css/main.css" | prepend: site.baseurl }}"> that can also be : <link rel="stylesheet" href="{{ site.baseurl }}/css/main.css"> Working in different environments Now you have to test your site locally and to deploy it in production. Sometimes, the baseurl is different and the jekyll build may not work out of the box in one of those environment. Here we have two solutions : Use jekyll serve Let's imagine that your site lives in a github repository and is served at https://username.github.io/myProject . You can setup the baseurl to /myProject . and test your site locally with jekyll serve , your site will be served at http://127.0.0.1:4000/myProject/ Use multiple configuration files If, for one reason or another, you cannot use jekyll serve , you can set a configuration file for both environment and jekyll build depending on where you are deploying. Let's say we have the local site served at http://localhost and the production site served at https://username.github.io/myProject . We leave the _config.yml with url: https://username.github.io and baseurl: /myProject We create a new _config_dev.yml with only url: https://localhost and baseurl: "" Now to test locally : jekyll build --config _config.yml,_config_dev.yml or jekyll build --config _config.yml,_config_dev.yml --watch When pushed on production, the jekyll build command will use the default _config.yml . | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/27386177', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4077207/']} | jdg_83962 |
stackexchange | llm_judgeable_groundtruth_similarity | 19430145 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We are using Oracle 10g and one of the tables takes a lot of time when we query/delete data. This table contains around 10 million records. Recently we discovered that ROW_MOVEMENT is disabled on this table, and we want to understand the following: What performance gain can we get if we enable ROW_MOVEMENT? Are there any downsides to enabling ROW_MOVEMENT? What triggers row movement? How does Oracle decide it needs to move rows? Any help would be highly appreciated. Thanks in advance!
Now provide the response and nothing else.
| Row movement is mainly applied to partition tables . It allows rows to be moved across partitions. With row movement disabled, which is the default, you can't move a row with an update: SQL> CREATE TABLE part_table (ID NUMBER) 2 PARTITION BY RANGE (ID) 3 (PARTITION p0 VALUES LESS THAN (1), 4 PARTITION p1 VALUES LESS THAN (MAXVALUE));Table createdSQL> INSERT INTO part_table VALUES (0);1 row insertedSQL> UPDATE part_table SET ID = 2;UPDATE part_table SET ID = 2ORA-14402: updating partition key column would cause a partition change When you allow row movement, you can move rows with an update: SQL> ALTER TABLE part_table ENABLE ROW MOVEMENT;Table alteredSQL> UPDATE part_table SET ID = 2;1 row updated This feature doesn't affect performance in most cases: the rows are stored and queried in exactly the same manner whether the feature is enabled or not. However, when row movement is enabled, the rows can be physically moved (similar to delete+insert) with ALTER TABLE SHRINK SPACE for example. This may in turn affect index cluster factor for instance, which may affect the performance of some queries. Row movement is disabled by default because it implies that the rowid of a row may change, which is not the usual behaviour in Oracle. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19430145', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2584700/']} | jdg_83963 |
stackexchange | llm_judgeable_groundtruth_similarity | 342652 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I hope it is appropriate to ask this question here: One formulation of the abc-conjecture is $$ c < \text{rad}(abc)^2$$ where $\gcd(a,b)=1$ and $c=a+b$ . This is equivalent to ( $a,b$ being arbitrary natural numbers): $$ \frac{a+b}{\gcd(a,b)} < \text{rad}(\frac{ab(a+b)}{\gcd(a,b)^3})^2$$ Let $d_1(a,b) = 1- \frac{\gcd(a,b)^2}{ab}$ which is a proven metric on natural numbers.Let $d_2(a,b) = 1- 2 \frac{\gcd(a,b)}{a+b}$ , which I suspect to be a metric on natural numbers, but I have not proved it yet. Let $$d(a,b) = d_1(a,b)+d_2(a,b)-d_1(a,b)d_2(a,b) = 1-2\frac{\gcd(a,b)^3}{ab(a+b)}$$ Then we get the equivalent formulation of the inequality above: $$\frac{2}{1-d_2(a,b)} < \text{rad}(\frac{2}{1-d(a,b)})^2$$ which is equivalent to : $$\frac{2}{1-d_2(a,b)} < \text{rad}(\frac{1}{1-d_1(a,b)}\cdot\frac{2}{1-d_2(a,b)} )^2$$ My question is if one can prove that $d_2$ and $d$ are distances on the natural numbers (without zero)? Result : By the answer of @GregMartin, $d_2$ is a metric. By the other answer $d$ is also a metric. Edit :By "symmetry" in $d_1$ and $d_2$ , this interpretation also suggests that the following inequality is true , which might be trivial to prove or very difficult or might be wrong and may be of use or not in number theory: $$\frac{1}{1-d_1(a,b)} < \text{rad}(\frac{2}{1-d(a,b)})^2$$ which is equivalent to $$ \frac{ab}{\gcd(a,b)^2} < \text{rad}(\frac{ab(a+b)}{\gcd(a,b)^3})^2$$ (This is not easy to prove, as the $abc$ conjecture $c=a+b < ab < \text{rad}(abc)^2$ would follow for all $a,b$ such that $a+b < ab$ .) Second edit :Maybe the proof that $d_2,d$ are distances can be done with some sort of metric transformation, for example maybe with a Schoenberg transform (See 3.1, page 8 in https://arxiv.org/pdf/1004.0089.pdf ) The idea, that this might be proved with a Schoenberg transform comes from the fact that: $$d_1(a,b) = 1-\exp(-\hat{d}(a,b))$$ so $d_1$ is a Schoenberg transform of $\hat{d}(a,b) = \log( \frac{ab}{\gcd(a,b)^2}) = \log( \frac{\text{lcm}(a,b)}{\gcd(a,b)})$ which is proved to be a metric (see Encyclopedia of Distances, page 198, 10.3 ) Third edit :Here is some Sage Code to test the triangle inequality for triples (a,b,c) up to 100: def d1(a,b): return 1-gcd(a,b)**2/(a*b)def d2(a,b): return 1-2*gcd(a,b)/(a+b)def d(a,b): return d1(a,b)+d2(a,b)-d1(a,b)*d2(a,b)X = range(1,101)for a in X: for b in X: for c in X: if d2(a,c) > d2(a,b)+d2(b,c): print "d2",a,b,c if d(a,c) > d(a,b)+d(b,c): print "d",a,b,c so far with no counterexample. Related: An inequality inspired by the abc-conjecture and two questions
Now provide the response and nothing else.
| $d_2$ is indeed a metric. Abbreviating $\gcd(m,n)$ to $(m,n)$ , we need to show that \begin{align*}1-\frac{2(a,c)}{a+c} &\le 1-\frac{2(a,b)}{a+b} + 1-\frac{2(b,c)}{b+c}\end{align*} or equivalently \begin{align*}\frac{2(a,b)}{a+b} + \frac{2(b,c)}{b+c} &\le 1 + \frac{2(a,c)}{a+c}.\end{align*} Furthermore, we may assume that $\gcd(a,b,c)=1$ , since we can divide everything in sight by that factor. Note that if $a=(a,b)\alpha$ and $b=(a,b)\beta$ with $(\alpha,\beta)=1$ , then $\frac{2(a,b)}{a+b} = \frac2{\alpha+\beta}$ . The only unordered pairs $\{\alpha,\beta\}$ for which this is at least $\frac12$ are $\{1,1\}$ , $\{1,2\}$ , and $\{1,3\}$ . Further, if neither $\frac{2(a,b)}{a+b}$ nor $\frac{2(b,c)}{b+c}$ is at least $\frac12$ , then the inequality is automatically valid because of the $1$ on the right-hand side. This leaves only a few cases to check. The case $\{\alpha,\beta\} = \{1,1\}$ (that is, $a=b$ ) is trivial. The case $\{\alpha,\beta\} = \{1,2\}$ (that is, $b=2a$ ) can be checked: we have $(a,c)=\gcd(a,2a,c)=1$ , and so the inequality in question is \begin{align*}\frac23 + \frac{2(2,c)}{2a+c} &\le 1 + \frac2{a+c},\end{align*} or equivalently $$\frac{(2,c)}{2a+c} \le \frac16 + \frac1{a+c};$$ there are only finitely many ordered pairs $(a,c)$ for which the left-hand side exceeds $\frac16$ , and they can be checked by hand. The proof for the case $\{\alpha,\beta\} = \{1,3\}$ (that is, $b=3a$ ) can be checked in the same way, as can the cases $a=2b$ and $a=3b$ . | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/342652', 'https://mathoverflow.net', 'https://mathoverflow.net/users/-1/']} | jdg_83964 |
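For readers who want to repeat the experiment from the question without Sage, here is an equivalent plain-Python brute-force check of the triangle inequality for $d_2$ and $d$ (a finite search, of course, not a proof; the bound N and the floating-point tolerance are my own choices):

from math import gcd
from itertools import product

def d2(a, b):
    return 1 - 2 * gcd(a, b) / (a + b)

def d(a, b):
    return 1 - 2 * gcd(a, b) ** 3 / (a * b * (a + b))

EPS = 1e-12   # tolerance for round-off
N = 60        # small bound so the triple loop stays quick
for a, b, c in product(range(1, N + 1), repeat=3):
    assert d2(a, c) <= d2(a, b) + d2(b, c) + EPS
    assert d(a, c) <= d(a, b) + d(b, c) + EPS
print("no counterexample up to", N)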
stackexchange | llm_judgeable_groundtruth_similarity | 31181651 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I read the WebSocket frames of a web page in a Chrome extension or Firefox add-on, in a way that cannot be detected by the page? Inspect WebSockets frames from a Chrome Dev Tools extension formulates a similar question, but developing an NPAPI plugin no longer makes sense because it will soon be removed.
Now provide the response and nothing else.
| Intercepting the WebSocket data is easy. Simply execute the following script before the page constructs the WebSocket. This snippet monkey-patches the WebSocket constructor: When a new WebSocket constructor is created, the snippet subscribes to the message event, from where you can do whatever you want with the data. This snippet is designed to be indistinguishable from native code so the modification cannot easily be detected by the page (however, see the remarks at the end of this post). (function() { var OrigWebSocket = window.WebSocket; var callWebSocket = OrigWebSocket.apply.bind(OrigWebSocket); var wsAddListener = OrigWebSocket.prototype.addEventListener; wsAddListener = wsAddListener.call.bind(wsAddListener); window.WebSocket = function WebSocket(url, protocols) { var ws; if (!(this instanceof WebSocket)) { // Called without 'new' (browsers will throw an error). ws = callWebSocket(this, arguments); } else if (arguments.length === 1) { ws = new OrigWebSocket(url); } else if (arguments.length >= 2) { ws = new OrigWebSocket(url, protocols); } else { // No arguments (browsers will throw an error) ws = new OrigWebSocket(); } wsAddListener(ws, 'message', function(event) { // TODO: Do something with event.data (received data) if you wish. }); return ws; }.bind(); window.WebSocket.prototype = OrigWebSocket.prototype; window.WebSocket.prototype.constructor = window.WebSocket; var wsSend = OrigWebSocket.prototype.send; wsSend = wsSend.apply.bind(wsSend); OrigWebSocket.prototype.send = function(data) { // TODO: Do something with the sent data if you wish. return wsSend(this, arguments); };})(); In a Chrome extension, the snippet can be run via a content script with run_at:'document_start' , see Insert code into the page context using a content script . Firefox also supports content scripts , the same logic applies (with contentScriptWhen:'start' ). Note: The previous snippet is designed to be indistinguishable from native code when executed before the rest of the page. The only (unusual and fragile) ways to detect these modifications are: Pass invalid parameters to the WebSocket constructor, catch the error and inspecting the implementation-dependent (browser-specific) stack trace. If there is one more stack frame than usual, then the constructor might be tampered (seen from the page's perspective). Serialize the constructor. Unmodified constructors become function WebSocket() { [native code] } , whereas a patched constructor looks like function () { [native code] } (this issue is only present in Chrome; in Firefox, the serialization is identical). Serialize the WebSocket.prototype.send method. Since the function is not bound, serializing it ( WebSocket.prototype.send.toString() ) reveals the non-native implementation. This could be mitigated by overriding the .toString method of .send , which in turn can be detected by the page by a strict comparison with Function.prototype.toString . If you don't need the sent data, do not override OrigWebSocket.prototype.send . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31181651', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1074519/']} | jdg_83965 |
stackexchange | llm_judgeable_groundtruth_similarity | 620775 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The formula for integration by parts is given by $$ \int uv'=uv-\int u'v$$As most of you know. The result is invariant if we use$v=v+c$, instead of $v$ where$c$ is some arbitary constant. $$ \int uv'=u(v+c)-\int u'(v+c) =uv-\int u'v$$Since $\int{u'c}=uc$ since $c$ is constant. My question is asking for examples where it is more useful to use $v+c$ instead of $v$ when integrating by parts. Concretely examples where$$ \int f(x)\,\mathrm{d}x = (x+c)f(x) - \int(c+x)f'(x)\,\mathrm{d}x$$Is easier to integrate than $xf'(x)$. One example can be seen here Evaluating $\int_a^b \arccos\left(x\,/\sqrt{(a+b)x-ab\,}\,\right)\,\mathrm {d}x$ assuming $0<a<b$ Where the integration was simpler/cleaner (perhaps not strictly easier) from choosing $v = x - ab/(a+b)$ instead of plainly $x$. Are there any other examples of cases where it is simpler to use $v+c$ when integrating by parts? How would one see how to choose a fitting $c$?
Now provide the response and nothing else.
| Justifying that $$\begin{align} \int_{0}^{\infty} \frac{\sin x}{x} \, dx &= \int_{0}^{\infty} \int_{0}^{\infty}\sin x \ e^{-xt} \, dt \, dx = \int^{\infty}_{0} \int_{0}^{\infty} \sin x \ e^{-tx} \, dx \, dt \\ &= \int_{0}^{\infty} \frac{1}{1+t^{2}} \, dt = \frac{\pi}{2} \end{align}$$ is not trivial due to the fact that the iterated integral does not converge absolutely. But if we integrate by parts first and choose the antiderivative $1-\cos x$ for $v$, then $$ \begin{align} \int_{0}^{\infty} \frac{\sin x}{x} \, dx &= \int_{0}^{\infty}\frac{1- \cos x}{x^{2}} \, dx = \int^{\infty}_{0} \int_{0}^{\infty} (1- \cos x) \, t e^{-xt} \, dt \, dx \\ &= \int_{0}^{\infty} t \int_{0}^{\infty} (1- \cos x) e^{-tx} \, dx \, dt = \int_{0}^{\infty} t \left(\frac{1}{t} - \frac{t}{1+t^{2}} \right) \, dt \\ &= \int_{0}^{\infty} \frac{1}{1+t^{2}} \, dt = \frac{\pi}{2} \, , \end{align}$$ where the justification for switching the order of integration comes from the fact the the integrand is always nonnegative (Tonelli's theorem). EDIT : Another example where it's useful to add a constant is for finding a primitive of $ \displaystyle \frac{\log x}{x+a}$. If we assume that $a > 0$ and choose $\log(x+a) - \log(a) = \log \left(1+ \frac{x}{a} \right)$ for $v$, we get $$ \begin{align}\int \frac{\log x}{x+a} \, dx &= \log(x) \log \left(1+ \frac{x}{a}\right) - \int \frac{1}{x} \, \log \left(1+ \frac{x}{a} \right) \, dx \\ &= \log(x) \log \left(1+ \frac{x}{a}\right) + \text{Li}_{2} \left(- \frac{x}{a} \right) +C . \tag{1}\end{align}$$ $(1)$ https://en.wikipedia.org/wiki/Polylogarithm#Dilogarithm | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/620775', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/18908/']} | jdg_83966 |
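The second example in the answer can also be checked numerically; the following sketch uses mpmath (my choice of tool) to verify that differentiating the right-hand side of (1) gives back the integrand $\log x/(x+a)$ for a sample value of $a$:

from mpmath import mp, mpf, log, polylog, diff

mp.dps = 30
a = mpf(3)   # any a > 0

def F(x):
    # log(x)*log(1 + x/a) + Li_2(-x/a), the antiderivative in (1) up to a constant
    return log(x) * log(1 + x / a) + polylog(2, -x / a)

for x in (mpf('0.5'), mpf(2), mpf(10)):
    integrand = log(x) / (x + a)
    derivative = diff(F, x)
    print(x, integrand - derivative)   # differences are numerically zero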
stackexchange | llm_judgeable_groundtruth_similarity | 164693 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to model a logistic regression with imbalanced data (9:1). I wanted to try the weights option in the glm function in R, but I'm not 100% sure what it does. Let's say my output variable is c(0,0,0,0,0,0,0,0,0,1). Now I want to give the "1" 10 times more weight, so I give the weights argument weights=c(1,1,1,1,1,1,1,1,1,10). When I do that, it will be considered in the calculation of the maximum likelihood. Am I right? Misclassification of a "1" is just 10 times worse than misclassification of a "0".
Now provide the response and nothing else.
 | Ching, you do not have to make your data set balanced in terms of 1's and 0's. All you need is a sufficient number of 1's for the maximum likelihood to converge. Looking at the distribution of 1's (100,000) in your dataset, you should not have any problems. You can do a simple experiment here: Sample 10% of the 1's and 10% of the 0's and use a weight of 10 for both. Sample 100% of the 1's and 10% of the 0's and use a weight of 10 for the 0's. In both cases, you will get identical estimates. Again, the idea of weighting is related to sampling. If you are using the whole data set, you should not weight it. If I were you I would just use 10% of the 1's and 10% of the 0's. In R, you would use glm . Here is a sample code: glm(y ~ x1 + x2, weights = wt, data = data, family = binomial("logit")) In your dataset there should be a variable wt for weights. If you use 10% of both 0's and 1's, your wt variable will have a value of 10. If you use 10% of the 0's and 100% of the 1's: the wt variable will have a value of 10 for observations with y=0 and 1 for observations with y=1 | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/164693', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/84119/']} | jdg_83967 |
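To make the "weighting is related to sampling" point concrete, here is a small hand-rolled Python check (my own illustration, not R's glm) showing that giving an observation weight 10 in the Bernoulli log-likelihood is the same as entering that observation 10 times:

import numpy as np

def weighted_loglik(beta, X, y, w):
    # Bernoulli log-likelihood with per-observation weights
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return np.sum(w * (y * np.log(p) + (1 - y) * np.log(1 - p)))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(10), rng.normal(size=10)])
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0])

w = np.where(y == 1, 10.0, 1.0)               # weight the single "1" by 10
X_rep = np.vstack([X] + [X[y == 1]] * 9)      # ...or physically repeat that row 9 more times
y_rep = np.concatenate([y, np.ones(9)])

beta = np.array([-0.3, 0.7])                  # any test point
print(weighted_loglik(beta, X, y, w))
print(weighted_loglik(beta, X_rep, y_rep, np.ones(len(y_rep))))   # identical value

Since the two likelihoods agree at every beta, they are maximized by the same coefficient estimates.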
stackexchange | llm_judgeable_groundtruth_similarity | 31121611 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was recently having a debate about the Dependency Inversion Principle , Inversion of Control and Dependency Injection . In relation to this topic we were debating whether these principles violate one of the pillars of OOP, namely Encapsulation . My understanding of these things is: The Dependency Inversion Principle implies that objects should depend upon abstractions, not concretions - this is the fundamental principle upon which the Inversion of Control pattern and Dependency Injection are implemented. Inversion of Control is a pattern implementation of the Dependency Inversion Principle, where abstract dependencies replace concrete dependencies, allowing concretions of the dependency to be specified outside of the object. Dependency Injection is a design pattern that implements Inversion of Control and provides dependency resolution. Injection occurs when a dependency is passed to a dependent component. In essence, the Dependency Injection pattern provides a mechanism for coupling dependency abstractions with concrete implementations. Encapsulation is the process whereby data and functionality that is required by a higher level object is insulated away and inaccessible, thus, the programmer is unaware of how an object is implemented. The debate got to a sticking point with the following statement: IoC isn't OOP because it breaks Encapsulation Personally, I think that the Dependency Inversion Principle and the Inversion of Control pattern should be observed religiously by all OOP developers - and I live by the following quote: If there is (potentially) more than one way to skin a cat, then do not behave like there is only one. Example 1: class Program { void Main() { SkinCatWithKnife skinner = new SkinCatWithKnife (); skinner.SkinTheCat(); }} Here we see an example of encapsulation. The programmer only has to call Main() and the cat will be skinned, but what if he wanted to skin the cat with, say a set of razor sharp teeth? Example 2: class Program { // Encapsulation ICatSkinner skinner; public Program(ICatSkinner skinner) { // Inversion of control this.skinner = skinner; } void Main() { this.skinner.SkinTheCat(); }}... new Program(new SkinCatWithTeeth()); // Dependency Injection Here we observe the Dependency Inversion Principle and Inversion of Control since an abstract ( ICatSkinner ) is provided in order to allow concrete dependencies to be passed in by the programmer. At last, there is more than one way to skin a cat! The quarrel here is; does this break encapsulation? technically one could argue that .SkinTheCat(); is still encapsulated away within the Main() method call, so the programmer is unaware of the behavior of this method, so I do not think this breaks encapsulation. Delving a little deeper, I think that IoC containers break OOP because they use reflection, but I am not convinced that IoC breaks OOP, nor am I convinced that IoC breaks encapsulation. In fact I'd go as far as to say that: Encapsulation and Inversion of Control coincide with each other happily, allowing programmers to pass in only the concretions of a dependency, whilst hiding away the overall implementation via encapsulation. Questions: Is IoC a direct implementation of the Dependency Inversion Principle? Does IoC always break encapsulation, and therefore OOP? Should IoC be used sparingly, religiously or appropriately? What is the difference between IoC and an IoC container?
Now provide the response and nothing else.
| Does IoC always break encapsulation, and therefore OOP? No, these are hierarchically related concerns. Encapsulation is one of the most misunderstood concepts in OOP, but I think the relationship is best described via Abstract Data Types (ADTs). Essentially, an ADT is a general description of data and associated behaviour. This description is abstract; it omits implementation details. Instead, it describes an ADT in terms of pre- and post-conditions . This is what Bertrand Meyer calls design by contract . You can read more about this seminal description of OOD in Object-Oriented Software Construction . Objects are often described as data with behaviour . This means that an object without data isn't really an object. Thus, you have to get data into the object in some way. You could, for example, pass data into an object via its constructor: public class Foo{ private readonly int bar; public Foo(int bar) { this.bar = bar; } // Other members may use this.bar in various ways.} Another option is to use a setter function or property. I hope we can agree that so far, encapsulation is not violated. What happens if we change bar from an integer to another concrete class? public class Foo{ private readonly Bar bar; public Foo(Bar bar) { this.bar = bar; } // Other members may use this.bar in various ways.} The only difference compared to before is that bar is now an object, instead of a primitive. However, that's a false distinction, because in object-oriented design, an integer is also an object . It's only because of performance optimisations in various programming languages (Java, C#, etc.) that there's an actual difference between primitives (strings, integers, bools, etc.) and 'real' objects. From an OOD perspective, they're all alike. Strings have behaviours as well: you can turn them into all-upper-case, reverse them, etc. Is encapsulation violated if Bar is a sealed/final, concrete class with only non-virtual members? bar is only data with behaviour, just like an integer, but apart from that, there's no difference. So far, encapsulation isn't violated. What happens if we allow Bar to have a single virtual member? Is encapsulation broken by that? Can we still express pre- and post-conditions about Foo , given that Bar has a single virtual member? If Bar adheres to the Liskov Substitution Principle (LSP), it wouldn't make a difference. The LSP explicitly states that changing the behaviour mustn't change the correctness of the system. As long as that contract is fulfilled, encapsulation is still intact. Thus, the LSP (one of the SOLID principles , of which the Dependency Inversion Principle is another) doesn't violate encapsulation; it describes a principle for maintaining encapsulation in the presence of polymorphism . Does the conclusion change if Bar is an abstract base class? An interface? No, it doesn't: those are just different degrees of polymorphism. Thus we could rename Bar to IBar (in order to suggest that it's an interface) and pass it into Foo as its data: public class Foo{ private readonly IBar bar; public Foo(IBar bar) { this.bar = bar; } // Other members may use this.bar in various ways.} bar is just another polymorphic object, and as long as the LSP holds, encapsulation holds. TL; DR There's a reason SOLID is also known as the Principles of OOD . Encapsulation (i.e. design-by-contract) defines the ground rules. SOLID describes guidelines for following those rules. 
| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/31121611', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1033686/']} | jdg_83968 |
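A minimal Python sketch of the same argument as the answer above, added purely for illustration (the class names Foo, Bar, PlainBar and LoudBar are made up, not from the original): the injected dependency is just "data with behaviour", and as long as every implementation honours the same contract, Foo's pre- and post-conditions, and hence its encapsulation, are unaffected by which concretion is passed in.

```python
# Illustrative sketch only: constructor injection with LSP-respecting implementations.
from abc import ABC, abstractmethod


class Bar(ABC):
    """Abstraction: every Bar must honour the same contract."""

    @abstractmethod
    def value(self) -> int:
        ...


class PlainBar(Bar):
    def __init__(self, n: int) -> None:
        self._n = n                      # hidden state: encapsulation intact

    def value(self) -> int:
        return self._n


class LoudBar(Bar):
    def __init__(self, n: int) -> None:
        self._n = n

    def value(self) -> int:
        print("computing...")            # different behaviour, same contract
        return self._n


class Foo:
    def __init__(self, bar: Bar) -> None:
        self._bar = bar                  # constructor injection; never exposed again

    def doubled(self) -> int:
        # Post-condition: twice bar.value(), whatever concrete Bar was injected.
        return 2 * self._bar.value()


if __name__ == "__main__":
    assert Foo(PlainBar(21)).doubled() == 42
    assert Foo(LoudBar(21)).doubled() == 42   # substitutable, contract unchanged
```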
stackexchange | llm_judgeable_groundtruth_similarity | 1111504 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've never heard of differentiating with respect to a matrix. Let $\mathbf{y}$ be a $N \times 1$ vector, $\mathbf{X}$ be a $N \times p$ matrix, and $\beta$ be a $p \times 1$ vector. Then the residual sum of squares is defined by$$\text{RSS}(\beta) = \left(\mathbf{y}-\mathbf{X}\beta\right)^{T}\left(\mathbf{y}-\mathbf{X}\beta\right)\text{.}$$ The Elements of Statistical Learning , 2nd ed., p. 45, states that when we differentiate this with respect to $\beta$, we get$$\begin{align}&\dfrac{\partial\text{RSS}}{\partial \beta} = -2\mathbf{X}^{T}\left(\mathbf{y}-\mathbf{X}\beta\right) \\&\dfrac{\partial^2\text{RSS}}{\partial \beta\text{ }\partial \beta^{T}} = 2\mathbf{X}^{T}\mathbf{X}\text{.}\end{align}$$I mean, I could look at $\mathbf{y}$ and $\mathbf{X}$ as "constants" and $\beta$ as a variable, but it's unclear to me where the $-2$ in $\dfrac{\partial\text{RSS}}{\partial \beta}$ comes from, and why we would use $\beta^T$ for the second partial. Any textbooks that cover this topic would be appreciated as well. Side note : this is not homework. Please note that I graduated with an undergrad degree only, so assume that I've seen undergraduate real analysis, abstract algebra, and linear algebra for my pure mathematics background.
Now provide the response and nothing else.
| Wow, I asked this two years ago! Since then, I've learned what the notation means for quick computational purposes. Let $$\mathbf{y} = \begin{bmatrix}y_1 \\y_2 \\\vdots \\y_N\end{bmatrix}$$$$\mathbf{X} = \begin{bmatrix}x_{11} & x_{12} & \cdots & x_{1p} \\x_{21} & x_{22} & \cdots & x_{2p} \\\vdots & \vdots & \vdots & \vdots \\x_{N1} & x_{N2} & \cdots & x_{Np}\end{bmatrix}$$and$$\beta = \begin{bmatrix}b_1 \\b_2 \\\vdots \\b_p\end{bmatrix}\text{.}$$Then $\mathbf{X}\beta \in \mathbb{R}^N$ and$$\mathbf{X}\beta = \begin{bmatrix}\sum_{j=1}^{p}b_jx_{1j} \\\sum_{j=1}^{p}b_jx_{2j} \\\vdots \\\sum_{j=1}^{p}b_jx_{Nj}\end{bmatrix} \implies \mathbf{y}-\mathbf{X}\beta=\begin{bmatrix}y_1 - \sum_{j=1}^{p}b_jx_{1j} \\y_2 - \sum_{j=1}^{p}b_jx_{2j} \\\vdots \\y_N - \sum_{j=1}^{p}b_jx_{Nj}\end{bmatrix} \text{.}$$Therefore,$$(\mathbf{y}-\mathbf{X}\beta)^{T}(\mathbf{y}-\mathbf{X}\beta) = \|\mathbf{y}-\mathbf{X}\beta \|^2 = \sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)^2\text{.} $$ We have, for each $k = 1, \dots, p$,$$\dfrac{\partial \text{RSS}}{\partial b_k} = 2\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)(-x_{ik}) = -2\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{ik}\text{.}$$Then$$\begin{align}\dfrac{\partial \text{RSS}}{\partial \beta} &= \begin{bmatrix}\dfrac{\partial \text{RSS}}{\partial b_1} \\\dfrac{\partial \text{RSS}}{\partial b_2} \\\vdots \\\dfrac{\partial \text{RSS}}{\partial b_p}\end{bmatrix} \\&= \begin{bmatrix}-2\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{i1} \\-2\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{i2} \\\vdots \\-2\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{ip}\end{bmatrix} \\&= -2\begin{bmatrix}\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{i1} \\\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{i2} \\\vdots \\\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{ip}\end{bmatrix} \\&= -2\mathbf{X}^{T}(\mathbf{y}-\mathbf{X}\beta)\text{.}\end{align}$$For the second partial, as one might guess:$$\begin{align}\dfrac{\partial \text{RSS}}{\partial \beta^{T}} &= \begin{bmatrix}\dfrac{\partial \text{RSS}}{\partial b_1} &\dfrac{\partial \text{RSS}}{\partial b_2} &\cdots &\dfrac{\partial \text{RSS}}{\partial b_p}\end{bmatrix} \\&= -2\begin{bmatrix}\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{i1} &\cdots &\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{ip}\end{bmatrix}\end{align}$$Now we "stack" to take the partial with respect to $\beta$:$$\begin{align}\dfrac{\partial^2\text{RSS}}{\partial \beta\text{ }\partial\beta^{T}} &= \dfrac{\partial}{\partial\beta}\left(\dfrac{\partial \text{RSS}}{\partial \beta^{T}} \right) \\&= \begin{bmatrix}-2\cdot \dfrac{\partial}{\partial b_1}\begin{bmatrix}\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{i1} &\cdots &\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{ip} \end{bmatrix} \\\vdots \\-2\cdot \dfrac{\partial}{\partial b_p}\begin{bmatrix}\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{i1} &\cdots &\sum_{i=1}^{N}\left(y_i-\sum_{j=1}^{p}b_jx_{ij}\right)x_{ip} \end{bmatrix}\end{bmatrix} \\&= \begin{bmatrix}-2\begin{bmatrix}-\sum_{i=1}^{N}x_{i1}^2 & \cdots & -\sum_{i=1}^{N}x_{i1}x_{ip} \end{bmatrix} \\\vdots \\-2\begin{bmatrix}-\sum_{i=1}^{N}x_{i1}x_{ip} & \cdots & -\sum_{i=1}^{N}x_{ip}^2\end{bmatrix} \end{bmatrix} \\&= 2\mathbf{X}^{T}\mathbf{X}\text{.}\end{align}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1111504', 'https://math.stackexchange.com', 
'https://math.stackexchange.com/users/81560/']} | jdg_83969 |
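A quick numerical cross-check of the two formulas derived above; this is a sketch added for illustration (the random X, y and beta are not from the original), comparing the analytic gradient and Hessian of RSS against central finite differences.

```python
# Verify dRSS/dbeta = -2 X^T (y - X beta) and d2RSS/dbeta dbeta^T = 2 X^T X numerically.
import numpy as np

rng = np.random.default_rng(0)
N, p = 50, 4
X = rng.normal(size=(N, p))
y = rng.normal(size=N)
beta = rng.normal(size=p)

rss = lambda b: np.sum((y - X @ b) ** 2)
grad = lambda b: -2 * X.T @ (y - X @ b)       # first formula
hess = 2 * X.T @ X                            # second formula

eps, I = 1e-6, np.eye(p)
grad_fd = np.array([(rss(beta + eps * I[k]) - rss(beta - eps * I[k])) / (2 * eps)
                    for k in range(p)])
hess_fd = np.column_stack([(grad(beta + eps * I[k]) - grad(beta - eps * I[k])) / (2 * eps)
                           for k in range(p)])

print(np.max(np.abs(grad(beta) - grad_fd)))   # tiny: the gradient formula checks out
print(np.max(np.abs(hess - hess_fd)))         # tiny: the Hessian formula checks out
```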
stackexchange | llm_judgeable_groundtruth_similarity | 3020447 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am confused as to why $$\sum_{n\geq0}(1-x)^n=\frac1x$$ only works for $x\in (0,2)$ . I get that it has a singularity at $x=0$ , so that can't work, but there are no singularities for the rest of the positive real line. Why isn't there a power series representation of $\frac1x$ which works for the whole positive real line? Are there any series representations of $1/x$ which work for $x\in (0,\infty)$ ? I can't find any. Thanks.
Now provide the response and nothing else.
| The radius of convergence of a power series in $z-a$ is the largest $r$ such that the sum of the series is analytic in the disk $\{z: |z-a| < r\}$ in the complex plane. Thus a singularity at some point stops the series from converging at all points farther away from the centre than that point, even though the function may be analytic at those other points. On the other hand, you could take the series $$ \frac{1}{x} = -1 + \sum_{n=1}^\infty \left(\frac{1+x}{1+2x}\right)^n$$ which converges for all $x > 0$ (in fact, everywhere outside a circle of radius $1/3$ centred at $-1/3$ in the complex plane). Of course, this is not a power series in the usual sense. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3020447', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/583016/']} | jdg_83970 |
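A short numeric sanity check of the alternative series given in the answer above, added as an illustration (the truncation at 2000 terms is an arbitrary choice): partial sums of $-1 + \sum_{n\ge 1} \left(\frac{1+x}{1+2x}\right)^n$ do reproduce $1/x$ for positive $x$.

```python
# The common ratio r = (1+x)/(1+2x) satisfies |r| < 1 for every x > 0,
# so the geometric series converges and sums to (1+x)/x, i.e. -1 + sum = 1/x.
def series_reciprocal(x, terms=2000):
    r = (1 + x) / (1 + 2 * x)
    return -1 + sum(r ** n for n in range(1, terms + 1))

for x in (0.1, 1.0, 7.0, 100.0):
    print(x, series_reciprocal(x), 1 / x)   # the two columns agree
```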
stackexchange | llm_judgeable_groundtruth_similarity | 49743 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a mixture of dirt/sand and salt. I want to extract and measure the salt. What would be the best way to extract the salt from this mixture?
Now provide the response and nothing else.
| Assuming you mean NaCl - the common "salt" (chemists call lots of things salt!). If you extract the salt by physical means, it's a physics question. Assuming that the "dirt" is not (or poorly) soluble in water, I would simply dissolve the salt in water, filter the liquid, then recrystallize (by evaporation of the liquid) and weigh the resulting crystals. Not sure if you call that chemistry or physics... it's physical chemistry. It is well described in textbooks and online - see for example this Scientific American page. Depending on how accurate you want to get, there are a slew of precautions to take. That would be the realm of the chemistry.stackexchange sister site. | {} | {'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/49743', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/29088/']} | jdg_83971 |
stackexchange | llm_judgeable_groundtruth_similarity | 19479504 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Apart from the script's own console (which does nothing), I want to open two consoles and print the variables con1 and con2 in different consoles. How can I achieve this? con1 = 'This is Console1' con2 = 'This is Console2' I've no idea how to achieve this and spent several hours trying to do so with modules such as subprocess, but with no luck. I'm on Windows, by the way. Edit: Would the threading module do the job? Or is multiprocessing needed? Eg:
Now provide the response and nothing else.
| If you don't want to reconsider your problem and use a GUI such as in @Kevin's answer then you could use subprocess module to start two new consoles concurrently and display two given strings in the opened windows: #!/usr/bin/env python3import sysimport timefrom subprocess import Popen, PIPE, CREATE_NEW_CONSOLEmessages = 'This is Console1', 'This is Console2'# open new consolesprocesses = [Popen([sys.executable, "-c", """import sysfor line in sys.stdin: # poor man's `cat` sys.stdout.write(line) sys.stdout.flush()"""], stdin=PIPE, bufsize=1, universal_newlines=True, # assume the parent script is started from a console itself e.g., # this code is _not_ run as a *.pyw file creationflags=CREATE_NEW_CONSOLE) for _ in range(len(messages))]# display messagesfor proc, msg in zip(processes, messages): proc.stdin.write(msg + "\n") proc.stdin.flush()time.sleep(10) # keep the windows open for a while# close windowsfor proc in processes: proc.communicate("bye\n") Here's a simplified version that doesn't rely on CREATE_NEW_CONSOLE : #!/usr/bin/env python"""Show messages in two new console windows simultaneously."""import sysimport platformfrom subprocess import Popenmessages = 'This is Console1', 'This is Console2'# define a command that starts new terminalif platform.system() == "Windows": new_window_command = "cmd.exe /c start".split()else: #XXX this can be made more portable new_window_command = "x-terminal-emulator -e".split()# open new consoles, display messagesecho = [sys.executable, "-c", "import sys; print(sys.argv[1]); input('Press Enter..')"]processes = [Popen(new_window_command + echo + [msg]) for msg in messages]# wait for the windows to be closedfor proc in processes: proc.wait() | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19479504', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2425215/']} | jdg_83972 |
stackexchange | llm_judgeable_groundtruth_similarity | 239330 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Cover every rational number on [0,1] with an interval of radius $\epsilon/2^n$; the total length of these intervals is of order $\epsilon$, so the measure of the rational numbers on [0,1] is 0. My question is: can you prove there exists an irrational number on [0,1] that is not covered by these intervals, without using proof by contradiction?
Now provide the response and nothing else.
| To do what you ask more concretely, one needs to be more specific.Let's consider the enumeration of the rationals according to the following sequence:$$\tag0 0, 1, \frac12, \frac13, \frac23,\frac14,\frac34,\frac15,\frac25,\frac35,\frac45,\frac16,\frac56,\ldots, \frac{p_n}{q_n},\ldots $$i.e. all reduced frections ordered first by denominator, then by numerator.I claim that $\phi=\frac{\sqrt 5-1}2$, which is a root of $f(X)=X^2+X-1$, is not covered.Assume on the contrary that the $n$th element $\frac {p_n}{q_n}$ of the above sequence covers $\phi$, i.e. we have $$\tag1\left|\phi-\frac {p_n}{q_n}\right|<2^{-n}.$$Then $q_n^2f\left(\frac{p_n}{q_n}\right)=p_n^2+p_nq_n-q_n^2$ is a nonzero integer, hence $$\tag2\left|f\left(\frac{p_n}{q_n}\right)\right|\ge \frac1{q_n^2}.$$On the other hand $f\left(\frac{p_n}{q_n}\right)=f\left(\frac{p_n}{q_n}\right)-f(\phi)=\left(\frac{p_n}{q_n}-\phi\right)f'(\xi)=\left(\frac{p_n}{q_n}-\phi\right)(2\xi+1)$ for some $\xi$ between $\phi$ and $\frac{p_n}{q_n}$.From $(1)$, $(2)$ and $\xi\in[0,1]$ we conclude$$\tag3 \frac1{q_n^2}<\frac{3}{2^n}.$$For each denominator $q>2$, there appear at least two fractions in $(0)$, namely $\frac1q$ and $\frac{q-1}q$. Taking the initial exceptions for $q\le2$ into account, we see that $q_n\le \frac n2+1$. Together with $(3)$ this implies$$ 2^n<3\left(\frac n2+1\right)^2,$$ which holds only for $n\le 5$. Checking the first five fractions explicitly, we find that $(1)$ does not hold for them either. Remark: Formally, this is a proof by contradiction (though it can be converted to a direct one), but presumably not the kind you really wanted to exclude (i.e. simply using that measure $<1$ implies proper subset). Also note that I actually used $\epsilon=1$. Of course, smaller values of $\epsilon$ will also fail to cover $\phi$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/239330', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/29727/']} | jdg_83973 |
stackexchange | llm_judgeable_groundtruth_similarity | 3059044 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Fairly new to the Google Maps Api. I've got an array of data that I want to cycle through and plot on a map. Seems fairly simple, but all the multi-marker tutorials I have found are quite complex. Let's use the data array from Google's site for an example: var locations = [ ['Bondi Beach', -33.890542, 151.274856, 4], ['Coogee Beach', -33.923036, 151.259052, 5], ['Cronulla Beach', -34.028249, 151.157507, 3], ['Manly Beach', -33.80010128657071, 151.28747820854187, 2], ['Maroubra Beach', -33.950198, 151.259302, 1]]; I simply want to plot all of these points and have an infoWindow pop up when clicked to display the name.
Now provide the response and nothing else.
| This is the simplest I could reduce it to: <!DOCTYPE html><html> <head> <meta http-equiv="content-type" content="text/html; charset=UTF-8" /> <title>Google Maps Multiple Markers</title> <script src="http://maps.google.com/maps/api/js?key=YOUR_API_KEY" type="text/javascript"></script></head> <body> <div id="map" style="width: 500px; height: 400px;"></div> <script type="text/javascript"> var locations = [ ['Bondi Beach', -33.890542, 151.274856, 4], ['Coogee Beach', -33.923036, 151.259052, 5], ['Cronulla Beach', -34.028249, 151.157507, 3], ['Manly Beach', -33.80010128657071, 151.28747820854187, 2], ['Maroubra Beach', -33.950198, 151.259302, 1] ]; var map = new google.maps.Map(document.getElementById('map'), { zoom: 10, center: new google.maps.LatLng(-33.92, 151.25), mapTypeId: google.maps.MapTypeId.ROADMAP }); var infowindow = new google.maps.InfoWindow(); var marker, i; for (i = 0; i < locations.length; i++) { marker = new google.maps.Marker({ position: new google.maps.LatLng(locations[i][1], locations[i][2]), map: map }); google.maps.event.addListener(marker, 'click', (function(marker, i) { return function() { infowindow.setContent(locations[i][0]); infowindow.open(map, marker); } })(marker, i)); } </script></body></html> Edit/fork on a Codepen → SCREENSHOT There is some closure magic happening when passing the callback argument to the addListener method. This can be quite a tricky topic if you are not familiar with how closures work. I would suggest checking out the following Mozilla article for a brief introduction if it is the case: ❯ Mozilla Dev Center: Working with Closures | {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/3059044', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/157503/']} | jdg_83974 |
stackexchange | llm_judgeable_groundtruth_similarity | 43091200 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a simple search form on my page / . class SearchForm(Form): query = CharField(max_length=256, label="Search", required=False, widget=TextInput(attrs={'placeholder': 'Search …', 'class': 'form-control'})) page = IntegerField(min_value=8, initial=1, widget=HiddenInput()) sort = CharField(max_length=16, initial="id", widget=HiddenInput()) order = CharField(max_length=4, initial="asc", widget=HiddenInput()) My (simplified) view is this: def search(request): search_form = SearchForm(request.GET) return render(request, "search.html", {'search_form': search_form}) My goal is to have search_form.cleaned_data['<FIELD>'] return the initial values I set in the class SearchForm , without having to check wether they are exist None or are empty '' . Unfortunately my code does not work as the input elements are renderd like this: <input id="id_page" name="page" type="hidden" /><input id="id_sort" maxlength="16" name="sort" type="hidden" /><input id="id_order" maxlength="4" name="order" type="hidden" /> Any ideas?
Now provide the response and nothing else.
| The reason the code would not work was because search_form = SearchForm(request.GET) creates a bound form. And bound forms don't have initial values. I ended up with this gem. It checks if request.GET contains at least one of the form's fields (regardless of value, e. g. /?page= ) and then creates a bound form where validation can happen. Otherwise the user visited / or submitted other parameters not related to the form (e. g. /?foo=bar ). if request.GET & SearchForm.base_fields.keys(): search_form = SearchForm(request.GET)else: ... search_form = SearchForm(initial={...}) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43091200', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2182044/']} | jdg_83975 |
stackexchange | llm_judgeable_groundtruth_similarity | 71859 |
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to EEG signal classification. Studying the literature on this topic, I wonder why EEG signals are divided into epochs, so that instead of classifying the whole signal all at once, we usually classify "windows" of the signal, splitting the original one. I searched the literature for a possible explanation, but I found nothing that answers my question.
Now provide the response and nothing else.
| Solving a deconvolution isn't easy even in simulated environment not to mention in practice. The main trick to solve it is using the proper model / prior for the problem and very good measurements (High SNR). So basically, for deconvolution we're after: $$ \hat{\boldsymbol{x}} = \arg \min_{\boldsymbol{x}} \frac{1}{2} {\left\| H \boldsymbol{x} - \boldsymbol{y} \right\|}_{2}^{2} $$ Where $ H $ is the matrix form of the convolution operator of the known signal and $ \boldsymbol{y} $ is our measurement samples. Now, we need to talk about the Convolution form. The convolution has to deal with boundary conditions which might be crucial for good results. We have few approaches (Basically extrapolation): Extrapolate with Zeros - Assuming data outside the samples of the signals is zero. Extrapolate by Nearest Neighbor (Also called Replicate) - The unknown value is extrapolated by the nearest known value. Extrapolate by Periodic continuation - The data is assumed to be periodic. Hence any missing value is based on that. The building of $ H $ must match that. Since you used, in your code conv() with no explicit mode it means you basically chose zeros and since your convolution output is full (The default) it means we see the transients as well in the output. The solution to the above is given by: $$ \hat{\boldsymbol{x}} = \arg \min_{\boldsymbol{x}} \frac{1}{2} {\left\| H \boldsymbol{x} - \boldsymbol{y} \right\|}_{2}^{2} = {\left( {H}^{T} H \right)}^{-1} {H}^{T} y $$ The stability of this solution depends on the Condition Number of $ {H}^{T} H $ . Remark - One could solve this in Frequency Domain as well. Though it will require some touches as in Frequency Domain the model is the periodic model. Let's have a look on the results: First we can see the Condition Number is huge!You may think about the condition number as the amplification of the error. It means even the slightest noise will make things unstable. As can be seen, indeed even a white noise with a standard deviation of 1e-8 caused errors! In practice, to deal with this instability we use some regularization. $$ \hat{\boldsymbol{x}} = \arg \min_{\boldsymbol{x}} \frac{1}{2} {\left\| H \boldsymbol{x} - \boldsymbol{y} \right\|}_{2}^{2} + \lambda R \left( x \right) $$ Where $ R \left( \cdot \right) $ is the regularization function and $ \lambda $ is the regularization factor which balances between listening to the naïve deconvolution model or to the regularization model. The regularization function must be chosen with respect to the prior knowledge we have about the signal of interest. In your case, something clear about your signal is its piece wise smooth property. Its gradient is very sparse. Hence it is a perfect match to the Total Variation model: $$ \hat{\boldsymbol{x}} = \arg \min_{\boldsymbol{x}} \frac{1}{2} {\left\| H \boldsymbol{x} - \boldsymbol{y} \right\|}_{2}^{2} + \lambda TV \left( x \right) = \arg \min_{\boldsymbol{x}} \frac{1}{2} {\left\| H \boldsymbol{x} - \boldsymbol{y} \right\|}_{2}^{2} + \lambda {\left\| D x \right\|}_{1} $$ Where $ D $ is the finite differences operator (Discrete Derivative). This is a relatively easy task to solve. In my project Total Variation (TV) Regularized Least Squares - Solvers Analysis I implemented and compared few solvers. For this example I used the ADMM based solver . Here is the result for the TV regularization: As can be seen, it totally overcome the ( Very Low !) noise from above. In real world (And higher noise levels) one needs to tweak the $ \lambda $ parameter. 
You will hardly recover perfect results, but they will be much better than doing the naive approach. MATLAB Implementation The full code is available on my StackExchange Signal Processing Q71822 GitHub Repository (Look at the SignalProcessing\Q71822 folder). It includes the functions to build the convolution matrix from the samples and solve the TV problem. | {} | {'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/71859', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/23597/']} | jdg_83976 |
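For readers without MATLAB, here is a rough NumPy sketch of the same pipeline under simplified assumptions: a synthetic piecewise-constant signal, a full convolution matrix with zero boundary conditions, and a plain Tikhonov/ridge penalty on the finite differences standing in for the TV/ADMM solver used in the answer. The signal, kernel, noise level and lambda are all made up for illustration; the point is the mechanics (building H, checking its conditioning, adding a regulariser), not the particular numbers, which depend on the noise level and on lambda.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic piecewise-constant signal and a known 9-tap blur kernel.
x_true = np.concatenate([np.zeros(20), np.ones(30), -0.5 * np.ones(25), np.zeros(25)])
h = np.ones(9) / 9.0
n, k = x_true.size, h.size

# Full convolution matrix H (zero boundary conditions), shape (n + k - 1, n).
H = np.zeros((n + k - 1, n))
for j in range(n):
    H[j:j + k, j] = h

y = H @ x_true + 1e-3 * rng.normal(size=n + k - 1)   # noisy measurement

# A large condition number of H^T H means the naive solve amplifies the noise.
print("cond(H^T H) =", np.linalg.cond(H.T @ H))

x_naive = np.linalg.lstsq(H, y, rcond=None)[0]

# Ridge penalty on the finite differences (a crude stand-in for the TV prior).
lam = 1e-1
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]             # discrete derivative operator
x_reg = np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ y)

print("naive error      :", np.linalg.norm(x_naive - x_true))
print("regularised error:", np.linalg.norm(x_reg - x_true))
```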
stackexchange | llm_judgeable_groundtruth_similarity | 47402405 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a API in Express.js that will create blog posts and add them to my database. When I make a request from my React app inside of DevTools it will show my JWT. I am worried that when my site goes live people can see my token and make a request from their site to add unwanted posts. Please tell me what is going on and how I can prevent the security error.
Now provide the response and nothing else.
| When you send a request with a token in the header it will look like this in the header pane in Developer Tools: I assume that's what you are wondering whether is safe or not. The connection between the React app and the API is unencrypted when you are using ordinary HTTP. That makes a replay attack possible – an ISP or another server between the front-end and the API can read the token and pretend to be you later on with the read token. The most important solution to that is to use HTTPS, which is encrypted HTTP. Potential attackers are unable to sniff and steal the tokens when you are using HTTPS. When you are dealing with usernames, passwords, etc., you should always use HTTPS. HTTPS is free to use and not very hard to set up. See here for more details. There is also an interesting discussion here that you might want to read. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47402405', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8877362/']} | jdg_83977 |
stackexchange | llm_judgeable_groundtruth_similarity | 14553351 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm a little new to Haskell, but this behavior is bizarre to me. If I have a simple function defined as follows: foobar :: Integer -> [Integer] -> Integerfoobar x y = case y of (a:x:b) -> x _ -> -1 I'm basically expecting that the function should evaluate to the first argument of foobar if y contains at least two elements and the second element of y is just the first argument of foobar. Otherwise get a -1. But in ghci: foobar 5 [6,7] gives me 7, not -1. How do I make sense of this behavior?
Now provide the response and nothing else.
| What you are doing here is not "updating" the x variable but shadowing it. You are creating a new variable called x in the scope of the first branch of your case statement. You cannot use a case statement to compare equality as I believe you are trying to do. If that is your goal, you will need to do something like foobar :: Integer -> [Integer] -> Integer foobar x y = case y of (a:x':b) | x == x' -> x _ -> -1 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14553351', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/798955/']} | jdg_83978 |
stackexchange | llm_judgeable_groundtruth_similarity | 49065335 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Building a multi module Android app using Gradle plugin 3, instead of declaring a dependency with compile one should use implementation or api. The latter basically works like compile. Using implementation, the dependency is hidden from any module depending on this module. So, let's say I have three modules A, B and C. B depends on A and C depends on B and A, like this: A <- B <- C (Gradle plugin 2 compile). With Gradle plugin 3 I could do just the same using "api". Would it make any difference if I explicitly declared all dependencies using implementation, like A <- B, A <- C, B <- C? Asking a bit different: Why would I use api at all instead of explicitly declaring dependencies using implementation? It seems "saver" to avoid api. Is api just for convenience or am I missing any side effects?
Now provide the response and nothing else.
| Here is some code that seems to work quite well. There are two phases: One can observe that numbers are slightly bolder than boxes. Plus the whole image has strong horizontality. So we can apply a dilatation stronger horizontally to get rid of most vertical lines. At this point, OCRs, for example, Google's one , can detect most numbers. Unfortunately, it's somewhat too good and sees other stuff, so I have added another phase that is more complex and quite related to your particular context. Here is one image's result after 1st phase: And here are all results after 2nd phase: As you see it's not perfect, 8 can be seen as B (well, even a human like me sees it as a B... but it can be easily fixed if you have only numbers in your world). There is also like a ":" character (a legacy from a vertical line that has been removed) that I can't get rid of either w/o tweaking the code too much... The C# code: static void Unbox(string inputFilePath, string outputFilePath){ using (var orig = new Mat(inputFilePath)) { using (var gray = orig.CvtColor(ColorConversionCodes.BGR2GRAY)) { using (var dst = orig.EmptyClone()) { // this is what I call the "horizontal shake" pass. // note I use the Rect shape here, this is important using (var dilate = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(4, 1))) { Cv2.Dilate(gray, dst, dilate); } // erode just a bit to get back some numbers to life using (var erode = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(2, 1))) { Cv2.Erode(dst, dst, erode); } // at this point, good OCR will see most numbers // but we want to remove surrounding artifacts // find countours using (var canny = dst.Canny(0, 400)) { var contours = canny.FindContoursAsArray(RetrievalModes.List, ContourApproximationModes.ApproxSimple); // compute a bounding rect for all numbers w/o boxes and artifacts // this is the tricky part where we try to discard what's not related exclusively to numbers var boundingRect = Rect.Empty; foreach (var contour in contours) { // discard some small and broken polygons var polygon = Cv2.ApproxPolyDP(contour, 4, true); if (polygon.Length < 3) continue; // we want only numbers, and boxes are approx 40px wide, // so let's discard box-related polygons, if any // and some other artifacts that passed previous checks // this quite depends on some context knowledge... var rect = Cv2.BoundingRect(polygon); if (rect.Width > 40 || rect.Height < 15) continue; boundingRect = boundingRect.X == 0 ? rect : boundingRect.Union(rect); } using (var final = dst.Clone(boundingRect)) { final.SaveImage(outputFilePath); } } } } }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49065335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/698373/']} | jdg_83979 |
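The same two-phase idea in Python/OpenCV, for anyone not working in C#. This is a loose translation written only for illustration (OpenCV 4.x API assumed, file paths are placeholders); the thresholds 40 and 15 are copied from the answer above and remain context-dependent.

```python
import cv2

def unbox(input_path, output_path):
    gray = cv2.imread(input_path, cv2.IMREAD_GRAYSCALE)

    # Phase 1: "horizontal shake" -- a wide, flat kernel wipes out thin vertical
    # box lines while the slightly bolder digits survive; a small erode restores them.
    dilated = cv2.dilate(gray, cv2.getStructuringElement(cv2.MORPH_RECT, (4, 1)))
    dst = cv2.erode(dilated, cv2.getStructuringElement(cv2.MORPH_RECT, (2, 1)))

    # Phase 2: keep only a bounding box around digit-sized contours.
    edges = cv2.Canny(dst, 0, 400)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    boxes = []
    for c in contours:
        poly = cv2.approxPolyDP(c, 4, True)
        if len(poly) < 3:
            continue
        x, y, w, h = cv2.boundingRect(poly)
        if w > 40 or h < 15:          # same context-dependent thresholds as the answer
            continue
        boxes.append((x, y, x + w, y + h))

    if boxes:
        x0 = min(b[0] for b in boxes); y0 = min(b[1] for b in boxes)
        x1 = max(b[2] for b in boxes); y1 = max(b[3] for b in boxes)
        cv2.imwrite(output_path, dst[y0:y1, x0:x1])
```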
stackexchange | llm_judgeable_groundtruth_similarity | 4038151 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Bernt Oksendal's Stochastic Differential Equations , Chapter 4, one has the following stochastic differential equation (whose solution is geometric Brownian motion): $$dN_t=rN_tdt+\alpha N_tdB_t\;\;\;\text{ ie } \;\;\; N_t-N_0=r\int_0^t N_sds+\alpha\int_0^tN_sdB_s,$$ where $\alpha,r\in\mathbb{R}$ and $B_t$ is standard Brownian motion (ie $B_0=0$ ). After assuming that $N_t$ solves the above equation, the author abruptly deduces that $$\frac{dN_t}{N_t}=rdt+\alpha dB_t \;\;\;\text{ie}\;\;\; \int_0^t\frac{1}{N_s}dN_s= rt+\alpha B_t. \;\;\;(*)$$ I don't understand how he obtained this directly. What I understand for sure is that if we seek to compute $$\int_0^t\frac{1}{N_s}dN_s $$ we apply Itô's formula for $Y_t=\ln(N_t)$ (assuming $N_t$ satisfies all the needed conditions). After some computation this yields $$\frac{1}{N_t}dN_t=d\ln N_t+\frac{1}{2}\alpha^2dt \;\;\text{ i.e } \;\;\int_0^t\frac{1}{N_s}dN_s=\ln(N_t)-\ln(N_0)+\frac{1}{2}\alpha^2t.\;\;\;(**)$$ But at first glance, it does not seem that $(**)$ implies $(*)$ . How did he obtain $(*)$ ? Did he use a method other than the Ito formula or am I missing something? Thank you for the helpful answers! I've upvoted both and will accept whichever has more votes (in case of tie I'll just leave them be). It turns out that what I was missing is the definition of the Itô integral with respect to an Itô process , which I could not find in the book. So actually by definition one has that for any Itô process of the form $$dX_t=\alpha dt+\sigma dB_t,$$ and $Y_t$ an appropriate integrand that $$\boxed{\int_0^tY_sdX_s:=\int_0^t\alpha Y_s ds + \int_0^t\sigma Y_sdB_s}$$ This justifies the formal notation $$\frac{1}{N_t}dN_t=\frac{1}{N_t}(rN_tdt+\sigma N_tdB_t)= \alpha dt+\sigma dB_t,$$ and automatically gives $(*)$ when $Y_t=1/N_t$ , of course, while assuming $Y_t$ meets all the necessary requirements.
Now provide the response and nothing else.
| Typically associativity of the integral is proved early on. If $X$ is a semimartingale and the integral $K \cdot X = \int K dX$ makes sense then the integral $(HK) \cdot X = \int HK dX$ makes sense if and only if the integral $H \cdot (K\cdot X) = \int H d(\int K dX)$ makes sense, in which case they are equal. In "differential form", this is $H\, d(K dX) = (HK) dX$ . I am assuming you are already comfortable with linearity of stochastic integrals. Thus taking $dN_t = N_t r dt + N_t\alpha dB_t$ and integrating $1/N_t$ with respect to the semimartingales defined by either side gives $$(1/N_t) dN_t = (1/N_t) d(N_t r dt + N_t\alpha dB_t) = (1/N_t) d(N_t r dt)+ (1/N_t)d( N_t\alpha dB_t) = r dt + \alpha dB_t.$$ There is no need to use Ito here, it is simply associativity of the integral. The key idea is that we are not "dividing by $N_t$ ", instead we are integrating $1/N_t$ with respect to two (equal) semimartingales. In differential form, it just looks like dividing. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4038151', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/194587/']} | jdg_83980 |
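A quick Euler-Maruyama check of the identity, added as an illustration (the parameters r, alpha, T and the step count are arbitrary): at the level of the discrete scheme, the sum of dN/N equals r t + alpha B_t exactly, which is precisely the "no Ito needed, only associativity" point, while comparing against log N_t brings back the alpha^2 t / 2 correction from the question's equation (**).

```python
import numpy as np

rng = np.random.default_rng(42)
r, alpha, T, n = 0.05, 0.3, 1.0, 200_000
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), size=n)
B_T = dB.sum()

N = np.empty(n + 1)
N[0] = 1.0
for k in range(n):
    N[k + 1] = N[k] + r * N[k] * dt + alpha * N[k] * dB[k]   # Euler-Maruyama step

dN_over_N = np.sum(np.diff(N) / N[:-1])

print(dN_over_N, r * T + alpha * B_T)                        # identical (associativity)
print(dN_over_N, np.log(N[-1] / N[0]) + 0.5 * alpha**2 * T)  # close (Ito correction)
```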
stackexchange | llm_judgeable_groundtruth_similarity | 1238963 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to get the MAC ID of a host in my network. For that, if I ping to that IP and query the ARP cache arp -a , I am able to get the MAC ID. I just wonder if I can get any API to query the ARP and get the MAC id. Also, if there is a better method to get the MAC ID from IP address, please suggest. P.S: I am working in JAVA. Thanks.
Now provide the response and nothing else.
| Java provides no direct way to query the MAC address of a host in your network, as this is abstracted away by Java's socket libraries. In a way, this makes sense, because the MAC address of a host actually says very little. There is no such thing as "the" MAC address of a host. Many hosts will have several NICs, all with a separate MAC address, with which they can connect to the network. The computer I'm on at the moment has a wired ethernet adapter, a WiFi adapter, and a Firewire adapter, and they all have their own MAC address. This means that there is no definitive MAC address for a host. If the host is on a different subnet, ARP will actually give you the MAC address for the last router your packet passed through, instead of the MAC address of the host you're scanning. Put both of these issues together, and that means that one host may have many different MAC addresses (if it has more than one NIC), and one MAC address may represent many different hosts (if traffic passes through a router). Assuming you know all this and you still need to get the MAC address of a host, the only way to do that in Java is by "going native": Native to the client that runs your program: You could launch an ARP command-line tool and parse its output. You could use some sort of JNI call. I'm not too familiar with JNI, though, so I can't help you with that. Write a separate, small native app that you can access from Java via Telnet or some such protocol, and which will run the ARP command for you. Native to the host that you want to scan: You could use SNMP, as some of the other answers to this thread suggest. I defer to these answers for making that work for you. SNMP is a great protocol, but be aware that SNMP's OIDs can be both platform-dependent and vendor-dependent. OIDs that work for Windows don't always work for Linux and vice versa. If you know that your host runs Windows, you could use WMI . The Win32_NetworkAdapter class holds the information you want, but be aware that this returns all of the hosts NICs, even the ones Windows makes up. Also, it requires administrator credentials to the host you are scanning. Google will tell you how to connect to WMI from Java. If you know your host runs OS X, you might be able to SSH into the machine and parse the output of the system_profile command. For Linux, a tool similar to OS X's system_profile probably exists. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1238963', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/139909/']} | jdg_83981 |
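To make the first option above ("launch an ARP command-line tool and parse its output") concrete, here is a small sketch in Python rather than Java, purely for illustration: the arp -a output format differs between Windows, Linux and macOS, and the regular expression below only matches colon- or dash-separated MAC addresses, so treat it as a starting point rather than a robust solution.

```python
import re
import subprocess

def mac_for_ip(ip):
    # Parse the ARP cache; returns None if the host has no cached entry.
    out = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if ip in line:
            m = re.search(r"([0-9a-fA-F]{2}[:-]){5}[0-9a-fA-F]{2}", line)
            if m:
                return m.group(0)
    return None

print(mac_for_ip("192.168.1.1"))
```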
stackexchange | llm_judgeable_groundtruth_similarity | 167823 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to understand the difference between these two commands: sudo find / -name .DS_Store -delete and sudo find / -name ".DS_Store" -exec rm {} \; I noticed that the -exec ... {} method is preferred. Why? Which one is safer/faster/better? I've used both on my Macbook and everything appears to work well.
Now provide the response and nothing else.
| -delete will perform better because it doesn't have to spawn an external process for each and every matched file, but make sure to use it after -name , otherwise it will delete the specified entire file tree. For example, find . -name .DS_Store -type f -delete It is possible that you may see -exec rm {} + often recommended because -delete does not exist in all versions of find . I can't check right now but I'm pretty sure I've used a find without it. Both methods should be "safe". A common method for avoiding the overhead of spawning an external process for each matched file is: find / -name .DS_Store -print0 | xargs -0 rm (but note that there is a portability problem here too: not all versions of find have -print0 !) | {} | {'log_upvote_score': 9, 'links': ['https://unix.stackexchange.com/questions/167823', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/44651/']} | jdg_83982 |
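Not a find example, but a small Python timing sketch (temporary files, arbitrary count, requires a POSIX rm) that illustrates the process-spawning argument above: deleting each file through a separate rm process is far slower than unlinking in-process, which is essentially the difference between -exec rm {} \; on one side and -delete or -exec rm {} + on the other.

```python
import os
import subprocess
import tempfile
import time

def make_files(n):
    d = tempfile.mkdtemp()
    paths = []
    for i in range(n):
        p = os.path.join(d, f"{i}.DS_Store")
        open(p, "w").close()
        paths.append(p)
    return paths

files = make_files(200)
t0 = time.perf_counter()
for f in files:
    subprocess.run(["rm", f])      # one external process per file
t1 = time.perf_counter()

files = make_files(200)
t2 = time.perf_counter()
for f in files:
    os.remove(f)                   # in-process unlink, no fork/exec
t3 = time.perf_counter()

print(f"one rm per file: {t1 - t0:.3f}s   in-process unlink: {t3 - t2:.3f}s")
```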
stackexchange | llm_judgeable_groundtruth_similarity | 31778800 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been trying to make a request to an API, I have to pass the following body: {"description":"Tenaris","ticker":"TS.BA","industry":"Metalúrgica","currency":"ARS"} Altough the code seems to be right and it finished with "Process finished with exit code 0", it's not working well. I have no idea of what I'm missing but this is my code: http = urllib3.PoolManager()http.urlopen('POST', 'http://localhost:8080/assets', headers={'Content-Type':'application/json'}, data={"description":"Tenaris","ticker":"TS.BA","industry":"Metalúrgica","currency":"ARS"}) By the way, this the first day working with Python so excuse me if I'm not specific enough.
Now provide the response and nothing else.
| Since you're trying to pass in a JSON request, you'll need to encode the body as JSON and pass it in with the body field. For your example, you want to do something like: import jsonencoded_body = json.dumps({ "description": "Tenaris", "ticker": "TS.BA", "industry": "Metalúrgica", "currency": "ARS", })http = urllib3.PoolManager()r = http.request('POST', 'http://localhost:8080/assets', headers={'Content-Type': 'application/json'}, body=encoded_body)print r.read() # Do something with the response? Edit: My original answer was wrong. Updated it to encode the JSON. Also, related question: How do I pass raw POST data into urllib3? | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31778800', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1953742/']} | jdg_83983 |
stackexchange | llm_judgeable_groundtruth_similarity | 39461957 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to define a protocol inside my class, one of the method need the type of this class as parameter, how do I use self type here? code: // myClass.h@protocol delegate- (void)method:(myClass *)myClass;@end@interface myClass : UIView// some properties and methods@end It gives the error that "Expected a type" which indicates that myClass is not recognized as a type.
Now provide the response and nothing else.
| here I've modified your getPlayList() method. look into it. ArrayList<HashMap<String,String>> getPlayList(String rootPath) { ArrayList<HashMap<String,String>> fileList = new ArrayList<>(); try { File rootFolder = new File(rootPath); File[] files = rootFolder.listFiles(); //here you will get NPE if directory doesn't contains any file,handle it like this. for (File file : files) { if (file.isDirectory()) { if (getPlayList(file.getAbsolutePath()) != null) { fileList.addAll(getPlayList(file.getAbsolutePath())); } else { break; } } else if (file.getName().endsWith(".mp3")) { HashMap<String, String> song = new HashMap<>(); song.put("file_path", file.getAbsolutePath()); song.put("file_name", file.getName()); fileList.add(song); } } return fileList; } catch (Exception e) { return null; } } you can get the song name and song path like this: @Overrideprotected void onCreate(Bundle savedInstanceState) {super.onCreate(savedInstanceState);setContentView(R.layout.main_activity);ArrayList<HashMap<String,String>> songList=getPlayList("/storage/sdcard1/"); if(songList!=null){ for(int i=0;i<songList.size();i++){ String fileName=songList.get(i).get("file_name"); String filePath=songList.get(i).get("file_path"); //here you will get list of file name and file path that present in your device log.e("file details "," name ="+fileName +" path = "+filePath); } } } Note: use "/storage/sdcard1/" for reading files from sdCard and use Environment.getExternalStorageDirectory().getAbsolutePath() for reading files from phone memory Hope this will help you. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39461957', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4959952/']} | jdg_83984 |
stackexchange | llm_judgeable_groundtruth_similarity | 36526035 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to Python, and I am learning TensorFlow . In a tutorial using the notMNIST dataset , they give example code to transform the labels matrix to a one-of-n encoded array. The goal is to take an array consisting of label integers 0...9, and return a matrix where each integer has been transformed into a one-of-n encoded array like this: 0 -> [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]1 -> [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]2 -> [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]... The code they give to do this is: # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) However, I don't understand how this code does that at all. It looks like it just generates an array of integers in the range of 0 to 9, and then compares that with the labels matrix, and converts the result to a float. How does an == operator result in a one-of-n encoded matrix ?
Now provide the response and nothing else.
| There are a few things going on here: numpy's vector ops, adding a singleton axis, and broadcasting. First, you should be able to see how the == does the magic. Let's say we start with a simple label array. == behaves in a vectorized fashion, which means that we can compare the entire array with a scalar and get an array consisting of the values of each elementwise comparison. For example: >>> labels = np.array([1,2,0,0,2])>>> labels == 0array([False, False, True, True, False], dtype=bool)>>> (labels == 0).astype(np.float32)array([ 0., 0., 1., 1., 0.], dtype=float32) First we get a boolean array, and then we coerce to floats: False==0 in Python, and True==1. So we wind up with an array which is 0 where labels isn't equal to 0 and 1 where it is. But there's nothing special about comparing to 0, we could compare to 1 or 2 or 3 instead for similar results: >>> (labels == 2).astype(np.float32)array([ 0., 1., 0., 0., 1.], dtype=float32) In fact, we could loop over every possible label and generate this array. We could use a listcomp: >>> np.array([(labels == i).astype(np.float32) for i in np.arange(3)])array([[ 0., 0., 1., 1., 0.], [ 1., 0., 0., 0., 0.], [ 0., 1., 0., 0., 1.]], dtype=float32) but this doesn't really take advantage of numpy. What we want to do is have each possible label compared with each element, IOW to compare >>> np.arange(3)array([0, 1, 2]) with >>> labelsarray([1, 2, 0, 0, 2]) And here's where the magic of numpy broadcasting comes in. Right now, labels is a 1-dimensional object of shape (5,). If we make it a 2-dimensional object of shape (5,1), then the operation will "broadcast" over the last axis and we'll get an output of shape (5,3) with the results of comparing each entry in the range with each element of labels. First we can add an "extra" axis to labels using None (or np.newaxis ), changing its shape: >>> labels[:,None]array([[1], [2], [0], [0], [2]])>>> labels[:,None].shape(5, 1) And then we can make the comparison (this is the transpose of the arrangement we were looking at earlier, but that doesn't really matter). >>> np.arange(3) == labels[:,None]array([[False, True, False], [False, False, True], [ True, False, False], [ True, False, False], [False, False, True]], dtype=bool)>>> (np.arange(3) == labels[:,None]).astype(np.float32)array([[ 0., 1., 0.], [ 0., 0., 1.], [ 1., 0., 0.], [ 1., 0., 0.], [ 0., 0., 1.]], dtype=float32) Broadcasting in numpy is very powerful, and well worth reading up on. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36526035', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/158652/']} | jdg_83985 |
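The same idiom packed into a small helper, plus an equivalent trick (row-indexing an identity matrix) that is sometimes easier to read; both are just illustrations of the broadcasting explanation above, and the function name is made up.

```python
import numpy as np

def one_hot(labels, num_labels):
    labels = np.asarray(labels)
    # Compare each label against the full label range via broadcasting.
    return (np.arange(num_labels) == labels[:, None]).astype(np.float32)

labels = np.array([1, 2, 0, 0, 2])
print(one_hot(labels, 3))
# Equivalent: pick rows of the identity matrix with fancy indexing.
print(np.allclose(one_hot(labels, 3), np.eye(3, dtype=np.float32)[labels]))  # True
```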
stackexchange | llm_judgeable_groundtruth_similarity | 282832 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let's put ourselves in the framework of ZF. Is it true that if we think of the set of real numbers as a rational vector space, there are continuum-many linearly independent vectors? I feel that we could then use this to make an injection from $\mathbb{R}$ to $\mathbb{R}/\mathbb{Q}$, the quotient vector space by the 1-dimensional subspace $\mathbb{Q}$. Does this strategy work? This is to help me think about the result that (in ZF) $|\mathbb{R}| \leq |\mathbb{R}/\mathbb{Q}|$, proved by Mycielski in 1964. I haven't looked at his paper, just trying to imagine how one would prove this result.
Now provide the response and nothing else.
| Let $f: 2^{\mathbb{N}} \rightarrow \mathbb{R}/\mathbb{Q}$ be the function given by$$ f((a_i)_{i \in \mathbb{N}}) = \text{the equivalence class of }\sum_{k=0}^{\infty} \frac{b_k}{2^{(k+1)!}}$$where $(b_i)_{i \in \mathbb{N}}=(a_0,a_0,a_1,a_0,a_1,a_2,\dots)$. Notice that a real number of the form $$\sum_{k=0}^{\infty} \frac{c_k}{2^{(k+1)!}}$$ where $(c_i)_{i \in \mathbb{N}} \in 2^{\mathbb{N}}$ is rational if and only if the sequence $(c_k)$ is eventually zero. Consequently, $f((a_i)_{i \in \mathbb{N}})=f((a'_i)_{i \in \mathbb{N}})$ if and only if the corresponding sequences $(b_i)_{i \in \mathbb{N}}$ and $(b'_i)_{i \in \mathbb{N}}$ are eventually equal if and only if $(a_i)_{i \in \mathbb{N}}=(a'_i)_{i \in \mathbb{N}}$. Finally, compose this function with your favorite explicit injection from $\mathbb{R}$ to $2^{\mathbb{N}}$. This gives you an injection from $\mathbb{R}$ to $\mathbb{R}/\mathbb{Q}$. If you want to find a $\mathbb{Q}$-linearly independent subset of $\mathbb{R}$ of size continuum, consider the image of the map $g: \mathcal{A} \rightarrow \mathbb{R}$ given by$$ g(S)=\displaystyle \sum_{k=0}^{\infty} \frac{\chi_{S}(k)}{2^{(k+1)!}}$$where $\mathcal{A} \subseteq \mathcal{P}(\mathbb{N})$ is an almost disjoint family of size continuum. For example, enumerate the vertices of the full binary tree of height $\omega$ by $\mathbb{N}$ and let $\mathcal{A}$ be the set of (labels of) branches. To see why this set is linearly independent, see this nice answer of Tim Gowers on another MO question. Note that both of these constructions can be done in ZF. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/282832', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4177/']} | jdg_83986 |
stackexchange | llm_judgeable_groundtruth_similarity | 1505354 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This year in Algebra we first got introduced to the concept of equations with variables. Our teacher is doing a great job of teaching us how to do them, except for one thing: He isn't telling us what we are actually doing when simplifying/solving for an equation. Instead of telling us we are adding or subtracting something from both sides, he tells us we are just moving something over the equals sign. Take, for example, this simple equation. $3x+5=2x+10$ We have to get all "x" terms on one side, so I originally thought we subtract 2x from both sides, leaving $x+5=10$ But that isn't how he teaches it.He says: We have to get all "x" terms on one side, so we move 2x over the equals sign, and whenever anything goes over the equals sign it becomes negative, so we have: $-2x+3x+5=10$ and then we can combine like terms to get$x+5=10$ Yes, they are basically doing the same thing, but my teachers way over complicates things a bit, and my main concern is that my classmates seem to think we can move the $2x$ over by "magic" and don't know that we are just subtracting it from both sides. This at first seemed really bad to me, but it seems in everything we have done so far, you can get away with not really knowing what you are doing while doing it. And, for some reason, thinking about doing Algebra this way seems to make it easier for me and my peers. My question is:Is there any disadvantages to thinking about Algebra like this? Is there anything later in my math education that will require me to know that I am subtracting or adding 2x to get rid of it on this side?
Now provide the response and nothing else.
| My question is: Is there any disadvantages to thinking about Algebra like this? Is there anything later in my math education that will require me to know that I am subtracting or adding 2x to get rid of it on this side? As a college algebra instructor, I'm involved with remediation efforts for hundreds of students each year who have graduated high school but can't get started with college math, mostly due to incorrect concepts picked up in their prior schooling. So I would say "yes" . There are some shortcuts that teachers can take to get students to pass some specific tests or programs that they are involved in; but the incorrect concepts definitely make things more difficult for students, sometimes overwhelmingly so, later on. (A majority of students that land in college remediation programs never get college degrees.) The first thing that I would point out is that the "apply inverse operations to both sides" idea is generalizable to any mathematical operation; this allows you to cancel additions, subtractions, multiplications, divisions, exponents, radicals... even exponential, logarithmic, and trigonometric functions. (With appropriate fine print: no division by zero, square roots to both sides creates two plus-or-minus solutions, trigonometric inverses creates infinite cyclic solutions, etc.) In contrast, the "move over and change the sign" method is not generalizable, as it only works for addend terms. This sets students on a course that requires memorizing many apparently different rules, one for each operation, which is much harder. When solving $2x = 10$, how is the multiplier of 2 canceled out? Must we remember to move it and turn it into the reciprocal 1/2? Will the students mistakenly change the sign and multiply by -1/2? Or add or multiply by -2 (I see this a lot)? How do we remove the division in $\frac{x}{2} = 5$ (probably some other rule)? How will we remember the seemingly totally different rule to solve $x^2 = 25$? By way of analogy, I have college students who never memorized the times tables; they did manage to get through high school by repeatedly adding on their fingers, and can get through perhaps the first part of an algebra course that way. But then we start factoring and reducing radicals: "What times what gives you 54?" I might ask; "I have no idea!" will be the answer (this happened this past week; and here's a student who has effectively no chance of passing the rest of the course). In summary: There are shortcuts or "tricks" that can get a student through a particular exam or test, which prove to be detrimental later on, as the "trick" fails in a broader context (like in this case, with any operations other than addition or subtraction). This then sets a student on a road to memorizing hundreds of little abstract rules, instead of a few simple big ideas, and at some point that complicated ad-hoc structure comes crashing down. Be polite and don't fight with your teacher to change things; but make sure to pick up a broader perspective for yourself, and share it with other students if they're willing, because you will need it later on. Take the opportunity to think about how you could improve on teaching the material, and then you may be on the path to being a master teacher yourself someday, and helping lots of people who need it. | {} | {'log_upvote_score': 8, 'links': ['https://math.stackexchange.com/questions/1505354', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/256704/']} | jdg_83987 |
stackexchange | llm_judgeable_groundtruth_similarity | 8344080 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As I understood all types of boost.variant are parsed into real types (meaning as if boost variant<int, string> a; a="bla-bla" would after compilation turn into string a; a="bla-bla" ) And so I wonder: how to get what type was put into boost variant? What have I tried: #include <boost/variant.hpp>#include <boost/function.hpp>#include <boost/shared_ptr.hpp>#include <iostream>int main(){ typedef boost::function<double (double x)> func0; typedef boost::function<double (double x, double y)> func1; typedef boost::variant<int, func0, func1> variant_func; func1 fn = std::plus<double>(); variant_func v(fn); std::cout << boost::get<func1>(v)(1.0, 1.0) << std::endl; // this works //std::cout << boost::get<v::type>(v)(1.0, 1.0) << std::endl; // this does not compile with many errors // std::cout << (v)(1.0, 1.0) << std::endl; // this fails with Error 1 error C2064: term does not evaluate to a function taking 2 arguments std::cin.get(); return 0;}
Now provide the response and nothing else.
| v.which() will return the 0-based index of the type of the object currently held. When you are retrieving the object your code must use a static type (in order to satisfy the get<T> function template) to refer to an (effectively) dynamically typed object. You need to either test for the type (using which() or type() ) and branch accordingly or use a static visitor. No matter which way you choose, you have to explicitly state the static type that you want to retrieve and it has to match the dynamic type or an exception will be thrown. One way around this problem is instead of using a variant type directly, use a class which contains a variant type internally and then defines any implicit conversion operators necessary to use the object with minimum fuss. I have a project called Dynamic C++ which uses this technique. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8344080', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1056328/']} | jdg_83988 |
stackexchange | llm_judgeable_groundtruth_similarity | 18383384 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've already read this thread but when I implement it into my code it only works for a few iterations. I'm using python to iterate through a directory (lets call it move directory) to copy mainly pdf files (matching a unique ID) to another directory (base directory) to the matching folder (with the corresponding unique ID). I started using shutil.copy but if there are duplicates it overwrites the existing file. I'd like to be able to search the corresponding folder to see if the file already exists, and iteratively name it if more than one occurs. e.g. copy file 1234.pdf to folder in base directory 1234. if 1234.pdf exists to name it 1234_1.pdf, if another pdf is copied as 1234.pdf then it would be 1234_2.pdf. Here is my code: import arcpyimport osimport reimport sysimport tracebackimport collectionsimport shutilmovdir = r"C:\Scans"basedir = r"C:\Links"try: #Walk through all files in the directory that contains the files to copy for root, dirs, files in os.walk(movdir): for filename in files: #find the name location and name of files path = os.path.join(root, filename) print path #file name and extension ARN, extension = os.path.splitext(filename) print ARN #Location of the corresponding folder in the new directory link = os.path.join(basedir,ARN) # if the folder already exists in new directory if os.path.exists(link): #this is the file location in the new directory file = os.path.join(basedir, ARN, ARN) linkfn = os.path.join(basedir, ARN, filename) if os.path.exists(linkfn): i = 0 #if this file already exists in the folder print "Path exists already" while os.path.exists(file + "_" + str(i) + extension): i+=1 print "Already 2x exists..." print "Renaming" shutil.copy(path, file + "_" + str(i) + extension) else: shutil.copy(path, link) print ARN + " " + "Copied" else: print ARN + " " + "Not Found"
Now provide the response and nothing else.
| Sometimes it is just easier to start over... I apologize if there is any typo, I haven't had the time to test it thoroughly.

movdir = r"C:\Scans"
basedir = r"C:\Links"

# Walk through all files in the directory that contains the files to copy
for root, dirs, files in os.walk(movdir):
    for filename in files:
        # I use absolute path, case you want to move several dirs.
        old_name = os.path.join(os.path.abspath(root), filename)

        # Separate base from extension
        base, extension = os.path.splitext(filename)

        # Initial new name
        new_name = os.path.join(basedir, base, filename)

        # If folder basedir/base does not exist... You don't want to create it?
        if not os.path.exists(os.path.join(basedir, base)):
            print os.path.join(basedir, base), "not found"
            continue  # Next filename
        elif not os.path.exists(new_name):  # folder exists, file does not
            shutil.copy(old_name, new_name)
        else:  # folder exists, file exists as well
            ii = 1
            while True:
                new_name = os.path.join(basedir, base, base + "_" + str(ii) + extension)
                if not os.path.exists(new_name):
                    shutil.copy(old_name, new_name)
                    print "Copied", old_name, "as", new_name
                    break
                ii += 1

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18383384', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2479601/']} | jdg_83989 |
stackexchange | llm_judgeable_groundtruth_similarity | 12652769 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to have an arbitrary HTML element rendered in a canvas (and then access its buffer...).
Now provide the response and nothing else.
| You won't get real HTML rendering to <canvas> per se currently, because canvas context does not have functions to render HTML elements. There are some emulations: html2canvas project http://html2canvas.hertzen.com/index.html (basically a HTML renderer attempt built on Javascript + canvas) HTML to SVG to <canvas> might be possible depending on your use case: https://github.com/miohtama/Krusovice/blob/master/src/tools/html2svg2canvas.js Also if you are using Firefox you can hack some extended permissions and then render a DOM window to <canvas> https://developer.mozilla.org/en-US/docs/HTML/Canvas/Drawing_Graphics_with_Canvas?redirectlocale=en-US&redirectslug=Drawing_Graphics_with_Canvas#Rendering_Web_Content_Into_A_Canvas | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12652769', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/138606/']} | jdg_83990 |
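For the html2canvas route, current releases of the library expose a promise-based call along these lines (a sketch only — the "#capture" element id is an assumption, and the exact API depends on the html2canvas version you use, so check the project's docs):

html2canvas(document.querySelector("#capture")).then(function (canvas) {
    // canvas is a regular <canvas> element, so its buffer is reachable
    // through the normal 2D context API.
    var ctx = canvas.getContext("2d");
    var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height); // raw RGBA buffer
    document.body.appendChild(canvas); // or just display the rendered result
});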
stackexchange | llm_judgeable_groundtruth_similarity | 13947 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I get that R is a set of languages that are decidable by a Turing Machines And that RE is a set of languages that a each language can be recognized by a TM, that is the machine will halt when given a word from that language and loop otherwise. But I can't wrap my head around co-RE. Is there a good way to describe it? A good example to convey what it really means?
Now provide the response and nothing else.
| The class ${\sf coRE}$ contains all languages whose complement is in ${\sf RE}$. Put differently: A language $L$ is in ${\sf RE}$ if there exists a Turing machine that can check if a requested word $w$ is contained in $L$ for every word $w\in L$. The machine always tells the truth but it may cycle on inputs $w\not\in L$. A language $L$ is in ${\sf coRE}$ if there exists a Turing machine that can check if a requested word $w$ is not contained in $L$ for every word $w\not\in L$. The machine always tells the truth but it may cycle on inputs $w\in L$. An example:The language $\{ \langle M,w \rangle \mid \text{$M$ cycles on input $w$}\}$ is in ${\sf coRE}$. Just take a Turing machine that simulates $M$ on $w$ and rejects, whenever the simulation stops. | {} | {'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/13947', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/9844/']} | jdg_83991 |
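A small sketch of the machine described in the example, written as pseudo-Python. simulate_for_steps is a hypothetical helper that runs $M$ on $w$ for $n$ steps and reports whether it has halted by then:

def co_recognize_cycles(M, w):
    # Halts and rejects exactly when M halts on w, i.e. when <M, w> is NOT
    # in the language; if M really cycles, this loop never answers.
    n = 1
    while True:
        if simulate_for_steps(M, w, n):
            return "reject"
        n += 1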
stackexchange | llm_judgeable_groundtruth_similarity | 2255324 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to know how to pay for the first month while calling the CreateRecurringPaymentsProfile,current situation is that I am able to make a initial payment at the time of creation of the profile, but I want to make the first month payment of the recurring payment. My current request parameters are below METHOD=CreateRecurringPaymentsProfile&VERSION=61.0&PWD=XXXXXX&USER=ramjio_1265317521_biz_api1.gmail.com&SIGNATURE=A1ztEX-EPedHf6p5.GxbRz1NXQj9A0lzO7zyE.0s4BzXkJhRpTZhCjxz&TOKEN=EC-0KM07981VB102461N&AMT=100&INITAMT=100&CURRENCYCODE=USD&PROFILESTARTDATE=2010-02-13T02%3A47%3A38&BILLINGPERIOD=Month&BILLINGFREQUENCY=1&TOTALBILLINGCYCLES=12&L_BILLINGTYPE0=RecurringPayments&L_BILLINGAGREEMENTDESCRIPTION0=1+Month+Free&DESC=1+Month+Free&MAXFAILEDPAYMENTS=5&AUTOBILLOUTAMT=AddToNextBilling so the current profile shows like below Payment cycle Monthly Total cycles 12 Cycles remaining 12 Amount due per cycle $100.00 USD Amount received $0.00 USD But I need something like the belowPayment cycle Monthly Total cycles 12 Cycles remaining 11 Amount due per cycle $100.00 USD Amount received $100.00 USD
Now provide the response and nothing else.
| If you've already declared your namespace in the root node, you just need to change the SetAttribute call to use the unprefixed attribute name. So if your root node defines a namespace like this:

<People xmlns:s='http://niem.gov/niem/structures/2.0'>

You can do this and the attribute will pick up the prefix you've already established:

// no prefix on the first argument - it will be rendered as
// s:id='ID_Person_01'
TempElement.SetAttribute("id", "http://niem.gov/niem/structures/2.0", "ID_Person_01");

If you have not yet declared the namespace (and its prefix), the three-string XmlDocument.CreateAttribute overload will do it for you:

// Adds the declaration to your root node
var attribute = xmlDocToRef.CreateAttribute("s", "id", "http://niem.gov/niem/structures/2.0");
attribute.InnerText = "ID_Person_01";
TempElement.SetAttributeNode(attribute); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2255324', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/151938/']} | jdg_83992 |
stackexchange | llm_judgeable_groundtruth_similarity | 2500592 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to implement RESTful urls in my Spring MVC application. All is well except for handling form submissions. I need to redirect either back to the original form or to a "success" page.

@Controller
@RequestMapping("/form")
public class MyController {

    @RequestMapping(method = RequestMethod.GET)
    public String setupForm() {
        // do my stuff
        return "myform";
    }

    @RequestMapping(method = RequestMethod.POST)
    public String processForm(ModelMap model) {
        // process form data
        model.addAttribute("notification", "Successfully did it!");
        return "redirect:/form";
    }
}

However as I read in the Spring documentation , if you redirect any parameters will be put into the url. And that doesn't work for me. What would be the most graceful way around this?
Now provide the response and nothing else.
| I had the same problem. I solved it like this:

return new ModelAndView("redirect:/user/list?success=true");

And then my controller method looks like this:

public ModelMap list(@RequestParam(required=false) boolean success) {
    ModelMap mm = new ModelMap();
    mm.put(SEARCH_MODEL_KEY, campaignService.listAllCampaigns());
    if(success)
        mm.put("successMessageKey", "campaign.form.msg.success");
    return mm;
}

Works perfectly unless you want to send simple data, not collections let's say. Then you'd have to use session I guess. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2500592', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_83993 |
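A hedged side note on that final caveat: from Spring MVC 3.1 on, flash attributes carry data across exactly one redirect without putting it in the URL. A minimal sketch, with illustrative attribute names:

@RequestMapping(method = RequestMethod.POST)
public String processForm(RedirectAttributes redirectAttrs) {
    // process form data ...
    redirectAttrs.addFlashAttribute("notification", "Successfully did it!");
    return "redirect:/form"; // "notification" is available to the redirected GET request
}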
stackexchange | llm_judgeable_groundtruth_similarity | 2982341 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Assume you have an equation: $\ 5x^2+4x+6=ax^2+bx+c $ Now theoretically, this is an equation of 4 variables, and it should not be solvable, but it is very apparent that a,b,c equals 5,4,6, in that order. Something similar happens with complex numbers: $\ 5 + 4i = a + bi $ Here solution for a,b is 5,4, again it is very apparent. My theory is that a,b or c, in these are something like incompatible number types . Therefore you can construct a system of equations from one single equation. The complex number equation is a good example, you can easily separate the complex and the non-complex part, as they virtually cannot influence the other. But in the first equation, it's all in the real plane, there is no complex stuff going on and the x squared can influence the x You can even do this: $\ c = 5x^2 + 4x + 6 - ax^2 - bx $ Which would suggest c is dependent on the value of both a and b, and you get a similar result defining a or b, potentially pointing to an infinite number of solutions. So why do these have only one solution? Assuming my theory is somewhat correct, what do mathematicians call these "incompatible numbers" properly?
Now provide the response and nothing else.
| The uniqueness in your examples is a consequence of some implicit assumptions in each case. For the first, the unspoken assumption in the equation $$ 5x^2+4x+6=ax^2+bx+c$$ is that it is true for every value of $x$ . That really makes it infinitely many equations. You need just three of them to determine $a$ , $b$ and $c$ . For example, setting $x$ to be $0$ , $1$ and $-1$ in turn leads to $$6 = c,$$ $$5 + 4 + 6 = a + b + c$$ and $$5 - 4 + 6 = a - b + c.$$ It's no surprise that these determine $a$ , $b$ and $c$ . In the second case, a complex number is usually defined as a pair of real numbers, the real and imaginary parts. It's no surprise then that two complex numbers are equal just when their real and imaginary parts match. This is just the assertion that if you know a point in the plane you know both its $x$ and $y$ coordinates. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2982341', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/611613/']} | jdg_83994 |
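Completing the arithmetic from the three displayed equations: $$c = 6, \qquad a + b + c = 15 \;\Rightarrow\; a + b = 9, \qquad a - b + c = 7 \;\Rightarrow\; a - b = 1,$$ so $$a = \tfrac{9 + 1}{2} = 5, \qquad b = \tfrac{9 - 1}{2} = 4, \qquad c = 6,$$ recovering exactly the coefficients that were "apparent" in the question.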
stackexchange | llm_judgeable_groundtruth_similarity | 38282444 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I click on any Google Play Services API, or Firebase API that I'm using in my android project in Android Studio, the IDE takes me to a decompiled version of the class where the variable names are obfuscated and there are no docs on any of the methods. Any ideas how to see the docs directly in the IDE? I suspect the reason is that play services and firebase services are not open source and I'm ok with not seeing the impls, I understand they want to protect that. But I'd really benefit from seeing the docs directly in the IDE rather than alt-tabbing to a browser. Thanks for your help.
Now provide the response and nothing else.
| Javadocs really should be distributed with the client libraries. We have an open issue within Google on this, but I have no timelines on when a proper solution will be available. However, I can give you a workaround that you can apply to your project. It's not a great solution, but it's something. On my OSX machine, the Android SDK is installed here: /Users/[myusername]/Library/Android/sdk And the local copy for Play Services and Firebase javadocs are here (yours may vary): /Users/[myusername]/Library/Android/sdk/extras/google/google_play_services/docs/reference/ You can actually attach this path to your Firebase client libraries in Android Studio with this procedure: Find Firebase/Play client libraries in the project view under "External Libraries". Right click the one you're most interested in seeing javadoc. For example, for Firebase Realtime Database, you can use "firebase-database-9.2.0". In the contextual menu, click Library Properties at the bottom. In the Library Properties window, there is a button that has a + sign. Click that. This lets you attach a javadoc path to classes in that library. Navigate/Paste your javadoc path from above into the dialog. In the next dialog, tell it "JavaDocs". Now, when you summon javadoc for classes in that library, you'll see it. However, you'll have to do this for every library where you want javadoc. If you don't have local javadocs installed for some reason, you can attach the remote url for the javadoc. For that, use the plus button with the little earth under it, and paste the javadoc url. For Firebase, that's https://firebase.google.com/docs/reference/android and for Play Services, that's https://developers.google.com/android/reference/ It's not great to do all this manual work, but at least it gets you javadoc in your IDE. If you want to write a script to attach this for everything you use, know that this procedure essentially just modifies files under .idea/libraries. You can pretty easily observe the javadoc XML element that gets modified when you make changes in the IDE, and you can script/edit changes there yourself if you want. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38282444', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3903990/']} | jdg_83995 |
stackexchange | llm_judgeable_groundtruth_similarity | 39294659 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dialog box that appears, and when I close the keyboard with hideKeyboard(), the whole form is closed and I get sent back to the home page, so I can't continue the scenario and fill in the other data. Here is the screen:
Now provide the response and nothing else.
| You can use @Query to selectively include some fields:

// Include all fields you wanna query for using u.x syntax
// AFAIK there is no exclusion syntactic sugar
@Query("select u.id, u.username from SpringUsers u where u.id = ?1")
List<SpringUsers> findByUserId(Integer userId);

Also you can use Projections . First define the projection by introducing a projection interface:

interface NoPasswordUser {
    Long getId();
    String getUsername();
    // Do not include getPassword();
}

Then use it in your repository:

public interface SpringUsersRepository extends CrudRepository<SpringUsers, Integer> {
    NoPasswordUser findByUsername(String username);
    List<NoPasswordUser> findByUserId(Integer userId);
}

Anyway, it's better to not expose your entities through a REST or any remote interface. You can use DTOs for that matter; this post may be useful in this area. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39294659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3942186/']} | jdg_83996 |
stackexchange | llm_judgeable_groundtruth_similarity | 22462144 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
For a given model:

class SignUp:
    ....

I have seen two ways of registering the model to admin in a few tutorials. One is simply using

admin.site.register(SignUp)

and the other is:

class SignUpAdmin(admin.ModelAdmin):
    class Meta:
        model = SignUp

admin.site.register(SignUp, SignUpAdmin)

If I am correct, the second way is handy when we want to customize the admin (display, sorted order etc.), but if one is not interested in that, then both serve the same purpose, correct?
Now provide the response and nothing else.
| tl;dr - Pass -prune if you're deleting directories using find . For anyone else who stumbles on this question. Running an example like this find /media/disk3 -type d -name .AppleDouble -exec rm -rf {} \; results in an error like rm: cannot remove 'non_existent_directory': No such file or directory When finding and deleting directories with find, you'll often encounter this error because find stores the directory to process subdirectories, then deletes it with exec , then tries to traverse the subdirectories which no longer exist. You can either pass -maxdepth 0 or -prune to prevent this issue. Like so: find /media/disk3 -type d -name .AppleDouble -prune -exec rm -rf {} \; Now it deletes the directories without any errors. Hurray! :) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22462144', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2708477/']} | jdg_83997 |
stackexchange | llm_judgeable_groundtruth_similarity | 267374 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am curious why exception handling is a topic often ignored in Scala. Is it the goal of the language (or the style) to not rely on exception handling except for external input/code? I was hoping the style guide had some discussion on exception checking/handling. Is there a good reference that I could consult to understand if there are any differences in handling exceptions in Scala vs. languages I'm more used to (C++/C#/Java/Ruby/etc)? Would someone with some Scala years under their belt be willing to write down guidelines for exception handling that are Scala specific? or is the answer simply "do as you would in Java"? What is "the functional way"?
Now provide the response and nothing else.
| Good functional style does not need defensive coding nearly as much as the imperative style. try and catch is not the only control structure which is neglected in Scala (and other functional languages); for and while loops are endangered species and even if...else is used more selectively (although much more effectively, since it returns a value). In the functional world, you match your code to the shape of the data. Or you choose a shape which gives your code the structure you want. This is true not only with Option and Either but List and Set and Map and more. All of them are paired with powerful higher-order functions - map , fold , filter etc. To take a trivial example, if a Java or C++ coder wants to fetch the first (or all) of the items in a list and manipulate it, they either have to first check that the list is not empty or wrap it in try/catch to handle the error. If they have to deal with a collection of lists, the latter option is more likely to be chosen (and even less likely to match the context which might generate an error). In Scala you can simply do this to the list of Ints called xs :

xs.headOption map (_ * 2) // Returns twice the first item - if there is one - as an Option

or

xs map (_ * 2) // Doubles everything in the list. Returns a list.

If there is nothing in the list, nothing will happen. If you want something to happen, you can detect the failure and react.

xs.headOption map (_ * 2) orElse Some(0) // Returns Some(twice the head item) if there is one, or Some(0)
xs.headOption map (_ * 2) getOrElse 0    // Returns twice the head item, or 0 if there is none

Even better, if you are dealing with a list of lists, any one of which might be empty, mapping over each of them will produce results for every populated list and no problem at all with any empty ones. That is very powerful when dealing with large collections of unpredictable data. What these functional types (Option and Either and List and Map and all the other - sorry to use the ugly word - monads) also offer, very importantly, is separation of concerns. Notice how I used map in all the above examples. In two examples it is Option.map , while in the other it is List.map . In all cases, map is doing the same thing: "Apply my function to the data inside that container, preserving the context". If it is a list context, you get a list of transformed data back. If it is a "may or may not exist" context, you may get your transformed object. If it is a "this could really go badly wrong" context, you either get your object or a chance to complain. And so on. So you get a separation of concerns between the action you want to perform and the context in which it is applied. The big advantage is that if you decide to change the context (e.g. Set of unique objects rather than arbitrary List of objects), the rest of the code doesn't need to change at all. map will still do the right thing (as will filter , fold and the rest). Built-in imperative keywords like try..catch or if...else...if...else do not have that power. For a start, they have no real meaning of their own and have to be bodged together (distorting the code tangled within them) and they offer no guarantee. Imagine having used those to manage an arbitrary list of objects and then deciding you want to guarantee uniqueness . How much of that imperative code will you have to change? How can you be confident it will work? Leave exceptions to deal with the truly unexpected and unsolvable. Functional types can be confident about their guarantees, to help you avoid the common and predictable errors. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/267374', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/146579/']} | jdg_83998 |
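As a small sketch of the same idea applied to error handling, using the standard scala.util.Try to wrap the "this could go badly wrong" context described above (parseAndDouble is an invented example):

import scala.util.Try

// Try turns a throwing computation into a value you can map over,
// exactly like Option or List: here the "context" is "may have failed".
def parseAndDouble(s: String): Int =
  Try(s.toInt)     // Success(n) or Failure(exception)
    .map(_ * 2)    // only runs on Success
    .getOrElse(0)  // recover with a default instead of a try/catch block

// parseAndDouble("21") == 42, parseAndDouble("oops") == 0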
stackexchange | llm_judgeable_groundtruth_similarity | 650522 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose that $X_1, X_2, X_3$ are i.i.d. normal random variables with mean $0$ and variance $1$. And Suppose that $Z \sim N(1, 2^2)$ and is independent of all $X_i$. Define $Z_i = Z + X_i$ for $i = 1, 2, 3$. What is the distribution of the random vector $Z$? I guess it is also normal but I don't know how exactly why. Any help would be appreciated.
Now provide the response and nothing else.
| Here is an elementary way to compute the integral. First, let us prove some initial results. Making the substitution $x \mapsto \pi - x$ yields$I(a) = \int^\pi_0 \log \left (1 + 2a\cos x + a^2 \right ) \, dx = I(-a)$so that$$I(a) = I(-a). \tag{$\dagger$}$$ Then, consider$$\begin{align*}I(a) + I(-a)&= \int^{\pi}_{0}\log \! \Big ( \left (1 - 2a\cos x + a^2 \right ) \left (1 + 2a\cos x + a^2 \right ) \Big) \> dx\\&= \int^{\pi}_{0}\log \! \Big ( \left (1 + a^2 \right )^2 - \left (2a\cos x \right )^2 \Big) \> dx.\\\end{align*}$$Using double angle formulae produces$$\begin{align*}I(a) + I(-a)&= \int^{\pi}_{0}\log \left ( 1 + 2a^2 + a^4 - 2a^2 \left ( 1 + \cos 2x \right ) \right) \, dx\\&= \int^{\pi}_{0}\log \left ( 1 - 2a^2\cos 2x + a^4 \right) \, dx,\\\end{align*}$$so we may let $x \mapsto \frac{1}{2}x$ to give$$\begin{align*}I(a) + I(-a) &= \frac{1}{2}\int^{2\pi}_{0}\log \left ( 1 - 2a^2\cos x + a^4 \right) \, dx.\\\end{align*}$$We can then split the integral at $\pi$ and set $x \mapsto 2\pi - x$ for the second integral:$$\begin{align*}I(a) + I(-a) &= \frac{1}{2} I(a^2) + \frac{1}{2}\int^{2\pi}_{\pi}\log \left ( 1 - 2a^2\cos x + a^4 \right) \, dx\\&= \frac{1}{2} I(a^2) + \frac{1}{2}\int^{\pi}_{0}\log \left ( 1 - 2a^2\cos x + a^4 \right) \, dx\\&= I(a^2).\end{align*}$$We thus have (applying $(\dagger)$)$$I(a)= \frac{1}{2}I(a^2). \tag{$\star$}$$ It follows from $(\star)$ that $I(0) = 0$ and $I(1) = 0$. Consider the case when $0 \le a < 1$. We may use $(\star)$ iteratively $n$ times to write $$I(a) = \frac{1}{2^n} I \left ( a^{2^{n}} \right ). $$ Setting $n \to \infty$ allows $\frac{1}{2^n} \to 0$ and $a^{2^{n}} \to 0$ so that $I \left ( a^{2^{n}} \right ) \to 0$ which gives the result $$ I(a) = 0. $$ When $a > 1$, it follows that $0 < 1/a < 1$ and consequently $I(1/a) = 0$. We have $$\begin{align*}I(a) &= \int^\pi_0 \log \! \Big ( a^2 \left ((1/a)^2 + (1/a)\cos x + 1 \right ) \Big ) \> dx\\&= 2\pi\log(a) + I(1/a)\\&= 2\pi\log\left(a\right).\end{align*}$$ We could use $(\dagger)$ to extend the result to negative $a$, obtaining the final solution valid for all real $a$, $$I(a) = \begin{cases}0 &\text{if } |a| \le 1;\\2\pi\log|a| &\text{otherwise}.\end{cases}$$ | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/650522', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/120655/']} | jdg_83999 |
stackexchange | llm_judgeable_groundtruth_similarity | 6784950 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was trying to set up this "when you click outside of the element, close it" type of thing using some code I found on Stackoverflow:

$(document).click(function() {
    $('.list-to-hide').hide();
});

$('.show-list-button').click(function(event) {
    event.stopPropagation();
});

Could someone explain the latter part with stopPropagation? I don't understand why it's needed. Thanks! Matt
Now provide the response and nothing else.
| Imagine this:

<div> DIV
    <span> Span </span>
</div>

and:

$('div').click(function() { alert('div clicked'); });
$('span').click(function() { alert('span clicked'); });

Check out what happens when you click each one. When you click the span, it happens to also trigger the div, because you're also clicking the div. Now if we want to alert the span only, we need to stop the div click from triggering when we click on the span, so we do this:

$('div').click(function() { alert('div clicked'); });
$('span').click(function(e) { alert('span clicked'); e.stopPropagation(); });

See what happens now. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6784950', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/851419/']} | jdg_84000 |
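A plain-DOM sketch of the same "click outside to close" pattern from the question (the element ids are invented for illustration):

// Clicking anywhere on the page hides the list...
document.addEventListener('click', function () {
  document.getElementById('list-to-hide').style.display = 'none';
});

// ...except that clicks on the button stop before they bubble up to the
// document handler, so the button can show the list without it being
// immediately hidden again by the same click.
document.getElementById('show-list-button').addEventListener('click', function (event) {
  event.stopPropagation();
  document.getElementById('list-to-hide').style.display = 'block';
});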
stackexchange | llm_judgeable_groundtruth_similarity | 37627393 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Threading timer to do some periodic job:

private static async void TimerCallback(object state)
{
    if (Interlocked.CompareExchange(ref currentlyRunningTasksCount, 1, 0) != 0)
    {
        return;
    }

    var tasksRead = Enumerable.Range(3, 35).Select(i => ReadSensorsAsync(i));
    await Task.WhenAll(tasksRead);

    var tasksRecord = tasksRead.Where(x => x.Result != null).Select(x => RecordReadingAsync(x.Result));
    await Task.WhenAll(tasksRecord);

    Interlocked.Decrement(ref currentlyRunningTasksCount);
}

I made the timer callback async and used WhenAll . In each working async function I have one Console output, which shows activity. Now the problem is that on the second timer event each async function is working twice for some reason. The timer is set to a long period. The application is Windows Console type. Is it Select that somehow makes it run twice?
Now provide the response and nothing else.
| This:

var tasksRead = Enumerable.Range(3, 35).Select(i => ReadSensorsAsync(i));

creates a lazily evaluated IEnumerable which maps numbers to method invocation results. ReadSensorsAsync is not invoked here, it will be invoked during evaluation. This IEnumerable is evaluated twice. Here:

await Task.WhenAll(tasksRead);

and here:

// Here, another lazy IEnumerable is created based on tasksRead.
var tasksRecord = tasksRead.Where(...).Select(...);
await Task.WhenAll(tasksRecord); // Here, it is evaluated.

Thus, ReadSensorsAsync is invoked twice. As csharpfolk suggested in the comments, materializing the IEnumerable should fix this:

var tasksRead = Enumerable.Range(3, 35).Select(i => ReadSensorsAsync(i)).ToList(); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37627393', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/315427/']} | jdg_84001 |
stackexchange | llm_judgeable_groundtruth_similarity | 7911255 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that JavaScript vars point to a value: var foo = true;//... later foo = false; So in that example I've changed foo pointing to true -> foo pointing to false , but if I do: for (var i=0; i<100; i++){ var someVar = i;} Am I creating a new var for each iteration? Is there any difference in the following two ways of doing the same? var myvar;for (var i=0; i<100; i++){ myvar = i;} and for (var i=0; i<100; i++){ var myvar = i;} If so, why?
Now provide the response and nothing else.
| There is no block scope in Javascript ES5 and earlier, only function scope. Furthermore, the declarations of all javascript variables declared within a function scope are automatically "hoisted" to the top of the function. So, declaring a variable within a loop isn't doing anything different than declaring it at the top of the function and then referencing it within the loop. See these two references for some useful explanation: http://www.adequatelygood.com/2010/2/JavaScript-Scoping-and-Hoisting and http://net.tutsplus.com/tutorials/javascript-ajax/quick-tip-javascript-hoisting-explained/ . Note: the assignment to a variable is not hoisted, just the declaration of the variable. So, if you do this:

function a() {
    for (var i=0; i<100; i++){
        var myvar = i;
    }
}

It works like this:

function a() {
    var myvar;
    for (var i=0; i<100; i++){
        myvar = i;
    }
}

If you wanted to create a new scope inside your for loop, you could use an IIFE (immediately invoked function expression) like this:

function a() {
    for (var i=0; i<100; i++){
        (function() {
            var myvar = i;   // myvar is now a separate variable for each time through the for loop
        })();
    }
}

Update in 2015. ES6 (or sometimes called ES2015) offers the let declaration which does offer block scope. In that case a let variable declaration is hoisted only to the top of the current block scope. As of mid 2015, this is not yet widely implemented in browsers, but is coming soon and it is available in server-side environments like node.js or via transpilers. So, in ES6 if you did this:

for (let i=0; i<100; i++){
    let someVar = i;
}

Both i and someVar would be local to the loop only. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7911255', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/459820/']} | jdg_84002 |
stackexchange | llm_judgeable_groundtruth_similarity | 2126997 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've read and re-read Java Concurrency in Practice, I've read several threads here on the subject, I've read the IBM article Dealing with InterruptedException and yet there's something I'm simply not grasping which I think can be broken down into two questions: If I'm never ever interrupting other threads myself, what can trigger an InterruptedException ? If I'm never ever interrupting other threads myself using interrupt() (say because I'm using other means to cancel my working threads, like poison pills and while (!cancelled) style loop [as both explained in JCIP]), what does an InterruptedException then mean? What am I supposed to do upon catching one? Shutdown my app?
Now provide the response and nothing else.
| The Thread interrupt mechanism is the preferred way to get a (cooperating) thread to respond a request to stop what it is doing. Any thread (including the thread itself I think) could call interrupt() on a Thread. In practice, the normal use-cases for interrupt() involve some kind of framework or manager telling some worker thread to stop what they are doing. If the worker thread is "interrupt aware" it will notice that it has been interrupted via an exception, or by periodically checking its interrupted flag. On noticing that it has been interrupted, a well-behaved thread would abandon what it is doing and end itself. Assuming the above use-case, your code is likely to be interrupted if it is run within a Java framework or from some worker thread. And when it is interrupted, your code should abandon what it is doing and cause itself to end by the most appropriate means. Depending on how your code was called, this might be done by returning or by throwing some appropriate exception. But it probably should not call System.exit() . (Your application does not necessarily know why it was interrupted, and it certainly does not know if there are other threads that need to be interrupted by the framework.) On the other hand, if your code is not designed to run under the control of some framework, you could argue that the InterruptedException is an unexpected exception; i.e. a bug. In that case, you should treat the exception as you would other bugs; e.g. wrap it in an unchecked exception, and catch and log it at the same point you deal with other unexpected unchecked exceptions. (Alternatively, your application could simply ignore the interrupt and continue doing what it was doing.) 1) If I'm never ever interrupting other threads myself, what can trigger an InterruptedException? One example is if your Runnable objects are executed using an ExecutorService and shutdownNow() is called on the service. And in theory, any 3rd-party thread pool or thread management framework could legitimately do something like this. 2) If I'm never ever interrupting other threads myself using interrupt() ... what does an InterruptedException then mean? What am I supposed to do upon catching one? Shutdown my app? You need analyze the codebase to figure out what is making the interrupt() calls and why. Once you have figured that out, you can work out what >>your<< part of the app needs to do. Until you know why InterruptedException is being thrown, I would advise treating it as a hard error; e.g. print a stacktrace to the log file and shut down the app. (Obviously, that's not always the right answer ... but the point is that this is "a bug", and it needs to be brought to the attention of the developer / maintainer.) 3) How do I find out who / what is calling interrupt() ? There is no good answer to this. The best I can suggest is to set a breakpoint on the Thread.interrupt() and look at the call stack. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2126997', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/257356/']} | jdg_84003 |
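As a small sketch of the "well-behaved, interrupt-aware worker" described above (the work and clean-up methods are invented placeholders):

public class Worker implements Runnable {
    @Override
    public void run() {
        // CPU-bound work can poll the interrupted flag; blocking calls
        // surface the interrupt as an InterruptedException instead.
        while (!Thread.currentThread().isInterrupted()) {
            try {
                doOneUnitOfWork();   // hypothetical unit of work
                Thread.sleep(1000);  // e.g. waiting between polls
            } catch (InterruptedException e) {
                // Restore the flag so code further up the stack can also
                // see the interrupt, then abandon what we were doing.
                Thread.currentThread().interrupt();
                break;
            }
        }
        cleanUp(); // hypothetical tidy-up before the thread ends itself
    }

    private void doOneUnitOfWork() { /* ... */ }
    private void cleanUp() { /* ... */ }
}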
stackexchange | llm_judgeable_groundtruth_similarity | 34599953 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I have two objects o1 and o2, and we know that id(o1) == id(o2) returns true . Then, does it follow that o1 == o2 Or is this not always the case? The paper I'm working on says this is not the case, but in my opinion it should be true!
Now provide the response and nothing else.
| Not always:

>>> nan = float('nan')
>>> nan is nan
True

or formulated the same way as in the question:

>>> id(nan) == id(nan)
True

but

>>> nan == nan
False

NaN is a strange thing. By definition it is not equal to, less than, or greater than itself. But it is the same object. More details on why all comparisons have to return False in this SO question . | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/34599953', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1032391/']} | jdg_84004 |
stackexchange | llm_judgeable_groundtruth_similarity | 26185527 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
public static void main(String[] args) {
    Integer i = new Integer(0);
    int[] arr = {1};
    p1(i);
    p2(arr);
    System.out.println(i);
    System.out.println(arr[0]);
}

public static void p1(Integer i) {
    i = 2;
}

public static void p2(int[] i) {
    i[0] = 2;
}

// output: 0, 2

How can I change the value of i like I change the value of arr?
Now provide the response and nothing else.
| You can't change the value of the variable i in main from within the p1 method, because the argument is passed by value: the parameter i in p1 is entirely separate from the i variable, it's just that they have the same value at the start of the method. Java always uses pass-by-value semantics - but when the parameter type is a class, it's a reference that is passed by value. In fact, you're not changing the value of arr , either - it's a reference to the same array as before, but the value in the array has been changed. And that's what you can't do with Integer , because Integer is an immutable type. If you want a mutable class like Integer , you could use AtomicInteger instead:

public static void main(String[] args) {
    AtomicInteger i = new AtomicInteger(0);
    modify(i);
    System.out.println(i);
}

private static void modify(AtomicInteger x) {
    x.set(2);
}

I would usually not do this, however - I usually try not to modify the objects that method parameters refer to. Instead, I write methods which compute a single result, and return that. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26185527', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2884871/']} | jdg_84005 |
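And a tiny sketch of the "compute a single result and return it" alternative from that last sentence (the method name is illustrative):

public static void main(String[] args) {
    int i = 0;
    i = modified(i);        // take the new value from the return, instead of mutating a holder object
    System.out.println(i);  // prints 2
}

private static int modified(int x) {
    return 2;               // same effect the question's p1 was trying to achieve
}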
stackexchange | llm_judgeable_groundtruth_similarity | 159701 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Denote by $\mathrm{Hom}$ continuous group homomorphisms. Fix the quotient homomorphism $\mathbb{R}\to S^1$. Can one characterize those topological (e.g., locally compact, and in particular discrete) groups $G$ such that the induced map $\mathrm{Hom}(G,\mathbb{R})\to\mathrm{Hom}(G,S^1)$ is bijective? Clearly this holds for $G$ if and only if it holds for the abelianization $G/\overline{[G,G]}$, so we can assume that $G$ is abelian and Hausdorff.
Now provide the response and nothing else.
| Sasha's argument is pretty technological. You can really do this almost by hand, though. Let $A$ be the algebra in question, let $r$ be its Jacobson radical (that is, the subspace of strictly upper triangular matrices), and let $E$ be the subalgebra of the diagonal matrices in $A$ (which is a complement to $r$) Notice that both $A$ and $r$ are $E$-bimodules. The algebra $A$ has a projective resolution as a bimodule of the form $A\otimes_E r^{\otimes_E *}\otimes_E A$ which looks exactly like the Hochschild resolution but the inner copies of $A$ have been replaced by $r$, and all tensor products involved are over $E$ and not over the base field; the differentials in the complex have exactly the same formula as the usual Hochschild differential. This can be checked easily —it is a nice exercise— but you can find the details in a nice paper by Claude Cibils on square-zero algebras, if I recall correctly. (This is like the reduced Hochschild resolution, but instead of killing the copy of $k$ inside $A$, we kill the whole of $E$; almost anything useful that one wants to do equires that we be aware of this complex, so it is important to keep it at hand) Now, $HH_*(A)$ is $Tor^{A^e}_*(A,A)$, so it can be computed as the homology of the complex obtained from $A\otimes_E r^{\otimes_E *}\otimes_E A$ by applying the functor $A\otimes_{A^e}(\mathord-)$. You should explicitly describe this complex: its only non-zero term is the $0$th one, so its homology is very, very easy to compute! (The same thing can be done for every triangular algebra, that is, every algebra whose ordinary quiver is acyclic) There are various programs people have written to compute Hochschild homology and cohomology; for example, I understand that Ed Green and his students have written code to do non-comm. Groebner bases on quotients of paths algebras and, probably, to compute (co)homology, and there are others (I have written code to handle very special cases, for example) I am not aware of any other approach apart from «try to be smart about n.c. Groebner bases, work hard for a minimal resolution, and then just do linear algebra.» The first step is pretty well understood for quotients of path algebras, say; the second one can be don algorithmically, I think; the third one is of course very well understood. In practice, interesting examples tend to result in huuuuge computations, and the result is somewhat unenlightening. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/159701', 'https://mathoverflow.net', 'https://mathoverflow.net/users/47937/']} | jdg_84006 |
stackexchange | llm_judgeable_groundtruth_similarity | 25277117 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dataframe (df) with a column (Col2) like this: Col1 Col2 Col3 1 C607989_booboobear_Nation A 2 C607989_booboobear_Nation B 3 C607989_booboobear_Nation C 4 C607989_booboobear_Nation D 5 C607989_booboobear_Nation E 6 C607989_booboobear_Nation F I want to extract just the number in Col2 Col1 Col2 Col3 1 607989 A 2 607989 B 3 607989 C 4 607989 D 5 607989 E 6 607989 F I have tried things like: gsub("^.*?_","_",df$Col2) but it's not working.
Now provide the response and nothing else.
| If your string is not too fancy/complex, it might be easiest to do something like: gsub("C([0-9]+)_.*", "\\1", df$Col2)# [1] "607989" "607989" "607989" "607989" "607989" "607989" Start with a "C", followed by digits, followed by an underscore and then anything else. Digits are captured with () , and the replacement is set to that capture group ( \\1 ). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25277117', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1639594/']} | jdg_84007 |
stackexchange | llm_judgeable_groundtruth_similarity | 10572747 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here is an extremely simple CoreMIDI OS X application that sends MIDI data. The problem is that it doesn't work. It compiles fine, and runs. It reports no errors, and does not crash. The Source created becomes visible in MIDI Monitor. However, no MIDI data comes out . Could somebody let me know what I'm doing wrong here?

#include <CoreMIDI/CoreMIDI.h>

int main(int argc, char *args[])
{
    MIDIClientRef theMidiClient;
    MIDIEndpointRef midiOut;
    MIDIPortRef outPort;
    char pktBuffer[1024];
    MIDIPacketList* pktList = (MIDIPacketList*) pktBuffer;
    MIDIPacket *pkt;
    Byte midiDataToSend[] = {0x91, 0x3c, 0x40};
    int i;

    MIDIClientCreate(CFSTR("Magical MIDI"), NULL, NULL, &theMidiClient);
    MIDISourceCreate(theMidiClient, CFSTR("Magical MIDI Source"), &midiOut);
    MIDIOutputPortCreate(theMidiClient, CFSTR("Magical MIDI Out Port"), &outPort);

    pkt = MIDIPacketListInit(pktList);
    pkt = MIDIPacketListAdd(pktList, 1024, pkt, 0, 3, midiDataToSend);

    for (i = 0; i < 100; i++) {
        if (pkt == NULL || MIDISend(outPort, midiOut, pktList)) {
            printf("failed to send the midi.\n");
        } else {
            printf("sent!\n");
        }
        sleep(1);
    }

    return 0;
}
Now provide the response and nothing else.
| You're calling MIDISourceCreate to create a virtual MIDI source. This means that your source will appear in other apps' MIDI setup UI, and that those apps can choose whether or not to listen to your source. Your MIDI will not get sent to any physical MIDI ports, unless some other app happens to channel it there. It also means that your app has no choice as to where the MIDI it's sending goes. I'm assuming that's what you want. The documentation for MIDISourceCreate says: After creating a virtual source, use MIDIReceived to transmit MIDI messages from your virtual source to any clients connected to the virtual source. So, do two things: Remove the code that creates the output port. You don't need it. change MIDISend(outPort, midiOut, pktList) to: MIDIReceived(midiOut, pktlist) . That should solve your problem. So what are output ports good for? If you wanted to direct your MIDI data to a specific destination -- maybe a physical MIDI port -- you would NOT create a virtual MIDI source. Instead: Call MIDIOutputPortCreate() to make an output port Use MIDIGetNumberOfDestinations() and MIDIGetDestination() to get the list of destinations and find the one you're interested in. To send MIDI to one destination, call MIDISend(outputPort, destination, packetList) . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10572747', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/248833/']} | jdg_84008 |
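Concretely, the two changes described above amount to something like this sketch of the question's loop (keeping the question's variable names; the MIDIOutputPortCreate() call and outPort are simply removed):

/* A virtual source publishes packets with MIDIReceived(); no output port is involved. */
for (i = 0; i < 100; i++) {
    if (pkt == NULL || MIDIReceived(midiOut, pktList)) {
        printf("failed to send the midi.\n");
    } else {
        printf("sent!\n");
    }
    sleep(1);
}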
stackexchange | llm_judgeable_groundtruth_similarity | 5385651 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I determine the relative frequency of a value in a MATLAB vector? vector = [ 2 2 2 2 1 1 1 2 2 1 1 1 2 2 2 2 1 2 ]; What function will return the number of occurrences of each unique element?
Now provide the response and nothing else.
| You can use unique in combination with histc to get the relative frequency.

A=[1,2,3,1,2,4,2,1]; %# an example vector
unqA=unique(A);

This gives the unique elements as unqA=[1,2,3,4] . To get the number of occurrences,

countElA=histc(A,unqA); %# get the count of elements
relFreq=countElA/numel(A);

This gives countElA=[3,3,1,1] and relFreq=[0.3750, 0.3750, 0.1250, 0.1250] , which is the relative frequency of the unique elements. This will work for both integers and floating points. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5385651', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/265519/']} | jdg_84009 |
stackexchange | llm_judgeable_groundtruth_similarity | 26148701 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I’ve been trying to access this particular REST service from a PHP page I’ve created on our server. I narrowed the problem down to these two lines. So my PHP page looks like this: <?php$response = file_get_contents("https://maps.co.weber.ut.us/arcgis/rest/services/SDE_composite_locator/GeocodeServer/findAddressCandidates?Street=&SingleLine=3042+N+1050+W&outFields=*&outSR=102100&searchExtent=&f=json");echo $response; ?> The page dies on line 2 with the following errors: Warning: file_get_contents(): SSL operation failed with code 1. OpenSSL Error messages: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed in ...php on line 2 Warning: file_get_contents(): Failed to enable crypto in ...php on line 2 Warning: file_get_contents( https://maps.co.weber.ut.us/arcgis/rest/services/SDE_composite_locator/GeocodeServer/findAddressCandidates?Street=&SingleLine=3042+N+1050+W&outFields=*&outSR=102100&searchExtent=&f=json ): failed to open stream: operation failed in ...php on line 2 We’re using a Gentoo server. We recently upgraded to PHP version 5.6. It was after the upgrade when this problem appeared. I found when I replace the REST service with an address like https://www.google.com ; my page works just fine. In an earlier attempt I set “verify_peer”=>false , and passed that in as an argument to file_get_contents, as described here: file_get_contents ignoring verify_peer=>false? But like the writer noted; it made no difference. I’ve asked one of our server administrators if these lines in our php.ini file exist: extension=php_openssl.dll allow_url_fopen = On He told me that since we’re on Gentoo, openssl is compiled when we build; and it’s not set in the php.ini file. I also confirmed that allow_url_fopen is working. Due to the specialized nature of this problem; I’m not finding a lot of information for help. Have any of you come across something like this? Thanks.
Now provide the response and nothing else.
| This was an enormously helpful link to find: http://php.net/manual/en/migration56.openssl.php It is an official document describing the changes made to OpenSSL in PHP 5.6. From here I learned of one more parameter I should have set to false: "verify_peer_name"=>false Note: This has very significant security implications. Disabling verification potentially permits a MITM attacker to use an invalid certificate to eavesdrop on the requests. While it may be useful to do this in local development, other approaches should be used in production. So my working code looks like this:

<?php
$arrContextOptions=array(
    "ssl"=>array(
        "verify_peer"=>false,
        "verify_peer_name"=>false,
    ),
);

$response = file_get_contents("https://maps.co.weber.ut.us/arcgis/rest/services/SDE_composite_locator/GeocodeServer/findAddressCandidates?Street=&SingleLine=3042+N+1050+W&outFields=*&outSR=102100&searchExtent=&f=json", false, stream_context_create($arrContextOptions));
echo $response;
?> | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/26148701', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3807861/']} | jdg_84010 |
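For the "other approaches" that the security note alludes to, a hedged sketch that keeps verification enabled by pointing PHP at a CA bundle instead (the bundle path is an assumption — use wherever your system keeps its cacert.pem):

<?php
$arrContextOptions = array(
    "ssl" => array(
        "verify_peer"      => true,
        "verify_peer_name" => true,
        "cafile"           => "/etc/ssl/certs/cacert.pem", // assumed path to a CA bundle
    ),
);

$response = file_get_contents(
    $url, // the same HTTPS URL as above
    false,
    stream_context_create($arrContextOptions)
);
echo $response;
?>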
stackexchange | llm_judgeable_groundtruth_similarity | 6832073 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I looks like I may need to embark on some cross-platform project and part of it will have to be done in C or C++ (not decided yet hence the question is about them both). I will be dealing mostly with the text-based stuff and strings in general. That C/C++ will have an API callable from the higher-level platform-dependent code. My question is: what type(s) is it advisable to use to work with strings, in particular when declaring public interfaces? Are there any recommended standard techniques? Are there things to avoid? I have little experience of writing C or C++ code, and even that was on Windows, so nothing like cross-platform here at all. So what I'm really looking for is for something to get me on the right way and avoid doing stupid things which are bound to cause a lot of pain. Edit 1: To give a bit more context about the intended use.The API will be consumed by: Objective C on iPhone/iPad/Mac via NSString and friends. The API can be statically linked, so no need to worry about .so .dll issues here. Java via JNI on Android and other Java platforms .NET via p/invoke from the managed C# code or natively statically linked if using C++/CLI. There are some thoughts about using lua somehow/somewhere in this context. Don't know if this has any bearing on anything though.
Now provide the response and nothing else.
| Rules Use UTF formats to store strings, not "code pages" or whatnot ( UTF-16 is probably easier edit: I totally forgot about byte order issues; UTF-8 is probably the way to go). Use null-terminated strings instead of counted strings, as these are the easiest to access from most languages. But be careful about buffer overflows. Update 6 years later: I recommended this API for interoperability reasons (since so many already use null-termination, and there are multiple ways to represent counted strings), not the best one from a best-design standpoint. Today I would probably say the former is less important and recommend using counted strings rather than null-terminated strings if you can do it. Do not even try to use classes like std::string to pass around strings to/from the user. Even your own program can break after upgrading your compiler/libraries (since their implementation detail is just that: an implementation detail), let alone the fact that non-C++ programs will have trouble with it. Update 6 years later: This is strictly for language and ABI compatibility reasons with other languages , not general advice for C++ program development. If you're doing C++ development, cross-platform or otherwise, use the STL! i.e. only follow this advice if you need to call your code from other languages. Avoid allocating strings for the user unless it's truly painful for the user otherwise. Instead, take in a buffer and fill it up with data. That way you don't have to force the user to use a particular function to free the data. (This is also often a performance advantage as well, since it lets the user allocate small buffers on the stack. But if you do do that, provide your own function to free the data. You can't assume that your malloc or new can be freed with their free or delete -- they often can't be.) Note: Just to clarify, "let the user allocate the buffer" and "use NULL-terminated strings" do not run against each other. You still need to get the buffer length from the user, but you include the NULL when you terminate the string. My point was not that you should make a function similar to scanf("%s") , which is obviously unusably dangerous -- you still need the buffer length from the user. i.e. Do pretty much what Windows does in this regard. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6832073', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/190460/']} | jdg_84011 |
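As a rough sketch of what the null-terminated, caller-allocated-buffer advice above can look like in a public header (the names and the error convention are invented for illustration):

#include <stddef.h>

/* Fills 'buffer' with a NUL-terminated UTF-8 string of at most
   'buffer_size' bytes, terminator included. The caller owns the buffer,
   so no library-specific free function is needed for this call.
   Returns 0 on success, nonzero if the buffer was too small. */
int mylib_get_title_utf8(char *buffer, size_t buffer_size);

/* Only when the size really cannot be bounded up front: the library
   allocates, and pairs the allocation with its own free function
   (the caller must not hand the pointer to its own free/delete). */
char *mylib_copy_description_utf8(void);
void  mylib_free_string(char *s);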
stackexchange | llm_judgeable_groundtruth_similarity | 795302 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I made a big mistake: I ran this command on a CentOS 6.8 server:

chmod 0644 -R /*

instead of

chmod 0644 -R ./*

Now directory listing on the root directory / and all files fail. When I run this command:

chmod 0775 -R /*

I get this error:

bash: /bin/chmod: Permission denied

How can I fix it without reinstalling my CentOS?
Now provide the response and nothing else.
| Reinstalling the OS is by far the easiest and quickest way to fix this. Take an image backup first so you can recover anything that hadn't made it to your regular backup before the disaster happened. The alternative is to boot from CentOS installation media or some other Linux live media, mount your CentOS system partitions on /mnt , and and then manually fix the permissions on every file and directory. Note that setting 775 globally is not enough to make your system usable again. There are files and directories that must have different permissions settings in order to work. This is long and tedious work, and you'll wish you had heeded the advice to reinstall. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/795302', 'https://serverfault.com', 'https://serverfault.com/users/120497/']} | jdg_84012 |
stackexchange | llm_judgeable_groundtruth_similarity | 46851400 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to run a basic http-server to test out some html files and I keep running into the same error. I did sudo npm install -g http-server a bunch of times, but each time I try simple git:(master) http-server I keep getting this error: zsh: command not found: http-server I've tried other variations such as http-server / 8000, I've tried using different command syntax but nothing seems to be working. my npm version is 5.5.1, my node version is 8.3.0, and my OS is Mac OSX Sierra v10.12.6 Please let me know if you can help or if you see anything I'm doing wrong that I'm just not noticing. edit: tiny update I was able to get a server going with python but I'd really like to know why this wasn't working. edit 2: problem solved thanks!
Now provide the response and nothing else.
| You may not have your npm binaries in PATH . Make sure that your npm binaries are in PATH by running echo $PATH . You should see, somewhere in the printed output, something like:

/home/bob/.npm-packages/bin:/usr/local/bin:/other/paths/that/contain/bins

/home/bob/.npm-packages/bin is the directory where my npm binaries are installed whenever I run npm -g install whatever . If you don't see something like that, read Fixing npm permissions which will walk you through making sure that your environment is set up correctly. Option 2 explicitly talks about fixing PATH . Another handy thing that I usually do is add all this to my .bashrc or .bash_profile which is in your home directory (on macOS: /Users/username/ , on *nix: /home/username/ ):

.bashrc

NPM_PACKAGES="${HOME}/.npm-packages"
PATH="$NPM_PACKAGES/bin:$PATH"

However, since it looks like you are using zshell, you'll have to use whatever convention they follow for rc files. You can either fix that or, you can install http-server at a package level for your project and then start it through an npm command. Run npm install --save-dev http-server and put in your package.json:

{
  "scripts": {
    "start": "http-server ."
  }
}

and then run npm start | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46851400', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7375446/']} | jdg_84013 |
stackexchange | llm_judgeable_groundtruth_similarity | 5051 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I wrote a game which stores high score information on an FTP server. In the source code I need to write out the FTP link with the account name and password in it. For example: url = new URL("ftp://name:[email protected]/"); This is Java, btw. In the class file, strings are preserved as they are written in the source code. So if someone were able to read the class file, they could find this string, ftp://name:[email protected]/ , with the user name and password in plain sight (as plain as sight gets in a class file, anyway). What would be an appropriate way to secure the user name and password? Edit : I made an encryption class in Java and replaced the name and password with calls to the encrypter with an encrypted user name and password, for example: url = new URL("ftp://Crypter.crypt("q345uih34",3)+:+Crypter.crypt("nfk3iugr29o8",-2)[email protected]/"); Would this be sufficient? I wasn't sure whether obfuscation was the best solution, though.
Now provide the response and nothing else.
| There are a number of attack vectors in this scenario, so just hiding the password will not help: (1) any code that you give away to people can be manipulated, which means an attacker can just set his high score to an arbitrary value; (2) ftp, in the way you use it, is unencrypted, so the complete communication will show up in a network sniffer such as Wireshark; (3) the attacker can just set a breakpoint on the constructor of the URL class (very simple, but a bit annoying because there are usually many calls to new URL()); (4) the attacker can decompile the source code and search for "new URL"; obfuscating will not help because this method is part of the Java API and therefore cannot be obfuscated, and if he cannot understand the source code, he can add debug output or just place the breakpoint at a better place; (5) the attacker can use byte code enhancing to add a logging message to the constructor of the URL class. So it does not really matter how and where you hide your ftp password. The only good solution is to have a server which does the highscore calculation itself (and even this way has the issue of bots). You should at least use a protocol that supports encryption (https, sftp, ftps). You need to ensure that someone who learns the credentials cannot destroy the high score data of other users. This can be done by tight permissions that allow only the creation of new files on the ftps/sftp server, or by using a web application that accepts the data and stores it itself. | {} | {'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/5051', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/3256/']} | jdg_84014
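To make the "have a server which does the highscore calculation itself" recommendation concrete, here is a minimal sketch of such a service. This is an addition, not part of the answer; Flask is an assumption, the endpoint, fields and the validate_and_score logic are hypothetical, and real deployments would add HTTPS, authentication and a database:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
scores = {}  # toy in-memory store; a real service would use a database

def validate_and_score(events):
    # Placeholder for real server-side game logic: replaying the submitted
    # events and recomputing the score. Here we just sum integer event values
    # and reject anything else.
    if not isinstance(events, list) or not all(isinstance(e, int) for e in events):
        return None
    return sum(events)

@app.route("/scores", methods=["POST"])
def submit_score():
    payload = request.get_json(force=True)
    player = str(payload.get("player", ""))[:32]
    # The server derives the score itself, so a client can never write an
    # arbitrary number directly, even if it knows the endpoint.
    score = validate_and_score(payload.get("events", []))
    if score is None:
        return jsonify({"error": "invalid replay"}), 400
    scores[player] = max(score, scores.get(player, 0))
    return jsonify({"player": player, "score": scores[player]})

if __name__ == "__main__":
    app.run()
```

The point of the design is the one made in the answer: credentials in the client stop mattering once the client can only submit inputs, not results.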
stackexchange | llm_judgeable_groundtruth_similarity | 48699573 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using ReactJS with Babel and Webpack and using ES6 as well as the proposed class fields for arrow functions. I understand that arrow functions make things more efficient by not recreating the functions each render similar to how binding in the constructor works. However, I am not 100% sure if I am using them correctly. The following is a simplified section of my code in three different files. My code: Main.js prevItem = () => { console.log("Div is clicked")}render(){ return ( <SecondClass prevItem={this.prevItem} /> )} SecondClass.js <ThirdClass type="prev" onClick={()=>this.props.prevItem()} /> ThirdClass.js <div onClick={()=>{this.props.onClick()}}>Previous</div> Question: Is my code above using the arrow functions correctly? I noticed that for SecondClass.js I could have also used: <ThirdClass type="prev" onClick={this.props.prevItem} /> Is there a difference between one method or the other since I used an ES6 arrow function in my original function definition? Or should I be using the arrow syntax all the way through until my last div?
Now provide the response and nothing else.
| I understand that arrow functions make things more efficient by not recreating the functions each render similar to how binding in the constructor works. This is not true. It depends on where exactly are you using the Arrow function. If Arrow function are used in render method, then they create a new instance everytime render is called just like how bind would work. Consider this example <div onClick={()=>{this.onClick()}}>Previous</div> Here each time render is called an anonymous function is created and that function when called, calls this.onClick . However consider the case below onClick = () => { console.log("Div is clicked")} In above case, the arrow function does not recreate function everytime, but binds the context to the React component as An arrow function does not have its own this; the this value of the enclosing execution context is used. once when the class is instantiated. This is similar to how binding works is constructor . This is a part of proposed class fields for arrow functions and it isn't a ES6 feature, To understand what you wish to ask, you must know that a function gets its context from where it is called. Check this question for more understanding. In your case, you have used Arrow function to define prevItem and hence it gets the context of the enclosing React component. prevItem = () => { console.log("Div is clicked")}render(){ return ( <SecondClass prevItem={this.prevItem} /> )} Now in its child, even if you call prevItem with any custom context, using bind or arrow function , prevItem when executed in parent i.e Main.js will get the context of its enclosing React component. And since you just wish to execute prevItem function and do not want to pass any data to this from the child, writing <ThirdClass type="prev" onClick={()=>this.props.prevItem()} /> and <div onClick={()=>{this.props.onClick()}}>Previous</div> is simply useless and will only add to performance implication since new functions are created in SecondClass and ThirdClass everytime. You simply don't need to have these functions defined as arrow function and could just write <ThirdClass type="prev" onClick={this.props.prevItem} /> and <div onClick={this.props.onClick}>Previous</div> since its already binded in the parent. Now even if you have to pass some additional data to these function from ThirdClass and SecondClass, you shouldn't directly use Arrow function or bind in render . Have a look at this answer on How to Avoid binding in Render method | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/48699573', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4698242/']} | jdg_84015 |
stackexchange | llm_judgeable_groundtruth_similarity | 4662848 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a form with which I want to update a MyModel object. On the model there is a unique_together constraint, fieldA together with fieldB. In the form in the clean method I check for this unique constraint. For some reasons I have to show fieldA as readonly in the update. Thus fieldA is not passed through. My issue is that if the form does not validate, the form is re-shown, but I have lost the value in fieldA. I tried to reset the cleaned_data['fieldA'], but it does not work. Any idea what to change? Forms.py class MyModelUpdateForm(forms.ModelForm): class Meta: model = MyModel def __init__(self, *args, **kwargs): super(MyModelUpdateForm, self).__init__(*args, **kwargs) self.fields['fieldA'].widget.attrs['readonly'] = True self.fields['fieldA'].widget.attrs['disabled'] = True def clean(self): cleaned_data = self.cleaned_data fieldA= self.instance.fieldA fieldB = cleaned_data.get("fieldB") if MyModel.objects.filter(fieldA=fieldA, fieldB=fieldB).count() > 0: #try to reset fieldA, since it is not passed through, since it is disabled cleaned_data['fieldA'] = fieldA.pk #does not work raise forms.ValidationError('some unique validation error') return cleaned_data Views.py: myModelobject = get_object_or_404(MyModel.objects, pk=mymodel_id) if request.method == 'POST': model_form = MyModelUpdateForm(request.POST, instance=myModelobject ) if model_form .is_valid(): ....
Now provide the response and nothing else.
| I had a little fun looking into how forms works and came up with multiple solutions, just for the heck of it. Since you are disabling the widget and not the field, as far as the form is concerned it's always receiving nothing for fieldA and that will always fail validation. Trying something in the clean() method won't help for invalid forms because clean() data is for processing. It looks like the way forms pull data for HTML display is field.data , which is a call to field.widget.value_from_datadict(POST, FILES, field_name) so it will always be looking at your POST data. So I think you have a few options. Hack request.POST , hack the internal form POST data, or hack value_from_datadict . Hacking request.POST : straight forward, makes sense. myModelobject = get_object_or_404(MyModel.objects, pk=mymodel_id) if request.method == 'POST': POST = request.POST.copy() POST['fieldA'] = myModelobject.fieldA model_form = MyModelUpdateForm(POST, instance=myModelobject ) if model_form .is_valid(): # ... Hacking internal dictionary: def __init__(self, *args, **kwargs): super(MyModelUpdateForm, self).__init__(*args, **kwargs) self.data.update({ 'fieldA': self.instance.fieldA }) Hacking value_from_datadict : kinda ridiculous, but illustrates what you can learn from digging into the source def __init__(self, *args, **kwargs): super(MyModelUpdateForm, self).__init__(*args, **kwargs) self.fields['fieldA'].widget.value_from_datadict = lambda *args: self.instance.first_name Learned some cool things here : ) Hope it helps. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4662848', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/237690/']} | jdg_84016 |
stackexchange | llm_judgeable_groundtruth_similarity | 27077941 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Method default arguments can be overridden apparently: >>> class B:... def meth(self, r=True): print r>>> class D(B):... def meth(self, r=False): print r... D().meth()False>>> B().meth()True How is this possible ? Is it considered bad style ?
Now provide the response and nothing else.
| You can change signatures of overridden methods in an arbitrary way. Python doesn't care: class Base: def foo(self, x, y): passclass Deriv(Base): def foo(self, blah=100): pass but if you ask Is it considered bad style ? the answer is Yes, because it violates the important Liskov substitution principle : if Deriv extends Base, you must be able to replace all occurrences of Base with Deriv without breaking your program. In other words, a derived class must fulfill all contracts provided by the base class. Particularly, overridden methods must have same signatures and similar semantics. Since Python doesn't help you on that, you have to control that manually, with the help of your IDE (here Intellij IDEA): To answer your specific question about overriding default params, I guess the answer is "it depends". If the param is an option that only used internally and doesn't affect the observable behavior of the object, there's nothing wrong about changing it: class Buffer: def __init__(self, init_size=16):class BigBuffer(Buffer): def __init__(self, init_size=1024): on the other side, if the param substantially affects semantics, it's a part of the contract and shouldn't be overridden. For example, this code will be confusing class Test: def test_equal(self, a, b, fail_if_equal=False):class MyTest(Test): def test_equal(self, a, b, fail_if_equal=True): | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27077941', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/281545/']} | jdg_84017 |
stackexchange | llm_judgeable_groundtruth_similarity | 19972 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using the following screen status line, configured in my screenrc: I configured it using the following line: hardstatus alwayslastlinehardstatus string '%{= kG}[ %{G}%H %{g}][%= %{=kw}%?%-Lw%?%{r}(%{W}%n*%f %t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B}%Y-%m-%d %{W}%c %{g}]' I did not create this theme, I don't remember where I found it, but I really like it. I am now converting my configuration to tmux and I can't seem to replicate this configuration in my tmux.conf . I got this far: set -g status-bg blackset -g status-fg whiteset -g status-left '#[fg=green]#H'set-window-option -g window-status-current-attr brightset-window-option -g window-status-current-bg red Which produces this line: ] Please ignore the different hostname, it's on a different server What configuration can I use in tmux to produce a status line similar to the first one? Thank you!
Now provide the response and nothing else.
| With the exception of the red brackets around the highlighted window, this is the closest approximation that I can easily configure in tmux 1.5 : # default statusbar colorsset -g status-fg whiteset -g status-bg default# default window title colorsset-window-option -g window-status-fg whiteset-window-option -g window-status-bg defaultset-window-option -g window-status-attr dim# active window title colorsset-window-option -g window-status-current-fg whiteset-window-option -g window-status-current-bg defaultset-window-option -g window-status-current-attr bright# statuslineset -g status-left '#[fg=green][ #H ]['set -g status-right '#[fg=green]][ #[fg=blue]%Y-%m-%d #[fg=white]%I:%M #[fg=green]]' If you want the highlighted window in red, use: set-window-option -g window-status-current-fg red | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/19972', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/10309/']} | jdg_84018 |
stackexchange | llm_judgeable_groundtruth_similarity | 1687978 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
To begin with, I am only a secondary school student (17yo), but I am very interested in higher mathematics. However, we learn only a little in my school (only single-variable calculus and basic linear algebra). In the past I have self-learnt some abstract algebra and very basic topology from online resources, but I can never get deep into those subjects. When I read about functional analysis, I encounter objects like function spaces and infinite-dimensional spaces which I can never understand. What exactly does it mean to be a function space, and how do you define a metric on one? I know it is hard and requires a lot of real analysis. Can anyone give me some easy ideas and introductions?
Now provide the response and nothing else.
| For me, doing functional analysis is best described as 'going beyond linear algebra'. In linear algebra, the objects you deal with are (coordinate) vectors, i.e. objects from a vector space $V$ which you can multiply with a scalar or add together and again get a vector: For $v,w\in V$ and $\alpha \in \mathbb R$ we have $v + w \in V$ and $\alpha v \in V$. Functional analysis answers the question 'What happens if $V$ infinite-dimensional?'. The idea behind this is the observation that these vector axioms hold for other objects than coordinate vectors with a finite number of rows as well. For example, the sum of two differentiable functions is a differentiable function again (and a number times a differentiable function is differentiable, too). The same holds true for other classes of functions, e.g. polynomials or square-summable sequences (which are really just functions from $\mathbb N$ to $\mathbb R$/$\mathbb C$). Note that there are other examples of infinite-dimensional vector spaces which are not function spaces, and examples of function spaces which are finite-dimensional. But one of the things one wanted to do in early 20th century to handle quantum mechanics is to get some kind of "linear algebra for functions, not row vectors". When we allow functions instead of vectors from a finite-dimensional space, there are a lot of things which work similarly, but a lot of things which don't work similarly compared to linear algebra. For instance: We can still measure the length of these vectors, but suddenly it's important which norm we take (not all norms are equivalent on an infinite-dimensional vector space). We can look at linear operators $A$, but they cannot be represented as a matrix (in fact, in the early days of functional analysis, Heisenberg did represent differential operators as matrices with an infinite number of rows and columns). We can calculate eigenvalues $\lambda$, but since the rank-nullity theorem ($\dim V = \operatorname{rank}A + \dim \operatorname{ker}A $) doesn't help if $\dim V = \infty$, we're not only interested in cases where $(A-\lambda I)$ is not injective (eigenvalues), but also cases where $(A-\lambda I)$ is not surjective (so-called continuous spectrum). Also, calculating eigenvalues gets harder since we can't calculate a characteristic polynomial. There's a lot of room in infinite-dimensional spaces. We can have Cauchy sequences which don't converge since we picked the 'wrong' norm. This is why Banach (and Hilbert) spaces are interesting. Not all linear operators are continuous anymore. In fact, the most interesting operators (i.e. differential operators) are not continuous. All of these things require a more rigorous analytical framework than linear algebra does and this is where the analysis part in functional analysis comes from. Addendum: I just realized that I talked a lot about the 'what' and not the 'why'. Essentially, these questions help to answer hard questions about functions, for example if you're interested in solving differential equations - eigenvalues of a differential operator $D$ are just the points where you can solve the differential equation $(D - \lambda)f = 0$. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1687978', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/319071/']} | jdg_84019 |
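To make the remark that "not all norms are equivalent on an infinite-dimensional vector space" concrete (this worked example is an addition, not part of the answer): on the space $C[0,1]$ of continuous functions consider
$$\|f\|_\infty = \sup_{x\in[0,1]}|f(x)|, \qquad \|f\|_1 = \int_0^1 |f(x)|\,dx.$$
Taking $f_n(x)=x^n$ gives $\|f_n\|_\infty = 1$ for every $n$, while $\|f_n\|_1 = \frac{1}{n+1}\to 0$, so no constant $C$ can satisfy $\|f\|_\infty \le C\|f\|_1$ for all $f$. The two norms therefore give genuinely different notions of convergence, something that cannot happen in a finite-dimensional space.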
stackexchange | llm_judgeable_groundtruth_similarity | 11725691 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I get the most accurate time stamp in Node.js? ps My version of Node.js is 0.8.X and the node-microtime extension doesn't work for me (crash on install)
Now provide the response and nothing else.
| node v10 and later : you should be using process.hrtime.bigint() , which yields a single BigInt number rather than an array. process.hrtime() has been marked "legacy". Older than node v10 : As stated by vaughan, process.hrtime() is available within Node.js - its resolution is nanoseconds and it is therefore much higher. This function returns an array [seconds, nanoseconds] containing the current real-time high-resolution value, but note that it is not tied to any specific clock, meaning the difference between two successive values tells you how much time passed, but individual values tell you nothing meaningful. Other JS environments : new Date().getTime() gives you a timestamp in milliseconds. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11725691', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/458850/']} | jdg_84020
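For comparison only (an addition, not from the answer), the same pattern exists in Python: a monotonic, high-resolution counter whose individual readings are arbitrary but whose differences measure elapsed time, exactly as with process.hrtime(). The workload function is a stand-in:

```python
import time

def do_work():
    # stand-in workload for the timing demonstration
    sum(i * i for i in range(100_000))

# Individual readings of a monotonic clock are not tied to wall-clock time;
# only the difference between two readings is meaningful.
start = time.perf_counter_ns()
do_work()
elapsed_ns = time.perf_counter_ns() - start
print(f"elapsed: {elapsed_ns} ns")
```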
stackexchange | llm_judgeable_groundtruth_similarity | 41389848 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I'm trying to match this pattern. MessageType is a type that I've created. This function takes in a string, and then depending on the first character of that string, it outputs a MessageType. Whenever I compile though, I get a warning saying: Pattern match(es) are non-exhaustive In an equation for ‘parseMessage’: Patterns not matched: [] Here's my code: parseMessage :: String -> MessageTypeparseMessage (x:_) | x == 'I' = Info | x == 'W' = Warning | otherwise = Error 1 How is it that my pattern matches aren't exhaustive? Wouldn't the otherwise guard catch anything else? I don't see how my function fails to catch all strings. When I write the function like this, I do not get the warning. parseMessage [] = error "Empty String"parseMessage (x:_) = if x == 'I' then Info else if x == 'W' then Warning else Error 1 I rewrote my function this way because I saw that the warning says "Patterns not matched: []", so I explicitly handled it. But why is it that in the first version of my function, it says that the pattern is not handled. How does otherwise not catch everything besides the first two guards?
Now provide the response and nothing else.
| Guards are specific to a single pattern case in a function’s definition, so when you write this: parseMessage (x:_) | x == 'I' = Info | x == 'W' = Warning | otherwise = Error 1 …your guards won’t even be consulted if you run parseMessage "" . The (x:_) pattern would fail, so all of its guards are ignored. This needs to be the case in order for x to be bound within the guard conditions. To fix this, you just need to add a case that handles the empty string: parseMessage :: String -> MessageTypeparseMessage "" = Error 1parseMessage (x:_) | x == 'I' = Info | x == 'W' = Warning | otherwise = Error 1 Now you’ve handled all possible cases. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41389848', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6421957/']} | jdg_84021 |
stackexchange | llm_judgeable_groundtruth_similarity | 12667770 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to move the directory from one location to another location on the same drive. I am getting " Cannot create a file when that file already exists " error. Below is my code. could any one suggest on this? string sourcedirectory = @"F:\source"; string destinationdirectory = @"F:\destination"; try { if (Directory.Exists(sourcedirectory)) { if (Directory.Exists(destinationdirectory)) { Directory.Move(sourcedirectory, destinationdirectory); } else { Directory.CreateDirectory(destinationdirectory); Directory.Move(sourcedirectory, destinationdirectory); } } } catch (Exception ex) { log(ex.message); }
Now provide the response and nothing else.
| As both of the previous answers pointed out, the destination Directory cannot exist. In your code you are creating the Directory if it doesn't exist and then trying to move your directory, the Move Method will create the directory for you. If the Directory already exists you will need to Delete it or Move it. Something like this: class Program{ static void Main(string[] args) { string sourcedirectory = @"C:\source"; string destinationdirectory = @"C:\destination"; string backupdirectory = @"C:\Backup"; try { if (Directory.Exists(sourcedirectory)) { if (Directory.Exists(destinationdirectory)) { //Directory.Delete(destinationdirectory); Directory.Move(destinationdirectory, backupdirectory + DateTime.Now.ToString("_MMMdd_yyyy_HHmmss")); Directory.Move(sourcedirectory, destinationdirectory); } else { Directory.Move(sourcedirectory, destinationdirectory); } } } catch (Exception ex) { Console.WriteLine(ex.Message); } Console.ReadLine(); }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12667770', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1428019/']} | jdg_84022 |
stackexchange | llm_judgeable_groundtruth_similarity | 125647 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Concerning the non-trivial zeros of the Riemann Zeta function, one can find quite a lot of literature on: the rate of growth of the number of zeros along the vertical critical line, the zero-free regions of the critical strip bounds on the number of hypothetical non-trivial zeros inside the critical strip, but off the critical line (though none have been found as of today, and never will if the RH is true) However, I was unable to find any work concerned with estimates of the maximum number of hypothetical non-trivial zeros which may possibly lie on a same horizontal section of the critical strip (i.e. just the horizontal lines for fixed t values).Of course, it is well known that the functional equation implies that any hypothetical zero (1/2-a+it) must be symmetrically mirrored on the other side of the critical line by a zero (1/2+a+it).I imagine that such a set of hypothetical zeros would need to be discrete, as zeros of holomorphic functions are isolated, but I have no hint whatsoever about its cardinality (instinctively, we might feel that at most there are probably finitely many). But perhaps some of you reading this question might know better.
Now provide the response and nothing else.
| If $t$ is not an ordinate of a zero of $\zeta(s)$, define$$ S(t) = \frac{1}{\pi} \arg \zeta(1/2+it) = -\frac{1}{\pi} \Im \int_{1/2}^\infty \frac{\zeta'}{\zeta}(\sigma+it) d\sigma$$and define$$ S(t)= \lim_{\delta\to 0} \frac{1}{2}\Big(S(t+\delta) + S(t-\delta)\Big)$$otherwise. Then the number $N(T)$ of zeros of $\zeta(s)$ in the strip $0<\Im s \le T$ is $$ N(T) = \frac{T}{2\pi}\log \frac{T}{2\pi e} +\frac{7}{8}+S(T)+O(\frac{1}{T}) $$where the big-$O$ term is actually continuously differentiable. For a proof, look either in Titchmarsh's book on the zeta-function or in Montgomery & Vaughan's "Multiplicative Number Theory, I." By continuity, the quantity you are looking for is precisely$$ \lim_{\delta\to 0} \Big(S(t+\delta) - S(t-\delta)\Big).$$Unconditionally, I think Tim Trudgian has the best results for this quantity showing that$$ |S(t)| \le 0.111 \log t + 0.275 \log \log t + 2.450$$for $t>e$ (so your quantity is bounded by essentially twice this amount). This can be sharpened if $t$ is allowed to tend to infinity. As is mentioned in previous comments/answers, assuming the Riemann hypothesis (RH) you are looking for bounds on the multiplicity of a zero. In this case, Goldston & Gonek showed that$$ \lim_{\delta\to 0} \Big(S(t+\delta) - S(t-\delta)\Big) \le \Big(\frac{1}{2}+o(1)\Big) \frac{\log t}{\log \log t} $$as $t\to\infty$ using the Guinand-Weil explicit formula. References: http://arxiv.org/pdf/1208.5846.pdf http://arxiv.org/pdf/math/0511092v1.pdf | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/125647', 'https://mathoverflow.net', 'https://mathoverflow.net/users/15020/']} | jdg_84023
stackexchange | llm_judgeable_groundtruth_similarity | 8399100 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am stuck on the following problem: I need to plot an image with DPI=1200 and a specific print size. By default the png looks ok... png("test.png",width=3.25,height=3.25,units="in",res=1200)par(mar=c(5,5,2,2),xaxs = "i",yaxs = "i",cex.axis=1.3,cex.lab=1.4)plot(perf,avg="vertical",spread.estimate="stddev",col="black",lty=3, lwd=3)dev.off() But when I apply this code, the image looks really bad: it is not scaled (fitted) to the size that is needed. What did I miss? How do I "fit" the image to the plot? [before/after plot images omitted]
Now provide the response and nothing else.
| A reproducible example: the_plot <- function(){ x <- seq(0, 1, length.out = 100) y <- pbeta(x, 1, 10) plot( x, y, xlab = "False Positive Rate", ylab = "Average true positive rate", type = "l" )} James's suggestion of using pointsize , in combination with the various cex parameters, can produce reasonable results. png( "test.png", width = 3.25, height = 3.25, units = "in", res = 1200, pointsize = 4)par( mar = c(5, 5, 2, 2), xaxs = "i", yaxs = "i", cex.axis = 2, cex.lab = 2)the_plot()dev.off() Of course the better solution is to abandon this fiddling with base graphics and use a system that will handle the resolution scaling for you. For example, library(ggplot2)ggplot_alternative <- function(){ the_data <- data.frame( x <- seq(0, 1, length.out = 100), y = pbeta(x, 1, 10) )ggplot(the_data, aes(x, y)) + geom_line() + xlab("False Positive Rate") + ylab("Average true positive rate") + coord_cartesian(0:1, 0:1)}ggsave( "ggtest.png", ggplot_alternative(), width = 3.25, height = 3.25, dpi = 1200) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8399100', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/596719/']} | jdg_84024 |
stackexchange | llm_judgeable_groundtruth_similarity | 10109185 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to make a field read-only with EditorFor on the edit page. I tried to put readonly and disabled as: <div class="editor-field"> @Html.EditorFor(model => model.userName, new { disabled = "disabled", @readonly = "readonly" }) </div> However, it does not work. How can I disable editing of this field? Thank you.
Now provide the response and nothing else.
| The EditorFor html helper does not have overloads that take HTML attributes. In this case, you need to use something more specific like TextBoxFor: <div class="editor-field"> @Html.TextBoxFor(model => model.userName, new { disabled = "disabled", @readonly = "readonly" })</div> You can still use EditorFor, but you will need to have a TextBoxFor in a custom EditorTemplate: public class MyModel{ [UIHint("userName")] public string userName { get; set; }} Then, in your Views/Shared/EditorTemplates folder, create a file userName.cshtml. In that file, put this: @model string @Html.TextBoxFor(m => m, new { disabled = "disabled", @readonly = "readonly" }) | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/10109185', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/725505/']} | jdg_84025
stackexchange | llm_judgeable_groundtruth_similarity | 803332 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Are there any nice examples of structures (groups, modules, rings, fields) $A$ and $B$ such that there are embeddings $A → B → A$ while $A \not\cong B$? I would especially like to see an example for modules $A$, $B$. Or is it even true that the existence of such embeddings implies $A \cong B$? Background: I’m correcting exercises and I wanted to give a counterexample to a failing argument. (Well, I’m not certain it fails, but I’m pretty sure it does and it’s not sufficiently justified at least.)
Now provide the response and nothing else.
| Modules, rings : $A=\Bbb Q^{\oplus\omega}$, $B=A\oplus\Bbb Z$. To see $A\not\cong B$ consider additive divisibility. Fields : For every char $p\ge0$ and cardinal $\kappa\ge{\frak c}$ there exists a unique algebraically closed field of characteristic $p$ and cardinality $\kappa$. If $F$ is an infinite field then $|\overline{F}|=|F|$. Let $F$ be an algebraically closed field of cardinality $|F|\ge{\frak c}$. Then $\overline{F(T)}\cong F$ which yields $F(T)\hookrightarrow F$. Thus we achieve a sequence $F(T)\hookrightarrow F\hookrightarrow F(T)$. To see why $F\not\cong F(T)$, note $F(T)$ is not algebraically closed. Linear (hence lattice, partial) orders : $A=(0,1)$, $B=[0,1)$. To see $A\not\cong B$ consider minima. The above is also an example for topological spaces : $B$ can be written as a disjoint union of a singleton and a connected subset, while $A$ cannot. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/803332', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/42242/']} | jdg_84026 |
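A few words on why the module example works (added detail, not part of the original answer). Viewing them as $\Bbb Z$-modules, write $A=\Bbb Q^{\oplus\omega}$ and $B=A\oplus\Bbb Z$. Then $A$ embeds in $B$ as $A\oplus 0$, and $B=\Bbb Q^{\oplus\omega}\oplus\Bbb Z$ embeds in $\Bbb Q^{\oplus\omega}\oplus\Bbb Q\cong A$ using $\Bbb Z\subset\Bbb Q$. They are not isomorphic because $A$ is divisible (every element is $n$ times some element, for each $n\ge 1$), whereas in $B$ the element $(0,1)$ is not $2$ times anything: $2(x,n)=(0,1)$ would force $2n=1$ in $\Bbb Z$. This is the "additive divisibility" the answer refers to.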
stackexchange | llm_judgeable_groundtruth_similarity | 625619 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
When running isc-dhcp the leases file seems to grow indefinitely (several hundred MB). If I restart the isc-dhcp service, the file shrinks to 1.5MB. How can I have isc-dhcp flush expired records periodically without restarting the full service?
Now provide the response and nothing else.
| As MadHatter mentioned in a comment, the leases file is periodically re-created to avoid this problem. While the period isn't mentioned in the documentation, discussions on the dhcp-users mailinglist indicates that it should be done once an hour, and I've checked the source code and found that this is correct. Unfortunately this isn't a configurable option. In order to change it, you'd need to compile the dhcp server from source. In the file server/db.c you'd need to change the line #define LEASE_REWRITE_PERIOD 3600 to the number of seconds you'd prefer. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/625619', 'https://serverfault.com', 'https://serverfault.com/users/74300/']} | jdg_84027 |
stackexchange | llm_judgeable_groundtruth_similarity | 610025 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently moved to North America, and I was wondering why the houses here seem to be protected by fuses rather than breakers and differential switches like we have in France, for example. It would seem to me like in most cases it would be difficult to reliably/systematically get the Earth loop resistance low enough to allow the fuses to pop when there is a short to an earthed enclosure, and I highly doubt that the fuses are fast enough to protect us against direct electric shock (which differentials do, to some extent). My guess is that with 120VAC instead of 230VAC, most of the issues become fire hazards due to overloads, and much less electric shock - but I suspect that 120VAC can still be dangerous and lethal in certain situations which are not that rare. Is this why local GFCIs (as in, for a single socket) exist or why I find the sockets here ludicrously dangerous and flimsy compared to all European plugs (my favourite being the UK's)?
Now provide the response and nothing else.
| Our breakers I would say almost all US homes have breakers. Even those with legacy fuse boxes have breakers between them and the service. There are six major types of service panel, and all 6 designs were settled in the 1960s. I have a 1960 CH panel that takes modern CH breakers. This video represents our standard layout (except the 240V breakers on that "QO" type have only a single handle, oddly.)* Some older houses still have fuse boxes for lighting. This is because in the past, they did only a partial re-wire, and left the original fuse box in-place to feed typically lighting circuits. It is wired as a "subpanel" fed from a breaker in the new circuit breaker panel. We have that in this 1924 home. The USA is a 240V country actually; we just use a center-tap to get half voltage for small circuits. The above video explains it all. ** It would seem to me like in most cases it would be difficult to reliably/systematically get the Earth loop resistance low enough to allow the fuses to pop when there is a short to an earthed enclosure. Actually no. Earthing has been required starting in the 1950s and on all circuits (except dryers and ranges) since 1966. Further, USA earth wires are always the same size as the live conductors up to 30A; we don't shrink earth like Europe does, until you get to 40A or larger. Even then, Code calls out a minimum size for earth to assure it can clear faults (trip breaker). Also up to 30A, we limit wires to their ampacity at 60°C thermal. For instance our 14 AWG (2.1mm 2 ) is only allowed 15A breaker (and most appliances also have an 80% derate, so for instance a 30A dryer or water heater does not exceed 24A draw, and most plug-in appliances with 15A plugs don't exceed 12A.) My guess is that with 120VAC instead of 230VAC, most of the issues become fire hazards due to overloads, and much less electric shock - but I suspect that 120VAC can still be dangerous and lethal in certain situations which are not that rare. Is this why local GFCIs (as in, for a single socket) exist or why I find the sockets here ludicrously dangerous compared to most European plugs? 120V vs 240V is slightly less dangerous to humans. It probably makes a bigger difference in equipment and wiring faults due to Ohm's Law. (Double the voltage double the current; thus 4x the power). Note that although the US is a 240V country, nothing is more than 120V to earth. This an approach the British took with construction-site 110V power, where each "leg" is 55V to earth. We are concerned with series arc faults (arcing across a loose connection in series with the load) more than Europe is; since our 120V appliances take twice the current for the same job, series arcing makes twice the heat. European power is distributed to the poletop as 3-phase 400V "wye" with 230V phase-neutral. 1 to 3 phases are delivered to each home. If you add 20% to that, consider what you get: 480V 3-phase with 277V phase-neutral. That is exactly what USA industrial power is . Culturally, it is treated with fear and respect - here's a 277/480V arc flash ... Electricians are not as quick to work "live" in a 277/480V panel. The major difference is the presence of "whole-house RCD" to protect most parts of the system in a European house. * Normally our 240V breakers appear like two singles handle-tied. They aren't. Actually there's an internal mechanism for "common trip", the handle ties are merely decoration / to remind a maintainer that both must be thrown. 
** Originally, 120V (well, 100V then) was only for lighting and 200V for everything else. But then, they provided receptacles/sockets for desk and floor lamps, and then people plugged radios into them, and then things got out of hand. We have a 240V plug standard (NEMA 2 and NEMA 6); we really ought to start using it so we can have fast kettles. GFCI vs RCD In Europe, they use a "whole house RCD" to detect leakage currents (current drawn by house does not match current returning). This is a compromise system with a number of priorities: detect ground faults that threaten equipment, provide limited personnel protection, stop parallel arc faults (arcing live-earth or neutral-earth that is not sufficient to trip an overcurrent fuse or breaker), and do this economically at a whole-house level. The technology at the time favored RCD, as digital signal processing was not yet ready. The ideal detection threshold for human protection is 5 mA of leakage. However, a whole house has ordinary current leakage that is not a threat and is unavoidable (not least: electrical cables have capacitance) which is higher than that. Europeans and Americans took different tacks to this dilemma. In Europe, the opportunity to protect the whole house was seen as worth exchanging for somewhat less human protection. So a 30 mA threshold was chosen. The United States came around slowly, and placed human protection first. As such, "5 mA threshold" was not negotiable, and they traded away the opportunity to protect the whole house.*** So the protection is applied on a per-circuit basis with some length limitations. And this results in most circuits not being protected, except on the most modern houses where most circuits are protected by use of many xFCI breakers. When the U.S. finally came around to dealing with arc faults, more modern tech was available, such as digital signal processors to actually "listen to" the electrical waveform on the wire for the pattern ("sound") of arc faults. That crinkle-crunch sound you get hooking up speakers live is the sound of an arc fault. Most of those AFCIs include RCD/GF detection as a way detect L-E and N-E parallel arc faults. However they use the 30mA sensitivity. *** Keep in mind, American services are enormous . A typical main breaker is 200A on a single (split) phase. Compare with a German home with 64A main breaker x 3 phases. But you can see where an RCD detect coil with 3-wire 200A going through it would be a beast. It probably wouldn't fit in any of our breaker casings! Our sockets For those who fear our sockets, placing the circuit under GFCI protection is a near "golden bullet" for this problem and many others. You can also enhance safety by installing the sockets ground up, so if a metal thing falls in the gap, it will land on the ground first. (Assuring contact with it and any other pin will result in immediate GFCI trip). Remember our GFCIs are 5mA-rated for full human safety protection, unlike the 30mA compromise system used in 230V-land. 10mA can stun, and a stun is a kill if water or ladders are involved. The UK plug is a handful, and while the fuse is nice, our appliances are rated by UL to fail reasonably, i.e. trip any circuit breaker they're allowed on before bursting into flame. This is also why using the correct breaker is required; if Article 230 motor rules allow a 35A breaker on a 16A motor load but UL says only 25, it is because UL believes a 30A breaker would not trip soon enough. 
| {} | {'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/610025', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/30449/']} | jdg_84028
stackexchange | llm_judgeable_groundtruth_similarity | 2852287 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm an undergrad taking the course "Linear Algebra 1". I came across a problem: sometimes we need to apply Gaussian elimination to matrices. This skill quickly becomes routine, as it is not a thinking skill but purely mechanical. Yet, often in exams there is a question that requires you to apply row reduction to a matrix. I am looking for a way to check that the Gaussian elimination has been done properly, meaning with no calculation mistakes, other than going over all the steps and verifying each calculation. Re-checking every step would double the time I spend on a given question, and due to the lack of time in a big course exam - which is also very stressful - such a method would be very helpful for me. Note: we're allowed to use a simple scientific calculator (not a graphing calculator).
Now provide the response and nothing else.
| We know that elementary row operations do not change the row space of the matrix.And if a matrix is in rref , then it is relatively easy to check whether a vector belongs to the row space. So suppose you have matrix $A$ and a reduced row echelon matrix $B$. If $R_A$ and $R_B$ are row spaces, you can easily check whether $R_A\subseteq R_B$. Of cause, this is only "half" 1 of the verification whether $R_A=R_B$, which is equivalent to $A\sim B$. Example. Suppose that I have a matrix $$A=\begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\\end{pmatrix}.$$ And that after Gaussian elimination I get: $$B=\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 &-1 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \\\end{pmatrix}$$ To check whether $R_A\subseteq R_B$ it suffices to check whether each row of $A$ is a linear combination of $(1,0,0,1)$, $(0,1,0,-1)$ and $(0,0,1,2)$ , i.e., whether it is of the form $c_1(1,0,0,1)+c_2(0,1,0,-1)+c_3(0,0,1,2)$. But since these vectors are very simple, we can see that on coordinates where there are pivots we get $c_1$, $c_2$ and $c_3$. So it is easy to find coefficients. Let us try with the fourth row: $(1,2,1,1)$.We look at the first three coordinates. (Those are the coordinates with the pivots.) And we check whether$$(\boxed{1},\boxed{2},\boxed{1},1)=1\cdot(1,0,0,1)+2\cdot(0,1,0,-1)+1\cdot(0,0,1,2)$$We see that this is true. If the same thing works for each row of $A$, this shows that $R_A\subseteq R_B$. Let me try now another example where I make a mistake on purpose to see how to find the mistake.$$\begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\\end{pmatrix}\overset{(1)}\sim\begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\\end{pmatrix}\overset{(2)}\sim\begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 2 & 0 & 0 \\\end{pmatrix}\overset{(3)}\sim\begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\\end{pmatrix}\overset{(4)}\sim\begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\\end{pmatrix}\overset{(5)}\sim\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\\end{pmatrix}$$ We can check that$$(1,1,1,2)\ne 1\cdot(1,0,0,0)+1\cdot(0,1,0,0)+1\cdot(0,0,1,1).$$ I can even make the same verification for the matrix after each step. For example, for the matrix after step $(2)$, i.e., $\begin{pmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 2 & 0 & 0 \\\end{pmatrix}$, everything works. So some error must be before this step. I will stress once again that this is only halfway verification. I have only checked $R_A\subseteq R_B$ but not $R_B\subseteq R_A$. So it is possible that I make a mistake which I do not notice in this way. Here is a (rather naive) example $$\begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\\end{pmatrix}\sim\begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 &-1 \\\end{pmatrix}\sim\begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 \\\end{pmatrix}\sim\begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\\end{pmatrix}\sim\begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\\end{pmatrix}\sim\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\\end{pmatrix}$$ The sanity check described above works. (We check that $R_A\subseteq R_B$, which is true.) But the result is incorrect. 
If I want to be able to check both inclusions and additionally to be able to make a check after each step, I can use extended matrix. (But this is much more work.) In our example I would do the following$$\left(\begin{array}{cccc|cccc} 1 & 1 & 1 & 2 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 2 & 1 & 1 & 0 & 0 & 0 & 1 \\\end{array}\right)\sim\left(\begin{array}{cccc|cccc} 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 &-1 & 1 \\\end{array}\right)\sim\left(\begin{array}{cccc|cccc} 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\\end{array}\right)\sim\left(\begin{array}{cccc|cccc} 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\\end{array}\right)\sim\left(\begin{array}{cccc|cccc} 1 & 0 &-1 &-1 & 0 & 1 &-1 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\\end{array}\right)\sim\left(\begin{array}{cccc|cccc} 1 & 0 & 0 & 1 & 1 & 0 &-1 & 0 \\ 0 & 1 & 0 &-1 &-1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\\end{array}\right)$$Now the four numbers on the right are coefficients which tell me how to get this row as a linear combination of the linear matrix. For example, if I look at the first row, I can check that$$1\cdot(1,1,1,2)-1\cdot(0,1,1,1)=(1,0,0,1).$$By making a similar verification for each I can test that $R_A\subseteq R_B$. Notice that I can do this also halfway through the computation. For example, if I look at the last row of the third matrix, I have there$$\left(\begin{array}{cccc|cccc} 0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\\end{array}\right)$$And I can check that$$-1\cdot(1,1,0,0)-1\cdot(0,1,1,1)+1\cdot(1,2,1,1)=(0,0,0,0).$$ 1 This is similar to the advice given in comment. If you are using Gaussian elimination to solve a linear system, you can check whether the solution you got is indeed a solution. But it is still possible that you do not have all solutions. So this is just a "half-check". | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2852287', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/553764/']} | jdg_84029 |
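The halfway check described in the answer is easy to automate. A small numerical sketch (an addition, using NumPy; the matrices are the $A$ and $B$ from the answer) tests whether every row of $A$ is the pivot-coefficient combination of the nonzero rows of the candidate rref $B$:

```python
import numpy as np

A = np.array([[1, 1, 1, 2],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 2, 1, 1]], dtype=float)

B = np.array([[1, 0, 0, 1],
              [0, 1, 0, -1],
              [0, 0, 1, 2],
              [0, 0, 0, 0]], dtype=float)

def rows_in_row_space(A, B, tol=1e-9):
    """Check R_A subset of R_B for a candidate rref B using the pivot-column trick:
    the coefficients of each row of A are just its entries in the pivot columns."""
    R = np.array([row for row in B if np.any(np.abs(row) > tol)])
    # pivot column = position of the first nonzero entry of each nonzero rref row
    pivots = [int(np.argmax(np.abs(row) > tol)) for row in R]
    for a in A:
        coeffs = a[pivots]            # read off c_1, ..., c_k
        if not np.allclose(coeffs @ R, a, atol=tol):
            return False
    return True

print(rows_in_row_space(A, B))   # True for the correct rref from the answer
```

As the answer stresses, this only verifies the inclusion $R_A\subseteq R_B$; the reverse inclusion (or the extended-matrix bookkeeping) is still needed to rule out mistakes of the kind shown in the last example.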
stackexchange | llm_judgeable_groundtruth_similarity | 1704487 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an environment that serves many devices spread across 3 time zones by receiving and sending data during the wee hours of the night. The distribution of these devices was determined pseudo-randomly based on an identification number and a simple calculation using a modulo operation. The result of such a calculation creates an unnecessary artificial peak which consumes more resources than I'd like during certain hours of the night. As part of our protocol I can instruct devices when to connect to our system on subsequent nights. I am seeking an algorithm which can generally distribute the peak into a more level line (albeit generally higher at most times) or at least a shove in the right direction - meaning what sort of terminology should I spend my time reading about. I have available to me identification numbers for devices, the current time, and the device's time zone as inputs for performing calculation. I can also perform some up front analytical calculations to create pools from which to draw slots from, though I feel this approach may be less elegant than I am hoping for (though a learning algorithm may not be a bad thing...). (Ultimately and somewhat less relevant I will be implementing this algorithm using C#.)
Now provide the response and nothing else.
| If you want to avoid the spikes associated with using random times, look at the various hashing functions used for hashtables. Your reading might start at the wikipedia articles on the subject: http://en.wikipedia.org/wiki/Hash_function Basically, divide whatever you want your update window to be into the appropriate number of buckets. One option might be 3 hours * 60 minutes * 60 seconds = 10800 buckets. Then use that as your hashtable size, for the chosen hashing function. Your unique input might be device ID. Don't forget to use GMT for the chosen time. Your programming language of choice probably has a number of built in hashing functions, but the article should provide some links to get you started if you want to implement one from scratch. This approach is superior to the earlier answer of random access times because it has much better evenness properties, and ensures that your access patterns will be approximately flat, as compared to the random function which is likely to sometimes exhibit spikes. Here's some more specific information on how to implement various functions: http://www.partow.net/programming/hashfunctions/index.html | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1704487', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5645/']} | jdg_84030 |
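A short sketch of the bucketing idea described above (an addition; the window start, the SHA-256 choice and the device-ID format are assumptions, not part of the answer): hash each device ID into one of the 10800 one-second buckets of a 3-hour window and derive a stable nightly check-in time from it.

```python
import hashlib
from datetime import datetime, timedelta, timezone

WINDOW_START_UTC = "01:00"      # assumed start of the 3-hour maintenance window (GMT)
WINDOW_SECONDS = 3 * 60 * 60    # 10800 one-second buckets

def checkin_offset(device_id: str) -> int:
    """Map a device ID to a stable bucket in [0, WINDOW_SECONDS)."""
    digest = hashlib.sha256(device_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % WINDOW_SECONDS

def checkin_time(device_id: str, day: datetime) -> datetime:
    hour, minute = map(int, WINDOW_START_UTC.split(":"))
    window_start = day.replace(hour=hour, minute=minute, second=0, microsecond=0,
                               tzinfo=timezone.utc)
    return window_start + timedelta(seconds=checkin_offset(device_id))

print(checkin_time("device-0042", datetime(2024, 5, 1)))
```

A hash with good mixing spreads structured ID spaces nearly uniformly across the buckets, which is exactly what the plain modulo in the question fails to do.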
stackexchange | llm_judgeable_groundtruth_similarity | 780170 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If a function's integral can't be written, then how can we find exact values for it over areas? Can we only ever estimate it? Why can't we make new functions to define these strange unwritable anti-derivatives?
Now provide the response and nothing else.
| We can, and do all the time! For example, The Gamma function$$\Gamma (z) = \int_0^{\infty} t^{z-1}\mathrm{e}^{-t} \ \mathrm{d}t.$$ The Beta function $$\mathrm{B}(z,y) = \int_0^1 t^{z-1}(1-t)^{y-1}\,\mathrm{d}t.$$ The Exponential integral function$$\mathrm{E}_1(z) = \int_z^\infty \frac{e^{-t}}{t}\, \mathrm{d}t.$$ The Error function$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_{0}^x e^{-t^2}\,\mathrm dt.$$ The Elliptic integral of the second kind$$ E(\phi,k)=\int_0^{\phi} \sqrt{1-k^2\sin^2\theta} \ \mathrm{d}\theta. $$ The Logarithmic integral function$$ {\rm Li} (x) = \int_2^x \frac{\mathrm{d}t}{\ln t},$$ and many, many more very important "special functions' are defined by definite integrals. If you go back even to elementary functions, you can define the logarithm via the following integral - The Logarithm$$\ln (t) = \int_1^t \frac{1}{x} \, dx.$$ As to your first question, how do we find areas under these curves if we don't have an elementary antiderivative? Well, how do you find the area under the curve $1/t$ from $1$ to $5$? The above integral tells you the value is $\ln 5$, but what is that value, exactly? We can only approximate it, given the best methods we have! The same is true of all the above functions. At some special values they have exact values, given perhaps by integers, rational or irrational numbers, or a combination of common mathematical constants such as $\pi,\mathrm{e},\gamma,$ Catalan's constant, etc. (another interesting question is - why are these constants special enough to have names? Because they come up all the time! The same is true for the above functions) . But for almost all values we must approximate the value of the function by computing the definite integral numerically (Trapezoid rule, Simpson's rule, more advanced techniques), or using some other representation of the integral such as an infinite sum etc. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/780170', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/137446/']} | jdg_84031 |
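As a concrete illustration of "approximate the value of the function by computing the definite integral numerically" (an addition to the answer, assuming SciPy is available), here erf(1) is approximated directly from its defining integral and compared with the library value:

```python
import math
from scipy.integrate import quad
from scipy.special import erf

# erf(1) has no elementary closed form; approximate its defining integral numerically.
value, abs_err = quad(lambda t: 2.0 / math.sqrt(math.pi) * math.exp(-t * t), 0.0, 1.0)

print(value)      # approximately 0.8427007929497149
print(erf(1.0))   # library value, for comparison
print(abs_err)    # quad's estimate of the quadrature error
```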
stackexchange | llm_judgeable_groundtruth_similarity | 39666578 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This code is from our project, production code: if (changedToNull) { try { throw new Exception(); } catch (Exception e) { log.debug("changedToNull", e); }} The developer does not work with us any more. Why would someone throw an Exception and catch directly and log it?
Now provide the response and nothing else.
| The main purpose is to get a call stack when you enter in this if block for debugging purpose but it could be rewritten as next: if (changedToNull) { log.debug("changedToNull", new Exception("changedToNull is true"));} Let's say that changedToNull should never be true and you want to get the call stack to understand how it occurred, you could proceed this way. Creating a call stack is quite expensive so you should make sure that the debug level is enabled by checking the value of isDebugEnabled() too (assuming that you use log4j ) as next: if (changedToNull && log.isDebugEnabled()) { ...} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39666578', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1173112/']} | jdg_84032 |
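For comparison only (an addition, not from the answer), the same "log a call stack without actually failing" idiom exists in Python's standard logging, where no throwaway exception needs to be constructed:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

def handle(changed_to_null: bool) -> None:
    if changed_to_null and log.isEnabledFor(logging.DEBUG):
        # stack_info=True appends the current call stack to the log record,
        # without creating or raising an exception.
        log.debug("changedToNull", stack_info=True)

handle(True)
```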
stackexchange | llm_judgeable_groundtruth_similarity | 2546871 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
All of the applications of AoC I've encountered have been in upper level undergraduate or graduate math courses. Are there any basic results from courses like Calc I-III which (unbeknownst to students) rely on AoC?
Now provide the response and nothing else.
| The axiom of choice can be used to prove that the sequential definition of continuity at a point (for real functions of a real variable) is equivalent to the $\varepsilon$-$\delta$ definition. If your calculus textbook proves that sequential continuity implies epsilon-delta continuity without mentioning the axiom of choice, it's doing something like this: . . . Then, for each $n\in\mathbb N,$ there is a real number $x_n$ such that $|x_n-x_0|\lt\frac1n$ while $|f(x_n)-f(x_0)|\ge\varepsilon.$ Thus the sequence $x_n$ converges to $x_0,$ while $f(x_n)$ does not converge to $f(x_0)$. . . Do you see where I used the axiom of choice? | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2546871', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/202777/']} | jdg_84033 |
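To spell out the step the answer is pointing at (added detail): for each $n$ the set of witnesses
$$A_n=\Big\{x\in\mathbb R:\ |x-x_0|<\tfrac1n\ \text{and}\ |f(x)-f(x_0)|\ge\varepsilon\Big\}$$
is nonempty, but no rule is given that singles out a particular element of it. Choosing one $x_n\in A_n$ simultaneously for every $n$ to build the sequence is exactly an application of (countable) choice.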
stackexchange | llm_judgeable_groundtruth_similarity | 92523 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have come across a circumstance where NonlinearModelFit is very sensitive to the model used. I am aware that NonlinearModelFit is very dependent on the initial estimates and this dictated my choice of model -- I thought I had chosen a good model. I would like to hear comments on why my choice is poor. I am fitting data that is a cosine wave. The two choices of model I considered are m1 = a Cos[2 π f t] + b Sin[2 π f t];m2 = a Cos[2 π f t + ϕ]; The first model looks better because it has one nonlinear parameter, the frequency f, while the second has frequency and phase angle ϕ. I was hoping that I could just guess the frequency and not supply estimates for a and b because they are linear. To test these two models I used the following data based on measured values. data = With[{a = 43.45582489316203`, f = 94.92003941300389`, ϕ = 431.155471523826`}, SeedRandom[1234]; Table[{t, a Cos[2 π f t + ϕ] + RandomReal[{-0.1, 0.1}]}, {t,13.439999656460714`, 13.479799655455281`, 0.0002}] ]; Here is the first fit fit1 = NonlinearModelFit[data, m1, {{f, 100}, {a, -22}, {b, 35}}, t];fit1["ParameterConfidenceIntervalTable"]Show[ListPlot[data], Plot[fit1[t], {t, data[[1, 1]], data[[-1, 1]]}]] The error Failed to converge to the requested accuracy or precision within 100 iterations. is produced. The standard errors are poor: Now consider the second model fit2 = NonlinearModelFit[data, m2, {{f, 100}, {a, 40}, {ϕ, 0.7}},t];fit2["ParameterConfidenceIntervalTable"] This model does converge and is a good fit although the phase is several multiplies of Pi. On a minor point changing the phase to say 3.9 results in almost the same values. Is the numerical evaluation of the trig functions an issue? fit2 = NonlinearModelFit[data, m2, {{f, 100}, {a, 40}, {ϕ, 3.9}}, t];fit2["ParameterConfidenceIntervalTable"]Show[ListPlot[data], Plot[fit2[t], {t, data[[1, 1]], data[[-1, 1]]}]] I wondered if my assumption was wrong and if there was more than one minimum for the first model. I therefore generated the error on the assumption that given an estimate of frequency the problem is a linear one and a and b can be solved using LeastSquares. This module generates the mean square error given a value of frequency. ClearAll[err];err[data_, f_] := Module[{tt, d, mat, a, b, fit}, tt = data[[All, 1]]; d = data[[All, 2]]; mat = {Cos[2 π f #], Sin[2 π f #]} & /@ tt; {a, b} = LeastSquares[mat, d]; fit = a Cos[2 π f #] + b Sin[2 π f #] & /@ tt; {f, (d - fit).(d - fit), {a, b}} ]e1 = Table[err[data, f], {f, 40, 150, 1}];ListPlot[e1[[All, {1, 2}]]] As expected there is a good minimum around the correct frequency with a reasonable guessing range for just the frequency. This reinforces my idea that model 1 should be better. What's wrong with my intuition? Why is model 2 better than model 1?
Now provide the response and nothing else.
| Intuition is sometimes tricky on fitting procedures. This is of course not a Mathematica issue, but a problem of fitting in general.You can see the problem in parameter space (hence it depends on the details of parameter space). Defining for the residuals (square root) Res1[ff_, aa_, bb_] := Norm[data[[All, 2]] - (m1 /. {f -> ff, a -> aa, b -> bb, t -> #} & /@data[[All, 1]])] and plotting GraphicsGrid[{{ Plot3D[Res1[100.1, aa, bb], {aa,-50,50}, {bb,-50,50}, MeshFunctions -> {#3 &}], Plot3D[Res1[100., aa, bb], {aa,-50,50}, {bb,-50,50}, MeshFunctions -> {#3 &}], Plot3D[Res1[99.9, aa, bb], {aa,-50,50}, {bb,-50,50}, MeshFunctions -> {#3 &}]}}] you see that the gradient in the $(a,b)$ projection of the parameter space complete changes direction upon small changes in frequency.On the other hand with Res2[ff_, aa_, ϕϕ_] := Norm[data[[All,2]] - (m2 /. {f -> ff, a -> aa, ϕ -> ϕϕ, t -> #} & /@ data[[All, 1]])] and plotting GraphicsGrid[{{ Plot3D[Res2[100.1, aa, fi], {aa,-50,50}, {fi,-Pi,Pi}, MeshFunctions -> {#3 &}], Plot3D[Res2[100.0, aa, fi], {aa,-50,50}, {fi,-Pi,Pi}, MeshFunctions -> {#3 &}], Plot3D[Res2[99.9, aa, fi], {aa,-50,50}, {fi,-Pi,Pi}, MeshFunctions -> {#3 &}]}}] is more 1 dimensional. So you are not running in circles. While not a complete answer, I hope this gives an idea. A note at the end. My general advice is: whenever possible redefine your model such that all parameters are on the same order of magnitude. First Update Concerning op's concern: The plots for model 1 look nice and quadratic (as I suggested in the second part of my question). The plots for model 2 are wild and could easily take you off in the wrong direction . I agree, but this is only in a 2D cut of the 3D problem. Moreover, phi is restricted to mod $2 \pi$ Sure, there are saddle points and they actually take you off, resulting in the large phase in the end, while $431 \mod 2\pi$ makes $3.9$ a good guess. Furthermore, if you jump in the next minimum of the phase and make a phase shift of $\pi$ , the cut in amplitude is parabolic, giving you very fast the amplitude with opposite sign. In detail you can see what I mean If you look how Mathematica travels through your parameter space (at the moment I only have Version 6 at hand) {fit3, steps3} = Reap[FindFit[data, m1, {{f, 100}, {a, 8}, {b, 41}}, t, MaxIterations -> 1000, StepMonitor :> Sow[{f, a, b}]]];Show[Graphics3D[ Table[{Hue[.66 (i - 1)/(Length[First@steps3] - 1)], AbsolutePointSize[7], Point[(First@steps3)[[i]]], Line[Take[First@steps3, {i, i + 1}]]}, {i, 1, Length[First@steps3] - 1}], Boxed -> True, Axes -> True], BoxRatios -> {1, 1, 1}, AxesLabel -> {"f", "a", "b"}] Here you see what I mean with going in circles . Even after $1000$ Iterations you are not even close as the $(a,b)$ -minimum changes position with changes in frequency in such an unfortunate way. If you look on the other hand at the second model you get: {fit2, steps2} = Reap[FindFit[data, m2, {{f, 100}, {a, 40}, {ϕ, 3.9}}, t, StepMonitor :> Sow[{f, a, ϕ}]]];Show[Graphics3D[ Table[{Hue[.66 (i - 1)/(Length[First@steps2] - 1)], AbsolutePointSize[7], Point[(First@steps2)[[i]]], Line[Take[First@steps2, {i, i + 1}]]}, {i, 1, Length[First@steps2] - 1}], Boxed -> True, Axes -> True], BoxRatios -> {1, 1, 1}, AxesLabel -> {"f", "a", "ϕ"}] where it finds the amplitude quite fast, reducing the problem to 2D in phase and frequency. Second Update Concernings the op's question if the final result is a quadratic well . Let us just plot the three cuts in parameter space. 
{faPlot = ContourPlot[Res2[freq, amp, 434.3256], {freq, 94.9197 - .01, 94.9197 + .01}, {amp,-43.4566-10, -43.4566 +10}],fpPlot = ContourPlot[Res2[freq, -43.4566, phase], {freq, 94.9197-.01,94.9197 + .01}, {phase, 434.3256-1.5,434.3256+1.5}],apPlot = ContourPlot[Res2[94.9197, amp,phase], {amp, -43.4566-15, -43.4566+15}, {phase,434.3256-1.5,434.3256+1.5}]} This looks promising except for the middle graph. After a coordinate transformation, however, we get β = 84.57;ContourPlot[Res2[94.9197 + fff + ppp/β, -43.4566, 434.3256 - β fff + ppp], {fff, -8.5, +8.5}, {ppp, -1.5, +1.5}] which gives So this looks OK as well. All is good. Making the troublesome fit work On StackOverflow I came across answers from Jean Jaquelin providing methods to turn non-linear fits in actual linear fits. Some information can be found here . The point is that when looking at $y = a \cos( \omega t) + b \sin(\omega t)$ we know that the second derivative is $y'' = - \omega^2 y$ . Numerical derivatives are very often critical though. Slightly better is to look at the double integration $\int\int y = -y/\omega^2 + c t + d$ . The Integration can be performed rather easy with cumint[ indata_ ] := Module[ { p = Interpolation[indata] , timedata, signaldata, int }, timedata = indata[[All, 1]]; signaldata = indata[[All, 2]]; int = Join[{0}, Table[ NIntegrate[ p[t], {t, timedata[[i]], timedata[[i + 1]]} ], {i, 1, Length[ timedata ] - 1 } ] ]; Return[ Transpose[{ timedata, Accumulate[int] } ] ] ] (This is my quick and dirty solution while in python using cumtrapz ) This leaves us with a linear optimization for $1/\omega^2$ , $c$ and $d$ , while we are only interested in the first one. We then have dT = Transpose[data];tList = dT[[1]];sList = dT[[2]];y1 = cumint[data];y2 = cumint[y1];SSList = y2[[All, 2]];GraphicsArray[{{ListPlot[ data, Joined -> True], ListPlot[ y1, Joined -> True], ListPlot[ y2, Joined -> True]}}] VT = {sList, tList, Table[1, Length[data]]};V = Transpose[ VT ];A = VT.V;SV = VT.SSList;AI = Inverse[ A ];\[Alpha] = AI.SV;w0 = Sqrt[-1/\[Alpha][[1]]];f0 = w0/2/Pi which gives f0 = 94.9134 . With this knowledge one can make a linear fit on a and b , namely sv = Sin[ w0 tList];cv = Cos[w0 tList];WT = {cv, sv};W = Transpose[WT];B = WT.W;BI = Inverse[B];SY = WT.sList;sol = BI.SY providing {-10.9575, 42.0522} , and ListPlot[{ sList, (sol[[1]] cv + sol[[2]] sv) }, Frame -> True, Joined -> {False, True}] This looks already very good. Now let's try to use this results as start parameters for the non-linear fit. {fit3, steps3} = Reap[FindFit[data, m1, {{f, f0}, {a, sol[[1]]}, {b, sol[[2]]}}, t, MaxIterations -> 1000, StepMonitor :> Sow[{f, a, b}]]];Show[Graphics3D[ Table[{Hue[.66 (i - 1)/(Length[First@steps3] - 1)], AbsolutePointSize[7], Point[(First@steps3)[[i]]], Line[Take[First@steps3, {i, i + 1}]]}, {i, 1, Length[First@steps3] - 1}], Boxed -> True, Axes -> True], BoxRatios -> {1, 1, 1}, AxesLabel -> {"f", "a", "b"}]fit3 {f -> 94.9197, a -> -30.7143, b -> 30.7426} Now it works. The slight modification in the frequency, however, resulted again in a quite dramatic change of amplitudes. Does it really fit? How does it look? ListPlot[ { data, Table[{t, a Cos[ 2 Pi f t] + b Sin[2 Pi f t]}, {t, data[[1, 1]], data[[-1, 1]], 0.0001}] /. fit3 }, Joined -> {False, True} ] It does fit and looks good. In this simple case the pure linear approach probably would have been enough. In case of noisy data it still might work, but definitively gives a good set of starting values. 
One can also use these results to calculate better starting values for the solution using phases. In the presented example this is not necessary, but it might be of interest for noisy data. | {} | {'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/92523', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/12558/']} | jdg_84034
stackexchange | llm_judgeable_groundtruth_similarity | 12344095 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm in the middle of writing a small application that needs to read some complex binary messages in LuaJIT. I've been using the bit module and string.rep a lot. However, it's all very cumbersome. I'm new to using LuaJIT and think there might be a much easier way using FFI. In C I can declare a structure like this: struct mystruct{ uint32_t field1; char field2[6]; uint64_t field3; short field4;} __attribute__((packed)); In reading LuaJIT's FFI documentation, it seems you can declare ffi.cdef[[ #pragma pack(1) struct mystruct { uint32_t field1; char field2[6]; uint64_t field3; short field4; };]] I can then create a mystruct and access the fields like this: local ms = ffi.new("mystruct")
ms.field1 = 32 -- ... etc
But how do I convert this back into a Lua string? I tried this, but it didn't seem to do what I wanted: local s = tostring(ms) and this: local s = ffi.string(ms) produces the following error: "bad argument #1 to 'string' (cannot convert 'struct mystruct' to 'const char *')" So I tried a cast: local s = ffi.string(ffi.cast("char*", ms)) No error, but it looks wrong on the wire.
Now provide the response and nothing else.
| You have to explicitly specify the length when using ffi.string with a non-string-like parameter: str = ffi.string(ptr [,len]) Creates an interned Lua string from the data pointed to by ptr. If the optional argument len is missing, ptr is converted to a "char *" and the data is assumed to be zero-terminated. The length of the string is computed with strlen(). When running the following code, I get the expected (little endian) result:
ffi = require 'ffi'
ffi.cdef[[
    typedef unsigned long uint32_t;
    typedef unsigned long long uint64_t;
    #pragma pack(1)
    struct mystruct {
        uint32_t field1;
        char field2[6];
        uint64_t field3;
        short field4;
    };
]]
function string.tohex(str)
    return (str:gsub('.', function (c) return string.format('%02X', string.byte(c)) end))
end
ms = ffi.new('struct mystruct', 1, {2, 3, 4, 5, 6, 7}, 8, 9)
s = ffi.string(ms, ffi.sizeof(ms)) -- specify how long the byte sequence is
print(s:tohex()) --> 0100000002030405060708000000000000000900
Update: I know this is not a part of the original question, but I just learned this trick, and in order to be complete, here is a way to convert Lua string back to FFI cdata:
data = ffi.new('struct mystruct') -- create a new cdata
ffi.copy(data, s, ffi.sizeof(data)) -- fill it with data from Lua string 's'
print(data.field1, data.field4) --> 1 9 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12344095', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/134702/']} | jdg_84035
stackexchange | llm_judgeable_groundtruth_similarity | 58460501 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following regex, which works in Chrome but causes an error in Firefox and Safari. I need to modify it to make it work. Can anybody help out a poor soul? Thanks in advance! regex: /(?=<tag>)(.*?)(?<=<\/tag>)/ Basically, I have to match any characters between <tag> and </tag> and need to retain both tags. I pass this expression as the argument to String.prototype.split. input: "The quick brown <tag>fox</tag> jumps over the lazy <tag>dog</tag>" operation: input.split(regex) output: ["The quick brown ", "<tag>fox</tag>", " jumps over the lazy ", "<tag>dog</tag>"]
Now provide the response and nothing else.
| Firefox and Safari don't support lookbehind yet. You can instead use a capture group (so that the pattern being split on is also included in the output) and split on <tag>...</tag>:
let str = "The quick brown <tag>fox</tag> jumps over the lazy <tag>dog</tag>"
let regex = /(<tag>.*?<\/tag>)/
console.log(str.split(regex).filter(Boolean)) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58460501', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11389020/']} | jdg_84036
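If the original lookbehind pattern is preferred where the engine supports it, one option is to feature-detect lookbehind at runtime and fall back to the capture-group version. This is a sketch of my own (it reuses the question's pattern and sample string), not part of the answer above:
let supportsLookbehind = false
try {
  new RegExp('(?<=a)b') // constructing it at runtime throws a SyntaxError on engines without lookbehind
  supportsLookbehind = true
} catch (e) {}
const input = "The quick brown <tag>fox</tag> jumps over the lazy <tag>dog</tag>"
const splitter = supportsLookbehind
  ? /(?=<tag>)(.*?)(?<=<\/tag>)/ // the question's original pattern
  : /(<tag>.*?<\/tag>)/ // the capture-group fallback shown above
console.log(input.split(splitter).filter(Boolean))
Using the RegExp constructor (rather than a regex literal) keeps the unsupported syntax out of the parse phase, so the try/catch can do its job.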
stackexchange | llm_judgeable_groundtruth_similarity | 17370093 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it recommended that C++ programmers frequently write lines like
std::cout << "output: " << i << " and " << j << std::flush; // more
std::cout << "ending newline." << std::endl; // endl does flush
In other words, in output lines that don't end with endl, should we be flushing a lot, just in case? Or is this not really needed anymore these days on most platforms?
Now provide the response and nothing else.
| Your average program does not require frequent flushing. Flushing is something nearer to a special case, needed in a few situations: Interacting with a human or other system: flushing output before waiting for input is sensible. Going dormant for a while: flushing before an extended sleep or wait simplifies examination of logfiles, keeps databases consistent most of the time, etc. If buffering is not needed, it would be better to disable buffering in the first place instead of throwing in a lot of flushes. Most of the time, programs benefit by having buffering enabled. Sometimes they generate a few characters here and there. Other times they output a blast of lines. In all my decades of engineering, my most dramatic performance increases have often been realized simply by improving buffering. Sometimes by increasing the FILE buffer size above its 512-byte default to 4K or 32K (sometimes higher). Other times by adding a layer of buffering or caching. Usually there is high overhead with each trip through the operating system's I/O system. Reducing the total number of system calls is (usually) an easy and highly effective scheme to improve performance. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17370093', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2521520/']} | jdg_84037
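To make the set-it-up-once idea concrete, here is a minimal C++ sketch of the two knobs the answer alludes to; it is my own illustration, and the 32 KiB size is just an example to tune per workload:
#include <cstdio>
#include <iostream>
int main() {
    // Larger stdio buffer: must be installed before any output is written to the stream.
    static char big_buffer[32 * 1024];
    std::setvbuf(stdout, big_buffer, _IOFBF, sizeof big_buffer);
    // The opposite extreme: unitbuf makes every insertion flush itself,
    // so no explicit std::flush is needed (at a cost in system calls).
    std::cout << std::unitbuf;
    std::cout << "auto-flushed line\n";
    std::cout << std::nounitbuf; // back to normal, batched output
    std::cout << "this line is buffered again\n";
    return 0;
}
Whether std::cout actually routes through stdout's buffer depends on sync_with_stdio and the standard library in use, so measure before relying on a particular buffer size.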
stackexchange | llm_judgeable_groundtruth_similarity | 54638080 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was trying to implement the Places API. My code looked like this:
val builder = PlacePicker.IntentBuilder()
startActivityForResult(builder.build(mActivity), PLACE_PICKER_REQUEST)
My Maps credentials were correct, but for this call I got: Places API for Android does not seem to be enabled for your app. See https://developers.google.com/places/android/signup for more details. However, when I tried to enable the "Places API for Android", I got this error: You do not have sufficient permissions to view this page. I tried logging out of my accounts, logging in again, incognito mode, Safari & Chrome. Nothing worked, so I contacted support, who were extremely fast (thanks guys!): The reason you are receiving an error when trying to enable the Places for Android API is that it has been deprecated. Places functionality for Android will now be covered by having the Places API enabled. I asked about my implementation and got this reply: The place picker has also been deprecated. You can install the compatibility library to continue using the Place Picker until the deprecation period ends on July 29th. More about this can be read here: https://developers.google.com/places/android-sdk/client-migration#place_picker The docs I find online now are a bit confusing: what is deprecated and what isn't? Can anyone point me in the right direction for this kind of functionality?
Now provide the response and nothing else.
| The Google Places SDK for Android is deprecated, so we need to migrate to the Places API. To implement place autocomplete with the new Places API, follow the steps below. First enable the Places API in the developer console, then install the client library by updating Gradle. (Note: you can only install either the client library or the compatibility library, NOT both.) implementation 'com.google.android.libraries.places:places:1.0.0' Now initialize the code below inside onCreate():
// Add an import statement for the client library.
import com.google.android.libraries.places.api.Places;
// Initialize Places.
Places.initialize(getApplicationContext(), "***YOUR API KEY***");
// Create a new Places client instance.
PlacesClient placesClient = Places.createClient(this);
The new Places API is now initialised. For place autocomplete use the code below (you can use the autocomplete fragment as well):
// Set the fields to specify which types of place data to return.
List<Place.Field> fields = Arrays.asList(Place.Field.ID, Place.Field.NAME);
// Start the autocomplete intent.
Intent intent = new Autocomplete.IntentBuilder(AutocompleteActivityMode.FULLSCREEN, fields)
        .build(this);
startActivityForResult(intent, AUTOCOMPLETE_REQUEST_CODE);
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == AUTOCOMPLETE_REQUEST_CODE) {
        if (resultCode == RESULT_OK) {
            Place place = Autocomplete.getPlaceFromIntent(data);
            Log.i(TAG, "Place: " + place.getName() + ", " + place.getId());
        } else if (resultCode == AutocompleteActivity.RESULT_ERROR) {
            // TODO: Handle the error.
            Status status = Autocomplete.getStatusFromIntent(data);
            Log.i(TAG, status.getStatusMessage());
        } else if (resultCode == RESULT_CANCELED) {
            // The user canceled the operation.
        }
    }
}
Make sure the permissions are in the manifest, the API key is generated, and the Places API is enabled in the Dev Console. REMOVE (if you added it): implementation 'com.google.android.gms:play-services-places:16.0.0' Required imports:
import com.google.android.libraries.places.api.Places;
import com.google.android.libraries.places.api.model.Place;
import com.google.android.libraries.places.api.net.PlacesClient;
import com.google.android.libraries.places.widget.Autocomplete;
import com.google.android.libraries.places.widget.AutocompleteActivity;
import com.google.android.libraries.places.widget.model.AutocompleteActivityMode;
Hope this helps. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54638080', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3220953/']} | jdg_84038
stackexchange | llm_judgeable_groundtruth_similarity | 679856 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm stuck on where to start with this. I can tell it is to do with the extreme value theorem, but past that point I'm stuck. Any help would be appreciated. If $f(x)$ is continuous on $\mathbb{R}$, with $f(x) \geq 0$ $\forall x \in \mathbb{R}$ and $\lim_{x \to - \infty} f(x) = 0 = \lim_{x \to \infty} f(x)$, then $\exists x_0 \in \mathbb{R}$ where $f(x) \leq f(x_0)$ $ \forall x \in \mathbb{R}$.
Now provide the response and nothing else.
| If $f$ is identically zero we are done - every point is maximal. Therefore assume $f(x_1)>0$ for some $x_1$. By the limit condition, there exists $a,b$ with $f(x)<f(x_1)$ for all $x<a$ and also for all $x>b$. On the compact interval $[a,b]$, $f$ assumes its maximum at some point $x_0$. Then $f(x_0)\ge f(x)$ for all $x\in \mathbb R$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/679856', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/129498/']} | jdg_84039 |
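To spell out the limit step a little (an expanded restatement of the same argument, nothing new): applying the definitions of $\lim_{x\to-\infty}f(x)=0$ and $\lim_{x\to\infty}f(x)=0$ with $\varepsilon=f(x_1)>0$ gives real numbers $a$ and $b$ such that $$0\le f(x)<f(x_1)\qquad\text{whenever } x<a \text{ or } x>b.$$ Since $f(x_1)$ is not less than itself, necessarily $a\le x_1\le b$, so $[a,b]$ is a nonempty compact interval containing $x_1$. The maximum $f(x_0)$ of $f$ on $[a,b]$, which exists by the extreme value theorem, then satisfies $f(x_0)\ge f(x_1)>f(x)$ for every $x\notin[a,b]$, while $f(x_0)\ge f(x)$ on $[a,b]$ by the choice of $x_0$.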
stackexchange | llm_judgeable_groundtruth_similarity | 47462222 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am building a component (html, css, spec.ts, ts) in angular in which I always want endDate > startDate. I have followed this link https://material.angular.io/components/datepicker/overview in order make multiple datepickers. Below is my HTML for startDate and endDate : startDate: <div class="start-date" fxFlex="50%" fxFlexOrder="1"> <mat-form-field> <input matInput [matDatepicker]="picker1" placeholder="{{'PORTAL.STARTDATE' | translate}}" type="text" formControlName="startDate" [(ngModel)]="unavailability.startDate" [readonly]="!componentPermission.writePermission"> <mat-datepicker-toggle matSuffix [for]="picker1"></mat-datepicker-toggle> <mat-datepicker #picker1></mat-datepicker> </mat-form-field> </div> endDate: <div class="end-date" fxFlex="50%" fxFlexOrder="2"> <mat-form-field> <input matInput [matDatepicker]="picker2" placeholder="{{'PORTAL.ENDDATE' | translate}}" type="text" formControlName="endDate" [(ngModel)]="unavailability.endDate" [readonly]="!componentPermission.writePermission"> <mat-datepicker-toggle matSuffix [for]="picker2"></mat-datepicker-toggle> <mat-datepicker #picker2></mat-datepicker> </mat-form-field> </div> Now below is my angular code (ts) where I am calling validateForm method on the page load. ngOnInit() { .... this.validateForm();}validateForm() { this.unavailabilityForm = this.formBuilder.group({ 'startDate': ['', Validators.required], 'endDate': ['', Validators.required], 'unavailabilityReason': [''] });} ProblemStatement: Now what I need to do is - if I have selected any date in startDate (for example 23rd Nov), then in the endDate datepicker all the dates before and including 23rd Nov should be disabled so that I can only select dates after 23rd Nov only. Is this possible to do in Angular? Can we achieve that by placing minDate and maxDate somewhere in HTML or TS ?
Now provide the response and nothing else.
| Since you are using a reactive form, utilize the form controls. It's not recommended to have two bindings (ngModel and formControl), so drop the ngModel like I suggested in a previous question of yours: https://stackoverflow.com/a/47426879/6294072 Then populate your form controls with the values from your unavailability object: constructor(private formBuilder: FormBuilder) { this.unavailabilityForm = this.formBuilder.group({ startDate: [this.unavailability.startDate], endDate: [this.unavailability.endDate] }); } If you receive the values at a later point, you can use patchValue: this.unavailabilityForm.patchValue({ startDate: this.unavailability.startDate, endDate: this.unavailability.endDate }) Otherwise you can set the values when you build the form, as above. Then the only thing you need to add to your second datepicker is [min], like the other answer mentioned. There, utilize the form control value: <input matInput [min]="unavailabilityForm.controls.startDate.value" formControlName="endDate" ...> DEMO | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47462222', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2809564/']} | jdg_84040
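If the endDate > startDate rule should also hold for values that are typed or patched in (not only picker selections), a cross-field validator on the group is one option. This is a sketch of my own, reusing the question's control names; how you attach it (setValidators below) can differ slightly between Angular versions:
import { ValidationErrors, ValidatorFn } from '@angular/forms';
export const dateRangeValidator: ValidatorFn = (group): ValidationErrors | null => {
  const startCtrl = group.get('startDate');
  const endCtrl = group.get('endDate');
  const start = startCtrl ? startCtrl.value : null;
  const end = endCtrl ? endCtrl.value : null;
  // Only report an error once both dates are present and out of order.
  return start && end && new Date(end) <= new Date(start) ? { dateRange: true } : null;
};
// After building the form:
// this.unavailabilityForm.setValidators(dateRangeValidator);
// this.unavailabilityForm.updateValueAndValidity();
The template can then show a message whenever unavailabilityForm.hasError('dateRange'), alongside the [min] restriction from the answer above.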