source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths, 1-8) | prompt (stringlengths, 209-40.4k) | gold_standard_solution (stringlengths, 0-56.7k) | verification_info (stringclasses, 1 value) | metadata (stringlengths, 138-225) | problem_id (stringlengths, 9-10)
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 26840278 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to get the current route action, but I'm not sure how to go about it. In Laravel 4 I was using Route::currentRouteAction() but now it's a bit different. I'm trying to do Route::getActionName() in my controller but it keeps giving me method not found. <?php namespace App\Http\Controllers;use Route;class HomeController extends Controller{ public function getIndex() { echo 'getIndex'; echo Route::getActionName(); }}
Now provide the response and nothing else.
| In Laravel 5 you should be using Method or Constructor injection. This will do what you want: <?php namespace App\Http\Controllers;use Illuminate\Routing\Route;class HomeController extends Controller{ public function getIndex(Route $route) { echo 'getIndex'; echo $route->getActionName(); }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26840278', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1200670/']} | jdg_78841 |
stackexchange | llm_judgeable_groundtruth_similarity | 24062437 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using Swift only, here's my code in AppDelegate.swift: import Cocoaclass AppDelegate: NSObject, NSApplicationDelegate { @IBOutlet var window: NSWindow @IBOutlet var textField: NSTextView @IBAction func displaySomeText(AnyObject) { textField.insertText("A string...") } func applicationDidFinishLaunching(aNotification: NSNotification?) { // Insert code here to initialize your application } func applicationWillTerminate(aNotification: NSNotification?) { // Insert code here to tear down your application }} In the interface builder, I have an object hooked up to receive input from a button, then the output goes to a text view. I'm trying to get the text view to populate with some text when I hit the button. I tried this with a text field as well, and didn't get the error, but got a "dong" error sound and it didn't do anything else. In Objective-C, you had to use the (assign) parameter to get this to work from what I understand. What am I doing wrong?
Now provide the response and nothing else.
| Use @IBOutlet var scrollView: NSScrollView instead of @IBOutlet var textField: NSTextView . Then create a property returns documentView in scrollView. import Cocoaclass AppDelegate: NSObject, NSApplicationDelegate { @IBOutlet var window: NSWindow @IBOutlet var scrollView: NSScrollView var textField: NSTextView { get { return scrollView.contentView.documentView as NSTextView } } @IBAction func displaySomeText(AnyObject) { textField.insertText("A string...") } func applicationDidFinishLaunching(aNotification: NSNotification?) { // Insert code here to initialize your application } func applicationWillTerminate(aNotification: NSNotification?) { // Insert code here to tear down your application }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24062437', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/265075/']} | jdg_78842 |
stackexchange | llm_judgeable_groundtruth_similarity | 544628 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that TCP is very reliable, and what ever is sent is guaranteed to get to its destination. But what happens if after a packet is sent, but before it arrives at the server, the server goes down? Is the acknowledgment that the packet is successfully sent triggered on the server's existence when the packet is initially sent, or when the packet successfully arrives at the server? Basically what I'm asking is - if the server goes down in between the sending and the receiving of a packet, would the client know?
Now provide the response and nothing else.
| It really doesn't matter, but here's some finer details: You need to distinguish between the Server-Machine going down and the Server-Process going down. If the Server-Machine has crashed, then, clearly, there is nothing to receive the packet. The sending client will get no retry-requests, and no acknowledgment of success or failure. After having not received any feedback at all, the client will eventually receive a timeout, and consider the connection dropped. This is pretty much identical to the cable being physically cut unexpectedly. If, however, the Server-Machine remains functioning, but the Server-Process crashes due to a programming bug, then the receiving TCP stack, which is a function of the OS, not of the process, will likely ACK the packet, and any others that arrive. This will continue until the OS notifies the TCP stack that the process is no longer active. The TCP stack will likely send a RST (reset) notice to the client, or may drop the connection (as described above) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/544628', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49018/']} | jdg_78843 |
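A minimal Python sketch of how the two failure modes described above look from the client side: a dead machine typically surfaces as a timeout, while a crashed process on a still-running machine typically surfaces as a reset. The host, port, and payload are placeholders, not taken from the question.

```python
import socket

HOST, PORT = "server.example.com", 9000  # placeholder endpoint, not from the question

def probe():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(5.0)  # give up if nothing at all (not even an ACK or RST) comes back
    try:
        s.connect((HOST, PORT))
        s.sendall(b"hello")
        data = s.recv(1024)  # empty bytes means the peer closed the connection cleanly
        print("reply:", data)
    except socket.timeout:
        print("no response at all - machine down or packets black-holed")
    except (ConnectionResetError, ConnectionRefusedError):
        print("RST received - the machine is up but the process/connection is gone")
    finally:
        s.close()

probe()
```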
stackexchange | llm_judgeable_groundtruth_similarity | 16709305 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
i have a problem adding dependencies automatically to eclipse android project via gradle.I have only a little bit experience with gradle. Till now I have build two java projects with gradle. One jar and an executable-jar. This works without problems.I have used the eclipse plugin to generate the eclipse project and add the dependenies to the build path. I added new dependencies to the gradle script, started gradle with gradle eclipse ,update my project and the dependencies exist in the build path and I can used them. Here is the important part of that script. apply plugin: 'java'apply plugin: 'eclipse'repositories { mavenCentral()}dependencies { compile 'commons-io:commons-io:2.4'} So, now I tried it in combination with the android plugin. Here is my hole gradle script. buildscript { repositories { mavenCentral() } dependencies { classpath 'com.android.tools.build:gradle:0.4' }}apply plugin: 'android'apply plugin: 'eclipse'repositories {mavenCentral()}dependencies {compile 'org.apache.commons:commons-lang3:3.1'}android { compileSdkVersion 17 buildToolsVersion "17" defaultConfig { minSdkVersion 14 targetSdkVersion 17 } sourceSets { main { manifest.srcFile 'AndroidManifest.xml' java.srcDirs = ['src'] resources.srcDirs = ['src'] aidl.srcDirs = ['src'] renderscript.srcDirs = ['src'] res.srcDirs = ['res'] assets.srcDirs = ['assets'] } instrumentTest.setRoot('tests') }} If I use gradle eclipse nothing happens. Then I found out that the java plugin adds the dependencies to the build path. So I added apply plugin: 'java' to it and got the error that the java plugin is not compatible with the android plugin.Then I found a solution to copy the jars automatically to the lib folder of the project. def libDir = file('libs')task copyLibs(type: Copy) { doFirst { libDir.mkdirs() } from configurations.runtime into libDir} But this task needs the java plugin too for the configurations.runtime.I need the android plugin to create the apk file, so it is not a solution to remove the android plugin. Has somebody an idea if it is possible to add the dependencies to the build path or lib folder in ecipse project that is compatible with the android plugin? EDIT:One of my ideas was to put the java-plugin to the eclipse-plugin, so that it will be only applied when the eclipse plugin will be applied. Something like this: apply plugin: 'eclipse'eclipse{ apply plugin: 'java'} But I still get the error that the java and android plugins are not compatible.Maybe I understand gradle wrong, but normally the java plugin should be applied only when I start the eclipse plugin and not the android plugin. I´m afraid that my understanding and experience of gradle is not good enough to solve this this way or understand why it is not possible.
Now provide the response and nothing else.
| My solution is based off Rafael's in that it copies dependencies to the libs directory which is only used by Android. However I go further to completely explode the referenced AAR's for use in Eclipse. Gradle Build File Add the following to the end of your Android projects build.gradle : task copyJarDependencies(type: Copy) { description = 'Used for Eclipse. Copies all dependencies to the libs directory. If there are any AAR files it will extract the classes.jar and rename it the same as the AAR file but with a .jar on the end.' libDir = new File(project.projectDir, '/libs') println libDir println 'Adding dependencies from compile configuration' configurations.compile.filter {it.name.endsWith 'jar'}.each { File file -> moveJarIntoLibs(file)} println 'Adding dependencies from releaseCompile configuration' configurations.releaseCompile.filter {it.name.endsWith 'jar'}.each { File file -> moveJarIntoLibs(file)} println 'Adding dependencies from debugCompile configuration' configurations.debugCompile.filter {it.name.endsWith 'jar'}.each { File file -> moveJarIntoLibs(file)} println 'Adding dependencies from instrumentTestCompile configuration' configurations.instrumentTestCompile.filter {it.name.endsWith 'jar'}.each { File file -> moveJarIntoLibs(file)} println 'Extracting dependencies from compile configuration' configurations.compile.filter {it.name.endsWith 'aar'}.each { File file -> moveAndRenameAar(file) } println 'Extracting dependencies from releaseCompile configuration' configurations.releaseCompile.filter {it.name.endsWith 'aar'}.each { File file -> moveAndRenameAar(file) } println 'Extracting dependencies from debugCompile configuration' configurations.debugCompile.filter {it.name.endsWith 'aar'}.each { File file -> moveAndRenameAar(file) } println 'Extracting AAR dependencies from instrumentTestCompile configuration' configurations.instrumentTestCompile.filter {it.name.endsWith 'aar'}.each { File file -> moveAndRenameAar(file) }}void moveJarIntoLibs(File file){ println 'Added jar ' + file copy{ from file into 'libs' }}void moveAndRenameAar(File file){ println 'Added aar ' + file def baseFilename = file.name.lastIndexOf('.').with {it != -1 ? file.name[0..<it] : file.name} // directory excluding the classes.jar copy{ from zipTree(file) exclude 'classes.jar' into 'libs/'+baseFilename } // Copies the classes.jar into the libs directory of the expoded AAR. // In Eclipse you can then import this exploded ar as an Android project // and then reference not only the classes but also the android resources :D copy{ from zipTree(file) include 'classes.jar' into 'libs/' + baseFilename +'/libs' rename { String fileName -> fileName.replace('classes.jar', baseFilename + '.jar') } }} Building with Gradle Run : "gradle clean build" You should find all dependencies and exploded AARs in your libs directory. This is all Eclipse should need. Importing in Eclipse Now this is where the real benefit begins. After you've generated the libs directory from the gradle step above you'll notice there are folders in there too. Those new folders are the exploded AAR dependencies from your build.gradle file. Now the cool part is that when you import your existing Android project into Eclipse it will also detect the exploded AAR folders as projects it can import too! 1. Import those folders under your project's libs directory, don't import any 'build' folders, they're generated by Gradle 2. Ensure you perform a Project -> Clean on all AAR projects you've added. 
In your workspace check that each AAR exploded project has the following in the project.properties : target=android-<YOUR INSTALLED SKD VERSION GOES HERE>android.library=true 3. Now in your main Android project you can just add the library references with either ADT or you can just edit the project.properties file and add android.libraries.reference.1=libs/someExplodedAAR/ 4. Now you can right-click on your main Android project and Run as -> Android Application . But what does this even mean? Well it means you don't need the source code for any of your Android AAR Gradle dependencies in order to reference both it's classes and resources in Eclipse. The gradle build script above takes the AAR file and prepares it for use in Eclipse. Once you add it to your workspace you're ready to just focus on your actual main Android project. You can now debug and develop using Eclipse and deploy using ADT with AAR dependencies being properly bundled in the APK. When you need to make some specific builds then you can use gradle. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16709305', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/588683/']} | jdg_78844 |
stackexchange | llm_judgeable_groundtruth_similarity | 11715224 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: Are there problems that cannot be written using tail recursion? From my understanding, tail recursion is an optimization you can use when a recursive call does not need information from the recursive calls that it will spam. Is it possible then to implement all recursive functions using tail-recursion? What about something like DFS, where you need the innermost child to return before the parent can?
Now provide the response and nothing else.
| It depends on exactly what you are asking. If you want to keep all functions as functions (no mutable state) with the same signatures, then no. The most obvious example is the quicksort, where both calls can't be tail calls. If you can modify the function in various ways, then yes. Sometimes a local modification is sufficient - often you can add an "accumulator" that builds some expression that is returned, although, if the result involves non-commutative operations, then you need to be careful (for example, when naively constructing linked lists, the order is reversed) or you can add a stack. Alternatively, you can do a global modification of the entire program, in which each function takes as an extra argument the function that contains future actions. This is the continuation passing that Pete is talking about . if you are working by hand then the local modification is often fairly easy. but if you're doing automated rewriting (in a compiler for example) then it's simpler to take the global approach (it requires less "smarts"). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11715224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1236697/']} | jdg_78845 |
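A small Python illustration of the accumulator rewrite described above, with invented function names; note that CPython does not perform tail-call elimination, so this only shows the shape of the transformation, not an actual optimization.

```python
# Naive recursion: the addition happens *after* the recursive call returns,
# so the recursive call is not in tail position.
def total(xs):
    if not xs:
        return 0
    return xs[0] + total(xs[1:])

# Accumulator version: all pending work is carried in `acc`, so the
# recursive call is the last thing the function does (a tail call).
def total_acc(xs, acc=0):
    if not xs:
        return acc
    return total_acc(xs[1:], acc + xs[0])

assert total([1, 2, 3, 4]) == total_acc([1, 2, 3, 4]) == 10
```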
stackexchange | llm_judgeable_groundtruth_similarity | 128985 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I consistently see answers quoting this link stating definitively "Don't parse ls !" This bothers me for a couple of reasons: It seems the information in that link has been accepted wholesale with little question, though I can pick out at least a few errors in casual reading. It also seems as if the problems stated in that link have sparked no desire to find a solution. From the first paragraph: ...when you ask [ls] for a list of files, there's a huge problem: Unix allows almost any character in a filename, including whitespace, newlines, commas, pipe symbols, and pretty much anything else you'd ever try to use as a delimiter except NUL. ... ls separates filenames with newlines. This is fine until you have a file with a newline in its name. And since I don't know of any implementation of ls that allows you to terminate filenames with NUL characters instead of newlines, this leaves us unable to get a list of filenames safely with ls . Bummer, right? How ever can we handle a newline terminated listed dataset for data that might contain newlines? Well, if the people answering questions on this website didn't do this kind of thing on a daily basis, I might think we were in some trouble. The truth is though, most ls implementations actually provide a very simple api for parsing their output and we've all been doing it all along without even realizing it. Not only can you end a filename with null, you can begin one with null as well or with any other arbitrary string you might desire. What's more, you can assign these arbitrary strings per file-type . Please consider: LS_COLORS='lc=\0:rc=:ec=\0\0\0:fi=:di=:' ls -l --color=always | cat -Atotal 4$drwxr-xr-x 1 mikeserv mikeserv 0 Jul 10 01:05 ^@^@^@^@dir^@^@^@/$-rw-r--r-- 1 mikeserv mikeserv 4 Jul 10 02:18 ^@file1^@^@^@$-rw-r--r-- 1 mikeserv mikeserv 0 Jul 10 01:08 ^@file2^@^@^@$-rw-r--r-- 1 mikeserv mikeserv 0 Jul 10 02:27 ^@new$line$file^@^@^@$^@ See this for more. Now it's the next part of this article that really gets me though: $ ls -ltotal 8-rw-r----- 1 lhunath lhunath 19 Mar 27 10:47 a-rw-r----- 1 lhunath lhunath 0 Mar 27 10:47 a?newline-rw-r----- 1 lhunath lhunath 0 Mar 27 10:47 a space The problem is that from the output of ls , neither you or the computer can tell what parts of it constitute a filename. Is it each word? No. Is it each line? No. There is no correct answer to this question other than: you can't tell. Also notice how ls sometimes garbles your filename data (in our case, it turned the \n character in between the words "a" and "newline" into a ?question mark ... ... If you just want to iterate over all the files in the current directory, use a for loop and a glob: for f in *; do [[ -e $f ]] || continue ...done The author calls it garbling filenames when ls returns a list of filenames containing shell globs and then recommends using a shell glob to retrieve a file list! Consider the following: printf 'touch ./"%b"\n' "file\nname" "f i l e n a m e" | . /dev/stdinls -1qf i l e n a m e file?nameIFS="" ; printf "'%s'\n" $(ls -1q)'f i l e n a m e''filename' POSIX defines the -1 and -q ls operands so: -q - Force each instance of non-printable filename characters and <tab> s to be written as the question-mark ( '?' ) character. Implementations may provide this option by default if the output is to a terminal device. -1 - (The numeric digit one.) Force output to be one entry per line. Globbing is not without its own problems - the ? matches any character so multiple matching ? results in a list will match the same file multiple times. 
That's easily handled. Though how to do this thing is not the point - it doesn't take much to do after all and is demonstrated below - I was interested in why not . As I consider it, the best answer to that question has been accepted. I would suggest you try to focus more often on telling people what they can do than on what they can't. You're a lot less likely, as I think, to be proven wrong at least. But why even try? Admittedly, my primary motivation was that others kept telling me I couldn't. I know very well that ls output is as regular and predictable as you could wish it so long as you know what to look for. Misinformation bothers me more than do most things. The truth is, though, with the notable exception of both Patrick's and Wumpus Q. Wumbley's answers (despite the latter's awesome handle) , I regard most of the information in the answers here as mostly correct - a shell glob is both more simple to use and generally more effective when it comes to searching the current directory than is parsing ls . They are not, however, at least in my regard, reason enough to justify either propagating the misinformation quoted in the article above nor are they acceptable justification to " never parse ls . " Please note that Patrick's answer's inconsistent results are mostly a result of him using zsh then bash . zsh - by default - does not word-split $( command substituted ) results in a portable manner. So when he asks where did the rest of the files go? the answer to that question is your shell ate them. This is why you need to set the SH_WORD_SPLIT variable when using zsh and dealing with portable shell code. I regard his failure to note this in his answer as awfully misleading. Wumpus's answer doesn't compute for me - in a list context the ? character is a shell glob. I don't know how else to say that. In order to handle a multiple results case you need to restrict the glob's greediness. The following will just create a test base of awful file names and display it for you: { printf %b $(printf \\%04o `seq 0 127`) |sed "/[^[-b]*/s///g s/\(.\)\(.\)/touch '?\v\2' '\1\t\2' '\1\n\2'\n/g" |. /dev/stdinecho '`ls` ?QUOTED `-m` COMMA,SEP'ls -qmecho ; echo 'NOW LITERAL - COMMA,SEP'ls -m | cat( set -- * ; printf "\nFILE COUNT: %s\n" $# )} OUTPUT `ls` ?QUOTED `-m` COMMA,SEP??\, ??^, ??`, ??b, [?\, [?\, ]?^, ]?^, _?`, _?`, a?b, a?bNOW LITERAL - COMMA,SEP? \, ? ^, ? `, ? b, [ \, [\, ] ^, ]^, _ `, _`, a b, abFILE COUNT: 12 Now I'll safe every character that isn't a /slash , -dash , :colon , or alpha-numeric character in a shell glob then sort -u the list for unique results. This is safe because ls has already safed-away any non printable characters for us. Watch: for f in $( ls -1q | sed 's|[^-:/[:alnum:]]|[!-\\:[:alnum:]]|g' | sort -u | { echo 'PRE-GLOB:' >&2 tee /dev/fd/2 printf '\nPOST-GLOB:\n' >&2 }) ; do printf "FILE #$((i=i+1)): '%s'\n" "$f"done OUTPUT: PRE-GLOB:[!-\:[:alnum:]][!-\:[:alnum:]][!-\:[:alnum:]][!-\:[:alnum:]][!-\:[:alnum:]]ba[!-\:[:alnum:]]bPOST-GLOB:FILE #1: '? \'FILE #2: '? ^'FILE #3: '? `'FILE #4: '[ \'FILE #5: '[\'FILE #6: '] ^'FILE #7: ']^'FILE #8: '_ `'FILE #9: '_`'FILE #10: '? b'FILE #11: 'a b'FILE #12: 'ab' Below I approach the problem again but I use a different methodology. Remember that - besides \0 null - the / ASCII character is the only byte forbidden in a pathname. I put globs aside here and instead combine the POSIX specified -d option for ls and the also POSIX specified -exec $cmd {} + construct for find . 
Because find will only ever naturally emit one / in sequence, the following easily procures a recursive and reliably delimited filelist including all dentry information for every entry. Just imagine what you might do with something like this: #v#note: to do this fully portably substitute an actual newline \#v##v#for 'n' for the first sed invocation#v#cd ..find ././ -exec ls -1ldin {} + |sed -e '\| *\./\./|{s||\n.///|;i///' -e \} |sed 'N;s|\(\n\)///|///\1|;$s|$|///|;P;D'###OUTPUT152398 drwxr-xr-x 1 1000 1000 72 Jun 24 14:49.///testls///152399 -rw-r--r-- 1 1000 1000 0 Jun 24 14:49.///testls/? \///152402 -rw-r--r-- 1 1000 1000 0 Jun 24 14:49.///testls/? ^///152405 -rw-r--r-- 1 1000 1000 0 Jun 24 14:49.///testls/? `///... ls -i can be very useful - especially when result uniqueness is in question. ls -1iq | sed '/ .*/s///;s/^/-inum /;$!s/$/ -o /' | tr -d '\n' | xargs find These are just the most portable means I can think of. With GNU ls you could do: ls --quoting-style=WORD And last, here's a much simpler method of parsing ls that I happen to use quite often when in need of inode numbers: ls -1iq | grep -o '^ *[0-9]*' That just returns inode numbers - which is another handy POSIX specified option.
Now provide the response and nothing else.
| I am not at all convinced of this, but let's suppose for the sake of argument that you could , if you're prepared to put in enough effort, parse the output of ls reliably, even in the face of an "adversary" — someone who knows the code you wrote and is deliberately choosing filenames designed to break it. Even if you could do that, it would still be a bad idea . Bourne shell 1 is a bad language. It should not be used for anything complicated, unless extreme portability is more important than any other factor (e.g. autoconf ). I claim that if you're faced with a problem where parsing the output of ls seems like the path of least resistance for a shell script, that's a strong indication that whatever you are doing is too complicated to be a shell script and you should rewrite the entire thing in Perl, Python, Julia, or any of the other good scripting languages that are readily available. As a demonstration, here's your last program in Python: import os, sysfor subdir, dirs, files in os.walk("."): for f in dirs + files: ino = os.lstat(os.path.join(subdir, f)).st_ino sys.stdout.write("%d %s %s\n" % (ino, subdir, f)) This has no issues whatsoever with unusual characters in filenames -- the output is ambiguous in the same way the output of ls is ambiguous, but that wouldn't matter in a "real" program (as opposed to a demo like this), which would use the result of os.path.join(subdir, f) directly. Equally important, and in stark contrast to the thing you wrote, it will still make sense six months from now, and it will be easy to modify when you need it to do something slightly different. By way of illustration, suppose you discover a need to exclude dotfiles and editor backups, and to process everything in alphabetical order by basename: import os, sysfilelist = []for subdir, dirs, files in os.walk("."): for f in dirs + files: if f[0] == '.' or f[-1] == '~': continue lstat = os.lstat(os.path.join(subdir, f)) filelist.append((f, subdir, lstat.st_ino))filelist.sort(key = lambda x: x[0])for f, subdir, ino in filelist: sys.stdout.write("%d %s %s\n" % (ino, subdir, f)) 1 Yes, extended versions of the Bourne shell are readily available nowadays: bash and zsh are both considerably better than the original. The GNU extensions to the core "shell utilities" (find, grep, etc.) also help a lot. But even with all the extensions, the shell environment is not improved enough to compete with scripting languages that are actually good, so my advice remains "don't use shell for anything complicated" regardless of which shell you're talking about. "What would a good interactive shell that was also a good scripting language look like?" is a live research question, because there is an inherent tension between the conveniences required for an interactive CLI (such as being allowed to type cc -c -g -O2 -o foo.o foo.c instead of subprocess.run(["cc", "-c", "-g", "-O2", "-o", "foo.o", "foo.c"]) ) and the strictures required to avoid subtle errors in complex scripts (such as not interpreting unquoted words in random locations as string literals). If I were to attempt to design such a thing, I'd probably start by putting IPython, PowerShell, and Lua in a blender, but I have no idea what the result would look like. | {} | {'log_upvote_score': 9, 'links': ['https://unix.stackexchange.com/questions/128985', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/52934/']} | jdg_78846 |
stackexchange | llm_judgeable_groundtruth_similarity | 139970 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I work on enterprise Java projects that do not have a single entry point from where I can trace the flow of execution. Some of the projects have hundreds of classes, and when I'm asked to add a feature to a project, I often find myself at a loss as to where to start looking at code. What's the best way to dive into such projects so that I can implement the feature quickly without wasting time.
Now provide the response and nothing else.
| Does the project have a suite of well-maintained unit tests? The unit tests are programmatic documentation for what the code does. In addition, you need to learn enough about the architecture of the application to identify the places where you need to insert code for your new features, and more or less ignore the rest. You don't need to know the whole code base to do this; if the projects are well-architected, the functionality is already sufficiently encapsulated and decoupled that you can focus on the relevant parts. If you're lucky, the projects already follow a well-known architecture which will serve as a map for you to follow. Code always has one or more entry points. For MVC projects, the entry point is a controller method based on an URL; the method will almost certainly access a data repository, and return a view. Start there. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/139970', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/38762/']} | jdg_78847 |
stackexchange | llm_judgeable_groundtruth_similarity | 24138359 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to derive my class from generic class: class foo<T> {}class bar : foo<Int> {} But this code fails to compile with en error: Classes derived from generic classes must also be generic How to avoid this limitation? Is it possible?
Now provide the response and nothing else.
| Ssreg, Unfortunately this is official : You can subclass a generic class, but the subclass must also be a generic class. Let us hope Apple fixes this in a future version. Meanwhile, let us see this as an opportunity to exploit aggregation instead of subclassing. NOTE: As a poor man's version of a solution, one could use typealias : class foo<T> {}class bar<Int> : foo<Int> {}typealias Bar = bar<Int> This way, the rest of the code can be written just as if Apple already fixed the matter. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24138359', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1551555/']} | jdg_78848 |
stackexchange | llm_judgeable_groundtruth_similarity | 1940596 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am checking a string with the following regexes: [a-zA-Z0-9]+[A-Za-z]+ For some reason, the characters: .-_ are allowed to pass, why is that?
Now provide the response and nothing else.
| If you want to check that the complete string consists of only the wanted characters you need to anchor your regex like follows: ^[a-zA-Z0-9]+$ Otherwise every string will pass that contains a string of the allowed characters somewhere . The anchors essentially tell the regular expression engine to start looking for those characters at the start of the string and stop looking at the end of the string. To clarify: If you just use [a-zA-Z0-9]+ as your regex, then the regex engine would rightfully reject the string -__-- as the regex doesn't match against that. There is no single character from the character class you defined. However, with the string a-b it's different. The regular expression engine will match the first a here since that matches the expression you entered (at least one of the given characters) and won't care about the - or the b . It has done its job and successfully matched a substring according to your regular expression. Similarly with _-abcdef- – the regex will match the substring abcdef just fine, because you didn't tell it to match only at the start or end of the string; and ignore the other characters. So when using ^[a-zA-Z0-9]+$ as your regex you are telling the regex engine definitely that you are looking for one or more letters or digits, starting at the very beginning of the string right until the end of the string. There is no room for other characters to squeeze in or hide so this will do what you apparently want. But without the anchors, the match can be anywhere in your search string. For validation purposes you always want to use those anchors. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1940596', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/68183/']} | jdg_78849 |
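The anchoring behaviour described above is easy to demonstrate with Python's re module (the question names no particular language; Python is used here purely for illustration): re.search finds a matching substring anywhere, while re.fullmatch behaves like the ^...$-anchored pattern.

```python
import re

pattern = r"[a-zA-Z0-9]+"

for s in ["abc123", "a-b", "_-abcdef-"]:
    partial = re.search(pattern, s)    # matches a substring anywhere in s
    full = re.fullmatch(pattern, s)    # equivalent to anchoring with ^...$
    print(f"{s!r}: search -> {partial.group() if partial else None}, "
          f"fullmatch -> {bool(full)}")

# 'abc123':    search -> 'abc123',  fullmatch -> True
# 'a-b':       search -> 'a',       fullmatch -> False
# '_-abcdef-': search -> 'abcdef',  fullmatch -> False
```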
stackexchange | llm_judgeable_groundtruth_similarity | 36822292 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am running this code as a CherryPy Web Service both on Mac OS X and Ubuntu 14.04. By using multiprocessing on python3 I want to start the static method worker() in an asynchronous way, within a Process Pool . The same code runs flawlessly on Mac OS X, in Ubuntu 14.04 worker() does not run. I.e. by debugging the code inside the POST method I am able to see that each line is executed - from reqid = str(uuid.uuid4()) to return handle_error(202, "Request ID: " + reqid) Starting the same code in Ubuntu 14.04, it does not run the worker() method, not even a print() at the top of the method (which would be logged). Here's the relevant code (I only omitted the handle_error() method): import cherrypyimport jsonfrom lib import get_parameters, handle_errorfrom multiprocessing import Poolimport osfrom pymatbridge import Matlabimport requestsimport shutilimport uuidfrom xml.etree import ElementTreeclass Schedule(object): exposed = True def __init__(self, mlab_path, pool): self.mlab_path = mlab_path self.pool = pool def POST(self, *paths, **params): if validate(cherrypy.request.headers): try: reqid = str(uuid.uuid4()) path = os.path.join("results", reqid) os.makedirs(path) wargs = [(self.mlab_path, reqid)] self.pool.apply_async(Schedule.worker, wargs) return handle_error(202, "Request ID: " + reqid) except: return handle_error(500, "Internal Server Error") else: return handle_error(401, "Unauthorized") #### this is not executed #### @staticmethod def worker(args): mlab_path, reqid = args mlab = Matlab(executable=mlab_path) mlab.start() mlab.run_code("cd mlab") mlab.run_code("sched") a = mlab.get_variable("a") mlab.stop() return reqid ##### to start the Web Serviceif __name__ == "__main__": # start Web Service with some configuration global_conf = { "global": { "server.environment": "production", "engine.autoreload.on": True, "engine.autoreload.frequency": 5, "server.socket_host": "0.0.0.0", "log.screen": False, "log.access_file": "site.log", "log.error_file": "site.log", "server.socket_port": 8084 } } cherrypy.config.update(global_conf) conf = { "/": { "request.dispatch": cherrypy.dispatch.MethodDispatcher(), "tools.encode.debug": True, "request.show_tracebacks": False } } pool = Pool(3) cherrypy.tree.mount(Schedule('matlab', pool), "/sched", conf) # activate signal handler if hasattr(cherrypy.engine, "signal_handler"): cherrypy.engine.signal_handler.subscribe() # start serving pages cherrypy.engine.start() cherrypy.engine.block()
Now provide the response and nothing else.
| Your logic is hiding the problem from you. The apply_async method returns an AsyncResult object which acts as a handler to the asynchronous task you just scheduled. As you ignore the outcome of the scheduled task, the whole thing looks like "failing silently". If you try to get the results from that task, you'd see the real problem. handler = self.pool.apply_async(Schedule.worker, wargs)handler.get()... traceback here ...cPickle.PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed In short, you must ensure the arguments you pass to the Pool are Picklable . Instance and class methods are Picklable if the object/class they belong to is picklable as well. Static methods are not picklable because they loose the association with the object itself, therefore the pickle library cannot serialise them correctly. As a general line, is better to avoid scheduling to multiprocessing.Pool anything different than a top level defined functions. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36822292', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41977/']} | jdg_78850 |
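A minimal, runnable Python sketch of the two points made above: apply_async returns an AsyncResult whose .get() is what surfaces scheduling and pickling errors, and a top-level function is the safe thing to hand to a Pool. The worker and its argument are invented for illustration.

```python
from multiprocessing import Pool

def worker(x):          # top-level function: always safe to send to a Pool
    return x * x

if __name__ == "__main__":
    with Pool(3) as pool:
        handle = pool.apply_async(worker, (4,))   # returns an AsyncResult
        # If you never call handle.get(), any error raised while scheduling
        # or inside the worker is swallowed and the task "fails silently".
        print(handle.get(timeout=5))              # 16, or re-raises the error
```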
stackexchange | llm_judgeable_groundtruth_similarity | 199678 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can a Spearman correlation coefficient of 0.38 for a specific parameter be considered to demonstrate reasonable agreement between two biological cell types, more specifically a cell line and a primary cell type? The parameter in question is the difference between percent of mRNA with and without last exon in two subcellular compartments, so for every gene (of which there are ~40000): $\text{param} = x_{i} - x_{j}$, where $x_{i}$ is the proportion of RNA with last exon in compartment $i$ $x_{j}$ is the proportion of RNA with last exon in compartment $j$. For each cell type, this results in a table of $\text{gene} - \text{param}$ And the Spearman correlation is found between the ranked order of this parameter in the two cell types. Sample size is number of isoforms considered, in the order of 40000. Number of biological replicates per cell line - three each (standard in the field).
Now provide the response and nothing else.
| Correlations, such as Pearson's product moment correlation or Spearman's rank correlation, are not measures of agreement, no matter what their values are (i.e., even if $r = 1.0$). Consider a simple case using Pearson's correlation: A guy takes a woman on a date to what he thinks is a nice restaurant. Afterwards, they talk about the restaurant and give it a rating on a scale of $0 - 10$ for the overall experience (ambiance, service, food). The guy rates it a $7$, his date a $1$. So the next day, he takes her to a nicer place. He rates it an $8$; she rates it a $2$. The next day he takes her to an even nicer place. He rates it a $9$; she rates it a $3$. Here are the ratings: guy woman7 18 29 3 The correlation is $r = 1.0$. You can decide for yourself if you think they agree on the quality of the restaurants. I suspect this relationship isn't going to last. In essence, Pearson's correlation measures agreement with respect to the ordering of the ratings and the relative spacing between those ratings. Spearman's correlation measures the agreement on the ordering only . But people typically think of similar ratings as being at least as important as the ordering for there to be true agreement. For continuous ratings, Lin's concordance coefficient can be used as a measure of agreement that isn't subject to these flaws. For categorical ratings Cohen's kappa or Bangdawala's $B$ can be used to assess agreement instead of the chi-squared test for similar reasons. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/199678', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/107285/']} | jdg_78851 |
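The restaurant ratings from the answer above, computed numerically in plain Python: the correlation comes out as a perfect 1.0 even though the two raters are a constant 6 points apart, which is exactly why correlation is not a measure of agreement.

```python
guy   = [7, 8, 9]
woman = [1, 2, 3]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(pearson(guy, woman))                    # 1.0 - "perfect" correlation
print([g - w for g, w in zip(guy, woman)])    # [6, 6, 6] - yet they never agree
```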
stackexchange | llm_judgeable_groundtruth_similarity | 244608 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Sometimes when dealing with arrays with intermittent non-numerical values (e.g., NaN s imported from external sources), the common arithmetic functions (e.g., Median ) may break annoyingly.Although it's possible to replace those NaN s with Indeterminate and then carefully remove them before applying the arithmetic functions, such operations are rather tedious compared to other computing environments (e.g., numpy ) where similar functions would quietly ignore those NaN s and produce results. I'm wondering if we can create a similar pure numeric environment in Mathematica that can do such jobs more easily? For example, for an arbitrary array generated using the code below: arNaN = Array[ RandomChoice[{RandomReal[], Indeterminate}] &, {4, 2, 3, 5}] How can we apply the common arithmetic functions (e.g., Median , Quartiles , etc.) without deliberately removing the non-numerical items? For people who are also familiar with numpy / pandas , I would like something similar there like numpy.nanmedian / pandas.DataFrame.median which can quietly ignore NaN values.
Now provide the response and nothing else.
| Maybe something like this? The following function will divide the available screen width into equally sized columns and then uses scrollbars to display content when it becomes too tall: readableGrid[content_List, maxRowHeight : _ : 50, gridOpts : OptionsPattern[]] := Grid[ Map[ Pane[#, ImageSize -> {Full, UpTo[maxRowHeight]}, Scrollbars -> {False, Automatic}, AppearanceElements -> None] &, content, {2} ], gridOpts, ItemSize -> Scaled[0.99/Max[Length /@ content]], Alignment -> Left ]TabView[readableGrid /@ ip] Alternatively, you can use ImageSize -> {UpTo[200], UpTo[maxRowHeight]} (or however wide you want your columns to be at most) in the Pane and leave out the ItemSize option in Grid . You can similarly also put a Pane with a scrollbar around the whole Grid to make sure the output cell itself remains manageable in size. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/244608', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/260/']} | jdg_78852 |
stackexchange | llm_judgeable_groundtruth_similarity | 415413 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got some process in Go. Here's an example counting lines in text, though the question is meant to be far more general than this particular example: func lineCount(s string) int { count := 0 for _, c := range s { if c == '\n' { count++ } } return count} Alright, not bad, but it's too slow, so let's make it concurrent: func newLine(r rune, c chan<- struct{}, wg sync.WaitGroup) { if r == '\n' { c <- struct{} } wc.Done()}func sumLines(c <-chan struct{}, result chan<- int) { count := 0 for _ := range c { count++ } result <- count}func lineCount(s string) int { c := make(chan struct{}) var wg sync.WaitGroup for _, r := range s { wg.Add(1) go newLine(r, c, wg) } result := make(chan int) go sumLines(c, result) wg.Wait() close(c) return <-result} Better, because now we're using all our cores, but let's be honest, one goroutine per letter is probably overkill, and we're likely adding a lot of overhead between the horrendous number of goroutines and the locking/unlocking of the wait group. Let's do better: func newLine(s string, c chan<- int, wg sync.WaitGroup) { count := 0 for _, r := range s { if r == '\n' { count++ } } c <- count wc.Done()}func sumLines(c <-chan int, result chan<- int) { count := 0 for miniCount := range c { count += miniCount } result <- count}func lineCount(s string) int { c := make(chan int) var wg sync.WaitGroup for i := 0; i < len(s)/MAGIC_NUMBER; i++ { wg.Add(1) go newLine(s[i*MAGIC_NUMBER : (i+1)*MAGIC_NUMBER], c, wg) } result := make(chan int) go sumLines(c, result) wg.Wait() close(c) return <-result} So now we're dividing up our string evenly (except the last part) into goroutines. I've got 8 cores, so do I ever have a reason to set MAGIC_NUMBER to greater than 8? Again, while I'm writing this question with the example of counting lines in text, the question is really directed at any situation where the problem can be sliced and diced any number of ways, and it's really up the programmer to decide how many slices to go for.
Now provide the response and nothing else.
| The canonical time when you use far, far more processes than cores is when your processes aren't CPU bound. If your processes are I/O bound (either disk or more likely network), then you can absolutely and sensibly have a huge number of processes per core, because the processes are sleeping most of the time anyway. Unsurprisingly enough, this is how any modern web server works. | {} | {'log_upvote_score': 7, 'links': ['https://softwareengineering.stackexchange.com/questions/415413', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/191781/']} | jdg_78853 |
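The question and answer above are about Go, but the point - many more workers than cores is fine when each worker is blocked on I/O rather than computing - is language-agnostic. Here is a small Python sketch where the sleep stands in for a network call; the worker count and task count are arbitrary.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(task_id):
    # Stand-in for a network call: the worker spends its time blocked, not computing.
    time.sleep(0.2)
    return task_id

start = time.time()
with ThreadPoolExecutor(max_workers=64) as pool:   # far more workers than cores
    results = list(pool.map(fetch, range(200)))
print(f"{len(results)} I/O-bound tasks in {time.time() - start:.2f}s")  # ~0.8s, not 40s
```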
stackexchange | llm_judgeable_groundtruth_similarity | 7226432 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't seem to get this to work but it seems really basic. I want the domain root to be crawled http://www.example.com But nothing else to be crawled and all subdirectories are dynamic http://www.example.com/* I tried User-agent: *Allow: /Disallow: /*/ but the Google webmaster test tool says all subdirectories are allowed. Anyone have a solution for this? Thanks :)
Now provide the response and nothing else.
| According to the Backus-Naur Form (BNF) parsing definitions in Google's robots.txt documentation , the order of the Allow and Disallow directives doesn't matter. So changing the order really won't help you. Instead, use the $ operator to indicate the closing of your path. $ means 'the end of the line' (i.e. don't match anything from this point on) Test this robots.txt. I'm certain it should work for you (I've also verified in Google Search Console): user-agent: *Allow: /$Disallow: / This will allow http://www.example.com and http://www.example.com/ to be crawled but everything else blocked. note: that the Allow directive satisfies your particular use case, but if you have index.html or default.php , these URLs will not be crawled. side note: I'm only really familiar with Googlebot and bingbot behaviors. If there are any other engines you are targeting, they may or may not have specific rules on how the directives are listed out. So if you want to be "extra" sure, you can always swap the positions of the Allow and Disallow directive blocks, I just set them that way to debunk some of the comments. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7226432', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/226131/']} | jdg_78854 |
stackexchange | llm_judgeable_groundtruth_similarity | 3720510 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am unable to solve this.My approach was to prove $Q[x]/(x^2 + 5)$ is isomorphic to $Q[k]$ for some k and then similarly show it for $Q[x]/(x^2 - 5)$ implying there are both isomorphic to each other.
Now provide the response and nothing else.
| Draw in black the triangle $ABC$ with angle $48^\circ$ at $C$ , and $AD$ with angle $18^\circ$ at $A$ . This ensures the blue angles $114^\circ$ and $66^\circ$ . Choose $D'\in BC$ such that $|D'C|=|BD|=|AC|$ . This gives the red angles $66^\circ$ and $66^\circ-18^\circ=48^\circ$ . This implies that $|AD'|=|AD|$ . The triangles $AD'C$ and $ADB$ are therefore SAS-congruent with the same angles at $D'$ and $D$ . This shows that the green angle $\angle ABD=48^\circ$ . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3720510', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/800051/']} | jdg_78855 |
stackexchange | llm_judgeable_groundtruth_similarity | 480319 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using C#.Net and the Socket class from the System.Net.Sockets namespace. I'm using the asynchronous receive methods. I understand this can be more easily done with something like a web service; this question is borne out of my curiosity rather than a practical need. My question is: assume the client is sending some binary-serialized object of an unknown length. On my server with the socket, how do I know the entire object has been received and that it is ready for deserialization? I've considered prepending the object with the length of the object in bytes, but this seems unnecessary in the .Net world. What happens if the object is larger than the buffer? How would I know, 'hey, gotta resize the buffer because the object is too big'?
Now provide the response and nothing else.
| You either need the protocol to be self-terminating (like XML is, effectively - you know when you've finished receiving an XML document when it closes the root element) or you need to length-prefix the data, or you need the other end to close the stream when it's done. In the case of a self-terminated protocol, you need to have enough hooks in so that the reading code can tell when it's finished. With binary serialization you may well not have enough hooks. Length-prefix is by far the easiest solution here. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/480319', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_78856 |
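A minimal Python sketch of the length-prefix framing recommended above; the 4-byte big-endian header and the helper names are assumptions for illustration, not part of the question.

```python
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # 4-byte big-endian length header followed by the payload.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before the full message arrived")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)   # now the receiver knows exactly when the object is complete
```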
stackexchange | llm_judgeable_groundtruth_similarity | 468655 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to solve this: $$\frac{d}{dx}(\int_{0}^{x}\sqrt{t^2+1}dt)$$ So, first I solve the integral for $(0,x)$, only need to evaluate in $x$: $$\frac{d}{dx}(\frac{x\sqrt{x^2}}{2}dt)$$ Then I solve the derivative: $$\frac{d}{dx}(x^2dt)$$ Question Is this the end?
Now provide the response and nothing else.
| I think you're being asked to know the following fact. Let $f$ be any reasonably nice function — smooth is plenty good enough. Then $$ \frac{\mathrm d}{\mathrm d x} \int_0^x f(t)\,\mathrm d t = f(x). $$ The proof is as follows. The left hand side asks: Question: How much does $\int_0^x f$ change when you change $x$ to $x + \mathrm d x$? Here $\mathrm d x$ is some very small amount, meaning the question is approximate, and you should ignore numbers that are much smaller than $\mathrm d x$. Well, $\int_0^{x + \mathrm dx} = \int_0^x + \int_x^{x + \mathrm d x}$, so the answer is clearly: Answer 1: $\int_0^x f$ changes by $\int_x^{x + \mathrm d x}f$. But this isn't that useful. How much is $\int_x^{x + \mathrm d x}f$? Well, it's the area under a curve, from $x$ to $x+\mathrm{d}x$. So the width of the region whose area we're asking for is $\mathrm d x$. The height is roughly $f(x)$. The left-hand height is exactly $f(x)$. The right-hand height is $f(x+\mathrm d x) \approx f(x) + f'(x)\,\mathrm d x$. In any case, the point is that if $f$ is smooth, say, then the height of the curve doesn't vary that much over the interval $[x,x+\mathrm d x]$, and in particular varies by something much less than $1$. Which means we can approximate the area as $(\text{width}) \times (\text{height}) + $ an error which is $(\text{width}) \times (\text{something much less than 1})$. Evaluating gives an area which is $f(x)\,\mathrm d x + (\text{something much less than $\mathrm d x$})$. Ok, so now we can update our answer: Answer 2: $\int_0^x f$ changes by $f(x)\,\mathrm d x + (\text{something much smaller than $\mathrm d x$})$. Dividing through by $\mathrm d x$ and ignoring the error in the estimate, which is very small (meaning it goes to $0$ as $\mathrm d x \to 0$) gives the original displayed formula. Applying the displayed formula to your question gives: $$ \frac{\mathrm d}{\mathrm d x} \int_0^x \sqrt{t^2 + 1}\,\mathrm d t = \sqrt{x^2 + 1} $$ This is justified because the function $f(x) = \sqrt{t^2+1}$ is smooth. There are functions that do not satisfy the displayed formula — you will find some if you think about discontinuous functions — but they are unlikely to come up in your homework. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/468655', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/86428/']} | jdg_78857 |
stackexchange | llm_judgeable_groundtruth_similarity | 3751235 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to add a UINavigationController to a view application that inherits from UIViewController and not UITableViewController ? How is it done?
Now provide the response and nothing else.
| Yes, you can have Navigation controllers in any view based application, whether at the Root level (like when you create the Navigation-based template in Xcode) or with a TabBar root, or with any Root. One example, presenting a modal view including navigation (used in my app to display a series of forms): UIViewController *control = [[MyViewController alloc] initWithNibName: @"MyViewController" bundle: nil]; UINavigationController *navControl = [[UINavigationController alloc] initWithRootViewController: control]; [self presentModalViewController: navControl animated: YES]; [control release]; In another example, if you want to have it at the root level, but didn't create the application with the Navigation template, in the AppDelegate's didFinishLaunching(...): UINavigationController *navControl = [[UINavigationController alloc] initWithRootViewController: control]; [window setRootViewController: navControl]; [navControl release]; You can also set it in Interface Builder, by setting up the class of the View controller you use (UIViewController replaced by UINavigationController). I hope this answers your question (sorry about the previous discussion). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3751235', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/404020/']} | jdg_78858 |
stackexchange | llm_judgeable_groundtruth_similarity | 6460630 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a tab open when the user clicks a button. On the onload I have it bring up the print dialog, but the user asked me whether it was possible that after it sends to the printer to print, if the tab could close itself. I am not sure whether this can be done. I have tried using setTimeout(); , but it's not a defined period of time since the user might get distracted and have to reopen the tab. Is there any way to accomplish this?
Now provide the response and nothing else.
| if you try to close the window just after the print() call, it may close the window immediately and print() will don't work. This is what you should not do : window.open();...window.print();window.close(); This solution will work in Firefox, because on print() call, it waits until printing is done and then it continues processing javascript and close() the window.IE will fail with this because it calls the close() function without waiting for the print() call is done. The popup window will be closed before printing is done. One way to solve it is by using the "onafterprint" event but I don' recommend it to you becasue these events only works in IE. The best way is closing the popup window once the print dialog is closed (printing is done or cancelled). At this moment, the popup window will be focussed and you can use the "onfocus" event for closing the popup. To do this, just insert this javascript embedded code in your popup window: <script type="text/javascript">window.print();window.onfocus=function(){ window.close();}</script> Hope this hepls ;-) Update: For new chrome browsers it may still close too soon see here . I've implemented this change and it works for all current browsers: 2/29/16 setTimeout(function () { window.print(); }, 500); window.onfocus = function () { setTimeout(function () { window.close(); }, 500); } | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6460630', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/28586/']} | jdg_78859 |
stackexchange | llm_judgeable_groundtruth_similarity | 57618689 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried to wrap the example code snippet to get secrets in a function and then call it but it does not appear to be working. I suspect I am calling it asynchronously and I need to call it synchronously? I just want a function I can call to get a secret value and put it in a var. this is the function: //outside exports.handler = (event, context, callback) => {function getSecret(secretName) { // Load the AWS SDK var AWS = require('aws-sdk'), region = process.env.AWS_REGION, secretName = secretName, secret, decodedBinarySecret; // Create a Secrets Manager client var client = new AWS.SecretsManager({ region: region }); // In this sample we only handle the specific exceptions for the 'GetSecretValue' API. // See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html // We rethrow the exception by default. client.getSecretValue({SecretId: secretName}, function(err, data) { if (err) { if (err.code === 'DecryptionFailureException') // Secrets Manager can't decrypt the protected secret text using the provided KMS key. // Deal with the exception here, and/or rethrow at your discretion. throw err; else if (err.code === 'InternalServiceErrorException') // An error occurred on the server side. // Deal with the exception here, and/or rethrow at your discretion. throw err; else if (err.code === 'InvalidParameterException') // You provided an invalid value for a parameter. // Deal with the exception here, and/or rethrow at your discretion. throw err; else if (err.code === 'InvalidRequestException') // You provided a parameter value that is not valid for the current state of the resource. // Deal with the exception here, and/or rethrow at your discretion. throw err; else if (err.code === 'ResourceNotFoundException') // We can't find the resource that you asked for. // Deal with the exception here, and/or rethrow at your discretion. throw err; } else { // Decrypts secret using the associated KMS CMK. // Depending on whether the secret is a string or binary, one of these fields will be populated. if ('SecretString' in data) { return data.SecretString; } else { let buff = new Buffer(data.SecretBinary, 'base64'); return buff.toString('ascii'); } } });} Then I call it // inside exports.handler = (event, context, callback) => {var secret = getSecret('mySecret')console.log('mysecret: ' + secret ) The secret var is always undefined EDIT: Async only works with promises so I had to make my function async and return a promise: async function mySecrets(secretName) { // Load the AWS SDK var AWS = require('aws-sdk'), region = process.env.AWS_REGION, secretName = secretName, secret, decodedBinarySecret; // Create a Secrets Manager client var client = new AWS.SecretsManager({ region: region }); return new Promise((resolve,reject)=>{ client.getSecretValue({SecretId: secretName}, function(err, data) { // In this sample we only handle the specific exceptions for the 'GetSecretValue' API. // See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html // We rethrow the exception by default. if (err) { reject(err); } else { // Decrypts secret using the associated KMS CMK. // Depending on whether the secret is a string or binary, one of these fields will be populated. if ('SecretString' in data) { resolve(data.SecretString); } else { let buff = new Buffer(data.SecretBinary, 'base64'); resolve(buff.toString('ascii')); } } }); });}.....// inside handlerexports.handler = async (event) => {....var value = await mySecrets('mysecret')
Now provide the response and nothing else.
| You need to wait for the async call to finish. Inside your main handler you will have something like: // inside your main handlerexports.handler = async function(event, context) { var secret = await getSecret('mySecret') console.log('mysecret: ' + secret ) return ... } | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57618689', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1028270/']} | jdg_78860
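As a side note to the answer above: the same Secrets Manager lookup is synchronous when done from Python with boto3, so no promise wrapping is needed there. This is a minimal sketch of my own, not part of the original answer — the secret name and region are placeholders.

```python
# Hypothetical Python/boto3 counterpart of the Node handler above.
# boto3's get_secret_value is a plain blocking call, so no callbacks or promises.
import base64
import boto3

def get_secret(secret_name: str, region: str = "us-east-1") -> str:
    client = boto3.client("secretsmanager", region_name=region)
    resp = client.get_secret_value(SecretId=secret_name)
    if "SecretString" in resp:
        return resp["SecretString"]
    # Binary secrets come back base64-encoded, mirroring the Node sample.
    return base64.b64decode(resp["SecretBinary"]).decode("ascii")

if __name__ == "__main__":
    print(get_secret("mysecret"))   # "mysecret" is a placeholder name
```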
stackexchange | llm_judgeable_groundtruth_similarity | 5612676 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm searching for a solution for a weird problem. I have a controller, that needs authentication (with the devise gem). I added the Devise TestHelpers but i can't get it working. require 'test_helper'class KeysControllerTest < ActionController::TestCase include Devise::TestHelpers fixtures :keys def setup @user = User.create!( :email => '[email protected]', :password => 'MyTestingPassword', :password_confirmation => 'MyTestingPassword' ) sign_in @user @key = keys(:one) end test "should get index" do get :index assert_response :success assert_not_nil assigns(:keys) end test "should get new" do get :new assert_response :success end test "should create key" do assert_difference('Key.count') do post :create, :key => @key.attributes end assert_redirected_to key_path(assigns(:key)) end test "should destroy key" do assert_difference('Key.count', -1) do delete :destroy, :id => @key.to_param end assert_redirected_to keys_path end end And i get the following output in my "rake test" window: 29) Failure:test_should_create_key(KeysControllerTest) [/test/functional/keys_controller_test.rb:29]:"Key.count" didn't change by 1.<3> expected but was<2>. 30) Failure:test_should_destroy_key(KeysControllerTest) [/test/functional/keys_controller_test.rb:37]:"Key.count" didn't change by -1.<1> expected but was<2>. 31) Failure:test_should_get_index(KeysControllerTest) [/test/functional/keys_controller_test.rb:19]:Expected response to be a <:success>, but was <302> 32) Failure:test_should_get_new(KeysControllerTest) [/test/functional/keys_controller_test.rb:25]:Expected response to be a <:success>, but was <302> Can someone tell my, why devise doesn't authenticate? I'm using the exact same procedure for an AdminController and it works perfect.
Now provide the response and nothing else.
| Are you using Devise with confirmable? In this case, create is not enough and you need to confirm the user with @user.confirm! Second, why do you create the user in the functional test? Declare your users in the fixture like this (confirmed_at if you require confirmation only): test/fixtures/users.yml: user1: id: 1email: [email protected]_password: abcdef1password_salt: efvfvffdvconfirmed_at: <%= Time.now %> and sign them in in your functional tests with: sign_in users(:user1) Edit: I just saw, that in my app the Devise-Testhelpers are declared in test/test-helpers.rb and I don't know if this makes a difference, maybe you want to try: ENV["RAILS_ENV"] = "test"require File.expand_path('../../config/environment', __FILE__)require 'rails/test_help'class ActionController::TestCase include Devise::TestHelpersendclass ActiveSupport::TestCase # Setup all fixtures in test/fixtures/*.(yml|csv) for all tests in alphabetical order. # # Note: You'll currently still have to declare fixtures explicitly in integration tests # -- they do not yet inherit this setting fixtures :all # Add more helper methods to be used by all tests here...end | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5612676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/700932/']} | jdg_78861 |
stackexchange | llm_judgeable_groundtruth_similarity | 2934185 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I put two projects into Eclipse, called Project1 and Project2. Project1 is independent, but Project2 must be dependent to Project1. When I build the Project2, the error msg comes like this: Error generating final archive: duplicate entry: AndroidManifest.xml
Now provide the response and nothing else.
| I figured out what the problem was. I included a JAR file that had an AndroidManifest.xml file in it, which was conflicting with my own project's AndroidManifest.xml file. The solution in my instance was to rebuild the JAR file, making sure it excluded its own version of AndroidManifest.xml, then to re-include the new version of the JAR file into my project. You might need to clean the project after doing this. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2934185', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/313897/']} | jdg_78862 |
stackexchange | llm_judgeable_groundtruth_similarity | 2275266 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $G$ is a finite (nontrivial) cyclic group acting faithfully on $\mathbb{P}^1_\mathbb{C}$ by holomorphic automorphisms, must it have exactly two fixed points? I believe this should follow from the Hurwitz formula, but it's possible I've made a mistake. Is this well known? Are there other proofs of this which don't use the Hurwitz formula?
Now provide the response and nothing else.
| Thanks to Kenny Wong, the answer is an easy one. The fixed points of a cyclic group are the same as the fixed points of a generator, which is just a mobius transformation given by $z\mapsto (az+b)/(cz+d)$, or alternatively given by a matrix $\begin{bmatrix}a&b\\c&d\end{bmatrix}$ representing an element of $PGL_2(\mathbb{C})$. By looking at the Jordan canonical form, we see that if the matrix has finite order, then it is diagonalizable, which is to say that it has two linearly independent eigenvectors - ie, two fixed points. Alternatively, if $\alpha$ is an automorphism, then since $PGL_2(\mathbb{C})$ is sharply 3-transitive, if $\alpha$ fixes at least 3 points, then it must be the identity. If it fixes $\le 1$ points and has finite order then the quotient map $$\mathbb{P}^1_\mathbb{C}\rightarrow\mathbb{P}^1_\mathbb{C}/\langle\alpha\rangle$$is ramified at $\le 1$ points, but $\mathbb{P}^1_\mathbb{C}/\langle\alpha\rangle$ is a smooth compact Riemann surface, hence also isomorphic to $\mathbb{P}^1_\mathbb{C}$, and thus the map above restricts to an unramified cover of $\mathbb{P}^1_\mathbb{C} - \{\text{$\le 1$ points}\}$, which cannot happen since both $\mathbb{P}^1_\mathbb{C}$ and $\mathbb{A}^1_\mathbb{C}$ have trivial fundamental groups. Note that the finite order assumption is crucial, since the matrix $\begin{bmatrix}1&1\\0&1\end{bmatrix}$ representing the automorphism $z\mapsto z+1$ has only one fixed point, but this is okay since it has infinite order, and hence is not the generator of a finite cyclic group. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2275266', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_78863 |
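A concrete instance of the diagonalization argument above may help; this worked example is my own addition, not part of the original answer.

```latex
% The order-2 automorphism z -> -z of P^1 and its matrix in PGL_2(C):
\[
\alpha(z) = -z
\quad\longleftrightarrow\quad
A = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad A^2 = I .
\]
% Its fixed points are the two eigenlines of the diagonalizable matrix A,
% namely z = 0 and z = \infty, and the quotient map is ramified exactly there:
\[
\operatorname{Fix}(\alpha) = \{0, \infty\}, \qquad
\mathbb{P}^1_{\mathbb{C}} \longrightarrow \mathbb{P}^1_{\mathbb{C}}/\langle\alpha\rangle
\cong \mathbb{P}^1_{\mathbb{C}}, \quad z \longmapsto z^2 .
\]
```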
stackexchange | llm_judgeable_groundtruth_similarity | 37799296 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is the rspec binstub in my project directory. #!/usr/bin/env rubybegin load File.expand_path("../spring", __FILE__)rescue LoadErrorend# frozen_string_literal: true## This file was generated by Bundler.## The application 'rspec' is installed as part of a gem, and# this file is here to facilitate running it.#require "pathname"ENV["BUNDLE_GEMFILE"] ||= File.expand_path("../../Gemfile", Pathname.new(__FILE__).realpath)require "rubygems"require "bundler/setup"load Gem.bin_path("rspec-core", "rspec") What is this intended to do? # frozen_string_literal: true
Now provide the response and nothing else.
| # frozen_string_literal: true is a magic comment, supported for the first time in Ruby 2.3, that tells Ruby that all string literals in the file are implicitly frozen, as if #freeze had been called on each of them. That is, if a string literal is defined in a file with this comment, and you call a method on that string which modifies it, such as << , you'll get RuntimeError: can't modify frozen String . The comment must be on the first line of the file. In Ruby 2.3, you can use this magic comment to prepare for frozen string literals being the default in Ruby 3 . In Ruby 2.3 run with the --enable=frozen-string-literal flag, and in Ruby 3, string literals are frozen in all files. You can override the global setting with # frozen_string_literal: false . If you want a string literal to be mutable regardless of the global or per-file setting, you can prefix it with the unary + operator (being careful with operator precedence) or call .dup on it: # frozen_string_literal: true"".frozen?=> true(+"").frozen?=> false"".dup.frozen?=> false You can also freeze a mutable (unfrozen) string with unary - . Source: magic_comment defined in ruby/ruby | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/37799296', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/597358/']} | jdg_78864 |
stackexchange | llm_judgeable_groundtruth_similarity | 672575 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It is known that \begin{align}\arcsin x + \arcsin y =\begin{cases}\arcsin( x\sqrt{1-y^2} + y\sqrt{1-x^2}) \\\quad\text{if } x^2+y^2 \le 1 &\text{or} &(x^2+y^2 > 1 &\text{and} &xy< 0);\\\pi - \arcsin( x\sqrt{1-y^2} + y\sqrt{1-x^2}) \\\quad\text{if } x^2+y^2 > 1&\text{and} &0< x,y \le 1;\\-\pi - \arcsin( x\sqrt{1-y^2} + y\sqrt{1-x^2}) \\\quad\text{if } x^2+y^2 > 1&\text{and} &-1\le x,y < 0.\end{cases}\end{align} I tried to prove this myself, have no problem in getting the 'crux' $\arcsin( x\sqrt{1-y^2} + y\sqrt{1-x^2})$ part of the RHS, but face trouble in checking the range of that 'crux' under the given conditions.
Now provide the response and nothing else.
| Using this , $\displaystyle-\frac\pi2\leq \arcsin z\le\frac\pi2 $ for $-1\le z\le1$ So, $\displaystyle-\pi\le\arcsin x+\arcsin y\le\pi$ Again, $\displaystyle\arcsin x+\arcsin y= \begin{cases} \\-\pi- \arcsin(x\sqrt{1-y^2}+y\sqrt{1-x^2})& \mbox{if } -\pi\le\arcsin x+\arcsin y<-\frac\pi2\\\arcsin(x\sqrt{1-y^2}+y\sqrt{1-x^2}) &\mbox{if } -\frac\pi2\le\arcsin x+\arcsin y\le\frac\pi2 \\ \pi- \arcsin(x\sqrt{1-y^2}+y\sqrt{1-x^2})& \mbox{if }\frac\pi2<\arcsin x+\arcsin y\le\pi \end{cases} $ and as like other trigonometric ratios are $\ge0$ for the angles in $\left[0,\frac\pi2\right]$ So, $\displaystyle\arcsin z\begin{cases}\text{lies in } \left[0,\frac\pi2\right] &\mbox{if } z\ge0 \\ \text{lies in } \left[-\frac\pi2,0\right] & \mbox{if } z<0 \end{cases} $ Case $(i):$Observe that if $\displaystyle x\cdot y<0\ \ \ \ (1)$ i.e., $x,y$ are of opposite sign, $\displaystyle -\frac\pi2\le\arcsin x+\arcsin y\le\frac\pi2$ Case $(ii):$If $x>0,y>0$ $\displaystyle \arcsin x+\arcsin y$ will be $\displaystyle \le\frac\pi2$ according as $\displaystyle \arcsin x\le\frac\pi2-\arcsin y$ But as $\displaystyle\arcsin y+\arccos y=\frac\pi2,$ we need $\displaystyle \arcsin x\le\arccos y$ Again as the principal value of inverse cosine ratio lies in $\in[0,\pi],$$\displaystyle\arccos y=\arcsin(+\sqrt{1-y^2})\implies \arcsin x\le\arcsin\sqrt{1-y^2}$ Now as sine ratio is increasing in $\displaystyle \left[0,\frac\pi2\right],$ we need $\displaystyle x\le\sqrt{1-y^2}\iff x^2\le1-y^2$ as $x,y>0$ $\displaystyle\implies x^2+y^2\le1 \ \ \ \ (2)$ So, $(1),(2)$ are the required condition for $\displaystyle \arcsin x+\arcsin y\le\frac\pi2$ Case $(iii):$ Now as $\displaystyle-\frac\pi2\arcsin(-u)\le\frac\pi2 \iff -\frac\pi2\arcsin(u)\le\frac\pi2$ $\arcsin(-u)=-\arcsin u$ Use this fact to find the similar condition when $x<0,y<0$ setting $x=-X,y=-Y$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/672575', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/70311/']} | jdg_78865 |
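A quick numerical check of the second branch of the identity, added here for illustration (not part of the original answer): take x = y = 0.8, so that x² + y² > 1 with both values positive.

```latex
% x = y = 0.8 gives x^2 + y^2 = 1.28 > 1, so the \pi - \arcsin(...) branch should apply.
\[
\arcsin 0.8 + \arcsin 0.8 \approx 0.9273 + 0.9273 = 1.8546 > \tfrac{\pi}{2},
\]
\[
x\sqrt{1-y^2} + y\sqrt{1-x^2} = 0.8\cdot 0.6 + 0.8\cdot 0.6 = 0.96,
\qquad
\pi - \arcsin 0.96 \approx 3.1416 - 1.2870 = 1.8546,
\]
% so both sides agree, as the formula predicts.
```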
stackexchange | llm_judgeable_groundtruth_similarity | 34282583 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to know difference between rowID and rowNUM And how to see both of these in our table. when I execute this: SELECT * FROM emp WHERE rownum=1 It returns one query but when I do the same for rowid it says inconsistent datatypes: expected ROWID got NUMBER And even in some of the tables, rownum returns null . Why so? Please clarify this: rowid vs rownum?(Demo query) Thank you EDIT: Require to use alias to display ROWID and ROWNUM (as they're pseudocolumn)like: SELECT rownum r1, rowid r2 FROM emp
Now provide the response and nothing else.
| Both ROWNUM and ROWID are pseudo columns. Rowid For each row in the database, the ROWID pseudo column returns the address of the row. An example query would be: SELECT ROWID, last_name FROM employees WHERE department_id = 20; More info on rowid here: https://docs.oracle.com/cd/B19306_01/server.102/b14200/pseudocolumns008.htm Rownum For each row returned by a query, the ROWNUM pseudo column returns a number indicating the order in which Oracle selects the row from a table or set of joined rows. The first row selected has a ROWNUM of 1, the second has 2, and so on. You can limit the number of results with rownum like this: SELECT * FROM employees WHERE ROWNUM < 10; More info on rownum here: https://docs.oracle.com/cd/B19306_01/server.102/b14200/pseudocolumns009.htm Difference The actual difference between rowid and rownum is that rowid is a permanent unique identifier for that row, whereas rownum is temporary. If you change your query, the rownum will refer to another row, but the rowid won't. So the ROWNUM is a consecutive number which is applicable to a specific SQL statement only, whereas the ROWID is a unique ID for a row. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34282583', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5576660/']} | jdg_78866
stackexchange | llm_judgeable_groundtruth_similarity | 80932 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it an accurate statement to say that free electrons in a metal experience NO restoring force when they interact with electromagnetic waves? I understand that the electrons exist in a space filled with ions, and doesn't the cumulative potential that is present due to the presence of the ions exert an electric field on the electrons. Even in the case of simple metals, where you can say that the nucleus is shielded by the valence electrons, so called Coulomb shielding, how significant is the shielding. Naively, it seems to me that because the charge in the nucleus is not balanced by the charge in the bound electrons, there should be some net potential that the free electron sees.
Now provide the response and nothing else.
| The free electron model is surprisingly good at predicting the properties of electrons in metals, and this implies that the electrons really are nearly free. However when you look more closely there is of course an interaction with the lattice. This is modelled using the (rather predictably named) nearly free electron model . The conduction electrons are delocalised, so you shouldn't think of them as little balls bouncing off the ion cores. The spatial extent of their wavefunction is typically far greater than the lattice repeat, hence the relatively weak interactions. However interactions with the lattice are responsible for electrical resistance and thermal conductivity, and at very low temperatures for superconductivity. However note that these aren't interactions between a single electron and a single ion core, but rather interactions between electron waves and lattice waves (phonons). | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/80932', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/22182/']} | jdg_78867 |
stackexchange | llm_judgeable_groundtruth_similarity | 16683775 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying my hands on developing a simple android application in which I am trying to use sqlcipher , which uses .so libraries internally. I have read the documentation on how to use sqlcipher with android app . I have followed the steps and it compiles without any error. But, at runtime it throws UnsatisfiedLinkError . Googling around for it, I found that, gradle doesn't support .so libraries yet , but I found a hack here which I am trying to use. But it throws compile time error at line #40 on the gist which is, tasks.withType(com.android.build.gradle.PackageApplicationTask) { pkgTask -> pkgTask.jniDir new File(buildDir, 'native-libs')} saying Could not find property 'com' on Project 'MyProject' Here I am posting code from my build.gradle file. buildscript { repositories { maven { url 'http://repo1.maven.org/maven2' } } dependencies { classpath 'com.android.tools.build:gradle:0.4' }}apply plugin: 'android'dependencies { compile files('libs/android-support-v4.jar') compile files('libs/commons-codec.jar') compile files('libs/guava-r09.jar') compile files('libs/sqlcipher.jar')}targetCompatibility = 1.6sourceCompatibility = 1.6android { target = 'android-14' compileSdkVersion 17 buildToolsVersion "17.0.0" defaultConfig { minSdkVersion 9 targetSdkVersion 16 } }task copyNativeLibs(type: Copy) { from(new File(project(':MyProject').buildDir, 'native-libs')) { include '**/*.so' } into new File(buildDir, 'native-libs')}tasks.withType(Compile) { compileTask -> compileTask.dependsOn copyNativeLibs }clean.dependsOn 'cleanCopyNativeLibs'tasks.withType(com.android.build.gradle.PackageApplicationTask) { pkgTask -> pkgTask.jniDir new File(buildDir, 'native-libs')} Can, anybody please help me on what I have done wrong or what should I do to include those .so libraries in my apk? As I am new to android development and gradle, please apologize me if I have misunderstood something.
Now provide the response and nothing else.
| I've tried the solution presented in the accepted answer and it did not work for me. I wanted to share what DID work for me as it might help someone else. I've found this solution here . Basically what you need to do is put your .so files inside a folder named lib (Note: it is not libs and this is not a mistake). It should have the same structure it will have in the APK file. In my case it was: Project: |--lib: |--|--armeabi: |--|--|--.so files. So I've made a lib folder and inside it an armeabi folder where I've inserted all the needed .so files. I then zipped the folder into a .zip (the structure inside the zip file is now lib/armeabi/*.so). I renamed the .zip file to armeabi.jar and added the line compile fileTree(dir: 'libs', include: '*.jar') to dependencies {} in the Gradle build file. This solved my problem in a rather clean way. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/16683775', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1951058/']} | jdg_78868
stackexchange | llm_judgeable_groundtruth_similarity | 120598 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was wondering about the voltage drop in real diodes (0.7 V in Si diodes, 0.3 in Ge diodes etc.). According to my experience this drop is linked to the used material, but the shockley equation which should describe a diode never uses any material constant. So, how is the material constant introduced in a diode equation for modelling the additional voltage drop compared to an ideal diode?
Now provide the response and nothing else.
| Shockly diode equation is given by: $$\style{}{I=I_S(e^{V_D/nV_T}-1)}\tag1$$ Where, \$I_S\$ is the reverse bias saturation current. \$V_D\$ is the voltage across the diode \$V_T\$ is the thermal voltage \$n\$ is the ideality factor, also known as the quality factor or emission coefficient. I see two material dependent parameters here: The reverse saturation current, \$I_S\$ depends on the material. The parameter \$n\$ depends on fabrication process and semiconductor material. EDIT From equation (1), $$V_D = n \cdot V_T \ln\left(\frac{I}{I_S}+1\right) \approx n \cdot V_T\ln\left(\frac{I}{I_S}\right)$$ $$V_D \approx n \cdot V_T \cdot \ln10 \cdot \log_{10}\left(\frac{I}{I_S}\right)$$ Assuming room temperature and \$n=1\$ , $$V_D \approx 0.05916 \cdot \log_{10}\left(\frac{I}{I_S}\right)\tag2$$ Typical values of the saturation current at room temperature are: \$I_S = 10^{-12}\$ for silicon diodes; \$I_S = 10^{-6}\$ for germanium diodes. For a current of 1.0 mA: \$V_D \approx 0.53 V\$ for silicon diodes (9 orders of magnitude) \$V_D \approx 0.18 V\$ for germanium diodes (3 orders of magnitude) For a current of 100 mA: \$V_D \approx 0.65 V\$ for silicon diodes (11 orders of magnitude) \$V_D \approx 0.30 V\$ for germanium diodes (5 orders of magnitude) Further increase in current won't cause much increase in \$V_D\$ (0.05916V per decade is the rate of change of \$V_D\$ with respect to \$I\$ ). Hence in the common cases (current in the range of mA), the voltage drop remains constant around 0.6V for silicon diodes. And hence values of 0.6 or 0.7 Volts are commonly used as voltage drop for silicon diodes and 0.3 for germanium diodes. source: wikipedia | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/120598', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/49115/']} | jdg_78869 |
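To reproduce the voltage-drop figures quoted in the answer above, here is a small Python check of equation (2). It is a sketch of my own, using the same assumed saturation currents (1e-12 A for silicon, 1e-6 A for germanium) as the answer.

```python
# Check V_D ≈ 0.05916 * log10(I / I_S) at room temperature with n = 1.
import math

VOLTS_PER_DECADE = 0.05916  # ≈ (kT/q) * ln(10) at ~300 K

def diode_drop(current_a: float, i_sat_a: float) -> float:
    """Forward voltage drop from the simplified Shockley relation, eq. (2)."""
    return VOLTS_PER_DECADE * math.log10(current_a / i_sat_a)

for label, i_sat in (("silicon", 1e-12), ("germanium", 1e-6)):
    for current in (1e-3, 100e-3):
        v = diode_drop(current, i_sat)
        print(f"{label:9s} I = {current * 1e3:5.0f} mA -> V_D ≈ {v:.2f} V")
# Prints roughly 0.53 V and 0.65 V for silicon, 0.18 V and 0.30 V for germanium,
# matching the figures given in the answer.
```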
stackexchange | llm_judgeable_groundtruth_similarity | 2759350 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My comments on this answer got me thinking about the issues of constness and sorting. I played around a bit and reduced my issues to the fact that this code: #include <vector>int main() { std::vector <const int> v; } will not compile - you can't create a vector of const ints. Obviously, I should have known this (and intellectually I did), but I've never needed to create such a thing before. However, it seems like a useful construct to me, and I wonder if there is any way round this problem - I want to add things to a vector (or whatever), but they should not be changed once added. There's probably some embarrassingly simple solution to this, but it's something I'd never considered before.I probably should not have mentioned sorting (I may ask another question about that, see this for the difficulties of asking questions). My real base use case is something like this: vector <const int> v; // ok (i.e. I want it to be OK)v.push_back( 42 ); // okint n = v[0]; // okv[0] = 1; // not allowed
Now provide the response and nothing else.
| Well, in C++0x you can... In C++03, there is a paragraph 23.1[lib.containers.requirements]/3, which says The type of objects stored in these components must meet the requirements of CopyConstructible types (20.1.3), and the additional requirements of Assignable types. This is what's currently preventing you from using const int as a type argument to std::vector . However, in C++0x, this paragraph is missing, instead, T is required to be Destructible and additional requirements on T are specified per-expression, e.g. v = u on std::vector is only valid if T is MoveConstructible and MoveAssignable . If I interpret those requirements correctly, it should be possible to instantiate std::vector<const int> , you'll just be missing some of its functionality (which I guess is what you wanted). You can fill it by passing a pair of iterators to the constructor. I think emplace_back() should work as well, though I failed to find explicit requirements on T for it. You still won't be able to sort the vector in-place though. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2759350', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_78870 |
stackexchange | llm_judgeable_groundtruth_similarity | 162875 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As in the title, I am asking if there is a difference between allowing set-theoretic operations over arbitrarily many sets, and restricting to only countably many sets. For example, the standard definition of an topology on a set $X$ requires that arbitrary unions of open sets are open. Do I lose anything significant if I restrict this to just unions of countably many (open) sets? I cannot come up with an example where it makes a difference.
Now provide the response and nothing else.
| Let $X$ be an uncountable set. Let \[ \tau = \{ O \subseteq X \mid O = X \text{ or } O \text{ is at most countable}\}\]Then $\tau$ contains $\emptyset$ and $X$, is closed under finite intersections and under countable unions. But it isn't a topology on $X$ as it isn't closed under arbitrary unions. So it makes a difference. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/162875', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33817/']} | jdg_78871 |
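To spell out the failure of arbitrary unions in this example (my own elaboration, taking X = ℝ):

```latex
% With X = \mathbb{R}, every singleton is countable and hence lies in \tau, but
\[
(0,1) \;=\; \bigcup_{x \in (0,1)} \{x\}
\]
% is an uncountable union of members of \tau that is neither countable nor all of X, so
\[
(0,1) \notin \tau .
\]
```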
stackexchange | llm_judgeable_groundtruth_similarity | 8586526 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My application offers only Portrait mode. In a portrait activity I have a fullscreen VideoView. What I want to do is rotate the VideoView ( the actual video, videobuffer) 90 degrees at Landscape mode. Enabling activity to be on Lanscape mode is not an option.Extending VideoView and canvas rotate will not work as it is a SurfaceView not an actual view.Is there any way to achieve that with a videoView?
Now provide the response and nothing else.
| VideoView does not support rotation of video even if composition matrix is set correctly and rotation attribute is used. What you can do is to use TextureView and set its attribute rotation="90" (for example). It then will rotate the frames but the aspect ratio is something that you need to handle your self. In order to do so you can use textureView.setScaleX((screenHeight * 1.0f) / screenWidth) <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" > <TextureView android:id="@+id/playback_video" android:layout_width="match_parent" android:layout_height="match_parent" android:layout_centerInParent="true" android:rotation="90" /></RelativeLayout> It should handle streamed video too. But please think of it as of an example rather than release ready code. I renamed some things and removed others, they don't have relation to the question and this might break something but in general this is workable example. public class PlaybackActivity extends Activity implements MediaPlayer.OnErrorListener, OnPreparedListener, OnCompletionListener, OnVideoSizeChangedListener, OnBufferingUpdateListener, OnInfoListener, SurfaceTextureListener{ private MediaPlayer mediaPlayer; private TextureView videoView; private boolean startedPlayback = false; private boolean playerReady = false; public static final int MEDIA_INFO_NETWORK_BANDWIDTH = 703; private void createMediaPlayer() { mediaPlayer = new MediaPlayer(); } private void releaseMediaPlayer() { if (mediaPlayer != null) { mediaPlayer.setSurface(null); mediaPlayer.release(); mediaPlayer = null; } } public void onCompletion(MediaPlayer mp) { Log.w(TAG, "Video playback finished"); } @Override public boolean onError(MediaPlayer player, int what, int extra) { if (what == MediaPlayer.MEDIA_ERROR_UNKNOWN) { /* * Restart play back in case we did not start anything yet. This may * be the case when we tried to tune in in very first secs of the * broadcast when there is no data yet. 
*/ if (liveBroadcast && mediaPlayer != null && !mediaPlayer.isPlaying() && !startedPlayback) { if (checkCount-- > 0) { mediaPlayer.reset(); checkBroadcast(); } else { Log.w(TAG, "Broadcast finished"); } } else { Log.w(TAG, "No media in stream"); } } else if (what == MediaPlayer.MEDIA_ERROR_SERVER_DIED) { Log.w(TAG, "Media service died unexpectedly"); } else { Log.w(TAG, "Unknown media error"); } return true; } @Override public boolean onInfo(MediaPlayer mp, int what, int extra) { switch (what) { case MediaPlayer.MEDIA_INFO_VIDEO_TRACK_LAGGING: Log.w(TAG, "Media is too complex to decode it fast enough."); startedPlayback = true; break; case MEDIA_INFO_NETWORK_BANDWIDTH: Log.w(TAG, "Bandwith in recent past."); break; case MediaPlayer.MEDIA_INFO_BUFFERING_START: Log.w(TAG, "Start of media bufferring."); startedPlayback = true; break; case MediaPlayer.MEDIA_INFO_BUFFERING_END: Log.w(TAG, "End of media bufferring."); startedPlayback = true; break; case MediaPlayer.MEDIA_INFO_BAD_INTERLEAVING: Log.w(TAG, "Media is not properly interleaved."); break; case MediaPlayer.MEDIA_INFO_NOT_SEEKABLE: Log.w(TAG, "Stream is not seekable."); break; case MediaPlayer.MEDIA_INFO_METADATA_UPDATE: Log.w(TAG, "New set of metadata is available."); break; case MediaPlayer.MEDIA_INFO_UNKNOWN: default: Log.w(TAG, "Unknown playback info (" + what + ":" + extra + ")."); break; } return true; } private void startPlayback() { if (mediaPlayer != null) { onLoaded(mediaPlayer); mediaPlayer.start(); } } private void pausePlayback() { if (mediaPlayer != null && mediaPlayer.isPlaying()) mediaPlayer.pause(); } private void resumePlayback() { if (mediaPlayer != null && mediaPlayer.isPlaying()) mediaPlayer.start(); } private void onLoaded(MediaPlayer mp) { } public void onPrepared(MediaPlayer mp) { playerReady = true; startPlayback(); } @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON); setContentView(R.layout.playback); videoView = (TextureView) findViewById(R.id.playback_video); videoView.setOnClickListener(videoViewClickHandler); videoView.setSurfaceTextureListener(this); createMediaPlayer(); } @Override protected void onDestroy() { releaseMediaPlayer(); if (surface != null) { surface.release(); surface = null; } super.onDestroy(); } @Override public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) { this.surface = new Surface(surface); loadMedia(someurl); } @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) { if (this.surface != null) { releaseMediaPlayer(); this.surface.release(); this.surface = null; } return true; } @Override public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) { } @Override public void onSurfaceTextureUpdated(SurfaceTexture surface) { } @Override public void onVideoSizeChanged(MediaPlayer mp, int w, int h) { if (w > 0 && h > 0 && !videoSizeSetupDone) { Log.w(TAG, "Video size changed: " + w + "x" + h); changeVideoSize(w, h); } } private boolean videoSizeSetupDone = false; private void changeVideoSize(int width, int height) { DisplayMetrics metrics = new DisplayMetrics(); RelativeLayout.LayoutParams params; Utils.getScreenMetrics(this, metrics); VideoOrientation orientation = someVideoSource.getVideoOrientation(); if (orientation == LANDSCAPE) { params = new RelativeLayout.LayoutParams(metrics.widthPixels, metrics.heightPixels); } else { float rotation = orientation == 
BroadcastVideoOrientation.BroadcastVideoFrontCamera ? -90.0f : 90.0f; params = new RelativeLayout.LayoutParams(metrics.heightPixels, metrics.widthPixels); float scale = (width * 1.0f) / (height * 1.0f); videoView.setRotation(rotation); videoView.setScaleX(scale); } params.addRule(RelativeLayout.CENTER_IN_PARENT, -1); videoView.setLayoutParams(params); videoSizeSetupDone = true; } private void loadMedia(String url) { if (surface == null) return; Log.d(App.TAG, "Loading url: " + url); startedPlayback = false; try { mediaPlayer.reset(); mediaPlayer.setSurface(surface); mediaPlayer.setDataSource(url); mediaPlayer.setOnPreparedListener(this); mediaPlayer.setOnCompletionListener(this); mediaPlayer.setOnErrorListener(this); mediaPlayer.setOnVideoSizeChangedListener(this); mediaPlayer.setScreenOnWhilePlaying(true); mediaPlayer.setOnBufferingUpdateListener(this); mediaPlayer.setOnInfoListener(this); mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC); mediaPlayer.prepareAsync(); } catch (Exception e) { Log.w(TAG, "Media load failed"); Utils.alert(this, "Playback Error", e.getMessage(), finishHandler); } }} Hope this helps. I was looking for this solution for quite long time. Tried almost everything and this seems to be the only way. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8586526', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/348058/']} | jdg_78872 |
stackexchange | llm_judgeable_groundtruth_similarity | 21932987 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to store some timeseries data into redis.I have unix timestamps and that that i need to associate a value (timestamp <-> value) I tried sorted sets having the timestamp as score (so i can do zrange on the timestamps) and the value as member. 127.0.0.1:6379> ZADD timeserie 1392141527245 10 1392141527275 12 1392141527100 10(integer) 2127.0.0.1:6379> zscan timeserie 01) "0"2) 1) "10" 2) "1392141527245" 3) "12" 4) "1392141527275"127.0.0.1:6379> But i hit a problem, members are nonrepeating while my values can be the same for different timestamps.Any idea how to approach this? Is another data type better?
Now provide the response and nothing else.
| An easy trick to solve this problem is to concatenate the timestamp and the value. Instead of storing: ZADD timeserie 1392141527245 10 you can store: ZADD timeserie 1392141527245 10:1392141527245 Up to the application to encode/parse the value:timestamp format. Sorted sets are implemented as a skip list plus a hash table, so they are not especially compact in memory. If the volume of your data is significant, you will be better served by another solution. Some people use normal strings to encode time series, which are way more compact than sorted sets. You can find an example here: https://github.com/antirez/redis-timeseries | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21932987', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1887900/']} | jdg_78873 |
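A minimal sketch of that value:timestamp encoding from Python with the redis-py client (my own illustration; the key name, the redis-py 3.x style zadd mapping, and the parsing convention are assumptions, not part of the original answer):

```python
# Hypothetical redis-py sketch of the "value:timestamp" member trick.
import redis

r = redis.Redis()

def add_point(key: str, timestamp_ms: int, value) -> None:
    # Member is "value:timestamp", so equal values at different times stay unique;
    # the score is still the timestamp, so time-range queries keep working.
    r.zadd(key, {f"{value}:{timestamp_ms}": timestamp_ms})

def read_range(key: str, start_ms: int, end_ms: int):
    for member in r.zrangebyscore(key, start_ms, end_ms):
        value, _, ts = member.decode().rpartition(":")
        yield int(ts), value

add_point("timeserie", 1392141527245, 10)
add_point("timeserie", 1392141527275, 10)   # same value, no member collision now
print(list(read_range("timeserie", 1392141527000, 1392141528000)))
```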
stackexchange | llm_judgeable_groundtruth_similarity | 2369887 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Returning a method value from inside a using statement that gets a DataContext seems to always work fine , like this: public static Transaction GetMostRecentTransaction(int singleId){ using (var db = new DataClasses1DataContext()) { var transaction = (from t in db.Transactions orderby t.WhenCreated descending where t.Id == singleId select t).SingleOrDefault(); return transaction; }} But I always feel like I should be closing something before I break out of the using brackets, e.g. by defining transaction before the using statement, get it's value inside the brackets, and then returning after the brackets. Would defining and returning the variable outside the using brackets be better practice or conserve resources in any way?
Now provide the response and nothing else.
| No, I think it's clearer this way. Don't worry, Dispose will still be called "on the way out" - and only after the return value is fully evaluated. If an exception is thrown at any point (including evaluating the return value) Dispose will still be called too. While you certainly could take the longer route, it's two extra lines that just add cruft and extra context to keep track of (mentally). In fact, you don't really need the extra local variable - although it can be handy in terms of debugging. You could just have: public static Transaction GetMostRecentTransaction(int singleId){ using (var db = new DataClasses1DataContext()) { return (from t in db.Transactions orderby t.WhenCreated descending where t.Id == singleId select t).SingleOrDefault(); }} Indeed, I might even be tempted to use dot notation, and put the Where condition within the SingleOrDefault : public static Transaction GetMostRecentTransaction(int singleId){ using (var db = new DataClasses1DataContext()) { return db.Transactions.OrderByDescending(t => t.WhenCreated) .SingleOrDefault(t => t.Id == singleId); }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2369887', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4639/']} | jdg_78874 |
stackexchange | llm_judgeable_groundtruth_similarity | 3227225 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm wondering whether the following addition and multiplication over the set $(\mathbb{R}\setminus\{0\}\times \mathbb{Z}) \cup \{0\}$ define a field: $$(a,a')+(b,b')=\begin{cases}(a,a') \text{ if } a'>b'\\(b,b') \text{ if } b'>a'\\(a+b,a') \text{ if } b'=a' \text{ and } a\neq -b\\0 \text{ if } b'=a' \text{ and } a= -b\\\end{cases}$$ $$(a,a')(b,b')=(ab,a'+b')$$ $$-(a,a')=(-a,a')$$ $$(a,a')^{-1}=(a^{-1},-a')$$ [ $0$ is the additive unit, which fixes addition and multiplication with $0$ . $(1,0)$ the multiplicative unit.] If yes, does this field have a name? If no, which of the axioms fail? I'm a bit confused because I thought there are only "relatively few" different fields, such as the rational, real, complex numbers, or finite fields.
Now provide the response and nothing else.
| No, addition is not associative. For instance, $$((1,0)+(-1,0))+(1,-1)=0+(1,-1)=(1,-1)$$ but $$(1,0)+((-1,0)+(1,-1))=(1,0)+(-1,0)=0.$$ Note that you can tell something must be wrong with just the additive axioms, since your operation $+$ does not allow cancellation and so cannot be a group operation. Since there clearly is an identity and inverses, associativity must fail. By the way, there are lots and lots of different fields; there are just a few that are familiar in elementary mathematics. For instance, the following is a field: the underlying set is $\mathbb{Q}\times\mathbb{Q}$ , addition is $$(a,b)+(c,d)=(a+c,b+d),$$ and multiplication is $$(a,b)\cdot(c,d)=(ac+2bd,ad+bc).$$ (To make this more familiar, this field is isomorphic to the subfield of real numbers consisting of numbers of the form $a+b\sqrt{2}$ with $a,b\in\mathbb{Q}$ , sending $(a,b)$ to $a+b\sqrt{2}$ .) | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3227225', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/487705/']} | jdg_78875 |
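One multiplication checked against the stated isomorphism with the subfield {a + b√2 : a, b ∈ ℚ} (my own verification, not part of the original answer):

```latex
% Under (a,b) \mapsto a + b\sqrt{2}, with (a,b)\cdot(c,d) = (ac + 2bd,\; ad + bc):
\[
(1,1)\cdot(1,1) = (1\cdot 1 + 2\cdot 1\cdot 1,\; 1\cdot 1 + 1\cdot 1) = (3,2),
\]
\[
(1+\sqrt{2})^2 = 1 + 2\sqrt{2} + 2 = 3 + 2\sqrt{2},
\]
% which is exactly the image of (3,2), as the isomorphism requires.
```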
stackexchange | llm_judgeable_groundtruth_similarity | 1851650 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a string 'some.file.name',I want to grab 'some.file'. To do that,I need to find the last occurrence of '.' in a string. My solution is : declare @someStr varchar(20) declare @reversedStr varchar(20) declare @index int set @someStr = '001.002.003' set @reversedStr = reverse(@someStr) set @index = len(@someStr) - charindex('.',@reversedStr) select left(@someStr,@index) Well,isn't it too complicated?I was just intented to using 'some.file' in a where-clause. Anyone has a good idea?
Now provide the response and nothing else.
| What do you need to do with it?? Do you need to grab the characters after the last occurrence of a given delimiter? If so: reverse the string and search using the normal CHARINDEX: declare @test varchar(100)set @test = 'some.file.name'declare @reversed varchar(100)set @reversed = REVERSE(@test)select REVERSE(SUBSTRING(@reversed, CHARINDEX('.', @reversed)+1, 100)) You'll get back "some.file" - the characters up to the last "." in the original file name. There's no "LASTCHARINDEX" or anything like that in SQL Server directly. What you might consider doing in SQL Server 2005 and up is to create a .NET extension library and deploy it as an assembly into SQL Server - T-SQL is not very strong with string manipulation, whereas .NET really is. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1851650', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/102565/']} | jdg_78876
stackexchange | llm_judgeable_groundtruth_similarity | 22300278 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We have a Visual Studio extension, currently installed with MSI, since we need to perform certain tasks after the installation (ngen, registering certain COM servers, etc). In the end of the installation, we run devenv.exe /setup (more specifically, devenv.exe /setup /nosetupvstemplates to make it a bit faster) to register the extension in Visual Studio 2012 and above (in 2010 this step was not required, since VS2010 would rebuild the extensions cache on every startup). We're currently looking to move to a partial VSIX installation, but there are certain issues: We have a certain folder structure (additional bundled tools). I wasn't able to find an option to include arbitrary folders and files with in VSIX package. Is this possible? We're trying to avoid using devenv.exe /setup , since in rare cases, this may cause extensions not to load (as detailed by Remco Mulder on the MSDN forums ). We still need to perform "post-installation" operations, currently not possible with VSIX. Due to previous 'sins', we hardcoded some of the paths to use the CompanyName\ProductName\Version format that the VSIX packages used to follow, but it seems that VSIX randomy generate a folder name now (as Jason mentions in his answer below). So what I'd essentially like is to be able to install an "unpacked" VSIX package - have the MSI installer create the file structure on disk (in the correct location, %VSInstallDir%\Common7\IDE\Extensions\Company\Product\Version ), and then somehow to have VSIX consume the .vsixmanifest file to register the extension in Visual Studio. (Initial digging into the Extension Manager API shows there's a method called CreateInstalledExtension in ExtensionManagerService that takes in a path to vsixmanifest, in assembly Microsoft.VisualStudio.ExtensionManager.Implementation.dll , but unfortunately it is internal). So, without resorting to API hackery, is there any alternative way to essentially install an extension in Visual Studio (2012 and above), without using devenv /setup ?
Now provide the response and nothing else.
| As Jason (and everyone else) suggests - yes, the "correct" (supported) way of registering a Visual Studio package externally is by running devenv.exe /setup , and for the most part - it works without problems (albeit slow, due to entire Visual Studio configuration being rebuilt). This is a commercial product, and every once in a while we get a support case, complaining that after installing our extension, something bad happened in Visual Studio - other extensions failed to load, problem with Solution explorer, etc. Those cases are rare, but they do happen. In an attempt to minimize the impact installing our extension has on the user's machine, I was trying to find a suitable way to use the VSIX installation mechanism that does not use devenv.exe /setup , but unfortunately, there isn't a supported scenario that handles all the issues I've raised in the question above. After extensive research into the problem, I found an undocumented, unsupported solution that solves the problem completely! An empty file, called extensions.configurationchanged , located under %VSInstallDir%\Common7\IDE\Extensions gets touched every time a VSIX package is installed or uninstalled, causing Visual Studio to refresh its package cache (only!) on next restart! By modifying this file's "last modified date", you essentially cause Visual Studio to act as if a VSIX package was just installed or uninstalled. A few quick experiments showed that copying the extension files to an empty hive and then touching extensions.configurationchanged will cause Visual Studio to load the extension (and process all its .pkgdef files) on next restart - no devenv /setup necessary! Several commercial products offer this solution in the Troubleshooting section of their online help, dealing with loading failure of their extension, so this solution might be the "lesser evil" I was looking for. Use at your own risk. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22300278', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8205/']} | jdg_78877
stackexchange | llm_judgeable_groundtruth_similarity | 656078 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I am sliding a block very slowly on a rough surface. If the block has travelled $d$ distance then work done by me is $W_1=\mu mg d$ and that by friction is $W_2=-\mu mg d$ . Now the energy transferred from me to block is $\mu mgd$ and that taken by friction from block is $\mu mgd $ , The net energy of block remains same but the energy taken by friction evolves as heat and that is equal to my chemical energy consumed, so total energy of $block + me$ system remains constant. Now If I pull a block of mass $m$ slowly towards up to a height $h$ , then work done by me is $W_1=mgh$ (assuming $h$ is much less than radius of earth) and that by gravity is $W_2=-mgh$ . Therefore $mgh$ goes from me to block and $mgh$ from block to earth, So here also energy of block doesn't change, then why do we say that potential energy of block increases. I know I am lacking something here, as the total energy of the system would not be conserved if the block's energy doesn't change and my energy decreases. Please help me in understanding where I am wrong.
Now provide the response and nothing else.
| So here also energy of block doesn't change, then why we say that potential energy of block increases. This is a common confusion and is due to poor communication, not a failure in your understanding which appears to be correct. Potential energy is energy that is available by virtue of the configuration of a system. In this case the potential energy is $mgh$ . Now, consider what system this potential energy describes. We have $m$ , which is a property of the block, but $g$ is a property of the earth, and $h$ is a relationship between the block and the earth. So the system that has potential energy $mgh$ is the system of both the block and the earth. So, when we say "the potential energy of the block increases", we are actually making a mistake. We should say "the potential energy of the block+earth system increases". Thus, when you say: Therefore $mgh$ goes from me to block and $mgh$ from block to earth If we taught potential energy correctly from the beginning you would have said "Therefore $mgh$ goes from me to block and the block+earth system's potential energy is increased by $mgh$ " | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/656078', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/298106/']} | jdg_78878 |
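The bookkeeping behind that corrected wording can be written out in one line each for the kinetic and potential energy (my own summary, idealizing the person as losing exactly mgh of chemical energy):

```latex
% Lifting the block slowly through height h, so its kinetic energy does not change:
\[
\Delta K = W_{\text{me}} + W_{\text{gravity}} = mgh + (-mgh) = 0,
\]
% while the work done against gravity is stored in the block+earth configuration:
\[
\Delta U_{\text{block+earth}} = -W_{\text{gravity}} = mgh = -\,\Delta E_{\text{chemical, me}} .
\]
```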
stackexchange | llm_judgeable_groundtruth_similarity | 49969319 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The general pattern to create constants in Kotlin seems to be using companion objects. However, I can also define a constant at the file level. Why is that not so popular? Am I missing something? With companion object: class Example { companion object { const val CONSTANT = "something"} On top level: const val CONSTANT = "something"class Example {}
Now provide the response and nothing else.
| In Java you're forced to put all static field and method declarations in a class and often you even have to create a class just for that purpose. Coming to Kotlin, many users look for the equivalent facility out of habit and end up overusing companion objects. Kotlin completely decouples the notions of a file and a class. You can declare any number of public classes in the same file. You can also declare private top-level functions and variables and they'll be accessible only to the classes within the same file. This is a great way to organize closely associated code and data. Compared to top-level declarations, the syntax of companion objects is quite unwieldy. You should use them only when you specifically want to associate some public static code or data with a class and want your users to qualify access to it with the class's name. The use cases for this are quite rare and in most cases the top-level declarations are more natural. Whenever you have some private static code/data that you want to couple to a class, you'll be better served with private top-level declarations. Finally, sometimes the concern of the generated bytecode matters. If, for whatever reason, you have to produce a Java class with Kotlin code such that the class has a static member, you must resort to a companion object and a special annotation. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/49969319', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6051176/']} | jdg_78879 |
stackexchange | llm_judgeable_groundtruth_similarity | 6746078 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to implement callback function in Java using “interface”. I have wrote the application part as MyJavaFunction(int size, m_GetSizeInterface); m_GetSizeInterface is an Interface which contains the callback function GetSize. This GetSize method is override in the application. In JNI I need to call a CPP function having prototype int MyCPPFunction(int size, int (*callback)(int* ID)); How can I pass this GetSize as parameter to MyCPPFunction in JNI? Please help public int GetSize (m_SizeClass arg0){ g_size = arg0.size; return 0;}
Now provide the response and nothing else.
| The complication here is that you want to invoke native C++ code which you, in turn, want to invoke a java method. This is actually a bit tricky. You need to create a JNI C++ function for java to call, and a C++ function matchingthe MyCPPFunction callback signature. The latter will act as a wrapper to call the java method. Because the wrapper will need information about the JNI environment, which cannot be provided by parameters (lest we ruin the signature) you create a few global variables to hold it: jobject g_getSizeIface;jmethodID g_method;JNIEnv *g_env; The C++ function which java will call is the following: JNIEXPORT void JNICALL Java_ClassName_MyCPPFunction (JNIEnv *env, jint size, jobject getSizeInterface){ jclass objclass = env->GetObjectClass(getSizeInterface); jmethodID method = env->GetMethodID(objclass, "GetSize", "(m_SizeClass)I"); if(methodID == 0){ std::cout << "could not get method id!" << std::endl; return; } g_method = method; g_getSizeIface = getSizeInterface; g_env = env MyCPPFunction(size, WrapperFunc);} And the wrapper function is thus: int WrapperFunc(int *id){ jint retval; //marshalling an int* to a m_SizeClass boogy-woogy. ... g_env->ExceptionClear(); retval = g_env->CallIntMethod(g_getSizeIface, g_method, /*marshalled m_SizeClass*/); if(g_env->ExceptionOccurred()){ //panic! Light fires! The British are coming!!! ... g_env->ExceptionClear(); } return rvalue;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6746078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/536091/']} | jdg_78880 |
stackexchange | llm_judgeable_groundtruth_similarity | 4057221 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to test a scope I have that is based upon a chain of other scopes. ("public_stream" below). scope :public, where("entries.privacy = 'public'")scope :completed, where("entries.observation <> '' AND entries.application <> ''")scope :without_user, lambda { |user| where("entries.user_id <> ?", user.id) }scope :public_stream, lambda { |user| public.completed.without_user(user).limit(15) } Using a test like this: it "should use the public, without_user, completed, and limit scopes" do @chain = mock(ActiveRecord::Relation) Entry.should_receive(:public).and_return(@chain) @chain.should_receive(:without_user).with(@user).and_return(@chain) @chain.should_receive(:completed).and_return(@chain) @chain.should_receive(:limit).with(15).and_return(Factory(:entry)) Entry.public_stream(@user) end However, I continue to receive this error: Failure/Error: Entry.public_stream(@user)undefined method `includes_values' for #<Entry:0xd7b7c0> It seems includes_values is an instance variable of the ActiveRecord::Relation object, but when I try to stub it, I still receive the same error. I was wondering if anyone had experience with stubing Rails 3's new chained queries? I can find a bunch of discussion over 2.x's find hash, but nothing on how to test what's current.
Now provide the response and nothing else.
| I use rspec's stub_chain for this. You might be able to use something like: some_model.rb scope :uninteresting, :conditions => ["category = 'bad'"], :order => "created_at DESC" Controller @some_models = SomeModel.uninteresting.where(:something_else => true) spec SomeModel.stub_chain(:uninteresting, :where) {mock_some_model} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4057221', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/330934/']} | jdg_78881 |
stackexchange | llm_judgeable_groundtruth_similarity | 19915 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The conventional wisdom is that established speed limits lead to an increase in safety. Speed limits are usually set to attempt to cap road traffic speed; there are several reasons for wanting to do this. It is often done with an intention to improve road traffic safety and reduce the number of road traffic casualties from traffic collisions. In their World report on road traffic injury prevention report, the World Health Organization (WHO) identify speed control as one of various interventions likely to contribute to a reduction in road casualties. Wikipedia: Speed Limit I am inclined to question the conventional wisdom, especially when those proposing the restrictions have something to benefit by maintaining and enforcing restrictions (ticket revenues, enforcement disgression, etc). To be clear, I am not talking about raising the speed limit. I think the presence of any established limit has a psychological effect on people, who may actually drive faster than is safe if a sign says that it is permissible. I am also not asking if driving a car too fast for conditions is dangerous, that seems pretty clear. I am asking if establishing and posting a legal speed limit decreases the danger on highways. I am also restricting this query to highways, since I believe that they are fundamentally different than surface and residential streets, and the data for each will be vastly different, so different conclusions may be reached.
Now provide the response and nothing else.
| Germany is an interesting case to study this, as they have stretches of highway with and without speed limits. In a study done in the state of Brandenburg, the effect of putting a speed limit on stretches of highway has been studied by comparing the accident statistics before and after the new limit. On two stretches the speed limit was changed from no limit to 130 km/h (~80 MPH) and the effects were quite remarkable. Die Zahl der Unfälle (P,SS) halbierte sich von 654 U/3 Jahre auf 337 U/3 Jahre nach Einführung der Geschwindigkeitsbegrenzung (-48 %). Die Zahl der Verunglückten sankdeutlich von 838 auf 362 Verunglückte in 3 Jahren (-57 %). This says that the number of accidents in the years after the new limit decreased by 48% whereas the number of people involved in accidents decreased by 57%. (This is illustrated in the report on figure 3.1 very clearly) Of course this has to be put into relation to the general decrease of accidents on all highways, which they did by looking at a couple of reference stretches where there was no speed limit over the whole time. Der Rückgang zum Zeitraum 2000-2002entsprach 50 %. Die Kontrollgruppen zeigen im Durchschnitt einen Rückgang um 23,5 % für die zeitliche Entwicklung auf, so dass die Geschwindigkeitsbegrenzung zu einerVerminderung um 26,5 % beigetragen hat. Der Unterschied der UKRa zwischen denbegrenzten und unbegrenzten Strecken lag 2006 ebenfalls bei 26,5 %. This means that the decrease (this time of the cost per distance driven linked to the accident) was 50% on the stretches with the new speed limit while it was 23.5% on the reference samples and that thus 26.5% can be linked to the new speed limit. Overall this leads to the conclusion that in this case, a speed limit did significantly decrease the number of accidents. However I admit that this is limited to the German context, especially since it only looks at those two stretches. | {} | {'log_upvote_score': 5, 'links': ['https://skeptics.stackexchange.com/questions/19915', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/17748/']} | jdg_78882 |
stackexchange | llm_judgeable_groundtruth_similarity | 764427 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have inherited a web application where the usual ability to press return in any of the input fields has been disabled for the very good reason that the page contains multiple forms, and the application would not be able to determine (or, so I am told) which form to action. The application has been designed so there are no submit buttons (as in input type="submit") and, instead, the designers have gone for onclick handling. Here are two buttons that are defined on one of the pages, included for illustration <input type="button" value="LOGIN" name="btnLoginOk" onclick="submit();" /> <input type="button" class="button-click-grey" value="Find Link Partners" onclick="raiseEvent('SubmitForm','',this);" style="cursor:pointer;" /> But I really want to be able to allow users to press return if they wish e.g. if they've just typed into a field associated with the LOGIN, then detect that and action the onclick="submit();" Perhaps there is a solution with jQuery.
Now provide the response and nothing else.
| page contains multiple forms, and the application would not be able to determine (or, so I am told) which form to action. When you press enter on an input control the browser seeks the first submit button in that form and simulates a click on it. In the case of multiple buttons, the first one will be pressed on keyboard enter (this is by no means written in stone and browsers may deviate from this). If you have two forms, the one that got a keypress will have it's first submit button pressed. Therefore you don't really need any special handling of this. You just have to stop being in the way. You can simulate this in code, on a form: $( 'form' ).bind('keypress', function(e){ if ( e.keyCode == 13 ) { $( this ).find( 'input[type=submit]:first' ).click(); } }); Or window (for a demonstration of what is roughly happening): $( window ).bind('keypress', function(e){ if ( $( e.originalTarget ).is( ':input' ) && e.keyCode == 13 ) { $( e.originalTarget ) .closest( 'form' ) .find( 'input[type=submit]:first' ) .click(); } }); Assuming of course that .preventDefault() has not been called on the event. Bottom line: If you have the event you can divine from it what element it came from and therefore which form it belongs to. Even in this case: <input type="button" value="LOGIN" name="btnLoginOk" onclick="submit();"> Here submit() is a global function, but when it is called, its context ( this ) will be the element and you may do submit(e){ this.form.submit(); } . The application has been designed so there are no submit buttons (as in input type="submit") and, instead, the designers have gone for onclick handling. This sounds to me like the designer doesn't fully comprehend DOM / form events and is going the long way around the problem. Another likely reason could be that the program is old and was designed back when these things weren't quite as stable, or properly documented, as they are today. Replace this: <form action="/login/" method="POST"> [...] <input type="button" value="LOGIN" name="btnLoginOk" onclick="submit();"></form> With this: <form action="/login/" method="POST"> [...] <input type="submit" value="LOGIN" name="btnLoginOk"></form> Then add a key handler to all forms that need it, that detects and suppresses enter if some condition is met (for the forms that you actually do want to disable this on). // for all forms that POST that have 2+ submit buttons$( 'form[method=post]:has(:submit:eq(1))' ).bind('keydown', function(e){ // if target is an enter key, input element, and not a button if ( e.keyCode == 13 && e.target.tagName == 'INPUT' && !/^(button|reset|submit)$/i.test( e.target.type ) ) { return false; // kill event }}); Or better still: Use a form validation library (or jQuery plugin) that knows how to do this for you. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/764427', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_78883 |
stackexchange | llm_judgeable_groundtruth_similarity | 206405 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I like to have my code warning free for VS.NET and GCC, and I like to have my code 64-bit ready. Today I wrote a little module that deals with in memory buffers and provides access to the data via a file-style interface (e.g. you can read bytes, write bytes, seek around etc.). As the data-type for current read position and size I used size_t since that seems to be the most natural choice. I get around the warnings and it ought to work in 64-bit as well. Just in case: My structure looks like this: typedef struct{ unsigned char * m_Data; size_t m_CurrentReadPosition; size_t m_DataSize;} MyMemoryFile; The signedness of size_t seems not to be defined in practice. A Google code-search proved that. Now I'm in a dilemma: I want to check additions with size_t for overflows because I have to deal with user supplied data and third party libraries will use my code. However, for the overflow check I have to know the sign-ness. It makes a huge difference in the implementation. So - how the heck should I write such a code in a platform and compiler independent way? Can I check the signedness of size_t at run or compile-time? That would solve my problem. Or maybe size_t wasn't the best idea in the first place. Any ideas? EDIT : I'm looking for a solution for the C-language!
Now provide the response and nothing else.
| Regarding whether size_t is signed or unsigned and GCC (from an old GCC manual - I'm not sure if it's still there): There is a potential problem with the size_t type and versions of GCC prior to release 2.4. ANSI C requires that size_t always be an unsigned type. For compatibility with existing systems' header files, GCC defines size_t in stddef.h to be whatever type the system's sys/types.h defines it to be. Most Unix systems that define size_t in sys/types.h, define it to be a signed type. Some code in the library depends on size_t being an unsigned type, and will not work correctly if it is signed. The GNU C library code which expects size_t to be unsigned is correct. The definition of size_t as a signed type is incorrect. We plan that in version 2.4, GCC will always define size_t as an unsigned type, and the 'fixincludes' script will massage the system's sys/types.h so as not to conflict with this. In the meantime, we work around this problem by telling GCC explicitly to use an unsigned type for size_t when compiling the GNU C library. 'configure' will automatically detect what type GCC uses for size_t and arrange to override it if necessary. If you want a signed version of size_t use ptrdiff_t or on some systems there is a typedef for ssize_t. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/206405', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15955/']} | jdg_78884 |
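As a small illustration of the overflow check the questioner asked about: because the standard requires size_t to be unsigned, wrap-around can be detected before performing the addition, without knowing anything platform-specific. The helper below is only a sketch; its name and the use of SIZE_MAX from C99's <stdint.h> are my own additions, not part of the original answer.

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical helper: returns true and writes a + b to *sum only when the
   addition cannot wrap around. Relies on size_t being unsigned, which the
   C standard guarantees even where old system headers disagreed. */
static bool add_size_t_checked(size_t a, size_t b, size_t *sum)
{
    if (a > SIZE_MAX - b) {
        return false; /* a + b would overflow */
    }
    *sum = a + b;
    return true;
}

In the memory-buffer structure from the question, this would guard updates to m_CurrentReadPosition before comparing against m_DataSize.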
stackexchange | llm_judgeable_groundtruth_similarity | 987 |
Below is a question asked on the forum devops.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can I get memory usage per process with Linux? We monitor our servers with sysstat / sar. But besides seeing that memory went through the roof at some point, we can't pinpoint which process was getting bigger and bigger. Is there a way with sar (or other tools) to get memory usage per process, and look at it later on?
Now provide the response and nothing else.
| As Tensibai mentioned, you can extract this info from the /proc filesystem , but in most cases you need to determine the trending yourself. There are several places which could be of interest: /proc/[pid]/statm Provides information about memory usage, measured in pages. The columns are: size (1) total program size (same as VmSize in /proc/[pid]/status) resident (2) resident set size (same as VmRSS in /proc/[pid]/status) shared (3) number of resident shared pages (i.e., backed by a file) (same as RssFile+RssShmem in /proc/[pid]/status) text (4) text (code) lib (5) library (unused since Linux 2.6; always 0) data (6) data + stack dt (7) dirty pages (unused since Linux 2.6; always 0) cat /proc/31520/statm1217567 835883 84912 29 0 955887 0 memory-related fields in /proc/[pid]/status (notably Vm* and Rss* ), might be preferable if you also collect other info from this file * VmPeak: Peak virtual memory size. * VmSize: Virtual memory size. * VmLck: Locked memory size (see mlock(3)). * VmPin: Pinned memory size (since Linux 3.2). These are pages that can't be moved because something needs to directly access physical memory. * VmHWM: Peak resident set size ("high water mark"). * VmRSS: Resident set size. Note that the value here is the sum of RssAnon, RssFile, and RssShmem. * RssAnon: Size of resident anonymous memory. (since Linux 4.5). * RssFile: Size of resident file mappings. (since Linux 4.5). * RssShmem: Size of resident shared memory (includes System V shared memory, mappings from tmpfs(5), and shared anonymous mappings). (since Linux 4.5). * VmData, VmStk, VmExe: Size of data, stack, and text segments. * VmLib: Shared library code size. * VmPTE: Page table entries size (since Linux 2.6.10). * VmPMD: Size of second-level page tables (since Linux 4.0). * VmSwap: Swapped-out virtual memory size by anonymous private pages; shmem swap usage is not included (since Linux 2.6.34). server:/> egrep '^(Vm|Rss)' /proc/31520/statusVmPeak: 6315376 kBVmSize: 4870332 kBVmLck: 0 kBVmPin: 0 kBVmHWM: 5009608 kBVmRSS: 3344300 kBVmData: 3822572 kBVmStk: 1040 kBVmExe: 116 kBVmLib: 146736 kBVmPTE: 8952 kBVmSwap: 0 kB Some processes can, through their behaviour and not through their actual memory footprint, contribute to the overall system memory starvation and eventual demise. So it might also be of interest to look at the OOM Killer related information, which already takes into account some trending information: /proc/[pid]/oom_score This file displays the current score that the kernel gives to this process for the purpose of selecting a process for the OOM-killer. A higher score means that the process is more likely to be selected by the OOM-killer. The basis for this score is the amount of memory used by the process, with increases (+) or decreases (-) for factors including: * whether the process creates a lot of children using fork(2) (+); * whether the process has been running a long time, or has used a lot of CPU time (-); * whether the process has a low nice value (i.e., > 0) (+); * whether the process is privileged (-); and * whether the process is making direct hardware access (-). The oom_score also reflects the adjustment specified by the oom_score_adj or oom_adj setting for the process. server:/> cat proc/31520/oom_score103 /proc/[pid]/oom_score_adj (or its deprecated predecessor /proc/[pid]/oom_adj , if need be) This file can be used to adjust the badness heuristic used to select which process gets killed in out-of-memory conditions. 
The badness heuristic assigns a value to each candidate task ranging from 0 (never kill) to 1000 (always kill) to determine which process is targeted. The units are roughly a proportion along that range of allowed memory the process may allocate from, based on an estimation of its current memory and swap use. For example, if a task is using all allowed memory, its badness score will be 1000. If it is using half of its allowed memory, its score will be 500. There is an additional factor included in the badness score: root processes are given 3% extra memory over other tasks. The amount of "allowed" memory depends on the context in which the OOM-killer was called. If it is due to the memory assigned to the allocating task's cpuset being exhausted, the allowed memory represents the set of mems assigned to that cpuset (see cpuset(7)). If it is due to a mempolicy's node(s) being exhausted, the allowed memory represents the set of mempolicy nodes. If it is due to a memory limit (or swap limit) being reached, the allowed memory is that configured limit. Finally, if it is due to the entire system being out of memory, the allowed memory represents all allocatable resources. The value of oom_score_adj is added to the badness score before it is used to determine which task to kill. Acceptable values range from -1000 (OOM_SCORE_ADJ_MIN) to +1000 (OOM_SCORE_ADJ_MAX). This allows user space to control the preference for OOM-killing, ranging from always preferring a certain task or completely disabling it from OOM killing. The lowest possible value, -1000, is equivalent to disabling OOM- killing entirely for that task, since it will always report a badness score of 0. Consequently, it is very simple for user space to define the amount of memory to consider for each task. Setting an oom_score_adj value of +500, for example, is roughly equivalent to allowing the remainder of tasks sharing the same system, cpuset, mempolicy, or memory controller resources to use at least 50% more memory. A value of -500, on the other hand, would be roughly equivalent to discounting 50% of the task's allowed memory from being considered as scoring against the task. For backward compatibility with previous kernels, /proc/[pid]/oom_adj can still be used to tune the badness score. Its value is scaled linearly with oom_score_adj. Writing to /proc/[pid]/oom_score_adj or /proc/[pid]/oom_adj will change the other with its scaled value. server:/> cat proc/31520/oom_score_adj 0 | {} | {'log_upvote_score': 4, 'links': ['https://devops.stackexchange.com/questions/987', 'https://devops.stackexchange.com', 'https://devops.stackexchange.com/users/309/']} | jdg_78885 |
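To address the "look at it later on" part of the question, here is a rough Python sketch; the sampling interval, the choice of VmRSS, and the CSV output are my own assumptions for illustration, not something sysstat/sar provides per process out of the box. It polls /proc/<pid>/status and appends a timestamped row that can be inspected or plotted afterwards.

#!/usr/bin/env python3
"""Sample VmRSS for one PID at a fixed interval and append it to a CSV."""
import csv
import sys
import time

def read_vmrss_kb(pid):
    # /proc/<pid>/status holds a line like "VmRSS:   3344300 kB"
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return None  # kernel threads have no VmRSS line

def main(pid, interval=60, outfile="memlog.csv"):
    with open(outfile, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            rss = read_vmrss_kb(pid)
            if rss is None:
                break  # process exited or exposes no RSS
            writer.writerow([int(time.time()), pid, rss])
            f.flush()
            time.sleep(interval)

if __name__ == "__main__":
    main(int(sys.argv[1]))

Invoked with a PID as its single argument, it keeps appending until the process disappears; graphing the resulting CSV then shows which process grew over time.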
stackexchange | llm_judgeable_groundtruth_similarity | 28197219 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a formula field dynamically in EPPlus. If the formula field contains fewer than 255 characters then it is created properly. If it exceeds 255 then it throws an exception: The total length of a DataValidation list cannot exceed 255 characters. Can anyone please help me solve this issue, or suggest some alternatives?
Now provide the response and nothing else.
| The problem is you are using the Formula container of that cell to store all of the available list options - basically a CSV list. That has a hard limit of 255 characters in Excel. You can see this by going into excel and manually entering values separated by commas right in the "Source" box when creating a new Validation List. Your best option may be to populate the values in cells and give the range of the values to the Formula instead. Like this: using (var pack = new ExcelPackage(existingFile)){ var ws = pack.Workbook.Worksheets.Add("Content"); //var val = ws.DataValidations.AddListValidation("A1"); //val.Formula.Values.Add("Here we have to add long text"); //val.Formula.Values.Add("All list values combined have to have more then 255 chars"); //val.Formula.Values.Add("more text 1 more text more text more text"); //val.Formula.Values.Add("more text 2 more text more text more text"); ws.Cells["B1"].Value = "Here we have to add long text"; ws.Cells["B2"].Value = "All list values combined have to have more then 255 chars"; ws.Cells["B3"].Value = "more text 1 more text more text more text"; ws.Cells["B4"].Value = "more text 2 more text more text more text"; ws.Cells["B5"].Value = "more text 2 more text more text more textmore text 2 more text more text more textmore text 2 more text more text more textmore text 2 more text more text more textmore text 2 more text more text more textmore text 2 more text more text more textmore text 2 more text more text more textmore"; var val = ws.DataValidations.AddListValidation("A1"); val.Formula.ExcelFormula = "B1:B5"; pack.SaveAs(existingFile);} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28197219', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2134400/']} | jdg_78886 |
stackexchange | llm_judgeable_groundtruth_similarity | 141979 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
It is well known that if $(P, \leq)$ is a partial order then $\leq$ can always be extended to a linear order. This is sometimes called Szpilrajn's theorem although it had been previously proved by Banach, Kuratowski and Tarski. Now suppose that $f$ is an automorphism of $(P, \leq)$ and we want to extend $\leq$ to a linear order in such a way that $f$ remains an automorphism. Of course, this is not always possible since $f$ could have a finite orbit and automorphisms of linear orders can't have finite orbits; but I wonder if this is the only obstruction. So let $A$ be the collection of all those $f$'s for which it is possible (for instance $Id_P \in A$); here are my questions: 1) Is it true that if $f$ has no finite orbits then $f \in A$? 2) Is $A$ a subgroup of $Aut(P, \leq)$? and the vaguer 3) If the answer to 1 is negative, can we somehow characterize the elements of $A$? Perhaps this is all well known and studied but I couldn't find anything at all in the literature, so references are also appreciated.
Now provide the response and nothing else.
| I couldn't find anything in the literature either, but the answer to the first question is positive. Let $G$ be a group acting on a space $X$. Say that $G$'s action on $X$ has the invariant order-extension property provided that every $G$-invariant partial order on $X$ (i.e., partial order $\le$ such that $x\le y$ iff $gx\le gy$, for each $g\in G$) extends to a $G$-invariant linear order on $X$. I managed to show: Theorem. The following are equivalent for an abelian group $G$: $G$'s action on $X$ has the invariant order extension property. $X$ has a linear $G$-invariant order. No element of $G$ has an orbit of finite size greater than one. Here's the proof. The answer to Question 1 follows by letting $G$ be the subgroup of permutations generated by $f$. (By the way, I assume that your no-finite-orbits condition really means: no finite orbits of size greater than one. Fixed points aren't a problem.) | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/141979', 'https://mathoverflow.net', 'https://mathoverflow.net/users/17836/']} | jdg_78887 |
stackexchange | llm_judgeable_groundtruth_similarity | 21229180 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to convert a Google Spreadsheet column index into its corresponding letter value, for example, given a spreadsheet: I need to do this (this function obviously does not exist, it's an example): getColumnLetterByIndex(4); // this should return "D"getColumnLetterByIndex(1); // this should return "A"getColumnLetterByIndex(6); // this should return "F" Now, I don't recall exactly if the index starts from 0 or from 1 , anyway the concept should be clear. I didn't find anything about this on gas documentation.. am I blind? Any idea? Thank you
Now provide the response and nothing else.
| I wrote these a while back for various purposes (will return the double-letter column names for column numbers > 26): function columnToLetter(column){ var temp, letter = ''; while (column > 0) { temp = (column - 1) % 26; letter = String.fromCharCode(temp + 65) + letter; column = (column - temp - 1) / 26; } return letter;}function letterToColumn(letter){ var column = 0, length = letter.length; for (var i = 0; i < length; i++) { column += (letter.charCodeAt(i) - 64) * Math.pow(26, length - i - 1); } return column;} | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/21229180', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1054151/']} | jdg_78888 |
stackexchange | llm_judgeable_groundtruth_similarity | 61452684 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an array of [5, 6, 7, 3, 9], and I would like to change each element of the array by subtracting 2, then store them in a Set, so what I did is Set<Integer> mySet = Arrays.stream(arr1).map(ele -> new Integer(ele - 2)).collect(Collectors.toSet()); but I am getting two errors here: The method collect(Supplier<R>, ObjIntConsumer<R>, BiConsumer<R,R>) in the type IntStream is not applicable for the arguments (Collector<Object,?,Set<Object>>) Type mismatch: cannot convert from Collector<Object,capture#1-of ?,Set<Object>> to Supplier<R> What do those errors mean and how can I fix the issue here with Java Stream operation?
Now provide the response and nothing else.
| It looks like arr1 is an int[] and therefore, Arrays.stream(arr1) returns an IntStream . You can't apply .collect(Collectors.toSet()) on an IntStream . You can box it to a Stream<Integer> : Set<Integer> mySet = Arrays.stream(arr1) .boxed() .map(ele -> ele - 2) .collect(Collectors.toSet()); or even simpler: Set<Integer> mySet = Arrays.stream(arr1) .mapToObj(ele -> ele - 2) .collect(Collectors.toSet()); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61452684', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6524600/']} | jdg_78889 |
stackexchange | llm_judgeable_groundtruth_similarity | 4218161 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a model called Thing with an attribute called name , and I want name to be a char field that's only 3 characters long. How do I write a test for that? class TestCase1(TestCase): def test1(self): thing = Thing(name='1234') that test should fail. How do I properly write the test so that the test passes when that object fails?
Now provide the response and nothing else.
| If you're expecting Thing(name='1234') to raise an exception, there are two ways to deal with this. One is to use Django's assertRaises (actually from unittest/unittest2): def mytest(self): self.assertRaises(FooException, Thing, name='1234') This fails unless Thing(name='1234') raises a FooException error. Another way is to catch the expected exception and raise one if it doesn't happen, like this: def mytest(self): try: thing = Thing(name='1234') self.fail("your message here") except FooException: pass Obviously, replace the FooException with the one you expect to get from creating the object with too long a string. ValidationError? A third option (as of Python 2.7) is to use assertRaises as a context manager, which makes for cleaner, more readable code: def mytest(self): with self.assertRaises(FooException): thing = Thing(name='1234') Sadly, this doesn't allow for custom test failure messages, so document your tests well. See https://hg.python.org/cpython/file/2.7/Lib/unittest/case.py#l97 for more details. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4218161', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10608/']} | jdg_78890 |
stackexchange | llm_judgeable_groundtruth_similarity | 345598 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Yesterday, while I was travelling in the local metro and was seated, I saw a double reflection of myself in an opposite window. Double reflection in the sense that my reflection was superimposed on another of mine. Why does this happen?
Now provide the response and nothing else.
| This diagram may help understand the answer given by @Mark H better: When using windows as mirrors, we can draw virtual reflections (labeled R1, R2 and R3). R1 is the "normal" reflection. R2 is the "reflection of the reflection" (in the window behind you). R3 is the reflection of R2. I showed an exaggerated angle of the windows, so the reflection "can get past you". In reality it doesn't take much of an angle - and in fact if the first window is slightly curved it can help put R3 back "on top of" R1. It is also possible (hard to tell from your description) that you were just seeing reflections from two panes on the same side. In particular if the second pane was slightly curved, this would give you a "second, smaller" image. And in my experience on subways, these panes are often not very flat. That would give you this second scenario: This would give you a second smaller reflection "almost exactly" superposed on the first. Depending on the curvature and distance of the panes, the size and position of the second reflection may vary. But it would be "more similar" in size to the first reflection (compared to situation 1 above). I am inclined to believe it's the second situation. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/345598', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/134658/']} | jdg_78891 |
stackexchange | llm_judgeable_groundtruth_similarity | 3740903 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In all python dbus documentations there are info on how to export objects, interfaces, signals, but there is nothing how to export interface property. Any ideas how to do that ?
Now provide the response and nothing else.
| It's definitely possible to implement D-Bus properties in Python! D-Bus properties are just methods on a particular interface, namely org.freedesktop.DBus.Properties . The interface is defined in the D-Bus specification ; you can implement it on your class just like you implement any other D-Bus interface: # Untested, just off the top of my head import dbus MY_INTERFACE = 'com.example.Foo' class Foo(dbus.service.Object): # … @dbus.service.method(interface=dbus.PROPERTIES_IFACE, in_signature='ss', out_signature='v') def Get(self, interface_name, property_name): return self.GetAll(interface_name)[property_name] @dbus.service.method(interface=dbus.PROPERTIES_IFACE, in_signature='s', out_signature='a{sv}') def GetAll(self, interface_name): if interface_name == MY_INTERFACE: return { 'Blah': self.blah, # … } else: raise dbus.exceptions.DBusException( 'com.example.UnknownInterface', 'The Foo object does not implement the %s interface' % interface_name) @dbus.service.method(interface=dbus.PROPERTIES_IFACE, in_signature='ssv') def Set(self, interface_name, property_name, new_value): # validate the property name and value, update internal state… self.PropertiesChanged(interface_name, { property_name: new_value }, []) @dbus.service.signal(interface=dbus.PROPERTIES_IFACE, signature='sa{sv}as') def PropertiesChanged(self, interface_name, changed_properties, invalidated_properties): pass dbus-python ought to make it easier to implement properties, but it currently is very lightly maintained at best. If someone fancied diving in and helping fix up stuff like this, they'd be most welcome. Even adding an expanded version of this boilerplate to the documentation would be a start, as this is quite a frequently asked question. If you're interested, patches could be sent to the D-Bus mailing list , or attached to bugs filed against dbus-python on the FreeDesktop bugtracker . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3740903', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/426305/']} | jdg_78892 |
stackexchange | llm_judgeable_groundtruth_similarity | 16947774 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm in the process of writing an application which will allow the user to generate a variety of reports. There is a button which will bring up another window allowing the user to choose a specific date for the report. Now, my boss wants the head label to read what you've selected a date for, i.e. "Payroll Report Date Selection" or "Employee Report Date Selection". In my situation, this window looks exactly the same, aside from this label. My question is, is it feasible to reuse this window and just change the label based upon a passed parameter, or should I have a window for each? The code is going to work the same on all of them. Are there benefits to reusing the same window versus one window for each one?
Now provide the response and nothing else.
| The first time the listener is called, the old value and the new value will be identical. So just do this: $scope.$watch('fieldcontainer', function(newValue, oldValue) { if (newValue !== oldValue) { // do whatever you were going to do }}); This is actually the way the Angular docs recommend handling it : After a watcher is registered with the scope, the listener fn is called asynchronously (via $evalAsync) to initialize the watcher. In rare cases, this is undesirable because the listener is called when the result of watchExpression didn't change. To detect this scenario within the listener fn, you can compare the newVal and oldVal. If these two values are identical (===) then the listener was called due to initialization | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/16947774', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1050672/']} | jdg_78893 |
stackexchange | llm_judgeable_groundtruth_similarity | 1970903 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My attempt: $n^2 + 9n +1$ is obviously not a perfect square when it is in between consecutive squares, or: $n^2 + 8n +16 < n^2 + 9n +1 < n^2 + 10n +25$ The second inequality is always true. Solving the first inequality, we get it is not true when $n \le 15$. So we check for possible values less than 16. On working mod 3, we get that $n$ must be a multiple of $3$. So possible candidates are $3,6,9,12$ and $ 15$. On working out, $n=15$ is the only solution. I inquire if there are any other shorter methods, or methods which are usually done, like taking the discriminant of the expression or by modular arithmetic. I tried the first method and proved that there is integral $n$ when $ \sqrt {77+4c}$ is integral ( Where $c= n^2 + 9n +1$) It works mod 4, and I couldn't continue. I am not sure modular arithmetic would work, as 15 is the only solution, but I would like to know of any other, even if it is reasonably longer than my method. Thanks in advance.
Now provide the response and nothing else.
| We know that: $$ (n+1)^2 = n^2 + 2n +1 < n^2 + 9n +1 < n^2 + 10n + 25 = (n+5)^2 $$ So there are only 3 cases: $$ n^2 + 9n + 1 = (n+i)^2 , i \in \{2, 3, 4\} $$ Solving each case separately, we get respectively for i = 2, 3, 4: $$ 5n = 3 $$$$ 3n = 8 $$$$ 1n = 15 $$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1970903', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/302172/']} | jdg_78894 |
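Writing out the only case that yields an integer, together with a quick check of the answer (this verification step is an addition to the argument above):

% Worked arithmetic for the case i = 4, with a check of the solution n = 15
\[
\begin{aligned}
n^2 + 9n + 1 = (n+4)^2 &\iff 9n + 1 = 8n + 16 \iff n = 15,\\
15^2 + 9\cdot 15 + 1 &= 225 + 135 + 1 = 361 = 19^2 .
\end{aligned}
\]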
stackexchange | llm_judgeable_groundtruth_similarity | 21759684 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for a way to store my object and it seems that the best approach is to use proxies. I found 2 annotations on the internet; which one should I use: @Scope(value = "session", proxyMode = ScopedProxyMode.INTERFACES) or @Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS ) Moreover, is it true that proxies are a better way than using @Component@Scope("session") or using @SessionAttributes ?
Now provide the response and nothing else.
| You'll need to understand what each of those annotations does to choose for yourself. See the javadoc, here . Continue for a more detailed explanation. The first @Scope(value = "session", proxyMode = ScopedProxyMode.INTERFACES) creates a JDK dynamic proxy implementing all interfaces exposed by the class of the target object In other words, the proxy will be a subtype of the interfaces that the target object's class implements, but won't be a subclass of the target object's class itself. Essentially Spring does the following public class Example { public static void main(String[] args) throws Exception { Foo target = new Foo(); InvocationHandler proxyHandler = ... // some proxy specific logic, likely referencing the `target` // works fine Printable proxy = (Printable) Proxy.newProxyInstance(Example.class.getClassLoader(), target.getClass().getInterfaces(), proxyHandler); // not possible, ClassCastException Foo foo = (Foo) proxy; } public static class Foo implements Printable { @Override public void print() { } } public interface Printable { void print(); }} The proxy returned won't be of type Foo and you therefore can't inject it into any targets of that type. For example, Spring will fail to inject it into a field like @Autowiredprivate Foo foo; but will successfully inject the proxy into a field like @Autowiredprivate Printable printable; All calls to the proxy will be handled by the InvocationHandler (which usually performs some use case specific logic then delegates to the target object). The second annotation @Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS ) creates a class-based proxy (uses CGLIB). In addition to interfaces, with CGLIB Spring will be able to create a proxy whose class is a subclass of the target's class. In essence, it does the following Foo target = new Foo();net.sf.cglib.proxy.Enhancer enhancer = new net.sf.cglib.proxy.Enhancer();enhancer.setInterfaces(target.getClass().getInterfaces());enhancer.setSuperclass(target.getClass());net.sf.cglib.proxy.MethodInterceptor interceptor = ... // some proxy specific logic, likely referencing the `target`enhancer.setCallback(interceptor);// works fineFoo proxy = (Foo) enhancer.create(); CGLIB creates a new class that is a subclass of Foo and instantiates it (invoking the constructor of Foo ). All calls to the proxy will be intercepted by the provided callback (which usually performs some use case specific logic and then delegates to the target object). Since the proxy class extends Foo , Spring can inject the proxy into a field (or constructor/method parameter) like @Autowiredprivate Foo injectMe; All this to say, if you're programming to interfaces , then ScopedProxyMode.INTERFACES will be sufficient. If you're not, then use ScopedProxyMode.TARGET_CLASS . As for using @SessionAttributes , it is not an alternative to session scoped beans. Session attributes are just objects, they are not beans. They don't possess the full lifecycle, injection capabilities, proxying behavior that a bean may have. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21759684', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1383538/']} | jdg_78895 |
stackexchange | llm_judgeable_groundtruth_similarity | 11579367 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am implementing a BFSK frequency hopping communication system on a DSP processor. It was suggested by some of the forum members to use the Goertzel algorithm for the demodulation of frequency hopping at specific frequencies. I have tried implementing the Goertzel algorithm in C. The code is as follows: float goertzel(int numSamples,int TARGET_FREQUENCY,int SAMPLING_RATE, float* data){ int k,i; float floatnumSamples; float omega,sine,cosine,coeff,q0,q1,q2,result,real,imag; floatnumSamples = (float) numSamples; k = (int) (0.5 + ((floatnumSamples * TARGET_FREQUENCY) / SAMPLING_RATE)); omega = (2.0 * M_PI * k) / floatnumSamples; sine = sin(omega); cosine = cos(omega); coeff = 2.0 * cosine; q0=0; q1=0; q2=0; for(i=0; i<numSamples; i++) { q0 = coeff * q1 - q2 + data[i]; q2 = q1; q1 = q0; } real = (q1 - q2 * cosine); imag = (q2 * sine); result = sqrtf(real*real + imag*imag); return result;} When I use the function to calculate the result at specific frequencies for a given dataset, I am not getting the correct results. However, if I use the same dataset and calculate the Goertzel result using MATLAB's goertzel() function, then I get the results perfectly. I implemented the algorithm in C with the help of some online tutorials that I found on the internet. I just want your view on whether the function implements the Goertzel algorithm correctly.
Now provide the response and nothing else.
| If you are saying that the Matlab implementation is good because its results match the result for that frequency of a DFT or FFT of your data, then it's probably because the Matlab implementation is normalizing the results by a scaling factor as is done with the FFT. Change your code to take this into account and see if it improves your results. Note that I also changed the function and result names to reflect that your goertzel is calculating the magnitude, not the complete complex result, for clarity: float goertzel_mag(int numSamples,int TARGET_FREQUENCY,int SAMPLING_RATE, float* data){ int k,i; float floatnumSamples; float omega,sine,cosine,coeff,q0,q1,q2,magnitude,real,imag; float scalingFactor = numSamples / 2.0; floatnumSamples = (float) numSamples; k = (int) (0.5 + ((floatnumSamples * TARGET_FREQUENCY) / SAMPLING_RATE)); omega = (2.0 * M_PI * k) / floatnumSamples; sine = sin(omega); cosine = cos(omega); coeff = 2.0 * cosine; q0=0; q1=0; q2=0; for(i=0; i<numSamples; i++) { q0 = coeff * q1 - q2 + data[i]; q2 = q1; q1 = q0; } // calculate the real and imaginary results // scaling appropriately real = (q1 - q2 * cosine) / scalingFactor; imag = (q2 * sine) / scalingFactor; magnitude = sqrtf(real*real + imag*imag); return magnitude;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11579367', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1212028/']} | jdg_78896 |
stackexchange | llm_judgeable_groundtruth_similarity | 25557686 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am attempting to use a partial function so that pool.map() can target a function that has more than one parameter (in this case a Lock() object). Here is example code (taken from an answer to a previous question of mine): from functools import partialdef target(lock, iterable_item): for item in items: # Do cool stuff if (... some condition here ...): lock.acquire() # Write to stdout or logfile, etc. lock.release()def main(): iterable = [1, 2, 3, 4, 5] pool = multiprocessing.Pool() l = multiprocessing.Lock() func = partial(target, l) pool.map(func, iterable) pool.close() pool.join() However when I run this code, I get the error: Runtime Error: Lock objects should only be shared between processes through inheritance. What am I missing here? How can I share the lock between my subprocesses?
Now provide the response and nothing else.
| You can't pass normal multiprocessing.Lock objects to Pool methods, because they can't be pickled. There are two ways to get around this. One is to create Manager() and pass a Manager.Lock() : def main(): iterable = [1, 2, 3, 4, 5] pool = multiprocessing.Pool() m = multiprocessing.Manager() l = m.Lock() func = partial(target, l) pool.map(func, iterable) pool.close() pool.join() This is a little bit heavyweight, though; using a Manager requires spawning another process to host the Manager server. And all calls to acquire / release the lock have to be sent to that server via IPC. The other option is to pass the regular multiprocessing.Lock() at Pool creation time, using the initializer kwarg. This will make your lock instance global in all the child workers: def target(iterable_item): for item in items: # Do cool stuff if (... some condition here ...): lock.acquire() # Write to stdout or logfile, etc. lock.release()def init(l): global lock lock = ldef main(): iterable = [1, 2, 3, 4, 5] l = multiprocessing.Lock() pool = multiprocessing.Pool(initializer=init, initargs=(l,)) pool.map(target, iterable) pool.close() pool.join() The second solution has the side-effect of no longer requiring partial . | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/25557686', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3403428/']} | jdg_78897 |
stackexchange | llm_judgeable_groundtruth_similarity | 34901523 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got a major brainteaser. I want to open a file in classic ASP. I'm using various variables because things can change but the outcome is correct. I know this because I've tested the outcome by copying the linkadress and placing it in my URL. Now the problem: If I click my link it doesn't do anything. Not a refresh, not a redirect. nothing. Does anyone know what I did wrong? Ok here's the deal. My file isn't always local, it depends on what environment I'm on. If I copy-paste the outcome of my url it does download. If I click my URL it doesn't respond. Any ideas? Browser problem? (although I've tested 5 browsers) Or anything else? I'm really stuck here and the internet does not seem to be on my side. I've got 3 environments. The variables underneath here are so that the link works. I know the link works because I've tested it by copying. And yes, it does begin with file:/// and yes I'm sure the link is right. Here's my line of code: response.write("<td class='tab_kolom2'><a href='"&rootRs("pre_rootpad")&rootRs("rootpad_protocollen")&"\"&overzichtRs("Formuliernr")&"\Uitvoeringsoverzicht.xls' target='_blank' download>Click here</a></td>") EDIT: Screenshot with error/outcome of link
Now provide the response and nothing else.
| Now we know what the actual error is can formulate an answer. Not allowed to load local resource is a Security exception built into Chrome and other modern browsers. The wording may be different but in some way shape or form they all have security exceptions in place to deal with this scenario. In the past you could override certain settings or apply certain flags such as --disable-web-security --allow-file-access-from-files --allow-file-access in Chrome (See https://stackoverflow.com/a/22027002/692942 ) It's there for a reason At this point though it's worth pointing out that these security exceptions exist for good reason and trying to circumvent them isn't the best idea. There is another way As you have access to Classic ASP already you could always build a intermediary page that serves the network based files. You do this using a combination of the ADODB.Stream object and the Response.BinaryWrite() method. Doing this ensures your network file locations are never exposed to the client and due to the flexibility of the script it can be used to load resources from multiple locations and multiple file types. Here is a basic example ("getfile.asp"): <%Option ExplicitDim s, id, bin, file, filename, mimeid = Request.QueryString("id")'id can be anything just use it as a key to identify the 'file to return. It could be a simple Case statement like this'or even pulled from a database.Select Case idCase "TESTFILE1" 'The file, mime and filename can be built-up anyway they don't 'have to be hard coded. file = "\\server\share\Projecten\Protocollen\346\Uitvoeringsoverzicht.xls" mime = "application/vnd.ms-excel" 'Filename you want to display when downloading the resource. filename = "Uitvoeringsoverzicht.xls"'Assuming other files Case ...End SelectIf Len(file & "") > 0 Then Set s = Server.CreateObject("ADODB.Stream") s.Type = adTypeBinary 'adTypeBinary = 1 See "Useful Links" Call s.Open() Call s.LoadFromFile(file) bin = s.Read() 'Clean-up the stream and free memory Call s.Close() Set s = Nothing 'Set content type header based on mime variable Response.ContentType = mime 'Control how the content is returned using the 'Content-Disposition HTTP Header. Using "attachment" forces the resource 'to prompt the client to download while "inline" allows the resource to 'download and display in the client (useful for returning images 'as the "src" of a <img> tag). Call Response.AddHeader("Content-Disposition", "attachment;filename=" & filename) Call Response.BinaryWrite(bin)Else 'Return a 404 if there's no file. Response.Status = "404 Not Found"End If%> This example is pseudo coded and as such is untested. This script can then be used in <a> like this to return the resource; <a href="/getfile.asp?id=TESTFILE1">Click Here</a> The could take this approach further and consider (especially for larger files) reading the file in chunks using Response.IsConnected to check whether the client is still there and s.EOS property to check for the end of the stream while the chunks are being read. You could also add to the querystring parameters to set whether you want the file to return in-line or prompt to be downloaded. Useful Links Using METADATA to Import DLL Constants - If you are having trouble getting adTypeBinary to be recongnised, always better then just hard coding 1 . Content-Disposition:What are the differences between “inline” and “attachment”? - Useful information about how Content-Disposition behaves on the client. 
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34901523', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5664497/']} | jdg_78898 |
stackexchange | llm_judgeable_groundtruth_similarity | 15148331 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to unit test a Go package for the first time, and I have a couple of errors in the same file. type FooErr inttype BarErr intfunc (e *FooErr) Error () string { return "A Foo Error has occurred"}func (e *BarErr) Error () string { return "A Bar Error has occurred"} However, all naming conventions seem to look like this func TestXxx(*testing.T) ( from the testing package documentation ). This would mean my testing file would look like this: func TestError (t *testing.T) { ... } // FooErrfunc TestError (t *testing.T) { ... } // BarErr Which is obviously two functions of the same signature. What is the recommended method for handling this?
Now provide the response and nothing else.
| There are a few things to consider here: Errors Package-level exported error values are typically named Err followed by something, for instance ErrTimeout here . This is done so that clients of your package can do something like if err := yourpkg.Function(); err == yourpkg.ErrTimeout { // timeout} else if err != nil { // some other error} To facilitate this, they are often created either with errors.New : // Error constantsvar ( ErrTimeout = errors.New("yourpkg: connect timeout") ErrInvalid = errors.New("yourpkg: invalid configuration")) or with a custom, unexported type: type yourpkgError int// Error constantsvar ( ErrTimeout yourpkgError = iota ErrSyntax ErrConfig ErrInvalid)var errText = map[yourpkgError]string{ ErrTimeout: "yourpkg: connect timed out", ...}func (e yourpkgError) Error() string { return errText[e] } One advantage of the latter approach is that it cannot compare equal with a type from any other package. In the case where you need some extra data inside the error, the name of the type ends in Error : type SyntaxError struct { File string Line, Position int Description string}func (e *SyntaxError) Error() string { return fmt.Sprintf("%s:%d:%d: %s", e.File, e.Line, e.Position, e.Description)} which, in contrast to the previous equality check, requires a type assertion: tree, err := yourpkg.Parse(file)if serr, ok := err.(*SyntaxError); ok { // syntax error} else if err != nil { // other error} In either case, it is important to document your code so that users of your package understand when they will be used and what functions might return them. Tests Tests are often named after the unit that they're testing. In many cases, you won't test error conditions separately, so TestError is not a name that should come up very often. The name of the test itself merely has to be unique, however, and is not constrained to match anything in the code under test in the same way that examples are. When you're testing multiple conditions of a piece of code, it is often best to formulate the test as a Table Driven Test . That wiki page has some good examples, but to demonstrate error checking, you might do this: func TestParse(t *testing.T) { tests := []struct{ contents string err error }{ {"1st", nil}, {"2nd", nil}, {"third", nil}, {"blah", ErrBadOrdinal}, {"", ErrUnexpectedEOF}, } for _, test := range tests { file := strings.NewReader(test.contents) if err := Parse(file); err != test.err { t.Errorf("Parse(%q) error %q, want error %q", test.contents, err, test.err) } // other stuff }} If you do need a special test function for a unit that does something weird and doesn't fit in the main test, you'd typically name it something descriptive like TestParseTimeout that includes both the unit and the behavior you're testing. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15148331', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/483347/']} | jdg_78899 |
stackexchange | llm_judgeable_groundtruth_similarity | 1116861 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Just wondering, while studying limits, if $x\over x$ is continuous at $0$. $f(0)={0 \over 0}$, but $x/x=1$. In this case, is it continuous at $0$?
Now provide the response and nothing else.
| The question is not quite precise but: The limit $\lim_{x \to 0} x/x$ exists and is $1$. The function $x \mapsto x/x$ for $x \in \mathbb R \setminus \{0 \}$ is not defined at $0$ and it makes no sense to ask about its continuity there. The function $x \mapsto x/x$ for $x \in \mathbb R \setminus \{0 \}$ can be extended to a function on $\mathbb R$ via defining it to be $1$ for $x=0$. This is then obviously a continuous function. One could also extend it in some other way, in which case it would not be continuous. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1116861', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/209837/']} | jdg_78900 |
stackexchange | llm_judgeable_groundtruth_similarity | 327820 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I had an exam on functional analysis some time ago, and one of the questions I couldn't make any sense of was the following: Let $\Omega\subset \mathbb{R}$ and $\{f_n\}$ a sequence of continuous functions from $\Omega$ to $\mathbb{R}$. If the following criteria are obeyed: $\exists M>0$ such that $||f_n||_{\infty}< M$ $\forall n\in\mathbb{N}$, $\Omega$ is compact, The sequence $\{f_n\}$ is uniformly equicontinuous. then the theorem of Arzelà–Ascoli states that the sequence $\{f_n\}$ has a subsequence which converges in the $||.||_{\infty}$ norm to a continuous function. Show that the theorem is not true by stating counterexamples in the cases: (1) and (2) are obeyed, but not (3), (1) and (3) are obeyed, but not (2), (2) and (3) are obeyed, but not (1). I spent a lot of time thinking about this, but I couldn't think of any counterexamples. For a non-compact subset of $\mathbb{R}$ I tried $(0,1)$ (as it is not closed) and for a bounded sequence I was thinking of $f_n = x^n$ but these didn't work. Can anyone help me with some counterexamples and maybe a good way of thinking of them?
Now provide the response and nothing else.
| Let $\Omega=[0,1]$, $f_n(x)=x^n$. Then $\|f_n\|=1$ for all $n$. But the functions are not equicontinuous. If they were, there would exist $\delta>0$ such that $\delta<1$, for all $x,y\in(1-\delta,1]$, and for all $n$, $$|y^n-x^n|<1/2.$$But if we take $n$ such that $(1-\delta/2)^n<1/2$, $y=1$ and $x=1-\delta/2$, then $$|y^n-x^n|=1-(1-\delta/2)^n>1-1/2=1/2,$$a contradiction. So the family is not equicontinuous. Let $\Omega=\mathbb R$, and define$$f_n(x)=\begin{cases}0,&\mbox{ if } x\not\in[n,n+1] \\ 2(x-n),&\mbox{ if } x\in[n,n+1/2]\\2-2(x-n),&\mbox{ if } x\in[n+1/2,n+1] \end{cases}.$$Then $\|f_n\|\leq1$ for all $n$. Given $\varepsilon>0$, take $\delta=\varepsilon/2$. Then$$|f_n(x)-f_n(y)|<\varepsilon$$whenever $|x-y|<\delta$. So the family is equicontinuous. But $\|f_n-f_m\|_\infty=1$ if $n\ne m$. Take $\Omega=[0,1]$, $f_n(x)=n$. Then the family is equicontinuous (any $\delta$ works for any $\varepsilon$!), but $\|f_n-f_m\|_\infty=|n-m|$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/327820', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/57132/']} | jdg_78901 |
stackexchange | llm_judgeable_groundtruth_similarity | 25074112 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I used the https://github.com/rockerhieu/emojicon library in my application. When I pass a static unicode string in my code the emoji is visible, but if I send the emoji to a PHP server using a regular GET web service and retrieve the string, then it just shows the unicode string in my application. Both the static and server-retrieved strings are the same if I compare them. Can anybody tell me what I have done wrong in my application? The same application has been developed on iOS, and what they did there is first encode the string as ASCII>UTF-8 while sending it to the server, then decode the string the same way after retrieving it. Can anybody suggest whether this would be compatible with Android as well, and if yes, how can I do this?
Now provide the response and nothing else.
| We can use commons-lang(commons-lang-2.5.jar) library for encoding and decoding of the unicode characters. Download jar file here or use gradle: compile 'org.apache.commons:commons-lang3:3.4' . For Encoding use - StringEscapeUtils.escapeJava(String text) This can be used in android EditText when call getText method, where it will encode the unicode characters properly before sending to web server. For Decoding use - StringEscapeUtils.unescapeJava(String text) This can be used in android TextView to setText , where it will decode the unicode characters properly after receiving the response from web server. Ex: EditText etEmojiEditText = new EditText(this);etEmojiEditText.setText("TYPE SOMETHING IN EMOJI");String toServer = etEmojiEditText.getText();String toServerUnicodeEncoded = StringEscapeUtils.escapeJava(toServer);String serverResponse = "SOME RESPONSE FROM SERVER WITH UNICODE CHARACTERS";String fromServerUnicodeDecoded = StringEscapeUtils.unescapeJava(serverResponse); FYI Use the encoding and decoding for web service side as well. Unicode encoded string should be decoded from web service and response from web service should be encoded before sending to clients. Server tables should contain utf8mb4 instead of utf8, because unicode character needs 4bytes per character. Therefore unicode will not be represented in 3bytes. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25074112', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2641291/']} | jdg_78902 |
stackexchange | llm_judgeable_groundtruth_similarity | 10475079 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I ran into an unexpected code optimization recently, and wanted to check whether my interpretation of what I was observing was correct. The following is a much simplified example of the situation: let demo = let swap fst snd i = if i = fst then snd else if i = snd then fst else i [ for i in 1 .. 10000 -> swap 1 i i ]let demo2 = let swap (fst: int) snd i = if i = fst then snd else if i = snd then fst else i [ for i in 1 .. 10000 -> swap 1 i i ] The only difference between the 2 blocks of code is that in the second case, I explicitly declare the arguments of swap as integers. Yet, when I run the 2 snippets in fsi with #time, I get: Case 1 Real: 00:00:00.011, CPU: 00:00:00.000, GC gen0: 0, gen1: 0, gen2: 0 Case 2 Real: 00:00:00.004, CPU: 00:00:00.015, GC gen0: 0, gen1: 0, gen2: 0 i.e. the 2nd snippet runs 3 times faster than the first. The absolute performance difference here is obviously not an issue, but if I were to use the swap function a lot, it would pile up. My assumption is that the reason for the performance hit is that in the first case, swap is generic and "requires equality", and checks whether int supports it, whereas the second case doesn't have to check anything. Is this the reason this is happening, or am I missing something else? And more generally, should I consider automatic generalization a double-edged sword, that is, an awesome feature which may have unexpected effects on performance?
Now provide the response and nothing else.
| I think this is generally the same case as in the question Why is this F# code so slow . In that question, the performance issue is caused by a constraint requiring comparison and in your case, it is caused by the equality constraint. In both cases, the compiled generic code has to use interfaces (and boxing), while the specialized compiled code can directly use IL instructions for comparison or equality of integers or floating-point numbers. The two ways to avoid the performance issue are: Specialize the code to use int or float as you did Mark the function as inline so that the compiler specializes it automatically For smaller functions, the second approach is better, because it does not generate too much code and you can still write functions in a generic way. If you use function just for single type (by design), then using the first approach is probably appropriate. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10475079', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/114519/']} | jdg_78903 |
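A minimal sketch of the second option from the answer; the only change to the original function is the inline keyword, which lets the compiler emit a specialized copy per call site instead of going through boxed generic equality:
let inline swap fst snd i =
    if i = fst then snd
    else if i = snd then fst
    else i

// Same benchmark shape as in the question; timings will vary by machine.
let demo3 = [ for i in 1 .. 10000 -> swap 1 i i ]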
stackexchange | llm_judgeable_groundtruth_similarity | 7848325 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am tracking down some concurrency issues and it would be very helpful to have the output lines from each thread in a different color when logging to a console. I am on OS X. Could this be done using a conversion pattern to output some control codes or would it need a custom appender? Anyone know how? 2011-10-21 12:14:42,859 ["http-bio-8080"-exec-9] DEBUG ...2011-10-21 12:14:43,198 ["http-bio-8080"-exec-10] DEBUG ... The lines for exec-9 and exec-10 should be in different colors.
Now provide the response and nothing else.
| You can extend PatternLayout and override format(LoggingEvent) . There you could look at LoggingEvent.getThreadName() to get some color based on the thread name (odd/even, maybe?). In order to output color to the console, you'll need to use an ANSI Escape Sequence . For instance, to output red text:
"\u001b["     // Prefix - see [1]
+ "0"         // Brightness
+ ";"         // Separator
+ "31"        // Red foreground
+ "m"         // Suffix
+ text        // the text to output
+ "\u001b[m " // Prefix + Suffix to reset color
Here are some examples: the ColoredPatternLayout implementation by Ingo Thon, and the Colour-coded Console Logging with Log4J blog post. Just to add, maybe you could also achieve this by setting in the MDC a variable "randColor" with a random ANSI color code, for instance in a Filter , and using it in the conversionPattern of a standard org.apache.log4j.PatternLayout in your log4j's console appender configuration:
<appender name="consoleAppender" class="org.apache.log4j.ConsoleAppender">
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="\u001b[0;%X{randColor}m ....... \u001b[m" />
    </layout>
</appender>
[1] What does "\u001B[J" represent? | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7848325', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/159434/']} | jdg_78904
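A small sketch of the subclassing approach (log4j 1.x assumed; the class name and the colour list are arbitrary choices): hash the thread name into a fixed palette so each thread keeps a stable colour.
import org.apache.log4j.PatternLayout;
import org.apache.log4j.spi.LoggingEvent;

public class ThreadColorPatternLayout extends PatternLayout {
    // Red, green, yellow, blue, magenta, cyan foregrounds.
    private static final String[] COLORS = {"31", "32", "33", "34", "35", "36"};

    @Override
    public String format(LoggingEvent event) {
        int idx = Math.floorMod(event.getThreadName().hashCode(), COLORS.length);
        return "\u001b[0;" + COLORS[idx] + "m" + super.format(event) + "\u001b[m";
    }
}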
stackexchange | llm_judgeable_groundtruth_similarity | 52322 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to find all data points inside the curve given below: data = RandomReal[{-1, 1}, {200, 2}];ParametricPlot[{Sin[u], Sin[2 u]}, {u, 0, 2 Pi}, Epilog -> {Red, Point[data]}] A short solution is preferable. Thanks.
Now provide the response and nothing else.
| Using the answer by rm-f here , you can discretized the curve into a polygon and get the desired points to any arbitrary accuracy: inPolyQ[poly_, pt_] := Graphics`Mesh`PointWindingNumber[poly, pt] =!= 0pp = Table[{Sin[u], Sin[2 u]}, {u, 0, 2 Pi, Pi/100}];inPoints = Select[data, inPolyQ[pp, #] &];ParametricPlot[{Sin[u], Sin[2 u]}, {u, 0, 2 Pi}, Epilog -> {Red, Point[inPoints]}] Edit for version 10 In Mathemtica version 10, the problem of finding points inside a closed curve doesn't have to be reduced to the problem of finding points inside a Polygon obtained from that curve by discretization. Instead , you can use the following symbolic approach which requires only that the curve be specified in a suitable form that can be handled by the new Region... commands: rf = RegionMember[ ParametricRegion[{Sin[u], r Sin[2 u]}, {{u, 0, 2 Pi}, {r, 0, 1}}]];ParametricPlot[{Sin[u], Sin[2 u]}, {u, 0, 2 Pi}, Epilog -> {Red, Point[Pick[data, rf[data]]]}] The problem is solved in one line, completely without explicitly forming a list of boundary points. | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/52322', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/13548/']} | jdg_78905 |
stackexchange | llm_judgeable_groundtruth_similarity | 30988 |
Below is a question asked on the forum math.meta.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There are certain questions that ask about finding an object with a certain property, usually some large number. Now, to me, it is on the edge of what this site is about, as the question is not really about understanding the math, but rather asking for help with computation (although sometimes an answer can be given by purely mathematical reasoning). Just to give a few examples: Is there a prime of the form $\ (9n)!+n!+1\ $? Consecutive sequences of not cube-free numbers Are there primes $p=47\cdot 2^n+1$? Is $k=553276187$ the smallest solution? Do we have some recommended approach to these types of questions? I was thinking if perhaps there is a better platform for these elsewhere. However, they do not even seem to fit sites like Art of the Problem Solving (AoPS) forums. This is a reason I personally do not downvote these, as I do not know where else these should be asked. However, at least by the downvotes it seems lots of users have an opposite opinion. If these types of questions are worth keeping on this site, should we at least have some tags to recognize these? Closest thing I found related is [big-numbers] and [examples-counterexamples] , not sure if either of them fits.
Now provide the response and nothing else.
| We should keep these problems, because you never know when you have one. Some days ago, user Nikita asked the question Is this contrived sequence eventually periodic? . Innocently enough, they explained that they had built quite a weird sequence $A(n)$ , which seemed to have a period, as all residue classes mod $6$ except for one gave the exact same thing. I then showed that all residue classes mod $60$ except for one, also gave the exact same thing. All was looking ok. Long story short, in the comment section to this answer by WhatsUp , we proved that, if the sequence was periodic, its period was at least $5354228880$ . There's no way that an answer to this problem can escape computational machinery at this point. The lesson? In math, you never know if a problem will have a beautiful and ingenious argument, or if after checking the first $4.1\cdot10^{18}$ cases, a counterexample will appear. This is also true for questions in this site. We shouldn't do any special maneuver just because we haven't been able to discard either option. | {} | {'log_upvote_score': 4, 'links': ['https://math.meta.stackexchange.com/questions/30988', 'https://math.meta.stackexchange.com', 'https://math.meta.stackexchange.com/users/290240/']} | jdg_78906 |
stackexchange | llm_judgeable_groundtruth_similarity | 4357401 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A while ago, in one of my country's respected mathematical journals, this problem came up: If $a^2+b^2=c^2,$ factorise the following expression over the integers: $f(a, b, c)= a^3+b^3+c^3.$ I was baffled as I could not find a way around the problem. It reminded me of a certain identity of Euler's, but this was different, and something seemed pretty off about the question itself. Anyway, $f$ is symmetric and invariant under all permutations of $a, b, c$ and homogeneous of $\deg 3$ , meaning that $f(ta, tb, tc)=t^3f(a, b, c) \forall t \in \mathbb{Z}$ , so the odd part was that $a^2+b^2=c^2$ is not symmetric under all of those permutations. Why would a distinction between $a, b, c$ be made in the requirement in the first place? Furthermore, the authors never responded with any answer whatsoever. I would very much appreciate it if you could help untangle this problem and either solve it or prove definitively that it is impossible (which I think is much more likely). I believe I have tried every single possible way to factorise it, but I have not been able to create any common factors between all three.
Now provide the response and nothing else.
| Since $a^2 + b^2 = c^2$ , then $(a, b, c)$ is a Pythagorean triple . Without loss of generality, we may assume that $$a = m^2 - n^2, b = 2mn, c = m^2 + n^2.$$ (I forgot if there is an extra condition on $m$ and $n$ .) WolframAlpha then gives the factorization $$(m^2 - n^2)^3 + (2mn)^3 + (m^2 + n^2)^3 = 2m^2 (m + n)^2 (m^2 - 2mn + 3n^2).$$ This may be finally rewritten in terms of $a$ , $b$ , and $c$ as $$a^3 + b^3 + c^3 = (a + c)(b + c)(2c - b - a).$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4357401', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/870427/']} | jdg_78907 |
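As a quick sanity check of the factorization above, take the $(3,4,5)$ triple: $a^3+b^3+c^3 = 27+64+125 = 216$, while $(a+c)(b+c)(2c-b-a) = 8\cdot 9\cdot 3 = 216$. The identity can also be verified in general by expanding $(a+c)(b+c)(2c-b-a)$ and substituting $c^2 = a^2+b^2$ for every $c^2$ that appears.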
stackexchange | llm_judgeable_groundtruth_similarity | 3011692 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been working on this problem and I cannot figure it out! I spent hours of time on it with no use. Can anybody help? The question is: Suppose $p(x)$ is a polynomial with complex coefficients and even degree( $n=2k$ ). All zeros of $p$ are non-real and with length equal to $1$ . prove $$p(1)\in\mathbb{R} \;\;\Longleftrightarrow\;\; p(-1)\in\mathbb{R} $$
Now provide the response and nothing else.
| Note that (assuming $p(-1)\ne 0$ to begin with $$ \frac{p(1)}{p(-1)}=\prod_{j=1}^{2k}\frac{1-w_j}{-1-w_j}$$ where the $w_j$ run over the complex roots (with multiplicity). For a single factor, $$\frac{1-w}{-1-w}=-\frac{(1-w)(1+\bar w)}{|1+w|^2}=\frac{|w|^2-1+(w-\bar w)}{|1+w|^2}. $$ As we are given that $|w|=1$ for all roots, this is the purely imaginary number $\frac{2\operatorname{im} w}{|1+w|^2}i$ . The product of an even number of imaginaries is real. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3011692', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/619203/']} | jdg_78908 |
stackexchange | llm_judgeable_groundtruth_similarity | 8104779 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Javascript I could do this. var a = {};a.f1 = function(){}; How can I do this in Ruby? Update. In JS code, a is an object instantiated without class. And function(){} is an anonymous function, and a.f1 = adds the function to the object instance. So the function is bound to the instance only, and nothing is related to the class or similar definitions.
Now provide the response and nothing else.
| Ruby and JS's object models are very different, but a direct translation may look like this:
a = Object.new
def a.f1(x)
  2*x
end
a.f1(5) #=> 10
You can also use Ruby's eigenclass:
class Object
  def metaclass
    class << self; self; end
  end
end
a = Object.new
# Also: (class << a; self; end).send(:define_method, :f1) { |x| 2*x }
a.metaclass.send(:define_method, :f1) { |x| 2*x }
A warning note: you'll see this kind of code in meta-programming/monkeypatching/... but it's not usual when writing "normal" code, where other techniques (module mix-ins, mainly) apply. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8104779', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/246776/']} | jdg_78909
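On Ruby 1.9 and later the same per-object method can be added without reopening Object at all; a small sketch:
a = Object.new
a.define_singleton_method(:f1) { |x| 2 * x }
puts a.f1(5) # => 10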
stackexchange | llm_judgeable_groundtruth_similarity | 62847 |
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
According to this: "A nerve fibre cannot be fatigued, even if it is stimulated for a long time. This property of infatiguability is due to the absolute refractory period." How is the refractory period related to infatiguability? My attempt: if there were no absolute refractory period, then there would be no closing off of the Na+ entry into the cell, so there would be a temporary deficiency of Na+ at that location; now, if a new action potential arrived at that point, then, due to the lack of Na+, it would not be conducted any further. Is my conjecture correct?
Now provide the response and nothing else.
| One can imagine that each action potential causes a small amount of $\ce{Na+}$ to go inside the cell and a small amount of $\ce{K+}$ to go outside the cell, thus weakening the electrochemical gradient of both ions. If each action potential has (approximately) the same flux of $\ce{Na+}$ and $\ce{K+}$ , then a higher frequency of action potentials means more flux, and thus a quicker depletion of the electrochemical gradients. The absolute refractory period means there is a maximum firing frequency . Thus the Sodium-Potassium pump only needs to be able to "recharge" the maximal possible depletion of the ion gradients. If there were no absolute refractory period, then theoretically the action potential frequency could be faster than the Sodium-Potassium pump can keep up with. An abnormally fast firing rate could, in theory, deplete the potassium and sodium gradients and thus result in a fatigued neuron. Edited to add, in reference to the conjecture: absolute refractory periods are caused by sodium gate inactivation, thus no matter how much current one adds, the sodium gates will not open until sodium inactivation ends. However, over long periods of time (in seconds) and in the absence of an ionic pump, the lack of sodium (and potassium) gradients will cause fatigue. However, "fatigued" neurons may not act like one expects . | {} | {'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/62847', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/20758/']} | jdg_78910
stackexchange | llm_judgeable_groundtruth_similarity | 462958 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
So I had a circuit to analyse and I needed to find the equivalent resistor and then I arrived to a confusion. Are R1 and R3 in parallel?Here is the circuit. simulate this circuit – Schematic created using CircuitLab
Now provide the response and nothing else.
| Redrawing schematics is a great way to analyze circuits, but also an exercise in why schematics are drawn in particular ways — to more clearly communicate to other engineers. simulate this circuit – Schematic created using CircuitLab The rearrangement above should be a little more clear. If you trace a path from one terminal of the battery to the other, you can hopefully see that there are two paths (the split occurs at the junctions on either side of R1). Series means one-after-another current flow (like a series of events or a television serial). Parallel means that current flows through two or more components at the same time (proportional to the component values). Just as when you measure voltage, where the value depends on your reference point, components can be series or parallel depending on what you are comparing them to: You could say: R2 is in series with the voltage source, or R2 forms a series-parallel circuit with R1 and R3, or R1 and R3 are parallel with each other, or R1 and R3 are parallel with a voltage source and some resistance R2 | {} | {'log_upvote_score': 7, 'links': ['https://electronics.stackexchange.com/questions/462958', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/233930/']} | jdg_78911 |
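Following that description (R2 in series with the source, R1 and R3 in parallel with each other), the resistance seen by the battery works out to R_eq = R2 + (R1*R3)/(R1+R3), which is often a handy way to confirm that a redrawing has preserved the topology.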
stackexchange | llm_judgeable_groundtruth_similarity | 4231168 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The question is to find the complex Fourier coeff. of $ f(x)=\mathrm{e}^{\mathrm{e}^{\mathrm{i}x}}$ This leads to an integral $\int_{0}^{2\pi} \mathrm{e}^{\mathrm{e}^{\mathrm{i}x}-\mathrm{i}kx} \,dx$ ,which I have no idea how to solve.
Now provide the response and nothing else.
| (incomplete answer, we have the result up to a constant coefficient) We can do some work with integration by parts. Let $$u_k=\int_0^{2\pi} \exp(\exp(ix)-kix)\,\mathrm dx=\int_0^{2\pi} \exp(\exp(ix))\exp(-kix)\,\mathrm dx$$ Then, integrating by parts for $k>0$ , (differentiating the first factor in the integrand), we get: $$u_k=\left[\exp(\exp ix)\frac{\exp(-ikx)}{-ik}\right]_0^{2\pi}+\frac1{ki}\int_0^{2\pi}i\exp(ix)\exp(\exp ix)\exp(-ikx)\,\mathrm dx$$ The first term is zero and the second one simplifies to: $$u_{k}=\frac1{k}\int_0^{2\pi}\exp(\exp ix)\exp(-i(k-1)x)\,\mathrm dx=\frac 1ku_{k-1}$$ Then by recurrence, for all $k\ge0$ , $$u_k=\frac{u_0}{k!}$$ It remains to find out the value of $u_0$ . Maple tells me it's $2\pi$ , but it's not a proof. Actually, the final step is already here on math.se: Integral $\int_0^{2π} e^{e^{ix}} dx$ For the negative coefficients, let, for $k>0$ , $v_k=u_{-k}$ , and note that the recurrence formula above is still valid (it's not for $k=0$ only), i.e. $$v_{k+1}=kv_k$$ But now, as $v_k$ is obviously bounded, we must have $v_k=0$ for all $k>0$ . That is, for all $k<0$ , $u_k=0$ . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4231168', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/784998/']} | jdg_78912 |
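Putting the pieces together (taking $u_0=2\pi$ as stated): $u_k = 2\pi/k!$ for $k\ge 0$ and $u_k = 0$ for $k<0$, so the complex Fourier coefficients are $c_k = u_k/(2\pi) = 1/k!$ for $k \ge 0$, and $$\mathrm{e}^{\mathrm{e}^{\mathrm{i}x}} = \sum_{k=0}^{\infty}\frac{\mathrm{e}^{\mathrm{i}kx}}{k!},$$ which is just the exponential series evaluated at $\mathrm{e}^{\mathrm{i}x}$, a reassuring consistency check.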
stackexchange | llm_judgeable_groundtruth_similarity | 25758 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a relatively small and happy MySQL 5.1 database living on a FreeBSD 7.1 64bit install. Because of incompatibility with our backup software and FreeBSD, I have to move the databases to a Linux server. Since we run Debian on everything else here I'm going to settle for that. The big question is if I should run 32bit or 64bit. It's a virtualized enviroment, so hardware doesnt play a role in this. Will I get any performance gain (or loss) by using 64bit linux+64bit MySQL with little memory assigned (lets say 512M to be extreme)? Should I just use 32bit until the day I need to assign more than 2GB RAM to MySQL (PAE does not help, 32bit MySQL can not use more than 2GB)?
Now provide the response and nothing else.
| You already answered the question yourself. If you are 100% positive that you will not grow beyond 2GB of RAM usage in the current hardware lifecycle, you can stay with 32bit; when the time comes to replace your hardware you will have to reconsider anyway. Given the current hardware market I don't see a real reason not to use 64 bit: it will be the safe side in case of unforeseen growth, and that is true for cache as well as real data size. After all, a full migration will take somewhat longer than just taking the box down and adding some RAM. A drawback could be that pointers on a 64bit system use about 30% more memory, but that won't really hurt you if you stay on the small side of memory usage; on the other hand, if you grow, you will be able to grow more quickly (at least as long as you can stay on the same hardware box). | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/25758', 'https://serverfault.com', 'https://serverfault.com/users/9140/']} | jdg_78913
stackexchange | llm_judgeable_groundtruth_similarity | 23698827 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a custom permalink structure that will allow me to accomplish the following. I have a custom post type called "projects". I have a custom taxonomy called "project-category" that is assigned to the CPT "projects". I want my permalink structure to look like this: projects/category/project-name or /%custom-post-type%/%custom-taxonomy%/%post-name%/ I've been able to successfully use /%category%/ in permalinks for normal, out-of-the-box WP posts, but not for CPTs. How would creating such a permalink structure affect the URLs or other pages? Is it possible to define a custom permalink structure and restrict it to a single CPT? Thanks
Now provide the response and nothing else.
| Lucky for you, I just had to do this for a client project. I used this answer on the WordPress Stackexchange as a guide: /** * Tell WordPress how to interpret our project URL structure * * @param array $rules Existing rewrite rules * @return array */function so23698827_add_rewrite_rules( $rules ) { $new = array(); $new['projects/([^/]+)/(.+)/?$'] = 'index.php?cpt_project=$matches[2]'; $new['projects/(.+)/?$'] = 'index.php?cpt_project_category=$matches[1]'; return array_merge( $new, $rules ); // Ensure our rules come first}add_filter( 'rewrite_rules_array', 'so23698827_add_rewrite_rules' );/** * Handle the '%project_category%' URL placeholder * * @param str $link The link to the post * @param WP_Post object $post The post object * @return str */function so23698827_filter_post_type_link( $link, $post ) { if ( $post->post_type == 'cpt_project' ) { if ( $cats = get_the_terms( $post->ID, 'cpt_project_category' ) ) { $link = str_replace( '%project_category%', current( $cats )->slug, $link ); } } return $link;}add_filter( 'post_type_link', 'so23698827_filter_post_type_link', 10, 2 ); When registering the custom post type and taxonomy, be sure to use the following settings: // Used for registering cpt_project custom post type$post_type_args = array( 'rewrite' => array( 'slug' => 'projects/%project_category%', 'with_front' => true ));// Some of the args being passed to register_taxonomy() for 'cpt_project_category'$taxonomy_args = array( 'rewrite' => array( 'slug' => 'projects', 'with_front' => true )); Of course, be sure to flush rewrite rules when you're done. Good luck! | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23698827', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1899240/']} | jdg_78914 |
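A small sketch of that last step (flushing once, for example on plugin activation, rather than on every request; the hook callback name here is made up in the same style as the answer's helpers):
// Flush the rewrite rules a single time when the plugin is activated.
register_activation_hook( __FILE__, 'so23698827_flush_rules' );
function so23698827_flush_rules() {
    flush_rewrite_rules();
}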
stackexchange | llm_judgeable_groundtruth_similarity | 28683 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The word problem (from wikipedia): given a semi-Thue system T := (Σ,R) and two words u and v, can u be transformed into v by applying rules from R? This problem is undecidable, but with a certain restriction it is decidable. The restriction: all the rules in R are of the form A->B where A and B are strings of the same length. What is the computational complexity of this problem?
Now provide the response and nothing else.
| The problem is at least NP-hard. Indeed, it is at least PSPACE-hard. The reason the original semi-Thue rewrite system is undecidable is thatit reduces the halting problem. Given any Turing machineprogram $e$ and input $x$, one sets up a rewrite system actingon strings that code information about the Turingcomputation. Thus, the string should display the contentsof the tape, the head position within that information andthe current state. The rewrite rules correspond tothe update procedure of the program $e$ as the computationproceeds. For the most part, these rewrite rules arelength-preserving. If the head is in the middle of thecurrently stored information of the tape, then it moves oneway or the other and the tape is updated, and changing thisinformation does not make the representing string any longer. One exception to this, however, is when the head moves off toeither end of the represented tape, in effect using more tape for the first time.In this case, the rewrite transformation rules will have in effect to add anextra symbol to represent that new cell that was justencountered. (Finally, the rewrite rules should include some rules that propagate a halting configuration to some informative output string.) My main observation is now that, therefore, if we know inadvance how much space the computation will require, thenwe can set up the rewrite rules with length-preserving transformations, so that the exceptional case is not needed. Specifically, suppose that we have an NP algorithm $e$ withknown polynomial bound $p$ and input $x$. Thus, $x$ isaccepted if and only if there is some $y$ such that $e$accepts $(x,y)$, and this computation will in any event complete in time $p(|x|)$. Let $u$ be a stringrepresenting an initial Turing machine set up with $x$ onthe input tape, $0$'s on the work tape and wildcard symbolson the witness tape, where the length of these tapes asrepresented in $u$ is $p(|x|)$. Now, produce a semi-Thue rewrite systemwhose rules first of all allow the wildcard symbols toassume any specific values (this will produce a potentialwitness $y$ on the witness tape). Next, the systemalso has rewrite rules as above carrying out the instructions ofprogram $e$ in the manner of the paragraph above. These are very local length-preserving rewrite transformation rules thatcorrespond to the operation of $e$, and each rule has to look at only a small portion of the represented information, since the Turing machine operation is completely local. Since we know that thecomputation will end before the ends of the string are met,we do not need the extra non-length-preserving rules thatadd extra symbols corresponding to the head moving off therepresented portion of the tape, since we know this will not happen. Finally, add rules thathave the effect that if the accept state is realized,then this information is simply copied to every symbol. Thus, I claim the original input $x$ is accepted by $e$(with respect to some unknown $y$) if and only if this semi-Thuerewrite system transforms $u$ to the all accept string. Thus, Ihave reduced the given NP problem to your restricted semi-Thueproblem. The reduction is polynomial time, since we canwrite down the transformation rules I described above in polynomial time from$e$ and $x$. And so your problem is at least NP-hard. A similar argument works with PSPACE, without needing any wildcards, so it is also PSPACE hard. 
It isn't clear to me, however, whether your problem is actually itself in PSPACE, since although the transformed strings themselves don't take much space, we have somehow to keep track of all possible ways to apply the rewrite rules, and this would seem naively to take exponential space. Certainly your problem is in EXPSPACE, since we could make a list of all possible strings of length |u|, and then just check off which ones are accessible from u by iteratively applying the rules until we have computed the closure of u under those rules, and finally checking if v was obtained. (This argument also shows, crudely, that your problem is at worst double exponential time.) | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/28683', 'https://mathoverflow.net', 'https://mathoverflow.net/users/6886/']} | jdg_78915 |
stackexchange | llm_judgeable_groundtruth_similarity | 42275425 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was reading the TypeScript Deep Dive and I see that both let and const are block scoped, which is great. Obviously, a const cannot be changed (it is immutable). But why is ReSharper is encouraging me to change as many let s to const as I can? I'm assuming ReSharper thinks there is a performance gain in using const over let ? Is there a speed difference between const and let ? Is there a different reason to use const ? Take the following example: for (let i = 0; i < this.InputControl.ParentQuestions.length; i++) { const id = this.InputControl.ParentQuestions[i].ParentId; const value = this.InputControl.ParentQuestions[i].ParentRequiredAnswer; if (!formVals[id]) return false; if (formVals[id].toLowerCase() != value.toLowerCase()) return false;} Previously, I had let id and let value but ReSharper asked me to change them to const , which works, but why is it better in this case? Or in any case? I also found this question on SO , but it talks more about what let and const do, not so much why one is better than the other. It does say to use const as much as possible, but what benefit does that provide?
Now provide the response and nothing else.
| I agree with Giorgi that performance is not the main reason. A code analyzer could just as well determine that a variable declared with let is not ever reassigned and optimize it the same as if you had declared it with const . (Heck, linters have rules to detect this and suggest using const instead of let .) Yes, it does signal to the reader that you're not going to assign to the variable. The benefit of const , over putting a comment saying the same thing, is mainly that const is a standard way of signalling it. Being standard, it transmits the information more readily than custom comments. (Also, a comment could be wrong but const won't let you be wrong.) I don't think this is the main benefit though. The "principle of least privilege" is often invoked in conjunction with const , but why should I care about the least privilege? Because it helps with early detection of coding mistakes . Consider the following code: function findSomethingInDocument(document) { let tree = getTreeFromDocument(document); // We get some sort of tree structure. let found; for (const child of tree.children) { if (child.someFlag) { tree = child; // Oops I meant found = child :( break; } } return found;} In this code, I typed tree = child when I meant to type found = child . Yes, the bug can be found in testing. But why wait for testing? I never meant tree to be changed. If I had marked it const then I would have immediately learned the error because the compiler would informed me of it. I would not have to wait for testing. The code above is fairly simple but imagine a more complicated algorithm that uses more variables. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42275425', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2994619/']} | jdg_78916 |
stackexchange | llm_judgeable_groundtruth_similarity | 5508447 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to implement and test an algorithm with a 2^n complexity. I have been trying to find one for a while. If there is any way I can achieve this by implementation -- with an exact complexity of 2^n -- that would be optimal. If anyone knows of a location where I can find an example, or could help me implement one, that would be awesome :-). The basic operation can be anything, but a single statement like i++; would be best.
Now provide the response and nothing else.
| Generate all subsets of a set with n elements. Added.The simplest way of generating all the subsets of S = {a0, a1, ..., an-1} is probably to translate between the binary representation of the rank and the subset. Take S = {a0, a1, a2}. rank binary subset0 000 {} 1 001 {a0}2 010 {a1}3 011 {a0, a1}4 100 {a2}5 101 {a0, a2}6 110 {a1, a2}7 111 {a0, a1, a2} So, a 1 in the binary means the corresponding element is in the subset. A 0 means the element is not in the subset. But you should also lookup Gray code. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5508447', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/654928/']} | jdg_78917 |
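A short sketch of the rank-to-subset translation in the question's language; the outer loop body runs exactly 2^n times for a set of n elements:
using System;
using System.Collections.Generic;

class Subsets
{
    static void Main()
    {
        string[] set = { "a0", "a1", "a2" };
        int n = set.Length;
        for (int rank = 0; rank < (1 << n); rank++)          // 2^n iterations
        {
            var subset = new List<string>();
            for (int i = 0; i < n; i++)
                if ((rank & (1 << i)) != 0)                  // bit i set => element i is in the subset
                    subset.Add(set[i]);
            Console.WriteLine(rank + ": {" + string.Join(", ", subset) + "}");
        }
    }
}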
stackexchange | llm_judgeable_groundtruth_similarity | 5142821 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following code, public class StartUp { public StartUp(String[] test){} public static void main(String[] args) throws Exception{ Constructor cd = StartUp.class.getConstructor(String[].class); System.out.println(cd.newInstance(new String[]{}).toString()); }} What's wrong with it? I get the following Exception: Exception in thread "main" java.lang.IllegalArgumentException: wrong number of arguments at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at com.test.StartUp.main(StartUp.java:10)
Now provide the response and nothing else.
| Your String[] is being implicitly converted to Object[] and taken as an empty array of arguments, instead of as a single argument which is an empty array. Try this: Object arg = new String[0];System.out.println(cd.newInstance(arg).toString()); or System.out.println(cd.newInstance((Object) new String[0]).toString()); or even avoid the compiler having to create the array for you at all: System.out.println(cd.newInstance(new Object[] { new String[0] }).toString()); Basically this is a mixture of varargs handling and array covariance :( | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5142821', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/109112/']} | jdg_78918
stackexchange | llm_judgeable_groundtruth_similarity | 576304 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to evaluate in a closed form integrals containing a squared hypergeometric function, like in this example?$$\begin{align}S&=\int_0^1{_2F_1}\left(-\frac{1}{4},\frac{5}{4};\,1;\,\frac{x}{2}\right)^2dx\\\vphantom{=}\\&=\frac{1}{4\pi}\int_0^1\left(\sum_{n=0}^\infty\frac{4n+1}{8^n}\cdot\frac{\Gamma\left(2n-\frac{1}{2}\right)}{\Gamma(n+1)^2}\cdot x^n\right)^2dx\end{align}$$It is approximately$$S\approx0.8263551866500213413164525287...$$
Now provide the response and nothing else.
| Your integral has an elementary closed form, that was correctly stated by Cleo in her answer without proof:$$S=\int_0^1\left({_2F_1}\!\left(-\tfrac14,\tfrac54;\,1;\,\tfrac{x}2\right)\right)^2dx=\frac{8\sqrt2+4\ln\left(\sqrt2-1\right)}{3\pi}.\tag1$$ Proof: Using DLMF 14.3.6 we can express the hypergeometric function in the integrand as the Legendre function of the $1^{st}$ kind (also known as the Ferrers function of the $1^{st}$ kind ) with fractional index:$${_2F_1}\!\left(-\tfrac14,\tfrac54;\,1;\,\tfrac{x}2\right)=P_{\small1/4}(1-x).\tag2$$Now the integral can be written as$$S=\int_0^1\left(P_{\small1/4}(1-x)\right)^2dx=\int_0^1\left(P_{\small1/4}(x)\right)^2dx.\tag3$$To evaluate it, we use formula 7.113 on page 769 in Gradshteyn & Ryzhyk :$$\int_0^1P_\nu(x)\,P_\sigma(x)\,dx=\\\frac{\frac{\Gamma\left(\frac12+\frac\nu2\right)\,\Gamma\left(1+\frac\sigma2\right)}{\Gamma\left(\frac12+\frac\sigma2\right)\,\Gamma\left(1+\frac\nu2\right)}\sin\!\left(\frac{\pi\sigma}2\right)\cos\!\left(\frac{\pi\nu}2\right)-\frac{\Gamma\left(\frac12+\frac\sigma2\right)\,\Gamma\left(1+\frac\nu2\right)}{\Gamma\left(\frac12+\frac\nu2\right)\,\Gamma\left(1+\frac\sigma2\right)}\sin\!\left(\frac{\pi\nu}2\right)\cos\!\left(\frac{\pi\sigma}2\right)}{\frac\pi2(\sigma-\nu)(\sigma+\nu+1)}.\tag4$$Note that in our case $\nu=\sigma=\frac14$, so we cannot use this formula directly because of the term $(\sigma-\nu)$ in the denominator. Instead, we let $\nu=\frac14$ and find the limit for $\sigma\to\frac14$:$$S=\lim\limits_{\sigma\to{\small1/4}}\int_0^1P_{\small1/4}(x)\,P_\sigma(x)\,dx=\\\lim\limits_{\sigma\to{\small1/4}}\frac{\frac{\Gamma\left(\frac58\right)\,\Gamma\left(1+\frac\sigma2\right)}{\Gamma\left(\frac12+\frac\sigma2\right)\,\Gamma\left(\frac98\right)}\sin\!\left(\frac{\pi\sigma}2\right)\cos\!\left(\frac\pi8\right)-\frac{\Gamma\left(\frac12+\frac\sigma2\right)\,\Gamma\left(\frac98\right)}{\Gamma\left(\frac58\right)\,\Gamma\left(1+\frac\sigma2\right)}\sin\!\left(\frac\pi8\right)\cos\!\left(\frac{\pi\sigma}2\right)}{\frac\pi2(\sigma-\frac14)(\sigma+\frac54)}.\tag5$$To evaluate the limit, we use l'Hôpital's rule . This gives quite a big expression that I will not copy here. It contains values of the gamma and digamma functions at rational points, that could be simplified to elementary terms using the Gauss digamma theorem and identities given in the MathWorld and in the famous Vidūnas paper , yielding the desired result $(1)$. Indeed, we can have a more general result:$$\int_0^1\left({_2F_1}\!\left(-\nu,\nu+1;\,1;\,\tfrac x2\right)\right)^2dx=\int_0^1\left(P_\nu\left(x\right)\right)^2dx=\\\frac{1+\!\left[\psi_0\!\left(1+\frac\nu2\right)-\psi_0\!\left(\frac12+\frac\nu2\right)\right]\frac{\sin(\pi\nu)}\pi}{1+2\nu}.\tag6$$ | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/576304', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/76458/']} | jdg_78919 |
stackexchange | llm_judgeable_groundtruth_similarity | 140990 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using nginx and NginxHttpUpstreamModule for load balancing. My config is very simple: upstream lb { server 127.0.0.1:8081; server 127.0.0.1:8082;}server { listen 89; server_name localhost; location / { proxy_pass http://lb; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; }} But with this config, when one of the two backend servers is down, nginx still routes requests to it, and that results in a timeout half of the time :( Is there any solution to make nginx automatically route the request to another server when it detects a downed server? Thank you.
Now provide the response and nothing else.
| I think that it's because nginx is not detecting that the upstream is down because it's on the same machine. The options that you're looking for are: proxy_next_upstream and proxy_connect_timeout . Try this: location / { proxy_pass http://lb; proxy_redirect off; proxy_next_upstream error timeout invalid_header http_500; proxy_connect_timeout 2; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;} | {} | {'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/140990', 'https://serverfault.com', 'https://serverfault.com/users/27537/']} | jdg_78920 |
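A complementary sketch: the upstream block itself can also be tuned so a failed backend is taken out of rotation sooner (the parameter values below are arbitrary examples):
upstream lb {
    server 127.0.0.1:8081 max_fails=1 fail_timeout=10s;
    server 127.0.0.1:8082 max_fails=1 fail_timeout=10s;
}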
stackexchange | llm_judgeable_groundtruth_similarity | 32869 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This story by Forbes points out that the only bank chiefs imprisoned over the financial crisis that engulfed the Western world in 2007/8 were from Iceland: This is where the world's only bank chiefs imprisoned in connection with the 2008 financial crisis are serving their terms. Kviabryggja is home to Sigudur Einarsonn, Kaupthing Bank's onetime chairman, and Hriedar Mar Siggurdson, the bank's former chief executive officer, who were convicted of market manipulation and fraud shortly before the collapse of what was Iceland's No.1 lender ... in sentencing these financiers to serve terms of up to 5 & 1/2 years the Icelandic courts have done something that authorities in the two great banking capitals, New York and London, haven't: They've made bankers answer for the crimes of the crash . The emphasis is in the original article. Given the experience that New York and London have with finance, one would have expected this to be the reverse: that they would have been the first to locate, charge and imprison those responsible. When was the last time that the UK & US had the political will to prosecute serious and systemic financial wrong-doing at the highest levels that resulted in prison sentences and not just fines? How many were put in prison, for how long, and from what institutions?
Now provide the response and nothing else.
| A person has to be convicted of committing a crime before they can be sent to prison. "Wrongdoing" is not enough, even when combined with arrogance, incompetence and consequences for billions of other people. So Fred Goodwin may have lost £24 billion, but that doesn't by itself mean his acts were criminal. In the UK, criminal proceedings were brought against those bankers involved in the LIBOR fixing case, 5 of whom were convicted and sent to prison. Two more bankers were convicted of rigging Euribor. In the US, SIGTARP investigations had, as of Nov 2017, put 222 former bankers in prison . That number will have risen since then. However this could include some in the wider investigation, and TARP was principally about misuse of bailout funds after the crash, rather than criminal activity leading up to the crash. | {} | {'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/32869', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/3164/']} | jdg_78921
stackexchange | llm_judgeable_groundtruth_similarity | 11808539 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just stumbled upon the term monoidal parsing from a slide named "Introduction to Monoids" by Edward Kmett . The slide uses haskell throughout. Now when searching for the term I found nothing but a very few mentions of it and the most from the same author. So I think this term could be explained here. So, is monoidal parsing something that is interesting and new? Is it appearing anywhere except for in the slide I linked to? And most importantly what is it? The slide itself didn't seem to give a definition nor highlight it that much.
Now provide the response and nothing else.
| I'll start with how parsers usually work. Broadly, a parser takes input tokens in sequential order. At some point (typically after all the tokens are exhausted), the parser returns output. One downside to this model is that it's inherently sequential: because the parser operates on a sequence of tokens in order, it's not obvious how to parallelize the process. However, parsing can be parallelized if we have access to an operation capable of combining partial parse results into a single result. For example, if our language is representable with a context-free grammar, then we could parse every top-level definition separately and in parallel, then assemble the pieces later using the combining operation. Monoidal parsing simply means that the parser has access to a suitable combining function. A monoid is a structure with a zero element and a binary associative operator. For example, lists form a monoid where the zero is the empty list and the associative operator is concatenation. Remember that associativity means (a++b)++c == a++(b++c) . It happens that this is the necessary property to ensure that we're able to recombine parse results in a sensible way. The exact order the sub-parses are recombined doesn't matter, so long as each sub-parse is kept in the proper sequence location. Of course for actually writing a parallel parser, it's necessary to split up the chunks appropriately as well. If you want to parse top-level definitions in parallel, it's necessary to be able to recognize where that definition begins. This task is usually performed by the parser itself. As I recall, a large portion of his slides cover this topic. Splitting on top-level definitions is pretty coarse-grained; ideally our parser would be able to start from any arbitrary token, then make sense out of the pieces later when the monoid operator is applied. Unfortunately I can't say if "monoidal parsing" is new, as I'm not particularly familiar with the literature. However I suspect that any models or algorithms for parallel parsing involve a monoid structure, even if it isn't named explicitly. It's been well-known for some time that monoids are suitable for parallelizing problems, so I wouldn't be surprised if the technique is common among parser researchers. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11808539', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/621449/']} | jdg_78922 |
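To make the combining operation concrete, here is a small self-contained Haskell sketch of the classic bracket-matching monoid (an illustration, not taken from the slides): each chunk of input reduces to a pair (unmatched closers, unmatched openers), and because the combination is associative the input can be split into chunks, scanned independently, and recombined in any grouping.
data Brackets = Brackets !Int !Int deriving (Eq, Show)

instance Semigroup Brackets where
  Brackets c1 o1 <> Brackets c2 o2
    | o1 >= c2  = Brackets c1 (o1 - c2 + o2)        -- left-over opens absorb the right chunk's closes
    | otherwise = Brackets (c1 + c2 - o1) o2

instance Monoid Brackets where
  mempty = Brackets 0 0

scanChar :: Char -> Brackets
scanChar '(' = Brackets 0 1
scanChar ')' = Brackets 1 0
scanChar _   = Brackets 0 0

balanced :: String -> Bool
balanced s = foldMap scanChar s == Brackets 0 0     -- e.g. balanced "(()())" is True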
stackexchange | llm_judgeable_groundtruth_similarity | 41946327 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a function to convert array to Map using Java 8 Stream. Here is what I wanted public static <K, V> Map<K, V> toMap(Object... entries) { // Requirements: // entries must be K1, V1, K2, V2, .... ( even length ) if (entries.length % 2 == 1) { throw new IllegalArgumentException("Invalid entries"); } // TODO Arrays.stream(entries).????} Valid usages Map<String, Integer> map1 = toMap("k1", 1, "k2", 2);Map<String, String> map2 = toMap("k1", "v1", "k2", "v2", "k3", "v3"); Invalid usages Map<String, Integer> map1 = toMap("k1", 1, "k2", 2, "k3"); Any helps? Thanks!
Now provide the response and nothing else.
| You may use public static <K, V> Map<K, V> toMap(Object... entries) { if(entries.length % 2 == 1) throw new IllegalArgumentException("Invalid entries"); return (Map<K, V>)IntStream.range(0, entries.length/2).map(i -> i*2) .collect(HashMap::new, (m,i)->m.put(entries[i], entries[i+1]), Map::putAll);} but it will give you a (founded) unchecked warning. Your method can’t hold the promise to return a correctly typed Map<K, V> for an array of arbitrary objects and, even worse, it will not fail with an exception, but silently return an inconsistent map if you pass in objects of the wrong type. A cleaner, commonly used, solution is public static <K, V> Map<K, V> toMap( Class<K> keyType, Class<V> valueType, Object... entries) { if(entries.length % 2 == 1) throw new IllegalArgumentException("Invalid entries"); return IntStream.range(0, entries.length/2).map(i -> i*2) .collect(HashMap::new, (m,i)->m.put(keyType.cast(entries[i]), valueType.cast(entries[i+1])), Map::putAll);} This can be compiled without a warning, as the correctness will be checked at runtime. The calling code has to be adapted: Map<String, Integer> map1 = toMap(String.class, Integer.class, "k1", 1, "k2", 2);Map<String, String> map2 = toMap( String.class, String.class, "k1", "v1", "k2", "v2", "k3", "v3"); Besides the need to specify the actual types as class literals, it has the disadvantage of not supporting generic key or value types (as they can’t be expressed as Class ) and still having no compile-time safety, only a runtime check. It’s worth looking at Java 9 . There, you will be able to do: Map<String, Integer> map1 = Map.of("k1", 1, "k2", 2);Map<String, String> map2 = Map.of("k1", "v1", "k2", "v2", "k3", "v3"); This will create an immutable map of an unspecified type, rather than a HashMap , but the interesting point is the API. There is a method <K,V> Map.Entry<K,V> entry(K k, V v) which can be combined with <K,V> Map<K,V> ofEntries(Map.Entry<? extends K,? extends V>... entries) to create a map of a variable length (varargs are still limited to 255 parameters, though). You can implement a similar thing: public static <K,V> Map.Entry<K,V> entry(K k, V v) { return new AbstractMap.SimpleImmutableEntry<>(k, v);}public static <K,V> Map<K,V> ofEntries(Map.Entry<? extends K,? extends V>... entries) { return Arrays.stream(entries) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));} The convenience method(s) of are implemented the only way, this can be done with type safety: as overloaded methods with different numbers of arguments, like public static <K,V> Map<K,V> of() { return new HashMap<>();// or Collections.emptyMap() to create immutable maps}static <K,V> Map<K,V> of(K k1, V v1) { return ofEntries(entry(k1, v1));}static <K,V> Map<K,V> of(K k1, V v1, K k2, V v2) { return ofEntries(entry(k1, v1), entry(k2, v2));}static <K,V> Map<K,V> of(K k1, V v1, K k2, V v2, K k3, V v3) { return ofEntries(entry(k1, v1), entry(k2, v2), entry(k3, v3));}static <K,V> Map<K,V> of(K k1, V v1, K k2, V v2, K k3, V v3, K k4, V v4) { return ofEntries(entry(k1, v1), entry(k2, v2), entry(k3, v3), entry(k4, v4));} static <K,V> Map<K,V> of(K k1, V v1, K k2, V v2, K k3, V v3, K k4, V v4, K k5, V v5) { return ofEntries(entry(k1, v1), entry(k2, v2), entry(k3, v3), entry(k4, v4));} (Java 9 makes the cut at ten mappings, if you have more, you have to use the ofEntries(entry(k1, v1), …) variant). If you follow this pattern, you should keep your toMap name or use just map , rather than calling at “ of ”, as you are not writing the Map interface. 
These overloads might not look very elegant, but they solve all problems. You can write the code just as in your question, without specifying Class objects, but gain compile-time type safety and even rejection of attempts to call it with an odd number of arguments. You have to make a cut at a certain number of parameters, but, as already noted, even varargs do not support unlimited parameters. And the ofEntries(entry(…), …) form isn’t so bad for larger maps. The collector Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue) returns an unspecified map type, which might even be immutable (though it’s a HashMap in the current version). If you want to have a guaranty that a HashMap instance is returned, you have to use Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (v1,v2)->{throw new IllegalArgumentException("duplicate key");}, HashMap::new) instead. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41946327', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3018377/']} | jdg_78923 |
stackexchange | llm_judgeable_groundtruth_similarity | 21225 |
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is Genesis 1 structured so that there is a general category and then what will fill it, which would be a type of Hebrew poem of parallel ideas? In this case day one (light) would relate to day four (sun, moon, and stars), while day two (sea and sky) would relate to day five (birds and sea creatures), and day three (the land & vegetation) with day six (animals and humanity). Does this indicate that it is a type of creation poem or song?
Now provide the response and nothing else.
| Not poetry, but Prologue Gordon J. Wenham notes in The Word Biblical Commentary Vol. 1: Genesis 1-15 on page 46 ...[Genesis 1:1–2:3] stands apart from the narratives that follow in style and content and makes it an overture to the whole work. On page 50 he continues: Extrabiblical creation stories from the ancient Near East are usually poetic, but Gen 1 is not typical Hebrew poetry. Indeed, some writers endeavoring to underline that Gen 1 is pure priestly theology insist that it is not poetry at all. There is no "hymnic element in the language" (von Rad, 47). On the other hand, Gen 1 is not normal Hebrew prose either; its syntax is distinctively different from narrative prose. Cassuto (1:11 [1961]), Loretz (1975) and Kselman (1978) have all pointed to poetic bicola or tricola in Gen 1, while admitting that most of the material is prose. It is possible that these poetic fragments go back to an earlier form of the creation account, though, as Cassuto observes, "it is simpler to suppose … the special importance of the subject led to an exaltation of style approaching the level of poetry" (1:11). Gen 1 is unique in the Old Testament. It invites comparison with the psalms that praise God‘s work in creation (e.g., 8, 136, 148) or with passages such as Prov 8:22–31 or Job 38 that reflect on the mystery of God‘s creativity. It is indeed a great hymn, setting out majestically the omnipotence of the creator, but it surpasses these other passages in the scope and comprehensiveness of vision. In that it is elevated prose, not pure poetry, it seems unlikely that it was used as a song of praise as the psalms were. Rather, in its present form it is a careful literary composition introducing the succeeding narratives. (emphasis added) And indeed, scholars like McBride 1 and Waltke 2 also regard Genesis 1:1-2:3 as prologue and The opening prologue of Genesis has been compared to the opening prologue of John by several scholars. Subsequently, it is pretty clear that this writing was intended to act as a prologue or overture to the book of Genesis. Correspondence of days is a framework While the correspondence of days is present in the structure of Genesis 1, this represents a literary framework , but it does not equate to poetry itself. There are some poetry-like elements within the text however. Whenham notes some of these on page 50 of his commentary: 1:1–2:3 form the first section of Genesis; the second starts with 2:4. 2:1–3 echoes 1:1 by introducing the same phrases but in reverse order: "he created," "God," "heavens and earth" reappear as "heavens and earth" (2:1) "God" (2:2), "created" (2:3). This chiastic pattern brings the section to a neat close which is reinforced by the inclusion "God created" linking 1:1 and 2:3. The correspondence of the first paragraph, 1:1–2, with 2:1–3 is underlined by the number of Hebrew words in both being multiples of 7. 1:1 consists of 7 words, 1:2 of 14 (7 x 2) words, 2:1–3 of 35 (7 x 5) words. The number seven dominates this opening chapter in a strange way, not only in the number of words in a particular section but in the number of times a specific word or phrase recurs. For example, "God" is mentioned 35 times, "earth" 21 times, "heaven/firmament" 21 times, while the phrases "and it was so" and "God saw that it was good" occur 7 times. Conclusion Genesis 1:1 is almost, but not quite a poem or hymn. As such it rises to the level elevated or high prose and acts as a prologue to the book of Genesis. 
It also repeatedly invites polemic comparison to other creation myths in Mesopotamia. 1 Dean McBride Jr. "Divine Protocol: Genesis 1:1-2:3 as Prologue to the Pentateuch" 2 Bruce K. Waltke Genesis: A Commentary | {} | {'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/21225', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/-1/']} | jdg_78924 |
stackexchange | llm_judgeable_groundtruth_similarity | 9828762 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can that be done with no while loops? static void Main(string[] args){ Console.WriteLine("Please enter a number"); int number = Convert.ToInt32(Console.ReadLine()); Console.WriteLine(" #" + Fibonacci(number));}public static int Fibonacci(int number){ if (number <= 1) { return 1; } else { return Fibonacci(number - 2) + Fibonacci(number - 1); }} I can't even add a Console.WriteLine in the body of base case since it gets executed [number] number of times; Not sure how to do this without loops...
Now provide the response and nothing else.
| static void Main(string[] args){ Console.WriteLine("Please enter a number"); int number = Convert.ToInt32(Console.ReadLine()); Fibonacci(0, 1, 1, number);} public static void Fibonacci(int a, int b, int counter, int number){ Console.WriteLine(a); if (counter < number) Fibonacci(b, a+b, counter+1, number);} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9828762', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/419730/']} | jdg_78925 |
stackexchange | llm_judgeable_groundtruth_similarity | 42340073 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Clang 3.9 extremely reuses memory used by temporaries. This code is UB (simplified code): template <class T>class my_optional{public: bool has{ false }; T value; const T& get_or_default(const T& def) { return has ? value : def; }};void use(const std::string& s){ // ...}int main(){ my_optional<std::string> m; // ... const std::string& s = m.get_or_default("default value"); use(s); // s is dangling if default returned} We have tons of code something like above ( my_optional is just a simple example to illustrate it). Because of UB all clang compiler since 3.9 starts to reuse this memory, and it is lawful behavior. The question is: how to detect such dangling references at compile time or with something like sanitizer at runtime? No clang sanitizer can detect them. Upd. Please do not answer: "use std::optional ". Read carefully: question is NOT about it. Upd2. Please do not answer: "your code design is bad". Read carefully: question is NOT about code design.
Now provide the response and nothing else.
| You can detect misuses of this particular API by adding an additional overload: const T& get_or_default(T&& rvalue) = delete; If the argument given to get_or_default is a true rvalue, it will be chosen instead, so compilation will fail. As for detecting such errors at runtime, try using Clang's AddressSanitizer with use-after-return ( ASAN_OPTIONS=detect_stack_use_after_return=1 ) and/or use-after-scope ( -fsanitize-address-use-after-scope ) detection enabled. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42340073', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1555823/']} | jdg_78926 |
stackexchange | llm_judgeable_groundtruth_similarity | 11449232 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to assign multiple keys per value in a Python dictionary. One possible solution is to assign value to each key: dict = {'k1':'v1', 'k2':'v1', 'k3':'v1', 'k4':'v2'} but this is not memory efficient since my data file is > 2 GB. Otherwise you could make a dictionary of dictionary keys: key_dic = {'k1':'k1', 'k2':'k1', 'k3':'k1', 'k4':'k4'}dict = {'k1':'v1', 'k4':'v2'}main_key = key_dict['k2']value = dict[main_key] This is also very time and effort consuming because I have to go through whole dictionary/file twice. Is there any other easy and inbuilt Python solution? Note: my dictionary values are not simple string (as in the question 'v1', 'v2') rather complex objects (contains different other dictionary/list etc. and not possible to pickle them) Note: the question seems similar as How can I use both a key and an index for the same dictionary value? But I am not looking for ordered/indexed dictionary and I am looking for other efficient solutions (if any) other then the two mentioned in this question.
Now provide the response and nothing else.
| What type are the values?

    dict = {'k1': MyClass(1), 'k2': MyClass(1)}

will give duplicate value objects, but

    v1 = MyClass(1)
    dict = {'k1': v1, 'k2': v1}

results in both keys referring to the same actual object. In the original question, your values are strings: even though you're declaring the same string twice, I think they'll be interned to the same object in that case.

NB. if you're not sure whether you've ended up with duplicates, you can find out like so:

    if dict['k1'] is dict['k2']:
        print("good: k1 and k2 refer to the same instance")
    else:
        print("bad: k1 and k2 refer to different instances")

(is check thanks to J.F.Sebastian, replacing id()) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11449232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1099709/']} | jdg_78927
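Building on the aliasing idea above, a minimal sketch (the Record class and the key names are illustrative, not from the question) of how to store each heavy value once and point many keys at it:

    class Record:
        """Stand-in for the question's complex, unpicklable value objects."""
        def __init__(self, payload):
            self.payload = payload

    # Each large value is constructed exactly once...
    shared_v1 = Record({"nested": ["structures", "go", "here"]})
    shared_v2 = Record({"other": "data"})

    # ...and then aliased under every key that should resolve to it.
    lookup = {}
    for key in ("k1", "k2", "k3"):
        lookup[key] = shared_v1   # all three keys reference the SAME object
    lookup["k4"] = shared_v2

    # Only the references are duplicated, not the values themselves.
    assert lookup["k1"] is lookup["k3"]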
stackexchange | llm_judgeable_groundtruth_similarity | 17209283 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the purpose of the selector in the jQuery on() method? In what situation is $("#fish").on("click", ".button", function() { // something}); better than, say, $("#fish").find(".button").on("click", function() { // something}); ? I've looked on the jQuery documention and found: If selector is omitted or is null, the event handler is referred to as direct or directly-bound. The handler is called every time an event occurs on the selected elements, whether it occurs directly on the element or bubbles from a descendant (inner) element. When a selector is provided, the event handler is referred to as delegated. The handler is not called when the event occurs directly on the bound element, but only for descendants (inner elements) that match the selector. jQuery bubbles the event from the event target up to the element where the handler is attached (i.e., innermost to outermost element) and runs the handler for any elements along that path matching the selector. Am I correct in saying that the difference is that in my first example, the event is bound to the #fish , but only listens for events bubbling up from the .buttons , whereas in the second example the event is bound to each of the .buttons ? If so what are the implications of this? :S So if the event is bound to the #fish, when you click a .button, is there anyway to know which .button was clicked on? this will refer to the #fish rather than the .button, correct?
Now provide the response and nothing else.
| Example #1 will bind the event for dynamically created buttons, as long as they are a descendant of #fish . Any buttons created after the binding will have the click handler called. You can also bind to $(document) if you don't want select an ancestor element. Example #2 will only bind to the buttons that are in the DOM at the time of binding. Any dynamically created elements will not receive the handler. In both examples, this and $(this) will refer to the button that was clicked, not #fish . Have a look at this jsFiddle for a demo. You'll see in the console that this is referring to the buttons not #fish . $('#fish').on('click', '.button', function(){ console.log(this.id);}); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17209283', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2393790/']} | jdg_78928 |
stackexchange | llm_judgeable_groundtruth_similarity | 23886989 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My server runs many long-running notebooks, and I'd like to monitor the notebooks' memory. Is there a way to match a pid or process name to a notebook?
Now provide the response and nothing else.
| Since the question is about monitoring notebooks' memory, I've written a complete example showing the memory consumption of the running notebooks. It is based on the excellent @jcb91 answer and a few other answers ( 1 , 2 , 3 , 4 ). import jsonimport osimport os.pathimport posixpathimport subprocessimport urllib2import pandas as pdimport psutildef show_notebooks_table(host, port): """Show table with info about running jupyter notebooks. Args: host: host of the jupyter server. port: port of the jupyter server. Returns: DataFrame with rows corresponding to running notebooks and following columns: * index: notebook kernel id. * path: path to notebook file. * pid: pid of the notebook process. * memory: notebook memory consumption in percentage. """ notebooks = get_running_notebooks(host, port) prefix = long_substr([notebook['path'] for notebook in notebooks]) df = pd.DataFrame(notebooks) df = df.set_index('kernel_id') df.index.name = prefix df.path = df.path.apply(lambda x: x[len(prefix):]) df['pid'] = df.apply(lambda row: get_process_id(row.name), axis=1) # same notebook can be run in multiple processes df = expand_column(df, 'pid') df['memory'] = df.pid.apply(memory_usage_psutil) return df.sort_values('memory', ascending=False)def get_running_notebooks(host, port): """Get kernel ids and paths of the running notebooks. Args: host: host at which the notebook server is listening. E.g. 'localhost'. port: port at which the notebook server is listening. E.g. 8888. username: name of the user who runs the notebooks. Returns: list of dicts {kernel_id: notebook kernel id, path: path to notebook file}. """ # find which kernel corresponds to which notebook # by querying the notebook server api for sessions sessions_url = posixpath.join('http://%s:%d' % (host, port), 'api', 'sessions') response = urllib2.urlopen(sessions_url).read() res = json.loads(response) notebooks = [{'kernel_id': notebook['kernel']['id'], 'path': notebook['notebook']['path']} for notebook in res] return notebooksdef get_process_id(name): """Return process ids found by (partial) name or regex. Source: https://stackoverflow.com/a/44712205/304209. >>> get_process_id('kthreadd') [2] >>> get_process_id('watchdog') [10, 11, 16, 21, 26, 31, 36, 41, 46, 51, 56, 61] # ymmv >>> get_process_id('non-existent process') [] """ child = subprocess.Popen(['pgrep', '-f', name], stdout=subprocess.PIPE, shell=False) response = child.communicate()[0] return [int(pid) for pid in response.split()]def memory_usage_psutil(pid=None): """Get memory usage percentage by current process or by process specified by id, like in top. Source: https://stackoverflow.com/a/30014612/304209. Args: pid: pid of the process to analyze. If None, analyze the current process. Returns: memory usage of the process, in percentage like in top, values in [0, 100]. """ if pid is None: pid = os.getpid() process = psutil.Process(pid) return process.memory_percent()def long_substr(strings): """Find longest common substring in a list of strings. Source: https://stackoverflow.com/a/2894073/304209. Args: strings: list of strings. Returns: longest substring which is found in all of the strings. """ substr = '' if len(strings) > 1 and len(strings[0]) > 0: for i in range(len(strings[0])): for j in range(len(strings[0])-i+1): if j > len(substr) and all(strings[0][i:i+j] in x for x in strings): substr = strings[0][i:i+j] return substrdef expand_column(dataframe, column): """Transform iterable column values into multiple rows. Source: https://stackoverflow.com/a/27266225/304209. 
Args: dataframe: DataFrame to process. column: name of the column to expand. Returns: copy of the DataFrame with the following updates: * for rows where column contains only 1 value, keep them as is. * for rows where column contains a list of values, transform them into multiple rows, each of which contains one value from the list in column. """ tmp_df = dataframe.apply( lambda row: pd.Series(row[column]), axis=1).stack().reset_index(level=1, drop=True) tmp_df.name = column return dataframe.drop(column, axis=1).join(tmp_df) Here is an example output of show_notebooks_table('localhost', 8888) : | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23886989', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/245024/']} | jdg_78929 |
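If continuous monitoring is wanted rather than a one-off snapshot, the table function above can simply be polled; a minimal sketch (the interval, iteration count and host/port values are placeholders, and it assumes show_notebooks_table as defined above):

    import time

    def monitor(host='localhost', port=8888, interval_seconds=60, iterations=10):
        """Print the notebook memory table repeatedly, e.g. from a long-running cell or cron-like job."""
        for _ in range(iterations):
            df = show_notebooks_table(host, port)
            print(df[['path', 'pid', 'memory']])
            time.sleep(interval_seconds)

    # monitor()  # uncomment to run against a local notebook server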
stackexchange | llm_judgeable_groundtruth_similarity | 30622599 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Keycloak (version 1.0.4.Final) in JBOSS AS 7.1.1 server. The server is on Amazon AWS. I am able to start the jboss server with keycloak. i can see the keycloak default screen while hitting the URL - ServerIP:8080/auth But when i am clicking on the Administration Console link to go to the login screen. I am getting a page saying - HTTPS required The server is on AWS, changing to "ssl-required" : "none", in General Adapter Config has not helped. How to resolve this issue? Edit: I was not getting this issue in keycloak-1.2.0.Beta1 version.
Now provide the response and nothing else.
| I was running Keycloak inside a docker container; the Keycloak command line tool was available inside the container.

    docker exec -it {containerID} bash
    cd keycloak/bin
    ./kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin
    ./kcadm.sh update realms/master -s sslRequired=NONE

If the admin user is not created yet, the user can be created via this command:

    ./add-user-keycloak.sh --server http://ip_address_of_the_server:8080/admin --realm master --user admin --password adminPassword

Update: For the newer versions the file is available in the following path: /opt/jboss/keycloak/bin | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30622599', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1850925/']} | jdg_78930
stackexchange | llm_judgeable_groundtruth_similarity | 83468 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My indention is to have a script that updates all git submodules according to which branch is given. If there's no such branch for a submodule, master is used. This is what I have now: #!/bin/bash -xif [ -z $1 ]; then echo "Branch name required." exitfifunction pbranch { exists=`git show-ref refs/heads/$branch` if [ -z $exists ]; then branch="master" fi git co $branch git pull origin $branch}branch=$1git submodule foreach pbranch But when running this script, the error is thrown: oleq@pc ~/project> git-fetchmodules major+ '[' -z major ']'+ branch=major+ git submodule foreach pbranchEntering 'submodule'/usr/lib/git-core/git-submodule: 1: eval: pbranch: not foundStopping at 'submodule'; script returned non-zero status. My guess is that git submodule foreach utilizes eval (according to the documentation ), which I don't use correctly in this context. There are billions of examples of how to use this command with "inline callback" but I couldn't found a single one with the callback in form of the function. Any idea how to solve this?
Now provide the response and nothing else.
| I resolved my issue by putting the function body inside the quotes as a callback:

    #!/bin/bash
    if [ -z $1 ]; then
        echo "Branch name required."
        exit
    fi
    git submodule foreach "
        branch=$1;
        exists=\$(git show-ref refs/heads/\$branch | cut -d ' ' -f1);
        if [ -z \$exists ]; then branch='master'; fi;
        echo Checking branch \$branch for submodule \$name.;
        git fetch --all -p;
        git co \$branch;
        git reset --hard origin/\$branch;
    "

Note that variables like $1 are those from the script's namespace. The "escaped ones" like \$(bar) and \$branch are evaluated within "the callback". It was pretty easy. | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/83468', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/28030/']} | jdg_78931
stackexchange | llm_judgeable_groundtruth_similarity | 3742637 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been reading about Haskell and I'm having a hard time understanding how function definitions are handled in this language. Let's say I'm defining a sum function: let sum x y = x + y if I query Haskell for its type :t sum I get sum :: (Num a) => a -> a -> a What does it mean the => operator? Does it have anything to do with lambda expressions? That's how one signals that what is following the => operator is one, in C#. What does the a -> a -> a mean? By eye inspection on a number of different functions I've been trying out, it seems the initial a -> a are the arguments and the final -> a is the result of the sum function. If that is right, why not something as (a, a) -> a , which seems way more intuitive?
Now provide the response and nothing else.
| 0. The Haskell => has nothing to do with C#'s => . In Haskell an anonymous function is created with \x -> x * x Also, don't name the function sum because such a function already exists in Prelude. Let's call it plus from now on to avoid confusion. 1. Anyway, the => in Haskell provides a context to the type. For instance: show :: (Show a) => a -> String Here, The Show a => means a type must be an instance of the type class Show , which means a must be convertible to a string. Similarly, (Num a) => a -> a -> a means the a type must be an instance of the type class Num, which means a must be like a number. This puts a constraint on a so that show or plus won't accept some unsupported input, e.g. plus "56" "abc" . (String is not like a number.) A type class is similar to C#'s interface, or more specifically, an interface base-type constraint in generics . See the question Explain Type Classes in Haskell for more info. 2. a -> a -> a means a -> (a -> a) . Therefore, it is actually a unary function that returns another function. plus x = \y -> x + y This makes partial application (currying) very easy. Partial application is used a lot esp. when using higher order functions. For instance we could use map (plus 4) [1,2,3,4] to add 4 to every element of the list. In fact we could again use partial application to define: plusFourToList :: Num a => [a] -> [a]plusFourToList = map (plus 4) If a function is written in the form (a,b,c,...)->z by default, we would have to introduce a lot of lambdas: plusFourToList = \l -> map(\y -> plus(4,y), l) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3742637', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/130758/']} | jdg_78932 |
stackexchange | llm_judgeable_groundtruth_similarity | 21554127 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can already find the current location using latitude and longitude, but I would also like to be able to find the current location given a zip code. Here is what I have so far: .h #import <MapKit/MapKit.h>#import <UIKit/UIKit.h>#import <CoreLocation/CoreLocation.h>@interface ViewController : UIViewController<CLLocationManagerDelegate>{CLLocationManager *locationManager; CLLocation *currentLocation; IBOutlet UILabel *label1; IBOutlet UILabel *lable2;}@property (weak, nonatomic) IBOutlet MKMapView *myMapview;@property (weak, nonatomic) IBOutlet UILabel *label2;@property (weak, nonatomic) IBOutlet UILabel *lable1;@end .m #import "ViewController.h"@interface ViewController ()@end@implementation ViewController- (void)viewDidLoad{ [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib. locationManager = [[CLLocationManager alloc]init]; locationManager.delegate = self; locationManager.distanceFilter = kCLDistanceFilterNone; locationManager.desiredAccuracy = kCLLocationAccuracyBest; [locationManager startUpdatingLocation]; _myMapview.showsUserLocation = YES; [self->locationManager startUpdatingLocation]; CLLocation *location = [locationManager location]; CLLocationCoordinate2D coordinate = [location coordinate]; NSString *latitude = [NSString stringWithFormat:@"%f", coordinate.latitude]; NSString *longitude = [NSString stringWithFormat:@"%f", coordinate.longitude]; //NSLog(@”dLatitude : %@”, latitude); //NSLog(@”dLongitude : %@”,longitude); NSLog(@"MY HOME :%@", latitude); NSLog(@"MY HOME: %@ ", longitude);}#pragma mark CLLocationManager Delegate- (void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray *)locations{ currentLocation = [locations objectAtIndex:0]; [locationManager stopUpdatingLocation]; [self->locationManager stopUpdatingLocation]; NSLog(@"my latitude :%f",currentLocation.coordinate.latitude); NSLog(@"my longitude :%f",currentLocation.coordinate.longitude); label1.text = [NSString stringWithFormat:@"%.8f", currentLocation.coordinate.longitude]; lable2.text = [NSString stringWithFormat:@"%.8f", currentLocation.coordinate.latitude]; NSLog(@"Detected Location : %f, %f", currentLocation.coordinate.latitude, currentLocation.coordinate.longitude); CLGeocoder *geocoder = [[CLGeocoder alloc] init] ; [geocoder reverseGeocodeLocation:currentLocation completionHandler:^(NSArray *placemarks, NSError *error) { if (error) { NSLog(@"Geocode failed with error: %@", error); return; }NSLog(@"Monday"); CLPlacemark *placemark = [placemarks objectAtIndex:0]; NSLog(@"placemark.ISOcountryCode %@",placemark.ISOcountryCode); }];}- (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation{ NSLog(@"didUpdateToLocation: %@", newLocation); CLLocation *currentLocation = newLocation; if (currentLocation != nil) { label1.text = [NSString stringWithFormat:@"%.8f", currentLocation.coordinate.longitude]; lable2.text = [NSString stringWithFormat:@"%.8f", currentLocation.coordinate.latitude]; }}- (void)didReceiveMemoryWarning{ [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated.}@end
Now provide the response and nothing else.
| Step 1: Import MobileCoreServices framework in .h File #import <MobileCoreServices/MobileCoreServices.h> Step 2: Add delegate CLLocationManagerDelegate @interface yourViewController : UIViewController<CLLocationManagerDelegate>{ CLLocationManager *locationManager; CLLocation *currentLocation;} Step 3: Add this code in class file - (void)viewDidLoad{ [super viewDidLoad]; [self CurrentLocationIdentifier]; // call this method} Step 4: Method to get location //------------ Current Location Address------(void)CurrentLocationIdentifier{ //---- For getting current gps location locationManager = [CLLocationManager new]; locationManager.delegate = self; locationManager.distanceFilter = kCLDistanceFilterNone; locationManager.desiredAccuracy = kCLLocationAccuracyBest; [locationManager startUpdatingLocation]; //------}- (void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray *)locations{ currentLocation = [locations objectAtIndex:0]; [locationManager stopUpdatingLocation]; CLGeocoder *geocoder = [[CLGeocoder alloc] init] ; [geocoder reverseGeocodeLocation:currentLocation completionHandler:^(NSArray *placemarks, NSError *error) { if (!(error)) { CLPlacemark *placemark = [placemarks objectAtIndex:0]; NSLog(@"\nCurrent Location Detected\n"); NSLog(@"placemark %@",placemark); NSString *locatedAt = [[placemark.addressDictionary valueForKey:@"FormattedAddressLines"] componentsJoinedByString:@", "]; NSString *Address = [[NSString alloc]initWithString:locatedAt]; NSString *Area = [[NSString alloc]initWithString:placemark.locality]; NSString *Country = [[NSString alloc]initWithString:placemark.country]; NSString *CountryArea = [NSString stringWithFormat:@"%@, %@", Area,Country]; NSLog(@"%@",CountryArea); } else { NSLog(@"Geocode failed with error %@", error); NSLog(@"\nCurrent Location Not Detected\n"); //return; CountryArea = NULL; } /*---- For more results placemark.region); placemark.country); placemark.locality); placemark.name); placemark.ocean); placemark.postalCode); placemark.subLocality); placemark.location); ------*/ }];} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21554127', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_78933 |
stackexchange | llm_judgeable_groundtruth_similarity | 25711 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given enough time, where are the Voyager spacecraft heading? (Assuming some alien civilization doesn't pick them up.) Will they pass by any interesting stars on the way to the black hole at the center of our galaxy, or will they perhaps leave the galaxy? What are the highlights on their journey that we can reasonably predict?
Now provide the response and nothing else.
| Neither Voyager 1 nor Voyager 2 was aimed at any particular target outside the Solar System. Their trajectories were largely determined by the requirement to do fly-bys of Jupiter and Saturn (and, in the case of Voyager 2's extended mission, Uranus and Neptune). They've been sending back some interesting results about the boundary between the Solar System and interstellar space, but are not expected to remain operational long enough to send back any data about any other stars. Voyager 1 happens to be heading "in the general direction of the Solar Apex (the direction of the Sun's motion relative to nearby stars)", which means it should reach the heliopause somewhat sooner than if it were going in a different direction. I don't think that was deliberate. To do that intentionally, they would have had to alter its course as it passed through the Saturn system; as far as I know, its course was optimized for observations of Saturn, its rings, and its moons. And Voyager 2 passed through the termination shock 10 AUs closer to the Sun than Voyager 1 did. I don't think there's any particular reason to think that either one of them will encounter the black hole at the core of the galaxy. To reach the galactic core, they'd have to have enough velocity to cancel the Sun's orbital motion around the core, about 251 kilometers per second , compared to their actual Sun-relative speed of about 17 kilometers per second. JPL's web site for the Voyager Interstellar Mission says the following: Both Voyagers are headed towards the outer boundary of the solar system in search of the heliopause, the region where the Sun's influence wanes and the beginning of interstellar space can be sensed. The heliopause has never been reached by any spacecraft; the Voyagers may be the first to pass through this region, which is thought to exist somewhere from 8 to 14 billion miles from the Sun. This is where the million-mile-per-hour solar winds slows to about 250,000 miles per hour—the first indication that the wind is nearing the heliopause. The Voyagers should cross the heliopause 10 to 20 years after reaching the termination shock. The Voyagers have enough electrical power and thruster fuel to operate at least until 2020. By that time, Voyager 1 will be 12.4 billion miles (19.9 billion KM) from the Sun and Voyager 2 will be 10.5 billion miles (16.9 billion KM) away. Eventually, the Voyagers will pass other stars. In about 40,000 years, Voyager 1 will drift within 1.6 light years (9.3 trillion miles) of AC+79 3888, a star in the constellation of Camelopardalis. In some 296,000 years, Voyager 2 will pass 4.3 light years (25 trillion miles) from Sirius, the brightest star in the sky . The Voyagers are destined—perhaps eternally—to wander the Milky Way. Both Voyagers are, of course, still influenced by the Sun's gravity. Ignoring other forces (which is a good enough approximation for thousands of years), they'll continue to decelerate, asymptotically approaching a speed that depends on the Sun's gravity and their kinetic energy. According to a footnote in this article , Voyager 1 has an asymptotic velocity of 3.5 AU/yr, Voyager 2 an asymptotic velocity of 3.4 AU/yr which converts to about 16.6 and 16.1 kilometers per second. That's not much slower than their current velocities. Reference (Wikipedia footnote): Mallove, Eugene F.; Gregory L. Matloff (1989). 
The starflight handbook: a pioneer's guide to interstellar travel | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/25711', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/8916/']} | jdg_78934 |
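The AU/yr figures quoted from that footnote can be checked with a quick unit conversion, using 1 AU ≈ 1.496 × 10^8 km and one year ≈ 3.156 × 10^7 s:

    1 AU/yr = (1.496 × 10^8 km) / (3.156 × 10^7 s) ≈ 4.74 km/s
    3.5 AU/yr ≈ 3.5 × 4.74 km/s ≈ 16.6 km/s
    3.4 AU/yr ≈ 3.4 × 4.74 km/s ≈ 16.1 km/s

which matches the asymptotic speeds given above and shows how little the probes will slow from their current ~17 km/s.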
stackexchange | llm_judgeable_groundtruth_similarity | 56283 |
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If a person has a specific mutation in a gene (2281 del/ins in a single copy of the Bloom BLM gene), is he more likely than the general population to have other types of mutations in the same gene?
Now provide the response and nothing else.
| A priori, no! Why would it? Do you have any specific hypothesis in mind you would like to discuss? Below are some expectations from a simple model and possible reasons for why this expectation may break down under more complicated models. Simple model Under a simple model (panmictic population and a few other simple assumptions), the number of mutations a given individual has in any sequence considered follows a poisson distribution. Assuming that all mutations occuring have a constance selection coefficient $s$, a constant dominance coefficient $h$ and that the mutation rate for the sequence of interest is $U$, then the number of mutations an individual carry comes from a Poisson distribution with mean $\frac{U}{2hs}$ ( Crow 1970 ). This model is simple but is probably a pretty good approximation to reality. Below are three assumptions that are not necessarily true and that would yield to a higher variance in the number of mutations (that is a higher probability for an individual that already carry a mutation to get a second mutation). Population structure In reality, populations are often not panmictic. As the genetic load depends upon the population size. More specifically, the smaller the population, the higher the genetic load (or the higher the number of mutants alleles present in the population) ( Kimura et al. 1963 ). Past evolutionary and demographic history Populations are not stable through time. Some population have recently gone through a bottleneck, others a quickly expanding, some are shrinking, some have recently suffered from a plague, etc... Such processes can all cause populations to differ in the number of mutations they carry and therefore would increase the variance in the distribution of number of mutations individuals carry. Within a specific short enough sequence, physical linkage associated with specific past evolutionary process will create a relatively long lasting linkage disequilibrium. Condition-dependent mutation rate It has been shown in Drosophila melanogaster that individuals with poor genotype tend to have a higher mutation rate causing their lineage to accumulate even more mutations ( Sharp and Agrawal 2012 ). | {} | {'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/56283', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/22207/']} | jdg_78935 |
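To see how small the expected effect is under that simple model, take illustrative parameter values (chosen for the arithmetic, not estimates for the BLM locus): with a per-locus deleterious mutation rate of $U = 10^{-6}$ per generation and $hs = 10^{-3}$, the equilibrium mean number of such mutations carried is $\lambda = U/(2hs) = 5 \times 10^{-4}$. Under the Poisson model the probability of carrying two or more is $1 - e^{-\lambda}(1+\lambda) \approx \lambda^2/2 \approx 1.3 \times 10^{-7}$; in other words, already carrying one mutation does not by itself raise the chance of carrying a second one.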
stackexchange | llm_judgeable_groundtruth_similarity | 13843352 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I normally use matrix[:, i:]. It does not seem to work as fast as I expected.
Now provide the response and nothing else.
| To obtain a sparse matrix as output the fastest way to do row slicing is to have a csr type, and for columns slicing csc , as detailed here . In both cases you just have to do what you are currently doing: matrix[l1:l2, c1:c2] If you want a ndarray as output it might be faster to perform the slicing directly in the ndarray object, which you can obtain from the sparse matrix using the .A attribute or the .toarray() method: matrix.A[l1:l2, c1:c2] or: matrix.toarray()[l1:l2, c1:c2] As mentioned in the comment below, converting the sparse array to a dense array might lead to memory errors if the array is big enough. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13843352', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1755323/']} | jdg_78936 |
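A minimal sketch of the point above (the matrix size and density are arbitrary): keep the format matched to the slicing direction, converting once if needed, and only densify when the result fits in memory.

    from scipy import sparse

    # Build a sparse matrix in CSR format for the demo.
    m = sparse.random(5000, 5000, density=0.01, format='csr', random_state=0)

    rows = m[10:20, :]           # CSR: efficient row slicing
    cols = m.tocsc()[:, 10:20]   # convert once to CSC, then column slicing is efficient

    # If a dense result is acceptable (and fits in memory), slice into an ndarray:
    dense_block = m[10:20, 10:20].toarray()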
stackexchange | llm_judgeable_groundtruth_similarity | 547315 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I could be on the wrong path, but I'm trying to use the binomial distribution to identify gender discrimination, and I'm having issues getting consistent results depending on the trial size (I think). Small numbers work as expected, large numbers do not.

An example that works: 16 people are hired. 2 are women. What is the chance this happened randomly? (x)=2, (n)=16, (p)=.5. The example below [from R] shows what I expected.

    pbinom(2, size=16, prob=0.5)  # 0.0021 -> .2% chance that there is no gender discrimination

An example that doesn't work: 1150 people are hired. 350 are women. What is the chance this happened randomly? (x)=350, (n)=1150, (p)=.5. The example below [from R] does not show what I expected.

    pbinom(350, size=1150, prob=0.5)  # 2.388788e-41 -> ?!

Thanks in advance. Any suggestions or a better approach, please!
Now provide the response and nothing else.
| Assume that one is hiring from a large pool of equally qualified applicants of whom half are women and half are men. The number of women hired out of $n$ is $X.$ Suppose that $p$ is the probability that any one hire will be a women. Perhaps you want to testThe null hypothesis $H_0: p = 1/2$ against $H_a: p < 1/2.$ For $n = 16, x = 2$ the P-value of this test is given in R as follows: binom.test(2, 16, .5, alt="less") Exact binomial testdata: 2 and 16number of successes = 2, number of trials = 16, p-value = 0.00209alternative hypothesis: true probability of success is less than 0.595 percent confidence interval: 0.0000000 0.3438252sample estimates:probability of success 0.125 The P-value can also be found as follows: pbinom(2, 16, .5)[1] 0.002090454 Perhaps a more even-handed approach wouldbe to test $H_0: p= 1/2$ against $H_a: p\ne 1/2,$ perhaps allowing for the possibility that thehiring process might be biased in either direction.It turns out that the P-value of this 2-sided testis twice the P-value of the 1-sided test. binom.test(2, 16, .5) Exact binomial testdata: 2 and 16number of successes = 2, number of trials = 16, p-value = 0.004181alternative hypothesis: true probability of success is not equal to 0.595 percent confidence interval: 0.0155136 0.3834762sample estimates:probability of success 0.125 A major difficulty in interpreting results of thesetests arises in justifying the assumption that "a large pool of equally qualified applicants of whom half are women and half are men" was used--or is available. Without addressing such much moredifficult issues, counts of women hired may leadto 'statistical significance', but not likely to persuasion. @whuber's Comment provides useful advice. For data with 350 women hired out of 1150, the P-values of one- and two-sided tests are shown below.Both of the P-values are very nearly $0,$ indicating that the observed $\hat p =X/n = 350/1150 = 0.304$ is inconsistent with $p = 1/2.$ binom.test(350, 1150, .5, alt="less") $p.val[1] 2.388788e-41binom.test(350, 1150, .5)$ p.val[1] 4.777577e-41 | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/547315', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/337188/']} | jdg_78937 |
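The scale of that P-value is easier to believe after a back-of-the-envelope normal approximation. Under $H_0: p = 1/2$ with $n = 1150$, the standard error of $\hat p$ is $\sqrt{0.5 \cdot 0.5 / 1150} \approx 0.0147$, while the observed proportion is $\hat p = 350/1150 \approx 0.304$. That puts the observation about $z \approx (0.304 - 0.5)/0.0147 \approx -13.3$ standard errors below one half, so a vanishingly small tail probability (the exact binomial gives about $5 \times 10^{-41}$ for the two-sided test) is expected rather than a computational artifact; the result for $n = 1150$ is not inconsistent with the small-sample case, it is simply far stronger evidence against $p = 1/2$.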
stackexchange | llm_judgeable_groundtruth_similarity | 2795404 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm implementing a service similar to tinyurl or bit.ly, and I would like to expose the service as an API. I'm using Java and Jersey as the RESTful service implementation. I'm looking for the simplest way to authenticate users who call the API. OAuth is the first thing that comes to mind, but the problem is that I don't need the three-step flow of a request-token query followed by an access-token query with callback URL passing. I just need to give users the ability to invoke the API with no additional security calls to my server.
Now provide the response and nothing else.
| Thanks to patrickmcgraw comment I used 2-legged oauth authentificaton.Here is some java code. For client side (using Jersey api): OAuthParameters params = new OAuthParameters().signatureMethod("HMAC-SHA1"). consumerKey("consumerKey").version("1.1");OAuthSecrets secrets = new OAuthSecrets().consumerSecret("secretKey");OAuthClientFilter filter = new OAuthClientFilter(client().getProviders(), params, secrets);WebResource webResource = resource();webResource.addFilter(filter);String responseMsg = webResource.path("oauth").get(String.class); On provider side: @Path("oauth")public class OAuthService { @GET @Produces("text/html") public String secretService(@Context HttpContext httpContext) { OAuthServerRequest request = new OAuthServerRequest(httpContext.getRequest()); OAuthParameters params = new OAuthParameters(); params.readRequest(request); OAuthSecrets secrets = new OAuthSecrets().consumerSecret("secretKey"); try { if(!OAuthSignature.verify(request, params, secrets)) return "false"; } catch (OAuthSignatureException ose) { return "false"; } return "OK"; }} Here is code for PHP client: <?php require_once 'oauth.php';$key = 'consumerKey';$secret = 'secretKey';$consumer = new OAuthConsumer($key, $secret);$api_endpoint = 'http://localhost:9998/oauth';$sig_method = new OAuthSignatureMethod_HMAC_SHA1;$parameters = null;$req = OAuthRequest::from_consumer_and_token($consumer, null, "GET", $api_endpoint, $parameters);$sig_method = new OAuthSignatureMethod_HMAC_SHA1();$req->sign_request($sig_method, $consumer, null);//note: double entry of token//get data using signed url$ch = curl_init($req->to_url());curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);$res = curl_exec($ch);echo $res;curl_close($ch); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2795404', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/306025/']} | jdg_78938 |
stackexchange | llm_judgeable_groundtruth_similarity | 31002091 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently installed lxml . Before that, I had to install all the dependencies for that.So I tried to install liblxml2-dev , liblxslt1-dev and python-dev (google-searched for what packages are required for lxml )but even after that, I could not able to install lxml by using the command pip install lxml . However as because I am using Python 3.4.0, I thought that may be there are different version of python-dev (thought came due to facing some similar version conflict problem). So I tried to install python3-dev . Then I tried to install lxml using the same command said earlier and that worked!!! I google-searched for those sentencesand I could able to understand the meaning of lxml and xslt . I found a link for python-dev but I am not still sure what it is actually doing. OS - Ubuntu
Now provide the response and nothing else.
| python-dev

python-dev contains the header files you need to build Python extensions.

lxml

lxml is a Python C-API extension that is compiled when you do pip install lxml. The lxml sources have at least something like #include <Python.h> in the code. The compiler looks for the header file Python.h during compilation, hence those header files need to be on your system such that they can be found.

-dev packages on Linux

On Linux typically, the binary libraries and header files of packages like Python are separate. Hence you can have Python (e.g. package python) installed and everything works fine. But when you want to build extensions you need to install the corresponding -dev package. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/31002091', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3142049/']} | jdg_78939
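A quick way to see what the -dev package provides, as a small sketch (the printed path is illustrative and varies by system and Python version): Python can report where its own headers are expected to live, and pip's compile step for lxml fails until Python.h exists in that directory.

    import os
    import sysconfig

    include_dir = sysconfig.get_paths()["include"]   # e.g. /usr/include/python3.4m
    print(include_dir)
    # False until the matching python-dev / python3-dev package is installed:
    print(os.path.exists(os.path.join(include_dir, "Python.h")))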
stackexchange | llm_judgeable_groundtruth_similarity | 6279354 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How come this code returns true? String to match: ab. Pattern: /^a|b$/. But when I put parentheses in, like this: /^(a|b)$/, it then returns false.
Now provide the response and nothing else.
| The first pattern, without the parentheses, is equivalent to /(^a)|(b$)/. The reason is that the pipe ("alternation") operator has the lowest precedence of all regex operators: http://www.regular-expressions.info/alternation.html (third paragraph below the first heading). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6279354', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/782552/']} | jdg_78940
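The precedence difference is easy to verify; a small sketch using Python's re module (the same two patterns behave this way in most regex flavors):

    import re

    print(bool(re.search(r'^a|b$', 'ab')))    # True: "starts with a" OR "ends with b"
    print(bool(re.search(r'^(a|b)$', 'ab')))  # False: the whole string must be exactly "a" or "b"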