source: stringclasses (1 value)
task_type: stringclasses (1 value)
in_source_id: stringlengths (1 to 8)
prompt: stringlengths (209 to 40.4k)
gold_standard_solution: stringlengths (0 to 56.7k)
verification_info: stringclasses (1 value)
metadata: stringlengths (138 to 225)
problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
75344
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: From Xfce Docs : In case you want to override the DPI (dots per inch) value calculated by the X-server , you can select the checkbox and use the spin box to specify the resolution to use when your screen renders fonts. But how does X-server do its calculation? What assumptions are made in the process and can some of the parameters be overridden? It may know how many pixels I have on my display, but is that enough? Now provide the response and nothing else.
As far as I know, starting with version 1.7, Xorg defaults to 96 DPI. It doesn't calculate anything unless you specify DisplaySize via the Xorg config files. Also, don't rely on xdpyinfo output. My laptop runs on Intel SandyBridge. Excerpt from my Xorg.0.log on a fresh Archlinux install:

    (==) intel(0): DPI set to (96, 96)

Running

    xdpyinfo | grep -E 'dimensions|resolution'

returns:

    dimensions:    1600x900 pixels (423x238 millimeters)
    resolution:    96x96 dots per inch

which is far from being true. I know that my screen size is 344x193 mm, so obviously xdpyinfo calculates the physical size based on pixel resolution (1600x900) and default 96 DPI. If I add

    ........
    DisplaySize 344 193
    ........

in /etc/X11/xorg.conf.d/monitor.conf and restart, Xorg.0.log correctly reports:

    (**) intel(0): Display dimensions: (344, 193) mm
    (**) intel(0): DPI set to (118, 118)

However, xdpyinfo | grep -E 'dimensions|resolution' always returns:

    dimensions:    1600x900 pixels (423x238 millimeters)
    resolution:    96x96 dots per inch

Still, no visual changes, as I'm using Gnome, and 96 DPI is also hard-coded in gnome-settings-daemon. After patching the latter, I can enjoy my native 118 DPI. But even after all that, xdpyinfo still returns:

    dimensions:    1600x900 pixels (423x238 millimeters)
    resolution:    96x96 dots per inch
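In case it helps anyone recreating this, here is a minimal sketch of what the full monitor.conf stanza might look like. The dots in the excerpt above elide the surrounding Section block; the Identifier string below is only a placeholder (an assumption, not taken from the original config), and depending on the driver you may also need to reference the monitor section from a Device or Screen section for it to be picked up:

    Section "Monitor"
        # Placeholder identifier - match it to whatever your setup expects
        Identifier "Monitor0"
        # Physical width and height of the panel, in millimetres
        DisplaySize 344 193
    EndSection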
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/75344', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/9365/']}
jdg_83841
stackexchange
llm_judgeable_groundtruth_similarity
58069516
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to have two unique alerts attached to the same Button view. When I use the code below, only the alert on the bottom works. I'm using the official release of Xcode 11 on macOS Catalina.

    @State private var showFirstAlert = false
    @State private var showSecondAlert = false

    Button(action: {
        if Bool.random() {
            showFirstAlert = true
        } else {
            showSecondAlert = true
        }
    }) {
        Text("Show random alert")
    }
    .alert(isPresented: $showFirstAlert) {
        // This alert never shows
        Alert(title: Text("First Alert"), message: Text("This is the first alert"))
    }
    .alert(isPresented: $showSecondAlert) {
        // This alert does show
        Alert(title: Text("Second Alert"), message: Text("This is the second alert"))
    }

I expect first alert to show when I set showFirstAlert to true and I expect the second alert to show when I set showSecondAlert to true. Only the second alert shows when its state is true but the first one does nothing. Now provide the response and nothing else.
The second call to .alert(isPresented) is overriding the first. What you really want is one Binding<Bool> to denote whether the alert is presented, and some setting for which alert should be returned from the closure following .alert(isPresented). You could use a Bool for this, but I went ahead and did it with an enum, as that scales to more than two alerts.

    enum ActiveAlert {
        case first, second
    }

    struct ToggleView: View {
        @State private var showAlert = false
        @State private var activeAlert: ActiveAlert = .first

        var body: some View {
            Button(action: {
                if Bool.random() {
                    self.activeAlert = .first
                } else {
                    self.activeAlert = .second
                }
                self.showAlert = true
            }) {
                Text("Show random alert")
            }
            .alert(isPresented: $showAlert) {
                switch activeAlert {
                case .first:
                    return Alert(title: Text("First Alert"), message: Text("This is the first alert"))
                case .second:
                    return Alert(title: Text("Second Alert"), message: Text("This is the second alert"))
                }
            }
        }
    }
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/58069516', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12108274/']}
jdg_83842
stackexchange
llm_judgeable_groundtruth_similarity
441309
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Following the discussions here on SO I already read several times the remark that mutable structs are “evil” (like in the answer to this question ). What's the actual problem with mutability and structs in C#? Now provide the response and nothing else.
Structs are value types which means they are copied when they are passed around. So if you change a copy you are changing only that copy, not the original and not any other copies which might be around. If your struct is immutable then all automatic copies resulting from being passed by value will be the same. If you want to change it you have to consciously do it by creating a new instance of the struct with the modified data. (not a copy)
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/441309', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/40347/']}
jdg_83843
stackexchange
llm_judgeable_groundtruth_similarity
5073917
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In application.rb , it says: Set Time.zone default to the specified zone and make Active Record auto-convert to this zone. Run "rake -D time" for a list of tasks for finding time zone names. Default is UTC. But setting config.time_zone = 'Central Time (US & Canada)' or config.time_zone = 'Eastern Time (US & Canada)' has no effect - the created_at field in a model is stil being saved in UTC. According to this railsforum answer : config.time_zone just lets rails know that your server is set to this timezone so when it writes dates to the database it can properly convert it to UTC. If that is true, then why is it that when my system time is Pacific Time (US & Canada) and config.time_zone = 'Central Time (US & Canada)' or config.time_zone = 'Eastern Time (US & Canada)' , that the created_at time is the correct UTC? Should it not be incorrect?! Because, if the PST time is 8 PM, then EST is 11 PM and UTC is 4 AM.Presuming that Rails does Time.now , that would be 8 PM. And we told Rails that the server is in EST. So, 8 PM would be EST time as far as Rails is concerned and the UTC would then be 5 AM UTC, which would be incorrect (because the actual time is 8 PM PST/11 PM EST, which is 4 AM UTC) What's going on here? Now provide the response and nothing else.
Here's a list of concepts and things that may help you:

- config.time_zone doesn't set "server time"; that is usually controlled by your operating system.
- Rails always stores your dates in UTC in the database (unless you change a different setting).
- Time.now returns the local time for your computer in your timezone, and it also includes the local timezone offset from your operating system, which means Ruby, and therefore Rails, knows how to convert local time into UTC. You can try this by using irb directly, so no Rails libraries are loaded:

      ctcherry$ irb
      >> Time.now
      => Mon Feb 21 20:53:14 -0800 2011
      >>

- If config.time_zone or Time.zone is set to, let's say, EST, Rails expects that when you set a datetime attribute you mean for that time and date to be in the specified timezone, in this case EST. This is why you set Time.zone equal to your end user's timezone: they can use their local times and dates, you can pass them directly into your ActiveRecord models, and Rails can convert them to UTC for storage in the database.

Does that help at all?
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5073917', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/382818/']}
jdg_83844
stackexchange
llm_judgeable_groundtruth_similarity
7350860
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to change the following JS to jQuery, but I don't know how to pass a parameter to the click event in jQuery. Can anyone help me, thanks!

    <script type="text/javascript">
    function display(id){
        alert("The ID is "+id);
    }
    </script>
    <input id="btn" type="button" value="click" onclick="display(this.id)" />

Now provide the response and nothing else.
Better Approach:

    <script type="text/javascript">
        $('#btn').click(function() {
            var id = $(this).attr('id');
            alert(id);
        });
    </script>
    <input id="btn" type="button" value="click" />

But, if you REALLY need to do the click handler inline, this will work:

    <script type="text/javascript">
        function display(el) {
            var id = $(el).attr('id');
            alert(id);
        }
    </script>
    <input id="btn" type="button" value="click" OnClick="display(this);" />
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7350860', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/808422/']}
jdg_83845
stackexchange
llm_judgeable_groundtruth_similarity
42011070
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: On Windows, TensorFlow reports either or both of the following errors after executing an import tensorflow statement: No module named "_pywrap_tensorflow" DLL load failed. Now provide the response and nothing else.
The problem was the cuDNN library for me - for whatever reason cudnn-8.0-windows10-x64-v6.0 was NOT working - I used cudnn-8.0-windows10-x64-v5.1 - ALL GOOD!

My setup working with Win10 64 and the Nvidia GTX780M:

- Be sure you have the lib MSVCP140.DLL by checking your system/path - if not get it here
- Run the windows installer for python 3.5.3-amd64 from here - DO NOT try newer versions as they probably won't work
- Get the cuDNN v5.1 for CUDA 8.0 from here - put it under your users folder or in another known location (you will need this in your path)
- Get CUDA 8.0 x86_64 from here
- Set PATH vars as expected to point at the cuDNN libs and python (the python path should be added during the python install)
- Make sure that ".DLL" is included in your PATHEXT variable
- If you are using tensorflow 1.3 then you want to use cudnn64_6.dll: github.com/tensorflow/tensorflow/issues/7705
- If you run Windows 32 be sure to get the 32 bit versions of the files mentioned above.
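If it is not obvious which piece is missing, one low-tech way to probe the environment before importing TensorFlow is to ask Windows to locate the DLLs along the PATH from Python. This is only a diagnostic sketch; the DLL names are assumptions matching the CUDA 8.0 / cuDNN 5.1 setup described above and will differ for other versions:

    # Diagnostic sketch: check whether the DLLs TensorFlow needs are reachable
    # via the PATH. Run it with the same Python you use for TensorFlow.
    import ctypes.util

    # Assumed names for a CUDA 8.0 + cuDNN 5.1 install; adjust for your versions.
    for name in ("msvcp140", "cudart64_80", "cudnn64_5"):
        location = ctypes.util.find_library(name)
        print(name, "->", location if location else "NOT FOUND on PATH")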
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42011070', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7490399/']}
jdg_83846
stackexchange
llm_judgeable_groundtruth_similarity
5179648
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I get the label of each column in a result set to prepend the name of its table? I want this to happen for queries on single tables as well as joins. Example:

    SELECT first_name, last_name FROM person;

I want the results to be:

    | person.first_name | person.last_name |
    |-------------------|------------------|
    | Wendy             | Melvoin          |
    | Lisa              | Coleman          |

I could use "AS" to define an alias for each column, but that would be tedious. I want this to happen automatically.

    SELECT first_name AS person.first_name, last_name AS person.last_name FROM person;

The reason for my question is that I am using a database driver that does not provide the meta-data informing me of the database column from where the result set got its data. I am trying to write generic code to handle the result set. I would like to know how to do this in SQL generally, or at least in Postgres specifically. SQLite had such a feature, though I see it is now inexplicably deprecated. SQLite has two pragma settings: full_column_names & short_column_names. Now provide the response and nothing else.
I know this question is a bit old, but perhaps someone will stumble over the answer and it will help them out. The proper way to do what you're looking for is to create and use a view. Yes, it will be a bit tedious one time to type out all those new column names as aliases, but if there are a lot of columns here's a trick you can use to leverage the PostgreSQL metadata to write out the text of the view:

    select 'CREATE OR REPLACE VIEW people AS SELECT ' ||
           (select string_agg(column_name || ' AS person_' || column_name, ', ')
            from information_schema.columns
            where table_name = 'person'
            group by table_name) ||
           ' FROM person;';

Running this yields:

    ?column?
    ----------------------------------------------------------------------------------------------------------------------
    CREATE OR REPLACE VIEW people AS SELECT last_name AS person_last_name, first_name AS person_first_name FROM person;

    1 record(s) selected [Fetch MetaData: 0/ms] [Fetch Data: 0/ms]
    [Executed: 4/21/12 2:05:21 PM EDT ] [Execution: 9/ms]

You can then copy and execute the results and voila:

    select * from people;

    person_last_name     person_first_name
    -------------------  --------------------
    Melvoin              Wendy
    Coleman              Lisa

    2 record(s) selected [Fetch MetaData: 1/ms] [Fetch Data: 0/ms]
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5179648', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/642706/']}
jdg_83847
stackexchange
llm_judgeable_groundtruth_similarity
31507
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: DISCLAIMER I do apologize in advance if this question turns out to be silly, for some trivial reason that I may be overlooking in this moment. Suppose for a moment that $\mathbf{P} = \mathbf{BQP}$ is true: it is possible to efficiently simulate quantum computation by classical deterministic computation. Now, from that very little I know about quantum computing (I'm much less than an amateur when speaking about it), if I'm not wrong a Quantum Turing Machine is able, by its very definition, to generate true randomness. Henceforth, if $\mathbf{P} = \mathbf{BQP}$, then there exists a classical deterministic algorithm which is able to quickly output true randomness (by just quickly simulating some Quantum Turing Machine). But... what would randomness be then? Would it even exist at all? Would you be willing to continue to call it randomness? John Von Neumann once said: "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin" . But the above classical deterministic algorithm would indeed be an arithmetical method of producing random digits, no? When thinking a bit about it, I was also considering Max Tegmark's view (expressed in his last book) according to which randomness does not exist at all: he believes it is only an illusion, more precisely the subjective feeling you perceive each time you get cloned (i.e. each time the Universe you live in gets branched in 2 or more Universes). From my amateur point of view, if I were told that we live in a $\mathbf{P} = \mathbf{BQP}$ world, then I would believe that there is no such thing as randomness. How could it even exist, if it is perfectly producible by a mechanical procedure? Where would it reside? If true randomness can be implemented into a C++ concretion of the above classical deterministic algorithm, then where would it actually be? Would it be hidden somewhere? Would it continue to be present somehow, inspite there is no trace of it in the C++ code? For me, simply it would not exist. Now, to summarize, this question is not about how likely you consider $\mathbf{P} = \mathbf{BQP}$ to be. Rather, the question boils down to the following: In a world where $\mathbf{P} = \mathbf{BQP}$, would randomness exist at all? Now provide the response and nothing else.
P and BQP are decision-problem classes, i.e. the correct output is always a deterministic function of the inputs. The only question is whether randomness helps "along the way" to speed up computing this deterministic function (at the cost of sometimes being wrong), or does not. This is the key point: P=BQP says nothing about outputting random strings in general situations, but only about the situation of computing a deterministic function in polynomial time. So for instance the statement

    if P=BQP, then there exists a classical deterministic algorithm which is able to quickly output true randomness (by just quickly simulating some Quantum Turing Machine).

is false. P=BQP only implies that if a machine with randomness can usually output the correct bit, then there is a machine with no randomness that always outputs the correct bit.

More comments on your question:

- What is closer to true is that some form of pseudorandom generators exist (deterministic machines whose output "looks random" to other deterministic machines of bounded running time). However, again, these PRGs would be targeted to machines that always output just a single bit, not all tasks that use randomness in general.
- For an example single-machine problem that can be solved by a randomized machine and provably cannot be solved by a deterministic machine, consider the problem "output a string of Kolmogorov complexity $\geq n$". This is essentially the problem of outputting a random string, and of course we can prove that no deterministic machine can do this.
- As Kaveh points out, in areas like communication complexity between multiple machines, also distributed systems and probably others, we have provable differences between what can be achieved with and without randomness. The classic example is for two machines to decide if they hold the same $n$-bit string while communicating fewer than $n$ bits.
- It may be that even if BQP = P, the randomized algorithm is much faster and more useful in practice. (It may even be that someday complexity theory could prove this.) Example: the Miller-Rabin primality test (randomized) vs AKS (deterministic).
- As I understand it, we could have P = BQP and yet cryptography is perfectly useful, one-way functions exist, etc. (not the factoring problem, of course) - and cryptography relies completely on randomness.
- I think BPP is as much a model of "true" randomness as BQP, just a different type of randomness. How to get "truly" random bits in the real world is a different story.
{}
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/31507', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/947/']}
jdg_83848
stackexchange
llm_judgeable_groundtruth_similarity
78678
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was recently helping a college math student with her homework. Her teacher had offered an extra-credit question: Find two alternating series $\sum_{n=1}^\infty (-1)^{n-1}a_n$ such that $a_{n+1} \leq a_n$ for all $n$, but $\lim_{n\to\infty} a_n \neq 0$. One of the provided series should converge, and the other should diverge. A divergent series was easy to find: $\sum_{n=1}^\infty (-1)^{n-1} \left(1+\frac{1}{n}\right)$. I'm having a much harder time coming up with a convergent series, though. In fact, I suspect there isn't one. Informally (since it's been many years since I myself studied this topic): Since $\lim_{n\to\infty}a_n \neq 0$, then it either diverges or converges to some other number. Since the series is positive and monotone nonincreasing, it cannot diverge. Let $L$ be the positive number to which it converges. Then the odd terms of the alternating series converge to $L$ from above, and the even terms converge to $-L$ from below. Each term of the sequence of partial sums then differs from the previous term by at least $2L$, so the series does not converge. So... Did the teacher offer an impossible problem on purpose, or is there a flaw in my reasoning? Now provide the response and nothing else.
Notice that ($\sum a_n$ converges) $\Longrightarrow$ ($\lim a_n=0$). The contrapositive is sometimes called the test for divergence: ($\lim a_n\ne 0$) $\Longrightarrow$ ($\sum a_n$ diverges). So indeed, the series you are looking for does not exist.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/78678', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/16422/']}
jdg_83849
stackexchange
llm_judgeable_groundtruth_similarity
224277
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Background I'm copying some data CDs/DVDs to ISO files to use them later without the need of them in the drive. I'm looking on the Net for procedures and I found a lot: Use of cat to copy a medium: http://www.yolinux.com/TUTORIALS/LinuxTutorialCDBurn.html cat /dev/sr0 > image.iso Use of dd to do so (apparently the most widely used): http://www.linuxjournal.com/content/archiving-cds-iso-commandline dd if=/dev/cdrom bs=blocksize count=count of=/path/to/isoimage.iso Use of just pv to accomplish this: See man pv for more information, although here's an excerpt of it: Taking an image of a disk, skipping errors: pv -EE /dev/sda > disk-image.imgWriting an image back to a disk: pv disk-image.img > /dev/sdaZeroing a disk: pv < /dev/zero > /dev/sda I don't know if all of them should be equivalent, although I tested some of them (using the md5sum tool) and, at least, dd and pv are not equivalent. Here's the md5sum of both the drive and generated files using each procedure: md5 of dd procedure: 71b676875b0194495060b38f35237c3c md5 of pv procedure: f3524d81fdeeef962b01e1d86e6acc04 EDIT: That output was from another CD than the output given. In fact, I realized there are some interesting facts I provide as an answer. In fact, the size of each file is different comparing to each other. So, is there a best procedure to copy a CD/DVD or am I just using the commands incorrectly? More information about the situation Here is more information about the test case I'm using to check the procedures I've found so far: isoinfo -d i /dev/sr0 Output: https://gist.github.com/JBFWP286/7f50f069dc5d1593ba62#file-isoinfo-output-19-aug-2015 dd to copy the media, with output checksums and file informationOutput: https://gist.github.com/JBFWP286/75decda0a67605590d32#file-dd-output-with-md5-and-sha256-19-aug-2015 pv to copy the media, with output checksums and file informationOutput: https://gist.github.com/JBFWP286/700a13fe0a2f06ce5e7a#file-pv-output-with-md5-and-sha256-19-aug-2015 Any help will be appreciated! Now provide the response and nothing else.
All of the following commands are equivalent. They read the bytes of the CD /dev/sr0 and write them to a file called image.iso.

    cat /dev/sr0 >image.iso
    cat </dev/sr0 >image.iso
    tee </dev/sr0 >image.iso
    dd </dev/sr0 >image.iso
    dd if=/dev/cdrom of=image.iso
    pv </dev/sr0 >image.iso
    cp /dev/sr0 image.iso
    tail -c +1 /dev/sr0 >image.iso

Why would you use one over the other?

- Simplicity. For example, if you already know cat or cp, you don't need to learn yet another command.
- Robustness. This one is a bit of a variant of simplicity. How much risk is there that changing the command is going to change what it does? Let's see a few examples:
  - Anything with redirection: you might accidentally put a redirection the wrong way round, or forget it. Since the destination is supposed to be a non-existing file, set -o noclobber should ensure that you don't overwrite anything; however you might overwrite a device if you accidentally write >/dev/sda (for a CD, which is read-only, there's no risk, of course). This speaks in favor of cat /dev/sr0 >image.iso (hard to get wrong in a damaging way) over alternatives such as tee </dev/sr0 >image.iso (if you invert the redirections or forget the input one, tee will write to /dev/sr0).
  - cat: you might accidentally concatenate two files. That leaves the data easily salvageable.
  - dd: i and o are close on the keyboard, and somewhat unusual. There's no equivalent of noclobber; of= will happily overwrite anything. The redirection syntax is less error-prone.
  - cp: if you accidentally swap the source and the target, the device will be overwritten (again, assuming a non read-only device). If cp is invoked with some options such as -R or -a which some people add via an alias, it will copy the device node rather than the device content.
- Additional functionality. The one tool here that has useful additional functionality is pv, with its powerful reporting options. But here you can check how much has been copied by looking at the size of the output file anyway.
- Performance. This is an I/O-bound process; the main influence on performance is the buffer size: the tool reads a chunk from the source, writes the chunk to the destination, repeats. If the chunk is too small, the computer spends its time switching between tasks. If the chunk is too large, the read and write operations can't be parallelized. The optimal chunk size on a PC is typically around a few megabytes, but this is obviously very dependent on the OS, on the hardware, and on what else the computer is doing. I made benchmarks for hard-disk-to-hard-disk copies a while ago, on Linux, which showed that for copies within the same disk, dd with a large buffer size has the advantage, but for cross-disk copies, cat won over any dd buffer size.

There are a few reasons why you find dd mentioned so often. Apart from performance, they aren't particularly good reasons.

- In very old Unix systems, some text processing tools couldn't cope with binary data (they used null-terminated strings internally, so they tended to have problems with null bytes; some tools also assumed that characters used only 7 bits and didn't process 8-bit character sets properly). I'm not sure if this ever was a problem with cat (it was with more line-oriented tools such as head, sed, etc.), but people tended to avoid it on binary data because of its association with text processing. This is not a problem on modern systems such as Linux, OSX, *BSD, or anything that's POSIX-compliant.
- There's a sort of myth that dd is somewhat "lower level" than other tools such as cat and accesses devices directly. This is completely false: dd and cat and tee and the others all read bytes from their input and write the bytes to their output. The real magic is in /dev/sr0.
- dd has an unusual command line syntax, so explaining how it works gives more of an opportunity to shine than just writing cat /dev/sr0.
- Using dd with a large buffer size can have better performance, but it is not always the case (see some benchmarks on Linux).

A major risk with dd is that it can silently skip some data. I think dd is safe as long as skip or count are not passed, but I'm not sure whether this is the case on all platforms. But it has no advantage except for performance. So just use pv if you want its fancy progress report, or cat if you don't.
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/224277', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/-1/']}
jdg_83850
stackexchange
llm_judgeable_groundtruth_similarity
15543935
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I convert a list into a string in DrRacket? For example, how do I convert '(red yellow blue green) into "red yellow blue green"? I tried using list->string but that seems to work only for characters. Now provide the response and nothing else.
The trick here is mapping over the list of symbols received as input, converting each one in turn to a string, taking care of adding a white space in-between each one except the last. Something like this:

    (define (slist->string slst)
      (cond ((empty? slst) "")
            ((empty? (rest slst)) (symbol->string (first slst)))
            (else (string-append (symbol->string (first slst))
                                 " "
                                 (slist->string (rest slst))))))

Or even simpler, using higher-order procedures:

    (define (slist->string slst)
      (string-join (map symbol->string slst) " "))

Either way, it works as expected:

    (slist->string '(red yellow blue green))
    => "red yellow blue green"

And just to be thorough, if the input list were a list of strings (not symbols as in the question), the answer would be:

    (define strlist (list "red" "yellow" "blue" "green"))
    (string-join strlist " ")
    => "red yellow blue green"
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15543935', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2194466/']}
jdg_83851
stackexchange
llm_judgeable_groundtruth_similarity
54982437
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Consider:

    try {
        // Some code here
    } catch (IOException e) {
        throw e;
    } catch (Exception e) {
        throw e;
    }

What is the difference between throw e and throw new Exception(e)?

    try {
        // Some code here
    } catch (IOException e) {
        throw new IOException(e);
    } catch (Exception e) {
        throw new Exception(e);
    }

Now provide the response and nothing else.
If you don't need to adjust the exception type, you rethrow (throw further) the same instance without any changes:

    catch (IOException e) {
        throw e;
    }

If you do need to adjust the exception type, you wrap e (as a cause) into a new exception of the type required:

    catch (IOException e) {
        throw new IllegalArgumentException(e);
    }

I consider all other scenarios a code smell. Your second snippet is a good example of it. Here are answers to the questions that might pop up.

Why would I want to rethrow an exception?

You can let it go. But if it happens, you won't be able to do anything at this level. When we catch an exception in a method, we are still in that method and have access to its scope (e.g. local variables and their state). Before we rethrow the exception, we can do whatever we need to (e.g. log a message, send it somewhere, make a snapshot of the current state).

Why would I want to adjust an exception?

As a rule of thumb,

    Higher layers should catch lower-level exceptions and, in their place, throw exceptions that can be explained in terms of the higher-level abstraction.
    Effective Java - 2nd Edition - Item 61: Throw exceptions appropriate to the abstraction

In other words, at some point, an obscure IOException should be transformed into a perspicuous MySpecificBusinessRuleException. I called it "adjusting the exception type"; smart guys call it exception translation (exception chaining, in particular).

To make it clear, let's have some foolish examples.

    class StupidExample1 {
        public static void main(String[] args) throws IOException {
            try {
                throw new IOException();
            } catch (IOException e) {
                throw new IOException(new IOException(e));
            }
        }
    }

results in a verbose stack trace like

    Exception in thread "main" java.io.IOException: java.io.IOException: java.io.IOException
        at StupidExample1.main(XXX.java:XX)
    Caused by: java.io.IOException: java.io.IOException
        ... 1 more
    Caused by: java.io.IOException
        at StupidExample1.main(XXX.java:XX)

which can (and should) be effectively reduced to

    Exception in thread "main" java.io.IOException
        at StupidExample1.main(XXX.java:XX)

Another one:

    class StupidExample2 {
        public static void main(String[] args) {
            takeString(new String(new String("myString")));
        }
        static void takeString(String s) { }
    }

It's obvious that new String(new String("myString")) is a wordy version of "myString" and should be refactored to the latter.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/54982437', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5648526/']}
jdg_83852
stackexchange
llm_judgeable_groundtruth_similarity
28637796
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am unable to find a single post showing how I can automate mobile testing with Appium in C#. I have written my website automation code in SpecFlow. Can I also reuse it? Now provide the response and nothing else.
Appium provides the dotnet-appium-driver, which is your API to interface with Appium. You can use that to write your app automation. You did not provide any example here nor code, so I cannot really act on something to show you. I will just write down some C# code to let you understand how a simple test in C# can be written:

    namespace AppiumTests
    {
        using System;
        // .NET unit test namespaces needed here as well, just not mentioning them
        using OpenQA.Selenium; /* Appium is based on Selenium, we need to include it */
        using OpenQA.Selenium.Appium; /* This is Appium */

        [TestClass]
        public class TestSuite
        {
            private AppiumDriver driver;
            private static Uri testServerAddress = new Uri("http://127.0.0.1:4723/wd/hub"); // If Appium is running locally
            private static TimeSpan INIT_TIMEOUT_SEC = TimeSpan.FromSeconds(180); /* Change this to a more reasonable value */
            private static TimeSpan IMPLICIT_TIMEOUT_SEC = TimeSpan.FromSeconds(10); /* Change this to a more reasonable value */

            [TestInitialize]
            public void BeforeAll()
            {
                DesiredCapabilities testCapabilities = new DesiredCapabilities();
                testCapabilities.App = "<your-app-file>";
                testCapabilities.AutoWebView = true;
                testCapabilities.AutomationName = "";
                testCapabilities.BrowserName = String.Empty; // Leave empty, otherwise you test on browsers
                testCapabilities.DeviceName = "Needed if testing on IOS on a specific device. This will be the UDID";
                testCapabilities.FwkVersion = "1.0"; // Not really needed
                testCapabilities.Platform = TestCapabilities.DevicePlatform.Android; // Or IOS
                testCapabilities.PlatformVersion = String.Empty; // Not really needed

                driver = new AppiumDriver(testServerAddress, testCapabilities, INIT_TIMEOUT_SEC);
                driver.Manage().Timeouts().ImplicitlyWait(IMPLICIT_TIMEOUT_SEC);
            }

            [TestCleanup]
            public void AfterAll()
            {
                driver.Quit(); // Always quit; if you don't, the next test session will fail
            }

            /// <summary>
            /// Just a simple test to check out the Appium environment.
            /// </summary>
            [TestMethod]
            public void CheckTestEnvironment()
            {
                var context = driver.GetContext();
                Assert.IsNotNull(context);
            }
        }
    }

You can find more in this article I wrote.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28637796', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2682255/']}
jdg_83853
stackexchange
llm_judgeable_groundtruth_similarity
675
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Here is a question for image processing experts. I am working on a difficult computer vision problem. The task is to count the stomata (marked below) in DIC microscopy images. These images are resistant to most superficial image processing techniques like morphological operations and edge detection. It is also different from other cell counting tasks. I am using OpenCV. My plan is to review potentially useful features for stomata discrimination:

- Texture classifiers
  - DCT (Discrete cosine transform/frequency-domain analysis)
  - LBP (Local binary patterns)
  - HOG (Histogram of oriented gradients)
- Robust feature detectors (I am skeptical)
  - Harris corners
  - SIFT, SURF, STAR, etc.
- Haar cascade classifier/Viola-Jones features
- And possibly design a novel feature descriptor.

I am leaving out the selection of a classifier for now. What have I missed? How would you solve this? Solutions for similar object detection problems would be very helpful. Sample images here. After bandpass filter: Canny edge detection is not promising. Some image areas are out of focus: Now provide the response and nothing else.
The first thing I would try is template matching, with the template rotated through all angles at some step. Rotating the template is essential here. The choice of template could also be non-trivial - there could be several templates with different lighting, and they could be blurred to allow for differences in shape.

http://en.wikipedia.org/wiki/Template_matching#Template-based_matching_and_convolution

Next, HOG looks promising here.

Another solution could be using a strong corner detector like Moravec or Shi-Tomasi (with non-maximum suppression) and looking for groups of 2 corners, or 3-4 corners on the same line, as candidates. After finding candidates you can apply an active contour for verification (not sure if it would really help, but that is a possibility).

http://en.wikipedia.org/wiki/Corner_detector
http://en.wikipedia.org/wiki/Active_contour

Yet another possibility is to use the Hough transform for ellipses, possibly with not 2 but 3-4 free parameters.
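To make the first suggestion concrete, below is a rough OpenCV sketch of rotated template matching in Python. It only illustrates the idea; the angle step, the matching threshold, and the assumption of single-channel (grayscale) inputs are placeholders to adapt to the actual micrographs, and overlapping hits would still need non-maximum suppression before counting:

    import cv2
    import numpy as np

    def rotated_template_candidates(image, template, angle_step=15, threshold=0.7):
        """Slide the template over the image at several rotations and return
        candidate detections as (x, y, angle, score) tuples."""
        h, w = template.shape[:2]
        candidates = []
        for angle in range(0, 360, angle_step):
            # Rotate the template about its centre; clipped corners are
            # acceptable for a roughly elliptical target like a stoma.
            rotation = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
            rotated = cv2.warpAffine(template, rotation, (w, h))
            # Normalised cross-correlation is fairly tolerant of lighting changes.
            scores = cv2.matchTemplate(image, rotated, cv2.TM_CCOEFF_NORMED)
            ys, xs = np.where(scores >= threshold)
            for x, y in zip(xs, ys):
                candidates.append((int(x), int(y), angle, float(scores[y, x])))
        return candidates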
{}
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/675', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/267/']}
jdg_83854
stackexchange
llm_judgeable_groundtruth_similarity
43995
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Handling multiple merges onto branches in Subversion or CVS is just one of those things that has to be experienced. It is inordinately easier to keep track of branches and merges in Mercurial (and probably any other distributed system) but I don't know why. Does anyone else know? My question stems from the fact that with Mercurial you can adopt a working practice similar to that of Subversions/CVSs central repository and everything will work just fine. You can do multiple merges on the same branch and you won't need endless scraps of paper with commit numbers and tag names. I know the latest version of Subversion has the ability to track merges to branches so you don't get quite the same degree of hassle but it was a huge and major development on their side and it still doesn't do everything the development team would like it to do. There must be a fundamental difference in the way it all works. Now provide the response and nothing else.
In Subversion (and CVS), the repository is first and foremost. In git and mercurial there is not really the concept of a repository in the same way; here changes are the central theme.

+1 The hassle in CVS/SVN comes from the fact that these systems do not remember the parenthood of changes. In Git and Mercurial, not only can a commit have multiple children, it can also have multiple parents! That can easily be observed using one of the graphical tools, gitk or hgview. In the following example, branch #2 was forked from #1 at commit A, and has since been merged once (at M, merged with commit B):

    o---A---o---B---o---C    (branch #1)
         \       \
          o---o---M---X---?  (branch #2)

Note how A and B have two children, whereas M has two parents. These relationships are recorded in the repository. Let's say the maintainer of branch #2 now wants to merge the latest changes from branch #1; they can issue a command such as:

    $ git merge branch-1

and the tool will automatically know that the base is B--because it was recorded in commit M, an ancestor of the tip of #2--and that it has to merge whatever happened between B and C. CVS does not record this information, nor did SVN prior to version 1.5. In these systems, the graph would look like:

    o---A---o---B---o---C    (branch #1)
         \
          o---o---M---X---?  (branch #2)

where M is just a gigantic "squashed" commit of everything that happened between A and B, applied on top of M. Note that after the deed is done, there is no trace left (except potentially in human-readable comments) of where M did originate from, nor of how many commits were collapsed together--making history much more impenetrable. Worse still, performing a second merge becomes a nightmare: one has to figure out what the merge base was at the time of the first merge (and one has to know that there has been a merge in the first place!), then present that information to the tool so that it does not try to replay A..B on top of M. All of this is difficult enough when working in close collaboration, but is simply impossible in a distributed environment. A (related) problem is that there is no way to answer the question: "does X contain B?" where B is a potentially important bug fix. So, why not just record that information in the commit, since it is known at merge time!

P.-S. -- I have no experience with SVN 1.5+ merge recording abilities, but the workflow seems to be much more contrived than in the distributed systems. If that is indeed the case, it's probably because--as mentioned in the above comment--the focus is put on repository organization rather than on the changes themselves.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/43995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4003/']}
jdg_83855
stackexchange
llm_judgeable_groundtruth_similarity
14972032
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

    // Alternately display any text that is typed in the textbox
    // in either Capital or lowercase depending on the original
    // letter changed. For example: CoMpUtEr will convert to
    // cOmPuTeR and vice versa.
    Switch.addActionListener(new ActionListener()
    {
        public void actionPerformed(ActionEvent e)
        {
            String characters = (SecondTextField.getText()); // String to read the user input
            int length = characters.length(); // change the string characters to length
            for(int i = 0; i < length; i++) // to check the characters of string..
            {
                char character = characters.charAt(i);
                if(Character.isUpperCase(character))
                {
                    SecondTextField.setText("" + characters.toLowerCase());
                }
                else if(Character.isLowerCase(character))
                {
                    SecondTextField.setText("" + characters.toUpperCase());
                    // problem is here, how can i track the character which i already change above, means lowerCase**
                }
            }
        }
    });

Now provide the response and nothing else.
setText is changing the text content to exactly what you give it, not appending it. Convert the String from the field first, then apply it directly...

    String value = "This Is A Test";
    StringBuilder sb = new StringBuilder(value);
    for (int index = 0; index < sb.length(); index++) {
        char c = sb.charAt(index);
        if (Character.isLowerCase(c)) {
            sb.setCharAt(index, Character.toUpperCase(c));
        } else {
            sb.setCharAt(index, Character.toLowerCase(c));
        }
    }
    SecondTextField.setText(sb.toString());
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14972032', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1970580/']}
jdg_83856
stackexchange
llm_judgeable_groundtruth_similarity
20596786
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Question regarding RStudio. Suppose I am running a code in the console: > code1() assume that code1() prints nothing on the console, but code1() above takes an hour to complete. I want to work on something else while I wait for code1() . is it possible? Is there a function like runInBackground which I can use as follows > runInBackground(code1())> code2() The alternatives are running two RStudios or writing a batch file that uses Rscript to run code1() , but I wanted to know if there is something easier that I can do without leaving the RStudio console. I tried to browse through R's help documentation but didn't come up with anything (or may be I didn't use the proper keywords). Now provide the response and nothing else.
The future package (I'm the author) provides this:

    library("future")
    plan(multisession)

    future(code1())
    code2()

FYI, if you use

    plan(cluster, workers = c("n1", "n3", "remote.server.org"))

then the future expression is resolved on one of those machines. Using

    plan(future.BatchJobs::batchjobs_slurm)

will cause it to be resolved via a Slurm job scheduler queue. This question is closely related to "Run asynchronous function in R".
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20596786', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2526657/']}
jdg_83857
stackexchange
llm_judgeable_groundtruth_similarity
381891
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My algebra is weak I need help computing $\mathrm{Hom}(\mathbb Z_n,\mathbb Z)$, $\mathrm{Hom}(\mathbb Z_n,\mathbb Z_m)$ and also $\mathrm{Hom}(\mathbb Z,\mathbb Z)$ as $\mathbb Z$-modules. Also books suggestion to improve my basic. Thank you. Regards Now provide the response and nothing else.
Let $C$ be a cyclic group with generator $\sigma$, and let $A$ be any abelian group. Then any homomorphism $f: C \rightarrow A$ is determined by $f(\sigma)$. If $C$ is infinite cyclic -- let's call it $Z$ -- then there are no restrictions on $f(\sigma)$ and thus $\operatorname{Hom}(Z,A) = A$. In particular $\operatorname{Hom}(Z,Z) = Z$. If $C$ is finite of order $n$ -- let's call it $Z_n$ -- then $f(\sigma)$ must have order dividing $n$ in $A$, and this is the only restriction. Thus $\operatorname{Hom}(Z_n,A) = A[n]$, the set of elements of order dividing $n$ in $A$. Since $Z$ has no nonzero elements of finite order, $\operatorname{Hom}(Z_n,Z) = 0$. Finally $\operatorname{Hom}(Z_n,Z_m) = Z_m[n]$, i.e., the subgroup of elements of order dividng $n$ in a finite cylic group of order $m$. I leave it to you to identify this subgroup explicitly. Hint : Such an element has order dividing $m$ and order dividing $n$, so it has order dividing...
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/381891', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/61921/']}
jdg_83858
stackexchange
llm_judgeable_groundtruth_similarity
4621255
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to figure out how to run Python programs with the Command Prompt on Windows 7. (I should have figured this out by now...) When I typed "python" into the command prompt, I got the following error: 'python' is not recognized as an internal or external command,operable program or batch file. The first place I found when looking for help was this site: http://docs.python.org/faq/windows.html#how-do-i-run-a-python-program-under-windows . It was somewhat helpful, but the tutorial was written for Windows 2000 and older, so it was minimally helpful for my Windows 7 machine. I attempted the following: For older versions of Windows the easiest way to do this is to edit the C:\AUTOEXEC.BAT >file. You would want to add a line like the following to AUTOEXEC.BAT: This file did not exist on my machine (unless I'm mistaken). Next, I tried this: (here: How do I run a Python program? ) Putting Python In Your Path Windows In order to run programs, your operating system looks in various places, and tries to match the name of the program / command you typed with some programs along the way. In windows: control panel > system > advanced > |Environmental Variables| > system variables -> Path this needs to include: C:\Python26; (or equivalent). If you put it at the front, it will be the first place looked. You can also add it at the end, which is possibly saner. Then restart your prompt, and try typing 'python'. If it all worked, you should get a ">>>" prompt. This was relevant enough for Windows 7, and I made my way to the System Variables. I added a variable "python" with the value "C:\Python27" I continued to get the error, even after restarting my computer. Anyone know how to fix this? Now provide the response and nothing else.
You need to add C:\Python27 to your system PATH variable, not a new variable named "python". Find the system PATH environment variable, and append to it a ; (which is the delimiter) and the path to the directory containing python.exe (e.g. C:\Python27). See below for exact steps.

The PATH environment variable lists all the locations that Windows (and cmd.exe) will check when given the name of a command, e.g. "python" (it also uses the PATHEXT variable for a list of executable file extensions to try). The first executable file it finds on the PATH with that name is the one it starts. Note that after changing this variable, there is no need to restart Windows, but only new instances of cmd.exe will have the updated PATH. You can type set PATH at the command prompt to see what the current value is.

Exact steps for adding Python to the path on Windows 7+:

1. Computer -> System Properties (or Win+Break) -> Advanced System Settings
2. Click the Environment variables... button (in the Advanced tab)
3. Edit PATH and append ;C:\Python27 to the end (substitute your Python version)
4. Click OK.

Note that changes to the PATH are only reflected in command prompts opened after the change took place.
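Once a new command prompt has picked up the change, a quick sanity check from inside Python itself will confirm which interpreter is actually being launched (a small illustrative snippet; C:\Python27 is just the example directory used above):

    # Run this after typing "python" in a freshly opened command prompt.
    import os
    import sys

    print(sys.executable)                                 # e.g. C:\Python27\python.exe
    print("C:\\Python27" in os.environ.get("PATH", ""))   # True once the PATH edit is visible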
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/4621255', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/566233/']}
jdg_83859
stackexchange
llm_judgeable_groundtruth_similarity
322165
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The context I have been reading Gromov's Metric Structures... , and came upon result 1.14.(a), page 11, which states the following. Let $K\subset\mathbb R^d$ be a compact subset, and $d_\ell$ its length metric.¹ If the distortion $\sup_{x\neq y}\frac{d_\ell(x,y)}{d_{\mathbb R^d}(x,y)}$ is less than $\frac\pi2$ , then $K$ is simply connected. The proof, if I understand correctly, goes roughly as follows [Edit: see below for more detail]: suppose there exists a compact $K$ satisfying the inequality but admitting a non-trivial loop; choose a non-trivial loop $\gamma$ of minimal length; show that the length metric on $\gamma$ is the same as that of $K$ (by minimality of the loop); argue that $\gamma$ must ‘make a turn’ at some point, contradicting the metric assumption. My question lies in Point 2 of this program. Of course, there is no reason for such a curve to exist (see e.g. the Hawaiian earring ). [Edit: YCor's example, in the comments, shows in fact that even in a fixed free homotopy class, one may not find a length-minimising curve.] Am I mistaken in thinking that Point 2 is actually what is happening in the proof? [Edit: This point is somewhat answered by M. Dus in his answer, who notes that it might be suggested by the authors that the subset is supposed to be a submanifold, or a domain with smooth boundary.] If not, is there a simple way to get the argument back on track? [Edit: Following M. Dus's answer, the question might actually be: Is there a simple way to get the argument working for all compact sets?] A bit more about the proof I'll start by saying that a quick search in your favourite search engine will take you to a pdf version of the book, for anyone interested. However I thought I should make my question as self-contained as possible. Point 1 of the proof is self-evident, and point 2 is given as is in the book (‘let $\alpha$ be a nontrivial homotopy class in which there exists a curveof minimal length among all homotopically nontrivial loops’). Say that we get a curve $\gamma$ of length $\ell$ , parametrised by arc length by $c:\mathbb R/\ell\mathbb Z\to\mathbb R^d$ . Point 3 (after trimming) aims to show that the path length distance between $c(t)$ and $c(t+\ell/2)$ is the same, viewed as points in $K$ or $\gamma$ . Otherwise, there must be a curve in $K$ of length $<\ell/2$ between these two points, creating two loops of length $<\ell$ whose product is $c$ . One of them must be non trivial (because $c$ is), a contradiction (because $c$ is of minimal length among such curves). Point 4, then, defines $\tilde c:t\mapsto c(t+\ell/2)-c(t)$ , $r=|\tilde c|$ and $u=\tilde c/|\tilde c|$ . We will show that the length of $u$ is less than $2\pi$ , a contradiction, because $u$ takes values in the sphere, and $u(t+\ell/2)=-u(t)$ . Noting that $\frac{\mathrm du}{\mathrm dt}$ is orthogonal to $\tilde c$ , we get $$ \left|\frac{\mathrm du}{\mathrm dt}\right|^2 = \left|\frac1{r(t)}\frac{\mathrm d\tilde c'}{\mathrm dt}\right|^2 - \left|\frac{-r'(t)}{r(t)^2}\tilde c(t)\right|^2\leq \frac{4-r'(t)^2}{r(t)^2}\leq \frac4{r(t)^2}. $$ But according to Point 3, the path length distance in $K$ between $c(t)$ and $c(t+\ell/2)$ is precisely $\ell/2$ , so we get $$ \frac{\ell/2}{r(t)}\leq\sup_{x\neq y}\frac{d_\ell(x,y)}{d_{\mathbb R^d}(x,y)}\leq\frac\pi2-\varepsilon<\frac\pi2. 
$$ This yields $|u'(t)|\leq(2\pi-\varepsilon)/\ell$ so the length of $u$ is at most $2\pi-\varepsilon<2\pi$ , as announced. ¹ If $S\subset\mathbb R^d$ , its associated length metric is $$d_\ell(x,y):=\inf\{\mathrm{length}(\gamma),\gamma\text{ a path from }x\text{ to }y\text{ with values in }S\}.$$ For instance, $d_\ell$ is infinite if $x$ and $y$ are not in the same connected component, or two different points in the Koch snowflake . Now provide the response and nothing else.
The diagonal $\Delta $ is linearly equivalent to $\{p\}\times \mathbb{P}^1 +\mathbb{P}^1\times \{p\} $ for any $p$ in $\mathbb{P}^1$ . Therefore $X$ is the zero locus in $S\times S'$ of a section of $L:=\pi^*\mathcal{O}(1) \boxtimes \pi'^*\mathcal{O}(1) $ . On the other hand, standard theory of elliptic surfaces gives $\omega _S=\pi ^*\mathcal{O}(-1) $ and $\omega _{S'}=\pi' ^*\mathcal{O}(-1) $ , therefore $\omega _{S\times S'}= L^{-1}$ . Then the adjunction formula gives indeed $\omega_X\cong \mathcal{O}_{X}$ .
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/322165', 'https://mathoverflow.net', 'https://mathoverflow.net/users/129074/']}
jdg_83860
stackexchange
llm_judgeable_groundtruth_similarity
5694733
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given the following code: if (is_valid($string) && up_to_length($string) && file_exists($file)) { ......} If is_valid($string) returns false , does the php interpreter still check later conditions, like up_to_length($string) ? If so, then why does it do extra work when it doesn't have to? Now provide the response and nothing else.
Yes, the PHP interpreter is "lazy", meaning it will do the minimum number of comparisons possible to evaluate conditions. If you want to verify that, try this:

    function saySomething()
    {
        echo 'hi!';
        return true;
    }

    if (false && saySomething())
    {
        echo 'statement evaluated to true';
    }
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/5694733', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/551530/']}
jdg_83861
stackexchange
llm_judgeable_groundtruth_similarity
4123
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a conceptual question: What does "high" code density mean, and why is it so important? Now provide the response and nothing else.
Code density refers loosely to how many microprocessor instructions it takes to perform a requested action, and how much space each instruction takes up. Generally speaking, the less space an instruction takes and the more work per instruction that a microprocessor can do, the more dense its code is.

I notice that you've tagged your question with the 'arm' tag; I can illustrate code density using ARM instructions.

Let's say you want to copy a block of data from one place in memory to another. Conceptually, your high level code would look something like this:

    void memcpy(void *dest, void *source, int count_bytes)
    {
        char *s, *d;
        s = source;
        d = dest;
        while(count_bytes--) { *d++ = *s++; }
    }

Now a simple compiler for a simple microprocessor may convert this to something like the following:

    movl r0, count_bytes
    movl r1, s
    movl r2, d
    loop: ldrb r3, [r1]
    strb [r2], r3
    movl r3, 1
    add r1, r3
    add r2, r3
    sub r0, r3
    cmp r0, 0
    bne loop

(my ARM is a little rusty, but you get the idea)

Now this would be a very simple compiler and a very simple microprocessor, but you can see from the example that we're looking at 8 instructions per iteration of the loop (7 if we move the '1' to another register and move the load outside the loop). That's not really dense at all.

Code density also affects performance; if your loops are longer because the code is not dense, you might need more instruction cache to hold the loop. More cache means a more expensive processor, but then again complex instruction decoding means more transistors to decipher the requested instruction, so it's a classic engineering problem.

ARM's pretty nice in this respect. Every instruction can be conditional, most instructions can increment or decrement the value of registers, and most instructions can optionally update the processor flags. On ARM and with a moderately useful compiler, the same loop may look something like this:

    movl r0, count_bytes
    movl r1, s
    movl r2, d
    loop: ldrb r3, [r1++]
    strb [r2++], r3
    subs r0, r0, 1
    bne loop

As you can see, the main loop is now 4 instructions. The code is more dense because each instruction in the main loop does more. This generally means that you can do more with a given amount of memory, because less of it is used to describe how to perform the work.

Now native ARM code often had the complaint that it wasn't super-dense; this is due to two main reasons: first, 32 bits is an awfully "long" instruction, so a lot of bits seem to be wasted for simpler instructions, and second, code got bloated due to ARM's nature: each and every instruction is 32 bits long, without exception. This means that there are a large number of 32-bit literal values that you can't just load into a register. If I wanted to load "0x12345678" into r0, how do I code an instruction that not only has 0x12345678 in it, but also describes "load literal to r0"? There are no bits left over to code the actual operation. The ARM load literal instruction is an interesting little beast, and the ARM assembler must also be a little smarter than normal assemblers, because it has to "catch" these kinds of instructions and code them as a value stored in the object file and an indirect load of that address to the requested register.

Anyway, to answer these complaints, ARM came up with Thumb mode. Instead of 32 bits per instruction, the instruction length is now 16 bits for almost all instructions, and 32 bits for branches.
There were a few sacrifices with Thumb mode, but by and large these sacrifices were easy to make because Thumb got you something like a 40% improvement in code density just by reducing the instruction length.
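To make the density difference concrete, here is roughly what the same inner loop could look like in Thumb. This is a sketch only: Thumb-1 has no post-increment addressing modes, so the pointer bumps become separate instructions, and the register assignments are just the illustrative ones used above.

loop:   ldrb  r3, [r1]      @ load a byte
        adds  r1, #1        @ bump the source pointer
        strb  r3, [r2]      @ store it
        adds  r2, #1        @ bump the destination pointer
        subs  r0, #1        @ decrement the count, setting flags
        bne   loop

That is six instructions instead of four, but at 16 bits each the loop body occupies about 12 bytes, versus 16 bytes for the four 32-bit instructions in the ARM version above: smaller overall even though it takes more instructions, which is exactly the trade Thumb makes.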
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/4123', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/170/']}
jdg_83862
stackexchange
llm_judgeable_groundtruth_similarity
18582333
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have tried to find examples of tail recursion and I really don't see the difference between regular and tail. If this isn't tail recursion, can someone tell me why its not? public static long fib(long index) {// assume index >= 0if (index == 0) // Base case return 0;else if (index == 1) // Base case return 1; else // Reduction and recursive calls return fib(index - 1) + fib(index - 2);} // end of method fib(long index) Now provide the response and nothing else.
No, the method in the question does not use a tail recursion. A tail recursion is easily recognizable: the recursive step is the last thing that happens in the method. In your code, after both recursive calls end, there's one more operation to do - an addition. So the method is recursive , but not tail-recursive . For comparison purposes, here's a tail-recursive implementation of the fib() method - notice how we need to pass extra parameters to save the state of the recursion, and more importantly, notice that there are no operations left to do after the recursive call returns. public static long fib(int n, long a, long b) { return n == 0 ? b : fib(n-1, a+b, a);} Use it like this: fib(10, 1, 0) // calculates the fibonacci of n=10=> 55 The previous implementation will work fine up to n=92, for bigger numbers you'll have to use BigInteger instead of long , and better switch to an iterative algorithm.
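For completeness, here is a sketch of the iterative/BigInteger variant the answer alludes to (class and method names are just illustrative):

import java.math.BigInteger;

public class Fibonacci {
    // Iterative version: no recursion depth to worry about, and
    // BigInteger avoids the overflow that hits long at n = 93.
    public static BigInteger fib(int n) {
        BigInteger a = BigInteger.ONE;   // fib(k+1)
        BigInteger b = BigInteger.ZERO;  // fib(k)
        for (int i = 0; i < n; i++) {
            BigInteger next = a.add(b);
            b = a;
            a = next;
        }
        return b;                        // fib(n)
    }
}

Java does not perform tail-call elimination, so even the tail-recursive version above will eventually exhaust the stack for very large n; the loop form avoids that entirely.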
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18582333', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2198019/']}
jdg_83863
stackexchange
llm_judgeable_groundtruth_similarity
8091090
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If a user visits my websites example, from Safari Mobile how could I place there a blank page that says "Add To Homescreen."? Once added it would show different content. Now provide the response and nothing else.
You'll want to check two things. First, is it running on an iOS device? Second, is window.navigator.standalone == true ? window.navigator.standalone is primarily used by Webkit browsers to indicate the app is in fullscreen (or standalone) mode. Plenty of devices (like phones running Android), support this property, but don't have the option to 'Add to Homescreen' like iOS devices do, so you need to check both. Demo: Javascript: function isIOS() { var userAgent = window.navigator.userAgent.toLowerCase(); return /iphone|ipad|ipod/.test( userAgent );};function isStandalone() { return ( isIOS() && window.navigator.standalone );};window.onload = function () { if( isStandalone() || !isIOS() ) { //either ios+standalone or not ios //start app } else { //display add to homescreen page };};
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8091090', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1041213/']}
jdg_83864
stackexchange
llm_judgeable_groundtruth_similarity
2997722
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I needed help proving that this function is not an injection. I know that I can try a few $x$ values and see if I get the same output but I wanted a more rigorous proof using the definitions. This is what I tried. \begin{align}f(m)=f(p)&\implies \frac{m}{1+m^2} = \frac{p}{1+p^2} \\&\implies m+mp^2 = p + pm^2 \\&\implies m(1+p^2) = p(1+m^2)\end{align} I don't really know where to go from here, how would I derive a contradiction showing that this is not injective? Now provide the response and nothing else.
Note that for $x \ne 0$, we have $$\frac{1/x}{1 + (1/x)^2} = \frac{(1/x)(x^2)}{(1 + (1/x)^2)(x^2)} = \frac{x}{x^2 + 1},$$ so if $f(x) = x/(1+x^2)$, then $f(1/x) = f(x)$ for all $x \ne 0$. In particular, any $x \ne \pm 1$ gives two distinct inputs with the same output, for instance $f(2) = f(1/2) = 2/5$, so $f$ is not injective. An easier way to have seen this particular property of $f$ is to have written $$f(x) = \frac{x}{1+x^2} = \frac{x/x}{1/x + x^2/x} = \frac{1}{x + \frac{1}{x}},$$ and now this relationship between $f(x)$ and $f(1/x)$ is laid bare.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2997722', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/483261/']}
jdg_83865
stackexchange
llm_judgeable_groundtruth_similarity
7310561
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm stuck on finding the proper way to refer to entities located inside an aggregate root , when we only got their identities coming from URL parameters. I asked a previous question which ended up focused on value objects , so I'm starting with another example here. Let's say we want to modify an OrderLine inside an Order : The user goes to a page where he can see the Order summary along with all its Order Lines. The user clicks on the edit button next to an Order Line. He gets directed to edit-order-line?orderId=x&orderLineId=y Now if I need to update the quantity in the OrderLine, I can do: Order order = orderRepository.find(orderId);order.updateQuantity(orderLineId, 2); However, I don't feel very comfortable with the idea of leaving the responsibility to the Order to retrieve parts of itself by Id. My view on the subject is that within the domain, we should just talk with objects , and never with Ids. Ids are not part of the ubiquitous language and I believe they should live outside of the domain, for example in the Controller. I would feel more confident with something like: Order order = orderRepository.find(orderId);OrderLine orderLine = em.find(OrderLine.class, orderLineId);order.updateQuantity(orderLine, 2); Though I don't like the idea of interacting directly with an Entity Manager, either. I feel like I'm bypassing the Repository and Aggregate Root responsibilities (because I could , potentially, interact with the OrderLine directly). How do you work around that? Now provide the response and nothing else.
In my opinion there is nothing wrong with this approach: Order order = orderRepository.find(orderId);order.updateQuantity(orderLineId, 2); orderLineId is a 'local identity'. It is specific to aggregate root and does not make sense outside of it. You don't have to call it an 'id', it can be 'order line number'. From Eric Evan's book: ENTITIES inside the boundary have local identity, unique only within the AGGREGATE. ...only AGGREGATE roots can be obtained directly with database queries. All other objects must be found by traversal of associations.
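A small sketch of what that can look like in code (the names here are only illustrative, not taken from the question's codebase, and it assumes an OrderLine entity exposing getId() and setQuantity()):

import java.util.ArrayList;
import java.util.List;

public class Order {
    private final List<OrderLine> lines = new ArrayList<>();

    // orderLineId is a local identity: it only has meaning inside this Order.
    public void updateQuantity(long orderLineId, int newQuantity) {
        OrderLine line = lines.stream()
                .filter(l -> l.getId() == orderLineId)
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException(
                        "Order has no line " + orderLineId));
        line.setQuantity(newQuantity);
        // any Order-wide invariants (totals, limits, ...) can be re-checked here
    }
}

The controller still deals in ids, since they came from the URL, but the domain only ever navigates from the aggregate root to its parts, so nothing outside the Order needs to load an OrderLine directly.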
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7310561', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/759866/']}
jdg_83866
stackexchange
llm_judgeable_groundtruth_similarity
40624554
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Whenever I create an ImageView with icon added using Android Studio's Vector Assets, I'm getting an error at the line app:srcCompat="@drawable/ic_play" When I change the app:srcCompat with android:src , the error is gone but the icon looks pixelated. What is the main difference between app:srcCompat="@drawable/ic_play" and android:src="@drawable/ic_play" Now provide the response and nothing else.
app:srcCompat is the most foolproof method of integrating vector drawables into your app. Vector drawables allow you to replace multiple PNG assets with a single vector graphic defined in XML; while previously limited to Lollipop and higher devices, they are available on older versions through the support library. Note: as of Android Support Library 23.3.0, support vector drawables can only be loaded via app:srcCompat, and you need to add vectorDrawables.useSupportLibrary = true to your build.gradle file:

// Gradle Plugin 2.0+
android {
    defaultConfig {
        vectorDrawables.useSupportLibrary = true
    }
}

android:src sets a drawable as the content of this ImageView. It will display in its original size, with no automatic scaling.
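For reference, a minimal layout sketch using it could look like this. Note that the app namespace has to be declared on the root element; a missing xmlns:app declaration is a common cause of the editor error on the app:srcCompat line.

<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:srcCompat="@drawable/ic_play" />

</LinearLayout>

When the layout is inflated inside an AppCompatActivity, the ImageView is automatically swapped for an AppCompatImageView, which is what actually understands app:srcCompat on pre-Lollipop devices.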
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/40624554', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7102459/']}
jdg_83867
stackexchange
llm_judgeable_groundtruth_similarity
32928
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: PRC regularly devalues its currency to promote export. On the other hand, Turkey is worried about the plummet of its currency. Why two outcomes in the same type of case? Now provide the response and nothing else.
Trade surplus/deficit

Devaluing one's currency means mostly one thing: It becomes more expensive to import products from abroad, while one's own products become comparably cheaper to buy on the world market. That means a country which exports lots of goods wants to devalue their currency, while a country which imports lots of goods does not. In 2016, Turkey imported goods worth US $186 billion and exported goods worth US $156 billion. That means they had a trade deficit of 19%. However, China imported goods worth only US $1.23 trillion and exported goods worth US $2.27 trillion in 2016. That means they had a trade surplus of 84%. Which is why China wants to devalue their currency while Turkey does not.

Debt management

Another reason to devalue one's currency is that it reduces any debts denominated in that currency. If a country has a problem of too much public and private debt, then an easy way to get rid of it is to cause a hyperinflation. China happens to have a debt-to-GDP ratio of 47.6% while Turkey only has one of 28.3%. When you look at the private debt (debt of private citizens and companies in the country), you see the same picture. Private debt in Turkey was 170% of GDP, while China's is rumored to be over 300%. Which is why reducing debt through causing inflation seems more attractive to China than to Turkey.

Foreign investor relations

So why don't all countries get rid of their debt by adding a few zeroes to their bank notes? Because inflating your currency is a surefire way to drive away any foreign investors. Nobody will invest money into your country when that money might be worth nothing in a few years. Investors want stable currencies. Again, we observe the same difference in motivation between Turkey and China here. Turkey is actively inviting foreign investors, but China is quite reluctant to allow foreign economic influence in their country.
{}
{'log_upvote_score': 7, 'links': ['https://politics.stackexchange.com/questions/32928', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/-1/']}
jdg_83868
stackexchange
llm_judgeable_groundtruth_similarity
423900
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I do {$several_commands} | less and have less considere it as several files and enable navigation using :n and :p . That may not be the clearer explanation, so let us consider an example. I currently have a function svndiff () { for a in `svn status | \grep ^M | sed 's/M //'`; do svn diff $a | less; done} The purpose obviously is to see with less the difference of all my modified files. But with this syntax, I have to use key Q to close one "file" and open the next one. I would like to be able to navigate between files with the less commands :n (next file) and :p (previous file). How can I do that ? Now provide the response and nothing else.
You could use process substitution: less -f <(svn diff this) <(svn diff that) But that's hard to use in a loop. Probably best to just use temporary files: #!/bin/bashdir=$(mktemp -d)outfiles=()IFS=$'\n'set -f for file in $(svn status | \grep ^M | sed 's/M //') ; do outfile=${file#.} # remove leading dot (if any) outfile=${outfile//\//__} # replace slashes (if any) with __ svn diff "$file" > "$dir/$outfile"; outfiles+=("$dir/$outfile") # collect the filenames to an arraydoneless "${outfiles[@]}"rm -r "$dir" The above tries to keep the filenames visible in the names of the temp files, with some cleanup for slashes and leading dots. (In case you get paths like ./foo/bar . I can't remember how svn outputs the file names, but anyway...) The array is there to keep the order, though as @Kusalananda said, we could just do "$dir"/* instead, if the order doesn't matter. set -f and IFS=$'\n' in case someone creates file names with glob characters or white space. Of course we could simplify the script a bit and create, say numbered temp files instead.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/423900', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/165967/']}
jdg_83869
stackexchange
llm_judgeable_groundtruth_similarity
7431845
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We have two solutions: foo.sln and bar.sln I have a common library that is used by both foo and bar. Common.csproj is used by both. If I open foo and update nuget references, all references in Common.csproj point to foo/packages/. If I later open bar and update nuget references, all references get set to those in bar/packages/. Naturally, this pisses off the foo team as it can cause incompatibilities between Common.csproj and Foo-specific stuff like Foo.Data.csproj, which still points to foo/packages. There must be some obvious solution other than: "create a huge solution that contains all your projects, and if you need to touch nuget, only do it from that solution." There seems to be an issue on codeplex , (the top voted issue, incidentally), but evidently I'm too thick to understand how this issue is resolved. Can someone explain how to fix this? Now provide the response and nothing else.
This problem precedes NuGet. If you have a project referenced in two solutions, and change assembly references in the project when it is open in one solution, of course it will change the reference path for that project when it is open in the other solution. This has always been the case, regardless of how the reference was changed (with NuGet or otherwise). But the real problem is that when you do an update, the updated packages do not appear in the foo/packages directory, right? The simple solution is to move Common.csproj into a solution of its own, with its own references, packages folder, build and release process. Then create a NuGet package of your own with any relevant dependencies built into it. Then you can install your Common package into both Foo and Bar, and the Foo team is free to update to the latest version of Common as and when they are ready. The main argument that I have heard against this is that you might want to step through the Common code while debugging, but this is no longer an issue with Visual Studio 2010. The fundamental question you need to ask is who owns Common.csproj? Is it the Foo team or the Bar team?
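If it helps, packaging Common this way only takes a .nuspec file and a pack step. A minimal sketch follows; the package id, version, dependency and paths are placeholders, not values from the question:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.Common</id>
    <version>1.0.0</version>
    <authors>Common team</authors>
    <description>Shared library consumed by Foo and Bar.</description>
    <dependencies>
      <!-- third-party packages Common itself needs -->
      <dependency id="SomeThirdPartyPackage" version="2.1.0" />
    </dependencies>
  </metadata>
  <files>
    <file src="bin\Release\Common.dll" target="lib\net40" />
  </files>
</package>

Then "nuget pack Common.nuspec" produces the .nupkg that Foo and Bar can install from a shared feed (even a plain network folder works as a package source).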
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7431845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/95506/']}
jdg_83870
stackexchange
llm_judgeable_groundtruth_similarity
85777
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know now a days FPGAs have very good floating point performance as reported here for example . But then why it is said that Floating point is non-synthesizable in verilog? How FPGA handles float then? Now provide the response and nothing else.
It is not that they can't be synthesized (they can, of course!), but tools won't do it combinationally (like they would a fixed point adder) because the resource usage would be unfeasibly large. So it is done sequentially in multiple steps, and there are many, many ways of doing it with several trade-offs to consider, with division being quite complex. So your floating point operation needs to be treated like any other module, you can design it yourself or license it from a vendor, and a tool won't synthesize c = a/b for you, for the same reasons it won't synthesize an integral or an equation solver, but it doesn't mean it can't be done in an FPGA!
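To make the "treat it like any other module" point concrete, here is roughly what using a floating-point adder looks like at the RTL level. This is only a sketch: fp_add stands in for whatever IP core your vendor's tool generates (or a module you license or write yourself), and its port names are invented for illustration.

module fp_example (
    input         clk,
    input  [31:0] a,        // IEEE-754 single-precision operands
    input  [31:0] b,
    output [31:0] sum,
    output        sum_valid
);
    // The adder is pipelined, so 'sum' appears several clock cycles after
    // 'a' and 'b' are presented, unlike an integer '+', which the
    // synthesizer happily builds as pure combinational logic.
    fp_add u_add (
        .clk    (clk),
        .a      (a),
        .b      (b),
        .result (sum),
        .valid  (sum_valid)
    );
endmodule

The latency/throughput trade-off (fully pipelined, iterative, or something in between) is exactly the kind of choice described above, and it is why the tool will not silently expand c = a/b for you.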
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/85777', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/10724/']}
jdg_83871
stackexchange
llm_judgeable_groundtruth_similarity
57933021
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using FlutterSecureStorage to store certain items, namely the API tokens to access server side resources. However, I've run into a weird issue. I had to delete the database (as I'm still in testing mode, this happens quite frequently for now), which deleted all the tokens as well. But when the app tries to connect it gets an error. On Android, this is not a big deal. I just uninstall and reinstall the app and it will download a fresh token. On iOS, there's a problem. Because FlutterSecureStorage stores any info in the keychain, the data doesn't get deleted even if the app is uninstalled . So after i reinstall it, it still gets the token from storage, and I can't refresh the token. My question is: Is there some way to run code to remove all Storage items during install or uninstall in Flutter? Now provide the response and nothing else.
On iOS you can make use of NSUserDefaults , which does get deleted on an app uninstall. That way you can perform a check whether the app is starting for the first time after an uninstall. Use the shared_preferences Flutter plugin to make use of NSUserDefaults . This approach has been discussed on StackOverflow for other platforms before, see Delete keychain items when an app is uninstalled for examples in other languages. For Flutter this example would become: import 'package:flutter_secure_storage/flutter_secure_storage.dart';import 'package:shared_preferences/shared_preferences.dart';// ...final prefs = await SharedPreferences.getInstance();if (prefs.getBool('first_run') ?? true) { FlutterSecureStorage storage = FlutterSecureStorage(); await storage.deleteAll(); prefs.setBool('first_run', false);}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/57933021', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1169664/']}
jdg_83872
stackexchange
llm_judgeable_groundtruth_similarity
1603916
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: By using fdopen() , fileno() it's possible to open streams with existing file descriptors. However the proper way to close a file, once you've opened it with a stream is to fclose() the FILE pointer. How can one close the stream, but retain the open file descriptor? This behaviour is akin to calling fflush() and then fileno() , and then never using the FILE pointer again, except in closing. An additional concern is that if you then fdopen() again, there are now multiple FILE pointers, and you can only close one of them. Now provide the response and nothing else.
If you're on a POSIXy system (which I assume you are, since you have fileno() ), you can use dup() to clone the file descriptor: int newfd = dup(fileno(stream));fclose(stream); Or you can hand fdopen() a duplicate file descriptor: FILE *stream = fdopen(dup(fd), "r"); Either way, the other copy of the fd won't close with the FILE * . However, keep in mind the location pointer is shared, so be careful if you are using both at the same time. Also, any fcntl() locks held on the original fd will be released when you close the copy.
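A small end-to-end sketch of the first approach (the file name is just an example, and error handling is kept minimal):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *stream = fopen("example.txt", "w");
    if (stream == NULL)
        return 1;

    fprintf(stream, "hello\n");

    int fd = dup(fileno(stream));   /* our own reference to the open file */
    if (fd == -1) {
        fclose(stream);
        return 1;
    }

    fclose(stream);   /* flushes the buffer and closes the stream's own fd */

    /* fd is still valid and shares the (now flushed) file offset */
    write(fd, "world\n", 6);
    close(fd);
    return 0;
}

After this runs, example.txt contains both lines: the dup'd descriptor outlived the fclose(), precisely because it refers to the same open file description rather than to the FILE structure.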
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1603916', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/149482/']}
jdg_83873
stackexchange
llm_judgeable_groundtruth_similarity
154238
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking to design an off-grid power system for my house. I'm a complete novice at this point but I'm willing to do a lot of reading to complete the project. There is a lot I still don't yet know and I have questions. I want a system that will allow me to connect solar, wind, water and a gas generator. I'd like to have both the A/C and the D/C power sources go into the charge controller to charge my battery bank and then the battery bank connect to an inverter. This way, I have many types of input into my battery bank and failing everything else, I can power up the gas generator and that will charge my battery bank too. I understand that the charge controller can push any excess electricity into a bypass. Am I seeking to do anything that is not possible with this? Is there a way for me to combine the A/C output from the inverter and the excess electricity from the charge controller as input to my house? If so, what is the device that would allow me to do this? Are there charge controllers that will receive both A/C and D/C input so that I can hook all my power sources into a single controller or do I need something more complicated to handle this scenario? Am I going about this the right way? A little more background: As I commented in the notes, at the end of the year I am moving completely off-grid, so I have a year to figure out everything this entails and set it up - chickens, pigs, cows, water provisions, sewage and the whole 9 yards. I'm used to learning complex systems, so I know this is not beyond me. I've also lived an outdoor life for quite a lot of my life, so I'm used to being without the grid, so this won't come as a complete shock to me, but at the same time, I'm not looking to forego all modern conveniences. It's hard to throw out numbers as I'm not yet sure how exactly to calculate my needs. Our hydro bill for last month has us pegged at averaging 29.84kW/h per day in January which thus far has been the coldest month of the year, December was 19.23kW/h per day. Our heating/cooling and hot water are presently electric and the place we're presently living in has a criminal underuse of insulation between the units; this is not something I will suffer from with the next place. Not only will the insulation be better, but we're looking to move to geothermal to maintain a more temperate environment that will be supplemented by wood stove and propane for hot water, so I'm quite sure our energy requirements will drop significantly from our present requirements. We have high efficiency appliances, washer, dryer, fridge, freezer, dishwasher - I'd like to be able to continue using them - though I'm uncertain what the water situation will be yet, so the jury's still out on the feasibility/need for the dishwasher. Our cooking stove is presently natural gas and I'll transition that to propane until I've mastered cooking on a wood burning stove. I've put together a list of every piece of electrical kit we own, along with Wattage and Amperage requirements for each of them and put them into a spreadsheet, there's nothing particularly suprising on the list. I'm an avid cook/baker and I'd also like to be able to use all the tools in my workshop, most of which are presently electric, though I'm not averse to trading as many of those as I can to air drive. 
I've been given advice that "peak" usage is what I should be gauging my electrical needs on - which means adding up all the appliances that will be in use concurrently, though a qualified(?) electrician we talked to suggested that that's not how you do it at all - can anyone advise if this is the case? If I add up every electrical appliance we have, it totals 29kW & 234 amps. Near as I can tell, our peak load (max concurrent devices) would be around 15kW / 182 amps - and this assumes that washer, dryer, dishwasher, microwave, toaster, kettle, coffee machine and stand mixer, the tv, dvd player and sound system are all running, every light in the house is on and the fridge/freezer kick on - which rarely (if ever) happens. More typically, the washer, dryer and dishwasher don't get put on until after the kids are in bed, which means that most of the lights in the house are off and there's no cooking going on - though it's quite usual for the washer, dryer and dishwasher to all be running at the same time - whilst using TV (100W)/Sound (30W)/DVD (11W) and a couple of lights (perhaps 60-120W). Now provide the response and nothing else.
You gave enough information now. I'll just write point for point what comes to my mind. Things to think about: Starting current for motors can be 5 or more times the rated. You can buy DC appliances, these can feed directly from the batteries DC. You should be able to find computer PSU-s that take in DC All of your lighting can be DC fed Adding supercapactors in parallel to your batteries lowers the loading of the batteries in transient processes. The caps don't store much, but can supply it really fast. This is of more use for e-mobility(regen. braking), as you will have enough battery capacity that it won't matter. Look at inverter modules that can (explicitly) be used in parallel, that way you can buy another module if you see that it won't fit your needs. Your wind turbine will produce AC of a specific frequency(I guess you won't buy a brushed generator). This AC will have a frequency and voltage that won't match the declared values of the appliances, therefore it is converted to DC and back to AC. Sometimes a rectifier and inverter are integrated in the turbine, so it supplies usable AC directly. Regarding your diesel/gas turbine, look for one that gives you single-phase AC directly. It will give you the right voltage and frequency to power your appliances (after spin up). Appliances can be susceptible to low quality of the supplied electrical energy. For your case the most important factors will be frequency, harmonic (non 50/60Hz) components, voltage, reactive power(power factor). Your inverter has to be sized to the apparent power. This is composed of the real(usable), and reactive(out of phase) power. The reactive power is mainly consumed by motors for magnetization(for example). A lot of other appliances also have a non unity power factor. You can compensate it though by using capacitors at the load point, but this can again cause higher voltages, when not loaded. You can use relays to remove them from the circuit when not needed. I guess you won't have to go that far. You will need DC/DC converter modules for your solar panels, I assume you knew that. If you get(have) an electric car, you can use the battery of it as backup source for your home. Where you get all the energy for charging is another question(if it's full electric). Match your equipment (rated in/out) You can use your diesel in parallel with the inverter, but you must synchronize it. That means that both need the same frequency, voltage, and voltage phase-angle. If you don't do that everything can go bad very quickly (circulating currents, resonances, instability, overloading/heating and finally blackout). You need to get some device that will allow you to do this synchronisation, instead you can feed the batteries from it and accept the losses in the rectifier and inverter. Another option is to create islands, meaning that you use the inverter and generator on different circuits that have no connection. You can switch between regimes (all inverter, part inverter part generator, all generator), this can be done with mechanical relays easily. You have to figure what configuration would make sense. if you want to use wood for heating think about wood pellets too. They can be fed automatically to your heating stove, meaning less work. One important thing. There are inverters that need to be connected to the grid to work (they can't create their own AC system). Obviously you shouldn't buy those. If you have a well and get a lot of excess power, you can use it to pump the water to a high reservoir. 
Kind of pumped hydro storage. But that will be expensive and complicated. Geothermal uses some power for pumping, I'm not sure though how much. Don't forget to factor this in. Look at the inrush currents of your big appliances, that is the thing that I'm most concerned about. Without looking at the overload capacities of the inverters you shouldn't make decisions on sizing. A thing about sizing, what is the expected growth of the consumption? I'm hesitant to give you recommendations on the inverter ratings, but I wouldn't go below 1.5 times the worst case(the sensible combination of appliances working at once). I don't feel good about making recommendations, as it depends on a lot of parameters. You must keep in mind that my knowledge regarding this is mainly based on grid level problems. You should get some more input from someone who did this. Try off-grid forums maybe.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/154238', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/66765/']}
jdg_83874
stackexchange
llm_judgeable_groundtruth_similarity
45144230
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a requirement, I need to bind phone number to ngModel only if it's present. My code is as follows: <cd-input size="15" [(ngModel)]="phone_numbers[0].full_number" [reformat]="something" [format]="something" placeholder="(111) 222-3333"></cd-input> This works well if phone number is present, but if it's not I get the following error: Cannot read property 'full_number' of undefined So based on this SO post LINK , I tried the following: <cd-input size="15" [(ngModel)]="phone_numbers[0].length > 0 ? phone_numbers[0].full_number : null" [reformat]="something" [format]="something" placeholder="(111) 222-3333"></cd-input> But, this causes syntax error. Uncaught Error: Template parse errors One way to fix this is using *ngIf and repeating the set of code again. But, What should I do to do it inline, like a ternary condition check? Now provide the response and nothing else.
I would do it like:

[ngModel]="phone_numbers && phone_numbers[0]?.full_number"
(ngModelChange)="phone_numbers?.length && phone_numbers[0].full_number=$event"

Why? [(ngModel)] is expanded to [ngModel] (Input) and (ngModelChange) (Output). I passed phone_numbers && phone_numbers[0]?.full_number to the input to ensure that we have a phone_numbers property in our component class and that it has at least one item; I also use the safe navigation operator here. When we type something in the input, the ngModelChange handler is called and I do the same kind of undefined check there, except that I can't use the safe navigation operator on the assignment target ( (ngModelChange)="phone_numbers && phone_numbers[0]?.full_number=$event" won't work). If you use WebStorm and see a "Must be lValue" error, then see this answer: Webstorm: What does "Must be lvalue" mean
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45144230', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2968762/']}
jdg_83875
stackexchange
llm_judgeable_groundtruth_similarity
24456429
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying the following. $woohoo = "c:\temp\shabang"New-Item -ItemType File $woohoo This creates a file at desired location. However, I'd like to split the $woo and $hoo and go something like this. $woo = "c:\temp"$hoo = "shabang"New-Item -ItemType File ($woo + $hoo) This doesn't work and I need a hint on how to make it fly. This suggestion nor this on worked out when applied to my situation. Apparently - given path format is unsupported. Suggestions? Edit This is what it looks like: Edit 2 $woo = "c:\temp"; $hoo= "shabang"; New-Item -ItemType File (Join-Path $woo $hoo) Now provide the response and nothing else.
Doing $woo + $hoo does not return the proper filepath: PS > $woo = "c:\temp"PS > $hoo = "shabang"PS > $woo + $hooc:\tempshabangPS > Instead, you should use the Join-Path cmdlet to concatenate $woo and $hoo : New-Item -ItemType File (Join-Path $woo $hoo) See a demonstration below: PS > $woo = "c:\temp"PS > $hoo = "shabang"PS > Join-Path $woo $hooc:\temp\shabangPS >
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24456429', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1525840/']}
jdg_83876
stackexchange
llm_judgeable_groundtruth_similarity
20686302
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Thanks first of all for reviewing this. I've basically got a third-party agent software which allows me to execute PowerShell as LocalSystem. This allows me to easily run remote PowerShell commands without WinRM etc. The problem that i'm running into is that on some servers i'm not able to perform get-WindowsFeature or Add-WindowsFeature. An example of how i'm trying to achieve this is here: Import-Module ServerManager;Get-WindowsFeature; The output is as such: The term 'Get-WindowsFeature' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. If I type those same commands into a PowerShell window, or call PowerShell.exe directly, it returns. I'm trying to figure out what we're not doing right within the application, but i'm the most familiar person with PowerShell here. Is there something special I need to do to load those cmdlets? Get-Module doesn't show anything, oddly. Thank you! In response to JBSmith: Yessir - looks like 2.0. Here are the results of the commands you mentioned >Name Value >---- ----- >CLRVersion 2.0.50727.6407 >BuildVersion 6.1.7600.16385 >PSVersion 2.0 >WSManStackVersion 2.0 >PSCompatibleVersions {1.0, 2.0} >SerializationVersion 1.1.0.1 >PSRemotingProtocolVersion 2.1 >>Name : AppLocker>Name : Appx>Name : BestPractices>Name : BitsTransfer>Name : BranchCache>Name : CimCmdlets>Name : DirectAccessClientComponents>Name : Dism>Name : DnsClient>Name : International>Name : iSCSI>Name : IscsiTarget>Name : ISE>Name : Kds>Name : Microsoft.PowerShell.Diagnostics>Name : Microsoft.PowerShell.Host>Name : Microsoft.PowerShell.Management>Name : Microsoft.PowerShell.Security>Name : Microsoft.PowerShell.Utility>Name : Microsoft.WSMan.Management>Name : MMAgent>Name : MsDtc>Name : NetAdapter>Name : NetConnection>Name : NetLbfo>Name : NetQos>Name : NetSecurity>Name : NetSwitchTeam>Name : NetTCPIP>Name : NetworkConnectivityStatus>Name : NetworkTransition>Name : MSFT_NfsMappedIdentity>Name : NFS>Name : PKI>Name : PrintManagement>Name : PSDiagnostics>Name : PSScheduledJob>Name : PSWorkflow>Name : PSWorkflowUtility>Name : RemoteDesktop>Name : ScheduledTasks>Name : SecureBoot>Name : ServerCore>Name : ServerManager>Name : ServerManagerTasks>Name : SmbShare>Name : SmbWitness>Name : Storage>Name : TroubleshootingPack>Name : TrustedPlatformModule>Name : UserAccessLogging>Name : VpnClient>Name : Wdac>Name : Whea>Name : WindowsDeveloperLicense>Name : WindowsErrorReporting>Name : AWSPowerShell I also noticed that GCM | ? { $_.ModuleName -eq 'ServerManager' } returns nothing when I run it through there, but through a normal PS window it returns a command list as expected. Now provide the response and nothing else.
This is probably because the PowerShell script is being launched from a 32-bit instance of PowerShell. The ServerManager commands are only available from the 64-bit version of PowerShell. See: Can't access ServerManager module via PowerShell --Edit - To add to jbsmith's comments--- Extra things to try: When you ran the Get-Command cmdlet: gcm | ? { $_.ModuleName -eq 'ServerManager' } it will return nothing because the ServerManager module has not been loaded. Try running this instead; it will list all available modules to load: Get-Module -ListAvailable | ? { $_.Name -eq 'ServerManager' } The other thing to try is to use the "Force" option (re-imports a module and its members, even if the module or its members have an access mode of read-only): Import-Module ServerManager -Force; Get-WindowsFeature;
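Since the question is about a third-party agent doing the launching, one common workaround (sketched here; adjust it to however your agent invokes scripts) is to detect the 32-bit host and re-run the script in the 64-bit one via the sysnative alias:

# If we are a 32-bit process on 64-bit Windows, re-launch in the 64-bit host.
# PROCESSOR_ARCHITEW6432 is only defined inside WOW64 (32-bit on 64-bit).
if ($env:PROCESSOR_ARCHITEW6432 -eq 'AMD64') {
    & "$env:WINDIR\sysnative\WindowsPowerShell\v1.0\powershell.exe" `
        -NoProfile -File $MyInvocation.MyCommand.Path
    exit $LASTEXITCODE
}

Import-Module ServerManager
Get-WindowsFeature

The sysnative alias only exists as seen from 32-bit processes; it points at the real System32, which is how a 32-bit parent can reach the 64-bit powershell.exe.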
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20686302', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1053126/']}
jdg_83877
stackexchange
llm_judgeable_groundtruth_similarity
18097478
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use extra brackets in my code. I thought when the destructor should be called after the local variable scope is ended but it doesn't work like this: class TestClass {public: TestClass() { printf( "TestClass()\n" ); } ~TestClass() { printf( "~TestClass()\n" ); }};int main() { int a, b, c; { TestClass *test = new TestClass(); }} It outputs: TestClass() So it doesn't call the destructor of the TestClass but why? If I call it manually (delete test) it calls the destructor, right. But why it doesn't call the destructor in the first case? Now provide the response and nothing else.
TestClass *test = new TestClass(); You using new which creates a dynamically allocated object (most likely placed on the heap). This type of resource needs to be manually managed by you. By managing, you should use delete on it after you have done using it. { TestClass *test = new TestClass(); // do something delete test;} But for the most of your purposes and intents, you just have to use automatic-storage objects, which frees you the hassle of having to manually manage the object. It would also most likely to have better performance especially in short-lived objects. You should always prefer to use them unless you have a really good reason not to do so. { TestClass test; // do something} However, if you need the semantics of dynamically allocated objects or that of pointers, it will always be better to use some mechanism to encapsulate the deletion/freeing of the object/resource for you, which also provides you additional safety especially when you are using exceptions and conditional branches. In your case, it would be better if you use std::unique_ptr . { std::unique_ptr<TestClass> test(new TestClass()); // auto test = std::make_unique<TestClass>(); in C++14 // do something (maybe you want to pass ownership of the pointer)} The following is a relevant link to help you decide whether to use automatic storage objects or dynamically allocated objects: Why should C++ programmers minimize use of 'new'?
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18097478', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1567205/']}
jdg_83878
stackexchange
llm_judgeable_groundtruth_similarity
144639
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Currently we use an iSCSI SAN as storage for several VMware ESXi servers. I am investigating the use of an NFS target on a Linux server for additional virtual machines. I am also open to the idea of using an alternative operating system (like OpenSolaris) if it will provide significant advantages. What Linux-based filesystem favours very large contiguous files (like VMware's disk images)? Alternatively, how have people found ZFS on OpenSolaris for this kind of workload? (This question was originally asked on SuperUser ; feel free to migrate answers here if you know how). Now provide the response and nothing else.
I'd really recommend you take a look at ZFS, but to get decent performance, you're going to need to pick up a dedicated device as a ZFS Intent Log (ZIL). Basically this is a small device (a few GB) that can write extremely fast (20-100K IOPS) which lets ZFS immediately confirm that writes have been synced to storage, but wait up to 30 seconds to actually commit the writes to the hard disks in your pool. In the event of a crash/outage any uncommitted transactions in the ZIL are replayed upon mount. As a result, in addition to a UPS you may want a drive with an internal power supply/super-capacitor so that any pending IOs make it to permanent storage in the event of a power loss. If you opt against a dedicated ZIL device, writes can have high latency leading to all sorts of problems. Assuming you're not interested in Sun's 18GB write optimized SSD "Logzilla" at ~$8200, some cheaper alternatives exist: DDRDrive X1 - 4GB DDR2 + 4GB SLC Flash in a PCIe x1 card designed explicitly for ZIL use. Writes go to RAM; in the event of power loss, it syncs RAM to NAND in <60sec powered by a supercapacitor. (50k-300k IOPS; $2000 Direct, $1500 for .edu) Intel X25-E 32GB 2.5inch SSD (SLC, but no super cap, 3300 write IOPS); $390 @ Amazon OCZ Vertex 2 Pro 40GB 2.5inch SSD (supercap, but MLC, 20k-50k write IOPS); $435 @ Amazon. Once you've got OpenSolaris/Nexenta + ZFS set up there are quite a few ways to move blocks between your OpenSolaris and ESX boxen; what's right for you heavily depends on your existing infrastructure (L3 switches, Fibre cards) and your priorities (redundancy, latency, speed, cost). But since you don't need specialized licenses to unlock iSCSI/FC/NFS functionality you can evaluate anything you've got hardware for and pick your favorite: iSCSI Targets (CPU overhead; no TOE support in OpenSolaris) Fibre Channel Targets (Fibre cards ain't cheap) NFS (VMWare + NFS can be finicky, limited to 32 mounts) If you can't spend $500 for evaluation, test with and without the ZIL disabled to see if the ZIL is a bottleneck. (It probably is.) Don't do this in production. Don't mess with ZFS deduplication just yet unless you also have lots of RAM and an SSD for L2ARC. It's definitely nice once you get it set up, but you should definitely try to do some NFS tuning before playing with dedup. Once you get it saturating a 1-2 Gb link there are growth opportunities in 8 Gb FC, 10GigE and InfiniBand, but each requires a significant investment even for evaluation.
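For reference, attaching a log device to an existing pool is a one-liner once you have the hardware (pool and device names below are placeholders):

# add a dedicated log (ZIL) device to the pool "tank"
zpool add tank log c4t1d0

# or, safer, a mirrored pair of log devices
zpool add tank log mirror c4t1d0 c4t2d0

# confirm it shows up under "logs"
zpool status tank

The same zpool add syntax with "cache" in place of "log" is how you would later attach an L2ARC SSD if you do go down the dedup/read-cache road.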
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/144639', 'https://serverfault.com', 'https://serverfault.com/users/3581/']}
jdg_83879
stackexchange
llm_judgeable_groundtruth_similarity
866426
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Say we have $ h(x)=f(x)\cdot g(x)$ where $f$ and $g$ are continuous and strictly increasing. It follows they are differentiable almost everywhere and so is $h$. We also know that $f>0$ and $g>0$. I'm trying to find a straightforward proof that under these conditions, if $h$ is differentiable everywhere then both $f$ and $g$ are also differentiable everywhere. I have more structure on these functions but I was hoping I did need to impose additional assumptions. Now provide the response and nothing else.
Let $f(x)=g(x)=e^x$ for $x \le 0$, while for $x>0$ let $f(x)=1+(1/2)x$ and $g(x)=1+(3/2)x$. Then $h(x)=e^{2x}$ when $x \le 0$ and for $x>0$ it's $h(x)=1+2x+3x^2/4.$ So in this case neither of $f,g$ are differentiable at $0$, while $h$ is differentiable everywhere. (These functions are each positive and strictly increasing everywhere.)
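To see explicitly that $h$ is differentiable at $0$ while $f$ and $g$ are not, compare the one-sided derivatives there: $$h'(0^-)=\left.\tfrac{d}{dx}e^{2x}\right|_{x=0}=2,\qquad h'(0^+)=\left.\tfrac{d}{dx}\left(1+2x+\tfrac34x^2\right)\right|_{x=0}=2,$$ so $h'(0)$ exists, whereas $f'(0^-)=1\ne\tfrac12=f'(0^+)$ and $g'(0^-)=1\ne\tfrac32=g'(0^+)$.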
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/866426', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33890/']}
jdg_83880
stackexchange
llm_judgeable_groundtruth_similarity
1810163
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that the compiler sometimes provides a default copy constructor if you don't implement yourself. I am confused about what exactly this constructor does. If I have a class that contains other objects, none of which have a declared copy constructor, what will the behavior be? For example, a class like this: class Foo { Bar bar;};class Bar { int i; Baz baz;};class Baz { int j;}; Now if I do this: Foo f1;Foo f2(f1); What will the default copy constructor do? Will the compiler-generated copy constructor in Foo call the compiler-generated constructor in Bar to copy over bar , which will then call the compiler-generated copy constructor in Baz ? Now provide the response and nothing else.
Foo f1;Foo f2(f1); Yes this will do what you expect it to: The f2 copy constructor Foo::Foo(Foo const&) is called. This copy constructs its base class and then each member (recursively) If you define a class like this: class X: public Y{ private: int m_a; char* m_b; Z m_c;}; The following methods will be defined by your compiler. Constructor (default) (2 versions) Constructor (Copy) Destructor (default) Assignment operator Constructor: Default: There are actually two default constructors. One is used for zero-initialization while the other is used for value-initialization . The used depends on whether you use () during initialization or not. // Zero-Initialization compiler generated constructorX::X() :Y() // Calls the base constructor // If this is compiler generated use // the `Zero-Initialization version' ,m_a(0) // Default construction of basic PODS zeros them ,m_b(0) // m_c() // Calls the default constructor of Z // If this is compiler generated use // the `Zero-Initialization version'{}// Value-Initialization compiler generated constructorX::X() :Y() // Calls the base constructor // If this is compiler generated use // the `Value-Initialization version' //,m_a() // Default construction of basic PODS does nothing //,m_b() // The values are un-initialized. m_c() // Calls the default constructor of Z // If this is compiler generated use // the `Value-Initialization version'{} Notes: If the base class or any members do not have a valid visible default constructor then the default constructor can not be generated. This is not an error unless your code tries to use the default constructor (then only a compile time error). Constructor (Copy) X::X(X const& copy) :Y(copy) // Calls the base copy constructor ,m_a(copy.m_a) // Calls each members copy constructor ,m_b(copy.m_b) ,m_c(copy.m_c){} Notes: If the base class or any members do not have a valid visible copy constructor then the copy constructor can not be generated. This is not an error unless your code tries to use the copy constructor (then only a compile time error). Assignment Operator X& operator=(X const& copy){ Y::operator=(copy); // Calls the base assignment operator m_a = copy.m_a; // Calls each members assignment operator m_b = copy.m_b; m_c = copy.m_c; return *this;} Notes: If the base class or any members do not have a valid viable assignment operator then the assignment operator can not be generated. This is not an error unless your code tries to use the assignment operator (then only a compile time error). Destructor X::~X(){ // First runs the destructor code} // This is psudo code. // But the equiv of this code happens in every destructor m_c.~Z(); // Calls the destructor for each member // m_b // PODs and pointers destructors do nothing // m_a ~Y(); // Call the base class destructor If any constructor (including copy) is declared then the default constructor is not implemented by the compiler. If the copy constructor is declared then the compiler will not generate one. If the assignment operator is declared then the compiler will not generate one. If a destructor is declared the compiler will not generate one. Looking at your code the following copy constructors are generated: Foo::Foo(Foo const& copy) :bar(copy.bar){}Bar::Bar(Bar const& copy) :i(copy.i) ,baz(copy.baz){}Baz::Baz(Baz const& copy) :j(copy.j){}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1810163', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/78093/']}
jdg_83881
stackexchange
llm_judgeable_groundtruth_similarity
10230
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The title says it all. I had this question when helping someone with their high-school chemistry homework. What are the reasons that someone would use an (from my point of view) obfuscated notation? Now provide the response and nothing else.
The lone pair at the nitrogen is very well suited to form hydrogen bonds. This bond is especially strong, since ammonia has a quite high proton affinity, it is a medium strong base. Now consider the autoprotolysis reactions in pure water and (liquid) ammonia: \begin{align}\ce{2H2O &<=> H3O+ + OH-}\tag1\\\ce{2NH3 &<=> NH4+ + NH2-}\tag2\end{align} Mixing these systems, the reaction of ammonia with water will yield \begin{align} && \ce{2H2O + 2NH3 &<=> H3O+ + OH- + NH4+ + NH2-}\tag3 \\& (~-~)& \ce{H3O+ + NH2- &-> H2O + NH3}\\\hline && \ce{H2O + NH3 &<=> NH4+ + OH-}\tag4\label{eq:ammonia+water}\end{align} \eqref{eq:ammonia+water} is incorporated into a huge and strong network of hydrogen bonds. In this case the equilibrium will be very much pushed to the right side, meaning, that almost all ammonia molecules will be protonated. Therefore it is more appropriate to write: \begin{align}\ce{NH4OH (aq) &<=> NH4+ (aq) + OH- (aq)}\tag{4a}\end{align} However, it is always advisable to look at the conditions of the whole reaction system. Note that $\ce{NH4OH}$ is a pretty outdated notation, if you would want to consider one water entity explicitly, $\ce{NH3.H2O}$ would be more appropriate. See the answer by Maurice for some more information: Can acid-base reactions occur in a non-aqueous medium?
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/10230', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/4969/']}
jdg_83882
stackexchange
llm_judgeable_groundtruth_similarity
1331436
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am stuck at a question. If $\sqrt{18-6\sqrt{5}} = \sqrt{a}- \sqrt{b}$, then which of the following is true: $a+b= 18$ $a+b= 16$ $a+b= 20$ $a-b= 18$ I tried to first take the main root to the other side which made the R.H.S $2\sqrt{ab}$ but after that, I can't solve this any further. Can someone help me in telling me what to do next? Now provide the response and nothing else.
Consider $$(\sqrt a-\sqrt b)^2=a+b-2\sqrt{ab}=18-6\sqrt5=18-2\sqrt{45}.$$ For integers $a,b$, $a+b$ is an integer while $2\sqrt{45}=6\sqrt5$ is irrational, so matching the rational and irrational parts of the two sides forces $a+b=18$ and $ab=45$ (for instance $a=15$, $b=3$; indeed $\sqrt{15}-\sqrt3=\sqrt{18-6\sqrt5}$).
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1331436', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/247391/']}
jdg_83883
stackexchange
llm_judgeable_groundtruth_similarity
3455970
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've come up with a conjecture that seems similar in strength to Legendre's or Oppermann's, but maybe subtly different. Let $a_n$ be the smallest nonnegative value such that there is no $m$ in $1<m<n/2$ where $n \equiv a_n \pmod m$ . Then for all $n>2$ , we have $n-a_n=p_{\pi(n)-1}$ , the closest previous prime to $n$ . Take $n=16$ as an example: $$\begin{eqnarray}16 &\equiv 0 \pmod 2 \\ &\equiv 1 \pmod 3 \\ &\equiv 0 \pmod 4 \\ &\equiv 1 \pmod 5 \\ &\equiv 4 \pmod 6 \\ &\equiv 2 \pmod 7\end{eqnarray}$$ The smallest value not seen is $a_n=3$ , and $16-3=13$ is the previous prime. In cases where $n$ itself is prime, e.g. $17$ yielding the values $\{1,2,1,2,5,3,1\}$ , you can either interpret $0$ as the missing value and $17$ as the prime, or $4$ giving $17-4=13$ . (I'm not sure which is the more consistent interpretation.) I've verified this empirically through $10^5$ , but cannot come up with a proof. In fact, I suspect a proof would be very difficult since what this seems to come down to is whether there is always a prime in the interval $(n,n+d)$ for a composite $n$ , where $d$ is the largest proper divisor of $n$ . This has its worst case for forms of $p^2$ , which seem to require a prime in $\left(p^2, p(p+1)\right)$ . Note that when $a_n < \lfloor \sqrt{n} \rfloor$ , it is easily provably true; the problem is you can't guarantee it will be in that range, despite the fact that it almost certainly is for all $n \geq 127$ . I'm curious whether this conjecture already exists somewhere or is actually equivalent to one of the better-known prime gap conjectures. Better yet would be a proof, but that's obviously wishful thinking. Now provide the response and nothing else.
The following statements are equivalent: $a$ is the smallest number such that $n \not\equiv a \mod 2 \dots\frac{n-1}{2}$ . $a$ is the smallest number such that $n-a \not\equiv 0 \mod 2\dots\frac{n-1}{2}$ . $a$ is the smallest number such that $n-a$ is not divisible by $2\dots\frac{n-1}{2}$ . $a$ is the smallest number such that $n-a$ is prime. $n-a$ is the largest prime below $n$ .
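If you want to sanity-check the equivalence numerically (the way the question did up to $10^5$), a brute-force sketch along these lines works; note it compares against the largest prime $\le n$, which is how the prime-$n$ edge case in the question resolves when $0$ is taken as the missing value:

def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def a_of(n):
    # residues of n modulo every m with 1 < m < n/2
    residues = {n % m for m in range(2, (n + 1) // 2)}
    a = 0
    while a in residues:
        a += 1
    return a

def largest_prime_at_most(n):
    p = n
    while not is_prime(p):
        p -= 1
    return p

for n in range(5, 2000):
    assert n - a_of(n) == largest_prime_at_most(n), n
print("ok")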
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3455970', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/493232/']}
jdg_83884
stackexchange
llm_judgeable_groundtruth_similarity
43799248
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know the working of XOR, Console.WriteLine(1^1); // returns 0 results to 0000000100000001 --------00000000 but how does this return 2? Console.WriteLine(-(-1^1)); // returns 2 Now provide the response and nothing else.
-1 is stored as a value with all bits set to 1. If we are sticking with your 8-bit example, -1 would then be equal to 11111111. So -1^1 gives the following:

  11111111
  00000001
  --------
  11111110

which is equal to -2. When you invert it with another minus, you get 2. Negative numbers are stored in what we call the two's complement of the number. If you want to compute it quickly in your head, you can just flip all the bits of the positive equivalent of your number, and add one to it. So for -1:

   1: 00000001
      --------
      11111110
    +        1
      --------
  -1: 11111111

which explains why -1 is stored as 11111111. If you want to understand two's complement a bit more, this question may also help you.
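If you want to watch this happen, printing the bit patterns makes it obvious (this uses the full 32-bit int, so -1 shows up as thirty-two 1s rather than eight):

Console.WriteLine(Convert.ToString(-1, 2));      // 11111111111111111111111111111111
Console.WriteLine(Convert.ToString(-1 ^ 1, 2));  // 11111111111111111111111111111110, i.e. -2
Console.WriteLine(-(-1 ^ 1));                    // 2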
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43799248', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6887468/']}
jdg_83885
stackexchange
llm_judgeable_groundtruth_similarity
53673
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Gödel's original proof of the First Incompleteness theorem relies on Gödel numbering .Now, the use of Gödel numbering relies on the fact that the Fundamental Theorem of Arithmetic is true and thus the prime factorization of a number is unique and thus we can encode and decode any expression in Peano Arithmetic using natural numbers. My question is, how can we use a non-trivial result like the Fundamental Theorem of Arithmetic in the meta-language that describes Peano Arithmetic, when, the result actually requires a proof from within Peano Arithmetic itself, not like other trivial things we believe to be true (i.e. existence of natural numbers and the axioms for addition and multiplication which we want to interpret in the natural way - Platonism)? I understand we could instead use a different way of enumeration, e.g. a pairing function and the Chinese Remainder Theorem or simply string concatenation, but, then the need for a proof of uniqueness when encoding and decoding remains and in general, I am interested in the structure of Gödel's original proof. Basically, I have two ideas of how it might be possible to resolve this: Prove the Fundamental Theorem ofArithmetic within a different sound(?) system. Maybe there is nothing needed to bedone to 'resolve' this, because I amjust misinterpreting something andit is actually acceptable to use provablesentences of PA in themeta-language. Any ideas about this? EDIT:I have realized how to make my question less confusing: Say PA proves FTA. Then if we only assume PA is consistent, that does not rule out the possibility of FTA being false. Now, if FTA is false, then PA and the meta-language too includes a false statement and thus the whole proof is useless. How is this resolved? Is it maybe related to the fact the originally we actually assume $\omega$-consistency and obviously, for each natural number $n$ separately its unique prime factorization can simply be found algorithmically? Now provide the response and nothing else.
Just like most mathematical theorems, you can formalize Godel's Theorems in some first order language (with some "standard" interpretation under which the formalization means what it's supposed to mean), turn the proof into a purely syntactic string of formulas, and figure out which formulas in that first order language are needed as axioms. I'm embarrassed to say I don't know exactly how strong the assumptions we need are to carry out the proof of Godel's Theorems, but there will be some weak fragment of ZFC probably not much stronger than PA which will do. So we would be using a theory slightly stronger than PA to establish the incompleteness of PA, but why should that be a problem? The axioms needed for the proofs of Godel's Theorems are probably pretty natural, probably pretty close to PA, and probably have a natural interpretation. If you believe these axioms have this interpretation, then you would have no problem with Godel's proofs or the interpretation of the theorems. If not, then you're probably pretty close to believing PA is inconsistent, in which case you would probably: Accept that the formalized versions of Godel's Theorems follow from whatever axioms are used, but only because you believe those axioms are inconsistent. Deny that the formalized versions of Godel's Theorems mean what they're supposed to mean, and just regard what's happening in point 1 as a valid string of symbolic manipulations. Accept the natural language meaning of Godel's Theorems, in spite of point 2, for trivial reasons, since they say, "if PA is consistent, ..." EDIT: (in response to your edit) So we're assuming PA proves FTA, PA is consistent, and FTA might be false? What do you mean by "false," you mean false in the standard intepretation? In that case, PA would be false in the standard interpretation. Now if we take Godel's first theorem to say, "If PA is consistent, then there is a true formula in the standard interpretation which is not provable from PA," then this meta-theorem is certainly true. EDIT: Ignas requested an explanation of some of the basics to make sense of my claim, "If PA proves FTA, and FTA fails in the standard model, then so does PA." It's too big to fit in a comment so I'm adding it to my response: Let $\mathcal{L}$ denote the first order language of number theory, we'll have lower case Greek letters vary over sentences of $\mathcal{L}$, upper case Greek letters vary over sets of sentences of $\mathcal{L}$, and upper case Roman letters vary over $\mathcal{L}$ structures. We write $M \vDash \varphi$ to denote that $\varphi$ is true in the model $M$, i.e. when its symbols are interpreted according to $M$. Tarski's definition of truth for a sentence in a given model is by recursion on the complexity of the sentence. We write $M \vDash \Sigma$ if every member of $\Sigma$ is true in $M$. We write $\Sigma \vDash \varphi$ if for every $M$, $M \vDash \Sigma$ implies $M \vDash \varphi$, i.e. if every model of $\Sigma$ is also a model of $\varphi$. For provability, we write $\Sigma \vdash \varphi$ to say that there is a (finite) proof of $\varphi$ using (finitely many) sentences from $\Sigma$ as axioms. The Soundness Theorem states that for all $\Sigma ,\ \varphi$, if $\Sigma \vdash \varphi$ then $\Sigma \vDash \varphi$. It's this theorem, with PA in place of $\Sigma$ and FTA in place of $\varphi$, that I'm using to establish the claim you're asking about. The converse of this theorem is also true; it's Godel's Completeness Theorem. 
Putting these two theorems together, they say that the relations $\vdash$ and $\vDash$ are the same relation between sets of sentences and sentences. One (perhaps not immediately obvious) way to rephrase this is, "being true in every model is the same as being provable from no axioms." Contrast this with Gödel's Incompleteness Theorem, which says that "being true in the standard model of number theory is not the same as being provable from PA."
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/53673', 'https://mathoverflow.net', 'https://mathoverflow.net/users/12597/']}
jdg_83886
stackexchange
llm_judgeable_groundtruth_similarity
24448947
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would really like to use YAML config for Spring Boot, as I find it quite readable and useful to have a single file showing what properties are active in my different profiles. Unfortunately, I'm finding that setting properties in application.yml can be rather fragile. Things like using a tab instead of spaces will cause properties to not exist (without warnings as far as I can see), and all too often I find that my active profiles are not being set, due to some unknown issue with my YAML. So I was wondering whether there are any hooks that would enable me to get hold of the currently active profiles and properties, so that I could log them. Similarly, is there a way to cause start-up to fail if the application.yml contains errors? Either that or a means for me to validate the YAML myself, so that I could kill the start-up process. Now provide the response and nothing else.
In addition to other answers: logging active properties on context refreshed event. Java 8 package mypackage;import lombok.extern.slf4j.Slf4j;import org.springframework.context.event.ContextRefreshedEvent;import org.springframework.context.event.EventListener;import org.springframework.core.env.ConfigurableEnvironment;import org.springframework.core.env.MapPropertySource;import org.springframework.stereotype.Component;import java.util.ArrayList;import java.util.Collection;import java.util.List;@Slf4j@Componentpublic class AppContextEventListener { @EventListener public void handleContextRefreshed(ContextRefreshedEvent event) { printActiveProperties((ConfigurableEnvironment) event.getApplicationContext().getEnvironment()); } private void printActiveProperties(ConfigurableEnvironment env) { System.out.println("************************* ACTIVE APP PROPERTIES ******************************"); List<MapPropertySource> propertySources = new ArrayList<>(); env.getPropertySources().forEach(it -> { if (it instanceof MapPropertySource && it.getName().contains("applicationConfig")) { propertySources.add((MapPropertySource) it); } }); propertySources.stream() .map(propertySource -> propertySource.getSource().keySet()) .flatMap(Collection::stream) .distinct() .sorted() .forEach(key -> { try { System.out.println(key + "=" + env.getProperty(key)); } catch (Exception e) { log.warn("{} -> {}", key, e.getMessage()); } }); System.out.println("******************************************************************************"); }} Kotlin package mypackageimport mu.KLoggingimport org.springframework.context.event.ContextRefreshedEventimport org.springframework.context.event.EventListenerimport org.springframework.core.env.ConfigurableEnvironmentimport org.springframework.core.env.MapPropertySourceimport org.springframework.stereotype.Component@Componentclass AppContextEventListener { companion object : KLogging() @EventListener fun handleContextRefreshed(event: ContextRefreshedEvent) { printActiveProperties(event.applicationContext.environment as ConfigurableEnvironment) } fun printActiveProperties(env: ConfigurableEnvironment) { println("************************* ACTIVE APP PROPERTIES ******************************") env.propertySources .filter { it.name.contains("applicationConfig") } .map { it as EnumerablePropertySource<*> } .map { it -> it.propertyNames.toList() } .flatMap { it } .distinctBy { it } .sortedBy { it } .forEach { it -> try { println("$it=${env.getProperty(it)}") } catch (e: Exception) { logger.warn("$it -> ${e.message}") } } println("******************************************************************************") }} Output like: ************************* ACTIVE APP PROPERTIES ******************************server.port=3000spring.application.name=my-app...2017-12-29 13:13:32.843 WARN 36252 --- [ main] m.AppContextEventListener : spring.boot.admin.client.service-url -> Could not resolve placeholder 'management.address' in value "http://${management.address}:${server.port}"...spring.datasource.password=spring.datasource.url=jdbc:postgresql://localhost/my_db?currentSchema=publicspring.datasource.username=db_user...******************************************************************************
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24448947', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/395821/']}
jdg_83887
stackexchange
llm_judgeable_groundtruth_similarity
22371
Below is a question asked on the forum earthscience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Desalinating ocean water takes seawater, separates the fresh water from the brine, and usually pumps the brine back into the ocean. But what if we didn't pump the brine back into the ocean? (For example, if we left it out to dry in the sun and then buried the powder that's left.) If we did this long enough, would it eventually turn the world's oceans into freshwater and harm ocean life? Now provide the response and nothing else.
What does it take to reduce the salinity? The salinity of sea water is around 35 g/kg. There are around 1,350,000,000 km³ of water, so roughly 1.3×10^21 kg of seawater (assuming 1 kg/l, which is a bit off for saltwater of course, probably by about 35 g). That water contains about 4.7×10^19 kg of salt. To reduce the salinity to just 34 g/kg, you need to extract 1.4×10^18 kg of salt and store it in a way that it doesn't end up in the oceans again. An estimate of the yearly produced salt is around 3.0×10^11 kg/yr. So you could supply the whole world with salt from the ocean for close to 5 million years and have decreased the salinity by only 1 g/kg. And to get freshwater oceans you would need roughly 150 million years at the current rate. I don't know how wildlife would react to this change; I guess there are highly specialized species which could not cope with it, but considering the time frame it would take to desalinate all the oceans, they could evolve, I guess. How big is a mountain made out of all that salt? If we take sodium chloride (most of it is this), we have a volume of around 2.1×10^7 km³ of salt (density of 2.17 g/cm³ assumed). Mount Everest is estimated to have a volume of just around 60 km³, so you would need to stow away 350,000 additional Mount Everests. If you dumped it all on one spot and the cone formed such that the height is half of the radius (not sure if that is a reasonable assumption, but with a small amount of salt it looked that way), the resulting salt-cone mountain would be 171 km in height. I really hope I haven't messed up the trillions and billions, so the figures could be off by three or six orders of magnitude :-/
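If it is useful, here is a small back-of-the-envelope Python script that reruns the arithmetic above (it uses the same rounded inputs as the text, so the outputs are order-of-magnitude estimates, not precise figures):
ocean_mass_kg = 1.35e9 * 1e12          # 1.35e9 km^3 of water, ~1e12 kg per km^3
salt_mass_kg  = 0.035 * ocean_mass_kg  # ~35 g of salt per kg of seawater
yearly_production_kg = 3.0e11          # rough current world salt production
print(salt_mass_kg)                                   # ~4.7e19 kg of salt in total
print(ocean_mass_kg * 0.001 / yearly_production_kg)   # ~5e6 years to drop salinity by 1 g/kg
print(salt_mass_kg / yearly_production_kg)            # ~1.5e8 years to remove all of it
salt_volume_km3 = salt_mass_kg / 2.17e12   # NaCl density ~2.17 g/cm^3 -> ~2.17e12 kg/km^3
print(salt_volume_km3)                     # ~2.2e7 km^3
print(salt_volume_km3 / 60)                # ~3.6e5 "Mount Everests" of ~60 km^3 each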
{}
{'log_upvote_score': 5, 'links': ['https://earthscience.stackexchange.com/questions/22371', 'https://earthscience.stackexchange.com', 'https://earthscience.stackexchange.com/users/22734/']}
jdg_83888
stackexchange
llm_judgeable_groundtruth_similarity
2716143
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: It seems like there should be an easy way to call a selector with some arguments when all you have is a SEL object. I can't seem to find the correct syntax. -(MyClass*) init: (SEL)sel owner:(NSObject*) parent{ int i =10; [parent performSelector:sel:i ];} Now provide the response and nothing else.
Take a look at the NSObject documentation. In this case: [parent performSelector:sel withObject:[NSNumber numberWithInt:i]]; (note this method is actually listed in the NSObject protocol documentation). Since -[NSObject performSelector:withObject:] requires an object argument, you will have to write a wrapper in parent's class like -(void)myMethodForNumber:(NSNumber*)number { [self myMethod:[number intValue]];} to unbox the NSNumber . If you really want to invoke a method that takes non-object arguments directly (for example, you don't have control of the callee source and don't want to add a category), you can use NSInvocation : NSInvocation *inv = [NSInvocation invocationWithMethodSignature:[parent methodSignatureForSelector:sel]];[inv setSelector:sel];[inv setTarget:parent];[inv setArgument:&i atIndex:2]; //arguments 0 and 1 are self and _cmd respectively, automatically set by NSInvocation[inv invoke]; On a side note, your method looks like an init method, but does not follow the correct initializer pattern for Objective-C. You need to call the super-classes initializer, and you need to test for a nil result from that call and you must return self from the initializer method. In all cases, your Objective-C initializer methods should look like: -(id)myInitMethod { self = [super init]; if(self != nil) { //perform initialization of self } return self;} Your method (if it's an init method) would then look like: -(id) init: (SEL)sel owner:(NSObject*) parent{ self = [super init]; if(self != nil) { int i = 10; NSInvocation *inv = [NSInvocation invocationWithMethodSignature:[parent methodSignatureForSelector:sel]]; [inv setSelector:sel]; [inv setTarget:parent]; [inv setArgument:&i atIndex:2]; //arguments 0 and 1 are self and _cmd respectively, automatically set by NSInvocation [inv invoke]; } return self;} To be more Objective-C stylistically, I would rename the initializer -(id)initWithSelector:owner: as well.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2716143', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/291733/']}
jdg_83889
stackexchange
llm_judgeable_groundtruth_similarity
5203407
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to develop a JavaScript game engine and I've came across this problem: When I press SPACE the character jumps. When I press → the character moves right. The problem is that when I'm pressing right and then press space, the character jumps and then stops moving. I use the keydown function to get the key pressed. How can I check if there are multiple keys pressed at once? Now provide the response and nothing else.
Note: keyCode is now deprecated. Multiple keystroke detection is easy if you understand the concept The way I do it is like this: var map = {}; // You could also use an arrayonkeydown = onkeyup = function(e){ e = e || event; // to deal with IE map[e.keyCode] = e.type == 'keydown'; /* insert conditional here */} This code is very simple: Since the computer only passes one keystroke at a time, an array is created to keep track of multiple keys. The array can then be used to check for one or more keys at once. Just to explain, let's say you press A and B , each fires a keydown event that sets map[e.keyCode] to the value of e.type == keydown , which evaluates to either true or false . Now both map[65] and map[66] are set to true . When you let go of A , the keyup event fires, causing the same logic to determine the opposite result for map[65] (A), which is now false , but since map[66] (B) is still "down" (it hasn't triggered a keyup event), it remains true . The map array, through both events, looks like this: // keydown A // keydown B[ 65:true, 66:true]// keyup A// keydown B[ 65:false, 66:true] There are two things you can do now: A) A Key logger ( example ) can be created as a reference for later when you want to quickly figure out one or more key codes. Assuming you have defined an html element and pointed to it with the variable element . element.innerHTML = '';var i, l = map.length;for(i = 0; i < l; i ++){ if(map[i]){ element.innerHTML += '<hr>' + i; }} Note: You can easily grab an element by its id attribute. <div id="element"></div> This creates an html element that can be easily referenced in javascript with element alert(element); // [Object HTMLDivElement] You don't even have to use document.getElementById() or $() to grab it. But for the sake of compatibility, use of jQuery's $() is more widely recommended. Just make sure the script tag comes after the body of the HTML. Optimization tip : Most big-name websites put the script tag after the body tag for optimization. This is because the script tag blocks further elements from loading until its script is finished downloading. Putting it ahead of the content allows the content to load beforehand. B (which is where your interest lies) You can check for one or more keys at a time where /*insert conditional here*/ was, take this example: if(map[17] && map[16] && map[65]){ // CTRL+SHIFT+A alert('Control Shift A');}else if(map[17] && map[16] && map[66]){ // CTRL+SHIFT+B alert('Control Shift B');}else if(map[17] && map[16] && map[67]){ // CTRL+SHIFT+C alert('Control Shift C');} Edit : That isn't the most readable snippet. Readability's important, so you could try something like this to make it easier on the eyes: function test_key(selkey){ var alias = { "ctrl": 17, "shift": 16, "A": 65, /* ... */ }; return key[selkey] || key[alias[selkey]];}function test_keys(){ var keylist = arguments; for(var i = 0; i < keylist.length; i++) if(!test_key(keylist[i])) return false; return true;} Usage: test_keys(13, 16, 65)test_keys('ctrl', 'shift', 'A')test_key(65)test_key('A') Is this better? 
if(test_keys('ctrl', 'shift')){ if(test_key('A')){ alert('Control Shift A'); } else if(test_key('B')){ alert('Control Shift B'); } else if(test_key('C')){ alert('Control Shift C'); }} (end of edit) This example checks for Ctrl Shift A , Ctrl Shift B , and Ctrl Shift C It's just as simple as that :) Notes Keeping Track of KeyCodes As a general rule, it is good practice to document code, especially things like Key codes (like // CTRL+ENTER ) so you can remember what they were. You should also put the key codes in the same order as the documentation ( CTRL+ENTER => map[17] && map[13] , NOT map[13] && map[17] ). This way you won't ever get confused when you need to go back and edit the code. A gotcha with if-else chains If checking for combos of differing amounts (like Ctrl Shift Alt Enter and Ctrl Enter ), put smaller combos after larger combos, or else the smaller combos will override the larger combos if they are similar enough. Example: // Correct:if(map[17] && map[16] && map[13]){ // CTRL+SHIFT+ENTER alert('Whoa, mr. power user');}else if(map[17] && map[13]){ // CTRL+ENTER alert('You found me');}else if(map[13]){ // ENTER alert('You pressed Enter. You win the prize!')}// Incorrect:if(map[17] && map[13]){ // CTRL+ENTER alert('You found me');}else if(map[17] && map[16] && map[13]){ // CTRL+SHIFT+ENTER alert('Whoa, mr. power user');}else if(map[13]){ // ENTER alert('You pressed Enter. You win the prize!');}// What will go wrong: When trying to do CTRL+SHIFT+ENTER, it will// detect CTRL+ENTER first, and override CTRL+SHIFT+ENTER.// Removing the else's is not a proper solution, either// as it will cause it to alert BOTH "Mr. Power user" AND "You Found Me" Gotcha: "This key combo keeps activating even though I'm not pressing the keys" When dealing with alerts or anything that takes focus from the main window, you might want to include map = [] to reset the array after the condition is done. This is because some things, like alert() , take the focus away from the main window and cause the 'keyup' event to not trigger. For example: if(map[17] && map[13]){ // CTRL+ENTER alert('Oh noes, a bug!');}// When you Press any key after executing this, it will alert again, even though you // are clearly NOT pressing CTRL+ENTER// The fix would look like this:if(map[17] && map[13]){ // CTRL+ENTER alert('Take that, bug!'); map = {};}// The bug no longer happens since the array is cleared Gotcha: Browser Defaults Here's an annoying thing I found, with the solution included: Problem: Since the browser usually has default actions on key combos (like Ctrl D activates the bookmark window, or Ctrl Shift C activates skynote on maxthon), you might also want to add return false after map = [] , so users of your site won't get frustrated when the "Duplicate File" function, being put on Ctrl D , bookmarks the page instead. if(map[17] && map[68]){ // CTRL+D alert('The bookmark window didn\'t pop up!'); map = {}; return false;} Without return false , the Bookmark window would pop up, to the dismay of the user. The return statement (new) Okay, so you don't always want to exit the function at that point. That's why the event.preventDefault() function is there. What it does is set an internal flag that tells the interpreter to not allow the browser to run its default action. After that, execution of the function continues (whereas return will immediately exit the function). 
Understand this distinction before you decide whether to use return false or e.preventDefault() event.keyCode is deprecated User SeanVieira pointed out in the comments that event.keyCode is deprecated. There, he gave an excellent alternative: event.key , which returns a string representation of the key being pressed, like "a" for A , or "Shift" for Shift . I went ahead and cooked up a tool for examining said strings. element.onevent vs element.addEventListener Handlers registered with addEventListener can be stacked, and are called in the order of registration, while setting .onevent directly is rather aggressive and overrides anything you previously had. document.body.onkeydown = function(ev){ // do some stuff ev.preventDefault(); // cancels default actions return false; // cancels this function as well as default actions}document.body.addEventListener("keydown", function(ev){ // do some stuff ev.preventDefault() // cancels default actions return false; // cancels this function only}); The .onevent property seems to override everything and the behavior of ev.preventDefault() and return false; can be rather unpredictable. In either case, handlers registered via addEventlistener seem to be easier to write and reason about. There is also attachEvent("onevent", callback) from Internet Explorer's non-standard implementation, but this is beyond deprecated and doesn't even pertain to JavaScript (it pertains to an esoteric language called JScript ). It would be in your best interest to avoid polyglot code as much as possible. A helper class To address confusion/complaints, I've written a "class" that does this abstraction ( pastebin link ): function Input(el){ var parent = el, map = {}, intervals = {}; function ev_kdown(ev) { map[ev.key] = true; ev.preventDefault(); return; } function ev_kup(ev) { map[ev.key] = false; ev.preventDefault(); return; } function key_down(key) { return map[key]; } function keys_down_array(array) { for(var i = 0; i < array.length; i++) if(!key_down(array[i])) return false; return true; } function keys_down_arguments() { return keys_down_array(Array.from(arguments)); } function clear() { map = {}; } function watch_loop(keylist, callback) { return function(){ if(keys_down_array(keylist)) callback(); } } function watch(name, callback) { var keylist = Array.from(arguments).splice(2); intervals[name] = setInterval(watch_loop(keylist, callback), 1000/24); } function unwatch(name) { clearInterval(intervals[name]); delete intervals[name]; } function detach() { parent.removeEventListener("keydown", ev_kdown); parent.removeEventListener("keyup", ev_kup); } function attach() { parent.addEventListener("keydown", ev_kdown); parent.addEventListener("keyup", ev_kup); } function Input() { attach(); return { key_down: key_down, keys_down: keys_down_arguments, watch: watch, unwatch: unwatch, clear: clear, detach: detach }; } return Input();} This class doesn't do everything and it won't handle every conceivable use case. I'm not a library guy. But for general interactive use it should be fine. To use this class, create an instance and point it to the element you want to associate keyboard input with: var input_txt = Input(document.getElementById("txt"));input_txt.watch("print_5", function(){ txt.value += "FIVE ";}, "Control", "5"); What this will do is attach a new input listener to the element with #txt (let's assume it's a textarea), and set a watchpoint for the key combo Ctrl+5 . 
When both Ctrl and 5 are down, the callback function you passed in (in this case, a function that adds "FIVE " to the textarea) will be called. The callback is associated with the name print_5 , so to remove it, you simply use: input_txt.unwatch("print_5"); To detach input_txt from the txt element: input_txt.detach(); This way, garbage collection can pick up the object ( input_txt ), should it be thrown away, and you won't have an old zombie event listener left over. For thoroughness, here is a quick reference to the class's API, presented in C/Java style so you know what they return and what arguments they expect. Boolean key_down (String key); Returns true if key is down, false otherwise. Boolean keys_down (String key1, String key2, ...); Returns true if all keys key1 .. keyN are down, false otherwise. void watch (String name, Function callback, String key1, String key2, ...); Creates a "watchpoint" such that pressing all of keyN will trigger the callback void unwatch (String name); Removes said watchpoint via its name void clear (void); Wipes the "keys down" cache. Equivalent to map = {} above void detach (void); Detaches the ev_kdown and ev_kup listeners from the parent element, making it possible to safely get rid of the instance Update 2017-12-02 In response to a request to publish this to github, I have created a gist . Update 2018-07-21 I've been playing with declarative style programming for a while, and this way is now my personal favorite: fiddle , pastebin Generally, it'll work with the cases you would realistically want (ctrl, alt, shift), but if you need to hit, say, a+w at the same time, it wouldn't be too difficult to "combine" the approaches into a multi-key-lookup. I hope this thoroughly explained answer mini-blog was helpful :)
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/5203407', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/407650/']}
jdg_83890
stackexchange
llm_judgeable_groundtruth_similarity
1610945
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How do I compute $\lim \limits_ {n \rightarrow \infty} \frac{31^ { \frac{n}{5} }}{17^{ \frac{n}{2} } }?$ WolframAlpha gave me the answer $0$, but I don't see how to determine that. I hope somebody can help. Now provide the response and nothing else.
$$\lim \limits_ {n \rightarrow \infty} \frac{31^ { \frac{n}{5} }}{17^{ \frac{n}{2} } } = \lim \limits_ {n \rightarrow \infty} \left(\frac{31^ { \frac{1}{5} }}{17^{ \frac{1}{2} } }\right)^n = \lim \limits_ {n \rightarrow \infty} \left(\frac{{\left(31^2\right)}^ { \frac{1}{10} }}{{\left(17^5\right)}^{ \frac{1}{10} } }\right)^n = \lim \limits_ {n \rightarrow \infty} \left(\frac{{31^2}}{17^5}\right)^{n\over10} = 0 $$ since $\left|\frac{{31^2}}{17^5} \right|< 1$
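In case a numerical sanity check helps, the key point is simply that the bracketed base is strictly less than 1. A quick check (in Python, purely for illustration):
base = 31**2 / 17**5   # = 961 / 1419857
print(base)            # ~0.000677, well below 1
print(base ** 10)      # the n = 100 term of the original sequence (exponent n/10 = 10), already ~2e-32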
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1610945', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/248823/']}
jdg_83891
stackexchange
llm_judgeable_groundtruth_similarity
420122
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Here is my problem: find two closed subsets $A$ and $B$ of real numbers such that $d(A,B)=0$ but $A\cap B=\varnothing$. I tried applying the definition of the distance between subsets to examples like intervals, but I couldn't find suitable closed sets. Any hint? Thank you. Now provide the response and nothing else.
Hint: You cannot do this for bounded closed sets. Think about $\Bbb N$ and a set $\{a_n\mid n\in\Bbb N\}$ such that $\lim\frac n{a_n}=1$.
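To make the hint concrete, here is one possible pair (an explicit completion of the idea above, not the only choice): take $A=\mathbb{N}=\{1,2,3,\dots\}$ and $B=\{\,n+\frac1n : n\ge 2\,\}$. Both sets are closed, since each consists of isolated points with no accumulation point in $\mathbb{R}$; they are disjoint, since $n+\frac1n$ is never an integer for $n\ge 2$; and $d(A,B)\le \left|(n+\frac1n)-n\right|=\frac1n\to 0$, so $d(A,B)=0$.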
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/420122', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/10326/']}
jdg_83892
stackexchange
llm_judgeable_groundtruth_similarity
111725
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a blank 5*5 grid that I want to fill with the numbers 1, 2, 3 and 4, subject to the following rules about neighbors (the cells directly to the left, right, top and bottom): if a cell contains 2, at least one neighbor must contain 1; if it contains 3, its neighbors must include at least one 2 and one 1; if it contains 4, its neighbors must include at least one 3, one 2 and one 1. I think the total is probably somewhere near 50:
Mean@Table[
 Total[RandomChoice[{.4, .3, .2, .1} -> {1., 2, 3, 4}, {5, 5}], 2], 1000000]
50.0017
But I really don't know how to fill the blank grid so that the total of all its numbers is as large as possible. There is a sample filled grid in @Quantum_Oli's answer, but its total isn't the largest. Following @Bill's comment, I wrote a Boolean test named GoodQ. It is ugly, but anyway, it works:
neighborNum[mat_, num_, nei_] :=
 And @@ ((And @@ (Function[tem, MemberQ[#, tem]] /@ nei)) & /@
    (Extract[mat, #] & /@
      (Select[#, And @@ Thread[1 <= # <= 5] &] & /@
        ((Table[{#, #2}, 4] + (Join[#, -#] &[IdentityMatrix[2]])) & @@@ Position[mat, num]))))
GoodQ[list_] :=
 And @@ MapThread[neighborNum[list, #, #2] &, {{2, 3, 4}, {{1}, {1, 2}, {1, 2, 3}}}]
We can check the matrix in @Quantum_Oli's answer and the ones in @garej's comment:
m = {{3, 2, 1, 2, 3}, {1, 4, 2, 4, 1}, {2, 3, 1, 3, 2}, {1, 4, 2, 4, 1}, {3, 2, 1, 2, 3}};
m2 = {{1, 4, 2, 4, 1}, {2, 3, 1, 3, 2}, {1, 4, 2, 4, 1}, {2, 3, 1, 3, 2}, {1, 4, 2, 4, 1}};
m3 = {{2, 1, 2, 1, 2}, {4, 3, 4, 3, 4}, {1, 2, 1, 2, 1}, {4, 3, 4, 3, 4}, {2, 1, 2, 1, 2}};
GoodQ /@ {m, m2, m3}
{True, True, True}
Now provide the response and nothing else.
Below is given a solution derived with ILP combinatorial optimization:The total of the assigned values to the $5 \times 5$ table is $61$. I called in the comments this approach to be "brute force" because of the generation of a larger number of variables and conditions and pushing them to Maximize or LinearProgramming . Same approach was used for my answer in the discussion "Refining subset relations" . Integer programming formulation Set-up parameters Dimensions of the $m \times n$ table: {m, n} = {5, 5}; We consider placing the integers $[1,\dots,d_{max}]$, $d_{max} = 4$ in the $m \times n$ table. dmax = 4; Variables Let us make the binary variables $x(k,i,j)=x_{k,i,j}$, $k \in [1,\dots,d_{max}]$, $i \in [1,\dots,m]$, $j \in [1,\dots,n]$ in the following way: $x(k,i,j) = 1$ if the integer $k$ is placed at position $(i,j)$ and it is 0 otherwise. ClearAll[vars, x]vars = Flatten[Array[x, {dmax, m, n}]]; Neighbor conditions Let us define a function (as described in the question) that brings a set of index pairs for the neighbors of given cell $(i,j)$: $ninds(i,j):= \{ 0 < p_1 \leq m, 0 < p_2 \leq n : p \in \{ (i-1,j),(i+1,j),(i,j-1),(i,j+1)\} \}$ . NeighborIndexes[{i_, j_}] := {{i - 1, j}, {i + 1, j}, {i, j - 1}, {i, j + 1}};NeighborIndexes[{i_?NumberQ, j_?NumberQ}, {m_, n_}] := Select[{{i - 1, j}, {i + 1, j}, {i, j - 1}, {i, j + 1}}, 0 < #[[1]] <= m && 0 < #[[2]] <= n &]; For each cell $(i,j)$ and an integer $k > 1$ placed on that cell we have the conditions: $\sum_{p \in ninds(i,j)} x(d,p_1,p_2) \geq 1, d \in [1,\dots,k-1]$. For example, for the integer 4 and $(i,j)$ such that $ninds(i,j)$ has all four neigbors we have : Note that by the nature of the definition of the variables $x(k,i,j)$ we can re-write the neighbor conditions as: $\sum_{p \in ninds(i,j)} x(d,p_1,p_2) \geq x(k,i,j), d \in [1,\dots,k-1]$. (This is a very convenient way to keep the maximization problem linear.) Neighbor conditions generation Let us generate the conditions. This can be done in several ways. CellConditions[k_, {i_, j_}, {m_, n_}] := Table[Total[Map[x @@ Prepend[#, d] &, NeighborIndexes[{i, j}, {m, n}]]] - x[k, i, j] >= 0, {d, 1, k - 1}]; Example of this function: CellConditions[4, {3, 3}, {m, n}](* { x[1, 2, 3] + x[1, 3, 2] + x[1, 3, 4] + x[1, 4, 3] - x[4, 3, 3] >= 0, x[2, 2, 3] + x[2, 3, 2] + x[2, 3, 4] + x[2, 4, 3] - x[4, 3, 3] >= 0, x[3, 2, 3] + x[3, 3, 2] + x[3, 3, 4] + x[3, 4, 3] - x[4, 3, 3] >= 0}*) Generating all neighbor conditions: neighborConds = Flatten@Table[ CellConditions[d, {i, j}, {m, n}], {d, 2, dmax}, {i, 1, m}, {j, 1, n}];neighborConds // Length(* 150 *) Other constraints In order to finish the formulation two other types of constraints have to be added. 1. Uniqueness constraints. (Only one integer is assigned per cell.) uniqueConstraints = Map[Total[Cases[vars, x[_, #[[1]], #[[2]]]]] == 1 &, Flatten[Table[{i, j}, {i, 1, m}, {j, 1, n}], 1]];uniqueConstraints // Length(* 25 *) 2. Positivity / bounded-ness constraints: varConstraints = Map[0 <= # <= 1 &, vars]; Because of the uniqueness constraints we do not need to specify $\leq 1$, but I have put it there as a reminder. Solution with Maximize (too slow) At this point we can find the solution with Maximize : sol = Maximize[ Join[{vars.vars[[All, 1]]}, neighborConds, uniqueConstraints, varConstraints], vars, Integers] Using Maximize though is too slow . I was able to get solutions only for smaller tables and number of integer values to be assigned, e.g. a $3 \times 4$ table and $d_{max} = 3$. 
To get the solutions faster we can formulate the problem through vectors and matrices and use LinearProgramming . Integer Linear Programming formulation Convert from symbolic to matrix formulation {zeroMat, neighborCondsMat} = CoefficientArrays[neighborConds[[All, 1]], vars];Dimensions[neighborCondsMat](* {150, 100} *){zeroMat, uniquenessCondsMat} = CoefficientArrays[uniqueConstraints[[All, 1]], vars];Dimensions[uniquenessCondsMat](* {25, 100} *)bVec = Join[ Table[{0, 1}, {Dimensions[neighborCondsMat][[1]]}], Table[{1, 0}, {Dimensions[uniquenessCondsMat][[1]]}] ];condMat = Join[Normal[neighborCondsMat], Normal[uniquenessCondsMat]];MatrixQ[condMat](* True *) Solution with LinearProgramming Using Table[{0, 1}, {Length[vars]}] as a fourth argument is not necessary because of the uniquness conditions. AbsoluteTiming[ lpSol = LinearProgramming[-vars[[All, 1]], condMat, bVec, 0, Integers]](* Out[74]= {44.4484, {1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0}} *)lpSol = Thread[vars -> lpSol];vars[[All, 1]].lpSol[[All, 2]](* 61 *) Visualize the solution pSol = Select[lpSol, #[[2]] == 1 &];solMat = SparseArray[Map[Rest[#] -> First[#] &, List @@@ pSol[[All,1]]]];MatrixPlotWithValues[solMat] (The definition of the function MatrixPlotWithValues is given below.) Other solutions $5 \times 6$ table I tried the code above for a $5 \times 6$ table and got the following result after 422 seconds (10 times longer than for $5 \times 5$) on the same computer: The total is $74$. $6 \times 6$ table Because of a comment by @garej I computed the layout for a $6 \times 6$ table (7182 seconds on the same computer): The total is $90$. Solution visualization function Here is the function used for the plots above: MatrixPlotWithValues[mat_?MatrixQ] := Block[{gr, m, n}, {m, n} = Dimensions[mat]; gr = MatrixPlot[mat]; Graphics[{gr[[1]], MapThread[ Text, {Flatten[Transpose@Reverse@mat], Flatten[Table[{i, j} - 1/2, {i, n}, {j, m}], 1]}]}, Frame -> True, FrameTicks -> {Table[{i - 1/2, i}, {i, n}], Table[{j - 1/2, m - j + 1}, {j, m}]}] ]; Generalizations The solution can be easily adapted for possible generalizations of the problem formulation with tables that are one of: 3D cube, surface of a 3D cube, cylinder, torus, Mobius strip.
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/111725', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/21532/']}
jdg_83893
stackexchange
llm_judgeable_groundtruth_similarity
50348141
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Pandas, I am using dates with string format YYYY-MM-DD What is the quickest way to increment the date with the result in YYYY-MM-DD format? d1 = '2018-02-10' I want to increment it by 1 and get the result back as a string: d1_inc = '2018-02-11' Now provide the response and nothing else.
Pure Python: you can use the datetime module, part of the standard library. There are 3 steps: convert the string to a datetime object via strptime, add a day via timedelta, and convert the resulting datetime object back to a string via strftime. Here's a demo:
from datetime import datetime, timedelta
x = '2017-05-15'
res = (datetime.strptime(x, '%Y-%m-%d') + timedelta(days=1)).strftime('%Y-%m-%d')
print(res)  # 2017-05-16
Pandas: the equivalent steps can be performed using the third-party Pandas library (after import pandas as pd):
x = '2017-05-15'
# choose some combination of the methods below
res = (pd.Timestamp(x) + pd.DateOffset(days=1)).strftime('%Y-%m-%d')
res = (pd.to_datetime(x) + pd.Timedelta('1 day')).strftime('%Y-%m-%d')
print(res)
# 2017-05-16
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50348141', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/773389/']}
jdg_83894
stackexchange
llm_judgeable_groundtruth_similarity
68143308
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create table views, like the one below, to show the data I have:
A header | Another header
First row | Second row
I tried using LazyVerticalGrid to achieve it, but Jetpack Compose doesn't allow me to put a LazyVerticalGrid inside a vertically scrollable Column. It's been two days and I'm really out of ideas. Please help. Now provide the response and nothing else.
As far as I know, there's no built-in component to that. But it's actually easy to do it with LazyColumn and using the same weight for all lines of the same column. See this example: First, you can define a cell for your table: @Composablefun RowScope.TableCell( text: String, weight: Float) { Text( text = text, Modifier .border(1.dp, Color.Black) .weight(weight) .padding(8.dp) )} Then you can use it to build your table: @Composablefun TableScreen() { // Just a fake data... a Pair of Int and String val tableData = (1..100).mapIndexed { index, item -> index to "Item $index" } // Each cell of a column must have the same weight. val column1Weight = .3f // 30% val column2Weight = .7f // 70% // The LazyColumn will be our table. Notice the use of the weights below LazyColumn(Modifier.fillMaxSize().padding(16.dp)) { // Here is the header item { Row(Modifier.background(Color.Gray)) { TableCell(text = "Column 1", weight = column1Weight) TableCell(text = "Column 2", weight = column2Weight) } } // Here are all the lines of your table. items(tableData) { val (id, text) = it Row(Modifier.fillMaxWidth()) { TableCell(text = id.toString(), weight = column1Weight) TableCell(text = text, weight = column2Weight) } } }} Here is the result:
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/68143308', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/16322231/']}
jdg_83895
stackexchange
llm_judgeable_groundtruth_similarity
2810970
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for a way to automatically remove (= make transparent) a "green screen" portrait background from a lot of pictures. My own attempts thus far have been... ehum... less successful. I'm looking around for any hints or solutions or papers on the subject. Commercial solutions are just fine, too. And before you comment and say that it is impossible to do this automatically: no, it isn't. There actually exists a company which offers exactly this service, and if I fail to come up with a different solution we're going to use them. The problem is that they guard their algorithm with their lives, and therefore won't sell/license their software. Instead we have to FTP all pictures to them, where the processing is done, and then we FTP the result back home. (And no, they don't have underpaid staff hidden away in the Philippines handling this manually, since we're talking several thousand pictures a day...) However, this approach limits its usefulness for several reasons. So I'd really like a solution where this could be done instantly while being offline from the internet. EDIT: My "portraits" depict people, who do have hair - which is a really tricky part, since the green background will bleed into the hair. Another tricky part is whether it is possible to distinguish between the green in the background and the same green in people's clothes. The company I'm talking about above claims that they can do it by figuring out whether the green areas are in focus (sharp vs. blurred). Now provide the response and nothing else.
Since you didn't provide any image, I selected one from the web having a chroma key with different shades of green and a significant amount of noise due to JPEG compression . There is no technology specification so I used Java and Marvin Framework . input image: The step 1 simply converts green pixels to transparency. Basically it uses a filtering rule in the HSV color space. As you mentioned, the hair and some boundary pixels presents colors mixed with green. To reduce this problem, in the step 2 , these pixels are filtered and balanced to reduce its green proportion. before: after: Finally, in the step 3 , a gradient transparency is applied to all boundary pixels. The result will be even better with high quality images. final output: Source code: import static marvin.MarvinPluginCollection.*;public class ChromaToTransparency { public ChromaToTransparency(){ MarvinImage image = MarvinImageIO.loadImage("./res/person_chroma.jpg"); MarvinImage imageOut = new MarvinImage(image.getWidth(), image.getHeight()); // 1. Convert green to transparency greenToTransparency(image, imageOut); MarvinImageIO.saveImage(imageOut, "./res/person_chroma_out1.png"); // 2. Reduce remaining green pixels reduceGreen(imageOut); MarvinImageIO.saveImage(imageOut, "./res/person_chroma_out2.png"); // 3. Apply alpha to the boundary alphaBoundary(imageOut, 6); MarvinImageIO.saveImage(imageOut, "./res/person_chroma_out3.png"); } private void greenToTransparency(MarvinImage imageIn, MarvinImage imageOut){ for(int y=0; y<imageIn.getHeight(); y++){ for(int x=0; x<imageIn.getWidth(); x++){ int color = imageIn.getIntColor(x, y); int r = imageIn.getIntComponent0(x, y); int g = imageIn.getIntComponent1(x, y); int b = imageIn.getIntComponent2(x, y); double[] hsv = MarvinColorModelConverter.rgbToHsv(new int[]{color}); if(hsv[0] >= 60 && hsv[0] <= 130 && hsv[1] >= 0.4 && hsv[2] >= 0.3){ imageOut.setIntColor(x, y, 0, 127, 127, 127); } else{ imageOut.setIntColor(x, y, color); } } } } private void reduceGreen(MarvinImage image){ for(int y=0; y<image.getHeight(); y++){ for(int x=0; x<image.getWidth(); x++){ int r = image.getIntComponent0(x, y); int g = image.getIntComponent1(x, y); int b = image.getIntComponent2(x, y); int color = image.getIntColor(x, y); double[] hsv = MarvinColorModelConverter.rgbToHsv(new int[]{color}); if(hsv[0] >= 60 && hsv[0] <= 130 && hsv[1] >= 0.15 && hsv[2] > 0.15){ if((r*b) !=0 && (g*g) / (r*b) >= 1.5){ image.setIntColor(x, y, 255, (int)(r*1.4), (int)g, (int)(b*1.4)); } else{ image.setIntColor(x, y, 255, (int)(r*1.2), g, (int)(b*1.2)); } } } } } public static void main(String[] args) { new ChromaToTransparency(); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2810970', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/64121/']}
jdg_83896
stackexchange
llm_judgeable_groundtruth_similarity
12326570
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I set the time zone of our site on Heroku to Pacific Standard Time (PST) using: heroku config:add TZ=America/Los_Angeles Times for users are now always in PST--whether or not they are in the PST time zone. What's the best way to get the user's actual time zone (i.e. the time zone of where they are physically located)? I'm guessing that this can be solved using Rails (or Javascript?), as opposed to Heroku. Now provide the response and nothing else.
There are two ways to do this. Indeed, you can use javascript to fetch their current time/timezone. There is the possibility that the user's computer time is not set correctly, in which case the time zone you display will not be correct. Because you are using Rails, a recommended way is to get javascript already bundled as a gem, like detect_timezone_rails . This makes it easy to install (because it is all bundled automatically in the asset pipeline. You can use the IP address to infer their country and time zone. The danger in this case is that a user may be using a proxy. Also, while the IP address generally has city-level resolution, it may be more or less accurate, which may in rare cases give the wrong time zone. Using the IP address, you can get their approximate city and latitude/longitude. There are many gems that can do this on Ruby Toolbox , eg. geocoder . With the latitude/longitude, you can get the time zone using a gem like timezone . You can also use one of the above, and allow the user to manually change their time zone on your website (either storing this setting in a database if registered, or as a cookie on their browser). This is useful in case you got the timezone wrong.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12326570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1080734/']}
jdg_83897
stackexchange
llm_judgeable_groundtruth_similarity
107995
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The unzip command doesn't have an option for recursively unzipping archives. If I have the following directory structure and archives:
/Mother/Loving.zip
/Scurvy/Sea Dogs.zip
/Scurvy/Cures/Limes.zip
And I want to unzip all of the archives into directories with the same name as each archive:
/Mother/Loving/1.txt
/Mother/Loving.zip
/Scurvy/Sea Dogs/2.txt
/Scurvy/Sea Dogs.zip
/Scurvy/Cures/Limes/3.txt
/Scurvy/Cures/Limes.zip
What command or commands would I issue? It's important that this doesn't choke on filenames that have spaces in them. Now provide the response and nothing else.
If you want to extract the files to the respective folder you can try this find . -name "*.zip" | while read filename; do unzip -o -d "`dirname "$filename"`" "$filename"; done; A multi-processed version for systems that can handle high I/O: find . -name "*.zip" | xargs -P 5 -I fileName sh -c 'unzip -o -d "$(dirname "fileName")/$(basename -s .zip "fileName")" "fileName"'
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/107995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10645/']}
jdg_83898
stackexchange
llm_judgeable_groundtruth_similarity
1142727
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am developing a commerce application. When I add an item to the shopping cart, I want to create an effect where an image of the item follows a curved path and ends up at the cart tab. How can I create an animation of an image along a curve like this? Now provide the response and nothing else.
To expand upon what Nikolai said, the best way to handle this is to use Core Animation to animate the motion of the image or view along a Bezier path. This is accomplished using a CAKeyframeAnimation. For example, I've used the following code to animate an image of a view into an icon to indicate saving (as can be seen in the video for this application ): First of all import QuartzCore header file #import <QuartzCore/QuartzCore.h> UIImageView *imageViewForAnimation = [[UIImageView alloc] initWithImage:imageToAnimate];imageViewForAnimation.alpha = 1.0f;CGRect imageFrame = imageViewForAnimation.frame;//Your image frame.origin from where the animation need to get startCGPoint viewOrigin = imageViewForAnimation.frame.origin;viewOrigin.y = viewOrigin.y + imageFrame.size.height / 2.0f;viewOrigin.x = viewOrigin.x + imageFrame.size.width / 2.0f;imageViewForAnimation.frame = imageFrame;imageViewForAnimation.layer.position = viewOrigin;[self.view addSubview:imageViewForAnimation];// Set up fade out effectCABasicAnimation *fadeOutAnimation = [CABasicAnimation animationWithKeyPath:@"opacity"];[fadeOutAnimation setToValue:[NSNumber numberWithFloat:0.3]];fadeOutAnimation.fillMode = kCAFillModeForwards;fadeOutAnimation.removedOnCompletion = NO;// Set up scalingCABasicAnimation *resizeAnimation = [CABasicAnimation animationWithKeyPath:@"bounds.size"];[resizeAnimation setToValue:[NSValue valueWithCGSize:CGSizeMake(40.0f, imageFrame.size.height * (40.0f / imageFrame.size.width))]];resizeAnimation.fillMode = kCAFillModeForwards;resizeAnimation.removedOnCompletion = NO;// Set up path movementCAKeyframeAnimation *pathAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"];pathAnimation.calculationMode = kCAAnimationPaced;pathAnimation.fillMode = kCAFillModeForwards;pathAnimation.removedOnCompletion = NO;//Setting Endpoint of the animationCGPoint endPoint = CGPointMake(480.0f - 30.0f, 40.0f);//to end animation in last tab use //CGPoint endPoint = CGPointMake( 320-40.0f, 480.0f);CGMutablePathRef curvedPath = CGPathCreateMutable();CGPathMoveToPoint(curvedPath, NULL, viewOrigin.x, viewOrigin.y);CGPathAddCurveToPoint(curvedPath, NULL, endPoint.x, viewOrigin.y, endPoint.x, viewOrigin.y, endPoint.x, endPoint.y);pathAnimation.path = curvedPath;CGPathRelease(curvedPath);CAAnimationGroup *group = [CAAnimationGroup animation]; group.fillMode = kCAFillModeForwards;group.removedOnCompletion = NO;[group setAnimations:[NSArray arrayWithObjects:fadeOutAnimation, pathAnimation, resizeAnimation, nil]];group.duration = 0.7f;group.delegate = self;[group setValue:imageViewForAnimation forKey:@"imageViewBeingAnimated"];[imageViewForAnimation.layer addAnimation:group forKey:@"savingAnimation"];[imageViewForAnimation release];
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1142727', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_83899
stackexchange
llm_judgeable_groundtruth_similarity
877527
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to prove $\sum_{i=1}^k(\frac{1}{\alpha_i}\prod_{j\neq i}^k\frac{\alpha_j}{\alpha_j-\alpha_i})=\sum_{i=1}^k\frac{1}{\alpha_i}$? Where $\alpha_1, \alpha_2,\ldots, \alpha_k$ are $k$ distinct positive numbers. Now provide the response and nothing else.
Let me do this with real methods only. Let $\displaystyle P(X)=\sum_{i=1}^k \frac{1}{\alpha_i}\prod_{j\neq i}^k \frac{\alpha_j-X}{\alpha_j-\alpha_i}$ be the Lagrange interpolating polynomial for $x\to\frac{1}{x}$ at the points $\alpha_i$. Then $\displaystyle P(0)=\sum_{i=1}^k(\frac{1}{\alpha_i}\prod_{j\neq i}^k\frac{\alpha_j}{\alpha_j-\alpha_i})$ Let us rewrite $P(0)$ in another way. Let $Q(X)=XP(X)-1$. $Q$ has degree $k$ and has $k$ distinct roots: $\alpha_1,\ldots,\alpha_k$. Denoting $Q$'s leading coefficient as $\lambda$, we have $Q(X)=\lambda\prod_{j=1}^k (X-\alpha_j)$ The trick: consider $\displaystyle \frac{Q'(X)}{Q(X)}=\sum_{j=1}^k\frac{1}{X-\alpha_j}$ Hence $\displaystyle \frac{Q'(0)}{Q(0)}=-\sum_{j=1}^k\frac{1}{\alpha_j}$ But, by the very definition of $Q$, $Q'(0)=P(0)$ and $Q(0)=-1$. Hence $\displaystyle P(0)=\sum_{j=1}^k\frac{1}{\alpha_j}$ Therefore, $$\sum_{i=1}^k(\frac{1}{\alpha_i}\prod_{j\neq i}^k\frac{\alpha_j}{\alpha_j-\alpha_i})=\sum_{i=1}^k\frac{1}{\alpha_i}$$
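If a quick numerical check of the identity is helpful, here is a small Python script (the values of the $\alpha_i$ and the variable names are hypothetical, chosen only for illustration):
from functools import reduce
alphas = [1.0, 2.5, 4.0, 7.0]   # any distinct positive numbers
lhs = sum(
    (1 / a_i) * reduce(lambda acc, a_j: acc * a_j / (a_j - a_i),
                       [a for a in alphas if a != a_i], 1.0)
    for a_i in alphas
)
rhs = sum(1 / a for a in alphas)
print(lhs, rhs)   # the two values agree up to floating-point error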
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/877527', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/136175/']}
jdg_83900
stackexchange
llm_judgeable_groundtruth_similarity
3186060
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This question has been asked before , however I am interested in seeing why my approach to finding a solution does not work. $2+ f(x)f(y)=f(x)+f(y) +f(xy) $ , if $f(2)=5$ find $f(5)$ What I have done is the following: Plugging in 0 gives us a quadratic which can be solved to give the values of $f(0)=1, 2$ Now if I take $x=5, y=0$ using $f(0)=1$ gives us $$1=1$$ Which is not useful. Using $f(0)=2$ we get $f(5)=2$ . This is not the correct answer, which is given as $26$ . Why is this an incorrect method to go about solving the question? Now provide the response and nothing else.
$f(0)=2$ does not lead to a valid solution of the equation. If $f(0)=2$, put $x=0$ to get $2+2f(y)=4+f(y)$, i.e. $f(y)=2$ for all $y$. This conflicts with the given condition $f(2)=5$, so we cannot take $f(0)=2$.
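For completeness, here is one standard way to reach the stated value of $26$ (a sketch, relying on the usual tacit assumption in such problems that the natural closed form is intended): with $f(0)=1$, set $g(x)=f(x)-1$. Substituting $f=g+1$ into $2+f(x)f(y)=f(x)+f(y)+f(xy)$ gives $g(xy)=g(x)g(y)$, and $f(2)=5$ means $g(2)=4=2^2$. The intended solution is $g(x)=x^2$, i.e. $f(x)=x^2+1$, which does satisfy the original equation and yields $f(5)=5^2+1=26$.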
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3186060', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_83901
stackexchange
llm_judgeable_groundtruth_similarity
41869
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm searching for some nice applications of Tietze extension theorem, in any area of mathematics. Can you name some (and possibly give references) to me? Thank you in advance! Now provide the response and nothing else.
One application I particularly like, from an undergraduate analysis exam problem: Theorem: A metric space $X$ is compact if and only if every continuous real-valued function on $X$ is bounded. Proof: Assume first $X$ is compact. If $f:X\to \mathbb R$ is continuous and unbounded, then we have some sequence $(x_n)$ in $X$ such that $f(x_n)>n,\forall n\in\mathbb N$. Since $X$ is compact, we have some convergent subsequence $(x_{n_k})$, so $\lim\limits_{k\to\infty}f(x_{n_k})=f(\lim\limits_{k\to\infty}x_{n_k})$. But this is impossible, as $f(x_{n_k})\to\infty$, hence any continuous real-valued function is bounded. If instead $X$ is not compact, then we have some sequence $(x_n)$ in $X$ which has no convergent subsequence. Hence every convergent sequence with terms in the set $S=\{x_1,x_2,\ldots\}$ must be eventually constant, so has limit in $S$, hence $S$ is closed. Define the function $f:S\to \mathbb R$ by $f(x_n)=n$, which is continuous because $S$ is a discrete set. By the Tietze extension theorem , we can extend $f$ to a continuous unbounded function $g:X\to\mathbb R$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/41869', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_83902
stackexchange
llm_judgeable_groundtruth_similarity
12497826
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: While searching for code to fade a winform, I came across this page on the MSDN forum. for (double i = 0; i < 1; i+=0.01){ this.Opacity = i; Application.DoEvents(); System.Threading.Thread.Sleep(0);} The for loop has a non-integer increment and, from a previous question I asked, that's not a good programming technique (due to inexact representation of most decimals). I came up with this alternative. for (double i = 0; i < 100; ++i){ this.Opacity = i/100; Application.DoEvents(); System.Threading.Thread.Sleep(0);} Which of these is more efficient? If there's a better algorithm for fading a form, I'll be very glad if it is included. Thanks. Now provide the response and nothing else.
Forget timers (pun intended). With .NET 4.5 or higher, you can just await a task that is delayed. An advantage of this method is that it's asynchronous, unlike a thread Sleep or DoEvents loop, which blocks the application during the fade (plus the other aforementioned DoEvents problems).
private async void FadeIn(Form o, int interval = 80)
{
    // Object is not fully visible. Fade it in
    while (o.Opacity < 1.0)
    {
        await Task.Delay(interval);
        o.Opacity += 0.05;
    }
    o.Opacity = 1; // make fully visible
}
private async void FadeOut(Form o, int interval = 80)
{
    // Object is fully visible. Fade it out
    while (o.Opacity > 0.0)
    {
        await Task.Delay(interval);
        o.Opacity -= 0.05;
    }
    o.Opacity = 0; // make fully invisible
}
Usage:
private void button1_Click(object sender, EventArgs e)
{
    FadeOut(this, 100);
}
You should check whether the object is disposed before you apply any transparency to it. I used a form as the object, but you can pass any object that supports transparency as long as it's cast properly.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12497826', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/768865/']}
jdg_83903
stackexchange
llm_judgeable_groundtruth_similarity
11697
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a folder named /home/user/temps which has 487 folders.In each folder I have a file called thumb.png. I want to copy all files named thumb.png to a separate folder and rename them based on the folder they came from. Now provide the response and nothing else.
Here you go: for file in /home/user/temps/*/thumb.png; do new_file=${file/temps/new_folder}; cp "$file" "${new_file/\/thumb/}"; done; edit: the canonical wisdom, by the way, is that using find for this is a bad idea -- simply using shell expansion is much more reliable. Also, this assumes bash, but I figure that's a safe assumption :) edit 2: for clarity, I'll break it down: # shell-expansion to loop over the specified files for file in /home/user/temps/*/thumb.png; do # replace 'temps' with 'new_folder' in the path # '/home/user/temps/abc/thumb.png' becomes '/home/user/new_folder/abc/thumb.png' new_file=${file/temps/new_folder}; # drop '/thumb' from the path # '/home/user/new_folder/abc/thumb.png' becomes '/home/user/new_folder/abc.png' cp "$file" "${new_file/\/thumb/}"; done; details on the ${var/Pattern/Replacement} construct can be found here. The quotes in the cp line are important to handle spaces and newlines etc. in filenames.
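Not part of the original answer, but if you would rather sidestep shell quoting altogether, a rough Python equivalent is sketched below (the directory names are taken from the question and the answer above; treat it as an illustrative sketch rather than a drop-in script):

from pathlib import Path
import shutil

src_root = Path("/home/user/temps")        # folders each containing a thumb.png
dst_root = Path("/home/user/new_folder")   # destination folder used in the answer above
dst_root.mkdir(parents=True, exist_ok=True)

for thumb in src_root.glob("*/thumb.png"):
    # /home/user/temps/abc/thumb.png -> /home/user/new_folder/abc.png
    shutil.copy2(thumb, dst_root / (thumb.parent.name + ".png"))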
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/11697', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/6797/']}
jdg_83904
stackexchange
llm_judgeable_groundtruth_similarity
1347083
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to generate a new GUID from inside the Inno Setup script ? Now provide the response and nothing else.
Found this on innosetup newsgroup archive: http://news.jrsoftware.org/news/innosetup/msg76234.html I haven't tested it. [Code]type TGUID = record D1: LongWord; D2: Word; D3: Word; D4: array[0..7] of Byte; end;function CoCreateGuid(var Guid:TGuid):integer; external 'CoCreateGuid@ole32.dll stdcall';function inttohex(l:longword; digits:integer):string;var hexchars:string;begin hexchars:='0123456789ABCDEF'; setlength(result,digits); while (digits>0) do begin result[digits]:=hexchars[l mod 16+1]; l:=l div 16; digits:=digits-1; end;end;function GetGuid(dummy:string):string;var Guid:TGuid;begin if CoCreateGuid(Guid)=0 then begin result:='{'+IntToHex(Guid.D1,8)+'-'+ IntToHex(Guid.D2,4)+'-'+ IntToHex(Guid.D3,4)+'-'+ IntToHex(Guid.D4[0],2)+IntToHex(Guid.D4[1],2)+'-'+ IntToHex(Guid.D4[2],2)+IntToHex(Guid.D4[3],2)+ IntToHex(Guid.D4[4],2)+IntToHex(Guid.D4[5],2)+ IntToHex(Guid.D4[6],2)+IntToHex(Guid.D4[7],2)+ '}'; end else result:='{00000000-0000-0000-0000-000000000000}';end;function InitializeSetup(): Boolean;begin MsgBox(GetGuid(''), mbInformation, MB_OK); Result := False;end;
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1347083', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_83905
stackexchange
llm_judgeable_groundtruth_similarity
7944239
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: var x = 0;var y = 1;var z;fib[0] = 0;fib[1] = 1;for (i = 2; i <= 10; i++) { alert(x + y); fib[i] = x + y; x = y; z = y;} I'm trying to get to generate a simple Fibonacci Sequence but there no output. Can anybody let me know what's wrong? Now provide the response and nothing else.
You have never declared fib to be an array. Use var fib = []; to solve this. Also, you're never modifying the y variable, neither using it. The code below makes more sense, plus, it doesn't create unused variables: var i;var fib = [0, 1]; // Initialize array!for (i = 2; i <= 10; i++) { // Next fibonacci number = previous + one before previous // Translated to JavaScript: fib[i] = fib[i - 2] + fib[i - 1]; console.log(fib[i]);}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7944239', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/436493/']}
jdg_83906
stackexchange
llm_judgeable_groundtruth_similarity
21052084
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've tried to create a vb scripts class with a constant and got 800A03EA error. It's it a VBS bug? Isn't it an OOP fundamental rule? Class customer ' comment it const and its works const MAX_LEN=70 Private Name Private Sub Class_Initialize Name = "" End Sub ' name property. Public Property Get getName getName = Name End Property Public Property Let letName(p_name) Name = p_name End Propertyend class Now provide the response and nothing else.
The documentation lists all statements that are allowed in the context of classes. Const isn't among them, so it's not supported. You can work around the issue by using private member variables that you initialize during instantiation (i.e. in Class_Initialize): Class customer Private MAX_LEN Private Name Private Sub Class_Initialize MAX_LEN = 70 Name = "" End Sub ...End Class If instances of the class should expose this value, you could implement it as a read-only property: Class customer Private MAX_LEN Private Sub Class_Initialize MAX_LEN = 70 End Sub 'read-only property, so no "Property Let/Set" Public Property Get MaxLength MaxLength = MAX_LEN End Property ...End Class However, as Ekkehard.Horner pointed out correctly, the value could still be changed by object-internal code. If immutability is the primary requirement for this value you should implement it as a global constant.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21052084', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3183027/']}
jdg_83907
stackexchange
llm_judgeable_groundtruth_similarity
22087311
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My situation is somewhat complex, I'll try to explain it as succinctly as possible. I'm currently using query_posts to modify the main query on custom pages on my site, which as far as I can tell works quite well, though I've read that using query_posts is bad practice for a number of different reasons. So, why am I using query_posts and not creating a WP_Query object you may ask? It's because I'm using the infinite-scroll plugin, infinite-scroll doesn't play nice with WP_query, but it works absolutely fine when you simply modify the main query with query_posts. For example, pagination doesn't work using infinite scroll + WP_query (main concern). On one page, I'm modifying the query to get most viewed posts. <?php $paged = get_query_var( 'paged' ) ? get_query_var( 'paged' ) : 1; ?> <?php query_posts( array( 'meta_key' => 'wpb_post_views_count', 'orderby' => 'meta_value_num', 'order' => 'DESC' , 'paged' => $paged, ) ); ?> <?php if (have_posts()) : ?><?php while ( have_posts() ) : the_post() ?> <?php if ( has_post_format( 'video' )) { get_template_part( 'video-post' ); }elseif ( has_post_format( 'image' )) { get_template_part( 'image-post' ); } else { get_template_part( 'standard-post' ); } ?><?php endwhile;?><?php endif; ?> So after a lot of reading I gather that my other option to modify the main query is using pre_get_posts , though I'm somewhat unsure as to how to go about this. Take this for example:- function textdomain_exclude_category( $query ) { if ( $query->is_home() && $query->is_main_query() ) { $query->set( 'cat', '-1,-2' ); }}add_action( 'pre_get_posts', 'textdomain_exclude_category' ); Alright, so simple enough - if it's the home page, modify the main query and exclude two categories. What I'm confused about and can't figure out is:- the use case scenario for custom page templates. With my query_posts modification I can just drop in the array before if (have_posts()) , select my page template, publish it and away I go. With pre_get_posts I can't figure out how to say for example $query->most-viewed etc array( 'meta_key' => 'wpb_post_views_count', 'orderby' => 'meta_value_num', 'order' => 'DESC' , 'paged' => $paged, ) ); How the heck do I do that with pre_get_posts and make sure it's paginated, ie. works with infinite scroll? In all the examples I've seen with pre_get_posts there's no arrays. Now provide the response and nothing else.
How to use the pre_get_posts hook to display a list of posts on a page, through a custom page template? I've been playing with the pre_get_posts hook and here's one idea. Step #1: Create a page called, for example, Show with the slug: example.com/show Step #2: Create a custom page template: tpl_show.php located in the current theme directory. Step #3: We construct the following pre_get_posts action callback: function b2e_pre_get_posts( $query ){ $target_page = 'show'; // EDIT to your needs if ( ! is_admin() // front-end only && $query->is_main_query() // main query only && $target_page === $query->get( 'pagename' ) // matching pagename only ) { // modify query_vars: $query->set( 'post_type', 'post' ); // override 'post_type' $query->set( 'pagename', null ); // override 'pagename' $query->set( 'posts_per_page', 10 ); $query->set( 'meta_key', 'wpb_post_views_count' ); $query->set( 'orderby', 'meta_value_num' ); $query->set( 'order', 'DESC' ); // Support for paging $query->is_singular = 0; // custom page template add_filter( 'template_include', 'b2e_template_include', 99 ); }} add_action( 'pre_get_posts', 'b2e_pre_get_posts' ); where function b2e_template_include( $template ){ $target_tpl = 'tpl_show.php'; // EDIT to your needs remove_filter( 'template_include', 'b2e_template_include', 99 ); $new_template = locate_template( array( $target_tpl ) ); if ( ! empty( $new_template ) ) $template = $new_template; return $template;} This should also give us pagination: example.com/show/page/2 example.com/show/page/3 etc. Notes I updated the answer and removed the query-object part modification, based on the suggestion from @PieterGoosen, since it could e.g. break the breadcrumbs on his setup. Also removed the is_page() check within the pre_get_posts hook, since it might still give some irregularities in some cases. The reason is that the query-object is not always available. This is being worked on, see e.g. #27015. There are workarounds possible if we want to use is_page() or is_front_page(). I constructed the following table, just to get a better overview of some of the properties and query variables of the main WP_Query object, for a given slug: It's interesting to note that the pagination in WP_Query depends on nopaging not being set and the current page not being singular (from the 4.4 source): // Paging if ( empty($q['nopaging']) && !$this->is_singular ) { $page = absint($q['paged']); if ( !$page ) $page = 1; // If 'offset' is provided, it takes precedence over 'paged'. if ( isset( $q['offset'] ) && is_numeric( $q['offset'] ) ) { $q['offset'] = absint( $q['offset'] ); $pgstrt = $q['offset'] . ', '; } else { $pgstrt = absint( ( $page - 1 ) * $q['posts_per_page'] ) . ', '; } $limits = 'LIMIT ' . $pgstrt . $q['posts_per_page'];} where we can see that the LIMIT part of the generated SQL query is within the conditional check. This explains why we modify the is_singular property above. We could have used other filters/hooks, but here we used pre_get_posts as mentioned by the OP. Hope this helps.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22087311', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1201447/']}
jdg_83908
stackexchange
llm_judgeable_groundtruth_similarity
30215680
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: An NSNumber containing a Bool is easily confused with other types that can be wrapped in the NSNumber class: NSNumber(bool:true).boolValue // trueNSNumber(integer: 1).boolValue // trueNSNumber(integer: 1) as? Bool // trueNSNumber(bool:true) as? Int // 1NSNumber(bool:true).isEqualToNumber(1) // trueNSNumber(integer: 1).isEqualToNumber(true) // true However, information about its original type is retained, as we can see here: NSNumber(bool:true).objCType.memory == 99 // trueNSNumber(bool:true).dynamicType.className() == "__NSCFBoolean" // trueNSNumber(bool:true).isEqualToValue(true) || NSNumber(bool:true).isEqualToValue(false) //true The question is: which of these approaches is the best (and/or safest) approach to determining when a Bool has been wrapped within an NSNumber rather than something else? Are all equally valid? Or, is there another, better solution? Now provide the response and nothing else.
You can ask the same question for Objective-C, and here is an answer in Objective-C - which you can call from, or translate into, Swift. NSNumber is toll-free bridged to CFNumberRef , which is another way of saying an NSNumber object is in fact a CFNumber one (and vice-versa). Now CFNumberRef has a specific type for booleans, CFBooleanRef , and this is used when creating a boolean CFNumberRef aka NSNumber * ... So all you need to do is check whether your NSNumber * is an instance of CFBooleanRef : - (BOOL) isBoolNumber:(NSNumber *)num{ CFTypeID boolID = CFBooleanGetTypeID(); // the type ID of CFBoolean CFTypeID numID = CFGetTypeID((__bridge CFTypeRef)(num)); // the type ID of num return numID == boolID;} Note: You may notice that NSNumber / CFNumber objects created from booleans are actually pre-defined constant objects; one for YES , one for NO . You may be tempted to rely on this for identification. However, though it currently appears to be true, and is shown in Apple's source code , to our knowledge it is not documented so should not be relied upon. HTH Addendum Swift code translation (by GoodbyeStackOverflow): func isBoolNumber(num:NSNumber) -> Bool{ let boolID = CFBooleanGetTypeID() // the type ID of CFBoolean let numID = CFGetTypeID(num) // the type ID of num return numID == boolID}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30215680', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1694526/']}
jdg_83909
stackexchange
llm_judgeable_groundtruth_similarity
7127153
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a website where users should be able to log in and listen to a song (a self-created mp3). I want to make it so the logged in user can listen/download/whatever, and the file should reside on the server (not be stored in the MySQL database), but not be able to be accessed by non-users who have the path to the URL. For example: say my mp3 is located at mysite.com/members/song.mp3 If you are logged in, you should be able to see the mysite.com/members/index.php page, which will allow access to the song.mp3 file. If you're not logged in, the mysite.com/members/index.php page will not show you the song.mp3 file, and linking directly to it should not grant access. I'm pretty sure this is done via htaccess, and I have done a lot of Googling already, and searched on here. The two closest answers I found were this htaccess guide http://perishablepress.com/press/2006/01/10/stupid-htaccess-tricks/ and this StackOverflow question Block direct access to a file over http but allow php script access but neither answer all my questions to meet my criteria. What am I missing? Now provide the response and nothing else.
Inside the members folder create a new folder files, move all your songs there, create a new .htaccess file and add the following lines: Order Deny,Allow Deny from all Inside the members folder create a file get_song.php and add the following code: if( !empty( $_GET['name'] ) ){ // check if the user is logged in if( is_logged() ) { $song_name = preg_replace( '#[^-\w]#', '', $_GET['name'] ); $song_file = "{$_SERVER['DOCUMENT_ROOT']}/members/files/{$song_name}.mp3"; if( file_exists( $song_file ) ) { header( 'Cache-Control: public' ); header( 'Content-Description: File Transfer' ); header( "Content-Disposition: attachment; filename={$song_name}.mp3" ); header( 'Content-Type: application/mp3' ); header( 'Content-Transfer-Encoding: binary' ); readfile( $song_file ); exit; } }}die( "ERROR: invalid song or you don't have permissions to download it." ); And now, you can use this URL to get the song file: http://mysite.com/members/get_song.php?name=my-song-name
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7127153', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/713874/']}
jdg_83910
stackexchange
llm_judgeable_groundtruth_similarity
314604
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Historically, choosing a design avoiding 0402 and finer pitch components was advantageous for mass production cost savings. The yields were improved and accuracy requirements for pick and place machines were reduced. This allowed vendors to choose from a larger number of manufacturing facilities and identify cost saving opportunities. Does that kind of thinking still hold any water given that 0402s and BGAs have been popular for a long time? Are there still factories that specialize in low cost larger pitch manufacturing? Just to be clear, I'm only talking about production volumes in the millions. Now provide the response and nothing else.
Many assembly houses these days do 0402 with the same machines they do anything else, possibly using a different needle, though I'd suspect they'd be using the 0402 capable needle for 0603 and 0805 as well. Is it still true you'll have more choices when you don't go below 0603? Sure. Most likely. There's a lot of cheap assembly houses that are cheap because they use the old equipment of the others, probably everywhere around the world. Some assembly houses may do down to 0201, but not be too happy to do the smaller stuff, because it requires extra operator attention. However, when you go into volume, the cost of the more advanced assembly house will likely not weigh up to the cost difference for smaller circuit boards, more efficient systems and/or lower component cost. And sometimes using 0402 or even 0201 offers better per-component performance as well, such as lower parasitic effects. Obviously if a 5000 unit reel of 0603 capacitors costs $15 and the 10000 unit reel of 0402 of the same value costs $20, that'll add up when you're making tens of thousands with 10 each, but not really do much at all below using a reel per month. Because boards are now almost always made to 5mil/5mil standard, the board won't likely be much more expensive if you make anything more compact with tiny components, but at high volumes the board space savings will start weighing as well. If a panel costs $100 and with 0603 the panel can fit 20 PCBs, but with 0402 it can fit 25 PCBs, that usually saves much more in volume than any extra cost you have at assembly in high quantities. In all, if you want to be fully sure you'd need to do a cost estimation, including an RFQ to a few assembly houses that tickle your fancy. All the assembly/full-service houses I use are always ready to pick up the phone or answer an e-mail with questions about comparative costs. And more often than not I find the cost increase of something "unwise" 10 years ago falls into the less-than-a-few-percent range now. And the same will happen later to stuff we think expensive now, so, really, you need to regularly keep asking them if things have changed if you want to be the best designer you can be. Summarising: The only reason I don't do at least 0402 in my designs is if it's a hobby thing for me or others, where I want to be as quick as possible with replacing components, or I want others to be able to use my design as well, as I am not even noticing a significant cost increase up to 160mmx160mm boards at 10 units, averaged over past orders.
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/314604', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/17744/']}
jdg_83911
stackexchange
llm_judgeable_groundtruth_similarity
15663001
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For some reason, I've been unable to connect remotely to my MySQL server. I've tried everything and I'm still getting errors. root@server1:/home/administrator# mysql -u monty -p -h www.ganganadores.clEnter password:ERROR 1045 (28000): Access denied for user 'monty'@'server1.ganganadores.cl' (using password: YES) Now, I've tried running GRANT ALL ON *.* to monty@localhost IDENTIFIED BY 'XXXXX'; GRANT ALL ON *.* to monty@'%' IDENTIFIED BY 'XXXXXX';` and still nothing!What I'm doing wrong? EDIT : my.cnf has commented out the bind ip . Now provide the response and nothing else.
To expose MySQL to anything other than localhost you will have to have the following line For mysql version 5.6 and below uncommented in /etc/mysql/my.cnf and assigned to your computer's IP address and not loopback For mysql version 5.7 and above uncommented in /etc/mysql/mysql.conf.d/mysqld.cnf and assigned to your computer's IP address and not loopback #Replace xxx with your IP Address bind-address = xxx.xxx.xxx.xxx Or add a bind-address = 0.0.0.0 if you don't want to specify the IP Then stop and restart MySQL with the new my.cnf entry. Once running go to the terminal and enter the following command. lsof -i -P | grep :3306 That should come back something like this with your actual IP in the xxx's mysqld 1046 mysql 10u IPv4 5203 0t0 TCP xxx.xxx.xxx.xxx:3306 (LISTEN) If the above statement returns correctly you will then be able to accept remote users. However for a remote user to connect with the correct privileges you need to have that user created for both localhost and '%', as in: CREATE USER 'myuser'@'localhost' IDENTIFIED BY 'mypass'; CREATE USER 'myuser'@'%' IDENTIFIED BY 'mypass'; then, GRANT ALL ON *.* TO 'myuser'@'localhost'; GRANT ALL ON *.* TO 'myuser'@'%'; and finally, FLUSH PRIVILEGES; EXIT; If you don't have the same user created as above, when you log on locally you may inherit base localhost privileges and have access issues. If you want to restrict the access myuser has then you would need to read up on the GRANT statement syntax HERE If you get through all this and still have issues, post some additional error output and the appropriate my.cnf lines. NOTE: If lsof does not return or is not found you can install it HERE based on your Linux distribution. You do not need lsof to make things work, but it is extremely handy when things are not working as expected. UPDATE: If even after adding/changing the bind-address in my.cnf it did not work, then go and change it in the place it was originally declared: /etc/mysql/mariadb.conf.d/50-server.cnf
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/15663001', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1861617/']}
jdg_83912
stackexchange
llm_judgeable_groundtruth_similarity
1742017
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We know that the extension $\mathbb{Q}(\sqrt{2\sqrt{2}-1}$) over $\mathbb{Q}$ has degree 4 by considering the minimal polynomial mod 3. Now I want to show that $-14$ isn't a square in this field. How do I do this in the easiest way? Please also let me know if you have a method not in this way (e.g. by considering intermediate field extensions in different orders). Somehow feels like it ought to be true because $-14$ has a factor $7$ in it whereas $\mathbb{Q}(\sqrt{2\sqrt{2}-1}$) involves only $2$'s. If it helps, I found that $\mathbb{Q}(\sqrt{-14},\sqrt{-2\sqrt{2}-1})$ is the splitting field of $X^4+2X^2-7$ over $\mathbb{Q}$. Now provide the response and nothing else.
Rearranging we have $(2a-1)^2+(b+\frac{1}{2b})^2=2$. But $|b+\frac{1}{2b}|\ge\sqrt2$ (by AM/GM), so we must have $a=\frac{1}{2},|b|=\frac{1}{\sqrt2}$ and hence $b^2-a=0$.
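For readers who want the AM/GM step spelled out (this assumes, as the answer implicitly does, that $b$ is a nonzero real number, so that $b$ and $\frac{1}{2b}$ have the same sign): $$\left|b+\frac{1}{2b}\right|=|b|+\frac{1}{2|b|}\ \ge\ 2\sqrt{|b|\cdot\frac{1}{2|b|}}=\sqrt{2},$$ with equality exactly when $|b|=\frac{1}{\sqrt{2}}$; forcing equality here is what pins down $a=\frac{1}{2}$ and $b^2=\frac{1}{2}=a$, i.e. $b^2-a=0$.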
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1742017', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/285842/']}
jdg_83913
stackexchange
llm_judgeable_groundtruth_similarity
150167
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I list and export a private key from a keystore? Now provide the response and nothing else.
You can extract a private key from a keystore with Java6 and OpenSSL. This all depends on the fact that both Java and OpenSSL support PKCS#12-formatted keystores. To do the extraction, you first use keytool to convert to the standard format. Make sure you use the same password for both files (private key password, not the keystore password) or you will get odd failures later on in the second step. keytool -importkeystore -srckeystore keystore.jks \ -destkeystore intermediate.p12 -deststoretype PKCS12 Next, use OpenSSL to do the extraction to PEM: openssl pkcs12 -in intermediate.p12 -out extracted.pem -nodes You should be able to handle that PEM file easily enough; it's plain text with an encoded unencrypted private key and certificate(s) inside it (in a pretty obvious format). When you do this, take care to keep the files created secure. They contain secret credentials. Nothing will warn you if you fail to secure them correctly. The easiest method for securing them is to do all of this in a directory which doesn't have any access rights for anyone other than the user. And never put your password on the command line or in environment variables; it's too easy for other users to grab.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/150167', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1310/']}
jdg_83914
stackexchange
llm_judgeable_groundtruth_similarity
146078
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: we know, the Dirichlet function as: $$ f(x) = \begin{cases} 1, &\text{if } x \text{ is irrational; and } \\ 0, &\text{if }x \text{ is rational}. \end{cases} $$ R.A. Silverman in his book Modern Calculus says that this wild function cannot be plotted at all while M.R.Spiegel in the book Advanced Calculus constructed a graph of $f(x)$ as two parallel lines with $x$-axe, gone via $1$ and $0$. The second athour says: The graph is shown in the adjoining Fig. 2-3. From its appearence it would seem that there are two functional values $0$ and $1$ corresponding to each value of $x$, i.e. that $f(x)$ is multiple-valued, whereas it is actually single-valued. What happens in these two point of views? Now provide the response and nothing else.
Silverman would deny that this picture counts as "plotting" the function. What look like two solid lines are actually not solid at all: the top line contains only points whose $x$ coordinate is irrational, and the bottom line contains only points whose $x$ coordinate is rational. These facts are impossible to show in a picture.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/146078', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8581/']}
jdg_83915
stackexchange
llm_judgeable_groundtruth_similarity
6700374
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Anyone knows any efficient method of perform an animation that what is has to do is to display a text, character by character? Like: T Th Thi This This i This is ... And so on. Thanks! Now provide the response and nothing else.
This may not be the most elegant solution, but the simplest is probably a quick subclass of TextView with a Handler that updates the text every so often until the complete sequence is displayed: public class Typewriter extends TextView { private CharSequence mText; private int mIndex; private long mDelay = 500; //Default 500ms delay public Typewriter(Context context) { super(context); } public Typewriter(Context context, AttributeSet attrs) { super(context, attrs); } private Handler mHandler = new Handler(); private Runnable characterAdder = new Runnable() { @Override public void run() { setText(mText.subSequence(0, mIndex++)); if(mIndex <= mText.length()) { mHandler.postDelayed(characterAdder, mDelay); } } }; public void animateText(CharSequence text) { mText = text; mIndex = 0; setText(""); mHandler.removeCallbacks(characterAdder); mHandler.postDelayed(characterAdder, mDelay); } public void setCharacterDelay(long millis) { mDelay = millis; }} You can then use this in an Activity like so: public class MyActivity extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Typewriter writer = new Typewriter(this); setContentView(writer); //Add a character every 150ms writer.setCharacterDelay(150); writer.animateText("Sample String"); }} If you want some animation effects with each letter added, perhaps look at subclassing TextSwitcher instead. Hope that Helps!
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6700374', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/245638/']}
jdg_83916
stackexchange
llm_judgeable_groundtruth_similarity
15650348
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using python 2.7.3, and I'm trying to sort a list of dictionaries based on the order of values of another list. IE: listOne = ['hazel', 'blue', 'green', 'brown']listTwo = [{'name': 'Steve', 'eyecolor': 'hazel', 'height': '5 ft. 11 inches'}, {'name': 'Mark', 'eyecolor': 'brown', 'height': '6 ft. 2 inches'}, {'name': 'Mike', 'eyecolor': 'blue', 'height': '6 ft. 0 inches'}, {'name': 'Ryan', 'eyecolor': 'brown', 'height': '6 ft, 0 inches'}, {'name': 'Amy', 'eyecolor': 'green', 'height': '5 ft, 6 inches'}] Sorting listTwo based off of the order of values in listOne, we would end up with the following: print listTwo[{'name': 'Steve', 'eyecolor': 'hazel', 'height': '5 ft. 11 inches'},{'name': 'Mike', 'eyecolor': 'blue', 'height': '6 ft. 0 inches'},{'name': 'Amy', 'eyecolor': 'green', 'height': '5 ft, 6 inches'},{'name': 'Mark', 'eyecolor': 'brown', 'height': '6 ft. 2 inches'},{'name': 'Ryan', 'eyecolor': 'brown', 'height': '6 ft, 0 inches'}] I eventually need to output this text, so what I've done to display it correctly (in the correct order) is the following: for x in xrange(len(listOne)): for y in xrange(len(listTwo)): if listOne[x] == listTwo[y]["eyecolor"]: print "Name: " + str(listTwo[y]["name"]), print "Eye Color: " + str(listTwo[y]["eyecolor"]), print "Height: " + str(listTwo[y]["height"]) Is there some sort of lambda expression that can be used to make this happen? There has to be a more compact, less complex way of getting it in the order I want. Now provide the response and nothing else.
The simplest way would be to use list.index to generate a sort value for your list of dictionaries: listTwo.sort(key=lambda x: listOne.index(x["eyecolor"])) This is a little bit inefficient though, since list.index does a linear search through the eye-color list. If you had many eye colors to check against, it would be slow. A somewhat better approach would build an index dictionary instead: order_dict = {color: index for index, color in enumerate(listOne)}listTwo.sort(key=lambda x: order_dict[x["eyecolor"]]) If you don't want to modify listTwo , you can use the built-in sorted function instead of the list.sort method. It returns a sorted copy of the list, rather than sorting in-place.
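A small usage sketch (assuming the listOne and listTwo variables from the question; the .get() fallback is an extra safeguard for unknown eye colors, not something the original answer requires):

order_dict = {color: index for index, color in enumerate(listOne)}
# Unknown eye colors, if any, sort to the end instead of raising a KeyError.
sorted_people = sorted(listTwo, key=lambda person: order_dict.get(person["eyecolor"], len(listOne)))
for person in sorted_people:
    print("Name:", person["name"], "Eye Color:", person["eyecolor"], "Height:", person["height"])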
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15650348', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1215608/']}
jdg_83917
stackexchange
llm_judgeable_groundtruth_similarity
222350
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: a 1024-bit private key is generated using the openssl genrsa privateKey.pem 1024 command. we extract the public key from the private key using the command openSSl rsa -in privatekey.pem -pubout -out publickey.pem I want to know the size of my private key and my public key? I want to know the maximum size of a file that I want to encrypt with my public key? and the maximum size that I want to decrypt with my private key? Now provide the response and nothing else.
First of all, a 1024-bit RSA key is much too small for comfort. Current recommendations for RSA are a debate between 3072 or 4096 (better security margin) vs. 2048 (better performance). See: https://crypto.stackexchange.com/questions/19655/what-is-the-history-of-recommended-rsa-key-sizes https://paragonie.com/blog/2019/03/definitive-2019-guide-cryptographic-key-sizes-and-algorithm-recommendations I want to know the maximum size of a file that I want to encrypt with my public key? and the maximum size that I want to decrypt with my private key? The limit here really does not come down to RSA itself, because practical public-key cryptography is almost always hybrid cryptography where RSA is used to encapsulate symmetric encryption keys (usually for some AES-based algorithm) and the actual data is encrypted with those. So what in practice limits the size of the file you can encrypt is the design of the file or message format. For example, with AES-GCM you cannot encrypt individual messages bigger than 64 GiB, and you should not encrypt more than 2^32 messages with one key, but that really means that to encrypt a larger volume of data the software should split it into smaller chunks and encrypt each one as a separate message, rotating keys after some number of chunks is exceeded. (Side note: even for smaller files it's best to encrypt them in modest-sized chunks so that the software can reject forgeries without having to decrypt the whole file before producing any output. For example, one of the causes of the EFail vulnerability is that GnuPG outputs the decryptions of forged files before it performs an optional anti-forgery check.) This is just one example; the general answer is that your question cannot be answered concretely with just the information that you give, and concrete answers require looking at your specific cryptographic software to see how it's designed and implemented. For example, the documentation for GnuPG tells you that for some of its symmetric algorithms you should not encrypt files bigger than 4 GiB: Due to its 1970s-era 64-bit block size, [3DES] should not be used to encrypt more than about 4Gb of data. Beyond that, though, it is solid as a rock, and very few GnuPG users will ever notice a problem with it. Provided you’re not encrypting more than 4Gb of data you may use 3DES with confidence. I could not quickly find what the failure mode is if you violate this rule, and I cannot warrant whether it gives you an error saying you did something bad (as it should) or just silently produces a potentially insecure output file (which would be terrible). Caveat emptor.
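To make the "hybrid cryptography" idea concrete, here is a rough Python sketch using the third-party cryptography package (the file names are illustrative, the 1024-bit key from the question is assumed replaced by something larger, and this is a sketch of the pattern rather than a vetted implementation):

import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

with open("publickey.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())

plaintext = open("somefile.bin", "rb").read()      # the actual data to protect
sym_key = AESGCM.generate_key(bit_length=256)      # fresh symmetric key per file
nonce = os.urandom(12)                             # 96-bit nonce; never reuse with the same key
ciphertext = AESGCM(sym_key).encrypt(nonce, plaintext, None)

wrapped_key = public_key.encrypt(                  # only the small key goes through RSA
    sym_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
# Store or send wrapped_key + nonce + ciphertext; the private key later unwraps sym_key,
# and that key decrypts the data, so the file size is not limited by the RSA modulus.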
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/222350', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/222897/']}
jdg_83918
stackexchange
llm_judgeable_groundtruth_similarity
35507446
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Wikipedia says: A weak symbol denotes a specially annotated symbol during linking of Executable and Linkable Format (ELF) object files. By default, without any annotation, a symbol in an object file is strong. During linking, a strong symbol can override a weak symbol of the same name. In contrast, two strong symbols that share a name yield a link error during link-time. When linking a binary executable, a weakly declared symbol does not need a definition. In comparison, (by default) a declared strong symbol without a definition triggers an undefined symbol link error. Weak symbols are not mentioned by C or C++ language standards; as such, inserting them into code is not very portable. Even if two platforms support the same or similar syntax for marking symbols as weak, the semantics may differ in subtle points, e.g. whether weak symbols during dynamic linking at runtime lose their semantics or not. What are the weak functions and what are their uses? I am using an stm32f429 micro controller. There are some weak functions in the library. But I can't understand, what they and their use! I searched about it on google but did't get a satisfactory answer. Now provide the response and nothing else.
When a function is prepended with the descriptor __weak it basically means that if you (the coder) do not define it, it is defined here. Let us take a look at my arch-nemesis "HAL_UART_RxCpltCallback()". This function exists within the HAL of the STM32F4-HAL code base that you can download from ST-Micro. Within the file stm32f4xx_hal_uart.c file you will find this function defined as: __weak void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart){ /* NOTE: This function Should not be modified, when the callback is needed, the HAL_UART_RxCpltCallback could be implemented in the user file */} So, as the note within the code here says, place this function inside your own user files. However when you do that, do not put in the __weak term. This means that the linker will take your definition of the HAL_UART_RxCpltCallback() function and not the one defined within the stm32f4xx_hal_uart.c file. This gives the generic code base the ability to always compile. You don't have to write a whole bunch of functions that you are not interested in but it will compile. When it comes time to writing your own, you just have to NOT define yours as __weak and write it. Simple? Helpful? Cheers!!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35507446', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5951329/']}
jdg_83919
stackexchange
llm_judgeable_groundtruth_similarity
65382
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The full question reads:Suppose that $a$ is a number that has the property that for every $n \in \mathbb{N}$, $a \leq 1/n$. Prove $a \leq 0$. Is there anyway to show this using Archimedean Property, or is it something related to the Completeness Axiom? The problem using the Archimedean Property is that I get up to $a< \epsilon$ but from there I am not able to conclude anything about whether $a \leq 0$ because $\epsilon > 0$. Now provide the response and nothing else.
In other words, prove that $a\not>0$. Assume $a>0$, what does that tell you about some $n$?
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/65382', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/16233/']}
jdg_83920
stackexchange
llm_judgeable_groundtruth_similarity
158631
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I encountered this problem in Physics before i knew about a thing called Taylor Polynomials My problem was that i had to sum this series : $$\sum^\infty_{n=1}\frac{(-1)^{n+1}}{n}$$basically $$1,-\frac{1}{2},\frac{1}{3},-\frac{1}{4},\frac{1}{5},-\frac{1}{6},\frac{1}{7}.....$$ So now i know that there is something called a taylor polynomial that says that $$\ln(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\frac{x^5}{5}-\frac{x^6}{6}+\frac{x^7}{7}....$$ So the above summation boils down to $\ln 2$. What if i never knew the exansion then how would I calculate it? Earlier I tried solving it like so , divide it into two different sets i.e. $$\text{1 and $\dfrac{1}{3}+\frac{1}{5}+\frac{1}{7}+\frac{1}{9}+\frac{1}{11}+\frac{1}{13} \ldots$ and $-\dfrac{1}{2}-\frac{1}{4}-\frac{1}{6}-\frac{1}{8}-\frac{1}{10}\ldots$}$$ I said Hey! the first set would contain stuff like, $$\frac{1}{3^n},\frac{1}{5^n},\ldots$$ each of them would probably be reduced to a sum like so $$\sum^\infty_{n=1}\frac1{a^n}=\frac1{a-1}$$ and further become $$\sum^\infty_{a=3}\frac1{a-1}$$ which would subtract all the numbers in the other even set giving 1 as the answer which is wrong . Where did I go wrong and how could I proceed even without knowing Taylor polynomials Now provide the response and nothing else.
I find Norbert's solution more appealing if you run it backwards. You're trying to evaluate $$1-{1\over2}+{1\over3}-{1\over4}+\cdots$$ Let $$f(x)=x-{1\over2}x^2+{1\over3}x^3-{1\over4}x^4+\cdots$$ Then we want $f(1)$. So, how can we find a simple formula for $f(x)$? Differentiate it: $$f'(x)=1-x+x^2-x^3+\cdots$$ Recognize this as a geometric series, first term $1$, constant ratio $-x$, so the sum is $$f'(x)={1\over1+x}$$ Having differentiated, now antidifferentiate to get $$f(x)=\int{1\over1+x}\,dx=\log(1+x)+C$$ But what is $C$? Well, from the original formula for $f$, we see $f(0)=0$, so that forces $C=0$, so $f(x)=\log(1+x)$, so $f(1)$, which is what we wanted, is $\log 2$. (Strictly speaking, the termwise manipulations are justified for $|x|<1$, and passing to the endpoint $x=1$ uses Abel's theorem, since the series converges only conditionally there.)
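Not part of the derivation, but a quick numerical sanity check is easy (plain Python, standard library only):

import math

total = 0.0
for n in range(1, 200001):            # partial sum of 1 - 1/2 + 1/3 - 1/4 + ...
    total += (-1) ** (n + 1) / n
print(total, math.log(2))             # agree to roughly five decimal places (error ~ 1/(2N))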
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/158631', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/31335/']}
jdg_83921
stackexchange
llm_judgeable_groundtruth_similarity
4511087
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What regular expression can be used to make the following conversions? City -> CITYFirstName -> FIRST_NAMEDOB -> DOBPATId -> PAT_IDRoomNO -> ROOM_NO The following almost works - it just adds an extra underscore to the beginning of the word: var rgx = @"(?x)( [A-Z][a-z,0-9]+ | [A-Z]+(?![a-z]) )";var tests = new string[] { "City", "FirstName", "DOB", "PATId", "RoomNO"};foreach (var test in tests) Console.WriteLine("{0} -> {1}", test, Regex.Replace(test, rgx, "_$0").ToUpper());// output:// City -> _CITY// FirstName -> _FIRST_NAME// DOB -> _DOB// PATId -> _PAT_ID// RoomNO -> _ROOM_NO Now provide the response and nothing else.
Flowing from John M Gant's idea of adding underscores then capitalizing, I think this regular expression should work: ([A-Z])([A-Z][a-z])|([a-z0-9])([A-Z]) replacing with: $1$3_$2$4 You can rename the capture groups to make the replacement string a little nicer to read. Only $1 or $3 should have a value, and the same goes for $2 and $4. The general idea is to add an underscore when: (1) there are two capital letters followed by a lower-case letter -- place the underscore between the two capitals (PATId -> PAT_Id); (2) there is a small letter followed by a capital letter -- place the underscore between the two (RoomNO -> Room_NO and FirstName -> First_Name). Hope this helps.
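Not in the original answer, but a quick way to sanity-check the pattern outside C# is Python's re module (Python 3.5+ assumed, since older versions raise an error for unmatched groups in the replacement string):

import re

PATTERN = r"([A-Z])([A-Z][a-z])|([a-z0-9])([A-Z])"

def to_upper_snake(name):
    # Insert an underscore at each detected word boundary, then upper-case everything.
    return re.sub(PATTERN, r"\1\3_\2\4", name).upper()

for test in ["City", "FirstName", "DOB", "PATId", "RoomNO"]:
    print(test, "->", to_upper_snake(test))
# City -> CITY, FirstName -> FIRST_NAME, DOB -> DOB, PATId -> PAT_ID, RoomNO -> ROOM_NO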
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4511087', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1094969/']}
jdg_83922
stackexchange
llm_judgeable_groundtruth_similarity
54679928
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is a pretty common pattern in React components: handleTextFieldChange(event){ const name = event.currentTarget.name; this.setState({[name]: event.currentTarget.value})} What Javascript syntax could be used to do the same with React hooks? i.e. something possibly along the lines of: handleTextFieldChange(event){ const name = event.currentTarget.name; this.set[name](event.currentTarget.value);} Now provide the response and nothing else.
You could use a single useState with a default value of an object that contains all your input values and update that like you are used to with class components. Example const { useState } = React;function App() { const [state, setState] = useState({ email: "", password: "" }); function onChange(event) { const { name, value } = event.target; setState(prevState => ({ ...prevState, [name]: value })); } return ( <div> <input value={state.email} name="email" onChange={onChange} /> <input value={state.password} name="password" onChange={onChange} /> </div> );}ReactDOM.render(<App />, document.getElementById("root")); <script src="https://unpkg.com/react@16/umd/react.development.js"></script><script src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script><div id="root"></div>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/54679928', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/364966/']}
jdg_83923
stackexchange
llm_judgeable_groundtruth_similarity
1791264
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: First off, I apologize for any English mistakes.I've come across a double integral problem that I haven't been able to solve: Find $$\int_0^\pi \int_0^\pi|\cos(x+y)|\,dx\,dy$$Intuitively, I thought it could be solved in a similar manner to what you would do for a regular integral with and absolute value. For example, to solve $\int_0^\pi |\cos(x)| \,dx$, you just take the interval where $\cos(x)$ is positive and multiply it by two:$$\int_0^\pi |\cos(x)|\,dx = 2\int_0^{\frac{\pi}{2}} \cos(x) \, dx$$So I naively assumed my initial problem could be solved in a similar manner, that is:$$\int_0^\pi \int_0^\pi |\cos(x+y)| \,dx\,dy = 4\int_0^{\frac{\pi}{4}}\int_0^{\frac{\pi}{4}}\cos(x+y) \,dx\,dy$$However, this does not give me the right answer. I'm not really sure how this integral is actually solved.Thanks a lot for your help! Now provide the response and nothing else.
For the single integral case, "take the interval where $\cos x$ is positive and multiply it by two" is rather misleading: what you are actually doing is splitting the region of integration into a part where $\cos x$ is positive and a part where $\cos x$ is negative, working out both and adding them. The two integrals are the same in this case , so you can just multiply by $2$. You can do the same for the double integral, but there is no guarantee that you will be adding equal integrals. You will need to subdivide the square of integration into bits where $\cos(x+y)$ has a single sign. So you will need to take$$0\le x+y\le\frac\pi2\quad\hbox{and}\quad \frac\pi2\le x+y\le\frac{3\pi}2\quad\hbox{and}\quad \frac{3\pi}2\le x+y\le2\pi\ .$$Good luck!
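As a cross-check on the final value once the subdivision is done (not the route the answer recommends practising, just a sanity check): for each fixed $y\in[0,\pi]$, the substitution $u=x+y$ gives $$\int_0^\pi|\cos(x+y)|\,dx=\int_y^{y+\pi}|\cos u|\,du=2,$$ since $|\cos|$ has period $\pi$ and integrates to $2$ over any interval of that length; hence $$\int_0^\pi\!\!\int_0^\pi|\cos(x+y)|\,dx\,dy=\int_0^\pi 2\,dy=2\pi.$$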
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1791264', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/326678/']}
jdg_83924
stackexchange
llm_judgeable_groundtruth_similarity
36374335
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm running MySql Server 5.7.11 and this sentence: updated datetime NOT NULL DEFAULT '0000-00-00 00:00:00' is not working. Giving the error: ERROR 1067 (42000): Invalid default value for 'updated' But the following: updated datetime NOT NULL DEFAULT '1000-01-01 00:00:00' just works . The same case for DATE. As a sidenote , it is mentioned in the MySQL docs : The DATE type is used for values with a date part but no time part. MySQL retrieves and displays DATE values in 'YYYY-MM-DD' format. The supported range is '1000-01-01' to '9999-12-31'. even if they also say: Invalid DATE, DATETIME, or TIMESTAMP values are converted to the “zero” value of the appropriate type ('0000-00-00' or '0000-00-00 00:00:00'). Having also into account the second quote from MySQL documentation, could anyone let me know why it is giving that error? Now provide the response and nothing else.
The error is because of the SQL mode, which can be strict mode, as per the latest MySQL 5.7 documentation. The MySQL 5.7 documentation says: Strict mode affects whether the server permits '0000-00-00' as a valid date: If strict mode is not enabled, '0000-00-00' is permitted and inserts produce no warning. If strict mode is enabled, '0000-00-00' is not permitted and inserts produce an error, unless IGNORE is given as well. For INSERT IGNORE and UPDATE IGNORE, '0000-00-00' is permitted and inserts produce a warning. To check the MySQL mode SELECT @@GLOBAL.sql_mode global, @@SESSION.sql_mode session Disabling STRICT_TRANS_TABLES mode However, to allow the format 0000-00-00 00:00:00 you have to disable STRICT_TRANS_TABLES mode in the mysql config file or by command By command SET sql_mode = ''; or SET GLOBAL sql_mode = ''; Using the keyword GLOBAL requires super privileges and it affects the operations of all clients that connect from that time on If the above is not working, then go to /etc/mysql/my.cnf (as per ubuntu) and comment out STRICT_TRANS_TABLES Also, if you want to permanently set the sql mode at server startup then include SET sql_mode='' in my.cnf on Linux or MacOS. For Windows this has to be done in the my.ini file. Note However, strict mode is not enabled by default in MySQL 5.6. Hence it does not produce the error, as per the MySQL 5.6 documentation, which says MySQL permits you to store a “zero” value of '0000-00-00' as a “dummy date.” This is in some cases more convenient than using NULL values, and uses less data and index space. To disallow '0000-00-00', enable the NO_ZERO_DATE SQL mode. UPDATE Regarding the bug matter as said by @Dylan-Su: I don't think this is a bug; it's the way MySQL has evolved over time, with some things changing based on further improvement of the product. However I have another related bug report regarding the NOW() function Datetime field does not accept default NOW() Another Useful note [see Automatic Initialization and Updating for TIMESTAMP and DATETIME ] As of MySQL 5.6.5, TIMESTAMP and DATETIME columns can be automatically initialized and updated to the current date and time (that is, the current timestamp). Before 5.6.5, this is true only for TIMESTAMP, and for at most one TIMESTAMP column per table. The following notes first describe automatic initialization and updating for MySQL 5.6.5 and up, then the differences for versions preceding 5.6.5. Update Regarding NO_ZERO_DATE As of MySQL 5.7.4 this mode is deprecated. For previous versions you must comment out the respective line in the config file. Refer to the MySQL 5.7 documentation on NO_ZERO_DATE
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/36374335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5476782/']}
jdg_83925
stackexchange
llm_judgeable_groundtruth_similarity
61684
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Congress can confirm a new Vice President, but if Congress couldn't operate due to an emergency like the attack on the Capitol, then there would be no one to fill in the the Vice President. Why is there no Vice Presidential line of succession? Now provide the response and nothing else.
The Vice President has exactly two jobs according to the Constitution: To preside over the Senate, breaking ties if and when necessary, but otherwise not voting or doing much else of substance. To become President or Acting President pursuant to the 25th Amendment. #2 is not relevant here as it is part of the Presidential line of succession (which has numerous backup options if the VP is unavailable), so that leaves #1. But if we descend from the platonic ideal of "what the Constitution says" into the real world, the VP in fact spends little time presiding over the Senate, except when their casting vote is needed (this is more common when the Senate is closely divided, as you might expect). Instead, they spend most of their time facilitating the goals of the President. This often involves conferring with people such as lawmakers, cabinet members, and other federal officials, as well as a variety of other unglamorous behind-the-scenes tasks. Most of these tasks boil down to various forms of planning, negotiation, and coordination of the administration's policy agenda, and as a result, no Constitutional authority is required to carry them out (which is why the Constitution does not describe any of these tasks explicitly). Meanwhile, the Senate elects its own President pro tempore , who presides in the VP's absence (or further delegates that task to someone else). So there are two reasons not to have a VP line of succession: As Rick Smith points out, the VP can't do their job very effectively if they do not share the President's policy goals. It's therefore preferable to let the President select a VP explicitly. If Congress is unable to come into session, then the Senate doesn't need anyone to preside over them. If they can come into session, but don't want to confirm a new VP for whatever reason, they could just elect a President pro tempore and carry on as usual. Unlike a vacancy in the office of the President, a vacant VP office is not an emergency, as the VP is not responsible for any particularly important Constitutional functions. It therefore makes more sense to just leave the office vacant for however long it takes to get a new VP confirmed.
{}
{'log_upvote_score': 5, 'links': ['https://politics.stackexchange.com/questions/61684', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/35790/']}
jdg_83926
stackexchange
llm_judgeable_groundtruth_similarity
19266
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Usually when I write a shell script for a specific task, I just make a list of files like so: #/bin/shread -d '' imagefiles <<EOF./01/IMG0000.jpg./01/IMG0001.jpg./01/IMG0002.jpg./01/IMG0003.jpg./01/IMG0004.jpg./01/IMG0005.jpg./01/IMG0006.jpg./01/IMG0007.jpg(a whole bunch of files down to ./10/IMG0102.jpg)EOFfor i in $imagefiles for j in range(len(commands))domv $i ./$j.jpg donedone In this instance I wanted to be able to iterate over the output of seq but following Gilles's suggestion simply wrote that part out of it out as Python (yeah I know that as it is it would execute each command j times, leading to about 100000 executions). Earlier on in the day I was renaming 736 files in sequence, but now I'm renaming 1000-odd files. I'm sure there's a better way to do that (please don't hesitate to tell me), but it would still be nice to know how to iterate over the list of commands and some other iterable. Now provide the response and nothing else.
Ok, so you want to zip two iterables, or in other words you want a single loop, iterating over a bunch of strings, with an additional counter. It's quite easy to implement a counter. n=0; for x in $commands; do mv -- "$x" "$n.jpg"; n=$(($n+1)); done Note that this only works if none of the elements that you're iterating over contains any whitespace (nor globbing characters). If you have items separated by newlines, turn off globbing and split only on newlines (the $'\n' form for a literal newline is bash/ksh/zsh syntax; in plain sh, put an actual newline between the quotes instead). n=0; IFS=$'\n'; set -f; for x in $commands; do mv -- "$x" "$n.jpg"; n=$(($n+1)); done; set +f; unset IFS If you only need to iterate over the data once, loop around read (see Why is while IFS= read used so often, instead of IFS=; while read.. ? for more explanations). n=0; while IFS= read -r x; do mv -- "$x" "$n.jpg"; n=$(($n+1)); done <<EOF … EOF If you're using a shell that has arrays (bash, ksh or zsh), store the elements in an array. In zsh, either run setopt ksh_arrays to number array elements from 0, or adapt the code for array element numbering starting at 1. commands=( ./01/IMG0000.jpg … ); n=0; while [[ $n -lt ${#commands[@]} ]]; do mv -- "${commands[$n]}" "$n.jpg"; n=$(($n+1)); done
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/19266', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/1389/']}
jdg_83927
stackexchange
llm_judgeable_groundtruth_similarity
1330
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Further to my question asked here: prior post and which left some points unanswered, I have reformulated the question as follows: What is Ito's lemma used for in quantitative finance? and when is it applicable? I don't understand for instance if Ito's lemma is used for obtaining a SDE from a stochastic process or the converse: obtain a stochastic process from an SDE. Furthermore vonjd's reply is a bit confuse to me: does he mean "Ito's lemma can only or also be used for processes with bounded quadratic variation? Now provide the response and nothing else.
If you are given a diffusion process $X_t$ and a $C^{1,2}$ transformation $Y_t=f(t,X_t)$ of the process $X_t$, then Itô's lemma gives you the SDE followed by the process $Y_t$ in terms of $dX_t$, $dt$, and the partial derivatives of $f$ up to order 1 in time and 2 in $x$. If you are given the SDE followed by $X_t$ in terms of Brownian motion, drift, and diffusion term, then you can write down the SDE of $Y_t$ in terms of Brownian motion, drift, and diffusion term. This shows in particular that diffusions are stable under this type of transformation. There is nothing more and nothing less in it. Of course you can extend this lemma in various fancy and sophisticated ways. Regards
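To make this concrete, here is a standard worked illustration (a sketch added here, not part of the original answer): take geometric Brownian motion $dS_t=\mu S_t\,dt+\sigma S_t\,dW_t$ and the transformation $Y_t=f(t,S_t)=\log S_t$. Itô's lemma gives $$dY_t=\left(\frac{\partial f}{\partial t}+\mu S_t\frac{\partial f}{\partial x}+\tfrac{1}{2}\sigma^2 S_t^2\frac{\partial^2 f}{\partial x^2}\right)dt+\sigma S_t\frac{\partial f}{\partial x}\,dW_t=\left(\mu-\tfrac{\sigma^2}{2}\right)dt+\sigma\,dW_t,$$ so the log-price is again a diffusion, now with constant drift $\mu-\sigma^2/2$ and constant diffusion coefficient $\sigma$.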
{}
{'log_upvote_score': 5, 'links': ['https://quant.stackexchange.com/questions/1330', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/943/']}
jdg_83928
stackexchange
llm_judgeable_groundtruth_similarity
641
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Watching this youtube video shows that our planet is nothing more than dust in the wind. At the 2:50 mark in the video, you will see a view of the whole Milky Way. At the bottom you will see this text Light travel time from earth: 100,000 years Which means if we travel at light speed we need 100,000 years to go far from the Milky Way. If nothing can travel faster than light, and if it takes 100,000 years to go far from the Milky Way at light speed How did they map our whole universe, which has a radius of 13.798±0.037 billion light years? Now provide the response and nothing else.
The simple answer is that light from the distant parts of the Milky Way Galaxy has already been traveling for that amount of time. Because light does not travel instantaneously through space, we are seeing the galaxy not as it is now, but as it was 100,000 (or so) years ago. Incidentally, this is also how we are able to see back to earlier moments of the universe's history. In terms of distance, the further we look, the longer ago the light must have left in order to reach us today. The universe did have a beginning (at least in the sense of having an earliest time). Because the universe is not infinitely old, there is actually a limit to how far we can see back into the universe's history (see: Cosmological horizon). We are able to make maps of the universe, like this one from SDSS, by running what's known as a survey. These are large projects which map out the positions (and potentially other properties, depending on what their science goals are) of galaxies in the universe. How you should look at this picture is the following: 1) You're at the center of the image, 2) Each point in the picture represents an individual galaxy, 3) Distance from the observer, the redshift of a galaxy, increases with increasing distance from the center, and 4) as you travel around the circle in angle, you're sweeping through the right ascension coordinate of the celestial coordinate system (think longitude). You're seeing a pizza slice instead of a sphere because this is a particular slice in declination (latitude). As you look to higher redshifts, you are looking further back in time, and earlier into the universe's history. This is how astronomers/cosmologists can say something statistically about models of structure formation of the universe (how objects like galaxies, and clusters of galaxies came to be). The problem with looking at galaxies at very high redshifts is that they become very faint. At a certain point, you need more powerful telescopes to continue looking further back in time. Though, we can get around this somewhat by looking at the radiation component of the universe at microwave wavelengths (most surveys are in the optical region of the electromagnetic spectrum). This radiation is called the cosmic microwave background radiation (CMB), and can also be mapped out (see: COBE, WMAP, and most recently Planck satellites). This is also a map of sorts. This map of low energy photons is a snapshot of the universe approximately 380,000 years after the big bang (a redshift of z=1100, meaning the universe was smaller by about a factor of 1101), when the universe had very little of what we would recognize as structure. The job of the cosmologist is to connect the picture we see from the CMB to the picture we see from our surveys of galaxies through the use of physics.
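As a small supplement (added here for illustration, not part of the original answer): the statement that a redshift of $z=1100$ corresponds to a universe smaller by a factor of about 1101 comes from the standard relation between redshift and the cosmic scale factor $a(t)$, $$1+z=\frac{a(t_{\mathrm{obs}})}{a(t_{\mathrm{emit}})},$$ so light emitted when the universe was $1/(1+z)$ of its present size arrives with its wavelength stretched by the factor $1+z$.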
{}
{'log_upvote_score': 5, 'links': ['https://astronomy.stackexchange.com/questions/641', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/239/']}
jdg_83929
stackexchange
llm_judgeable_groundtruth_similarity
50766409
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to learn javascript and I am making a simple Fahrenheit Celsius application. The problem that I currently have is only one of two buttons work. in case there are 2 if statements , only button Fahrenheit2Celcius button works, in case there is an if... else if statement then only button Celcius2Fahrenheit works, but they never work both. Would you please explain what I am doing wrong? Here is my code: function C2F() { var Stoinost = document.getElementById('Stoinost').value; var Fahrenheit = (Stoinost * 1.8) + 32; var fahrenheit2 = Math.round(Fahrenheit*100)/100; var C2F_but = document.getElementById("C2F"); var Celcius = (Stoinost - 32) * (5/9); var Celcius2 = Math.round(Celcius*100)/100; var F2C_but = document.getElementById("F2C"); if (C2F_but){ document.getElementById("result").innerHTML = fahrenheit2; } if (F2C_but){ document.getElementById("result").innerHTML = Celcius2; }} <table id="myTable"> <tr> <tr><td><p> Input Value: <input type="number" id="Stoinost" size="20" ></p></td></tr> <td> <button onclick="C2F()" id="F2C">Fahrenheit2Celcius</button> <button onclick="C2F()" id="C2F">Celcius2Fahrenheit</button> </td> <tr><td><p> <div id="result">Result is displayed here.</div> </p></td></tr> </tr></table> Now provide the response and nothing else.
You can accomplish this by embedding the labels in a stack view and embedding the stack view in a UIView. The label text will expand the stack view vertically, which will expand the content view vertically, which will control the scroll view's .contentSize . Black is the scroll view; blue is the content view; stack view only shows as thin gray outline; labels are yellow, green and cyan. The background colors just make it easier to see what's what. Bunch of steps, but should be clear: add a scrollView, set constraints as normal add a UIView to scrollView - name it "contentView" set constraints 0 for top/leading/trailing/bottom of contentView to scrollView set width and height of contentView equal to width and height of scrollView add a stackView to contentView set stackView to Vertical / Fill / Fill / Spacing: 20 set stackView constraints top: 8, bottom: 8, leading: 40, trailing: 40 to contentView set stackView centerY constraint to contentView add three labels to stackView set fonts and text, set number of lines = 0 for center and bottom labels change stackView top and bottom constraints to >= 8 change contentView height constraint to Priority: 250 I think that's everything.... Setting the height Priority of the contentView to 250 will allow it to expand vertically based on the text in the labels. Setting top and bottom stackView constraints to >= 8 will "push" the top and bottom of the contentView, but allow extra space when you don't have enough text to exceed the vertical bounds. Results: Here's a storyboard with everything in place for reference: <?xml version="1.0" encoding="UTF-8"?><document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" toolsVersion="14109" targetRuntime="iOS.CocoaTouch" propertyAccessControl="none" useAutolayout="YES" useTraitCollections="YES" useSafeAreas="YES" colorMatched="YES" initialViewController="SeU-GX-TTY"> <device id="retina4_7" orientation="portrait"> <adaptation id="fullscreen"/> </device> <dependencies> <deployment identifier="iOS"/> <plugIn identifier="com.apple.InterfaceBuilder.IBCocoaTouchPlugin" version="14088"/> <capability name="Safe area layout guides" minToolsVersion="9.0"/> <capability name="documents saved in the Xcode 8 format" minToolsVersion="8.0"/> </dependencies> <scenes><!--View Controller--> <scene sceneID="bCz-Kd-LLi"> <objects> <viewController id="SeU-GX-TTY" sceneMemberID="viewController"> <view key="view" contentMode="scaleToFill" id="qjW-fW-J5n"> <rect key="frame" x="0.0" y="0.0" width="375" height="667"/> <autoresizingMask key="autoresizingMask" widthSizable="YES" heightSizable="YES"/> <subviews> <scrollView clipsSubviews="YES" multipleTouchEnabled="YES" contentMode="scaleToFill" translatesAutoresizingMaskIntoConstraints="NO" id="Zj2-9M-SP5" userLabel="scrollView"> <rect key="frame" x="0.0" y="40" width="375" height="627"/> <subviews> <view contentMode="scaleToFill" translatesAutoresizingMaskIntoConstraints="NO" id="Pmb-IH-ckB" userLabel="contentView"> <rect key="frame" x="0.0" y="0.0" width="375" height="627"/> <subviews> <stackView opaque="NO" contentMode="scaleToFill" axis="vertical" spacing="20" translatesAutoresizingMaskIntoConstraints="NO" id="EfQ-93-hcI" userLabel="stackView"> <rect key="frame" x="40" y="164" width="295" height="299.5"/> <subviews> <label opaque="NO" userInteractionEnabled="NO" contentMode="left" horizontalHuggingPriority="251" verticalHuggingPriority="251" verticalCompressionResistancePriority="751" text="Anger" textAlignment="natural" lineBreakMode="tailTruncation" 
baselineAdjustment="alignBaselines" adjustsFontSizeToFit="NO" translatesAutoresizingMaskIntoConstraints="NO" id="Sxz-f7-zjR" userLabel="topLabel"> <rect key="frame" x="0.0" y="0.0" width="295" height="43"/> <color key="backgroundColor" red="0.99953407049999998" green="0.98835557699999999" blue="0.47265523669999998" alpha="1" colorSpace="custom" customColorSpace="sRGB"/> <fontDescription key="fontDescription" type="system" pointSize="36"/> <nil key="textColor"/> <nil key="highlightedColor"/> </label> <label opaque="NO" userInteractionEnabled="NO" contentMode="left" horizontalHuggingPriority="251" verticalHuggingPriority="251" textAlignment="natural" lineBreakMode="tailTruncation" numberOfLines="0" baselineAdjustment="alignBaselines" adjustsFontSizeToFit="NO" translatesAutoresizingMaskIntoConstraints="NO" id="STy-4u-e1W" userLabel="centerLabel"> <rect key="frame" x="0.0" y="63" width="295" height="183"/> <color key="backgroundColor" red="0.83216959239999999" green="0.98548370600000001" blue="0.47333085539999997" alpha="1" colorSpace="custom" customColorSpace="sRGB"/> <string key="text">Anger is an intense emotion defined as a response to a perceived provocation, the invasion of one's boundaries, or a threat. From an evolutionary standpoint, anger servers to mobilise psychological resources in order to address the threat/invasion. Anger is directed at an individual of equal status.</string> <fontDescription key="fontDescription" type="system" pointSize="17"/> <nil key="textColor"/> <nil key="highlightedColor"/> </label> <label opaque="NO" userInteractionEnabled="NO" contentMode="left" horizontalHuggingPriority="251" verticalHuggingPriority="251" verticalCompressionResistancePriority="749" text="Based on information from Wikipedia. APA DIctionary of Psycology" textAlignment="natural" lineBreakMode="tailTruncation" numberOfLines="0" baselineAdjustment="alignBaselines" adjustsFontSizeToFit="NO" translatesAutoresizingMaskIntoConstraints="NO" id="u3i-zP-e1M" userLabel="bottomLabel"> <rect key="frame" x="0.0" y="266" width="295" height="33.5"/> <color key="backgroundColor" red="0.45138680930000002" green="0.99309605359999997" blue="1" alpha="1" colorSpace="custom" customColorSpace="sRGB"/> <fontDescription key="fontDescription" type="system" pointSize="14"/> <nil key="textColor"/> <nil key="highlightedColor"/> </label> </subviews> </stackView> </subviews> <color key="backgroundColor" red="0.46202266219999999" green="0.83828371759999998" blue="1" alpha="1" colorSpace="custom" customColorSpace="sRGB"/> <constraints> <constraint firstAttribute="trailing" secondItem="EfQ-93-hcI" secondAttribute="trailing" constant="40" id="4HE-oJ-RE3"/> <constraint firstItem="EfQ-93-hcI" firstAttribute="centerY" secondItem="Pmb-IH-ckB" secondAttribute="centerY" id="H9O-jj-a7A"/> <constraint firstItem="EfQ-93-hcI" firstAttribute="top" relation="greaterThanOrEqual" secondItem="Pmb-IH-ckB" secondAttribute="top" constant="8" id="cKe-DN-Lbn"/> <constraint firstItem="EfQ-93-hcI" firstAttribute="leading" secondItem="Pmb-IH-ckB" secondAttribute="leading" constant="40" id="f4g-6a-VqH"/> <constraint firstAttribute="bottom" relation="greaterThanOrEqual" secondItem="EfQ-93-hcI" secondAttribute="bottom" constant="8" id="meR-gT-OVG"/> </constraints> </view> </subviews> <color key="backgroundColor" red="0.0" green="0.0" blue="0.0" alpha="1" colorSpace="custom" customColorSpace="sRGB"/> <constraints> <constraint firstItem="Pmb-IH-ckB" firstAttribute="top" secondItem="Zj2-9M-SP5" secondAttribute="top" id="HCI-bq-7ur"/> <constraint 
firstAttribute="trailing" secondItem="Pmb-IH-ckB" secondAttribute="trailing" id="Tdl-c0-GAV"/> <constraint firstItem="Pmb-IH-ckB" firstAttribute="width" secondItem="Zj2-9M-SP5" secondAttribute="width" id="Zj9-ND-Fqt"/> <constraint firstItem="Pmb-IH-ckB" firstAttribute="leading" secondItem="Zj2-9M-SP5" secondAttribute="leading" id="ckv-wi-E1z"/> <constraint firstItem="Pmb-IH-ckB" firstAttribute="height" secondItem="Zj2-9M-SP5" secondAttribute="height" priority="250" id="jpK-HZ-vva"/> <constraint firstAttribute="bottom" secondItem="Pmb-IH-ckB" secondAttribute="bottom" id="psz-UW-bNp"/> </constraints> </scrollView> </subviews> <color key="backgroundColor" white="1" alpha="1" colorSpace="custom" customColorSpace="genericGamma22GrayColorSpace"/> <constraints> <constraint firstItem="Zj2-9M-SP5" firstAttribute="top" secondItem="Xr7-LW-bbC" secondAttribute="top" constant="20" id="EgA-Bk-3fC"/> <constraint firstItem="Zj2-9M-SP5" firstAttribute="leading" secondItem="qjW-fW-J5n" secondAttribute="leading" id="MBG-pL-R8Q"/> <constraint firstItem="Xr7-LW-bbC" firstAttribute="bottom" secondItem="Zj2-9M-SP5" secondAttribute="bottom" id="e9K-6A-Y9F"/> <constraint firstItem="Xr7-LW-bbC" firstAttribute="trailing" secondItem="Zj2-9M-SP5" secondAttribute="trailing" id="yfs-wt-Br8"/> </constraints> <viewLayoutGuide key="safeArea" id="Xr7-LW-bbC"/> </view> </viewController> <placeholder placeholderIdentifier="IBFirstResponder" id="lHx-xL-Vx5" userLabel="First Responder" sceneMemberID="firstResponder"/> </objects> <point key="canvasLocation" x="225" y="106"/> </scene> </scenes></document> And here's a quick example replicating that layout / functionality via code only: //// ScrollWorkViewController.swift//// Created by DonMag on 6/12/19.//import UIKitclass ScrollWorkViewController: UIViewController { let theScrollView: UIScrollView = { let v = UIScrollView() v.backgroundColor = .red return v }() let contentView: UIView = { let v = UIView() v.backgroundColor = UIColor(red: 0.25, green: 0.25, blue: 1.0, alpha: 1.0) return v }() let stackView: UIStackView = { let v = UIStackView() v.axis = .vertical v.alignment = .fill v.distribution = .fill v.spacing = 20 return v }() let topLabel: UILabel = { let v = UILabel() v.font = UIFont.boldSystemFont(ofSize: 32.0) v.backgroundColor = .yellow return v }() let centerLabel: UILabel = { let v = UILabel() v.font = UIFont.systemFont(ofSize: 17.0) v.numberOfLines = 0 v.backgroundColor = .green return v }() let bottomLabel: UILabel = { let v = UILabel() v.font = UIFont.systemFont(ofSize: 14.0) v.numberOfLines = 0 v.backgroundColor = .cyan return v }() override func viewDidLoad() { super.viewDidLoad() [theScrollView, contentView, stackView, topLabel, centerLabel, bottomLabel].forEach { $0.translatesAutoresizingMaskIntoConstraints = false } view.addSubview(theScrollView) theScrollView.addSubview(contentView) contentView.addSubview(stackView) stackView.addArrangedSubview(topLabel) stackView.addArrangedSubview(centerLabel) stackView.addArrangedSubview(bottomLabel) let contentViewHeightConstraint = contentView.heightAnchor.constraint(equalTo: theScrollView.heightAnchor, constant: 0.0) contentViewHeightConstraint.priority = .defaultLow NSLayoutConstraint.activate([ // constrain all 4 sides of the scroll view to the safe area theScrollView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 0.0), theScrollView.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor, constant: 0.0), theScrollView.leadingAnchor.constraint(equalTo: 
view.safeAreaLayoutGuide.leadingAnchor, constant: 0.0), theScrollView.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor, constant: 0.0), // constrain all 4 sides of the content view to the scroll view contentView.topAnchor.constraint(equalTo: theScrollView.topAnchor, constant: 0.0), contentView.bottomAnchor.constraint(equalTo: theScrollView.bottomAnchor, constant: 0.0), contentView.leadingAnchor.constraint(equalTo: theScrollView.leadingAnchor, constant: 0.0), contentView.trailingAnchor.constraint(equalTo: theScrollView.trailingAnchor, constant: 0.0), // constrain width of content view to width of scroll view contentView.widthAnchor.constraint(equalTo: theScrollView.widthAnchor, constant: 0.0), // constrain the stack view >= 8-pts from the top // <= minus 8-pts from the bottom // 40-pts leading and trailing stackView.topAnchor.constraint(greaterThanOrEqualTo: contentView.topAnchor, constant: 8.0), stackView.bottomAnchor.constraint(lessThanOrEqualTo: contentView.bottomAnchor, constant: -8.0), stackView.leadingAnchor.constraint(equalTo: contentView.leadingAnchor, constant: 40.0), stackView.trailingAnchor.constraint(equalTo: contentView.trailingAnchor, constant: -40.0), // constrain stack view centerY to contentView centerY stackView.centerYAnchor.constraint(equalTo: contentView.centerYAnchor, constant: 0.0), // activate the contentView's height constraint contentViewHeightConstraint, ]) topLabel.text = "Anger" bottomLabel.text = "Based on information from Wikipedia APA Dictionary of Psychology" // a sample paragraph of text let centerSampleText = "Anger is an intense emotion defined as a response to a perceived provocation, the invasion of one’s boundaries, or a threat. From an evolutionary standpoint, anger servers to mobilise psychological resources in order to address the threat/invasion. Anger is directed at an individual of equal status." // change to repeat the center-label sample text let numberOfParagraphs = 2 var s = "" for i in 1...numberOfParagraphs { s += "\(i). " + centerSampleText if i < numberOfParagraphs { s += "\n\n" } } centerLabel.text = s }}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/50766409', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5616029/']}
jdg_83930
stackexchange
llm_judgeable_groundtruth_similarity
62818306
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following pandas dataframe : df = pd.DataFrame([ ['A', 2017, 1], ['A', 2019, 1], ['B', 2017, 1], ['B', 2018, 1], ['C', 2016, 1], ['C', 2019, 1],], columns=['ID', 'year', 'number']) and am looking for the most efficient way to fill the missing years with a default value of 0 for the column number The expected output is: ID year number0 A 2017 11 A 2018 02 A 2019 13 B 2017 14 B 2018 15 C 2016 16 C 2017 07 C 2018 08 C 2019 1 The dataframe that I have is relatively big, so I am looking for an efficient solution. Edit: This is the code that I have so far: min_max_dict = df[['ID', 'year']].groupby('ID').agg([min, max]).to_dict('index')new_ix = [[], []]for id_ in df['ID'].unique(): for year in range(min_max_dict[id_][('year', 'min')], min_max_dict[id_][('year', 'max')]+1): new_ix[0].append(id_) new_ix[1].append(year)df.set_index(['ID', 'year'], inplace=True)df = df.reindex(new_ix, fill_value=0).reset_index() Result ID year number0 A 2017 11 A 2018 02 A 2019 13 B 2017 14 B 2018 15 C 2016 16 C 2017 07 C 2018 08 C 2019 1 Now provide the response and nothing else.
A slightly faster approach than using explode is to use the pd.Series constructor. Note that using .iloc like this assumes the years within each ID are already sorted from earliest to latest. idx = df.groupby('ID')['year'].apply(lambda x: pd.Series(np.arange(x.iloc[0], x.iloc[-1]+1))).reset_index()df.set_index(['ID','year']).reindex(pd.MultiIndex.from_arrays([idx['ID'], idx['year']]), fill_value=0).reset_index() Output: ID year number0 A 2017 11 A 2018 02 A 2019 13 B 2017 14 B 2018 15 C 2016 16 C 2017 07 C 2018 08 C 2019 1
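For comparison, here is another sketch of the same per-ID reindexing idea done inside a single groupby/apply (the column names follow the question; no claim is made about how its speed compares to the version above):

import numpy as np
import pandas as pd

df = pd.DataFrame([
    ['A', 2017, 1], ['A', 2019, 1],
    ['B', 2017, 1], ['B', 2018, 1],
    ['C', 2016, 1], ['C', 2019, 1],
], columns=['ID', 'year', 'number'])

# Reindex each ID's series over its own min..max year range, filling gaps with 0.
out = (df.set_index('year')
         .groupby('ID')['number']
         .apply(lambda s: s.reindex(range(s.index.min(), s.index.max() + 1), fill_value=0))
         .rename_axis(['ID', 'year'])
         .reset_index())

print(out)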
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/62818306', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13390139/']}
jdg_83931
stackexchange
llm_judgeable_groundtruth_similarity
1363310
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a bean that i want to inject with a named list using Spring util namespace <util:list id="myList"> but Spring is looking for a collection of beans of type String instead. My broken test is: @RunWith(SpringJUnit4ClassRunner.class)@ContextConfigurationpublic class ListInjectionTest { @Autowired @Qualifier("myList") private List<String> stringList; @Test public void testNotNull() { TestCase.assertNotNull("stringList not null", stringList); }} My context is: <beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util" xmlns="http://www.springframework.org/schema/beans" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd"> <util:list id="myList"> <value>foo</value> <value>bar</value> </util:list></beans> But I get Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No matching bean of type [java.lang.String] found for dependency [collection of java.lang.String]: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true), @org.springframework.beans.factory.annotation.Qualifier(value=myList)} at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoSuchBeanDefinitionException(DefaultListableBeanFactory.java:726) at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:571) at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:412) Which puzzles me rather as I figured this would be the way it was expected to work. Now provide the response and nothing else.
This is due to a rather obscure part of @Autowired's behaviour, specified in 3.11.2. @Autowired : It is also possible to provide all beans of a particular type from the ApplicationContext by adding the annotation to a field or method that expects an array of that type... The same applies for typed collections... In other words, by saying @Autowired @Qualifier("myList") List<String> , you're actually asking for "give me the list of all beans of type java.lang.String that have the qualifier "myList". The solution is mentioned in 3.11.3. Fine-tuning annotation-based autowiring with qualifiers : If you intend to express annotation-driven injection by name, do not primarily use @Autowired - even if is technically capable of referring to a bean name through @Qualifier values. Instead, prefer the JSR-250 @Resource annotation which is semantically defined to identify a specific target component by its unique name, with the declared type being irrelevant for the matching process. As a specific consequence of this semantic difference, beans which are themselves defined as a collection or map type cannot be injected via @Autowired since type matching is not properly applicable to them. Use @Resource for such beans, referring to the specific collection/map bean by unique name. So use this in your test, and it works fine: @Resource(name="myList") private List<String> stringList;
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1363310', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/135624/']}
jdg_83932
stackexchange
llm_judgeable_groundtruth_similarity
32722952
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it safe to put a jwt (json web token) into the url as a query parameter of a GET request? Now provide the response and nothing else.
It can be safe under the following circumstances: the JWT is for one-time usage only; the jti and exp claims are present in the token; and the receiver properly implements replay protection using jti and exp. But in case it is used as a token that can repeatedly be used, e.g. against an API, then supplying it as a query parameter is less preferred since it may end up in logs and system process information, available to others that have access to the server or client system. In that case it would be better to present it as part of a header or a POST parameter. Besides that, by using it in the query parameters you may run into URL size limitations on browsers or servers; using it in a header provides some more space, and using it as a POST parameter would work best.
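To illustrate the replay-protection point, here is a minimal Python sketch (the jti and exp claim names come from the JWT spec, but the seen_jti store and accept_once function are hypothetical; a real deployment would typically back this with Redis or a database and account for clock skew):

import time

# Hypothetical in-memory store of jti values that have already been accepted.
seen_jti = {}

def accept_once(claims):
    """Accept a one-time JWT only if it is unexpired and its jti has not been used."""
    now = time.time()
    jti, exp = claims.get("jti"), claims.get("exp")
    if jti is None or exp is None or exp < now:
        return False
    if jti in seen_jti:
        return False                 # replay attempt
    seen_jti[jti] = exp              # remember the id until it expires
    # Drop expired entries so the store stays bounded.
    for key in [k for k, e in seen_jti.items() if e < now]:
        del seen_jti[key]
    return True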
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/32722952', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1817296/']}
jdg_83933
stackexchange
llm_judgeable_groundtruth_similarity
133370
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: In differential geometry of surfaces, how can one define a non-zero Torsion tensor? It seems that the connection you provide has always to be symmetric since, by definition,$$\Gamma^{\gamma}_{\alpha\beta}\equiv\mathbf{a}^{\gamma}\cdot\mathbf{a}_{\alpha,\beta}=\mathbf{a}^{\gamma}\cdot\mathbf{r}_{,\alpha\beta}=\mathbf{a}^{\gamma}\cdot\mathbf{r}_{,\beta\alpha}=\Gamma^{\gamma}_{\beta\alpha},$$where $\mathbf{r}:U\to\mathbb{R}^3$, $U\subset\mathbb{R}^2$, is an embedded $C^3$ surface with parametrization $(\theta^1,\theta^2)\in U$, $\mathbf{a}_\alpha\equiv\mathbf{r}_{,\alpha}$ are the tangent vectors to the coordinate curves $\theta^\alpha$, $\alpha=\{1,2\}$, and $\mathbf{a}^\gamma$ is the covector of $\mathbf{a}_\alpha$. This definition also implies that the connection is metric compatible:$$\Gamma^{\gamma}_{\alpha\beta}=\frac{1}{2}a^{\gamma\lambda}(a_{\beta\lambda,\alpha}+a_{\gamma\alpha,\beta}-a_{\alpha\beta,\lambda}).$$So there is no non-zero Non-metricity Tensor either. ($a_{\alpha\beta}\equiv\mathbf{a}_\alpha\cdot\mathbf{a}_\beta$,$a^{\alpha\beta}\equiv\mathbf{a}^\alpha\cdot\mathbf{a}^\beta$.) Existence of non-zero Torsion tensor and Non-metricity tensor is important in studies of defects in two-dimensional crystals because in continuum model, they represent certain defect densities. Now provide the response and nothing else.
I think that the OP is asking a more specific question than whether or not a surface has a connection that is not metric or not torsion free. It seems that the OP is assuming that the surface $M$ comes equipped with an immersion $\mathbf{r}:M\to\mathbb{E}^3$ into (oriented) Euclidean $3$-space and is asking whether, using the data of the immersion $\mathbf{r}$ , it is possible to define, in a canonical way, a connection that has torsion and/or is not metric compatible. His question includes the argument that the usual induced connection associated to a given $\mathbf{r}$ discussed in all curves-and-surfaces books is both compatible with the induced metric and is torsion-free. Now, it's true that the only canonical connection induced by $\mathbf{r}$ that uses at most second-order information from $\mathbf{x}$ at a point is the Levi-Civita connection. However, there are other canonical connections definable using $\mathbf{r}$ that use higher order information, and these need be neither torsion-free nor compatible with any metric (let alone the induced metric), at least for the general immersion. (Obviously, any canonical formula using higher order information will just produce the Levi-Civita connection when applied to an immersion whose image is either a plane or a sphere.) Example : Given an immersion $\mathbf{x}:M\to\mathbb{E}^3$, there is an associated mean curvature function $H$ that, unfortunately, depends on a choice of orientation of the surface $M$; it switches sign if one reverses the orientation of $M$ (always, assuming, of course, that the target space $\mathbb{E}^3$ is oriented). However, the $1$-form $\eta = \ast dH$ is independent of a choice of orientation of the surface, since both $H$ and $\ast$ reverse sign when one reverses orientation. Let $\nabla$ be the Levi-Civita connection on $M$ associated to the metric induced on $M$ by the immersion $\mathbf{x}$, and define a second connection $\tilde\nabla$ on $M$ by the formula$$\tilde\nabla_XY = \nabla_XY + \eta(X)Y$$Then $\tilde\nabla$ is a connection canonically associated to $\mathbf{x}$ (whose local formula depends on third order derivatives of $\mathbf{x}$). One computes (using the fact that the torsion of $\nabla$ vanishes) that$$T^{\tilde\nabla}(X,Y) = \tilde\nabla_XY - \tilde\nabla_YX - [X,Y] = \eta(X)Y - \eta(Y)X,$$so the torsion of $\tilde\nabla$ vanishes if and only if $\eta=0$, i.e., $H$ is locally constant. Meanwhile, it is easy to compute that the curvatures of the two connections are related by$$R^{\tilde\nabla}(X,Y)Z = R^{\nabla}(X,Y)Z + d\eta(X,Y)\ Z,$$so $\tilde\nabla$ does not even have a parallel $2$-form, let alone a parallel metric, unless $d\eta=0$, i.e., unless $H$ is (locally) a harmonic function on the surface. Thus, in general, $\tilde\nabla$ is neither torsion-free nor metric compatible.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/133370', 'https://mathoverflow.net', 'https://mathoverflow.net/users/25516/']}
jdg_83934
stackexchange
llm_judgeable_groundtruth_similarity
12726860
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am setting an image on the imageview picked from the gallery(camera album). If the picked image has landscape orientation, it displays perfectly but if the image in in portrait mode(i.e the image was clicked in portrait mode) it is displaying the image with a 90 degree rotation. Now I am trying to find out the orientation just before setting on imageview, but all the images are giving same orientation and same width-height. Here is my code : Uri selectedImage = intent.getData();if (selectedImage != null) { Bitmap bitmap = MediaStore.Images.Media.getBitmap(this.getContentResolver(), selectedImage); int str = new ExifInterface(selectedImage.getPath()).getAttributeInt("Orientation", 1000); Toast.makeText(this, "value:" + str, Toast.LENGTH_LONG).show(); Toast.makeText(this, "width:" + bitmap.getWidth() + "height:" + bitmap.getHeight(), Toast.LENGTH_LONG).show(); Now provide the response and nothing else.
Use ExifInterface to rotate the image. Use this method to get the correct rotation value for an image captured with the camera. public int getCameraPhotoOrientation(Context context, Uri imageUri, String imagePath){ int rotate = 0; try { context.getContentResolver().notifyChange(imageUri, null); File imageFile = new File(imagePath); ExifInterface exif = new ExifInterface(imageFile.getAbsolutePath()); int orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL); switch (orientation) { case ExifInterface.ORIENTATION_ROTATE_270: rotate = 270; break; case ExifInterface.ORIENTATION_ROTATE_180: rotate = 180; break; case ExifInterface.ORIENTATION_ROTATE_90: rotate = 90; break; } Log.i("RotateImage", "Exif orientation: " + orientation); Log.i("RotateImage", "Rotate value: " + rotate); } catch (Exception e) { e.printStackTrace(); } return rotate;} Then put this code in your onActivityResult method to read the picked image's path and get the value by which to rotate the image... Uri selectedImage = data.getData();String[] filePathColumn = {MediaStore.Images.Media.DATA};Cursor cursor = getContentResolver().query(selectedImage, filePathColumn, null, null, null);cursor.moveToFirst();int columnIndex = cursor.getColumnIndex(filePathColumn[0]);filePath = cursor.getString(columnIndex);cursor.close();int rotateImage = getCameraPhotoOrientation(MyActivity.this, selectedImage, filePath); Hope this helps..
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12726860', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1384840/']}
jdg_83935
stackexchange
llm_judgeable_groundtruth_similarity
36593727
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given the following classes: public abstract class ValueBase{ public new abstract string ToString();}public class EmailAddress : ValueBase{ public MailAddress MailAddress { get; } public EmailAddress([NotNull] string address) { MailAddress = new MailAddress(address); } public override string ToString() { return MailAddress.Address; }} Why does: var email = new EmailAddress("[email protected]");string emailString1 = $"{email}";string emailString2 = email.ToString(); return a string of the type name ( Namespace.EmailAddress ), not the overridden ToString method ( [email protected] )? Now provide the response and nothing else.
Interpolation works this way because your classes never actually override Object.ToString(). ValueBase defines a new method that hides Object.ToString instead of overriding it. Simply remove ToString from ValueBase. In this case EmailAddress.ToString will override Object.ToString correctly, and interpolation will return the desired result. Specifically, changing ValueBase to this: public abstract class ValueBase{} makes the test code var email = new EmailAddress("[email protected]");string emailString1 = $"{email}"; return [email protected] UPDATE As people suggested, the base ToString() method could be added to force implementers to implement a custom ToString method in their classes. This can be achieved by defining an abstract override method. public abstract class ValueBase{ public abstract override string ToString();}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36593727', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/200653/']}
jdg_83936
stackexchange
llm_judgeable_groundtruth_similarity
7795303
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: void run() { ... if (done) return cancel(); ...} where cancel() return void . This won't compile... and I can almost understand why. But if I want to return a void from a void, why not? Instead, I end up writing something like this: if (done) { cancel(); return;} I'm not looking for code style suggestions, I want to know why Java expressly prohibits this type of void return. Any info is appreciated, thanks. Now provide the response and nothing else.
A return statement with an expression returns the value of that expression. The type of cancel() is a void expression - it doesn't have a value. Logically you want to execute cancel() , and then return - so that's what you have to say. The two actions (calling cancel() and then returning) are logically distinct. Now Java could have a sort of "unit" type instead of void - but that would affect rather more than just return values.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7795303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/291180/']}
jdg_83937
stackexchange
llm_judgeable_groundtruth_similarity
33633243
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for the simplest , cleanest way of connecting to Heroku Postgres in a Spring Boot app using JPA/Hibernate. I don't see a good, complete example for this combo in either Heroku or Spring Boot documentation, so I'd like to document this on Stack Overflow. I'm trying to go with something like this: @Configuration public class DataSourceConfig { Logger log = LoggerFactory.getLogger(getClass()); @Bean @Profile("postgres") public DataSource postgresDataSource() { String databaseUrl = System.getenv("DATABASE_URL") log.info("Initializing PostgreSQL database: {}", databaseUrl); URI dbUri; try { dbUri = new URI(databaseUrl); } catch (URISyntaxException e) { log.error(String.format("Invalid DATABASE_URL: %s", databaseUrl), e); return null; } String username = dbUri.getUserInfo().split(":")[0]; String password = dbUri.getUserInfo().split(":")[1]; String dbUrl = "jdbc:postgresql://" + dbUri.getHost() + ':' + dbUri.getPort() + dbUri.getPath(); // fully-qualified class name to distuinguish from javax.sql.DataSource org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource(); dataSource.setUrl(dbUrl); dataSource.setUsername(username); dataSource.setPassword(password); return dataSource; }} I'm using Profiles , which seems a good match for what I want: on Heroku SPRING_PROFILES_ACTIVE is set to postgres , while in local development spring.profiles.active is h2 to use a H2 in-memory database (whose config omitted here). This approach seems to work fine. In application-postgres.properties ( profile-specific properties ): spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialectspring.datasource.driverClassName=org.postgresql.Driver DataSource from Tomcat seemed like a good option since the default dependencies include it, and because Spring Boot reference guide says : We prefer the Tomcat pooling DataSource for its performance and concurrency, so if that is available we always choose it. (I'm also seeing BasicDataSource from Commons DBCP being used with Spring Boot . But to me this does not seem like the cleanest choice as the default dependencies do not include Commons DBCP. And in general I'm wondering if Apache Commons could really , in 2015, be the recommended way to connect to Postgres... Also Heroku documentation offers " BasicDataSource in Spring" for this kind of scenario; I assume this refers to Commons DBCP, since I don't see such class in Spring itself.) 
Dependencies: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId></dependency><dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId></dependency><dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope></dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId></dependency><dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId></dependency><dependency> <groupId>org.postgresql</groupId> <artifactId>postgresql</artifactId> <version>9.4-1205-jdbc42</version></dependency> Current status : failing with "Not loading a JDBC driver as driverClassName property is null": eConfig$$EnhancerBySpringCGLIB$$463388c1 : Initializing PostgreSQL database: postgres:[...]j.LocalContainerEntityManagerFactoryBean : Building JPA container EntityManagerFactory for persistence unit 'default'org.hibernate.cfg.Environment : HHH000206: hibernate.properties not found[...]o.a.tomcat.jdbc.pool.PooledConnection : Not loading a JDBC driver as driverClassName property is null. o.a.tomcat.jdbc.pool.PooledConnection : Not loading a JDBC driver as driverClassName property is null.[...]org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect In logs I see that my postgresDataSource is called just fine, and that PostgreSQLDialect is in use (without this it was failing with "Access to DialectResolutionInfo cannot be null when 'hibernate.dialect' not set"). My specific questions Well, how to get this working? I am setting spring.datasource.driverClassName , so why "Not loading a JDBC driver as driverClassName property is null"? Is the use of Tomcat's DataSource fine or would you recommend something else? Is it mandatory to define postgresql dependency as above with a specific version ? (I was getting "no suitable driver found" error without this.) Is there a simpler way to do all this (while sticking to Java code and/or properties; no XML please)? Now provide the response and nothing else.
Simplest cleanest way for Spring Boot 2.x with Heroku & Postgres I read all answers, but didn´t find what Jonik was looking for: I'm looking for the simplest, cleanest way of connecting to Heroku Postgres in a Spring Boot app using JPA/Hibernate The development process most people want to use with Spring Boot & Heroku includes a local H2 in-memory database for testing & fast development cycles - and the Heroku Postgres database for staging and production on Heroku. First thing is - you don´t need to use Spring profiles for that! Second: You don´t need to write/change any code! Let´s have a look on what we have to do step by step. I have a example project in place that provides a fully working Heroku deployment and configuration for Postgres - only for the sake of completeness, if you want to test it yourself: github.com/jonashackt/spring-boot-vuejs . The pom.xml We need the following depencencies: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <!-- In-Memory database used for local development & testing --> <dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId> </dependency> <!-- Switch back from Spring Boot 2.x standard HikariCP to Tomcat JDBC, configured later in Heroku (see https://stackoverflow.com/a/49970142/4964553) --> <dependency> <groupId>org.apache.tomcat</groupId> <artifactId>tomcat-jdbc</artifactId> </dependency> <!-- PostgreSQL used in Staging and Production environment, e.g. on Heroku --> <dependency> <groupId>org.postgresql</groupId> <artifactId>postgresql</artifactId> <version>42.2.2</version> </dependency> One tricky thing here is the usage of tomcat-jdbc , but we´ll cover that in a second. Configure Environment Variables on Heroku In Heroku Environment Variables are named Config Vars . You heard right, all we have to do is to configure Enviroment Variables! We just need the correct ones. Therefore head over to https://data.heroku.com/ (I assume there´s already a Postgres database configured for your Heroku app, which is the default behavior). Now click on your application´s corresponding Datastore and switch over to the Settings tab. Then click on View Credentials... , which should look something similar like this: Now open a new browser tab and go to your Heroku application´s Settings tab also. Click on Reveal Config Vars and create the following Environment Variables: SPRING_DATASOURCE_URL = jdbc :postgres ql :// YourPostgresHerokuHostNameHere :5432/ YourPostgresHerokuDatabaseNameHere (mind the leading jdbc: and the ql addition to postgres !) SPRING_DATASOURCE_USERNAME = YourPostgresHerokuUserNameHere SPRING_DATASOURCE_PASSWORD = YourPostgresHerokuPasswordHere SPRING_DATASOURCE_DRIVER-CLASS-NAME = org.postgresql.Driver (this isn´t always needed since Spring Boot can deduce it for most databases from the url , just for completeness here) SPRING_JPA_DATABASE-PLATFORM = org.hibernate.dialect.PostgreSQLDialect SPRING_DATASOURCE_TYPE = org.apache.tomcat.jdbc.pool.DataSource SPRING_JPA_HIBERNATE_DDL-AUTO = update (this will automatically create your tables according to your JPA entities, which is really great - since you don´t need to hurdle with CREATE SQL statements or DDL files) In Heroku this should look like this: Now that´s all you have to do! Your Heroku app is restarted every time you change a Config Variable - so your App should now run H2 locally, and should be ready connected with PostgreSQL when deployed on Heroku. 
Just if you´re asking: Why do we configure Tomcat JDBC instead of Hikari As you might noticed, we added the tomcat-jdbc dependency to our pom.xml and configured SPRING_DATASOURCE_TYPE=org.apache.tomcat.jdbc.pool.DataSource as a Environment variable. There´s only a slight hint in the docs about this saying You can bypass that algorithm completely and specify the connection pool to use by setting the spring.datasource.type property. This is especially important if you run your application in a Tomcat container, ... There are several reasons I switched back to Tomcat pooling DataSource instead of using the Spring Boot 2.x standard HikariCP. As I already explained here , if you don´t specifiy spring.datasource.url , Spring will try to autowire the embedded im-memory H2 database instead of our PostgreSQL one. And the problem with Hikari is, that it only supports spring.datasource.jdbc-url . Second, if I try to use the Heroku configuration as shown for Hikari (so leaving out SPRING_DATASOURCE_TYPE and changing SPRING_DATASOURCE_URL to SPRING_DATASOURCE_JDBC-URL ) I run into the following Exception: Caused by: java.lang.RuntimeException: Driver org.postgresql.Driver claims to not accept jdbcUrl, jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE So I didn´t get Spring Boot 2.x working on Heroku & Postgres with HikariCP, but with Tomcat JDBC - and I also don´t want to brake my development process containing a local H2 database described upfront. Remember: We were looking for the simplest, cleanest way of connecting to Heroku Postgres in a Spring Boot app using JPA/Hibernate!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33633243', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/56285/']}
jdg_83938
stackexchange
llm_judgeable_groundtruth_similarity
48292
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Every so often I would encounter Hardy's inequality: Theorem 1 (Hardy's inequality) . If $p>1$, $a_n \geq 0$, and $A_n=a_1+a_2+\cdots+a_n$, then $$\sum_{n=1}^\infty \left(\frac{A_n}{n}\right)^p < \left(\frac{p}{p-1}\right)^p \sum_{n=1}^\infty a_n^p,$$ unless $(a_n)_{n=1}^\infty$ is identically zero. The constant is the best possible. and its integral version: Theorem 2 (Hardy's integral inequality) . If $p>1$, $f(x) \geq 0$, and $F(x) = \int_0^x f(t) \ dt$, then $$\int_0^\infty \left(\frac{F}{x}\right)^p \ dx < \left(\frac{p}{p-1}\right)^p \int_0^\infty f^p(x) \ dx,$$ unless $f \equiv 0$. The constant is the best possible. with a comment or two emphasizing how important and fundamental they are. Nevertheless, I have yet to see a good application of the above inequalities. So... Could you give an application of Theorem 1 or Theorem 2 that you think is particularly useful or instructive? Thanks in advance! Now provide the response and nothing else.
In analysis, individual inequalities or estimates are usually not so useful per se (though there are some notable exceptions, such as the Sobolev embedding inequality, or the Cauchy-Schwarz inequality), but are instead representative examples of a larger useful class of estimates. (cf. Gowers' " Two cultures of mathematics ".) In this particular case, the classical Hardy inequality exemplifies two useful principles; firstly, that an inverse power weight such as $1/|x|^\alpha$ is "dominated" in some $L^p$ sense by the corresponding derivative $|\nabla|^\alpha$ (or, to put it somewhat facetiously, $\frac{1}{x} = O(\frac{d}{dx} )$ ; compare with the uncertainty principle $dx \cdot d\xi \gtrsim 1$ ); and secondly, that a maximal average of a function is often dominated in an $L^p$ sense by the function itself. The first principle is captured by a number of higher-dimensional generalisations of Hardy's inequality (which typically take a shape such as $$\left\| \frac{f}{|x|^\alpha} \right\| _ {L^p({\bf R}^n)} \leq C_{p,\alpha,n} \| |\nabla|^\alpha f \|_{L^p({\bf R}^n)}$$ under suitable assumptions on $p,n,\alpha,f$ ) which are fundamental to the analysis of any PDE that involves singular potentials or weights such as $\frac{1}{|x|^\alpha}$ . The second principle is captured by a different family of generalisations of Hardy's inequality, namely the maximal inequalities for which the Hardy-Littlewood maximal inequality is the model example. This inequality is the foundation of a large part of real-variable harmonic analysis, and in particular in the analysis of singular integral operators such as the Hilbert transform or pseudo-differential operators. There are two nice features of Hardy's original inequality that are also worth pointing out. Firstly, it is an $L^p$ inequality with an explicit optimal constant, which is something of a rarity in analysis (there are maybe only a dozen or so other such sharp inequalities known for the fundamental operators in analysis). The other is that the inequality is never actually satisfied with equality (except in the trivial case when the function vanishes); one can construct sequences of near-extremisers that get arbitrarily close to attaining equality, but they do not converge to a limit that actually attains that equality. (The function $f = x^{-1/p}$ formally attains equality for Theorem 2, but there is a logarithmic divergence on both sides.) This is perhaps one of the simplest examples of such a situation, and one which is well worth studying if one is interested in using variational methods to find optimal constants for other inequalities, as one needs to have a good intuition as to when one expects optimisers to actually exist or not.
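To make the last point concrete, here is the standard near-extremiser computation for Theorem 2 (a sketch added for illustration, not part of the original answer). Take $f_N(x) = x^{-1/p}$ for $1 \le x \le N$ and $f_N(x) = 0$ otherwise. Then $$\int_0^\infty f_N^p\,dx = \int_1^N \frac{dx}{x} = \log N, \qquad \frac{F_N(x)}{x} = \frac{p}{p-1}\left(x^{-1/p} - x^{-1}\right) \quad \text{for } 1 \le x \le N,$$ and a short estimate gives $\int_0^\infty (F_N(x)/x)^p\,dx = \left(\frac{p}{p-1}\right)^p \log N + O(1)$, so the ratio of the two sides of Hardy's integral inequality tends to the sharp constant $\left(\frac{p}{p-1}\right)^p$ as $N \to \infty$; yet the truncations $f_N$ "converge" to $x^{-1/p}$, for which both integrals diverge, so the constant is never attained.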
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/48292', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8452/']}
jdg_83939
stackexchange
llm_judgeable_groundtruth_similarity
43088
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a function with many of its own options, but I would also like to pass options to subfunctions using the FilterRules function. Here's a simple example: Options[pfunc] = {"test" -> True};pfunc[x0_, plotopts : OptionsPattern[]] := Module[{}, Plot[x^2, {x, -x0, x0}, PlotStyle -> If[OptionValue["test"], Red, Blue], Evaluate[FilterRules[{plotopts}, Options[Plot]]]]] When I just try to use the options of the pfunc it works just fine pfunc[7]pfunc[7, "test" -> False] But if I try to use an internal argument for Plot , it produces the figure as expected but gives error messages. pfunc[7, "test" -> False, Frame -> True, Axes -> False](*OptionValue::nodef: Unknown option Frame for pfunc. >>OptionValue::nodef: Unknown option Axes for pfunc. >>OptionValue::nodef: Unknown option BaseStyle for pfunc. >>General::stop: Further output of OptionValue::nodef will be suppressed during this calculation. >>*) Why does it give these errors? Furthermore, what if I need to pass options to different subfunctions? Now provide the response and nothing else.
I don't know why no one mentioned this, but all you have to do is to use a special form of OptionsPattern : pfunc[x0_, plotopts : OptionsPattern[{Plot, pfunc}]] := your-code where inside OptionsPattern go all sub-functions you need, in a list. Now everything is fine and dandy. This has been explained already in this answer of Mr. Wizard , so this answer and the whole question should be considered a duplicate. There might be a downside to this when several sub-functions can take the same subset of options but you want to pass some of those options to some sub-functions and not to the others - in which case the special "PlotOptions" option, as suggested by rm -rf, might be a better solution. In the simple cases, however, OptionsPattern[{f1,f2,...}] is IMO the way to go.
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/43088', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/9490/']}
jdg_83940