source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses, 1 value) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10) |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 6855169 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Profiling some computational work I'm doing showed me that one bottleneck in my program was a function that basically did this ( np is numpy , sp is scipy ): def mix1(signal1, signal2): spec1 = np.fft.fft(signal1, axis=1) spec2 = np.fft.fft(signal2, axis=1) return np.fft.ifft(spec1*spec2, axis=1) Both signals have shape (C, N) where C is the number of sets of data (usually less than 20) and N is the number of samples in each set (around 5000). The computation for each set (row) is completely independent of any other set. I figured that this was just a simple convolution, so I tried to replace it with: def mix2(signal1, signal2): outputs = np.empty_like(signal1) for idx, row in enumerate(outputs): outputs[idx] = sp.signal.convolve(signal1[idx], signal2[idx], mode='same') return outputs ...just to see if I got the same results. But I didn't, and my questions are: Why not? Is there a better way to compute the equivalent of mix1() ? (I realise that mix2 probably wouldn't have been faster as-is, but it might have been a good starting point for parallelisation.) Here's the full script I used to quickly check this: import numpy as npimport scipy as spimport scipy.signalN = 4680C = 6def mix1(signal1, signal2): spec1 = np.fft.fft(signal1, axis=1) spec2 = np.fft.fft(signal2, axis=1) return np.fft.ifft(spec1*spec2, axis=1)def mix2(signal1, signal2): outputs = np.empty_like(signal1) for idx, row in enumerate(outputs): outputs[idx] = sp.signal.convolve(signal1[idx], signal2[idx], mode='same') return outputsdef test(num, chans): sig1 = np.random.randn(chans, num) sig2 = np.random.randn(chans, num) res1 = mix1(sig1, sig2) res2 = mix2(sig1, sig2) np.testing.assert_almost_equal(res1, res2)if __name__ == "__main__": np.random.seed(0x1234ABCD) test(N, C)
Now provide the response and nothing else.
| So I tested this out and can now confirm a few things: 1) numpy.convolve is not circular, which is what the fft code is giving you: 2) FFT does not internally pad to a power of 2. Compare the vastly different speeds of the following operations: x1 = np.random.uniform(size=2**17-1)x2 = np.random.uniform(size=2**17)np.fft.fft(x1)np.fft.fft(x2) 3) Normalization is not a difference -- if you do a naive circular convolution by adding up a(k)*b(i-k), you will get the result of the FFT code. The thing is padding to a power of 2 is going to change the answer. I've heard tales that there are ways to deal with this by cleverly using prime factors of the length (mentioned but not coded in Numerical Recipes) but I've never seen people actually do that. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6855169', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/188535/']} | jdg_81441 |
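A minimal NumPy sketch of the circular-vs-linear point made in the answer above; the array names and lengths are illustrative rather than taken from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
a = rng.standard_normal(N)
b = rng.standard_normal(N)

# What mix1 computes per row: ifft(fft(a) * fft(b)) is a *circular* convolution.
fft_result = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

# np.convolve (like scipy.signal.convolve) computes the *linear* convolution, length 2N-1 ...
linear = np.convolve(a, b)

# ... and wrapping its tail back onto the first N samples reproduces the FFT result.
wrapped = linear[:N].copy()
wrapped[:N - 1] += linear[N:]
print(np.allclose(fft_result, wrapped))   # True

# Conversely, zero-padding the FFTs to at least 2N-1 recovers the linear convolution.
padded = np.fft.ifft(np.fft.fft(a, 2 * N - 1) * np.fft.fft(b, 2 * N - 1)).real
print(np.allclose(padded, linear))        # True
```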
stackexchange | llm_judgeable_groundtruth_similarity | 24402456 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
py.test stacktraces look like this at the moment: Traceback (most recent call last): File "/home/foo_tbz_di476/src/djangotools/djangotools/tests/ReadonlyModelTestCommon.py", line 788, in test_stale_or_missing_content_types self.assertEqual([], errors, 'Stale/Missing ContentTypes: %s' % '\n'.join(errors)) File "/usr/lib64/python2.7/unittest/case.py", line 511, in assertEqual assertion_func(first, second, msg=msg) File "/usr/lib64/python2.7/unittest/case.py", line 740, in assertListEqual self.assertSequenceEqual(list1, list2, msg, seq_type=list) File "/usr/lib64/python2.7/unittest/case.py", line 722, in assertSequenceEqual self.fail(msg) File "/usr/lib64/python2.7/unittest/case.py", line 408, in fail raise self.failureException(msg) It would be much easier for my human eyes if the output would skip the lines from the unittest module. Example: Traceback (most recent call last): File "/home/foo_tbz_di476/src/djangotools/djangotools/tests/ReadonlyModelTestCommon.py", line 788, in test_stale_or_missing_content_types self.assertEqual([], errors, 'Stale/Missing ContentTypes: %s' % '\n'.join(errors)) I tried option --tb=short but this does not do this. Update A solution without a unix pipe (like py.test ...| grep ) is preferred. Update after 8 years I think it is time to leave ascii-universe and enter HTML. Wouldn't it be nice to have a traceback which can be collapsed/expanded?
Now provide the response and nothing else.
| It looks like you are invoking pytest like: py.test --tb=native This form will output a python stdlib stacktrace derived from traceback.format_exception With pytest you can add a conftest.py file(s) to your project. Here you can add any bits of code to modify pytest behaviour. Beware! In both the following approaches use monkey patching which people may consider evil . Option 1: String matching This is the simplest approach but could be a problem if the string you are searching for appears in a line that you don't want to hide. This approach patches the ReprEntryNative class in the py package which is a dependancy of pytest. Put the following code in you conftest.py import pydef test_skip_line(line): """decide which lines to skip, the code below will also skip the next line if this returns true""" return 'unittest' in lineclass PatchedReprEntryNative(py._code.code.ReprEntryNative): def __init__(self, tblines): self.lines = [] while len(tblines) > 0: line = tblines.pop(0) if test_skip_line(line): # skip this line and the next tblines.pop(0) else: self.lines.append(line)py._code.code.ReprEntryNative = PatchedReprEntryNative Option 2: traceback frame inspection If string matching isnt true enough for you, we can inspect the traceback before it gets dumped to strings and only output frames that aren't from a set of modules. This approach patches the traceback.extract_tb function which probably kills puppies. Put the following code in you conftest.py import inspectimport linecacheimport tracebackimport unittest.caseimport sys SKIPPED_MODULES = [ unittest.case]def test_skip_frame(frame): module = inspect.getmodule(frame) return module in SKIPPED_MODULESdef tb_skipper(tb): tbnext = tb.tb_next while tbnext is not None: if test_skip_frame(tbnext.tb_frame): tbnext = tbnext.tb_next else: yield tbnext yield Nonedef new_extract_tb(tb, limit = None): if limit is None: if hasattr(sys, 'tracebacklimit'): limit = sys.tracebacklimit list = [] n = 0 new_tb_order = tb_skipper(tb) # <-- this line added while tb is not None and (limit is None or n < limit): f = tb.tb_frame lineno = tb.tb_lineno co = f.f_code filename = co.co_filename name = co.co_name linecache.checkcache(filename) line = linecache.getline(filename, lineno, f.f_globals) if line: line = line.strip() else: line = None list.append((filename, lineno, name, line)) tb = next(new_tb_order) # <-- this line modified, was tb = tb.tb_next n = n+1 return listtraceback.extract_tb = new_extract_tb | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24402456', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/633961/']} | jdg_81442 |
stackexchange | llm_judgeable_groundtruth_similarity | 48626203 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I’m trying to remove an input field by clicking an “X button”. After it is removed it will not post its value when the form is submitted. A “+ button” appears that allows the user to add said input again. The input has an onclick event that opens a calendar and after reattaching, the calendar does not open on click anymore. I can’t use jQuery. adderBtn.onclick = function (e) { var elem = that.hiddenElems.shift(); that.collectionItemContainer.append(elem);}removerBtn.onclick = function (e) { collectionItemElem.remove(); that.hiddenElems.push(collectionItemElem);} The question is how do I remove and reattach DOM nodes without losing the Events.
Now provide the response and nothing else.
| When you remove an element, as long as you keep a reference to it, you can put it back. So: var input = /*...code to get the input element*/;input.parentNode.removeChild(input); // Or on modern browsers: `input.remove();` later if you want to put it back someParentElement.appendChild(input); Unlike jQuery, the DOM doesn't distinguish between "remove" and "detach" — the DOM operation is always the equivalent of "detach," meaning if you add the element back, it still has its handlers: Live Example: var input = document.querySelector("input[type=text]");input.addEventListener("input", function() { console.log("input event: " + this.value);});input.focus();var parent = input.parentNode;document.querySelector("input[type=button]").addEventListener("click", function() { if (input.parentNode) { // Remove it parent.removeChild(input); } else { // Put it back parent.appendChild(input); }}); <form> <div> Type in the input to see events from it </div> <label> Input: <input type="text"> </label> <div> <input type="button" value="Toggle Field"> </div></form> If you remove the element without keeping any reference to it, it is eligible for garbage collection, as are any handlers attached to it (provided nothing else refers to them, and ignoring some historic IE bugs in that regard...). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48626203', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4251547/']} | jdg_81443 |
stackexchange | llm_judgeable_groundtruth_similarity | 39091048 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on a project for a client. I have two .cer files (developer and distribution) both are loaded into Keystore Access on Mac OS X. However I'm unable to export as a .p12 file. Alternatively I tried OpenSSL but still no luck... openssl pkcs12 -export -in followMe_ios_development.cer -out followMe_ios_development.p12 -clcerts -nokeys How can I export this .cer file as a .p12 so I can compile this app for iOS?
Now provide the response and nothing else.
| try this:given you have files as follow: aps.cer, downloaded from Apple. app.key, your own private key generated by openssl. 1st, convert the .cer file into .pem format: openssl x509 -in aps.cer -inform DER -out aps.pem -outform PEM 2nd, use the .pem file and your private .key to generate .p12 file: openssl pkcs12 -export -out aps.p12 -inkey app.key -in aps.pem this should prompt to ask a password for this .p12 file. CF: aps_developer_identity.cer to p12 without having to export from Key Chain? Creating a .p12 file | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/39091048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/710887/']} | jdg_81444 |
stackexchange | llm_judgeable_groundtruth_similarity | 28989 |
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is not a comprehensive list but just a few snippets from languages that have a few consonants that sound pretty much the same to me. I wanted to ask how I can learn to hear the difference between them. Also, I am interested to know if either (a) it is just an "accent thing", meaning " d " might be pronounced d after some sounds, while pronounced ɖ after others, d̪ after still others, and d̪ʱ after others, but they all mean " d ", or (b) these are actually conceptualized as totally distinct sounds. The examples are: Nuxalk : pʰ ⟨p⟩ , tʰ ⟨t⟩ , t͡sʰ ⟨c⟩ , cʰ ⟨k⟩ , kʷʰ ⟨kʷ⟩ , qʰ ⟨q⟩ , qʷʰ ⟨qʷ⟩ , qʼʷ ⟨q̓ʷ⟩ Proto Circassian has a lot, but won't go into that. Preaspiration ʰt still figuring out. Uyghur : k vs. q . Tsez : k vs. kʼ . Odia : t̪ ʈ t̪ʰ ʈʰ , d̪ ɖ d̪ʱ . Naxi : /k/ vs. /kʰ/ k , /cʰ/ vs. /tʂʰ/ ch Hindustani : (ʂ) vs. ʃ , and (ʐ) vs. (ʒ) vs. (ɣ) . The main ones are: /k/ vs. /kʰ/ k k vs. kʼ k vs. q /cʰ/ vs. /tʂʰ/ ch kʷʰ ⟨kʷ⟩ vs. qʷʰ ⟨qʷ⟩ (ʂ) vs. ʃ (ʐ) vs. (ʒ) vs. (ɣ) I understand the technical pronunciation difference between k and q , but I can't hear a difference, especially if two words were said that mean totally different things but sound roughly the same, like kar vs. qar , I couldn't tell the difference. Wondering if you need to be able to distinguish the two sounds in order to understand the language, or it's more of a context/accent thing (way of pronouncing the "k" sound in a certain context). Same goes for kʷʰ ⟨kʷ⟩ vs. qʷʰ ⟨qʷ⟩ , just slightly different variants. In Naxi they have /cʰ/ vs. /tʂʰ/ ch , which both sound the same to me as well. Wondering the same for this, if it's required to distinguish between these two sounds. In Hindustani they have (ʂ) vs. ʃ , and (ʐ) vs. (ʒ) vs. (ɣ) . I can kind of tell the difference between /ʂ/ and /ʃ/ , where ʂ is a little deeper sounding. But all of these sound similar to each other roughly. So basically just wondering if you need to be able to clearly distinguish and pronounce these sound pairs/sets in order to speak and understand the language properly, or really there is just one underlying sound, and multiple ways of producing it (like (ʂ) vs. ʃ ). And if it needs to be understood at a deeper level, wondering if you have suggestions on how to do it.
Now provide the response and nothing else.
| If you mean "what can I do to learn the distinctions", you need repeated and varied exposure to the sounds in question, and you need some method of telling if you're correct in your hearing and pronunciation. For instance, you might learn the difference between [k] and [q] in the context of learning Arabic. In that case, you would be given many words (indeed minimal pairs) involving those sounds, you could be given dictation exercises where you have to write words that you don't know, and your instructor guides you in correct pronunciations. In other words, practice, practice, practice, based on data, data, data. In the case of normal child language acquisition, this just happens with no instruction. If you grow up in a Lushootseed-speaking environment, you'll speak Lushootseed (or if everybody is speaking related Nuxalk, that's what you'll learn -- their segmental inventories are very similar except apparently for "k"). However, if you try to learn these sounds as an adult, you may never learn them. The extent to which it is mandatory to learn native-like pronunciations is very hard to determine. I do know that you can map Lushootseed consonants onto English consonants (eliminate ejectives, q→k, kʷ→k, x,χ→h, ɬ→ʃ), and the instructor can still understand you. It would be an interesting experiment to reduce all of the clicks of Taa to [k], and get rid of tone, phonatory adornments and nasalization on vowels, to see if you could still be understood (Taa is much less redundant compared to Lushootseed). Some of the distinctions that you mentioned are marginal anyhow, for example k vs q in Uighur. There is a 3-way contrast in Norwegian (dialect 1) between [ʃ, ʂ, ç] which are all pronounced as [ʃ] in dialect 2, and I don't think anyone even notices the neutralization. If the goal is "be minimally comprehensible", you can probably forgo the obscure sounds, but if you aspire to more native control over the sound system, you need practical experience with these sounds. Unfortunately, once you've gotten control over a certain sound in some language, like palatal [c] in Hungarian, you have to start over when you try to learn Saami, since [c] in Saami is pronounced somewhat differently. Ejectives in Salishan are different from ejectives in Ethiopian Semitic. Then you have to start to acquire various variants of [ʈ,c,q,ħ,ʕ] and so on. If you have a specific sound that's causing a problem and if you can hack the writing system, you might try forvo.com, where you can find recordings (of varying quality) of words in the language. The North Saami collection is very well done, for example. Their Naxi collection is, shall we say, sparse. I find getting suitable recordings to be the biggest challenge. | {} | {'log_upvote_score': 4, 'links': ['https://linguistics.stackexchange.com/questions/28989', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/3142/']} | jdg_81445 |
stackexchange | llm_judgeable_groundtruth_similarity | 2406187 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Prove using first principles : $42n^8 + 16n^5 + 5n\log(n) + 2017 \in O(n^{10})$ I am not sure where to begin and what "first principles" exactly means?
Now provide the response and nothing else.
| Your example is correct, I think you are very close to having it understood fully. Just note that in mod 3 calculations, $\bar{3} = \bar{0}$, so it makes sense that in your particular example, the addition of (the equivalence class of) 3 does not change anything. Similarly, for example$$\bar{2} + \bar{2} = \overline{2+2} = \bar{4} = \bar{1},$$which does make sense as $2+2 = 4 = 1 + 3 = 1 \mod 3$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2406187', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/413834/']} | jdg_81446 |
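The same congruences can be checked numerically; a trivial, purely illustrative snippet in Python:

```python
# Adding the class of 3 changes nothing mod 3, since 3 is congruent to 0:
print((2 + 3) % 3 == 2 % 3)   # True
# And 2 + 2 = 4 lands in the class of 1:
print((2 + 2) % 3)            # 1
```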
stackexchange | llm_judgeable_groundtruth_similarity | 9075098 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can we start/stop a Windows Service from Java? For example, I would like to start and stop the mysql Windows Service from Java. If start/stop is possible, then is it possible to know whether the service is started successfully or not?
Now provide the response and nothing else.
| You can formulate a Command Prompt script to start, stop, and check status on a service using a String Array: // start serviceString[] script = {"cmd.exe", "/c", "sc", "start", SERVICE_NAME};// stop serviceString[] script = {"cmd.exe", "/c", "sc", "stop", SERVICE_NAME};// check whether service is running or notString[] script = {"cmd.exe", "/c", "sc", "query", APP_SERVICE_NAME, "|", "find", "/C", "\"RUNNING\""}; Execute scripts using the following: Process process = Runtime.getRuntime().exec(script); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9075098', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/867662/']} | jdg_81447 |
stackexchange | llm_judgeable_groundtruth_similarity | 89130 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to change the format of the output xargs displays cat k.txt 1 2 3 And cat k.txt | xargs 1 2 3 However I would like to have 1, 2, 3 or 1|2|3 . Any suggestions?
Now provide the response and nothing else.
| Below are a dozen or so examples of how you can take a file such as this: $ cat k.txt123 and convert it to this format: 1,2,3 You can use this command to create the above file if you'd like to play along: $ cat <<EOF > k.txt123EOF The examples below are split into 2 groups. Ones that "work" and ones that "almost" work. I leave these because often times it's just as valuable to see why something doesn't work, as it is to see why something does. Most scripting languages that I'm familiar with are represented. Some are represented multiple times, since as with the famous acronym typically referenced in Perl, TIMTOWTDI . NOTE: You can swap out the comma ( , ) in the examples below and replace it with whatever characters you want, i.e. | . Examples that "work" These code snippets will produce the desired output. The paste command: $ paste -s -d ',' k.txt 1,2,3 The sed command: $ sed ':a;N;$!ba;s/\n/,/g' k.txt1,2,3$ sed ':a;{N;s/\n/,/};ba' k.txt 1,2,3 The perl command: $ perl -00 -p -e 's/\n(?!$)/,/g' k.txt1,2,3$ perl -00 -p -e 'chomp;tr/\n/,/' k.txt1,2,3 The awk command: $ awk '{printf"%s%s",c,$0;c=","}' k.txt1,2,3$ awk '{printf "%s,",$0}' k.txt | awk '{sub(/\,$/,"");print}'1,2,3$ awk -vORS=, 1 k.txt | awk '{sub(/\,$/,"");print}'1,2,3$ awk 'BEGIN {RS="dn"}{gsub("\n",",");print $0}' k.txt | awk '{sub(/\,$/,"");print}'1,2,3 The python command: $ python -c "import sys; print sys.stdin.read().replace('\n', ',')[0:-1]" <k.txt1,2,3$ python -c "import sys; print sys.stdin.read().replace('\n', ',').rstrip(',')" <k.txt1,2,3 Bash's mapfile built-in: $ mapfile -t a < k.txt; (IFS=','; echo "${a[*]}")1,2,3 The ruby command: $ ruby -00 -pe 'gsub /\n/,",";chop' < k.txt1,2,3$ ruby -00 -pe '$_.chomp!"\n";$_.tr!"\n",","' k.txt1,2,3 The php command: $ php -r 'echo strtr(chop(file_get_contents($argv[1])),"\n",",");' k.txt1,2,3 Caveats Most of the examples above will work just fine. Some have hidden issues, such as the PHP example above. The function chop() is actually an alias to rtrim() , so the last line's trailing spaces will also be removed. So too do does the first Ruby example, and the first Python example. The issue is with how they're all making use of a type of operation that essentially "chops" off, blindly, a trailing character. This is fine in for the example that the OP provided, but care must be taken when using these types of one liners to make sure that they conform with the data they're processing. Example Say our sample file, k.txt looked like this instead: $ echo -en "1\n2\n3" > k.txt It looks similar but it has one slight difference. It doesn't have a trailing newline ( \n ) like the original file. Now when we run the first Python example we get this: $ python -c "import sys; print sys.stdin.read().replace('\n', ',')[0:-1]" <k.txt1,2, Examples that "almost" work These are the "always a bridesmaid, never a bride" examples. Most of them could probably be adapted, but when working a potential solution to a problem, when it feels "forced", it's probably the wrong tool for the job! The perl command: $ perl -p -e 's/\n/,/' k.txt1,2,3, The tr command: $ tr '\n' ',' < k.txt 1,2,3, The cat + echo commands: $ echo $(cat k.txt)1 2 3 The ruby command: $ ruby -pe '$_["\n"]=","' k.txt1,2,3, Bash's while + read built-ins: $ while read line; do echo -n "$line,"; done < k.txt1,2,3, | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/89130', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/43968/']} | jdg_81448 |
stackexchange | llm_judgeable_groundtruth_similarity | 5231530 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
UPDATE: Added the query that runs second most: (maybe needed when taking an index in consideration???) SELECT m.time, m.message, m.receiver_uid AS receiver, m.sender_uid AS sender FROM messages AS m, users AS u WHERE u.uid = '$coID' AND ( (m.receiver_uid = '$meID' AND m.sender_uid = '$coID') OR (m.receiver_uid = '$coID' AND m.sender_uid = '$meID') ) ORDER BY m.time DESC $meID is the iD of the user who runs the wuery, $coID is the ID of the contact. I've got a somewhat big query and it runs everytime an user visits my page. SELECT m2.message, m2.time, m2.sender_uid AS sender, m2.receiver_uid AS receiver, m.contact, u.ufirstnameFROM ( SELECT CASE WHEN sender_uid = '$me' THEN receiver_uid ELSE sender_uid END AS contact, MAX(time) AS maxtime FROM messages WHERE sender_uid = '$me' OR receiver_uid = '$me' GROUP BY CASE WHEN sender_uid = '$me' THEN receiver_uid ELSE sender_uid END ) AS mINNER JOIN messages m2 ON m.maxtime = m2.timeAND ((m2.sender_uid = '$me' AND m2.receiver_uid = m.Contact)OR (m2.receiver_uid = '$me' AND m2.sender_uid = m.Contact))INNER JOIN users AS u ON m.contact = u.uidORDER BY time DESC $me is the ID of the user who runs the query This query will (successfully) retrieve: LAST MESSAGE from EVERY 'CONVERSATION' ordered by TIME. So it will get the last message (whether the message is send or received) in every PM sessionAnd than sort those by time, and retrieves the contacts information. Please tell me if I didn't explain it correctly. My MySQL table looks like this: receiver_id | sender_id | message | time From what index(es) would this query benefit? (The user table already has an primary key on the ID so the part where the join retrieves the contacts name should be efficient) EXPLAIN OUTPUTs: The BIG query: id select_type table type possible_keys key key_len ref rows Extra1 PRIMARY <derived2> ALL NULL NULL NULL NULL 4 Using temporary; Using filesort1 PRIMARY m2 ALL NULL NULL NULL NULL 42 Using where1 PRIMARY u eq_ref PRIMARY PRIMARY 4 m.contact 1 Using where2 DERIVED messages ALL NULL NULL NULL NULL 42 Using where; Using temporary; Using filesort The query in the update part: id select_type table type possible_keys key key_len ref rows Extra1 SIMPLE u const PRIMARY PRIMARY 4 const 1 Using index; Using filesort1 SIMPLE m ALL NULL NULL NULL NULL 42 Using where
Now provide the response and nothing else.
| Consider the following implementation as a basic guide how to apply custom styling and create non-selectable items: public class ExtendedComboBox extends JComboBox { public ExtendedComboBox() { setModel(new ExtendedComboBoxModel()); setRenderer(new ExtendedListCellRenderer()); } public void addDelimiter(String text) { this.addItem(new Delimiter(text)); } private static class ExtendedComboBoxModel extends DefaultComboBoxModel { @Override public void setSelectedItem(Object anObject) { if (!(anObject instanceof Delimiter)) { super.setSelectedItem(anObject); } else { int index = getIndexOf(anObject); if (index < getSize()) { setSelectedItem(getElementAt(index+1)); } } } } private static class ExtendedListCellRenderer extends DefaultListCellRenderer { @Override public Component getListCellRendererComponent(JList list, Object value, int index, boolean isSelected, boolean cellHasFocus) { if (!(value instanceof Delimiter)) { return super.getListCellRendererComponent(list, value, index, isSelected, cellHasFocus); } else { JLabel label = new JLabel(value.toString()); Font f = label.getFont(); label.setFont(f.deriveFont(f.getStyle() | Font.BOLD | Font.ITALIC)); return label; } } } private static class Delimiter { private String text; private Delimiter(String text) { this.text = text; } @Override public String toString() { return text.toString(); } }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5231530', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/574700/']} | jdg_81449 |
stackexchange | llm_judgeable_groundtruth_similarity | 59055023 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just installed Visual Studio 2019 and tried it with my current solution. It seems like it ignores the tsconfig.json file, while it worked in VS 2017.In Visual Studio 2017 in project properties: TypeScript Build, it shows: One or more tsconfig.json or jsconfig.json files detected. Project properties are disabled. This works good and compiles the files according to my tsconfig.json file. Now the newly installed Visual Studio 2019 behaves differently. It does not show that sentence and I have to configure all the settings in the project properties. It seems like it does not recognize my tsconfig.json file at all. How do I get VS 2019 to use the tsconfig.json instead of the project properties?
Now provide the response and nothing else.
| The Build Action property (F4) for the tsconfig.json file has to be set to Content .Otherwise Visual Studio ignores this file since VS 2019. There is a downside though: After this change VS 2017 will complain about the Build Action: ErrorNETSDK1022 Duplicate 'Content' items were included. The .NET SDK includes'Content' items from your project directory by default. You can eitherremove these items from your project file, or set the'EnableDefaultContentItems' property to 'false' if you want toexplicitly include them in your project file. For more information,see https://aka.ms/sdkimplicititems . The duplicate items were:'tsconfig.json' | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/59055023', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6276704/']} | jdg_81450 |
stackexchange | llm_judgeable_groundtruth_similarity | 32409101 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to animate an SVG circle's radius attribute with CSS. Whilst (using the Firefox Inspect Element tool) I can see that the animation itself is setup and running correctly, the size of the ".innerCircle" doesn't visibly change. If you can spot anything that I've obviously missed, or help in any way I'd be greatly appreciative. I'm rather new to this, so if I have gone about this wrong, please be kind! I've pasted my files underneath for reference. Thanks again. @keyframes buttonTransition { from { r: 5%; } to { r: 25%; }}.innerCircle { animation-duration: 1s; animation-iteration-count: infinite; animation-name: buttonTransition;} <html> <head> <link href = "styling.css" rel = "stylesheet" type = "text/css"></link> </head> <body> <svg class = "button" expanded = "true" height = "100px" width = "100px"> <circle cx = "50%" cy = "50%" r = "35%" stroke = "#000000" stroke-width = "10%" fill = "none"/> <circle class = "innerCircle" cx = "50%" cy = "50%" r = "25%" fill = "#000000"/> </svg> </body></html>
Now provide the response and nothing else.
| In SVG 1.1 the radius of a circle was an attribute and not a CSS property . SVG 2 changes this and instead makes the circle's radius a CSS property that's mapped to an attribute. CSS animations animate CSS properties and do not animate attributes. Firefox has now implemented this part of the SVG 2 specification so the testcase in the question will work now although it didn't when the question was written. SMIL animations will work on attributes (and CSS properties). <html> <head> <link href = "styling.css" rel = "stylesheet" type = "text/css"></link> </head> <body> <svg class = "button" expanded = "true" height = "100px" width = "100px"> <circle cx = "50%" cy = "50%" r = "35%" stroke = "#000000" stroke-width = "10%" fill = "none"/> <circle class = "innerCircle" cx = "50%" cy = "50%" r = "25%" fill = "#000000"> <animate attributeName="r" begin="0s" dur="1s" repeatCount="indefinite" from="5%" to="25%"/> </circle> </svg> </body></html> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32409101', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4637862/']} | jdg_81451 |
stackexchange | llm_judgeable_groundtruth_similarity | 8275181 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm writing a small client application to communicate with a server. I open a socket between my client and the server and can read anything that goes out from the server with a BufferedReader. This is read in a thread. However, when I write on the socket using a BufferedReader, nothing happens ! No exception but not any server response (and it should have a server response)Here is my code: socketWritter.write(message);socketWritter.write("\n");System.out.println(socketWritter.toString());socketWritter.flush(); My socket is correctly open and mu BufferedWriter correctly initialized : new BufferedWriter(new OutputStreamWriter(socket.getOutputStream())) I have no idea why this doesn't work ??Any help would be great !Regards
Now provide the response and nothing else.
| Your code is correct. I tested it with a generic server that will echo whatever the client sends and it worked fine (with no changes). It could be that the server your using is faulty. One thing I noticed was that for my server I needed to append a new line character every time I wrote to the output stream, for it to actually send the data. I'm willing to bet that's why your GUI isn't receiving anything. Here's the client thread class from my server: class ClientThread extends Thread { private Socket sock; private InputStream in; private OutputStream out; ClientThread( Socket sock ) { this.sock = sock; try { this.in = sock.getInputStream(); this.out = sock.getOutputStream(); } catch ( IOException e ) { e.printStackTrace(); } } //Echos whatever the client sends to it public void run() { BufferedReader bufIn = new BufferedReader( new InputStreamReader( in ) ); BufferedWriter bufOut = new BufferedWriter( new OutputStreamWriter( out ) ); while ( true ) { try { String msg = bufIn.readLine(); System.out.println( "Received: " + msg ); bufOut.write( msg ); bufOut.newLine(); //HERE!!!!!! bufOut.flush(); } catch ( IOException e ) { e.printStackTrace(); } } }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8275181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/251552/']} | jdg_81452 |
stackexchange | llm_judgeable_groundtruth_similarity | 8983 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The explanation I have heard of the difference between static and dynamic friction is that static friction is stronger because bonds form when one object is put on top of another object and these have to be overcome to get the movement started. For a rotating tire, although the point on the ground will be stationary for an instant, it would seem that bonds wouldn't have time to form. So, why isn't the dynamic coefficient of friction used?
Now provide the response and nothing else.
| A car drives at 20 m/s. The circumference of the wheel is 2m, so the rotation rate is 10 Hz. A reasonable percentage of the tire is in contact with the ground - maybe around $5\%$ . That would give a contact time of $5\times 10^{-3}$ s. This is a pretty long time in molecular terms. The distance between molecules divided by the speed of light is around $10^{-18}$ s, so that's the fastest we can imagine some sort of bonding occurring. That 16 orders of magnitude faster than the contact time. Real chemical reactions must be slower, but with $10^{16}$ "clicks" of time for the chemistry to sort itself out, there should be plenty of time for the tire to stick. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/8983', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/119/']} | jdg_81453 |
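The estimate in the answer above, written out as a small calculation (all numbers are the rough values assumed there):

```python
speed = 20.0                # m/s
circumference = 2.0         # m
rotation_rate = speed / circumference       # 10 rotations per second
contact_fraction = 0.05                     # ~5% of the tread touching the road at once
contact_time = contact_fraction / rotation_rate
bond_timescale = 1e-18                      # intermolecular distance / speed of light, in s

print(contact_time)                         # 0.005 s of contact per revolution
print(contact_time / bond_timescale)        # ~5e15 "clicks" of time for bonding to occur
```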
stackexchange | llm_judgeable_groundtruth_similarity | 34342 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I have the system of multi-variable polynomial (quadratic) equations with real coefficients. The number of equations is given scales as $K$ and the number of unknowns goes as $K^2$. So for for large $K$, this is an underdetermined system. Can I conclude that I can always find $K$ large enough so that this system has at least one real solution or where should I look for counterexamples?
Now provide the response and nothing else.
| However large $n$ may be, the equation $x_1^2 + \ldots + x_n^2 = -1$ has no real solution. (Similarly, the homogeneous equation $x_1^2 + \ldots + x_n^2 = 0$ has no nontrivial solution.) If you want to go there, the theorem which gives you a necessary and sufficient condition for a system of real polynomial equations to have a real solution is the Real Nullstellensatz . See for instance Section 3.9` of http://alpha.math.uga.edu/~pete/modeltheory2010Chapter3.pdf ( Very roughly, this theorem says that the above is essentially the only way that a system of $n$ real polynomial equations in more than $n$ variables can fail to have a real solution.) | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/34342', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8119/']} | jdg_81454 |
stackexchange | llm_judgeable_groundtruth_similarity | 24296325 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have this library which is purely sync. It exposes sync methods and I have clients using it. I changed the underlying implementation to async and exposed async methods for whoever wants to use it. But now I have lots of replicated code. Async code seems to perform better. I'd like existing clients to take advantage of it and I want to eliminate code repetition. Is there any safe way to keep a sync signature and call async implementation? I'm specifically afraid of deadlocks when calling .Result and .Wait.
Now provide the response and nothing else.
| I strongly encourge you not to do this First, read Should I expose synchronous wrappers for asynchronous methods? and Should I expose asynchronous wrappers for synchronous methods? by Stephan Toub. The main reasons I wouldn't do this: Sync over Async - As you said, deadlocks. Higher or lower down the call chain, using Result or Wait on async maybe risky business. It really depends on which platform you run (ASP.NET, UI, Console) as each one behaves a bit differently (Even when using ConfigureAwait(false) ) Async over Sync - Scalability. Once I see an async endpoint, I assume it is pure async, which for me, as the consumer of the API means there isn't any Thread spinning "behind my back". If your users assume the same, finding out that for every call to an async method a Thread Pool thread is used can drastically hurt performance when trying to scale out. If the users want to wrap a sync method with a Task.Run , leave it up to them to make that call and make their on judgement on how this will affect their application | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24296325', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/298622/']} | jdg_81455 |
stackexchange | llm_judgeable_groundtruth_similarity | 57425 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Bug introduced in 8 or earlier and persisting through 12.0 I'm trying to plot a graph wit a logarithmic y-axis. Since I'm exporting the graph to pdf and later printing it, I want to manually set the frame and tick marks to a reasonable thickness. However the logarithmic tick marks do not change their thickness. (Note: I exaggerated the thickness of the tick marks on purpose to illustrate my point.) LogPlot[x^2, {x, 1, 3}, PlotStyle -> Red, Frame -> True,FrameStyle -> Directive[Black, AbsoluteThickness[2]],FrameTicksStyle -> Directive[Black, AbsoluteThickness[2]]] I'm working with Mathematica 10 on Mac OS X 10.9.4. In Mathematica Version 9 the logarithmic tick marks change their thickness as expected. Can anyone reproduce this behavior? Is this a bug or did the FrameTicksStyle change in Mathematica 10?
Now provide the response and nothing else.
| Reproduced in v.10.0.0 under Win7 x64. In versions 8.0.4 and 9.0.1 the behavior differs in details but the bug is also present: only major logarithmic frame ticks change their thickness, but not minor ticks. Let us elaborate. First of all, in v.10 the logarithmic tick specifications are generated dynamically when the plot is rendered by the FrontEnd by calling Charting`ScaledTicks and Charting`ScaledFrameTicks : LogPlot[x^2, {x, 1, 3}, Frame -> True];Options[%, FrameTicks] {FrameTicks -> {{Charting`ScaledTicks[{Log, Exp}], Charting`ScaledFrameTicks[{Log, Exp}]}, {Automatic, Automatic}}} Here is what these functions return (I have shortened the output for readability): Charting`ScaledTicks[{Log, Exp}][1, 10] {{2.30259, 10, {0.01, 0.}, {AbsoluteThickness[0.1]}}, {4.60517, 100, {0.01, 0.}, {AbsoluteThickness[0.1]}}, {6.90776, 1000, {0.01, 0.}, {AbsoluteThickness[0.1]}}, {9.21034, Superscript[10,4], {0.01, 0.}, {AbsoluteThickness[0.1]}}, {0., Spacer[{0, 0}], {0.005, 0.}, {AbsoluteThickness[0.1]}}, {0.693147, Spacer[{0, 0}], {0.005, 0.}, {AbsoluteThickness[0.1]}}} It is clear that the thickness specifications are already included and have higher priorities than the FrameTicksStyle directive. That is the reason why the latter has no effect. So this behavior reflects inconsistent implementation of Charting`ScaledTicks and Charting`ScaledFrameTicks which should NOT include styling into the tick specifications they generate. It is a bug. Here is a function fixLogPlot which fixes this: fixLogPlot[gr_] := Show[gr, FrameTicks -> {{# /. _AbsoluteThickness :> (## &[]) &@* Charting`ScaledTicks[{Log, Exp}], # /. _AbsoluteThickness :> (## &[]) &@* Charting`ScaledFrameTicks[{Log, Exp}]}, {Automatic, Automatic}}];fixLogPlot@ LogPlot[x^2, {x, 1, 3}, Frame -> True, FrameTicksStyle -> Directive[Black, AbsoluteThickness[2]]] UPDATE Here is universal fix for version 10 which works for all types of log plots: fixLogPlots[gr_] := gr /. f : (Charting`ScaledTicks | Charting`ScaledFrameTicks)[{Log, Exp}] :> (Part[#, ;; , ;; 3] &@*f) UPDATE 2 And here is universal fix for versions 8 and 9: fixLogPlots[gr_] := gr /. f : (Ticks | FrameTicks -> _) :> (f /. _Thickness :> (## &[])) | {} | {'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/57425', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/19173/']} | jdg_81456 |
stackexchange | llm_judgeable_groundtruth_similarity | 5772304 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using the following bean definition to make my spring app talking in JSON <bean id="jacksonMessageConverter" class="org.springframework.http.converter.json.MappingJacksonHttpMessageConverter" /> Is it possible with this message converter bean to use the @JsonView annotation?
Now provide the response and nothing else.
| @JsonView is already supported in the Jackson JSON Processor from v1.4 onwards. New Edit: Updated for Jackson 1.9.12 According to the v1.8.4 documentation the function I was using writeValueUsingView is now Deprecated Use ObjectMapper.viewWriter(java.lang.Class) instead … however that has also been Deprecated Since 1.9, use writerWithView(Class) instead ! (see v1.9.9 documentation ) So here is an updated example, tested with Spring 3.2.0 and Jackson 1.9.12 which simply returns {id: 1} and not the extended {name: "name"} since it is using the .writerWithView(Views.Public.class) . Switching to Views.ExtendPublic.class will result in {"id":1,"name":"name"} package com.demo.app;import org.springframework.stereotype.Controller;import org.springframework.web.bind.annotation.RequestMapping;import org.springframework.web.bind.annotation.ResponseBody;import org.codehaus.jackson.map.annotate.JsonView;import org.codehaus.jackson.map.ObjectMapper;import org.codehaus.jackson.map.ObjectWriter;import javax.servlet.http.HttpServletResponse;import java.io.IOException;@Controllerpublic class DemoController { private final ObjectMapper objectMapper = new ObjectMapper(); @RequestMapping(value="/jsonOutput") @ResponseBody public String myObject(HttpServletResponse response) throws IOException { ObjectWriter objectWriter = objectMapper.writerWithView(Views.Public.class); return objectWriter.writeValueAsString(new MyObject()); } public static class Views { static class Public {} static class ExtendPublic extends Public {} } public class MyObject { @JsonView(Views.Public.class) Integer id = 1; @JsonView(Views.ExtendPublic.class) String name = "name"; }} Previous Edit: You need to instantiate the ObjectMapper and write out the object using a custom view as shown here , or in this example: Define views: class Views { static class Public {} static class ExtendedPublic extends PublicView {} ...}public class Thing { @JsonView(Views.Public.class) Integer id; @JsonView(Views.ExtendPublic.class) String name;} Use views: private final ObjectMapper objectMapper = new ObjectMapper();@RequestMapping(value = "/thing/{id}")public void getThing(@PathVariable final String id, HttpServletResponse response) { Thing thing = new Thing(); objectMapper.writeValueUsingView(response.getWriter(), thing, Views.ExtendPublic.class);} If you are using Jackson >= 1.7 you might find that the @JSONFilter better suits your needs. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5772304', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/164165/']} | jdg_81457 |
stackexchange | llm_judgeable_groundtruth_similarity | 4640462 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hey, I hope you can help me. I am trying to find a way to direct the user to the default error pages 404.html and 500.html in my public folder. So when there is a routing or nomethod error it should be directed there to.I already tried some stuff in my application controller but it didnt work. Many thanks!!
Now provide the response and nothing else.
| Rails does this for you automatically when running in production mode. When you upload your application to a live server, Rails takes care of handling those exceptions and rendering the correct error pages with the correct header status. If you're trying to see what those pages look like (for testing or something), just access them directly via http://localhost:3000/404.html Whenever you set up your Rails application on a live server (let's use Apache as an example), you give the site root as the /public folder in your application. Then, whenever a request is made to that server address, Apache first looks in that public folder and tries to serve a static asset (this is a configurable option in [environment].rb ). If it can't find the requested page, then the request is forwarded through the Ruby stack. When in production mode, if Rails encounters an error that isn't handled (i.e begin, rescue), it throws the error the whole way up to the stack, which then tells Apache (again, in my example) to render an appropriate error. Here are some common errors that you'll see in development mode and what they render in production mode: ActiveRecord::RecordNotFound => 404 (page not found)nil.method => 500 (server error) unless you turn off whiny nilsActionController::RoutingError => 404 (page not found) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4640462', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/535779/']} | jdg_81458 |
stackexchange | llm_judgeable_groundtruth_similarity | 28531421 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It's been a while that I've been doing C++ but I'm not familiar with templates. Recently, I tried to write a class that wrap a std::vector<std::tuple<Types...>> . This class must have member functions, and I really need to be able to iterate over the tuple. In fact, if I am able to print every element of a tuple (in the order), I would be able to do everything I need. I found a solution using a cast, but I'm not really confident with it since it is based on a cast that I don't really like (plus, when I try to use static_cast , it doesn't compile anymore). My question is, is the following code correct, portable, is it a hack and should I find another way to do this than to use this cast ? Also, this cast is probably a runtime-cast right ? Is there a way to do what I want without this ? std::ostream& operator<<(std::ostream& out, std::tuple<> const& tuple){ return out; // Nothing to do here}template<typename First, typename... Types>std::ostream& operator<<(std::ostream& out, std::tuple<First, Types...> const& tuple){ out << std::get<0>(tuple) << " "; // The cast that I don't like return out << (std::tuple<Types...>&) tuple;}int main(){ auto tuple = std::make_tuple(1, 2.3, "Hello"); std::cout << tuple << std::endl; return 0;} Thank you in advance for your answers.
Now provide the response and nothing else.
| Use std::index_sequence_for for fun and profit. template <typename TupleLike, size_t ... Inds>std::ostream& PrintHelper(std::ostream& out, TupleLike const& tuple, std::index_sequence<Inds...>){ int unused[] = {0, (void(out << std::get<Inds>(tuple) << " "), 0)...}; (void)unused; return out;}template<typename... Types>std::ostream& operator<<(std::ostream& out, std::tuple<Types...> const& tuple){ return PrintHelper(out, tuple, std::index_sequence_for<Types...>());} EDIT : Live Demo . Thanks to @dyp. This uses an expansion trick from this answer . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28531421', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3010432/']} | jdg_81459 |
stackexchange | llm_judgeable_groundtruth_similarity | 2647197 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Are there compatibility barriers with a .NET 4.0 assembly calling code in a .NET 2.0 assembly? And vice versa? More specifically, I'm investigating an upgrade to Visual Studio 2010 when using a third party application based on .NET 2.0. The application is extensible by hooks that reference my custom code. And vice versa, my code will reference the application's assemblies.
Now provide the response and nothing else.
| The CLR, in .NET 4, can consume .NET 2 assemblies and use them properly. If you want your .NET 2 application to be able to load .NET 4 assemblies, you'll need to configure it differently. By setting the requiredRuntime to .NET 4, and the legacy load policy, you should be able to force the .NET 2 application to load using CLR 4, which would allow your .NET 4 assemblies to be used. Setup your app.config file to include: <?xml version="1.0"?><configuration> <startup useLegacyV2RuntimeActivationPolicy="true"> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> </startup></configuration> That being said, in a situation like this, I'd recommend just using VS 2010 and targetting .NET 3.5 instead of .NET 4. This would compile your assemblies for CLR 2, and avoid this issue entirely. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2647197', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3347/']} | jdg_81460 |
stackexchange | llm_judgeable_groundtruth_similarity | 67116 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was looking at my Arduino Uno and I noticed that symbol by digital pins 11 , 10 , 9 , 6 , 5 , and 3 . What do these mean? Does this affect the way it works? Can I not use these pins for certain situations?
Now provide the response and nothing else.
| Relax. Don't worry. These pins are called PWM and are the same as the other pins, except they have an "added bonus." Some uses from Arduino's Website : Dimming an LED Simulates an analog output. The output is still digitally toggling from 0V to 5V. However, low-pass filter (capacitor and resistor) to simulate analog voltages. Generating audio signals. Providing variable speed control for motors. Generating a modulated signal, for example to drive an infrared LED for a remote control. How it works: The PWM pins are controlled by on-chip timers which toggle the pins automatically at a rate of about 490Hz. The "Pulse Width Modulation" (PWM) is how long the pin stays on or off for a single cycle of that frequency. This can dim a LED by giving the illusion it is at half the brightness as before, where it is really flashing very quickly. When there is a 25% duty cycle, it is on one-forth of the time. If you used for a LED, it would appear about 1/4th as bright [give or take]. (Note: as some people pointed out this isn't truly proportional but let's leave it this way for simplicity. EX: 25% isn't always 1/4th the brightness.) (If you are really electrical savvy, you could probably add a capacitor to make it also an analog output.) How to use these pins to output: First, you need to define the pin as output. Then, you use analogWrite(ledPin, 128); to start it. The ledPin is the PWM pin that you want to start PWM and 128 should be replaced with a number between 0 and 255 ; 0 : 0% duty cycle (turns the pin completely off) and 255 : 100% duty cycle . (turns the pin on completely) Source: http://www.arduino-tutorials.com/arduino-pwm/ Why can't I just turn the light on and off really fast in my code?: Technically, you can, however, there are some problems: It may not be as precise as using the hardwired circuits with the Arduino Its simpler just to type instruction instead of having lots of "if" statements It's not really going to make that much of a difference if the Arduino's sole purpose is to generate PWM signals. However, if you put any delays longer than 50 MS in the main loop, it will mess up the timing. With the software approach you would want to eliminate any "delay" functions since the Arduino only runs on one thread (it can only do one thing at one time). If you know what you're doing, it won't make that much of a difference dimming the light, but if you have an extra pin with PWM, you're just wasting your time with a software approach. As others have pointed out: You still need a resistor for your circuits to limit current and voltage. You cannot skip this. | {} | {'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/67116', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/18442/']} | jdg_81461 |
stackexchange | llm_judgeable_groundtruth_similarity | 764726 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The book "Unix in a Nutshell" discusses about accessing multiple files on pages 572-573. There seem to be very useful commands such as ":e", ":e #", ":e new_file", ":n files", ":args", ":prev" and ":n!". The commands confuse me: ":n Edit next file in the list of files." ":args Display list of files to be edited." ":prev Edit previous file in the list of files." I cannot see no real list when I do ":args". There is only a small text at the corner. I would like to see all files that I accessed with ":e", ie a list of files in the buffer. Where can I see the list when I do the command ":n files"? What are the commands ":prev" and ":n" supposed to do? I got the error message: There is only one file to edit.
Now provide the response and nothing else.
| I've not read the book in mention, but I'll try to explain how vim handles files. Vim has buffers. You open every file with: :e name_of_the_file.txt (loads file in a buffer, i.e. "opens file") You can also: :e *.txt Useful options while doing this are :set laststatus=2 (to always show the statusline) :set wildmenu (to ease opening files) If you have standard vim with netrw plugin, you can: :e . (for one of vim's file managers, so to say) To manage buffers: :ls will give you a list of currently opened buffers (files) :bnext , and :bprevious (or :bn and :bp ) enable you to cycle through buffers :bd closes the buffer/file (buffer done) Other buffer types serve other purposes (yanking/pasting, temporary, vim's internal, ... etc.) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/764726', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/54964/']} | jdg_81462 |
stackexchange | llm_judgeable_groundtruth_similarity | 4316385 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems that recursive-descent parsers are not only the simplest to explain, but also the simplest to design and maintain. They aren't limited to LALR(1) grammars, and the code itself can be understood by mere mortals. In contrast, bottom up parsers have limits on the grammars they are able to recognize, and need to be generated by special tools (because the tables that drive them are next-to-impossible to generate by hand). Why then, is bottom-up (i.e. shift-reduce) parsing more common than top-down (i.e. recursive descent) parsing?
Now provide the response and nothing else.
| If you choose a powerful parser generator, you can code your grammar without worrying about peculiar properties. (LA)LR means you don't have to worry about left recursion, one less headache. GLR means you don't have to worry about local ambiguity or lookahead. And the bottom-up parsers tend to be pretty efficient. So, once you've paid the price of a bit of complicated machinery, it is easier to write grammars and the parsers perform well. You should expect to see this kind of choice wherever there is some programming construct that commonly occurs: if it is easier to specify, and it performs pretty well, even if the machinery is complicated, complex machinery will win. As another example, the database world has gone to relational tools, in spite of the fact that you can hand-build an indexed file yourself. It's easier to write the data schemas, it's easier to specify the indexes, and with complicated enough machinery behind (you don't have to look at the gears, you just use them), they can be pretty fast with almost no effort. Same reasons. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4316385', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/82320/']} | jdg_81463 |
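To make the left-recursion point above concrete, here is a tiny hypothetical grammar fragment in Yacc/Bison notation; an LALR generator accepts it as written, while a naive recursive-descent parser coded directly from the same rule would call itself forever without consuming any input:

/* expr.y (illustrative fragment): left recursion is fine for yacc/bison */
expr
    : expr '+' term    /* LALR builds left-associative sums via shift/reduce */
    | term
    ;

/* A hand-written recursive-descent expr() derived from this rule would recurse
   immediately on expr and never terminate; you would first have to rewrite it,
   e.g. as  expr : term ('+' term)*  */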
stackexchange | llm_judgeable_groundtruth_similarity | 1547593 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Find the limit: $$\lim_\limits{x\to 0^+}{\left( e^{\frac{1}{\sin x}}-e^{\frac{1}{x}}\right)}$$ Using graph inspection, I have found the limit to be $+\infty$, but I cannot prove this in any way (I tried factorizing, using DLH)... Can anyone give a hint about that? The limit should be done without any approximations, because we haven't been taught those yet.
Now provide the response and nothing else.
| First, you can show that $$\lim_{x\to 0^+}\left(\frac{1}{\sin x}-\frac{1}{x}\right)=0.$$ This shows that $$\lim_{x\to 0^+}\frac{e^{\frac{1}{\sin x}-\frac{1}{x}}-1}{\frac{1}{\sin x}-\frac{1}{x}}=1.$$Now, write $$e^{1/\sin x}-e^{1/x}=e^{1/x}\left(\frac{1}{\sin x}-\frac{1}{x}\right)\frac{e^{\frac{1}{\sin x}-\frac{1}{x}}-1}{\frac{1}{\sin x}-\frac{1}{x}},$$ and letting $x\to 0^+$ gives\begin{align*}\lim_{x\to 0^+}\left(e^{1/\sin x}-e^{1/x}\right)&=\lim_{x\to 0}e^{1/x}\left(\frac{1}{\sin x}-\frac{1}{x}\right)\lim_{x\to 0^+}\frac{e^{\frac{1}{\sin x}-\frac{1}{x}}-1}{\frac{1}{\sin x}-\frac{1}{x}}=\lim_{x\to 0^+}e^{1/x}\left(\frac{1}{\sin x}-\frac{1}{x}\right)\\&=\lim_{x\to 0^+}\frac{e^{1/x}(x-\sin x)}{x\sin x}=\lim_{x\to 0^+}\frac{e^{1/x}(x-\sin x)}{x^2},\end{align*}since $\lim_{x\to 0}\frac{\sin x}{x}=1$. Continuing the computation, this last limit is equal to$$\lim_{x\to 0^+}xe^{1/x}\lim_{x\to 0^+}\frac{x-\sin x}{x^3}=\frac{1}{6}\lim_{x\to 0^+}xe^{1/x}=\frac{1}{6}\lim_{u\to\infty}\frac{e^u}{u},$$after performing the substitution $u=\frac{1}{x}$. The last limit is $\infty$, so the limit you ask is equal to $\infty$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1547593', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_81464 |
stackexchange | llm_judgeable_groundtruth_similarity | 3799501 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've written a class that implements IEnumerable<T> . I have a method that returns MyClass . If I try to yield return from within that method, the compiler tells me "... cannot be an iterator block because ... is not an iterator interface type". So, how can I define my own interface iterator type? Does it have to be "abstract" (can't have any methods defined)? What I want to do is write a bunch of chainable methods, so every method should return an instance of MyClass . But I need MyClass to be some kind of enumerable. Rather than using some underlying data type, I was hoping I could just yield return everywhere. @Oded: class SharpQuery : IEnumerable<HtmlNode>{ public SharpQuery Find(string selector) { foreach (var n in this) { // filter the results yield return node; } }}
Now provide the response and nothing else.
| No, that's not possible. To see why, consider that you have a class Zoo that implements IEnumerable<Animal> but also has lots of other members. A Zoo is an IEnumerable<Animal> but not necessarily vice versa - a sequence of animals is just a sequence of animals. There's no zoo keeper, no shops, no entrance fee, or any of the other things that make a zoo a zoo. When you use yield return x, the return type cannot be Zoo because you don't have a zoo - you just have a sequence of animals. What you can do instead is call it as new Zoo(foo()), where foo returns an IEnumerable<Animal>, and add a constructor to Zoo that accepts an IEnumerable<Animal>. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3799501', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/65387/']} | jdg_81465
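A minimal sketch of that idea applied to the SharpQuery class from the question (the FilterNodes name and the _nodes field are illustrative, not taken from the original code; assume GetEnumerator() enumerates _nodes):

// Wrap an existing sequence back into the concrete type.
public SharpQuery(IEnumerable<HtmlNode> nodes)
{
    _nodes = nodes;                    // hypothetical backing field
}

// The iterator returns the interface type, so yield return is legal here...
private IEnumerable<HtmlNode> FilterNodes(string selector)
{
    foreach (var node in this)
    {
        // ...filter the results...
        yield return node;
    }
}

// ...and the chainable method wraps that sequence back into SharpQuery.
public SharpQuery Find(string selector)
{
    return new SharpQuery(FilterNodes(selector));
}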
stackexchange | llm_judgeable_groundtruth_similarity | 284804 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Dark matter does not interact with electromagnetic radiation. Neutrinos also don't interact via electromagnetism, which would make neutrinos a good candidate for dark matter. However because of the extremely low mass of the known three types of neutrinos they can only supply a tiny fraction of the dark matter in the universe (see the section of the Wikipedia page comparing baryonic and non-baryonic matter ). But I have heard there have been speculations in the past, that there could be a fourth type of neutrino, which should be much more heavy that the other three types of neutrinos. Wouldn't such a heavy 4th generation neutrino be the perfect, natural candidate for dark matter?
Now provide the response and nothing else.
| Weak physics rates (I think the $Z^0$ lifetime is generally quoted as giving the strongest single constraint) give pretty strong reason to doubt the existence of a fourth generation of 'normal' neutrinos with a mass lower than $\approx 45\,\mathrm{GeV}$, but so-called "sterile neutrinos" (which don't couple directly to the weak interaction, but do mix with the better-known flavors) have been on the radar as candidates for some time. I haven't checked with a recent review article, but my general feeling is that improved measurements of the mixing parameters have been constraining the possible space for such a mixing term, though it will be essentially impossible to remove all doubt. However, the smaller the mixing, the harder it is to arrange a large enough mass of such neutrinos in the early universe to explain the observed structure. These are not WIMPs and would not be detected by the current and planned generation of direct observation experiments. Only a day later: I learned of this recent paper jointly published by Daya Bay and MINOS putting new limits on the mass and mixing angles for oscillation to sterile neutrinos in a minimal 4-flavor model. That said, a few years ago there was a round of interest in 5-flavor models, which seemed for a time to be more likely (they are, of course, also harder to constrain). | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/284804', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/1648/']} | jdg_81466
stackexchange | llm_judgeable_groundtruth_similarity | 18225 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The following question was motivated by my research. Consider a $n\times n$ matrix whose elements are $0$'s or $1$'s such that the determinant is odd. The question is: is it possible to assign signs to matrix elements such that the determinant of the matrix will be equal to $1$? I do not know an answer even to a weaker question: is it possible to replace some of the $1$'s in the matrix with odd integers so that the determinant will be equal to $1$? Remark: it is known that a natural reduction mod $N$ map $SL_n(\mathbb Z) \to SL_n(\mathbb Z/ N\mathbb Z)$ is surjective for any $n,N$.
Now provide the response and nothing else.
| I believe that the weaker question can be proved by induction on $n$. The case $n=1$ is clear. Now assume for $n-1$ and expand the $n\times n$ determinant by the first row. At least one of the terms in the expansion must be odd. Thus the original matrix $A$ has an $(n-1)\times (n-1)$ submatrix $B$, say consisting of entries not in row 1 or column $j$, with odd determinant such that $A_{1j}=1$. By induction we can change some of the 1's in $B$ to odd integers so that the new matrix $B'$ satisfies det$(B')=1$. Let $A'$ be $A$ after replacing $B$ with $B'$. Now det$(A')= A_{1j} +$ terms not involving $A_{1j}$, say det$(A')=A_{1j}+c$. Since $A_{1j}=1$ and det$(A')$ is odd, it follows that $c$ is even. Hence we can replace $A_{1j}$ with the odd integer $1-c$ so that the resulting matrix has determinant 1. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/18225', 'https://mathoverflow.net', 'https://mathoverflow.net/users/2823/']} | jdg_81467 |
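To spell out the sign in the last step of the argument above (a detail the answer absorbs into the constant $c$): writing $\varepsilon=(-1)^{1+j}$ for the cofactor sign, expansion along the first row gives $$\det(A')=\varepsilon\,A_{1j}\det(B')+c=\varepsilon\,A_{1j}+c,$$ and since $\det(A')$ is odd while $A_{1j}=1$, the remaining sum $c$ is even; replacing $A_{1j}$ by the odd integer $\varepsilon(1-c)$ (which does not change $c$, as the other minors avoid row 1) then yields $\det(A')=\varepsilon^2(1-c)+c=1$.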
stackexchange | llm_judgeable_groundtruth_similarity | 1854756 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The number of irrational roots of $(x^2 -3x +1)(x^2 +3x +2)(x^2 -9x + 20) = -30$ is___? My Approach : Multiplying the above equation and then apply Descartis rule is very lengthy method. Also I tried it by taking factors of $30$ and then solving the equation but that is also going very lengthy . Please suggest some good method of doing this.
Now provide the response and nothing else.
| I'm not sure if this is a good method for your question, but we have$$(x^2-3x+1)(x+1)(x+2)(x-4)(x-5)=-30$$and rearranging,$$(x^2-3x+1)(x+1)(x-4)(x+2)(x-5)=-30.$$With $t=x^2-3x$ we have $(x+1)(x-4)=t-4$ and $(x+2)(x-5)=t-10$, so this becomes$$(t+1)(t-4)(t-10)=-30.$$Now, one sees that $t=5$ works. So $$(t+1)(t-4)(t-10)+30$$is divisible by $t-5$, giving$$(t+1)(t-4)(t-10)+30=(t-5)(t^2-8t-14).$$So,$$x^2-3x=t=5,\ 4\pm\sqrt{30}.$$For each of these values of $t$, the quadratic $x^2-3x-t=0$ has discriminant $9+4t$, namely $29$, $25+4\sqrt{30}$ and $25-4\sqrt{30}$; all three are positive and none is the square of a rational number, so all six roots are real and irrational. Hence the number of irrational roots is $6$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1854756', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/353089/']} | jdg_81468
stackexchange | llm_judgeable_groundtruth_similarity | 3047375 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to simulate F5 key press in my C# program. When IE is open, I want to be able refresh my website automatically. How can I do that?
Now provide the response and nothing else.
| Here's an example... static class Program{ [DllImport("user32.dll")] public static extern int SetForegroundWindow(IntPtr hWnd); [STAThread] static void Main() { while(true) { Process [] processes = Process.GetProcessesByName("iexplore"); foreach(Process proc in processes) { SetForegroundWindow(proc.MainWindowHandle); SendKeys.SendWait("{F5}"); } Thread.Sleep(5000); } }} a better one... less anoying... static class Program{ const UInt32 WM_KEYDOWN = 0x0100; const int VK_F5 = 0x74; [DllImport("user32.dll")] static extern bool PostMessage(IntPtr hWnd, UInt32 Msg, int wParam, int lParam); [STAThread] static void Main() { while(true) { Process [] processes = Process.GetProcessesByName("iexplore"); foreach(Process proc in processes) PostMessage(proc.MainWindowHandle, WM_KEYDOWN, VK_F5, 0); Thread.Sleep(5000); } }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3047375', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/327592/']} | jdg_81469 |
stackexchange | llm_judgeable_groundtruth_similarity | 3308826 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is $A = \{ (x, \frac{1}{x}) : 0 < x \le 1 \}$ , compact in $\mathbb{R}^2$ ? My attempt: Yes, I take $f(x, y) = xy$ that is here $A$ is closed because is the inverse image of closed set that is $\{1\}$ , by a continuous function.
Now provide the response and nothing else.
| If $A$ were compact, its image by the continuous map \begin{align}A&\longrightarrow \mathbf R, \\\Bigl(x,\frac1x\Bigr)&\longmapsto x,\end{align} would be compact. However this image is the interval $(0,1]$ , which is not closed in $\mathbf R$ , thereby not compact. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3308826', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/557708/']} | jdg_81470 |
stackexchange | llm_judgeable_groundtruth_similarity | 1071273 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This should be an easy problem but... I need to format a currency for display (string) in C# The currency in question will have its own rules such as the symbol to use and if that symbol should come before the value (e.g. $ 10) or after (e.g. 10 ₫ which is Vietnamese Dong). But how the numbers are formatted depends upon the users local, not the currency. E.g. 1.234,56 ₫ should be displayed to a user in Vietnam but 1,234.56 ₫ should be displayed to a user in the US (formatted as code so easy to see difference between , and.) So code like Double vietnamTotal = 1234.56;return vietnamTotal.ToString("c"); Won't work as it will use the users (or more accuratly CultureInfo.CurrentCulture) locale for format and currency so you would get things like $1,123.56 - right use of , and . but wrong symbol. Double vietnamTotal = 1234.56;CultureInfo ci = new CultureInfo(1066); // Vietnamreturn vietnameTotal.ToString("c",ci)); Would give 1.234,56 ₫ - Right symbol, wrong use of , and . for current user. This post gives more detail on the right thing to do, but not how to do it. What obvious method hidden in the framework am I missing?
Now provide the response and nothing else.
| Take the NumberFormatInfo from the user's currency, and clone it Set the CurrencySymbol in the cloned format to the CurrencySymbol of the currency in question If you want the currency position (and some other aspects of the format) to be copied,set CurrencyPositivePattern and CurrencyNegativePattern in the same way. Use the result to format. For example: using System;using System.Globalization;class Test{ static void Main() { decimal total = 1234.56m; CultureInfo vietnam = new CultureInfo(1066); CultureInfo usa = new CultureInfo("en-US"); NumberFormatInfo nfi = usa.NumberFormat; nfi = (NumberFormatInfo) nfi.Clone(); NumberFormatInfo vnfi = vietnam.NumberFormat; nfi.CurrencySymbol = vnfi.CurrencySymbol; nfi.CurrencyNegativePattern = vnfi.CurrencyNegativePattern; nfi.CurrencyPositivePattern = vnfi.CurrencyPositivePattern; Console.WriteLine(total.ToString("c", nfi)); }} Admittedly my console doesn't manage to display the right symbol, but I'm sure that's just due to font issues :) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1071273', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20198/']} | jdg_81471 |
stackexchange | llm_judgeable_groundtruth_similarity | 34504849 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The Int class has a method is_prime , so I figured, just for giggles, I'd like to add some other methods to Int for some of my hobby projects that do number theory stuff. I thought I could do something like this: class Int { method is-even (Int:D $number ) returns Bool:D { return False if $number % 2; return True; } }say 137.is-even; But that doesn't work: ===SORRY!===P6opaque: must compose before allocating I don't know if this means that I can't do that or that I'm doing it incorrectly. I could easily make a new class that inherits from Int , but that's not what I'm interested in: class MyInt is Int { method is-even () returns Bool:D { return False if self % 2; return True; } }my $n = MyInt.new(138);say $n.is-even; I'm not looking for workarounds or alternate solutions.
Now provide the response and nothing else.
| There's syntactic sugar for this - augment : use MONKEY-TYPING;augment class Int { method is-even() returns Bool:D { return False if self % 2; return True; }} Augmenting a class is considered dangerous for two reasons: First, action at a distance, and second, because (as far as I'm aware), there's potential for undefined behaviour deoptimization as it might leave various method caches in an invalid state. Thus, the requirement for providing the MONKEY-TYPING pragma before you're allowed to use it. As an aside, note that is-even could be written more compactly as self %% 2 . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34504849', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2766176/']} | jdg_81472 |
stackexchange | llm_judgeable_groundtruth_similarity | 95551 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When concentrated sulfuric acid is added to anhydrous potassium chloride and the fumes produced are bubbled into aqueous potassium iodide solution, the observed solution would be colourless solution. I think the first reaction is:$$\ce{2KCl(aq) + H2SO4(aq) -> K2SO4 + 2HCl(g)}$$ I assumed the second reaction would be:$$\ce{HCl + KI -> KCl + \frac{1}{2}I2 + \frac{1}{2}H2 }$$ However the second reaction is wrong according to the answer book as $\ce{HCl}$ won't react with $\ce{KI}$. From what I know, $\ce{Cl}$ is a stronger oxidising agent than $\ce{I}$, so shouldn't $\ce{I-}$ in $\ce{KI}$ be oxidized to $\ce{I2}$? I am an A-levels student so would appreciate a simpler answer.
Now provide the response and nothing else.
| In your first reaction, you added concentrated sulfuric acid to anhydrous potassium chloride. In your equation, you wrote $\ce{KCl (aq)}$, which is incorrect. Under dilute conditions, that reaction would not take place, because all reagents and products would be in the aqueous ionic state, meaning there is no reaction. In your second reaction, the equation is completely wrong. It is true that elemental $\ce{Cl2}$ is a stronger oxidizing agent than elemental $\ce{Br2}$ or elemental $\ce{I2}$. But the current situation is ionic $\ce{Cl-}$ versus ionic $\ce{I-}$, where both are aqueous. If all reagents are in ionic form and the expected products are also ionic, there is no reaction. See your reaction below:$$\ce{H+(aq) + Cl-(aq) + K+(aq) + I-(aq) <=> H+(aq) + I-(aq) + K+(aq) + Cl-(aq)}$$No gas, liquid or solid is formed, only ions. Therefore, there is no reaction. | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/95551', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/63084/']} | jdg_81473
stackexchange | llm_judgeable_groundtruth_similarity | 503865 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
There must be an easy way to do this. I build a Flex app using ant that depends on a SWC library, which works fine except that it rebuilds the library whether it needs to or not. How do I tell ant to only run the task if any of the sources files of the library (*.as, *.mxml) are newer than the SWC? I've looked at <dependset> but it only seems to delete files, not determine whether a task should be run or not. <depend> seems to expect a one-to-one relationship between the source and target files rather than a one-to-many relationship -- I have many input files and one output file, but no intermediate object files. Thanks a lot,Alex
Now provide the response and nothing else.
| You may use the Ant uptodate task to set a property when the output is already up to date, and run your compile target only when that property is not set (note the unless attribute below). I don't know much about Flex, but you probably want something like this: <?xml version="1.0" encoding="UTF-8"?><project name="test" default="compile"> <target name="checkforchanges"> <uptodate property="nochanges"> <srcfiles dir="." includes="**/*.as"/> <srcfiles dir="." includes="**/*.mxml"/> <mapper type="merge" to="application.flex"/> </uptodate> </target> <target name="compile" depends="checkforchanges" unless="nochanges"> ... </target></project> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/503865', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15899/']} | jdg_81474
stackexchange | llm_judgeable_groundtruth_similarity | 57706198 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given this explanation Nullable types: Type declarations for parameters and return values can now be marked as nullable by prefixing the type name with a question mark. This signifies that as well as the specified type, NULL can be passed as an argument, or returned as a value, respectively. https://www.php.net/manual/en/migration71.new-features.php The following code : public function test(?int $var) {} Means that test() can be called with $var either as int or as null . And the following code : public function test(int $var = null) {} Means that test() can be called with $var either as int or as null as well. What are the differences between those two methods ? Is any of these more performant than the other?
Now provide the response and nothing else.
| It is important to distinguish between the two language features being discussed here, that is, type declarations and default argument values . The first function is only using type declarations, this means that the input argument has to be of the type int or NULL . The second function is using both type declarations and default argument values, this means that the argument has to be of the type int or NULL but if omitted it will default to NULL . Take your first function, if you simply called test() without passing anything to it, you'd get: PHP Fatal error: Uncaught ArgumentCountError: Too few arguments to function test() [...] which is correct as the function expects either int or NULL but got neither whereas for the second, as you have defined the argument with a default value, it would run without errors. Code function test(?int $var) { var_dump($var);}function test2(int $var = null) { var_dump($var);}test(1); // finetest(); // errortest2(1); // finetest2(); // fine As far as performance goes, the difference is probably negligible, nothing significant enough that would be a cause for concern. Live Example Repl | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/57706198', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3489447/']} | jdg_81475 |
stackexchange | llm_judgeable_groundtruth_similarity | 22976504 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
With ssh I can do this: ssh REMOTE -t 'tmux a || tmux' With mosh the best I can do is this: mosh REMOTE -- tmux a But this won't work: mosh REMOTE -- tmux a || tmux Neither does this: (It doesn't matter whether it's single quote or double quote, I tried both) mosh REMOTE -- 'tmux a || tmux' So, my question is: How am I supposed to do this job?
Now provide the response and nothing else.
| Well, it seems that I have to explicitly use a shell to execute the command: mosh REMOTE -- sh -c 'tmux a || tmux' EDIT Instead of doing tmux a || tmux, a better way is to add new-session to ~/.tmux.conf and just run tmux. That makes things much easier. I can do things like this now: mosh REMOTE -- tmux Awesome! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22976504', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/533480/']} | jdg_81476
stackexchange | llm_judgeable_groundtruth_similarity | 58791 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to use RSA public and private keys to authenticate two computers (or to make sure they're both the right computers)? If so, how? Thanks!
Now provide the response and nothing else.
| Yes, it's possible. Computers Alice and Bob both have one private key and one public key each. Alice chooses a random token and encrypts it with Bob's public key and its own private key. Bob decrypts the token with its private key and Alice's public key, then reencrypts it with Alice's public key. Upon receiving the original token from Bob, Alice knows that Bob must know Bob's private key, therefore it must be Bob. And since the token was decrypted with Alice's public key, Bob knows that Alice is in possession of Alice's private key, which means it is Alice. And since nobody else except Alice and Bob are now privy to the random token that has been exchanged, both Alice and Bob may use it as the key to a symmetric cypher algorithm suitable for exchanging further communication (they might also use again RSA, but it's more expensive computationally): ALICE = Alice's public key; bob = Bob's private key Alice chooses K Alice encrypts with BOB, then with alice ----> alice(BOB(K)) ----> Bob knows ALICE and bob, so can retrieve K <---- bob(ALICE(K)) <---- (or just ALICE(K)) (or even K, if K need not be secret) Alice gets confirmation <---- AES256(K, MESSAGE[0]) ----> <---- AES256(K, MESSAGE[1]) ----> ... <---- AES256(K, MESSAGE[N]) ----> During such a conversation, K may also get refreshed from time to time. Real life example (well, sort of) In this example Alice and Bob are two friends and they both have the OpenSSL utility. They communicate through a public channel (e.g. their Facebook pages) where they can post binary files. OpenSSL actually can produce ASCII files that would be much better for this purpose, but let's keep as near the above as possible (in real real life we'd use some OpenSSL scripts that would automate the whole thing and work much better, or some tool that would wrap OpenSSL to do the same things with an easier interface). There are some caveats that I'll mark with "VULN". See at bottom. To begin with, Alice creates a RSA public/private key pair and extracts the public key. Bob does the same (his files will be named bob-*) (VULN-1). openssl genrsa -out alice-both.pem 1024openssl rsa -in alice-both.pem -out alice-public.pem -outform PEM -pubout Now both Alice and Bob publish their public keys on their facebook page. Anyone can read them. Now Alice wants to exchange a private conversation with Bob, maybe some files. Alice wants to be sure that Bob is cognizant of their source, and she wants Bob the be the only one to be able to read the files. This can be done using asymmetric encryption only, but it would be too slow and inefficient due to size constraints and computational overhead. So they need to use symmetric encryption, and for this, a shared secret (above, this was called K), that only they know. They do so by choosing a K, and communicating it over a secure channel. The secure channel is provided by asymmetric encryption over a public channel (facebook). Bob being a gentleman, it's Alice who chooses a passphrase. She does so by creating a text file with the passphrase... echo 'the magic words are Squeamish Ossifrage' > phrase.txt and encrypting it with Bob's public key downloaded from Bob's facebook page (VULN-2) openssl rsautl -encrypt -inkey bob-public.pem -pubin -in phrase.txt -out file1 Now, only Bob can decrypt the file. The problem remains how to make Bob sure that this is from Alice. Alice could cut "file1" in pieces small enough to be encrypted and send them all, but it's not necessary; she can sign the file with her private key. 
openssl dgst -sha256 -sign alice-both.pem -out file1.sign file1 Now Bob receives the two files. He first verifies the signature against Alice's public key (VULN-2 again). openssl dgst -sha256 -verify alice-public.pem -signature file1.sign file1 and he gets "Verified OK" (if anyone else's certificate had been used, for example his one, he would have gotten "Verification Failure"). So, verified against the key published on Alice's facebook page, the "file1" is at least as trustworthy as Alice's facebook account. Bob proceeds in decrypting with his own private key. openssl rsautl -decrypt -inkey bob-both.pem -in file1 -out secret Now Bob knows the secret phrase (it's in "secret") and can encrypt a document using that phrase. openssl enc -in secret-document.txt -out secret.bin -e -aes256 -k "$( cat secret )" or, equivalently, openssl enc -in secret-document.txt -out secret.bin -e -aes256 -k "the magic words are Squeamish Ossifrage" secret.bin is again posted publicly on Bob's page. Alice on the other hand just runs the corresponding decoding command openssl enc -in secret.bin -out plain.txt -d -aes256 -k "$( cat phrase.txt )" and recovers "plain.txt". VULN-1: Alice and Bob must be sure that nobody else has access to their private keys. They can lock them with a passphrase that needs to be entered for OpenSSL to access the keys, for example. VULN-2: if Eve gains access to Alice's facebook page, or can induce Bob's computer to believe that some other computer is actually the Facebook web site, she can trick Bob into downloading Eve's public key believing it's Alice's. If Eve is capable of doing the same to Alice, she can pose with her as Bob. So Alice "talks" to Eve, using a format that Eve can decrypt, and Eve reencrypts it after tampering and sends it along to Bob, both ends being none the wiser. This would be a man-in-the-middle attack. That's why the authenticity of the public keys must be unquestionable. Since neither Alice nor Bob have reason to use the keys they have published (they own the originals), once gained access to both Facebook pages Eve could replace both keys with others of her own making , and continuously monitor both pages in order to download any newly posted file and replace it with the appropriately decoded-tampered-and-reencrypted version. Since Alice and Bob aren't likely to verify their own files , and they haven't published the files' signatures (and even if they did, they might not notice they've been changed), they won't notice a thing. Not really likely to happen with something as public as a Facebook page, but in other scenarios the key distribution might be at risk. Of course, as soon as someone out of Eve's purview attempts to communicate with Alice or Bob and fails , the game's in the open. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/58791', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/47335/']} | jdg_81477 |
stackexchange | llm_judgeable_groundtruth_similarity | 28350232 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When starting to work with models I got the following error Class Post not found`. All I did: - Created a Model with the command php artisan make:model - Tried to get all entries from table posts with echo Post::all() I used the following code: router.php Route::get('/posts', function(){ $results = Post::all(); return $results;}); Post.php <?php namespace App;use Illuminate\Database\Eloquent\Model;class Post extends Model { protected $table = 'posts'; } What I tried - Renaming Class - Dump-autoload ( Laravel 4 Model class not found )
Now provide the response and nothing else.
| Laravel 5 promotes the use of namespaces for things like Models and Controllers. Your Model is under the App namespace, so your code needs to call it like this: Route::get('/posts', function(){ $results = \App\Post::all(); return $results;}); As mentioned in the comments you can also use or import a namespace in to a file so you don't need to quote the full path, like this: use App\Post;Route::get('/posts', function(){ $results = Post::all(); return $results;}); While I'm doing a short primer on namespaces I might as well mention the ability to alias a class as well. Doing this means you can essentially rename your class just in the scope of one file, like this: use App\Post as PostModel;Route::get('/posts', function(){ $results = PostModel::all(); return $results;}); More info on importing and aliasing namespaces here: http://php.net/manual/en/language.namespaces.importing.php | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28350232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1515190/']} | jdg_81478 |
stackexchange | llm_judgeable_groundtruth_similarity | 173898 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I can show the target file that a link points to using ls -l : snowch$ ls -l /usr/local/bin/mvnlrwxr-xr-x 1 snowch admin 29 12 Dec 08:58 /usr/local/bin/mvn -> ../Cellar/maven/3.2.3/bin/mvn Is there a way to show less output without having to pipe through another command such as awk? E.g: snowch$ ls ?? /usr/local/bin/mvn/usr/local/bin/mvn -> ../Cellar/maven/3.2.3/bin/mvn I'm running 3.2.53 on OS X 10.9.5. The output from several commands is shown below: snowch$ ls -H /usr/local/bin/mvn/usr/local/bin/mvnsnowch$ ls -L /usr/local/bin/mvn/usr/local/bin/mvnsnowch$ file /usr/local/bin/mvn/usr/local/bin/mvn: POSIX shell script text executablesnowch$ file -b /usr/local/bin/mvnPOSIX shell script text executable
Now provide the response and nothing else.
| ls unfortunately doesn't have an option to retrieve file attributes and display them in an arbitrary way. Some systems have separate commands for that (for instance GNU has a stat command or the functionality in GNU find ). On most modern systems, with most files, this should work though: $ ln -s '/foo/bar -> baz' the-file$ LC_ALL=C ls -ldn the-file | sed ' 1s/^\([^[:blank:]]\{1,\}[[:blank:]]\{1,\}\)\{8\}//'the-file -> /foo/bar -> baz That works by removing the first 8 blank delimited fields of the first line of the output of ls -l . That should work except on systems where the gid is not displayed there or the first 2 fields are joined together when there's a large number of links. With GNU stat : $ LC_ALL=C stat -c '%N' the-file'the-file' -> '/foo/bar -> baz' With GNU find : $ find the-file -prune \( -type l -printf '%p -> %l\n' -o -printf '%p\n' \)the-file -> /foo/bar -> baz With FreeBSD/OS/X stat: f=the-fileif [ -L "$f" ]; then stat -f "%N -> %Y" -- "$f"else printf '%s\n' "$f"fi With zsh stat: zmodload zsh/statf=the-filezstat -LH s -- "$f"printf '%s\n' ${s[link]:-$f} Many systems also have a readlink command to specifically get the target of a link: f=the-fileif [ -L "$f" ]; then printf '%s -> ' "$f" readlink -- "$f"else printf '%s\n' "$f"fi | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/173898', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/24554/']} | jdg_81479 |
stackexchange | llm_judgeable_groundtruth_similarity | 1976029 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Does the following integral converge? $$\int_1^\infty \sin^2 (x^2) \, dx$$ I tried$$\int_1^\infty \sin^2(x^2) \, dx=\int_1^\infty \frac{1-\cos(2x^2)}{2} \, dx = \frac{\sqrt{2}}{4} \int_1^\infty\frac{1-\cos(u)}{2\sqrt{u}} \, du$$ The idea is that, I want to compare the original integral to a divergent $p$-integral. But I am not sure how to proceed from here.
Now provide the response and nothing else.
| No, it does not. We have, due to $\sin^2(z)=\frac{1-\cos(2z)}{2}$,$$ \int_{1}^{M}\sin^2(x^2)\,dx \stackrel{x^2\to z}{=} \int_{1}^{M^2}\frac{\sin^2(z)}{2\sqrt{z}}\,dz =\frac{M-1}{2}-\int_{1}^{M^2}\frac{\cos(2z)}{4\sqrt{z}}\,dz$$where $\int_{1}^{+\infty}\frac{\cos(2z)}{4\sqrt{z}}\,dz$ is finite by Dirichlet's test /integration by parts, but $\frac{M-1}{2}$ grows unbounded. In particular,$$\int_{1}^{M^2}\frac{\cos(2z)}{4\sqrt{z}}\,dz = \left.\frac{\sin(2z)}{8\sqrt{z}}\right|_{1}^{M^2}+\int_{1}^{M^2}\frac{\sin(2z)}{16 z\sqrt{z}}\,dz$$is bounded in absolute value by $\frac{3}{8}$ (by the trivial inequality $\sin\leq 1$), hence$$\boxed{ \int_{1}^{M}\sin^2(x^2)\,dx \color{red}{\geq \frac{M}{2}-\frac{7}{8}}. }$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1976029', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/359968/']} | jdg_81480 |
stackexchange | llm_judgeable_groundtruth_similarity | 18994069 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I was going through the orange book (3rd edition) and I came across a passage in chapter 9 about the invariant qualifier. And it says: The invariant qualifier instructs the compiler and linked to ignore expressions and functions that are not directly related to the computation of the output. This passage comes after two similar snippets of code: uniform mat4 MVPmatrix;// ...in vec4 MCVertex;// ...a(); // does not modify gl_Position, MVP or MCVertex// ...// Transform vertex to clip spacegl_Position = MVP * MCVertex; and uniform mat4 MVPmatrix;// ...invariant gl_Position;in vec4 MCVertex;// ...a(); // does not modify gl_Position, MVP or MCVertex// ...// Transform vertex to clip spacegl_Position = MVP * MCVertex; The book then goes on to state: The first case may or may not compute the transformed positions in exactly the same way no matter what unrelated function or expression is linked to the shader. This can cause problems in rendering if a multipass algorithm is used to render the same geometry more than once. Which has me confused. If a() in no way affects the variables involved in calculating the transformed position, then how would the computation vary? (And how exactly does adding invariant help with that?). And referring to the first quote, what exactly do they mean by "ignoring the unrelated functions" ? Do they just not get executed?
Now provide the response and nothing else.
| The purpose of invariant is to make sure the computation you're doing will result in the same result always, no matter what the shader optimizer will do to the shader (notably across multiple shader compilations). I find the phrasing of the orange book to be poor (and misleading, as you've noted). The GLSL specification (language 1.2) section 4.6 is much clearer: In this section, variance refers to the possibility of getting different values from the same expression in different programs. For example, say two vertex shaders, in different programs, each set gl_Position with the same expression in both shaders, and the input values into that expression are the same when both shaders run. It is possible, due to independent compilation of the two shaders, that the values assigned to gl_Position are not exactly the same when the two shaders run. In this example, this can cause problems with alignment of geometry in a multi-pass algorithm. In general, such variance between shaders is allowed. When such variance does not exist for a particular output variable, that variable is said to be invariant. and it goes on to explain that the invariant qualifier gets you guarantees to avoid this issue. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18994069', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/583833/']} | jdg_81481 |
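To tie the spec wording back to the multi-pass case in the question, here is a sketch (the variable names just reuse the ones from the question): the point is that the vertex shaders of both passes, even though they are compiled into different programs, mark the output invariant and compute it from the same expression and inputs:

// pass1.vert and pass2.vert may live in different programs, but both contain:
invariant gl_Position;
uniform mat4 MVPmatrix;
in vec4 MCVertex;

void main()
{
    gl_Position = MVPmatrix * MCVertex;  // now guaranteed to yield the same values in both programs
}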
stackexchange | llm_judgeable_groundtruth_similarity | 8624137 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am serializing an structure by using BinaryFormatter using this code: private void SerializeObject(string filename, SerializableStructure objectToSerialize){ Stream stream = File.Open(filename, FileMode.Create); BinaryFormatter bFormatter = new BinaryFormatter(); bFormatter.Serialize(stream, objectToSerialize); stream.Close();} Which objectToSerialize is my structure, I'm calling this function like this: SerializableStructure s = new SerializableStructure();s.NN = NN;s.SubNNs = SubNNs;s.inputs = inputs;SerializeObject(Application.StartupPath + "\\Save\\" + txtSave.Text + ".bin", s); Which SerializableStructure , and Type of NN , SubNNs and inputs are serializable. (inputs contains some Points , Rectangles and generic lists). Now, When I run my code, I am given this error: Type 'MainProject.Main' in Assembly 'MainProject, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' is not marked as serializable. Why I'm given this error? Main is my form, and these variables are located in my form. I have successfully serialized Type of NN with MemoryStream and VB.NET , But I don't know why I'm getting this error? Here is the definition of my structures: SerializableStructure: [Serializable()]public class SerializableStructure{ public List<Inputs> inputs = new List<Inputs>(); public NeuralNetwork NN; public NeuralNetwork[] SubNNs;} Inputs: [Serializable()]public class Inputs{ public string XPath { get; set; } public string YPath { get; set; } public string ImagePath { get; set; } public string CharName { get; set; } public string CharBaseName { get; set; } public List<double> x { get; set; } public List<double> y { get; set; } public List<double> DotsX { get; set; } public List<double> DotsY { get; set; } public List<Point> GravityCenters { get; set; } public List<Rectangle> Bounds { get; set; } public override string ToString() { return CharName; } public Inputs(string xPath, string yPath, string imagePath, string charName, string charBaseName) { XPath = xPath; YPath = yPath; ImagePath = imagePath; CharName = charName; CharBaseName = charBaseName; x = new List<double>(); y = new List<double>(); GravityCenters = new List<Point>(); Bounds = new List<Rectangle>(); }} Also NN is very big structure(!).
Now provide the response and nothing else.
| This almost always means you have an event (or other delegate - maybe a callback) somewhere in your object model that is being pulled into serialization. Add [NonSerialized] to any event-backing fields. If you are using a field-like event (the most likely kind), this is: [field:NonSerialized] public event SomeDelegateType SomeEventName; Alternatively: most other serializers don't look at events/delegates, and provide better version compatibility. Switching to XmlSerializer, JavaScriptSerializer, DataContractSerializer or protobuf-net (just 4 examples) would also solve this, by the simple approach of not trying to serialize delegates at all (you almost never intend for events to be considered part of a DTO). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8624137', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/942659/']} | jdg_81482
stackexchange | llm_judgeable_groundtruth_similarity | 26932672 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have no idea how tab completion works, but all of a sudden mine is broken. I don't even know what info to provide other than the use case.there is a target clean in the makefile. $ make c<tab> results in $ make c23:set: command not foundlean EDIT: I believe somehow I ruined the set bash built-in since man set says No manual entry for set and which set doesn't report anything. Invoking set on the terminal, however, produces result. I'm using: GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu) and GNU Make 3.81
Now provide the response and nothing else.
| Thanks to Etan's comment and Aaron's indication of where the makefiles are, I managed to debug this. I ran set -x so I could track what was happening during the tab completion. The output of make c<tab> consists mostly of commands from the bash completion file for make, located at /usr/share/bash-completion/completions/make (1). However, I noticed an inconsistency between the output and the file. Towards the end, the output said:
+ local mode=--
+ (( COMP_TYPE != 9 ))
++ set +o
++ grep --colour=auto -n -F posix
+ local 'reset=23:set +o posix'
+ set +o posix
which I identified as corresponding to these lines from the file:
if (( COMP_TYPE != 9 )); then
    mode=-d # display-only mode
fi
local reset=$( set +o | grep -F posix ); set +o posix # for <(...)
So the output ran grep --colour=auto -n instead of just grep. Indeed, I had set up an alias that added those options to grep. Make worked as soon as I removed the alias. I hope this helps others debug their problems. EDIT: I have submitted a bug report here: https://alioth.debian.org/tracker/index.php?func=detail&aid=315108&group_id=100114&atid=413095 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26932672', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/786559/']} | jdg_81483
stackexchange | llm_judgeable_groundtruth_similarity | 2836539 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently I run into this integral $$\mathcal{J} = \int_{0}^{\pi/2} \frac{x \log \left ( 1-\sin x \right )}{\sin x} \, \mathrm{d}x$$ I don't know to what it evaluates. I tried several approaches. 1st: Differentiation under the integral sign Consider the function $\displaystyle f(\alpha)= \int_{0}^{\pi/2} \frac{x \log \left ( 1-\alpha\sin x \right )}{\sin x} \, \mathrm{d}x$. Hence \begin{align*}\frac{\mathrm{d} }{\mathrm{d} \alpha} f(\alpha) &= \frac{\mathrm{d} }{\mathrm{d} \alpha} \int_{0}^{\pi/2} \frac{x \log \left ( 1-\alpha\sin x \right )}{\sin x} \, \mathrm{d}x \\ &= \int_{0}^{\pi/2} \frac{\partial }{\partial \alpha} \frac{x \log \left ( 1-\alpha\sin x \right )}{\sin x} \, \mathrm{d}x \\ &= -\int_{0}^{\pi/2} \frac{x \sin x}{\sin x \left ( 1- \alpha \sin x \right )} \, \mathrm{d}x\\ &=- \int_{0}^{\pi/2} \frac{x}{1- \alpha \sin x} \, \mathrm{d}x\end{align*} And the last integral equals? 2nd: Taylor series expansion Lemma: It holds that $$x \sin^n x = \left\{\begin{matrix}2^{1-n}\displaystyle\mathop{\sum}\limits_{k=0}^{\frac{n-1}{2}}(-1)^{\frac{n-1}{2}-k}\binom{n}{k}\,x\sin\big((n-2k)x\big) & , & n \;\; \text{odd} \\\\ 2^{-n}\displaystyle\binom{n}{\frac{n}{2}}\,x+2^{1-n}\mathop{\sum}\limits_{k=0}^{\frac{n}{2}-1}(-1)^{\frac{n}{2}-k}\binom{n}{k}\,x\cos\big((n-2k)x\big) & , & n \;\; \text{even}\end{matrix}\right.$$ Hence, \begin{align*}\int_{0}^{\pi/2} \frac{x \log \left ( 1-\sin x \right )}{\sin x} \, \mathrm{d}x &= -\int_{0}^{\pi/2} \frac{x}{\sin x} \sum_{n=1}^{\infty} \frac{\sin^n x}{n} \, \mathrm{d}x \\ &=-\sum_{n=1}^{\infty} \frac{1}{n} \int_{0}^{\pi/2} x \sin^{n-1} x \, \mathrm{d}x\end{align*} However the lemma does not help at all. In fact, if someone substitutes the RHS what it seems to be in there is an $\arcsin $ Taylor expansion. The series that remains to be evaluated is very daunting. To sum up, I don't know to what this integral evaluates. I don't even know if a nice closed form exists neither do I expect one. But , I still hope.
Now provide the response and nothing else.
| The given problem is equivalent to the evaluation of$$ \int_{0}^{1}\frac{\arcsin(x)}{\sqrt{1-x^2}}\cdot\frac{\log(1-x)}{x}\,dx =\sum_{n\geq 1}\frac{4^n}{2n\binom{2n}{n}}\int_{0}^{1}x^{2n-2}\log(1-x)\,dx=\sum_{n\geq 1}\frac{4^n H_{2n-1}}{2n\binom{2n}{n}(1-2n)}$$which is a twisted hypergeometric series. On the other hand$$ \mathcal{J}= 2\int_{0}^{\pi/4}\frac{2x \log(1-\sin(2x))}{\sin(2x)}\,dx=2\int_{0}^{1}\frac{\arctan(t)}{t}\log\left(\frac{(1-t)^2}{1+t^2}\right)\,dt $$appears to be manageable through the polylogarithms machinery. Indeed $\arctan t=\text{Im}\log(1+it)$ and the integrals$$ \int \frac{\log(1+it)\log(1\pm it)}{t}\,dt, \qquad \int \frac{\log(1+it)\log(1-t)}{t}\,dt $$have closed forms in terms of $\text{Li}_2$ and $\text{Li}_3$. However the simplest way to recover $\mathcal{J}=-\frac{\pi^3}{8}$ might be to exploit complex analysis and contour integration, as it often happens when integrating multiples of $\frac{x}{\sin x}$. Through the Fourier series of $\log\sin$ we have$$ \log(1-\cos x)=-\log(2)-2\sum_{n\geq 1}\frac{\cos(nx)}{n} $$pointwise on $(0,\pi/2)$. We have that $\int_{0}^{\pi/2}\frac{x}{\sin x}\,dx $ equals $2K$, with $K$ being Catalan's constant, and by induction $$ \int_{0}^{\pi/2}\frac{x}{\sin x}\cos\left[n\left(\frac{\pi}{2}-x\right)\right]\,dx $$up to the sign, equals $\sum_{m>n/2}\frac{2(-1)^m}{(2m+1)^2}$ or $\sum_{m> n/2}\frac{1}{(2m+1)^2}$, according to the parity of $n$. This allows to write the original twisted sum in terms of standard Euler sums. $K$ disappears from the outcome after some simplification and $$ \sum_{n\geq 0}\frac{(-1)^n}{(2n+1)^3} = \frac{\pi^3}{32} $$ is well-known . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2836539', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/203035/']} | jdg_81484 |
stackexchange | llm_judgeable_groundtruth_similarity | 59900538 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am creating a shareable React component library. The library contains many components but the end user may only need to use a few of them. When you bundle code with Webpack (or Parcel or Rollup) it creates one single file with all the code . For performance reasons I do not want to all that code to be downloaded by the browser unless it is actually used. Am I right in thinking that I should not bundle the components? Should the bundling be left to the consumer of the components? Do I leave anything else to the consumer of the components? Do I just transpile the JSX and that's it? If the same repo contains lots of different components, what should be in main.js?
Now provide the response and nothing else.
| This is an extremely long answer because this question deserves an extremely long and detailed answer, as the "best practice" way is more complicated than just a few-line response. I've maintained our in-house libraries for 3.5+ years; in that time I've settled on two ways I think libraries should be bundled. The trade-offs depend on how big your library is, and personally we compile both ways to please both subsets of consumers. Method 1: Create an index.ts file that exports everything you want to expose, and target rollup at this file as its input. Bundle your entire library into a single index.js file and index.css file, with external dependencies inherited from the consumer project to avoid duplication of library code (gist included at the bottom as an example config). Pros: Easy to consume, as project consumers can import everything from the root relative library path: import { Foo, Bar } from "library". Cons: This will never be tree shakable - and before people say to do this with ESM so that it will be tree shakeable: NextJS doesn't support ESM at this current stage and neither do a lot of project setups, which is why it's still a good idea to compile this build to plain CJS. If someone imports 1 of your components they will get all the CSS and all the javascript for all your components. Method 2: This is for advanced users: Create a new file for every export and use rollup-plugin-multi-input with the option "preserveModules: true". Depending on what CSS system you're using, you also need to make sure that your CSS is NOT merged into a single file, but that each CSS file's require(".css") statement is left inside the output file after rollup and that the CSS file exists. Pros: When users import { Foo } from "library/dist/foo" they will only get the code for Foo, and the CSS for Foo, and nothing more. Cons: This setup involves the consumer having to handle node_modules require(".css") statements in their build configuration; with NextJS this is done with the next-transpile-modules npm package. Caveat: We use our own babel plugin (you can find it here: https://www.npmjs.com/package/babel-plugin-qubic) to allow people to import { Foo, Bar } from "library" and then have babel transform it to... import { Foo } from "library/dist/export/foo" import { Bar } from "library/dist/export/bar" We have multiple rollup configurations where we actually use both methods; library consumers who don't care for tree shaking can just do import { Foo } from "library" and import the single CSS file, and library consumers who do care for tree shaking and only using critical CSS can just turn on our babel plugin. Rollup guide for best practice: whether you are using typescript or not, ALWAYS build with "rollup-plugin-babel": "5.0.0-alpha.1". Make sure your .babelrc looks like this. { "presets": [ ["@babel/preset-env", { "targets": {"chrome": "58", "ie": "11"}, "useBuiltIns": false }], "@babel/preset-react", "@babel/preset-typescript" ], "plugins": [ ["@babel/plugin-transform-runtime", { "absoluteRuntime": false, "corejs": false, "helpers": true, "regenerator": true, "useESModules": false, "version": "^7.8.3" }], "@babel/plugin-proposal-class-properties", "@babel/plugin-transform-classes", ["@babel/plugin-proposal-optional-chaining", { "loose": true }] ]} And with the babel plugin in rollup looking like this...
babel({ babelHelpers: "runtime", extensions, include: ["src/**/*"], exclude: "node_modules/**", babelrc: true }), And your package.json looking ATLEAST like this: "dependencies": { "@babel/runtime": "^7.8.3", "react": "^16.10.2", "react-dom": "^16.10.2", "regenerator-runtime": "^0.13.3" }, "peerDependencies": { "react": "^16.12.0", "react-dom": "^16.12.0", } And finally your externals in rollup looking ATLEAST like this. const makeExternalPredicate = externalArr => { if (externalArr.length === 0) return () => false; return id => new RegExp(`^(${externalArr.join('|')})($|/)`).test(id);};//... rest of rollup config above external. external: makeExternalPredicate(Object.keys(pkg.peerDependencies || {}).concat(Object.keys(pkg.dependencies || {}))),// rest of rollup config below external. Why? This will bundle your shit to automatically to inheritreact/react-dom and your other peer/external dependencies from theconsumer project meaning they won't be duplicated in your bundle. This will bundle to ES5 This will automatically require("..") in all the babel helper functions for objectSpread, classes, etc FROM the consumer project which will wipe another 15-25KB from your bundle size and mean that the helper functions for objectSpread won't be duplicated in your library output + the consuming projects bundled output. Async functions will still work externals will match anything that starts with that peer-dependency suffix i.e babel-helpers will match external for babel-helpers/helpers/object-spread Finally here is a gist for an example single index.js file output rollup config file. https://gist.github.com/ShanonJackson/deb65ebf5b2094b3eac6141b9c25a0e3 Where the target src/export/index.ts looks like this... export { Button } from "../components/Button/Button";export * from "../components/Button/Button.styles";export { Checkbox } from "../components/Checkbox/Checkbox";export * from "../components/Checkbox/Checkbox.styles";export { DatePicker } from "../components/DateTimePicker/DatePicker/DatePicker";export { TimePicker } from "../components/DateTimePicker/TimePicker/TimePicker";export { DayPicker } from "../components/DayPicker/DayPicker";// etc etc etc Let me know if you experience any problems with babel, rollup, or have any questions about bundling/libraries. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59900538', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6854859/']} | jdg_81485 |
stackexchange | llm_judgeable_groundtruth_similarity | 4409023 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reading the Metaprogramming section of Programming Ruby 1.9 and I'm having trouble understanding what's going on internally between class_eval / class_exec vs. instance_eval / instance_exec . So first of all, my understanding is that def adds a method to the method table of self (the class object): class A puts self # => A def foo; 42; end # added to the method table of self, so becomes an instance methodendA.new.foo # => 42 And if we use class_eval , we get the same behavior: A.class_eval do puts self # => A def bar; 42; end # same as aboveendA.new.bar # => 42 But somehow in the instance_eval case, things are different: A.instance_eval do puts self # => A def baz; 42; end # added to the method table of an anonymous # singleton class of self, so becomes a class methodendputs A.baz # => 42s = 'string's.instance_eval do ... end # same behavior, so now def creates an instance method So I understand the functional difference between class_eval and instance_eval . But the contexts inside the class_eval and instance_eval blocks look exactly the same to me -- in particular, self points to the same object, and the local_variables are the same. So what's going on inside the blocks (internally) that's making def act different? Is there some piece of documentation I could read? The RDoc for instance_eval and class_eval doesn't help. Looking at the source, instance_eval seems to set up a singleton class object whereas class_eval doesn't -- but is this difference visible outside the C code, on the Ruby level?
Now provide the response and nothing else.
| I think your confusion comes from the fact that def does not depend on the current self, you might think about it as being a "current class" that has it's own rules. Following your examples: class A # defs here go to A puts self # => A class << self #defs here go to A's eigenclass endendA.class_eval do #defs here go to AendA.instance_eval do #defs here go to A's eigenclass ends = "Hello World"class << s #defs here go to s's eigenclassend Here's the portion of the chapter that talks about the issue and it's pretty clear about the behaviour class_eval and instance_eval both set self for the duration of the block. However, they differ in the way they set up the environment for method definition. class_eval sets things up as if you were in the body of a class definition, so method definitions will define instance methods In contrast, calling instance_eval on a class acts as if you were working inside the singleton class of self. Therefore, any methods you define will become class methods. The only thing I think is worth adding is that you can call instance_eval in any object, not just classes, and the behaviour doesn't change but has different consequences. Some relevant reading: Ruby: instance_eval and class_eval method definitions Chapter 4 of this most excelent series | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4409023', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/525872/']} | jdg_81486 |
stackexchange | llm_judgeable_groundtruth_similarity | 20127 |
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The TRAPPIST-1 system is around an ultra-cool dwarf star. I went looking for more information on that kind of star, and found very little. The Wikipedia article on it lengthened from a minimal stub to one paragraph in the 15 minutes from when I began the search, after the press conference on the discovery of 7 Earth-sized planets in orbit around TRAPPIST-1. What kind of star is this?
Now provide the response and nothing else.
| Breaking the phrase down: Dwarf star - a term I will never understand - is used to describe relatively small, dim stars. Unfortunately, this encompasses most main-sequence stars, which are indeed dwarfs compared to large giants and supergiants. Ultra-cool , as called2voyage already discussed, means that the star has an effective temperatures of less than 2,700 kelvin. This is about half of the Sun's surface temperature. Spectroscopically, it means that these stars are of class M7-L8 , i.e. really, really cool stars that just barely reach the threshold for nuclear fusion. Actually, some L dwarfs will never fuse hydrogen as normal stars do, and will become brown dwarfs. Therefore, "ultra-cool dwarf" doesn't necessarily refer to just main sequence stars. Other characteristics in some cases (from Cruz et al .): Lithium in some L-type objects, indicative of low temperatures, low masses, and young ages in those particular objects. Very low surface gravities in late M-type dwarfs (though likely not as much in an M7 dwarf like TRAPPIST-1). Very low metallicity in certain other L-type objects. TRAPPIST-1 actually appears to be a rather massive ultra-cool dwarf, then, in comparison to some of these other objects. Here's an annotated Hertzsprung-Russell diagram, with a box for ultra-cool dwarfs and a circle for where TRAPPIST-1 is, approximately: Image from Wikipedia courtesy of user Saibo, under the Creative Commons Attribution-Share Alike 2.5 Generic license . | {} | {'log_upvote_score': 6, 'links': ['https://astronomy.stackexchange.com/questions/20127', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/2810/']} | jdg_81487 |
stackexchange | llm_judgeable_groundtruth_similarity | 978078 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How would you prove, without big calculations that involve a calculator, program, or log table, and without calculus, that $2^{50} < 3^{32}$, using elementary number theory only? If it helps you: $2^{50} - 3^{32} = -727120282009217$, $3^{32} \approx 2^{50.718800023077\ldots}$, and $3^{32} \div 2^{50} = 1.6458125430068558$ (thanks to Henry).
Now provide the response and nothing else.
| Compare:$$3^{32}=(3^{2})^{16}\quad\text{vs.}\quad2^{50}=4(2^{3})^{16}$$So that using the binomial theorem to second order:$$\frac{3^{32}}{2^{50}}=\frac{(9/8)^{16}}{4}=\frac{(1+1/8)^{16}}{4}>\frac{1+16/8+120/64}{4}>1$$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/978078', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/154508/']} | jdg_81488 |
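A quick numerical sanity check of the argument above (not from the answer itself; it uses only Python's standard library) verifies both the exact inequality and the binomial lower bound:

```python
from fractions import Fraction

# Exact claim from the question/answer.
assert 2**50 < 3**32

# The answer's chain: 3^32 / 2^50 = (1 + 1/8)^16 / 4 > (1 + 16/8 + 120/64) / 4 > 1.
ratio = Fraction(3**32, 2**50)                                        # = (9/8)**16 / 4
truncated = (1 + 16 * Fraction(1, 8) + 120 * Fraction(1, 8)**2) / 4   # first three binomial terms
print(float(ratio), float(truncated))   # 1.6458... and 1.21875
assert ratio > truncated > 1
```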
stackexchange | llm_judgeable_groundtruth_similarity | 32008125 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on a GUI website that can use several languages. The original HTML-files I got to work with were totally static. So if translation was needed I had to parse through alle files, note where some words or terms were, collect them all hand them to the translation department and enter those translations in the new language files. Since those files were totally static it meant having to translate whole sections several times. Not very effictient. So now I am working on some kind of dictionary in Javascript, to just exchange the terms in those websites. Mostly it works this way: var dicEnglish = {term 1: "This is the English text"Ref: "Another English text"}var dicFrench = {term 1: "This is the French text"Ref: "Another French text" } Which contains all the possible content that needs to be changed. Every candidate in the HTML-code gets a class="dicRef" id="l_dicTag_#" as identifier, which I slice down to the dictionary tag and exchange with the following code: var imgSrc = "en";var ActiveDic;var langSel;if(window.name){ langSel=window.name; }else{langSel="English";}function LangChange(){langClass = document.getElementsByClassName("dicRef");var i = langClass.length;var Start, Stop, idSrc, idDic;var navText;switch(langSel){ case "French": langSel="French"; imgSrc = "en"; navText="Anglais"; break; case "English": case "Anglais": default: langSel="English"; imgSrc = "fr"; navText="French"; break; }ActiveDic="dic"+langSel;window.name=langSel;while(i--){ idSrc = langClass[i].id; Start=idSrc.indexOf("_")+1; Stop=idSrc.lastIndexOf("_"); idDic=idSrc.slice(Start,Stop); if(window[ActiveDic][idDic]){ document.getElementById(idSrc).innerHTML=window[ActiveDic][idDic];} else{ document.getElementById(idSrc).innerHTML="N/A"; }}if(document.getElementById("imgSel")){ document.getElementById("imgSel").src="../../img/"+imgSrc+".gif";}if (document.getElementById("l_SelLang1_1")){ document.getElementById("l_SelLang1_1").innerHTML=navText;}} The problem lies in the uniqueness of the id-tag. Since some terms can occur more than once and some are generated the counter is needed. I'd prefer to ommit the counter, but can't find any other identifier to sort out all target terms and change their content. Since I want to be safe for the future I'd prefer a solution that makes it possible to handle a possible third language. Working with the inner HTML would need to tag the same term several times, once for each language. So is there any way to target all terms to be exchanged more efficently and easily, or a better way to do it? I can only work with client-side solutions, so no PHP and so on. Thanks in advance and hopefully this wasn't too long to read.
Now provide the response and nothing else.
| No offense to the other answerers but storing the text in JavaScript or in data attributes is not good for search engines or disabled site visitors and offers no benefits while added unnecessarily complicated code. The best and most simple solution in my opinion is to make use of HTML lang attribute and use JavaScript to show and hide the desired language. This solution also gracefully degrades so if a site visitor has their JavaScript disabled it will still display the content. Here is my solution: HTML <button id="switch-lang">Switch Language</button><h1><span lang="en">Hello</span> <span lang="es">Hola</span></h1><p lang="en">I really enjoy coding.</p><p lang="es">Me gusta mucho la codificación.</p> jQuery $('[lang="es"]').hide();$('#switch-lang').click(function() { $('[lang="es"]').toggle(); $('[lang="en"]').toggle();}); Then I would recommend adding HTML5 Geolocation to determine which language to show initially based on the users location in the world. I would also use Fontawesome language icon to show users they can switch languages in a way that is understandable by anyone: http://fontawesome.io/icon/language/ Here is the working code example at CodePen: https://codepen.io/codepajamas/pen/ZejaQz?editors=1010 Here is an additional example on code pen using a select menu to change between 3 (or more) languages: https://codepen.io/codepajamas/pen/NjGOMV Updated Full Example with Geolocation and Cookies I kept working on this and created an updated example switching between two languages Chinese and English (if you need more than two languages you would have to hide all languages and show only the one selected instead of using toggle the way I am). This code also detects if an existing cookie is already set for the language using jQuery Cookie. It also checks their geolocation if their browser supports it automatically setting the language to Chinese if they are in either Taiwan or China and defaults to English in all other countries. The code below is commented so you can see what each step is doing and hopefully be able to modify it to suit your needs. Here it is: HTML <button id="switch-lang">Switch Language Icon Here</button><h1><span lang="en">Hello</span> <span lang="zh">你好</span></h1><p lang="en">I really enjoy coding.</p><p lang="zh">我真的很喜歡編碼。</p> jQuery Note: this requires linking to not only jQuery but also jQuery Cookie $(function () { ///// Language Switching (2 languages: English and Chinese). ///// // Initially disable language switching button. $('#switch-lang').css({'pointer-events':'none', 'cursor':'default'}).attr('disabled','disabled'); function langButtonListen() { $('#switch-lang').click(function (event) { event.preventDefault(); $('[lang="zh"]').toggle(); $('[lang="en"]').toggle(); // Switch cookie stored language. if ($.cookie('lang') === 'en') { $.cookie('lang', 'zh', { expires: 7 }); } else { $.cookie('lang', 'en', { expires: 7 }); } }); // Enable lang switching button. $('#switch-lang').css({'pointer-events':'auto', 'cursor':'pointer'}).removeAttr('disabled'); } // Check if language cookie already exists. if ($.cookie('lang')) { var lang = $.cookie('lang'); if (lang === 'en') { $('[lang="zh"]').hide(); langButtonListen(); } else { $('[lang="en"]').hide(); langButtonListen(); } } else { // no cookie set, so detect language based on location. 
if ("geolocation" in navigator) { // geolocation is available navigator.geolocation.getCurrentPosition(function (position) { // accepted geolocation so figure out which country var lat = position.coords.latitude, lng = position.coords.longitude; $.getJSON('http://maps.googleapis.com/maps/api/geocode/json?latlng='+lat+','+lng+'&sensor=true', null, function (response) { var country = response.results[response.results.length-1].formatted_address; if (country === 'Taiwan' || country === 'China') { $('[lang="en"]').hide(); $.cookie('lang', 'zh', { expires: 7 }); langButtonListen(); } else { $('[lang="zh"]').hide(); $.cookie('lang', 'en', { expires: 7 }); langButtonListen(); } }).fail(function (err) { console.log('error: '+err); $('[lang="zh"]').hide(); $.cookie('lang', 'en', { expires: 7 }); langButtonListen(); }); }, function (error) { if (error.code == error.PERMISSION_DENIED) { // denied geolocation $('[lang="zh"]').hide(); $.cookie('lang', 'en', { expires: 7 }); langButtonListen(); } else { console.log('Unknown error. Defaulting to English!'); $('[lang="zh"]').hide(); $.cookie('lang', 'en', { expires: 7 }); langButtonListen(); } }); } else { // geolocation IS NOT available $('[lang="zh"]').hide(); $.cookie('lang', 'en', { expires: 7 }); langButtonListen()); } }}); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32008125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5226518/']} | jdg_81489 |
stackexchange | llm_judgeable_groundtruth_similarity | 824516 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking into the notion of vault running under swarm (1.12.x). A single container would be started with: docker run -d --cap-add IPC_LOCK -p 8200:8200 -p 8215:8125 --name vault --volume /vagrant/vault:/vagrant/vault vault server -config=/path/to/vault.hcl but when I want to run this in swarm as a service, there appears to be no way to specify the IPC_LOCK capability, in order to lock down encrypted swapping for the vault service in this case. How can I set --cap-add flags when starting a swarm mode service with the docker service create command?
Now provide the response and nothing else.
| As of 20.10, this is available from docker service create with --cap-add : $ docker service create --help... --cap-add list Add Linux capabilities --cap-drop list Drop Linux capabilities Or in a compose file used with docker stack deploy with the same syntax from the version 2 file: version: "3.9"services: app: image: your-image:tag cap_add: - CAP_NAME1 - CAP_NAME2 cap_drop: - CAP_NAME3 - CAP_NAME4 [ Original answer from before 20.10 ] It's currently not supported, but Docker is working on a solution. The logic behind not including the --cap-add option blindly is in a large cluster, there could be security concerns of a manager submitting containers with added privileges to a worker. The worker may trust running secure containers that can't access the host, but not want to allow remote root access to the host via a privileged container. Discussion on this is over on github at: https://github.com/docker/docker/pull/26849#issuecomment-252704844 https://github.com/docker/swarmkit/issues/1030 https://github.com/docker/swarmkit/pull/1722 https://github.com/moby/moby/issues/25885#issuecomment-557790402 and https://github.com/docker/cli/pull/2199 | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/824516', 'https://serverfault.com', 'https://serverfault.com/users/390412/']} | jdg_81490 |
stackexchange | llm_judgeable_groundtruth_similarity | 517167 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For all sets $A$ and $B$, $$A \cap(A \cup B) = A.$$ I get that this is true informally, but what would be the notation to formally prove this?
Now provide the response and nothing else.
| Hint: Frequently, the easiest method for showing some set $X$ and some set $Y$ are equal is this: first, show that any element $x\in X$ also satisfies $x\in Y$ -- which proves that $X\subseteq Y$. Then, show that any element $y\in Y$ also satisfies $y\in X$, proving that $Y\subseteq X$. If $X\subseteq Y$, and $Y\subseteq X$, then necessarily $X=Y$. So, start off with an element $x$ which is in $A\cap(A\cup B)$, and show that $x\in A$. Then, show that any element $y\in A$ also satisfies $y\in A\cap(A\cup B)$. It might help you to remember the following two facts: Every element of $A\cap X$ is both an element of $A$ and an element of $X$ -- and so in particular is definitely a member of $A$. If $x\in X$ and $x\in Y$, then $x\in X\cap Y$. Edit: Here's a start. Suppose that $x\in A\cap (A\cup B)$. Then by definition of the intersection, we must have $x\in A$ and $x\in A\cup B$. In particular, $x\in A$. Now, see if you can show that any element $x\in A$ must also be in $A\cap(A\cup B)$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/517167', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/97350/']} | jdg_81491 |
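For illustration, here is one way to finish the step the answer leaves as an exercise (a worked completion, not part of the answer itself): if $x\in A$, then certainly $x\in A\cup B$, so $x\in A$ and $x\in A\cup B$, i.e. $x\in A\cap(A\cup B)$. Together with the first inclusion shown above, $$A\cap(A\cup B)\subseteq A \quad\text{and}\quad A\subseteq A\cap(A\cup B),$$ and hence $A\cap(A\cup B)=A$.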
stackexchange | llm_judgeable_groundtruth_similarity | 70584317 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
At the moment I'm forced to push to a new branch on every change I make. And that branch must be deleted since the error appears again on every newly created branch. git push origin main gives the following output error: dst refspec refs/heads/main matches more than oneerror: failed to push some refs to 'https://github.com/CarloCattano/fastesTube' My git tag outputs: refs/heads/devrefs/heads/dev2refs/heads/dev3refs/heads/mainv1.1win64 no matter if I manually delete them with -d .Even tried to migrate the project to a new repo , and the problem persists after a few pushes. git remote -v origin https://github.com/CarloCattano/fastesTube (fetch)origin https://github.com/CarloCattano/fastesTube (push) git ls-remote ac4cac50b79ff682ddd01f6c0c3913d0bd765e64 HEAD77273d612953f96e72ce305ab94f0a535a4c332d refs/heads/dev33c344e7af2feb33db2d05f08866cad5fe624b57c refs/heads/developac4cac50b79ff682ddd01f6c0c3913d0bd765e64 refs/heads/mainfde3bb1ed7c770a5b8eb94a368bb34f25566f00e refs/pull/1/headffe33059c3fcc12899953bc588772072d9a18bf0 refs/pull/2/head77273d612953f96e72ce305ab94f0a535a4c332d refs/pull/3/head3c344e7af2feb33db2d05f08866cad5fe624b57c refs/pull/4/headb9d1c3f8b83ea1ac868143ec647776d03f9bacc7 refs/tags/refs/heads/devffe33059c3fcc12899953bc588772072d9a18bf0 refs/tags/refs/heads/dev277273d612953f96e72ce305ab94f0a535a4c332d refs/tags/refs/heads/dev34098ea71b5a0873db6be41e859e5b8242d81c708 refs/tags/refs/heads/maina42341ba40635bd5063a0bf988eab6c00b0e62d1 refs/tags/v1.137220afec1d13dcac99c61ef766ac800fc6438f5 refs/tags/win64 Force pushing also doesn't seem to work. It might be I wrongly configured the .yml file and creates tags for every release.
Now provide the response and nothing else.
| The problem here is that you have created tags named refs/heads/... (fill in the three dots). A ref or reference , in Git, is a string that normally starts with refs/ and goes on to have a namespace qualifier: refs/heads/* are branch names: the part that matches the * is the branch name; refs/tags/* are tag names: the part that matches the * is the tag name; refs/remotes/* are remote-tracking names; and so on. Typically you can give Git a shortened name, such as main or v1.1 , and it can figure out whether this name is a branch name or a tag name by looking at the existing names: There's normally a refs/heads/main or refs/heads/master , but no refs/tags/main or refs/tags/master , so main or master is therefore a branch name. There may be a refs/tags/v1.1 , but typically there isn't a refs/heads/v1.1 if so, so v1.1 is therefore a tag name. When using this kind of scheme, you provide an ambiguous name like main or v1.1 and Git figures out if it's a branch or tag name on its own, or you provide a full name like refs/heads/main and Git immediately knows that it's a branch name. The short name, without the refs/heads/ or refs/tags/ qualifier in front, is an unqualified name . The full name is a qualified name . The git push command is more complicated than most other Git commands (except that git fetch is similarly complicated) because it has to deal with two Git repositories, rather than just one. So instead of a ref , git push can take a refspec , which is a pair of refs separated by a colon : . If you use a full refspec: git push origin refs/heads/main:refs/tags/v1.2 then the parts on the left and right are each refs and each one is either unqualified or qualified. Unqualified names can be resolved by looking at the local Git repository's names (for those that are local) or remote's name (for those that are remote). If, however, you use a partial refspec: git push origin main then Git isn't sure whether you meant "main as found locally" or "main as found on the remote". So Git will look in both places to make its best guesses. In this case, however, the destination Git—the one whose references are shown in your git ls-remote output—has both refs/heads/main and refs/tags/refs/heads/main . So, your Git has looked up main locally and found refs/heads/main ; it now looks that up in the set of refs in the other Git repository, and can't translate it to just one ref since both refs/heads/main —a fully-qualified branch name— and refs/tags/refs/heads/main —a fully-qualified tag name—match the potentially-unqualified refs/heads/main . The result is that you get this error message. Whether you'd get this error for git push origin refs/heads/main:refs/heads/main , I don't know. The best fix, though, is to correct the set of names on the destination , so that there are no refs/tags/refs/* names any more. That is, these four names: b9d1c3f8b83ea1ac868143ec647776d03f9bacc7 refs/tags/refs/heads/devffe33059c3fcc12899953bc588772072d9a18bf0 refs/tags/refs/heads/dev277273d612953f96e72ce305ab94f0a535a4c332d refs/tags/refs/heads/dev34098ea71b5a0873db6be41e859e5b8242d81c708 refs/tags/refs/heads/main should be adjusted (or deleted entirely) so that no tag name in the Git repository over on GitHub starts with refs/ . Ideally, these tag names should not match any branch names either: tag names should generally conform to the v* form, or to some other "clearly a tag" pattern so that nobody will accidentally think they're branch names, or vice versa. Once you fix that, simple git push commands will work again. 
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/70584317', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10880644/']} | jdg_81492 |
stackexchange | llm_judgeable_groundtruth_similarity | 32732677 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to create multiple directories(test1,test2) with 2 sub directories (/home/test1/bin and /home/test2/conf) similarly for test2. My playbook looks like this : --- hosts: localhost tasks: - name: Create directory file: path=/home/{{item}}/bin state=directory file: path=/home/{{item}}/conf state=directory with_items: - test1 - test2 However i get the following error: An exception occurred during task execution. The full traceback is:Traceback (most recent call last): File "/root/ansible/lib/ansible/executor/process/worker.py", line 122, in run executor_result = TaskExecutor(host, task, job_vars, new_play_context, self._new_stdin, self._loader, shared_loader_obj).run() File "/root/ansible/lib/ansible/executor/task_executor.py", line 89, in run items = self._get_loop_items() File "/root/ansible/lib/ansible/executor/task_executor.py", line 179, in _get_loop_items items = self._shared_loader_obj.lookup_loader.get(self._task.loop, loader=self._loader, templar=templar).run(terms=loop_terms, variables=vars_copy) File "/root/ansible/lib/ansible/plugins/lookup/items.py", line 26, in run return self._flatten(terms)TypeError: _flatten() takes exactly 2 arguments (1 given)fatal: [localhost]: FAILED! => {"failed": true, "stdout": ""} Whats the issue here? I'm using the latest git checkout. Is there a better way to approach this?
Now provide the response and nothing else.
| I think those errors are raised because you used the file module 2 times in 1 task. You should only use 1 module per task. In your case, you should use a nested loop to create multiple directories and subdirectories. Example:
---
- hosts: localhost
  tasks:
    - name: test
      file: path=/tmp/{{item.0}}/{{item.1}} state=directory
      with_nested:
        - ['test1', 'test2']
        - ['bin', 'conf']
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32732677', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4124940/']} | jdg_81493 |
stackexchange | llm_judgeable_groundtruth_similarity | 278502 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to access the array index variable while looping through an array in my bash shell script.
myscript.sh
#!/bin/bash
AR=('foo' 'bar' 'baz' 'bat')
for i in ${AR[*]}; do
    echo $i
done
The result of the above script is:
foo
bar
baz
bat
The result I seek is:
0
1
2
3
How do I alter my script to achieve this?
Now provide the response and nothing else.
| You can do this using List of array keys . From the bash man page: ${!name[@]} ${!name[*]} List of array keys . If name is an array variable, expands to the list of array indices (keys) assigned in name. If name is not an array, expands to 0 if name is set and null otherwise. When @ is used and the expansion appears within double quotes, each key expands to a separate word. For your example: #!/bin/bashAR=('foo' 'bar' 'baz' 'bat')for i in "${!AR[@]}"; do printf '${AR[%s]}=%s\n' "$i" "${AR[i]}"done This results in: ${AR[0]}=foo${AR[1]}=bar${AR[2]}=baz${AR[3]}=bat Note that this also work for non-successive indexes: #!/bin/bashAR=([3]='foo' [5]='bar' [25]='baz' [7]='bat')for i in "${!AR[@]}"; do printf '${AR[%s]}=%s\n' "$i" "${AR[i]}"done This results in: ${AR[3]}=foo${AR[5]}=bar${AR[7]}=bat${AR[25]}=baz | {} | {'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/278502', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/167174/']} | jdg_81494 |
stackexchange | llm_judgeable_groundtruth_similarity | 1199584 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm developing a simple 2D game in Java, everything works fine. To find the correct FPS refresh/redraw/update, I used currentTimeMillis to find the difference. The problem is that currentTimeMillis sometimes returns negative values, and the Thread.sleep will throw exception (java.lang.IllegalArgumentException: timeout value is negative) What I did was to put a while in my game and while currentTimeMillis <= -1 check again until its over, then sleep. Code sample: private void gameLoop(){ // Find FPS long FPS = 40; long beginTime = System.currentTimeMillis(); while(beginTime < -1){ beginTime = System.currentTimeMillis(); } while(!done){ // Sleep try{ beginTime += FPS; long currTime = System.currentTimeMillis(); while(currTime < -1){ currTime = System.currentTimeMillis(); } difference = (beginTime - currTime); // Should be (currTime - beginTime) Thread.sleep(difference); }catch (Exception e){ e.printStackTrace(); } // RENDER GAME renderGame(); } JOptionPane.showMessageDialog(null, "Game Over\nYou Died."); System.exit(0);}// end gameLoop() When the game starts, this works fine, but sometimes I still get the Exception. Is there a better way? I still think it´s strange that it´s returning a negative value.
Now provide the response and nothing else.
| Isn't the real problem that beginTime - currTime is negative?
difference = (beginTime - currTime);
Thread.sleep(difference);
I note that you add FPS (40) to beginTime. But if currentTime is greater than beginTime + FPS, you're going to have problems. If you're trying to schedule something at regular intervals (which I think you are), check out Timer, which will allow you to do this more simply/reliably. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1199584', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/91360/']} | jdg_81495 |
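The fixed-interval idea recommended above (compute the next tick time and sleep only for a non-negative remainder) can be sketched as follows; this sketch is not from the answer, it is written in Python only for brevity, and FRAME_PERIOD and render_game are placeholder names. In Java you would reach for java.util.Timer as the answer suggests.

```python
import time

FRAME_PERIOD = 0.040      # 40 ms per frame; placeholder value
def render_game():        # placeholder for the real rendering work
    pass

next_tick = time.monotonic()
for _ in range(100):      # game loop, bounded here for the example
    render_game()
    next_tick += FRAME_PERIOD
    delay = next_tick - time.monotonic()
    if delay > 0:         # never pass a negative duration to sleep
        time.sleep(delay)
```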
stackexchange | llm_judgeable_groundtruth_similarity | 47702810 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently had to compile a data frame of student scores (one row per student, id column and several integer-valued columns, one per score component). I had to combine a "master" data frame and several "correction" data frames (containing mostly NA and some updates to the master), so that the result contains the maximum values from the master, and all corrections. I succeeded by copy-pasting a sequence of mutate() calls, which works (see example below), but is not elegant in my opinion. What I would have wanted to do, was instead of copying and pasting, to use something along the lines of map2 and two lists of columns to compare the columns pair-wise. Something like (which obviously does not work as such): list_of_cols1 <- list(col1.x, col2.x, col3.x)list_of_cols2 <- list(col1.y, col2.y, col3.ymap2(list_of_cols1, list_of_cols2, ~ column = pmax(.x, .y, na.rm=T)) I can't seem to be able to figure out to do it. My question is: how to specify such lists of columns and mutate them in one map2() call in dplyr pipe, or is it even possible – have I gotten it all wrong? Minimum working example library(tidyverse)master <- tibble( id=c(1,2,3), col1=c(1,1,1), col2=c(2,2,2), col3=c(3,3,3))correction1 <- tibble( id=seq(1,3), col1=c(NA, NA, 2 ), col2=c( 1, NA, 3 ), col3=c(NA, NA, NA))result <- reduce( # Ultimately there would several correction data frames list(master, correction1), function(x,y) { x <- x %>% left_join( y, by = c("id") ) %>% # Wish I knew how to do this mutate call with map2 mutate( col1 = pmax(col1.x, col1.y, na.rm=T), col2 = pmax(col2.x, col2.y, na.rm=T), col3 = pmax(col3.x, col3.y, na.rm=T) ) %>% select(id, col1:col3) }) The result is > result# A tibble: 3 x 4 id col1 col2 col3 <int> <dbl> <dbl> <dbl>1 1 1 2 32 2 1 2 33 3 2 3 3
Now provide the response and nothing else.
| A basic C++ rule says that destructors work their way up from the derived class to the base class. When a Cat is destroyed, then the Cat part is destroyed first and the Animal part is destroyed after. delete animal; is undefined behaviour because in order to properly follow C++ destruction rules, one must know, at runtime, which derived class part should be destroyed before the Animal base part. A virtual destructor does exactly that - it enables a dynamic dispatch mechanism that makes sure destruction works as designed. You have no virtual destructor, however, so delete animal just doesn't make sense. There is no way to call the correct derived-class destructor, and destroying only the Animal part wouldn't exactly be meaningful behaviour, either. Therefore, the C++ language makes no assumptions about what will happen in such a situation. Your compiler is nice enough to warn you about this. With delete cat , the situation is slightly different. The static type of the cat pointer is Cat* , not Animal* , so it is clear even without any dynamic dispatch mechanism which derived-class destructor to call first. The compiler still warns you about this, but it does so with a different wording ("might cause" vs. "will cause"). I believe the reason is that Cat might itself be the base class for more derived classes, seeing as it is already part of a class hierarchy with virtual functions. It apparently doesn't bother to execute a more complete code analysis to find out that delete cat is really harmless. In order to fix this, make the Animal destructor virtual . While you're at it, replace your raw pointers with std::unique_ptr . You still have to follow the virtual destructor rule for classes like yours, but you no longer have to perform a manual delete . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47702810', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_81496 |
stackexchange | llm_judgeable_groundtruth_similarity | 12869741 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have absolutely no idea as to why this code won't return an array... I feel like there is a problem with my compiler:
public class trial1 {
    public static void main(String[] args) {
        numbers();
    }
    public static int[] numbers() {
        int[] A = {1,2,3};
        return A;
    }
}
The code returns nothing at all. It's driving me crazy!
Now provide the response and nothing else.
| It is returning the array, but all returning something (including an Array) does is just what it sounds like: returns the value. In your case, you are getting the value of numbers() , which happens to be an array (it could be anything and you would still have this issue), and just letting it sit there. When a function returns anything, it is essentially replacing the line in which it is called (in your case: numbers(); ) with the return value. So, what your main method is really executing is essentially the following: public static void main(String[] args) { {1,2,3};} Which, of course, will appear to do nothing. If you wanted to do something with the return value, you could do something like this: public static void main(String[] args){ int[] result = numbers(); for (int i=0; i<result.length; i++) { System.out.print(result[i]+" "); }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12869741', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1710245/']} | jdg_81497 |
stackexchange | llm_judgeable_groundtruth_similarity | 25874224 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on a University Management System in which I am using a WCF service, and in the service I am using DataTables and DataSets for getting data from the database; the database is SQL Server. My questions are:
Is using DataTables and DataSets "Good Practice" or "Bad Practice"?
If it is bad, what is the alternative to DataTable/DataSet?
If it is bad, what are the main reasons?
Now provide the response and nothing else.
| Returning data sets from web services is not typically considered a “good practice”. The issues have been documented thoroughly in the following links: http://msdn.microsoft.com/en-us/magazine/cc163751.aspx https://web.archive.org/web/20210125131938/https://www.4guysfromrolla.com/articles/051805-1.aspx http://msdn.microsoft.com/en-us/magazine/cc188755.aspx In summary, the biggest issues with returning DataSet objects from web services seem to involve serialization performance, non-.net interoperability. In addition, the generic, polymorphic nature of the DataSet generally high the data structure until runtime, as such, the WSDL definition does not provide a complete description of the method signature. As with any design decision, however, you need to weigh the costs vs the benefits and determine the best fit given your specific goals and constraints. In terms of alternatives, you could consider using a generic collection (e.g. List<yourClassHere> ) or maybe even consider some architecture revisions to permit the use of ODATA. The following links provide some good background reference for returning entities via web services. http://msdn.microsoft.com/en-us/library/orm-9780596520281-01-14.aspx http://www.codeproject.com/Articles/127395/Implementing-a-WCF-Service-with-Entity-Framework http://msdn.microsoft.com/en-us/data/hh237663.aspx | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25874224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1776785/']} | jdg_81498 |
stackexchange | llm_judgeable_groundtruth_similarity | 2208113 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $x$ and $y \in \mathbb{R}^{n}$ be non-zero column vectors, and form the matrix $A=xy^{T}$, where $y^{T}$ is the transpose of $y$. Then what is the rank of $A$? I am getting $1$, but need confirmation.
Now provide the response and nothing else.
| Hint: $\;x,y,z \ge \frac{1}{4}$ for the square roots to be defined. Assume WLOG that $x \ge y \ge z\,$, then $\sqrt{4z-1}=x+y \ge z+z = 2z \implies 4z-1 \ge 4z^2\iff (2z-1)^2 \le 0 \,$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2208113', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/382333/']} | jdg_81499 |
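For illustration, the hint can be finished as follows (this completion is not part of the answer, and it assumes the symmetric system the hint is drawn from, in which each of $\sqrt{4x-1},\sqrt{4y-1},\sqrt{4z-1}$ equals the sum of the other two variables): from $(2z-1)^2\le 0$ we must have $z=\tfrac12$. Then $x+y=\sqrt{4z-1}=1$, and since $x\ge y\ge z=\tfrac12$, both $x$ and $y$ are at least $\tfrac12$ and sum to $1$, forcing $x=y=\tfrac12$. Hence $x=y=z=\tfrac12$ is the only solution.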
stackexchange | llm_judgeable_groundtruth_similarity | 90329 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My client has a wildcard certificate *.theirdomain.com that they are using for half a dozen production web sites. For each of these sites there are duplicates in several environments: dev, QA, pre-prod, etc. Up until now, environments other than prod did not have an SSL certificate at all, as there was no HTTPS-specific functionality. So the sites ran only over HTTP (within the corporate network) for non-prod, and HTTPS only for prod. Now we are adding some functionality that I'd like to test over both HTTP and HTTPS in a single environment. I would usually use self-signed certs for that, but this client already has a wildcard certificate available. It would be easy to just get this cert installed on the other environments' web servers. Are there any negative security implications of that?
Now provide the response and nothing else.
| I would assume that the test/dev environment is more 'open' than Prod, in terms of who can access it with admin privileges, etc. If this is the case, and e.g. the dev team and/or external resources have access to the additional environment then that potentially increases the risk that the private key is compromized. If this is a concern, I would splurge out and buy $10 SSL certs for each dev/test/etc environment. That way the wildcard prod key stays on prod, and won't have to be revoked+replaced if the key for any reason would fall in the wrong hands... | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/90329', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/21051/']} | jdg_81500 |
stackexchange | llm_judgeable_groundtruth_similarity | 26441303 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I get the name of the database from a connection object?
try {
    this.ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/amger");
} catch (NamingException ne) {}
Connection conObj = ds.getConnection();
How do I get that database name from con?
Now provide the response and nothing else.
| Probably the most straightforward way to get the database name from the JDBC Connection object itself is via the getCatalog() method: Connection#getCatalog() However, as Konstantin pointed out in his comment below, that value will not change if the current MySQL database is changed by issuing a USE dbname statement. getCatalog() might still be useful in an application that does not change databases, or does things "The JDBC Way" by using setCatalog() to change the current database, but for MySQL, using SELECT DATABASE() appears to be safer overall. Note also that this potential discrepancy between getCatalog() and the actual current database depends on the behaviour of the particular JDBC driver. Out of curiosity I tried something similar with the Microsoft JDBC Driver 4.0 for SQL Server and .getCatalog() was indeed aware of the change to the current database immediately after running a USE dbname statement. That is, the code String connectionUrl = "jdbc:sqlserver://localhost:52865;" + "databaseName=myDb;" + "integratedSecurity=true";try (Connection con = DriverManager.getConnection(connectionUrl)) { System.out.println(String.format( "getCatalog() returns: %s", con.getCatalog())); try (Statement s = con.createStatement()) { System.out.println(" Executing: USE master"); s.execute("USE master"); } System.out.println(String.format( "getCatalog() returns: %s", con.getCatalog()));} catch (Exception e) { e.printStackTrace(System.out);} produced the following results: getCatalog() returns: myDb Executing: USE mastergetCatalog() returns: master | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26441303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3646845/']} | jdg_81501 |
stackexchange | llm_judgeable_groundtruth_similarity | 25254932 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can you pair a Bluetooth LE device in an iOS app, i.e. not via Settings? I'm trying to make it easier on the user to pair with a specific device. If so, how?
Now provide the response and nothing else.
| You don't pair Bluetooth LE devices through system settings. Generally you don't pair with Bluetooth Low Energy devices at all. The only time that pairing is required is when the device has an attribute that is marked as encryption required - attempting to read this attribute will trigger a pairing process, but before you can read you need to discover and connect to the device. Discovery and connection is handled by the Core-Bluetooth framework. The Core Bluetooth Programming Guide goes through the steps required to use a CBCentralManager to discover and connect to a BLE peripheral. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25254932', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/295392/']} | jdg_81502 |
stackexchange | llm_judgeable_groundtruth_similarity | 318397 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While reading my text I came across two statements, as follows:
The speed of an alpha particle is a characteristic speed, as it depends on the nature of the nucleus.
Beta particles come out of the same nucleus with different speeds, so their speed is not a characteristic speed.
Now I am unable to understand why the alpha particle's speed depends on the nucleus, and why the beta particle's does not. Are these just observations, or is there something theoretical to say about these observations?
Now provide the response and nothing else.
| When a nucleus decays the reaction is characterised by the release of a fixed amount of energy called the Q-value of the reaction. This diagram shows what happens when AM-241 decays to Np-237 with the emission of an alpha particle. The energy levels in both nuclei are well defined and so the energies of the alpha particles are well defined. $\rm energy_{\rm decay} =energy_{excited \,daughter} + energy_{alpha}$ So in the example shown the energies of the emitted alpha particles will be 5.48, 5.54, 5.58, 5.61 and 5.64 Mev. The excited daughter nucleus then gets rid of the surplus energy with the emission of a gamma. However for beta decay the quantum jumps as characterised by those shown below for alpha particles are accompanied by the emission of two particles which together carry away a fixed amount of kinetic energy. The difference is that the sum of the kinetic energy (beta) and the kinetic energy (anti neutrino) is fixed but the energies of the emitted particles is not. This means that you can have a range of beta energies and a corresponding range of anti neutrino energies. In this example there would be a range of beta energies from zero (approx) to 0.31 MeV for one decay mode and from zero to 1.48 Mev for the other decay mode. The rest of the energy being taken away by the antineutrino. $\rm energy_{\rm decay} =energy_{excited \,daughter} +energy_{\rm beta}+ energy_{\rm antineutrino}$ Again the excited daughter nucleus gets rid of the surplus energy via gamma emission So it is the maximum energy of the beta particles which is the characteristic of the decay. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/318397', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/134947/']} | jdg_81503 |
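A compact way to see the contrast, using only the energy-balance relations written in the answer ($Q$ and $E^{*}$ are shorthand introduced here, and nuclear recoil is neglected): if a decay releases total energy $Q$ and feeds a daughter level with excitation energy $E^{*}$, then $$\text{alpha decay:}\quad E_{\alpha}=Q-E^{*}\ \text{(one discrete value per level)},$$ $$\text{beta decay:}\quad E_{\beta}+E_{\bar{\nu}}=Q-E^{*}\ \Longrightarrow\ 0\le E_{\beta}\le Q-E^{*},$$ so the alpha spectrum is a set of sharp lines characteristic of the nucleus, while the beta spectrum is continuous and only its endpoint $Q-E^{*}$ is characteristic.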
stackexchange | llm_judgeable_groundtruth_similarity | 478534 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
If I cat my resolv.conf, I see this message: ## Mac OS X Notice## This file is not used by the host name and address resolution# or the DNS query routing mechanisms used by most processes on# this Mac OS X system.## This file is automatically generated.# I am trying to add a DNS entry. I edited my hosts file and flushed the dns cache, but the name is not resolving if I use host servername . I thought perhaps that host was not configured to look at the hosts file. How can I get my new entry to resolve, and what is OSX using if not resolv.conf ?
Now provide the response and nothing else.
| DNS resolvers can be added in OS X via the networksetup command: sudo networksetup -setdnsservers Wi-Fi 8.8.8.8 8.8.4.4 Insert name of network connection as appropriate. These resolvers will appear in resolv.conf as it is automatically generated, but direct edits to resolv.conf will not result in those resolvers being used. I assume the resolvers are stored in a plist somewhere; I will search and report back. Edited to add: It looks like the XML plist file storing the DNS servers is /Library/Preferences/SystemConfiguration/preferences.plist . The host file in /private/etc/hosts should work to force resolution for particular names (/etc is a symlink to /private/etc). Can you verify your syntax and that you are editing the correct file? | {} | {'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/478534', 'https://serverfault.com', 'https://serverfault.com/users/62360/']} | jdg_81504 |
stackexchange | llm_judgeable_groundtruth_similarity | 40907 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to derive Ohm's law (perhaps in some appropriate limit) from Maxwell's Equations?
Now provide the response and nothing else.
| Ohm's law $\vec\jmath=\sigma\vec{E}$ can be derived rigorously in the limit of small electric fields using linear response theory. This leads to Kubo's formula for the electric conductivity, which relates $\sigma$ to the zero frequency limit of the retarded current-current correlation function. $$\sigma^{\alpha\beta}(q)=\lim_{\omega\to0}\frac{1}{-i\omega}\left\{\frac{ne^2}{m}\delta^{\alpha\beta} - i\langle[j^\alpha(\omega,q),j^\beta(-\omega,-q)]\rangle \right\}$$ (This derivation, of course, involves more than just Maxwell's equation. This is properly derived in the context of non-equilibrium field theory.) The Drude model is a model for the spectral function of the current-current correlation function in terms of a single ``collision time''. This model can be derived within kinetic theory, which is applicable when interactions are weak and the correlation function can be computed in terms of quasi-particles. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/40907', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/12957/']} | jdg_81505 |
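For context (the answer mentions the Drude model but does not quote its result), the single-relaxation-time expression for the DC conductivity that comes out of the Drude/kinetic-theory treatment is usually written as $$\sigma=\frac{ne^{2}\tau}{m},$$ where $\tau$ is the collision (relaxation) time; inserting this into $\vec\jmath=\sigma\vec{E}$ gives the familiar finite-conductivity form of Ohm's law in the $\omega\to 0$ limit.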
stackexchange | llm_judgeable_groundtruth_similarity | 995623 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am struggling understanding this finding. Can somebody explain intuitively why randomly drawn high-dimensional vectors will tend to be mutually orthogonal? I realize that intuition in high dimensions might be too much to ask for, still, an explanation without having to integrate over several pages of symbols would be preferred.
Now provide the response and nothing else.
| A random uniform unit vector is $X/\|X\|$ where $X$ is standard normal, thus the scalar product of two independent unit vectors $U$ and $V$ is $\langle U,V\rangle=\langle X,Y\rangle/(\|X\|\cdot\|Y\|)$ where $X$ and $Y$ are independent and standard normal. When $n\to\infty$, by the law of large numbers, $\|X\|/\sqrt{n}\to1$ almost surely and $\|Y\|/\sqrt{n}\to1$ almost surely, and by the central limit theorem, $\langle X,Y\rangle/\sqrt{n}$ converges in distribution to a standard one-dimensional normal random variable $Z$. Thus, $\sqrt{n}\cdot\langle U,V\rangle\to Z$ in distribution, in particular, for every $\varepsilon\gt0$, $P(|\langle U,V\rangle|\geqslant\varepsilon)\to0$. In this sense, when $n\to\infty$, the probability that $U$ and $V$ are nearly orthogonal goes to $1$. Likewise, $k$ independent uniform unit vectors are nearly orthogonal with very high probability when $n\to\infty$, for every fixed $k$. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/995623', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/188084/']} | jdg_81506 |
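The concentration statement $\sqrt{n}\,\langle U,V\rangle\to Z$ is easy to see in a quick simulation (an illustrative check, not from the answer; it assumes NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (3, 30, 300, 3000):
    X = rng.standard_normal((10_000, n))
    Y = rng.standard_normal((10_000, n))
    U = X / np.linalg.norm(X, axis=1, keepdims=True)   # uniform unit vectors
    V = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    dots = np.einsum('ij,ij->i', U, V)                  # <U, V> for each pair
    print(n, dots.std(), (np.sqrt(n) * dots).std())
    # dots.std() shrinks roughly like 1/sqrt(n), while sqrt(n)*dots stays near 1,
    # so the angle between U and V concentrates at 90 degrees as n grows.
```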
stackexchange | llm_judgeable_groundtruth_similarity | 399722 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If we instead define a metric as $d:X\times X \rightarrow [0,\infty]$, do we lose any nice properties of metric spaces? The reason I ask is that I saw this theorem: Given a finite measure space $(X,\mathcal{M},\mu )$, define equivalence relation $A \sim B$ if $\mu(A \mathbin{\Delta} B)=0$ for $A,B\in \mathcal{M}$. Then $d: \mathcal{M} /{\sim} \rightarrow [0,\infty) $ defined by $d(A,B)=\mu(A \mathbin{\Delta} B)$ is a metric. But the same argument would go through if $X$ wasn't a finite measure space, the only place where it seems to fail is because metrics are defined not to take the value $\infty$.
Now provide the response and nothing else.
| Nothing bad happens if you allow a metric to assume the value $\infty $. In fact, only good things happen. Nothing bad happens since you can still talk about open balls, the induced topology, uniformly continuous functions, continuous functions, Lipschitz functions etc. and the theory looks pretty much the same as it would if the metric is not allowed to assume the value $\infty $. Moreover, any metric space where infinite distance is assumed naturally breaks down as a disjoint union of metric spaces where infinity is not assumed. The construction is very easy: define an equivalence relation on the points of the space by $x\sim y$ if $d(x,y)<\infty $. The equivalence classes are called galaxies , the space breaks down as the disjoint union of galaxies and each galaxy is metric space where all distances are finite. That good things happen if you allow infinite distance are related to certain constructions that become nicer. For instance, if you have two metric spaces $M_1,M2$ and you wish to define their disjoint union as a metric space, then the most natural thing to do is require that the distance between a point in $M_1$ and a point in $M_2$ is infinite. If you disallow infinite distances, then you have to sweat unnecessarily to make this construction work. But with infinite distances allowed, the disjoint union of metric spaces becomes the trivial notion it should be. Also, reconciling different metric spaces on the same set becomes more transparent. For instance, suppose that $d_i$ is a family of distance functions on a set $X$. Even if it does not attain infinity, their point-wise supremum might become infinite, and thus fail to be a metric space. But if you allow infinity as a possible distance, then the supremum of any family of distance functions is again a distance function. The reason that metric spaces with infinity allowed have better closure properties under familiar operations is that the set $[0,\infty ]$ has better closure properties than $[0,\infty )$ does. | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/399722', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/19192/']} | jdg_81507 |
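A concrete example of the galaxy decomposition described above (added illustration): take $X$ to be the set of all real sequences with $$d(x,y)=\sup_{n}|x_{n}-y_{n}|\in[0,\infty].$$ This satisfies all the metric axioms once $\infty$ is allowed as a value; $x\sim y$ exactly when $x-y$ is a bounded sequence, the galaxy of the zero sequence is precisely $\ell^{\infty}$, and each galaxy is an ordinary finite-valued metric space under the same $d$.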
stackexchange | llm_judgeable_groundtruth_similarity | 714458 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been investigating the use of GUIDs as primary keys in databases. So far, the pros seem to outweigh the cons. However, I see one point where GUIDs may not be what I want. In my application, users should be able to identify objects based on a user-friendly ID. So, for example, if they want to get a specific product without typing in the full name, they can use the product's ID. GUIDs aren't easy to remember for something like that. The solution I've been thinking about is to use both a GUID and an auto-incrementing integer. The GUID would be the row's primary key, while the auto-incrementing integer would be an index used by the application's filtering functions. All SQL SELECT, UPDATE, DELETE statements would use the GUID, however. The main reason I want to use GUIDs is to prevent clashes when merging two databases. If Database #1 and Database #2 both have a Product #2, the importer script would have to change the ID and all foreign keys referring to it. With GUIDs, I only have to change the user-friendly ID in the table itself, while foreign keys would use the GUID unique to each imported record and will therefore work without modification. So, my question is: are there any major problems (besides the size of the GUID field and easy page fragmentation) with having an auto-incrementing integer index and a GUID primary key?
Now provide the response and nothing else.
| I always tend to use surrogate primary keys in my database.That is: those primary keys have no actual meaning in the problem domain, and thus, those primary keys are never exposed to users.(If this surrogate primary key is of type GUID or an identity, I don't care; this depends on the requirements). If you say that users should be able to identify objects based on a user-friendly ID, then, I think that this user-friendly ID is a value that belongs to your 'problem domain'.This means, that this ID should indeed be an attribute in your table, but it should not be used as the primary key in your table. This also allows you to easily modify the value of such an user-friendly ID (if that should be necessary), without you having to worry about modifying related foreign keys as well. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/714458', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/31516/']} | jdg_81508 |
stackexchange | llm_judgeable_groundtruth_similarity | 36381710 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to apply a google font to my Material-UI react project, but can't seem to get it to take. I'm using mui 0.14.2. My index.html font load: <link href='https://fonts.googleapis.com/css?family=PT+Sans:400,700' rel='stylesheet' type='text/css'> My component where I apply the theme: import ThemeManager from 'material-ui/lib/styles/theme-manager';import LightRawTheme from 'material-ui/lib/styles/raw-themes/light-raw-theme';const App = React.createClass({ childContextTypes: { muiTheme: React.PropTypes.object, }, getChildContext: function() { return { muiTheme: ThemeManager.modifyRawThemeFontFamily(ThemeManager.getMuiTheme(LightRawTheme), 'PT Sans, sans-serif') } },...etc etc}
Now provide the response and nothing else.
| The other answers don't seem to work for Material-UI v1. Here's what worked for me: import { createMuiTheme } from 'material-ui/styles';import createPalette from 'material-ui/styles/palette';import createTypography from 'material-ui/styles/typography';const theme = createMuiTheme({ typography: createTypography(createPalette(), { fontFamily: '"Comic Sans"', })});class App extends Component { render() { return ( <MuiThemeProvider theme={theme}> Here's another example for overriding the font while using the dark theme: const theme = (() => { const palette = createPalette({ type: 'dark', }); const typography = createTypography(palette, { fontFamily: '"Comic Sans"', }); return createMuiTheme({ palette: palette, typography: typography, });})(); The typography documentation for v1 is here although I had trouble getting the example working: https://material-ui-1dab0.firebaseapp.com/customization/themes#typography | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36381710', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5456001/']} | jdg_81509 |
stackexchange | llm_judgeable_groundtruth_similarity | 35436654 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I converted my VS 2012 projects to VS 2015 by using the automatic conversion tool. When I try to load a resource file (.rc) it fails with this error: fatal error RC1015: cannot open include file afxres.h Any idea?
Now provide the response and nothing else.
| I have seen the same problem with console applications without MFC that were generated with Visual Studio 2010. One solution is to modify the installation of Visual Studio 2015 to include MFC. MFC is not installed by default because of its size. But in my opinion this should only be done if you have applications that use MFC. If you only need MFC for afxres.h, you can replace #include "afxres.h"[...]"#include ""afxres.h""\r\n" with #include "WinResrc.h"[...]"#include ""WinResrc.h""\r\n" You might also need to add the following (you will see that when compiling the resources): #define IDC_STATIC -1[...]"#define IDC_STATIC -1""\r\n" As you can see in the rc file, one of the sections is TEXTINCLUDE. When Visual Studio's resource editor opens the rc file and saves it back to disk, it takes this section and puts the text into the section marked with "Generated from the TEXTINCLUDE [...]". So take care to change both places of at least the TEXTINCLUDE section so that the resource editor can do the rest. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35436654', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4115795/']} | jdg_81510 |
stackexchange | llm_judgeable_groundtruth_similarity | 14780858 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In HTML, tags and entities aren't parsed within <script> tags, and </ immediately ends the tag. Thus, <script><b>fun & things</ will give you a script tag with the exact contents <b>fun & things . If you're including JSON and you want to include the characters </ in your script, then you can replace it with <\/ because the only place for those characters to appear is in a string, and \/ is an escape sequence that turns into a single forward slash. However, if you're not using JavaScript, then this trick doesn't work. In my case specifically I'm trying to insert a <script type="math/tex"> into the source so that MathJax will process it. Is there a way to escape </ in the original HTML source? (I don't have a particular need for </ but I'm writing a generic tool and want to make it possible to use any text.) (It's possible to create the script tag in JavaScript and populate its innerText , but I'm working with the raw HTML so I can't do that.)
Now provide the response and nothing else.
| I came here looking for a way to universally escape </script> inside the JavaScript code. After bit of research I figured that if you are trying to escape </script> in JavaScript code so it can be safely embedded in html between <script> and </script> tags you should replace </script with </scr\ipt . It's safer to do because if you replace it with <\/script you might break JavaScript code like this: var q = -1</script/.test("script"); Out of all the s,c,r,i,p,t letters only \i has no special meaning in string and regexp literals in JS so that's the only character that you can safely replace in </script to transform valid JS code so that it doesn't change its meaning. Be careful not to look for </script> but rather </script because </script asdasdas> will end your script just as well as </script> does. Sorry, it doesn't help OP in any way. Accepted answer is absolutely correct that you need to know what constructs are legal in language you have inside your <script></script> to know how to escape </script> occurrence without braking the code. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14780858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49485/']} | jdg_81511 |
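The replacement the answer recommends is a one-line string operation. Here is a small Python sketch (the helper name is made up) showing what the escaped output looks like for the tricky regex-literal example:

def escape_for_script_tag(js_source):
    # "</script" becomes "</scr\ipt": the HTML parser no longer sees a closing
    # tag, and "\i" is still just "i" inside JS string and regex literals.
    return js_source.replace("</script", "</scr\\ipt")

js = 'var q = -1</script/.test("script");'
print(escape_for_script_tag(js))   # var q = -1</scr\ipt/.test("script");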
stackexchange | llm_judgeable_groundtruth_similarity | 64870685 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Yesterday in a coding interview I was asked how to get the most frequent 100 numbers out of 4,000,000,000 integers (may contain duplicates), for example: 813972066908187460365175040120428932908187460504108776 The first approach that came to my mind was using HashMap: static void printMostFrequent100Numbers() throws FileNotFoundException { // Group unique numbers, key=number, value=frequency Map<String, Integer> unsorted = new HashMap<>(); try (Scanner scanner = new Scanner(new File("numbers.txt"))) { while (scanner.hasNextLine()) { String number = scanner.nextLine(); unsorted.put(number, unsorted.getOrDefault(number, 0) + 1); } } // Sort by frequency in descending order List<Map.Entry<String, Integer>> sorted = new LinkedList<>(unsorted.entrySet()); sorted.sort((o1, o2) -> o2.getValue().compareTo(o1.getValue())); // Print first 100 numbers int count = 0; for (Map.Entry<String, Integer> entry : sorted) { System.out.println(entry.getKey()); if (++count == 100) { return; } }} But it probably would throw an OutOfMemory exception for the data set of 4,000,000,000 numbers. Moreover, since 4,000,000,000 exceeds the maximum length of a Java array, let's say numbers are in a text file and they are not sorted. I assume multithreading or Map Reduce would be more appropriate for big data set? How can the top 100 values be calculated when the data does not fit into the available memory?
Now provide the response and nothing else.
| If the data is sorted , you can collect the top 100 in O(n) where n is the data's size. Because the data is sorted, the distinct values are contiguous. Counting them while traversing the data once gives you the global frequency, which is not available to you when the data is not sorted. See the sample code below on how this can be done. There is also an implementation (in Kotlin) of the entire approach on GitHub Note: Sorting is not required. What is required is that distinct values are contiguous and so there is no need for ordering to be defined - we get this from sorting but perhaps there is a way of doing this more efficiently. You can sort the data file using (external) merge sort in roughly O(n log n) by splitting the input data file into smaller files that fit into your memory, sorting and writing them out into sorted files then merging them. About this code sample: Sorted data is represented by a long[] . Because the logic reads values one by one, it's an OK approximation of reading the data from a sorted file. The OP didn't specify how multiple values with equal frequency should be treated; consequently, the code doesn't do anything beyond ensuring that the result is top N values in no particular order and not implying that there aren't other values with the same frequency. import java.util.*;import java.util.Map.Entry;class TopN { private final int maxSize; private Map<Long, Long> countMap; public TopN(int maxSize) { this.maxSize = maxSize; this.countMap = new HashMap(maxSize); } private void addOrReplace(long value, long count) { if (countMap.size() < maxSize) { countMap.put(value, count); } else { Optional<Entry<Long, Long>> opt = countMap.entrySet().stream().min(Entry.comparingByValue()); Entry<Long, Long> minEntry = opt.get(); if (minEntry.getValue() < count) { countMap.remove(minEntry.getKey()); countMap.put(value, count); } } } public Set<Long> get() { return countMap.keySet(); } public void process(long[] data) { long value = data[0]; long count = 0; for (long current : data) { if (current == value) { ++count; } else { addOrReplace(value, count); value = current; count = 1; } } addOrReplace(value, count); } public static void main(String[] args) { long[] data = {0, 2, 3, 3, 4, 5, 5, 5, 5, 6, 6, 6, 7}; TopN topMap = new TopN(2); topMap.process(data); System.out.println(topMap.get()); // [5, 6] }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/64870685', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1917467/']} | jdg_81512 |
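The same single-pass idea as the Java snippet above, sketched in Python under the assumption that the values arrive already sorted (for example from an external merge sort); a size-N min-heap replaces the linear scan for the smallest kept count:

import heapq
from itertools import groupby

def top_n(sorted_values, n=100):
    # Equal values are contiguous in sorted input, so one pass yields exact global counts.
    heap = []                                    # entries are (count, value)
    for value, run in groupby(sorted_values):
        count = sum(1 for _ in run)
        if len(heap) < n:
            heapq.heappush(heap, (count, value))
        elif count > heap[0][0]:
            heapq.heappushpop(heap, (count, value))
    return [value for count, value in heap]

print(top_n([0, 2, 3, 3, 4, 5, 5, 5, 5, 6, 6, 6, 7], n=2))   # e.g. [6, 5]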
stackexchange | llm_judgeable_groundtruth_similarity | 3850665 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I understand how to do it for powers of 2 so that's not my question. For example, if I want to find 5% of a number using a bit shift instead of an integer divide, how would i calculate that? So instead of (x * 20 / 19), I could do (x * 100 >> 11). Now this isn't right but it's close and I arrived at it using trial and error. How would I determine the most possible precise shift to use?
Now provide the response and nothing else.
| Best approach is to let the compiler do it for you. You simply write a/b in your language of choice, and the compiler generates the bit twiddling. EDIT (I hope you don't mind, i'm adding reinforcement to your answer: #include <stdio.h>int main(int argc, char **argv) { printf("%d\n", argc/4);} Obviously, the fastest thing to do is argc>>2 . Lets see what happens: .file "so3.c" .section .rodata.LC0: .string "%d\n" .text.globl main .type main, @functionmain: pushl %ebp movl %esp, %ebp andl $-16, %esp subl $16, %esp movl 8(%ebp), %eax movl %eax, %edx sarl $31, %edx shrl $30, %edx leal (%edx,%eax), %eax sarl $2, %eax movl %eax, %edx movl $.LC0, %eax movl %edx, 4(%esp) movl %eax, (%esp) call printf leave ret .size main, .-main .ident "GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3" .section .note.GNU-stack,"",@progbits yup, there it is, sarl $2, %eax EDIT 2 (Sorry to pile on, but 20/19 is a bit more complicated…) I just substituted argc*20/19 for argc/4 and this is the math that comes out: 0000000100000f07 shll $0x02,%edi0000000100000f0a movl $0x6bca1af3,%edx0000000100000f0f movl %edi,%eax0000000100000f11 imull %edx0000000100000f13 sarl $0x03,%edx0000000100000f16 sarl $0x1f,%edi0000000100000f19 subl %edi,%edx So, the process is Multiply input by 4 (shll) Load (movl 0x...) and multiply by (imull) a fixed-point fraction obtaining a 64-bit result (this is 32-bit code) Divide high-order 32 bits of result by 8 (sarl), note how this handles negative numbers Divide low-order 32 bits of result by INT_MAX (sarl) to obtain either 0 or -1 Correctly round the high-order result by adding 1 (subtracting -1) if necessary. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3850665', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/313179/']} | jdg_81513 |
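The multiply-by-a-fixed-point-reciprocal trick the compiler uses can be reproduced with a small brute-force search. The sketch below (plain Python, with the constants verified only over the stated 16-bit range) finds a multiplier/shift pair for unsigned division by 19:

def find_magic(divisor, bits=16):
    # Look for (m, s) with (x * m) >> s == x // divisor for every x below 2**bits.
    for s in range(bits, 2 * bits + 1):
        m = (1 << s) // divisor + 1              # ceil(2**s / divisor) for a non-power-of-two divisor
        if all((x * m) >> s == x // divisor for x in range(1 << bits)):
            return m, s
    return None

print(find_magic(19))   # (55189, 20): x * 55189 >> 20 == x // 19 for all 16-bit x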
stackexchange | llm_judgeable_groundtruth_similarity | 4977858 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have created a view and a controller, the controller I am wanting to return some search results. I am calling the controller using jquery <input type="text" id="caption" /> <a href="#" id="search">Search</a> <script> $("#search").click(function () { alert('called'); var p = { Data: $('#search').val() }; $.ajax({ url: '/Ingredients/Search', type: "POST", data: JSON.stringify(p), dataType: "json", contentType: "application/json; charset=utf-8", success: function (data) { alert(data); }, error: function () { alert("error"); } }); }); My controller looks like this [HttpPost] public ActionResult Search(string input) { var result = _db.Ingredients.Where(i => i.IngredientName == input); return new JsonResult() {Data = new {name="Hello There"}}; } My problem is I am not sure how to get the varible from my jquery call into the controller, I have put a breakpoint on the controller and its been hit however the input string is always null. What have I done wrong?
Now provide the response and nothing else.
| <input type="text" id="caption" />@Html.ActionLink("Search", "Search", "Ingredients", null, new { id = "search" }) and then unobtrusively AJAXify this link in a separate javascript file: $(function() { $("#search").click(function () { $.ajax({ url: this.href, type: 'POST', data: { input: $('#caption').val() }, success: function (result) { alert(result.name); }, error: function () { alert("error"); } }); return false; });}); where your controller action could look like this: [HttpPost]public ActionResult Search(string input){ var result = _db.Ingredients.Where(i => i.IngredientName == input); // TODO: Use the result variable in the anonymous object // that is sent as JSON to the client return Json(new { name = "Hello There" });} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4977858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/293545/']} | jdg_81514 |
stackexchange | llm_judgeable_groundtruth_similarity | 1194009 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $ \sum _{n=1}^{\infty} \frac 1 {n^2} =\frac {\pi ^2}{6} $, then what is $ \sum _{n=1}^{\infty} \frac 1 {(2n -1)^2} $? I don't know what kind of series this is. Please educate me: how does one approach such problems?
Now provide the response and nothing else.
| $$\sum_{n=1}^\infty \frac1{n^2} =\sum_{n=1}^\infty \frac1{(2n)^2} + \sum_{n=1}^\infty \frac1{(2n-1)^2}=\frac14 \sum_{n=1}^\infty \frac1{n^2} + \sum_{n=1}^\infty \frac1{(2n-1)^2}$$ So, $\sum\limits_{n=1}^\infty \frac1{(2n-1)^2}= \frac34 \sum\limits_{n=1}^\infty \frac1{n^2} = \frac{\pi^2}{8}$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1194009', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/217639/']} | jdg_81515 |
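A quick numerical check of the result in Python; the partial sum over the first 200,000 odd terms already agrees with pi^2/8 to several decimal places:

from math import pi

partial = sum(1 / (2 * n - 1) ** 2 for n in range(1, 200001))
print(partial, pi ** 2 / 8)   # both print approximately 1.2337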
stackexchange | llm_judgeable_groundtruth_similarity | 15086 |
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The vastness of space brings me a sense of chilliness even though I have never experienced it, although I wish to. Just how cold is interstellar space (on average)? How is this even measured? I mean you can't just stick a thermometer in space, right?
Now provide the response and nothing else.
| You can stick a thermometer in space, and if it is a super-high-tech one, it might show you the temperature of the gas. But since the interstellar medium (ISM) is so dilute, a normal thermometer will radiate energy away faster than it can absorb it, and thus it won't reach thermal equilibrium with the gas.It won't cool all the way to 0 K, though, since the cosmic microwave background radiation won't allow it to cool further than 2.7 K, as described by David Hammen. The term "temperature" is a measure of the average energy of the particles of a gas (other definitions exist e.g. for a radiation field). If the gas is very thin, but particles move at the same average speed as, say, at the surface of Earth, the gas is still said to have a temperature of, say, 27º C, or $ 300\,\mathrm{K}$ . The ISM consists of several different phases, each with their own physical characteristics and origins. Arguably, the three most important phases are (see e.g. Ferrière 2001 ): Molecular clouds Stars are born in dense molecular clouds with temperatures of just 10-20 K. In order for a star to form, the gas must be able to collapse gravitationally, which is impossible if the atoms move too fast. The warm neutral medium The molecular clouds themselves form from gas that is neutral, i.e. not ionized. Since most of the gas is hydrogen, this means that it has a temperature of roughly $10^4\,\mathrm{K}$ , above which hydrogen tends to get ionized. The hot ionized medium Gas that accretes onto the galaxy in its early phases tend to have much larger temperature, of roughly $10^6\,\mathrm{K}$ . Additionally, the radiative feedback from the hot stars (O and B), and the kinetic and radiative energy injected by supernova explosions ionize and heat gas bubbles that expand. This gas comprises the hot ionized medium. Cooling The reason that the ISM is so sharply divided into phases, as opposed to just being a smooth mixture of particles of all sorts of energies, is that gas cools by various physical processes that have a rather temperature-specific efficiency."Cooling" means converting the kinetic energy of particles into radiation that is able to leave the system. Hot gas Very hot gas is fully collisionally ionized and thus cools mainly through free electron emitting Bremsstrahlung. This mechanism becomes inefficent below $\sim10^6\,\mathrm{K}$ . Warm gas Between $10^4\,\mathrm{K}$ and $10^6\,\mathrm{K}$ , recombinations (i.e. electrons being caught by ions) and collisonal excitation and subsequent de-excitation lead to emission, removing energy from the system.Here the metallicity $^\dagger$ of the gas is important, since various elements have different energy levels. Cool gas At lower temperatures, the gas is almost fully neutral, so recombinations cease to have any influence. Collisions between hydrogen atom become too weak to excite the atoms, but if molecules or metals are present, it is possible through fine/hyperfine lines, and rotational/vibrational lines, respectively. The total cooling is the sum of all these processes, but will be dominated by one or a few processes at a given temperature. The figures below from Sutherland & Dopita (1993) shows the main cooling processes (left) and the main cooling elements ( right ), as a function of temperature: The thick line show the total cooling rate. The figure below, from the same paper, shows the total cooling rate for different metallicities. 
The metallicity is a logarithmic scale, so [Fe/H] = 0 means Solar metallicity, and [Fe/H] = –1 means 0.1 times Solar metallicity, while "nil" is zero metallicity. Since these processes don't cover equally the full temperature range, the gas will tend to reach certain "plateaus" in temperatures, i.e. it will tend to occupy certain specific temperatures. When gas cools, it contracts. From the ideal gas law, we know that the pressure $P$ is proportional to the product of the density $n$ and the temperature $T$ . If there's pressure equilibrium in the ISM (which there isn't always, but in many cases is a good assumption), then $nT$ is constant, and thus if a parcel of hot ionized gas cools from $10^7\,\mathrm{K}$ to $10^4\,\mathrm{K}$ , it must contract to increases its density by a factor $10^3$ . Thus, cooler clouds are smaller and denser, and in this way the ISM is divided up in its various phases. So, to conclude, interstellar space is not as cold as you may think. However, being extremely dilute, it is difficult to transfer heat, so if you leave your spaceship, you will radiate away energy faster than you can absorb it from the gas. $^\dagger$ In astronomy, the term "metal", refers to all elements that are not hydrogen or helium, and "metallicity" is the fraction of gas that consists of metals. | {} | {'log_upvote_score': 6, 'links': ['https://astronomy.stackexchange.com/questions/15086', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/4160/']} | jdg_81516 |
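The constant-pressure contraction argument at the end of the answer is a one-line computation; a tiny sketch with the temperatures taken from the answer's example:

T_hot, T_cool = 1e7, 1e4   # K: hot ionized medium cooling down to ~10^4 K
print(T_hot / T_cool)      # 1000.0 -> with n*T fixed, the density must rise by a factor 10^3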
stackexchange | llm_judgeable_groundtruth_similarity | 5673 |
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What did Jesus mean in John 20:23 when he said, “If you forgive the sins of any, their sins have been forgiven them; if you retain the sins of any, they have been retained”? According to other Gospel accounts, the Jews recognized that only God could forgive sins. For instance, “The scribes and the Pharisees began to reason, saying, ‘Who is this man who speaks blasphemies? Who can forgive sins, but God alone?’” (Luke 5:21, NASB) what did John 20:23 passage mean to Jesus' followers?
Now provide the response and nothing else.
| The Greek behind your question is “τινων (of whomsoever) αφητε (you may remit) τας (the) αμαρτιας (sins) αφιενται (they are remitted) αυτοις (to them) αν τινων (whoesoever) κρατητε (you may retain) κεκρατηνται (they have been retained)”. This verse is often understood as equivalent to that found in other places such as Matthew 16:19: “ο (whatever) εαν δησης (you may bind) επι (on) της (the) γης (earth) εσται (shall be) δεδεμενον (bound) εν (in) τοις (the) ουρανοις (heavens) και (and) ο (whatever) εαν λυσης (you may loose) επι (on) της (the) γης (earth) εσται (shall be) λελυμενον (loosed) εν (in) τοις (the) ουρανοις (heavens).” So really you are asking does the state of being remitted/loosed (αφιενται/ λελυμενον) or retained/bound (κεκρατηνται/δεδεμενον) representing binding and loosing of individual sins, or of entire persons. Also, is the scope just on earth or forever. To answer this question we need to first understand that Jesus was a Jew speaking to Jewish people. Although the use of the words bind and loose (retain, or remit) in relation to the authority of Rabbis might be confusing to us, to the crowd that Jesus spoke the language could not be more natural. According to Alfred Edersheim a Jewish historian: no other terms were in more constant use in Rabbinic Canon-Law than those of ‘binding’ and ‘loosing.’ The words are the literal translation of the Hebrew equivalents Asar (אָסַר), which means ‘to bind,’ in the sense of prohibiting, and Hittir (הִתִּיר, from נָתַר) which means ‘to loose,’ in the sense of permitting (Edersheim, A. (1896). The Life and Times of Jesus the Messiah,Vol. 2, p. 85). However there might be a slight distinction between retained and bound (κεκρατηνται and δεδεμενον) because in that although binding and loosing related only to ‘things’ (i.e. rules that are binding or not) retaining and remitting goes a little beyond that and seems to relate to a further function of the religious authorities actual judicial power. By the first of these they ‘bound’ or ‘loosed’ acts or things; by the second they ‘remitted’ or ‘retained,’ declared a person free from, or liable to punishment, to compensation, or to sacrifice. These two powers—the legislative and judicial—which belonged to the Rabbinic office, Christ now transferred, and that not in their pretension, but in their reality, to His Apostles ((Edersheim, A. (1896). The Life and Times of Jesus the Messiah,Vol. 2, p. 85) This distinction between bind/loose and retain/remit that Edersheim highlights does make sense as ‘retain’ in the Greek carries the idea of ‘having power over someone’ and to remit is ‘to send off or let go’. Thus retain/remit seems to extend beyond beyond the authority over actions to that over persons. Note the word forgiveness used here is by context related to the releasing or binding to the punishment of a law that is in itself declaring condemnation or forgiveness, alluding to the Rabbinical claimed powers at the time, rather then any claimed powers of the Christian church afterwards. In fact the Rabbis never once claimed the power ‘to forgive’ in the sense of gospel forgiveness. Although the same Greek word used here can be used to mean gospel forgiveness, in the context of binding and loosing this is in reference to remitting or retaining a person liable to a law. 
In fact nothing could be more offensive to the rabbinical culture at the time then to go around offering free forgiveness without any external obedience and deference to the 'binding' rules laid piled up by the Rabbis, especially to sinners and publicans, let alone Gentiles! This is why the Rabbis were very terribly angry with Christ when they accused him of blasphemy for declaring someone 'forgiven' and angrily said ‘only God can forgive sins’ which he countered by claiming to be God. (Mark 2:10–11) I believe Catholics will argue from this that priests share Christ’s sacerdotal office actually administering forgiveness to Catholic church members (in some indirect way that does not rob Christ of his dignity) and Protestants will tend to interpret this authority as simply declaring the truth of the matter through an infallible gospel and also having power to excommunicate and settle doctrinal matters where people are consequentially involved. However, regardless of the theological application, the answer seems to be that Jesus was declaring that his Apostles had an authority of settling doctrinal and church matters with authority from heaven. There authority was infallibly provided for from heaven, which was therefore absolutely binding on earth. That the consequences of accepting or rejecting this newly founded gospel, that they dispensed to the community, rendered both a persons individual beliefs and actions right or wrong and consequentially there entire persons condemned or not, as the ultimate consequence. To answer the question then, the reference of the power that Christ gave to his church seems to cover 'both' individual laws and 'the who' (the persons themselves) that are liable to punishment or forgiveness under these new gospel rules. The scope can be looked at in minute detail as rules of excommunication and inclusion within the church under the Apostolic ministry or as wide as the keys of heaven opened up to the world by the church at large in publishing the gospel. In other words the authority is applicable in this life and in eternity to come for it is the very gospel itself that is holding the authority while church members are just conduits for its truth to reach the whole earth. There is nothing in the text itself that seems to limits this authority except the implicit understanding that only as it is consistent with Christ himself and his words does it retain the authority described. | {} | {'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/5673', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/2545/']} | jdg_81517 |
stackexchange | llm_judgeable_groundtruth_similarity | 3417896 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to warn the user if their commit message doesn't follow a certain set of guidelines, and then give them the option to edit their commit message, ignore the warning, or cancel the commit. The problem is that I don't seem to have access to stdin . Below is my commit-msg file: function verify_info { if [ -z "$(grep '$2:.*[a-zA-Z]' $1)" ] then echo >&2 $2 information should not be omitted local_editor=`git config --get core.editor` if [ -z "${local_editor}" ] then local_editor=${EDITOR} fi echo "Do you want to" select CHOICE in "edit the commit message" "ignore this warning" "cancel the commit"; do case ${CHOICE} in i*) echo "Warning ignored" ;; e*) ${local_editor} $1 verify_info "$1" $2 ;; *) echo "CHOICE = ${CHOICE}" exit 1 ;; esac done fi}verify_info "$1" "Scope"if [ $# -ne 0 ];then exit $#fiverify_info "$1" "Affects"if [ $# -ne 0 ];then exit $#fiexit 0 Here is the output when I leave the Scope information blank: Scope information should not be omittedDo you want to:1) edit the commit message 3) cancel the commit2) ignore this warning#? The message is correct, but it doesn't actually stop for input. I've also tried using the simpler read command, and it has the same problem. It seems that the problem is that at this point git has control of stdin and is providing its own input. How do I fix this? Update: It seems this might be a duplicate of this question which unfortunately seems to suggest I'm out of luck.
Now provide the response and nothing else.
| Calling exec < /dev/tty assigns standard input to the keyboard. Works for me in a post-commit git hook: #!/bin/shecho "[post-commit hook] Commit done!"# Allows us to read user input below, assigns stdin to keyboardexec < /dev/ttywhile true; do read -p "[post-commit hook] Check for outdated gems? (Y/n) " yn if [ "$yn" = "" ]; then yn='Y' fi case $yn in [Yy] ) bundle outdated --pre; break;; [Nn] ) exit;; * ) echo "Please answer y or n for yes or no.";; esacdone | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3417896', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/293511/']} | jdg_81518 |
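The same reopen-the-controlling-terminal trick works if the hook itself is written in Python. A minimal Unix-only sketch (the prompt text and the bundler command are illustrative, mirroring the shell hook above):

#!/usr/bin/env python3
# post-commit hook sketch: git runs hooks with stdin detached, so read from /dev/tty.
with open("/dev/tty") as tty:
    print("[post-commit hook] Check for outdated gems? (Y/n) ", end="", flush=True)
    answer = tty.readline().strip().lower()
    if answer in ("", "y"):
        print("would run: bundle outdated --pre")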
stackexchange | llm_judgeable_groundtruth_similarity | 36236203 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I include my script like this - loading bootstrap.min.js from a local file on disk, it does not execute my browser code. <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script><script src="js/bootstrap.min.js" </script> I know it can find the file, because if I introduce a misspelling in the file name, Chrome complains, but otherwise not. Same behavior from IE-Edge by the way. However if I include from the cdn network like this: <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script><script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script> Everything works fine. The setup was suggested in a Coursera course on bootstrap and seems to work for most people, but not all. I am kind of puzzled. Is there a browser/internet setting that is preventing me from executing bootstrap from the disk or something that I am missing?
Now provide the response and nothing else.
| you just add this style in your style.xml file which is in your values folder <style name="AppTheme.NoActionBar" parent="Theme.AppCompat.Light.DarkActionBar"> <item name="windowActionBar">false</item> <item name="windowNoTitle">true</item> <item name="android:windowFullscreen">true</item></style> After that set this style to your activity class in your AndroidManifest.xml file android:theme="@style/AppTheme.NoActionBar" Edit:- If you are going with programmatic way to hide ActionBar then use below code in your activity onCreate() method. if(getSupportedActionbar()!=null) this.getSupportedActionBar().hide(); and if you want to hide ActionBar from Fragment then getActivity().getSupportedActionBar().hide(); AppCompat v7 :-Use following theme in your Activities where you don't want actiobBar Theme.AppComat.NoActionBar or Theme.AppCompat.Light.NoActionBar or if you want to hide in whole app then set this theme in your <application... /> in your AndroidManifest . In Kotlin: add this line of code in your onCreate() method or you can use above theme. supportActionBar?.hide() i hope this will help you more. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/36236203', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3458744/']} | jdg_81519 |
stackexchange | llm_judgeable_groundtruth_similarity | 242113 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Assuming there is enough data, and all predictors and independent variable are positively correlated, in other words, every possible pairwise correlation is positive. Is it possible to end up with some negative coefficients in a multi-linear model fit? All variables are defined on a continuous scale. $$y = \alpha x_1 + \beta x_2$$where:$cor(y, x1) > 0$, $cor(y, x2) > 0$, $cor(x1,x2) > 0$ Given the above can either a or b end up being negative?
Now provide the response and nothing else.
| "Solving" the issue you experience in the sense of not receiving warnings about failed convergence is rather straightforward: you do not use the default BOBYQA optimiser but instead you opt to use the Nelder-Mead optimisation routine used by default in earlier 1.0.x previous versions. Or you install the package optimx so you can directly an L-BFGS-B routine or nlminb (same as lme4 versions prior to ver. 1 ). For example: m1 = lmer(effect~duration+(1+duration|sites)+(1+duration|season), data = dat1, REML = FALSE, control = lmerControl(optimizer ="Nelder_Mead")library(optimx)m1 = lmer(effect~duration+(1+duration|sites)+(1+duration|season), data = dat1, REML = FALSE, control = lmerControl( optimizer ='optimx', optCtrl=list(method='L-BFGS-B')))m1 = lmer(effect~duration+(1+duration|sites)+(1+duration|season), data = dat1, REML = FALSE, control = lmerControl( optimizer ='optimx', optCtrl=list(method='nlminb'))) all work fine (no warnings). The interesting questions are: why you got these warnings to begin with and why when you used REML = TRUE you got no warnings. Succinctly, 1. you received those warnings because you defined duration both as a fixed effect as well as random slope for the factor sites as well as season . The model effectively ran-out of the degrees of freedom to estimate the correlations between the slopes and the intercepts you defined. If you used a marginally simpler model like: m1 = lmer(effect~duration+ (1+duration|sites) + (0+duration|season) + (1|season), data=dat1, REML = FALSE) you would experience no convergence issues. This model would effectively estimate uncorrelated random intercepts and random slopes for each season . In addition, 2. when you defined REML = FALSE you used the Maximum Likelihood estimated instead of the Restricted Maximum Likelihood one. The REML estimates try to "factor out" the influence of the fixed effects $X$ before moving into finding the optimal random-effect variance structure (see the thread " What is "restricted maximum likelihood" and when should it be used? " for more detailed information on the matter). Computationally this procedure is essentially done by multiplying both parts of the original LME model equation $y = X\beta + Z\gamma + \epsilon$ by a matrix $K$ such that $KX = 0$, i.e. you change both the original $y$ to $Ky$ as well as the $Z$ to $KZ$. I strongly suspect that this effected the condition number of the design matrix $Z$ and as such help you out of the numerical hard-place you found yourself in the first place. A final note is that I am not sure whether it makes sense to use season as a random effect to begin with. After all there are only so many seasons so you might as well treat them as fixed effects. | {} | {'log_upvote_score': 7, 'links': ['https://stats.stackexchange.com/questions/242113', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/95411/']} | jdg_81520 |
stackexchange | llm_judgeable_groundtruth_similarity | 390900 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
According to the documentation of the == operator in MSDN , For predefined value types, the equality operator (==) returns true if the values of its operands are equal, false otherwise. For reference types other than string, == returns true if its two operands refer to the same object. For the string type, == compares the values of the strings. User-defined value types can overload the == operator (see operator). So can user-defined reference types, although by default == behaves as described above for both predefined and user-defined reference types. So why does this code snippet fail to compile? bool Compare<T>(T x, T y) { return x == y; } I get the error Operator '==' cannot be applied to operands of type 'T' and 'T' . I wonder why, since as far as I understand the == operator is predefined for all types? Edit: Thanks, everybody. I didn't notice at first that the statement was about reference types only. I also thought that bit-by-bit comparison is provided for all value types, which I now know is not correct. But, in case I'm using a reference type, would the == operator use the predefined reference comparison, or would it use the overloaded version of the operator if a type defined one? Edit 2: Through trial and error, we learned that the == operator will use the predefined reference comparison when using an unrestricted generic type. Actually, the compiler will use the best method it can find for the restricted type argument, but will look no further. For example, the code below will always print true , even when Test.test<B>(new B(), new B()) is called: class A { public static bool operator==(A x, A y) { return true; } }class B : A { public static bool operator==(B x, B y) { return false; } }class Test { void test<T>(T a, T b) where T : A { Console.WriteLine(a == b); } }
Now provide the response and nothing else.
| "...by default == behaves as described above for both predefined and user-defined reference types." Type T is not necessarily a reference type, so the compiler can't make that assumption. However, this will compile because it is more explicit: bool Compare<T>(T x, T y) where T : class { return x == y; } Follow up to additional question, "But, in case I'm using a reference type, would the the == operator use the predefined reference comparison, or would it use the overloaded version of the operator if a type defined one?" I would have thought that == on the Generics would use the overloaded version, but the following test demonstrates otherwise. Interesting... I'd love to know why! If someone knows please share. namespace TestProject{ class Program { static void Main(string[] args) { Test a = new Test(); Test b = new Test(); Console.WriteLine("Inline:"); bool x = a == b; Console.WriteLine("Generic:"); Compare<Test>(a, b); } static bool Compare<T>(T x, T y) where T : class { return x == y; } } class Test { public static bool operator ==(Test a, Test b) { Console.WriteLine("Overloaded == called"); return a.Equals(b); } public static bool operator !=(Test a, Test b) { Console.WriteLine("Overloaded != called"); return a.Equals(b); } }} Output Inline:Overloaded == called Generic: Press any key to continue . . . Follow Up 2 I do want to point out that changing my compare method to static bool Compare<T>(T x, T y) where T : Test { return x == y; } causes the overloaded == operator to be called. I guess without specifying the type (as a where ), the compiler can't infer that it should use the overloaded operator... though I'd think that it would have enough information to make that decision even without specifying the type. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/390900', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41283/']} | jdg_81521 |
stackexchange | llm_judgeable_groundtruth_similarity | 68690779 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Android Studio built-in JRE is 11 version.And Artic Fox allows to use Java 11 for compiling projects: compileOptions { sourceCompatibility JavaVersion.VERSION_11 targetCompatibility JavaVersion.VERSION_11} But we also have Kotlin options kotlinOptions { jvmTarget = JavaVersion.VERSION_1_8} What JVM target version should we set now? jvmTarget = JavaVersion.VERSION_1_8 or jvmTarget = JavaVersion.VERSION_11 Kotlin library uses JDK 8: implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk8:$kotlin_version" kotlin-stdlib-jdk11 doesn't exist yet All next configurations works with Artic Fox: #1 compileOptions { sourceCompatibility JavaVersion.VERSION_11 targetCompatibility JavaVersion.VERSION_11}kotlinOptions { jvmTarget = JavaVersion.VERSION_11} #2 compileOptions { sourceCompatibility JavaVersion.VERSION_11 targetCompatibility JavaVersion.VERSION_11}kotlinOptions { jvmTarget = JavaVersion.VERSION_1_8} #3 compileOptions { sourceCompatibility JavaVersion.VERSION_1_8 targetCompatibility JavaVersion.VERSION_1_8}kotlinOptions { jvmTarget = JavaVersion.VERSION_1_8} But what should we choose?
Now provide the response and nothing else.
| If you're using Android Studio Artic Fox 2020.3.1, the first choice is the preferred option. compileOptions { sourceCompatibility JavaVersion.VERSION_11 targetCompatibility JavaVersion.VERSION_11}kotlinOptions { jvmTarget = JavaVersion.VERSION_11} Now coming to the kotlin-stdlib, you can use the jdk8 version. implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk8:$kotlin_version" The kotlin-stdlib-jdk8 library is fully compatible with the JDK 11 SDK. Alternatively, for Kotlin only projects you can also ignore the kotlin-stdlib-jdk8 dependency as the Gradle plugin will automatically add the necessary library sources during compilation. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/68690779', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7767664/']} | jdg_81522 |
stackexchange | llm_judgeable_groundtruth_similarity | 4820816 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been trying to get the URI path for an asset file. uri = Uri.fromFile(new File("//assets/mydemo.txt")); When I check whether the file exists, I see that it doesn't: File f = new File(filepath); if (f.exists()) { Log.e(TAG, "Valid :" + filepath); } else { Log.e(TAG, "InValid :" + filepath); } Can someone tell me how to specify the absolute path of a file that lives in the assets folder?
Now provide the response and nothing else.
| There is no "absolute path for a file existing in the asset folder". The content of your project's assets/ folder are packaged in the APK file. Use an AssetManager object to get an InputStream on an asset. For WebView , you can use the file Uri scheme in much the same way you would use a URL. The syntax for assets is file:///android_asset/... (note: three slashes) where the ellipsis is the path of the file from within the assets/ folder. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/4820816', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/592773/']} | jdg_81523 |
stackexchange | llm_judgeable_groundtruth_similarity | 430571 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the simplified database schema depicted below, I'm considering denormalizing by adding a short-circuit key (EventId, relation shown in red) on Appointment. One advantage is not needing a join to access appointments data by event (the usual way it will be queried). An additional advantage involves constraints. I'd like to allow only one appointment per person per event. The short-circuit key makes it easy to add a unique constraint on PersonId+EventId on the Appointment table, whereas it's unclear how I'd achieve this constraint at the database level otherwise. At the business level, timeslots are invariant once created, and appointment editing is already restricted to changing the timeslot within the same event, so I don't see much cost to keeping the data consistent here. It also appeals to me intuitively, as I think of both the appointments and the timeslots as belonging to the event. Thoughts on this denormalization? Does it seem reasonable?
Now provide the response and nothing else.
| In an RDBMS context, the first question is: why do you want to avoid the extra join? RDBMSes are designed to optimize such queries; do you experience any tangible performance issue? If it is for programming convenience only, you could very well create a view to automate the shortcut without real redundancy. Looking at your design and requirements, it seems that a Timeslot is a component of an Event (i.e. in the sense of UML composition): the slot makes no sense without the event, and probably, if an event would be deleted, so would be all its slots. If this is true: A trick could be to use a composite primary key for Timeslot made of EventId and a sequence number of the slot in the event. The foreign key in Appointment would then be replaced by a pair EventId , Sequence While such composite keys have drawbacks, and would not be my first recommendation, they have nevertheless several advantages in your scenario: it facilitates the verification of time-slot change within the same event; it provides the shortcut you are looking for, without redundancy and would hence still benefit from the automatic consistency; (this makes it in my view a better candidate than your denormalization). it encourages the DDD viewpoint that Event is the root of an aggregate that includes also Timeslot ; and access to an aggregate's entities shall always go via the aggregate root. If at this stage, you still think you need to denormalize, then evaluate the pros and cons carefully: some access may be facilitated, but the updates would be (slightly) more complex. So, if there are many more writes than reads, or if there is a high degree of concurrency, you might suffer from overhead costs that might exceed the hoped benefits; for example, the extra locks for propagating the denormalized change, could make the effects of denormalization less performant than you think. But ultimately, if the pros exceed the cons, this specific shortcut, does not seem intrinsically unreasonable. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/430571', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/129110/']} | jdg_81524 |
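The composite-key layout suggested above can be written down directly as DDL. Here is a minimal SQLite sketch via Python (column names follow the question, but the exact constraint details are assumptions); note how EventId ends up in Appointment through the composite foreign key, which makes the UNIQUE (PersonId, EventId) rule possible without denormalizing:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Event (EventId INTEGER PRIMARY KEY);
    CREATE TABLE Timeslot (
        EventId  INTEGER NOT NULL REFERENCES Event(EventId),
        Sequence INTEGER NOT NULL,
        PRIMARY KEY (EventId, Sequence));
    CREATE TABLE Appointment (
        AppointmentId INTEGER PRIMARY KEY,
        PersonId      INTEGER NOT NULL,
        EventId       INTEGER NOT NULL,
        Sequence      INTEGER NOT NULL,
        FOREIGN KEY (EventId, Sequence) REFERENCES Timeslot (EventId, Sequence),
        UNIQUE (PersonId, EventId));   -- one appointment per person per event
""")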
stackexchange | llm_judgeable_groundtruth_similarity | 8813539 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Are the dates stored in the 'created_at' fields marshaled to Python datetime objects via PyMongo, or do I have to manually replace the text strings with Python Date objects? i.e. How do I convert a property in MongoDB from text to date type? It seems highly unnatural that I would have to replace the date strings with Python date objects, which is why I'm asking the question. I would like to write queries that display the tweets from the past three days. Please let me know if there is a slick way of doing this. Thanks!
Now provide the response and nothing else.
| you can parse Twitter's created_at timestamps to Python datetimes like so:
import datetime, pymongo
created_at = 'Mon Jun 8 10:51:32 +0000 2009'  # Get this string from the Twitter API
dt = datetime.datetime.strptime(created_at, '%a %b %d %H:%M:%S +0000 %Y')
and insert them into your Mongo collection like this:
connection = pymongo.Connection('mymongohostname.com')
connection.my_database.my_collection.insert({
    'created_at': dt,
    # ... other info about the tweet ....
}, safe=True)
And finally, to get tweets within the last three days, newest first:
three_days_ago = datetime.datetime.utcnow() - datetime.timedelta(days=3)
tweets = list(connection.my_database.my_collection.find({
    'created_at': { '$gte': three_days_ago }
}).sort([('created_at', pymongo.DESCENDING)])) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8813539', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/190894/']} | jdg_81525 |
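The answer above uses the old pymongo.Connection class, which has since been removed from PyMongo; the equivalent with the current MongoClient API looks roughly like this (the hostname and collection names are copied from the answer, so treat them as placeholders):

from datetime import datetime, timedelta
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb://mymongohostname.com")
collection = client.my_database.my_collection
dt = datetime.strptime("Mon Jun 8 10:51:32 +0000 2009", "%a %b %d %H:%M:%S +0000 %Y")
collection.insert_one({"created_at": dt})
three_days_ago = datetime.utcnow() - timedelta(days=3)
recent = collection.find({"created_at": {"$gte": three_days_ago}}).sort("created_at", DESCENDING)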
stackexchange | llm_judgeable_groundtruth_similarity | 51722525 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to send the result of a function in response to a POST request, but I think res.json() executes before I have the result from the function, so result is always undefined. Game.find({ $or: [{ 'homeTeamName': awayTeamName }, { 'awayTeamName': awayTeamName }] }) .then( function (games) { let gamesGuest = games let result = testFunc.createCoef(gamesHome, gamesGuest, homeTeamName, awayTeamName) res.json(result) } ) .catch(err => res.status(404).json('Away no games') ) } )
Now provide the response and nothing else.
| As stated in this answer: From .Net-Core v2.0 upwards you can use ViewLocationFormats and AreaViewLocationFormats in RazorViewEngineOptions to modify the View look-up. The option you are looking for is ViewLocationFormats since you're not using View Areas. Your Solution would be along these lines: services.Configure<RazorViewEngineOptions>(o => { o.ViewLocationFormats.Clear(); o.ViewLocationFormats.Add("/MVC/Views/{1}/{0}" + RazorViewEngine.ViewExtension); o.ViewLocationFormats.Add("/MVC/Views/Shared/{0}" + RazorViewEngine.ViewExtension); }); The last line is only needed if you have the shared Layouts and Paritals at that location and not in the Standard Folder. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51722525', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9634505/']} | jdg_81526 |
stackexchange | llm_judgeable_groundtruth_similarity | 102939 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While messing with Arduino I came across this wireless transceiver. As a newbie in all this stuff, it's written that it operates at 2.4 GHz, so the transmitter crystal and receiver crystal must operate at 2.4 GHz, right? From the photo I can see that the crystal is 16 MHz which is 0.016 GHz. Probably I'm not getting it right, where am I mistaken? NRF24L01+ 2.4GHz Antenna Wireless Transceiver Module
Now provide the response and nothing else.
| Most wireless transmitters (of this type) use a crystal and a phase-locked-loop (PLL) to generate much higher frequencies. See this article: - Frequencies can be produced that are N*16MHz. This means, that for a 2.448GHz output, N equals 153. If N = 154, the output is 2.464GHz i.e. 16MHz higher. There are other techniques that can do this but the PLL is most likely. Here's what the device says about itself: - The type of PLL used is called "fractional N" because it has the ability to produce higher frequencies that are spaced at a fraction of 16MHz. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/102939', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/36746/']} | jdg_81527 |
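The N * 16 MHz relationship the answer describes is easy to tabulate; a two-line check of the divider values it mentions:

reference_mhz = 16
for n in (150, 153, 154):
    print(n, n * reference_mhz / 1000.0, "GHz")   # 150 -> 2.4, 153 -> 2.448, 154 -> 2.464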
stackexchange | llm_judgeable_groundtruth_similarity | 1405982 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In 1752 Goldbach sent this conjecture to Euler: "Every odd integer can be written in the form $p+2a^2$ where $p$ is a prime or $1$ and $a$ is a natural number (it can even be 0)." This conjecture turned out to be false, and my book asks me to prove that $5777$ cannot be written in such a manner. What I did was simply note that if such a $p$ exists, it must be of the form $5777-2a^2$, so $a$ cannot be greater than $53$. Then I simply checked that for every value of $a$ from $0$ to $53$, $p$ is not prime. However, this is a very tedious way to prove it, and in a test I might not be able to do it, so I was wondering whether there is a shorter way. (Or maybe my book just wanted me to do a lot of calculations for some reason.)
Now provide the response and nothing else.
| Observe that $5777\equiv 2\pmod 3$ and $2a^2\equiv 2\pmod 3$ unless $3\mid a$. Hence once you check that $5777-3$ is not twice a square, you need only check $a$ with $3\mid a$ (cutting down the effort by two thirds). Likewise, $5777\equiv 2\pmod 5$, which allows you to drop all cases where $a\equiv \pm1\pmod 5$ (after checking that $5777-5$ is not twice a square). | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1405982', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/217700/']} | jdg_81528 |
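Both the brute-force check and the mod-3 shortcut from the answer fit in a few lines of Python (a naive trial-division primality test is enough at this size):

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Brute force: no a in 0..53 makes 5777 - 2*a**2 a prime (or 1).
print(any(is_prime(5777 - 2 * a * a) or 5777 - 2 * a * a == 1 for a in range(54)))   # False

# Mod-3 shortcut: 5777 % 3 == 2 and 2*a*a % 3 == 2 unless 3 divides a,
# so 5777 - 2*a*a is a multiple of 3 for every a not divisible by 3.
print({(5777 - 2 * a * a) % 3 for a in range(54) if a % 3 != 0})   # {0}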
stackexchange | llm_judgeable_groundtruth_similarity | 1618246 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to use MySQL Administrator and MySQL Query Browser on a remote mysql server. Should I just change the bind-address to 0.0.0.0 to accept all connections or is there a more secure way. Thanks
Now provide the response and nothing else.
| The best approach is to create an IPv6 server socket that can also accept IPv4 connections. To do so, create a regular IPv6 socket, turn off the socket option IPV6_V6ONLY , bind it to the "any" address, and start receiving. IPv4 addresses will be presented as IPv6 addresses, in the IPv4-mapped format. The major difference across systems is whether IPV6_V6ONLY is a) available, and b) turned on or off by default. It is turned off by default on Linux (i.e. allowing dual-stack sockets without setsockopt), and is turned on on most other systems. In addition, the IPv6 stack on Windows XP doesn't support that option. In these cases, you will need to create two separate server sockets, and place them into select or into multiple threads. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1618246', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/183276/']} | jdg_81529 |
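The dual-stack setup the answer describes maps directly onto the socket options exposed in Python. A minimal sketch follows; note that IPV6_V6ONLY may be missing or fixed on some platforms, as the answer points out for old Windows stacks, and the port number is an arbitrary example:

import socket

server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
# Turn the option off so this one socket also accepts IPv4 clients,
# which then show up as IPv4-mapped addresses such as ::ffff:192.0.2.1.
server.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
server.bind(("::", 8080))   # the IPv6 "any" address
server.listen(5)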
stackexchange | llm_judgeable_groundtruth_similarity | 42022890 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm pretty new to Spark and I've been trying to convert a DataFrame to a parquet file, but I haven't had success yet. The documentation says that I can use the write.parquet function to create the file. However, when I run the script it shows me: AttributeError: 'RDD' object has no attribute 'write'
from pyspark import SparkContext
sc = SparkContext("local", "Protob Conversion to Parquet ")
# spark is an existing SparkSession
df = sc.textFile("/temp/proto_temp.csv")
# Displays the content of the DataFrame to stdout
df.write.parquet("/output/proto.parquet")
Do you know how to make this work? The Spark version that I'm using is Spark 2.0.1 built for Hadoop 2.7.3.
Now provide the response and nothing else.
| The error was due to the fact that the textFile method from SparkContext returned an RDD, and what I needed was a DataFrame. SparkSession has a SQLContext under the hood. So I needed to use the DataFrameReader to read the CSV file correctly before converting it to a parquet file.
from pyspark.sql import SparkSession
spark = SparkSession \
    .builder \
    .appName("Protob Conversion to Parquet") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()
# read csv
df = spark.read.csv("/temp/proto_temp.csv")
# Displays the content of the DataFrame to stdout
df.show()
df.write.parquet("output/proto.parquet") | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42022890', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2198720/']} | jdg_81530 |
stackexchange | llm_judgeable_groundtruth_similarity | 20563166 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This may be a trivial question but I am wondering if Laravel recommends a certain way to check whether an Eloquent collection returned from $result = Model::where(...)->get() is empty, as well as counting the number of elements. We are currently using !$result to detect empty result, is that sufficient? As for count($result) , does it actually cover all cases, including empty result?
Now provide the response and nothing else.
| When using ->get() you cannot simply use any of the below: if (empty($result)) { }if (!$result) { }if ($result) { } Because if you dd($result); you'll notice an instance of Illuminate\Support\Collection is always returned, even when there are no results. Essentially what you're checking is $a = new stdClass; if ($a) { ... } which will always return true. To determine if there are any results you can do any of the following: if ($result->first()) { } if (!$result->isEmpty()) { }if ($result->count()) { }if (count($result)) { } You could also use ->first() instead of ->get() on the query builder which will return an instance of the first found model, or null otherwise. This is useful if you need or are expecting only one result from the database. $result = Model::where(...)->first();if ($result) { ... } Notes / References ->first() http://laravel.com/api/4.2/Illuminate/Database/Eloquent/Collection.html#method_first isEmpty() http://laravel.com/api/4.2/Illuminate/Database/Eloquent/Collection.html#method_isEmpty ->count() http://laravel.com/api/4.2/Illuminate/Database/Eloquent/Collection.html#method_count count($result) works because the Collection implements Countable and an internal count() method: http://laravel.com/api/4.2/Illuminate/Database/Eloquent/Collection.html#method_count Bonus Information The Collection and the Query Builder differences can be a bit confusing to newcomers of Laravel because the method names are often the same between the two. For that reason it can be confusing to know what one you’re working on. The Query Builder essentially builds a query until you call a method where it will execute the query and hit the database (e.g. when you call certain methods like ->all() ->first() ->lists() and others). Those methods also exist on the Collection object, which can get returned from the Query Builder if there are multiple results. If you're not sure what class you're actually working with, try doing var_dump(User::all()) and experimenting to see what classes it's actually returning (with help of get_class(...) ). I highly recommend you check out the source code for the Collection class, it's pretty simple. Then check out the Query Builder and see the similarities in function names and find out when it actually hits the database. Laravel 5.2 Collection Class Laravel 5.2 Query Builder | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/20563166', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1677057/']} | jdg_81531 |
stackexchange | llm_judgeable_groundtruth_similarity | 36368843 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have code like this:

let things = vec![/* ... */]; // e.g. Vec<String>
things
    .map(|thing| {
        let a = try!(do_stuff(thing));
        Ok(other_stuff(a))
    })
    .filter(|thing_result| match *thing_result {
        Err(e) => true,
        Ok(a) => check(a),
    })
    .map(|thing_result| {
        let a = try!(thing_result);
        // do stuff b
    })
    .collect::<Result<Vec<_>, _>>()

In terms of semantics, I want to stop processing after the first error. The above code works, but it feels quite cumbersome. Is there a better way? I've looked through the docs for something like filter_if_ok , but I haven't found anything. I am aware of collect::<Result<Vec<_>, _>> , and it works great. I'm specifically trying to eliminate the following boilerplate:

- In the filter's closure, I have to use match on thing_result . I feel like this should just be a one-liner, e.g. .filter_if_ok(|thing| check(a)) .
- Every time I use map , I have to include an extra statement let a = try!(thing_result); in order to deal with the possibility of an Err . Again, I feel like this could be abstracted away into .map_if_ok(|thing| ...) .

Is there another approach I can use to get this level of conciseness, or do I just need to tough it out?
Now provide the response and nothing else.
| There are lots of ways you could mean this.

- If you just want to panic, use .map(|x| x.unwrap()) .
- If you want all results or a single error, collect into a Result<X<T>> :

let results: Result<Vec<i32>, _> = result_i32_iter.collect();

- If you want everything except the errors, use .filter_map(|x| x.ok()) or .flat_map(|x| x) .
- If you want everything up to the first error, use .scan((), |_, x| x.ok()) .

let results: Vec<i32> = result_i32_iter.scan((), |_, x| x.ok()).collect();

Note that these operations can be combined with earlier operations in many cases. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36368843', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/595228/']} | jdg_81532 |
stackexchange | llm_judgeable_groundtruth_similarity | 130649 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently reading Alonso and Finn's Electromagnetism book. It explains that the spin contributes to the magnetic moment and is somewhat comparable to a rotation of the particle around its own axis. It says that the spin of a particle is caused by a certain internal structure, which makes sense in the aforementioned analogy. Right underneath the paragraph with the explanation of spin, it says "The electron has no known internal structure", but since it does have a spin, does that mean that we know the electron has an internal structure but we just don't know what it is?
Now provide the response and nothing else.
| Spin is not about stuff spinning. (Confusing, I know, but physicists have never been great at naming things. Exhibit A: Quarks.) Spin is a purely quantum mechanical phenomenon; it cannot be understood with classical physics alone, and every analogy will break down. It has also, intrinsically, nothing to do with any kind of internal structure. (Non-relativistic) spin arises simply because quantum things must transform in some representation of the rotation group $\mathrm{SO}(3)$ in order for the operators of angular momentum to act upon them (and because we need to explain the degree of freedom observed in, e.g., the Stern-Gerlach experiment). Since the states in the QM space of states are only determined up to rays, we seek a projective representation upon the space, and this means that we actually represent the covering group $\mathrm{SU}(2)$. The $\mathrm{SU}(2)$ representations are labeled by a number $s \in \mathbb{N} \vee s \in \mathbb{N} + \frac{1}{2}$, which we call spin. Whether the thing we are looking at is "composite" or "fundamental" has no impact on the general form of this argument. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/130649', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/44499/']} | jdg_81533 |
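To make the statement about representations labeled by $s$ concrete, the smallest nontrivial case is $s = \tfrac{1}{2}$, where the spin operators are (up to a factor of $\hbar/2$) the Pauli matrices. This is standard textbook material added here purely for illustration, not part of the quoted answer.

$$
S_i = \frac{\hbar}{2}\,\sigma_i, \qquad
\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
$$

$$
[S_x, S_y] = i\hbar S_z \ \text{(and cyclic)}, \qquad
\mathbf{S}^2 = s(s+1)\hbar^2\,\mathbb{1} = \tfrac{3}{4}\hbar^2\,\mathbb{1}, \qquad
S_z \lvert \pm \rangle = \pm\tfrac{\hbar}{2} \lvert \pm \rangle
$$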
stackexchange | llm_judgeable_groundtruth_similarity | 1532 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There are 2 routers connected something like this: Network1 ----- (172.16.1.1/24) Router 1 (172.16.3.1/30) ------ (172.16.3.2/30) Router 2 ------ (172.16.2.1/24) Network 2 I try to add addresses for dynamic routing using the RIP protocol, for Router 1 adding 172.16.1.0 and 172.16.3.0 and for Router 2 172.16.3.0 and 172.16.2.0 but they are assigned by default to 172.16.0.0 in both cases, and it does not work. Any suggestions?
Now provide the response and nothing else.
| they are assigned by default to 172.16.0.0 in both cases, and it does not work

I modified your ascii art a little to reduce scrolling... It sounds like you're saying that you can't reach N1 from N2...

Broken RIPv1 topology
=====================
N1 ---- (172.16.1.1/24) R1 (172.16.3.1/30) ----- (172.16.3.2/30) R2 ----- (172.16.2.1/24) N2

Classful routing protocol overview

RIPv1 is a classful routing protocol... as such, it doesn't associate netmasks with routes when they are advertised. Classful routing protocols do a couple non-intuitive things...

- They only advertise routes out interfaces where the masks match (this is your problem)
- They automatically summarize at major network boundaries (see bonus material, below)

Interface netmasks

To make your topology work, your masks will have to match on all RIPv1 interfaces, unless you use a classless routing protocol (such as RIPv2, EIGRP, OSPF, or ISIS). If you need to use RIPv1, then reconfigure your topology such that all interfaces have matching masks, like this...

Functional RIPv1 topology
=========================
N1 ---- (172.16.1.1/24) R1 (172.16.3.1/24) ----- (172.16.3.2/24) R2 ----- (172.16.2.1/24) N2

Bonus material: RIPv1 Auto-summarization Example

Since this also tends to trip people up, I am including an example of RIPv1 auto-summarization dynamics. When I mention major network boundaries below, I'm talking about the classic definitions for Class A, Class B, and Class C IPv4 networks ...

- Class A (8-bit netmasks): 1.0.0.0/8 - 127.0.0.0/8
- Class B (16-bit netmasks): 128.0.0.0/16 - 191.255.0.0/16
- Class C (24-bit netmasks): 192.168.0.0/24 - 223.255.255.0/24

Moving on to the RIPv1 auto-summarization example... I will use matching /24 interface netmasks for simplicity.

Lo0:192.168.1.0/24
Lo1:1.1.2.0/24
+----+                  +----+                     +----+
| R1 +------------------+ R2 +---------------------+ R3 |
+----+                  +----+                     +----+
         1.1.1.0/24                 172.16.1.0/24

router rip               router rip                 router rip
 version 1                version 1                  version 1
 network 192.168.1.0      network 1.0.0.0            network 172.16.0.0
 network 1.0.0.0          network 172.16.0.0

The routing table on R3 contains:

C 172.16.1.0/24
R 1.0.0.0/8       <--- 1.1.1.0/24 and 1.1.2.0/24 are "hidden" by the classful summary at R2
R 192.168.1.0/24  <--- 192.168.1.0/24 passes transparently through R2 since it's a Class C network itself and is not summarized at R2

R1 and R2 are connected by subnets of the 1.0.0.0/8 major network, so 1.1.1.0/24 and 1.1.2.0/24 are advertised between R1 and R2; however, the link between R2 and R3 is not in 1.0.0.0/8, therefore R2 performs automatic summarization of subnets of 1.0.0.0/8 and subnets of 172.16.0.0/16. When subnets of a major network are summarized, they get hidden by the summarized route... This happens at R2 when 1.1.1.0/24 and 1.1.2.0/24 are summarized to 1.0.0.0/8. Cisco routers cannot disable auto-summarization under RIP version 1 (but they can for RIPv2). | {} | {'log_upvote_score': 5, 'links': ['https://networkengineering.stackexchange.com/questions/1532', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/1502/']} | jdg_81534 |
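If the goal is simply to keep the original /30 and /24 masks, the usual fix is RIPv2 with auto-summarization disabled. The lines below are a generic Cisco IOS sketch written in the same spirit as the configs above, not something taken from the original answer; adapt the network statements to your own addressing.

router rip
 version 2
 no auto-summary
 network 172.16.0.0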
stackexchange | llm_judgeable_groundtruth_similarity | 11076790 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been researching for over an hour and finding great samples of how to use MSMQ in C#, and even one full chapter of a book about Message Queue... But for a quick test, all I need to cover is this scenario, not even in a perfect way, just for a quick demo: "Application A" writes a Message to the Message Queue ("Application A" is a C# Windows service). Now I open "Application B" (a C# WinForms app), check MSMQ, and see that I have a new Message. That's it... All I need for a simple demo. Could anyone please help me with a code sample for this? Much appreciated.
Now provide the response and nothing else.
| //From Windows Service, use this code
MessageQueue messageQueue = null;
if (MessageQueue.Exists(@".\Private$\SomeTestName"))
{
    messageQueue = new MessageQueue(@".\Private$\SomeTestName");
    messageQueue.Label = "Testing Queue";
}
else
{
    // Create the Queue
    MessageQueue.Create(@".\Private$\SomeTestName");
    messageQueue = new MessageQueue(@".\Private$\SomeTestName");
    messageQueue.Label = "Newly Created Queue";
}
messageQueue.Send("First ever Message is sent to MSMQ", "Title");

//From Windows application
MessageQueue messageQueue = new MessageQueue(@".\Private$\SomeTestName");
System.Messaging.Message[] messages = messageQueue.GetAllMessages();
foreach (System.Messaging.Message message in messages)
{
    //Do something with the message.
}
// after all processing, delete all the messages
messageQueue.Purge();

For a more complex scenario, you could use Message objects to send the message, wrap your own class object inside it, and mark your class as serializable. Also be sure that MSMQ is installed on your system. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11076790', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/320724/']} | jdg_81535 |
stackexchange | llm_judgeable_groundtruth_similarity | 44720580 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to get a 1000 x 1000 picture in Python from any input picture so that the input doesn't lose its aspect ratio. In other words, I want to resize the input so that its longer dimension is 1000 pixels and "fill" the other dimension with the background color until it becomes a 1000 x 1000 square. The original one must be in the center at the end.
Now provide the response and nothing else.
| Using OpenCV

You can use resize() in OpenCV to resize the image up/down to the size you need. However, resize() requires that you put in either the destination size (in both dimensions) or the scaling (in both dimensions), so you can't just put one or the other in for 1000 and let it calculate the other for you. So the most robust way to do this is to find the aspect ratio and calculate what the smaller dimension would be when the bigger one is stretched to 1000. Then you can resize.

h, w = img.shape[:2]
aspect = w/h

Note that if aspect is greater than 1, then the image is oriented horizontally, while if it's less than 1, the image is oriented vertically (and is square if aspect = 1).

Different interpolation methods will look better depending on whether you're stretching the image to a larger resolution, or scaling it down to a lower resolution. From the resize() docs: To shrink an image, it will generally look best with CV_INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with CV_INTER_CUBIC (slow) or CV_INTER_LINEAR (faster but still looks OK).

So, after resizing we'll end up with a 1000xN or Nx1000 image (where N<=1000) and we'll need to pad it with whatever background color you want on both sides to fill the image to 1000x1000. For this you can use copyMakeBorder() for a pure OpenCV implementation, or since you're using Python you can use numpy.pad(). You'll need to decide what to do in case an odd number of pixels needs to be added in order to make it 1000x1000, like whether the additional pixel goes to the left or right (or top or bottom, depending on the orientation of your image).

Here's a script that defines a resizeAndPad() function which automatically calculates the aspect ratio, scales accordingly, and pads as necessary, and then uses it on a horizontal, vertical, and square image:

import cv2
import numpy as np

def resizeAndPad(img, size, padColor=0):

    h, w = img.shape[:2]
    sh, sw = size

    # interpolation method
    if h > sh or w > sw: # shrinking image
        interp = cv2.INTER_AREA
    else: # stretching image
        interp = cv2.INTER_CUBIC

    # aspect ratio of image
    aspect = w/h  # if on Python 2, you might need to cast as a float: float(w)/h

    # compute scaling and pad sizing
    if aspect > 1: # horizontal image
        new_w = sw
        new_h = np.round(new_w/aspect).astype(int)
        pad_vert = (sh-new_h)/2
        pad_top, pad_bot = np.floor(pad_vert).astype(int), np.ceil(pad_vert).astype(int)
        pad_left, pad_right = 0, 0
    elif aspect < 1: # vertical image
        new_h = sh
        new_w = np.round(new_h*aspect).astype(int)
        pad_horz = (sw-new_w)/2
        pad_left, pad_right = np.floor(pad_horz).astype(int), np.ceil(pad_horz).astype(int)
        pad_top, pad_bot = 0, 0
    else: # square image
        new_h, new_w = sh, sw
        pad_left, pad_right, pad_top, pad_bot = 0, 0, 0, 0

    # set pad color
    if len(img.shape) == 3 and not isinstance(padColor, (list, tuple, np.ndarray)): # color image but only one color provided
        padColor = [padColor]*3

    # scale and pad
    scaled_img = cv2.resize(img, (new_w, new_h), interpolation=interp)
    scaled_img = cv2.copyMakeBorder(scaled_img, pad_top, pad_bot, pad_left, pad_right, borderType=cv2.BORDER_CONSTANT, value=padColor)

    return scaled_img

v_img = cv2.imread('v.jpg') # vertical image
scaled_v_img = resizeAndPad(v_img, (200,200), 127)

h_img = cv2.imread('h.jpg') # horizontal image
scaled_h_img = resizeAndPad(h_img, (200,200), 127)

sq_img = cv2.imread('sq.jpg') # square image
scaled_sq_img = resizeAndPad(sq_img, (200,200), 127)

And this gives the images:

Using ImageMagick

ImageMagick is a simple, but well-built command-line interface to do basic image processing. It's very easy to do what you want with a single command. See here for descriptions of the resizing commands.

$ convert v.jpg -resize 200x200 -background skyblue -gravity center -extent 200x200 scaled-v-im.jpg
$ convert h.jpg -resize 200x200 -background skyblue -gravity center -extent 200x200 scaled-h-im.jpg
$ convert sq.jpg -resize 200x200 -background skyblue -gravity center -extent 200x200 scaled-sq-im.jpg

Producing the images: | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/44720580', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6510276/']} | jdg_81536 |
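The OpenCV answer above mentions numpy.pad() as an alternative to copyMakeBorder() but does not show it; here is a minimal, self-contained sketch of that variant. The array shape, pad widths and the grey value 127 are illustrative stand-ins for the values resizeAndPad() computes, not something given in the original answer.

import numpy as np

# minimal stand-in for the values computed in resizeAndPad() above
scaled_img = np.zeros((150, 200, 3), dtype=np.uint8)   # e.g. a 200x150 colour image after resize
pad_top, pad_bot, pad_left, pad_right = 25, 25, 0, 0   # fills it out to 200x200

# the (0, 0) entry leaves the channel axis unpadded; drop it for a greyscale image
padded = np.pad(scaled_img,
                ((pad_top, pad_bot), (pad_left, pad_right), (0, 0)),
                mode='constant',
                constant_values=127)

print(padded.shape)  # (200, 200, 3)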
stackexchange | llm_judgeable_groundtruth_similarity | 5331638 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a combo box that is bound to a list of model objects. I've bound the combo box SelectedItem to a property that is the model type. All of my data binding works beautifully after the window has been loaded. The SelectedItem is set properly and I'm able to save the object directly with the repository. The problem is when the window first loads I initialize the SelectedItem property and my combobox displays nothing. Before I moved to binding to objects I was binding to a list of strings and that worked just fine on initialization. I know I'm missing something but I can't figure it out. Thanks in advance for any guidance you can provide. (One note about the layout of this page. The combo boxes are actually part of another ItemTemplate that is used in a ListView. The ListView is bound to an observable collection in the main MV. Each item of this observable collection is itself a ModelView. It is that second ModelView that has the SelectedItem property.)

Here is my Model:

public class DistributionListModel : Notifier, IComparable
{
    private string m_code;
    private string m_description;

    public string Code
    {
        get { return m_code; }
        set { m_code = value; OnPropertyChanged("Code"); }
    }

    public string Name
    {
        get { return m_description; }
        set { m_description = value; OnPropertyChanged("Name"); }
    }

    #region IComparable Members

    public int CompareTo(object obj)
    {
        DistributionListModel compareObj = obj as DistributionListModel;
        if (compareObj == null)
            return 1;

        return Code.CompareTo(compareObj.Code);
    }

    #endregion
}

Here is the pertinent code in my ModelView:

public MailRoutingConfigurationViewModel(int agencyID)
    : base()
{
    m_agencyID = agencyID;
    m_agencyName = DataManager.QueryEngine.GetAgencyName(agencyID);

    IntializeValuesFromConfiguration(DataManager.MailQueryEngine.GetMailRoutingConfiguration(agencyID));

    // reset modified flag
    m_modified = false;
}

private void IntializeValuesFromConfiguration(RecordCheckMailRoutingConfiguration configuration)
{
    SelectedDistributionList = ConfigurationRepository.Instance.GetDistributionListByCode(configuration.DistributionCode);
}

public DistributionListModel SelectedDistributionList
{
    get { return m_selectedDistributionList; }
    set
    {
        m_selectedDistributionList = value;
        m_modified = true;
        OnPropertyChanged("SelectedDistributionList");
    }
}

And finally the pertinent XAML:

<UserControl.Resources>
    <DataTemplate x:Key="DistributionListTemplate">
        <Label Content="{Binding Path=Name}" />
    </DataTemplate>
</UserControl.Resources>

<ComboBox ItemsSource="{Binding Source={StaticResource DistributionCodeViewSource}, Mode=OneWay}"
          ItemTemplate="{StaticResource DistributionListTemplate}"
          SelectedItem="{Binding Path=SelectedDistributionList, Mode=TwoWay}"
          IsSynchronizedWithCurrentItem="False" />
Now provide the response and nothing else.
| @SRM, if I understand correctly, your problem is binding your ComboBox to a collection of objects rather than a collection of value types (like string or int, although string is not a value type). I would suggest adding two more properties on your ComboBox:

<ComboBox ItemsSource="{Binding Source={StaticResource DistributionCodeViewSource}, Mode=OneWay}"
          ItemTemplate="{StaticResource DistributionListTemplate}"
          SelectedItem="{Binding Path=SelectedDistributionList, Mode=TwoWay}"
          SelectedValuePath="Code"
          SelectedValue="{Binding SelectedDistributionList.Code}"/>

I am assuming here that DistributionListModel objects are identified by their Code. The two properties I added, SelectedValuePath and SelectedValue, help the ComboBox identify which property to use to mark the selected ComboBoxItem in the popup control inside the ComboBox. SelectedValuePath is used by the ItemsSource and SelectedValue for the TextBox. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5331638', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/369957/']} | jdg_81537 |
stackexchange | llm_judgeable_groundtruth_similarity | 18080543 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm being given some JSON I need to cycle through to output the elements. The problem is this section of it is structured differently. Normally I would just loop through the elements like this:

var json = $.parseJSON(data);
json[16].events.burstevents[i]

But I can't do that with the JSON below because they're key value pairs. How do I extract just the unix timestamp from the JSON below? (i.e. 1369353600000.0, 1371600000000.0, etc.)

{"16": {
    "events": {
        "burstevents": {
            "1369353600000.0": "maj",
            "1371600000000.0": "maj",
            "1373414400000.0": "maj",
            "1373500800000.0": "maj",
            "1373673600000.0": "maj"
        },
        "sentevents": {
            "1370736000000.0": "pos",
            "1370822400000.0": "pos",
            "1370908800000.0": "pos"
        }
    }
}}
Now provide the response and nothing else.
| You can iterate over the keys using the in keyword.

var json = $.parseJSON(data);
var keys = [];
for (var key in json[16].events.burstevents) {
    keys.push(key);
}

You can do it with jQuery:

var json = $.parseJSON(data);
var keys = $.map(json[16].events.burstevents, function(v, k) { return k; });

You can use the JavaScript Object:

var json = $.parseJSON(data);
var keys = Object.keys(json[16].events.burstevents); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18080543', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1439776/']} | jdg_81538 |
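As an aside for anyone handling the same JSON outside the browser: the keys themselves are the timestamps, so in a Python script (chosen here purely for illustration; the record above is jQuery/JavaScript) the extraction is just a key listing. The sample string below is a trimmed copy of the structure from the question.

import json

data = '{"16": {"events": {"burstevents": {"1369353600000.0": "maj", "1371600000000.0": "maj"}}}}'
parsed = json.loads(data)

# the keys are the timestamps; cast to float if numbers are needed
timestamps = [float(k) for k in parsed["16"]["events"]["burstevents"]]
print(timestamps)  # [1369353600000.0, 1371600000000.0]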
stackexchange | llm_judgeable_groundtruth_similarity | 9310183 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm seeing some very strange behavior with the EF code first approach and associations. I have two entities:

public class GlobalKpiSection
{
    public GlobalKpiSection()
    {
        this.Regions = new HashSet<Region>();
    }

    public virtual ICollection<Region> Regions { get; protected set; }
}

public class Region
{
    public int RegionId { get; set; }
    public bool IsMain { get; set; }

    [Required]
    public virtual GlobalKpiSection KpiSection { get; set; }
}

I need the Required attribute on the KpiSection property in order to get cascade deletes. The problem is the following - in this code:

var mainRegion = context.Regions.Single(x => x.RegionId == id);
mainRegion.IsMain = true;
context.SaveChanges();

I'm getting an exception that the Required field is not initialized. But it is present, just not loaded. I don't want to write explicit includes for properties everywhere I use this entity. What can I do to overcome this?

UPDATE

The reason why I'm sure it's a lazy loading issue is that:

var primaryRegion = context.Regions
    .Include(x => x.KpiSection)
    .Single(x => x.RegionId == id);

solves the issue, but it's definitely an awful solution.
Now provide the response and nothing else.
| That is why you should not use data annotations. Data annotations are the wrong feature because they do both mapping and validation (a violation of single responsibility) - as you see, it is not always what you want. So your current options are:

- Turn off validation with context.Configuration.ValidateOnSaveEnabled = false
- Expose a non-nullable KpiSectionId foreign key property in your Region entity (you will not need the Required attribute on your navigation property).
- Use the fluent API instead of data annotations. Example:

modelBuilder.Entity<GlobalKpiSection>()
    .HasMany(s => s.Regions)
    .WithRequired(r => r.KpiSection); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9310183', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/72174/']} | jdg_81539 |
stackexchange | llm_judgeable_groundtruth_similarity | 7926864 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What are the benefits of using the new fork/join framework over just simply splitting the big task into N subtasks in the beginning, sending them to a cached thread pool (from Executors ) and waiting for each task to complete? I fail to see how using the fork/join abstraction simplifies the problem or makes the solution more efficient from what we've had for years now. For example, the parallelized blurring algorithm in the tutorial example could be implemented like this:

public class Blur implements Runnable {
    private int[] mSource;
    private int mStart;
    private int mLength;
    private int[] mDestination;
    private int mBlurWidth = 15; // Processing window size, should be odd.

    public Blur(int[] src, int start, int length, int[] dst) {
        mSource = src;
        mStart = start;
        mLength = length;
        mDestination = dst;
    }

    public void run() {
        computeDirectly();
    }

    protected void computeDirectly() {
        // As in the example, omitted for brevity
    }
}

Split in the beginning and send tasks to a thread pool:

// source image pixels are in src
// destination image pixels are in dst
// threadPool is a (cached) thread pool

int maxSize = 100000; // analogous to F-J's "sThreshold"
List<Future> futures = new ArrayList<Future>();

// Send stuff to thread pool:
for (int i = 0; i < src.length; i += maxSize) {
    int size = Math.min(maxSize, src.length - i);
    Blur task = new Blur(src, i, size, dst);
    Future f = threadPool.submit(task);
    futures.add(f);
}

// Wait for all sent tasks to complete:
for (Future future : futures) {
    future.get();
}

// Done!

The tasks go to the thread pool's queue, from which they're executed as worker threads become available. As long as the splitting is granular enough (to avoid having to particularly wait for the last task) and the thread pool has enough (at least N of processors) threads, all processors are working at full speed until the whole computation is done.

Am I missing something? What's the added value of using the fork/join framework?
Now provide the response and nothing else.
| I think the basic misunderstanding is, that the Fork/Join examples do NOT show work stealing but only some kind of standard divide and conquer.

Work stealing would be like this: Worker B has finished his work. He is a kind one, so he looks around and sees Worker A still working very hard. He strolls over and asks: "Hey lad, I could give you a hand." A replies. "Cool, I have this task of 1000 units. So far I have finished 345 leaving 655. Could you please work on number 673 to 1000, I'll do the 346 to 672." B says "OK, let's start so we can go to the pub earlier."

You see - the workers must communicate between each other even when they started the real work. This is the missing part in the examples.

The examples on the other hand show only something like "use subcontractors": Worker A: "Dang, I have 1000 units of work. Too much for me. I'll do 500 myself and subcontract 500 to someone else." This goes on until the big task is broken down into small packets of 10 units each. These will be executed by the available workers. But if one packet is a kind of poison pill and takes considerably longer than other packets -- bad luck, the divide phase is over.

The only remaining difference between Fork/Join and splitting the task upfront is this: When splitting upfront you have the work queue full right from start. Example: 1000 units, the threshold is 10, so the queue has 100 entries. These packets are distributed to the threadpool members.

Fork/Join is more complex and tries to keep the number of packets in the queue smaller:

Step 1: Put one packet containing (1...1000) into queue
Step 2: One worker pops the packet (1...1000) and replaces it with two packets: (1...500) and (501...1000).
Step 3: One worker pops packet (500...1000) and pushes (500...750) and (751...1000).
Step n: The stack contains these packets: (1..500), (500...750), (750...875)... (991..1000)
Step n+1: Packet (991..1000) is popped and executed
Step n+2: Packet (981..990) is popped and executed
Step n+3: Packet (961..980) is popped and split into (961...970) and (971..980)
.....

You see: in Fork/Join the queue is smaller (6 in the example) and the "split" and "work" phases are interleaved. When multiple workers are popping and pushing simultaneously the interactions are not so clear of course. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7926864', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/59279/']} | jdg_81540 |
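A toy way to see the queue-size contrast described above is to simulate both strategies and count queued packets. The sketch below is plain Python used as pseudocode; it has nothing to do with the actual java.util.concurrent implementation, and the numbers (1000 units, threshold 10) simply mirror the example above.

from collections import deque

THRESHOLD = 10

def upfront_split(total=1000):
    # split everything first, then work: the queue holds total/THRESHOLD packets at once
    queue = deque((i, min(i + THRESHOLD, total)) for i in range(0, total, THRESHOLD))
    return len(queue)

def forkjoin_style(total=1000):
    # split lazily: pop a packet, either halve it or "execute" it, and track the peak queue size
    queue = [(0, total)]
    peak = 1
    while queue:
        lo, hi = queue.pop()
        if hi - lo > THRESHOLD:
            mid = (lo + hi) // 2
            queue.append((lo, mid))
            queue.append((mid, hi))
        # else: the packet is small enough, so it is "executed" immediately
        peak = max(peak, len(queue))
    return peak

print(upfront_split())    # 100 packets queued up front
print(forkjoin_style())   # peak queue size stays in the single digits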