Dataset fields:
- source: string (1 distinct value)
- task_type: string (1 distinct value)
- in_source_id: string (length 1 to 8)
- prompt: string (length 209 to 40.4k)
- gold_standard_solution: string (length 0 to 56.7k)
- verification_info: string (1 distinct value)
- metadata: string (length 138 to 225)
- problem_id: string (length 9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
15407075
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following problem:

- the version at master works fine
- the version of the last tag before master (say last ) has a bug
- a colleague needs a patch for his last revision for that certain bug

Okay. Let's ask our friend git bisect for the revision that fixed the bug:

git bisect start
git bisect bad last
git bisect good master

But that's not going to work:

Some good revs are not ancestor of the bad rev. git bisect cannot work properly in this case. Maybe you mistake good and bad revs?

Any hints to overcome this? Did I miss something in the docs? Now provide the response and nothing else.
As of git 2.7, you can use the arguments --term-old and --term-new. For instance, you can identify a problem-fixing commit thus:

git bisect start --term-new=fixed --term-old=unfixed
git bisect fixed master
git bisect unfixed $some-old-sha1

As you test, say git bisect fixed or git bisect unfixed as appropriate.

Old answer, for versions of git prior to 2.7

Instead of temporarily training yourself to think that bad means good and good means bad, why not create some aliases? In ~/.gitconfig add the following:

[alias]
  bisect-fixed = bisect bad
  bisect-unfixed = bisect good

You can start identifying a problem-fixing commit thus:

$ git bisect start
$ git bisect-fixed master
$ git bisect-unfixed $some-old-sha1

As you test, say git bisect-fixed or git bisect-unfixed as appropriate.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/15407075', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/520162/']}
jdg_87241
stackexchange
llm_judgeable_groundtruth_similarity
3262932
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I tried to solve this question myself but ended up asking here; please help me out. If $x$, $y$ and $z$ are positive real numbers and $a$, $b$ and $c$ are rational numbers, then what is the value of $$\frac{1}{1+x^{b-a}+x^{c-a}}+\frac{1}{1+x^{a-b}+x^{c-b}}+\frac{1}{1+x^{b-c}+x^{a-c}}\;?$$ Now provide the response and nothing else.
Since $x>0$, the denominators are non-zero (I don't see any reason why $y$ and $z$ are given as part of the problem). Now observe that \begin{align*}\frac{1}{1+x^{b-a}+x^{c-a}} & = \frac{x^a}{x^a+x^b+x^c}\\\frac{1}{1+x^{a-b}+x^{c-b}} & = \frac{x^b}{x^a+x^b+x^c}\\\frac{1}{1+x^{b-c}+x^{a-c}} & = \frac{x^c}{x^a+x^b+x^c}\end{align*} Now you add these to get $$LHS=\frac{x^a+x^b+x^c}{x^a+x^b+x^c}=1.$$
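A quick numerical sanity check of the identity (the sample values below are only an illustration and are not part of the original answer): take $x=2$, $a=0$, $b=1$, $c=2$. Then $$\frac{1}{1+2^{1}+2^{2}}+\frac{1}{1+2^{-1}+2^{1}}+\frac{1}{1+2^{-1}+2^{-2}}=\frac17+\frac27+\frac47=1,$$ which matches the rewriting above, since $\frac{x^a}{x^a+x^b+x^c}$, $\frac{x^b}{x^a+x^b+x^c}$ and $\frac{x^c}{x^a+x^b+x^c}$ evaluate to $\frac17$, $\frac27$ and $\frac47$ for these values.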
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3262932', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/677227/']}
jdg_87242
stackexchange
llm_judgeable_groundtruth_similarity
10866740
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am experimenting a little bit with RequireJS 2.0.1 . My goal is to load correctly jQuery, Underscore and Backbone. From the original RequireJS doc I discovered that the author J. Burke added (to this new release) a new config option called shim . Then I wrote this stuff down here:

index.html

<!DOCTYPE html>
<html>
  <head>
    <title>Testing time</title>
    <script data-main="scripts/main" src="scripts/require.js"></script>
  </head>
  <body>
    <h1>Testing time</h1>
  </body>
</html>

scripts/main.js

requirejs.config({
  shim: {
    'libs/jquery': { exports: '$' },
    'libs/underscore': { exports: '_' },
    'libs/backbone': {
      deps: ['libs/underscore', 'libs/jquery'],
      exports: 'Backbone'
    }
  }
});

define(
  ['libs/jquery', 'libs/underscore', 'libs/backbone'],
  function (jQueryLocal, underscoreLocal, backboneLocal) {
    console.log('local', jQueryLocal);
    console.log('local', underscoreLocal);
    console.log('local', backboneLocal);
    console.log('global', $);
    console.log('global', _);
    console.log('global', Backbone);
  });

Everything seems to work quite fine, but I have the feeling that I'm missing something, I know that there are AMDed version of jQuery and Underscore but if the setup is so simple I don't understand why I should use them. So, is this setup right or I'm missing something? Now provide the response and nothing else.
You only need to use "shim" config if the library does not already call define() to declare a module. jQuery does this already, so you can remove that from the shim config. The above code will work as is, but the exports shim config for jQuery will be ignored since jQuery will call define() before the shim work is done. The downsides with the shim vs having the script call define() to define a module:

- It is less portable/reliable: every developer needs to do the shim config, and keep track of library changes. If the library author does it inline in the library, everyone gets the benefits more efficiently. The code templates at umdjs/umd can help with that code change.
- Less optimal code loading: shim config works by loading shim deps before loading the actual library. So it is a bit more sequential loading than parallel. It is faster if all scripts can be loaded in parallel, which is possible when define() is used. If you do a build/optimization for final deployment and combine all the scripts into one script, then this is not really a downside.
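To make the difference concrete, here is a minimal sketch (not from the original answer; the module names libs/mylib and libs/oldlib are made up, and the only APIs assumed are RequireJS's define(), require() and requirejs.config()). A script that calls define() itself needs no shim entry and can be loaded in parallel, while a legacy script that only sets a global still needs one:

// libs/mylib.js -- a hypothetical library that registers itself as an AMD module.
// Because it calls define() itself, it needs no shim entry.
define(['libs/jquery'], function ($) {
  return {
    greet: function (name) {
      return 'Hello, ' + name + '!';
    }
  };
});

// scripts/main.js -- only the legacy, non-AMD script still needs a shim.
requirejs.config({
  shim: {
    // hypothetical old script that only sets a global variable named OldLib
    'libs/oldlib': { exports: 'OldLib' }
  }
});

require(['libs/mylib', 'libs/oldlib'], function (mylib, OldLib) {
  console.log(mylib.greet('world')); // "Hello, world!"
  console.log(typeof OldLib);
});

This mirrors the trade-off described above: the shimmed script forces some sequential loading, while the define()-based module does not.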
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10866740', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/597097/']}
jdg_87243
stackexchange
llm_judgeable_groundtruth_similarity
3418192
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is a problem in my Qualifying Exam. "Suppose $f:[0,1]\to \mathbb{R}$ is in $L^1$ (Lebesgue measure) and for every measurable $A\subset [0,1]$ with $m(A)=\frac 1{\pi}$ we have $\int_A f dm=0$ . Prove that $f=0$ a.e." I could not do it back then. I did my research and we have a similar problem here Integral vanishes on all intervals implies the function is a.e. zero . But the same method cannot be applied. Anyway, I cannot think of anything except for let $B$ be a set of measure $1/4$ and try to make the integral 0. However, I forgot that this is on the real line, so there is no monotonicity here. Anyone can help? Now provide the response and nothing else.
Let $E = \{f > 0\}$ . If $m(E) \ge \dfrac 1\pi$ then $E$ contains a subset $A$ with $m(A) = \dfrac 1\pi$ and necessarily $\displaystyle \int_A f > 0$ . Thus $m(E) < \dfrac 1\pi$ . Likewise, if $F = \{f < 0\}$ , then $m(F) < \dfrac 1\pi$ . Define $G = \{f = 0\}$ and note that $m(G) = 1 - \dfrac 2\pi > \dfrac 1\pi$ . Suppose that $m(E) > 0$ . Select $H \subset G$ with $m(H) = \dfrac 1\pi - m(E)$ and observe that $\displaystyle \int_{E \cup H} f > 0$ , contrary to hypothesis. Thus $m(E) = 0$ . Likewise $m(F) = 0$ .
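To make the final step fully explicit (this is just the arithmetic behind the contradiction, using the sets defined above and the supposition $m(E) > 0$): since $H \subset G$ is disjoint from $E$ and $m(H) = \dfrac 1\pi - m(E)$, $$m(E \cup H) = m(E) + m(H) = \frac 1\pi \qquad\text{and}\qquad \int_{E \cup H} f = \int_E f + \int_H f = \int_E f > 0,$$ because $f = 0$ on $H$ while $f > 0$ on the set $E$ of positive measure. So $E \cup H$ is a measurable set of measure $\dfrac 1\pi$ whose integral is non-zero, contradicting the hypothesis.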
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3418192', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/721021/']}
jdg_87244
stackexchange
llm_judgeable_groundtruth_similarity
19875
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As per the accepted answer to this question , I'm attempting to use modelines in vim to force filetype detection in some files. For example, at the top of a file named gitconfig (note there is no leading . ), I have the following line:

# vim: set filetype=gitconfig :

modeline is enabled on my system. However, when I open the file in vim, set filetype? returns conf , rather than the expected gitconfig . Is it possible that other parts of my vim configuration (e.g. filetype.vim) are causing this strange behaviour?

Edited in response to comments:

- set compatible? returns nocompatible
- set modeline? returns modeline
- verbose set filetype? returns: filetype=conf Last set from /usr/share/vim/vim73/filetype.vim

I don't understand why the system wide filetype plugin would be overriding what I have set in the file itself. One final note: this is the version of Vim 7.3 shipped with OSX. The latest version of MacVim running on the same system using the same .vimrc behaves as expected, with set ft? returning filetype=gitconfig . Now provide the response and nothing else.
So, after some digging, it transpires that the system vimrc shipped with OSX sets the modelines (note the trailing 's') variable to 0. This variable controls the number of lines in a file which are checked for set commands. Setting modelines to a non-zero value in my .vimrc solved the problem. Full output, for the curious: the output of vim --version prompted me to check the system vimrc:

% vim --version
VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Jun 24 2011 20:00:09)
Compiled by [email protected]
version without GUI. Features included (+) or not (-):
...
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
user exrc file: "$HOME/.exrc"
fall-back for $VIM: "/usr/share/vim"
Compilation: gcc -c -I. -D_FORTIFY_SOURCE=0 -Iproto -DHAVE_CONFIG_H -arch i386 -arch x86_64 -g -Os -pipe
Linking: gcc -arch i386 -arch x86_64 -o vim -lncurses

Looking at the system vimrc:

% cat /usr/share/vim/vimrc
" Configuration file for vim
set modelines=0 " CVE-2007-2438
...

Led me to the modelines variable. It appears that MacVim does not source this system file (perhaps looking for a system GVIMRC instead? :help startup isn't clear).

VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Jul 27 2011 19:46:24)
MacOS X (unix) version
Included patches: 1-260
Compiled by XXXXX
Huge version with MacVim GUI. Features included (+) or not (-):
...
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
user exrc file: "$HOME/.exrc"
system gvimrc file: "$VIM/gvimrc"
user gvimrc file: "$HOME/.gvimrc"
system menu file: "$VIMRUNTIME/menu.vim"
fall-back for $VIM: "/Applications/MacVim.app/Contents/Resources/vim"
Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_MACVIM -Wall -Wno-unknown-pragmas -pipe -DMACOS_X_UNIX -no-cpp-precomp -g -O2 -D_FORTIFY_SOURCE=1
Linking: gcc -L. -Wl,-syslibroot,/Developer/SDKs/MacOSX10.6.sdk -L/usr/local/lib -o Vim -framework Cocoa -framework Carbon -lncurses -liconv -framework Cocoa -fstack-protector -L/usr/local/lib -L/System/Library/Perl/5.10/darwin-thread-multi-2level/CORE -lperl -lm -lutil -lc -framework Python -framework Ruby
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/19875', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/2459/']}
jdg_87245
stackexchange
llm_judgeable_groundtruth_similarity
189817
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In a multichoice online test that I did the other day, I was required to select the Maclaurin series for $e^{\tan(x)}$. It was necessary for me to find the first four terms in order to establish which answer was correct. In the end of year exam, I will have reference to a Useful Information booklet (this contains a generalized Taylor polynomial approximation and Maclaurin series of $e^x$, $(1+x)^n$, $\sin(x)$, $\cos(x)$ and $\ln(1+x)$), and no calculator - hence throughout all of of my work, including online tests (which do contribute to my grade), I choose only to work with this resource, as preparation for this exam. As my approach for this problem, I used the generalized Taylor polynomial approximation to find the Maclaurin series for $\tan(x)$ and substituted this series in place of $x$ into the given Maclaurin series for $e^x$, and double-checked my answer once I had finished every other question by using the generalized Taylor polynomial approximation to find the Maclaurin series for $e^{\tan(x)}$. Obviously, as you can imagine, both of these methods were very time consuming (especially when you consider that the other nineteen questions in the test collectively took me less than ten minutes to answer). I'm probably missing a simple concept here. Can you please help me to establish a more elegant approach to this problem? The choices I was given were as follows: Now provide the response and nothing else.
Try the following. If you only want the first four terms then you can compute everything $\bmod x^4$. Then \begin{eqnarray*} \tan x &=& \frac{\sin x}{\cos x} \\ &\equiv& \frac{x - \frac{x^3}{6}}{1 - \frac{x^2}{2}} \bmod x^4 \\ &\equiv& \left( x - \frac{x^3}{6} \right) \left( 1 + \frac{x^2}{2} \right) \bmod x^4 \\ &\equiv& x + \frac{x^3}{3} \bmod x^4. \end{eqnarray*} Then \begin{eqnarray*} e^{\tan x} &\equiv& e^x e^{\frac{x^3}{3} } \bmod x^4 \\ &\equiv& \left( 1 + x + \frac{x^2}{2} + \frac{x^3}{6} \right) \left( 1 + \frac{x^3}{3} \right) \bmod x^4 \\ &\equiv& 1 + x + \frac{x^2}{2} + \frac{x^3}{2} \bmod x^4. \end{eqnarray*} This took a little under 5 minutes on computer, and by hand it probably would have been a little faster. Is that fast enough?
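As an independent cross-check of the result (not part of the original answer), one can also differentiate $f(x)=e^{\tan x}$ directly at $0$: $$f'(x)=\sec^2x\,e^{\tan x},\qquad f''(x)=\left(2\sec^2x\tan x+\sec^4x\right)e^{\tan x},$$ so $f(0)=1$, $f'(0)=1$, $f''(0)=1$, and one more differentiation gives $f'''(0)=3$. Hence $$e^{\tan x}=1+x+\frac{1}{2!}x^2+\frac{3}{3!}x^3+\cdots=1+x+\frac{x^2}{2}+\frac{x^3}{2}+\cdots,$$ in agreement with the computation above (though the $\bmod x^4$ route is clearly faster by hand).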
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/189817', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/19905/']}
jdg_87246
stackexchange
llm_judgeable_groundtruth_similarity
252367
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: There are two statement about a matrix under rank-one updates that I would be grateful if you give me some insightful proofs. Suppose $A$ be a nonsingular $n \times n$ matrix and $\mathbf{u},\mathbf{v}$ be vectors. First, Sherman-Morrison Formula states that: $$(A+\mathbf{u}\mathbf{v}^T)^{-1} = A^{-1} - \frac{A^{-1}\mathbf{u}\mathbf{v}^TA^{-1}}{1+\mathbf{v}^TA^{-1}\mathbf{u}}.$$ (We can prove this by verifying that the RHS multiplied by $A+\mathbf{u}\mathbf{v}^T$ is $I$.) Second, Matrix Determinant Lemma states that:$$\det(A+\mathbf{u}\mathbf{v}^T)=\det(A)(1+\mathbf{v}^T A^{-1}\mathbf{u}).$$ (In the proof from wikipedia , we just have to verify some identity again.) It's easy to verify these proofs but it's not clear to me how to come up with the identity. Are there any other proofs which are not just by multiplication of matrices, and give us some insight ? Or even some informal explanation ? Now provide the response and nothing else.
Look first at $I + \mathbf{u}\mathbf{v}^\top$. This may require factoring $A + \mathbf{u}\mathbf{v}^\top = A\left(I + A^{-1}\mathbf{u}\mathbf{v}^\top\right)$ which may help the intuition for the term $A^{-1}\mathbf{u}$ in the formulas. Consider how $I + \mathbf{u}\mathbf{v}^\top$ acts on the vector $\mathbf{u}$:$$\left(I + \mathbf{u}\mathbf{v}^\top\right)\mathbf{u} = \mathbf{u} + \mathbf{u}\mathbf{v}^\top\mathbf{u} = \left(1+\mathbf{v}^\top\mathbf{u}\right)\mathbf{u}$$ This shows that $\mathbf{u}$ is a right eigenvector with eigenvalue $1+\mathbf{v}^\top\mathbf{u}$. The inverse must have the same eigenvector but with eigenvalue $(1+\mathbf{v}^\top\mathbf{u})^{-1}$. (If $\mathbf{v}^\top\mathbf{u}=-1$ then the matrix is singular.) The rest of the eigenvalues are ones, since any $\mathbf{b}$ such that $\mathbf{v}^\top\mathbf{b} = 0$ gives $\left(I + \mathbf{u}\mathbf{v}^\top\right)\mathbf{b}=\mathbf{b}$. This completes the entire spectrum (so long as $\mathbf{v}^\top\mathbf{u} \ne -1$), showing eigenvalues of ones and the value of $1 + \mathbf{v}^\top\mathbf{u}$. From here notice that any matrix with such a spectrum must be of the form $I+\mathbf{u}g\mathbf{v}^\top$ (after the factorization of $A$ mentioned earlier) where $g$ is any scalar. This is the general form of matrix that has such a spectrum, with all except one of the eigenvalues as ones, the other eigenvalue having right and left eigenvectors $\mathbf{u}$ and $\mathbf{v}^\top$ (and eigenvalue parametric in the variable $g$). Once the necessity of that form is realized, the rest is algebra, finding the value for $g$ that solves the equations. For example, the inverse: \begin{align} \left(I+ \mathbf{u}\mathbf{v}^\top\right) \left(I+ \mathbf{u}\mathbf{v}^\top\right)^{-1} &= I \\ \left(I+ \mathbf{u}\mathbf{v}^\top\right) \left(I+ \mathbf{u}g\mathbf{v}^\top\right) &=I \\ I+ \mathbf{u}g\mathbf{v}^\top+ \mathbf{u}\mathbf{v}^\top + \mathbf{u}\mathbf{v}^\top\mathbf{u}g\mathbf{v}^\top&=I \\ I+ \mathbf{u}\left(g+ 1 + \mathbf{v}^\top\mathbf{u}g \right)\mathbf{v}^\top&=I \\ \Rightarrow g+ 1 + \mathbf{v}^\top\mathbf{u}g &= 0 \\ g(1+\mathbf{v}^\top\mathbf{u}) &= -1 \\ g = \frac{-1}{1+\mathbf{v}^\top\mathbf{u}} \\\end{align} So that we have$$\left(I+ \mathbf{u}\mathbf{v}^\top\right)^{-1} = \left(I+ \mathbf{u}\frac{-1}{1+\mathbf{v}^\top\mathbf{u}}\mathbf{v}^\top\right)$$
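As a small addendum (not part of the original answer), the same spectral picture also gives the determinant lemma asked about in the question. For an $n\times n$ matrix, the eigenvalues of $I+\mathbf{u}\mathbf{v}^\top$ are $1$ with multiplicity $n-1$ and $1+\mathbf{v}^\top\mathbf{u}$ (for $\mathbf{v}^\top\mathbf{u}\neq 0$ this is a full eigenbasis, and the resulting identity extends to all $\mathbf{u},\mathbf{v}$ by continuity), so $$\det\left(I+\mathbf{u}\mathbf{v}^\top\right)=1+\mathbf{v}^\top\mathbf{u},$$ and with the factorization $A+\mathbf{u}\mathbf{v}^\top=A\left(I+A^{-1}\mathbf{u}\mathbf{v}^\top\right)$ mentioned at the start, $$\det\left(A+\mathbf{u}\mathbf{v}^\top\right)=\det(A)\,\det\left(I+A^{-1}\mathbf{u}\mathbf{v}^\top\right)=\det(A)\left(1+\mathbf{v}^\top A^{-1}\mathbf{u}\right).$$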
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/252367', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/43386/']}
jdg_87247
stackexchange
llm_judgeable_groundtruth_similarity
65935
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The (uncentered) Hardy-Littlewood maximal function $M(f)$ of (a locally integrable) function $f$ on $\mathbb{R}^{n}$ is defined by the rule $M(f)(x)=\sup_{\delta>0,\left|y-x\right|<\delta} \text{Avg}_{B(y,\delta)} \left|f\right|$, where $\text{Avg}_{B(y,\delta)} \left|f\right| = \int_{\left|z\right|<\delta} f(y-z) dz$. The following results regarding the (uncentered) Hardy-Littlewood maximal function are well-known and can be found in many basic texts on analysis (e.g. Loukas Grafakos' "Classical Fourier Analysis", Chapter 2, pages 78-81): The Hardy-Littlewood maximal function is a bounded operator from $L^1(\mathbb{R}^n)$ to $L^{1,\infty}(\mathbb{R}^n)$ (i.e., weak $L^1$) of norm at most $3^n$ ($n$ is the dimension of the Euclidean space). Since the Hardy-Littlewood maximal function is also a bounded operator from $L^{\infty}(\mathbb{R}^n)$ to itself with norm at most $1$ (this is clear), we can apply the Marcinkiewicz interpolation theorem to conclude that for all $1 < p < \infty$, the operator norm of the Hardy-Littlewood maximal function is at most $2\left(\frac{p}{p-1}\right)^{\frac{1}{p}}3^{\frac{n}{p}}$. In fact, there is a slightly better bound: $\frac{p}{p-1}3^{\frac{n}{p}}$. The bound given above grows exponentially with the dimension $n$ (if $p$ is fixed). It is a fact that it cannot be improved to a bound that does not grow exponentially with the dimension $n$. My questions: Is an exact value for the norm of the (uncentered) Hardy-Littlewood maximal function, viewed as a bounded operator from $L^p$ to itself ($1<p<\infty$), known? If so, what is it? Also, what is the norm of the Hardy-Littlewood maximal function when it is viewed as an operator from $L^1$ to weak $L^1$ (if it is known)? Are the answers to the analogous questions regarding the centered Hardy-Littlewood maximal function known? I apologize if this question is too basic. It seems like a fairly simple question but it is not clear (at least to me) how to solve it. Now provide the response and nothing else.
Those are basic yet difficult questions. I don't know much about the uncentered case, but here is some information on the centered case. A nonempty set $B \subseteq \mathbb{R}^d$ is centrally symmetric with respect to $p \in B$ if $B$ is invariant under the affine transform $x \mapsto 2p - x$. We say that $B$ is a centrally symmetric convex body if $B$ is compact, convex, and centrally symmetric with respect to the origin. Consider the centered Hardy-Littlewood maximal function over a centrally symmetric convex body $B \subseteq \mathbb{R}^d$ $$\mathcal{M}_Bf(x) = \sup_{r > 0} \frac{1}{m(rB)} \int_{rB} |f(x+y)| \, dy,$$which is defined for all $f \in L^1_{\mathrm{loc}}(\mathbb{R}^d)$.As Willie said in his comment, the classical result is: Theorem 1 ( Stein-Stromberg, 1983 ). There exists a constant $c > 0$ such that $$\|\mathcal{M}_B\|_{L^1(\mathbb{R}^d) \to L^{1,\infty}(\mathbb{R}^d)} \leq c \, d \log d$$ for each $d > 1$ and every centrally symmetric convex body $B \subseteq \mathbb{R}^d$. Over the standard Euclidean ball $B_2$, we have the improved bound $$ \|\mathcal{M}_{B_2}\|_{L^1(\mathbb{R}^d) \to L^{1,\infty}(\mathbb{R}^d)} \leq c' \, d$$ for all $d \geq 1$. In the same paper, Stein and Strömberg conjectured the following, which is unresolved as of today (May 19, 2014): Conjecture 2 (Stein-Strömberg maximal function conjecture, 1983). For each fixed centrally symmetric convex body $B \subseteq \mathbb{R}^d$, $$\sup_{d \geq 1} \|\mathcal{M}_B\|_{L^1(\mathbb{R}^d) \to L^{1,\infty}(\mathbb{R}^d)} < \infty.$$ Stein and Strömberg also conjectured a stronger statement, which asserts a uniform control over all dimensions $d$ and centrally symmetric convex bodies $B$. J. M. Aldaz disproved the conjecture in 2011: Theorem 3 ( Aldaz, 2011 ). If $B = B_\infty$, the $l_\infty$ ball, then $$\lim_{d \to \infty} \|\mathcal{M}_B\|_{L^1(\mathbb{R}^d) \to L^{1,\infty}(\mathbb{R}^d)} = \infty.$$ No significant improvement on the Euclidean-ball Stein-Strömberg maximal function conjecture is known. The best constant is known only in dimension 1, as Shaoming mentioned: Theorem 4 ( Melas, 2003 ). $$\|\mathcal{M}_{B_2}\|_{L^1(\mathbb{R}) \to L^{1,\infty}(\mathbb{R})} = \frac{11 + \sqrt{61}}{21}.$$ In fact, A. Naor and T. Tao showed that the convex-body Stein-Strömberg bound $d \log d$ is essentially sharp in a large class of metric measure spaces. For details, see Tao's blog post on the paper . Perhaps there is no uniform bound, after all: These results suggest (at least to us) that uniform bounds...may fail to exist if one uses euclidean balls (the original question of Stein and Strömberg) since there seems to be no reason to believe that the maximal operator associated to euclidean balls is substantially smaller than the maximal operator associated to cubes. ( Aldaz-Lázaro, 2013 ; p.228) As for the $L^p \to L^p$ bounds, the classical result is due to E. M. Stein: Theorem 5 ( Stein, 1982 ). For each $1 < p \leq \infty$ and every centerally symmetric convex body $B \subseteq \mathbb{R}^d$, $$\sup_{d \geq 1} \|\mathcal{M}_B\|_{L^p(\mathbb{R}^d) \to L^p(\mathbb{R}^d)} < \infty.$$ Improved results in this direction are mostly due to J. Bourgain. The classical one is the following: Theorem 6 (Bourgain, 1986: paper 1 , paper 2 ). For each $p > 3/2$, $$\sup_{d,B} \|\mathcal{M}_B\|_{L^p(\mathbb{R}^d) \to L^p(\mathbb{R}^d)} < \infty,$$ where the supremum is taken over all $d \geq 1$ and centrally symmetric convex bodies $B \subseteq \mathbb{R}^d$. 
It is then natural to conjecture the following: Conjecture 7 ( Bourgain's maximal function conjecture, 1986 ). For each $p > 1$, $$\sup_{d,B} \|\mathcal{M}_B\|_{L^p(\mathbb{R}^d) \to L^p(\mathbb{R}^d)} < \infty,$$ where the supremum is taken over all $d \geq 1$ and centrally symmetric convex bodies $B \subseteq \mathbb{R}^d$. Once again, Bourgain's maximal function conjecture is unresolved as of today (May 19, 2014). Bourgain proved his own conjecture for the $l^\infty$-ball $B_\infty$ in December 2012 . Another partial result of note is that of D. Müller from 1990, which provides an estimate based on various geometric conditions on the body $B$. See Müller's paper for details.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/65935', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4842/']}
jdg_87248
stackexchange
llm_judgeable_groundtruth_similarity
545691
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: On Ubuntu 18.04, I have the following behavior of date :

$ date --version | head -n1
date (GNU coreutils) 8.28
$ date
Вт окт 8 13:18:18 MSK 2019
$ TZ=UTC date
Вт окт 8 10:18:23 UTC 2019

So far so good. But now I'm trying to do the same on Raspbian 9:

$ date --version | head -n1
date (GNU coreutils) 8.26
$ date
Tue Oct 8 13:18:50 MSK 2019
$ TZ=UTC date
Tue Oct 8 13:18:51 MSK 2019

What could be the reason for Raspbian version of date to ignore the TZ environment variable? Now provide the response and nothing else.
I can think of two possible causes:

- the file /usr/share/zoneinfo/UTC is not present or is corrupted on your Raspbian 9, so glibc fails to implement the TZ variable setting and falls back to system default timezone,
- you may have a previously-configured TZ variable that has been marked as read-only, so your attempt to change it won't take effect.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/545691', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/27672/']}
jdg_87249
stackexchange
llm_judgeable_groundtruth_similarity
1679421
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Class variable that holds a certain type and I need to get a variable that holds the corresponding array class. The best I could come up with is this: Class arrayOfFooClass = java.lang.reflect.Array.newInstance(fooClass, 0).getClass(); Is there a way to do this without creating the new instance? Now provide the response and nothing else.
Since Java 12 Class provides a method arrayType() , which returns the array type class whose component type is described by the given Class. Please be aware that the individual JDK may still create an instance of that Class³.

Class<?> stringArrayClass = String.class.arrayType();

Before Java 12

If you don't want to create an instance, you could create the canonical name of the array manually and get the class by name:

// Replace `String` by your object type.
Class<?> stringArrayClass = Class.forName(
    "[L" + String.class.getCanonicalName() + ";");

But Jakob Jenkov argues in his blog that your solution is the better one, because it doesn't need fiddling with strings.

Class<?> stringArrayClass = Array.newInstance(String.class, 0).getClass();

³ Thanks for the hint to Johannes Kuhn .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1679421', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/143438/']}
jdg_87250
stackexchange
llm_judgeable_groundtruth_similarity
4628263
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let's take a vector field $X$ on some Riemannian manifold $(\mathcal{M},g)$ with Levi-Civita connection $\nabla$ . Then, the components of its covariant derivative in coordinates are $$\nabla_{\alpha}X^{\beta}=\partial_{\alpha}X^{\beta}+\Gamma_{\alpha\gamma}^{\beta}X^{\gamma}$$ Now, since $\nabla X$ is again a tensor field, we can apply a second covariant derivative, which in coordinates yields $$\nabla_{\gamma}\nabla_{\alpha}X^{\beta}:=\nabla_{\gamma}(\nabla_{\alpha}X^{\beta})=\partial_{\gamma}(\nabla_{\alpha}X^{\beta})-\Gamma_{\gamma\alpha}^{\delta}\nabla_{\delta}X^{\beta}+\Gamma_{\gamma\delta}^{\beta}\nabla_{\alpha}X^{\delta}=...$$ Now, instead of writing it like this, let us formally change the order in which the covariant derivatives act, i.e. let us write $$\nabla_{\gamma}\nabla_{\alpha}X^{\beta}=\nabla_{\gamma}(\nabla_{\alpha}X^{\beta})=\nabla_{\gamma}(\partial_{\alpha}X^{\beta}+\Gamma_{\alpha\delta}^{\beta}X^{\delta})=\nabla_{\gamma}(\partial_{\alpha}X^{\beta})+\nabla_{\gamma}(\Gamma_{\alpha\delta}^{\beta}X^{\delta})=...$$ Now, mathematically speaking, the two terms on the right-hand side are ill-defined and do not make sense, since $\nabla$ is an operation acting on tensor fields and neither $\partial_{\alpha}X^{\beta}$ nor $\Gamma_{\alpha\delta}^{\beta}X^{\delta}$ are the components of a tensor field. However, if we treat these two terms as if they were rank (1,1) tensor fields, i.e. elements of $\Gamma^{\infty}(T\mathcal{M}\otimes T^{\ast}\mathcal{M})$ , and use the standard formula for the connection, we will find the same and correct result for the components of $\nabla^{2}X$ . Now, this seems to work in general, i.e. for arbitrary rank tensors and an arbitrary number of covariant derivatives (at least I have never seen a counterexample). My question, or let's say, my curiosity, is: Why does the second "approach", in which we produce ill-defined terms in the steps in-between, work? Is there any mathematical reason? For example, maybe one can extend the covariant derivative to more general "objects with indices" in a unique way, such that the steps in between become well-defined. For example, one can extend the covariant derivative to a map acting on tensor densities (i.e. sections of the tensor product of a tensor bundle and a density bundle). Maybe there is a similar and more general notion which also includes objects like the partial derivative and the Christoffel symbols. Now provide the response and nothing else.
The notion that Christoffel symbols are not tensor fields is not entirely true, or at least somewhat deceptive. If we choose a set of local coordinates $x^\alpha$ , then the corresponding Christoffel symbols $\Gamma^\alpha{}_{\beta\gamma}$ represent a perfectly well-defined local tensor field. Issues only arise when we try to interpret the objects involved as global, coordinate-independent objects, since a different set of coordinates will not generally result in the same tensor field, even on their common domain. In this particular computation, we are fixing a coordinate system at the start, and finding an expression for $\nabla\nabla X$ in terms of the Christoffel symbols of those coordinates, so we can interpret expressions like $\nabla_\alpha(\Gamma^{\beta}{}_{\gamma\delta})$ as a covariant derivative of a tensor field without issue.
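To see concretely why the formal split is harmless, one can do a short worked check (in the fixed chart, as above; this is an addendum, not part of the original answer). Write $A_{\alpha}{}^{\beta}=\partial_{\alpha}X^{\beta}$ and $B_{\alpha}{}^{\beta}=\Gamma^{\beta}{}_{\alpha\delta}X^{\delta}$, so that $A_{\alpha}{}^{\beta}+B_{\alpha}{}^{\beta}=\nabla_{\alpha}X^{\beta}$. Applying the usual $(1,1)$-component rule to each piece and adding, \begin{align*}\nabla_{\gamma}A_{\alpha}{}^{\beta}+\nabla_{\gamma}B_{\alpha}{}^{\beta}&=\partial_{\gamma}\left(A_{\alpha}{}^{\beta}+B_{\alpha}{}^{\beta}\right)-\Gamma^{\delta}{}_{\gamma\alpha}\left(A_{\delta}{}^{\beta}+B_{\delta}{}^{\beta}\right)+\Gamma^{\beta}{}_{\gamma\delta}\left(A_{\alpha}{}^{\delta}+B_{\alpha}{}^{\delta}\right)\\&=\partial_{\gamma}(\nabla_{\alpha}X^{\beta})-\Gamma^{\delta}{}_{\gamma\alpha}\nabla_{\delta}X^{\beta}+\Gamma^{\beta}{}_{\gamma\delta}\nabla_{\alpha}X^{\delta},\end{align*} which is exactly the standard expression for $\nabla_{\gamma}\nabla_{\alpha}X^{\beta}$: the split causes no harm because the covariant-derivative formula is linear in the tensor components.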
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4628263', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/674359/']}
jdg_87251
stackexchange
llm_judgeable_groundtruth_similarity
41269085
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking at the AVA test runner, and its concurrency feature seems pretty compelling. However, I'm used to Mocha, where you can organize your tests like so:

describe('Some Class', () => {
  describe('#someMethod', () => {
    describe('some condition', () => {
      it('does something', () => {});
    });
  });
});

By organizing tests this way you can easily tell what components are affected when a bunch of tests fail, and you can easily re-run the tests for a specific class/method/condition. But AVA doesn't have any of that. Its tests lack any "meta-information" at all and are just:

test(t => {
  t.deepEqual([1, 2], [1, 2]);
});

But obviously AVA is a popular and widely-used framework, so my question is: how does it work without test meta-information? Is there some other way of defining meta-information in AVA? Is the meta-information just not needed because of other features AVA has? Basically, as an AVA outsider, I'm trying to understand how it works when you have a real test suite (not just the basic tests shown in the AVA examples). Or to put it another way, if I switch to AVA, will I miss the test organization that's in Mocha (and most other test runners)? Now provide the response and nothing else.
Having used Ava for a bit now it appears that the answer to my question is that Ava does have suites, but unlike in Mocha (or similar frameworks) which explicitly define suites, the suites in Ava are implicit, and are based on the file containing the test. In other words, if you want to run certain code beforeEach test in a given "suite", you simply put those tests in the same file as the test.beforeEach statement, and then it will only run before those tests. Obviously this results in less granular test output, as you can't nest describe statements the way you can in Mocha. However, when Ava shows test output it uses the file structure to help alleviate this. For instance, a test "can login" in a file `services/facebook' would result in the following output: services › facebook › can login
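As an illustration of such an implicit, file-based suite (a hypothetical sketch, not taken from the original answer; the file path and the client object are made up, and only AVA's documented test, test.beforeEach and t.context APIs are assumed):

// test/services/facebook.js -- everything in this file forms one implicit suite
const test = require('ava');

// Runs before each test in this file only.
test.beforeEach(t => {
  t.context.client = { loggedIn: false, login: () => true };
});

test('can login', t => {
  t.true(t.context.client.login());
});

test('starts logged out', t => {
  t.false(t.context.client.loggedIn);
});

With a layout like this, AVA's reporter groups the results by file path, e.g. services › facebook › can login, which is the output format described above.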
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41269085', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5921/']}
jdg_87252
stackexchange
llm_judgeable_groundtruth_similarity
5396996
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm testing a class that is part of a hierarchy. I've been setting up my test classes with the object under test, and a PrivateObject to allow access to that object. I'm getting exceptions when I attempt to access private members of the parent class. The only workaround I've found so far is to pass a PrivateType specifying the base class to the PrivateObject constructor, but then it doesn't work on private members of the subclass. Is there some way I can do this, perhaps by using the binding flags parameter on the Get* methods of Private object? I did try using the automatically-generated Accessor classes (right-click in the main class, Create Private Accessor). However, that's worse: It shows a property I can read, but it throws the same exception as PrivateObject does, and there's no other options I can use (binding flags or whatnot) to fix the exception. Here's my sample test code. I'd like there to be some way to construct and use the PrivateObject to retrieve both fields.

public class BaseClass
{
    private int one = 1;
}

public class SubClass : BaseClass
{
    private int two = 2;
}

[TestClass]
public class UnitTest1
{
    BindingFlags flags = BindingFlags.FlattenHierarchy | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance;

    [TestMethod]
    public void TestMethod1()
    {
        SubClass test = new SubClass();
        PrivateObject priv = new PrivateObject(test);
        Assert.AreNotEqual<int>(0, (int)priv.GetFieldOrProperty("one", flags)); // System.MissingMethodException: Method 'PrivateObjectTester.SubClass.one' not found.
        Assert.AreNotEqual<int>(0, (int)priv.GetFieldOrProperty("two", flags));
    }

    [TestMethod]
    public void TestMethod2()
    {
        SubClass test = new SubClass();
        PrivateObject priv = new PrivateObject(test, new PrivateType(typeof(BaseClass)));
        Assert.AreNotEqual<int>(0, (int)priv.GetFieldOrProperty("one", flags));
        Assert.AreNotEqual<int>(0, (int)priv.GetFieldOrProperty("two", flags)); // System.MissingMethodException: Method 'PrivateObjectTester.BaseClass.two' not found.
    }
}

Now provide the response and nothing else.
I didn't find the answer, so this is what I ended up doing. I created PrivateObjects for each level of the class's hierarchy, and I just need to be careful when writing test cases that I use the proper one.

public class BaseClass
{
    private int one = 1;
}

public class SubClass : BaseClass
{
    private int two = 2;
}

[TestClass]
public class UnitTest1
{
    [TestMethod]
    public void TestMethod()
    {
        SubClass test = new SubClass();
        PrivateObject privSub = new PrivateObject(test, new PrivateType(typeof(SubClass)));
        PrivateObject privBase = new PrivateObject(test, new PrivateType(typeof(BaseClass)));

        Assert.AreNotEqual<int>(0, (int)privBase.GetFieldOrProperty("one"));
        Assert.AreNotEqual<int>(0, (int)privSub.GetFieldOrProperty("two"));
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5396996', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/480937/']}
jdg_87253
stackexchange
llm_judgeable_groundtruth_similarity
3169292
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Ruby on Rails 2.3.8 and I'd like to know how to organize models in subfolders so Ruby can recognize them. Now provide the response and nothing else.
To the best of my understanding, you need to namespace your models in order for them to be loaded. To use the generator:

> ./script/generate model Customer::Address

will create the model in app/models/customer/address.rb

class Customer::Address
end

It will load recursively, but in order for rails to find it, it will need to have the namespace that lines up with the path. Fair warning that when you use the generator (at least in rails 2.3.5 and lower is all I have tested this in): it will create the table name as customer_addresses , but the model will by default still look for a table name of addresses . You will either need to change the migration table name to addresses or add set_table_name 'customer_addresses' or similar to get the two to line up.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3169292', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/119331/']}
jdg_87254
stackexchange
llm_judgeable_groundtruth_similarity
234701
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have to prove that for $a_n\in\{0,1\}$, $$\sum_{n=1}^\infty \frac{a_n}{2^n}$$ always converges, for all $n\in\mathbb{N}$. I took the extreme examples, where the sequence is either all zeroes or all ones. If $a_n$ is a sequence of zeroes, then $S_n$ (the sequence of partial sums) will be zero. If $a_n$ is a sequence of ones, then $$S_1=\frac12$$$$S_2=\frac12+\frac14$$$$S_3=\frac12+\frac14+\frac18$$ and so on. Therefore, in this case, $S_n\le\frac12+\frac12=1$. So $0\le S_n\le1$. Is this a correct approach? Thanks Now provide the response and nothing else.
Royden's definition (at least in the Second Edition - I doubt that it has changed) of $\int_E f$, where $f$ is a nonnegative measurable function on measurable set $E$, is $$ \int_E f = \sup_{h \le f} \int_E h$$where $h$ is a bounded measurable function such that $m\{x: h(x) \ne 0\}$ is finite. So, if $h$ is such a function and $\mu(E) = 0$, what is $\int_E h$?
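To spell out the hinted computation (this is just the step the hint points to, under the stated conditions on $h$): if $|h|\le M$ and $m(E)=0$, then $$\left|\int_E h\right|\le\int_E |h|\,dm\le M\, m(E)=0,$$ so every admissible $h$ contributes $\int_E h=0$, and therefore $\int_E f=\sup_{h\le f}\int_E h=0$.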
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/234701', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/43232/']}
jdg_87255
stackexchange
llm_judgeable_groundtruth_similarity
1979494
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Say there are two matrices, $A \in M_{m\times n} (\mathbb {F})$ and $B\in M_{m\times p} (\mathbb {F})$, both in reduced row echelon form, where the column spaces, $C(B) \subseteq C(A)$. Is it true that the columns of $B$ plus additional columns of $A$ span the column space of $A$? Can we also say that since $B$ has $p$ columns and $A$ has $n$ columns, that $p\le n$? Now provide the response and nothing else.
Here's another proof. Observe\begin{align}\dot x + x = f(t) \ \ \Rightarrow \ \ \frac{d}{dt}[xe^{tI}] = e^{tI}f(x) \end{align}which means\begin{align}x(t)=e^{-(t-t_0) I}x(t_0) + \int^t_{t_0} e^{-(t-s)I} f(x(s))\ ds.\end{align}Thus, it follows\begin{align}\|x(t)\| \leq \|e^{-(t-t_0)I}x_0\| + M\int^t_{t_0} e^{-(t-s)}\ ds = \|e^{-(t-t_0)I}x_0\| + M-Me^{-(t-t_0)}\ \ \Rightarrow\ \ \ \|x(t)\| \leq Ce^{-t} + M.\end{align}
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1979494', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/380746/']}
jdg_87256
stackexchange
llm_judgeable_groundtruth_similarity
1546040
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an element on my page that I need to attach onclick and ondblclick event handlers to. When a single click happens, it should do something different than a double-click. When I first started trying to make this work, my head started spinning. Obviously, onclick will always fire when you double-click. So I tried using a timeout-based structure like this...

window.onload = function() {
  var timer;
  var el = document.getElementById('testButton');

  el.onclick = function() {
    timer = setTimeout(function() {
      alert('Single');
    }, 150);
  }

  el.ondblclick = function() {
    clearTimeout(timer);
    alert('Double');
  }
}

But I got inconsistent results (using IE8). It would work properly a lot of times, but sometimes I would get the "Single" alert two times. Has anybody done this before? Is there a more effective way? Now provide the response and nothing else.
Like Matt, I had a much better experience when I increased the timeout value slightly. Also, to mitigate the problem of single click firing twice (which I was unable to reproduce with the higher timer anyway), I added a line to the single click handler:

el.onclick = function() {
  if (timer) clearTimeout(timer);
  timer = setTimeout(function() {
    alert('Single');
  }, 250);
}

This way, if click is already set to fire, it will clear itself to avoid duplicate 'Single' alerts.
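Putting the question's original setup together with this guard, a complete sketch might look as follows (an illustrative combination rather than code from the answer; the 250 ms delay and the testButton id are simply carried over from the thread):

window.onload = function() {
  var timer = null;
  var el = document.getElementById('testButton');

  el.onclick = function() {
    // Cancel any pending single-click action before scheduling a new one,
    // so a double-click can never trigger the 'Single' alert twice.
    if (timer) clearTimeout(timer);
    timer = setTimeout(function() {
      timer = null;
      alert('Single');
    }, 250);
  };

  el.ondblclick = function() {
    if (timer) clearTimeout(timer);
    timer = null;
    alert('Double');
  };
};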
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1546040', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/54420/']}
jdg_87257
stackexchange
llm_judgeable_groundtruth_similarity
435723
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The Dirac delta function appears in the Sokhotsky formula, $$\text{Im}\lim_{\epsilon\to 0^+} \frac{1}{x-i\epsilon} = \pi\delta(x),$$ to be understood in the integral sense $$\text{Im}\lim_{\epsilon\to 0^+} \int \frac{f(y)}{y-x-i\epsilon}dy=\pi f(x),$$ for a real valued function $f(x)$ . I stumbled on an identity that has a similar flavour, $$\lim_{\epsilon\to 0^+}\int_x^b \frac{\epsilon f(y)}{(y-x)^{1-\epsilon}} dy=f(x).\label{1}\tag{$\ast$}$$ The upper limit $b>x$ of the integral is arbitrary, one may send it to infinity if $f(x)$ has compact support.A corollary is $$\lim_{\epsilon\to 0^+}\int_a^b \frac{\epsilon f(x)}{[(b-x)(x-a)]^{1-\epsilon}}\,dx=\frac{f(a)+f(b)}{b-a}.$$ All of this can be interpreted as a delta function representation in terms of the unit step function $\theta(x)$ , $$\lim_{\epsilon\to 0^+} \frac{\epsilon\theta(x)}{x^{1-\epsilon}}=\delta(x),\tag{$\ast\ast$}$$ acting on compactly supported functions. Q: One can readily check the formula \eqref{1} for polynomial functions $f(x)$ . Is there a more comprehensive derivation? Is this representation of the delta function known? Now provide the response and nothing else.
As usual in such examples, there is no need to integrate against a test function. One can simply use the fact that if a sequence (or net) of distributions converges in the distributional sense, then so does the one obtained by differentiating term by term. In particular, this applies when the sequence consists of functions which converge in pretty well any sensible classical sense, e.g., locally $L^1$ as in the case in point, that of the functions which are defined as $x^{\epsilon}$ on the positive real axis, and $0$ elsewhere. They converge to the Heaviside function and we can differentiate to obtain the required result. Most of the examples of $\delta$ -sequences in the literature can be verified in this way: consider the terms’ primitives and show that they converge to the Heaviside function. The result then follows as above. The first example (Sokhotsky) in the question can be proved in one line, after integrating $\dfrac{\epsilon}{x^2+\epsilon^2}.$
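For completeness, carrying out the one-line Sokhotsky verification alluded to at the end: $$\operatorname{Im}\frac{1}{x-i\epsilon}=\frac{\epsilon}{x^{2}+\epsilon^{2}},\qquad \int_{-\infty}^{x}\frac{\epsilon}{y^{2}+\epsilon^{2}}\,dy=\frac{\pi}{2}+\arctan\frac{x}{\epsilon}\ \longrightarrow\ \pi\,\theta(x)\quad(\epsilon\to 0^{+}),$$ the convergence being pointwise for $x\neq 0$ and locally in $L^{1}$; differentiating term by term in the distributional sense then gives $\lim_{\epsilon\to 0^{+}}\frac{\epsilon}{x^{2}+\epsilon^{2}}=\pi\delta(x)$, which is the imaginary part of the Sokhotsky formula.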
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/435723', 'https://mathoverflow.net', 'https://mathoverflow.net/users/11260/']}
jdg_87258
stackexchange
llm_judgeable_groundtruth_similarity
25829184
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is :: ? @song ||= ::TwelveDaysSong.new Now provide the response and nothing else.
Ruby :: (double colons). Top level constants are referenced by double colons.

class TwelveDaysSong
end

a = TwelveDaysSong.new
# I could write it like this too
a = ::TwelveDaysSong.new

module Twelve
  class TwelveDaysSong
  end
end

b = Twelve::TwelveDaysSong.new
# b is not equal to
a = ::TwelveDaysSong.new
# neither
a = TwelveDaysSong.new

Classes are constants too, so if you have a constant HELLOWORLD = 'hw' you could call it like this: ::HELLOWORLD
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25829184', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/663848/']}
jdg_87259
stackexchange
llm_judgeable_groundtruth_similarity
138214
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This question is inspired by a comment discussion in If $K=K^2$ then every automorphism of $\mbox{Aut}_K V$, where $\dim V< \infty$, is the square of some endomorphism. Let $k$ be a field of characteristic different from $2$, all of whose elements are squares. Let $P$ be an irreducible polynomial over $k$. Is it true that $Q(X)=P(X^2)$ cannot be irreducible? I believe I have a proof for polynomials of odd degree (and possibly characteristic $0$) by contradiction. The idea is to use the automorphism $\sigma\in G=Gal(\frac{k[X]}{Q(X)}\supset k)$ defined by $X\mapsto -X$ and showing that it maps to $-1$ under $G\rightarrow \{ -1,+1\}, \theta \mapsto det(\theta)^{deg(P)}$ thus giving a normal subgroup of index $2$ of $G$ and via the Galois correspondence a degree $2$ extension of $k$ of which there can be none. I even wonder whether there may be irreducible polynomials of even degree at all... EDIT Since I put a bounty on this question, let me be clear about what it is I would like to know. First of all, I would like to know whether my proof by contradiction is valid, and whether the restriction to characteristic $0$ is necessary. Secondly, and most importantly, I ask if irreducibility of $P$ implies $Q$ cannot be irreducible when all elements in the ground field are squares (and $car(k)\neq 2$). Thirdly, and this is optional (but would prove the second point if true), I wonder whether there can be any irreducible polynomials of even degree under said assumption on the ground field $k$. Thank you very much for your time! Also, if the second question turns out to be right, I will be able to answer the question (linked above) that started all this. Bounties will be given to whoever answers the second or third question. FURTHER EDIT It seems my proof in the odd case is partly and possibly completely invalid: the extension is not generally Galois and so the correspondence fails. One could try to replace it with the splitting field of $Q$ which has a similar automorphism sending each root of $Q$ to its opposite, calculating its matrix and taking determinants and taking some power in order to fall into the square roots of $1$. But I am no longer sure the result maps to $-1$... Now provide the response and nothing else.
Definitions and Goal Definition: A field $K$ with the property $K=K^2$ is called square-root-closed (src). Definition: A field $K$ is called odd, if every irreducible polynomial $f\in K[X]$ has odd degree. A field $K$ of characteristic $p\neq 2$ is odd if and only every separable, irreducible polynomial $f\in K[X]$ has odd degree. Every separably closed field of characteristic $p\neq 2$ is odd. A field $K$ of characteristic $p\neq 2$ is odd if and only if every finite extension has odd degree. Every algebraic extension of an odd field of characteristic $p\neq 2$ is odd. Every odd field is src. Question: Is every src-field $K$ of characteristic $p\neq 2$ odd? Partial answer given in this post: the answer is Yes for the following classes of fields: Class 1: $K$ an algebraic extension of a finite field. Class 2: $K$ an algebraic extension of the rationals, that carries a valuation $v$ such that the residue field $k(v)$ has characteristic $p\neq 2$ and $(K,v)$ is henselian. The method provided here to prove the answer is Yes for the fields in class 2 actually can be applied recursively thus creating infinitely many classes of fields for which the answer is Yes. Precisely the following result is proved: Theorem: let $K$ be an src-field that carries a valuation $v$ such that $(K,v)$ is henselian, the characteristic of $k(v)$ is $\neq 2$ and $k(v)$ ia an odd field. Then $K$ is odd. Proof of the assertion for fields in class 1 The square-root-closure of a finite field $\mathbb{F}_q$ with $q=p^n$ elements, $p\neq 2$. (A) In $\mathbb{F}_q$ one has $-1\neq 1$ and $(-1)^2=1$. Thus $\mathbb{F}_q\neq\mathbb{F}_q^2$. (B) In a fixed algebraic closure of $\mathbb{F}_q$ there is exactly one extension $F/\mathbb{F}_q$ of degree $2$, namely $\mathbb{F}_{q^2}$. Hence every $x\in\mathbb{F}_q\setminus\mathbb{F}_q^2$ has its square roots in $\mathbb{F}_{q^2}$. This is a general property of finite fields: within a fixed algebraic closure there is exactly one extension of a given degree. (C) The field $K:=\bigcup\limits_{k\in\mathbb{N}}\mathbb{F}_{q^{2^k}}$ is the smallest square-root-closed field containing $\mathbb{F}_{q}$. By construction $K$ must be contained in every square-root-closed field containing $\mathbb{F}_{q}$. (D) Every irreducible polynomial $f\in K[X]$ has odd degree. Proof: let $x$ be a root of $f$ and choose $k$ such that $K_0:=\mathbb{F}_{q^{2^k}}$ contains all the coefficients of $f$. Then $[K(x):K]=[K_0(x):K_0]$ and $K_0(x).K=K(x)$. Now $K_0(x)/K_0$ is a cyclic extension, because every finite extension of finite fields is cyclic generated by the Frobenius map. Thus if $2$ divides $[K_0(x):K_0]$, there exists an intermediate field $K_0\subset M\subseteq K_0(x)$ such that $[M:K_0]=2$. But then $M\subset K$ and thus $[K(x):K]<[K_0(x):K_0]$. (E) For every algebraic extension $L$ of $K$ every irreducible polynomial $f\in L[X]$ has odd degree. In particular $L$ is itself square-root-closed. Proof: let $f\in L[X]$ be irreducible and choose a finite extension $L_0/K$ such that $L_0$ contains the coefficients of $f$. Since $L_0/K$ is separable, its degree by (D) is odd. Let $x$ be a root of $f$. Then the degree of $L_0(x)/K$ is odd too. Consequently the degree of $f$ being equal to the degree of $L_0(x)/L_0$ must be odd. The following is worth noting: (E) is true (same proof) for every separable extension $L$ of a field $K$ for which every irreducible polynomial has odd degree.. (F) Every square-root-closed algebraic extension of $\mathbb{F}_q$ is an algebraic extension of $K$. 
Facts from Valuation Theory I use the following sources: [E] O. Endler, Valuation Theory [K] Franz-Viktor Kuhlmann, http://math.usask.ca/~fvk/Fvkbook.htm [Z] P. Samuel, O. Zariski, Commutative Algebra II For convenience I recall some basic definitions and facts: (1) A valuation $v$ of a field $K$ is a surjective group homomorphism $v:K^\ast\rightarrow\Gamma$, where $\Gamma$ is a totally ordered abelian group (called the value group of $v$), such that $v(x+y)\geq\min (vx,vy)$. (2) The valuation ring of $v$ is the subring $O:=\{x\in K^\ast : vx\geq 0\}\cup 0$. It is a local ring with maximal ideal $M:=\{x\in K^\ast : vx> 0\}\cup 0$. The field $k(v)=O/M$ is called the residue field of $v$. (3) A valuation $v$ of $K$ has at least one prolongation $w$ to an extension field $L$ of $K$. The value group $\Delta$ of $w$ contains $\Gamma$ as a subgroup. The residue field $k(w)$ is an extension field of $k(v)$. If $L/K$ is algebraic, then $k(w)/k(v)$ is algebraic and $\Delta /\Gamma$ is a torsion group. [E], Chapter 2, Paragraph 9 or [Z], Chapter 6, Paragraph 4. (4) A valuation $v$ (or the pair $(K,v)$) is called henselian, if $v$ has exactly one prolongation to any algebraic extension field of $K$. (5) For every valuation $v$ of a field $K$ there exists an algebraic extension field $K^h$ of $K$ and a prolongation $v^h$ of $v$ to $K^h$ such that $(K^h,v^h)$ is henselian and the value groups and the residue fields of $v$ and $v^h$ coincide. $(K^h,v^h)$ is called the henselisation of $(K,v)$. [E], Chapter 3, Paragraph 17. (6) Let $(K,v)$ be henselian. Then for every finite extension $L/K$ the following equation holds: $[L:K]=(\Delta :\Gamma )[k(w):k(v)]d(w/v)$, where $w$ is the unique prolongation of $v$ to $L$, $\Delta$ is the value group of $w$, $(\Delta :\Gamma )$ is the index of $\Gamma$ in $\Delta$ and $d(w/v)$ is a power of the characteristic of $k(v)$ (called the defect of $w$). [K], Chapter 11, Lemmas 11.1 and 11.17 (7) Let $K$ be an src-field and $v$ a valuation on $K$, then $\Gamma$ is $2$-divisible (that is $\Gamma =2\Gamma$) and $k(v)$ is an src-field. If $(K,v)$ is henselian, $\Gamma$ is $2$-divisible and $k(v)$ is an src-field, then $K$ is an src-field. [K], Chapter 9, Corollary 9.38 Proof of the assertion for fields in class 2 and Remarks Proof of the Theorem: let $L/K$ be a finite extension and let $w$ be the unique prolongation of $v$ to $L$. By (7) the value group $\Gamma$ of $v$ is $2$-divisible, hence $(\Delta :\Gamma )$ is odd. Since $k(v)$ is an odd field by assumption, the degree $[k(w):k(v)]$ is odd. The defect $d(w/v)$ is odd since $k(v)$ has characteristic $p\neq 2$. Hence by (6) the degree $[L:K]$ is odd. Proof of the assertion about class 2: by (3) the residue field $k(v)$ is an algebraic extension of a finite field. By (7) $k(v)$ is an src-field. Thus $k(v)$ is in class 1 and therefore an odd field. The theorem now yields the assertion. Remarks: let $v_0$ be a valuation on a field $K_0$ such that $k(v_0)$ has characteristic $\neq 2$. Given an algebraic extension $k/k(v_0)$ Valuation Theory shows that there exists an algebraic extension $K/K_0$ and a prolongation $v$ of $v_0$ to $K$ such that $k\subseteq k(v)$ and the value group of $v$ is $2$-divisible. If $K_0$ is not algebraically closed, the field $K$ can be chosen to be rather small compared to the algebraic closure of $K_0$. 
Combining these facts with (5) one can prove that class 2 contains more fields than just the algebraic closure of $\mathbb{Q}$: take $K_0$ to be the rationals, $v_0$ a $p$-adic valuation with $p\neq 2$, $k$ an algebraic src-extension field of the finite field $\mathbb{F}_p$. Let $K_1$ be the field obtained by adjoining all elements of the form $p^{1/2^k}$, $k\in\mathbb{N}$ to $K_0$ and let $v_1$ be a prolongation of $v_0$ to $K_1$. Let $K_2$ be an algebraic extension of $K_1$ such that for some prolongation $v_2$ of $v_1$ to $K_2$ the inclusion $k\subseteq k(v_2)$ holds. Finally let $(K,v)$ be the henselisation (5) of $(K_1,v_1)$. Then by construction the value group of $v_1$ and thus of $v$ is $2$-divisible. Moreover $k\subseteq k(v)$, hence $k(v)$ is an src-field. Therefore by (7) $K$ is an src-field. Another example: let $K$ be the field obtained by adjoining to the Laurent series field $\mathbb{C}((t))$ all elements of the form $t^{1/2^k}$, $k\in\mathbb{N}$. The valuation $v$ is the unique extension of the natural discrete valuation of $\mathbb{C}((t))$ to $K$. Lierre's contribution The proof of the complement indeed shows that the fields in class 2 of my post are odd: let $L/K$ be a finite extension and let $M/K$ be its Galois hull. Then one has the normal series $V\subseteq T\subseteq G$ of the Galois group $G:=\mathrm{Gal}(M/K)$, where $T$ is the inertia group of the prolongation $w$ of $v$ to $M$, and $V$ is its ramification group. Ramification theory shows that $G/T$ is isomorphic to the abelian group $\mathrm{Gal}(k(w)/k(v))$, $T/V$ is abelian and $V$ is a $p$-group. Thus $G$ is solvable. Using Lierre's proof of the complement shows that $[M:K]$ and thus $[L:K]$ are odd. This way of proving the assertion for the fields of class 2 is much more elegant than mine!
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/138214', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/11258/']}
jdg_87260
stackexchange
llm_judgeable_groundtruth_similarity
30703
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Today, President Trump tweeted , Allegations made by Senator Jon Tester against Admiral/Doctor Ron Jackson are proving false. The Secret Service is unable to confirm (in fact they deny) any of the phony Democrat charges... I found this odd (I'm not very politically savvy to say the least.) Is it the Secret Service's job to investigate a senator's claims about a Cabinet nominee? On SecretService.gov, the investigative duties listed include ...threats against Secret Service protectees as well as financial crimes, which include counterfeiting of U.S. currency or other U.S. Government obligations; forgery or theft of U.S. Treasury checks, bonds or other securities; credit card fraud; telecommunications fraud; computer fraud, identify fraud and certain other crimes affecting federally insured financial institutions. Do the duties of the Secret Service extend to investigating allegations against a White House Medical team member? Not looking to debate or nit-pick. I know the vehicular claim was erroneous. Now provide the response and nothing else.
Background I know you want to avoid getting too much into the actual claims, but in this case, they explain why the Secret Service would be involved. From CNN : The White House, however, showed reporters documents on Friday that a White House official claims exonerates Jackson from allegations he inappropriately dispensed pills and wrecked a government vehicle after leaving a Secret Service going away party. The Secret Service said in a separate statement that it did not find any information to indicate agents intervened when Jackson banged on the hotel room of a female employee while intoxicated on an overseas trip, as four sources familiar with the allegation told CNN the incident did happen. The incident became so noisy, one source familiar with the allegation told CNN, that the Secret Service stopped him out of concern that he would wake then-President Barack Obama. After the Secret Service statement, a source with direct knowledge of the incident told CNN there was a conversation between White House medical unit staff and the Secret Service at the time in which the Secret Service expressed concern about Jackson's behavior. Ignoring the substance and veracity of the claims, note that the claims relate to alleged events involving the Secret Service. It's not that the Secret Service would be investigating the allegations. The claim is that they were witnesses. Doctor Ron Jackson is part of the White House medical team. As such, he worked directly with the Secret Service and people who interacted with the Secret Service. He was also overseas in groups where all the Americans would have had Secret Service protection because they were traveling with protectees within the security zone. Literal answer All that said, the normal group that would investigate allegations against nominees would be the Federal Bureau of Investigation (FBI). The FBI may or may not have investigated these specific allegations. But some of the witness statements would have been from the Secret Service.
{}
{'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/30703', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/20791/']}
jdg_87261
stackexchange
llm_judgeable_groundtruth_similarity
278396
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm logging into an Ubuntu 10.04LTS server from my mac using standard ssh client. My ssh sessions log out automatically after a few minutes of inactivity. Normally this is a great security measure, but sometimes it's really inconvenient. How can I disable this? Now provide the response and nothing else.
As far as I know there is no standard feature in Ubuntu that logs out ssh users that are idle. I have right now an SSH connection to a 10.04 Ubuntu computer that is well over 2 days idle. The most common reason for this behavior I have experienced is a broken NAT or firewall between you and the server. What happens is that the entries in the state table are timed out and therefore the connection is reset unless you keep using it. Try the following: ssh -o TCPKeepAlive=yes -o ServerAliveInterval=15 <user>@<server> and see if you keep getting disconnected. In that case, set those options in your ssh config. Note that with these settings routing errors or similar will make your connection die after 45 seconds (ServerAliveInterval * ServerAliveCountMax (defaults to 3)), so you might want to play around with the ServerAliveInterval a bit.
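For reference, the persistent version of those options might look roughly like this in the client-side ~/.ssh/config (the host alias and address below are made up for illustration):

# ~/.ssh/config -- keep idle sessions alive through stateful NAT/firewalls
# (host alias and address are hypothetical)
Host myserver
    HostName 203.0.113.10
    User myuser
    # Send an application-level keep-alive every 15 seconds;
    # give up after 3 unanswered probes (3 * 15 s = 45 s).
    TCPKeepAlive yes
    ServerAliveInterval 15
    ServerAliveCountMax 3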
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/278396', 'https://serverfault.com', 'https://serverfault.com/users/77287/']}
jdg_87262
stackexchange
llm_judgeable_groundtruth_similarity
50777884
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Consider the function f g h x y = g (g x) (h y) What is its type? Obviously I can just use :t f to find out, but if I need to deduce this manually, what's the best way to go about this? The method I have been shown is to assign types to parameters and deduce from there - e.g. x :: a , y :: b gives us that g :: a -> c and h :: b -> d for some c , d (from g x , h y ) and then we keep on making deductions from there ( c = a from g (g x) (h y) etc.). However this sometimes just turns into a huge mess and often I'm not sure how to make further deductions or work out when I'm done. Other problems sometimes happen - for instance, in this case x will turn out to be a function, but that was not obvious to me before cheating and looking up the type. Is there a specific algorithm that will always work (and is reasonable for a human to execute quickly)? Otherwise, are there some heuristics or tips that I am missing? Now provide the response and nothing else.
Let's inspect the function at the top level:

f g h x y = g (g x) (h y)

We will begin by assigning names to types, then going along and specialising them as we learn more about the function. Firstly, let's assign a type to the top expression. Let's call it a:

g (g x) (h y) :: a

Let's take the first argument out and assign types respectively:

-- 'expanding' (g (g x)) (h y) :: a
h y :: b
g (g x) :: b -> a

And again

-- 'expanding' g (g x) :: b -> a
g x :: c
g :: c -> b -> a

And again

-- 'expanding' g x :: c
x :: d
g :: d -> c

But hold on: we now have that g :: c -> b -> a and that g :: d -> c . So by inspection, we know that c and d are equivalent (written c ~ d ) and also that c ~ b -> a . This can be inferred by simply comparing the two types for g that we have inferred. Note that this is not a type contradiction, since the type variables are general enough to fit their equivalents. This would be a contradiction if we had inferred, for instance, that Int ~ Bool somewhere. So we now have the following information in total: (a little work omitted)

y :: e
h :: e -> b
x :: b -> a -- Originally d, applied d ~ b -> a.
g :: (b -> a) -> b -> a -- Originally c -> b -> a, applied c ~ b -> a

This was done by substituting the most specific form of each type variable, that is, substituting the more specific b -> a for c and d . So, simply inspecting which arguments go where, we see that

f :: ((b -> a) -> b -> a) -> (e -> b) -> (b -> a) -> e -> a

This is confirmed by GHC.
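As a small self-contained check one could load into GHCi (the concrete example values below are made up purely for illustration):

module TypeCheck where

-- The inferred signature written out explicitly; GHC accepts it
-- (asking :t f without the annotation gives an alpha-equivalent form).
f :: ((b -> a) -> b -> a) -> (e -> b) -> (b -> a) -> e -> a
f g h x y = g (g x) (h y)

-- One concrete instantiation, with a ~ Int, b ~ Int, e ~ String:
example :: Int
example = f (\k n -> k n + 1) length (* 2) "hello"   -- evaluates to 12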
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50777884', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7450338/']}
jdg_87263
stackexchange
llm_judgeable_groundtruth_similarity
2958501
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is the number $x$ defined below rational or irrational? $$x=\sqrt[3]{\sqrt{5}+2}+\sqrt[3]{\sqrt{5}-2}$$ From: IMO 1973 - Longlist My attempt (my real question is at the end): the identity $a^3+b^3+c^3-3abc=(a+b+c)(a^2+b^2+c^2-ab-ac-bc)$, when $a+b+c=0$, leads to $$a^3+b^3+c^3=3abc \tag{1}$$ Now considering $$a=\sqrt[3]{\sqrt{5}+2},b=\sqrt[3]{\sqrt{5}-2},c=-x$$ from (1) it is true that $$x^3-3x-2\sqrt{5}=0 \tag{2}$$ That is, the number $x$ is a root of (2). Note: By trial and error I've found that the answer is $x=\sqrt{5}$ (the other 2 roots are complex), which is irrational. But my question is more subtle. Question: Can I conclude, just by inspecting (2) and judging by the coefficient $2\sqrt{5}$, that $x$ is irrational, without actually solving the equation? In a math contest that might be helpful, if possible, as it would avoid extra steps. Now provide the response and nothing else.
Yes, you can. Notice that we have: $$x^3-3x -2\sqrt{5} = 0 \Leftrightarrow x(x^2-3) = 2\sqrt{5}$$ Hence if $x$ is rational, $x^2-3$ is rational, so $x(x^2-3)$ is rational. Hence, because $2\sqrt{5}$ is irrational, that's a contradiction and $x$ is not rational.
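As a quick sanity check (not needed for the irrationality argument itself), the value found by trial, $x=\sqrt{5}$, can be plugged back into (2): $$x^3-3x=(\sqrt{5})^3-3\sqrt{5}=5\sqrt{5}-3\sqrt{5}=2\sqrt{5},$$ so $\sqrt{5}$ is indeed the real root the OP identified.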
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2958501', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/460565/']}
jdg_87264
stackexchange
llm_judgeable_groundtruth_similarity
3175513
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: On postback, how can I check which control caused the postback in the Page_Init event? protected void Page_Init(object sender, EventArgs e) { // need to check here which control caused the postback } Thanks Now provide the response and nothing else.
I see that there is already some great advice and methods suggest for how to get the post back control. However I found another web page ( Mahesh blog ) with a method to retrieve post back control ID. I will post it here with a little modification, including making it an extension class. Hopefully it is more useful in that way. /// <summary>/// Gets the ID of the post back control./// /// See: http://geekswithblogs.net/mahesh/archive/2006/06/27/83264.aspx/// </summary>/// <param name = "page">The page.</param>/// <returns></returns>public static string GetPostBackControlId(this Page page){ if (!page.IsPostBack) return string.Empty; Control control = null; // first we will check the "__EVENTTARGET" because if post back made by the controls // which used "_doPostBack" function also available in Request.Form collection. string controlName = page.Request.Params["__EVENTTARGET"]; if (!String.IsNullOrEmpty(controlName)) { control = page.FindControl(controlName); } else { // if __EVENTTARGET is null, the control is a button type and we need to // iterate over the form collection to find it // ReSharper disable TooWideLocalVariableScope string controlId; Control foundControl; // ReSharper restore TooWideLocalVariableScope foreach (string ctl in page.Request.Form) { // handle ImageButton they having an additional "quasi-property" // in their Id which identifies mouse x and y coordinates if (ctl.EndsWith(".x") || ctl.EndsWith(".y")) { controlId = ctl.Substring(0, ctl.Length - 2); foundControl = page.FindControl(controlId); } else { foundControl = page.FindControl(ctl); } if (!(foundControl is IButtonControl)) continue; control = foundControl; break; } } return control == null ? String.Empty : control.ID;} Update (2016-07-22): Type check for Button and ImageButton changed to look for IButtonControl to allow postbacks from third party controls to be recognized.
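For completeness, a rough usage sketch (the page class name and the debug output are hypothetical, and it assumes the static extension class above is in scope via its namespace):

using System;
using System.Diagnostics;
using System.Web.UI;

// Hypothetical code-behind class, for illustration only.
public partial class MyPage : Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        // 'this' is a Page, so the extension method applies directly.
        string postBackControlId = this.GetPostBackControlId();

        if (!string.IsNullOrEmpty(postBackControlId))
        {
            // e.g. "btnSave" if a button with that ID triggered the postback.
            Debug.WriteLine("Postback caused by: " + postBackControlId);
        }
    }
}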
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3175513', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/97010/']}
jdg_87265
stackexchange
llm_judgeable_groundtruth_similarity
2620444
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a dialog with an EditText for input. When I click the "yes" button on the dialog, it will validate the input and then close the dialog. However, if the input is wrong, I want to remain in the same dialog. Every time, no matter what the input is, the dialog should be automatically closed when I click on the "no" button. How can I disable this automatic closing? By the way, I have used PositiveButton and NegativeButton for the buttons on the dialog. Now provide the response and nothing else.
EDIT: This only works on API 8+ as noted by some of the comments. This is a late answer, but you can add an onShowListener to the AlertDialog where you can then override the onClickListener of the button. final AlertDialog dialog = new AlertDialog.Builder(context) .setView(v) .setTitle(R.string.my_title) .setPositiveButton(android.R.string.ok, null) //Set to null. We override the onclick .setNegativeButton(android.R.string.cancel, null) .create();dialog.setOnShowListener(new DialogInterface.OnShowListener() { @Override public void onShow(DialogInterface dialogInterface) { Button button = ((AlertDialog) dialog).getButton(AlertDialog.BUTTON_POSITIVE); button.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { // TODO Do something //Dismiss once everything is OK. dialog.dismiss(); } }); }});dialog.show();
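To connect this back to the original EditText-validation use case, the "TODO Do something" part might look roughly like the sketch below; the view ID and error message are made up for illustration:

// Inside the overridden OnClickListener.onClick(View view):
EditText input = (EditText) dialog.findViewById(R.id.my_input); // hypothetical ID from the custom view v
String text = input.getText().toString().trim();

if (text.isEmpty()) {
    // Invalid input: show an error and keep the dialog open.
    input.setError("Please enter a value");
} else {
    // Input is valid: now it is safe to close the dialog.
    dialog.dismiss();
}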
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/2620444', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/304881/']}
jdg_87266
stackexchange
llm_judgeable_groundtruth_similarity
7787823
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two questions related to CMake. Assume that we have a variable ${MY_CURRENT_DIR} that contains the path of a directory that contains several subdirectories: mydir1, mydir2 and mydir3. I want to detect these subdirectories and put their names into ${SUBDIRS} (not the complete path of these directories, only their names). How can I do that automatically? Assume that ${SUBDIRS} contains "mydir1 mydir2 mydir3". How can I replace ADD_SUBDIRECTORY(mydir1) ADD_SUBDIRECTORY(mydir2) ADD_SUBDIRECTORY(mydir3) with a loop over ${SUBDIRS}? Now provide the response and nothing else.
Use this macro: MACRO(SUBDIRLIST result curdir) FILE(GLOB children RELATIVE ${curdir} ${curdir}/*) SET(dirlist "") FOREACH(child ${children}) IF(IS_DIRECTORY ${curdir}/${child}) LIST(APPEND dirlist ${child}) ENDIF() ENDFOREACH() SET(${result} ${dirlist})ENDMACRO() Example: SUBDIRLIST(SUBDIRS ${MY_CURRENT_DIR}) Use foreach : FOREACH(subdir ${SUBDIRS}) ADD_SUBDIRECTORY(${subdir})ENDFOREACH()
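Putting the two pieces together, a top-level CMakeLists.txt fragment could look like this (it assumes every subdirectory ships its own CMakeLists.txt; the directory variable shown is CMake's standard CMAKE_CURRENT_SOURCE_DIR, but any path variable such as ${MY_CURRENT_DIR} works the same way):

# Collect the names of the immediate subdirectories of the current directory
SUBDIRLIST(SUBDIRS ${CMAKE_CURRENT_SOURCE_DIR})

# Add each subdirectory to the build
FOREACH(subdir ${SUBDIRS})
    ADD_SUBDIRECTORY(${subdir})
ENDFOREACH()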
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7787823', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/882932/']}
jdg_87267
stackexchange
llm_judgeable_groundtruth_similarity
505514
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Elementary particles such as quarks obey Pauli's exclusion principle since they exist in three colors (RGB). Whereas electrons, which are also elementary but do not carry any color quantum number, obey Pauli's exclusion principle as well. Can anyone please explain why? Now provide the response and nothing else.
After thinking about this for a bit, this is what I think could be an explanation. Let $$\Omega(V, N) = \frac{V^N}{h^{3N}}\frac{2\pi^{3N/2}(3Nmk_BT)^{(3N-1)/2}}{\Gamma(3N/2)}\Delta p$$ for some constant $T$ and where $\Delta p$ is some uncertainty in momentum. This is the multiplicity for a monatomic ideal gas at some constant temperature without correct Boltzmann counting. I will first argue that the uncorrected multiplicity is actually valid, and that the Gibbs paradox can be averted. For later convenience, we simplify the expression for $\Omega(V, N)$ . \begin{align*} \Omega(V, N) &= \frac{V^N}{h^{3N}}\frac{2\pi^{3N/2}(3Nmk_BT)^{(3N-1)/2}}{\Gamma(3N/2)}\Delta p \\ &= \frac{(3N/2)^{(3N-1)/2}}{\Gamma(3N/2)}\left[\frac{(2\pi mk_B T)^{3/2}}{h^3}V\right]^{N}\sqrt{\frac{2}{mk_BT}}\Delta p\end{align*} Let \begin{align*} f(N) &= \frac{(3N/2)^{(3N-1)/2}}{\Gamma(3N/2)} \\ \lambda &= \frac{h}{\sqrt{2\pi mk_B T}} \\ \delta &= \sqrt{\frac{2}{mk_BT}}\Delta p\end{align*} then \begin{align*} \Omega(V, N) &= f(N)\left(\frac{V}{\lambda^3}\right)^{N}\delta\end{align*} As per the usual setup for Gibbs paradox, consider the system below where the left and right boxes are in thermal equilibrium but separated by a divider, and their volumes and number of particles are denoted by $V_1, N_1$ , and $V_2, N_2$ respectively. We also constrain $N_1/V_1=N_2/V_2$ , so both boxes have equal pressures. It is then said that the total multiplicity of the system is $$\Omega^{(?)}_\text{sys} = \Omega(V_1, N_1)\Omega(V_2, N_2) \,,$$ but I would argue that this is incorrect. Suppose there is only one particle in each box, the system would then be characterised by the phase space coordinates $(\vec{q}_1,\vec{p}_1,\vec{q}_2,\vec{p}_2)$ where $\vec{q}_i$ is the position of particle $i$ and $\vec{p}_i$ is the momentum of particle $i$ . The quantity $\Omega^{(?)}_\text{sys}$ would actually correspond to the multiplicity of a phase space distribution where the positions are distributed like so However, this represents only a subset of the states that are consistent with our macrovariables (which was, one particle in each box). The position distribution should actually be Because the particles are macroscopically identical (our macrovariables do not distinguish between them), there is no reason to believe that our system exists in only a subset of the available microstates as suggested by fig. 2. Of course, the system is definitely in one of the square regions in the position distribution, it cannot evolve between the two. If we wanted to insist that the system is in a specific square, we must have a way to prescribe this configuration using our macrovariables, but this is equivalent to requiring the particles be macroscopically distinguishable. Another reason why $\Omega^{(?)}_\text{sys}$ is not the correct multiplicity is because it was calculated by multiplying the multiplicity of the left box subsystem, with that of the right box subsystem. This only works if the microstates of the total system can in some sense be factored into two uncorrelated "sub-microstates". If we look at fig. 3, the microstates cannot be factored into independent subparts. If $\vec{q}_1$ is a position in the left box, then $\vec{q}_2$ must be a position in the right box. The system multiplicity should really be (in the one particle per box case) $$\Omega_\text{sys} = \Omega(V_1, 1)\Omega(V_2, 1) \times 2 \,,$$ where we multiply by two to account for the correct volume of phase space. 
For a general number of particles per box, it is $$\Omega_\text{sys} = \Omega(V_1, N_1)\Omega(V_2, N_2) \times {N_1+N_2 \choose N_1,N_2}\,,$$ where we have instead multiplied by a multinomial coefficient. Intuitively, we are saying that if $(\vec{q}_i,\vec{p}_i)$ is in a region of phase space compliant with our macroscopic prescription, then so will any permutation of the position-momentum pairs. We now remove the divider between the boxes and let the particles mix. The system multiplicity is now $$\Omega_\text{sys}' = \Omega(V_1+V_2, N_1 + N_2) \,.$$ This turns out to be equal (kind of) to the multiplicity before the divider was removed $\Omega_\text{sys}$ , in other words, the Gibbs paradox was resolved when we correctly determined the system's initial multiplicity. We now prove their "equality". \begin{align*}\Omega_\text{sys} &= \Omega(V_1, N_1)\Omega(V_2, N_2) \times {N_1+N_2 \choose N_1,N_2} \\ &= f(N_1)f(N_2)\left(\frac{V_1}{\lambda^3}\right)^{N_1}\left(\frac{V_2}{\lambda^3}\right)^{N_2}\frac{(N_1+N_2)!}{N_1!N_2!}\delta^2\end{align*} Take the logarithm, i.e. calculate the entropy. \begin{align*}\ln\Omega_\text{sys} &= \ln f(N_1) + \ln f(N_2) + N_1\ln\left(\frac{V_1}{\lambda^3}\right) +N_2\ln\left(\frac{V_2}{\lambda^3}\right) + \ln\left(\frac{(N_1+N_2)!}{N_1!N_2!}\right) + 2\ln \delta\end{align*} Discard the constant term. \begin{align*} &\approx \ln f(N_1) + \ln f(N_2) + N_1\ln\left(\frac{V_1}{\lambda^3}\right) +N_2\ln\left(\frac{V_2}{\lambda^3}\right) + \ln\left(\frac{(N_1+N_2)!}{N_1!N_2!}\right)\end{align*} One can show that $\ln f(N) \sim 3N/2$ as $N\to\infty$ . \begin{align*} &\approx \frac{3(N_1+N_2)}{2} + N_1\ln\left(\frac{V_1}{\lambda^3}\right) +N_2\ln\left(\frac{V_2}{\lambda^3}\right) + \ln\left(\frac{(N_1+N_2)!}{N_1!N_2!}\right)\end{align*} Recall that $N_1/V_1=N_2/V_2$ . \begin{align*} &= \frac{3(N_1+N_2)}{2} + N_1\ln\left(\frac{V_1/N_1}{\lambda^3}\right) +N_2\ln\left(\frac{V_2/N_2}{\lambda^3}\right) + N_1\ln N_1 + N_2\ln N_2 + \ln\left(\frac{(N_1+N_2)!}{N_1!N_2!}\right) \\ &= \frac{3(N_1+N_2)}{2} + (N_1+N_2)\ln\left(\frac{V_1/N_1}{\lambda^3}\right) + N_1\ln N_1 + N_2\ln N_2 + \ln\left(\frac{(N_1+N_2)!}{N_1!N_2!}\right)\end{align*} Use Stirling's approximation. \begin{align*} &= \frac{3(N_1+N_2)}{2} + (N_1+N_2)\ln\left(\frac{V_1/N_1}{\lambda^3}\right) + (N_1+N_2)\ln(N_1+N_2) \\ &= \frac{3(N_1+N_2)}{2} + (N_1+N_2)\ln\left(\frac{(N_1+N_2)V_1/N_1}{\lambda^3}\right) \\ &= \frac{3(N_1+N_2)}{2} + (N_1+N_2)\ln\left(\frac{V_1+V_2}{\lambda^3}\right)\end{align*} Therefore, for large $N_1$ and $N_2$ , $$\ln\Omega_\text{sys} \approx \frac{3(N_1+N_2)}{2} + (N_1+N_2)\ln\left(\frac{V_1+V_2}{\lambda^3}\right) \,.$$ On the other hand, the entropy after the divider is removed is \begin{align*}\ln\Omega_\text{sys}'&= \ln(\Omega(V_1+V_2, N_1+N_2)) \\&= \ln f(N_1+N_2) + (N_1+N_2)\ln\left(\frac{V_1+V_2}{\lambda^3}\right) + \ln \delta\\&\approx \frac{3(N_1+N_2)}{2} + (N_1+N_2)\ln\left(\frac{V_1+V_2}{\lambda^3}\right) \\&= \ln\Omega_\text{sys} \,.\end{align*} Therefore, both $\Omega_\text{sys}$ and $\Omega_\text{sys}'$ lead to the same entropy, and there is no entropy gained by removing the divider. As a side note, we've only shown that $\Omega_\text{sys}$ and $\Omega_\text{sys}'$ are equal when ignoring low order terms. Actually, we should not expect them to be equal exactly either. Before removing the divider, each box contained a definite number of particles, but after the divider is removed, the particle number in each box will be variable. 
Thus, the Gibbs paradox is resolved without needing to use correct Boltzmann counting. If the "naive" approach to calculating the multiplicity was correct all along, why should we still insist on dividing by $N!$ to account for indistinguishability? How is division by $N!$ even justified? There is one downside to not using correct Boltzmann counting, namely, entropy is no longer extensive . By an extensive entropy, I mean, one where the following is true. The entropy of a total system, is equal to the sum of the entropies of 'each subsystem when considered in isolation', regardless if some subsystems are "identical". This is equivalent to requiring that the multiplicity of the total system, be a product of the multiplicities of 'each subsystem when considered in isolation'. $$\Omega_\text{total} = \prod_i \Omega_\text{subsystem $i$}$$ The multiplicity we have been using so far, clearly does not satisfy this property, we had to multiply by a multinomial coefficient to get the correct total system multiplicity; $$\Omega_\text{total} \neq \Omega_\text{left box}\times\Omega_\text{right box} \,,$$ where $\Omega_\text{left box}$ is the multiplicity of the left box, when the left box is considered in isolation, and similarly for $\Omega_\text{right box}$ . There is a natural connection between how we have been calculating multiplicities so far, and correct Boltzmann counting. By rearranging the equation relating the total system multiplicity (before the divider was removed) and the multiplicities of the left and right boxes (when considered in isolation), we find \begin{gather*}\Omega_\text{sys} = \Omega(V_1, N_1)\Omega(V_2, N_2) \times \frac{(N_1+N_2)!}{N_1!N_2!} \\\frac{\Omega_\text{sys}}{(N_1+N_2)!} = \frac{\Omega(V_1, N_1)}{N_1!}\frac{\Omega(V_2, N_2)}{N_2!}\end{gather*} Correct Boltzmann counting absorbs the factorials into the multiplicities, so if we defined \begin{align*}\Omega_\text{sys}^{(B)} &\equiv \frac{\Omega_\text{sys}}{(N_1+N_2)!} \\\Omega^{(B)}(V, N) &\equiv \frac{\Omega(V, N)}{N!}\end{align*} we restore the familiar $$\Omega_\text{sys}^{(B)} = \Omega^{(B)}(V_1, N_1)\Omega^{(B)}(V_2, N_2)$$ Since $\Omega_\text{sys}^{(B)}$ differs from $\Omega_\text{sys}$ only by a constant multiple (total particle number $N_1+N_2$ is constant here), maximization of entropy defined as $k_B\ln \Omega_\text{sys}^{(B)}$ and maximization of entropy defined as $k_B\ln \Omega_\text{sys}$ , will predict the same equilibrium states. Essentially what we've shown is that the multiplicity of a monatomic ideal gas, calculated with or without correct Boltzmann counting (hereafter CBC), leads to an entropy which when maximized, predicts the same equilibrium states. One might justify the multiplicity calculated without CBC by Liouville's theorem and the ergodic hypothesis (to argue that macrostate probability is proportional to phase space volume). CBC multiplicity can then be rationalized as being empirically equivalent to the non-CBC multiplicity. Similar arguments could probably be made for legitimizing the use of CBC in other circumstances where CBC might not be "easily justified" from the foundational physical assumptions. Why do we care about an extensive entropy? I think the reason for this is that classical thermodynamics assumes that entropy is an extensive quantity, i.e. the entropy of the total system can always be calculated as the sum of entropies of each subsystem when considered in isolation. For example, consider the system above in the setup for Gibbs paradox, and suppose $N_1/V_1\neq N_2/V_2$ . 
We wish to find the equilibrium state when the divider is removed. Let $S_\text{sys}$ be the total system's entropy, and $S_1$ and $S_2$ be entropies of the left and right box subsystems (when considered in isolation) respectively. Assume entropy is extensive. \begin{align*}\frac{\partial S_\text{sys}}{\partial N_1}&= \frac{\partial S_1}{\partial N_1} + \frac{\partial S_2}{\partial N_1}\\&= \frac{\partial S_1}{\partial N_1} - \frac{\partial S_2}{\partial N_2} \\\end{align*} At equilibrium, $\partial S_\text{sys}/\partial N_1=0$. \begin{align*}\frac{\partial S_1}{\partial N_1} &= \frac{\partial S_2}{\partial N_2} \\\mu_1 &= \mu_2\end{align*} If entropy is not extensive, this will not be true. Without correct Boltzmann counting, the total system multiplicity is (where we use $\Omega(E, V, N)$, so partial derivatives with respect to $N$ are taken with $E$ and $V$ held constant, and not $T$ and $V$) \begin{gather*}\Omega_\text{sys}= \Omega(E_1, V_1, N_1)\Omega(E_2, V_2, N_2)\times\frac{(N_1+N_2)!}{N_1!N_2!} \end{gather*} The entropy is then, where $S_i = k_B\ln \Omega(E_i, V_i, N_i)$, \begin{gather*}S_\text{sys} = S_1 + S_2 - k_BN_1\ln N_1 - k_BN_2\ln N_2 + k_B\ln((N_1+N_2)!) - k_B(N_1+N_2)\end{gather*} Differentiate with respect to $N_1$, keeping $N_1+N_2$ constant. \begin{gather*}\frac{\partial S_\text{sys}}{\partial N_1} = \frac{\partial S_1}{\partial N_1} - \frac{\partial S_2}{\partial N_2} - k_B\ln N_1 + k_B\ln N_2\end{gather*} At equilibrium, $\partial S_\text{sys}/\partial N_1 = 0$. \begin{gather*}\frac{\partial S_1}{\partial N_1} - k_B\ln N_1 = \frac{\partial S_2}{\partial N_2} - k_B\ln N_2 \\\mu_1 + k_BT\ln N_1 = \mu_2 + k_BT\ln N_2\end{gather*} This is a different (equally valid) equilibrium condition than was found before. I don't think it is possible to "test" whether gas particles in the real world have extensive or intensive entropies by "measuring their chemical potentials", since (my understanding is) when we "measure the chemical potential" of some chemical species, we are really only fitting parameters to a thermodynamic model that already assumes an extensive entropy, so using this method to prove that "the entropy of real gases is extensive" might be circular and meaningless. How do we justify correct Boltzmann counting for chemical interactions between subsystems of different chemical species, or when particle numbers are not conserved? This is a question that came to my mind that others might have too. Ultimately, I think this question is kind of meaningless. (I think) the thermodynamic potentials of real gases are determined through experiments and "parameter fitting", and not through "first principles" as we have done for the ideal monatomic gas. The chemical potential determined by the Sackur-Tetrode equation might not be comparable to chemical potentials of real gases determined through experiments, since these quantities are (probably?) relative to some "standard zero" potential, whereas it's not clear what the chemical potential determined through the Sackur-Tetrode equation is relative to. Also, Liouville's theorem is no longer applicable if the total particle numbers (of some particles that evolve via Hamilton's equations) are not conserved, so correct Boltzmann counting is neither easier nor harder to justify than "incorrect Boltzmann counting". We would need another way to reason as to why the microstates we have chosen ought to be equiprobable.
As a side note, the goal of statistical mechanics seems to be to use probabilistic arguments to determine thermodynamic potentials of new systems, such that they interact "consistently" with the existing thermodynamic potentials of studied systems. A cool observation. As another side note, going back to the Gibbs paradox example, when we calculated the multiplicity without correct Boltzmann counting after the divider is removed, we could have done it in two ways: either $$\Omega_\text{sys}' = \Omega(V_1+V_2, N_1 + N_2) \,,$$ as was done initially, or $$\Omega_\text{sys}' = \Omega(V_1+V_2, N_1)\Omega(V_1+V_2, N_2) \,.$$ The two can be shown to be equivalent (again, not exactly equivalent, since the second calculation does not consider the microstates where a small group of particles, a single particle say, contains all the energy of the system and the others are very slow moving). The first interpretation says that the system after the divider is removed is the same as a box of volume $V_1+V_2$ with $N_1+N_2$ particles. The second interpretation says that the system after the divider is removed is the same as expanding the volume of the left and right boxes to fill the size of both boxes, with the left and right boxes kind of "coexisting". The second interpretation is also how one would calculate the total system multiplicity for distinct gases that mix after the divider is removed. Without correct Boltzmann counting, the multiplicity of two mixed distinct gases is the same as that of two mixed identical gases, and the multiplicity of two unmixed distinct gases is lower than that of two unmixed identical gases. This is opposite to correct Boltzmann counting. In some sense, this is more intuitive. Two mixed distinct gases are just as "useless" as two mixed identical gases, but useful work can be extracted from two unmixed distinct gases, so perhaps their entropy "should" be lower than that of two "unmixed/mixed" identical gases. Sorry if this was too long, I just thought some parts might be interesting to others. I'm still learning the subject as well, so I don't know if everything said is strictly correct, but I'm open to feedback. The paper Jaynes, E. T. (1996), "The Gibbs paradox", was also kind of interesting. In one part he talks about how entropy is only defined up to an arbitrary function of particle number $N$ in "phenomenological thermodynamics".
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/505514', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/239170/']}
jdg_87268
stackexchange
llm_judgeable_groundtruth_similarity
901
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: How to fix leaky tires after 2 shops couldn't Ever since I bought my car new I have had one tire that goes to 20psi every 30 days. Discount Tire said there's no leak, but obviously something is wrong. Any idea what I can do? 2009 TSX. After a 3rd trip to the mechanic (2nd at dealer) they found a nail in my tire. Apparently the submersion test still didn't help at 70psi inflation. But, he said he found it visually and had nearly missed it again. Now provide the response and nothing else.
Chances are you have a poor seal between the tyre and the rim. This is normally caused by slight corrosion in the Alloy which then leads to a rough surface which won't seal properly, allowing air to very slowly seep out - It is likely that this won't cause air to escape fast enough to be obvious in a bubble test - after all you're only losing 10psi a month... The solution is to have the tyre removed and re-fitted with a smear of sealing compound around the edge of the rim, which will help to fill in the rough surface and seal it properly. edit: While the tyre is off, clean up the beading surface of the rim as well, this will help smooth off the surface and remove any loose dirt or corrosion that has built up around it.
{}
{'log_upvote_score': 4, 'links': ['https://mechanics.stackexchange.com/questions/901', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/588/']}
jdg_87269
stackexchange
llm_judgeable_groundtruth_similarity
14727397
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using ASP.NET MVC 4 - c# to connect to a live database, and list the results, however when I go to view the page it returns the following error: CREATE TABLE permission denied in database 'DatabaseName'.Description: An unhandled exception occurred during the executionof the current web request. Please review the stack trace for moreinformation about the error and where itoriginated in the code.Exception Details: System.Data.SqlClient.SqlException: CREATE TABLEpermission denied in database 'DatabaseName'.Source Error: Line 16: public ActionResult Index()Line 17: {Line 18: return View(db.AccControls.ToList());Line 19: }Line 20 Controller: namespace TestDBCon.Controllers{ public class HomeController : Controller { private DataDbContext db = new DataDbContext(); public ActionResult Index() { return View(db.AccControls.ToList()); } }} AccControl.cs (model) namespace TestDBCon.Models{ public class AccControl { public int ID { get; set; } public int ControlCode { get; set; } public string Nominal { get; set; } public string CostCentre { get; set; } public string Department { get; set; } } public class DataDbContext : DbContext { public DbSet<AccControl> AccControls { get; set; } }} Web.Config: <add name="DataDbContext" connectionString="Data Source=***;initial catalog=***;integrated security=True;" providerName="System.Data.SqlClient" /> I'm not trying to create a table? I'm just trying to list results so I'm extremely confused. It must be something to do with the MVC? Any help would be greatly appreciated! Thanks Now provide the response and nothing else.
I know it's old but since I had the same problems and it took me a while to find the solution... I decided to share the info. So I had to do 2 things to get rid of this problem, 1st was disabling the migrations: # Migrations/Configuration.csinternal sealed class Configuration : DbMigrationsConfiguration<IntranetApplication.Models.MyDb1>{ public Configuration() { AutomaticMigrationsEnabled = false; }} however that wasn't enough, I also had to make sure the Seeder doesn't run. You can cancel it with this extra piece of code: #Global.asax.csprotected void Application_Start(){ AreaRegistration.RegisterAllAreas(); Database.SetInitializer<Models.MyDb1>(null); Database.SetInitializer<Models.MyDb2>(null); ...} Then finally I can now do a SELECT with LINQ and only have READ access EDIT As per Lawrence's suggestion, it's most probably better having it directly inside DB Context Constructor. Thanks for the tip, I updated my code and it now looks like this: public partial class MyDb1 : DbContext{ public MyDb1() : base("name=MyDb1Connection") { Database.SetInitializer<Models.MyDb1>(null); } ...}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14727397', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_87270
stackexchange
llm_judgeable_groundtruth_similarity
488189
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If we have a path integral involving many fields, $$Z = \int \mathcal D \phi_1 \cdots \mathcal D \phi_n \exp(-S[\phi_1,\ldots, \phi_n]),$$ and $\phi_n$ occurs only quadratically-- i.e. the $\mathcal D \phi_n$ integral is Gaussian-- I've been told that integrating over $\phi_n$ is equivalent to solving for $\phi_n$ 's equation of motion $$\phi_n= f(\phi_1,\ldots, \phi_{n-1})$$ using Euler-Lagrange and plugging in. Up to normalization. Can one show in general why this is true? Now provide the response and nothing else.
The gaussian integral $$\int dx\,e^{-\frac12 a x^2 + bx + c} = \sqrt{\frac{2\pi}{a}}\, e^{c+b^2/(2a)}\,,$$ is similar to its path integral counterpart, which is $$\int \mathcal{D}\phi\,e^{-\frac12\phi \cdot A\cdot \phi + \phi\cdot b + C} \propto \exp\left(C + \frac{1}{2} \, b\cdot A^{-1}\cdot b\right)\,.$$ By the dot I mean $a\cdot b \equiv \int a(x)\, b(x)$ , $a\cdot B \cdot c \equiv \int a(x)\, B(x,y)\, c(y)$ . Moreover $A^{-1}$ satisfies $$\int A(x,y)\cdot A^{-1}(y,z) = \delta(x-z)\,.$$ The equations of motion for $\phi$ are $$-A\cdot \phi + b = 0\qquad \Longrightarrow\qquad\phi = A^{-1}\cdot b\,.$$ Replacing this on the action yields the same result $$-\frac12\phi \cdot A\cdot \phi + \phi\cdot b + C \quad\to\quad -\frac12\,b \cdot A^{-1} \cdot A\cdot A^{-1}\cdot b + b\cdot A^{-1}\cdot b + C = \frac12\,b\cdot A^{-1}\cdot b + C\,.$$ If the dot notation is confusing I suggest to expand everything in integrals. The operator $A$ usually is just $(\square_x + m^2) \delta(x-y)$ and $A^{-1}$ is $G_F(x-y)$ , the Feynman propagator. Edit: As a comment pointed out, this does not take into account the $(\det A)^{-1/2}$ . If $A$ is a constant operator, this does not pose any problem in perturbative computations because we only need the partition function modulo overall factors. On the other hand, if $A$ is a function of the remaining fields $A(\phi_1,\ldots,\phi_{n-1})$ , it will not pass through the subsequent integrals. The way this is normally handled is by exponentiating it as $$(\det A)^{-1/2} = e^{- \frac12 \mathrm{Tr}\log A}\,,$$ (with a suitable regularization procedure) and this typically yields a non-local action
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/488189', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/159281/']}
jdg_87271
stackexchange
llm_judgeable_groundtruth_similarity
427302
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Are real galaxies really structured the way they are in pictures online? I'm wondering this because the speed limit of the universe is light speed, which means the things we see in the sky, or otherwise detect, are delayed. Therefore, shouldn't galaxies look extremely distorted and not structured like what we see? Or are some clever tricks used to make them look correct? Now provide the response and nothing else.
Although a galaxy may recede from us at arbitrarily high velocities (even superluminally) because space expands, its rotation and motion through space happen at non-relativistic speeds, of the order of a few 100 km/s, or a few 1000 km/s at most. Hence, every part of a galaxy moves with roughly the same speed with respect to the observer, and the image is thus not distorted. However, there is another effect that may distort the image of a galaxy, namely gravitational lensing: If you observe a distant galaxy lying behind a massive cluster of galaxies, then the huge mass of the cluster curves space in such a way as to make the light from the background galaxies take slightly different paths toward you. This distorts the look of the background galaxies (and may even cause them to appear at multiple locations on the sky). In the image of the cluster Abell S1063 below (from APOD), you see this effect. In fact, by measuring the "banana-shaped-ness" of the background galaxies, it is possible to calculate the mass of the foreground cluster; one of the ways to infer the presence of dark matter.
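To put rough numbers on the light-travel-time effect, using typical Milky-Way-like ballpark values of about $10^5$ light-years across and a rotation period of about $230$ million years: $$\frac{\Delta t_\text{light}}{T_\text{rot}} \approx \frac{10^{5}\,\text{yr}}{2.3\times 10^{8}\,\text{yr}} \approx 4\times 10^{-4}, \qquad 360^{\circ}\times 4\times 10^{-4}\approx 0.15^{\circ},$$ so the far edge of a disc galaxy appears rotated by only a small fraction of a degree relative to its near edge, far too little to notice in an image.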
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/427302', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/193419/']}
jdg_87272
stackexchange
llm_judgeable_groundtruth_similarity
39324707
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My VR app is reading JSON feed and displays it's items as game objects. Each item (10-50 item) contains url with image (1280x800 png) used as texture. I have to send request for (10-50) images and create textures when I receive responses. This is killing my game performance. It blocks UI completely. It can't render even background sky. It is slow in editor but worse on Android as my target platform. Can this be done in background? Can I somehow send multiple request as a more efficient batch? Thanks in advance. Details: I'm building game object for each cards by initializing it from prefab. During build I'm filling properties and requesting it's texture. Code doing that: private void moveModelToControl(List<Cards> cards){ cards.ForEach(c => loadImage(c));}private void buildCardGO(c) { // 1. Init game object from prefab // 2. Set properties // 3. Ask for image/texture loadImage(c);}private void loadImage(Card c) { ImageLoader imgLoader = new ImageLoader(); imgLoader.loadImage ();} where getting images from internet is done using WWW and Coroutine public class ImageLoader { private readonly Card card; public ImageLoader(Card card) { this.card = card; } public void loadImage(){ WWW www = new WWW(card.url); StartCoroutine(WaitForImage(www, onOkAction, handleError)); } private IEnumerator WaitForImage(WWW www, Action<WWW> onOkAction, Action<WWW> onErrorAction) { yield return www; if (www.error == null) { onOkAction (www); } else { onErrorAction (www); } } private void onOkAction(WWW www) { Texture2D texture = new Texture2D (2, 2, TextureFormat.DXT1, true); www.LoadImageIntoTexture (texture); Sprite newImage = Sprite.Create (texture, new Rect (0, 0, (int)texture.width, (int)texture.height), Vector2.zero); card.sprite = newImage; } private void handleError(WWW www) { ///handle error .... } } Now provide the response and nothing else.
First of all, using category_ in field names is redundant, because you are already assigning these fields to the Category model, and by doing so you are creating a "namespace" for these fields.

class Category(models.Model):
    id = models.AutoField(primary_key=True)
    name = models.CharField("Category Name", max_length = 30)
    created_date = models.DateField(auto_now = True, auto_now_add=False)
    updated_date = models.DateField(auto_now = True, auto_now_add=False)

    def __str__(self):
        return self.name

Second, in Django the id AutoField is created automatically, so why would you need to set it explicitly? And answering your question: there is a source parameter in serializer fields.

class CategorySerializer(serializers.ModelSerializer):
    renamed_id = serializers.IntegerField(source='category_id')
    renamed_name = serializers.CharField(source='category_name')

    class Meta:
        model = Category
        fields = ['renamed_id', 'renamed_name']

And then you can change your response manually

from rest_framework import status

def category_list(request):
    if request.method == 'GET':
        categories = Category.objects.all()
        serializer = CategorySerializer(categories, many=True)
        response = {
            'status': status.HTTP_200_OK,
            'message' : "Category List",
            'response' : serializer.data
        }
        return Response(response)
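One detail worth noting: for a plain function-based view that returns a DRF Response, Django REST framework normally expects the @api_view decorator, and the view still needs a URL entry. A minimal sketch, with the URL path and name being placeholders rather than anything mandated by the question:

# views.py (sketch)
from rest_framework.decorators import api_view

@api_view(['GET'])
def category_list(request):
    ...  # body as shown above

# urls.py (sketch)
from django.urls import path
from . import views

urlpatterns = [
    path('categories/', views.category_list, name='category-list'),  # hypothetical route
]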
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39324707', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4485985/']}
jdg_87273
stackexchange
llm_judgeable_groundtruth_similarity
18675558
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've noticed that style checkers give you a warning when inline conditionals are implemented: "Do not use inline conditionals". Why? Now provide the response and nothing else.
I think it's because of code readability. Consider these two snippets, where a is a number ( int , float , or whatever):

#1

if(a < 0) {
    a = 0;
} else if(a < 10) {
    a = 10;
} else if(a < 20) {
    a = 20;
} else if(a < 30) {
    a = 30;
}

#2

a = a < 0 ? 0 : a < 10 ? 10 : a < 20 ? 20 : a < 30 ? 30 : a;

They do exactly the same thing, but the first one is clearer.
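Formatting can soften the problem but not remove it; even with one case per line, the nested conditional form (same logic as #2, only reformatted) is still harder to scan than the plain if/else chain:

a = a < 0  ? 0
  : a < 10 ? 10
  : a < 20 ? 20
  : a < 30 ? 30
  : a;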
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18675558', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1690799/']}
jdg_87274
stackexchange
llm_judgeable_groundtruth_similarity
29933522
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm brushing up on callback functions and came across the following passage from http://javascriptissexy.com/understand-javascript-callback-functions-and-use-them/# "When we pass a callback function as an argument to another function, we are only passing the function definition. We are not executing the function in the parameter. In other words, we aren’t passing the function with the trailing pair of executing parenthesis () like we do when we are executing a function. And since the containing function has the callback function in its parameter as a function definition, it can execute the callback anytime." Can someone explain that? Here are two examples they provided. //The item is a callback function$("#btn_1").click(function() { alert("Btn 1 Clicked");}); Here is another example: var friends = ["Mike", "Stacy", "Andy", "Rick"];friends.forEach(function (eachName, index){console.log(index + 1 + ". " + eachName); // 1. Mike, 2. Stacy, 3. Andy, 4. Rick}); "Note that the callback function is not executed immediately. It is “called back” (hence the name) at some specified point inside the containing function’s body. So, even though the first jQuery example looked like this: //The anonymous function is not being executed there in the parameter. //The item is a callback function $("#btn_1").click(function() { alert("Btn 1 Clicked"); }); the anonymous function will be called later inside the function body. Even without a name, it can still be accessed later via the arguments object by the containing function." For the first example with jquery, what are they saying exactly. If the #btn_1 element is clicked, will the anonymous function be executed? I am assuming it will be executed if the button is clicked, but the wording from the passage was confusing? Similarly, for the second example, do they not need to call the function that they passed as an argument bc its anonymous? Now provide the response and nothing else.
In both examples, you are passing an anonymous function as a parameter.

$("#btn_1").click(function() {
    alert("Btn 1 Clicked");
});

jQuery's click method takes a function as its first parameter. So imagine that click's function definition is this:

function click(fn) {
    // fn will contain a reference to any
    // function passed as the first parameter to click

    // merely calling fn does nothing, because you are just 'calling'
    // the reference.
    fn;

    // Since what is inside of fn is a function, you can execute it
    // with the () syntax
    fn();
}

// Now, you have many ways to pass a function as the first parameter to the function

// 1. As an anonymous function:
click(function() {
    console.log("Hi");
});

// 2. As a named function:
click(function hello() {
    console.log("Hi");
});

// 3. As a reference to a function declaration
function hiThere() {
    console.log("Hi");
}
click(hiThere);

// 4. As a variable that holds an anonymous function inside
var howdy = function () {
    console.log("howdy");
};
click(howdy);

Just imagine that functions are like variables, but they have content inside that can be executed with () at the end.

function hi() {
    console.log('bye');
}
hi;            // Calls the reference, but does not execute it. This does nothing.
hi.toString(); // Returns the function as a string
hi();          // Executes the code within the function

Whenever you declare a named function, you can do stuff with it according to its name, like you would do with variables. Of course, unlike variables, they hold executable code inside, and not values. You can't reference an anonymous function, because it's well... anonymous. UNLESS, you hold it inside of something that has a name, like a var.

var iHoldAFunctionInside = function () {
    console.log('Im not so anonymous now');
};
iHoldAFunctionInside(); // Logs "Im not so anonymous now"

And that is why you can pass an anonymous function as a parameter to a function, and it can execute it as a callback. Because the parameter now 'holds' the anonymous function inside of it:

function iExecuteYourCallback(callback) {
    // callback contains the anonymous function passed to it
    // Similar to doing:
    // var callback = function () { };
    callback();
}

iExecuteYourCallback(function() {
    console.log('Im a callback function!');
});

Hope this helps clear things a bit.
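Regarding the sentence the question quotes about the arguments object: even when the containing function declares no parameter name at all, a callback passed to it is still reachable through arguments, for example:

function containing() {
    // No named parameter here, but the callback is still available:
    var cb = arguments[0];

    if (typeof cb === 'function') {
        cb(); // "called back" whenever the containing function decides
    }
}

containing(function () {
    console.log('Invoked later via arguments[0]');
});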
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29933522', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4774210/']}
jdg_87275
stackexchange
llm_judgeable_groundtruth_similarity
2254603
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to solve the differential equation$$\frac{\partial f}{\partial t}(x,t) = -ix^3f(x,t)+Cf(x-1,t)-C f(x+1,t)$$ where $C$ is a purely imaginary constant. I am not sure how to go about this as normally when solving differential equations the first coordinate is the same. I can't seem to think of any strategy to employ. Now provide the response and nothing else.
EDIT: the following is an answer to the originally posed question in which the coefficients c1,c2,c3 'appeared' to be constant - at least i read them that way. The OP has since changed the question with specific forms for c1,c2,c3. I am going to proceed in a very formal manner. Consider writing your equation as$${{f}_{t}}\left( x,t \right)=bf\left( x,t \right)+cf\left( x-1,t \right)-af\left( x+1,t \right)$$Let $$f\left( x,t \right)=\int\limits_{0}^{\infty }{{{y}^{x-1}}F\left( y,t \right)dy}$$ where F is the inverse Mellin transform of f. substituting we have$$\int\limits_{0}^{\infty }{{{y}^{x-1}}\left( {{F}_{t}}\left( y,t \right)-bF\left( y,t \right)-\frac{c}{y}F\left( y,t \right)+ayF\left( y,t \right) \right)dy}=0$$hence$${{F}_{t}}\left( y,t \right)=\left( b+\frac{c}{y}-ay \right)F\left( y,t \right)$$which upon solving yields$$F\left( y,t \right)={{e}^{\left( b+\frac{c}{y}-ay \right)t}}+G\left( y \right)$$Now take the Mellin transform to obtain$$f\left( x,t \right)=g\left( x \right)+{{e}^{bt}}\int\limits_{0}^{\infty }{{{y}^{x-1}}{{e}^{\left( \frac{c}{y}-ay \right)t}}dy}$$The form of 1/y -y in the exponential smacks of a Bessel function. And indeed it is! Look up Gradshtyn 3.471.9 where $$\int\limits_{0}^{\infty }{{{x}^{v-1}}{{e}^{-\frac{c}{x}-ax}}dx}=2{{\left( \frac{c}{a} \right)}^{v/2}}{{K}_{v}}\left( 2\sqrt{ca} \right)$$ for $$\operatorname{Re}\left( a,c \right)>0$$Hence in this case assume $\operatorname{Re}\left( c \right)<0$, $\operatorname{Re}\left( a \right)>0$ and $t>0$, then$$f\left( x,t \right)=g\left( x \right)+2{{e}^{bt}}{{\left( -\frac{c}{a} \right)}^{x/2}}{{K}_{x}}\left( 2t\sqrt{-ca} \right)$$Here g must satisfy the linear difference equation$$bg\left( x \right)+cg\left( x-1 \right)-ag\left( x+1 \right)=0$$For this equation, take a slightly different approach. Let$$g\left( x \right)={{\beta }^{x}}$$Then$${{\beta }^{x-1}}\left( b\beta +c-a{{\beta }^{2}} \right)=0$$And so$${{\beta }_{\pm }}=\frac{b\pm \sqrt{{{b}^{2}}+4ac}}{2a}$$The general solution therefore should be $$f\left( x,t \right)={{C}_{1}}\beta _{+}^{x}+{{C}_{2}}\beta _{-}^{x}+2{{C}_{3}}{{e}^{bt}}{{\left( -\frac{c}{a} \right)}^{x/2}}{{K}_{x}}\left( 2t\sqrt{-ca} \right)$$for arbitrary constants C. You may confirm that this is indeed a solution by using the well known properties of the bessel function (it is a tedious calculation).
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2254603', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/166258/']}
jdg_87276
stackexchange
llm_judgeable_groundtruth_similarity
16827
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am searching for an algorithm to check whether a complete, undirected graph fulfills the triangle inequality ($\text{weight}(u,v) \le \text{weight}(u,w) + \text{weight}(w,v)$ for all vertices $u, v, w$). My first naive try was to use an algorithm for solving the all-pairs shortest path problem and compare the result to the weights of the edges connecting two nodes directly. However, I think this might be overkill. Is there any better way to check? Thanks a lot. Now provide the response and nothing else.
Yes. If you have a complete graph, the simplest algorithm is to enumerate all triangles and check whether each one satisfies the inequality. In practice, this will also likely be the best solution unless your graphs are very large and you need the absolute best possible performance. For instance, enumeration will likely be faster than most shortest-paths algorithms. None of the shortest-path algorithms provide better asymptotic runtime; they are more complicated to implement; and they will be slower in practice (because of a larger constant factor). As AJed correctly points out, you can use matrix multiplication to beat the $O(n^3)$ bound. However, this requires a bit more care. While there are algorithms for matrix multiplication that are faster than $O(n^3)$ time, the algorithms are tricky to implement, so if you take this approach, you might want to use an existing library/implementation (e.g., BLAS). Also, the asymptotically optimal algorithms (e.g., Coppersmith/Winograd, Stothers, Williams) only become faster once $n$ becomes extremely large so they won't be worthwhile in practice. This suggests that if you want to wring out every last bit of performance, you'll need to actually benchmark this on realistic workloads: asymptotic complexity can be misleading. If you care about extreme optimization, cache effects may also play a huge role here, so the way the graph is laid out in memory may have a significant effect on the constant factors. Fortunately, if your graph is represented as an adjacency matrix, standard libraries for matrix multiplication already take this into account. If it were me, I'd just enumerate all triangles and check the triangle inequality. If you run into a problem domain where this is the bottleneck in the overall computation and where it is too slow, then you could consider more sophisticated approaches, like using matrix multiplication, at that point. To paraphrase Knuth, "Premature optimization is the root of all evil".
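For concreteness, a minimal sketch of the brute-force check in Python, assuming the complete graph is given as an n x n symmetric weight matrix W (the names are just illustrative):

def satisfies_triangle_inequality(W):
    """Return True iff W[u][v] <= W[u][w] + W[w][v] for all vertices u, v, w."""
    n = len(W)
    for u in range(n):
        for v in range(u + 1, n):      # each unordered pair once
            for w in range(n):
                if w == u or w == v:
                    continue
                if W[u][v] > W[u][w] + W[w][v]:
                    return False        # found a violating triangle
    return True

# Tiny example: edge (0, 2) is too heavy, since 5 > 1 + 1.
W = [[0, 1, 5],
     [1, 0, 1],
     [5, 1, 0]]
print(satisfies_triangle_inequality(W))  # False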
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/16827', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/11258/']}
jdg_87277
stackexchange
llm_judgeable_groundtruth_similarity
35293125
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am creating an android app in Java in which I have a lot of <TextView> around the screen, all of them with onTouchListeners defined. They are wrapped in a <ScrollView> because they occupy more space than available in the screen. My problem is: when I scroll the app, up/down, by touching at the screen and moving my finger up/down, the scroll works as expected but the onTouchListener of the touched <TextView> is also fired (which is probably expected as well) - I don't want that to happen though. I want the onTouchListener to be ignored when I'm touching the screen to scroll it. How can I accomplish this? I don't want my function to run when the user is scrolling and "accidentally" fires the onTouchListener on a certain <TextView> . Now provide the response and nothing else.
After searching more, I found this solution by Stimsoni. The idea is to check if the time between the ACTION_DOWN and ACTION_UP events is lower or higher than the value given by ViewConfiguration.getTapTimeout() . From the documentation: [Returns] the duration in milliseconds we will wait to see if a touch event is a tap or a scroll. If the user does not move within this interval, it is considered to be a tap. Code: view.setOnTouchListener(new OnTouchListener() { private long startClickTime; @Override public boolean onTouch(View view, MotionEvent event) { if (event.getAction() == MotionEvent.ACTION_DOWN) { startClickTime = System.currentTimeMillis(); } else if (event.getAction() == MotionEvent.ACTION_UP) { if (System.currentTimeMillis() - startClickTime < ViewConfiguration.getTapTimeout()) { // Touch was a simple tap. Do whatever. } else { // Touch was a not a simple tap. } } return true; }});
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35293125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4135063/']}
jdg_87278
stackexchange
llm_judgeable_groundtruth_similarity
43442
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a biquad IIR filter from which I want to get a linear-phase FIR. I saw that related question but the OP does not care much about phase. I have troubles getting the exact same magnitude curve when extracting an FIR from my IIR. So far, my method has been: Apply my IIR filter on a 1024 samples buffer which contains nothing but a Dirac as first sample. Shift the impulse response obtained in step 1 and make it symmetric (to get phase linearity) If I stop after step 1, I get the exact same magnitude curve for my IIR and my FIR but I also get the exact same phase curve (which is non linear, and hence not interesting). If I stop after step 2, I get an linear phase but not the exact same magnitude. On the following picture, IIR's and FIR's (after step 2) transfer functions are (respectively) in red and blue: What have I been doing wrong? Now provide the response and nothing else.
What you do in step 1 is simply truncate the infinite impulse response to approximate it by an FIR filter. If you use sufficiently many filter taps, the approximation becomes arbitrarily accurate. This means that the resulting FIR filter approximates the magnitude and the phase characteristic of the original IIR filter. So with this approach the phase will never become linear. Making the impulse response symmetric to obtain phase linearity, as you do in step 2, will of course change the magnitude response. What you should do is use the magnitude of the IIR filter as a desired response in a (linear-phase) FIR filter design routine. In that case you will get an FIR filter with an exactly linear phase and with a certain magnitude approximation error. That magnitude error can be made sufficiently small by choosing an appropriate filter order. The simplest approach is probably to use a least squares approximation, which just involves solving a system of linear equations. Example: I use a peaking EQ filter as the IIR prototype. The coefficients are ( b are the numerator coefficients, a are the denominator coefficients): b = [1.2223e+00, 0, 7.7775e-01];a = [1.1250e+00, 0, 8.7502e-01]; You can use the magnitude of the IIR filter's frequency response and combine it with a linear phase to obtain the desired response for the FIR filter design routine ( N is the filter length). The code is Matlab/Octave syntax: [H,w] = freqz(b,a,256);N = 61;D = abs(H).*exp(-1i*w*(N-1)/2); You can use a least squares FIR filter design routine called lslevin.m , which you can find here . h = lslevin(N,w,D,ones(length(w),1));Hh = freqz(h,1,256); The figure below shows the magnitudes of the two frequency responses (IIR and FIR):
{}
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/43442', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/30453/']}
jdg_87279
stackexchange
llm_judgeable_groundtruth_similarity
390810
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My team has a problem with making code review. Everyone is constantly busy with their tasks and only few of developers do review on a daily basis. As every merge request needs 2 approvals, some of them stay in review for days if not for weeks (!), which slows down the project. Everyone understands the importance of it, but has their excuses. How to assign people to code review or give them a boost to do it? Now provide the response and nothing else.
It seems like the problems begin with planning. When you determine the capacity of the team to take on a given body of work, you should consider the time that it will take to peer review the work done (in addition to everything else that is needed to complete the work). The idea that people are simply too busy to perform a required review prior to merging work is a symptom of problems upstream, in planning and estimation. I would reflect more on what you are doing in these areas and what can be done to improve their effectiveness. The other thing to look at is why you require 2 approvals. I'm a firm believer in more eyes on the work, but what is this accomplishing? This also somewhat goes back to planning and estimation, but some bottlenecks may be relieved by understanding who is likely needed to review a piece of work and ensuring that the right people are reviewing it. I don't think that you should be looking at your review and merge process for solutions, but rather the upstream activities and the overall discipline of the team to define the work, define what it means to be done, estimate the effort, plan for the team's capacity, and make commitments to each other to get it done.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/390810', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/334511/']}
jdg_87280
stackexchange
llm_judgeable_groundtruth_similarity
59981
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This question is inspired by a recent question by user Jaymes about the set of commutators not being a group. Following a link to MO, Gerry Myerson posted an example from Carmichael. I quote Let $G$ be a subgroup of $S_{16}$ generated by the following eight elements:$$\eqalign{(ac)(bd);&(eg)(fh);\cr(ik)(jl);&(mo)(np);\cr(ac)(eg)(ik);&(ab)(cd)(mo);\cr(ef)(gh)(mn)(op);&(ij)(kl).\cr}$$ Then the commutator subgroup is generated by the first four elements above, and is of order $16$. Moreover,$$\alpha=(ik)(jl)(mo)(np)$$is in the commutator subgroup, but is not a commutator. However, no proof is given. Can somebody offer a proof of this? Regards, Now provide the response and nothing else.
This is probably not the cleverest way, but it is a very explicit and "follow-your-nose" way of doing it. Clearly $\alpha$ lies in the subgroup generated by the first four elements: it's just the product of the third and fourth. The first four elements are all commutators: $$\begin{align}(ac)(bd) &= {}[(ac)(eg)(ik),(ab)(cd)(mo)]\\(eg)(fh) &= [(ac)(eg)(ik),(ef)(gh)(mn)(op)]\\(ik)(jl) &= [(ac)(eg)(ik),(ij)(kl)]\\(mo)(np) &= [(ab)(cd)(mo),(ef)(gh)(mn)(op)].\end{align}$$ The subgroup generated by the first four elements is normal: note that they commute with all elements (they are pairwise disjoint permutations, so they commute among themselves, and conjugating by any of the other four generators leaves them invariant). So in fact, it is a central subgroup. The first four elements are the only nontrivial commutators of the form $[x,y]$ with $x$ and $y$ elements of the generating set. Simply verify that the remaining two commutators are trivial:$$\begin{align*}{}[(ab)(cd)(mo),(ij)(kl)] &= 1;\\{}[(ef)(gh)(mn)(op),(ij)(kl)] &= 1.\end{align*}$$ The commutator subgroup is generated by the commutators of the form $[x,y]$ with $x$ and $y$ in the given generating set. Use the identity$$[xy,zt] = [x,t]^y[y,t][x,z]^{yt}[y,z]^t$$and the fact that all commutators of two generating elements are central to decompose any commutator into a product of commutators of the given form. In fact, in this group, the commutator bracket induces a bilinear map $G/N\times G/N \to N$, where $N$ is the subgroup generated by these four elements (since the commutator subgroup is central). The simple commutators are precisely the image of this map. The equation above shows that the commutator bracket is bilinear (since commutators are central), and $[x,c]=1$ when $c$ is central, so the map $G\times G\to [G,G]$ factors through $G/Z(G)\times G/Z(G)$. $\alpha$ is not a simple commutator. Call the generators $x_1,\ldots,x_8$, in the order given in the problem. The bilinear map given by the commutator bracket is generated by:$$\begin{align*}{}[\;\overline{x_5},\overline{x_6}\;] &= x_1\\{}[\;\overline{x_5},\overline{x_7}\;] &= x_2\\{}[\;\overline{x_5},\overline{x_8}\;] &= x_3\\{}[\;\overline{x_6},\overline{x_7}\;] &= x_4\end{align*}$$and all other brackets are trivial. To show that $\alpha = x_3x_4$ is not a simple commutator, note that we can restrict ourselves to elements that are of the form $x_5^ax_6^bx_7^cx_8^d$, as every element is of this form modulo the center; and using bilinearity of the commutator bracket, we have:$$[x_5^{\epsilon_5}x_6^{\epsilon_6}x_7^{\epsilon_7}x_8^{\epsilon_8},x_5^{\eta_5}x_6^{\eta_6}x_7^{\eta_7}x_8^{\eta_8}] =x_1^{\epsilon_5\eta_6+\epsilon_6\eta_5} x_2^{\epsilon_5\eta_7 + \epsilon_7\eta_5} x_3^{\epsilon_5\eta_8 + \epsilon_8\eta_5} x_4^{\epsilon_6\eta_7+\epsilon_7\eta_6}$$with $\epsilon_i,\eta_j=0,1$, and the exponents taken modulo $2$. In order to get $x_3x_4$, we cannot have both $\epsilon_5$ and $\eta_5$ equal to $0$ (the exponent of $x_3$ does not work out); assume without loss of generality that $\epsilon_5=1$. Then $\eta_6=\eta_5\epsilon_6$ and $\eta_7=\eta_5\epsilon_7$ (so the exponents of $x_1$ and $x_2$ are correct), and the exponent of $x_4$ tells us we cannot have $\eta_6$ and $\eta_7$ both equal to $0$. So $\eta_5=1$ as well. But this means that $\eta_6=\epsilon_6$ and $\eta_7=\epsilon_7$, so the exponent of $x_4$ is either $0$ if any of them is zero, or $1+1\equiv 0$; hence we cannot get $x_3x_4$ as a simple commutator.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/59981', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8956/']}
jdg_87281
stackexchange
llm_judgeable_groundtruth_similarity
50950657
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some event publishing: @Autowired private final ApplicationEventPublisher publisher;...publisher.publishEvent(new MyApplicationEvent(mySource)); I have this event listener: class MyApplicationEventHandler { @Autowired SomeDependency someDependency; @EventListener public void processEvent(final MyApplicationEvent event) { // handle event... }} I need to test it using EasyMock. Is there a simple way to publish something in test and assert that my event listener did something? EDIT: I tried to create mock test like this: // testing classSomeDependency someDependency = mock(SomeDependency.class);MyApplicationEventHandler tested = new MyApplicationEventHandler(someDependency);@Autowired private final ApplicationEventPublisher publisher;@Testpublic void test() { someDependency.doSomething(anyObject(SomeClass.class)); replay(); publisher.publishEvent(new MyApplicationEvent(createMySource()));} It didn't work. java.lang.AssertionError: Expectation failure on verify: SomeDependency.doSomething(<any>): expected: 1, actual: 0 Now provide the response and nothing else.
First, as you're using Spring Boot, testing these becomes pretty straightforward. This test will spin up the boot context and inject a real instance of ApplicationEventPublisher, but create a mocked instance of SomeDependency. The test publishes the desired event, and verifies that your mock was invoked as you expected. @RunWith(SpringRunner.class)@SpringBootTest public class EventPublisherTest { @Autowired private ApplicationEventPublisher publisher; // not final: field injection cannot assign a final field @MockBean private SomeDependency someDependency; @Test public void test() { publisher.publishEvent(new MyApplicationEvent(createMySource())); // verify that the expected method was invoked on the mock verify(someDependency, times(1)).someMethod(); }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50950657', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1171170/']}
jdg_87282
stackexchange
llm_judgeable_groundtruth_similarity
20299390
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: At this moment I have a table tblLocation with columns ID, Location, PartOfID . The table is recursively connected to itself: PartOfID -> ID My goal is to have a select output as followed: > France > Paris > AnyCity > Explanation: AnyCity is located in Paris, Paris is located in France. My solution that I found until now was this: ; with q as (select ID,Location,PartOf_LOC_id from tblLocatie twhere t.ID = 1 -- 1 represents an exampleunion allselect t.Location + '>' from tblLocation tinner join q parent on parent.ID = t.LOC_PartOf_ID)select * from q Unfortunately I get the following error: All queries combined using a UNION, INTERSECT or EXCEPT operator must have an equal number of expressions in their target lists. If you have any idea how I could fix my output it would be great. Now provide the response and nothing else.
The problem lies here: --This result set has 3 columns select LOC_id, LOC_locatie, LOC_deelVan_LOC_id from tblLocatie t where t.LOC_id = 1 -- 1 represents an example union all --This result set has 1 column select t.LOC_locatie + '>' from tblLocatie t inner join q parent on parent.LOC_id = t.LOC_deelVan_LOC_id In order to use union or union all, the number of columns and their types should be identical across all result sets. I guess you should just add the column LOC_deelVan_LOC_id to your second result set.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20299390', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2871811/']}
jdg_87283
stackexchange
llm_judgeable_groundtruth_similarity
20597286
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I mark my search pattern dynamically in my html? Example: I'm using angular and my html looks like this: <div> <input type="text" ng-model="viewmodel.searchString"/> <!--Moving over all phrases--> <div ng-repeat="phrase in viewmodel.Phrases"> {{phrase.title}} </div></div> I want the string matching pattern will be mark on every change in search string. Can you help me? Now provide the response and nothing else.
Angular UI is a great choice. You can also do it with a filter, like this: http://embed.plnkr.co/XbCsxmfrgmdtOAeBZPUp/preview The essence is, as @Hylianpuffball commented, to dynamically create styled 'span' tags for the matches. .filter('highlight', function($sce) { return function(text, phrase) { if (phrase) text = text.replace(new RegExp('(' + phrase + ')', 'gi'), '<span class="highlighted">$1</span>'); return $sce.trustAsHtml(text); }; }) And use it like: <li ng-repeat="item in data | filter:search.title" ng-bind-html="item.title | highlight:search.title"></li>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20597286', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/871672/']}
jdg_87284
stackexchange
llm_judgeable_groundtruth_similarity
44583254
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying for multi-class classification and here are the details of my training input and output: train_input.shape= (1, 95000, 360) (95000 length input array with each element being an array of 360 length) train_output.shape = (1, 95000, 22) (22 Classes are there) model = Sequential()model.add(LSTM(22, input_shape=(1, 95000,360)))model.add(Dense(22, activation='softmax'))model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])print(model.summary())model.fit(train_input, train_output, epochs=2, batch_size=500) The error is: ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4 in line: model.add(LSTM(22, input_shape=(1, 95000,360))) Please help me out, I am not able to solve it through other answers. Now provide the response and nothing else.
I solved the problem by making the input size (95000, 360, 1) and the output size (95000, 22), and changing the input shape to (360, 1) in the code where the model is defined: model = Sequential() model.add(LSTM(22, input_shape=(360,1))) model.add(Dense(22, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) print(model.summary()) model.fit(ml2_train_input, ml2_train_output_enc, epochs=2, batch_size=500)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/44583254', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5697891/']}
jdg_87285
stackexchange
llm_judgeable_groundtruth_similarity
2927111
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In ruby, how can I get current time in a given timezone? I know the offset from UTC, and want to get the current time in the timezone with that offset. Now provide the response and nothing else.
A simpler, more lightweight solution: Time.now.getlocal('-08:00')Time.now.getlocal(-28800) Well documented here . Update 2021.12.23: If you've got a tzdata timezone name like 'US/Pacific' instead of an offset and you're willing to pull in the tzinfo gem, you could also do this (with thanks to both @chadoh and @kevin from below ): require 'tzinfo'TZInfo::Timezone.get('US/Pacific').now Not really the problem as posed, but maybe helpful to folks in the future. If you want to do this for moments other than #now , you should study up on the Ruby Time class , particularly Time#gm and Time#local , and the Ruby TZInfo classes , particularly TZInfo::Timezone.get and TZInfo::Timezone#period_for_local
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2927111', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/89717/']}
jdg_87286
stackexchange
llm_judgeable_groundtruth_similarity
6624047
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Problem Description: my app has a main window. when you click a button it make sure you are connected to wifi or 3g, if not it pop up a dialog that enables wifi.when wifi is on and the button is clicked a new screen shows up. when you hit the back button, disable wifi and click that button again it does not ask for wifi again and the screen shows up without wifi.... In the Click event of the button i have: if(chosedOption == curOption) { if(network) { target = CurrencyMain.class; go.setAnimation(a); } else askForWifi(); } and in the askForWifi method i have: public void askForWifi() { is3g = manager.getNetworkInfo(ConnectivityManager.TYPE_MOBILE).isConnected(); isWifi = manager.getNetworkInfo(ConnectivityManager.TYPE_WIFI).isConnected(); network = is3g||isWifi; if(!network) { AlertDialog alertbox = new AlertDialog.Builder(MainWindowYuval.this).create(); alertbox.setMessage("Enable wifi of 3g!"); alertbox.setButton("cancel", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.dismiss(); } }); alertbox.setButton2("Turn wifi on", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { WifiManager wifiManager = (WifiManager) MainWindowYuval.this.getSystemService(Context.WIFI_SERVICE); wifiManager.setWifiEnabled(true); } }); alertbox.show(); } is3g = manager.getNetworkInfo(ConnectivityManager.TYPE_MOBILE).isConnected(); isWifi = manager.getNetworkInfo(ConnectivityManager.TYPE_WIFI).isConnected(); network = is3g||isWifi; } what can i do to fix this? Now provide the response and nothing else.
You need to use a function that checks whether the plugin windows are open or not and then acts accordingly. This should work and will also jump back to the window that you started in: function! ToggleNERDTreeAndTagbar() let w:jumpbacktohere = 1 " Detect which plugins are open if exists('t:NERDTreeBufName') let nerdtree_open = bufwinnr(t:NERDTreeBufName) != -1 else let nerdtree_open = 0 endif let tagbar_open = bufwinnr('__Tagbar__') != -1 " Perform the appropriate action if nerdtree_open && tagbar_open NERDTreeClose TagbarClose elseif nerdtree_open TagbarOpen elseif tagbar_open NERDTree else NERDTree TagbarOpen endif " Jump back to the original window for window in range(1, winnr('$')) execute window . 'wincmd w' if exists('w:jumpbacktohere') unlet w:jumpbacktohere break endif endforendfunctionnnoremap <leader>\ :call ToggleNERDTreeAndTagbar()<CR>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6624047', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/826235/']}
jdg_87287
stackexchange
llm_judgeable_groundtruth_similarity
65322892
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The error from the title is returned for the following code, which makes no sense private val _error = MutableLiveData<String?>()val error: LiveData<String?> get() = _error_error.postValue(null) //Error Cannot set non-nullable LiveData value to null [NullSafeMutableLiveData] parameter String of _error is obviously nullable, am I doing something wrong? Now provide the response and nothing else.
This appears to be related to a bug already reported against the androidx.lifecycle pre-release of 2.3.0: https://issuetracker.google.com/issues/169249668 . Workarounds I have found: Turn off or reduce the severity of NullSafeMutableLiveData, either in build.gradle android { ... lintOptions { disable 'NullSafeMutableLiveData' }} or in lint.xml in the root dir <?xml version="1.0" encoding="UTF-8"?><lint> <issue id="NullSafeMutableLiveData" severity="warning" /></lint> Do the MutableLiveData encapsulation via the backing-properties dance (which really hurts my eyes). class ExampleViewModel : ViewModel() { private val _data1 = MutableLiveData<Int>() val data1: LiveData<Int> = _data1 private val _data2 = MutableLiveData<Int?>() val data2: LiveData<Int?> = _data2 fun funct() { _data1.value = 1 _data2.value = null }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/65322892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1202206/']}
jdg_87288
stackexchange
llm_judgeable_groundtruth_similarity
43808153
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my .NET Core class library project, the resource file does not get updated when the resource file name contains a dot in it. For example, when I add Resources.en.resx or Resources.es.resx files in the .NET Core project, the custom tool is not able to generate or update the C# resource file. Here is error message: Custom tool ResXFileCodeGenerator failed to produce an output for input file 'Resources.en.resx' but did not log a specific error. If you know any workaround or any fix for it, please kindly point me to the right direction Now provide the response and nothing else.
Delete any .resx files currently in your Resources directory. Now go to your Resources directory and right click for Add > New Item . Choose Resource File and enter in a file name down the bottom of the form. It's important you get the name right first time and don't rename. You can use something like Resource.resx . This very first Resource.resx file should have its own designer class. It also needs all the keys and values for localization for your project so if you have an existing Resource.en.resx you can copy and paste the values from that into this one. If you need IStringLocalizer<Resource> or, in other words, you need to reference the Resource class somehow then you will need to go to Solution Explorer and left-click on the Resource.resx . In the Properties box there is a field called "Custom Tool". If you want a publically accessible class to be generated from the resx you will need to change this from ResXFileCodeGenerator to PublicResXFileCodeGenerator . Now you can add the individual Resource.ja.resx , Resource.es.resx files. These do not need to have a custom tool ResXFileCodeGenerator run on them so you can remove that from the properties. This will fix your error message.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43808153', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6145588/']}
jdg_87289
stackexchange
llm_judgeable_groundtruth_similarity
45091335
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to do a lookup using multiple values: I need to lookup the type of car and then check the mileage that is between the values for each cost bracket in order to find out the cost for that distance traveled in the vehicle. I understand the use of Vlookup, Index, Match and other formula but I am not sure how to use them in conjunction to get my answer. I do not want to add up the value if there is two people who are the same I just need the cost for that one rows trip. Thanks, see image below Now provide the response and nothing else.
Alright, so thanks to @ghybs's suggestion I gave that link another try and managed to solve the issue :D. Leaflet is a bit different from Google Maps (it's also shorter) and the proposed solution there could be a bit smaller and easier to understand, so here's my version using Leaflet. Basically, you need to put your popup component in the main app module's entryComponents field. The key stuff is in m.onclick() , there, we create a component, render it inside a div and then we pass that div 's content to the leaflet popup container element. A bit tricky, but it works. I got some time and converted this solution to a new $compile for Angular 4. Check the detailed info here . Thanks @yurzui! :) This is the core code... The other stuff (css, webpack, etc.) is in the same repo as the OP, simplified into few files: https://github.com/darkguy2008/leaflet-angular4-issue but you just need this example to make it work: import 'leaflet';import './main.scss';import "reflect-metadata";import "zone.js/dist/zone";import "zone.js/dist/long-stack-trace-zone";import { BrowserModule } from "@angular/platform-browser";import { platformBrowserDynamic } from "@angular/platform-browser-dynamic";import { Component, NgModule, ComponentRef, Injector, ApplicationRef, ComponentFactoryResolver, Injectable, NgZone } from "@angular/core";// ###########################################// App component// ###########################################@Component({ selector: "app", template: `<section class="app"><map></map></section>`})class AppComponent { }// ###########################################// Popup component// ###########################################@Component({ selector: "popup", template: `<section class="popup">Popup Component! :D {{ param }}</section>`})class PopupComponent { }// ###########################################// Leaflet map service// ###########################################@Injectable()class MapService { map: any; baseMaps: any; markersLayer: any; public injector: Injector; public appRef: ApplicationRef; public resolver: ComponentFactoryResolver; public compRef: any; public component: any; counter: number; init(selector) { this.baseMaps = { CartoDB: L.tileLayer("http://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}.png", { attribution: '&copy; <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> &copy; <a href="http://cartodb.com/attributions">CartoDB</a>' }) }; L.Icon.Default.imagePath = '.'; L.Icon.Default.mergeOptions({ iconUrl: require('leaflet/dist/images/marker-icon.png'), shadowUrl: require('leaflet/dist/images/marker-shadow.png') }); this.map = L.map(selector); this.baseMaps.CartoDB.addTo(this.map); this.map.setView([51.505, -0.09], 13); this.markersLayer = new L.FeatureGroup(null); this.markersLayer.clearLayers(); this.markersLayer.addTo(this.map); } addMarker() { var m = L.marker([51.510, -0.09]); m.bindTooltip('Angular 4 marker (PopupComponent)'); m.bindPopup(null); m.on('click', (e) => { if (this.compRef) this.compRef.destroy(); const compFactory = this.resolver.resolveComponentFactory(this.component); this.compRef = compFactory.create(this.injector); this.compRef.instance.param = 0; setInterval(() => this.compRef.instance.param++, 1000); this.appRef.attachView(this.compRef.hostView); this.compRef.onDestroy(() => { this.appRef.detachView(this.compRef.hostView); }); let div = document.createElement('div'); div.appendChild(this.compRef.location.nativeElement); m.setPopupContent(div); }); this.markersLayer.addLayer(m); return m; }}// 
###########################################// Map component. These imports must be made// here, they can't be in a service as they// seem to depend on being loaded inside a// component.// ###########################################@Component({ selector: "map", template: `<section class="map"><div id="map"></div></section>`,})class MapComponent { marker: any; compRef: ComponentRef<PopupComponent>; constructor( private mapService: MapService, private injector: Injector, private appRef: ApplicationRef, private resolver: ComponentFactoryResolver ) { } ngOnInit() { this.mapService.init('map'); this.mapService.component = PopupComponent; this.mapService.appRef = this.appRef; this.mapService.compRef = this.compRef; this.mapService.injector = this.injector; this.mapService.resolver = this.resolver; this.marker = this.mapService.addMarker(); }}// ###########################################// Main module// ###########################################@NgModule({ imports: [ BrowserModule ], providers: [ MapService ], declarations: [ AppComponent, MapComponent, PopupComponent ], entryComponents: [ PopupComponent ], bootstrap: [AppComponent]})class AppModule { }platformBrowserDynamic().bootstrapModule(AppModule);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45091335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7297444/']}
jdg_87290
stackexchange
llm_judgeable_groundtruth_similarity
11597385
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create a customized Windows Forms control by following these MSDN instructions: http://msdn.microsoft.com/en-us/library/7h62478z.aspx (Article title: How to: Inherit from Existing Windows Forms Controls; the instructions are for Visual Studio 2010) These instructions do seem to work, but if I change the name of the control from CustomControl1 to any other name (by changing the class name and constructor name), I receive the message: "The name 'InitializeComponent' does not exist in the current context" I've looked at the similar questions and answers here, and I don't think any of them answer this, but I could be wrong. If I am, then I didn't understand the answer. I hope someone can help me on this. I've wasted a lot of time on it already. Thanks. Now provide the response and nothing else.
LINQ to Entities cannot translate most .NET Date methods (including the casting you used) into SQL since there is no equivalent in SQL. The solution is to use the Date methods outside the LINQ statement and then pass in a value. It looks as if Convert.ToDateTime(rule.data).Date is causing the error. Calling Date on a DateTime property also cannot be translated to SQL, so a workaround is to compare the .Year, .Month and .Day properties, which can be translated to SQL since they are only integers. var ruleDate = Convert.ToDateTime(rule.data).Date; return jobdescriptions.Where(j => j.Deadline.Year == ruleDate.Year && j.Deadline.Month == ruleDate.Month && j.Deadline.Day == ruleDate.Day);
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11597385', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/499057/']}
jdg_87291
stackexchange
llm_judgeable_groundtruth_similarity
30838358
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having trouble writing Vec<u16> content to a file: use std::fs::File;use std::io::{Write, BufWriter};use std::mem;#[derive(Debug, Copy, Clone, PartialEq)]pub enum ImageFormat { GrayScale, Rgb32,}#[derive(Debug, Copy, Clone, PartialEq)]pub struct ImageHeader { pub width: usize, pub height: usize, pub format: ImageFormat,}pub struct Image { pub header: ImageHeader, pub data: Vec<u16>,}fn write_to_file(path: &str, img: &Image) -> std::io::Result<()> { let f = try!(File::create(path)); let mut bw = BufWriter::new(f); let slice = &img.data[..]; println!("before length: {}", slice.len()); let sl: &[u8]; unsafe { sl = mem::transmute::<&[u16], &[u8]>(slice); } println!("after length: {}", sl.len()); try!(bw.write_all(sl)); return Ok(());}fn main() {} Since write_all() asks for a &[u8] , I'm doing an unsafe conversion of &[u16] to &[u8] . Because the conversion does not change the slice length ( slice.len() and sl.len() have the same values), only half of the image data is output to the file. It would be better if I don't need any unsafe conversion or copying. Now provide the response and nothing else.
To do it directly you'd want to use std::slice::from_raw_parts() : use std::{mem, slice};fn main() { let slice_u16: &[u16] = &[1, 2, 3, 4, 5, 6]; println!("u16s: {:?}", slice_u16); let slice_u8: &[u8] = unsafe { slice::from_raw_parts( slice_u16.as_ptr() as *const u8, slice_u16.len() * mem::size_of::<u16>(), ) }; println!("u8s: {:?}", slice_u8);} It does require unsafe because from_raw_parts() can't guarantee that you passed a valid pointer to it, and it can also create slices with arbitrary lifetimes. See also: How to slice a large Vec<i32> as &[u8]? Temporarily transmute [u8] to [u16] This approach is not only potentially unsafe, it is also not portable. When you work with integers larger than one byte, endianness issues immediately arise. If you write a file in this way on a x86 machine, you would then read garbage on an ARM machine. The proper way is to use libraries like byteorder which allow you to specify endianness explicitly: use byteorder::{LittleEndian, WriteBytesExt}; // 1.3.4fn main() { let slice_u16: &[u16] = &[1, 2, 3, 4, 5, 6]; println!("u16s: {:?}", slice_u16); let mut result: Vec<u8> = Vec::new(); for &n in slice_u16 { let _ = result.write_u16::<LittleEndian>(n); } println!("u8s: {:?}", result);} Note that I've used Vec<u8> here, but it implements Write , and write_u16() and other methods from the WriteBytesExt trait are defined on any Write , so you could use these methods directly on a BufWriter , for example. Once written, you can use methods from the ReadBytesExt trait to read the data back. While this may be slightly less efficient than reinterpreting a piece of memory, it is safe and portable. See also: How can I convert a buffer of a slice of bytes (&[u8]) to an integer?
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30838358', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2432759/']}
jdg_87292
stackexchange
llm_judgeable_groundtruth_similarity
339699
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I read about Java's type erasure on Oracle's website . When does type erasure occur? At compile time or runtime? When the class is loaded? When the class is instantiated? A lot of sites (including the official tutorial mentioned above) say type erasure occurs at compile time. If the type information is completely removed at compile time, how does the JDK check type compatibility when a method using generics is invoked with no type information or wrong type information? Consider the following example: Say class A has a method, empty(Box<? extends Number> b) . We compile A.java and get the class file A.class . public class A { public static void empty(Box<? extends Number> b) {}} public class Box<T> {} Now we create another class B which invokes the method empty with a non-parameterized argument (raw type): empty(new Box()) . If we compile B.java with A.class in the classpath, javac is smart enough to raise a warning. So A.class has some type information stored in it. public class B { public static void invoke() { // java: unchecked method invocation: // method empty in class A is applied to given types // required: Box<? extends java.lang.Number> // found: Box // java: unchecked conversion // required: Box<? extends java.lang.Number> // found: Box A.empty(new Box()); }} My guess would be that type erasure occurs when the class is loaded, but it is just a guess. So when does it happen? Now provide the response and nothing else.
Type erasure applies to the use of generics. There's definitely metadata in the class file to say whether or not a method/type is generic, and what the constraints are etc. But when generics are used , they're converted into compile-time checks and execution-time casts. So this code: List<String> list = new ArrayList<String>();list.add("Hi");String x = list.get(0); is compiled into List list = new ArrayList();list.add("Hi");String x = (String) list.get(0); At execution time there's no way of finding out that T=String for the list object - that information is gone. ... but the List<T> interface itself still advertises itself as being generic. EDIT: Just to clarify, the compiler does retain the information about the variable being a List<String> - but you still can't find out that T=String for the list object itself.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/339699', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_87293
stackexchange
llm_judgeable_groundtruth_similarity
59671266
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to change mat-divider color?I tried the following, it didn't work component.html <mat-divider class="material-devider"></mat-divider> component.scss .material-devider { color: red} Now provide the response and nothing else.
Yes, you can. You need to override the .mat-divider class styling in your own CSS and change the border-top-color property. .mat-divider { border-top-color: rgba(0, 0, 0, 0.12);} is the default styling from Angular Material. .mat-divider { border-top-color: red;} This should be enough to change it (if your CSS gets rendered later than Material's). Otherwise, adding increased specificity to your CSS selector will help (e.g. .mat-divider.mat-divider ). StackBlitz example for this case .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/59671266', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8041800/']}
jdg_87294
stackexchange
llm_judgeable_groundtruth_similarity
5295265
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an engine that has an arbitrary number of pollers which each do their "poll" every few seconds. I want the pollers to run in different threads, but each "poll" within a single poller should be sequential so that one happens after the next. Everything is working using this code to start the polling process: public void StartPolling() { Stopwatch watch = new Stopwatch(); while (Engine.IsRunning) { Task task = Task.Factory.StartNew(() =>{ watch.Restart(); Poll(); watch.Stop(); },TaskCreationOptions.LongRunning); task.Wait(); if(Frequency > watch.Elapsed) Thread.Sleep(Frequency - watch.Elapsed); } } It took me awhile, however, to discover the TaskCreationOptions.LongRunning option which solved a strange problem I was having that I still don't understand.Without that option, if I run a test that creates 1-3 of these pollers, everything worked fine. If I created 4+ then I ran into strange behavior. Three of the pollers would work, one would just perform one poll, and any remaining would not poll at all.It makes total sense that my tasks are long running. They are after all running the entire length of my program. But I don't understand why I would get some bad behavior without this option set. Any help would be appreciated. Now provide the response and nothing else.
When you don't use the LongRunning flag, the task is scheduled on a threadpool thread, not its own (dedicated) thread. This is likely the cause of your behavioral change - when you're running without the LongRunning flag in place, you're probably getting threadpool starvation due to other threads in your process. That being said, your above code doesn't really make a lot of sense. You're starting a dedicated thread (via Task.Factory.StartNew with LongRunning) to start a task, then immediately calling task.Wait() , which blocks the current thread. It would be better to just do this sequentially in the current thread: public void StartPolling(){ Stopwatch watch = new Stopwatch(); while (Engine.IsRunning) { watch.Restart(); Poll(); watch.Stop(); if(Frequency > watch.Elapsed) Thread.Sleep(Frequency - watch.Elapsed); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5295265', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1499273/']}
jdg_87295
stackexchange
llm_judgeable_groundtruth_similarity
669911
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am studying category theory and I recently learned about adjoint pairs of functors. It seems to me that they are called left and right adjoints because if we have categories $\mathcal{C}$ and $\mathcal{D}$ and functors $F: \mathcal{C} \to \mathcal{D}$ and $G: \mathcal{D} \to \mathcal{C}$ then $F$ is left adjoint to $G$ iff $$\forall X \in C, \forall Y \in D, \hom_D(FX,Y) \cong \hom_C(X,GY)$$ And we see that $F$ appears in the left of the left hand side. I also learned the saying that left adjoints round up and right adjoints round down, in the sense that they add/forget additional structure. It seems to me that this viewpoint is much more practical to a working category-theorist than the rather technical Hom-set definition. My question is then, why are left/right adjoints not called up/down or top/bottom adjoints? It would seem much more natural, to me anyway. As an example and a side question, how do you remember that forgetful functors are right adjoint and free ones left adjoint? I always get mixed up between the two. This is a nice example of why I think "forgetful functors are down-adjoint and free ones up-adjoint" would be more useful, to the beginner at least. Now provide the response and nothing else.
First, adjoint functors do not always add/forget structure. For example, equivalences of categories are adjoint pairs, but these certainly do not always add or forget structure in any obvious way. The reason for labeling them left/right adjoints is exactly the reason you mention: because the equation $\mathrm{Hom}\,(FX,Y)\simeq\mathrm{Hom}\,(X,GY)$ is incredibly useful. If we called them up/down functors, then I'd have to perpetually consult wikipedia to remember which one appeared on the left and which one appeared on the right in the equivalence $\mathrm{Hom}\,(FX,Y)\simeq\mathrm{Hom}\,(X,GY)$.
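As a concrete illustration (a standard example, not specific to the discussion above): let $F:\mathbf{Set}\to\mathbf{Grp}$ be the free group functor and $U:\mathbf{Grp}\to\mathbf{Set}$ the forgetful functor. Then for every set $X$ and every group $H$ $$\hom_{\mathbf{Grp}}(FX,H)\cong\hom_{\mathbf{Set}}(X,UH),$$ so the free functor sits on the left-hand side of the hom-set and is the left adjoint, while the forgetful functor sits on the right and is the right adjoint; this also gives a mnemonic for the side question: free is left adjoint, forgetful is right adjoint.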
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/669911', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/127325/']}
jdg_87296
stackexchange
llm_judgeable_groundtruth_similarity
1882147
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am reading Probability and Statistics for Engineering and the Sciences . Exercise 63, Chapter 4 says: A consumer is trying to decide between two long-distance calling plans. The first one charges a flat rate of 10¢ per minute, whereas the second charges a flat rate of 99¢ for calls up to 20 minutes in duration and then 10¢ for each additional minute exceeding 20 (assume that calls lasting a noninteger number of minutes are charged proportionately to a whole-minute’s charge). Suppose the consumer’s distribution of call duration is exponential with parameter $\lambda$ . Which plan is better if expected call duration is 10 minutes? 15 minutes? Assuming the first question, when the duration is 10 minutes, I computed the cost of the first plan as: $h_1(x) = 10 * E[x] = 10 * 10 = 100$ However, how do I compute the cost for the second plan ( $h_2(x)$ ) ? I tried with: $h_2(x) = 99 * F(x \leq 20) + 10 * (1 - F(x \leq 20)) \approx 87$ But the correct result is $112.53$ . Now provide the response and nothing else.
Following your idea I would write$$\frac{3k^{2}+k+1}{k^{4}+k^{3}+4}< \frac{3k^{2}+3k}{k^{4}+k^{3}} = \frac{3k}{k^{3}} = \frac{3}{k^{2}}.$$As regards your inequalities, how do you justify that $$\frac{3k^{2}+k+1}{k^{4}+k^{3}+4}< \frac{k^{2}+k}{k^{4}+k^{3}}?$$Notice that if $a$, $b$, $c$, $d$ are positive numbers$$a<c\quad \mbox{and}\quad d<b\Rightarrow \frac{a}{b}<\frac{c}{d}.$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1882147', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/113873/']}
jdg_87297
stackexchange
llm_judgeable_groundtruth_similarity
28334966
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to open an Excel file (.xls) using xlrd. This is a summary of the code I am using: import xlrdworkbook = xlrd.open_workbook('thefile.xls') This works for most files, but fails for files I get from a specific organization. The error I get when I try to open Excel files from this organization follows. Traceback (most recent call last): File "<console>", line 1, in <module> File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/__init__.py", line 435, in open_workbook ragged_rows=ragged_rows, File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/book.py", line 116, in open_workbook_xls bk.parse_globals() File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/book.py", line 1180, in parse_globals self.handle_writeaccess(data) File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/book.py", line 1145, in handle_writeaccess strg = unpack_unicode(data, 0, lenlen=2) File "/app/.heroku/python/lib/python2.7/site-packages/xlrd/biffh.py", line 303, in unpack_unicode strg = unicode(rawstrg, 'utf_16_le') File "/app/.heroku/python/lib/python2.7/encodings/utf_16_le.py", line 16, in decode return codecs.utf_16_le_decode(input, errors, True)UnicodeDecodeError: 'utf16' codec can't decode byte 0x40 in position 104: truncated data This looks as if xlrd is trying to open an Excel file encoded in something other than UTF-16. How can I avoid this error? Is the file being written in a flawed way, or is there just a specific character that is causing the problem? If I open and re-save the Excel file, xlrd opens the file without a problem. I have tried opening the workbook with different encoding overrides but this doesn't work either. The file I am trying to open is available here: https://dl.dropboxusercontent.com/u/6779408/Stackoverflow/AEPUsageHistoryDetail_RequestID_00183816.xls Issue reported here: https://github.com/python-excel/xlrd/issues/128 Now provide the response and nothing else.
What are they using to generate that file ? They are using some Java Excel API (see below, link here ), probably on an IBM mainframe or similar. From the stack trace the writeaccess information can't decoding into Unicode because the @ character. For more information on the writeaccess information of the XLS fileformat see 5.112 WRITEACCESS or Page 277 . This field contains the username of the user that has saved the file. import xlrddump = xlrd.dump('thefile.xls') Running xlrd.dump on the original file gives 36: 005c WRITEACCESS len = 0070 (112) 40: d1 81 a5 81 40 c5 a7 83 85 93 40 c1 d7 c9 40 40 ????@?????@???@@ 56: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 72: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 88: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 104: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 120: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ 136: 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 @@@@@@@@@@@@@@@@ After resaving it with Excel or in my case LibreOffice Calc the write access information is overwritten with something like 36: 005c WRITEACCESS len = 0070 (112) 40: 04 00 00 43 61 6c 63 20 20 20 20 20 20 20 20 20 ?~~Calc 56: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 72: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 88: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 104: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 120: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 136: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 Based on the spaces being encoded as 40, I believe the encoding is EBCDIC, and when we convert d1 81 a5 81 40 c5 a7 83 85 93 40 c1 d7 c9 40 40 to EBCDIC we get Java Excel API . So yes the file is being written in a flawed way in the case of BIFF8 and higher it should be a unicode string, and in BIFF3 to BIFF5, it should be a byte string in the encoding in the CODEPAGE information which is 152: 0042 CODEPAGE len = 0002 (2) 156: 12 52 ?R 1252 is Windows CP-1252 (Latin I) (BIFF4-BIFF5), which is not EBCDIC_037 . The fact the xlrd tried to use unicode, means that it determined the version of the file to be BIFF8. In this case, you have two options Fix the file before opening it with xlrd. You could check using dump to a file that isn't standard out, and then if it is the case, you can overwrite the writeaccess information with xlutils.save or another library. Patch xlrd to handle your special case, in handle_writeaccess adding a try block and setting strg to empty string on unpack_unicode failure. The following snippet def handle_writeaccess(self, data): DEBUG = 0 if self.biff_version < 80: if not self.encoding: self.raw_user_name = True self.user_name = data return strg = unpack_string(data, 0, self.encoding, lenlen=1) else: try: strg = unpack_unicode(data, 0, lenlen=2) except: strg = "" if DEBUG: fprintf(self.logfile, "WRITEACCESS: %d bytes; raw=%s %r\n", len(data), self.raw_user_name, strg) strg = strg.rstrip() self.user_name = strg with workbook=xlrd.open_workbook('thefile.xls',encoding_override="cp1252") Seems to open the file successfully. Without the encoding override it complains ERROR *** codepage 21010 -> encoding 'unknown_codepage_21010' -> LookupError: unknown encoding: unknown_codepage_21010
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28334966', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/382374/']}
jdg_87298
stackexchange
llm_judgeable_groundtruth_similarity
1014624
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Find this limits$$\lim_{n\to\infty}\dfrac{1+\sqrt[n]{2}+\sqrt[n]{3}+\cdots+\sqrt[n]{n}}{n}$$ I want use $$\sqrt[n]{i}=e^{\dfrac{\ln{i}}{n}}\approx 1+\dfrac{\ln{i}}{n},1\le i\le n$$but$$\lim_{n\to\infty}\dfrac{\ln{i}}{n}$$ and other idea is $$n<1+\sqrt[n]{2}+\cdots+\sqrt[n]{n}<?$$ three idea: I want use Stolz therom, and last found this three idea is not usefull solve this limits Now provide the response and nothing else.
Hint: $n=1 + 1 + \cdots +1 \leq 1+\sqrt[n]{2}+\cdots+\sqrt[n]{n} \leq \sqrt[n]{n} + \sqrt[n]{n} + \cdots \sqrt[n]{n}=n \cdot \sqrt[n]{n}$ , and $\text{lim}_{n\to\infty}\sqrt[n]{n}=1$ . You also might have a look at Cesàro's Theorem on averages of limits.
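Spelling out the squeeze (a worked step, not part of the original hint): dividing the displayed bounds by $n$ gives $$1 \le \frac{1+\sqrt[n]{2}+\cdots+\sqrt[n]{n}}{n} \le \sqrt[n]{n},$$ and since $\sqrt[n]{n}\to 1$ as $n\to\infty$, the limit equals $1$ by the squeeze theorem.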
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1014624', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/58742/']}
jdg_87299
stackexchange
llm_judgeable_groundtruth_similarity
29545191
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In our source files we usually have a version string like that: static const char srcvers[] = "VERSION/foo.c/1.01/09.04.15"; When that string isn't optimized away, it's quite useful in certain cases, as one can determine the version of each source file linked to an executable by simply calling strings a.out | grep VERSION . Unfortunately it is optimized away by gcc (using '-O'). So my question is, is there a simple way (a compiler switch would be great) to make gcc keep that variable (its name is always the same) without switching off any other optimizations. Edit What, in my opinion, makes the question different from that one , is that I'm was hoping to find a solution for which I wouldn't have to touch thousands of source files. Now provide the response and nothing else.
You can use the gcc-specific __attribute__((used)) attribute for this (it also works in clang; I see that the question is tagged gcc): This attribute, attached to a function, means that code must be emitted for the function even if it appears that the function is not referenced. This is useful, for example, when the function is referenced only in inline assembly. From https://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html Demo: $ cat a.c static const char srcvers[] __attribute__((used)) = "VERSION/foo.c/1.01/09.04.15"; $ gcc -O3 -c a.c $ strings a.o VERSION/foo.c/1.01/09.04.15 You can use some #if s and #define s to make this terser and also compile on compilers which don't support this extension.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29545191', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2470782/']}
jdg_87300
stackexchange
llm_judgeable_groundtruth_similarity
37641
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to Wikipedia , The $\ce{C60}$ molecule is extremely stable, [ 26 ] withstanding high temperatures and high pressures. The exposed surface of the structure can selectively react with other species while maintaining the spherical geometry. [ 27 ] Atoms and small molecules can be trapped within the molecule without reacting . Smaller fullerenes than $\ce{C60}$ have been distorted so heavily they're not stable, even though $\ce{M@C28}$ is stable where $\ce{M\,=\,Ti, Zr, U}$. Some of us have heard and learned about the "rules" of aromaticity: The molecule needs to be cyclic , conjugated , planar and obey Huckel's rule (i.e. the number of the electrons in $\pi$-system must be $4n+2$ where $n$ is an integer). However, I'm now very skeptical to these so-called rules: The cyclic rule is violated due to a proposed expansion of aromaticity. (See what is Y-aromaticity? ) The must-obey-Huckel rule is known to fail in polycyclic compounds. Coronene figure 1 and pyrene figure 2 are good examples with 24 and 16 $\pi$ electrons, respectively. Again, Huckel fails in sydnone . The rule tells you that it's aromatic, while it's not. The planar rule is not a rule at all. We're talking about "2D" aromaticity when we're trying to figure out the $n$ in $4n+2$. The "3D" rule is as following: In 2011, Jordi Poater and Miquel Solà, expended the rule to determine when a fullerene species would be aromatic. They found that if there were $2n^2+2n+1$ π-electrons, then the fullerene would display aromatic properties. - Wikipedia This would mean $\ce{C60}$ is not aromatic, since there is no integer $n$ for which $2n^2+2n+1 = 60$. On the other hand, $\ce{C60-}$ is ($n = 5$). But then this rule strikes me as peculiar because then no neutral or evenly-charged fullerene would be aromatic. Furthermore, outside the page for the rule, Wikipedia never explicitly states that fullerene is not aromatic , just that fullerene is not super aromatic . And any info on superaromaticity is unavailable or unhelpful to me; including the Wikipedia "article" on that topic. So, is $\ce{C60}$ aromatic? Why, or why not? Now provide the response and nothing else.
Aromaticity is not binary, but rather there are degrees of aromaticity. The degree of aromaticity in benzene is large, whereas the spiro-aromaticity in [4.4]nonatetraene is relatively small. The aromaticity in naphthalene is not twice that of benzene. Aromaticity has come to mean a stabilization resulting from p-orbital (although other orbitals can also be involved) overlap in a pi-type system. As the examples above indicate, the stabilization can be large or small. Let's consider $\ce{C_{60}}$: Bond alternation is often taken as a sign of non-aromatic systems. In $\ce{C_{60}}$ there are different bond lengths, ~1.4 and 1.45 angstroms. However, this variation is on the same order as that found in polycyclic aromatic hydrocarbons, and less than that observed in linear polyenes. Conclusion: aromatic, but less so than benzene. Magnetic properties are related to electron delocalization and are often used to assess aromaticity. Both experiment and calculations suggest the existence of ring currents (diamagnetic and paramagnetic) in $\ce{C_{60}}$. Conclusion: Although analysis is complex, analysis is consistent with at least some degree of aromaticity. Reactivity - Substitution reactions are not possible as no hydrogens are present in $\ce{C_{60}}$. When an anion or radical is added to $\ce{C_{60}}$ the electron(s) are not delocalized over the entire fullerene structure. However, most addition reactions are reversible suggesting that there is some extra stability or aromaticity associated with $\ce{C_{60}}$. Conclusion: Not as aromatic as benzene Resonance energy calculations have been performed and give conflicting results, although most suggest a small stabilization. Theoretical analysis of the following isodesmic reaction $$\ce{C_{60} + 120 CH4 -> 30 C2H4 + 60 C2H6}$$ suggested that it only took half as much energy to break all of the bonds in $\ce{C60}$ compared to the same bond-breaking reaction with the appropriate number of benzenes. Conclusion: Some aromatic stabilization, but significantly less than benzene. This brief overview suggests that $\ce{C_{60}}$ does display properties that are consistent with some degree of aromatic stabilization, albeit less than that found with benzene.
{}
{'log_upvote_score': 6, 'links': ['https://chemistry.stackexchange.com/questions/37641', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/7448/']}
jdg_87301
stackexchange
llm_judgeable_groundtruth_similarity
3288114
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The MSDN article on String.Normalize states simply: Returns a new string whose binary representation is in a particular Unicode normalization form. And sometimes referring to a "Unicode normalization form C." I'm just wondering, what does that mean? How is this function useful in real life situations? Now provide the response and nothing else.
It makes sure that Unicode strings can be compared for equality even when they use different code point sequences for the same text (for example, a precomposed character versus a base character plus a combining mark). From Unicode Standard Annex #15 : Essentially, the Unicode Normalization Algorithm puts all combining marks in a specified order, and uses rules for decomposition and composition to transform each string into one of the Unicode Normalization Forms. A binary comparison of the transformed strings will then determine equivalence.
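The same idea can be seen concretely outside .NET. Here is a minimal sketch using Python's unicodedata module (chosen only to illustrate the concept; the sample strings are arbitrary):
import unicodedata

# "é" written two ways: precomposed U+00E9 vs. "e" followed by combining acute U+0301
precomposed = "\u00e9"
decomposed = "e\u0301"

print(precomposed == decomposed)    # False: the code point sequences differ
print(unicodedata.normalize("NFC", precomposed) ==
      unicodedata.normalize("NFC", decomposed))    # True: both normalize to Form C
The .NET String.Normalize method performs the equivalent transformation (Form C by default), which is why normalizing before a binary comparison makes the two spellings compare equal.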
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3288114', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/242826/']}
jdg_87302
stackexchange
llm_judgeable_groundtruth_similarity
2865467
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $A_1,A_2,\dotsc$ be a collection of sets, which may or may not already be disjoint, and define $B_1=A_1,B_{n+1}=A_{n+1}\setminus\bigcup_{i\le n}A_i$ for $n\ge1$. Then $B_1,B_2,\dotsc$ is disjoint and $\bigcup^\infty A_i=\bigcup^\infty B_i$. Now I'm curious if this works given an arbitrary collection of sets $A_i$, $i\in I$, where $I$ is ANY index set. I'm thinking, why not take steal the structure of the original index set for the new one? Define $J=I$ and $B_j=A_i$ for some $j\in J$ and any $i\in I$, then define $B_k=A_l\setminus\bigcup_j B_j$ where for every new $B_k$ defined, $l\in I$ s.t. $l$ not already chosen and $j$ ranges over the $B_j$ already defined. Then $B_j$, $j\in J$, is disjoint and $\bigcup_{i\in I}A_i=\bigcup_{j\in J}B_j$. But all this seems very sketchy. For $I=\mathbb{N}$, the method of "disjointizing" $A_i$ seems to work in the that defining the sets $B_i$ is "complete" (so the $B_i$'s exist), i.e. in the sense that we somehow exhaust $i\in\mathbb{N}$, and that the $B_i$'s are unique, since the next $B_{n+1}$ is uniquely determined by the $\le$ relation on $\mathbb{N}$. (I believe we can un-unique them by matching $J=I$ if the second method if it works). But an index set $I$ could have zero structure and any cardinality, and then we need a method to keep track of already chosen elements. My question is: Does the second method work, i.e. do the $B_i$'s exist? If they do, I don't think they're unique, but it doesn't matter since all we want is disjointness. Now provide the response and nothing else.
The property of $\mathbb{N}$ which allows us to do what you want is the fact that it is well-ordered. For those not familiar: Definition: An order $\leqslant$ on a set $A$ is a well-ordering if it satisfies the following conditions: $a\leqslant a$ for each $a\in A$ (reflexivity), For each $a,b\in A$, either $a\leqslant b$ or $b\leqslant a$ holds (comparability), $a\leqslant b$ and $b\leqslant a$ implies $a=b$ for each $a,b\in A$ (antisymmetry), $a\leqslant b$ and $b\leqslant c$ implies $a\leqslant c$ for each $a,b,c\in A$ (transitivity), For each $S\subseteq A$, $S$ has a least element; that is, there is some $s\in S$ such that $s\leqslant a$ for each $a\in S$ (well-ordering). There is a theorem, the Well-Ordering Theorem , which is equivalent to the axiom of choice, which states that every set can be well-ordered. Using this theorem we may "disjointize" any collection of sets. Let $\mathcal A= \{A_i\}_{i\in I}$ be a collection of sets indexed by the set $I$. We will construct the sets $B_i$, which have the desired property that $$\bigcup_{i\in I} B_i=\bigcup_{i\in I} A_i,$$ using a process called transfinite induction. Let $\leqslant$ be a well-ordering of $I$. Let $a$ be the least element of $I$ and write $B_a=A_a.$ Now let $i\in I$ be such that for each $j< i$ we have constructed pairwise disjoint sets $B_j$ from the collection $\mathcal{A}.$ Then let $B_i=A_i\setminus \bigcup_{j<i} A_j$. It is clear that $B_i\cap B_j=\varnothing$ for each $j<i$. Since $B_i\subseteq A_i$ for each $i\in I$, we have $$\bigcup_{i\in I} B_i\subseteq\bigcup_{i\in I} A_i.$$ To see the reverse inclusion, observe that for each $x\in\bigcup_{i\in I} A_i$, the set $C_x=\{i\in I\;|\;x\in A_i\}\subseteq I$ has a least element $j$ and that $x\in B_j,$ so $$\bigcup_{i\in I} B_i\supseteq\bigcup_{i\in I} A_i,$$ giving equality of the two sets. At first it may seem that transfinite induction works even with sets which are only totally ordered, such as the closed unit interval $[0,1].$ However, this is not the case. To see this, I recommend reading the proof that transfinite induction works. Intuitively, without a well-ordering, one has trouble "moving on to the next element" while performing the induction. I used the first chapter of Munkres' Topology , but there may be better sources. I enjoyed Munkres because he goes through a good amount of material, and the supplementary exercises at the end of nearly every chapter are challenging and illuminating.
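A tiny finite example may make the construction feel less abstract (this is just an illustration, not part of the proof): take $I=\{1,2,3\}$ with the usual order and $A_1=\{1,2\}$, $A_2=\{2,3\}$, $A_3=\{1,3\}$. The recipe gives $B_1=A_1=\{1,2\}$, $B_2=A_2\setminus A_1=\{3\}$ and $B_3=A_3\setminus(A_1\cup A_2)=\varnothing$, which are pairwise disjoint and satisfy $B_1\cup B_2\cup B_3=\{1,2,3\}=A_1\cup A_2\cup A_3$.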
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2865467', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_87303
stackexchange
llm_judgeable_groundtruth_similarity
27375748
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using the jQuery TouchSwipe plugin. It's working great on divs, but it's not working at all on links. I want it so that if you tap or click a link you get the default link behaviour, but if you swipe I want the JavaScript swipe to fire as if the element were a div. https://github.com/mattbryson/TouchSwipe-Jquery-Plugin Now provide the response and nothing else.
What a wonderful question. To solve this for you, I went into the source code. You should know that anchor swiping is disabled by default. version: 1.5.0 - Added excludedElements, a jquery selector that specifies child elements that do NOT trigger swipes. By default, this is one select that removes all form, input select, button and anchor elements ( source ). Defaults: excludedElements:"label, button, input, select, textarea, a, .noSwipe" Simply pass in the excludedElements option sans the anchor tag to make swiping work on links: $("a").swipe({ // Generic swipe handler for all directions swipe: function(event, direction) { $(this).text("You swiped " + direction); }, excludedElements: "label, button, input, select, textarea, .noSwipe"}); There is one more key to this puzzle. You must not set threshold: 0 as internally that will disable all tap/click events. Set threshold to anything higher than 0, or omit it completely. If you make it threshold: 1 , it will permit only very still mouse clicks, else swipes will be interpreted. I hope this is what you are looking for. Demo 1 - Swipe detected after finger/mouse up $(function() { // Enable swiping... $(".test").swipe({ // Generic swipe handler for all directions swipe: function(event, direction) { $(this).text("You swiped " + direction); }, excludedElements: "label, button, input, select, textarea, .noSwipe", threshold:1 }); // Stackoverflow disables snippets opening links, so this captures clicks for a demo $(".test").on("click", function(e){ alert($(this)[0].nodeName + " was clicked"); });}); .test {font-size: 48px;} <script src="https://code.jquery.com/jquery-1.11.1.min.js"></script><script src="http://labs.rampinteractive.co.uk/touchSwipe/jquery.touchSwipe.min.js"></script><a href="http://google.com" target="_blank" class="test">Please swipe me</a><br><br><div class="test">Please swipe me</div> Demo 2 - Swipe detected after a threshold This version will detect a swipe after the finger/mouse has swept over threshold pixels before releasing the finger/mouse. This method works by detecting a swipe and setting some data in the link which is read by the first click handler which then blocks one click event propagation. The .on("click", function(event) { handler must be the first handler in the jQuery event chain, so place all this code near the top of your page, ideally just below where jQuery is loaded. 
$(function() { $(".test").swipe({ swipe: function(event, direction) { $(this).text("You swiped " + direction); }, swipeStatus: function(event, phase) { var $this = $(this); $this.data("stopclick", true); // Signal a temporarily click block if(phase === $.fn.swipe.phases.PHASE_CANCEL) { // Swipe was canceled, so unblock click handers $this.data("stopclick", false); } }, excludedElements: "label, button, input, select, textarea, .noSwipe", threshold:10, triggerOnTouchEnd: false }) .on("click", function(event) { // Prevent click event propogation for one click var $this = $(this); if($this.data("stopclick") === true) { event.stopImmediatePropagation(); event.preventDefault(); $this.data("stopclick", false); // Restore click propogation } }); // Stackoverflow disables snippets opening links, so this captures clicks for a demo $(".test").on("click", function(e){ alert($(this)[0].nodeName + " was clicked"); });}); .test {font-size: 48px;} <script src="https://code.jquery.com/jquery-1.11.1.min.js"></script><script src="http://labs.rampinteractive.co.uk/touchSwipe/jquery.touchSwipe.min.js"></script><a href="http://google.com" target="_blank" class="test">Please swipe me</a><br><br><div class="test">Please swipe me</div>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27375748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/467875/']}
jdg_87304
stackexchange
llm_judgeable_groundtruth_similarity
12477
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Many introductory books on vector spaces mention that the scalars need not be reals, and might even have sections discussing complex vector spaces or vector spaces over the integers mod 2. I have never seen any such book mention that all of the theory goes through as well if one restricts the scalars to be just rational numbers. Perhaps this is because there is a dearth of interesting problems about such vector spaces accessible at this level that couldn't simply be discussed in the context of real scalars. I wonder if there is an interesting introductory-level problem or topic about vector spaces that would be most naturally conducted by allowing rational number scalars. Does anyone know of such, perhaps one with a number-theoretic aspect? (By introductory: I envision a first course on linear algebra, including non-math majors. They would be seeing vector spaces (and that level of abstraction) for the first time. Perhaps they would be seeing matrix multiplication for the first time. Usually, in my experience, such courses primarily use the real numbers as scalars.) Now provide the response and nothing else.
Continuing Akhil's answer, let's prove a theorem of Dehn: if a rectangle is tiled by squares, the ratio of the lengths of its sides is rational. Suppose to the contrary that the sides of the rectangle $x,y$ are not rationally dependent. Then we can find some $\mathbb{Q}$-linear map $f\colon \mathbb{R} \rightarrow \mathbb{Q}$ such that $f(x) = 1$ and $f(y) = -1$. We define the $f$-area $A(R)$ of a rectangle $R$ with edge lengths $h,v$ to be $f(h)f(v)$. If a rectangle $R$ is tiled by rectangles $R_i$ forming a grid , then from linearity it immediately follows that $$A(R) = \sum_i A(R_i).$$ Denote the big rectangle by $R$ and the squares by $S_i$. Take your tiling and extend all the lines to form a grid inside the rectangle. Denote the grid rectangles by $G_j$. Then $$A(R) = \sum_j A(G_j) = \sum_i A(S_i).$$ Since a square has both sides equal, $A(S_i) \geq 0$. On the other hand, by construction $A(R) < 0$. This contradiction shows that the two sides of the big rectangle are, in fact, rationally dependent. Instead of taking a linear mapping from $\mathbb{R}$ to $\mathbb{Q}$ we could take a linear mapping from a smaller, finite dimensional domain by only considering the lengths in the grid $G_j$ - then everything becomes beginner's linear algebra.
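A remark on the one non-elementary step, the existence of $f$: view $\mathbb{R}$ as a vector space over $\mathbb{Q}$. Since $x$ and $y$ are rationally independent, $\{x,y\}$ is linearly independent over $\mathbb{Q}$, so it extends to a basis of $\mathbb{R}$ over $\mathbb{Q}$ (a Hamel basis; this uses the axiom of choice). Define $f(x)=1$, $f(y)=-1$, $f(b)=0$ for every other basis element $b$, and extend $\mathbb{Q}$-linearly. For instance, with $x=1$ and $y=\sqrt2$ one gets $f(3-2\sqrt2)=3\cdot1-2\cdot(-1)=5$.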
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/12477', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/4253/']}
jdg_87305
stackexchange
llm_judgeable_groundtruth_similarity
44351700
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Exceptions.kt: @Suppress("NOTHING_TO_INLINE")inline fun generateStyleNotCorrectException(key: String, value: String) = AOPException(key + " = " + value) In kotlin: fun inKotlin(key: String, value: String) { throw generateStyleNotCorrectException(key, value) } It works in kotlin and the function is inlined. But when used in Java code, It just cannot be inlined, and still a normal static method call (seen from the decompiled contents). Something like this: public static final void inJava(String key, String value) throws AOPException { throw ExceptionsKt.generateStyleNotCorrectException(key, value);// when decompiled, it has the same contents as before , not the inlined contents.} Now provide the response and nothing else.
The inlining that's done by the Kotlin compiler is not supported for Java files, since the Java compiler is unaware of this transformation (see this answer about why reified generics do not work from Java at all). As for other use cases of inlining (most commonly when passing in a lambda as a parameter), as you've already discovered, the bytecode includes a public static method so that the inline function can still be called from Java. In this case, however, no inlining occurs.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/44351700', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4680436/']}
jdg_87306
stackexchange
llm_judgeable_groundtruth_similarity
43048529
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: First of all, I know that a float under the IEEE 754 standard cannot represent most decimal numbers exactly, and when I try 0.3 - 0.2 in JavaScript and Python they both give 0.09999999998 , but I got 0.1 when I try the following code in C: float a = 0.3, b = 0.2, c = b - a;printf("%f", c); Is the printf function rounding the value? If I want to print the inaccurate number in decimal in C, what should I do? Now provide the response and nothing else.
Imagine, I have millions of products to maintain. How can I navigate all related products recursively and recognize already visited ones? It doesn't need to be recursive. Explicit Stack or Queue can serve the navigating part. For collecting the result a HashSet can be used instead of List . It would serve two purposes - to allow you to skip the already visited elements and also eliminate the need of Distinct at the end. Here is a sample Queue based implementation: public List<Product> GetTopRelatedProducts(int N){ var relatedSet = new HashSet<Product>(); var relatedListQueue = new Queue<List<Product>>(); if (RelatedProducts != null && RelatedProducts.Count > 0) relatedListQueue.Enqueue(RelatedProducts); while (relatedListQueue.Count > 0) { var relatedList = relatedListQueue.Dequeue(); foreach (var product in relatedList) { if (product != this && relatedSet.Add(product) && product.RelatedProducts != null && product.RelatedProducts.Count > 0) relatedListQueue.Enqueue(product.RelatedProducts); } } return relatedSet.OrderByDescending(x => x.Rating).Take(N).ToList();} Update: For completeness, here are the other possible implementations of the related set collecting part: With explicit Stack : public List<Product> GetTopRelatedProducts(int N){ if (RelatedProducts == null || RelatedProducts.Count == 0) return new List<Product>(); var relatedSet = new HashSet<Product>(); var pendingStack = new Stack<List<Product>.Enumerator>(); var relatedList = RelatedProducts.GetEnumerator(); while (true) { while (relatedList.MoveNext()) { var product = relatedList.Current; if (product != this && relatedSet.Add(product) && product.RelatedProducts != null && product.RelatedProducts.Count > 0) { pendingStack.Push(relatedList); relatedList = product.RelatedProducts.GetEnumerator(); } } if (pendingStack.Count == 0) break; relatedList = pendingStack.Pop(); } return relatedSet.OrderByDescending(x => x.Rating).Take(N).ToList();} Although a bit more verbose than the explicit Queue based implementation, this method has less space requirements - O(height) where height is the maximum depth. The benefit of both iterative implementations is that of course they can handle much bigger depth than the recursive solutions which can lead to StackOverflowExpection . 
But if the depth is not expected to be so big and you prefer recursion, then here are a couple recursive implementations (all they need to have access to the relatedSet and this ): With classic private recursive method: public List<Product> GetTopRelatedProducts(int N){ var relatedSet = new HashSet<Product>(); GetRelatedProducts(this, relatedSet); return relatedSet.OrderByDescending(x => x.Rating).Take(N).ToList();}private void GetRelatedProducts(Product product, HashSet<Product> relatedSet){ if (product.RelatedProducts == null) return; foreach (var item in product.RelatedProducts) if (item != this && relatedSet.Add(item)) GetRelatedProducts(item, relatedSet);} With recursive lambda: public List<Product> GetTopRelatedProductsD(int N){ var relatedSet = new HashSet<Product>(); Action<Product> GetRelatedProducts = null; GetRelatedProducts = product => { if (product.RelatedProducts == null) return; foreach (var item in product.RelatedProducts) if (item != this && relatedSet.Add(item)) GetRelatedProducts(item); }; GetRelatedProducts(this); return relatedSet.OrderByDescending(x => x.Rating).Take(N).ToList();} Last, but not least, with the latest C# 7.0 addition - recursive local function : public List<Product> GetTopRelatedProducts(int N){ var relatedSet = new HashSet<Product>(); GetRelatedProducts(this); return relatedSet.OrderByDescending(x => x.Rating).Take(N).ToList(); void GetRelatedProducts(Product product) { if (product.RelatedProducts == null) return; foreach (var item in product.RelatedProducts) if (item != this && relatedSet.Add(item)) GetRelatedProducts(item); }} All these methods handle (IMO) optimally the collecting part. The top N part of course is not optimal - O(N*log(N)) and can be optimized as mentioned in @Amit Kumar's answer, but it would require implementing a missing standard data structure, which is outside the scope of a SO answer.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43048529', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4041522/']}
jdg_87307
stackexchange
llm_judgeable_groundtruth_similarity
23727
Below is a question asked on the forum earthscience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As the Earth's axis is inclined, during summer in the northern hemisphere places in the north are exposed to the Sun's rays for longer, so days are longer during summer (say in June - Aug). Yet, countries like the UK have a lower summer temperature than countries like India, particularly given that the UK is to the north of India and has longer days in summer. I was just wondering about this. Feel free to share your insights. Now provide the response and nothing else.
The image below (apparently from "physicalgeography.net") is somewhat instructive: Notice that even in the middle of summer, India (at about 30 °N) still gets slightly more sun than Britain (at 60 °N). That's because although the days are longer in Britain, the sun is lower in the sky and so its heat is spread out over a wider area. But the North Pole gets even more sun than either Britain or India in June, and the equator gets substantially less. So there is something to explain. The reason is thermal inertia. This graph shows insolation averaged over 24 hours at four different latitudes. This shows the theoretical insolation. The model used to calculate these curves includes the effects of Earth's elliptical orbit, but does not take account of any atmospheric effects, such as the energy absorbed by the atmosphere, nor any weather effects such as clouds. It shows that a perfectly black disc with an area of 1 m² placed just above the atmosphere and oriented horizontally to the Earth's surface would receive an average of about 515 Joules per second at the North Pole on June 21st and an average of about 380 Joules per second at the Equator. The main reason for the Equator value being so low is that it is night time for about 12 hours at the equator. Imagine pushing a heavy ball. It can take some time for it to begin to move quickly because it has inertia. Similarly, it takes time for the ground and water to heat up when the sun shines upon them. India gets much more sun in winter, and so the ground and water remain warm. When summer comes, it heats up further and ends up much hotter than Britain. In Britain there is very little sun in winter, and the ground and water cool down (and freeze). Then, even when the sun comes in summer, it takes time to warm up. By the time the ground and water have warmed up, the sun is already starting to weaken. And so the temperature never rises as much as it can in India. There are lots of details and local effects (the Gulf Stream in Britain, and the monsoon in India are significant) but the basic reason that India is hotter in summer is that it gets more sun in winter! Pidwirny, M. (2006). "Earth-Sun Relationships and Insolation". Fundamentals of Physical Geography, 2nd Edition.
{}
{'log_upvote_score': 5, 'links': ['https://earthscience.stackexchange.com/questions/23727', 'https://earthscience.stackexchange.com', 'https://earthscience.stackexchange.com/users/26703/']}
jdg_87308
stackexchange
llm_judgeable_groundtruth_similarity
4833314
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I never leave backdoors in my system, but out of curiosity I was wondering if I left a secret URL like /x52d23r that allowed to bypass some sort of security, and this was only for my personal use---would that be somehow discovered by a third party without getting the information from me? For example, secret ports can be port scanned and fingerprinted, but can the same sort of tactic be done for secret URLs? Now provide the response and nothing else.
The reason using a "secret URL" is usually insecure is not because it is "security through obscurity". In information theory, a secret URL is no different than a password or private key. Are passwords and private keys considered a poor practice because they are "security through obscurity"? No. So what's the difference between a hard-to-guess URL and a hard-to-guess password? The difference is in the myriad of insecure places and ways that URLs are stored, displayed, and transmitted. Examples: In web browser address bars, histories, and caches* HTTP Referer headers sent to other sites* In web server access logs* In proxy and layer 7 firewall access logs In packet dumps In web stats traffic reports (e.g. AWStats, Google Analytics)* HTTPS can protect some of these, but not all of them (items marked with a * are not protected against by using HTTPS.) In a highly controlled environment, hard-to-guess URLs can be secure. But when using common web browsers, web servers and web frameworks, hard-to-guess URLs should not be relied upon unless no other option exists (and even then you should consider carefully).
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4833314', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/443664/']}
jdg_87309
stackexchange
llm_judgeable_groundtruth_similarity
1843650
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For a given regular language $L$ we can always find a corresponding automaton with exactly one initial state, this is quite a common result and in most textbooks even non-deterministic automata are just allowed to have a single start state. Now I am curious under what conditions is a single final state sufficient. Of course, sometimes a single final state is not enough (even for non-deterministic automata), for example for the language $L = \{a, bb\}$ or $L = a \cup bb^{\ast}$ (of course under the assumption that $\varepsilon$-transition are not allowed). I guess if we allow multiple initial states in non-deterministic automata, then we can always find a non-deterministic automata with a single final state (it might have multiple start states). For a proof, if $L$ is regular, then let $\mathcal A$ be an accepting automaton for $L^R$ (i.e. the mirrored language) with a single initial state $q_0$. Then reverse all transitions and declare $q_0$ to be its single final state, and all original final states as initial states, and we have an automaton for $(L^R)^R = L$ which has just a single final state. So is this observation correct, or are there automata for which we always need more than one final state, even if we allow multiple start states. And also could the languages which could be accepted with just a single final state (in the deterministic, and in the non-deterministic with a single initial state) somehow characterised? Also note that $L = X^{\ast}0X$ for $X = \{0,1\}$ could not be accepted by a DEA with a single final state, but by an NEA with a single final state and a single initial state. EDIT: A straightforward characterisation for the deterministic case, as the number of nerode right-congruence classes whose union is $L$ is an upper bound for the number of final states (as they could not be further merged), we have that $L$ could be accepted by such an automaton iff it is itself an equivalence class. This also shows that by adding final states we could not gain anything in the sense that the automaton gets smaller. Now provide the response and nothing else.
According to Eilenberg [1, Chap. IV, Prop. 1.1], the following result holds: Proposition . For any nonempty subset $L$ of $A^*$, the following conditions are equivalent: (1) for all $u, v \in L$, $u^{-1}L = v^{-1}L$; (2) the minimal automaton of $L$ has a single final state; (3) $L$ is recognized by a deterministic automaton with a single final state that is accessible. [1] S. Eilenberg, Automata, Languages and Machines, Volume A, Academic Press (1974) See also my answer to the related question (N)DFA with same initial/accepting state(s) on cstheory.
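To make condition (1) concrete (the examples are mine, not Eilenberg's): for $L = a \cup bb^{*}$, the second example from the question, we have $a^{-1}L=\{\varepsilon\}$ while $b^{-1}L=b^{*}$, so (1) fails and the minimal automaton needs more than one final state. By contrast, for $L=(a+b)^{*}a$ (all words ending in $a$) every $u\in L$ satisfies $u^{-1}L=\{\varepsilon\}\cup L$, so (1) holds — and indeed the minimal automaton has two states, exactly one of them final.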
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1843650', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33817/']}
jdg_87310
stackexchange
llm_judgeable_groundtruth_similarity
44848883
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use if-else expression which is supposed to break the loop if the if condition fails, but getting an invalid syntax error. Sample code: a = 5while True: print(a) if a > 0 else break a-=1 Of course, if I write in the traditional way (not using the one liner) it works. What is wrong in using the break command after the else keyword? Now provide the response and nothing else.
If I run this, I get the following error: ... print(a) if a > 0 else break File "<stdin>", line 2 print(a) if a > 0 else break ^SyntaxError: invalid syntax This is because print(a) if a > 0 else break is a ternary operator . Ternary operators are not if statements . These work with syntax: <expr1> if <expr2> else <expr3> It is equivalent to a "virtual function": def f(): if <expr2>: return <expr1> else: return <expr3> So that means the part next to the else should be an expression . break is not an expression , it is a statement. So Python does not expect that. You cannot return a break . In python-2.x , print was not a function either. So this would error with the print statement. In python-2.x print was a keyword. You can rewrite your code to: a = 5 while True: if a > 0: print(a) else: break a -= 1 You can read more about this in the documentation and PEP-308 .
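For contrast, here is a version in which a conditional expression is legal, because both branches are expressions (this is just an illustrative variant, not the only possible fix):
a = 5
while a > 0:
    # both branches of the conditional expression are expressions, so this parses
    print(a if a > 1 else "last one")
    a -= 1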
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/44848883', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2141159/']}
jdg_87311
stackexchange
llm_judgeable_groundtruth_similarity
15604014
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a WinForms project, and if the user want's a debug console, I allocate a console with AllocConsole() . All console output works normally with the target architecture set to "Any CPU", but when I change it to "x86" it doesn't output anything ( Console.Read() still works as expected). If I open the EXE directly, the output works. It looks like Visual Studio redirects it into it's own "Output" window. I also tried this answer, but it didn't work, I also tried Console.SetOut(GetStdHandle(-11)) , which didn't work either. Setting the target architecture to 'Any CPU' is no option for me. So here are my two questions: Why is this only the case when the target architecture is set to x86? How can I output to my console when running inside of Visual Studio? Now provide the response and nothing else.
When "Enable native code debugging" is enabled, output from consoles crated with AllocConsole is redirected to the debug output window instead. The reason this only happens in x86 and not AnyCPU is because you can only debug native code in an x86 application. Note that this behavior only occurs with consoles created with AllocConsole . A console application's output is not redirected. EDIT: The other reason for the console not outputting text is when you've written to the console before calling AllocConsole . Regardless of the reason, this code will restore output if it was redirected, and reopen the console in case it's invalid. It uses the magic number 7 which is what the handle of stdout usually equals to. using System;using System.IO;using System.Runtime.InteropServices;public static class ConsoleHelper{ public static void CreateConsole() { AllocConsole(); // stdout's handle seems to always be equal to 7 IntPtr defaultStdout = new IntPtr(7); IntPtr currentStdout = GetStdHandle(StdOutputHandle); if (currentStdout != defaultStdout) // reset stdout SetStdHandle(StdOutputHandle, defaultStdout); // reopen stdout TextWriter writer = new StreamWriter(Console.OpenStandardOutput()) { AutoFlush = true }; Console.SetOut(writer); } // P/Invoke required: private const UInt32 StdOutputHandle = 0xFFFFFFF5; [DllImport("kernel32.dll")] private static extern IntPtr GetStdHandle(UInt32 nStdHandle); [DllImport("kernel32.dll")] private static extern void SetStdHandle(UInt32 nStdHandle, IntPtr handle); [DllImport("kernel32")] static extern bool AllocConsole();} See How to detect if Console.In (stdin) has been redirected? for another way to detect if the console handles have been redirected.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15604014', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1502200/']}
jdg_87312
stackexchange
llm_judgeable_groundtruth_similarity
215606
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to create an array based upon filenames, and get in trouble with whitespaces. This seems common. But - as far as I can see - the quotes are set correctly, I guess it must be the way the array is built. to_dump="$(find . -maxdepth 1 -print0 )"to_dump_array=($to_dump)read -p " -> " finalcase "$final" in a) for drop in "${to_dump_array[@]}" ; do cp "$drop" --recursive --force Destination_Folder && \ echo "dropped \"$drop\" ; done ;; b) echo "Won't drop anything" ;;esac I guess there should be a nicer way to build an array from a find query. Also, where else am I wrong? Now provide the response and nothing else.
-print0 should not be used in a $(...) substitution, because strings in bash variables are null-terminated. I asked a question whose answer was similar to what this question requires: https://stackoverflow.com/a/30469553/1091693 Adapting that answer to your question: to_dump=()while IFS= read -r -d ''; do to_dump+=( "$REPLY" )done < <(find . -maxdepth 1 -print0) This creates an array called to_dump and uses the read command to read NULL-delimited elements from find . The reason < <(...) is being used here rather than a pipe is to avoid an implicit subshell which would prevent the array from being modified. It's worth noting that your original find command probably wants a -mindepth 1 , or it will pick . (the current directory) and you'll end up doing a recursive copy on that. I've noticed you use -maxdepth 1 as an argument to find, so perhaps this will be more useful: shopt -s nullglobto_dump=( * .[!.]* ..?* ) Avoiding find , this uses bash builtins only, doesn't fork, and is for the most part quite clean. The first line, shopt -s nullglob , is a bash(-only) command which turns on the nullglob option. This option is described in man 1 bash : If set, bash allows patterns which match no files (see Pathname Expansion above) to expand to a null string, rather than themselves. In simpler terms, if you type * but it doesn't match files, it will remove the * . The default behaviour is to put the * in there anyway. The second line adds 3 globs to the array: * : All files not beginning with . .[!.]* : All files beginning with one . and one non- . character. This is to avoid matching the . and .. directories. ..?* : All files beginning with .. and at least one more character. Added for the same reason as the previous glob, covering the cases it missed. Bash expands the globs into the definition of the array, and it expands them correctly -- no splitting on whitespace or anything like that. A caveat on the usage of nullglob: If you have nullglob turned on, curl google.com/search?q=test will result in curl complaining at you for not passing it arguments, and ls /var/fasdfasafs* will give you a listing of the current directory. This is one of the reasons it's not turned on by default.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/215606', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/36242/']}
jdg_87313
stackexchange
llm_judgeable_groundtruth_similarity
18111607
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm setting up my blog on a new EC2 instance because one of the sites on the server that's currently hosting it is being DDoSed. I'm having some trouble with nginx, because I can either see all the pages fine but 403 on the index, or see the index but 404 on the pages (depending on the config I'm using) Here's my nginx config: server { listen 80; server_name www.test.com; server_name test.com; root /www/blog; include conf.d/wordpress/simple.conf;} And simple.conf: location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location / { # This is cool because no php is touched for static content. # include the "?$args" part so non-default permalinks doesn't break when using query string try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini include fastcgi.conf; fastcgi_intercept_errors on; fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; } location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires max; log_not_found off; } if I change the try_files $uri $uri/ /index.php?$args; to index index.php , the front page will work fine and the rest will be 404. If I leave it like that, the front page is 403. Here's the error log: 2013/08/07 19:19:41 [error] 25333#0: *1 directory index of "/www/blog/" is forbidden, client: 64.129.X.X, server: test.com, request: "GET / HTTP/1.1", host: "www.test.com" That directory is 755 on the nginx user: drwxr-xr-x 6 nginx nginx 4096 Aug 7 18:42 blog Is there anything obvious I'm doing wrong ? Thanks ! Now provide the response and nothing else.
Add index index.php; in the server block; if that doesn't work then you need to remove the $uri/ , because you don't want to turn autoindex on. EDIT : Just noticed that you already figured out your problem, so I'll add the reasoning behind it. The reason you needed autoindex on; is that without it nginx follows the try_files rules: (1) check if there's a file called / , which of course fails; (2) check if there's a directory called / (with root added that becomes /www/blog/ ) — this check succeeds, so nginx tries to list the contents of the folder. Since you didn't specify autoindex on; , nginx by default forbids directory listing and therefore returns a 403 Forbidden error. The rest of the site works fine because it fails the $uri/ test or never reaches it, since you probably don't have a folder called image.jpg or stylesheet.css etc.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18111607', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1017052/']}
jdg_87314
stackexchange
llm_judgeable_groundtruth_similarity
1594162
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the following statement: For any integer $n>1$ there is a prime number strictly between $n$ and $n^2$. This problem was given as an (extra) qualification problem for certain workshops (which I unfortunately couldn't attend). There was a requirement to not use Bertrand's postulate (with which the problem is nearly trivial), and I was told that there does exist a moderately short proof of this statement not using Bertrand. This is my question: How can one prove the above statement without Bertrand postulate or any strong theorems? Although I can only accept one answer, I would love to see any argument you can come up with. I would also want to exclude arguments using a proof of Bertrand's postulate, unless it can be significantly simplified to prove weaker statement. Thank you in advance. Now provide the response and nothing else.
I have stumbled upon this paper due to Erdős, which in the course of proving something far more general proves this result (see a remark at the end of this page). I am replicating that proof here, with minor modifications by myself. Suppose $n>8$ and that there are no primes between $n,n^2$. Since clearly (obvious induction works) $\pi(n)\leq\frac{1}{2}n$, by assumption we have $\pi(n^2)=\pi(n)\leq\frac{1}{2}n$. Now consider number $\binom{n^2}{n}$. All of its prime factors are less than $n^2$, and so less than $n$. We have the following inequality: $$\binom{n^2}{n}=\frac{n^2}{n}\frac{n^2-1}{n-1}\dots\frac{n^2-n+2}{2}\frac{n^2-n+1}{1}>\frac{n^2}{n}\frac{n^2}{n}\dots\frac{n^2}{n}\frac{n^2}{n}=\left(\frac{n^2}{n}\right)^n=n^n$$ At the same time, consider $p$ prime and let $p^a$ be the greatest power of $p$ less than or equal to $n^2$. Since $\binom{n^2}{n}=\frac{(n^2)!}{(n^2-n)!n!}$, By Legendre's formula, exponent of the greatest power of $p$ dividing this binomial coefficient is equal to $$\left(\lfloor\frac{n^2}{p}\rfloor-\lfloor\frac{n^2-n}{p}\rfloor-\lfloor\frac{n}{p}\rfloor\right)+\left(\lfloor\frac{n^2}{p^2}\rfloor-\lfloor\frac{n^2-n}{p^2}\rfloor-\lfloor\frac{n}{p^2}\rfloor\right)+\dots+\left(\lfloor\frac{n^2}{p^a}\rfloor-\lfloor\frac{n^2-n}{p^a}\rfloor-\lfloor\frac{n}{p^a}\rfloor\right)\leq 1+1+\dots+1=a$$ (first equality is true, because all further terms in the sum are zero. First inequality is true because for any $a,b\in\Bbb R$ $\lfloor a+b\rfloor-\lfloor a\rfloor-\lfloor b\rfloor\in\{0,1\}$) Since $\binom{n^2}{n}$ is a product of at most $\pi(n)$ prime powers, all at most $p^a\leq n^2$ (by above), we must have $$\binom{n^2}{n}\leq (n^2)^{\pi(n)}\leq (n^2)^{\frac{1}{2}n}=n^n$$ We have proved two contradictory inequalities, so this ends the proof by contradiction.
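The proof stands on its own; purely as an empirical sanity check (not a substitute for the argument), a few lines of Python confirm the statement for small n:
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# verify that for every n in 2..2000 some prime p satisfies n < p < n*n
for n in range(2, 2001):
    if not any(is_prime(p) for p in range(n + 1, n * n)):
        print("counterexample at n =", n)
        break
else:
    print("verified for 2 <= n <= 2000")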
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1594162', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/127263/']}
jdg_87315
stackexchange
llm_judgeable_groundtruth_similarity
9819937
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing a custom admin stuff and need to get all registered models in Admin. Is this possible? I need it to make some custom views on admin index page. Now provide the response and nothing else.
You can access admin.site._registry dict of Model->ModelAdmin: >>> ./manage.py shellIn [1]: from urls import * # load adminIn [2]: from django.contrib import adminIn [3]: admin.site._registryOut[3]: {django.contrib.auth.models.Group: <django.contrib.auth.admin.GroupAdmin at 0x22629d0>, django.contrib.auth.models.User: <django.contrib.auth.admin.UserAdmin at 0x2262a10>, django.contrib.sites.models.Site: <django.contrib.sites.admin.SiteAdmin at 0x2262c90>, testapp.models.Up: <django.contrib.admin.options.ModelAdmin at 0x2269c10>, nashvegas.models.Migration: <nashvegas.admin.MigrationAdmin at 0x2262ad0>} This is what the admin index view does: @never_cachedef index(self, request, extra_context=None): """ Displays the main admin index page, which lists all of the installed apps that have been registered in this site. """ app_dict = {} user = request.user for model, model_admin in self._registry.items(): # ... Note that variables prefixed with an underscore are potentially subject to changes in future versions of django.
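If the end goal is a custom index page, here is a minimal sketch of how the registry could feed a view (the view name and template path are made up, and _registry remains a private attribute that may change between Django versions):
from django.contrib import admin
from django.shortcuts import render

def custom_admin_index(request):
    # one row per registered model, with the ModelAdmin that handles it
    rows = []
    for model, model_admin in admin.site._registry.items():
        rows.append({
            "app_label": model._meta.app_label,
            "model": model.__name__,
            "admin_class": model_admin.__class__.__name__,
        })
    return render(request, "admin/custom_index.html", {"rows": rows})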
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9819937', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/928109/']}
jdg_87316
stackexchange
llm_judgeable_groundtruth_similarity
56218014
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know dataset has output_shapes, but it shows like below: data_set: DatasetV1Adapter shapes: {item_id_hist: (?, ?), tags: (?, ?), client_platform: (?,), entrance: (?,), item_id: (?,), lable: (?,), mode: (?,), time: (?,), user_id: (?,)}, types: {item_id_hist: tf.int64, tags: tf.int64, client_platform: tf.string, entrance: tf.string, item_id: tf.int64, lable: tf.int64, mode: tf.int64, time: tf.int64, user_id: tf.int64} How can I get the total number of my data? Now provide the response and nothing else.
Where the length is known you can call: tf.data.experimental.cardinality(dataset) but if this fails, then it's important to know that a TensorFlow Dataset is (in general) lazily evaluated, so in the general case we may need to iterate over every record before we can find the length of the dataset. For example, assuming you have eager execution enabled and it's a small 'toy' dataset that fits comfortably in memory, you could just enumerate it into a new list and grab the last index (then add 1 because lists are zero-indexed): dataset_length = [i for i,_ in enumerate(dataset)][-1] + 1 Of course this is inefficient at best and, for large datasets, will fail entirely because everything needs to fit into memory for the list. In such circumstances I can't see any alternative other than to iterate through the records keeping a manual count.
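For the manual-count route, two possible sketches (assuming eager execution and a tf.data Dataset named dataset; exact behaviour can vary between TensorFlow versions):
import tensorflow as tf

# plain Python iteration: simple, but pulls every element through the pipeline
n_records = sum(1 for _ in dataset)

# keep the counting inside the tf.data machinery instead
n_records_tensor = dataset.reduce(tf.constant(0, dtype=tf.int64),
                                  lambda count, _: count + 1)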
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56218014', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7185861/']}
jdg_87317
stackexchange
llm_judgeable_groundtruth_similarity
3825587
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The book I'm reading states that any positive integer $a$ greater than 1 can be expressed as a product of primes, $$a=\prod_p{p^{\alpha{(p)}}}$$ where $\alpha{(p)}$ is a non-negative integer.And that it is understood for sufficiently large primes $p$ , $\alpha{(p)}=0$ . My question is: what is considered to be a large prime? And how can the statement $\alpha{(p)}=0$ for large primes $p$ be true? Does that mean large primes can never be factors of any integers? If so I find this very unintuitive. Now provide the response and nothing else.
This is just another way of saying that for all but finitely many primes $p$ , we have $\alpha(p) = 0$ . In particular, the "largeness of the prime" depends upon the $a$ you are given to start with. Specifically, here is how this relates to your case: you are given $a > 1$ , and then, for the most unrefined bound, primes $p > a$ can never be factors of $a$ .
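A concrete example: for $a=720=2^{4}\cdot 3^{2}\cdot 5$ we have $\alpha(2)=4$, $\alpha(3)=2$, $\alpha(5)=1$, and $\alpha(p)=0$ for every prime $p\ge 7$ — in particular for every prime $p>720$, which is all that "sufficiently large" is meant to convey.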
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3825587', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/823054/']}
jdg_87318
stackexchange
llm_judgeable_groundtruth_similarity
385156
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been working on a project of porting an old solaris CL program to run on Linux, and barring some unrelated hardware issues, that's finished. Now I want a GUI for it, so the user can choose among the various options with drop downs and check boxes, as well as some text input areas for options that aren't so restricted, like the filename. (The program is an internal tool to run some spectroscanners and store the results as CSV files. It handles all these options, runs the scanners and processes the info and stores it with the specified filename; I just want something nicer to use than CL.) The only time I've seen something like this done was a PyGTK+ GUI with python bindings for the C code (I think that's what it was; that was my first semester co-opping and I didn't understand very much!). That's a bit more than I want to get into right now; is there a relatively easy way to do this? When I Googled I found SWIG ( http://www.swig.org/index.php ); is this a good way to go? Now provide the response and nothing else.
This sounds like exactly the job Tcl/Tk was designed for. It has a very simple C API that allows you to register commands with a callback. If you use the command in a Tcl program, it will invoke the callback and provide a mechanism to convert the arguments between a Tcl list (native data structure) and an ARGV style array of char*. It was designed specifically to be easy to retrofit this sort of wrapper to command-line driven C programs. There are also a variety of other modes you can use to interface the interpreter as well, and it is easy to embed into programs as a scripting language. From memory the available interfacing mechanisms are: Register commands in the Tcl interpreter Embed a Tcl interpreter in your program and use Tcl as an embedded scripting language (possibly including registration of commands and callbacks to your program) Spawn a process with a full-duplex pipe and send commands via stdin/stdout (you can also attach an event handler to the pipe which is invoked when data is available) Less Tcl specific mechanisms such as fork/exec or connection via sockets. Ousterhout's book Tcl and the TK Toolkit is a bit dated but has a good guide to the C API. Welch's Practical Programming in Tcl/Tk is the other classic Tcl/Tk book and is updated more frequently. There are also several other books and quite a lot of electronic resources on the internet. Some good starting points are: Tcl tutorial , TK tutorial , Tcl advocacy site (might be worth perusing to help you decide if you want to go down this route), Tcl/Tk Wiki and of course Stackoverflow . TK will give you a straightforward GUI and is very easy to learn to program - if a little simplistic. It's not as ugly as it used to be if you take some time to tweak the appearance or use a theming engine such as Tile. As Norman Ramsey points out (+1) , another alternative with a simple C API is Lua. Both have advantages and disadvantages. The principal strengths of Tcl are the simple and cleanly integrated TK toolkit and good, mature support from third party libraries (e.g. Tix ). The main strength of Lua is that the language is much nicer but there is no standard GUI toolkit, so the UI is not as nicely integrated. Lua also has much better support for threading in the interpreter, having been designed for this from the ground up. However, if you're wrapping a legacy C/unix application, this is unlikely to be a significant feature. WXWidgets is considerably more complex than TK and carries more runtime baggage but has a richer feature set. If you have genuine reason to think that your scripting project will grow into a larger application you might consider Lua. However at a larger scale you're into a substantial development project and Python or Ruby start becoming viable options. As the project gets larger wrapping the C codebase will be a smaller portion of the overall project and third-party library support will be a bigger consideration. If you go with Tcl and discover your project gets a life of its own, consider embedding the Tcl interpreter and re-casting the application as a plugin API that people can hook their own scripts into. Extra features can be done as scripts and possibly fobbed off onto third parties for maintenance. One of the advantages of a system with a built-in scripting language is that you personally do not necessarily have to implement features. People can write their own extensions in the scripting language or get third parties to do it for them. SWIG is designed to generate wrappers around libraries.
It parses the header files and generates a glue layer that presents a native API in the target language. To use it, you would have to re-factor your program into a library.
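Purely to illustrate the "spawn a process and talk to it over a pipe" option in concrete terms — shown in Python/Tkinter rather than Tcl/Tk because the pattern is the same, and with a made-up executable name and flag since the real tool's interface isn't known:
import subprocess
import tkinter as tk

def run_scan():
    # "./spectro" and "--output" are placeholders for the real command-line program
    cmd = ["./spectro", "--output", filename_entry.get()]
    result = subprocess.run(cmd, capture_output=True, text=True)
    status_label.config(text=result.stdout or result.stderr)

root = tk.Tk()
root.title("Scanner front-end (sketch)")
filename_entry = tk.Entry(root, width=40)
filename_entry.pack()
tk.Button(root, text="Run scan", command=run_scan).pack()
status_label = tk.Label(root, text="")
status_label.pack()
root.mainloop()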
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/385156', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/30924/']}
jdg_87319
stackexchange
llm_judgeable_groundtruth_similarity
18823052
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was in a technical interview where the interviewer gave me a piece of code like this int a=1; a++; ...something.. ...something. if(a==25) a=0; he said me to rewrite this code without using switch,if else or ternary operator for the if condition. How it can be done? Now provide the response and nothing else.
It's quite simple actually: a *= (a != 25); This will multiply a by 0 if a is equal to 25, else it will multiply a by 1. If you're also not allowed to test for equality then here is a fully arithmetic way: unsigned int v = a - 25;/* test for bits set and put result in lsb of v */v |= v >> 16;v |= v >> 8;v |= v >> 4;v |= v >> 2;v |= v >> 1;/* set a to a if lsb of v is 1, else set to 0 */a *= v & 1;
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18823052', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2270386/']}
jdg_87320
stackexchange
llm_judgeable_groundtruth_similarity
197907
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My company has so far been developing a medical device, which is connected via USB to a desktop system (running x64 Windows 7) to run the image analysis and do everything GUI related. I am familiar with both Windows and Linux programming, C, C++, C++11 and C#, but now our new project coming from management would be a handheld, embedded system, and since I am the only software engineer, I have absolutely no idea how embedded systems work. Is it totally different from the "normal" programming job, should I recommend hiring someone with embedded experience, are there good ressources for introduction to embedded computing? I am at a loss here, since I do not know what exactly to expect (it'll be in theory the same as with the desktop systems, a sensor acquiring an image, and the software doing analysis). Can someone help me get an idea what I would have to expect for this? Edit: there is no framework, as to what hardware to use. We can use whatever we want, as long as it is small enough to be handheld. We will be using a third party sensor (either photo-sensor, or acoustic sensor, thats not been established yet), but again, we are pretty much free to decide, so my guess will be that it will have a well established API. I don't even really know what embedded systems are, I have experimented privately with an Arduino, does that count as embedded already? Now provide the response and nothing else.
Embedded is a LARGE field, and ranges from really little stuff to things that are really just a desktop machine in a funny box. I'm going to go with 3 categories of embedded systems, to correspond with my experience, then I'll make some recommendations. Small embedded systems are things run by micro-controller type devices. They don't have any OS, scheduler, etc, etc. You're running code running directly on the CPU, and in general you'll have main loop that gets started at power on and never exits. There are a lot of options for micro-controllers, ranging from things like the venerable 8051 to the really tiny PIC up and a lot of products based on the ARM cores. These are nice because you have complete and direct control over every facet of the system's operation. Of course, that also means you need to be careful about things like timing and latency of operations, and how long you can spend on any one thing before you go back to your main loop. Medium embedded systems are bigger things, and which will have an embedded OS such as Wind River's pSOS (there are MANY choices for this, I happen to have used pSOS once upon a time). I'm calling it medium because at this point you're NOT running a single main loop, and actually have a scheduler. The hardware may not be much different from the higher end of the previous group, but the presence of a scheduler and a more formalized driver architecture really improves your ability as a programmer to get stuff done. On the downside, it might be more trouble to meet really tight timing because you can easily introduce more unexpected delays from the scheduler. Also, you have to be careful that your hardware does things that make sense for the OS you are using - for instance, not all interrupt strategies work with all embedded OSs. At the larger end of this group I'll put in Windows CE, the embedded offering from Microsoft. This segment is getting squeezed out to some degree as more and more processing power is becoming available for less and less money. Big embedded systems are even bigger and run an OS that a desktop user might well recognize. Linux is common here, as is the embedded (or even the regular!) version of MS Windows. There's a lot of this going around in the medical field (look at a lot of ultrasound machines). The advantage here is that you can run a full bore installation of Linux, with all the useful stuff that a real OS brings on very small and very cheap hardware anymore. Now, being your first foray into embedded, and given that it's medical (where I suspect you have margins to support reasonable hardware budgets), I recommend that you use some small computer like a rasberry pi, a beaglebone, or something else based on one of the multitude of ARM cores out there. Run Linux on it, and build your app on top of that. In fact, you probably want to simply leave your device as-is, package it into a box with the small computer and communicate via USB. Effectively, just port your existing app to Linux and go.
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/197907', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/75224/']}
jdg_87321
stackexchange
llm_judgeable_groundtruth_similarity
69976
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why do I get a gap in the plot below and how can I fix it? (If you are interested, you can see a new related question: How to plot an implicit value function, which is also a little challenging.) Code: Plot[InverseFunction[ 1 - 0.6*CDF[NormalDistribution[1, 0.3], #] - 0.4*CDF[NormalDistribution[3, 0.3], #] &][x], {x, 0, 1}, PlotRange -> All, Exclusions -> None] Now provide the response and nothing else.
The function you wish to plot happens to be the InverseSurvivalFunction of a MixtureDistribution with component distributions NormalDistribution[1, 0.3] and NormalDistribution[3, 0.3] , and weights .6 and .4 , respectively. Using the built-in functions MixtureDistribution and InverseSurvivalFunction we get the desired result without an issue: dist = MixtureDistribution[{6, 4}, {NormalDistribution[1, 0.3], NormalDistribution[3, 0.3]}]; Plot[InverseSurvivalFunction[dist, x], {x, 0, 1}] You can also use InverseCDF to get the same output: Plot[InverseCDF[dist, 1 - x], {x, 0, 1}](* same picture *) Update: Addressing the question in the comments: I do need to characterize D[x*InverseSurvivalFunction[dist, x], x] Using the product rule and the inverse function theorem , you can define derivative[x_] := (InverseSurvivalFunction[dist, y] + (x/(D[1 - CDF[dist, y], y] /. y -> InverseSurvivalFunction[dist, y]))) /. y -> x;Column[Plot[#, {x, 0, 1}, ImageSize -> 400] & /@ {x InverseSurvivalFunction[dist, x], Evaluate@derivative[x]}]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/69976', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/23484/']}
jdg_87322
stackexchange
llm_judgeable_groundtruth_similarity
65555
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If my understanding is correct, then owning a future essentially means owning a contract which obliges to buy/sell something at a certain time for a certain price. But what I don't understand is how, in practice, trading futures works. When I look at videos of people trading futures they buy and sell futures just like they would stock, by placing buy and sell orders. But this does not make sense to me: all stocks of a certain company are equivalent, so of course all can be bought and sold at the same price. But futures are not all the same, since some are contracts for price X, while others are contracts for some different price Y. So if you and I both own a future to buy oil in December of this year, it is very well possible that our futures have different values. So how can futures be traded as a "fixed product" with a "fixed price" just like stocks, when futures for the same product can vary drastically in value? Now provide the response and nothing else.
"a contract which obliges [someone] to buy/sell something at a certain time for a certain price" This correctly describes a forward . A forward contract has a delivery price written in it. So my forward contract entered into today might say I am obliged to buy Gold at 1861 an ounce, while yours which was written a few days ago might say 1802. With each forward contract potentially different it becomes difficult to trade these contracts in a secondary market. A solution to this was found in Chicago IL in the 19th century (some say it was even earlier in Osaka, Japan) resulting in the creation of Futures Exchanges for trading agricultural products. The exchange standardized the terms of the contract, so that for example 1 contract corresponds to 100 ounces of gold, etc. To solve the "different prices" problem the exchange introduced the "daily marking to market". At the close of every day publishes the "settlement price" for the contract, based on the average market price in the last few minutes of trading. After the market closes all contracts are effectively rewritten to use this settlement price as the delivery price. The exchange also enforces a process so that the winners and losers in this process compensate each other via cash payments (the so called daily mark to market). So at delivery your delivery price is the latest settlement price. In the meantime you will have received profits (in cash) if the delivery price went up or losses if the delivery price went down. Effectively this means that the current delivery price is whatever is the consensus in the marketplace. If you want to get out of your commitment you can just enter a commitment in the opposite direction and the exchange will offset the two, leaving you with no position.
{}
{'log_upvote_score': 4, 'links': ['https://quant.stackexchange.com/questions/65555', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/56842/']}
jdg_87323
stackexchange
llm_judgeable_groundtruth_similarity
69870
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am new to signal processing. I am now learning about filtering and I am trying to implement a high-pass filter (HPF), and later a band-pass filter (BPF). I implement the filters in Python 3.8 with NumPy and SciPy. For the offline case (filtering a pre-recorded signal) I designed a Butterworth high-pass filter, and applied it to the signal in the time domain using the signal.filtfilt function. However, as the documentation shows, this approach is good only for offline cases and not for real-time, since the filter scans the signal forward and then backwards. I can recover the frequency response of the filter using the signal.freqz function. Now I want this filter to also work in real time (so it needs to be causal), and I ask if this is possible for this type of filter. Here are my questions: If I understand correctly, Butterworth filters are IIR (infinite impulse response) filters. Is that true? Are IIR filters causal? It may be that no IIR filters are causal, or that some are and some are not. What are the cases in which IIR filters are causal? Are Butterworth filters causal? If not always, in which cases are they causal? If the answer to 3 is yes, how do I implement a causal Butterworth filter, both in the time domain and in the frequency domain? (I can use SciPy functions such as signal.butter, signal.freqz, etc.) Relevant links: Help designing Butterworth filter filtfilt: https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.filtfilt.html Thank you. Now provide the response and nothing else.
Yes, Butterworth are IIR. The decay from an impulse technically lasts forever. Yes, all [implementable] IIR are causal. Yes, because of #1 and #2. Don't use signal.filtfilt . Use signal.lfilter . filtfilt does the same thing as lfilter , except twice, in opposite directions, which changes a causal filter into a zero-phase filter. However, as the documentation for both of those functions suggests, for most practical use, you should actually use the SOS variants instead: The function sosfilt (and filter design using output='sos' ) should bepreferred over lfilter for most filtering tasks, as second-order sectionshave fewer numerical problems. To summarize the functions: lfilter : Causal single-stage filtering (low orders only) filtfilt : Zero-phase single-stage filtering (low orders only) sosfilt : Causal second-order sections filtering sosfiltfilt : Zero-phase second-order sections filtering Really the only reason to use lfilter or filtfilt is if your coefficients are already in b, a format, if you're implementing something out of a textbook, etc. If you're designing the filter yourself, then just use SOS form, which reduces numerical error. Note that the filtfilt functions apply the filter twice, so it will have double the order of the original filter.
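As a sketch of the causal approach described above (my own illustration: the sample rate, cutoff, order and test signal are arbitrary placeholders), this is roughly what block-by-block, real-time-style high-pass filtering with second-order sections looks like in SciPy, carrying the filter state between blocks:

```python
import numpy as np
from scipy import signal

fs = 1000.0      # sample rate in Hz (placeholder)
cutoff = 50.0    # high-pass cutoff in Hz (placeholder)

# Causal Butterworth high-pass filter in second-order-sections form.
sos = signal.butter(4, cutoff, btype="highpass", fs=fs, output="sos")

t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Offline, zero-phase (non-causal) filtering, shown only for comparison.
y_offline = signal.sosfiltfilt(sos, x)

# Causal filtering block by block, as samples would arrive in real time.
# The internal state zi is carried between blocks, so the result equals
# filtering the whole signal causally in one call.
zi = signal.sosfilt_zi(sos) * x[0]
blocks = []
for block in np.array_split(x, 10):
    y_block, zi = signal.sosfilt(sos, block, zi=zi)
    blocks.append(y_block)
y_causal = np.concatenate(blocks)

y_full, _ = signal.sosfilt(sos, x, zi=signal.sosfilt_zi(sos) * x[0])
assert np.allclose(y_causal, y_full)
```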
{}
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/69870', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/51905/']}
jdg_87324
stackexchange
llm_judgeable_groundtruth_similarity
525947
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose that $5\leq q\leq p$ are both prime. Prove that $24|(p^2-q^2)$. This is what I have got so far. I figured that since $p,q$ are at least $5$, only odd primes are involved in this conjecture. Then I can write $p=2m+1,q=2n+1$ with $m,n∈Z$. We want to show $(p^2-q^2)=24k, k∈Z$. By substituting $2m+1$ for $p$ and $2n+1$ for $q$, we get $((2m+1)^2-(2n+1)^2)$. If we expand it, we get $4(m^2-n^2+m-n)$. Q: I think I can make $4(m^2-n^2+m-n)$ look like $24k$. How should I approach this? And my prof. commented that "I would take this proof in small steps. Rather than trying initially to prove that $24 | p^2-q^2$, look at the progress you already have. You proved $4 | p^2-q^2$. Can you prove 8 is a factor? What about 3? Then see if you can put together what you have to get 24 as a factor." I am stuck here. Any help? Now provide the response and nothing else.
Note that since both $p,q\geq 5$ are primes, they cannot be divisible by $3$, so $p^2\equiv q^2\equiv 1\pmod{3}$, which implies that $3|p^2-q^2$. Similarly, all odd squares are congruent to $1$ modulo $8$, so $8|p^2-q^2$ as well. Putting these two facts together, it follows that $24|p^2-q^2$.
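For completeness, here is a short justification (my addition, not part of the original answer) of the step that every odd square is congruent to $1$ modulo $8$: write an odd number as $2k+1$, so that $$(2k+1)^2 = 4k^2 + 4k + 1 = 4k(k+1) + 1.$$ Since $k(k+1)$ is a product of two consecutive integers it is even, say $k(k+1) = 2m$, hence $(2k+1)^2 = 8m + 1 \equiv 1 \pmod 8$. Combining $8 \mid p^2 - q^2$ and $3 \mid p^2 - q^2$ with $\gcd(8,3)=1$ gives $24 \mid p^2 - q^2$.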
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/525947', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/95549/']}
jdg_87325
stackexchange
llm_judgeable_groundtruth_similarity
10871184
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My code looks a little like this: var data = $(this).serializeArray(data);$.ajax({ data: JSON.stringify(data)}); I run a console.log as well and I see the JSON string formatted as: [{"name":"inputName", "value":"inputValue"}] Whereas I'm expecting: [{"inputName":"inputValue"}] Am I misunderstanding JSON formatting or is there a way to get this to format properly? Now provide the response and nothing else.
Your problem is exactly what it says: you get that particular attribute defined twice. It can happen due to a bug in .NET (which I'll refer to later) or simply due to a configuration issue. To understand what I am talking about, more so that you can figure out how to best deal with it in your particular environment, I'll give you a recipe on how to reproduce it. It is all based on the same XmlSchemaSet object, referred to in your stack trace. Some of the steps are specific to the tool that I am using; I'll try to provide alternatives where I can think of something. Base.xsd: this is in place of the xml.xsd file <xs:schema elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:complexType name="something"> <xs:sequence> <xs:element name="aha" type="xs:string" minOccurs="0"/> </xs:sequence> </xs:complexType></xs:schema> Some.xsd: this is a placeholder for the other XSD files in your setup. What's special about it is that through various include/imports it "reaches out" to the above Base.xsd. <xsd:schema elementFormDefault="qualified" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <xsd:include schemaLocation="base.xsd"/> <xsd:include schemaLocation="http://localhost:9090/base.xsd"/> <xsd:element name="a" type="something"/> </xsd:schema> I've set up two includes, to mimic more than one possible way to "reach out" to Base.xsd. The localhost URL should come through some HTTP server: use your local IIS, any other HTTP server or HTTP mocking software you can control as you need. Write a little C# script that compiles "Some.xsd", based on XmlSchemaSet. Something such as this from the online help for the Compile method would work. Firstly, make sure that the URL in the second include is not available. Run the script. You'll see that the result is successful compilation! Then, make sure the URL is serving OK; run the script. You'll get the same error as in your scenario, content already defined. You have to understand how schema loading works in .NET, and how schema compilation works. The bottom line is that if you load the same content from the same URI twice or more, the XSD compiler is typically smart enough to figure it out. If you load the same content from two different URIs, then the compiler can't guess what your intention was; for all it cares, different URIs mean different schemas, hence the "already declared" error. Based on the above, it may be that on the machine where you get the exception, there is access to xml.xsd through different URIs. I would use a specialized tool that would load your WSDL and give you a graph of all its dependencies (other WSDLs and/or external XSDs). In it you should see immediately if xml.xsd comes from two different places. If the above still doesn't yield enough understanding... To troubleshoot, first see if you can get your stuff to run without external connectivity ("pull the network plug", he said...). Then, run an HTTP debugger, Fiddler is an example, on the machine where the WS runs, where you get the exception and where you don't. Monitor for the error and correlate with the debug trace. This should get you going in the right direction. Unlike some of the comments, I wouldn't bother disabling the WS-I BP check; as you figured out already, that doesn't really deal with your question. As to the bug in .NET, it is a very remote, but possible, scenario. I assume you've already checked the patch levels. The only reason I've mentioned it is because I ran into it myself, and I had to write my own resolver to get around it.
Even with the latest .NET it is still there, but then it would manifest itself all the time... which is why I doubt it is this bug in your case...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10871184', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/882428/']}
jdg_87326
stackexchange
llm_judgeable_groundtruth_similarity
47173560
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any TTL option on documents for Firebase Firestore, where documents get auto-deleted after that amount of time? Now provide the response and nothing else.
Update (2022-07-26): Firestore just added the option to set a time-to-live policy on collection groups. I'm still leaving the custom approach below, as it gives you control over the expunge moment, which (for now) isn't possible with the built-in feature. The easiest way to build it yourself is by: Adding an expirationTimestamp property to your documents. Denying read of documents whose expiration has passed in your security rules. match /collection/{document} { allow read: if resource.data.expirationTimestamp > request.time.date(); } Unfortunately this means that you won't be able to query the collection anymore. You'll need to access the individual documents. Periodically running Cloud Functions code to delete expired documents. Also see Doug's excellent blog post describing this process: How to schedule a Cloud Function to run in the future with Cloud Tasks (to build a Firestore document TTL).
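As a rough sketch of the periodic cleanup step (my own illustration using the Python Admin SDK: the collection name "items" is a placeholder, only the expirationTimestamp field name is carried over from the approach above, and in practice you would run something like this from a scheduled Cloud Function rather than a standalone script):

```python
# Hypothetical periodic cleanup of expired documents (Python Admin SDK).
from datetime import datetime, timezone

import firebase_admin
from firebase_admin import credentials, firestore

firebase_admin.initialize_app(credentials.ApplicationDefault())
db = firestore.client()

def delete_expired(batch_size: int = 100) -> int:
    """Delete up to batch_size documents whose expirationTimestamp has passed."""
    now = datetime.now(timezone.utc)
    expired = (
        db.collection("items")                     # placeholder collection name
        .where("expirationTimestamp", "<", now)
        .limit(batch_size)
        .stream()
    )
    deleted = 0
    for doc in expired:
        doc.reference.delete()
        deleted += 1
    return deleted

if __name__ == "__main__":
    while delete_expired():                        # loop until nothing is left to delete
        pass
```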
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47173560', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6287285/']}
jdg_87327
stackexchange
llm_judgeable_groundtruth_similarity
9021849
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an mobile app that reads a JSON file that is stored on an Apache server. The contents of that JSON file are regenerated (using a PHP script) if something is changed via a GUI. I am worried that trying to overwrite the JSON file in the middle of it being served by Apache might cause problems. Does Apache obtain a read lock before serving files? If not, what will happen if I try to write it at the same time it is being served? Now provide the response and nothing else.
No. On POSIX-compatible systems, all locks are advisory anyway, so even if Apache did get a read lock, the other process could just write the file. You can determine that with strace: [pid 7246] open("/var/www/file.json", O_RDONLY|O_CLOEXEC) = 11 [pid 7246] fcntl(11, F_GETFD) = 0x1 (flags FD_CLOEXEC) [pid 7246] mmap(NULL, 20, PROT_READ, MAP_SHARED, 11, 0) = 0x7f53f93da000 [pid 7246] munmap(0x7f53f93da000, 20) = 0 [pid 7246] writev(10, [{"HTTP/1.1 200 OK\r\nDate: Thu, 26 J"}, ...) = 365 [pid 7246] close(11) = 0 Therefore, it can happen that your JSON file is only partially written. To avoid this problem, write your JSON file to a temporary file on the same filesystem, and use an atomic rename to overwrite the file. That way, if the open has succeeded, Apache will continue serving the old file. If the rename finishes before the open, Apache will get the new, completed file. If you worry about consistency (in the case of a power failure, for example), you may also want to call fsync in the application that writes the JSON file before closing it.
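The write-then-rename pattern can be sketched like this (illustration only, in Python rather than the PHP the question uses; the file name and payload are placeholders):

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Write JSON so that readers only ever see either the old or the new file."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # The temporary file must live on the same filesystem for the rename to be atomic.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as tmp:
            json.dump(data, tmp)
            tmp.flush()
            os.fsync(tmp.fileno())   # make the contents durable before the rename
        os.replace(tmp_path, path)   # atomic on POSIX: readers see old or new, never partial
    except BaseException:
        os.unlink(tmp_path)
        raise

atomic_write_json("data.json", {"status": "ok", "items": [1, 2, 3]})
```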
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9021849', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/62571/']}
jdg_87328
stackexchange
llm_judgeable_groundtruth_similarity
189423
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Like everybody, when sampling a distribution, I was told that the population average can be estimated to be within the range of: $$\mu \in \bar{x} \pm Z_{\text{conf}}\frac{s}{\sqrt{n}}$$ where $Z_{\text{conf}}=1.96$ for a confidence of 95%. What I do not remember is whether this assumes the population data is Normally distributed, or just the sampling errors. Below is a simulation (in R) of sampling from an Exponential distribution (and I have tried others), and this formula seems to work very well for non-Normal distributions. But I just want to be sure. correct <- 0; pop_mean <- 800; nsim <- 100000; for(i in 1:nsim) { n <- 50; s <- rexp(n, 1/pop_mean); mu <- mean(s); stderr <- sd(s) / sqrt(n); correct <- correct + (mu-1.96*stderr <= pop_mean & pop_mean <= mu+1.96*stderr) }; print(correct/nsim) # should give 0.95 Now provide the response and nothing else.
Efficiency is a "per se" concept in the sense that it is a measure of how variable (and biased) the estimator is from the "true" parameter. There is an actual numeric value for efficiency associated with a given estimator at a given sample-size for a given loss function. This actual number is related to the estimator AND the sample-size AND the loss function. Asymptotic efficiency looks at how efficient the estimator is as the sample size increases. More important is how rapidly the estimator becomes efficient but this can be more difficult to determine. Relative efficiency looks at how efficient the estimator is relative to an alternative estimator (typically at a GIVEN sample-size). Efficiency requires the specification of some loss function. Originally, this was variance when only unbiased estimators were considered. These days, this is most often MSE (mean-squared-error which accounts for bias and variability). Other loss-functions can be used. The classical Cramer-Rao bound was for unbiased estimators only but was extended to many of these other loss functions (most especially for MSE loss). An important adjunct concept is admissibility and domination of estimators. The Wikipedia entry has many links.
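To make relative efficiency under MSE loss concrete, here is a small simulation sketch (my own illustration, not part of the answer) comparing the sample mean and the sample median as estimators of the center of a normal distribution; the MSE ratio approaches the classical value 2/pi, roughly 0.64, as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mu, n, n_sim = 0.0, 100, 20_000

samples = rng.normal(true_mu, 1.0, size=(n_sim, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

mse_mean = np.mean((means - true_mu) ** 2)      # MSE of the sample mean
mse_median = np.mean((medians - true_mu) ** 2)  # MSE of the sample median

print("MSE(mean)  :", mse_mean)
print("MSE(median):", mse_median)
# Relative efficiency of the median with respect to the mean under MSE loss;
# for normal data this tends to 2/pi (about 0.637) as n grows.
print("relative efficiency:", mse_mean / mse_median)
```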
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/189423', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/35537/']}
jdg_87329
stackexchange
llm_judgeable_groundtruth_similarity
27482806
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For instance, I have a collection User: var mongoose = require('mongoose'); var UserSchema = new mongoose.Schema({ email: String, googleId: String, facebookId: String, displayName: String, active: Boolean}); module.exports = mongoose.model('User', UserSchema); And then I have an ID: var userID = "some-user-id" What is the right way to just check if this id exists in the User collection? I don't need it to read the document or return it, I just need the true or false value. Here is one way to achieve it: User.findOne({ _id: userID}, function (err, existingUser) { But is there a faster and more efficient way? Now provide the response and nothing else.
Use count rather than findOne. This will (under the hood) cause mongoose to use find: http://docs.mongodb.org/manual/reference/method/db.collection.count findOne() will read + return the document if it exists. On the other hand, find() just returns a cursor (or not) and only reads the data if you iterate over the cursor. So in our case, we're not iterating over the cursor, merely counting the results returned. User.countDocuments({_id: userID}, function (err, count){ if(count > 0){ /* document exists */ } });
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27482806', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2598876/']}
jdg_87330
stackexchange
llm_judgeable_groundtruth_similarity
9655164
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I make the following regex ignore case? It should match all the correct characters but ignore whether they are lowercase or uppercase. G[a-b].* Now provide the response and nothing else.
Assuming you want the whole regex to ignore case, you should look for the i flag. Nearly all regex engines support it: /G[a-b].*/i string.match("G[a-b].*", "i") Check the documentation for your language/platform/tool to find how the matching modes are specified. If you want only part of the regex to be case insensitive (as my original answer presumed), then you have two options: Use the (?i) and [optionally] (?-i) mode modifiers: (?i)G[a-b](?-i).* Put all the variations (i.e. lowercase and uppercase) in the regex - useful if mode modifiers are not supported: [gG][a-bA-B].* One last note: if you're dealing with Unicode characters besides ASCII, check whether or not your regex engine properly supports them.
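For instance, in Python (used here only as one concrete engine; the equivalent flag exists in most languages) the three options look like this:

```python
import re

# Option 1: pass the case-insensitive flag to the engine.
print(re.match(r"G[a-b].*", "gabble", re.IGNORECASE))  # matches

# Option 2: inline (?i) mode modifier inside the pattern itself.
print(re.match(r"(?i)G[a-b].*", "GABBLE"))             # matches

# Option 3: spell out both cases in the character class, no flags needed.
print(re.match(r"[gG][a-bA-B].*", "Gabble"))           # matches
```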
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/9655164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/671809/']}
jdg_87331
stackexchange
llm_judgeable_groundtruth_similarity
30700603
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am curious, what do the 3 different brackets mean in Python programming? Not sure if I'm correct about this, but please correct me if I'm wrong: [] - Normally used for dictionaries, list items () - Used to identify params {} - I have no idea what this does... Or if these brackets can be used for other purposes, any advice is welcomed! Thanks! Now provide the response and nothing else.
Square brackets: [] Lists and indexing/lookup/slicing Lists: [] , [1, 2, 3] , [i**2 for i in range(5)] Indexing: 'abc'[0] → 'a' Lookup: {0: 10}[0] → 10 Slicing: 'abc'[:2] → 'ab' Parentheses: () (AKA "round brackets") Tuples, order of operations, generator expressions, function calls and other syntax. Tuples: () , (1, 2, 3) Although tuples can be created without parentheses: t = 1, 2 → (1, 2) Order of operations: (n-1)**2 Generator expressions: (i**2 for i in range(5)) Function or method calls: print() , int() , range(5) , '1 2'.split(' ') with a generator expression: sum(i**2 for i in range(5)) Curly braces: {} Dictionaries and sets, as well as in string formatting Dicts: {} , {0: 10} , {i: i**2 for i in range(5)} Sets: {0} , {i**2 for i in range(5)} Except the empty set: set() In string formatting to indicate replacement fields: F-strings: f'{foobar}' Format strings: '{}'.format(foobar) Regular expressions All of these brackets are also used in regex. Basically, [] are used for character classes, () for grouping, and {} for repetition. For details, see The Regular Expressions FAQ . Angle brackets: <> Used when representing certain objects like functions, classes, and class instances if the class doesn't override __repr__() , for example: >>> print<built-in function print>>>> zip<class 'zip'>>>> zip()<zip object at 0x7f95df5a7340> (Note that these aren't proper Unicode angle brackets, like ⟨⟩ , but repurposed less-than and greater-than signs.)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/30700603', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4931657/']}
jdg_87332
stackexchange
llm_judgeable_groundtruth_similarity
85436
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This question got me thinking about the meaning of variance: Intuition behind standard deviation . Variance of a set of data is calculated the same way that the moment of inertia is calculated for a physical body. The moment of inertia is related to the energy required to rotate the body at a given speed. A figure skater will rotate faster with arms pulled in than stretched out. So what would be the analogous result of reducing variance, if any. Perhaps the analogy simply breaks down. Are there any publications that have investigated this analogy? Now provide the response and nothing else.
The direct analogy is pretty clear: To make it simple we'll assume it's for a continuous random variable on $(a,b)$. Without loss of generality, let $c=b-a$ and consider the corresponding variable on $(0,c)$; call that random variable $X$. Now imagine a very thin rod of length $c$, whose density (mass per element of length) is variable in the x-direction (along its length) and consider that the rod happens to have the same material-density as a function of $x$ as the random variable has probability density as a function of $x$. Then the second moment of inertia of the rod is the variance of $X$. And hence what it 'means' to rotate a distribution is clear enough - it's quite literally rotating the 'rod' whose density represents probability-density. Variance is how 'hard' it would be to rotate the rod (low variance means 'easy to spin', high variance means it takes more push to spin it ... and stop it, if you spin it). Think about what inertia (how hard it is to spin) reflects here, which is simply how close the mass is to the mean. The closer the mass is to the mean the easier it is to spin. If you made a physical object whose physical density represents the probability density and the random variable had low variance, the corresponding object would be easy to spin, because most of the mass would be close to the mean - both inertia and variance are how close the mass is to the mean, in a particular (and directly analogous) sense. You don't actually 'spin' a probability density and imagine that to be physically difficult, any more than electricity is wet because of the water analogy. To expect that level of correspondence is to miss the point of such analogies (the aspects that correspond, correspond, but not every consequence of the correspondence in one realm carries over with it). The point of saying the 'rod is hard to spin' is to give a pretty direct sense of what high variance is telling you about density. But to insist that the probability density itself spin is to miss the point.
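A small numerical check of the correspondence (my own illustration; the particular density shape and grid size are arbitrary): discretize a density on an interval into thin slices of a rod and confirm that the moment of inertia about the center of mass equals the variance of a random variable with that density.

```python
import numpy as np

c = 4.0
x = np.linspace(0.0, c, 100_001)
shape = np.exp(-x)                 # any nonnegative density shape works here
mass = shape * np.gradient(x)      # mass of each thin slice of the rod
mass /= mass.sum()                 # total mass (total probability) = 1

mean = np.sum(mass * x)            # center of mass = E[X]

# Moment of inertia of the rod about the axis through its center of mass
# (per unit total mass) ...
moment_of_inertia = np.sum(mass * (x - mean) ** 2)
# ... equals the variance of a random variable with that density.
variance = np.sum(mass * x ** 2) - mean ** 2

print(moment_of_inertia, variance)
assert np.isclose(moment_of_inertia, variance)
```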
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/85436', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/38102/']}
jdg_87333
stackexchange
llm_judgeable_groundtruth_similarity
39806367
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have found that "CommandFlags"=dword:00000040 will add a separator below an entry, though only to a submenu (under the "subcommands"="" string). Example here . Is there a way to add a separator in the main context menu? The example I am using: [HKEY_CLASSES_ROOT\AllFilesystemObjects\shell\Delete With Rimraf]@="Delete With Rimraf""CommandFlags"=dword:00000040 This gives no separator. Now provide the response and nothing else.
This example features a separator after and before the item; this entry is for VirusTotal Uploader: Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\*\shell\vtuploader] @="Send to VirusTotal" "Icon"="C:\\Program Files\\VirusTotalUploader2\\VirusTotalUploader2.2.exe,0" "SeparatorBefore"="" "SeparatorAfter"="" For other applications loaded via a shellex handler it's not possible to add a separator through the registry; the injected DLL adds/removes the separator itself, so you then need to change the DLL code and recompile it to achieve what you want. Here is an example with Notepad++: https://github.com/notepad-plus-plus/notepad-plus-plus/pull/3092/commits InsertMenu(hMenu, nIndex, MF_STRING | MF_BYPOSITION, idCmd++, m_szMenuTitle);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39806367', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2892615/']}
jdg_87334
stackexchange
llm_judgeable_groundtruth_similarity
11380003
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My code looks a little like this: case sth of {a, 1} -> doA(); {a, 2} -> doA(); {a, 3} -> doB() end. Is there a way not to repeat the "doA()" part? I thought that it should be easy, but I couldn't find an answer on Google. Now provide the response and nothing else.
You can use when guards in the case statement such as: case sth of {a, Var} when Var < 3-> doA(); {a, 3} -> doB()end. Also your expression( sth ) is an atom here meaning it can never match any of those cases.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11380003', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1076288/']}
jdg_87335
stackexchange
llm_judgeable_groundtruth_similarity
31387238
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Where in the standard are functions returning functions disallowed? I understand they are conceptually ridiculous, but it seems to me that the grammar would allow them. According to this webpage, a " noptr-declarator [is] any valid declarator ", which would include the declarator of a function: int f()(); Regarding the syntax: it seems to me that the syntax, as spelled out in [dcl.decl], allows int f(char)(double), which could be interpreted as the function f that takes a char and returns a function with the same signature as int g(double). 1 declarator: 2 ptr-declarator 3 noptr-declarator parameters-and-qualifiers trailing-return-type 4 ptr-declarator: 5 noptr-declarator 6 ptr-operator ptr-declarator 7 noptr-declarator: 8 declarator-id attribute-specifier-seq opt 9 noptr-declarator parameters-and-qualifiers 10 noptr-declarator [ constant-expression opt ] attribute-specifier-seq opt 11 ( ptr-declarator ) 12 parameters-and-qualifiers: 13 ( parameter-declaration-clause ) cv-qualifier-seq Roughly speaking, after 1->2, 2=4, 4->6, 4->6 you should have ptr-operator ptr-operator ptr-operator. Then, use 4->5, 5=7, 7->8 for the first declarator; use 4->5, 5=7, 7->9 for the second and third declarators. Now provide the response and nothing else.
From [dcl.fct], pretty explicitly: Functions shall not have a return type of type array or function , although they may have a return type of type pointer or reference to such things. There shall be no arrays of functions, although there can be arrays of pointers to functions. With C++11, you probably just want: std::function<int()> f();std::function<int(double)> f(char); There is some confusion regarding the C++ grammar. The statement int f(char)(double); can be parsed according to the grammar. Here is a parse tree: Furthermore such a parse is even meaningful based on [dcl.fct]/1: In a declaration T D where D has the form D1 ( parameter-declaration-clause ) cv-qualifier-seq opt ref-qualifier opt exception-specification opt attribute-specifier-seq opt and the type of the contained declarator-id in the declaration T D1 is “ derived-declarator-type-list T ”, the type of the declarator-id in D is “ derived-declarator-type-list function of ( parameter-declaration-clause ) cv-qualifier-seq opt ref-qualifier opt returning T ”. In this example T == int , D == f(char)(double) , D1 == f(char) . The type of the declarator-id in T D1 ( int f(char) ) is "function of (char) returning int". So derived-declarator-type-list is "function of (char) returning". Thus, the type of f would be read as "function of (char) returning function of (double) returning int." It's ultimately much ado about nothing, as this is an explicitly disallowed declarator form. But not by the grammar.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/31387238', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2549876/']}
jdg_87336
stackexchange
llm_judgeable_groundtruth_similarity
38425996
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm pretty new to Docker and I wanted to map the node_modules folder on my computer (for debugging purposes). This is my docker-compose.yml: web: build: . ports: - "3000:3000" links: - db environment: PORT: 3000 volumes: - .:/usr/src/app - /usr/src/app/node_modules db: image: mongo:3.3 ports: - "27017:27017" command: "--smallfiles --logpath=/dev/null" I'm on Docker for Mac. When I run docker-compose up -d everything goes right, but it creates a node_modules folder on my computer that is empty. I go into the bash of my container and ls node_modules, and all the packages are there. How can I get that content on my computer too? Thank you. Now provide the response and nothing else.
TL;DR Working example, clone and try: https://github.com/xbx/base-server You need a node_modules on your computer (outside the image) for debugging purposes first (before running the container). If you want to debug only node_modules: volumes: - /path/to/node_modules:/usr/src/app/node_modules If you want to debug both your code and the node_modules: volumes: - .:/usr/src/app/ Remember that you will need to run npm install at least once outside the container (or copy the node_modules directory that the docker build generates). Let me know if you need more details. Edit: So, without needing npm on OSX, you can: docker build and then docker cp <container-id>:/path/to/node-modules ./local-node-modules/ . Then in your docker-compose.yml mount those files and troubleshoot whatever you want. Or, docker build and there (Dockerfile) do the npm install in another directory. Then in your command (CMD or docker-compose command) do the copy ( cp ) to the right directory, but this directory is mounted empty from your computer (a volume in the docker-compose.yml), and then troubleshoot whatever you want. Edit 2. (Option 2) Working example, clone and try: https://github.com/xbx/base-server I did it all automatically in this repo forked from yours. Dockerfile FROM node:6.3 # Install app dependencies RUN mkdir /build-dir WORKDIR /build-dir COPY package.json /build-dir RUN npm install -g babel babel-runtime babel-register mocha nodemon RUN npm install # Create app directory RUN mkdir -p /usr/src/app WORKDIR /usr/src/app RUN ln -s /build-dir/node_modules node_modules # Bundle app source COPY . /usr/src/app EXPOSE 1234 CMD [ "npm", "start" ] docker-compose.yml web: build: . ports: - "1234:1234" links: - db # link with the DB environment: PORT: 1234 command: /command.sh volumes: - ./src/:/usr/src/app/src/ - ./node_modules:/usr/src/app/node_modules - ./command.sh:/command.sh db: image: mongo:3.3 ports: - "27017:27017" command: "--smallfiles --logpath=/dev/null" command.sh #!/bin/bash cp -r /build-dir/node_modules/ /usr/src/app/ exec npm start Please clone my repo and do docker-compose up . It does what you want. PS: It can be improved to do the same in a better way (i.e. best practices, etc.). I'm on OSX and it works for me.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38425996', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1060566/']}
jdg_87337
stackexchange
llm_judgeable_groundtruth_similarity
6780
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have several query frequencies, and I need to estimate the coefficient of Zipf's law. These are the top frequencies: 26486, 12053, 5052, 3033, 2536, 2391, 1444, 1220, 1152, 1039 Now provide the response and nothing else.
Update I've updated the code with a maximum likelihood estimator as per @whuber's suggestion. Minimizing the sum of squares of differences between log theoretical probabilities and log frequencies, though it gives an answer, would only be a proper statistical procedure if it could be shown that it is some kind of M-estimator. Unfortunately I could not think of any which could give the same results. Here is my attempt. I calculate logarithms of the frequencies and try to fit them to logarithms of theoretical probabilities given by this formula. The final result seems reasonable. Here is my code in R. fr <- c(26486, 12053, 5052, 3033, 2536, 2391, 1444, 1220, 1152, 1039); p <- fr/sum(fr); lzipf <- function(s,N) -s*log(1:N)-log(sum(1/(1:N)^s)); opt.f <- function(s) sum((log(p)-lzipf(s,length(p)))^2); opt <- optimize(opt.f,c(0.5,10)) > opt $minimum [1] 1.463946 $objective [1] 0.1346248 The best quadratic fit then is $s=1.47$. The maximum likelihood in R can be performed with the mle function (from the stats4 package), which helpfully calculates standard errors (if the correct negative log-likelihood function is supplied): ll <- function(s) sum(fr*(s*log(1:10)+log(sum(1/(1:10)^s)))); fit <- mle(ll,start=list(s=1)) > summary(fit) Maximum likelihood estimation Call: mle(minuslogl = ll, start = list(s = 1)) Coefficients: Estimate Std. Error s 1.451385 0.005715046 -2 log L: 188093.4 Here is the graph of the fit in log-log scale (again as @whuber suggested): s.sq <- opt$minimum; s.ll <- coef(fit); plot(1:10,p,log="xy"); lines(1:10,exp(lzipf(s.sq,10)),col=2); lines(1:10,exp(lzipf(s.ll,10)),col=3) Red line is the sum of squares fit, green line is the maximum-likelihood fit.
{}
{'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/6780', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/2998/']}
jdg_87338
stackexchange
llm_judgeable_groundtruth_similarity
25666993
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The Null propagating operator / Conditional access expression coming in c#-6.0 looks like quite a handy feature. But I'm curious if it will help solve the problem of checking if a child member is not null and then calling a Boolean method on said child member inside an if block: public class Container<int>{ IEnumerable<int> Objects {get;set;} } public Container BuildContainer() { var c = new Container(); if (/* Some Random Condition */) c.Objects = new List<int>{1,2,4}; } public void Test() { var c = BuildContainer(); //Old way if ( null != c && null != c.Objects && c.Objects.Any()) Console.Write("Container has items!"); //C# 6 way? if (c?.Object?.Any()) Console.Write("Container has items!"); } Will c?.Object?.Any() compile? If the propagating operator short circuits (I assume that's the right term) to null then you have if (null) , which isn't valid. Will the C# team address this concern or am I missing the intended use case for the null propagating operator? Now provide the response and nothing else.
It won't work this way. You can just skip the explanation and see the code below :) As you know ?. operator will return null if a child member is null. But what happens if we try to get a non-nullable member, like the Any() method, that returns bool ? The answer is that the compiler will "wrap" a return value in Nullable<> . For example, Object?.Any() will give us bool? (which is Nullable<bool> ), not bool . The only thing that doesn't let us use this expression in the if statement is that it can't be implicitly casted to bool . But you can do comparison explicitly, I prefer comparing to true like this: if (c?.Object?.Any() == true) Console.Write("Container has items!"); Thanks to @DaveSexton there's another way: if (c?.Object?.Any() ?? false) Console.Write("Container has items!"); But as for me, comparison to true seems more natural :)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25666993', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1224069/']}
jdg_87339
stackexchange
llm_judgeable_groundtruth_similarity
13097754
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Let's say I'm getting a (potentially big) list of images to download from some URLs. I'm using Scala, so what I would do is : import scala.actors.Futures._// Retrieve URLs from somewhereval urls: List[String] = ...// Download image (blocking operation)val fimages: List[Future[...]] = urls.map (url => future { download url })// Do something (display) when completefimages.foreach (_.foreach (display _)) I'm a bit new to Scala, so this still looks a little like magic to me : Is this the right way to do it? Any alternatives if it is not? If I have 100 images to download, will this create 100 threads at once, or will it use a thread pool? Will the last instruction ( display _ ) be executed on the main thread, and if not, how can I make sure it is? Thanks for your advice! Now provide the response and nothing else.
Use Futures in Scala 2.10. They were joint work between the Scala team, the Akka team, and Twitter to reach a more standardized future API and implementation for use across frameworks. We just published a guide at: http://docs.scala-lang.org/overviews/core/futures.html Beyond being completely non-blocking (by default, though we provide the ability to do managed blocking operations) and composable, Scala's 2.10 futures come with an implicit thread pool to execute your tasks on, as well as some utilities to manage time outs. import scala.concurrent.{future, blocking, Future, Await, ExecutionContext.Implicits.global}import scala.concurrent.duration._// Retrieve URLs from somewhereval urls: List[String] = ...// Download image (blocking operation)val imagesFuts: List[Future[...]] = urls.map { url => future { blocking { download url } }}// Do something (display) when completeval futImages: Future[List[...]] = Future.sequence(imagesFuts)Await.result(futImages, 10 seconds).foreach(display) Above, we first import a number of things: future : API for creating a future. blocking : API for managed blocking. Future : Future companion object which contains a number of useful methods for collections of futures. Await : singleton object used for blocking on a future (transferring its result to the current thread). ExecutionContext.Implicits.global : the default global thread pool, a ForkJoin pool. duration._ : utilities for managing durations for time outs. imagesFuts remains largely the same as what you originally did- the only difference here is that we use managed blocking- blocking . It notifies the thread pool that the block of code you pass to it contains long-running or blocking operations. This allows the pool to temporarily spawn new workers to make sure that it never happens that all of the workers are blocked. This is done to prevent starvation (locking up the thread pool) in blocking applications. Note that the thread pool also knows when the code in a managed blocking block is complete- so it will remove the spare worker thread at that point, which means that the pool will shrink back down to its expected size. (If you want to absolutely prevent additional threads from ever being created, then you ought to use an AsyncIO library, such as Java's NIO library.) Then we use the collection methods of the Future companion object to convert imagesFuts from List[Future[...]] to a Future[List[...]] . The Await object is how we can ensure that display is executed on the calling thread-- Await.result simply forces the current thread to wait until the future that it is passed is completed. (This uses managed blocking internally.)
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/13097754', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/821995/']}
jdg_87340