Dataset fields:
source: string (1 class)
task_type: string (1 class)
in_source_id: string, length 1 to 8
prompt: string, length 209 to 40.4k
gold_standard_solution: string, length 0 to 56.7k
verification_info: string (1 class)
metadata: string, length 138 to 225
problem_id: string, length 9 to 10
stackexchange
llm_judgeable_groundtruth_similarity
30083949
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to turn a list into separated strings joined with an ampersand if there are only two items, or commas and an ampersand between the last two, e.g. "Jones & Ben", "Jim, Jack & James". I currently have this: pa = ' & '.join(listauthors[search]) and don't know how to sort out the comma/ampersand issue. Beginner, so a full explanation would be appreciated. Now provide the response and nothing else.
"&".join([",".join(my_list[:-1]),my_list[-1]]) I would think would work or maybe just ",".join(my_list[:-1]) +"&"+my_list[-1] to handle edge cases where only 2 items you could "&".join([",".join(my_list[:-1]),my_list[-1]] if len(my_list) > 2 else my_list)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30083949', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4871768/']}
jdg_81741
stackexchange
llm_judgeable_groundtruth_similarity
28600
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Say I have an observed data set ($n_i$) and I want to obtain the best fit out of 10 data sets produced by a model dependent on a single parameter $a$ ($m_i(a)\;a=1..10$). Suppose I use a Poisson likelihood distribution: $P_i(a)=\frac{m_i(a)^{n_i}}{e^{m_i(a)}n_i!}\; ; \; a=1..10$ where $m_i(a)$ represents the model value of bin $i$ (for a given value of the parameter $a$) and $n_i$ its observed value (which remains constant always)The likelihood of the whole data set or cumulative likelihood (for each value of $a$) is then: $L(a)=\prod\limits^{N}_{i=1} \frac{m_i(a)^{n_i}}{e^{m_i(a)}n_i!}\; ; \; a=1..10$ where $N$ is the total number of bins.Now, I want to pick the model data set that best fits my observed data set . Could I just choose the maximum value of $L(a)$, ie: the maximum likelihood : $Max\_Likelihood = max\,[ L(a)\; ; \; a=1..10]$ and say that the value of parameter $a$ associated with that particular modeled data set is the best estimation of parameter $a$, given my observed data set ? Or should I use a cumulative likelihood ratio , defined as : $LR(a)= \prod\limits^{N}_{i=1} \frac{\frac{m_i(a)^{n_i}}{e^{m_i(a)}n_i!}}{\frac{n_i^{n_i}}{e^{n_i}n_i!}} = \prod\limits^{N}_{i=1} \left(\frac{m_i(a)}{n_i}\right)^{n_i}e^{n_i-m_i(a)}\; ; \; a=1..10$ and keep the value of $a$ that gives the maximum value of $LR(a)$, ie the maximum likelihood ratio : $Max\_Likelihood\_Ratio = max\,[ LR(a)\; ; \; a=1..10]$ Since the observed data set ($n_i$) remains the same for all the modeled data sets ($m_i(a);\;a=1..10$), won't maximizing this value $LR(a)$ give me the same result (ie: the same value of $a$) as maximizing $L(a)$? Now provide the response and nothing else.
The probability of getting $n$ identical faces is $1/6^n$ if you fix the face first. Of course a long run is suspect. These kinds of questions can be resolved with hypothesis tests. The null hypothesis is that the die is fair, and the alternative hypothesis is that it is not. You then find the likelihood ratio and, depending on whether it is greater or less than a threshold, you accept or reject the null hypothesis. The procedure is detailed here. Another relevant concept is the typical set from information theory, which says that a handful of sequences are overwhelmingly more likely to be observed than others.
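As to the equivalence the question actually asks about: since the denominator of $LR(a)$ involves only the fixed observations $n_i$, it is a constant in $a$, so maximizing $LR(a)$ and maximizing $L(a)$ select the same $a$. A quick numerical sketch (the model values $m_i(a)$ and observations $n_i$ below are made up purely for illustration):

```python
import math

def log_likelihood(m, n):
    # Poisson log-likelihood of observations n under model means m.
    return sum(ni * math.log(mi) - mi - math.lgamma(ni + 1)
               for mi, ni in zip(m, n))

def log_likelihood_ratio(m, n):
    # Same, but relative to the saturated model m_i = n_i (a constant shift in a).
    return log_likelihood(m, n) - log_likelihood(n, n)

n_obs = [3, 7, 2, 5, 9]                                        # hypothetical observed counts
models = {a: [a + i for i in range(5)] for a in range(1, 11)}  # toy m_i(a), a = 1..10

best_L = max(models, key=lambda a: log_likelihood(models[a], n_obs))
best_LR = max(models, key=lambda a: log_likelihood_ratio(models[a], n_obs))
print(best_L, best_LR)  # the same a both times
```

Working in log space avoids the underflow that the raw product over many bins would cause.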
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/28600', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/10416/']}
jdg_81742
stackexchange
llm_judgeable_groundtruth_similarity
18236
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to "The Myth of Plutonium Toxicity" by Bernard L. Cohen: Plutonium is constantly referred to by the news media as "the most toxic substance known to man." Ralph Nader has said that a pound of plutonium could cause 8 billion cancers, and former Senator Ribicoff has said that a single particle of plutonium inhaled into the lung can cause cancer. The book sources the quote as coming from: R. Nader, speech at Lafayette College (Spring 1975). Another source states that he said this in a debate with radiation pioneer Ralph Lapp: Discussions Of Nuclear Power Should Be Based In Reality by nuclear engineer Theodore Rockwell, published in The Scientist: For example, one day consumer activist Ralph Nader was debating radiation pioneer Ralph Lapp. Nader stated that a pound of plutonium could kill every human being on Earth. One could picture a one-pint jar of the stuff spilling on the ground and its deadly vapors spreading until all life was obliterated. That's what Nader's statement means in the common-sense real world. But Lapp put the statement in its proper context by replying: "So could a pound of fresh air, Ralph." Now how can that be? We've been repeatedly told that plutonium is the deadliest substance known. And we know that fresh air is literally the breath of life. What's going on here? Nader's statement was not actually a lie; he was just trying to make us think that a hypothetical conjecture was a real-world problem. He's saying that the lethal dose of plutonium is a five-billionth of a pound.
It's really several thousand times larger, but even if Nader were correct, the only way you could actually kill the world's 5 billion people with just one pound would be to line them up and have a trained physician inject into each person just the toxic amount of plutonium, no more or there wouldn't be enough to go around. It would have to be in a fine aerosol mist, or it wouldn't be lethal, and it would have to go directly into the lung. Then we would have to wait several decades, protecting the individual from other life-threatening influences such as cars, smoking, and malnutrition, until he or she died of lung cancer, because plutonium poses no other health threat. Context can make a huge difference in what people actually mean, so I wonder if Ralph Nader has actually claimed that a pound of plutonium could cause 8 billion cancer deaths, and, if so, if context would indicate whether he means it in the way it is interpreted. Did Ralph Nader claim that a pound of plutonium could cause 8 billion cancers? Question inspired by: Is plutonium the most toxic substance known to man? Now provide the response and nothing else.
It is a common figure of speech to say [small amount] of toxin could kill [large amount] people. (Examples: 1 , 2 ) It is an emotive demonstration of the toxicity, but it openly ignores the complexity of the delivery mechanism - an assumption Rockwell scoffs at. What is more relevant than the figure of speech used to describe the toxicity, is the underlying level of toxicity claimed: He's saying that the lethal dose of plutonium is a five-billionth of a pound. While I don't have a transcript of his March 11, 1975 speech, I did find an article Nader wrote about two weeks later ( Source: Nader.org ): Any use of plutonium, a tiny 0.2 micrograms of which has given lung cancer to animals, must be viewed with utmost caution. There are 2,267,961,850 doses of 0.2 micrograms per pound - i.e. about 2 billion. So: based on this article, rather than the cited speech, assuming similar dosages will work on humans, as well as other animals, ignoring the distribution mechanisms required to split the dosages up and apply them to the lungs of the humans: Ralph Nader did suggest that the toxicity of Plutonium was high enough to cause over 2 billion cancers per pound. This is lower than the quoted 8 billion, but shows Nader was making toxicity claims in the ball-park of the quote.
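The dose arithmetic behind that roughly-2-billion figure is easy to check (a sketch; the pound-to-microgram conversion is standard, and the 0.2 µg dose is the one cited in Nader's article):

```python
MICROGRAMS_PER_POUND = 453_592_370  # 1 lb = 453.59237 g = 453,592,370 µg
dose_micrograms = 0.2               # the 0.2 µg animal dose cited by Nader

doses_per_pound = MICROGRAMS_PER_POUND / dose_micrograms
print(f"{doses_per_pound:,.0f}")    # 2,267,961,850, i.e. about 2 billion
```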
{}
{'log_upvote_score': 5, 'links': ['https://skeptics.stackexchange.com/questions/18236', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/5337/']}
jdg_81743
stackexchange
llm_judgeable_groundtruth_similarity
2018320
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So I was messing around with Tau in base 12 and Mathematica (very amateur), and at some point started converting 5 same digit long base 10 sequences to base 12, and inputing these sequences in a Notepad file with 1,250,000 Tau decimals in base 12. Weirdly, many of these (>1/2) pretty rare sequences tend to end up only ~5,000 digits from, and most of these even closer to 1,200,000 th decimal. Here's the data: X = 10 E = 11 sqs = sequences ("closest to..." is calculated from the first decimal the number makes appearance on) $11111_{10} = 651E_{12}$ - 68 sqs found, closest to 1,200,000 on 1,201,556 th decimal, (67th is 1,189,183rd),(this sequence is most likely not relevant - too common). $22222_{10} = 10X3X_{12}$ - 4 sqs found, closest to 1,200,000 on 698,601st decimal. $33333_{10} = 17359_{12}$ - 4 sqs found, closest to 1,200,000 on 1,197,845 th decimal. $44444_{10} = 21878_{12}$ - 5 sqs found, closest to 1,200,000 on 1,195,972 nd decimal. $55555_{10} = 28197_{12}$ - 4 sqs found, closest to 1,200,000 on 1,196,762 nd decimal. $66666_{10} = 326E6_{12}$ - 3 sqs found, closest to 1,200,000 on 1,188,745 th decimal. $77777_{10} = 39015_{12}$ - 3 sqs found, closest to 1,200,000 on 1,533,163rd decimal (679,798th). $88888_{10} = 43534_{12}$ - 4 sqs found, closest to 1,200,000 on 1,066,571st decimal. $99999_{10} = 49X53_{12}$ - 5 sqs found, closest to 1,200,000 on 1,199,871 st decimal. Of course then I wonder, what mathematical explanation does this have? It surely must have some explanation, the coincidence is just too big.(Now I will try to do the opposite, convert 5 of the same digit sequences from base 12 to base 10, and input it in a Tau base 10 file. There could be some similarities?) edit: I did try what I suggested above, with little luck. There were some minor curiosities. 
Like 11111 = 22621 had 5 appearances within ~31,000 digits from 500,000, closest being 495,703, 496,433, 497,578 - and 495,703 was shared by XXXXX = 226210 (extra 0), but that must be a 10% chance of happening, so, a very minor curiosity. Just another minor unrelated curiosity I noticed was the symmetrical number "06X999X60", where the middle "9" is decimal number 3,330. To mods: Suggest and edit my tags, I'm really not sure what I should tag this (amateur). Check this out, if you add the digits of these numbers together, almost the same pattern emerges here as well (bold equations are the ones that were close to 1,200,000):

$11111_{10} = 651E_{12}$ | 6+5+1+E = 23
$22222_{10} = 10X3X_{12}$ | 1+0+X+3+X = 24
$33333_{10} = 17359_{12}$ | 1+7+3+5+9 = 25
$44444_{10} = 21878_{12}$ | 2+1+8+7+8 = 26
$55555_{10} = 28197_{12}$ | 2+8+1+9+7 = 27
$66666_{10} = 326E6_{12}$ | 3+2+6+E+6 = 28
$77777_{10} = 39015_{12}$ | 3+9+0+1+5 = 18 (why not 29?)
$88888_{10} = 43534_{12}$ | 4+3+5+3+4 = 19 (why not 30?)
$99999_{10} = 49X53_{12}$ | 4+9+X+5+3 = 31

Do you see the pattern? 2 of the last numbers that were not close to the 1,200,000th decimal did not follow the logical steps. Only the 1st of them, "10X3X", still followed the pattern. $23, 24, 25, 26, 27, 28, 18, 19, 31$ Isn't it a bit strange that the only number "24" that "does not follow" the "pattern" is the only number divisible by 12! :) I now continue adding "11111", and the most exciting discovery so far is that the number $122221_{10} = 5X891_{12}$ makes its 13th appearance on decimal 2,222,222! PS: I did do many more +11111, but no apparent pattern emerged. The next 9 numbers weren't as close to 1,200,000 in general as the first 9 (though still had some bias towards it). But nothing really suggests that the first 9 numbers, or the second, were more than just slightly lucky.
Maybe I will update this post occasionally with more interesting coincidences, if I find them, as with 122221 (mentioned above), the symmetrical number 06X999X60, in which the middle 9 is placed at 3,330th decimal, and the three XXX's (101010) where the middle X is placed on top of decimal 6,666. Now provide the response and nothing else.
This is expected behaviour. You found many (extremely) close occurrences but also several incomparably distant ones. The standard deviation of the offsets you list is roughly 207,300, and counting the mean square deviation from our artificial reference point of 1,200,000 we get something even a bit higher, roughly 244,943. These numbers compare well with the inverse of the expected probability of occurrence of a particular 5-digit pattern in a random digit string, i.e., $10^5$. Nothing would be out of the ordinary even if they were twice as close each. For $\pi$ (and consequently, $\tau$), it is believed that the digits in any base are distributed just like completely random numbers, and in that light any clusters are a mere coincidence that could happen, comparably sparingly, if you generated the digits from a coin toss. The magnitude of 1.25 million digits relative to $10^5$ seems well within "sparingly", albeit I did not calculate that in more detail. While this may sound disappointing, please don't let that put you off looking for inexplicable phenomena in different combinations of bases! You're doing a great job and maybe you'll help disprove the conjecture. Many a big discovery started with a "that's funny...".
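A quick simulation makes the "just like random digits" point concrete (a sketch, using pseudo-random base-12 digits in place of τ's; the pattern and seed are arbitrary choices):

```python
import random

random.seed(42)
N_DIGITS = 1_250_000  # same length as the OP's digit file

# Random base-12 digits (X = ten, E = eleven) as a stand-in for tau's digits.
digits = "".join(random.choice("0123456789XE") for _ in range(N_DIGITS))

pattern = "10X3X"  # 22222 in base 10, written in base 12
count = digits.count(pattern)
expected = (N_DIGITS - len(pattern) + 1) / 12 ** len(pattern)
print(count, round(expected, 1))  # about 5 expected; the OP saw 4 in tau
```

The expected count of roughly 5 per 1.25 million digits matches the 3 to 5 occurrences the question reports for most patterns, which is exactly what the answer above argues.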
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2018320', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/390321/']}
jdg_81744
stackexchange
llm_judgeable_groundtruth_similarity
1079491
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using a custom endpoint behavior extension to intercept messages as they are received by my WCF service endpoint, using IDispatchMessageInspector. I retrieve the message content like this:

public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
{
    MessageBuffer messageBuffer = request.CreateBufferedCopy(Int32.MaxValue);
    Message message = messageBuffer.CreateMessage();
    using (MemoryStream stream = new MemoryStream())
    {
        using (XmlWriter writer = XmlWriter.Create(stream))
        {
            message.WriteMessage(writer);
            writer.Flush();
            stream.Position = 0;
        }
    }
}

I need to obtain the XML message exactly as it is sent by the client, but the XML that is written to the stream seems to be modified by WCF (or the XmlWriter?). My main problem is that it has modified closing tags: <id /> becomes <id></id> everywhere in the XML message. Is there a way to write the message content as it was received by the endpoint without it being modified (or at least without changing the way the tags are closed)? Now provide the response and nothing else.
The previous posts are correct. Use ROW_NUMBER if using Microsoft SQL Server 2005 or greater. However, your tags do not specify that you're using MSSQL, so here's a solution that should work across most RDBMS implementations. Essentially, use a correlated subquery to determine the count of rows in the same set that are less than the current row, based on the values of the ORDER clause of the outer query. Something like this:

SELECT T1.id,
       T1.[name],
       (SELECT COUNT(*) FROM table T2 WHERE T2.[name] < T1.[name]) + 1 AS rowpos
FROM table T1
ORDER BY T1.[name]
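The correlated-subquery trick is easy to try out with SQLite from Python (a sketch; the toy table t and its rows are made up, and SQLite does not use the [bracket] identifier quoting shown above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE t (id INTEGER, name TEXT)')
conn.executemany('INSERT INTO t VALUES (?, ?)',
                 [(1, 'Carol'), (2, 'Alice'), (3, 'Bob')])

# Correlated subquery: count of rows with a smaller name, plus one, is the position.
rows = conn.execute('''
    SELECT t1.id, t1.name,
           (SELECT COUNT(*) FROM t t2 WHERE t2.name < t1.name) + 1 AS rowpos
    FROM t t1
    ORDER BY t1.name
''').fetchall()

print(rows)  # [(2, 'Alice', 1), (3, 'Bob', 2), (1, 'Carol', 3)]
```

Note the O(n²) cost of the correlated subquery; on engines that support it, ROW_NUMBER() is the better choice for large tables.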
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1079491', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/132558/']}
jdg_81745
stackexchange
llm_judgeable_groundtruth_similarity
16685113
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can Some one explain me the architecture of Edge node in hadoop.I am able to find only the definition on the internet, I have the following queries - 1) Does the edge node have to be part of the cluster (What advantages do we have if it is inside the cluster?). Does it store any blocks of data in hdfs. 2) Can the edge node be outside the cluster? Now provide the response and nothing else.
+1 with the Dell explanation. In my opinion, edge nodes in a Hadoop cluster are typically nodes that are responsible for running the client-side operations of a Hadoop cluster. Typically edge nodes are kept separate from the nodes that run Hadoop services such as HDFS, MapReduce, etc., mainly to keep computing resources separate. For smaller clusters with only a few nodes, it's common to see nodes playing a hybrid combination of roles for master services (JT, NN, etc.), slave services (TT, DN, etc.) and gateway services. Note that running master and slave Hadoop services on the same node is not an ideal setup, and can cause scaling and resource issues depending on what's in use. This kind of configuration is typically seen in a small-scale dev environment. With that said, here are some answers to your questions:

1) Does the edge node have to be part of the cluster? The edge node does not have to be part of the cluster; however, if it is outside of the cluster (meaning it doesn't have any specific Hadoop service roles running on it), it will need some basic pieces such as the Hadoop binaries and the current Hadoop cluster config files to submit jobs on the cluster.

2) What advantages do we have if it is inside the cluster? Depending on which distribution is in use, edge nodes running within the cluster allow for centralized management of all the Hadoop configuration entries on the cluster nodes, which helps to reduce the amount of administration needed to update the config files. Usually this is a one-to-many approach, where config entries are updated in one location and are pushed out to all (many) nodes in the cluster. However, when one of the nodes within the cluster is also used as an edge node, there are CPU and memory resources consumed by the client operations, which detracts from the resources that could be utilized by the Hadoop services running on that node.

3) Does it store any blocks of data in hdfs?
Unless the edge node is configured with a DataNode service, blocks of data will not be stored on that node.

4) Should the edge node be outside the cluster? As mentioned above, it depends on the cluster environment and use-case; one of the supporting reasons to configure it outside of the cluster is to keep client operations and Hadoop services separated. Keeping the edge node separate leaves the cluster nodes free to devote their full computing resources to Hadoop processing. Hope this helps!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16685113', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1660223/']}
jdg_81746
stackexchange
llm_judgeable_groundtruth_similarity
70990
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to the man page of lsmod the command shows “what kernel modules are currently loaded”. I wrote a script that uses modinfo to show what kernel object (.ko) files are actually in use:

#!/bin/sh
for i in `lsmod | awk '{print $1}' | sed -n '1!p'`; do
    echo "###############################$i###############################"
    echo ""
    modinfo $i
    echo ""
    echo ""
done

Now I found out that modinfo nvidia shows the following output: ERROR: modinfo: could not find module nvidia Do you guys have any explanation for this? Now provide the response and nothing else.
Your nvidia module is perfectly loaded and working. The problem lies in modinfo. modinfo fetches the list of known modules by reading the /lib/modules/$(uname -r)/modules.* files, which are usually updated with depmod. If depmod -a has not been run after installing the nvidia module, then modinfo does not know about it. This does not prevent anybody from loading the module with insmod, and lsmod will show it just fine once loaded.
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/70990', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/17859/']}
jdg_81747
stackexchange
llm_judgeable_groundtruth_similarity
1714461
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know that every vector space needs to contain a zero vector. But all the vector spaces I've seen have the zero vector actually being zero (e.g. $\mathbf{0}=\langle0,0,\ldots,0\rangle$). Can't the "zero vector" not involve zero, as long as it acts as the additive identity? If that's the case then are there any graphical representations of a vector space that does not contain the origin? Now provide the response and nothing else.
Here's an example. Let $V$ be the set of all $n$-tuples of strictly positive numbers $x_1,\ldots,x_n$ satisfying $x_1+\cdots+x_n=1$. Define "addition" of such vectors by $$(x_1,\ldots,x_n) \mathbin{\text{“}{+}\text{''}} (y_1,\ldots,y_n) = \frac{(x_1 y_1,\ldots,x_n y_n)}{x_1 y_1 + \cdots + x_n y_n }.$$ This is a vector space whose zero element is$$\left( \frac 1 n , \ldots, \frac 1 n \right).$$The additive inverse of $(x_1,\ldots,x_n)$ is$$\frac{\left( \dfrac 1 {x_1}, \ldots, \dfrac 1 {x_n} \right)}{\dfrac 1 {x_1} + \cdots + \dfrac 1 {x_n}}.$$This operation is involved in a basic identity on conditional probabilities: $$ (\Pr(A_1),\ldots,\Pr(A_n)) \mathbin{\text{“}{+}\text{''}} k\cdot(\Pr(D\mid A_1),\ldots,\Pr(D\mid A_n)) = (\Pr(A_1\mid D),\ldots,\Pr(A_n\mid D)) $$where $k$ is whatever it takes to make the sum of the entries $1$. However, in practice, one wouldn't bother with $k$; just multiply term by term and then normalize. Here's a more down-to-earth example. Look at $\mathbb R^3$ and say you want to put the zero point at $\vec p = (2,3,7)$. Then define "addition" as follows:$$\vec a \mathbin{\text{“}{+}\text{''}} \vec b = \underbrace{\vec p + (\vec a - \vec p) + (\vec b - \vec p)}_{\begin{smallmatrix} \text{These are the usual} \\ \text{addition and subtraction.} \end{smallmatrix}}.$$
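A quick numerical check of the first construction (a sketch for $n=3$; the helper names simplex_add and simplex_neg are my own):

```python
def simplex_add(x, y):
    # "Addition": componentwise product, renormalised so the entries sum to 1.
    prod = [a * b for a, b in zip(x, y)]
    s = sum(prod)
    return [p / s for p in prod]

def simplex_neg(x):
    # Additive inverse: componentwise reciprocal, renormalised.
    rec = [1 / a for a in x]
    s = sum(rec)
    return [r / s for r in rec]

n = 3
zero = [1 / n] * n
v = [0.2, 0.3, 0.5]

identity_check = simplex_add(v, zero)          # should give back v
inverse_check = simplex_add(v, simplex_neg(v)) # should give the zero element

print(identity_check)  # [0.2, 0.3, 0.5]
print(inverse_check)   # [1/3, 1/3, 1/3] up to rounding
```

So the tuple $(1/n,\ldots,1/n)$ really does behave as the additive identity even though none of its entries is the number zero.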
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1714461', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/325462/']}
jdg_81748
stackexchange
llm_judgeable_groundtruth_similarity
3799423
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $f(x,y) = \frac{x^my^n}{x^i+y^j}$ is continuous at $(x,y)=(0,0)$ if and only if (iff) $mj+ni>ij$ . I tried the approach $\frac{m}{i}$ + $\frac{n}{j}$ >1, but its of no use. Any other approach I should try?It is given $f(0,0) = 0$ .It is also given m,n are non-negative integers and i,j are even natural numbers. Now provide the response and nothing else.
$$\frac{1}{\sqrt{R^2+x^2-2 R x \cos (\theta )}}=\frac{1}{\sqrt{R^2+x^2}}\frac{1}{\sqrt{1-k\cos (\theta )}}$$ with $k=\frac{2Rx}{R^2+x^2}$ . $$\int \frac{d\theta}{\sqrt{1-k\cos (\theta )}}=\frac 2 {\sqrt{{1-k}}} F\left(\frac{\theta }{2}|\frac{2 k}{k-1}\right)$$ $$\int_0^{2\pi} \frac{d\theta}{\sqrt{1-k\cos (\theta )}}=\frac 4 {\sqrt{{1-k}}}K\left(\frac{2 k}{k-1}\right)$$ If $k$ is small, you can use the expansion $$\frac 4 {\sqrt{{1-k}}}K\left(\frac{2 k}{k-1}\right)=2\pi \left(1+\frac{3 k^2}{16}+\frac{105 k^4}{1024}+\frac{1155 k^6}{16384} \right)+O\left(k^8\right)$$ If you want a much better approximation, you could use the Padé approximant $$\frac 4 {\sqrt{{1-k}}}K\left(\frac{2 k}{k-1}\right)=2\pi\frac{1-\frac{497 }{576}k^2+\frac{3835}{36864}k^4 } {1-\frac{605 }{576}k^2+\frac{7315 }{36864}k^4 }$$
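The small-$k$ expansion can be sanity-checked numerically (a sketch using plain trapezoidal quadrature in place of the elliptic-integral evaluation; the step count is an arbitrary choice):

```python
import math

def integral(k, steps=20000):
    # Trapezoidal rule for the integral of dtheta / sqrt(1 - k*cos(theta)) over [0, 2*pi].
    h = 2 * math.pi / steps
    total = 0.0
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        theta = i * h
        total += w / math.sqrt(1 - k * math.cos(theta))
    return total * h

def series(k):
    # 2*pi * (1 + 3k^2/16 + 105k^4/1024 + 1155k^6/16384), the expansion above.
    return 2 * math.pi * (1 + 3 * k**2 / 16 + 105 * k**4 / 1024
                          + 1155 * k**6 / 16384)

k = 0.1
print(integral(k), series(k))  # agree closely for small k
```

For a periodic smooth integrand the trapezoidal rule converges extremely fast, so the residual difference here is essentially the $O(k^8)$ truncation of the series.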
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3799423', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/818277/']}
jdg_81749
stackexchange
llm_judgeable_groundtruth_similarity
27381119
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible in a JavaFX 8 3D scene to find points along a ray (e.g. PickRay), starting at an arbitrary point in 3D space with some 3D direction vector, where the ray intersects the triangles in a mesh (TriangleMesh inside a MeshView)? I know this is done in the Camera/MouseHandler for mouse picking but I can't see any way for doing it for arbitrary rays. Now provide the response and nothing else.
As @jdub1581 suggests, a ray is just a geometric vector, so in order to find a list of triangles intersected by this vector we need to solve problems of the kind 'line intersects plane' and 'line intersects plane within the triangle borders'. Let's assume we have a TriangleMesh, with a list of vertices and a list of faces. Each vertex has 3 coordinates, each face has 3 vertices (without considering texture, normals, ...). For the sake of simplicity let's use two lists of Point3D to store them. In this link there are several 3D shapes ready to use. Let's grab a CuboidMesh:

CuboidMesh cuboid = new CuboidMesh(10f, 12f, 4f, 4);

This will give us this 3D shape: Now, if we have a look at the mesh, we could create two lists with vertices and faces:

List<Point3D> vertices = Arrays.asList(new Point3D(5.0, 6.0, 2.0),
    new Point3D(5.0, 6.0, 2.0), new Point3D(5.0, -6.0, 2.0), ...,
    new Point3D(-1.875, -2.25, -2.0), new Point3D(-1.875, -1.5, -2.0));
List<Point3D> faces = Arrays.asList(new Point3D(0, 386, 388),
    new Point3D(98, 387, 386), new Point3D(100, 388, 387), ...,
    new Point3D(383, 1535, 1537), new Point3D(1536, 1537, 1535));

Let's add some 3D points in our scene, one origin and one target, both in global coordinates, and define the direction of the vector, normalized:

Point3D gloOrigin = new Point3D(4, -7, -4);
Point3D gloTarget = new Point3D(2, 3, 4);
Point3D gloDirection = gloTarget.subtract(gloOrigin).normalize(); // -0.154, 0.771, 0.617

The ray equation then will be this: r(t) = (4,-7,-4) + t*(-0.154, 0.771, 0.617) If we add a slender cylinder between these two points we'll have a visual representation of our ray: Bounding Box Intersection The first step will be checking if the ray intersects the bounding box of our shape.
In local coordinates of the shape we have 6 faces given by their normals, with their 6 centers:

Bounds locBounds = cuboid.getBoundsInLocal();
List<Point3D> normals = Arrays.asList(new Point3D(-1, 0, 0), new Point3D(1, 0, 0),
    new Point3D(0, -1, 0), new Point3D(0, 1, 0),
    new Point3D(0, 0, -1), new Point3D(0, 0, 1));
List<Point3D> positions = Arrays.asList(new Point3D(locBounds.getMinX(), 0, 0),
    new Point3D(locBounds.getMaxX(), 0, 0), new Point3D(0, locBounds.getMinY(), 0),
    new Point3D(0, locBounds.getMaxY(), 0), new Point3D(0, 0, locBounds.getMinZ()),
    new Point3D(0, 0, locBounds.getMaxZ()));

Since we'll work in the local system, we need our origin point in these coordinates:

Point3D gloOriginInLoc = cuboid.sceneToLocal(gloOrigin); // 4,-7,-4 since the box is centered at 0,0,0

Now, for each of the six faces, we get the distance t to the plane following this link. Then we can check if the point belongs to the box or not:

AtomicInteger counter = new AtomicInteger();
IntStream.range(0, 6).forEach(i -> {
    double d = -normals.get(i).dotProduct(positions.get(i));
    double t = -(gloOriginInLoc.dotProduct(normals.get(i)) + d) /
            gloDirection.dotProduct(normals.get(i));
    Point3D locInter = gloOriginInLoc.add(gloDirection.multiply(t));
    if (locBounds.contains(locInter)) {
        counter.getAndIncrement();
    }
});

If counter.get() > 0 then we have some intersections between the ray and the shape, and we can proceed with the triangles. In this example, these will be the intersection points: (3.5, -4.5, -2) and (2.5, 0.5, 2). Triangles intersection There are several algorithms for the task of finding if the ray intersects any triangle of the mesh, so we don't need to reinvent the wheel. The one I've used is from Tomas Möller & Ben Trumbore. It will provide the distance t from the origin to the plane, and the coordinates u,v inside the triangle for a given intersection.
Once we have the origin in local coordinates of the shape, and we know the direction of the ray, the implementation of this algorithm is this:

private static final double EPS = 0.000001;

public List<Point3D> getIntersections(Point3D origin, Point3D direction,
        List<Point3D> points, List<Point3D> faces) {
    return faces.parallelStream().filter(f -> {
        // vertex indices
        int p0 = (int) f.getX();
        int p1 = (int) f.getY();
        int p2 = (int) f.getZ();
        // vertex 3D coordinates
        Point3D a = points.get(p0);
        Point3D b = points.get(p1);
        Point3D c = points.get(p2);
        Point3D edge1 = b.subtract(a);
        Point3D edge2 = c.subtract(a);
        Point3D pvec = direction.crossProduct(edge2);
        double det = edge1.dotProduct(pvec);
        if (det <= -EPS || det >= EPS) {
            double invDet = 1.0 / det;
            Point3D tvec = origin.subtract(a);
            double u = tvec.dotProduct(pvec) * invDet;
            if (u >= 0 && u <= 1) {
                Point3D qvec = tvec.crossProduct(edge1);
                double v = direction.dotProduct(qvec) * invDet;
                if (v >= 0 && u + v <= 1) {
                    double t = edge2.dotProduct(qvec) * invDet;
                    System.out.println("t: " + t + ", u: " + u + ", v: " + v);
                    return true;
                }
            }
        }
        return false;
    }).collect(Collectors.toList());
}

In this sample, we find three faces, given by these vertices: (85, 1245, 1274), (85, 1274, 1266) and (351, 1476, 1479). If we plot those faces we will see the intersection: Note that by performing all the operations in the local coordinate system of the shape we save the operations of transforming every triangle to the global system. This algorithm is really fast. I've tested up to 3M triangles in less than 40 ms. All the code for this test is available here.
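For readers outside JavaFX, the same Möller-Trumbore test is easy to express in plain Python (a sketch with hand-rolled vector helpers; the test ray and triangle below are made up):

```python
EPS = 1e-6

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_triangle(origin, direction, a, b, c):
    """Return distance t along the ray to triangle abc, or None if no hit."""
    edge1, edge2 = sub(b, a), sub(c, a)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if -EPS < det < EPS:           # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, a)
    u = dot(tvec, pvec) * inv_det
    if not 0.0 <= u <= 1.0:
        return None
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    return dot(edge2, qvec) * inv_det   # distance t from the origin

# Ray straight down the z-axis through a unit triangle in the z = 0 plane:
t = ray_triangle((0.2, 0.2, 5.0), (0.0, 0.0, -1.0),
                 (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(t)  # 5.0
```

The returned u and v are the barycentric coordinates of the hit inside the triangle, exactly as in the Java version above.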
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27381119', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2696378/']}
jdg_81750
stackexchange
llm_judgeable_groundtruth_similarity
39092149
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to use my script like this: python test.py run / python test.py stop and my code is like this:

parser = argparse.ArgumentParser()
command_group = parser.add_mutually_exclusive_group(required=True)
command_group.add_argument('run', help='run it', action='store_true')
command_group.add_argument('stop', help='stop it', action='store_true')

when I execute it, an exception is raised: ValueError: mutually exclusive arguments must be optional so I try to add required=False when I add each argument. Then I get another exception: TypeError: 'required' is an invalid argument for positionals I'm confused. Now provide the response and nothing else.
A better way to do this is to add a single positional argument that can have two choices. Since you want it to be optional, use nargs='?', which means zero or one times:

parser = argparse.ArgumentParser()
parser.add_argument('run', help='run or stop', nargs='?', choices=('run', 'stop'))

If run is given, the value will be 'run'. If stop is given, it will be 'stop'. If neither is given, it will be None. If you really want to use a mutually-exclusive group, I'm not sure if you can do it exactly how you want. You can, however, make them optional arguments by adding a hyphen:

import argparse

parser = argparse.ArgumentParser()
command_group = parser.add_mutually_exclusive_group()
command_group.add_argument('-run', help='run it', action='store_true')
command_group.add_argument('-stop', help='stop it', action='store_true')

Of course the problem with that is that the user also needs to provide the hyphen, but that's the sort of problem you can expect if you limit yourself like that.
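A quick check of how the single-positional approach behaves (a sketch; here the positional is named command rather than run, purely so the attribute name reads naturally):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('command', help='run or stop', nargs='?',
                    choices=('run', 'stop'))

print(parser.parse_args(['run']).command)   # run
print(parser.parse_args(['stop']).command)  # stop
print(parser.parse_args([]).command)        # None
```

Passing anything other than run or stop makes argparse print a usage error and exit, so the dispatch code only ever sees the two valid commands or None.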
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39092149', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4068848/']}
jdg_81751
stackexchange
llm_judgeable_groundtruth_similarity
11966286
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to show the tooltip on the right side of the cursor. I looked in the documentation/examples but I can't find a way to force the tooltips to stay on the right side of the cursor. Can anyone tell me how to do it? With tooltip positioner I only can set a default position. Now provide the response and nothing else.
Tooltip positioner is much more than just default position. The function arguments contain info about your point position & tooltip dimensions, using which it should be fairly simple to position it to the right. Highchart/stock allows you to define your alternate positioner as follows:

tooltip: {
    positioner: function (boxWidth, boxHeight, point) {
        ...
    }
}

Note that you have three arguments (boxWidth, boxHeight, point) at your disposal; these seem to be sufficient for most of the use cases to calculate a desired tooltip position. boxWidth and boxHeight are the width and height that your tooltip will require, hence you can use them for edge cases to adjust your tooltip and prevent it from spilling out of the chart or, even worse, getting clipped. The default tooltip positioner that comes with highstock is as follows (Source):

/**
 * Place the tooltip in a chart without spilling over
 * and not covering the point itself.
 */
getPosition: function (boxWidth, boxHeight, point) {
    // Set up the variables
    var chart = this.chart,
        plotLeft = chart.plotLeft,
        plotTop = chart.plotTop,
        plotWidth = chart.plotWidth,
        plotHeight = chart.plotHeight,
        distance = pick(this.options.distance, 12), // You can use a number directly here, as you may not be able to use pick, as it's an internal highchart function
        pointX = point.plotX,
        pointY = point.plotY,
        x = pointX + plotLeft + (chart.inverted ? distance : -boxWidth - distance),
        y = pointY - boxHeight + plotTop + 15, // 15 means the point is 15 pixels up from the bottom of the tooltip
        alignedRight;

    // It is too far to the left, adjust it
    if (x < 7) {
        x = plotLeft + pointX + distance;
    }

    // Test to see if the tooltip is too far to the right,
    // if it is, move it back to be inside and then up to not cover the point.
    if ((x + boxWidth) > (plotLeft + plotWidth)) {
        x -= (x + boxWidth) - (plotLeft + plotWidth);
        y = pointY - boxHeight + plotTop - distance;
        alignedRight = true;
    }

    // If it is now above the plot area, align it to the top of the plot area
    if (y < plotTop + 5) {
        y = plotTop + 5;

        // If the tooltip is still covering the point, move it below instead
        if (alignedRight && pointY >= y && pointY <= (y + boxHeight)) {
            y = pointY + plotTop + distance; // below
        }
    }

    // Now if the tooltip is below the chart, move it up. It's better to cover the
    // point than to disappear outside the chart. #834.
    if (y + boxHeight > plotTop + plotHeight) {
        y = mathMax(plotTop, plotTop + plotHeight - boxHeight - distance); // below
    }

    return { x: x, y: y };
}

With all the above information, I think you have sufficient tools to implement your requirement by simply modifying the function to make it float to the right instead of the default left. I will go ahead and give you the simplest implementation of positioning the tooltip to the right; you should be able to implement the edge cases based on the aforementioned default tooltip positioner's code:

tooltip: {
    positioner: function (boxWidth, boxHeight, point) {
        return { x: point.plotX + 20, y: point.plotY };
    }
}

Read more @ Customizing Highcharts - Tooltip positioning
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11966286', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1283480/']}
jdg_81752
stackexchange
llm_judgeable_groundtruth_similarity
166029
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: From what I've read, agile development often involves refactoring or reverse engineering code into diagrams. Of course there is much more to it than that, but if we consider the practices that rely on these two methods, are dynamically typed languages at a disadvantage? It seems statically typed languages would make refactoring and reverse engineering much easier. Is refactoring or (automated) reverse engineering hard, if not impossible, in dynamically typed languages? What do real-world projects tell us about the usage of dynamically typed languages with agile methodology? Now provide the response and nothing else.
Dynamic languages are theoretically at a disadvantage, all else being equal, because they specify less about how the code works (what the constraints are), and therefore less of the refactoring can be done automatically, and problems that arise cannot be detected automatically as well. But all else is not equal. The most popular dynamic languages allow for highly compact yet comprehensible code, which generally makes development in them faster, and makes the logic (which may change in refactoring) easier to spot visually. So though you might lose some of the relative advantage of working in a dynamic language, you might still come out ahead, especially if you were planning on doing your refactoring by hand anyway. On the other hand, there exist statically typed languages with essentially the same advantages as dynamic languages (i.e. compact and comprehensible, with types mostly inferred, but very much there): Haskell is perhaps the leading example, but OCaml/F#, Scala, and others are in this category also. Unfortunately, since they are less heavily used than the most popular statically typed languages, they don't have as extensive toolsets for them (e.g. for refactoring). So, as a bottom line, I think you'll do adequately with agile methodologies in most languages; I wouldn't say there's a clear winner right now, as practice has not yet caught up with theory.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/166029', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/47365/']}
jdg_81753
stackexchange
llm_judgeable_groundtruth_similarity
364739
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Wherein portability is defined generally as the percentage of platforms a language or technology can run on, C/C++ are often cited as being more portable than Java, because a Java application depends on a JVM being present. But, what prevents a Java application from shipping within a JVM wrapper? Or being transpiled to C/C++ with a supporting framework? (Essentially a JVM, but as a supporting library instead of a wrapper/container.) Is there a technical issue? A licensing issue? Or, simply that no one has decided to do it!? Taking a concrete example, consider the 2nd bullet of the first answer to "Why isn't Java more widely used for game development?" , which states that: Most consoles (e.g., 360, PS3) do not have a JVM, so you cannot reuse code from the PC version. It is much easier to compile C++ code to support various devices. If this claim is correct (with regards to "most" consoles), is the lack JVM for these platforms due to technical limitations? Legal? Political? etc. When it's stated that Java "can't" be run on game consoles (or iPhones), do we really mean can't!? Or, do we mean, "no one's bothered build the necessary plumbing?" Now provide the response and nothing else.
Java is portable in the sense that C or C++ can be portable: the same source code can be used for multiple operating systems or processor architectures. In fact this is a lot easier in Java, since the JVM abstracts over many platform-specific details (such as the size of a long int, or endianness), and the standard library abstracts over the functionality supplied by the OS (e.g. how to open a file). For C and C++, the next best thing is sticking to the POSIX standard, which at least gives you source portability across Unixes and Linux, but notably excludes Windows without WSL or Cygwin. And unlike with Java, a portable code base must be recompiled for each targeted platform. What makes C and C++ “more portable” is that cross-compilers such as GCC tend to have many target platforms (to support a new target platform, you “only” need to create a new compiler backend). Java 9 does introduce the possibility to turn a Java application into a self-contained executable that includes any necessary parts of the Java Runtime. But you're still limited by the portability of your JVM. And the package needs to be built on the target OS. The JVM in itself is a program that may or may not be portable. E.g. HotSpot is a C++ program that can be ported to new OSes or architectures. But what if we are targeting an embedded device that might not run a full OS or may have hardware constraints? This becomes more difficult for C and Java alike. But there are embedded JREs, and for both classes of languages the solution involves targeting a smaller standard library. Java 9 makes it possible to select modules at build time, rather than having to target a pre-defined profile such as Java ME. One use case where C wins out is working in the absence of an operating system, flashed directly to a microcontroller. You can't use the standard library (e.g. for opening files, or dynamic memory allocation), but the core language is still there.
But it is not reasonable to restrict the meaning of “portability” to “can work without the standard library”. Locked down systems such as iOS or consoles do not allow arbitrary software to be installed, even if they are physically capable of running that software – they are generally built from standard CPUs, using standard kernels. The absence of a JVM on these systems is merely a consequence of the gatekeeper (e.g. Apple or Nintendo) not being interested in allowing a separate platform that is not under their control. So this absence is legal/political.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/364739', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/94768/']}
jdg_81754
stackexchange
llm_judgeable_groundtruth_similarity
38555650
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way we can grid-search multiple estimators at a time in Sklearn or any other library. For example can we pass SVM and Random Forest in one grid search ?. Now provide the response and nothing else.
Yes. Example:

pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('clf', SGDClassifier()),
])

parameters = [
    {
        'vect__max_df': (0.5, 0.75, 1.0),
        'clf': (SGDClassifier(),),
        'clf__alpha': (0.00001, 0.000001),
        'clf__penalty': ('l2', 'elasticnet'),
        'clf__n_iter': (10, 50, 80),
    },
    {
        'vect__max_df': (0.5, 0.75, 1.0),
        'clf': (LinearSVC(),),
        'clf__C': (0.01, 0.5, 1.0)
    }
]

grid_search = GridSearchCV(pipeline, parameters)
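What makes this work is that GridSearchCV accepts a list of parameter grids and expands each dict independently, so the 'clf' step itself can vary between grids. A dependency-free sketch of that expansion (a simplification of what scikit-learn's ParameterGrid does; the estimator names below are stand-in strings, not real estimators):

```python
from itertools import product

def expand_param_grids(grids):
    """Expand a list of param dicts into individual candidate settings,
    mirroring how a list of grids is handled: each dict independently."""
    candidates = []
    for grid in grids:
        keys = sorted(grid)
        for values in product(*(grid[k] for k in keys)):
            candidates.append(dict(zip(keys, values)))
    return candidates

grids = [
    {"clf": ("SGDClassifier",), "clf__alpha": (1e-05, 1e-06)},
    {"clf": ("LinearSVC",), "clf__C": (0.01, 0.5, 1.0)},
]
candidates = expand_param_grids(grids)
print(len(candidates))  # 2 + 3 = 5 candidate settings in total
```

Each candidate dict is then applied to the pipeline with set_params before fitting, which is why 'clf' can swap the whole estimator.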
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38555650', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6128923/']}
jdg_81755
stackexchange
llm_judgeable_groundtruth_similarity
31688122
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Considering the document below, how can I rename 'techId1' to 'techId'? I've tried different ways and can't get it to work.

{
    "_id" : ObjectId("55840f49e0b"),
    "__v" : 0,
    "accessCard" : "123456789",
    "checkouts" : [
        {
            "user" : ObjectId("5571e7619f"),
            "_id" : ObjectId("55840f49e0bf"),
            "date" : ISODate("2015-06-19T12:45:52.339Z"),
            "techId1" : ObjectId("553d9cbcaf")
        },
        {
            "user" : ObjectId("5571e7619f15"),
            "_id" : ObjectId("55880e8ee0bf"),
            "date" : ISODate("2015-06-22T13:01:51.672Z"),
            "techId1" : ObjectId("55b7db39989")
        }
    ],
    "created" : ISODate("2015-06-19T12:47:05.422Z"),
    "date" : ISODate("2015-06-19T12:45:52.339Z"),
    "location" : ObjectId("55743c8ddbda"),
    "model" : "model1",
    "order" : ObjectId("55840f49e0bf"),
    "rid" : "987654321",
    "serialNumber" : "AHSJSHSKSK",
    "user" : ObjectId("5571e7619f1"),
    "techId" : ObjectId("55b7db399")
}

In the mongo console I tried the following, which gives me ok but nothing is actually updated:

collection.update({ "checkouts._id": ObjectId("55840f49e0b") }, { $rename: { "techId1": "techId" } });

I also tried this, which gives me an error: "cannot use the part (checkouts of checkouts.techId1) to traverse the element"

collection.update({ "checkouts._id": ObjectId("55856609e0b") }, { $rename: { "checkouts.techId1": "checkouts.techId" } })

In mongoose I have tried the following:

collection.findByIdAndUpdate(id, { $rename: { "checkouts.techId1": "checkouts.techId" } }, function (err, data) {});

and

collection.update({ 'checkouts._id': n1._id }, { $rename: { "checkouts.$.techId1": "checkouts.$.techId" } }, function (err, data) {});

Thanks in advance. Now provide the response and nothing else.
You were close at the end, but there are a few things missing. You cannot $rename when using the positional operator, instead you need to $set the new name and $unset the old one. But there is another restriction here as they will both belong to "checkouts" as a parent path in that you cannot do both at the same time. The other core line in your question is "traverse the element" and that is the one thing you cannot do in updating "all" of the array elements at once. Well, not safely and without possibly overwriting new data coming in anyway. What you need to do is "iterate" each document and similarly iterate each array member in order to "safely" update. You cannot really iterate just the document and "save" the whole array back with alterations. Certainly not in the case where anything else is actively using the data. I personally would run this sort of operation in the MongoDB shell if you can, as it is a "one off" ( hopefully ) thing and this saves the overhead of writing other API code. Also we're using the Bulk Operations API here to make this as efficient as possible. With mongoose it takes a bit more digging to implement, but still can be done. 
But here is the shell listing:

var bulk = db.collection.initializeOrderedBulkOp(),
    count = 0;

db.collection.find({ "checkouts.techId1": { "$exists": true } }).forEach(function(doc) {
    doc.checkouts.forEach(function(checkout) {
        if ( checkout.hasOwnProperty("techId1") ) {
            bulk.find({ "_id": doc._id, "checkouts._id": checkout._id }).updateOne({
                "$set": { "checkouts.$.techId": checkout.techId1 }
            });
            bulk.find({ "_id": doc._id, "checkouts._id": checkout._id }).updateOne({
                "$unset": { "checkouts.$.techId1": 1 }
            });
            count += 2;

            if ( count % 500 == 0 ) {
                bulk.execute();
                bulk = db.collection.initializeOrderedBulkOp();
            }
        }
    });
});

if ( count % 500 !== 0 )
    bulk.execute();

Since the $set and $unset operations are happening in pairs, we are keeping the total batch size to 1000 operations per execution just to keep memory usage on the client down. The loop simply looks for documents where the field to be renamed "exists" and then iterates each array element of each document and commits the two changes. As Bulk Operations, these are not sent to the server until the .execute() is called, where also a single response is returned for each call. This saves a lot of traffic. If you insist on coding with mongoose, be aware that a .collection accessor is required to get to the Bulk API methods from the core driver, like this:

var bulk = Model.collection.initializeOrderedBulkOp();

And the only thing that sends to the server is the .execute() method, so this is your only execution callback:

bulk.execute(function(err,response) {
    // code body and async iterator callback here
});

And use async flow control instead of .forEach(), such as async.each. Also, if you do that, then be aware that as a raw driver method not governed by mongoose, you do not get the same database connection awareness as you do with mongoose methods.
Unless you know for sure the database connection is already established, it is safer to put this code within an event callback for the server connection:

mongoose.connection.on("connect", function(err) {
    // body of code
});

But otherwise those are the only real (apart from call syntax) alterations you really need.
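For clarity, the per-document transformation that the paired $set/$unset updates perform can be mimicked on plain dict-shaped documents (field names from the question; this sketch covers only the data reshaping, not the database round-trip or batching):

```python
def rename_in_checkouts(doc, old="techId1", new="techId"):
    """Rename a field inside every element of doc['checkouts'],
    mirroring the paired $set/$unset updates per array element."""
    for checkout in doc.get("checkouts", []):
        if old in checkout:
            checkout[new] = checkout.pop(old)
    return doc

doc = {"_id": 1, "checkouts": [{"_id": 10, "techId1": "a"},
                               {"_id": 11, "techId1": "b"}]}
rename_in_checkouts(doc)
print(doc["checkouts"][0])  # {'_id': 10, 'techId': 'a'}
```

The bulk-operations loop in the answer does exactly this, except that each element's change is expressed as a matched positional update sent back to the server.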
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31688122', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1364311/']}
jdg_81756
stackexchange
llm_judgeable_groundtruth_similarity
58614667
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know there is a lot materials online for this question, however I have not found any that can explain this question quite clearly to a rookie like me... Appreciate it if some one can help me understand the key differences between these two services and use cases with real life examples. Thank you! Now provide the response and nothing else.
Amazon SQS is a queue. The basic process is: Messages are sent to the queue . They stay there for up to 14 days. Worker programs can request a message (or up to 10 messages) from the queue. When a message is retrieved from the queue: It stays in the queue but is marked as invisible When the worker has finished processing the message, it tells SQS to delete the message from the queue If the worker does not delete the message within the queue's invisibility timeout period , then the message reappears on the queue for another worker to process The worker can, if desired, periodically tell SQS to keep a message invisible because it is still being processed Thus, once a message is processed, it is deleted. In Amazon Kinesis , a message is sent to a stream . The stream is divided into shards (think of them as mini-streams). When a message is received, Kinesis stores the message in sequential order . Then, workers can request a message from the start of the stream, or from a specific spot in the stream. For example, if it has already processed 5 messages, it can ask for the 6th message. The messages are retained in the stream for a period of time (eg 24 hours). I like to think of it like a film strip — each frame in a film is kept in order. You can play a film from the start, or you can fast-forward to the middle and start playing from there. In addition, you can rewind to an earlier part and watch it. The same is true for a Kinesis stream, and multiple consumers can read from various parts of the stream simultaneously. So, which to choose? If a message is used once and then discarded, a queue is probably the better choice. If retaining message order is important and/or messages will be used more than once, then a stream is probably better.
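That contrast can be modeled in a few lines (a toy model for intuition, not the AWS SDK): the queue consumer deletes what it has processed, while stream consumers keep independent offsets into the same retained sequence:

```python
from collections import deque

class ToyQueue:
    """SQS-like: a message is handed out, then deleted after processing."""
    def __init__(self):
        self._messages = deque()
    def send(self, msg):
        self._messages.append(msg)
    def receive(self):
        # Peek at the head; real SQS also hides it via a visibility timeout.
        return self._messages[0] if self._messages else None
    def delete(self, msg):
        self._messages.remove(msg)

class ToyStream:
    """Kinesis-like: records are retained; each consumer keeps its own offset."""
    def __init__(self):
        self._records = []
    def put(self, rec):
        self._records.append(rec)
    def read_from(self, offset):
        return self._records[offset:]

q = ToyQueue()
q.send("job-1")
msg = q.receive()
q.delete(msg)
print(q.receive())     # None: the message is gone once processed

s = ToyStream()
for r in ("r0", "r1", "r2"):
    s.put(r)
print(s.read_from(0))  # ['r0', 'r1', 'r2'] - replay from the start
print(s.read_from(2))  # ['r2'] - a second consumer "fast-forwards"
```

The toy stream makes the film-strip analogy concrete: nothing is consumed destructively, so multiple readers can sit at different frames at once.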
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58614667', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7122721/']}
jdg_81757
stackexchange
llm_judgeable_groundtruth_similarity
21333436
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the best way to optimize code in cases with multiple if/else branches? I have heard that heavy use of multiple if/else statements decreases performance. Is that true? Now provide the response and nothing else.
The short explanation is that Android Studio has added some functionality to try to automatically manage IntelliJ SDKs; if you give it a JDK path and Android SDK path in Project Structure > Android SDK, it will automatically create a Java SDK and Android SDKs for each API level you have installed in your Android SDK. The problem is, if it creates an Android SDK for an API level that doesn't have the sources downloaded, and you go into SDK Manager later and download the source, it doesn't pick that up and update its SDK. I've filed https://code.google.com/p/android/issues/detail?id=65230 for this. In that bug I list two workarounds, which I'll explain here:

You can actually get at the SDKs panel you're looking for. We hide it if you have a Gradle-based project open since we want Gradle-based project users to not have to worry about it, but since we also support IntelliJ-style projects that use its internal builder, it's still possible to access it so those users will be able to make their projects work. You can get at this panel if you access Project Structure without a project open. To do so:

1. Close all open projects
2. From the Welcome to Android Studio screen, go to Configure > Project Defaults > Project Structure > SDKs
3. Choose the appropriate Android API SDK that you installed source for, and go to the Sourcepath panel
4. Press the + button, and choose the source directory.

If you're having trouble following that, let me know and I can post screenshots. Another approach would be to blow away all of Android Studio's auto-created SDKs and let it rebuild them. To do that:

1. Exit Android Studio
2. Delete the jdk.table.xml file, which on MacOS will be in your home directory under Library/Preferences/AndroidStudioPreview
3. Relaunch Android Studio
4. Go into Project Structure (either from the Welcome screen or from a project), and choose the Android SDK tab
5. Make sure the Android SDK location and JDK location are correct.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21333436', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3226425/']}
jdg_81758
stackexchange
llm_judgeable_groundtruth_similarity
5955498
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two node processes that speak to each other. I will call them [Node Server] and [Node Sender]. [Node Sender] continually processes information and writes a message over a TCP connection to [Node Server]. [Node Server] then writes back a status message. Example of [Node Sender]:

var message = "Test Message";
[Node Sender].connection.write(message);

Example of [Node Server]:

[Node Server].socket.on("data", function(p_data) {
    this.write("OK");
    // Do some work with p_data
});

This works without issue; p_data always contains "Test Message" when sent at anything above 5 milliseconds. However, if I speed [Node Sender] up to write every millisecond, p_data occasionally ends up with something like "Test MessageTest MessageTes". I understand that the buffer in [Node Sender] is probably filling faster than the write command is sending it. Is there a way to force a one-to-one ratio in sending messages while still remaining asynchronous? I can certainly just add a terminator to my message and fill a buffer in [Node Server], but I wanted to make sure there wasn't something obvious I was missing. Now provide the response and nothing else.
No, you're not missing anything and yes, you do need to add some form of termination to your messages. There are two basic problems here:

1. The TCP protocol is stream-oriented, not message-oriented; it has no intrinsic knowledge of what might constitute a "message".
2. The data event fired by the node.js net library indicates that some data has arrived, but without any idea of what a message might contain, it cannot indicate that it has received any specific data.

So by sending your messages faster than Node can process them, the socket recv buffer fills with multiple "messages". A typical solution to this problem is to add line-termination, as can be found in https://github.com/baudehlo/Haraka/blob/master/connection.js at lines 32-34:

self.client.on('data', function (data) {
    self.process_data(data);
});

and lines 110-134:

Connection.prototype.process_data = function (data) {
    if (this.disconnected) {
        logger.logwarn("data after disconnect from " + this.remote_ip);
        return;
    }

    this.current_data += data;
    this._process_data();
};

Connection.prototype._process_data = function() {
    var results;
    while (results = line_regexp.exec(this.current_data)) {
        var this_line = results[1];
        if (this.state === 'pause') {
            this.early_talker = 1;
            var self = this;
            // If you talk early, we're going to give you a delay
            setTimeout(function() { self._process_data() }, this.early_talker_delay);
            break;
        }
        this.current_data = this.current_data.slice(this_line.length);
        this.process_line(this_line);
    }
};
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5955498', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/747454/']}
jdg_81759
stackexchange
llm_judgeable_groundtruth_similarity
95295
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the 2-form $$\sigma=\frac{x_1 dx_2 \wedge dx_3 + x_2dx_3\wedge dx_1+ x_3 dx_1 \wedge dx_2}{(x_1^2+x_2^2+x_3^2)^{3/2}}.$$I need to show if it is exact or not. Suppose it is exact, then there exists a 1-form $$\alpha=f_1dx_1+f_2dx_2+f_3dx_3,$$such that $$d\alpha=\displaystyle\sum_i df_i\wedge dx_i=\sigma.$$Now I use $$df_i=\frac{\partial f_i}{\partial x_1}dx_1+\frac{\partial f_i}{\partial x_2}dx_2+\frac{\partial f_i}{\partial x_3}dx_3,$$then $d\alpha$ becomes$$\left(\frac{\partial f_2}{\partial x_1}-\frac{\partial f_1}{\partial x_2}\right)dx_1\wedge dx_2+\left(\frac{\partial f_3}{\partial x_2}-\frac{\partial f_2}{\partial x_3}\right)dx_2\wedge dx_3+\left(\frac{\partial f_1}{\partial x_3}-\frac{\partial f_3}{\partial x_1}\right)dx_3\wedge dx_1.$$I do not know how to proceed, is this even the right method? Now provide the response and nothing else.
Generally it's easier to prove that a form is NOT exact. For instance, an exact form is necessarily closed. So, if your form were exact it would be closed. And verifying this latter property is just a matter of differentiation, which is easier than looking for an antiderivative, as you seem to be trying. Of course, if $d\sigma = 0$, this says nothing about being exact or not. Another idea: if your form were exact, $\sigma = d\alpha$ for some $\alpha$, then, by Stokes' theorem, its integral over a closed surface $S$ (that is, one with no boundary, or empty boundary, $\partial S = \emptyset$) would be zero: $$\int_S d\alpha = \int_{\partial S} \alpha = 0 \ .$$ So you could try to find a closed surface $S$ such that $\int_S \sigma \neq 0$. Where to look for such a surface? Well, there is a "meta criterion" for this: since this is, presumably, an exercise in a beginner's differential geometry book, it can NOT be too far away from your knowledge. :-) So, I would try with the first closed surface that came to my mind without hesitation. Or also, looking at that denominator, you could think of a surface where it becomes really, really, and I mean really, simple. (For instance, a sphere of radius 1.)
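Carrying out the hinted computation explicitly (a standard calculation, taking $S$ to be the unit sphere with outward orientation):

```latex
% On the unit sphere S^2 we have x_1^2 + x_2^2 + x_3^2 = 1, so the
% denominator of \sigma equals 1 and \sigma restricts to
\sigma\big|_{S^2}
  = x_1\, dx_2 \wedge dx_3 + x_2\, dx_3 \wedge dx_1 + x_3\, dx_1 \wedge dx_2 ,
% which is the standard area form of S^2 (the contraction of the Euclidean
% volume form with the outward unit normal, here the position vector). Hence
\int_{S^2} \sigma \;=\; \operatorname{Area}(S^2) \;=\; 4\pi \;\neq\; 0 ,
% so by the Stokes argument above \sigma cannot be exact
% (one can check separately that it is closed).
```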
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/95295', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/13912/']}
jdg_81760
stackexchange
llm_judgeable_groundtruth_similarity
12136970
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an NSString that contains multiple consecutive spaces. I want to collapse those runs of spaces into a single space, e.g. turn @"how.....are.......you" into @"how are you" (the dots are simply spaces). I have tried:

NSString *trimmedString = [user_ids stringByTrimmingCharactersInSet:
                              [NSCharacterSet whitespaceCharacterSet]];

It does not seem to work. Any idea? Now provide the response and nothing else.
You could use a regular expression to accomplish this:

NSError *error = nil;
NSRegularExpression *regex = [NSRegularExpression regularExpressionWithPattern:@" +"
                                                                       options:NSRegularExpressionCaseInsensitive
                                                                         error:&error];

The pattern matches one or more consecutive spaces. Replace each match with a single space in your string:

NSString *trimmedString = [regex stringByReplacingMatchesInString:user_ids
                                                          options:0
                                                            range:NSMakeRange(0, [user_ids length])
                                                     withTemplate:@" "];
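The same pattern carries over to other regex engines; a quick Python cross-check of replacing the pattern " +" with a single space (the Cocoa calls above are the actual answer, this only verifies the pattern's behavior):

```python
import re

def collapse_spaces(text):
    # One-or-more spaces collapse to a single space, like the
    # NSRegularExpression pattern @" +" with template @" ".
    return re.sub(r" +", " ", text)

print(collapse_spaces("how     are       you"))  # how are you
```

Note that the pattern only touches runs of spaces; leading and trailing single spaces survive, which matches the Cocoa behavior too.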
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12136970', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1587011/']}
jdg_81761
stackexchange
llm_judgeable_groundtruth_similarity
653714
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I do a SELECT * INTO [temp table] FROM [stored procedure]? Not FROM [Table] and without defining [temp table]? Select all data from BusinessLine into tmpBusLine works fine:

select *
into tmpBusLine
from BusinessLine

I am trying the same, but using a stored procedure that returns data, and it is not quite the same:

select *
into tmpBusLine
from
exec getBusinessLineHistory '16 Mar 2009'

Output message:

Msg 156, Level 15, State 1, Line 2
Incorrect syntax near the keyword 'exec'.

I have read several examples of creating a temporary table with the same structure as the output of the stored procedure, which works fine, but it would be nice to not have to supply any columns. Now provide the response and nothing else.
You can use OPENROWSET for this. Have a look. I've also included the sp_configure code to enable Ad Hoc Distributed Queries, in case it isn't already enabled.

CREATE PROC getBusinessLineHistory
AS
BEGIN
    SELECT * FROM sys.databases
END
GO

sp_configure 'Show Advanced Options', 1
GO
RECONFIGURE
GO
sp_configure 'Ad Hoc Distributed Queries', 1
GO
RECONFIGURE
GO

SELECT * INTO #MyTempTable FROM OPENROWSET('SQLNCLI',
    'Server=(local)\SQL2008;Trusted_Connection=yes;',
    'EXEC getBusinessLineHistory')

SELECT * FROM #MyTempTable
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/653714', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/42829/']}
jdg_81762
stackexchange
llm_judgeable_groundtruth_similarity
8002145
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a problem with an embedded document update. My defined Schemas:

var Talk = new Schema({
    title: { type: String, required: true },
    content: { type: String, required: true },
    date: { type: Date, required: true },
    comments: { type: [Comments], required: false },
    vote: { type: [VoteOptions], required: false },
});

var VoteOptions = new Schema({
    option: { type: String, required: true },
    count: { type: Number, required: false }
});

Now I would like to update vote.count++, with a given Talk id and VoteOption id. I have the following function to do the job:

function makeVote(req, res) {
    Talk.findOne(req.params.id, function(err, talk) {
        for (var i = 0; i < talk.vote.length; i++) {
            if (talk.vote[i]._id == req.body.vote) {
                talk.vote[i].count++;
            }
        }
        talk.save(function(err) {
            if (err) {
                req.flash('error', 'Error: ' + err);
                res.send('false');
            } else {
                res.send('true');
            }
        });
    });
}

Everything executes and I get back the res.send('true'), but the value of count does not change. When I did some console.log I saw that it changed the value, but talk.save just doesn't save it in the db. Also I'm quite unhappy about the loop just to find the _id of the embedded doc. In the mongoose documentation I read about talk.vote.id(my_id), but that gives me an error of not having an id function. Now provide the response and nothing else.
When updating a Mixed type (which seems to be anything else than a basic type, so that also includes embedded documents), one has to call .markModified on the document. In this case, it would be:

talk.markModified("vote"); // mention that `talk.vote` has been modified
talk.save(function(err) {
    // ...
});

Hope this helps someone in the future since I couldn't find the answer very quickly. Reference: ... Mongoose loses the ability to auto detect/save those changes. To "tell" Mongoose that the value of a Mixed type has changed, call the .markModified(path) method of the document passing the path to the Mixed type you just changed.
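The underlying idea, that in-place mutations inside an untyped/Mixed value are invisible to the ODM until you flag the path yourself, can be sketched with a tiny dirty-tracking model in plain Python (illustrative only, not mongoose's implementation):

```python
import copy

class TrackedDoc:
    """Saves only paths explicitly marked modified, like Mixed paths in mongoose."""
    def __init__(self, data):
        self.data = data
        self._modified = set()
        self.saved = copy.deepcopy(data)   # last "persisted" snapshot

    def mark_modified(self, path):
        self._modified.add(path)

    def save(self):
        for path in self._modified:        # persist only the flagged paths
            self.saved[path] = copy.deepcopy(self.data[path])
        self._modified.clear()

doc = TrackedDoc({"vote": [{"option": "yes", "count": 0}]})
doc.data["vote"][0]["count"] += 1          # in-place mutation, not auto-detected
doc.save()
print(doc.saved["vote"][0]["count"])       # 0 - the change was silently dropped

doc.mark_modified("vote")                  # analogous to talk.markModified("vote")
doc.save()
print(doc.saved["vote"][0]["count"])       # 1 - now persisted
```

This mirrors the symptom in the question: the in-memory value changes, the save "succeeds", yet nothing reaches the store until the path is marked.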
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8002145', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/871513/']}
jdg_81763
stackexchange
llm_judgeable_groundtruth_similarity
9238326
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am sorry to ask this question when it has already been asked but I couldn't get a clarity from them. So I am asking the following related questions to get the difference between system call (mode-switch) and context switch Why is it said that the system call doesn't require context switch when the context of the process making the call has to be saved and then reloaded. Is it just because according to the definition of context switch a switch has to be made to another process. What does it mean that when a system call is made the kernel executes in "user context". According to the wikipedia article : http://en.wikipedia.org/wiki/Context_switch a context switch is not necessary for system call but it depends on the operating system and a context switch might occur during a system call. I am wondering what would happen in the case when the context switch takes place at the time of system call. Any examples? Now provide the response and nothing else.
You need to understand that a thread/process context has multiple parts, one, directly associated with execution and is held in the CPU and certain system tables in memory that the CPU uses (e.g. page tables), and the other, which is needed for the OS, for bookkeeping (think of the various IDs, handles, special OS-specific permissions, network connections and such). A full context switch would involve swapping both of these, the old current thread/process goes away for a while and the new current thread/process comes in for a while. That's the essence of thread/process scheduling. Now, system calls are very different w.r.t. each other. Consider something simple, for example, the system call for requesting the current date and time. The CPU switches from the user to kernel mode, preserving the user-mode register values, executes some kernel code to get the necessary data, stores it either in the memory or registers that the caller can access, restores the user-mode register values and returns. There's not much of context switch in here, only what's needed for the transition between the modes, user and kernel. Consider now a system call that involves blocking of the caller until some event or availability of data. Manipulating mutexes and reading files would be examples of such system calls. In this case the kernel is forced to save the full context of the caller, mark it as blocked so the scheduler can't run it until that event or data arrives, and load the context of another ready thread/process, so it can run. That's how system calls are related to context switches. Kernel executing in the context of a user or a process means that whenever the kernel does work on behalf of a certain process or user it has to take into consideration that user's/process's context, e.g. the current process/thread/user ID, the current directory, locale, access permissions for various resources (e.g. 
files), all that stuff, that can be different between different processes/threads/users. If processes have individual address spaces, the address spaces is also part of the process context. So, when the kernel needs to access memory of a process (to read/write file data or network packets), it has to have access to the process' address space, IOW, it has to be in its context (it doesn't mean, however, that the kernel has to load the full context just to access memory in a specific address space). Is that helpful?
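The blocking case described above can be made concrete with a small sketch (Python used purely for illustration, not OS code; the kernel-side saving of the caller's full context is implicit in the blocking read, and the sleep duration is an arbitrary choice):

```python
import os
import threading
import time

# A read() on an empty pipe is a system call that cannot complete
# immediately: the kernel marks the calling thread as blocked (saving its
# full context) and schedules other work until the awaited data arrives.
r, w = os.pipe()
received = []

def reader():
    # Blocks inside the kernel until the writer supplies data.
    received.append(os.read(r, 5))

t = threading.Thread(target=reader)
t.start()
time.sleep(0.1)          # reader is now parked in the blocking syscall
os.write(w, b"hello")    # the event the kernel was waiting for
t.join()
print(received[0])       # b'hello'
```

While the reader thread is parked, the main thread keeps running, which is exactly the context switch the answer describes.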
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9238326', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/665720/']}
jdg_81764
stackexchange
llm_judgeable_groundtruth_similarity
6442676
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have code like this: var client = new mysql.Client(options);console.log('Icanhasclient');client.connect(function (err) { console.log('jannn'); active_db = client; console.log(err); console.log('hest'); if (callback) { if (err) { callback(err, null); } callback(null, active_db); }}); My problem is that Node terminates immediately when I run it. It prints 'Icanhasclient', but none of the console.log's inside the callback are called. (mysql in this example is node-mysql . Is there something that can be done to make node.js wait for the callback to complete before exiting? Now provide the response and nothing else.
Callback is Not Queued Node runs until all event queues are empty. A callback is added to an event queue when a call such as emitter1.on('this_event', callback) has executed. This call is part of the code written by the module developer. If a module is a quick port from a synchronous/blocking version, this may not happen until some part of the operation has completed, and all the queues might empty before that occurs, allowing node to exit silently. This is a sneaky bug, that is, one that the module developer might not run into during development, as it will occur less often in busy systems with many queues, where it will be rare for all of them to be empty at the critical time. A possible fix/bug detector for the user is to insert a special timer event before the suspect function call.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6442676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/66851/']}
jdg_81765
stackexchange
llm_judgeable_groundtruth_similarity
42251745
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to setup CloudFront to serve static files hosted in my S3 bucket. I have setup distribution but I get AccessDenied when trying to browse to the CSS ( /CSS/stlyle.css ) file inside S3 bucket: <Error> <Code>AccessDenied</Code> <Message>Access Denied</Message> <RequestId>E193C9CDF4319589</RequestId> <HostId>xbU85maj87/jukYihXnADjXoa4j2AMLFx7t08vtWZ9SRVmU1Ijq6ry2RDAh4G1IGPIeZG9IbFZg= </HostId></Error> I have set my CloudFront distribution to my S3 bucket and created new Origin Access Identity policy which was added automatically to the S3 bucket: { "Version": "2008-10-17", "Id": "PolicyForCloudFrontPrivateContent", "Statement": [ { "Sid": "1", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E21XQ8NAGWMBQQ" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::myhost.com.cdn/*" } ]} Did I miss something? I want all my files in this S3 bucket be served via CloudFront... *** UPDATE *** This cloud front guide says: By default, your Amazon S3 bucket and all of the objects in it are private—only the AWS account that created the bucket has permission to read or write the objects in it. If you want to allow anyone to access the objects in your Amazon S3 bucket using CloudFront URLs, you must grant public read permissions to the objects. (This is one of the most common mistakes when working with CloudFront and Amazon S3. You must explicitly grant privileges to each object in an Amazon S3 bucket.) So based on this I have added new permissions to all objects inside S3 bucket to Everyone Read/Download . Now I can access files. But now when I access the file like https://d3u61axijg36on.cloudfront.net/css/style.css this is being redirected to S3 URI and HTTP . How do I disable this? Now provide the response and nothing else.
To assist with your question, I recreated the situation via: Created an Amazon S3 bucket with no Bucket Policy Uploaded public.jpg and make it public via "Make Public" Uploaded private.jpg and kept it private Created an Amazon CloudFront web distribution : Origin Domain Name: Selected my S3 bucket from the list Restrict Bucket Access: Yes Origin Access Identity: Create a New Identity Grant Read Permissions on Bucket: Yes, Update Bucket Policy I checked the bucket, and CloudFront had added a Bucket Policy similar to yours. The distribution was marked as In Progress for a while. Once it said Enabled , I accessed the files via the xxx.cloudfront.net URL: xxx.cloudfront.net/public.jpg redirected me to the S3 URL http://bucketname.s3.amazonaws.com/public.jpg . Yes, I could see the file, but it should not use a redirect. xxx.cloudfront.net/private.jpg redirected me also, but I then received Access Denied because it is a private file in S3. I then did some research and found that this is quite a common occurrence. Some people use a workaround by pointing their CloudFront distribution to the static hosted website URL , but this has the disadvantage that it will not work with the Origin Access Identity and I also suspect it won't receive the 'free S3 traffic to the edge' discount. So, I waited overnight, tested it this morning and everything is working fine . Bottom line: Even if it says ENABLED , things might take several hours (eg overnight) to get themselves right. It will then work as documented.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/42251745', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/255710/']}
jdg_81766
stackexchange
llm_judgeable_groundtruth_similarity
9000549
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create a table with a variable number of rows and columns. I'm doing this with an ItemsControl which has a Grid as its ItemsPanel . And I know I can set Grid.Row and Grid.Column of each item through its ItemContainerStyle . But I don't know how to change the number of rows and columns and their sizes when I can't access the Grid by its name. Question: How can you modify RowDefinitions or ColumnDefinitions of a Grid in run-time without any code-behind using Binding? This is the XAML code: <ItemsControl Name="myItemsControl" ItemsSource="{Binding Cells}"> <ItemsControl.ItemsPanel> <ItemsPanelTemplate> <Grid Name="myGrid"> <Grid.RowDefinitions> <!-- unknown number of rows are added here in run-time --> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <!-- known number of columns are added here in run-time --> </Grid.ColumnDefinitions> </Grid> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> <ItemsControl.ItemContainerStyle> <Style.../> </ItemsControl.ItemContainerStyle></ItemsControl> I tried to add some RowDefinition in code behind but I couldn't find a way to gain access to myGrid by its name (or any other way) since it is inside an ItemsPanelTemplate . I'm wondering if is there any way to programmatically add or modify RowDefinitions in run-time ? Now provide the response and nothing else.
You can use attached properties for a Grid that modify the RowDefinitions and ColumnDefinitions when those properties are set or changed. It will allow you to write your Grid like this: <Grid local:GridHelpers.RowCount="{Binding MaxGridRow}" local:GridHelpers.ColumnCount="3" /> Then just expose a property from your ViewModel which returns the largest row number in the Cells collection. You can find a detailed implementation of those properties on my blog .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9000549', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/366064/']}
jdg_81767
stackexchange
llm_judgeable_groundtruth_similarity
34308720
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an unresolved symbol error when trying to compile my program which complains that it cannot find __dso_handle . Which library is this function usually defined in? Does the following result from nm on libstdc++.so.6 mean it contains that? I tried to link against it but the error still occurs. nm libstdc++.so.6 | grep dso00000000002fc480 d __dso_handle Now provide the response and nothing else.
__dso_handle is a "guard" that is used to identify dynamic shared objects during global destruction . Realistically, you should stop reading here. If you're trying to defeat object identification by messing with __dso_handle , something is likely very wrong. However, since you asked where it is defined: the answer is complex. To surface the location of its definition (for GCC), use iostream in a C++ file, and, after that, do extern int __dso_handle; . That should surface the location of the declaration due to a type conflict (see this forum thread for a source). Sometimes, it is defined manually . Sometimes, it is defined/supplied by the "runtime" installed by the compiler (in practice, the CRT is usually just a bunch of binary header/entry-point-management code, and some exit guards/handlers). In GCC (not sure if other compilers support this; if so, it'll be in their sources): Main definition Testing __dso_handle replacement/tracker example 1 Testing __dso_handle replacement/tracker example 2 Often, it is defined in the stdlib: Android BSD Further reading: Subtle bugs caused by __dso_handle being unreachable in some compilers
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34308720', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5581378/']}
jdg_81768
stackexchange
llm_judgeable_groundtruth_similarity
17908569
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an XCode workspace managed by Cocoapod with a few dependencies to external libraries. One of them, MTDates , extends NSDate and NSDateComponents with either prefixed methods, or non-prefixed if a certain preprocessor macro is defined (which is what I want). There are a few places where I can put the preprocessor macro definition in order to have the compiled library provide the non-prefixed methods, but all seem to be reset as soon as I ask Cocoapod to update the project, which leads me to think that these configs are driven by the pod spec. These include : The pod's target build settings The pod's private .xcconfig file in Cocoapod's Targets Support Files Changing the pod's spec would require to manage my own version of the library, whereby losing the ability for cocoapods to update it when a new version comes around. So my question is: is there a way to specify a preprocessor macro for a cocoapod dependency, without forking the pod and changing the pod's spec itself? Edit : There is an open issue about this, that seem to be just around the corner. Now provide the response and nothing else.
This functionality is now available. Here is an example of what you could put at the bottom of your Podfile to add a macro based on a specific configuration. post_install do |installer_representation| installer_representation.project.targets.each do |target| if target.name == "Pods-TweaksBuildConfigurationsDemo-Tweaks" target.build_configurations.each do |config| if config.name == 'QA' config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= ['$(inherited)', 'FB_TWEAK_ENABLED=1'] end end end endend
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17908569', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1045904/']}
jdg_81769
stackexchange
llm_judgeable_groundtruth_similarity
3628445
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using ASP.Net Forms Authentication. My Web.config looks like this. <authentication mode="Forms"> <forms loginUrl="login.aspx"/> </authentication> <authorization> <deny users="?" /> </authorization> So currently every aspx page requires authentication. I want to allow access to even unauthenticated users to a specific page named special.aspx.How can I do this? Now provide the response and nothing else.
Take a look at the example on MS Support <configuration> <system.web> <authentication mode="Forms" > <forms loginUrl="login.aspx" name=".ASPNETAUTH" protection="None" path="/" timeout="20" > </forms> </authentication><!-- This section denies access to all files in this application except for those that you have not explicitly specified by using another setting. --> <authorization> <deny users="?" /> </authorization> </system.web><!-- This section gives the unauthenticated user access to the ThePageThatUnauthenticatedUsersCanVisit.aspx page only. It is located in the same folder as this configuration file. --> <location path="ThePageThatUnauthenticatedUsersCanVisit.aspx"> <system.web> <authorization> <allow users ="*" /> </authorization> </system.web> </location><!-- This section gives the unauthenticated user access to all of the files that are stored in the TheDirectoryThatUnauthenticatedUsersCanVisit folder. --> <location path="TheDirectoryThatUnauthenticatedUsersCanVisit"> <system.web> <authorization> <allow users ="*" /> </authorization> </system.web> </location></configuration>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3628445', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/97736/']}
jdg_81770
stackexchange
llm_judgeable_groundtruth_similarity
9261524
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have a document with a multivalued field in Solr are the multiple values scored independently or just concatenated and scored as one big field? I'm hoping they're scored independently. Here's an example of what I mean: I have a document with a field for a person's name, where there may be multiple names for the same person. The names are all different (very different in some cases) but they all are the same person/document. Person 1:David Bowie, David Robert Jones, Ziggy Stardust, Thin White Duke Person 2:David Letterman Person 3:David Hasselhoff, David Michael Hasselhoff If I were to search for "David" I'd like for all of these to have about the same chance of a match. If each name is scored independently that would seem to be the case. If they are just stored and searched as a single field, David Bowie would be punished for having many more tokens than the others. How does Solr handle this scenario? Now provide the response and nothing else.
You can just run your query q=field_name:David with debugQuery=on and see what happens. These are the results (included the score through fl=*,score ) sorted by score desc : <doc> <float name="score">0.4451987</float> <str name="id">2</str> <arr name="text_ws"> <str>David Letterman</str> </arr></doc><doc> <float name="score">0.44072422</float> <str name="id">3</str> <arr name="text_ws"> <str>David Hasselhoff</str> <str>David Michael Hasselhoff</str> </arr></doc><doc> <float name="score">0.314803</float> <str name="id">1</str> <arr name="text_ws"> <str>David Bowie</str> <str>David Robert Jones</str> <str>Ziggy Stardust</str> <str>Thin White Duke</str> </arr></doc> And this is the explanation: <lst name="explain"> <str name="2"> 0.4451987 = (MATCH) fieldWeight(text_ws:David in 1), product of: 1.0 = tf(termFreq(text_ws:David)=1) 0.71231794 = idf(docFreq=3, maxDocs=3) 0.625 = fieldNorm(field=text_ws, doc=1) </str> <str name="3"> 0.44072422 = (MATCH) fieldWeight(text_ws:David in 2), product of: 1.4142135 = tf(termFreq(text_ws:David)=2) 0.71231794 = idf(docFreq=3, maxDocs=3) 0.4375 = fieldNorm(field=text_ws, doc=2) </str> <str name="1"> 0.314803 = (MATCH) fieldWeight(text_ws:David in 0), product of: 1.4142135 = tf(termFreq(text_ws:David)=2) 0.71231794 = idf(docFreq=3, maxDocs=3) 0.3125 = fieldNorm(field=text_ws, doc=0) </str></lst> The scoring factors here are: termFreq : how often a term appears in the document idf : how often the term appears across the index fieldNorm : importance of the term, depending on index-time boosting and field length In your example the fieldNorm makes the difference. You have one document with lower termFreq (1 instead of 1.4142135) since the term appears just one time, but that match is more important because of the field length. The fact that your field is multiValued doesn't change the scoring. I guess it would be the same with a single value field with the same content. 
Solr works in terms of field length and terms, so, yes, David Bowie is punished for having many more tokens than the others. :) UPDATE I actually think David Bowie deserves his opportunity. As explained above, the fieldNorm makes the difference. Add the attribute omitNorms=true to your text_ws field in the schema.xml and reindex. The same query will give you the following result: <doc> <float name="score">1.0073696</float> <str name="id">1</str> <arr name="text"> <str>David Bowie</str> <str>David Robert Jones</str> <str>Ziggy Stardust</str> <str>Thin White Duke</str> </arr></doc><doc> <float name="score">1.0073696</float> <str name="id">3</str> <arr name="text"> <str>David Hasselhoff</str> <str>David Michael Hasselhoff</str> </arr></doc><doc> <float name="score">0.71231794</float> <str name="id">2</str> <arr name="text"> <str>David Letterman</str> </arr></doc> As you can see, now the termFreq wins and the fieldNorm is not taken into account at all. That's why the two documents with two David occurrences are on top and with the same score, despite their different lengths, and the shorter document with just one match is the last one with the lowest score. Here's the explanation with debugQuery=on : <lst name="explain"> <str name="1"> 1.0073696 = (MATCH) fieldWeight(text:David in 0), product of: 1.4142135 = tf(termFreq(text:David)=2) 0.71231794 = idf(docFreq=3, maxDocs=3) 1.0 = fieldNorm(field=text, doc=0) </str> <str name="3"> 1.0073696 = (MATCH) fieldWeight(text:David in 2), product of: 1.4142135 = tf(termFreq(text:David)=2) 0.71231794 = idf(docFreq=3, maxDocs=3) 1.0 = fieldNorm(field=text, doc=2) </str> <str name="2"> 0.71231794 = (MATCH) fieldWeight(text:David in 1), product of: 1.0 = tf(termFreq(text:David)=1) 0.71231794 = idf(docFreq=3, maxDocs=3) 1.0 = fieldNorm(field=text, doc=1) </str></lst>
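The TF-IDF arithmetic quoted in the debug output can be reproduced by hand. Here is a sketch in plain Python (not Lucene code; the fieldNorm values 0.625, 0.4375 and 0.3125 are plugged in directly from the debug output, since Lucene squeezes 1/sqrt(terms) into a lossy single byte):

```python
import math

# Classic Lucene scoring factors for a single-term query:
#   score = tf(termFreq) * idf(docFreq, maxDocs) * fieldNorm
def tf(term_freq):
    return math.sqrt(term_freq)

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1.0))

idf_david = idf(doc_freq=3, max_docs=3)          # 0.71231794, as in the output
score = lambda term_freq, norm: tf(term_freq) * idf_david * norm

print(round(score(1, 0.625), 7))   # doc 2, "David Letterman" -> 0.4451987
print(round(score(2, 0.4375), 7))  # doc 3, two matches       -> 0.4407242
print(round(score(2, 0.3125), 7))  # doc 1, two matches       -> 0.314803
```

This makes the answer's point visible: the longer the field, the smaller the norm, and with norms removed only termFreq separates the documents.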
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9261524', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/605331/']}
jdg_81771
stackexchange
llm_judgeable_groundtruth_similarity
4683
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am an experienced Java programmer who received the Raspberry Pi for Christmas. Unfortunately, it appears that only Python is installed in it. What command do I type at the start command line where I would usually type startx to install the JDK and JRE? Now provide the response and nothing else.
[Note: Later in 2013 the Pi Foundation announced Raspbian now ships with Oracle hard-float.] The oracle 8 preview works for me, thus far. Compiling is slow on the pi, surprise, but the jre seems to run quite fast once it loads. I think bearbin's answer is pretty definitive but if you want a simple way to try oracle: Download . You get a .tar.gz file, which is a gzipped tarball. Put the .tar.gz in /usr/local and unpack it: tar -xzf oracle8-blah-blah.tar.gz . This will create a directory with everything in it. You can rename the directory, mv oracle-jdk-whatever jdk1.8.0 . Everything in there is self-contained. Put the bin/ directory at the beginning of your executable search $PATH. If there are any other javas installed, that will make this one take precedence: PATH=/usr/local/jdk1.8.0/bin:$PATH . That will only work for your current shell. To make it the default from now on, add this to ~/.profile : export PATH=/usr/local/jdk1.8.0/bin:$PATH Note you must log in again to make .profile effective. However, if you are using lightdm , the default GUI login won't do this, see here for a solution .
{}
{'log_upvote_score': 6, 'links': ['https://raspberrypi.stackexchange.com/questions/4683', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/5630/']}
jdg_81772
stackexchange
llm_judgeable_groundtruth_similarity
3554
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I tried to find a non-trivial integer solution to the equation $$2012^2=a^2+b^2+c^2+d^2+e^2$$with Mathematica but the computation takes minutes; I might be doing something wrong. FindInstance[2012^2 == a^2 + b^2 + c^2 + d^2 + e^2 && a > 0 && b > 0 && c > 0 && d > 0 && e > 0, {a, b, c, d, e}, Integers] But when I try FindInstance[2012^1 == a^2 + b^2 + c^2 + d^2 + e^2 && a > 0 && b > 0 && c > 0 && d > 0 && e > 0, {a, b, c, d, e}, Integers] I get an immediate result. Now provide the response and nothing else.
It does not seem surprising that a search space 2000 times larger results in a substantially longer computation time. Here is a much more direct way to find a solution: Sqrt @ IntegerPartitions[2012^2, {5}, Range[2012]^2, 1] {{2011, 63, 7, 2, 1}}
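For readers without Mathematica, the same partition can be cross-checked in plain Python (a hypothetical helper, not what IntegerPartitions does internally: peel off the largest square, a = 2011, then brute-force the small remainder as four positive squares):

```python
import math

N = 2012
a = 2011                     # largest square below N^2 leaving room for 4 more
remainder = N * N - a * a    # 4023

def four_squares(m):
    """Positive b >= c >= d >= e with b^2 + c^2 + d^2 + e^2 == m, or None."""
    for b in range(math.isqrt(m - 3), 0, -1):
        rb = m - b * b
        if rb < 3:
            continue
        for c in range(min(b, math.isqrt(rb - 2)), 0, -1):
            rc = rb - c * c
            if rc < 2:
                continue
            for d in range(min(c, math.isqrt(rc - 1)), 0, -1):
                e = math.isqrt(rc - d * d)
                if 1 <= e <= d and e * e == rc - d * d:
                    return b, c, d, e
    return None

print((a,) + four_squares(remainder))   # (2011, 63, 7, 2, 1)
```

The greedy first step is what makes this fast: once a = 2011 is fixed, the remaining search space is tiny, which mirrors why restricting FindInstance's search space matters so much.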
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/3554', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/676/']}
jdg_81773
stackexchange
llm_judgeable_groundtruth_similarity
2280464
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The question is- Make the function $f:R\rightarrow$$R$, $f(x)=x^2$ invertible by making suitable adjustments to the domain and range. So, here is what my teacher said- Since we have to make the function onto, we must make the range equal to co-domain. Range of y=$x^2$ is $[0\rightarrow \infty]$. Also, the function must be one-one. So, the domain of the function must be ahead of the minima of the function. So, finally the function becomes $f:[0\rightarrow \infty]\rightarrow [0 \rightarrow \infty]$. So, the graph looks like this But, why isn't this part of the graph also included in the answer Now provide the response and nothing else.
The first part is not a strict answer to the question, but not far. The full answer is added in the second part. Consider the series expansion :$$e^x=\sum_{k=0}^\infty \frac{x^k}{k!} = \sum_{k=0}^\infty \frac{x^k}{\Gamma(k+1)} \qquad |x|<1$$ Compare to the Mittag-Leffler function :$$E_\alpha(x)=\sum_{k=0}^\infty \frac{x^k}{\Gamma(\alpha k+1)}$$ http://mathworld.wolfram.com/Mittag-LefflerFunction.html $$\text{Or }\qquad E_\alpha(x^\alpha)=\sum_{k=0}^\infty \frac{x^{\alpha k}}{\Gamma(\alpha k+1)}$$ This function matches the exponential function in the particular case $\alpha=1$. It is of interest to see what the fractional derivative of $\left(E_\alpha(x^\alpha)-1\right)$ is. We will see later why the first term of the series is considered apart.$$\frac{d^\alpha}{dx^\alpha}\left(E_\alpha(x^\alpha)-1\right)=\sum_{k=1}^\infty \frac{1}{\Gamma(\alpha k+1)}\frac{d^\alpha }{dx^\alpha}(x^{\alpha k})$$ $\frac{d^\alpha }{dx^\alpha}(x^{\alpha k})=\frac{\Gamma(\alpha k+1)}{\Gamma\left(\alpha (k-1)+1\right)}x^{\alpha(k-1)}$ $$\frac{d^\alpha}{dx^\alpha}\left(E_\alpha(x^\alpha)-1\right)=\sum_{k=1}^\infty \frac{1}{\Gamma(\alpha k+1)}\frac{\Gamma(\alpha k+1)}{\Gamma\left(\alpha (k-1)+1\right)}x^{\alpha(k-1)}$$ $$\frac{d^\alpha}{dx^\alpha}\left(E_\alpha(x^\alpha)-1\right)=\sum_{k=1}^\infty \frac{x^{\alpha(k-1)}}{\Gamma\left(\alpha (k-1)+1\right)}=\sum_{h=0}^\infty \frac{x^{\alpha h}}{\Gamma\left(\alpha h+1\right)}$$ $$\frac{d^\alpha}{dx^\alpha}\left(E_\alpha(x^\alpha)-1\right)=E_\alpha(x^\alpha)$$ $$\frac{d^\alpha}{dx^\alpha}E_\alpha(x^\alpha)=E_\alpha(x^\alpha)+\frac{d^\alpha}{dx^\alpha}(1)$$ This is close to the expected equation $$\quad \frac{d^\alpha}{dx^\alpha}f(x)=f(x)\qquad \text{with} \quad f(x)=E_\alpha(x^\alpha)$$ But there is an extra term $\frac{d^\alpha}{dx^\alpha}(1)=\frac{x^{-\alpha}}{\Gamma(1-\alpha)}$. This is the difference compared to the case $\alpha=1$ of the exponential
:$$\frac{d^1}{dx^1}e^x=e^x+\frac{d^1}{dx^1}(1)=e^x$$The first term in the series expansion of $e^x$ is the constant $1$. So its derivative is $0$, which is not the case for the fractional derivative of order different from $1$. In fact, this difference comes from the definition of the lower bound $=0$ in the Riemann-Liouville operator for fractional differintegration. IN ADDITION : In order to have a full solution, the Mittag-Leffler function has to be extended. Instead of limiting the series to the terms with $k\geq 0$, consider all terms from $k=-\infty$ to $+\infty$:$$f(x)=\sum_{k=-\infty}^\infty \frac{x^{\alpha k}}{\Gamma(\alpha k+1)}$$The same calculation as above shows that $f(x)$ is a formal solution of the fractional differential equation $$\frac{d^\alpha}{dx^\alpha}f(x)=f(x)$$ Note : Also, this is valid for the exponential function and $\alpha=1$ since $$\quad \frac{1}{k!}=\frac{1}{\Gamma(k+1)}=0 \quad\text{in}\quad k<0 \quad\to\quad e^x=\sum_{k=-\infty}^\infty \frac{x^k}{k!} $$.
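The claim that the Mittag-Leffler series generalizes the exponential series can be sanity-checked numerically (a sketch; the series is simply truncated at 60 terms, which is far more than needed for these small arguments):

```python
import math

# E_alpha(x) = sum_{k>=0} x^k / Gamma(alpha*k + 1); for alpha = 1 the
# terms are x^k / k!, i.e. the exponential series.  For alpha = 2 with
# argument x^2 the terms are x^(2k) / (2k)!, i.e. the cosh series.
def mittag_leffler(x, alpha, terms=60):
    return sum(x ** k / math.gamma(alpha * k + 1) for k in range(terms))

print(abs(mittag_leffler(0.5, 1.0) - math.exp(0.5)))    # ~0: E_1(x) = e^x
print(abs(mittag_leffler(0.25, 2.0) - math.cosh(0.5)))  # ~0: E_2(x^2) = cosh(x)
```

The second check is a bonus consistency test: it confirms the series definition used above behaves correctly for an alpha other than 1.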
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2280464', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/162079/']}
jdg_81774
stackexchange
llm_judgeable_groundtruth_similarity
311347
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have an $N \times N$ orthogonal projection matrix $P = A^H(AA^H)^{-1}A$ that I'm trying to find the inverse for. I'm using matlab, however, I keep getting the warning "the matrix is close to singular or badly scaled". Now I'm wondering if it's even invertible. Are orthogonal projection matrices invertible and if so, is there any stable method for computing the inverse? Now provide the response and nothing else.
If a projection matrix is invertible then it is the identity matrix. This is because projection matrices satisfy $P^2 = P$ or $P(P - I) = 0$. If $P$ is invertible then this implies $P - I = 0$ or $P = I$. Another way of seeing that projections are usually not invertible is to think about their nullspace. If you are projecting onto a space of smaller dimension then the nullspace is not zero and hence the matrix is not invertible.
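A tiny concrete instance of this argument (pure Python, 2x2: the orthogonal projection onto the x-axis is idempotent, is not the identity, and is therefore singular):

```python
# P projects (x, y) onto (x, 0): idempotent but rank-deficient.
P = [[1.0, 0.0],
     [0.0, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P2 = matmul(P, P)
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
print(P2 == P, det)   # True 0.0
```

The zero determinant is the numerical face of the nonzero nullspace: every vector (0, y) is sent to zero, so no inverse can exist, which is exactly why MATLAB warns about the matrix being singular.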
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/311347', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/63552/']}
jdg_81775
stackexchange
llm_judgeable_groundtruth_similarity
30396
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Translation, or decoding, of the four-nucleotide language of DNA and mRNA into the 20–amino acid language of proteins requires tRNAs and enzymes called aminoacyl-tRNA synthetases. To participate in protein synthesis, a tRNA molecule must become chemically linked to a particular amino acid via a high-energy bond, forming an aminoacyl-tRNA; the anticodon in the tRNA then base-pairs with a codon in mRNA so that the activated amino acid can be added to the growing polypeptide chain. Some 30–40 different tRNAs have been identified in bacterial cells and as many as 50–100 in animal and plant cells. Thus the number of tRNAs in most cells is more than the number of amino acids used in protein synthesis (20) and also differs from the number of amino acid codons in the genetic code (61). Consequently, many amino acids have more than one tRNA to which they can attach ; in addition, many tRNAs can pair with more than one codon. If perfect Watson-Crick base pairing were demanded between codons and anticodons, cells would have to contain exactly 61 different tRNA species, one for each codon that specifies an amino acid. As noted above, however, many cells contain fewer than 61 tRNAs. What is the explanation for the smaller number of tRNA than codons codons? Now provide the response and nothing else.
The reason for this is that non-Watson-Crick pairing is allowed at the third base of the codon. This phenomenon is called "wobble base pairing". See the figure linked in the original answer for illustration. If you have a look at the codon table for amino acids, then the variation in the code for one amino acid mostly happens at the third position. This allows a smaller number of tRNAs than codons, as there is some flexibility. The possible pairings of the third base can be represented in a table (taken from the linked answer), in which Crick's predictions are compared to the pairings found in experiments.
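As a small illustration of that third-position flexibility, here is a Python sketch using the four glycine codons from the standard genetic code (standard-table facts, shown here only to make the degeneracy pattern concrete):

```python
# Standard-code codons for glycine: they differ only in the third base.
glycine_codons = ["GGU", "GGC", "GGA", "GGG"]

# All four share the same first two bases...
assert {codon[:2] for codon in glycine_codons} == {"GG"}

# ...so a single tRNA whose anticodon wobbles at the third position
# can, in principle, read more than one of these codons.
third_bases = sorted(codon[2] for codon in glycine_codons)
print(third_bases)  # ['A', 'C', 'G', 'U']
```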
{}
{'log_upvote_score': 5, 'links': ['https://biology.stackexchange.com/questions/30396', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/14709/']}
jdg_81776
stackexchange
llm_judgeable_groundtruth_similarity
40609771
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to read the messages from kafka (version 10) in spark and trying to print them.

import spark.implicits._

val spark = SparkSession
  .builder
  .appName("StructuredNetworkWordCount")
  .config("spark.master", "local")
  .getOrCreate()

val ds1 = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "topicA")
  .load()

ds1.collect.foreach(println)

ds1.writeStream
  .format("console")
  .start()

ds1.printSchema()

I am getting an error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Queries with streaming sources must be executed with writeStream.start();;

Now provide the response and nothing else.
You are branching the query plan: from the same ds1 you are trying to:

- collect the data eagerly with ds1.collect.foreach(...)
- start a streaming query with ds1.writeStream.format(...){...}

But you are only calling .start() on the second branch, leaving the other dangling without a termination, which in turn throws the exception you are getting back. Note also that collect is an eager action and is not supported on a streaming Dataset at all; every branch of a streaming query plan has to end in writeStream.start(). The solution is to make both branches streaming queries, start them, and await their termination:

val ds1 = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "topicA")
  .load()

val query1 = ds1.writeStream
  .format("console")
  .start()

val query2 = ds1.writeStream
  .format("console")
  .start()

ds1.printSchema()

query1.awaitTermination()
query2.awaitTermination()
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40609771', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4290914/']}
jdg_81777
stackexchange
llm_judgeable_groundtruth_similarity
15892290
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to change the background image of my button to some other image and I encountered some errors. This is the code I have in my XAML:

<Button x:Name="Button1" Width="200" Height="200" Content="Button1" Margin="0,0,0,400">
    <Button.Background>
        <ImageBrush ImageSource="Images/AERO.png"></ImageBrush>
    </Button.Background>
</Button>

and my C#:

private void Button1_Click_1(object sender, RoutedEventArgs e)
{
    var brush = new ImageBrush();
    brush.ImageSource = new BitmapImage(new Uri("Images/AERO.png"));
    Button1.Background = brush;
}

The error I have on my XAML is "The file 'Images\logo.png' is not part of the project or its 'Build Action' property is not set to 'Resource'." Can anyone help me and explain? Thanks. Now provide the response and nothing else.
In the build action, you can mark the image file as content or as resource. The syntax to use the image in an ImageBrush is different depending on which one you choose.

Here is an image file marked as content. To set the button background to this image, use the following code:

var brush = new ImageBrush();
brush.ImageSource = new BitmapImage(new Uri("Images/ContentImage.png", UriKind.Relative));
button1.Background = brush;

Here is an image file marked as resource. To set the button background to the resource image, use the following code:

Uri resourceUri = new Uri("Images/ResourceImage.png", UriKind.Relative);
StreamResourceInfo streamInfo = Application.GetResourceStream(resourceUri);
BitmapFrame temp = BitmapFrame.Create(streamInfo.Stream);
var brush = new ImageBrush();
brush.ImageSource = temp;
button1.Background = brush;
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15892290', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2243794/']}
jdg_81778
stackexchange
llm_judgeable_groundtruth_similarity
12463902
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: One thing that sets apart AngularJS from other JavaScript-MVC frameworks is it's ability to echo bound values from JavaScript into HTML using bindings. Angular does this "automatically" when you assign any value to the $scope variable. But how automatic is this? Sometimes, Angular won't pick up on the change so I need to call $scope.$apply() or $scope.$digest() to inform angular to pickup the change. Sometimes when I run either of those methods then it throws an error and says that a digest is already in progress. Since the bindings (anything inside {{ }} braces or ng-attributes) are echoed with eval then does this mean that Angular is constantly polling the $scope object to look for changes and then performing an eval to push those changes to the DOM/HTML? Or has AngularJS somehow figured out the use magic variables which fire events which are triggered when a variable value changes or is assigned? I've never heard of it being fully supported by all browsers, so I doubt it. How does AngularJS keep track of it's bindings and scope variables? Now provide the response and nothing else.
In addition to the documentation section found by Mark, I think we can try to enumerate all possible sources of change:

- User interaction with HTML inputs ('text', 'number', 'url', 'email', 'radio', 'checkbox'). AngularJS has inputDirective: 'text', 'number', 'url' and 'email' inputs bind a listener handler for 'input' or 'keydown' events, and that handler calls scope.$apply. 'radio' and 'checkbox' bind a similar handler for the click event.
- User interaction with the select element. AngularJS has selectDirective with similar behavior on the 'change' event.
- Periodic changes using the $timeout service, which also does $rootScope.$apply().
- Event directives (ngClick, etc.) also use scope.$apply.
- $http also uses $rootScope.$apply().
- Changes outside the AngularJS world should use scope.$apply, as you know.
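To build intuition for how those sources feed the digest cycle, here is a toy dirty-checking loop in plain JavaScript (an illustrative sketch only, not Angular's actual implementation):

```javascript
// Minimal sketch of Angular-style dirty checking.
function Scope() {
  this.$$watchers = [];
}
Scope.prototype.$watch = function (watchFn, listenerFn) {
  this.$$watchers.push({ watchFn, listenerFn, last: undefined });
};
Scope.prototype.$digest = function () {
  let dirty;
  do {
    dirty = false;
    for (const w of this.$$watchers) {
      const value = w.watchFn(this);
      if (value !== w.last) {
        w.listenerFn(value, w.last, this);
        w.last = value;
        dirty = true;
      }
    }
  } while (dirty); // keep looping until the model settles
};

// A change is only noticed when $digest runs, which is exactly why
// changes made outside Angular need scope.$apply (it calls $digest).
const scope = new Scope();
scope.name = 'a';
const seen = [];
scope.$watch(s => s.name, v => seen.push(v));
scope.$digest();      // seen: ['a']
scope.name = 'b';     // not yet observed
scope.$digest();      // seen: ['a', 'b']
console.log(seen.join(','));  // → a,b
```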
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12463902', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/340939/']}
jdg_81779
stackexchange
llm_judgeable_groundtruth_similarity
19872920
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am porting code from Java to C++ and I'd like to replicate some anonymous-class functionality. In file A.h I have:

class A {
private:
    int a;
    class AnonClass;
    friend class AnonClass;
};

In file A.cpp I have:

namespace {
    class AnonClass {
    public:
        AnonClass(A* parent) {
            parent->a = 0; // This doesn't work, a is not accessible
        }
    };
}

Is it possible to friend a class in an anonymous namespace in C++? In Java you can declare anonymous classes, so it would be very similar. Also it would not expose AnonClass to clients of A.h. Now provide the response and nothing else.
A less known alternative is to make class Anon a member class of A. Inside class A you only need the line class Anon; -- no real code, no friend declaration. Note it goes within class A, almost as in Java. In the .cpp file you write all the details about Anon, but you put it not in an anonymous namespace but within A::

class A::Anon { ..... };

You can split the declaration and implementation of A::Anon as usual; just remember to always prefix Anon with A::. The class Anon is a member of A and as such gets access to all other members of A. Yet it remains unknown to clients of A and does not clutter the global namespace.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19872920', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6367/']}
jdg_81780
stackexchange
llm_judgeable_groundtruth_similarity
33748953
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having a lot of trouble converting my old Alamofire 2.0 code to 3.0 in ReactiveCocoa. I keep getting an unknown identifier error on my sendNext and sendCompleted calls.

public final class Network: Networking {
    private let queue = dispatch_queue_create(
        "Beet.BeetModel.Network.Queue", DISPATCH_QUEUE_SERIAL)

    public init() { }

    public func requestJSON(url: String, parameters: [String : AnyObject]?)
        -> SignalProducer<AnyObject, NetworkError>
    {
        return SignalProducer { observer, disposable in
            let serializer = Alamofire.Request.JSONResponseSerializer()
            Alamofire.request(.GET, url, parameters: parameters)
                .response(queue: self.queue, responseSerializer: serializer) { _, _, result in
                    switch result {
                    case .Success(let value):
                        sendNext(observer, value)
                        sendCompleted(observer)
                    case .Failure(_, let error):
                        sendError(observer, NetworkError(error: error))
                    }
                }
        }
    }
}

Now provide the response and nothing else.
This syntax changed in 4.0 alpha 2. Observer is now its own type, so the old functions sendNext, sendError, etc. are no longer free functions:

switch result {
case .Success(let value):
    observer.sendNext(value)
    observer.sendCompleted()
case .Failure(_, let error):
    observer.sendError(NetworkError(error: error))
}

One thing I would add to your solution is to provide a disposable so that requests can be cancelled if needed, to save resources:

return SignalProducer { observer, disposable in
    let serializer = Alamofire.Request.JSONResponseSerializer()
    let request = Alamofire.request(.GET, url, parameters: parameters)
    request.response(queue: self.queue, responseSerializer: serializer) { _, _, result in
        switch result {
        case .Success(let value):
            observer.sendNext(value)
            observer.sendCompleted()
        case .Failure(_, let error):
            observer.sendError(NetworkError(error: error))
        }
    }
    disposable.addDisposable(request.cancel)
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33748953', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5570233/']}
jdg_81781
stackexchange
llm_judgeable_groundtruth_similarity
29490670
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The Rust language website claims move semantics as one of the features of the language. But I can't see how move semantics are implemented in Rust. Rust boxes are the only place where move semantics are used:

let x = Box::new(5);
let y: Box<i32> = x; // x is 'moved'

The above Rust code can be written in C++ as:

auto x = std::make_unique<int>(5);
auto y = std::move(x); // Note the explicit move

As far as I know (correct me if I'm wrong), Rust doesn't have constructors at all, let alone move constructors. There is no support for rvalue references and no way to create function overloads with rvalue parameters. How does Rust provide move semantics? Now provide the response and nothing else.
I think it's a very common issue when coming from C++. In C++ you are doing everything explicitly when it comes to copying and moving. The language was designed around copying and references. With C++11 the ability to "move" stuff was glued onto that system. Rust, on the other hand, took a fresh start.

Rust doesn't have constructors at all, let alone move constructors.

You do not need move constructors. Rust moves everything that "does not have a copy constructor", a.k.a. "does not implement the Copy trait".

struct A;

fn test() {
    let a = A;
    let b = a;
    let c = a; // error, a is moved
}

Rust's default constructor is (by convention) simply an associated function called new:

struct A(i32);

impl A {
    fn new() -> A {
        A(5)
    }
}

More complex constructors should have more expressive names. This is the named constructor idiom in C++.

No support for rvalue references.

It has always been a requested feature, see RFC issue 998, but most likely you are asking for a different feature: moving stuff to functions:

struct A;

fn move_to(a: A) {
    // a is moved into here, you own it now.
}

fn test() {
    let a = A;
    move_to(a);
    let c = a; // error, a is moved
}

No way to create function overloads with rvalue parameters.

You can do that with traits:

trait Ref {
    fn test(&self);
}

trait Move {
    fn test(self);
}

struct A;

impl Ref for A {
    fn test(&self) {
        println!("by ref");
    }
}

impl Move for A {
    fn test(self) {
        println!("by value");
    }
}

fn main() {
    let a = A;
    (&a).test(); // prints "by ref"
    a.test();    // prints "by value"
}
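As a small compilable sketch of the Copy distinction mentioned above (the type names are made up for illustration):

```rust
// A type that opts into implicit copying.
#[derive(Clone, Copy)]
struct Meters(i32);

// A type without Copy: assignment moves it.
struct Buffer(String);

fn main() {
    let a = Meters(2);
    let b = a;                 // `a` is copied; both stay usable
    println!("{}", a.0 + b.0); // 4

    let s = Buffer(String::from("hi"));
    let t = s;                 // `s` is moved; using `s` here would not compile
    println!("{}", t.0);       // hi
}
```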
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/29490670', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3711299/']}
jdg_81782
stackexchange
llm_judgeable_groundtruth_similarity
30958375
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was reading this question about using a bool for thread control and got intrigued by this answer by @eran: Using volatile is enough only on single cores, where all threads use the same cache. On multi-cores, if stop() is called on one core and run() is executing on another, it might take some time for the CPU caches to synchronize, which means two cores might see two different views of isRunning_. If you use synchronization mechanisms, they will ensure all caches get the same values, in the price of stalling the program for a while. Whether performance or correctness is more important to you depends on your actual needs. I have spent over an hour searching for some statement that says synchronization primitives force cache coherency but have failed. The closest I have come is Wikipedia : The keyword volatile does not guarantee a memory barrier to enforce cache-consistency. Which suggests that memory barriers do force cache consistency, and since some synchronization primitives are implemented using memory barriers (again from Wikipedia) this is some "evidence". But I don't know enough to be certain whether to believe this or not, and be sure that I'm not misinterpreting it. Can someone please clarify this? Now provide the response and nothing else.
As I understand, synchronization primitives won't affect cache coherency at all. Cache is French for hidden; it's not supposed to be visible to the user. A cache coherency protocol should work without the programmer's involvement. Synchronization primitives will affect the memory ordering, which is well defined and visible to the user through the processor's ISA. A good source with detailed information is A Primer on Memory Consistency and Cache Coherence from the Synthesis Lectures on Computer Architecture collection.

EDIT: To clarify your doubt, the Wikipedia statement is slightly wrong. I think the confusion might come from the terms memory consistency and cache coherency. They don't mean the same thing. The volatile keyword in C means that the variable is always read from memory (as opposed to a register) and that the compiler won't reorder loads/stores around it. It doesn't mean the hardware won't reorder the loads/stores. This is a memory consistency problem. When using weaker consistency models, the programmer is required to use synchronization primitives to enforce a specific ordering. This is not the same as cache coherency. For example, if thread 1 modifies location A, and after this event thread 2 loads location A, it will receive an updated (consistent) value. This should happen automatically if cache coherency is used. Memory ordering is a different problem. You can check out the famous paper Shared Memory Consistency Models: A Tutorial for more information. One of the better known examples is Dekker's Algorithm, which requires sequential consistency or synchronization primitives.

EDIT2: I would like to clarify one thing. While my cache coherency example is correct, there is a situation where memory consistency might seem to overlap with it. This happens when stores are executed in the processor but delayed going to the cache (they sit in a store queue/buffer). Since the processor's cache hasn't received the updated value, the other caches won't have it either.
This may seem like a cache coherency problem but in reality it is not and is actually part of the memory consistency model of the ISA. In this case synchronization primitives can be used to flush the store queue to the cache. With this in mind, the Wikipedia text that you highlighted in bold is correct but this other one is still slightly wrong: The keyword volatile does not guarantee a memory barrier to enforce cache-consistency . It should say: The keyword volatile does not guarantee a memory barrier to enforce memory consistency .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30958375', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2369597/']}
jdg_81783
stackexchange
llm_judgeable_groundtruth_similarity
55263085
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a purchased React template with the following lines, but it's not clear to me what their purpose is. The template is in JS and I want to change it to TypeScript. The following lines were present in the template:

if (module.hot) {
  module.hot.accept('./dashApp.js', () => {
    const NextApp = require('./dashApp').default;
    ReactDOM.render(<NextApp />, document.getElementById('root'));
  });
}

However, when the file is renamed to .ts, I get this error: Property 'hot' does not exist on type 'NodeModule'.ts(2339). What does this code really do, in plain English? Now provide the response and nothing else.
This code is related to Webpack's hot module replacement (HMR) feature. module.hot.accept works like this:

module.hot.accept(
  dependencies, // Either a string or an array of strings
  callback      // Function to fire when the dependencies are updated
);

So the code you included does this: when the code of ./dashApp.js or one of the modules it requires/imports in the import tree gets updated, re-render the whole React app.

The hot property on node modules is not standard, thus the TS error; make sure you install the required type definitions! npm install --save-dev @types/webpack-env should do the trick.

Related reads:
- Hot Module Replacement concept: high-level Webpack doc on HMR
- Hot Module Replacement API: low-level Webpack doc on HMR, explaining how to use it and what module.hot.accept does
- Property 'hot' does not exist on type 'NodeModule': GitHub issue resolving the TS error
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/55263085', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1014217/']}
jdg_81784
stackexchange
llm_judgeable_groundtruth_similarity
44161892
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have implemented an Azure AD OAuth2 daemon or server in ASP.NET Web API. However, I only receive an access token, which is a property on the AuthenticationResult. See the implementation below:

public IHttpActionResult GetAccessToken(string clientId, string clientkey)
{
    AuthenticationContext authContext = new AuthenticationContext(authority);
    ClientCredential clientCredential = new ClientCredential(clientId, clientkey);
    AuthenticationResult authenticationResult =
        authContext.AcquireTokenAsync(resourceUri, clientCredential).Result;

    Authorisation authorisation = new Authorisation
    {
        access_token = authenticationResult.AccessToken,
        token_type = authenticationResult.AccessTokenType,
        expires_on = authenticationResult.ExpiresOn
    };

    return Ok(authorisation);
}

This returns only an access token. I would like a daemon or server implementation that returns both an access token and a refresh token. Have you seen or done a similar implementation? Any useful links to an example are welcome. Now provide the response and nothing else.
When I posted this question, this was the answer I was looking for; please see the screenshot below for the expected result and a C# console solution. Having found the solution, it is worth sharing it here, as it may be useful to someone some day.

C# console app code to achieve the expected result shown in the Postman screenshot below:

using System;
using System.Collections.Generic;
using System.Net.Http;

namespace AzureADTokenApp
{
    class Program
    {
        static void Main(string[] args)
        {
            var client = new HttpClient();
            var uri = "https://login.microsoftonline.com/<tenant-name>.onmicrosoft.com/oauth2/token?api-version=1.0";

            var pairs = new List<KeyValuePair<string, string>>
            {
                new KeyValuePair<string, string>("resource", "https://graph.microsoft.com"),
                new KeyValuePair<string, string>("client_id", "<azure ad client id e.g. 9b864-a5e6-4f0d-b155-1f53a6c78>"),
                new KeyValuePair<string, string>("client_secret", "<azure ad client secret e.g. MTMiXaO1P9HnhSawdXWmcnuQ="),
                new KeyValuePair<string, string>("grant_type", "password"),
                new KeyValuePair<string, string>("username", "<azure ad user e.g. [email protected]>"),
                new KeyValuePair<string, string>("password", "<azure ad user password e.g. Pa$$word01>"),
                new KeyValuePair<string, string>("scope", "openid")
            };

            var content = new FormUrlEncodedContent(pairs);
            var response = client.PostAsync(uri, content).Result;

            string result = string.Empty;
            if (response.IsSuccessStatusCode)
            {
                result = response.Content.ReadAsStringAsync().Result;
            }

            Console.WriteLine(result);
            Console.ReadLine();
        }
    }
}

Screenshot from Postman - Expected Result. You will have the same result in the console, except it is less readable.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44161892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1080738/']}
jdg_81785
stackexchange
llm_judgeable_groundtruth_similarity
47345
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: It is a known fact that your laptop camera can be hacked to spy on you. While the most often mentioned remedy is very low tech yet effective, everyone seems to omit the fact that the microphone can be used to spy on you too. I have no proof but it seems probable. Is there any remedy to this except for the general "making sure your laptop is not hacked"? EDIT: "reversible block" is the word I was looking for! Now provide the response and nothing else.
This issue affects laptops, tablets and cellphones alike, with similar solutions, so even if only laptops were explicitly mentioned in the OP's question (with still a link to an article focusing on cellphones), I think it can be useful to address the issue as a whole. There are several ways to counter malicious use of embedded microphones:

- Physical destruction / removal of the microphone,
- Using a dummy plug,
- Relying on software isolation techniques,
- Using specific hardware to mitigate privacy issues.

Let's see each one of them, what they are all about, and what are their strengths and weaknesses.

Physical destruction / removal

The most well-known and most effective solution is to physically destroy (drill) or remove (desolder) the microphone: no microphone anymore, no malicious way to use it. An external microphone can then be plugged in whenever required (earphones, for instance, in the case of cellphones). Be aware, however, that certain devices (in particular cellphones and tablets) may contain several microphones working together (mainly used to attenuate background noise): you must be sure not to miss any of them. The iFixit website is usually a great place to find teardown videos, unmounting instructions and even vendor-specific tools and parts for various hardware, including cellphones and computing devices. But apart from the actual microphone(s), researchers have found that mobile devices' gyroscopes (which do not even require any authorization in the case of mobile apps) can also be used as low-quality microphones. So the "no microphone, no audio spying possible" adage may not be completely true.

Using a dummy plug

As recommended at the end of this article, this is most probably the easiest way to get an easily reversible deactivation of the microphone. All you have to do is cut the plug off some old microphone or earphone and put it into your device to disable the main microphone; just unplug it and the microphone will be back.
However, do not assume that, because this requires a physical action on your side, the microphone will be physically disconnected. I remember old issues when running Linux on some laptops where plugging in earbuds would not deactivate the laptop's main speakers: sound would be diffused through both the earbuds and the main speakers. This means that some action had to be taken at the software level in order to disable the main speaker when an earbud plugged-in event was triggered. One cannot exclude that the same goes for the microphone. Depending on the device, plugging in a dummy device may indeed physically disconnect the main microphone(s), or the main microphone(s) may still remain reachable at a low level. Nevertheless, this should still be effective, notably against spyware relying on the device's standard audio stack and against cellphones' hidden automatic callbacks which, IMO, are the most common threats. Low-level attacks are more in the realm of targeted attacks, and if you are a high enough target to justify such investment from the adversary and it turns out that your devices are affected by low-level malware (or you assume so by default, which may be wise in such circumstances), then your balance should tilt more toward drastic measures than convenience, and physically removing microphones may seem a minimum.

Software isolation

The most documented entry point for attackers remains the software layer, whether by exploiting some flaw or by manipulating the user into running malicious code. Hardened desktop computing platforms (mainly Qubes OS and, to a lesser extent, the newcomer Subgraph OS in the FOSS realm, Polyxene in the proprietary realm; there may be others) strongly isolate software from the hardware. There is no real equivalent on mobile devices, as their low-consumption CPUs do not offer virtualization features yet; however, there are early ongoing projects integrating LXC into Android, which seem to be a very promising first step.
On such systems, in order to reach the microphone, malicious software has to be able to escape the containment system and build a covert channel through it. This provides a good level of protection even against targeted attacks. However, in the case of a high-value target, chances are that the attacker will just try to bypass the whole operating system at once and directly reach the lowest and most privileged layers by taking advantage of:

- The Intel Management Engine (and its AMD equivalent) on computers,
- The baseband firmware on cellphones.

As far as I know, there is no real solution to these threats. For computers: older computers do not come with the Intel Management Engine, and using libreboot makes it possible to disable it on pre-2009 computers, but there does not seem to be any solution on recent platforms except using specific hardware, as discussed in the following section. For cellphones: the OsmocomBB project publishes a free implementation of the GSM stack to replace the opaque default one, but AFAIK there is no other project going any further. In their Hardening Android for Security and Privacy guide, the Tor development team launched a call years ago for devices where the baseband hardware would be effectively isolated from the rest of the device, but with no real luck.

Using specific hardware

A few projects aim to provide platforms that are as trustworthy as possible to their users, allowing real control over the hardware's behavior. For computers: the Purism company (clearly a pun on NSA's PRISM project) builds laptops and tablets with a specific focus on privacy (by the way, their Librem 13 model is the first officially Qubes-certified laptop). One of their distinguishing features is to equip each of their products with hardware switches that physically disable the camera, microphone, WiFi and Bluetooth. However, they still seem to run on CPUs enforcing the Intel Management Engine (an Intel Core i5 in the case of the Librem 13).
Crowdfunded projects like the EOMA98 (also featured in Linux Magazine) try to create as open a platform as possible. This one especially is interesting due to the care taken in choosing the components, most especially the CPU. Such projects rely on low-consumption CPUs, which do not provide the same computational power and cannot run a hardened desktop OS like Qubes OS (see the "Software isolation" part above; the limitations are the same as for mobile devices)... for now, since things are moving very quickly in this area and I don't see why open hardware would not follow the same path as open software (being optimistic, I would say that bugging their own CPUs was probably the nicest present Intel and AMD could have made to their competitors). For cellphones: there are comparable projects on the cellphone side, like the Open Hardware Cellphone, which relies on the Adafruit Fona cellular phone module, but as stated above you remain stuck with GSM, with no concrete future plan to go beyond this. Moreover, while the hardware itself is open, I am not at all sure that this module's firmware is open source (?), in which case it would solve absolutely nothing regarding the network baseband module trust issue. However, such a device would still give you better control over the microphone (and camera, whenever you add one), for instance by adding a switch like the one equipping Purism computers. Mike Perry from the Tor project has written and maintains a guide on Hardening Android for Security and Privacy. In its current shape it is described more as a proof of concept due to usability issues, but it still provides a wealth of information.
Fundamentally, due to the lack of proper isolation between the network baseband module and the rest of the cellphone, the idea is to use two devices:

- A WiFi-only tablet with no cell network support and optionally the microphone removed (this is actually only to address potential software exploitation now, since the microphone is effectively out of reach of the cellular network),
- A separate cell modem device providing WiFi access for data services only.

Use VoIP (or whatever you like, as long as it is data) on top of that to communicate.

Conclusion

Practical security is mostly a matter of balance between pure security and convenience. As always, the right answer heavily depends on your actual needs and the threats you are really facing, but to give an idea:

- For high-profile targets (I mean life-or-death issues, not a teenager downloading some warez ;) ), I would feel more comfortable with a solution relying on specific hardware whenever possible (i.e. when such solutions, which are still in their infancy, suit your needs), otherwise on physical modification of more common hardware by removing the microphones and putting some black electrical duct tape on the cameras.
- For privacy-conscious users, unless their geeky side pushes them to actively participate in the open-hardware movement, using low-level software isolation on computers, a dummy plug in the cellphone and some black electrical duct tape on each camera should be sufficient to quickly and easily provide a good level of privacy without really sacrificing convenience.
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/47345', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/2337/']}
jdg_81786
stackexchange
llm_judgeable_groundtruth_similarity
8682582
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: According to cplusplus.com, the std::type_info::before() function... Returns true if the type precedes the type of rhs in the collation order. The collation order is just an internal order kept by a particular implementation and is not necessarily related to inheritance relations or declaring order. So what is it useful for? Now provide the response and nothing else.
Consider you want to put your type_info objects as keys into a map<type_info*, value> . The type_info doesn't have an operator < defined, so you must provide your own comparator. The only thing that is guaranteed to work from the type_info interface is the before() function, since neither the addresses of type_info objects nor their name() are guaranteed to be unique: struct compare { bool operator ()(const type_info* a, const type_info* b) const { return a->before(*b); } }; std::map<const type_info*, std::string, compare> m; void f() { m[&typeid(int)] = "Hello world"; }
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8682582', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/627005/']}
jdg_81787
stackexchange
llm_judgeable_groundtruth_similarity
3523106
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given $I_n=\displaystyle\int_0 ^1{\frac{x^{n+1}}{x+3}}\,dx$ for $n\in\mathbb N$ , prove that $$\lim_{n \to \infty} nI_n=\frac{1}{4}$$ Here is what the limit looks like: $$\lim_{n \to \infty} n\int_0 ^1{\frac{x^{n+1}}{x+3}}\,dx$$ I am not sure how to go about this exercise, but I tried solving the integral with no success. I integrated by parts once choosing $u = x^{n+1}$ and $v = \frac{1}{x+3}$ and this is what I got. $$I_n=\ln 4-(n+1)\int_0^1{x^n\ln(x+3)}\,dx$$ I don't know how to proceed. I was asked previously to prove that $I_{n+1}+3I_n=\frac{1}{n+2}$ , do you think this can be used somehow? I tried solving for $I_n$ but I don't see how it can help because of the $I_{n+1}$ that is left over? Help me out, please! Now provide the response and nothing else.
Notice that $$I_{n+1}-I_n = \int_0^1 \frac{x^{n+1}(x-1)}{x+3} \text{d}x \leq 0$$ so the sequence $\left(I_n\right)_{n\ge 1}$ is decreasing. Also: $$3I_n+I_{n+1} = \int_0^1 \frac{3x^{n+1}+x^{n+2}}{x+3}\text{d}x = \int_0^1 x^{n+1} \text{d}x = \frac{1}{n+2}$$ Now since the sequence is decreasing we have $I_{n+1} \leq I_{n}$ . Therefore: $$3I_n+I_{n+1}\leq 3I_n+I_n= 4I_n\ \ \ \ \ \ \ \ \ \ (1)$$ Similarly, because $I_n \leq I_{n-1} \Rightarrow 3I_n \leq 3I_{n-1}$ , we have: $$4I_n = 3I_n + I_n \leq 3I_{n-1}+I_n\ \ \ \ \ \ \ \ \ \ (2)$$ Chaining $(1)$ and $(2)$ , we get: $$3I_{n}+I_{n+1} \leq 4I_n \leq 3I_{n-1}+I_{n}$$ or $$\frac{1}{n+2} \leq 4I_n \leq \frac{1}{n+1}$$ which implies $$\frac{n}{4(n+2)} \leq nI_n \leq \frac{n}{4(n+1)}$$ From the squeeze theorem, it follows that: $$\lim_{n\to \infty} nI_n = \frac{1}{4}$$
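The squeeze above can be sanity-checked numerically; here is a small Python sketch (the Simpson integrator and the choice n = 5000 are mine, not part of the answer):

```python
import math

def simpson(f, a, b, intervals):
    """Composite Simpson's rule; 'intervals' must be even."""
    h = (b - a) / intervals
    total = f(a) + f(b)
    for i in range(1, intervals):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

n = 5000
I_n = simpson(lambda x: x ** (n + 1) / (x + 3), 0.0, 1.0, 200_000)

# Squeeze bounds from the answer: n/(4(n+2)) <= n*I_n <= n/(4(n+1))
lower, upper = n / (4 * (n + 2)), n / (4 * (n + 1))
print(lower, n * I_n, upper)   # all three cluster around 0.25
```

Both bounds approach 1/4 from below, so n·I_n is pinned arbitrarily close to 0.25 for large n.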
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3523106', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/577174/']}
jdg_81788
stackexchange
llm_judgeable_groundtruth_similarity
32751805
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My module definition looks like this: module RG::Stats def self.sum(a, args = {}) a.inject(0){ |accum, i| accum + i } end end To use this method I simply require the file containing this definition so that I can do: RG::Stats.sum(array) and also RG::Stats.method(:sum) However, if I need to know the list of methods using RG::Stats.instance_methods I get an empty array. This is because I have used self . If I omit self then RG::Stats.instance_methods gives the list of methods, but I cannot access them anymore. The question is: how to use self in a module's method definition? Now provide the response and nothing else.
Use self in each method definition if you want the methods to be defined only in the singleton class of the module (where the methods defined using self live). Omit self and extend self if you want the methods of the module to be defined as instance methods and singleton methods at the same time. For instance, you can call the method using RG::Stats.sum(array) and still have it listed by the instance_methods method if you do this: module RG::Stats extend self def sum(a, args = {}) a.inject(0){ |accum, i| accum + i } end end This way, the sum method is defined as an instance method and it is included in the singleton class of the module after using extend self . You can check the instance methods of the RG::Stats module to verify this: RG::Stats.instance_methods # => [:sum] With this technique you don't have to worry about defining the method without the self keyword because modules can't have instances, so it cannot be called like an instance method of the RG::Stats module. It can only be called as a singleton method RG::Stats.sum(array) thanks to the extend self statement.
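As a runnable sketch of the idiom (the module nesting and sample numbers are mine):

```ruby
module RG
  module Stats
    extend self

    def sum(a, args = {})
      a.inject(0) { |accum, i| accum + i }
    end
  end
end

puts RG::Stats.sum([1, 2, 3])   # singleton-style call works: prints 6
p RG::Stats.instance_methods    # and :sum is still listed
```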
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32751805', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3259651/']}
jdg_81789
stackexchange
llm_judgeable_groundtruth_similarity
36316465
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So, what's I want to do is to run a function on a specific webpage (which is a match with my regex). Right now I'm checking it every second and it works, but I'm sure that there is a better way (as it's flooding that website with getting requests). while flag: time.sleep(1) print(driver.current_url) if driver.current_url == "mydesiredURL_by_Regex": time.sleep(1) myfunction() I was thinking to do that somehow with WebDriverWait but not really sure how. Now provide the response and nothing else.
I was thinking to do that somehow with WebDriverWait Exactly. First of all, see if the built-in Expected Conditions may solve that: title_is title_contains Sample usage: from selenium.webdriver.common.by import By; from selenium.webdriver.support.ui import WebDriverWait; from selenium.webdriver.support import expected_conditions as EC; wait = WebDriverWait(driver, 10); wait.until(EC.title_is("title")); wait.until(EC.title_contains("part of title")) If not, you can always create a custom Expected Condition to wait for url to match a desired regular expression.
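For the URL-regex case itself, a custom expected condition is just a callable that takes the driver and returns something truthy. A sketch (the class name is my invention; recent Selenium releases also ship a built-in EC.url_matches):

```python
import re

class url_matches_regex:
    """Custom expected condition: the current URL matches a regex."""

    def __init__(self, pattern):
        self.pattern = re.compile(pattern)

    def __call__(self, driver):
        # WebDriverWait calls this repeatedly until it returns something
        # truthy or the timeout expires.
        return self.pattern.search(driver.current_url) is not None

# With a real driver this would be used as:
#   WebDriverWait(driver, 10).until(url_matches_regex(r"/checkout/\d+$"))
```

Polling a condition this way avoids hammering the page with your own sleep loop.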
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36316465', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6136211/']}
jdg_81790
stackexchange
llm_judgeable_groundtruth_similarity
138921
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If $A\colon H\to S$ is a bounded operator on a Hilbert space $H$, and $S\subset H$. It is known that $\operatorname{trace}(A)=\sum_{n} \langle Af_n,f_n\rangle$ for any orthonormal basis $\{f_{n}\}$. Is there a relation between $\operatorname{trace}(A)$, $\operatorname{rank}(A)$, and dimension of $\operatorname{range}(S)$? Edit: What if $A$ is a composition of two orthogonal projections $A_{1}:H\to S_{1}$,$A_{2}:H\to S_{2}$, such that $A=A_{1}oA_{2}$, for $S_{1},S_{2}\subset H$. I need to show that $\operatorname{trace}(A)\leq \operatorname{rank}(A)\leq \dim(S_{2})$ Now provide the response and nothing else.
We could simply apply the chain rule; to avoid confusion we let $ C(x,t) = C(x^* + vt,t^*) = C^*(x^*,t^*)$:$$\frac{\partial C}{\partial x} = \frac{\partial C^*}{\partial x^{\phantom{*}}}= \frac{\partial C^*}{\partial x^*} \frac{\partial x^*}{\partial x^{\phantom{*}}} + \frac{\partial C^*}{\partial t^*} \frac{\partial t^*}{\partial x^{\phantom{*}}} = \frac{\partial C^*}{\partial x^*}$$remember that in the chain rule the partial derivatives are taken wrt the first and second variables, not to be confused with the total derivative; similarly we have $\displaystyle \frac{\partial^2 C}{\partial x^2} = \frac{\partial^2 C^*}{\partial {x^*}^2} $,$$\frac{\partial C}{\partial t} = \frac{\partial C^*}{\partial t} = \frac{\partial C^*}{\partial x^*} \frac{\partial x^*}{\partial t^{\phantom{*}}} + \frac{\partial C^*}{\partial t^*} \frac{\partial t^*}{\partial t^{\phantom{*}}} = -v\frac{\partial C^*}{\partial x^*} + \frac{\partial C^*}{\partial t^*}$$Plugging back into the original equation, you will see the convection term is gone once this moving-frame rescaling is done; you can think of the original equation as diffusion on a car moving with velocity $v$ as measured by a standing person, and after the change of variables it is just pure diffusion measured from the car:$$\frac{\partial C^*}{\partial t^*} = D\frac{\partial^2 C^*}{\partial {x^*}^2}$$and the initial condition changes to $C^*(x^*,0) = C(x^*+vt^*,t^*)\Big\vert_{t^*=0}= f(x^*)$; the boundary condition remains the same.
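The computation can be double-checked numerically: take an exact heat-kernel solution of the pure diffusion equation, shift it into the moving frame, and verify by finite differences that it satisfies the original advection-diffusion equation. A Python sketch (all parameter values are arbitrary choices of mine):

```python
import math

D, v = 0.7, 1.3   # diffusion coefficient and frame velocity (arbitrary)
t0 = 0.5          # time offset so the kernel is smooth at t = 0

def C_star(x, t):
    """Exact solution of C*_t = D C*_xx (heat kernel)."""
    return math.exp(-x * x / (4 * D * (t + t0))) / math.sqrt(4 * math.pi * D * (t + t0))

def C(x, t):
    """Moving-frame solution: should satisfy C_t + v C_x = D C_xx."""
    return C_star(x - v * t, t)

x, t, h = 0.9, 0.8, 1e-4
C_t  = (C(x, t + h) - C(x, t - h)) / (2 * h)
C_x  = (C(x + h, t) - C(x - h, t)) / (2 * h)
C_xx = (C(x + h, t) - 2 * C(x, t) + C(x - h, t)) / (h * h)

residual = C_t + v * C_x - D * C_xx
print(residual)   # close to zero, up to finite-difference error
```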
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/138921', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/30300/']}
jdg_81791
stackexchange
llm_judgeable_groundtruth_similarity
3341485
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to make a HTML page behave, for example, like a A4-sized page in MS Word? Essentially, I want to be able to show the HTML page in the browser, and outline the content in the dimensions of an A4 size page. For the sake of simplicity, I'm assuming that the HTML page will only contain text (no images etc.) and there will be no <br> tags for example. Also, when the HTML page is printed, it would come out as A4-sized paper pages. Now provide the response and nothing else.
Ages ago, in November 2005, AlistApart.com published an article on how they published a book using nothing but HTML and CSS. See: http://alistapart.com/article/boom Here's an excerpt of that article: CSS2 has a notion of paged media (think sheets of paper), as opposed to continuous media (think scrollbars). Style sheets can set the size of pages and their margins. Page templates can be given names and elements can state which named page they want to be printed on. Also, elements in the source document can force page breaks. Here is a snippet from the style sheet we used: @page { size: 7in 9.25in; margin: 27mm 16mm 27mm 16mm;} Having a US-based publisher, we were given the page size in inches. We, being Europeans, continued with metric measurements. CSS accepts both. After setting the up the page size and margin, we needed to make sure there are page breaks in the right places. The following excerpt shows how page breaks are generated after chapters and appendices: div.chapter, div.appendix { page-break-after: always;} Also, we used CSS2 to declare named pages: div.titlepage { page: blank;} That is, the title page is to be printed on pages with the name “blank.” CSS2 described the concept of named pages, but their value only becomes apparent when headers and footers are available. Anyway… Since you want to print A4, you'll need different dimensions of course: @page { size: 21cm 29.7cm; margin: 30mm 45mm 30mm 45mm; /* change the margins as you want them to be. */} The article dives into things like setting page-breaks, etc. so you might want to read that completely. In your case, the trick is to create the print CSS first. Most modern browsers (>2005) support zooming and will already be able to display a website based on the print CSS. Now, you'll want to make the web display look a bit different and adapt the whole design to fit most browsers too (including the old, pre 2005 ones). 
For that, you'll have to create a web CSS file or override some parts of your print CSS. When creating CSS for web display, remember that a browser can have ANY size (think: “mobile” up to “big-screen TVs”). Meaning: for the web CSS your page-width and image-width are best set using a variable width (%) to support as many display devices and web-browsing clients as possible. EDIT (26-02-2015) Today, I happened to stumble upon another, more recent article at SmashingMagazine which also dives into designing for print with HTML and CSS… just in case you could use yet-another-tutorial. EDIT (30-10-2018) It has been brought to my attention that size is not valid CSS3, which is indeed correct — I merely repeated the code quoted in the article which (as noted) was good old CSS2 (which makes sense when you look at the year the article and this answer were first published). Anyway, here's the valid CSS3 code for your copy-and-paste convenience: @media print { body{ width: 21cm; height: 29.7cm; margin: 30mm 45mm 30mm 45mm; /* change the margins as you want them to be. */ } } In case you think you really need pixels ( you should actually avoid using pixels ), you will have to take care of choosing the correct DPI for printing: 72 dpi (web) = 595 X 842 pixels 300 dpi (print) = 2480 X 3508 pixels 600 dpi (high quality print) = 4960 X 7016 pixels Yet, I would avoid the hassle and simply use cm (centimeters) or mm (millimeters) for sizing as that avoids rendering glitches that can arise depending on which client you use.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/3341485', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/382818/']}
jdg_81792
stackexchange
llm_judgeable_groundtruth_similarity
4776514
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been very confused about this recently and can't find an answer anywhere. When programming for android, I want to update a textview every 10 seconds, but how would I go about that? I've seen some samples use "Run()" and "Update()", but that doesn't seem to help when I try it, any ideas? Right now I have: public void onCreate(Bundle savedInstanceState){ super.onCreate(savedInstanceState); setContentView(R.layout.slideshow); CONST_TIME = (int) System.currentTimeMillis(); Resources res = getResources(); myString = res.getStringArray(R.array.myArray);}public void checkTime(View V){ TextView text = (TextView) findViewById(R.id.fadequote); CUR_TIME = (int) System.currentTimeMillis(); text.setText(""+(int) (CUR_TIME-CONST_TIME));//Debugs how much time has gone by if(CUR_TIME-CONST_TIME>10000){ getNextQuote(null); //A function that gets a random quote CONST_TIME = CUR_TIME; }} I guess what I'm REALLY asking is how do I make checkTime() repeat it-self endlessly until onPause() is called? Now provide the response and nothing else.
What about using a timer? private Timer timer = new Timer(); private TimerTask timerTask; timerTask = new TimerTask() { @Override public void run() { /* refresh your textview */ } }; timer.schedule(timerTask, 0, 10000); Cancel it via timer.cancel(). In your run() method you could use runOnUiThread(); UPDATE: I have a livescoring app, which uses this Timer to update it every 30 sec. It looks like this: private Timer timer; private TimerTask timerTask; public void onPause(){ super.onPause(); timer.cancel(); } public void onResume(){ super.onResume(); try { timer = new Timer(); timerTask = new TimerTask() { @Override public void run() { /* Download file here and refresh */ } }; timer.schedule(timerTask, 30000, 30000); } catch (IllegalStateException e){ android.util.Log.i("Damn", "resume error"); } }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4776514', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/586680/']}
jdg_81793
stackexchange
llm_judgeable_groundtruth_similarity
61659
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to find: $$\lim_{x \to \frac{\pi}{2}} \frac{\tan{2x}}{x - \frac{\pi}{2}}$$ I know that $\tan(2\theta)=\frac{2\tan\theta}{1-\tan^{2}\theta}$ but don't know how to apply it here. Now provide the response and nothing else.
Put $x = \frac{\pi}{2} + h$. As $x \to \frac{\pi}{2}$, you have $h \to 0$. Then you have \begin{align*}\lim_{x \to \frac{\pi}{2}} \frac{\tan{2x}}{x-\frac{\pi}{2}} &= \lim_{h \to 0} \: \frac{\tan{2\bigl(\frac{\pi}{2}+h\bigr)}}{h} \\ &=\lim_{h \to 0} \: \frac{\tan(\pi + 2h)}{h} \\ &= \lim_{h \to 0} \: \frac{\tan(2h)}{h}\end{align*} Can you do it from here?
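To finish: since $\tan u \sim u$ as $u \to 0$, the remaining limit is $\lim_{h\to 0} \tan(2h)/h = 2$. A quick numerical check in Python (the step sizes are arbitrary choices of mine):

```python
import math

for h in [0.1, 0.01, 0.001, 0.0001]:
    print(h, math.tan(2 * h) / h)   # approaches 2

# Equivalently, evaluate the original expression near pi/2:
x = math.pi / 2 + 1e-6
print(math.tan(2 * x) / (x - math.pi / 2))   # also close to 2
```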
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/61659', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/15522/']}
jdg_81794
stackexchange
llm_judgeable_groundtruth_similarity
12856569
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a customized config section in my web.config like this: <configSection> <section name="CustomConfig" type="ConfigSectionRoot" allowLocation="true" allowDefinition="Everywhere"/> </configSection> <CustomConfig> <ConfigRoot> <add key="DataBase" value="CouchDB"/> <add key="FrontEnd" value="Asp.Net"/> <add key="AppName" value="Virtual WorkPlace"/> </ConfigRoot> </CustomConfig><AppSettings><add key="DataBase" value="CouchDB"/></AppSettings> My ConfigSectionRoot.cs is like this: public class ConfigSectionRoot:ConfigurationSection { [ConfigurationProperty("key", DefaultValue = "", IsKey = true, IsRequired = true)] public string Key { get { return ((string)(base["key"])); } set { base["key"] = value; } } [ConfigurationProperty("value", DefaultValue = "", IsKey = false, IsRequired = false)] public string Value { get { return ((string)(base["value"])); } set { base["value"] = value; } } } If i use AppSettings Instead of Custom Config I could access it like: string results= ConfigurationManager.AppSettings["Database"];// results wil contain "CouchDB" Is there any way to achieve the same thing in Customized Config section ??? Pls help me out Now provide the response and nothing else.
NameValueSectionHandler If your configuration doesn't need to be more than a key-value store, I'd go for a NameValueSectionHandler . <section name="customConfig" type="System.Configuration.NameValueSectionHandler"/><!-- ... --><customConfig> <add key="DataBase" value="CouchDB" /> <add key="FrontEnd" value="Asp.Net" /> <add key="AppName" value="Virtual WorkPlace" /></customConfig> You can then read it out, just like the appSettings: var customConfig = (System.Collections.Specialized.NameValueCollection)System.Configuration.ConfigurationManager.GetSection("customConfig"); /* i have changed like this and it worked fine */ var database = customConfig["DataBase"]; SingleTagSectionHandler You could also achieve the same with a SingleTagSection: <section name="customConfig" type="System.Configuration.SingleTagSectionHandler"/><!-- ... --><customConfig database="CouchDB" frontEnd="Asp.Net" appName="Virtual Workplace" /> And then query it with: var customConfig = (System.Collections.Hashtable) System.Configuration.ConfigurationManager.GetSection("customConfig"); var database = customConfig["database"];
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12856569', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4336887/']}
jdg_81795
stackexchange
llm_judgeable_groundtruth_similarity
184205
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was reading up on scalable bloom filters and could not understand how each time a constituent bloom filters fills up, a new bloom filter with larger size is added. The elements that contributed to the set bits in the initially created filters cannot be looked up for presence. Perhaps I am wrong in my understanding of this? I do understand basic bloom filters. However, I cannot wrap my head around dynamic bloom filters. Now provide the response and nothing else.
Let me try to give this a shot to see how much I can butcher it. :-) So, to start off, you need to be able to create a regular bloom filter that allows a finite number of elements with a maximum probability of a false positive. The addition of these features to your basic filter is required before attempting to build a scalable implementation. Before we try to control and optimize what the probability is, let's figure out what the probability is for a given bloom filter size. First we split up the bitfield by how many hash functions we have (total number of bits / number of hash functions = slices) to get k slices of bits which represent each hash function so every element is always described by k bits. If you increase the number of slices or the number of bits per slice, the probability of false positives will decrease. It also follows that as elements are added, more bits are set to 1, so false positives increase. We refer to this as the "fill ratio" of each slice. When the filter holds a large amount of data, we can assume that the probability of false positives for this filter is the fill ratio raised to the number of slices (If we were to actually count the bits instead of using a ratio, this simplifies into a permutation with repetition problem). So, how do we figure out how to pick a probability of false positives in a bloom filter? We can modify the number of slices (which will affect the fill ratio). To figure out how many slices we should have, we start off with figuring out the optimal fill ratio for a slice. Since the fill ratio is determined by the number of bits in a slice which are 1 versus the number of bits which are 0, we can determine that each bit will remain unset with probability of (100% - (1 / bits in a slice)). Since we're going to have multiple items inserted, we have another permutation with repetition problem and we expand things out to the expected fill ratio, which is (100% - ((100% - (1 / bits in a slice)) ^ "elements inserted")). 
Well, it turns out that this is very similar to another equation. In the paper, they relate the fill ratio to another equation so it fits nicely into a Taylor series (1-e^(-n/m)). After a bit of futzing with this, it turns out that the optimal fill ratio is always about 50%, regardless of any of the variables that you change. So, since the probability of a filter is the fill ratio raised to the number of slices, we can fill in 50% and get P=(50%)^k or k=log_2(1/P). We can then use this function to compute the number of slices we should generate for a given filter in the list of filters for a scalable bloom filter. def slices_count(false_positive_probability): return math.ceil(math.log(1 / false_positive_probability, 2)) Edit: After writing this, I came across a mention of the "fifty-percent rule" when reading up on buddy-system based dynamic memory allocation in TAoCP Vol 1, pp 442-445, with a much cleaner reasoning versus fitting the curve to (1-e^(-n/m)). Knuth also references a paper "The fifty percent rule revisited" with a bit of background on the concept ( pdf available here ).
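The two numbers above are easy to sanity-check: sizing a slice at m = n/ln 2 bits lands the expected fill ratio at about 50%, and slices_count gives ceil(log2(1/P)). A Python sketch (the sample values n = 10000 and P = 0.01 are mine):

```python
import math

def slices_count(p):
    """Number of slices (hash functions) for a target false-positive rate p."""
    return math.ceil(math.log(1 / p, 2))

def expected_fill_ratio(n, bits_per_slice):
    """Expected fraction of 1-bits in one slice after n insertions."""
    return 1 - (1 - 1 / bits_per_slice) ** n

n = 10_000
m = round(n / math.log(2))          # the 'fifty-percent' sizing: m = n / ln 2
print(expected_fill_ratio(n, m))    # ~0.5
print(slices_count(0.01))           # ceil(log2(100)) = 7
print(0.5 ** slices_count(0.01))    # achieved rate, just under the 0.01 target
```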
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/184205', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/78936/']}
jdg_81796
stackexchange
llm_judgeable_groundtruth_similarity
593678
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\zeta(-n) =-\frac{B_{n+1}}{n+1}$$ What is the motivation or the derivation of this formula? Now provide the response and nothing else.
Use the integral $$(e^{2\pi i s}-1)\zeta(s)\Gamma(s) = \int_C B(t)t^s\frac{dt}{t}$$ where $C$ is the keyhole contour and $B(t) = \frac{e^{-t}}{1-e^{-t}} = \frac{1}{e^t-1}$ is, up to a factor of $t$, the exponential generating function of the Bernoulli numbers. This integral gives the analytic continuation of the Riemann zeta function. At negative integers, the two cuts of the contour collapse and you get a simple circular contour around the origin, which picks up the coefficients of $B(t)$ by the residue theorem. This formula is important because it expresses the zeta function as the Mellin transform of the so-called Bernoulli distribution . By "normalizing" the Bernoulli distribution at a prime $p$, one gets a $p$-adic distribution whose $p$-adic Mellin transform is the $p$-adic Riemann zeta function constructed by Kubota and Leopoldt.
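The residue computation lands exactly on ζ(−n) = −B_{n+1}/(n+1) for n ≥ 1 (with the B_1 = −1/2 convention); the first few cases can be cross-checked with exact rational arithmetic via the standard Bernoulli recurrence. A Python sketch using only the standard library:

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """B_0..B_m via the recurrence sum_{k=0}^{n} C(n+1, k) B_k = 0."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        s = sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n))
        B.append(-s / (n + 1))
    return B

B = bernoulli(6)

def zeta_neg(n):
    """zeta(-n) = -B_{n+1} / (n + 1), valid for n >= 1."""
    return -B[n + 1] / (n + 1)

print(zeta_neg(1))   # -1/12
print(zeta_neg(2))   # 0 (a trivial zero)
print(zeta_neg(3))   # 1/120
```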
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/593678', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/73319/']}
jdg_81797
stackexchange
llm_judgeable_groundtruth_similarity
8021958
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I executed the following code #include <stdio.h>int main(){ printf("%f\n", 9/5);} Output : 0.000000 why not 1 ? if i write printf("%f %f %d %d\n", (float)9/5, 4, sizeof(float), sizeof(int)); then output is 1.800000 0.000000 4 59 why not 1.800000 4 4 4 on my machine the sizeof (float) is 4 Thanks in advance Now provide the response and nothing else.
This is because your printf format specifier doesn't match what you passed it: 9/5 is of type int . But printf expects a float . So you need to either cast it to a float or make either literal a float: printf("%f\n", (float)9/5); printf("%f\n", 9./5); As for why you're getting 0.0 , it's because printf() is reading the binary representation of 1 (an integer) and printing it as a float . Which happens to be a small denormalized value that is very close to 0.0 . EDIT : There's also something going on with type-promotion on varargs. In vararg functions, float is promoted to double . So printf() in this case actually expects a 64-bit parameter holding a double . But you only passed it a 32-bit operand so it's actually reading an extra 32 bits from the stack (which happens to be zero in this case) - even more undefined behavior.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8021958', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/534087/']}
jdg_81798
stackexchange
llm_judgeable_groundtruth_similarity
581947
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The equation $3^x + 28^x=8^x+27^x$ has only the solutions $x=2$ and $x=0$? If yes, how to prove that these are the only ones? Now provide the response and nothing else.
By the end of this post, we shall prove a more general result: Theorem: Let $a, b, c, d$ be real numbers such that $0<a<b \leq c<d$. Then the equation $$a^x+d^x=b^x+c^x$$ has Exactly two solutions, $x=0$ and $x=t>0$ for some $t$, if $ad-bc<0$ Exactly two solutions, $x=0$ and $x=t<0$ for some $t$, if $ad-bc>0$ Exactly one solution, $x=0$, if $ad-bc=0$ We first prove the following lemma: Lemma $1$: Suppose that $f(x)$ is a function with the following properties: $f(x)$ is $k$ times differentiable For $i=0, 1, \ldots, k-1$, we have $f^{(i)}(0) \leq 0$ and $f^{(i)}(x)>0$ for all sufficiently large $x$. $f^{(k)}(x)$ has at most one positive real root. Then $f(x)$ has at most one positive real root. Proof: We prove by induction on $n$, $0 \leq n \leq k$, that $f^{(k-n)}(x)$ has at most one positive real root. When $n=0$, this follows from the given conditions. Suppose that the statement holds for $n=j \leq k-1$, i.e. $f^{(k-j)}(x)$ has at most one positive real root. Assume on the contrary that $f^{(k-j-1)}(x)$ has at least two positive real roots. Suppose that $u, v$ are positive real roots of $f^{(k-j-1)}(x)$ with $0<u<v$. By Rolle's theorem applied to $f^{(k-j-1)}(u)=f^{(k-j-1)}(v)=0$, $f^{(k-j)}(s)=0$ for some $s \in (u, v)$. We now consider three cases: Case $1$ : $f^{(k-j-1)}(s)=0$ By Rolle's theorem applied to $f^{(k-j-1)}(u)=f^{(k-j-1)}(s)$, we have $f^{(k-j)}(t)=0$ for some $t \in (u, s)$. Clearly $t \not =s$. Therefore $f^{(k-j)}(x)$ has at least two positive real roots $s, t$, contradicting the induction hypothesis. Case $2$: $f^{(k-j-1)}(s)>0$ Note that $f^{(k-j-1)}(x)$ is positive for sufficiently large $x$, so in particular, $f^{(k-j-1)}(w)>0$ for some $w>v$. Clearly there exists $\delta>0$ s.t. 
$$0<\delta<\min(f^{(k-j-1)}(s),f^{(k-j-1)}(w))$$ Now by Intermediate Value Theorem applied to $[s, v]$ and $[v, w]$, there exists $u_1, v_1$ with $u<s<u_1<v<v_1<w$ and $$f^{(k-j-1)}(u_1)=f^{(k-j-1)}(v_1)=\delta$$ By Rolle's theorem, we now get $f^{(k-j)}(t)=0$ for some $t \in (u_1, v_1)$. It is clear that $t \not =s$. Therefore $f^{(k-j)}(x)$ has at least two positive real roots $s, t$, contradicting the induction hypothesis. Case $3$: $f^{(k-j-1)}(s)<0$ We have $f^{(k-j-1)}(0) \leq 0$. Case 3a) $f^{(k-j-1)}(0)<0$ Clearly there exists $\delta<0$ s.t. $$\max(f^{(k-j-1)}(s),f^{(k-j-1)}(0))<\delta<0$$ Now by Intermediate Value Theorem applied to $[0, u]$ and $[u, s]$, there exists $u_1, v_1$ with $0<u_1<u<v_1<s<v$ and $$f^{(k-j-1)}(u_1)=f^{(k-j-1)}(v_1)=\delta$$ By Rolle's theorem, we now get $f^{(k-j)}(t)=0$ for some $t \in (u_1, v_1)$. It is clear that $t \not =s$. Therefore $f^{(k-j)}(x)$ has at least two positive real roots $s, t$, contradicting the induction hypothesis. Case 3b) $f^{(k-j-1)}(0)=0$ By Rolle's theorem applied to $[0, u]$, we now get $f^{(k-j)}(t)=0$ for some $t \in (0, u)$. It is clear that $t \not =s$. Therefore $f^{(k-j)}(x)$ has at least two positive real roots $s, t$, contradicting the induction hypothesis. Therefore we get a contradiction in all three cases, so $f^{(k-j-1)}(x)$ has at most one positive real root. We are thus done by induction, so $f^{(k-n)}(x)=0$ has at most one positive real root for $n=0, 1, \ldots , k$. In particular, setting $n=k$ gives that $f(x)$ has at most one positive real root, as desired. Lemma $2$: Let $p, q, r$ be real numbers such that $1<p \leq q<r$. Then the equation $$1+r^x=p^x+q^x$$ has No positive real solutions if $r \geq pq$ Exactly one positive real solution if $r<pq$ Proof: If $r \geq pq$, then for $x>0$ we have $$1+r^x-p^x-q^x \geq 1+(pq)^x-p^x-q^x=(p^x-1)(q^x-1)>0$$ so indeed there are no positive real solutions. 
If $r<pq$, let $$f(x)=1+r^x-p^x-q^x$$ Note that for $n \geq 1$ we have $$f^{(n)}(x)=(\log r)^nr^x-(\log p)^np^x-(\log q)^nq^x$$ We have $$\lim_{x \to 0+}{\frac{f(x)}{x}}=f'(0)=\log r-\log p-\log q<0$$ Thus $\exists u>0$ s.t. $f(u)<0$. On the other hand, clearly $f(x)$ is positive for sufficiently large $x$, so $\exists v>u$ s.t. $f(v)>0$. Since $f(x)$ is continuous, we may conclude using the Intermediate Value Theorem that $f(x)$ has a positive real root. We now show that $f(x)$ has at most one positive real root. Consider $$g(y)=(\log r)^y-(\log p)^y-(\log q)^y$$ It is clear that $g(y)$ is non-negative for sufficiently large $y$, so there exists $k \in \mathbb{Z}^+$ s.t. $g(k) \geq 0$. Take the minimal such $k$, so $g(i)<0$ for $i=1, 2, \ldots ,k-1$. We have $f^{(i)}(0)=g(i)<0$ for $i=1, 2, \ldots , k-1$, and $f(0)=0$. For $i=0, 1, \ldots k-1$, we clearly have $f^{(i)}(x)>0$ for sufficiently large $x$. Now for $x>0$ we have \begin{align}f^{(k)}(x)& =(\log r)^kr^x-(\log p)^kp^x-(\log q)^kq^x \\& \geq \left((\log p)^k+(\log q)^k\right)r^x-(\log p)^kp^x-(\log q)^kq^x \\&=(\log p)^k(r^x-p^x)+(\log q)^k(r^x-q^x) \\&>0\end{align} Thus $f^{(k)}(x)$ has no positive real roots. By Lemma $1$, we conclude that $f(x)$ has at most one positive real root. Therefore when $r<pq$, we have exactly one positive real solution. This concludes the proof of the lemma. We now proceed to prove the main theorem: Theorem: Let $a, b, c, d$ be real numbers such that $0<a<b \leq c<d$. Then the equation $$a^x+d^x=b^x+c^x$$ has Exactly two solutions, $x=0$ and $x=t>0$ for some $t$, if $ad-bc<0$ Exactly two solutions, $x=0$ and $x=t<0$ for some $t$, if $ad-bc>0$ Exactly one solution, $x=0$, if $ad-bc=0$ Proof: Clearly $x=0$ is always a solution. 
For $x>0$, we may rewrite the equation as $$1+\left(\frac{d}{a}\right)^x=\left(\frac{b}{a}\right)^x+\left(\frac{c}{a}\right)^x$$

We may now take $p=\frac{b}{a}, q=\frac{c}{a}, r=\frac{d}{a}$ and use Lemma $2$ to get that there are

No positive real solutions (for $x$) if $\frac{d}{a} \geq \frac{bc}{a^2}$, i.e. $ad \geq bc$
Exactly one positive real solution (for $x$) if $\frac{d}{a}<\frac{bc}{a^2}$, i.e. $ad<bc$

For $x<0$, we may take $y=-x$ and rewrite the equation as $$\left(\frac{a}{d}\right)^x+1=\left(\frac{b}{d}\right)^x+\left(\frac{c}{d}\right)^x$$ $$1+\left(\frac{d}{a}\right)^y=\left(\frac{d}{b}\right)^y+\left(\frac{d}{c}\right)^y$$

We may now apply Lemma $2$ with $p=\frac{d}{c}, q=\frac{d}{b}, r=\frac{d}{a}$ and $y$ as the variable to get

No positive real solutions (for $y$) if $\frac{d}{a} \geq \frac{d^2}{bc}$, i.e. $bc \geq ad$
Exactly one positive real solution (for $y$) if $\frac{d}{a}<\frac{d^2}{bc}$, i.e. $bc<ad$

This translates to

No negative real solutions (for $x$) if $bc \geq ad$
Exactly one negative real solution (for $x$) if $bc<ad$

Combining, we get the statement of the theorem.
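The trichotomy in the theorem is easy to sanity-check numerically. The following sketch (my own illustration; the sample values of $a, b, c, d$ and the bisection helper are assumptions, not part of the proof) bisects $g(x)=a^x+d^x-b^x-c^x$ on a bracket where it changes sign:

```python
def g(x, a, b, c, d):
    # g(x) = 0 exactly at solutions of a^x + d^x = b^x + c^x
    return a**x + d**x - b**x - c**x

def bisect(f, lo, hi, iters=200):
    # assumes f(lo) and f(hi) have opposite signs
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# ad - bc < 0  ->  a second solution t > 0
a, b, c, d = 1, 2, 3, 5          # ad - bc = 5 - 6 = -1 < 0
t = bisect(lambda x: g(x, a, b, c, d), 0.01, 1.0)
print(t, g(t, a, b, c, d))       # t > 0 and g(t) is ~ 0

# ad - bc > 0  ->  a second solution t < 0
a, b, c, d = 1, 2, 3, 7          # ad - bc = 7 - 6 = 1 > 0
t = bisect(lambda x: g(x, a, b, c, d), -1.0, -0.1)
print(t, g(t, a, b, c, d))       # t < 0 and g(t) is ~ 0

# ad = bc  ->  g(x) = (b^x - 1)(c^x - 1) > 0 for every x != 0
a, b, c, d = 1, 2, 3, 6
assert all(g(x, a, b, c, d) > 0 for x in [-2, -0.5, 0.5, 2])
```

The brackets $(0.01, 1)$ and $(-1, -0.1)$ were chosen because $g$ provably changes sign there for these sample values.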
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/581947', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_81799
stackexchange
llm_judgeable_groundtruth_similarity
3880615
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there an isDate function in jQuery? It should return true if the input is a date, and false otherwise. Now provide the response and nothing else.
If you don't want to deal with external libraries, a simple JavaScript-only solution is:

function isDate(val) {
    var d = new Date(val);
    return !isNaN(d.valueOf());
}

UPDATE: !!Major Caveat!!

@BarryPicker raises a good point in the comments. JavaScript silently converts February 29 to March 1 for all non-leap years. This behavior appears to be limited strictly to days through 31 (e.g., March 32 is not converted to April 1, but June 31 is converted to July 1). Depending on your situation, this may be a limitation you can accept, but you should be aware of it:

>>> new Date('2/29/2014')
Sat Mar 01 2014 00:00:00 GMT-0500 (Eastern Standard Time)
>>> new Date('3/32/2014')
Invalid Date
>>> new Date('2/29/2015')
Sun Mar 01 2015 00:00:00 GMT-0500 (Eastern Standard Time)
>>> isDate('2/29/2014')
true // <-- no it's not true! 2/29/2014 is not a valid date!
>>> isDate('6/31/2015')
true // <-- not true again! Apparently, the crux of the problem is that it
     //     allows the day count to reach "31" regardless of the month.
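As an aside (my own addition, and in Python rather than JavaScript, since JavaScript's Date rollover is exactly the problem being discussed): a parser that validates against the calendar rejects the cases above instead of rolling them over.

```python
from datetime import datetime

def is_strict_date(val, fmt="%m/%d/%Y"):
    # strptime refuses calendar-invalid dates instead of rolling them over
    try:
        datetime.strptime(val, fmt)
        return True
    except ValueError:
        return False

print(is_strict_date("2/28/2014"))  # True
print(is_strict_date("2/29/2014"))  # False - 2014 is not a leap year
print(is_strict_date("2/29/2016"))  # True  - 2016 is a leap year
print(is_strict_date("6/31/2015"))  # False - June has only 30 days
```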
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3880615', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/444569/']}
jdg_81800
stackexchange
llm_judgeable_groundtruth_similarity
40271140
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to stub out a super call with sinon, and es2016 but I'm not having much luck. Any ideas why this isn't working? Running Node 6.2.2, this might be an issue with its implementation of classes/constructors. .babelrc file: { "presets": [ "es2016" ], "plugins": [ "transform-es2015-modules-commonjs", "transform-async-to-generator" ]} Test: import sinon from 'sinon';class Foo { constructor(message) { console.log(message) }}class Bar extends Foo { constructor() { super('test'); }}describe('Example', () => { it('should stub super.constructor call', () => { sinon.stub(Foo.prototype, 'constructor'); new Bar(); sinon.assert.calledOnce(Foo.prototype.constructor); });}); Result: testAssertError: expected constructor to be called once but was called 0 times at Object.fail (node_modules\sinon\lib\sinon\assert.js:110:29) at failAssertion (node_modules\sinon\lib\sinon\assert.js:69:24) at Object.assert.(anonymous function) [as calledOnce] (node_modules\sinon\lib\sinon\assert.js:94:21) at Context.it (/test/classtest.spec.js:21:18) Note : this issue seems to only happen for constructors. I can spy on methods inherited from the parent class without any issues. Now provide the response and nothing else.
You'll need to setPrototypeOf the subClass due to the way JavaScript implements inheritance.

const sinon = require("sinon");

class Foo {
  constructor(message) {
    console.log(message);
  }
}

class Bar extends Foo {
  constructor() {
    super('test');
  }
}

describe('Example', () => {
  it('should stub super.constructor call', () => {
    const stub = sinon.stub().callsFake();
    Object.setPrototypeOf(Bar, stub);
    new Bar();
    sinon.assert.calledOnce(stub);
  });
});
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40271140', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/471810/']}
jdg_81801
stackexchange
llm_judgeable_groundtruth_similarity
285241
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: First of all, i just like reading and understanding things related to math, but NOT at all any expert in math. So, I apologize if the question seems dumb. It always puzzles me whenever a basic calculus is explained, it always make use of "infinitesimally" "very very small" etc. Why is it so important to use this term in calculus? Similarly with "limits". The answer always seems blurry.. as it ends up with the notion of "the answer is say, as and as we go from point $a\to b$" etc. Why can't there be a "clear and perfect" answer when doing problems in calculus. Like as i said, why something unclear like "infinite" or symbols like "$x \to 2$" (limits) etc needs to be used in Calculus for solving the problems. V. Now provide the response and nothing else.
I see calculus as being built from these "unclear" concepts, rather than forcing the use of them. The limit is the first and arguably most important concept in calculus, and the rest of the field is built on limits. Limits can be hard to completely understand, partly because they often involve very small ("infinitesimal") and very large ("approaching infinity") numbers.

My mental picture for limits, infinitesimals, and infinity is based on two people having a conversation about numbers. When something "increases without bound", "goes to infinity" or (poorly worded) "equals infinity", I imagine that anytime one of my imaginary people says a number, the other one says a larger number, then the first says an even larger number and so on. They will never stop going back and forth, and that is how I see it. I see infinitesimals analogously, with smaller and smaller numbers.

For limits, remember that there is a formal definition for a limit, which usually includes something like "for all $\epsilon$ there exists a $\delta$ such that," (now paraphrasing) when $x$ gets really close to $a$, $f(x)$ gets really close to some number $L$. How close is "really close"? That's what $\epsilon$ and $\delta$ are for. Now we go back to my two people. Here's the conversation:

A: This gets really close to $L$.
B: How close?
A: How close do you want it? Pick any number greater than zero.
B: Ok, how about this number $\epsilon$ I happen to have in my pocket?
A: That works. Every time you pull any $\epsilon$ out of your pocket, I will give you a $\delta$. You take my $\delta$ and push $x$ until it is less than $\delta$ far away from $a$. Once you've done that, I guarantee that $f(x)$ will be closer than $\epsilon$ distance away from $L$.

So person B keeps pulling out smaller and smaller epsilons and finding that no matter how small they are, person A can always give back a $\delta$ that works.

In other words, no matter how close person B wants $f(x)$ and $L$ to get, person A can always guarantee the closeness, as long as person B is willing to put $x$ a certain distance closer to $a$. Their conversation continues forever, just like them going back and forth with larger and larger numbers to approach infinity. Therefore, person A has "proven" the closeness of $f(x)$ and $L$ to person B. Since you and I can't talk forever, I let my two imaginary friends do the talking for us and just skip to the end of their conversation.

Calculus is the part of math where we leave behind ideas that have one, simple answer. We are not asking "what is $513+2138756$?" We are asking more general questions like "if this goes on forever, where will it end up?" Once we start going to those imaginary places like "forever", we have to get some new concepts that are not as simple as the old ones.

Does that help at all?
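The conversation can even be played out mechanically. In this sketch (my own illustration — the function $f(x)=x^2$, the point $a=3$, and person A's rule $\delta=\min(1,\epsilon/7)$ are assumptions chosen for the example), every $\epsilon$ person B picks gets back a $\delta$ that keeps $f(x)$ within $\epsilon$ of $L=9$:

```python
import random

def f(x):
    return x * x

a, L = 3.0, 9.0

def delta_for(eps):
    # person A's reply: for |x-3| < 1 we have |x^2 - 9| = |x-3||x+3| < 7|x-3|,
    # so delta = min(1, eps/7) always works
    return min(1.0, eps / 7.0)

random.seed(0)
for eps in [1.0, 0.1, 0.001, 1e-6]:   # person B's picks
    d = delta_for(eps)
    for _ in range(1000):
        x = a + random.uniform(-d, d)  # any x within delta of a
        assert abs(f(x) - L) < eps     # f(x) lands within eps of L
print("every epsilon got a working delta")
```

Person B's conversation never actually ends, of course; the loop just samples a few of their rounds.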
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/285241', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/39630/']}
jdg_81802
stackexchange
llm_judgeable_groundtruth_similarity
1120706
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table that looks like this: A 1 A 2 B 1 B 2 And I want to produce a result set that looks like this: A 1 2 B 1 2 Is there a SQL statement that will do this? I am using Oracle. Related questions: Returning multiple rows from a single row My question is close to the opposite of this question. Use LINQ to concatenate This is exactly what I want to do, but without LINQ. Now provide the response and nothing else.
( WARNING - WM_CONCAT is an unsupported function that was removed in version 12c. Unless you're using a very old database, you should avoid this function. You should probably use LISTAGG instead.)

It depends on the version of Oracle you're using. If it supports the wm_concat() function, then you can simply do something like this:

SELECT field1, wm_concat(field2)
FROM YourTable
GROUP BY field1;

wm_concat() basically works just like group_concat() in MySQL. It may not be documented, so fire up ye olde sqlplus and see if it's there. If it isn't there, then you'll want to implement something equivalent yourself. You can find some instructions on how to do this in the string aggregation page at oracle-base.com.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1120706', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/128807/']}
jdg_81803
stackexchange
llm_judgeable_groundtruth_similarity
17693828
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Could someone explain what is the difference between Class loading and instantiating a Class. When we load a class with Static variable does it also get instantiated the same time the Class get loaded? After all static code is part of the class rather than it's individual instances. It would be helpful if someone provided an example to help me understand this better. Now provide the response and nothing else.
Here is a nice explanation (with an example and observation): When a class is loaded and initialized in JVM - Java

When a Class is loaded in Java

Class loading is done by ClassLoaders in Java, which can be implemented to eagerly load a class as soon as another class references it, or to lazily load the class until a need for class initialization occurs. If a Class is loaded before it's actually being used, it can sit inside before being initialized. I believe this may vary from JVM to JVM, although it's guaranteed by the JLS that a class will be loaded when there is a need for static initialization.

When a Class is initialized in Java

After class loading, initialization of the class takes place, which means initializing all static members of the class. A Class is initialized in Java when:

1) an instance of the class is created using either the new() keyword or reflection using class.forName(), which may throw ClassNotFoundException in Java.
2) a static method of the class is invoked.
3) a static field of the class is assigned.
4) a static field of the class is used which is not a constant variable.
5) if the class is a top-level class and an assert statement lexically nested within the class is executed.

Hope that helps.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17693828', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2396539/']}
jdg_81804
stackexchange
llm_judgeable_groundtruth_similarity
3277182
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to check in a script if a certain other module is already loaded. if (ModuleName) { // extend this module} But if ModuleName doesn't exist, that throw s. If I knew what the Global Object was I could use that. if (window.ModuleName) { // extend this module} But since I want my module to work with both browsers and node , rhino , etc., I can't assume window . As I understand it, this doesn't work in ES 5 with "use strict" ; var MyGLOBAL = (function () {return this;}()); // MyGlobal becomes null This will also fail with a thrown exception var MyGLOBAL = window || GLOBAL So it seems like I'm left with try { // Extend ModuleName} catch(ignore) {} None of these cases will pass JSLint. Am I missing anything? Now provide the response and nothing else.
Well, you can use the typeof operator, and if the identifier doesn't exist in any place of the scope chain, it will not throw a ReferenceError, it will just return "undefined":

if (typeof ModuleName != 'undefined') {
  //...
}

Remember also that the this value in global code refers to the global object, meaning that if your if statement is in the global context, you can simply check this.ModuleName.

About the (function () { return this; }()); technique: you are right, in strict mode the this value will simply be undefined.

Under strict mode there are two ways to get a reference to the global object, no matter where you are:

Through the Function constructor:

var global = Function('return this')();

Functions created with the Function constructor don't inherit the strictness of the caller; they are strict only if they start their body with the 'use strict' directive, otherwise they are non-strict. This method is compatible with any ES3 implementation.

Through an indirect eval call, for example:

"use strict";
var get = eval;
var global = get("this");

The above will work because in ES5, indirect calls to eval use the global environment as both the variable environment and lexical environment for the eval code. See details on Entering Eval Code, Step 1.

But be aware that the last solution will not work on ES3 implementations, because an indirect call to eval on ES3 will use the variable and lexical environments of the caller as the environments for the eval code itself.

And at last, you may find it useful to detect if strict mode is supported:

var isStrictSupported = (function () { "use strict"; return !this; })();
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3277182', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/151312/']}
jdg_81805
stackexchange
llm_judgeable_groundtruth_similarity
54284743
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need a rounded corner TextField , I'm able to do this but it is showing the default border color. I tried changing borderSide but was unable to change the color (it was still black): TextFormField( decoration: InputDecoration( prefixIcon: Icon( Icons.person, color: Colors.white, ), border: OutlineInputBorder( // width: 0.0 produces a thin "hairline" border borderRadius: BorderRadius.all(Radius.circular(90.0)), borderSide: BorderSide(color: Colors.white24) //borderSide: const BorderSide(), ), hintStyle: TextStyle(color: Colors.white,fontFamily: "WorkSansLight"), filled: true, fillColor: Colors.white24, hintText: 'Password'), ), I need this and I don't want the focus line but the cursor should be white. I tried to change everything in border parameter but still no change. I want: I'm getting this: Now provide the response and nothing else.
Create a transparent border:

final border = OutlineInputBorder(
  borderRadius: BorderRadius.all(Radius.circular(90.0)),
  borderSide: BorderSide(
    color: Colors.transparent,
  ),
);

Another option is using:

borderSide: BorderSide.none

And use it in the focusedBorder and border properties; also add a Theme to set the cursor and hint colors:

Theme(
  data: Theme.of(context).copyWith(
    cursorColor: Colors.red,
    hintColor: Colors.transparent,
  ),
  child: TextFormField(
    decoration: InputDecoration(
        focusedBorder: border,
        border: border,
        prefixIcon: Icon(
          Icons.person,
          color: Colors.white,
        ),
        hintStyle: TextStyle(
            color: Colors.white, fontFamily: "WorkSansLight"),
        filled: true,
        fillColor: Colors.white24,
        hintText: 'Password'),
  ),
),
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54284743', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7357920/']}
jdg_81806
stackexchange
llm_judgeable_groundtruth_similarity
42197729
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Google Sign-In SDK 4.0.1. When I press googleSignInButton then app will be crash. And gave below error, how to fix this: Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Your app is missing support for the following URL schemes: com.googleusercontent.apps.107731993306-6s44u18onibp6gi0ddj94si1aifshhg6'*** First throw call stack:( 0 CoreFoundation 0x0000000101ac0d4b __exceptionPreprocess + 171 1 libobjc.A.dylib 0x000000010110121e objc_exception_throw + 48 2 CoreFoundation 0x0000000101b2a2b5 +[NSException raise:format:] + 197 3 xxxxx 0x000000010084b3cb -[GIDSignIn signInWithOptions:] + 246 4 xxxxx 0x000000010084efc2 -[GIDSignInButton pressed] + 242 5 UIKit 0x00000001028f78bc -[UIApplication sendAction:to:from:forEvent:] + 83 6 UIKit 0x0000000102a7dc38 -[UIControl sendAction:to:forEvent:] + 67 7 UIKit 0x0000000102a7df51 -[UIControl _sendActionsForEvents:withEvent:] + 444 8 UIKit 0x0000000102a7ce4d -[UIControl touchesEnded:withEvent:] + 668 9 UIKit 0x0000000102965545 -[UIWindow _sendTouchesForEvent:] + 2747 10 UIKit 0x0000000102966c33 -[UIWindow sendEvent:] + 4011 11 UIKit 0x00000001029139ab -[UIApplication sendEvent:] + 371 12 UIKit 0x000000010310072d __dispatchPreprocessedEventFromEventQueue + 3248 13 UIKit 0x00000001030f9463 __handleEventQueue + 4879 14 CoreFoundation 0x0000000101a65761 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17 15 CoreFoundation 0x0000000101a4a98c __CFRunLoopDoSources0 + 556 16 CoreFoundation 0x0000000101a49e76 __CFRunLoopRun + 918 17 CoreFoundation 0x0000000101a49884 CFRunLoopRunSpecific + 420 18 GraphicsServices 0x00000001074cfa6f GSEventRunModal + 161 19 UIKit 0x00000001028f5c68 UIApplicationMain + 159 20 xxxxxxxx 0x00000001007c449f main + 111 21 libdyld.dylib 0x0000000104d5368d start + 1 22 ??? 
0x0000000000000001 0x0 + 1)libc++abi.dylib: terminating with uncaught exception of type NSException` My AppDelegate.Swift is class AppDelegate: UIResponder, UIApplicationDelegate, GIDSignInDelegate {public func sign(_ signIn: GIDSignIn!, didSignInFor user: GIDGoogleUser!, withError error: Error!) { if (error == nil) { let userId:NSString = user.userID as NSString; // For client-side use only! let idToken: NSString = user.authentication.idToken as NSString; // Safe to send to the server let fullName:NSString = user.profile.name as NSString; let givenName:NSString = user.profile.givenName as NSString; let familyName:NSString = user.profile.familyName as NSString; let email:NSString = user.profile.email as NSString; print(userId) print(userId,idToken,fullName,givenName,familyName,email) } else { print("\(error.localizedDescription)") }}var window: UIWindow?func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool { let kclientID:NSString = "107731993306-xxxxxxxxxxxxxxxxx.apps.googleusercontent.com" GIDSignIn.sharedInstance().clientID = kclientID as String! GIDSignIn.sharedInstance().delegate = self return true}func application(application: UIApplication, openURL url: NSURL, options: [String: AnyObject], annotation:Any, sourceApplication:String?) -> Bool { return GIDSignIn.sharedInstance().handle(url as URL!, sourceApplication:sourceApplication, annotation: annotation)} Please give me the solution. Why it is crashed? Now provide the response and nothing else.
As the error clearly says, your app is missing support for the url schemes. Add the following schemes to your info.plist:

<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleTypeRole</key>
        <string>Editor</string>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>***Your bundle ID***</string>
        </array>
    </dict>
    <dict>
        <key>CFBundleTypeRole</key>
        <string>Editor</string>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>com.googleusercontent.apps.107731993306-6s44u18onibp6gi0ddj94si1aifshhg6</string>
        </array>
    </dict>
    <dict>
        <key>CFBundleTypeRole</key>
        <string>Editor</string>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>***Something here***</string>
        </array>
    </dict>
</array>

Check this url for your id => https://developers.google.com/identity/sign-in/ios/start-integrating

Your info.plist should look like ->
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/42197729', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7534266/']}
jdg_81807
stackexchange
llm_judgeable_groundtruth_similarity
152471
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A friend of Mine gave me a system of two equations and asked me to solve them $\rightarrow$ $$\sqrt{x}+y=11~~ ...1$$$$\sqrt{y}+x=7~~ ...2$$ I tried to solve them manually and got this horrendously complicated fourth degree equation $\rightarrow$ $$\begin{align*}y &= (7-x)^2 ~...\mbox{(from 2)} \\y &= 49 - 14 x + x^2 \\\implies 11&= \sqrt{x}+ 49 - 14 x + x^2 ...(\mbox{from 1)}\\\implies~~ 0&=x^4-28x^3+272x^2-1065x+1444\end{align*}$$ Solving this wasn't exactly my piece of cake but I could tell that one of Solutions would have been 9 and 4 But my friend kept asking for a formal solution. I tried plotting the equations and here's what I got $\rightarrow$ So the equations had two pairs of solutions (real ones). Maybe, Just maybe I think these could be solved using approximations. So How do i solve them using a formal method (Calculus,Algebra,Real Analysis...) P.S. I'm In high-school. Now provide the response and nothing else.
Assume $x$ and $y$ are integers. Notice that, in this case, if $\sqrt x +y=11$, an integer, then $\sqrt x $ must be an integer. A similar argument can be made for $y$. So if they're integers then they're both perfect squares. Rephrasing in terms of the square roots (still integers) $X=\sqrt x,Y=\sqrt y$:

$$X+Y^2=11$$
$$Y+X^2=7$$

Subtracting the second equation from the first:

$$X-Y+Y^2-X^2=4$$
$$(X-Y)+(Y-X)(Y+X)=4$$
$$(Y-X)(X+Y-1)=4$$

Both of the brackets are integers, so the only values they can take are the factors of $4$. So either

$$Y-X=2,\quad X+Y-1=2$$

or

$$Y-X=4,\quad X+Y-1=1$$

or

$$Y-X=1,\quad X+Y-1=4$$

Solving each of these is simple. The only one that gives positive integer values (the conditions of our little set-up here) is the $3^{rd}$ one, which gives the answer you found. Keep in mind that there's nothing wrong with guessing and playing around with the problem first, then coming to a more structured argument later. If you want a full analytic solution you could use the quartic equation on the one you have and rule out the other solutions as involving the wrong branches of $\sqrt x$, but it's pointlessly messy.
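The finite case analysis can also be checked by brute force. A small sketch (my own; the search bound of 12 is an arbitrary assumption, comfortably above $X+Y=5$) enumerates integer pairs $(X, Y)$ with $(Y-X)(X+Y-1)=4$ and verifies the resulting $(x, y)$ against the original system:

```python
import math

# enumerate factorizations (Y - X)(X + Y - 1) = 4 over small non-negative integers
solutions = []
for X in range(0, 12):
    for Y in range(0, 12):
        if (Y - X) * (X + Y - 1) == 4:
            # translate back: x = X^2, y = Y^2
            x, y = X * X, Y * Y
            if math.isclose(math.sqrt(x) + y, 11) and math.isclose(math.sqrt(y) + x, 7):
                solutions.append((x, y))
print(solutions)  # [(4, 9)] -- i.e. x = 4, y = 9, the integer pair the OP spotted
```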
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/152471', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/31335/']}
jdg_81808
stackexchange
llm_judgeable_groundtruth_similarity
1650821
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As a prospective undergrad doing pre-study in preparation of my future endeavours, i'm still to fully grasp the concept of the supremum of a set. My question is: Suppose $\lim sup_{n \to \infty} f(n) = a$, does this imply that, for all sufficiently large $n$, $f(n) \leq a$ ? From the relevant Wikipedia article, it seems obvious by definition, but i just want to be absolutely sure. EDIT: For example, Gronwall's theorem on the asymptotic size of the divisor function states that $\lim sup_{n\to \infty} \frac{\sigma(n)}{n\log\log n}=e^{\gamma}$, is it true that for sufficiently large $n$, $\frac{\sigma(n)}{n\log\log n} \leq e^{\gamma}$ ? Or equivalently, is $e^{\gamma}$ also a limit superior in Gronwall's theorem ? Now provide the response and nothing else.
Here's an example showing that $S$ is not always a finite union of algebraic sets. Let $Z$ be the zero locus of the single polynomial $x_1x_2 - 1$. Then $S = \mathbb{A}^1\setminus \{0\}$. What is true is that $S$ is always a finite union of sets defined by finitely many polynomial equations (basic Zariski closed sets) and negated equations (basic Zariski open sets). Such a set is called a constructible set , and the fact that the projection of a Zariski-closed set (or more generally a constructible set) is a constructible set is known as Chevalley's theorem in algebraic geometry. The $S$ in the example above is defined by the single negated equation $x_1\neq 0$. As a logician, I prefer to think of this in terms of quantifier-elimination in the theory of algebraically closed fields. This result, due to Tarski, says that a subset of $K^n$ ($K$ algebraically closed) defined by a first-order formula (built up from polynomial equations by finite Boolean combinations and quantifiers ) can actually be defined without quantifiers. The set $S$ in your question is defined by the first-order formula $$\exists x_n\, \bigwedge_{i = 1}^k f_i(x_1,\dots,x_n) = 0.$$ Putting the quantifier-free formula we get from quantifier-elimination in disjunctive normal form, it looks like $$\bigvee_{i = 1}^n \bigwedge_{j = 1}^m \varphi_{ij}(\overline{x}),$$where each $\varphi_{ij}(\overline{x})$ is $p_{ij}(\overline{x}) = 0$ or $p_{ij}(\overline{x})\neq 0$ for some polynomial $p_{ij}$. This is explicitly a finite union of sets defined by finitely many polynomial equations and negated equations.
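The opening example can be made concrete. This small sketch (my own illustration, using exact rationals; the helper name is an assumption) exhibits, for each nonzero $x_1$, the witness $x_2=1/x_1$ showing that $x_1$ lies in the projection $S$, matching the description of $S$ by the single negated equation $x_1 \neq 0$:

```python
from fractions import Fraction

# Z = {(x1, x2) : x1*x2 - 1 = 0}; S is its projection onto the x1-axis
def in_projection(x1):
    # over a field, x1*x2 = 1 has a solution x2 exactly when x1 != 0
    return x1 != 0

for x1 in [Fraction(-3), Fraction(-1, 2), Fraction(0), Fraction(1), Fraction(7, 5)]:
    if x1 != 0:
        x2 = 1 / x1                                   # explicit witness
        assert x1 * x2 - 1 == 0 and in_projection(x1)
    else:
        assert not in_projection(x1)                  # 0*x2 - 1 = -1 for every x2
print("projection of V(x1*x2 - 1) is exactly {x1 != 0}")
```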
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1650821', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/305659/']}
jdg_81809
stackexchange
llm_judgeable_groundtruth_similarity
119454
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: What are (if any) equivalent forms of AC (The Axiom of Choice) in Category Theory ? Now provide the response and nothing else.
Here's a somewhat trivial one, but it is one that category theorists use all the time:

Let us say that a functor $F : \mathcal{C} \to \mathcal{D}$ is a weak equivalence if it is fully faithful and essentially surjective on objects, and that it is a strong equivalence if there exists a functor $G : \mathcal{D} \to \mathcal{C}$ such that $G F \cong \textrm{id}_{\mathcal{C}}$ and $F G \cong \textrm{id}_\mathcal{D}$.

Proposition. In Zermelo set theory with only bounded separation, the following are equivalent:

1) Every surjection of sets splits.
2) Any weak equivalence between two small categories is a strong equivalence.
3) Any weak equivalence between two small groupoids is a strong equivalence.
4) Any weak equivalence between two small preorders is a strong equivalence.
5) Any weak equivalence between two small setoids is a strong equivalence.

Here, by "small" I mean something internal to the set-theoretic universe in question.

On the other hand, if you're asking for category-theoretic formulations of the axiom of choice inside some category of "sets", then there are several:

1) The usual formulation just says that every epimorphism in $\textbf{Set}$ splits. This generalises easily to any category.
2) In any topos $\mathcal{E}$, one can formulate the axiom schema "every surjection $X \to Y$ splits" in the internal language of $\mathcal{E}$, and this axiom schema is valid if and only if every object is internally projective, in the sense that the functor $(-)^X : \mathcal{E} \to \mathcal{E}$ preserves epimorphisms. This is called the internal axiom of choice. The internal axiom of choice holds in $\textbf{Set}$ precisely if the usual axiom of choice holds; this is because $\textbf{Set}$ is a well-pointed topos; but in general the internal axiom of choice is weaker.
For example, for any discrete group $G$, the category $\mathbf{B} G$ of all $G$-sets and $G$-equivariant maps is a topos in which the internal axiom of choice holds, but if $G$ is any non-trivial group whatsoever, then there exist epimorphisms in $\mathbf{B} G$ that do not split. (For example, $G \to 1$, where $G$ acts on itself by translation.)
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/119454', 'https://mathoverflow.net', 'https://mathoverflow.net/users/-1/']}
jdg_81810
stackexchange
llm_judgeable_groundtruth_similarity
5315529
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have problem with organizing layout in android aplication. I'm dynamically creating buttons and adding them with this code to my layout: LayoutInflater layoutInflater = (LayoutInflater)getBaseContext().getSystemService(Context.LAYOUT_INFLATER_SERVICE); for (int i = 0; i < NO_NUMBERS; i++){ Button btn = new Button(this); btn = (Button) layoutInflater.inflate(R.layout.button, null); btn.setId(2000+i); Integer randomNumber = sort.getNumbersCopy()[i]; btn.setText(randomNumber.toString()); btn.setOnClickListener((OnClickListener) this); buttonList.addView(btn); list.add(btn); } I'm adding it to the LinearLayout: <LinearLayout android:id="@+id/buttonlist" android:layout_alignParentLeft="true" android:layout_marginTop="185dp" android:layout_marginLeft="5dp" android:layout_marginRight="5dp" android:orientation="horizontal" android:gravity="center_horizontal" android:layout_width="fill_parent" android:layout_height="wrap_content"></LinearLayout> and i'm importing this .xml where i'm defining button layout: <Button xmlns:android="http://schemas.android.com/apk/res/android"android:textSize="26dp"android:textStyle ="bold"android:textColor="#ffffff"android:background="@drawable/button"android:layout_marginLeft="8px"android:layout_width="wrap_content"android:layout_height="wrap_content"android:layout_weight="1"/> Well, layout always ends up like this: Instead of something like this (even spce between buttons, square buttons): To summarize this: I have to: describe button in xml dynamically generate N buttons add properties of described button to dynamically created ones organize layout so it can evenly distribute buttons in buttonList with spaces between tham Now provide the response and nothing else.
Remember, android:layout_* attributes are LayoutParams . They are arguments to the parent and affect how the parent will perform layout on that view. You're specifying layout_margin attributes on your buttons, but they're getting ignored. Here's why: Since LayoutParams are specific to the parent view type, you need to supply an instance of the correct parent type when you inflate layouts using a LayoutInflater or else layout_ attributes on the top-level view in the layout will be dropped. (The inflater would have no idea what type of LayoutParams to generate.) Since buttonList is your intended parent for the button views, change your inflate line to this: btn = (Button) layoutInflater.inflate(R.layout.button, buttonList, false);
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5315529', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/99821/']}
jdg_81811
stackexchange
llm_judgeable_groundtruth_similarity
511636
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have an ansible 1.1 playbook where I do something like this:

- name: copy files
  sudo: True
  shell: cp /from/* /to/

- name: change owner
  sudo: True
  file: path=$item owner=newuser group=newgroup
  with_fileglob: /to/*

The second task, "change owner", is always skipping. Can anyone help me find out why? Is the file module skipping because the files exist? I'm stuck :) Now provide the response and nothing else.
From documentation : Remember lookup plugins are run on the "controlling" machine: with_fileglob is a lookup plugin, so it looks for files on the local server, the one you are running ansible-playbook from. Here is what you can do:

- name: list files
  action: command ls -1 /to/*
  register: dumpfiles

- name: change ownership
  action: file path=$item owner=newuser group=newgroup
  with_items: ${dumpfiles.stdout_lines}
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/511636', 'https://serverfault.com', 'https://serverfault.com/users/78688/']}
jdg_81812
stackexchange
llm_judgeable_groundtruth_similarity
18087217
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I have started developing for Bump, and in their short-sightedness they don't support x86 based Android devices/emulators. Only ARM. Well, setting the small market aside, it's a big problem for me since I thoroughly enjoy developing using the x86 based emulator for performance reasons. So, (since Bump is pointless on an emulator anyway) how can I disable the loading of the Bump libraries when running in the emulator?

import com.bump.api.IBumpAPI;
import com.bump.api.BumpAPIIntents;

Error:

08-06 17:58:30.895: E/AndroidRuntime(1799): java.lang.UnsatisfiedLinkError: Couldn't load android-api from loader dalvik.system.PathClassLoader[dexPath=/data/app/com.xxxxxxxx-2.apk,libraryPath=/data/app-lib/com.xxxxxxxxx-2]: findLibrary returned null
08-06 17:58:30.895: E/AndroidRuntime(1799): at java.lang.Runtime.loadLibrary(Runtime.java:365)
08-06 17:58:30.895: E/AndroidRuntime(1799): at java.lang.System.loadLibrary(System.java:535)
08-06 17:58:30.895: E/AndroidRuntime(1799): at com.bump.api.BumpAPI.(BumpAPI.java:122)
08-06 17:58:30.895: E/AndroidRuntime(1799): at java.lang.Class.newInstanceImpl(Native Method)
08-06 17:58:30.895: E/AndroidRuntime(1799): at java.lang.Class.newInstance(Class.java:1319)

Now provide the response and nothing else.
I've had to debug some particularly nasty unseen-cause Javascript issues at my job. Knowing the full depth of developer tools like Chrome's is definitely helpful. It undeniably takes some creativity to find places that might be causing the issue, but a few tips: Tracking down event listeners Under Chrome's Elements view, try Inspect-ing an element (right-click, Inspect); then, on the right side of the developer view, scroll down to Event Listeners. Here you can view what code files have hooked up an event. Often, this will just point you to a middle-framework from the really devious code you're looking for, but sometimes it will point you in the right direction. Trapping a DOM modification Many of the unwanted effects I see are because of something changing some value or attribute on the page that I don't want. Anytime this happens, you can right-click on the element (under the Elements view) and say "Break on..." and the specific scenario you're looking for. When Chrome then hits a breakpoint, you can then look downward in the Stack Trace until you find something recognizable that shouldn't be called. EDIT after reaching ten votes! Trapping a JS object modification If the change you're interested in is code-internal, not in the UI, things get trickier. What's meant by this scenario is that you know somewhere in the code, something incredibly annoying like the following is happening. company.data.myObject.parameter = undefined; In this situation, you know myObject is still the same object, but it's being modified, perhaps unintentionally. For that, I often insert the following bit of code, sometimes just through the developer console at some point before said modification happens. Object.defineProperty(company.data.myObject, 'parameter', { set: (val) => { debugger; }}); This includes an arrow function - you're only using this for debugging and Chrome supports it, so might as well save keystrokes. 
What this will do is freeze your debugger as soon as some line of code attempts to modify myObject 's " parameter " property. You don't necessarily have to have a global reference to the variable if you can run this line of code from a previous breakpoint that will have the given object in the locals. Otherwise, if all I'm starting out with is the HTML code, and I want to tie that to Javascript code, I'll often just look for identifying features like "id" elements, and search all JS files in my development directory for it. Normally, I can reach it pretty fast.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18087217', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3584/']}
jdg_81813
stackexchange
llm_judgeable_groundtruth_similarity
7855845
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I call random.sample(arr,length) an error returns random_sample() takes at most 1 positional argument (2 given) . After some Googling I found out I'm calling Numpy's random sample function when I want to call the sample function of the random module. I've tried importing numpy under a different name, which doesn't fix the problem. I need Numpy for the rest of the program, though. Any thoughts? Thanks Now provide the response and nothing else.
Sounds like you have something like

import random
from numpy import *

and random is getting clobbered by the numpy import. If you want to keep the import * then you'll need to rename random :

import random as rnd  # or whatever name you like
from numpy import *

Alternatively, and probably better, is to import numpy as a module instead of just yanking it all into your module's namespace:

import random
import numpy as np  # or leave as numpy, or whatever name you like
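The clobbering can be reproduced with nothing but the standard library; the assignment below plays the role that `from numpy import *` plays when it rebinds the name `random` (a sketch, with a stand-in object rather than real numpy):

```python
import random

picks = random.sample(range(10), 3)   # stdlib random.sample works
assert len(picks) == 3

# `from numpy import *` would rebind the module-level name `random`,
# exactly like this stand-in assignment does:
random = object()                     # stdlib module is now shadowed

try:
    random.sample(range(10), 3)       # no longer the stdlib function
except AttributeError as err:
    print("shadowed:", err)

# the fix: bind the stdlib module to an unclobbered name
import random as rnd
assert len(rnd.sample(range(10), 3)) == 3
```

Running it shows an AttributeError for the shadowed name, then succeeds via `rnd`.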
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7855845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1007926/']}
jdg_81814
stackexchange
llm_judgeable_groundtruth_similarity
1149780
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: It seems to me that $\ln \cosh x\leq \frac{x^2}{2}$ for $x\in\mathbb{R}$, as suggested by graphing the difference between both functions as well as the fact that the Taylor series expansion of $\ln\cosh x$ at $x=0$ yields $\frac{x^2}{2}-\frac{x^4}{12}+\mathcal{O}(x^6)$. However, how do I prove the bound formally? Using Taylor's Theorem with the remainder seems sort of unwieldy... Any hints? Now provide the response and nothing else.
Since both sides of the inequality are even functions, we need only consider $x\geqslant 0$. Then using $$\ln \cosh x = \int_0^x \tanh t\,dt$$ one only needs to see that $\tanh t \leqslant t$ for $t \geqslant 0$. Since this inequality is strict for $t > 0$, the original inequality is strict for $x \neq 0$.
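As a quick numerical sanity check of both the inequality and the pointwise comparison $\tanh t \leqslant t$ used above (this is illustration, not part of the proof):

```python
import math

# check ln(cosh x) <= x^2/2 on a grid, strict away from 0
for i in range(-400, 401):
    x = i / 10.0
    lhs = math.log(math.cosh(x))   # ln cosh x
    rhs = x * x / 2                # x^2 / 2
    assert lhs <= rhs + 1e-12      # the inequality
    if x != 0:
        assert lhs < rhs           # strictness for x != 0

# the integrand comparison driving the proof: tanh t <= t for t >= 0
for i in range(401):
    t = i / 10.0
    assert math.tanh(t) <= t
```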
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1149780', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/24728/']}
jdg_81815
stackexchange
llm_judgeable_groundtruth_similarity
19473941
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that for the classic Play framework it's play debug ~run . I tried running activator debug ~run but I get the following error:

[error] Not a valid command: debug (similar: idea)
[error] Not a valid project ID: debug
[error] Expected ':' (if selecting a configuration)
[error] Not a valid key: debug (similar: debianSign, node)
[error] debug
[error]      ^

What am I doing wrong? Now provide the response and nothing else.
If you're just doing activator ~run , then you should be able to pass a JVM debug port option with:

./activator -jvm-debug <port> ~run

This may not do the same thing as play-run, but here are the arguments it adds to the command line:

-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=<port>

From: https://github.com/typesafehub/activator/blob/master/dist/src/templates/activator#L107 I've successfully attached to this process in my IDE. If you're running on Windows, the automatic configuration is a bit different. With the latest activator, you can do the following:

Open %UserProfile%\.activator\activatorconfig.txt (the UserProfile is different depending on your Windows install. Mine is C:\Documents and Settings\jsuereth on one installation and C:\Users\jsuereth on another).

Paste the following in the file: -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=<PUT YOUR PORT HERE>

You can also set the JAVA_OPTS property on the command line before starting activator, e.g.

set "JAVA_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=<port>"

Hope that helps!
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/19473941', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1058046/']}
jdg_81816
stackexchange
llm_judgeable_groundtruth_similarity
26509801
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a few tables with columns that can be either ENUM type or INT type. I tend to always use the integer type, assuming that searches based on it will be faster. For example, one of my tables has a column StatusType which can have only 4 possible values: Completed , In Progress , Failed , Todo . Instead of storing the above as ENUM strings I store them as 1 , 2 , 3 , 4 respectively. Then in my PHP code I have constants that define these values like this:

define('COMPLETED', 1);
define('IN_PROGRESS', 2);
define('FAILED', 3);
define('TODO', 4);

Now my question is: am I doing it the right way, or should I change the column to ENUM type and compare strings in queries? I have many other columns that can only have a set of at most 4-5 possible values. Now provide the response and nothing else.
Enum values look really cool in MySQL, yet I am not a fan of them. They are limited to 255 values, so if you decide to add more values, then you might run into a limit. Also, as you describe, you need to synchronize the values in your application code with the values in the database -- something that seems potentially dangerous. In addition, they make certain future changes more difficult. For instance, other databases do not support enums. And, if you want to add multi-lingual support, having codes embedded in data type definitions in the database is a bit hard to deal with. The more standard method is one or more reference tables, where you use join to get the values. You can use a hybrid approach where you use a reference table in the database. Then you can load the reference table into the application to get the mapping from numbers to strings so you can avoid the joins in your code.
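Here is a sketch of the hybrid approach in Python, using an in-memory SQLite database for brevity (all table and column names here are illustrative, not taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE status_type (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    INSERT INTO status_type VALUES
        (1, 'Completed'), (2, 'In Progress'), (3, 'Failed'), (4, 'Todo');
    CREATE TABLE task (id INTEGER PRIMARY KEY,
                       status_id INTEGER REFERENCES status_type(id));
    INSERT INTO task VALUES (10, 2);
""")

# load the reference table once at startup; the rest of the app can then
# translate codes to labels without joining on every query
status_name = dict(conn.execute("SELECT id, name FROM status_type"))

(status_id,) = conn.execute("SELECT status_id FROM task WHERE id = 10").fetchone()
print(status_name[status_id])   # In Progress
```

Adding a fifth status is then an INSERT into the reference table rather than an ALTER of a column type.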
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26509801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2204851/']}
jdg_81817
stackexchange
llm_judgeable_groundtruth_similarity
22954894
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working on a video app. I am streaming a video from server link , is it possible for me to generate a video thumbnail from the URL without downloading the video. Now provide the response and nothing else.
Without downloading the video you can generate a thumbnail with the code below:

public static Bitmap retriveVideoFrameFromVideo(String videoPath) throws Throwable {
    Bitmap bitmap = null;
    MediaMetadataRetriever mediaMetadataRetriever = null;
    try {
        mediaMetadataRetriever = new MediaMetadataRetriever();
        if (Build.VERSION.SDK_INT >= 14)
            mediaMetadataRetriever.setDataSource(videoPath, new HashMap<String, String>());
        else
            mediaMetadataRetriever.setDataSource(videoPath);
        // mediaMetadataRetriever.setDataSource(videoPath);
        bitmap = mediaMetadataRetriever.getFrameAtTime();
    } catch (Exception e) {
        e.printStackTrace();
        throw new Throwable("Exception in retriveVideoFrameFromVideo(String videoPath)" + e.getMessage());
    } finally {
        if (mediaMetadataRetriever != null) {
            mediaMetadataRetriever.release();
        }
    }
    return bitmap;
}

NOTE: Video is stored as Intra and non-Intra (picture) frames; getFrameAtTime will return the closest non-Intra frame as a Bitmap. So basically it won't download the entire video.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22954894', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2633909/']}
jdg_81818
stackexchange
llm_judgeable_groundtruth_similarity
723152
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Apart from enhanced authentication options offered by SSH, is there any difference between basic working of SSH and SSL protocols ? I am asking since we can use SFTP or FTP over SSL, both would require authentication. Now provide the response and nothing else.
SSH and SSL are similar protocols that both use most of the same cryptographic primitives under the hood, so they are both as secure as each other. One advantage of SSH is that using key-pair authentication is actually quite easy to do, and built right into the protocol. With SSL it's a bit of a mess involving CA certificates and other things. After you have the PKI in place you also need to configure your services to use the PKI for authentication instead of its internal password database; this is a nightmare on some services and a piece of cake on others. It also means you need to go to the hassle of signing all of your user's keys so they can log in with them. Most competent users can grok SSH keys in no time but it takes a bit longer to get their heads around SSL keys (the extra CA certs and key certs confused me when I first discovered it). Pick what's supportable. SSH+SFTP is great for Unix people, but FTP over SSL is probably easier to do if your users are Windows-based and are pretty clueless about anything other than Internet Exploiter (and you don't mind risking that your users will choose insecure passwords).
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/723152', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/64668/']}
jdg_81819
stackexchange
llm_judgeable_groundtruth_similarity
277499
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: There is a claim on https://en.wikipedia.org/wiki/Super-recursive_algorithm#Inductive_Turing_machines that 'Simple inductive Turing machines are equivalent to other models of computation such as general Turing machines of Schmidhuber, trial and error predicates of Hilary Putnam, limiting partial recursive functions of Gold, and trial-and-error machines of Hintikka and Mutanen.[1] More advanced inductive Turing machines are much more powerful. There are hierarchies of inductive Turing machines that can decide membership in arbitrary sets of the arithmetical hierarchy(Burgin 2005). In comparison with other equivalent models of computation, simple inductive Turing machines and general Turing machines give direct constructions of computing automata that are thoroughly grounded in physical machines'. Wiki also says these are different from infinite time Turing machines. Are inductive turing machines different from turing machines with oracles? Are inductive turing machines physically realizable (at least in the same sense of realizaility of Turing machines as Intel processors with bounded RAM and one that degrades over time)? Can inductive turing machines solve the halting problem (essentially Hilbert's $10$th as well)? Please refer here for overcoming Church-Turing Hypothesis with Inductive Turing machines https://en.wikipedia.org/wiki/Super-recursive_algorithm#Relation_to_the_Church.E2.80.93Turing_thesis . Here is another article (published in communications of the ACM and well cited) http://www.columbia.edu/itc/hs/medinfo/g6080/misc/p82-burgin.pdf . Now provide the response and nothing else.
Let me try to answer the actual question that was asked. The Wikipedia page defines inductive Turing machines as follows:

"An inductive Turing machine is a definite list of well-defined instructions for completing a task which, when given an initial state, will proceed through a well-defined series of successive states, eventually giving the final result. The difference between an inductive Turing machine and an ordinary Turing machine is that an ordinary Turing machine must stop when it has obtained its result, while in some cases an inductive Turing machine can continue to compute after obtaining the result, without stopping."

Two remarks.

1. I assume that when the description says "eventually giving the final result," what is meant is that there is a stage after which the computation is always displaying that result as output. This makes the concept identical to what has also been known by the term computability-in-the-limit, as well as other terminology. One naturally extends the concept to partial functions, by insisting that for inputs not in the domain, what we want is for the outputs not to converge or stabilize. This is evidently the simple model of inductive machine; the Wikipedia page makes references to a hierarchy of more powerful machines.

2. Although the Wikipedia page makes numerous references to Mark Burgin — his name appears 24 times in the linked article — to my understanding of the history of the subject, the particular concept of computability-in-the-limit has been well understood and analyzed by computability theorists much earlier than Burgin's writings.

To my way of thinking, the main thing to say about this notion of computability is the following, which is commonly given as an exercise in computability theory courses.

Theorem. For any function $f$, the following are equivalent.

1. $f$ is computable by an inductive Turing machine; that is, $f$ is computable in the limit.
2. $f$ is computable (in the usual sense) by a Turing machine equipped with an oracle for the halting problem.

3. The graph of $f$ is $\Sigma_2$-definable.

Proof. ($1\to 3$) If $f$ is computable by an inductive Turing machine, then $f(a)=b$ if and only if there is some stage of the inductive computation on input $a$ such that at any later stage, the output is still $b$. This is a $\Sigma_2$ definition of the graph of $f$.

($3\to 2$) If the graph of $f$ is $\Sigma_2$-definable, then $f(a)=b$ just in case $\exists x\forall y\ B(x,y,a,b)$, where $B$ is $\Delta_0$. With an oracle for the halting problem and any particular $x$, $a$ and $b$, we can ask the oracle if the $\forall y$ condition holds. In this way, on input $a$, we can search for an $x$ and $b$ that fulfill the condition. When found, output $b$.

($2\to 1$) If $f$ is computable with respect to an oracle for the halting problem, then it is computable by an inductive Turing machine: just compute better and better approximations to the halting problem, and for each of them, use that approximation as an oracle for the computation of $f$. This process eventually stabilizes, because for any given input, the approximation to the halting problem will be accurate for a long enough time to support the correct computation of $f$. $\Box$

Note that the argument in the implication ($2\to 1$) exhibits the feature that is central to some of the commentary about these machines, namely, that although we can compute better and better approximations to the halting problem, in a way that will eventually be correct on any given instance, nevertheless we are typically not able to recognize computably when our approximation is correct. Thus, although we may be computing the function $f$ accurately by using that approximation, we have no way of knowing for sure that we have the final answer.

Corollary. For any set $A$, the following are equivalent.

1. $A$ is decidable by an inductive Turing machine.

2. $A$ is Turing computable from the halting problem.
3. $A$ has complexity $\Delta_2$ in the arithmetic hierarchy.

Proof. The characteristic function of $A$ is a total function, and so its graph is $\Sigma_2$ if and only if $A$ has complexity $\Delta_2$. $\Box$

In this sense, yes, the so-called inductive Turing machines can compute the halting problem and therefore Hilbert's 10th problem, since that problem is equivalent to the halting problem. But to be clear, I don't take this to show that the inductive Turing machine model refutes the Church-Turing thesis. Unfortunately, it seems that much of the commentary and literature surrounding the claim that it does is of poor quality and in some cases mathematically empty. The discussion seems to have become distracted in the literature and gotten off track in a way; it is a pity. One of the central achievements of computability theory is the recognition of the subtle distinction between the concept of a set being computably enumerable and its being computably decidable. The recognition that these two aspects of computability are not the same has clarified so many issues in computability. We have known since Turing that the halting problem is computably enumerable but not decidable. Meanwhile, the main arguments for inductive computability violating the Church-Turing thesis seem to my way of understanding things to amount to an attempt to erase this important distinction. After all, the halting problem itself is computable in the limit, since we can say that a program does not halt until we see that it does, and then say from that point on that it does halt. Does this show that the halting problem is computably decidable? No, I don't think so, not in any satisfactory way. And similarly I reject the claim that functions computable-in-the-limit are computable. Since these kinds of simple observations seem to resolve essentially all of the issues on this topic, I cannot recommend following much of the literature surrounding this supposed debate.
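The "guess no until you see it halt" idea in the last paragraph can be sketched as a toy in Python. The "programs" here are trivial stand-ins (nothing here solves any real halting problem); the point is only that the stage-by-stage guesses stabilize on the right answer without any stage announcing itself as final:

```python
def halts_within(program, steps):
    """Stand-in semantics: ('loop', _) never halts; ('count', n) halts at step n."""
    kind, n = program
    return kind == 'count' and steps >= n

def limit_decider(program, stages):
    """Stage-s guess: 'does not halt' until a halt has actually been observed."""
    return [halts_within(program, s) for s in range(stages)]

guesses = limit_decider(('count', 5), 10)
print(guesses)   # five False guesses, then True forever: changes once, then stabilizes

# a non-halting program: every guess is False, and (crucially) no finite
# stage certifies that the guesses have already stabilized
assert limit_decider(('loop', 0), 8) == [False] * 8
```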
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/277499', 'https://mathoverflow.net', 'https://mathoverflow.net/users/10035/']}
jdg_81820
stackexchange
llm_judgeable_groundtruth_similarity
42981070
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I get this ESLint rule ( no-use-before-define ) to not warn in cases like these; class App extends React.Component { render() { return <div>{messages.helloWorld}</div> }}const messages = { helloWorld: 'Hello world!' } This is a simplified example, but I'd really like to have messages defined at the bottom of each component's file (by convention). Now provide the response and nothing else.
It looks like you might be interested in the variables option, for this rule. You can read about that option here . This flag determines whether or not the rule checks variable declarations in upper scopes. If this is true, the rule warns every reference to a variable before the variable declaration. Otherwise, the rule ignores a reference if the declaration is in an upper scope, while still reporting the reference if it's in the same scope as the declaration. You might configure it, in your .eslintrc , like so ... { "no-use-before-define": ["error", { "variables": false }]} This will keep that rule enabled for other things, such as classes and functions and variables in the same scope, but will relax it for variables in upper scopes.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42981070', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1310983/']}
jdg_81821
stackexchange
llm_judgeable_groundtruth_similarity
11951701
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a math expressions parser + evaluator for Python? I am not the first to ask this question, but answers usually point to eval() . For instance, one could do this:

>>> safe_list = ['math', 'acos', 'asin', 'atan', 'atan2', 'ceil', 'cos', 'cosh', 'degrees', 'e', 'exp', 'fabs', 'floor', 'fmod', 'frexp', 'hypot', 'ldexp', 'log', 'log10', 'modf', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'abs']
>>> safe_dict = dict([(k, locals().get(k, None)) for k in safe_list])
>>> s = "2+3"
>>> eval(s, {"__builtins__": None}, safe_dict)
5

But this is not safe:

>>> s_badbaduser = """
... (lambda fc=(
...     lambda n: [
...         c for c in
...         ().__class__.__bases__[0].__subclasses__()
...         if c.__name__ == n
...     ][0]
... ):
...     fc("function")(
...         fc("code")(
...             0, 0, 0, 0, "KABOOM", (), (), (), "", "", 0, ""
...         ), {}
...     )()
... )()
... """
>>> eval(s_badbaduser, {"__builtins__": None}, safe_dict)
Segmentation fault

Also, using eval for parsing and evaluating mathematical expressions just seems wrong to me. I have found PyMathParser , but it also uses eval under the hood and is no better:

>>> import MathParser
>>> m = MathParser.PyMathParser()
>>> m.expression = s_badbaduser
>>> m.evaluate();
Segmentation fault

Is there a library available that would parse and evaluate a mathematical expression without using the Python parser? Now provide the response and nothing else.
Check out Paul McGuire's pyparsing . He has written both the general parser and a grammar for arithmetic expressions :

from __future__ import division
import pyparsing as pyp
import math
import operator

class NumericStringParser(object):
    '''
    Most of this code comes from the fourFn.py pyparsing example
    http://pyparsing.wikispaces.com/file/view/fourFn.py
    http://pyparsing.wikispaces.com/message/view/home/15549426
    __author__='Paul McGuire'

    All I've done is rewrap Paul McGuire's fourFn.py as a class, so I can
    use it more easily in other places.
    '''
    def pushFirst(self, strg, loc, toks):
        self.exprStack.append(toks[0])

    def pushUMinus(self, strg, loc, toks):
        if toks and toks[0] == '-':
            self.exprStack.append('unary -')

    def __init__(self):
        """
        expop   :: '^'
        multop  :: '*' | '/'
        addop   :: '+' | '-'
        integer :: ['+' | '-'] '0'..'9'+
        atom    :: PI | E | real | fn '(' expr ')' | '(' expr ')'
        factor  :: atom [ expop factor ]*
        term    :: factor [ multop factor ]*
        expr    :: term [ addop term ]*
        """
        point = pyp.Literal(".")
        e = pyp.CaselessLiteral("E")
        fnumber = pyp.Combine(
            pyp.Word("+-" + pyp.nums, pyp.nums) +
            pyp.Optional(point + pyp.Optional(pyp.Word(pyp.nums))) +
            pyp.Optional(e + pyp.Word("+-" + pyp.nums, pyp.nums)))
        ident = pyp.Word(pyp.alphas, pyp.alphas + pyp.nums + "_$")
        plus = pyp.Literal("+")
        minus = pyp.Literal("-")
        mult = pyp.Literal("*")
        div = pyp.Literal("/")
        lpar = pyp.Literal("(").suppress()
        rpar = pyp.Literal(")").suppress()
        addop = plus | minus
        multop = mult | div
        expop = pyp.Literal("^")
        pi = pyp.CaselessLiteral("PI")
        expr = pyp.Forward()
        atom = ((pyp.Optional(pyp.oneOf("- +")) +
                 (pi | e | fnumber | ident + lpar + expr + rpar).setParseAction(self.pushFirst))
                | pyp.Optional(pyp.oneOf("- +")) + pyp.Group(lpar + expr + rpar)
                ).setParseAction(self.pushUMinus)
        # by defining exponentiation as "atom [ ^ factor ]..." instead of
        # "atom [ ^ atom ]...", we get right-to-left exponents, instead of
        # left-to-right; that is, 2^3^2 = 2^(3^2), not (2^3)^2.
        factor = pyp.Forward()
        factor << atom + pyp.ZeroOrMore((expop + factor).setParseAction(self.pushFirst))
        term = factor + pyp.ZeroOrMore((multop + factor).setParseAction(self.pushFirst))
        expr << term + pyp.ZeroOrMore((addop + term).setParseAction(self.pushFirst))
        self.bnf = expr
        # map operator symbols to corresponding arithmetic operations
        epsilon = 1e-12
        self.opn = {"+": operator.add,
                    "-": operator.sub,
                    "*": operator.mul,
                    "/": operator.truediv,
                    "^": operator.pow}
        self.fn = {"sin": math.sin,
                   "cos": math.cos,
                   "tan": math.tan,
                   "abs": abs,
                   "trunc": lambda a: int(a),
                   "round": round,
                   # For Python3 compatibility, cmp replaced by ((a > 0) - (a < 0)). See
                   # https://docs.python.org/3.0/whatsnew/3.0.html#ordering-comparisons
                   "sgn": lambda a: abs(a) > epsilon and ((a > 0) - (a < 0)) or 0}
        self.exprStack = []

    def evaluateStack(self, s):
        op = s.pop()
        if op == 'unary -':
            return -self.evaluateStack(s)
        if op in "+-*/^":
            op2 = self.evaluateStack(s)
            op1 = self.evaluateStack(s)
            return self.opn[op](op1, op2)
        elif op == "PI":
            return math.pi  # 3.1415926535
        elif op == "E":
            return math.e  # 2.718281828
        elif op in self.fn:
            return self.fn[op](self.evaluateStack(s))
        elif op[0].isalpha():
            return 0
        else:
            return float(op)

    def eval(self, num_string, parseAll=True):
        self.exprStack = []
        results = self.bnf.parseString(num_string, parseAll)
        val = self.evaluateStack(self.exprStack[:])
        return val

nsp = NumericStringParser()
print(nsp.eval('1+2'))
# 3.0
print(nsp.eval('2*3-5'))
# 1.0
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11951701', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/593487/']}
jdg_81822
stackexchange
llm_judgeable_groundtruth_similarity
2183087
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I try to use float as a template parameter, the compiler cries for this code, while int works fine. Is it because I cannot use float as a template parameter?

#include <iostream>
using namespace std;

template <class T, T defaultValue>
class GenericClass
{
private:
    T value;
public:
    GenericClass()
    {
        value = defaultValue;
    }
    T returnVal()
    {
        return value;
    }
};

int main()
{
    GenericClass<int, 10> gcInteger;
    GenericClass<float, 4.6f> gcFlaot;

    cout << "\n sum of integer is " << gcInteger.returnVal();
    cout << "\n sum of float is " << gcFlaot.returnVal();
    return 0;
}

Error:

main.cpp: In function `int main()':
main.cpp:25: error: `float' is not a valid type for a template constant parameter
main.cpp:25: error: invalid type in declaration before ';' token
main.cpp:28: error: request for member `returnVal' in `gcFlaot', which is of non-class type `int'

I am reading "Data Structures for Game Programmers" by Ron Penton; the author passes a float , but when I try it, it doesn't seem to compile. Now provide the response and nothing else.
THE SIMPLE ANSWER

The standard doesn't allow floating-point values as non-type template-arguments, which can be read about in the following section of the C++11 standard:

14.3.2/1  Template non-type arguments  [temp.arg.nontype]

A template-argument for a non-type, non-template template-parameter shall be one of:

- for a non-type template-parameter of integral or enumeration type, a converted constant expression (5.19) of the type of the template-parameter;
- the name of a non-type template-parameter; or
- a constant expression (5.19) that designates the address of an object with static storage duration and external or internal linkage or a function with external or internal linkage, including function templates and function template-ids but excluding non-static class members, expressed (ignoring parentheses) as & id-expression, except that the & may be omitted if the name refers to a function or array and shall be omitted if the corresponding template-parameter is a reference; or
- a constant expression that evaluates to a null pointer value (4.10); or
- a constant expression that evaluates to a null member pointer value (4.11); or
- a pointer to member expressed as described in 5.3.1.

But.. but.. WHY!?

It is probably due to the fact that floating-point calculations cannot be represented in an exact manner. If it were allowed, it could result in erroneous/weird behavior when doing something like this:

func<1/3.f> ();
func<2/6.f> ();

We meant to call the same function twice, but this might not be the case since the floating-point representations of the two calculations aren't guaranteed to be exactly the same.

How would I represent floating-point values as template arguments?

With C++11 you could write some pretty advanced constant expressions (constexpr) that would calculate the numerator/denominator of a floating value at compile time and then pass these two as separate integer arguments.

Remember to define some sort of threshold so that floating-point values close to each other yield the same numerator/denominator, otherwise it's kinda pointless since it will then yield the same result previously mentioned as a reason not to allow floating-point values as non-type template arguments.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2183087', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/205929/']}
jdg_81823
stackexchange
llm_judgeable_groundtruth_similarity
11514
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Does our PC work as a Turing machine? The model of a Turing machine includes an infinite memory tape, which means infinitely many states. But suppose our PC has 128 MB of memory and a 30 GB disk; then it would have 256^30128000000 states, and thus only finitely many states. I know that we can write a program such that, if we run out of memory during execution, it will request that a full disk be swapped for an empty one and then resume execution. But what if we don't swap disks -- in this case, is it right to consider the PC a finite automaton (FA)? Now provide the response and nothing else.
You're right that physical computers have finite memory and so are not Turing-complete. There are other ways in which computability theory is not a good model for computing - it doesn't take into account time and memory constraints. Complexity theory was invented (perhaps) as a more realistic depiction of computing, but IMHO suffers from similar (but subtler) problems. On the other hand, in order to mathematically study the capabilities and limits of computing, we need to use some abstraction which is unconstrained. That makes the analysis possible. Similarly, in statistical mechanics we assume that the number of elements (atoms or molecules) is so large, that the behaviour is close to the limit (that is, we let the number of elements tend to infinity). Studying computing from an asymptotic perspective has similar advantages, but sometimes is misleading. Here are some examples of the latter: In cryptography, exponential algorithms are sometimes feasible. If we choose the wrong security parameters, our encryption might be insecure even though it's "provably secure". Polynomial-time algorithms are supposed to represent efficient and feasible computing, but many of them aren't feasible. As an example, most sophisticated matrix multiplication algorithms aren't used in practice. Modern complexity theory is obsessed with worst-case performance, and cannot analyze heuristic algorithms which are used in practice. NP-hard problems are considered infeasible, yet they are being solved in practice all the time. A separate issue is that real computers don't work like Turing machines at all. They work like RAM machines, which are a better abstraction for actual computing.
{}
{'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/11514', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/7384/']}
jdg_81824
stackexchange
llm_judgeable_groundtruth_similarity
2689708
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As a follow-up to this question I asked , I wondered what would happen if I imposed the weaker condition of having positive eigenvalues, rather than being positive definite. How do I construct an example of two matrices $A$ and $B$ such that: 1) $A$ and $B$ have strictly positive eigenvalues. 2) $A + B$ has strictly negative eigenvalues (is this even possible?). 3) $AB$ has strictly negative eigenvalues. Generally, I'm unsure how to begin going about constructing an example of a matrix that satisfies these properties. Now provide the response and nothing else.
The trace of a matrix is the sum of its eigenvalues. Also, $$\mathrm{tr}(A+B) = \mathrm{tr}(A)+\mathrm{tr}(B).$$ If $A$ and $B$ have strictly positive eigenvalues, then $$\mathrm{tr}(A),\mathrm{tr}(B)>0\implies \mathrm{tr}(A+B)>0,$$ so it is not the case that $A+B$ has negative eigenvalues. For a case where $A,B$ have only positive eigenvalues and $AB$ has only negative eigenvalues, take $$A=\pmatrix{1&0\\0&4}, B=\pmatrix{4&-3\\3&-2}$$The eigenvalues of $A$ are obviously $1$ and $4$, both eigenvalues of $B$ are $1$, and both eigenvalues of $AB$ are $-2.$ I constructed this just by messing around with the characteristic polynomials. I don't have any insight into the problem at all.
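A quick numerical check of the example above (pure Python, using the 2×2 characteristic polynomial λ² − tr·λ + det = 0):

```python
import math

def eig2(m):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return ((tr + r) / 2, (tr - r) / 2)
    r = math.sqrt(-disc)
    return (complex(tr, r) / 2, complex(tr, -r) / 2)

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, 4]]
B = [[4, -3], [3, -2]]

print(eig2(A))             # (4.0, 1.0)
print(eig2(B))             # (1.0, 1.0)
print(eig2(matmul(A, B)))  # (-2.0, -2.0)
print(eig2([[5, -3], [3, 2]]))  # A+B: complex pair with real part 3.5 > 0
```

The last line illustrates the trace argument: A+B can have complex eigenvalues, but their real parts must sum to tr(A)+tr(B) > 0, so they cannot all be strictly negative.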
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2689708', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/300680/']}
jdg_81825
stackexchange
llm_judgeable_groundtruth_similarity
33214
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need to adapt the Sieve of Eratosthenes for the usual integers to find all Gaussian primes with norm less than a specific limit. How to apply it to finding all Gaussian primes with norm less than 100?Thank you very much! Now provide the response and nothing else.
In the Sieve of Eratosthenes for rational integers, you select the positive ones and list those in increasing size. You do the same for the Gaussian integers:list them by increasing norm. But you need to select only one representative from each associate class, that is, modulo multiplication by units. In the rational integers, the units are $\pm 1$. In the Gaussian integers they are $\pm 1$ and $\pm i$. So to normalize the Gaussian integers, you may assume that $|a| \le |b|$ in $a+bi$ because you can always multiply by $i$ to make it so, and you may also assume that $a \ge 0$ because you can always multiply by $-1$ to make it so.
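Here is a small Python sketch of such a sieve. The helper names are my own, and it uses the slightly different (but common) first-quadrant normalization re > 0, im ≥ 0, which likewise picks exactly one representative from each associate class:

```python
LIMIT = 100  # sieve Gaussian integers with norm < 100

def norm(z):
    a, b = z
    return a * a + b * b

def mul(z, w):
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

def canon(z):
    """Pick the unique associate with re > 0 and im >= 0."""
    for _ in range(4):
        a, b = z
        if a > 0 and b >= 0:
            return z
        z = (-b, a)  # multiply by i
    return z

# All normalized non-unit Gaussian integers with norm < LIMIT, by norm.
cands = sorted({canon((a, b))
                for a in range(-9, 10) for b in range(-9, 10)
                if 1 < a * a + b * b < LIMIT}, key=norm)

composite = set()
primes = []
for z in cands:
    if z in composite:
        continue
    primes.append(z)              # smallest unmarked entry is prime
    for w in cands:               # cross off all proper multiples of z
        p = mul(z, w)
        if norm(p) < LIMIT:
            composite.add(canon(p))

print(len(primes))  # 25 Gaussian primes (up to units) with norm < 100
```

Marking only products with *normalized* cofactors w is enough, because z·(uw) is an associate of z·w for any unit u, so canon() sends them all to the same representative.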
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/33214', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9636/']}
jdg_81826
stackexchange
llm_judgeable_groundtruth_similarity
1688907
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am asked to find:

$$\int_0^{\frac{\pi}{6}} \frac{1}{\sin x-\cos x} \, dx$$

I have tried:

$$A=\int_0^{\frac{\pi}{6}} \frac{\sin x+\cos x}{\sin^2 x-\cos^2 x} \, dx$$

$$A=\int_0^{\frac{\pi}{6}} \frac{\sin x}{2\sin^2 x-1} \, dx + \int_0^{\frac{\pi}{6}} \frac{\cos x}{1-2\cos^2 x} \, dx$$

$$u=\sin x,\qquad du=\cos x\,dx$$

$$v=\cos x,\qquad dv=-\sin x \,dx$$

$$A=\int \frac{du}{2u^2-1} + \int \frac{dv}{2v^2-1}$$

But I am unable to move forward. Now provide the response and nothing else.
HINT:

$$\sin(x)-\cos(x)=\sqrt 2 \sin(x-\pi/4)$$

and integrate the cosecant function. If you wish to proceed as in the OP, then we have

$$\begin{align}\int_0^{\pi/6}\frac{1}{\sin(x)-\cos(x)}\,dx&=\int_0^{\pi/6}\frac{\sin(x)+\cos(x)}{\sin^2(x)-\cos^2(x)}\,dx\\\\&=\int_1^{\sqrt 3/2}\frac{1}{2u^2-1}\,du+\int_0^{1/2}\frac{1}{2v^2-1}\,dv\\\\&=\frac12 \int_1^{\sqrt 3/2}\left(\frac{1}{\sqrt 2 u-1}-\frac{1}{\sqrt 2 u+1}\right)\,du+\frac12 \int_0^{1/2}\left(\frac{1}{\sqrt 2 v-1}-\frac{1}{\sqrt 2 v+1}\right)\,dv\end{align}$$

Can you finish now?
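For what it's worth, a quick numerical sanity check of the hint (pure Python; the closed form below comes from the cosecant route, using ∫ csc u du = ln|tan(u/2)| + C):

```python
import math

a, b = 0.0, math.pi / 6

def f(x):
    return 1.0 / (math.sin(x) - math.cos(x))

# Via sin x - cos x = sqrt(2) sin(x - pi/4) and the cosecant antiderivative:
closed = (math.log(abs(math.tan((b - math.pi / 4) / 2)))
          - math.log(abs(math.tan((a - math.pi / 4) / 2)))) / math.sqrt(2)

# Composite Simpson's rule for comparison (n must be even).
n = 1000
h = (b - a) / n
s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
numeric = s * h / 3

print(closed)   # ≈ -0.81 (the integrand is negative on [0, pi/6])
print(numeric)  # agrees with the closed form
```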
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1688907', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/293988/']}
jdg_81827
stackexchange
llm_judgeable_groundtruth_similarity
2420193
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I find that my constructors are starting to look like this:

public MyClass(Container con, SomeClass1 obj1, SomeClass2 obj2, ...)

with an ever-increasing parameter list. Since "Container" is my dependency injection container, why can't I just do this:

public MyClass(Container con)

for every class? What are the downsides? If I do this, it feels like I'm using a glorified static. Please share your thoughts on IoC and Dependency Injection madness. Now provide the response and nothing else.
You are right that if you use the container as a Service Locator, it's more or less a glorified static factory. For lots of reasons I consider this an anti-pattern (also see this excerpt from my book). One of the wonderful benefits of Constructor Injection is that it makes violations of the Single Responsibility Principle glaringly obvious. When that happens, it's time to refactor to Facade Services . In short, create a new, more coarse-grained interface that hides the interaction between some or all of the fine-grained dependencies you currently require.
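To make the Facade Service idea concrete, here is a rough sketch (in Python for brevity rather than C#, and with class names invented for the example). The collaborators that work together are grouped behind one coarse-grained service, and the consumer injects only that:

```python
class Validator:
    def validate(self, order):
        return bool(order)

class PaymentGateway:
    def charge(self, order):
        return "charged"

class Inventory:
    def reserve(self, order):
        return "reserved"

# Facade Service: one coarse-grained dependency that hides the
# interaction between the fine-grained ones.
class CheckoutService:
    def __init__(self, validator, gateway, inventory):
        self._validator = validator
        self._gateway = gateway
        self._inventory = inventory

    def checkout(self, order):
        if not self._validator.validate(order):
            raise ValueError("invalid order")
        self._inventory.reserve(order)
        return self._gateway.charge(order)

# The consumer now takes ONE dependency instead of three, so its
# constructor no longer advertises a Single Responsibility violation.
class OrderProcessor:
    def __init__(self, checkout):
        self._checkout = checkout

    def process(self, order):
        return self._checkout.checkout(order)

processor = OrderProcessor(
    CheckoutService(Validator(), PaymentGateway(), Inventory()))
print(processor.process({"id": 1}))  # charged
```

The container still wires up the fine-grained pieces, but each constructor stays honest about how many responsibilities its class really has.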
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/2420193', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10333/']}
jdg_81828
stackexchange
llm_judgeable_groundtruth_similarity
516617
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The WindowsBase DLL defines the IWeakEventListener event with summary: Provides event listening support for classes that expect to receive events through the WeakEvent pattern and a System.Windows.WeakEventManager. This vague description doesn't describe what the 'WeakEvent pattern' actually is. So, what is this pattern, why is it used and is it useful outside of WPF applications? EDIT Some good answers already, but no one has talked about whether this pattern is useful outside of WPF applications. It seems to me that the weak event pattern, like dependency properties, is inextricably linked to the WPF APIs and DLLs. Is there an equivalent implementation available for non-WPF applications? Now provide the response and nothing else.
The important bit is in the remarks: The principal reason for following the WeakEvent pattern is when the event source has an object lifetime that is potentially independent of the event listeners. Using the central event dispatching of a WeakEventManager allows the listener's handlers to be garbage collected even if the source object persists So if you have publisher and subscriber objects, then normally after subscriber has subscribed to publisher 's event, subscriber can't be garbage collected. The weak event pattern makes the link between the two "weak" (as in WeakReference ) so that there isn't this dependency. (The alternative is to unsubscribe from the event when subscriber wants to become eligible for garbage collection, but that gets messy.)
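The mechanics can be sketched in a few lines. This uses Python's weakref module rather than WPF's WeakEventManager, but the idea is the same: the publisher holds only weak references, so subscribing does not keep the listener alive:

```python
import weakref

class WeakEvent:
    """Publisher that holds its listeners weakly (rough analogue of the pattern)."""
    def __init__(self):
        self._refs = []

    def subscribe(self, listener):
        self._refs.append(weakref.ref(listener))

    def fire(self, payload):
        live = [r for r in self._refs if r() is not None]
        for r in live:
            r().on_event(payload)
        self._refs = live  # prune listeners that were garbage collected

class Subscriber:
    def __init__(self):
        self.received = []
    def on_event(self, payload):
        self.received.append(payload)

event = WeakEvent()
sub = Subscriber()
event.subscribe(sub)
event.fire("hello")
print(sub.received)  # ['hello']

probe = weakref.ref(sub)
del sub              # the only strong reference goes away...
event.fire("again")  # ...and the publisher does not resurrect it
print(probe() is None)  # True on CPython (refcounting collects immediately)
```

The publisher can outlive any number of subscribers without leaking them, which is exactly the lifetime decoupling the WeakEvent pattern is after.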
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/516617', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/24874/']}
jdg_81829
stackexchange
llm_judgeable_groundtruth_similarity
2566412
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I find the nearest value in a numpy array? Example: np.find_nearest(array, value) Now provide the response and nothing else.
import numpy as np

def find_nearest(array, value):
    array = np.asarray(array)
    idx = (np.abs(array - value)).argmin()
    return array[idx]

Example usage:

array = np.random.random(10)
print(array)
# [ 0.21069679  0.61290182  0.63425412  0.84635244  0.91599191  0.00213826
#   0.17104965  0.56874386  0.57319379  0.28719469]

print(find_nearest(array, value=0.5))
# 0.568743859261
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/2566412', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/307633/']}
jdg_81830
stackexchange
llm_judgeable_groundtruth_similarity
1796791
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried something like this:

boolean funkyBoolean = true;
int array[] = funkyBoolean ? {1,2,3} : {4,5,6};

But this code won't even compile. Is there any explanation for this? Isn't funkyBoolean ? {1,2,3} : {4,5,6} a valid expression? Thanks in advance! Now provide the response and nothing else.
You can only use the {1, 2, 3} syntax in very limited situations, and this isn't one of them. Try this: int array[] = funkyBoolean ? new int[]{1,2,3} : new int[]{4,5,6}; By the way, good Java style is to write the declaration as: int[] array = ... EDIT: For the record, the reason that {1, 2, 3} is so restricted is that its type is ambiguous. In theory it could be an array of integers, longs, floats, etc. Besides, the Java grammar as defined by the JLS forbids it, so that is that.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1796791', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/181915/']}
jdg_81831
stackexchange
llm_judgeable_groundtruth_similarity
186290
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I know my question is a bit vague, sorry for this. Let $k$ be a field of characteristic zero. Consider the Grothendieck ring of varieties over $k$, usually denoted by $K_0(Var_k)$. This is generated by isomorphism classes of varieties over $k$ modulo the relations [X]=[Y]+[X-Y] whenever $Y$ is a closed subvariety of $X$. People usually refer to [X] as the "naive" motive of $X$. On the other hand, one has Voevodsky's "true" motives $DM_{gm}(k)$ (not as true as we would like to, I know !) and to any variety $X$ we can attach an object $M(X)$in $DM_{gm}(k)$. Why is this $M(X)$ more serious than the naive one? That is, can you give some examples of properties that cannot be read at the level of $K_0(Var_k)$ but that one sees when working in $DM_{gm}(k)$? Now provide the response and nothing else.
I think the presence of Voevodsky's category of (mixed) motives is a red herring here. Let me briefly explain why I think that, and then say why any "real" notion of motives (say, pure Chow motives, as in Vivek Shende's comment) or, I guess, Voevodsky's motives, are much more serious than the so-called naive motives. The (partially conjectural) motivic philosophy says we should functorially associate to each variety $X$ a motive, which is an object in some Abelian category with a "weight" (which is just a number) associated to any simple object; any reasonable cohomology theory (e.g. Betti cohomology) is supposed to factor through this functor. If $X$ is a smooth projective variety, this motive is naturally supposed to be a direct sum of (semi-simple) pieces in each weight. A cohomological realization functor applied to the weight $i$ piece should be the degree $i$ cohomology of the original variety. For a general variety, one does not get a semi-simple object, but rather some iterated extension of pieces of large weight by pieces of smaller weight. The associated filtration gives the weight filtration on passing to some cohomological realization. Proving that something like this is true is pretty far out of reach. But anyway, the picture is supposed to be something like $$\text{Varieties}\to \text{Mixed Motives}\overset{ss}{\to} \text{Pure Motives}$$where the second map is semi-simplification w.r.t. the weight filtration. We don't know what the middle term in this diagram is, but there is a perfectly good term on the right hand side, given by Grothendieck's Chow Motives ; Voevodsky's category is a candidate for the derived category of the middle object. There are several other candidates as well, I guess. Now, we can also send any variety $X$ to its class $[X]$ in the Grothendieck ring of varieties, $K_0(\text{Var})$. 
This remembers some of the "motivic" information of $X$; in particular, there is a map $K_0(\text{Var})\to K_0(\text{Chow Motives}_\mathbb{Q})$ so that if $X$ is smooth and projective, the image of $[X]$ under this map will be the class of its associated Chow motive in the K-group of Chow motives. If $X$ is not smooth and projective, it will be (conjecturally) the class of the semi-simplification of its (mixed) motive with respect to the weight filtration (because $K_0$, by definition, forgets all extension information). So in particular, this is the "motivic" manifestation of Dan Petersen's answer: the Grothendieck ring of varieties forgets the weight filtration. But aside from this, I think "mixedness" is not really important for understanding this question--the Grothendieck ring of varieties forgets a lot more than the weight filtration! Because smooth projective varieties are "cohomologically pure," the Grothendieck ring of varieties does "remember," say, the Hodge structures on their cohomology groups. Namely, one can take a class in $K_0(\text{Var})$ and send it to its Euler characteristic in the $K$-theory of rational Hodge structures; because of purity, one can pick out its individual cohomology groups. The main issue is that the Grothendieck ring of varieties forgets morphisms! Namely, one can recover the map $$H^i: \text{Smooth Varieties} \mapsto H^i(-, \mathbb{Q})$$sending a smooth projective variety to its $i$-th cohomology group, but not the functor. And if you can get much algebraic geometry done without using any of the functorial properties of cohomology, I'd be very impressed. Let me just remark that the map $K_0(\text{Var})\to K_0(\text{Chow Motives}_\mathbb{Q})$ is quite mysterious, and does forget some information. 
For example, it is not injective--given two isogenous elliptic curves $E_1, E_2$ over $\mathbb{C}$, the class $[E_1]-[E_2]$ is not zero in $K_0(\text{Var})$ (by results of Larsen-Lunts, for example), but its image in $K_0(\text{Chow Motives}_\mathbb{Q})$ is zero. On the other hand, I see no reason for the map to be surjective either (for example, I have no idea how to hit a random Chow Motive of pure weight).
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/186290', 'https://mathoverflow.net', 'https://mathoverflow.net/users/61380/']}
jdg_81832
stackexchange
llm_judgeable_groundtruth_similarity
15962325
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm doing some writing where I use MathJax to render the math. I also occasionally include SVG diagrams which are generated dynamically by javascript. Those SVG diagrams occasionally include math. I'd like the text elements in those SVG diagrams to be rendered using MathJax. I know how to cause the dynamic math to be rendered . However, the mathjax output is in <span> s which aren't valid SVG and don't show up. This persists when I configure MathJax to use SVG output mode, though this is perhaps due to an improper use of SVG output mode. I changed by MathJax CDN link to http://cdn.mathjax.org/mathjax/2.1-latest/MathJax.js?config=TeX-AMS-MML_SVG , which did not produce SVG output. I haven't been able to coax MathJax into actually outputting SVG elements yet. I've considered using an SVG <foreignObject> tag which is non-ideal because (to my knowledge) I must specify a width and height, which is inconvenient. Is there a better way to include MathJax rendered text inside SVG inside HTML? Now provide the response and nothing else.
Currently, the only way to include MathJax within an SVG diagram is through <foreignObject>. Yes, it is not ideal, not only because you need to provide a size, but also because IE9+ doesn't support <foreignObject>. As for the SVG output, if you have used the MathJax contextual menu to select a math renderer, that will override the renderer choice in the document, so you may still be seeing HTML-CSS output for that reason. The value is stored in a cookie, so that it will be remembered from session to session. You can delete the cookie to remove the setting, or use the MathJax menu again to select SVG rendering. Note, however, that this isn't going to help you either, as MathJax's SVG output is not just an SVG snippet that can be included into a larger SVG file, but includes some HTML and the <svg> element itself. Moreover, MathJax needs the surrounding HTML to determine things like the font-size to use, and some other factors, so it is not going to be able to be placed directly into your SVG diagram. The <foreignObject> approach is really the only reliable way at the moment. Edit: Here is a full example:

<!DOCTYPE html>
<html>
<head>
<title>MathJax in SVG diagram</title>
<script type="text/javascript"
  src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_SVG"></script>
</head>
<body>

<svg xmlns="http://www.w3.org/2000/svg" version="1.1" width="1000" height="500">
  <circle cx="100" cy="100" r="99" fill="yellow" stroke="red" />
  <circle cx="100" cy="100" r="3" fill="blue" />
  <foreignObject x="100" y="100" width="100" height="100">
    <div xmlns="http://www.w3.org/1999/xhtml" style="font-family:Times; font-size:15px">
      \(\displaystyle{x+1\over y-1}\)
    </div>
  </foreignObject>
</svg>

</body>
</html>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15962325', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/83435/']}
jdg_81833
stackexchange
llm_judgeable_groundtruth_similarity
3476969
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a Rotate Bits Right in Ruby ? Or how can I do that please. Thanks Now provide the response and nothing else.
Some facts:

- Ruby has operators << and >> to shift, but no built-in rotate operator. You have to fake it.
- Ruby's Fixnum class automatically promotes to Bignum when the value exceeds the machine word size. This includes numbers that would fit in an unsigned word but not a signed word -- for example, 0xffffffff is a positive Bignum, not a negative Fixnum.

So if you want a rotate operation, you a) have to write it using the shift operators, b) either hardcode 32 or 64 bits or ask Fixnum for the word size, and c) accept that the result might end up being a Bignum. That being said, this might work:

class Integer
  def ror count
    (self >> count) | (self << (32 - count)) & 0xFFFFFFFF
  end
end

>> printf "0x%x\n", (0x01234567.ror 4)
0x70123456
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3476969', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/419561/']}
jdg_81834
stackexchange
llm_judgeable_groundtruth_similarity
5710758
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an application consisting of a base app that brings in several modules. The base app reads a parameter file into a configuration hash, and I want to share it across all my modules. Currently, I am passing a 'parent' object down to modules, and then those modules are doing stuff like self.parent.config to obtain the configuration. However, as there are several levels to the module hierarchy, I find myself doing things like self.parent.parent.config, which is starting to look bad. What are some better patterns for sharing a config object across an application and it's modules? I am thinking about having a 'config' module which basically creates a global config variable that can be set by the base app, then imported and accessed by other modules, but I am not sure if using globals like that is a bad practice for other reasons. I am reasonably new to Python so be nice =) Now provide the response and nothing else.
You could just:

import config

and have a global config module.

Excerpts from my comments: You can always add special rules for odd situations by just saying oddValue if isOddSituation() else config.normalValue. If you want configuration modules to be hierarchically subclassable (like my other answer describes), then you can represent a config as a class, or you can use the copy module and make a shallow copy and modify it, or you can use a "config dictionary", e.g.:

import config as baseConfig
config = dict(vars(baseConfig), overriddenValue=etc)

It doesn't really matter too much which scope you're in.
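Here is a tiny self-contained sketch of the "config dictionary" idea, with plain dicts standing in for the config module so there is nothing to import:

```python
# Stand-in for what a shared config.py would define:
base_config = {"host": "localhost", "port": 8080, "debug": False}

# A module that needs one odd value overrides just that key;
# dict(base, **overrides) copies, so the original stays untouched.
dev_config = dict(base_config, debug=True)
test_config = dict(base_config, host="test.internal", port=9090)

print(dev_config["debug"], dev_config["port"])  # True 8080
print(base_config["debug"])                     # False (unchanged)
```

Each module can derive its own view of the configuration this way without any parent.parent.config chains.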
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5710758', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/150050/']}
jdg_81835
stackexchange
llm_judgeable_groundtruth_similarity
36344
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm curious about what happens when you insert organelles from a cell into another. In particular, mitochondria. Take two cells from your own body. Somehow extract a mitochondria from one cell and somehow insert it in the other cell. Will the cell with the foreign mitochondria make use of it (respiration), or will it dispose it? What if the foreign mitochondria comes from someone else's body? What if the foreign mitochondria comes from another species' cell? Now provide the response and nothing else.
There is evidence that cells can exchange mitochondria. Some donor cells transfer mitochondria to recipient cells via cytoplasmic nanotubes (within the same species). This happens endogenously, and triggering factors may include stress.

References:

- Pasquier et al. 2013
- Ahmad et al. 2014
- Wang and Gerdes 2015
- Hayakawa et al. 2016

On interspecies mitochondria transfer: In a study by Yang and Koob, the feasibility of mitochondrial transplants was studied. Mitochondria isolated from donor cells were initially injected into mouse oocytes/embryos. Then mitochondria, along with a chunk of cytoplasm and membrane, were taken out; these cell-like structures were called mitocytoplasts. Then these mitocytoplasts were fused with recipient cells (mouse rho0 cells, which lack mitochondrial DNA) with the help of a viral fusion protein. They found that the recipient cells supported mitochondria from the same species (donor species included mouse, rat, Mongolian gerbils and Golden Syrian hamsters). The success of a "mitochondrial xenograft" depends on how different the species are, and as shown in the abovementioned experiment, a recipient cell (mouse) would not support mitochondria from even a closely related species (rat). This incompatibility would primarily be based on the dependency of mitochondria on the nuclear genome, which may be different for different species. It is possible that "mitochondrial xenografts" may be accepted; the impossibility of it has not been exhaustively verified.
{}
{'log_upvote_score': 5, 'links': ['https://biology.stackexchange.com/questions/36344', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/16897/']}
jdg_81836
stackexchange
llm_judgeable_groundtruth_similarity
1412091
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The typewriter sequence is an example of a sequence which converges to zero in measure but does not converge to zero a.e. Could someone explain why it does not converge to zero a.e.? $f_n(x) = \mathbb 1_{\left[\frac{n-2^k}{2^k}, \frac{n-2^k+1}{2^k}\right]} \text{, where } 2^k \leqslant n < 2^{k+1}.$ Note: the typewriter sequence (Example 7). Now provide the response and nothing else.
Note that at any choice of $x$ and for any integer $N$, there is an $n>N$ with $f_n(x)=1$. So, the numerical sequence $f_n(x)$ cannot converge to $0$. Note, however, that we can certainly select a subsequence of this sequence of functions that converges pointwise a.e.
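Both phenomena can be seen numerically. The snippet below (a small illustration, not part of the original answer) checks that at a fixed point x the sequence keeps returning to 1, while the measure of the set where f_n = 1 shrinks to 0:

```python
def f(n, x):
    """Typewriter indicator f_n(x) for n >= 1 and x in [0, 1]."""
    k = n.bit_length() - 1          # the k with 2**k <= n < 2**(k+1)
    lo = (n - 2**k) / 2**k
    return 1 if lo <= x <= lo + 1 / 2**k else 0

x = 0.3
# For every k there is some n in [2**k, 2**(k+1)) with f_n(x) = 1,
# so f_n(x) = 1 infinitely often: no pointwise convergence at x.
hits = [any(f(n, x) for n in range(2**k, 2**(k + 1))) for k in range(12)]
print(all(hits))  # True

# But the measure of {f_n = 1} is 1/2**k -> 0: convergence in measure.
print([1 / 2**k for k in range(6)])  # [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]

# The subsequence f_{2**k} is the indicator of [0, 1/2**k],
# which does converge to 0 at every x > 0.
print([f(2**k, x) for k in range(12)])  # eventually all 0
```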
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1412091', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/260879/']}
jdg_81837
stackexchange
llm_judgeable_groundtruth_similarity
9033191
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I add line breaks using the SetText method? I tried Clipboard.SetText("eee \n xxxx"); but it doesn't give me the expected result when I paste the clipboard data in the notepad. Expected result: eeexxxx How can I accomplish this? Now provide the response and nothing else.
Windows uses CR+LF pairs to indicate a new line. This equates to "\r\n" in C#. However, you are just sending "\n" , i.e. a single LF rather than a CR+LF pair. Use Environment.NewLine rather than "\n" . This is the idiomatic way to spell "\r\n" in C#. As a bonus, if you ever run your code on a *nix platform, Environment.NewLine would evaluate to "\n" , which is the *nix new line indicator. Finally, in my view Environment.NewLine is preferable from a readability perspective. It documents what this thing is logically rather than relying on you knowing the magic constants.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9033191', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/396335/']}
jdg_81838
stackexchange
llm_judgeable_groundtruth_similarity
5750773
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In a similar vein to Sending Pause and DTMF input in android , I'm trying to send the pause character "," to the dialer. This works on HTC Sense phones and even on the Xoom, but not on "stock experience" phones like the Nexus One or T-Mobile G2 (and I suspect the Motorola Droid). These phones seem to have a dialer that tries to pretty-format the number (ie adding dashes) and stop upon hitting a comma character. Interestingly, it doesn't choke on a "p" character, though it will strip out "p"s and keep adding numbers. Here is what the ActivityManager sees: I/ActivityManager( 92): Starting activity: Intent { act=android.intent.action.DIAL dat=tel:8883333,444 cmp=com.android.contacts/.DialtactsActivity } I've also tried the encoded form, "tel:8883333%2C444" with no difference in behavior on these phones. I've tried "p", as mentioned, but these characters are dropped resulting in the dialers having 888-333-3444 incorrectly populated, and I'm not sure that "p" is correct anyway. So, the question: Is there a way to specify a pause that works across most or all android dialers? Now provide the response and nothing else.
Short answer: Doesn't look like it's possible using the native dialer. Long answer: The native dialer in Android uses the following code to extract the number you pass in to the dialer using an Intent:

if ("tel".equals(uri.getScheme())) {
    // Put the requested number into the input area
    String data = uri.getSchemeSpecificPart();
    setFormattedDigits(data, null);
    return true;
}

Within the setFormattedDigits method the number gets transformed thusly:

String dialString = PhoneNumberUtils.extractNetworkPortion(data);

Looking at the docs for extractNetworkPortion you'll notice that it "Extracts the network address portion [where the] network address portion is everything up to DTMF control digit separators (pause or wait)". So the code is intentionally stripping out the pause character and anything that comes after it. The only alternative I can think of is to replace the dialer or use the ACTION_CALL action instead of ACTION_DIAL . That'll bypass the dialer, so it should be used with caution.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5750773', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/719763/']}
jdg_81839
stackexchange
llm_judgeable_groundtruth_similarity
85254
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I stumbled upon the Hilbert transform when researching single-sideband modulation. Apparently, when the demodulator frequency is a bit off by $\Delta f$, the signal after demodulation and low-pass filtering is not $m(t)$, where $m(t)$ is the original signal, but rather:

$$m(t)\cos(2 \pi \,\Delta f \,t) + \hat{m}(t)\sin(2 \pi \,\Delta f \,t)$$

...where $\hat{m}(t)$ is the Hilbert transform of the original signal. So the inaccurately demodulated signal oscillates between the original signal and its Hilbert transform. I tried the equation on real music. I noticed that if $\Delta f$ is 100 Hz, frequencies are markedly distorted, but music at a $\Delta f$ of 10 Hz sounds acceptable (though for someone with a good musical ear, even a 10 Hz shift might not be acceptable). So I became interested in whether a human could hear a raw Hilbert-transformed signal -- then $\Delta f$ would be 0 Hz, but there would be a constant phase shift in the SSB demodulation equation. This can occur in single-sideband modulation, for example, if the demodulator has exactly the same frequency as the modulator but its phase is 90° off. I can't notice any difference between the original signal (music) and the Hilbert-transformed signal that I created. I researched whether there's any theoretical justification for this, and found this answer . A Hilbert transform introduces a fixed 90° phase shift for all frequencies. Practical Hilbert transformers can only do this over a range of frequencies. Clearly a 90° phase shift represents a longer delay at low frequencies (one fourth of the period) than at higher frequencies where the period is shorter [snip] So a Hilbert transform filter passes high frequencies quickly while introducing progressively longer time delays for lower frequencies.
In the limit where the frequency approaches zero, the time delay approaches infinity. [snip] However, a typical audio signal spanning a number of octaves will be distorted due to high frequencies arriving sooner than low frequencies. Can this really be the case? I didn't notice any error in the timing of different frequencies. I think the Quora answer might be erroneous, because Hilbert transform adds delay to the individual sine/cosine wave components of the original signal, with each component delayed by 90°. So for example a sine wave that has always been on, and will always be on, is delayed by 90° (so low frequencies are delayed more than high frequencies). But there should be no way of hearing this, right? However, what you generally are interested is for example a sine wave that is instantly turned on, for example at the moment $t=0$ . Then you no longer have $\sin(2 \pi f t)$ , but $u(t) \sin(2 \pi f t)$ , where $u(t)$ is the (Heaviside) unit step function that adds lots of high frequencies to the signal, so the signal is no longer a simple sine wave, and the high-frequency envelope, being high in frequencies, shouldn't be delayed as much as the low-frequency sine wave (whose delay / phase we can't hear). Is my reasoning correct? Or could it be possible that Hilbert transform can be heard in audio by different delay for different frequencies? Now provide the response and nothing else.
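As a sanity check on the offset-demodulation formula quoted in the question, here is a small numerical sketch (Python with NumPy/SciPy; the two-tone "message", carrier frequency, and offset are made-up illustration values). It builds an upper-sideband SSB signal in analytic (complex) form, so that mixing with the offset carrier followed by ideal low-pass filtering reduces to a frequency shift plus taking the real part, and then compares the result against $m(t)\cos(2\pi\,\Delta f\,t) + \hat{m}(t)\sin(2\pi\,\Delta f\,t)$:

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
# Toy "message": two tones standing in for music
m = np.cos(2 * np.pi * 300 * t) + 0.5 * np.cos(2 * np.pi * 440 * t)
m_hat = np.imag(hilbert(m))  # Hilbert transform of m

fc, df = 2000.0, 100.0  # carrier frequency and demodulator offset (Hz)
# Upper-sideband SSB signal in analytic (complex) form
ssb = (m + 1j * m_hat) * np.exp(2j * np.pi * fc * t)

# Mixing with the offset carrier and ideal low-pass filtering is equivalent
# to shifting the analytic signal down by fc + df and taking the real part
demod = 0.5 * np.real(ssb * np.exp(-2j * np.pi * (fc + df) * t))

# The expression from the question (up to the usual factor of 1/2)
predicted = 0.5 * (m * np.cos(2 * np.pi * df * t)
                   + m_hat * np.sin(2 * np.pi * df * t))
print(np.allclose(demod, predicted))  # True
```

The factor of 1/2 is the standard mixer loss; it scales the whole expression and doesn't change what you hear.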
Can humans hear the Hilbert transform in audio? Generally, no. The human auditory system is fairly insensitive to monaural phase shifts. "Monaural" means "the same phase shift in both ears". That's very different from interaural phase differences, to which the ear is very sensitive: even very small shifts are easily detectable.

One can consider the Hilbert transform some sort of allpass filter in the sense that $|H(\omega)| = 1$. Most (but not all) monaural allpass filters are indeed inaudible. What you can hear are large differences and/or gradients of the group delay over frequency.

The Hilbert transform is a bit of an oddball. The impulse response is infinite in both time directions and hence it's infinitely non-causal. The group delay is actually 0, except at DC and Nyquist where it's infinite. You can't implement anything like this, so any real-world implementation can only approximate an ideal Hilbert transformer. Any implementation will have:

- causality, i.e. a bulk delay
- pre-ringing
- some "anomalies" in group delay, amplitude, or both at very high and very low frequencies

Pre-ringing can smear out transient signals, and low-frequency group delay can make the bass less "punchy" and compact. To what extent this is audible depends (a lot) on the specific signals and the details of the approximation. In most cases a well-designed Hilbert transformer will make little or no audible difference, unless very low latency is required.

The Hilbert transform and other allpass filters are actually a well-known "trick" for creating a second audio signal that sounds essentially the same but is largely uncorrelated with the original signal. This can be used to create a diffuse stereo signal from a mono signal (stereo reverb, surround sound).
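The approximation issues listed above can be made concrete with a small design sketch (Python with NumPy/SciPy; the filter length and band edges are arbitrary illustration choices). A causal FIR Hilbert transformer of odd length is antisymmetric (type III), which forces zeros at DC and Nyquist, so unity gain can only be requested in a midband; the linear phase of the design corresponds to a bulk delay of half the filter length:

```python
import numpy as np
from scipy.signal import remez, freqz

# Type-III (odd-length, antisymmetric) FIR Hilbert transformer. Odd length
# forces zeros at DC and Nyquist, so we can only ask for gain 1 in a
# midband (band edges below are normalized to fs = 1).
numtaps = 101
taps = remez(numtaps, [0.05, 0.45], [1.0], type='hilbert', fs=1.0)

w, h = freqz(taps, worN=4096, fs=1.0)
mag = np.abs(h)
band = (w > 0.05) & (w < 0.45)

print(mag[band].min(), mag[band].max())  # close to 1 in the designed band
print(mag[0], mag[-1])                   # near 0 at DC and Nyquist: the "anomalies"
print((numtaps - 1) / 2)                 # bulk delay in samples, the price of causality
```

Making the low-frequency band edge approach DC requires ever longer filters (and hence more bulk delay and pre-ringing), which is the practical face of the infinite group delay at DC.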
{}
{'log_upvote_score': 5, 'links': ['https://dsp.stackexchange.com/questions/85254', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/65164/']}
jdg_81840