source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths, 1-8) | prompt (stringlengths, 209-40.4k) | gold_standard_solution (stringlengths, 0-56.7k) | verification_info (stringclasses, 1 value) | metadata (stringlengths, 138-225) | problem_id (stringlengths, 9-10)
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 633614 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Circuit: I have to determine the relationship between \$V_\text{out}\$ and \$V_\text{in}\$ . When analyzing the circuit I realized it was quite similar to a non-inverting op-amp circuit. I know from my past homework that I can calculate \$V_\text{out}\$ in the following way: $$V_\text{out} = (1 + R_2/R_1)V_\text{in}$$ This ends up giving me: $$V_\text{out} = (1 + 9\text{ k}/1\text{ k})V_\text{in} = 10V_\text{in}$$ However, to use this formula I had made the assumption in the past that \$V_{+} = V_\text{in}\$ . But now there is a resistor in front of the non-inverting (positive) input, so I do not think this still holds true, and I am not sure what value to give it now. I think I have to use the 5 V power source to decide this, but I also get the feeling it will have an effect on the inverting (negative) side, though I do not know in what way. Any tips or hints to point me in the right direction?
Now provide the response and nothing else.
| The voltage, \$V_-=V_+=V_\text{in}\$ is produced by the voltage divider between \$V_\text{out}\$ and the \$ 5\text{ V}\$ source. Thus, \$V_\text{in}=5+0.1(V_\text{out}-5) \$ | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/633614', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/315844/']} | jdg_78541 |
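A quick algebraic follow-up (a pure rearrangement of the relation quoted in the answer, with the 0.1 factor presumably coming from the 1 kΩ/(1 kΩ + 9 kΩ) divider in the question):

$$V_\text{in} - 5 = 0.1\,(V_\text{out} - 5) \;\Longrightarrow\; V_\text{out} = 10\,V_\text{in} - 45,$$

so the stage still has the expected gain of 10, but with a DC offset set by the 5 V source.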
stackexchange | llm_judgeable_groundtruth_similarity | 43921240 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dictionary like: d = {'a': 1, 'b': 2, 'c': 3, 'd': 4} which I would like to convert to a namedtuple.My current approach is with the following code namedTupleConstructor = namedtuple('myNamedTuple', ' '.join(sorted(d.keys())))nt= namedTupleConstructor(**d) which produces myNamedTuple(a=1, b=2, c=3, d=4) This works fine for me (I think), but am I missing a built-in such as... nt = namedtuple.from_dict() ? UPDATE: as discussed in the comments, my reason for wanting to convert my dictionary to a namedtuple is so that it becomes hashable, but still generally useable like a dict. UPDATE2: 4 years after I've posted this question, TLK posts a new answer recommending using the dataclass decorator that I think is really great. I think that's now what I would use going forward.
Now provide the response and nothing else.
| To create the subclass, you may just pass the keys of a dict directly: MyTuple = namedtuple('MyTuple', d) Now to create tuple instances from this dict, or any other dict with matching keys: my_tuple = MyTuple(**d) Beware: namedtuples compare on values only (ordered). They are designed to be a drop-in replacement for regular tuples, with named attribute access as an added feature. The field names will not be considered when making equality comparisons . It may not be what you wanted nor expected from the namedtuple type! This differs from dict equality comparisons, which do take into account the keys and also compare order agnostic. For readers who don't really need a type which is a subclass of tuple , there probably isn't much point to use a namedtuple in the first place. If you just want to use attribute access syntax on fields, it would be simpler and easier to create namespace objects instead:

>>> from types import SimpleNamespace
>>> SimpleNamespace(**d)
namespace(a=1, b=2, c=3, d=4)

my reason for wanting to convert my dictionary to a namedtuple is so that it becomes hashable, but still generally useable like a dict

For a hashable "attrdict" like recipe, check out a frozen box :

>>> from box import Box
>>> b = Box(d, frozen_box=True)
>>> hash(b)
7686694140185755210
>>> b.a
1
>>> b["a"]
1
>>> b["a"] = 2
BoxError: Box is frozen

There may also be a frozen mapping type coming in a later version of Python, watch this draft PEP for acceptance or rejection: PEP 603 -- Adding a frozenmap type to collections | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/43921240', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1870832/']} | jdg_78542
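As a supplement to the answer above, here is a minimal, self-contained Python sketch of the namedtuple-from-dict idea (standard library only; the names `MyTuple`, `d` and `t` are illustrative):

```python
from collections import namedtuple

d = {'a': 1, 'b': 2, 'c': 3, 'd': 4}

# Field names are taken straight from the dict's keys (insertion order in Python 3.7+).
MyTuple = namedtuple('MyTuple', d)
t = MyTuple(**d)

print(t)          # MyTuple(a=1, b=2, c=3, d=4)
print(t.a, t[0])  # attribute access and positional access both work
print(hash(t))    # hashable, unlike the original dict
```

As the answer warns, equality and hashing here look only at the ordered values, never at the field names.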
stackexchange | llm_judgeable_groundtruth_similarity | 26847192 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to reverse a list. Following is my code:

reverseList :: [Int] -> [Int]
reverseList [] = []
reverseList (x:xs) = x : reverseList xs

What ends up happening is I end up getting the list back in the same order. I even have a solution for how to reverse the list, but I am trying to understand what I did wrong here. I am very new to Haskell, so I think I should focus on understanding more; then I can solve more problems with ease. I know there are many solutions out there for this problem, but I need more help with understanding, especially what I did wrong here in this code.
Now provide the response and nothing else.
| There are several ways to solve this problem in Haskell. The naive approach would be to use the concatenate function ++ :

reverseList [] = []
reverseList (x:xs) = reverseList xs ++ [x]

However, this will be really slow for large lists since Haskell lists are really singly linked lists, so in order to append an element you have to traverse the entire list. An alternative would be to keep up with the list you're building in a helper function:

reverseList = go []
  where go acc []     = acc
        go acc (x:xs) = go (x:acc) xs

However, this is really just the fold pattern: reverseList = foldl (\acc x -> x : acc) [] But \acc x -> x : acc is just flip (:) , so this can be written as reverseList = foldl (flip (:)) [] However, the easiest way would probably be to just use the reverse function in Prelude. I would like to point out that your type of reverseList :: [Int] -> [Int] could be generalized to :: [a] -> [a] , you don't do anything special with the elements of the list, you're just building a new list with them. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26847192', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2733436/']} | jdg_78543
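To quantify the "really slow" remark, a rough back-of-the-envelope operation count (not part of the original answer) for a list of length $n$ is

$$T_{\text{naive}}(n) = T_{\text{naive}}(n-1) + \Theta(n) = \Theta(n^2), \qquad T_{\text{accumulator}}(n) = \Theta(n),$$

since each (++) in the naive version re-traverses the already-reversed tail, while the accumulator/foldl version performs one cons per element.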
stackexchange | llm_judgeable_groundtruth_similarity | 20003 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My friend, known as a liar, came to me two days ago and told stories about red mercury, something I never heard about. Could do magic, the pharaohs used it. 1 kg worth 30 million dollars, used in weapons something like that. I never believe him, he's not really a friend. Out of curiosity I started Googling about the subject, some say it's real, some say it's not. This documentary in Arabic got my attention, it's made by Al Jazeera, this is part 2 , it claims that the substance existed and was used by the Russians, and now the Mossad are in south Africa for that thing. You can watch minute 7:30, they talk to each other in English. In that documentary, they keep calling it, Red mercury 20, 20. Someone called Alan Kidger was murdered because he had something to do with red mercury, the documentary says. They spoke to someone in English at minute 19 in part one. Watch minute 32:50 part 1, a Russian nuclear chemist talking about it, I think it's red mercury RM 20 20 The British police arrested someone for reading this article and trying to produce the mercury. The USA said it doesn't exist, but they have a team in the U.S. Department of Energy (DOE) who's job is to find out more about the subject, according to Peter Hounam in the documentary. Samuel T. Cohen talks about it in minute 28 of part 2 in English. It seems to be used to create neutron bombs and small nuclear bombs. Terror accused in 'mercury sting' , would the people be arrested if there's nothing true about it? So what do we know about about Red mercury? And if it existed, then what's its chemical formula?
Now provide the response and nothing else.
| Red mercury does indeed exist, mercury (11) oxide (HgO) varies from yellow to red, here . The more fine the particles the more towards the yellow it is, the bigger the lumps, the more red. There is the neat trick that if you heat it in a low oxygen environment (say one in which there is charcoal to snap up the O2), it decomposes to mercury and oxygen (2HgO => 2Hg + O2). When heated to decomposition (932F) it decomposes into mercury and oxygen. Here. This can then be re-oxidized. Rinse and repeat. That's magic. However, if you read to the bottom of the "Reactivity Profile" et al. in the above link, you will see it's quite dangerous to play with and highly toxic. It can decompose spontaneously (explosively) under sunlight and/or friction. - It's conceivable that this was viewed as magical too. As I am unable to prove the existence of a special property of any mercury compounds or amalgams regarding nuclear weapons, I would posit that red mercury may have been spoken of or written about because it was covering for or being used as a synonym for something magical during WW2 - in technical terms, Germany and the rest of the industrialized world were trying to solve the supercriticality problem (creating a nuclear fission event on a large scale, ie. a bomb.) with the uranium, in simple terms they needed to create High Explosives, and ways of detonating them, to smash the uranium pieces together really hard or to compress a sphere of the material by surrounding it with the H.E. and detonating it as quickly as possible. Thus the code words Red Mercury might have come about as a "magical bullet" that wins the war. This is an unproven hypothesis though, it would need a reference from a consensus of historians of repute, there doesn't yet appear to be one. This is not to be confused with mercury fulminate , another explosive discovered in the 1830s, used to prime ammunition - this may well have been the chemical end-product that the terrorists were after in the article cited by the OP, here . | {} | {'log_upvote_score': 5, 'links': ['https://skeptics.stackexchange.com/questions/20003', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/13947/']} | jdg_78544 |
stackexchange | llm_judgeable_groundtruth_similarity | 2140629 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
While i'm developing in C/C++ and Java, i simply make a compile.bat script that does everything, that's fine for me. Why should i use make and why should i use ant?
Now provide the response and nothing else.
| Suppose you have 1000 source files and change just one of them. With your .bat script you will have to recompile the lot, with make you recompile just the one that changed. This can save quite a bit (read hours on a big project) of time. Even better, if you change one of your header files, make will re-compile only the source files that use that header. These are the two main features that have meant make and its offspring are used for all serious software development with compiled languages. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2140629', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/259365/']} | jdg_78545 |
stackexchange | llm_judgeable_groundtruth_similarity | 14543846 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Take the following two tables in Oracle: Create Table A( A int, B int, C int, Constraint pk_ab Primary Key(A, B), Unique (C));Create Table B( D int, E int, F int, Constraint fk_d Foreign Key (D) References A(A), Constraint fk_e Foreign Key (E) References A(B)); Why doesn't this statement work? Or more specifically, why shouldn't it work? The reason I'm trying to create this type of relation is say, in the future, I want to delete B.D , but keep the relation FK_E . I'm getting the error: ORA-02270: no matching unique or primary key for this column-list
Now provide the response and nothing else.
| "Why doesn't this statement work? Or more specifically, why shouldn't it work? " You have defined the primary key on A as a compound of two columns (A,B). Any foreign key which references PK_AB must match those columns in number. This is because a foreign key must identify a single row in the referenced table which owns any given row in the child table. The compound primary key means column A.A can contain duplicate values and so can column A.B; only the permutations of (A,B) are unique. Consequently the referencing foreign key needs two columns. Create Table B( D int, E int, F int, Constraint fk_de Foreign Key (D,E) References A(A,B)); "Since there are multiple PK's that table B references" Wrong. B references a single primary key, which happens to comprise more than one column, " say, in the future, I want to delete B.D, but keep the relation fk_e. " That doesn't make sense. Think of it this way: D is not a property of B, it is an attribute B inherits through its dependence on table A. One way to avoid this situation is to use a surrogate (or synthetic) key. Compound keys are often business keys, hence their columns are meaningful in a business context. One feature of meaningful column values is that they can change, and cascading such changes to foreign keys can be messy. Implementing a surrogate key would look like this: Create Table A( id int not null, A int, B int, C int, Constraint pk_a Primary Key(ID), constraint uk_ab Unique (A,B));Create Table B( a_id int, F int, Constraint fk_n_a Foreign Key (A_ID) References A(ID)); Of course, you could kind of do this using the schema you posted, as you already have a single column constraint on A(C). However, I think it is bad practice to reference unique constraints rather than primary keys, even though it's allowed. I think this partly because unique constraints often enforce a business key, hence meaning, hence the potential for change, but mainly because referencing primary keys just is the industry standard. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14543846', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2014801/']} | jdg_78546 |
stackexchange | llm_judgeable_groundtruth_similarity | 11769200 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have build an external framework, and I am trying to utilize some images which are in the resource directory of the framework. The framework is not being copied within the application but rather just referenced to it. How can I utilize UIImage accordingly from xyz.framework resource folder? Thanks
Now provide the response and nothing else.
| What you need to do is load the bundle for the framework, and then access the resources using the NSBundle object. For example, if there is a framework that defines a class "FrameworkClass", we can do: NSBundle *frameworkBundle = [NSBundle bundleForClass:[FrameworkClass class]];NSString *resourcePath = [frameworkBundle pathForResource:@"an_image" ofType:@"jpeg"];UIImage *image = [UIImage imageWithContentsOfFile:resourcePath]; That should more or less do what you want. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11769200', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/618658/']} | jdg_78547 |
stackexchange | llm_judgeable_groundtruth_similarity | 394946 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know this is an easy question, but I cannot demonstrate it properly. Suppose by contradiction that $A \cap \bar{Y} \neq \emptyset$. Then $\exists \ x \in A \cap \bar{Y}$. I need help formalizing this reasoning (or correcting if it is wrong) By the definition of $A$ open, $\exists \ U \subset A$ open neighborhood of $x$. Then my idea is to prove that $U \cap Y \neq \emptyset$ because $U$ in somewhat sense "touches" $Y$, because it's a neighborhood of $x$. But i can't write down a formal demonstration.Any help is appreciated.Thanks^^
Now provide the response and nothing else.
| Suppose that $A$ is open, and $A\cap Y=\varnothing$. Let $F=X\setminus A$; then $F$ is closed, and $Y\subseteq F$. Now take closures: $\operatorname{cl}Y\subseteq\operatorname{cl}F=F$, since $F$ is closed. But this immediately implies that $\operatorname{cl}Y\cap A=\varnothing$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/394946', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/74013/']} | jdg_78548 |
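For completeness, the neighbourhood idea the asker started with also works directly (an equivalent argument to the one above): if some $x$ belonged to $A \cap \operatorname{cl}Y$, then $A$ itself would be an open neighbourhood of $x$, and by the definition of closure every open neighbourhood of a point of $\operatorname{cl}Y$ meets $Y$, so $A \cap Y \neq \varnothing$, contradicting the hypothesis. Hence $A \cap \operatorname{cl}Y = \varnothing$.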
stackexchange | llm_judgeable_groundtruth_similarity | 1491927 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We know that all primes are of the form $ 6k ± 1 $ with the exception of 2 and 3. We also know that not all numbers of the form $ 6k ± 1 $ are prime. This leads to four distinct sets of pairs adjacent to a multiple of six: Twin Primes, Example: $ 5, 7 $ (prime followed by a prime) Twin Composites, Example: $ 119, 121 $ (composite followed by a composite) Prime-Composite, Example: $ 23, 25 $ (prime followed by a composite) Composite-Prime, Example: $ 35, 37 $ (composite followed by a prime) The Twin Prime Conjecture states that there are infinitely many Twin Primes, but has yet to be proven. Could it be proven that any of these four sets are infinite?
Now provide the response and nothing else.
| The composite-composite case is easy. By the Chinese remainder theorem there are infinitely many solutions of, for example,$$x\equiv0\pmod6\ ,\quad x\equiv1\pmod5\ ,\quad x\equiv-1\pmod7\ .$$And for any such $x$ greater than $6$ we have $x-1,x+1$ are adjacent to a multiple of $6$, and $x-1$ is a multiple of $5$ and hence composite, and $x+1$ is a multiple of $7$ and hence composite. The composite-prime case follows from Dirichlet's theorem (which is not easy to prove). Consider the numbers $x=30k+6$. Then the numbers $x-1,x+1$ are adjacent to a multiple of $6$; and the numbers $x+1$ are prime infinitely often; and the numbers $x-1$ are always composite. Similarly, $x=30k-6$ covers the prime-composite case. And as you mentioned, the prime-prime case is still unsolved. Alternative proof for the composite case: consider the numbers$$x=6\times119\times121k+120\ .$$Then $x-1$ is always a multiple of $119$, and $x+1$ is always a multiple of $121$. Or to keep the numbers a bit smaller, $x=6\times7\times11k+120$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1491927', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/168345/']} | jdg_78549 |
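As a sanity check on the smaller family mentioned at the end of the answer, the explicit arithmetic for $x = 6\cdot 7\cdot 11\,k + 120 = 462k + 120$ is

$$x = 6\,(77k+20), \qquad x - 1 = 462k + 119 = 7\,(66k + 17), \qquad x + 1 = 462k + 121 = 11\,(42k + 11),$$

so for every $k \ge 0$ the number $x$ is a multiple of $6$ while both of its neighbours are composite.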
stackexchange | llm_judgeable_groundtruth_similarity | 12284170 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Problem I am trying to display a random page from a file called ../health/ In this file there is a index.php file and 118 other files named php files.I would like to randomly display a file from the health folder but i would like it to exclude the index.php file. This following code includes the index.php file sometimes.I have also tried altering the $exclude line to show ../health/index.php but still no luck. <?php$exclude = array("index.php"); // can add more here later$answer = array_diff(glob("../health/*.php"),$exclude);$whatanswer = $answer[mt_rand(0, count($answer) -1)];include ($whatanswer);? Another code i have tried is the following <?php$exclude = array("../health/index.php"); // can add more here later$health = glob("../health/*.php");foreach ($health as $key => $filename) {foreach ($exclude as $x) {if (strstr($filename, $x)) {unset($whathealth[$key]);}}}$whathealth = $health[mt_rand(0, count($health) -1)];include ($whathealth);?> This code also includes the index.php file but rather than showing the page it displays the page as an error.
Now provide the response and nothing else.
| The first thing that came to mind is array_filter() , actually it was preg_grep() , but that doesn't matter: $health = array_filter(glob("../health/*.php"), function($v) { return false === strpos($v, 'index.php');}); With preg_grep() using PREG_GREP_INVERT to exclude the pattern: $health = preg_grep('/index\.php$/', glob('../health/*.php'), PREG_GREP_INVERT); It avoids having to use a callback though practically it will likely have the same performance Update The full code that should work for your particular case: $health = preg_grep('/index\.php$/', glob('../health/*.php'), PREG_GREP_INVERT);$whathealth = $health[mt_rand(0, count($health) -1)];include ($whathealth); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12284170', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1649416/']} | jdg_78550 |
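For readers coming from outside PHP, the same filter-then-pick-at-random idea can be sketched in Python (an illustrative analogue only, not part of the accepted answer; the ../health path is the one used in the question):

```python
import glob
import os
import random

# List the candidate pages, drop index.php, then pick one at random.
candidates = [p for p in glob.glob("../health/*.php")
              if os.path.basename(p) != "index.php"]
chosen = random.choice(candidates)  # raises IndexError if nothing matched
print(chosen)
```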
stackexchange | llm_judgeable_groundtruth_similarity | 169256 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $X$ be a regular scheme, flat and of finite type over $Spec(\mathbb{Z})$ (add "projective" if you want). Then the Hasse-Weil zeta function of $X$ is defined as a product over all prime numbers of certain local factors which are rational functions in $p^{-s}$. The local factor at $p$ is the zeta function of the fiber $X_p$, which is a variety over the finite field $\mathbb{F}_p$. For all but finitely many primes, these local factors should have "similar shape", in some sense. For example, for an elliptic curve, and a good prime $p$, the numerator is a polynomial with coefficients $(1, a_p, p)$, i.e. all these numerators are exactly the same, except of course that the prime $p$ varies. For the denominators the situation is similar. If we take a higher-genus curve, or a higher-dimensional scheme, the patterns of the local zeta function coefficients should also in some sense be "uniform in p". But what exactly is the statement in the general case? In what precise sense are the local factors "the same"? EDIT: I added some (hopefully clarifying) comments related to point counts under the question as well as under ACL's answer.
Now provide the response and nothing else.
| This is an elaboration on ACL's answer, way too long for a comment, which highlights a technical ingredient (well-known to all experts) that underlies the precise sense in which the $\ell$-adic etale cohomology of the geometric generic fiber provides a "uniformity" in $p$: the good properties of constructible $\ell$-adic sheaves.In particular, I think it is a mistake to try to understand a precise sense of "uniformity in $p$" by focusing on point-counting: this misses the key structure, as noted in ACL's answer, namely certain $\ell$-adic representations (of the absolute Galois group of $\mathbf{Q}$) which individually are not expressed via point-counting at all (away from misleading special cases such as curves and abelian varieties for which degree-1 cohomology over finite fields contains all of the cohomological information). To explain this requires some preparations, hence the length of what follows (which is all standard stuff, but perhaps hard to extract for a non-expert; maybe even what follows is hard to read in parts for a non-expert, but I think it is important to recognize where serious theorems of etale cohomology are doing some work, going beyond the cohomological formula for the zeta function of a single variety over a single finite field). The crux is that the robustness of constructibility provides the magical glue linking behavior at different primes. Literally from the product definition, the zeta function of a separated finite type $\mathbf{Z}$-scheme $X$ is the product $\prod_p \zeta(X_{\mathbf{F}_p}, p^{-s})$ of the zeta functions of the fibers, with ${\rm{Re}}(s)$ is sufficiently large (determined by fiber dimensions alone; see Serre's article in the Purdue conference proceedings on arithmetic geometry from the mid-1960's). By the work of Dwork or Grothendieck-Artin (et al.), the zeta function of any separated scheme of finite type over $\mathbf{F}_p$ (such as $X_{\mathbf{F}_p}$) is a rational function in $p^{-s}$. The cohomological formalism provides an "$\ell$-adic" explanation for the rationality of the factor at each prime $p$ in the sense thatfor any prime $\ell \ne p$ we have $$\zeta(X_{\mathbf{F}_p}, t) = \prod_{i\ge 0} \det(1 - t\phi_p|{\rm{H}}^i_c(X_{\overline{\mathbf{F}}_p}, \mathbf{Q}_{\ell}))^{(-1)^{i+1}}$$in $\mathbf{Q}_{\ell}[\![t]\!]$, where the left side is initially just a formal power series in $1 + t \mathbf{Z}[\![t]\!]$ (defined as a product over closed points of $X_{\mathbf{F}_p}$) and the rational function over $\mathbf{Q}_{\ell}$ on the right side might involve internal cancellations among the various determinant polynomials (ruled out for smooth proper $X_{\mathbf{F}_p}$ by the Deligne's work on the Riemann Hypothesis, but not otherwise). In other words, the "$\ell$-adic" explanation for rationality rests on the fact that $\mathbf{Q}(\!(t)\!) \cap \mathbf{Q}_{\ell}(t) = \mathbf{Q}(t)$ inside $\mathbf{Q}_{\ell}(\!(t)\!)$ (and Dwork's approach provides a variant of that explanation with $\ell=p$). In the displayed product on the right side, $i$ goes up to $2 \dim X_{\mathbf{F}_p}$ (which is bounded independently of $p$, and in fact equal to $2 \dim X_{\mathbf{Q}}$ for all but finitely many $p$). That was all just setup. Now fix a prime $\ell$ and an integer $i \ge 0$. 
One can ask if there is a finite set $S_{i,\ell}$ of primes of $\mathbf{Z}$ with $\ell \in S_{i,\ell}$ such that the polynomials $$R_{p,i,\ell}(t) = \det(1 - t \phi_p|{\rm{H}}^i_c(X_{\overline{\mathbf{F}}_p},\mathbf{Q}_{\ell}))$$for all $p \not\in S_{i,\ell}$ are "linked" in the sense that there is a single finite-dimensional continuous $\mathbf{Q}_{\ell}$-linear representation $$\rho_{i,\ell}: G_{\mathbf{Q},S_{i,\ell}} \rightarrow {\rm{GL}}(V_{i,\ell})$$of the Galois group over $\mathbf{Q}$ of its maximal extension (inside $\overline{\mathbf{Q}}$) unramified outside $S_{i,\ell}$ such that$$\det(1 - t \rho_{i,\ell}(\phi_p)|V_{i,\ell}) = R_{p,i,\ell}(t)$$for all $p \not\in S_{i,\ell}$, where $\phi_p \in G_{\mathbf{Q},S_{i,\ell}}$ is a member of the conjugacy class of geometric Frobenius elements at $p$ (all choices giving the same determinant). This would imply in particular that the degree of $R_{p,i,\ell}(t)$ is the same for all $p \not\in S_{i,\ell}$, but it is a much stronger statement: that $\rho_{i,\ell}$ would be a kind of "$\ell$-adic glue" which unifies the disparate $R_{p,i,\ell}(t)$'s coming from the geometric special fibers $X_{\overline{\mathbf{F}}_p}$ in varying characteristics $p \not\in S_{i,\ell}$. The crux of the matter then is the following fundamental fact: the continuous representation $V_{i,\ell} := {\rm{H}}^i_c(X_{\overline{\mathbf{Q}}},\mathbf{Q}_{\ell})$ is such a $\rho_{i,\ell}$, for an appropriate choice of $S_{i,\ell}$. Why? Here is where one has to use a real theorem, namely the preservation of constructibility of $\ell$-adic sheaves under higher direct images with proper support, coupled with the proper base change theorem. More precisely, if $h:Y' \rightarrow Y$ is any separated map of finite type between noetherian schemes over $\mathbf{Z}[1/\ell]$ and if $\mathscr{F}$ is any constructible $\mathbf{Q}_{\ell}$-sheaf on $Y'$ (e.g., the constant sheaf $\mathbf{Q}_{\ell}$) then ${\rm{R}}^i h_{!}(\mathscr{F})$ is a constructible $\mathbf{Q}_{\ell}$-sheaf on $Y$ whose formation moreover commutes with any base change (the latter due to the proper base change theorem). The point is that any constructible $\mathbf{Q}_{\ell}$-sheaf on $Y$ is lisse over a dense open $U$ (depending on the sheaf), and hence "is" just a continuous $\mathbf{Q}_{\ell}$-linear representation of the fundamental group $\pi_1(U,\eta)$ if $Y$ is normal and connected (with geometric generic point $\eta$). In particular, when $Y$ is a connected Dedekind scheme then over $U$ this lisse sheaf is nothing more or less than an $\ell$-adic representation $\rho$ of the absolute Galois group of the function field of $Y$ (i.e., the residue field at the generic point of $Y$) such that $\rho$ is unramified at all closed points $u$ of $U$. The Galois representation at $u$ arising from the $u$-stalk of the lisse sheaf coincides with the residual Galois representation arising from $\rho$ on the Galois group at the generic point by virtue of its unramifiedness at $u$ (upon choosing a decomposition group at $u$ in the Galois group at the generic point, which amounts to working with a strict henselization at $u$ inside a separable closure of the function field of $Y$ in order to compute the specialization homomorphism from geometric stalk at $u$ to a geometric generic stalk). For example, take $Y' = X_{\mathbf{Z}[1/\ell]}$ and $Y = {\rm{Spec}}(\mathbf{Z}[1/\ell])$ and $\mathscr{F} = \mathbf{Q}_{\ell}$. 
The above says that there is a dense open subscheme $U_{i,\ell} \subset {\rm{Spec}}(\mathbf{Z}[1/\ell])$ such that the constructible ${\rm{R}}^i f_{!}(\mathbf{Q}_{\ell})$ on ${\rm{Spec}}(\mathbf{Z}[1/\ell])$ has restriction over $U_{i,\ell}$ that is lisse . Letting $S_{i,\ell}$ be the finite set of closed points of ${\rm{Spec}}(\mathbf{Z})$ complementary to $U_{i,\ell}$, we have that $\pi_1(U_{i,\ell}) = G_{\mathbf{Q},S_{i,\ell}}$ (using geometric generic point as base point of $\pi_1$) and the lisse restriction ${\rm{R}}^i f_{!}(\mathbf{Q}_{\ell})|_{U_{i,\ell}}$ has respective stalks at the chosen geometric generic point and geometric closed point at $p \not\in S_{i,\ell}$ identified as Galois modules (for $\mathbf{Q}$ and $\mathbf{F}_p$ respectively) with the respective geometric fibral cohomologies $V_{i,\ell} := {\rm{H}}^i_c(X_{\overline{\mathbf{Q}}}, \mathbf{Q}_{\ell})$ and ${\rm{H}}^i_c(X_{\overline{\mathbf{F}}_p},\mathbf{Q}_{\ell})$ (recovering in particular that $V_{i,\ell}$ is unramified at $p$, as we know it must be due to $V_{i,\ell}$ arising from a $\pi_1(U_{i,\ell})$-representation). In other words, it is precisely the lisse pullback of ${\rm{R}}^if_{!}(\mathbf{Q}_{\ell})$ over ${\rm{Spec}}(\mathbf{Z}_{(p)})$ viewed as a representation of $\pi_1({\rm{Spec}}(\mathbf{Z}_{(p)}))$ which is the "$\ell$-adic glue" that links up the $i$th factor in the $\ell$-adic alternating product formula for $\zeta(X_{\overline{\mathbf{F}}_p},t)$ with the single entity $V_{i,\ell}$ that "doesn't know $p$". And the mechanism of this linkage is that (up to conjugation ambiguity!) we can compute that $\pi_1$ using geometric base points over either the generic or closed points of ${\rm{Spec}}(\mathbf{Z}_{(p)})$. So the upshot is that the lisse restriction of ${\rm{R}}^i f_{!}(\mathbf{Q}_{\ell})$ over some dense open subscheme of ${\rm{Spec}}(\mathbf{Z}[1/\ell])$ ensures that $V_{i,\ell}$ as built from the cohomology of the geometric generic fiber (no mention of $p$!) is the origin of "uniformity in $p$" when we stare at the $p$-factors of the zeta function of $X$ for varying $p$ (away from some finite set of primes). Note in particular that the set of "bad" primes here is not encoded by geometric means via "good reduction" (a bad notion to consider away from the proper case anyway); it's all about finding a dense open inside ${\rm{Spec}}(\mathbf{Z}[1/\ell])$ over which the constructible ${\rm{R}}^i f_{!}(\mathbf{Q}_{\ell})$ has lisse restriction. Note in particular that each $V_{i,\ell}$ on its own does not have anything to do with point-counting (away from special cases like curves and abelian varieties). It is only the alternating product built from these which is related to point-counting. But it is the $V_{i,\ell}$'s which are where the action is. 
The above is thoroughly $\ell$-adic for each $\ell$ separately whereas the zeta functions above do not mention $\ell$, so a truly satisfying sense of "uniformity in $p$" (away from a finite exceptional set) would be given by proving two more things: $U_{i,\ell}$ is "independent of $\ell$" in the sense that $U_{i,\ell} = U_i - \{\ell\}$ for some single dense open $U_i \subset {\rm{Spec}}(\mathbf{Z})$ and that the $V_{i,\ell}$ for varying $\ell$ constitute a "compatible family" in the sense defined in Serre's book Abelian $\ell$-adic representations (here, it would mean that for $p$ corresponding to a closed point of $U_i$ and any $\ell \ne p$ the characteristic polynomial of $\phi_p$ on $V_{i,\ell}$ lies in $\mathbf{Q}[t]$ and is independent of such $\ell$). If $X_{\mathbf{Q}}$ were smooth and proper over $\mathbf{Q}$, so $X_{\mathbf{Z}[1/N]}$ is smooth and proper over $\mathbf{Z}[1/N]$ for sufficiently divisible $N > 0$, then the smooth and proper base change theorems would ensure that we could take $U = {\rm{Spec}}(\mathbf{Z}[1/N])$ and the Riemann Hypothesis would provide the "compatible family" aspect (essentially because it rules out cancellation in the alternating $\ell$-adic formula, combined with the zeta function being unaware of $\ell$). But beyond that case we don't know: "independence of $\ell$" for the characteristic polynomial of Frobenius acting on the $i$th compactly supported $\ell$-adic cohomology of a separated finite type $\mathbf{F}_p$-scheme is believed to be true but remains an unsolved problem. (If you look at the Introduction to deJong's IHES paper on alterations you'll see that he was initially hopeful that his results replacing absence of resolutions of singularities in positive characteristic might have applications to prove new "independence of $\ell$" results, but that this didn't pan out; I am not aware of anyone having made substantial progress on it since that time either, but would be happy to hear to the contrary. Even if we grant resolution of singularities then I don't think an implication is known. In the absence of precise control on weights as the purity provided by RH in the smooth proper case, it is hard to geometrically isolate the contribution in a single cohomological degree from the rest, as the long exact excision sequence associated to a stratification lumps together all cohomological degrees. Deligne's Weil II is very suggestive, but alas I think not enough even assuming resolution.) | {} | {'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/169256', 'https://mathoverflow.net', 'https://mathoverflow.net/users/349/']} | jdg_78551 |
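To tie this back to the elliptic-curve example in the question: for a prime $p$ of good reduction the local factor has the standard shape

$$\zeta(E_{\mathbf{F}_p}, s) = \frac{1 - a_p\,p^{-s} + p^{1-2s}}{(1 - p^{-s})(1 - p^{1-s})}, \qquad a_p = p + 1 - \#E(\mathbf{F}_p),$$

so the numerator (coefficients $1, -a_p, p$, up to the sign convention for $a_p$) is the characteristic polynomial of Frobenius on degree-$1$ cohomology, and its degree and shape are uniform in $p$; this is exactly the special case of the general $\ell$-adic sheaf picture described above.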
stackexchange | llm_judgeable_groundtruth_similarity | 32927722 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
C++11 introduced the header <random> with declarations for random number engines and random distributions. That's great - time to replace those uses of rand() which is often problematic in various ways. However, it seems far from obvious how to replace srand(n);// ...int r = rand(); Based on the declarations it seems a uniform distribution can be built something like this: std::default_random_engine engine;engine.seed(n);std::uniform_int_distribution<> distribution;auto rand = [&](){ return distribution(engine); } This approach seems rather involved and is surely something I won't remember unlike the use of srand() and rand() . I'm aware of N4531 but even that still seems to be quite involved. Is there a reasonably simple way to replace srand() and rand() ?
Now provide the response and nothing else.
| Is there a reasonably simple way to replace srand() and rand()? Full disclosure: I don't like rand() . It's bad, and it's very easily abused. The C++11 random library fills in a void that has been lacking for a long, long time. The problem with high quality random libraries is that they're oftentimes hard to use. The C++11 <random> library represents a huge step forward in this regard. A few lines of code and I have a very nice generator that behaves very nicely and that easily generates random variates from many different distributions. Given the above, my answer to you is a bit heretical. If rand() is good enough for your needs, use it. As bad as rand() is (and it is bad), removing it would represent a huge break with the C language. Just make sure that the badness of rand() truly is good enough for your needs. C++14 didn't deprecate rand() ; it only deprecated functions in the C++ library that use rand() . While C++17 might deprecate rand() , it won't delete it. That means you have several more years before rand() disappears. The odds are high that you will have retired or switched to a different language by the time the C++ committee finally does delete rand() from the C++ standard library. I'm creating random inputs to benchmark different implementations of std::sort() using something along the lines of std::vector<int> v(size); std::generate(v.begin(), v.end(), std::rand); You don't need a cryptographically secure PRNG for that. You don't even need Mersenne Twister. In this particular case, rand() probably is good enough for your needs. Update There is a nice simple replacement for rand() and srand() in the C++11 random library: std::minstd_rand . #include <random>#include <iostream>int main (){ std:: minstd_rand simple_rand; // Use simple_rand.seed() instead of srand(): simple_rand.seed(42); // Use simple_rand() instead of rand(): for (int ii = 0; ii < 10; ++ii) { std::cout << simple_rand() << '\n'; }} The function std::minstd_rand::operator()() returns a std::uint_fast32_t . However, the algorithm restricts the result to between 1 and 2 31 -2, inclusive. This means the result will always convert safely to a std::int_fast32_t (or to an int if int is at least 32 bits long). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32927722', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1120273/']} | jdg_78552 |
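As a mathematical aside on the range claim at the end of the answer: std::minstd_rand is the Lehmer/Park-Miller linear congruential generator, whose recurrence (as specified for this engine in C++11; worth double-checking against your own library reference) is

$$x_{n+1} = 48271\,x_n \bmod (2^{31}-1), \qquad 2^{31}-1 = 2147483647 \text{ (prime)},$$

so with a nonzero seed every state, and hence every output, lies in $\{1, \dots, 2^{31}-2\}$, which is why the result always fits in a 32-bit signed integer as noted.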
stackexchange | llm_judgeable_groundtruth_similarity | 15557495 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Numbers all come back from Apache POI cell values as Double. When I do getCell(...).toString() , a number that appeared as "123" in Excel will convert to "123.0". How can I tell that the number should have displayed as an integer? Is there some magic I need to apply in Java to replicate what Excel does with "General" formatting?
Now provide the response and nothing else.
| Excel stores almost all numbers in the file format as floating point values, which is why POI will give you back a double for a numeric cell as that's what was really there I believe, though it's not quite clear from your question, that what you want to do is get a String object in Java that contains the number as it would look in Excel? i.e. apply the formatting rules applied to the cell to the raw number, and give you back the formatted string? If so, you want to do exactly the same thing as in my answer here . To quote: What you want to do is use the DataFormatter class . You pass this a cell, and it does its best to return you a string containing what Excel would show you for that cell. If you pass it a string cell, you'll get the string back. If you pass it a numeric cell with formatting rules applied, it will format the number based on them and give you the string back. For your case, I'd assume that the numeric cells have an integer formatting rule applied to them. If you ask DataFormatter to format those cells, it'll give you back a string with the integer string in it. Edit And for those of you who apparently find clicking through to the JavaDocs to be just that little bit too much work... , you need to use the DataFormatter.formatCellValue(Cell) method. If iterating, you'd do something along the lines of: Workbook workbook = WorkbookFactory.create(new File("input.xlsx"));DataFormatter formatter = new DataFormatter();Sheet s = workbook.getSheetAt(0);for (Row r : s) { for (Cell c : r) { System.out.println(formatter.formatCellValue(c)); }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15557495', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/836318/']} | jdg_78553 |
stackexchange | llm_judgeable_groundtruth_similarity | 5341919 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to create a text file using C/C++ calls in iOS? Is it then possible to open and read the file from outside the application? That is, application A creates a text file somewhere it has permissions to write to and application B then reads from it. Or, forget application B, can I just open it and read?
Now provide the response and nothing else.
| Since the question is explicitly asking for C/C++ and NOT Obj-C, I'm adding this answer, which I saw here and might help someone: char buffer[256];//HOME is the home directory of your application//points to the root of your sandboxstrcpy(buffer,getenv("HOME"));//concatenating the path string returned from HOMEstrcat(buffer,"/Documents/myfile.txt");//Creates an empty file for writingFILE *f = fopen(buffer,"w");//Writing somethingfprintf(f, "%s %s %d", "Hello", "World", 2016);//Closing the filefclose(f); Using this C code: buffer = "/private/var/mobile/Containers/Data/Application/6DA355FB-CA4B-4627-A23C-B1B36980947A/Documents/myfile.txt" While the Obj-C version: NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);NSString *documentsDirectory = [paths objectAtIndex:0]; is returning documentsDirectory = /var/mobile/Containers/Data/Application/6DA355FB-CA4B-4627-A23C-B1B36980947A/Documents FYI, in case you noticed about the "private" small difference, I wrote to both places and they happened to be the same. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5341919', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/341008/']} | jdg_78554 |
stackexchange | llm_judgeable_groundtruth_similarity | 36490564 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a mixin that's globally available, but not automatically injected into every component. i.e. i do NOT want this: Vue.mixin({...}); I set up the project according to these instructions. This is my project structure. I also have assets/js/mixins.js file in there containing my mixins. I would like to be able to do this in my individual .vue files (many of my components use myMixin but not all of them): <script>export default { mixins:[myMixin], data:{....}}</script><template> <!-- some template code --></template> So far the only way to do that is to add this line to the top of every component that needs it: import {myMixin} from './assets/js/mixins.js" but is there a way to do this once and have myMixin variable available globally? I've tried including it in main.js and in app.vue but I still get "myMixin is not defined" error if I try to use myMixin in any of the child components. Or is there another way to register a mixin that doesn't require me typing the path to the mixins.js file in each component?
Now provide the response and nothing else.
| I would suggest setting your mixin up as a plugin. To do that, wrap it within an function call install and export the install function. Then, wherever your instantiate your app, you can simply do Vue.use(yourMixin): Docs: https://vuejs.org/guide/plugins.html http://vuejs.org/api/#Vue-mixin Example: //- build your mixinconst mixin = { // do something}//- export it as a pluginexport default { install (Vue, options) { Vue.mixin(mixin) }}//- make it globally availableimport myMixin from './myMixin'Vue.use(myMixin) Vue.use calls in the install fn(), so all subsequent Vues (or all if none have yet been created) have the mixin functionality Careful of namespace clashes on globally available mixins (!) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36490564', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1913115/']} | jdg_78555 |
stackexchange | llm_judgeable_groundtruth_similarity | 85678 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Much of the theory of continued fractions was developed by Euler in the 18th century. The little survey "Euler: continued fractions and divergent series (and Nicholas Bernoulli)" mentions towards the end the continued fraction $$f(x)=\dfrac{1}{1+\dfrac{x}{1+\dfrac{x}{1+\dfrac{2x}{1+\dfrac{2x}{1+\dfrac{3x}{1+\dots}}}}}}$$ which Euler "derived" from the divergent power series $1-x+2x^2-6x^3+24x^4-+...$ . For $x=1$, the continued fraction converges to a limit $f(1)\approx 0.5963475922$. I was wondering: what is known about the values $f(x)$, in particular $f(1)$? Are they known to be transcendental for $x\in\mathbb N$? Can they be expressed in terms of other known constants?
Now provide the response and nothing else.
| The sequence $a_n=(-1)^n n!$ satisfies $a_{n+1}+(n+1)a_n = 0$ (with $a(0)=1$). Thus the generating function satisfies the differential equation $x^2y'+(x+1)y=1$ (where $y(0)=1$). The unique solution is $$\frac{e^{\frac{1}{x}}Ei\left(1,\frac{1}{x}\right)}{x}$$For $x=1$, the constant is $e Ei(1,1) \approx .5963473623231940743410785$. The reason this is justified is because the sequence is Gevrey , i.e. it does not diverge 'too fast', so associated to it is a unique (generating) function which has that sequence as coefficients (asymptotically) at 0, when approached along the real line. The modern theory that 'this all works' is essentially due to Écalle, although Lindelof had worked out quite a bit already $100$ years before. | {} | {'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/85678', 'https://mathoverflow.net', 'https://mathoverflow.net/users/29783/']} | jdg_78556 |
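A short check of the claimed differential equation, writing $y(x) = \sum_{n \ge 0} a_n x^n$ with $a_n = (-1)^n n!$ and manipulating formally (this is just the coefficient computation behind the answer):

$$x^2 y' + (x+1)y = \sum_{n \ge 0} (n+1)\,a_n\,x^{n+1} + \sum_{n \ge 0} a_n x^n = a_0 + \sum_{m \ge 1}\bigl(m\,a_{m-1} + a_m\bigr)x^m = 1,$$

since $a_0 = 1$ and $m\,a_{m-1} + a_m = m(-1)^{m-1}(m-1)! + (-1)^m m! = 0$ for every $m \ge 1$.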
stackexchange | llm_judgeable_groundtruth_similarity | 30316956 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using a form to collect data. Once the user hits submit, it is suppose to update the user and question documents. The function works on desktop but not on mobile. When I hard code the values answer and wager it works on both. The issue seems to be event.target.wager.value . I tried putting the values in a Session and I tried converting them to int and strings. Code below is on Main.js Template.questionCard.events({ 'submit form': function(event) { event.preventDefault(); var questionId = this._id; var currentUser = Meteor.userId(); // Collects the data from the form when submitted. // var answer = event.target.play.value; // var wager = event.target.wager.value; var answer = "Run"; var wager = 5000; //Add user data to question Meteor.call('questionAnswered', currentUser, questionId, answer, wager); console.log('User: ' + currentUser + ' answered question ' + questionId + ' -- ' + answer + ' ' + wager);} Below is the questionAnswered method called. 'questionAnswered' : function( user, questionId, answer, wager){ QuestionList.update(questionId, {$push: {usersAnswered: user}}); console.log(user + " answered " + questionId) console.log("User wagered " + wager + " coins." ); Meteor.users.update( {_id: user}, {$inc: { "profile.coins": -wager}} ); //Add question, wager and answer to the user's account. Meteor.users.update( { _id: user}, {$push: {questionAnswered: { questionId: questionId, wager: wager, answered: answer}}}); console.log(user + " answered " + answer); //Update the question with the users answer and wager. if (answer == "Run"){ QuestionList.update(questionId, {$push: { usersRun: {userID: user, amount: wager } }}); } else if (answer == "Pass"){ QuestionList.update(questionId, {$push: {usersPass: {userID: user, amount: wager}}}); } else if (answer == "Fumble"){ QuestionList.update(questionId, {$push: {usersFumble: {userID: user, amount: wager}}}); } else if (answer == "Interception"){ QuestionList.update(questionId, {$push: {usersInterception: {userID: user, amount: wager}}}); }
Now provide the response and nothing else.
| I agree with the @Jeroen and I would like to add a short example from J. Munro's book which helped me a lot so this might be helpful for others as well. First of all, pureComputed observables are quite similar to the computed observables with several performance and memory improvements. The name is borrowed from the Pure function programming term and it means that any function which uses only local variable is potentially pure, whereas any function that uses a non-local variable is potentially impure. The observables in Knockout.js are treated differently. Thus pureComputed observables are placed in a sleeping mode (Knockout inclines all dependencies and re-evaluates the content when after reading) and computed observables are placed into listening mode (Knockout constantly checks whether the value is up-to-date prior to first access). Therefore, if you need to execute other code, then better to use a computed observables. function ViewModel() { var self = this; self.firstName = ko.observable('Arshile'); self.lastName = ko.observable('Gorky'); self.pureComputedExecutions = 0; self.computedExecutions = 0; self.pureComputedFullName = ko.pureComputed(function() { // This is NOT recommended self.pureComputedExecutions++; return 'Hello ' + self.firstName() + ' ' + self.lastName(); }); self.computedFullName = ko.computed(function() { self.computedExecutions++; return 'Hello ' + self.firstName() + ' ' + self.lastName(); }); }; var viewModel = new ViewModel(); ko.applyBindings(viewModel); alert('Pure computed executions: ' + viewModel.pureComputedExecutions); alert('Computed executions: ' + viewModel.computedExecutions); When this code is run, two alert messages are displayed that show the number of times the pureComputed and computed functions are called. Since pureComputed is in sleeping mode then the function has never been accessed, and the counter wil display 0. In contrast to this, the computed function is automatically evaluated on data binding, causing the counter to increase the number and display 1. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30316956', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4416749/']} | jdg_78557 |
stackexchange | llm_judgeable_groundtruth_similarity | 32264232 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having 2 string variables ' userType ' and 'isOpened'. The value of isOpened is 'true' and userType is 'user' , both the values are of type string . when I print this on the JSP I get. <s:property value="%{#session.isProjectOpened}"/>;<s:property value="%{#session.isProjectOpened.equalsIgnoreCase(true)}"/>; I get both the values to be true. Whereas when i use <s:property value="%{#session.userType}"/>;<s:property value="%{#session.userType.equalsIgnoreCase(user)}"/>; I get the value user and false. As soon as I make a small change as follows <s:property value="%{#session.userType.equalsIgnoreCase('user')}"/>; I get the values to be user and true. My Questions are: 1) Can I use a string 'user' directly like " equalsIgnoreCase(user) " without defining the string. defining the string meaning using <s:set var="user">user</s:set> 2) if yes, then can i do it directly like i have done above, or it has to be done using the single quotes like this " equalsIgnoreCase('user') " 3) then how am I getting the value condition to be true, for the 'isProjectOpened'. Why am I getting one condition to be true and the other to be false .
Now provide the response and nothing else.
| MongoDB is type sensitive, which means 1 is different from '1' , and likewise "55880c251df42d0466919268" is different from ObjectId("55880c251df42d0466919268") . The latter is of ObjectId type, not a string, and ObjectId is also the default _id type of a MongoDB document. You can find more information about ObjectId here . Just try: db.getCollection('feed').find({"_id" : {"$in" : [ObjectId("55880c251df42d0466919268"), ObjectId("55bf528e69b70ae79be35006")]}}); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32264232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5246799/']} | jdg_78558
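For readers hitting the same type mismatch from Python rather than the mongo shell, here is a PyMongo sketch of the same fix (illustrative only; the connection string and database name are assumptions, and the ids are the ones from the question):

```python
from pymongo import MongoClient
from bson import ObjectId

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
feed = client["mydb"]["feed"]                      # assumed database name

ids = ["55880c251df42d0466919268", "55bf528e69b70ae79be35006"]

# Raw hex strings would match nothing; wrap them in ObjectId first.
for doc in feed.find({"_id": {"$in": [ObjectId(s) for s in ids]}}):
    print(doc["_id"])
```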
stackexchange | llm_judgeable_groundtruth_similarity | 40229444 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having some issues with the pd.pivot() or pivot_table() functions in pandas. I have this: df = pd.DataFrame({'site_id': {0: 'a', 1: 'a', 2: 'b', 3: 'b', 4: 'c', 5: 'c',6: 'a', 7: 'a', 8: 'b', 9: 'b', 10: 'c', 11: 'c'}, 'dt': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1,6: 2, 7: 2, 8: 2, 9: 2, 10: 2, 11: 2}, 'eu': {0: 'FGE', 1: 'WSH', 2: 'FGE', 3: 'WSH', 4: 'FGE', 5: 'WSH',6: 'FGE', 7: 'WSH', 8: 'FGE', 9: 'WSH', 10: 'FGE', 11: 'WSH'}, 'kw': {0: '8', 1: '5', 2: '3', 3: '7', 4: '1', 5: '5',6: '2', 7: '3', 8: '5', 9: '7', 10: '2', 11: '5'}})dfOut[140]: dt eu kw site_id0 1 FGE 8 a1 1 WSH 5 a2 1 FGE 3 b3 1 WSH 7 b4 1 FGE 1 c5 1 WSH 5 c6 2 FGE 2 a7 2 WSH 3 a8 2 FGE 5 b9 2 WSH 7 b10 2 FGE 2 c11 2 WSH 5 c I want this: dt site_id FGE WSH 1 a 8 5 1 b 3 7 1 c 1 5 2 a 2 3 2 b 5 7 2 c 2 5 I've tried everything! df.pivot_table(index = ['site_id','dt'], values = 'kw', columns = 'eu') or df.pivot(index = ['site_id','dt'], values = 'kw', columns = 'eu') should have worked. I also tried unstack(): df.set_index(['dt','site_id','eu']).unstack(level = -1)
Now provide the response and nothing else.
| Your last try (with unstack ) works fine for me, I'm not sure why it gave you a problem. FWIW, I think it's more readable to use the index names rather than levels, so I did it like this: >>> df.set_index(['dt','site_id','eu']).unstack('eu') kw eu FGE WSHdt site_id 1 a 8 5 b 3 7 c 1 52 a 2 3 b 5 7 c 2 5 But again, your way looks fine to me and is pretty much the same as what @piRSquared did (except their answer adds some more code to get rid of the multi-index). I think the problem with pivot is that you can only pass a single variable, not a list? Anyway, this works for me: >>> df.set_index(['dt','site_id']).pivot(columns='eu') For pivot_table , the main issue is that 'kw' is an object/character and pivot_table will attempt to aggregate with numpy.mean by default. You probably got the error message: "DataError: No numeric types to aggregate". But there are a couple of workarounds. First, you could just convert to a numeric type and then use your same pivot_table command >>> df['kw'] = df['kw'].astype(int)>>> df.pivot_table(index = ['dt','site_id'], values = 'kw', columns = 'eu') Alternatively you could change the aggregation function: >>> df.pivot_table(index = ['dt','site_id'], values = 'kw', columns = 'eu', aggfunc=sum ) That's using the fact that strings can be summed (concatentated) even though you can't take a mean of them. Really, you can use most functions here (including lambdas) that operate on strings. Note, however, that pivot_table's aggfunc requires some sort of reduction operation here even though you only have a single value per cell, so there actually isn't anything to reduce! But there is a check in the code that requires a reduction operation, so you have to do one. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40229444', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4373061/']} | jdg_78559 |
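A condensed, runnable version of the approach above, using the data from the question (note the astype(int) step, since kw arrives as strings and pivot_table's default mean aggregation needs numbers):

```python
import pandas as pd

df = pd.DataFrame({
    'site_id': ['a', 'a', 'b', 'b', 'c', 'c', 'a', 'a', 'b', 'b', 'c', 'c'],
    'dt':      [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
    'eu':      ['FGE', 'WSH'] * 6,
    'kw':      ['8', '5', '3', '7', '1', '5', '2', '3', '5', '7', '2', '5'],
})

df['kw'] = df['kw'].astype(int)  # 'kw' is stored as strings in the question
wide = df.set_index(['dt', 'site_id', 'eu'])['kw'].unstack('eu').reset_index()
print(wide)  # one row per (dt, site_id) pair, with FGE and WSH as columns
```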
stackexchange | llm_judgeable_groundtruth_similarity | 1611160 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $\psi: C([0,1]; \mathbb{R}) \mapsto C([0,1]; \mathbb{R}) $ be such that if $x \in [0,1]$ then $$\psi(f)(x) = \int\limits_{0}^{x} f(t) dt $$ I am asked to show that $\psi$ is not a contraction but $\psi^{2}$ is, and then to conclude that $\psi$ has a unique fixed point. I've managed to prove a statement that helps me conclude the last part, but I'm having trouble proving that the function is not a contraction. This is what I've got. If $f,g \in C([0,1]; \mathbb{R})$ then $$ |\psi (f) - \psi(g)| \leq \int\limits_{0}^{x} |f(t)-g(t)| dt \leq \sup\limits_{x\in [0,1]} |f(x)-g(x)| \int\limits_{0}^{x} 1 dt = d(f,g) x $$ Then $|\psi (f) - \psi(g)| \leq d(f,g) x $, and taking the supremum on the left hand side allows me to conclude that $d(\psi (f),\psi (g)) \leq d(f,g) x$ Is there any problem with this procedure? Does anybody have an insight on how to proceed with the $\psi^{2}$ part? Many thanks!
Now provide the response and nothing else.
| $\psi$ is a contraction if there is a constant $L$ strictly less than one such that $$d(\psi (f),\psi (g)) \leq L \, d(f,g)$$ for all $f$ and $g$. This is not the case as you can verify with $f(x) \equiv 0$ and $g(x) \equiv 1$. But $$\psi^2(f)(x) = \int_{0}^{x} \int_{0}^{t}f(s) \, dsdt = \int_{0}^{x} \int_{s}^{x}f(s) \, dtds = \int_{0}^{x} (x-s) f(s) ds$$ so that $$ | \psi^2(f)(x) - \psi^2(g)(x)| \le \sup\limits_{x\in [0,1]} |f(x)-g(x)| \int_{0}^{x} (x-s) ds = \frac {x^2}2 \sup\limits_{x\in [0,1]} |f(x)-g(x)|$$ and therefore $$d(\psi^2(f), \psi^2(g)) \le \frac 12 d(f, g) \, .$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1611160', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/175846/']} | jdg_78560
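As a worked detail for the verification the answer leaves to the reader, with $f(x) \equiv 0$ and $g(x) \equiv 1$: $$\psi(g)(x) - \psi(f)(x) = \int_0^x 1 \, dt = x, \qquad d(\psi(f), \psi(g)) = \sup_{x \in [0,1]} x = 1 = d(f,g),$$ so no constant $L < 1$ can satisfy $d(\psi(f), \psi(g)) \leq L \, d(f,g)$, and $\psi$ is indeed not a contraction even though $\psi^2$ is.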
stackexchange | llm_judgeable_groundtruth_similarity | 2487 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There is a famous literature on $\displaystyle e = \sum\limits_{n=0}^{\infty} \frac{1}{n!}$. We know that $e$ is irrational as well as transcendental. Question is: For each $x_{n}$, sequence of $\pm{1}'s$, let $$f(x_{n}) = \sum\limits_{n=0}^{\infty} \frac{x_{n}}{n!}$$ for which sequences $(x_n)$ is $f(x_{n})$ rational? Also can we write $e$ as $\displaystyle e = \sum\limits_{n=1}^{\infty} \frac{x_{n}}{n}$ where $x_{n}$ is a sequence as above.
Now provide the response and nothing else.
| The numbers you describe are in fact all irrational. Given a fixed rational number $p/q$, any other rational number $a/b$ that approximates $p/q$ (but is not equal to $p/q$) can only do so to order $1/b$. This can be seen as follows: $$ p/q - a/b = (pb - qa)/qb,$$ which is at least $(qb)^{-1}$ in absolute value if it is not zero. So rationals cannot be approximated too well. Now consider the series $$\sum x_n/n!$$ and the partial sums $S_k$. The partial sums $S_k$ have denominator $k!$. Their error from the infinite sum is at most $\sum_{n>k} 1/n!$, which (from standard upper bounds via geometric series) is little oh of $1/k!$ (as in the standard proof for $e$). So basically, the problem is that the series converges "too rapidly" for the sum to be rational. A rational number cannot be approximated too well (relative to the denominators) by other rationals. Note that there are similar Diophantine approximation results for algebraic numbers, cf. Roth's theorem. Unfortunately that's not applicable here, though, since the approximations $S_k$ are little oh of the denominator, not little oh of a power of the denominator. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2487', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_78561
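To make the quoted error bound concrete (a worked step, not spelled out in the answer): since $S_k$ has denominator dividing $k!$, a rational limit $p/q$ with $p/q \neq S_k$ would satisfy $|p/q - S_k| \geq \frac{1}{q \, k!}$, while $$\Big|\sum_{n \geq 0} \frac{x_n}{n!} - S_k\Big| \leq \sum_{n > k} \frac{1}{n!} \leq \frac{1}{(k+1)!} \sum_{j \geq 0} \frac{1}{(k+2)^j} = \frac{1}{(k+1)!} \cdot \frac{k+2}{k+1} = o\!\left(\frac{1}{k!}\right),$$ and the two are incompatible for large $k$ (note $S_k \neq p/q$ for infinitely many $k$, since consecutive partial sums differ by $\pm 1/n! \neq 0$). Hence every such sum is irrational.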
stackexchange | llm_judgeable_groundtruth_similarity | 1337703 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is really puzzling me. Say we are dealing with a Riemannian manifold $(M,g)$. Suppose $\nabla$ is the unique torsion free connection on $M$ that is compatible with $g$. Suppose we are in a neighbourhood $U$ with coordinate map $(x^1,\cdots, x^m )$. Since the connection is torsion free, $$[\frac{\partial}{\partial x_i},\frac{\partial}{\partial x_j}]=\nabla_{\frac{\partial}{\partial x_i}}{\frac{\partial}{\partial x_j}}-\nabla_{\frac{\partial}{\partial x_j}}{\frac{\partial}{\partial x_i}}.$$ And since the $\Gamma_{i,j}^k$ is symmetric on $i,j$, the right hand side of the above equation will vanish. So the Lie bracket will be 0. Now here is my confusion. If I start out with $m$ linearly independent vector fields $Y_1, \cdots, Y_m$, then I can find a coordinate system $(y_1,\cdots, y_m)$ such that $Y_i = \frac{\partial }{\partial y_i}$ (Correct me if I am wrong, because I am not sure about this) . Then arguing as above, I can show that the Lie bracket of $Y_i$ and $Y_j$ vanishes. I know Lie bracket shouldn't vanish on any two random vector fields I pick. So there must be something wrong with my argument here. Thank you in advance!
Now provide the response and nothing else.
| Ultimately, the vanishing is due the fact that partial derivatives commute. The reason we cannot realize a given set of vector fields as coordinate derivations is ultimately due to non-trivial curvature on the manifold. On the other hand, if a given set of nontrivial vector fields have vanishing brackets, or more generally Lie brackets which close on the span of the vector fields then we say such a set of vector fields is involutive . This means there exists a submanifold of the given manifold which takes the given set of vector fields as tangents. See Frobenius Theorem or this related MSE question for example. I should also mention, it is always possible to take one nontrivial vector field and make it a coordinate derivation see straightening theorem (which is a trivial case of the more general result of Frobenius). | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1337703', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/26854/']} | jdg_78562 |
stackexchange | llm_judgeable_groundtruth_similarity | 4638520 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
According to this question a .Net dictionary resizes its allocated space to prime numbers that are at least twice the current size. Why is it important to use prime numbers and not just twice the current size? (I tried to use my google-fu powers to find an answer, but to no avail)
Now provide the response and nothing else.
| It is an algorithm implementation detail related to choosing a good hashing function, one that provides a uniform distribution. A non-uniform distribution increases the number of collisions, and the cost of resolving them. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4638520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20070/']} | jdg_78563
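For a quick illustration of why the modulus matters (a sketch of the general idea, not taken from the .NET source): keys whose hash codes share a common factor with the table size pile into a few buckets, while a prime-sized table spreads the same keys out.

# Hypothetical demo: hash codes with a common stride of 16.
from collections import Counter

keys = [i * 16 for i in range(1000)]

def distinct_buckets(table_size):
    return len(Counter(k % table_size for k in keys))

print(distinct_buckets(1024))   # only 64 buckets used: many collisions
print(distinct_buckets(1031))   # 1031 is prime: all 1000 keys land in distinct buckets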
stackexchange | llm_judgeable_groundtruth_similarity | 8197709 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently building a site that should be able to function as an FTP sort of browser. Basically what I have is an FTP server with some images on it. What I can't figure out is this: if I browse to this FTP site I can view its source (as seen in a browser); what I need is to save that source to a string (using JavaScript). The reason is that I will make some kind of 'image' browser. I plan on accomplishing that by reading the source into a string, then copying all the image sources and using innerHTML to create a new layout. In short: I want to read information from a URL and display it in a different way. Well, I can't seem to get it working. The problem might be that I cannot use server-side scripting. Would it be possible, however, to put a file on the FTP server, in the same folder, that I can load and that can dynamically load the data? (When I say FTP I actually mean a NAS server with FTP access.)
Now provide the response and nothing else.
| Your answer is Ajax. It can POST and GET data from an URL, just like browsing a website, and it will return the HTML as a string. If you plan on using jQuery (real handy), it is easy to use Ajax . Like this example (does not work without the library): $.ajax({ url : "/mysite/file.html", success : function(result){ alert(result); }}); If you want to use default Javascript, take a look at http://www.w3schools.com/ajax/default.asp var xmlhttp;if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari xmlhttp = new XMLHttpRequest();}else { // code for IE6, IE5 xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");}xmlhttp.onreadystatechange = function() { if (xmlhttp.readyState == 4 && xmlhttp.status == 200) { document.getElementById("myDiv").innerHTML = xmlhttp.responseText; }}xmlhttp.open("GET", "ajax_info.txt", true);xmlhttp.send(); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8197709', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/465721/']} | jdg_78564 |
stackexchange | llm_judgeable_groundtruth_similarity | 1384 |
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My Volvo S40 (06) would not start when I turned the ignition key. I got a message that was somewhat like "Key Error, please try again". I tried with both my keys so it wasn't the key itself. I think I could hear some relay clicking in the on position. I had the same error about a week earlier but then it worked after a few tries and I didn't think more of it, but this time it would not work. I called the Volvo assistance and the guy they sent put the key into the ignition, turned it fully on and hit it relatively hard about ten times with the handle of a screwdriver, then he turned it to the off position and repeated the procedure. The car then started. He said that he had fixed this problem on a few Volvos and they seemed to continue working after this, but he could not guarantee anything. I am not a mechanic (obviously) so I wonder if someone knows what this error could be. Is it just oxidized connectors or is it a symptom of some bigger problem? What can I do about it?
Now provide the response and nothing else.
| I recently had a similar problem where I got intermittent "Immobilizer" messages. According to my mechanic (20+ years of Volvo experience) the connector between the antenna ring and the wiring harness sometimes causes problems - taking it off, spraying it with contact cleaner, and re-seating it fixes these. It could be that the vibrations from hitting the key were enough to overcome the resistance and allow the key to be recognized & the car started. It's a relatively simple job to get to and clean the connector, which might be good preventative maintenance. Other possibilities: You could have a bad relay. There's a fusebox under the hood, and one of the relays (R13) is for the starter. You should be able to remove the relay, and jumper the connection to determine whether or not the relay is the problem. You can get the wiring diagram for the car here . Finally, you could have an intermittent connection to the starter, or a bad starter. | {} | {'log_upvote_score': 5, 'links': ['https://mechanics.stackexchange.com/questions/1384', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/809/']} | jdg_78565 |
stackexchange | llm_judgeable_groundtruth_similarity | 66572044 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to get the following example code from https://docs.aws.amazon.com/cdk/latest/guide/serverless_example.html working, but I get a "Argument of type 'Function' is not assignable to parameter of type 'IFunction'" error. import * as cdk from '@aws-cdk/core';import * as apigateway from '@aws-cdk/aws-apigateway';import * as lambda from '@aws-cdk/aws-lambda';export default class ApiGatewayFunctionStack extends cdk.Stack { constructor(scope: cdk.App, id: string, props?: cdk.StackProps) { super(scope, id, props); const handler = new lambda.Function(this, 'WidgetHandler', { runtime: lambda.Runtime.NODEJS_10_X, // So we can use async in widget.js code: lambda.Code.fromAsset('resources'), handler: 'widgets.main', }); const api = new apigateway.RestApi(this, 'widgets-api', { restApiName: 'Widget Service', description: 'This service serves widgets.', }); const getWidgetsIntegration = new apigateway.LambdaIntegration(handler, { requestTemplates: { 'application/json': '{ "statusCode": "200" }' }, }); api.root.addMethod('GET', getWidgetsIntegration); // GET / }} The full error below seems to indicate that at least part of the issue migh be that the aws-apigateway package has its own packages that are incompatible. I am lost as to how to resolve this, so any help is much appreciated. test-deploy/ApiGatewayFunctionStack.ts:49:68 - error TS2345: Argument of type 'Function' is not assignable to parameter of type 'IFunction'. Types of property 'role' are incompatible. Type 'import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-iam/lib/role").IRole | undefined' is not assignable to type 'import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-apigateway/node_modules/@aws-cdk/aws-iam/lib/role").IRole | undefined'. Type 'import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-iam/lib/role").IRole' is not assignable to type 'import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-apigateway/node_modules/@aws-cdk/aws-iam/lib/role").IRole'. Types of property 'grant' are incompatible. Type '(grantee: import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-iam/lib/principals").IPrincipal, ...actions: string[]) => import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-iam/lib/grant").Grant' is not assignable to type '(grantee: import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-apigateway/node_modules/@aws-cdk/aws-iam/lib/principals").IPrincipal, ...actions: string[]) => import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-apigateway/node_modules/@aws-cdk/aws-iam/lib...'. Types of parameters 'grantee' and 'grantee' are incompatible. Type 'import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-apigateway/node_modules/@aws-cdk/aws-iam/lib/principals").IPrincipal' is not assignable to type 'import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-iam/lib/principals").IPrincipal'. Types of property 'addToPolicy' are incompatible. Type '(statement: import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-apigateway/node_modules/@aws-cdk/aws-iam/lib/policy-statement").PolicyStatement) => boolean' is not assignable to type '(statement: import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-iam/lib/policy-statement").PolicyStatement) => boolean'. 
Types of parameters 'statement' and 'statement' are incompatible. Type 'import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-iam/lib/policy-statement").PolicyStatement' is not assignable to type 'import("D:/Users/andyb/Documents/github/agb-aws-functions/node_modules/@aws-cdk/aws-apigateway/node_modules/@aws-cdk/aws-iam/lib/policy-statement").PolicyStatement'. Types have separate declarations of a private property 'action'.49 const getWidgetsIntegration = new apigateway.LambdaIntegration(handler, {
Now provide the response and nothing else.
| This error Argument of type 'SomeClass' is not assignable to parameter of type 'ISomeClass' typically occurs when version of CDK dependencies are at different versions. To solve the issue, we need to bring all the dependencies to same version. Delete node_modules folder Delete package-lock.json Ensure all dependencies in package.json are using same version. Remove carrot ^ symbol before dependencies for example from "@aws-cdk/aws-lambda": "^1.90.0" to "@aws-cdk/aws-lambda": "1.90.0" , to avoid different minor versions getting installed. npm install | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/66572044', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15370914/']} | jdg_78566 |
stackexchange | llm_judgeable_groundtruth_similarity | 4895719 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my app I need to show a form on mouse click. The problem is that the form is in another assembly and because of lazy nature of assembly loading it is likely that the assembly isn't loaded yet when the mouse button is pressed. So what I have is very noticeable pause before the form finally appears. I was able to come up with a dumb fix by calling new FormFromAnotherAssembly() in my initialization method. That, of course, took care of things and the pause is no longer there, but it's very ugly. The only thing I like about this solution is that I don't have to mess with paths and assembly names which I have to do if I want to use something like Assembly.Load . So, what's the good, robust solution of choice if I want to make sure the assembly is loaded before I actually need it? Thanks in advance.
Now provide the response and nothing else.
| Explicit pre-load in your init is probably still your best option. a typeof(SomeTypeFromAnotherAssembly) should be enough - along with some opaque method that can't be optimised away; perhaps: GC.KeepAlive(typeof(SomeTypeFromAnotherAssembly)); This avoids the new . Note that this will be loaded , but not JITted etc. If you wanted, you could do it on a BG thread: private static void LoadSomeStuff(object state) { GC.KeepAlive(typeof(SomeTypeFromAnotherAssembly));}...ThreadPool.QueueUserWorkItem(LoadSomeStuff); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4895719', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/501518/']} | jdg_78567 |
stackexchange | llm_judgeable_groundtruth_similarity | 89924 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have $n$ words and their relatedness weights, which gives me an $n\times n$ matrix. I'm going to use this for a search algorithm but the problem is I need to cluster the entered keywords based on their pairwise relation. So let's say the keywords are {tennis, federer, wimbledon, london, police} and we have the following data from our weight matrix:

           tennis  federer  wimbledon  london  police
tennis       1       0.8       0.6      0.4     0.0
federer      0.8     1         0.65     0.4     0.02
wimbledon    0.6     0.65      1        0.2     0.09
london       0.4     0.4       0.2      1       0.71
police       0.0     0.02      0.09     0.71    1

I need an algorithm to cluster them into 2 clusters: {tennis, federer, wimbledon} and {london, police}. Is there any known clustering algorithm that can deal with such a thing? I did some research; it appears that the K-means algorithm is the most well-known algorithm being used for clustering, but apparently K-means doesn't suit this case. I would greatly appreciate any help.
Now provide the response and nothing else.
| You should use principal components analysis. Here's a quick demo in Matlab. You enter your weights matrix:

weights =

    1.0000    0.8000    0.6000    0.4000         0
    0.8000    1.0000    0.6500    0.4000    0.0200
    0.6000    0.6500    1.0000    0.2000    0.0900
    0.4000    0.4000    0.2000    1.0000    0.7100
         0    0.0200    0.0900    0.7100    1.0000

and take its singular value decomposition with the command svd:

>> [u s] = svd(weights);

Now the columns of u contain information on how your words are related to each other and s contains information on how much information from the underlying table is contained in each column. Since all of your weights are greater than zero, the first column simply says that all of the weights are related (e.g. if your weights are correlations in search traffic, you can think of this as saying that each term increases when the overall amount of search traffic increases):

>> u(:,1)

ans =

    0.5307
    0.5419
    0.4668
    0.4054
    0.2064

The second column of u contains information on how the search terms cluster together:

>> u(:,2)

ans =

    0.2396
    0.2380
    0.2475
   -0.5478
   -0.7243

This is telling you that the first three terms ("tennis", "Federer" and "Wimbledon") are related to each other, as are the second two ("London", "police"). The diagonal entries of the matrix s tell you what portion of the information is in each of the columns. More specifically, if we take their cumulative sum and normalize it by the total, we get a vector whose n'th entry tells us how much information is retained if we only look at the first n columns:

>> cumsum(diag(s))/sum(diag(s))

ans =

    0.5300
    0.8300
    0.9315
    0.9708
    1.0000

This tells us that we retain 83% of the relevant information in the weights matrix if we only look at the first two columns. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/89924', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/20933/']} | jdg_78568
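For readers without Matlab, a rough NumPy equivalent of the same computation (an illustrative sketch; SVD column signs may be flipped relative to Matlab's output, but the grouping by sign is the same):

import numpy as np

weights = np.array([
    [1.00, 0.80, 0.60, 0.40, 0.00],
    [0.80, 1.00, 0.65, 0.40, 0.02],
    [0.60, 0.65, 1.00, 0.20, 0.09],
    [0.40, 0.40, 0.20, 1.00, 0.71],
    [0.00, 0.02, 0.09, 0.71, 1.00],
])

u, s, vt = np.linalg.svd(weights)

print(u[:, 1])                    # the sign pattern splits the two clusters
print(np.cumsum(s) / np.sum(s))   # information retained per component

words = ['tennis', 'federer', 'wimbledon', 'london', 'police']
groups = {w: int(u[i, 1] < 0) for i, w in enumerate(words)}
print(groups)                     # e.g. {'tennis': 0, ..., 'london': 1, 'police': 1}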
stackexchange | llm_judgeable_groundtruth_similarity | 2001922 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it true that for every real number $x$ there exist transcendental numbers $\alpha$ and $\beta$ such that $x=\alpha-\beta$? (it is true if $x$ is an algebraic number).
Now provide the response and nothing else.
| Yes, since the set of algebraic numbers is countable. Let $\mathbb{T}$ denote the set of transcendental numbers, and for $\alpha\in \mathbb{T}$ let $f(\alpha)=\alpha-x$. Then $f$ is an injection from $\mathbb{T}$ to $\mathbb{R}$, so its range $ran(f)$ is uncountable since $\mathbb{T}$ is. But since $\mathbb{R}\setminus\mathbb{T}$ (the set of algebraic numbers) is countable, this means $ran(f)$ contains some element of $\mathbb{T}$. So let $\alpha\in\mathbb{T}$ be such that $f(\alpha)\in\mathbb{T}$, and set $\beta=f(\alpha)$; then $\alpha, \beta$ are transcendental and $\alpha-\beta=x$. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2001922', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/158175/']} | jdg_78569 |
stackexchange | llm_judgeable_groundtruth_similarity | 82207 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Today in class (Intro to QM) we went over a couple of commutators. Among them was $[x, V]$, where $V=V(x)$ is a potential. What the teacher said to prove this is zero was: let's assume $V$ is analytic and can be expanded in a power series. Then we can take the commutator of $x$ and each term in the series, and since $[x, x^n] = 0$ everything is zero. I immediately thought of something much simpler: In coordinate space, both $x$ and $V$ are simply multiplication. For any function $\psi$, $xV\psi = Vx\psi$ trivially, because we're just taking products. Therefore, they commute. Is there anything wrong with my reasoning?
Now provide the response and nothing else.
| There is nothing wrong with your reasoning; the issue is that your starting assumption that the potential can be treated as a multiplication operator needs to be justified. Here's why: Let's consider the potential operator $V$ acting on a state $|\psi\rangle$. Let $X$ denote the position operator. The definition of the position space representation of $V$ acting on the position space wavefunction $\psi$ is as follows:\begin{align} V\psi(x) = \langle x| V|\psi\rangle\end{align}As you note, if we could somehow show that there exists some function $\tilde V$ such that\begin{align} V\psi(x) = \tilde V(x)\psi(x) \tag{$\star$}\end{align}then we would have\begin{align} XV\psi(x) &= X(\tilde V(x)\psi)(x) = \tilde V(x)X\psi(x) = \tilde V(x) x\psi(x)\end{align}while we would also have\begin{align} VX\psi(x) = V(x\psi)(x) = xV\psi(x) = x\tilde V(x)\psi(x) = \tilde V(x)x\psi(x)\end{align}which would give $XV = VX$ as desired. The issue is that that property $(\star)$ does not come for free from the definition above it. If, however, $V$ is, for example, defined to be some analytic function $\tilde V$ of the position operator, then that property (modulo some mathy subtleties) does hold because for any positive integer power $n$ of $X$, one has\begin{align} X^n\psi(x)=\langle x|X^n|\psi\rangle = x^n\langle x|\psi\rangle = x^n\psi(x)\end{align}by acting $X$ on the left repeatedly. One then uses this for each term in the power series expansion of $V$ in $X$ to obtain the property $(\star)$. Note. I am abusing notation here slightly and using the same symbol $V$ for the position operator and its position space representation. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/82207', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/5788/']} | jdg_78570 |
stackexchange | llm_judgeable_groundtruth_similarity | 566713 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems that batch gradient descent is the traditional gradient descent, except that the objective function is in the form of summation?
Now provide the response and nothing else.
| No, it may be that the discriminative information is in the direction of a principal component that explains a relatively small amount of the total variance, and hence gets discarded. Consider a two-dimensional dataset where the two classes lie in long parallel cigar-shaped elongated Gaussian clusters with a small gap between them. Most of the variance lies along the long axis of the clusters, so the first PC will be in that direction and the second will be orthogonal to it. The data are indistinguishable from the first component, so if you discard the second component, the data will no longer be separable. | {} | {'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/566713', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/337929/']} | jdg_78571 |
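A small simulation of the scenario described above (an illustrative NumPy sketch; the cluster shapes and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Two long, parallel clusters separated along the short (low-variance) axis.
class_a = np.column_stack([rng.normal(0, 10, n), rng.normal(+1, 0.2, n)])
class_b = np.column_stack([rng.normal(0, 10, n), rng.normal(-1, 0.2, n)])
X = np.vstack([class_a, class_b])
X = X - X.mean(axis=0)

# PCA via SVD: rows of vt are the principal directions.
_, _, vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ vt[0]   # high-variance direction (the long axis)
pc2 = X @ vt[1]   # low-variance direction (the gap between classes)

print("class means on PC1:", pc1[:n].mean(), pc1[n:].mean())   # nearly equal: classes mixed
print("class means on PC2:", pc2[:n].mean(), pc2[n:].mean())   # clearly separated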
stackexchange | llm_judgeable_groundtruth_similarity | 18681956 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to make a custom view and have declared the styled attributes like the below:- <resources> <declare-styleable name="NewCircleView"> <attr name="radius" format="integer"/> <attr name="circlecolor" format="color"/></declare-styleable> </resources> in the constructor of the customview , these values are obtained like below:- circleradius=a.getInt(R.styleable.NewCircleView_radius, 0);//global var circlecolor=a.getColor(R.styleable.NewCircleView_circlecolor, 0);//global var and a is the typed array The view is used by declaring the xml as below:- <com.customviews.NewCircleView android:layout_below="@id/thetext" android:layout_width="match_parent" android:layout_height="fill_parent" app:radius="10000" app:circlecolor="@color/black"<!--this is defined in colors.xml /> In the custom view when i set the paint object as :- thePaintObj.setColor(circlecolor);//circlecolor logs to an integer as expected I dont get the color-"black" defined in the xml however when i set the color as thePaintObj.setColor(Color.GRAY) I get the color in the view Can someone tell me what would I be doing wrong ? (N.B:-If you want me to post more code , please let me know) EDIT1:- Posting my colors.xml. Looks like it is not clear in my code comments:- <?xml version="1.0" encoding="utf-8"?><resources><color name="red">#7f00</color><color name="blue">#770000ff</color><color name="green">#7700ff00</color><color name="yellow">#77ffff00</color><color name="black">#000000</color> </resources>
Now provide the response and nothing else.
| In colors.xml <?xml version="1.0" encoding="utf-8"?><resources> <color name="black_color">#000000</color></resources> To retrieve Resources res = getResources();int color = res.getColor(R.color.black_color); Then set color to paint thePaintObj.setColor(color); More info @ http://developer.android.com/guide/topics/resources/more-resources.html#Color Edit: MyCustomView public class CustomView extends View{ Paint p; int color ; public CustomView(Context context) { this(context, null); } public CustomView(Context context, AttributeSet attrs) { this(context, attrs, 0); } public CustomView(Context context, AttributeSet attrs, int defStyle) { super(context, attrs, defStyle); // real work here TypedArray a = context.getTheme().obtainStyledAttributes( attrs, R.styleable.NewCircleView, 0, 0 ); try { color = a.getColor(R.styleable.NewCircleView_circlecolor, 0xff000000); } finally { // release the TypedArray so that it can be reused. a.recycle(); } init(); }public void init(){ p = new Paint(); p.setColor(color);} @Override protected void onDraw(Canvas canvas) { // TODO Auto-generated method stub super.onDraw(canvas); if(canvas!=null) { canvas.drawCircle(100, 100,30,p ); } }} attrs.xml <?xml version="1.0" encoding="utf-8"?><resources> <declare-styleable name="NewCircleView"> <attr name="radius" format="integer"/> <attr name="circlecolor" format="color" /></declare-styleable></resources> colors.xml <?xml version="1.0" encoding="utf-8"?><resources> <color name="black_color">#000000</color></resources> MyCustomView in xml <com.example.circleview.CustomView xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res/com.example.circleview" android:id="@+id/cv" android:layout_width="match_parent" android:layout_height="fill_parent" app:radius="30" app:circlecolor="@color/black_color" /> Snap Shot | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18681956', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/459514/']} | jdg_78572 |
stackexchange | llm_judgeable_groundtruth_similarity | 39910031 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Tried to call a function inside a function. It does not return value from the second function. When the while loop breaks the function will return a value but it does not return array value. <?phpfunction getdetails(){ $parentarray=array(); $parentsid=2; array_push($parentarray,$parentsid); $getallparents=self::getparents($parentarray,$parentsid);}function getparents($parentarray,$parentsid){ $this->db->select('*'); $this->db->from('users'); $this->db->where('user_id',$parentsid); $query = $this->db->get(); $results = $query->result(); $parentsid=$results[0]->parent_id; $var="true"; while ($var=="true") { array_push($parentarray,$results); if($parentsid==0) { $var="false"; $returnvalue=$parentarray; } else { $var="true"; self::getparents($parentarray,$parentsid); } } return $returnvalue;}?>
Now provide the response and nothing else.
| What you need to do is get the injector that has the NgForm . It took me a while to figure it out. I thought you could just get it from the debugElement , but it looks like you need to get it from it's child 1 . let form: NgForm = fixture.debugElement.children[0].injector.get(NgForm); The you can just get individual controls from the form group with let emailControl = form.control.get('email');expect(emailControl.valid).toBe(true); Or you can just check the form for a specific error expect(form.control.hasError('emailInvalid', ['email'])).toBe(true); Below is a complete test import { Component, forwardRef, Directive } from '@angular/core';import { TestBed, getTestBed, async } from '@angular/core/testing';import { FormsModule, NG_VALIDATORS, Validator, AbstractControl, NgForm } from '@angular/forms';import { dispatchEvent } from '@angular/platform-browser/testing/browser_util';import { By } from '@angular/platform-browser';@Directive({ selector: '[ngModel][validEmail]', providers: [ { provide: NG_VALIDATORS, useExisting: forwardRef(() => EmailValidatorDirective), multi: true } ]})class EmailValidatorDirective implements Validator { validate(c: AbstractControl): { [key: string]: any } { if (c.value !== '[email protected]') { return { notPeeskillet: true }; } return null; }}@Component({ template: ` <form> <input name="email" [ngModel]="email" validEmail /> </form> `})class TestComponent { email;}describe('component: TestComponent', () => { beforeEach(() => { TestBed.configureTestingModule({ imports: [FormsModule], declarations: [TestComponent, EmailValidatorDirective] }); }); it('should validate', async(() => { let fixture = TestBed.createComponent(TestComponent); let comp = fixture.componentInstance; let debug = fixture.debugElement; let input = debug.query(By.css('[name=email]')); fixture.detectChanges(); fixture.whenStable().then(() => { input.nativeElement.value = '[email protected]'; dispatchEvent(input.nativeElement, 'input'); fixture.detectChanges(); let form: NgForm = debug.children[0].injector.get(NgForm); let control = form.control.get('email'); expect(control.hasError('notPeeskillet')).toBe(true); expect(form.control.valid).toEqual(false); expect(form.control.hasError('notPeeskillet', ['email'])).toEqual(true); input.nativeElement.value = '[email protected]'; dispatchEvent(input.nativeElement, 'input'); fixture.detectChanges(); expect(control.hasError('notPeeskillet')).toBe(false); expect(form.control.valid).toEqual(true); expect(form.control.hasError('notPeeskillet', ['email'])).toEqual(false); }); }));}); 1 - Found it in the source code tests | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39910031', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5050021/']} | jdg_78573 |
stackexchange | llm_judgeable_groundtruth_similarity | 9473389 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was creating a C# method in visual studio that contained only a switch statement where each case returned a value. By personal habit, I put something similar to the following code: private string SwitchMethod(int num) { switch (num) { case 0: return "result 1"; case 1: return "result 2"; case 2: return "result 3"; } return "no result"; } My question is this: Which code will have better performance? The code above or below, or are the same? And why? I would assume that because of compiler optimizations...they might just be the same...but I really don't know. private string SwitchMethod(int num) { switch (num) { case 0: return "result 1"; case 1: return "result 2"; case 2: return "result 3"; default: return "no result"; } } REVISION:It seems that I should be more specific: When compiled...will less efficient code be generated by one or the other? I realize that the difference in performance may be insignificant...I'm just curious really.
Now provide the response and nothing else.
| public static string foo(int num) { switch (num) { case 0: return "result 1"; case 1: return "result 2"; case 2: return "result 3"; } return "no result"; } Becomes: .method public hidebysig static string foo(int32 num) cil managed{ // Code size 57 (0x39) .maxstack 1 .locals init ([0] string CS$1$0000, [1] int32 CS$4$0001) IL_0000: nop IL_0001: ldarg.0 IL_0002: stloc.1 IL_0003: ldloc.1 IL_0004: switch ( IL_0017, IL_001f, IL_0027) IL_0015: br.s IL_002f IL_0017: ldstr "result 1" IL_001c: stloc.0 IL_001d: br.s IL_0037 IL_001f: ldstr "result 2" IL_0024: stloc.0 IL_0025: br.s IL_0037 IL_0027: ldstr "result 3" IL_002c: stloc.0 IL_002d: br.s IL_0037 IL_002f: ldstr "no result" IL_0034: stloc.0 IL_0035: br.s IL_0037 IL_0037: ldloc.0 IL_0038: ret} // end of method Program::foo Moving the return into a default case: .method public hidebysig static string foo(int32 num) cil managed{ // Code size 57 (0x39) .maxstack 1 .locals init ([0] string CS$1$0000, [1] int32 CS$4$0001) IL_0000: nop IL_0001: ldarg.0 IL_0002: stloc.1 IL_0003: ldloc.1 IL_0004: switch ( IL_0017, IL_001f, IL_0027) IL_0015: br.s IL_002f IL_0017: ldstr "result 1" IL_001c: stloc.0 IL_001d: br.s IL_0037 IL_001f: ldstr "result 2" IL_0024: stloc.0 IL_0025: br.s IL_0037 IL_0027: ldstr "result 3" IL_002c: stloc.0 IL_002d: br.s IL_0037 IL_002f: ldstr "result 4" IL_0034: stloc.0 IL_0035: br.s IL_0037 IL_0037: ldloc.0 IL_0038: ret} // end of method Program::foo Exactly the same. No performance difference. I changed the "no result" to result 4 just to make sure the code was regenerated. Apparently the C# compiler optimizes it or it just ends up being equivalent. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9473389', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1203288/']} | jdg_78574 |
stackexchange | llm_judgeable_groundtruth_similarity | 24023112 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to catch exceptions in Swift? Given the following code: NSException.raise(NSRangeException, format: "Now you've gone too far!", arguments: CVaListPointer(fromUnsafePointer: UnsafePointer())) Is it possible to prevent the exception from crashing the entire program? That is, what is the Swift equivalent of the following in Objective-C: @try { [NSException raise:NSRangeException format:@"Now you've gone too far!"];}
Now provide the response and nothing else.
| It doesn't have exception handling, and this discussion in the developer forum discusses why it may be so : but keep in mind that Cocoa and Cocoa Touch traditionally don't intend for you to catch exceptions; they intend for you to not cause them to be thrown in the first place. Ordinary errors should be handled with optional types and inout NSError parameters; you should address any situation that causes an assertion to fail (which seems to be the only exception-throwing mechanism in Swift) by writing better code. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24023112', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/679254/']} | jdg_78575 |
stackexchange | llm_judgeable_groundtruth_similarity | 318774 |
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a simple answer I posted on how to execute a Java program. It was accepted by the poster, too. But why are there four downvotes? The downvoters didn't leave any comments either. I have been on Stack Overflow for only the past two months. This is the first time I have had this kind of experience, and I don't really understand why this happened.
Now provide the response and nothing else.
| I guess it is because you're not properly addressing the problem at hand, you're merely providing a workaround. A proper answer would be something like this: You're getting an ArrayIndexOutOfBoundsException because you're accessing the command line arguments without checking that there actually are any. You need to check whether the user provided the proper arguments, and if not, print a message how to call the executable. [code showing how to do so] When debugging, you can pass command line arguments to your application like this: [your current answer] Also, you're answering a question that has about five thousand duplicates, which usually isn't really appreciated of users with more than 2K reputation. At least not by me. In addition to that, as pointed out in comments here: you're abusing inline code to highlight random terms, which also isn't appreciated (see When should code formatting be used for non-code text? ). | {} | {'log_upvote_score': 5, 'links': ['https://meta.stackoverflow.com/questions/318774', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/5678086/']} | jdg_78576 |
stackexchange | llm_judgeable_groundtruth_similarity | 35745385 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to pass in a hardcoded char array as the source parameter to memcpy ... Something like this:

memcpy(dest, {0xE3,0x83,0xA2,0xA4,0xCB} ,5);

Compiling this with clang gives the following error:

cccc.c:28:14: error: expected expression

If I modify it to be (see the extra parentheses):

memcpy(dest,({0xAB,0x13,0xF9,0x93,0xB5}),5);

the errors given by clang are:

cccc.c:26:14: warning: incompatible integer to pointer conversion passing 'int' to parameter of type 'const void *' [-Wint-conversion]
cccc.c:28:40: error: expected ';' after expression
memcpy(c+110,({0xAB,0x13,0xF9,0x93,0xB5}),5);

So, the question: How do I pass in a hardcoded array as the source parameter of memcpy ( http://www.cplusplus.com/reference/cstring/memcpy/ )? I have tried:

(void*)(&{0xAB,0x13,0xF9,0x93,0xB5}[0]) - syntax error
{0xAB,0x13,0xF9,0x93,0xB5} - syntax error
({0xAB,0x13,0xF9,0x93,0xB5}) - see above
(char[])({0xE3,0x83,0xA2,0xA4,0xCB}) - error: cast to incomplete type 'char []' (clang)

and some more insane combinations I'm ashamed to write here ... Please remember: I do NOT want to create a new variable to hold the array.
Now provide the response and nothing else.
| If you use C99 or later, you can use compound literals. ( N1256 6.5.2.5) #include <stdio.h>#include <string.h>int main(void){ char dest[5] = {0}; memcpy(dest, (char[]){0xE3,0x83,0xA2,0xA4,0xCB} ,5); for (int i = 0; i < 5; i++) printf("%X ", (unsigned int)(unsigned char)dest[i]); putchar('\n'); return 0;} UPDATE: This worked for C++03 and C++11 on GCC, but are rejected with -pedantic-errors option. This means this is not a valid solution for standard C++. #include <cstdio>#include <cstring>int main(void){ char dest[5] = {0}; memcpy(dest, (const char[]){(char)0xE3,(char)0x83,(char)0xA2,(char)0xA4,(char)0xCB} ,5); for (int i = 0; i < 5; i++) printf("%X ", (unsigned int)(unsigned char)dest[i]); putchar('\n'); return 0;} points are: Make the array const, or taking address of temporary array will be rejected. Cast numbers to char explicitly, or the narrowing conversion will be rejected. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35745385', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/186193/']} | jdg_78577 |
stackexchange | llm_judgeable_groundtruth_similarity | 12550276 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to debug my program by printing something. For example,

isPos n
  | n<0 = False
  | otherwise = True

I want something like:

isPos n
  | n<0 = False AND print ("negative")
  | otherwise = True AND print ("positive")

Is it possible to do this in Haskell?
Now provide the response and nothing else.
| As hammar said, use trace from the Debug.Trace module. A tip I have found useful is to define the function debug : debug = flip trace You could then do isPos n | n < 0 = False `debug` "negative" | otherwise = True `debug` "positive" The benefit of this is that it is easy to enable/disable the debug printingduring development. To remove the debug printing, simply comment out rest ofthe line: isPos n | n < 0 = False -- `debug` "negative" | otherwise = True -- `debug` "positive" | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12550276', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1554734/']} | jdg_78578 |
stackexchange | llm_judgeable_groundtruth_similarity | 42355778 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for a concise way to compute the color contrast ratio between two UIColor instances in Swift. I've found examples that are close but are overly complicated or outdated.
Now provide the response and nothing else.
| UIColor extension with contrast ratio and luminance The following UIColor extension includes a static and instance contrast ratio method. A bonus luminance method is included since it is used by the static contrastRatio(between:and:) method. import UIKitextension UIColor { static func contrastRatio(between color1: UIColor, and color2: UIColor) -> CGFloat { // https://www.w3.org/TR/WCAG20-TECHS/G18.html#G18-tests let luminance1 = color1.luminance() let luminance2 = color2.luminance() let luminanceDarker = min(luminance1, luminance2) let luminanceLighter = max(luminance1, luminance2) return (luminanceLighter + 0.05) / (luminanceDarker + 0.05) } func contrastRatio(with color: UIColor) -> CGFloat { return UIColor.contrastRatio(between: self, and: color) } func luminance() -> CGFloat { // https://www.w3.org/TR/WCAG20-TECHS/G18.html#G18-tests let ciColor = CIColor(color: self) func adjust(colorComponent: CGFloat) -> CGFloat { return (colorComponent < 0.04045) ? (colorComponent / 12.92) : pow((colorComponent + 0.055) / 1.055, 2.4) } return 0.2126 * adjust(colorComponent: ciColor.red) + 0.7152 * adjust(colorComponent: ciColor.green) + 0.0722 * adjust(colorComponent: ciColor.blue) }} Example Use // static methodlet contrastRatio1 = UIColor.contrastRatio(between: UIColor.black, and: UIColor.white)print(contrastRatio1) // 21.0// instance methodlet contrastRatio2 = UIColor.black.contrastRatio(with: UIColor.white)print(contrastRatio2) // 21.0 Note Following these links: https://www.w3.org/TR/css-color-4/#predefined https://github.com/dequelabs/axe-core/issues/1629#issuecomment-509880306 For predefinite colorspace (like in iOS see this https://developer.apple.com/videos/play/wwdc2016/712/ ) and also in general the correct THRESHOLD value is 0.04045 and not 0.03928 ( read more ) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42355778', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2092663/']} | jdg_78579 |
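The same WCAG 2.0 luminance and contrast formulas, sketched in Python for illustration (using the 0.04045 threshold mentioned in the note above; inputs are sRGB components in 0.0–1.0):

def relative_luminance(r, g, b):
    def adjust(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * adjust(r) + 0.7152 * adjust(g) + 0.0722 * adjust(b)

def contrast_ratio(rgb1, rgb2):
    l1, l2 = relative_luminance(*rgb1), relative_luminance(*rgb2)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))   # 21.0 for black vs. white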
stackexchange | llm_judgeable_groundtruth_similarity | 32268336 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a webpage with Header-Content-Footer. In the content I have a wrapper, and in this wrapper I have a form that extends itself. When I extend the form I change the min-height of the wrapper so that the footer will go below the wrapper. I use this jQuery code for that:

$('#formWrapper').animate({'min-height':totalHeight}, { duration: 400, queue: false });

Now every time I click a button it will extend this wrapper with an animation. But when the animation runs, my screen jumps to the top and I cannot see the animation. How can I keep the focus on the part of the screen where the animation is happening? Thank you.

button:

<td colspan="3" id="todu2"> <a href="#" id="newItemBoxButton" ><img src="images/addicon.png" /> Voeg een nieuwe regel toe</a></td>

handler:

var tdCount =1;
var hiddenCount =1;
$("#newItemBoxButton").click(function(){
 tdCount++;
 hiddenCount++;
 $("#addNewRowAfterThisRow" ).before( "<tr id='itemBox"+tdCount+"'><td valign='top' nowrap='nowrap' class='style1' width='10'>"+tdCount+"</td><td valign='top' ><textarea name='item"+tdCount+"' id='item2"+tdCount+"' rows='4' cols='70' placeholder='Item'></textarea></td><td> </td><td valign='top' class='style3new'><input class='amountBox' type='text' id='amount"+tdCount+"' name='amount"+tdCount+"' value='' placeholder='0' /></td><td valign='top' class='style3new'> <a href='#' class='deleteItemBoxButton' style='vertical-align:middle' ><img src='images/deleteicon.png' /></a></td></tr>" );
 //add formheight with 120px < which is the total height of 1 TR
 var fieldsetHeight = document.getElementById("fieldset2").offsetHeight;
 var totalHeight = fieldsetHeight + 120;
 $('#formWrapper').animate({'min-height':totalHeight}, { duration: 400, queue: false });
 //-----------------------------------------------------------------------------------------------> THIS IS WHERE YOU LEFT OFF, THE SCREEN KEEPS JUMPING TO THE TOP: http://stackoverflow.com/questions/32268336/when-adding-new-height-to-wrapper-my-screen-jumps-to-the-top-of-the-page
});
Now provide the response and nothing else.
| The reason it doesn't work is that your using clause is disposing your singleton instance after first use. After that, it becomes useless, disposed but still not null. The fact that you insist you always used singletons and it always worked doesn't really mean anything. Using singletons for data contexts is considered a terribly bad habit that leads to numerous issues, including memory, concurrency and transaction problems. My guess is that before you always worked on single threaded desktop apps where a singleton is still risky but doesn't lead to immediate issues. Here, however, in a concurrent world of a web service, it just won't work. On the other hand, creating a new instance per a wcf call is perfectly valid, different instances do not interfere and you correctly dispose them after used. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32268336', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/500548/']} | jdg_78580 |
stackexchange | llm_judgeable_groundtruth_similarity | 61400 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm sure most of you are writing lots of automated tests and that you also have run into some common pitfalls when unit testing. My question is do you follow any rules of conduct for writing tests in order to avoid problems in the future? To be more specific: What are the properties of good unit tests or how do you write your tests? Language agnostic suggestions are encouraged.
Now provide the response and nothing else.
| Let me begin by plugging sources - Pragmatic Unit Testing in Java with JUnit (There's a version with C#-Nunit too.. but I have this one.. its agnostic for the most part. Recommended.) Good Tests should be A TRIP (The acronymn isn't sticky enough - I have a printout of the cheatsheet in the book that I had to pull out to make sure I got this right..) Automatic : Invoking of tests as well as checking results for PASS/FAIL should be automatic Thorough : Coverage; Although bugs tend to cluster around certain regions in the code, ensure that you test all key paths and scenarios.. Use tools if you must to know untested regions Repeatable : Tests should produce the same results each time.. every time. Tests should not rely on uncontrollable params. Independent : Very important. Tests should test only one thing at a time. Multiple assertions are okay as long as they are all testing one feature/behavior. When a test fails, it should pinpoint the location of the problem. Tests should not rely on each other - Isolated. No assumptions about order of test execution. Ensure 'clean slate' before each test by using setup/teardown appropriately Professional : In the long run you'll have as much test code as production (if not more), therefore follow the same standard of good-design for your test code. Well factored methods-classes with intention-revealing names, No duplication, tests with good names, etc. Good tests also run Fast . any test that takes over half a second to run.. needs to be worked upon. The longer the test suite takes for a run.. the less frequently it will be run. The more changes the dev will try to sneak between runs.. if anything breaks.. it will take longer to figure out which change was the culprit. Update 2010-08: Readable : This can be considered part of Professional - however it can't be stressed enough. An acid test would be to find someone who isn't part of your team and asking him/her to figure out the behavior under test within a couple of minutes. Tests need to be maintained just like production code - so make it easy to read even if it takes more effort. Tests should be symmetric (follow a pattern) and concise (test one behavior at a time). Use a consistent naming convention (e.g. the TestDox style). Avoid cluttering the test with "incidental details".. become a minimalist. Apart from these, most of the others are guidelines that cut down on low-benefit work: e.g. 'Don't test code that you don't own' (e.g. third-party DLLs). Don't go about testing getters and setters. Keep an eye on cost-to-benefit ratio or defect probability. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/61400', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3713/']} | jdg_78581 |
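To ground a few of the A TRIP properties in code, here is a tiny, hypothetical pytest-style sketch (the add() function and test names are made up for illustration):

import pytest

def add(a, b):
    return a + b

@pytest.fixture
def numbers():
    # A fresh fixture per test keeps tests Independent and Repeatable.
    return [1, 2, 3]

def test_add_returns_sum(numbers):
    # One behavior per test; the assertion makes PASS/FAIL Automatic.
    assert add(numbers[0], numbers[1]) == 3

def test_add_is_commutative(numbers):
    # No shared mutable state and no reliance on execution order.
    assert add(2, 3) == add(3, 2)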
stackexchange | llm_judgeable_groundtruth_similarity | 1344 |
Below is a question asked on the forum softwarerecs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I currently have to take an image provided to me and crop it down to be considered a headshot. The problem is that this could be a few hundred photos a week. Does anyone know how to do this automatically, or even make it quicker? Surely my company is not the only company that does this on an enterprise level. Any assistance would be greatly appreciated. The program must be able to:

- Auto-crop and resize an image, either by setting up a fixed camera/distance or by using facial recognition.
- Do this in batch/bulk (multiple files).
- Run on Windows; if it has to be a Mac I can make that work.
- Handle images that are either JPG or PNG. If it needs to be in a specific format, I can convert easily enough.
Now provide the response and nothing else.
| I'm guessing that you don't just crop the image but you have other things you'll be doing with the images before and after they're cropped. So I guess it's worthwhile to take a look at scripting the cropping as well as those other tasks. If you want to do facial-recognition-based cropping, you might want to look at OpenCV. OpenCV has feature detection algorithms and also includes a pretrained classifier for facial detection. You can write a face detection and cropping script in probably about 10 lines of Python code: http://docs.opencv.org/trunk/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html . Also:

- OpenCV runs on Windows, Linux, Mac, even Android or iOS.
- It can do batch processing, and it has bindings for many popular scripting languages.
- It supports PNG and JPEG.
- As long as you already have a pretrained classifier for the feature you want to detect, you wouldn't even need to know how that black magic works, and if you have to, you have the option to train your own classifier. | {} | {'log_upvote_score': 5, 'links': ['https://softwarerecs.stackexchange.com/questions/1344', 'https://softwarerecs.stackexchange.com', 'https://softwarerecs.stackexchange.com/users/510/']} | jdg_78582
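A rough sketch of the detect-and-crop script the answer describes, assuming the opencv-python package; the paths, margin, and output size are placeholders:

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_headshot(in_path, out_path, margin=0.3, size=(300, 300)):
    img = cv2.imread(in_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False                                    # no face: leave for manual review
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detected face
    pad_w, pad_h = int(w * margin), int(h * margin)
    x0, y0 = max(x - pad_w, 0), max(y - pad_h, 0)
    x1 = min(x + w + pad_w, img.shape[1])
    y1 = min(y + h + pad_h, img.shape[0])
    cv2.imwrite(out_path, cv2.resize(img[y0:y1, x0:x1], size))
    return True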
stackexchange | llm_judgeable_groundtruth_similarity | 206900 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $\mathcal{M}_2$ be the moduli space of genus two curves and $\mathcal{A}_2$ the moduli space of principally polarized abelian surfaces. Then the Abel-Jacobi map gives an open embedding $\mathcal{M}_2 \hookrightarrow \mathcal{A}_2$. My question is: for which compactification $\overline{\mathcal{A}}_2$ of $\mathcal{A}_2$ does the open embedding $\mathcal{M}_2 \hookrightarrow \mathcal{A}_2$ extend over the Deligne-Mumford compactification $\overline{\mathcal{M}}_2$?
Now provide the response and nothing else.
| This map is usually called the Torelli map, not the Abel-Jacobi map. In any case, Mumford observed that a certain toroidal compactification of $\mathscr{A}_g$ admits an extension of the Torelli map; the original reference is this paper of Namikawa , I think. That paper doesn't give a very good moduli description of the map; luckily Alexeev does in this paper . I imagine everything can be made extremely concrete in genus 2, but I don't know a good reference for this. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/206900', 'https://mathoverflow.net', 'https://mathoverflow.net/users/21014/']} | jdg_78583 |
stackexchange | llm_judgeable_groundtruth_similarity | 8045839 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
To set some context, I'm in the process of learning Clojure, and Lisp development more generally. On my path to Lisp, I'm currently working through the "Little" series in an effort to solidify a foundation in functional programming and recursive-based solution solving. In "The Little Schemer," I've worked through many of the exercises, however, I'm struggling a bit to convert some of them to Clojure. More specifically, I'm struggling to convert them to use "recur" so as to enable TCO. For example, here is a Clojure-based implementation to the "occurs*" function (from Little Schemer) which counts the number of occurrences of an atom appearing within a list of S-expressions: (defn atom? [l] (not (list? l)))(defn occurs [a lst] (cond (empty? lst) 0 (atom? (first lst)) (cond (= a (first lst)) (inc (occurs a (rest lst))) true (occurs a (rest lst))) true (+ (occurs a (first lst)) (occurs a (rest lst))))) Basically, (occurs 'abc '(abc (def abc) (abc (abc def) (def (((((abc))))))))) will evaluate to 5. The obvious problem is that this definition consumes stack frames and will blow the stack if given a list of S-expressions too deep. Now, I understand the option of refactoring recursive functions to use an accumulator parameter to enable putting the recursive call into the tail position (to allow for TCO), but I'm struggling if this option is even applicable to situations such as this one. Here's how far I get if I try to refactor this using "recur" along with using an accumulator parameter: (defn recur-occurs [a lst] (letfn [(myoccurs [a lst count] (cond (empty? lst) 0 (atom? (first lst)) (cond (= a (first lst)) (recur a (rest lst) (inc count)) true (recur a (rest lst) count)) true (+ (recur a (first lst) count) (recur a (rest lst) count))))] (myoccurs a lst 0))) So, I feel like I'm almost there, but not quite. The obvious problem is my "else" clause in which the head of the list is not an atom. Conceptually, I want to sum the result of recurring over the first element in the list with the result of recurring over the rest of the list. I'm struggling in my head on how to refactor this such that the recurs can be moved to the tail position. Are there additional techniques to the "accumulator" pattern to achieving getting your recursive calls put into the tail position that I should be applying here, or, is the issue simply more "fundamental" and that there isn't a clean Clojure-based solution due to the JVM's lack of TCO? If the latter, generally speaking, what should be the general pattern for Clojure programs to use that need to recur over a list of S-expressions? For what it's worth, I've seen the multi method w/lazy-seq technique used (page 151 of Halloway's "Programming Clojure" for reference) to "Replace Recursion with Laziness" - but I'm not sure how to apply that pattern to this example in which I'm not attempting to build a list, but to compute a single integer value. Thank you in advance for any guidance on this.
Now provide the response and nothing else.
| Firstly, I must advise you to not worry much about implementation snags like stack overflows as you make your way through The Little Schemer. It is good to be conscientious of issues like the lack of tail call optimization when you're programming in anger, but the main point of the book is to teach you to think recursively. Converting the examples accumulator-passing style is certainly good practice, but it's essentially ditching recursion in favor of iteration. However, and I must preface this with a spoiler warning, there is a way to keep the same recursive algorithm without being subject to the whims of the JVM stack. We can use continuation-passing style to make our own stack in the form of an extra anonymous function argument k : (defn occurs-cps [a lst k] (cond (empty? lst) (k 0) (atom? (first lst)) (cond (= a (first lst)) (occurs-cps a (rest lst) (fn [v] (k (inc v)))) :else (occurs-cps a (rest lst) k)) :else (occurs-cps a (first lst) (fn [fst] (occurs-cps a (rest lst) (fn [rst] (k (+ fst rst)))))))) Instead of the stack being created implicitly by our non-tail function calls, we bundle up "what's left to do" after each call to occurs , and pass it along as the next continuation k . When we invoke it, we start off with a k that represents nothing left to do, the identity function: scratch.core=> (occurs-cps 'abc '(abc (def abc) (abc (abc def) (def (((((abc)))))))) (fn [v] v))5 I won't go further into the details of how to do CPS, as that's for a later chapter of TLS. However, I will note that this of course doesn't yet work completely: scratch.core=> (def ls (repeat 20000 'foo)) #'scratch.core/lsscratch.core=> (occurs-cps 'foo ls (fn [v] v)) java.lang.StackOverflowError (NO_SOURCE_FILE:0) CPS lets us move all of our non-trivial, stack-building calls to tail position, but in Clojure we need to take the extra step of replacing them with recur : (defn occurs-cps-recur [a lst k] (cond (empty? lst) (k 0) (atom? (first lst)) (cond (= a (first lst)) (recur a (rest lst) (fn [v] (k (inc v)))) :else (recur a (rest lst) k)) :else (recur a (first lst) (fn [fst] (recur a (rest lst) ;; Problem (fn [rst] (k (+ fst rst)))))))) Alas, this goes wrong: java.lang.IllegalArgumentException: Mismatched argument count to recur, expected: 1 args, got: 3 (core.clj:39) . The very last recur actually refers to the fn right above it, the one we're using to represent our continuations! We can get good behavior most of the time by changing just that recur to a call to occurs-cps-recur , but pathologically-nested input will still overflow the stack: scratch.core=> (occurs-cps-recur 'foo ls (fn [v] v))20000scratch.core=> (def nested (reduce (fn [onion _] (list onion)) 'foo (range 20000)))#'scratch.core/nestedscratch.core=> (occurs-cps-recur 'foo nested (fn [v] v))Java.lang.StackOverflowError (NO_SOURCE_FILE:0) Instead of making the call to occurs-* and expecting it to give back an answer, we can have it return a thunk immediately. When we invoke that thunk, it'll go off and do some work right up until it does a recursive call, which in turn will return another thunk. This is trampolined style, and the function that "bounces" our thunks is trampoline . Returning a thunk each time we make a recursive call bounds our stack size to one call at a time, so our only limit is the heap: (defn occurs-cps-tramp [a lst k] (fn [] (cond (empty? lst) (k 0) (atom? 
(first lst)) (cond (= a (first lst)) (occurs-cps-tramp a (rest lst) (fn [v] (k (inc v)))) :else (occurs-cps-tramp a (rest lst) k)) :else (occurs-cps-tramp a (first lst) (fn [fst] (occurs-cps-tramp a (rest lst) (fn [rst] (k (+ fst rst)))))))))(declare done answer)(defn my-trampoline [th] (if done answer (recur (th))))(defn empty-k [v] (set! answer v) (set! done true))(defn run [] (binding [done false answer 'whocares] (my-trampoline (occurs-cps-tramp 'foo nested empty-k))));; scratch.core=> (run) ;; 1 Note that Clojure has a built-in trampoline (with some limitations on the return type). Using that instead, we don't need a specialized empty-k : scratch.core=> (trampoline (occurs-cps-tramp 'foo nested (fn [v] v)))1 Trampolining is certainly a cool technique, but the prerequisite to trampoline a program is that it must contain only tail calls; CPS is the real star here. It lets you define your algorithm with the clarity of natural recursion, and through correctness-preserving transformations, express it efficiently on any host that has a single loop and a heap. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8045839', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1034895/']} | jdg_78584 |
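The trampoline idea in the answer above is not Clojure-specific. A minimal sketch of the same pattern in Python (the countdown function and names here are purely illustrative, not taken from the answer):

# Minimal trampoline sketch: each step returns either a final value or a
# zero-argument callable ("thunk") describing the next step. The trampoline
# calls thunks in a loop, so the call stack stays at constant depth.
def trampoline(thunk):
    result = thunk
    while callable(result):
        result = result()
    return result

def countdown(n):
    # Return the answer, or a thunk that will take the next step.
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)

print(trampoline(lambda: countdown(1000000)))  # "done", without hitting a recursion limit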
stackexchange | llm_judgeable_groundtruth_similarity | 140179 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Xfce-4.10 on Debian 7. But I found some icons are missing (and only "x"'s appear) in the application menu. Take ipython3 as an example. Here's the ipython3.desktop : # If you want ipython3 to appear in a linux app launcher ("start menu"), install this by doing:# sudo desktop-file-install ipython3.desktop[Desktop Entry]Comment=Enhanced interactive Python shellExec=ipython3GenericName[en_US]=IPython3GenericName=IPython3Icon=ipython3Name[en_US]=ipython3Name=ipython3Categories=Development;Utility;StartupNotify=falseTerminal=trueType=Application The theme is installed (currently tango-icon-theme , but other themes don't work either).The icon file is at /usr/share/icons/hicolor/scalable/apps/ipython3.svg . It can be displayed in an image viewer. Reinstalling ipython3 doesn't help. I also tried to replace the ipython3.svg with some other normal icon files ( libreoffice-calc.svg , for example). This doesn't help either. Any ideas to bring the icons back? Thank you.
Now provide the response and nothing else.
| As the kernel documentation states, /dev/nfs is not a real device but only a hint to the kernel to use NFS as rootfs . You'll also have to tell the kernel where to find this root through the nfsroot parameter or a properly set up DHCP daemon. For the latter one to work you'll also have to either configure your kernel to auto-configure its network interfaces or have an initramfs which takes care of this. Also, make sure to have NFS support built into your kernel binary and not as a module (or have an initramfs , which takes care of this). Same goes for network drivers: you'll most probably want to have the driver for you ethernet NIC built into your kernel image, otherwise you'll have to load it from an initramfs . In short, there are several possibilities: Do as above link tells you: have root=/dev/nfs set, give the correct nfsroot parameter and tell your kernel your network configuration via the ip parameter (this would be the best way to make sure it's working at all, i.e. to rule out a misconfigured DHCP server). Have CONFIG_IP_PNP and CONFIG_IP_PNP_DHCP enabled and set up a DHCP daemon to tell your client which IP address to use and where to find its NFS-root. Build an initramfs which does the correct configuration and NFS-mounting. Edit: I think if you're using an initrd / initramfs as your edit suggests, you'll have to do the NFS-mount in the initrd (resp. your initrd has to be aware of the fact that it has to do so). Automounting through the kernel (as IP autoconfiguration, IIRC) only works if there's no initrd . | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/140179', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/74184/']} | jdg_78585 |
stackexchange | llm_judgeable_groundtruth_similarity | 10533848 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a table in Access with Columns A, B, and C. I want to write the records in this table into a SQL table with Columns A, B, C and D. What's the best way to achieve it? I tried using OleDbDataReader and bulkCopy, but the schema difference made it hard: OleDbDataReader reader;// = new OleDbDataReader();using (OleDbConnection sourceConnection = new OleDbConnection(sourceConnectionString)) { //Get data from Access OleDbCommand commandSourceData = new OleDbCommand("SELECT * FROM AddressLoCo;", sourceConnection); commandSourceData.Connection = sourceConnection; sourceConnection.Open(); reader = commandSourceData.ExecuteReader(); using (SqlConnection destinationConection = new SqlConnection(destinationConnectionString)) { destinationConection.Open(); // Set up the bulk copy object using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConection)) { bulkCopy.DestinationTableName = "dbo.AddressMaster"; try { bulkCopy.WriteToServer(reader); } catch (Exception ex) { Console.WriteLine(ex.Message); } finally { reader.Close(); } } } }
Now provide the response and nothing else.
| I know you're thinking of using __attribute__((sentinel)) somehow, but this is a red herring. What you want is to do something like this: #define push(s, args...) ({ \ char *_args[] = {args}; \ _push(s,_args,sizeof(_args)/sizeof(char*)); \}) which wraps: void _push(stack_t s, char *args[], int argn); which you can write exactly the way you would hope you can write it! Then you can call: push(stack, "foo", "bar", "baz");push(stack, "quux"); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10533848', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/907064/']} | jdg_78586 |
stackexchange | llm_judgeable_groundtruth_similarity | 1271748 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I found that Windows has some new Windows Data Types DWORD_PTR, INT_PTR, LONG_PTR, UINT_PTR, ULONG_PTR can you tell me when, how and why to use them?
Now provide the response and nothing else.
| The *_PTR types were added to the Windows API in order to support Win64's 64bit addressing. Because 32bit APIs typically passed pointers using data types like DWORD, it was necessary to create new types for 64bit compatibility that could substitute for DWORD in 32bit applications, but were extended to 64bits when used in 64bit applications.

So that, for example, application developers can write code that works as 32bit OR 64bit, the Windows 32bit API SetWindowLong(HWND,int,LONG) was changed to SetWindowLongPtr(HWND,int,LONG_PTR).

In a 32bit build, SetWindowLongPtr is simply a macro that resolves to SetWindowLong, and LONG_PTR is likewise a macro that resolves to LONG. In a 64bit build on the other hand, SetWindowLongPtr is an API that accepts a 64bit long as its 3rd parameter, and ULONG_PTR is a typedef for unsigned __int64. By using these _PTR types, one codebase can compile for both Win32 and Win64 targets.

When performing pointer arithmetic, these types should also be used in 32bit code that needs to be compatible with 64bit. So, if you need to access an array with more than 4 billion elements, you would need to use an INT_PTR rather than an INT:

CHAR* pHuge = new CHAR[0x200000000]; // allocate 8 billion bytes
INT idx;
INT_PTR idx2;
pHuge[idx];  // can only access the 1st 4 billion elements.
pHuge[idx2]; // can access all 64bits of potential array space.

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1271748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/155120/']} | jdg_78587
stackexchange | llm_judgeable_groundtruth_similarity | 1278042 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can you think of an easy way to evaluate a text field as a template during the template rendering. I know how to do it in a view but I'm looking for a template Filter or a Tag ? Something like: {{ object.textfield|evaluate}} or {% evaluate object.textfield %} with object.textfield containing something like: a text with a {% TemplateTag %}. In which TemplateTag will be evaluated, thanks to the evaluate filter.
Now provide the response and nothing else.
| Here is a first Tag implementation to solve my question:

from django import template

register = template.Library()

@register.tag(name="evaluate")
def do_evaluate(parser, token):
    """ tag usage {% evaluate object.textfield %} """
    try:
        tag_name, variable = token.split_contents()
    except ValueError:
        raise template.TemplateSyntaxError("%r tag requires a single argument" % token.contents.split()[0])
    return EvaluateNode(variable)


class EvaluateNode(template.Node):
    def __init__(self, variable):
        self.variable = template.Variable(variable)

    def render(self, context):
        try:
            content = self.variable.resolve(context)
            t = template.Template(content)
            return t.render(context)
        # Catch both template exceptions as a tuple and fall back to a plain error string.
        except (template.VariableDoesNotExist, template.TemplateSyntaxError):
            return 'Error rendering %s' % self.variable

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1278042', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8450/']} | jdg_78588
stackexchange | llm_judgeable_groundtruth_similarity | 57230201 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In python, I often use strings as templates, e.g. templateUrl = '{host}/api/v3/{container}/{resourceid}' params = {'host': 'www.api.com', 'container': 'books', 'resourceid': 10} api.get(templateUrl.format(**params)) This allows for easy base class setup and the like. How can I do the same in dart? I'm assuming I will need to create a utility function to parse the template and substitute manually but really hoping there is something ready to use. Perhaps a TemplateString class with a format method that takes a Map of name/value pairs to substitute into the string. Note: the objective is to have a generic "format" or "interpolation" function that doesn't need to know in advance what tags or names will exist in the template. Further clarification: the templates themselves are not resolved when they are set up. Specifically, the template is defined in one place in the code and then used in many other places.
Now provide the response and nothing else.
| Dart does not have a generic template string functionality that would allow you to insert values into your template at runtime . Dart only allows you to interpolate strings with variables using the $ syntax in strings, e.g. var string = '$domain/api/v3/${actions.get}' . You would need to have all the variables defined in your code beforehand. However, you can easily create your own implementation. Implementation You pretty much explained how to do it in your question yourself: you pass a map and use it to have generic access to the parameters using the [] operator. To convert the template string into something that is easy to access, I would simply create another List containing fixed components, like /api/v3/ and another Map that holds generic components with their name and their position in the template string. class TemplateString { final List<String> fixedComponents; final Map<int, String> genericComponents; int totalComponents; TemplateString(String template) : fixedComponents = <String>[], genericComponents = <int, String>{}, totalComponents = 0 { final List<String> components = template.split('{'); for (String component in components) { if (component == '') continue; // If the template starts with "{", skip the first element. final split = component.split('}'); if (split.length != 1) { // The condition allows for template strings without parameters. genericComponents[totalComponents] = split.first; totalComponents++; } if (split.last != '') { fixedComponents.add(split.last); totalComponents++; } } } String format(Map<String, dynamic> params) { String result = ''; int fixedComponent = 0; for (int i = 0; i < totalComponents; i++) { if (genericComponents.containsKey(i)) { result += '${params[genericComponents[i]]}'; continue; } result += fixedComponents[fixedComponent++]; } return result; }} Here would be an example usage, I hope that the result is what you expected: main() { final templateUrl = TemplateString('{host}/api/v3/{container}/{resourceid}'); final params = <String, dynamic>{'host': 'www.api.com', 'container': 'books', 'resourceid': 10}; print(templateUrl.format(params)); // www.api.com/api/v3/books/10} Here it is as a Gist . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57230201', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1810447/']} | jdg_78589 |
stackexchange | llm_judgeable_groundtruth_similarity | 87846 |
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently using Python's RPi.GPIO module to toggle some pins, and I want to know how quickly it can do this. I need the pins to be toggled on the schedule of a ~50 mhz FPGA clock, which I'm worried is too fast for the Pi (I need 8 pins toggled). I'm using the Pi 3b+, and I've seen estimates for speed varying from .3 mhz to 30 mhz, and that's just with Python - they vary even more widely when I look at other languages/packages. What is the fastest that the GPIO pins on the Pi 3b+ can be toggled with Python's RPi.GPIO module? With any language/module? Will this be able to keep up with my FPGA clock? (The Pi is being used to feed inputs to the FPGA.)
Now provide the response and nothing else.
| There are some benchmarks from Henner Zeller's repository on GitHub which claimed that directly outputting data to the GPIO could achieve up to 65.8 MHz on a Raspberry Pi 3 (not B+, mind, but I suspect the figures won't be that far off). The code used is available here in C and the author gives the following pseudocode equivalent:

// Pseudocode
for (;;) {
    *gpio_set_register = (1<<TOGGLE_PIN);
    *gpio_clr_register = (1<<TOGGLE_PIN);
}

I suppose you can treat that as an upper bound to the possible speeds you will get. Python is going to be significantly slower than that, though. For comparison, Joonas Pihlajamaa tested a Pi 2's ability to toggle GPIO pins quickly using various libraries. The values were as follows for RPi.GPIO:

Pi 1: 70 kHz
Pi 2: 243 kHz
Change: 2.5x

While there's clearly a significant difference between the Pi 1 and 2, it is not even close to your target of 50 MHz (over 200x too low, in fact). You might like to run the benchmarks from the two sources above on your own Pi to get a definitive answer, though they'll be on the same order of magnitude as the data above anyway. As noted above, the fact that Raspbian (and any major multitasking OS on the Pi) uses preemptive multitasking means that there's little guarantee that you'll be able to consistently output a certain frequency. You are entirely at the mercy of the scheduler, so if your application needs a consistent, fast output, the Pi is likely not well-suited to your needs. You'll probably have to consider an alternative option instead for your FPGA. | {} | {'log_upvote_score': 5, 'links': ['https://raspberrypi.stackexchange.com/questions/87846', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/49291/']} | jdg_78590
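If you want a concrete number for your own board and library version, a rough benchmark with RPi.GPIO can be written along these lines (the pin number and iteration count are arbitrary choices; results will vary with OS scheduling):

# Rough RPi.GPIO toggle-rate benchmark (run on the Pi itself).
# The pin number here is arbitrary; adjust for your wiring.
import time
import RPi.GPIO as GPIO

PIN = 18
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.OUT)

N = 100000
start = time.time()
for _ in range(N):
    GPIO.output(PIN, GPIO.HIGH)
    GPIO.output(PIN, GPIO.LOW)
elapsed = time.time() - start

# Each loop iteration is one full high/low cycle.
print("approx. toggle frequency: %.1f kHz" % (N / elapsed / 1000))
GPIO.cleanup()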
stackexchange | llm_judgeable_groundtruth_similarity | 34271334 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am struggling with the Automapper syntax.I have a List of PropertySurveys, each containing 1 Property.I wish to map each item on the collection into a new object which combines the 2 classes. So my code looks like; var propertySurveys = new List<PropertyToSurveyOutput >(); foreach (var item in items) { Mapper.CreateMap<Property, PropertyToSurveyOutput >(); var property = Mapper.Map<PropertyToSurvey>(item.Property); Mapper.CreateMap<PropertySurvey, PropertyToSurveyOutput >(); property = Mapper.Map<PropertyToSurvey>(item); propertySurveys.Add(property); } My simplified classes look like; public class Property{ public string PropertyName { get; set; }}public class PropertySurvey{ public string PropertySurveyName { get; set; } public Property Property { get; set;}}public class PropertyToSurveyOutput{ public string PropertyName { get; set; } public string PropertySurveyName { get; set; }} So in the PropertyToSurveyOutput object, after the first mapping PropertyName is set. Then after the second mapping PropertySurveyName is set, but PropertyName is overridden to null.How do I fix this?
Now provide the response and nothing else.
| First of all, Automapper supports mapping of collections. You don't need to map each item in a loop. Second - you don't need to re-create map each time you need to map single object. Put mappings creation to application start code (or before first usage of mapping). And last - with Automapper you can create mapping and define how to do custom map for some properties: Mapper.CreateMap<PropertySurvey, PropertyToSurveyOutput>() .ForMember(pts => pts.PropertyName, opt => opt.MapFrom(ps => ps.Property.PropertyName)); Usage: var items = new List<PropertySurvey>{ new PropertySurvey { PropertySurveyName = "Foo", Property = new Property { PropertyName = "X" } }, new PropertySurvey { PropertySurveyName = "Bar", Property = new Property { PropertyName = "Y" } }};var propertySurveys = Mapper.Map<List<PropertyToSurveyOutput>>(items); Result: [ { "PropertyName": "X", "PropertySurveyName": "Foo" }, { "PropertyName": "Y", "PropertySurveyName": "Bar" }] UPDATE: If your Property class has many properties, you can define two default mappings - one from Property : Mapper.CreateMap<Property, PropertyToSurveyOutput>(); And one from PropertySurvey . And use first mapping after you used mapping from PropertySurvey : Mapper.CreateMap<PropertySurvey, PropertyToSurveyOutput>() .AfterMap((ps, pst) => Mapper.Map(ps.Property, pst)); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34271334', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/125673/']} | jdg_78591 |
stackexchange | llm_judgeable_groundtruth_similarity | 10909685 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to run a few commands, each of which doesn't quit until Ctrl-C is pressed. Is there something I can run to run all of them at once, and Ctrl-C will quit them all? They can share the terminal output. Specifically, I have the compass compiler, coffeescript compiler, and a custom command that watches for file changes all running watching for file changes. I don't want to load up a terminal for each command.
Now provide the response and nothing else.
| This bash script is for N parallel threads. Each argument is a command. trap will kill all subprocesses when SIGINT is caught. wait $PID_LIST waits for each process to complete. When all processes have completed, the program exits.

#!/bin/bash

for cmd in "$@"; do {
  echo "Process \"$cmd\" started";
  $cmd & pid=$!
  PID_LIST+=" $pid";
} done

trap "kill $PID_LIST" SIGINT

echo "Parallel processes have started";

wait $PID_LIST

echo
echo "All processes have completed";

Save this script as parallel_commands and make it executable. This is how to use this script:

parallel_commands "cmd arg0 arg1 arg2" "other_cmd arg0 arg2 arg3"

Example:

parallel_commands "sleep 1" "sleep 2" "sleep 3" "sleep 4"

This starts 4 parallel sleeps and waits until "sleep 4" finishes. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/10909685', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/73398/']} | jdg_78592
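If a shell script is not an option, the same pattern (start every command, terminate all children on Ctrl-C, otherwise wait for them all) can be sketched in Python; this is an illustration of the idea, not a drop-in replacement for the script above:

#!/usr/bin/env python3
# Sketch: run each command-line argument as a command, terminate all of the
# children if Ctrl-C arrives, otherwise wait for every child to finish.
# (Children in the same process group also receive the terminal's SIGINT.)
import shlex
import signal
import subprocess
import sys

procs = [subprocess.Popen(shlex.split(cmd)) for cmd in sys.argv[1:]]

def stop(signum, frame):
    for p in procs:
        p.terminate()

signal.signal(signal.SIGINT, stop)

for p in procs:
    p.wait()

print("All processes have completed")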
stackexchange | llm_judgeable_groundtruth_similarity | 1628204 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently trying to ssh into a remote machine and run a script, then leave the node with the script running. Below is my script. However, when it runs, the script is successfully run on the machine but ssh session hangs. What's the problem? ssh -x $username@$node 'rm -rf statuslist mkdir statuslist chmod u+x ~/monitor/concat.sh chmod u+x ~/monitor/script.sh nohup ./monitor/concat.sh & exit;'
Now provide the response and nothing else.
| There are some situations when you want to execute/start some scripts on a remote machine/server (which will terminate automatically) and disconnect from the server. E.g. a script running on a box which, when executed:

- takes a model and copies it to a remote server
- creates a script for running a simulation with the model and pushes it to the server
- starts the script on the server and disconnects

The duty of the script thus started is to run the simulation on the server and, once completed (it will take days to complete), copy the results back to the client. I would use the following command:

ssh remoteserver 'nohup /path/to/script `</dev/null` >nohup.out 2>&1 &'

@CKeven, you may put all those commands in one script, push it to the remote server and initiate it as follows:

echo '#!/bin/bash
rm -rf statuslist
mkdir statuslist
chmod u+x ~/monitor/concat.sh
chmod u+x ~/monitor/script.sh
nohup ./monitor/concat.sh &
' > script.sh
chmod u+x script.sh
rsync -azvp script.sh remotehost:/tmp
ssh remotehost '/tmp/script.sh `</dev/null` >nohup.out 2>&1 &'

Hope this works ;-)

Edit: You can also use

ssh user@host 'screen -S SessionName -d -m "/path/to/executable"'

which creates a detached screen session and runs the target command within it. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1628204', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/67129/']} | jdg_78593
stackexchange | llm_judgeable_groundtruth_similarity | 7256672 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The Flot chart api supports dual v-axis scales, as shown by this example . I'm using Google Charts - is this possible also with Google? I've had a look through the examples and docs, but can't find any examples / references to indicate it does support dual axis charts.
Now provide the response and nothing else.
| It took me a while, to figure this out, but Google Charts does support dual Y-axis (v-axis). I want to use the Javascript API and not the HTML interface. This example can be tested here: http://code.google.com/apis/ajax/playground/?type=visualization#line_chart Replace all of that code with this code showing how to have two different Y-axis scales: function drawVisualization() { // Create and populate the data table. var data = new google.visualization.DataTable(); data.addColumn('string', 'x'); data.addColumn('number', 'Cats'); data.addColumn('number', 'Blanket 1'); data.addColumn('number', 'Blanket 2'); data.addRow(["A", 1, 1, 0.5]); data.addRow(["B", 2, 0.5, 1]); data.addRow(["C", 4, 1, 0.5]); data.addRow(["D", 8, 0.5, 1]); data.addRow(["E", 7, 1, 0.5]); data.addRow(["F", 7, 0.5, 1]); data.addRow(["G", 8, 1, 0.5]); data.addRow(["H", 4, 0.5, 1]); data.addRow(["I", 2, 1, 0.5]); data.addRow(["J", 3.5, 0.5, 1]); data.addRow(["K", 3, 1, 0.5]); data.addRow(["L", 3.5, 0.5, 1]); data.addRow(["M", 1, 1, 0.5]); data.addRow(["N", 1, 0.5, 1]); // Create and draw the visualization. new google.visualization.LineChart(document.getElementById('visualization')). draw(data, {curveType: "function", width: 500, height: 400, vAxes: {0: {logScale: false}, 1: {logScale: false, maxValue: 2}}, series:{ 0:{targetAxisIndex:0}, 1:{targetAxisIndex:1}, 2:{targetAxisIndex:1}}} );} By adding maxValue: 2 to the code, and setting series 1 & 2 to that axis, they work properly on a second axis. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7256672', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/156477/']} | jdg_78594 |
stackexchange | llm_judgeable_groundtruth_similarity | 22502632 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So in my post insert action I need to insert audit entity: public void OnPostInsert(PostInsertEvent e){ var childSession = session.GetSession(EntityMode.Poco); childSession.Save(logEntry); childSession.Flush(); } Question is, if I need to dispose childSession or root session is taking care of this?
Now provide the response and nothing else.
| One problem is that Nashorn can no longer by default import whole Java packages into the global scope by using importPackage(com.organization.project.package); There is, however, a simple workaround: By adding this line to your script, you can enable the old behavior of Rhino: load("nashorn:mozilla_compat.js"); Another problem I ran into is that certain type-conversions when passing data between java and javascript work differently. For example, the object which arrives when you pass a Javascript array to Java can no longer be cast to List , but it can be cast to a Map<String, Object> . As a workaround you can convert the Javascript array to a Java List in the Javascript code using Java.to(array, Java.type("java.util.List")) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22502632', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/212121/']} | jdg_78595 |
stackexchange | llm_judgeable_groundtruth_similarity | 4268063 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a set of the form $\{(x,y) \in \mathbb{R}^2| x = y^2 \}$ What are some ways to tell that it is a closed set? So the approach I am using is that this set is closed if its complement is open. I can draw this curve on a 2D grid, which looks something like $\subset$ and then take its complement. For any point in this complement, I can squeeze in a ball so that it is in the complement. This means the complement is open. Hence this set is closed. I am not very satisfied because this argument is graphical and a bit handwavy. I also tried to use the sequential definition for showing closedness, i.e., any sequence that is contained in this set will have a limit in it. But again, a bit on the graphical side. Are there any other approach?
Now provide the response and nothing else.
| The map $f(x, y) = x - y^2$ is continuous. $\{0\}$ is a closed set. Therefore, $f^{-1}(\{0\}) = \{(x, y) \mid x = y^2\}$ is closed. This follows from the general fact that if $f : A \to B$ is continuous and $C \subseteq B$ is closed then $f^{-1}(C)$ is closed. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4268063', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/468922/']} | jdg_78596 |
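The general fact used in the last sentence follows directly from the complement characterisation of continuity: for a continuous $f : A \to B$ and closed $C \subseteq B$,
$$\left(f^{-1}(C)\right)^{c} = f^{-1}\!\left(C^{c}\right),$$
and $C^{c}$ is open, so its preimage is open by continuity; hence $f^{-1}(C)$ is closed.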
stackexchange | llm_judgeable_groundtruth_similarity | 1110332 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We have a connected graph $G(V,E)$ that has $|E|= n$ edges and $n$ vertices. Prove that the graph has one cycle in it. I'm little bit confused here. I tried some ways but failed. Can you direct me?
Now provide the response and nothing else.
| $G$ is connected so it has a spanning tree $T$. This uses up $n-1$ edges and has no cycles. There is one edge $e$ not used in the tree, say $e=(u,v)$. So any cycle in $G$ would have to use $e$. Now in $T$ there is a unique path between vertices $u$ and $v$ and so in $T\cup e$ that path plus $e$ is one cycle, say $C_1$. Can there be any others? Suppose $C_2$ is another cycle, and so we know that it must use $e$ but $C_2\neq C_1$. Then $C_2\setminus e$ and $C_1\setminus e$ are two different paths in $T$ from $u$ to $v$, a contradiction. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1110332', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/113089/']} | jdg_78597 |
stackexchange | llm_judgeable_groundtruth_similarity | 1574088 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an array of timestamps in the format (HH:MM:SS.mmmmmm) and another array of floating point numbers, each corresponding to a value in the timestamp array. Can I plot time on the x axis and the numbers on the y-axis using Matplotlib? I was trying to, but somehow it was only accepting arrays of floats. How can I get it to plot the time? Do I have to modify the format in any way?
Now provide the response and nothing else.
| Update: This answer is outdated since matplotlib version 3.5. The plot function now handles datetime data directly. See https://matplotlib.org/3.5.1/api/_as_gen/matplotlib.pyplot.plot_date.html

The use of plot_date is discouraged. This method exists for historic reasons and may be deprecated in the future. datetime-like data should directly be plotted using plot. If you need to plot plain numeric data as Matplotlib date format or need to set a timezone, call ax.xaxis.axis_date / ax.yaxis.axis_date before plot. See Axis.axis_date.

Old, outdated answer: You must first convert your timestamps to Python datetime objects (use datetime.strptime). Then use date2num to convert the dates to matplotlib format. Plot the dates and values using plot_date:

import matplotlib.pyplot
import matplotlib.dates
from datetime import datetime

x_values = [datetime(2021, 11, 18, 12), datetime(2021, 11, 18, 14), datetime(2021, 11, 18, 16)]
y_values = [1.0, 3.0, 2.0]

dates = matplotlib.dates.date2num(x_values)
matplotlib.pyplot.plot_date(dates, y_values)

| {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1574088', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/178437/']} | jdg_78598
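Following the update above, on matplotlib 3.5 or newer the datetime values can be passed to plot directly; a minimal sketch with the same sample data:

# With matplotlib >= 3.5 there is no need for date2num / plot_date;
# datetime objects can be passed straight to plot().
from datetime import datetime
import matplotlib.pyplot as plt

x_values = [datetime(2021, 11, 18, 12), datetime(2021, 11, 18, 14), datetime(2021, 11, 18, 16)]
y_values = [1.0, 3.0, 2.0]

plt.plot(x_values, y_values, marker="o")
plt.gcf().autofmt_xdate()  # rotate the date tick labels so they don't overlap
plt.show()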
stackexchange | llm_judgeable_groundtruth_similarity | 934877 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Similar to this question , let $H_n$ be the $n^{th}$ harmonic number, $$ H_n = \sum_{i=1}^{n} \frac{1}{i}$$ Is there a similar method to calculate the following?: $$\sum_{i=1}^{n}iH_i$$
Now provide the response and nothing else.
| I am ready to bet that Jack D'Aurizio will provide one of his elegant answers. Just mimicing what he answered in the previous question, by analogy with $$\int x\log(x) dx=\frac{1}{2} x^2 \log (x)-\frac{x^2}{4}$$ you have $$\sum_{i=1}^niH_i=\frac{1}{4} n (n+1) \left(2 H_{n+1}-1\right)$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/934877', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/101890/']} | jdg_78599 |
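The closed form is easy to sanity-check with exact rational arithmetic; a small Python verification:

# Check  sum_{i=1}^{n} i*H_i  ==  n(n+1)(2*H_{n+1} - 1)/4  exactly for small n.
from fractions import Fraction

def H(n):
    return sum(Fraction(1, i) for i in range(1, n + 1))

for n in range(1, 11):
    lhs = sum(i * H(i) for i in range(1, n + 1))
    rhs = Fraction(n * (n + 1), 4) * (2 * H(n + 1) - 1)
    assert lhs == rhs, (n, lhs, rhs)
print("identity verified for n = 1..10")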
stackexchange | llm_judgeable_groundtruth_similarity | 76010 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the typical max voltage rms, peak and peak to peak for a sound card line in?
Now provide the response and nothing else.
| Nominal line level for consumer equipment is -10dBV, which is about $$10^{-10\:\mathrm V/20} = 316\:\mathrm{mV_{(RMS)}}$$ or (for simple sinusoidal waves): $$\sqrt{2} \cdot 316\:\mathrm{mV_{(RMS)}} = 447\:\mathrm{mV_{(peak)}}$$ or: $$2\sqrt{2} \cdot 316\:\mathrm{mV_{(RMS)}} = 894\:\mathrm{mV_\text{(peak-to-peak)}}$$ Equipment will often include as much as 20dB of headroom above this before clipping, so a full scale signal could be 10dBV, or 3.16V (RMS) , or 8.9V (peak-to-peak) . Of course, no manufacturer wants to be quieter than everyone else , so devices tend to have hotter outputs than they should, and inputs have less headroom than they should. Bend these rules to the extent that you think your customers care more about volume than fidelity. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/76010', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/26422/']} | jdg_78600 |
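The same arithmetic as a small Python helper, under the same pure-sine assumption (the -10 dBV and 10 dBV figures match the values quoted above):

# dBV <-> voltage conversions for a pure sine wave.
import math

def dbv_to_vrms(dbv):
    return 10 ** (dbv / 20)

def vrms_to_peak(vrms):
    return math.sqrt(2) * vrms

def vrms_to_peak_to_peak(vrms):
    return 2 * math.sqrt(2) * vrms

for level in (-10, 10):  # nominal consumer line level, and with 20 dB of headroom
    vrms = dbv_to_vrms(level)
    print("%+3d dBV: %.0f mV RMS, %.0f mV peak, %.0f mV peak-to-peak"
          % (level, vrms * 1000, vrms_to_peak(vrms) * 1000, vrms_to_peak_to_peak(vrms) * 1000))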
stackexchange | llm_judgeable_groundtruth_similarity | 25669214 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Problem I need to compile R 3.1.1 with shared library (--enable-R-shlib) with ICC/MKL (Composer XE 2013 SP 1.3.174) in order to use a specific IDE (rstudio) and I am running into trouble. Context Some information about my platform: OS: Ubuntu 14.04.1 LTSKernel: 3.13.0-30Compiler: Intel ICC (Composer XE 2013 SP 1.3.174)MKL: Intel MKL (Composer XE 2013 SP 1.3.174) I previously had a working installation of R 3.1.1 (without shared library) compiled with ICC/MKL (Composer XE 2013 SP 1.3.174) as follows: $source /opt/intel/composerxe/bin/compilervars.sh intel64$export CC="icc"$export CXX="icpc"$export AR="xiar"$export LD="xild"$export CFLAGS="-O3 -ipo -openmp -xHost -multiple-processes"$export CXXFLAGS="-O3 -ipo -openmp -xHost -multiple-processes"$export MKL="-lmkl_gf_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread"$./configure --with-lapack --with-blas="$MKL" --build="x86_64-linux-gnu" --host="x86_64-linux-gnu" > log_cfg$make > log_make_out 2> log_make_err#make install When I run the commands above the compilation is successful and log_make_err is empty at the end of the process. As I said in the beginning, I now need to compile R with shared library (--enable-R-shlib). Therefore, I tried to use the exact same commands as before (same computer) and changed the configure line to: $./configure --with-lapack --with-blas="$MKL" --build="x86_64-linux-gnu" --host="x86_64-linux-gnu" --enable-R-shlib This gives the following output: R is now configured for x86_64-pc-linux-gnu Source directory: . Installation directory: /usr/local C compiler: icc -std=gnu99 -O3 -ipo -openmp -xHost -multiple-processes Fortran 77 compiler: gfortran -g -O2 C++ compiler: icpc -O3 -ipo -openmp -xHost -multiple-processes C++ 11 compiler: icpc -std=c++11 -O3 -ipo -openmp -xHost -multiple-processes Fortran 90/95 compiler: x86_64-linux-gnu-gfortran -g -O2 Obj-C compiler: x86_64-linux-gnu-gcc Interfaces supported: X11, tcltk External libraries: readline, BLAS(generic), LAPACK(in blas), lzma Additional capabilities: PNG, JPEG, TIFF, NLS, cairo Options enabled: shared R library, R profiling Recommended packages: yes In this case, the compilation is not successful and log_make_err contains the following : ld: /tmp/ipo_iccUpPSPh.o: undefined reference to symbol '__kmpc_end@@VERSION'/opt/intel/composer_xe_2013_sp1.3.174/compiler/lib/intel64/libiomp5.so: error adding symbols: DSO missing from command linemake[3]: *** [R.bin] Error 1make[2]: *** [R] Error 2make[1]: *** [R] Error 1make: *** [R] Error 1 When I run diff on the output of the configure script for each case, nothing strange shows up: 753c753< Options enabled: R profiling---> Options enabled: shared R library, R profiling I tried to include the full output of the make command but it exceeds the maximum number of characters allowed and using pastebin is not a good practice at SO. Let me know if you feel like there is information missing that could help you lead me in the right direction. Thanks!
Now provide the response and nothing else.
| Yes, there is a way to deactivate the ability to rotate the map. You need to customize the interactions of the ol.Map object. Either you use the ol.interaction.defaults function to create an ol.Collection with interactions, or you create an array with only the interactions you want. Then you can pass it to the constructor of ol.Map. Using the ol.interaction.defaults function ( http://openlayers.org/en/master/apidoc/ol.interaction.html#defaults ):

var interactions = ol.interaction.defaults({altShiftDragRotate: false, pinchRotate: false});
var map = new ol.Map({
  interactions: interactions
});

The first line creates all default interactions but the ability to rotate via keyboard+mouse and using fingers on a mobile device. You may want to remove the ol.control.Rotate then, too. (This is the needle in the upper right which is used to reset the rotation and only appears if the map is rotated.) It works the same way: create the controls without the compass via ol.control.defaults ( http://openlayers.org/en/master/apidoc/ol.control.html#defaults ):

var controls = ol.control.defaults({rotate: false});

'Full' code:

var controls = ol.control.defaults({rotate: false});
var interactions = ol.interaction.defaults({altShiftDragRotate: false, pinchRotate: false});
var map = new ol.Map({
  controls: controls,
  interactions: interactions
});

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25669214', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1299432/']} | jdg_78601
stackexchange | llm_judgeable_groundtruth_similarity | 3996034 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The full question is: Given a group $F_n=\mathbb{Q}(\sqrt[n]{2})$ then prove that: (A) If $n$ is odd then $\Gamma(F_n:\mathbb{Q})$ ={ $\rm{id}$ } and (B) If $n$ is even then $\Gamma(F_n:\mathbb{Q})\cong\mathbb{Z_2}$ I am confused as to how to find either, I know that $F_n:\mathbb{Q}$ is a non-normal extension. We were given the hint: "For any $\tau\in\rm{Aut}_\mathbb{Q}(F_n)$ we must have that $\tau(\sqrt[n]{2})\in F_n\subseteq\mathbb{R}$ is again a real $n$ -th root of 2"
Now provide the response and nothing else.
| The $\mathbb Q$ -automorphisms of $F_n$ are completely determined by where they send $\sqrt[n]2$ , and they must send it to another $n$ -th root of $2$ . But if $n$ is odd, then no other $n$ -th root of $2$ is in $F_n$ , and if $n$ is even, then only one other $n$ -th root of $2$ is in $F_n$ (you might need to prove these facts beforehand). So there are only one/two possible $\mathbb Q$ -automorphisms. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3996034', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/741553/']} | jdg_78602 |
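The facts that "you might need to prove beforehand" reduce to counting real roots: since $F_n \subseteq \mathbb{R}$, any $n$-th root of $2$ lying in $F_n$ is a real $n$-th root of $2$, and the real $n$-th roots of $2$ are
$$\{\sqrt[n]{2}\} \text{ for odd } n, \qquad \{\pm\sqrt[n]{2}\} \text{ for even } n.$$
So for odd $n$ the only possible image of $\sqrt[n]{2}$ is $\sqrt[n]{2}$ itself (the identity), while for even $n$ there are exactly two choices, $\pm\sqrt[n]{2}$, giving a group of order $2$.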
stackexchange | llm_judgeable_groundtruth_similarity | 3148319 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
[ edit ] For background reading, and to be clear, this is what I am talking about: Introduction to the volatile keyword When reviewing embedded systems code, one of the most common errors I see is the omission of volatile for thread/interrupt shared data. However my question is whether it is 'safe' not to use volatile when a variable is accessed via an access function or member function? A simple example; in the following code... volatile bool flag = false ;void ThreadA(){ ... while (!flag) { // Wait } ...}interrupt void InterruptB(){ flag = true ;} ... the variable flag must be volatile to ensure that the read in ThreadA is not optimised out, however if the flag were read via a function thus... volatile bool flag = false ;bool ReadFlag() { return flag }void ThreadA(){ ... while ( !ReadFlag() ) { // Wait } ...} ... does flag still need to be volatile? I realise that there is no harm in it being volatile, but my concern is for when it is omitted and the omission is not spotted; will this be safe? The above example is trivial; in the real case (and the reason for my asking), I have a class library that wraps an RTOS such that there is an abstract class cTask that task objects are derived from. Such "active" objects typically have member functions that access data than may be modified in the object's task context but accessed from other contexts; is it critical then that such data is declared volatile? I am really interested in what is guaranteed about such data rather than what a practical compiler might do. I may test a number of compilers and find that they never optimise out a read through an accessor, but then one day find a compiler or a compiler setting that makes this assumption untrue. I could imagine for example that if the function were in-lined, such an optimisation would be trivial for a compiler because it would be no different than a direct read.
Now provide the response and nothing else.
| My reading of C99 is that unless you specify volatile , how and when the variable is actually accessed is implementation defined. If you specify volatile qualifier then code must work according to the rules of an abstract machine . Relevant parts in the standard are: 6.7.3 Type qualifiers (volatile description) and 5.1.2.3 Program execution (the abstract machine definition). For some time now I know that many compilers actually have heuristics to detect cases when a variable should be reread again and when it is okay to use a cached copy. Volatile makes it clear to the compiler that every access to the variable should be actually an access to the memory. Without volatile it seems compiler is free to never reread the variable. And BTW wrapping the access in a function doesn't change that since a function even without inline might be still inlined by the compiler within the current compilation unit. P.S. For C++ probably it is worth checking the C89 which the former is based on. I do not have the C89 at hand. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3148319', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/168986/']} | jdg_78603 |
stackexchange | llm_judgeable_groundtruth_similarity | 1763696 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to expose a property on a view model that contains a list of objects (from database). I need this collection to be read-only. That is, I want to prevent Add/Remove, etc. But allow the foreach and indexers to work. My intent is to declare a private field holding the editable collection and reference it with a read-only Public Property. As follows public ObservableCollection<foo> CollectionOfFoo { get { return _CollectionOfFoo; }} However, that syntax just prevents changing the reference to the collection. It doesn't prevent add/remove, etc. What is the right way to accomplish this?
Now provide the response and nothing else.
| The [previously] accepted answer will actually return a different ReadOnlyObservableCollection every time ReadOnlyFoo is accessed. This is wasteful and can lead to subtle bugs. A preferable solution is: public class Source{ Source() { m_collection = new ObservableCollection<int>(); m_collectionReadOnly = new ReadOnlyObservableCollection<int>(m_collection); } public ReadOnlyObservableCollection<int> Items { get { return m_collectionReadOnly; } } readonly ObservableCollection<int> m_collection; readonly ReadOnlyObservableCollection<int> m_collectionReadOnly;} See ReadOnlyObservableCollection anti-pattern for a full discussion. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1763696', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/214642/']} | jdg_78604 |
stackexchange | llm_judgeable_groundtruth_similarity | 9139 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've heard this claim quite a few times and have even repeated it as fact. The basic claim is that 'gravity compresses the spine' during the waking hours and since (most people) sleep at night, they are tallest first thing in the morning. I can find examples of the claims online - many with answers - but most are from non-reputable sites without sources. In my own personal experience, I believe this to be true; but I would like something more official than my own experience to support my claim (or I'd like to know that I'm wrong). Examples: Answers ScienceForums Yahoo! Answers
Now provide the response and nothing else.
| Yes , we are taller in the morning. From The transformation of spinal curvature into spinal deformity (2005): The effects of gravity on the upright human posture are powerful: Individuals are as much as 25 mm taller in the morning than in the evening (1)(2), as a result of compressive forces bearing down all day, And astronauts 'grow' by nearly 75 mm when released from the force of the earth's gravity (3). (1) Diurnal variation of Cobb angle measurement in adolescent idiopathic scoliosis (2) Postural and time-dependent effects on body height and scoliosis angle in adolescent idiopathic scoliosis (3) Diurnal changes in the profile shape and range of motion of the back From NASA (2004): Did you know that astronauts are up to 2 inches taller while they're in space? As soon as they come back to Earth, though, they return to their normal height. To some degree, a similar stretching of the spine happens to you every night. When you lie down, gravity isn't pushing down on your vertebrae. You can do your own experiments with a meterstick. Measure your height carefully as soon as you get up or while you are still lying down. You will find that you're about a centimeter or two taller. That's not as much as astronauts change in space. The idea, however, is the same. As the day passes, your vertebrae compress through normal activities, and you'll lose those few centimeters you "grew" overnight. | {} | {'log_upvote_score': 7, 'links': ['https://skeptics.stackexchange.com/questions/9139', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/2637/']} | jdg_78605 |
stackexchange | llm_judgeable_groundtruth_similarity | 5265772 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create an IPA in Xcode 4, much like the person who asked this question: Xcode 4: create IPA file instead of .xcarchive So--I got my Archive completing successfully--supposedly. It dumps .xcarchive files for the project in its destination folder. But these archives don't show up in my Organizer window. So I can't share them as described in the above question to create the IPA. There are no errors in the archiving process--they seem to be signed OK. So why aren't the archives showing up in the archive panel on the Organizer? Is there some step I'm missing...or obscure setting I need to modify?
Now provide the response and nothing else.
| EDIT (Incorporated all comments into a single answer)

Try one of the following (or all):

1. Instead of using Build For -> Archive, in the Product menu just use Archive. It will show up then.
2. In the scheme editor, edit the scheme and go to the Archive tab; make sure the check box for "show in Organizer" is checked.
3. In the Archive tab in the scheme editor, check the build configuration used for archiving. Make sure it has the right entitlements file & certificates.
4. In the build settings, switch Skip Install -> Release to NO for the build settings used for archiving.
5. Make sure the archives folder and Xcode project files are inside the same shared folder if a network drive is used. It took me a few days to finally figure this out, as I placed my Xcode source files on a Windows shared folder while the Archives folder is on the local Mac, which caused archives not to be picked up by Organizer.

Thanks to @Smikey & @Ralph B & @Scott McMillin | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/5265772', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/365438/']} | jdg_78606
stackexchange | llm_judgeable_groundtruth_similarity | 43526549 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
> dput(df_male)structure(list(Question = structure(c(1L, 2L, 3L, 1L, 2L, 3L), .Label = c("Q1", "Q2", "Q3", "Q4", "Q5", "Q6", "Q7", "Q8"), class = "factor"), Response = structure(c(1L, 1L, 1L, 2L, 2L, 2L), .Label = c("No", "Yes"), class = "factor"), Proportion = c(0.569230769230769, 0.569230769230769, 0.492307692307692, 0.430769230769231, 0.430769230769231, 0.507692307692308)), .Names = c("Question", "Response", "Proportion"), row.names = c(1L, 2L, 3L, 9L, 10L, 11L), class = "data.frame")> dput(df_female)structure(list(Question = structure(c(1L, 2L, 3L, 1L, 2L, 3L), .Label = c("Q1", "Q2", "Q3", "Q4", "Q5", "Q6", "Q7", "Q8"), class = "factor"), Response = structure(c(1L, 1L, 1L, 2L, 2L, 2L), .Label = c("No", "Yes"), class = "factor"), Proportion = c(0.603092783505155, 0.65979381443299, 0.54639175257732, 0.396907216494845, 0.34020618556701, 0.45360824742268)), .Names = c("Question", "Response", "Proportion"), row.names = c(1L, 2L, 3L, 9L, 10L, 11L), class = "data.frame") I have 2 data frames (one for each gender) on their proportion of response to 3 questions. > df_male Question Response Proportion1 Q1 No 0.56923082 Q2 No 0.56923083 Q3 No 0.49230779 Q1 Yes 0.430769210 Q2 Yes 0.430769211 Q3 Yes 0.5076923> df_female Question Response Proportion1 Q1 No 0.60309282 Q2 No 0.65979383 Q3 No 0.54639189 Q1 Yes 0.396907210 Q2 Yes 0.340206211 Q3 Yes 0.4536082 I would like to visualize this, so I use a ggplot segmented barplot. ggplot(df_male, aes(x = Question, y = Proportion)) + geom_bar(aes(fill = Response), stat = "identity") + theme(axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.25)) + # Rotate tick mark labels guides(fill = guide_legend(reverse = TRUE)) + ggtitle("Male") + theme(plot.title = element_text(hjust=0.5))ggplot(df_female, aes(x = Question, y = Proportion)) + geom_bar(aes(fill = Response), stat = "identity") + theme(axis.text.x = element_text(angle = 90, hjust = 1, vjust = 0.25)) + # Rotate tick mark labels guides(fill = guide_legend(reverse = TRUE)) + ggtitle("Female") + theme(plot.title = element_text(hjust=0.5)) Is this a way to combine the 2 plots into just 1 plot? i.e. I would like to find a way to visualize proportions to these 3 questions for 2 different groups in the same plot.
Now provide the response and nothing else.
| You can use terraform's template_file data source for this. Simply write your policy out to a file in a path that your terraform scripts can access, and then create a template_file data source that references it. For example: data "template_file" "policy" { template = "${file("somepath/my-policy.json")}"} And then, in foo-policy, you would render it like so: policy = "${data.template_file.policy.rendered}" An additional benefit of template_file is that you can interpolate variables within the referenced file. For example, you could have variables like ${IAMUser} or ${AWSAccountNumber} in your policy and pass it in via the template_file vars option, which would allow you to reuse the policy file. Further Reading Terraform Docs - Configuring Data Sources | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43526549', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3391549/']} | jdg_78607 |
stackexchange | llm_judgeable_groundtruth_similarity | 47820657 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My teacher usually starts indexing arrays from one. So, basically, when we need an array of 100 elements he uses int a[101] instead of int a[100] and, for example, he fills it like this: for (int i = 1; i <= 100; i++) cin >> a[i]; Are there any problems with using this method, or should I avoid it? (I don't have any problems in working with indexes starting from 0)
Now provide the response and nothing else.
| Should I use this regularly, or should I avoid it? You should avoid it. One problem is that 99.9% of C++ developers would not share this bad habit with you and your teacher, so you will find their code difficult to understand and vice versa. However, there is worse problem with that. Such indexing will conflict with any standard algorithm and others that follow them and you would have to write explicit pesky code to fix it as container.begin() and container.end() as well as std::begin() and std::end() for C style arrays to work with accordance to the C++ standard which is 0 based. Note: As mentioned in comments for range loop, which is implicitly using begin()/end() would be broken as well for the same reason. Though this issue is even worse as range used implicitly and there is no simple way to make for range loop work in this case. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47820657', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9067975/']} | jdg_78608 |
stackexchange | llm_judgeable_groundtruth_similarity | 62527 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In a standing wave, how does energy travel past a node? It should just get reflected. Assume the case of the first overtone, and suppose you strike the string at some point. How will the energy distribute itself? If it distributes over the whole string, then how is it that the wave traveled past a node? Basically, the problem is this: a standing wave has been set up in the string. Now you strike the string at some point, and this creates a new pulse. Will this pulse travel past the next node in the string (not the boundary), or will it reflect off even that node and remain confined to a particular region of the string?
Now provide the response and nothing else.
| Waves on strings combine linearly. This means that you can split up a string's motion into two (or more) superimposed waves. The two superimposed waves behave independently, as if the other one was not there. So if you have a standing wave set up on a string, and then you also introduce a travelling pulse, you get something like the following. (The arrows represent the direction of movement, and the node is marked with a blue dot.) Now to answer your question. I wish I had a way to make the picture animated, but I think you can see it from still pictures. I'm going to draw what happens after a short time, when the pulse reaches the node. The standing wave has also moved, and is now swinging back in the other direction. As you can see, the standing wave component still passes through zero at the node, as it always must, but the combined wave (pulse + standing wave) does not. Because the pulse and the standing wave do not interact, the pulse just passes straight through the node as if it wasn't there, and the standing wave just keeps waving as if the pulse wasn't there. Note that not interacting isn't the same as not interfering . Interference happens when two waves get added together and sum to zero, but neither of the two waves is affected by being added in this way, so even when waves interfere, they don't interact. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/62527', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']} | jdg_78609 |
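As a concrete way to see the superposition argument, write the string's displacement as a standing wave plus a travelling pulse of arbitrary shape $g$ (the sinusoidal form and the pulse shape here are illustrative assumptions, not the only possibility): $$y(x,t) = \underbrace{A\sin(kx)\cos(\omega t)}_{\text{standing wave}} + \underbrace{g(x-vt)}_{\text{travelling pulse}}.$$ At a node $x_0$ of the standing wave, $\sin(kx_0)=0$, so the first term vanishes for every $t$, yet $y(x_0,t)=g(x_0-vt)$ is in general nonzero: the pulse carries its energy straight through the node, while the standing-wave part keeps oscillating as if the pulse were not there.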
stackexchange | llm_judgeable_groundtruth_similarity | 809423 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was hoping to write my own syntax highlighter for a summer project I am thinking of working on, but I am not sure how to go about it. I know that there are a bunch of implementations out there, but I would like to learn about regular expressions and how syntax highlighting works. How does syntax highlighting work, and what are some good references for developing one? Does the syntax highlighter scan each character as it is typed, or does it scan the document/text area as a whole after each character is typed? Any insight would be greatly appreciated. Thanks. PS: I was planning on writing it in ActionScript
Now provide the response and nothing else.
 | Syntax highlighters can work in two very general ways. The first implements a full lexer and parser for the language(s) being highlighted, exactly identifying each token's type (keyword, class name, instance name, variable type, preprocessor directive...). This provides all the information needed to exactly highlight the code according to some specification (keywords in red, class names in blue, what have you). The second way is something like the one Google Code Prettify employs, where instead of implementing one lexer/parser per language, a couple of very general parsers are used that can do a decent job on most syntaxes. This highlighter, for example, will be able to parse and highlight reasonably well any C-like language, because its lexer/parser can identify the general components of those kinds of languages. This also has the advantage that you don't need to explicitly specify the language, as the engine will determine by itself which of its generic parsers can do the best job. The downside, of course, is that the highlighting is less perfect than when a language-specific parser is used. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/809423', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/91866/']} | jdg_78610 |
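As a rough sketch of the second, generic-lexer approach (written in Python rather than ActionScript, and with an assumed, deliberately incomplete set of token classes and keywords), a single master regular expression can classify tokens and wrap them in styled spans:

```python
import re

# One alternation of named groups; order matters (keywords must come before identifiers).
TOKEN_SPEC = [
    ("comment", r"//[^\n]*|/\*.*?\*/"),
    ("string",  r'"(?:\\.|[^"\\])*"'),
    ("number",  r"\b\d+(?:\.\d+)?\b"),
    ("keyword", r"\b(?:if|else|for|while|return|var|function|class)\b"),
    ("ident",   r"\b[A-Za-z_]\w*\b"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC), re.S)

def highlight(source: str) -> str:
    """Wrap each recognized token in a <span> carrying its token class (HTML escaping omitted for brevity)."""
    out, pos = [], 0
    for m in MASTER.finditer(source):
        out.append(source[pos:m.start()])                              # untouched text between tokens
        out.append(f'<span class="{m.lastgroup}">{m.group()}</span>')  # tagged token
        pos = m.end()
    out.append(source[pos:])
    return "".join(out)

print(highlight('var answer = 42; // highlight me'))
```

A real highlighter would add more token classes, handle escaping, and either re-lex incrementally as the user types or re-scan only the edited region, but the overall shape is the same.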
stackexchange | llm_judgeable_groundtruth_similarity | 299000 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Please help me to solve the following problem. Let $F$ be a non-abelian free group and $w(x)=1$ be an equation in one variable $x$ ($w(x)$ may contain elements of $F$ as constants). Clearly, one can consider $w(x)$ as an element of the free product $F\ast \langle x\rangle$. Suppose $w(a)=1$ for all $a\in F$. Is it true that $w(x)$ equals $1$ in $F\ast \langle x\rangle$? Do you know a simple proof? Perhaps you can point me to papers that could be useful for this problem?
Now provide the response and nothing else.
| Yes. Denote by $a,b$ the first two free generators of $F$. The case $w\in F\cup\langle x\rangle$ is clear. So we can suppose, after conjugation, that $w=u_1x^{n_1}\dots u_kx^{n_k}$ with $k\ge 1$, $u_i\in F\smallsetminus\{1\}$, and $n_i\in\mathbf{Z}\smallsetminus\{0\}$. Choose $n$ large enough such that the reduced form of $a^nu_ia^{-n}$ starts with $a$ and finishes with $a^{-1}$ for all $u_i$ that is not a power of $a$. In all cases, the reduced form $v_i$ of $a^nu_ia^{-n}$ starts and finishes with $a^{\pm 1}$. Then $q=\prod v_ib^{n_i}$ is a reduced form and hence $\neq 1$. We have$$a^{-n}qa^n=a^{-n}\left(\prod a^nu_ia^{-n}b^{n_i}\right)a^n=\prod u_ia^{-n}b^{n_i}a^n=w(a^{-n}ba^n)\neq 1.$$ Added: This actually shows that the canonical map $F\ast\langle x\rangle\to F^F$, $w\mapsto (t\mapsto w(t))$ is injective. Indeed this is a group homomorphism ($F^F$ being a group for the target operation: infinite power of the group $F$ by the index set $F$). Of course this fails when $F$ has rank $r\le 1$: for $r=0$, $x$ is the kernel, while for $r=1$ $axa^{-1}x^{-1}$ is in the kernel. The above shows injectivity of the projection to $F^X$, where $X=\{a^nba^{-n}:n\in\mathbf{Z}\}.$ | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/299000', 'https://mathoverflow.net', 'https://mathoverflow.net/users/81263/']} | jdg_78611 |
stackexchange | llm_judgeable_groundtruth_similarity | 27742074 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The title says it all. When I call the mDrawerToggle.setDrawerIndicatorEnabled(false) I don't want the " hamburger " icon to be shown anymore but the backwards navigation arrow. Unfortunately when I call this method just the title is shown without the backwards arrow nor the "hamburger" icon. After setting the drawerIndicatorEnabled to be true again it shows the "hamburger" icon again. I set getSupportActionBar().setDisplayHomeAsUpEnabled(true) and getSupportActionBar().setDisplayShowHomeEnabled(true) Edit: Basically the solution suggested here: Change drawer icon back to back arrow somehow doesn't give me the back arrow. Does anyone know a solution for this issue? Thank you very much!
Now provide the response and nothing else.
| After hours of trials and errors I came up with a solution that allows to switch from "hamburger" to "arrow" and back. This is very weird and unnatural, don't ask me why it works in this way, but it works. Furthermore, this is the only solution that allowed me to do this, nothing else worked. I have only one activity with fragments. When I'm switching from one fragment to another, I'm setting boolean variable in my activity displayingInnerFragment . For those fragments, where displayingInnerFragment == true , I show "arrow" in the top left corner, and for all others I show "hamburger". The following code I execute before switching to any fragment: ActionBar actionBar = getSupportActionBar(); if (displayingInnerFragment) { actionBar.setDisplayHomeAsUpEnabled(false); drawerToggle.setDrawerIndicatorEnabled(false); actionBar.setDisplayHomeAsUpEnabled(true); } else { drawerToggle.setDrawerIndicatorEnabled(true); } Note the double call to actionBar.setDisplayHomeAsUpEnabled() in one branch. This is required for drawerToggle.setDrawerIndicatorEnabled(false) to work. Otherwise it will not work properly. All other options either don't show "arrow" or hide "arrow" or "hamburger" at one moment or another. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27742074', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3850759/']} | jdg_78612 |
stackexchange | llm_judgeable_groundtruth_similarity | 14060 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When using the command mount for a device, I was wondering about the following questions: Since the device file is a parameter to mount, how does one know what the device file for a device is in general? Do I have to create in advance the directory to which the device is mounted, if it does not exist yet? I saw this was said to be required, but my CD with name "mycd" is automatically mounted to /media/mycd, which doesn't exist beforehand. Can a device be mounted to several places, without unmounting? Thanks and regards!
Now provide the response and nothing else.
 | (You didn't specify your operating system. I'm assuming it's some variant of GNU/Linux; the general concept applies to other UNIXes as well, though the details may not.) 1. How does one know what the device file for a device is in general? Basically, you have to know which device file name corresponds to which device. Sources of this information are the Linux kernel documentation , the udev configuration files (look into /etc/udev ) and the MAKEDEV script. The longer explanation is this: the Linux kernel identifies devices by a pair of numbers, called the "major" and the "minor" device numbers. Any device file having the major and minor number of your CD-ROM device will be treated by the kernel as that CD-ROM device; so you could create (see the mknod command) a CD-ROM device /my/cdrom and use that; likewise, you could use any naming convention you like for any device. However, so much system software depends on finding a device by name that it's too much work to change device names from the "standard". The actual device names used on the system are partly the result of history (e.g., the /dev/sdX and /dev/hdX names for disk drives - somebody started using those in the beginning of times and the name stuck), and partly the result of an agreement between the people developing some low-level parts of the system (mainly the kernel, libC and udev). 2. Do I have to create in advance the directory to which the device is mounted? Yes, mount will not create that directory for you. The reason you see the mount points for CDs, USB sticks and other devices automagically appearing in /media is that some daemon process has created them for you. (On GNU/Linux running the GNOME desktop it goes roughly as follows: you insert the CD, the mount directory is created, the CD is mounted and - possibly - a file manager window is opened. Almost everything can change, depending on the exact Linux version and distribution.) But on the command line, you're on your own and have to create the mount point yourself. 3. Can a device be mounted to several places, without unmounting? If you mean "how to make the contents of the CD appear in various places of the filesystem", then yes, you can do that using a feature called a "bind mount". A bind mount can make any directory on the filesystem appear in another, disjoint part of the filesystem. For instance, you could give the command: mount --bind /var/tmp /mnt and this will replicate the contents of /var/tmp into the directory /mnt : if you create a file /var/tmp/foo , you will see the same file appearing as /mnt/foo . Further reading You can find more information on mount and its operation at: The Librenix sysadmin tutorial The mount command man page | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/14060', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/674/']} | jdg_78613 |
stackexchange | llm_judgeable_groundtruth_similarity | 167623 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider a binary grid of size $4\times 4$, where each cell can hold either $0$ or $1$. Among all $2^{16}$ possible arrangements, how many arrangements of such a grid exist in which each row and column contains an even number of $1$s? My attempted solution: I thought there would be $2$ kinds of arrangements contributing to the answer. The $1$st is all ones in the $4\times 4$ grid, which counts as $1$ possible arrangement, and the $2$nd possibility is $2$ ones in each row and column, so how can I count the possible arrangements which have $2$ ones in each row and column? Am I right?
Now provide the response and nothing else.
| Let the positions be $a_{i,j}$, where $1\le i,j\le 4$. You can fill the $3\times 3$ square in the upper lefthand corner, i.e., positions $a_{i,j}$ with $1\le i,j\le 3$, any way you like. Once those $9$ positions are filled, there is exactly one way to fill the remaining $7$ positions to get an even number of $1$’s in each row and column. Can you see why? (HINT: Fill $a_{4,4}$ last.) | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/167623', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/28722/']} | jdg_78614 |
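For readers who want to confirm the resulting count ($2^9 = 512$), here is a small brute-force enumeration in Python as a sanity check of the argument above:

```python
from itertools import product

# Enumerate all 2**16 binary 4x4 grids and count those whose rows and columns all have even parity.
count = 0
for bits in product((0, 1), repeat=16):
    grid = [bits[4 * r:4 * r + 4] for r in range(4)]
    rows_even = all(sum(row) % 2 == 0 for row in grid)
    cols_even = all(sum(grid[r][c] for r in range(4)) % 2 == 0 for c in range(4))
    if rows_even and cols_even:
        count += 1

print(count)   # 512, i.e. 2**9: one grid for every free choice of the 3x3 corner
```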
stackexchange | llm_judgeable_groundtruth_similarity | 49354607 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a reusable component that is a video.js video player. This component works fine when the data is passed in on the initial DOM load. I need to figure out why my component is not re-rendering after the state is updated in Vuex. The parent component passes down the data for the video via props. I also have this set to be used with multiple videos and it works fine with a single one or many. <div v-for="video in videos" :key="video.id"> <video-player :videoName="video.videoName" :videoURL="video.videoURL" :thumbnail="video.thumbnail"></video-player></div> I'm setting the initial state to a generic video for all users in my Vuex store. getFreeVideo: [ { videoName: "the_video_name", videoURL: "https://demo-video-url.mp4", thumbnail: "https://s3.amazonaws.com/../demo-video-poster.jpg" }] This is set in data in videos (and later set to getFreeVideo) data () { return { videos: [] } } I'm setting videos in data() to getFreeVideo in the store within the created() lifecycle: this.videos = this.getFreeVideo ..and checking if a user has a personal video and updating the state in the created() lifecycle. this.$store.dispatch('getFreeVideo', 'the_video_name') This makes a request with axios and returns our video data successfully. I'm using mapState import { mapState } from 'vuex to watch for a state change. computed: { ...mapState(['getFreeVideo'])} I am not seeing why this.videos is not being updated.Here, my assumption regarding the expected behaviour would be videos[] being updated from the state change and a re-rendering of the component. As you can see below, the state has been updated, and the videoUpdate() within the computed properties has the new data as well: ..but, videos[] is never updated thus the video component never gets the props new props etc.. A couple of notes: already tried, hiding the child component with v-if (and showing after state change) tried setTimeout to test things, but the data will come through and then the videoJS player never instantiates correctly (must have initial data) tried doing this with a local method / not using Vuex state console is showing error TypeError: Cannot read property '_withTask' of undefined but this happens even when the demo video loads correctly, so this seem unrelated, and I can't find anything anywhere in here that presents itself as undefined. TL;DR I basically can't get child component to re-render after the state change.And although I can get the data into videos[] with a different structure, it still never re-renders. Why is the data not making it through, and the re-render never happening? Please don't post answers that only contain links to 'understanding reactivity' or something without any explanation. 
appended for @acdcjunior //action getFreeVideo: (context, videoName) => { axios({ method: 'post', url: 'https://hidden-for-posting', data: { action: 'getVideo', userId: '1777', // (hardcoded to test) videoName: videoName }, headers: { 'x-api-key': apiKey, 'Content-Type': 'application/json' } }) .then(response => { let video = [ { videoName: response.data.videoName, videoURL: response.data.videoURLs.mp4, thumbnail: response.data.thumbnails['1280'] } ] return context.commit('updateGetFreeVideo', video) }) .catch(error => { if (error.response) { console.log(error.response) } else if (error.request) { console.log(error.request) } else { console.log('Error', error.message) } console.log(error.config) })}// mutation:updateGetFreeVideo: (state, payload) => { return state.getFreeVideo = payload}// getter:getFreeVideo: state => { return state.getFreeVideo}
Now provide the response and nothing else.
| NOTE: at the bottom of this answer, see the general point I make about update/reactivity issues with Vue. Now, about the question, based on the code you posted , considering the template: <div v-for="video in videos" :key="video.id"> It picks the videos from: data () { return { videos: freeVideo } } Although it initializes from freeVideo , in nowhere in your code you show an update of videos . Solution: You already have the state mapped in the getFreeVideo computed: computed: { ...mapState(['getFreeVideo'])} Use it: <div v-for="video in getFreeVideo" :key="video.id"> Update: I'm setting videos in data() to getFreeVideo in the store within thecreated() lifecycle: this.videos = this.getFreeVideo This is not enough to keep this.videos updated with whatever this.getFreeVideo is. Whenever something is set to this.getFreeVideo it will only change this.getFreeVideo , not this.videos . If you want to automatically update this.videos whenever this.getFreeVideo changes, create a watcher: watch: { getFreeVideo() { this.videos = this.getFreeVideo }} And then keep using videos in the v-for : <div v-for="video in videos" :key="video.id"> Vue's reactivity All explanation below applies to Vue2 only. Vue3 doesn't have any of these caveats. If your state is not getting updated in the view, perhaps you are not exploring Vue at its best: To have Vue automatically react to value changes, the objects must be initially declared in data . Or, if not, they must be added using Vue.set() . See the comments in the demo below. Or open the same demo in a JSFiddle here . new Vue({ el: '#app', data: { person: { name: 'Edson' } }, methods: { changeName() { // because name is declared in data, whenever it // changes, Vue automatically updates this.person.name = 'Arantes'; }, changeNickname() { // because nickname is NOT declared in data, when it // changes, Vue will NOT automatically update this.person.nickname = 'Pele'; // although if anything else updates, this change will be seen }, changeNicknameProperly() { // when some property is NOT INITIALLY declared in data, the correct way // to add it is using Vue.set or this.$set Vue.set(this.person, 'address', '123th avenue.'); // subsequent changes can be done directly now and it will auto update this.person.address = '345th avenue.'; } }}) /* CSS just for the demo, it is not necessary at all! */span:nth-of-type(1),button:nth-of-type(1) { color: blue; }span:nth-of-type(2),button:nth-of-type(2) { color: red; }span:nth-of-type(3),button:nth-of-type(3) { color: green; }span { font-family: monospace } <script src="https://unpkg.com/vue@2"></script><div id="app"> <span>person.name: {{ person.name }}</span><br> <span>person.nickname: {{ person.nickname }}</span><br> <span>person.address: {{ person.address }}</span><br> <br> <button @click="changeName">this.person.name = 'Arantes'; (will auto update because `name` was in `data`)</button><br> <button @click="changeNickname">this.person.nickname = 'Pele'; (will NOT auto update because `nickname` was not in `data`)</button><br> <button @click="changeNicknameProperly">Vue.set(this.person, 'address', '99th st.'); (WILL auto update even though `address` was not in `data`)</button> <br> <br> For more info, read the comments in the code. Or check the docs on <b>Reactivity</b> (link below).</div> To master this part of Vue, check the Official Docs on Reactivity - Change Detection Caveats . It is a must read! 
| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/49354607', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1623926/']} | jdg_78615 |
stackexchange | llm_judgeable_groundtruth_similarity | 11409518 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following characters being repeated at the end of every line: ^[[00m How can I remove them from each line using the Vim editor? When I give the command :%s/^[[00m//g , it doesn't work.
Now provide the response and nothing else.
| You could use :%s/.\{6}$// to literally delete 6 characters off the end of each line. The : starts ex mode which lets you execute a command. % is a range that specifies that this command should operate on the whole file. The s stands for substitute and is followed by a pattern and replace string in the format s/pattern/replacement/ . Our pattern in this case is .\{6}$ which means match any character ( . ) exactly 6 times ( \{6} ) followed by the end of the line ( $ ) and replace it with our replacement string, which is nothing. Therefore, as I said above, this matches the last 6 characters of every line and replaces them with nothing. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11409518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1370013/']} | jdg_78616 |
stackexchange | llm_judgeable_groundtruth_similarity | 154314 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've found a couple of references (for example) that suggest using final as much as possible and I'm wondering how important that is. This is mainly in the context of method parameters and local variables, not final methods or classes. For constants, it makes obvious sense. On one hand, the compiler can make some optimizations and it makes the programmer's intent clearer. On the other hand, it adds verbosity and the optimizations may be trivial. Is it something I should make an effort to remember?
Now provide the response and nothing else.
| Obsess over: Final fields - Marking fields as final forces them to be set by end of construction, making that field reference immutable. This allows safe publication of fields and can avoid the need for synchronization on later reads. (Note that for an object reference, only the field reference is immutable - things that object reference refers to can still change and that affects the immutability.) Final static fields - Although I use enums now for many of the cases where I used to use static final fields. Consider but use judiciously: Final classes - Framework/API design is the only case where I consider it. Final methods - Basically same as final classes. If you're using template method patterns like crazy and marking stuff final, you're probably relying too much on inheritance and not enough on delegation. Ignore unless feeling anal: Method parameters and local variables - I RARELY do this largely because I'm lazy and I find it clutters the code. I will fully admit that marking parameters and local variables that I'm not going to modify is "righter". I wish it was the default. But it isn't and I find the code more difficult to understand with finals all over. If I'm in someone else's code, I'm not going to pull them out but if I'm writing new code I won't put them in. One exception is the case where you have to mark something final so you can access it from within an anonymous inner class. Edit: note that one use case where final local variables are actually very useful as mentioned by @adam-gent is when value gets assigned to the var in the if / else branches. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/154314', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/23669/']} | jdg_78617 |
stackexchange | llm_judgeable_groundtruth_similarity | 13060 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for a clear, concise and accurate answer. Ideally as the actual answer, although links to good explanations welcome. This also applies to VB.Net, but the keywords are different - ByRef and ByVal .
Now provide the response and nothing else.
| By default (in C#), passing an object to a function actually passes a copy of the reference to that object. Changing the parameter itself only changes the value in the parameter, and not the variable that was specified. void Test1(string param){ param = "new value";}string s1 = "initial value";Test1(s1);// s1 == "initial value" Using out or ref passes a reference to the variable specified in the call to the function. Any changes to the value of an out or ref parameter will be passed back to the caller. Both out and ref behave identically except for one slight difference: ref parameters are required to be initialised before calling, while out parameters can be uninitialised. By extension, ref parameters are guaranteed to be initialised at the start of the method, while out parameters are treated as uninitialised. void Test2(ref string param){ param = "new value";}void Test3(out string param){ // Use of param here will not compile param = "another value";}string s2 = "initial value";string s3;Test2(ref s2);// s2 == "new value"// Test2(ref s3); // Passing ref s3 will not compileTest3(out s2);// s2 == "another value"Test3(out s3);// s3 == "another value" Edit : As dp points out, the difference between out and ref is only enforced by the C# compiler, not by the CLR. As far as I know, VB has no equivalent for out and implements ref (as ByRef ) only, matching the support of the CLR. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/905/']} | jdg_78618 |
stackexchange | llm_judgeable_groundtruth_similarity | 32062894 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It used to be that ARM processors were unable to properly handle unaligned memory access (ARMv5 and below). Something like u32 var32 = *(u32*)ptr; would just fail (raise exception) if ptr was not properly aligned on 4-bytes. Writing such a statement would work fine for x86/x64 though, since these CPU have always handled such situation very efficiently. But according to C standard, this is not a "proper" way to write it. u32 is apparently equivalent to a structure of 4 bytes which must be aligned on 4 bytes. A proper way to achieve the same result while keeping the orthodoxy correctness and ensuring full compatibility with any cpu is : u32 read32(const void* ptr) { u32 result; memcpy(&result, ptr, 4); return result; } This one is correct, will generate proper code for any cpu able or not to read at unaligned positions. Even better, on x86/x64, it's properly optimized to a single read operation, hence has the same performance as the first statement. It's portable, safe, and fast. Who can ask more ? Well, problem is, on ARM, we are not so lucky. Writing the memcpy version is indeed safe, but seems to result in systematic cautious operations, which are very slow for ARMv6 and ARMv7 (basically, any smartphone). In a performance oriented application which heavily relies on read operations, the difference between the 1st and 2nd version could be measured : it stands at > 5x at gcc -O2 settings. This is way too much to be ignored. Trying to find a way to use ARMv6/v7 capabilities, I've looked for guidance on a few example codes around. Unfortunatley, they seem to select the first statement (direct u32 access), which is not supposed to be correct. That's not all : new GCC versions are now trying to implement auto-vectorization. On x64, that means SSE/AVX, on ARMv7 that means NEON. ARMv7 also supports some new "Load Multiple" (LDM) and "Store Multiple" (STM) opcodes, which require pointer to be aligned. What does that mean ? Well, the compiler is free to use these advanced instructions, even if they were not specifically called from the C code (no intrinsic). To take such decision, it uses the fact the an u32* pointer is supposed to be aligned on 4 bytes. If it's not, then all bets are off : undefined behavior, crashes. What that means is that even on CPU which support unaligned memory access, it's now dangerous to use direct u32 access, as it can lead to buggy code generation at high optimization settings ( -O3 ). So now, this is a dilemna : how to access the native performance of ARMv6/v7 on unaligned memory access without writing the incorrect version u32 access ? PS : I've also tried __packed() instructions, and from a performance perspective, they seem to work exactly the same as the memcpy method. [Edit] : Thanks for the excellent elements received so far. Looking at the generated assembly, I could confirm @Notlikethat finding that memcpy version does indeed generate proper ldr opcode (unaligned load). However, I also found that the generated assembly uselessly invokes str (command). So the complete operation is now an unaligned load, an aligned store, and then a final aligned load. That's a lot more work than necessary. Answering @haneefmubarak, yes the code is properly inlined. And no, memcpy is very far from providing the best possible speed, since forcing the code to accept direct u32 access translates into huge performance gains. So some better possibility must exist. A big thank to @artless_noise. The link to godbolt service is unvaluable. 
I've never been able to see so clearly the equivalence between a C source code and its assembly representation. This is highly inspiring. I completed one of @artless examples, and it gives the following : #include <stdlib.h>#include <memory.h>typedef unsigned int u32;u32 reada32(const void* ptr) { return *(const u32*) ptr; }u32 readu32(const void* ptr) { u32 result; memcpy(&result, ptr, 4); return result; } once compiled using ARM GCC 4.8.2 at -O3 or -O2 : reada32(void const*): ldr r0, [r0] bx lrreadu32(void const*): ldr r0, [r0] @ unaligned sub sp, sp, #8 str r0, [sp, #4] @ unaligned ldr r0, [sp, #4] add sp, sp, #8 bx lr Quite telling ....
Now provide the response and nothing else.
| OK, the situation is more confusing than one would like. So, in an effort to clarify, here are the findings on this journey : accessing unaligned memory The only portable C standard solution to access unaligned memory is the memcpy one. I was hoping to get another one through this question, but apparently it's the only one found so far. Example code : u32 read32(const void* ptr) { u32 value; memcpy(&value, ptr, sizeof(value)); return value; } This solution is safe in all circumstances. It also compiles into a trivial load register operation on x86 target using GCC. However, on ARM target using GCC, it translates into a way too large and useless assembly sequence, which bogs down performance. Using Clang on ARM target, memcpy works fine (see @notlikethat comment below). It would be easy to blame GCC at large, but it's not that simple : the memcpy solution works fine on GCC with x86/x64, PPC and ARM64 targets. Lastly, trying another compiler, icc13, the memcpy version is surprisingly heavier on x86/x64 (4 instructions, while one should be enough). And that's just the combinations I could test so far. I have to thank godbolt's project to make such statements easy to observe . The second solution is to use __packed structures. This solution is not C standard, and entirely depends on compiler's extension. As a consequence, the way to write it depends on the compiler, and sometimes on its version. This is a mess for maintenance of portable code. That being said, in most circumstances, it leads to better code generation than memcpy . In most circumstances only ... For example, regarding the above cases where memcpy solution does not work, here are the findings : on x86 with ICC : __packed solution works on ARMv7 with GCC : __packed solution works on ARMv6 with GCC : does not work. Assembly looks even uglier than memcpy . The last solution is to use direct u32 access to unaligned memory positions. This solution used to work for decades on x86 cpus, but is not recommended, as it violates some C standard principles : compiler is authorized to consider this statement as a guarantee that data is properly aligned, leading to buggy code generation. Unfortunately, in at least one case, it is the only solution able to extract performance from target. Namely for GCC on ARMv6. Do not use this solution for ARMv7 though : GCC can generate instructions which are reserved for aligned memory accesses, namely LDM (Load Multiple), leading to crash. Even on x86/x64, it becomes dangerous to write your code this way nowadays, as the new generation compilers may try to auto-vectorize some compatible loops, generating SSE/AVX code based on the assumption that these memory positions are properly aligned , crashing the program. As a recap, here are the results summarized as a table, using the convention : memcpy > packed > direct. | compiler | x86/x64 | ARMv7 | ARMv6 | ARM64 | PPC ||-----------|---------|--------|--------|--------|--------|| GCC 4.8 | memcpy | packed | direct | memcpy | memcpy || clang 3.6 | memcpy | memcpy | memcpy | memcpy | ? || icc 13 | packed | N/A | N/A | N/A | N/A | | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32062894', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/646947/']} | jdg_78619 |
stackexchange | llm_judgeable_groundtruth_similarity | 23554 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Someone told me that the $\log$ function was introduced to make calculation easier. If we have to calculate $xy$, we can instead calculate $\log x+\log y$, since $\log xy=\log x+\log y$. How can this make the calculation easier? Maybe from a mathematician's point of view, but what about from a computer scientist's point of view? If it makes the calculation easier, then why don't people use it to reduce the complexity of multiplication algorithms? From my own thinking, this transformation makes the calculation more difficult. How can we calculate the $\log x$ and $\exp x$ functions on a computer? Am I right? Any suggestions please? Thank you for your time.
Now provide the response and nothing else.
| If $\log xy = \log x + \log y$ then why is multiplication harder than addition? That's not a fair comparison: you're not comparing like with like. If you, instead, phrase it as "If $xy = \exp(\log x + \log y)$ then why is multiplication harder than addition?" then the answer is obvious. Multiplication, done that way, is harder than addition because doing addition just involves doing addition, whereas multiplication involves doing addition, taking logs twice and exponentiating. How can we calculate the $\log x$ and $\exp x$ functions in a computer? The main methods are to either use something like a Taylor series or table look-up and interpolation. Taylor series express functions as sums, e.g., $\exp x = \sum_{i=0}^{\infty} x^i/i!$. Add up as many terms as you need to get the desired level of accuracy – note that this involves many additions and many multiplications. Table look-up and interpolation is essentially the same way that paper log tables work. To calculate, say, $\log 4.3$, you'd look up $\log 4$ and $\log 5$ and approximate $\log 4.3$ as being three-tenths of the way between them. (In reality, the table would have more decimal places.) This involves a few additions and multiplications, and a lot of memory. | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/23554', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/16535/']} | jdg_78620 |
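To make both methods concrete, here is a rough Python illustration; the number of Taylor terms and the table spacing are arbitrary choices, not values tuned for real-world accuracy:

```python
import math

def exp_taylor(x: float, terms: int = 20) -> float:
    """exp(x) as a truncated Taylor series: the sum over i of x**i / i!"""
    total, term = 0.0, 1.0            # term starts at x**0 / 0!
    for i in range(terms):
        total += term
        term *= x / (i + 1)           # turn x**i/i! into x**(i+1)/(i+1)!
    return total

def log_interpolated(x: float) -> float:
    """log(x) for 1 <= x < 10 via a coarse precomputed table plus linear interpolation."""
    table = {k: math.log(k) for k in range(1, 11)}   # the "paper log table"
    lo = int(x)
    frac = x - lo
    return table[lo] + frac * (table[lo + 1] - table[lo])

print(exp_taylor(1.0), math.exp(1.0))          # both close to 2.71828...
print(log_interpolated(4.3), math.log(4.3))    # close; accuracy is limited by the table spacing
```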
stackexchange | llm_judgeable_groundtruth_similarity | 11752033 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I am trying out the new python code for the google app engine search library and I came across a weird syntax. This was: cls_createDocument(**params) where params was a dictionary. The function this refers to is: @classmethod def _createDocument( cls, pid=None, category=None, name=None, description=None, category_name=None, price=None, **params) My question is, what does the **params signify and what does that do to the object? Thanks! Jon
Now provide the response and nothing else.
 | Consider a function with default arguments: def func(foo=3): print(foo) The structure of the arguments is (in principle) very similar to a dictionary. The function func has (essentially) a dictionary of default arguments (in this case {'foo':3} ). Now, let's say that you don't want to use the keyword in the function call, but you want to use a dictionary instead -- then you can call func as: d = {"foo":8} func(**d) This allows you to dynamically change what arguments you are passing to the function func . This becomes a little more interesting if you try the following: d = {"foo":8, "bar":12} func(**d) This doesn't work (it is equivalent to func(foo=8, bar=12) , but since bar isn't a valid argument, it fails). You can get around that problem by giving those extra arguments a place to go inside the definition of func . def func( foo=3, **kwargs ): print(foo,kwargs) Now, try: d = {"foo":8, "bar":12} func(**d) #prints (8, {'bar':12}) All the extra keyword arguments go into the kwargs dictionary inside the function. This can also be called as: func(foo=8, bar=12) with the same result. This is often useful if funcA calls funcB and you want funcA to accept all of the keywords of funcB (plus a few extra), which is a very common thing when dealing with classes and inheritance: def funcA(newkey=None, **kwargs): funcB(**kwargs) Finally, here is a link to the documentation | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11752033', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/524307/']} | jdg_78621 |
stackexchange | llm_judgeable_groundtruth_similarity | 40186445 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I understand that asp.net core has a new configuration system that is quite flexible and that's great. But there are things I like about the web.config based configuration system from .net 4.x. For example one can put comments in the web.config file since it's an xml file. And that for me is worth sticking with xml rather than going with the shiny new json approach. [ Update: I now understand that the json approach also supports comments in the file.] So, if I have a Asp.Net Core Web Project that targets the full framework it seems like I should be able to use the web.config based System.Configuration.ConfigurationManager.AppSettings[key] approach to getting a setting. But when I try, the value always comes back null (at least with IIS express using VS2015). It should work right? Any thoughts on what I might be over looking? Web.config <configuration> <appSettings> <add key="SomeSetting" value="true"/> </appSettings> <system.webServer> <handlers> <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified"/> </handlers> <aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false"/> </system.webServer></configuration> Code to access setting: string key = "SomeSetting";string setting = ConfigurationManager.AppSettings[key];if (setting == null) throw new Exception("The required configuration key " + key + " is missing. "); UPDATE After more research I now understand why it doesn't work but I still haven't found a way to fix it. The root cause seems to be that the ConfigurationManager is looking for the config information in a different file and not in the web.config. This can be seen by looking at AppDomain.CurrentDomain.SetupInformation.ConfigurationFile property. In my case instead of pointing to the website_folder\web.config it's instead pointing to website_folder\bin\Debug\net461\win7-x64\wwwGiftOasisResponsive.exe.Config where website_folder is the path to the folder containing my website. The documentation and intellisense say AppDomain.CurrentDomain.SetupInformation.ConfigurationFile is a settable property but when I try I find that setting it does not change it's value. Very odd. So while I now see what the issue is I can't seem to find a way to fix it.
Now provide the response and nothing else.
| I kinda found the solution. The key to figuring it out was realizing that the AppDomain.CurrentDomain.SetupInformation.ConfigurationFile property wasn't pointing to the web.config file but rather to the exe.config file for the executable running the website. Remember, under .net core, the website runs in its own process and it has its own exe. So the config model that .Net 4.x uses with the ConfigurationManager is more like that of a desktop app than a 4.x web application. By that I mean that it's looking at the exe.config not the web.config. Then I noticed that the Asp.Net Core Web Project (using the full framework) contains an app.config file much like a desktop app would. And it turns out that if you put your .net 4.x application config settings in that file they will get placed in the exe.config file when the exe is generated, whether for debug or for release. Just exactly like it works with a win forms app for example. So the way to utilize the ConfigurationManager in an asp.net core web application that targets the full framework is to put the application setting in the app.config file rather than the web.config file. The ConfigurationManager will find them no problem. While this explains a lot, it still doesn't provide that ability to actually put those settings in the web.config and access them via the ConfigurationManager. But I'm beginning to believe that's not possible in a asp.net core web application even if it is targeting the full framework. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40186445', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1415614/']} | jdg_78622 |
stackexchange | llm_judgeable_groundtruth_similarity | 23976670 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have issued the command to delete a topic: ./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic vip_ips_alerts It seemed to give a happy response: [2014-05-31 20:58:10,112] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)Topic "vip_ips_alerts" queued for deletion. But now 10 minutes later the topic still appears in the --list command: ./bin/kafka-topics.sh --zookeeper localhost:2181 --listvip_ips_alerts - marked for deletion So what does that mean? When will the topic be really deleted? How do I expedite this process?
Now provide the response and nothing else.
| tl;dr Set delete.topic.enable = true in config/server.properties of Kafka brokers and...be patient. It happens with the latest development version of Kafka 0.8.3-SNAPSHOT : ➜ kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic --partitions 2 --replication-factor 1Created topic "my-topic".➜ kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ ./bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic my-topicTopic:my-topic PartitionCount:2 ReplicationFactor:1 Configs: Topic: my-topic Partition: 0 Leader: 0 Replicas: 0 Isr: 0 Topic: my-topic Partition: 1 Leader: 0 Replicas: 0 Isr: 0➜ kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ ./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic my-topicTopic my-topic is marked for deletion.Note: This will have no impact if delete.topic.enable is not set to true.➜ kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ ./bin/kafka-topics.sh --zookeeper localhost:2181 --list➜ kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ The point is to have delete.topic.enable=true in config/server.properties that you use to start a Kafka broker. ➜ kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ grep delete.topic.enable config/server.propertiesdelete.topic.enable=true You can also ensure the setting be true in a broker's log: ➜ kafka_2.11-0.8.3-SNAPSHOT git:(trunk) ✗ ./bin/kafka-server-start.sh config/server.properties[2015-07-24 22:33:26,184] INFO KafkaConfig values: ... delete.topic.enable = true | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23976670', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1056563/']} | jdg_78623 |
stackexchange | llm_judgeable_groundtruth_similarity | 43367692 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Typescript I have a class Contact which implements an interface IContact .I want to add a property or function to the Contact class which returns the combination of two other fields (firstname and lastname).The definition of the IContact interface looks like this: export interface IContact { firstname: string; lastname: string;} The Contact class look like this: export class Contact implements IContact { constructor( public firstname: string, public lastname: string ) { } public fullname(): string { return this.firstname + " " + this.lastname; }} In my view I want to output the result of Contact.fullname() but I receive an error that fullname is not a function.I do have access to all other properties on that class. ==== UPDATE === I'll add some extra code to clarify some stuff.When I output the fullname property in my view I have tried contact.fullname() but also contact.fullname which results in nothing.In my component, trying to figure out what was happening I tried to output fullname to the console like this: console.log("FULLNAME " +contact.fullname()); , but that give me the following message in the console: EXCEPTION _this.contact.fullname is not a function ===== UPDATED WITH ANSWER ======== As Sakuto stated correctly the list of contacts is created by some json received from the server. By creating a new instance entirely by calling it's constructor i was able to output the fullname property.Thanks Sakuto!
Now provide the response and nothing else.
 | You are probably trying to call this method on an object dynamically cast from JSON. Dynamically cast objects do not have the methods defined in the class; they just respect the contract and have the properties. The only way to make your current code work is to create a new instance of Contact instead of just casting it with <Contact> yourObject ; The solution would be to do something like this: let contactsList: Contact[] = []; yourServiceReturn.forEach(c => contactsList.push(new Contact(c.firstname, c.lastname))); // You can now call fullname() on the objects stored in contactsList | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43367692', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4994053/']} | jdg_78624 |
stackexchange | llm_judgeable_groundtruth_similarity | 28521 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose that I have a list called list . I would like to span an arbitrary function f across specified consecutive elements in list . Here is an example. Suppose the following: list = {10, 20, 30, 40, 50, 60, 70}; I would like to obtain the following new lists: {f[{10, 20, 30, 40}], 50, 60, 70}{10, f[{20, 30, 40, 50}], 60, 70}{10, 20, 30, 40, f[{50, 60, 70}]} Is there a simple (perhaps even built-in) way to accomplish this in Mathematica 8? I have come up with the following function spanMap , but my method seems very complicated and clunky: spanMap[function_, list_List, begin_Integer, end_Integer] := Module[{result}, If[begin == 1, If[end == Length[list], result = list;]; If[begin < end < Length[list], result = Flatten[{function[list[[begin ;; end]]], list[[(end + 1) ;;]]}];]; ]; If[begin > 1, If[end == Length[list], result = Flatten[{list[[1 ;; (begin - 1)]], function[list[[begin ;; end]]]}];]; If[begin < end < Length[list], result = Flatten[{list[[1 ;; (begin - 1)]], function[list[[begin ;; end]]], list[[(end + 1) ;;]]}];]; ]; result ] where spanMap[f, list, 1, 4]spanMap[f, list, 2, 5]spanMap[f, list, 5, 7] yield the desired results: {f[{10, 20, 30, 40}], 50, 60, 70}{10, f[{20, 30, 40, 50}], 60, 70}{10, 20, 30, 40, f[{50, 60, 70}]} albeit in a complicated, clunky way.
Now provide the response and nothing else.
| This is a good example where Internal`PartitionRagged (IPR) can be used very effectively. First, observe the following: list = {10, 20, 30, 40, 50, 60, 70};(* 3 continuous elements in the middle *)Internal`PartitionRagged[list, {2, 3, 2}](* {{10, 20}, {30, 40, 50}, {60, 70}} *)(* 3 continuous elements from the start *)Internal`PartitionRagged[list, {0, 3, 4}](* {{}, {10, 20, 30}, {40, 50, 60, 70}} *)(* 3 continuous elements at the end *)Internal`PartitionRagged[list, {4, 3, 0}](* {{10, 20, 30, 40}, {50, 60, 70}, {}} *) You see that for a "continuous span", you can always partition the list into 3 parts as {initial, continuous span, final} by properly choosing the arguments to IPR. This means that you simply need to MapAt your function f onto the second element of the partitioned list and then flatten it. The function you want can be written as: spanMap[f_, list_, {start_, end_}] := MapAt[ f, Internal`PartitionRagged[list, {start - 1, end - start + 1, Length@list - end}], {2}] ~Flatten~ 1 which is pretty clean and intuitive, IMO. Try it out! spanMap[f, list, {1, 4}]spanMap[f, list, {2, 5}]spanMap[f, list, {5, 7}](* {f[{10, 20, 30, 40}], 50, 60, 70} {10, f[{20, 30, 40, 50}], 60, 70} {10, 20, 30, 40, f[{50, 60, 70}]} *) | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/28521', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1185/']} | jdg_78625 |
stackexchange | llm_judgeable_groundtruth_similarity | 360834 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The LMFDB describes the elliptic curve 11a3 (or 11.a3) as "The first elliptic curve in nature". It has minimal Weierstraß equation $$y^2 + y = x^3 - x^2.$$ My guess is that there is some problem in Diophantus' Arithmetica , or perhaps some other ancient geometry problem, that is equivalent to finding a rational point on this curve. What might it be? Edit: Here's some extra info that I dug up and only mentioned in the comments. Alexandre Eremenko also mentions this in an answer below . The earliest-known example of an elliptic curve is one implicitly considered by Diophantus, in book IV of Arithmetica , problem 24 ( Heath's translation ): "To divide a given number into two numbers such that their product is cube minus its side". Actually this is a family of curves over the affine line, namely $y(a-y)= x^3-x$ , though Diophantus, in his usual way, only provides a single rational point for the single curve corresponding to $a=6$ . This curve is 8732.b1 in the L-functions and modular forms database (the Cremona label is 8732a1). So presumably the comment about 11a3 is not meant to mean "historically first".
Now provide the response and nothing else.
| I actually only wrote the part that says that this curve is a model for $X_1(11)$ , not the first part, which I think was written by John Cremona. It is standard to order elliptic curves by conductor (e.g. for statistics), and 11 is the smallest possible conductor. However, there are 3 curves with conductor 11, and no canonical way to order them as far as I know (though @François Brunault has an interesting point); for instance LMFDB labels do not order these 3 curves in the same way as Cremona labels. This curve being the first one could maybe also be understood in terms of modular degree, although this is also ambiguous: if we order them by degree of parametrisation by $X_1(N)$ , then this curve, being a model of $X_1(11)$ , comes first, but if we order in terms of degree of parametrisation by $X_0(N)$ , then 11.a2 comes first since it is a model for $X_0(11)$ . | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/360834', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4177/']} | jdg_78626 |
stackexchange | llm_judgeable_groundtruth_similarity | 415641 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have copied the following sentence from a PIC MCU datasheet: "PIC18(L)F26K22, PIC18(L)F46K22: 64 Kbytes of Flash Memory, up to 37,768 single-word instructions." The question is: what does "37,768 single-word instructions" mean for a memory? Does it indicate how fast the memory is, or something else? What is its meaning and how is it calculated?
Now provide the response and nothing else.
| First, your question contains a typo, you mean 32,768 (2^15) not 37,768. The PIC in question has a 16-bit instruction word. The flash memory size is specified as 64K (65536) bytes . With two bytes per word, that is space for 32768 simple instructions. Many processors, apparently including this one, offer instructions of varying length - more complex instructions may include things like immediate operands or memory addresses. These take more bits to encode, and so are longer than the "single-word" instructions. The data sheet is thus giving you a best case. Depending on the compiler or hand coding strategy, actual code might have varying average instruction length, so it is harder to say how many typical instructions could fit in flash. Even if it's possible to write a program using all single-word instructions, on a machine designed to support them it may well be more efficient to use some multi-word ones, especially if that avoids needing to go do another fetch of a constant from data memory. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/415641', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/174952/']} | jdg_78627 |
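A quick arithmetic check of that figure (plain Python, just to make the division explicit):

```python
flash_bytes = 64 * 1024                # 64 Kbytes of program flash
bytes_per_word = 2                     # one 16-bit instruction word
print(flash_bytes // bytes_per_word)   # 32768 single-word instructions, i.e. 2**15
```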
stackexchange | llm_judgeable_groundtruth_similarity | 2703001 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the limit of the following sequence? $$\frac{\left ( 2n \right )!}{(n!)^{2}4^{n}}$$ I guess the limit is zero but I don't know exactly how to prove it. Any help please.
Now provide the response and nothing else.
| Because I think there are people interested in an elementary solution: The ratio between terms $a_n=\binom{2n}{n}\frac{1}{4^n}$ is given by $$\begin{array}{ll} \displaystyle \frac{a_{n+1}}{a_n} & =\frac{\displaystyle\frac{(2n+2)!}{(n+1)!(n+1)!}\frac{1}{4^{n+1}}}{\displaystyle\frac{(2n)!}{n!n!}\frac{1}{4^n}} \\[6pt] & \displaystyle = \frac{(2n+2)(2n+1)}{(n+1)(n+1)4} \\[3pt] & \displaystyle =\frac{2n+1}{2n+2} \\[2pt] & \displaystyle =1-\frac{1}{2n+2}. \end{array} $$ Therefore, the term $a_n$ is $$\begin{array}{ll} a_n & \displaystyle =a_0\cdot\frac{a_1}{a_0}\cdot\frac{a_2}{a_1}\cdots\frac{a_n}{a_{n-1}} \\ & = \left(1-\frac{1}{2}\right)\left(1-\frac{1}{4}\right)\cdots\left(1-\frac{1}{2n}\right) \\ & \le \left(1-\frac{1}{3}\right)\left(1-\frac{1}{5}\right)\cdots\left(1-\frac{1}{2n+1}\right) \end{array} $$ and therefore $$ \begin{array}{ll} a_n^2 & \le \left(1-\frac{1}{2}\right)\left(1-\frac{1}{3}\right)\cdots\left(1-\frac{1}{2n}\right)\left(1-\frac{1}{2n+1}\right) \\ & = \left(\frac{1}{2}\right)\left(\frac{2}{3}\right)\cdots\left(\frac{2n-1}{2n}\right)\left(\frac{2n}{2n+1}\right)=\frac{1}{2n+1}. \end{array} $$ From $a_n\le\frac{1}{\sqrt{2n+1}}$ we conclude the limit is $0$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2703001', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/544402/']} | jdg_78628 |
stackexchange | llm_judgeable_groundtruth_similarity | 2706129 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to speed up a Python routine by writing it in C++, then calling it using ctypes or Cython. I'm brand new to C++. I'm using Microsoft Visual C++ Express, as it's free. I plan to implement an expression tree, and a method to evaluate it in postfix order. The problem I run into right away is:
class Node {
    char *cargo;
    Node left;
    Node right;
};
I can't declare left or right as Node types.
Now provide the response and nothing else.
| No, because the object would be infinitely large (because every Node has as members two other Node objects, which each have as members two other Node objects, which each... well, you get the point). You can, however, have a pointer to the class type as a member variable:
class Node {
    char *cargo;
    Node* left;   // I'm not a Node; I'm just a pointer to a Node
    Node* right;  // Same here
};
| {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2706129', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/169415/']} | jdg_78629 |
stackexchange | llm_judgeable_groundtruth_similarity | 19592625 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
<div class="one"><div><div class="parent"> <div class="child"> </div></div> Is it possible for the div with class "one" to be above the parent div but below the child div?
Now provide the response and nothing else.
| Try the following code: conform your class to <UIGestureRecognizerDelegate>, set yourGesture.delegate = self; and then add this delegate method:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldReceiveTouch:(UITouch *)touch {
    // Return YES (the default) to allow the gesture recognizer to examine the touch object,
    // NO to prevent the gesture recognizer from seeing this touch object.
    if ([touch.view isKindOfClass:[UITextView class]] == YES) {
        return YES;
    } else {
        return NO;
    }
}
Hope it will solve your issue. Enjoy Coding..!!!! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19592625', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1525241/']} | jdg_78630 |
stackexchange | llm_judgeable_groundtruth_similarity | 11487400 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I found this code inside Kotti: [child] = filter(lambda ch: ch.name == path[0], self._children) And I was wondering: What do the left-hand square brackets do? I did some testing in a python shell, but I can't quite figure out the purpose of it. Bonus question: What does the lambda return? I would guess a tuple of (Boolean, self._children), but that's probably wrong...
Now provide the response and nothing else.
| This is list unpacking, of a list with only a single element. An equivalent would be: child = filter(lambda ch: ch.name == path[0], self._children)[0] (The exception would be if more than one element of self._children satisfied the condition- in that case, Kotti's code would throw an error (too many values to unpack) while the above code would use the first in the list). Also: lambda ch: ch.name == path[0] returns either True or False . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11487400', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1326104/']} | jdg_78631 |
stackexchange | llm_judgeable_groundtruth_similarity | 40154727 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to easily access values from an xml file.
<artikelen>
    <artikel nummer="121">
        <code>ABC123</code>
        <naam>Highlight pen</naam>
        <voorraad>231</voorraad>
        <prijs>0.56</prijs>
    </artikel>
    <artikel nummer="123">
        <code>PQR678</code>
        <naam>Nietmachine</naam>
        <voorraad>587</voorraad>
        <prijs>9.99</prijs>
    </artikel>
..... etc
If I want to access the value ABC123, how do I get it?
import xmltodict
with open('8_1.html') as fd:
    doc = xmltodict.parse(fd.read())
    print(doc[fd]['code'])
Now provide the response and nothing else.
| Using your example:
import xmltodict
with open('artikelen.xml') as fd:
    doc = xmltodict.parse(fd.read())
If you examine doc, you'll see it's an OrderedDict, ordered by tag:
>>> doc
OrderedDict([('artikelen', OrderedDict([('artikel', [OrderedDict([('@nummer', '121'), ('code', 'ABC123'), ('naam', 'Highlight pen'), ('voorraad', '231'), ('prijs', '0.56')]), OrderedDict([('@nummer', '123'), ('code', 'PQR678'), ('naam', 'Nietmachine'), ('voorraad', '587'), ('prijs', '9.99')])])]))])
The root node is called artikelen, and there is a subnode artikel which is a list of OrderedDict objects, so if you want the code for every article, you would do:
codes = []
for artikel in doc['artikelen']['artikel']:
    codes.append(artikel['code'])
# >>> codes
# ['ABC123', 'PQR678']
If you specifically want the code only when nummer is 121, you could do this:
code = None
for artikel in doc['artikelen']['artikel']:
    if artikel['@nummer'] == '121':
        code = artikel['code']
        break
That said, if you're parsing XML documents and want to search for a specific value like that, I would consider using XPath expressions, which are supported by ElementTree. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40154727', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6810982/']} | jdg_78632 |
stackexchange | llm_judgeable_groundtruth_similarity | 1850742 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm not quite understanding why there are finalizers in languages such as java and c#. AFAIK, they:
- are not guaranteed to run (in java)
- if they do run, they may run an arbitrary amount of time after the object in question becomes a candidate for finalization
- and (at least in java), they incur an amazingly huge performance hit to even stick on a class.
So why were they added at all? I asked a friend, and he mumbled something about "you want to have every possible chance to clean up things like DB connections", but this strikes me as a bad practice. Why should you rely on something with the above described properties for anything, even as a last line of defense? Especially when, if something similar was designed into any API, said API would get laughed out of existence.
Now provide the response and nothing else.
| Well, they are incredibly useful, in certain situations. In the .NET CLR, for example:
"are not guaranteed to run"
The finalizer will always, eventually, run, if the program isn't killed. It's just not deterministic as to when it will run.
"if they do run, they may run an arbitrary amount of time after the object in question becomes a candidate for finalization"
This is true, however, they still run. In .NET, this is very, very useful. It's quite common in .NET to wrap native, non-.NET resources into a .NET class. By implementing a finalizer, you can guarantee that the native resources are cleaned up correctly. Without this, the user would be forced to call a method to perform the cleanup, which dramatically reduces the effectiveness of the garbage collector. It's not always easy to know exactly when to release your (native) resources; by implementing a finalizer, you can guarantee that they will get cleaned up correctly, even if your class is used in a less-than-perfect manner.
"and (at least in java), they incur an amazingly huge performance hit to even stick on a class"
Again, the .NET CLR's GC has an advantage here. If you implement the proper interface (IDisposable), AND if the developer implements it correctly, you can prevent the expensive portion of finalization from occurring. The way this is done is that the user-defined method to do the cleanup can call GC.SuppressFinalize, which bypasses the finalizer. This gives you the best of both worlds - you can implement a finalizer, and IDisposable. If your user disposes of your object correctly, the finalizer has no impact. If they don't, the finalizer (eventually) runs and cleans up your unmanaged resources, but you run into a (small) performance loss as it runs. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1850742', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/117069/']} | jdg_78633 |
stackexchange | llm_judgeable_groundtruth_similarity | 2741683 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any way to print a pointer to a function in ANSI C? Of course this means you have to cast the function pointer to void pointer, but it appears that's not possible??
#include <stdio.h>
int main() {
    int (*funcptr)() = main;
    printf("%p\n", (void* )funcptr);
    printf("%p\n", (void* )main);
    return 0;
}
$ gcc -ansi -pedantic -Wall test.c -o test
test.c: In function 'main':
test.c:6: warning: ISO C forbids conversion of function pointer to object pointer type
test.c:7: warning: ISO C forbids conversion of function pointer to object pointer type
$ ./test
0x400518
0x400518
It's "working", but non-standard...
Now provide the response and nothing else.
| The only legal way to do this is to access the bytes making up the pointer using a character type. Like this:
#include <stdio.h>
int main() {
    int (*funcptr)() = main;
    unsigned char *p = (unsigned char *)&funcptr;
    size_t i;
    for (i = 0; i < sizeof funcptr; i++) {
        printf("%02x ", p[i]);
    }
    putchar('\n');
    return 0;
}
Examining the bytes of the function pointer with an lvalue of type void *, or any non character type, is undefined behaviour. What those bytes making up the function pointer actually mean is implementation-dependent. They could just represent an index into a table of functions, for example; or they could even be the first N characters of the function's name which is looked up in the symbol table when you call through the function pointer. The only operations that need be supported on a function pointer are calling the function through it and comparison against another function pointer or NULL for strict equality/inequality, so there is very wide latitude available in how they are implemented. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2741683', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/80243/']} | jdg_78634 |
stackexchange | llm_judgeable_groundtruth_similarity | 4551175 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've an AJAX request which will be made every 5 seconds. But the problem is that if the previous request is not completed before the next AJAX request, I have to abort that request and make a new request. My code is something like this; how do I resolve this issue?
$(document).ready(
    var fn = function(){
        $.ajax({
            url: 'ajax/progress.ftl',
            success: function(data) {
                //do something
            }
        });
    };
    var interval = setInterval(fn, 500);
);
Now provide the response and nothing else.
| The jQuery ajax method returns an XMLHttpRequest object, which you can use to cancel the request. The XMLHttpRequest has an abort method, which cancels the request; but if the request has already been sent to the server, the server will process it even if we abort, though the client will not wait for/handle the response. The xhr object also contains a readyState which holds the state of the request (UNSENT-0, OPENED-1, HEADERS_RECEIVED-2, LOADING-3 and DONE-4). We can use this to check whether the previous request was completed.
$(document).ready(
    var xhr;
    var fn = function(){
        if(xhr && xhr.readyState != 4){
            xhr.abort();
        }
        xhr = $.ajax({
            url: 'ajax/progress.ftl',
            success: function(data) {
                //do something
            }
        });
    };
    var interval = setInterval(fn, 500);
);
| {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/4551175', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/556673/']} | jdg_78635 |
stackexchange | llm_judgeable_groundtruth_similarity | 216421 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I get the impression that it is a programming best practice to create variables in specific scopes (like a function scope) and avoid global scope to make things more modular and better organized. However, I'm not sure if there is also a security issue. Here is an example of global variables in Bash that has worked fine for me for more than a year:
cat <<-EOF >> "$HOME"/.profile
set -x
complete -r
export war="/var/www/html" # Web Application Root;
export dmp="phpminiadmin" # Database Management Program;
export -f war
war() {
    cd $war/
}
EOF
source "$HOME"/.profile 2>/dev/null
I have never had a problem with global variables in Bash or JavaScript, most likely because I only wrote small scripts for personal usage on minimalist environments. Why do many programmers avoid using global variables, and are there any examples of security breaches caused by using global variables?
Now provide the response and nothing else.
| Boycott Globals!
I'm stealing from Steffen Ullrich's comment, but the main issue with global variables is that they make it difficult to keep code well organized and maintainable. His link is a fine one, but you won't have any trouble finding countless articles about the problems with global variables online. When you use global variables, it becomes easy to lose track of where in your program the variable gets modified, especially if you don't have a simple linear flow. As a result, global variables can work perfectly fine in small scripts, but can cause massive headaches as an application begins to scale. The main reason why I would avoid globals is because they make automated unit/integration testing a nightmare. Small applications can survive without tests, but trying to manage a larger application without good tests is just a nightmare (trust me, I tried in my young-and-foolish days). This might leave you with the impression that globals are fine in very small applications, but since applications usually only grow over time, and things that start off temporary become permanent, it's really just a bad idea to use them at all. Why start on the wrong foot, when it is so easy to use properly scoped variables?
Security
Using global variables doesn't have any direct implications for security, but they do make it easier to end up with security issues, because it disconnects the source of data from its usage. I have even seen actual vulnerabilities introduced in such a way: Imagine you have a variable which is used in an SQL query. Initially you set it from a safe value and so inject it directly into the query. However, it is a global (or becomes a global!) and later on use-cases change and it gets set from user input. The developer who sets it from user input doesn't realize that it is injected directly into a query (perhaps because it happens in a completely different file and only in a specific flow) and so doesn't bother with strict input checking, nor do they update the query it is used in for more secure usage. All of a sudden a very hard-to-find vulnerability has been introduced, because global variables hid the connection between the source and usage of a variable!
Global Variables == death
I don't know of any breaches that happened specifically because of global variables, but it's easy to argue that the use of global variables has literally killed people, so I think it's reasonable to just never use them. | {} | {'log_upvote_score': 7, 'links': ['https://security.stackexchange.com/questions/216421', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/-1/']} | jdg_78636 |
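A small, hypothetical Python sketch of the SQL scenario described in the answer above; the module-level variable, function names and table are invented for illustration, and the cursor is assumed to be a DB-API style cursor (e.g. sqlite3):
# query_filter starts life as a trusted, hard-coded value...
query_filter = "status = 'active'"

def fetch_users(cursor):
    # ...and is pasted straight into the SQL, which looks harmless at first
    return cursor.execute("SELECT * FROM users WHERE " + query_filter)

# Months later, in a different file, someone reuses the same global for user input:
#   report_helpers.query_filter = request.args["filter"]   # now fetch_users() is injectable
# A parameterised query stays safe even if the value is user-controlled:
def fetch_users_safely(cursor, status):
    return cursor.execute("SELECT * FROM users WHERE status = ?", (status,))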
stackexchange | llm_judgeable_groundtruth_similarity | 11178414 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for an algorithm to generate equally distributed points inside a polygon. Here is the scenario: I have a polygon specified by the coordinates of the points at the corners (x, y) for each point. And I have the number of points to generate inside the polygon. For example, let's say I have a polygon containing 5 points: (1, 1); (1, 2); (2, 3); (3, 2); and (3, 1). And I need to generate 20 equally distanced points inside that polygon. Note: Some polygons may not support equally distributed points, but I'm looking to distribute the points in a way that covers the whole region of the polygon with as much consistency as possible. (What I mean is I don't want one part with a lot more points than another.) Is there an algorithm to do so, or maybe a library? I am working on a C# application, but any language is ok, since I only need the algorithm and I can translate it. Thanks a lot for any help.
Now provide the response and nothing else.
| The simple approach I use is: Triangulate the polygon. Ear clipping is entirely adequate, as all you need is a dissection of the polygon into a set of non-overlapping triangles. Compute the area of each triangle. Sample from each triangle proportionally to the area of that triangle relative to the whole. This costs only a single uniform random number per sample. Once a point is determined to have come from a given triangle, sample uniformly over the triangle. This is itself easier than you might think. So really it all comes down to how do you sample within a triangle. This is easily enough done. A triangle is defined by 3 vertices. I'll call them P1, P2, P3. Pick ANY edge of the triangle. Generate a point (P4) that lies uniformly along that edge. Thus if P1 and P2 are the coordinates of the corresponding end points, then P4 will be a uniformly sampled point along that edge, if r has uniform distribution on the interval [0,1].
P4 = (1-r)*P1 + r*P2
Next, sample along the line segment between P3 and P4, but do so non-uniformly. If s is a uniform random number on the interval [0,1], then
P5 = (1-sqrt(s))*P3 + sqrt(s)*P4
r and s are independent pseudo-random numbers of course. Then P5 will be randomly sampled, uniform over the triangle. The nice thing is it needs no rejection scheme to implement, so long, thin polygons are not a problem. And for each sample, the cost is only in the need to generate three random numbers per event. Since ear clipping is rather simply done and an efficient task, the sampling will be efficient, even for nasty looking polygons or non-convex polygons. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11178414', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/887835/']} | jdg_78637 |
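A minimal Python sketch of the sampling step described in the answer above; the triangulation itself is assumed to be given (e.g. from an ear-clipping routine), and the function and variable names are illustrative:
import random

def sample_in_triangle(p1, p2, p3):
    # P4 uniform along edge P1-P2, then a sqrt-weighted step from P3 toward P4
    r, s = random.random(), random.random()
    p4 = ((1 - r) * p1[0] + r * p2[0], (1 - r) * p1[1] + r * p2[1])
    t = s ** 0.5
    return ((1 - t) * p3[0] + t * p4[0], (1 - t) * p3[1] + t * p4[1])

def sample_in_polygon(triangles, n):
    # triangles: list of (p1, p2, p3) tuples covering the polygon without overlap
    areas = [abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0
             for a, b, c in triangles]
    chosen = random.choices(triangles, weights=areas, k=n)  # area-proportional choice
    return [sample_in_triangle(*t) for t in chosen]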
stackexchange | llm_judgeable_groundtruth_similarity | 2070079 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does $n!+(n+1)!+(n+2)!=n!(n+2)^2$? Note: This is from the 1984 ARML I-7 .
Now provide the response and nothing else.
| We have $$\begin{align*} n! + (n+1)! + (n+2)! &= n! + n!(n+1) + n!(n+1)(n+2) \\&=n!(1 + (n+1) + (n+1)(n+2))\\&= n!(n^2 + 4n + 4) = n!(n+2)^2\;.\end{align*}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2070079', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/167548/']} | jdg_78638 |
stackexchange | llm_judgeable_groundtruth_similarity | 30835090 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to read from multiple input files and print the second row from each file next to each other as a table
import sys
import fileinput
with fileinput.input(files=('cutflow_TTJets_1l.txt ', 'cutflow_TTJets_1l.txt ')) as f:
    for line in f:
        proc(line)
def proc(line):
    parts = line.split("&")  # split line into parts
    if "&" in line:          # if at least 2 parts/columns
        print parts[1]       # print column 2
But I get an "AttributeError: FileInput instance has no attribute '__exit__'"
Now provide the response and nothing else.
| The problem is that as of python 2.7.10, the fileinput module does not support being used as a context manager, i.e. the with statement, so you have to handle closing the sequence yourself. The following should work:
f = fileinput.input(files=('cutflow_TTJets_1l.txt ', 'cutflow_TTJets_1l.txt '))
for line in f:
    proc(line)
f.close()
Note that in recent versions of python 3, you can use this module as a context manager. For the second part of the question, assuming that each file is similarly formatted with an equal number of data lines of the form xxxxxx & xxxxx, one can make a table of the data from the second column of each file as follows: Start with an empty list to be a table where the rows will be lists of second-column entries from each file:
table = []
Now iterate over all lines in the fileinput sequence, using fileinput.isfirstline() to check if we are at a new file and make a new row:
for line in f:
    if fileinput.isfirstline():
        row = []
        table.append(row)
    parts = line.split('&')
    if len(parts) > 1:
        row.append(parts[1].strip())
f.close()
Now table will be the transpose of what you really want, which is each row containing the second-column entries of a given line of each file. To transpose the list, one can use zip and then loop over rows of the transposed table, using the join string method to print each row with a comma separator (or whatever separator you want):
for row in zip(*table):
    print(', '.join(row))
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30835090', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4070982/']} | jdg_78639 |
stackexchange | llm_judgeable_groundtruth_similarity | 2603283 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There are a total of 30 gold coins in three wooden boxes (but you do not know how many in each individual box). However, you know that one box has exactly 4 coins more than another box. For each box, you can ask for a number of coins from that box, of your choice. If there are at least that many coins in that box, then you get as many coins as you asked for. Otherwise, you get nothing from that box. You must place all your demands simultaneously in the beginning. What is the maximum number of coins that you can guarantee yourself to get? Added for clarification: you don't know which box has 4 coins more. Let $x, y, y+4$ be the coin contents $=> x+2y=26$ $=>$ solutions $(0,13,17),...,(24,1,5)$. What next?
Now provide the response and nothing else.
| If you ask for $13,13,13$, then you get nothing if the boxes are $10,8,12$. If you ask for $12,12,12$, you will always get exactly $12$ coins, since in all cases, exactly one of the boxes has at least $12$ coins. Claim $12$ is best possible. Certainly it's best possible with all requests equal. Suppose a better request triple is $a,b,c$, with $a \le b \le $c, and $a < c$. But any of the $3$ requests might fail if that box contains $0$ coins, hence to be sure to get more than $12$ coins, every pair must sum to more than $12$. From $a \le b \le c $, and $a + b > 12$, we get $6 < b \le c$. But if the boxes are some permutation of $26,0,4$, the $b,c$ requests might both fail, so to be sure to get more than $12$, we must have $a > 12$. But then $a,b,c$ all exceed $12$, so all requests would fail if the boxes are $10,8,12$. Thus, unequal requests can't beat the uniform $12,12,12$ request strategy. It follows that $12$ is the maximum number of coins which can be guaranteed. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2603283', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/269497/']} | jdg_78640 |
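For readers who want to sanity-check the argument above, here is a small brute-force search in Python (written for illustration; it enumerates every admissible box configuration, every ordering of the boxes, and every request triple):
from itertools import permutations, product

def guaranteed(req):
    # Worst-case payoff of a request triple over all box contents and orderings.
    worst = None
    for y in range(14):                       # contents x, y, y+4 with x + 2y = 26
        contents = (26 - 2 * y, y, y + 4)
        for perm in permutations(contents):   # we don't know which box holds what
            got = sum(r for r, c in zip(req, perm) if r <= c)
            worst = got if worst is None else min(worst, got)
    return worst

print(guaranteed((12, 12, 12)))                                        # 12
print(max(guaranteed(req) for req in product(range(31), repeat=3)))    # 12 (takes a few seconds)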