source: stringclasses (1 value)
task_type: stringclasses (1 value)
in_source_id: stringlengths (1 to 8)
prompt: stringlengths (209 to 40.4k)
gold_standard_solution: stringlengths (0 to 56.7k)
verification_info: stringclasses (1 value)
metadata: stringlengths (138 to 225)
problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
33571605
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Angular 2 - How do I navigate to another route using this.router.parent.navigate('/about')? It doesn't seem to work. I also tried location.go("/about"), but that didn't work. Basically, once a user has logged in I want to redirect them to another page. Here is my code below:

import {Component} from 'angular2/angular2';
import {CORE_DIRECTIVES, FORM_DIRECTIVES} from 'angular2/angular2';
import {Router} from 'angular2/router';
import {AuthService} from '../../authService';

//Model
class User {
  constructor(public email: string, public password: string) {}
}

@Component({
  templateUrl: 'src/app/components/todo/todo.html',
  directives: [CORE_DIRECTIVES, FORM_DIRECTIVES]
})
export class Todo {
  model = new User('[email protected]', 'Password');
  authService: AuthService;
  router: Router;

  constructor(_router: Router, _authService: AuthService) {
    this.authService = _authService;
    this.router = _router;
  }

  onLogin = () => {
    this.authService.logUserIn(this.model).then((success) => {
      // This is where it's broken - below:
      this.router.parent.navigate('/about');
    });
  }
}

Now provide the response and nothing else.
Absolute path routing

There are 2 methods for navigation, .navigate() and .navigateByUrl()

You can use the method .navigateByUrl() for absolute path routing:

import {Router} from '@angular/router';

constructor(private router: Router) {}

navigateToLogin() {
  this.router.navigateByUrl('/login');
}

You put the absolute path to the URL of the component you want to navigate to.

Note: Always specify the complete absolute path when calling the router's navigateByUrl method. Absolute paths must start with a leading /

// Absolute route - Goes up to root level
this.router.navigate(['/root/child/child']);

// Absolute route - Goes up to root level with route params
this.router.navigate(['/root/child', crisis.id]);

Relative path routing

If you want to use relative path routing, use the .navigate() method.

NOTE: It's a little unintuitive how the routing works, particularly parent, sibling, and child routes:

// Parent route - Goes up one level
// (notice how it seems like you're going up 2 levels)
this.router.navigate(['../../parent'], { relativeTo: this.route });

// Sibling route - Stays at the current level and moves laterally
// (looks like up to parent then down to sibling)
this.router.navigate(['../sibling'], { relativeTo: this.route });

// Child route - Moves down one level
this.router.navigate(['./child'], { relativeTo: this.route });

// Moves laterally, and also adds route parameters
// if you are at the root and crisis.id = 15, will result in '/sibling/15'
this.router.navigate(['../sibling', crisis.id], { relativeTo: this.route });

// Moves laterally, and also adds multiple route parameters
// will result in '/sibling;id=15;foo=foo'.
// Note: this does not produce query string URL notation with ? and & ... instead it
// produces a matrix URL notation, an alternative way to pass parameters in a URL.
this.router.navigate(['../sibling', { id: crisis.id, foo: 'foo' }], { relativeTo: this.route });

Or if you just need to navigate within the current route path, but to a different route parameter:

// If crisis.id has a value of '15'
// This will take you from `/hero` to `/hero/15`
this.router.navigate([crisis.id], { relativeTo: this.route });

Link parameters array

A link parameters array holds the following ingredients for router navigation:

The path of the route to the destination component. ['/hero']
Required and optional route parameters that go into the route URL. ['/hero', hero.id] or ['/hero', { id: hero.id, foo: baa }]

Directory-like syntax

The router supports directory-like syntax in a link parameters list to help guide route name lookup:

./ or no leading slash is relative to the current level.
../ goes up one level in the route path.

You can combine relative navigation syntax with an ancestor path. If you must navigate to a sibling route, you could use the ../<sibling> convention to go up one level, then over and down the sibling route path.

Important notes about relative navigation

To navigate a relative path with the Router.navigate method, you must supply the ActivatedRoute to give the router knowledge of where you are in the current route tree. After the link parameters array, add an object with a relativeTo property set to the ActivatedRoute. The router then calculates the target URL based on the active route's location.

From the official Angular Router documentation.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/33571605', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1590389/']}
jdg_77441
stackexchange
llm_judgeable_groundtruth_similarity
14399443
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As part of my view I have:

<ul data-bind="foreach: caseStudies">
  <li><a data-bind="text: title, attr: { href: caseStudyUrl }"></a></li>
</ul>

I want to run some 3rd party code once knockout has updated the DOM.

caseStudies(data);
thirdPartyFuncToDoStuffToCaseStudyLinks(); <-- DOM not updated at this point.

Any idea on how I can hook into knockout to call this at the correct time? Now provide the response and nothing else.
Using the afterRender binding can help you.

<ul data-bind="foreach: { data: caseStudies, afterRender: checkToRunThirdPartyFunction }">
  <li><a data-bind="text: title, attr: { href: caseStudyUrl }"></a></li>
</ul>

function checkToRunThirdPartyFunction(element, caseStudy) {
  if (caseStudies.indexOf(caseStudy) == caseStudies().length - 1) {
    thirdPartyFuncToDoStuffToCaseStudyLinks();
  }
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14399443', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5170/']}
jdg_77442
stackexchange
llm_judgeable_groundtruth_similarity
1020453
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose you have the following situation

#include <iostream>

class Animal {
public:
    virtual void speak() = 0;
};

class Dog : public Animal {
    void speak() { std::cout << "woff!" << std::endl; }
};

class Cat : public Animal {
    void speak() { std::cout << "meow!" << std::endl; }
};

void makeSpeak(Animal &a) {
    a.speak();
}

int main() {
    Dog d;
    Cat c;
    makeSpeak(d);
    makeSpeak(c);
}

As you can see, makeSpeak is a routine that accepts a generic Animal object. In this case, Animal is quite similar to a Java interface, as it contains only a pure virtual method. makeSpeak does not know the nature of the Animal it gets passed. It just sends it the signal "speak" and leaves the late binding to take care of which method to call: either Cat::speak() or Dog::speak(). This means that, as far as makeSpeak is concerned, the knowledge of which subclass is actually passed is irrelevant. But what about Python? Let's see the code for the same case in Python. Please note that I try to be as similar as possible to the C++ case for a moment:

class Animal(object):
    def speak(self):
        raise NotImplementedError()

class Dog(Animal):
    def speak(self):
        print "woff!"

class Cat(Animal):
    def speak(self):
        print "meow"

def makeSpeak(a):
    a.speak()

d = Dog()
c = Cat()
makeSpeak(d)
makeSpeak(c)

Now, in this example you see the same strategy. You use inheritance to leverage the hierarchical concept of both Dogs and Cats being Animals. But in Python, there's no need for this hierarchy. This works equally well:

class Dog:
    def speak(self):
        print "woff!"

class Cat:
    def speak(self):
        print "meow"

def makeSpeak(a):
    a.speak()

d = Dog()
c = Cat()
makeSpeak(d)
makeSpeak(c)

In Python you can send the signal "speak" to any object you want. If the object is able to deal with it, it will be executed, otherwise it will raise an exception. Suppose you add a class Airplane to both codes, and submit an Airplane object to makeSpeak. In the C++ case, it won't compile, as Airplane is not a derived class of Animal. In the Python case, it will raise an exception at runtime, which could even be an expected behavior. On the other side, suppose you add a MouthOfTruth class with a method speak(). In the C++ case, either you will have to refactor your hierarchy, or you will have to define a different makeSpeak method to accept MouthOfTruth objects, or in Java you could extract the behavior into a CanSpeakIface and implement the interface for each. There are many solutions... What I'd like to point out is that I haven't found a single reason yet to use inheritance in Python (apart from frameworks and trees of exceptions, but I guess that alternative strategies exist). You don't need to implement a base-derived hierarchy to perform polymorphically. If you want to use inheritance to reuse implementation, you can accomplish the same through containment and delegation, with the added benefit that you can alter it at runtime, and you clearly define the interface of the contained, without risking unintended side effects. So, in the end, the question stands: what's the point of inheritance in Python? Edit: thanks for the very interesting answers. Indeed you can use it for code reuse, but I am always careful when reusing implementation. In general, I tend to do very shallow inheritance trees or no tree at all, and if a functionality is common I refactor it out as a common module routine and then call it from each object. I do see the advantage of having one single point of change (e.g. instead of adding to Dog, Cat, Moose and so on, I just add to Animal, which is the basic advantage of inheritance), but you can achieve the same with a delegation chain (e.g. a la JavaScript). I'm not claiming it's better though, just another way. I also found a similar post in this regard. Now provide the response and nothing else.
You are referring to the run-time duck-typing as "overriding" inheritance, however I believe inheritance has its own merits as a design and implementation approach, being an integral part of object oriented design. In my humble opinion, the question of whether you can achieve something otherwise is not very relevant, because actually you could code Python without classes, functions and more, but the question is how well-designed, robust and readable your code will be.

I can give two examples for where inheritance is the right approach in my opinion; I'm sure there are more.

First, if you code wisely, your makeSpeak function may want to validate that its input is indeed an Animal, and not only that "it can speak", in which case the most elegant method would be to use inheritance. Again, you can do it in other ways, but that's the beauty of object oriented design with inheritance - your code will "really" check whether the input is an "animal".

Second, and clearly more straightforward, is Encapsulation - another integral part of object oriented design. This becomes relevant when the ancestor has data members and/or non-abstract methods. Take the following silly example, in which the ancestor has a function (speak_twice) that invokes a then-abstract function:

class Animal(object):
    def speak(self):
        raise NotImplementedError()

    def speak_twice(self):
        self.speak()
        self.speak()

class Dog(Animal):
    def speak(self):
        print "woff!"

class Cat(Animal):
    def speak(self):
        print "meow"

Assuming "speak_twice" is an important feature, you don't want to code it in both Dog and Cat, and I'm sure you can extrapolate this example. Sure, you could implement a Python stand-alone function that will accept some duck-typed object, check whether it has a speak function and invoke it twice, but that's both non-elegant and misses point number 1 (validate it's an Animal). Even worse, and to strengthen the Encapsulation example, what if a member function in the descendant class wanted to use "speak_twice"?

It gets even clearer if the ancestor class has a data member, for example "number_of_legs", that is used by non-abstract methods in the ancestor like "print_number_of_legs", but is initiated in the descendant class' constructor (e.g. Dog would initialize it with 4 whereas Snake would initialize it with 0).

Again, I'm sure there are endless more examples, but basically every (large enough) software that is based on solid object oriented design will require inheritance.
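To make that last encapsulation point concrete, here is a minimal runnable sketch. Snake, its sound, and the printing format are hypothetical additions for illustration (the print statements follow the answer's Python 2 style); only speak_twice and number_of_legs come from the answer above.

class Animal(object):
    def __init__(self, number_of_legs):
        # Shared data member, initialized by each descendant's constructor.
        self.number_of_legs = number_of_legs

    def speak(self):
        raise NotImplementedError()

    def speak_twice(self):
        # Shared behavior written once; uses the descendant's speak().
        self.speak()
        self.speak()

    def print_number_of_legs(self):
        print "%d legs" % self.number_of_legs

class Dog(Animal):
    def __init__(self):
        Animal.__init__(self, 4)

    def speak(self):
        print "woff!"

class Snake(Animal):
    def __init__(self):
        Animal.__init__(self, 0)  # 0 legs, per the answer's hint

    def speak(self):
        print "hiss"  # hypothetical sound

Snake().speak_twice()         # prints "hiss" twice without re-coding the loop
Dog().print_number_of_legs()  # prints "4 legs"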
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1020453', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/78374/']}
jdg_77443
stackexchange
llm_judgeable_groundtruth_similarity
16376094
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given the routes:

Example::Application.routes.draw do
  concern :commentable do
    resources :comments
  end

  resources :articles, concerns: :commentable

  resources :forums do
    resources :forum_topics, concerns: :commentable
  end
end

And the model:

class Comment < ActiveRecord::Base
  belongs_to :commentable, polymorphic: true
end

When I edit or add a comment, I need to go back to the "commentable" object. I have the following issues, though: 1) The redirect_to in the comments_controller.rb would be different depending on the parent object 2) The references on the views would differ as well

= simple_form_for comment do |form|

Is there a practical way to share views and controllers for this comment resource? Now provide the response and nothing else.
You can find the parent in a before filter like this:

comments_controller.rb

before_filter :find_parent

def find_parent
  params.each do |name, value|
    if name =~ /(.+)_id$/
      @parent = $1.classify.constantize.find(value)
    end
  end
end

Now you can redirect or do whatever you please depending on the parent type. For example, in a view:

= simple_form_for [@parent, comment] do |form|

Or in a controller:

comments_controller.rb

redirect_to @parent # redirect to the show page of the commentable.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16376094', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1116389/']}
jdg_77444
stackexchange
llm_judgeable_groundtruth_similarity
206806
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need a little bit of help salvaging the data from a questionnaire. One of my colleagues administered a questionnaire, but inadvertently, instead of using the original 5-point Likert scale (strongly disagree to strongly agree), he inserted a 6th answer into the scale. And, to make the matter worse, the 6th response option is … "I don't know". The problem is the big proportion of respondents who, at one point or another, chose "I don't know". If they were a reasonably small percentage, I'd have just excluded them from the database. However, the core of the research rests on a conceptual model, and excluding so many records would create a problem for the model. Could someone point me in the right direction here? Are there any 'good practices', or can I do anything to use (transform, convert, etc.) those "I don't know" responses? Also, if I do any manipulation of the data in question (i.e., if I convert the "I don't know" responses by substitution, imputation, etc.), what kind of 'disclaimer', 'warning', or annotation should I use? I know it is a long shot, but I confess, besides salvaging the responses, I am also curious what the agreed practice is (if there is one) in this type of case. PS: I know it sounds childish, but no, the 'colleague' isn't me :) Now provide the response and nothing else.
Why try to force a calibration on something which is not true? As Maarten said, this is not a loss of data but a gain of information. If the magical pill you are looking for exists, it would mean that there are some assumptions about your population that are made, for example, a bias in favor of one particular label even though users say "I don't know". I totally understand your frustration but the proper way to approach the problem is to modify the model to suit your needs based on the true existing data, and not the other way around (modifying the data).
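To make the "keep it as information" point concrete, here is a minimal pandas sketch (the column names and labels are made up for illustration): encode the five substantive answers as an ordered categorical and carry "I don't know" as its own indicator, so a model can use both rather than having the sixth option imputed away.

import pandas as pd

# Hypothetical raw answers, including the accidental 6th option.
responses = pd.Series(["agree", "I don't know", "strongly disagree", "neutral"])

scale = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

# The five substantive points become an ordered scale; "I don't know"
# falls outside the listed categories and shows up as missing on that scale...
likert = pd.Categorical(responses, categories=scale, ordered=True)

# ...while a separate flag preserves the fact that the respondent said so.
dont_know = responses == "I don't know"

df = pd.DataFrame({"likert": likert, "dont_know": dont_know})
print(df)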
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/206806', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/111860/']}
jdg_77445
stackexchange
llm_judgeable_groundtruth_similarity
714580
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to calculate the derivative of the Lagrangian $$\mathcal{L}=\frac{1}{2}\partial_\mu\phi\,\partial^\mu\phi-\frac{1}{2}m^2\phi^2$$ with respect to the metric tensor $g_{\mu\nu}$, with the convention $(+, -, -, -)$, in order to obtain $$T^{\mu\nu}=-g^{\mu\nu}\mathcal{L}-2\frac{\delta\mathcal{L}}{\delta g_{\mu\nu}}.$$ First I tried to do it as follows: $$\frac{\delta\mathcal{L}}{\delta g_{\mu\nu}}=\frac{1}{2}\frac{\delta\left(g_{\alpha\beta}\partial^\alpha\phi\partial^\beta\phi\right)}{\delta g_{\mu\nu}}=\frac{1}{2}\frac{\delta g_{\alpha\beta}}{\delta g_{\mu\nu}}\partial^\alpha\phi\partial^\beta\phi=\frac{1}{4}\left(\delta_\alpha^\mu\delta_\beta^\nu+\delta_\alpha^\nu\delta_\beta^\mu\right)\partial^\alpha\phi\partial^\beta\phi=\frac{1}{2}\partial^\mu\phi\partial^\nu\phi.$$ Where I have used the formula $$\frac{\delta g_{\alpha\beta}}{\delta g_{\mu\nu}}=\frac{1}{2}\left(\delta_\alpha^\mu\delta_\beta^\nu+\delta_\alpha^\nu\delta_\beta^\mu\right)$$ that my professor derived in class. But, if I write $\partial_\mu\phi\partial^\mu\phi=g^{\alpha\beta}\partial_\alpha\phi\partial_\beta\phi$, I would use the expression for the derivative of the inverse metric: $$\frac{\delta g^{\alpha\beta}}{\delta g_{\mu\nu}}=-\frac{1}{2}\left(g^{\alpha\mu}g^{\beta\nu}+g^{\alpha\nu}g^{\beta\mu}\right)$$ This formula comes from the fact that $g^{\alpha\beta}g_{\beta\rho}=\delta^{\alpha}_{\rho}$, so $$\frac{\delta g^{\alpha\beta}}{\delta g_{\mu\nu}}g_{\beta\rho}+g^{\alpha\beta}\frac{\delta g_{\beta\rho}}{\delta g_{\mu\nu}}=0$$ which leads to $$\frac{\delta g^{\alpha\beta}}{\delta g_{\mu\nu}}g_{\beta\rho}=-\frac{1}{2}g^{\alpha\beta}\left(\delta^\mu_\beta\delta^\nu_\rho+\delta^\mu_\rho\delta^\nu_\beta\right)=-\frac{1}{2}\left(g^{\alpha\mu}\delta^{\nu}_{\rho}+g^{\alpha\nu}\delta^{\mu}_\rho\right)$$ multiplying by $g^{\rho\varphi}$, we get $$\frac{\delta g^{\alpha\varphi}}{\delta g_{\mu\nu}}=-\frac{1}{2}\left(g^{\alpha\mu}g^{\varphi\nu}+g^{\alpha\nu}g^{\varphi\mu}\right)$$ and then, applying that, we would get $$\frac{\delta\mathcal{L}}{\delta g_{\mu\nu}}=-\frac{1}{2}\partial^\mu\phi\partial^\nu\phi$$ I would like to understand what's causing this problem. I suppose this question may be a duplicate, but it's still not clear to me how to fix that sign error from the answers I've read. This is my first time studying GR and I would be grateful for any responses. Now provide the response and nothing else.
This is just elaborating a little more on the 'behind the scenes', since OP's confusions seem to be resolved in the comments already. Consider the following simplified situation. Let $f_1,f_2:\Bbb{R}^2\to\Bbb{R}$ be two functions defined as $f_1(x,y)=x^2y^3$ and $f_2(x,y)=xy^2$ . These are clearly two different functions. Consider now two curves, $\gamma_1,\gamma_2:\Bbb{R}\to\Bbb{R}^2$ defined as $\gamma_1(t)=(t,t)$ and $\gamma_2(t)=(t,t^2)$ . Then, you can easily verify that the composed maps are equal: for all $t\in\Bbb{R}$ , we have $(f_1\circ\gamma_1)(t)=(f_2\circ\gamma_2)(t)=t^5$ .On the other hand, let us calculate their partial derivative: \begin{align}\frac{\partial f_1}{\partial x}\bigg|_{\gamma_1(t)}=2t^4,\quad\text{but}\quad\frac{\partial f_2}{\partial x}\bigg|_{\gamma_2(t)}=t^4.\end{align} This shouldn't be surprising: we started off with two different functions $f_1,f_2$ , and we just happened to find two curves $\gamma_1,\gamma_2$ such that $f_1\circ\gamma_1=f_2\circ\gamma_2$ . There's no reason to expect that this implies $\frac{\partial f_1}{\partial x}\circ \gamma_1= \frac{\partial f_2}{\partial x}\circ \gamma_2$ , and in fact as shown above, this equality is false. How does this relate to the Lagrangian? Fix any smooth manifold $M$ , and consider the mappings $\mathscr{L}_1: \Gamma(T^0_2(M))\times \Gamma(T^1_0(M))\to C^{\infty}(M)$ defined as $\mathscr{L}_1(H,\xi)=H(\xi,\xi)=H_{ab}\xi^a\xi^b.$ $\mathscr{L}_2: \Gamma(T^2_0(M))\times \Gamma(T^0_1(M))\to C^{\infty}(M)$ defined as $\mathscr{L}_2(K,\omega)=K(\omega,\omega)=K^{ab}\omega_a\omega_b.$ In words, $\mathscr{L}_1$ eats a $(0,2)$ -tensor field in its first slot, and a vector field (a $(1,0)$ tensor field) in its second slot, and it outputs a smooth function by contracting the tensor field and vector field completely; $\mathscr{L}_2$ does a similar thing (contraction) except it has a different domain. Now, without any doubt, $\mathscr{L}_1$ and $\mathscr{L}_2$ are completely different maps. Now, let us fix a scalar field $\phi$ on $M$ . We now get two induced mappings via composition, denoted $\mathcal{L}_1$ and $\mathcal{L}_2$ , defined on the space of metric tensors and taking values in $C^{\infty}(M)$ , such that $\mathcal{L}_1[g]:= \mathscr{L}_1(g,\text{grad}_g(\phi)):=\mathscr{L}_1(g,g^{\sharp}(d\phi))=g_{ab}\partial^a\phi\partial^b\phi$ . $\mathcal{L}_2[g]:= \mathscr{L}_2(g^{``-1"}, d\phi)= g^{ab}\partial_a\phi\partial_b\phi$ . Here, $g^{\sharp}$ denotes the musical isomorphism which converts covector fields into vector fields (the index-raising operation), and $g^{``-1"}$ denotes the 'inverse' metric tensor (I put 'inverse' in quotation marks since a $(0,2)$ tensor field strictly speaking doesn't have an inverse; rather we refer to a corresponding $(2,0)$ tensor). So you see, the composed functions $\mathcal{L}_1$ and $\mathcal{L}_2$ are equal. However, the variations $\frac{\delta \mathscr{L}_1}{\delta H}\bigg|_{(g,\text{grad}_g\phi)}$ and $\frac{\delta\mathscr{L}_2 }{\delta K}\bigg|_{(g^{``-1"}, d\phi)}$ are not equal. Hopefully the analogy with the above simple case is clear: $\mathscr{L}_i$ is like $f_i$ , the map $g\mapsto (g,\text{grad}_g(\phi))$ is like the curve $\gamma_1$ , and the map $g\mapsto (g^{``-1"},d\phi)$ is like the curve $\gamma_2$ , and it turns out their compositions are equal: $\mathcal{L}_1=\mathcal{L}_2$ . 
But that doesn't mean the original maps are equal, nor does it imply the composition of the derivatives along these 'curves' are equal (remember in variational calculus, we always perform the variation first, and only afterwards evaluate). So roughly speaking, your first calculation corresponds to $\mathscr{L}_1$, where we view $\partial^a\phi$ as independent variables, whereas in the latter case we view $\partial_a\phi$ as the independent variables. In going from one to the other, there are factors of the metric which appear. Lastly, in physics, we view the second situation as more 'fundamental', i.e. $\partial_a\phi$ is the basic quantity (after all, the exterior derivative $d\phi$ can be defined without any metric).
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/714580', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/291933/']}
jdg_77446
stackexchange
llm_judgeable_groundtruth_similarity
28503445
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a pandas series object x

Ezh2    2
Hmgb    7
Irf1    1

I want to save this as a dataframe with column names Gene and Count respectively. I tried

x_df = pd.DataFrame(x, columns=['Gene', 'count'])

but it does not work. The final form I want is

Gene    Count
Ezh2    2
Hmgb    7
Irf1    1

Can you suggest how to do this? Now provide the response and nothing else.
You can create a dict and pass this as the data param to the dataframe constructor:

In [235]:
df = pd.DataFrame({'Gene': s.index, 'count': s.values})
df
Out[235]:
   Gene  count
0  Ezh2      2
1  Hmgb      7
2  Irf1      1

Alternatively you can create a df from the series; you need to call reset_index as the index will be used, and then rename the columns:

In [237]:
df = pd.DataFrame(s).reset_index()
df.columns = ['Gene', 'count']
df
Out[237]:
   Gene  count
0  Ezh2      2
1  Hmgb      7
2  Irf1      1
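On a reasonably recent pandas version, the same result should also be reachable in one line, since Series.reset_index accepts a name for the values column (a small sketch; the data is the example series from the question):

import pandas as pd

s = pd.Series({'Ezh2': 2, 'Hmgb': 7, 'Irf1': 1})
df = s.rename_axis('Gene').reset_index(name='count')
print(df)
#    Gene  count
# 0  Ezh2      2
# 1  Hmgb      7
# 2  Irf1      1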
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28503445', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4147748/']}
jdg_77447
stackexchange
llm_judgeable_groundtruth_similarity
7764755
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was wondering if it was possible to split a file into equal parts (edit: all equal except for the last), without breaking the line? Using the split command in Unix, lines may be broken in half. Is there a way to, say, split up a file in 5 equal parts, but have it still only consist of whole lines (it's no problem if one of the files is a little larger or smaller)? I know I could just calculate the number of lines, but I have to do this for a lot of files in a bash script. Many thanks! Now provide the response and nothing else.
If you mean an equal number of lines, split has an option for this:

split --lines=75

If you need to know what that 75 should really be for N equal parts, it's:

lines_per_part = (total_lines + N - 1) / N   (integer division)

where total lines can be obtained with wc -l. See the following script for an example:

#!/usr/bin/bash

# Configuration stuff
fspec=qq.c
num_files=6

# Work out lines per file.
total_lines=$(wc -l <${fspec})
((lines_per_file = (total_lines + num_files - 1) / num_files))

# Split the actual file, maintaining lines.
split --lines=${lines_per_file} ${fspec} xyzzy.

# Debug information
echo "Total lines = ${total_lines}"
echo "Lines per file = ${lines_per_file}"
wc -l xyzzy.*

This outputs:

Total lines = 70
Lines per file = 12
 12 xyzzy.aa
 12 xyzzy.ab
 12 xyzzy.ac
 12 xyzzy.ad
 12 xyzzy.ae
 10 xyzzy.af
 70 total

More recent versions of split allow you to specify a number of CHUNKS with the -n/--number option. You can therefore use something like:

split --number=l/6 ${fspec} xyzzy.

(that's ell-slash-six, meaning lines, not one-slash-six). That will give you roughly equal files in terms of size, with no mid-line splits.

I mention that last point because it doesn't give you roughly the same number of lines in each file, more the same number of characters. So, if you have one 20-character line and 19 1-character lines (twenty lines in total) and split to five files, you most likely won't get four lines in every file.
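If you ever need the same ceiling-division trick outside the shell, here is a rough Python sketch of it (the file name and part count simply reuse the example values above; this is an illustration of the formula, not a replacement for split):

def lines_per_part(total_lines, n):
    # Ceiling division: same formula as the shell arithmetic above.
    return (total_lines + n - 1) // n

with open("qq.c") as f:   # hypothetical input file
    lines = f.readlines()

n = 6
size = lines_per_part(len(lines), n)
parts = [lines[i:i + size] for i in range(0, len(lines), size)]
# Each part contains only whole lines; the last part may be shorter.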
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7764755', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/101289/']}
jdg_77448
stackexchange
llm_judgeable_groundtruth_similarity
45044210
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been trying to get GStreamer working in Android Studio, following their tutorials, see for example here: https://gstreamer.freedesktop.org/documentation/tutorials/android/link-against-gstreamer.html But in the latest Android Studio there is no jni/Android.mk. Where do I put the code at the end of that web page? Should it go in the CMakeLists.txt? Or should something different go in there? Or do I just make an Android.mk file, and if so, where (as there is no jni folder, only a cpp folder)? And how do I tell Android Studio where to find my Android.mk file? Thanks! Now provide the response and nothing else.
I've made all the tutorials from GStreamer's site work, but it was very hard. To help other people I documented it. Here's the tutorial for Linux (Ubuntu), but it might work on Windows too; if you have any question regarding it, just ask :)... I will also link projects from GitLab ready to run in Android Studio:

(tutorial 5) https://gitlab.com/eduardoprado/gstreamer-tutorial5.git
(tutorial 2) https://gitlab.com/eduardoprado/gstreamer-tutorial2.git

Using the library in Android Studio

The best way to understand and integrate GStreamer into the project is doing the tutorials on their site (sorry, you have to find the link by yourself, I don't have reputation to post the direct link). However, these tutorials are old and were made to work in Eclipse. In order to work with Android Studio it is necessary to:

1. Import the tutorials using Android Studio's Import option;
2. Set Android Studio to use the NDK;
3. Set a variable inside the "Android.mk" file;
4. Set Gradle to compile only some architectures;
5. Copy the GStreamer class to the correct path;
6. Disable "Instant Run" in Android Studio (edit made 10/27/2017);
7. Fix the "linker gold argument" (edit made 11/28/2017, WINDOWS users only).

Each of these instructions will be explained in the following sections.

1. Import the tutorials using Android Studio's Import option

Android Studio has an option to import projects from other IDEs. To import the GStreamer projects made for Eclipse, take the following steps:

1. Download all tutorials from git;
2. In Android Studio go to File -> New -> Import Project;
3. In the new file browser screen, go to "gst-docs", select the desired project (in my case tutorial 5) "android-tutorial-5" and hit "OK".

2. Set Android Studio to use the NDK

Now that the project has been imported, you need to inform Android Studio that the project uses the NDK framework, so that Gradle can import and handle all dependencies. To do this we need to link the file Android.mk (already in the tutorial folder):

1. Right click on "app" in the left navigation tab;
2. Click on "Link C++ Project with Gradle";
3. In the new window:
   - Change "Build System" from "CMake" to "ndk-build";
   - Click on "..." and browse to Android.mk's location "/AndroidStudioProjects/android-studio-5/app/src/jni/Android.mk";
   - Hit OK.

Now External Build Files will appear in the project's left tab. Some errors will also appear; this is normal and will be corrected in the next sections.

3. Set a variable inside the "Android.mk" file

The last section's error happens because "Android.mk" won't find the path to where GStreamer was extracted. So let's set the GSTREAMER_ROOT_ANDROID variable to the path where GStreamer was extracted. To achieve this:

1. Browse the left tab and expand "External Build Files";
2. Double click on Android.mk to open the file;
3. At line 10, under "include $(BUILD_SHARED_LIBRARY)", type:
   GSTREAMER_ROOT_ANDROID := path
   (NO quotation marks; path is the path (obviously) where GStreamer was extracted).

After this, Gradle will synchronize and an error "***Target arch ABI not supported: mips. Stop" will be shown.

4. Set Gradle to compile only some architectures

The last section's error occurs because Gradle tries to build for different architectures but it doesn't find the files to build for MIPS. To solve this problem we will specify the architectures to build, excluding MIPS:

1. In the left browser tab, expand Gradle Scripts;
2. Double click on "build.gradle (Module: app)";
3. In the line under "moduleName" type:
   (to run tutorial 5) abiFilters 'x86', 'armeabi', 'armeabi-v7a', 'arm64-v8a'
   (to run tutorial 2) abiFilters 'x86', 'armeabi-v7a', 'arm64-v8a'

(edit 11/21/2017: Recently I had problems with arm64-v8a... devices with 64 bits won't show live streaming; I solved it by removing arm64-v8a too)
(edit 3/29/2018: I realised that you may need to remove more architectures depending on your GStreamer and NDK version; if it's not working, try different removal combinations)

5. Copy the GStreamer class to the correct path

The last needed configuration regards a package importation in the Main class. The error shown above occurs because there's no GStreamer class to get the ".init" function. To solve this we need to copy the GStreamer class from the folder where it was extracted and paste it at the specified path:

1. Browse, using your system's file manager, to the folder where you extracted GStreamer;
2. From the extracted folder, browse to the location of the GStreamer class. It is located at /arm/share/gst-android/ndk-build/GStreamer.java;
3. Copy GStreamer.java;
4. Go back to Android Studio and create the import path that the Main class is trying to access, starting from /app/java in the left browser tab. So, in this case, we have to create the path org.freedesktop.gstreamer and paste the file GStreamer.java in it:
   - Right click on the java folder;
   - New -> Package;
   - Type org.freedesktop.gstreamer;
   - Right click on org.freedesktop.gstreamer and Paste.
5. The importation error is now solved, but the new class has errors. To solve them, all you have to do is erase from the copied class the "@" and everything located between the "@"s.

6. Disable "Instant Run" in Android Studio (edit made 10/27/2017)

This Android Studio feature can cause problems when trying to run the project, so it's a good idea to disable it:

1. In Android Studio go to "File" -> "Settings..." (Ctrl+Alt+S);
2. "Build, Execution, Deployment" -> "Instant Run";
3. Uncheck "Enable Instant Run to hot swap code/resource changes on deploy (default enabled)";
4. Hit "Apply".

7. Fix the "linker gold argument" (WINDOWS users only)

When trying on Windows you will get the error:

Error: error: invalid linker name in argument '-fuse-ld=gold'

To fix it:

1. Go to Android NDK Linker (GStreamer) - invalid linker name -fuse-ld=gold;
2. Follow the instructions in Antoine Zambelli's answer.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45044210', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3444265/']}
jdg_77449
stackexchange
llm_judgeable_groundtruth_similarity
104733
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm having trouble understanding this homework problem. Suppose four polynomials are defined by the following: $p_{1}(x) = x^3 - 2x^2 + x + 1 \\p_{2}(x) = x^2 - x + 2 \\p_{3}(x) = 2x^3 + 3x + 4 \\p_{4}(x) = 3x^2 + 2x + 1 \\$ Does the set $S = $ { ${p_{1}, p_{2}, p_{3}, p_{4}}$ } span $P_{3}$ (the space of all polynomials of degree at most 3)? So if I start with the polynomial $y = ax^3 + bx^2 + cx + d$ I understand(I think) that in order for $S$ to span $P_{3}$ then $y$ must be a linear combination of $S$ but I'm not sure where to go from there. EDIT: $$A = \begin{bmatrix}1 & -2 & 1 & 1\\ 0 & 1 & -1 & 2\\ 2 & 0 & 3 & 4\\ 0 & 3 & 2 & 1\end{bmatrix} \sim \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}$$ So $\operatorname{rank}(A) = 4$ which means the vectors in the set are linearly independent because there are only 4 column vectors in the matrix (AND there is only the trivial solution to the matrix) and therefore we span $P_{3}$? Now provide the response and nothing else.
Since $P_3$ has dimension 4, the set $\{p_1,p_2,p_3,p_4\}$ will span $P_3$ if and only if they are independent. So, test if the vectors in $\{p_1,p_2,p_3,p_4\}$ are independent:

Assume $$c_1p_1+c_2p_2+c_3p_3+c_4p_4={\bf 0}.$$ Then $$c_1(x^3-2x^2+x+1)+c_2(x^2-x+2)+c_3(2x^3+3x+4)+c_4(3x^2+2x+1)={\bf 0}.$$ Collecting like terms, the above can be written as $$(c_1+2c_3)x^3+(-2c_1+c_2+3c_4)x^2+(c_1-c_2+3c_3+2c_4)x+(c_1+2c_2+4c_3+c_4)={\bf 0}.$$ A polynomial is the zero polynomial if and only if all its coefficients are 0; so, the above is equivalent to the following system of equations: $$\tag{1}\eqalign{c_1+2c_3&=0\cr -2c_1+c_2+3c_4&=0\cr c_1-c_2+3c_3+2c_4&=0\cr c_1+2c_2+4c_3+c_4&=0}$$ The coefficient matrix of the above system is $$A=\left[\matrix{1&0&2&0\cr -2&1&0&3\cr 1&-1&3&2\cr 1&2&4&1\cr}\right]$$ An echelon form of $A$ is $$\left[\matrix{1&0&2&0\cr 0&1&-4&3\cr 0&0&-3&5\cr 0&0&0&7\cr}\right].$$ This implies the system above has only the trivial solution $c_1=c_2=c_3=c_4=0$; thus $\{p_1,p_2,p_3,p_4\}$ is an independent set and so spans $P_3$.

Alternatively you could show that the equation you wrote always has a solution. To do this, write the corresponding system of equations. You'll wind up with the system as in $(1)$, but with the right hand side replaced by the coefficients of $y$.

If you just want to "cut to the chase", note that the coefficient matrix $A$ above is simply the matrix whose columns are the coefficients of the polynomials $p_1$, $p_2$, $p_3$, $p_4$. You could have immediately written this down (or written them down as rows) and then determined if the matrix has full rank. It's a good thing to see why you can do this, though...
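For a quick numerical sanity check of the rank argument, here is a small sketch (assuming numpy is available); the matrix is the coefficient matrix $A$ from above, whose columns are the coefficients of $p_1,\dots,p_4$ on the basis $\{x^3,x^2,x,1\}$:

import numpy as np

# Columns: coefficients of p1, p2, p3, p4 on the basis {x^3, x^2, x, 1}.
A = np.array([
    [ 1,  0, 2, 0],
    [-2,  1, 0, 3],
    [ 1, -1, 3, 2],
    [ 1,  2, 4, 1],
])

print(np.linalg.matrix_rank(A))  # prints 4, so the set is independent and spans P_3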
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/104733', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/6628/']}
jdg_77450
stackexchange
llm_judgeable_groundtruth_similarity
3165701
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm converting some C# code to Java and I need to include an exception that is similar to C#'s InvalidOperationException. Does such a thing exist? Also is there a list of equivalent exception types in the two languages? Thanks. I think in my particular case IllegalStateException is most appropriate. Thanks for all the responses. Now provide the response and nothing else.
Probably IllegalStateException.

From what I read about InvalidOperationException: "The exception that is thrown when a method call is invalid for the object's current state."

For IllegalStateException: "Signals that a method has been invoked at an illegal or inappropriate time. In other words, the Java environment or Java application is not in an appropriate state for the requested operation."

Depending on how you are using InvalidOperationException, I could also see IllegalArgumentException and UnsupportedOperationException being what you want. The former implies that, in general, the method is fine to call, it was just passed garbage this time; the latter implies that the method is never appropriate to call for this instance (unlike IllegalStateException, which implies that it might be appropriate to call the subject method sometimes, just not at the moment).

I am not aware of a general C# <=> Java translation of exceptions.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3165701', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/314560/']}
jdg_77451
stackexchange
llm_judgeable_groundtruth_similarity
247789
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Problem : Suppose $f:\Bbb R\to\Bbb R$ is a non-negative uniformly continuous function and $\displaystyle\int_{-\infty}^\infty f(x)\ dx <+\infty$. Prove $f$ is bounded. I got this problem in my final exam today but couldn't figure it out. Any hint/solution is welcome. Now provide the response and nothing else.
Hint: take $\delta > 0$ so $|x - y| < \delta$ implies $|f(x) - f(y)| < 1$. If $f(x) > n$ then $\int_{x-\delta}^{x+\delta} f(y)\ dy > \ldots$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/247789', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/51240/']}
jdg_77452
stackexchange
llm_judgeable_groundtruth_similarity
42494333
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For some reason, it appears that adding a new element to a slice using reflection doesn't update the slice itself. This is the code to demonstrate:

package main

import (
    "fmt"
    "reflect"
)

func appendToSlice(arrPtr interface{}) {
    valuePtr := reflect.ValueOf(arrPtr)
    value := valuePtr.Elem()
    value = reflect.Append(value, reflect.ValueOf(55))
    fmt.Println(value.Len()) // prints 1
}

func main() {
    arr := []int{}
    appendToSlice(&arr)
    fmt.Println(len(arr)) // prints 0
}

Playground link: https://play.golang.org/p/j3532H_mUL

Is there something I'm missing here? Now provide the response and nothing else.
reflect.Append works like append in that it returns a new slice value. You are assigning this value to the value variable in the appendToSlice function, which replaces the previous reflect.Value, but does not update the original argument.

To make it more clear what's happening, take the equivalent function to your example without reflection:

func appendToSlice(arrPtr *[]int) {
    value := *arrPtr
    value = append(value, 55)
    fmt.Println(len(value))
}

What you need to use is the Value.Set method to update the original value:

func appendToSlice(arrPtr interface{}) {
    valuePtr := reflect.ValueOf(arrPtr)
    value := valuePtr.Elem()
    value.Set(reflect.Append(value, reflect.ValueOf(55)))
    fmt.Println(value.Len())
}

https://play.golang.org/p/Nhabg31Sju
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42494333', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1635591/']}
jdg_77453
stackexchange
llm_judgeable_groundtruth_similarity
117337
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Thurston's celebrated compactification of Teichmuller space was first described in his famous Bulletin paper. Teichmuller space is famously homeomorphic to an open disc of some dimension (this can be seen using Fenchel-Nielsen coordinates), which is $6g-6$ for closed surfaces of genus $g\geq 2$. Thurston embeds Teichmuller space in some infinite-dimensional natural space (the projective space of all real functions on isotopy classes of simple closed curves) and studies its closure there. The closure is realised by adding some points that correspond geometrically to some particular objects (called projective measured foliations). The added points are homeomorphic to a sphere of dimension $6g-7$ and the resulting topological space is just a closed disc, the new points forming its boundary. The Bulletin paper contains almost no proofs. The only complete proofs I know for this beautiful piece of mathematics are described in the book Travaux de Thurston sur les surfaces of Fathi-Laudenbach-Poénaru (an english translation written by Kim and Margalit is available here). The homeomorphism between the space of projective measured laminations and the sphere $S^{6g-7}$ as explained there is clear and natural; it's obtained by re-adapting the Fenchel-Nielsen coordinates to the context of measured foliations. The proof that the whole compactified space is homeomorphic to $D^{6g-6}$ is however more involved and less direct. First they study some charts to prove that we get a topological manifold with boundary, and that's ok. The compactification is thus a topological manifold with boundary homeomorphic to $S^{6g-7}$, whose interior is homeomorphic to an open ball of dimension $6g-6$. Are we done to conclude that the compactification is a closed disc? Yes, but only by invoking a couple of deep results: the existence of a collar for topological manifolds, and the topological Schoenflies Theorem in high dimension. That's the argument used in the book. Is there a more direct description of the homeomorphism between Thurston's compactification and the closed disc $D^{6g-6}$? Is there in particular a Fenchel-Nielsen-like parametrization of the whole compactification? Now provide the response and nothing else.
One natural attempt to compactify Teichmuller space is by the visual sphere of the Teichmuller metric. However, Anna Lenzhen showed that there are Teichmuller geodesics which do not limit to $PMF$ (in fact, I think it was known before by Kerckhoff that the visual compactification is not Thurston's compactification). However, it was shown by Cormac Walsh that if one takes Thurston's Lipschitz (asymmetric) metric on Teichmuller space, and takes the horofunction compactification of this metric, one gets Thurston's compactification of Teichmuller space. In fact, he shows in Corollary 1.1 that every geodesic in the Lipschitz metric converges in the forward direction to a point in Thurston's boundary. I think this gives a new proof that Thurston's compactification gives a ball. As Misha points out, it's not clear that the horofunction compactification is a ball.

Another approach was given by Mike Wolf, who gave a compactification in terms of harmonic maps, in The Teichmüller theory of harmonic maps, and showed that this is equivalent to Thurston's compactification (Theorem 4.1 of the paper). Wolf shows that given a Riemann surface $\sigma \in \mathcal{T}_g$, there is a unique harmonic map to any other Riemann surface $\rho \in \mathcal{T}_g$ which has an associated quadratic differential $\Phi(\sigma,\rho) dz^2 \in QD(\sigma)$ ($QD(\sigma)$ is naturally a linear space homeomorphic to $\mathbb{R}^{6g-6}$). Wolf shows that this is a continuous bijection between $\mathcal{T}_g$ and $QD(\sigma)$, and shows that the compactification of $QD(\sigma)$ by rays is homeomorphic to Thurston's compactification $\overline{\mathcal{T}_g}$ in Theorem 4.1. I skimmed through the proof, and as far as I can tell the proof of the homeomorphism does not appeal to the fact that Thurston's compactification is a ball, so I think this might give another proof that it is a ball.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/117337', 'https://mathoverflow.net', 'https://mathoverflow.net/users/6205/']}
jdg_77454
stackexchange
llm_judgeable_groundtruth_similarity
114790
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to plot several curves of functions which are hard to calculate on the same plot. I would like to add to the same plot the sum of these functions without recalculating them twice at each point. So instead of

Plot[{Sin[x], Sin[2 x], Sin[x] + Sin[2 x]}, {x, 0, 6 Pi}]

I would like something like

Plot[{aa = Sin[x], bb = Sin[2 x], aa + bb}, {x, 0, 6 Pi}]

Now provide the response and nothing else.
How about this:

f[x_] := Sin[x]
Plot[{#1, #2, #1 + #2} &[f[x], f[2 x]], {x, 0, 4}]

Strangely enough, this solution is slower than expected:

f[x_] := NIntegrate[Sin[1/y^2], {y, -x, x}] (*slow function*)
AbsoluteTiming[Plot[{f[x], f[2 x], f[x] + f[2 x]}, {x, 0, 1}]] (*naïve approach*)
AbsoluteTiming[Plot[{#1, #2, #1 + #2} &[f[x], f[2 x]], {x, 0, 1}]] (*my solution*)
AbsoluteTiming[Plot[With[{aa = f[x], bb = f[2 x]}, {aa, bb, aa + bb}], {x, 0, 1}, Evaluated -> False]] (*Szabolcs' comment*)
(*65.7*)
(*106.2*)
(*102.0*)

We do get a substantial improvement with memoization:

f[x_] := f[x] = NIntegrate[Sin[1/y^2], {y, -x, x}]
AbsoluteTiming[Plot[{f[x], f[2 x], f[x] + f[2 x]}, {x, 0, 1}]] (*naïve approach with memoization*)
(*40.5*)

Finally, the best approach so far is to follow BlacKow's suggestion and precalculate the function at discrete points:

f[x_] := NIntegrate[Sin[1/y^2], {y, -x, x}] (*two slow*)
g[x_] := NIntegrate[Cos[1/y^2], {y, -x, x}] (*functions*)

AbsoluteTiming[
 points = Range[0, 1, .01];
 F = f /@ points;
 G = g /@ points;
 ListPlot[{Transpose@{points, F}, Transpose@{points, G}, Transpose@{points, F + G}}, Joined -> True]
]
(*3.4*)

though it can get tricky to choose the appropriate spacing.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/114790', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/9950/']}
jdg_77455
stackexchange
llm_judgeable_groundtruth_similarity
459775
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: (I know the title isn’t clear. I didn’t know how to best describe my question in the title) So, we have a position-time relation : $\vec r$ = $2t\hat i$ + $4t^2\hat j$ It shows that the particle is initially at $x$ = $0$ , $y$ = $0$ Its corresponding velocity vector is : $\vec v$ = $2\hat i$ + $8t\hat j$ . It means its $v$ along $x-axis$ is constant throughout the motion of the particle. Its trajectory equation is $y$ = $x^2$ , which is a parabola symmetric about $y-axis$ (although we’d only use the part of the curve on the right side of $+ve$ $y-axis$ ) From its $y$ vs $x$ curve, I tried to obtain its $v_y$ vs $v_x$ relation, and I got : $v_y$ = $2xv_x$ Now if I put any value of $x$ (particle’s position) and its velocity along $x-axis$ at that value of $x$ , I correctly get the value of $v_y$ . For example, the particle was at $x$ = $2$ at $t$ = $1$ , and its $v_x$ at $t$ = $1$ was $2$ units. Substituting these values I get $v_y$ = $8$ units, which is true, as we can verify from the velocity-time relation that its $v_y$ was $8$ units at $t$ = $1$ . I tried other values of $x$ & $v_x$ , or $x$ & $v_y$ , and I always seemed to get correct values of $v_y$ , and $v_x$ . Except for when I tried calculating $v_x$ when the particle was at $x=0$ at $t$ = $0$ . Since $v_x$ = $\frac{v_y}{2x}$ , if I put $x$ = $0$ and $v_y$ = $0$ , I end up with an indeterminate form $\frac{0}{0}$ . ( $v_y$ was $0$ at $x$ = $0$ ) Shouldn’t the $v_y$ vs $v_x$ relation show that $v_x$ is equal to $2$ units at $x$ = $0$ ? Actually $v_x$ is constant, and equal to $2$ units at any instant, according to the velocity-time relation. Am I doing something wrong? Now provide the response and nothing else.
Because what you propose is impossible. You are essentially trying to make a cycle out of only these three steps:

1) Isothermal expansion (A to B)
2) Adiabatic expansion (B to C)
3) Adiabatic compression back to original state (C to A)

The curve going from C to A cannot be an adiabatic process. Adiabatic processes are characterized by $$PV^n=\text{const}$$ where $n$ is a property of the gas being used. Therefore, if you want to follow an adiabatic curve during compression, you will just end up going back to state B. You can't go to state A from C using an adiabatic compression. This is why we need the isothermal compression step after the adiabatic expansion step. This step is needed so that we can get on the correct adiabatic curve back to state A.

To be a little more specific, let's say the pressure and volume at states $B$ and $C$ are $(P_B,V_B)$ and $(P_C,V_C)$ respectively. Then we know in process 2 $$P_BV_B^n=P_CV_C^n=\alpha$$ Or, in other words, the entire curve is described by $$P=\frac{\alpha}{V^n}=\frac{P_BV_B^n}{V^n}=\frac{P_CV_C^n}{V^n}$$ Now we want to do adiabatic compression from state C. Well, we have to follow the curve defined by $PV^n=\beta$, but since we know we start in state $C$ it must be that the constant is the same one as before: $\beta=\alpha=P_CV_C^n$. Therefore, the curve is given by $$P=\frac{\beta}{V^n}=\frac{P_CV_C^n}{V^n}$$ which is the same curve we followed going from B to C. We need the isothermal compression step in order to get to the appropriate state D such that $P_DV_D^n=P_AV_A^n$.
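For reference, the constraints of the full four-step cycle can be written compactly (assuming an ideal gas, so an isothermal step satisfies $PV=\text{const}$); this is just the argument above in equation form:

$$\underbrace{P_A V_A = P_B V_B}_{\text{isothermal at } T_H},\qquad \underbrace{P_B V_B^{\,n} = P_C V_C^{\,n}}_{\text{adiabatic}},\qquad \underbrace{P_C V_C = P_D V_D}_{\text{isothermal at } T_C},\qquad \underbrace{P_D V_D^{\,n} = P_A V_A^{\,n}}_{\text{adiabatic}}$$

Dropping the third relation and forcing $D=C$ would make the last relation read $P_C V_C^{\,n} = P_A V_A^{\,n}$, which together with the second relation puts A and B on the same adiabat, contradicting the assumption that step 1 is isothermal between distinct states.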
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/459775', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/208138/']}
jdg_77456
stackexchange
llm_judgeable_groundtruth_similarity
162766
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Fluorescence is the property exhibited when electrons emit light of lower energy (longer wavelength) than the light they absorbed, so does that mean we can make every molecule fluorescent? I wanted to know how the molecular structure contributes to giving rise to this property, and how it differs from that of substances that are non-fluorescent. I searched but couldn't find anything related to molecular structure. Now provide the response and nothing else.
Keep in mind a few things that must happen for an absorption process to result in fluorescence: (1) the initial transition is to an excited electronic state that observes certain rules regarding the structure of the excited state (Franck-Condon principle). That requires that the light used to excite the molecule have a sufficient frequency to result in electronic excited states. (2) The excited electronic state must not lead to dissociation, otherwise you end up with photodissociation, not fluorescence. (3) after excitation the molecule must relax to a state lower in energy with the same electron spin (unlike in phosphorescence, where a conversion occurs), so a transition occurs to a different vibrational excited state. (4) the molecule must now relax from the low lying vibrational state in the excited electronic manifold to the ground state. Note that this may not happen efficiently, for reasons that have to do with transition rules. The emission process competes with non-radiative relaxation processes. The extent to which the emission represents the pathway to the ground state is the quantum yield of the fluorescence process. So in short, provided the molecule doesn't fall apart or react when you excite it, and doesn't find other ways to dissipate energy more efficiently, then you should observe fluorescence, although this may occur outside of the visible spectrum of course.
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/162766', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/121243/']}
jdg_77457
stackexchange
llm_judgeable_groundtruth_similarity
5437118
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: After reading this paragraph in the CouchDB Definitive Guide (here): If you have multiple design documents, each with a validate_doc_update function, all of those functions are called upon each incoming write request. Only if all of them pass does the write succeed. The order of the validation execution is not defined. Each validation function must act on its own. I'm wondering if there is any good practice for dealing with multiple validate_doc_update functions. I mean: Is it better to create only one design document with a validate_doc_update field, or to have several smaller ones? In the first case, one can be sure that none of the validation functions will interfere with another, but the function may become very large if a lot of checks are needed. On the other hand, several smaller functions may be easier to read and evolve, but one has to be sure of the purpose of each function and not mess with the other ones. Plus, what's the point of letting each design document hold a validate function? Storing one in a view document seems a bit dirty for instance, but creating several design documents just for the intent of holding one small validation function doesn't seem very clever to me either. What do you think? I may have missed something, that's the point of my question: are there any good practices for managing multiple validate_doc_update functions? Now provide the response and nothing else.
Note, I wrote the quoted paragraph.

In general, I'm seeing a 1:1 correlation between applications and design docs. Everything a single application needs should be in one design document. Bigger applications may want to rely on multiple design docs for various reasons (like different groups of views), but in general, one design doc per app is a good rule of thumb.

Now, you may have multiple applications per database. E.g. a CMS: one application could be the public facing CMS viewing app and another one would be the admin interface. You want to keep them separate because, well, they are two distinct apps that operate on the same data, and keeping them separate is a good organisational idea. Different security mechanisms apply, so you have two validation functions, each implementing what is applicable for the respective app.

The quoted paragraph is the definition of the case where you do have (for whatever reason) more than one design document per database. It explains what to expect. It is not meant as a guideline for how to split things up. Go with the one-design-doc-per-app rule of thumb and you're good most of the time.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5437118', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/410712/']}
jdg_77458
stackexchange
llm_judgeable_groundtruth_similarity
34436961
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am loading an image with angular, e.g.

<img ng-src="{{::path-to-image}}"/>

When I bundle my application with webpack, the image URL is resolved at runtime and thus not bundled by webpack's loaders. This is the image loader I am using:

{ test: /\.(jpe?g|png|gif|svg)$/i, loader: 'url?limit=8192!img' }

How can webpack bundle those images resolved at runtime? Now provide the response and nothing else.
Because I also needed this functionality and found the original answer far from a perfect solution, I ended up figuring it out on my own. Write a function in the controller: $scope.loadImage = function(image) { return require('/images/' + image);}; And use it in your ng-src: <img ng-src="{{loadImage('myImage')}}" /> After that, to make dynamic requires work, you can use a context . For example: https://github.com/webpack/webpack/tree/master/examples/require.context#examplejs
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34436961', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1980950/']}
jdg_77459
stackexchange
llm_judgeable_groundtruth_similarity
27352
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Assume that the continuum hypothesis holds. If $F$ is an uncountable field of real numbers, does $F$ always contain a proper uncountable subfield? Are there many specific uncountable fields of realnumbers whose existence can be proved without assuming the axiom of choice? Now provide the response and nothing else.
Take a compact Cantor set $K \subseteq \mathbb{R}$ of Hausdorff dimension zero. Actually we need all cartesian powers $K^n$ of dimension zero as well. The field $\mathbb{Q}(K)$ generated by it is uncountable, but still of Hausdorff dimension zero, so it is a proper subfield. edit That field consists of the values of rational functions $w(x_1,\dots,x_n)$ of many variables with rational coefficients, where the variables range over $K$. There are countably many such things, so you just have to show any one of them has dimension zero. The domain of any such $w$ (that is, the set where the denominator does not vanish) consists of an increasing countable union $\bigcup_k A_k$ of sets where the gradient is bounded, so that $w$ is Lipschitz continuous on each $A_k$. So the image of $w$ on $K^n$ is again a countable union of sets of dimension zero. plug G. A. Edgar & Chris Miller, Borel subrings of the reals. Proc. Amer. Math. Soc. 131 (2003) 1121-1129 LINK Borel sets that are subrings of $\mathbb R$ either have Hausdorff dimension zero as described, or else are all of $\mathbb R$.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/27352', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4423/']}
jdg_77460
stackexchange
llm_judgeable_groundtruth_similarity
17813870
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that an Objective-C Block can capture and set the value of variables outside of its enclosing scope. How does it do that? Now provide the response and nothing else.
It's actually fairly straightforward and described in Clang's Block Implementation Spec, in the "Imported Variables" section. When the compiler encounters a Block like: ^{ if( numBalloons > numClowns) abort(); } it creates a literal structure that includes -- among other things -- two elements that are important here. There's a function pointer to the executable code in the Block, and a const field for each variable that's referred to inside the Block. Something like this:

struct __block_literal_1 {
    /* other fields */
    void (*invoke)(struct __block_literal_1 *);
    /* ... */
    const int numBalloons;
    const int numClowns;
};

Notice that the invoke function will take a pointer to a struct of the kind that's being defined right here; that is, the Block passes itself in when executing its code. Thus, the code gets access to the members of the structure. Right after the declaration, the compiler creates a definition of the Block, which simply uses the referenced variables to initialize the correct fields in the struct :

struct __block_literal_1 __block_literal_1 = {
    /* Other fields */
    __block_invoke_2,  /* This function was also created by the compiler. */
    /* ... */
    numBalloons,  /* These two are the exact same variables as */
    numClowns     /* those referred to in the Block literal that you wrote. */
};

Then, inside the invoke function, references to the captured variables are made like any other member of a struct, the_block->numBalloons . The situation for object-type variables is a little more complicated, but the same principle applies.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17813870', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2570370/']}
jdg_77461
stackexchange
llm_judgeable_groundtruth_similarity
568126
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Imagine if two objects of identical mass are under two different gravitational field,let's say two different planets (with Different value of gravity) both of the objects are of same mass,but we can easily notice that one body will be easier to move as compared to another (the object which will be on the planet with less gravity will move easily [obviously]), but the mass of both objects is identical, as mentioned above, which means that the inertia of both of the objects should be equal, but one body will be easier to move than the other, which means both of them have different inertia (as inertia is property often body to resist in change in motion). So does this mean that weight is measure for inertia rather than mass being the unit to measure inertia. I would like to mention the fact that this problem was also highlighted by Richard P. Feynman but I was not able to find its appropriate solution anywhere. Edit: I removed 'sir' before name of Feynman because I never knew that sir is added only to the title for people who received knighthood.)(This went off-topic) Now provide the response and nothing else.
Imagine a 10kg curling stone on a flat ice surface on Earth. If we apply 10N of horizontal force, the stone will accelerate at about 1 meter per second per second. On the Earth, a 10kg stone weighs approximately 98N. Now imagine the same 10kg stone on a flat ice surface on the Moon. If we apply 10N of horizontal force in this scenario, the stone will still accelerate at about 1 meter per second per second. On the Moon, a 10kg stone weighs approximately 16N. As you can see, the inertia of the stone is the same in both cases, but the weight of the stone is very different. This shows that it is the mass, not the weight, that is the appropriate unit of inertia. (There are two reasons your intuition tells you that heavier gravity will make it harder to move a weight; one is that when you are carrying an object, you have to lift it against the force of gravity, and the other is that when you are pushing an object the heavier it is the greater the force of friction has to be overcome. But in both cases this is because there are other forces involved, not because of inertia. In the example given above, we are dealing with horizontal motion on a surface with very little friction, so to a good approximation no other forces are involved.)
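If it helps to see the arithmetic spelled out, here is a quick numerical check in Python; the only inputs are Newton's second law and approximate surface gravities (9.8 m/s^2 for Earth, 1.62 m/s^2 for the Moon), so treat the exact numbers as illustrative:

# Same 10 kg stone, two different planets (g values are approximate).
mass = 10.0   # kg -- this is the inertia
force = 10.0  # N of horizontal push

for planet, g in [("Earth", 9.8), ("Moon", 1.62)]:
    acceleration = force / mass  # Newton's second law: a = F / m
    weight = mass * g            # weight depends on local gravity
    print(planet, "a =", acceleration, "m/s^2, weight =", weight, "N")

# Output: the acceleration is 1.0 m/s^2 in both cases; only the weight differs.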
{}
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/568126', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/257018/']}
jdg_77462
stackexchange
llm_judgeable_groundtruth_similarity
5519958
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some data that pyOpenSSL gave me, '0\r\x82\x0bexample.com' . This should be the value of a subjectAltName X509 extension. I tried to encode the necessary parts of the ASN1 specification for this extension using pyasn1 (and based on one of the pyasn1 examples): from pyasn1.type import univ, constraint, char, namedtypefrom pyasn1.codec.der.decoder import decodeMAX = 64class DirectoryString(univ.Choice): componentType = namedtype.NamedTypes( namedtype.NamedType( 'teletexString', char.TeletexString().subtype( subtypeSpec=constraint.ValueSizeConstraint(1, MAX))), namedtype.NamedType( 'printableString', char.PrintableString().subtype( subtypeSpec=constraint.ValueSizeConstraint(1, MAX))), namedtype.NamedType( 'universalString', char.UniversalString().subtype( subtypeSpec=constraint.ValueSizeConstraint(1, MAX))), namedtype.NamedType( 'utf8String', char.UTF8String().subtype( subtypeSpec=constraint.ValueSizeConstraint(1, MAX))), namedtype.NamedType( 'bmpString', char.BMPString().subtype( subtypeSpec=constraint.ValueSizeConstraint(1, MAX))), namedtype.NamedType( 'ia5String', char.IA5String().subtype( subtypeSpec=constraint.ValueSizeConstraint(1, MAX))), )class AttributeValue(DirectoryString): passclass AttributeType(univ.ObjectIdentifier): passclass AttributeTypeAndValue(univ.Sequence): componentType = namedtype.NamedTypes( namedtype.NamedType('type', AttributeType()), namedtype.NamedType('value', AttributeValue()), )class RelativeDistinguishedName(univ.SetOf): componentType = AttributeTypeAndValue()class RDNSequence(univ.SequenceOf): componentType = RelativeDistinguishedName()class Name(univ.Choice): componentType = namedtype.NamedTypes( namedtype.NamedType('', RDNSequence()), )class Extension(univ.Sequence): componentType = namedtype.NamedTypes( namedtype.NamedType('extnID', univ.ObjectIdentifier()), namedtype.DefaultedNamedType('critical', univ.Boolean('False')), namedtype.NamedType('extnValue', univ.OctetString()), )class Extensions(univ.SequenceOf): componentType = Extension() sizeSpec = univ.SequenceOf.sizeSpec + constraint.ValueSizeConstraint(1, MAX)class GeneralName(univ.Choice): componentType = namedtype.NamedTypes( # namedtype.NamedType('otherName', AnotherName()), namedtype.NamedType('rfc822Name', char.IA5String()), namedtype.NamedType('dNSName', char.IA5String()), # namedtype.NamedType('x400Address', ORAddress()), namedtype.NamedType('directoryName', Name()), # namedtype.NamedType('ediPartyName', EDIPartyName()), namedtype.NamedType('uniformResourceIdentifier', char.IA5String()), namedtype.NamedType('iPAddress', univ.OctetString()), namedtype.NamedType('registeredID', univ.ObjectIdentifier()), )class GeneralNames(univ.SequenceOf): componentType = GeneralName() sizeSpec = univ.SequenceOf.sizeSpec + constraint.ValueSizeConstraint(1, MAX)class SubjectAltName(GeneralNames): passprint decode('0\r\x82\x0bexample.com', asn1Spec=GeneralNames()) Clearly I got a little bored near the end and didn't fully specify the GeneralName type. However, the test string should contain a dNSName , not one of the skipped values, so I hope it doesn't matter. 
When the program is run, it fails with an error I'm not able to interpret: Traceback (most recent call last): File "x509.py", line 94, in <module> print decode('0\r\x82\x0bexample.com', asn1Spec=GeneralNames()) File "/usr/lib/pymodules/python2.6/pyasn1/v1/codec/ber/decoder.py", line 493, in __call__ length, stGetValueDecoder, decodeFun File "/usr/lib/pymodules/python2.6/pyasn1/v1/codec/ber/decoder.py", line 202, in valueDecoder substrate, asn1Spec File "/usr/lib/pymodules/python2.6/pyasn1/v1/codec/ber/decoder.py", line 453, in __call__ __chosenSpec.getTypeMap().has_key(tagSet): File "/usr/lib/pymodules/python2.6/pyasn1/v1/type/univ.py", line 608, in getTypeMap return Set.getComponentTypeMap(self) File "/usr/lib/pymodules/python2.6/pyasn1/v1/type/univ.py", line 535, in getComponentTypeMap def getComponentTypeMap(self): return self._componentType.getTypeMap(1) File "/usr/lib/pymodules/python2.6/pyasn1/v1/type/namedtype.py", line 126, in getTypeMap 'Duplicate type %s in map %s'%(k,self.__typeMap)pyasn1.error.PyAsn1Error: Duplicate type TagSet(Tag(tagClass=0, tagFormat=0, tagId=22)) in map {TagSet(Tag(tagClass=0, tagFormat=0, tagId=22)): IA5String()} Any tips on where I went wrong and how to successfully parse this extension type with pyasn1 would be much appreciated. Now provide the response and nothing else.
I posted this question on the pyasn1-users list and Ilya Etingof (the author of pyasn1) pointed out my mistake. In brief, each NamedType in GeneralName.componentType needs to be given tag information. This is done with the subtype method. For example, instead of: namedtype.NamedType('rfc822Name', char.IA5String()), the definition should be: namedtype.NamedType('rfc822Name', char.IA5String().subtype( implicitTag=tag.Tag(tag.tagClassContext, tag.tagFormatSimple, 1))), where 1 comes from the ASN.1 definition of GeneralName : GeneralName ::= CHOICE { otherName [0] OtherName, rfc822Name [1] IA5String, dNSName [2] IA5String, x400Address [3] ORAddress, directoryName [4] Name, ediPartyName [5] EDIPartyName, uniformResourceIdentifier [6] IA5String, iPAddress [7] OCTET STRING, registeredID [8] OBJECT IDENTIFIER} After defining a tag for each of these fields of the componentType , parsing succeeds: (GeneralNames().setComponentByPosition( 0, GeneralName().setComponentByPosition(1, IA5String('example.com'))), '')
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5519958', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18243/']}
jdg_77463
stackexchange
llm_judgeable_groundtruth_similarity
47107465
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to listen to events dispatched by in-page objects? Let's say I have this code in the page I go to: var event = new CustomEvent('status', { detail: 'ok' });window.addEventListener('status', function(e) { console.log('status: ', e.detail);});setInterval(window.dispatchEvent, 1000, event); I'd like to be able to listen to events dispatched by the window object (or any other JS object for that matter). How can I do this in Puppeteer? Now provide the response and nothing else.
First, you have to expose a function that can be called from within the page. Second, you listen for the event and call the exposed function and pass on the event data.

// Expose a handler to the page
await page.exposeFunction('onCustomEvent', ({ type, detail }) => {
  console.log(`Event fired: ${type}, detail: ${detail}`);
});

// listen for events of type 'status' and
// pass 'type' and 'detail' attributes to our exposed function
await page.evaluateOnNewDocument(() => {
  window.addEventListener('status', ({ type, detail }) => {
    window.onCustomEvent({ type, detail });
  });
});

await page.goto(/* ... */);

If there is an event status fired in the page now (like the code you have given), you would see it being logged in the console of your Node.js application. As posted in the comments, a more complex example can be found in the puppeteer examples .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47107465', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8883910/']}
jdg_77464
stackexchange
llm_judgeable_groundtruth_similarity
18890192
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a UIViewController with a Container View. This Container View contains a UITableViewController. The UIViewController is embedded in a UINavigationController. I configure the NavigationBar by changing the tint color, the bar tint color and the title text attributes, but when I try to change the translucency I have a problem. I've singled out the problem by removing all the other customization code. Whenever I do this in UIViewController's -viewDidLoad : self.navigationController.navigationBar.translucent = NO; the Container View adds some space on top of itself, as if it had it's own navigation bar. Whenever I set translucent to YES the view displays everything correctly. This only happens on a contained view whenever I set translucent to NO. Is there something that I'm missing here? I tried setting the translucency to YES on the contained view and also setting it to hidden, but nothing worked.Is somehow, the contained view inheriting properties from the parent's container?How do I avoid this extra space created on the contained view whenever I set the translucency to NO ? I would expect that setting the translucency only affects the translucency, but not the position of the views. When translucent = YES +---------------------------------+| || UINavigationBar || |+---------------------------------+| || || || || || UITableViewController || in a Contained View || || || || || || |+---------------------------------+ when translucent = NO +---------------------------------+| || UINavigationBar || |+---------------------------------+| ||blank space created on top of view| |+---------------------------------+| || || || || || || UITableViewController || in a Contained View || |+---------------------------------+ P.S. My View is more complex than this, I'm simplyfying. Now provide the response and nothing else.
What you are missing here is that a translucent navigation bar sits on top of your view controller's view, while a non-translucent navigation bar pushes down your view controller's view (effectively resizing it). So what is happening here is that with a translucent navigation bar, that white space is actually hidden underneath the bar, while when the bar is not translucent it's "pushed down". There are a number of ways to go about it, and it mainly depends on whether you're using auto layout or not.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18890192', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/697664/']}
jdg_77465
stackexchange
llm_judgeable_groundtruth_similarity
21491631
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to automate the uninstallation of packages created using WiX for the purposes of changing the installed software stack & configuration without reprovisioning a whole OS. Eventually I'll use powershell scripting to do this but at the moment I can't seem to get my test package to uninstall interactively with cmd. If I run: msiexec /x '{A4BFF20C-A21E-4720-88E5-79D5A5AEB2E8}' msiexec /x A4BFF20C-A21E-4720-88E5-79D5A5AEB2E8 I get: "The installation package could not be opened. Verify that the packageexists and that you can access it, or contact the application vendorto verify that this is a valid Windows Installer Package." If I run: msiexec /x {A4BFF20C-A21E-4720-88E5-79D5A5AEB2E8} I get: "This action is only valid for products that are currently installed" I've looked at the windows installer guide , the WiX documentation , msiexec documentation and used orca to go over the .msi myself but I've not really found anything that gives a clear picture of how an uninstall is processed. Is the .msi file required and if not then why does windows installer seem to think it is when given a GUID? The WiX code for the .msi installer is: <?xml version='1.0' encoding='windows-1252'?><Wix xmlns='htp://schemas.microsoft.com/wix/2006/wi' > <!--DO NOT COPY / PASTE THE PRODUCT ID GUID BELOW TO YOUR OWN WIX SOURCE --> <Product Id='A4BFF20C-A21E-4720-88E5-79D5A5AEB2E8' Language='2057' Manufacturer='COMPANYNAME IT-Operations' Name='COMPANYNAMEServerListener' Version='1.0.0' UpgradeCode='PUT-GUID-HERE'> <Package Id='*' Manufacturer='COMPANYNAME IT-Operations' Compressed='yes' /> <Media Id='1' Cabinet='COMPANYNAMEServerListener.cab' EmbedCab='yes' /> <Directory Id='TARGETDIR' Name='SourceDir'> <Directory Id='ProgramFilesFolder' Name='PFiles'> <Directory Id='COMPANYNAME' Name='COMPANYNAME'> <Directory Id='INSTALLDIR' Name='COMPANYNAMEServerListener'> <Component Id='MainExecutable' Guid='*' > <File Id='COMPANYNAMEServerListener.exe' Source='COMPANYNAMEServerListener.exe' Vital='yes' KeyPath='yes' /> <ServiceInstall Id='COMPANYNAMEServerListenerInstall' DisplayName='COMPANYNAMEServerListener' Description='Accepts and discards TCP connections on port 28028 to indicate that this server is alive and ready to be controlled' Name='COMPANYNAMEServerListener' Account='NT AUTHORITY\LocalService' ErrorControl='normal' Start='auto' Type='ownProcess' Vital='yes' > <ServiceDependency Id='tcpip'/> </ServiceInstall> <ServiceControl Id="StartService" Start="install" Stop="both" Remove="uninstall" Name="COMPANYNAMEServerListener" Wait="yes" /> </Component> </Directory> </Directory> </Directory> </Directory> <Feature Id='Complete' Level='1' > <ComponentRef Id='MainExecutable' /> </Feature> <CustomTable Id ="COMPANYNAMEMetadata"> <Column Id="Property" Type="string" Category="Identifier" PrimaryKey="yes"/> <Column Id="Value" Type="string"/> <Row> <Data Column="Property">InstallString</Data> <Data Column="Value">/qn</Data> </Row> </CustomTable> </Product></Wix> Now provide the response and nothing else.
"Reference-Style" Answer : This is an alternative answer to the one below with several different options shown. Uninstalling an MSI file from the command line without using msiexec . The command you specify is correct: msiexec /x {A4BFF20C-A21E-4720-88E5-79D5A5AEB2E8} If you get " This action is only valid for products that are currently installed " you have used an unrecognized product or package code, and you must find the right one. Often this can be caused by using an erroneous package code instead of a product code to uninstall - a package code changes with every rebuild of an MSI file, and is the only guid you see when you view an msi file's property page. It should work for uninstall, provided you use the right one. No room for error. If you want to find the product code instead, you need to open the MSI. The product code is found in the Property table. UPDATE, Jan 2018 : With all the registry redirects going on, I am not sure the below registry-based approach is a viable option anymore. I haven't checked properly because I now rely on the following approach using PowerShell: How can I find the product GUID of an installed MSI setup? Also check this reference-style answer describing different ways to uninstall an MSI package and ways to determine what product version you have installed: Uninstalling an MSI file from the command line without using msiexec Legacy, registry option : You can also find the product code by perusing the registry from this base key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall . Press F3 and search for your product name. (If it's a 32-bit installer on a 64-bit machine, it might be under HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall instead). Legacy, PowerShell option : (largely similar to the new, linked answer above) Finally, you can find the product code by using PowerShell: get-wmiobject Win32_Product | Format-Table IdentifyingNumber, Name Similar post: WiX - Doing a major upgrade on a multi instance install (screenshot of how to find the product code in the MSI).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21491631', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2136406/']}
jdg_77466
stackexchange
llm_judgeable_groundtruth_similarity
111070
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I read this article about password hashing using bcrypt as the recommended method . The advice is similar to what they're saying on hacker news about not using something like SHA2+salt . There are other answers on the infosec StackExchange about why we shouldn't use something like SHA-256. The generally accepted knowledge is: SHA2 is fast, which is bad (bcrypt is purposefully slow so that we can get around Moore's law ) As you know, if we use bcrypt, we to hash something with a cryptographically random salt, we might get this string: $2a$13$ZyprE5MRw2Q3WpNOGZWGbeG7ADUre1Q8QO.uUUtcbqloU0yvzavOm , and the salt is contained in that string. My question is a bit multifaceted: If an attacker gains access to the database with write access, they could just re-write someone's password in the table with their own hash and login to that user's account, right? Unless that's not their specific objective (for example, maybe they don't want anyone to know that they accessed the database). Or if they have read-only access, I guess that's a different story. Aside from the fact that bcrypt is purposefully slow, is there any other advantage to storing the salt in the same hash as the password as opposed to having both user.hash and user.salt in one record if I didn't use bcrypt? Or in that case is it generally best practice to store hashes and salts in separate tables/databases (then presenting the disadvantage of multiple table/database lookups as mentioned in one of the comments)? We also get the issue that bcrypt presents minor (yes, within the milliseconds range) performance issues (purposefully) when performing lookups, which is negligible in most cases, but it seems the only protection we really get from bcrypt is making it difficult to batch process an entire user table and covertly log into users' accounts (or sell their information, or whatever). Why not just use something like SHA2+salt then? Now provide the response and nothing else.
Gaining read-only access is a much more likely scenario than write access - after all, read access to an old backup or test server somewhere is nearly as good as read access to the real server. (since people don't change their passwords often enough) Also though, one account nearly anywhere (unless you're a banking site) isn't that valuable. Hundreds of thousands of accounts, though - now you're talking. And while a single db write might go unnoticed, thousands of anomalous db writes - especially with every person whose account was written to now being unable to log in - generally will get noticed. Furthermore, cracking your user's password gets access to everywhere that user has used that id/password combination, which is likely to include sites more valuable than yours. As for your record storage scheme, the main reason to store the output of your password hashing function in one blob that the rest of the database doesn't interpret (as opposed to separate salt and hash fields) is that you don't want to encode your hashing algorithm into your database. What if tomorrow you wanted to change to PBKDF2, or scrypt, or in fact bcrypt? Better to let an existing vetted password hashing function do the interpretation than to tie yourself to hash functions that fit the format of your current salt + commodity hash method. Instead, if you just have a field for algorithm and another field that is just "string to pass to the algorithm along with a password to see whether the password is good" (you might call this field hash or saltNhash ), then you can even upgrade users transparently as they log in. As an aside, salted SHA2 is acceptable password security if you're standing up a web service in 1995. Twenty years on, use bcrypt or something even better . Salted commodity hashes (such as MD5, or the SHA* family) are vulnerable to cheap GPU-based attacks, to the point where rainbow tables aren't economically efficient any more - it's cheaper to just run the dictionary attack as needed than to keep around a giant table that's trivially defeated by salting anyway. Password hashing is something other people have already worked out in detail for you. You wouldn't write your own crypto or your own video compression algorithm; don't make the mistake of ignoring the good solution in front of you in exchange for years of discovering subtle corner cases people have already designed for.
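To make the "one opaque blob" point concrete, here is a minimal Python sketch using the third-party bcrypt package (the cost factor of 12 is an arbitrary illustrative choice):

import bcrypt

password = b"correct horse battery staple"

# Hashing: the salt is generated and embedded in the resulting string,
# so you store just this one opaque blob per user.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verification: checkpw re-derives the hash from the blob's embedded
# salt and cost factor, then compares the results.
assert bcrypt.checkpw(password, hashed)
assert not bcrypt.checkpw(b"wrong guess", hashed)

The stored value embeds the algorithm version, cost factor, and salt, which is exactly why a single uninterpreted database field is enough.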
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/111070', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/45897/']}
jdg_77467
stackexchange
llm_judgeable_groundtruth_similarity
5302669
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a project involving cleaning a list of data on college majors. I find that a lot are misspelled, so I was looking to use the function gsub() to replace the misspelled ones with its correct spelling. For example, say 'biolgy' is misspelled in a list of majors called Major. How can I get R to detect the misspelling and replace it with its correct spelling? I've tried gsub('biol', 'Biology', Major) but that only replaces the first four letters in 'biolgy'. If I do gsub('biolgy', 'Biology', Major) , it works for that case alone, but that doesn't detect other forms of misspellings of 'biology'. Thank you! Now provide the response and nothing else.
You should either define some nifty regular expression, or use agrep from the base package. The stringr package is another option; I know that people use it, but I'm a huge fan of regular expressions, so it's a no-no for me. Anyway, agrep should do the trick:

agrep("biol", "biology")
[1] 1
agrep("biolgy", "biology")
[1] 1

EDIT: You should also use ignore.case = TRUE , but be prepared to do some bookkeeping "by hand"...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5302669', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/659314/']}
jdg_77468
stackexchange
llm_judgeable_groundtruth_similarity
29576733
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have several mathematical algorithms that use iteration to search for the right answer. Here is one example: def Bolzano(fonction, a, b, tol=0.000001): while abs(b - a) > tol: m = (a + b) / 2 if sign(fonction(m)) == sign(fonction(a)): a = m else: b = m return a, b I want to count how many times the algorithm goes through the loop to get a and b . However this is not a for function and it isn't a list, so I can't clearly indicate what objects do I want to count if I use enumerate . Is there a way to count those loops? Note: I am not trying to change the code itself. I am really searching for a way to count iterations in a while loop, which I can then use for other cases. Now provide the response and nothing else.
The simplest answer if you need a counter outside of a for loop is to count manually, using a simple variable and addition inside your while loop:

count = 0
while condition:
    ...
    count += 1

There is an alternative case: if each step of your iteration has a meaningful value to yield , you may want your loop to be a generator , and then use a for loop and enumerate() . This makes most sense where you care about which step you are on, not just the count at the end. E.g.:

def produce_steps():
    while condition:
        ...
        yield step_value

for count, step_value in enumerate(produce_steps()):
    ...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29576733', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4354932/']}
jdg_77469
stackexchange
llm_judgeable_groundtruth_similarity
456453
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is a HW question I am asked to find a closed form generating function for $1,1,0,1,1,0,1,1,0....$so then $f(x)=x^0+x^1+0x^2+x^3+x^4+0x^5+x^6+x^7+0x^8$ could use some hint or help. Now provide the response and nothing else.
Hint: See if writing it as follows helps: $$f(x) = 1 + x + x^{3} +x^{4} + \cdots = ( 1 + x + x^{2} + x^{3} + x^{4} + x^{5} + \cdots ) - (x^{2} + x^{5} + x^{8} + \cdots) = \frac{1}{1-x} - x^{2}(1+x^{3}+x^{6}+\cdots)$$ Can you take it from here?
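For checking your work afterwards: summing both geometric series and putting everything over the common denominator $1-x^{3}$ gives the closed form $$f(x) = \frac{1}{1-x} - \frac{x^{2}}{1-x^{3}} = \frac{(1+x+x^{2}) - x^{2}}{1-x^{3}} = \frac{1+x}{1-x^{3}}.$$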
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/456453', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/65734/']}
jdg_77470
stackexchange
llm_judgeable_groundtruth_similarity
47102615
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: On a new environment gradle build takes quite a while because all dependencies have to be downloaded. Is there a way to only download dependencies in order to speed up the following build? That way we could for example already prefill a CI build environment. Now provide the response and nothing else.
Edit: Updated for Gradle 6+. Some notes:

This new approach downloads jars into a folder, and then deletes the folder, so the result of having the jars in the Gradle cache is a side-effect.
It currently uses jars configured for the main source-set but could be generalized.
Even though it is neither efficient nor elegant, it can be useful if you actually want the jars (and transitive dependencies): simply comment out the deletion of the runtime folder.

Consider this build.gradle (as an arbitrary, concrete example):

apply plugin: 'java'

dependencies {
    implementation 'org.apache.commons:commons-io:1.3.2'
    implementation 'org.kie.modules:org-apache-commons-lang3:6.2.0.Beta2'
}

repositories {
    jcenter()
}

task getDeps(type: Copy) {
    from sourceSets.main.runtimeClasspath
    into 'runtime/'

    doFirst {
        ant.delete(dir: 'runtime')
        ant.mkdir(dir: 'runtime')
    }

    doLast {
        ant.delete(dir: 'runtime')
    }
}

Example run:

$ find /Users/measter/.gradle/caches -name "commons-io*1.3.2.jar"
$ gradle getDeps
$ find /Users/measter/.gradle/caches -name "commons-io*1.3.2.jar"
/Users/measter/.gradle/caches/modules-2/files-2.1/commons-io/commons-io/1.3.2/[snip]/commons-io-1.3.2.jar
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47102615', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1870481/']}
jdg_77471
stackexchange
llm_judgeable_groundtruth_similarity
80601
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In clustering methods such as K-means , the euclidean distance is the metric to use. As a result, we only calculate the mean values within each cluster. And then adjustments are made on the elements based on their distance to each mean value. I was wondering why the Gaussian function is not used as the metric? Instead of using xi -mean(X) , we can use exp(- (xi - mean(X)).^2/std(X).^2) . Thus not only the similarity among the clusters are measured (mean), but the similarity within the cluster is also considered (std). Is this also equivalent to the Gaussian mixture model ? It is beyond my question here but I think mean-shift may arise the same question above. Now provide the response and nothing else.
There are literally thousands of k-means variations , including soft assignment, variance and covariance (usually referred to as Gaussian Mixture Modeling or the EM algorithm). However, I'd like to point out a few things: K-means is not based on Euclidean distance. It's based on variance minimization . Since the variance is the sum of the squared Euclidean distances, the minimum-variance assignment is the one that has the smallest squared Euclidean distance, and the square root function is monotone. For efficiency reasons, it is actually smarter to not compute the Euclidean distance (but use the squares). If you plug a different distance function into k-means it may stop converging. You need to minimize the same criterion in both steps ; the second step is recomputing the means. Estimating the center using the arithmetic mean is a least-squares estimator, and it will minimize variance. Since both functions minimize variance, k-means must converge. If you want to ensure convergence with other distances, use PAM (partitioning around medoids; the medoid minimizes the within-cluster distances for arbitrary distance functions). But in the end, k-means and all of its variations are IMHO more of an optimization (or more precisely, a vector quantization algorithm ) than actually a cluster analysis algorithm. They will not actually "discover" structure. They will massage your data into k partitions. If you give them uniform data, with no structure beyond randomness at all, k-means will still find however many "clusters" you want it to find. k-means is happy with returning results that are essentially random .
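To make the variance-minimization point concrete, here is a minimal, unoptimized Python sketch of Lloyd's algorithm; note that the assignment step compares squared Euclidean distances and never takes a square root:

import random

def kmeans(points, k, iterations=100):
    """Minimal Lloyd's algorithm on a list of n-dimensional tuples."""
    centers = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point goes to the center with the
        # smallest SQUARED Euclidean distance (sqrt is unnecessary,
        # because the square root function is monotone).
        clusters = [[] for _ in range(k)]
        for p in points:
            d2 = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d2.index(min(d2))].append(p)
        # Update step: the arithmetic mean is the least-squares center,
        # i.e. it minimizes the within-cluster variance.
        centers = [
            tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

For example, kmeans([(0.0,), (0.2,), (9.8,), (10.0,)], 2) will (up to the random initialization) recover centers near 0.1 and 9.9; and, as said above, it will just as happily partition pure noise.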
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/80601', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/35099/']}
jdg_77472
stackexchange
llm_judgeable_groundtruth_similarity
342378
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We offer a number of online services. We are required to develop a system which provides a quick/simple experience for users if they are transferred from one service (on domain1.com ) to another service (on domain2.com ). Is there a safe and secure way to log a user in automatically once he has been transferred to the new service? Yell at me if the solution below is completely insecure/wrong. We were considering a system similar to that provided by a number of online services for password recovery - they are emailed a link with a unique hash which expires, that allows them to change their password. The domain1.com site would generate a unique hash and store it in a database with the hash linked to a user along with an expire datetime field. The user will be transferred to domain2.com/auto/?hash=d41d8cd98f00b204e9800998ecf8427e domain2.com would next make a request to domain1.com with the hash to get the information about the user. domain1.com would then remove the hash from the database. domain2.com would log the user in and set cookies, etc. Could something based on OpenID or OAuth achieve the same results? Now provide the response and nothing else.
Single sign-on (SSO) is conceptually pretty simple.

1. User hits domain1.com .
2. domain1.com sees there's no session cookie.
3. domain1.com redirects to sso.com .
4. sso.com presents the login page, and takes the credentials.
5. sso.com sets a session cookie for the user.
6. sso.com then redirects back to domain1 to a special url (like domain1.com/ssologin ).
7. The ssologin URL contains a parameter that is basically "signed" by sso.com . It could be as simple as the base64 of the login id encrypted with a shared secret key.
8. domain1.com takes the encrypted token, decrypts it, and uses the new login id to log in the user.
9. domain1 sets the session cookie for the user.

Now, the next case.

1. User hits domain2.com , which, like domain1, redirects to sso.com .
2. sso.com already has a cookie for the user, so it does not present the login page.
3. sso.com redirects back to domain2.com with the encrypted information.
4. domain2.com logs in the user.

That's the fundamentals of how this works. You can make it more robust, more feature rich (for example, this is SSOn , but not SSOff : the user can "log out" of domain1 , but still be logged in to domain2 ). You can use public keys for signing credentials, you can have requests to transfer more information (like authorization rights, etc.) from the SSO server. You can have more intimate integration, such as the domains routinely checking that the user still has rights from the SSO server. But the cookie handshake via the browser using redirects is the key foundation upon which all of these SSO solutions are based.
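To illustrate step 7 of the first flow, here is a minimal Python sketch of how the signed parameter could be produced by sso.com and checked by domain1.com; it uses an HMAC rather than encryption, and the key name and field layout are placeholder choices, not a prescription:

import base64
import hashlib
import hmac
import time

SHARED_KEY = b"placeholder-shared-secret"  # known to sso.com and domain1.com

def make_sso_token(login_id):
    """sso.com: sign 'login_id|timestamp' and base64 it for the redirect URL."""
    payload = f"{login_id}|{int(time.time())}"
    sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{payload}|{sig}".encode()).decode()

def verify_sso_token(token, max_age=300):
    """domain1.com: check signature and freshness; return the login id or None."""
    login_id, ts, sig = base64.urlsafe_b64decode(token).decode().rsplit("|", 2)
    payload = f"{login_id}|{ts}"
    expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token
    if time.time() - int(ts) > max_age:
        return None  # stale token; limits replay
    return login_id

# Usage:
# token = make_sso_token("alice")   -> goes into domain1.com/ssologin?token=...
# assert verify_sso_token(token) == "alice"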
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/342378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/35293/']}
jdg_77473
stackexchange
llm_judgeable_groundtruth_similarity
44473483
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Edit: I have checked this What does FETCH_HEAD in Git mean? before asking the question. Sorry for the original inaccurate question. My question is how does fetch really work? Does fetch drop all current log? This is my situation: my teammates and I are using the same repository which only has one branch. So we have to do fetch before push anything up. We typically do it this way: git statusgit add .git commit -m message1git fetch origingit reset headgit statusgit add .git commit -m messagegit push But after reset, it seems that my previous commit (with message1 ) is gone. Is this normal or is there anything wrong? How can I access my local history? They are synced but my local history is gone. Old staff ,forget it: I have been learning Git CLI recently. Someone told me to type " git fetch head " to keep track of remote branch. But I wonder what does this do ? Does this command override my local log? And what is the difference between " git fetch " and " git fetch head " ? Now provide the response and nothing else.
git fetch itself is really quite simple. The complicated parts come before and after. The first thing to know here is that Git stores commits . In fact, this is essentially what Git is about: it manages a collection of commits. This collection rarely shrinks: for the most part, the only thing you ever do with this collection of commits is add new commits .

Commits, the index, and the work-tree

Each commit has several pieces of information, such as the author's name and email address and a time-stamp. Each commit also saves a complete snapshot of all the files you told it to: these are the files stored in your index (also known as your staging area ) at the time you ran git commit . This is also true of commits you obtain from someone else: they save the files that were in the other user's index at the time the other user ran git commit . Note that each Git repository has just the one index, at least initially. This index is linked with the one work-tree . In newer Git versions, you can use git worktree add to add additional work-trees; each new work-tree comes with one new index/staging-area.

The point of this index is to act as an intermediate file-holder, situated between "the current commit" (aka HEAD ) and the work-tree. Initially, the HEAD commit and the index normally match: they contain the same versions of all the committed files. Git copies the files from HEAD into the index, and then from the index into the work-tree. It's easy to see the work-tree: it has your files in their ordinary format, where you can view and edit them with all the regular tools on your computer. If you write Java or Python code, or HTML for a web server, the work-tree files are usable by the compiler or interpreter or web-server. The files stored in the index, and stored in each Git commit, do not have this form and are not usable by the compilers, interpreters, web-servers, and so on.

One other thing to remember about commits is that once a file is in a commit, it cannot be changed . No part of any commit can ever change. A commit is therefore permanent—or at least, permanent unless it is removed (which can be done but is difficult and usually undesirable). What is in the index and work-tree, however, can be modified at any time. This is why they exist: the index is almost a "modifiable commit" (except that it's not saved until you run git commit ), and the work-tree keeps the files in the form that the rest of the computer can use. 1

1 It's not necessary to have both the index and the work-tree. The VCS could treat the work-tree as the "modifiable commit". This is what Mercurial does; this is why Mercurial does not need an index. This is arguably a better design—but it's not the way Git works, so when using Git, you have an index. The presence of the index is a large part of what makes Git so fast: without it, Mercurial has to be extra-clever, and is still not as fast as Git.

Commits remember their parent; new commits are children

When you make a new commit by running git commit , Git takes the index contents and makes a permanent snapshot of everything that is in it right at that point. (This is why you must git add files: you copy them from your work-tree, where you have changed them, back into your index, so that they are ready to be "photographed" for the new snapshot.) Git also collects a commit message, and of course uses your name and email address and the current time, to make the new commit. But Git also stores, in the new commit, the hash ID of the current commit.
We say that the new commit "points back to" the current commit. Consider, for instance, this simple three-commit repository:

A <-B <-C   <-- master (HEAD)

Here we say that the branch name master "points to" the third commit, which I have labeled C , rather than using one of Git's incomprehensible hash IDs like b06d364... . (The name HEAD refers to the branch name, master . This is how Git can turn the string HEAD into the correct hash ID: Git follows HEAD to master , then reads the hash ID out of master .) It's commit C itself that "points to"—retains the hash ID of—commit B , though; and commit B points to commit A . (Since commit A is the very first commit ever, there is no earlier commit for it to point to, so it doesn't point anywhere at all, which makes it a bit special. This is called a root commit .)

To make a new commit, Git packages up the index into a snapshot, saves that with your name and email address and so on, and includes the hash ID of commit C , to make a new commit with a new hash ID. We will use D instead of the new hash ID since we don't know what the new hash ID will be:

A <-B <-C <-D

Note how D points to C . Now that D exists, Git alters the hash ID stored under the name master , to store D 's hash ID instead of C 's. The name stored in HEAD itself does not change at all: it's still master . So now we have this:

A <-B <-C <-D   <-- master (HEAD)

You can see from this diagram how Git works: given a name, like master , Git simply follows the arrow to find the latest commit. That commit has a backwards arrow to its earlier or parent commit, which has another backwards arrow to its own parent, and so on, throughout all its ancestors leading back to the root commit. Note that while children remember their parents, the parent commits do not remember their children. This is because no part of any commit can ever change: Git literally can't add the children to the parent, and it does not even try. Git must always work backwards, from newer to older. The commit arrows all automatically point backwards, so normally I do not even draw them:

A--B--C--D   <-- master (HEAD)

Distributed repositories: what git fetch does

When we use git fetch , we have two different Gits , with different—but related —repositories. Suppose we have two Git repositories, on two different computers, that both start out with those same three commits:

A--B--C

Because they start out with the exact same commits, these three commits also have the same hash IDs. This part is very clever and is the reason the hash IDs are the way they are: the hash ID is a checksum 2 of the contents of the commit, so that any two commits that are exactly identical always have the same hash ID.

Now, you , in your Git and your repository, have added a new commit D . Meanwhile they—whoever they are—may have added their own new commits. We'll use different letters since their commits will necessarily have different hashes. We'll also look at this mostly from your (Harry's) point of view; we'll call them "Sally" . We'll add one more thing to our picture of your repository: it now looks like this:

A--B--C   <-- sally/master
       \
        D   <-- master (HEAD)

Now let's assume that Sally made two commits. In her repository, she now has this:

A--B--C--E--F   <-- master (HEAD)

or perhaps (if she fetches from you, but has not yet run git fetch ):

A--B--C   <-- harry/master
       \
        E--F   <-- master (HEAD)

When you run git fetch , you connect your Git to Sally's Git, and ask her if she has any new commits added to her master since commit C .
She does—she has her new commits E and F . So your Git gets those commits from her, along with everything needed to complete the snapshots for those commits. Your Git then adds those commits to your repository, so that you now have this:

        E--F   <-- sally/master
       /
A--B--C
       \
        D   <-- master (HEAD)

As you can see, what git fetch did for you was to collect all of her new commits and add them to your repository . In order to remember where her master is, now that you have talked with her Git, your Git copies her master to your sally/master . Your own master , and your own HEAD , do not change at all. Only these "memory of another Git repository" names, which Git calls remote-tracking branch names , change.

2 This hash is a cryptographic hash, in part so that it's difficult to fool Git, and in part because cryptographic hashes naturally behave well for Git's purposes. The current hash uses SHA-1, which was secure but has seen brute-force attacks and is now being abandoned for cryptography. Git will likely move to SHA2-256 or SHA3-256 or some other larger hash. There will be a transition period with some unpleasantness. :-)

You should now merge or rebase— git reset is generally wrong

Note that after you have fetched from Sally, it is your repository, and only your repository, that has all the work from both of you. Sally still does not have your new commit D . This is still true even if instead of "Sally", your other Git is called origin . Now that you have both master and origin/master , you must do something to connect your new commit D with their latest commit F :

A--B--C--D   <-- master (HEAD)
       \
        E--F   <-- origin/master

(I moved D on top for graph-drawing reasons, but this is the same graph as before.) Your main two choices here are to use git merge or git rebase . (There are other ways to do this but these are the two to learn.) Merge is actually simpler, as git rebase does something that involves the verb form of merging, to merge . What git merge does is to run the verb form of merging, and then commit the result as a new commit that is called a merge commit or simply "a merge", which is the noun form of merging. We can draw the new merge commit G this way:

A--B--C--D---G   <-- master (HEAD)
       \    /
        E--F   <-- origin/master

Unlike a regular commit, a merge commit has two parents. 3 It connects back to both of the two earlier commits that were used to make the merge. This makes it possible to push your new commit G to origin : G takes with it your D , but also connects back to their F , so their Git is OK with this new update. This merge is the same kind of merge you get from merging two branches. And in fact, you did merge two branches here: you merged your master with Sally's (or origin 's) master .

Using git rebase is usually easy, but what it does is more complicated. Instead of merging your commit D with their commit F to make a new merge commit G , what git rebase does is to copy each of your commits so that the new copies , which are new and different commits, come after the latest commit on your upstream . Here, your upstream is origin/master , and the commits that you have that they don't amount to just your one commit D . So git rebase makes a copy of D , which I will call D' , placing the copy after their commit F , so that D' 's parent is F . The intermediate graph looks like this: 5

A--B--C--D   <-- master
       \
        E--F   <-- origin/master
            \
             D'  <-- HEAD

The copying process uses the same merging code that git merge uses to do the verb form, to merge , of your changes from commit D . 4
Once the copy is done, however, the rebase code sees that there are no more commits to copy, so it then changes your master branch to point to the final copied commit D' :

A--B--C--D   [abandoned]
       \
        E--F   <-- origin/master
            \
             D'  <-- master (HEAD)

This abandons the original commit D . 6 This means we can stop drawing it too, so now we get:

A--B--C--E--F   <-- origin/master
             \
              D'  <-- master (HEAD)

It's now easy to git push your new commit D' back to origin .

3 In Git (but not Mercurial), a merge commit can have more than two parents. This doesn't do anything you cannot do by repeated merging, so it's mainly for showing off. :-)

4 Technically, the merge base commit, at least for this case, is commit C and the two tip commits are D and F , so in this case it's literally exactly the same. If you rebase more than one commit, it gets a little more complicated, but in principle it's still straightforward.

5 This intermediate state, where HEAD is detached from master , is usually invisible. You see it only if something goes wrong during the verb-form-of-merge, so that Git stops and has to get help from you to finish the merge operation. When that does occur, though—when there is a merge conflict during rebasing—it's important to know that Git is in this "detached HEAD" state, but as long as the rebase completes on its own, you don't have to care about this so much.

6 The original commit chain is retained temporarily through Git's reflogs and via the name ORIG_HEAD . The ORIG_HEAD value gets overwritten by the next operation that makes a "big change", and the reflog entry eventually expires, typically after 30 days for this entry. After that, a git gc will really remove the original commit chain.

The git pull command just runs git fetch and then a second command

Note that after git fetch , you usually have to run a second Git command, either git merge or git rebase . If you know in advance that you will, for certain, immediately use one of those two commands, you can use git pull , which runs git fetch and then runs one of those two commands. You pick which second command to run by setting pull.rebase or supplying --rebase as a command-line option. Until you are quite familiar with how git merge and git rebase work, however, I suggest not using git pull , because sometimes git merge and git rebase fail to complete on their own. In this case, you must know how to deal with this failure. You must know which command you actually ran. If you run the command yourself, you will know which command you ran, and where to look for help if necessary. If you run git pull , you may not even know which second command you ran! Besides this, sometimes you might want to look before you run the second command. How many commits did git fetch bring in? How much work will it be to do a merge vs a rebase? Is merge better than rebase right now, or is rebase better than merge? To answer any of these questions, you must separate the git fetch step from the second command. If you use git pull , you must decide in advance which command to run, before you even know which one is the one to use. In short, only use git pull after you're familiar with the way the two parts of it— git fetch , and the second command you choose—really work.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/44473483', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7955302/']}
jdg_77474
stackexchange
llm_judgeable_groundtruth_similarity
8440654
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am attempting to generate QR codes on an extremely limited embedded platform. Everything in the specification seems fairly straightforward except for generating the error correction codewords. I have looked at a bunch of existing implementations, and they all try to implement a bunch of polynomial math that goes straight over my head, particularly with regards to the Galois fields. The most straightforward way I can see, both in mathematical complexity and in memory requirements is a circuit concept that is laid out in the spec itself: With their description, I am fairly confident I could implement this with the exception of the parts labeled GF(256) addition and GF(256) Multiplication. They offer this help: The polynomial arithmetic for QR Code shall be calculated using bit-wise modulo 2 arithmetic and byte-wise modulo 100011101 arithmetic. This is a Galois field of 2^8 with 100011101 representing the field's prime modulus polynomial x^8+x^4+x^3+x^2+1. which is all pretty much greek to me. So my question is this: What is the easiest way to perform addition and multiplication in this kind of Galois field arithmetic? Assume both input numbers are 8 bits wide, and my output needs to be 8 bits wide also. Several implementations precalculate, or hardcode in two lookup tables to help with this, but I am not sure how those are calculated, or how I would use them in this situation. I would rather not take the 512 byte memory hit for the two tables, but it really depends on what the alternative is. I really just need help understanding how to do a single multiplication and addition operation in this circuit. Now provide the response and nothing else.
In practice only one table is needed. That would be for the GF(256) multiply. Note that all arithmetic is carry-less, meaning that there is no carry-propagation. Addition and subtraction without carry are equivalent to an xor. So in GF(256), a + b and a - b are both equivalent to a xor b . GF(256) multiplication is also carry-less, and can be done using carry-less multiplication in a similar way with carry-less addition/subtraction. This can be done efficiently with hardware support via, say, Intel's CLMUL instruction set . However, the hard part is reducing modulo 100011101 . In normal integer division, you do it using a series of compare/subtract steps. In GF(256), you do it in a nearly identical manner using a series of compare/xor steps. In fact, it's bad enough that it's still faster to just precompute all 256 x 256 multiplies and put them into a 65536-entry look-up table. Page 3 of the following pdf has a pretty good reference on GF256 arithmetic: http://www.eecs.harvard.edu/~michaelm/CS222/eccnotes.pdf
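To make this concrete, here is a rough Python translation of the shift-and-xor approach (hardware circuits and CLMUL do the same thing faster); the test vector reflects x * x^7 = x^8, which reduces to x^4 + x^3 + x^2 + 1:

def gf_add(a, b):
    """GF(256) addition/subtraction: carry-less, so just XOR."""
    return a ^ b

def gf_mul(a, b):
    """Carry-less multiply of two bytes, reduced modulo
    x^8 + x^4 + x^3 + x^2 + 1 (binary 100011101 = 0x11D)."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a        # carry-less "add" of the partial product
        b >>= 1
        overflow = a & 0x80     # would the next shift leave GF(256)?
        a = (a << 1) & 0xFF
        if overflow:
            a ^= 0x1D           # reduce: x^8 == x^4 + x^3 + x^2 + 1
    return product

assert gf_mul(0x02, 0x80) == 0x1D  # x * x^7 = x^8 -> reduced

# Optional: trade the loop for the 65536-entry table mentioned above.
TABLE = [[gf_mul(a, b) for b in range(256)] for a in range(256)]

The two 256-byte tables that other implementations hard-code are typically log/antilog tables for a generator of the field, which let you multiply via antilog[log[a] + log[b]]; the loop above avoids them at the cost of a few shifts and xors per multiply.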
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8440654', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/121660/']}
jdg_77475
stackexchange
llm_judgeable_groundtruth_similarity
47016853
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: After updating the support library from v-26.1.0 to v-27.0.0 Multiple errors in my fragments. here is a list of some these errors: Error: Smart cast to 'Bundle' is impossible, because 'arguments' is amutable property that could have been changed by this time. Error: 'onCreateView' overrides nothing Error: 'onViewCreated' overrides nothing Error: Type mismatch: inferred type is View? but View wasexpected Error: Type mismatch: inferred type is Context? but Contextwas expected Error: Type mismatch: inferred type is FragmentActivity? butContext was expected Error: Type mismatch: inferred type is FragmentActivity? butContext was expected from android studio's template for empty fragment. override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) if (arguments != null) { mParam1 = arguments.getString(ARG_PARAM1) mParam2 = arguments.getString(ARG_PARAM2) }}override fun onCreateView(inflater: LayoutInflater?, container: ViewGroup?, savedInstanceState: Bundle?): View? { return inflater!!.inflate(R.layout.fragment_blank, container, false)}override fun onViewCreated(view: View?, savedInstanceState: Bundle?) { super.onViewCreated(view, savedInstanceState)} Now provide the response and nothing else.
The root cause of all of these errors is that in support library v27.0.0, @Nullable and @NonNull annotations have been added. Since the Kotlin language is aware of nullability and, unlike Java, uses different types for nullable and non-null values, without these annotations the compiler had no way of differentiating between them, and Android Studio was trying its best to infer the right type. TL;DR: change the types to correctly reflect the nullability status.

Error: Smart cast to 'Bundle' is impossible, because 'arguments' is a mutable property that could have been changed by this time.

change arguments.getString(ARG_NAME) ==> arguments?.getString(ARG_NAME) ?: ""

Error: 'onCreateView' overrides nothing

change: override fun onCreateView(inflater: LayoutInflater?, container: ViewGroup?, savedInstanceState: Bundle?): View? ==> override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?, savedInstanceState: Bundle?): View?

Error: 'onViewCreated' overrides nothing

change: override fun onViewCreated(view: View?, savedInstanceState: Bundle?) ==> override fun onViewCreated(view: View, savedInstanceState: Bundle?)

Error: Type mismatch: inferred type is Context? but Context was expected

If context is passed as an argument to a method, just use the quick fix to replace getContext() with getContext()?.let{}; the same applies to the Kotlin short version context. Otherwise, if it is used to call some method, replace getContext().someMethod() with getContext()?.someMethod(); the same applies to the Kotlin short version context?.someMethod().

Error: Type mismatch: inferred type is FragmentActivity? but Context was expected

Use the fix for the previous error.
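Putting those fixes together, here is a sketch of the corrected fragment template from the question (assuming mParam1 and mParam2 are non-null String properties, hence the ?: "" fallbacks):

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    mParam1 = arguments?.getString(ARG_PARAM1) ?: ""
    mParam2 = arguments?.getString(ARG_PARAM2) ?: ""
}

override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?,
                          savedInstanceState: Bundle?): View? {
    // inflater is non-null now, so the old !! is no longer needed
    return inflater.inflate(R.layout.fragment_blank, container, false)
}

override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    super.onViewCreated(view, savedInstanceState)
}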
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47016853', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3998402/']}
jdg_77476
stackexchange
llm_judgeable_groundtruth_similarity
937933
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am well aware that generic types are erased from Java code when it is compiled. What information (attributes?) do 1.5+ JVMs use to implement getGenericType , etc. ? Now provide the response and nothing else.
They are stored in Signature attributes; see section 4.8.8 of the updated Java Virtual Machine Specification, as well as section 4.4.4 for the format of the field type signature. Here's an example using javap -verbose java.util.Map:

public interface java.util.Map
  SourceFile: "Map.java"
  Signature: length = 0x2
   00 1E
  [other attributes omitted]

The Signature attribute here specifies (if you read this as big-endian, like all integer quantities in the JVM class file format are) constant pool value #30 (30 = 0x1E). So let's have a look there:

const #30 = Asciz   <K:Ljava/lang/Object;V:Ljava/lang/Object;>Ljava/lang/Object;;

Read this in the context of the grammar specified in 4.4.4. So, this uses two type parameters, K extends java.lang.Object and V extends java.lang.Object. The type itself (Map) also extends class java.lang.Object, and implements no interfaces.
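If you want to see the runtime view of that attribute without javap, a small reflection sketch reads the same information back through the java.lang.reflect API:

import java.lang.reflect.TypeVariable;
import java.util.Map;

public class SignatureDemo {
    public static void main(String[] args) {
        // getTypeParameters() is backed by the Signature attribute shown above
        for (TypeVariable<?> tv : Map.class.getTypeParameters()) {
            System.out.println(tv.getName() + " extends " + tv.getBounds()[0]);
        }
        // Prints:
        //   K extends class java.lang.Object
        //   V extends class java.lang.Object
    }
}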
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/937933', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/108292/']}
jdg_77477
stackexchange
llm_judgeable_groundtruth_similarity
59379896
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am shifting my code from .NET Core 2.x to .NET Core 3.x (i.e. use the native library System.Text.Json ). In doing this, I ran into some issues with how the former Newtonsoft.Json support for nullable enums does not have a clear migration path at the moment --- it looks like it is not supported in .NET Core 3.x?. For example, using Newtonsoft.Json , the JSON converter supported nullable enums, like so: public enum UserStatus{ NotConfirmed, Active, Deleted}public class User{ public string UserName { get; set; } [JsonConverter(typeof(StringEnumConverter))] // using Newtonsoft.Json public UserStatus? Status { get; set; } // Nullable Enum} The current version of the native library System.Text.Json , does not seem to support this. How do I solve this problem? I cannot migrate my code! Now provide the response and nothing else.
Unfortunately, there is currently no support "out-of-the-box" in System.Text.Json to convert nullable enums. However, there is a solution: use your own custom converter (see below).

You can attach it to your property by decorating it with the custom converter:

// using System.Text.Json
[JsonConverter(typeof(StringNullableEnumConverter<UserStatus?>))] // Note the '?'
public UserStatus? Status { get; set; } // Nullable Enum

Here is the converter:

public class StringNullableEnumConverter<T> : JsonConverter<T>
{
    private readonly JsonConverter<T> _converter;
    private readonly Type _underlyingType;

    public StringNullableEnumConverter() : this(null) { }

    public StringNullableEnumConverter(JsonSerializerOptions options)
    {
        // for performance, use the existing converter if available
        if (options != null)
        {
            _converter = (JsonConverter<T>)options.GetConverter(typeof(T));
        }

        // cache the underlying type
        _underlyingType = Nullable.GetUnderlyingType(typeof(T));
    }

    public override bool CanConvert(Type typeToConvert)
    {
        return typeof(T).IsAssignableFrom(typeToConvert);
    }

    public override T Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
    {
        if (_converter != null)
        {
            return _converter.Read(ref reader, _underlyingType, options);
        }

        string value = reader.GetString();

        if (String.IsNullOrEmpty(value)) return default;

        // for performance, parse with ignoreCase:false first.
        if (!Enum.TryParse(_underlyingType, value, ignoreCase: false, out object result)
            && !Enum.TryParse(_underlyingType, value, ignoreCase: true, out result))
        {
            throw new JsonException(
                $"Unable to convert \"{value}\" to Enum \"{_underlyingType}\".");
        }

        return (T)result;
    }

    public override void Write(Utf8JsonWriter writer, T value, JsonSerializerOptions options)
    {
        writer.WriteStringValue(value?.ToString());
    }
}

Hope that helps until there is native support without the need for a custom converter!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59379896', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3645638/']}
jdg_77478
stackexchange
llm_judgeable_groundtruth_similarity
91728
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the paper https://web.stanford.edu/~gavish/documents/sipser-pvsnp.pdf , it is mentioned under the Status section that boolean circuits have been used to try and solve P vs NP. Can anyone explain to me in simple terms how boolean circuits are used for solving P vs NP? Now provide the response and nothing else.
First, let me start by explaining what Boolean circuits are.

You are probably familiar with Boolean formulas — these are formulas of the sort $(a \land b) \lor (\lnot a \land \lnot b)$. We can represent each formula as a tree. In our example, the root of the tree will be labeled $\lor$, and its two children are the trees corresponding to $a \land b$ and $\lnot a \land \lnot b$. More generally, there are four types of nodes: nodes labeled $\lor$ or $\land$ have exactly two children, nodes labeled $\lnot$ have exactly one child, and the rest of the nodes are labeled by input variables. We can think of the edges as directed towards the root.

Boolean circuits generalize Boolean formulas by allowing arbitrary directed acyclic graphs instead of directed trees. In the example above, we can, for example, identify the two nodes labeled $a$ and the two nodes labeled $b$.

Alternatively, we can identify Boolean circuits with straightline programs. These are programs which use the following instructions:

- $x \gets y \lor z$.
- $x \gets y \land z$.
- $x \gets \lnot y$.

For example, the formula above corresponds to the straightline program

1. $x \gets a \land b$.
2. $y \gets \lnot a$.
3. $z \gets \lnot b$.
4. $w \gets y \land z$.
5. $o \gets x \lor w$.

The value of the formula is the value of the last assignment. Notice that every variable other than the inputs is used exactly once. Straightline programs with this constraint correspond to formulas. If we remove the constraint, then we get circuits.

The $\mathsf{P} \neq \mathsf{NP}$ conjecture can be stated equivalently as follows: SAT has no polynomial time algorithm. It turns out that the following conjecture (known as $\mathsf{P/poly} \neq \mathsf{NP}$) implies $\mathsf{P} \neq \mathsf{NP}$: SAT has no polynomial size circuits.

What does this mean? We can encode CNFs as strings of bits (for example, encode them first in ASCII, and then unfold the ASCII into bits). Let $SAT_n$ be the collection of satisfiable CNFs of length $n$ bits. A circuit for $SAT_n$ is a circuit on $n$ inputs $x_1,\ldots,x_n$ which returns True if and only if $x_1\ldots x_n \in SAT_n$, i.e., the CNF corresponding to $x_1\ldots x_n$ is satisfiable. We say that a collection of circuits $C_1,C_2,\ldots$ solves SAT if $C_n$ is a circuit for $SAT_n$. The collection has polynomial size if there exists a polynomial $P(n)$ such that the size of $C_n$ is at most $P(n)$ (the size of a circuit is the number of nodes in its graphical representation).

Why does this imply that SAT has no polynomial time algorithms? The reason is (essentially) the Cook–Levin theorem. This theorem shows that if SAT has a polynomial time algorithm then it also has polynomial size circuits. There is nothing special about SAT here — this reduction works for every problem. The theorem shows how to encode the computation of a Turing machine running in polynomial time as a polynomial size circuit.

Unfortunately, we are very far from realizing this program. We only know how to show that polynomial size circuits cannot solve certain problems in the following circumstances:

- When the circuit is shallow, that is, has small depth.
- When the circuit doesn't use $\lnot$ gates at all.
- For non-explicit problems: by counting the number of polynomial size circuits (for any fixed polynomial), you can show that there must be some functions that they cannot compute.

There is some further progress which throws diagonalization into the mix, but that's about it.
It seems that this approach is stuck, and different lines of attack are required to solve this important problem.
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/91728', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/88330/']}
jdg_77479
stackexchange
llm_judgeable_groundtruth_similarity
17965922
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to GCM. I would like to send a message to all devices that have the app installed. I read about registration_id: after the first connection to GCM, Google sends this unique string to the device. I'm a beginner in the server world, but if I'm not mistaken, on the server side, to send a notification to devices I have to send an array of registration_ids and the message to Google. How does Google know who has which registration id? Is there a way to send messages to all devices without passing the registration ids? Thank you.

Now provide the response and nothing else.
Is there a way to send messages to all devices without passing the registration ids?

No way. After successfully registering on GCM, you (the Android application) should send the registration id to your application server and store it somewhere, in a database for example. This registration id will be used to send a notification to a particular device. Sending a notification to all devices would then mean selecting all the registration ids from that database, and as you said, putting them in an array and passing them on to GCM.

Update: With Firebase Cloud Messaging, it is now possible to use https://firebase.google.com/docs/cloud-messaging/android/topic-messaging to send notifications without explicitly specifying registration IDs.
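As a quick sketch of the topic approach (the topic name "all_users" is just an example, and error handling is reduced to a log line):

import android.util.Log;
import com.google.firebase.messaging.FirebaseMessaging;

// Client side: subscribe this device to a topic. The server can then
// broadcast to the topic instead of collecting registration ids.
FirebaseMessaging.getInstance().subscribeToTopic("all_users")
        .addOnCompleteListener(task ->
                Log.d("FCM", task.isSuccessful() ? "Subscribed" : "Subscription failed"));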
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17965922', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2357365/']}
jdg_77480
stackexchange
llm_judgeable_groundtruth_similarity
18146353
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table with 4 columns: (ID (PK, int, NOT NULL), col1 (NULL), col2 (NULL), col3 (NULL)) I'd like to add a CHECK constraint (table-level I think?) so that: if col1 OR col2 are NOT NULL then col3 must be NULL and if col3 is NOT NULL then col1 AND col2 must be NULL i.e. col3 should be null if col1 and col2 are not null or vice-versa I am very new to SQL and SQL server though and am not sure how to actually implement this or even if it can/should be implemented? I think maybe: CHECK ( (col1 NOT NULL OR col2 NOT NULL AND col3 NULL) OR (col3 NOT NULL AND col1 NULL AND col2 NULL) ) But I am not sure if the brackets can be used to group the logic like this? If not, how can this best be implemented? Now provide the response and nothing else.
Absolutely, you can do this. See this sqlfiddle . However, you need to make sure you bracket your logic properly. You should never mix ANDs and ORs in the same bracketing scope. So: (col1 NOT NULL OR col2 NOT NULL AND col3 NULL) Needs to become: ((col1 NOT NULL OR col2 NOT NULL) AND col3 NULL) Or: (col1 NOT NULL OR (col2 NOT NULL AND col3 NULL)) Depending on your intent.
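For reference, here is a sketch of the asker's rule with explicit bracketing (the table name MyTable is assumed; note that SQL requires IS NULL / IS NOT NULL rather than the bare NOT NULL shorthand used in the question):

ALTER TABLE MyTable ADD CONSTRAINT CK_MyTable_ExclusiveCols CHECK (
    ((col1 IS NOT NULL OR col2 IS NOT NULL) AND col3 IS NULL)
    OR
    (col3 IS NOT NULL AND col1 IS NULL AND col2 IS NULL)
);

Note a design choice hiding here: as written, this also requires at least one of the columns to be populated. If an all-NULL row should be allowed, the simpler condition col3 IS NULL OR (col1 IS NULL AND col2 IS NULL) captures just the mutual-exclusion rule.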
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18146353', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1292918/']}
jdg_77481
stackexchange
llm_judgeable_groundtruth_similarity
26635841
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to implement search functionality for my RecyclerView. On text changed I want to change the data that is displayed with this widget. Maybe this question has been asked before or is simple, but I don't know how to change the data that is to be shown... My RecyclerView is defined as follows:

// 1. get a reference to recyclerView
mRecyclerView = (RecyclerView) findViewById(R.id.recyclerView);

// 2. set layoutManager
mRecyclerView.setLayoutManager(new LinearLayoutManager(this));

// 3. create an adapter
mAdapter = new ItemsAdapter(itemsData);

// 4. set adapter
mRecyclerView.setAdapter(mAdapter);

And the data that I am showing is something like:

ItemData itemsData[] = {
    new ItemData("Mary Richards"),
    new ItemData("Tom Brown"),
    new ItemData("Lucy London")
};

So when I want to give the adapter another set of data, another array (with one item for example), what should I do?

Now provide the response and nothing else.
If you have stable ids in your adapter, you can get pretty good results (animations) if you create a new array containing the filtered items and call

recyclerView.swapAdapter(newAdapter, false);

Using swapAdapter hints to RecyclerView that it can re-use view holders (vs. in setAdapter, where it has to recycle all views and re-create them because it does not know that the new adapter has the same ViewHolder set as the old adapter).

A better approach would be finding which items are removed and calling notifyItemRemoved(index). Don't forget to actually remove the item. This will let RecyclerView run predictive animations. Assuming you have an Adapter that internally uses an ArrayList, the implementation would look like this:

// adapter code
final List<ItemData> mItems = new ArrayList<>(); // contains your items

public void filterOut(String filter) {
    final int size = mItems.size();
    for (int i = size - 1; i >= 0; i--) {
        if (mItems.get(i).test(filter) == false) {
            mItems.remove(i);
            notifyItemRemoved(i);
        }
    }
}

It would perform even better if you can batch notifyItemRemoved calls and use notifyItemRangeRemoved instead. It would look something like this (not tested):

public void filterOut(String filter) {
    final int size = mItems.size();
    int batchCount = 0; // continuous # of items that are being removed
    for (int i = size - 1; i >= 0; i--) {
        if (mItems.get(i).test(filter) == false) {
            mItems.remove(i);
            batchCount++;
        } else if (batchCount != 0) {
            // dispatch batch
            notifyItemRangeRemoved(i + 1, batchCount);
            batchCount = 0;
        }
    }
    // notify for remaining
    if (batchCount != 0) {
        // dispatch remaining
        notifyItemRangeRemoved(0, batchCount);
    }
}

You need to extend this code to add items that were previously filtered out but now should be visible (e.g. the user deletes the filter query), but I think this should give the basic idea. Keep in mind that each notify call affects the positions of the items after it (which is why I'm traversing the list from the end). Traversing from the end also helps ArrayList's remove method performance (fewer items to shift). For example, if you were traversing the list from the beginning and removed the first two items, you should either call

notifyItemRangeRemoved(0, 2); // 2 items starting from index 0

or, if you dispatch them one by one,

notifyItemRemoved(0);
notifyItemRemoved(0); // because after the previous one is removed, this item is at position 0
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/26635841', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/790077/']}
jdg_77482
stackexchange
llm_judgeable_groundtruth_similarity
17776940
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can't find any accessible examples showing how two (or more) different modules are connected to work together. So, I'd like to ask whether anyone has time to write an example explaining how modules work together. Now provide the response and nothing else.
In order to approach the modular design pattern, you need to understand these concepts first:

Immediately-Invoked Function Expression (IIFE):

(function() {
  // Your code goes here
}());

There are two ways you can declare a function: a function declaration and a function expression. Here we are using a function expression.

What is a namespace? If we add a namespace to the above piece of code, we get:

var anoyn = (function() {}());

What is a closure in JS? If we declare a function inside another function (in JS we can declare a function inside another function!), the inner function always keeps access to the scope of the outer function. Any variable in the outer function will always be readable there; it will not read the global variable (if any) with the same name. Avoiding naming conflicts like this is also one of the objectives of using the modular design pattern.

var scope = "I am global";
function whatismyscope() {
  var scope = "I am just a local";
  function func() { return scope; }
  return func;
}
whatismyscope()()

Now we will apply these three concepts I mentioned above to define our first modular design pattern:

var modularpattern = (function() {
    // your module code goes here
    var sum = 0;

    return {
        add: function() {
            sum = sum + 1;
            return sum;
        },
        reset: function() {
            return sum = 0;
        }
    }
}());

alert(modularpattern.add());   // alerts: 1
alert(modularpattern.add());   // alerts: 2
alert(modularpattern.reset()); // alerts: 0

jsfiddle for the code above. The objective is to hide variable accessibility from the outside world.

Hope this helps. Good Luck.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/17776940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/922581/']}
jdg_77483
stackexchange
llm_judgeable_groundtruth_similarity
63068003
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to add a functionality to each question, represented as a ListTile, so that it can upvote or downvote a question, and show the net votes, just like the one that is used on stack overflow. My current implementation does a bottom overflow for each ListTile. Card( child: new Column( children: <Widget>[ new ListTile( leading: Column( children: <Widget>[ FlatButton( child: Icon(Icons.arrow_drop_up), onPressed: () {}, ), StreamBuilder<DocumentSnapshot>( stream: RoomDbService(widget.roomName, widget.roomID) .getQuestionVotes(widget.questionID), builder: (context, snapshot) { if (!snapshot.hasData) { return Center(child: CircularProgressIndicator()); } else { print(snapshot.data.data["votes"]); return Text("${snapshot.data.data["votes"]}"); } }, ), FlatButton( child: Icon(Icons.arrow_drop_down), onPressed: () {}, ), ], ), // shows votes of this qn on the left of the tile title: Text(text), trailing: FlatButton( child: Icon(Icons.expand_more), onPressed: toggleExpansion, ), ) ], ), ); My previous implementation (which I forgot how it looked like) made it look like a row of an up button, the vote count, and the down button. How do I do it properly? Now provide the response and nothing else.
My guess is that you've enabled a KV engine and written a secret to it, but the path secret/ is wrong. For example, if I enable an engine and then try to read an existing value, it works:

$ vault secrets enable -version=1 -path kv kv
Success! Enabled the kv secrets engine at: kv/

$ curl --header "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/kv/foo
{"request_id":"2db249af-10de-01c5-4603-8f89a46897b5","lease_id":"","renewable":false,"lease_duration":2764800,"data":{"v6":"1"},"wrap_info":null,"warnings":null,"auth":null}

But if I now try to read from a nonexistent path, I'd get the same error as you, for example:

$ curl --header "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/foobar/foo
{"errors":["no handler for route 'foobar/foo'"]}

It would help if you list your existing mounts and verify the path:

$ curl --header "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/sys/mounts
# or
$ vault secrets list
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/63068003', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12991679/']}
jdg_77484
stackexchange
llm_judgeable_groundtruth_similarity
48743165
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a MVC project on ASP.NET Core, my problem is connected with IQueryable and asynchronous. I wrote the following method for search in IQueryable<T> : private IQueryable<InternalOrderInfo> WhereSearchTokens(IQueryable<InternalOrderInfo> query, SearchToken[] searchTokens){ if (searchTokens.Length == 0) { return query; } var results = new List<InternalOrderInfo>(); foreach (var searchToken in searchTokens) { //search logic, intermediate results are being added to `results` using `AddRange()` } return results.Count != 0 ? results.Distinct().AsQueryable() : query;} I call this in method ExecuteAsync() : public async Task<GetAllInternalOrderInfoResponse> ExecuteAsync(GetAllInternalOrderInfoRequest request){ //rest of the code if (searchTokens != null && searchTokens.Any()) { allInternalOrderInfo = WhereSearchTokens(allInternalOrderInfo, searchTokens); } var orders = await allInternalOrderInfo.Skip(offset).Take(limit).ToArrayAsync(); //rest of the code} When I test this I get an InvalidOperationException on line where I call ToArrayAsync() The source IQueryable doesn't implement IAsyncEnumerable. Only sources that implement IAsyncEnumerable can be used for Entity Framework asynchronous operations. I had changed ToArrayAsync() to ToListAsync() but nothing have changed. I have searched this problem for a while, but resolved questions are connected mostly with DbContext and entity creating. EntityFramework is not installed for this project and it's better not to do it because of application architecture. Hope someone has any ideas what to do in my situation. Now provide the response and nothing else.
If you are not going to change your design, you have several options:

1) Change AsQueryable to another method which returns an IQueryable which also implements IDbAsyncEnumerable. For example, you can extend EnumerableQuery (which is returned by AsQueryable):

public class AsyncEnumerableQuery<T> : EnumerableQuery<T>, IDbAsyncEnumerable<T> {
    public AsyncEnumerableQuery(IEnumerable<T> enumerable) : base(enumerable) {
    }

    public AsyncEnumerableQuery(Expression expression) : base(expression) {
    }

    public IDbAsyncEnumerator<T> GetAsyncEnumerator() {
        return new InMemoryDbAsyncEnumerator<T>(((IEnumerable<T>) this).GetEnumerator());
    }

    IDbAsyncEnumerator IDbAsyncEnumerable.GetAsyncEnumerator() {
        return GetAsyncEnumerator();
    }

    private class InMemoryDbAsyncEnumerator<T> : IDbAsyncEnumerator<T> {
        private readonly IEnumerator<T> _enumerator;

        public InMemoryDbAsyncEnumerator(IEnumerator<T> enumerator) {
            _enumerator = enumerator;
        }

        public void Dispose() {
        }

        public Task<bool> MoveNextAsync(CancellationToken cancellationToken) {
            return Task.FromResult(_enumerator.MoveNext());
        }

        public T Current => _enumerator.Current;

        object IDbAsyncEnumerator.Current => Current;
    }
}

Then you change results.Distinct().AsQueryable() to new AsyncEnumerableQuery<InternalOrderInfo>(results.Distinct()), and later, ToArrayAsync will no longer throw an exception (obviously you can create your own extension method like AsQueryable).

2) Change the ToArrayAsync part:

public static class EfExtensions {
    public static Task<TSource[]> ToArrayAsyncSafe<TSource>(this IQueryable<TSource> source) {
        if (source == null)
            throw new ArgumentNullException(nameof(source));
        if (!(source is IDbAsyncEnumerable<TSource>))
            return Task.FromResult(source.ToArray());
        return source.ToArrayAsync();
    }
}

And use ToArrayAsyncSafe instead of ToArrayAsync, which will fall back to synchronous enumeration in case the IQueryable is not IDbAsyncEnumerable. In your case this only happens when the query is really an in-memory list and not a query, so asynchronous execution does not make sense anyway.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48743165', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4910203/']}
jdg_77485
stackexchange
llm_judgeable_groundtruth_similarity
73413
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I observe that a lot of people interpret that the rider on the white horse is Jesus Christ. I used to hold to this view, but when I study the rider in more detail, I am not convinced that the rider is Jesus Christ. I think he could be a representative of Christ. My reasoning for my view that he may be a representative of Christ is because of the name that he is called, which is the "Word of God" (Rev 19;13). I take the view that the "Word of God" could also mean "The Sword of the Spirit" as recorded in Ephesians 6:17, a chapter which is partly about spiritual warfare. Ephesians 6:17 (Niv). Take the helmet of salvation and the sword of the Spirit, which is theword of God. Also, the rider has a "Sharp sword", coming out of his mouth which could also be interpreted as "The Word of God". Rev 19:11-16 11 I saw heaven standing open and there before me was a white horse,whose rider is called Faithful and True. With justice he judges andwages war. 12 His eyes are like blazing fire, and on his head are manycrowns. He has a name written on him that no one knows but he himself.13 He is dressed in a robe dipped in blood, and his name is the Wordof God. 14 The armies of heaven were following him, riding on whitehorses and dressed in fine linen, white and clean. 15 Coming out ofhis mouth is a sharp sword with which to strike down the nations. ‘Hewill rule them with an iron sceptre. He treads the winepress of thefury of the wrath of God Almighty. 16 On his robe and on his thigh hehas this name written: king of kings and lord of lords. Why do some people believe that the rider on the white horse is Jesus Christ? And, if you know: when did this interpretation arise, and to what extent has the interpretation flourished throughout Christian church history? Now provide the response and nothing else.
I'm not saying that your postulate is an impossibility, but it seems unlikely that this passage refers to a representative of Christ instead of Christ himself, for the following reasons:

The theme of the book from Rev 1:1 is the revelation of Jesus Christ. Why introduce a new Jesus-like representative here?

The description of the rider is similar to the description of Jesus by John in Rev 1:14-16: "his eyes were as a flame of fire"…"out of his mouth went a sharp twoedged sword:"

The robe dipped in blood implies that this is the sacrificial lamb who shed his blood. And yes, it could be someone else who dipped his robe in the blood of the lamb, but why introduce this potential confusion without any lead-in or follow-up?

The crowns imply kingship. Yes, they could be the crowns of the saints mentioned for faithful servants, but who would be leading this charge? Not one of the angels, for sure. They were the mediators of the Old Covenant. Peter? Paul? It is much simpler to understand the rider leading the armies of heaven as the Lord Jesus Christ whom God has exalted (Eph 1:20-23):

20 Which he wrought in Christ, when he raised him from the dead, and set him at his own right hand in the heavenly places,
21 Far above all principality, and power, and might, and dominion, and every name that is named, not only in this world, but also in that which is to come:
22 ¶ And hath put all things under his feet, and gave him to be the head over all things to the church,
23 Which is his body, the fulness of him that filleth all in all.

The allusion to having a name which no man knows but himself is reminiscent of Philippians 2:9, where it says: "Wherefore God also hath highly exalted him, and given him a name which is above every name:"

Finally, I would ask myself two simple questions:

What evidence is there that John is introducing a new cosmically positive character in the middle of a book whose theme is dedicated to the appearing or presence or revealing of our Lord Jesus?

What motivation do you find for seeking out a less obvious, more obscured, if you will, explanation of an event that has a very simple, clear exegesis?
{}
{'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/73413', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/2572/']}
jdg_77486
stackexchange
llm_judgeable_groundtruth_similarity
8706
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a large Excel .xls file. Miniature version shown below: (formatting should appear as two columns...) Test1 Test275 6458 8581 5564 6370 9488 9078 9696 9854 9775 61 I would like to do some statistical analysis on the file in Mathematica 8.0.4. Additionally I am using Mathematica Link for Excel 3.5. The issue I'm now attempting to address concerns the "headers," or "column labels." I need to tell Mathematica to disregard the headers. I want to use each respective header, however, as the name for the "variable stream" (Mathematica-speak for each column of data.) What might be the best ways of dealing with such headers? ===== Additional edit: The "one-liner" version seems to work just fine. Thank you Vitaliy. Follow-up Question: If the Excel .xls spreadsheet being imported is rather large -- 200 columns, 5000 rows -- will that present a problem with the Mathematica "one-liner" approach Vitaliy has suggested? i.e., memory inadequacy, inability to make subsequent steps? What I mean here is, is it completely okay to import an entire large Excel .xls all at once? Or is it far better to ONLY import "columns" as needed, that will immediately be used in subsequent Mathematica steps? I'd prefer to import all at once -- using Vitaliy's method -- if that's totally acceptable. Now provide the response and nothing else.
If your Excel file test.xls is very simple:

Then the code is a one-liner (if I understand correctly what is needed):

Set@@@Transpose[{ToExpression[First[#]], Transpose[Rest[#]]}&@Import["test.xls"][[1]]]

To check:

{Paris, Moscow}

{{1., 2., 3., 4., 5., 6., 7., 8.}, {12., 23., 34., 45., 56., 67., 78., 89.}}

The rest covers more complex cases. Imagine that you have a test.xls file with some non-trivial text distribution inside:

Doing simply this will import the whole thing:

Import["test.xls"]

{{{"", "", "", ""}, {"", "Paris ", "Moscow", "New York"}, {"", 1., 12., 56.}, {"", 2., 23., 56.}, {"", 3., 34., 56.}, {"", "New", "Old", "Current"}, {"", 4., 45., 56.}, {"", 5., 56., 56.}, {"", 6., 67., 56.}, {"", 7., 78., 56.}}}

Because Excel mostly stores numbers as reals, you can do this trick:

DeleteCases[Import["test.xls"], _String, Infinity]

{{{}, {}, {1., 12., 56.}, {2., 23., 56.}, {3., 34., 56.}, {}, {4., 45., 56.}, {5., 56., 56.}, {6., 67., 56.}, {7., 78., 56.}}}

No text, but now you've got some empty lists to remove:

DeleteCases[DeleteCases[Import["test.xls"], _String, Infinity], {}, Infinity]

{{{1., 12., 56.}, {2., 23., 56.}, {3., 34., 56.}, {4., 45., 56.}, {5., 56., 56.}, {6., 67., 56.}, {7., 78., 56.}}}

Now you're done; go play with your data:

PieChart[% // First, ColorFunction -> "Rainbow"]

This should work pretty universally, because you can do all sorts of pattern and string filtering. But you need to be careful in the general case, where uneven text distribution may produce ragged data arrays with unexpected structure.

Now, addressing @belisarius's comment: if you want to use your headers as variables and assign your data to them, then something like this will do. You probably then have a simpler file, with headers only:

This will do:

headers = DeleteCases[Cases[Import["test.xls"], _String, Infinity], ""] // ToExpression;
data = DeleteCases[DeleteCases[Import["test.xls"], _String, Infinity], {}, Infinity] // First // Transpose;
Set @@@ Transpose[{headers, data}];

Now let's check:

{Paris, Moscow}

{{1., 2., 3., 4., 5., 6., 7., 8.}, {12., 23., 34., 45., 56., 67., 78., 89.}}

This will work even if you have some empty rows/columns around your .XLS table in the Excel file.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/8706', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1119/']}
jdg_77487
stackexchange
llm_judgeable_groundtruth_similarity
1773367
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a name for the mathematical rule/axiom/property $x/x = 1?$ What are the conditions for it to apply? For instance, the rule does not apply where $x = 0$ or $x = \inf$. I saw one site that claimed it only applied to real numbers, but it does apply to imaginary ones too, so that rule is not complete. What is the complete rule and what is its name? Now provide the response and nothing else.
Rather than viewing division as an operation in its own right (that would take a dividend and a divisor to a quotient), mathematicians think about inverses of multiplication. So one thinks of $\frac x y$ as $x · y^{-1}$ where $y^{-1}$ is, by definition, a number inverse to $y$, i.e. fulfilling $y·y^{-1} = 1 = y^{-1}·y$. For example, $\frac 3 2$ is rather thought of as "three halves" (where a half is the inverse of two) than "three divided by two". (Though one still uses the latter parlance, in fact.)

Therefore, I don't think there is a name for this arithmetic law itself, at least I don't know of one. But there's of course one for the concept of inverse elements. A more fundamental concept is that of neutral elements, on which the concept of inverses depends.

And by the way: Division isn't viewed as an operation by itself because it behaves badly: It is not totally defined as a map, say $ℚ × ℚ → ℚ$ (you can't divide by zero), it fails to be associative, let alone commutative, and it doesn't have a neutral element.
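For a concrete instance of the failed associativity, with small numbers: $(8/4)/2 = 1$, but $8/(4/2) = 4$, so without a convention on bracketing, an expression like $8/4/2$ is ambiguous, which never happens with multiplication.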
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1773367', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/337338/']}
jdg_77488
stackexchange
llm_judgeable_groundtruth_similarity
18456498
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am developing with jQuery and I stumbled upon the following problem: I added an iframe inside the main page and I want to resize it from inside. I tried some ideas, but without success. Here is my code:

index.html

<html>
  <head>
    <title>Index</title>
  </head>
  <body>
    <iframe id="myframe" src="frame.html" width="100px" height="100px"></iframe>
  </body>
</html>

frame.html

<html>
  <head>
    <title>IFrame</title>
    <script>
      document.width = 500;
      document.height = 500;
    </script>
  </head>
  <body>
    <h2>My IFrame</h2>
  </body>
</html>

Now provide the response and nothing else.
When you create an IFRAME the browser automatically adds a 'window' object for the IFRAME inside the 'window' object of main page. You need to change the size of the IFRAME instead of the size of the document. Try this code: For JavaScript : window.parent.document.getElementById('myframe').width = '500px';window.parent.document.getElementById('myframe').height = '500px'; And for jQuery : $('#myframe', window.parent.document).width('500px');$('#myframe', window.parent.document).height('500px');
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18456498', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2708544/']}
jdg_77489
stackexchange
llm_judgeable_groundtruth_similarity
36899
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was recently reading the satirical, fictional book Cat's Cradle by Kurt Vonnegut. In this book, one of the major subplots is the invention of a mythical polymorph of ice called ice-9 in the book. In the book, it is described that if this polymorph of ice were to touch actual water, all the water, inevitably touching through oceans, rivers, lakes, etc. would freeze into this polymorph of ice which is purported to melt at a temperature over 100 degrees Fahrenheit. This raises two questions for me: First, would something like this actually happen? I know, for instance, a method people use for purifying crystals is to create a supersaturated solution and disturb the substance and it will crystallize quite dramatically. Could something similar happen here in the sense that the mythical ice-9 structure would be introduced into liquid water (which to ice-9 is a supersaturated solution of water) and create a massive crystal out of all water? Second, and more in reality, is there any solid that is like this? A solid which has two stable crystal structure that exist at temperature reachable in everyday life at about atmospheric pressure? And if so, can you actually convert one crystal structure to another easily? And would one of the liquids behave as a supersaturated solution of the crystal? Sorry if that is a little unclear. I am sincerely interested in this and can clarify if there is any confusion. Anything interesting along those lines would be considered relevant in my mind. Now provide the response and nothing else.
A precise physical argument against the existence of an Ice-9-like state of water is probably not trivial (ab initio methods required?). However, there is a simple but powerful argument against it; we haven't found anything like it yet. Water is a very common substance, and humans have been applying the scientific method to it for hundreds of years, humans have manipulated it for hundreds of thousands of years, but most importantly, the entire Earth has been churning massive amounts of water around for billions of years. If an Ice-9-like state of water did exist, we should have seen it by now.

To be clear, superficially this argument may seem to be a fallacious argument from ignorance. However, we can qualitatively back it up with a bit more knowledge. Assuming that Ice-9 is the true ground state of water, and that all liquid water on Earth is currently in a false ground state, separated only by a kinetic barrier, and realising that the conversion of liquid water to Ice-9 is autocatalytic, we must ask ourselves: if it only had to happen once, why hasn't it happened yet? The only reasonable conclusion is that, if it can happen, then the kinetic barrier to conversion must be enormous.

The interconversion of diamond to graphite is a chemical process with a famously large kinetic barrier (around $\mathrm{540\ kJ\ mol^{-1}}$), yet we have plentiful evidence of it happening many times on geological time scales (and we can do it in the lab too!). Thus the kinetic barrier for conversion between liquid water and Ice-9 would have to be even larger, which is unlikely. There's also a limit to how high we can expect chemical kinetic barriers to be, which we can put very generously at $\mathrm{100\ eV \approx 10\ MJ\ mol^{-1}}$. Water molecules on Earth have had many, many, many interactions at energies way higher than this, as they have been incessantly exposed to cosmic radiation, nuclear decays, lightning strikes, high temperatures, etc. In short, the collective history of all water molecules on Earth has explored a vast region of the space of allowed states, and not once has an Ice-9-like state been found. It is thus fair to assume it simply does not exist.

Regarding your second question, I just remembered a great example: pure tin. At ambient pressure, solid tin can exist in two different polymorphs, and the transition temperature between them is conveniently close to room temperature. Above 13.2 ºC, tin is most stable as a ductile metallic material, called white tin, whereas below 13.2 ºC, the most stable form is a brittle non-metal called grey tin. It is possible to sustain white tin below 13.2 ºC for a period of time, creating a metastable system, much like a supersaturated solution. However, an adequate disturbance will cause the white tin to fall to its ground state. Like Vonnegut's Ice-9, this transition is autocatalytic; once conversion begins, there's no stopping it below 13.2 ºC, creating the so-called tin pest. There are actually several great videos showing the process, which you can easily find on YouTube, such as this one.
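As a footnote on the $\mathrm{100\ eV}$ estimate: one electronvolt per molecule corresponds to about $\mathrm{96.5\ kJ\ mol^{-1}}$ (the elementary charge multiplied by the Avogadro constant), so $\mathrm{100\ eV \times 96.5\ kJ\ mol^{-1}\ eV^{-1} \approx 9.6\ MJ\ mol^{-1}}$, which rounds to the $\mathrm{10\ MJ\ mol^{-1}}$ quoted above.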
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/36899', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/17583/']}
jdg_77490
stackexchange
llm_judgeable_groundtruth_similarity
2981836
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: myclass is a C++ class written by me and when I write: myclass x;cout << x; How do I output 10 or 20.2 , like an integer or a float value? Now provide the response and nothing else.
Typically by overloading operator<< for your class:

#include <iostream>

struct myclass {
    int i;
};

std::ostream &operator<<(std::ostream &os, myclass const &m) {
    return os << m.i;
}

int main() {
    myclass x{10};  // brace (aggregate) initialization; myclass has no (int) constructor
    std::cout << x;
    return 0;
}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2981836', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/359387/']}
jdg_77491
stackexchange
llm_judgeable_groundtruth_similarity
366355
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My coding style for nested function calls is the following: var result_h1 = H1(b1);var result_h2 = H2(b2);var result_g1 = G1(result_h1, result_h2);var result_g2 = G2(c1);var a = F(result_g1, result_g2); I have recently changed to a department where the following coding style is very much in use: var a = F(G1(H1(b1), H2(b2)), G2(c1)); The result of my way of coding is that, in case of a crashing function, Visual Studio can open the corresponding dump and indicate the line where the problem occurs (I'm especially concerned about access violations). I fear that, in case of a crash due to the same problem programmed in the first way, I won't be able to know which function has caused the crash. On the other hand, the more processing you put on a line, the more logic you get on one page, which enhances readability. Is my fear correct or am I missing something, and in general, which is preferred in a commercial environment? Readability or maintainability? I don't know if it's relevant, but we are working in C++ (STL) / C#. Now provide the response and nothing else.
If you felt compelled to expand a one-liner like

a = F(G1(H1(b1), H2(b2)), G2(c1));

I wouldn't blame you. That's not only hard to read, it's hard to debug. Why?

1. It's dense
2. Some debuggers will only highlight the whole thing at once
3. It's free of descriptive names

If you expand it with intermediate results you get

var result_h1 = H1(b1);
var result_h2 = H2(b2);
var result_g1 = G1(result_h1, result_h2);
var result_g2 = G2(c1);
var a = F(result_g1, result_g2);

and it's still hard to read. Why? It solves two of the problems and introduces a fourth:

1. It's dense (fixed)
2. Some debuggers will only highlight the whole thing at once (fixed)
3. It's free of descriptive names
4. It's cluttered with non-descriptive names (new)

If you expand it with names that add new, good, semantic meaning, even better! A good name helps me understand.

var temperature = H1(b1);
var humidity = H2(b2);
var precipitation = G1(temperature, humidity);
var dewPoint = G2(c1);
var forecast = F(precipitation, dewPoint);

Now at least this tells a story. It fixes the problems and is clearly better than anything else offered here, but it requires you to come up with the names.

If you do it with meaningless names like result_this and result_that because you simply can't think of good names, then I'd really prefer you spare us the meaningless name clutter and expand it using some good old whitespace:

int a = F(
    G1(
        H1(b1),
        H2(b2)
    ),
    G2(c1)
);

It's just as readable, if not more so, than the one with the meaningless result names (not that these function names are that great).

1. It's dense (fixed)
2. Some debuggers will only highlight the whole thing at once (fixed)
3. It's free of descriptive names
4. It's cluttered with non-descriptive names (fixed)

When you can't think of good names, that's as good as it gets.

For some reason debuggers love new lines, so you should find that debugging this isn't difficult. If that's not enough, imagine G2() was called in more than one place and then this happened:

Exception in thread "main" java.lang.NullPointerException
    at composition.Example.G2(Example.java:34)
    at composition.Example.main(Example.java:18)

I think it's nice that since each G2() call would be on its own line, this style takes you directly to the offending call in main.

So please don't use problems 1 and 2 as an excuse to stick us with problem 4. Use good names when you can think of them. Avoid meaningless names when you can't.

Lightness Races in Orbit's comment correctly points out that these functions are artificial and have dead poor names themselves. So here's an example of applying this style to some code from the wild:

var user = db.t_ST_User.Where(_user => string.Compare(domain, _user.domainName.Trim(), StringComparison.OrdinalIgnoreCase) == 0).Where(_user => string.Compare(samAccountName, _user.samAccountName.Trim(), StringComparison.OrdinalIgnoreCase) == 0).Where(_user => _user.deleted == false).FirstOrDefault();

I hate looking at that stream of noise, even when word wrapping isn't needed. Here's how it looks under this style:

var user = db
    .t_ST_User
    .Where(
        _user =>
        string.Compare(
            domain,
            _user.domainName.Trim(),
            StringComparison.OrdinalIgnoreCase
        ) == 0
    )
    .Where(
        _user =>
        string.Compare(
            samAccountName,
            _user.samAccountName.Trim(),
            StringComparison.OrdinalIgnoreCase
        ) == 0
    )
    .Where(_user => _user.deleted == false)
    .FirstOrDefault();

As you can see, I've found this style works well with the functional code that's moving into the object-oriented space. If you can come up with good names to do that in the intermediate style, then more power to you. Until then I'm using this.
But in any case, please find some way to avoid meaningless result names. They make my eyes hurt.
{}
{'log_upvote_score': 8, 'links': ['https://softwareengineering.stackexchange.com/questions/366355', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/250257/']}
jdg_77492
stackexchange
llm_judgeable_groundtruth_similarity
19379
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Just saw this one shared on Facebook by Occupy Democrats with 79k+ likes. It's a repost of this by Less Than 10 Dollars an Hour . Text reads: Today's Bit of Irony Owsley County, Kentucky Is 99.22% White and 95% Republican It has the Highest Usage Of SNAP Benefits In the USA Now provide the response and nothing else.
Unless there are more recent statistics on SNAP usage than the official statistics available from the US Department of Agriculture, or someone has a very creative definition of "highest usage", the statement is false. If we go back a few years, the statistics may however back the claim.

In the statistics, there are two different values, both of which could be described as "usage" in colloquial language. One value is the ratio of the population receiving SNAP benefits, the other being the average benefit amount per inhabitant.

If we take a look at the latest statistics from 2010, 52.08% of the population in Owsley County received SNAP benefits, ranking Owsley behind Shannon County, SD (59.33%), Todd County, SD (55.08%) and Wade Hampton, AK (54.13%). If we go back to 2008 and earlier, Owsley County was indeed in the "lead", but the three other counties passed Owsley in 2009 when it comes to the number of recipients per population count.

If we look at the average payment per inhabitant, Owsley is even further down the list. The statistics do, however, contain some very strange numbers, which seem erroneous to me. E.g. Hopewell City, VA, with a population count of 22655, allegedly received $243,376,900 in SNAP benefits in 2010, making an average monthly payment per inhabitant of $895.23, or per participant of $3,051.67.
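For reference, the arithmetic behind that last figure: 243,376,900 / 22,655 ≈ $10,742 per inhabitant per year, i.e. roughly $895 per month, which is an obviously implausible average for an entire city.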
{}
{'log_upvote_score': 5, 'links': ['https://skeptics.stackexchange.com/questions/19379', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/3835/']}
jdg_77493
stackexchange
llm_judgeable_groundtruth_similarity
7347
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm talking about the way we write simple routines in order to improve performance without making your code harder to read... for instance, this is the typical for we learned: for(int i = 0; i < collection.length(); i++ ){ // stuff here} But, I usually do this when a foreach is not applicable: for(int i = 0, j = collection.length(); i < j; i++ ){ // stuff here} I think this is a better approach since it will call the length method once only... my girlfriend says it's cryptic though. Is there any other simple trick you use on your own developments? Now provide the response and nothing else.
insert premature-discussion-is-the-root-of-all-evil lecture

That said, here are some habits I've gotten into to avoid unnecessary inefficiency, and in some cases, make my code simpler and more correct as well. This isn't a discussion of general principles, but of some things to be aware of to avoid introducing unnecessary inefficiencies into code.

Know your big-O

This should probably be merged into the lengthy discussion above. It's pretty much common sense that a loop inside of a loop, where the inner loop repeats a calculation, is gonna be slower. For example:

for (i = 0; i < strlen(str); i++) {
    ...
}

This will take a horrendous amount of time if the string is really long, because the length is being recalculated on every iteration of the loop. Note that GCC actually optimizes this case because strlen() is marked as a pure function.

When sorting a million 32-bit integers, bubble sort would be the wrong way to go. In general, sorting can be done in O(n * log n) time (or better, in the case of radix sort), so unless you know your data is going to be small, look for an algorithm that's at least O(n * log n).

Likewise, when dealing with databases, be aware of indexes. If you SELECT * FROM people WHERE age = 20, and you don't have an index on people(age), it'll require an O(n) sequential scan rather than a much faster O(log n) index scan.

Integer arithmetic hierarchy

When programming in C, bear in mind that some arithmetic operations are more expensive than others. For integers, the hierarchy goes something like this (least expensive first):

+ - ~ & | ^
<< >>
*
/

Granted, the compiler will usually optimize things like n / 2 to n >> 1 automatically if you're targeting a mainstream computer, but if you're targeting an embedded device, you might not get that luxury.

Also, % 2 and & 1 have different semantics. Division and modulus usually round toward zero, but it's implementation defined. Good ol' >> and & always round toward negative infinity, which (in my opinion) makes a lot more sense. For instance, on my computer:

printf("%d\n", -1 % 2); // -1 (maybe)
printf("%d\n", -1 & 1); // 1

Hence, use what makes sense. Don't think you're being a good boy by using % 2 when you were originally going to write & 1.

Expensive floating point operations

Avoid heavy floating point operations like pow() and log() in code that doesn't really need them, especially when dealing with integers. Take, for example, reading a number:

int parseInt(const char *str)
{
    const char *p;
    int digits;
    int number;
    int position;

    // Count the number of digits
    for (p = str; isdigit(*p); p++)
        {}
    digits = p - str;

    // Sum the digits, multiplying them by their respective power of 10.
    number = 0;
    position = digits - 1;
    for (p = str; isdigit(*p); p++, position--)
        number += (*p - '0') * pow(10, position);

    return number;
}

Not only is this use of pow() (and the int <-> double conversions needed to use it) rather expensive, but it creates an opportunity for precision loss (incidentally, the code above doesn't have precision issues). That's why I wince when I see this type of function used in a non-mathematical context.

Also, notice how the "clever" algorithm below, which multiplies by 10 on each iteration, is actually more concise than the code above:

int parseInt(const char *str)
{
    const char *p;
    int number;

    number = 0;
    for (p = str; isdigit(*p); p++) {
        number *= 10;
        number += *p - '0';
    }

    return number;
}
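As a closing sketch, the usual fix for the strlen() loop in the first example, when you can't rely on the compiler hoisting it for you, is simply to evaluate the length once up front:

size_t len = strlen(str);  /* computed once, not on every iteration */
for (size_t i = 0; i < len; i++) {
    ...
}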
{}
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/7347', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/1773/']}
jdg_77494
stackexchange
llm_judgeable_groundtruth_similarity
17308818
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a List of messages.Each message has a type. public enum MessageType{ Foo = 0, Bar = 1, Boo = 2, Doo = 3} The enum names are arbitrary and cannot be changed. I need to return the list sorted as: Boo, Bar, Foo, Doo My current solution is to create a tempList, add the values in the order I want, return the new list. List<Message> tempList = new List<Message>();tempList.AddRange(messageList.Where(m => m.MessageType == MessageType.Boo));tempList.AddRange(messageList.Where(m => m.MessageType == MessageType.Bar));tempList.AddRange(messageList.Where(m => m.MessageType == MessageType.Foo));tempList.AddRange(messageList.Where(m => m.MessageType == MessageType.Doo));messageList = tempList; How can I do this with an IComparer ? Now provide the response and nothing else.
So, let's write our own comparer:

public class MyMessageComparer : IComparer<MessageType>
{
    protected IList<MessageType> orderedTypes { get; set; }

    public MyMessageComparer()
    {
        // reorder these however you want
        orderedTypes = new List<MessageType>()
        {
            MessageType.Boo,
            MessageType.Bar,
            MessageType.Foo,
            MessageType.Doo,
        };
    }

    public int Compare(MessageType x, MessageType y)
    {
        var xIndex = orderedTypes.IndexOf(x);
        var yIndex = orderedTypes.IndexOf(y);
        return xIndex.CompareTo(yIndex);
    }
};

How to use:

messages.OrderBy(m => m.MessageType, new MyMessageComparer())

There is an easier way: just create the orderedTypes list and use another overload of OrderBy:

var orderedTypes = new List<MessageType>()
{
    MessageType.Boo,
    MessageType.Bar,
    MessageType.Foo,
    MessageType.Doo,
};
messages.OrderBy(m => orderedTypes.IndexOf(m.MessageType)).ToList();

Hmm... let's try to take advantage of writing our own IComparer. The idea: keep the behavior of the last example, but with nicer semantics. Like this:

messages.OrderBy(
    m => m.MessageType,
    new EnumComparer<MessageType>() { MessageType.Boo, MessageType.Foo });

Or this:

messages.OrderBy(m => m.MessageType, new EnumComparer<MessageType>());

Okay, so here's what we need. Our own comparer:

Must accept an enum as its generic type (how to solve)
Must be usable with collection initializer syntax (how to)
Must sort by the default order when we have no enum values in our comparer (or when some enum values aren't in our comparer)

So, here is the code:

public class EnumComparer<TEnum> : IComparer<TEnum>, IEnumerable<TEnum>
    where TEnum : struct, IConvertible
{
    protected static IList<TEnum> TypicalValues { get; set; }

    protected IList<TEnum> _reorderedValues;
    protected IList<TEnum> ReorderedValues
    {
        get { return _reorderedValues.Any() ? _reorderedValues : TypicalValues; }
        set { _reorderedValues = value; }
    }

    static EnumComparer()
    {
        if (!typeof(TEnum).IsEnum)
        {
            throw new ArgumentException("T must be an enumerated type");
        }
        TypicalValues = new List<TEnum>();
        foreach (TEnum value in Enum.GetValues(typeof(TEnum)))
        {
            TypicalValues.Add(value);
        }
    }

    public EnumComparer(IList<TEnum> reorderedValues = null)
    {
        if (reorderedValues == null)
        {
            _reorderedValues = new List<TEnum>();
            return;
        }
        _reorderedValues = reorderedValues;
    }

    public void Add(TEnum value)
    {
        if (_reorderedValues.Contains(value))
            return;
        _reorderedValues.Add(value);
    }

    public int Compare(TEnum x, TEnum y)
    {
        var xIndex = ReorderedValues.IndexOf(x);
        var yIndex = ReorderedValues.IndexOf(y);

        // values missing from our order list go to the end,
        // ordered between themselves by the default enum order
        if (xIndex == -1)
        {
            if (yIndex == -1)
            {
                xIndex = TypicalValues.IndexOf(x);
                yIndex = TypicalValues.IndexOf(y);
                return xIndex.CompareTo(yIndex);
            }
            return 1;  // x is unlisted, y is listed: x sorts after y
        }
        if (yIndex == -1)
        {
            return -1; // x is listed, y is unlisted: x sorts first
        }
        return xIndex.CompareTo(yIndex);
    }

    public void Clear()
    {
        _reorderedValues = new List<TEnum>();
    }

    private IEnumerable<TEnum> GetEnumerable()
    {
        return Enumerable.Concat(
            ReorderedValues,
            TypicalValues.Where(v => !ReorderedValues.Contains(v)));
    }

    public IEnumerator<TEnum> GetEnumerator()
    {
        return GetEnumerable().GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerable().GetEnumerator();
    }
}

So, now let's make sorting faster.
We need to define our own OrderBy extension method for our enums:

public static class LinqEnumExtensions
{
    public static IEnumerable<TSource> OrderBy<TSource, TEnum>(
        this IEnumerable<TSource> source,
        Func<TSource, TEnum> selector,
        EnumComparer<TEnum> enumComparer) where TEnum : struct, IConvertible
    {
        foreach (var enumValue in enumComparer)
        {
            foreach (var sourceElement in source.Where(item => selector(item).Equals(enumValue)))
            {
                yield return sourceElement;
            }
        }
    }
}

Yeah, that's lazy. You can google how yield works. Well, let's test the speed. Simple benchmark: http://pastebin.com/P8qaU20Y . Results for n = 1000000:

Enumerable orderBy, elementAt: 00:00:04.5485845
Own orderBy, elementAt: 00:00:00.0040010
Enumerable orderBy, full sort: 00:00:04.6685977
Own orderBy, full sort: 00:00:00.4540575

We see that our own OrderBy is lazier than the standard OrderBy (it doesn't need to sort everything), and faster even for a full sort. Problems with this code: it doesn't support ThenBy(). If you need that, you can write your own LINQ extension that returns IOrderedEnumerable. There is a blog post series by Jon Skeet which goes into LINQ to Objects in some depth, providing a complete alternative implementation. The basics of IOrderedEnumerable are covered in parts 26a and 26b, with more details and optimization in 26c and 26d.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17308818', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/577571/']}
jdg_77495
stackexchange
llm_judgeable_groundtruth_similarity
238073
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: This question borders between a programming and math question (more math). I have a little matrix knowledge but this is past my ability, so any help is very much appreciated. Question I have a very large sparse matrix (say one million rows square). I'd like to solve this for many different sets of constant values (to clarify, by constant values, I mean the 1xn matrix 'b' in Ax = b). Performance is absolutely crucial - so it would make sense to do whatever simplification I can in advance of introducing each set of constants. Goal I'd like to get the time complexity for solving each constant set down to O(n)... I may be dreaming however. Other Info The matrix really is a million (maybe even 100 million) rows It's very sparse It has a bandwidth ranging between 2 and as much as a few hundred Even for larger bandwidths, each row has max. 7 or 8 non zero entries It takes an (extremely rough) diagonal form Each matrix 'b' in Ax = b is actually very sparse itself Due to the nature of the problem, a solution should always be available All nonzero values are real, floating point numbers. They can be negative however. Now provide the response and nothing else.
Instead of reinventing the wheel, look at the major linear algebra packages. Many have procedures for large sparse matrices. One method is "sparse $LU$ decomposition" (google that phrase for tutorials on it). Once the matrix is factored, solving multiple equations with different right sides becomes fast. The difficult part is factoring into sparse factors, which is tricky, and that's why it would be best to use existing code rather than write your own. It is theoretically possible for a sparse matrix to be impossible to factor into sparse factors (the additional non-zero entries are called "fill-in") but there are heuristics that attempt to keep the factors sparse and in practice they are often successful. The only sure way to know if it will work is to try it. Note that if your matrix is symmetric there will be special versions for that case.
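To make the factor-once, solve-many idea concrete, here is a rough sketch using SciPy's sparse LU; the matrix is a placeholder (random sparse plus the identity) standing in for your real banded system:

import numpy as np
from scipy.sparse import random as sparse_random, eye
from scipy.sparse.linalg import splu

n = 10000
# Placeholder system with ~5 nonzeros per row; substitute your real matrix.
# Adding the identity keeps this toy matrix comfortably non-singular.
A = (sparse_random(n, n, density=5.0 / n) + eye(n)).tocsc()

lu = splu(A)               # the expensive factorization: do it once

for _ in range(100):       # each new right-hand side is then cheap
    b = np.random.rand(n)  # placeholder for one of your many b vectors
    x = lu.solve(b)

If the factors stay sparse (little fill-in), each solve costs roughly in proportion to the number of nonzeros in the factors, which is about as close to the hoped-for O(n) as direct methods get.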
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/238073', 'https://mathoverflow.net', 'https://mathoverflow.net/users/91257/']}
jdg_77496
stackexchange
llm_judgeable_groundtruth_similarity
52801
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $A + B = (A - B) \cup (B - A)$ also known as the symmetric difference. Look for the identity and let $e$ be the identity element $A + e = A$ $(A - e) \cup (e - A) = A$ Now there are two cases: $(A - e) = A$This equation can be interpreted as removing from A all elements that belong to $e$ to yield the set $A$. In order for this statement to be true, the identity element $e$ must be the empty set. $(e - A) = A$This equation can be interpreted as removing from $e$ all elements that belong to $A$ to generate a set $A$. Is this statement undefined? Now provide the response and nothing else.
$e-A = A$ is clearly impossible: $e-A$ contains no element of the set $A$ by construction. The only candidate for an identity is clearly the empty set, because $(e-A)$ doesn't contain any element of $A$ and hence must not contain any element at all. For if $e$ contained an element, there would exist a set $A$ which does not contain this element, so $(e-A)$ would contain an element not in $A$, contradicting the fact that $(A-e) \cup (e-A) = A$. Now $A-\varnothing = A$, so $\varnothing$ is your identity. Hope that helps,
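A one-minute empirical check of the conclusion, if it helps (Python's ^ operator on sets is exactly the symmetric difference):

A = {1, 2, 3}
e = set()          # candidate identity: the empty set

print(A ^ e == A)  # True: the empty set is the identity
print(A ^ A == e)  # True: incidentally, every set is its own inverse under ^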
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/52801', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/13576/']}
jdg_77497
stackexchange
llm_judgeable_groundtruth_similarity
13009893
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Assume you have (in Postgres 9.1 ) a table like this: date | value which have some gaps in it (I mean: not every possible date between min(date) and max(date) has it's row). My problem is how to aggregate this data so that each consistent group (without gaps) is treated separately, like this: min_date | max_date | [some aggregate of "value" column] Any ideas how to do it? I believe it is possible with window functions but after a while trying with lag() and lead() I'm a little stuck. For instance if the data are like this: date | value ---------------+------- 2011-10-31 | 2 2011-11-01 | 8 2011-11-02 | 10 2012-09-13 | 1 2012-09-14 | 4 2012-09-15 | 5 2012-09-16 | 20 2012-10-30 | 10 the output (for sum as the aggregate) would be: min | max | sum -----------+------------+------- 2011-10-31 | 2011-11-02 | 20 2012-09-13 | 2012-09-16 | 30 2012-10-30 | 2012-10-30 | 10 Now provide the response and nothing else.
create table t ("date" date, "value" int);
insert into t ("date", "value") values
    ('2011-10-31', 2),
    ('2011-11-01', 8),
    ('2011-11-02', 10),
    ('2012-09-13', 1),
    ('2012-09-14', 4),
    ('2012-09-15', 5),
    ('2012-09-16', 20),
    ('2012-10-30', 10);

Simpler and cheaper version:

select min("date"), max("date"), sum(value)
from (
    select "date", value,
           "date" - (dense_rank() over (order by "date"))::int g
    from t
) s
group by s.g
order by 1

My first try was more complex and expensive:

create temporary sequence s;

select min("date"), max("date"), sum(value)
from (
    select "date", value, d,
           case
               when lag("date", 1, null) over (order by s.d) is null
                    and "date" is not null
                   then nextval('s')
               when lag("date", 1, null) over (order by s.d) is not null
                    and "date" is not null
                   then lastval()
               else 0
           end g
    from t
    right join generate_series(
        (select min("date") from t)::date,
        (select max("date") from t)::date + 1,
        '1 day'
    ) s(d) on s.d::date = t."date"
) q
where g != 0
group by g
order by 1;

drop sequence s;

The output:

 min        | max        | sum
------------+------------+-----
 2011-10-31 | 2011-11-02 |  20
 2012-09-13 | 2012-09-16 |  30
 2012-10-30 | 2012-10-30 |  10
(3 rows)
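As an aside, the same "value minus its rank is constant within a gapless run" trick that powers the simpler query can be sketched outside SQL as well; here is a rough Python illustration using the question's sample data:

from datetime import date, timedelta
from itertools import groupby

rows = [(date(2011, 10, 31), 2), (date(2011, 11, 1), 8), (date(2011, 11, 2), 10),
        (date(2012, 9, 13), 1), (date(2012, 9, 14), 4), (date(2012, 9, 15), 5),
        (date(2012, 9, 16), 20), (date(2012, 10, 30), 10)]

# Within a gapless run of dates, (date - row_index) is constant; that is
# exactly what "date" - dense_rank() computes in the query above.
def island_key(indexed_row):
    i, (d, _value) = indexed_row
    return d - timedelta(days=i)

for _, group in groupby(enumerate(rows), key=island_key):
    items = [row for _, row in group]
    print(items[0][0], items[-1][0], sum(v for _, v in items))

# 2011-10-31 2011-11-02 20
# 2012-09-13 2012-09-16 30
# 2012-10-30 2012-10-30 10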
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13009893', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/587578/']}
jdg_77498
stackexchange
llm_judgeable_groundtruth_similarity
52010
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am wondering if fractional oxidation states of an atom are possible. I'm not referring to cases such as $\ce{Fe3O4}$ or $\ce{Mn3O4}$ where the average oxidation state is fractional, since these actually comprise a mixture of atoms which are individually in the +2 and +3 oxidation states. What I mean is, is it possible for an individual atom in some compound to have an oxidation state of (for example) 2.5? To me it doesn't seem possible just because of the way oxidation states are defined. However I have seen some sources which state that fractional oxidation states are possible. I would be interested in knowing if there is some weird compound that has fractional oxidation states? Note: This is not a duplicate of Are fractional oxidation states possible? I want to know if it is possible for an individual atom in some compound to have a fractional oxidation state, not its average fractional oxidation states. Now provide the response and nothing else.
It depends. Consider various radicals such as the superoxide anion $\ce{O2^{.-}}$ or $\ce{NO2^{.}}$. For both of these, we can draw simple Lewis representations: In these structures, the oxygen atoms would have different oxidation states ($\mathrm{-I}$ and $\pm 0$ for superoxide, $\mathrm{-II}$ and $\mathrm{-I}$ for $\ce{NO2}$). That is the strict, theoretical IUPAC answer to the question. However, we also see that the oxygens are symmetry-equivalent (homotopic) and should thus be identical. Different oxidation states violate the identity rule. For each compound, we can imagine an additional resonance structure that puts the radical on the other oxygen. (For $\ce{NO2}$, we can also draw resonance structures that locate a radical on both oxygens and another one that expands nitrogen’s octet and localises the radical there.) To better explain this physical reality theoretically, we can calculate a ‘resonance-derived average oxidation state’ which would be $-\frac{1}{2}$ for superoxide and $-\frac{3}{2}$ for $\ce{NO2}$. This is not in agreement with IUPAC’s formal definition but closer to the physical reality.
{}
{'log_upvote_score': 6, 'links': ['https://chemistry.stackexchange.com/questions/52010', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/18002/']}
jdg_77499
stackexchange
llm_judgeable_groundtruth_similarity
25847191
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: the following code will check to see if you have any duplicate characters in the string, but i don't understand the if clause: public static boolean isUniqueChars(String str) { int checker = 0; for (int i = 0; i < str.length(); ++i) { int val = str.charAt(i) - 'a'; if ((checker & (1 << val)) > 0) return false; checker |= (1 << val); } return true; } I tried to look up some references, I am new to bit shifting, all i understand is that << shifts the binary number left or right. Can you explain to me how checker |= (1 << val) works ? and that 'if' statement as well. Now provide the response and nothing else.
I was also going through the book Cracking the Coding Interview and ended up googling for a clear explanation. Finally I understood the concept. Here is the approach.

Note: We will assume, in the code below, that the string contains only lower case 'a' through 'z'. This will allow us to use just a single int.

A Java integer is 32 bits. The number of lower case letters is 26. So we can clearly store a 0/1 (true or false) value for each letter inside one integer. It is similar to bool visited[32]. A bool uses 1 byte, hence you would need 32 bytes for storing bool visited[32]. Bit masking is a space optimization of this.

Let's start: you are looping through all the characters in the string. Suppose on the i'th iteration you find the character 'b'. You calculate its 0-based index:

int val = str.charAt(i) - 'a';

For 'b' it is 1, i.e. 98 - 97. Now using the left shift operator, we find the value of 2^1 => 2.

(1 << val) // 1 << 1 => 10 (binary)

Now let us see how bitwise & works:

0 & 0 -> 0
0 & 1 -> 0
1 & 0 -> 0
1 & 1 -> 1

So with the code below:

(checker & (1 << val))

we check whether checker[val] == 0. Suppose we had already encountered 'b':

check = 0000 0000 0000 0000 0000 1000 1000 0010
& 'b' = 0000 0000 0000 0000 0000 0000 0000 0010
----------------------------------------------
result= 0000 0000 0000 0000 0000 0000 0000 0010

i.e. a decimal value of 2, which is > 0. So now we understand this part:

if ((checker & (1 << val)) > 0)
    return false;

Now, if 'b' was not encountered yet, then we set the second bit of checker using bitwise OR (this part is called bit masking).

OR's truth table:

0 | 0 -> 0
0 | 1 -> 1
1 | 0 -> 1
1 | 1 -> 1

So:

check = 0000 0000 0000 0000 0000 1000 1000 0000
| 'b' = 0000 0000 0000 0000 0000 0000 0000 0010
----------------------------------------------
result= 0000 0000 0000 0000 0000 1000 1000 0010

So that explains this part:

checker |= (1 << val); // checker = checker | (1 << val);

I hope this helped someone!
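For comparison, here is the same checker as a minimal Python sketch (same lowercase-only assumption; Python ints are arbitrary precision, so the 26 bits fit trivially):

def is_unique_chars(s):
    checker = 0
    for ch in s:
        val = ord(ch) - ord('a')   # 0-based index of the letter
        if checker & (1 << val):   # bit already set -> duplicate found
            return False
        checker |= 1 << val        # mark this letter as seen
    return True

print(is_unique_chars("abc"))   # True
print(is_unique_chars("abca"))  # False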
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25847191', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4011551/']}
jdg_77500
stackexchange
llm_judgeable_groundtruth_similarity
26252038
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm building a website that requires a carousel to be implemented. Because this website is built on AngularJS I wanted to go with Angulars Boostrap Carousel, however, this carousel appears to only allow one image at a time. What I will need will be 3 images at a time on desktop, on a tablet 2 images and on mobile 1. So there's a significant element of responsive design involved here too. Does anyone have any experince with this that doesn't involve JQuery? I'm not opposed to it but have been told by a senior member of the team to try to source an alternative, if any. What I tried from Angulars bootstrap: $scope.getPromoURLs = function() { var subObj = myJSON.response.details.promotionalSpots; for( var keys in subObj ) { var value = subObj[keys].promotionUrl; $scope.slides.push( value ); } }; // Builts an array of promotional URLS to from a JSON object to source the images $scope.getPromoURLs(); $scope.addSlide = function () { // Test to determine if 3 images can be pulled together - FAILS var newWidth = 600 + slides.length; slides.push({ image: ''+slides[0]+''+slides[1] // etc // Tried to stitch images together here }); }; // TODO Should examine array length not hardcoded 4 for (var i = 0; i < 4; i++) { $scope.addSlide(); } Now provide the response and nothing else.
ui-bootstrap's carousel is not a good choice here; it has other drawbacks too, such as an isolated scope on each slide. I'm using https://github.com/revolunet/angular-carousel which supports multiple items on each slide. Because this directive supports ng-repeat, you can easily change your collection and use a nested ng-repeat to put a different number of items on each slide.

<ul rn-carousel class="image">
    <li ng-repeat="images in imageCollection">
        <div ng-repeat="image in images" class="layer">{{ image }}</div>
    </li>
</ul>

As you have already defined 3 breakpoints, we just need to rebuild the imageCollection array when the viewport size changes.

$window.on('resize', function() {
    var width = $window.width();
    if (width > 900) {
        // desktop
        rebuildSlide(3);
    } else if (width <= 900 && width > 480) {
        // tablet
        rebuildSlide(2);
    } else {
        // phone
        rebuildSlide(1);
    }
    // don't forget to manually trigger $digest()
    $scope.$digest();
});

function rebuildSlide(n) {
    var imageCollection = [],
        slide = [],
        index;
    // values is your actual data collection.
    for (index = 0; index < values.length; index++) {
        if (slide.length === n) {
            imageCollection.push(slide);
            slide = [];
        }
        slide.push(values[index]);
    }
    imageCollection.push(slide);
    $scope.imageCollection = imageCollection;
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26252038', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1100874/']}
jdg_77501
stackexchange
llm_judgeable_groundtruth_similarity
3812429
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The question arose when answering to another SO question ( there ). When I iterate several times over a python set (without changing it between calls), can I assume it will always return elements in the same order? And if not, what is the rationale of changing the order ? Is it deterministic, or random? Or implementation defined? And when I call the same python program repeatedly (not random, not input dependent), will I get the same ordering for sets? The underlying question is if python set iteration order only depends on the algorithm used to implement sets, or also on the execution context? Now provide the response and nothing else.
There's no formal guarantee about the stability of sets. However, in the CPython implementation, as long as nothing changes the set, the items will be produced in the same order. Sets are implemented as open-addressing hashtables (with a prime probe), so inserting or removing items can completely change the order (in particular, when that triggers a resize, which reorganizes how the items are laid out in memory). You can also have two identical sets that nonetheless produce the items in different order, for example:

>>> s1 = {-1, -2}
>>> s2 = {-2, -1}
>>> s1 == s2
True
>>> list(s1), list(s2)
([-1, -2], [-2, -1])

Unless you're very certain you have the same set and nothing touched it in between the two iterations, it's best not to rely on the order staying the same. Making seemingly irrelevant changes to, say, functions you call in between could produce very hard-to-find bugs.
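A short sketch of the practical takeaway: if you need a stable, reproducible order, impose one explicitly instead of relying on iteration order (assuming the elements are sortable):

s = {5, -1, 3, -2}

# Deterministic across runs and implementations: an explicit order.
for x in sorted(s):
    print(x)

stable = sorted(s)  # a snapshot with a well-defined order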
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3812429', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/168465/']}
jdg_77502
stackexchange
llm_judgeable_groundtruth_similarity
9627546
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to version my API like Stripe has. Below is given the latest API version is 2. /api/users returns a 301 to /api/v2/users /api/v1/users returns a 200 of users index at version 1 /api/v3/users returns a 301 to /api/v2/users /api/asdf/users returns a 301 to /api/v2/users So that basically anything that doesn't specify the version links to the latest unless the specified version exists then redirect to it. This is what I have so far: scope 'api', :format => :json do scope 'v:api_version', :api_version => /[12]/ do resources :users end match '/*path', :to => redirect { |params| "/api/v2/#{params[:path]}" }end Now provide the response and nothing else.
The original form of this answer is wildly different, and can be found here . Just proof that there's more than one way to skin a cat. I've updated the answer since to use namespaces and to use 301 redirects -- rather than the default of 302. Thanks to pixeltrix and Bo Jeanes for the prompting on those things. You might want to wear a really strong helmet because this is going to blow your mind . The Rails 3 routing API is super wicked. To write the routes for your API, as per your requirements above, you need just this: namespace :api do namespace :v1 do resources :users end namespace :v2 do resources :users end match 'v:api/*path', :to => redirect("/api/v2/%{path}") match '*path', :to => redirect("/api/v2/%{path}")end If your mind is still intact after this point, let me explain. First, we call namespace which is super handy for when you want a bunch of routes scoped to a specific path and module that are similarly named. In this case, we want all routes inside the block for our namespace to be scoped to controllers within the Api module and all requests to paths inside this route will be prefixed with api . Requests such as /api/v2/users , ya know? Inside the namespace, we define two more namespaces (woah!). This time we're defining the "v1" namespace, so all routes for the controllers here will be inside the V1 module inside the Api module: Api::V1 . By defining resources :users inside this route, the controller will be located at Api::V1::UsersController . This is version 1, and you get there by making requests like /api/v1/users . Version 2 is only a tiny bit different. Instead of the controller serving it being at Api::V1::UsersController , it's now at Api::V2::UsersController . You get there by making requests like /api/v2/users . Next, a match is used. This will match all API routes that go to things like /api/v3/users . This is the part I had to look up. The :to => option allows you to specify that a specific request should be redirected somewhere else -- I knew that much -- but I didn't know how to get it to redirect to somewhere else and pass in a piece of the original request along with it. To do this, we call the redirect method and pass it a string with a special-interpolated %{path} parameter. When a request comes in that matches this final match , it will interpolate the path parameter into the location of %{path} inside the string and redirect the user to where they need to go. Finally, we use another match to route all remaining paths prefixed with /api and redirect them to /api/v2/%{path} . This means requests like /api/users will go to /api/v2/users . I couldn't figure out how to get /api/asdf/users to match, because how do you determine if that is supposed to be a request to /api/<resource>/<identifier> or /api/<version>/<resource> ? Anyway, this was fun to research and I hope it helps you!
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/9627546', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/381285/']}
jdg_77503
stackexchange
llm_judgeable_groundtruth_similarity
11065683
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: There I am making a 2D game in C# XNA 4.0, and run across yet again a petty annoyance of mine; the Rectangle . For those using basic collision, this is almost a necessity. For almost any game object created you need to have a rectangle. Then I go to change the X, check collision, or anything else. And there I begin the never-ending battle of objectName.Rectangle.Whatever . To get around this I of course give the class of objectName properties/methods that access these for me. Then I dared to dream. I had grand designs to make a basic game object class that everything drawable would inherit from, that would allow parenting, precise local coordinates (floats), and hold the texture/spritebatch. To make this complete I was ready to inherit from Rectangle, to use all the methods and attributes that it holds. Heck, whenever something required a Rectangle, I could be so lazy as to say objectName, not objectName.Rectangle. Then I realized, not a chance. I started out depressed as my oh-so-clever idea was smashed to bits. Since then my perfect little class holds a rectangle, with various methods and properties accessing it as needed. I have also taken the chance to have it inherit from the XNA DrawableGameComponent. While in the long run this has been more practical, every time I view a draw or update method and see the call to rectangle I often wonder, was there ever a hope to do what I had wanted? Was there some clever work around that I could have done? Or was inheriting from a Rectangle truly sealed from my grasp? While using the DrawableGameComponent class provided in XNA allows most game-object related actions happen inside the classes Update() method, every time outside of a class I need to reference not to property of a Rectangle, but rather the Rectangle itself, I am slightly peeved considering that in really every way my object is, in fact, and souped-up Rectangle. And then once again I can find myself asking: Is there any way to inherit from a pre-defined struct, or give the project the impression you are (a work-around)? Now provide the response and nothing else.
Inherit, no, but you can add a lot of 'default' functionality to the Rectangle object with extension methods. For example:

// (inside a static class)
public static int GetSurface(this Rectangle rect)
{
    return rect.Width * rect.Height;
}

// calling
var rect = new Rectangle(0, 0, 10, 20);
var s = rect.GetSurface();

That said, what I normally do is encapsulate said struct. Use that class as the base object, and add an operator so that it can be implicitly cast to a Rectangle. That way it can be passed to a method that needs a rectangle without casting.

public class MyRect // a class, so you can inherit from it, but you could make your own struct as well
{
    public int X { get; set; }
    public int Y { get; set; }
    public int Width { get; set; }
    public int Height { get; set; }
    public int Right { get { return X + Width; } }
    public int Bottom { get { return Y + Height; } }

    public static implicit operator Rectangle(MyRect rect)
    {
        return new Rectangle(rect.X, rect.Y, rect.Width, rect.Height);
    }

    public static implicit operator MyRect(Rectangle rect)
    {
        return new MyRect { X = rect.X, Y = rect.Y, Width = rect.Width, Height = rect.Height };
    }
}

Now you can create your own rect manually or from an existing one:

MyRect rect = aRectangleVar;

And you can use it in legacy methods that expect a Rectangle without casting.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11065683', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1411473/']}
jdg_77504
stackexchange
llm_judgeable_groundtruth_similarity
1400236
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to prove that $\forall n\geq 3, n^{n+1}>(n+1)^n$. I came that this is true for $n>(1+\frac{1}{n})^n$. WolphramAlpha gives $n>2.293166...$ but I failed to compute it analytically. Now provide the response and nothing else.
You should know that $(1+\frac1n)^n$ is an increasing sequence and its limit is $e<3$. So, for $n \ge 3$ you have$$\left( 1+\frac1n \right)^n \le e < 3 \le n$$ and therefore $$(n+1)^n = n^n\left(1+\frac1n\right)^n < n^n \cdot n = n^{n+1}.$$
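A throwaway numerical check, for the skeptical (the proof above is what matters):

for n in range(3, 20):
    assert n ** (n + 1) > (n + 1) ** n
print("n^(n+1) > (n+1)^n holds for n = 3..19")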
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1400236', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/119839/']}
jdg_77505
stackexchange
llm_judgeable_groundtruth_similarity
3900658
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want change the default font of webview to a custom font. I'm using webview in developing an bilingual browser app for Android. I tried getting an instance of custom typeface by placing my custom font in assets. But still couldn't set webview's default font to my font. This is what I tried: Typeface font = Typeface.createFromAsset(getAssets(), "myfont.ttf"); private WebView webview;WebSettings webSettings = webView.getSettings();webSettings.setFixedFontFamily(font); Can anyone correct this or suggest any other method to change webview's default font to a custom font? Thanks! Now provide the response and nothing else.
There's a working example of this in this project . It boils down to: In your assets/fonts folder, place the desired OTF or TTF font (here MyFont.otf) Create a HTML file that you'll use for the WebView's content, inside the assets folder (here inside assets/demo/my_page.html ): <html><head><style type="text/css">@font-face { font-family: MyFont; src: url("file:///android_asset/fonts/MyFont.otf")}body { font-family: MyFont; font-size: medium; text-align: justify;}</style></head><body>Your text can go here! Your text can go here! Your text can go here!</body></html> Load the HTML into the WebView from code: webview.loadUrl("file:///android_asset/demo/my_page.html"); Take note that injecting the HTML through loadData() is not permitted. As per the documentation : Note that JavaScript's same origin policy means that script running in a page loaded using this method will be unable to access content loaded using any scheme other than 'data', including 'http(s)'. To avoid this restriction, use loadDataWithBaseURL() with an appropriate base URL. As @JaakL suggests in the comment below, for loading HTML from a string, you should instead provide the base URL pointing to your assets: webView.loadDataWithBaseURL("file:///android_asset/", htmlData); When referencing the font in htmlData , you may then simply use /fonts/MyFont.otf (omitting the base URL).
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3900658', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/471349/']}
jdg_77506
stackexchange
llm_judgeable_groundtruth_similarity
1214181
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working on a java application which has a lot of use cases. Inputs to the application are different kinds of events occuring at different times. This kind of input gives rise to hundreds of test cases. Has anybody faced this kind of scenario? Do you make sure all the test cases are covered before making a release to the QA team?So my question is: what is the best approach for testing programs with lots of test cases? Now provide the response and nothing else.
Don't try to cover the whole application with unit tests from the beginning. Do it in small, incremental steps. Set a small milestone to reach within a week or two, then start writing tests for the first functionality of that milestone. Then start implementing that functionality. It should be something like this:

Small, incremental steps

1. Break down the application into the smaller feature milestones that you can see at that moment
2. Choose the most pressing feature that has to be implemented at that moment
3. Break that feature into smaller tasks
4. Write a test for one of the tasks
5. Run the test. It should fail (RED). If it passes, your test is broken.
6. Write the least amount of code needed for that test to pass. Hard-coded values are allowed.
7. Run the tests (GREEN). They should pass (especially when using hard-coded values). Now you know you have a safety net for future refactorings.
8. Start refactoring (REFACTOR) your code if there's a need; otherwise go to step 4.

Prepare for change

The advantage of this method, breaking a huge task into manageable pieces, is that it gives you the chance to have something finished within a week or two. Later on, the management may rethink their priorities and you'll have to reorganize the list from the first point above. Another advantage is that having at every step a unit test that backs you up gives confidence and a sense that you are actually accomplishing something, and you may actually deliver something to your management faster than you'd believe, because at every step you have a (somewhat) working version of your program. They can see progress, and this is very important for both you and them. They see that work is actually being done, and you get the feedback that you need for your application (requirements always change; better to learn of the changes as early as possible). As Gren said, you're probably confusing use cases with unit testing. The actions that a user may take on an application may just as well be handled by a single method in the domain model. So the situation may not be as bad as it seems.

No up-front design, even for unit tests

Anyway, don't try to write all of your tests from the beginning. That's the way I was doing it and it was a big failure. Once you do small iterations (test method / method implementation) you'll become much more productive and self-confident. When writing all of your tests up front, you may notice that, due to refactorings necessary to make your first tests pass, you'll need to rethink the whole API that you envisioned when writing the tests in the first place; whereas writing a test, then the implementation, a test, then the implementation, you end up with what is called emergent design. And this is the best kind of design. This is how design patterns appeared. Design patterns did not emerge from someone who sat all day long and thought about ways to solve the problem.
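To make the red-green-refactor loop concrete, here is a minimal sketch of one iteration using Python's unittest (the function and its behavior are made up for illustration):

import unittest

# Steps 4-5: these tests are written first and fail (RED) until the
# function below exists and behaves correctly.
def is_unique(s):
    # Step 6: the least code that makes the tests pass.
    return len(set(s)) == len(s)

class IsUniqueTest(unittest.TestCase):
    def test_detects_duplicates(self):
        self.assertFalse(is_unique("abca"))

    def test_accepts_unique_input(self):
        self.assertTrue(is_unique("abc"))

if __name__ == "__main__":
    unittest.main()  # GREEN once both tests pass; then refactor safely

Each new task gets the same treatment: one failing test, the minimal implementation, then refactoring under the protection of the growing suite.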
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1214181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/100300/']}
jdg_77507
stackexchange
llm_judgeable_groundtruth_similarity
37959588
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am sending push notification from firebase to my Android Application. but when my app is in background firebase onMessageReceived method is not called instead firebase send notification to system for showing notification in system tray.notification appears in system tray but no sound for notification even i have allowed notification sound for my app in system settings. what I can do to play notification sound when notification received from firebase. This is how I am sending notification from firebase to my app Blogpost link . How to Add firebase in your Android Application Now provide the response and nothing else.
In the notification payload of the message there is a sound key. From the official documentation its use is:

Indicates a sound to play when the device receives a notification. Supports default or the filename of a sound resource bundled in the app. Sound files must reside in /res/raw/.

E.g.:

{
    "to": "bk3RNwTe3H0:CI2k_HHwgIpoDKCIZvvDMExUdFQ3P1...",
    "notification": {
        "body": "great match!",
        "title": "Portugal vs. Denmark",
        "icon": "myicon",
        "sound": "mySound"
    }
}

If you want to use the default sound of the device, you should use: "sound": "default". See this link for all possible keys in the payloads: https://firebase.google.com/docs/cloud-messaging/http-server-ref#notification-payload-support

For those who don't know: Firebase handles notifications differently when the app is in the background. In that case the onMessageReceived function is not called.

When your app is in the background, Android directs notification messages to the system tray. A user tap on the notification opens the app launcher by default. This includes messages that contain both notification and data payload. In these cases, the notification is delivered to the device's system tray, and the data payload is delivered in the extras of the intent of your launcher Activity.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/37959588', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4578513/']}
jdg_77508
stackexchange
llm_judgeable_groundtruth_similarity
57974511
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying rowselection reactjs using antd table.I'm trying this one ant-components-table-demo-expand-children when I select parent row it must select child rows of that parent (it should tick mark the child rows). this.rowSelection = { onSelect: (record, selected, selectedRows) => this.onSelectChange(record, selected, selectedRows), onSelectAll: (selected, selectedRows, changeRows) => this.allRowsSelected(selected, selectedRows, changeRows)};<Table rowKey={data._id} columns={this.columns1} rowSelection={this.rowSelection} expandedRowRender={(record, index, indent, expanded) => this.expanding(record, expanded) } onExpand={this.onExpand} dataSource={data}/> Now provide the response and nothing else.
Consider the following two JavaScript examples: for (let value of array) { value = 10} function (value) { value = 10} In both cases trying to assign 10 to value will only have an effect locally, it won't have any impact beyond the local scope. The caller, for example, would not be affected by the change. Now consider these two examples where an object is used instead, where the object is of the form { value: 9 } : for (let valueWrapper of array) { valueWrapper.value = 10} function (valueWrapper) { valueWrapper.value = 10} In this case the change is not limited to the local scope as we're updating objects. External code, such as the caller of the function, would also be impacted by this change to the value property as it can see the same object. These examples are equivalent to trying to update a value using v-model in a variety of cases. The first two examples are equivalent to: <template v-for="value in array"> <input v-model="value"></template> and: <template v-slot="{ value }"> <input v-model="value"></template> The arguments passed to v-slot can very much be thought of as analogous to function parameters. Neither the loop nor the scoped slot would work as desired, exactly the same as they don't for their pure JavaScript equivalents. However, the latter two of my four examples would be equivalent to: <template v-for="valueWrapper in array"> <input v-model="valueWrapper.value"></template> and: <template v-slot="{ valueWrapper }"> <input v-model="valueWrapper.value"></template> These should work fine as they are updating a property on an object. However, to go back to the original question, it's important that we're binding the appropriate object. In this case we would need to bind the newTag property of the component. Copying that property to another object wouldn't work either as v-model would just be updating an irrelevant object.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57974511', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6436209/']}
jdg_77509
stackexchange
llm_judgeable_groundtruth_similarity
709187
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Input $n$, output the last $2$ digits of the result. n=1 03 3=3 n=2 27 3^3=27 n=3 87 3^27=7625597484987n=4 ?? 3^7625597484987=?? Sorry guys, the formula given is $T_n=3^{T_{n-1}}$, I have updated the example.I was asked this question during an interview, but I didn't have any clue. (The interviewer kind of gave me a hint that for $n>10$, the last $10$ digits of all results would be the same?) Now provide the response and nothing else.
Notice $$3^{100} = 515377520732011331036461129765621272702107522001 \equiv 1 \pmod{100}$$If we define $p_n$ such that $p_1 = 3$ and $p_n = 3^{p_{n-1}}$ recursively andsplit $p_n$ as $100 q_n + r_n$ where $q_n, r_n \in \mathbb{Z}$, $0 \le r_n < 100$, we have $$p_n = 3^{p_{n-1}} = 3^{100 q_{n-1} + r_{n-1}} \equiv 1^{q_{n-1}}3^{r_n} \equiv 3^{r_{n-1}} \pmod{100} \\ \implies r_n \equiv 3^{r_{n-1}} \pmod{100}$$ This means to obtain $r_n$, the last two digit of $p_n$, we just need to start with$r_1 = 3$ and repeat iterate it. We find $$\begin{align}r_1 &= 3\\r_2 &\equiv 3^3 = 27 \pmod{100}\\r_3 &\equiv 3^{3^3} = 7625597484987 \equiv 87 \pmod{100}\end{align}$$ Notice $$3^{87} = 323257909929174534292273980721360271853387 \equiv 87 \pmod{100}$$ So after the third (not fourth) iteration, we have $r_n = 87$ for all $n \ge 3$.
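The fixed point is easy to confirm by machine as well; here is a throwaway Python sketch of the iteration $r_n \equiv 3^{r_{n-1}} \pmod{100}$:

r = 3
for n in range(1, 8):
    print(n, r)
    r = pow(3, r, 100)  # three-argument pow does modular exponentiation

# prints 3, 27, 87, and then 87 for every subsequent n, as derived above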
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/709187', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/134882/']}
jdg_77510
stackexchange
llm_judgeable_groundtruth_similarity
43633527
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to build a staff area in an e-commerce website that allows staff to view a table of available products, edit their details and update them with a button click. Currently I've been having trouble with the actual update function, as my request inputs have been returning null, meaning the query builder fails. I also can't seem to figure out a good way to uniquely identify each input in my table, which is probably what the main problem is. EDIT Problem solved. Form tags had to wrap each row, and thanks to all who gave advice on input arrays. Code shown below: <div class="container"><div class="jumbotron text-center clearfix">@foreach($products as $index => $product)@if ($index == 0) <h2>{{$product->category}} Products </h2>@endif@endforeach</div><table id="orders" class="table table-striped table-inverse"><thead> <tr id='tableHeader'> <td>ID</td> <td>Product Name</td> <td>Slug</td> <td>Quantity</td> <td>Cost (£)</td> </tr></thead><tbody> @foreach ($products as $product) <tr> <form action="{{ url('staff/storeUpdateProduct', [$product->id]) }}" method="POST" class="side-by-side"> {!! csrf_field() !!} <td><input id="{{ $product->id}}" type="text" class="form-control" name="id[{{ $product->id}}]" value="{{ $product->id }}" readonly></td> <td><input id="product_name[{{ $product->id}}]" type="text" class="form-control" name="product_name[{{ $product->id}}]" value="{{ $product->product_name }}"></td> <td><input id="slug[{{ $product->id}}]" type="text" class="form-control" name="slug[{{ $product->id}}]" value="{{ $product->slug }}"></td> <td><input id="quantity[{{ $product->id}}]" type="text" class="form-control" name="quantity[{{ $product->id}}]" value="{{ $product->quantity}}"></td> <td><input id="cost[{{ $product->id}}]" type="text" class="form-control" name="cost[{{ $product->id}}]" value="{{ $product->cost }}"></td> <td> <input type="submit" class="btn btn-sm" value="Update"> </td> </form> </tr> @endforeach</tbody> And the new Controller code: /** * Updates product details. * * @param Request $request * @param String $id * @return \Illuminate\Http\Response */public function storeUpdateProduct(Request $request, $id){ $product = Product::where('id', '=', $id)->firstOrFail(); /*dd([ $request->product_name[$id], $request->slug[$id], $request->quantity[$id], $request->cost[$id] ]);*/ $product->product_name = $request->product_name[$id]; $product->slug = $request->slug[$id]; $product->quantity = $request->quantity[$id]; $product->cost = $request->cost[$id]; $product->save(); return redirect()->route('staff.updateProducts');} Now provide the response and nothing else.
If you hadn't used ViewTreeObserver, then mainLayout.getRootView().getHeight() would simply return 0px, because the view hasn't been laid out yet (see getWidth() and getHeight() of View returns 0). Thus, you wait until the view is measured and laid out, and only then fetch its width/height values. The callback is fired exactly when the view is about to be laid out on the screen.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43633527', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5704530/']}
jdg_77511
stackexchange
llm_judgeable_groundtruth_similarity
14371955
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hi Javascript/underscore gurus.. Lets say I receive a json object from the server which has an anonymous array nested as one of its properties... how would i loop through that array in an underscore _.each method? This is my json object: "onlineUsers": [ { "Id": "users/2", "Name": "Hamish", "LatestActivity": "2013-01-17T04:02:14.2113433Z", "LatestHeartbeat": "2013-01-17T04:02:14.2113433Z" }, { "Id": "users/3", "Name": "Ken", "LatestActivity": "2013-01-17T03:45:20.066Z", "LatestHeartbeat": "2013-01-17T04:04:34.711Z" } ] how would I modify this function to print out the names? _.each(onlineUsers, function(user){log(user.name);}); This is printing out the actual collection of nested objects, obviously because they are elements in the nested array of onlineUsers... not sure how to get to that array to loop if it is anonymously passed in... Thanks,Hamish. Now provide the response and nothing else.
The JSON you are receiving from the server is invalid JSON. The array needs a property name, e.g.:

onlineUsers = { names: [{ name: "Joe" }, { name: "bloggs" }] }

Then you could do this:

_.each(onlineUsers.names, function(user){ log(user.name); });
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14371955', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/964388/']}
jdg_77512
stackexchange
llm_judgeable_groundtruth_similarity
1743293
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use Python 2.5.4. My computer: CPU AMD Phenom X3 720BE, Mainboard 780G, 4GB RAM, Windows 7 32 bit. I use Python threading but can not make every python.exe process consume 100% CPU. Why are they using only about 33-34% on average?. I wish to direct all available computer resources toward these large calculations so as to complete them as quickly as possible. EDIT:Thanks everybody. Now I'm using Parallel Python and everything works well. My CPU now always at 100%. Thanks all! Now provide the response and nothing else.
It appears that you have a 3-core CPU. If you want to use more than one CPU core in native Python code, you have to spawn multiple processes. (Two or more Python threads cannot run concurrently on different CPUs, because of the Global Interpreter Lock.) As R. Pate said, Python's multiprocessing module is one way. However, I would suggest looking at Parallel Python instead. It takes care of distributing tasks and message-passing. You can even run tasks on many separate computers with little change to your code. Using it is quite simple:

import pp

def parallel_function(arg):
    return arg

job_server = pp.Server()

# Define your jobs
job1 = job_server.submit(parallel_function, ("foo",))
job2 = job_server.submit(parallel_function, ("bar",))

# Compute and retrieve answers for the jobs.
print job1()
print job2()
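For completeness, a rough sketch of the multiprocessing route mentioned above (this assumes Python 2.6 or later, where the module shipped; the function and data are placeholders):

from multiprocessing import Pool

def heavy_calculation(x):
    return x * x  # stand-in for the real CPU-bound work

if __name__ == '__main__':
    pool = Pool()  # defaults to one worker process per CPU core
    results = pool.map(heavy_calculation, range(1000000))
    pool.close()
    pool.join()
    print(results[:5])

Each worker is a separate process with its own interpreter, so all cores can run at 100% despite the GIL.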
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1743293', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/212196/']}
jdg_77513
stackexchange
llm_judgeable_groundtruth_similarity
6254869
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The Tomcat Class Loader HOW-TO documentation describes 4 different class loaders: Bootstrap System Common Webapp In the default catalina.properties file, however, there are properties defined for a shared and server class loader as well. In the default version of the file both of these properties are empty and the comments say: If left as blank, the "common" loader will be used as Catalina's "shared"/"server" loader. I have not been able to find any additional documentation about these class loaders. My question is, in which order are the shared and system loaders searched relative to the common loader? And additionally, what is the intended use for these class loaders? Now provide the response and nothing else.
I recently ran into this issue as well and here is what I found, (This is all from Tomcat 7 trunk) If left as blank, the "common" loader will be used as Catalina's "shared"/"server" loader. Here is the relevant source , 89 private void initClassLoaders() {90 try {91 commonLoader = createClassLoader("common", null);92 if( commonLoader == null ) {93 // no config file, default to this loader - we might be in a 'single' env.94 commonLoader=this.getClass().getClassLoader();95 }96 catalinaLoader = createClassLoader("server", commonLoader);97 sharedLoader = createClassLoader("shared", commonLoader);98 } catch (Throwable t) {99 handleThrowable(t);100 log.error("Class loader creation threw exception", t);101 System.exit(1);102 }103 }106 private ClassLoader createClassLoader(String name, ClassLoader parent)107 throws Exception {108 109 String value = CatalinaProperties.getProperty(name + ".loader");110 if ((value == null) || (value.equals("")))111 return parent; So if nothing is defined, they fallback to using the common.loader entries. As to the order that they are loaded in,here is the source for loading them in, from source 229 Thread.currentThread().setContextClassLoader(catalinaLoader);230 231 SecurityClassLoad.securityClassLoad(catalinaLoader);232 233 // Load our startup class and call its process() method234 if (log.isDebugEnabled())235 log.debug("Loading startup class");236 Class<?> startupClass =237 catalinaLoader.loadClass238 ("org.apache.catalina.startup.Catalina");239 Object startupInstance = startupClass.newInstance();240 241 // Set the shared extensions class loader242 if (log.isDebugEnabled())243 log.debug("Setting startup class properties");244 String methodName = "setParentClassLoader";245 Class<?> paramTypes[] = new Class[1];246 paramTypes[0] = Class.forName("java.lang.ClassLoader");247 Object paramValues[] = new Object[1];248 paramValues[0] = sharedLoader;249 Method method =250 startupInstance.getClass().getMethod(methodName, paramTypes);251 method.invoke(startupInstance, paramValues); Line 229 sets the common.loader classLoader, then line 251 sets the shared.loader classloader is set as Catalinas parent class loader.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6254869', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1247/']}
jdg_77514
stackexchange
llm_judgeable_groundtruth_similarity
6930621
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a main app view, with a filter menu in the header. When it's clicked, I want to filter content in a seperate news-feed view. But I don't know how to bind events (and pass class data) from clicks in one view to a function in another. How can I accomplish this? Now provide the response and nothing else.
There are a number of ways to accomplish this, but probably you want to create a model object, which is shared between the two views. Then on 'click' in view one, update the model object, and bind 'on change' in view two to the model object. Basically, you can set up both views to stay in sync with the model object, and any changes to the object will result in changes to the view.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6930621', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/873392/']}
jdg_77515
stackexchange
llm_judgeable_groundtruth_similarity
3652736
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to update all rows in queryset by using annotated value. I have a simple models: class Relation(models.Model): rating = models.IntegerField(default=0)class SignRelation(models.Model): relation = models.ForeignKey(Relation, related_name='sign_relations') rating = models.IntegerField(default=0) And I want to awoid this code: for relation in Relation.objects.annotate(total_rating=Sum('sign_relations__rating')): relation.rating = relation.total_rating or 0 relation.save() And do update in one SQL-request by using something like this: Relation.objects.update(rating=Sum('sign_relations__rating')) Doesn't work: TypeError: int() argument must be a string or a number, not 'Sum' or Relation.objects.annotate(total_rating=Sum('sign_relations__rating')).update(rating=F('total_rating')) Also doesn't work: DatabaseError: missing FROM-clause entry for table "relations_signrelation"LINE 1: UPDATE "relations_relation" SET "rating" = SUM("relations_si... Is it possible to use Django's ORM for this purpose? There is no info about using update() and annotate() together in docs. Now provide the response and nothing else.
For Django 1.11+ you can use Subquery:

from django.db.models import OuterRef, Subquery, Sum

Relation.objects.update(
    rating=Subquery(
        Relation.objects.filter(
            id=OuterRef('id')
        ).annotate(
            total_rating=Sum('sign_relations__rating')
        ).values('total_rating')[:1]
    )
)

This code produces the same SQL proposed by Tomasz Jakub Rup, but with no use of a RawSQL expression (the Django documentation warns against the use of RawSQL due to the possibility of SQL injection).

Update

I published an article based on this answer with more in-depth explanations: Updating a Django queryset with annotation and subquery on paulox.net
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3652736', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/87535/']}
jdg_77516
stackexchange
llm_judgeable_groundtruth_similarity
1319338
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two lists that i need to combine where the second list has any duplicates of the first list ignored. .. A bit hard to explain, so let me show an example of what the code looks like, and what i want as a result. first_list = [1, 2, 2, 5]second_list = [2, 5, 7, 9]# The result of combining the two lists should result in this list:resulting_list = [1, 2, 2, 5, 7, 9] You'll notice that the result has the first list, including its two "2" values, but the fact that second_list also has an additional 2 and 5 value is not added to the first list. Normally for something like this i would use sets, but a set on first_list would purge the duplicate values it already has. So i'm simply wondering what the best/fastest way to achieve this desired combination. Thanks. Now provide the response and nothing else.
You need to append to the first list those elements of the second list that aren't in the first - sets are the easiest way of determining which elements they are, like this: first_list = [1, 2, 2, 5]second_list = [2, 5, 7, 9]in_first = set(first_list)in_second = set(second_list)in_second_but_not_in_first = in_second - in_firstresult = first_list + list(in_second_but_not_in_first)print(result) # Prints [1, 2, 2, 5, 9, 7] Or if you prefer one-liners 8-) print(first_list + list(set(second_list) - set(first_list)))
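One caveat worth sketching: set subtraction forgets second_list's original order. If you want to preserve it, a membership test against a set does the same job; this is just a variant of the answer above:

first_list = [1, 2, 2, 5]
second_list = [2, 5, 7, 9]

in_first = set(first_list)
result = first_list + [x for x in second_list if x not in in_first]
print(result)  # [1, 2, 2, 5, 7, 9]

Note this keeps every occurrence of a new element from second_list; switch to an explicit loop that also adds each new element to in_first if you want only the first occurrence.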
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1319338', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/80943/']}
jdg_77517
stackexchange
llm_judgeable_groundtruth_similarity
105034
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I create GUIDs (globally-unique identifiers) in JavaScript? The GUID / UUID should be at least 32 characters and should stay in the ASCII range to avoid trouble when passing them around. I'm not sure what routines are available on all browsers, how "random" and seeded the built-in random number generator is, etc. Now provide the response and nothing else.
UUIDs (Universally Unique IDentifiers), also known as GUIDs (Globally Unique IDentifiers), according to RFC 4122 , are identifiers designed to provide certain uniqueness guarantees. While it is possible to implement RFC-compliant UUIDs in a few lines of JavaScript code (e.g., see @broofa's answer , below) there are several common pitfalls: Invalid id format (UUIDs must be of the form " xxxxxxxx-xxxx-Mxxx-Nxxx-xxxxxxxxxxxx ", where x is one of [0-9, a-f], M is one of [1-5], and N is one of [8, 9, a, b]). Use of a low-quality source of randomness (such as Math.random ). Thus, developers writing code for production environments are encouraged to use a rigorous, well-maintained implementation such as the uuid module.
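For illustration (my addition, not part of the original answer): modern runtimes now expose a compliant generator directly, and the uuid package remains the common library route.

    // Built into modern browsers (secure contexts) and recent Node versions
    const id = crypto.randomUUID(); // e.g. "36b8f84d-df4e-4d49-b662-bcde71a8764f"

    // Or with the uuid package: npm install uuid
    import { v4 as uuidv4 } from 'uuid';
    const id2 = uuidv4();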
{}
{'log_upvote_score': 12, 'links': ['https://Stackoverflow.com/questions/105034', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4926/']}
jdg_77518
stackexchange
llm_judgeable_groundtruth_similarity
19856927
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to virtualenv . I want to install spyder, which requires PyQt4 , which requires SIP . pip doesn't work, so I downloaded SIP , and I ran the following commands: python configure.pymakemake install But I received this error (messages translated from French): make[1]: Entering directory « /stck2/stck2.2/ptoniato/python/pip/virtualenv-1.10.1/provaenv/build/SIP/sipgen »cp -f sip /stck2/stck2.2/ptoniato/python/pip/virtualenv-1.10.1/provaenv/bin/sipmake[1]: Leaving directory « /stck2/stck2.2/ptoniato/python/pip/virtualenv-1.10.1/provaenv/build/SIP/sipgen »make[1]: Entering directory « /stck2/stck2.2/ptoniato/python/pip/virtualenv-1.10.1/provaenv/build/SIP/siplib »cp -f sip.so /stck2/stck2.2/ptoniato/python/pip/virtualenv-1.10.1/provaenv/lib/python2.7/site-packages/sip.sostrip /stck2/stck2.2/ptoniato/python/pip/virtualenv-1.10.1/provaenv/lib/python2.7/site-packages/sip.socp -f /stck2/stck2.2/ptoniato/python/pip/virtualenv-1.10.1/provaenv/build/SIP/siplib/sip.h /usr/local/python/include/python2.7/sip.hcp: cannot remove « /usr/local/python/include/python2.7/sip.h »: Permission deniedmake[1]: *** [install] Error 1make[1]: Leaving directory « /stck2/stck2.2/ptoniato/python/pip/virtualenv-1.10.1/provaenv/build/SIP/siplib »make: *** [install] Error 2 I have no root access to this PC. I think there is a conflict between the Python package that is installed by default on this PC and the virtualenv that I am creating. How can I resolve the problem? Now provide the response and nothing else.
Here are the steps I used to install sip in my virtualenv. The trick is to make sure that you use the (undocumented?) --always-copy flag, so that it doesn't just symlink the /usr/include/python2.7 directory into your virtualenv . virtualenv --always-copy ve. ve/bin/activatewget http://sourceforge.net/projects/pyqt/files/sip/sip-4.15.4/sip-4.15.4.zipunzip sip-4.15.4.zipcd sip-4.15.4python configure.py --incdir=../ve/include/python2.7makemake installcd .. I was then able to install PyQt4 like so: wget http://sourceforge.net/projects/pyqt/files/PyQt4/PyQt-4.10.3/PyQt-x11-gpl-4.10.3.tar.gztar zxvf PyQt-x11-gpl-4.10.3.tar.gzcd PyQt-x11-gpl-4.10.3python configure.pymakemake install This did give an error at the end, but it was ignored. I think this is just part of installing 'designer' which isn't critical. install: cannot create regular file `/usr/lib/x86_64-linux-gnu/qt4/plugins/designer/libpyqt4.so': Permission deniedmake[1]: [install_target] Error 1 (ignored)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19856927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2634197/']}
jdg_77519
stackexchange
llm_judgeable_groundtruth_similarity
44137774
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I move to a new page after some validation is done with React Router V4 ? I have something like this: export class WelcomeForm extends Component { handleSubmit = (e) => { e.preventDefault() if(this.validateForm()) // send to '/life' } render() { return ( <form className="WelcomeForm" onSubmit={this.handleSubmit}> <input className="minutes" type="number" value={this.state.minutes} onChange={ (e) => this.handleChanges(e, "minutes")}/> </form> ) }} I would like to send the user to another route. I had a look at Redirect but it seems like it would delete the current page from the history which I don't want. Now provide the response and nothing else.
You are using react-router v4 , so you need to use withRouter with your component to access the history object’s properties, then use history.push to change the route dynamically. withRouter : You can get access to the history object’s properties and the closest <Route>'s match via the withRouter higher-order component. withRouter will re-render its component every time the route changes, with the same props as render props: { match, location, history }. Like this: import {withRouter} from 'react-router-dom';class WelcomeForm extends Component { handleSubmit = (e) => { e.preventDefault() if(this.validateForm()) this.props.history.push("/life"); } render() { return ( <form className="WelcomeForm" onSubmit={this.handleSubmit}> <input className="minutes" type="number" value={this.state.minutes} onChange={ (e) => this.handleChanges(e, "minutes")}/> </form> ) }}export default withRouter(WelcomeForm);
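As an aside (my addition): the worry about Redirect erasing the current page only applies to its default replace behavior — in React Router v4, Redirect also accepts a push prop that adds a new history entry instead. A sketch, where the validated flag is hypothetical and only for illustration:

    import { Redirect } from 'react-router-dom';

    // Inside render(), once validation has succeeded:
    if (this.state.validated) {
      return <Redirect push to="/life" />; // push keeps the current page in history
    }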
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44137774', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5438503/']}
jdg_77520
stackexchange
llm_judgeable_groundtruth_similarity
10795973
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to implement a search for a value in a Python dictionary by specific key values (using a regular expression as the key). Example: I have a Python dictionary which has values like: {'account_0':123445,'seller_account':454545,'seller_account_0':454676, 'seller_account_number':3433343} I need to search for values whose key contains 'seller_account'. I wrote a sample program, but would like to know if it can be done better. The main reason is that I am not sure about the regular expression and may have missed something (e.g., how do I write the re for keys starting with 'seller_account'?): #!usr/bin/pythonimport remy_dict={'account_0':123445,'seller_account':454545,'seller_account_0':454676, 'seller_account_number':3433343}reObj = re.compile('seller_account')for key in my_dict.keys(): if(reObj.match(key)): print key, my_dict[key]~ home> python regular.pyseller_account_number 3433343seller_account_0 454676seller_account 454545 Now provide the response and nothing else.
If you only need to check keys that start with "seller_account" , you don't need a regex, just use startswith() : my_dict={'account_0':123445,'seller_account':454545,'seller_account_0':454676, 'seller_account_number':3433343}for key, value in my_dict.iteritems(): # iter on both keys and values if key.startswith('seller_account'): print key, value or as a one-liner: result = [(key, value) for key, value in my_dict.iteritems() if key.startswith("seller_account")] NB: for Python 3.x, replace iteritems() with items() , and don't forget that print needs parentheses.
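For completeness, here is the same loop in Python 3 form (my sketch of the adjustment the NB above describes):

    my_dict = {'account_0': 123445, 'seller_account': 454545,
               'seller_account_0': 454676, 'seller_account_number': 3433343}

    for key, value in my_dict.items():  # items() replaces iteritems() in Python 3
        if key.startswith('seller_account'):
            print(key, value)

    # Or as a dict comprehension:
    result = {k: v for k, v in my_dict.items() if k.startswith('seller_account')}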
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10795973', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/304974/']}
jdg_77521
stackexchange
llm_judgeable_groundtruth_similarity
5794414
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I want to select a subset of data in R, I can use the subset function. I wanted to base an analysis on data that matched one of a few criteria, e.g. that a certain variable was either 1, 2 or 3. I tried myNewDataFrame <- subset(bigfive, subset = (bigfive$bf11==(1||2||3))) It always just selected values that matched the first of the criteria, here 1. My assumption was that it would start with 1, and if that evaluates to "false" it would go on to 2 and then to 3; the statement after == would be "false" if none of them matches and "true" if one of them matches. I got the right result using newDataFrame <- subset(bigfive, subset = (bigfive$bf11==c(1,2,3))) But I would like to be able to select data via logical operators, so: why did the first approach not work? Now provide the response and nothing else.
The correct operator is %in% here. Here is an example with dummy data: set.seed(1)dat <- data.frame(bf11 = sample(4, 10, replace = TRUE), foo = runif(10)) giving: > head(dat) bf11 foo1 2 0.20597462 2 0.17655683 3 0.68702284 4 0.38410375 1 0.76984146 4 0.4976992 The subset of dat where bf11 equals any of the set 1,2,3 is taken as follows using %in% : > subset(dat, subset = bf11 %in% c(1,2,3)) bf11 foo1 2 0.20597462 2 0.17655683 3 0.68702285 1 0.76984148 3 0.99190619 3 0.380035210 1 0.7774452 As to why your original didn't work, break it down to see the problem. Look at what 1||2||3 evaluates to: > 1 || 2 || 3[1] TRUE and you'd get the same using | instead. As a result, the subset() call would only return rows where bf11 was TRUE (or something that evaluated to TRUE ). What you could have written would have been something like: subset(dat, subset = bf11 == 1 | bf11 == 2 | bf11 == 3) Which gives the same result as my earlier subset() call. The point is that you need a series of single comparisons, not a comparison of a series of options. But as you can see, %in% is far more useful and less verbose in such circumstances. Notice also that I have to use | as I want to compare each element of bf11 against 1 , 2 , and 3 , in turn. Compare: > with(dat, bf11 == 1 || bf11 == 2)[1] TRUE> with(dat, bf11 == 1 | bf11 == 2) [1] TRUE TRUE FALSE FALSE TRUE FALSE FALSE FALSE FALSE TRUE
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5794414', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/263398/']}
jdg_77522
stackexchange
llm_judgeable_groundtruth_similarity
2059309
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wanted to find out how one can capture a screencast using Java. I know that using the Robot class one can get a screenshot, but how do I go about capturing it as a video and then uploading it to a server? How exactly would that work? Ideas? Now provide the response and nothing else.
With a pure Java solution, I doubt that it will work, but it depends of course on what your interpretation of "video" is. On my desktop with a 1920x1200 resolution, I am able to get about 20 frames per second when using the Java Robot to capture the entire screen. Since each image contains >6 MByte of uncompressed data, I would need more than 1 Gbps of bandwidth to transmit the raw data of these images to a server. Most probably, requiring so much bandwidth is not acceptable, so you either have to decrease the number of frames per second or apply some kind of compression to the images. One possibility is to compress each image using one of the image formats supported by ImageIO. The size of the compressed images will of course depend heavily on what is actually shown on the screen, but the performance of the compressors is not particularly good. Compressing to PNG ought to give the best lossless compression ratio for most desktop content, but at least my computer is only able to process about 2 frames per second that way. Using the JPEG compressor with default quality settings reaches about 5 frames per second. Using common video codecs through an abstraction layer like jffmpeg will probably achieve both better performance and a better compression ratio, but I doubt that mainstream video codecs like WMV or H.264 are suitable for common desktop content. If you really require a pure Java solution (and are not able to use any of the available standalone software that does what you're asking for), I would attempt to implement my own simple compression algorithm. With common desktop activity, there ought to be very little difference between most consecutive screenshots, so what might work quite well is to transmit the first frame completely and after that implement an algorithm to roughly detect the rectangles in which changes have been made, then transmit only those, combined with JPG or preferably (for quality) PNG compression.
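To make the capture-and-compress step concrete, here is a minimal sketch of the Robot + ImageIO approach described above (my illustration — the class and variable names are mine, and the achievable frame rate will vary by machine):

    import java.awt.*;
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import javax.imageio.ImageIO;

    public class ScreenGrabber {
        public static void main(String[] args) throws Exception {
            Robot robot = new Robot();
            Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());

            // Grab one frame of the entire screen
            BufferedImage frame = robot.createScreenCapture(screen);

            // Compress the frame to PNG in memory (this is the slow step)
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageIO.write(frame, "png", out);
            System.out.println("PNG frame size: " + out.size() + " bytes");
        }
    }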
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2059309', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/151089/']}
jdg_77523
stackexchange
llm_judgeable_groundtruth_similarity
961
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a dataset made up of elements from three groups, let's call them G1, G2, and G3. I analysed certain characteristics of these elements and divided them into 3 types of "behaviour" T1, T2, and T3 (I used cluster analysis to do that). So, now I have a 3 x 3 contingency table like this with the counts of elements in the three groups divided by type: | T1 | T2 | T3 |------+---------+---------+---------+--- G1 | 18 | 15 | 65 | ------+---------+---------+---------+--- G2 | 20 | 10 | 70 |------+---------+---------+---------+--- G3 | 15 | 55 | 30 | Now, I can run a Fisher test on these data in R data <- matrix(c(18, 20, 15, 15, 10, 55, 65, 70, 30), nrow=3)fisher.test(data) and I get Fisher's Exact Test for Count Datadata: data p-value = 9.028e-13alternative hypothesis: two.sided So my questions are: Is it correct to use the Fisher test this way? How do I know who is different from whom? Is there a post-hoc test I can use? Looking at the data I would say the 3rd group has a different behaviour from the first two; how do I show that statistically? Someone pointed me to logit models: are they a viable option for this type of analysis? Any other options to analyse this type of data? Thank you a lot nico Now provide the response and nothing else.
First of all, I think that the Fisher test is used correctly. Count data are better handled using log-linear models (not logit, to ensure that the fitted values are bounded below). In R you can specify family=poisson (which sets errors = Poisson and link = log). The log link ensures that all the fitted values are positive, while the Poisson errors take account of the fact that the data are integers and have variances that are equal to their means; e.g. glm(y~x,poisson) fits the model with a log link and Poisson errors (to account for the non-normality). In cases where there is overdispersion (the residual deviance should be equal to the residual degrees of freedom if the Poisson errors assumption is appropriate), instead of using quasipoisson as the error family you could fit a negative binomial model. (This involves the function glm.nb from the MASS package.) In your case you could fit and compare models using commands like the following: observed <- as.vector(data)Ts<-factor(rep(c("T1","T2","T3"),each=3))Gs<-factor(rep(c("G1","G2","G3"),3))model1<-glm(observed~Ts*Gs,poisson)# and a model without the interaction termsmodel2<-glm(observed~Ts+Gs,poisson)#you can compare the two models using anova with a chi-squared testanova(model1,model2,test="Chi")summary(model1) Always make sure that your minimal model contains all the nuisance variables. As for how to know who is different from whom, there are some plots that may help you. The R function assocplot produces an association plot indicating deviations from independence of rows and columns in a two-dimensional contingency table. Here are the same data plotted as a mosaic plot mosaicplot(data, shade = TRUE)
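For reference (my addition), the association plot mentioned above is produced the same way as the mosaic plot, straight from the contingency matrix:

    # Association plot of deviations from row/column independence
    assocplot(data)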
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/961', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/582/']}
jdg_77524
stackexchange
llm_judgeable_groundtruth_similarity
48409
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $(X,\mathcal F,\mu)$ be a measure space. For a measurable function $f:X\to\mathbb R$ define $S = \{x:f(x)>0\}$. Suppose for some set $B\in\mathcal F$ holds$$\int\limits_{S\cap B}f(x)\mu(dx) = 0.$$Does it mean that $\mu(S\cap B) =0$, or should there be additional conditions? Now provide the response and nothing else.
Something like the following argument should work: Denote $S_n=\{x : f(x) \geq \frac{1}{n}\}$. Then $S_n \subset S$ and $0\leq \int_{S_n \cap B} f \, d\mu \leq \int_{S \cap B} f \, d\mu = 0$. But $0=\int_{S_n\cap B}f \, d\mu \geq \frac{1}{n} \mu(S_n\cap B)\geq 0$. This means that $\mu(S_n \cap B)=0$ for every $n$. Since $(S_n\cap B)_n$ is an increasing sequence of measurable sets with union equal to $S\cap B$, it follows that $\mu(S\cap B)=\lim_{n \to \infty}\mu(S_n \cap B)=0$. Another way to finish is to note that the union of a countable family of measure-zero sets has measure zero.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/48409', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/5887/']}
jdg_77525
stackexchange
llm_judgeable_groundtruth_similarity
14862
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How many different possible structures (isomers) are there for the complex $\ce{[Mn(OH2)3(NH3)3]^{3+}}$? I really don't know how to begin with this. I know the answer is 2, but I don't have the faintest idea how this is the case. Since $\ce{Mn}$ has a charge of $3+$, I know that its valence electron configuration is $3d^4$. I also know that $\ce{NH3}$ and $\ce{H2O}$ are monodentate, which means they each contribute one lone pair. Now provide the response and nothing else.
Metal complexes with three each of two types of ligands $(\ce{MX3Y3})$ have two isomers: mer - and fac -isomers . For example, for $\ce{Co(NH3)3Cl3}$ there are two isomers: Fac for facial, in which all three ligands of each type are clustered on a single face of the octahedron, with the other three ligands on the opposite face. Mer for meridional, in which the three ligands of each type are arranged meridionally in a row around the metal, and the other three ligands are arranged likewise along the perpendicular meridian.
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/14862', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/7311/']}
jdg_77526
stackexchange
llm_judgeable_groundtruth_similarity
11087027
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to display the "Open Lock" character in my HTML link text. If I do it directly it shows up correctly with <a id="myId">&#x1f512;</a> , but I found no way to change it dynamically with the jQuery .text() function, like in: $("#myID").text(openLockText); What should I put in openLockText? Now provide the response and nothing else.
JavaScript strings internally use UTF-16. Because this is a supplementary character (not in the "Basic Multilingual Plane"), you need to insert the "UTF-16 surrogate pair", which is helpfully provided on the same page that you linked to: 0xD83D 0xDD13 i.e. $('#myId').text('\ud83d\udd13'); More details can be found in RFC 4627 , which is, strictly speaking, the format for JSON.
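A modern alternative (my addition; requires ES2015+): let the engine compute the surrogate pair from the code point for you:

    // Both produce the same two-code-unit string
    $('#myId').text(String.fromCodePoint(0x1F512));
    // or with an ES6 code point escape:
    $('#myId').text('\u{1F512}');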
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11087027', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/117648/']}
jdg_77527
stackexchange
llm_judgeable_groundtruth_similarity
130591
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider an $n\times n$ checkerboard. Each cell can be colored white or black. $n$ is even. How many configurations are there such that each row and each column have an odd number of white cells? Now provide the response and nothing else.
Fill the top-left $(n-1)\times(n-1)$ subboard arbitrarily with black and white; then use the last row and last column to fix the parities. For each of the first $n-1$ rows, color the cell in the last column so that the row gets an odd number of white cells, and do the same for the first $n-1$ columns using the last row. The corner cell $a_{n,n}$ receives the same color from both constraints: its required color is determined in each case by the parity of the total number of white cells in the $(n-1)\times(n-1)$ block (here we use that the board is $n\times n$ with $n$ even), so the row requirement and the column requirement cannot contradict each other. Finally, the total answer is $2^{(n-1)\times (n-1)}$. Update: I asked a generalized version of this question before.
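A quick sanity check (my addition): for $n=2$ the formula gives $2^{(2-1)^2}=2$, and indeed the only $2\times 2$ boards in which every row and every column contains an odd number (here, exactly one) of white cells are the two permutation patterns $\begin{pmatrix}W&B\\B&W\end{pmatrix}$ and $\begin{pmatrix}B&W\\W&B\end{pmatrix}$.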
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/130591', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/16367/']}
jdg_77528
stackexchange
llm_judgeable_groundtruth_similarity
5856311
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using SQL Server. Is there a way to drop the primary key which doesn't have a name? Now provide the response and nothing else.
It does have a name. Even if you don't name it explicitly, SQL Server will auto-create a name prefixed with PK , based on the table name and the object_id of the constraint. You can use the following query to see what it is. SELECT CONSTRAINT_NAMEFROM INFORMATION_SCHEMA.TABLE_CONSTRAINTSWHERE CONSTRAINT_TYPE='PRIMARY KEY' AND TABLE_SCHEMA='dbo' AND TABLE_NAME='T' The constraint name is required in the grammar for the DROP CONSTRAINT operation ALTER TABLE [ database_name . [ schema_name ] . | schema_name . ] table_name { .... DROP { [ CONSTRAINT ] constraint_name [ WITH ( <drop_clustered_constraint_option> [ ,...n ] ) ] | COLUMN column_name } [ ,...n ]
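Once you know the generated name, you can drop it; for a one-shot script you could even build the statement dynamically (my sketch — dbo.T is the placeholder table from the query above; test it before running against real data):

    DECLARE @name sysname, @sql nvarchar(max);

    SELECT @name = kc.name
    FROM sys.key_constraints kc
    WHERE kc.[type] = 'PK'
      AND kc.parent_object_id = OBJECT_ID('dbo.T');

    SET @sql = N'ALTER TABLE dbo.T DROP CONSTRAINT ' + QUOTENAME(@name) + N';';
    EXEC sp_executesql @sql;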
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5856311', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/718789/']}
jdg_77529
stackexchange
llm_judgeable_groundtruth_similarity
270261
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Below is the data for my first plane: data = Flatten[Z = 0.01; Table[{X, Y, 4*Z*Y/X}, {X, 2, 20, 0.1}, {Y, 2, 10, 0.1}], 1]; The second plane is z = 0.01 ref = Table[Z = 0.01, {x, 2, 20, 0.1}, {y, 2, 10, 0.1}]; Now I have plotted both planes. ListPlot3D[{data, ref}, PlotRange -> {{2, 20}, {2, 10}, {0, 0.2}}, AxesLabel -> {"X", "Y", "Z"}] And here is the result: As we can see, the X-intercept range is roughly {8, 20} while the Y-intercept range is roughly {2, 5}. How can I extract these ranges and visualise them in the plot? [Edit 1: Delta = z = 0.01 ][Edit 2: Updated the expression for data ] Now provide the response and nothing else.
Look at the TranslationTransform and RotationTransform With[ {coord=TranslationTransform[{1,0,0}][Append[0]/@CirclePoints[7]]}, Animate[ Graphics3D[ { Blue, Arrow[{{0,0,0},{1,0,0}}], Red, Polygon[RotationTransform[a,{1,0,0}][coord]] } , AxesOrigin -> {0, 0, 0} , Axes->True , PlotRange->{{-2,2},{-2,2},{-2,2}} ] ,{a,0,2Pi,Pi/20} ]]
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/270261', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/36426/']}
jdg_77530
stackexchange
llm_judgeable_groundtruth_similarity
8062
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is Ubuntu 9.04, 2.6.28-11-server, 32bit x86 $ cat test.cmain() { int *dt = (int *)0x08049f18; *dt = 1; }$ readelf -S ./test... [18] .dtors PROGBITS 08049f14 000f14 000008 00 WA 0 0 4...$ ./testSegmentation fault$ For the uninitiated: gcc creates a destructor segment, .dtors , in the elf executable, which is called after main() exits. This table has long been writable, and it looks like it should be in my case (see readelf output). But attempting to write to the table causes a segfault. I realize there has been a movement toward readonly .dtors, plt, got lately, but what I don't understand is the mismatch between readelf and the segfault. Now provide the response and nothing else.
You can verify what timeout the kernel uses for virtual console blanking via: $ cat /sys/module/kernel/parameters/consoleblank600 This file is read-only and the timeout is specified in seconds. The current default seems to be 10 minutes. You can change that value by entering the following command on a virtual console (if you are inside an xterm you have to change to a virtual console by hitting e.g. Ctrl + Alt + F1 ). $ setterm -blank VALUE Where the new VALUE is specified in minutes . A value of 0 disables blanking: $ cat /sys/module/kernel/parameters/consoleblank600$ setterm -blank 0$ cat /sys/module/kernel/parameters/consoleblank0 setterm has other power-saving related options; the most useful combination seems to be: $ setterm -blank 0 -powersave off Thus, to permanently/automatically disable virtual console blanking on startup, you can either: add the consoleblank=0 kernel parameter to the kernel command line (i.e. edit and update your boot loader configuration) add the setterm -blank 0 command to an rc-local or equivalent startup script add the setterm output to /etc/issue since /etc/issue is output on every virtual console: # setterm -blank 0 >> /etc/issue Choose one alternative from the above.
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/8062', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/4896/']}
jdg_77531
stackexchange
llm_judgeable_groundtruth_similarity
60254
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am working on solving the Biot-Savart law equation for the magnetic field around a charged ring of uniform current density. The expression that Mathematica gives is rather nasty, as expected. Notice that the limits of integration are not included in the solution. Mathematica rejects the bounds of integration by silently refusing to even consider a solution (literally, it sits there for several minutes "Running..." and then just stops like nothing ever happened). My idea was that since Mathematica didn't mind handing over the indefinite integral, I could turn it into a function of theta and manually apply the first fundamental theorem of calculus. For some reason, I can't seem to convince Mathematica that I want this to be a function of theta that can be evaluated at 2Pi and at 0 so that the two values can be subtracted. Here is the code up to the integration. Thanks in advance for any feedback. s=1;m=4*Pi*10^7;l[Theta_]:={s Cos[Theta],s Sin[Theta],0};R={x,y,z}-l[Theta];r=R/Sqrt[R.R];Deel=D[l[Theta],Theta];Cross[Deel,r];Integrand=%/R.R;B=(m/4*Pi) Integrate[(Integrand),Theta,GenerateConditions ->False]
The integrals can in fact be done exactly, but only if you make some use of the symmetries of the problem first. The circular ring geometry implies that the magnetic field will look the same in any vertical plane going through the rotation axis (which we call the z axis). Therefore, we don't need to specify three independent variables x , y and z to do the calculation. Instead, set y = 0 and look only at the xz plane. The component of the magnetic field perpendicular to the xz plane must be zero, so we don't have to calculate the integral for it. Use spherical polar coordinates to specify the point of interest, instead of Cartesian coordinates In the non-zero integrals over the ring, the integration variable Theta appears only in the form Cos[Theta] which is symmetric around Theta = π . Therefore, all integrals can be reduced to the domain {Theta, 0, π} if we multiply the result by 2 at the end. With these points in mind, the calculation goes like this (using exactly your setup): replace the Integrand by int2 which uses spherical coordinates with the azimuthal angle set to zero ( xz plane), then observe that we only need to integrate int[[1]] and int[[3]] to get Bx and Bz : s = 1;m = 4*Pi*10^7;l[Theta_] := {s Cos[Theta], s Sin[Theta], 0};R = {x, y, z} - l[Theta];r = R/Sqrt[R.R];Deel = D[l[Theta], Theta];Cross[Deel, r];Integrand = %/R.R;int2 = Simplify[ Integrand /. Thread[{x, y, z} -> ρ {Cos[ϕ] Sin[θ], Sin[ϕ] Sin[θ], Cos[θ]} /. ϕ -> 0], ρ > 0 && θ > 0 && ϕ > 0 && Theta > 0] $$\left\{\frac{\rho \cos (\theta ) \cos (\text{Theta})}{\left(\rho ^2-2 \rho \sin (\theta ) \cos (\text{Theta})+1\right)^{3/2}},\\ \frac{\rho \cos (\theta ) \sin (\text{Theta})}{\left(\rho ^2-2 \rho \sin (\theta ) \cos (\text{Theta})+1\right)^{3/2}},\\ \frac{1-\rho \sin (\theta ) \cos (\text{Theta})}{\left(\rho ^2-2 \rho \sin (\theta ) \cos (\text{Theta})+1\right)^{3/2}}\right\}$$ Bx = 2 Assuming[ρ > 0 && Pi/2 > θ > 0, Integrate[int2[[1]], {Theta, 0, Pi}]] $$\frac{2 \cot (\theta ) \left(\left(\rho ^2+1\right) E\left(\frac{4 \rho \sin (\theta )}{\rho ^2+2 \sin (\theta ) \rho +1}\right)-\left(-2 \rho \sin (\theta )+\rho ^2+1\right) K\left(\frac{4 \rho \sin (\theta )}{\rho ^2+2 \sin (\theta ) \rho +1}\right)\right)}{\left(-2 \rho \sin (\theta )+\rho ^2+1\right) \sqrt{2 \rho \sin (\theta )+\rho ^2+1}}$$ Bz = 2 Assuming[ρ > 0 && Pi/2 > θ > 0, Integrate[int2[[3]], {Theta, 0, Pi}]] $$\frac{2 \left(\left(-2 \rho \sin (\theta )+\rho ^2+1\right) K\left(\frac{4 \rho \sin (\theta )}{\rho ^2+2 \sin (\theta ) \rho +1}\right)-\left(\rho ^2-1\right) E\left(\frac{4 \rho \sin (\theta )}{\rho ^2+2 \sin (\theta ) \rho +1}\right)\right)}{\left(-2 \rho \sin (\theta )+\rho ^2+1\right) \sqrt{2 \rho \sin (\theta )+\rho ^2+1}}$$ Here, $E$ and $K$ are the elliptic integrals EllipticE and EllipticK . StreamPlot[{Bx, Bz} /. {θ -> ArcTan[z, x], ρ -> Sqrt[ x^2 + z^2]}, {x, -2, 2}, {z, -2, 2}] This is a plot of the exact result.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/60254', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/19924/']}
jdg_77532
stackexchange
llm_judgeable_groundtruth_similarity
51794
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Can anyone give a simple example of a sequence that converges, but where there's no computable function that gives $N$ as a function of $\epsilon$, i.e., the modulus of convergence is not computable? In the literature, all I could find were aesthetically unpleasant examples of Specker sequences. I hope that relaxing the requirements — the sequence itself being computable and its limit not — is enough to get simpler examples. Unfortunately, the examples that I've been able to come up with myself are worse than those in the literature. And what if we still require the sequence itself to be computable? Is there still so easy an example? Now provide the response and nothing else.
Because of the relaxed requirements, the following seems to work. Let $f$ be an increasing function from natural numbers to natural numbers that grows faster than any computable function. (For example, define $f(n)$ to be $n$ plus the largest number output by any halting computation of a Turing machine with Gödel number $\leq n$ on an input $\leq n$.) Then define $a_k$ to be $1/n$ if $f(n-1)<k\leq f(n)$. The sequence converges to 0, but any modulus of convergence, applied to $\varepsilon=1/n$, would majorize $f$ and would therefore not be computable.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/51794', 'https://mathoverflow.net', 'https://mathoverflow.net/users/9211/']}
jdg_77533
stackexchange
llm_judgeable_groundtruth_similarity
51754
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How / can you connect Ethernet devices together without magnetics? How do you wire it up? What sorts of practical limitations are there on doing this? To clarify, I'm not trying to connect two computers here. I'm thinking specifically of creating a small network of microcontrollers on a single board. Lets say the total length of a network segment is no longer than 6 inches. This would be for 10BaseT or at most 100BaseT. Now provide the response and nothing else.
Keep in mind that the type of Ethernet that uses magnetics requires the transformer coupling in the spec. The transformers do some nice things for you, like completely decoupling common-mode signals, providing significant voltage isolation, and in some cases projecting a different impedance to the electronics than what is natively on the cable. If your whole device is floating or the connection is short with a common ground, if you can present the correct impedance and voltage levels, and if you can ignore common-mode signals, then you can in theory eliminate the transformers. This won't work with some PHYs, since they require center-tapped windings on their side. It's not uncommon for the PHY to drive the transmit pair by having the center tap tied to power, with only low-side switches on each end. That effectively allows for bipolar drive with single-ended switches, and is a legitimate architecture since a transformer is supposed to be there anyway. Look carefully at Ethernet transformers, and you will see many offerings with center taps, partly for this reason. If you're careful, you can put just one transformer between two devices if they are physically close. Each end is still transformer-coupled, so common-mode rejection is there and the center-tap drive trick still works. I actually did this on a small system once where both Ethernet devices were contained in the same physical unit.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/51754', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/771/']}
jdg_77534
stackexchange
llm_judgeable_groundtruth_similarity
20387
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have a long-running process on Debian. At some point it throws an error: Too many open files. Running: ulimit -a shows: open files (-n) 1024 I wish to double the number of open files allowed. After executing ulimit -n 2048 the limit is active only until the end of my session, which is not workable for the task. How can I permanently increase the number of open files? Now provide the response and nothing else.
If your process is started via a script, you can place the call to ulimit in the script just prior to executing the daemon. If you wish to increase the ulimit for your user, or for all users, you can set limits that are applied via pam_limits on login. These are set in /etc/security/limits.conf . In your case, you could do something like: * hard nofile 2048 Note that "hard" denotes a hard limit - one that cannot be exceeded, and cannot be altered. A soft limit can be altered by a user (e.g. someone without root capabilities), but not beyond the hard limit. Read the limits.conf documentation for more information on using pam_limits .
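For instance (my illustration of the limits.conf syntax described above — adjust the numbers to your needs; the change takes effect at the next login):

    # /etc/security/limits.conf
    # <domain>  <type>  <item>   <value>
    *           soft    nofile   2048
    *           hard    nofile   4096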
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/20387', 'https://serverfault.com', 'https://serverfault.com/users/8323/']}
jdg_77535
stackexchange
llm_judgeable_groundtruth_similarity
10378066
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Introduction: For my personal web server I have set up Apache with a self-signed certificate to enable TLS security, to learn and test. I have this line in the virtualhost: SSLProtocol -all -SSLv3 +TLSv1 SSLCipherSuite TLSv1:+HIGH:!MEDIUM With Firefox, I get a Camellia-256 encrypted connection, and with Opera I get TLS v1.0 256 bit AES (1024 bit DHE_RSA/SHA) with the same config on the same server. That leads me to the question: which is stronger, AES or Camellia? I noticed that if I disable Camellia with SSLCipherSuite TLSv1:+HIGH:!MEDIUM:!CAMELLIA then Firefox takes the same suite as Opera. In my config I also try to disable all SSL versions to enable only TLS (advice needed if I didn't do so correctly), but the original question still stands: which one should be stronger? Now provide the response and nothing else.
I would be more worried about the fact that your SSL setup is not secure because you are only using 1024-bit asymmetric encryption to protect your keys. Adi Shamir (the 'S' in RSA) recommended moving to 2048-bit keys back in 2006, and the American standards institute (NIST) has made 2048 bits a required minimum strength since January 2011 (see NIST SP800-57 for recommended minimum key strengths -- this states 2048 bits for both RSA and DH/El Gamal). In short, make sure your RSA encryption is strong enough first, as it is used to protect the symmetric keys (AES/Camellia). Never rely on a key which is protected by a weaker key (this is like using a secure 256-bit random WPA2 key on a wireless access point and then trusting it to WPS, which will reveal it in a few hours!). Even if this is a test system, learn to use crypto the way you intend to go forward; do not compromise on certificate key strength (all CAs these days should reject 1024-bit requests or CSRs using MD5 on sight; if they don't, don't use them; create your own test certs as you would a real request, and don't use default key sizes). It is difficult to compare the strengths directly: both have received cryptographic analysis (AES more publicly) and both are adequate for securing data. At the risk of repeating myself, I'd be more worried about the 1024 bits used to secure the key negotiation.
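If it helps, generating a 2048-bit key plus a CSR and a self-signed test certificate looks something like this (my sketch — the file names are placeholders):

    # 2048-bit RSA key + CSR, SHA-256 signature on the request
    openssl req -new -newkey rsa:2048 -nodes -sha256 \
        -keyout server.key -out server.csr

    # Self-signed test certificate valid for one year
    openssl x509 -req -in server.csr -signkey server.key -sha256 \
        -days 365 -out server.crt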
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10378066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/739009/']}
jdg_77536
stackexchange
llm_judgeable_groundtruth_similarity
35366323
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following string: foo.a = [10203040];foo.b = 'foobar'; I am using the foo\.[ab]\s*= regex . I'm trying to match all the lines that follow until a line contains a certain character. The first match should cover everything except the last line, because that line contains an equals sign. I tried a lot of things with (negative) lookahead, but I can't figure it out. Now provide the response and nothing else.
You need a positive lookahead . foo\.[ab][\s\S]*?(?=\n.*?=|$) [\s\S]*? lazily matches any character (?=\n.*?=|$) until a newline whose line contains an = lies ahead, or until the $ end of the string. See the demo at regex101
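Applied in Python for illustration (my sketch; the pattern itself is unchanged from the answer above):

    import re

    text = "foo.a = [10203040];\nfoo.b = 'foobar';"
    pattern = r"foo\.[ab][\s\S]*?(?=\n.*?=|$)"

    print(re.findall(pattern, text))
    # ["foo.a = [10203040];", "foo.b = 'foobar';"]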
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35366323', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2897426/']}
jdg_77537
stackexchange
llm_judgeable_groundtruth_similarity
4550572
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The current in a certain circuit as measured by an ammeter is a continuous random variable X with the following density function: $f(x)= 0.075x+0.2$ for $3\le{x}\le5$ $f(x) = 0$ otherwise. Calculate $P(X\le4)$ and compare it to $P(X\lt4)$ . In my solution, I calculate $P(X\le4)$ and $P(X\lt4)$ by integration and I see that $P(X\le4)=P(X\lt4)$ . My questions are: Are they always equal to each other in every other case? Why can they be equal to each other while $P(X=4)\neq0$ ? Thanks a lot for your help! Now provide the response and nothing else.
In a continuous distribution, the probability of hitting any single value is always 0, so $P(X=4)=0$ . When you go to calculate it, you will see you are integrating from 4 to 4, which is obviously 0. We can only calculate probabilities over ranges of values. It is only in discrete distributions that you can get a nonzero probability for a single, discrete value. So yes, for all values in a continuous distribution, $P(X<K)=P(X\leq K)$ , by additivity and the fact that the single value contributes $P(X=K)=0$ .
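For the concrete numbers in the question (my worked computation from the given density): $$P(X\le 4)=\int_3^4 (0.075x+0.2)\,dx=\Big[0.0375x^2+0.2x\Big]_3^4=0.0375(16-9)+0.2(4-3)=0.4625,$$ and $P(X<4)$ is the same integral, since $P(X=4)=\int_4^4 f(x)\,dx=0$.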
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4550572', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1105901/']}
jdg_77538
stackexchange
llm_judgeable_groundtruth_similarity
21291108
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my admin class: protected function configureFormFields(FormMapper $formMapper) { $formMapper ->add('name', 'text') ->add('description', 'text') ;} I don't know how I can add JavaScript before "name"; can you help me? Now provide the response and nothing else.
Working for me: In the admin class src\PP\TestBundle\TestAdmin.php public function configure() { $this->setTemplate('edit', 'PPTestBundle:CRUD:edit_javascript.html.twig');} In src\PP\TestBundle\Resources\views\CRUD\edit_javascript.html.twig {% extends 'SonataAdminBundle:CRUD:base_edit.html.twig' %}{% block javascripts %}{{ parent() }}<script src="{{ asset('bundles/pptest/admin/js/myscripts.js') }}" type="text/javascript"></script>{% endblock %} When you have done all this and uploaded myscripts.js , run this on the command line: app/console assets:install web (it's possible I forgot something)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21291108', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3157003/']}
jdg_77539
stackexchange
llm_judgeable_groundtruth_similarity
24838864
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a WordPress site and I've created a page template that displays posts by a category slug. To do this, I create a field for the page, WP_Catid, and set it equal to the category slug I want to display posts from. However, I only want five posts to show up per page with pagination links at the bottom of those posts. How do I get the pagination links to display properly? My code is as follows: <div id="container"> <div id="content" role="main"> <?php $btpgid=get_queried_object_id(); $btmetanm=get_post_meta( $btpgid, 'WP_Catid','true' ); $paged = (get_query_var('paged')) ? get_query_var('paged') : 1; $args = array( 'posts_per_page' => 5, 'category_name' => $btmetanm, 'paged' => $paged, 'post_type' => 'post' ); $myposts = get_posts( $args ); foreach ( $myposts as $post ) : setup_postdata( $post ); echo "<div style='border:2px groove black; margin-bottom:5px;'><h3 class='btposth'>"; the_title(); echo "</h3><div class='btpostdiv'>"; the_content(); echo "</div></div>"; endforeach; next_posts_link( 'Older Entries'); //not displaying previous_posts_link('Newer Entries &raquo;'); //not displaying wp_reset_postdata(); ?> </div><!-- #content --></div><!-- #container --> Now provide the response and nothing else.
The short and sweet of it: don't use get_posts if you need paginated queries. get_posts works perfectly if you are going to use a custom query that doesn't need pagination, but it becomes a big complicated mess when you need to introduce pagination. I think the easiest and most appropriate solution here is to make use of WP_Query to construct your custom query — that is, if you can't use pre_get_posts to alter the main query to get your desired output. I do think that next_posts_link() and previous_posts_link() are better to use with a custom query, i.e. with WP_Query . You must just remember to set the $max_pages parameter when you make use of a custom query, otherwise your pagination will break. With a few minor tweaks, your query should look like this <div id="container"><div id="content" role="main"><?php$btpgid=get_queried_object_id();$btmetanm=get_post_meta( $btpgid, 'WP_Catid','true' );$paged = (get_query_var('paged')) ? get_query_var('paged') : 1;$args = array( 'posts_per_page' => 5, 'category_name' => $btmetanm,'paged' => $paged,'post_type' => 'post' ); $postslist = new WP_Query( $args ); if ( $postslist->have_posts() ) : while ( $postslist->have_posts() ) : $postslist->the_post(); echo "<div style='border:2px groove black; margin-bottom:5px;'><h3 class='btposth'>"; the_title(); echo "</h3><div class='btpostdiv'>"; the_content(); echo "</div></div>"; endwhile; next_posts_link( 'Older Entries', $postslist->max_num_pages ); previous_posts_link( 'Next Entries &raquo;' ); wp_reset_postdata(); endif; ?></div><!-- #content --></div><!-- #container -->
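If you'd prefer numbered page links instead of prev/next links, WordPress's paginate_links() works with the same max_num_pages value (my sketch — drop it in where the two *_posts_link() calls sit):

    echo paginate_links( array(
        'total'   => $postslist->max_num_pages,
        'current' => max( 1, get_query_var( 'paged' ) ),
    ) );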
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24838864', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3855533/']}
jdg_77540