source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses, 1 value) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10) |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 130788 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm not a math guy and probably this is a stupid question. However, I was browsing Wikipedia out of curiosity and I could not understand this one statement : [...] not all infinite sets have the same cardinality. For example, Georg Cantor (who introduced this branch of mathematics) demonstrated that the real numbers cannot be put into one-to-one correspondence with the natural numbers (non-negative integers), and therefore that the set of real numbers has a greater cardinality than the set of natural numbers. Now, shortly before this a formula was used to map even integers to odd integers: n → 2n + 1 Granted we have infinite natural and real numbers, we can devise a formula to map them like so: 0 → 0, 1 → 1, 2 → -1, 3 → 2, 4 → -2, 5 → 3, 6 → -3, 7 → 4, 8 → -4, 9 → 5, 10 → -5 As I'm a math analphabet, this is javascript: function realToNatural(x) { if(x == 0) return 0; if(x < 0) return x * -2; else return x * 2 - 1;}function naturalToReal(x) { if(x == 0) return 0; if(x % 2 == 0) return x / 2 * -1; else return (x + 1) / 2;} Now, I'm sure there is a hole in my argument, but what is it? A couple of additional thoughts: The article mentions the cardinality of the set of odd integers being equal to the one of even integers, and as well equal to the cardinality of all integers, so my confusion is: if this applies to odd and even numbers (being both a "full" infinity instead of "half" infinity) versus the set of both, so it would to natural numbers versus real ones. Also it states that I just need a function in the form of f : S → ℕ to make a set countable.
Now provide the response and nothing else.
| You have a bijection between the natural numbers and the integers, not between the natural numbers and the real numbers. The real numbers include $\frac12,\pi,\sqrt2$, etc. The first of these, $\frac12$, is a rational number; the last two are not. There are bijections between $\Bbb Q$, the set of rational numbers, and $\Bbb N$, the set of natural numbers, but $\Bbb R$, the set of real numbers, is too big: one can prove that any function mapping $\Bbb N$ to $\Bbb R$ will miss some real numbers, and any function mapping $\Bbb R$ to $\Bbb N$ will hit some natural number infinitely many times. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/130788', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/28905/']} | jdg_73941 |
stackexchange | llm_judgeable_groundtruth_similarity | 38044374 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Xcode UI testing, how do I test that staticTexts contains a string? In the debugger, I can run something like this to print out all the content of staticTexts: po app.staticTexts . But how do I test if a string exists anywhere within all of that content? I can check for the existence of each staticText doing something like app.staticTexts["the content of the staticText"].exists ? but I have to use the exact content of that staticText. How can I use only a string which may be only a part of that content?
Now provide the response and nothing else.
| First, you need to set an accessibility identifier for the static text object you want to access. This will allow you to find it without searching for the string it is displaying. // Your app codelabel.accessibilityIdentifier = "myLabel" Then you can assert whether the string displayed is the string you want by writing a test by calling .label on the XCUIElement to get the contents of the displayed string: // Find the labellet myLabel = app.staticTexts["myLabel"]// Check the string displayed on the label is correctXCTAssertEqual("Expected string", myLabel.label) To check it contains a certain string, use range(of:) , which will return nil if the string you give is not found. XCTAssertNotNil(myLabel.label.range(of:"expected part")) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38044374', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4852711/']} | jdg_73942 |
stackexchange | llm_judgeable_groundtruth_similarity | 2853503 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to connect the definition of dense in a total ordered set Def. 1 Given total order $(X,\le)$, a set $S\subseteq X$ is dense in $X$ if for any $x,x'\in S$ such that $x<x'$, there is $y\in S$ such that $x<y<x'$. with the definition of dense-in-itself in a order topology (induced by the same total order) Def. 2 Given the order topology $(X,\tau)$ with total order $\le$, a set $S\subseteq X$ without isolated points is called dense-in-itself. The comparison is taken from the answer: dense in terms of order and in terms of the order topology , and is formulated as Lemma 1. Given a totally ordered set $(X,\le)$ then $S\subseteq X$ in the induced order topology is dense-in-itself according to Definition 2 if and only if $S$ is dense according to Definition 1. Proof. If $S$ is dense-in-itself then it has no isolated points and consists only of limit points. Given any two $x,x'\in S$ such that $x<x'$, then $(x,x')$ is not empty because (*) On the other side, if $S$ is dense according to the total order definition, suppose that $S$ has an isolated point, that is, there is an open set containing only $x$, which contradicts Lemma ??? (the order topology consists of unions of open intervals and rays). Is the Lemma true? How do I proceed with the proof at point (*)? Intuitively it is clear: To have a hole with $x$ and $x'$ on its ends and $x,y\in S$ then $S$ cannot be open. Should I add the condition that $S\in\tau$ to make the Lemma valid? From the answer in the cited question it seems to be the case, but I need anyway to understand how to prove it because the argument "Since X is dense in terms of topology, there is an x in (a,b)" is not enough for me to understand. The starting point of the reasoning was to connect the definition of dense according to a total order and this third definition: Def. 3 Given a topological space $(X,\tau)$, subset $S\subseteq X$ is said to be dense if $\bar{S}=X$. Generally given a subset $D\subseteq S$, $D$ is said to be dense in $S$ if $S\subseteq \bar{D}$. If I take literally "dense-in-itself" and apply it to Def. 3 I would get the following lemma Lemma 2 $S\subseteq X$, $S$ is dense-in-itself if and only if $S\subseteq\bar{S}$. which is wrong since the r.h.s. is always true. It cannot be that all (let restrict it to) open set are dense in order topology? How are connected Definitions 2 and 3?
Now provide the response and nothing else.
| Let $f(x)$ be any quartic polynomial with coefficients from $\{ -1, +1 \}$. Replacing $f(x)$ by $-f(x)$ if necessary, we can assume $f(x)$ is monic. i.e. $$f(x) = x^4 + ax^3 + bx^2 + cx + d\quad\text{ with }\quad a,b,c,d \in \{ -1, +1 \}$$ If $f(x)$ has $4$ real roots $\lambda_1,\lambda_2,\lambda_3,\lambda_4$, then by Vieta's formula , we have $$\sum_{i=1}^4 \lambda_i = -a, \sum_{1\le i < j\le 4} \lambda_i\lambda_j = b\quad\text{ and }\quad\prod_{i=1}^4 \lambda_i = d$$Notice$$\sum_{i=1}^4 \lambda_i^2 = \left(\sum_{i=1}^4\lambda_i\right)^2 - 2\sum_{1\le i < j \le 4}\lambda_i\lambda_j = a^2 - 2b = 1 -2b$$ Since $\sum_{i=1}^4 \lambda_i^2 \ge 0$, we need $b = -1$. As a result, $$\sum_{i=1}^4 \lambda_i^2 = 3$$By AM $\ge$ GM, this leads to $$\frac34 = \frac14\sum_{i=1}^4 \lambda_i^2 \ge \left(\prod_{i=1}^4 \lambda_i^2\right)^{1/4} = (d^2)^{1/4} = 1$$This is impossible and hence $f(x)$ cannot has 4 real roots. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2853503', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/322402/']} | jdg_73943 |
stackexchange | llm_judgeable_groundtruth_similarity | 18758 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am plotting the voltage in a pool of water with two electrodes. I have contour lines on my graph at every volt, but I would like to show the electric field lines which are going to be perpendicular to the contour lines. How can I do that?
Now provide the response and nothing else.
| What you're asking for could be done, but perpendicular contour lines are not a good way to represent the electric field in a 3D plot of a 2D potential. The electric field (or negative gradient) will be a vector field in the plane, and its field lines will be planar too. The third dimension in your 3d plot is geometrically meaningless because it doesn't have dimension length . So instead what I'd strongly suggest is to either plot your potential and field in 2D, or plot the electric field as planar vectors attached to the 3D plot of the potential. The latter is what I'm illustrating below, because it's most closely matched to your starting point: fieldArrow[pos_, field_, scale_] := {Hue[Norm[field]], Arrowheads[.02], Arrow[Tube[{{pos, pos + scale field}}]]};grid = Table[{x, y}, {y, 0, 10, .5}, {x, 0, 10, .5}];gridData = Table[Sin[y/10] ArcTan[x - 5], {y, 0, 10, .5}, {x, 0, 10, .5}];fieldX = -DerivativeFilter[gridData, {0, 1}, InterpolationOrder -> 3];fieldY = -DerivativeFilter[gridData, {1, 0}, InterpolationOrder -> 3];data3D = MapThread[Append[#1, #2] &, {grid, gridData}, 2];vectorField = MapThread[ fieldArrow[Append[#1, #2], {#3, #4, 0}, 3] &, {grid, gridData, fieldX, fieldY}, 2];arrows = Graphics3D[vectorField];Show[ListPlot3D[Flatten[data3D, 1], MeshFunctions -> (#3 &)], arrows] Here, I take the gradient of a made-up potential given as data, by using GradientFilter . Then I make arrows out of the field vectors and superimpose them on your original plot with Show . It's up to you to style the arrows differently - I colored them by the field strengthand scaled them by a parameter scale in the function fieldArrow . You may also want to shrink the arrowheads proportionally to the arrow length. But I left that out so you can see the smaller arrows better. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/18758', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/2361/']} | jdg_73944 |
stackexchange | llm_judgeable_groundtruth_similarity | 37362986 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried to make a view(not rotated) wider in a controller. I tried two approaches to do that: MyView.frame.size.width += 80 MyView.bounds.size.width += 80 And I noticed that they gave me different effect, how does that happen?
Now provide the response and nothing else.
| For the indexing to work with two DataFrames they have to have comparable indexes. In this case it won't work because one DataFrame has an integer index, while the other has dates. However, as you say you can filter using a bool array . You can access the array for a Series via .values . This can be then applied as a filter as follows: df # pandas.DataFrames # pandas.Series df[s.values] # df, filtered by the bool array in s For example, with your data: import pandas as pddf = pd.DataFrame([ [30.20, 29.41, 29.87], [30.28, 29.32, 30.24], [30.45, 29.96, 30.10], [29.35, 28.74, 28.90], [29.35, 28.56, 28.92], ], columns=['High','Low','Close'], index=['2009-02-11','2009-02-12','2009-02-13','2009-02-17','2009-02-18'] )s = pd.Series([True, False, False, True, False], name='bools')df[s.values] Returns the following: High Low Close2009-02-11 30.20 29.41 29.872009-02-17 29.35 28.74 28.90 If you just want the High column, you can filter this as normal (before, or after the bool filter): df['High'][s.values]# Or: df[s.values]['High'] To get your target output (as a Series ): 2009-02-11 30.20 2009-02-17 29.35 Name: High, dtype: float64 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37362986', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5104361/']} | jdg_73945 |
stackexchange | llm_judgeable_groundtruth_similarity | 21654008 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my case, I only want to drag one point each time. However, since the two points are heavily overlapping, dragging one point would cause another point to be dragged. How can I only drag the point that is on the above? Thank you! from pylab import *from scipy import *import matplotlib.pyplot as pltimport matplotlib.patches as patchesclass DraggablePoint: def __init__(self, p): self.point = p self.press = None def connect(self): self.cidpress = self.point.figure.canvas.mpl_connect('button_press_event', self.button_press_event) self.cidrelease = self.point.figure.canvas.mpl_connect('button_release_event', self.button_release_event) self.cidmotion = self.point.figure.canvas.mpl_connect('motion_notify_event', self.motion_notify_event) def disconnect(self): 'disconnect all the stored connection ids' self.point.figure.canvas.mpl_disconnect(self.cidpress) self.point.figure.canvas.mpl_disconnect(self.cidrelease) self.point.figure.canvas.mpl_disconnect(self.cidmotion) def button_press_event(self,event): if event.inaxes != self.point.axes: return contains = self.point.contains(event)[0] if not contains: return self.press = self.point.center, event.xdata, event.ydata def button_release_event(self,event): self.press = None self.point.figure.canvas.draw() def motion_notify_event(self, event): if self.press is None: return if event.inaxes != self.point.axes: return self.point.center, xpress, ypress = self.press dx = event.xdata - xpress dy = event.ydata - ypress self.point.center = (self.point.center[0]+dx, self.point.center[1]+dy) print self.point.center self.point.figure.canvas.draw()if __name__ == '__main__': fig = plt.figure() ax = fig.add_subplot(111) ax.set_xlim(-1,2) ax.set_ylim(-1,2) circles = [] circle1 = patches.Circle((0.32,0.3), 0.2, fc='r', alpha=0.5, picker=True) circle = patches.Circle((0.3,0.3), 0.2, fc='b', alpha=0.5, picker=True) circles.append(ax.add_patch(circle1)) circles.append(ax.add_patch(circle)) drs = [] for c in circles: #print c.center[0] dr = DraggablePoint(c) dr.connect() drs.append(dr) plt.show()
Now provide the response and nothing else.
| Joe's method works fine, but it makes a set of draggablepoints as a class instead of a single draggablepoint class. I just came across an alternative method to solve the above problem using animation blit techniques . It not only makes the dragging faster and smoother, but also only one point can be dragged. See the following code. import matplotlib.pyplot as pltimport matplotlib.patches as patchesclass DraggablePoint: lock = None #only one can be animated at a time def __init__(self, point): self.point = point self.press = None self.background = None def connect(self): 'connect to all the events we need' self.cidpress = self.point.figure.canvas.mpl_connect('button_press_event', self.on_press) self.cidrelease = self.point.figure.canvas.mpl_connect('button_release_event', self.on_release) self.cidmotion = self.point.figure.canvas.mpl_connect('motion_notify_event', self.on_motion) def on_press(self, event): if event.inaxes != self.point.axes: return if DraggablePoint.lock is not None: return contains, attrd = self.point.contains(event) if not contains: return self.press = (self.point.center), event.xdata, event.ydata DraggablePoint.lock = self # draw everything but the selected rectangle and store the pixel buffer canvas = self.point.figure.canvas axes = self.point.axes self.point.set_animated(True) canvas.draw() self.background = canvas.copy_from_bbox(self.point.axes.bbox) # now redraw just the rectangle axes.draw_artist(self.point) # and blit just the redrawn area canvas.blit(axes.bbox) def on_motion(self, event): if DraggablePoint.lock is not self: return if event.inaxes != self.point.axes: return self.point.center, xpress, ypress = self.press dx = event.xdata - xpress dy = event.ydata - ypress self.point.center = (self.point.center[0]+dx, self.point.center[1]+dy) canvas = self.point.figure.canvas axes = self.point.axes # restore the background region canvas.restore_region(self.background) # redraw just the current rectangle axes.draw_artist(self.point) # blit just the redrawn area canvas.blit(axes.bbox) def on_release(self, event): 'on release we reset the press data' if DraggablePoint.lock is not self: return self.press = None DraggablePoint.lock = None # turn off the rect animation property and reset the background self.point.set_animated(False) self.background = None # redraw the full figure self.point.figure.canvas.draw() def disconnect(self): 'disconnect all the stored connection ids' self.point.figure.canvas.mpl_disconnect(self.cidpress) self.point.figure.canvas.mpl_disconnect(self.cidrelease) self.point.figure.canvas.mpl_disconnect(self.cidmotion)fig = plt.figure()ax = fig.add_subplot(111)drs = []circles = [patches.Circle((0.32, 0.3), 0.03, fc='r', alpha=0.5), patches.Circle((0.3,0.3), 0.03, fc='g', alpha=0.5)]for circ in circles: ax.add_patch(circ) dr = DraggablePoint(circ) dr.connect() drs.append(dr)plt.show() | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21654008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1789827/']} | jdg_73946 |
stackexchange | llm_judgeable_groundtruth_similarity | 25390401 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
i have an angular app in which i am using a textbox as follows: <div class="panel-body text-center"> <textarea id="mytext" class="form-control" rows="4">John,2 Jane,3 John,4 Jane,5 </textarea> </div> Here the values John,2...etc are loaded by default. However when i introduce an ng-model to access this data as follows, the default values do not show up anymore. What might be happening? <textarea id="mytext" ng-model="mytextvalue" class="form-control" rows="4">
Now provide the response and nothing else.
| From the docs : ngModel will try to bind to the property given by evaluating the expression on the current scope. If the property doesn't already exist on this scope, it will be created implicitly and added to the scope. So what's happening here with the following code: <textarea ng-model="mytextvalue" >...</texarea> When the ng-model directive is processed, it will look for a property mytextvalue on $scope . When it doesn't find one, it will create an empty value and then proceed to assign that empty value to your element. If you want to default the value, you'll have to explicicly specify a value for mytextvalue You can do that in HTML with ng-init ng-init="mytextvalue='John,2\nJane,3\nJohn,4\nJane,5'" Or you can do it with JavaScript on your controller: $scope.mytextvalue = 'John,2\nJane,3\nJohn,4\nJane,5'; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25390401', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1467535/']} | jdg_73947 |
stackexchange | llm_judgeable_groundtruth_similarity | 22104792 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my TableView, I have a dataSource of NSMutableArray *currList - it contains objects of object Agent. I created customized TableCell and set up everything properly. I am finding problem while displaying data : -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *simpleTableIdentifier = @"ChatListCell"; // Custom TableViewCell ChartListCell *cell = (ChartListCell *)[tableView dequeueReusableCellWithIdentifier:simpleTableIdentifier]; if (cell == nil) { // I believe here I am going wrong cell = [[ChartListCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:simpleTableIdentifier]; NSLog(@"Cell = %@", cell); // Shows null } /* With UITableViewCell, things are working perfectly fine UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:simpleTableIdentifier]; if (cell == nil) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:simpleTableIdentifier]; } */ Agent *agent = (Agent *)[currChatList objectAtIndex:indexPath.row]; NSLog(@"Agent name - %@", agent.name); // Prints proper data cell.nameLabel.text = agent.name; cell.thumbImageView.image = [UIImage imageNamed:agent.photo]; cell.timeLabel.text = agent.chatTime; return cell;} As you can see in above code comments, If I comment the custom ChartListCell and use UITableViewCell, it works properly. But with ChartListCell, nothing comes up and in logs I get "Cell = null" & Agent name is showing properly. Cell shouldn't be null. Why is it null, can anyone please help me out with this. Where am I doing mistake ? Any help is highly appreciated. Thanks
Now provide the response and nothing else.
| import your custom cell file and try this - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {static NSString *simpleTableIdentifier = @"ChartListCell";ChartListCell *cell = (ChartListCell *)[tableView dequeueReusableCellWithIdentifier:simpleTableIdentifier];if (cell == nil){ NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"ChartListCell" owner:self options:nil]; cell = [nib objectAtIndex:0];} Agent *agent = (Agent *)[currChatList objectAtIndex:indexPath.row]; NSLog(@"Agent name - %@", agent.name); // Prints proper data cell.nameLabel.text = agent.name; cell.thumbImageView.image = [UIImage imageNamed:agent.photo]; cell.timeLabel.text = agent.chatTime;return cell;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22104792', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/455979/']} | jdg_73948 |
stackexchange | llm_judgeable_groundtruth_similarity | 68649524 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I`m trying to import OpenCV to android studio project, but have inactive "next" and "finish" button. So i cant finish OpenCV importing.Using Android Studio Arctic Fox 2020.3.1 and tried different version of OpenCV(4.5.3, 3.4.1, 3.4.15 etc.)There are screenshots of issue Next and Finish disabled Next and Finish disabled Edit: Tried Android studio 4.2.2 - and it fix my problem. But i still dont know how to use it in Arctic Fox
Now provide the response and nothing else.
| This Bug is in Android studio Arctic Fox This issue is not about only OpenCV module, This is happening with all library modules. Many people are posted this issue on issuetracker but nobody get solution from Android studio team side.here is some links of that issues posted on issuetracker . Link 1 , Link 2 , Link 3 , Link 4 , Link 5 , Link 6 , Link 7 So here is Temporary Workaround copy sdk folder in your project directory Add below line in settings.gradle include ':sdk' click sync now Done If you get below error (I got this error while solving this issue) Plugin with id 'kotlin-android' not found. add below dependency in root level build.gradle and click sync now dependencies { classpath "com.android.tools.build:gradle:4.2.2" classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:1.5.21" // add this line } | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/68649524', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5193069/']} | jdg_73949 |
stackexchange | llm_judgeable_groundtruth_similarity | 14275249 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to animate a progressive drawing of a line using only css with svg/canvas and js maximum. An idea of the line I want to draw can be found here <svg width="640" height="480" xmlns="http://www.w3.org/2000/svg"> <!-- Created with SVG-edit - http://svg-edit.googlecode.com/ --> <g> <title>Layer 1</title> <path d="m33,104c1,0 2.1306,-0.8037 23,3c9.07012,1.65314 10,2 24,2c7,0 29,0 33,0c8,0 9,0 11,0c2,0 8,0 11,0c9,0 17,0 18,0c10,0 12,0 20,0c1,0 6,0 7,0c2,0 3.07613,0.38268 4,0c2.61313,-1.08239 2,-3 2,-6c0,-1 0,-2 0,-3c0,-1 0,-2 0,-3c0,-1 0,-2 0,-3c0,-1 0.30745,-3.186 -1,-5c-0.8269,-1.14727 -0.09789,-2.82443 -2,-4c-0.85065,-0.52573 -2.82443,-0.09789 -4,-2c-0.52573,-0.85065 -2.58578,-0.58578 -4,-2c-0.70711,-0.70711 -1.81265,-1.20681 -4,-3c-2.78833,-2.28588 -3.64749,-2.97251 -8,-4c-2.91975,-0.68926 -4.82375,-2.48626 -7,-3c-2.91975,-0.68926 -5.15224,-0.23463 -7,-1c-1.30656,-0.5412 -4.38687,-1.91761 -7,-3c-1.84776,-0.76537 -5.03609,0.37821 -7,0c-5.28799,-1.01837 -8,-3 -9,-3c-2,0 -5.0535,-0.54049 -7,-1c-2.17625,-0.51374 -4.15224,-0.23463 -6,-1c-1.30656,-0.54119 -3,-1 -4,-1c-2,0 -5,-1 -6,-1c-1,0 -3,-2 -6,-2c-2,0 -5,-2 -6,-2c-2,0 -2.02583,-0.67963 -4,-1c-3.12144,-0.50654 -4.15224,-0.23463 -6,-1c-1.30656,-0.54119 -2,-1 -3,-1c-2,0 -3,0 -5,0c-1,0 -2,0 -3,0c-1,0 -2,0 -3,0c-1,0 -2,0 -3,0c-2,0 -3.85273,0.1731 -5,1c-1.81399,1.30745 -5.18601,1.69255 -7,3c-1.14727,0.8269 -1.82375,2.48626 -4,3c-0.97325,0.22975 -1.69344,1.45881 -3,2c-0.92388,0.38268 -1.45951,1.0535 -1,3c0.51374,2.17625 3.07844,2.78985 6,4c2.06586,0.85571 3.38688,1.91761 6,3c1.84776,0.76537 5.2987,-1.05146 7,0c1.90211,1.17557 3.82375,2.48626 6,3c0.97325,0.22975 3.29289,0.29289 4,1c0.70711,0.70711 4,2 9,4c5,2 8,4 11,4c2,0 5,0 7,0c3,0 5,0 7,0c2,0 4,0 7,0c2,0 4,0 8,0c3,0 7,0 10,0c4,0 7,0 12,0c3,0 5,0 6,0c2,0 3,0 5,0c1,0 1.09789,-0.82443 3,-2c0.85065,-0.52573 3.07613,0.38268 4,0c1.30656,-0.5412 0.71022,-2.04291 1,-3c1.04483,-3.45084 2.84723,-5.04132 4,-9c0.88414,-3.03616 1.85194,-5.22836 3,-8c0.5412,-1.30656 1.5405,-2.0535 2,-4c0.51375,-2.17625 2.71413,-4.21167 5,-7c2.68979,-3.28101 4,-6 5,-7c1,-1 2,-2 2,-4c0,-1 0.70711,-2.29289 0,-3c-0.70711,-0.70711 -2.07613,0.38268 -3,0c-1.30656,-0.54119 -2,-1 -4,-1c-3,0 -6.87856,-2.49346 -10,-3c-2.96126,-0.48055 -6.71201,-0.98162 -12,-2c-2.94586,-0.56732 -5,-1 -9,-1c-3,0 -6,-1 -8,-1c-2,0 -5,-3 -7,-3c-2,0 -5.38687,-0.91761 -8,-2c-0.92388,-0.38268 -3.0535,-0.54049 -5,-1c-2.17625,-0.51374 -4.58578,0.41421 -6,-1c-0.70711,-0.70711 -1,-1 -2,-1c-1,0 -2,0 -3,0c-1,0 -2,0 -4,0c-1,0 -2,0 -3,0c-1,0 -2,0 -4,0c-1,0 -2,0 -3,0c-2,0 -3,0 -5,0c-1,0 -2,0 -3,0c-1,0 -3,0 -4,0c-3,0 -5,0 -7,0c-2,0 -4,0 -6,0c-2,0 -3,0 -5,0c-1,0 -2,0 -3,0c-2,0 -4,0 -5,0c-1,0 -2,0 -4,0c-1,0 -2,0 -3,1l-1,0" id="svg_1" stroke-width="5" stroke="#000000" fill="none"/> </g></svg>
Now provide the response and nothing else.
| There are three techniques listed in this answer: There is an all-SVG solution that involves progressively modifying the stroke-dasharray for the shape to draw a longer and longer 'dash' followed by an enormous gap. Demo: http://phrogz.net/svg/progressively-drawing-svg-path-via-dasharray.html Relevant code: var distancePerPoint = 1;var drawFPS = 60;var orig = document.querySelector('path'), length, timer;orig.addEventListener('mouseover',startDrawingPath,false);orig.addEventListener('mouseout', stopDrawingPath, false);function startDrawingPath(){ length = 0; orig.style.stroke = '#f60'; timer = setInterval(increaseLength,1000/drawFPS);}function increaseLength(){ var pathLength = orig.getTotalLength(); length += distancePerPoint; orig.style.strokeDasharray = [length,pathLength].join(' '); if (length >= pathLength) clearInterval(timer);}function stopDrawingPath(){ clearInterval(timer); orig.style.stroke = ''; orig.style.strokeDasharray = '';} Alternatively, you can still use all SVG and choose to build the SVG path one command at a time: Demo: http://phrogz.net/svg/progressively-cloning-svg-path.html Relevant code: // Assumes 'orig' and dup' are SVG pathsfunction addNextPathSegment(){ var nextIndex = dup.pathSegList.numberOfItems; if (nextIndex<orig.pathSegList.numberOfItems){ var nextSegment = orig.pathSegList.getItem(nextIndex); var segmentDup = cloneSVGPathSeg( dup, nextSegment ); dup.pathSegList.appendItem( segmentDup ); }}function cloneSVGPathSeg( path, seg ){ switch(seg.pathSegTypeAsLetter){ case 'M': return path.createSVGPathSegMovetoAbs(seg.x,seg.y); break; case 'm': return path.createSVGPathSegMovetoRel(seg.x,seg.y); break; case 'L': return path.createSVGPathSegLinetoAbs(seg.x,seg.y); break; case 'l': return path.createSVGPathSegLinetoRel(seg.x,seg.y); break; case 'H': return path.createSVGPathSegLinetoHorizontalAbs(seg.x); break; case 'h': return path.createSVGPathSegLinetoHorizontalRel(seg.x); break; case 'V': return path.createSVGPathSegLinetoVerticalAbs(seg.y); break; case 'v': return path.createSVGPathSegLinetoVerticalRel(seg.y); break; case 'C': return path.createSVGPathSegCurvetoCubicAbs(seg.x,seg.y,seg.x1,seg.y1,seg.x2,seg.y2); break; case 'c': return path.createSVGPathSegCurvetoCubicRel(seg.x,seg.y,seg.x1,seg.y1,seg.x2,seg.y2); break; case 'S': return path.createSVGPathSegCurvetoCubicSmoothAbs(seg.x,seg.y,seg.x2,seg.y2); break; case 's': return path.createSVGPathSegCurvetoCubicSmoothRel(seg.x,seg.y,seg.x2,seg.y2); break; case 'Q': return path.createSVGPathSegCurvetoQuadraticAbs(seg.x,seg.y,seg.x1,seg.y1); break; case 'q': return path.createSVGPathSegCurvetoQuadraticRel(seg.x,seg.y,seg.x1,seg.y1); break; case 'T': return path.createSVGPathSegCurvetoQuadraticSmoothAbs(seg.x,seg.y); break; case 't': return path.createSVGPathSegCurvetoQuadraticSmoothRel(seg.x,seg.y); break; case 'A': return path.createSVGPathSegArcAbs(seg.x,seg.y,seg.r1,seg.r2,seg.angle,seg.largeArcFlag,seg.sweepFlag); break; case 'a': return path.createSVGPathSegArcRel(seg.x,seg.y,seg.r1,seg.r2,seg.angle,seg.largeArcFlag,seg.sweepFlag); break; case 'z': case 'Z': return path.createSVGPathSegClosePath(); break; }} Finally, you may choose to draw your path to an HTML5 canvas by sampling the SVG path periodically and drawing to the canvas. 
(Note that the SVG path does not need to be displayed for this to happen; you can build an SVG path element entirely in JavaScript and sample it): Demo: http://phrogz.net/svg/progressively-drawing-svg-path.html Relevant code: function startDrawingPath(){ points = []; timer = setInterval(buildPath,1000/drawFPS);}// Assumes that 'orig' is an SVG pathfunction buildPath(){ var nextPoint = points.length * distancePerPoint; var pathLength = orig.getTotalLength(); if (nextPoint <= pathLength){ points.push(orig.getPointAtLength(nextPoint)); redrawCanvas(); } else stopDrawingPath();}function redrawCanvas(){ clearCanvas(); ctx.beginPath(); ctx.moveTo(points[0].x,points[0].y); for (var i=1;i<points.length;i++) ctx.lineTo(points[i].x,points[i].y); ctx.stroke();} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14275249', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1820097/']} | jdg_73950 |
stackexchange | llm_judgeable_groundtruth_similarity | 41151579 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I fail to explicitly call exit for certain function-based Bash scripts then there are additional unexpected executions for some functions. What is causing this? The behavior was first noticed while making a git alias as part of answering another user's question on StackOverflow . That alias was composed of this script ( which runs the function twice instead of once ): #!/usr/bin/env bashgithub(){ echo github; };twitter(){ echo twitter; };facebook(){ echo facebook;};if [[ $(type -t "$1") == "function" ]];then "$1";else echo "There is no defined function for $1";fi; But this slightly modified script executes as expected ( runs the function only once ): #!/usr/bin/env bashgithub(){ echo github; };twitter(){ echo twitter; };facebook(){ echo facebook;};if [[ $(type -t "$1") == "function" ]];then "$1"; exit 0;else echo "There is no defined function for $1"; exit 1;fi; Here is exactly what is happening when I run those scripts via a git alias ( added set command for debugging purposes only ): $ git config --global alias.encrypt-for '!set -evu -o pipefail;github(){ echo github;};twitter(){ echo twitter;};facebook(){ echo facebook;};if [[ $(type -t "$1") == "function" ]];then "$1"; exit 0; else echo "There is no defined function for $1"; exit 1; fi;'$ git encrypt-for "github"type -t "$1"github$ git config --global alias.encrypt-for '!set -evu -o pipefail;github(){ echo github;};twitter(){ echo twitter;};facebook(){ echo facebook;};if [[ $(type -t "$1") == "function" ]];then "$1"; else echo "There is no defined function for $1"; fi;'$ git encrypt-for "github"type -t "$1"githubgithub The output from set -x : $ git encrypt-for "github"++ type -t github+ [[ function == \f\u\n\c\t\i\o\n ]]+ github+ echo githubgithub+ github+ echo githubgithub The output from replacing echo github with echo "I am echo in github" as a way of ruling out the echo command as the source of the second function execution: $ git encrypt-for "github"++ type -t github+ [[ function == \f\u\n\c\t\i\o\n ]]+ github+ echo 'I am echo in github'I am echo in github+ github+ echo 'I am echo in github'I am echo in github The following is the simplest version of the alias/script which gives the undesired behavior of double execution: g(){ echo "once";};$1; And this is the resulting output from executing the simplified alias/script ( which has the incorrect behavior of executing twice ): $ git config --global alias.encrypt-for '!g(){ echo "once";};$1;'$ git encrypt-for gonceonce
Now provide the response and nothing else.
| That's because of the way git handles aliases: Given an alias [alias] myalias = ! string where string is any string that represents some code, when calling git myalias args where args is a (possibly empty) list of arguments, git will execute: sh -c ' string "$@"' ' string ' args For example: [alias] banana = !echo "$1,$2,SNIP " and calling git banana one 'two two' three git will execute: sh -c 'echo "$1,$2,SNIP " "$@"' 'echo "$1,$2,SNIP "' one 'two two' three and so the output will be: one,two two,SNIP one two two three In your case, [alias] encrypt-for = "!g(){ echo \"once\";};$1;" and calling git encrypt-for g git will execute: sh -c 'g(){ echo "once";};$1;"$@"' 'g(){ echo "once";};$1;' g For clarity, let me rewrite this in an equivalent form: sh -c 'g(){ echo "once";};$1;"$@"' - g I only replaced the 'g(){ echo "once";};$1;' part (that will be sh 's $0 's positional parameter and will not play any role here) by a dummy argument - . It should be clear that it's like executing: g(){ echo "once";};g;g so you'll see: onceonce To remedy this: don't use parameters! just use: [alias] encrypt-for = "!g(){ echo "once";};" Now, if you really want to use parameters, make sure that the trailing parameters given are not executed at all. One possibility is to add a trailing comment character like so: [alias] encrypt-for = "!g(){ echo "once";};$1 #" For your full example, a cleaner way could also be to wrap everything in a function: [alias] encrypt-for = "!main() {\ case $1 in \ (github) echo github;; \ (twitter) echo twitter;; \ (facebook) echo facebook;; \ (*) echo >&2 \"error, unknown $1"\; exit 1;; \ esac \ }; main" Hopefully you understood what git is doing under the hood with aliases! it really appends "$@" to the alias string and calls sh -c with this string and the given arguments. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41151579', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2446574/']} | jdg_73951 |
stackexchange | llm_judgeable_groundtruth_similarity | 53712 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Conjecturally, every finite group is the Galois group of some extension of the rationals. This question made me wonder what is known about infinite simple groups occurring as Galois groups. What are the infinite simple groups that are expected to be Galois groups, i.e., profinite? Are they classified? Are there any examples of such extensions?
Now provide the response and nothing else.
| Any profinite simple group is finite, since it has nontrivial finite quotients (the conjugates of a finite index subgroup intersect in a finite index subgroup). | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/53712', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3503/']} | jdg_73952 |
stackexchange | llm_judgeable_groundtruth_similarity | 2682678 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $n=p_1^{2a_1}p_2^{2a_2}...p_n^{2a_n}$ Number of divisors of $n$: $(2a_1+1)(2a_2+1)...(2a_n+1)$ I saw an other similar question, but it is complicated. There should be an more easy answer.
Now provide the response and nothing else.
| It's much more simple than that. For each divisor $d$ of $n$, $\frac nd$ is another divisor of $n$. So the divisors of $n$ come in pairs except if there is a divisor $d$ such that $d=\frac nd$ . But this happens if and only if $n$ is a perfect square ($d=\frac nd\iff n=d^2$). | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2682678', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/503282/']} | jdg_73953 |
stackexchange | llm_judgeable_groundtruth_similarity | 1623480 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I wanted to define an interface which represents a call to a remote service. Now, the call to the remote service generally returns something, but might also include input parameters. Suppose that an implementing class will typically only implement one service method. Given the above information, is the following a poor design (it doesn't quite feel right): public interface IExecutesService<A,B>{ public A executeService(); public A executeService(B inputParameter);} Now, let's say that I implement this interface with a class that executes a remote service with an input parameter: public class ServiceA implements IExecutesService<String,String>{ public String executeService() { //This service call should not be executed by this class throw new IllegalStateException("This method should not be called for this class...blabla"); } public String executeService(String inputParameter) { //execute some service } I have two questions regarding the above: Is the use of a generic interface ( IExecutesService<A,B> ) good in the case where you want to provide subclasses which require different input parameters and return types for the interface methods? How can I do the above better? I.e. I want to group my service executors under a common interface ( IExecutesService ); however, an implementing class will typically only implement one of the methods, and the use of an IllegalStateException feels really ugly. Also, the B type parameter in IExecutesService<A,B> will be redundant for an implementing class that calls a service without any input parameters. It also seems overkill creating two separate interfaces for the two different service calls.
Now provide the response and nothing else.
| Here's one suggestion: public interface Service<T,U> { T executeService(U... args);}public class MyService implements Service<String, Integer> { @Override public String executeService(Integer... args) { // do stuff return null; }} Because of type erasure any class will only be able to implement one of these. This eliminates the redundant method at least. It's not an unreasonable interface that you're proposing but I'm not 100% sure of what value it adds either. You might just want to use the standard Callable interface. It doesn't support arguments but that part of the interface has the least value (imho). | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1623480', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/192585/']} | jdg_73954 |
stackexchange | llm_judgeable_groundtruth_similarity | 17908565 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a button that submit ajax request: <h:commandButton> <f:ajax listener="..." onevent=".." onerror="" render="panel1" /></h:commandButton><h:panelGrid id="panel1" styleClass="${bean.style}> .... I can change CSS if ajax is successful by changing bean.style, but I also wanna change when ajax is sent and when ajax failed. For example, I wanna change the style to .panel-inprogress when user clicks the button, .panel-failed when failed. I understand I can do it in JavaScript such as in onevent and onerror attribute my code, but I wanna know if there is a "JSF" way of doing this.
Now provide the response and nothing else.
| As other answers and comments have pointed out, there's not a standard macro for this, and you can write your own. In my opinion, this is a good case for define-modify-macro , and I'll describe that first. You can also write such a macro manually, using get-setf-expansion , and I'll show an example of that, too. Using define-modify-macro One of the examples on the HyperSpec page for define-modify-macro is appendf : Description: define-modify-macro defines a macro named name to read and write a place. The arguments to the new macro are a place, followed by the arguments that are supplied in lambda-list. Macros defined with define-modify-macro correctly pass the environment parameter to get-setf-expansion. When the macro is invoked, function is applied to the old contents of the place and the lambda-list arguments to obtain the new value, and the place is updated to contain the result. Examples (define-modify-macro appendf (&rest args) append "Append onto list") => APPENDF(setq x '(a b c) y x) => (A B C)(appendf x '(d e f) '(1 2 3)) => (A B C D E F 1 2 3)x => (A B C D E F 1 2 3)y => (A B C) The appendf in the example is reversed from what you're looking for, since the extra arguments are appended as the tail of the place argument. However, we can write the functional version of the desired behavior (it's just append with argument order swapped), and then use define-modify-macro : (defun swapped-append (tail head) (append head tail))(define-modify-macro swapped-appendf (&rest args) swapped-append)(let ((x '(1 2 3)) (y '(4 5 6))) (swapped-appendf x y) x); => (4 5 6 1 2 3) If you don't want to define swapped-append as a function, you can give a lambda -expression to define-modify-macro : (define-modify-macro swapped-appendf (&rest args) (lambda (tail head) (append head tail)))(let ((x '(1 2 3))Β Β Β (y '(4 5 6)))Β (swapped-appendf x y)Β x); => (4 5 6 1 2 3) So, the answer is that, conceptually, (swapped-appendf list list2) expands to (setq list (append list2 list)) . It's still the case that the arguments to swapped-appendf may seem to be in the wrong order. After all, if we defined push using define-modify-macro and cons , the arguments would be in a different order from the standard push : (define-modify-macro new-push (&rest args) (lambda (list item) (cons item list)))(let ((x '(1 2 3))) (new-push x 4) x); => (4 1 2 3) define-modify-macro is a handy tool to know about, and I've found it useful when functional (i.e., non-side-effecting) versions of functions are easy to write and a modifying version is also desired for an API. Using get-setf-expansion new-push 's arguments are list and item , whereas push 's arguments are item and list . I don't think the argument order in swapped-appendf is quite as important, since it's not a standard idiom. However, it is possible to achieve the other order by writing a prependf macro whose implementation uses get-setf-expansion to safely get the Setf Expansion for the place, and to avoid multiple evaluation. (defmacro prependf (list place &environment environment) "Store the value of (append list place) into place." (let ((list-var (gensym (string '#:list-)))) (multiple-value-bind (vars vals store-vars writer-form reader-form) (get-setf-expansion place environment) ;; prependf works only on a single place, so there ;; should be a single store-var. 
This means we don't ;; handle, e.g., (prependf '(1 2 3) (values list1 list2)) (destructuring-bind (store-var) store-vars ;; Evaluate the list form (since its the first argument) and ;; then bind all the temporary variables to the corresponding ;; value forms, and get the initial value of the place. `(let* ((,list-var ,list) ,@(mapcar #'list vars vals) (,store-var ,reader-form)) (prog1 (setq ,store-var (append ,list-var ,store-var)) ,writer-form))))))(let ((x '(1 2 3)) (y '(4 5 6))) (prependf y x) x); => (4 5 6 1 2 3) The use of get-setf-expansion means that this macro works on more complicated places, too: (let ((x (list 1 2 3)) (y (list 4 5 6))) (prependf y (cddr x)) x); => (1 2 4 5 6 3) For educational purposes, it's interesting to see the relevant macroexpansions, and how they avoid multiple evaluations of the forms, and what the writer-form s are that are used to actually set the value. There's a lot of functionality bundled into get-setf-expansion , and some of it is implementation specific: ;; lexical variables just use SETQCL-USER> (pprint (macroexpand-1 '(prependf y x)))(LET* ((#:LIST-885 Y) (#:NEW886 X)) (PROG1 (SETQ #:NEW886 (APPEND #:LIST-885 #:NEW886)) (SETQ X #:NEW886)));; (CDDR X) gets an SBCL internal RPLACDCL-USER> (pprint (macroexpand-1 '(prependf y (cddr x))))(LET* ((#:LIST-882 Y) (#:G883 X) (#:G884 (CDDR #:G883))) (PROG1 (SETQ #:G884 (APPEND #:LIST-882 #:G884)) (SB-KERNEL:%RPLACD (CDR #:G883) #:G884)));; Setting in an array gets another SBCL internal ASET functionCL-USER> (pprint (macroexpand-1 '(prependf y (aref some-array i j))))(LET* ((#:LIST-887 Y) (#:TMP891 SOME-ARRAY) (#:TMP890 I) (#:TMP889 J) (#:NEW888 (AREF #:TMP891 #:TMP890 #:TMP889))) (PROG1 (SETQ #:NEW888 (APPEND #:LIST-887 #:NEW888)) (SB-KERNEL:%ASET #:TMP891 #:TMP890 #:TMP889 #:NEW888))) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17908565', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1270003/']} | jdg_73955 |
stackexchange | llm_judgeable_groundtruth_similarity | 7701772 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Java Servlet life cycle is managed by the servlet container. When first web request comes in, the container will load the Servlet class, calls its init method, then calls its service method to process the web request. It says there can be only one instance of servlet class. The container creates multiple threads and manages these threads to process multiple web requests (this is what I know from my knowledge). But I want to understand, how multiple threads are running and processing multiple simultaneous web requests given that there is only one instance of the servlet class. Can some one shed light on this?
Now provide the response and nothing else.
| An object instance('s methods) can be called simultaneously by more than one thread. This isn't specific for servlets though and is true in general. What happens when this occurs? Each thread will still have their own stack, which means, each thread will have a different copy of local variables to work on. As a result there will be no interference between threads and you won't have to worry about concurrent calls. Only when a shared resource, e.g. instance/class variable is accessed, there can be a problem. Same thing if instance/class variable is directly accessed concurrently. EJBs in contrast do exactly what you seem to suggest. The EJB container makes sure that only one thread enters an EJB instance at a time, and hence an EJB programmer doesn't have to worry about concurrency so long as he/she doesn't break the EJB programming contract. There is no reason why the servlet spec didn't do this. Most likely no body came up with it during the meetings? It does have the advantage though that you can use a more efficient concurrency management than EJB's "one thread per instance". | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7701772', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/574122/']} | jdg_73956 |
stackexchange | llm_judgeable_groundtruth_similarity | 5428304 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In iOS App, how to add Email validation on UITextField?
Now provide the response and nothing else.
| Use NSPredicate and Regex: - (BOOL)validateEmailWithString:(NSString*)email{ NSString *emailRegex = @"[A-Z0-9a-z._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,4}"; NSPredicate *emailTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", emailRegex]; return [emailTest evaluateWithObject:email];} For a bunch of emails separated by a comma: - (NSMutableArray*)validateEmailWithString:(NSString*)emails{ NSMutableArray *validEmails = [[NSMutableArray alloc] init]; NSArray *emailArray = [emails componentsSeparatedByString:@","]; for (NSString *email in emailArray) { NSString *emailRegex = @"[A-Z0-9a-z._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,4}"; NSPredicate *emailTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", emailRegex]; if ([emailTest evaluateWithObject:email]) [validEmails addObject:email]; } return [validEmails autorelease];} Edited Answer: (It also validates extra dots ) - (BOOL)validateEmailWithString:(NSString*)checkString{ BOOL stricterFilter = NO; // Discussion http://blog.logichigh.com/2010/09/02/validating-an-e-mail-address/ NSString *stricterFilterString = @"[A-Z0-9a-z\\._%+-]+@([A-Za-z0-9-]+\\.)+[A-Za-z]{2,4}"; NSString *laxString = @".+@([A-Za-z0-9-]+\\.)+[A-Za-z]{2}[A-Za-z]*"; NSString *emailRegex = stricterFilter ? stricterFilterString : laxString; NSPredicate *emailTest = [NSPredicate predicateWithFormat:@"SELF MATCHES %@", emailRegex]; return [emailTest evaluateWithObject:checkString];} | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/5428304', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/531783/']} | jdg_73957 |
stackexchange | llm_judgeable_groundtruth_similarity | 22120639 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hello all :) Today I am refining my skills on graph theory and data structures. I decided to do a small project in C++ because it's been a while since I've worked in C++. I want to make an adjacency list for a directed graph. In other words, something which looks like: 0-->1-->31-->22-->43-->4--> This would be a directed graph with V0 (vertex 0) having an edge to V1 and V3, V1 having an edge to V2, and V2 having an edge to V4, like this: V0----->V1---->V2---->V4 | | v V3 I know that in order to do this, I will need to create an adjacency list in C++. An adjacency list is basically an array of linked lists . Okay, let's see some pseudo C++ code: #include <stdio>#include <iostream>using namespace std;struct graph{//The graph is essentially an array of the adjList struct. node* List[];};struct adjList{//A simple linked list which can contain an int at each node in the list.};struct node {int vertex;node* next;};int main() {//insert cool graph theory sorting algorithm here} As you can tell, this pseudocode is currently far from the mark. And that is what i wanted some help with -- pointers and structs in C++ have never been my strong suit. First of all, this takes care of the vertices that a vertex points to -- but what about the vertex itself? How can I keep track of that vertex? When I loop over the array, it will do me no good to only know what vertices are being pointed to, rather than also knowing what points to them. The first element in each list should probably be that vertex, and then the elements after that are the vertices it points to. But then, how can I access this first element of the list in my main program? (sorry if this is convoluted or confusing, I would happy to rephrase). I would like to be able to loop over this adjacency list to do some cool things with graphs. For example, to implement some graph theory algorithms (sorts, shortest paths, etc) using the adjacency list representation. (Also, I had a question about the adjacency list. What is different than just using a list of arrays? Why can't I just have a list with an array at each element in the list?)
Now provide the response and nothing else.
| You may use a vector in node, as a adjacency list. class node { int value; vector<node*> neighbors; }; If the graph is known at compile time, you can use array , but it's "a little bit" harder. If you know just size of graph (at compile time) you can do something like that. template<unsigned int N>class graph { array<node, N> nodes; } To add a neighbor, you doing something like that (do not forget numbering from zero): nodes[i].neighbors.push_back(nodes+j); //or &nodes[j] Of course, you can do no-pointer adjacency list and work "above" a table. Than you have vector<int> in node and you pushing number of neighbour. With both representation of the graph, you can realize all algorithms which use adjacency list. And finally, I might add. Some use a list instead of a vector, because the removal is in O(1) time. Mistake. For most algorithms, the order in the adjacency list is not important. So you can erase any element from vector in O(1) time. Just swap it with last element, pop_back is O(1) complexity. Something like that: if(i != number_of_last_element_in_list) //neighbors.size() - 1 swap(neighbors[i], neighbor.back());neighbors.pop_back(); Specific example (you have vector in node, C++11 (!)): //creation of nodes, as previouslyconstexpr unsigned int N = 3;array<node,N> nodes; //or array<node, 3> nodes;//creating edge (adding neighbors), in the constructor, or somewherenodes[0].neighbors = {&nodes[1]};nodes[1].neighbors = {&nodes[0], &nodes[1]};//adding runtime, i,j from user, eg. i = 2, j = 0nodes[i].neighbors.push_back(&nodes[j]); //nodes[2].neighbors = {&nodes[0]}; I believe it's clear. From 0 you can go to 1 , from 1 to 0 and to itself, and (as in eg.) from 2 to 0 . It's directed graph. If you want undirected, you should add to both nodes neighbourβs addresses. You can use numbers instead of pointers. vector<unsigned int> in class node and pushing back numbers, no addresses. As we know, you do not need to use pointers. Here is an example of it, too. When the number of vertexes may change, you can use vector of nodes ( vector<node> ) instead array, and just resizing . The rest remains unchanged. For example: vector<node> nodes(n); //you have n nodesnodes.emplace_back(); //you added new node, or .resize(n+1)//here is place to runtime graph generate//as previously, i,j from user, but now you have 'vector<unsigned int>' in nodenodes[i].neighbors.push_back(j); But you can't erase a node, this breaches numbering! If you want to erase something, you should use list ( list<node*> ) of pointers. Otherwise you must keep non-existent vertexes. Here, the order matters! Regarding the line nodes.emplace_back(); //adding node , It is safe with graph without pointers. If you want use pointers, you predominately shouldn't change size of graph.You can accidentally change address of some nodes, while adding vertex, when vector will be copied to new place (out of space). One way to deal with it is using reserve , although you have to know maximal size of graph! But in fact I encourage you not to use vector to keep vertexes, when you are using pointers. If you don't know implementation, more safe could be self memory management (smart pointers eg. shared_ptr or just new ). node* const graph = new node[size]; //<-- It is your graph.//Here no address change accidentally. Using vector as adjacency list is always fine ! There's no chance to change node's address. 
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22120639', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1368884/']} | jdg_73958 |
stackexchange | llm_judgeable_groundtruth_similarity | 140183 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The age old question of comparing sums of squares (SS) between programs has reared its ugly head again. I am trying to replicate output in SPSS, that was computed using Type 3 Sums of Squares, in R. I understand that with multiple regressions, there are several ways to get Type 3 SS in R (to match Type 3 output from SPSS). However, I am running a mixed model using aov (which uses Type 1 SS) and even when I try all the "usual" fixes," my estimates don't match the Type 3 SS output from SPSS. First of all, when I run the SPSS syntax using "/METHOD=SSTYPE(1)" the results match those I get using this code: mymodel<-aov(data=longdat, DV ~ 1 + Task + Cue + Compatibility + Cue:Task + Compatibility:Task + Cue:Compatibility + Cue:Compatibility:Task + Error(subject/Cue/Compatibility/Cue*Compatibility)) summary(mymodel) So I know the analyses are the same when they use Type 1 SS. However, when I use: options(contrasts = c("contr.sum","contr.poly")) tt<-lm(DV ~ 1 + Task + Cue + Compatibility + Cue:Task + Compatibility:Task + Cue:Compatibility + Cue:Compatibility:Task + 1/subject/Cue/Compatibility /(Cue*Compatibility), data=longdat) drop1(tt, ~., test="F") The results do not match the SPSS Type 3 output. In attempts to get matching output, I have also tried the Anova function (which can give Type 3 SS) Anova(mymodel, type=3, test.statistic="F") but I get this error "Error in terms.formula(formula, data = data) : 'data' argument is of the wrong type." I have also tried using lmer . Can someone help me get Type 3 Sums of Squares for a mixed model in R? Thank you!
Now provide the response and nothing else.
| If you have two competing linear models with different regressors, say: m1 <- lm(y ~ x1 + x2, data = ...)m2 <- lm(y ~ z1 + z2, data = ...) then the encompassing model is m12 <- lm(y ~ x1 + x2 + z1 + z2, data = ...) and the encompassing test compares anova(m1, m12)anova(m2, m12) The "hope" would be that one of the models ( m1 , m2 ) is significantly worse than the encompassing model m12 while the other is not. The one that is not significantly worse would be preferable then. However, it may happen that both models are significantly worse than m12 in which case neither model alone is entirely convincing. If you want a convenience function to carry out this test in R, you can use the encomptest() function from the lmtest package: encomptest(m1, m2) The package also provides other tests for non-nested model comparisons, e.g., jtest() and coxtest() . | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/140183', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/39568/']} | jdg_73959 |
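For reference, the comparison that anova(m1, m12) performs is the usual nested-model F-test; written out (my notation, not taken from the original), "significantly worse than the encompassing model" means a large value of $$F=\frac{(\mathrm{RSS}_{1}-\mathrm{RSS}_{12})/q}{\mathrm{RSS}_{12}/(n-k_{12})},$$ where $\mathrm{RSS}_1$ and $\mathrm{RSS}_{12}$ are the residual sums of squares of m1 and of the encompassing model, $q$ is the number of regressors added in going from m1 to m12 (here the z's), $n$ is the sample size and $k_{12}$ the number of coefficients in m12; under the null that the added regressors contribute nothing, $F\sim F(q,\,n-k_{12})$.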
stackexchange | llm_judgeable_groundtruth_similarity | 1362296 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Without calculating the characteristic polynomial, can we say that $0$ is eigenvalue of the following matrix? $$\begin{bmatrix}0&1&0&1\\-3&0&0&0\\0&1&0&1\\0&0&0&0\end{bmatrix}$$
Now provide the response and nothing else.
| Yes. This matrix has a row of zeros and therefore has determinant zero. Since the determinant is the product of the eigenvalues it follows that zero is an eigenvalue of this matrix. Alternatively we can note that $A\vec x=\vec 0$ where $$\vec x=\begin{bmatrix}0\\1\\0\\-1\end{bmatrix}$$ That is, $A\vec x=0\cdot\vec x$ so $0$ is indeed an eigenvalue. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1362296', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/254322/']} | jdg_73960 |
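Spelling out the matrix-vector product behind that last claim: $$A\vec x=\begin{bmatrix}0&1&0&1\\-3&0&0&0\\0&1&0&1\\0&0&0&0\end{bmatrix}\begin{bmatrix}0\\1\\0\\-1\end{bmatrix}=\begin{bmatrix}1\cdot1+1\cdot(-1)\\-3\cdot0\\1\cdot1+1\cdot(-1)\\0\end{bmatrix}=\begin{bmatrix}0\\0\\0\\0\end{bmatrix},$$ so $A\vec x=0\cdot\vec x$ with $\vec x\neq\vec 0$, which is exactly what it means for $0$ to be an eigenvalue.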
stackexchange | llm_judgeable_groundtruth_similarity | 144005 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to calculate the type 1 error of a bootstrap hypothesis test procedure (won't go into the hypothesis test here). I ran the test on 1000 simulated datasets (simulated under the null hypothesis of no effect). The smallest p value I got was 0.45, giving my procedure a type 1 error rate of <0.001. Is this a contradiction? Shouldn't I find p<0.05 in about 5% of the datasets?
Now provide the response and nothing else.
| We can be more precise: if your test is performing as expected, then the probability that all 1,000 samples from a population where the null hypothesis is true give a $p$-value greater than .05 is $.95^{1000} \approx 5.3 \cdot 10^{-23}$. That chance is not zero, but pretty small (that is an understatement...). If this were happening in a project of mine, my first step would be to assume that I made an error in implementing this simulation. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/144005', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/72279/']} | jdg_73961 |
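For the record, the arithmetic behind that figure is $$0.95^{1000}=e^{1000\ln 0.95}\approx e^{-51.3}\approx 5.3\cdot10^{-23},$$ so a smallest $p$-value of $0.45$ across 1,000 null simulations is essentially impossible for a correctly calibrated test, which is why an implementation error is the natural first suspect.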
stackexchange | llm_judgeable_groundtruth_similarity | 8821461 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Scala array of strings: val names:Array[String] = something.map(...) I need to call an Android (Java) method that accepts a Collection: public void addAll (Collection<? extends T> collection) How do I convert the Array to a Collection?
Now provide the response and nothing else.
| Java approach: java.util.Arrays.asList(names: _*) Scala approach: import collection.JavaConversions._; val namesColl: java.util.Collection[String] = names.toSeq In the latter approach the names array is first converted to a Seq[String], and then an implicit conversion located in JavaConversions figures out that a Java collection is needed, so it applies the necessary transformation transparently. Don't worry, it runs in constant time. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8821461', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/131640/']} | jdg_73962 |
stackexchange | llm_judgeable_groundtruth_similarity | 724218 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
From my understanding, inertia is typically taken as an axiom rather than something that can be explained by some deeper phenomenon. However, it's also my understanding that quantum mechanics must reduce to classical, Newtonian mechanics in the macroscopic limit. By inertia, I mean the resistance to changes in velocity -- the fact that more massive objects (or particles, let's say) accelerate more slowly given the same force. What is the quantum mechanical mechanism that, in its limit, leads to Newtonian inertia? Is there some concept of axiomatic inertia that applies to the quantum mechanical equations and explains Newtonian inertia, even if it remains a fundamental assumption of quantum theory?
Now provide the response and nothing else.
| I'd go with the Ehrenfest theorem, for the time derivative of the expectation value of an operator $A$ : $$\frac d{dt}\langle A \rangle = \frac 1 {i\hbar}\langle[A, H]\rangle +\big\langle \frac{\partial A}{\partial t} \big\rangle$$ which gives: $$ m \frac d{dt}\langle x \rangle=\langle p \rangle$$ and $$ \frac d{dt}\langle p \rangle=-\big\langle V'(x) \big\rangle$$ Also note that force is: $$ F=-\big\langle V'(x) \big\rangle $$ Combining it all and you get: $$ \frac{d^2}{dt^2} \langle x \rangle = \frac{F}{m} $$ or: mass suppresses changes in velocity at fixed force. Aka: inertia . | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/724218', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/311904/']} | jdg_73963 |
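To spell out the commutators behind those two Ehrenfest relations for a Hamiltonian $H=\frac{p^2}{2m}+V(x)$ (standard steps, using $[x,p]=i\hbar$): $$[x,H]=\frac{1}{2m}[x,p^2]=\frac{i\hbar p}{m}\;\Rightarrow\;\frac{d}{dt}\langle x\rangle=\frac{1}{i\hbar}\langle[x,H]\rangle=\frac{\langle p\rangle}{m},$$ $$[p,H]=[p,V(x)]=-i\hbar V'(x)\;\Rightarrow\;\frac{d}{dt}\langle p\rangle=\frac{1}{i\hbar}\langle[p,H]\rangle=-\langle V'(x)\rangle=F.$$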
stackexchange | llm_judgeable_groundtruth_similarity | 3582798 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was answering some math exercises, and by accident, I 'discovered' the following equation: $$\int_{0}^{\infty} e^{-x}x^kdx = k!$$ for instance, if $k=3$ , we have the following (using an online integral calculator, i.e. Wolfram): $$\int_{0}^{\infty} e^{-x}x^3dx = 3! = 6$$ However, I could not figure out how the equation makes sense... Is there a way to analytically transform the integral to $k!$ ?
Now provide the response and nothing else.
| $$I_0=\int_{0}^{\infty} e^{-at}\, dt =\frac{1}{a}~~~(1)$$ Differentiating both sides with respect to $a$ (the minus signs produced on the two sides cancel) gives $$I_1=\int_{0}^{\infty}t e^{-at}\, dt =\frac{1}{a^2}$$ Differentiating again gives $$I_2=\int_{0}^{\infty} t^2 e^{-at}\, dt =\frac{2}{a^3}$$ Finally, differentiating with respect to $a$ a total of $k$ times gives $$I_k=\int_{0}^{\infty} t^k e^{-at}\, dt = \frac{k!}{a^{k+1}}$$ Put $a=1$ to get $I_k=k!$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3582798', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/480470/']} | jdg_73964 |
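An alternative route to the same identity is integration by parts: $$I_k=\int_0^\infty x^k e^{-x}\,dx=\Bigl[-x^k e^{-x}\Bigr]_0^\infty+k\int_0^\infty x^{k-1}e^{-x}\,dx=k\,I_{k-1},$$ and since $I_0=\int_0^\infty e^{-x}\,dx=1$, induction gives $I_k=k!$; this integral is exactly the Gamma function, $\Gamma(k+1)=k!$.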
stackexchange | llm_judgeable_groundtruth_similarity | 15626528 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have created very quick example of alert triggering twice on backbone View. http://jsfiddle.net/feronovak/8svqX/ This is does nothing special, it really is for me to understand how to call separate Views correctly, so they dont trigger previous click method. Right now I generate two Views, second View overwrites form in #boxSearch. When I click button Search it also generates alert doSearch SearchLocation . I was expecting to see only alert doSearch SearchLatitude . What I have done wrong? var Geo = { Views: {}, Templates: {}};Geo.Templates.SearchLocation = _.template( $("#tpl-search-location").html());Geo.Templates.SearchLatitude = _.template( $("#tpl-search-latitude").html());Geo.Views.SearchLocation = Backbone.View.extend({ initialize: function() { this.render(); }, el: $("#boxSearch"), template: Geo.Templates.SearchLocation, render: function() { $(this.el).html(this.template); }, events: { "click input[type=button]": "doSearch" }, doSearch: function(e) { e.preventDefault(); alert('doSearch SearchLocation'); }});Geo.Views.SearchLatitude = Backbone.View.extend({ initialize: function() { this.render(); }, el: $("#boxSearch"), template: Geo.Templates.SearchLatitude, render: function() { $(this.el).html(this.template); }, events: { "click input[type=button]": "doSearch" }, doSearch: function(e) { e.preventDefault(); alert('doSearch SearchLatitude'); }});$(document).ready(function(e) { var searchLocation = new Geo.Views.SearchLocation(); var searchLatitude = new Geo.Views.SearchLatitude();});
Now provide the response and nothing else.
| This is the "ghost view" problem often seen with Backbone apps: You attach a view to a DOM element You attach another view to the same DOM element You trigger an event on the DOM element (or worse, trigger an event somewhere else that both views listen to) Both views fire their event handlers, often wreaking havoc. As @machineghost points out, there's nothing in your code to unbind the first view, so both views are attached to #boxSearch , and both views will respond to the click event. There are several ways around this: You could call e.stopPropagation() ( docs ) in the event handlers (I think this is what you're trying to do with e.preventDefault() ). This will prevent the event from bubbling up once it's triggered, so the last defined view would catch it. This works, but note that you still have the two views hanging around, and the outdated ghost view might have other side-effects. You could have each view render its own DOM element, with a class .boxSearch , and then call view.remove() on the one you no longer want. This is probably the cleanest option, but it means creating most of your DOM on the fly, which is a bit more expensive and hard to manage than coding it in HTML. You can give each view a cleanup method, e.g. .clear() or .exit() , which could call view.undelegateEvents() and view.stopListening() . Then make sure you call this method when you are done with a view. If you're careful about making sure it gets called and making sure it clears everything that needs to be cleared, then it works well. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15626528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/453771/']} | jdg_73965 |
stackexchange | llm_judgeable_groundtruth_similarity | 31109581 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have timer function, with progress bar. It works well, but I want make a smooth progress bar animation, with 60 FPS function started(duration) { var TotalSeconds = 40; var documentWidth = $(document).width(); var start = Date.now(); var intervalSetted = null; function timer() { var diff = duration - (((Date.now() - start) / 1000) | 0); var seconds = (diff % 60) | 0; seconds = seconds < 10 ? "0" + seconds : seconds; $('#timer').html("00:" + seconds); var progresBarWidth = (seconds * documentWidth / TotalSeconds); $('#progress').css({ width: progresBarWidth + 'px' }); if (diff <= 0) { clearInterval(intervalSetted); } } timer(); intervalSetted = setInterval(timer, 1000);}started(40); How can I achieve 60FPS animation? Here is demo of my code: JSFiddle !
Now provide the response and nothing else.
| You could make use of CSS3 animations.I wrote some examplecode which shows a progressbar which counts down using a duration you can choose. You can also have a callback when the animation is finished. CSS3 animations are hardware-accelerated, so you'll have the smoothest experience when you make use of it. /* * Creates a progressbar. * @param id the id of the div we want to transform in a progressbar * @param duration the duration of the timer example: '10s' * @param callback, optional function which is called when the progressbar reaches 0. */function createProgressbar(id, duration, callback) { // We select the div that we want to turn into a progressbar var progressbar = document.getElementById(id); progressbar.className = 'progressbar'; // We create the div that changes width to show progress var progressbarinner = document.createElement('div'); progressbarinner.className = 'inner'; // Now we set the animation parameters progressbarinner.style.animationDuration = duration; // Eventually couple a callback if (typeof(callback) === 'function') { progressbarinner.addEventListener('animationend', callback); } // Append the progressbar to the main progressbardiv progressbar.appendChild(progressbarinner); // When everything is set up we start the animation progressbarinner.style.animationPlayState = 'running';}addEventListener('load', function() { createProgressbar('progressbar1', '40s'); createProgressbar('progressbar2', '30s'); createProgressbar('progressbar3', '20s', function() { alert('20s progressbar is finished!'); }); createProgressbar('progressbar4', '10s', function() { alert('10s progressbar is finished!'); });}); .progressbar { width: 80%; margin: 25px auto; border: solid 1px #000;}.progressbar .inner { height: 15px; animation: progressbar-countdown; /* Placeholder, this will be updated using javascript */ animation-duration: 40s; /* We stop in the end */ animation-iteration-count: 1; /* Stay on pause when the animation is finished finished */ animation-fill-mode: forwards; /* We start paused, we start the animation using javascript */ animation-play-state: paused; /* We want a linear animation, ease-out is standard */ animation-timing-function: linear;}@keyframes progressbar-countdown { 0% { width: 100%; background: #0F0; } 100% { width: 0%; background: #F00; }} <div id='progressbar1'></div><div id='progressbar2'></div><div id='progressbar3'></div><div id='progressbar4'></div> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31109581', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5060004/']} | jdg_73966 |
stackexchange | llm_judgeable_groundtruth_similarity | 9100969 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to find a way to query mongodb in a case-insensitive way using PHP, because currently, when checking a user, "Username" and "username" can both be different usernames... $cursor = $collection->findOne(array('username' => /{$_POST['value']}/i));$lowerInput = strtolower($_POST['value']);$username = strtolower($cursor['username']);if($lowerInput == $username){ echo "That username appears to be in our database!";} I tried this, but the cursor only looks for a case-sensitive match, so it will only lowercase the cursor value if it has one.
Now provide the response and nothing else.
| The PHP Mongo driver has an internal regex object, MongoRegex: $cursor = $collection->findOne(array('username' => new MongoRegex("/{$_POST['value']}/i"))); By the way, I strongly recommend validating/escaping the $_POST value, and anchoring the regex so it matches the whole username (nothing before or after): new MongoRegex('/^' . $securevalue . '$/i') Edit: my answer wasn't precise: the starting anchor lets Mongo use an index on this query, if one is available. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9100969', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1055838/']} | jdg_73967 |
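For comparison, the same anchored, escaped, case-insensitive lookup written in Python with pymongo (the database/collection names and the submitted value below are placeholders; `$regex`/`$options` and re.escape are the standard pieces):

```python
import re
from pymongo import MongoClient

client = MongoClient()                    # assumes a local mongod; adjust the URI as needed
users = client.mydb.users                 # placeholder database/collection names

value = "Username"                        # placeholder for the submitted form value
pattern = "^" + re.escape(value) + "$"    # escape user input and anchor to match the whole username

doc = users.find_one({"username": {"$regex": pattern, "$options": "i"}})
if doc is not None:
    print("That username appears to be in our database!")
```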
stackexchange | llm_judgeable_groundtruth_similarity | 35276 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Although this may be a basic question but I'm still struggling with it. In this schematic, two zener diodes D1 and D2 are connected back-to back across relay coil L1. The BVds = -30V for Q1. Can I use 15V(Vz = 15V) zeners for D1 and D2 instead of 5.1 V zeners? Will the relay coil or contacts can get damaged during turn-off of relay? If required, I'm using this relay (5V DC Standard Coil). Also, to reduce steady state current consumption of relay coil, I wanna use the RC ckt shown aside in schematic. As soon as Q1 turns-on, uncharged capacitor temporarily appears as a dead short, causing maximum current to flow through the relay coil and closing the relay contacts without chatter.As the capacitor charges, however, both the voltage across and the current through the relay coil decline. The circuit reaches steady state when the capacitor has charged to the point that all the current through the relay coil is moving through R1. The contacts will still remain closed until the drive voltage is removed. Which is the best place to put this RC ckt - section marked 'A' or 'B' in schematic. Will it make any difference? Section-B seems to me the best choice, as when Q1 turns-off, capacitor C1 can discharge via R1 through ground. How will C1 discharge when instead I place RC ckt at section-A? Am I missing something here?Does putting this RC ckt has any side-effects? Any better solution? Please correct me if I'm wrong or missing something? UPDATE1 on 2012-07-09 : Say in above schematic I have 6V DC Standard coil(see datasheet above), 48.5 ohm relay. And take C1 = 10uF say. Assume that R1C1 ckt is placed at section-A in schematic above. The power supply is at +5V. For a Drop of 3V(Hold-on voltage) across relay coil, the current must be 62mA approx. through coil.So drop across R1 at steady state is 2V. For a current of 62mA through relay coil at steady state, R1 must be 32.33 ohm. And charge on C1 is 2V x 10uF = 20uC, at steady state. Now in this data sheet, the operate time is given to be 15ms worst case.From above data we have RC = 48.5ohm x 10uF = 0.485 ms. So, as soon as Q1 is turned on, the C1 will be almost fully charged in 2.425 ms. Now how do I know that this duration of 2.425 ms is sufficient for relay to make its contacts close? Similarly, as soon as Q1 is turned-off, due to back emf generated and clamped to 3.3V by zener D2(Vz = 3.3V) plus diode D1 drop of 0.7V, the voltage across C1 will be -2V + (-3.3V - 0.7V) = -2V. But charge on C1 is still 20uC. Since capacitance is constant, so charge must decrease as voltage across C1 decreased from +2V to -2V instantly after turning off Q1. Isn't it violation of Q = CV? At this point, the current that is flowing through relay coil due to back emf will be 62mA in same direction as was before turning-off Q1. Will this 62mA current will charge or discharge the C1? The voltage across C1 is 6V as soon as Q1 is turned off right?I didn't get how currents will flow b/w R1, C1, D1, D2 and relay coil as soon as Q1 is turned-off. Can someone throw light on these issues? UPDATE2 on 2012-07-14 : "Current in an inductor will not change instantaneously" - While there is a flyback diode D1( Say, D1 is not zener but a small-signal or a schottky diode , and zener D2 is removed in the schematic above), as soon as Q1 is turned-off, will there not even be a current spike(not even for few usecs)? 
I'm asking this because, if there is a current spike, then the amount of current that will flow during this spike (say > 500mA in this case) might damage the flyback diode if I had selected a diode with a max peak forward current rating of only around 200mA. 62mA is the amount of current flowing through the relay coil when Q1 is on. So, will the current through the relay coil never exceed 62mA - not even for a moment (say for some usecs) after Q1 is turned off?
Now provide the response and nothing else.
| You can place the RC either at the B side or the A side. When components are placed in series the order of them doesn't matter for the working. About the diodes. When you switch off the relay it will cause a (possibly large) negative voltage on the FET's drain, and a flyback diode is used to limit that voltage to a 0.7 V diode drop. So the diode(s) don't serve to protect the coil, but the FET. Using the zeners will allow this voltage to go to -5.7 V or -15.7 V if you'd use the 15 V zeners. There's no reason for taking risks here, even if the FET can handle -30 V. So I would just use a rectifier or signal diode, or even better a Schottky diode. edit re your comment You can indeed use a zener (combined with a common diode, D1 doesn't have to be a zener) to decrease switch-off time , and Tyco also mentions it in this application note , but I don't read it as if they insist on it. The scope images in the first link show a dramatic decrease in switch-off time, but that measures the time between deactivating the relay and the first opening of the contact, not the time between first opening and the return to the rest position, which will change much less. edit re the 6 V relay and the RC circuit Like I says in this answer you can operate a relay below its rated voltage, and since its operate voltage is 4.2 V the 6 V version of your relay can also be used at 5 V. If you use a series resistor not higher than 9 Ξ© you'll have that 4.2 V, and then you don't need the capacitor (keep an eye on the tolerance for the 5 V!). If you want to go lower you're on your own; the datasheet doesn't give a must hold voltage. But let's say this would be 3 V. Then you can use a series resistor of 32 Ξ© and you'll need the capacitor to get the relay activated. Operate time is maximum 15 ms (which is long), so as the capacitor charges the relay voltage shouldn't go below 4.2 V until 15 ms after switching on. Now we have to calculate the RC time for that. R is the parallel of the relay's coil resistance and the series resistance (that's ThΓ©venin's fault), so that's 19.3 Ξ©. Then \$ 3 V + 2 V \cdot e^{\dfrac{- 0.015 ms}{19.3 \Omega \text{ C}}} = 4.2 V \$ Solving for \$\text{C}\$ gives us 1500 Β΅F minimum. Re switching off: You can't violate Q = CV, it's the Law. Your clamping voltage is 3.3 V + 0.7 V = 4 V. That means that when you switch the FET off the low side of the capacitor momentarily will be pulled to -4 V, and quickly rise again to 0 V. The high side is 2 V higher, and will simply follow that 4 V drop while the capacitor discharges through the parallel resistor. The capacitor won't even notice the drop. The discharge time constant is 1500 Β΅F \$\times\$ 32 Ξ© = 48 ms, then the capacitor will discharge to 20 mV (1% of its initial value) in 220 ms. The 62 mA won't charge nor discharge the capacitor. We often apply Kirchhoff's Current Law (KCL) to nodes, but it also applies to regions: Draw a boundary around C1 and R1, and you'll see there's only one path to the outer world since the way to the FET is cut off. Since the total current has to be zero there can't be any current through that unique connection. The coil has to take care of the 62 mA on its own, and it does so by using the loop formed by the zeners. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/35276', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/10654/']} | jdg_73968 |
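Spelling out the algebra behind those two numbers (with the 15 ms operate time written as 0.015 s): $$2\,\mathrm V\cdot e^{-0.015\,\mathrm s/(19.3\,\Omega\cdot C)}=4.2\,\mathrm V-3\,\mathrm V\;\Rightarrow\;C=\frac{0.015\,\mathrm s}{19.3\,\Omega\cdot\ln(2/1.2)}\approx1.5\times10^{-3}\,\mathrm F=1500\,\mu\mathrm F,$$ and on switch-off the discharge time constant is $\tau=32\,\Omega\times1500\,\mu\mathrm F\approx48\,\mathrm{ms}$, so the capacitor is down to 1% of its initial voltage after $\tau\ln100\approx220\,\mathrm{ms}$.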
stackexchange | llm_judgeable_groundtruth_similarity | 2721768 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to write a regular expression in Javascript to match a name field, where the only allowed values are letters, apostrophes and hyphens. For example, the following names should be matched: jhon'savat-arJosh Could someone please help me construct such a regex?
Now provide the response and nothing else.
| Yes. ^[a-zA-Z'-]+$ Here, ^ means start of the string, and $ means end of the string. [β¦] is a character class which anything inside it will be matched. x+ means the pattern before it can be repeated once or more. Inside the character class, a-z and A-Z are the lower and upper case alphabets, ' is the apostrophe, and - is the hyphen. The hyphen must appear at the beginning or the end to avoid confusion with the range separator as in a-z . Note that this class won't match international characters e.g. Γ€. You have to include them separately e.g. ^[-'a-zA-ZΓ-ΓΓ-ΓΆΓΈ-ΕΏ]+$ | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2721768', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/326401/']} | jdg_73969 |
stackexchange | llm_judgeable_groundtruth_similarity | 4820591 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to have the SQL Server PowerShell extensions available to me whenever I start PowerShell by loading the snap-ins in my profile.ps1 script. I found an article here with a script example that shows how to do this, and this works fine on my 32-bit Windows XP box. Unfortunately, on my 64-bit Windows 7 machine, this blows up. If I try to launch this script with the 64-bit PowerShell, I get: Add-PSSnapin : No snap-ins have been registered for Windows PowerShell version 2.At C:\Users\xxxx\Documents\WindowsPowerShell\profile.ps1:84 char:13+ Add-PSSnapin <<<< SqlServerCmdletSnapin100+ CategoryInfo : InvalidArgument: (SqlServerCmdletSnapin100:String[Add-PSSnapin], PSArgumentException+ FullyQualifiedErrorId : AddPSSnapInRead,Microsoft.PowerShell.Commands.AddPSSnapinCommand If I run this instead in a 32-bit PowerShell, I get: Get-ItemProperty : Cannot find path 'HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds \Microsoft.SqlServer.Management.PowerShell.sqlps' because it does not exist.At C:\Users\xxxx\Documents\WindowsPowerShell\profile.ps1:39 char:29+ $item = Get-ItemProperty <<<< $sqlpsreg+ CategoryInfo : ObjectNotFound: (HKLM:\SOFTWARE\...owerShell.sqlps:String) [Get-ItemProperty], ItemNotFoundException+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetItemPropertyCommand I'd like to be able to run this in a 64-bit PowerShell if possible. To this end, I tracked down what I thought was the Powershell extension dlls and in a 64-bit Administrator elevated PowerShell I ran: cd "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn"installutil Microsoft.SqlServer.Management.PSProvider.dllinstallutil Microsoft.SqlServer.Management.PSSnapins.dll No dice. Although installutil seemed to indicate success, I still get the "No snap-ins have been registered for Windows PowerShell version 2" error message when I run the script. Anyone have any suggestions as to where I go from here?
Now provide the response and nothing else.
| I've used this script without issue on x64 machines. The problem with the x86 invocation is that the script looks for registry keys which on an x64 instance are only accessible from x64 PowerShell. For the x64 invocation you could try registering the snapins since that is the error message you're receiving. Run as administrator... Change this: cd $sqlpsPathAdd-PSSnapin SqlServerCmdletSnapin100Add-PSSnapin SqlServerProviderSnapin100 to this: cd $sqlpsPath$framework=$([System.Runtime.InteropServices.RuntimeEnvironment]::GetRuntimeDirectory())Set-Alias installutil "$($framework)installutil.exe"installutil Microsoft.SqlServer.Management.PSSnapins.dllinstallutil Microsoft.SqlServer.Management.PSProvider.dllAdd-PSSnapin SqlServerCmdletSnapin100Add-PSSnapin SqlServerProviderSnapin100 An even better solution is not use add-pssnapin instead turn sqlps into a module. I have blog post here: http://sev17.com/2010/07/10/making-a-sqlps-module Update for SQL Server 2012 - now ships a sqlps module you can install instead of the above blog: http://www.microsoft.com/en-us/download/details.aspx?id=35580 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4820591', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/241023/']} | jdg_73970 |
stackexchange | llm_judgeable_groundtruth_similarity | 431556 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I can prove the following result. Theorem 1. Let $f_n:\mathbb{R}^n\to \mathbb{R}$ be a sequence of convex functionsthat converges almost everywhere to a function $f:\mathbb{R}^n\to\mathbb{R}$ .Then $f$ is convex in thesense that there is a convex function $F:\mathbb{R}^n\to\mathbb{R}$ such that $F=f$ a.e. I am sure, it must be known. Question 1. Where can I find a proof? Edit. I added one more question (see below) and the answers published back in October 2022 apply to Question 1. Theorem 1 implies the following fact that I could not find in any book: Theorem 2. If $f\in W^{2,1}_{\rm loc}(\mathbb{R}^n)$ (Sobolev space) satisfies $v^T D^2f(x)v\geq 0$ for almost all $x$ and all $v$ , then $f$ is convex. Indeed, if $f_\epsilon$ is an approximation by convolution (with a positive mollifier), then $v^T D^2f_\epsilon(x)v\geq 0$ and hence $f_\epsilon$ is convex since it is smooth. Now $f_\epsilon\to f$ a.e. along with Theorem 1 implies that $f$ is convex. Question 2. Has Theorem 2 been written somewhere? This is a simple result, but I am quite surprised I could not find it in any textbook.
Now provide the response and nothing else.
| It follows from Theorem 10.8 in R. Tyrrell Rockafellar. Convex analysis. Princeton Mathematical Series, No.28. Princeton University Press, Princeton, N.J., 1970. This theorem essentially says the following: if $f_n$ are convex functions on an open domain $\Omega\subset\mathbb{R}^n$ that converge pointwise (to a finite value) on a dense subset of $\Omega$ , then the limit exists for every point of $\Omega$ , this limit is a convex function, and the convergence is uniform on every compact set inside $\Omega$ . | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/431556', 'https://mathoverflow.net', 'https://mathoverflow.net/users/121665/']} | jdg_73971 |
stackexchange | llm_judgeable_groundtruth_similarity | 2206378 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a multiline string which is delimited by a set of different delimiters: (Text1)(DelimiterA)(Text2)(DelimiterC)(Text3)(DelimiterB)(Text4) I can split this string into its parts, using String.split , but it seems that I can't get the actual string, which matched the delimiter regex. In other words, this is what I get: Text1 Text2 Text3 Text4 This is what I want Text1 DelimiterA Text2 DelimiterC Text3 DelimiterB Text4 Is there any JDK way to split the string using a delimiter regex but also keep the delimiters?
Now provide the response and nothing else.
| You can use lookahead and lookbehind, which are features of regular expressions. System.out.println(Arrays.toString("a;b;c;d".split("(?<=;)")));System.out.println(Arrays.toString("a;b;c;d".split("(?=;)")));System.out.println(Arrays.toString("a;b;c;d".split("((?<=;)|(?=;))"))); And you will get: [a;, b;, c;, d][a, ;b, ;c, ;d][a, ;, b, ;, c, ;, d] The last one is what you want. ((?<=;)|(?=;)) equals to select an empty character before ; or after ; . EDIT: Fabian Steeg's comments on readability is valid. Readability is always a problem with regular expressions. One thing I do to make regular expressions more readable is to create a variable, the name of which represents what the regular expression does. You can even put placeholders (e.g. %1$s ) and use Java's String.format to replace the placeholders with the actual string you need to use; for example: static public final String WITH_DELIMITER = "((?<=%1$s)|(?=%1$s))";public void someMethod() { final String[] aEach = "a;b;c;d".split(String.format(WITH_DELIMITER, ";")); ...} | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/2206378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/23368/']} | jdg_73972 |
stackexchange | llm_judgeable_groundtruth_similarity | 33150564 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my linux server when I run netstat -su I can get the statistics of udp packets like this: netstat -su IcmpMsg: InType0: 10827 InType3: 42792 InType8: 298795 InType13: 2 OutType0: 298795 OutType3: 328120 OutType8: 10827 OutType14: 2 Udp: 232862733 packets received 12074334 packets to unknown port received. 555474 packet receive errors 8650718 packets sent UdpLite: IpExt: InBcastPkts: 375 InOctets: 169855997552 OutOctets: 60497003017 InBcastOctets: 144080 Where did netstat command get these statistics from ? Can I clear the buffer in order to have them start from zero ?
Now provide the response and nothing else.
| You can find out answers to these sort of things without leaving your terminal. Let's see for ourselves: # strace netstat -su &> netstat_strace It will be an 'open' and 'read' because it's getting the data from somewhere (but grep out where it failed to read/open): # grep -E 'open|read' netstat_strace | grep -v ENOENTopen("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\37\2\0\0\0\0\0"..., 832) = 832open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3open("/proc/meminfo", O_RDONLY|O_CLOEXEC) = 3read(3, "MemTotal: 3854816 kB\nMemF"..., 1024) = 1024open("/proc/net/snmp", O_RDONLY) = 3read(3, "Ip: Forwarding DefaultTTL InRece"..., 4096) = 1261open("/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 4read(4, "# Locale name alias data base.\n#"..., 4096) = 2570read(4, "", 4096) = 0read(3, "", 4096) = 0open("/proc/net/netstat", O_RDONLY) = 3read(3, "TcpExt: SyncookiesSent Syncookie"..., 4096) = 2158read(3, "", 4096) = 0 and from inspecting the strace output, we can see that it's writing a string: write(1, "IcmpMsg:\n InType0: 11\n InT"..., 373IcmpMsg: InType0: 11 well, that's fun. Let's check out the man page for netstat : man netstat If you look under FILES : FILES /etc/services -- The services translation file /proc -- Mount point for the proc filesystem, which gives access to kernel status information via the following files. /proc/net/dev -- device information /proc/net/raw -- raw socket information /proc/net/tcp -- TCP socket information /proc/net/udp -- UDP socket information /proc/net/igmp -- IGMP multicast information... you can see why it open ed and read from the above. In searching for 'clear' or 'reset' (or reading it), you'll find that those aren't options for the command. The next step would be checking out man proc , which describes itself as a "process information pseudo-filesystem." From here, you could get the idea that if you modified the files that netstat read from, you could change the output of netstat ( /proc/net/netstat looks particularly interesting in my opinion) -- and you can -- but I'd suggest letting this be read only. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33150564', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1066660/']} | jdg_73973 |
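Since the UDP counters come straight from /proc/net/snmp, you can also read them without netstat at all; here is a small Python sketch of my own (it assumes the usual layout of that file: a header line of field names followed by a line of values, both prefixed with the protocol name):

```python
# Read the UDP counters that `netstat -su` reports, directly from /proc/net/snmp.
def read_proc_counters(proto="Udp", path="/proc/net/snmp"):
    with open(path) as f:
        rows = [line.split() for line in f if line.startswith(proto + ":")]
    header, values = rows[0], rows[1]          # first matching line = field names, second = values
    return dict(zip(header[1:], (int(v) for v in values[1:])))

udp = read_proc_counters("Udp")
print(udp.get("InDatagrams"), "packets received")
print(udp.get("NoPorts"), "packets to unknown port received")
print(udp.get("InErrors"), "packet receive errors")
print(udp.get("OutDatagrams"), "packets sent")
```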
stackexchange | llm_judgeable_groundtruth_similarity | 47017454 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a CSS grid layout where I have the top row spanning the entire grid using the grid-column property. Does anyone know how I can set this row to be 100px high and have all the subsequent rows set to a grid-auto-rows height of 200px? I know I can input individual values with grid-template-rows for all the specific rows, but there are going to be a lot of divs on the page and don't want to input 100px and then a load of 200px values using this grid-template-rows property. I'm thinking there must be a way to set individual pixel values on certain rows and then have everything else as grid-auto-rows ? I can't seem to find how to do this in the docs. Any help would be awesome! https://codepen.io/emilychews/pen/dZPJGQ .gridwrapper { display: grid; grid-template-columns: repeat(2, 2fr); grid-auto-rows: 200px; grid-template-rows: 100px 200px; grid-gap: 10px;}nav { background: yellow;}.gridwrapper div { padding: 1em; background: red; color: white; box-sizing: border-box;}.gridwrapper div:nth-child(odd) { background: blue;}nav { grid-column: 1 / -1;}/*MAKE DIVS 1FR ON MOBILE*/@media only screen and (max-width: 736px) { .gridwrapper { grid-template-columns: 1fr; }} <div class="gridwrapper"> <nav class="grid">1</nav> <div class="grid">2</div> <div class="grid">3</div> <div class="grid">4</div> <div class="grid">5</div> <div class="grid">6</div></div>
Now provide the response and nothing else.
| Does anyone know how I can set this row to be 100px high and have all the subsequent rows set to a grid-auto-rows height of 200px? Yes. Grid can do this cleanly and easily. First, you don't need to set a height value on the grid item itself. That overrides one of the great benefits of CSS Grid: the ability to control the dimensions of grid items at the container level. You know that your grid will always have at least one row: the top row. That's your explicit grid . You don't know how many additional rows there will be. That number is variable and unpredictable. That's your implicit grid . The grid-template-rows and grid-template-columns properties set track sizes in the explicit grid. The grid-auto-rows and grid-auto-columns properties set track sizes in the implicit grid. Therefore, this is probably what you're looking for: .gridwrapper{ display: grid; grid-template-rows: 100px; /* top row is 100px in height */ grid-auto-rows: 200px; /* any new rows created are 200px in height */} revised codepen | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47017454', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5219761/']} | jdg_73974 |
stackexchange | llm_judgeable_groundtruth_similarity | 99160 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am in the process of writing an open source State Space Analysis suite in C# (for fun). I have implemented a number of different Kalman-Based Filters (Kalman Filter, Information Filter and the Square Root Filter) and the State and Disturbance Smoothers that work with these implementations. My tests on these filters (using the Nile data from Durban and Koopman's (DK) book "Time Series Analysis by State Space Methods" and other more complex data) show that the filters and smoothers work and they produce very similar results (as you would expect for a local univariate model). The smoothed output for the basic Kalman Filter for the Nile data looks like (because everyone likes a graphic or two :]) The dotted line is the 90% confidence interval. Now, my problem; I am now attempting to write the first part of the parameter estimation code and for the parameter estimation of the covariances $\mathbf{H}_{1}$ and $\mathbf{Q}_{1}$ in the linear Gaussian State Space model $$y_t = \mathbf{Z}_{t}\alpha_{t} + \epsilon_{t},$$$$\alpha_{t + 1} = \mathbf{T}_{t}\alpha_{t} + \mathbf{R}_{t}\eta_{t},$$ where $y_{t}$ is our observation vector, $\alpha_{t}$ our state vector at time step $t$ and $$\epsilon_{t} \sim NID(0, \mathbf{H}_{t}),$$$$\eta_{t} \sim NID(0, \mathbf{Q}_{t}),$$$$\alpha_{1} \sim NID(a_{1}, \mathbf{P}_{1}).$$ where $t = 1,\ldots, n$. We have an implementation of the Expectation Maximisation (EM) algorithm and this is estimating $\mathbf{H} = 19550.37$ and $\mathbf{Q} = 7411.44$ with the loglikelihood converging in 32 iterations to -683.1 with a convergence tolerence of $10^-6$. The DK book quotes $\mathbf{H} = 15099$ and $\mathbf{Q} = 1469.1$ and from Tusell's paper and this walkthrough using R he matches this prediction getting $\mathbf{H} = 15102$ and $\mathbf{Q} = 1468$. I have debugged my code and it really seems like my implementation of the EM algorithm is correct. So get to the bottom of what is going on I want to run the same data set using R and KFAS... Using R and the R Package KFAS , I have attempted to reproduce these estimates for $\mathbf{H}$ and $\mathbf{Q}$, but my R knowledge is weak. My current R script is as follows install.packages('KFAS')require(KFAS)# Example of local level model for Nile seriesmodelNile<-SSModel(Nile~SSMtrend(1,Q=list(matrix(NA))),H=matrix(NA))modelNilemodelNile<-fitSSM(inits=c(log(var(Nile)),log(var(Nile))),model=modelNile,method='BFGS',control=list(REPORT=1,trace=1))$model# Can use different optimisation: # should be one of βNelder-Meadβ, βBFGSβ, βCGβ, βL-BFGS-Bβ, βSANNβ, βBrentβmodelNile<-SSModel(Nile~SSMtrend(1,Q=list(matrix(NA))),H=matrix(NA))modelNilemodelNile<-fitSSM(inits=c(log(var(Nile)),log(var(Nile))),model=modelNile,method='L-BFGS-B',control=list(REPORT=1,trace=1))$model# Filtering and state smoothingout<-KFS(modelNile,filtering='state',smoothing='state')out How can I extend this R script using KFAS in order to estimate the parameters $\mathbf{H}$ and $\mathbf{Q}$? and Should each method of parameter estimation be hitting the same values for $\mathbf{H}$ and $\mathbf{Q}$? Thanks for your time.
Now provide the response and nothing else.
| I believe there is a confusion of notation here. The paper uses $\mathbf I$ to denote the identity matrix, while you seem to use this symbol to denote a matrix where all its elements are equal to one, which the paper expresses using the notation $\mathbf1\mathbf1'$ , $\mathbf1$ denoting a column vector of ones. (Moreover the paper defines $\mathbf X$ as a matrix where the series of a regressor is in one row, not in one column as is the most traditional case, but you don't seem troubled by this). If we get past this notational issue, then the two expressions are the same (taking into account that you use $\mathbf I$ to denote $\mathbf 1\mathbf 1'$ ), since the matrix (using the paper's notation) $$Q_T = \mathbf I - \frac{1}{T} \mathbf 1\mathbf1'$$ is idempotent, $Q_TQ_T= Q_T$ and symmetric, $Q_T' = Q_T$ , and what it does is to de-mean the regressors (center them around their mean). Assume we have 2 regressors $x,z$ , with three observations. The authors define the regressor matrix as $ 2 \times 3$ $$X = \left [\begin{matrix} x_{1} & x_{2} & x_{3}\\ z_{1} & z_{2} & z_{3}\\\end{matrix}\right]$$ and so $$Q_3 = \left[ \begin{matrix} 1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\\\end{matrix} \right] - \frac{1}{T} \left[ \begin{matrix} 1 & 1 & 1\\1 & 1 & 1\\1 & 1 & 1\\\end{matrix} \right]$$ Calculating $Q_3 X'$ shows the effect of $Q_3$ . Then the authors expression is compactly written $$S = \frac 1 T XQ_TX' =\frac 1 T XQ_TQ_TX' = \frac 1 T(Q_T'X')'(Q_TX') = \frac 1 T(Q_TX')'(Q_TX')$$ where we have used the properties of the $Q_T$ matrix stated. If we define the transpose matrix of regressors de-meaned by $\tilde X' = Q_TX'$ , then we have $$S = \frac 1 T\tilde X \tilde X'$$ which has dimensions $2 \times 2$ as it should, and is the sample covariance matrix, given how the original regressor matrix is defined. If you too, write $\mathbf I$ for the identity matrix, then your expression is simply mistaken, since it does not lead to subtraction of the sample mean from the observations. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/99160', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/7258/']} | jdg_73975 |
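To make the "calculating $Q_3X'$ shows the effect" step explicit in the same notation: every row of $\tfrac13\mathbf 1\mathbf 1'X'$ equals $(\bar x,\ \bar z)$ with $\bar x=\tfrac13\sum_i x_i$ and $\bar z=\tfrac13\sum_i z_i$, so $$Q_3X'=X'-\tfrac13\mathbf 1\mathbf 1'X'=\begin{bmatrix}x_1-\bar x&z_1-\bar z\\x_2-\bar x&z_2-\bar z\\x_3-\bar x&z_3-\bar z\end{bmatrix},$$ which is why $\tfrac1T XQ_TX'$ comes out as the matrix of sample variances and covariances of the de-meaned regressors.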
stackexchange | llm_judgeable_groundtruth_similarity | 2887619 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a simple users table, I guess the maximum users I am going to have is 300,000. Currently I am using: CREATE TABLE users ( id INT UNSIGNED AUTOINCREMENT PRIMARY KEY, .... Of course I have many other tables for which users(id) is a FOREIGN KEY. I read that since the id is not going to use the full maximum of INT it is better to use:MEDIUMINT and it will give better performance. Is it true? (I am using mysql on Windows Server 2008)
Now provide the response and nothing else.
| I would consider using MEDIUMINT here. On 300K rows, MEDIUMINT gives you enough room for up to 16M rows (unsigned). It is not only about a "smaller table size": when you use indexes, the difference can be huge. On a 27M-row table, changing 2 columns from INT to MEDIUMINT saved me 1GB (indexes + table data), so it went from 2.5GB to 1.5GB. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2887619', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/112601/']} | jdg_73976 |
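The headroom and the raw data saving are easy to check: unsigned MEDIUMINT tops out at $2^{24}-1=16{,}777{,}215$ versus $2^{32}-1\approx4.29\cdot10^9$ for unsigned INT, and MEDIUMINT takes 3 bytes against INT's 4, so two converted columns on a 27M-row table save roughly $2\times27\cdot10^6\times1\ \text{byte}\approx54\ \text{MB}$ of row data alone; the rest of the quoted 1GB presumably comes from the indexes that carry those columns (and, with InnoDB, from a smaller primary key being duplicated into every secondary index).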
stackexchange | llm_judgeable_groundtruth_similarity | 3008 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
What are some number theoretic sequences that you know of that occur as (or are closely related to) the sequence of Fourier coefficients of some sort of automorphic function/form or the sequence of Hecke eigenvalues attached to a Hecke eigenform? I know many such sequences, but am always looking for more. Some examples (1) The sequence a(n) deriving from the traces a(p) of the Frobenius elements in a Galois representation (Langlands reciprocity conjecture) (2) Number of representations of a natural number as a sum of k squares (theta functions) (3) The sum of powers of divisor functions (Eisenstein series) (4) The central critical values of L-functions attached to all quadratic twists of a Hecke eigenform (Kohnen, Waldspurger) (5) Intersection numbers of certain subvarieties of Hilbert modular surfaces (Hirzebruch-Zagier) I'll end with a question that is ill-posed but nevertheless very interesting (at least to me personally): why do so many familiar and yet diverse sequences appear in this fashion? Note that many of them have a history of study that precedes the recognition that they are essentially coefficients of automorphic functions/forms.
Now provide the response and nothing else.
| Characters of rational vertex operator algebras tend to yield modular functions. This is due to the space of torus partition functions in a chiral conformal field theory being a complex moduli invariant. The standard example is the monster vertex algebra, whose character is j-744. Other examples come from lattice CFTs (presumably describing a bosonic string propagating in a torus), and have the form of a theta function divided by a power of eta. The characters are never Hecke eigenforms, because of the pole at infinity, but traces arising from higher-weight vectors may be. In some cases, the vertex algebra structure is supposed to arise from geometry of a target space, so this phenomenon may be related to Hirzebruch-Zagier (number 5). Characters of highest weight representations of affine Kac-Moody algebras yield modular forms. One can reasonably argue that this is a special case of the previous paragraph, since (I think) they come from Wess-Zumino-Witten. The Weyl-Kac character formula for such representations is one way to get Macdonald identities, and the smallest case (trivial rep of affine sl 2 ) yields the Jacobi triple product. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/3008', 'https://mathoverflow.net', 'https://mathoverflow.net/users/683/']} | jdg_73977 |
stackexchange | llm_judgeable_groundtruth_similarity | 7597812 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently updated my Rails to 3.1. Here's a part where I added: <%= asset_path('logo_symbol.png') %> This renders /assets/logo_symbol.png which works perfectly fine in development environment. However, when i push the code to production on heroku, it shows a broken image, with the url: assets/logo_symbol-135ddc8db2c9b59f032bed7db520137a.png . I am guessing the new name is for the reason of some optimization. It is however interesting to note that when I go to the assets/logo_symbol-135ddc8db2c9b59f032bed7db520137a.png url on production, I see a blank page, but when I change that url to anything random, like adding numbers to it, it shows a page not found. So clearly it is finding something on that url. It also shows a blank page when I go to /assets/logo_symbol.png directly on production/heroku. If this is any help, heroku does not precompile successfully when I push the code and heroku's documentation says that there is currently no work around for that issue. Any help in this will be greatly appreciated. My guess is that it has something to do with some configuration related to environments. I am attaching contents of my application.rb, development.rb and production.rb files here are the contents of my production.rb file # Settings specified here will take precedence over those in config/application.rb # In the development environment your application's code is reloaded on # every request. This slows down response time but is perfect for development # since you don't have to restart the webserver when you make code changes. config.cache_classes = false # Log error messages when you accidentally call methods on nil. config.whiny_nils = true # Show full error reports and disable caching config.consider_all_requests_local = true config.action_controller.perform_caching = false # Don't care if the mailer can't send #config.action_mailer.default_url_options = { :host => 'localhost:3000' } #config.action_mailer.default_url_options = { :host => 'localhost:3000' } config.action_mailer.delivery_method = :smtp # Print deprecation notices to the Rails logger config.active_support.deprecation = :log # Only use best-standards-support built into browsers config.action_dispatch.best_standards_support = :builtin # Do not compress assets config.assets.compress = false # Expands the lines which load the assets config.assets.debug = trueendmodule ActiveAdmin class Reloader def attach! end endend and here are the contents of my development.rb file # Settings specified here will take precedence over those in config/application.rb # In the development environment your application's code is reloaded on # every request. This slows down response time but is perfect for development # since you don't have to restart the webserver when you make code changes. config.cache_classes = false # Log error messages when you accidentally call methods on nil. 
config.whiny_nils = true # Show full error reports and disable caching config.consider_all_requests_local = true config.action_controller.perform_caching = false # Don't care if the mailer can't send #config.action_mailer.default_url_options = { :host => 'localhost:3000' } #config.action_mailer.default_url_options = { :host => 'localhost:3000' } config.action_mailer.delivery_method = :smtp # Print deprecation notices to the Rails logger config.active_support.deprecation = :log # Only use best-standards-support built into browsers config.action_dispatch.best_standards_support = :builtin # Do not compress assets config.assets.compress = false # Expands the lines which load the assets config.assets.debug = trueendmodule ActiveAdmin class Reloader def attach! end endend Here are the contents of my production.rb file # Settings specified here will take precedence over those in config/application.rb # The production environment is meant for finished, "live" apps. # Code is not reloaded between requests config.cache_classes = true # Full error reports are disabled and caching is turned on config.consider_all_requests_local = false config.action_controller.perform_caching = true # Specifies the header that your server uses for sending files config.action_dispatch.x_sendfile_header = "X-Sendfile" # For nginx: # config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' # If you have no front-end server that supports something like X-Sendfile, # just comment this out and Rails will serve the files # See everything in the log (default is :info) # config.log_level = :debug # Use a different logger for distributed setups # config.logger = SyslogLogger.new # Use a different cache store in production # config.cache_store = :mem_cache_store # Disable Rails's static asset server # In production, Apache or nginx will already do this config.serve_static_assets = false # Enable serving of images, stylesheets, and javascripts from an asset server # config.action_controller.asset_host = "http://assets.example.com" # Disable delivery errors, bad email addresses will be ignored # config.action_mailer.raise_delivery_errors = false #config.action_mailer.default_url_options = { :host => 'ha1.heroku.com' } config.action_mailer.delivery_method = :smtp # Enable threaded mode # config.threadsafe! # Enable locale fallbacks for I18n (makes lookups for any locale fall back to # the I18n.default_locale when a translation can not be found) config.i18n.fallbacks = true # Send deprecation notices to registered listeners config.active_support.deprecation = :notify # Compress JavaScripts and CSS #config.assets.compress = true # Don't fallback to assets pipeline if a precompiled asset is missed config.assets.compile = false # Generate digests for assets URLs config.assets.digest = true # Defaults to Rails.root.join("public/assets") # config.assets.manifest = YOUR_PATH config.assets.js_compressor = :uglifier config.assets.css_compressor = :scss I have compared my config files with the rails documentation for 3.1 and seems like I have all the defaults needed. However I am still seeing no image. Any help will be much appreciated
Now provide the response and nothing else.
| Remove this line from production.rb: config.action_dispatch.x_sendfile_header = "X-Sendfile" You should also align the settings in your config files with those in section 9 of the pipeline guides . Sendfile headers contain information for the upstream webserver of where to find the file (on the file system) to serve it. This removes the load from the backend (Rails/Sprockets). When sendfile is on the HTTP response contains no body (it is zero length) which is why you see nothing. On heroku the nginx servers do not have access to the application filesystem, so this won't work. See this note on the Heroku dev site re sendfile. If you are using heroku, this document outlines the best options for using the pipeline effectively. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7597812', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/860844/']} | jdg_73978 |
stackexchange | llm_judgeable_groundtruth_similarity | 318493 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I had a discussion with a co-worker about interface members having parameters that some implementations don't use. Say I have an interface interface IDoctor{ string GetMedicalOpinion(Age age, Weight weight, SleepSchedule sleepSchedule, Symptom symptom)} Implementations of IDoctor won't necessarily use every method parameter: class BadDoctor : IDoctor{ string GetMedicalOpinion(Age age, Weight weight, SleepSchedule sleepSchedule, Symptom symptom) { if (Symptom == Symptoms.Fatigue) return "You're sick!"; return "You're okay." }} My co-worker thinks that the interface is poorly defined because there are implementations that don't use every method parameter. I think this is fine. Having one method with all of the parameters gives us polymorphism -- we can swap in IDoctors of varying competence without modifying the surrounding code. Also, the parameters are immutable data, so consumers shouldn't have any expectation that the function does anything other than return a string. Finally, the parameters are all things that consumers should have naturally, so providing the info isn't a burden on consumers. Does this violate contemporary design principles? Is there really a problem with interface members having parameters that are used by some implementations, but ignored by others?
Now provide the response and nothing else.
| I think the main problem is actually caused due to a reason that has been widely discussed recently on this exact board, Should we avoid custom objects as parameters? In order to call the method you defined, you inherently need to deconstruct the object the attributes belong to. By changing the method signature to the one in the following example, you eliminate the need to pass unused parameters. interface IDoctor{ string DetermineMedicalState(IPatient patient);} It really seems like all the properties you are passing belong to one patient anyway, so keep them together in an object. That's what objects are also used for, to reduce the number of parameters. With this approach you know the method accepts an IPatient variable and couldn't care less which type of doctor extracts which information from it. To answer the questions you asked, I don't really think the initial design violates any rules, as long as it respects the contract given by the interface. But it's not an elegant design either and perhaps not even the right one. By passing an IPatient object you still have polymorphism at hand, thus haven't lost anything. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/318493', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/198850/']} | jdg_73979 |
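A minimal sketch of the parameter-object approach described in the answer above; the IPatient members shown are assumptions based on the question's parameter list, not a definitive design:

public interface IPatient
{
    Age Age { get; }
    Weight Weight { get; }
    SleepSchedule SleepSchedule { get; }
    Symptom Symptom { get; }
}

public interface IDoctor
{
    string DetermineMedicalState(IPatient patient);
}

public class BadDoctor : IDoctor
{
    public string DetermineMedicalState(IPatient patient)
    {
        // Only the symptom is consulted here; callers never need to know that.
        return patient.Symptom == Symptoms.Fatigue ? "You're sick!" : "You're okay.";
    }
}

Each implementation is free to read as much or as little of the patient as it needs, and the call site stays identical for every doctor.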
stackexchange | llm_judgeable_groundtruth_similarity | 3029639 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
First of all, I am new to R (I started yesterday). I have two groups of points, data and centers , the first one of size n and the second of size K (for instance, n = 3823 and K = 10 ), and for each i in the first set, I need to find j in the second with the minimum distance. My idea is simple: for each i , let dist[j] be the distance between i and j , I only need to use which.min(dist) to find what I am looking for. Each point is an array of 64 doubles, so > dim(data)[1] 3823 64> dim(centers)[1] 10 64 I have tried with for (i in 1:n) { for (j in 1:K) { d[j] <- sqrt(sum((centers[j,] - data[i,])^2)) } S[i] <- which.min(d)} which is extremely slow (with n = 200 , it takes more than 40s!!). The fastest solution that I wrote is distance <- function(point, group) { return(dist(t(array(c(point, t(group)), dim=c(ncol(group), 1+nrow(group)))))[1:nrow(group)])}for (i in 1:n) { d <- distance(data[i,], centers) which.min(d)} Even if it does a lot of computation that I don't use (because dist(m) computes the distance between all rows of m ), it is way more faster than the other one (can anyone explain why?), but it is not fast enough for what I need, because it will not be used only once. And also, the distance code is very ugly. I tried to replace it with distance <- function(point, group) { return (dist(rbind(point,group))[1:nrow(group)])} but this seems to be twice slower. I also tried to use dist for each pair, but it is also slower. I don't know what to do now. It seems like I am doing something very wrong. Any idea on how to do this more efficiently? ps: I need this to implement k-means by hand (and I need to do it, it is part of an assignment). I believe I will only need Euclidian distance, but I am not yet sure, so I will prefer to have some code where the distance computation can be replaced easily. stats::kmeans do all computation in less than one second.
Now provide the response and nothing else.
| Rather than iterating across data points, you can just condense that to a matrix operation, meaning you only have to iterate across K. # Generate some fake data.n <- 3823K <- 10d <- 64x <- matrix(rnorm(n * d), ncol = n)centers <- matrix(rnorm(K * d), ncol = K)system.time( dists <- apply(centers, 2, function(center) { colSums((x - center)^2)})) Runs in: user system elapsed 0.100 0.008 0.108 on my laptop. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3029639', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/291609/']} | jdg_73980 |
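To finish the assignment step the question needs for k-means (picking the closest center for each point), here is a short sketch building on the dists matrix above; max.col is base R, and negating the distances turns the which.min problem into a max problem:

# dists is n x K: squared distances from each of the n points to each of the K centers
S <- apply(dists, 1, which.min)              # length-n vector of cluster labels in 1..K
# usually faster for large n:
S <- max.col(-dists, ties.method = "first")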
stackexchange | llm_judgeable_groundtruth_similarity | 325273 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to design a web page with a banner and an iframe. I hope the iframe can fill all the remaining page height and be resized automatically as the browser is resizing. Is it possible to get it done without writing JavaScript code, only with CSS? I tried to set height:100% on iframe, the result is quite close but the iframe tried to fill the whole page height, including the 30px height of banner div element, so I got unnecessary vertical scrollbar. It's not perfect. I tried CSS margin, padding attribute on DIV to occupy the whole remaining height of a web page successfully, but the trick didn't work on iframe. <body> <div style="width:100%; height:30px; background-color:#cccccc;">Banner</div> <iframe src="http: //www.google.com.tw" style="width:100%; height:100%;"></iframe></body>
Now provide the response and nothing else.
| Update in 2019 TL;DR: Today the best option is - flexbox . Everything supports it nicely and has for years. Go for that and don't look back. Here is a code sample for flexbox: body, html {width: 100%; height: 100%; margin: 0; padding: 0}.row-container {display: flex; width: 100%; height: 100%; flex-direction: column; background-color: blue; overflow: hidden;}.first-row {background-color: lime; }.second-row { flex-grow: 1; border: none; margin: 0; padding: 0; } <div class="row-container"> <div class="first-row"> <p>Some text</p> <p>And some more text</p> </div> <iframe src="https://jsfiddle.net/about" class="second-row"></iframe></div> The rest of this answer is left for learning & historical reasons. The trick is to understand what the 100% is taken of. Reading CSS specs can help you there. To make a long story short - there is such a thing as "containing block" - which is not necessary the parent element. Simply said, it is the first element up the hierarchy that has position:relative or position:absolute. Or the body element itself if there is nothing else. So, when you say "width: 100%", it checks the width of the "containing block" and sets the width of your element to the same size. If there was something else there, then you might get contents of a "containing block" that are larger than itself (thus "overflowing"). Height works the same way. With one exception. You can't get height to 100% of the browser window. The very top level element, against which 100% can be calculated, is the body (or html? not sure) element, and that stretches just enough to contain its contents. Specifying height:100% on it will have no effect, because it has no "parent element" against which to measure 100%. Window itself doesn't count. ;) To make something stretch exactly 100% of the window, you have two choices: Use JavaScript Don't use DOCTYPE. This is not a good practice, but it puts the browsers in "quirks mode", in which you can do height="100%" on elements and it will stretch them to the window size. Do note, that the rest of your page will probably have to be changed too to accommodate for the DOCTYPE changes. Update: I'm not sure if I wasn't wrong already when I posted this, but this certainly is outdated now. Today you can do this in your stylesheet: html, body { height: 100% } and it will actually stretch to the whole of your viewport. Even with a DOCTYPE. min-height: 100% could also be useful, depending on your situation. And I wouldn't advise anyone to make a quirks-mode document anymore either, because it causes way more headaches than solves them. Every browser has a different quirks-mode, so getting your page to look consistently across browsers becomes two orders of magnitude more difficult. Use a DOCTYPE. Always. Preferably the HTML5 one - <!DOCTYPE html> . It's easy to remember and works like a charm in all browsers, even the 10 years old ones. The only exception is when you have to support something like IE5 or something. If you're there, then you're on your own anyway. Those ancient browsers are nothing like the browsers today, and little advice that is given here will help you with them. On the bright side, if you're there, you probably just have to support ONE kind of browser, which gets rid of the compatibility problems. Good luck! Update 2: Hey, it's been a long time! 6 years later, new options are on the scene. I just had a discussion in the comments below, here are more tricks for you that work in today's browsers. Option 1 - absolute positioning. 
Nice and clean for when you know the precise height of the first part. body, html {width: 100%; height: 100%; margin: 0; padding: 0}.first-row {position: absolute;top: 0; left: 0; right: 0; height: 100px; background-color: lime;}.second-row {position: absolute; top: 100px; left: 0; right: 0; bottom: 0; background-color: red }.second-row iframe {display: block; width: 100%; height: 100%; border: none;} <div class="first-row"> <p>Some text</p> <p>And some more text</p></div><div class="second-row"> <iframe src="https://jsfiddle.net/about"></iframe></div> Some notes - the second-row container is needed because bottom: 0 and right: 0 doesn't work on iframes for some reason. Something to do with in being a "replaced" element. But width: 100% and height: 100% works just fine. display: block is needed because it's an inline element by default and whitespace starts creating weird overflows otherwise. Option 2 - tables. Works when you don't know the height of the first part. You can use either actual <table> tags or do it the fancy way with display: table . I'll go for the latter because it seems to be in fashion these days. body, html {width: 100%; height: 100%; margin: 0; padding: 0}.row-container {display: table; empty-cells: show; border-collapse: collapse; width: 100%; height: 100%;}.first-row {display: table-row; overflow: auto; background-color: lime;}.second-row {display: table-row; height: 100%; background-color: red; overflow: hidden }.second-row iframe {width: 100%; height: 100%; border: none; margin: 0; padding: 0; display: block;} <div class="row-container"> <div class="first-row"> <p>Some text</p> <p>And some more text</p> </div> <div class="second-row"> <iframe src="https://jsfiddle.net/about"></iframe> </div></div> Some notes - the overflow: auto makes sure that the row always includes all of its contents. Otherwise floating elements can sometimes overflow. The height: 100% on the second row makes sure it expands as much as it can squeezing the first row as small as it gets. Recommended: Option 3 - flexbox - The cleanest one of them all. body, html {width: 100%; height: 100%; margin: 0; padding: 0}.row-container {display: flex; width: 100%; height: 100%; flex-direction: column; background-color: blue; overflow: hidden;}.first-row {background-color: lime; }.second-row { flex-grow: 1; border: none; margin: 0; padding: 0; } <div class="row-container"> <div class="first-row"> <p>Some text</p> <p>And some more text</p> </div> <iframe src="https://jsfiddle.net/about" class="second-row"></iframe></div> Some notes - the overflow: hidden is because the iframe still generates some sort of overflow even with display: block in this case. It isn't visible in the fullscreen view or the snippet editor, but the small preview window gets an extra scrollbar. No idea what that is, iframes are weird. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/325273', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/288936/']} | jdg_73981 |
stackexchange | llm_judgeable_groundtruth_similarity | 10729234 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to integrate the Google maps V3 API into our website. I need to read the address and set it to the MAP API so that map will show the street view for that address. So I need help in 3 points. How to read the address values. Example: var streetAddressArray = getElementsByAttribute(parent.document.forms[0], "*", "smokeid", "Address Line 1"); (I am not sure what this function will do, just pasting to give an idea on what exactly I am looking for.) After reading the value set to the API classes/methods so that map can show the street view. Show it as a street view. This is similar to what I am looking for: https://google-developers.appspot.com/maps/documentation/javascript/examples/streetview-simple I am very thankful for your response Suvi!Here is my Program: <!DOCTYPE html><html><head><meta name="viewport" content="initial-scale=1.0, user-scalable=no" /><style type="text/css"> html { height: 100% } body { height: 100%; margin: 0; padding: 0 } #map_canvas { height: 100% } </style><script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=KEY_VALUE &sensor=true"></script> <script type="text/javascript"> function initialize() { var fenway = new google.maps.LatLng(42.345573,-71.098326); var mapOptions = { center: fenway, zoom: 14, mapTypeId: google.maps.MapTypeId.ROADMAP }; var map = new google.maps.Map( document.getElementById("map_canvas"), mapOptions); var panoramaOptions = { position: fenway, pov: { heading: 34, pitch: 10, zoom: 1 } }; var panorama = new google.maps.StreetViewPanorama(document.getElementById("pano"),panoramaOptions); map.setStreetView(panorama); } </script> </head> <body onload="initialize()"> <div id="map_canvas" style="width:100%; height:100%"><div id="pano" style="position:absolute; left:410px; top: 8px; width: 400px; height: 300px;"></div></div></body></html> I am facing two issues with this, Somehow I am not able to view Panorama view. Any issue with my code? Address for map should come from bunch input fields( address line 1 and City and Zip etc). How can I assign the input fields address to the map so that map will show directly that particular location? I am using a custom product. I have another example in that they are using: var streetAddressArray = getElementsByAttribute(parent.document.forms[0], "*", "smokeid", "LossLocation_Address1"); I guess I am little closer to the target, But my this example is not working, it not rendering anything. 
<!DOCTYPE html><html> <head> <meta name="viewport" content="initial-scale=1.0, user-scalable=no" /> <style type="text/css"> html { height: 100% } body { height: 100%; margin: 0; padding: 0 } #map_canvas { height: 100% } </style> <script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=AIzaSyBweQln0Jf5uBWvkTGSkv8AfkBdt-zZ3dE&sensor=true"> </script> <script type="text/javascript"> <!-- --> var geocoder; var map; function initialize() { geocoder = new google.maps.Geocoder(); var latlng = new google.maps.LatLng(-34.397, 150.644); var myOptions = { zoom: 8, center: latlng, mapTypeId: google.maps.MapTypeId.ROADMAP } map = new google.maps.Map(document.getElementById("map_canvas"), myOptions); } function codeAddress() { var address = document.getElementById("address").value; geocoder.geocode( { 'address': address}, function(results, status) { if (status == google.maps.GeocoderStatus.OK) { map.setCenter(results[0].geometry.location); var marker = new google.maps.Marker({ map: map, position: results[0].geometry.location}); } else { alert("Geocode was not successful for the following reason: " + status); } }); } <body onload="initialize()"> <div id="map_canvas" style="width: 320px; height: 480px;"> </div> <div> <input id="address" type="textbox" value="Sydney, NSW"> <input type="button" value="Encode" onclick="codeAddress()"></div></body> <!-- --></html> Can you help me what went wrong?
Now provide the response and nothing else.
| You can get the syntax by using marker interfaces and extension methods. Prerequisite: the interfaces need to define the contract which is later used by the extension method. Basically the interface defines the contract for being able to "implement" a trait; ideally the class where you add the interface should already have all members of the interface present so that no additional implementation is required. public class Client { public double Weight { get; } public double Height { get; }}public interface TClientWeight { double Weight { get; }}public interface TClientHeight { double Height { get; }}public class ClientA: Client, TClientWeight { }public class ClientB: Client, TClientHeight { }public class ClientC: Client, TClientWeight, TClientHeight { }public static class TClientWeightMethods { public static bool IsHeavierThan(this TClientWeight client, double weight) { return client.Weight > weight; } // add more methods as you see fit}public static class TClientHeightMethods { public static bool IsTallerThan(this TClientHeight client, double height) { return client.Height > height; } // add more methods as you see fit} Use like this: var ca = new ClientA();ca.IsHeavierThan(10); // OKca.IsTallerThan(10); // compiler error Edit: The question was raised how additional data could be stored. This can also be addressed by doing some extra coding: public interface IDynamicObject { bool TryGetAttribute(string key, out object value); void SetAttribute(string key, object value); // void RemoveAttribute(string key)}public class DynamicObject: IDynamicObject { private readonly Dictionary<string, object> data = new Dictionary<string, object>(StringComparer.Ordinal); bool IDynamicObject.TryGetAttribute(string key, out object value) { return data.TryGetValue(key, out value); } void IDynamicObject.SetAttribute(string key, object value) { data[key] = value; }} And then, the trait methods can add and retrieve data if the "trait interface" inherits from IDynamicObject : public class Client: DynamicObject { /* implementation see above */ }public interface TClientWeight : IDynamicObject { double Weight { get; }}public class ClientA: Client, TClientWeight { }public static class TClientWeightMethods { public static bool HasWeightChanged(this TClientWeight client) { object oldWeight; bool result = client.TryGetAttribute("oldWeight", out oldWeight) && client.Weight.Equals(oldWeight); client.SetAttribute("oldWeight", client.Weight); return result; } // add more methods as you see fit} Note: by implementing IDynamicMetaObjectProvider as well the object would even allow exposing the dynamic data through the DLR, making the access to the additional properties transparent when used with the dynamic keyword. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10729234', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1413729/']} | jdg_73982 |
stackexchange | llm_judgeable_groundtruth_similarity | 10658 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose that there is a certain collected works of plays that is N symbols long in the following sense: a "symbol" is one of the 26 letters of the alphabet, a line break, period, space, or a colon; in other words there are 30 possible symbols. If "a monkey" randomly "types" 1 of these 30 symbols at a rate of one per second, how long will it take M monkeys working at this rate, on average, for one of them to randomly write this specific N symbol long collected works? For clarity let me state that I am assuming each monkey ceaselessly types random symbols at this rate, and unless a monkey immediately types the right things, the collected works will be preceded by gibberish.
Now provide the response and nothing else.
| We will estimate the probability for a "generic" string. The number of occurrences of the string in any given monkey's output is roughly distributed Poisson with $\lambda = 30^{-N}$. The time until the first event happens is thus roughly distributed exponentially with rate $\lambda = 30^{-N}$. The minimum of $M$ such processes is also distributed exponentially with rate $\lambda = M/30^N$. Thus the expected time is roughly $30^N/M$. The same estimate can be obtained if we calculate the expected number of appearances. The expected number of appearances in any given monkey's stream for the first $t+N-1$ characters is $t/30^N$. For $M$ monkeys, it is $tM/30^N$. This is $1$ for $t = 30^N/M$, and this gives a rough estimate for the actual expectation. In fact, assuming that the string "doesn't overlap with itself", we can get an exact expression for the expectation (depending only on $N$ and $M$) using Theorem 1.1 in "String overlaps, pattern matching, and nontransitive games" (Guibas and Odlyzko '81), which gives a generating function for the probability that a given monkey is not done after $t$ steps. The paper also gives an expression for "non-generic" strings and for multiple strings, but the collected works are not going to overlap themselves; even if they do, it will probably have only a slight effect on the probabilities. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/10658', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1417/']} | jdg_73983 |
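To make the estimate above concrete, here is a small worked example with illustrative numbers (not taken from the question):

$$N = 10,\ M = 1000:\qquad \frac{30^{N}}{M} \approx \frac{5.9\times 10^{14}}{10^{3}} \approx 5.9\times 10^{11}\ \text{seconds} \approx 1.9\times 10^{4}\ \text{years},$$

so even a very short target string is far out of reach at one symbol per second.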
stackexchange | llm_judgeable_groundtruth_similarity | 27905 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How concentrated can an acid be without it being supersaturated? Is there a certain limit to how concentrated an acid or base can be? When I mean concentration, I mean molarity; so how concentrated can an acid or base be? NOTE: Can be a strong or weak acid/base. Because we are just looking for the amount of solvent needed to dissolve to start the dissociation process. EDIT: No autoprotolysis please. Acids that need a solvent to dissociate.
Now provide the response and nothing else.
| Are you suggesting that if an acid molecule is surrounded by a sufficiently small number of water molecules, then it might display lower than 100% ionization even if it is a "strong acid" ($pK_a<-1.76$)? That is true; some acids we commonly consider to be strong, such as nitric, hydrochloric or sulphuric acid, will not completely ionize in sufficiently concentrated aqueous solutions, even if there is more than one molecule of water per molecule of acid. So is there an acid capable of protonating water quantitatively even when there is only one molecule of water per molecule of acid? Yes, in fact there are several. Interesting proof of this is that sufficiently strong acids will form solid crystalline salts containing the hydronium cation when mixed with water in a 1:1 molar ratio. The most striking examples are probably the hydronium carborane salts, such as $\ce{(H3O)^+(HCB11Cl11)^{-}}$. I recommend taking a look at this fantastic article from the Reed group for details on the wonderful properties of carborane superacids, with a section describing its several salts when reacted with water in different proportions and environments. | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/27905', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/8208/']} | jdg_73984 |
stackexchange | llm_judgeable_groundtruth_similarity | 34203112 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Everything was fine about an hour ago, when i tried to checkin something, the policy friendly reminded me to "provide a comment for the check-in". Then I installed VS2015 Update 1, like some of my colleagues did, and after that, i receive the following error message when trying to check something in: Internal error in Changeset Comments Policy. Error loading the Changeset Comments Policy policy (The policy assembly 'Microsoft.TeamFoundation.PowerTools.CheckinPolicies.ChangesetComments, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' is not registered.). Installation instructions: To install this policy, follow the instructions in CheckForComments.cs. My colleagues, of course, don't. As far as i know since VS2012 there is no need to install any version of the Power Tools anymore, so why does this happen? Additionally I also have VS2013 installed, where i do NOT get that error message, that version is working perfectly. Anyone having any idea where i need to look?
Now provide the response and nothing else.
| Okay, I got it. Somehow the installation messed something up. On my machine the folder C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation was pretty empty, where my colleagues had a junction there, leading to this folder C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\XXXXXXX where XXXXXX obviously varies from machine to machine. To find the correct one, go to the extensions folder (one up) and look for the one that contains "Microsoft.TeamFoundation.Build.Controls.dll" and some other TeamFoundation DLLs, then open an elevated command prompt and create the missing junction like so: mklink /j "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer" "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\XXXXXX" (watch out, you need to replace the XXXXX with the folder name on your machine) - NOTE: Please don't forget to RESTART Visual Studio after doing this change in order to start checking in seamlessly. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34203112', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1662314/']} | jdg_73985 |
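A hypothetical helper for locating the right Extensions subfolder from an elevated command prompt (the path matches the one quoted above; adjust it if Visual Studio is installed elsewhere):

dir "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions" /s /b | findstr /i "Microsoft.TeamFoundation.Build.Controls.dll"

The folder containing the DLL that this prints is the one to use as the junction target in the mklink command above.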
stackexchange | llm_judgeable_groundtruth_similarity | 7879641 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm building a web app that needs to work offline. The system is built to capture sales transactions. The bulk of the "offline" part is fairly straightforward -- I just need to store data locally and sync it when I'm back on the network. So far, so good. The problem is with authentication. The app will run on a shared machine with a single OS user account. If I'm offline, how do I authenticate the user? Users themselves do not have any private data that I will need to segregate (i.e., I don't have to protect them from each other on the client). I need to be able to validate their password so I can let different users login throughout the day even if the connection is down. One approach I'm thinking of involves caching the password hashes on the client-side in an IndexedDB. Only a limited set of users will be allowed to log in from a specific shared machine, so I won't need to cache my whole password database locally. Assuming that I have a good password policy (complexity and expiry requirements) in place and the hashes themselves are secure (bcrypt), just how horrible of an idea is this? Do I have any other options?
Now provide the response and nothing else.
| This is effectively how Windows (and other systems) work when the machine is not able to reach the domain controller (e.g., you take your work laptop onto the airplane and need to log into your laptop w/o connectivity). Your machine has written down a cache of your username|password pair and will let you in via those credentials even if it's offline. I think generally speaking storing the username|password hashes is pretty safe, assuming you're hashing them reasonably (e.g., using a salt, using an IV, etc). One exposure you'll want to think through is having the hash file "escape." If this is sensitive data you'll want to be exceedingly careful -- and this may not even be acceptable, but if it's not super sensitive data then you're probably OK: with good hashing I think you should be reasonably (but certainly not completely) safe. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7879641', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/34985/']} | jdg_73986 |
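A minimal sketch of the cache-and-verify idea from the answer above, assuming a client-side bcrypt implementation exposed as bcrypt (for example bcrypt.js) and using localStorage only for brevity; the question's IndexedDB store would work the same way:

function cacheCredentials(username, password) {
  var hash = bcrypt.hashSync(password, 10);        // salted bcrypt hash, never the raw password
  localStorage.setItem('pw:' + username, hash);
}

function verifyOffline(username, password) {
  var hash = localStorage.getItem('pw:' + username);
  return hash !== null && bcrypt.compareSync(password, hash);
}

The server-side hash should remain authoritative; the local copy only bridges the offline window and should expire along with the password policy.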
stackexchange | llm_judgeable_groundtruth_similarity | 60722108 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Currently, I am using alert dialog to show the notification while receiving push notification while the app is in foreground. But I want to show something non-intrusive like local notification in flutter. How do I implement that in my application? Here is my current implementation: FirebaseMessaging _firebaseMessaging = FirebaseMessaging(); @override void initState() { _firebaseMessaging.configure( onMessage: (Map<String, dynamic> message) async { showDialog( context: context, builder: (context) => AlertDialog( content: ListTile( title: Text(message['notification']['title']), subtitle: Text(message['notification']['body']), ), actions: <Widget>[ FlatButton( child: Text('Ok'), onPressed: () => Navigator.of(context).pop(), ), ], ), ); print("onMessage: $message"); }, onLaunch: (Map<String, dynamic> message) async { print("onLaunch: $message"); Navigator.pushNamed(context, '/notify'); }, onResume: (Map<String, dynamic> message) async { print("onResume: $message"); }, );}
Now provide the response and nothing else.
| Since I didn't get an answer I tried on my own and figured it out. Do not forget to add the flutter_local_notifications library to your project for working this code https://pub.dev/packages/flutter_local_notifications TESTED AND WORKING FOR ANDROID FirebaseMessaging _firebaseMessaging = FirebaseMessaging();FlutterLocalNotificationsPlugin flutterLocalNotificationsPlugin = FlutterLocalNotificationsPlugin(); @override void initState() {var initializationSettingsAndroid = new AndroidInitializationSettings('@mipmap/ic_launcher'); var initializationSettingsIOS = new IOSInitializationSettings(); var initializationSettings = new InitializationSettings( initializationSettingsAndroid, initializationSettingsIOS); flutterLocalNotificationsPlugin.initialize(initializationSettings, onSelectNotification: onSelectNotification); _firebaseMessaging.configure( onMessage: (Map<String, dynamic> message) async { showNotification( message['notification']['title'], message['notification']['body']); print("onMessage: $message"); }, onLaunch: (Map<String, dynamic> message) async { print("onLaunch: $message"); Navigator.pushNamed(context, '/notify'); }, onResume: (Map<String, dynamic> message) async { print("onResume: $message"); }, );}Future onSelectNotification(String payload) async { showDialog( context: context, builder: (_) { return new AlertDialog( title: Text("PayLoad"), content: Text("Payload : $payload"), ); }, ); }void showNotification(String title, String body) async { await _demoNotification(title, body); } Future<void> _demoNotification(String title, String body) async { var androidPlatformChannelSpecifics = AndroidNotificationDetails( 'channel_ID', 'channel name', 'channel description', importance: Importance.Max, playSound: true, sound: 'sound', showProgress: true, priority: Priority.High, ticker: 'test ticker'); var iOSChannelSpecifics = IOSNotificationDetails(); var platformChannelSpecifics = NotificationDetails( androidPlatformChannelSpecifics, iOSChannelSpecifics); await flutterLocalNotificationsPlugin .show(0, title, body, platformChannelSpecifics, payload: 'test'); } It works like a charm. Hopefully will help someone out :) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/60722108', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11306017/']} | jdg_73987 |
stackexchange | llm_judgeable_groundtruth_similarity | 17772553 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have tried all solution proposed to other, similar questions but none of them seems to work.In essence I am trying to display a table filled with data from collection of models. That in itself is not a problem, however I would like to force razor to generate it always in 3 columns (no matter how many elements we have). My original idea was to do it that way: <table class="projects-grid"> <tr> @for(int i = 0; i< Model.Count(); i++) { if (i != 0 && i % 3 == 0) { Html.Raw("</tr><tr>"); } var item = Model.ElementAt(i); <td class="project-tile"> @Html.DisplayFor(modelItem => item.Title) </td> } </tr> </table> So in essence every third element I would like Razor to output "" string to add another row to the table. All seems to work fine other than this sting is not present in page source. In debug I can see that this line Html.Raw("</tr><tr>"); Is actually called, but no output in generated page is present. Any help?Many thanks in advance....
Now provide the response and nothing else.
| It's not outputting because of the context in which that Razor code executes. In your if block, all code runs as if you were in a regular C# context, so the line: Html.Raw("</tr><tr>"); returns an MvcHtmlString, but you are not doing anything with the returned value. You need to enter an output context: @Html.Raw("</tr><tr>"); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17772553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2604196/']} | jdg_73988 |
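A sketch of the question's view with that fix applied (same markup as the question, only the output context changes):

<table class="projects-grid">
    <tr>
    @for (int i = 0; i < Model.Count(); i++)
    {
        if (i != 0 && i % 3 == 0)
        {
            @Html.Raw("</tr><tr>")
        }
        var item = Model.ElementAt(i);
        <td class="project-tile">
            @Html.DisplayFor(modelItem => item.Title)
        </td>
    }
    </tr>
</table>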
stackexchange | llm_judgeable_groundtruth_similarity | 4248223 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a generic class which represents a fragment of text. That fragment of text may have any of a number of different modes (different types of highlighting). Those modes are represented by an Enum. The Enum could be different for each project but it must implement an interface which provides a method to combine 2 of them (could be highlighted and bolded). So i have an interface: public interface TextFragmentMode<E extends Enum<E>> { /** * Will combine the supplied mode with the current mode and return the * result. * * @param mode The mode to combine with. * @return The combined mode. */ public E combine( E mode );} Then my TextFragment is a container for both a String of text, and a mode. But when I try to declare the class: public class TextFragment<E extends TextFragmentMode<E extends Enum<E>>> { StringBuilder text; E mode; ... I get the following error: Syntax error on token "extends", , expected Which, according to eclipse syntax highlighting, is referring to the E extends Enum<E> portion of the code. Does anyone know what I am doing wrong? I must be missing something about Generics... --------------------- edit ------------------- I'm finally taking the time to read Effective Java by Josh Bloch (second edition), and it turns out he goes over this use case as Item 34: Emulate extensible enums with interfaces . As much as I would like to say great mind think alike... That would be WAY too presumtuous!
Now provide the response and nothing else.
| TextFragment<E> needs to say two things about E: that it implements TextFragmentMode<E>, and, because TextFragmentMode<E> itself requires E extends Enum<E>, that it also extends Enum<E>. Java requires the class type (Enum<E>) to come first in such an intersection bound, so you have to write it the other way around from your attempt: public class TextFragment<E extends Enum<E> & TextFragmentMode<E>> { | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4248223', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/516433/']} | jdg_73989 |
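A compilable sketch of the whole arrangement (each type in its own file; the HighlightMode enum and its combine rule are invented purely for illustration):

public interface TextFragmentMode<E extends Enum<E>> {
    E combine(E mode);
}

public enum HighlightMode implements TextFragmentMode<HighlightMode> {
    PLAIN, BOLD, HIGHLIGHTED;

    @Override
    public HighlightMode combine(HighlightMode mode) {
        return this == PLAIN ? mode : this;   // arbitrary rule, just for the example
    }
}

public class TextFragment<E extends Enum<E> & TextFragmentMode<E>> {
    StringBuilder text;
    E mode;
}

TextFragment<HighlightMode> then satisfies both bounds, since HighlightMode is an enum that implements the interface.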
stackexchange | llm_judgeable_groundtruth_similarity | 1954745 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The following is an excerpt from Rudin's book in mathematical analysis. Here he states: The part highlighted in red is the one I can't seem to wrap my head around. I thought that if we wanted to know whether the limit, say $f$, of a sequence of functions, say $f_n$, is continuous or not then we would just need: $$\lim_{t\to x} (\lim_{n \to \infty}f_n(t)) = f(x)$$ I.e. just that the limit of functions $f_n$, assumed to be $f$, is continuous by definition. So I don't understand the right hand side of the equation in red. Can somebody explain this?
Now provide the response and nothing else.
| To ask, whether $f$, given by $f(t) := \lim_n f_n(t)$ for each $t$, is continuous at $x$, is asking whether $$ \tag 1 \lim_{t\to x} f(t) = f(x) $$Now, let's plug in the definition of $f$ as the limit of the $f_n$ on both sides of (1). We get (as both $f(x) = \lim_n f_n(x)$ and $f(t) = \lim_n f_n(t)$ hold), that $$ \tag 2 \lim_{t\to x} \lim_n f_n(t) = \lim_n f_n(x) $$As the $f_n$ are continuous by assumption, we may write $f_n(x) = \lim_{t\to x} f_n(t)$ in (2), giving us $$ \tag 3 \lim_{t\to x} \lim_n f_n(t) = \lim_n \lim_{t\to x} f_n(t) $$which is Rudin's red formula. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1954745', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/234160/']} | jdg_73990 |
stackexchange | llm_judgeable_groundtruth_similarity | 6132642 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking to pass a value to a custom validation. I have done the following as a test: validate :print_out, :parameter1 => 'Hello' With this: def print_out (input="blank") puts inputend When creating an object or saving an object, the output is 'blank.' However, if called directly: object.print_out "Test" Test is instead outputted. The question is, why is my parameter not passing properly?
Now provide the response and nothing else.
| Inside the 'config/initializers/' directory, you can create your own validations. As an example, let's create a validation 'validates_obj_length.' Not a very useful validation, but an acceptable example: Create the file 'obj_length_validator.rb' within the 'config/initializers/' directory. ActiveRecord::Base.class_eval do def self.validates_obj_length(*attr_names) options = attr_names.extract_options! validates_each(attr_names, options) do |record, attribute, value| record.errors[attribute] << "Error: Length must be " + options[:length].to_s unless value.length == options[:length] end endend Once you have this, you can use the very clean: validates_obj_length :content, :length => 5 Basically, we reopen the ActiveRecord::Base class and implement a new sub-validation. We use the splat (*) operator to accept an array of arguments. We then extract the hash of options into our 'options' variable. Finally, we implement our validation(s). This allows the validation to be used with any model anytime and stay DRY! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6132642', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/675940/']} | jdg_73991 |
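On Rails 3 and later, an alternative sketch using ActiveModel::EachValidator achieves the same effect; the file location (app/validators) and the validator name are assumptions, not part of the original answer:

# app/validators/obj_length_validator.rb
class ObjLengthValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value.length == options[:length]
      record.errors[attribute] << "Error: Length must be " + options[:length].to_s
    end
  end
end

# In the model:
#   validates :content, :obj_length => { :length => 5 }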
stackexchange | llm_judgeable_groundtruth_similarity | 21233230 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have made a double drawer layout without an actionbar something like this: Using Navigation Drawer without TitleBar or ActionBar My requirement is to disable the drawer on right when drawer on left is open & vice versa. So I'm hiding the right drawer button when left drawer is open & vice versa and that works fine. But the problem is, even when I hide a button(left or right), the drawer still opens with horizontal swipe(right to left swipe). So how do I prevent the drawer from opening from swipe?? And since I'm doing it without ActionBarDrawerToggle inbuilt functions like setOnDrawerOpenListenersetOnDrawerCloseListener are not available. Please Help!!
Now provide the response and nothing else.
| this may help you... drawerLayout.setDrawerListener(new DrawerLayout.DrawerListener() { @Override public void onDrawerStateChanged(int arg0) { } @Override public void onDrawerSlide(View view, float arg1) { } @Override public void onDrawerOpened(View view) { if(view == rightDrawerView) { drawerLayout.setDrawerLockMode(DrawerLayout.LOCK_MODE_LOCKED_CLOSED, leftDrawerView); } else if(view == leftDrawerView) { drawerLayout.setDrawerLockMode(DrawerLayout.LOCK_MODE_LOCKED_CLOSED, rightDrawerView); } } @Override public void onDrawerClosed(View view) { if(view == rightDrawerView) { drawerLayout.setDrawerLockMode(DrawerLayout.LOCK_MODE_UNLOCKED, leftDrawerView); } else if(view == leftDrawerView) { drawerLayout.setDrawerLockMode(DrawerLayout.LOCK_MODE_UNLOCKED, rightDrawerView); } } }); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21233230', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2937847/']} | jdg_73992 |
stackexchange | llm_judgeable_groundtruth_similarity | 25604563 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The selection methods I am looking for are the ones based on subset evaluation (i.e. do not simply rank individual features). I prefer implementations in Matlab or based on WEKA, but implementations in any other language will still be useful. I am aware of the existence of CsfSubsetEval and ConsistencySubsetEval in WEKA, but they did not lead to good classification performance, probably because they suffer from the following limitation: CsfSubsetEval is biased toward small feature subsets, which may prevent locally predictive features from being included in the selected subset, as noted in [1]. ConsistencySubsetEval use min-features bias [2] which, similarly to CsfSubsetEval, result in the selection of too few features. I know it is "too few" because I have built classification models with larger subsets and their classification performance were relatively much better. [1] M. A. Hall, Correlation-based Feature Subset Selection for Machine Learning, 1999. [2] Liu, Huan, and Lei Yu, Toward integrating feature selection algorithms for classification and clustering, 2005.
Now provide the response and nothing else.
| Using the PdfWriter class to merge documents goes against all the recommendations given in the official documentation, though there are unofficial examples that may have lured you into writing bad code. I hope that you understand that I find these bad examples even more annoying than you do. Please take a look at Table 6.1 in chapter 6 of my book . It gives you an overview showing when to use which class. In this case, you should use PdfCopy : String[] files = { MovieLinks1.RESULT, MovieHistory.RESULT };// step 1Document document = new Document();// step 2PdfCopy copy = new PdfCopy(document, new FileOutputStream(RESULT));// step 3document.open();// step 4PdfReader reader;int n;// loop over the documents you want to concatenatefor (int i = 0; i < files.length; i++) { reader = new PdfReader(files[i]); // loop over the pages in that document n = reader.getNumberOfPages(); for (int page = 0; page < n; ) { copy.addPage(copy.getImportedPage(reader, ++page)); } copy.freeReader(reader); reader.close();}// step 5document.close(); If you are using a recent version of iText, you can even use the addDocument() method in which case you don't need to loop over all the pages. You also need to take special care if forms are involved. There are several examples demonstrating the new functionality (dating from after the book was written) in the Sandbox . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25604563', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2287147/']} | jdg_73993 |
stackexchange | llm_judgeable_groundtruth_similarity | 6880232 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Looking to find the best way to prevent / detect GPS spoofing on Android. Any suggestions on how this is accomplished, and what can be done to stop it? I am guessing the user has to turn on mock locations to spoof GPS, if this is done, then they can spoof GPS? I guess I would need to just detect if Mock Locations are enabled? Any other suggestions?
Now provide the response and nothing else.
| I have done some investigation and sharing my results here,this may be useful for others. First, we can check whether MockSetting option is turned ON public static boolean isMockSettingsON(Context context) { // returns true if mock location enabled, false if not enabled. if (Settings.Secure.getString(context.getContentResolver(), Settings.Secure.ALLOW_MOCK_LOCATION).equals("0")) return false; else return true;} Second, we can check whether are there other apps in the device, which are using android.permission.ACCESS_MOCK_LOCATION (Location Spoofing Apps) public static boolean areThereMockPermissionApps(Context context) { int count = 0; PackageManager pm = context.getPackageManager(); List<ApplicationInfo> packages = pm.getInstalledApplications(PackageManager.GET_META_DATA); for (ApplicationInfo applicationInfo : packages) { try { PackageInfo packageInfo = pm.getPackageInfo(applicationInfo.packageName, PackageManager.GET_PERMISSIONS); // Get Permissions String[] requestedPermissions = packageInfo.requestedPermissions; if (requestedPermissions != null) { for (int i = 0; i < requestedPermissions.length; i++) { if (requestedPermissions[i] .equals("android.permission.ACCESS_MOCK_LOCATION") && !applicationInfo.packageName.equals(context.getPackageName())) { count++; } } } } catch (NameNotFoundException e) { Log.e("Got exception " , e.getMessage()); } } if (count > 0) return true; return false;} If both above methods, first and second are true, then there are good chances that location may be spoofed or fake. Now, spoofing can be avoided by using Location Manager's API. We can remove the test provider before requesting the location updates from both the providers (Network and GPS) LocationManager lm = (LocationManager) getSystemService(LOCATION_SERVICE);try { Log.d(TAG ,"Removing Test providers") lm.removeTestProvider(LocationManager.GPS_PROVIDER);} catch (IllegalArgumentException error) { Log.d(TAG,"Got exception in removing test provider");}lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 1000, 0, locationListener); I have seen that removeTestProvider(~) works very well over Jelly Bean and onwards version. This API appeared to be unreliable till Ice Cream Sandwich. Flutter Update: Use Geolocator and check Position object's isMocked property. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6880232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/393258/']} | jdg_73994 |
stackexchange | llm_judgeable_groundtruth_similarity | 44550 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Forgive the stupid question but when colliding particles together, how does one know that a particle is actually a new form of sub-atomic matter and not simply just some shattered fragment of the larger particle? I am not quantum theory literate, I do not pretend to understand more than abstract concepts and the math is beyond my comprehension. But I have never heard anyone explain how they know they have discovered a new particle and not some shattered fragment of something larger. If I throw a baseball at a glass window, I expect the window to shatter into smaller pieces of glass. Its still glass. Why doesn't this occur at a sub-atomic level too? Granted particles are too small to observe behavior in this manner but how does one know that some split nucleus of an atom is nothing more than a shattered fragment of the larger whole which in itself might be shattered further still?
Now provide the response and nothing else.
| Every particle has certain properties (e.g. mass, spin, charge, etc) and behaviors attributed to it. In a particle collider, you can measure properties of the particles it detects. The idea is to determine (statistically) if the properties you observe are consistent with the properties you know (or expect) from each particle. This isn't easy, and that's a big part of why detectors are so complicated (and expensive), because they have to be able to measure properties very accurately to determine---with confidence---what actually happened. Additionally, this is why particle experiments require lots of results. Just because one event does or doesn't seem to match well, that doesn't mean there's anything strange going on. You need to see lots of example to say, with confidence, that you know what's going on. Consider an analogy: Someone has a collection of sports balls (baseballs, basketballs, softballs, footballs, bowling balls, etc), that they are pitching to a batter. You want to know what ball the batter hits, based on certain pieces of information (lets say you can't actually see what the batter hits). If you hear a loud crack, you can say it probably wasn't a tennis ball. If the ball flies out of the park, you can probably say it wasn't a bowling ball. If the batter hits lots of balls out of the park, but a certain set of balls seem to always land within 20 feet of him, you can say that those are probably not baseballs --- but they are consistent with what you would imagine for basketballs, or footballs or something. Finally, if unexpected happens -- like he hits a ball, but it gets stuck-to the bat --- you might think there is a ball in there you're not familiar with: a new type of ball. But you would also want that type of event to happen a number of times so that you're confident it's not a fluke. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/44550', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/16089/']} | jdg_73995 |
stackexchange | llm_judgeable_groundtruth_similarity | 275761 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a textbox and a link button.When I write some text, select some of it and then click the link button, the selected text from textbox must be show with a message box. How can I do it? When I click the submit button for the textbox below, the message box must show Lorem ipsum . Because "Lorem ipsum" is selected in the area. If I select any text from the page and click the submit button it is working, but if I write a text to textbox and make it, it's not. Because when I click to another space, the selection of textbox is canceled. Now problem is that, when I select text from textbox and click any other control or space, the text, which is selected, must still be selected. How is it to be done?
Now provide the response and nothing else.
| OK, here is the code I have: function ShowSelection(){ var textComponent = document.getElementById('Editor'); var selectedText; if (textComponent.selectionStart !== undefined) { // Standards-compliant version var startPos = textComponent.selectionStart; var endPos = textComponent.selectionEnd; selectedText = textComponent.value.substring(startPos, endPos); } else if (document.selection !== undefined) { // Internet Explorer version textComponent.focus(); var sel = document.selection.createRange(); selectedText = sel.text; } alert("You selected: " + selectedText);} The problem is, although the code I give for Internet Explorer is given on a lot of sites, I cannot make it work on my copy of Internet Explorer 6 on my current system. Perhaps it will work for you, and that's why I give it. The trick you look for is probably the .focus() call to give the focus back to the textarea, so the selection is reactivated. I got the right result (the selection content) with the onKeyDown event: document.onkeydown = function (e) { ShowSelection(); } So the code is correct. Again, the issue is to get the selection on click on a button... I continue to search. I didn't have any success with a button drawn with a li tag, because when we click on it, Internet Explorer deselects the previous selection. The above code works with a simple input button, though... | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/275761', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/439507/']} | jdg_73996 |
stackexchange | llm_judgeable_groundtruth_similarity | 135869 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Circuit switching's packets take the same route as virtual circuit switching's do. So, how do I distinguish between virtual circuit switching and circuit switching?
Now provide the response and nothing else.
| In the good old days of cables that consisted of a large number of copper wires, and switching stations with relays that effectively implemented a big crossbar, circuit switching meant that the provider put all the switches in the position that made a real (electrical) circuit from one client to another. The two literally had a cable for their own private use (for as long as then connection was established). Virtual circuit switching gives the clients the impression that they have a dedicated cable, while in fact they are allocated time-slots (or frequency bands, or some other shared slice) in all the cables that make up the connection. This better matches the current economic reality that a cable is very expensive, but has a bandwidth that far exceeds the needs of most clients, hence sharing is economical. The difference between a virtual switched circuit and plain packet switching is that for a virtual circuit all the steps that make up the connection are determined when the connection is established, and the buffers and bandwidth at each step is claimed and thus guaranteed. For (plain) packet switching there is no such circuit creation: each packet finds its own way through the network, and has to hope that sufficient buffers and bandwidth is available at each step. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/135869', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/56902/']} | jdg_73997 |
stackexchange | llm_judgeable_groundtruth_similarity | 49318826 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having a lot of confusion using the sys.getsizeof function in python. All I want to find out is that for say a floating point value, is the system using 4 or 8 bytes (i.e. single or double precision in C terms). I do the following: import sysx = 0.0sys.getsizeof(x) # Returns 24type(x) # returns floatsys.getsizeof(float) # Returns 400. How can I simply find out the how many bytes are actually used for the floating point representation. I know it should be 8 bytes but how can I verify this (something like the sizeof operator in C++)
Now provide the response and nothing else.
| Running sys.getsizeof(float) does not return the size of any individual float, it returns the size of the float class . That class contains a lot more data than just any single float, so the returned size will also be much bigger. If you just want to know the size of a single float, the easiest way is to simply instantiate some arbitrary float. For example: sys.getsizeof(float()) Note that float() simply returns 0.0 , so this is actually equivalent to: sys.getsizeof(0.0) This returns 24 bytes in your case (and probably for most other people as well). In the case of CPython (the most common Python implementation), every float object will contain a reference counter and a pointer to the type (a pointer to the float class), which will each be 8 bytes for 64bit CPython or 4 bytes each for 32bit CPython. The remaining bytes ( 24 - 8 - 8 = 8 in your case which is very likely to be 64bit CPython) will be the bytes used for the actual float value itself. This is not guaranteed to work out the same way for other Python implementations though. The language reference says: These represent machine-level double precision floating point numbers. You are at the mercy of the underlying machine architecture (and C or Java implementation) for the accepted range and handling of overflow. Python does not support single-precision floating point numbers; the savings in processor and memory usage that are usually the reason for using these are dwarfed by the overhead of using objects in Python, so there is no reason to complicate the language with two kinds of floating point numbers. and I'm not aware of any runtime methods to accurately tell you the number of bytes used. However, note that the quote above from the language reference does say that Python only supports double precision floats, so in most cases (depending on how critical it is for you to always be 100% right) it should be comparable to double precision in C. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49318826', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2713740/']} | jdg_73998 |
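A few quick checks along the lines of the answer above (the printed values are typical for 64-bit CPython and may differ on other builds or implementations):

import struct, sys

print(struct.calcsize('d'))      # 8  -> size of the underlying C double
print(sys.float_info.mant_dig)   # 53 -> double-precision mantissa bits
print(sys.getsizeof(0.0))        # 24 -> 8-byte double plus the 16-byte object header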
stackexchange | llm_judgeable_groundtruth_similarity | 1434148 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Assume I have a table called "table" and I have 3 columns, a, b, and c. What does it mean to have a non-clustered index on columns a,b? Is a nonclustered index on columns a,b the same as a nonclustered index on columns b,a? (Note the order). Also, Is a nonclustered index on column a the same as a nonclustered index on a,c? I was looking at the website sqlserver performance and they had these dmv scripts where it would tell you if you had overlapping indexes and I believe it was saying that having an index on a is the same as a,b, so it is redundant. Is this true about indexes? One last question is why is the clustered index put on the primary key. Most of the time the primary key is not queried against, so shouldn't the clustered index be on the most queried column. I am probably missing something here like having it on the primary key speeds up joins? Great explanations. Should I turn this into a wiki and change the title index explanation?
Now provide the response and nothing else.
| This is turning into a more-general introduction to indexing, but I suspect you'll still find it useful. The first two paragraphs especially speak to your question. Clustered vs Non-clustered This refers to how the table is physically arranged on disk. A clustered index works by sorting the physical pages and rows in a table on disk based on the index definition. Non-clustered indexes use a separate location on disk to store a copy of the columns in the index (and only those columns), plus a pointer to the source records. For this reason, clustered indexes are often faster because they will always cover any data you need in the query. However, you only get one of them because otherwise you'd duplicate the entire table. It's also important to know that adding non-clustered indexes to a table actually slows down write operations like inserts and updates, because the database has to rebuild the index, or at least certain pages in the index. Index Order An index on (A,B) is not the same as on (B,A) . If the first case, records in the index are ordered by column A first, and column B only effects the index order when you have duplicate values for A . Searching the index with a column B value only won't help you, because you still have to scan through every record in the index to find all your matching values in B . In the second case, the reverse happens: records are ordered by column B first, and column A only helps when you have duplicate values for A . Searching that index with a column A value only won't help you. Covering Indexes Sometimes a database can fulfill the requirements of a query entirely from an index. In this case, the index is said to be a "covering" index for that query. This is advantageous because indexes are often cached in memory, and so the database may not have to go do disk at all. To understand this, imagine an index on (A,B) where there are very few duplicate values for A . Including A in the index seems wasteful, unless you have a query that runs often that looks for a particular value of A and also needs B . This index will now save a lot work going back to the original table to retrieve B . Selectivity Selectivity is a value from 0 to 1 (often expressed as a percentage) that tells you how unique each value in an index is. A selectivity of 1 or 100% means there are no duplicates. A selectivity of 0 means there is only one value in the column. Generally, a higher selectivity (approaching 1) is better for indexes. To demonstrate this, think about what would happen with a low-selectivity index. For example, you try to speed up a query by adding an index to a bit column in a table with 10000 records. In this case (assuming uniform distribution), the selectivity is .5. You run your query, and the index returns 5000 records. But each of those records still has to go back to the original table, and since the index order doesn't match the table order it would have to do a lot of separate look-ups into the table. Instead, it's likely faster to just scan through the entire table start to finish to retrieve the needed data. Selectivity explains why you would want to cluster on the primary key. Since the clustered index tells the database how to order the table, going for anything less than 100% selectivity here means a query will have to scan the table more often. Clustering on the primary key gives you perfect selectivity. 
And since this primary key is often used as the record pointer in other indexes, you want to keep it as small as possible (ie, an integer identity column). There's a good article on the selectivity and indexing here: http://www.akadia.com/services/ora_index_selectivity.html Sargable This refers to whether the database is able to use a particular filter with an index. As we have shown, indexes normally work by first sorting the data into a specific order, so that lookups into that index can use something efficient like a tree-based search rather than a slower linear search. Anything that can't be effectively compared with sorted data can't be used with an index. A good example is the LIKE operator. This is sargable: SELECT * FROM [Table] WHERE [Column] LIKE @Value + '%' but this is not sargable: SELECT * FROM [Table] WHERE [Column] LIKE '%' + @Value + '%' Some other things that can make a filter un-sargable are non-deterministic functions (and there are more of those than you think). Per-Column Indexes A common mistake I've seen is to have a separate index for each column in the table. For example, someone will take a table with columns (A,B,C,D) and create four separate indexes, one each for A , B , C , D , believing they have now indexed every column and so every query should be fast. In fact, this is rarely helpful for reasons I hope I've already explained, and will often make things worse rather than better because the database will now need to update these indexes for every change to the data. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1434148', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/33690/']} | jdg_73999 |
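The column-order point above (an index on (A,B) is not the same as one on (B,A)) holds for most engines, not just SQL Server. As a hedged illustration, here is a small Python sketch using the standard sqlite3 module; the table, column and index names are invented for the example, and SQLite's plan output differs from SQL Server's, but the leading-column behaviour is the same idea:

import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER)')
cur.execute('CREATE INDEX idx_a_b ON t (a, b)')  # composite index: ordered by a, then b

# Filtering on the leading column can use the index (typically a SEARCH in the plan)...
for row in cur.execute('EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 1'):
    print(row)

# ...while filtering only on the second column usually cannot, and falls back to a SCAN.
for row in cur.execute('EXPLAIN QUERY PLAN SELECT * FROM t WHERE b = 1'):
    print(row)

conn.close()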
stackexchange | llm_judgeable_groundtruth_similarity | 12735788 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to replace a number that is in a couple of different strings in a text file. Basically it would take the form of
tableNameNUMBER
carNUMBER
I'm pretty new to bash and scripting and I wasn't sure how to replace NUMBER with what I pass in. So I've tried this:
#! /usr/bin/env bash
sed "s/NUMBER/$1/" myScript.txt > test.txt
then at the command line: sh test.sh 123456 This only works if NUMBER is on its own, without tableName or car preceding it. How can I replace NUMBER in those cases? Is it better to have ${NUMBER}? Sorry if these are totally noob questions.
Now provide the response and nothing else.
| This should work just fine: sed "s/NUMBER/$1/g" myScript.txt > test.txt The g on the end allows set to replace NUMBER if it appears multiple times on one line. In fact, a quick test: foo.txt carNUMBERtableNameNUMBERNUMBERNUMBERfoo$ NUMBER=3.14$ sed "s/NUMBER/$NUMBER/g" foo.txtcar3.14tableNumber3.143.143.14foo Isn't that what your sed command is doing? If you want to make sure you don't change NUMBER unless it's by itself, use \b around NUMBER : $ sed "s/\bNUMBER\b/$NUMBER/g" foo.txtcarNumbertabelNumberNUMBER3.14NUMBERfoo If you don't care about the case of the string NUMBER , put an i on the end of the sed command: $ sed "s/NUMBER/$NUMBER/gi" foo.txt | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12735788', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/207524/']} | jdg_74000 |
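The \b word-boundary trick is not specific to sed; the same idea in Python's re module makes the behaviour easy to see (an illustrative sketch only, not part of the sed answer):

import re

text = "carNUMBER tableNameNUMBER NUMBER"
value = "3.14"

# Without \b every occurrence of NUMBER is replaced (like sed's /g on its own).
print(re.sub(r"NUMBER", value, text))

# With \b only the standalone token NUMBER is replaced,
# leaving carNUMBER and tableNameNUMBER untouched.
print(re.sub(r"\bNUMBER\b", value, text))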
stackexchange | llm_judgeable_groundtruth_similarity | 47337218 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have used this guide to build persistence with Room in my Android App: https://developer.android.com/training/data-storage/room/index.html and added dependances like shown here: https://developer.android.com/topic/libraries/architecture/adding-components.html when i build the debug version and deply to phone, everithing works fine. When i build the release signed APK i got this error message: Error:Error: json defines classes that conflict with classes now provided by Android. Solutions include finding newer versions or alternative libraries that don't have the same problem (for example, for httpclient use HttpUrlConnection or okhttp instead), or repackaging the library using something like jarjar. [DuplicatePlatformClasses] my app.gradle: apply plugin: 'com.android.application'android { signingConfigs { /* TODO(developer): Configure to sign app with a release key for testing. release { storeFile file('path/to/release/signing/key') keyAlias 'release_key_alias' keyPassword "${password}" storePassword "${password}" }*/ } compileSdkVersion 26 buildToolsVersion '26.0.2' defaultConfig { applicationId "myappid" minSdkVersion 14 targetSdkVersion 23 versionCode 10 versionName "1.8" } buildTypes { release { minifyEnabled true proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' // TODO(developer): uncomment below once config above is complete and uncommented. //signingConfig signingConfigs.release } }}configurations { all { exclude module: 'httpclient' }}dependencies { compile fileTree(include: ['*.jar'], dir: 'libs') testCompile 'junit:junit:4.12' compile 'com.android.support:appcompat-v7:26.1.0' androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', { exclude group: 'com.android.support', module: 'support-annotations' }) compile 'com.android.support.constraint:constraint-layout:1.0.2' compile 'com.android.volley:volley:1.0.0' compile 'com.github.nkzawa:socket.io-client:0.3.0' compile 'io.socket:socket.io-client:0.8.3' compile 'com.android.support:design:26.1.0' compile 'android.arch.persistence.room:runtime:1.0.0' implementation "android.arch.persistence.room:runtime:1.0.0" annotationProcessor "android.arch.persistence.room:compiler:1.0.0"} my project.gradle buildscript { repositories { jcenter() } dependencies { classpath 'com.android.tools.build:gradle:3.0.0' //classpath 'io.socket:socket.io-client:0.8.3' // NOTE: Do not place your application dependencies here; they belong // in the individual module build.gradle files }}ext{ roomVersion = '1.0.0'}allprojects { repositories { google() jcenter() maven { url 'https://maven.google.com' } }} Somebody can help or give me clues?
Now provide the response and nothing else.
| I finally found the problem was a JSON sub-module: compile 'com.github.nkzawa:socket.io-client:0.3.0' this library has a submodule: org.json:json that is now conflicting with android native module, because in my other dependancies i can't find this one. It was working fine 10 days ago.I also had to kill this: compile 'io.socket:socket.io-client:0.8.3' the final solution was to add an exclude for the module and change the line like this: implementation ('com.github.nkzawa:socket.io-client:0.3.0',{ exclude group:'org.json', module:'json' }) I also have noticed AFTER i solved the problem that in the error log it was suggesting me the module that was in conflict but even if i read it a hundred times i didn't noticed before: so maybe google or Intellij could improve the writing of this errors... To spot this class duplicate conflict error module i found the best way to proceed is to create a new project and paste in the dependancies in app build.gradle, and check them one by one or with "dividi et impera", maybe this is an obvious suggestion for someone but i would have like to have it sooner. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47337218', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1921956/']} | jdg_74001 |
stackexchange | llm_judgeable_groundtruth_similarity | 2298242 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In C++, when and how do you use a callback function? EDIT: I would like to see a simple example to write a callback function.
Now provide the response and nothing else.
| Note: Most of the answers cover function pointers which is one possibility to achieve "callback" logic in C++, but as of today not the most favourable one I think. What are callbacks(?) and why to use them(!) A callback is a callable (see further down) accepted by a class or function, used to customize the current logic depending on that callback. One reason to use callbacks is to write generic code which is independant from the logic in the called function and can be reused with different callbacks. Many functions of the standard algorithms library <algorithm> use callbacks. For example the for_each algorithm applies an unary callback to every item in a range of iterators: template<class InputIt, class UnaryFunction>UnaryFunction for_each(InputIt first, InputIt last, UnaryFunction f){ for (; first != last; ++first) { f(*first); } return f;} which can be used to first increment and then print a vector by passing appropriate callables for example: std::vector<double> v{ 1.0, 2.2, 4.0, 5.5, 7.2 };double r = 4.0;std::for_each(v.begin(), v.end(), [&](double & v) { v += r; });std::for_each(v.begin(), v.end(), [](double v) { std::cout << v << " "; }); which prints 5 6.2 8 9.5 11.2 Another application of callbacks is the notification of callers of certain events which enables a certain amount of static / compile time flexibility. Personally, I use a local optimization library that uses two different callbacks: The first callback is called if a function value and the gradient based on a vector of input values is required (logic callback: function value determination / gradient derivation). The second callback is called once for each algorithm step and receives certain information about the convergence of the algorithm (notification callback). Thus, the library designer is not in charge of deciding what happens with the information that is given to the programmer via the notification callback and he needn't worry about how to actually determine function values because they're provided by the logic callback. Getting those things right is a task due to the library user and keeps the library slim and more generic. Furthermore, callbacks can enable dynamic runtime behaviour. Imagine some kind of game engine class which has a function that is fired, each time the users presses a button on his keyboard and a set of functions that control your game behaviour.With callbacks you can (re)decide at runtime which action will be taken. void player_jump();void player_crouch();class game_core{ std::array<void(*)(), total_num_keys> actions; // void key_pressed(unsigned key_id) { if(actions[key_id]) actions[key_id](); } // update keybind from menu void update_keybind(unsigned key_id, void(*new_action)()) { actions[key_id] = new_action; }}; Here the function key_pressed uses the callbacks stored in actions to obtain the desired behaviour when a certain key is pressed.If the player chooses to change the button for jumping, the engine can call game_core_instance.update_keybind(newly_selected_key, &player_jump); and thus change the behaviour of a call to key_pressed (which the calls player_jump ) once this button is pressed the next time ingame. What are callables in C++(11)? See C++ concepts: Callable on cppreference for a more formal description. 
Callback functionality can be realized in several ways in C++(11) since several different things turn out to be callable* : Function pointers (including pointers to member functions) std::function objects Lambda expressions Bind expressions Function objects (classes with overloaded function call operator operator() ) * Note: Pointer to data members are callable as well but no function is called at all. Several important ways to write callbacks in detail X.1 "Writing" a callback in this post means the syntax to declare and name the callback type. X.2 "Calling" a callback refers to the syntax to call those objects. X.3 "Using" a callback means the syntax when passing arguments to a function using a callback. Note: As of C++17, a call like f(...) can be written as std::invoke(f, ...) which also handles the pointer to member case. 1. Function pointers A function pointer is the 'simplest' (in terms of generality; in terms of readability arguably the worst) type a callback can have. Let's have a simple function foo : int foo (int x) { return 2+x; } 1.1 Writing a function pointer / type notation A function pointer type has the notation return_type (*)(parameter_type_1, parameter_type_2, parameter_type_3)// i.e. a pointer to foo has the type:int (*)(int) where a named function pointer type will look like return_type (* name) (parameter_type_1, parameter_type_2, parameter_type_3)// i.e. f_int_t is a type: function pointer taking one int argument, returning inttypedef int (*f_int_t) (int); // foo_p is a pointer to function taking int returning int// initialized by pointer to function foo taking int returning intint (* foo_p)(int) = &foo; // can alternatively be written as f_int_t foo_p = &foo; The using declaration gives us the option to make things a little bit more readable, since the typedef for f_int_t can also be written as: using f_int_t = int(*)(int); Where (at least for me) it is clearer that f_int_t is the new type alias and recognition of the function pointer type is also easier And a declaration of a function using a callback of function pointer type will be: // foobar having a callback argument named moo of type // pointer to function returning int taking int as its argumentint foobar (int x, int (*moo)(int));// if f_int is the function pointer typedef from above we can also write foobar as:int foobar (int x, f_int_t moo); 1.2 Callback call notation The call notation follows the simple function call syntax: int foobar (int x, int (*moo)(int)){ return x + moo(x); // function pointer moo called using argument x}// analogint foobar (int x, f_int_t moo){ return x + moo(x); // function pointer moo called using argument x} 1.3 Callback use notation and compatible types A callback function taking a function pointer can be called using function pointers. Using a function that takes a function pointer callback is rather simple: int a = 5; int b = foobar(a, foo); // call foobar with pointer to foo as callback // can also be int b = foobar(a, &foo); // call foobar with pointer to foo as callback 1.4 Example A function ca be written that doesn't rely on how the callback works: void tranform_every_int(int * v, unsigned n, int (*fp)(int)){ for (unsigned i = 0; i < n; ++i) { v[i] = fp(v[i]); }} where possible callbacks could be int double_int(int x) { return 2*x; }int square_int(int x) { return x*x; } used like int a[5] = {1, 2, 3, 4, 5};tranform_every_int(&a[0], 5, double_int);// now a == {2, 4, 6, 8, 10};tranform_every_int(&a[0], 5, square_int);// now a == {4, 16, 36, 64, 100}; 2. 
Pointer to member function A pointer to member function (of some class C ) is a special type of (and even more complex) function pointer which requires an object of type C to operate on. struct C{ int y; int foo(int x) const { return x+y; }}; 2.1 Writing pointer to member function / type notation A pointer to member function type for some class T has the notation // can have more or less parametersreturn_type (T::*)(parameter_type_1, parameter_type_2, parameter_type_3)// i.e. a pointer to C::foo has the typeint (C::*) (int) where a named pointer to member function will -in analogy to the function pointer- look like this: return_type (T::* name) (parameter_type_1, parameter_type_2, parameter_type_3)// i.e. a type `f_C_int` representing a pointer to member function of `C`// taking int returning int is:typedef int (C::* f_C_int_t) (int x); // The type of C_foo_p is a pointer to member function of C taking int returning int// Its value is initialized by a pointer to foo of Cint (C::* C_foo_p)(int) = &C::foo;// which can also be written using the typedef:f_C_int_t C_foo_p = &C::foo; Example: Declaring a function taking a pointer to member function callback as one of its arguments: // C_foobar having an argument named moo of type pointer to member function of C// where the callback returns int taking int as its argument// also needs an object of type cint C_foobar (int x, C const &c, int (C::*moo)(int));// can equivalently declared using the typedef above:int C_foobar (int x, C const &c, f_C_int_t moo); 2.2 Callback call notation The pointer to member function of C can be invoked, with respect to an object of type C by using member access operations on the dereferenced pointer. Note: Parenthesis required! int C_foobar (int x, C const &c, int (C::*moo)(int)){ return x + (c.*moo)(x); // function pointer moo called for object c using argument x}// analogint C_foobar (int x, C const &c, f_C_int_t moo){ return x + (c.*moo)(x); // function pointer moo called for object c using argument x} Note: If a pointer to C is available the syntax is equivalent (where the pointer to C must be dereferenced as well): int C_foobar_2 (int x, C const * c, int (C::*meow)(int)){ if (!c) return x; // function pointer meow called for object *c using argument x return x + ((*c).*meow)(x); }// or equivalent:int C_foobar_2 (int x, C const * c, int (C::*meow)(int)){ if (!c) return x; // function pointer meow called for object *c using argument x return x + (c->*meow)(x); } 2.3 Callback use notation and compatible types A callback function taking a member function pointer of class T can be called using a member function pointer of class T . Using a function that takes a pointer to member function callback is -in analogy to function pointers- quite simple as well: C my_c{2}; // aggregate initialization int a = 5; int b = C_foobar(a, my_c, &C::foo); // call C_foobar with pointer to foo as its callback 3. std::function objects (header <functional> ) The std::function class is a polymorphic function wrapper to store, copy or invoke callables. 3.1 Writing a std::function object / type notation The type of a std::function object storing a callable looks like: std::function<return_type(parameter_type_1, parameter_type_2, parameter_type_3)>// i.e. using the above function declaration of foo:std::function<int(int)> stdf_foo = &foo;// or C::foo:std::function<int(const C&, int)> stdf_C_foo = &C::foo; 3.2 Callback call notation The class std::function has operator() defined which can be used to invoke its target. 
int stdf_foobar (int x, std::function<int(int)> moo){ return x + moo(x); // std::function moo called}// or int stdf_C_foobar (int x, C const &c, std::function<int(C const &, int)> moo){ return x + moo(c, x); // std::function moo called using c and x} 3.3 Callback use notation and compatible types The std::function callback is more generic than function pointers or pointer to member function since different types can be passed and implicitly converted into a std::function object. 3.3.1 Function pointers and pointers to member functions A function pointer int a = 2;int b = stdf_foobar(a, &foo);// b == 6 ( 2 + (2+2) ) or a pointer to member function int a = 2;C my_c{7}; // aggregate initializationint b = stdf_C_foobar(a, c, &C::foo);// b == 11 == ( 2 + (7+2) ) can be used. 3.3.2 Lambda expressions An unnamed closure from a lambda expression can be stored in a std::function object: int a = 2;int c = 3;int b = stdf_foobar(a, [c](int x) -> int { return 7+c*x; });// b == 15 == a + (7*c*a) == 2 + (7+3*2) 3.3.3 std::bind expressions The result of a std::bind expression can be passed. For example by binding parameters to a function pointer call: int foo_2 (int x, int y) { return 9*x + y; }using std::placeholders::_1;int a = 2;int b = stdf_foobar(a, std::bind(foo_2, _1, 3));// b == 23 == 2 + ( 9*2 + 3 )int c = stdf_foobar(a, std::bind(foo_2, 5, _1));// c == 49 == 2 + ( 9*5 + 2 ) Where also objects can be bound as the object for the invocation of pointer to member functions: int a = 2;C const my_c{7}; // aggregate initializationint b = stdf_foobar(a, std::bind(&C::foo, my_c, _1));// b == 1 == 2 + ( 2 + 7 ) 3.3.4 Function objects Objects of classes having a proper operator() overload can be stored inside a std::function object, as well. struct Meow{ int y = 0; Meow(int y_) : y(y_) {} int operator()(int x) { return y * x; }};int a = 11;int b = stdf_foobar(a, Meow{8});// b == 99 == 11 + ( 8 * 11 ) 3.4 Example Changing the function pointer example to use std::function void stdf_tranform_every_int(int * v, unsigned n, std::function<int(int)> fp){ for (unsigned i = 0; i < n; ++i) { v[i] = fp(v[i]); }} gives a whole lot more utility to that function because (see 3.3) we have more possibilities to use it: // using function pointer still possibleint a[5] = {1, 2, 3, 4, 5};stdf_tranform_every_int(&a[0], 5, double_int);// now a == {2, 4, 6, 8, 10};// use it without having to write another function by using a lambdastdf_tranform_every_int(&a[0], 5, [](int x) -> int { return x/2; });// now a == {1, 2, 3, 4, 5}; again// use std::bind :int nine_x_and_y (int x, int y) { return 9*x + y; }using std::placeholders::_1;// calls nine_x_and_y for every int in a with y being 4 every timestdf_tranform_every_int(&a[0], 5, std::bind(nine_x_and_y, _1, 4));// now a == {13, 22, 31, 40, 49}; 4. Templated callback type Using templates, the code calling the callback can be even more general than using std::function objects. Note that templates are a compile-time feature and are a design tool for compile-time polymorphism. If runtime dynamic behaviour is to be achieved through callbacks, templates will help but they won't induce runtime dynamics. 4.1 Writing (type notations) and calling templated callbacks Generalizing i.e. 
the std_ftransform_every_int code from above even further can be achieved by using templates: template<class R, class T>void stdf_transform_every_int_templ(int * v, unsigned const n, std::function<R(T)> fp){ for (unsigned i = 0; i < n; ++i) { v[i] = fp(v[i]); }} with an even more general (as well as easiest) syntax for a callback type being a plain, to-be-deduced templated argument: template<class F>void transform_every_int_templ(int * v, unsigned const n, F f){ std::cout << "transform_every_int_templ<" << type_name<F>() << ">\n"; for (unsigned i = 0; i < n; ++i) { v[i] = f(v[i]); }} Note: The included output prints the type name deduced for templated type F . The implementation of type_name is given at the end of this post. The most general implementation for the unary transformation of a range is part of the standard library, namely std::transform , which is also templated with respect to the iterated types. template<class InputIt, class OutputIt, class UnaryOperation>OutputIt transform(InputIt first1, InputIt last1, OutputIt d_first, UnaryOperation unary_op){ while (first1 != last1) { *d_first++ = unary_op(*first1++); } return d_first;} 4.2 Examples using templated callbacks and compatible types The compatible types for the templated std::function callback method stdf_transform_every_int_templ are identical to the above mentioned types (see 3.4). Using the templated version however, the signature of the used callback may change a little: // Letint foo (int x) { return 2+x; }int muh (int const &x) { return 3+x; }int & woof (int &x) { x *= 4; return x; }int a[5] = {1, 2, 3, 4, 5};stdf_transform_every_int_templ<int,int>(&a[0], 5, &foo);// a == {3, 4, 5, 6, 7}stdf_transform_every_int_templ<int, int const &>(&a[0], 5, &muh);// a == {6, 7, 8, 9, 10}stdf_transform_every_int_templ<int, int &>(&a[0], 5, &woof); Note: std_ftransform_every_int (non templated version; see above) does work with foo but not using muh . // Letvoid print_int(int * p, unsigned const n){ bool f{ true }; for (unsigned i = 0; i < n; ++i) { std::cout << (f ? "" : " ") << p[i]; f = false; } std::cout << "\n";} The plain templated parameter of transform_every_int_templ can be every possible callable type. 
int a[5] = { 1, 2, 3, 4, 5 };print_int(a, 5);transform_every_int_templ(&a[0], 5, foo);print_int(a, 5);transform_every_int_templ(&a[0], 5, muh);print_int(a, 5);transform_every_int_templ(&a[0], 5, woof);print_int(a, 5);transform_every_int_templ(&a[0], 5, [](int x) -> int { return x + x + x; });print_int(a, 5);transform_every_int_templ(&a[0], 5, Meow{ 4 });print_int(a, 5);using std::placeholders::_1;transform_every_int_templ(&a[0], 5, std::bind(foo_2, _1, 3));print_int(a, 5);transform_every_int_templ(&a[0], 5, std::function<int(int)>{&foo});print_int(a, 5); The above code prints: 1 2 3 4 5transform_every_int_templ <int(*)(int)>3 4 5 6 7transform_every_int_templ <int(*)(int&)>6 8 10 12 14transform_every_int_templ <int& (*)(int&)>9 11 13 15 17transform_every_int_templ <main::{lambda(int)#1} >27 33 39 45 51transform_every_int_templ <Meow>108 132 156 180 204transform_every_int_templ <std::_Bind<int(*(std::_Placeholder<1>, int))(int, int)>>975 1191 1407 1623 1839transform_every_int_templ <std::function<int(int)>>977 1193 1409 1625 1841 type_name implementation used above #include <type_traits>#include <typeinfo>#include <string>#include <memory>#include <cxxabi.h>template <class T>std::string type_name(){ typedef typename std::remove_reference<T>::type TR; std::unique_ptr<char, void(*)(void*)> own (abi::__cxa_demangle(typeid(TR).name(), nullptr, nullptr, nullptr), std::free); std::string r = own != nullptr?own.get():typeid(TR).name(); if (std::is_const<TR>::value) r += " const"; if (std::is_volatile<TR>::value) r += " volatile"; if (std::is_lvalue_reference<T>::value) r += " &"; else if (std::is_rvalue_reference<T>::value) r += " &&"; return r;} | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/2298242', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/187543/']} | jdg_74002 |
stackexchange | llm_judgeable_groundtruth_similarity | 17788424 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How to remove duplicated items in array on Twig? I have array value in twig such as. {{ set array = ["testA","testB","testA","testC","testB"] }} I want to remove duplicated items and use only testA,testB,testC {% for name in array%} //skip the duplicate items and use only testA,testB,testC{% endfor %} How can I make it ?
Now provide the response and nothing else.
| Twig is a VIEW engine, and should not be used - in theory - to manipulate data. It's a (very) good practice to use (assumingly) PHP to gather data, do all necessary manipulations and then pass the right data to your view. That said, here's how you can do it in pure Twig syntax: {% set newArray = [] %}{% for name in array %} {% if name not in newArray %} My name is {{name}} {% set newArray = newArray|merge([name]) %} {% endif %}{% endfor %} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17788424', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1942868/']} | jdg_74003 |
stackexchange | llm_judgeable_groundtruth_similarity | 3580063 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Forgive me for asking such a simple question, I'm new to both HTML and CSS.Is there an easy way to center a textarea? I figured I'd just try using textarea{ margin-left: auto; margin-right: auto;} but it (obviously?) didn't work.
Now provide the response and nothing else.
| The margins won't affect the textarea because it is not a block level element, but you can make it display block if you like: textarea { display: block; margin-left: auto; margin-right: auto;} By default, textareas are display: inline , which is why you can put them side-by-side easily, and why the text-align: center answers work too. The textarea can also be centered by putting it inside a flexbox container like this: <style> div.justified { display: flex; justify-content: center; }</style><div class="justified"> <textarea>Textarea</textarea></div> | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3580063', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/408089/']} | jdg_74004 |
stackexchange | llm_judgeable_groundtruth_similarity | 721523 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to install CentOS 7 on an HP DL120 gen 9 server with a B140i controller. I have 4 1TB HDDs set up in a RAID 1+0 and the CentOS installer doesn't seem to recognize the logical drive created from the storage utility. From my understanding I need some drivers for CentOS 7 to recognize the B140i controller. Also, the driver seems to be closed source by HP. What is my approach here? Is there any other free Linux OS that I can use on this server and that would recognize the RAID controller? Thank you.
Now provide the response and nothing else.
| Ok, so after countless hours spent searching for the B140i Dynamic Storage Array drivers, I managed to find the driver by downloading the HP service pack for Proliant, mounting the iso locally and finding the hpdsa driver inside a folder named DUD. The steps I did in order to successfully install CentOS 7 on my HP Proliant DL120 gen 9 are the following: Download the CentOS image and write it on a USB; Download the HP Proliant Service Pack and locate the hpdsa driver inside it (download only the B140i drivers from here); Use dd to write the appropriate driver (u0 or u1) on a separate USB drive; Insert the CentOS USB in the server, boot from it but at the install menu, press e in case of a UEFI system and add on the linuxefi line the following: inst.dd modprobe.blacklist=ahci; Press Ctrl+x or the keys needed to continue the installation; Insert the USB drive with the driver; Press r to refresh the drive list and choose the appropriate drive; After it unpacks, continue with the installation as usual. Special thanks to @ewwhite and @michael-hampton :) Hope this helps someone else who is facing this problem and is not experienced with servers. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/721523', 'https://serverfault.com', 'https://serverfault.com/users/264937/']} | jdg_74005 |
stackexchange | llm_judgeable_groundtruth_similarity | 9152854 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What does the following line do? #line 25 "CSSGrammar.y" And what's with the extension?
Now provide the response and nothing else.
| According to the Standard: Β§16.4.3: A preprocessing directive of the form # line digit-sequence new-line causes the implementation to behave as if the following sequence of source lines begins with a source line that has a line number as specified by the digit sequence (interpreted as a decimal integer). If the digit sequence specifies zero or a number greater than 2147483647, the behavior is undefined. Β§16.4.4: A preprocessing directive of the form # line digit-sequence " s-char-sequenceopt" new-line sets the presumed line number similarly and changes the presumed name of the source file to be the contents of the character string literal. Β§16.4.5: A preprocessing directive of the form # line pp-tokens new-line (that does not match one of the two previous forms) is permitted. The preprocessing tokens after line on the directive are processed just as in normal text (each identifier currently defined as a macro name is replaced by its replacement list of preprocessing tokens). If the directive resulting after all replacements does not match one of the two previous forms, the behavior is undefined; otherwise, the result is processed as appropriate. The .y extension is just what the author chose to use, perhaps to make it apparent that it was a YACC file (the word "grammar" also points to that though it's just a guess). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9152854', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/673730/']} | jdg_74006 |
stackexchange | llm_judgeable_groundtruth_similarity | 189943 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here's what I would like to do: I'm taking pictures with a webcam at regular intervals. Sort of like a time lapse thing. However, if nothing has really changed, that is, the picture pretty much looks the same, I don't want to store the latest snapshot. I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold. I'm looking for simplicity rather than perfection. I'm using Python.
Now provide the response and nothing else.
| General idea Option 1: Load both images as arrays ( scipy.misc.imread ) and calculate an element-wise (pixel-by-pixel) difference. Calculate the norm of the difference. Option 2: Load both images. Calculate some feature vector for each of them (like a histogram). Calculate distance between feature vectors rather than images. However, there are some decisions to make first. Questions You should answer these questions first: Are images of the same shape and dimension? If not, you may need to resize or crop them. PIL library will help to do it in Python. If they are taken with the same settings and the same device, they are probably the same. Are images well-aligned? If not, you may want to run cross-correlation first, to find the best alignment first. SciPy has functions to do it. If the camera and the scene are still, the images are likely to be well-aligned. Is exposure of the images always the same? (Is lightness/contrast the same?) If not, you may want to normalize images. But be careful, in some situations this may do more wrong than good. For example, a single bright pixel on a dark background will make the normalized image very different. Is color information important? If you want to notice color changes, you will have a vector of color values per point, rather than a scalar value as in gray-scale image. You need more attention when writing such code. Are there distinct edges in the image? Are they likely to move? If yes, you can apply edge detection algorithm first (e.g. calculate gradient with Sobel or Prewitt transform, apply some threshold), then compare edges on the first image to edges on the second. Is there noise in the image? All sensors pollute the image with some amount of noise. Low-cost sensors have more noise. You may wish to apply some noise reduction before you compare images. Blur is the most simple (but not the best) approach here. What kind of changes do you want to notice? This may affect the choice of norm to use for the difference between images. Consider using Manhattan norm (the sum of the absolute values) or zero norm (the number of elements not equal to zero) to measure how much the image has changed. The former will tell you how much the image is off, the latter will tell only how many pixels differ. Example I assume your images are well-aligned, the same size and shape, possibly with different exposure. For simplicity, I convert them to grayscale even if they are color (RGB) images. You will need these imports: import sysfrom scipy.misc import imreadfrom scipy.linalg import normfrom scipy import sum, average Main function, read two images, convert to grayscale, compare and print results: def main(): file1, file2 = sys.argv[1:1+2] # read images as 2D arrays (convert to grayscale for simplicity) img1 = to_grayscale(imread(file1).astype(float)) img2 = to_grayscale(imread(file2).astype(float)) # compare n_m, n_0 = compare_images(img1, img2) print "Manhattan norm:", n_m, "/ per pixel:", n_m/img1.size print "Zero norm:", n_0, "/ per pixel:", n_0*1.0/img1.size How to compare. 
img1 and img2 are 2D SciPy arrays here: def compare_images(img1, img2): # normalize to compensate for exposure difference, this may be unnecessary # consider disabling it img1 = normalize(img1) img2 = normalize(img2) # calculate the difference and its norms diff = img1 - img2 # elementwise for scipy arrays m_norm = sum(abs(diff)) # Manhattan norm z_norm = norm(diff.ravel(), 0) # Zero norm return (m_norm, z_norm) If the file is a color image, imread returns a 3D array, average RGB channels (the last array axis) to obtain intensity. No need to do it for grayscale images (e.g. .pgm ): def to_grayscale(arr): "If arr is a color image (3D array), convert it to grayscale (2D array)." if len(arr.shape) == 3: return average(arr, -1) # average over the last axis (color channels) else: return arr Normalization is trivial, you may choose to normalize to [0,1] instead of [0,255]. arr is a SciPy array here, so all operations are element-wise: def normalize(arr): rng = arr.max()-arr.min() amin = arr.min() return (arr-amin)*255/rng Run the main function: if __name__ == "__main__": main() Now you can put this all in a script and run against two images. If we compare image to itself, there is no difference: $ python compare.py one.jpg one.jpgManhattan norm: 0.0 / per pixel: 0.0Zero norm: 0 / per pixel: 0.0 If we blur the image and compare to the original, there is some difference: $ python compare.py one.jpg one-blurred.jpg Manhattan norm: 92605183.67 / per pixel: 13.4210411116Zero norm: 6900000 / per pixel: 1.0 P.S. Entire compare.py script. Update: relevant techniques As the question is about a video sequence, where frames are likely to be almost the same, and you look for something unusual, I'd like to mention some alternative approaches which may be relevant: background subtraction and segmentation (to detect foreground objects) sparse optical flow (to detect motion) comparing histograms or some other statistics instead of images I strongly recommend taking a look at βLearning OpenCVβ book, Chapters 9 (Image parts and segmentation) and 10 (Tracking and motion). The former teaches to use Background subtraction method, the latter gives some info on optical flow methods. All methods are implemented in OpenCV library. If you use Python, I suggest to use OpenCV β₯ 2.3, and its cv2 Python module. The most simple version of the background subtraction: learn the average value ΞΌ and standard deviation Ο for every pixel of the background compare current pixel values to the range of (ΞΌ-2Ο,ΞΌ+2Ο) or (ΞΌ-Ο,ΞΌ+Ο) More advanced versions make take into account time series for every pixel and handle non-static scenes (like moving trees or grass). The idea of optical flow is to take two or more frames, and assign velocity vector to every pixel (dense optical flow) or to some of them (sparse optical flow). To estimate sparse optical flow, you may use Lucas-Kanade method (it is also implemented in OpenCV). Obviously, if there is a lot of flow (high average over max values of the velocity field), then something is moving in the frame, and subsequent images are more different. Comparing histograms may help to detect sudden changes between consecutive frames. This approach was used in Courbon et al, 2010 : Similarity of consecutive frames. The distance between two consecutive frames is measured. If it is too high, it means that the second frame is corrupted and thus the image is eliminated. 
The Kullback-Leibler distance, or mutual entropy, on the histograms of the two frames is used: $d(p,q) = \sum_i p(i)\,\log\frac{p(i)}{q(i)}$, where $p$ and $q$ are the histograms of the frames. The threshold is fixed on 0.2. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/189943', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20498/']} | jdg_74007 |
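The simple background subtraction described above (learn a per-pixel mean and standard deviation, then flag pixels that fall outside mean plus or minus two standard deviations) is easy to sketch with NumPy. This is a minimal illustration, not code from the original answer; it assumes the frames are grayscale arrays of identical shape:

import numpy as np

def learn_background(frames):
    # frames: sequence of 2D arrays captured while the scene is "empty"
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    return stack.mean(axis=0), stack.std(axis=0)

def changed_mask(frame, mu, sigma, k=2.0):
    # True where the current pixel falls outside (mu - k*sigma, mu + k*sigma)
    return np.abs(np.asarray(frame, dtype=float) - mu) > k * sigma

def frame_is_different(frame, mu, sigma, fraction=0.01):
    # Flag the frame if more than `fraction` of its pixels left the expected range;
    # the fraction is the empirically tuned threshold the question asks about.
    return changed_mask(frame, mu, sigma).mean() > fraction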
stackexchange | llm_judgeable_groundtruth_similarity | 112423 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For a function $f\colon [-1,1]\to\mathbb{R}$, the Gagliardo seminorm of $f$ is defined to be $$|f| = \int_{-1}^1\int_{-1}^1 \frac{(f(x)-f(y))^2}{(x-y)^2}\, \mathrm{d}x\, \mathrm{d} y.$$ Given $(x,x^2,\ldots,x^n)$ for some $n$, I am interested in finding an orthonormal basis of these functions with respect to this seminorm. (We have to start at $x^1$ and not $x^0$ since the seminorm of the latter is zero). The function Orthogonalize is normally how I do something like this. Indeed, in the help entry for Orthogonalize under Generalizations and Extensions, there is something similar to what I want to do. I can't figure out the notation/syntax for this norm. What do I need to do?
Now provide the response and nothing else.
| Perhaps this? ip[f_, g_, v_] := Module[{x, y, int}, int = (((f /. v -> x) - (f /. v -> y)) ((g /. v -> x) - (g /. v -> y)))/(x - y)^2; Integrate[int, {x, -1, 1}, {y, -1, 1}] ];Orthogonalize[x^Range[4], ip[##, x] &](* {x/2, 1/2 Sqrt[3/2] x^2, 3/2 Sqrt[5/13] (-((2 x)/3) + x^3), 15/8 Sqrt[21/31] (-((14 x^2)/15) + x^4)}*) The inner product ip assumes that f and g are real-valued functions of the parameter v . Update: For the sake of speed on polynomials If you're dealing only with polynomials, one can get a speed-up by precomputing the integrals of monomials. Block[{i, j, k}, (* protect i,j,k since they will be evaluated during the definition *) Assuming[k β Integers && k >= 0, ip`power[i_Integer?EvenQ] = (* integral of even powers *) Integrate[x^(2 k) , {x, -1, 1}] /. k -> i/2; ip`moment[i_Integer?EvenQ, j_Integer?EvenQ] = ip`power[i] ip`power[j]; (* the integral of x^i y^j is a product *) ]];ip`moment[_Integer, _Integer] := 0; (* odd powers integrate to zero *)ipfast[f_, g_, v_] := Module[{x, y, int, coeff, mom}, int = (((f /. v -> x) - (f /. v -> y)) ((g /. v -> x) - (g /. v -> y)))/(x - y)^2; mom = SparseArray@CoefficientList[Factor@int, {x, y}]; (* get coefficient list *) coeff = mom["NonzeroValues"]; (* get coeffs of monomials *) mom = mom["NonzeroPositions"] - 1; (* get powers of monomials *) coeff.(ip`moment @@@ mom) (* integrate and add results *) ]b = x^Range[5];res = Orthogonalize[b, ip[##, x] &]; // AbsoluteTiming // Firstres2 = Orthogonalize[b, ipfast[##, x] &]; // AbsoluteTiming // First(* 7.20899 0.021552*)res - res2 // Simplify(* {0, 0, 0, 0, 0} *) | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/112423', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/34370/']} | jdg_74008 |
stackexchange | llm_judgeable_groundtruth_similarity | 463635 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Both x and y are real numbers and x > y. Given $x^2-xy+y^2 = 15$ and $xy+x+y = 13$, find the value of $x^2+6y$. I tried solving the second equation to get $y=(13-x)/(x+1)$ and substituted that in equation one to get a quartic equation in x: $x^4+3x^3-25x^2-69x+154 = 0$. I tried a few rational root test guesses to no avail. Then I checked Wolfram Alpha to find out that it has no rational roots. So I think there must be a better way to compute $x^2+6y$ if x > y.
Now provide the response and nothing else.
| HINT: We have $$x^2-xy+y^2+3(xy+x+y)=15+3\cdot13$$ $$\implies (x+y)^2+3(x+y)-54=0.$$ Solve for $x+y$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/463635', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/89679/']} | jdg_74009 |
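One way to finish from that hint (worked out here as an illustration; it goes beyond the hint itself): the quadratic factors as $(x+y-6)(x+y+9)=0$, so $x+y=6$ or $x+y=-9$. Since $xy = 13-(x+y)$, the case $x+y=-9$ gives $xy=22$, and $t^2+9t+22=0$ has negative discriminant, so no real solutions; hence $x+y=6$ and $xy=7$. Then $x,y$ are the roots of $t^2-6t+7=0$, i.e. $3\pm\sqrt{2}$, and with $x>y$ this gives $x=3+\sqrt{2}$, $y=3-\sqrt{2}$, so $$x^2+6y=(11+6\sqrt{2})+(18-6\sqrt{2})=29.$$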
stackexchange | llm_judgeable_groundtruth_similarity | 12295825 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Brief use-case description: The user arrives to the page: http://localhost:8080/.../show.xhtml?itemId=1 .My ShowBean is a @RequestScoped JSF Managed Bean which obtains the id through <f:viewParam.../> and looks for item in the database: <f:metadata> <f:viewParam name="itemId" value="#{showBean.itemId}"> <f:convertNumber integerOnly="#{true}"/> </f:viewParam></f:metadata><f:event type="preRenderView" listener="#{showBean.init()}"/> The user can also edit the displayed item. I want to construct the @ViewScoped ItemEditBean on-demand (by clicking on the "Edit" button). To achieve this I did the following: <p:commandButton value="Edit" oncomplete="editWidget.show()" update=":itemEditForm"> <f:setPropertyActionListener target="#{itemEditBean.id}" value="#{showBean.itemId}"/></p:commandButton> My questions are: Is there any better approach to pass itemId to the @ViewScoped bean? EDIT: Doing itemEditBean.fetch(id) in the action of the p:command button won't initialize the @ViewScoped bean on page rendering. <p:commandButton value="Edit" oncomplete="editWidget.show()" update=":itemEditForm" action="#{itemEditBean.fetchItem(showBean.itemId)}"/> . With the code above itemEditBean gets constructed on-demand.
Now provide the response and nothing else.
| Okay,seems like you can bypass the restriction by using reflection : package lab.heapdump;import javax.management.MBeanServer;import java.lang.management.ManagementFactory;import java.lang.reflect.Method;@SuppressWarnings("restriction")public class HeapDump { // This is the name of the HotSpot Diagnostic MBean private static final String HOTSPOT_BEAN_NAME = "com.sun.management:type=HotSpotDiagnostic"; // field to store the hotspot diagnostic MBean private static volatile Object hotspotMBean; /** * Call this method from your application whenever you * want to dump the heap snapshot into a file. * * @param fileName name of the heap dump file * @param live flag that tells whether to dump * only the live objects */ static void dumpHeap(String fileName, boolean live) { // initialize hotspot diagnostic MBean initHotspotMBean(); try { Class clazz = Class.forName("com.sun.management.HotSpotDiagnosticMXBean"); Method m = clazz.getMethod("dumpHeap", String.class, boolean.class); m.invoke( hotspotMBean , fileName, live); } catch (RuntimeException re) { throw re; } catch (Exception exp) { throw new RuntimeException(exp); } } // initialize the hotspot diagnostic MBean field private static void initHotspotMBean() { if (hotspotMBean == null) { synchronized (HeapDump.class) { if (hotspotMBean == null) { hotspotMBean = getHotspotMBean(); } } } } // get the hotspot diagnostic MBean from the // platform MBean server private static Object getHotspotMBean() { try { Class clazz = Class.forName("com.sun.management.HotSpotDiagnosticMXBean"); MBeanServer server = ManagementFactory.getPlatformMBeanServer(); Object bean = ManagementFactory.newPlatformMXBeanProxy(server, HOTSPOT_BEAN_NAME, clazz); return bean; } catch (RuntimeException re) { throw re; } catch (Exception exp) { throw new RuntimeException(exp); } } public static void main(String[] args) { // default heap dump file name String fileName = "D:\\heap.bin"; // by default dump only the live objects boolean live = true; // simple command line options switch (args.length) { case 2: live = args[1].equals("true"); case 1: fileName = args[0]; } // dump the heap dumpHeap(fileName, live); }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12295825', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/407466/']} | jdg_74010 |
stackexchange | llm_judgeable_groundtruth_similarity | 45600615 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have downloaded Android Studio and it has been giving me problems ever since. I've reinstalled it 3 times and I've looked up those problems but I just can't find a solution! Here is how it looks: It seems like I can't get past the 2 errors shown below (23,24 ; 26,13). Note that if I click on "Install repository and sync project", it won't work. Also my design window looks bad as I have render problems and missing classes.
Now provide the response and nothing else.
| You have already called loop.close() before you ran that sample piece of code, on the global event loop:
>>> import asyncio
>>> asyncio.get_event_loop().close()
>>> asyncio.get_event_loop().is_closed()
True
>>> asyncio.get_event_loop().run_until_complete(asyncio.sleep(1))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../lib/python3.6/asyncio/base_events.py", line 443, in run_until_complete
    self._check_closed()
  File "/.../lib/python3.6/asyncio/base_events.py", line 357, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
You need to create a new loop: loop = asyncio.new_event_loop() You can set that as the new global loop with: asyncio.set_event_loop(asyncio.new_event_loop()) and then just use asyncio.get_event_loop() again. Alternatively, just restart your Python interpreter; the first time you try to get the global event loop you get a fresh new one, unclosed. As of Python 3.7, the process of creating, managing, then closing the loop (as well as a few other resources) is handled for you when you use asyncio.run(). It should be used instead of loop.run_until_complete(), and there is no need any more to first get or set the loop. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/45600615', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6911522/']} | jdg_74011 |
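A minimal sketch of the asyncio.run() route mentioned at the end (Python 3.7+), shown here only to make the point concrete, since no manual loop management is needed:

import asyncio

async def main():
    await asyncio.sleep(1)
    return "done"

# asyncio.run() creates a fresh event loop, runs the coroutine to completion,
# and closes that loop again when it is finished.
print(asyncio.run(main()))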
stackexchange | llm_judgeable_groundtruth_similarity | 7902324 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
With jQuery.css() I've been told I can use the following two functions for the same results: $(".element").css("marginLeft") = "200px";$(".element").css("margin-left") = "200px"; I've always used marginLeft as this is what is used in the documentation: http://api.jquery.com/css/ Shorthand CSS properties (e.g. margin, background, border) are not supported. For example, if you want to retrieve the rendered margin, use: $(elem).css('marginTop') and $(elem).css('marginRight'), and so on. Why has jQuery allowed for marginLeft as well as margin-left ? It seems pointless and uses more resources to be converted to the CSS margin-left ?
Now provide the response and nothing else.
| jQuery's underlying code passes these strings to the DOM, which allows you to specify the CSS property name or the DOM property name in a very similar way: element.style.marginLeft = "10px"; is equivalent to: element.style["margin-left"] = "10px"; Why has jQuery allowed for marginLeft as well as margin-left? It seems pointless and uses more resources to be converted to the CSS margin-left? jQuery's not really doing anything special. It may alter or proxy some strings that you pass to .css() , but in reality there was no work put in from the jQuery team to allow either string to be passed. There's no extra resources used because the DOM does the work. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7902324', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/370103/']} | jdg_74012 |
stackexchange | llm_judgeable_groundtruth_similarity | 223744 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Does it make sense to write a default constructor when it has no arguments, an empty body and no other constructors exist? The only benefit I can think of is reducing the risk of forgetting to add the default constructor when another one is created. But this error would show up anyway if the default constructor is actually being used (at least if the class is "internal" and not in a library), and if it isn't used it can be omitted (yagni). What benefit can this have?
Now provide the response and nothing else.
| There is no benefit unless you are interacting with a tool or framework that detects your class's declared methods and constructors via reflection and then silently does different things depending on whether it finds the default constructor or not. (I seem to remember that early versions of Spring or Hibernate did this.) Otherwise, as you say, any inconsistency would be flagged by the compiler, so there is no point in declaring things that would be auto-generated anyway. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/223744', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/56691/']} | jdg_74013 |
stackexchange | llm_judgeable_groundtruth_similarity | 5997006 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to combine a folder full of pdfs into one file. However they must be combined in a certain order. A sample of the file names is: WR_Mapbook__1.pdf WR_Mapbook__1a.pdf WR_Mapbook__2.pdf WR_Mapbook__2a.pdf WR_Mapbook__3.pdf WR_Mapbook__3a.pdf etc... The way that they are sorted in windows explorer is the way I need them to be added to the a single file. However my script adds all the "a" files first, and then the files without an "a". Why does it do that? How can I sort it so that the files are added in the way I want? See the code below. Thanks! from pyPdf import PdfFileWriter, PdfFileReader import globoutputLoc = "K:\\test\\pdf_output\\"output = PdfFileWriter()pdfList = glob.glob(r"K:\test\lidar_MB_ALL\*.pdf")pdfList.sortprint pdfListfor pdf in pdfList: print pdf input1 = PdfFileReader(file(pdf, "rb")) output.addPage(input1.getPage(0)) # finally, write "output" to document-output.pdf outputStream = file(outputLoc + "WR_Imagery_LiDar_Mapbook.pdf", "wb") output.write(outputStream) print ("adding " + pdf) outputStream.close()
Now provide the response and nothing else.
| What you need is to implement "Natural Order String Comparison". Hopefully someone has done this already and shared it. EDIT: Here's a brute force example of doing this in Python. import redigits = re.compile(r'(\d+)')def tokenize(filename): return tuple(int(token) if match else token for token, match in ((fragment, digits.search(fragment)) for fragment in digits.split(filename)))# Now you can sort your PDF file names like so:pdfList.sort(key=tokenize) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5997006', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/752944/']} | jdg_74014 |
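As a brief Python 3 illustration of the same "natural order" idea (the helper name natural_key and the sample file names are only for demonstration; the answer's own tokenize() is the original approach):

import re

_digits = re.compile(r'(\d+)')

def natural_key(name):
    # Digit runs sort numerically, everything else case-insensitively.
    # Wrapping each piece in a (flag, value) pair keeps the tuple elements
    # mutually comparable under Python 3.
    return tuple((1, int(tok)) if tok.isdigit() else (0, tok.lower())
                 for tok in _digits.split(name))

files = ["WR_Mapbook__2a.pdf", "WR_Mapbook__10.pdf", "WR_Mapbook__1a.pdf",
         "WR_Mapbook__1.pdf", "WR_Mapbook__2.pdf"]
print(sorted(files))                   # lexicographic: 1, 10, 1a, 2, 2a
print(sorted(files, key=natural_key))  # natural order: 1, 1a, 2, 2a, 10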
stackexchange | llm_judgeable_groundtruth_similarity | 1393257 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Here is my problem: I have to find the integer that is the highest lower bound for the roots of $$f(x)=x^4-3x^2+2x-4$$ I am not sure how to do this and the book I am using does not explain it very well. If you could explain to me how to find the highest lower bound and lowest upper bound that would be great. Thanks in advance!
Now provide the response and nothing else.
| If $f(x)$ is a strictly increasing function, then $$\displaystyle \lim_{n\rightarrow \infty}f(n) = \displaystyle \lim_{n\rightarrow \infty}f(n+1) = L$$ So $$\displaystyle \lim_{n\rightarrow \infty}f(n+1) = \lim_{n\rightarrow \infty}\frac{1}{2}\left[f(n)+\frac{9}{f(n)}\right]\Rightarrow L = \frac{1}{2}\left[L+\frac{9}{L}\right]$$ where $L>0$. Solving this equation gives $L^2 = 9$, and since $L>0$ we get $L = 3$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1393257', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/261332/']} | jdg_74015

stackexchange | llm_judgeable_groundtruth_similarity | 60884941 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
With a script I duplicate the active file and save it in a particular folder on Google Drive. Then I try to open the new file. With the following code the new file is created correctly, but nothing happens afterwards... function copy(){ nom=Browser.inputBox("Nom du magasin ?") var sheet = SpreadsheetApp.getActiveSpreadsheet(); var destFolder = DriveApp.getFoldersByName("Tests").next(); DriveApp.getFileById(sheet.getId()).makeCopy(nom, destFolder); SpreadsheetApp.open(destFolder.getFilesByName(nom).next());} Do you have any idea how to get my new file opened instead of the first one? Thanks a lot!
Now provide the response and nothing else.
| I'm going to be willing to bet that the issue is that you are not import ing the ConfigModule to the AuthModule and instead you are adding the ConfigService to the providers array directly. This would mean that if ConfigModule does any sort of set up on the ConfigService , it won't be happening anymore. What you should have instead is something like this: @Module({ imports: [ PassportModule.register({defaultStrategy: 'jwt' }), UserModule, JwtModule.registerAsync({ imports: [ConfigModule], useFactory: async (configService: ConfigService) => { return { secret: configService.get<string>('JWT_KEY') }; }, inject: [ConfigService] }), ConfigModule, ], providers: [LocalStrategy, JwtStrategy, AuthService], controllers: [AuthController], exports: [PassportStrategy],})export class AuthModule {} Now as long a ConfigModule exports ConfigService , the AuthModule should fire up just fine. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/60884941', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13134783/']} | jdg_74016 |
stackexchange | llm_judgeable_groundtruth_similarity | 29725932 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
All I would like to do is delete a row if it has a value of '0' in the third column. An example of the data would be something like: 6.5, 5.4, 0, 320 and 6.5, 5.4, 1, 320. So the first row would need to be deleted whereas the second would stay. What I have so far is as follows: import csvinput = open('first.csv', 'rb')output = open('first_edit.csv', 'wb')writer = csv.writer(output)for row in csv.reader(input): if row[2]!=0: writer.writerow(row)input.close()output.close() Any help would be great
Now provide the response and nothing else.
| You are very close; currently you compare the row[2] with integer 0 , make the comparison with the string "0" . When you read the data from a file, it is a string and not an integer, so that is why your integer check fails currently: row[2]!="0": Also, you can use the with keyword to make the current code slightly more pythonic so that the lines in your code are reduced and you can omit the .close statements: import csvwith open('first.csv', 'rb') as inp, open('first_edit.csv', 'wb') as out: writer = csv.writer(out) for row in csv.reader(inp): if row[2] != "0": writer.writerow(row) Note that input is a Python builtin, so I've used another variable name instead. Edit : The values in your csv file's rows are comma and space separated; In a normal csv, they would be simply comma separated and a check against "0" would work, so you can either use strip(row[2]) != 0 , or check against " 0" . The better solution would be to correct the csv format, but in case you want to persist with the current one, the following will work with your given csv file format: $ cat test.py import csvwith open('first.csv', 'rb') as inp, open('first_edit.csv', 'wb') as out: writer = csv.writer(out) for row in csv.reader(inp): if row[2] != " 0": writer.writerow(row)$ cat first.csv 6.5, 5.4, 0, 3206.5, 5.4, 1, 320$ python test.py $ cat first_edit.csv 6.5, 5.4, 1, 320 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29725932', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4756798/']} | jdg_74017 |
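For completeness, a small Python 3 sketch of the same filtering approach (file names follow the question; on Python 3 the csv module wants text-mode files opened with newline='', and stripping the cell guards against the space after each comma):

import csv

with open('first.csv', newline='') as inp, \
     open('first_edit.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    for row in csv.reader(inp):
        # Keep the row unless the third column is "0" (after trimming spaces).
        if row[2].strip() != '0':
            writer.writerow(row)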
stackexchange | llm_judgeable_groundtruth_similarity | 161689 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I go to some hotel, and the hotel has a WiFi network that requires a password and/or username to access. Can other hotel guests then see my WiFi traffic if they are not logged in to the WiFi, and can they see it when they are connected to the WiFi as a guest like me, but not as the network administrator (which CAN obviously see it)?
Now provide the response and nothing else.
| Yes, if they know what they're doing. See https://www.howtogeek.com/204335/warning-encrypted-wpa2-wi-fi-networks-are-still-vulnerable-to-snooping/ for an assessment of the vulnerabilities for WPA2-PSK (the most common form of encrypted WiFi). If the hotel wifi network uses WEP, that's even easier to crack, and if it doesn't use encryption at all (open Wifi), it's trivial. The only way to protect yourself is to use a VPN service, so that all of your network traffic is encrypted at your end (regardless of whether you are browsing a news web site or doing Internet banking). This will protect you from someone snooping at the router as well. Note that it doesn't stop anyone from kicking you off the wifi (via DEAUTH), but at least they won't get any of your data even if they capture the encryption key used between your device and their wifi hotspot. | {} | {'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/161689', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/150616/']} | jdg_74018 |
stackexchange | llm_judgeable_groundtruth_similarity | 6623375 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm playing around with template specialization, and I've found an issue I can't seem to solve; this is my code: template<int length, typename T>void test(T* array){ ... test<length-1>(array);}template<typename T>void test<0>(T* array){ return;} So what I'm trying to do, is to pass the length, of what's to be processed in the template. The problem is, that the compilation of this, well outputs forever: a.cpp:83:43: error: template-id 'test<0>' in declaration of primary templatea.cpp: In function 'void test(T*) [with int length= -0x000000081, T = int]':a.cpp:77:9: instantiated from 'void test(T*) [with int length= -0x000000080, T = int]'a.cpp:77:9: instantiated from 'void test(T*) [with int length= -0x00000007f, T = int]'a.cpp:77:9: [ skipping 151 instantiation contexts ]a.cpp:77:9: instantiated from 'void test(T*) [with int length= 28, T = int]'a.cpp:77:9: instantiated from 'void test(T*) [with int length= 29, T = int]'...a.cpp: In function 'void test(T*) [with int length= -0x000000082, T = int]':a.cpp:77:9: instantiated from 'void test(T*) [with int length= -0x000000081, T = int]'a.cpp:77:9: instantiated from 'void test(T*) [with int length= -0x000000080, T = int]' Last two lines, is pretty much the same as the first ones. To me it would seem, its not catching the specialization, hence: a.cpp:83:43: error: template-id 'test<0>' in declaration of primary template Am I correct? And if I'm correct, I'm guessing it's the issue that partial template specialisation is not allowed for function templates, so what would be the solution then, making a struct, and using specialisation on that?
Now provide the response and nothing else.
| Partial specialization of function templates is not allowed. Herb Sutter explains why in his article "Why Not Specialize Function Templates?" . To work around this limitation you need to use class templates instead. You can then write a regular function template that uses that class template. That specific error you're getting is because you forgot the second parameter in your specialization. If you this instead: template<int length, typename T>void test(T* array){ //... test<length-1,T>(array);}template<typename T>void test<0,T>(T* array){ return;} GCC complains with the following: error: function template partial specialization 'test<0, T>' is not allowed | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6623375', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/480212/']} | jdg_74019 |
stackexchange | llm_judgeable_groundtruth_similarity | 1210571 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to update the same field to the same value for hundreds of work items in TFS. Is there any way to do it in a batch instead of updating them manually one by one?
Now provide the response and nothing else.
| You can do this in Excel : Open the work items in Excel, via: right click a query in Team Explorer -> open in Excel multi-select some work items in a WIT result pane, then right click -> open in Excel load Excel, use Team -> Import to load a predefined query open a *.xls file that is already bound to TFS Make your bulk edits Click the Publish button on the Team ribbon Full documentation: Managing work items in Excel (overview page; lots & lots of links inside) You can bulk-edit in the web interface too Windows command line : REM make Martin Woodward fix all my bugstfpt query /format:id "TeamProject\public\My Work Items" | tfpt workitem /update @ /fields:"Assigned To=Martin" Powershell : # make Bill & Steve happy$tfs = tfserver -path . -all$items = $tfs.wit.Query(" SELECT id FROM workitems WHERE [Created By] IN ('bill gates', 'steve ballmer')") | % { $_.Open() $_.Fields["priority"].value = 1 $_ }# note: this will be much faster than tfpt since it's only one server call$tfs.wit.BatchSave($items) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1210571', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/42356/']} | jdg_74020 |
stackexchange | llm_judgeable_groundtruth_similarity | 1569446 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I change the current locale? I tried to put controller/action?lang=de but my locale is still en_US. I tried to override the value using this piece of code: def key = "org.springframework.web.servlet.DispatcherServlet.LOCALE_RESOLVER"def localeResolver = request.getAttribute(key)localeResolver.setLocale(request, response, new Locale("de","DE")) Nothing changed. I tried to override the value using this piece of code: import org.springframework.web.servlet.support.RequestContextUtils as RCU;RCU.getLocaleResolver(request).setLocale(request, response, new Locale("de","DE")) And... nothing happened. I still got my locale set to en_US. Any idea how to change the locale?
Now provide the response and nothing else.
| According to the chapter 10. Internationalization of the Grails documentation, Grails supports i18n out of the box and you should indeed be able to change the locale using the lang parameter: By default the user locale is detected from the incoming Accept-Language header. However, you can provide users the capability to switch locales by simply passing a parameter called lang to Grails as a request parameter: /book/list?lang=de Grails will automatically switch the user locale and store it in a cookie so subsequent requests will have the new header. But sometimes you may want to preset the default language because not all your applications will be in english. To do this, all you have to do is to set your localeResolver in your resources.groovy spring configuration file as shown below: beans = { localeResolver(org.springframework.web.servlet.i18n.SessionLocaleResolver) { defaultLocale = new Locale("de","DE") java.util.Locale.setDefault(defaultLocale) }} Now, without more details, I can't say why using the lang parameter isn't working in your case. Just in case, how do you know that the locale is still en_US?. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1569446', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/138933/']} | jdg_74021 |
stackexchange | llm_judgeable_groundtruth_similarity | 13957082 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a MySql table consisting of daily stock quotes (open, high, low, close and volume) which I'm trying to convert into weekly data on the fly. So far, I have the following function, which works for the highs, lows, and volume: SELECT MIN(_low), MAX(_high), AVG(_volume),CONCAT(YEAR(_date), "-", WEEK(_date)) AS myweekFROM mystockdataGROUP BY myweekORDER BY _date; I need to select the first instance of _open in the above query. So for example, if there was a holiday on Monday (in a particular week) and stock market opened on Tuesday, _open value should be selected from the Tuesday that's grouped into its week. Similarly, the close value should be the last _close from that week. Is it possible to select something like FIRST() and LAST() in MySql so that the above could be wrapped up within a single SELECT rather than using nested select queries? Here's my table's create statement to get an idea of the schema: delimiter $$CREATE TABLE `mystockdata` ( `id` int(11) NOT NULL AUTO_INCREMENT, `symbol_id` int(11) NOT NULL, `_open` decimal(11,2) NOT NULL, `_high` decimal(11,2) NOT NULL, `_low` decimal(11,2) NOT NULL, `_close` decimal(11,2) NOT NULL, `_volume` bigint(20) NOT NULL, `add_date` date NOT NULL, PRIMARY KEY (`id`), KEY `Symbol_Id` (`symbol_id`,`add_date`)) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8$$ Update: There are no nulls, wherever there's a holiday/weekend, the table does not carry any record for that date.
Now provide the response and nothing else.
| If you are using MySQL 8, the preferable solution would make use of the window functions FIRST_VALUE() and/or LAST_VALUE() , which are now available. Please have a look at Lukas Eder's answer . But if you're using an older version of MySQL, those functions are notsupported. You have to simulate them using some kind of workarounds,for example you could make use of the aggregated string function GROUP_CONCAT() that creates a set of all _open and _close values of the week ordered by _date for _open and by _date desc for _close , and extracting the first element of the set: select min(_low), max(_high), avg(_volume), concat(year(_date), "-", lpad(week(_date), 2, '0')) AS myweek, substring_index(group_concat(cast(_open as CHAR) order by _date), ',', 1 ) as first_open, substring_index(group_concat(cast(_close as CHAR) order by _date desc), ',', 1 ) as last_closefrom mystockdatagroup by myweekorder by myweek; Another solution would make use of subqueries with LIMIT 1 in the SELECT clause: select min(_low), max(_high), avg(_volume), concat(year(_date), "-", lpad(week(_date), 2, '0')) AS myweek, ( select _open from mystockdata m where concat(year(_date), "-", lpad(week(_date), 2, '0'))=myweek order by _date LIMIT 1 ) as first_open, ( select _close from mystockdata m where concat(year(_date), "-", lpad(week(_date), 2, '0'))=myweek order by _date desc LIMIT 1 ) as last_closefrom mystockdatagroup by myweekorder by myweek; Please note I added the LPAD() string function to myweek , to make the week number always two digits long, otherwise weeks won't be ordered correctly. Also be careful when using substring_index in conjunction with group_concat(): if one of the grouped strings contains a comma, the function might not return the expected result. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/13957082', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1174406/']} | jdg_74022 |
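As an aside (not a MySQL feature), the same "first open / last close per week" aggregation can also be expressed with pandas if the data is pulled into Python; the column names follow the question's schema and the sample values are invented for illustration:

import pandas as pd

# One row per trading day, mirroring the question's table columns.
df = pd.DataFrame({
    'add_date': pd.to_datetime(['2012-12-03', '2012-12-04', '2012-12-05',
                                '2012-12-10', '2012-12-11']),
    '_open':   [10.0, 10.5, 10.2, 11.0, 11.3],
    '_high':   [10.6, 10.9, 10.4, 11.5, 11.8],
    '_low':    [ 9.8, 10.3, 10.0, 10.9, 11.1],
    '_close':  [10.5, 10.2, 10.3, 11.3, 11.6],
    '_volume': [1000, 1200,  900, 1500, 1100],
})

weekly = (df.set_index('add_date')
            .resample('W')
            .agg({'_open': 'first', '_high': 'max', '_low': 'min',
                  '_close': 'last', '_volume': 'mean'})
            .dropna())
print(weekly)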
stackexchange | llm_judgeable_groundtruth_similarity | 5833790 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say the code looks something like this: if(!$test) { header("Location: somefile.html"); ...some PHP code.... header("Location: anotherfile.html");} Is 'some PHP code' above executed? If yes, then what happens to further HTTP response(s) therein (e.g. the second 'header' statement in the code)?
Now provide the response and nothing else.
| Yes - the code will be executed. The header() call only configures the headers to be returned; it does not send them right away. If there is no output between the two calls, then only the last one will be taken into account. However, if you output anything before the second call, then the headers will already have been sent, and the second call will result in an error (headers already sent). A classic mistake is to redirect and not call exit() right after that, which can cause security problems. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5833790', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/617883/']} | jdg_74023
stackexchange | llm_judgeable_groundtruth_similarity | 538435 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to encrypt a string on the iPhone and send it to a .Net web service for decryption. I am able to encrypt/decrypt on the iPhone and with .Net, but the encrypted strings from the iPhone cannot be decrypted by .Net. The error I get is "Padding is invalid and cannot be removed." The .Net code is from: http://blog.realcoderscoding.com/index.php/2008/07/dot-net-encryption-simple-aes-wrapper/ The iPhone code uses the sample code from: http://nootech.wordpress.com/2009/01/17/symmetric-encryption-with-the-iphone-sdk/ AFAIK my key settings are the same: result.BlockSize = 128; // iPhone: kCCBlockSizeAES128result.KeySize = 128; // kCCBlockSizeAES128result.Mode = CipherMode.CBC;result.Padding = PaddingMode.PKCS7; // kCCOptionPKCS7Padding I tried different ways of generating ciphertext. hello/hello is: e0PnmbTg/3cT3W+92CDw1Q== in .Net yrKe5Z7p7MNqx9+CbBvNqQ== on iPhone and "openssl enc -aes-128-cbc -nosalt -a -in hello.txt -pass pass:hello" generates: QA+Ul+r6Zmr7yHipMcHSbQ== Update: I've posted the working code for this here .
Now provide the response and nothing else.
| At the very least, you are using differing initialization vectors (IV). The .Net code uses the key for IV. private static AesCryptoServiceProvider GetProvider(byte[] key){ //Set up the encryption objects AesCryptoServiceProvider result = new AesCryptoServiceProvider(); byte[] RealKey = Encryptor.GetKey(key, result); result.Key = RealKey; result.IV = RealKey; return result;} and private static byte[] GetKey(byte[] suggestedKey, AesCryptoServiceProvider p){ byte[] kRaw = suggestedKey; List kList = new List(); for (int i = 0; i < p.LegalKeySizes[0].MinSize; i += 8 ) { kList.Add(kRaw[i % kRaw.Length]); } byte[] k = kList.ToArray(); return k;} which should probably be: kList.Add(kRaw[(i / 8) % kRaw.Length]); . Otherwise a key whose length % 8 == 0 will use the same letter repeatedly, doh! Thus the IV (and key) used by .Net is: hleolhleolhleolh . This is not part of the API, but rather due to the wrapper code that you pointed at (which has a serious bug in it...). The iPhone code uses 0 for IV. // Initialization vector; dummy in this case 0's.uint8_t iv[kChosenCipherBlockSize];memset((void *) iv, 0x0, (size_t) sizeof(iv)); openssl by default prepends a randomly generated salt (which is why the output is longer!). The openssl output is more secure since it is prepending a random initialization vector. It looks like the first few bytes of the base64 decoded string is "Salted__". You can also ask openssl to not use a salt (-nosalt) and / or provide an IV (-iv). Essentially, openssl, .Net, and the iPhone are using the same encryption, you just need to be careful how you initialize the APIs with the encryption key and the initialization vector. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/538435', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/65244/']} | jdg_74024 |
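To make the "same key, same IV, same padding on both sides" point concrete, here is a small Python sketch using the third-party pycryptodome package; the key and IV values deliberately mirror the repeated-pattern key and zero IV discussed above and are for illustration only, not for real use:

from Crypto.Cipher import AES            # pip install pycryptodome
from Crypto.Util.Padding import pad, unpad
import base64

key = b'hleolhleolhleolh'   # 16 bytes, the repeated-pattern key described above
iv_dotnet = key             # the .NET wrapper reused the key as the IV
iv_iphone = b'\x00' * 16    # the iPhone sample used an all-zero IV

plaintext = b'hello'

for iv in (iv_dotnet, iv_iphone):
    cipher = AES.new(key, AES.MODE_CBC, iv)
    ct = cipher.encrypt(pad(plaintext, AES.block_size))   # PKCS#7 padding
    print(base64.b64encode(ct).decode())                  # the two IVs give different ciphertexts
    # Decrypting requires the *same* key and IV again:
    assert unpad(AES.new(key, AES.MODE_CBC, iv).decrypt(ct), AES.block_size) == plaintext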
stackexchange | llm_judgeable_groundtruth_similarity | 27751 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am unable to see how to eliminate $t$. Wolfram Alpha fails at it too. $$x=2t-4t^3$$ $$y=t^2-3t^4$$ I can guess that the curve is a polynomial equation, so in principle I can write this as $$w_1 x^4 +w_2 x^3 +w_3 x^2 +w_4 x +w_5 y^3 +w_6 y^2 +w_7 y +w_8=0$$ for the powers of $t$ to cancel when plugged in. I can determine all these $w$'s by substituting the $t$'s for $x(t)$ and $y(t)$ and setting each of the new coefficients of the $t$'s, $C_i(w_1,...,w_8)$, to zero individually. I have my curve determined if I put each of the $C_i(w_1,...,w_8)=0$ and solve for the $w$'s. All this was in principle, but this is like an examination question so there must be a clever manipulation/way, which I have been unable to find.
Now provide the response and nothing else.
| My opinion is that you should not try to eliminate $t$ at all. Instead, you should think about how $x$ and $y$ behave as $t$ varies, find some important points such as the critical points with respect to $x$ and $y$, and try to sketch the graph from that information. $x=2t-4t^3$ means that as $t$ goes from $-\infty$ to $\infty$, $x$ decreases from $\infty$ to a local minimum of $-\sqrt{8/27}$ at $t = -1/\sqrt{6}$, then rises to $\sqrt{8/27}$ at $t = 1/\sqrt{6}$, then falls again to $-\infty$. $y=t^2-3t^4$ means that $y$ starts and ends at $-\infty$, with local maxima of $1/12$ when $t = \pm 1/\sqrt{6}$, and a local minimum of $0$ at $t = 0$. Interestingly, $t = \pm 1/\sqrt{6}$ is a critical point for both $x$ and $y$, both of whose derivatives change sign, so the curve forms a cusp at that point. That's enough information to follow the curve as $t$ goes from $-\infty$ to $\infty$: it comes in from the bottom right quadrant $(\infty,-\infty)$, goes upward and leftward till it hits the cusp $(-\sqrt{8/27},1/12)$, upon which it turns around, passing smoothly through $(0,0)$ where its tangent is horizontal, until it reaches the second cusp $(\sqrt{8/27},1/12)$, and then turns around a second time and exits at the bottom left towards $(-\infty,-\infty)$. This is consistent with the plot arising from Juan Joder's answer . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/27751', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_74025 |
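A short Python/matplotlib sketch of the curve described above, with the two cusps at $t=\pm1/\sqrt{6}$ marked; the plotting range is an arbitrary choice:

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-1.1, 1.1, 2000)
x = 2*t - 4*t**3
y = t**2 - 3*t**4

tc = 1/np.sqrt(6)                          # parameter value of the cusps
xc, yc = 2*tc - 4*tc**3, tc**2 - 3*tc**4   # roughly (0.544, 1/12)

plt.plot(x, y, lw=1)
plt.plot([xc, -xc], [yc, yc], 'ro')        # the two cusps
plt.plot(0, 0, 'ks')                       # the point with horizontal tangent
plt.xlabel('x'); plt.ylabel('y')
plt.show()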
stackexchange | llm_judgeable_groundtruth_similarity | 72054026 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
According to The Rust Reference, "If a main function is present, (snip), its return type must be one of the following: () or Result<(), E> where E: Error", but it doesn't say what happens when main() returns (), Ok(()) or Err(<value>). As far as I tested: () and Ok(()) give exit status 0 with no additional behavior, while Err(<value>) gives exit status 1 and Error: <value> is printed to stderr. Are these behaviors defined, explicitly explained or guaranteed in some documentation? In particular, can I assume that a program always exits with status 1 when main() returns Err(<value>), and that the error message displayed when main() returns Err(<value>) is always of the form Error: <value>? Notes: I want some sort of documented guarantee rather than an empirical explanation. This is why I added the #language-lawyer tag. This question is not about "When should I use () and when should I use Result<(), E>?" or such. One can find answers (or at least hints or criteria) to such questions in many documentations or tutorials, as you know. Updates: The Termination trait is finally stabilized in Rust 1.61.0 ( source ).
Now provide the response and nothing else.
| The behavior of different return values from main is defined by the std::process::Termination trait: trait std::process::Termination A trait for implementing arbitrary return types in the main function. This trait is documented to return libc::EXIT_SUCCESS on success and libc::EXIT_FAILURE on error. The default implementations are returning libc::EXIT_SUCCESS to indicate a successful execution. In case of a failure, libc::EXIT_FAILURE is returned. But those values aren't guaranteed to be 0 and 1 on non-POSIX systems . As for printing the error message, Termination requires E: Debug and does print the Debug impl to stderr, but I don't believe it's guaranteed to stay exactly the same. impl<E: fmt::Debug> Termination for Result<!, E> { fn report(self) -> ExitCode { let Err(err) = self; eprintln!("Error: {:?}", err); ExitCode::FAILURE.report() }} Source | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/72054026', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8776746/']} | jdg_74026 |
stackexchange | llm_judgeable_groundtruth_similarity | 13683162 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to show an additional column in the backend of WooCommerce (in the Orders overview). The column should contain a custom field which I have defined (delivery date). How do I do this?
Now provide the response and nothing else.
| In case someone still needs it - instructions on how to add new columns in Woocommerce orders list. No need in unsetting the default columns, just add this in your functions.php and your code will be valid for updates. 1. Define columns position and names add_filter( 'manage_edit-shop_order_columns', 'MY_COLUMNS_FUNCTION' );function MY_COLUMNS_FUNCTION($columns){ $new_columns = (is_array($columns)) ? $columns : array(); unset( $new_columns['order_actions'] ); //edit this for you column(s) //all of your columns will be added before the actions column $new_columns['MY_COLUMN_ID_1'] = 'MY_COLUMN_1_TITLE'; $new_columns['MY_COLUMN_ID_2'] = 'MY_COLUMN_2_TITLE'; //stop editing $new_columns['order_actions'] = $columns['order_actions']; return $new_columns;} 2. For each custom column, show the values add_action( 'manage_shop_order_posts_custom_column', 'MY_COLUMNS_VALUES_FUNCTION', 2 );function MY_COLUMNS_VALUES_FUNCTION($column){ global $post; $data = get_post_meta( $post->ID ); //start editing, I was saving my fields for the orders as custom post meta //if you did the same, follow this code if ( $column == 'MY_COLUMN_ID_1' ) { echo (isset($data['MY_COLUMN_1_POST_META_ID']) ? $data['MY_COLUMN_1_POST_META_ID'] : ''); } if ( $column == 'MY_COLUMN_ID_2' ) { echo (isset($data['MY_COLUMN_2_POST_META_ID']) ? $data['MY_COLUMN_2_POST_META_ID'] : ''); } //stop editing} 3. (optional) Function to make the columns sortable add_filter( "manage_edit-shop_order_sortable_columns", 'MY_COLUMNS_SORT_FUNCTION' );function MY_COLUMNS_SORT_FUNCTION( $columns ) { $custom = array( //start editing 'MY_COLUMN_ID_1' => 'MY_COLUMN_1_POST_META_ID', 'MY_COLUMN_ID_2' => 'MY_COLUMN_2_POST_META_ID' //stop editing ); return wp_parse_args( $custom, $columns );} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13683162', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1872543/']} | jdg_74027 |
stackexchange | llm_judgeable_groundtruth_similarity | 189346 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to show that $n+1$ is a divisor of $\displaystyle\binom{2n}{n}$, for all $n\in\mathbb{N}.$ I have tried to show it by induction and Pascal's rule but it did not work out. I would appreciate some help.
Now provide the response and nothing else.
| From the Wikipedia page for Catalan numbers: http://en.wikipedia.org/wiki/Catalan_number Note that $$\binom {2n}{n+1} = \frac{(2n)!}{(n-1)!(n+1)!} = \frac{n}{n+1} \frac{(2n)!}{n!n!} = \frac{n}{n+1}\binom{2n}{n}$$ So $$\frac{1}{n+1}\binom{2n}{n} = \binom{2n}{n} - \binom{2n}{n+1}$$is an integer. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/189346', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23505/']} | jdg_74028 |
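A quick numerical sanity check of the identity above in Python (math.comb needs Python 3.8+; the range bound is arbitrary):

from math import comb

for n in range(1, 11):
    lhs = comb(2*n, n) // (n + 1)
    assert comb(2*n, n) % (n + 1) == 0            # (n+1) really divides C(2n, n)
    assert lhs == comb(2*n, n) - comb(2*n, n + 1) # the identity used in the answer
    print(n, lhs)   # 1, 2, 5, 14, ... the Catalan numbers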
stackexchange | llm_judgeable_groundtruth_similarity | 7910209 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a navigation bar which consists of <img> elements within their respective <a> elements. However, for some reason IE is drawing a dark black border around the images. It's not doing the same in other browsers, and I can't seem to figure this out... This is the HTML that I'm using: <li> <a href="#"> <span id="nav1"> <img src="tt_1.png" /> </span> </a></li> I have about 5 links all written like that and I've used CSS to style it into a nav bar. On other browsers it comes out like this, but on IE it comes out like this. I've never encountered a problem like this before, and what I've researched to try and fix it so far hasn't worked. Is there a way to take out these borders using CSS?
Now provide the response and nothing else.
| TL;DR Remove borders from all links and images: a, img { border:none; outline:none; /* this breaks accessibility */} **Full version** If you only want to remove borders from images that are links, you should do the following: a img { border:none; outline:none; /* this breaks accessibility */} The only difference is that there is no comma between a and img meaning only images inside a -tags will have this rule applied Pro tip: Use a css reset Browser inconsistencies like this one are numerous, so web developers often use a "css reset" i.e. https://necolas.github.io/normalize.css/ or http://meyerweb.com/eric/tools/css/reset/ . This will (among other nifty things) reset things like borders, margins, etc. on a number of elements so they render more consistently across browsers. Note on accessibility The dotted outline, that is often judged as disturbing by developers, has a very important function for keyboard users. You should never remove it. If you do, you need to find another visual indicator of where focus is, by adding a :focus style | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7910209', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/832411/']} | jdg_74029 |
stackexchange | llm_judgeable_groundtruth_similarity | 296643 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm getting ready to implement a source control system (subversion) but I'm facing some doubts on how to structure my folders. I use Delphi for all my development and compile the projects from within the IDE. My current projects folder structure is as follows: -E:\Work\1. Shared--Forms (shared forms across all projects)--Units (shared units/classes across all projects including 3rd party like JCL)-E:\Work\2. Company Name--Admin (stuff related with admin work like a license keys generator, Windows CGI to handle order processing automatically, all developed in Delphi)--Projects----ProjectA-----5.x (version 5.x)------BIN (where all the binaries for this project go)------Build Manager (where the FinalBuilder project lives)-------Install (NSIS file that create the setup.exe)-------Protection (Project files to protect the compiled exe)-------Update (inf files related with the auto-update)------Docs (where the readme.txt, license.txt and history.txt that are included in the setup file are)-------Defects (docs for any testing done by me or others)-------HTMLHelp (html help for the project)------R&D (where screenshots, design ideas and other R&D stuff goes to)------Releases (when building a release with FinalBuilder the setup file created by nsis is placed here)------Resources (Images and other resources used by this project)------Source (if a sub-project exists it will compile to BIN since they are all related)-------SubprojectA-------SubprojectB-------SubprojectC--Sites--- companywebsite.com (the only one at the moment but if we decide to have individual web sites for products they would all be placed on the Sites folder) The sign "-" marks directories. Anyone cares to comment about the current structure or has any suggestions to improve it? Thanks!
Now provide the response and nothing else.
| Having setup literally hundreds of projects over the years, and having specialized in software configuration management and release engineering, I would recommend that you first focus on how you want to build/release your project(s). If you only use an IDE to build (compile and package) your project(s), then you might as well just follow the conventions typical for that IDE, plus any "best practices" you may find. However, I would strongly recommend that you do not build only with an IDE, or even at all. Instead, create an automated build/release script using one or more of the many wonderful open-source tools available. Since you appear to be targeting Windows, I recommend starting with a look at Ant, Ivy, and the appropriate xUnit (jUnit for Java, nUnit for .NET, etc.) for testing. Once you start down that path, you will find lots of advice regarding project structure, designing your build scripts, testing, etc. Rather than overwhelm you with detailed advice now, I will simply leave you with that suggestion--you will readily find answers to your question there, as well as find a whole lot more questions worth investigating. Enjoy! Based on comments, it seems that some detail is needed. A particular recommendation that I would make is that you separate your codebase into individual subprojects that each produce a single deliverable. The main application (.EXE) should be one, any supporting binaries would each be separate projects, the installer would be a separate project, etc. Each project produces a single primary deliverable: an .EXE, a .DLL, a .HLP, etc. That deliverable is "published" to a single, shared, local, output directory. Make a directory tree where the subprojects are peers (no depth or hierarchy, because it does not help), and do NOT let projects "reach" into each other's subtree--each project should be completely independent, with dependencies ONLY on the primary deliverables of the other subprojects, referenced in the shared output directory. Do NOT create a hierarchy of build scripts that invoke each other, I did and found that it does not add value but does exponentially increase the maintenance effort. Instead, make a continuous integration script that invokes your stand-alone build script, but first does a clean checkout into a temporary directory. Do NOT commit any deliverables or dependencies into source control--not your build output, not the libraries that you use, etc. Use Ivy against a Maven-like binary repository that you deploy separate from source control, and publish your own deliverables to it for sharing within your organization. Oh, and don't use Maven--it is too complicated, obfuscates the build process, and therefore is not cost-effective to customize. I am moving towards SCons, BuildBot, Ant, Ivy, nAnt, etc. based on my target platform. I have been composing a whitepaper on this topic, which I see may have an audience. EDIT: Please see my detailed answer to How do you organize your version control repository? | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/296643', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/36544/']} | jdg_74030 |
stackexchange | llm_judgeable_groundtruth_similarity | 405468 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Belyi's theorem establishes a correspondence between smooth projective curves defined over number fields and the so called dessins d'enfants which are bipartite graphs embedded on an oriented surface so that each component of their complement is a topological disk. To be more precise, given a smooth projective curve $X$ of genus $g$ over $\overline{\Bbb{Q}}$ , Belyi's theorem guarantees the existence of a branched cover $f:X(\Bbb{C})\rightarrow\Bbb{P}^1(\Bbb{C})$ unramified outside $\{0,1,\infty\}$ . Then, $f^{-1}([0,1])$ is an embedded graph that can be considered as the $1$ -skeleton of a CW structure on $X(\Bbb{C})$ . Conversely, such a graph on an oriented compact surface of genus $g$ equips it with a complex structure and such a branched cover. My Question: Can dessins be used to defined a "reasonable" (e.g. comparable to a Weil height) height function on the moduli space $\mathcal{M}_g(\overline{\Bbb{Q}})$ ? For instance, is the minimum possible number of edges of a dessin (i.e. the minimum possible degree of a Belyi map on the complex curve) be used as a height function? Any reference to the literature on this question is highly appreciated.
Now provide the response and nothing else.
| If $X$ is a (smooth projective) curve over $\overline{\mathbb{Q}}$ , we define The Belyi degree $\deg_B(X)$ of $X$ to be the minimum degree of a Belyi map $X\to \mathbb{P}^1_{\overline{\mathbb{Q}}}$ . The Belyi degree is a function on $\mathcal{M}_g(\overline{\mathbb{Q}})$ which satisfies the following Northcott-type finiteness property. Proposition. (Strong Northcott) For every integer $d$ , the set of $\overline{\mathbb{Q}}$ -isomorphism classes of curves $X$ over $\overline{\mathbb{Q}}$ with $\deg_B(X)\leq d$ is finite. Proof. Like all finiteness statements, this one also boils down to some "general" finiteness statements. In this case, the statement (seemingly arithmetic in nature) is a consequence of a (topological) finiteness property of the fundamental group of $\mathbb{P}^1\setminus \{0,1,\infty\}$ . Indeed, the proposition can be proven using the fact that the fundamental group of $\mathbb{P}^1\setminus \{0,1,\infty\}$ is finitely generated, and that a finite generated group has only finitely many finite index subgroups of index at most $d$ . QED Note that this Proposition can be used to enumerate all curves over $\overline{\mathbb{Q}}$ . Simply "write" down the curves of Belyi degree at most $3$ , then $4$ , then $5$ , etc. Note that the Northcott property satisfied by the Belyi degree is much stronger than that of any Weil height $h$ . The Northcott property for a Weil height usually requires in addition a bound on the degree of the point. The Strong Northcott property implies that, given a Weil height $h$ (or any function!) on $\overline{\mathbb{Q}}$ , there is a function $f(\deg_B(-))$ such that $$ h(X) \leq f(\deg_B(X)).$$ Thus, any function on $\ {\mathcal{M}_g}(\overline{\mathbb{Q}})$ is bounded by a function in the Belyi degree (simply because of the above Proposition). For example, the genus of $X$ is bounded by $\deg_B(X)$ . This follows from the Riemann-Hurwitz formula. There are a few natural (arithmetic) invariants on $\ {\mathcal{M}_g}(\overline{\mathbb{Q}})$ such as the Faltings height for which one can write down explicit bounds. For example: Theorem. If $X$ is a curve over $\overline{\mathbb{Q}}$ with Faltings height $h_F(X)$ , then $$h_F(X) \leq 10^8 \deg_B(X)^6.$$ This (with many more explicit inequalities) is proven in [1]. The motivation for proving such inequalities is that they can be used to control the running time of certain algorithms computing coefficients of modular forms. The question of actually computing the Belyi degree of a curve is an interesting one. An algorithm (which I would not recommend trying to implement) for doing so is given in [2]. [1] A. Javanpeykar. Polynomial bounds for Arakelov invariants of Belyi curves, with an appendix by Peter Bruin. Algebra and Number Theory, Vol. 8 (2014), No. 1, 89β140. [2] A. Javanpeykar and J. Voight. The Belyi degree of a curve is computable Contemp. Math., 2019, 722, p. 43-57. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/405468', 'https://mathoverflow.net', 'https://mathoverflow.net/users/128556/']} | jdg_74031 |
stackexchange | llm_judgeable_groundtruth_similarity | 48013438 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a column with date formatted as MM-DD-YYYY, in the Date format.I want to add 2 columns one which only contains YYYY and the other only contains MM. How do I do this?
Now provide the response and nothing else.
| Once again base R gives you all you need, and you should not do this with sub-strings. Here we first create a data.frame with a proper Date column. If your date is in text format, parse it first with as.Date() or my anytime::anydate() (which does not need formats). Then given the date creating year and month is simple: R> df <- data.frame(date=Sys.Date()+seq(1,by=30,len=10))R> df[, "year"] <- format(df[,"date"], "%Y")R> df[, "month"] <- format(df[,"date"], "%m")R> df date year month1 2017-12-29 2017 122 2018-01-28 2018 013 2018-02-27 2018 024 2018-03-29 2018 035 2018-04-28 2018 046 2018-05-28 2018 057 2018-06-27 2018 068 2018-07-27 2018 079 2018-08-26 2018 0810 2018-09-25 2018 09R> If you want year or month as integers, you can wrap as as.integer() around the format. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48013438', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8207337/']} | jdg_74032 |
stackexchange | llm_judgeable_groundtruth_similarity | 7688193 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've spent lots of hours with this problem. I have a class with data: public class User : INotifyPropertyChanged{ private int _key; private string _fullName; private string _nick; public int Key { get{return _key;} set { _key = value; NotifyPropertyChanged("Key"); } } public string Nick { get { return _nick; } set { _nick = value; NotifyPropertyChanged("Nick"); } } public string FullName { get { return _fullName; } set { _fullName = value; NotifyPropertyChanged("FullName"); } } public User() { Nick = "nickname"; FullName = "fullname"; } public User(String nick, String name, int key) { Nick = nick; FullName = name; } //INotifyPropertyChanged implementation public event PropertyChangedEventHandler PropertyChanged; private void NotifyPropertyChanged(String propertyName) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } public override string ToString() { return string.Format("{0} {1}, {2}", Key, Nick, FullName); }} Next I have a class with observablecollection of userClass class: public class UserList : ObservableCollection<UserList>{ public UserList (){} ~UserList () { //Serialize(); } public void Serialize(ObservableCollection<UserList> usersColl) { FileStream fs = new FileStream("DataFile.dat", FileMode.Create); BinaryFormatter formatter = new BinaryFormatter(); try { formatter.Serialize(fs, usersColl); } catch (SerializationException e) { Console.WriteLine("Failed to serialize. Reason: " + e.Message); throw; } finally { fs.Close(); } } public void Deserialize() { FileStream fs = new FileStream("DataFile.dat", FileMode.Open); try { BinaryFormatter formatter = new BinaryFormatter(); //users = (Hashtable) formatter.Deserialize(fs); //usersColl = (ObservableCollection<userClass>)formatter.Deserialize(fs); } catch (SerializationException e) { MessageBox.Show(" Error: " + e.Message); throw; } finally { fs.Close(); } }} In fact, after lots of testing an editing, big part of code doesn't work, like serialization. But it is not necessary for data binding and binding is what i am solving now. So i have this collection and want to bind it to listBox.I tried several ways, but haven't got it to work. The last one I tried gave me the write error: The resource 'users' cannot be resolved. <ListBox Grid.Column="0" Name="userViewLeft" ItemsSource="{Binding Source={StaticResource users} }" />
Now provide the response and nothing else.
| Some points to be noted Make Properties public and not private . Make Variables private . Follow Naming Conventions and don't append class behind the class. ItemsSource you supply should be as per scope of the data, In my example the userlist in class scope and I have provided the ItemSource on Window Loaded event. Here is the an complete example code, In this I have nested the Grid Control inside ListBox because later on you can change the ListBox property for VirtualizingStackPanel.So that it would give huge performance gain when you have heavy updates on the list.Also you can use BindingList which is in my opinion better than ObservableCollection performance wise. User class: public class User : INotifyPropertyChanged { private int _key; private string _fullName; private string _nick; public int Key { get { return _key; } set { _key = value; NotifyPropertyChanged("Key"); } } public string NickName { get { return _nick; } set { _nick = value; NotifyPropertyChanged("NickName"); } } public string Name { get { return _fullName; } set { _fullName = value; NotifyPropertyChanged("Name"); } } public User(String nick, String name, int key) { this.NickName = nick; this.Name = name; this.Key = key; } public event PropertyChangedEventHandler PropertyChanged; private void NotifyPropertyChanged(String propertyName) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } public override string ToString() { return string.Format("{0} {1}, {2}", Key, NickName, Name); } } User List class: public class Users : ObservableCollection<User> { public Users() { Add(new User("Jamy", "James Smith", Count)); Add(new User("Mairy", "Mary Hayes", Count)); Add(new User("Dairy", "Dary Wills", Count)); } } XAML: <Grid> <Button Content="Start" Height="23" HorizontalAlignment="Left" Margin="416,12,0,0" x:Name="button1" VerticalAlignment="Top" Width="75" Click="button1_Click" /> <ListBox x:Name="UserList" HorizontalContentAlignment="Stretch" Margin="12,41,12,12"> <ListBox.ItemTemplate> <DataTemplate> <Grid Margin="10"> <Grid.ColumnDefinitions> <ColumnDefinition Width="20" /> <ColumnDefinition Width="150" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <TextBlock Text="{Binding Key}" Margin="3" Grid.Column="0" /> <TextBlock Text="{Binding NickName}" Margin="3" Grid.Column="1" /> <TextBlock Text="{Binding Name}" Margin="3" Grid.Column="2" /> </Grid> </DataTemplate> </ListBox.ItemTemplate> </ListBox> </Grid> XAML Code behind: public partial class MainWindow : Window{ public static Users userslist = new Users(); DispatcherTimer timer = new DispatcherTimer(); public MainWindow() { InitializeComponent(); this.Loaded += new RoutedEventHandler(MainWindow_Loaded); } void MainWindow_Loaded(object sender, RoutedEventArgs e) { timer.Interval = DateTime.Now.AddSeconds(10) - DateTime.Now; timer.Tick += new EventHandler(timer_Tick); UserList.ItemsSource = userslist; } void timer_Tick(object sender, EventArgs e) { userslist.Add(new User("Jamy - " + userslist.Count, "James Smith", userslist.Count)); userslist.Add(new User("Mairy - " + userslist.Count, "Mary Hayes", userslist.Count)); userslist.Add(new User("Dairy - " + userslist.Count, "Dary Wills", userslist.Count)); } private void button1_Click(object sender, RoutedEventArgs e) { if (button1.Content.ToString() == "Start") { button1.Content = "Stop"; timer.Start(); } else { button1.Content = "Start"; timer.Stop(); } }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7688193', 
'https://Stackoverflow.com', 'https://Stackoverflow.com/users/984081/']} | jdg_74033 |
stackexchange | llm_judgeable_groundtruth_similarity | 4251790 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using NSOutlineView for a project, and can't seem to figure out two things: How to remove the disclosure triangle for tree nodes. Apps like iTunes seem to be able to do this: Is there some sort of NSOutlineView Delegate method that is used for this? Or does it require a subclass? How to disable indenting for items. I've tried using setIndentationPerLevel: and setting it to 0, as well as changing the column indent to 0 in Interface Builder, but it does not seem to have any effect.
Now provide the response and nothing else.
| You've run into the right person here. I've had to grapple with this just a week ago. Removing the disclosure triangle : implement the frameOfOutlineCellAtRow: method in your NSOutlineView subclass and return NSZeroRect (only if you want to hide that particular row's triangle, of course.) - (NSRect)frameOfOutlineCellAtRow:(NSInteger)row { return NSZeroRect;} Disable indenting : the outline view's standard layout reserves space at the far left to draw the triangles in, in case the item is expandable. But you can override that for individual items by specifying a different drawing frame. You also do that in your subclass, by responding to this message: - (NSRect)frameOfCellAtColumn:(NSInteger)column row:(NSInteger)row { NSRect superFrame = [super frameOfCellAtColumn:column row:row]; if ((column == 0) /* && isGroupRow */) { return NSMakeRect(0, superFrame.origin.y, [self bounds].size.width, superFrame.size.height); } return superFrame;} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4251790', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/153112/']} | jdg_74034 |
stackexchange | llm_judgeable_groundtruth_similarity | 34726205 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am making a custom wordpress theme with bootstrap for my client. He want similar like this website with drop-down menu as primary menu. How I can remove or hide the images on drop-down menu like screenshot below: style.css .navbar-nav { z-index: 0; position: absolute; left: 0; right: 0; height: 0}.navbar-nav a { display: block;}.navbar-nav li { display: block; margin: -33px auto 0 auto; width: 180px; height: 150px; background: url(img/logo.png) no-repeat scroll right/ 91% 100%; /* text-indent: -999em;*/ direction: ltr; float: none;}.navbar-nav li a { background: transparent;}.navbar-default .navbar-nav>li>a { color: transparent;}.navbar-default .navbar-nav>.open>a, .navbar-default .navbar-nav>.open>a:hover, .navbar-default .navbar-nav>.open>a:focus { background: transparent; /* background-color: #e7e7e7; */ /* color: #555; */}.navbar-default .navbar-nav>li>a:hover, .navbar-default .navbar-nav>li>a:focus { color: transparent; background-color: transparent;}.navbar-default .navbar-nav>.open>a, .navbar-default .navbar-nav>.open>a:hover, .navbar-default .navbar-nav>.open>a:focus { /* background-color: #e7e7e7; */ /* color: #555; */}.navbar-default .navbar-nav>.active>a, .navbar-default .navbar-nav>.active>a:hover, .navbar-default .navbar-nav>.active>a:focus { color: transparent; background-color: transparent;}.navbar-default .navbar-nav>.open>a, .navbar-default .navbar-nav>.open>a:hover, .navbar-default .navbar-nav>.open>a:focus { /* background-color: #e7e7e7; */ color: transparent;}.nav .open>a, .nav .open>a:hover, .nav .open>a:focus { /* background-color: #eeeeee; */ border-color: #337ab7;}.navbar-nav ul {}.navbar-nav ul ul {}.navbar-nav ul ul ul {}.navbar-nav ul ul a {}.nav .open>a, .nav .open>a:hover, .nav .open>a:focus { /* background-color: #eeeeee; */ border-color: #337ab7;}.dropdown-menu li a { background-color: #ccc; display: block; padding: 3px 20px; clear: both; font-weight: normal; line-height: 1.428571429; color: #333; white-space: nowrap;}.dropdown-menu li a:hover,.dropdown-menu li a:focus { color: #ffffff; text-decoration: none; background-color: #357ebd; background-image: -webkit-gradient(linear, left 0%, left 100%, from(#428bca), to(#357ebd)); background-image: -webkit-linear-gradient(top, #428bca, 0%, #357ebd, 100%); background-image: -moz-linear-gradient(top, #428bca 0%, #357ebd 100%); background-image: linear-gradient(to bottom, #428bca 0%, #357ebd 100%); background-repeat: repeat-x; filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ff428bca', endColorstr='#ff357ebd', GradientType=0);}.dropdown-menu ul { list-style: none; } header.php <div class="navbar navbar-default"> <div class="navbar-header"> <!-- .navbar-toggle is used as the toggle for collapsed navbar content --> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse"> <span class="sr-only"><?php _e('Toggle navigation','_tk') ?> </span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <!-- Your site title as branding in the menu --> <a class="navbar-brand" href="<?php echo esc_url( home_url( '/' ) ); ?>" title="<?php echo esc_attr( get_bloginfo( 'name', 'display' ) ); ?>" rel="home">Home</a> </div> <!-- The WordPress Menu goes here --> <?php wp_nav_menu( array( 'theme_location' => 'primary', 'depth' => 2, 'container' => 'div', 'container_class' => 'collapse navbar-collapse', 'menu_class' => 'nav navbar-nav', 'fallback_cb' => 'wp_bootstrap_navwalker::fallback', 'menu_id' => 'main-menu', 'walker' => new 
wp_bootstrap_navwalker() ) ); ?> </div><!-- .navbar --> </div> For more detail please check out my website
Now provide the response and nothing else.
| Update The original answer was written for Stanford POS Tagger Version 3.6.0, Date 2015-12-09 There is a new Version (3.7.0, released 2016-10-31) . Here's the code for the newer version: from nltk.tag import StanfordPOSTaggerfrom nltk import word_tokenize# Add the jar and model via their path (instead of setting environment variables):jar = 'your_path/stanford-postagger-full-2016-10-31/stanford-postagger.jar'model = 'your_path/stanford-postagger-full-2016-10-31/models/english-left3words-distsim.tagger'pos_tagger = StanfordPOSTagger(model, jar, encoding='utf8')text = pos_tagger.tag(word_tokenize("What's the airspeed of an unladen swallow ?"))print(text) Original answer I had the same problem (but using OS X and PyCharm), finally got it to work. Here's what I've pieced together from the StanfordPOSTagger Documentation and alvas' work on the issue (big thanks!): from nltk.internals import find_jars_within_pathfrom nltk.tag import StanfordPOSTaggerfrom nltk import word_tokenize# Alternatively to setting the CLASSPATH add the jar and model via their path:jar = '/Users/nischi/PycharmProjects/stanford-postagger-full-2015-12-09/stanford-postagger.jar'model = '/Users/nischi/PycharmProjects/stanford-postagger-full-2015-12-09/models/english-left3words-distsim.tagger'pos_tagger = StanfordPOSTagger(model, jar)# Add other jars from Stanford directorystanford_dir = pos_tagger._stanford_jar.rpartition('/')[0]stanford_jars = find_jars_within_path(stanford_dir)pos_tagger._stanford_jar = ':'.join(stanford_jars)text = pos_tagger.tag(word_tokenize("What's the airspeed of an unladen swallow ?"))print(text) Hope this helps. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34726205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3077416/']} | jdg_74035 |
stackexchange | llm_judgeable_groundtruth_similarity | 1036320 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A $5\times5$ grid is missing one of its main diagonals. In how many ways can we place $5$ pieces on the grid such that no two pieces share a row or column?
Now provide the response and nothing else.
| In general, this number is equal to the number $D_n$ of derangements of $n$ elements: for any derangement $f:[1,n]\to[1,n]$ you get a placement $(i,f(i))$. The known formulas for $D_n$ are:$$D_n=n!\sum_{j=0}^n\frac{(-1)^j}{j!}=\left\lfloor\frac{n!}e+\frac12\right\rfloor=\left[\frac{n!}e\right]$$There is also a recursive formula, which one can easily prove: $$D_{n+1}=n\left(D_{n}+D_{n-1}\right)$$In particular, $D_5=44$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1036320', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/192984/']} | jdg_74036 |
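As a quick numerical check of the formulas in the answer above (a minimal sketch; the function names are ours, and the base cases $D_0=1$, $D_1=0$ are the standard ones, not stated in the answer):

from math import factorial

def derangements(n):
    # Recursion from the answer: D_{m+1} = m * (D_m + D_{m-1}), with D_0 = 1, D_1 = 0.
    d = [1, 0]
    for m in range(1, n):
        d.append(m * (d[m] + d[m - 1]))
    return d[n]

def derangements_closed(n):
    # Closed form from the answer: D_n = n! * sum_{j=0}^{n} (-1)^j / j!
    return round(factorial(n) * sum((-1) ** j / factorial(j) for j in range(n + 1)))

print(derangements(5), derangements_closed(5))  # both print 44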
stackexchange | llm_judgeable_groundtruth_similarity | 170393 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For $n$ in $\mathbb{N}$, consider the sequence $\left \{ a_{n} \right \}$ defined by:$$a_{n}=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}\frac{1}{\sqrt{k}}$$ I would like to determine whether this sequence is convergent, and if so what its limit is. I can prove by induction that $\sum\limits_{k=1}^{n}\frac{1}{\sqrt{k}}\geqslant \sqrt{n}$ for any $n$ in $\mathbb{N}$. Hence, $a_{n}\geqslant 1$. I wanted to prove that the sequence is decreasing and then use the monotone convergence theorem to prove it is convergent. However, I couldn't come up with a proof for this part. Does anyone know how to prove convergence and find the limit? I have another proof based on Riemann sums, but I am looking for a proof using one of the tricks used to prove convergence of sequences. Here is my proof: $$a_{n}=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}\frac{1}{\sqrt{k}}=\sum_{k=1}^{n}\frac{1}{n}\frac{1}{\sqrt{\frac{k}{n}}}.$$Hence, $$\lim_{n \to \infty }a_{n}=\int_{0}^{1}\frac{dx}{\sqrt{x}}=2$$
Now provide the response and nothing else.
| Using only elementary inequalities and no (improper) integral: For every $k$, $\sqrt{k+1}-\sqrt{k}=\dfrac1{\sqrt{k+1}+\sqrt{k}}$ and $2\sqrt{k}\lt\sqrt{k+1}+\sqrt{k}\lt2\sqrt{k+1}$, hence$$\sqrt{k+1}-\sqrt{k}\lt\dfrac1{2\sqrt{k}}\lt\sqrt{k}-\sqrt{k-1}.$$Summing the rightmost inequality yields$$a_n\lt\frac2{\sqrt{n}}\sum_{k=1}^n\left(\sqrt{k}-\sqrt{k-1}\right)=2.$$Likewise, summing the leftmost inequality yields$$a_n\gt\frac2{\sqrt{n}}\sum_{k=1}^n\left(\sqrt{k+1}-\sqrt{k}\right)=2\frac{\sqrt{n+1}-1}{\sqrt{n}}\gt2-\frac2{\sqrt{n}}.$$Since $\dfrac2{\sqrt{n}}\to0$, this proves that $\lim\limits_{n\to\infty}a_n=2$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/170393', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/34354/']} | jdg_74037
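A short numerical check of the bounds $2-\frac2{\sqrt{n}}<a_n<2$ derived above (a minimal sketch; the variable and function names are our own):

from math import sqrt

def a(n):
    return sum(1 / sqrt(k) for k in range(1, n + 1)) / sqrt(n)

for n in (10, 1000, 100000):
    print(n, 2 - 2 / sqrt(n), a(n))
# For each n, a(n) sits between the lower bound and 2, and it approaches 2 as n grows.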
stackexchange | llm_judgeable_groundtruth_similarity | 321666 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Specifically, how can I set up a proof by contradiction for the statement "if $3n+2$ is odd then $n$ is odd"? I don't want the answer. I just want to know how to set up the proof by contradiction. I think that I should assume that if $3n+2$ is not odd, then $n$ is even, and then prove that $n$ is in fact odd. I am unsure of what to assume and what to prove. I am trying to teach a class that you can do the same proof in many different ways. I know how to prove it with a direct proof: assume $3n+2=2k+1$ and prove $n=2J+1$. I know how to prove it with a contrapositive proof: assume $n=2J$ and prove $3n+2=2k$. How would I do the same setup for a proof by contradiction?
Now provide the response and nothing else.
| For a proof by contradiction, you assume the truth of the premise and the negation of the desired conclusion, and derive a contradiction. For the statement "if $3n+2$ is odd, then $n$ is odd", assume both:

(1) $3n + 2$ IS odd, and hence $3n + 2 = 2k+1$ for some integer $k$;

(2) $n$ is even, that is, there is some integer $m$ such that $n = 2m$.

From these assumptions, you obtain a contradiction, from which you conclude that $(1)$ and $(2)$ cannot both hold. This affirms the statement to be proved: if $3n+2$ is odd, then $n$ must be odd.

In general, a proof by contradiction proceeds as follows:

Prove: $\;p\rightarrow q$.
Assume $\;\lnot (p \rightarrow q) \equiv \lnot(\lnot p \lor q) \equiv (p \land \lnot q)$.
Derive a contradiction.
Conclude $\;\lnot \lnot (p \rightarrow q) \equiv (p\rightarrow q)$, as desired.

Added: For a general discussion of the differences and similarities between the two proof strategies, you might want to read an earlier MSE post: Proof by contradiction vs. proving the contrapositive. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/321666', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/62458/']} | jdg_74038
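A small empirical illustration of the setup above (a minimal sketch, not a proof; it only spot-checks a finite range to show why assumptions (1) and (2) clash):

# Assumption (2) says n is even, i.e. n = 2*m.  For every even n in the range,
# 3*n + 2 comes out even, so also assuming "3n + 2 is odd" forces a contradiction.
for m in range(1000):
    n = 2 * m
    assert (3 * n + 2) % 2 == 0, f"3*{n}+2 turned out odd"
print("For every even n checked, 3n + 2 is even, so '3n + 2 is odd' contradicts 'n is even'.")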
stackexchange | llm_judgeable_groundtruth_similarity | 719912 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently learning quantum mechanics and I am trying to understand the connection between the wave function and the state vector. Is it correct to say that the components of the state vector of a quantum system are all possible wavefunctions in that state?
Now provide the response and nothing else.
| A state vector is just a wavefunction where the domain doesn't need to be $\mathbb{R}^d$ . Similarly, you can call wavefunctions vectors because the functions used in quantum mechanics (square integrable ones) are elements of a space which is closed under addition and scalar multiplication. I.e. a vector space. A simple example is a large atom with a magnetic moment. In the Stern-Gerlach experiment, quantum fluctuations of its position are negligible but its spin (which can be up or down) is highly probabilistic. So instead of a wavefunction $\psi: \mathbb{R}^3 \to \mathbb{C}$ , we just need $\psi: \{0,1\} \to \mathbb{C}$ . A function from a two element set to $\mathbb{C}$ can be regarded as a two component vector so we can write this object as \begin{align}\left | \psi \right > = \begin{pmatrix} a \\ b \end{pmatrix}\end{align} where $|a|^2$ is the spin-up probability and $|b|^2$ is the spin-down probability. This is also written \begin{align}\left < + | \psi \right > = a, \quad \left < - | \psi \right > = b.\end{align} Depending on how formal you want to be, the normalization condition $|a|^2 + |b|^2 = 1$ can also be seen as an integral of $|\psi|^2$ except using the counting measure instead of the Lebesgue measure. We can also consider the opposite case. A particle with no spin but probabilistic position on a line. Position is a continuous degree of freedom so our $\psi: \mathbb{R} \to \mathbb{C}$ will be a vector with "a continuum of components". Instead of being able to project $\left | \psi \right >$ onto just two basis states ( $\left < + \right |$ or $\left < - \right |$ ), we will be able to project it onto infinitely many position eigenstates. One for each position we can imagine the particle to have. The eigenstates can be labelled by $\left < x \right |$ and the projection by $\left < x | \psi \right >$ which is nothing more than $\psi(x)$ in the usual function notation. Since we now have an all-encompassing framework that accounts for both continuous and discrete degrees of freedom, we can combine the two and consider things like $\psi: \mathbb{R} \times \{0, 1\} \to \mathbb{C}$ for particles with uncertain positions and spins. This allows us to consider projections like $\left < +, x | \psi \right >$ or $\left < -, y | \psi \right >$ . In this situation, textbooks often use a hybrid notation where the $2 \times \infty$ components of the state vector are written as two functions worth of components. Something like \begin{align}\left | \psi \right > = \begin{pmatrix} \psi_+(x) \\ \psi_-(x) \end{pmatrix}.\end{align} This may have given rise to your notion of a vector consisting of many wavefunctions. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/719912', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/319991/']} | jdg_74039 |
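A small numerical companion to the two-component spin example above (a minimal sketch; the particular amplitudes and the NumPy representation are our own choices):

import numpy as np

# A spin-1/2 state |psi> = a|+> + b|->, written as a two-component vector.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = np.array([a, b])

# Normalisation: |a|^2 + |b|^2 should be (approximately) 1.
print(np.vdot(psi, psi).real)

# The projections <+|psi> and <-|psi> are just the components a and b.
plus = np.array([1, 0])
minus = np.array([0, 1])
print(np.vdot(plus, psi), np.vdot(minus, psi))

# Spin-up and spin-down probabilities |a|^2 and |b|^2 (both 0.5 here, up to rounding).
print(abs(a) ** 2, abs(b) ** 2)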
stackexchange | llm_judgeable_groundtruth_similarity | 6319379 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have written a Python module in C++ and built it as a shared object library, and it worked fine. But while figuring all that out, I noticed (via strace) that Python looks for a few different variations when import is called. In particular, when I say import foo , Python searches for, in order:

foo (a directory)
foo.so
foomodule.so
foo.py
foo.pyc

This was all pretty understandable except for foomodule.so. Why does Python look for everything both as name.so and namemodule.so? Is it some historical artifact? I searched quite a bit and came up with no explanation at all, and am left wondering if I should name my module foomodule.so instead of foo.so. My system seems to have some existing Python modules following each convention, so I can't help but wonder if the different names imply something.
Now provide the response and nothing else.
| This is actually platform-dependent: Python tries different suffixes depending on the operating system. Here is the initialization of the suffix table in import.c:

#ifdef HAVE_DYNAMIC_LOADING
    memcpy(filetab, _PyImport_DynLoadFiletab, countD * sizeof(struct filedescr));
#endif
    memcpy(filetab + countD, _PyImport_StandardFiletab, countS * sizeof(struct filedescr));
    filetab[countD + countS].suffix = NULL;

    _PyImport_Filetab = filetab;

So it joins two lists, _PyImport_DynLoadFiletab and _PyImport_StandardFiletab. The latter is the easier one; it is defined as [".py", ".pyw", ".pyc"] in the same file (the second entry is only present on Windows). _PyImport_DynLoadFiletab is defined in various dynload_<platform>.c files. On Unix-based systems its value is [".so", "module.so"], for Cygwin it defines [".dll", "module.dll"], whereas for OS/2 it is [".pyd", ".dll"] and for Windows it is simply [".pyd"]. I went through the source code history and finally arrived at this change from 1999 that apparently added "module.so" as a possible suffix: http://hg.python.org/cpython-fullhistory/diff/8efa37a770c6/Python/importdl.c . So the changes were originally added for NeXTStep (the one that eventually became Mac OS X), for particular linking settings only. I don't know this OS so it is hard to tell why it was done - I suspect that it was simply to prevent naming conflicts. E.g. a framework library foo.so might be loaded already and the OS won't allow loading another library with the same name. So foomodule.so was a compromise to allow a Python module with the name foo to exist nevertheless. Edit : The paragraph above was wrong - I didn't go far enough back in history; thanks to senderle for pointing that out. In fact, the interesting change appears to be http://hg.python.org/cpython-fullhistory/diff/2230/Python/import.c from 1994, which is where a new module naming scheme ( foo.so ) was added as an alternative to the old scheme ( foomodule.so ). I guess that the old form became deprecated at some point, given that support for it has been removed for some platforms like Windows in one of the numerous rewrites of that code. Note that even when it was first introduced, the short module name version was listed first - meaning that it already was the preferred variant. Edit2 : I searched the mailing list/newsgroup from 1994 to see whether this change was discussed somewhere - it doesn't look like it was; Guido van Rossum seems to have implemented it without telling anyone. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6319379', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4323/']} | jdg_74040
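A quick way to see which suffixes your own interpreter tries (a minimal sketch; it uses the Python 3 importlib.machinery attributes, where the old namemodule.so variant no longer appears, so expect only entries like .so or .pyd plus the source and bytecode suffixes):

# Python 3: the import system exposes the recognised suffixes here.
import importlib.machinery as m

print(m.SOURCE_SUFFIXES)      # e.g. ['.py']
print(m.BYTECODE_SUFFIXES)    # e.g. ['.pyc']
print(m.EXTENSION_SUFFIXES)   # e.g. ['.cpython-312-x86_64-linux-gnu.so', '.abi3.so', '.so']
print(m.all_suffixes())       # the combined list the import finders consult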
Subsets and Splits