stackexchange
llm_judgeable_groundtruth_similarity
4048345
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\int_{π/3}^{π/4}\frac{\sin x\cos x}{1 - 2\cos (2x)}dx= - \frac{\log 2}{8}$$ It tells me to prove this. I input both sides and they both evaluate to -0.0866. Is this enough to prove it, or do I have to show that the left side equals the right, and if so, how do you do this? Any help is much appreciated. Thank you! Now provide the response and nothing else.
Let's consider that $\sin{x}\cos{x}=\frac{1}{2}\sin{2x}$ and $(-\cos{2x})'=2\sin{2x}$. So: $$\int_{π/3}^{π/4}\frac{\sin x\cos x}{1 - 2\cos (2x)}dx= \int_{π/3}^{π/4}\frac{\frac{1}{2}\sin{2x}}{1 - 2\cos (2x)}dx=\frac{1}{8}\log(1-2\cos{2x})\Big\lvert_{\pi/3}^{\pi/4}=0-\frac{\log{2}}{8}$$ $\textbf{EDIT:}$ To be clearer, let's observe the following. You would want the numerator to be the derivative of $(1-2\cos{2x})$ in order to solve the integral with the logarithm as shown above, but the numerator is not exactly the derivative of the denominator. Why? $$(1-2\cos{2x})'=(-2\cos{2x})'=-2\cdot(\cos{2x})'=4\sin{2x}$$ But in the numerator we have $\frac{1}{2}\sin{2x}$, so we can multiply and divide by 4 to obtain the desired form, in the following way: $$\int_{π/3}^{π/4}\frac{\frac{1}{2}\sin{2x}}{1 - 2\cos (2x)}dx=\int_{π/3}^{π/4}\frac{\frac{\color{red}4}{\color{red}4 \cdot 2}\sin{2x}}{1 - 2\cos (2x)}dx=$$ $$=\frac{1}{8}\int_{π/3}^{π/4}\frac{4\sin{2x}}{1 - 2\cos (2x)}dx=\Big[\frac{1}{8}\log{(1-2\cos{2x})}\Big]\Big\lvert_{\pi/3}^{\pi/4}$$
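On the asker's original point: a numerical evaluation is a useful sanity check but not a proof; only the antiderivative argument above proves the identity. A minimal sketch of such a check (plain Python with Simpson's rule, not part of the original answer):

```python
import math

# Integrand of the problem: f(x) = sin(x)cos(x) / (1 - 2cos(2x))
def f(x):
    return math.sin(x) * math.cos(x) / (1.0 - 2.0 * math.cos(2.0 * x))

# Composite Simpson's rule; n must be even. It also works when b < a,
# as here, because h is then negative and the result picks up the sign.
def simpson(g, a, b, n=1000):
    h = (b - a) / float(n)
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

value = simpson(f, math.pi / 3, math.pi / 4)
# value should be close to -log(2)/8, i.e. about -0.0866
```

Agreement to many decimal places strongly suggests the identity holds, but the logarithmic antiderivative is what actually establishes it.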
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4048345', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_80041
stackexchange
llm_judgeable_groundtruth_similarity
57049191
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In a nutshell: My implementation of the Wave Function Collapse algorithm in Python 2.7 is flawed but I'm unable to identify where the problem is located. I would need help to find out what I'm possibly missing or doing wrong. What is the Wave Function Collapse algorithm? It is an algorithm written in 2016 by Maxim Gumin that can generate procedural patterns from a sample image. You can see it in action here (2D overlapping model) and here (3D tile model). Goal of this implementation: To boil down the algorithm (2D overlapping model) to its essence and avoid the redundancies and clumsiness of the original C# script (surprisingly long and difficult to read). This is an attempt to make a shorter, clearer and more pythonic version of this algorithm. Characteristics of this implementation: I'm using Processing (Python mode), a software for visual design that makes image manipulation easier (no PIL, no Matplotlib, ...). The main drawbacks are that I'm limited to Python 2.7 and can NOT import numpy. Unlike the original version, this implementation: is not object oriented (in its current state), making it easier to understand / closer to pseudo-code; uses 1D arrays instead of 2D arrays; uses array slicing for matrix manipulation. The Algorithm (as I understand it) 1/ Read the input bitmap, store every NxN pattern and count their occurrences. (Optional: augment pattern data with rotations and reflections.) For example, when N = 3: 2/ Precompute and store every possible adjacency relation between patterns. In the example below, patterns 207, 242, 182 and 125 can overlap the right side of pattern 246. 3/ Create an array with the dimensions of the output (called W for wave). Each element of this array is an array holding the state (True or False) of each pattern. 
For example, let's say we count 326 unique patterns in input and we want our output to be of dimensions 20 by 20 (400 cells). Then the "Wave" array will contain 400 (20x20) arrays, each of them containing 326 boolean values. At start, all booleans are set to True because every pattern is allowed at any position of the Wave. W = [[True for pattern in xrange(len(patterns))] for cell in xrange(20*20)] 4/ Create another array with the dimensions of the output (called H). Each element of this array is a float holding the "entropy" value of its corresponding cell in output. Entropy here refers to Shannon entropy and is computed based on the number of valid patterns at a specific location in the Wave. The more valid patterns a cell has (set to True in the Wave), the higher its entropy is. For example, to compute the entropy of cell 22 we look at its corresponding index in the wave (W[22]) and count the number of booleans set to True. With that count we can now compute the entropy with the Shannon formula. The result of this calculation will then be stored in H at the same index H[22]. At start, all cells have the same entropy value (same float at every position in H) since all patterns are set to True, for each cell. H = [entropyValue for cell in xrange(20*20)] These 4 steps are introductory steps, necessary to initialize the algorithm. Now starts the core of the algorithm: 5/ Observation: Find the index of the cell with the minimum nonzero entropy (note that at the very first iteration all entropies are equal so we need to pick the index of a cell randomly). Then, look at the still-valid patterns at the corresponding index in the Wave and select one of them randomly, weighted by the frequency that pattern appears in the input image (weighted choice). For example if the lowest value in H is at index 22 (H[22]), we look at all the patterns set to True at W[22] and pick one randomly based on the number of times it appears in the input. 
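The weighted pick just described can be sketched with cumulative sums (Python 2.7 has no random.choices; the helper below and its names are my own, not from the script):

```python
import bisect
import random

# Pick one entry of `indices` at random, weighted by the matching
# non-negative `weights` (e.g. pattern occurrence counts).
def weighted_choice(indices, weights):
    cumsum = []
    total = 0.0
    for w in weights:
        total += w
        cumsum.append(total)
    r = random.random() * total          # uniform in [0, total)
    return indices[bisect.bisect_right(cumsum, r)]
```

Zero-weight entries can never be selected, which is exactly what the Wave needs: patterns already set to False must contribute weight 0.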
(Remember at step 1 we've counted the number of occurrences for each pattern.) This ensures that patterns appear with a similar distribution in the output as in the input. 6/ Collapse: We now assign the index of the selected pattern to the cell with the minimum entropy. Meaning that every pattern at the corresponding location in the Wave is set to False except for the one that has been chosen. For example if pattern 246 in W[22] was set to True and has been selected, then all other patterns are set to False. Cell 22 is assigned pattern 246. In output, cell 22 will be filled with the first color (top left corner) of pattern 246 (blue in this example). 7/ Propagation: Because of adjacency constraints, that pattern selection has consequences on the neighboring cells in the Wave. The arrays of booleans corresponding to the cells on the left and right of, above and below the recently collapsed cell need to be updated accordingly. For example if cell 22 has been collapsed and assigned pattern 246, then W[21] (left), W[23] (right), W[2] (up) and W[42] (down) have to be modified so that they keep as True only the patterns that are adjacent to pattern 246. For example, looking back at the picture of step 2, we can see that only patterns 207, 242, 182 and 125 can be placed on the right of pattern 246. That means that W[23] (right of cell 22) needs to keep patterns 207, 242, 182 and 125 as True and set all other patterns in the array to False. If these patterns are not valid anymore (already set to False because of a previous constraint) then the algorithm is facing a contradiction. 8/ Updating entropies: Because a cell has been collapsed (one pattern selected, set to True) and its surrounding cells updated accordingly (setting non-adjacent patterns to False), the entropy of all these cells has changed and needs to be computed again. (Remember that the entropy of a cell is correlated to the number of valid patterns it holds in the Wave.) 
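The entropy bookkeeping of steps 4 and 8 boils down to the Shannon formula over the frequencies of the still-valid patterns. A hypothetical helper (my own naming) showing the form used in the script, which is algebraically the same as -sum(p*log(p)) with p = w/total:

```python
import math

# Shannon entropy of a cell, given the occurrence counts (weights)
# of the patterns still marked True for that cell.
def cell_entropy(weights):
    total = float(sum(weights))
    # log(total) - sum(w*log(w))/total  ==  -sum((w/total) * log(w/total))
    return math.log(total) - sum(w * math.log(w) for w in weights) / total
```

Equal weights give the maximum log(n); a single remaining pattern gives 0, which is why a collapsed cell drops out of the minimum-entropy search.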
In the example, the entropy of cell 22 is now 0 (H[22] = 0, because only pattern 246 is set to True at W[22]) and the entropy of its neighboring cells has decreased (patterns that were not adjacent to pattern 246 have been set to False). By now the algorithm arrives at the end of the first iteration and will loop over steps 5 (find cell with minimum non-zero entropy) to 8 (update entropies) until all cells are collapsed. My script You'll need Processing with Python mode installed to run this script. It contains around 80 lines of code (short compared to the ~1000 lines of the original script) that are fully annotated so it can be rapidly understood. You'll also need to download the input image and change the path on line 16 accordingly.

from collections import Counter
from itertools import chain, izip
import math

d = 20  # dimensions of output (array of dxd cells)
N = 3   # dimensions of a pattern (NxN matrix)

Output = [120 for i in xrange(d*d)]  # color value for each cell in the output (at start each cell is grey = 120)

def setup():
    size(800, 800, P2D)
    textSize(11)

    global W, H, A, freqs, patterns, directions, xs, ys, npat

    img = loadImage('Flowers.png')  # path to the input image
    iw, ih = img.width, img.height  # dimensions of input image
    xs, ys = width//d, height//d    # dimensions of cells (squares) in output
    kernel = [[i + n*iw for i in xrange(N)] for n in xrange(N)]  # NxN matrix to read every pattern contained in the input image
    directions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # (x, y) tuples to access the 4 neighboring cells of a collapsed cell
    all = []  # array list to store all the patterns found in input

    # Stores the different patterns found in input
    for y in xrange(ih):
        for x in xrange(iw):

            ''' The one-liner below (cmat) creates a NxN matrix with (x, y) being its top left corner.
                This matrix will wrap around the edges of the input image.
                The whole snippet reads every NxN part of the input image and stores the associated colors.
                Each NxN part is called a 'pattern' (of colors). Each pattern can be rotated or flipped (not mandatory). '''
            cmat = [[img.pixels[((x+n)%iw)+(((a[0]+iw*y)/iw)%ih)*iw] for n in a] for a in kernel]

            # Storing rotated patterns (90deg, 180deg, 270deg, 360deg)
            for r in xrange(4):
                cmat = zip(*cmat[::-1])  # +90deg rotation
                all.append(cmat)

                # Storing reflected patterns (vertical/horizontal flip)
                all.append(cmat[::-1])
                all.append([a[::-1] for a in cmat])

    # Flatten pattern matrices + count occurrences
    ''' Once every pattern has been stored,
        - we flatten them (convert to 1D) for convenience
        - count the number of occurrences for each one of them (one pattern can be found multiple times in input)
        - select unique patterns only
        - store them from less common to most common (needed for weighted choice) '''
    all = [tuple(chain.from_iterable(p)) for p in all]  # flatten pattern matrices (NxN --> 1D)
    c = Counter(all)
    freqs = sorted(c.values())  # number of occurrences for each unique pattern, in sorted order
    npat = len(freqs)           # number of unique patterns
    total = sum(freqs)          # sum of frequencies of unique patterns
    patterns = [p[0] for p in c.most_common()[:-npat-1:-1]]  # unique patterns sorted from less common to most common

    # Computes entropy
    ''' The entropy of a cell is correlated to the number of possible patterns that cell holds.
        The more valid patterns a cell has (set to 'True'), the higher its entropy is.
        At start, every pattern is set to 'True' for each cell.
        So each cell holds the same high entropy value. '''
    ent = math.log(total) - sum(map(lambda x: x * math.log(x), freqs)) / total

    # Initializes the 'wave' (W), entropy (H) and adjacencies (A) array lists
    W = [[True for _ in xrange(npat)] for i in xrange(d*d)]  # every pattern is set to 'True' at start, for each cell
    H = [ent for i in xrange(d*d)]  # same entropy for each cell at start (every pattern is valid)
    A = [[set() for dir in xrange(len(directions))] for i in xrange(npat)]  # see below for explanation

    # Compute pattern compatibilities (check if some patterns are adjacent, if so -> store them based on their location)
    ''' EXAMPLE: If pattern index 42 can be placed to the right of pattern index 120,
        we will store this adjacency rule as follows: A[120][1].add(42)
        Here '1' stands for 'right' or 'East'/'E'
        0 = left or West/W
        1 = right or East/E
        2 = up or North/N
        3 = down or South/S '''

    # Comparing patterns to each other
    for i1 in xrange(npat):
        for i2 in xrange(npat):
            for dir in (0, 2):
                if compatible(patterns[i1], patterns[i2], dir):
                    A[i1][dir].add(i2)
                    A[i2][dir+1].add(i1)

def compatible(p1, p2, dir):
    ''' NOTE: what is referred to as 'columns' and 'rows' below is not really columns and rows
        since we are dealing with 1D patterns. Remember here N = 3. '''

    # If the first two columns of pattern 1 == the last two columns of pattern 2
    # --> pattern 2 can be placed to the left (0) of pattern 1
    if dir == 0:
        return [n for i, n in enumerate(p1) if i%N!=2] == [n for i, n in enumerate(p2) if i%N!=0]

    # If the first two rows of pattern 1 == the last two rows of pattern 2
    # --> pattern 2 can be placed on top (2) of pattern 1
    if dir == 2:
        return p1[:6] == p2[-6:]

def draw():  # Equivalent of a 'while' loop in Processing (looped over and over until all cells are collapsed)
    global H, W, grid

    ### OBSERVATION
    # Find cell with minimum non-zero entropy (not collapsed yet)
    ''' Randomly select 1 cell at the first iteration (when all entropies are equal),
        otherwise select the cell with minimum non-zero entropy. '''
    emin = int(random(d*d)) if frameCount <= 1 else H.index(min(H))

    # Stopping mechanism
    ''' When the 'H' array is full of 'collapsed' cells --> stop iteration '''
    if H[emin] == 'CONT' or H[emin] == 'collapsed':
        print 'stopped'
        noLoop()
        return

    ### COLLAPSE
    # Weighted choice of a pattern
    ''' Among the patterns available in the selected cell (the one with min entropy),
        select one pattern randomly, weighted by the frequency that pattern appears in the input image.
        With Python 2.7 there is no random.choice(x, weight), so we have to hand-code the weighted choice. '''
    lfreqs = [b * freqs[i] for i, b in enumerate(W[emin])]  # frequencies of the patterns available in the selected cell
    weights = [float(f) / sum(lfreqs) for f in lfreqs]      # normalizing these frequencies
    cumsum = [sum(weights[:i]) for i in xrange(1, len(weights)+1)]  # cumulative sums of normalized frequencies
    r = random(1)
    idP = sum([cs < r for cs in cumsum])  # index of selected pattern

    # Set all patterns to False except for the one that has been chosen
    W[emin] = [0 if i != idP else 1 for i, b in enumerate(W[emin])]

    # Marking selected cell as 'collapsed' in H (array of entropies)
    H[emin] = 'collapsed'

    # Storing first color (top left corner) of the selected pattern at the location of the collapsed cell
    Output[emin] = patterns[idP][0]

    ### PROPAGATION
    # For each neighbor (left, right, up, down) of the recently collapsed cell
    for dir, t in enumerate(directions):
        x = (emin%d + t[0])%d
        y = (emin/d + t[1])%d
        idN = x + y * d  # index of neighbor

        # If that neighbor hasn't been collapsed yet
        if H[idN] != 'collapsed':

            # Check indices of all available patterns in that neighboring cell
            available = [i for i, b in enumerate(W[idN]) if b]

            # Among these indices, select indices of patterns that can be adjacent to the collapsed cell at this location
            intersection = A[idP][dir] & set(available)

            # If the neighboring cell contains indices of patterns that can be adjacent to the collapsed cell
            if intersection:

                # Remove indices of all other patterns that cannot be adjacent to the collapsed cell
                W[idN] = [True if i in list(intersection) else False for i in xrange(npat)]

                ### Update entropy of that neighboring cell accordingly (fewer patterns = lower entropy)
                # If only 1 pattern is left, no need to compute entropy because entropy is necessarily 0
                if len(intersection) == 1:
                    H[idN] = '0'  # Putting a str here so that it doesn't return 0 (float) when looking for the minimum entropy (min(H)) at the next iteration

                # If more than 1 pattern is left --> compute/update entropy + add noise (to prevent cells from sharing the same minimum entropy value)
                else:
                    lfreqs = [b * f for b, f in izip(W[idN], freqs) if b]
                    ent = math.log(sum(lfreqs)) - sum(map(lambda x: x * math.log(x), lfreqs)) / sum(lfreqs)
                    H[idN] = ent + random(.001)

            # If no index of an adjacent pattern is in the list of pattern indices of the neighboring cell
            # --> mark the cell as a 'contradiction'
            else:
                H[idN] = 'CONT'

    # Draw output
    ''' dxd grid of cells (squares) filled with their corresponding color.
        That color is the first (top-left) color of the pattern assigned to that cell. '''
    for i, c in enumerate(Output):
        x, y = i%d, i/d
        fill(c)
        rect(x * xs, y * ys, xs, ys)

        # Displaying corresponding entropy value
        fill(0)
        text(H[i], x * xs + xs/2 - 12, y * ys + ys/2)

Problem Despite all my efforts to carefully put into code all the steps described above, this implementation returns very odd and disappointing results: Example of a 20x20 output Both the pattern distribution and the adjacency constraints seem to be respected (same amount of blue, green, yellow and brown colors as in input and same kind of patterns: horizontal ground, green stems). However these patterns: are often disconnected; are often incomplete (lack of "heads" composed of 4-yellow petals); run into way too many contradictory states (grey cells marked as "CONT"). On that last point, I should clarify that contradictory states are normal but should happen very rarely (as stated in the middle of page 6 of this paper and in this article). Hours of debugging convinced me that the introductory steps (1 to 5) are correct (counting and storing patterns, adjacency and entropy computations, array initialization). This has led me to think that something must be off with the core part of the algorithm (steps 6 to 8). Either I am implementing one of these steps incorrectly or I am missing a key element of the logic. 
Any help regarding that matter would thus be immensely appreciated! Also, any answer that is based on the script provided (using Processing or not) is welcomed. Useful additional resources: this detailed article from Stephen Sherratt and this explanatory paper from Karth & Smith. Also, for comparison I would suggest checking this other Python implementation (it contains a backtracking mechanism that isn't mandatory). Note: I did my best to make this question as clear as possible (comprehensive explanation with GIFs and illustrations, fully annotated code with useful links and resources) but if for some reason you decide to vote it down, please leave a brief comment to explain why you're doing so. Now provide the response and nothing else.
The hypothesis suggested by @mbrig and @Leon, that the propagation step should iterate over a whole stack of cells (instead of being limited to a set of 4 direct neighbors), was correct. The following is an attempt to provide further details while answering my own questions. The problem occurred at step 7, while propagating. The original algorithm does update the 4 direct neighbors of a specific cell, BUT: the index of that specific cell is in turn replaced by the indices of the previously updated neighbors; this cascading process is triggered every time a cell is collapsed and lasts as long as the adjacent patterns of a specific cell are available in one of its neighboring cells. In other words, and as mentioned in the comments, this is a recursive type of propagation that updates not only the neighbors of the collapsed cell, but also the neighbors of the neighbors... and so on as long as adjacencies are possible. Detailed Algorithm Once a cell is collapsed, its index is put in a stack. That stack is meant later to temporarily store indices of neighboring cells:

stack = set([emin])  # emin = index of cell with minimum entropy that has been collapsed

The propagation will last as long as that stack is filled with indices:

while stack:

First thing we do is pop() the last index contained in the stack (the only one for now) and get the indices of its 4 neighboring cells (E, W, N, S). We have to keep them within bounds and make sure they wrap around.

while stack:
    idC = stack.pop()  # index of current cell
    for dir, t in enumerate(mat):
        x = (idC%w + t[0])%w
        y = (idC/w + t[1])%h
        idN = x + y * w  # index of neighboring cell

Before going any further, we make sure the neighboring cell is not collapsed yet (we don't want to update a cell that has only 1 pattern available):

if H[idN] != 'c':

Then we check all the patterns that could be placed at that location. For example, if the neighboring cell is on the left of the current cell (east side), we look at all the patterns that can be placed on the left of each pattern contained in the current cell:

possible = set([n for idP in W[idC] for n in A[idP][dir]])

We also look at the patterns that are available in the neighboring cell:

available = W[idN]

Now we make sure that the neighboring cell really has to be updated. If all its available patterns are already in the list of all the possible patterns --> there's no need to update it (the algorithm skips this neighbor and goes on to the next):

if not available.issubset(possible):

However, if it is not a subset of the possible list --> we look at the intersection of the two sets (all the patterns that can be placed at that location and that, "luckily", are available at that same location):

intersection = possible & available

If they don't intersect (patterns that could have been placed there but are not available) it means we ran into a "contradiction". We have to stop the whole WFC algorithm:

if not intersection:
    print 'contradiction'
    noLoop()

If, on the contrary, they do intersect --> we update the neighboring cell with that refined list of pattern indices:

W[idN] = intersection

Because that neighboring cell has been updated, its entropy must be updated as well:

lfreqs = [freqs[i] for i in W[idN]]
H[idN] = (log(sum(lfreqs)) - sum(map(lambda x: x * log(x), lfreqs)) / sum(lfreqs)) - random(.001)

Finally, and most importantly, we add the index of that neighboring cell to the stack so it becomes the next current cell in turn (the one whose neighbors will be updated during the next while loop):

stack.add(idN)

Full updated script

from collections import Counter
from itertools import chain
from random import choice

w, h = 40, 25
N = 3

def setup():
    size(w*20, h*20, P2D)
    background('#FFFFFF')
    frameRate(1000)
    noStroke()

    global W, A, H, patterns, freqs, npat, mat, xs, ys

    img = loadImage('Flowers.png')
    iw, ih = img.width, img.height
    xs, ys = width//w, height//h
    kernel = [[i + n*iw for i in xrange(N)] for n in xrange(N)]
    mat = ((-1, 0), (1, 0), (0, -1), (0, 1))

    all = []
    for y in xrange(ih):
        for x in xrange(iw):
            cmat = [[img.pixels[((x+n)%iw)+(((a[0]+iw*y)/iw)%ih)*iw] for n in a] for a in kernel]
            for r in xrange(4):
                cmat = zip(*cmat[::-1])
                all.append(cmat)
                all.append(cmat[::-1])
                all.append([a[::-1] for a in cmat])

    all = [tuple(chain.from_iterable(p)) for p in all]
    c = Counter(all)
    patterns = c.keys()
    freqs = c.values()
    npat = len(freqs)

    W = [set(range(npat)) for i in xrange(w*h)]
    A = [[set() for dir in xrange(len(mat))] for i in xrange(npat)]
    H = [100 for i in xrange(w*h)]

    for i1 in xrange(npat):
        for i2 in xrange(npat):
            if [n for i, n in enumerate(patterns[i1]) if i%N!=(N-1)] == [n for i, n in enumerate(patterns[i2]) if i%N!=0]:
                A[i1][0].add(i2)
                A[i2][1].add(i1)
            if patterns[i1][:(N*N)-N] == patterns[i2][N:]:
                A[i1][2].add(i2)
                A[i2][3].add(i1)

def draw():
    global H, W

    emin = int(random(w*h)) if frameCount <= 1 else H.index(min(H))

    if H[emin] == 'c':
        print 'finished'
        noLoop()
        return

    id = choice([idP for idP in W[emin] for i in xrange(freqs[idP])])
    W[emin] = [id]
    H[emin] = 'c'

    stack = set([emin])
    while stack:
        idC = stack.pop()
        for dir, t in enumerate(mat):
            x = (idC%w + t[0])%w
            y = (idC/w + t[1])%h
            idN = x + y * w
            if H[idN] != 'c':
                possible = set([n for idP in W[idC] for n in A[idP][dir]])
                if not W[idN].issubset(possible):
                    intersection = possible & W[idN]
                    if not intersection:
                        print 'contradiction'
                        noLoop()
                        return
                    W[idN] = intersection
                    lfreqs = [freqs[i] for i in W[idN]]
                    H[idN] = (log(sum(lfreqs)) - sum(map(lambda x: x * log(x), lfreqs)) / sum(lfreqs)) - random(.001)
                    stack.add(idN)

    fill(patterns[id][0])
    rect((emin%w) * xs, (emin/w) * ys, xs, ys)

Overall improvements In addition to these fixes I also did some minor code optimization to speed up both the observation and propagation steps, and to shorten the weighted choice computation. The "Wave" is now composed of Python sets of indices whose size decreases as cells are "collapsed" (replacing large fixed-size lists of booleans). Entropies are stored in a defaultdict whose keys are progressively deleted. The starting entropy value is replaced by a random integer (the first entropy calculation is not needed since all cells start at the same, equiprobably high, level of uncertainty). Cells are displayed once (avoiding storing them in an array and redrawing at each frame). The weighted choice is now a one-liner (avoiding several dispensable lines of list comprehension).
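To make the stacked/recursive propagation idea concrete outside Processing, here is a tiny self-contained sketch on a 1-D wave with hand-made toy adjacency rules (names and data are mine, not the Flowers example):

```python
# wave: list of sets of still-possible pattern ids, one set per cell.
# allowed[p]: set of pattern ids that may sit next to pattern p.
def propagate(wave, allowed, start):
    stack = {start}
    while stack:
        cur = stack.pop()
        for step in (-1, 1):                      # left / right neighbor
            n = cur + step
            if not 0 <= n < len(wave):
                continue
            # everything that may sit next to any pattern still possible here
            possible = set().union(*(allowed[p] for p in wave[cur]))
            if wave[n] <= possible:               # already consistent: skip
                continue
            wave[n] &= possible                   # refine the neighbor
            if not wave[n]:
                raise ValueError("contradiction")
            stack.add(n)                          # ripple further outward
    return wave
```

With allowed = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2}}, collapsing cell 0 to {0} refines its neighbor to {0, 1}, and propagation stops as soon as the remaining cells are already consistent, exactly the subset check the fixed script relies on.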
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57049191', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8452243/']}
jdg_80042
stackexchange
llm_judgeable_groundtruth_similarity
13031144
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to know the reasoning behind the way the compiler chooses the TaskScheduler when compiling using the async keyword. My test method is called by SignalR (ASP.NET host, IIS8, websocket transport) on the OnConnectedAsync method.

protected override async Task OnConnectedAsync(IRequest request, string connectionId)
{
    SendUpdates();
}

Starting a task on the Current synchronization context will result in an InvalidOperationException in System.Web.AspNetSynchronizationContext.OperationStarted(): An asynchronous operation cannot be started at this time. Asynchronous operations may only be started within an asynchronous handler or module or during certain events in the Page lifecycle. If this exception occurred while executing a Page, ensure that the Page is marked <%@ Page Async="true" %>. Fine. With this SendUpdates definition, I get the above exception:

private async void SendUpdates()
{
    Task.Run(async () =>
    {
        while (true)
        {
            await Task.Delay(1000);
            await Connection.Broadcast("blabla");
        }
    });
}

But even more interesting is when I don't get the exception. The following works:

private void SendUpdates()

And the following works too:

private async Task SendUpdates()

This last one works too, but it's essentially the same as the example above:

private Task SendUpdates()
{
    return Task.Run(async () =>
    {
        while (true)
        {
            await Task.Delay(1000);
            await Connection.Broadcast("blabla");
        }
    });
}

Do you know how the compiler chooses which scheduler to use here? Now provide the response and nothing else.
One of the primary guidelines in writing async code is "avoid async void" - that is, use async Task instead of async void unless you're implementing an async event handler. async void methods use SynchronizationContext's OperationStarted and OperationCompleted; see my MSDN article It's All About the SynchronizationContext for more details. ASP.NET detects the call to OperationStarted and (correctly) rejects it because it's illegal to put an async event handler there. When you correct the code to use async Task, ASP.NET no longer sees an async event handler. You may find my intro to async/await post helpful.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13031144', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/467638/']}
jdg_80043
stackexchange
llm_judgeable_groundtruth_similarity
45340256
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I keep getting this error. I am working on - Mac Sierra 10.8, Python 3.6.2, tabula 1.0.5.

Traceback (most recent call last):
  File "/Users/Sam/Desktop/mitch test/test.py", line 22, in <module>
    tabula.convert_into(root.fileName, "_ExportedPDF-" + date_time + ".csv", output_format="csv", pages="all")
AttributeError: module 'tabula' has no attribute 'convert_into'

This is my code that is giving me an error: tabula.convert_into(root.fileName, "_ExportedPDF-" + date_time + ".csv", output_format="csv", pages="all") UPDATE: When I try to do from tabula import wrapper I get the error: ImportError: cannot import name 'wrapper' UPDATE: Fixed comment as per @L. Alvarez. Getting the following error:

Traceback (most recent call last):
  File "/Users/Sam/Desktop/mitch test/test.py", line 22, in <module>
    tabula.convert_into(root.fileName, "_ExportedPDF-" + date_time + ".csv", output_format="csv", pages="all")
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tabula/wrapper.py", line 140, in convert_into
    subprocess.check_output(args)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/subprocess.py", line 336, in check_output
    **kwargs).stdout
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/subprocess.py", line 418, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['java', '-jar', '/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tabula/tabula-0.9.2-jar-with-dependencies.jar', '--pages', 'all', '--guess', '--format', 'CSV', '--outfile', '_ExportedPDF-Jul 26 2017.csv', '/Users/Sam/Desktop/mitch test/security_by_curr_risk_ldw.pdf']' returned non-zero exit status 1.

Now provide the response and nothing else.
I suspect you did pip install tabula, which installed a tabula library that has a version 1.0.5. Here's the github repo. It does not have a convert_into function. But you actually meant to install this tabula, whose latest version is 0.9.0. You should pip uninstall tabula and pip3 install tabula-py.
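A quick way to diagnose this kind of package-name clash is to check which module an import actually resolves to and whether it exposes the function you need. The helper below is my own sketch (not part of tabula); the stdlib modules in the comments are just stand-ins:

```python
import importlib

# Returns True only if `name` is importable AND exposes `attr`.
def module_has(name, attr):
    try:
        mod = importlib.import_module(name)
    except ImportError:
        return False
    return hasattr(mod, attr)

# e.g. module_has("tabula", "convert_into") should be True once tabula-py
# is installed, and False while the unrelated "tabula" package shadows it.
# You can also inspect tabula.__file__ to see which package was imported.
```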
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45340256', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1058584/']}
jdg_80044
stackexchange
llm_judgeable_groundtruth_similarity
41993176
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an object with the name "element". If somebody touches the tablet, I would like to return the x and y coordinates of the touch position relative to the object, i.e. the upper left corner of the object has the coordinates x=0 and y=0. I know how to implement this on desktops:

$(function() {
  $(document).mousedown(function(e) {
    var offset = $("#element").offset();
    var relativeX = (e.pageX - offset.left);
    var relativeY = (e.pageY - offset.top);
    alert(relativeX + ':' + relativeY);
    $(".position").val("afaf");
  });
});

So the word "mousedown" should be replaced by "touchstart", I guess. However, it still doesn't work. How do I change the above code such that it works on tablets with "touchstart" instead of "mousedown"? Now provide the response and nothing else.
UPDATE: See Daniel Lavedonio de Lima's answer below. You have to explicitly pull a touches object out of the event; it doesn't contain the coordinates directly. Look at line two of the code below. Here is the code I always use to get touch/pointer coordinates:

if (e.type == 'touchstart' || e.type == 'touchmove' || e.type == 'touchend' || e.type == 'touchcancel') {
    var touch = e.originalEvent.touches[0] || e.originalEvent.changedTouches[0];
    x = touch.pageX;
    y = touch.pageY;
} else if (e.type == 'mousedown' || e.type == 'mouseup' || e.type == 'mousemove' || e.type == 'mouseover' || e.type == 'mouseout' || e.type == 'mouseenter' || e.type == 'mouseleave') {
    x = e.clientX;
    y = e.clientY;
}

Put this inside an event listener that listens for any or all of those events, add your offset calculation, and this should work.
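For completeness, here is a small vanilla-JS sketch (my own helper names, not from the answer) combining the coordinate extraction with the asker's offset calculation. It pairs clientX/clientY with getBoundingClientRect() because both are viewport-relative, so page scrolling cancels out:

```javascript
// Return the touch/mouse point of an event (first touch if present).
function pointerOf(e) {
  if (e.touches && e.touches.length) return e.touches[0];
  if (e.changedTouches && e.changedTouches.length) return e.changedTouches[0];
  return e; // plain mouse event
}

// Coordinates relative to a rect such as el.getBoundingClientRect().
function relativeToRect(clientX, clientY, rect) {
  return { x: clientX - rect.left, y: clientY - rect.top };
}

// Browser usage (element id is hypothetical):
// var el = document.getElementById('element');
// el.addEventListener('touchstart', function (e) {
//   var p = pointerOf(e);
//   var pos = relativeToRect(p.clientX, p.clientY, el.getBoundingClientRect());
// });
```

Note that with jQuery the native event sits in e.originalEvent, which is why the answer above reads e.originalEvent.touches rather than e.touches.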
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41993176', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7503585/']}
jdg_80045
stackexchange
llm_judgeable_groundtruth_similarity
9893
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to set up a buffer-local behavior so that optionally, only after I execute the babel section, Org exports to LaTeX asynchronously after each save. I tried the following:

(defun kdm/org-save-and-export ()
  (interactive)
  (org-latex-export-to-latex t))

(add-hook 'after-save-hook 'kdm/org-save-and-export)

But it creates file.tex.tex.tex.tex indefinitely. How do I modify this to only save 1x, and be a buffer-local change? Now provide the response and nothing else.
Probably, Emacs is creating a file.tex buffer and then saving it, triggering the after-save-hook again indefinitely. Try checking major-mode before calling the export function:

(defun kdm/org-save-and-export ()
  (interactive)
  (if (eq major-mode 'org-mode)
      (org-latex-export-to-latex)))

(add-hook 'after-save-hook 'kdm/org-save-and-export nil t)
{}
{'log_upvote_score': 4, 'links': ['https://emacs.stackexchange.com/questions/9893', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/91/']}
jdg_80046
stackexchange
llm_judgeable_groundtruth_similarity
28296740
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to rewrite the following code:

W_WHERE := ' PD.NIF(+) = p.NIF
    and pd.num_colegiado(+) = p.num_colegiado
    AND PD.FECHA_INICIO(+) <= SYSDATE
    AND NVL(PD.FECHA_FIN(+), SYSDATE) >= SYSDATE
    AND D.ID_DIRECCION(+) = PD.ID_DIRECCION
    AND p.num_colegiado = coleg.num_colegiado';

into normal JOIN notation. Could anybody help me? PS. PD is for the PERSONA_DIRECCION table and P is for the PERSONA table. Now provide the response and nothing else.
Based on the links Why doesn't RecyclerView have onItemClickListener()? and How RecyclerView is different from Listview?, and also @Duncan's general idea, I give my solution here:

Define one interface RecyclerViewClickListener for passing a message from the adapter to the Activity/Fragment:

public interface RecyclerViewClickListener {
    public void recyclerViewListClicked(View v, int position);
}

In the Activity/Fragment, implement the interface and pass the listener to the adapter:

@Override
public void recyclerViewListClicked(View v, int position) {
    ... ...
}

// set up adapter and pass clicked listener this
myAdapter = new MyRecyclerViewAdapter(context, this);

In the Adapter and ViewHolder:

public class MyRecyclerViewAdapter extends RecyclerView.Adapter<MyRecyclerViewAdapter.ItemViewHolder> {
    ... ...
    private Context context;
    private static RecyclerViewClickListener itemListener;

    public MyRecyclerViewAdapter(Context context, RecyclerViewClickListener itemListener) {
        this.context = context;
        this.itemListener = itemListener;
        ... ...
    }

    // ViewHolder class implements OnClickListener,
    // sets the click listener on itemView and
    // sends the message back to the Activity/Fragment
    public static class ItemViewHolder extends RecyclerView.ViewHolder implements View.OnClickListener {
        ... ...
        public ItemViewHolder(View convertView) {
            super(convertView);
            ... ...
            convertView.setOnClickListener(this);
        }

        @Override
        public void onClick(View v) {
            itemListener.recyclerViewListClicked(v, this.getPosition());
        }
    }
}

After testing, it works fine. [UPDATE] Since API 22, RecyclerView.ViewHolder.getPosition() is deprecated, so use getLayoutPosition() instead.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/28296740', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2165263/']}
jdg_80047
stackexchange
llm_judgeable_groundtruth_similarity
1333522
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Looking on the internet for C++ brainteasers, I found this example:

#include <iostream>
using namespace std;

class A {
public:
    A() { cout << "A::A()" << endl; }
    ~A() {
        cout << "A::~A()" << endl;
        throw "A::exception";
    }
};

class B {
public:
    B() {
        cout << "B::B()" << endl;
        throw "B::exception"; // <- crashes here
    }
    ~B() { cout << "B::~B()"; }
};

int main(int, char**) {
    try {
        cout << "Entering try...catch block" << endl;
        A objectA;
        B objectB;
        cout << "Exiting try...catch block" << endl;
    } catch (const char* ex) {
        cout << ex << endl;
    }
    return 0;
}

This is what I thought the program would do: A::A() will be output to screen when the constructor of objectA is called. Object A is constructed successfully. B::B() will be output to screen when the constructor of objectB is called. The constructor of B then throws an exception. Object B is not constructed successfully. Destructor of objectB is not called as the constructor never completed successfully. Destructor of objectA will be called as the object goes out of scope when the try block is exited. However, when I ran the program, it actually crashed at the line marked with <-. Could anybody explain what exactly was going on at that point? Now provide the response and nothing else.
If you are really coding, not just brainteasing, never ever throw an exception from a destructor. If an exception is thrown during stack unwinding, terminate() is called. In your case the destructor of A has thrown while processing the exception that was thrown in B's constructor. EDIT: To be more precise (as suggested in comments) - never ever let an exception escape a destructor. Exceptions that are caught inside the destructor cause no problem. But if during stack unwinding the program has to deal with two exceptions - the one that caused stack unwinding and the one that escaped a destructor during unwinding - std::terminate() is called.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1333522', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/107627/']}
jdg_80048
stackexchange
llm_judgeable_groundtruth_similarity
35939432
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to understand when I should use the Prototype design pattern. Here is an example of Prototype as I understand it:

class Prototype
{
public:
    virtual Prototype* clone() = 0;
    ...
};

class ConcretePrototype : public Prototype
{
public:
    Prototype* clone() override { ... }
};

// Usage:
ConcretePrototype proto;
auto protPtr = proto.clone();

Here is the question: why is this better than:

class Obj
{
public:
    Obj();
    Obj(const Obj&);
    Obj& operator = (const Obj& other);
};

Obj o;
Obj o2 = o;

So when should I actually use Prototype? Now provide the response and nothing else.
A copy constructor is an element of the language. Prototype is a design pattern used to spawn (polymorphic) objects based on some existing instance. It would be difficult to use the former to implement the latter, as the copy constructor is intended to be used when you know the exact instance of the object, while a prototype is used when there could be any possible implementation of some interface and you just want to obtain a new object of exactly the same implementation, without resorting to some weird casting and checking methods. Let's assume that you have an interface I and implementations A and B. At some point you are given an object i implementing I. Perhaps you would not want to modify it; instead you would prefer to obtain a new instance and then introduce some modification to it. How could that be achieved, when you don't know the exact class of i? The Prototype pattern is one solution to that problem: I* i2 = i.clone();
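The same idea can be sketched outside C++. Below is a minimal, hypothetical Python version (the class names are invented for illustration, not taken from the answer) in which a caller duplicates an object without ever naming its concrete class:

```python
import copy

class Shape:
    def clone(self):
        # Each concrete subclass is duplicated without the caller
        # knowing its exact type.
        return copy.deepcopy(self)

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

class Square(Shape):
    def __init__(self, side):
        self.side = side

def tweaked_copy(shape):
    # We only know we received *some* Shape; clone() hands back a new
    # instance of the same concrete class, which we can then modify.
    return shape.clone()

original = Circle(radius=2)
duplicate = tweaked_copy(original)
duplicate.radius = 5
print(type(duplicate).__name__, original.radius, duplicate.radius)
# -> Circle 2 5
```

Here copy.deepcopy stands in for a hand-written clone body; a real implementation might copy fields selectively.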
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35939432', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4520592/']}
jdg_80049
stackexchange
llm_judgeable_groundtruth_similarity
4642272
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm attempting to match a string that can contain any number of numeric characters or a decimal point using the following regex: ([0-9.])* Here's some C# code to test the regex:

Regex regex = new Regex("([0-9.])*");
if (!regex.IsMatch("a"))
    throw new Exception("No match.");

I expect the exception to be thrown here but it isn't - am I using the Regex incorrectly or is there an error in the pattern? EDIT: I'd also like to match a blank string. Now provide the response and nothing else.
The * quantifier means "match 0 or more". In your case, "a" returns 0 matches, so the regex still succeeds. You probably wanted: ([0-9.]+) The + quantifier means "match 1 or more", so it fails on non-numeric inputs and returns no matches. A quick spin in the regex tester shows:

input     result
-----     ------
[empty]   No matches
a         No matches
.         1 match: "."
20.15     1 match: "20.15"
1         1 match: "1"
1.1.1     1 match: "1.1.1"
20.       1 match: "20."

Looks like we have some false positives, so let's revise the regex as such: ^([0-9]+(?:\.[0-9]+)?)$ Now we get:

input     result
-----     ------
[empty]   No matches
a         No matches
.         No matches
20.15     1 match: "20.15"
1         1 match: "1"
1.1.1     No matches
20.       No matches

Coolness.
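The revised pattern is not .NET-specific; the same anchored regex can be exercised with a quick throwaway harness in Python (an illustration, not part of the original answer):

```python
import re

# Anchored pattern equivalent to the revised regex from the answer.
pattern = re.compile(r'^[0-9]+(?:\.[0-9]+)?$')

samples = ["", "a", ".", "20.15", "1", "1.1.1", "20."]
# Map each input to whether the anchored pattern accepts it.
results = {s: bool(pattern.match(s)) for s in samples}
print(results)
# -> only "20.15" and "1" map to True
```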
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4642272', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/82017/']}
jdg_80050
stackexchange
llm_judgeable_groundtruth_similarity
346039
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So I'm just wondering about three scenarios: You've got a book on the table. It's got a gravitational force acting downwards. This is the force on the book by the earth. Apparently there is an equal and opposite force on the earth by the book. But then there is also a normal force involved. The normal force is the force of the table pushing back against the book. Does this have a reaction force of the book pushing back on the table? And isn't this already accounted for with the gravitational force? What? And then another massive question: If everything has equal and opposite forces on each other then how can there be net forces? Seriously that is pretty weird. For example when you push a cube with a non-constant velocity along a horizontal plane then there are net forces which is confusing because according to the third law there are equal and opposite forces on every interaction. Lastly if you walk along a floor it pushes out against you with an equal and opposite reaction normal force. But surely if you are really heavy such that the surface breaks and you fall through then there is no equal and opposite reaction force? So what's up with that? Update: Thanks for all the good answers guys but I am still low key really confused so I hope you don't mind if I don't accept anyone's answer as of yet. The quality of your answers are great but I just want to make sure I understand these concepts 100% before I resolve my issue Now provide the response and nothing else.
The normal force is the force of the table pushing back against the book. Does this have a reaction force of the book pushing back on the table? And isn't this already accounted for with the gravitational force? What?

The book feels weight downwards. Reaction: the Earth feels a gravitational pull upwards. The book feels a normal force upwards. Reaction: the Earth feels a push downwards. Another example: a billiard ball feels a "push" backwards at impact (equivalent to the normal force before). Reaction: the other billiard ball feels the same "push" but oppositely (equivalent to the push before).

If everything has equal and opposite forces on each other then how can there be net forces? Seriously that is pretty weird. For example when you push a cube with a non-constant velocity along a horizontal plane then there are net forces which is confusing because according to the third law there are equal and opposite forces on every interaction.

There is a reaction force to every force exerted, yes. Just remember that you are then talking about another object. The book feels the normal force, but its reaction is not felt by the book; the reaction is felt by the Earth. The billiard ball feels the impact "push" and thus has a net force and flies back (accelerates). The reaction is not felt by the same billiard ball but by the other ball, which then also has a net force and flies off (accelerates). When setting up Newton's 2nd law, always look at only one object/system at a time.

Lastly if you walk along a floor it pushes out against you with an equal and opposite reaction normal force. But surely if you are really heavy such that the surface breaks and you fall through then there is no equal and opposite reaction force? So what's up with that?

If a ninja karate-chops a plank without breaking it by applying force $F$, then the plank is able to hold back with the entire same force $-F$. If the ninja karate-chops a plank that breaks, then he did not need all the force $F$. He only applied force up until the plank stopped reacting with the same force. After that point he did not increase his force. So the force he exerted equals the force the plank was able to exert (plus the acceleration term it also causes), which is smaller than the $F$ that he could have done. Think of the difference between throwing your karate-hand through the air vs. throwing it into a falling piece of paper vs. throwing it into a wall: When hitting the wall, you exert a force on that wall. When hitting the paper, you exert a force but a much smaller one (and you move the paper as well). When not hitting anything, you don't exert any force on anything - you just move.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/346039', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/162402/']}
jdg_80051
stackexchange
llm_judgeable_groundtruth_similarity
146167
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose we have a Turing machine $M$ as a black box that decides the $SAT$ problem. Now suppose we have a $CNF$ formula $\phi$ with $n$ variables. How is it possible to check satisfiability of $\phi$ and then find a satisfying assignment by using $M$ at most $2n+1$ times? I know that every $SAT$ instance can be converted to $CNF$ clauses, so I think we can do it recursively to check satisfiability and then find the assignment, but I am stuck on how to formulate this. A similar question was asked at this link. Now provide the response and nothing else.
You can prove the claim by induction on $n$. If $n=1$ the claim is clearly true, since it suffices to start from the unique "+1" node. When $n > 1$, let $u$ and $v$ be two consecutive nodes (moving clockwise) such that $u$ is a "+1" node and $v$ is a "-1" node. This pair always exists. By the induction hypothesis there is some vertex $w$ that allows you to win the game in the instance with $2n-2$ nodes obtained by deleting $u$ and $v$. Then, starting from $w$ also wins the original instance with $2n$ nodes.
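Assuming the game being described is the classic one (a circle of $n$ "+1" nodes and $n$ "-1" nodes, where a starting node wins if every clockwise prefix sum stays non-negative - this reading of the unstated rules is an assumption), the claim can be brute-force checked for small $n$:

```python
from itertools import permutations

def has_winning_start(circle):
    # A start wins if every clockwise prefix sum stays non-negative.
    n = len(circle)
    for start in range(n):
        total, ok = 0, True
        for i in range(n):
            total += circle[(start + i) % n]
            if total < 0:
                ok = False
                break
        if ok:
            return True
    return False

# Exhaustively check every circular arrangement of n "+1" and n "-1"
# nodes for small n: a winning start always exists.
for n in range(1, 5):
    for arrangement in set(permutations([+1] * n + [-1] * n)):
        assert has_winning_start(list(arrangement))
print("claim holds for n = 1..4")
```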
{}
{'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/146167', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/144773/']}
jdg_80052
stackexchange
llm_judgeable_groundtruth_similarity
21836478
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am kinda new to grunt and want to use it with Jekyll and some LESS-compiling. My problem now is, I already have fully functioning LESS-comipiling with live reload and watch task and can build my jekyll site through grunt, but how do I run something like the jekyll serve or grunt-connect and grunt watch simultaneously?I want a grunt task that provides the watching of my LESS-files etc, builds the jekyll site and then runs a small web server with grunt-connect or whatever. My Gruntfile.js so far: 'use strict';module.exports = function (grunt) { grunt.initConfig({ jshint: { options: { jshintrc: '.jshintrc' }, all: [ 'Gruntfile.js', 'js/*.js', '!js/scripts.min.js' ] }, less: { dist: { files: { 'css/styles.min.css': [ 'less/app.less' ] }, options: { compress: true, // LESS source map // To enable, set sourceMap to true and update sourceMapRootpath based on your install sourceMap: false, sourceMapFilename: 'css/styles.min.css.map', sourceMapRootpath: '/' } }, dev: { files: { 'css/styles.min.css': [ 'less/app.less' ] }, options: { compress: false, // LESS source map // To enable, set sourceMap to true and update sourceMapRootpath based on your install sourceMap: true, sourceMapFilename: 'css/styles.min.css.map', sourceMapRootpath: '/' } } }, uglify: { dist: { files: { 'js/scripts.min.js': [ 'vendor/bootstrap/js/transition.js', 'vendor/bootstrap/js/alert.js', 'vendor/bootstrap/js/button.js', 'vendor/bootstrap/js/carousel.js', 'vendor/bootstrap/js/collapse.js', 'vendor/bootstrap/js/dropdown.js', 'vendor/bootstrap/js/modal.js', 'vendor/bootstrap/js/tooltip.js', 'vendor/bootstrap/js/popover.js', 'vendor/bootstrap/js/scrollspy.js', 'vendor/bootstrap/js/tab.js', 'vendor/bootstrap/js/affix.js', 'vendor/*.js', 'js/_*.js' ] }, options: { // JS source map: to enable, uncomment the lines below and update sourceMappingURL based on your 
install // sourceMap: 'assets/js/scripts.min.js.map', // sourceMappingURL: '/app/themes/roots/assets/js/scripts.min.js.map' } } }, watch: { less: { files: [ 'less/*.less', 'less/bootstrap/*.less' ], tasks: ['less:dev'] }, js: { files: [ '<%= jshint.all %>' ], tasks: ['jshint', 'uglify'] }, livereload: { // Browser live reloading // https://github.com/gruntjs/grunt-contrib-watch#live-reloading options: { livereload: true }, files: [ '_site/*' ] } }, clean: { dist: [ 'css/styles.min.css', 'css/styles.min.css.map', 'js/scripts.min.js', '_site/*' ] }, jekyll: { // Task options: { // Universal options bundleExec: true, src : '<%= app %>' }, dist: { // Target options: { // Target options dest: '<%= dist %>', config: '_config.yml' } }, serve: { // Another target options: { serve: true, drafts: true } } }, connect: { server: { options: { keepalive: true } } } }); // Load tasks grunt.loadNpmTasks('grunt-contrib-clean'); grunt.loadNpmTasks('grunt-contrib-jshint'); grunt.loadNpmTasks('grunt-contrib-uglify'); grunt.loadNpmTasks('grunt-contrib-watch'); grunt.loadNpmTasks('grunt-contrib-less'); grunt.loadNpmTasks('grunt-jekyll'); grunt.loadNpmTasks('grunt-contrib-connect'); // Register tasks grunt.registerTask('default', [ 'clean', 'less:dist', 'uglify', 'jekyll:dist' ]); grunt.registerTask('dev', [ 'watch' ]);}; Now provide the response and nothing else.
You need to tell connect what directory to serve up in the configuration using the "base" option; in this case it would be the static _site directory. You can also change the port to whatever you want, but you end up navigating to localhost:9009 with my example:

connect: {
  server: {
    options: {
      livereload: true,
      base: '_site/',
      port: 9009
    }
  }
}

You will also want to add a watch task for when you change your html templates. Something like this would work:

watch: {
  html: {
    files: ['**/*.html', '!_site/**/*.html'],
    tasks: ['jekyll:dist']
  }
}

Then create a "serve" task like Wallace suggested:

// Start web server
grunt.registerTask('serve', ['jekyll:dist', 'connect:server', 'watch']);

Lastly run "grunt serve" in the command line and navigate to localhost with the port you specified. As commented by @jiggy: The key change is to not set keepalive to true. Keepalive will block all subsequent tasks from running. So long as connect is followed by watch the server won't terminate.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21836478', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1159510/']}
jdg_80053
stackexchange
llm_judgeable_groundtruth_similarity
37293834
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Does anyone know how I can check if a video was completely watched or not? I am using the html5 video player:

<video width="480" height="400" controls="true" poster="">
  <source type="video/mp4" src="video.mp4"></source>
</video>

Now provide the response and nothing else.
Basic check is simple: wait for the ended event. This is so simple you can just google it. Now, to check that the user played the full video, an extensive analysis would be needed, checking if he played every second of it. That's not necessary however; it should be enough that the user: played as many seconds as the video is long, and played to the end of the video. This snippet demonstrates exactly that. The video will not be marked as fully played if you just skip to the end. Playing the beginning over and over will also not mark it fully played:

var video = document.getElementById("video");
var timeStarted = -1;
var timePlayed = 0;
var duration = 0;

// If video metadata is loaded, get duration
if (video.readyState > 0)
    getDuration.call(video);
// If metadata not loaded, use event to get it
else {
    video.addEventListener('loadedmetadata', getDuration);
}

// remember time user started the video
function videoStartedPlaying() {
    timeStarted = new Date().getTime() / 1000;
}

function videoStoppedPlaying(event) {
    // Start time less than zero means stop event was fired without start event
    if (timeStarted > 0) {
        var playedFor = new Date().getTime() / 1000 - timeStarted;
        timeStarted = -1;
        // add the new number of seconds played
        timePlayed += playedFor;
    }
    document.getElementById("played").innerHTML = Math.round(timePlayed) + "";
    // Count as complete only if end of video was reached
    if (timePlayed >= duration && event.type == "ended") {
        document.getElementById("status").className = "complete";
    }
}

function getDuration() {
    duration = video.duration;
    document.getElementById("duration").appendChild(new Text(Math.round(duration) + ""));
    console.log("Duration: ", duration);
}

video.addEventListener("play", videoStartedPlaying);
video.addEventListener("playing", videoStartedPlaying);
video.addEventListener("ended", videoStoppedPlaying);
video.addEventListener("pause", videoStoppedPlaying);

#status span.status {
    display: none;
    font-weight: bold;
}
span.status.complete {
    color: green;
}
span.status.incomplete {
    color: red;
}
#status.complete span.status.complete {
    display: inline;
}
#status.incomplete span.status.incomplete {
    display: inline;
}

<video width="200" controls="true" poster="" id="video">
  <source type="video/mp4" src="http://www.w3schools.com/html/mov_bbb.mp4"></source>
</video>
<div id="status" class="incomplete">
  <span>Play status: </span>
  <span class="status complete">COMPLETE</span>
  <span class="status incomplete">INCOMPLETE</span><br />
</div>
<div><span id="played">0</span> seconds out of <span id="duration"></span> seconds. (only updates when the video pauses)</div>

Also on jsFiddle: https://jsfiddle.net/p56a1r45/2/ You can then connect this to Google Analytics to see how much of the video users played. Simple code from the Google Analytics website: ga('send', 'event', 'Videos', 'play', 'Video name');
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37293834', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3827538/']}
jdg_80054
stackexchange
llm_judgeable_groundtruth_similarity
15754208
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to make these invocations of myprog work, and no others.

$ python3 myprog.py -i infile -o outfile
$ python3 myprog.py -o outfile
$ python3 myprog.py -o
$ python3 myprog.py

In particular I want to make it illegal to specify the infile but not the outfile. In the third case, a default name for the outfile is assumed, "out.json." In the second, third and fourth cases, a default name for the input file is assumed, "file.n.json", where n is an integer version number. In the fourth case the output file would be "file.n+1.json" where n+1 is a version number one larger than the one on the input file. The relevant section of my code is:

import argparse
parser = argparse.ArgumentParser(description="first python version")
parser.add_argument('-i', '--infile', nargs=1, type=argparse.FileType('r'),
                    help='input file, in JSON format')
parser.add_argument('-o', '--outfile', nargs='?', type=argparse.FileType('w'),
                    default='out.json',
                    help='output file, in JSON format')
args = parser.parse_args()

print("Here's what we saw on the command line: ")
print("args.infile", args.infile)
print("args.outfile", args.outfile)

if args.infile and not args.outfile:
    parser.error("dont specify an infile without specifying an outfile")
elif not args.infile:
    print("fetching infile")
else:  # neither was specified on the command line
    print("fetching both infile and outfile")

Problem is, when I run $ python3 myprog.py -i infile.json instead of the parser error I hoped for, I get:

Here's what we saw on the command line:
args.infile [<_io.TextIOWrapper name='infile.json' mode='r' encoding='UTF-8'>]
args.outfile <_io.TextIOWrapper name='out.json' mode='w' encoding='UTF-8'>
fetching both infile and outfile

...which suggests that even though there was no "-o" on the command line it acted as if there was. Now provide the response and nothing else.
You specified a default argument for the outfile:

parser.add_argument('-o', '--outfile', nargs='?',
                    type=argparse.FileType('w'), default='out.json',
                    help='output file, in JSON format')

If the -o option isn't specified at the command line, the arg parser inserts the default argument. Change this to:

parser.add_argument('-o', '--outfile', nargs='?',
                    type=argparse.FileType('w'),
                    help='output file, in JSON format')

and things should work as you expect. If you want to be able to specify -o without a filename, you probably want something like:

out_file = args.out if args.out is not None else 'json.out'

I'm not sure if the relevant parameter will be None or '' (i.e., an empty string) if you specify -o without a parameter--I suspect it's None, but I don't know for sure. You'll have to test it out to be sure. I don't know how to do this without extra logic with argparse.
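A minimal runnable sketch of the fix (FileType is dropped here so the sketch needs no real files on disk; this is an adaptation, not the asker's exact code):

```python
import argparse

parser = argparse.ArgumentParser(description="sketch of the fix")
parser.add_argument('-i', '--infile', nargs=1)
parser.add_argument('-o', '--outfile', nargs='?')  # no default= here

# With the default removed, an omitted -o really is None,
# so the infile-without-outfile check can fire.
args = parser.parse_args(['-i', 'infile.json'])
if args.infile and args.outfile is None:
    print("dont specify an infile without specifying an outfile")

# A bare -o (nargs='?' with no value and no const=) also parses as
# None, so fall back to the default name explicitly:
args2 = parser.parse_args(['-o'])
out_name = args2.outfile if args2.outfile is not None else 'out.json'
print(out_name)
# -> out.json
```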
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15754208', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1416227/']}
jdg_80055
stackexchange
llm_judgeable_groundtruth_similarity
254512
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm building a website where users, among other things, can add embeds (YouTube, Google Forms, Airtable...) and show those embeds to other people. Embeds are snippets of HTML that can include <script> and <iframe> -tags. It is important that the embedded content can't take over the parent page or make requests with the session cookie of the parent site. One option that originally seemed fruitful would be to not have the allow-same-origin in the sandbox attribute of the <iframe> . But many embeds don't work if you remove that (YouTube for example). As recommended by WhatWG, sites like Codepen and JSBin host such user-defined content on a different domain to ensure the iframe can't use the cookie from the top-level domain. However, they also include the allow-same-origin in the sandbox -attribute, which is a bit confusing to me. What security measures do I need to implement to securely host and display these user-defined embeds? Beyond just hosting the code on a different domain what other security measures are Codepen/JSBin implementing to safely host and display the user-defined HTML? And can I use the srcdoc -attribute instead of hosting the embed HTML on a different domain? (accepting that IE would not be supported) Note: OEmbed, Embedly and Iframely aren't options for me at this point, since I need more generic embeds than those allow. Now provide the response and nothing else.
If the content you're embedding will contain untrusted scripts, it must either be loaded in an iframe from an untrusted domain, or be loaded from srcdoc in a sandboxed iframe with the allow-same-origin flag not set. Content embedded in an iframe using the srcdoc attribute is normally considered same-origin with the parent page, so it is not safe to embed untrusted scripts that way unless you use the sandbox attribute without the allow-same-origin flag to ensure that the content is treated as if it were cross-origin. In contrast, content embedded in an iframe via an untrusted domain is not treated as same-origin with your domain, even if you specify allow-same-origin in the sandbox attribute. This is because allow-same-origin only allows the embedded content to be treated as same-origin with its own origin. It does not grant any extra permissions over and above what an iframe with no sandbox attribute would allow, so it is safe to use in this context. The WHATWG HTML spec explains:

The sandboxed origin browsing context flag: This flag forces content into a unique origin, thus preventing it from accessing other content from the same origin. This flag also prevents script from reading from or writing to the document.cookie IDL attribute, and blocks access to localStorage.

This is also why most embeds won't work when sandboxed without allow-same-origin. Omitting that attribute causes the embedded content to treat its own domain as cross-origin (so for example, an embedded YouTube video wouldn't be able to communicate with YouTube except through cross-origin requests). Note further that untrusted content should never be served directly from your primary domain, regardless of what sandbox attributes you use on your iframes, as users could be tricked into navigating to those pages directly, bypassing the sandbox attributes. Beyond ensuring untrusted content is served from an untrusted domain entirely separate from your main domain (i.e. preferably not a subdomain of your primary domain), I would also advise the following:

Take measures to ensure user-embedded content is clearly identifiable as such to users of your site. Embedded content can be a vector for phishing if it can easily be mistaken for part of your site.

If feasible, consider using unique-per-user subdomains of the untrusted domain to ensure embedded content from one of your site's users cannot easily interfere with embedded content from another user. You can further enhance this per-user isolation by requesting that your untrusted domain be added to the public suffix list.
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/254512', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/136084/']}
jdg_80056
stackexchange
llm_judgeable_groundtruth_similarity
18521970
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: UPDATE Found the issue -- was inheriting from wrong class, needed to be JsonConverter. I have a class that has a Location property of type System.Data.Entity.Spatial.DbGeography. The default Json.NET serializer puts out JSON text like this:

...
"PlaceType": 0,
"Location": {
    "Geography": {
        "CoordinateSystemId": 4326,
        "WellKnownText": "POINT (-88.00000 44.00000)"
    }
},
"AddedDT": null,
...

I want it to put out text like this:

...
"PlaceType": 0,
"Location": [-88.00000,44.00000],
"AddedDT": null,
...

...so it seems to me what I should do would be to override whatever converter is currently being used on the DbGeography type. The examples I've seen so far that use CustomCreationConverters and ContractResolvers seem to address how you'd replace the serializer for the main class being serialized, not for a type that's only a property of that class. The examples that involve annotating the class that's being overridden don't work for me because I don't define DbGeography in my code and it's effectively a sealed class because it has no constructor and can only be instantiated by internal factory methods. Is there a way to apply a JsonConverter to a type fluently? If so, what would the converter look like? Do I just override the WriteJson() method? Now provide the response and nothing else.
You can add a custom serializer to a single attribute like this:

public class Comment
{
    public string Author { get; set; }

    [JsonConverter(typeof(NiceDateConverter))]
    public DateTime Date { get; set; }

    public string Text { get; set; }
}

public class NiceDateConverter : JsonConverter
{
    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        var date = (DateTime) value;
        var niceLookingDate = date.ToString("MMMM dd, yyyy 'at' H:mm tt");
        writer.WriteValue(niceLookingDate);
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        throw new NotImplementedException("Unnecessary because CanRead is false. The type will skip the converter.");
    }

    public override bool CanRead
    {
        get { return false; }
    }

    public override bool CanConvert(Type objectType)
    {
        return objectType == typeof(DateTime);
    }
}

Then, when you serialize your object with JsonConvert.SerializeObject(), the custom serializer will be used for the Date property.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18521970', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1818/']}
jdg_80057
stackexchange
llm_judgeable_groundtruth_similarity
37863125
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm running a series of complex sql queries in python and it involves temp tables. My auto-commit method doesn't seem to be working to retrieve the data from the temp table. The code snippet I'm using below and this is the output I'm getting: testQuery=""" Select top 10 * INTO #Temp1 FROM Table1 t1 JOIN Table2 t2 on t1.key=t2.key""" cnxn=pyodbc.connect(r'DRIVER={SQL Server Native Client 11.0};SERVER=server;DATABASE=DB;UID=UID;PWD=PWD') cnxn.autocommit=True cursor=cnxn.cursor() cursor.execute(testQuery) cursor.execute("""Select top 10 * from #Temp1""") <pyodbc.Cursor at 0x8f78930>cnxn=pyodbc.connect(r'DRIVER={SQL Server Native Client 11.0};SERVER=server;DATABASE=DB;UID=UID;PWD=PWD')cnxn.autocommit=Truecursor=cnxn.cursor()cursor.execute(testQuery)cursor.execute("""Select top 10 * from #Temp1""") Now provide the response and nothing else.
Even though this question has a "solution", i.e., using global temp table instead of a local temp table, future readers might benefit from understanding why the problem happened in the first place. A temporary table is automatically dropped when the last connection using said table is closed. The difference between a local temp table ( #Temp1 ) and a global temp table ( ##Temp1 ) is that the local temp table is only visible to the connection that created it, while an existing global temp table is available to any connection. So the following code using a local temp table will fail ...

conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
sql = """\
SELECT 1 AS foo, 2 AS bar INTO #Temp1"""
crsr.execute(sql)

conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
sql = """\
SELECT foo, bar FROM #Temp1"""
crsr.execute(sql)
row = crsr.fetchone()
print(row)

... while the exact same code using a global temp table will succeed ...

conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
sql = """\
SELECT 1 AS foo, 2 AS bar INTO ##Temp1"""
crsr.execute(sql)

conn = pyodbc.connect(conn_str, autocommit=True)
crsr = conn.cursor()
sql = """\
SELECT foo, bar FROM ##Temp1"""
crsr.execute(sql)
row = crsr.fetchone()
print(row)

... because the second pyodbc.connect call opens a separate second connection to the SQL Server without closing the first one. The second connection cannot see the local temp table created by the first connection. Note that the local temp table still exists because the first connection was never closed, but the second connection cannot see it. However, the second connection can see the global temp table because the first connection was never closed and therefore the global temp table continued to exist. This type of behaviour has implications for ORMs and other mechanisms that may implicitly open and close connections to the server for each SQL statement that it executes.
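Aside: the per-connection scoping described above is not specific to SQL Server, and it can be demonstrated without a database server at all. The sketch below uses Python's standard-library sqlite3 (not pyodbc) purely as an illustration, since SQLite's TEMP tables are likewise visible only to the connection that created them:

```python
import os
import sqlite3
import tempfile

# A shared database file so both connections point at the same database.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn1 = sqlite3.connect(path)
conn1.execute("CREATE TEMP TABLE t1 (foo INTEGER, bar INTEGER)")
conn1.execute("INSERT INTO t1 VALUES (1, 2)")

# The connection that created the temp table can read it back...
row = conn1.execute("SELECT foo, bar FROM t1").fetchone()
print(row)  # (1, 2)

# ...but a second connection to the same database cannot see it,
# just as the second pyodbc.connect() above cannot see #Temp1.
conn2 = sqlite3.connect(path)
try:
    conn2.execute("SELECT foo, bar FROM t1")
except sqlite3.OperationalError as exc:
    print("second connection:", exc)  # no such table: t1
```

The fix in both cases is the same idea: keep using the one connection that created the temp table (or promote the table to a scope both connections share).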
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37863125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6372303/']}
jdg_80058
stackexchange
llm_judgeable_groundtruth_similarity
14462927
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been working with EMGU+OpenCV for quite some time and ran into this AccessViolationException mystery. First thing first, the code: class AVE_Simulation { public static int Width = 7500; public static int Height = 7500; public static Emgu.CV.Image<Rgb, float>[] Images; static void Main(string[] args) { int N = 50; int Threads = 5; Images = new Emgu.CV.Image<Rgb, float>[N]; Console.WriteLine("Start"); ParallelOptions po = new ParallelOptions(); po.MaxDegreeOfParallelism = Threads; System.Threading.Tasks.Parallel.For(0, N, po, new Action<int>((i) => { Images[i] = GetRandomImage(); Console.WriteLine("Prossing image: " + i); Images[i].SmoothBilatral(15, 50, 50); GC.Collect(); })); Console.WriteLine("End"); } public static Emgu.CV.Image<Rgb, float> GetRandomImage() { Emgu.CV.Image<Rgb, float> im = new Emgu.CV.Image<Rgb, float>(Width, Height); float[, ,] d = im.Data; Random r = new Random((int)DateTime.Now.Ticks); for (int y = 0; y < Height; y++) { for (int x = 0; x < Width; x++) { d[y, x, 0] = (float)r.Next(255); d[y, x, 1] = (float)r.Next(255); d[y, x, 2] = (float)r.Next(255); } } return im; } } The code is simple. Allocate an array of images. Generate a random image and populate it with random numbers. Execute bilateral filter over the image. That's it. If I execute this program in a single thread, (Threads=1) everything seems to work normally with no problem.But, if I raise the number of concurrent threads to 5 I get an AccessViolationException very quickly. I've went over OpenCV code and verified that there are no allocations on the OpenCV side and also went over the EMGU code searching for un-pinned objects or other problems and everything seems correct. Some notes: If you remove the GC.Collect() you will get the AccessViolationException less often but it will eventually happen. 
This happens only when executed in Release mode. In Debug mode I didn't experience any exceptions. Although each Image is 675MB there is no problem with allocation (I have ALLOT of memory) and a ' OutOfMemoryException ' is thrown in case the system ran out of memory. I used bilateral filter but I get this exception with other filters/functions as well. Any help would be appreciated. I've been trying to fix this for more than a week. i7 (no overclock), Win7 64bit, 32GB RAM, VS 2010, Framework 4.0, OpenCV 2.4.3 Stack: StartProssing image: 20Prossing image: 30Prossing image: 40Prossing image: 0Prossing image: 10Prossing image: 21Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. at Emgu.CV.CvInvoke.cvSmooth(IntPtr src, IntPtr dst, SMOOTH_TYPE type, Int32 param1, Int32 param2, Double param3, Double param4) at TestMemoryViolationCrash.AVE_Simulation.<Main>b__0(Int32 i) in C:\branches\1.1\TestMemoryViolationCrash\Program.cs:line 32 at System.Threading.Tasks.Parallel.<>c__DisplayClassf`1.<ForWorker>b__c() at System.Threading.Tasks.Task.InnerInvokeWithArg(Task childTask) at System.Threading.Tasks.Task.<>c__DisplayClass10.<ExecuteSelfReplicating>b__f(Object param0) at System.Threading.Tasks.Task.Execute() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot) at System.Threading.Tasks.Task.ExecuteEntry(Boolean bPreventDoubleExecution) at System.Threading.Tasks.ThreadPoolTaskScheduler.TryExecuteTaskInline(Task task, Boolean taskWasPreviouslyQueued) at System.Threading.Tasks.TaskScheduler.TryRunInline(Task task, Boolean taskWasPreviouslyQueued) at 
System.Threading.Tasks.Task.InternalRunSynchronously(TaskScheduler scheduler, Boolean waitForCompletion) at System.Threading.Tasks.Parallel.ForWorker[TLocal](Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Func`4 bodyWithLocal, Func`1 localInit, Action`1 localFinally) at System.Threading.Tasks.Parallel.For(Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body) at TestMemoryViolationCrash.AVE_Simulation.Main(String[] args) in C:\branches\1.1\TestMemoryViolationCrash\Program.cs:line 35Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. at Emgu.CV.CvInvoke.cvSmooth(IntPtr src, IntPtr dst, SMOOTH_TYPE type, Int32 param1, Int32 param2, Double param3, Double param4) at TestMemoryViolationCrash.AVE_Simulation.<Main>b__0(Int32 i) in C:\branches\1.1\TestMemoryViolationCrash\Program.cs:line 32 at System.Threading.Tasks.Parallel.<>c__DisplayClassf`1.<ForWorker>b__c() at System.Threading.Tasks.Task.InnerInvokeWithArg(Task childTask) at System.Threading.Tasks.Task.<>c__DisplayClass10.<ExecuteSelfReplicating>b__f(Object param0) at System.Threading.Tasks.Task.Execute() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot) at System.Threading.Tasks.Task.ExecuteEntry(Boolean bPreventDoubleExecution) at System.Threading.ThreadPoolWorkQueue.Dispatch()Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. 
at Emgu.CV.CvInvoke.cvSmooth(IntPtr src, IntPtr dst, SMOOTH_TYPE type, Int32 param1, Int32 param2, Double param3, Double param4) at TestMemoryViolationCrash.AVE_Simulation.<Main>b__0(Int32 i) in C:\branches\1.1\TestMemoryViolationCrash\Program.cs:line 32 at System.Threading.Tasks.Parallel.<>c__DisplayClassf`1.<ForWorker>b__c() at System.Threading.Tasks.Task.InnerInvokeWithArg(Task childTask) at System.Threading.Tasks.Task.<>c__DisplayClass10.<ExecuteSelfReplicating>b__f(Object param0) at System.Threading.Tasks.Task.Execute() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot) at System.Threading.Tasks.Task.ExecuteEntry(Boolean bPreventDoubleExecution) at System.Threading.ThreadPoolWorkQueue.Dispatch()Press any key to continue . . . Now provide the response and nothing else.
Your example doesn't keep a reference to the result image from Image.SmoothBilatral. The input images are rooted in a static array so are fine. An Emgu.CV Image's data array is pinned to a GCHandle inside the actual image; this is no different from the fact that the image contains the array, and doesn't prevent collection while the GCHandle's pointer is in use by unmanaged code (in the absence of a managed root to the image). Because the Image.SmoothBilatral method doesn't do anything with its temporary result image other than pass its pointer and return it, I think it gets optimised away to the extent that the result image can be collected while the smooth is processing. Because there's no finalizer for this class, opencv will not get called upon to release its unmanaged image header (which has a pointer to the managed image data), so opencv still thinks it has a usable image structure. You can fix it by taking a reference to the result of SmoothBilatral and doing something with it (like disposing it). This extension method would also work (i.e. allow it to be called successfully for benchmarking without the result being used):

public static class BilateralExtensionFix
{
    public static Emgu.CV.Image<testchannels, testtype> SmoothBilateral(this Emgu.CV.Image<testchannels, testtype> image, int p1, int p2, int p3)
    {
        var result = image.CopyBlank();
        var handle = GCHandle.Alloc(result);
        Emgu.CV.CvInvoke.cvSmooth(image.Ptr, result.Ptr, Emgu.CV.CvEnum.SMOOTH_TYPE.CV_BILATERAL, p1, p1, p2, p3);
        handle.Free();
        return result;
    }
}

I think what EmguCV should be doing is only pinning pointers to pass to opencv while making an interop call.

P.S. The OpenCv bilateral filter crashes (producing a very similar error to your problem) on any kind of float image passed with zero variation (min() = max()) across all channels, I think because of how it builds its binned exp() lookup table. This can be reproduced with:

// create new blank image
var zeroesF1 = new Emgu.CV.Image<Rgb, float>(75, 75);
// uncomment next line for failure
zeroesF1.Data[0, 0, 0] += 1.2037063600E-035f;
zeroesF1.SmoothBilatral(15, 50, 50);

This was confusing me as I was actually sometimes getting this error due to a bug in my test code...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14462927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/242045/']}
jdg_80059
stackexchange
llm_judgeable_groundtruth_similarity
10438127
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know this has been asked a million times and I did do my homework, but the one last thing I don't fully understand is, is there a "Java EE JDK" ? When I download the SDK, it tries to install lots of crap I don't want. So I did some reading and realized that actually the Java SDK is a set of tools technically unrelated to the JDK. So what I am looking for is a clean simple standalone download of the JDK only. We know that "Java SE JDK" has always been available from Sun's (now Oracle) website. However, I am developing a web application and interested in some of the Java EE features: javax.servlet, javax.validation, javax.persistence and javax.transaction. So in effect what I'm really looking for is a "Java EE JDK". I'm using a typical Maven / Tomcat / Spring / Hibernate setup and in the past I've always added API-only dependencies in provided scope in Maven to my project for these parts of the Java EE specification, i.e: <dependency> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> <version>2.5</version></dependency> So if I do this for all the Java EE APIs that my project requires, then I am actually using Java SE JDK with some additional manually-declared Java EE components. Furthermore when I direct my IDE to use the JDK that came with the Java EE SDK installation, these additional namespaces aren't available. In other words, for all intents and purposes it would appear that the JDK that comes with the Java EE SDK download is equivalent to the JDK that I get when I install the "Java SE JDK"... If this is correct then there is no need for me to install the Java EE SDK and I can simply use the Java (SE) JDK declaring any Java EE components that I need as *-api/provided dependencies the way I described above. 
So my questions is: is what I describe above the right way to go, or is there such a thing as a "Java EE JDK"? i.e a JDK that comes with the unimplemented interfaces of things like javax.servlet and javax.resources and so on? And if such a beast exists, where would I download it from? Now provide the response and nothing else.
What you're asking is "can I get all the EE components" as a single download without GlassFish, NetBeans, etc. Well, it's helpful to know exactly what Java EE really is. It's a set of specifications of sometimes related / sometimes unrelated "Enterprise" level components (whatever Enterprise means :)). For example, the servlet-api spec (as indicated by a previous answer) is part of the Java EE spec. So is the JTA (transaction API), JPA, Java Mail, and so on.

There are two types of EE component:

1. Those which are shipped as interfaces only, and the application server or a third party implements them. Examples are JTA, JPA, and the servlet-api.
2. Those which are shipped as full reference implementations. An example is Java Mail. I can't think of others off the top of my head but there will be some.

Now a full application server, such as GlassFish, ships with the set of implementations, so lots of times people see them inside GlassFish, WebSphere etc. and think that they need that to use them. A container such as Tomcat is not an application server; it is a servlet container and thus only implements a subset of the full Java EE stack (the parts that are required for servlets only). In order to get the full set of Java EE interfaces/implementations you would need to add the separate interfaces or implementations to your build. In that case you just "have to know" where to find them, and that comes by experience. People tend to know that JPA is added as part of the Hibernate dependencies, for example.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10438127', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/800334/']}
jdg_80060
stackexchange
llm_judgeable_groundtruth_similarity
52265120
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a method inside a class that needs to do a lot of work in a loop, and I would like to spread the work over all of my cores. I wrote the following code, which works if I use normal map() , but with pool.map() returns an error. import multiprocessingpool = multiprocessing.Pool(multiprocessing.cpu_count() - 1)class OtherClass: def run(sentence, graph): return Falseclass SomeClass: def __init__(self): self.sentences = [["Some string"]] self.graphs = ["string"] def some_method(self): other = OtherClass() def single(params): sentences, graph = params return [other.run(sentence, graph) for sentence in sentences] return list(pool.map(single, zip(self.sentences, self.graphs)))SomeClass().some_method() Error 1: AttributeError: Can't pickle local object 'SomeClass.some_method..single' Why can't it pickle single() ? I even tried to move single() to the global module scope (not inside the class - makes it independent of the context): import multiprocessingpool = multiprocessing.Pool(multiprocessing.cpu_count() - 1)class OtherClass: def run(sentence, graph): return Falsedef single(params): other = OtherClass() sentences, graph = params return [other.run(sentence, graph) for sentence in sentences]class SomeClass: def __init__(self): self.sentences = [["Some string"]] self.graphs = ["string"] def some_method(self): return list(pool.map(single, zip(self.sentences, self.graphs)))SomeClass().some_method() and I get the following ... Error 2: AttributeError: Can't get attribute 'single' on module ' main ' from '.../test.py' Now provide the response and nothing else.
Error 1: AttributeError: Can't pickle local object 'SomeClass.some_method..single'

You solved this error yourself by moving the nested target-function single() out to the top-level. Background: Pool needs to pickle (serialize) everything it sends to its worker-processes ( IPC ). Pickling actually only saves the name of a function, and unpickling requires re-importing the function by name. For that to work, the function needs to be defined at the top-level; nested functions won't be importable by the child, and already trying to pickle them raises an exception ( more ).

Error 2: AttributeError: Can't get attribute 'single' on module 'main' from '.../test.py'

You are starting the pool before you define your function and classes; that way the child processes cannot inherit any code. Move your pool start up to the bottom and protect ( why? ) it with if __name__ == '__main__':

import multiprocessing

class OtherClass:
    def run(self, sentence, graph):
        return False

def single(params):
    other = OtherClass()
    sentences, graph = params
    return [other.run(sentence, graph) for sentence in sentences]

class SomeClass:
    def __init__(self):
        self.sentences = [["Some string"]]
        self.graphs = ["string"]

    def some_method(self):
        return list(pool.map(single, zip(self.sentences, self.graphs)))

if __name__ == '__main__':  # <- prevent RuntimeError for 'spawn'
                            # and 'forkserver' start_methods
    with multiprocessing.Pool(multiprocessing.cpu_count() - 1) as pool:
        print(SomeClass().some_method())

Appendix ...I would like to spread the work over all of my cores. Potentially helpful background on how multiprocessing.Pool is chunking work: Python multiprocessing: understanding logic behind chunksize
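The pickling constraint behind Error 1 can be checked directly, without a Pool. This small stdlib-only sketch (function names are illustrative) shows that pickle stores top-level functions by qualified name, so a function defined inside another function cannot be serialized:

```python
import pickle

def top_level(x):
    return x * 2

def make_nested():
    # 'nested' has the qualified name 'make_nested.<locals>.nested',
    # which a child process cannot re-import by name.
    def nested(x):
        return x * 2
    return nested

# A top-level function pickles by reference (module + qualified name)
# and survives a round-trip.
restored = pickle.loads(pickle.dumps(top_level))
print(restored(21))  # 42

# Pickling the nested function fails with the same kind of
# "Can't pickle local object" error the Pool raises.
try:
    pickle.dumps(make_nested())
except (pickle.PicklingError, AttributeError) as exc:
    print("pickling failed:", exc)
```

This is why moving single() to module scope (and only then starting the pool) resolves both errors.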
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/52265120', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5561875/']}
jdg_80061
stackexchange
llm_judgeable_groundtruth_similarity
3269899
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want my parent page to refresh when I close a Fancy Box popup frame. I have a login page inside the popup, so I need the parent page to refresh to show the new login state when the Fancy Box closes. I can get it to work without the iFrame code: <script type="text/javascript"> $(document).ready(function() { $("a.iframeFancybox1").fancybox({ 'width': 800, 'height': 450, 'onClosed': function() { parent.location.reload(true); ;} }); });</script> But I can't get it to work with the iFrame code: <script type="text/javascript"> $(document).ready(function() { $('a.iframeFancybox1').fancybox({ 'width': 800, 'height': 450, 'type' : 'iframe' 'onClosed': function() { parent.location.reload(true); ;} }); });</script> The problem is to do with this line: 'type' : 'iframe' As soon as I add that line, the popup stops working. Can anyone suggest a solution as to how I can get an iframe to popup in Fancy Box, and still get the parent page to refresh when the box closes?
Thanks! Now provide the response and nothing else.
$(".fancybox").fancybox({
    type: 'iframe',
    afterClose: function () { // USE THIS IT IS YOUR ANSWER THE KEY WORD IS "afterClose"
        parent.location.reload(true);
    }
});

Alternatively: go to your jquery.fancybox.js file and go to line 1380. It looks like the following; add parent.location.reload(true);

afterClose: function (opts) {
    if (this.overlay) {
        this.overlay.fadeOut(opts.speedOut || 0, function () {
            $(this).remove();
            parent.location.reload(true);
        });
    }
    this.overlay = null;
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3269899', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/393782/']}
jdg_80062
stackexchange
llm_judgeable_groundtruth_similarity
68785926
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating my API test framework with TypeScript and I'm new to it. I created this helper file assertions.ts import * as chai from 'chai';interface response { status: number body: object}const assertResponseStatusCode = (response:response, statusCode:number) => chai.expect(response.status).to.deep.equals(statusCode);const assertSuccessResponseStatusCode = (response:response) => assertResponseStatusCode(response, 201)export = {assertSuccessResponseStatusCode} Then I started using them on my spec file as bellow import * as auth from '../Helpers/auth';import * as assert from '../Helpers/assertions';import { user } from '../Samples/user';describe('login', () => { it('should return access token with valid credentials', async() => { const response = await auth.login(user); console.log(response.body); assert.assertSuccessResponseStatusCode(response); });}); I'm getting this error This module can only be referenced with ECMAScript imports/exports by turning on the 'esModuleInterop' flag and referencing its default export. . for this line import * as assert from '../Helpers/assertions'; although esModuleInterop is set as true in tsconfig.json . Now provide the response and nothing else.
I just ran into this too and the other answer gives one solution, but I think the better solution in this case is to change:

export = {assertSuccessResponseStatusCode}

To:

export {assertSuccessResponseStatusCode}

The first method is TypeScript's interop with CommonJS modules, but you can only import those like import assertions from './assertions' . The second is an ESM export and can be imported as either import { assertSuccessResponseStatusCode } from './assertions' or import * as assertions from './assertions' . It also allows you to have default exports along with the other exports.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/68785926', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3442332/']}
jdg_80063
stackexchange
llm_judgeable_groundtruth_similarity
29023493
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have 4 slaves (including master). I get this error in one of my slaves when I run TeraSort. The DataNodes are up before running, but when I run one of my DataNodes die and the computations are done by the rest of 3 slaves: INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_-5677299757617064640_1010 received exception java.io.IOException: Connection reset by peer2015-03-12 16:42:06,835 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.115:50010, storageID=DS-518613992-192.168.0.115-50010-1426203432424, infoPort=50075, ipcPort=50020):DataXceiverjava.io.IOException: Connection reset by peer (this is one error same log same run )2015-03-12 16:42:09,809 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.115:50010, storageID=DS-518613992-192.168.0.115-50010-1426203432424, infoPort=50075, ipcPort=50020): Exception writing block blk_2791945666924613489_1015 to mirror 192.168.0.112:50010java.io.IOException: Broken pipe(Second error)2015-03-12 16:42:09,824 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_2791945666924613489_1015 received exception java.io.EOFException: while trying to read 65557 bytes(third error same run) I am stuck in this. Any help is appreciated! Task Tracker Log: WARN org.apache.hadoop.mapred.TaskTracker: Failed validating JVMjava.io.IOException: JvmValidate Failed. 
Ignoring request from task: attempt_201503121637_0001_m_000040_0, with JvmId: jvm_201503121637_0001_m_-2136609016 at org.apache.hadoop.mapred.TaskTracker.validateJVM(TaskTracker.java:3278) at org.apache.hadoop.mapred.TaskTracker.statusUpdate(TaskTracker.java:3348) at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)2015-03-12 16:43:02,577 WARN org.apache.hadoop.mapred.DefaultTaskController: Exit code from task is : 1432015-03-12 16:43:02,577 INFO org.apache.hadoop.mapred.DefaultTaskController: Output from DefaultTaskController's launchTask follows:2015-03-12 16:43:02,577 INFO org.apache.hadoop.mapred.TaskController:2015-03-12 16:43:02,577 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201503121637_0001_m_1555953113 exited with exit code 143. 
Number of tasks it ran: 12015-03-12 16:43:02,599 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201503121637_0001_m_000054_0 task's state:UNASSIGNED2015-03-12 16:43:02,599 INFO org.apache.hadoop.mapred.TaskTracker: Received commit task action for attempt_201503121637_0001_m_000048_02015-03-12 16:43:02,599 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201503121637_0001_m_000054_0 which needs 1 slots2015-03-12 16:43:02,600 INFO org.apache.hadoop.mapred.TaskTracker: TaskLauncher : Waiting for 1 to launch attempt_201503121637_0001_m_000054_0, currently we have 0 free slots2015-03-12 16:43:03,618 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201503121637_0001_m_1496188144 given task: attempt_201503121637_0001_m_000051_0 Now provide the response and nothing else.
The solution for invalidating your layout when the bounds of the collection view changes is to override shouldInvalidateLayoutForBoundsChange: and return YES. It's also stated in the documentation: https://developer.apple.com/documentation/uikit/uicollectionviewlayout/1617781-shouldinvalidatelayoutforboundsc

- (BOOL)shouldInvalidateLayoutForBoundsChange:(CGRect)newBounds {
    return YES;
}

This should cover rotation support as well. If it doesn't, implement viewWillTransitionToSize:withTransitionCoordinator:

- (void)viewWillTransitionToSize:(CGSize)size withTransitionCoordinator:(id<UIViewControllerTransitionCoordinator>)coordinator
{
    [super viewWillTransitionToSize:size withTransitionCoordinator:coordinator];

    [coordinator animateAlongsideTransition:^(id<UIViewControllerTransitionCoordinatorContext> context) {
        [self.collectionView.collectionViewLayout invalidateLayout];
    } completion:^(id<UIViewControllerTransitionCoordinatorContext> context) {
    }];
}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/29023493', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4040274/']}
jdg_80064
stackexchange
llm_judgeable_groundtruth_similarity
43200846
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As the question already suggests, I am new to deep learning. I know that the learning process of the model will be slow without GPU. If I am willing to wait, Will it be OK if i use CPU only ? Now provide the response and nothing else.
Many operations which are performed in computing deep learning (and neural networks in general) can be run in parallel, meaning they can be calculated independently then aggregated later. This is, in part, because most of the operations are on vectors. A typical consumer CPU has between 4 to 8 cores, and hyperthreading allows them to be treated as 8 or 16 respectively. Server CPUs can have between 4 to 24 cores, 8 to 48 threads respectively. Additionally, most modern CPUs have SIMD (single instruction multiple data) extensions which allow them to perform vector operations in parallel on a single thread. Depending on the data type you're working with, an 8 core CPU can perform 8 * 2 * 4 = 64 to 8 * 2 * 8 = 128 vector calculations at once. Nvidia's new 1080ti has 3584 CUDA cores, which essentially means it can perform 3584 vector calculations at once (hyperthreading and SIMD don't come into play here). That's 56 to 28 times more operations at once than an 8 core CPU. So, whether you're training a single network, or multiples to tune meta-parameters, it will probably be significantly faster on a GPU than a CPU.
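The back-of-envelope arithmetic in this answer can be written out explicitly. The figures below are the answer's own illustrative numbers (an 8-core CPU with 2-way hyperthreading and 256-bit SIMD, versus a 1080 Ti's 3584 CUDA cores); this is a rough capacity comparison, not measured throughput:

```python
cores = 8
threads_per_core = 2       # hyperthreading
lanes_64bit = 4            # 256-bit SIMD register / 64-bit elements
lanes_32bit = 8            # 256-bit SIMD register / 32-bit elements

# Vector operations the CPU can have in flight at once, by element width.
cpu_ops_wide = cores * threads_per_core * lanes_64bit    # 64
cpu_ops_narrow = cores * threads_per_core * lanes_32bit  # 128

cuda_cores = 3584  # GTX 1080 Ti

print(cpu_ops_wide, cpu_ops_narrow)    # 64 128
print(cuda_cores // cpu_ops_wide)      # 56
print(cuda_cores // cpu_ops_narrow)    # 28
```

That ratio (roughly 28x to 56x more simultaneous vector operations) is where the "significantly faster on a GPU" conclusion comes from.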
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43200846', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2328593/']}
jdg_80065
stackexchange
llm_judgeable_groundtruth_similarity
2453841
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Every time I start a new piece of software I have to go into the configuration and turn off pdb file generation and the Visual Studio hosting process for Release builds. Is there any way to tell Visual Studio (2008 specifically) that I want to do that for all projects for the rest of time? Now provide the response and nothing else.
After some digging around, it appears that project files for C# are stored in \program files\microsoft visual studio 9.0\common7\ide\projecttemplatescache\csharp\windows\1033 . By adding <UseVSHostingProcess>false</UseVSHostingProcess> to the correct sections (there are separate sections for Debug and Release configurations) of the relevant templates, you can turn off the hosting process for all future projects of the selected types. You should be able to handle the PDB issue in a similar way, but as I said I don't recommend turning those off, so I'll leave it as an exercise :) This applies to VS2008, but my guess is that other editions have a similar scheme. In fact, VS2010 uses the same approach, but obviously the version number in the directory is 10.0 instead of 9.0.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2453841', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/60340/']}
jdg_80066
stackexchange
llm_judgeable_groundtruth_similarity
167489
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a Surface Pro 3 and am on the run a lot so I'm thinking about getting a deep cycle AGM battery to put inside my car so I have a constant power. I need to know and confirm some measurements regarding it so I can buy the proper battery. power adapter says INPUT: 100-240V @ 1A, OUPUT: 12V @ 2.58ASo if I can somehow supply DC power (12V) directly from battery to the surface,it would draw maximum of 2.58 A/h? if yes, if load is lighter then less Ah, right? and so under its heaviest use it would never pull more than 2.58Ah? and INPUT/OUTPUT rating kinda omits hour in Amp so that it actually means Ah? If I'm just gonna use inverter to plug in via AC power then, with battery source being 12v, it would draw 100V / 12V = 8.33 so 8.33 x 1A = 8.33Ah ? And what does 100-240V exactly mean? does it mean it is capable of pulling from 100 to 240V and if in US it can pull 110v? and in Europe it can pull with the help of only prong adapter 240V? if yes, then is it pulling 240V x 1A = 240W of power where in US it would only pull 110W? or does in Europe's case if it's pulling 240V then does A go down to 110/240=0.458Ah? Now provide the response and nothing else.
The Amp-hours of a battery gives the number of hours it can deliver 1 amp, or the number of amps it can deliver for one hour. Amp-hours = amps x hours. So a 50Ah battery can run for 50 hours at one amp, or 50 amps for one hour. Or 2 amps for 25 hours, or 25 amps for 2 hours. Slight detour: I suspect for your use you are correct in wanting a deep-cycle battery. Normal car batteries, for example, can typically only use 25% of their amp-hour capacity. This is because it is not healthy for the battery to be discharged to less than 50%; it will shorten its life (the number of charge-discharge cycles will be reduced). Also, car battery charging systems are not intelligent, and tend not to charge them effectively above 75%. Partly to reduce complexity, partly to increase lifespan of the battery. As for AC-powered battery chargers, none are going to be 100% efficient. Some may only be 90% efficient, and some far less efficient. Back to your question, answering part 1: For your "Surface" device: If its power adapter says INPUT: 100-240v @ 1A, OUTPUT: 12v, 2.58A, That means the Surface would draw 2.58A maximum. 2.58A is continuous; this does not mean 2.58A per hour, it means 2.58A maximum every second, every minute, every hour. Same thing. But yes, per hour, it would use 2.58Ah (amp-hours, not amps). (Maximum.) And yes, with less load, fewer amps would be drawn at that moment. So if you wanted to use the Surface for 1 hour at full load, then use the formula above to convert amps to amp-hours: Amp-hours = amps x hours = 2.58 x 1 = 2.58. So you would need a 2.58Ah capacity battery. Or if you wanted to run it for 10 hours with the full load, you would need a 25.8Ah capacity battery. Answering part 2: Regarding "100-240v". Seeing this on a PSU usually means it can work on any voltage in that range, completely automatically, without needing a switch to be flipped on it, and without needing an external voltage converter. 
However, exactly how much current it draws at the different voltages at the INPUT, in other words the amount it takes from the mains supply, is a big question mark. Some PSUs may be 90% efficient under all circumstances: different supply voltages, and different loads on their output. However, some will not. I have seen some where it is enormously inefficient (50%) at the higher voltages. Similarly, some are only 90% efficient under full load. The best bet is to take the worst-case input current that is stated on the device (amps) and assume it will draw that under all circumstances. Your calculations of 8.33Ah and 0.458Ah are wrong, I'm afraid. I'll try to work out what you wanted to know: Assuming you have a 100% efficient inverter, and wanted to run the Surface's PSU on it, here's how you would work out the power of the inverter that you'll need, and the battery you'd need to supply it. But you should note that the inverter will be at most 90% efficient. If you use a 110v inverter, and the Surface PSU draws 1A at its INPUT, then the power of the inverter will have to be at least:

Power = Amps x Volts = 1 x 110 = 110 Watts.

As for the battery capacity required, this could be done in different ways, but perhaps most logical from your point of view would be to start by working out how much the inverter would draw from the battery. Assume 100% efficiency of the inverter. Therefore, power in = power out. Above, we calculated the power. So now we use the above formula to calculate the current (amps) that the inverter will take from the battery:

Power = Amps x Volts
110 watts = amps x 12
Therefore amps (every second, every hour, same thing; it's continuous) = 110/12 = 9.16 amps.

So at any moment, the inverter will need to draw 9.16 amps from the battery. If you need to power the Surface for one hour, it will use 9.16 Amp-hours of the battery's capacity. If you need to run the Surface for 10 hours, it will use 91.6 Amp-hours of the battery's capacity.
(If you're using it for 10 hours, it will still only be drawing 9.16 amps. If you're using it for 10 minutes, it will also still be drawing 9.16 amps). To answer your comments on my answer: Think of amps as speed, and amp-hours as distance. If a runner runs at 3mph, then in one hour he will run 3 miles. If he runs for 10 hours, he will run 30 miles. But whether for 1 or 10 hours, he will constantly be running at 3 miles per hour. Let's say he can run a total of 15 miles, like the capacity of a battery. He could spread it out, and run at 1mph for 15 hours, or he could run flat out (his peak), at 5mph for 3 hours. Or, say he sprints and slowly jogs alternating every half hour: So he would run at his peak for half an hour at 5mph, then a slow jog at 1mph for half an hour, and repeat. Each hour he would run a total of 3 miles, so his average speed would be 3 mph. So he could keep this alternation up for a total of 5 hours. Now back to the "inefficiency" you talk about. Imagine our runner is going up a hill. The voltage is equivalent to the steepness of the hill. So if he is going 4mph up a hill with a 1:20 incline, he will be exerting the same energy per X amount of time as if he was going 2mph up a 1:10 incline. Similarly, if a jogger runs 4 miles up a 1:20 incline, he will have exerted the same total amount of energy as if he'd run 2 miles up a 1:10 incline. Regarding your comment, there is actually no difference between AC and DC in all the calculations above - it can basically be ignored for your purposes here when calculating amps, voltages, amp-hours and power. (Just don't try running a DC device off AC power or vice versa). If you were to power a 12v Surface with a 12v battery, then if the Surface uses 2.58 amps, that is exactly how much is drawn from the battery. (But I would not recommend this, because the battery's voltage could vary anywhere between 10v and 14.5v depending on its state of charge, among other things. The Surface could be permanently damaged).
If, say you were to power the 12v Surface taking 2.58 amps, via a converter, powered by a 6v battery, then with a 100% efficient conversion, 2.58 x 2 = 5.16A would be drawn from the 6v battery. I hope that clears things up.
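The arithmetic in this answer can be collected into two small helper functions. This is a sketch; the function names are mine, and the 100%-efficiency default is the same simplifying assumption used above:

```python
def inverter_draw_amps(load_watts, battery_volts, efficiency=1.0):
    """Current (amps) the inverter pulls from the battery for a given load."""
    return load_watts / (battery_volts * efficiency)

def runtime_hours(battery_amp_hours, draw_amps):
    """Hours a battery of the given capacity sustains a constant draw."""
    return battery_amp_hours / draw_amps

# The worked example above: a 110 W inverter load on a 12 V battery,
# assuming (unrealistically) a 100%-efficient inverter.
draw = inverter_draw_amps(110, 12)
print(round(draw, 2))           # 9.17 A (the 9.16 above is just truncated)
print(runtime_hours(50, draw))  # hours a 50 Ah battery would last at that draw
```

Passing a real efficiency, e.g. inverter_draw_amps(110, 12, 0.9), shows how much harder the battery has to work once conversion losses are included.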
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/167489', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/73603/']}
jdg_80067
stackexchange
llm_judgeable_groundtruth_similarity
2923899
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: While reading the probability space in Wikipedia, I'd found the usual formulation is a triplet, which is ${\displaystyle (\Omega ,{\mathcal {F}},P)}$. Upon my understanding, the middle ${\mathcal {F}}$ is a power set of $\Omega$ which will be allocated with real-valued probabiilty by $P$. If every set in this nature has power set, there might be no necessity of introduction of ${\mathcal {F}}$ I guess however, I've never thought of a set which doesn't have its power set. Is there any set that doesn't have power set? or if not, which means every set has its power set, is there any plausible reason that ${\mathcal {F}}$ is introduced in probability formulation? Now provide the response and nothing else.
In standard mathematics, every set has a power set. This is encoded in the Axiom of Power Set . However, your confusion lies with the definition of a probability space, not with set theory. The set $\mathcal F$ in a probability space $(\Omega, \mathcal F, P)$ is not necessarily the power set of $\Omega$. The set $\mathcal F$ is a subset of the power set $\mathcal P(\Omega)$. This $\mathcal F$ is required to be a so-called sigma algebra, which tells you that it shares some properties in common with the full power set, but it need not be the full power set at all. In particular, for any $\Omega$, you can take $\mathcal F = \{\emptyset, \Omega\}$, and this will be a sigma algebra on $\Omega$. Unless $|\Omega| \leq 1$, it will not be the power set.
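For a finite $\Omega$ this can be checked mechanically. A small sketch (the helper names are mine) verifying that both the trivial family $\{\emptyset, \Omega\}$ and the full power set satisfy the finite sigma-algebra closure properties:

```python
from itertools import chain, combinations

omega = frozenset({1, 2, 3})

def powerset(s):
    """All subsets of s, as frozensets."""
    return {frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

def is_sigma_algebra(family, omega):
    """Finite check: contains Omega, closed under complement and union."""
    if omega not in family:
        return False
    for a in family:
        if omega - a not in family:      # closed under complement
            return False
        for b in family:
            if a | b not in family:      # closed under (finite) union
                return False
    return True

trivial = {frozenset(), omega}
print(is_sigma_algebra(trivial, omega))          # True
print(is_sigma_algebra(powerset(omega), omega))  # True
print(len(trivial), len(powerset(omega)))        # 2 8
```

(For infinite $\Omega$ it is closure under countable unions that matters, which no finite loop can verify; this only illustrates the finite case.)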
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2923899', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/442594/']}
jdg_80068
stackexchange
llm_judgeable_groundtruth_similarity
30152202
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for all standard Java classes for which compareTo() can return 0 while equals() returns false : Comparable comparable1 = ???;Comparable comparable2 = ???;assert comparable1.compareTo(comparable2) == 0;assert !comparable1.equals(comparable2); I know just one: new BigDecimal("1.0") is equal to new BigDecimal("1") using compareTo() but not equal using equals() . Are there any others? I'm interested in all such classes, but only from public Java API. I need it to provide thorough documentation for AssertJ's UnevenComparableAssert interface. EDIT: Thanks to @ErikVesteraas, I've managed to reproduce additional example: Calendar calendar1 = new GregorianCalendar(0, 0, 0);Calendar calendar2 = new GregorianCalendar(0, 0, 0);calendar2.setLenient(false);Comparable comparable1 = calendar1;Comparable comparable2 = calendar2;assert comparable1.compareTo(comparable2) == 0; // compareTo compares along the timelineassert !comparable1.equals(comparable2); // equals compares state, leniency is different Now provide the response and nothing else.
There actually seems to be a few examples, but some are not well documented in the code or JavaDoc. This blog post by Stephen Colebourne gives the details, but in summary the following classes are inconsistent with equals: java.math.BigDecimal java.io.ObjectStreamField javax.management.ObjectName (have not been able to reproduce an example though) java.util.Calendar java.util.GregorianCalendar For example: ObjectStreamField a = new ObjectStreamField("foo", String.class);ObjectStreamField b = new ObjectStreamField("foo", String.class);a.equals(b); // false, checks object equalitya.compareTo(b); // 0 As noted by Olivier, Java 8 also adds java.time.zone.ZoneOffsetTransition Notably java.time.OffsetTime has avoided inconsistency by adding extra methods isAfter , isBefore and isEqual for doing time-line comparison/equality-checking.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30152202', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4647853/']}
jdg_80069
stackexchange
llm_judgeable_groundtruth_similarity
22081738
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: int SWAR(unsigned int i){ i = i - ((i >> 1) & 0x55555555); i = (i & 0x33333333) + ((i >> 2) & 0x33333333); return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;} I have seen this code that counts the number of bits equals to 1 in 32-bit integer, and I noticed that its performance is better than __builtin_popcount but I can't understand the way it works. Can someone give a detailed explanation of how this code works? Now provide the response and nothing else.
OK, let's go through the code line by line:

Line 1: i = i - ((i >> 1) & 0x55555555);

First of all, the significance of the constant 0x55555555 is that, written using the Java / GCC style binary literal notation,

0x55555555 = 0b01010101010101010101010101010101

That is, all its odd-numbered bits (counting the lowest bit as bit 1 = odd) are 1, and all the even-numbered bits are 0. The expression ((i >> 1) & 0x55555555) thus shifts the bits of i right by one, and then sets all the even-numbered bits to zero. (Equivalently, we could've first set all the odd-numbered bits of i to zero with & 0xAAAAAAAA and then shifted the result right by one bit.) For convenience, let's call this intermediate value j. What happens when we subtract this j from the original i? Well, let's see what would happen if i had only two bits:

    i           j         i - j
----------------------------------
0 = 0b00    0 = 0b00    0 = 0b00
1 = 0b01    0 = 0b00    1 = 0b01
2 = 0b10    1 = 0b01    1 = 0b01
3 = 0b11    1 = 0b01    2 = 0b10

Hey! We've managed to count the bits of our two-bit number! OK, but what if i has more than two bits set? In fact, it's pretty easy to check that the lowest two bits of i - j will still be given by the table above, and so will the third and fourth bits, and the fifth and sixth bits, and so on. In particular: despite the >> 1, the lowest two bits of i - j are not affected by the third or higher bits of i, since they'll be masked out of j by the & 0x55555555; and since the lowest two bits of j can never have a greater numerical value than those of i, the subtraction will never borrow from the third bit of i: thus, the lowest two bits of i also cannot affect the third or higher bits of i - j. In fact, by repeating the same argument, we can see that the calculation on this line, in effect, applies the table above to each of the 16 two-bit blocks in i in parallel.
That is, after executing this line, the lowest two bits of the new value of i will now contain the number of bits set among the corresponding bits in the original value of i, and so will the next two bits, and so on.

Line 2: i = (i & 0x33333333) + ((i >> 2) & 0x33333333);

Compared to the first line, this one's quite simple. First, note that

0x33333333 = 0b00110011001100110011001100110011

Thus, i & 0x33333333 takes the two-bit counts calculated above and throws away every second one of them, while (i >> 2) & 0x33333333 does the same after shifting i right by two bits. Then we add the results together. Thus, in effect, what this line does is take the bitcounts of the lowest two and the second-lowest two bits of the original input, computed on the previous line, and add them together to give the bitcount of the lowest four bits of the input. And, again, it does this in parallel for all the 8 four-bit blocks (= hex digits) of the input.

Line 3: return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;

OK, what's going on here? Well, first of all, (i + (i >> 4)) & 0x0F0F0F0F does exactly the same as the previous line, except it adds the adjacent four-bit bitcounts together to give the bitcounts of each eight-bit block (i.e. byte) of the input. (Here, unlike on the previous line, we can get away with moving the & outside the addition, since we know that the eight-bit bitcount can never exceed 8, and therefore will fit inside four bits without overflowing.) Now we have a 32-bit number consisting of four 8-bit bytes, each byte holding the number of 1-bits in that byte of the original input. (Let's call these bytes A, B, C and D.) So what happens when we multiply this value (let's call it k) by 0x01010101?
Well, since 0x01010101 = (1 << 24) + (1 << 16) + (1 << 8) + 1, we have:

k * 0x01010101 = (k << 24) + (k << 16) + (k << 8) + k

Thus, the highest byte of the result ends up being the sum of: its original value, due to the k term, plus the value of the next lower byte, due to the k << 8 term, plus the value of the second lower byte, due to the k << 16 term, plus the value of the fourth and lowest byte, due to the k << 24 term. (In general, there could also be carries from lower bytes, but since we know the value of each byte is at most 8, we know the addition will never overflow and create a carry.) That is, the highest byte of k * 0x01010101 ends up being the sum of the bitcounts of all the bytes of the input, i.e. the total bitcount of the 32-bit input number. The final >> 24 then simply shifts this value down from the highest byte to the lowest. P.S. This code could easily be extended to 64-bit integers, simply by changing the 0x01010101 to 0x0101010101010101 and the >> 24 to >> 56. Indeed, the same method would even work for 128-bit integers; 256 bits would require adding one extra shift / add / mask step, however, since the number 256 no longer quite fits into an 8-bit byte.
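The whole derivation can be cross-checked in a few lines. Here is a sketch of the same algorithm transcribed into Python; note the extra & 0xFFFFFFFF masks, needed because Python integers do not wrap at 32 bits the way C unsigned ints do:

```python
def swar_popcount(i):
    """32-bit population count via the SWAR steps explained above."""
    i &= 0xFFFFFFFF
    i = i - ((i >> 1) & 0x55555555)                 # per-2-bit counts
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333)  # per-4-bit counts
    i = (i + (i >> 4)) & 0x0F0F0F0F                 # per-byte counts
    return ((i * 0x01010101) & 0xFFFFFFFF) >> 24    # sum bytes, take top byte

# Cross-check against Python's own bit counting.
for x in (0, 1, 0xFF, 0xDEADBEEF, 0xFFFFFFFF):
    assert swar_popcount(x) == bin(x).count("1")
print(swar_popcount(0xFFFFFFFF))  # 32
```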
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22081738', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/942396/']}
jdg_80070
stackexchange
llm_judgeable_groundtruth_similarity
327708
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Can someone direct me to information on exact browsers behavior when browser gets multiple A records for a given hostname (say ip1 and ip2), and one of them is not accessible. I interested in EXACT details, like (but not limited to): Will browser get 2 IPs from OS, or it will get only one ? Which ip will browser try first (random or always the first one) ?Now, let's say browser started with the failed ip1 For how long will browser try ip1 ? If user hits "stop" while it waits for ip1, and then clicks refresh which IP will browser try ? What will happen when it times-out - will it start trying ip2 orgive error ? (And if error, which ip will browser try when userclicks refresh). When user clicks refresh, will any browser attempt new DNS lookup ? Now let's assume browser tried working ip2 first. For the next page request, will browser still use ip2, or it may randomlyswitch ips ? For how long browsers keep IPs in their cache ? When browsers sends a new DNS request, and get SAME ips, will itCONTINUE to use the same known-to-be-working IP, or the process starts fromscratch and it may try any of the two ? Of course it all may be browser dependent, and may also vary between versions and platforms, I'd be happy to have maximum of details. The purpose of this - I'm trying to understand what exactly users will experience when round-robin DNS based used and one of the hosts fails. Please, I'm NOT asking about how bad DNS load balancing is, and please refrain from answering "don't do it", "it's a bad idea", "you need heartbeat/proxy/BGP/whatever" and so on. Now provide the response and nothing else.
After all, I had to do the "research" myself. Here's Chromium's (version 12.0.742.112) behavior (running on Ubuntu 11.04): Generally it works like this: try the first IP; once it times out (after 189 sec.), try the second IP. No error messages are given until all IPs have been tried. The next connection will start from the first IP again (even if it just failed a second ago and the second IP worked; the browser doesn't care). One interesting thing: TCP connection attempts aren't dropped when the user hits cancel, i.e. when I hit cancel, and after 60 seconds click retry, the page will show in 130 seconds (189 from the first attempt). But if I click cancel and click refresh after 190 seconds, the process will start from the beginning. Regarding the items in the original question: The browser gets both IPs from the OS; the OS doesn't change the order of IPs. The browser always tries to connect to the IP that appears first. It tries for 189 seconds. On the second attempt, it will try the first IP again. When the first IP times out, the browser silently continues to the second IP. If it works, the page shows up; if not, waiting continues. Not tested. This blog states Chrome caches DNS for just up to 1 minute, when we look at chrome://net-internals/#dns :

Capacity: 100
Time to live (ms) for success entries: 60000
Time to live (ms) for failure entries: 0

If the first IP works, the process will be the same and it will always succeed on the first attempt.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/327708', 'https://serverfault.com', 'https://serverfault.com/users/99404/']}
jdg_80071
stackexchange
llm_judgeable_groundtruth_similarity
41008024
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Properties of files received from an <input type="file"> are read-only. For example, the following attempt to re-write file.name would either fail silently or throw TypeError: Cannot assign to read only property 'name' of object '#<File>' . <input onchange="onchange" type="file"> onchange = (event) => { const file = event.target.files[0]; file.name = 'foo';} Attempting to create a copy via Object.assign({}, file) fails (creates an empty object). So how does one clone a File object? Now provide the response and nothing else.
My solution lay in the File constructor: https://developer.mozilla.org/en-US/docs/Web/API/File#Implementation_notes Which itself is an extension of Blob : https://developer.mozilla.org/en-US/docs/Web/API/Blob/Blob let file = event.target.files[0];if (this.props.distro) { const name = 'new-name-here' + // Concat with file extension. file.name.substring(file.name.lastIndexOf('.')); // Instantiate copy of file, giving it new name. file = new File([file], name, { type: file.type });} Note the first argument to File() must be an array, not simply the original file.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41008024', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2540625/']}
jdg_80072
stackexchange
llm_judgeable_groundtruth_similarity
722668
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to traverse a JSON object tree, but cannot find any library for that. It doesn't seem difficult but it feels like reinventing the wheel. In XML there are so many tutorials showing how to traverse an XML tree with DOM :( Now provide the response and nothing else.
If you think jQuery is kind of overkill for such a primitive task, you could do something like that:

//your object
var o = {
    foo: "bar",
    arr: [1, 2, 3],
    subo: {
        foo2: "bar2"
    }
};

//called with every property and its value
function process(key, value) {
    console.log(key + " : " + value);
}

function traverse(o, func) {
    for (var i in o) {
        func.apply(this, [i, o[i]]);
        if (o[i] !== null && typeof(o[i]) == "object") {
            //going one step down in the object tree!!
            traverse(o[i], func);
        }
    }
}

//that's all... no magic, no bloated framework
traverse(o, process);
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/722668', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_80073
stackexchange
llm_judgeable_groundtruth_similarity
2722629
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A Real Analysis textbook says the identity$$b^n-a^n = (b-a)(b^{n-1}+\cdots+a^{n-1})$$ yields the inequality$$b^n-a^n < (b-a)nb^{n-1} \text{ when } 0 < a< b.$$(Note that $n$ is a positive integer) No matter how I look at it, the inequality seems to be wrong. Take for instance, the inequality does not hold for $n=1$ when one tries mathematical induction. It does not hold for other values of $n$ too. I guess there is something I am missing here and I will appreciate help. Now provide the response and nothing else.
\begin{align}b^n-a^n & = (b-a)(b^{n-1}+ b^{n-2}a + b^{n-3}a^2 + b^{n-4}a^3 + b^{n-5} a^4 +\cdots+a^{n-1}) \\[10pt]& < (b-a)(b^{n-1} + b^{n-2} b + b^{n-3}b^2 + b^{n-4}b^3+ b^{n-5}b^4 + \cdots + b^{n-1}) \\[10pt]& = (b-a)(b^{n-1} + b^{n-1} + b^{n-1} + b^{n-1} + b^{n-1} + \cdots + b^{n-1}) \\[10pt]& = (b-a) n b^{n-1}.\end{align}The only positive integer $n$ for which this does not work is $n=1,$ where the second factor has only one term, which is $1.$ And in that case it works if you say $\text{β€œ}\le\text{''}$ instead of $\text{β€œ}<\text{''}.$\begin{align}b^2-a^2 & = (b-a)(b+a) < (b-a)(b+b) & & = (b-a)2b. \\[10pt]b^3-a^3 & = (b-a)(b^2 + ba + a^2) < (b-a)(b^2+b^2+b^2) & & = (b-a)3b^2. \\[10pt]b^4 - a^4 & = (b-a)(b^3+b^2a+ba^2+a^3) \\& < (b-a)(b^3+b^3+b^3+b^3) & & = (b-a)4b^3. \\[10pt]b^5-a^5 & = (b-a)(b^4 + b^3a + b^2 a^2 + ba^3 + a^4) \\& < (b-a)(b^4+b^4+b^4+b^4+b^4) & & = (b-a)5b^4. \\[10pt]& \qquad\qquad\text{and so on.}\end{align}
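A quick numeric spot check of the inequality. This is a sketch that verifies strictness for $n \ge 2$ and the equality case for $n = 1$ on a few sample pairs with $0 < a < b$:

```python
def lhs(a, b, n):
    """Left side: b^n - a^n."""
    return b**n - a**n

def rhs(a, b, n):
    """Right side: (b - a) * n * b^(n-1)."""
    return (b - a) * n * b**(n - 1)

for a, b in [(0.5, 1.0), (1, 2), (3, 7), (0.1, 0.2)]:
    assert lhs(a, b, 1) == rhs(a, b, 1)  # n = 1 gives equality, not "<"
    for n in range(2, 8):
        assert lhs(a, b, n) < rhs(a, b, n)
print("all spot checks passed")
```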
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2722629', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/290638/']}
jdg_80074
stackexchange
llm_judgeable_groundtruth_similarity
6218838
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got a WCF service and want to connect to it using a TCP binding. This is all well and good, but how are you supposed to handle the clients? I've noticed that if you create a new client for each call it doesn't re-use the channel and leaves around a bunch of TCP connections until they time out. Is it normal use to create a client, call a method on it, then close it? What if you want to re-use the connection? What are the limitations on that? Can you make simultaneous calls from different threads? If you can't, do you have to do your own connection pooling? And when re-using the connection, do you have to check the connection state before making calls and clean it up if it's faulted? Now provide the response and nothing else.
Well, this is a lot of questions together and the situation is really a little complicated. When you create a client you can do it either via a service reference and get a class derived from ClientBase<ServiceContract>, or you can use ChannelFactory<ServiceContract> and create channels manually (the former case uses the ChannelFactory internally). How does this relate to your question? Let's first look at the real TCP connection. When you define NetTcpBinding you can set its MaxConnections property (the default is 10). This property defines the number of pooled connections. It means that if you create a client channel to the server and close the channel, the connection is not terminated immediately. It remains open in the pool until it is used by another opened client channel to the same server, or until its idle timeout expires. You can open as many connections as the server allows you, but only the number defined by MaxConnections will be pooled once you close the related client channels. Other connections will be terminated immediately. If you create a CustomBinding you can use the TCP transport directly, where you can also control the idle timeout (I think the default is 2 minutes). Connections are pooled as long as the related ChannelFactory is not destroyed = use one ChannelFactory per application (ClientBase does it internally). Now let's talk about the channel itself, because it is related to your other questions. WCF distinguishes between sessionful and sessionless channels. TcpTransportChannel is sessionful. It means that once you open the channel you create a session. Session means that all requests from a single client proxy are by default always served by the same service instance (per-session instancing). But the instance is by default single threaded. It means that you can have multiple threads using the same proxy, but the service will handle requests in sequential order.
If you want your service to process multiple requests simultaneously you must mark it with [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]. Once you do it you are responsible for thread-safe processing in the service (multiple threads accessing the same service instance). Sessionful channels have one big disadvantage. Any failure or exception on the service will break the channel, and you will usually know about it only after you try to use the channel again (an exception saying that the channel is in a faulted state and cannot be used). You must always correctly handle these situations, and you must correctly close channels/proxies once you don't want to use them, or abort them once they are faulted. A faulted channel cannot be repaired - it must be aborted and you must create a new proxy/channel. I'm not sure if connections are returned to the pool if you don't do that. Reusing a proxy/channel depends on the type of application you are building. I would definitely not reuse a proxy/channel among multiple requests in a web application, but reusing is absolutely OK in a WinForms or WPF application. Edit: Yes, ClientBase uses ChannelFactory internally. The way the ChannelFactory is used has changed over time. In .NET 3.0 the factory was created for each ClientBase instance. Since .NET 3.5 WCF internally uses an MRU (most recently used) cache which caches up to 32 of the most recently used factories. To take advantage of this caching you must use a proxy constructor without parameters or with endpointConfigurationName and remoteAddress / EndpointAddress. You mustn't create the endpoint in code - those proxies don't use the cache. More about the topic is here .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6218838', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/92371/']}
jdg_80075
stackexchange
llm_judgeable_groundtruth_similarity
10047
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to this page , Dijkstra's algorithm is just BFS with a priority queue. Is it really that simple? I think not. Now provide the response and nothing else.
You can implement Dijkstra's algorithm as BFS with a priority queue (though it's not the only implementation). Dijkstra's algorithm relies on the property that the shortest path from $s$ to $t$ is also the shortest path to any of the vertices along the path. This is exactly what BFS does. Or in another perspective: how would Dijkstra's algorithm behave if all the weights were 1? Exactly like BFS.
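Concretely: on a graph whose edges all have weight 1, a heap-based Dijkstra pops vertices in exactly the order BFS dequeues them, so the two produce identical distances. A small sketch on a toy graph of my own:

```python
import heapq
from collections import deque

graph = {  # unweighted adjacency list; Dijkstra treats every edge as weight 1
    'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': ['e'], 'e': [],
}

def bfs_dist(src):
    """Shortest hop-counts from src via plain FIFO breadth-first search."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def dijkstra_dist(src):
    """Dijkstra with a priority queue; all edge weights fixed at 1."""
    dist, pq = {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in dist:
            continue  # already settled with a shorter (or equal) distance
        dist[u] = d
        for v in graph[u]:
            if v not in dist:
                heapq.heappush(pq, (d + 1, v))
    return dist

assert bfs_dist('a') == dijkstra_dist('a')
print(dijkstra_dist('a'))  # {'a': 0, 'b': 1, 'c': 1, 'd': 2, 'e': 3}
```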
{}
{'log_upvote_score': 6, 'links': ['https://cs.stackexchange.com/questions/10047', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/7007/']}
jdg_80076
stackexchange
llm_judgeable_groundtruth_similarity
42974735
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create an object from a list inside an array. I have an array which is dynamic and supposed to look like this: var dynamicArray = ["2007", "2008", "2009", "2010"]; And I want to make an object like this with some JavaScript ES6: const obj = { 2007: { x: width / 5, y: height / 2 }, 2008: { x: (2 / 5) * width, y: height / 2 }, 2009: { x: (3 / 5) * width, y: height / 2 }, 2010: { x: (4 / 5) * width, y: height / 2 }} Don't worry about the inner objects. I just want to create a structure like this: obj = { 2007: ..., 2008: ..., ... } Please help, thanks. Now provide the response and nothing else.
Simply const obj = {}; for (const key of yourArray) { obj[key] = whatever; } or if you prefer "functional" style: const obj = yourArray.reduce((o, key) => Object.assign(o, {[key]: whatever}), {}); using the modern object spread operator: const obj = yourArray.reduce((o, key) => ({ ...o, [key]: whatever}), {}) Example: console.log([ { id: 10, color: "red" }, { id: 20, color: "blue" }, { id: 30, color: "green" }].reduce((acc, cur) => ({ ...acc, [cur.color]: cur.id }), {})); Here is how it works: reduce is initialized with an empty object (empty {} at the end), therefore first iteration variables are acc = {} cur = { id: 10, color: "red" } . Function returns an object - this is why function body is wrapped in parentheses => ({ ... }) . Spread operator doesn't do anything on the first iteration, so red: 10 is set as first item. On the second iteration variables are acc = { red: 10 } cur = { id: 20, color: "blue" } . Here the spread operator expands acc and the function returns { red: 10, blue: 20 } . Third iteration acc = { red: 10, blue: 20 } cur = { id: 30, color: "green" } , so when acc is spread inside the object, our function returns the final value.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/42974735', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7756527/']}
jdg_80077
stackexchange
llm_judgeable_groundtruth_similarity
34534152
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this function: class MiniMaxCompetition(MiniMaxWithAlphaBetaPruningWithRandomWQuiescence):def __init__(self, utility, my_color, no_more_time, w, treshold_quiet, reasonable_depth_value, b_quiescence): MiniMaxWithAlphaBetaPruningWithRandomWQuiescence.__init__(self, utility, my_color, no_more_time, w, treshold_quiet, reasonable_depth_value) self.b_quiescence = b_quiescencedef quiescence_search(self, state, depth, alpha, beta, maximizing_player, utility_father): return MiniMaxWithAlphaBetaPruningWithRandomWQuiescence.quiescence_search(self, state, depth, alpha, beta, maximizing_player, utility_father)def search(self, state, depth, alpha, beta, maximizing_player, utility_father): """Start the MiniMax algorithm. :param state: The state to start from. :param depth: The maximum allowed depth for the algorithm. :param alpha: The alpha of the alpha-beta pruning. :param alpha: The beta of the alpha-beta pruning. :param maximizing_player: Whether this is a max node (True) or a min node (False). :return: A tuple: (The alpha-beta algorithm value, The move in case of max node or None in min mode) """ if (self.b_quiescence): return MiniMaxWithAlphaBetaPruningWithRandomWQuiescence.search(self, state, depth, alpha, beta, maximizing_player, utility_father) else: return MiniMaxWithAlphaBetaPruningWithRandomW.search(self, state, depth, alpha, beta, maximizing_player) when I run I get into second scenario of method search, and it calls: class MiniMaxWithAlphaBetaPruningWithRandomW(MiniMaxWithAlphaBetaPruning):def __init__(self, utility, my_color, no_more_time, w): MiniMaxWithAlphaBetaPruning.__init__(self, utility, my_color, no_more_time) self.w = wdef search(self, state, depth, alpha, beta, maximizing_player): """Start the MiniMax algorithm. :param state: The state to start from. :param depth: The maximum allowed depth for the algorithm. 
:param alpha: The alpha of the alpha-beta pruning. :param alpha: The beta of the alpha-beta pruning. :param maximizing_player: Whether this is a max node (True) or a min node (False). :return: A tuple: (The alpha-beta algorithm value, The move in case of max node or None in min mode) """ if (self.w == 1): return MiniMaxWithAlphaBetaPruning.search(self, state, depth, alpha, beta, maximizing_player) elif ((self.w < 1) and (self.w > 0)): if depth == 0 or self.no_more_time(): return self.utility(state), None next_moves = state.legalMoves() if not next_moves: # This player has no moves. So the previous player is the winner. return INFINITY if state.currPlayer != self.my_color else -INFINITY, None num_moves_to_choose = int(len(next_moves) * self.w) random_idxs = random.sample(range(len(next_moves)), num_moves_to_choose) if maximizing_player: selected_move = next_moves[0] best_move_utility = -INFINITY for i in random_idxs: new_state = copy.deepcopy(state) new_state.doMove(next_moves[i]) minimax_value, _ = self.search(new_state, depth - 1, alpha, beta, False) alpha = max(alpha, minimax_value) if minimax_value > best_move_utility: best_move_utility = minimax_value selected_move = next_moves[i] if beta <= alpha or self.no_more_time(): break return alpha, selected_move else: for i in random_idxs: new_state = copy.deepcopy(state) new_state.doMove(next_moves[i]) beta = min(beta, self.search(new_state, depth - 1, alpha, beta, True)[0]) if beta <= alpha or self.no_more_time(): break return beta, None else: print('Error w is wrong value - {}'.format(self.w)) However, when I try to run i get this error: Exception in thread Thread-4:Traceback (most recent call last): File "C:\Python34\lib\threading.py", line 921, in _bootstrap_inner self.run() File "C:\Python34\lib\threading.py", line 869, in run self._target(*self._args, **self._kwargs) File "C:\Users\Eli\workspace\HW2\amazons\utils.py", line 36, in function_wrapper result = func(*args, **kwargs) File 
"C:\Users\Eli\workspace\HW2\amazons\MiniMaxCompetition.py", line 34, in search return MiniMaxWithAlphaBetaPruningWithRandomW.search(self, state, depth, alpha, beta, maximizing_player) File "C:\Users\Eli\workspace\HW2\amazons\MiniMaxWithAlphaBetaPruningWithRandomW.py", line 45, in search minimax_value, _ = self.search(new_state, depth - 1, alpha, beta, False)TypeError: search() missing 1 required positional argument: 'utility_father' It seems when it tries to make the recursion it confuses between the method search. Why is this happening? How do I avoid it? Thanks in advance Now provide the response and nothing else.
-------------Following is applicable only to Vue 1 -------------- Passing data can be done in multiple ways. The method depends on the type of use. If you want to pass data from your HTML while you add a new component, that is done using props. <my-component prop-name="value"></my-component> This prop value will be available to your component only if you add the prop name prop-name to your props attribute. When data is passed from one component to another because of some dynamic or static event, that is done using event dispatchers and broadcasters. So for example if you have a component structure like this: <my-parent> <my-child-A></my-child-A> <my-child-B></my-child-B></my-parent> And you want to send data from <my-child-A> to <my-child-B> then in <my-child-A> you will have to dispatch an event: this.$dispatch('event_name', data); This event will travel all the way up the parent chain. And from whichever parent you have a branch toward <my-child-B> you broadcast the event along with the data. So in the parent: events:{ 'event_name' : function(data){ this.$broadcast('event_name', data); },}, Now this broadcast will travel down the child chain. And at whichever child you want to grab the event, in our case <my-child-B> , we will add another event: events: { 'event_name' : function(data){ // Your code. },}, The third way to pass data is through parameters in v-links. This method is used when component chains are completely destroyed or in cases when the URI changes. And I can see you already understand them. Decide what type of data communication you want, and choose appropriately.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/34534152', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5565000/']}
jdg_80078
stackexchange
llm_judgeable_groundtruth_similarity
12322
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: An ideal $\mathfrak{a}$ is called irreducible if $\mathfrak{a} = \mathfrak{b} \cap \mathfrak{c}$ implies $\mathfrak{a} = \mathfrak{b}$ or $\mathfrak{a} = \mathfrak{c}$. Atiyah-MacDonald Lemma 7.11 says that in a Noetherian ring, every ideal is a finite intersection of irreducible ideals. Exercise 7.19 is about the uniqueness of such a decomposition. 7.19. Let $\mathfrak{a}$ be an ideal in a noetherian ring. Let $$\mathfrak{a} = \cap_{i=1}^r \mathfrak{b}_i = \cap_{j=1}^s \mathfrak{c}_j$$ be two minimal decompositions of $\mathfrak{a}$ as an intersection of irreducible ideals. [I assume minimal means that none of the ideals can be omitted from the intersection.] Prove that $r = s$ and that (possibly after reindexing) $\sqrt{\mathfrak{b}_i} = \sqrt{\mathfrak{c}_i}$ for all $i$. Comments: It's true that every irreducible ideal in a Noetherian ring is primary (Lemma 7.12), but I don't think our result follows from the analogous statement about primary decomposition. For example, here is Example 8.6 from Hassett's $\textit{Introduction to Algebraic Geometry}$. 8.6 Consider $I = (x^2, xy, y^2) \subset k[x,y]$. We have $$I = (y, x^2) \cap (y^2, x) = (y+x, x^2) \cap (x, (y+x)^2),$$ and all these ideals (other than $I$) are irreducible. If my interpretation of "minimal" is correct, then this is a minimal decomposition using irreducible ideals, but it is not a minimal primary decomposition, because the radicals are not distinct: they all equal $(x,y)$. There is a hint in the textbook: Show that for each $i = 1, \ldots, r$, there exists $j$ such that $$\mathfrak{a} = \mathfrak{b}_1 \cap \cdots \cap \mathfrak{b}_{i-1} \cap \mathfrak{c}_j \cap \mathfrak{b}_{i+1} \cap \cdots \cap \mathfrak{b}_r.$$ I was not able to prove the hint. I promise this exercise is not from my homework. Update. 
There doesn't seem to be much interest in my exercise. I've looked at various solution sets on the internet, and I believe they all make the mistake of assuming that a minimal irreducible decomposition is a minimal primary decomposition. Does anyone know of a reference which discusses irreducible ideals? Some google searches have produced Hassett's book that I mention above and not much else. Now provide the response and nothing else.
Here is a solution to the hint: First of all, note that since all the ideals in question contain $\mathfrak a$, we may replace $A$ by $A/\mathfrak a$, and so assume that $\mathfrak a = 0$; this simplifies the notation somewhat. Next, the condition that $\mathfrak b_1 \cap \cdots \cap \mathfrak b_r = 0$ is equivalent to the requirement that the natural map $A \to A/\mathfrak b_1 \times \cdots \times A/\mathfrak b_r$ (the product of the natural quotient maps) is injective, while the condition that $\mathfrak b_i$ is irreducible is equivalent to the statement that if $I$ and $J$ are non-zero ideals in $A/\mathfrak b_i$, then $I \cap J \neq 0$ also. Now suppose given our two irreducible decompositions of $0$. Choose $i$ as in the hint, and set $I_j := \mathfrak b_1 \cap \cdots \cap \mathfrak b_{i-1} \cap \mathfrak c_j \cap \mathfrak b_{i+1} \cap \cdots \cap \mathfrak b_r,$ for each $j = 1,\ldots,s$. Then $I_1 \cap \cdots \cap I_s = 0$ (since it is contained in the intersection of the $\mathfrak c_j$, which already vanishes). Now we recall that $A$ embeds into the product of the $A/\mathfrak b_{i'}$. Note that $I_j$ is contained in $\mathfrak b_{i'}$ for $i' \neq i$. Thus, if we let $J_j$ denote the image of $I_j$ in $A/\mathfrak b_i$, then we see that the image of $I_j$ under the embedding $A \hookrightarrow A/\mathfrak b_1 \times \cdots \times A/\mathfrak b_i \times \cdots \times A/\mathfrak b_r$ is equal to $0 \times \cdots \times J_j \times \cdots \times 0$. Thus the intersection of the images of the $I_j$, which is the image of the intersection of the $I_j$ (since we are looking at images under an embedding), is equal to $0 \times \cdots \times (\bigcap J_j) \times \cdots \times 0.$ Thus, since the intersection of the $I_j$ is equal to $0$, we see that $\bigcap J_j = 0.$ But $\mathfrak b_i$ is irreducible, and so one of the $J_j = 0.$ Equivalently, the corresponding $I_j = 0.$ This proves the hint. (I think the exercise should be a fairly easy deduction from the hint. The statement that $r = s$ at least follows directly, using the hint together with minimality of the two decompositions.)
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/12322', 'https://mathoverflow.net', 'https://mathoverflow.net/users/71/']}
jdg_80079
stackexchange
llm_judgeable_groundtruth_similarity
120307
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As the title suggests, is this possible? Or does it halt execution when it hits the trap state? Thanks to anyone who can clear this up. Now provide the response and nothing else.
A deterministic finite automaton can only go into an infinite loop if the input string is infinite. For finite inputs, the automaton stops when the input string ends. For infinite inputs, for example, the automaton for the regex 0*1 will loop forever if the input string is an infinite sequence of 0s.
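A short Python sketch (a hypothetical encoding of the 0*1 automaton mentioned in the answer, with an explicit trap state) makes the point explicit: simulating a DFA performs exactly one transition per input symbol, so on finite input it always halts after |input| steps, even when it has entered the trap state.

```python
def run_dfa(delta, start, accepting, s):
    # One transition per symbol: the loop body runs exactly len(s) times,
    # so it terminates for every finite input, trap state or not.
    state = start
    for ch in s:
        state = delta[(state, ch)]
    return state in accepting

# DFA for 0*1:  q0 --0--> q0,  q0 --1--> q1,  q1 --any--> trap
delta = {
    ("q0", "0"): "q0", ("q0", "1"): "q1",
    ("q1", "0"): "trap", ("q1", "1"): "trap",
    ("trap", "0"): "trap", ("trap", "1"): "trap",
}
```

Feeding it "00011" walks through the trap state for the last symbol and still stops after five steps; it simply rejects.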
{}
{'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/120307', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/115197/']}
jdg_80080
stackexchange
llm_judgeable_groundtruth_similarity
16406329
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a problem with counting distinct values for each key in Python. I have a dictionary d like [{"abc":"movies"}, {"abc": "sports"}, {"abc": "music"}, {"xyz": "music"}, {"pqr":"music"}, {"pqr":"movies"},{"pqr":"sports"}, {"pqr":"news"}, {"pqr":"sports"}] I need to print number of distinct values per each key individually. That means I would want to print abc 3xyz 1pqr 4 Please help. Thank you Now provide the response and nothing else.
Over 6 years after answering, someone pointed out to me I misread the question. While my original answer (below) counts unique keys in the input sequence, you actually have a different count-distinct problem ; you want to count values per key . To count unique values per key, exactly , you'd have to collect those values into sets first: values_per_key = {}for d in iterable_of_dicts: for k, v in d.items(): values_per_key.setdefault(k, set()).add(v)counts = {k: len(v) for k, v in values_per_key.items()} which for your input, produces: >>> values_per_key = {}>>> for d in iterable_of_dicts:... for k, v in d.items():... values_per_key.setdefault(k, set()).add(v)...>>> counts = {k: len(v) for k, v in values_per_key.items()}>>> counts{'abc': 3, 'xyz': 1, 'pqr': 4} We can still wrap that object in a Counter() instance if you want to make use of the additional functionality this class offers, see below: >>> from collections import Counter>>> Counter(counts)Counter({'pqr': 4, 'abc': 3, 'xyz': 1}) The downside is that if your input iterable is very large the above approach can require a lot of memory. In case you don't need exact counts, e.g. when orders of magnitude suffice, there are other approaches, such as a hyperloglog structure or other algorithms that 'sketch out' a count for the stream. This approach requires you install a 3rd-party library. As an example, the datasketch project offers both HyperLogLog and MinHash . Here's a HLL example (using the HyperLogLogPlusPlus class, which is a recent improvement to the HLL approach): from collections import defaultdictfrom datasketch import HyperLogLogPlusPluscounts = defaultdict(HyperLogLogPlusPlus)for d in iterable_of_dicts: for k, v in d.items(): counts[k].update(v.encode('utf8')) In a distributed setup, you could use Redis to manage the HLL counts . 
My original answer: Use a collections.Counter() instance, together with some chaining: from collections import Counterfrom itertools import chaincounts = Counter(chain.from_iterable(e.keys() for e in d)) This ensures that dictionaries with more than one key in your input list are counted correctly. Demo: >>> from collections import Counter>>> from itertools import chain>>> d = [{"abc":"movies"}, {"abc": "sports"}, {"abc": "music"}, {"xyz": "music"}, {"pqr":"music"}, {"pqr":"movies"},{"pqr":"sports"}, {"pqr":"news"}, {"pqr":"sports"}]>>> Counter(chain.from_iterable(e.keys() for e in d))Counter({'pqr': 5, 'abc': 3, 'xyz': 1}) or with multiple keys in the input dictionaries: >>> d = [{"abc":"movies", 'xyz': 'music', 'pqr': 'music'}, {"abc": "sports", 'pqr': 'movies'}, {"abc": "music", 'pqr': 'sports'}, {"pqr":"news"}, {"pqr":"sports"}]>>> Counter(chain.from_iterable(e.keys() for e in d))Counter({'pqr': 5, 'abc': 3, 'xyz': 1}) A Counter() has additional, helpful functionality, such as the .most_common() method that lists elements and their counts in reverse sorted order: for key, count in counts.most_common(): print '{}: {}'.format(key, count)# prints# pqr: 5# abc: 3# xyz: 1
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16406329', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1189851/']}
jdg_80081
stackexchange
llm_judgeable_groundtruth_similarity
645038
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the following circuit, I do not know why all the currents and voltages are shown zero? My simulation just contains the "DC" simulation. After performing DC simulation, all the node voltages are zero and specially the nodes connecting to the supply that definitely have a nonzero voltage. Does not matter how to choose the size of transistors. With and without the resistor, nothing change. No need to mention that I had changed the cell and library already, but nothing changed. Now provide the response and nothing else.
Now, for basic purposes, almost definitely yes. Don't worry about it. Leave a few hundred ns dead time and it's probably fine. This can be generated by many MCUs' timers, if you're going for digital control, or say from a common logic drive signal by delaying the rising edge with a timer network (series resistor, shunt capacitor and diode), then sharpening it with a Schmitt trigger (usually also a gate driver; a 74HC family gate will be sufficient drive for transistors of this size, at 5V supply). (Schematic created using CircuitLab.) Devil's advocate: no. But maybe. It depends. There are edge cases where you might not. The most common case, I think, is where delaying turn-on would cause unwanted forward-bias of the body diode or whatever. This occurs when the load is inductive (as in most switching converters), and when synchronous rectification is being used (hence the MOSFET at all). Which appears to be applicable in the second circuit (half-bridge feeding an LC filter). This is unwanted because the diode has reverse recovery, drawing excess current very briefly (about t_rr), increasing power dissipation and ringing (because the Q1-Q2-Cin loop necessarily has stray inductance, proportional to the length of that loop; or in the push-pull (transformer coupled, first circuit) case, strays include leakage, which is to say, winding length of the primary halves in the transformer). This situation can be avoided by setting dead time very close to zero, or even slightly negative. The downside is that shoot-through can draw a lot of current too. But we know precisely what that current is, because it rises at \$\frac{dI}{dt} = \frac{V_\textrm{IN}}{L_s}\$ , so we can set the stray inductance \$L_s\$ (including artificially increasing this inductance -- there can indeed be reason to do this!) such that, over the expected range of timing variation, current stays within adequate limits (say, a certain fraction of nominal maximum load current?).
The energy stored in that inductance then gets dissipated in a snubber, which can be a simple RC, or a rate (dV/dt) or peak-clamp RCD type. Basically, the insight is that a diode acts as a short circuit until it turns off, so what you intended as dead-time can actually result in shoot-through anyway because of the diodes. In that case, it is better to shoot through the MOSFETs (channel current) with timing controlled by the circuit -- if and when the timing can be controlled more precisely than the recovery time. This applies not just to DC converters, but AC as well, as a class D amplifier (say for mains power, audio output, etc.) also likely has to deal with reactive loads, i.e. sometimes drawing negative current through the top or bottom transistor, activating either body diode from time to time. The shorter or nonexistent dead time helps in those cases, too. Anyway, to be clear, this is also not something to trifle with. Note the constraint: if stray inductance is nonexistent (in practice, this means a few to tens of ~nH, requiring timing precision of maybe a few ns), then any overlap will quickly draw massive amounts of current through the transistors, probably destroying them. Hence the first half of this answer!
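To put rough numbers on that trade-off, here is a back-of-envelope Python calculation (all values hypothetical, not from the answer: a 12 V bus, 50 nH of stray loop inductance, 10 ns of gate-timing overlap). The current through the loop ramps at V_IN divided by L_s, so the peak shoot-through current is just that slope times the overlap time:

```python
V_IN = 12.0          # bridge supply voltage, volts (assumed)
L_S = 50e-9          # stray loop inductance, henries (assumed)
T_OVERLAP = 10e-9    # gate-timing overlap, seconds (assumed)

di_dt = V_IN / L_S            # current slew rate during overlap, A/s
i_peak = di_dt * T_OVERLAP    # peak shoot-through current, A

print(f"dI/dt = {di_dt / 1e6:.0f} A/us")
print(f"peak overlap current = {i_peak:.1f} A")
```

With these assumed values the slope is 240 A/µs and the peak is 2.4 A, which illustrates why nanohenry-scale stray inductance demands nanosecond-scale timing precision, or deliberately added inductance plus a snubber.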
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/645038', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/315631/']}
jdg_80082
stackexchange
llm_judgeable_groundtruth_similarity
28458160
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to give an optional generic parameter a default value? I'm trying to do something like this: func addChannel<T>(name: String, data: T? = nil) -> Channel {}let myChannel = addChannel("myChannelName") But I'm getting an error saying Argument for generic parameter 'T' could not be inferred Is it just a case of what I'm trying to do being impossible? Now provide the response and nothing else.
It's impossible in the way you've done it. Given just the code above, what type is T ? The compiler, as it says, can't figure it out (neither can I, and I assume you couldn't either because the data's not there). The solution to the specific question is to overload rather than use defaults: func addChannel<T>(name: String, data: T?) -> Channel { ... }func addChannel(name: String) -> Channel { ... }let myChannel = addChannel("myChannelName") But it raises the question of what you're doing here. You would think that Channel should be Channel<T> . Otherwise, what are you doing with data ? Without resorting to Any (which you should strongly avoid), it's hard to see how your function can do anything but ignore data . With Channel<T> you can just use a default, but you'd have to provide the type: func addChannel<T>(name: String, data: T? = nil) -> Channel<T> { ... }let myChannel: Channel<Int> = addChannel("myChannelName") Otherwise the compiler wouldn't know what kind of channel you're making. (UPDATE ~ Swift 5.2) Sometimes you'd like a default type for T . You can do that with an overload. For example, you might want the default type to be Never . In that case, you would add an overload like this: func addChannel<T>(name: String, data: T? = nil) -> Channel<T> { ... }func addChannel(name: String) -> Channel<Never> { addChannel(name: name, data: Optional<Never>.none)} With that, you can have a simpler call: let myChannel = addChannel(name: "myChannelName") // Channel<Never>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28458160', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2041080/']}
jdg_80083
stackexchange
llm_judgeable_groundtruth_similarity
2449
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a 3 site network: one is a data center colo and the other two are offices. The new Metro Ethernet service we just installed requires the colo site's traffic destined to the offices to be in VLAN 10 or 20 depending on the office. Unfortunately, they picked the VLANs without my input and they overlap with the VLANs that I am currently using. VLAN 10 is being used in the colo and vlan 20 is being used in one of the offices. Is there anything I can do to get this working without having to renumber my VLANs and without making the carrier renumber theirs? From what I've read, my switches don't appear to support transaltional VLANs. The colo site has a Cisco Catalyst 3560-X switch with ipservices iOS 15.0(2)SE and the offices are running 3560/3750's with ipbase iOS 12.2.something. Now provide the response and nothing else.
Depending on the actual "Metro Ethernet" service that your carrier is providing, you have several possible solutions. I'll address what I see as the most likely scenario, and some of the solutions in that scenario. Your carrier is probably using Q-in-Q tagging, and your local VLANs are irrelevant. (See the Wikipedia page on 802.1ad for info on Q-in-Q, or this Cisco config guide on VLAN tunneling.) This situation, where the carrier is using Q-in-Q, is usually the case in my experience. They will accept whatever VLANs you send, and then apply Q-in-Q tagging and send the traffic across their network. So inside the carrier network, your traffic destined towards Site-A could be tagged with VLAN 10. When the frame arrives at the PE equipment, it will have that additional VLAN tag stripped, and be forwarded onto your equipment with the original VLAN tagging intact. It is possible that the carrier is utilizing your applied VLAN tags to direct the traffic. (i.e. VLAN 10 for Site-A and VLAN 20 for Site-B.) The easiest solution: Tell your carrier that they have to choose different VLANs for this traffic engineering purpose. You are the customer!! Their sales-engineers should have gathered the appropriate information to make sure there wasn't overlap before designing this solution/service for you. Don't accept the circuits until they resolve their issue. IF they are using Q-in-Q, they only need to know which VLAN goes to which location for administrative purposes, not for any technical reason, and should be able to change their configuration. More complicated solution: Investigate Q-in-Q tagging/VLAN tunneling for yourself. Depending on your hardware/licensed capabilities, you could maintain your locally significant VLAN tags, and then slap another tag on the frame for the carrier. Then when the frame arrives at your destination, strip the extra tag off, then send the frame on its way based on the original VLAN.
With all of that stated, there may be some other scenario where they HAVE to use VLANs 10 and 20. Ask your carrier for the explanation as to why this is the case. If your carrier is difficult to work with in this scenario (won't provide an explanation, or won't work around your local VLAN structure), imagine what they'll be like during a service outage. Always use the install process to test your service provider! If customer service isn't on their radar, you should be leery of their services. That is to say, if they perform poorly on the install, you usually have more of the same "quality service" to look forward to for the length of your entire contract.
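To illustrate what the stacked tags look like on the wire, here is a small Python sketch (hypothetical VLAN IDs; PCP/DEI bits left at zero) that packs an 802.1ad outer S-tag around an ordinary 802.1Q C-tag. The point of Q-in-Q is visible in the bytes: the customer's original tag rides inside, untouched.

```python
import struct

def qinq_tags(outer_vlan: int, inner_vlan: int) -> bytes:
    """Build the stacked tag fields: S-tag (TPID 0x88A8) then C-tag (TPID 0x8100).

    Each tag is a 16-bit TPID followed by a 16-bit TCI
    (PCP 3 bits | DEI 1 bit | VLAN ID 12 bits); PCP/DEI are 0 here.
    """
    s_tag = struct.pack(">HH", 0x88A8, outer_vlan & 0x0FFF)
    c_tag = struct.pack(">HH", 0x8100, inner_vlan & 0x0FFF)
    return s_tag + c_tag

# Carrier pushes VLAN 10 (toward Site-A) around the customer's own VLAN 100:
tags = qinq_tags(outer_vlan=10, inner_vlan=100)
```

When the frame reaches the far-end PE, the carrier pops the first four bytes (the S-tag) and the remaining C-tag is exactly what the customer sent.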
{}
{'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/2449', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/2128/']}
jdg_80084
stackexchange
llm_judgeable_groundtruth_similarity
30905488
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: $lister_query = " SELECT * FROM (SELECT lister1_password lister_password, lister1_status lister_status, lister1_email lister_email FROM lister1 UNION SELECT lister2_password lister_password, lister2_status lister_status, lister2_email lister_email FROM lister2 UNION SELECT lister3_password lister_password, lister3_status lister_status, lister3_email lister_email FROM lister3) AS lister WHERE lister.lister_password = ? AND lister.lister_email = ?"; $lister_prepare = $database_listers -> prepare($lister_query);$lister_prepare -> execute($_EXECUTE);$_LISTER_COUNT = $lister_prepare -> rowCount();if ($_LISTER_COUNT === 1) { $_LISTER = $lister_prepare -> fetch(); $lister_status = $_LISTER['lister_status']; $lister_key = $_LISTER['lister_key'];} How can I find the table that the email & password set was found. I need to know if it was lister1, lister2 or lister3 that was found. Is this possible? SOLUTION (currently not working) $lister_query = " SELECT * FROM (SELECT lister1_password lister_password, lister1_status lister_status, lister1_email lister_email, 'test1' test FROM lister1 UNION SELECT lister2_password lister_password, lister2_status lister_status, lister2_email lister_email, 'test2' test FROM lister2 UNION SELECT lister3_password lister_password, lister3_status lister_status, lister3_email lister_email, 'test3' test FROM lister3) AS lister WHERE lister.lister_password = ? AND lister.lister_email = ? "; $lister_prepare = $database_listers -> prepare($lister_query); $lister_prepare -> execute($_EXECUTE); $_LISTER_COUNT = $lister_prepare -> rowCount(); if ($_LISTER_COUNT === 1) { $_LISTER = $lister_prepare -> fetch(); echo $_LISTER['test']; $lister_status = $_LISTER['lister_status']; $lister_key = $_LISTER['lister_key']; } I added the solutions talked about. 
It is not echoing out the test variable and non of the other fetched variables are working now from before. Now provide the response and nothing else.
There are basically two options here: Pass the smart pointer by reference void MyFunc(unique_ptr<A> & arg){ cout << arg->GetVal() << endl;}int main(int argc, char* argv[]){ unique_ptr<A> ptr = unique_ptr<A>(new A(1234)); MyFunc(ptr);} Move the smart pointer into the function argument Note that in this case, the assertion will hold! void MyFunc(unique_ptr<A> arg){ cout << arg->GetVal() << endl;}int main(int argc, char* argv[]){ unique_ptr<A> ptr = unique_ptr<A>(new A(1234)); MyFunc(move(ptr)); assert(ptr == nullptr);}
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/30905488', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2024493/']}
jdg_80085
stackexchange
llm_judgeable_groundtruth_similarity
1424276
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a listView with custom objects defined by the xml-layout below. I want the textView with id "info" to be ellipsized on a single line, and I've tried using the attributes android:singleLine="true"android:ellipsize="end" without success. If I set the layout_width to a fixed width like e.g. android:layout_width="100px" the text is truncated fine. But for portability reasons this is not an acceptable solution. Can you spot the problem? <?xml version="1.0" encoding="utf-8"?><LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"android:orientation="vertical"android:layout_width="fill_parent"android:layout_height="wrap_content"android:paddingBottom="5px"><TextView android:id="@+id/destination"android:layout_width="fill_parent" android:layout_height="wrap_content"android:textSize="22dp"android:paddingLeft="5px"/><TextView android:id="@+id/date"android:layout_width="fill_parent" android:layout_height="wrap_content"android:textSize="15dp"android:paddingLeft="5px"/><TableLayout xmlns:android="http://schemas.android.com/apk/res/android"android:id="@+id/info_table"android:layout_width="fill_parent"android:layout_height="fill_parent"android:paddingLeft="5px"android:paddingTop="10px" > <TableRow> <TextView android:id="@+id/driver_label" android:gravity="right" android:paddingRight="5px" android:text="@string/driver_label" /> <TextView android:id="@+id/driver" /> </TableRow> <TableRow> <TextView android:id="@+id/passenger_label" android:gravity="right" android:paddingRight="5px" android:text="@string/passenger_label" /> <TextView android:id="@+id/passengers" /> </TableRow> <TableRow> <TextView android:id="@+id/info_label" android:gravity="right" android:paddingRight="5px" android:text="@string/info_label"/> <TextView android:id="@+id/info" android:layout_width="fill_parent" android:singleLine="true" 
android:ellipsize="end" /> </TableRow></TableLayout> Now provide the response and nothing else.
Ellipsize is broken (go vote on the bug report , especially since they claim it's not reproducible) so you have to use a minor hack. Use: android:inputType="text"android:maxLines="1" on anything you want to ellipsize. Also, don't use singleLine , it's been deprecated. UPDATE: On closer inspection, the problem you're having is that your table is extending off the right side of the screen. Changing your TableLayout definition to: <TableLayout android:id="@+id/info_table" android:layout_width="fill_parent" android:layout_height="fill_parent" android:paddingLeft="5px" android:paddingTop="10px" android:shrinkColumns="1"> should fix that problem, then do what I said above to ellipsize your TextView .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1424276', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/91098/']}
jdg_80086
stackexchange
llm_judgeable_groundtruth_similarity
249192
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can you programmatically tell an HTML select to drop down (for example, due to mouseover)? Now provide the response and nothing else.
This used to actually be possible with HTML+Javascript, despite claims everywhere else that it is not, but it was deprecated later on and does not work now. However, this only worked in Chrome. Read more if you're interested. According to W3C Working Draft for HTML5, Section 3.2.5.1.7. Interactive Content : Certain elements in HTML have an activation behavior, which means that the user can activate them. This triggers a sequence of events dependent on the activation mechanism [...] for instance using keyboard or voice input, or through mouse clicks . When the user triggers an element with a defined activation behavior in a manner other than clicking it, the default action of the interaction event must be to run synthetic click activation steps on the element. <select> being an Interactive Content, I believed that it is possible to programmatically display its <option> s. After a few hours of playing around, I discovered that using document.createEvent() and .dispatchEvent() works. That said, demo time. Here is a working Fiddle. HTML: <select id="dropdown"> <option value="Red">Red</option> <option value="Green">Green</option> <option value="Blue">Blue</option></select><br><button id="fire" type="button" onclick="runThis()">Show dropdown items</button> Javascript: // <select> element displays its options on mousedown, not click.showDropdown = function (element) { var event; event = document.createEvent('MouseEvents'); event.initMouseEvent('mousedown', true, true, window); element.dispatchEvent(event);};// This isn't magic.window.runThis = function () { var dropdown = document.getElementById('dropdown'); showDropdown(dropdown);}; If anyone finds a way to do the same but not in Chrome, please feel free to modify this fiddle .
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/249192', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9328/']}
jdg_80087
stackexchange
llm_judgeable_groundtruth_similarity
41860477
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an inline svg and a background image on the masthead. I am using css clip-path to 'clip' out the svg animation with the image below. I have it working great in firefox and chrome but safari doesn't apply any of the clipping/masking at all. I checked caniuse spec's before starting this project and it states the same rules and exceptions that apply to chrome, I just tested with chrome first and it worked so I continued on it figuring safari would have the same treatment. I have been scratching my head trying to figure out how to get the clipping to work properly in safari with no avail. How can I get this to work in safari? Pen for reference: https://codepen.io/H0BB5/pen/Xpawgp HTML <clipPath id="cross"> <rect y="110" x="137" width="90" height="90"/> <rect x="0" y="110" width="90" height="90"/> <rect x="137" y="0" width="90" height="90"/> <rect x="0" y="0" width="90" height="90"/> </clipPath> CSS #clipped { margin-bottom: 20px; clip-path: url(#cross);} Now provide the response and nothing else.
You need the -webkit- prefix. I can confirm your circle and inset options work in Safari after adding the -webkit- prefix to your CSS and JS. CanIUse.com reports partial support for Safari if using the -webkit- prefix: http://caniuse.com/#search=clip-path CSS: #clipped { margin-bottom: 20px; clip-path: url(#cross); -webkit-clip-path: url(#cross);} JS: var clipPathSelect = document.getElementById("clipPath");clipPathSelect.addEventListener("change", function (evt) { document.getElementById("clipped").style.clipPath = evt.target.value; document.getElementById("clipped").style.webkitClipPath = evt.target.value;}); Forked CodePen: https://codepen.io/techsock/pen/JEyqvM Update It appears that this may be an issue with Safari's implementation of clip-path . There is a Master Bug reported regarding webkit issues with clip-path . In JSFiddle, Safari will occasionally render the SVG clip path containing multiple rect elements correctly, but not reliably (see attached screenshots below). There does not appear to be an extremely reliable workaround. It also is noted on the MDN page you pulled this example from: https://developer.mozilla.org/en-US/docs/Web/CSS/clip-path#Browser_compatibility . MDN lists Safari as No Support . JSFiddle behavior screenshots: ✗ Incorrect ✗ Incorrect ✓ Correct
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41860477', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3902421/']}
jdg_80088
stackexchange
llm_judgeable_groundtruth_similarity
24893110
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to use the range operator ... and ..< with if statement. Maye something like this: let statusCode = 204if statusCode in 200 ..< 299 { NSLog("Success")} Now provide the response and nothing else.
You can use the "pattern-match" operator ~= : if 200 ... 299 ~= statusCode { print("success")} Or a switch-statement with an expression pattern (which uses the pattern-match operator internally): switch statusCode {case 200 ... 299: print("success")default: print("failure")} Note that ..< denotes a range that omits the upper value, so you probably want 200 ... 299 or 200 ..< 300 . Additional information: When the above code is compiled in Xcode 6.3 with the optimizations switch on, then for the test if 200 ... 299 ~= statusCode no function call is generated at all, only three assembly instructions: addq $-200, %rdicmpq $99, %rdija LBB0_1 this is exactly the same assembly code that is generated for if statusCode >= 200 && statusCode <= 299 You can verify that with xcrun -sdk macosx swiftc -O -emit-assembly main.swift As of Swift 2, this can be written as if case 200 ... 299 = statusCode { print("success")} using the newly introduced pattern-matching for if-statements. See also Swift 2 - Pattern matching in "if" .
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/24893110', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/302707/']}
jdg_80089
stackexchange
llm_judgeable_groundtruth_similarity
4783096
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What should I use instead of __asm{ db 0EAh dw 0000h dw 0FFFFh} ( http://msdn.microsoft.com/en-us/library/h70hd396.aspx ) in MVC++? Now provide the response and nothing else.
It isn't that it would be impossible to achieve in all cases (I don't think). It would be a monster feature to develop, though. When you've got LINQ syntax in your method, generally that involves some anonymous method either behind-the-scenes: // This LINQ query...var fields = from field in data select field;// ...is equivalent to this:var fields = data.Select(f => f); ...or just flat-out in front of the scenes (as in your example): ( from field in data select field ).Max( f => f.Occurrences ) // <- lambda An anonymous method in turn gets compiled into a type with instance methods to support the code you've written. In the example above, consider the f => f.Occurrences lambda. This gets compiled into a type with a single instance field whose type is that of the local f in that lambda; this type contains a method that returns f.Occurrences . So when the code ultimately enumerates over the result of your LINQ query, what's happening is that an instance of this compiler-generated type is being constructed for every field in data and that type's single method which has been generated to support the f => f.Occurrences lambda expression is being called to calculate Max . The issue with edit-and-continue is that if there's any change to the lambda expressions in the method being edited, this necessitates changing the types generated, which is not an option. One would think this could still be done in the case where nothing is altered about the lambda expressions themselves; as long as the same locals are captured and the anonymous methods are unchanged, it should be feasible to modify a method with these characteristics while debugging just as it is for "normal" methods in VS. But as you can see, the type generation used to support anonymous methods in general and therefore LINQ queries specifically adds a great deal of complexity to the edit-and-continue process, and in many cases makes it impossible (since it requires changing generated types completely). 
I think it was just decided that it wasn't worth the development cost to even bother trying to support this behavior in the limited scenarios where it could hypothetically work.
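The "lambda becomes a type with a field per captured local" idea described above is not unique to C#. As a rough cross-language illustration (Python shown; the parallel to C#'s compiler-generated closure classes is only an analogy, and the names below are invented), a lambda that captures a local keeps that local alive in a cell attached to the function object:

```python
def make_selector():
    f = {"Occurrences": 7}           # plays the role of the captured local `f`
    return lambda: f["Occurrences"]  # the "anonymous method" that captures it

fn = make_selector()
# The captured local survives on the function object itself, the moral
# equivalent of the instance field on C#'s generated closure type.
print(fn.__closure__[0].cell_contents)  # {'Occurrences': 7}
print(fn())                             # 7
```

Editing the lambda so that it captures a different set of locals would change the shape of that backing storage, which is exactly the kind of type change edit-and-continue cannot perform.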
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4783096', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/560008/']}
jdg_80090
stackexchange
llm_judgeable_groundtruth_similarity
3034747
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to make a "standard" install for external use, but I also want to use the same script and tell it (with a command line param perhaps?) to include another set of files (PDB files for debugging) for our lab installations. (And make a totally different install exe) How can I do this? Is it possible? I don't see how to set this in the [Files] section (conditionally add files based on some value/param) Note – this is not for allowing the user an option DURING the install. I want a build-time option to set in my hudson build or batch file. I suppose I can just create a separate installer for the pdbs, but I'd rather just have one file to do everything. Now provide the response and nothing else.
You can simply use #ifdef DebugVersionFile: *.pdb ...#endif and then call the Inno compiler like this: iscc.exe -DDebugVersion ... I'd also add something like this so you get different output file names: #ifdef DebugVersionOutputBaseFileName=mysetup-dbg#elseOutputBaseFileName=mysetup#endif Note that you'll probably need the InnoSetup precompiler for this, which, for some inexplicable reason, is not part of the default InnoSetup package. The easiest way to get it is to get the "Quick Start Pack" from the InnoSetup download page.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3034747', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10755/']}
jdg_80091
stackexchange
llm_judgeable_groundtruth_similarity
61674954
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was learning Flutter and came across two types of design MaterialApp and CupertinoApp. So, I wonder if we want to create an app for both android and ios, should we create one separate app with MaterialApp for android and one separate app with CupertinoApp for ios? Or if we, say, create app with only MaterialApp, will ios version of that app automatically get CupertinoApp? Now provide the response and nothing else.
Material widgets implement the Material design language for iOS, Android, and web. Cupertino widgets implement the current iOS design language based on Apple's Human Interface Guidelines. Why write a Cupertino app? The Material design language was created for any platform, not just Android. When you write a Material app in Flutter, it has the Material look and feel on all devices, even iOS. If you want your app to look like a standard iOS-styled app, then you would use the Cupertino library. You can technically run a Cupertino app on either Android or iOS, but (due to licensing issues) Cupertino won't have the correct fonts on Android. For this reason, use an iOS-specific device when writing a Cupertino app. You'll implement a Cupertino-style shopping app containing three tabs: one for the product list, one for a product search, and one for the shopping cart. If you wish to read more on Cupertino and Material, check the link below: Cupertino and Material You can also use a Flutter package called flutter_platform_widgets to check which platform your app is running on and use specific widgets (either Material or Cupertino). The package link is: flutter_platform_widgets I hope this helps
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61674954', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13345377/']}
jdg_80092
stackexchange
llm_judgeable_groundtruth_similarity
3991131
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to find simple examples where nets are necessary to describe the space instead of sequences. I know for example that if a space is first countable, then convergence can be described by sequences. However, I think that most examples of spaces which are not first countable are usually pathological, at least the ones I thought of. Can anyone give me the simple examples, hopefully used in reality and not just given as counter-examples,such that their properties need nets or filters to be described and can not be described by sequences? Now provide the response and nothing else.
$[0,1]^{\Bbb R}$ is a classic example where nets or filters are needed: it's the set of functions from the reals to $[0,1]$ in the pointwise (product) topology. Such topologies are common in functional analysis. It's compact but not sequentially compact. The Čech-Stone compactification $\beta\Bbb N$ of $\Bbb N$ , an important object in many branches of maths, is another example.
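For concreteness, here is a sketch of the standard argument that $[0,1]^{\Bbb R}$ is compact (by Tychonoff) but not sequentially compact; the witness sequence below is the usual textbook one, stated informally:

```latex
% Consider f_n \in \{0,1\}^{\Bbb R} \subseteq [0,1]^{\Bbb R} given by
f_n(x) = \text{the } n\text{-th binary digit of } x.
% Given any subsequence (f_{n_k}), choose x_0 whose binary digits
% alternate 0,1,0,1,\dots along the positions n_1 < n_2 < \dots; then
f_{n_k}(x_0) =
\begin{cases}
0 & k \text{ odd},\\
1 & k \text{ even},
\end{cases}
% so (f_{n_k}) fails to converge pointwise, i.e. in the product topology.
```

So no subsequence converges, even though every net in the space has a convergent subnet.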
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3991131', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/421273/']}
jdg_80093
stackexchange
llm_judgeable_groundtruth_similarity
7575627
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Npm sounds like a great platform to use within an organization, curious if a private repo is possible, like with Nexus/Maven. Nothing comes up on Google :( Now provide the response and nothing else.
I don't think there is an easy way to do this. A look at the npm documentation tells us that it is possible: Can I run my own private registry? Yes! The easiest way is to replicate the couch database, and use the same (or similar) design doc to implement the APIs. If you set up continuous replication from the official CouchDB, and then set your internal CouchDB as the registry config, then you'll be able to read any published packages, in addition to your private ones, and by default will only publish internally. If you then want to publish a package for the whole world to see, you can simply override the --registry config for that command. There's also an excellent tutorial on how to create a private npm repository in the clock blog. EDIT (2017-02-26): Not really new, but there are now paid plans to host private packages on npm. Over the years, npm has become a factor for many non-Node.js companies, too, through the huge frontend ecosystem that's built upon npm. If your company is already running Sonatype Nexus for hosting Java projects internally, you can also use it for hosting internal npm packages. Other options include JFrog Artifactory and Inedo ProGet, but I haven't used those.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7575627', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/116718/']}
jdg_80094
stackexchange
llm_judgeable_groundtruth_similarity
24553578
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In a lot of Scala examples I see people use curly braces in places I find outright strange, when the same statement could easily be written using parentheses. Example: lst foreach (x => println(s"the value returned is: $x")) // parenslst foreach {x => println(s"you get the idea, $x")} // braces I understand that you can use braces as an alternative to parentheses, simply because it allows you to write a statement on multiple lines: val res = for { x <- coll1 y <- coll2} yield (x, y) So when it's written on a single line, is there any inherent reason to use one over the other? The outcome should be the same in the end, or am I missing something? Or is it simply just a matter of style and/or personal taste? Now provide the response and nothing else.
In general, there are many cases when you would prefer curly braces (e.g. multiline expressions, for comprehensions), but let's talk specifically about when it's written on a single line, is there any inherent reason to use one over the other In the second case it's not just curly braces instead of parentheses, it's curly braces with omitted parentheses. Scala allows you to omit parentheses sometimes, and the latter syntax is used to access the niceties you get in partial functions (namely, pattern matching), so lst foreach {x => println(s"you get the idea, $x")} is actually lst foreach({x => println(s"you get the idea, $x")}) which, as I said, can be useful from a pattern-matching POV: val map = Map("foo" -> "bar")map foreach { case (k, v) => println(s"key: $k, value: $v") }// which is not possible with the usual parentheses
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24553578', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1351298/']}
jdg_80095
stackexchange
llm_judgeable_groundtruth_similarity
62565
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Gravitational change occurs at the speed of light. As a consequence, we experience on Earth the gravitational attraction of the Sun based on its position relative to us ~8 minutes ago. How does this delay affect the geometry of Earth's orbit compared with the classical Newtonian model? Nordvedt postulated that a gravito-magnetic component makes it appear as if the interaction is instantaneous (i.e.) classical Newton with infinite speed of interaction. Was Nordvedt correct? Now provide the response and nothing else.
You seem to have misunderstood some things about the work of Kenneth Nordtvedt (spelled that way, not Nordvedt). He is mainly known for pointing out that in some well-motivated alternative theories of gravity (i.e., not general relativity), the equivalence principle could be violated. Massive, self-gravitating bodies would have slightly anomalous ratios of inertial to gravitational mass. This is known as the Nordtvedt effect. The effect was searched for in lunar laser ranging experiments. The observations did not detect any Nordtvedt effect, and they placed an upper bound on it. This is discussed in section 3.6.1 of the review article Will 2006. In any relativistic theory, an attraction or repulsion from an object at a distance $r$ is not toward the object's current location but toward the position of the object that would have been extrapolated from its state of motion at a time $r/c$ in the past. This is thoroughly verified by experiment and is not controversial. There is a discussion of this in Feynman, section II-26-1. In the sun's frame of reference, this extrapolation has no effect. In any other frame, the earth is seen to accelerate toward the position where the sun would have been extrapolated to be based on its position and motion 8 minutes ago. Nordtvedt did not claim that gravitational interactions propagate instantaneously. General relativity's prediction that gravitational effects propagate at c has been accurately, although indirectly, verified by observations of the Hulse-Taylor binary pulsar. Solar-system tests are not capable of unambiguously testing this feature of GR; see Samuel 2003 and Will 2003. It sounds like you've misinterpreted someone's explanation of the extrapolation idea. An equivalent way of stating the extrapolation idea is that if A's motion is inertial, then the force of A on B acts along the line defined by A's current instantaneous position, not its position retarded by $r/c$. 
This is not the same as saying that the interaction actually propagates instantaneously. None of this is a special, unorthodox theory proposed by Nordtvedt. Carlip 2011 gives a nice explanation. In the electromagnetic case, consistency is achieved because there is both an electric force and a magnetic one. In general relativity, you can talk about a similar type of gravitomagnetic force, and it plays a similar role in the analogous argument. Again, none of this is unorthodox or due to Nordtvedt. Will 2006 - "The Confrontation between General Relativity and Experiment," http://www.livingreviews.org/lrr-2006-3 Feynman, The Feynman Lectures Samuel 2003 - http://arxiv.org/abs/astro-ph/0304006 Will 2003 - http://arxiv.org/abs/astro-ph/0301145 Carlip 2011 - "Does Gravity Travel at the Speed of Light?," http://math.ucr.edu/home/baez/physics/Relativity/GR/grav_speed.html
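As a quick sanity check on the "~8 minutes" figure quoted in the question, it is simply the light travel time over one astronomical unit:

```latex
t = \frac{d_{\text{Sun--Earth}}}{c}
  = \frac{1.496 \times 10^{11}\,\text{m}}{2.998 \times 10^{8}\,\text{m/s}}
  \approx 499\,\text{s} \approx 8.3\,\text{min}
```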
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/62565', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/11633/']}
jdg_80096
stackexchange
llm_judgeable_groundtruth_similarity
25345
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've looked all over and I can't find a good proof of why the diagonals of a rhombus should intersect at right angles. I can intuitively see its true, just by drawing rhombuses, but I'm trying to prove that the slopes of the diagonals are negative reciprocals and its not working out. I'm defining my rhombus as follows: $[(0,0), (a, 0), (b, c), (a+b, c)]$ I've managed to figure out that $c = \sqrt{a^2-b^2}$ and that the slopes of the diagonals are $\frac{\sqrt{a^2-b^2}}{a+b}$ and $\frac{-\sqrt{a^2-b^2}}{a-b}$ What I can't figure out is how they can be negative reciprocals of one another. EDIT: I mean to say that I could not find the algebraic proof. I've seen and understand the geometric proof, but I needed help translating it into coordinate form. Now provide the response and nothing else.
Another way to say that the slopes are opposite reciprocals is to say that their product is $-1$. $$\begin{align}\frac{\sqrt{a^2-b^2}}{a+b}\cdot\frac{-\sqrt{a^2-b^2}}{a-b}&=\frac{-(\sqrt{a^2-b^2})^2}{(a+b)(a-b)}\\&=\frac{-(a^2-b^2)}{a^2-b^2}\\&=-1\end{align}$$
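A quick numeric sanity check of the same identity (any values with $a > b > 0$ will do; $a=5$, $b=3$ below are arbitrary):

```python
import math

a, b = 5.0, 3.0
slope1 = math.sqrt(a**2 - b**2) / (a + b)
slope2 = -math.sqrt(a**2 - b**2) / (a - b)
# Product of the two diagonal slopes should be -1 (perpendicular lines).
print(slope1 * slope2)  # -1.0
```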
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/25345', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/7873/']}
jdg_80097
stackexchange
llm_judgeable_groundtruth_similarity
955369
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In practice, what is the most appropriate term for the communications transmitted over a network in higher level protocols (those above TCP/IP, for example)? Specifically, I am referring to small, binary units of data. I have seen both "message" and "packet" referred to in various client/server libraries, but I was interested in the community's consensus. Now provide the response and nothing else.
These are definitely messages. A "packet" is a layer-3 (in ISO terminology) protocol unit, such as an IP packet; and a "datagram" is a layer-1 or layer-2 unit, such as the several Ethernet datagrams that might make up the fragments of an IP packet. So a message might be split across several packets, particularly if you're using a streaming protocol such as TCP, and a packet might be split across several datagrams.
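To make the layering concrete, here is a hypothetical sketch (the function names and the 8-byte "MTU" are invented purely for illustration) of one application-level message being split into several packet-sized chunks and reassembled on the other side:

```python
def fragment(message: bytes, mtu: int) -> list:
    """Split one application-layer message into packet-sized chunks."""
    return [message[i:i + mtu] for i in range(0, len(message), mtu)]

def reassemble(packets) -> bytes:
    """The receiving side joins the chunks back into the original message."""
    return b"".join(packets)

msg = b"one logical message, many packets"   # 33 bytes
packets = fragment(msg, mtu=8)
print(len(packets))                # 5 chunks for a 33-byte message
print(reassemble(packets) == msg)  # True
```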
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/955369', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41619/']}
jdg_80098
stackexchange
llm_judgeable_groundtruth_similarity
35078383
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am writing RESTful services using spring and hibernate. I read many resource in internet, but they did not clarify my doubts. Please explain me in details what are DAO , DTO and Service layers in spring framework? And why usage of these layers is required in spring to develop RESTfull API services. Now provide the response and nothing else.
First off, these concepts are Platform Agnostic and are not exclusive to Spring Framework or any other framework, for that matter. Data Transfer Object DTO is an object that carries data between processes. When you're working with a remote interface, each call is expensive. As a result you need to reduce the number of calls. The solution is to create a Data Transfer Object that can hold all the data for the call. It needs to be serializable to go across the connection. Usually an assembler is used on the server side to transfer data between the DTO and any domain objects. It's often little more than a bunch of fields and the getters and setters for them. Data Access Object A Data Access Object abstracts and encapsulates all access to the data source. The DAO manages the connection with the data source to obtain and store data. The DAO implements the access mechanism required to work with the data source. The data source could be a persistent store like an RDBMS, or a business service accessed via REST or SOAP. The DAO abstracts the underlying data access implementation for the Service objects to enable transparent access to the data source. The Service also delegates data load and store operations to the DAO. Service Service objects are doing the work that the application needs to do for the domain you're working with. It involves calculations based on inputs and stored data, validation of any data that comes in from the presentation, and figuring out exactly what data source logic to dispatch, depending on commands received from the presentation. A Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations.
Recommended References Martin Fowler has a great book on common Application Architecture Patterns named Patterns of Enterprise Application Architecture . There is also Core J2EE Patterns , which is worth looking at.
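As a language-agnostic sketch of how the three roles divide responsibilities (Python is used here purely for brevity; the class names and the in-memory dict standing in for a real data source are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class UserDto:
    """DTO: a dumb data carrier across a boundary -- fields, nothing else."""
    user_id: int
    name: str

class UserDao:
    """DAO: encapsulates all access to the data source (a dict in this sketch)."""
    def __init__(self):
        self._table = {}

    def save(self, dto):
        self._table[dto.user_id] = dto.name

    def find(self, user_id):
        return UserDto(user_id, self._table[user_id])

class UserService:
    """Service: business rules; delegates persistence to the DAO."""
    def __init__(self, dao):
        self._dao = dao

    def register(self, user_id, name):
        if not name:  # validation belongs in the service layer
            raise ValueError("name must not be empty")
        self._dao.save(UserDto(user_id, name))

dao = UserDao()
service = UserService(dao)
service.register(1, "Ada")
print(dao.find(1))  # UserDto(user_id=1, name='Ada')
```

Swapping the dict-backed DAO for one backed by an RDBMS or a REST service would leave the service untouched, which is the point of the abstraction.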
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/35078383', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5159867/']}
jdg_80099
stackexchange
llm_judgeable_groundtruth_similarity
4827812
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing a HTML5 application that uses "Application Cache with manifest". However, while I'm developing I want to turn it off. I tried removing the manifest attribute from the tag and switching everything to network in the manifest file. Although it did update on the first update, all subsequence updates are still read from the cache rather than the server. I can see that the HTML file has updated and there is no longer a manifest attribute on the tag, yet it still loads from the cache. I can't seem to figure out how to turn it off once you've set it, so here's my question: Is there a way to turn off "Application Cache with manifest" in Google Chrome? Now provide the response and nothing else.
In Chrome, go to Under the bonnet -> Content Settings -> Cookies -> Show cookies and other site data, application caches should show up under the site data. In Firefox go to Advanced -> Network, sites with application caches are listed in a box at the bottom. There's no way to completely remove or expire an application cache from the server side at present. The best you can do is explicitly tell the browser to remove all the cached files - a manifest with just a network section should delete all the other files, but the file with the manifest reference will itself always be cached. --edit This answer is no longer entirely correct. Currently the standard states: If fetching the manifest fails due to a 404 or 410 response or equivalent...Mark cache group as obsolete. This cache group no longer exists for any purpose other than the processing of Document objects already associated with an application cache in the cache group. That is: deleting the manifest file should cause the appcache to be deleted the next time the browser attempts to update.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4827812', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/91874/']}
jdg_80100
stackexchange
llm_judgeable_groundtruth_similarity
257121
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The question is very simple and I apologize for that, but I am not an expert of this kind of problem.Given the polynomial$$ P(x_1,\ldots,x_{2n})=x_1^2+\ldots+x_n^2-x_{n+1}^2-\ldots-x_{2n}^2,$$I would like to know if there are non trivial integer roots $(y_1,\ldots, y_{2n})$ such that$$y_1+\cdots+y_{n}=y_{n+1}+\cdots+ y_{2n}.$$With non trivial I mean the ones like$$y_1=y_{n+1},\ldots,y_{n}=y_{2n},$$or their permutations. Now provide the response and nothing else.
Fix large $N$ and consider all $n$-tuples $(x_1,\dots,x_n)\in \{1,\dots,N\}^n$. There are $N^n$ such $n$-tuples, at least $N^n/n!$ tuples modulo permutations, and for them the pairs $(x_1+\dots+x_n,x_1^2+\dots+x_n^2)$ take at most $n\cdot N\cdot n\cdot N^2=n^2N^3$ possible values. Thus by pigeonhole principle some value is obtained at least $N^{n-3}/(n^2\cdot n!)$ times. This is greater than 1 if $n>3$ and $N$ is chosen large enough.
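The pigeonhole argument above is non-constructive, but a small brute-force search turns up explicit witnesses quickly, in fact already for $n=3$, even though the bound only guarantees one for $n>3$ (the helper below is an illustrative sketch, not part of the proof):

```python
from itertools import combinations_with_replacement

def find_collision(n, N):
    """Two distinct n-multisets from {1..N} with equal sum and sum of squares."""
    seen = {}
    for t in combinations_with_replacement(range(1, N + 1), n):
        key = (sum(t), sum(x * x for x in t))
        if key in seen:
            return seen[key], t  # two different tuples sharing the same key
        seen[key] = t
    return None

a, b = find_collision(3, 7)
print(a, b)  # two distinct triples sharing sum and sum of squares
print(sum(a) == sum(b))  # True
```

One concrete witness is $\{1,5,6\}$ and $\{2,3,7\}$: both sum to $12$, and $1^2+5^2+6^2 = 2^2+3^2+7^2 = 62$.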
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/257121', 'https://mathoverflow.net', 'https://mathoverflow.net/users/45729/']}
jdg_80101
stackexchange
llm_judgeable_groundtruth_similarity
12344016
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a layout that has an EditText and a Button. I <include> it in my main layout. I'm having a weird issue with the layout and rotation. It seems to duplicate itself when the device (physical) is rotated, messing up the text and layout. Here it is on first open, after I add some extra garble: DSC_0013 is in the EditText on launch of the fragment. Then, I rotate the phone and add some different garble: And you can see the issue pretty clearly. At first, I thought it was just the EditText messing up. But if I add enough text to make a new line: I can see that the button gets messed up too. I do override onSaveInstanceState , but in it I don't touch the EditText or its value, it's strictly used for something else. What's happening and how do I fix it? Now provide the response and nothing else.
Fixed it! Turns out it wasn't the view duplicating itself, or the EditText, or the Button. It was the entire fragment. In my Activity's onCreate, I add the fragment to an xml layout:

private FileDetails fileDetailsFragment;

public void onCreate(Bundle savedInstanceState) {
    ...
    FragmentManager fragmentManager = getSupportFragmentManager();
    FragmentTransaction fragmentTransaction = fragmentManager.beginTransaction();
    fileDetailsFragment = new FileDetails(fileData);
    fragmentTransaction.add(R.id.DetailsHolder, fileDetailsFragment);
    fragmentTransaction.commit();
}

And onCreate was being called every time I rotated the phone (as it's meant to). So I put in a check to see if the activity is being run for the first time, and it works great.

private FileDetails fileDetailsFragment;

public void onCreate(Bundle savedInstanceState) {
    ...
    if (savedInstanceState == null) {
        FragmentManager fragmentManager = getSupportFragmentManager();
        FragmentTransaction fragmentTransaction = fragmentManager.beginTransaction();
        fileDetailsFragment = new FileDetails(fileData);
        fragmentTransaction.add(R.id.DetailsHolder, fileDetailsFragment);
        fragmentTransaction.commit();
    } else {
        fileDetailsFragment = (FileDetails) getSupportFragmentManager().findFragmentById(R.id.DetailsHolder);
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12344016', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/919716/']}
jdg_80102
stackexchange
llm_judgeable_groundtruth_similarity
10035796
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Please check out my following code...

public enum LogType
{
    Debug,
    Info,
    Warn,
    Error,
    Fatal
}

private static readonly ILog log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

public void LogError(LogType logtype, string message)
{
    XmlConfigurator.Configure();
    if (logtype == LogType.Debug)
        log.Debug(message);
    else if (logtype == LogType.Error)
        log.Error(message);
}

I do not like all the above if-else statements and believe that there is a cleaner way to write this. How could I refactor it? The log class has different methods for Debug, Error, etc. I would like to make a single call to a method and have it automatically take care of it.

LogMyError(LogType.Debug, "I am just logging here");

How can I do such a thing? I prefer to stay away from a switch statement. I am looking for a clean object oriented approach. Now provide the response and nothing else.
You can use a Dictionary<LogType,Action<string>> to hold what action to perform for each enumeration value, then just call the delegate.

var logActions = new Dictionary<LogType,Action<string>>();
logActions.Add(LogType.Debug, log.Debug);
...
logActions[logtype](message);

Update: This is all overkill if you only have a small number of branches on your if statements. I would use this method for 5+ such ifs.
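For readers outside C#, the same dispatch-table shape can be sketched in Python (purely illustrative; the level names and handlers are made up):

```python
# Map each log level to the callable that handles it, then index the
# table instead of branching -- the same idea as the Dictionary above.
log_actions = {
    "debug": lambda msg: f"DEBUG: {msg}",
    "error": lambda msg: f"ERROR: {msg}",
}

def log_message(level, msg):
    # A KeyError here plays the role of a forgotten if/else branch.
    return log_actions[level](msg)

print(log_message("error", "disk full"))  # ERROR: disk full
```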
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10035796', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/71422/']}
jdg_80103
stackexchange
llm_judgeable_groundtruth_similarity
12300647
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Git repository that I pull using a path similar to this: git pull ssh://[email protected]:1234/path/to/repository.git When GitExtensions then tries to pull from the repository using plink, it makes this call: plink -T [email protected]:/path/to/repository.git This ends up failing because it's actually pinging port #22, and not #1234. The right call to make would be plink -T -P 1234 [email protected]:/path/to/repository.git If I create an alias "hostCom" in my ~/.ssh/config, it seems to work correctly (connecting to #1234) if I just do this: plink -T username@hostCom But as soon as I add the path to the Git repository too, it goes to #22 again. Another option I tried is to create a session in PuTTY. Let's call it "hostPutty", and set the default port there, etc. The bottom line ends up being the same though: Can't combine path to Git repository with a custom port. How do I combine all the parts to make them work? And how come OpenSSH can deal with the problem, but PuTTY can't? The only problem with OpenSSH is that it keeps asking me for the password for the private key on every connection attempt (and I don't want to create a private key without a password). Now provide the response and nothing else.
I got it figured out. Here's what got me down the wrong path:

- Do not create any sessions in PuTTY as outlined here
- Do not use any host aliases you might have defined in file ~/.ssh/config

Here's how you get it working:

- Obviously make sure GitExtensions is set to use PuTTY, not OpenSSH
- Make sure plink.exe (the one available with GitExtensions or specifically installed with PuTTY) is in a path without spaces
- Set your environment variable %GIT_SSH% to point to plink.exe (it might be pointing to ssh.exe by default), and make sure there aren’t apostrophes since the value will be used "as-is" (that's why you can't use spaces as stated above)

echo %GIT_SSH%
C:\tools\PuTTY\plink.exe

- Use the "full" command to pull, without any host aliases that would allow you to not specify the port, etc., as in the example above. If the port is missing, and there's just a colon, Git treats it as a path separator, and ssh treats it as the port delimiter (so they bite each other fairly badly).

git clone ssh://[email protected]:1234/path/to/repository.git

If you do that, Git and plink actually figure things out right! If you don't do it this way, you might run into all kinds of problems like:

- I had %GIT_SSH% pointed to ssh.exe and used a host alias instead of a path incl. the port

ssh.exe": hostPutty:path: no address associated with name
fatal: The remote end hung up unexpectedly

- Used apostrophes in %GIT_SSH%

error: cannot spawn "C:\tools\PuTTY\plink.exe": No such file or directory
fatal: unable to fork
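To see why the full ssh:// form is unambiguous, it helps to parse it: host, port, and path all come out separately. A Python illustration (the username and host below are made up, since the user in the real URL above is redacted):

```python
from urllib.parse import urlsplit

# In the scheme-prefixed form, the colon before 1234 is clearly a port
# delimiter, and everything after the port is clearly a path.
u = urlsplit("ssh://username@host.com:1234/path/to/repository.git")
print(u.username, u.hostname, u.port, u.path)
# username host.com 1234 /path/to/repository.git
```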
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12300647', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1576874/']}
jdg_80104
stackexchange
llm_judgeable_groundtruth_similarity
609026
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm having trouble starting, or destroying a domain:

virsh # list --all
 Id    Name              State
----------------------------------------------------
 1     pxe               running
 -     Centos_6.5_64     shut off
 -     Git Server        shut off

I can't start the domain:

virsh # start Git
error: failed to get domain 'Git'
error: Domain not found: no domain with matching name 'Git'

virsh # start Git Server
error: unexpected data 'Server'

Because of this I'm not able to destroy the domain. How can I assign an ID or otherwise remove this VM? Because it has a space in it, I'm not able to manage it like I am other domains. I know that they are not supposed to have spaces, but then again, end users don't always listen to what they're told :P Thank you for any help you might provide. Now provide the response and nothing else.
Kids, don't try this at home: Libvirt should have rejected the creation of a domain with a space in its name, so someone did something strange to create the domain to begin with. Recovering from this is going to require some hand-editing of files you aren't normally supposed to touch manually. So this is what I would do:

Stop libvirtd (your VMs will continue running).

service libvirtd stop

Find the offending XML file, which should be somewhere in /etc/libvirt/qemu.

# ls /etc/libvirt/qemu
autostart
Centos_6.5_64.xml
Git Server.xml
networks
pxe.xml

Rename the offending file.

# mv /etc/libvirt/qemu/Git\ Server.xml /etc/libvirt/qemu/Git_Server.xml

Edit the name in the offending file.

# vi /etc/libvirt/qemu/Git_Server.xml
<domain type='kvm'>
  <name>Git_Server</name>
...

Restart libvirtd.

service libvirtd start
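The hand-edit in the last step can also be scripted. A minimal sketch (Python's ElementTree; the XML fragment here is invented and far shorter than a real libvirt domain file) that rewrites the <name> element the same way:

```python
import xml.etree.ElementTree as ET

# Run against a copy of the domain XML, with libvirtd stopped and the
# file already renamed to match the new name.
xml = "<domain type='kvm'><name>Git Server</name></domain>"
root = ET.fromstring(xml)
root.find("name").text = root.find("name").text.replace(" ", "_")
out = ET.tostring(root, encoding="unicode")
print(out)  # <domain type="kvm"><name>Git_Server</name></domain>
```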
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/609026', 'https://serverfault.com', 'https://serverfault.com/users/216559/']}
jdg_80105
stackexchange
llm_judgeable_groundtruth_similarity
2370746
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Fix a natural number $n$. Suppose $x_i\in [0,1], 1\leq i\leq n$. What is the sum of the $x_i$ most likely to be? Now provide the response and nothing else.
Let $p$ be the desired probability, and consider the first roll. It is either a $6$, in which case we're done and the sum is even, a $2$ or $4$, in which case we want the sum of the rest of the terms to be even, or a $1,3$, or $5$, in which case we want the sum of the rest to be odd. Thus$$p = \frac{1}{6}+ \frac{1}{3}p+\frac{1}{2}(1-p)$$which simplifies to $p=\frac{4}{7}$.
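For a numerical sanity check, here is a small Python sketch. It verifies that p = 4/7 is indeed the fixed point of the equation above, and simulates the underlying game the recursion describes (roll a fair die until a 6 appears, counting the 6 toward the sum, and ask whether the total is even):

```python
from fractions import Fraction
import random

# Exact check: p = 4/7 satisfies p = 1/6 + (1/3)p + (1/2)(1 - p).
p = Fraction(4, 7)
assert Fraction(1, 6) + Fraction(1, 3) * p + Fraction(1, 2) * (1 - p) == p

# Monte Carlo check of the game itself.
rng = random.Random(0)

def even_sum_game():
    total = 0
    while True:
        roll = rng.randrange(1, 7)
        total += roll
        if roll == 6:
            return total % 2 == 0

trials = 100_000
estimate = sum(even_sum_game() for _ in range(trials)) / trials
print(estimate)  # close to 4/7 ~ 0.5714
```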
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/2370746', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/136592/']}
jdg_80106
stackexchange
llm_judgeable_groundtruth_similarity
9160521
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my code I want to use an If condition in which I want to use "Or" around 18 times. For example:

If a="something" Or a="something" Or a="something" Or .........(up to 18 times)... Then
    'do nothing
Else
    'do action
End If

[Note: the value of a changes in a For loop every time.]

So I just want to ask: is there any limitation in If on using Or a certain number of times? Or is there any other, better way to do the same? Thanks Now provide the response and nothing else.
As far as I know, there is no limitation when using Or this way. Yet, you may consider alternative ways of coding this.

Negating a condition using Not

First, if you do nothing in the first case, then consider using the Not statement:

If Not True Then
    'do something
    'no else
End If

Consider using Select Case

Second, if you are checking the very same variable, you could consider using a Select Case, but it doesn't seem appropriate in your case if you have only one case.

Try to use a search

Finally, if you are checking strings, you would probably be better off searching within an array (with Application.Match if you are within Excel, or .Contains) or within a String using Instr.

Using a collection or a dictionary

[EDIT] Another very good way to handle this would be to use the Dictionary structure of VBA and check if a exists (see MSDN for some information).
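Outside VBA, the same "search instead of a long Or chain" idea amounts to a single membership test; a Python sketch with made-up values:

```python
# One membership test on a set replaces an 18-clause Or chain.
SKIP_VALUES = {"alpha", "beta", "gamma"}  # ...up to 18 entries (hypothetical)

def needs_action(a):
    return a not in SKIP_VALUES

print(needs_action("delta"), needs_action("alpha"))  # True False
```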
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9160521', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/404348/']}
jdg_80107
stackexchange
llm_judgeable_groundtruth_similarity
5591972
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working on an iPhone App which communicates with a Server to store and exchange data. Since I would like to make it as simple as possible, I want to avoid registration (or maybe also the use of a password) for the user account. Is it possible (and allowed?) to get the UDID of the iPhone device and make e.g. an MD5 hash of it, which I transfer to the server and use for authentication? Since this ID is unique I could simply use it to log in and get the user-specific data from the server, without any need of creating login data. Is it allowed to access the UDID, make an MD5 hash of it and store it in a database? Second question is: how do I get the UDID? ;-) Now provide the response and nothing else.
Yes, it's allowed, but take into account what I have reported below, from the documentation. You can retrieve the UDID as follows:

NSString *udid = [[UIDevice currentDevice] uniqueIdentifier];

Note the following from the official Apple documentation: A device’s unique identifier (sometimes abbreviated as UDID for Unique Device Identifier) is a hash value composed from various hardware identifiers such as the device serial number. It is guaranteed to be unique for each device. The UDID is independent of the device name. For devices that use a SIM (subscriber identity module) card, the UDID is independent of the SIM card. For user security and privacy, you must not publicly associate a device’s unique identifier with a user account. You may use the UDID, in conjunction with an application-specific user ID, for identifying application-specific data on your server. For example, you could use a device-user combination ID to control access to registered products or when storing high scores for a game in a central server. However, if you are developing a game, you may want to instead use Game Center’s player identifier key as explained in Game Kit Programming Guide. Important: Never store user information based solely on the UDID. Always use a combination of UDID and application-specific user ID. A combined ID ensures that if a user passes a device on to another user, the new user will not have access to the original user’s data.
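The "UDID plus application-specific user ID" recommendation can be sketched generically. An illustrative Python version (the identifiers below are invented, and MD5 is used only because the question mentions it, not as a security measure):

```python
import hashlib

def account_key(device_id, app_user_id):
    # Combine the device identifier with an app-specific user id before
    # hashing, so stored data doesn't silently follow the device to a
    # new owner.
    return hashlib.md5(f"{device_id}:{app_user_id}".encode()).hexdigest()

k1 = account_key("2b6f0cc904d137be2e1730235f5664094b831186", "user-42")
k2 = account_key("2b6f0cc904d137be2e1730235f5664094b831186", "user-43")
print(k1, k2)  # two 32-hex-digit keys; different users give different keys
```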
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5591972', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/531222/']}
jdg_80108
stackexchange
llm_judgeable_groundtruth_similarity
14167178
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been using R CMD BATCH my_script.R from a terminal to execute an R script. I am now at the point where I would like to pass an argument to the command, but am having some issues getting it working. If I do R CMD BATCH my_script.R blabla then blabla becomes the output file, rather than being interpreted as an argument available to the R script being executed. I have tried Rscript my_script.R blabla which seems to pass on blabla correctly as an argument, but then I don't get the my_script.Rout output file that I get with R CMD BATCH (I want the .Rout file). While I could redirect the output of a call to Rscript to a file name of my choosing, I would not be getting the R input commands included in the file in the way R CMD BATCH does in the .Rout file. So, ideally, I'm after a way to pass arguments to an R script being executed via the R CMD BATCH method, though would be happy with an approach using Rscript if there is a way to make it produce a comparable .Rout file. Now provide the response and nothing else.
My impression is that R CMD BATCH is a bit of a relic. In any case, the more recent Rscript executable (available on all platforms), together with commandArgs(), makes processing command line arguments pretty easy. As an example, here is a little script -- call it "myScript.R":

## myScript.R
args <- commandArgs(trailingOnly = TRUE)
rnorm(n=as.numeric(args[1]), mean=as.numeric(args[2]))

And here is what invoking it from the command line looks like

> Rscript myScript.R 5 100
[1]  98.46435 100.04626  99.44937  98.52910 100.78853

Edit: Not that I'd recommend it, but ... using a combination of source() and sink(), you could get Rscript to produce an .Rout file like that produced by R CMD BATCH. One way would be to create a little R script -- call it RscriptEcho.R -- which you call directly with Rscript. It might look like this:

## RscriptEcho.R
args <- commandArgs(TRUE)
srcFile <- args[1]
outFile <- paste0(make.names(date()), ".Rout")
args <- args[-1]
sink(outFile, split = TRUE)
source(srcFile, echo = TRUE)

To execute your actual script, you would then do:

Rscript RscriptEcho.R myScript.R 5 100
[1]  98.46435 100.04626  99.44937  98.52910 100.78853

which will execute myScript.R with the supplied arguments and sink interleaved input, output, and messages to a uniquely named .Rout.

Edit2: You can run Rscript verbosely and place the verbose output in a file.

Rscript --verbose myScript.R 5 100 > myScript.Rout
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14167178', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/129475/']}
jdg_80109
stackexchange
llm_judgeable_groundtruth_similarity
12876501
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a horizontal line in a ggplot and I would like to label its value (7.1) on the y axis.

library(ggplot2)
df <- data.frame(y=c(1:10), x=c(1:10))
h <- 7.1
plot1 <- ggplot(df, aes(x=x, y=y)) + geom_point()
plot2 <- plot1 + geom_hline(aes(yintercept=h))

Thank you for your help. Now provide the response and nothing else.
It's not clear if you want 7.1 to be part of the y-axis, or if you just want a way to label the line. Assuming the former, you can use scale_y_continuous() to define your own breaks. Something like this may do what you want (will need some fiddling most likely):

plot1 + geom_hline(aes(yintercept=h)) +
  scale_y_continuous(breaks = sort(c(seq(min(df$y), max(df$y), length.out=5), h)))

Assuming the latter, this is probably more what you want:

plot1 + geom_hline(aes(yintercept=h)) +
  geom_text(aes(0, h, label = h, vjust = -1))
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12876501', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1169210/']}
jdg_80110
stackexchange
llm_judgeable_groundtruth_similarity
3016326
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've seen this advice... ideally the web should follow the REST principle and be completely stateless. Therefore a single URL should identify a single resource, without having to keep the navigation history of each user. ...and I read the Wikipedia page http://en.wikipedia.org/wiki/REST and it really sounds good, but I don't get how to actually implement it. I'm working in ASP .NET Webforms NOT MVC. For example, in the application I am about to build - I need my user to Login before I allow them to do anything. There are a couple of hoops they have to jump through before they are allowed to do much useful - like Accept T's and C's and confirm their basic details are unchanged. Finally they are allowed to do something they really want like BuyAProduct! It seems to me (I come from the HEAVILY stateful world of the Rich client) that I need state to record what they have done and infer from that what they are allowed to do. I don't see how I can support them (say) bookmarking the BuyAProduct URI. When they arrive at the bookmark how do I know if they have logged in and if they agreed to the T's and C's and if they dutifully checked their basic details? I love the idea of the app being stateless, partly because it seems to completely solve the problem of "What the heck do I do when the user clicks on the Back and Forward buttons?" I don't see how I can still get it to work properly. I feel I am missing something really fundamental about this. Now provide the response and nothing else.
The advice isn't suggesting that the app should be stateless - it's suggesting that the resources in the app should be stateless. That is, a page called "www.mysite.com/resources/123" will always represent the same resource, regardless of which user is accessing it or whether they're logged in or not. (The fact that you might deny a non-logged-in user access is a separate issue - the point is that the Uri itself doesn't rely on user-specific data to work.) For example, the kind of sites that break this rule are those where you navigate to a product page, email the Uri to your friend, and on clicking it they see a message along the lines of "I'm sorry, your session has expired" or "This product does not exist" or similar. The reason this happens is because the Uri includes something specific to the user's session on the site, and if a different user tries to use the link (or the same user at a later time), it's no longer valid. So, you will always still need some form of state for your application, but where that state is implemented is the important factor. Hope that helps shed a little light!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3016326', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/84228/']}
jdg_80111
stackexchange
llm_judgeable_groundtruth_similarity
26906
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I guess by now most people have heard about the new paper ( arXiv:1109.4897 ) by the OPERA collaboration which claims to have observed superluminal neutrinos with 6 $\sigma$ significance. Obviously this has been greeted with a great deal of skepticism, and there will no doubt be debate over systematic errors for a long time to come (and frankly I expect some unaccounted for systematic error to be the case here). Obviously theorists abhor superluminal travel, and I am well aware of many of the reasons for this. However, the paper has me wondering whether there have been any toy models put forward which would be both consistent with the OPERA paper, and with earlier bounds on neutrino velocity. In particular, if taken with other previous papers (from MINOS and from observations of the 1987 supernova) you have the following bounds on neutrino velocity in various average energy regimes: $>$ 30 GeV: $~\frac{|v-c|}{c} < 4\times 10^{-5}$ 17 GeV: $~~~~\frac{v-c}{c} = (2.48 \pm 0.28 (stat) \pm 0.30 (sys))\times 10^{-5}$ 3 GeV: $~~~~~\,\frac{v-c}{c} = (5.1 \pm 2.9) \times 10^{-5}$ 10 MeV: $~~~~\frac{|v-c|}{c} < 2\times 10^{-9}$ Is there any proposed model which is actually consistent with such results? It seems that there has been a lot of pointing to the supernova bound (the 10MeV scale) as being inconsistent with the reported findings. However if there was a mechanism whereby the velocity were a monotonic function of energy (or depended on flavor), this argument would be negated. Do there exist any such proposed mechanisms? Now provide the response and nothing else.
I am afraid that one has to go to a "very unusual segment" of theoretical literature if he wants any papers about superluminal neutrinos. Guang-jiong Ni has been authoring many papers about superluminal neutrinos a decade ago:

http://arxiv.org/abs/hep-ph/0103051
http://arxiv.org/abs/hep-th/0201077
http://arxiv.org/abs/hep-ph/0203060
http://arxiv.org/abs/hep-ph/0306028

and probably others. They are pretty much cited by the same author only, so you may become the second person in the world who has read them. For somewhat more well-known papers on tachyonic neutrinos, see

http://arxiv.org/abs/hep-ph/9810355
http://arxiv.org/abs/hep-ph/9607477
http://arxiv.org/abs/hep-th/9411230

which were raised by the observations of apparently superluminal neutrinos in the decay of tritium atoms. Well, the older ones were written before the tritium atom decay anomaly. An even older paper is

http://www.sciencedirect.com/science/article/pii/0370269385904605

which reviewed the experimental situation of tachyonic neutrinos as of 1985. You may want to check many more papers by Alan Kostelecky because he's been working on similar possible ways how the Lorentz symmetry could be broken for decades and he is a rather serious researcher. See also

http://www.sciencedirect.com/science/article/pii/0370269386904806

A paper that actually claimed to have a model of superluminal neutrinos is

http://arxiv.org/abs/hep-ph/0009291

where two Weyl equations were joined into a twisted Dirac equation of a sort. Not sure whether it made any sense. On Sunday, I will post an article on my blog about a vague way to get different speeds of light from noncommutative geometry (in string theory or otherwise):

http://motls.blogspot.com/2011/09/superluminal-neutrinos-from.html

As you noted as well, the functional dependence of the velocity on the neutrino energy would have to be an extremely unusual function, which de facto invalidates the Opera results without any loopholes.
However, there could be a loophole: the neutrino could become highly tachyonic only while it moves through rock. The "index of refraction for neutrinos" could be smaller than one for common materials such as rocks. Of course, it sounds as incompatible with relativity as tachyonic neutrinos in the vacuum, but by splitting the experimental data into vacuum data and rock data, you could get a more sensible velocity dependence on energy in both cases.
{}
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/26906', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/163/']}
jdg_80112
stackexchange
llm_judgeable_groundtruth_similarity
559251
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In order to define what deep learning is, the learning portion is often listed with backpropagation as a requirement without alternatives, both in the mainstream software libraries and in the literature. Not many gradient-free optimisations are mentioned in deep learning or in statistical learning in general. Similarly, "classical algorithms" ( Nonlinear least squares ) involve derivatives [1]. In general, gradient-free learning is not mainstream in either deep learning or classical algorithms. One promising alternative is simulated annealing [2, 3], so-called 'nature-inspired optimization'. Is there any inherent theoretical reason why gradient-free deep learning (statistical learning) is not mainstream? (Or not preferred?)

Notes
[1] Such as Levenberg–Marquardt
[2] Simulated Annealing Algorithm for Deep Learning (2015)
[3] CoolMomentum: a method for stochastic optimization by Langevin dynamics with simulated annealing (2021). Though this is still not fully gradient-free, it does not require auto-differentiation.

Edit 1: Additional references using the Ensemble Kalman Filter, showing a derivative-free approach:
Ensemble Kalman Inversion: A Derivative-Free Technique For Machine Learning Tasks arXiv:1808.03620 .
Ensemble Kalman Filter optimizing Deep Neural Networks: An alternative approach to non-performing Gradient Descent springer ( manuscript-pdf )

Edit 2: As far as I gather, Yann LeCun does not consider gradient-free learning part of the deep learning ecosystem. "DL is constructing networks of parameterized functional modules & training them from examples using gradient-based optimization." tweet

Edit 3: Ben Bolker's comment on local geometry definitely deserves to be one of the answers. Now provide the response and nothing else.
Gradient-free learning is in the mainstream very heavily, but not used heavily in deep learning. Methods used for training neural networks that don't involve derivatives are typically called "metaheuristics." In computer science and pattern recognition (which largely originated in electrical engineering), metaheuristics are the go-to for NP-hard problems, such as airline flight scheduling, traffic route planning to optimize fuel consumption by delivery trucks, or the traveling salesman problem (annealing). As an example see swarm-based learning for neural networks or genetic algorithms for training neural networks or use of a metaheuristic for training a convolutional neural network . These are all neural networks which use metaheuristics for learning, and not derivatives. While metaheuristics encompasses a wide swath of the literature, they're just not strongly associated with deep-learning, as these are different areas of optimization. Look up "solving NP-hard problems with metaheuristics." Last, recall that gradients used for neural networks don't have anything to do with the derivatives of a function that a neural network can be used to minimize (maximize). (This would be called function approximation using a neural network as opposed to classification analysis via neural network.) They're merely derivatives of the error or cross-entropy with respect to connection weight change within the network. In addition, the derivatives of a function may not be known, or the problem can be too complex for using derivatives. Some of the newer optimization methods involve finite differencing as a replacement for derivatives, since compute times are getting faster, and derivative-free methods are becoming less computationally expensive in the time complexity.
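As a concrete illustration of a metaheuristic training loop, here is a toy simulated-annealing fit of a one-parameter model in Python (all numbers and schedule choices here are arbitrary sketch values; a real neural network would just have a much larger parameter vector, with no derivatives used anywhere):

```python
import math
import random

def loss(w, data):
    # mean squared error of the one-parameter model y = w * x
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def anneal(data, w=0.0, temp=1.0, cooling=0.999, steps=5000, seed=0):
    """Derivative-free training: propose random perturbations, always
    accept improvements, occasionally accept worse moves (Metropolis),
    and shrink both the step size and the acceptance rate over time."""
    rng = random.Random(seed)
    cur = best = loss(w, data)
    best_w = w
    for _ in range(steps):
        cand = w + rng.gauss(0, temp)
        c = loss(cand, data)
        if c < cur or rng.random() < math.exp(-(c - cur) / max(temp, 1e-12)):
            w, cur = cand, c
            if c < best:
                best, best_w = c, cand
        temp *= cooling
    return best_w, best

data = [(x, 3.0 * x) for x in range(1, 6)]  # true weight is 3
w, final_loss = anneal(data)
print(w, final_loss)  # w lands near 3 with a small loss
```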
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/559251', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/254337/']}
jdg_80113
stackexchange
llm_judgeable_groundtruth_similarity
10802606
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have multiple images stored in a set of organized folders. I need to re-size those images to a specific percentage recursively from their parent directory. I am running Ubuntu 11.10 and i prefer learning how to do that directly from the terminal. Now provide the response and nothing else.
You could use imagemagick. For instance, for resizing all the JPG images under the current directory to 50% of their original size, you could do:

for f in `find . -name "*.jpg"`
do
  convert $f -resize 50% $f.resized.jpg
done

The resulting files will have ".jpg" twice in their names. If that is an issue, you can check the following alternatives. For traversing/finding the files to resize, you can use xargs too. Example:

find . -name "*.jpg" | xargs convert -resize 50%

This will create copies of the images. If you just want to convert them in place, you can use:

find . -name "*.jpg" | xargs mogrify -resize 50%
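If filenames may contain spaces (which break the backtick loop above), one workaround is to do the directory walk in a scripting language and hand each path to ImageMagick separately. A Python sketch (assumes mogrify is on the PATH; with dry_run=True it only lists what would be touched):

```python
import os
import subprocess

def resize_all(top, percent=50, dry_run=True):
    """Walk `top` recursively and resize every .jpg in place with
    ImageMagick's `mogrify`. Passing each path as a list element keeps
    filenames with spaces intact."""
    targets = []
    for root, _, files in os.walk(top):
        for f in files:
            if f.lower().endswith(".jpg"):
                targets.append(os.path.join(root, f))
    if not dry_run:
        for path in targets:
            subprocess.run(["mogrify", "-resize", f"{percent}%", path], check=True)
    return targets
```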
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10802606', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/547894/']}
jdg_80114
stackexchange
llm_judgeable_groundtruth_similarity
228412
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For example, there is a file here that I want to download via CLI: https://raw.githubusercontent.com/raspberrypi/linux/rpi-4.1.y/arch/arm/configs/bcmrpi_defconfig How to I download the actual file, and not the html? I tried the following, but only get an html file: [pi@raspberrypi]/usr/src/linux$ wget https://raw.githubusercontent.com/raspberrypi/linux/rpi-3.2.27/arch/arm/configs/bcmrpi_defconfig Now provide the response and nothing else.
The general problem is that github typically serves up an html page that includes the file specified along with context and operations you can perform on it, not the raw file specified. Tools like wget and curl will just save what they're given by the web server, so you need to find a way to ask the web server, github, to send you a raw file rather than an html wrapper. This is true whether you use -o, -O, or >>. The "...//raw.git..." address in this particular test case is probably serving raw files, and pre-solving the OP's problem as posted, which is why all of these answers work, but don't solve the more generic problem. I can download a text file, or an html-wrapped version of it, from the following urls. Note the differences between them and feel free to paste them in a new tab or new window in your browser as well.

html-wrapped, default:
https://github.com/raspberrypi/linux/blob/rpi-4.9.y/arch/arm/configs/bcmrpi_defconfig

raw link, if you right-click the [raw] button on the html page:
https://github.com/raspberrypi/linux/raw/rpi-4.9.y/arch/arm/configs/bcmrpi_defconfig

final url, after being redirected:
https://raw.githubusercontent.com/raspberrypi/linux/rpi-4.9.y/arch/arm/configs/bcmrpi_defconfig

You can then download with either:

wget https://raw.githubusercontent.com/raspberrypi/linux/rpi-4.9.y/arch/arm/configs/bcmrpi_defconfig
curl https://raw.githubusercontent.com/raspberrypi/linux/rpi-4.9.y/arch/arm/configs/bcmrpi_defconfig -o bcmrpi_defconfig

The simplest way would be to go to the github page of the content you want and right-click to get the [raw] link for each file. If your needs are more complex, requiring many files, etc. you may want to abandon wget and curl and just use git. It is probably a more appropriate tool for pulling data from git repositories.
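The rewrite from the html-wrapped "blob" URL to the raw one is mechanical, so it can be scripted. A small Python sketch (string manipulation only, no network access):

```python
def to_raw_url(blob_url):
    """Turn a github.com 'blob' page URL into the raw.githubusercontent.com
    form that serves the file itself."""
    prefix = "https://github.com/"
    assert blob_url.startswith(prefix)
    owner_repo, _, rest = blob_url[len(prefix):].partition("/blob/")
    return f"https://raw.githubusercontent.com/{owner_repo}/{rest}"

print(to_raw_url("https://github.com/raspberrypi/linux/blob/rpi-4.9.y/arch/arm/configs/bcmrpi_defconfig"))
# https://raw.githubusercontent.com/raspberrypi/linux/rpi-4.9.y/arch/arm/configs/bcmrpi_defconfig
```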
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/228412', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/59802/']}
jdg_80115
stackexchange
llm_judgeable_groundtruth_similarity
557
Below is a question asked on the forum earthscience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Answers from these two questions: How can we guess the size of the earth's inner core? & How can we measure the thickness of the earth's mantle? use seismic waves as the method for determining the size and thickness of sections in the earth. Is this the only way of measuring the inner sections of the earth? Bonus Points: Where did the idea of using seismic waves to measure earth's layers start? Now provide the response and nothing else.
You are right... no one has ever been able to make a borehole that deep. The two deepest boreholes are about 12,300 m (40,400 ft) deep - those are the Kola Superdeep Borehole in Russia and the Al Shaheen oil well in Qatar. So how do we measure the thickness of our subsurface? Think of the earth as an onion with different layers. Each of those layers (lithosphere, asthenosphere, mesospheric mantle, outer core, inner core) has different mechanical (and chemical) properties. Because of this, it is possible to indirectly infer the depth of each layer by using the travel times of refracted/reflected seismic waves (which come from earthquakes). As mentioned before, each layer has different mechanical properties, which makes the seismic velocity different in each layer. At the boundary between layers, those changes cause refraction, or reflection if large increases in seismic velocity are involved. Conclusion: no direct measurements are possible, only measurements derived indirectly from the travel times of seismic waves.
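To make the travel-time idea concrete, here is a toy Python sketch of the classic two-layer refraction result (the velocities and depth are made-up illustrative numbers, not real Earth values): the distance at which the direct arrival and the refracted head-wave arrival cross over depends on the boundary depth, so measuring that crossover distance lets you invert for the depth without ever drilling.

```python
import math

# Toy two-layer model: a faster layer (v2) under a slower one (v1).
v1, v2 = 3000.0, 5000.0   # seismic velocities, m/s (illustrative)
h = 1000.0                # true depth of the boundary, m (illustrative)

# Classic refraction-survey result: the direct-wave and head-wave
# travel-time curves cross at
#     x_cross = 2*h*sqrt((v2 + v1) / (v2 - v1)),
# so an observed x_cross can be solved back for the depth h.
x_cross = 2 * h * math.sqrt((v2 + v1) / (v2 - v1))
h_recovered = (x_cross / 2) * math.sqrt((v2 - v1) / (v2 + v1))
print(x_cross, h_recovered)
```

With these numbers the curves cross 4 km from the source, and inverting that distance recovers the 1 km boundary depth exactly; real surveys do the same inversion with noisy arrival picks and more layers.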
{}
{'log_upvote_score': 4, 'links': ['https://earthscience.stackexchange.com/questions/557', 'https://earthscience.stackexchange.com', 'https://earthscience.stackexchange.com/users/116/']}
jdg_80116
stackexchange
llm_judgeable_groundtruth_similarity
45747
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $R$ be a ring. An element $x$ in $R$ is said to be idempotent if $x^2=x$. For a specific $n\in{\bf Z}_+$ which is not very large, say, $n=20$, one can calculate one by one to find that there are four idempotent elements: $x=0,1,5,16$. So here is my question: Is there a general result which tells the number of the idempotent elements of ${\bf Z}_n$? Now provide the response and nothing else.
If $n=p_1^{m_1}\cdots p_k^{m_k}$ is the factorization of $n$ as a product of powers of distinct primes, then the ring $\mathbb Z/n\mathbb Z$ is isomorphic to the product $\mathbb Z/p_1^{m_1}\mathbb Z\times\cdots\times \mathbb Z/p_k^{m_k}\mathbb Z$. It is easy to reduce the problem of counting idempotent elements in this direct product to counting them in each factor. Can you do that?
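The count that falls out of this argument is $2^k$: each factor $\mathbb Z/p_i^{m_i}\mathbb Z$ contributes exactly the idempotents $0$ and $1$. A quick brute-force confirmation in Python against the $n=20$ example from the question:

```python
# 20 = 2^2 * 5 has k = 2 distinct primes, so we expect 2^2 = 4 idempotents.
def idempotents(n):
    """All x in Z/nZ with x^2 = x, found by exhaustive search."""
    return [x for x in range(n) if (x * x) % n == x]

print(idempotents(20))
```

The result matches the question's list 0, 1, 5, 16, and the same check works for any small n (e.g. 30 = 2*3*5 gives 2^3 = 8 idempotents).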
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/45747', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_80117
stackexchange
llm_judgeable_groundtruth_similarity
948352
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: R710 PERC 6/E Raid 6 Running a Windows server 2016 and Unitrends in virtual machines in ESXi. So I have a Raid 6 array and need to expand the array. OMSA tells me I have an out of date driver. I have 6.3.0-0001 and 6.3.3-0002 (the latest driver) is required. I've not updated a server like this before now so I need a bit of help. First off, I have the choice of Linux or Windows drivers. I am running the free version of ESXi 6.5 so I am confused about which driver I need. Second, how exactly does one go about installing these drivers once the right drivers are downloaded? I have connected to iDRAC via the web interface and have tried to install both the EXE and BIN. Both get to 96% uploaded and then fail. I have seen that there is a SUU that can be uploaded and then one can put together repositories and use those. Is that the right way? If so, the only choices I have are Linux and Windows - same question, which one? There ARE choices (no SUU) for ESXi 6.0 but no 6.5. Bottom line is I am very confused about how to get drivers up to date so I can expand my raid. Now provide the response and nothing else.
The solution was as simple as stated in the linked article. I am not sure why my configuration did not change the first time I restarted the docker daemon. After I found evidence in the Docker daemon documentation for the bridge argument in daemon.json, I gave it another try and the docker daemon picked up the KVM default bridge on startup. First I created the configuration file /etc/docker/daemon.json as suggested in the documentation with the following content (the iptables line may not even be needed):

{
  "bridge": "virbr0",
  "iptables": false
}

all that was needed was:

docker stop mysql
systemctl stop docker
systemctl start docker
docker start mysql

And the existing docker container was running on the KVM bridge. The IP address of the container can be checked with:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mysql
192.168.122.2

I am not sure if I can remove the docker0 bridge now, but the container is listed under virbr0 together with the three VMs.

brctl show
bridge name   bridge id           STP enabled   interfaces
docker0       8000.024241f39b89   no
virbr0        8000.068ff2a4a56e   yes           veth2abcff1
                                                virbr0-nic
                                                vnet0
                                                vnet1
                                                vnet2
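For anyone scripting this, the Go template in that docker inspect call just walks NetworkSettings.Networks and prints each IPAddress. The same extraction can be done in Python against docker inspect's JSON output (the sample document below is made up for illustration; real output has many more fields):

```python
import json

# Minimal stand-in for `docker inspect mysql` output (a JSON array with
# one object per container). Only the fields we read are included here.
sample = json.loads("""
[{"NetworkSettings": {"Networks": {"virbr0": {"IPAddress": "192.168.122.2"}}}}]
""")

def container_ips(inspect_output):
    """Collect the IPAddress of every network the container is attached to."""
    networks = inspect_output[0]["NetworkSettings"]["Networks"]
    return [net["IPAddress"] for net in networks.values()]

print(container_ips(sample))
```

In practice you would feed it the real output, e.g. json.loads(subprocess.check_output(["docker", "inspect", "mysql"])).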
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/948352', 'https://serverfault.com', 'https://serverfault.com/users/312217/']}
jdg_80118
stackexchange
llm_judgeable_groundtruth_similarity
7182059
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I write an Oracle query which is equivalent to the following Informix query? select tab1.a,tab2.b,tab3.c,tab4.d from table1 tab1, table2 tab2 OUTER (table3 tab3,table4 tab4,table5 tab5) where tab3.xya = tab4.xya AND tab4.ss = tab1.ss AND tab3.dd = tab5.dd AND tab1.fg = tab2.fg AND tab4.kk = tab5.kk AND tab3.desc = "XYZ" I tried: select tab1.a,tab2.b,tab3.c,tab4.d from table1 tab1, table2 tab2 LEFT OUTER JOIN (table3 tab3,table4 tab4,table5 tab5) where tab3.xya = tab4.xya AND tab4.ss = tab1.ss AND tab3.dd = tab5.dd AND tab1.fg = tab2.fg AND tab4.kk = tab5.kk AND tab3.desc = "XYZ" What is the correct syntax? Now provide the response and nothing else.
Write one table per join, like this:

select tab1.a, tab2.b, tab3.c, tab4.d
from table1 tab1
inner join table2 tab2
        on tab2.fg = tab1.fg
left join table3 tab3
        on tab3.xxx = tab1.xxx
       and tab3.desc = 'XYZ'
left join table4 tab4
        on tab4.xya = tab3.xya
       and tab4.ss = tab3.ss
left join table5 tab5
        on tab5.dd = tab3.dd
       and tab5.kk = tab4.kk

Note that while my query contains actual left joins, your query apparently doesn't. Since the conditions are in the where clause, your query should behave like inner joins. (Although I admit I don't know Informix, so maybe I'm wrong there). The specific Informix extension used in the question works a bit differently with regards to left joins. Apart from the exact syntax of the join itself, this is mainly in the fact that in Informix, you can specify a list of outer joined tables. These will be left outer joined, and the join conditions can be put in the where clause. Note that this is a specific extension to SQL. Informix also supports 'normal' left joins, but you can't combine the two in one query, it seems. In Oracle this extension doesn't exist, and you can't put outer join conditions in the where clause, since the conditions will be executed regardless. So look what happens when you move conditions to the where clause:

select tab1.a, tab2.b, tab3.c, tab4.d
from table1 tab1
inner join table2 tab2 on tab2.fg = tab1.fg
left join table3 tab3 on tab3.xxx = tab1.xxx
left join table4 tab4 on tab4.xya = tab3.xya
left join table5 tab5 on tab5.dd = tab3.dd and tab5.kk = tab4.kk
where tab3.desc = 'XYZ'
  and tab4.ss = tab3.ss

Now, only rows will be returned for which those two conditions are true. They cannot be true when no row is found, so if there is no matching row in table3 and/or table4, or if ss is null in either of the two, one of these conditions is going to return false, and no row is returned. This effectively changes your outer join to an inner join, and as such changes the behavior significantly. PS: left join and left outer join are the same.
It means that you optionally join the second table to the first (the left one). Rows are returned even if there is only data in the 'left' part of the join. In Oracle you can also right [outer] join to make not the left, but the right table the leading table. And there is even a full [outer] join to return a row if there is data in either table.
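The point that a WHERE condition on an outer-joined column silently turns the outer join into an inner join is easy to demonstrate outside SQL. A small Python simulation with toy rows (not the OP's actual tables):

```python
# Toy rows: t1 left-joined to t3; unmatched t1 rows carry None for t3 columns.
t1 = [{"ss": 1}, {"ss": 2}, {"ss": 3}]
t3 = {1: {"desc": "XYZ"}, 2: {"desc": "ABC"}}   # no match for ss=3

left_joined = [
    {"ss": r["ss"], "desc": t3.get(r["ss"], {}).get("desc")} for r in t1
]

# Filtering afterwards (the WHERE clause) discards the None rows entirely,
# so the outer join has effectively become an inner join:
where_filtered = [r for r in left_joined if r["desc"] == "XYZ"]

# Putting the condition in the join itself (the ON clause) keeps every t1
# row, just with None where the condition failed or no match existed:
on_filtered = [
    {"ss": r["ss"],
     "desc": t3[r["ss"]]["desc"]
             if r["ss"] in t3 and t3[r["ss"]]["desc"] == "XYZ" else None}
    for r in t1
]
print(where_filtered, on_filtered)
```

where_filtered keeps only the single matching row, while on_filtered still returns all three t1 rows, which is exactly the difference between the two Oracle queries above.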
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7182059', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/846684/']}
jdg_80119
stackexchange
llm_judgeable_groundtruth_similarity
11107748
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using matplotlib to plot data as a function of time in hh:mm:ss.ms format where ms is milliseconds. However, I don't see the milliseconds in the plot. Is it possible to add them as well?

dates = matplotlib.dates.datestr2num(x_values) # convert string dates to numbers
plt.plot_date(dates, y_values) # doesn't show milliseconds

Now provide the response and nothing else.
The problem here is that there is a class to format ticks, and plot_date sets that class to something that you don't want: an automatic formatter that never plots milliseconds. In order to change this, you need to change from matplotlib.dates.AutoDateFormatter to your own formatter. matplotlib.dates.DateFormatter(fmt) creates a formatter with a datetime.strftime format string. I'm not sure how to get this to show milliseconds, but it will show microseconds, which I hope will work for you; it's just a few extra zeros, after all. Try this code:

dates = matplotlib.dates.datestr2num(x_values) # convert string dates to numbers
plt.plot_date(dates, y_values) # doesn't show milliseconds by default.
# This changes the formatter.
plt.gca().xaxis.set_major_formatter(matplotlib.dates.DateFormatter("%H:%M:%S.%f"))
# Redraw the plot.
plt.draw()
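The %f directive used above is the standard strftime microsecond field, so you can sanity-check the format string with plain datetime before wiring it into matplotlib (matplotlib itself is left out of this sketch so it stays dependency-free):

```python
from datetime import datetime

fmt = "%H:%M:%S.%f"                               # same string passed to DateFormatter
t = datetime(2012, 6, 19, 14, 30, 15, 123000)     # 123 ms expressed as 123000 us
print(t.strftime(fmt))                            # -> 14:30:15.123000
```

The milliseconds appear as the first three of the six %f digits; if the trailing zeros bother you, you can post-process the tick labels, but the formatter itself always emits six digits.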
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11107748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/464277/']}
jdg_80120
stackexchange
llm_judgeable_groundtruth_similarity
4278106
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a string and want to test using PHP whether it's valid base64 encoded or not. Now provide the response and nothing else.
I realise that this is an old topic, but using the strict parameter isn't necessarily going to help. Running base64_decode on a string such as "I am not base 64 encoded" will not return false. If however you try decoding the string with strict and re-encode it with base64_encode, you can compare the result with the original data to determine if it's a valid base64 encoded value:

if (base64_encode(base64_decode($data, true)) === $data) {
    echo '$data is valid';
} else {
    echo '$data is NOT valid';
}
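For comparison, the same round-trip idea in Python, where base64.b64decode(..., validate=True) plays the role of PHP's strict flag (note that, like the PHP version, this only accepts canonical, padded base64):

```python
import base64

def is_base64(s: str) -> bool:
    """Round-trip check: decode strictly, re-encode, compare to the input."""
    try:
        return base64.b64encode(base64.b64decode(s, validate=True)).decode() == s
    except ValueError:          # binascii.Error subclasses ValueError
        return False

print(is_base64("aGVsbG8="), is_base64("I am not base 64 encoded"))
```

The strict decode rejects strings with characters outside the base64 alphabet, and the re-encode comparison additionally rejects strings with non-canonical padding that a lenient decoder would have accepted.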
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4278106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/520274/']}
jdg_80121
stackexchange
llm_judgeable_groundtruth_similarity
95640
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Sometimes when you call an administrative service it tells you "Dial 1 to access such service. Dial 2 to access some other service. Otherwise, dial 3." No harm if somebody comes to learn I chose "2". However I recently switched to a new bank. When I call this bank, it asks me to dial my customer reference and my passcode. Here are the things that come to my mind: Is such data cyphered? I don't know much about phone communications but I guess there are nodes between me and my interlocutor. May I trust these nodes? May somebody listen to the conversation and catch my credentials? May somebody spoof the phone number of my bank to catch my credentials? Shouldn't my passcode be hashed? Is it stored as plain? I think even my bank should not know my passcode. (this question was obviously wrong) Now provide the response and nothing else.
Your sensitive information travels through different mediums and systems to get to the bank, and yes, there are vulnerabilities along this path. First, it is transmitted from your fingertips to your dialler application. It can be intercepted by malware running on your phone. There is a new vulnerability in Android called Stagefright that can get full access to an Android phone. Before exiting the phone, the data is sent to a different part of the phone which does the actual communication, called the baseband processor, which runs a realtime OS. There were some vulnerabilities in this OS; some of the most interesting were presented here. Then the audio tones (DTMF) are encrypted and sent over the air to a GSM base station. In almost all of the world, GSM is encrypted and almost any phone will notify you if there is no encryption. The problem here is that 2G communication uses the A5/1 encryption scheme that was purposely weakened for governmental access, and it's easy and cheap to intercept and break it. 3G and 4G have better encryption, but an attacker can jam the safer frequencies around the target and force the target phone to use the vulnerable 2G. A bigger issue here is that some banks use SMS to authenticate banking operations, and SMS is always over the 2G network. There was also the allegation that the NSA hacked the database of most of the world's SIM encryption keys, but Gemalto swears the NSA didn't get to the keys. But the phone can be talking to a fake base station. This kind of fake GSM tower is often called an IMSI-catcher and is available to law enforcement and also to attackers. There is open source software and hardware to build a fake base station, and it was demoed in 2010. Then the information travels inside the telecom network, where it might be intercepted by a well placed attacker or by governments that obtain a legal wiretap. In 2014 the SS7 attack was made public.
It allows tracking and interception of 3G communication anywhere in the world, not just locally. The last step is in the bank's systems. That looks like a safer place because banks take security more seriously than common businesses, but you can't really know for sure until an assessment can validate that. Going back to your questions: Is such data cyphered? I don't know much about phone communications but I guess there are nodes between me and my interlocutor. May I trust these nodes? As I said, voice communication is encrypted but there are vulnerabilities in different parts of the network. May somebody listen to the conversation and catch my credentials? It is possible. It depends on your phone, the capabilities of the attacker and the vulnerabilities of your mobile-phone operator. May somebody spoof the phone number of my bank to catch my credentials? There is no GSM vulnerability I know of that works like that, but there has been malware that blocks access to your online banking and then makes you call a phone number pretending to be from the bank. This way a user is less suspicious when giving sensitive information away. Shouldn't my passcode be hashed? Is it stored as plain? I don't know what the actual process is when you access your bank via a phone call, but your bank doesn't have to store your passcode in plaintext. When you transmit it, it could then be hashed and compared to a hashed version stored in the bank's computers. This is no different from web authentication where passwords are transmitted in plaintext. But don't panic: the overall risk of you specifically being targeted this way is rather low, because the bad guys have other, easier ways of stealing from large numbers of users. The risk is also lowered because many attacks on GSM require physical proximity and increased sophistication.
Banks can quickly close such vulnerabilities because they notice if some cards or accounts are being compromised and they will track down the source of the compromise.
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/95640', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/19660/']}
jdg_80122
stackexchange
llm_judgeable_groundtruth_similarity
4275071
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a div, and inside it some child nodes nested to an unknown depth. Now I have to change the ID of every element within that one div. How can I do this? The IDs build on one another, so if the parent is id='path_test_maindiv' then the next one down would be 'path_test_maindiv_child', and therefore I thought I'd solve that with wildcards, for example: document.getElementById('path_test_*') Is this possible? Or are there any other ways? Now provide the response and nothing else.
Not in native JavaScript. You have various options:

1) Put a class on the elements and use getElementsByClassName, but it doesn't work in every browser.

2) Make your own function. Something like:

function getElementsStartsWithId(id) {
    var children = document.body.getElementsByTagName('*');
    var elements = [], child;
    for (var i = 0, length = children.length; i < length; i++) {
        child = children[i];
        if (child.id.substr(0, id.length) == id)
            elements.push(child);
    }
    return elements;
}

3) Use a library or a CSS selector. Like jQuery ;)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4275071', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/496965/']}
jdg_80123
stackexchange
llm_judgeable_groundtruth_similarity
199037
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I tried to plot and find the coordinates of the intersections of more than two curves in the plot. Is there a special way to do this? Here's my code so you can visualize the whole thing.

m = 9.1*10^-31;
L = .5;
ℏ = 1.055*10^-34;
e = ℏ^2/(2 m) (π/L)^2 (1/(1.602*10^-19));
V1 = 2 e; V2 = 5 e; V3 = 8 e;
ν1 = V1/e; ν2 = V2/e; ν3 = V3/e;
Plot[{Sqrt[ν1 - ϵ], Sqrt[ν2 - ϵ], Sqrt[ν3 - ϵ], Sqrt[ϵ] Tan[π/2 Sqrt[ϵ]], -Sqrt[ϵ] Cot[π/2 Sqrt[ϵ]]}, {ϵ, 0, 10}]

Now provide the response and nothing else.
plot = Plot[{Sqrt[ν1 - ϵ], Sqrt[ν2 - ϵ], Sqrt[ν3 - ϵ], Sqrt[ϵ] Tan[π/2 Sqrt[ϵ]], -Sqrt[ϵ] Cot[π/2 Sqrt[ϵ]]}, {ϵ, 0, 10}];
Graphics`Mesh`MeshInit[]
intersections = Graphics`Mesh`FindIntersections[plot];
Show[plot, Epilog -> {Red, PointSize[Large], Point@intersections}]

See also: Marking points of intersection between two curves
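If you would rather avoid the undocumented Graphics`Mesh` internals, an intersection of two curves is just a root of their difference, so any 1-D root finder works too. A Python sketch of that idea on a simplified pair of curves (ν fixed at 2 so the intersection is known to sit at ϵ = 1; this is an illustration of the method, not the full problem):

```python
import math

def f(x):           # stand-in for Sqrt[nu - eps] with nu = 2
    return math.sqrt(2 - x)

def g(x):           # stand-in for Sqrt[eps]
    return math.sqrt(x)

def intersect(f, g, lo, hi, tol=1e-12):
    """Bisection on f - g; assumes exactly one sign change in [lo, hi]."""
    def d(x):
        return f(x) - g(x)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if d(lo) * d(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

x = intersect(f, g, 0.0, 2.0)
print(x, f(x))      # intersection near (1, 1)
```

For the full original problem you would bracket each crossing visible in the plot and run the same root finder on each pair of curves; FindRoot in Mathematica does the equivalent job if you seed it near each crossing.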
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/199037', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/65779/']}
jdg_80124
stackexchange
llm_judgeable_groundtruth_similarity
52440400
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What I want: When text is short, the 21% TextView should be just beside the first TextView, like below; and when text is long, I want the first TextView to reach the end and ellipsize, with the 21% TextView still visible, like below. But whatever I do, the 21% TextView goes away and the first TextView occupies the complete space. I tried many combinations with Linear, Relative, Frame, Constraint layout, weight, minWidth etc. but nothing seems to work. Here is my xml:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:orientation="horizontal"
    android:layout_height="wrap_content">

    <TextView
        android:id="@+id/tv_job_title"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:ellipsize="end"
        android:lines="1"
        android:maxLines="1"
        android:text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce pulvinar nec justo id bibendum." />

    <TextView
        android:id="@+id/tv_matchPercent"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginLeft="5dp"
        android:layout_marginRight="5dp"
        android:paddingLeft="5dp"
        android:text="21% "
        android:textColor="#4268e3"/>
</LinearLayout>

Any idea how to achieve it? Note: I do not want to hard code characters or width. Now provide the response and nothing else.
1 way Try this Using ConstraintLayout <?xml version="1.0" encoding="utf-8"?><android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent"> <TextView android:id="@+id/longTextView" android:layout_width="0dp" android:layout_height="wrap_content" android:ellipsize="end" android:maxLines="1" android:text=" Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce vel consectetur tortor. Fusce velit velit, tincidunt vitae dolor at, pharetra condimentum nunc. Etiam ac erat ac nulla tempus ullamcorper id ac sapien." app:layout_constraintHorizontal_chainStyle="packed" app:layout_constraintHorizontal_bias="0" app:layout_constraintWidth_default="wrap" app:layout_constraintTop_toTopOf="parent" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toLeftOf="@+id/shortTextView" /> <TextView android:id="@+id/shortTextView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="21%" android:textColor="@color/colorAccent" app:layout_constraintBaseline_toBaselineOf="@+id/longTextView" app:layout_constraintLeft_toRightOf="@+id/longTextView" app:layout_constraintRight_toRightOf="parent" /></android.support.constraint.ConstraintLayout> OUTPUT using ConstraintLayout When Long Text When Small Text 2 way using FlexboxLayout <?xml version="1.0" encoding="utf-8"?><com.google.android.flexbox.FlexboxLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent"> <TextView android:id="@+id/textview1" android:layout_width="wrap_content" app:layout_alignSelf="flex_start" android:ellipsize="end" android:maxLines="1" android:layout_height="wrap_content" android:text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce vel consectetur tortor. 
Fusce velit velit, tincidunt vitae dolor at, pharetra condimentum nunc. Etiam ac erat ac nulla tempus ullamcorper id ac sapien." /> <TextView android:id="@+id/textview2" android:text="21%" android:layout_width="wrap_content" android:minWidth="60dp" android:textColor="@color/colorAccent" app:layout_alignSelf="flex_start" android:layout_height="wrap_content" /></com.google.android.flexbox.FlexboxLayout> OUTPUT Using FlexboxLayout When Long Text When Small Text
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52440400', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6478047/']}
jdg_80125
stackexchange
llm_judgeable_groundtruth_similarity
480221
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have a problem with Apache2 authentication using authnz_ldap_module in order to authenticate users from Active Directory. My Apache version is 2.2.16-6+squeeze10. Here is the configuration I am trying to use with no luck (one of many combinations, to be precise):

AuthzLDAPAuthoritative off
AuthBasicProvider ldap
AuthType Basic
AuthName "Active Directory"
AuthLDAPURL "ldap://server1.my.company.tld:3268 server2.my.company.tld:3268/dc=my,dc=company,dc=tld?sAMAccountName?sub"
AuthLDAPBindDN "uid=my_user,dc=my,dc=company,dc=tld"
AuthLDAPBindPassword "mypassword"
Require valid-user

And I get the following entries in Apache's error.log:

[debug] mod_authnz_ldap.c(379): [client some_ip_here] [12391] auth_ldap authenticate: using URL ldap://server1.my.company.tld:3268 server2.my.company.tld:3268/dc=my,dc=company,dc=tld?sAMAccountName?sub
[info] [client some_ip_here] [12391] auth_ldap authenticate: user my_user authentication failed; URI / [LDAP: ldap_simple_bind_s() failed][Invalid credentials]
[error] [client some_ip_here] user my_user: authentication failure for "/": Password Mismatch

Of course I enter the correct password each time; I would have been blocked in AD about a hundred times already, and that didn't happen a single time so far. I cannot verify that I can connect to my AD controller, because when I try:

ldapsearch -h server1.my.company.tld -p 3268 -D "dc=my,dc=company,dc=tld"

I get the following error:

text: 000004DC: LdapErr: DSID-0C0906E8, comment: In order to perform this operation a successful bind must be completed on the connection., data 0, v1db1

I have absolutely no idea how to bind with ldapsearch and the manual didn't help me so far. What am I doing wrong with the Apache2 configuration, and how can I bind with ldapsearch? I will provide additional information if needed. Now provide the response and nothing else.
The most problematic part was AuthLDAPBindDN, which had to use the "[email protected]" syntax - not "domain\user", not simply "user", and not "uid=my_user,dc=my,dc=company,dc=tld". Below is the proper configuration that did work for me, which I provide here as a solution for the case.

AuthzLDAPAuthoritative off
AuthBasicProvider ldap
AuthType Basic
AuthName "Active Directory"
AuthLDAPURL "ldap://server1.my.company.tld:3268 server2.my.company.tld:3268/DC=ad,DC=upos,DC=pl?sAMAccountName?sub?(objectClass=*)"
AuthLDAPBindDN "[email protected]"
AuthLDAPBindPassword "some_random_password"
Require valid-user

It has taken me a long time to provide this answer, for which I am sorry.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/480221', 'https://serverfault.com', 'https://serverfault.com/users/156364/']}
jdg_80126
stackexchange
llm_judgeable_groundtruth_similarity
47511677
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just made a firebase cloud function : exports.deleteAfterSevenDays = functions.https.onRequest((req, res) => {... I deployed the function and got a function URL. When I request this url from my browser I get the following message : "Error: Forbidden Your client does not have permission to get URL /200 from this server." I have just updated to firebase Blaze plan because I thought there were limitations with Spark plan, but it still doesn't work. In my firebase cloud function logs it is written "Function execution took 572 ms, finished with status code: 302". My cron job "has been disabled automatically because of too many failed executions". Do you see what's wrong? Now provide the response and nothing else.
A cloud function should have a role binding with the member "allUsers" to let it be invoked from anywhere, by anyone, irrespective of authorization.

Without Authorization:

1. Go to the cloud function tab
2. Select your cloud function (check box)
3. Click "Add members" under the Permissions tab on the right side
4. Enter "allUsers" under "New members"
5. Select the role "Cloud Functions -> Cloud Functions Invoker"
6. Save
7. Test your cloud function by just pasting its URL in the browser

With Authorization:

It's always a good practice to set authorization on your cloud functions. Note: cloud functions failing with "403 Forbidden - Your client does not have permission to get URL" should be called by authorized users.

Simple test:

1. Click on the Cloud Shell (icon) terminal at the top
2. Type: gcloud auth print-identity-token
3. Copy the generated token
4. Pass it as the Authorization header while calling the cloud function: Authorization: bearer generated_token

Note: Never make a cloud function available to allUsers
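Once you have the identity token from gcloud auth print-identity-token, calling the function amounts to attaching it as a bearer Authorization header. A minimal Python sketch (the URL and token below are placeholders; the request is only constructed here, not actually sent):

```python
import urllib.request

token = "GENERATED_TOKEN"  # paste the output of: gcloud auth print-identity-token
url = "https://region-project.cloudfunctions.net/myFunction"  # placeholder URL

# Build the authorized request; urllib.request.urlopen(req) would send it.
req = urllib.request.Request(url, headers={"Authorization": "bearer " + token})
print(req.get_header("Authorization"))
```

The same header works with curl: curl -H "Authorization: bearer $(gcloud auth print-identity-token)" followed by the function URL.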
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/47511677', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6890707/']}
jdg_80127
stackexchange
llm_judgeable_groundtruth_similarity
48523118
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my extension I want to use my own WebAssembly module. After loading my module (in background.html or popup.html), I get the compile error: CompileError: WebAssembly.compile(): Wasm code generation disallowed by embedder. Are wasm modules not supported in Chrome Extensions? Now provide the response and nothing else.
It seems from this issue that Chrome requires script-src: 'unsafe-eval' CSP directive be active for WebAssembly compilation. See this discussion as to why this is the case, at least for now. Chrome Extensions come with default restrictions on CSP ; that includes not allowing unsafe-eval . Some of the restrictions cannot be lifted; in this case, you can allow unsafe-eval by adding a manifest key: "content_security_policy": "script-src 'self' 'unsafe-eval'; object-src 'self'" This should be enough to test if Wasm works in extensions. But heed this warning from documentation: However, we strongly recommend against doing this. These functions are notorious XSS attack vectors. Instead of allowing unsafe-eval for your whole extension, you can sandbox code that requires it, using the following approach from the docs: Using eval in Chrome Extensions. Safely. The gist of it is to create a separate page in your extension, where unsafe-eval is allowed but Chrome API access is disallowed; you then embed this page in your extension and communicate with it using postMessage() .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48523118', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9288982/']}
jdg_80128
stackexchange
llm_judgeable_groundtruth_similarity
3664234
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f(A,F)=\max(A,F)$ where $A\in\mathbb{R}$ is a variable and $F\in\mathbb{R}$ is a constant representing a "floor" below which the result should not be permitted to go. Is there any obvious expression (algorithm) for a generalized version of $f$ for square matrices $A\in\mathbb{R}^{n\times n}$ and $F$ representing a minimum amount of "positive-definiteness" required in the result? Background (if needed) I'm not sure what exactly I mean by "amount of positive-definiteness" but hoping there is some elegant/obvious quantification. My goal is to limit the step size in Quasi-Newton optimization by enforcing a minimum concave-up curvature / positive-definiteness of the estimated Hessian matrix. I could resort to adding a scalar multiple of the identity matrix to it, but that would be analogous to $f(A,F)=A+F$ in the real case, which is less ideal than $f(A,F)=\max(A,F)$ because it would shorten steps that are already short enough. Edit: Do I have to find the smallest eigenvalue $\lambda_{min}$ and evaluate $A+I(F-\lambda_{min})$ i.e. "add enough $I$ to $A$ to make its smallest eigenvalue $F$"? Is there any way to get an approximate result (even if slightly more positive-definite than required) without having to evaluate the eigenvectors?
I'm responding first to your background comment, but it will lead to an approach to your original question. A quasi-Newton method minimizes a smooth function $f:\mathbb R^n \to \mathbb R$ using the iteration $$\tag{1} x_{k+1} = \arg \min_x f(x_k) + \nabla f(x_k)^T(x - x_k) + \frac12 (x - x_k)^T B_k (x - x_k).$$ Quasi-Newton methods differ in the choice of the matrix $B_k$ . (If $B_k = \nabla^2 f(x_k)$ , then the above iteration is Newton's method. In quasi-Newton methods, $B_k$ is an approximation to $\nabla^2 f(x_k)$ that can be computed inexpensively.) The approximation in (1) is good when $x$ is close to $x_k$ . It would be natural to add a penalty term to the objective function in (1) to discourage $x$ from straying too far from $x_k$ : $$\tag{2} x_{k+1} = \arg \min_x f(x_k) + \nabla f(x_k)^T(x - x_k) + \frac12 (x - x_k)^T B_k (x - x_k) + \frac1{2t} \|x - x_k \|_2^2.$$ The parameter $t > 0$ can be thought of as a "step size" that controls how severely we are penalized for moving away from $x_k$ . Including such a penalty term is a common trick in optimization; for example, the proximal gradient method and the Levenberg-Marquardt algorithm can both be interpreted as using this trick. I'll assume that $B_k$ is symmetric and positive semidefinite, which is typical in quasi-Newton methods. Setting the gradient of the objective function in (2) with respect to $x$ equal to $0$ , we obtain $$\nabla f(x_k) + (B_k + \frac{1}{t} I)(x - x_k) = 0.$$ Here $I$ is the identity matrix. The coefficient matrix $B_k + \frac{1}{t} I$ is guaranteed to be positive definite. The solution to this equation is $$\tag{3} x_{k+1} = x_k - (B_k + \frac{1}{t} I)^{-1} \nabla f(x_k).$$ If $t$ is very small, then $(B_k + \frac{1}{t}I)^{-1} \approx t I$ , and the update (3) is approximately a gradient descent update with step size $t$ . On the other hand, if $t$ is large, then $(B_k + \frac{1}{t}I)^{-1} \approx B_k^{-1}$ , and the update (3) is approximately a quasi-Newton update. 
So the iteration (3) is like a compromise between a quasi-Newton method and gradient descent. The Levenberg-Marquardt algorithm chooses the parameter $t$ adaptively, as follows. If $f(x_{k+1}) < f(x_k)$ , then $x_{k+1}$ is accepted and $t$ is increased by a factor of 10. Otherwise, $x_{k+1}$ is rejected and $t$ is reduced by a factor of $10$ , and then $x_{k+1}$ is recomputed. We only accept $x_{k+1}$ once a reduction in the value of $f$ has been achieved. (We don't have to use a factor of 10, but that is a typical choice.) Note: Here is an important question about the above proposed algorithm. Quasi-Newton methods rely on the fact that the inverse of $B_k$ can be computed efficiently. Otherwise, we might as well just use Newton's method. In the algorithm I proposed, can the inverse of $B_k + \frac{1}{t} I$ be computed efficiently? If not, then we might as well just take $B_k = \nabla^2 f(x_k)$ . Can the quasi-Newton strategies to update $B_{k}^{-1}$ efficiently be adapted to update $(B_k + \frac{1}{t} I)^{-1}$ efficiently? That is a question I will need to ponder...
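To make the adaptive-$t$ scheme above concrete, here is a minimal Python sketch (not from the answer; it assumes numpy, an exact Hessian callback `B_f` for simplicity, and invented function names). It implements update (3) with the accept/grow, reject/shrink rule described for Levenberg-Marquardt:

```python
import numpy as np

def damped_newton_step(grad, B, t):
    """Solve (B + (1/t) I) d = -grad for the damped quasi-Newton step of eq. (3)."""
    n = B.shape[0]
    return np.linalg.solve(B + np.eye(n) / t, -grad)

def lm_minimize(f, grad_f, B_f, x0, t0=1.0, iters=50):
    """Levenberg-Marquardt-style loop: accept a step only if f decreases,
    growing t (more Newton-like) on success and shrinking it (more
    gradient-descent-like) on failure."""
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(iters):
        step = damped_newton_step(grad_f(x), B_f(x), t)
        if f(x + step) < f(x):
            x, t = x + step, t * 10.0   # accept: trust the quadratic model more
        else:
            t /= 10.0                    # reject: fall back toward gradient descent
    return x
```

Note that for a convex quadratic with positive semidefinite $B_k$, the damped step always decreases $f$, so in that easy case every step is accepted and $t$ grows toward the pure quasi-Newton regime.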
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3664234', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/86969/']}
jdg_80129
stackexchange
llm_judgeable_groundtruth_similarity
11620825
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So here is a complex/retarded situation. I am writing an XSD and there happens to be a requirement where i need 2 root elements (1 at any given time) <xs:element name="booksList"> <xs:complexType> <xs:sequence> <xs:element name="book" type="bookType" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> </xs:element> and then <xs:element name="book" type="bookType"></xs:element> at any given time, either of these element will be used as the root element, so an XML would look like <bookList><book><author>XYZ</author></book></bookList> or <book><author>XYZ</author></book> Both of these XML will be sent back to the user from 2 different URL's i.e. the list will be sent from localhost/books.xml?author=XYZ and single book will be sent from localhost/book_name.xml How can i achieve this with one xml ? I tried putting the book definition in the XSD but JAXB2.1 didn't generate any Book class. Is there something which i am missing ? EDIT1 : BookType has been generated but BookType doesn't have any root element. Now provide the response and nothing else.
XML SCHEMA I am writing an XSD and there happens to be a requirement where i need 2 root elements (1 at any given time) The XML schema below supports having the two root elements booksList and book that you are looking for. <?xml version="1.0" encoding="UTF-8"?><xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="booksList"> <xs:complexType> <xs:sequence> <xs:element name="book" type="bookType" minOccurs="0" maxOccurs="unbounded" /> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="book" type="bookType"></xs:element> <xs:complexType name="bookType"> <xs:sequence> <xs:element name="author" type="xs:string" /> </xs:sequence> </xs:complexType></xs:schema> GENERATED MODEL I tried putting the book definition in the XSD but JAXB2.1 didn't generate any Book class. Your JAXB (JSR-222) implementation will generate a class for the named complex type bookType , then for the bookElement it will create an @XmlElementDecl annotation on the ObjectFactory class. BooksList A class with an @XmlRootElement was generated on this class because it corresponds to a global element with an anonymous complex type. package forum11620825;import java.util.*;import javax.xml.bind.annotation.*;@XmlAccessorType(XmlAccessType.FIELD)@XmlType(name = "", propOrder = {"book"})@XmlRootElement(name = "booksList")public class BooksList { protected List<BookType> book; public List<BookType> getBook() { if (book == null) { book = new ArrayList<BookType>(); } return this.book; }} BookType This class was generated to correspond to the named complex types. 
package forum11620825;import javax.xml.bind.annotation.*;@XmlAccessorType(XmlAccessType.FIELD)@XmlType(name = "bookType", propOrder = {"author"})public class BookType { @XmlElement(required = true) protected String author; public String getAuthor() { return author; } public void setAuthor(String value) { this.author = value; }} ObjectFactory Global elements that correspond to named complex types have @XmlElementDecl annotations generated on the ObjectFactory class. This is necessary since multiple global elements could correspond to named complex types. package forum11620825;import javax.xml.bind.JAXBElement;import javax.xml.bind.annotation.XmlElementDecl;import javax.xml.bind.annotation.XmlRegistry;import javax.xml.namespace.QName;@XmlRegistrypublic class ObjectFactory { private final static QName _Book_QNAME = new QName("", "book"); public ObjectFactory() { } public BooksList createBooksList() { return new BooksList(); } public BookType createBookType() { return new BookType(); } @XmlElementDecl(namespace = "", name = "book") public JAXBElement<BookType> createBook(BookType value) { return new JAXBElement<BookType>(_Book_QNAME, BookType.class, null, value); }} XML Below are the XML documents from your question. booksList.xml <booksList> <book> <author>XYZ</author> </book></booksList> book.xml <book> <author>XYZ</author></book> DEMO CODE When you unmarshal a document in which the root element corresponds to an @XmlRootElement annotation you get an instance of the corresponding domain object. If you unmarshal a document in which the root element corresponds to an @XmlElementDecl annotation you get back an instance of JAXBElement that wraps a domain object corresponding to the named complex type. 
package forum11620825;import java.io.File;import javax.xml.bind.*;public class Demo { public static void main(String[] args) throws Exception { JAXBContext jc = JAXBContext.newInstance("forum11620825"); Unmarshaller unmarshaller = jc.createUnmarshaller(); File input1 = new File("src/forum11620825/booksList.xml"); BooksList bookList = (BooksList) unmarshaller.unmarshal(input1); File input2 = new File("src/forum11620825/book.xml"); JAXBElement<BookType> je = (JAXBElement<BookType>) unmarshaller.unmarshal(input2); BookType bookType = je.getValue(); }} UPDATE Below is a code fragment demonstrating how to wrap an instance of BookType in a JAXBElement so that it can be marshalled. ObjectFactory objectFactory = new ObjectFactory();JAXBElement<BookType> jaxbElement = objectFactory.createBook(aBookType);marshaller.marshal(jaxbElement, System.out);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11620825', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/504717/']}
jdg_80130
stackexchange
llm_judgeable_groundtruth_similarity
26486784
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following table MyTable :

 id │ value_two │ value_three │ value_four
────┼───────────┼─────────────┼────────────
  1 │ a         │ A           │ AA
  2 │ a         │ A2          │ AA2
  3 │ b         │ A3          │ AA3
  4 │ a         │ A4          │ AA4
  5 │ b         │ A5          │ AA5

I want to query an array of objects { value_three, value_four } grouped by value_two . value_two should be present on its own in the result. The result should look like this:

 value_two │ value_four
───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 a         │ [{"value_three":"A","value_four":"AA"}, {"value_three":"A2","value_four":"AA2"}, {"value_three":"A4","value_four":"AA4"}]
 b         │ [{"value_three":"A3","value_four":"AA3"}, {"value_three":"A5","value_four":"AA5"}]

It does not matter whether it uses json_agg() or array_agg() . However the best I can do is:

with MyCTE as (
  select value_two, value_three, value_four
  from MyTable
)
select value_two, json_agg(row_to_json(MyCTE)) value_four
from MyCTE
group by value_two;

Which returns:

 value_two │ value_four
───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 a         │ [{"value_two":"a","value_three":"A","value_four":"AA"}, {"value_two":"a","value_three":"A2","value_four":"AA2"}, {"value_two":"a","value_three":"A4","value_four":"AA4"}]
 b         │ [{"value_two":"b","value_three":"A3","value_four":"AA3"}, {"value_two":"b","value_three":"A5","value_four":"AA5"}]

With an extra value_two key in the objects, which I would like to get rid of. Which SQL (Postgres) query should I use? Now provide the response and nothing else.
json_build_object() in Postgres 9.4 or newer

Or jsonb_build_object() to return jsonb.

SELECT value_two
     , json_agg(json_build_object('value_three', value_three
                                , 'value_four' , value_four)) AS value_four
FROM   mytable
GROUP  BY value_two;

The manual: Builds a JSON object out of a variadic argument list. By convention, the argument list consists of alternating keys and values.

For any version (incl. Postgres 9.3)

row_to_json() with a ROW expression would do the trick:

SELECT value_two
     , json_agg(row_to_json((value_three, value_four))) AS value_four
FROM   mytable
GROUP  BY value_two;

But you lose original column names. A cast to a registered row type avoids that. (The row type of a temporary table serves for ad hoc queries, too.)

CREATE TYPE foo AS (value_three text, value_four text);  -- once in the same session

SELECT value_two
     , json_agg(row_to_json((value_three, value_four)::foo)) AS value_four
FROM   mytable
GROUP  BY value_two;

Or use a subselect instead of the ROW expression. More verbose, but without type cast:

SELECT value_two
     , json_agg(row_to_json((SELECT t FROM (SELECT value_three, value_four) t))) AS value_four
FROM   mytable
GROUP  BY value_two;

More explanation in Craig's related answer: PostgreSQL 9.2 row_to_json() with nested joins

db<>fiddle here Old sqlfiddle
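As a sanity check on what the aggregate returns, here is a small Python sketch (illustrative only, not from the answer; the row data and function name are invented) that reproduces the same grouped JSON shape client-side:

```python
import json
from itertools import groupby
from operator import itemgetter

rows = [  # what SELECT value_two, value_three, value_four FROM mytable would return
    {"value_two": "a", "value_three": "A",  "value_four": "AA"},
    {"value_two": "a", "value_three": "A2", "value_four": "AA2"},
    {"value_two": "b", "value_three": "A3", "value_four": "AA3"},
    {"value_two": "a", "value_three": "A4", "value_four": "AA4"},
    {"value_two": "b", "value_three": "A5", "value_four": "AA5"},
]

def group_rows(rows):
    """GROUP BY value_two, keeping only the other two keys in each object --
    the same shape json_build_object + json_agg produces."""
    rows = sorted(rows, key=itemgetter("value_two"))  # groupby needs sorted input
    return {
        key: [{"value_three": r["value_three"], "value_four": r["value_four"]}
              for r in grp]
        for key, grp in groupby(rows, key=itemgetter("value_two"))
    }
```

The point of the exercise: the extra `value_two` key disappears because only the two wanted columns are packed into each aggregated object, exactly as in the `json_build_object` query.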
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26486784', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1526301/']}
jdg_80131
stackexchange
llm_judgeable_groundtruth_similarity
8197329
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In a let form (Clojure here) I can doing something like (let [[u s v] (svd A)] (do-something-with u v)) where svd returns a list of length three. This is a very natural sort of thing to do, so why isn't that we don't we have (def [u s v] (svd A)) and its various generalizations as the default behavior of the def form? I don't see how this would interfere with anything that def is already doing. Can someone who understands the Zen of Lisp or Clojure explain why def does not support binding (with destructuring) as powerful as let ? Now provide the response and nothing else.
def is a special form at the compiler level: it makes a Var. def has to be available and usable before destructuring is available. You see something similar with let* , a compiler primitive that supports no destructuring: then after several thousand lines in clojure/core.clj the language is finally powerful enough to provide a version of let with destructuring, as a macro on top of let* . If you want, you can write a macro (say, def+ ) that does this for you. Personally I think it's kinda gross and wouldn't use it, but using a Lisp means getting to use a language that suits you personally.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8197329', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/390868/']}
jdg_80132
stackexchange
llm_judgeable_groundtruth_similarity
209696
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have a Windows Server 2008 R2 x64 server that is AD controller and file server. I have a problem that Windows XP clients experience terribly slow (less than 10Mbps, really less than ten-mega bits -per-second) downloading of files from a share. Server is connected to 1Gbps switch using 1Gbps Nvidia NForce card and client is connected using 100Mbps built-in card. This slow downloading can also be seen when I've booted client computer from CentOS Linux 5.5 Live-USB and used smbclient for downloading. But downloading from a Samba share on Linux server, which is also connected using 1Gbps link is fast. What is very strange I've created a pair of programs (attached below) which test plain TCP throughput in C#, and they're performing as expected β€” at about 89Mbps. I've disabled firewall on client and I'm using dot_nc_l 21000 > NIL on client and dot_nc [client_ip] < 100m.dat on Windows server. And I get about 9 seconds, when copying the same 100MB file from share takes over 2 minutes. How to eliminate this? 
Some pictures generated with wireshark on Linux client: Downloading 100MB file from Windows 2008 CIFS file server connected with 1Gbps NIC to Centos 5 Linux client connected with 100Mbps NIC with smbclient: Downloading 100MB file from Fedora Linux CIFS file server on Samba connected with 1Gbps NIC to Centos 5 Linux client connected with 100Mbps NIC with smbclient (same scale as above): Here are these programs (linked are compiled using mono's gmcs, require .NET2): dot_nc.cs using System;using System.IO;using System.Diagnostics;using System.Net.Sockets;public class dot_nc{ public static void Main(string[] args) { string hostname = args[0]; int port = int.Parse(args[1]); Stopwatch stopwatch = new Stopwatch(); stopwatch.Start(); TcpClient client = new TcpClient(hostname, port); stopwatch.Stop(); Console.WriteLine("Connection: {0}ms", stopwatch.ElapsedMilliseconds); stopwatch.Reset(); stopwatch.Start(); byte[] buffer = new byte[4096]; { Stream stdin = Console.OpenStandardInput(); NetworkStream netout = client.GetStream(); while ( true ) { int bytesread = stdin.Read(buffer, 0, buffer.Length); if ( bytesread <= 0 ) { break; } netout.Write(buffer, 0, bytesread); } } stopwatch.Stop(); Console.WriteLine("Sending: {0}ms", stopwatch.ElapsedMilliseconds); client.Close(); }} dot_nc_l.cs using System;using System.IO;using System.Diagnostics;using System.Net;using System.Net.Sockets;public class dot_nc{ public static void Main(string[] args) { int port = int.Parse(args[0]); TcpListener server = new TcpListener(IPAddress.Any, port); server.Start(); TcpClient client = server.AcceptTcpClient(); NetworkStream netin = client.GetStream(); byte[] buffer = new byte[4096]; Stream stdout = Console.OpenStandardOutput(); int processed_bytes = 0; int processed_chunks = 0; while ( true ) { int bytesread = netin.Read(buffer, 0, buffer.Length); if ( bytesread <= 0 ) { break; } stdout.Write(buffer, 0, bytesread); processed_bytes += bytesread; processed_chunks++; } netin.Close(); client.Close(); 
server.Stop(); Console.Error.WriteLine( "Received: {0} chunks of data of {1} average size", processed_chunks, processed_bytes/processed_chunks ); }} Now provide the response and nothing else.
The problem was caused by:

- too small packet buffers in an inexpensive gigabit switch;
- an inadequate congestion avoidance algorithm used in Windows Server 2008 File Services;
- disabled flow control in the network adapter (it was disabled by default).

Because flow control was disabled, Windows was sending packets up to the window size in one batch over the 1Gbps connection. Since the 100Mbps client receives packets much more slowly, almost all data up to the window size needed to be buffered by the switch. As this cheap switch has very small buffers (buffer sizes aren't even stated in the specifications, but it has to be less than 64kB per port, as even disabling window scaling did not help) it had to drop excess packets. Packet loss caused the delay of about 0.25s seen on the graph. But the congestion avoidance algorithm used in File Services, or lack thereof, did not reduce the TCP window size, so the next batch of packets wasn't smaller: it congested the connection again and again, causing congestion collapse. Standard TCP connections (not File Services) must use a different congestion control algorithm and do not get congested repeatedly. I suppose treating File Services specially in the Windows TCP stack helps in benchmarks against, for example, Samba. So the solutions are:

- Enable flow control in the network adapter properties. It isn't an ideal solution, as any File Services transfer to a 100Mbps client will also slow down concurrent transfers to 1Gbps clients to less than 100Mbps speeds.
- Or connect 100Mbps clients to an enterprise-class switch with much bigger buffers. This is the solution I've used. I have a 10-year-old "3Com SuperStack 3 3300 SM" switch with one 1000Base-SX fiber optic gigabit Ethernet MT-RJ port. I bought a Cisco 1000BASE-SX mini-GBIC module (MGBSX1) with an LC port for my Linksys gigabit switch and an LC/MT-RJ multi-mode fiber patchcord (about $150 for both) and connected all 100Mbps clients to this 3Com switch.
I've also enabled flow control but it should not cause slowdowns with no 100Mbps client connected. Thanks to SpacemanSpiff , whose comments helped to resolve this.
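The buffer arithmetic behind this diagnosis can be illustrated with a few lines of Python (a back-of-the-envelope sketch, not from the answer; the 64 kB per-port figure is the upper bound guessed above, not a measured value):

```python
def buffer_overflow_time(buffer_bytes, in_bps, out_bps):
    """Seconds until a switch buffer fills when the ingress rate exceeds egress."""
    fill_rate_bps = in_bps - out_bps          # net rate at which the buffer grows
    return buffer_bytes * 8 / fill_rate_bps

# Assumed figures: 64 kB of per-port buffer, 1 Gbps sender, 100 Mbps receiver.
t = buffer_overflow_time(64 * 1024, 1_000_000_000, 100_000_000)
```

Under these assumptions the buffer fills in well under a millisecond, so any uninterrupted burst larger than roughly 64 kB (i.e. almost any full TCP window sent at line rate) forces the switch to start dropping packets, which is consistent with the repeated losses described above.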
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/209696', 'https://serverfault.com', 'https://serverfault.com/users/1806/']}
jdg_80133
stackexchange
llm_judgeable_groundtruth_similarity
3534921
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What I'm trying to ask here is, if you take a larger and larger set of consecutive primitive Pythagorean triples, what percent of that set will have an even number as their smallest leg? Ex: 8,15,17. There's a way to generate a Pythagorean triple for every odd integer, $a^2+(\frac{a^2-1}{2})^2=(\frac{a^2-1}{2}+1)^2,$ but Pythagorean triples that have a even number as their smallest leg are not so easy. Can anybody help/give suggestions? Thanks! Now provide the response and nothing else.
This turns out to be a reasonably complicated question. To answer a question of the form "what proportion of an infinite set", one first has to decide on an ordering of that infinite set. The most convenient ordering on Pythagorean triples $(a,b,c)$ comes from the classical parametrization $$a = k(m^2-n^2), \quad b=k(2mn), \quad c=k(m^2+n^2),$$ where $m>n>0$ are relatively prime integers, not both odd, and $k$ is a positive integer. One can then count approximately how many Pythagorean triples there are with $1\le k,m,n\le x$ , and how many of them have either $k$ even or $b$ as the smaller side. Those for which $b$ is the smaller sideβ€”that is, for which $2mn < m^2-n^2$ , or $(\frac mn)^2 - 2\frac mn-1 > 0$ β€”correspond to numbers $m,n$ with $m>(1+\sqrt2)n$ . Out of all pairs with $m>n>0$ , this corresponds to a proportion of $\frac1{1+\sqrt2} = \sqrt2-1$ . Of course the even $k$ correspond to a proportion of $\frac12$ . So the triples $(k,m,n)$ yielding an odd shorter side comprise a proportion $\big(1-(\sqrt2-1)\big)(1-\frac12) = 1-\frac1{\sqrt2}$ , meaning that those yielding an even shorter side comprise a proportion $\frac1{\sqrt2}$ . There are some assumptions being swept under the rugβ€”for example, that $k$ being even and $2mn$ being less than $m^2-n^2$ are asymptotically independent; and also that these proportions don't change when we restrict to relatively prime pairs $(m,n)$ that are not both odd. But I believe these assumptions can be verified with a lengthier argument. So in conclusion: under this ordering, the percentage of Pythagorean triples with the shorter leg even seems to be $\frac1{\sqrt2} \approx 70.71\%$ . (And if we restrict to primitive Pythagorean triplesβ€”those for which the three sides are relatively primeβ€”then the $k$ variable disappears, and the percentage then becomes $\sqrt2-1 \approx 41.42\%$ .) 
The most natural ordering probably comes not from saying that $k,m,n\le x$ , but rather that all three sides of the triangle are less than $y$ , so that $k(m^2+n^2)\le y$ . In this case, instead of the proportion of the triangle with vertices $(0,0)$ , $(x,0)$ , and $(x,x)$ that lies under the line $m=(\sqrt2+1)n$ , I believe we should take the proportion of the circular wedge $\{m^2+n^2\le y,\, m>n\}$ that lies under that lineβ€”and that proportion turns out to be exactly $\frac12$ ! So under this ordering, the percentage of Pythagorean triples with the shorter leg even seems to be $\frac34$ , and the percentage of primitive Pythagorean triples with the shorter leg even seems to be $\frac12$ .
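As a quick empirical check of that last claim (my own sketch in Python, not part of the answer), one can enumerate primitive triples via the parametrization used above, order them by hypotenuse, and count how often the shorter leg is even; the fraction lands very close to $\frac12$:

```python
from math import gcd

def primitive_triples(limit):
    """Primitive Pythagorean triples (a, b, c) with c <= limit, from Euclid's
    parametrization a = m^2 - n^2, b = 2mn, c = m^2 + n^2 with m > n > 0
    coprime and of opposite parity."""
    m = 2
    while m * m + 1 <= limit:          # smallest c for a given m is m^2 + 1
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1 and m * m + n * n <= limit:
                yield m * m - n * n, 2 * m * n, m * m + n * n
        m += 1

triples = list(primitive_triples(100_000))
even_short = sum(1 for a, b, c in triples if min(a, b) % 2 == 0)
fraction = even_short / len(triples)   # empirically close to 1/2
```

Since $b = 2mn$ is always the even leg in a primitive triple, `min(a, b) % 2 == 0` is just testing whether $2mn < m^2 - n^2$, matching the wedge argument above.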
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3534921', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/747919/']}
jdg_80134
stackexchange
llm_judgeable_groundtruth_similarity
11586618
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am implementing a sort of autocomplete for an iOS app. The data I am using for the autocomplete values is a comma-separated text file with about 100,000 strings. This is what I am doing now: Read the text file, and create an NSArray with 100,000 NSString . As the user types, do [array containsObject:text] Surely there is a better/faster way to do this lookup. Any thoughts? Now provide the response and nothing else.
Absolutely, there is! It's not "in Objective-C" though: most likely, you would need to code it yourself. The idea is to convert your list of string to a suffix tree , a data structure that lets you search by prefix very quickly. Searching for possible completions in a suffix tree are very fast, but the structure itself is not easy to build. A quick search on the internet revealed that there is no readily available implementation in Objective C, but you may be able to port an implementation in another language , use a C implementation , or even write your own if you are not particularly pressed for time. Perhaps an easier approach would be to sort your strings alphabetically, and run a binary search on the prefix that has been entered so far. Though not as efficient as a suffix tree, the sorted array approach will be acceptable for 100K strings, because you get to the right spot in under seventeen checks.
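The sorted-array approach from the last paragraph can be sketched in Python (illustrative only; the stdlib `bisect` module does the binary search, and the word list and identifiers are invented):

```python
import bisect

def completions(sorted_words, prefix, limit=10):
    """Autocomplete via binary search on a sorted list.
    bisect_left locates the first candidate in O(log n); all matches for a
    prefix are contiguous in sorted order, so we then scan forward until the
    prefix no longer matches (or we hit the display limit)."""
    i = bisect.bisect_left(sorted_words, prefix)
    out = []
    while i < len(sorted_words) and sorted_words[i].startswith(prefix) and len(out) < limit:
        out.append(sorted_words[i])
        i += 1
    return out
```

With 100,000 strings this needs at most seventeen comparisons to find the starting position, as noted above, versus up to 100,000 for a linear `containsObject:`-style scan.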
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11586618', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1007895/']}
jdg_80135
stackexchange
llm_judgeable_groundtruth_similarity
4079920
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know you can minify PHP, but I'm wondering if there is any point. PHP is an interpreted language so will run a little slower than a compiled language. My question is: would clients see a visible speed improvement in page loads and such if I were to minify my PHP? Also, is there a way to compile PHP or something similar? Now provide the response and nothing else.
PHP is compiled into bytecode, which is then interpreted on top of something resembling a VM. Many other scripting languages follow the same general process, including Perl and Ruby. It's not really a traditional interpreted language like, say, BASIC. There would be no effective speed increase if you attempted to "minify" the source. You would get a major increase by using a bytecode cache like APC . Facebook introduced a compiler named HipHop that transforms PHP source into C++ code. Rasmus Lerdorf, one of the big PHP guys did a presentation for Digg earlier this year that covers the performance improvements given by HipHop. In short, it's not too much faster than optimizing code and using a bytecode cache. HipHop is overkill for the majority of users. Facebook also recently unveiled HHVM , a new virtual machine based on their work making HipHop. It's still rather new and it's not clear if it will provide a major performance boost to the general public. Just to make sure it's stated expressly, please read that presentation in full. It points out numerous ways to benchmark and profile code and identify bottlenecks using tools like xdebug and xhprof , also from Facebook. 2021 Update HHVM diverged away from vanilla PHP a couple versions ago. PHP 7 and 8 bring a whole bunch of amazing performance improvements that have pretty much closed the gap. You now no longer need to do weird things to get better performance out of PHP! Minifying PHP source code continues to be useless for performance reasons.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4079920', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/383609/']}
jdg_80136
stackexchange
llm_judgeable_groundtruth_similarity
16694239
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Does the string returned from the GetStringUTFChars() end with a null terminated character? Or do I need to determine the length using GetStringUTFLength and null terminate it myself? Now provide the response and nothing else.
Yes, GetStringUTFChars returns a null-terminated string. However, I don't think you should take my word for it, instead you should find an authoritative online source that answers this question. Let's start with the actual Java Native Interface Specification itself, where it says: Returns a pointer to an array of bytes representing the string in modified UTF-8 encoding. This array is valid until it is released by ReleaseStringUTFChars() . Oh, surprisingly it doesn't say whether it's null-terminated or not. Boy, that seems like a huge oversight, and fortunately somebody was kind enough to log this bug on Sun's Java bug database back in 2008. The notes on the bug point you to a similar but different documentation bug (which was closed without action), which suggests that the readers buy a book, " The Java Native Interface: Programmer's Guide and Specification " as there's a suggestion that this become the new specification for JNI. But we're looking for an authoritative online source, and this is neither authoritative (it's not yet the specification) nor online. Fortunately, the reviews for said book on a certain popular online book retailer suggest that the book is freely available online from Sun, and that would at least satisfy the online portion. Sun's JNI web page has a link that looks tantalizingly close, but that link sadly doesn't go where it says it goes. So I'm afraid I cannot point you to an authoritative online source for this, and you'll have to buy the book (it's actually a good book), where it will explain to you that: UTF-8 strings are always terminated with the '\0' character, whereas Unicode strings are not. To find out how many bytes are needed to represent a jstring in the UTF-8 format, JNI programmers can either call the ANSI C function strlen on the result of GetStringUTFChars , or call the JNI function GetStringUTFLength on the jstring reference directly. 
(Note that in the above sentence, "Unicode" means "UTF-16", or more accurately "the internal two-byte string representation used by Java", though finding proof of that is left as an exercise for the reader.)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16694239', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/473259/']}
jdg_80137
stackexchange
llm_judgeable_groundtruth_similarity
26678362
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: NSArray has - (NSUInteger)indexOfObject:(id)obj inSortedRange:(NSRange)r options:(NSBinarySearchingOptions)opts usingComparator:(NSComparator)cmp to determine the insert position of a new object in a sorted array. What is the best and high-performance way to do this in pure Swift? Something along the lines of: var myArray = ["b", "e", "d", "a"]myArray.sort { $0 < $1 }// myArray is now [a, b, d, e]myArray.append("c")myArray.sort { $0 < $1 }// myArray is now [a, b, c, d, e] Instead of appending the new element and then sorting the array, I would like to figure out the correct position and insert the element: let index = [... how to calculate this index ??? ...]myArray.insert("c", atIndex: index) Now provide the response and nothing else.
Here is a possible implementation in Swift using binary search (from http://rosettacode.org/wiki/Binary_search#Swift with slight modifications):

extension Array {
    func insertionIndexOf(_ elem: Element, isOrderedBefore: (Element, Element) -> Bool) -> Int {
        var lo = 0
        var hi = self.count - 1
        while lo <= hi {
            let mid = (lo + hi) / 2
            if isOrderedBefore(self[mid], elem) {
                lo = mid + 1
            } else if isOrderedBefore(elem, self[mid]) {
                hi = mid - 1
            } else {
                return mid // found at position mid
            }
        }
        return lo // not found, would be inserted at position lo
    }
}

As with indexOfObject:inSortedRange:options:usingComparator: it is assumed that the array is sorted with respect to the comparator. It returns either (any) index of the element if the element is already present in the array, or the index where it can be inserted while preserving the order. This corresponds to the NSBinarySearchingInsertionIndex of the NSArray method. Usage:

let newElement = "c"
let index = myArray.insertionIndexOf(newElement) { $0 < $1 } // Or: myArray.insertionIndexOf(newElement, isOrderedBefore: <)
myArray.insert(newElement, at: index)
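For comparison, here is the same insertion-index search sketched in Python (my own illustration, not from the answer), using the standard-library bisect module instead of a hand-rolled loop. One behavioral difference: bisect_left always returns the leftmost valid position, whereas the Swift version may return any index of a duplicate element.

```python
import bisect

def insertion_index(arr, elem):
    """Leftmost index at which elem can be inserted while keeping arr sorted --
    the counterpart of insertionIndexOf / NSBinarySearchingInsertionIndex."""
    return bisect.bisect_left(arr, elem)

my_array = ["a", "b", "d", "e"]
my_array.insert(insertion_index(my_array, "c"), "c")   # my_array is now a..e
```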
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26678362', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/292145/']}
jdg_80138
stackexchange
llm_judgeable_groundtruth_similarity
27632915
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to do the following in Laravel 4?

... DB::table('myTable')
    ->select(DB::raw($columnNames))
    ->whereNotNull(function($query) use($columns) {
        foreach ($columns as $column) {
            $query->whereNotNull($column);
        }
    })
    ->get();

If I have the following table:

table: myTable

 id │ name │ age  │ weight
==========================
  1 │ Jane │ NULL │ 150
  2 │ NULL │ 12   │ 80
  3 │ Bob  │ NULL │ NULL
  4 │ John │ 22   │ 120
  5 │ Cody │ NULL │ NULL

If $columns is [age, weight] and $columnNames is 'age, weight' , then applying the above whereNotNull statement, I would expect output like this:

 age  │ weight
==============
 NULL │ 150
 12   │ 80
 22   │ 120

How can I get this done? UPDATE: The condition is to return all rows where the selected columns are not ALL null. So a whereNotNull clause must be applied to each (selected) column in each row. If all columns are NULL, then whereNotNull will return false and that row shouldn't be part of the results. So only rows which have AT LEAST one non-NULL value should be returned. Now provide the response and nothing else.
If those are the only where's you don't even need a nested where. Important: orWhereNotNull instead of whereNotNull so only one column has to be not NULL.

$query = DB::table('myTable')->select(DB::raw($columnNames));
foreach ($columns as $column) {
    $query->orWhereNotNull($column);
}
$result = $query->get();

Also (at least with your example) you don't need a separate variable $columnNames since select will accept an array of column names.

$query = DB::table('myTable')->select($columns);

If you happen to need more where conditions (especially ones with AND) you need a nested where:

$query = DB::table('myTable')->select(DB::raw($columnNames));
$query->where(function($q) use ($columns) {
    foreach ($columns as $column) {
        $q->orWhereNotNull($column);
    }
});
$result = $query->get();

A nested where will put ( ) around the where clauses. That means instead of:

WHERE age IS NOT NULL OR weight IS NOT NULL AND foo = 'bar'

You get:

WHERE (age IS NOT NULL OR weight IS NOT NULL) AND foo = 'bar'
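To sanity-check the OR-of-not-null condition against the sample table from the question, here is a small illustrative Python sketch (not Laravel code) applying the same filter in memory:

```python
# The question's myTable; None stands in for SQL NULL.
rows = [
    {"id": 1, "name": "Jane", "age": None, "weight": 150},
    {"id": 2, "name": None,   "age": 12,   "weight": 80},
    {"id": 3, "name": "Bob",  "age": None, "weight": None},
    {"id": 4, "name": "John", "age": 22,   "weight": 120},
    {"id": 5, "name": "Cody", "age": None, "weight": None},
]
columns = ["age", "weight"]

# Equivalent of: SELECT age, weight ... WHERE age IS NOT NULL OR weight IS NOT NULL.
# Rows where ALL selected columns are NULL (ids 3 and 5) drop out.
result = [
    {col: row[col] for col in columns}
    for row in rows
    if any(row[col] is not None for col in columns)
]
```

This reproduces exactly the three-row output the question asks for.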
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27632915', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2047843/']}
jdg_80139
stackexchange
llm_judgeable_groundtruth_similarity
48964434
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: absolute beginner here having an absolute nightmare deploying a tiny website to Heroku. I have worked through numerous issues to get to this point, but I now seem to have hit another wall. I have raised this as a support ticket with Heroku, but unless it is an issue on their side I doubt whether they will come back to me with a clear answer. I can successfully deploy to Heroku, however, when opening up my web app I can only see the words "Internal Server Error" in place of my site. I receive an issue/error through to my Sentry log, which is as follows:

MESSAGE
ParamValidationError: Parameter validation failed:
Invalid bucket name "": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$"

EXCEPTION (most recent call first)

storages/backends/s3boto3.py in exists at line 478

    def exists(self, name):
        name = self._normalize_name(self._clean_name(name))
        if self.entries:
            return name in self.entries
        try:
            self.connection.meta.client.head_object(Bucket=self.bucket_name, Key=name)
            return True
        except ClientError:
            return False

    name 'CACHE/css/cf136be60f95.css'
    self <storages.backends.s3boto3.S3Boto3Storage object at 0x7fbe4b7f4e80>

compressor/base.py in output_file at line 321

    new_filepath = self.get_filepath(content, basename=basename)
    if not self.storage.exists(new_filepath) or forced:
        self.storage.save(new_filepath, ContentFile(content.encode(self.charset)))
    url = mark_safe(self.storage.url(new_filepath))
    return self.render_output(mode, {"url": url})

    basename None
    content '/*! * Material Design for Bootstrap 4 * Version: MDB Free 4.4.5 ...' (truncated CSS header; the same value recurs in the frames below)
    forced False
    mode 'file'
    new_filepath 'CACHE/css/cf136be60f95.css'
    self <compressor.css.CssCompressor object at 0x7fbe4aadf240>

compressor/base.py in handle_output at line 310
compressor/base.py in output at line 302
compressor/css.py in output at line 51
compressor/css.py in output at line 49
compressor/templatetags/compress.py in render_compressed at line 107
compressor/templatetags/compress.py in render at line 131
Called from: django/template/base.py in render_annotated

I have used django-cookiecutter as the base template for my site. Only yesterday I read a separate issue raised against django-cookiecutter for another Internal Server Error issue which I hoped may help me too (even though the details appeared different), but I have tried removal of "min" from the css file without success.
As the error message mentions storage and compressor, is it possible that this is Django-compressor related or whitenoise (I have both installed)? I have looked into trying to compile offline following the instructions here, but this might be out of date by now and as my site is working well locally I want to avoid unnecessary coding if I can help it. Assistance would be greatly appreciated. I am using Windows with Python 3.6.2 and Django 2.0.2. My production static settings are as follows:

# Static Assets
# ------------------------
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

# COMPRESSOR
# ------------------------------------------------------------------------------
COMPRESS_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
COMPRESS_URL = STATIC_URL
COMPRESS_ENABLED = env.bool('COMPRESS_ENABLED', default=True)

Thanks in advance.

UPDATE: I still see the Internal Server Error, however, I'm sure this relates to the way I have set up my bucket. I have followed this tutorial adding the IAM group and policy, but I must be doing something wrong because I get an improperly configured error message with the following details:

Client error: An error occurred (403) when calling the HeadBucket operation: Forbidden

My policy in AWS is as follows (of which I have tried a few iterations):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:HeadBucket",
                "s3:ListObjects"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::mybucket/*",
                "arn:aws:s3:::mybucket"
            ]
        }
    ]
}

I'm quickly running out of hair as I seem to be pulling it out at an alarming rate. No doubt this is yet another schoolboy error, so appreciate your time in reviewing. Now provide the response and nothing else.
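For what it's worth, the Sentry message reports the bucket name as the empty string "", which typically means the setting that feeds the bucket name is unset in the deployment environment. A quick illustrative check against the exact regex from the error (the env var name below follows the common django-storages convention, so treat it as an assumption about this particular config):

```python
import os
import re

# Regex taken verbatim from the ParamValidationError in the Sentry log.
BUCKET_NAME_RE = re.compile(r"^[a-zA-Z0-9.\-_]{1,255}$")

def is_valid_bucket_name(name):
    """True if boto3's parameter validation would accept this bucket name."""
    return bool(BUCKET_NAME_RE.match(name))

# An unset env var comes back as "" here, which fails the {1,255} length
# requirement -- the exact symptom in the traceback above.
bucket = os.environ.get("AWS_STORAGE_BUCKET_NAME", "")
bucket_ok = is_valid_bucket_name(bucket)
```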
The problem has been solved. It seems that NGINX doesn't allow large header content. From this help https://medium.com/@mshanak/solve-nginx-error-signin-oidc-502-bad-gateway-dotnet-core-and-identity-serve-bc27920b42d5 , we have set these properties:

nginx.conf

http {
    ...
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    large_client_header_buffers 4 16k;
    ...
}

default.conf

location / {
    ...
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    ...
}

Wonder if there is any way to configure IdentityServer to send much smaller header content!
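As a rough way to see whether response headers really exceed what nginx buffers by default, one can estimate their serialized size; proxy_buffer_size defaults to one memory page (4k or 8k), which large auth cookies easily blow past, and the 128k value above gives ample headroom. An illustrative Python sketch (the header values here are made up for the example):

```python
def serialized_header_bytes(headers):
    # Each header travels on the wire as "Name: value\r\n";
    # a blank line (CRLF) terminates the header block.
    total = sum(len(name) + 2 + len(value) + 2 for name, value in headers.items())
    return total + 2  # final CRLF

headers = {
    "Content-Type": "text/html; charset=utf-8",
    "Set-Cookie": "idsrv.session=" + "x" * 5000,  # oversized auth cookie
}
DEFAULT_PROXY_BUFFER = 4096  # nginx's typical proxy_buffer_size default
exceeds_default = serialized_header_bytes(headers) > DEFAULT_PROXY_BUFFER
```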
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/48964434', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9386330/']}
jdg_80140