source                  stringclasses (1 value)
task_type               stringclasses (1 value)
in_source_id            stringlengths (1-8)
prompt                  stringlengths (209-40.4k)
gold_standard_solution  stringlengths (0-56.7k)
verification_info       stringclasses (1 value)
metadata                stringlengths (138-225)
problem_id              stringlengths (9-10)
stackexchange
llm_judgeable_groundtruth_similarity
66009560
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to implement frontend validation with either Yup or Joi. From all the docs and articles that I've found, I've got to a couple of conclusions: Joi has better performance Joi is more popular for backend validation, while Yup is more popular for frontend validation Joi has a lack of support on the frontend Per official docs , Yup is leaner and built with client-side validation Yup bundle size is ~2.5 times smaller than Joi - link However, I didn't manage to find what Joi lacks in terms of support compared to Yup? Right now, from all of these conclusions, it's choosing to either have a smaller bundle or better performance. Now provide the response and nothing else.
In the past, it was true that joi lacked browser support, at least out of the box. It uses a few Node.js APIs, which aren't available in the browser, to implement some of its features. It was still possible to use Browserify on it, or use the unofficial joi-browser npm package, but the process was cumbersome enough that it was less common to see joi used on the frontend. Yup was basically a workaround for that. However, that information is out of date, as joi now includes an official browser build that's easy to use and roughly the same size as yup. See its package.json: https://github.com/sideway/joi/blob/83092836583a7f4ce16cbf116b8776737e80d16f/package.json#L8 Your bundler, assuming it is set up correctly, should detect the browser build and use it automatically. For example, if you're using Rollup, make sure you are using @rollup/plugin-node-resolve with the browser: true option. I would strongly recommend using joi on the frontend now as you can share schemas between frontend and backend, which is really fantastic.
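For reference, here is a minimal sketch of such a Rollup setup. Treat it as an illustrative assumption rather than a canonical config: the browser: true option is the essential part, the rest is boilerplate you would adapt to your project (newer versions of the plugin expose the named nodeResolve export used here).

// rollup.config.js
import { nodeResolve } from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';

export default {
  input: 'src/index.js',
  output: { file: 'dist/bundle.js', format: 'iife', name: 'app' },
  plugins: [
    // browser: true tells the resolver to honour the "browser" field in
    // package.json, so joi's browser build gets picked up automatically
    nodeResolve({ browser: true }),
    commonjs(),
  ],
};

Sharing schemas is then just a matter of importing the same module on both sides, e.g. a hypothetical import { signupSchema } from './schemas.js' in both the SPA and the Node server.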
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/66009560', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8383917/']}
jdg_86841
stackexchange
llm_judgeable_groundtruth_similarity
60499
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If one considers the intermediate above, there are immediately two possibilities for subsequent reaction: 1. A Wittig-type olefination which would lead to a vinyl silane : This reaction happens to be unproductive as the vinyl silane can't do any further chemistry under the reaction conditions 2. A Peterson-type olefination : Which gives a vinyl phosphine (sp.), which may further be deprotonated to give another ylid (with concomitant formation of TMSOH) which will carry on to eventually give an allene. My question is why is the Peterson faster than the Wittig. The immediate thing that springs to mind is the difference between acid and base catalysed Peterson reactions in which the base mediated version snaps shut before the C-C bond has a chance to rotate (i.e. kinetically very fast), but this isn't an explanation. Potentially more promising is comparing the electronegativity/positivity of the elements, and noting that silicon is significantly more electropositive and hence a better electrophile however in the Peterson, the leaving group isn't such a driving force as the phosphine oxide from the Wittig. Now provide the response and nothing else.
It seems that the silicon atom is more oxophilic than the phosphorus atom. This could in principle be explained by the higher strength of the $\ce{Si-O}$ bond, as indicated by the dissociation energies $D$ (see http://www.wiredchemist.com/chemistry/data/bond_energies_lengths.html):
$$\begin{array}{ccc}\hline
\textbf{Bond} & \textbf{Dissociation energy / kJ mol}\mathbf{^{-1}} & \textbf{Bond length / pm} \\\hline
\ce{Si-O} & 452 & 163 \\
\ce{P-O} & 335 & 163 \\ \hline
\end{array}$$
The $\ce{Si-O}$ bond is considerably stronger, which explains why the Peterson olefination is favoured. The bond lengths $r_b$, however, appear to be the same. By the way, a nice discussion of oxophilicity was given recently in Inorg. Chem. 2016, 55 (18), 9461.
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/60499', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/21296/']}
jdg_86842
stackexchange
llm_judgeable_groundtruth_similarity
515812
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So I was just starting on convex optimization and was having a slightly hard time visualizing the lagrangian being always concave because it is the pointwise infimum of a family of affine functions. Can anyone help explain this? I've googled extensively but most places just state this without elaboration or examples. Thanks. Now provide the response and nothing else.
Daniel Fischer gave a transparent explanation in terms of epigraphs $\{(x,y): y\ge f(x)\}$: a function is convex if and only if its epigraph is convex, and the epigraph of a pointwise supremum is the intersection of the epigraphs; hence the pointwise supremum of convex functions is convex. One can argue similarly for concavity, using the hypographs $\{(x,y): y\le f(x)\}$: such a set is convex if and only if $f$ is concave, and taking the pointwise infimum of functions corresponds to intersecting these sets.
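In symbols, writing $\operatorname{epi} f = \{(x,y): y \ge f(x)\}$ and $\operatorname{hyp} f = \{(x,y): y \le f(x)\}$, the two identities behind this argument are
$$\operatorname{epi}\Big(\sup_{i} f_i\Big)=\bigcap_{i}\operatorname{epi}(f_i),\qquad\operatorname{hyp}\Big(\inf_{i} f_i\Big)=\bigcap_{i}\operatorname{hyp}(f_i),$$
and an arbitrary intersection of convex sets is convex. Since each affine function has a half-space (hence a convex set) as its hypograph, the pointwise infimum of a family of affine functions is concave, which is exactly the statement about the Lagrange dual function.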
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/515812', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/73004/']}
jdg_86843
stackexchange
llm_judgeable_groundtruth_similarity
4089455
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the above isosceles right triangle ABC, with its two sides $AB = AC = 1$ unit, we take a random point D on the hypotenuse and draw perpendicular lines to the sides AB and AC, which intersect them at points E and F respectively.Show that the maximum of the three areas AFDE, EBD and CDF is always $\geq \frac {2}{9}$ . If we set $EB = x$ then, area $EBD = \frac {x^2}{2}$ , $AFDE = (1-x)*x$ , $CDF = \frac {(1-x)^2}{2}$ . Also $AFDE + EBD + CDF = \frac {1}{2}$ . Clearly the 3 areas can't be equal.We can only have 2 of them equal, when $x=0.5$ or $x=0.33$ or $x=0.66$ . When $x=0.33 = \frac {1}{3}$ then $EBD = \frac {x^2}{2} = \frac{1}{18}$ and $AFDE = CDF = \frac{1}{2}*(\frac{9}{18}-\frac{1}{18}) = \frac{2}{9}$ . But I don't know if this is a sufficient proof. Now provide the response and nothing else.
Yes, you are right to compare them in this way. But in fact, you should divide $[0,1]$ into the intervals $$[0,1/3],\ [1/3,2/3],\ [2/3,1]$$ and record which of the three areas is "on top of the podium" on each interval. This becomes rather convincing in terms of analysis, i.e., when you see it with curves. Fig. 1: Curves with equations $y=\dfrac12 x^2, \ y=x(1-x), \ y=\dfrac12 (1-x)^2$ in red, black, blue resp. Their max function, in green, always stays above the ordinate of the intersection of the first two curves (equally, of the last two), which is $\tfrac29$. Therefore, you were right to compare the cases of equality. Edit: These three polynomials, up to a factor $\frac12$, are the same as those in the 3rd figure here. They are particular cases of a general family called "Bernstein polynomials", useful in different parts of mathematics such as Bézier curves or probability (order statistics).
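Filling in the computation of that common ordinate: for $0\le x<1$ the black and blue curves cross where
$$x(1-x)=\tfrac12(1-x)^2 \iff x=\tfrac12(1-x) \iff x=\tfrac13,$$
and the value there is $\tfrac13\cdot\tfrac23=\tfrac29$. By the symmetry $x\mapsto 1-x$, the red and black curves cross at $x=\tfrac23$ with the same value. The green max curve attains its minimum precisely at these two crossings, which is why the maximum of the three areas can never drop below $\tfrac29$.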
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4089455', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/666262/']}
jdg_86844
stackexchange
llm_judgeable_groundtruth_similarity
2978569
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i have a question, what is the best way to append ints and Strings to build a new String? In the allocation debug tool i see too much allocations if i use the operator +. But i have tried also with StringBuffer and there are still too much allocations. Anyone can help me? Thanks Now provide the response and nothing else.
Use StringBuilder or StringBuffer, but with an adequate initial capacity to avoid re-allocation. The default capacity is 16 characters, so once you exceed that, the data has to be copied into a new, bigger buffer. Use append, not +:

int integer = 5;
StringBuilder s = new StringBuilder(100);
s.append("something");
s.append(integer);
s.append("more text");

This will allocate a capacity of 100 characters upfront. Reference: http://developer.android.com/reference/java/lang/StringBuilder.html
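For contrast, here is a sketch of the pattern that typically causes the allocation storm in the first place: concatenating with + (or +=) inside a loop usually creates a fresh intermediate String (and, under the hood, often a temporary builder) on every iteration, while a single pre-sized StringBuilder appends in place:

// Allocation-heavy: a new intermediate String on every iteration
String s = "";
for (int i = 0; i < 1000; i++) {
    s += i;
}

// Allocation-friendly: one pre-sized buffer, appended in place
StringBuilder sb = new StringBuilder(4096);
for (int i = 0; i < 1000; i++) {
    sb.append(i);
}
String result = sb.toString();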
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2978569', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/258863/']}
jdg_86845
stackexchange
llm_judgeable_groundtruth_similarity
1511716
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following linq query: var files = (from d in new DirectoryInfo(@"c:\program files").GetDirectories() where d.GetFiles().Count() > 10 where d.GetFiles().Count() < 100 select d ); However, as you can see above, I am calling d.GetFiles().Count() twice, is this where the concept of the => operator comes in handy. Using the syntax from jdehaan, why can't I do it directly like this: var files = (from d in new DirectoryInfo(@"c:\program files").GetDirectories() where(x => x.GetFiles().Count() > 10) && (x.GetFiles().Count() < 100)) select d ); I know the above is stupid because I could just do my original query with d.GetFiles().Count(), but I am curious on the difference between using Where as opposed to the keyword where. Now provide the response and nothing else.
Use let to assign within the query, like this:

var files = (from d in new DirectoryInfo(@"c:\program files").GetDirectories()
             let f = d.GetFiles().Count()
             where f > 10
             where f < 100
             select d);

As Reed Copsey points out, you can merge the two where clauses using &&. Also, since GetFiles returns an array, you can use the Length property instead of the Count method:

var files = (from d in new DirectoryInfo(@"c:\program files").GetDirectories()
             let f = d.GetFiles().Length
             where f > 10 && f < 100
             select d);
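Regarding the Where-versus-where part of the question: the query keyword where is just compiler syntax for a call to the Where extension method with a lambda. A sketch of the same query in method syntax (equivalent for LINQ to Objects):

var files = new DirectoryInfo(@"c:\program files")
    .GetDirectories()
    .Where(d =>
    {
        var f = d.GetFiles().Length; // evaluated once per directory
        return f > 10 && f < 100;
    });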
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1511716', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/33690/']}
jdg_86846
stackexchange
llm_judgeable_groundtruth_similarity
35956
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A related question to this question , I am wondering $$\lim_{x\to a}{f^\prime(x)}=+\infty,$$ what can be concluded about $f(a)$? Does this invalidate that $f(x)$ is not continuous at $a$ because of the non-existence of $f^\prime(a)$? Does this condition also imply maybe $$\lim_{x\to a}{f(x)}=+\infty?$$ I would think $$\lim_{x\to a}{f(x)}=+\infty,$$ because here $f(a+h)-f(a)$ can be arbitrarily large no matter how small $h$ is. EDIT Okay, I see where I got it wrong. Even though $$\lim_{x\to a}{f^\prime(x)}=+\infty,$$ it does not mean $f(a+h)-f(a)$ is arbitrarily large, because an $\infty$ times an infinitesimal quantity may not be determinate. I just wonder another related question: given $$\lim_{x\to a}{f(x)}=+\infty,$$ what can be concluded to $f^\prime(a)$? Can it be finite or non-existent? How about also when $a=\infty$ in this case. Now provide the response and nothing else.
Consider $f(x)=\sqrt[3]{x}$, $a=0$. The derivative goes to infinity, but the function is continuous. The graph has a vertical tangent line at $(0,0)$. For your second question, if $\displaystyle{\lim_{x\to a}f(x)=+\infty}$, then $f(a)$ cannot be defined in a way such that $f$ continuous at $a$, and therefore the derivative at $a$ cannot exist. You can conclude that the derivative near $a$, if it exists, would be unbounded above to the left of $a$ and unbounded below to the right of $a$, by the mean value theorem. If $\displaystyle{\lim_{x\to +\infty}f(x)=+\infty}$, then it is possible that $\displaystyle{\lim_{x\to \infty}f'(x)}$ exists and is finite. E.g., $f(x)=x$, or $\sqrt[3]{x}$ again. But it could do lots of things. You can be sure that if $f'(x)$ exists everywhere, then the set of $x$ where $f'(x)>0$ is unbounded, by the mean value theorem.
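For the record, differentiating the example confirms the vertical tangent:
$$f(x)=x^{1/3}\quad\Longrightarrow\quad f'(x)=\frac{1}{3x^{2/3}}\longrightarrow+\infty\ \text{ as } x\to 0,$$
while $f$ itself stays continuous at $0$ with $f(0)=0$.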
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/35956', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/6016/']}
jdg_86847
stackexchange
llm_judgeable_groundtruth_similarity
10779
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to prove that for non-negative random variables with distribution F:$$E(X^{n}) = \int_0^\infty n x^{n-1} P(\{X≥x\}) dx$$ Is the following proof correct? $$R.H.S = \int_0^\infty n x^{n-1} P(\{X≥x\}) dx = \int_0^\infty n x^{n-1} (1-F(x)) dx$$ using integration by parts:$$R.H.S = [x^{n}(1-F(x))]_0^\infty + \int_0^\infty x^{n} f(x) dx = 0 + \int_0^\infty x^{n} f(x) dx = E(X^{n})$$ If not correct, then how to prove it? Now provide the response and nothing else.
Here's another way. (As the others point out, the statement is true if $E[X^n]$ actually exists.) Let $Y = X^n$; $Y$ is non-negative if $X$ is. We know $$E[Y] = \int_0^{\infty} P(Y \geq t)\, dt,$$ so $$E[X^n] = \int_0^{\infty} P(X^n \geq t)\, dt.$$ Then perform the change of variables $t = x^n$, so that $dt = n x^{n-1}\, dx$. This immediately yields $$E[X^n] = \int_0^{\infty} n x^{n-1} P(X^n \geq x^n)\, dx = \int_0^{\infty} n x^{n-1} P(X \geq x)\, dx,$$ where the last equality uses that $x \mapsto x^n$ is increasing on $[0,\infty)$, so $X^n \geq x^n \iff X \geq x$ for non-negative $X$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/10779', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/953/']}
jdg_86848
stackexchange
llm_judgeable_groundtruth_similarity
58142058
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just installed plotly express. And I am trying to do something simple - plot each column of my data frame on the same y-axis with the index as x-axis. Here are questions/observations: Is it necessary for the data frame to have index as a column to be used as x-axis ? Can I not directly use the index for x-axis?How can I add multiple traces as were called in plotly on y-axis for the same x-axis ? Please note that, I am not trying to add traces using plotly, rather trying to use plotly-express. Also, there a few similar posts online, the closest was this : https://community.plot.ly/t/multiple-traces-plotly-express/23360 However, this post shows how you can add a scatter, not a line. I want to plot a line and there is no add_line similar to add_scatter shown in the example here. Appreciate any help in advance Sample code: import plotly.express as px import pandas as pd import numpy as np # Get some datadf = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')# Plot fig = px.line(df, x='Date', y='AAPL.High')# Only thing I figured is - I could do this fig.add_scatter(x=df['Date'], y=df['AAPL.Low']) # Not what is desired - need a line # Show plot fig.show() PLot: Now provide the response and nothing else.
Short answer:

fig = px.line(df, x='Date', y=df.columns[1:-6])

where df.columns is the list of column names, or a subset of the columns selected with, for example, df.columns[1:-6].

The details

Your code works fine. But if you specifically do not want to apply the (somewhat laborious) add_trace() function to each line, you can use px.line(). This used to require you to transform your data from a wide to a long format, but not anymore: just define an index and name the columns you'd like to plot, or reference all or a subset of your dataframe columns through, for example, y=df.columns[1:-6].

Code 1:

# imports
import plotly.express as px
import pandas as pd
import numpy as np

# data
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')

fig = px.line(df, x='Date', y=df.columns[1:-6])

# Show plot
fig.show()

Plot:

If you'd like to know how to do the same thing with data of a long format, here's how you do that too, using pandas and plotly:

Code 2:

# imports
import plotly.express as px
import pandas as pd
import numpy as np

# data
df_wide = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')
df_long = pd.melt(df_wide, id_vars=['Date'], value_vars=['AAPL.Open', 'AAPL.High', 'AAPL.Low', 'AAPL.Close', 'mavg'])

# plotly
fig = px.line(df_long, x='Date', y='value', color='variable')

# Show plot
fig.show()
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/58142058', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7454877/']}
jdg_86849
stackexchange
llm_judgeable_groundtruth_similarity
7361647
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: /// <summary>/// Returns list of popular searches/// </summary>public static string[] getPopularSearches(int SectionID, int MaxToFetch){ using (MainContext db = new MainContext()) { return (from c in db.tblSearches where c.SectionID == SectionID && c.Featured select new[] { c.Term }); }} I looked at other questions but they seem to be slightly different, I get the error: Cannot implicitly convert type 'System.Linq.IQueryable<string[]>' to 'string[]' I know this is probably simple, could someone point out what's wrong here please? Now provide the response and nothing else.
Sure - you're trying to return from a method declared to return a string[], but you're returning a query, which isn't a string array in itself. The simplest way of converting a query to an array is to call the ToArray extension method. However, as you're already selecting a string array for every element in the query, that would actually return string[][]. I suspect you really want to select a single string per query element, and then convert the whole thing into an array, i.e. code like this:

public static string[] GetPopularSearches(int sectionID, int maxToFetch)
{
    using (MainContext db = new MainContext())
    {
        var query = from c in db.tblSearches
                    where c.SectionID == sectionID && c.Featured
                    select c.Term;
        return query.Take(maxToFetch)
                    .ToArray();
    }
}

Note that:

- I've renamed the method and parameters to match .NET naming conventions
- I've added a call to Take in order to use the maxToFetch parameter
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7361647', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/356635/']}
jdg_86850
stackexchange
llm_judgeable_groundtruth_similarity
77756
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a USB barcode scanner at /dev/input/event0 ( /dev/input/by-id/usb-Metrologic_Metrologic_Scanner-event-kbd ), and scanning a barcode causes it to send keypress events. I'm capturing these keypresses using the libdevinput Ruby library, which works great. The issue is that each barcode is also entered as a username, and subsequently a password on the Raspberry Pi, causing lots of failed login attempts. (The Raspberry Pi will be headless, and inside a microwave.) How can I stop Linux from seeing the barcode scanner as an input device, and have my program be the sole event consumer? Or is there a different, better solution? Now provide the response and nothing else.
I found out that I need to send an EVIOCGRAB ioctl to the device, which grabs it for exclusive use. Here's how to do it in Ruby:

#!/usr/bin/env ruby
BARCODE_SCANNER = "/dev/input/by-id/usb-Metrologic_Metrologic_Scanner-event-kbd"

require 'rubygems'
require 'libdevinput'
require 'ffi'
require 'ffi/tools/const_generator'

# We need access to the file
DevInput.class_eval { attr_reader :dev }

# Look up value of EVIOCGRAB constant
cg = FFI::ConstGenerator.new('input') do |gen|
  gen.include('linux/input.h')
  gen.const(:EVIOCGRAB, '%u', '(unsigned)')
end
EVIOCGRAB = cg['EVIOCGRAB'].to_i

scanner = DevInput.new(BARCODE_SCANNER)

# Send EVIOCGRAB to scanner, which grabs it for exclusive use by our process
scanner.dev.ioctl(EVIOCGRAB, 1)

puts "Waiting for events..."
scanner.each do |event|
  # Ignore everything except key press events
  next unless event.type == 1 && event.value == 1
  puts "Key: #{event.code_str}"
end

Note: You'll need to install the libdevinput gem, ffi, and Linux headers. If you are using a Linux version between 3.2.0 and 3.6.11, you can replace the FFI::ConstGenerator part with EVIOCGRAB = 1074021776, and then you don't need ffi or Linux headers.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/77756', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/9963/']}
jdg_86851
stackexchange
llm_judgeable_groundtruth_similarity
8963400
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a data file, with each line having one number, like 10203040 How do I read this file and store the data into an array? So that I can conduct some operations on this array. Now provide the response and nothing else.
Just reading the file into an array, one line per element, is trivial:

open my $handle, '<', $path_to_file;
chomp(my @lines = <$handle>);
close $handle;

Now the lines of the file are in the array @lines.

If you want to make sure there is error handling for open and close, do something like this (in the snippet below, we open the file in UTF-8 mode, too):

my $handle;
unless (open $handle, "<:encoding(utf8)", $path_to_file) {
    print STDERR "Could not open file '$path_to_file': $!\n";
    # we return 'undefined', we could also 'die' or 'croak'
    return undef
}
chomp(my @lines = <$handle>);
unless (close $handle) {
    # what does it mean if close yields an error and you are just reading?
    print STDERR "Don't care error while closing '$path_to_file': $!\n";
}
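Since the question mentions conducting some operations on the array afterwards, here is a small illustrative follow-up, assuming (as in the question) one number per line:

# Sum the numbers and find the largest one
my $total = 0;
my $max;
for my $n (@lines) {
    $total += $n;
    $max = $n if !defined($max) || $n > $max;
}
print "Sum: $total, max: $max\n";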
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8963400', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/297850/']}
jdg_86852
stackexchange
llm_judgeable_groundtruth_similarity
9078754
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Im changing some fields via AJAX and when im trying to save a form i recive a error that Extra fields are not allowed . How to change that validator property like validatorPass() in sf1.4 ? Or its possible change to form to accept extra fields ? Im using SonataAdminBundle to create forms. Now provide the response and nothing else.
You could remove the extra fields from the request data before binding them to the form:

// The JSON PUT data will include all attributes in the entity, even
// those that are not updateable by the user and are not in the form.
// We need to remove these extra fields or we will get a
// "This form should not contain extra fields" Form Error
$data = $request->request->all();
$children = $form->all();
$data = array_intersect_key($data, $children);
$form->bind($data);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9078754', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/254186/']}
jdg_86853
stackexchange
llm_judgeable_groundtruth_similarity
44997344
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an array of objects and I'd like to filter it based on the objects property values. I'd like to filter it by different properties, so it needed to be dynamic. For this I have an input field where I type and then filter the array. So, let's say I have these 2 different arrays: const array_one = [ {id: 1, code: 'ABC123', name: 'John'}, {id: 2, code: 'DEF456', name: 'Stew'}, // ...];const array_two = [ {id: 1, value: '012345', company: 'Company 01' }, {id: 2, value: '678910', company: 'Company 02' }, // ...]; I want a function where I can filter the first array based on the name , also If I want to filter the second array, I want to filter it by the value . For this, I built this function: filterArray(array: Array<any>, fields: Array<any>, value: string) { value = this.convertString(value); array = array.filter((item) => { fields.forEach(obj => { if ( item[obj] ) { const _newObj = this.convertString(item[obj]); if ( _newObj.indexOf(value) !== -1 ) { console.log(item); return item; } } }); }); return array;}// convertString() is just another function to replace accents, spaces, etc... Then I call it like this: filterArray(originalArray, ['name'], valueFromInput);// or...filterArray(originalArray, ['value'], valueFromInput);// or even...filterArray(originalArray, ['value', 'company'], valueFromInput); But the array filtered is always returnin empty, even if the console inside the indexOf verification prints the correct object on the console. What am I doing wrong here? Because it's filtering properly, I have manually checked it, but it doesn't add to the new filtered array. Now provide the response and nothing else.
You can iterate the fields using Array#some, and if any of them equals value, the item is kept:

const array_one = [
  {id: 1, code: 'ABC123', name: 'John'},
  {id: 2, code: 'DEF456', name: 'Stew'}
];

const array_two = [
  {id: 1, value: '012345', company: 'Company 01' },
  {id: 2, value: '678910', company: 'Company 02' }
];

const filterArray = (array, fields, value) => {
  fields = Array.isArray(fields) ? fields : [fields];
  return array.filter((item) => fields.some((field) => item[field] === value));
};

console.log(filterArray(array_one, 'name', 'Stew'));
console.log(filterArray(array_two, ['id', 'company'], 2));
console.log(filterArray(array_two, ['id', 'company'], 'Company 02'));
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44997344', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3106828/']}
jdg_86854
stackexchange
llm_judgeable_groundtruth_similarity
2666567
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$2^y = 5$$ $$ 5^x = 2$$ $$xy = ?$$ Indeed, I could find the $x$ and $y$ by using logarithm. However, I don't want to do it with that way. Let me show my thinkings: $$5^x \cdot 2^y = 10$$ Here we get $$xy = 10$$ It seems wrong on my textbook, why? My Kindest Regards! Now provide the response and nothing else.
Here is an approach without using logs:
$$5^{xy} = \left(5^x\right)^y = 2^y = 5,$$
so $xy=1$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2666567', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/484136/']}
jdg_86855
stackexchange
llm_judgeable_groundtruth_similarity
15923075
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What are the fastest and the most flexible (applicable in most situations) ways to use VBOs? I am developing an openGL application and i want it to get to the best performance, so i need someone to answer these questions. I read many questions and anwers but i guess there's to much information that i don't need which messes up my brain... How many vbos should i use? How should i create vbos? How should i update vbos data, if the data size is not fixed? How should i render vbos? How should i deal with data in vbos that i don't want to render anymore? Now provide the response and nothing else.
How many VBOs should I use?

As few as possible. Switching VBOs comes with a small, but measurable cost. In general you'll try to group similar data into VBOs. For example, in an FPS game all the different kinds of garbage lying on the street, small props, etc., will usually be located in the same or only a small number of VBOs.

It also comes down to drawing batch sizes. glDraw… calls which render fewer than about 100 primitives are suboptimal (this has always been the case, even 15 years ago). So you want to batch at least 100 primitives where possible. But if a single mesh has only, say, 20 triangles (low-polycount props for instancing and such), each in its own VBO, you can no longer batch more.

How should I create VBOs?

glGenBuffers → glBindBuffer → glBufferData

UPDATE: You can pass a null pointer to the data parameter of glBufferData to initialize the buffer object without setting the data.

How should I update VBO data, if the data size is not fixed?

Create VBOs with a coarser size granularity than your data size. Your operating system is doing this anyway for your host-side data; it's called paging. Also, if you want to use glMapBuffer, making the buffer object a multiple of the host page size is very nice to the whole system. The usual page size on current systems is 4 KiB, so that's the VBO size granularity I'd choose.

UPDATE: You can, by the way, ask your operating system which page size it is using. That's OS-dependent, though; I'd ask another question for that.

Update the data using glBufferSubData, or map it with glMapBuffer, modify the host-side mapped memory, then glUnmapBuffer. If the data outgrows the buffer object, create a new, larger one and copy with glCopyBufferSubData. See the last paragraph.

How should I render VBOs?

glBindBuffer → glDraw…

How should I deal with data in VBOs that I don't want to render anymore?

If the data consumes only a part of the VBO, shares it with other data, and you're not running out of memory, then, well, just don't access it. Ideally you keep some index in which you track which parts of which VBO are available for which kind of task. This is very much like memory management, specifically a scheme known as object stacks (obstacks).

However, eventually it may make sense to compactify an existing buffer object. For this you'd create a new buffer object, bind it as the writing target, with the old buffer object selected as the reading target. Then use glCopyBufferSubData to copy the contents into the new, tightened buffer object. Of course you will then have to update all references to the buffer object name (= OpenGL ID) and offsets. For this reason it makes sense to write a thin abstraction layer on top of OpenGL buffer objects that keeps track of the actual typed data within the structureless blobs that OpenGL buffer objects are.
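To make the call sequences above concrete, here is a minimal sketch in C of the create/update/draw cycle. The buffer size, usage hint, and vertex layout are illustrative assumptions, and an active OpenGL context plus a loaded function loader are presumed:

/* hypothetical vertex data for illustration */
static const GLfloat vertices[] = {
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
};
const GLsizei vertex_count = 3;

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

/* Allocate a 4 KiB buffer up front without uploading data (null pointer),
   matching the page-size granularity suggested above. */
glBufferData(GL_ARRAY_BUFFER, 4096, NULL, GL_DYNAMIC_DRAW);

/* Update: overwrite a sub-range with new vertex data. */
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);

/* Render: bind, describe the layout, draw. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);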
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15923075', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1216033/']}
jdg_86856
stackexchange
llm_judgeable_groundtruth_similarity
80604
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have file with below contents . /ABC/RTE/AD_900_VOP_123/OPP/ABC/RTE/TRE/AD_900_VOP_145/BBB/ABC/RTE/AN_900_VFP_124/FBF/ABC/RTE/HD_900_FOP_153/WEW/ABD/RDV/AD_900_VOP_123/OPP/ABC/RTE/WD_900_VOP_123/GRR/TRD/ABC/RTE/RTD/AR_900_VOP_443/SDD How can I use regular expression on this file such that I get the output such as AD_900_VOP_123AD_900_VOP_145AN_900_VFP_124HD_900_FOP_153AD_900_VOP_123WD_900_VOP_123AR_900_VOP_443 Now provide the response and nothing else.
GNU grep:

grep -oE '[[:alpha:]]+_[[:digit:]]+_[[:alpha:]]+_[[:digit:]]+'

Use the perl-regex flag and look-behind and look-ahead assertions to guarantee that the match is surrounded by /:

grep -oP '(?<=/)[[:alpha:]]+_[[:digit:]]+_[[:alpha:]]+_[[:digit:]]+(?=/)'
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/80604', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/26374/']}
jdg_86857
stackexchange
llm_judgeable_groundtruth_similarity
6003662
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to sort a list with sublists? (setq list '((0) (1) (2) (0 1 5) (0 1 3) (0 1 5) (0 3 0) (0) (1) (2 7 19) (0 0 3 0))); restricting the sort to only the first element:(sort (copy-seq list) #'< :key #'car)--> ((0) (0 1 5) (0 1 3) (0 1 5) (0 3 0) (0) (0 0 3 0) (1) (1) (2) (2 7 19)) The output I am looking for is sort on all elements of the sublist: --> ((0) (0) (0 0 3 0) (0 1 3) (0 1 5) (0 1 5) (0 3 0) (1) (1) (2) (2 7 19)) Now provide the response and nothing else.
Start by defining a function that determines whether one list is less than another. The following example assumes that the lists can only contain numbers:

(defun list< (a b)
  (cond ((null a) (not (null b)))
        ((null b) nil)
        ((= (first a) (first b)) (list< (rest a) (rest b)))
        (t (< (first a) (first b)))))

Armed with this function, you can now sort the list of lists:

(sort (copy-seq list) #'list<)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6003662', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/753832/']}
jdg_86858
stackexchange
llm_judgeable_groundtruth_similarity
62671106
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I recently discovered that onActivityResult is deprecated. What should we do to handle it? Any alternative introduced for that? Now provide the response and nothing else.
A basic training is available at developer.android.com. Here is an example of how to convert the existing code to the new API:

The old way:

public void openSomeActivityForResult() {
    Intent intent = new Intent(this, SomeActivity.class);
    startActivityForResult(intent, 123);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (resultCode == Activity.RESULT_OK && requestCode == 123) {
        doSomeOperations();
    }
}

The new way (Java):

public void openSomeActivityForResult() {
    Intent intent = new Intent(this, SomeActivity.class);
    someActivityResultLauncher.launch(intent);
}

// You can do the assignment inside onAttach or onCreate, i.e., before the activity is displayed
ActivityResultLauncher<Intent> someActivityResultLauncher = registerForActivityResult(
        new ActivityResultContracts.StartActivityForResult(),
        new ActivityResultCallback<ActivityResult>() {
            @Override
            public void onActivityResult(ActivityResult result) {
                if (result.getResultCode() == Activity.RESULT_OK) {
                    // There are no request codes
                    Intent data = result.getData();
                    doSomeOperations();
                }
            }
        });

The new way (Kotlin):

fun openSomeActivityForResult() {
    val intent = Intent(this, SomeActivity::class.java)
    resultLauncher.launch(intent)
}

var resultLauncher = registerForActivityResult(StartActivityForResult()) { result ->
    if (result.resultCode == Activity.RESULT_OK) {
        // There are no request codes
        val data: Intent? = result.data
        doSomeOperations()
    }
}

EDIT: A better approach would be to make it more generalised so that we can reuse it. The snippet below is used in one of my projects, but beware that it's not well tested and may not cover all the cases.

BetterActivityResult.java

import android.content.Intent;

import androidx.activity.result.ActivityResult;
import androidx.activity.result.ActivityResultCaller;
import androidx.activity.result.ActivityResultLauncher;
import androidx.activity.result.contract.ActivityResultContract;
import androidx.activity.result.contract.ActivityResultContracts;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;

public class BetterActivityResult<Input, Result> {
    /**
     * Register activity result using a {@link ActivityResultContract} and an in-place activity result callback like
     * the default approach. You can still customise callback using {@link #launch(Object, OnActivityResult)}.
     */
    @NonNull
    public static <Input, Result> BetterActivityResult<Input, Result> registerForActivityResult(
            @NonNull ActivityResultCaller caller,
            @NonNull ActivityResultContract<Input, Result> contract,
            @Nullable OnActivityResult<Result> onActivityResult) {
        return new BetterActivityResult<>(caller, contract, onActivityResult);
    }

    /**
     * Same as {@link #registerForActivityResult(ActivityResultCaller, ActivityResultContract, OnActivityResult)} except
     * the last argument is set to {@code null}.
     */
    @NonNull
    public static <Input, Result> BetterActivityResult<Input, Result> registerForActivityResult(
            @NonNull ActivityResultCaller caller,
            @NonNull ActivityResultContract<Input, Result> contract) {
        return registerForActivityResult(caller, contract, null);
    }

    /**
     * Specialised method for launching new activities.
     */
    @NonNull
    public static BetterActivityResult<Intent, ActivityResult> registerActivityForResult(
            @NonNull ActivityResultCaller caller) {
        return registerForActivityResult(caller, new ActivityResultContracts.StartActivityForResult());
    }

    /**
     * Callback interface
     */
    public interface OnActivityResult<O> {
        /**
         * Called after receiving a result from the target activity
         */
        void onActivityResult(O result);
    }

    private final ActivityResultLauncher<Input> launcher;
    @Nullable
    private OnActivityResult<Result> onActivityResult;

    private BetterActivityResult(@NonNull ActivityResultCaller caller,
                                 @NonNull ActivityResultContract<Input, Result> contract,
                                 @Nullable OnActivityResult<Result> onActivityResult) {
        this.onActivityResult = onActivityResult;
        this.launcher = caller.registerForActivityResult(contract, this::callOnActivityResult);
    }

    public void setOnActivityResult(@Nullable OnActivityResult<Result> onActivityResult) {
        this.onActivityResult = onActivityResult;
    }

    /**
     * Launch activity, same as {@link ActivityResultLauncher#launch(Object)} except that it allows a callback
     * executed after receiving a result from the target activity.
     */
    public void launch(Input input, @Nullable OnActivityResult<Result> onActivityResult) {
        if (onActivityResult != null) {
            this.onActivityResult = onActivityResult;
        }
        launcher.launch(input);
    }

    /**
     * Same as {@link #launch(Object, OnActivityResult)} with the last parameter set to {@code null}.
     */
    public void launch(Input input) {
        launch(input, this.onActivityResult);
    }

    private void callOnActivityResult(Result result) {
        if (onActivityResult != null) onActivityResult.onActivityResult(result);
    }
}

With the above approach, you still have to register it before or during launching the activity or fragment attachment. Once defined, it can be reused within the activity or fragment. For example, if you need to start new activities from most of your activities, you can define a BaseActivity and register a BetterActivityResult like this:

BaseActivity.java

public class BaseActivity extends AppCompatActivity {
    protected final BetterActivityResult<Intent, ActivityResult> activityLauncher =
            BetterActivityResult.registerActivityForResult(this);
}

After that, you can simply launch an activity from any child activity like this:

public void openSomeActivityForResult() {
    Intent intent = new Intent(this, SomeActivity.class);
    activityLauncher.launch(intent, result -> {
        if (result.getResultCode() == Activity.RESULT_OK) {
            // There are no request codes
            Intent data = result.getData();
            doSomeOperations();
        }
    });
}

Since you can set the callback function along with the Intent, you can reuse it for any activity. Similarly, you can also use other activity contracts using the other two constructors.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/62671106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2652368/']}
jdg_86859
stackexchange
llm_judgeable_groundtruth_similarity
6918987
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to measure the computation time of a function in R using system.time() . I want to run the function a few hundred times to get an average but I don't wantto copy and paste that many times. Is there an easier way to do that? Now provide the response and nothing else.
The microbenchmark package takes a times= option and has the added bonus of being a bit more accurate:

> library(microbenchmark)
> m <- microbenchmark( seq(10)^2, (1:10)^2, times=10000)
> m
Unit: nanoseconds
       expr   min    lq median    uq     max
1  (1:10)^2  2567  3423   3423  4278   41918
2 seq(10)^2 44484 46195  46195 47051 1804147
> plot(m)

And using the not-yet-released autoplot() method for ggplot2:

autoplot(m)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6918987', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/633426/']}
jdg_86860
stackexchange
llm_judgeable_groundtruth_similarity
2984493
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the following problem concerning a limit in $L^{1}(\mathbb{R}$ ), the class of all Lebesgue integrable functions on $\mathbb{R}$ with the Lebesgue measure $m$ . Let $f\in L^{1}(\mathbb{R})$ . Does $$\lim_{h\to 0}\int_{\mathbb{R}}|f(x-h)-f(x)|~dx$$ exist? If it does, what is its value? My intuition is that the limit does exist. However, I have no idea how to prove it. I noticed that $\int_{\mathbb{R}}|f(x-h)-f(x)|~dx=||f(x-h)-f(x)||_{1}$ , and I know that since $f\in L^{1}(\mathbb{R})$ , we have $||f(x)||_{1}<\infty$ . I think this needs to used somewhere, but I'm not sure where. Any help is appreciated. Now provide the response and nothing else.
No. With the change of variable $ t = \sqrt{n} x $ you get $$ \sqrt{n} \int_0^1 \frac{1}{(1+x^2)^n} \, dx = \int_0^\sqrt{n} \frac{1}{\left( 1 + \frac{t^2}{n} \right)^n} \, dt = \int_0^{+\infty} \frac{1}{\left( 1 + \frac{t^2}{n} \right)^n} \chi_{[0,\sqrt{n}]}(t) \, dt. $$ Observe that $$ \lim_{n \to \infty} \left( 1 + \frac{t^2}{n} \right)^n \chi_{[0,\sqrt{n}]}(t) = e^{t^2}$$ pointwise everywhere, say for $t>0$ ; also the sequence $ n \mapsto (1 + t^2 /n)^n$ is increasing for all $t \in \mathbb{R}$ which implies that $$\frac{1}{1+t^2} \ge \frac{1}{\left( 1 + \frac{t^2}{n} \right)^n} \ge \frac{1}{\left( 1 + \frac{t^2}{n} \right)^n} \chi_{[0,\sqrt{n}]}(t) $$ for all $n \in \mathbb{N}$ and for all $t \in \mathbb{R}$ . By the Dominated Convergence Theorem, since $1/(1+t^2) \in L^1([0,+\infty))$ , we have that $$\lim_{n \to \infty} \int_0^{+\infty} \frac{1}{\left( 1 + \frac{t^2}{n} \right)^n} \chi_{[0,\sqrt{n}]}(t) \, dt = \int_0^{+\infty} e^{-t^2} \, dt = \frac{\sqrt{\pi}}{2}. $$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2984493', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/608571/']}
jdg_86861
stackexchange
llm_judgeable_groundtruth_similarity
17825
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have recently completed an MSc in Control Systems from a top university. It seems to me that control theory must have an application within quantitative finance. I would like to apply my degree within finance, but I want to be sure that it is relevant to the role of a quantitative analyst. The topics which I have particular interest and experience in are State Space Control, Systems Identification, Model Predictive Control and Optimal Control. I imagine that effective management of finances must involve modelling of financial systems in terms of transfer functions/state space models (based on large sets of historical data). These models could then be used to predict the evolution of a market over time, and therefore optimise a given cost function such as profit, risk etc. If this kind of role exists within quantitative finance/ other areas, can you please give me more information/ ideas of job roles/ industries to research. Now provide the response and nothing else.
Of course — optimal control is at the core of math finance. Take a few applications:

Option Pricing: you have an exposure to a time-dependent combination of market factors, and some knowledge of their dynamics. They are partly deterministic, partly stochastic (i.e. random). At each "time step" you can adjust your portfolio at a given cost, and your goal is to lower your risk. This is, of course, a control program; see Continuous-time Stochastic Control and Optimization with Financial Applications, by Huyen Pham, for examples (and here for a summary by the same author).

Portfolio Construction: you have a given amount of money to invest, and you will build a portfolio with it. You have some expectations about the dynamics of the returns of the available investment instruments (stocks, bonds, etc.) and estimates of the associated risk. Changing your allocation has a cost at each time step. Again, it is a control program; see for instance Dynamic Portfolio Choice with Frictions, by Garleanu and Pedersen.

Trading: you have a large amount of stock to buy or sell today. You have the whole day (around 8 hours) to do it, and you can take a decision every millisecond. The faster you go, the more you move the market to your detriment; the slower you go, the more you are exposed to adverse price changes. It is nothing other than control again; see Market Microstructure in Practice, by Lehalle and Laruelle, and Market Microstructure Knowledge Needed for Controlling an Intra-Day Trading Process for an overview (there is a preprint available here).
{}
{'log_upvote_score': 4, 'links': ['https://quant.stackexchange.com/questions/17825', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/16253/']}
jdg_86862
stackexchange
llm_judgeable_groundtruth_similarity
32531377
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a large NumPy.array field_array and a smaller array match_array , both consisting of int values. Using the following example, how can I check if any match_array-shaped segment of field_array contains values that exactly correspond to the ones in match_array ? import numpyraw_field = ( 24, 25, 26, 27, 28, 29, 30, 31, 23, \ 33, 34, 35, 36, 37, 38, 39, 40, 32, \ -39, -38, -37, -36, -35, -34, -33, -32, -40, \ -30, -29, -28, -27, -26, -25, -24, -23, -31, \ -21, -20, -19, -18, -17, -16, -15, -14, -22, \ -12, -11, -10, -9, -8, -7, -6, -5, -13, \ -3, -2, -1, 0, 1, 2, 3, 4, -4, \ 6, 7, 8, 4, 5, 6, 7, 13, 5, \ 15, 16, 17, 8, 9, 10, 11, 22, 14)field_array = numpy.array(raw_field, int).reshape(9,9)match_array = numpy.arange(12).reshape(3,4) These examples ought to return True since the pattern described by match_array aligns over [6:9,3:7] . Now provide the response and nothing else.
Approach #1

This approach derives from a solution to Implement Matlab's im2col 'sliding' in python, which was designed to rearrange sliding blocks from a 2D array into columns. Thus, to solve our case here, those sliding blocks from field_array can be stacked as columns and compared against a column vector version of match_array.

Here's a formal definition of the function for the rearrangement/stacking:

import numpy as np

def im2col(A, BLKSZ):

    # Parameters
    M, N = A.shape
    col_extent = N - BLKSZ[1] + 1
    row_extent = M - BLKSZ[0] + 1

    # Get starting block indices
    start_idx = np.arange(BLKSZ[0])[:, None] * N + np.arange(BLKSZ[1])

    # Get offsetted indices across the height and width of input array
    offset_idx = np.arange(row_extent)[:, None] * N + np.arange(col_extent)

    # Get all actual indices & index into input array for final output
    return np.take(A, start_idx.ravel()[:, None] + offset_idx.ravel())

To solve our case, here's the implementation based on im2col:

# Get sliding blocks of shape same as match_array from field_array into columns.
# Then, compare them with a column vector version of match_array.
col_match = im2col(field_array, match_array.shape) == match_array.ravel()[:, None]

# Shape of output array that has field_array compared against a sliding match_array
out_shape = np.asarray(field_array.shape) - np.asarray(match_array.shape) + 1

# Now, see if all elements in a column are ONES and reshape to out_shape.
# Finally, find the position of TRUE indices
R, C = np.where(col_match.all(0).reshape(out_shape))

The output for the given sample in the question would be:

In [151]: R,C
Out[151]: (array([6]), array([3]))

Approach #2

Given that OpenCV already has a template-matching function that computes the square of differences, you can employ that and look for zero differences, which would be your matching positions. So, if you have access to cv2 (the OpenCV module), the implementation would look something like this:

import cv2
from cv2 import matchTemplate as cv2m

M = cv2m(field_array.astype('uint8'), match_array.astype('uint8'), cv2.TM_SQDIFF)
R, C = np.where(M == 0)

giving us:

In [204]: R,C
Out[204]: (array([6]), array([3]))

Benchmarking

This section compares runtimes for all the approaches suggested to solve the question. The credit for the various methods listed in this section goes to their contributors.

Method definitions:

from skimage.util import view_as_windows  # needed by skimage_based

def seek_array(search_in, search_for, return_coords=False):
    si_x, si_y = search_in.shape
    sf_x, sf_y = search_for.shape
    for y in xrange(si_y - sf_y + 1):
        for x in xrange(si_x - sf_x + 1):
            if numpy.array_equal(search_for, search_in[x:x + sf_x, y:y + sf_y]):
                return (x, y) if return_coords else True
    return None if return_coords else False

def skimage_based(field_array, match_array):
    windows = view_as_windows(field_array, match_array.shape)
    return (windows == match_array).all(axis=(2, 3)).nonzero()

def im2col_based(field_array, match_array):
    col_match = im2col(field_array, match_array.shape) == match_array.ravel()[:, None]
    out_shape = np.asarray(field_array.shape) - np.asarray(match_array.shape) + 1
    return np.where(col_match.all(0).reshape(out_shape))

def cv2_based(field_array, match_array):
    M = cv2m(field_array.astype('uint8'), match_array.astype('uint8'), cv2.TM_SQDIFF)
    return np.where(M == 0)

Runtime tests - Case #1 (sample data from the question):

In [11]: field_array
Out[11]:
array([[ 24,  25,  26,  27,  28,  29,  30,  31,  23],
       [ 33,  34,  35,  36,  37,  38,  39,  40,  32],
       [-39, -38, -37, -36, -35, -34, -33, -32, -40],
       [-30, -29, -28, -27, -26, -25, -24, -23, -31],
       [-21, -20, -19, -18, -17, -16, -15, -14, -22],
       [-12, -11, -10,  -9,  -8,  -7,  -6,  -5, -13],
       [ -3,  -2,  -1,   0,   1,   2,   3,   4,  -4],
       [  6,   7,   8,   4,   5,   6,   7,  13,   5],
       [ 15,  16,  17,   8,   9,  10,  11,  22,  14]])

In [12]: match_array
Out[12]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])

In [13]: %timeit seek_array(field_array, match_array, return_coords = False)
1000 loops, best of 3: 465 µs per loop

In [14]: %timeit skimage_based(field_array,match_array)
10000 loops, best of 3: 97.9 µs per loop

In [15]: %timeit im2col_based(field_array,match_array)
10000 loops, best of 3: 74.3 µs per loop

In [16]: %timeit cv2_based(field_array,match_array)
10000 loops, best of 3: 30 µs per loop

Case #2 (bigger random data):

In [17]: field_array = np.random.randint(0,4,(256,256))

In [18]: match_array = field_array[100:116,100:116].copy()

In [19]: %timeit seek_array(field_array, match_array, return_coords = False)
1 loops, best of 3: 400 ms per loop

In [20]: %timeit skimage_based(field_array,match_array)
10 loops, best of 3: 54.3 ms per loop

In [21]: %timeit im2col_based(field_array,match_array)
10 loops, best of 3: 125 ms per loop

In [22]: %timeit cv2_based(field_array,match_array)
100 loops, best of 3: 4.08 ms per loop
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32531377', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3408500/']}
jdg_86863
stackexchange
llm_judgeable_groundtruth_similarity
15656463
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a MySQL database and I am trying to find a way to export its structure only, without the auto increment values. mysqldump --no-data would almost do the job, but it keeps the auto_increment values. Is there any way to do it without using PHPMyAdmin (that I know it can do it)? Now provide the response and nothing else.
You can do this:

mysqldump -u root -p -h <db-host> --opt <db-name> -d --single-transaction | sed 's/ AUTO_INCREMENT=[0-9]*\b//' > <filename>.sql

As mentioned by others, if you want sed to work properly, add the g (for global replacement) flag, like this:

mysqldump -u root -p -h <db-host> --opt <db-name> -d --single-transaction | sed 's/ AUTO_INCREMENT=[0-9]*\b//g' > <filename>.sql

(This only works if you have GUI Tools installed: mysqldump --skip-auto-increment)

Update, thanks to the comments: the \b is useless and sometimes will break the command. See this SO topic for explanations. So the optimized answer would be:

mysqldump -u root -p -h <db-host> --opt <db-name> -d --single-transaction | sed 's/ AUTO_INCREMENT=[0-9]*//g' > <filename>.sql
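For illustration (the exact dump line varies by MySQL version, so treat this as a representative example rather than verbatim output), the sed substitution turns a dumped table definition like the first line below into the second, leaving the rest of the schema untouched:

-- before
) ENGINE=InnoDB AUTO_INCREMENT=1234 DEFAULT CHARSET=utf8;
-- after
) ENGINE=InnoDB DEFAULT CHARSET=utf8;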
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15656463', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/577598/']}
jdg_86864
stackexchange
llm_judgeable_groundtruth_similarity
378243
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In my system I frequently operate with airport codes ( "YYZ" , "LAX" , "SFO" , etc.), they are always in the exact same format (3 letter, represented as uppercase). The system typically deals with 25-50 of these (different) codes per API request, with over a thousand allocations total, they are passed around through many layers of our application, and are compared for equality quite often. We started with just passing strings around, which worked fine for a bit but we quickly noticed lots of programming mistakes by passing in a wrong code somewhere the 3 digit code was expected. We also ran into issues where we were supposed to do a case-insensitive comparison and instead did not, resulting in bugs. From this, I decided to stop passing strings around and create an Airport class, which has a single constructor that takes and validates the airport code. public sealed class Airport{ public Airport(string code) { if (code == null) { throw new ArgumentNullException(nameof(code)); } if (code.Length != 3 || !char.IsLetter(code[0]) || !char.IsLetter(code[1]) || !char.IsLetter(code[2])) { throw new ArgumentException( "Must be a 3 letter airport code.", nameof(code)); } Code = code.ToUpperInvariant(); } public string Code { get; } public override string ToString() { return Code; } private bool Equals(Airport other) { return string.Equals(Code, other.Code); } public override bool Equals(object obj) { return obj is Airport airport && Equals(airport); } public override int GetHashCode() { return Code?.GetHashCode() ?? 0; } public static bool operator ==(Airport left, Airport right) { return Equals(left, right); } public static bool operator !=(Airport left, Airport right) { return !Equals(left, right); }} This made our code much easier to understand and we simplified our equality checks, dictionary / set usages. We now know that if our methods accept an Airport instance that it will behave the way we expect, it has simplified our method checks to a null reference check. The thing I did notice, however, was the garbage collection was running much more often, which I tracked down to a lot of instances of Airport getting collected. My solution to this was to convert the class into a struct . Mostly it was just a keyword change, with the exception of GetHashCode and ToString : public override string ToString(){ return Code ?? string.Empty;}public override int GetHashCode(){ return Code?.GetHashCode() ?? 0;} To handle the case where default(Airport) is used. My questions: Was creating an Airport class or struct a good solution in general, or am I solving the wrong problem / solving it the wrong way by creating the type? If it's not a good solution, what is a better solution? How should my application handle instances where the default(Airport) is used? A type of default(Airport) is nonsensical to my application, so I've been doing if (airport == default(Airport) { throw ... } in places where getting an instance of Airport (and its Code property) is critical to the operation. Notes:I reviewed the questions C#/VB struct – how to avoid case with zero default values, which is considered invalid for given structure? , and Use struct or not before asking my question, however I think my questions are different enough to warrant its own post. Now provide the response and nothing else.
Update: I rewrote my answer to address some incorrect assumptions about C# structs, as well as the OP informing us in comments that interned strings are being used. If you can control the data coming in to your system, use a class as you posted in your question. If someone runs default(Airport) they will get a null value back. Be sure to write your private Equals method to return false whenever comparing null Airport objects, and then let the NullReferenceExceptions fly elsewhere in the code. However, if you are taking data into the system from sources you don't control, you don't necessarily want to crash the whole thread. In this case a struct is ideal, for the simple fact that default(Airport) will give you something other than a null pointer. Make up an obvious value to represent "no value" or the "default value" so you have something to print on screen or in a log file ("---", for instance). In fact, I would just keep the code private and not expose a Code property at all: just focus on behavior here.

public struct Airport
{
    private string code;

    public Airport(string code)
    {
        // Check `code` for validity, throw exceptions if not valid
        this.code = code;
    }

    public override string ToString()
    {
        return code ?? (code = "---");
    }

    // int GetHashCode()
    // bool Equals(...)
    // bool operator ==(...)
    // bool operator !=(...)

    private bool Equals(Airport other)
    {
        // No null check needed here: Airport is a value type and can never be null

        if (ToString() == "---" || other.ToString() == "---")
            // "Default" values should never match anything, even themselves
            return false;

        // Do a case insensitive comparison to enforce the logic that airport
        // codes are not case sensitive
        return string.Equals(
            ToString(),
            other.ToString(),
            StringComparison.InvariantCultureIgnoreCase);
    }
}

Worst-case scenario, converting default(Airport) to a string prints out "---", and it returns false when compared to other valid Airport codes. Any "default" airport code matches nothing, including other default airport codes. Yes, structs are meant to be values allocated on the stack, and any pointers to heap memory basically negate the performance advantages of structs, but in this case the default value of a struct has meaning and provides some additional bullet resistance to the rest of the application. I would bend the rules a little here, because of that.

Original Answer (with some factual errors)

If you can control the data coming in to your system, I would do as Robert Harvey suggested in the comments: create a parameterless constructor and throw an exception when it is called. This prevents invalid data from entering the system via default(Airport).

public Airport()
{
    throw new InvalidOperationException("...");
}

However, if you are taking data into the system from sources you don't control, you don't necessarily want to crash the whole thread. In this case you can create an airport code that is invalid, but make it seem like an obvious error. This would involve creating a parameterless constructor and setting the Code to something like "---":

public Airport()
{
    Code = "---";
}

Since you are using a string as the Code, there is no point in using a struct. The struct gets allocated on the stack, only to have the Code allocated as a pointer to a string in heap memory, so there is no difference here between class and struct. If you changed the airport code to a 3-item array of chars, then a struct would be fully allocated on the stack. Even then, the volume of data isn't big enough to make a difference.
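As a quick, hypothetical sanity check of the struct's behaviour (illustrative usage, not from the original post; it relies only on the ToString override shown above, and the commented-out Equals/operator stubs would need to be filled in before comparisons behave the same way):

Airport none = default(Airport); // never throws: value types always have a default
Console.WriteLine(none);         // prints "---" via the ToString override

Airport yyz = new Airport("YYZ");
Console.WriteLine(yyz);          // prints "YYZ"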
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/378243', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/61302/']}
jdg_86865
stackexchange
llm_judgeable_groundtruth_similarity
1668274
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using Visual Studio 2008 to create an msi to deploy my program with a setup project. I need to know how to make the msi run the exe it just installed. A custom action? If so please explain where/how. Thanks. Now provide the response and nothing else.
This is a common question. I don't do it with just a custom action. The only way I know is to modify the .msi after it has been generated. I run a JavaScript script as a post-build event to do exactly that. It inserts a new dialog in the installer wizard, with a checkbox that says "Launch Application Foo?", and then there is a custom action to run the app if the checkbox is checked. It appears as the last screen in the install wizard sequence. [Screenshot of the finished dialog omitted.] This is the script I use to modify the MSI:

// EnableLaunchApplication.js <msi-file>
// Performs a post-build fixup of an msi to launch a specific file when the install has completed

// Configurable values
var checkboxChecked = true;                // Is the checkbox on the finished dialog checked by default?
var checkboxText = "Launch [ProductName]"; // Text for the checkbox on the finished dialog
var filename = "WindowsApplication1.exe";  // The name of the executable to launch - change this to match the file you want to launch at the end of your setup

// Constant values from Windows Installer
var msiOpenDatabaseModeTransact = 1;

var msiViewModifyInsert = 1;
var msiViewModifyUpdate = 2;
var msiViewModifyAssign = 3;
var msiViewModifyReplace = 4;
var msiViewModifyDelete = 6;

if (WScript.Arguments.Length != 1)
{
    WScript.StdErr.WriteLine(WScript.ScriptName + " file");
    WScript.Quit(1);
}

var filespec = WScript.Arguments(0);
var installer = WScript.CreateObject("WindowsInstaller.Installer");
var database = installer.OpenDatabase(filespec, msiOpenDatabaseModeTransact);

var sql;
var view;
var record;

try
{
    var fileId = FindFileIdentifier(database, filename);
    if (!fileId)
        throw "Unable to find '" + filename + "' in File table";

    WScript.Echo("Updating the Control table...");
    // Modify the Control_Next of BannerBmp control to point to the new CheckBox
    sql = "SELECT `Dialog_`, `Control`, `Type`, `X`, `Y`, `Width`, `Height`, `Attributes`, `Property`, `Text`, `Control_Next`, `Help` FROM `Control` WHERE `Dialog_`='FinishedForm' AND `Control`='BannerBmp'";
    view = database.OpenView(sql);
    view.Execute();
    record = view.Fetch();
    record.StringData(11) = "CheckboxLaunch";
    view.Modify(msiViewModifyReplace, record);
    view.Close();

    // Insert the new CheckBox control
    sql = "INSERT INTO `Control` (`Dialog_`, `Control`, `Type`, `X`, `Y`, `Width`, `Height`, `Attributes`, `Property`, `Text`, `Control_Next`, `Help`) VALUES ('FinishedForm', 'CheckboxLaunch', 'CheckBox', '9', '201', '343', '12', '3', 'LAUNCHAPP', '{\\VSI_MS_Sans_Serif13.0_0_0}" + checkboxText + "', 'CloseButton', '|')";
    view = database.OpenView(sql);
    view.Execute();
    view.Close();

    WScript.Echo("Updating the ControlEvent table...");
    // Modify the Order of the EndDialog event of the FinishedForm to 1
    sql = "SELECT `Dialog_`, `Control_`, `Event`, `Argument`, `Condition`, `Ordering` FROM `ControlEvent` WHERE `Dialog_`='FinishedForm' AND `Event`='EndDialog'";
    view = database.OpenView(sql);
    view.Execute();
    record = view.Fetch();
    record.IntegerData(6) = 1;
    view.Modify(msiViewModifyReplace, record);
    view.Close();

    // Insert the Event to launch the application
    sql = "INSERT INTO `ControlEvent` (`Dialog_`, `Control_`, `Event`, `Argument`, `Condition`, `Ordering`) VALUES ('FinishedForm', 'CloseButton', 'DoAction', 'VSDCA_Launch', 'LAUNCHAPP=1', '0')";
    view = database.OpenView(sql);
    view.Execute();
    view.Close();

    WScript.Echo("Updating the CustomAction table...");
    // Insert the custom action to launch the application when finished
    sql = "INSERT INTO `CustomAction` (`Action`, `Type`, `Source`, `Target`) VALUES ('VSDCA_Launch', '210', '" + fileId + "', '')";
    view = database.OpenView(sql);
    view.Execute();
    view.Close();

    if (checkboxChecked)
    {
        WScript.Echo("Updating the Property table...");
        // Set the default value of the CheckBox
        sql = "INSERT INTO `Property` (`Property`, `Value`) VALUES ('LAUNCHAPP', '1')";
        view = database.OpenView(sql);
        view.Execute();
        view.Close();
    }

    database.Commit();
}
catch(e)
{
    WScript.StdErr.WriteLine(e);
    WScript.Quit(1);
}

function FindFileIdentifier(database, fileName)
{
    // First, try to find the exact file name
    var sql = "SELECT `File` FROM `File` WHERE `FileName`='" + fileName + "'";
    var view = database.OpenView(sql);
    view.Execute();
    var record = view.Fetch();
    if (record)
    {
        var value = record.StringData(1);
        view.Close();
        return value;
    }
    view.Close();

    // The file may be in SFN|LFN format. Look for a filename in this case next
    sql = "SELECT `File`, `FileName` FROM `File`";
    view = database.OpenView(sql);
    view.Execute();
    record = view.Fetch();
    while (record)
    {
        if (StringEndsWith(record.StringData(2), "|" + fileName))
        {
            var value = record.StringData(1);
            view.Close();
            return value;
        }
        record = view.Fetch();
    }
    view.Close();
}

function StringEndsWith(str, value)
{
    if (str.length < value.length)
        return false;
    return (str.indexOf(value, str.length - value.length) != -1);
}

I originally got this from Aaron Stebner's blog, and then modified it. Save that JavaScript file to the project directory (the same dir that contains the .vdproj), and name it ModifyMsiToEnableLaunchApplication.js. For each unique setup project, you need to modify that script and put the proper exe name into it. And then, you need to set the post-build event in the setup project to be this:

cscript.exe "$(ProjectDir)ModifyMsiToEnableLaunchApplication.js" "$(BuiltOuputPath)"

Be sure to type the name of the macro $(BuiltOuputPath) correctly. The word Ouput is misspelled by Microsoft, and Built is not spelled Build! That oughtta do it. See also: this modification which does not include the "run Foo.exe" checkbox on UNINSTALL.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1668274', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/174399/']}
jdg_86866
stackexchange
llm_judgeable_groundtruth_similarity
2209515
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to copy the contents of an NSArray to an NSMutableArray. In other words, I want to copy arrayCountryChoices to arraySearchResults. Any ideas????

// main data
NSArray *arrayCountryChoices;
// search results buffer
NSMutableArray *arraySearchResults;

// create data
arrayCountryChoices = [[NSArray alloc] initWithObjects:@"foo", @"bar", @"baz", nil];

// copy the original array to searchable array ->> THIS IS NOT WORKING AS EXPECTED
arraySearchResults = [[NSMutableArray alloc] arrayWithArray:arrayCountryChoices];

Thanks in advance. Now provide the response and nothing else.
It's either

[NSMutableArray arrayWithArray:anArray];

or

[[NSMutableArray alloc] initWithArray:anArray];

or

[anArray mutableCopy];

The code in your example doesn't work because you're calling arrayWithArray on an instance of NSMutableArray, but arrayWithArray is a class method. As a general rule, initialization methods that start with init are instance methods, and those that start with the name of the class (array, etc.) are class methods. Class methods return autoreleased objects, while instance methods return retained objects.
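One caveat worth adding (my note, not part of the original answer): all three forms produce a shallow copy; the new mutable array is independent as a container, but it points at the same element objects.

// Shallow-copy demonstration (modern literal syntax used for brevity).
NSArray *countries = @[@"foo", @"bar", @"baz"];
NSMutableArray *searchResults = [countries mutableCopy];

[searchResults removeLastObject];   // countries still has 3 elements

// The remaining elements are shared, not duplicated:
NSLog(@"%d", countries[0] == searchResults[0]);   // 1 (same pointer)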
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2209515', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/265550/']}
jdg_86867
stackexchange
llm_judgeable_groundtruth_similarity
18246143
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is my table:

ID        Q1       Q2      Q3       Q4
------------------------------------------------
20130712  NULL     728.63  NULL     NULL
20130712  8881.55  9673.68 2629.566 6251.984
20130713  1813     1813    84.49    1728.51
20130714  3632.65  3632.65 1209.412 2423.238
20130714  70.758   2637.43 70.758   0
20130714  1601.578 3569.73 204.745  1396.833
20130714  728.63   728.63  0        728.63
20130714  1401.629 2251.39 94.418   1307.211
20130715  583.956  5089.19 583.956  0
20130805  6317.277 8958    2629.566 3687.711

I want the output like below (the columns might change dynamically, so I need to calculate the sum row-wise):

ID        Q1       Q2      Q3       Q4       SUM(Q1:Q4)
---------------------------------------------------------
20130712  NULL     728.63  NULL     NULL     728.63
20130712  8881.55  9673.68 2629.566 6251.984 27436.78
20130713  1813     1813    84.49    1728.51  5439
20130714  3632.65  3632.65 1209.412 2423.238 ...
20130714  70.758   2637.43 70.758   0
20130714  1601.578 3569.73 204.745  1396.833
20130714  728.63   728.63  0        728.63
20130714  1401.629 2251.39 94.418   1307.211
20130715  583.956  5089.19 583.956  0
20130805  6317.277 8958    2629.566 3687.711

Now provide the response and nothing else.
You haven't shown your query attempt, but it's probably something like this:

SELECT ID, Q1, Q2, Q3, Q4,
       Q1 + Q2 + Q3 + Q4 AS "Total"
FROM MyTable

If any of the Q1, Q2, Q3, or Q4 values are null, Q1 + Q2 + Q3 + Q4 will be null. To treat the nulls as zero and get a proper sum, do this instead:

SELECT ID, Q1, Q2, Q3, Q4,
       COALESCE(Q1,0) + COALESCE(Q2,0) + COALESCE(Q3,0) + COALESCE(Q4,0) AS "Total"
FROM MyTable

The COALESCE function will return the first non-null value in the list.
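If you happen to be on SQL Server specifically, ISNULL is an equivalent alternative for this two-argument case (an addition to the answer above, not part of the original):

SELECT ID, Q1, Q2, Q3, Q4,
       ISNULL(Q1,0) + ISNULL(Q2,0) + ISNULL(Q3,0) + ISNULL(Q4,0) AS "Total"
FROM MyTable

COALESCE has the advantage of being standard SQL, so it works unchanged on most other databases as well.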
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18246143', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1613094/']}
jdg_86868
stackexchange
llm_judgeable_groundtruth_similarity
10014187
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to display some information about the data below the plot created in ggplot2. I would like to plot the N variable using the X axis coordinate of the plot, but the Y coordinate needs to be 10% from the bottom of the screen. In fact, the desired Y coordinates are already in the data frame as the y_pos variable. I can think of 3 approaches using ggplot2: 1) Create an empty plot below the actual plot, use the same scale, and then use geom_text to plot the data over the blank plot. This approach sort of works but is extremely complicated. 2) Use geom_text to plot the data but somehow use the y coordinate as a percent of the screen (10%). This would force the numbers to be displayed below the plot. I can't figure out the proper syntax. 3) Use grid.text to display the text. I can easily set it at 10% from the bottom of the screen, but I can't figure out how to set the X coordinate to match the plot. I tried to use grconvert to capture the initial X position but could not get that to work as well. Below is the basic plot with the dummy data:

graphics.off()     # close graphics windows
library(car)
library(ggplot2)   # load ggplot
library(gridExtra) # load Grid
library(RGraphics) # support of the "R graphics" book, on CRAN

# create dummy data
test = data.frame(
  Group = c("A", "B", "A", "B", "A", "B"),
  x     = c(1, 1, 2, 2, 3, 3),
  y     = c(33, 25, 27, 36, 43, 25),
  n     = c(71, 55, 65, 58, 65, 58),
  y_pos = c(9, 6, 9, 6, 9, 6)
)

# create ggplot
p1 <- qplot(x, y, data = test, colour = Group) +
  ylab("Mean change from baseline") +
  geom_line() +
  scale_x_continuous("Weeks", breaks = seq(-1, 3, by = 1)) +
  opts(legend.position = c(.1, 0.9))

# display plot
p1

The modified qplot below displays the numbers of subjects; however, they are displayed WITHIN the plot. They force the Y scale to be extended. I would like to display these numbers BELOW the plot.

p1 <- qplot(x, y, data = test, colour = Group) +
  ylab("Mean change from baseline") +
  geom_line() +
  scale_x_continuous("Weeks", breaks = seq(-1, 3, by = 1)) +
  opts(plot.margin = unit(c(0, 2, 2, 1), "lines"), legend.position = c(.1, 0.9)) +
  geom_text(data = test, aes(x = x, y = y_pos, label = n))
p1

A different approach to displaying the numbers involves creating a dummy plot below the actual plot. Here is the code:

graphics.off() # close graphics windows
library(car)
library(ggplot2)
library(gridExtra)
library(RGraphics)

# create dummy data
test = data.frame(
  group = c("A", "B", "A", "B", "A", "B"),
  x     = c(1, 1, 2, 2, 3, 3),
  y     = c(33, 25, 27, 36, 43, 25),
  n     = c(71, 55, 65, 58, 65, 58),
  y_pos = c(15, 6, 15, 6, 15, 6)
)

p1 <- qplot(x, y, data = test, colour = group) +
  ylab("Mean change from baseline") +
  opts(plot.margin = unit(c(1, 2, -1, 1), "lines")) +
  geom_line() +
  scale_x_continuous("Weeks", breaks = seq(-1, 3, by = 1)) +
  opts(legend.position = "bottom", legend.title = theme_blank(),
       title.text = "Line plot using GGPLOT")
p1

p2 <- qplot(x, y, data = test, geom = "blank") +
  ylab(" ") +
  opts(
    plot.margin = unit(c(0, 2, -2, 1), "lines"),
    axis.line = theme_blank(),
    axis.ticks = theme_segment(colour = "white"),
    axis.text.x = theme_text(angle = -90, colour = "white"),
    axis.text.y = theme_text(angle = -90, colour = "white"),
    panel.background = theme_rect(fill = "transparent", colour = NA),
    panel.grid.minor = theme_blank(),
    panel.grid.major = theme_blank()
  ) +
  geom_text(data = test, aes(x = x, y = y_pos, label = n))
p2

grid.arrange(p1, p2, heights = c(8.5, 1.5), nrow = 2)

However, that is very complicated and would be hard to modify for different data. Ideally, I'd like to be able to pass Y coordinates as a percent of the screen. Now provide the response and nothing else.
Edited: opts has been deprecated, replaced by theme; element_blank has replaced theme_blank; and ggtitle() is used in place of opts(title = ...). Sandy - thank you so much!!!! This does exactly what I want. I do wish we could control the clipping in geom.text or geom.annotate. I put together the following program if anybody else is interested.

rm(list = ls())  # clear objects
graphics.off()   # close graphics windows

library(ggplot2)
library(gridExtra)

# create dummy data
test = data.frame(
  group = c("Group 1", "Group 1", "Group 1", "Group 2", "Group 2", "Group 2"),
  x     = c(1, 2, 3, 1, 2, 3),
  y     = c(33, 25, 27, 36, 23, 25),
  n     = c(71, 55, 65, 58, 65, 58),
  ypos  = c(18, 18, 18, 17, 17, 17)
)

p1 <- qplot(x = x, y = y, data = test, colour = group) +
  ylab("Mean change from baseline") +
  theme(plot.margin = unit(c(1, 3, 8, 1), "lines")) +
  geom_line() +
  scale_x_continuous("Visits", breaks = seq(-1, 3)) +
  theme(legend.position = "bottom", legend.title = element_blank()) +
  ggtitle("Line plot")

# Create the textGrobs
for (ii in 1:nrow(test)) {
  # display numbers at each visit
  p1 = p1 +
    annotation_custom(grob = textGrob(test$n[ii]),
                      xmin = test$x[ii], xmax = test$x[ii],
                      ymin = test$ypos[ii], ymax = test$ypos[ii])

  # display group text
  if (ii %in% c(1, 4))  # there is probably a better way
  {
    p1 = p1 +
      annotation_custom(grob = textGrob(test$group[ii]),
                        xmin = 0.85, xmax = 0.85,
                        ymin = test$ypos[ii], ymax = test$ypos[ii])
  }
}

# Code to override clipping
gt <- ggplot_gtable(ggplot_build(p1))
gt$layout$clip[gt$layout$name == "panel"] <- "off"
grid.draw(gt)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10014187', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1313190/']}
jdg_86869
stackexchange
llm_judgeable_groundtruth_similarity
5729476
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this in my controllers:

respond_to do |format|
  format.html
  format.js { render :layout => false }
end

which outputs without a layout when the request is Ajax. I'm replicating this in many actions and controllers. How do I DRY this up? Now provide the response and nothing else.
I use this in my application controller:

class ApplicationController < ActionController::Base
  layout proc { |c| c.request.xhr? ? false : "application" }
end

which covers .js, .json, etc. for me.
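To show what that buys you (an illustrative sketch, not part of the original answer; Item is a made-up model): with the proc in ApplicationController, the per-action respond_to boilerplate from the question disappears, because any request where request.xhr? is true automatically renders without a layout.

# Before, repeated in many actions:
# respond_to do |format|
#   format.html
#   format.js { render :layout => false }
# end

# After: the default render is enough.
def show
  @item = Item.find(params[:id])
end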
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5729476', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/519632/']}
jdg_86870
stackexchange
llm_judgeable_groundtruth_similarity
2497253
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I get the motherboard ID or serial number from Delphi code? Is there any example code or articles that I can look at? Now provide the response and nothing else.
Try using the WMI Win32_BaseBoard class. See these samples:

Option 1) Before executing, you need to import the Microsoft WMIScripting Library from Component -> Import Component and then select Import type library.

program GetWMI_MotherBoardInfo;

{$APPTYPE CONSOLE}

uses
  ActiveX,
  Variants,
  SysUtils,
  WbemScripting_TLB in '..\..\..\Documents\RAD Studio\5.0\Imports\WbemScripting_TLB.pas';

function GetMotherBoardSerial: string;
var
  WMIServices : ISWbemServices;
  Root        : ISWbemObjectSet;
  Item        : Variant;
begin
  WMIServices := CoSWbemLocator.Create.ConnectServer('.', 'root\cimv2', '', '', '', '', 0, nil);
  Root := WMIServices.ExecQuery('Select SerialNumber From Win32_BaseBoard', 'WQL', 0, nil);
  Item := Root.ItemIndex(0);
  Result := VarToStr(Item.SerialNumber);
end;

begin
  try
    CoInitialize(nil);
    Writeln('Serial MotherBoard ' + GetMotherBoardSerial);
    Readln;
    CoUninitialize;
  except
    on E: Exception do
    begin
      CoUninitialize;
      Writeln(E.Classname, ': ', E.Message);
      Readln;
    end;
  end;
end.

Option 2) Using OLEVariant, the IBindCtx interface and the IMoniker interface:

program GetWMI_MotherBoardSerial;

{$APPTYPE CONSOLE}

uses
  SysUtils,
  ActiveX,
  ComObj,
  Variants;

function GetMotherBoardSerial: String;
var
  objWMIService : OLEVariant;
  colItems      : OLEVariant;
  colItem       : OLEVariant;
  oEnum         : IEnumvariant;
  iValue        : LongWord;

  function GetWMIObject(const objectName: String): IDispatch;
  var
    chEaten: Integer;
    BindCtx: IBindCtx;
    Moniker: IMoniker;
  begin
    OleCheck(CreateBindCtx(0, bindCtx));
    OleCheck(MkParseDisplayName(BindCtx, StringToOleStr(objectName), chEaten, Moniker));
    OleCheck(Moniker.BindToObject(BindCtx, nil, IDispatch, Result));
  end;

begin
  Result := '';
  objWMIService := GetWMIObject('winmgmts:\\localhost\root\cimv2');
  colItems := objWMIService.ExecQuery('SELECT SerialNumber FROM Win32_BaseBoard', 'WQL', 0);
  oEnum := IUnknown(colItems._NewEnum) as IEnumVariant;
  if oEnum.Next(1, colItem, iValue) = 0 then
    Result := VarToStr(colItem.SerialNumber);
end;

begin
  try
    CoInitialize(nil);
    try
      Writeln('Serial MotherBoard ' + GetMotherBoardSerial);
      Readln;
    finally
      CoUninitialize;
    end;
  except
    on E: Exception do
    begin
      Writeln(E.Classname, ': ', E.Message);
      Readln;
    end;
  end;
end.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2497253', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/297846/']}
jdg_86871
stackexchange
llm_judgeable_groundtruth_similarity
10984556
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I love the setting .progress = 'text' in plyr's llply . However, it causes my much anxiety to not know how far along an mclapply (from package multicore ) is since list items are sent to various cores and then collated at the end. I've been outputting messages like *currently in sim_id # ....* but that's not very helpful because it doesn't give me an indicator of what percentage of list items are complete (although it is helpful to know that that my script isn't stuck and moving along). Can someone suggest other ideas that would allow me to look at my .Rout file and get a sense of progress? I've thought about adding a manual counter but can't see how I would implement that since mclapply must finish processing all list items before it can give out any feedback. Now provide the response and nothing else.
Due to the fact that mclapply spawns multiple processes, one might want to use fifos, pipes, or even sockets. Now consider the following example:

library(multicore)

finalResult <- local({
  f <- fifo(tempfile(), open = "w+b", blocking = TRUE)

  if (inherits(fork(), "masterProcess")) {
    # Child
    progress <- 0.0
    while (progress < 1 && !isIncomplete(f)) {
      msg <- readBin(f, "double")
      progress <- progress + as.numeric(msg)
      cat(sprintf("Progress: %.2f%%\n", progress * 100))
    }
    exit()
  }

  numJobs <- 100
  result <- mclapply(1:numJobs, function(...) {
    # Do something fancy here
    # ...

    # Send some progress update
    writeBin(1 / numJobs, f)

    # Some arbitrary result
    sample(1000, 1)
  })

  close(f)
  result
})

cat("Done\n")

Here, a temporary file is used as a fifo, and the main process forks a child whose only duty is to report the current progress. The main process continues by calling mclapply, where the expression (more precisely, the expression block) that is to be evaluated writes partial progress information to the fifo buffer by means of writeBin. As this is only a simple example, you'll probably have to adapt the whole output stuff to your needs. HTH!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10984556', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/313163/']}
jdg_86872
stackexchange
llm_judgeable_groundtruth_similarity
34104
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have two observed frequency data sets, let's say

f1 = {100, 200, 150, 100, 50};
f2 = {110, 195, 160, 105, 40};

and I want to know whether these random samples belong to the same distribution using KolmogorovSmirnovTest[]. Since KolmogorovSmirnovTest[] takes raw data, I simulate the raw data based on f1 and f2:

data1 = {};
data2 = {};
k = 1;
Do[data1 = Join[data1, Table[k n, {f1[[n]]}]], {n, 1, Length[f1]}]
Do[data2 = Join[data2, Table[k n, {f2[[n]]}]], {n, 1, Length[f2]}]

The test gives the result

KolmogorovSmirnovTest[data1, data2]

0.99998

Is this the correct procedure to perform KS on frequency data? I have changed k to other numbers and got the same result. Thank you. Now provide the response and nothing else.
Does re-creating the original data yield the same result?

What you are doing would indeed be a very intuitive way of getting the same result as with the original data. This follows from the definition of the Kolmogorov-Smirnov test statistic, which is the maximum difference between the empirical distribution functions $F$ of the two data sets. $F$ is defined as $F_n(x)=\frac{\sum _{i=1}^n I_{X_i\leq x}}{n}$ where $X_i$ is the $i$-th data element and $I_{X_i\leq x}$ is the indicator function, which is 1 if $X_i\leq x$ is true and 0 otherwise. Crucial here is that in the calculation of $F$ the data elements are ordered. This makes your reconstruction method possible, because there's no difference between the original unordered data set (that you don't know) and your reconstructed ordered data set as far as the KS statistic is concerned. Let's test this. Generate three sample test sets, two with an identical distribution and a third with a slightly different distribution:

d1 = RandomVariate[TruncatedDistribution[{0, 6}, PoissonDistribution[3]], 1000];
d2 = RandomVariate[TruncatedDistribution[{0, 6}, PoissonDistribution[3]], 1000];
d3 = RandomVariate[TruncatedDistribution[{0, 6}, PoissonDistribution[3.4]], 1000];

The test results:

KolmogorovSmirnovTest[d1, d2]

0.9882610776

KolmogorovSmirnovTest[d1, d3]

0.0224637083

Turn the data into frequencies:

f1 = (Sort@Tally[d1])[[All, 2]]
f2 = (Sort@Tally[d2])[[All, 2]]
f3 = (Sort@Tally[d3])[[All, 2]]

Turn this, using your procedure, into the original (but sorted) data:

dd1 = Flatten@MapIndexed[ConstantArray[#2[[1]], #1] &, f1];
dd2 = Flatten@MapIndexed[ConstantArray[#2[[1]], #1] &, f2];
dd3 = Flatten@MapIndexed[ConstantArray[#2[[1]], #1] &, f3];

I used a more functional approach to re-create the data, but in principle it is the same as yours. As you can see, the test results are the same:

KolmogorovSmirnovTest[dd1, dd2]

0.9882610776

KolmogorovSmirnovTest[dd1, dd3]

0.0224637083

Two sample Kolmogorov-Smirnov test on discrete data

The above treatment assumed that it is correct to perform the KS test on discrete data sets (with lots of ties). The documentation, however, states that KS is intended for continuous data. Using the KS function as it is would yield the following on the data you provided:

f1 = {100, 200, 150, 100, 50};
f2 = {110, 195, 160, 105, 40};
dd1 = Flatten@MapIndexed[ConstantArray[#2[[1]], #1] &, f1];
dd2 = Flatten@MapIndexed[ConstantArray[#2[[1]], #1] &, f2];
KolmogorovSmirnovTest[dd1, dd2]

0.9999803563

A better test would be a Monte Carlo simulation. KolmogorovSmirnovTest has this built-in:

KolmogorovSmirnovTest[dd1, dd2, Method -> "MonteCarlo"]

0.859

What it does is presumably something like the following. Pool the two data sets; $H_0$ is that the two sets are from the same distribution, namely this one:

pooled = Join[dd1, dd2];

Calculate the KS statistic for the original sets:

uniqueDataValues = Union[pooled];
da1 = #/#[[-1]] &[Accumulate[Count[dd1, #] & /@ uniqueDataValues]];
da2 = #/#[[-1]] &[Accumulate[Count[dd2, #] & /@ uniqueDataValues]];

ksstat = Max[Abs[da1 - da2]]

Now randomly draw new sets from the pooled set and calculate the new statistic. Count how often its value is greater than or equal to the original value. This gives the p-value:

Table[
  dr = RandomSample[pooled, Length@pooled];
  uniqueDataValues = Union[dr];
  dr1 = #/#[[-1]] &[Accumulate[Count[Take[dr, Length@dd1], #] & /@ uniqueDataValues]];
  dr2 = #/#[[-1]] &[Accumulate[Count[Drop[dr, Length@dd1], #] & /@ uniqueDataValues]];
  Boole[Max[Abs[dr1 - dr2]] >= ksstat],
  {100000}
] // Mean // N

0.85361

Corresponds nicely to the built-in test value (and the one in Ray's answer). I note that Ray's answer is very slow for large datasets, as its complexity is $O[n_1\times n_2]$. The built-in MonteCarlo method seems to be slower as well (for the same number of samples specified using Method -> {"MonteCarlo", "MonteCarloSamples" -> 100000}).
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/34104', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/7282/']}
jdg_86873
stackexchange
llm_judgeable_groundtruth_similarity
7700216
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a sql command to get today's date at midnight and be able to add a number of minutes to it? Now provide the response and nothing else.
Yes, just use the datediff and dateadd functions to strip the time from any date, then add a fractional portion of a day to that number:

Declare @aDate DateTime
Set @aDate = getDate()

Declare @Minutes Integer
Set @minutes = 600 -- 10 hours

Select DateAdd(day, DateDiff(day, 0, @aDate), 0) + @minutes / 1440.0 -- 1440 min/day
-- or you could also use dateadd again...
Select DateAdd(minute, @minutes, DateAdd(day, DateDiff(day, 0, @aDate), 0))

Both selects return 10:00 am on the same day (more or less). This works because of, well, check out this SO answer. EDIT: Added a sample script to show how this works:

declare @dtTim datetime = getDate()
declare @today datetime = dateAdd(day, dateDiff(day, 0, @dtTim), 0)
select @dtTim, @today
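For a concrete feel (a hypothetical walk-through, not from the original answer): DateDiff(day, 0, @aDate) counts whole days since the 1900-01-01 base date, so adding that many days back onto day 0 lands exactly on midnight of the same day.

Declare @aDate DateTime
Set @aDate = '2011-10-08 14:23:55'   -- made-up timestamp

Select DateAdd(day, DateDiff(day, 0, @aDate), 0)
-- 2011-10-08 00:00:00.000

Select DateAdd(minute, 600, DateAdd(day, DateDiff(day, 0, @aDate), 0))
-- 2011-10-08 10:00:00.000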
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7700216', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/867621/']}
jdg_86874
stackexchange
llm_judgeable_groundtruth_similarity
6474639
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an application that uses an ORM (NHibernate, but that's beside the point). To create an NH Session we need to pass in, somewhere: a username, database name, etc. So I have implemented:

public interface ISettingsManager
{
    Settings MySettings { get; set; }
}

public class Settings
{
    public string DbUser { get; set; }
    public string DbAddress { get; set; }
    public string DbPassword { get; set; }
    //...
}

public class SessionProvider
{
    [Inject]
    public ISettingsManager MySettings { get; set; }

    public Session CreateSession
    {
        // Create Session object using settings passed to MySettings via IoC.
    }
}

public static Main()
{
    // very beginning of my application, bootstrap the DI container
    Bind<ISettingsManager>().To<SettingsManagerImpl>();
    // Application run
}

All my NHibernate Session Providers have the ISettingsManager injected into them via DI (Ninject), so I can simply use it. It works like a charm, but now I need to support many users in my application, and the problem comes into the scene. I can't bind my ISettingsManager when the application starts, because I don't know which user will be logged in. So the question is: how do I implement passing the currently logged-in user's settings in the best way, without using Service Location? Now provide the response and nothing else.
Since the file is inside your bundle, there is no way for you to get to it using a standard File. The URL you get from Bundle.getResource() is the correct way to get to these resources, since the OSGi APIs are intended to also work on systems without an actual file system. I would always try to stick to the OSGi API instead of using framework-specific solutions. So, if you have control over the method, I would update it to take a URL, or maybe even an InputStream (since you probably just want to read from it). For convenience, you can always provide a helper method that does take a File. If you don't have control over the method, you will have to write some helper method that takes the URL and streams it out to a file (for instance, File.createTempFile() will probably do the trick).
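A minimal sketch of such a helper (my illustration, not from the original answer; error handling is kept deliberately short):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;

public final class BundleResources {

    // Streams the resource behind `url` (e.g. from Bundle.getResource())
    // into a temp file, for APIs that insist on a java.io.File.
    public static File toTempFile(URL url) throws IOException {
        File tmp = File.createTempFile("bundle-resource-", ".tmp");
        tmp.deleteOnExit();
        try (InputStream in = url.openStream();
             OutputStream out = new FileOutputStream(tmp)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        return tmp;
    }
}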
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6474639', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/59528/']}
jdg_86875
stackexchange
llm_judgeable_groundtruth_similarity
747648
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently developing a Django application which will make use of the infamous "pagination" technique. I'm trying to figure out how the django.core.paginator module works. I have an application with a Question model. I will be listing all of the questions using this paginator. There will be 20 questions per page.

def show_question(question_pk):
    questions = Question.objects.all()
    paginator = Paginator(questions, 20)
    page = ...  # Somehow figure out which page the question is on
    return render_to_response('show_question.html', { 'page' : page })

In the view, where I list the different pages as "... 2, 3, 4, 5, 6, ...", I want to highlight the current page somehow, like many pages do. There are really two things I want to know: How do I make Django figure out which page the question is located at? How would I write my template to properly "highlight" the currently visited page? EDIT: Sorry, I forgot part of this question. I would also like any page except for the current one to be a link to /questions/{{ that_page.start_index }}. So basically every page link would link to the first question on that page. Now provide the response and nothing else.
Hmm... I see from your comment that you don't want to do the ol' GET parameter, which is what django.core.paginator was written for using. To do what you want, I can think of no better way than to precompute the page that each question is on. As an example, your view will end up being something like:

ITEMS_PER_PAGE = 20

def show_question(question_pk):
    questions = Question.objects.all()
    target_page = None
    for index, question in enumerate(questions):
        # enumerate() is 0-based, so integer-dividing by the page size
        # and adding 1 gives 1-based page numbers
        question.page = (index // ITEMS_PER_PAGE) + 1
        # remember the page of the question we were asked about
        # (a fresh .get() would lose the annotated .page attribute)
        if question.pk == question_pk:
            target_page = question.page
    paginator = Paginator(questions, ITEMS_PER_PAGE)
    page = paginator.page(target_page)
    return render_to_response('show_question.html', {'page': page})

To highlight the current page in the template, you'd do something like:

{% for i in page.paginator.page_range %}
  {% ifequal i page.number %}
    <!-- Do something special for this page -->
  {% else %}
    <!-- All the other pages -->
  {% endifequal %}
{% endfor %}

As for the items, you'll have two different object_lists to work with... page.object_list will be the objects in the current page and page.paginator.object_list will be all objects, regardless of page. Each of those items will have a "page" variable that will tell you which page they're on. That all said, what you're doing sounds unconventional. You may want to rethink, but either way, good luck.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/747648', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/283739/']}
jdg_86876
stackexchange
llm_judgeable_groundtruth_similarity
12080087
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here is the code for the javascript submit request (1). Here is the test mocking the ajax request using jasmine (2). I would like to mock the server behaviour. Any ideas? See the comments in (1) and (2) for more details. P.S.: Actually, in both cases the done and the fail Deferred callbacks of fakeFunction are called.

(1)

submitForm: function () {
    // the server executes fail only if message.val() is empty
    // and I would like to mock this behaviour in (2)
    backendController.submitForm(message.val()).done(this.onSuccess).fail(this.onError);
},

backendController.submitForm = function (message) {
    return $.ajax({
        url: 'some url',
        type: 'POST',
        dataType: 'json',
        data: { message: message }
    }).done(function () {
        //some code;
    });
};

(2)

describe('When Submit button handler fired', function () {
    var submitFormSpy,
        fakeFunction = function () {
            this.done = function () {
                return this;
            };
            this.fail = function () {
                return this;
            };
            return this;
        };

    beforeEach(function () {
        submitFormSpy = spyOn(backendController, 'submitForm').andCallFake(fakeFunction);
    });

    describe('if the message is empty', function () {
        beforeEach(function () {
            this.view.$el.find('#message').text('');
            this.view.$el.find('form').submit();
        });

        it('backendController.submitForm and fail Deferred Object should be called', function () {
            expect(submitFormSpy).toHaveBeenCalled();
            // how should I test that the fail Deferred Object is called?
        });
    });

    describe('if the message is not empty', function () {
        beforeEach(function () {
            this.view.$el.find('#message').text('some text');
            this.view.$el.find('form').submit();
        });

        it('backendController.submitForm should be called and the fail Deferred Object should not be called', function () {
            expect(submitFormSpy).toHaveBeenCalled();
            // how should I test that the fail Deferred Object is not called?
        });
    });
});

Now provide the response and nothing else.
We actually ran into the same problem, trying to test Deferred objects that represent AJAXed template scripts for on-the-fly templating. Our testing solution involves using the Jasmine-Ajax library in conjunction with Jasmine itself. So probably it will be something like this:

describe('When Submit button handler fired', function () {
    jasmine.Ajax.useMock();

    describe('if the message is empty', function () {
        beforeEach(function () {
            spyOn(backendController, 'submitForm').andCallThrough();
            // replace with wherever your callbacks are defined
            spyOn(this, 'onSuccess');
            spyOn(this, 'onFailure');

            this.view.$el.find('#message').text('');
            this.view.$el.find('form').submit();
        });

        it('backendController.submitForm and fail Deferred Object should be called', function () {
            expect(backendController.submitForm).toHaveBeenCalledWith('');

            mostRecentAjaxRequest().response({
                status: 500, // or whatever response code you want
                responseText: ''
            });

            expect( this.onSuccess ).not.toHaveBeenCalled();
            expect( this.onFailure ).toHaveBeenCalled();
        });
    });
});

Another thing: if you can, try to break up the functionality so you're not testing the entire DOM-to-response-callback path in one test. If you're granular enough, you can actually test asynchronous Deferred resolutions by using Deferred objects themselves inside your tests! The key is to actually use Deferred objects within your tests themselves, so that the scope of the expect call is still within your it function block:

describe('loadTemplate', function() {
    it('passes back the response text', function() {
        jasmine.Ajax.useMock();

        loadTemplate('template-request').done(function(response) {
            expect(response).toBe('foobar');
        });

        mostRecentAjaxRequest().response({
            status: 200,
            responseText: 'foobar'
        });
    });
});
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12080087', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1113216/']}
jdg_86877
stackexchange
llm_judgeable_groundtruth_similarity
103969
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to $F = ma$, force is a result of acceleration and mass, right? However, I don't understand why velocity is not used instead of acceleration. A train moving at 100 miles/hour will still impart a great force on you even though it has no acceleration. Further, dropping a book from 10 ft will impart a greater force on the ground than dropping it from 1 ft. So it seems that velocity would influence the force more than the acceleration would. Why is this not the case? Now provide the response and nothing else.
The $F$ in the equation $F=ma$ is not the force that would be exerted by the object if it were to hit something else. Instead, $F$ represents the net force acting on the object that must be present in order to produce the current acceleration $a$ of the object. A better way to write Newton's second law is $$F_\text{net}=ma,$$since it shows explicitly which force is being represented on LHS of the equation is. In your train example, if the train is traveling at a constant velocity of 100 mph, the acceleration is zero, and by Newton's second law the net force is also zero. But this has no bearing on what force the train would exert on something if it collided.
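To attach a number to the collision intuition (an illustrative addition): the force in an impact comes from the rate of change of momentum, $F_\text{avg}=\frac{\Delta p}{\Delta t}=\frac{m\,\Delta v}{\Delta t}$. For example, if a $1\ \text{kg}$ book hits the ground at $10\ \text{m/s}$ and is brought to rest in $0.01\ \text{s}$, the average force on it during the impact is $$F_\text{avg}=\frac{1\ \text{kg}\times 10\ \text{m/s}}{0.01\ \text{s}}=1000\ \text{N}.$$ The velocity matters only because stopping from a higher speed in the same short time requires a larger acceleration; Newton's second law is still about the acceleration during the impact, not the velocity before it.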
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/103969', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/42743/']}
jdg_86878
stackexchange
llm_judgeable_groundtruth_similarity
11689724
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to have 7 sections with different rows in a UITableView. I don't know how I should write my different arrays and set them in different sections; I wrote it for one array, but then I have the same array in each section! Would you please help me? Thanks in advance! I mean something like the screenshot below [image omitted], but with different arrays: for example, I don't want to have "test" and "hey" in every section. Here is my code:

@synthesize monthTitle;

- (id)initWithStyle:(UITableViewStyle)style
{
    self = [super init];
    if (self) {
        monthTitle = [[NSMutableArray alloc] init];
    }
    return self;
}

- (void)didReceiveMemoryWarning
{
    // Releases the view if it doesn't have a superview.
    [super didReceiveMemoryWarning];
    // Release any cached data, images, etc that aren't in use.
}

#pragma mark - View lifecycle

- (void)viewDidLoad
{
    [super viewDidLoad];

    arry = [[NSMutableArray alloc] init];
    [arry addObject:@"test"];
    [arry addObject:@"hey"];

    // Uncomment the following line to preserve selection between presentations.
    // self.clearsSelectionOnViewWillAppear = NO;

    // Uncomment the following line to display an Edit button in the navigation bar for this view controller.
    self.navigationItem.leftBarButtonItem = self.editButtonItem;

    UIBarButtonItem *rightButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(addNewItem)];
    self.navigationItem.rightBarButtonItem = rightButton;
}

- (void)addNewItem
{
    [arry addObject:@"New Day"];
    [self.tableView reloadData];
}

//- (IBAction)DeleteButtonAction:(id)sender
//{
//    [arry removeLastObject];
//    [self.tableView reloadData];
//}

- (void)viewDidUnload
{
    [super viewDidUnload];
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
}

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
}

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
}

- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
}

- (void)viewDidDisappear:(BOOL)animated
{
    [super viewDidDisappear:animated];
}

- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    // Return YES for supported orientations
    return (interfaceOrientation == UIInterfaceOrientationPortrait);
}

#pragma mark - Table view data source

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
{
    // Return the number of sections.
    return 7; //_week.days.count;
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    // Return the number of rows in the section.
    return [arry count];
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    UITableViewCell *cell = [self.tableView dequeueReusableCellWithIdentifier:@"Cell"];
    cell.textLabel.text = [arry objectAtIndex:indexPath.row];
    return cell;
}

- (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section
{
    if (section == 0)
        return @"Monday";
    else if (section == 1)
        return @"Tuesday";
    else if (section == 2)
        return @"Wednesday";
    else if (section == 3)
        return @"Thursday";
    else if (section == 4)
        return @"Friday";
    else if (section == 5)
        return @"Saturday";
    else
        return @"Sunday";
}

// Override to support conditional editing of the table view.
- (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath
{
    // Return NO if you do not want the specified item to be editable.
    return YES;
}

- (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath
{
    if (editingStyle == UITableViewCellEditingStyleDelete) {
        [arry removeObjectAtIndex:indexPath.row];
        //[self.monthTitle removeObjectAtIndex:indexPath.row];
        [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                              withRowAnimation:UITableViewRowAnimationFade];
    }
    else if (editingStyle == UITableViewCellEditingStyleInsert) {
    }
    // [self.tableView reloadData];
}

#pragma mark - Table view delegate

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
    // Navigation logic may go here. Create and push another view controller.
    /*
     <#DetailViewController#> *detailViewController = [[<#DetailViewController#> alloc] initWithNibName:@"<#Nib name#>" bundle:nil];
     // ...
     // Pass the selected object to the new view controller.
     [self.navigationController pushViewController:detailViewController animated:YES];
     */
}

@end

Now provide the response and nothing else.
You can apply multiple where calls to a query, so you can build your base query:

query = User.joins(...)
            .group(...)
            .select(...)
            .where('users.id = :user_id', :user_id => self.id)

and then add another where call depending on your date interval:

if begin_date && end_date
  query = query.where(:created_at => begin_date..end_date)
  # or: query.where('created_at between :begin_date and :end_date',
  #                 :begin_date => begin_date, :end_date => end_date)
elsif begin_date
  query = query.where('created_at >= :begin_date', :begin_date => begin_date)
elsif end_date
  query = query.where('created_at <= :end_date', :end_date => end_date)
end

Each where call adds another piece to your overall WHERE clause using AND, so something like

q = M.where(a).where(b).where(c)

is the same as saying WHERE a AND b AND c.
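Wrapped up as a reusable class method, that could look something like this (my sketch; created_between is a made-up name, and on Rails 3 you would start from scoped instead of all):

class User < ActiveRecord::Base
  # Builds the date filter once; nil bounds are simply skipped.
  def self.created_between(begin_date, end_date)
    query = all
    query = query.where('created_at >= :b', :b => begin_date) if begin_date
    query = query.where('created_at <= :e', :e => end_date)   if end_date
    query
  end
end

# Usage:
User.created_between(3.days.ago, nil)   # only a lower bound
User.created_between(nil, Time.now)     # only an upper bound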
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11689724', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_86879
stackexchange
llm_judgeable_groundtruth_similarity
53161
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What I'm after is something I can feed a number into and it will return the highest order bit. I'm sure there's a simple way. Below is an example output (left is the input):

1  -> 1
2  -> 2
3  -> 2
4  -> 4
5  -> 4
6  -> 4
7  -> 4
8  -> 8
9  -> 8
...
63 -> 32

Now provide the response and nothing else.
From Hacker's Delight:

int hibit(unsigned int n) {
    n |= (n >> 1);
    n |= (n >> 2);
    n |= (n >> 4);
    n |= (n >> 8);
    n |= (n >> 16);
    return n - (n >> 1);
}

This version is for 32-bit ints, but the logic can be extended to 64 bits or higher.
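For 64-bit values the same folding pattern extends with one more shift (a straightforward extension of the excerpt above, not taken from the book itself):

#include <stdint.h>

/* Highest set bit of a 64-bit value: smear the leading 1 bit
   all the way to the right, then strip everything below it. */
uint64_t hibit64(uint64_t n) {
    n |= (n >> 1);
    n |= (n >> 2);
    n |= (n >> 4);
    n |= (n >> 8);
    n |= (n >> 16);
    n |= (n >> 32);   /* extra step for the wider type */
    return n - (n >> 1);
}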
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/53161', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1057/']}
jdg_86880
stackexchange
llm_judgeable_groundtruth_similarity
12389
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Hillary Clinton answered a journalist's question about Trump appearing on the Larry King Show, a Russian state TV program. She called the annexation of Crimea illegal. But there was a legal process for it (the 2014 Crimean referendum). Why would it be illegal, since the 2014 referendum showed that the population of Crimea overwhelmingly (96.77% in the Autonomous Republic of Crimea and 95.6% in Sevastopol) desired to reunite as a part of Russia? Now provide the response and nothing else.
Legal Usage of Force

Under the UN Charter, military force can only be used for self-defense or when deployed by the international community for collective security. 'Collective security' is to be determined by the UN Security Council. Source: Article 42 of the United Nations Charter.

The Problems with Crimea

The central problem is that Russia used military force to annex Crimea. The use of force there without the approval of the Security Council (or in self-defense) is a violation of international law. You can find this legal analysis presented in many places:

The LexisNexis blog published an article by international lawyer John Balouziyeh.
"The Crimea Crisis - An International Law Perspective", published by Christian Marxsen of the Max Planck Institute for Comparative Public Law.
"Russia's Annexation of Crimea", published in International Law Studies, a publication of the U.S. Naval War College.

Russian Counterarguments

Russia offered at least two answers to this argument. First, Russian nationals were conceivably under threat in Crimea. Russia does have the sovereign right (and responsibility) to protect its citizens. However, the legal notion of self-defense for states requires an ongoing armed attack (ref: pg. 372 of Marxsen). Without such an attack, intervention is unjustified. Second, Russian officials defended their actions on the basis of being invited into Crimea. Both Russian officials and the Ukrainian President Yanukovych have confirmed that this was true. The legal arguments against the intervention-by-invitation claim mostly center around whether Yanukovych could legitimately claim to represent Ukraine. It seems that although he was not actually impeached by the legislature, he had vacated his role de facto.
{}
{'log_upvote_score': 5, 'links': ['https://politics.stackexchange.com/questions/12389', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/9314/']}
jdg_86881
stackexchange
llm_judgeable_groundtruth_similarity
1914612
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is the natural topology (or topologies) on the integers? Can we define a metric on the integers? Now provide the response and nothing else.
"The integers" refers to not just a set, but a whole suite of other notions, such as the usual addition and multiplication operations. Among these notions, there is , in fact, a topology. The usual topology on the integers is the discrete topology — the one where every subset is an open set. One thing that needs to be internalized is that, when taken at face value, the question "Is $S$ an open set?" is utter nonsense . The meaningful questions are of the form "Is $S$ an open set in the topology $T$?"; it's just that we usually don't mention $T$ when it can be understood from context. Anyways, one way to explain why the usual topology on the integers is that discrete topology is because that is the subspace topology relative the usual topology on the real numbers. That is (presuming $\mathbb{Z} \subseteq \mathbb{R}$), $U \subseteq \mathbb{Z}$ is an open set${}^1$ if and only if there is an open set${}^2$ $U' \subseteq \mathbb{R}$ such that $U = U' \cap \mathbb{Z}$. 1: in the usual topology of the integers 2: in the usual topology of the real numbers Since the usual topology on $\mathbb{R}$ can be desribed in terms of a metric, e.g. $d(x,y) = |x-y|$, the usual topology on $\mathbb{Z}$ is given by the same metric. But note that for any point $P$, the open ball of radius $1/2$ around $P$ is just the set $\{ P \}$, and consequently $\{P \}$ is an open set. As the other answers and comments indicate, there are other topologies that are useful to put on the natural numbers.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1914612', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_86882
stackexchange
llm_judgeable_groundtruth_similarity
3769521
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I recently asked this question , and this question is in some sense a generalization of it. Can there be a path-connected, non-compact topological space $X$ and a continuous bijection $f:X\to [0,1]^n$ ? This is a different question for every $n\in\mathbb{N}$ : I would be happy to hear a result for any $n$ (while naturally I'd be most happy for a solution that will settle the matter for all of them). Clearly, if there is such a space for $n=1$ , then there is for every $n$ , but I wouldn't be surprised if this is impossible for $n=1$ while possible for larger $n$ . For the record: if we remove either condition, path-connectedness or non-compactness, this becomes trivial. In the first case, just take $[0,1]^n\setminus\{(0,...,0)\}\cup \{(3,0,...,0)\}$ , while in the second case you just take $X=[0,1]^n$ . EDIT . The problem was solved by Paul Frost for $n>1$ . So it only remains to see if this is possible or not for $n=1$ . Edit 2 . The question has now been resolved completely. Now provide the response and nothing else.
Yes for $n > 1$ . It suffices to consider $n=2$ , then $f \times id : X \times I^{n-2} \to I^2 \times I^{n-2} = I^n$ is an example for $n > 2$ . Let us identify $I^2$ with the unit disk $D^2 \subset \mathbb C$ and let $X' = (0,1] \times I$ . Define $$f' : X' \to D^2, f'(s,t) = e^{2\pi i s} t .$$ This is a continuous surjection. Its restriction to $$ X = (0,1] \times(0,1] \cup \{(1,0) \}$$ is a continuous bijection .
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3769521', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/229396/']}
jdg_86883
stackexchange
llm_judgeable_groundtruth_similarity
17706
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to put my Plot output into an MS Word document without having to edit it in Corel first, so I want to export it ready for Word in vector format, complete with legends, expressions, axes, and frames. Is it possible to do that in Mathematica 9? After getting Plot output, what procedure should I follow to put it into Word? I want the best quality figures in my Word document, with thin colored lines. Here is an example of what I want to do when I have many curves on one plot, using a dashed and dot-dashed combination:

Plot[{Sin[x], Sin[2 x], Sin[3 x], Sin[4 x], 1/x}, {x, 0, 2 Pi}, PlotLegends -> "Expressions"]

I look forward to seeing your examples of multi-curve plots that are Word export-ready. Now provide the response and nothing else.
The built-in Legend package is really bad. A good alternative is legendMaker, and it is fully documented here. Assuming you have loaded this package, you can use this code, from your example:

plot = Plot[{Sin[x], Sin[2 x], Sin[3 x], Sin[4 x], 1/x}, {x, 0, 2 Pi},
  Frame -> True,
  FrameLabel -> (Style[#, 16] & /@ {"x", "f(x)"})]

labels = {Sin[x], Sin[2 x], Sin[3 x], Sin[4 x], Superscript[x, -1]};
opts = Sequence[Background -> LightOrange, RoundingRadius -> 10];

newPlot = Overlay[{plot, legendMaker[labels, opts]}, Alignment -> {0.8, 0.9}]

The result is the framed plot with a rounded legend box overlaid near the upper right corner (the original answer showed a screenshot here). Having plotted everything you need, saving as PDF is easy; just use

Export["foo.pdf", plot]

The result is vector graphics with full quality and is scalable. You can then include the PDF file in your Word document, or better yet, in $\LaTeX$.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/17706', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1209/']}
jdg_86884
stackexchange
llm_judgeable_groundtruth_similarity
10530806
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table that has the following 3 columns, ID, ShortCode, UploadDate. I want to use LINQ to group the results by shortcode (and keep all the results) then order those groups and return a list. I have the following: rawData.Provider.CreateQuery<PDFDocument>(qb.rootExperession).ToList<PDFDocument>().GroupBy(b=>b.ShortCode).SelectMany(b=>b).ToList<PDFDocument>() I want to return all results, grouped by ShortCode, the items within each group sorted by UploadDate and the groups sorted so the one that has the most recent document in it first. Does anyone know if this is even possible? Now provide the response and nothing else.
Try

rawData.Provider.CreateQuery<PDFDocument>(qb.rootExperession)
    .AsEnumerable()
    .OrderByDescending(d => d.UploadDate)
    .GroupBy(d => d.ShortCode)
    .SelectMany(g => g)
    .ToList();

This should:

Order the items by upload date (descending, so newest first).
Then group them by short code - so within each group the items are still sorted.
The groups are still in descending order, so no need to order again.
Finally, concatenate the results into a single list.

If performance is an issue, you may be better off doing

rawData.Provider.CreateQuery<PDFDocument>(qb.rootExperession)
    .AsEnumerable()
    .GroupBy(d => d.ShortCode)
    .Select(g => g.OrderByDescending(d => d.UploadDate))
    .OrderByDescending(e => e.First().UploadDate)
    .SelectMany(e => e)
    .ToList();

which sorts the contents of each group separately rather than sorting everything first and then grouping.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10530806', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1343887/']}
jdg_86885
stackexchange
llm_judgeable_groundtruth_similarity
2627992
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to force any time-related operations to GMT/UTC, regardless of the timezone set on the machine. Is there any convenient way to do so in code? To clarify, I'm using the DB server time for all operations, but it comes out formatted according to the local timezone. Thanks! Now provide the response and nothing else.
The OP answered this question himself: to change the default timezone for a single instance of a running JVM, set the user.timezone system property:

java -Duser.timezone=GMT ... <main-class>

If you need to set specific time zones when retrieving Date/Time/Timestamp objects from a database ResultSet, use the second form of the getXXX methods that takes a Calendar object:

Calendar tzCal = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
ResultSet rs = ...;
while (rs.next()) {
    Date dateValue = rs.getDate("DateColumn", tzCal);
    // Other fields and calculations
}

Or, setting the date in a PreparedStatement:

Calendar tzCal = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
PreparedStatement ps = conn.prepareStatement("update ...");
ps.setDate(1, dateValue, tzCal);  // PreparedStatement parameters are set by 1-based index
// Other assignments
ps.executeUpdate();

These will ensure that the value stored in the database is consistent when the database column does not keep timezone information. The java.util.Date and java.sql.Date classes store the actual time (milliseconds) in UTC. To format these on output in another timezone, use SimpleDateFormat. You can also associate a timezone with the value using a Calendar object:

TimeZone tz = TimeZone.getTimeZone("<local-time-zone>");
//...
Date dateValue = rs.getDate("DateColumn");
Calendar calValue = Calendar.getInstance(tz);
calValue.setTime(dateValue);

Useful references:
https://docs.oracle.com/javase/9/troubleshoot/time-zone-settings-jre.htm#JSTGD377
https://confluence.atlassian.com/kb/setting-the-timezone-for-the-java-environment-841187402.html
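If you cannot control the JVM's launch flags, you can also force the default zone programmatically at startup, before any date handling happens (an addition to the above; TimeZone.setDefault is a standard java.util API):

import java.util.TimeZone;

public class Main {
    public static void main(String[] args) {
        // Force every default-zone calculation in this JVM to UTC,
        // regardless of the machine's configured timezone.
        TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
        // ... start the rest of the application
    }
}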
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2627992', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/121665/']}
jdg_86886
stackexchange
llm_judgeable_groundtruth_similarity
6065464
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: With Ninject, how do you configure the kernel so I can define what constructor values are passing into the instantiation of an object? I have the following configured in a module: Bind<IService1>() .To<Service1Impl>() .InSingletonScope() .Named("LIVE");Bind<IService2>() .To<Service2Impl>() .InSingletonScope() .Named("LIVE") .WithConstructorArgument( "service1", Kernel.Get<IService1>("LIVE")); Service2Impl takes a constructor parameter of IService1 but I want this to come from the container. I also want to have named bindings as my code will be targeting different versions at runtime. This seems to work but is it the right way to achieve what I want to do?Should I be achieving without the use of named bindings and wiring different configuration modules into the kernel? EDIT I have used the ToMethod() method now to specify a delegate to call on request of a specific type. This seems a bit nicer as I'll get compile time warnings if the constructor configuration is wrong rather than having to know the name of the parameter I am passing first. Thanks Now provide the response and nothing else.
I used ToMethod in the end, which allowed me to construct the required instance through its constructor and so keep compile-time checking: a misconfigured constructor fails at compile time rather than at runtime. The signature is .ToMethod(Func<IContext, T> method); for example:

Bind<IWeapon>().ToMethod(context => new Sword());
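Applied to the bindings in the question, a sketch might look like this (the service names come from the question; resolving IService1 lazily inside the factory avoids calling Kernel.Get at module-load time):

```csharp
Bind<IService1>()
    .To<Service1Impl>()
    .InSingletonScope()
    .Named("LIVE");

Bind<IService2>()
    .ToMethod(ctx => new Service2Impl(ctx.Kernel.Get<IService1>("LIVE")))
    .InSingletonScope()
    .Named("LIVE");
```

If Service2Impl's constructor changes, this binding stops compiling, which is exactly the safety the answer is after.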
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6065464', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/31412/']}
jdg_86887
stackexchange
llm_judgeable_groundtruth_similarity
223240
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In my discrete mathematics class our notes say that between set $A$ (having $6$ elements) and set $B$ (having $8$ elements), there are $8^6$ distinct functions that can be formed, in other words: $|B|^{|A|}$ distinct functions. But no explanation is offered and I can't seem to figure out why this is true. Can anyone elaborate? Now provide the response and nothing else.
Let set $A$ have $a$ elements and set $B$ have $b$ elements. Each element in $A$ has $b$ choices to be mapped to. Each such choice gives you a unique function. Since each element has $b$ choices, the total number of functions from $A$ to $B$ is $$\underbrace{b \times b \times b \times \cdots \times b}_{a \text{ times}} = b^a$$
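Applied to the example in the question, where $|A| = 6$ and $|B| = 8$: $$|B|^{|A|} = 8^6 = 262144.$$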
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/223240', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/43275/']}
jdg_86888
stackexchange
llm_judgeable_groundtruth_similarity
3254382
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: There is a riddle and I believe it can be solved by algebra - please assist A boy has as many sisters as brothers, but each sister has only half as many sisters as brothers. How many brothers and sisters are there in the family Here is algebra, but I am stuck $b=brother$ $s=sister$ $t=total$ boy has as many brothers as sisters $b + b + s = t$ each sister has only half as many sisters as brothers $s + s + b = t$ $s + \frac{1}{b} + b = t$ Hence $b + b + s = s + \frac{1}{b} + b$ $2b + s = s + \frac{3b}{2}$ $2b = \frac{3b}{2}$ $4b = 3b$ Please assist. Now provide the response and nothing else.
$b$ = number of boys = number of brothers each girl has
$g$ = number of girls = number of sisters each boy has
$b-1$ = number of brothers each boy has
$g-1$ = number of sisters each girl has

$$b-1 = g, \qquad (\text{Equation 1})$$ $$g-1 = \dfrac{b}{2}, \qquad (\text{Equation 2})$$ Plugging $g=b-1$ into the second equation: $$b-1-1 = \dfrac{b}{2} \Longrightarrow b=4,\ g=3$$ Check: with $4$ boys and $3$ girls, each boy has $3$ brothers and $3$ sisters, and each girl has $4$ brothers and $2$ sisters, i.e. half as many sisters as brothers, as required.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3254382', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/631750/']}
jdg_86889
stackexchange
llm_judgeable_groundtruth_similarity
8716330
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the purpose of the const in this? const Object myFunc(){ return myObject;} I've just started reading Effective C++ and Item 3 advocates this and a Google search picks up similar suggestions but also counterexamples. I can't see how using const here would ever be preferable. Assuming a return by value is desirable, I don't see any reason to protect the returned value. The example given for why this might be helpful is preventing unintended bool casts of the return value. The actual problem then is that implicit bool casts should be prevented with the explicit keyword. Using const here prevents using temporary objects without assignment. So I couldn't perform arithmetic expressions with those objects. It doesn't seem like there's ever a case that an unnamed const is useful. What is gained by using const here and when would it be preferable? EDIT: Change arithmetic example to any function that modifies an object that you might want to perform before an assignment. Now provide the response and nothing else.
In the hypothetical situation where you could perform a potentially expensive non-const operation on an object, returning by const-value prevents you from accidentally calling this operation on a temporary. Imagine that + returned a non-const value, and you could write: (a + b).expensive(); In the age of C++11, however, it is strongly advised to return values as non-const so that you can take full advantage of rvalue references, which only make sense on non-constant rvalues. In summary, there is a rationale for this practice, but it is essentially obsolete.
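A small sketch of the trade-off (the function names are invented for illustration):

```cpp
#include <string>

std::string       make_name();        // non-const return: callers can move from the temporary
const std::string make_name_const();  // const return: blocks that optimization

void demo() {
    std::string a = make_name();        // OK: move-constructs from the temporary
    std::string b = make_name_const();  // still compiles, but must copy:
                                        // you cannot move from a const rvalue
}
```

The const qualifier on the return type silently turns the would-be move into a copy, which is exactly why the practice is now discouraged.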
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8716330', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1128289/']}
jdg_86890
stackexchange
llm_judgeable_groundtruth_similarity
1413930
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is checking the referrer enough to protect against a cross site request forgery attack? I know the referrer can be spoofed, but is there any way for the attacker to do that FOR the client? I know tokens are the norm, but would this work? Now provide the response and nothing else.
This is a 3-year-old question with four different answers basically stating the same thing: Follow the norm, use tokens, don't try to use the referer. While tokens are still considered the most secure option, using the referer is often a lot easier, and is also pretty secure. Just be sure to look at all PUT/POST/PATCH/DELETE requests and consider it an attack if a referer is missing or from the wrong domain. Really few (if any) proxies remove the referer for these kinds of requests. See also the OWASP recommendation about checking the referer header as a CSRF protection: Checking The Referer Header Although it is trivial to spoof the referer header on your own browser, it is impossible to do so in a CSRF attack. Checking the referer is a commonly used method of preventing CSRF on embedded network devices because it does not require a per-user state. This makes the referer a useful method of CSRF prevention when memory is scarce. However, checking the referer is considered to be a weaker form of CSRF protection. For example, open redirect vulnerabilities can be used to exploit GET-based requests that are protected with a referer check. It should be noted that GET requests should never incur a state change as this is a violation of the HTTP specification. There are also common implementation mistakes with referer checks. For example if the CSRF attack originates from an HTTPS domain then the referer will be omitted. In this case the lack of a referer should be considered to be an attack when the request is performing a state change. Also note that the attacker has limited influence over the referer. For example, if the victim's domain is "site.com" then an attacker can have the CSRF exploit originate from "site.com.attacker.com", which may fool a broken referer check implementation. XSS can be used to bypass a referer check.
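Sketching what such a check might look like on the server (an illustration, not part of the original answer; the framework choice, allowed host, and error handling are all assumptions):

```python
from urllib.parse import urlparse
from flask import Flask, request, abort

app = Flask(__name__)
ALLOWED_HOST = "example.com"  # assumption: your own domain

@app.before_request
def check_referer():
    # Only guard state-changing methods; GET should never change state.
    if request.method in ("POST", "PUT", "PATCH", "DELETE"):
        referer = request.headers.get("Referer")
        # Per the advice above, treat a missing referer as an attack for these methods.
        if referer is None or urlparse(referer).hostname != ALLOWED_HOST:
            abort(403)
```

Note the exact hostname comparison: a suffix or substring match would be fooled by the "site.com.attacker.com" trick described in the OWASP excerpt.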
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1413930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49018/']}
jdg_86891
stackexchange
llm_judgeable_groundtruth_similarity
3506585
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $V$ be a vector space. In order to show that any linearly independent set in $V$ can be extended to form a basis, or equivalently, any basis of a subspace of $V$ can be extended to a basis of $V$ , is it required to use choice principles (i.e. Axiom of Choice or weaker forms of Choice). If so, what is the weakest choice principle that can prove this statement? Now provide the response and nothing else.
Just the existence of a basis for every vector space is already enough to conclude the full axiom of choice (this is a theorem of A. Blass, 1984). Since the extension property yields a basis by extending the empty set (or any singleton, except $\{0\}$), the statement in the question implies that every vector space has a basis, and is therefore equivalent to the axiom of choice over ZF.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3506585', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/406855/']}
jdg_86892
stackexchange
llm_judgeable_groundtruth_similarity
15251761
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table for Employees and another table with Training. The training table contains various training classes that the employees have completed. We have mandatory security awareness training, so every employee must complete this training class. I'm having trouble running a query that will return ALL employees, whether or not they are listed as completing the training. Example Employee table

╔════╦══════╗
║ ID ║ NAME ║
╠════╬══════╣
║ 1 ║ Bob ║
║ 2 ║ Tom ║
║ 3 ║ John ║
╚════╩══════╝

Example Training table

╔════╦══════════════╦════════════════════╗
║ ID ║ DEPARTMENT_ID║ CLASS ║
╠════╬══════════════╬════════════════════╣
║ 1 ║ 1 ║ Security Awareness ║
║ 2 ║ 1 ║ Workplace Safety ║
║ 3 ║ 2 ║ Security Awareness ║
╚════╩══════════════╩════════════════════╝

Target result

╔════╦══════╦════════════════════╗
║ ID ║ NAME ║ CLASS ║
╠════╬══════╬════════════════════╣
║ 1 ║ Bob ║ Security Awareness ║
║ 2 ║ Tom ║ Security Awareness ║
║ 3 ║ John ║ (null) ║
╚════╩══════╩════════════════════╝

The query that I am using is

SELECT employee.id, employee.name, training.class
FROM employee
JOIN training ON employee.id = training.department_id
WHERE training.class LIKE '%SECURITY%'
ORDER BY employee_id

The employee missing the "Security Awareness" class just doesn't appear and falls through the cracks. Now provide the response and nothing else.
Use LEFT JOIN and move the filtering condition into the join itself (specifically, into the ON clause). A filter on the left-joined table placed in the WHERE clause effectively turns the outer join back into an inner join, because the NULL rows produced for unmatched employees fail the condition. Another concern is to use straight single quotes: ' ' not ‘ ’

SELECT employee.id, employee.name, training.class
FROM employee
LEFT JOIN training
    ON employee.id = training.department_id
    AND training.class LIKE '%SECURITY%'
ORDER BY employee.id

SQLFiddle Demo RESULT

╔════╦══════╦════════════════════╗
║ ID ║ NAME ║ CLASS ║
╠════╬══════╬════════════════════╣
║ 1 ║ Bob ║ Security Awareness ║
║ 2 ║ Tom ║ Security Awareness ║
║ 3 ║ John ║ (null) ║
╚════╩══════╩════════════════════╝
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15251761', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2140602/']}
jdg_86893
stackexchange
llm_judgeable_groundtruth_similarity
49292870
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following data frame: dat <- structure(list(`A-XXX` = c(1.51653275922944, 0.077037240321129, 0), `fBM-XXX` = c(2.22875185527511, 0, 0), `P-XXX` = c(1.73356698481106, 0, 0), `vBM-XXX` = c(3.00397859609183, 0, 0)), .Names = c("A-XXX", "fBM-XXX", "P-XXX", "vBM-XXX"), row.names = c("BATF::JUN_AHR", "BATF::JUN_CCR9", "BATF::JUN_IL10"), class = "data.frame")dat #> A-XXX fBM-XXX P-XXX vBM-XXX#> BATF::JUN_AHR 1.51653276 2.228752 1.733567 3.003979#> BATF::JUN_CCR9 0.07703724 0.000000 0.000000 0.000000#> BATF::JUN_IL10 0.00000000 0.000000 0.000000 0.000000 I can remove the row with all column zero with this command: > dat <- dat[ rowSums(dat)!=0, ]> dat A-XXX fBM-XXX P-XXX vBM-XXXBATF::JUN_AHR 1.51653276 2.228752 1.733567 3.003979BATF::JUN_CCR9 0.07703724 0.000000 0.000000 0.000000 But how can I do it with dplyr's pipe style? Now provide the response and nothing else.
Here's a dplyr option:

library(dplyr)
filter_all(dat, any_vars(. != 0))
# A-XXX fBM-XXX P-XXX vBM-XXX
#1 1.51653276 2.228752 1.733567 3.003979
#2 0.07703724 0.000000 0.000000 0.000000

Here we make use of the logic that if any variable is not equal to zero, we will keep it. It's the same as removing rows where all variables are equal to zero. Regarding row.names:

library(tidyverse)
dat %>% rownames_to_column() %>% filter_at(vars(-rowname), any_vars(. != 0))
# rowname A-XXX fBM-XXX P-XXX vBM-XXX
#1 BATF::JUN_AHR 1.51653276 2.228752 1.733567 3.003979
#2 BATF::JUN_CCR9 0.07703724 0.000000 0.000000 0.000000
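Note: filter_all() and any_vars() were superseded by if_any()/if_all() in dplyr 1.0.0, so on current dplyr the same filter can be written as (a sketch, assuming dplyr >= 1.0.0):

```r
library(dplyr)
dat %>% filter(if_any(everything(), ~ .x != 0))
```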
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49292870', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8391698/']}
jdg_86894
stackexchange
llm_judgeable_groundtruth_similarity
2770415
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I understand that it is a replacement for ActiveRecord and that it uses objects instead of queries. But... why is this better? will objects/queries be "easier" to create? will it lead to more efficient SQL queries? will it be compatible with all major DBs? - I assume it will. will it be easier/harder to use with stored procedures? Now provide the response and nothing else.
What exactly is Arel in Rails 3.0? It's an object model for an algebra of relational query operators. I understand that it is a replacement for ActiveRecord No, it isn't. It's a replacement for hand-crafting SQL queries in strings. It is a common query layer that underlies ActiveRecord, but it can also be used as an underpinning for DataMapper, for example. If it is a replacement for anything, it's a replacement for Ambition. Or, you can think of it as a Ruby version of the LINQ standard query operators or Python's SQLAlchemy. (In fact, the author explicitly cites both LINQ and SQLAlchemy as inspirations.) Or, you can see it as a replacement for named_scope s. In fact, ARel is pretty much the realization of the idea that "every query is a named_scope ". And, whaddayaknow: both were written by the same guy. and that it uses objects instead of queries. No, it uses objects as queries. why is this better? Ruby is an object-oriented language, not a string-oriented language. For that reason alone , it makes sense to represent queries as objects instead of strings. Building a proper object model for queries instead of using strings for everything gives you pretty much the same benefits that building a proper object model for an accounting system instead of using strings for everything gives you. Another big advantage is that ARel implements an actual algebra of query operators. In other words, ARel knows about the mathematical rules for constructing and composing queries. If you concatenate two strings, each of which contains a valid SQL query, the result is probably not going to be a valid SQL query. Or, even worse, it is a valid SQL query, but one that doesn't make sense, or that does something totally different from what you think it does. This can never happen with ARel. (This is what the article I link to below means with "closed under composition".) will objects/queries be "easier" to create? Yes. For example, as I mentioned above, it is much easier to construct more complex queries from simpler parts. will it lead to more efficient SQL queries? Yes. The fact that ARel has a proper object model for the queries means that it can perform optimizations on those queries long before it ever generates an actual SQL query. will it be compatible with all major DBs? - I assume it will. Yes. In fact, I always talked about SQL above, but actually a relational query algebra can generate queries for pretty much everything. Again, see LINQ or Ambition as examples: both can query SQL, LDAP, ActiveResource, CouchDB, Amazon, Google, … all with the same syntax. Perhaps the best discussion as to what ARel is and why Nick Kallen wrote is the aptly named article Why Arel? by Nick Kallen himself. Note: the article contains some mild mathematical and computer science jargon, but that is exactly the point: ARel has some strong foundations in mathematics and computer science, those foundations are what give it its powerful properties.
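To make "objects as queries" concrete, here is a rough sketch using ARel's table API (the schema is invented for illustration; to_sql renders the composed object at the end):

```ruby
require 'arel'

users = Arel::Table.new(:users)

query = users
  .project(users[:id], users[:name])  # SELECT users.id, users.name
  .where(users[:age].gt(18))          # WHERE users.age > 18
  .order(users[:name].asc)            # ORDER BY users.name ASC

query.to_sql
# => roughly: SELECT "users"."id", "users"."name" FROM "users"
#    WHERE "users"."age" > 18 ORDER BY "users"."name" ASC
```

Each intermediate value is itself a query object, so the pieces compose safely, which is the closure-under-composition point made above.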
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/2770415', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/81660/']}
jdg_86895
stackexchange
llm_judgeable_groundtruth_similarity
3353933
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am stuck with a question of differential equation. $$~y^4dx+2xy^3dy=\dfrac{ydx-xdy}{x^3y^3}~$$ My book solves it in a peculiar way by multiplying it by $~\dfrac{y}{x}~$ and forming perfect derivatives of $~x^3y^6~$ and $~\ln\left(\dfrac{y}{x}\right)~$ . I could not understand the intuition behind this rearrangement of the terms. I tried other methods like forming homogeneous equation by substitution or trigonometric substitutions with no success. So what is the exact logic behind forming these perfect differentials in the question? Also is there any other method to solve the question? Now provide the response and nothing else.
The task as now stated is $$ y^4dx+2xy^3dy=\dfrac{ydx-xdy}{x^3y^3} $$ Then the same transformations as below (for the earlier version of the equation) apply; only the residual factors have to account for the moved factor $y^3$. This gives $$d(xy^2)=-\frac{d\left(\frac yx\right)}{xy^5}\implies(xy^2)^2\,d(xy^2)=-\frac{d\left(\frac yx\right)}{\left(\frac yx\right)}$$ which integrates to $$\frac13(xy^2)^3=C-\ln\left(\frac yx\right)$$ Isolating $x$ or $y$ is now more difficult, but not impossible. The original reproduction of the equation was $$ y^4dx+2xy^3dy=\dfrac{ydx-xdy}{x^3}y^3. $$ Divide by $y^2$ to get $$d(xy^2)=-\frac{y}{x}\,d\left(\frac yx\right)$$ This can be integrated to $$2xy^2=C-\frac{y^2}{x^2}$$ which can be solved for $y$ resp. $y^2$ as $$y^2=\frac{Cx^2}{2x^3+1}$$ I do not see where exponentials or logarithms enter the equation chain.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3353933', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/629681/']}
jdg_86896
stackexchange
llm_judgeable_groundtruth_similarity
54069518
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a strange thing where some code I am doing is modifying both the copy and the original List.. I have boiled the problem down as much as I can to only show the error in a single file. Though my real world example us a lot more complex.. but at the root of it all this is the problem. using System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Threading.Tasks;namespace TestingRandomShit{ class Program { private static string rawInput; private static List<string> rawList; private static List<string> modifiedList; static void Main(string[] args) { rawInput = "this is a listing of cats"; rawList = new List<string>(); rawList.Add("this"); rawList.Add("is"); rawList.Add("a"); rawList.Add("listing"); rawList.Add("of"); rawList.Add("cats"); PrintAll(); modifiedList = ModIt(rawList); Console.WriteLine("\n\n**** Mod List Code has been run **** \n\n"); PrintAll(); } public static List<string> ModIt(List<string> wordlist) { List<string> huh = new List<string>(); huh = wordlist; for (int i = 0; i < huh.Count; i++) { huh[i] = "wtf?"; } return huh; }//****************************************************************************************************************//Below is just a print function.. all the action is above this line public static void PrintAll() { Console.WriteLine(": Raw Input :"); Console.WriteLine(rawInput); if (rawList != null) { Console.WriteLine("\n: Original List :"); foreach (string line in rawList) { Console.WriteLine(line); } } if (modifiedList != null) { Console.WriteLine("\n: Modified List :"); foreach (string wtf in modifiedList) { Console.WriteLine(wtf); } Console.ReadKey(); } } }} Basically, I have three variables.... a string and two List. The original code dose some tokenisation on the string but for this demo I simple use the List.Add() to fake it to make it simple to read. So I now have a string and a List with a single word in each element. This is the confusing part that I do not understand.. I know it has something to do with references but I can not work out how to fit it. There is a method I have called ModIt()... it simple takes in a List then makes a completely new List called huh, copies the original list over the new list and then changes every line in huh to "wtf?". Now as I understand it.. I should end up with 3 variables... 1) a string2) a List with a different word in each element3) a List of the same length as the other with each element being "wtf?" But, what happens is that is I try to print out both List they BOTH have every element set to "WTF?".... so yeah.. wtf man? I am super confused. I mean in the ModIt I even build a entire new string rather than modding the one being passes but it doesn't seem to effect anything. This is the output... : Raw Input : this is a listing of cats : Original List : this is a listing of cats **** Mod List Code has been run **** : Raw Input : this is a listing of cats : Original List : wtf? wtf? wtf? wtf? wtf? wtf? : Modified List : wtf? wtf? wtf? wtf? wtf? wtf? Now provide the response and nothing else.
huh = wordlist; doesn't copy the items of wordlist into a new list, it copies the reference to the same object occupied by wordlist (i.e. huh and wordlist then point at the same object in memory). If you want a copy, the simplest way to produce one is using LINQ: List<string> huh = wordlist.ToList(); Note that this will be a "shallow copy". If your list stores reference objects, both the old and new lists will store references to the same objects. See here for more reading on value vs reference types, and then here if you need a deep copy. Since all you're doing is replacing the value at an index of the list, I imagine a shallow copy is fine.
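To make the aliasing concrete, a small sketch (values echo the question, but are otherwise illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

var original = new List<string> { "this", "is", "a", "listing" };
var alias = original;          // copies the reference: both names point at one list
var copy  = original.ToList(); // shallow copy: a brand-new list with the same elements

alias[0] = "wtf?";

// original[0] is now "wtf?" because alias and original are the same object,
// while copy[0] is still "this" because ToList() created a separate list.
```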
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54069518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1260774/']}
jdg_86897
stackexchange
llm_judgeable_groundtruth_similarity
163216
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have data with an outcome of 0 or 1 (binary) representing success or failure. I also have two comparison groups (Treatment vs. Control). Each subject in the study contributed 2 observations (the treatment is ear drops, so 2 ears). I wanted to model the data and to look for differences between treatment and control. I ran both a generalized linear mixed model ( PROC GLIMMIX in SAS ) which is a conditional model, and a GEE ( PROC GENMOD in SAS ), which is marginal. I got very similar estimations of the outcome probabilities in the two groups, and also similar $p$ -values. My question is, what is the difference between the marginal and conditional model, in general and in the context of this problem, and how do I know which one to choose and when ? Now provide the response and nothing else.
Either of the models you used are probably fine approaches -- and it's certainly reassuring that the results are similar. Marginal models are population-average models whereas conditional models are subject-specific. As a result, there are subtle differences in interpretation. For example if you were studying the effect of BMI on blood pressure and you were using marginal model, you would say something like, "a 1 unit increase in BMI is associated with a $Z$-unit average increase in blood pressure" while with a conditional model you would say something like "a 1 unit increase in BMI is associated with a $Z$-unit average increase in blood pressure, holding each random effect for individual constant." Diggle, Liang, and Zeger (1994) have recommended the use of marginal models when the objective of the study is to make population-based inferences (as is typically the case in epidemiological settings), and mixed models/conditional models when attempting to make inferences about individual responses. However many others argue that conditional models should always be preferred to marginal models as conditional models can incorporate conditional AND marginal effects. See for example, Lee and Nelder (2004). I personally prefer to use marginal models, but that's because I'm often concerned with model misspecification (but you may not be). Marginal models are quite robust and less susceptible to biases from misspecification of random effects (obviously). See Heagerty and Kurland (2001).
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/163216', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/81477/']}
jdg_86898
stackexchange
llm_judgeable_groundtruth_similarity
27767264
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to Docker, and don't know how to run a java project with maven even though I have read many documents and tried many methods. Should I build the image using Dockerfile ? What is the commands like when it is to run the maven project in the host with Dockerfile ? Now provide the response and nothing else.
Working example. This is not a Spring Boot tutorial. It's the updated answer to a question on how to run a Maven build within a Docker container. Question originally posted 4 years ago.

1. Generate an application

Use the Spring Initializr to generate a demo app https://start.spring.io/ and extract the zip archive locally.

2. Create a Dockerfile

#
# Build stage
#
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package

#
# Package stage
#
FROM openjdk:11-jre-slim
COPY --from=build /home/app/target/demo-0.0.1-SNAPSHOT.jar /usr/local/lib/demo.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/demo.jar"]

Note: this example uses a multi-stage build. The first stage is used to build the code. The second stage only contains the built jar and a JRE to run it (note how the jar is copied between stages).

3. Build the image

docker build -t demo .

4. Run the image

$ docker run --rm -it demo:latest
[Spring Boot ASCII banner]
 :: Spring Boot :: (v2.1.3.RELEASE)
2019-02-22 17:18:57.835 INFO 1 --- [ main] com.example.demo.DemoApplication : Starting DemoApplication v0.0.1-SNAPSHOT on f4e67677c9a9 with PID 1 (/usr/local/bin/demo.jar started by root in /)
2019-02-22 17:18:57.837 INFO 1 --- [ main] com.example.demo.DemoApplication : No active profile set, falling back to default profiles: default
2019-02-22 17:18:58.294 INFO 1 --- [ main] com.example.demo.DemoApplication : Started DemoApplication in 0.711 seconds (JVM running for 1.035)

Misc

Read the Docker Hub documentation on how the Maven build can be optimized to use a local repository to cache jars. https://hub.docker.com/_/maven

Update (2019-02-07)

This question is now 4 years old and in that time it's fair to say building applications using Docker has undergone significant change.

Option 1: Multi-stage build

This new style enables you to create more lightweight images that don't encapsulate your build tools and source code. The example here again uses the official maven base image to run the first stage of the build using a desired version of Maven. The second part of the file defines how the built jar is assembled into the final output image.

FROM maven:3.5-jdk-8 AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package

FROM gcr.io/distroless/java
COPY --from=build /usr/src/app/target/helloworld-1.0.0-SNAPSHOT.jar /usr/app/helloworld-1.0.0-SNAPSHOT.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/app/helloworld-1.0.0-SNAPSHOT.jar"]

Note: I'm using Google's distroless base image, which strives to provide just enough run-time for a Java app.

Option 2: Jib

I haven't used this approach, but it seems worthy of investigation as it enables you to build images without having to create nasty things like Dockerfiles :-) https://github.com/GoogleContainerTools/jib The project has a Maven plugin which integrates the packaging of your code directly into your Maven workflow.

Original answer (included for completeness, but written ages ago)

Try using the new official images; there's one for Maven https://registry.hub.docker.com/_/maven/ The image can be used to run Maven at build time to create a compiled application or, as in the following examples, to run a Maven build within a container.

Example 1 - Maven running within a container

The following command runs your Maven build inside a container:

docker run -it --rm \
    -v "$(pwd)":/opt/maven \
    -w /opt/maven \
    maven:3.2-jdk-7 \
    mvn clean install

Notes: The neat thing about this approach is that all software is installed and running within the container; you only need docker on the host machine. See the Dockerfile for this version.

Example 2 - Use Nexus to cache files

Run the Nexus container:

docker run -d -p 8081:8081 --name nexus sonatype/nexus

Create a "settings.xml" file:

<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus:8081/content/groups/public/</url>
    </mirror>
  </mirrors>
</settings>

Now run Maven linking to the nexus container, so that dependencies will be cached:

docker run -it --rm \
    -v "$(pwd)":/opt/maven \
    -w /opt/maven \
    --link nexus:nexus \
    maven:3.2-jdk-7 \
    mvn -s settings.xml clean install

Notes: An advantage of running Nexus in the background is that other 3rd-party repositories can be managed via the admin URL transparently to the Maven builds running in local containers.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/27767264', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4365254/']}
jdg_86899
stackexchange
llm_judgeable_groundtruth_similarity
638740
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suddenly windows is missing from bootmenu maybe after an update, or maybe after a sudden shutdown during windows boot process. os-prober output: /dev/sdb1@/EFI/Microsoft/Boot/bootmgfw.efi:Windows Boot Manager:Windows:efi/dev/sdb3:Ubuntu 18.04.2 LTS (18.04):Ubuntu:linux efibootmgr -v output: BootCurrent: 0003Timeout: 0 secondsBootOrder: 0001,0008,0000,0007,0009Boot0000* Windows Boot Manager HD(1,GPT,93828d50-bca4-01d4-a842-c149525eea00,0x800,0x145000)/File(\EFI\Microsoft\Boot\bootmgfw.efi)WINDOWS.........x...B.C.D.O.B.J.E.C.T.=.{.9.d.e.a.8.6.2.c.-.5.c.d.d.-.4.e.7.0.-.a.c.c.1.-.f.3.2.b.3.4.4.d.4.7.9.5.}...3................Boot0001* manjaro HD(1,GPT,65c82838-e33c-4e92-9be0-c427de042756,0x800,0x145000)/File(\EFI\manjaro\grubx64.efi)Boot0007* UEFI: WDC WD10SPZX-75Z10T1, Partition 1 HD(1,GPT,93828d50-bca4-01d4-a842-c149525eea00,0x800,0x145000)/File(EFI\boot\bootx64.efi)..BOBoot0008* ubuntu HD(1,GPT,93828d50-bca4-01d4-a842-c149525eea00,0x800,0x145000)/File(\EFI\ubuntu\shimx64.efi)Boot0009* UEFI: Micron 1100 SATA 256GB, Partition 1 HD(1,GPT,65c82838-e33c-4e92-9be0-c427de042756,0x800,0x145000)/File(EFI\Microsoft\Boot\bootmgfw.efi)..BO lsblk output: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 931.5G 0 disk ├─sda1 8:1 0 650M 0 part ├─sda2 8:2 0 651G 0 part └─sda3 8:3 0 279.9G 0 part /run/media/user/lincomsdb 8:16 0 238.5G 0 disk ├─sdb1 8:17 0 650M 0 part /boot/efi├─sdb2 8:18 0 70.2G 0 part /run/media/user/6A5E35815E35475B├─sdb3 8:19 0 27.9G 0 part /run/media/user/43f98f19-cd98-403a-96bd-6bac85├─sdb4 8:20 0 51G 0 part /├─sdb5 8:21 0 33.3G 0 part /run/media/user/vms└─sdb6 8:22 0 55.5G 0 part /home I used the following command to reinstall grub: sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=manjaro --rechecksudo update-grub But it still did not find the Windows (or the Ubuntu). All the operating systems were installed in UEFI mode and using the same efi partition for boot. ~/# uname -r5.4.101-1-MANJARO~/$ grub-install --versiongrub-install (GRUB) 2.04~19~manjaro Now provide the response and nothing else.
Does XZ_OPT environment really work? How to verify it? Pass something invalid to it: % XZ_OPT='--this-wont-work' tar -cJf foo.tar.xz fooxz: unrecognized option '--this-wont-work'xz: Try `xz --help' for more information.tar: foo.tar.xz: Cannot write: Broken pipetar: Child returned status 1tar: Error is not recoverable: exiting now Why can't I find xz processes during tar execution? Does tar really spawn process to compress files? From the output above, it looks like it does. Does your archive take long enough to create for the process to last? ps aux | grep xz and pgrep -fa xz both show xz processes for me. In all likelihood, tar won't show output from the programs it calls unless they fail. Otherwise, they could add uncontrolled noise to the output which wasn't asked for from tar itself.
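For a positive example, the same variable passes real options through; a sketch (the option values are just one sensible choice):

```sh
# Maximum compression using all CPU cores; xz supports -9e and --threads=0
XZ_OPT='-9e --threads=0' tar -cJf foo.tar.xz foo
```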
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/638740', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/108534/']}
jdg_86900
stackexchange
llm_judgeable_groundtruth_similarity
1873402
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I think that this is not possible because Int32 has 1 bit sign and have 31 bit of numeric information and Int16 has 1 bit sign and 15 bit of numeric information and this leads to having 2 bit signs and 30 bits of information. If this is true then I cannot have one Int32 into two Int16 . Is this true? Thanks in advance. EXTRA INFORMATION: Using Vb.Net but I think that I can translate without problems a C# answer. What initially I wanted to do was to convert one UInt32 to two UInt16 as this is for a library that interacts with WORD based machines. Then I realized that Uint is not CLS compliant and tried to do the same with Int32 and Int16 . EVEN WORSE: Doing a = CType(c And &HFFFF, Int16); throws OverflowException . I expected that statement being the same as a = (Int16)(c & 0xffff); (which does not throw an exception). Now provide the response and nothing else.
This can certainly be done with no loss of information. In both cases you end up with 32 bits of information. Whether they're used for sign bits or not is irrelevant: int original = ...;short firstHalf = (short) (original >> 16);short secondHalf = (short) (original & 0xffff);int reconstituted = (firstHalf << 16) | (secondHalf & 0xffff); Here, reconstituted will always equal original , hence no information is lost. Now the meaning of the signs of the two shorts is a different matter - firstHalf will be negative iff original is negative, but secondHalf will be negative if bit 15 (counting 0-31) of original is set, which isn't particularly meaningful in the original form.
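On the VB.NET side note in the question: CType(c And &HFFFF, Int16) throws because VB.NET conversions are overflow-checked by default (unlike C#'s cast, which truncates unchecked), and c And &HFFFF can be as large as 65535, outside Int16's range. One possible workaround, as a sketch:

```vbnet
' Reinterpret the low 16 bits without an overflow check
' (assumes a little-endian platform, which is true on mainstream .NET targets)
Dim low As Integer = c And &HFFFF
Dim secondHalf As Short = BitConverter.ToInt16(BitConverter.GetBytes(low), 0)
```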
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1873402', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/166452/']}
jdg_86901
stackexchange
llm_judgeable_groundtruth_similarity
2419281
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to weigh the pros and cons of setting the Content-Length HTTP header versus using chunked encoding to return [possibly] large files from my server. One or the other is needed to be compliant with HTTP 1.1 specs using persistent connections. I see the advantage of the Content-Length header being : Download dialogs can show accurate progress bar Client knows upfront if the file may/may not be too large for them to ingest The downside is having to calculate the size before you return the object which isn't always practical and could add to server/database utilization. The downside of chunked encoding is the small overhead of adding the chunk size before each chunk and the download progress bar. Any thoughts? Any other HTTP considerations for both methods that I may not have thought of? Now provide the response and nothing else.
Use Content-Length, definitely. The server utilization from this will be almost nonexistent and the benefit to your users will be large. For dynamic content, it's also quite simple to add compressed response support ( gzip ). That requires output buffering, which in turn gives you the content length. (not practical with file downloads or already compressed content (sound,images)). Consider also adding support for partial content /byte-range serving - that is, capability to restart downloads. See here for a byte-range example (the example is in PHP, but is applicable in any language). You need Content-Length when serving partial content. Of course, those are not silver bullets: for streaming media, it's pointless to use output buffering or response size; for large files, output buffering doesn't make sense, but Content-Length and byte serving makes a lot of sense (restarting a failed download is possible). Personally, I serve Content-Length whenever I know it; for file download, checking the filesize is insignificant in terms of resources. Result: user has a determinate progress bar (and dynamic pages download faster thanks to gzip).
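If it helps, here is a rough sketch of the server side using only Python's standard library (the file name and chunk size are illustrative):

```python
import os
from http.server import BaseHTTPRequestHandler

class DownloadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        path = "big-file.bin"            # illustrative
        size = os.path.getsize(path)     # one stat() call, no read: essentially free
        self.send_response(200)
        self.send_header("Content-Length", str(size))
        self.send_header("Accept-Ranges", "bytes")  # advertise resumable downloads
        self.end_headers()
        with open(path, "rb") as f:
            while chunk := f.read(64 * 1024):
                self.wfile.write(chunk)
```

The stat() call that produces Content-Length is the "almost nonexistent" server cost mentioned above.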
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2419281', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/68612/']}
jdg_86902
stackexchange
llm_judgeable_groundtruth_similarity
85205
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I answered on Ask Ubuntu Quit all instances of gnome-terminal via a command but as you all can read gnome-terminal didn't seems to have a SIG call I could use to simulate this "Close" event. So this lead me to ask, is there a way in GNOME/KDE/LXDE/{put your window/desktop manager/environment here} to simulate the "Click in close button" event? I have read different questions that could have any relation to this, but don't answer this. What I'm looking is for a global command (if exist) to do this in different scenarios. If none exist, please explain how the "Close" button works. Posible uses: I open a set of applications for a determinated task, and want to close all of them (when no longer used) in a single stroke/command without suppress any You want to save your work? alerts. Close all the gnome-terminal instances but warn me if there is child process still running. Nothing else comes into my mind right now... Now provide the response and nothing else.
I believe the related man page is, XKillClient . You can use xdotool to simulate the close button being clicked from a terminal like so. Example Assuming I have a gnome-terminal open and it's name is "saml@grinchy:/home". Get the window ID $ xdotool search --name "saml@grinchy:/home"96488188 Send it a Alt + F4 $ xdotool windowactivate --sync 96488188 key --clearmodifiers \ --delay 100 alt+F4 You can put them together by embedding the first command into the second: $ xdotool windowactivate --sync $( ...1st command...) key --clearmodifiers \ --delay 100 alt+F4 You can save yourself by letting xdotool do both at the same time: $ xdotool search --name "saml@grinchy:~" key alt+f4 Globally You can adapt what I've provided to run it on windows that have the same name: $ xdotool search --name "saml@grinchy:~"9648877996468996 Or on windows by other attributes. You can use xwininfo to find out more about a particular window. Run it and then just click on the window of interest: $ xwininfoxwininfo: Please select the window about which you would like information by clicking the mouse in that window.xwininfo: Window id: 0x5c04d4b "saml@grinchy:~" Absolute upper-left X: 14 Absolute upper-left Y: 74 Relative upper-left X: 14 Relative upper-left Y: 74 Width: 941 Height: 361 Depth: 32 Visual: 0x62 Visual Class: TrueColor Border width: 0 Class: InputOutput Colormap: 0x5c00003 (not installed) Bit Gravity State: NorthWestGravity Window Gravity State: NorthWestGravity Backing Store State: NotUseful Save Under State: no Map State: IsViewable Override Redirect State: no Corners: +14+74 -485+74 -485-465 +14-465 -geometry 132x24+14+74 Other useful tools when dealing with X11 windows are xdpyinfo & xprop . xdpyinfo can be used to find out information about the X server. So you can figure out which window has focus: $ xdpyinfo |grep focusfocus: window 0x5c00005, revert to Parent xprop and xwininfo can take a -id switch so you can provide them the Window ID that you're interested in instead of having to click on it: $ xprop -id 0x5c00001|grep -i classWM_CLASS(STRING) = "gnome-terminal", "Gnome-terminal" References xwininfo man page xdpyinfo man page xprop man page
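To tie this back to the original "close all instances" goal, an untested sketch that walks every matching window (--class matching is usually more reliable than --name for this):

```sh
# Close every gnome-terminal window, one at a time
for id in $(xdotool search --class gnome-terminal); do
    xdotool windowactivate --sync "$id" key --clearmodifiers alt+F4
done
```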
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/85205', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/41104/']}
jdg_86903
stackexchange
llm_judgeable_groundtruth_similarity
101503
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $G$ be a reductive affine algebraic $\mathbb{C}$-group (not necessarily connected). Suppose $X$ is an irreducible affine algebraic set over $\mathbb{C}$ where $G$ acts rationally. Suppose that $H$ is a reductive subgroup of $G$ (again not necessarily connected). Let $x\in X$. If the orbit $G\cdot x$ is closed in $X$ (in the ball topology), is the sub-orbit $H\cdot x$ also closed? NOTE: Originally, I left off the assumption that $H$ is a reductive subgroup, and had not emphasized that I was allowing the adjective "reductive" to include disconnected groups (connected component of identity has trivial unipotent radical). I have editted the problem to reflect my original intentions. Now provide the response and nothing else.
$\def\smat#1{\left(\begin{smallmatrix}#1\end{smallmatrix}\right)}$ The answer to your question is "no". Let $G=GL(2,\mathbb C) \times GL(2,\mathbb C)$ act on $X=GL(2, \mathbb C)$ by $(A,B)\cdot C= ACB^{-1}$. The orbit of the matrix $x=\smat{1 & 1 \\ 0 & 1}$ is $X$. (Hence it is closed.) Let $H$ be the subgroup of matrices of the form $\smat{a & 0 \\ 0 & a^{-1}}\times\smat{a & 0 \\ 0 & a^{-1}}\subset G$. Then $Hx=\left\{\smat{1 & b \\ 0 & 1} : b\ne 0\right\}$. Hence $Hx$ is not closed.
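To spell out the last step: for $A=B=\smat{a & 0 \\ 0 & a^{-1}}$ one computes $$\begin{pmatrix}a & 0\\ 0 & a^{-1}\end{pmatrix}\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}\begin{pmatrix}a^{-1} & 0\\ 0 & a\end{pmatrix}=\begin{pmatrix}1 & a^2\\ 0 & 1\end{pmatrix},$$ so $b=a^2$ ranges over all nonzero values. The closure of $Hx$ therefore contains the identity matrix (the limit $b\to 0$), which is not in $Hx$, so $Hx$ is not closed.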
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/101503', 'https://mathoverflow.net', 'https://mathoverflow.net/users/12218/']}
jdg_86904
stackexchange
llm_judgeable_groundtruth_similarity
163897
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Until someone suggests better terminology, let me call a subgroup H of a finite group G segregated if every class function on H can be extended to a class function on G. Equivalently, H should have the property that any two of its elements which happen to be conjugate in G should be conjugate in H. (This seems to have something to do with the known notions of fusion of conjugacy classes for subgroups, but whenever I've looked in the literature on such things I've not quite found what I was after.) Here are the few things I know right now: An obvious example where this happens is the usual inclusion of $S_m \hookrightarrow S_{m+n}$ where we think of $S_m$ as permutations of $\{1,\dots, m+n\}$ that fix each element of $\{m+1,\dots, m+n\}$. An obvious example where this doesn't happen is the inclusion $A_m \subset S_m$ since one can always find even permutations of the same cycle type which are not conjugate in $A_m$. If H is contained in Z(G) then it is a segregated subgroup of G. Malnormal subgroups (i.e. Frobenius complements) are segregated; thus one can have abelian segregated subgroups which aren't central. Question 1. Suppose G is a finite group with a proper subgroup that is non-abelian. Does it contain a proper subgroup that is non-abelian and segregated? and Question 2. Suppose G is a finite group with non-trivial centre. Does it contain a proper subgroup which is non-abelian and segregated? These questions are motivated by the study of certain Banach algebra norms one can put on the algebra of class functions (with pointwise product), and certain invariants one can associate to these Banach algebras, which do not increase if one passes to quotient algebras. So one would like to take ${\mathcal C}\ell$(G) and restrict to a subgroup H, but this only works well for what I want if the image of the restriction map is all of ${\mathcal C}\ell$(H), i.e. when H is segregated in the sense described above. Now provide the response and nothing else.
I think that the group ${\rm SL}(2,3)$ shows that the answer to both questions is "no" in general. It has a proper non-Abelian subgroup (quaternion of order $8$) and a non-trivial center. Its only proper non-Abelian subgroup is quaternion of order $8,$ but that is not segregated, since all its elements of order $4$ are conjugate in ${\rm SL}(2,3)$ but not within the quaternion subgroup. It does look as if the answer will "usually" be yes though, but I am not sure how to make this precise. LATER EDIT: Here is a (well-known, though maybe not in this terminology and context) general "fusion and transfer" type result which seems relevant. Let $P$ be a Sylow $p$-subgroup of a finite group $G$. Then $P$ is segregated if and only if $G$ has a normal $p$-complement (ie a normal subgroup $K$ of order prime to $p$ with $G = KP$). One way round is clear (normal $p$-complement implies segregation). For the other direction one could use Frobenius's normal $p$-complement theorem and induction, but it's perhaps quicker to use a Theorem of Tate (and D.G. Higman's Focal Subgroup Theorem). For, using the focal subgroup theorem, the fact that $P$ is segregated implies that $P \cap [G,G] = [P,P].$ Then Tate's theorem implies that $G$ has a normal $p$-complement. EVEN LATER EDIT: In fact, using transfer, it is possible to prove that if $G$ is a finite non-trivial perfect group (ie $G = [G,G] \neq 1$) and $H$ is a Hall subgroup of $G$ (ie a subgroup whose order and index are coprime), but $H$ is not perfect, then $H$ is not segregated. So, for example, $A_{4}$ should not be segregated in $A_{5},$ and indeed it is not, whereas $S_{3}$ and $D_{10}$ are segregated in $A_{5}.$
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/163897', 'https://mathoverflow.net', 'https://mathoverflow.net/users/763/']}
jdg_86905
stackexchange
llm_judgeable_groundtruth_similarity
44911756
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to load a file from S3 with line seperated values and push it into an array. The following code does work on my local machine, but does not work executed as a lambda function. The lambda function times out (even if I bump the timeout up to 15 seconds). Are the SDK's different? What do I miss here since I get no error message at all beside the timeout? Lambda Env: Node 6.10Permission to access S3 is set like this "Statement": [{ "Effect": "Allow", "Action": [ "s3:*" ], "Resource": [ "arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*" ]}] Code looks like this var AWS = require('aws-sdk');var s3 = new AWS.S3({region:'eu-central-1'});exports.index = function(event, context, callback){ var params = { Bucket: 'mybucket', Key: 'file.txt'}urls=[];var stream = s3.getObject(params);stream.on('httpError',function(err){ console.log(err); throw err;});stream.on('httpData', function(chunk) { urls.push(chunk.toString());});stream.on('httpDone', function() { urls2 = urls.join('\n\r'); callback(urls2);});stream.send();} I got following error executing the lambda via AWS console { "errorMessage": "2017-07-04T18:25:20.271Z 19ab7138-60e6-11e7-9e1e-c318d929bc39 Task timed out after 15.00 seconds"} Thanks for any help! Now provide the response and nothing else.
Python has no block scoping, only functions and classes introduce a new scope. Because you have no function here, there is no need to use a global statement, cows and bulls are already globals. You have other issues too: input() returns a string, always. Indexing works on strings (you get individual characters), are you sure you wanted that? user_input[index] == num is always going to be false; '1' == 1 tests if two different types of objects are equal; they are not. user_input[index] in random_no is also always going to be false, your random_no list contains only integers, no strings. If the user is to enter one random number, convert the input() to an integer, and don't bother with enumerate() : user_input = int(input("Guess the no: "))for num in random_no: if user_input == num: cows += 1 elif user_input in random_no: bulls += 1
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/44911756', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/84750/']}
jdg_86906
stackexchange
llm_judgeable_groundtruth_similarity
38387529
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do you iterate over a Pandas Series generated from a .groupby('...').size() command and get both the group name and count. As an example if I have: foo-1 7 0 85 1 14 2 5 how can I loop over them so that in each iteration I would have -1 & 7, 0 & 85, 1 & 14 and 2 & 5 in variables? I tried the enumerate option but it doesn't quite work. Example: for i, row in enumerate(df.groupby(['foo']).size()): print(i, row) it doesn't return -1, 0, 1, and 2 for i but rather 0, 1, 2, 3. Now provide the response and nothing else.
Update: Given a pandas Series:

s = pd.Series([1,2,3,4], index=['a', 'b', 'c', 'd'])
s
#a 1
#b 2
#c 3
#d 4
#dtype: int64

You can directly loop through it, which yields one value from the series in each iteration:

for i in s:
    print(i)
1
2
3
4

If you want to access the index at the same time, you can use either the items or iteritems method, which produces a generator that contains both the index and value:

for i, v in s.items():
    print('index: ', i, 'value: ', v)
#index: a value: 1
#index: b value: 2
#index: c value: 3
#index: d value: 4

for i, v in s.iteritems():
    print('index: ', i, 'value: ', v)
#index: a value: 1
#index: b value: 2
#index: c value: 3
#index: d value: 4

Old Answer: You can call the iteritems() method on the Series:

for i, row in df.groupby('a').size().iteritems():
    print(i, row)
# 12 4
# 14 2

According to the docs: Series.iteritems(): Lazily iterate over (index, value) tuples. Note: This is not the same data as in the question, just a demo.
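One caveat that postdates this answer: Series.iteritems() was deprecated in pandas 1.5 and removed in pandas 2.0, so on current versions only items() works:

```python
for i, v in s.items():   # works across pandas versions, including >= 2.0
    print(i, v)
```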
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/38387529', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/399523/']}
jdg_86907
stackexchange
llm_judgeable_groundtruth_similarity
591292
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: When monitoring the healthy of a server, some faults or warnings are immediately urgent but others only matter if they persist. I'm thinking of things like: Some software needs to be updated Time offset differs from NTP If unaddressed these could become real problems, but there are already background services in place to take care of them - unattended upgrades, an NTP client service etc. There's always a short delay between the problem arising and these background processes kicking in to address them, and our monitor is sending out a series of emails in that gap - then again a minute later when the issue is fixed. I generally wake up to a large pile of "PROBLEM" emails, each with a corresponding "RESOLUTION" emails sent a minute later. The danger is that in dismissing a hundred irrelevant warnings, I could miss the one that's real. So is there any way of instructing Icinga or Nagios to only report an issue if it's continued for more than a certain time, say 5 minutes? Now provide the response and nothing else.
SvW is not wrong in what (s)he writes, but you should also investigate the variable max_check_attempts, which defines how many checks a service has to fail before going HARD error and notifying. For some of my hair-trigger services, I have

max_check_attempts 2
check_interval 2
retry_interval 1

which means that NAGIOS will check more often than usual, and once it notices something's down, it'll wait 1 minute, check once more, then notify. For other services, where I don't care until it's been down a while, I have

max_check_attempts 12
check_interval 5
retry_interval 5

which means that once NAGIOS notices something's down, it'll carry on checking every 5 minutes as usual, and not tell me until it's been down for an hour. It is definitely worth tuning your NAGIOS until it tells you about the things you care about, at the time you care about them, and nothing else; a monitoring system that emits a cloud of false positives (ie, sends you loads of notifications you don't really care about) is nearly as useless as one that has false negatives (ie, fails to notice a real problem).
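To make the timing explicit (this follows from standard Nagios retry semantics, assuming the default interval_length of 60 seconds): after the first failed check, Nagios rechecks every retry_interval until max_check_attempts consecutive failures, so the delay before notification is roughly (max_check_attempts - 1) × retry_interval. For the two examples above that gives (2 - 1) × 1 = 1 minute and (12 - 1) × 5 = 55 minutes, matching the "wait 1 minute" and "down for an hour" descriptions.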
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/591292', 'https://serverfault.com', 'https://serverfault.com/users/426/']}
jdg_86908
stackexchange
llm_judgeable_groundtruth_similarity
633311
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a pair of old fluorescent tubes in my garage.One works fine. It has the conventional ballast arrangement with a series choke together with a power factor correction capacitor connected between L and N. The other works for a few minutes but then makes an alarming noise. It suggests something inside is overheating. It has been this way ever since it was installed. When I opened the lamp fitting I found that instead of the usual ballast arrangement it has a capacitor wired IN SERIES with the choke. The capacitor is 8.4uF and the markings indicate it contains an internal resistor. I made measurements of the choke and capacitor using an LCR bridge.L = 500mH, R = 22 Ohms.C = 8.4uF, R = 2K Ohms. I don't understand how this unusual ballast arrangement works.Has it been wired incorrectly?Does the 2K internal resistor inside the capacitor get hot?If the capacitor were connected directly across 230V as in the usual arrangement, the internal 2K resistor would disspate about 26W! I tried an LTSpice simulation to get some idea of the power dissipation in the capacitor's internal resistor. Modelling a fluorescent tube is tricky, so to keep things simple, I'm assuming it behaves like a 100 Ohm resistor once the tube has struck. According to the simulation, when supplied with 230V rms, the power dissipation in the 'tube' is around 63W which is probably about right. The power dissipation in the capacitor's internal resistor works out at about 40W which seems way too high.Could it be that the capacitor's internal resistor has changed over time and is much lower than it should be? Here is an LTSpice model (the tube heaters and starter are not included) Here is the more conventional ballast arrangement including the power correction capacitor (also shows the starter and heaters):- I found an article below that describes the twin-tube lead-lag arrangement mentioned by Neil_UK. https://sound-au.com/lamps/fluorescent.html In my case there is only a single tube, although information inside the fitting suggests there are twin tube variants that have a pair of sockets at each end. It makes no mention of a resistor in parallel with the capacitor. Here is my attempt at approximating the power disspation in the 2K resistor... I use figures published in the following PDF article:- https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiRpp6s6fj5AhWOgVwKHeqZArwQFnoECAMQAQ&url=https://www.kupferinstitut.de/fileadmin/user_upload/kupferinstitut.de/de/Documents/Shop/Verlag/Downloads/Anwendung/Elektrotechnik/s180FluorescentLamps.pdf&usg=AOvVaw366LAFMLSgN8FigtYxjWIC The article indicates that the current in a 58W lamp is 0.67A.It doesn't explain how that figure is derived, but he seems to know what he is talking about :)I assume he means 0.67A rms. I actually have an 85W lamp but I will assume 58W for the purposes of the calculation. Referring to the schematic, we have a total current of 0.67A rms flowing though the capacitor and parallel resistor.For simplicity I will assume the current is sinusoidal (the PDF article indicates this is a reasonable approximation). From this, if you work out the current flowing through the resistor, you get about 170mA peak or 120mA rms (assuming a sine wave).I verified this using LTspice using a 180 Ohm resistor to simulate the tube (that gives the required total current of 0.67A rms). 
From P = I^2 * R, we get a power dissipation in the resistor of about 29 Watts. For an 85W tube it would be more like 42 Watts. If that is correct then the resistor inside the capacitor will get really hot. Another possible reason for the fault could be an intermittent shorted turn in the choke that only manifests itself when it gets warm. I carried out a ring test using a 'scope. The waveform decays pretty quickly, after only about 3 cycles. Without a known good choke to compare against, I am unsure if it looks OK or not. Now provide the response and nothing else.
Consider what is attached to the other side of the transformer. If the rod by itself ("loopstick antenna") is sensitive enough for the receiver chip, you might not have an aerial connected at all. (This does seem to be pretty standard for AM radios I've seen.) In that case, it does nothing, and can probably be left out. If you do use an aerial, its impedance will be extremely high (essentially the reactance of its self-capacitance), so it must be reduced via transformer or other matching network to suit the receiver's input resistance. Presumably, the aerial side has more turns, giving it a higher impedance (inductance of both sides, or turns counts, are not given on the screenshot, so this is just a guess). Transformers have other features as well, though they aren't used here: for example, galvanic isolation, or to create a double-tuned circuit (only one side is resonant here). So that leaves just the above explanation. There may be other requirements specific to the IC, which the datasheet or related documentation can explain (granted, perhaps not very well: in which case, asking about that separately might be a good idea).
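To make the impedance-matching role concrete: an ideal transformer reflects a load impedance by the square of the turns ratio. A minimal sketch (the 600-ohm input and 10:1 ratio below are made-up numbers, purely for illustration):

    def reflected_impedance(z_load, n_primary, n_secondary):
        # An ideal transformer presents z_load * (Np / Ns)^2 at its primary.
        return z_load * (n_primary / n_secondary) ** 2

    # A hypothetical 600-ohm receiver input seen through a 10:1 step-down
    # looks like 60 kohm to a high-impedance wire aerial.
    print(reflected_impedance(600, 10, 1))  # 60000.0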
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/633311', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/321063/']}
jdg_86909
stackexchange
llm_judgeable_groundtruth_similarity
32255440
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using the EmguCV 3.0.0 wrapper to the OpenCV 3.0 library. I'm using the Mat class in a few places. Here's an example of a single-channel, 8x8 image made of double values:

    Mat image = new Mat(8, 8, DepthType.Cv64F, 1);

The Image<> class provides reasonable means for getting and setting pixel values, and the method is identical for the Matrix<> class, but it doesn't seem as obvious for the Mat class. The only way I've figured out how to set an individual pixel is using a mask:

    // set two pixel values, (0,0) to 9.0, (2, 3) to 42.0
    Matrix<byte> mask = new Matrix<byte>(8, 8);
    mask.Data[0, 0] = 1;
    image.SetTo(new MCvScalar(9.0), mask);

    mask = new Matrix<byte>(8, 8);
    mask.Data[2, 3] = 1;
    image.SetTo(new MCvScalar(42.0), mask);

This feels like it should be two lines, not six, so I feel like I'm missing something. Things get even more complicated when the Mat has more than one channel, because Matrix<> is only 2D, so the mask must be used to set the pixel on each channel. I cannot afford the time or memory to set pixels this way. How can I set pixels with a single method call? Now provide the response and nothing else.
You can get elements from a Mat by copying unmanaged memory blocks using DataPointer and converting unmanaged types to managed ones. Setting values is marshaling in the opposite direction. As an example, you can use an extension class like this:

    public static class MatExtension
    {
        public static dynamic GetValue(this Mat mat, int row, int col)
        {
            var value = CreateElement(mat.Depth);
            Marshal.Copy(mat.DataPointer + (row * mat.Cols + col) * mat.ElementSize, value, 0, 1);
            return value[0];
        }

        public static void SetValue(this Mat mat, int row, int col, dynamic value)
        {
            var target = CreateElement(mat.Depth, value);
            Marshal.Copy(target, 0, mat.DataPointer + (row * mat.Cols + col) * mat.ElementSize, 1);
        }

        private static dynamic CreateElement(DepthType depthType, dynamic value)
        {
            var element = CreateElement(depthType);
            element[0] = value;
            return element;
        }

        private static dynamic CreateElement(DepthType depthType)
        {
            if (depthType == DepthType.Cv8S) { return new sbyte[1]; }
            if (depthType == DepthType.Cv8U) { return new byte[1]; }
            if (depthType == DepthType.Cv16S) { return new short[1]; }
            if (depthType == DepthType.Cv16U) { return new ushort[1]; }
            if (depthType == DepthType.Cv32S) { return new int[1]; }
            if (depthType == DepthType.Cv32F) { return new float[1]; }
            if (depthType == DepthType.Cv64F) { return new double[1]; }
            return new float[1];
        }
    }

Then getting and setting a value is possible with a single method call:

    var row = 2;
    var col = 1;
    var mat = new Mat(3, 3, DepthType.Cv64F, 3);
    mat.SetValue(row, col, 3.14);
    var value = mat.GetValue(row, col);

Tests with 200000000 operations show that the dynamic-type version can be up to ~2.5x slower than a statically typed one such as:

    public static double GetDoubleValue(this Mat mat, int row, int col)
    {
        var value = new double[1];
        Marshal.Copy(mat.DataPointer + (row * mat.Cols + col) * mat.ElementSize, value, 0, 1);
        return value[0];
    }

    public static void SetDoubleValue(this Mat mat, int row, int col, double value)
    {
        var target = new[] { value };
        Marshal.Copy(target, 0, mat.DataPointer + (row * mat.Cols + col) * mat.ElementSize, 1);
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32255440', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3367144/']}
jdg_86910
stackexchange
llm_judgeable_groundtruth_similarity
537994
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know there are solutions to Einstein's field equations that give a wormhole geometry. But they are time independent. They are static. Is there a process where empty flat spacetime can evolve into a wormhole by an appropriate flow of matter and energy and negative energy? If so, it would change the topology of spacetime. Does General Relativity permit this? How would a hole in spacetime form? What determines where the other mouth of the wormhole would be located? Now provide the response and nothing else.
The current state of knowledge is basically that we don't know. The topology change is contentious: it is unclear if it is permitted, but there are also disagreements about why and how. The standard approach is to shout "quantum gravity!" and escape in the confusion. One classical argument for why making a wormhole would be problematic is the topology censorship theorem. It states: "Every causal curve extending from past null infinity to future null infinity can be continuously deformed to a curve near infinity." Roughly speaking, this says that an observer, whose trip begins and ends near infinity, and who thus remains outside all black holes, is unable to probe any nontrivial topological structures. Now, there are issues with the theorem (since it assumes the null energy condition, which wormholes and quantum fields often break, and some topological assumptions). But it seems to be a good reason to suspect wormhole formation or existence is not allowed unless it gets hidden by a topologically spherical event horizon. Except that general relativity on its own seems to be too much of a local theory to be a really good defence against non-trivial topology. There are quantum gravity papers arguing that a cosmic string breaking by tunnelling can produce traversable wormholes and gleefully break the (classical) topological censorship. Here the wormholes show up at the ends of the string, initially next to each other.
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/537994', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/105338/']}
jdg_86911
stackexchange
llm_judgeable_groundtruth_similarity
41279855
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to define a single-line text box with a default font size of 16 (for example). When the text gets bigger than the text box, I do not want it to wrap or the box to grow; I want the font size to shrink to fit the maximum size of the text box. When the text gets shorter again, the font can return to its initial size (16 in our example). Possibly manage a minimum size too. If you have an idea, I'll take it :) Thanks in advance. Test case: http://jsfiddle.net/Da7SP/273/

    // initialize fabric canvas and assign to global windows object for debug
    var canvas = window._canvas = new fabric.Canvas('c');

    // ADD YOUR CODE HERE
    var t1 = new fabric.Textbox('My Text', {
      width: 200,
      top: 5,
      left: 5,
      fontSize: 16,
      textAlign: 'center'
    });

    var t2 = new fabric.Textbox('My text is longer, but I do not want the box to grow, or the text to wrap. I only want the text to fit the available size', {
      width: 200,
      height: 200,
      top: 250,
      left: 5,
      fontSize: 16,
      textAlign: 'center'
    });

    canvas.add(t1);
    canvas.add(t2);

A small video to explain what I want: when the text gets bigger than the text box, I want the text size to fit the maximum size of the text box. Now provide the response and nothing else.
This is a basic fiddle that replicates your idea. The point is that an event fires on every text change, and it can be used to do something before the textbox is rendered. In this case I'm shrinking the font size based on a non-standard parameter, fixedWidth, that I added to the textbox.

    var canvas = new fabric.Canvas('c');

    var t1 = new fabric.Textbox('MyText', {
      width: 150,
      top: 5,
      left: 5,
      fontSize: 16,
      textAlign: 'center',
      fixedWidth: 150
    });

    canvas.on('text:changed', function(opt) {
      var t1 = opt.target;
      if (t1.width > t1.fixedWidth) {
        t1.fontSize *= t1.fixedWidth / (t1.width + 1);
        t1.width = t1.fixedWidth;
      }
    });

    canvas.add(t1);

CSS:

    canvas { border: 1px solid #999; }

HTML:

    <script src="https://rawgithub.com/kangax/fabric.js/master/dist/fabric.js"></script>
    <canvas id="c" width="600" height="600"></canvas>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41279855', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7329607/']}
jdg_86912
stackexchange
llm_judgeable_groundtruth_similarity
32541131
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to save an update to an existing database entry but when I do I get the error: Attaching an entity of type 'FFInfo.DAL.Location' failed because another entity of the same type already has the same primary key value. This can happen when using the 'Attach' method or setting the state of an entity to 'Unchanged' or 'Modified' if any entities in the graph have conflicting key values. This may be because some entities are new and have not yet received database-generated key values. In this case use the 'Add' method or the 'Added' entity state to track the graph and then set the state of non-new entities to 'Unchanged' or 'Modified' as appropriate. This is my controller's code. The save method I am using is the same I use in a few other areas to update data with no problems.

    [HttpPost, ValidateAntiForgeryToken]
    public ActionResult EditLocation(AddEditLocationVM model, HttpPostedFileBase MapFile)
    {
        try
        {
            using (var db = new GeographyContext())
            {
                model.Sections = new SelectList(db.Sections.Where(s => s.ID > 1).OrderBy(s => s.Title), "ID", "Title").ToList();
                model.GeographyTypes = new SelectList(db.GeographyTypes.Where(gt => gt.SectionID == model.Section).OrderBy(gt => gt.Name), "ID", "Name").ToList();
                model.ParentLocations = new SelectList(db.Locations.Where(l => l.SectionID == model.Section).OrderBy(l => l.Name), "ID", "Name").ToList();

                if (MapFile != null)
                {
                    if (FileHelper.IsNotValidImage(MapFile))
                    {
                        ModelState.AddModelError("Invaalid File Type", "Images must be JPG, GIF, or PNG files.");
                    }
                }

                if (ModelState.IsValid)
                {
                    if (MapFile != null)
                    {
                        var SectionRoute = db.Sections.Where(s => s.ID == model.Section).Select(s => s.Route).First();
                        model.MapFileID = FileHelper.UploadFile("Images/" + SectionRoute + "/Maps/" + MapFile.FileName.ToList(), "site", MapFile);
                    }

                    if (model.ParentLocation == 0)
                    {
                        model.ParentLocation = null;
                    }

                    var UpdatedLocation = new Location()
                    {
                        Description = model.Description,
                        GeographyTypeID = model.GeographyType,
                        ID = model.ID,
                        MapFileID = model.MapFileID,
                        Name = model.Name,
                        ParentLocationID = model.ParentLocation,
                        SectionID = model.Section
                    };

                    db.Entry(UpdatedLocation).State = EntityState.Modified;
                    db.SaveChanges();

                    ViewBag.Results = "Location information updated.";
                }

                return View(model);
            }
        }
        catch (Exception ex)
        {
            ErrorSignal.FromCurrentContext().Raise(ex);
            model.Sections = Enumerable.Empty<SelectListItem>();
            ViewBag.Results = "Error updating location informaiton, please try again later.";
            return View(model);
        }
    }

This is my Location entity code:

    public class Location
    {
        [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
        public int ID { get; set; }

        [Required, Index("IX_Location", 1, IsUnique = true)]
        public string Name { get; set; }

        [Index("IX_Location", 2, IsUnique = true)]
        public Int16 SectionID { get; set; }

        [Column(TypeName = "varchar(MAX)")]
        public string Description { get; set; }

        public Int16 GeographyTypeID { get; set; }

        public int? MapFileID { get; set; }

        public int? ParentLocationID { get; set; }

        [ForeignKey("SectionID")]
        public Section Section { get; set; }

        [ForeignKey("GeographyTypeID")]
        public GeographyType GeographyType { get; set; }

        [ForeignKey("MapFileID")]
        public File Map { get; set; }

        [ForeignKey("ParentLocationID")]
        public Location ParentLocation { get; set; }

        public ICollection<LocationTransitionPoint> TransitionPoints { get; set; }
    }

This is my first time trying to update a more complex entity like this, but from what I have found on the web I cannot see anything wrong. Now provide the response and nothing else.
You cannot have two entities of the same type with the same primary key attached to the context at the same time in Entity Framework. The problem is:

    model.ParentLocations = new SelectList(db.Locations.Where(l => l.SectionID == model.Section).OrderBy(l => l.Name), "ID", "Name").ToList();

In the line above you have (somehow) already loaded the Location whose ID is model.ID into the context. Then in:

    var UpdatedLocation = new Location()
    {
        Description = model.Description,
        GeographyTypeID = model.GeographyType,
        ID = model.ID,
        MapFileID = model.MapFileID,
        Name = model.Name,
        ParentLocationID = model.ParentLocation,
        SectionID = model.Section
    };
    db.Entry(UpdatedLocation).State = EntityState.Modified;

you create a new Location and try to attach it to the context (by setting its state to Modified), but another Location entity with exactly the same primary key is already tracked in memory, and this causes the exception. Try fetching the location and then changing its properties:

    var UpdatedLocation = db.Locations.First(l => l.ID == model.ID);
    // var UpdatedLocation = db.Locations.Find(model.ID); // maybe a better option
    UpdatedLocation.Description = model.Description;
    UpdatedLocation.GeographyTypeID = model.GeographyType;
    UpdatedLocation.MapFileID = model.MapFileID;
    UpdatedLocation.Name = model.Name;
    UpdatedLocation.ParentLocationID = model.ParentLocation;
    UpdatedLocation.SectionID = model.Section;
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32541131', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2911737/']}
jdg_86913
stackexchange
llm_judgeable_groundtruth_similarity
49651382
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm facing an annoying issue in Blue Prism; a little help will be appreciated. When I run the task I have created in Object Studio directly in Object Studio, it runs successfully, but when I try to run the same task from Process Studio using an action, it throws an error. The application is launched but I get this error. (The application is web-based.) Internal: Failed to perform step 1 in Read Stage 'Reader1' on page 'Main' - No elements match the supplied query terms. These are the Application Modeller settings [screenshot: Application Modeller]. And this is how I call it in the process [screenshots: object called in process; action properties]. The wait settings are as follows [screenshot]. When I try to highlight the link, it does highlight it. Now provide the response and nothing else.
You've defined repositories for subprojects only, but you have to define them in the root project too, because you have a dependencies block there:

    dependencies {
        compile('org.springframework.boot:spring-boot-starter')
        testCompile('org.springframework.boot:spring-boot-starter-test')
    }

In your case you can do it by declaring the repositories once more outside of the subprojects closure:

    repositories {
        jcenter()
        mavenCentral()
        maven { url "https://plugins.gradle.org/m2/" }
    }

    subprojects {
        ...
    }

Or you can define them for all projects:

    allprojects {
        repositories {
            jcenter()
            mavenCentral()
            maven { url "https://plugins.gradle.org/m2/" }
        }
    }

In that case, you don't need to declare them in the subprojects closure.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/49651382', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9596701/']}
jdg_86914
stackexchange
llm_judgeable_groundtruth_similarity
17063947
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Firstly, I saw this topic but I couldn't understand it. The question: there is a batch file in D:\path\to\file.bat with the following content:

    echo %cd%
    pause

The output is: C:\
It should be: D:\path\to

What am I doing wrong? Now provide the response and nothing else.
The system read-only variable %CD% keeps the path of the caller of the batch, not the batch file's location. You can get the name of the batch script itself, as typed by the user, with %0 (e.g. scripts\mybatch.bat). Parameter extensions can be applied to this, so %~dp0 will return the drive and path to the batch script (e.g. W:\scripts\) and %~f0 will return the full pathname (e.g. W:\scripts\mybatch.cmd). You can refer to other files in the same folder as the batch script by using this syntax:

    CALL %0\..\SecondBatch.cmd

This can even be used in a subroutine: Echo %0 will give the call label, but echo "%~nx0" will give you the filename of the batch script. When the %0 variable is expanded, the result is enclosed in quotation marks. More on batch parameters.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/17063947', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1345050/']}
jdg_86915
stackexchange
llm_judgeable_groundtruth_similarity
2554333
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: A few examples to show, just in case:

Inline table-valued:

    CREATE FUNCTION MyNS.GetUnshippedOrders()
    RETURNS TABLE
    AS RETURN
        SELECT a.SaleId, a.CustomerID, b.Qty
        FROM Sales.Sales a
        INNER JOIN Sales.SaleDetail b ON a.SaleId = b.SaleId
        INNER JOIN Production.Product c ON b.ProductID = c.ProductID
        WHERE a.ShipDate IS NULL
    GO

Multi-statement table-valued:

    CREATE FUNCTION MyNS.GetLastShipped(@CustomerID INT)
    RETURNS @CustomerOrder TABLE
    (
        SaleOrderID INT NOT NULL,
        CustomerID INT NOT NULL,
        OrderDate DATETIME NOT NULL,
        OrderQty INT NOT NULL
    )
    AS
    BEGIN
        DECLARE @MaxDate DATETIME

        SELECT @MaxDate = MAX(OrderDate)
        FROM Sales.SalesOrderHeader
        WHERE CustomerID = @CustomerID

        INSERT @CustomerOrder
        SELECT a.SalesOrderID, a.CustomerID, a.OrderDate, b.OrderQty
        FROM Sales.SalesOrderHeader a
        INNER JOIN Sales.SalesOrderHeader b ON a.SalesOrderID = b.SalesOrderID
        INNER JOIN Production.Product c ON b.ProductID = c.ProductID
        WHERE a.OrderDate = @MaxDate AND a.CustomerID = @CustomerID

        RETURN
    END
    GO

Is there an advantage to using one type (inline or multi-statement) over the other? Are there certain scenarios when one is better than the other, or are the differences purely syntactical? I realise the two example queries are doing different things, but is there a reason I would write them in that way? Reading about them, the advantages/differences haven't really been explained. Now provide the response and nothing else.
In researching Matt's comment, I have revised my original statement. He is correct: there will be a difference in performance between an inline table-valued function (ITVF) and a multi-statement table-valued function (MSTVF) even if they both simply execute a SELECT statement. SQL Server will treat an ITVF somewhat like a VIEW in that it will calculate an execution plan using the latest statistics on the tables in question. A MSTVF is equivalent to stuffing the entire contents of your SELECT statement into a table variable and then joining to that. Thus, the compiler cannot use any table statistics on the tables in the MSTVF. So, all things being equal (which they rarely are), the ITVF will perform better than the MSTVF. In my tests, the performance difference in completion time was negligible; however, from a statistics standpoint, it was noticeable. In your case, the two functions are not functionally equivalent. The MSTV function does an extra query each time it is called and, most importantly, filters on the customer id. In a large query, the optimizer would not be able to take advantage of other types of joins as it would need to call the function for each customerId passed. However, if you re-wrote your MSTV function like so:

    CREATE FUNCTION MyNS.GetLastShipped()
    RETURNS @CustomerOrder TABLE
    (
        SaleOrderID INT NOT NULL,
        CustomerID INT NOT NULL,
        OrderDate DATETIME NOT NULL,
        OrderQty INT NOT NULL
    )
    AS
    BEGIN
        INSERT @CustomerOrder
        SELECT a.SalesOrderID, a.CustomerID, a.OrderDate, b.OrderQty
        FROM Sales.SalesOrderHeader a
        INNER JOIN Sales.SalesOrderHeader b
            ON a.SalesOrderID = b.SalesOrderID
        INNER JOIN Production.Product c
            ON b.ProductID = c.ProductID
        WHERE a.OrderDate = (
            SELECT MAX(SH1.OrderDate)
            FROM Sales.SalesOrderHeader AS SH1
            WHERE SH1.CustomerID = a.CustomerId
        )
        RETURN
    END
    GO

In a query, the optimizer would be able to call that function once and build a better execution plan, but it still would not be better than an equivalent, non-parameterized ITVF or a VIEW. ITVFs should be preferred over MSTVFs when feasible because an ITVF inherits the datatypes, nullability and collation from the columns of the underlying tables, whereas you must declare those properties yourself in a multi-statement table-valued function, and, importantly, you will get better execution plans from the ITVF. In my experience, I have not found many circumstances where an ITVF was a better option than a VIEW, but mileage may vary. Thanks to Matt. Addition: Since I saw this come up recently, here is an excellent analysis done by Wayne Sheffield comparing the performance difference between inline table-valued functions and multi-statement functions. His original blog post. Copy on SQL Server Central.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2554333', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/288317/']}
jdg_86916
stackexchange
llm_judgeable_groundtruth_similarity
27922191
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When a FAB shows up, it animates using a scale-in animation, and when it is hidden, it uses a scale-out animation. So, it is a scale-in and scale-out animation. How do I make this animation? Now provide the response and nothing else.
Starting from @Zielony's answer, I got exactly the effect I wanted. Below is the code for the effect correctly applied.

scale_fab_in.xml:

    <?xml version="1.0" encoding="utf-8"?>
    <scale xmlns:android="http://schemas.android.com/apk/res/android"
        android:duration="500"
        android:fromXScale="0"
        android:fromYScale="0"
        android:pivotX="50%"
        android:pivotY="50%"
        android:toXScale="1"
        android:toYScale="1"
        android:interpolator="@android:interpolator/overshoot"/>

scale_fab_out.xml:

    <?xml version="1.0" encoding="utf-8"?>
    <scale xmlns:android="http://schemas.android.com/apk/res/android"
        android:duration="400"
        android:fromXScale="1"
        android:fromYScale="1"
        android:pivotX="50%"
        android:pivotY="50%"
        android:toXScale="0"
        android:toYScale="0"
        android:interpolator="@android:interpolator/overshoot"/>

EDIT 2/16/2016 - Another way to do it. Put the code below in your FAB class (or any other view):

    // global
    private static final int FAB_ANIM_DURATION = 200;

    public void hide() {
        // Only use scale animation if FAB is visible
        if (getVisibility() == View.VISIBLE) {
            // Pivots indicate where the animation begins from
            float pivotX = getPivotX() + getTranslationX();
            float pivotY = getPivotY() + getTranslationY();

            // Animate FAB shrinking
            ScaleAnimation anim = new ScaleAnimation(1, 0, 1, 0, pivotX, pivotY);
            anim.setDuration(FAB_ANIM_DURATION);
            anim.setInterpolator(getInterpolator());
            startAnimation(anim);
        }
        setVisibility(View.INVISIBLE);
    }

    public void show() {
        show(0, 0);
    }

    public void show(float translationX, float translationY) {
        // Set FAB's translation
        setTranslation(translationX, translationY);

        // Only use scale animation if FAB is hidden
        if (getVisibility() != View.VISIBLE) {
            // Pivots indicate where the animation begins from
            float pivotX = getPivotX() + translationX;
            float pivotY = getPivotY() + translationY;

            ScaleAnimation anim;
            // If pivots are 0, that means the FAB hasn't been drawn yet so just use the
            // center of the FAB
            if (pivotX == 0 || pivotY == 0) {
                anim = new ScaleAnimation(0, 1, 0, 1, Animation.RELATIVE_TO_SELF, 0.5f,
                        Animation.RELATIVE_TO_SELF, 0.5f);
            } else {
                anim = new ScaleAnimation(0, 1, 0, 1, pivotX, pivotY);
            }

            // Animate FAB expanding
            anim.setDuration(FAB_ANIM_DURATION);
            anim.setInterpolator(getInterpolator());
            startAnimation(anim);
        }
        setVisibility(View.VISIBLE);
    }

    private void setTranslation(float translationX, float translationY) {
        if (Build.VERSION.SDK_INT >= VERSION_CODES.HONEYCOMB_MR1) {
            animate().setInterpolator(getInterpolator()).setDuration(FAB_ANIM_DURATION)
                    .translationX(translationX).translationY(translationY);
        }
    }

    private Interpolator getInterpolator() {
        return AnimationUtils.loadInterpolator(getContext(), R.interpolator.fab_interpolator);
    }

And in your resources (e.g. res/values):

    <item name="fab_interpolator" type="interpolator">@android:interpolator/decelerate_cubic</item>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27922191', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3097425/']}
jdg_86917
stackexchange
llm_judgeable_groundtruth_similarity
1951519
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm just wondering, should I use std::size_t for loops and stuff instead of int? For instance:

    #include <cstdint>

    int main()
    {
        for (std::size_t i = 0; i < 10; ++i) {
            // std::size_t OK here? Or should I use, say, unsigned int instead?
        }
    }

In general, what is the best practice regarding when to use std::size_t? Now provide the response and nothing else.
A good rule of thumb is to use it for anything that you need to compare in the loop condition against something that is naturally a std::size_t itself. std::size_t is the type of any sizeof expression and is guaranteed to be able to express the maximum size of any object (including any array) in C++. By extension, it is also guaranteed to be big enough for any array index, so it is a natural type for a loop by index over an array. If you are just counting up to a number, then it may be more natural to use either the type of the variable that holds that number, or an int or unsigned int (if large enough), as these should be a natural size for the machine.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1951519', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/102797/']}
jdg_86918
stackexchange
llm_judgeable_groundtruth_similarity
26346076
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As you can see on the following image, the div (top left) shows the current marker's address/directions/save options. This was done using the embedded iframe code from Google Maps. But how can you do the same with a custom-coded map?

    geocoder = new google.maps.Geocoder();
    geocoder.geocode({ "address": nw.google_pointer }, function(results, status) {
        if (status == google.maps.GeocoderStatus.OK) {
            var myOptions = {
                zoom: parseInt(nw.google_zoom),
                center: results[0].geometry.location,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            };
            map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
            var marker = new google.maps.Marker({
                map: map,
                position: results[0].geometry.location,
                title: nw.google_pointer
            });
        } else {
            console.log('Geocode was not successful for the following reason: ' + status);
        }
    });

Now provide the response and nothing else.
jsbin with my solution.

Background: I needed a better solution to the mouse-wheel scroll problem here and here. The best way to get the behaviour I wanted was to stick with the API, but I really like the card shown in the embedded version.

Method:
- Copied the card code and associated CSS from the embedded version.
- In JavaScript, append the card HTML into the rendered map when it's ready.

My customizations:
- Took off the "Save" option because it doesn't work.
- Positioned the card on the top right: look at the HTML that is appended in JavaScript; it's hardcoded with absolute position. You can change this or associate it with a class if you prefer.

Hope this works for you!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26346076', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/232082/']}
jdg_86919
stackexchange
llm_judgeable_groundtruth_similarity
420
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I am correct, Emacs is both an editor and an interpreter (or compiler?) of the Elisp language. Is Emacs's init.el used for Emacs as an editor or as an interpreter of Elisp? If we use Emacs as an editor, then its init.el can be seen as the configuration file for this software. If we use Emacs with the --batch option, is it similar to what .bashrc is for bash as an interpreter of the bash language? Does the init.el get loaded then? Now provide the response and nothing else.
To supplement what others have said about Emacs the editor and its relation to Emacs Lisp, Emacs as an editor works this way: Keys you hit or mouse actions you make are bound to interactive Emacs-Lisp functions called commands . More precisely, key sequences are bound to commands. A key sequence might be just hitting one keyboard key. Or it might be hitting multiple keys, either together as a chord (using modifier keys such as Control and Shift) or sequentially. Key sequences that are not bound to commands are either ignored or have a default effect - typically raising an error telling you that that key sequence is not bound. It is worth repeating that this is pretty much all that Emacs does. Even when you type the letter a by hitting the a key, to insert an a in a text buffer, an Emacs command is invoked: the command named self-insert-command . In a nutshell: Everything you do in Emacs amounts to invoking Lisp functions by hitting keys. The keyboard keys that you typically think of as entering text ( a , K , 9 , % , etc.) are generally bound to command self-insert-command in contexts where text insertion makes sense. In other contexts, the same key might be bound to another command. For example, in the Emacs directory editor (mode Dired), the key a reads a file name and then visits that file. In still other contexts, the same key (e.g. a ) might not be bound, and Emacs tells you so when you hit it. So you can think of Emacs as an interpreter of Emacs Lisp. Or you can think of Emacs as a bunch of bindings of keys to Emacs-Lisp functions - that is, you can think of Emacs as Emacs Lisp with some keyboard "hooks".
{}
{'log_upvote_score': 4, 'links': ['https://emacs.stackexchange.com/questions/420', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/120/']}
jdg_86920
stackexchange
llm_judgeable_groundtruth_similarity
3048223
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A matrix $A$ has $10$ columns and dim(Null( $A^{T}$ )) $=7$ . The smallest possible number of rows of $A$ is $(A)$ $5$ $(B)$ $6$ $(C)$ $7$ $(D)$ $8$ $(E)$ $9$ I know that dim(Null( $A^{T}$ )) $=7$ implies that there are $7$ rows of zeros and that: Rank( $A$ )+Nullity( $A^T$ ) $=$ # of rows Rank( $A$ )+Nullity( $A$ ) $=$ # of columns I'm not really sure how to use all this information though... Can someone provide a hint? Now provide the response and nothing else.
I'm not sure what kind of functions are allowed, but here is a similar one (it might be equivalent after some small changes), the differences being that it's finite and doesn't require the ability to select primes: For any positive integer $p$, define this function $$f(n,p):= \left\lceil \frac{n-p\lfloor \frac{n}{p}\rfloor}{n} \right\rceil$$ If $p$ divides $n$ then $f(n,p)=0$; otherwise $n-p\lfloor n/p\rfloor\neq 0$, so $f(n,p)=1$. You can then use this to make the following: $$\theta(n):= n - n\prod_{p=2}^{n-1}f(n,p)$$ If $n$ is composite then one of the $p$'s will make the product $0$ and hence $\theta(n)=n$. Otherwise $n$ is prime and the product is $1$, giving $\theta(n)=0$.
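A quick brute-force check of $\theta$ in Python (not part of the argument, just a sanity test of the two formulas above):

    import math

    def f(n, p):
        # ceil((n - p*floor(n/p)) / n): 0 when p divides n, else 1
        return math.ceil((n - p * (n // p)) / n)

    def theta(n):
        prod = 1
        for p in range(2, n):
            prod *= f(n, p)
        return n - n * prod

    # theta(n) == 0 exactly when n is prime (for n >= 2)
    print([n for n in range(2, 30) if theta(n) == 0])
    # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]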
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3048223', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/233132/']}
jdg_86921
stackexchange
llm_judgeable_groundtruth_similarity
20595340
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to load a tga/bmp file. This works fine, but the end result looks like this: [garbled texture]. The image I'm trying to load looks like this: [original image]. Some of the code I'm using:

    GLuint texture;
    const char* filename = "/Users/Admin/Documents/Visual Studio 2013/Projects/OpenGL/OpenGL/image.tga";
    unsigned char* data;
    data = (unsigned char *) malloc(128 * 128 * 3);
    FILE* f;
    fopen_s(&f, filename, "rb");
    fread(data, 128 * 128 * 3, 1, f);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 128, 128, 0, GL_RGB, GL_UNSIGNED_BYTE, data);

Does anybody have an idea what I'm doing wrong? Now provide the response and nothing else.
You can load a bitmap and a TGA file using the classes below.

BMP:

    #include <vector>
    #include <fstream>
    #include <cstdint>
    #include <stdexcept>

    #ifdef __APPLE__
    #include <OpenGL/gl.h>
    #include <OpenGL/glu.h>
    #endif
    #ifdef _WIN32
    #include <GL/gl.h>
    #include <GL/glu.h>
    #endif

    typedef union PixelInfo
    {
        std::uint32_t Colour;
        struct
        {
            std::uint8_t B, G, R, A;
        };
    } *PPixelInfo;

    class BMP
    {
    private:
        std::uint32_t width, height;
        std::uint16_t BitsPerPixel;
        std::vector<std::uint8_t> Pixels;

    public:
        BMP(const char* FilePath);
        std::vector<std::uint8_t> GetPixels() const {return this->Pixels;}
        std::uint32_t GetWidth() const {return this->width;}
        std::uint32_t GetHeight() const {return this->height;}
        bool HasAlphaChannel() {return BitsPerPixel == 32;}
    };

    BMP::BMP(const char* FilePath)
    {
        std::fstream hFile(FilePath, std::ios::in | std::ios::binary);
        if (!hFile.is_open()) throw std::invalid_argument("Error: File Not Found.");

        hFile.seekg(0, std::ios::end);
        std::size_t Length = hFile.tellg();
        hFile.seekg(0, std::ios::beg);
        std::vector<std::uint8_t> FileInfo(Length);
        hFile.read(reinterpret_cast<char*>(FileInfo.data()), 54);

        if(FileInfo[0] != 'B' && FileInfo[1] != 'M')
        {
            hFile.close();
            throw std::invalid_argument("Error: Invalid File Format. Bitmap Required.");
        }

        if (FileInfo[28] != 24 && FileInfo[28] != 32)
        {
            hFile.close();
            throw std::invalid_argument("Error: Invalid File Format. 24 or 32 bit Image Required.");
        }

        BitsPerPixel = FileInfo[28];
        width = FileInfo[18] + (FileInfo[19] << 8);
        height = FileInfo[22] + (FileInfo[23] << 8);
        std::uint32_t PixelsOffset = FileInfo[10] + (FileInfo[11] << 8);
        std::uint32_t size = ((width * BitsPerPixel + 31) / 32) * 4 * height;
        Pixels.resize(size);

        hFile.seekg(PixelsOffset, std::ios::beg);
        hFile.read(reinterpret_cast<char*>(Pixels.data()), size);
        hFile.close();
    }

    int main()
    {
        BMP info = BMP("C:/Users/....../Desktop/SomeBmp.bmp");

        GLuint texture = 0;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(GL_TEXTURE_2D, 0, info.HasAlphaChannel() ? GL_RGBA : GL_RGB,
                     info.GetWidth(), info.GetHeight(), 0,
                     info.HasAlphaChannel() ? GL_BGRA : GL_BGR,
                     GL_UNSIGNED_BYTE, info.GetPixels().data());
    }

TGA (same includes as the BMP example, plus <cstring> for std::memcmp):

    typedef union PixelInfo
    {
        std::uint32_t Colour;
        struct
        {
            std::uint8_t R, G, B, A;
        };
    } *PPixelInfo;

    class Tga
    {
    private:
        std::vector<std::uint8_t> Pixels;
        bool ImageCompressed;
        std::uint32_t width, height, size, BitsPerPixel;

    public:
        Tga(const char* FilePath);
        std::vector<std::uint8_t> GetPixels() {return this->Pixels;}
        std::uint32_t GetWidth() const {return this->width;}
        std::uint32_t GetHeight() const {return this->height;}
        bool HasAlphaChannel() {return BitsPerPixel == 32;}
    };

    Tga::Tga(const char* FilePath)
    {
        std::fstream hFile(FilePath, std::ios::in | std::ios::binary);
        if (!hFile.is_open()) { throw std::invalid_argument("File Not Found."); }

        std::uint8_t Header[18] = {0};
        std::vector<std::uint8_t> ImageData;
        static std::uint8_t DeCompressed[12] = {0x0, 0x0, 0x2, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0};
        static std::uint8_t IsCompressed[12] = {0x0, 0x0, 0xA, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0};

        hFile.read(reinterpret_cast<char*>(&Header), sizeof(Header));

        if (!std::memcmp(DeCompressed, &Header, sizeof(DeCompressed)))
        {
            BitsPerPixel = Header[16];
            width = Header[13] * 256 + Header[12];
            height = Header[15] * 256 + Header[14];
            size = ((width * BitsPerPixel + 31) / 32) * 4 * height;

            if ((BitsPerPixel != 24) && (BitsPerPixel != 32))
            {
                hFile.close();
                throw std::invalid_argument("Invalid File Format. Required: 24 or 32 Bit Image.");
            }

            ImageData.resize(size);
            ImageCompressed = false;
            hFile.read(reinterpret_cast<char*>(ImageData.data()), size);
        }
        else if (!std::memcmp(IsCompressed, &Header, sizeof(IsCompressed)))
        {
            BitsPerPixel = Header[16];
            width = Header[13] * 256 + Header[12];
            height = Header[15] * 256 + Header[14];
            size = ((width * BitsPerPixel + 31) / 32) * 4 * height;

            if ((BitsPerPixel != 24) && (BitsPerPixel != 32))
            {
                hFile.close();
                throw std::invalid_argument("Invalid File Format. Required: 24 or 32 Bit Image.");
            }

            PixelInfo Pixel = {0};
            int CurrentByte = 0;
            std::size_t CurrentPixel = 0;
            ImageCompressed = true;
            std::uint8_t ChunkHeader = {0};
            int BytesPerPixel = (BitsPerPixel / 8);
            ImageData.resize(width * height * sizeof(PixelInfo));

            do
            {
                hFile.read(reinterpret_cast<char*>(&ChunkHeader), sizeof(ChunkHeader));

                if(ChunkHeader < 128)
                {
                    ++ChunkHeader;
                    for(int I = 0; I < ChunkHeader; ++I, ++CurrentPixel)
                    {
                        hFile.read(reinterpret_cast<char*>(&Pixel), BytesPerPixel);
                        ImageData[CurrentByte++] = Pixel.B;
                        ImageData[CurrentByte++] = Pixel.G;
                        ImageData[CurrentByte++] = Pixel.R;
                        if (BitsPerPixel > 24) ImageData[CurrentByte++] = Pixel.A;
                    }
                }
                else
                {
                    ChunkHeader -= 127;
                    hFile.read(reinterpret_cast<char*>(&Pixel), BytesPerPixel);
                    for(int I = 0; I < ChunkHeader; ++I, ++CurrentPixel)
                    {
                        ImageData[CurrentByte++] = Pixel.B;
                        ImageData[CurrentByte++] = Pixel.G;
                        ImageData[CurrentByte++] = Pixel.R;
                        if (BitsPerPixel > 24) ImageData[CurrentByte++] = Pixel.A;
                    }
                }
            } while(CurrentPixel < (width * height));
        }
        else
        {
            hFile.close();
            throw std::invalid_argument("Invalid File Format. Required: 24 or 32 Bit TGA File.");
        }

        hFile.close();
        this->Pixels = ImageData;
    }

    int main()
    {
        Tga info = Tga("C:/Users/...../Desktop/SomeTGA.tga");

        GLuint texture = 0;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(GL_TEXTURE_2D, 0, info.HasAlphaChannel() ? GL_RGBA : GL_RGB,
                     info.GetWidth(), info.GetHeight(), 0,
                     info.HasAlphaChannel() ? GL_RGBA : GL_RGB,
                     GL_UNSIGNED_BYTE, info.GetPixels().data());
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20595340', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3075425/']}
jdg_86922
stackexchange
llm_judgeable_groundtruth_similarity
9211813
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: New to Ruby and RoR and loving it each day, so here is my question, since I have no idea how to google it (and I have tried :) ). We have a method:

    def foo(first_name, last_name, age, sex, is_plumber)
      # some code
      # error happens here
      logger.error "Method has failed, here are all method arguments #{SOMETHING}"
    end

So what I am looking for is a way to get all arguments passed to the method, without listing each one. Since this is Ruby, I assume there is a way :) If it was Java, I would just list them :) The output would be: Method has failed, here are all method arguments {"Mario", "Super", 40, true, true} Now provide the response and nothing else.
In Ruby 1.9.2 and later you can use the parameters method on a method to get the list of parameters for that method. This will return a list of pairs indicating the name of the parameter and whether it is required. e.g. If you do

    def foo(x, y)
    end

then

    method(:foo).parameters # => [[:req, :x], [:req, :y]]

You can use the special variable __method__ to get the name of the current method. So within a method the names of its parameters can be obtained via

    args = method(__method__).parameters.map { |arg| arg[1].to_s }

You could then display the name and value of each parameter with

    logger.error "Method failed with " + args.map { |arg| "#{arg} = #{eval arg}" }.join(', ')

Note: since this answer was originally written, in current versions of Ruby eval can no longer be called with a symbol. To address this, an explicit to_s has been added when building the list of parameter names, i.e. parameters.map { |arg| arg[1].to_s }
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9211813', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/776301/']}
jdg_86923
stackexchange
llm_judgeable_groundtruth_similarity
35670235
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is the code of a linked list in C++:

    #include <iostream>
    using namespace std;

    class node {
    public:
        int data;
        node *next;
    };

    void insertdata(int element, node *head)
    {
        node *newnode = new node();
        newnode->data = element;
        newnode->next = NULL;
        node *itr;
        itr = head;
        if (head == NULL)
            head = newnode;
        else {
            while (itr->next != NULL) {
                itr = itr->next;
            }
            itr->next = newnode;
        }
    }

    void print(node *head)
    {
        node *print = head;
        while (print != NULL) {
            cout << print->data << "\n";
            print = print->next;
        }
    }

    int main()
    {
        node *head = NULL;
        insertdata(10, head);
        insertdata(20, head);
        insertdata(30, head);
        print(head);
        return 0;
    }

This is not working; it works fine if I declare head as a global object. Why do I have to declare it globally? I am a 2nd-year B.Tech student learning this for the first time, so it looks confusing. Now provide the response and nothing else.
Here's how I would do it:

    >>> s = "The quick brown fox jumps over the lazy dog"
    >>> l = [1, 8, 14, 18, 27]
    >>> l = [0] + l + [len(s)]
    >>> [s[x:y] for x, y in zip(l, l[1:])]
    ['T', 'he quic', 'k brow', 'n fo', 'x jumps o', 'ver the lazy dog']

Some explanation: I am adding 0 to the front and len(s) to the end of the list, such that

    >>> zip(l, l[1:])
    [(0, 1), (1, 8), (8, 14), (14, 18), (18, 27), (27, 43)]

gives me a sequence of tuples of slice indices. All that's left to do is unpack those indices in a list comprehension and generate the slices you want. edit: If you really care about the memory footprint of this operation, because you often deal with very large strings and lists, use generators all the way and build your list l such that it includes the 0 and len(s) in the first place. For Python 2:

    >>> from itertools import izip, tee
    >>> s = "The quick brown fox jumps over the lazy dog"
    >>> l = [0, 1, 8, 14, 18, 27, 43]
    >>>
    >>> def get_slices(s, l):
    ...     it1, it2 = tee(l)
    ...     next(it2)
    ...     for start, end in izip(it1, it2):
    ...         yield s[start:end]
    ...
    >>> list(get_slices(s, l))
    ['T', 'he quic', 'k brow', 'n fo', 'x jumps o', 'ver the lazy dog']

For Python 3: zip does what izip did in Python 2 (see the Python 3.3 version). For Python 3.3+ with the yield from syntax:

    >>> from itertools import tee
    >>> s = "The quick brown fox jumps over the lazy dog"
    >>> l = [0, 1, 8, 14, 18, 27, 43]
    >>>
    >>> def get_slices(s, l):
    ...     it1, it2 = tee(l)
    ...     next(it2)
    ...     yield from (s[start:end] for start, end in zip(it1, it2))
    ...
    >>> list(get_slices(s, l))
    ['T', 'he quic', 'k brow', 'n fo', 'x jumps o', 'ver the lazy dog']
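As a small addendum (not in the original answer): on Python 3.10 and later, itertools.pairwise does the same pairing without the tee/next dance:

    from itertools import pairwise  # Python 3.10+

    s = "The quick brown fox jumps over the lazy dog"
    l = [0, 1, 8, 14, 18, 27, len(s)]
    print([s[a:b] for a, b in pairwise(l)])
    # ['T', 'he quic', 'k brow', 'n fo', 'x jumps o', 'ver the lazy dog']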
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35670235', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5069466/']}
jdg_86924
stackexchange
llm_judgeable_groundtruth_similarity
257670
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have been tasked with reviewing the settings of an SSH server. I'm currently trying to figure out what the best practices are, and I'm having a bit of trouble finding a good answer. I keep finding a lot of information related to web servers, but I don't know if that fully applies here. These are the currently enabled settings:

    SSH Key Type: ssh-dsa (ssh-rsa seems to be recommended)
    SSH Ciphers: AES-128-cbc, AES-192-cbc, AES-256-cbc, AES-128-ctr, AES-192-ctr, AES-256-ctr, Rijndael-cbc
    SSH MACs: MD5, SHA1, SHA1 96, SHA2 256, SHA2 256-96, SHA2 512, SHA2 512-96
    SSH Key Exchange Algorithms: DH-GROUP1-SHA1, DH-GROUP14-SHA1, DH-GROUP14-SHA2 256, DH-GROUP16-SHA2 512, DH-GROUP-EXCHANGE-SHA2 256, ECDH-SHA2-NISTP256, ECDH-SHA2-NISTP384, ECDH-SHA2-NISTP521

Would anyone be able to perhaps point me in the right direction where I can read up on best practices and which ciphers, MACs, and algorithms should be disabled/enabled? Now provide the response and nothing else.
The defaults for a recent version of OpenSSH are good. The server's asymmetric key type and the client's asymmetric key type are specified in HostKeyAlgorithms and PubkeyAcceptedAlgorithms respectively. EdDSA over modern curves (Ed25519) is preferred over ECDSA using NIST P curves, which is preferred over RSA signatures, which are preferred over DSA signatures (which are dead). I believe NIST P curves are not backdoored and are safe to use in ECDH and ECDSA (provided your implementation is good and your CSPRNG is good). See this answer by Thomas Pornin for reasoning. The curves need to be at least 256 bits. RSA keys need to have a modulus of at least 2048 bits, but 3072 or 4096 are better, because strictly speaking 2048 bits provides only about 112 "bits of security" while the recommendation is 128. Everything must use SHA2 and not SHA1. So, in order:

- ssh-ed25519
- ecdsa-sha2-nistp256, ecdsa-sha2-nistp384, ecdsa-sha2-nistp521
- rsa-sha2-512, rsa-sha2-256 (plus the corresponding certificate variants, discussed below)

Don't use:

- ssh-rsa (this uses SHA1)
- ssh-dsa (this also uses SHA1, but also DSA is dead)

The sk- variants are for hardware devices that hold the key, and are otherwise equivalent. The cert variants are for SSH certificates (not the X.509 certificates used for TLS) and are otherwise equivalent.

The asymmetric key exchange is specified by KexAlgorithms. ECDH over modern curves (X25519) is preferred over ECDH with NIST P curves, which is preferred over FFDHE. Again the curves need to be at least 256 bits; the FFDHE group needs to be at least 2048 bits, though again 3072 or 4096 is preferred. Again everything must use SHA2 and not SHA1. So, in order:

- curve25519-sha256, curve25519-sha256@libssh.org
- ecdh-sha2-nistp256, ecdh-sha2-nistp384, ecdh-sha2-nistp521
- diffie-hellman-group16-sha512, diffie-hellman-group15-sha512
- diffie-hellman-group14-sha256, diffie-hellman-group-exchange-sha256 (2048 bit) - this is up to you; I think 2048 bits is not broken and the NSA will not bother decrypting my recorded SSH traffic in ten or twenty years, but you can say you want at least 4096 bits and that's OK.

Don't use:

- diffie-hellman-group1-sha1, diffie-hellman-group14-sha1, diffie-hellman-group-exchange-sha1 - because SHA1 (and group1 is far too small)
- diffie-hellman-group-exchange-sha256 (1024 bit) - because the group is too small
- diffie-hellman-group18-sha512, diffie-hellman-group17-sha512 - secure but overkill; slow for no reason

The symmetric ciphers can be newer AEADs, or an older cipher plus a separate MAC that need to be combined. The cipher is specified by Ciphers and the MAC, if your cipher is not an AEAD, is specified by MACs. Almost all AEADs (including GCM and ChaCha) are built on top of CTR. AES-GCM is the most popular because it is fast and wasn't encumbered by patents like OCB3. ChaCha20-Poly1305 is popular as the safest choice because it has more security margin than AES and doesn't require special hardware the way AES-GCM does (AESNI and CLMUL). But because both ends of an SSH connection are very likely to be x86 chips that do have the special hardware, AES-GCM is still the most popular. For compatibility, AES-CTR combined with HMAC-SHA2 in EtM mode is safe (but slower). If cipher + MAC is used, "encrypt then MAC" is the better combination, but we didn't know that in the 90s, so it's not the default. If a hash function is used (in HMAC), SHA2 should be used and SHA1 should not be, even though it is safe in HMAC, because it's better to just not use SHA1. SHA-512 is faster than SHA-256 on 64-bit chips (like servers, laptops, smartphones and tablets). So, in order:

- chacha20-poly1305@openssh.com
- aes256-gcm@openssh.com, aes128-gcm@openssh.com
- aes128-ctr, aes192-ctr, aes256-ctr ciphers when combined with hmac-sha2-512-etm@openssh.com or hmac-sha2-256-etm@openssh.com MACs
- aes128-ctr, aes192-ctr, aes256-ctr ciphers when combined with the umac-128-etm@openssh.com MAC

Don't use:

- aes128-cbc, aes192-cbc, aes256-cbc ciphers, even when combined with hmac-sha2-512-etm@openssh.com or hmac-sha2-256-etm@openssh.com MACs. CBC was disabled by default in OpenSSH 6.7, though it is maybe safe (but slow) because the -etm MAC should protect you. Use it only if you have to.
- aes128-cbc ciphers with an HMAC MAC in a variant without -etm are probably actually dangerous; definitely avoid.
- hmac-sha2-512, hmac-sha2-256, umac-128@openssh.com MACs not in the -etm variant sound dangerous; avoid.
- twofish-cbc, twofish128-cbc, twofish256-cbc, twofish128-ctr, twofish256-ctr ciphers - Twofish is a cipher that didn't become popular. It has received much less scrutiny than AES and ChaCha because it is less popular. It's probably safe, but I would not use it.
- 3des-cbc, 3des-ctr ciphers - 3DES is slow and has a small 64-bit block, which makes it susceptible to SWEET32; don't use.
- arcfour, arcfour128, arcfour256, blowfish, cast ciphers - These are obsolete; don't use them. Arcfour (RC4) is definitely broken.
- hmac-sha1 MAC - better to just not use SHA1.
- umac-64-etm@openssh.com, umac-64@openssh.com MACs - small tag size; don't use.

NIST is working on cryptography that is not vulnerable to big quantum computers. They aim to standardize something long before anyone has a big enough quantum computer, so that by the time a big quantum computer is built to decrypt long-ago recorded traffic, the information is irrelevant. People are running experimental PQC algorithms. I would wait for this to shake down and switch to whatever emerges.
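For concreteness, here is a minimal sshd_config sketch along these lines. Treat it as a starting point, not a drop-in recommendation: the exact lists should be trimmed to your compatibility needs, and the option names assume a current OpenSSH.

    # /etc/ssh/sshd_config (excerpt)
    HostKeyAlgorithms ssh-ed25519,ecdsa-sha2-nistp256,rsa-sha2-512,rsa-sha2-256
    KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,diffie-hellman-group16-sha512
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com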
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/257670', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/270944/']}
jdg_86925
stackexchange
llm_judgeable_groundtruth_similarity
48441418
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to use lazy loading but I cannot understand why it does not work; it gives me the error "Cannot find module". This is my environment: Angular 5.2.1, .NET Core 2, Webpack 3.10.0, angular-router-loader 0.8.2, @angular/cli 1.6.5. I tried different paths in loadChildren, always without success. I also temporarily disabled all the guards and the children routing. What did I do wrong?

FOLDERS

    ClientApp
      app
        components
          users
            users-routing.module.ts
            users.module.ts
        app-routing.module.ts
        app.module.shared.ts

app-routing.module.ts

    const appRoutes: Routes = [
        { path: 'users', loadChildren: './components/users/users.module#UsersModule'/* , canLoad: [AuthGuard] */ },
        { path: '', redirectTo: '/login', pathMatch: 'full' },
        { path: '**', redirectTo: '/login' }
    ];

    @NgModule({
        imports: [
            RouterModule.forRoot(appRoutes, { enableTracing: false })
        ],
        exports: [RouterModule],
        providers: [CanDeactivateGuard]
    })
    export class AppRoutingModule { }

users-routing.module.ts

    const usersRoutes: Routes = [
        {
            path: '', component: UsersComponent/* ,
            //canActivate: [AuthGuard],
            children: [
                {
                    path: 'detail', canActivateChild: [AuthGuard],
                    children: [
                        { path: ':id', component: UserViewComponent },
                        { path: 'edit/:id', component: UserFormComponent, canDeactivate: [CanDeactivateGuard], resolve: { user: UsersResolver } },
                        { path: '', component: UserFormComponent, canDeactivate: [CanDeactivateGuard] }
                    ]
                },
                { path: '', component: UsersListComponent }
            ] */
        }
    ];

    @NgModule({
        imports: [RouterModule.forChild(usersRoutes)],
        exports: [RouterModule]
    })
    export class UsersRoutingModule { }

users.module.ts

    @NgModule({
        imports: [
            CommonModule,
            FormsModule,
            UsersRoutingModule,
            RouterModule
        ],
        declarations: [
            UsersComponent,
            UserFormComponent,
            UsersListComponent,
            UserViewComponent
        ],
        providers: [
            UsersResolver,
            RouterModule
        ]
    })
    export class UsersModule { }

webpack.config.js

    const path = require('path');
    const webpack = require('webpack');
    const merge = require('webpack-merge');
    const AngularCompilerPlugin = require('@ngtools/webpack').AngularCompilerPlugin;
    const CheckerPlugin = require('awesome-typescript-loader').CheckerPlugin;

    module.exports = (env) => {
        // Configuration in common to both client-side and server-side bundles
        const isDevBuild = !(env && env.prod);
        const sharedConfig = {
            stats: { modules: false },
            context: __dirname,
            resolve: { extensions: ['.js', '.ts'] },
            output: {
                filename: '[name].js',
                publicPath: 'dist/' // Webpack dev middleware, if enabled, handles requests for this URL prefix
            },
            module: {
                rules: [
                    { test: /\.ts$/, include: /ClientApp/, use: isDevBuild ? ['awesome-typescript-loader?silent=true', 'angular2-template-loader'] : '@ngtools/webpack' },
                    { test: /\.html$/, use: 'html-loader?minimize=false' },
                    { test: /\.css$/, use: ['to-string-loader', isDevBuild ? 'css-loader' : 'css-loader?minimize'] },
                    { test: /\.(png|jpg|jpeg|gif|svg)$/, use: 'url-loader?limit=25000' }
                ],
                loaders: [
                    { test: /\.ts$/, loaders: ['awesome-typescript-loader'] },
                    { test: /\.(ts|js)$/, loaders: ['angular-router-loader'] }
                ]
            },
            plugins: [new CheckerPlugin()]
        };

        // Configuration for client-side bundle suitable for running in browsers
        const clientBundleOutputDir = './wwwroot/dist';
        const clientBundleConfig = merge(sharedConfig, {
            entry: { 'main-client': './ClientApp/boot.browser.ts' },
            output: { path: path.join(__dirname, clientBundleOutputDir) },
            plugins: [
                new webpack.DllReferencePlugin({
                    context: __dirname,
                    manifest: require('./wwwroot/dist/vendor-manifest.json')
                })
            ].concat(isDevBuild ? [
                // Plugins that apply in development builds only
                new webpack.SourceMapDevToolPlugin({
                    filename: '[file].map', // Remove this line if you prefer inline source maps
                    moduleFilenameTemplate: path.relative(clientBundleOutputDir, '[resourcePath]') // Point sourcemap entries to the original file locations on disk
                })
            ] : [
                // Plugins that apply in production builds only
                new webpack.optimize.UglifyJsPlugin(),
                new AngularCompilerPlugin({
                    tsConfigPath: './tsconfig.json',
                    entryModule: path.join(__dirname, 'ClientApp/app/app.module.browser#AppModule'),
                    exclude: ['./**/*.server.ts']
                })
            ])
        });

        // Configuration for server-side (prerendering) bundle suitable for running in Node
        const serverBundleConfig = merge(sharedConfig, {
            resolve: { mainFields: ['main'] },
            entry: { 'main-server': './ClientApp/boot.server.ts' },
            plugins: [
                new webpack.DllReferencePlugin({
                    context: __dirname,
                    manifest: require('./ClientApp/dist/vendor-manifest.json'),
                    sourceType: 'commonjs2',
                    name: './vendor'
                })
            ].concat(isDevBuild ? [] : [
                // Plugins that apply in production builds only
                new AngularCompilerPlugin({
                    tsConfigPath: './tsconfig.json',
                    entryModule: path.join(__dirname, 'ClientApp/app/app.module.server#AppModule'),
                    exclude: ['./**/*.browser.ts']
                })
            ]),
            output: {
                libraryTarget: 'commonjs',
                path: path.join(__dirname, './ClientApp/dist')
            },
            target: 'node',
            devtool: 'inline-source-map'
        });

        return [clientBundleConfig, serverBundleConfig];
    };

tsconfig.json

    {
      "compilerOptions": {
        "module": "es2015",
        "moduleResolution": "node",
        "target": "es5",
        "sourceMap": true,
        "experimentalDecorators": true,
        "emitDecoratorMetadata": true,
        "skipDefaultLibCheck": true,
        "skipLibCheck": true, // Workaround for https://github.com/angular/angular/issues/17863. Remove this if you upgrade to a fixed version of Angular.
        "strict": true,
        "lib": ["es6", "dom"],
        "types": ["webpack-env"],
        "typeRoots": ["node_modules/@types"]
      },
      "exclude": ["bin", "node_modules"],
      "atom": { "rewriteTsconfig": false }
    }

ERROR MESSAGE

    Unhandled Promise rejection: Cannot find module './ClientApp/app/components/users/users.module'. ; Zone: angular ; Task: Promise.then ; Value: Error: Cannot find module './ClientApp/app/components/users/users.module'.
        at vendor.js?v=AdjSBPSITyauSY4VQBBoZmJ6NdWqor7MEuHgdi2Dgko:34015
        at ZoneDelegate.invoke (vendor.js?v=AdjSBPSITyauSY4VQBBoZmJ6NdWqor7MEuHgdi2Dgko:117428)
        at Object.onInvoke (vendor.js?v=AdjSBPSITyauSY4VQBBoZmJ6NdWqor7MEuHgdi2Dgko:5604)
        at ZoneDelegate.invoke (vendor.js?v=AdjSBPSITyauSY4VQBBoZmJ6NdWqor7MEuHgdi2Dgko:117427)
        at Zone.run (vendor.js?v=AdjSBPSITyauSY4VQBBoZmJ6NdWqor7MEuHgdi2Dgko:117178)
        at vendor.js?v=AdjSBPSITyauSY4VQBBoZmJ6NdWqor7MEuHgdi2Dgko:117898
        at ZoneDelegate.invokeTask (vendor.js?v=AdjSBPSITyauSY4VQBBoZmJ6NdWqor7MEuHgdi2Dgko:117461)
        at Object.onInvokeTask (vendor.js?v=AdjSBPSITyauSY4VQBBoZmJ6NdWqor7MEuHgdi2Dgko:5595)
        at ZoneDelegate.invokeTask (vendor.js?v=AdjSBPSITyauSY4VQBBoZmJ6NdWqor7MEuHgdi2Dgko:117460)
        at Zone.runTask (vendor.js?v=AdjSBPSITyauSY4VQBBoZmJ6NdWqor7MEuHgdi2Dgko:117228)
    Error: Cannot find module './ClientApp/app/components/users/users.module'.
        at http://localhost:5000/dist/vendor.js?v=AdjSBPSITyauSY4VQBBoZmJ6NdWqor7MEuHgdi2Dgko:34015:9
        ... [truncated]

EDIT: link to stackblitz for testing. Now provide the response and nothing else.
I have found two solutions (added by the OP in an edit):

1. Reference the module after it has already been resolved with an import statement:

import { UsersModule } from './components/users/users.module';

then reference it this way:

{
    path: 'users',
    loadChildren: () => UsersModule,
    canLoad: [AuthGuard]
}

2. Add ng-router-loader to the application (npm install ng-router-loader --save-dev) and set up Webpack like this:

rules: [
    {
        test: /\.ts$/,
        include: /ClientApp/,
        //use: isDevBuild ? ['awesome-typescript-loader?silent=true', 'angular2-template-loader'] : '@ngtools/webpack'
        use: isDevBuild ? [{ loader: 'ng-router-loader' }, 'awesome-typescript-loader?silent=true', 'angular2-template-loader'] : '@ngtools/webpack'
    },
    { test: /\.html$/, use: 'html-loader?minimize=false' },
    { test: /\.css$/, use: ['to-string-loader', isDevBuild ? 'css-loader' : 'css-loader?minimize'] },
    { test: /\.(png|jpg|jpeg|gif|svg)$/, use: 'url-loader?limit=25000' }
],

then reference the module by path:

{
    path: 'users',
    loadChildren: './components/users/users.module#UsersModule',
    canLoad: [AuthGuard]
}
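As an aside for readers on newer Angular versions (this goes beyond the OP's Angular 5 / Webpack setup and is not needed for it): from Angular 8 onward, the magic-string form of loadChildren was deprecated in favour of dynamic imports, so the equivalent lazy route is written roughly like this:

const appRoutes: Routes = [
    {
        path: 'users',
        // Angular 8+ style; replaces './components/users/users.module#UsersModule'
        loadChildren: () => import('./components/users/users.module').then(m => m.UsersModule),
        canLoad: [AuthGuard]
    }
];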
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/48441418', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9266804/']}
jdg_86926
stackexchange
llm_judgeable_groundtruth_similarity
318581
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Friedberg - Linear Algebra, p. 102. This book states that "a bijective linear map from a vector space to another vector space is called an isomorphism". As far as I know, isomorphism generally means a bijective homomorphism, and the notation for it is $\cong$; it does not specifically mean a bijective linear map. So what is a bijective linear map called? And what is the notation for it? Now provide the response and nothing else.
This type of exercise is difficult to judge in a forum like this, because we need to know precisely what has been covered up to that point. Many posters are then inclined to use extra bits they have learned (later in a similar course). I get the feeling that the following is what might have been expected. This is just fleshing out the hint in Steve D's comment, so I make it a CW. Let $H$ be a normal subgroup of order $p$. We know (Lagrange's theorem) that $H$ is cyclic. Let $g$ be a generator. Let $x$ be any element of $G$. By normality of $H$ we know that $$xgx^{-1}=g^k$$ for some integer $k$, $0<k<p$. Conjugation by $x$ is an automorphism of $H$, so $xg^{t}x^{-1}=g^{tk}$ for all integers $t$. In particular we get that $$x^2g x^{-2}=x(xgx^{-1})x^{-1}=xg^kx^{-1}=g^{k^2}.$$ An obvious induction then proves that $$x^tgx^{-t}=g^{k^t}$$ for all natural numbers $t$. But, again by Lagrange's theorem, $x^{p^2}=1$. Therefore $$g=x^{p^2}g\,x^{-p^2}=g^{k^{p^2}}.$$ As $g$ is of order $p$, this means that $1\equiv k^{p^2}\pmod p$. But two applications of Fermat's little theorem tell us that $$k\equiv k^p \equiv k^{p^2}\equiv 1\pmod p.$$ Recalling the constraint $0<k<p$, we can conclude that $k=1$. Therefore $x$ and $g$ commute. Obviously $x$ then commutes with all the powers of $g$. As $x$ was arbitrary, we have shown that $H\le Z(G)$.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/318581', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/61597/']}
jdg_86927
stackexchange
llm_judgeable_groundtruth_similarity
15347843
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have hundreds of jpg files in different folders, like this:

304775 105_01.jpg
304775 105_03.jpg
304775 105_05.jpg
304775 105_07.jpg
304775 105_02.jpg
304775 105_04.jpg
304775 105_06.jpg

Basically, I need to remove the SPACES. I already know the command to change the spaces into underscores:

$ rename "s/ /_/g" *

But I do not need the underscores in this case. I just need to remove the space. I tried the following, but it didn't work:

$ rename "s/ //g" *

Any help would be appreciated. Now provide the response and nothing else.
You could do something like this:

IFS=$'\n'    # split only on newlines (defensive; the glob loop below does not word-split anyway)
for file in *.jpg; do
    mv "$file" "${file//[[:space:]]}"
done
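If you want to preview what the loop will do before touching any files, a dry-run variant of the same loop is handy (the echo only prints the commands; it is my illustrative addition, not part of the original answer):

for file in *.jpg; do
    target="${file//[[:space:]]}"
    [ "$file" = "$target" ] && continue   # name already has no whitespace
    echo mv -- "$file" "$target"          # remove 'echo' to perform the rename
done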
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15347843', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1756689/']}
jdg_86928
stackexchange
llm_judgeable_groundtruth_similarity
25025728
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm just learning testing and I've been at it for a few hours now with no success. Every time I run Mocha, my test shows up in the terminal and returns no result, like this:

Express test app started on port: 3001
  addition

It stays blank afterwards. Further, the node processes don't close after testing, and when they stack up they crash my computer. Here is my sample test:

var supertest = require('supertest');
var should = require('should');

process.env.NODE_ENV = 'test';
var app = require('../server');

describe('addition', function() {
    //... previous test
    it('should return 2 given the url /add/1/1', function(done) {
        request(app)
            .get('/add/1/1')
            .expect(200)
            .end(function(err, res) {
                should.not.exist(err);
                parseFloat(res.text).should.equal(2);
                done();
            });
    });
});

Here is my server.js:

// Module Dependencies
var express = require('express'),
    fs = require('fs'),
    mongoose = require('mongoose'),
    passport = require('passport'),
    MongoStore = require('connect-mongo')(express),
    flash = require('connect-flash'),
    env = process.env.NODE_ENV = process.env.NODE_ENV || 'development',
    ErrorHandler = require('./app/controllers/api/v1/error_handler_v1'),
    config = require('./config/config'),
    auth = require('./config/middlewares/authorization');

// Connect The Database
mongoose.connect(config.db);

// Catch Database Connection Error
mongoose.connection.on("error", function(err) {
    return console.log("****** Could not connect to mongo server!");
});

// Express Settings
app = express();
app.set('showStackError', true);

// Should be placed before express.static
app.use(express.compress({
    filter: function(req, res) {
        return (/json|text|javascript|css/).test(res.getHeader('Content-Type'));
    },
    level: 9
}));

// Setting the fav icon and static folder - tejkh
app.use(express.favicon());
app.use(express.static(config.root + '/public'));

// Don't use logger for test env
if (process.env.NODE_ENV !== 'test') app.use(express.logger('dev'));

// Set views path, template engine and default layout
app.set('views', config.root + '/app/views');
app.set('view engine', 'jade');

// Enable jsonp
app.enable('jsonp callback');

// Bootstrap models
var models_path = __dirname + '/app/models';
var walk = function(path) {
    fs.readdirSync(path).forEach(function(file) {
        var newPath = path + '/' + file;
        var stat = fs.statSync(newPath);
        if (stat.isFile()) {
            if (/(.*)\.(js|coffee)/.test(file)) {
                require(newPath);
            }
        } else if (stat.isDirectory()) {
            walk(newPath);
        }
    });
};
walk(models_path);

// Bootstrap Passport
require('./config/passport')(passport);

// App.configure
app.configure(function() {
    // CookieParser should be above session
    app.use(express.cookieParser());
    // BodyParser should be above methodOverride
    app.use(express.bodyParser());
    app.use(express.methodOverride());
    // Express/mongo session storage
    app.use(express.session({
        secret: 'secret',
        cookie: {
            maxAge: 864000000 // 10 Days in milliseconds
        },
        store: new MongoStore({
            url: config.db,
            auto_reconnect: true
        })
    }));
    // Connect flash for flash messages
    app.use(flash());
    // Use Passport
    app.use(passport.initialize());
    app.use(passport.session());
    // Route
    app.use(app.router);
    // Error Handler
    app.use(function(err, req, res, next) {
        if (err.uri) delete err.uri;
        if (err.status) {
            return res.status(err.status).jsonp(err);
        } else {
            res.status(500).jsonp({
                name: 'InternalError',
                message: 'Sorry, something went wrong on our end. We have been notified.',
                code: 'server_error',
                status: 500
            });
        }
    });
}); // app.configure

// Bootstrap routes
require('./config/routes')(app, passport, auth);

// Start App
app.listen(config.port);
console.log('Express ' + env + ' app started on port: ' + config.port);

// Expose app
exports = module.exports = app;

Now provide the response and nothing else.
The difference:

A java.util.Timer starts its own Thread to run the task on.
A javax.swing.Timer schedules tasks for execution on the EDT.

Now, Swing is single-threaded: you must access and mutate Swing components from the EDT only. Therefore, to make changes to the GUI every X seconds, use the Swing timer. To do background business logic, use the other timer, or better, a ScheduledExecutorService. Bear one very important thing in mind: while you spend time on the EDT, it cannot spend that time updating the GUI.
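As a rough sketch of how the two sit side by side (the frame, label, and class name here are just scaffolding for illustration, not from the original answer):

import javax.swing.*;
import java.util.concurrent.*;

public class TimerDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Timer demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            JLabel label = new JLabel("waiting...");
            frame.add(label);
            frame.pack();
            frame.setVisible(true);

            // GUI work: javax.swing.Timer fires its listener on the EDT,
            // so mutating the component here is safe.
            new javax.swing.Timer(1000,
                e -> label.setText("tick " + System.currentTimeMillis())).start();
        });

        // Business logic: a ScheduledExecutorService runs off the EDT,
        // keeping long-running work away from painting and event dispatch.
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        pool.scheduleAtFixedRate(() -> System.out.println("background work"),
                0, 1, TimeUnit.SECONDS);
    }
}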
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25025728', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1686562/']}
jdg_86929
stackexchange
llm_judgeable_groundtruth_similarity
3228636
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In how many ways can we distribute 20 black balls and 2 white balls (indistinguishable apart from color) in 5 numbered glasses such that the fifth one doesn't have more black balls than white balls? My strategy was to break this problem into 6 mutually exclusive scenarios according to how many balls we decide to put in the fifth glass: (i) no balls, (ii) 1W, (iii) 1W1B, (iv) 2W, (v) 2W1B, (vi) 2W2B. Therefore, there are $${23\choose20}{5\choose2} + {23\choose20}{4\choose1} + {22\choose19}{4\choose1} + {23\choose20} + {22\choose19} + {21\choose18} = 15 {23\choose20} + 5{22\choose19} + {21\choose18}$$ total distributions. However, I was told each of these scenarios is equivalent to counting anagrams of $m,n,k$ copies of B, W, G respectively. But when we count anagrams of BBBBBBBBBBBBBBBBBBBBWWGGG (20 B's, 2 W's, 3 G's), isn't the answer $\frac{25!}{20!\,2!\,3!}$? That's not the same as ${23\choose20}{5\choose2}$. I'm seriously confused. Now provide the response and nothing else.
Here is one way: use Pick's theorem on the triangle with vertices $(0,0)$, $(503,0)$, $(503,305)$. Or you can do it more algebraically by noting $$\left\lfloor\frac{305 r}{503}\right\rfloor + \left\lfloor\frac{305(503-r)}{503}\right\rfloor = 305-1=304$$ for all $1\leq r\leq 502$ (since $503$ is prime).
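Spelling out the pairing step: summing this identity over $r=1,\dots,502$ counts each value of $\left\lfloor\frac{305r}{503}\right\rfloor$ twice, so $$\sum_{r=1}^{502}\left\lfloor\frac{305r}{503}\right\rfloor=\frac{502\cdot 304}{2}=76304,$$ which counts the lattice points lying strictly below the segment from $(0,0)$ to $(503,305)$.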
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3228636', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/664833/']}
jdg_86930
stackexchange
llm_judgeable_groundtruth_similarity
952147
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $p,q$ be distinct primes. Then (1) $\dfrac{\mathbb{Z}}{p^2q}$ has exactly 3 distinct ideals. (2) $\dfrac{\mathbb{Z}}{p^2q}$ has exactly 3 distinct prime ideals. (3) $\dfrac{\mathbb{Z}}{p^2q}$ has exactly 2 distinct prime ideals. (4) $\dfrac{\mathbb{Z}}{p^2q}$ has a unique maximal ideal. Generally, the ideals of $\mathbb{Z}_n$ are of the form $\langle d\rangle$, where $d\mid n$. But how does one describe all the prime and maximal ideals? Now provide the response and nothing else.
First off, these are finite rings, and a prime ideal is maximal in a finite ring. (Proof: if $R/P$ is a domain, it's a finite domain, hence a field by Wedderburn's little theorem. Thus $P$ is maximal.) So it suffices to find the maximal ideals. The maximal ideals of $\Bbb Z/p^2q\Bbb Z$ are those maximal ideals of $\Bbb Z$ containing $p^2q\Bbb Z$. You probably already know the maximal ideals of $\Bbb Z$ look like $(b)$ for each prime $b\in \Bbb Z$. Now a maximal ideal is prime, and a prime containing $p^2q$ must contain $q$ or $p$. But $(q)$ and $(p)$ are maximal ideals. So there are exactly these two prime (and also maximal) ideals.
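Concretely, since the ideals of $\dfrac{\mathbb{Z}}{p^2q}$ correspond to the divisors of $p^2q$, the full list is $$\langle 1\rangle,\quad \langle p\rangle,\quad \langle p^2\rangle,\quad \langle q\rangle,\quad \langle pq\rangle,\quad \langle p^2q\rangle=\langle 0\rangle,$$ six ideals in all, of which exactly $\langle p\rangle$ and $\langle q\rangle$ are prime (equivalently maximal). That rules out options (1), (2), and (4) and confirms option (3).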
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/952147', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/174756/']}
jdg_86931
stackexchange
llm_judgeable_groundtruth_similarity
58379
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Bott periodicity implies that $\Omega(SU)\simeq G(\infty)$. Here, by $G(\infty)$, I mean the direct limit $\underset{m\to \infty}{\lim} G_m(\mathbb{C}^{2m})$ where $G_m(\mathbb{C}^{2m})\subset G_{m+1}(\mathbb{C}^{2m+2})$ by stabilization (or some similar nice model for $BU$). This is the classifying space for $U = \underset{m\to\infty}{\lim} U(m)$, and may be identified with $U/(U\times U)$, where the coordinates of each factor of the product subgroup alternate. From the path space construction, one knows that there is a Serre fibration $\Omega SU \to E \to SU$ with contractible total space, where $E$ is the space of paths in $SU$ starting at the identity. The fibers are each homotopy equivalent to $G(\infty)$. The question I have is whether this may be realized by a fiber bundle $G(\infty)\to E \to SU$ with contractible total space? Other quasi-fibrations were given by Aguilar and Prieto, and by Behrens. There are also fibrations of this sort constructed using symplectic reduction by Latour and by Giroux, where the fibers are homotopy equivalent to $G(\infty)$. What I'm asking for is whether there exists a contractible fiber bundle rather than just a fibration or quasi-fibration. Presumably such a bundle would arise from a map $SU \to BDiff(G(\infty))$. The isometry group of $G(\infty)$ contains $SU$, so one could ask, a fortiori, whether there is a map $SU \to BSU$, that is, a map $f: SU \to G(\infty)$, such that the induced fiber bundle is contractible? Added clarification: The induced bundle would come from the fiber product of the pull-back of the $U$-bundle $U\to E \to G(\infty)$ with the action of $U$ on $G(\infty)$: $f^{*}(E) \times_{U} G(\infty)$. Now provide the response and nothing else.
I'm not completely sure that I understand your notation, so this may be not what you want, but in case it is close enough, I'll have a go. The bit I'm assuming is that $G_m$ is the Grassmannian of $m$-planes in $\mathbb{C}^{2m}$. That seems fairly safe, but my brain is refusing to check the homotopy types of everything involved at this hour. I think that you can do this if you combine a construction of mine from The co-Riemannian Structure of Smooth Loop Spaces with some bits from Loop Groups by Pressley and Segal, and then check a few details about how stuff holds together in the limit. The bit that you need from Loop Groups is that the polynomial loop group, $\Omega_{\text{pol}} SU(n)$, acts on the finite restricted Grassmannian $Gr_0(H)$. Let me remind you what that space is: we start with the polarised Hilbert space, $H = L^2(S^1,\mathbb{C}^n)$, polarised as $H = H_+ \oplus H_-$, where $H_+$ consists of those functions with only positive Fourier coefficients and $H_-$ of those with strictly negative Fourier coefficients. From this, we define $$Gr_0(H) = \{W \subseteq H : \exists k : z^k H_+ \subseteq W \subseteq z^{-k} H_+\}.$$ This is the union of the $Gr(H_{-k,k})$, where $H_{-k,k} = z^{-k}H_+/z^kH_+$, so this (if I'm reading things aright) is the $G(\infty)$ of your question (Loop Groups, section 7.2). The next bit that we need is that $\Omega_{\text{pol}} SU(n)$ acts on this space. This is from Theorem 8.3.2 and Proposition 8.3.3 in Loop Groups. Putting these two together, if we have a principal $\Omega_{\text{pol}} SU(n)$-bundle over a space then we get a $G(\infty)$-fibre bundle over said space. So now comes the bit from my work. In Section 3.2.3 of The co-Riemannian Structure of Smooth Loop Spaces, I construct a principal $\Omega_{\text{pol}} SU(n)$-bundle over $SU(n)$, with the property that under the obvious inclusion $\Omega_{\text{pol}} SU(n) \to \Omega SU(n)$ it becomes the bundle coming from the usual path construction (so, although I don't need this in that paper, Bott periodicity implies that it is contractible). So now we have a fibre bundle with fibre $G(\infty)$ over $SU(n)$. The last bit that you need is to show that under the inclusion $SU(n) \to SU(n+1)$ these fibre bundles are compatible. The bit where this needs care is in the action of $\Omega_{\text{pol}} SU(n)$ on $Gr_0(H)$. But I think that if you include $L^2(S^1, \mathbb{C}^n)$ into $L^2(S^1, \mathbb{C}^{n+1})$ at the same time, then you should get a diagram that works. The resulting Grassmannian will be $\bigcup_n Gr_0(L^2(S^1,\mathbb{C}^n))$, but that will still be $G(\infty)$ (assuming that I've understood the question correctly). As I said, there are a few ifs and buts here: if I understood the question correctly, and if everything holds together in the limit (I'm pretty sure the second "if" is fine; as I'm less sure about the first, I haven't checked all the details).
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/58379', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1345/']}
jdg_86932
stackexchange
llm_judgeable_groundtruth_similarity
2716955
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a small problem. I am trying to align two divs side by side using CSS; however, I would like the center div to be positioned horizontally central in the page, which I achieved by using:

#page-wrap { margin: 0 auto; }

That worked fine. The second div I would like positioned to the left side of the central page wrap, but I can't manage to do this using floats, although I'm sure it is possible. I would like to push the red div up alongside the white div. Here is my current CSS concerning these two divs, sidebar being the red div and page-wrap being the white div:

#sidebar {
    width: 200px;
    height: 400px;
    background: red;
    float: left;
}
#page-wrap {
    margin: 0 auto;
    width: 600px;
    background: #ffffff;
    height: 400px;
}

Now provide the response and nothing else.
If you wrapped your divs like this:

<div id="main">
    <div id="sidebar"></div>
    <div id="page-wrap"></div>
</div>

you could use this styling:

#main {
    width: 800px;
    margin: 0 auto;
}
#sidebar {
    width: 200px;
    height: 400px;
    background: red;
    float: left;
}
#page-wrap {
    width: 600px;
    background: #ffffff;
    height: 400px;
    margin-left: 200px;
}

This is a slightly different look, though, so I'm not sure it's what you're after. This would center all 800px as a unit, not the 600px centered with the 200px on the left side. The basic approach is that your sidebar floats left, but inside the main div, and #page-wrap has the width of your sidebar as its left margin to move it over that far.

Update based on comments: for this off-centered look, you can do this:

<div id="page-wrap">
    <div id="sidebar"></div>
</div>

with this styling:

#sidebar {
    position: absolute;
    left: -200px;
    width: 200px;
    height: 400px;
    background: red;
}
#page-wrap {
    position: relative;
    width: 600px;
    background: #ffffff;
    height: 400px;
    margin: 0 auto;
}
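As an aside beyond the original answer: on modern browsers the first (centered-as-a-unit) variant can also be sketched with flexbox, which avoids floats altogether. This reuses the question's ids but is an alternative, not what the answer above relies on:

#main {
    display: flex;
    justify-content: center; /* centers the sidebar + page-wrap pair as a unit */
}
#sidebar {
    width: 200px;
    height: 400px;
    background: red;
}
#page-wrap {
    width: 600px;
    height: 400px;
    background: #ffffff;
}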
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2716955', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/50057/']}
jdg_86933
stackexchange
llm_judgeable_groundtruth_similarity
32666
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Hello. Recently, I've been reading some algebra, and sometimes I stumble upon the concept of something "being too big" to be a set. An example is given in ( http://www.dpmms.cam.ac.uk/~wtg10/tensors3.html ), where he writes, "Let B be the set of all bilinear maps defined on VxW. (That's the naughtiness - B is too big to be a set, but actually we will see in a moment that it is enough to look just at bilinear maps into R.)" (where V and W are vector spaces over R). This is too big to be a set, but why? My general question is this: when is something too big to be a set? What is it instead? Why have we put these requirements on the definition? Do we run into any problems if we let, say, the B defined above be a set? What kind of problems do we run into? Now provide the response and nothing else.
I just want to give a refinement of other answers so far, as well as a different point of view (namely, that of a person who knows little about set theory but who also encounters these kinds of issues). As others have mentioned, the root cause of the problem is that there are serious logical problems with considering "the set of all sets". Unless you would like to learn more about set theory, you needn't concern yourself with what these problems are (though Russell's paradox is fairly elementary and kind of fun). It is just one of those facts of life that non-set-theorists learn to live with and that set theorists learn to love. The non-existence of the set of all sets forces us to abandon other putative sets, such as "the set of all groups", "the set of all vector spaces", "the set of all manifolds", etc. For example, it is possible to equip any set with the structure of a group, so if we were able to build the set of all groups then we would necessarily have also built the set of all sets. This is almost always what people mean when they claim that a certain construction is "too big to be a set": the construction invokes a sloppy use of set-theoretic language that, taken literally, accidentally constructs the set of all sets as a byproduct. In your case, the existence of the set of all bilinear maps on $V \times W$ constructs as a byproduct the set of all vector spaces over $R$ (every bilinear map has to have a target), and if there were a set of all vector spaces over $R$ then there would be a set of all sets. This is probably not the last time you will encounter this sort of issue. In basically every case, however, there is a trick that swoops in and saves the day. Generally the idea is to observe that you don't actually need all of the flexibility that you tried to give yourself by constructing a non-set, and that it is enough to consider a simpler object (in your case, the set of all bilinear maps from $V \times W$ to $\mathbb{R}$) which is small enough to be a set but big enough to have the property that you want (in your case, you want it to function as a sort of universal bilinear pairing between $V$ and $W$). Ultimately I regard these sorts of concerns as analogous to the "end user agreements" that you have to certify you've read whenever you install a Microsoft product or sign up for a Gmail account. I'm sure all that fine print is important, but I feel like I would have to become a lawyer to understand it all. And just as in that case, you don't have to be a set theorist to understand how to resolve these sorts of issues most of the time; usually it just requires you to capture the flexibility present in what you are already working on. Just recently I was reading about an object which was given as the quotient by a certain equivalence relation of the set of all pairs $(T, H)$, where $H$ is a Hilbert space and $T$ is a certain kind of operator on $H$. The book pointed out that one cannot consider the set of all Hilbert spaces, but that the set-theoretic difficulties can be resolved by proving that every pair $(T', H')$ is equivalent to a pair $(T, H)$ on a fixed Hilbert space $H$. So the problem was avoided by exploiting some inherent flexibility in the equivalence relation under consideration.
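For completeness, here is the elementary argument alluded to above (standard Russell material, not specific to this question): if the collection of all sets were itself a set $S$, then separation would produce $$R=\{x\in S : x\notin x\},$$ and asking whether $R\in R$ yields a contradiction either way. This is exactly why "the set of all sets", and anything large enough to encode it, has to be excluded; such collections are instead treated as proper classes.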
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/32666', 'https://mathoverflow.net', 'https://mathoverflow.net/users/7607/']}
jdg_86934
stackexchange
llm_judgeable_groundtruth_similarity
37133437
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Postgres Amazon RDS and Amazon ECS for running my docker containers. The question is: what is the best practice for getting the username and password for the RDS database into the docker container running on ECS?

I see a few options:

1. Build the credentials into the docker image. I don't like this, since then everyone with access to the image can get the password.
2. Put the credentials in the userdata of the launch configuration used by the autoscaling group for ECS. With this approach, all docker images running on my ECS cluster have access to the credentials. I don't really like that either; that way, if a blackhat finds a security hole in any of my services (even services that do not use the database), he will be able to get the credentials for the database.
3. Put the credentials in an S3 bucket and limit access to that bucket with an IAM role that the ECS server has. Same drawbacks as putting them in the userdata.
4. Put the credentials in the Task Definition of ECS. I don't see any drawbacks here.

What are your thoughts on the best way to do this? Did I miss any options?

regards,
Tobias

Now provide the response and nothing else.
Building credentials into the container is never recommended: it makes them hard to distribute and change. Putting them into the ECS instances does not help your containers use them; the containers are isolated, and you'd end up with the credentials on all instances instead of just where the containers that need them run. Putting them into S3 means you'll have to write that functionality into your container, and it's another place to have configuration. Putting them into your task definition is the recommended way. You can use the environment portion for this. It's flexible. It's also how PaaS offerings like Heroku and Elastic Beanstalk pass DB connection strings to Ruby on Rails apps and other services. The last benefit is that it makes it easy to run your containers against different databases (like dev, test, and prod) without rebuilding the containers or building in weird functionality.
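For illustration, the relevant fragment of a task definition with environment variables might look like this (all names and values are placeholders, not taken from the question):

{
  "family": "my-service",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "myorg/app:latest",
      "environment": [
        { "name": "DB_HOST", "value": "mydb.xxxxxx.us-east-1.rds.amazonaws.com" },
        { "name": "DB_USER", "value": "app_user" },
        { "name": "DB_PASSWORD", "value": "example-only" }
      ]
    }
  ]
}

A different task definition (or a new revision of the same one) can then point the same image at the dev, test, or prod database.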
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37133437', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3656173/']}
jdg_86935
stackexchange
llm_judgeable_groundtruth_similarity
421220
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $\langle r_n\rangle$ be an enumeration of the set $\mathbb Q$ of rational numbers such that $r_n \neq r_m\,$ if $\,n\neq m.$ $$\text{Define}\; f: \mathbb R \to \mathbb R\;\text{by}\;\displaystyle f(x) = \sum_{r_n \leq x} 1/2^n,\;x\in \mathbb R.$$ Prove that $f$ is continuous at each point of $\mathbb Q^c$ and discontinuous at each point of $\mathbb Q$. I find this question very challenging and have no idea even of how to start the proof. Please suggest a proof or any hint. Now provide the response and nothing else.
This is a part of my answer here, but it should completely answer your question too. I use the notation $$\lim_{y\to x^{+}}f(y)=f(x^+), \qquad \lim_{y\to x^{-}}f(y)=f(x^-).$$ There is a very nice way of constructing, given a sequence $\{x_n\}$ of real numbers, a function which is continuous everywhere except at the elements of $\{x_n\}$ [that is, discontinuous on a countable set $A\subset\Bbb R$]. Let $\{c_n\}$ be any nonnegative summable sequence [that is, $\sum\limits_{n\geq 0} c_n$ exists finitely], and let $$s(x)=\sum_{x_n<x} c_n.$$ What we do is sum over the indices that satisfy the said inequality. Because of absolute convergence, order is irrelevant. The function is monotone increasing because the terms are nonnegative, and $s$ is discontinuous at each $x_n$ because $$s(x_n^+)-s(x_n^-)=c_n.$$ However, it is continuous at any other $x$: see xzyzyz's proof with the particular case $c_n=n^{-2}$. In fact, this function is lower continuous, in the sense that $\lim\limits_{y\to x^{-}}f(y)=f(x^-)=f(x)$ for any value of $x$. If we had used $x_n\leq x$, it would be upper continuous, but still discontinuous at the $x_n$. To see the function has the said jumps, note that for $h>0$, we have $$\begin{align} s(x_k^+)-s(x_k^-)&=\lim_{h\to 0^+}\big(s(x_k+h)-s(x_k-h)\big)\\&=\lim_{h\to 0^+}\Big(\sum_{x_n<x_k+h} c_n-\sum_{x_n<x_k-h}c_n\Big)\\&=\lim_{h\to 0^+}\sum_{x_k-h\leq x_n<x_k+h} c_n,\end{align}$$ and we can take $\delta$ so small that whenever $0<h<\delta$, for any given $x_m\neq x_k$, $x_m\notin [x_k-\delta,x_k+\delta)$, so the only term that will remain will be $c_k$, as desired.
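Applied to the question at hand: take $x_n=r_n$ and $c_n=2^{-n}$, and use the inequality $r_n\le x$ as in the problem statement. Then the OP's $f$ is exactly this $s$ in its upper-continuous variant: it jumps by $2^{-k}$ at each rational $r_k$ (hence is discontinuous at every point of $\mathbb Q$) and is continuous at every irrational, which is precisely what was to be proved.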
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/421220', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/81706/']}
jdg_86936
stackexchange
llm_judgeable_groundtruth_similarity
10810479
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using a BinaryReader on top of a NetworkStream to read data off of a network. This has worked really well for me, but I want to understand what's going on behind the scenes, so I took a look at the documentation for BinaryReader and found it to be extremely sparse. My question is this: what will BinaryReader.ReadBytes(bufferSize) do if bufferSize bytes are not present on the network stream when I call ReadBytes?

In my mind there are a few options:

1) Read any bytes that are present on the network stream and return only that many
2) Wait until bufferSize bytes are present on the stream, then read
3) Throw an exception

I assume option 2 is happening, since I've never received any exceptions and all my data is received whole, not in pieces. However, I would like to know for sure what is going on. If someone could enlighten me, I would be grateful. Now provide the response and nothing else.
I believe it actually goes for hidden option 4: read the data as it becomes available, looping round in the same way that you normally would do manually. It will only return a value less than the number of bytes you asked for if it reaches the end of the stream while reading. This is subtly different from your option 2, as it does drain the stream as data becomes available: it doesn't wait until it could read all of the data in one go. It's easy to show that it does return a lower number of bytes than you asked for if it reaches the end:

var ms = new MemoryStream(new byte[10]);
var readData = new BinaryReader(ms).ReadBytes(100);
Console.WriteLine(readData.Length); // 10

It's harder to prove the looping part without a custom stream which would explicitly require multiple Read calls to return all the data. The documentation isn't as clear as it might be, but the return value part is at least somewhat helpful:

"A byte array containing data read from the underlying stream. This might be less than the number of bytes requested *if the end of the stream is reached*."

Note the final part that I've highlighted, and compare that with Stream.Read:

"The total number of bytes read into the buffer. This can be less than the number of bytes requested *if that many bytes are not currently available*, or zero (0) if the end of the stream has been reached."

If you're expecting an exact amount of data and only that amount will be useful, I suggest you write a ReadExactly method which calls Read and throws EndOfStreamException if you need more data than the stream provided before it was closed.
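For illustration, a minimal version of that helper might look like the following (the name ReadExactly and the extension-method shape are just one option; only Stream.Read and EndOfStreamException are assumed from the framework):

using System.IO;

public static class StreamExtensions
{
    // Loops over Stream.Read until exactly 'count' bytes have arrived,
    // or throws EndOfStreamException if the stream ends first.
    public static byte[] ReadExactly(this Stream stream, int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException(
                    $"Expected {count} bytes but the stream ended after {offset}.");
            offset += read;
        }
        return buffer;
    }
}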
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10810479', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/476298/']}
jdg_86937
stackexchange
llm_judgeable_groundtruth_similarity
10781880
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I created an iframe dynamically and found that this iframe triggers its onload event twice.

var i = 0;
frameOnload = function() {
    console.log(i++);
};

var ifr = document.createElement("iframe");
ifr.src = "javascript:(function(){document.open();document.write('test');document.close();})();";
ifr.onload = frameOnload;
document.body.appendChild(ifr);

Why does i end up at 1? How can I prevent the iframe's onload from firing twice, other than by pointing the onload function to null inside itself? Now provide the response and nothing else.
I've also encountered the same problem, but found no answer anywhere, so I tested it myself. The iframe onload event will be triggered twice in WebKit browsers (Safari/Chrome) if you attach the onload handler BEFORE the iframe is appended to the body. So you can prevent the iframe onload firing twice by changing your code in the following way:

document.body.appendChild(ifr);
ifr.onload = frameOnload; // attach onload event after the iframe is added to the body

Then you will only get one onload event: the one fired when the document has really loaded.
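Applied to the question's snippet, only the order of the last two statements changes:

var i = 0;
var ifr = document.createElement("iframe");
ifr.src = "javascript:(function(){document.open();document.write('test');document.close();})();";

// Append first, attach the handler second: the extra WebKit-generated
// load has already fired by the time the handler exists.
document.body.appendChild(ifr);
ifr.onload = function() {
    console.log(i++); // logs a single 0
};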
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10781880', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/612428/']}
jdg_86938
stackexchange
llm_judgeable_groundtruth_similarity
328570
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We’re currently working with $ p $-series in my Calculus class, and I’ve fallen for the apparently common misconception that the infinite sum $ \displaystyle \sum_{n=1}^{\infty} \frac{1}{n^{p}} $, where $ p = 1 $, should converge, whereas in reality, it diverges. Can anyone demonstrate or link me to a proof or explanation of why this is so? Now provide the response and nothing else.
You are considering the series $$1+\frac12+\frac13+\frac14+\cdots$$ One direct way to see that this diverges, without any of the standard calculus tests, goes as follows. Start grouping terms together like this: $$1+\overbrace{\frac12}^{2^0\text{ terms}}+\overbrace{\frac13+\frac14}^{2^1\text{ terms}}+\overbrace{\frac15+\frac16+\frac17+\frac18}^{2^2\text{ terms}}+\cdots$$ This is greater than $$1+\overbrace{\frac12}^{2^0\text{ terms}}+\overbrace{\frac14+\frac14}^{2^1\text{ terms}}+\overbrace{\frac18+\frac18+\frac18+\frac18}^{2^2\text{ terms}}+\cdots$$ which is equal to $$1+\frac12+\frac12+\frac12+\cdots$$ which is clearly a divergent sum. To formalize this, you would show that the partial sums of the harmonic series satisfy $s_{2^n}\ge1+\frac{n}{2}$.
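To make that last step concrete, write $s_m=\sum_{k=1}^m \frac1k$ for the $m$-th partial sum. The base case is $s_{2^0}=1\ge 1+\frac02$, and the inductive step is $$s_{2^{n+1}}=s_{2^n}+\sum_{k=2^n+1}^{2^{n+1}}\frac1k\;\ge\; s_{2^n}+2^n\cdot\frac1{2^{n+1}}\;\ge\;1+\frac n2+\frac12=1+\frac{n+1}2.$$ Since $s_{2^n}\ge 1+\frac n2$ is unbounded, the harmonic series diverges.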
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/328570', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/17481/']}
jdg_86939
stackexchange
llm_judgeable_groundtruth_similarity
90442
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: We have two buildings that are across the street from each other. Each has its own separate network. We needed to connect them together using a fast connection because of the large number of files that are copied between the networks on a daily basis. We have a wireless bridge set up right now between the buildings that gives us about 20 Mbps (it's two Aironet 1240AGs in root/non-root configuration). Unfortunately there's a lot of RF interference in the area (lots of residential buildings) and the Aironets lose connectivity on a daily basis (which leads to a lot of disgruntled users). I'm thinking about upgrading to some Motorola PTP wireless bridges, but I wanted to ask: am I going about this the right way? We went with the wireless bridges because it was cheaper than getting a hard line between the two buildings, but I'm starting to regret that idea given how badly these APs are working. Any suggestions? Edit: The Aironets are using the 5 GHz band and 802.11a. The antennas are directional panel antennas. Now provide the response and nothing else.
Every "fixed wireless" installation I've seen has ended up being a pain (too slow, susceptible to interference and therefore unreliable, prone to failures of gear, lightning strikes, etc.). Assuming you're going to remain in these buildings for a long enough period of time to amortize the cost, get some fiber run. You won't regret it. Once fiber is in the ground / air and terminated, barring any physical damage (backhoe-induced fiber failure, gunshot, etc.), it'll work virtually forever. If you're lucky, you're under 500 meters or so and can use multimode fiber and multimode transceivers / GBICs. Multimode fiber and the termination electronics are significantly cheaper than single-mode.
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/90442', 'https://serverfault.com', 'https://serverfault.com/users/16797/']}
jdg_86940