source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10) |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 40634 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
there are two definitions for Reed-Solomon codes, as transmitting points and as BCH code ( http://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction ). On Wikipedia there is written that we can transform from one definition to second by using Fourier transform. So for example there is RS(7, 3) (length of codeword is 7, so codeword is maximally 7 - 1 = 6 degree polynomial and degree of message polynomial is maximally 3 - 1 = 2) code with generator polynomial $g(x)=x^4 + \alpha^3 x^3 + x^2 + \alpha x + \alpha^3 = (x-\alpha)(x-\alpha^2)(x-\alpha^3)(x-\alpha^4)$. Let message polynomial be for example $m(x) = \alpha x^2 + \alpha x + 1$. So following the definition that there is created BCH-style codeword (from definition): Systematic codeword is $c_{sys}(x)=x^4m(x) + (x^4m(x)) \bmod g(x)=\alpha x^6 + \alpha x^5 + x^4 + \alpha^5 x^3 + x^2 + \alpha^2 x + \alpha^5$. Non-systematic codeword is $c_{nonsys}(x)=m(x)g(x)= \alpha x^6 + \alpha^2 x^5 + \alpha^6 x^4 + \alpha^6 x^3 + \alpha^3 x^2 + \alpha^2 x + \alpha^3$. And the codeword created with second definition (transmitting points): $c_{tp}(x) = m(\alpha^5)x^6 + m(\alpha^4)x^5 + m(\alpha^3)x^4 + m(\alpha^2)x^3 + m(\alpha)x^2 + m(1)x + m(0) $$= \alpha x^6 + \alpha x^5 + \alpha^4 x^4 + \alpha^6 x^3 + \alpha^4 x^2 + x + 1$. So now I tried to check equivalence of these two codeword creation methods. As it is written on Wikipedia: $c_{tp_i} = c_{nonsys}(\alpha^i)$ and Galois Field Fourier Transform should be used. I tried to compute it for $0...\alpha^5$, $1...\alpha^6$, $\alpha^5...0$, $\alpha^6...0$ and the result is always incorrect. So the question is: when it is given codeword created with BCH-encoding scheme then how to transform it to equivalent codeword created with transmitting point scheme and vice versa?
Now provide the response and nothing else.
| Presumably your $\alpha$ is a primitive element satisfying the equation $\alpha^3=\alpha+1$ (as opposed to a primitive element satisfying the equation $\alpha^3=\alpha^2+1$, which is the other alternative) - at least that choice allowed me to reproduce your results for both the generator polynomial $g(x)$ as well as the polynomial $c_{nonsys}(x)=m(x)g(x)$. However, there seems to be a mistake in your formula for the second definition. With this type of RS-codes, you do not evaluate the message polynomial at zero, only at the elements of the multiplicative group. The Wikipedia article concurs, so you should calculate$$c_{tp}(x)=m(\alpha^6)x^6+m(\alpha^5)x^5+m(\alpha^4)x^4+m(\alpha^3)x^3+m(\alpha^2)x^2+m(\alpha)x+m(1),$$which is equal to (unless I made a mistake)$$c_{tp}(x)=\alpha^6x^6+\alpha x^5+\alpha x^4+\alpha^4 x^3+\alpha^6 x^2+\alpha^4x+1.$$ But I couldn't find the relation between the two encodings from the Wikipedia article at all. Are you sure that it was supposed to go like that? What it says (and also holds) is that the sequence of coefficients of the polynomial $c_{tp}(x)$ is the DFT of the sequence of the coefficients of the message polynomial $m(x)$. Therefore we should be able to recover the message as the inverse DFT of the coefficients of $c_{tp}(x)$. Let's try! $$c_{tp}(\alpha^{7-0})=c_{tp}(1)=\alpha^6+\alpha+\alpha+\alpha^4+\alpha^6+\alpha^4+1=1,$$$$c_{tp}(\alpha^{7-1})=c_{tp}(\alpha^6)=1+\alpha^3+\alpha^4+\alpha+\alpha^4+\alpha^3+1=\alpha,$$$$c_{tp}(\alpha^{7-2})=c_{tp}(\alpha^5)=\alpha+\alpha^5+1+\alpha^5+\alpha^2+\alpha^2+1=\alpha.$$It does look like we, indeed, recovered the coefficients of $m(x)$. The coefficient of degree $i$ term is gotten as $c_{tp}(\alpha^{7-i})$. Because $c_{tp}(x)$ is a valid codeword, we know in advance that $c_{tp}(\alpha^j)=0$, for $j=1,2,3,4$ which is such as well, because those values are the coefficients of $x^6, x^5, x^4, x^3$ respectively, and the message polynomial is quadratic. The equivalence proof in the Wikipedia article is about showing that the resulting set of vectors (= the RS-code) is the same for both methods of encoding messages. It is not attempting to say anything about transforming the codeword gotten by encoding a message $m(x)$ encoded in the spirit of the first definition to another codeword that would correspond to the same message $m(x)$, when encoded with the second method. I'm fairly sure that is all that the argument in Wikipedia is attempting to say. Mind you, there should be a way of achieving the transformation that you were looking for. Unfortunately I don't remember right now, how it goes. It would be based on the fact that the polynomial multiplication that gave us the polynomial $c_{nonsys}(x)$ is more or less like a convolution. So when we take the DFT into account that corresponds to pointwise multiplication. Converting this idea into a useful formula that we could test takes more time and space than I can invest on this at the moment, so I stop at this point for now. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/40634', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/11143/']} | jdg_77141 |
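To make the field arithmetic in the answer above easy to reproduce, here is a small Python sketch (illustrative only, not part of the original thread; the helper names gf8_mul, gf8_pow and poly_eval are made up). It assumes the same convention the answer infers, GF(2^3) built from the primitive polynomial x^3 + x + 1 (so alpha^3 = alpha + 1), and evaluates m(x) = alpha*x^2 + alpha*x + 1 at the powers of alpha:

```python
# Minimal GF(2^3) arithmetic, assuming the primitive polynomial x^3 + x + 1
# (alpha^3 = alpha + 1), the convention the answer above works with.
# Field elements are 3-bit integers; alpha is 0b010.

def gf8_mul(a, b):
    """Carry-less multiply of two GF(8) elements, reduced mod x^3 + x + 1."""
    r = 0
    for _ in range(3):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011          # subtract (XOR) x^3 + x + 1
    return r

def gf8_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf8_mul(r, a)
    return r

ALPHA = 0b010
m = [1, ALPHA, ALPHA]            # m(x) = alpha*x^2 + alpha*x + 1, low degree first

def poly_eval(coeffs, x):
    """Horner's rule over GF(8); XOR is field addition."""
    r = 0
    for c in reversed(coeffs):
        r = gf8_mul(r, x) ^ c
    return r

# Evaluation-style (c_tp) codeword: the coefficient of x^i is m(alpha^i), i = 0..6.
codeword = [poly_eval(m, gf8_pow(ALPHA, i)) for i in range(7)]
print(codeword)   # [1, 6, 5, 6, 2, 2, 5] -> 1, a^4, a^6, a^4, a, a, a^6 (low to high)
```

The printed list is exactly the corrected c_tp(x) given in the answer, and evaluating the codeword polynomial back at alpha^(7-i), as the answer does, recovers the coefficients of m(x).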
stackexchange | llm_judgeable_groundtruth_similarity | 23596530 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently moved to Java 8 to, hopefully, deal with local and zoned times more easily. However, I'm facing an, in my opinion, simple problem when parsing a simple date. public static ZonedDateTime convertirAFecha(String fecha) throws Exception { DateTimeFormatter formatter = DateTimeFormatter.ofPattern( ConstantesFechas.FORMATO_DIA).withZone( obtenerZonaHorariaServidor()); ZonedDateTime resultado = ZonedDateTime.parse(fecha, formatter); return resultado;} In my case: fecha is '15/06/2014' ConstantesFechas.FORMATO_DIA is 'dd/MM/yyyy' obtenerZonaHorariaServidor returns ZoneId.systemDefault() So, this is a simple example. However, the parse throws this exception: java.time.format.DateTimeParseException: Text '15/06/2014' could notbe parsed: Unable to obtain ZonedDateTime from TemporalAccessor:{},ISO resolved to 2014-06-15 of type java.time.format.Parsed Any tips? I've been trying different combinations of parsing and using TemporalAccesor, but without any luck so far.
Now provide the response and nothing else.
| This does not work because your input (and your Formatter) do not have time zone information. A simple way is to parse your date as a LocalDate first (without time or time zone information) then create a ZonedDateTime : public static ZonedDateTime convertirAFecha(String fecha) { DateTimeFormatter formatter = DateTimeFormatter.ofPattern("dd/MM/yyyy"); LocalDate date = LocalDate.parse(fecha, formatter); ZonedDateTime resultado = date.atStartOfDay(ZoneId.systemDefault()); return resultado;} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/23596530', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1081313/']} | jdg_77142 |
stackexchange | llm_judgeable_groundtruth_similarity | 5701312 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I made my table row clickable with this function $("#grid tbody tr").click(function () { var $checkbox = $(this).find(':checkbox'); $checkbox.attr('checked', !$checkbox.attr('checked'));}); and it works fine. However, when I try to click the checkbox itself, it doesn't work. What should I do to make both of them work?
Now provide the response and nothing else.
| Using a single event handler: $("#grid tbody tr").click(function(e) { if (e.target.type == "checkbox") { // stop the bubbling to prevent firing the row's click event e.stopPropagation(); } else { var $checkbox = $(this).find(':checkbox'); $checkbox.attr('checked', !$checkbox.attr('checked')); }}); http://jsfiddle.net/karim79/UX2Fv/ | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5701312', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/239588/']} | jdg_77143 |
stackexchange | llm_judgeable_groundtruth_similarity | 1232835 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following two matrices: $$A=\begin{pmatrix}1 & -2\\3 & 1\end{pmatrix}\text{ and }B=\begin{pmatrix}1 & 3 & 2\\-1 & 0 & 2\end{pmatrix}$$ So I have two matrices of different sizes. Multiple sources tell me that I can't do multiplication or addition with matrices of different sizes. So I'm a bit confused. Can I do it with these? How?
Now provide the response and nothing else.
| Consider you have two matrices $A$ and $B$ of orders $a_1\times a_2$ and $b_1\times b_2$ respectively. Matrix addition/subtraction on the two matrices will be defined iff $a_1=b_1$ and $a_2=b_2$ Matrix multiplication on them is defined iff $a_2=b_1$ for $AB$ to be defined and $b_2=a_1$ for $BA$ to be defined. $AB$ will be of order $a_1\times b_2$ and $BA$ will be of order $b_1\times a_2$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1232835', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/189621/']} | jdg_77144 |
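Applied to the matrices in the question, the rules above say: $AB$ is defined (a $2\times 2$ times a $2\times 3$ gives a $2\times 3$ result) while $BA$ and $A+B$ are not. A quick NumPy check (an illustration, not part of the original answer):

```python
import numpy as np

A = np.array([[1, -2],
              [3,  1]])              # 2 x 2
B = np.array([[ 1, 3, 2],
              [-1, 0, 2]])           # 2 x 3

print(A @ B)    # defined: inner dimensions match (2 and 2) -> 2 x 3 result
                # [[ 3  3 -2]
                #  [ 2  9  8]]

# B @ A and A + B both raise ValueError: the inner dimension 3 != 2 for the
# product, and the shapes (2, 3) vs (2, 2) differ for the sum.
```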
stackexchange | llm_judgeable_groundtruth_similarity | 673279 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm logging all execve's in a sort of honey pot box and as such I try to make sense of the commands. There's many many of these, all with 'bash -c' and some long alpha numerical value not enclosed in double quotes. How can I understand what I'm really looking at? type=EXECVE msg=audit(1425426965.480:57967): argc=3 a0="bash" a1="-c" a2=6C73202F6574632F696E69742E64207C2067726570202D4520275B302D39612D7A5D7B31307D27207C2061776B20277B7072696E742024317D27207C207861726773206B696C6C616C6Ctype=EXECVE msg=audit(1425510362.928:72792): argc=3 a0="bash" a1="-c" a2=6B696C6C616C6C20373737206874747064type=EXECVE msg=audit(1425510366.832:72800): argc=3 a0="bash" a1="-c" a2=726D202D66202F746D702F68747470642Atype=EXECVE msg=audit(1425510366.832:72801): argc=3 a0="rm" a1="-f" a2="/tmp/httpd*" The last part with httpd gives some indication but I would really like to understand exactly what's going in.
Now provide the response and nothing else.
| A bit late to the party, but in case it still helps you or others searching... Linux audit logs aren't really meant to be looked at directly in the raw log file--they're meant to be viewed and analyzed using tools like "ausearch" and "aureport". Many things (including even time/date stamps) are stored in hex format, but you can tell ausearch to interpret the hex stuff, as well as translating UIDs/GIDs to names, using the "-i" option. By default, ausearch uses the file "/var/log/audit/audit.log", but you can also view a specific file with the "-if filename" option. As an example, I cut-and-pasted your specific lines to a temp file, and got the following results: $ ausearch -if temp_audit.log -i----type=EXECVE msg=audit(03/03/2015 18:56:05.480:57967) : argc=3 a0=bash a1=-c a2=ls /etc/init.d | grep -E '[0-9a-z]{10}' | awk '{print $1}' | xargs killall----type=EXECVE msg=audit(03/04/2015 18:06:02.928:72792) : argc=3 a0=bash a1=-c a2=killall 777 httpd----type=EXECVE msg=audit(03/04/2015 18:06:06.832:72800) : argc=3 a0=bash a1=-c a2=rm -f /tmp/httpd*----type=EXECVE msg=audit(03/04/2015 18:06:06.832:72801) : argc=3 a0=rm a1=-f a2=/tmp/httpd* | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/673279', 'https://serverfault.com', 'https://serverfault.com/users/24836/']} | jdg_77145 |
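If you only need to decode a single field by hand rather than running ausearch, the aN values are just hex-encoded argument strings; a one-liner in Python (illustrative, not from the original answer) shows this for the a2 of the second record:

```python
# The aN fields in EXECVE records are hex-encoded argument strings.
encoded = "6B696C6C616C6C20373737206874747064"   # a2 from the second record above
print(bytes.fromhex(encoded).decode())           # -> killall 777 httpd
```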
stackexchange | llm_judgeable_groundtruth_similarity | 112492 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given ${\{C}\}\ $ the set of all the $Surreal\ numbers$, is it possible to define the integral:$$\int_a^b{dxf(x)}$$where$$a\in{\{C}\},b\in{\{C}\},x\in{\{C}\}$$Thanks
Now provide the response and nothing else.
| There is some discussion of this topic at the very end of the second edition (2000) On Numbers and Games by Conway. He describes work by himself, Simon Norton, and Martin Kruskal to define integration. According to the description, it looked good for a while, producing workable logarithm function in terms of the integral of $x^{-1}$, but then got stuck, and finally: For twenty years we believed that the definition was nevertheless probably “correct” in some natural sense, and that these difficulties arose merely because we did not understand exactly which genetic definitions were “legal” to use in it. Kruskal has now made some progress of a rather sad kind by showing that this belief was false. Namely, the definition integrates $e^t$ over the range $[0, \omega]$ to the wrong answer, $e^\omega$, rather than $e^\omega-1$, independent of whatever reasonable genetic definition we give for the exponential function. (Page 228.) There is other discussion of the details in the same section. I do not know whether any progress has been made since then. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/112492', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/24089/']} | jdg_77146 |
stackexchange | llm_judgeable_groundtruth_similarity | 70404049 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to migrate my project from Material-UI v4 to MUI v5 using this How to customize guide from the official website but can't wrap my head how can I rewrite this component with the new guidelines: ...const useStyles = makeStyles(() => ({ root: { backgroundColor: "rgba(255, 255, 255, 0.36)", display: "flex", height: "100%", position: "absolute", textAlign: "center", top: 0, width: "100%", justifyContent: "center", alignItems: "center", }, visible: { visibility: "visible", }, hidden: { visibility: "hidden", }, holder: { height: 60, width: 60, },}));interface ILoaderBlockProps { isLoading: boolean; className?: string; testId?: string;}type Props = ILoaderBlockProps;const LoaderBlock: FunctionComponent<Props> = (props: Props) => { const classes = useStyles({}); const { isLoading, className } = props; const rootClasses = clsx( classes.root, isLoading ? classes.visible : classes.hidden, className ); return ( <div className={rootClasses}> <div className={classes.holder}> <CircularProgress color={"primary"}/> </div> </div> );};... Since makeStyles is deprecated and I have additional difficulties in MUI v5, I'm trying to stop using it but can't wrap my head around new styling yet. How could these <div> s be rewritten in new, v5-way since makeStyles is no longer an option? It seems the only way to do it is to introduce SCSS/CSS file with styles for root and holder .
Now provide the response and nothing else.
| Full credit goes to ianhanniballake , who explained the solution to me in a comment. I'm going to show the working version of my code sample here. The big insight to me was: startDestination must not match a composable route in the sense of pattern matching but it must be exactly the same string . That means an argument can't be set via startDestination directly but has to be set via the argument's defaultValue . Here is the working sample: class MainActivity : ComponentActivity(){ override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContent { val navController = rememberNavController() NavHost( navController = navController, // 1st change: Set startDestination to the exact string of route startDestination = "dynamic/{$ARG_ID}", // NOT "dynamic/1", provide arguments via defaultValue ) { composable( route = "dynamic/{$ARG_ID}", // 2nd change: Set startDestination argument via defaultValue arguments = listOf(navArgument(ARG_ID) { type = NavType.StringType; defaultValue = "1" }), ) { val id = it.arguments?.getString(ARG_ID) Text("dynamic route, received argument: $id!") } } } } companion object { const val ARG_ID = "id" }} The approach equally works with the argument provided in the form of a query parameter. To be honest, I see this as a small limitation because the start route now dictates what has to be the defaultValue . I might want to set a different defaultValue or none at all. Yet, in most cases this should be the most elegant solution. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/70404049', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12552171/']} | jdg_77147 |
stackexchange | llm_judgeable_groundtruth_similarity | 43302727 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a program that reads through an array of nodes to see if one with a duplicate name is about to be created. Currently I'm getting the error C2228: left of '.compare' must have class/struct/union I think the fact that getName() returns a pointer to a char array could have something to do with this but I really don't know what to do. Node.h simply includes constructors for Node. Main.cpp #include "stdafx.h"#include "stdio.h"#include "string.h"#include "stdlib.h"#include "fstream"#include "iostream"#include "istream"#include "new"#include "Node.cpp"int _tmain(int argc, _TCHAR* argv[]){ //name of file to be opened char filename[]="resource.txt"; //temporary holding place of strings char temp[256]; //Node Array Node<char [256]> * resourceArray; resourceArray = new Node<char [256]>[1]; int resourceArraySize=1; Node<char [256]> * tempNode; std::ifstream reFile; //File is opened reFile.open(filename, std::ifstream::in); if(reFile.is_open()) { while(!reFile.eof()){ //get the first item on a line and store it in temp if(reFile.getline(temp,256,' ')){ // for(int i=0;i<=resourceArraySize;i++){ //if index is greater that array size, reallocate array if(i>=resourceArraySize){ tempNode = new Node<char [256]>[2*resourceArraySize]; std::copy(resourceArray,resourceArray+resourceArraySize,tempNode); delete resourceArray; resourceArray = tempNode; delete tempNode; resourceArraySize = resourceArraySize * 2; } if(resourceArray[i].getName.compare(temp)){ } } } if(reFile.getline(temp,256,'\n')){ //TODO:Create a node, if necessary and add to previous's nodes requires array. } } //File is closed reFile.close(); }else{ perror("Error opening file\n"); } system("pause"); return 0;} node.cpp #include "Node.h"//Default Constructortemplate<class T>Node<T>::Node(){}//This constructor sets the next pointer of a node and the data contained in that nodetemplate<class T>Node<T>::Node(const T& item,Node<T>* ptrnext){ this->data = item; this->next = ptrnext; useable=true; requiresSize=1;}//This method inserts a node after the current nodetemplate<class T>void Node<T>::insertAfter(Node<T> *p){ //Links the rest of list to the Node<T>* p p->next = this->next; //Links the previous node to this one this-> next = p;}//This method deletes the current node from the list then returns it.template<class T>Node<T> * Node<T>::deleteAfter(){ Node<T>* temp = next; if(next !=NULL){ next = next->next; } return temp;}template<class T>Node<T> * Node<T>::getNode(const T& item, Node<T>* nextptr = NULL){ Node<T>* newnode; //Local pointer for new node newNode = new Node<T>(item,nextptr); if (newNode == NULL){ printf("Error Allocating Memory"); exit(1); } return newNode;}template<class T>void Node<T>::setName(char input[]){ strncpy(name,input,sizeof(name));}template<class T>char * Node<T>::getName(){ return name;}template<class T>bool Node<T>::isUseable(){ return useable;}template<class T>void Node<T>::setUseable(bool use){ useable = use;}template<class T>void Node<T>::addRequirement(char item[256]){ int i=0,end=0; char * temp; while(end==0){ //if search index is larger than size of the array,reallocate the array if(i>= requiresSize){ temp = new char [requiresSize*2][256]; std::copy(requires,requires + requiresSize,temp); delete requires; requires = temp; delete temp; requiresSize = requiresSize *2; } //if the index at requires is not empty, check to see if it is the same as given item if(requires[i]!= NULL){ if(item.compare(requires[i])){ //if they are the same, break out of the loop, item is already included break; }else{ 
//otherwise, increase the index and check again (continue loop) i++; } }else{ //if the index is empty, add the item to the list and break out of loop requires[i]=item; end=1; break; } }}
Now provide the response and nothing else.
| Xmx merely reserves virtual address space. Xms actually allocates (commits) it but does not necessarily prefault it. How operating systems respond to allocations varies. Windows does allow you to reserve very large chunks of address space (Xmx) but will not allow overcommit (Xms). The limit is defined by swap + physical. The exception are large pages (which need to be enabled with a group policy setting), which will limit it by physical ram. Linux behavior is more complicated, it depends on the vm.overcommit_memory and related sysctls and various flags passed to the mmap syscall, which to some extent can be controlled by JVM configuration flags. The behavior can range from a) Xms can exceed total ram + swap to b) Xmx is capped by available physical ram. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43302727', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2009666/']} | jdg_77148 |
stackexchange | llm_judgeable_groundtruth_similarity | 9125782 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've just noticed that there's only a little difference in declaring a non-member discriminated union: type Color = | Red | Green | Blue and declaring an enum: type Color = | Red = 0 | Green = 1 | Blue = 2 What are their main differences in terms of performance, usage, etc? Do you have suggestions when to use what?
Now provide the response and nothing else.
| Enums are structs and are therefore allocated on the stack, while discriminated unions are reference types and so are heap allocated. So you would expect a DU to be slightly less performant than an enum, though in reality you'll probably never notice this difference. More importantly, a discriminated union can only ever be one of the types declared, whereas enums are really just an integer, so you could cast an integer that isn't a member of the enum to the enum type. This means that when pattern matching the compiler can assert that the pattern matching is complete when you've covered all the cases for a DU, but for an enum you must always put in a default catch-all-the-rest case, i.e. for an enum you'll always need pattern matching like: match enumColor with| Red -> 1 | Green -> 2| Blue -> 3| _ -> failwith "not an enum member" whereas the last case would not be necessary with a DU. One final point: as enums are natively supported in both C# and VB.NET, whereas DUs are not, enums are often a better choice when creating a public API for consumption by other languages. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9125782', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/634025/']} | jdg_77149 |
stackexchange | llm_judgeable_groundtruth_similarity | 41633 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Basic setup: regression model: $y = \text{constant} +\beta_1x_1+\beta_2x_2+\beta_3x_3+\beta_4x_4+\alpha C+\epsilon$where C is the vector of control variables. I'm interested in $\beta$ and expect $\beta_1$ and $\beta_2$ to be negative. However, there is multicollinearity problem in the model, the coefficient of correlation is given by, corr($x_1$,$x_2)=$ 0.9345, corr($x_1$,$x_3)=$ 0.1765, corr($x_2$,$x_3)=$ 0.3019. So $x_1$ and $x_2$ are highly correlated, and they should virtually provide the same information. I run three regressions: exclude $x_1$ variable; 2. exclude $x_2$ variable; 3. original model with both $x_1$ and $x_2$. Results: For regression 1 and 2, it provide the expected sign for $\beta_2$ and $\beta_1$ respectively and with similar magnitude. And $\beta_2$ and $\beta_1$ are significant in 10% level in both model after I do the HAC correction in standard error. $\beta_3$ is positive but not significant in both model. But for 3, $\beta_1$ has the expected sign, but the sign for $\beta_2$ is positive with the magnitude twice greater than $\beta_1$ in absolute value. And both $\beta_1$ and $\beta_2$ are insignificant. Moreover, the magnitude for $\beta_3$ reduces almost in half compared to regression 1 and 2. My question is: Why in 3, the sign of $\beta_2$ becomes positive and much greater than $\beta_1$ in absolute value? Is there any statistical reason that $\beta_2$ can flip sign and has large magnitude? Or is it because model 1 and 2 suffer omitted variable problem which inflated $\beta_3$ provided $x_2$ has positive effect on y? But then in regression model 1 and 2, both $\beta_2$ and $\beta_1$ should be positive instead of negative, since the total effect of $x_1$ and $x_2$ in regression model 3 is positive.
Now provide the response and nothing else.
| Think of this example: collect a dataset based on the coins in people's pockets, where the y variable/response is the total value of the coins, the variable x1 is the total number of coins, and x2 is the number of coins that are not quarters (or whatever the largest of the common coins is for the locale). It is easy to see that the regression with either x1 or x2 alone would give a positive slope, but when including both in the model the slope on x2 would go negative, since increasing the number of smaller coins without increasing the total number of coins would mean replacing large coins with smaller ones and reducing the overall value (y). The same thing can happen any time you have correlated x variables: the signs can easily be opposite between when a term is by itself and when it is in the presence of others. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/41633', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/16423/']} | jdg_77150 |
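The coin analogy is easy to reproduce numerically. Below is a short simulation sketch (an illustration of the answer's point, not part of the original post; the variable names and the 5-cent/25-cent split are assumptions): regressed alone, the non-quarter count gets a positive slope, but once the total coin count is also in the model the same slope turns negative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
quarters = rng.integers(0, 10, n)          # 25-cent coins
nickels  = rng.integers(0, 10, n)          # stand-in for "coins that are not quarters"
total    = quarters + nickels              # x1: total number of coins
nonq     = nickels                         # x2: number of non-quarter coins
value    = 25 * quarters + 5 * nickels     # y: total value in cents

def ols(y, *cols):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(ols(value, nonq))          # slope on x2 alone: positive (about +5)
print(ols(value, total))         # slope on x1 alone: positive
print(ols(value, total, nonq))   # jointly: ~ +25 on x1 and ~ -20 on x2 (sign flips)
```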
stackexchange | llm_judgeable_groundtruth_similarity | 14818530 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to Android development and am working on an accessibility research project for blind people (Jelly Bean API level 17 project). I have been experimenting with some gestures and the Two-Finger-Drag gesture has been really tough to implement. The following image captures what I actually require quite well. I want the blind user to drag two fingers across horizontally (anywhere on the screen) and he would obtain an Audio output of the text he/she typed in the EditText. Also according to the distance the two fingers travel while dragging, we output each word separately.(Example below) Example: If the user types "Today is a good day" and drags his finger (x value) by (say) 10 units to Left we output "good day" but if he drags it say 20 units to Left we output "a good day" , for 30 units to Left "is a good day" etc etc. I stumbled across which seems to detect two-finger touch: TouchEvent(MotionEvent event) Also this tutorial on detecting multiple touches seems promising, but I need it to work for touch and drag which I am not sure can be implemented like this. Any new suggestions to implement this or pointers to tutorials that can help would be great! Thanks in advance,Adit
Now provide the response and nothing else.
| Okay So thanks to Gabe here and numerous blogs on this, I have found a solution to my question! First I initialized my variables in the "Activity" class int GLOBAL_TOUCH_POSITION_X = 0;int GLOBAL_TOUCH_CURRENT_POSITION_X = 0; Next, Inside the onCreate(): //Two-Finger Drag Gesture detection RelativeLayout TextLoggerLayout = (RelativeLayout)findViewById(R.id.ActivityrView); TextLoggerLayout.setOnTouchListener( new RelativeLayout.OnTouchListener(){ @Override public boolean onTouch(View v, MotionEvent m) { handleTouch(m); return true; } }); Now define the function handleTouch(m) as follows , it outputs the current position of the "Two-finger-touch" along with the initial position of the touch: void handleTouch(MotionEvent m){ //Number of touches int pointerCount = m.getPointerCount(); if(pointerCount == 2){ int action = m.getActionMasked(); int actionIndex = m.getActionIndex(); String actionString; TextView tv = (TextView) findViewById(R.id.testDiffText); switch (action) { case MotionEvent.ACTION_DOWN: GLOBAL_TOUCH_POSITION_X = (int) m.getX(1); actionString = "DOWN"+" current "+GLOBAL_TOUCH_CURRENT_POSITION_X+" prev "+GLOBAL_TOUCH_POSITION_X; tv.setText(actionString); break; case MotionEvent.ACTION_UP: GLOBAL_TOUCH_CURRENT_POSITION_X = 0; actionString = "UP"+" current "+GLOBAL_TOUCH_CURRENT_POSITION_X+" prev "+GLOBAL_TOUCH_POSITION_X; tv.setText(actionString); break; case MotionEvent.ACTION_MOVE: GLOBAL_TOUCH_CURRENT_POSITION_X = (int) m.getX(1); int diff = GLOBAL_TOUCH_POSITION_X-GLOBAL_TOUCH_CURRENT_POSITION_X; actionString = "Diff "+diff+" current "+GLOBAL_TOUCH_CURRENT_POSITION_X+" prev "+GLOBAL_TOUCH_POSITION_X; tv.setText(actionString); break; case MotionEvent.ACTION_POINTER_DOWN: GLOBAL_TOUCH_POSITION_X = (int) m.getX(1); actionString = "DOWN"+" current "+GLOBAL_TOUCH_CURRENT_POSITION_X+" prev "+GLOBAL_TOUCH_POSITION_X; tv.setText(actionString); break; default: actionString = ""; } pointerCount = 0; } else { GLOBAL_TOUCH_POSITION_X = 0; GLOBAL_TOUCH_CURRENT_POSITION_X = 0; }} There you have it! The "Two-finger-drag" gesture finally implemented.Looks, like I can wrte a tiny blog post on it as well!! :) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14818530', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1441582/']} | jdg_77151 |
stackexchange | llm_judgeable_groundtruth_similarity | 16931691 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a text field from which I am calling a function on onChange event. I am throwing a confirm window here when the value in that textfield is changed. If cancel is clicked( in confirm window), the old value must get set back into the text field. If proceed or ok is clicked, the new value should be retained.I have been trying this for a long time but am unable to retain the old value. Eg:Before onchange, val in text field ='first';onChange event, val changed to 'second', confirm window opens, if ok selected text field should have second and if cancel is selected 'first' should be present in the text field. function onChangeOfValue(input){ //var oldValue = document.getElementById(input).value; document.getElementById(input).onchange = function(){ var newValue = this.value; alert("newVal is--->"+newValue); if(document.getElementById(input) != null && document.getElementById(input) != '' && !confirm("Do you want to continue?")){ // this.input=oldValue; return false; } }}
Now provide the response and nothing else.
| Note that form controls have a defaultValue property that is the default value (surprisingly). You can use this property to store the previous value of the input, or to return the value to the previous value. So putting together the suggestions you've been given, the following function is passed a reference to the element and asks the user if they want to keep the current value. If they answer yes, then the defaultValue of the input is set to the current value . If the users says no (i.e. cancel), then the value is reset to the defaultValue . Note that this approach will overwrite the original value of the input, so if you reset the form, the input's value will be reset to the current defaultValue . <script>function onChangeOfValue(element) { var oldValue = element.defaultValue; var newValue = element.value; if (window.confirm('do you really want to change the value to ' + newValue + '?')) { element.defaultValue = newValue; } else { element.value = element.defaultValue; } }</script><input onchange="onChangeOfValue(this);"> This approach will work for any number of inputs in the same page and form. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16931691', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/818557/']} | jdg_77152 |
stackexchange | llm_judgeable_groundtruth_similarity | 33235561 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the difference between the following expressions for initializing the properties in C# 6: 1. Auto-Property initialized from constructor public class Context1{ public Context1() { this.Items = new List<string>(); } public List<string> Items { get; private set; }} 2: Property initialized from a backing field public class Context2{ private readonly List<string> items; public Context2() { this.items = new List<string>(); } public List<string> Items { get { return this.items; } }} 3: Auto-Property new syntax in C# 6 public class Context3{ public List<string> Items { get; } = new List<string>();} 4: Auto-Property new syntax in C# 6 public class Context4{ public List<string> Items => new List<string>();}
Now provide the response and nothing else.
| Listing 3 is C# 6's equivalent of listing 2, where the backing field is provided under the hood. Listing 4: public List<string> Items => new List<string>(); is equivalent to: public List<string> Items { get { return new List<string>(); } } which as you can imagine returns a new empty list every time you access the property. The difference between listings 2/3 and 4 is further explored in this Q&A with an example. Listing 1 is just an auto property with a getter and a private setter. It's not a readonly property in that you can set it anywhere you can access any of the type's private members. A readonly property (that is, a getter-only property) can only be initialized either in a constructor or in the property declaration, much like a readonly field. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33235561', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/70439/']} | jdg_77153 |
stackexchange | llm_judgeable_groundtruth_similarity | 506 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does a bike/bicycle fall when its speed is very low or close to zero and is balanced when going with a high speed?
Now provide the response and nothing else.
| The surprising answer is that the stability of the modern bicycle has little or nothing to do with centrifugal force or gyroscopes or any of that. Look up "bicycle stability" on Google. Experiments show that the sloped angle of the front fork is very important, e.g. If the fork pointed backwards it is very difficult to stay upright at any speed. At higher speeds a very slight turn of the handles moves the bicycle under the center of gravity of the rider quicker, so that the dynamical stability is improved. As usual experiment corrects theory here. See the answer of Tristan at Does leaning (banking) help cause turning on a bicycle? as well for an even better answer See the comment of nibot below for a reference to an actual definitive article. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/506', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/248/']} | jdg_77154 |
stackexchange | llm_judgeable_groundtruth_similarity | 38999842 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on a chat application using angular 2. How can i send the finish chat command to the backend when the user closes the window? My component has a method that calls the backend service to end the chat in the following way endChat() { this.chatService.endChat(this.chatSessionInfo).subscribe( result => this.OnGetMessages(result), error => this.OnChatEndError(error) ); } How can i execute that code when closing the window? How can i detect the window close event? I tried with ngOnDestroy but for some reason the code is not being executed. In my Component.ts I have. import { Component, OnInit, AfterViewChecked, ElementRef, ViewChild,OnDestroy} from '@angular/core';export class ChatComponent implements OnInit, AfterViewChecked,OnDestroy { and finally ngOnDestroy() { this.endChat(); } Thanks!
Now provide the response and nothing else.
| Thanks everyone for the help. I was able to create a solution based on different proposal. First I used the beforeunload event in the component @HostListener('window:beforeunload', ['$event'])beforeunloadHandler(event) { this.endChat();} where endChat() { this.chatService.endChatSync(this.chatSessionInfo);} Then, the trick is to make the http call sync not async. Before, the endchat method at the chat service was endChat(chatSessionInfo: ChatSessionInfo) : Observable<ChatTranscription> { console.log("Ending chat.."); let body = JSON.stringify(chatSessionInfo); let headers = new Headers({ 'Content-Type': 'application/json' }); let options = new RequestOptions({ headers: headers }); return this.http.delete(this.apiUrl + "Chat?userId="+chatSessionInfo.UserId+"&secureKey="+chatSessionInfo.SecureKey,options) .map(this.extractData) .catch(this.handleError);} I was able to make it work with endChatSync(chatSessionInfo: ChatSessionInfo) { console.log("Ending chat.."); let xhr = new XMLHttpRequest() xhr.open("DELETE",this.apiUrl +"Chat?userId="+chatSessionInfo.UserId+"&secureKey="+chatSessionInfo.SecureKey,false); xhr.send();} Hope this helps! | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/38999842', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1270088/']} | jdg_77155 |
stackexchange | llm_judgeable_groundtruth_similarity | 11518035 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want a regular expression code which allows only numeric phone number (e.g 1234567890) as well as GB format number (e.g 123-456-7890). This expression must have to work for both conditions. Currently I am using below regular expression which only allows GB phone number /^(\()?\d{3}(\))?(-|\s)?\d{3}(-|\s)\d{4}$/ I need help to allow only numeric number also.
Now provide the response and nothing else.
| There are some UK phone number regular expressions here . The most highly rated one from that page is: ^(((\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3})|((\+44\s?\d{3}|\(?0\d{3}\)?)\s?\d{3}\s?\d{4})|((\+44\s?\d{2}|\(?0\d{2}\)?)\s?\d{4}\s?\d{4}))(\s?\#(\d{4}|\d{3}))?$ It is described as: Matches +447222555555 | +44 7222 555 555 | (0722) 5555555 #2222Non-Matches (+447222)555555 | +44(7222)555555 | (0722) 5555555 #22 Personally I would not put a strict regex on a phone number unless it is absolutely required to have the users actual phone number for identity verification or something involving payments. There are lots of variations of phone numbers, some users could even use Skype as their primary phone, and by putting a validation on you're just going to block/frustrate users. Also note you have got the UK format wrong, UK phone numbers should contain 11 digits and are normally in the format (01234) 123123 - we never use dashes and first number should always be a 0. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11518035', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1866078/']} | jdg_77156 |
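Python's re module accepts the quoted pattern unchanged, so the match / non-match list above is easy to verify yourself (a quick illustrative check, not part of the original answer):

```python
import re

uk = re.compile(
    r"^(((\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3})"
    r"|((\+44\s?\d{3}|\(?0\d{3}\)?)\s?\d{3}\s?\d{4})"
    r"|((\+44\s?\d{2}|\(?0\d{2}\)?)\s?\d{4}\s?\d{4}))"
    r"(\s?\#(\d{4}|\d{3}))?$"
)

for s in ["+447222555555", "+44 7222 555 555", "(0722) 5555555 #2222",   # should match
          "(+447222)555555", "+44(7222)555555", "(0722) 5555555 #22"]:   # should not
    print(f"{s!r:28} {bool(uk.match(s))}")
```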
stackexchange | llm_judgeable_groundtruth_similarity | 5674 |
Below is a question asked on the forum devops.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is not really a DevOps question but more of a System Automation question (which DevOps covers only a part of, by popular opinion , so I hope it's okay). I feel like some 5 USD IaaS VPS with Ansible could give me the peace of mind of technical stability for say 15 years on a debian:stable system, but the debian:stable release itself will get old and vulnerable with the years. Everything above the OS will generally not get old because by Ansible (given a correctly orchestrated playbook) will continuously update and upgrade it. Yet the fact that the OS will indeed get old can't get out of my mind. Transferring websites from one IaaS to another even once per 4 or 8 or 12 or 16 years is not something I want to do in principle. One could say, well, you can't eat the cake and keep it whole. I thought of using PaaS but PaaS solutions wouldn't necessarily bring me that peace of mind as well because my data might be "thrown" in some server environment with an old OS (could be old as 20 or 25 years) without me even knowing that and not all PaaS companies guarantee streamed OS release_upgrade. So, I desire some full system automation solution. Is there even such a thing which is usually stable and countable in earth in 2018? I ask this totally seriously, maybe I need to be less stressed about this, but maybe such technical solution (which isn't SaaS of course) exist and might suit my needs? I just want to build my websites (usually Drupal) on the most automated, most upgraded system (OS and above) possible, in the budget of 5-30 USDs. Note: I never provided hosting services and doesn't provide them now either - usually my websites are personal websites that only I myself manage.
Now provide the response and nothing else.
| TL;DR Pick software that focuses on long-term support. Use containers so that you can separate the runtime you need for your app from the security patching of the underlying infrastructure. Long Answer: I used to run some old PHP sites running popular content management systems. The plugins would fall out of support such that I could not upgrade the content management system else the plug-ins would break. So then I would stick on the latest major version things worked on and hope that no new security holes would come. I then ran monitoring software and manually fought off any scripts that got injected through security holes. Then the operating systems that support that old version of PHP I needed would fall out of support. I then couldn't easily move the hoster to take better use of cheaper deals or high availability. So I would find ports of the legacy PHP I needed onto the oldest version of an OS that hosters would support. Total nightmare. What this demonstrated to me was that some software isn't built to last more than three or four years. Millions of sites would have had the same issues as me but tens of millions more wouldn't have as they would have been shutdown else rebuilt from scratch on newer software. So the first lesson here is that if you think you need to build for the long term you need to figure out if your using software that cares about long-term support and has a published lifetime. The world has moved on and the PHP7 world is taking a new approach to having a predictable software release cycle and lifetime (still quite short I might add, but more predictable). You should not just assume nor expect any distro or any community to be with you for the long haul unless that is exactly what they said they are trying to do. And if that is what they are trying to do then they won't do other things that are in conflict with that such as run hot new framework features that are trending on stackoverflow. These days I happen to be helping a company that is running Node.js 8 and PHP 7.x on RHEL7 containers on kubernetes on AWS. They get long-term support of security updates with a published lifetime. I have automated the detection of new security patches to the PHP and Node.js containers. A weekly job checks the upstream container registry and if there is a new security patched version it is announce to a team chat room along with the command to patch the containers in the test environment. A commercial subscription to those patches is included in my managed kubernetes hosting costs. Yet as its all opensource I could fall back to using Centos7 if I wanted to cut some costs. As I am running containers I don't care whatsoever what patches are being applied to the IaaS nor the version of kubernetes I am hosted on. I am on openshift.com and they have upgraded to 3.11 through security fixes without telling me. They just patch the worker VMs and bounce them and kubernetes starts my containers on other VMs. I am insulated from all the security patching of the cluster I am on because I am running containers that have inside them the minimal operating environment needed to run my app. That only changes when I choose to upgrade to a new container and with long-term security patch support; I don't need to do that for years. I understand that the Kubernetes hosts that run my containers themselves run a very stripped down distro to maximise their security and customer density. It doesn't matter to me. 
The idea of an IaaS stack of Linux that goes out of support seems a totally alien concept to me now even through I lived many years with that problem. It also means that I am free to move to any cloud or hosting provider that can run my containers. It doesn't have to be kubernetes it could be Docker Swarm or whatever. All that I am betting on is that this better way of working with containers will be supported in the long term. I think that's a good bet. A container image is only a json file naming a load of tar files that are the layers in the image. Yes; its just tar files and json. As long as the container host can load my files from a tar and attach a TCP/IP stack my stuff will run. I am pretty confident that this approach will have a very, very long shelf life. I don't think that anyone that is trying to run apps should start out on IaaS today. PaaS is one solution but tends to have very strong vendor lock-in. CaaS is the new sweet spot. Choice. Stability. Control. Portability. If it runs under docker on your laptop just push it to the container orchestrator you are renting space on. My containers only have one thing in them. The hosts that they run on only does secure container hosting. | {} | {'log_upvote_score': 4, 'links': ['https://devops.stackexchange.com/questions/5674', 'https://devops.stackexchange.com', 'https://devops.stackexchange.com/users/-1/']} | jdg_77157 |
stackexchange | llm_judgeable_groundtruth_similarity | 34311964 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I use this pattern: [A-Za-z0-9]{10} to also match to other text sizes like: 12 and 25? I tried to make it like: [A-Za-z0-9]{10|12|25} ([A-Za-z0-9]){10}|{12}|{25} But it didn't succeed.
Now provide the response and nothing else.
| You need to use alternations if you need to match specific sized only: ^(?:[A-Za-z0-9]{10}|[A-Za-z0-9]{12}|[A-Za-z0-9]{25})$ If you want to match symbols within a range, say, from 10 to 25, you can use ^[A-Za-z0-9]{10,25}$ Also, [A-Za-z0-9] can be replaced with \p{Alnum} (see Java regex reference ). \p{Alnum} An alphanumeric character: [\p{Alpha}\p{Digit}] Java code demo with String#matches (that does not require anchors): System.out.println("1234567890".matches("[A-Za-z0-9]{10}|[A-Za-z0-9]{12}|[A-Za-z0-9]{25}")); // => true, 10 Alnum charactersSystem.out.println("12345678901".matches("\\p{Alnum}{10}|\\p{Alnum}{12}|\\p{Alnum}{25}"));// => false, 11 Alnum charactersSystem.out.println("123456789012".matches("\\p{Alnum}{10}|\\p{Alnum}{12}|\\p{Alnum}{25}"));// => true, 12 Alnum characters | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34311964', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5049420/']} | jdg_77158 |
stackexchange | llm_judgeable_groundtruth_similarity | 30154489 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is my test ajax in laravel 5 (refer below) $("#try").click(function(){ var url = $(this).attr("data-link"); $.ajax({ url: "test", type:"POST", data: { testdata : 'testdatacontent' }, success:function(data){ alert(data); },error:function(){ alert("error!!!!"); } }); //end of ajax}); and the trigger link <a href="#" id="try" data-link="{{ url('/test') }}">Try</a> and my route Route::post('test', function(){ return 'Success! ajax in laravel 5';}); but it gives me an error when I run the console in google chrome and it doesn't return the expected response "return 'Success! ajax in laravel 5';" POST http://juliver.laravel.com/test 500 (Internal Server Error) whats wrong/problem to my code? anything I'm missing?
Now provide the response and nothing else.
| While this question exists for a while, but no accepted answer is given I'd like to point you towards the solution. Because you're sending with ajax, and presumably still use the CSRF middleware, you need to provide an additional header with your request. Add a meta-tag to each page (or master layout): <meta name="csrf-token" content="{{ csrf_token() }}"> And add to your javascript-file (or section within the page): $.ajaxSetup({ headers: { 'X-CSRF-TOKEN': $('meta[name="csrf-token"]').attr('content') }}); See https://laravel.com/docs/master/csrf#csrf-x-csrf-token for more details. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/30154489', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1292042/']} | jdg_77159 |
stackexchange | llm_judgeable_groundtruth_similarity | 62457 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a PIC12F675 that I need to program. I'm using a Pickit 2 and the Pickit 2 software. Supposedly I should be able to program it using this circuit , but the Pickit 2 software is not detecting the chip at all. What's going wrong? EDIT: I've added an image of my circuit breadboarded out. And here is where you can find the datasheet for the PIC12F675. Skip to page 4 for the pinout. EDIT2: I just actually looked at the chips (dumb, but I kind of assumed mouser would send me what I ordered), and I noticed they're actually PIC12C671's. How much of a difference would this make? Final EDIT: Wow! Thanks to everyone that helped me with this! I was not expecting this kind of response. It ended up being a small stupid problem just like I thought it would be. It was my fault for not double checking everything when I received the package from Mouser.
Now provide the response and nothing else.
| A possible problem is that some PICs need a power-cycle to get into programming mode. The 12F675 in particular needs this when the configuration word in your application configures the MCLR pin as GPIO. When the pickit2 powers your target chip it will provide the power cycle. With the independent +5V you supply it can't do this. After covering the most probable problems you might go for the improbable ones... Don't laugh at me, I am drawing on my own experience! Could you have two pickit2's connected? (Yes, I once did this and was flabbergasted that the one the pickit2 application selected was not the one the PIC was connected to. I have also once wasted an hour when I had two target PICs wired in parallel. That gives very strange results when they both put their chip ID out to the programmer.) Is the pickit2 you are using connected to the PC you are using? (I do PIC classes and the students can get quite creative in whose laptop connects to whose PIC.) Are both the pickit2 and the chip the right side round? The pickit2 picture is as seen from above (where the button and LEDs are). Is the chip really a 12F675? (I once tried for a full day to get the A/D converter of a 12F629 working - for the non-PIC-gurus: that's the cheaper brother of the 12F675, it is cheaper because it does not have an A/D converter.) Are the wires exactly as you show them, not mixed up? And are they not by any accident broken? If you are using a solderless breadboard: does it still make all the contacts? Is the firmware in the pickit2 a version that matches your PC side? Maybe re-load the firmware to be 100% sure. In response to your EDIT2: ROFL, check my bullet 4. The PIC12C671 is a totally different chip, and the C means it is an EPROM (one-time-programmable) chip, which is not supported by pickit2. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/62457', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/20747/']} | jdg_77160 |
stackexchange | llm_judgeable_groundtruth_similarity | 4077487 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've used the following code to repeat the image in the background but it's not working. Can anyone help? Layout.xml <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="@drawable/grass_bg" > grass_bg.xml in drawable looks like this <?xml version="1.0" encoding="utf-8"?> <bitmap xmlns:android="http://schemas.android.com/apk/res/android" android:src="@drawable/grass_small" android:tileMode="repeat"/> It's showing the same small image. It's not repeating...
Now provide the response and nothing else.
| Bitmaps (and their states) get reused a lot, and I've found it's easy to lose the tileMode if a BitmapDrawable is used in more than one place. The following code fixes the problem for me: public static void fixBackgroundRepeat(View view) { Drawable bg = view.getBackground(); if(bg != null) { if(bg instanceof BitmapDrawable) { BitmapDrawable bmp = (BitmapDrawable) bg; bmp.mutate(); // make sure that we aren't sharing state anymore bmp.setTileModeXY(TileMode.REPEAT, TileMode.REPEAT); } } } | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4077487', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/421309/']} | jdg_77161 |
stackexchange | llm_judgeable_groundtruth_similarity | 38201560 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to change the height and width of the Navigation Icon (in the black circle in the screenshot) in the Android Toolbar programmatically. Is there any way to do so? This is not the toolbar logo. I can't update the toolbar theme in Styles xml because I want it to be dynamic. Please help.
Now provide the response and nothing else.
| I did this way : toolbar= (Toolbar) findViewById(R.id.toolbar);setSupportActionBar(toolbar);if(getSupportActionBar()!=null){ Drawable drawable= getResources().getDrawable(R.drawable.ic_sync_white_36dp); Bitmap bitmap = ((BitmapDrawable) drawable).getBitmap(); Drawable newdrawable = new BitmapDrawable(getResources(), Bitmap.createScaledBitmap(bitmap, 250, 250, true)); newdrawable.setColorFilter(Color.BLUE, PorterDuff.Mode.SRC_ATOP); getSupportActionBar().setDisplayHomeAsUpEnabled(true); getSupportActionBar().setHomeAsUpIndicator(newdrawable);} You can use calculate-dp-from-pixels-in-android-programmatically and converting-pixels-to-dp when creating scaled bitmap . My Toolbar : <android.support.v7.widget.Toolbar android:id="@+id/toolbar" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_gravity="center_vertical" android:minHeight="?attr/actionBarSize" android:background="?attr/colorPrimary" app:popupTheme="@style/AppTheme.PopupOverlay" /> And screenshot : Also another way is using custom layout in Toolbar like that: <android.support.v7.widget.Toolbar android:id="@+id/toolbar" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_gravity="center_vertical" android:minHeight="?attr/actionBarSize" android:background="?attr/colorPrimary" app:popupTheme="@style/AppTheme.PopupOverlay" > <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="horizontal" android:layout_width="match_parent" android:background="@color/colorPrimary" android:id="@+id/custom_toolbar_layout" android:layout_height="wrap_content"> <ImageView android:layout_width="100dp" android:layout_height="100dp" android:tint="@android:color/holo_purple" android:id="@+id/imageView" android:gravity="center_vertical" android:src="@drawable/ic_clear_black_36dp" /> <TextView android:layout_width="wrap_content" android:layout_height="match_parent" android:text="My Custom Toolbar" android:gravity="center_vertical" android:textSize="20sp" android:textColor="@android:color/white" android:layout_gravity="center_horizontal" /> </LinearLayout></android.support.v7.widget.Toolbar> If you want to access any View in Toolbar see @Aleksandar Stefanović's answer. And you can take a look at Material design guidelines for creating custom Toolbar Then screenshot : Icons from : Material Icons | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38201560', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6270950/']} | jdg_77162 |
stackexchange | llm_judgeable_groundtruth_similarity | 1066006 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Evaluate $$\displaystyle\int_0^4 \frac{\ln x}{\sqrt{4x-x^2}} \,\mathrm dx$$ How do I evaluate this integral? I know that the result is $0$, but I don't know how to obtain this. Wolfram|Alpha yields a non-elementary antiderivative for the indefinite integral, so I don't think I can directly integrate and then plug in the upper/lower limits.
Now provide the response and nothing else.
| First let $t = x-2$ this way $4x-x^2 = 4 - (x-2)^2 = 4-t^2$. Substitute, $$ \int_{-2}^2 \frac{\log(t+2)}{\sqrt{4-t^2}} ~ dt $$Now let, $\theta = \sin^{-1}\tfrac{t}{2}$ so that $2\sin \theta = t$ and hence, after substitute, $$ \int_{-\pi/2}^{\pi/2} \frac{\log [2(1+\sin \theta)]}{2\cos \theta} 2\cos \theta ~ d\theta = \pi \log 2 + \int_{-\pi/2}^{\pi/2} \log(1+\sin \theta)~d\theta $$To solve this integral, replace $\theta$ by $-\theta$, $$ I = \int_{-\pi/2}^{\pi/2} \log(1+\sin \theta) ~d\theta= \int_{-\pi/2}^{\pi/2} \log(1-\sin \theta)~d\theta$$Now, $$ I + I = \int_{-\pi/2}^{\pi/2} \log(1-\sin^2 \theta) ~ d\theta = 4\int_{0}^{\pi/2} \log (\cos \theta) ~ d\theta$$The last integral is a well-known integral that computes to $-\frac{\pi}{2}\log 2$. Your final answer is, $\pi \log 2 -\pi\log 2$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1066006', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/153974/']} | jdg_77163 |
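For completeness (this is an addition, not part of the original answer): the well-known integral quoted at the end can be evaluated with the same symmetry trick. Writing $I=\int_{0}^{\pi/2} \log (\cos \theta) ~ d\theta=\int_{0}^{\pi/2} \log (\sin \theta) ~ d\theta$ (substitute $\theta\mapsto \tfrac{\pi}{2}-\theta$), we get $$ 2I=\int_{0}^{\pi/2}\log(\sin\theta\cos\theta)~d\theta=\int_{0}^{\pi/2}\log\left(\tfrac{1}{2}\sin 2\theta\right)d\theta=-\frac{\pi}{2}\log 2+\frac{1}{2}\int_{0}^{\pi}\log(\sin u)~du=-\frac{\pi}{2}\log 2+I, $$ so $I=-\frac{\pi}{2}\log 2$, which is the value used in the final step above.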
stackexchange | llm_judgeable_groundtruth_similarity | 10012219 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can someone help me with a depth of field implementation in a ray tracer, please? I am using a simple pin-hole camera model as shown below. I need to know how we can generate a DOF effect using the pin-hole camera model (image is taken from wikipedia). My basic ray tracer is working fine. I have the eye at (0,0,0,1) with direction (dx, dy, 1.0f, 0.0f) where float dx = (x * (1.0 / Imgwidth) ) - 0.5; float dy = (y * (1.0 / Imgheight) ) - 0.5; Now everywhere I read they are talking about sampling a lens which should be placed between the image plane and the scene, for example as shown below (image taken from wikipedia): How can I introduce a lens in front of an image plane if the rays are coming from one single point position (camera or eye)? If someone can help then it will be great! Thank you.
Now provide the response and nothing else.
| There are 3 ways to do this: (1) The physically-correct DOF would require multiple renders of the scene. Cameras have depth-of-field because they are not really a pinhole model. Instead, they have an aperture that allows light in within a certain diameter. This is equivalent to taking a pinhole camera and taking a lot of pictures within that aperture and averaging them. So basically, you need to rotate your camera slightly multiple times around your focus point, render the entire scene, accumulate the output colour in a buffer and divide all values by the number of renders. (2) A simple post-processing effect - render not only the scene colour, but also its depth, then use this depth to control blur effect strength. Note that this technique requires some tricks to get seamless transitions between objects at different blur levels. (3) A more complex post-processing effect - create a depth buffer as previously and then use it to render an aperture-shaped particle for every pixel of the original scene. Use the depth to control the particle size just as you would use it for blur effect strength. (1) gives best results, but is the most expensive technique; (2) is cheapest, (3) is quite tricky but provides a good cost/effect balance. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10012219', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1082701/']} | jdg_77164 |
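A minimal, self-contained Java sketch of technique (1) may make the idea concrete. Everything here (traceScene, APERTURE, FOCAL_DISTANCE, SAMPLES) is an illustrative assumption rather than part of the question's ray tracer; the stub traceScene stands in for the real renderer.

import java.util.Random;

// Thin-lens sampling for technique (1): average many slightly offset camera
// rays that all pass through the same point on the focal plane.
public class DepthOfFieldSketch {
    static final Random RNG = new Random(42);
    static final double APERTURE = 0.05;       // lens radius (assumed)
    static final double FOCAL_DISTANCE = 5.0;  // distance to the plane in sharp focus (assumed)
    static final int SAMPLES = 32;             // number of renders averaged per pixel

    // Stand-in for the real renderer: returns a grey level for one ray.
    static double traceScene(double ox, double oy, double oz,
                             double dx, double dy, double dz) {
        return 0.5; // placeholder; a real tracer would intersect the scene here
    }

    // One pixel with depth of field: jitter the ray origin over the aperture,
    // keep the point on the focal plane fixed, and average the results.
    static double shadePixel(double dx, double dy) {
        // Point on the focal plane hit by the original pinhole ray from (0,0,0).
        double fx = dx * FOCAL_DISTANCE, fy = dy * FOCAL_DISTANCE, fz = FOCAL_DISTANCE;
        double sum = 0.0;
        for (int i = 0; i < SAMPLES; i++) {
            // Random point on the lens disk (rejection sampling).
            double lx, ly;
            do {
                lx = (RNG.nextDouble() * 2 - 1) * APERTURE;
                ly = (RNG.nextDouble() * 2 - 1) * APERTURE;
            } while (lx * lx + ly * ly > APERTURE * APERTURE);
            // New direction from the jittered origin towards the fixed focal point.
            double ndx = fx - lx, ndy = fy - ly, ndz = fz;
            double len = Math.sqrt(ndx * ndx + ndy * ndy + ndz * ndz);
            sum += traceScene(lx, ly, 0.0, ndx / len, ndy / len, ndz / len);
        }
        return sum / SAMPLES; // divide by the number of renders, as described above
    }

    public static void main(String[] args) {
        System.out.println(shadePixel(0.1, -0.2));
    }
}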
stackexchange | llm_judgeable_groundtruth_similarity | 173021 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is the ring $K[a,b,c,d]/(ad-bc-1)$ a unique factorization domain? I think this is a regular ring, so all of its localizations are UFDs by the Auslander–Buchsbaum theorem. However, I know there are Dedekind domains (which are regular; every local ring is a PID, so definitely a UFD) that are not UFDs, so being a regular ring need not imply that the ring is a UFD. With the non-UFD Dedekind domains (at least the number rings), I can usually spot a non-unique factorization, but I don't see any here in this higher-dimensional example.
Now provide the response and nothing else.
| CW version of Justin Campbell and Pete Clark's answer: More generally, the coordinate ring of any simply connected, semisimple, linear algebraic group is a UFD. This is proved as the Corollary on page 296 (p. 303 in translation) of Popov (1974). The proof of the corollary from the proposition is explained in §11.2 of Pete Clark's Factorization notes for those of us for whom the proof was not obvious. This requires knowing the coordinate ring of a linear algebraic group is regular. Georges Elencwajg's answer appears very related to §9.4 of Pete's notes, where indeed the behavior of very similar rings requires characteristic not 2 and algebraic closure to apply. For some reason, this particular ring is always a UFD, regardless of field. I am still interested in a solution I can actually understand (so why would the Picard group of SL2 vanish?). The general proof is available in Popov (1974) to those who can read it: Popov, V. L."Picard groups of homogeneous spaces of linear algebraic groups and one-dimensional homogeneous vector fiberings."Izv. Akad. Nauk SSSR Ser. Mat. 38 (1974), 294–322.MR 357399 URL: http://mi.mathnet.ru/eng/izv/v38/i2/p294 (original) DOI: 10.1070/IM1974v008n02ABEH002107 (translation) | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/173021', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/583/']} | jdg_77165 |
stackexchange | llm_judgeable_groundtruth_similarity | 2158183 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to profile a specific page of my ASP.NET site to optimize memory usage, but the nature of .NET as a garbage-collected language is making it tough to get a true picture of how memory is used and released in the program. Is there a perfmon counter or other profiling method that will allow me to see not only how much memory is allocated, but also how much has been released by the program and is just waiting for garbage collection?
Now provide the response and nothing else.
| I am using Perf4j to monitor webservice endpoint performance and also internal service and DAO class performance. I mainly just log the raw performance statistics to a daily rolling file. Then I use the jar on the command line to analyse the data with different timeslices as I please. I often use the -g command-line option to output an HTML file which I can then open up and view the data visually, which is really useful. I enjoy using Spring AOP with the @Profiled annotation. It makes timing very clean. I had a few doubters regarding perf4j degrading performance, and I could easily offer that I could turn the logging off by removing the TimingAspect from my Spring applicationContext.xml file. <!-- just remove and all uses of @Profiled do nothing --><bean id="timingAspect" class="org.perf4j.log4j.aop.TimingAspect"/> Be careful about the mean value and note the max value. While doing performance tuning, we had a method call having wild values with a large standard deviation. Most of the values were around the median; however, there was a sprinkling of calls which were 100x more than the average. More of a user error than anything, but watch out. I use the AsyncCoalescingStatisticsAppender with a timeslice of 15000, but hardly read the log. Because I have the raw performance logs, I can chop and change that by running perf4j on the command line. I tried out the JMX integration and it works as promised in the documentation. But I have no real use at the moment for it. I have plans to expose the data using SNMP and if it's useful I'll contribute. In all, I recommend Perf4j. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2158183', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/30018/']} | jdg_77166 |
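To illustrate the @Profiled usage mentioned above, here is a minimal sketch of a Spring-managed service method. The class, method and tag names are made up, and it assumes the annotation lives at org.perf4j.aop.Profiled and that the TimingAspect bean plus AspectJ auto-proxying are configured as in the answer.

import org.perf4j.aop.Profiled;

// Hypothetical service class; with the TimingAspect registered, Spring AOP
// wraps this method in a stop watch and writes a timing entry to the perf4j log.
public class CustomerService {

    // The tag is the name the timings are grouped under when analysing the logs.
    @Profiled(tag = "CustomerService.findCustomer")
    public Customer findCustomer(long id) {
        // ... real lookup logic would go here ...
        return new Customer(id);
    }

    // Simple placeholder type so the sketch compiles on its own.
    public static class Customer {
        final long id;
        Customer(long id) { this.id = id; }
    }
}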
stackexchange | llm_judgeable_groundtruth_similarity | 70882092 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Python int s are objects that encapsulate the actual number value. Can we mess with that value, for example setting the value of the object 1 to 2? So that 1 == 2 becomes True ?
Now provide the response and nothing else.
| Yes, we can. But don't do this at home . Seriously, the 1 object is used in many places and I have no clue what this might break and what that might do to your computer. I reject all responsibility. But I found it interesting to learn about these things. The id function gives us the memory address and the ctypes module lets us mess with memory: import ctypesctypes.memmove(id(1) + 24, id(2) + 24, 4)print(1 == 2)x = 40print(x + 1) Output: True42 Try it online! . I tried it there because such sites have got to be protected from our hacking anyway. More explanation / analysis: The memmove copied the value from the 2 object into the 1 object. Their size is 28 bytes each, but I skipped the first 24 bytes, because that's the object's reference count, type address, and value size, as we can view/verify as well: import ctypes, struct, sysx = 1data = ctypes.string_at(id(x), 28)ref_count, type_address, number_of_digits, lowest_digit = \ struct.unpack('qqqi', data)print('reference count: ', ref_count, sys.getrefcount(x))print('type address: ', type_address, id(type(x)))print('number of digits:', number_of_digits, -(-x.bit_length() // 30))print('lowest digit: ', lowest_digit, x % 2**30) Output ( Try it online! ): reference count: 135 138type address: 140259718753696 140259718753696number of digits: 1 1lowest digit: 1 1 The reference count gets increased by the getrefcount call, but I don't know why by 3. Anyway, ~134 things other than us reference the 1 object, and we're potentially messing all of them up, so... really don't try this at home. The "digits" refer to how CPython stores int s as digits in base 2 30 . For example, x = 2 ** 3000 has 101 such digits. Output for x = 123 ** 456 for a better test: reference count: 1 2type address: 140078560107936 140078560107936number of digits: 106 106lowest digit: 970169057 970169057 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/70882092', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12671057/']} | jdg_77167 |
stackexchange | llm_judgeable_groundtruth_similarity | 2064447 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using JAX WS to expose a WebService. Some of the operations of this service can generate exceptions. Not internal server exceptions, but rather exceptions that are dependent on the input arguments of the operation invocation. If I specify that my operation throws a custom Exception, like so: @WebService@SOAPBinding(style = Style.RPC, use = Use.LITERAL)public class MyServiceEndpointImpl implements MyServiceEndpoint { @WebMethod public void throwsException throws InvalidInputException;} I end up with the following stacktrace when running the application: com.sun.xml.ws.model.RuntimeModelerException: runtime modeler error: Wrapper class com.mypackage.ws.services.jaxws.InvalidInputExceptionBean is not found. Have you run APT to generate them? at com.sun.xml.ws.model.RuntimeModeler.getClass(RuntimeModeler.java:285) at com.sun.xml.ws.model.RuntimeModeler.processExceptions(RuntimeModeler.java:1006) at com.sun.xml.ws.model.RuntimeModeler.processRpcMethod(RuntimeModeler.java:969) at com.sun.xml.ws.model.RuntimeModeler.processMethod(RuntimeModeler.java:546) at com.sun.xml.ws.model.RuntimeModeler.processClass(RuntimeModeler.java:370) at com.sun.xml.ws.model.RuntimeModeler.buildRuntimeModel(RuntimeModeler.java:256) at com.sun.xml.ws.server.EndpointFactory.createSEIModel(EndpointFactory.java:322) at com.sun.xml.ws.server.EndpointFactory.createEndpoint(EndpointFactory.java:188) at com.sun.xml.ws.api.server.WSEndpoint.create(WSEndpoint.java:467) at org.jvnet.jax_ws_commons.spring.SpringService.getObject(SpringService.java:333) at org.jvnet.jax_ws_commons.spring.SpringService.getObject(SpringService.java:45) at org.springframework.beans.factory.support.FactoryBeanRegistrySupport$1.run(FactoryBeanRegistrySupport.java:121) Adding @XmlRootEntity to InvalidInputException does not solve the problem. If this is not the recommended way to report faults over web services, then is there a better way? Should my exceptions inherit from RuntimeException and rely on the transport for the error handling (i.e., everything will end up wrapped in a SOAPException)? I was hoping for something like Spring-WS' SoapFaultAnnotationExceptionResolver . Is there something similar at all available for JAX-WS?
Now provide the response and nothing else.
| Did you try to annotate your exception with @WebFault ? Also, do you implement getFaultInfo() ? EDIT: I realize my answer was maybe not detailed enough. As reminded in this thread (for example): The JAX-WS 2.0 specification demands that the exception annotated with @WebFault must have two constructors and one method [getter to obtain the fault information]: WrapperException(String message, FaultBean faultInfo)WrapperException(String message, FaultBean faultInfo, Throwable cause)FaultBean getFaultInfo() The WrapperException is replaced by the name of the exception, and FaultBean is replaced by the class name that implements the fault bean. The fault bean is a Java bean that contains the information of the fault and is used by the Web service client to know the cause for the fault. This is detailed in section 2.5 Fault of the JAX-WS specification. Does your exception conform to this? Can you post the code? The OP is right. As per specification 2.1, section 3.7 Service Specific Exception, it is not required to use the @WebFault annotation, JAX-WS can generate the wrapper beans dynamically for exceptions that do not match the pattern described in section 2.5 (just provide a getter for the information you want to be present in the fault). For exceptions that match the pattern described in section 2.5 (i.e. exceptions that have a getFaultInfo method and @WebFault annotation), the FaultBean is used as input to JAXB when mapping the exception to XML Schema. So the solution suggested above (matching the pattern described in section 2.5) is only a workaround. The generation of wrapper beans should just work for other exceptions. And I don't know why this fails here. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2064447', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/90566/']} | jdg_77168 |
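As a minimal sketch of the section 2.5 pattern described above (the bean and field names are illustrative; what matters are the @WebFault annotation, the two constructors taking the fault bean, and the getFaultInfo() accessor):

import javax.xml.ws.WebFault;

// Fault bean: a plain JavaBean carrying the information sent back to the client.
class InvalidInputFault {
    private String detail;
    public String getDetail() { return detail; }
    public void setDetail(String detail) { this.detail = detail; }
}

// Exception following the pattern from section 2.5 of the JAX-WS spec.
@WebFault(name = "InvalidInputFault")
public class InvalidInputException extends Exception {
    private final InvalidInputFault faultInfo;

    public InvalidInputException(String message, InvalidInputFault faultInfo) {
        super(message);
        this.faultInfo = faultInfo;
    }

    public InvalidInputException(String message, InvalidInputFault faultInfo, Throwable cause) {
        super(message, cause);
        this.faultInfo = faultInfo;
    }

    public InvalidInputFault getFaultInfo() {
        return faultInfo;
    }
}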
stackexchange | llm_judgeable_groundtruth_similarity | 20268544 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a Portable Class Library (PCL) version Of HttpUtility.ParseQueryString contained in System.Web or some code I could use? I want to read a very complex URL.
Now provide the response and nothing else.
| HttpUtility.ParseQueryString returns HttpValueCollection (Internal Class) which inherits from NameValueCollection . NameValueCollection is a collection of key value pairs like a dictionary but it supports duplicates, maintains order and only implements IEnumerable (This collection is pre-generics). NameValueCollection is not supported in PCL. My solution (Partly lifted and modified from the .NET framework) is to substitute HttpValueCollection with Collection<HttpValue> where HttpValue is just a key value pair. public sealed class HttpUtility{ public static HttpValueCollection ParseQueryString(string query) { if (query == null) { throw new ArgumentNullException("query"); } if ((query.Length > 0) && (query[0] == '?')) { query = query.Substring(1); } return new HttpValueCollection(query, true); }}public sealed class HttpValue{ public HttpValue() { } public HttpValue(string key, string value) { this.Key = key; this.Value = value; } public string Key { get; set; } public string Value { get; set; }}public class HttpValueCollection : Collection<HttpValue>{ #region Constructors public HttpValueCollection() { } public HttpValueCollection(string query) : this(query, true) { } public HttpValueCollection(string query, bool urlencoded) { if (!string.IsNullOrEmpty(query)) { this.FillFromString(query, urlencoded); } } #endregion #region Parameters public string this[string key] { get { return this.First(x => string.Equals(x.Key, key, StringComparison.OrdinalIgnoreCase)).Value; } set { this.First(x => string.Equals(x.Key, key, StringComparison.OrdinalIgnoreCase)).Value = value; } } #endregion #region Public Methods public void Add(string key, string value) { this.Add(new HttpValue(key, value)); } public bool ContainsKey(string key) { return this.Any(x => string.Equals(x.Key, key, StringComparison.OrdinalIgnoreCase)); } public string[] GetValues(string key) { return this.Where(x => string.Equals(x.Key, key, StringComparison.OrdinalIgnoreCase)).Select(x => x.Value).ToArray(); } public void Remove(string key) { this.Where(x => string.Equals(x.Key, key, StringComparison.OrdinalIgnoreCase)) .ToList() .ForEach(x => this.Remove(x)); } public override string ToString() { return this.ToString(true); } public virtual string ToString(bool urlencoded) { return this.ToString(urlencoded, null); } public virtual string ToString(bool urlencoded, IDictionary excludeKeys) { if (this.Count == 0) { return string.Empty; } StringBuilder stringBuilder = new StringBuilder(); foreach (HttpValue item in this) { string key = item.Key; if ((excludeKeys == null) || !excludeKeys.Contains(key)) { string value = item.Value; if (urlencoded) { // If .NET 4.5 and above (Thanks @Paya) key = WebUtility.UrlDecode(key); // If .NET 4.0 use this instead. // key = Uri.EscapeDataString(key); } if (stringBuilder.Length > 0) { stringBuilder.Append('&'); } stringBuilder.Append((key != null) ? (key + "=") : string.Empty); if ((value != null) && (value.Length > 0)) { if (urlencoded) { value = Uri.EscapeDataString(value); } stringBuilder.Append(value); } } } return stringBuilder.ToString(); } #endregion #region Private Methods private void FillFromString(string query, bool urlencoded) { int num = (query != null) ? 
query.Length : 0; for (int i = 0; i < num; i++) { int startIndex = i; int num4 = -1; while (i < num) { char ch = query[i]; if (ch == '=') { if (num4 < 0) { num4 = i; } } else if (ch == '&') { break; } i++; } string str = null; string str2 = null; if (num4 >= 0) { str = query.Substring(startIndex, num4 - startIndex); str2 = query.Substring(num4 + 1, (i - num4) - 1); } else { str2 = query.Substring(startIndex, i - startIndex); } if (urlencoded) { this.Add(Uri.UnescapeDataString(str), Uri.UnescapeDataString(str2)); } else { this.Add(str, str2); } if ((i == (num - 1)) && (query[i] == '&')) { this.Add(null, string.Empty); } } } #endregion} UPDATE Updated so that HttpValueCollection now inherits from Collection rather than List as highlighted in the comments. UPDATE 2 Updated to use WebUtility.UrlDecode if using .NET 4.5, thanks to @Paya. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20268544', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1212017/']} | jdg_77169 |
stackexchange | llm_judgeable_groundtruth_similarity | 1010724 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Programming isn't my main job, though I enjoy it and sometimes get paid for it. For many years now I've been hearing about Linux and my friends have shown to me many *nixes (or *nici?), though I stick with Mac OS. Do you think there are any parts of the Linux kernel that I could enjoy looking at, that would help me understand what's the whole stuff about? For example, how Linux is different from Darwin? I've grown up with assembler and DOS, so things like interrupts or low-level C shouldn't be barriers to understanding. But in the end I'm more interested in high-level concepts, like threading or networking stack - I know different operating systems do them differently. And I'm looking for something fun, easy and enjoyable, like late-night reading. (Note: made a CW, just in case) Update: I looked for some docs and started reading: Unreliable Guide To Locking
Now provide the response and nothing else.
| I would recommend looking at LXR . It makes it easier to follow the flow of the code (you do not have to search for each function that is called — well, you have, but the site does it for you). Some starting points, for the current version (2.6.30): start_kernel() — think of it as the kernel equivalent of main() . This function initializes almost all the kernel subsystems; follow it to see in code what you see scrolling on the screen during the boot. entry_32.S — system calls and interrupts (x86-32 version, which should be nearer what you know; note the use of the AT&T assembly dialect instead of the Intel dialect you might be more used to). head_32.S — the kernel entry point. This is where the kernel starts after switching to protected mode; in the end, it will call start_kernel() . arch/x86/boot — the real-mode bootstrap code. It starts in assembly ( boot/header.S ), but quickly jumps into C code (starting at boot/main.c ). Does the real-mode initialization (mostly BIOS calls which have to be done before switching to protected mode); it is compiled using a weird GCC trick ( .code16gcc ), which allows the generation of 32-bit real-mode code. arch/x86/boot/compressed — if you ever wondered where does the "Decompressing Linux..." message comes from, it is from here. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1010724', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/115200/']} | jdg_77170 |
stackexchange | llm_judgeable_groundtruth_similarity | 19995343 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to parse xml and put its value in two arraylist .but i am getting array index out of bound exception on bold line My xml looks like this :- <visitor_info><txt_trailer_no_front>23,5</txt_trailer_no_front><txt_trailor_no>24,4</txt_trailor_no><txt_trailer_no_bol>25,</txt_trailer_no_bol></visitor_info> and my code to parse it is: for(Node n = doc.getFirstChild().getFirstChild().getNextSibling(); n!=null; n = n.getNextSibling().getNextSibling()) { String str=getElementValue(n); if(n.getNextSibling()==null) { Log.i("info",str+"test"); fillid.add(str.split(",")[0]); **fillvalues.add(str.split(",")[1]);**//Exception occurs break; } if(!str.equals("") && !str.equals(null)) { fillid.add(str.split(",")[0]); fillvalues.add(str.split(",")[1]!=null?str.split(",")[1]:""); } } Log.i("info",fillvalues.size()+"v"); for(int i=0 ;i<fillvalues.size();i++) { Log.i("info","1"); for(int j=0;j<values.size();j++) { Log.i("info","2"); if(values.get(j).equals(fillid.get(i))) { Log.i("info","3"); values.get(j).value=fillvalues.get(i); break; } } }}
Now provide the response and nothing else.
| 4) Partition ranges are determined by granting each node the range from their available tokens up until the next specified token. 2) Data is exchanged through gossip detailing which nodes have which tokens. This meta-data allows every node to know which nodes are responsible for which ranges. Keyspace/replication settings also change where data is actually saved. EXAMPLE: 1) A gets 256 ranges, B gets 256 ranges. But to make this simple let's give them each 2 tokens and pretend the token range is 0 to 30. Given tokens A 10,15 and B 3,11, the nodes are responsible for the following ranges: (3-9:B)(10:A)(11-14:B)(15-30,0-2:A). 3) If C joins, also with 2 tokens, 20 and 5, the nodes will now be responsible for the following ranges: (3-4:B)(5-9:C)(10:A)(11-14:B)(15-19:A)(20-30,0-2:C). Vnodes are powerful because now when C joins the cluster it gets its data from multiple nodes (5-9 from B and 20-30,0-2 from A), sharing the load between those machines. In this toy example you can see that having only 2 tokens allows some nodes to host the majority of the data while others get almost none. As the number of vnodes increases, the balance between the nodes improves as the ranges become randomly subdivided more and more. At 256 vnodes per node you are extremely likely to have distributed an even amount of data to each node in the cluster. For more information on vnodes: http://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19995343', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1747460/']} | jdg_77171 |
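A tiny Java sketch (purely illustrative, not Cassandra code) of the range-assignment convention used in the toy example above, where each token owns the span from itself up to, but not including, the next token on the ring:

import java.util.Map;
import java.util.TreeMap;

// Reproduces the 0..30 example: tokens A{10,15} and B{3,11}.
public class TokenRingSketch {
    public static void main(String[] args) {
        int ringSize = 31; // ring values 0..30, as in the example
        TreeMap<Integer, String> tokenToNode = new TreeMap<>();
        tokenToNode.put(10, "A"); tokenToNode.put(15, "A");
        tokenToNode.put(3, "B");  tokenToNode.put(11, "B");

        for (Map.Entry<Integer, String> e : tokenToNode.entrySet()) {
            int start = e.getKey();
            Integer next = tokenToNode.higherKey(start);
            // A token owns everything from itself up to the next token (wrapping at the end).
            int end = (next != null ? next : tokenToNode.firstKey() + ringSize) - 1;
            System.out.println(e.getValue() + " owns " + start + ".." + (end % ringSize)
                    + (end >= ringSize ? " (wrapping)" : ""));
        }
    }
}

This prints B owns 3..9, A owns 10..10, B owns 11..14 and A owns 15..2 (wrapping), matching the ranges listed above.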
stackexchange | llm_judgeable_groundtruth_similarity | 1285176 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $M$ is the vector space of $2\times 2$ real matrices, then I can show that $$ \{A \in M \mid A^\mathrm{T}=-A \} $$ is a subspace of $M$, since $$ \left[ \begin{array}{cc} x & z \\ -z & y \end{array} \right]+\left[ \begin{array}{cc} x' & z' \\ -z' & y' \end{array} \right] = \left[ \begin{array}{cc} x+x' & z+z' \\ -(z+z') & y+y' \end{array} \right] $$ and for some $\lambda\in\mathbb{R}$ $$ \lambda\left[ \begin{array}{cc} x & z \\ -z & y \end{array} \right] = \left[ \begin{array}{cc} \lambda x & \lambda z \\ -\lambda z & \lambda y \end{array} \right] $$ But I'm not sure if I'm correct on finding the dimension and a basis of the subspace: $$ \left[ \begin{array}{cc} x & z \\ -z & y \end{array} \right] = x\left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right]+y\left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right]+z\left[ \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right] $$ This makes me think that a basis is made up of $$ \left(\left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right],\left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right],\left[ \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right] \right) $$ and so the dimension is three. Is that right?
Now provide the response and nothing else.
| $$ A^\mathrm{T}=-A $$implies that $A$ has zeroes on the main diagonal, since they change sign on the RHS, but remain unchanged on the LHS. if you incorporate that into your representation, it should be easy to answer the question about dimension and basis. as a related exercise you might like to consider the $2 \times 2$ matrices with complex entries. these are isomorphic to $\Bbb{R}^8$ as a real vector space, and to $\Bbb C^4$ as a complex vector space. the skew-hermitian matrices satisfy:$$A^{\mathrm{T}}+A^* =0$$where $A^*$ is the complex conjugate of $A$. the skew-hermitian matrices are isomorphic to a $4$-dimensional subspace of $\Bbb R^8$. however in $\Bbb C^4$ they are only a subgroup, but do not form a subspace - the diagonal elements must be purely imaginary and this property is not preserved under multiplication by an arbitrary complex scalar. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1285176', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/241089/']} | jdg_77172 |
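To spell out the conclusion the answer points to (added for completeness): once the diagonal entries are forced to be zero, the subspace is $$ \left\{\left[ \begin{array}{cc} 0 & z \\ -z & 0 \end{array} \right] : z\in\mathbb{R}\right\} = \operatorname{span}\left\{\left[ \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right]\right\}, $$ so a basis consists of the single matrix $\left[ \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right]$ and the dimension is $1$, not $3$.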
stackexchange | llm_judgeable_groundtruth_similarity | 34590369 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to format a Date String in React Native. Ex: 2016-01-04 10:34:23 Following is the code I'm using. var date = new Date("2016-01-04 10:34:23");console.log(date); My problem is, when I'm emulating this on a iPhone 6S, it'll print Mon Jan 04 2016 10:34:23 GMT+0530 (IST) without any problem. But if I try with the iPhone 5S it prints nothing. And if you try to get the month by using a method like date.getMonth() it'll print "NaN" . Why is this? What is the workaround?
Now provide the response and nothing else.
| The beauty of the React Native is that it supports lots of JS libraries like Moment.js. Using moment.js would be a better/easier way to handle date/time instead coding from scratch just run this in the terminal ( yarn add moment also works if using React's built-in package manager): npm install moment --save And in your React Native js page: import Moment from 'moment';render(){ Moment.locale('en'); var dt = '2016-05-02T00:00:00'; return(<View> {Moment(dt).format('d MMM')} </View>) //basically you can do all sorts of the formatting and others} You may check the moment.js official docs here https://momentjs.com/docs/ | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/34590369', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2182809/']} | jdg_77173 |
stackexchange | llm_judgeable_groundtruth_similarity | 32716 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
One difficulty I'm encountering in studying the structure of Graphics objects is that I have not yet found a way to print or display such structures that are sufficiently general. The FullForm of Graphics objects can be huge and extremely difficult to take in visually. I have tried to deal with this using Shallow , but with only limited success, because I find that "the interesting bits" in a Graphics not always occur at the same depth. It's a chicken-and-egg problem: to write a function that displays such structure in a useful way, I need to understand what such structure could be. But gaining this understanding is precisely what I'm trying to do here! In case it matters, I'm primarily interested in examining the structure of Graphics objects generated, directly or indirectly, by plotting commands such as Plot , ListPlot , etc.
Now provide the response and nothing else.
| Finally, the following code gives you an interactive tree. You might want to enlarge the area of the tree if the nodes are too small. structureOfGraph[gr_] := Module[{xx = gr /. Rule[a_, _] :> a /. x_ /; And @@ NumericQ /@ x :> x[[0]] /. {List ..} :> ListOfLists}, Manipulate[TreeForm[xx, n], {n, 1, Depth[xx] - 1, 1}]];structureOfGraph[Show[Plot[{Cos[x], Sin[x]}, {x, 0, 4 Pi}], Graphics[{Circle[]}]]] Second iteration This second attempt makes an even shorter tree by getting rid of the terminal List : TreeForm[Graphics[{Blue, {EdgeForm[{Red, Thick}], Disk[]}, Disk[{1, 0}]}] /. x_ /; And @@ NumericQ /@ x :> x[[0]] /. x_[List] :> x] First iteration I will send this as an answer because I cannot add comments with graphics to this previous answer . TreeForm[Graphics[{Blue, {EdgeForm[{Red, Thick}], Disk[]}, Disk[{1, 0}]}] /. x_ /; And @@ NumericQ /@ x :> x[[0]]] produces | {} | {'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/32716', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/2464/']} | jdg_77174 |
stackexchange | llm_judgeable_groundtruth_similarity | 54350585 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to convert the date format '31-Dec-09' to '2009-12-31' in Python with the following code: df = ['19-Jan-19', '4-Jan-19'] f = [datetime.datetime.strptime(x,'%d-%mmm-%y').strftime('%Y-%m-%d') for x in df]print(f) Getting the following error: time data '9-Jan-19' does not match format '%d-%mmm-%y' I have read somewhere that matching a two-digit year '00' would require a lowercase y instead of an upper case one. Either way, I am getting the same error so I guess something else is wrong. Any help?
Now provide the response and nothing else.
| Your %y and %Y patterns are fine, the issue is that you used %mmm here. The datetime.strptime() method patterns are all single letter patterns, and %mmm is seen as the %m pattern followed by two literal m characters. %m matches a numeric month (1 or 2 digits, zero-padding optional). So 19-1mm-19 would match, but 19-Jan-19 does not because the month is not numeric and the two literal m characters are missing. The correct pattern to use is '%d-%b-%y' here, where %b matches an abbreviated month name. Demo: >>> import datetime>>> df = ['19-Jan-19', '4-Jan-19']>>> [datetime.datetime.strptime(x,'%d-%b-%y').strftime('%Y-%m-%d') for x in df]['2019-01-19', '2019-01-04'] | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54350585', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1905726/']} | jdg_77175 |
stackexchange | llm_judgeable_groundtruth_similarity | 18279622 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am printing out elements from an ArrayList and want to have a comma after each word except the last word. Right now, I am doing it like this: for (String s : arrayListWords) { System.out.print(s + ", ");} It prints out the words like this: one, two, three, four, The problem is the last comma. How do I solve it?
Now provide the response and nothing else.
| Print the first word on its own if it exists. Then print the pattern as comma first, then the next element.

if (arrayListWords.length >= 1) {
    System.out.print(arrayListWords[0]);
}
// note that i starts at 1, since we already printed the element at index 0
for (int i = 1; i < arrayListWords.length; i++) {
    System.out.print(", " + arrayListWords[i]);
}

With a List, you're better off using an Iterator:

// assume String elements
Iterator<String> it = arrayListWords.iterator();
if (it.hasNext()) {
    System.out.print(it.next());
}
while (it.hasNext()) {
    System.out.print(", " + it.next());
} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18279622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1299477/']} | jdg_77176 |
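As an aside on the same delimiter-before-element idea: since Java 8 the standard library does this for you via java.util.StringJoiner and String.join. This is an addition for context, not part of the original answer, and the sample data is made up.

import java.util.Arrays;
import java.util.List;
import java.util.StringJoiner;

public class JoinExample {
    public static void main(String[] args) {
        List<String> arrayListWords = Arrays.asList("one", "two", "three", "four");

        // StringJoiner inserts the delimiter only between elements, never after the last one.
        StringJoiner joiner = new StringJoiner(", ");
        for (String word : arrayListWords) {
            joiner.add(word);
        }
        System.out.println(joiner);                       // one, two, three, four

        // Equivalent one-liner for a collection of strings.
        System.out.println(String.join(", ", arrayListWords));
    }
}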
stackexchange | llm_judgeable_groundtruth_similarity | 100271 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to analyse a survey study in which I'm interested in the way that individual differences between my participants influence how they respond to my stimuli. The stimuli are pieces of writing randomly selected from a large pool of such, and I'm not currently interested in any fixed effects of these stimuli. My participants each view 5 pieces of writing and make a binomial choice about each on a single-item DV. I have personality variables from each of the participants that I'm interested in as a predictor of their response. When setting up a mixed-effects model in lmer (with a long dataset) I know that I should include a term for the random effects of the stimuli. My question is whether I should also include a term for the random effects of participant, along with the fixed effects of the personality variable. Because differences between participants on my personality measure are my variable of interest, does it make sense to control for random participant effects? Also, if I do control for the random effects of participant, what am I conceptually doing here? Partitioning out all the participant-level variance that can't be explained by my personality variable? Controlling for the random effects of participant seems to consistently reduce the predictive power of any personality variable I include in the model, which is why I ask. Some illustrative lmer syntax for the two options:Without participant random effects: glmer(Choice ~ Personality + (1| StimID), data=choice.long, family='binomial') With participant random effects: glmer(Choice ~ Personality + (1| StimID) + (1| PartID), data=choice.long, family='binomial') Am I specifying the latter model right?
Now provide the response and nothing else.
| You can generally continue to improve your estimate of whatever parameter you might be testing with more data. Stopping data collection once a test achieves some semi-arbitrary degree of significance is a good way to make bad inferences. That analysts may misunderstand a significant result as a sign that the job is done is one of many unintended consequences of the Neyman–Pearson framework, according to which people interpret p values as cause to either reject or fail to reject a null without reservation depending on which side of the critical threshold they fall on. Without considering Bayesian alternatives to the frequentist paradigm (hopefully someone else will), confidence intervals continue to be more informative well beyond the point at which a basic null hypothesis can be rejected. Assuming collecting more data would just make your basic significance test achieve even greater significance (and not reveal that your earlier finding of significance was a false positive), you might find this useless because you'd reject the null either way. However, in this scenario, your confidence interval around the parameter in question would continue to shrink, improving the degree of confidence with which you can describe your population of interest precisely. Here's a very simple example in r – testing the null hypothesis that $\mu=0$ for a simulated variable: One Sample t-testdata: rnorm(99) t = -2.057, df = 98, p-value = 0.04234alternative hypothesis: true mean is not equal to 0 95 percent confidence interval: -0.377762241 -0.006780574 sample estimates: mean of x -0.1922714 Here I just used t.test(rnorm(99)) , and I happened to get a false positive (assuming I've defaulted to $\alpha=.05$ as my choice of acceptable false positive error rate). If I ignore the confidence interval, I can claim my sample comes from a population with a mean that differs significantly from zero. Technically the confidence interval doesn't dispute this either, but it suggests that the mean could be very close to zero, or even further from it than I think based on this sample. Of course, I know the null is actually literally true here, because the mean of the rnorm population defaults to zero, but one rarely knows with real data. Running this again as set.seed(8);t.test(rnorm(99,1)) produces a sample mean of .91, a p = 5.3E-13, and a 95% confidence interval for $\mu=[.69,1.12]$. This time I can be quite confident that the null is false, especially because I constructed it to be by setting the mean of my simulated data to 1. Still, say it's important to know how different from zero it is; maybe a mean of .8 would be too close to zero for the difference to matter. I can see I don't have enough data to rule out the possibility that $\mu=.8$ from both my confidence interval and from a t -test with mu=.8 , which gives a p = .33. My sample mean is high enough to seem meaningfully different from zero according to this .8 threshold though; collecting more data can help improve my confidence that the difference is at least this large, and not just trivially larger than zero. Since I'm "collecting data" by simulation, I can be a little unrealistic and increase my sample size by an order of magnitude. Running set.seed(8);t.test(rnorm(999,1),mu=.8) reveals that more data continue to be useful after rejecting the null hypothesis of $\mu=0$ in this scenario, because I can now reject a null of $\mu=.8$ with my larger sample. 
The confidence interval of $\mu=[.90,1.02]$ even suggests I could've rejected null hypotheses up to $\mu=.89$ if I'd set out to do so initially. I can't revise my null hypothesis after the fact, but without collecting new data to test an even stronger hypothesis after this result, I can say with 95% confidence that replicating my "study" would allow me to reject a $H_0:\mu=.9$. Again, just because I can simulate this easily, I'll rerun the code as set.seed(9);t.test(rnorm(999,1),mu=.9) : doing so demonstrates my confidence wasn't misplaced. Testing progressively more stringent null hypotheses, or better yet, simply focusing on shrinking your confidence intervals is just one way to proceed. Of course, most studies that reject null hypotheses lay the groundwork for other studies that build on the alternative hypothesis. E.g., if I was testing an alternative hypothesis that a correlation is greater than zero, I could test for mediators or moderators in a follow-up study next...and while I'm at it, I'd definitely want to make sure I could replicate the original result. Another approach to consider is equivalence testing. If you want to conclude that a parameter is within a certain range of possible values, not just different from a single value, you can specify that range of values you'd want the parameter to lie within according to your conventional alternative hypothesis and test it against a different set of null hypotheses that together represent the possibility that the parameter lies outside that range. This last possibility might be most similar to what you had in mind when you wrote: We have "some evidence" for the alternative to be true, but we can't draw that conclusion. If I really want to draw that conclusion conclusively... Here's an example using similar data as above (using set.seed(8) , rnorm(99) is the same as rnorm(99,1)-1 , so the sample mean is -.09). Say I want to test the null hypothesis of two one-sided t -tests that jointly posit that the sample mean is not between -.2 and .2. This corresponds loosely to the previous example's premise, according to which I wanted to test if $\mu=.8$. The difference is that I've shifted my data down by 1, and I'm now going to perform two one-sided tests of the alternative hypothesis that $-.2\le\mu\le.2$. Here's how that looks: require(equivalence);set.seed(8);tost(rnorm(99),epsilon=.2) tost sets the confidence level of the interval to 90%, so the confidence interval around the sample mean of -.09 is $\mu=[-.27,.09]$, and p = .17. However, running this again with rnorm(999) (and the same seed) shrinks the 90% confidence interval to $\mu=[-.09,.01]$, which is within the equivalence range specified in the null hypothesis with p = 4.55E-07. I still think the confidence interval is more interesting than the equivalence test result. It represents what the data suggest the population mean is more specifically than the alternative hypothesis, and suggests I can be reasonably confident that it lies within an even smaller interval than I've specified in the alternative hypothesis. To demonstrate, I'll abuse my unrealistic powers of simulation once more and "replicate" using set.seed(7);tost(rnorm(999),epsilon=.09345092) : sure enough, p = .002. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/100271', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/46273/']} | jdg_77177 |
stackexchange | llm_judgeable_groundtruth_similarity | 2090723 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If it's Path_To_DocumentRoot/a/b/c.php, it should always be /a/b. I use this: dirname($_SERVER["PHP_SELF"]) But it won't work when the file is included by another file in a different directory. EDIT: I need a relative path to the document root. It's used in a web application. I found there is another question with the same problem, but no accepted answer yet: PHP - Convert File system path to URL
Now provide the response and nothing else.
| Do you have access to $_SERVER['SCRIPT_NAME'] ? If you do, doing: dirname($_SERVER['SCRIPT_NAME']); Should work. Otherwise do this: In PHP < 5.3: substr(dirname(__FILE__), strlen($_SERVER['DOCUMENT_ROOT'])); Or PHP >= 5.3: substr(__DIR__, strlen($_SERVER['DOCUMENT_ROOT'])); You might need to realpath() and str_replace() all \ to / to make it fully portable, like this: substr(str_replace('\\', '/', realpath(dirname(__FILE__))), strlen(str_replace('\\', '/', realpath($_SERVER['DOCUMENT_ROOT'])))); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2090723', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/198729/']} | jdg_77178 |
stackexchange | llm_judgeable_groundtruth_similarity | 12941984 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
On iOS, I load a custom font in my project by adding its file name (an .otf file) to the Info.plist file and then using this line of code: UIFont *myFont = [UIFont fontWithName:titleFontName size:titleFontSize]; I obtain a font that I can use in UILabels and UITextViews. How can I get this font to be displayed only in small caps? If I use it in Photoshop, it's possible to turn on the small caps switch to have all words typeset in small caps (so I conclude that there is nothing missing from the font). How can I obtain a similar effect on iOS? Converting my strings to uppercase is not a viable option for other reasons. Further information: the font has only one member in its family; as far as I can tell from the following code, there is no standalone small caps member in the family. for (NSString * familyName in [UIFont familyNames]) { NSLog(@"---------------- %@ ---------------", familyName); for (NSString * fontName in [UIFont fontNamesForFamilyName:familyName] ) NSLog(@"- %@", fontName); }
Now provide the response and nothing else.
| Small caps are enabled in the font through an open type feature. In iOS 7 we can use a font descriptor to access open type features and enable small caps. This question goes into how to turn on small caps using core text, but the same can be done for UIFonts and UIKit views just as easily. You'll need to create a UIFontDescriptor and set the UIFontDescriptorFeatureSettingsAttribute to an array of dictionaries for the features you want to enable. Each font feature dictionary contains a key and value to specify the feature type, and a key and value for the feature selector. Depending on the font you're using, you'll need to find the correct values corresponding to small caps. You can find these in the array that the commented section logs. UIFont Category This category will generate a UIFont object with small caps enabled. You'll need to add the correct font name. #import "UIFont+SmallCaps.h"#import <CoreText/CoreText.h>@implementation UIFont (SmallCaps)+ (UIFont *) applicationSmallCapsFontWithSize:(CGFloat) size { /* // Use this to log all of the properties for a particular font UIFont *font = [UIFont fontWithName: fontName size: fontSize]; CFArrayRef fontProperties = CTFontCopyFeatures ( ( __bridge CTFontRef ) font ) ; NSLog(@"properties = %@", fontProperties); */ NSArray *fontFeatureSettings = @[ @{ UIFontFeatureTypeIdentifierKey: @(kLowerCaseType), UIFontFeatureSelectorIdentifierKey : @(kLowerCaseSmallCapsSelector) } ]; NSDictionary *fontAttributes = @{ UIFontDescriptorFeatureSettingsAttribute: fontFeatureSettings , UIFontDescriptorNameAttribute: FONT_NAME } ; UIFontDescriptor *fontDescriptor = [ [UIFontDescriptor alloc] initWithFontAttributes: fontAttributes ]; return [UIFont fontWithDescriptor:fontDescriptor size:size];}@end | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12941984', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/87967/']} | jdg_77179 |
stackexchange | llm_judgeable_groundtruth_similarity | 976703 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I solve the following question? Suppose $f:A\subset\Bbb{R}^2\to\Bbb{R}$ is continuous on the rectangle $A=\{(x,y)\in\Bbb{R}^2\mid\alpha\leq x\leq\beta;\ \alpha'\leq y\leq\beta'\}.$ Prove that $f$ is bounded on this rectangle. (Saying that $f$ is bounded on $A$ means that there exists some $M>0$ such that $|f(x,y)|\leq M$ on $A$.) (Suggestion: suppose, for contradiction, that $f$ is not bounded on $A$. Then there exists $(x_1,y_1)$ in $A$ such that $|f(x_1,y_1)|>1$. Taking the midpoint of each side, divide the rectangle $A$ into four equal rectangles; in one of them, labeled $A_2$, $f$ won't be bounded, thus there will exist $(x_2,y_2)\in A_2$ such that $|f(x_2,y_2)|>2$, etc.)
Now provide the response and nothing else.
| Well, let's look at the structure of the problem: There is a set $S$ of suspects (three in the original problem, a countably infinite number of them in Hilbert's hotel). There's a subset $G\subset S$ of guilty suspects. And there's a mapping $f:S\to P(S)$ where $P(S)$ is the power set (set of subsets) of $S$, where $M\in f(s)$ means "If $s$ says the truth, it is possible that $G=M$". $f(s)$ is specified by a logical form $L_s$, that is, $f(s) = \{M\in P(S): L_s(M)\}$. Now we can formulate the separate questions as follows: a) Is $\bigcap_{s\in S} f(s) \ne \emptyset$? b) For which pairs $(s,t)\in S\times S$ do we have $f(s)\subseteq f(t)$? c) Assuming $G=\emptyset$, what is $\{s\in S: G\notin f(s)\}$? d) What is $\bigcap_{s\in S} f(s)$? (Actually, the question as formulated already presumes that this set has exactly one element; especially it assumes that the answer to (a) is "yes"). e) What is $\bigcap_{s\in G} (P(S)\setminus f(s)) \cap \bigcup_{s\in S\setminus G} f(s)$? So to generalize the problem to Hilbert's hotel, you have to find a function $f(n)$ specified by a logical formula dependent on $n$ such that $\bigcap_{s\in n} f(n)$ has exactly one element, and (to be a generalization of the original problem) reduces to the original problem when restricted to the set $\{0,1,2\}$ Let's look closer at the original testimonies: Brown gives an explicit list of who's guilty or innocent. Jones gives an implication connecting the other two. Smith makes a testimony about himself, and the claim that someone is guilty. Associating $0$ with Brown, $1$ with Jones and $2$ with Smith, we could write those as follows in the formalism derived above, with $S=\{1,2,3\}$$$\begin{align}f(0) &= \{M\in P(S): 1 \in M \land 2\notin M\}\\f(1) &= \{M\in P(S): 0 \in M \implies 2\in M\}\\f(2) &= \{M\in P(S): 2 \notin S \land M\ne\emptyset\}\end{align}$$ There are of course many ways to generalize that, but let's try the following:$$f(n) =\begin{cases}\{M\in P(\mathbb N): \forall m > n, m\in M\iff m \equiv 1\ (\mod 2)\} & n \equiv 0\ (\mod 3)\\\{M\in P(\mathbb N): \forall i < n, \forall k > n, m\in M\implies k\in M\} & n \equiv 1\ (\mod 3)\\\{M\in P(\mathbb N): n\notin M\land M\ne\emptyset\} & n\equiv 2\ (\mod 3)\end{cases}$$However this gives an inconsistent set of conditions (i.e. $\bigcup_{n\in\mathbb N} f(n)=\emptyset$), since from $f(0)$, one concludes $5\in G$, but from $f(5)$ one concludes $5\notin G$. This is a deviation from the original problem where the statements are indeed consistent. I'm not going to spend the time to actually find a proper generalization now (I already spent far more time on this answer than originally planned), but I think the mathematical concepts involved should now be clear. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/976703', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/177211/']} | jdg_77180 |
stackexchange | llm_judgeable_groundtruth_similarity | 364 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What evidence is there that $coRP \neq NP$? $coRP$ is the class of languages for which there exists a probabilistic Turing machine that runs in polynomial time and always answers Yes on an input belonging to the language and answers No with probability at least one half on an input not belonging to the language.
Now provide the response and nothing else.
| When considering the power of non-determinism (P vs NP), randomization seems like a 2nd order issue. In particular when we think about "P=NP?" we are really interested in the question "are all NP problems tractable", where randomization could be allowed, so tractability really means "in BPP". So "NP contained in BPP" seems essentially as unlikely as "P=NP", and in fact if these were considered different then people would care about the former rather than the latter. (The peculiar variant "NP in coRP" is formally somewhere in the middle between these two, but conceptually essentially the same). If good enough pseudo random generators exist then the two questions are formally the same. Similarly, in "non-uniform settings" randomization is known not to help and thus "NP in BPP" implies that NP has poly-size circuits. | {} | {'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/364', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/72/']} | jdg_77181 |
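As added context (standard textbook facts, not from the original answer), the last remark can be made precise with Adleman's theorem ($\mathrm{BPP}\subseteq \mathrm{P/poly}$) together with the Karp–Lipton theorem: $$\mathrm{NP}\subseteq\mathrm{BPP}\ \Longrightarrow\ \mathrm{NP}\subseteq \mathrm{P/poly}\ \Longrightarrow\ \mathrm{PH}=\Sigma_2^{\mathrm{p}}.$$ Since $\mathrm{coRP}\subseteq\mathrm{BPP}$, an equality such as $NP = coRP$ would already imply this collapse of the polynomial hierarchy, which is widely believed not to happen.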
stackexchange | llm_judgeable_groundtruth_similarity | 33866 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am under the understanding that a toy top will weigh less when it is spinning. The Russians made a spinning type transport back in the 70s to lessen its payload over the tundra. Is this an effective way to beat gravity?
Now provide the response and nothing else.
| No. A top can offer stability like in a gyroscope , but it does not in any way reduce the mass of an object. Some applications of a gyroscope are useful for transport, however. The popular Segway scooters use them to measure how far/fast it is tilting forward or backward and drives the wheels to compensate.. Similarly, motorcycles take advantage of the gyroscopic motion of their wheels to stay upright. Over the tundra, I could conceive of a gyroscope being used to keep a sled tilted up so that it is heavier on the backside and thus doesn't get caught toe up in the snow, but it would not reduce the mass at all. On the contrary, the mass of the gyroscopic device is just more to haul around. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/33866', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/10769/']} | jdg_77182 |
stackexchange | llm_judgeable_groundtruth_similarity | 19225152 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have built a small PHP/MySQL internal app to host and sort documents. All works perfectly until it comes to updating a file, in this case a .PDF file. When a user updates the .PDF the new file is on the server as expected and the older version deleted. A user is getting the new version providing they never opened the old version. Now the problem.... If a user has opened the older version of the .PDF at some point in the past they do not get the newer version when a link is clicked to view the document even though its only the new version actually physically on the server. I'm guessing that Google Chrome Browser is caching the older version of the PDF somewhere. How can I get around this? Due to the amount of users and the number of times a day some of the documents are updated asking users to manually clear any cache is not practical.
Now provide the response and nothing else.
| You really have three choices here: (1) change the filename every time it gets updated, (2) always generate the HREF with a GET parameter, or (3) send header information telling the browser to always download fresh from the server.

Option 1 - Works in 100% of cases. Might be tricky to maintain.

echo '<a href="files/pdfs/'.$row['FILENAME_FROM_DATABASE'].'">PDF</a>';
// Could produce something like:
// <a href="files/pdfs/filename_v5.pdf">PDF</a>

Option 2 - Works in 99% of cases.

echo '<a href="files/pdfs/filename.pdf?q='.microtime(true).'">PDF</a>';

Option 3 - Works in 99% of cases.

header("Pragma: public");
header("Cache-Control: max-age=1"); // <-- important
header('Expires: '.gmdate('D, d M Y H:i:s', time()+1).' GMT');
header('Content-type: application/pdf');
exit(file_get_contents(PATH_TO_PDF_FILE)); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19225152', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1005169/']} | jdg_77183 |
stackexchange | llm_judgeable_groundtruth_similarity | 138600 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
So the question I am trying to work through is: Test the series $$\frac{1}{3}+\frac{2^3}{3^2}+\frac{3^3}{3^3}+\frac{4^3}{3^4}+\frac{5^3}{3^5}+\cdot\cdot\cdot$$for convergence. The solution (using D'Alembert's ratio test) is: $$u_n=\frac{n^3}{3^n}\;,$$ so $$\begin{align*}\frac{|u_{n+1}|}{|u_n|} &=\frac{(n+1)^3}{3^{n+1}}\cdot \frac{3^n}{n^3}\\&=\frac{3^n(n+1)^3}{3^{n+1}\cdot n^3}\\&=\frac{3^n(n+1)^3}{3^{n+1}\cdot n^3}\\&=\frac{3^n(n^3+3n^2+3n+1)}{3^{n+1}\cdot n^3}\;.\end{align*}$$ How do we get from there to... $$=\frac{n^3+3n^2+3n+1}{3n^3}$$ What happens with $3^n$ in the numerator and power of $n+1$ in the denominator? How do they cancel out? Also, in the very next step that all goes to being equal to $$\lim\limits_{n\rightarrow\infty}\frac{|u_{n+1}|}{|u_n|}=\frac{1}{3}<1\;,$$ which means the series is convergent. But how do we get to $\dfrac{1}{3}$?
Now provide the response and nothing else.
| You have a factor of $3^n$ in the numerator, and a factor of $3^{n+1}$ in the denominator. So$$\frac{3^n(\text{stuff})}{3^{n+1}(\text{other stuff})} = \frac{3^n(\text{stuff})}{3\times 3^{n}\text{(other stuff)}} = \frac{\text{stuff}}{3(\text{other stuff})}.$$Since $3^{n+1}=3\times 3^n$. Dividing numerator and denominator by $n^3$, we have$$\begin{align*}\lim_{n\to\infty}\frac{n^3+3n^2+3n+1}{3n^3} &= \lim_{n\to\infty}\frac{\frac{1}{n^3}(n^3+3n^2+3n+1)}{\frac{1}{n^3}(3n^3)}\\&= \lim_{n\to\infty}\frac{1 + \frac{3}{n} + \frac{3}{n^2}+\frac{1}{n^3}}{3}\\&= \frac{\lim\limits_{n\to\infty}(1 + \frac{3}{n}+\frac{3}{n^2}+\frac{1}{n^3})}{\lim\limits_{n\to\infty}3}\\&= \frac{1 + 0 + 0 + 0}{3} = \frac{1}{3}.\end{align*}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/138600', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/28251/']} | jdg_77184 |
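As a quick numerical sanity check of that limit (my own addition, not part of the original answer), you can evaluate the ratio for increasing n and watch it fall toward 1/3. A minimal Python sketch:

# u_n = n^3 / 3^n; the ratio u_{n+1}/u_n should approach 1/3 from above.
def u(n):
    return n**3 / 3**n

for n in (5, 10, 50, 100):
    print(n, u(n + 1) / u(n))
# prints roughly 0.576, 0.444, 0.354, 0.343 -- heading toward 1/3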
stackexchange | llm_judgeable_groundtruth_similarity | 5141 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm wondering which security model is behind OpenID. Is it anything like Kerberos?
Now provide the response and nothing else.
| The question is very broad and it is hard to guess what you are actually asking. You can find the specification at http://openid.net/developers/specs/ Kerberos is typically used in a controlled environment . In that environment there are known and trusted Kerberos servers ("key distribution center"). The Kerberos server authenticates the service provider to the client and the client to the service provider. Therefore the Kerberos server must know of all clients and all service providers. OpenID , however, is designed for an open environment such as the Internet at large. That is any service provider (website) can work together with any openid-server ("identity provider") to authenticate clients. Differences between Kerberos and OpenID As Kerberos is used in controlled environments, logging in to services is totally transparent after you logged into your computer. On the Internet, however, you have to pick your openid-provider, and then login to it. These are two additional steps. Furthermore the redirect to your openid-provider may end up at a phishing site instead, if the service provider is malicious. The openid-server only knows about its users. Unlike Kerberos, OpenID providers cannot authenticate service-providers . Many openid-provider therefore ask the user to confirm that they want to login to a specific website before passing identity information. In a controlled environment such decisions are done by the administrators instead (assuming that they have a better understanding about security than users). This confirmation can be saved for each domain. Unlike Kerberos, however, there is no authentication involved. If the ownership of a domain changes for legit or illegit reasons, this is not noticed. Conclusion OpenID has a number of issues that Kerberos does not have. Kerberos, however, does not work at all in an open environment, so it is no alternative for the common use cases of OpenID. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/5141', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/-1/']} | jdg_77185 |
stackexchange | llm_judgeable_groundtruth_similarity | 232861 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Generate the Fibonacci sequence in the fewest characters possible. Any language is OK, except for one that you define with one operator, f, which prints the Fibonacci numbers. Starting point: 25 characters in Haskell: f=0:1:zipWith(+)f(tail f), since shortened to 14 characters: f=0:scanl(+)1f
Now provide the response and nothing else.
RePeNt, 8 chars (down from 9)

1↓[2?+1]

Or 10 chars with printing: 1↓[2?+↓£1]

Run using: RePeNt "1↓[2?+1]"

RePeNt is a stack-based toy language I wrote (and am still improving) in which all operators/functions/blocks/loops use Reverse Polish Notation (RPN).

Command   Explanation                                                    Stack
-------   -----------                                                    -----
1         Push a 1 onto the stack                                        1
↓         Push last stack value                                          1 1
[         Start a do-while loop                                          1 1
2?        Push a two, then pop the 2 and copy the last 2 stack           1 1 1 1
          items onto the stack
+         Add on the stack                                               1 1 2
↓£        Push last stack value then print it                            1 1 2
1         Push a 1 onto the stack                                        1 1 2 1
]         Pop value (1 in this case), if it is a 0 exit the loop         1 1 2
          otherwise go back to the loop start.

The answer is on the stack, which builds itself up like:

1 1
1 1 2
1 1 2 3
1 1 2 3 5

It never terminates (it has the equivalent of a C#/Java do { } while(true) loop) because the sequence will never terminate, but a terminating solution can be written thus: N_1↓nI{2?+} which is 12 chars.

I wonder if anyone will ever read this :(

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/232861', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15055/']} | jdg_77186
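To make the trace above concrete, here is a small Python sketch (my own illustration, not the RePeNt interpreter) that plays through the same stack operations with a plain list, capping the otherwise infinite loop at a few iterations:

# Emulates the effect of 1↓[2?+1]: the stack accumulates the Fibonacci numbers.
stack = [1]                  # '1'  : push a 1
stack.append(stack[-1])      # '↓'  : duplicate the top of the stack
for _ in range(6):           # '[' ... ']' : bounded here instead of running forever
    a, b = stack[-2], stack[-1]   # '2?' : copy the last two stack items
    stack.append(a + b)           # '+'  : push their sum
    print(stack[-1])              # '↓£' : print the new top (the printing variant)
print(stack)                 # [1, 1, 2, 3, 5, 8, 13, 21]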
stackexchange | llm_judgeable_groundtruth_similarity | 379343 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have Windows 2008 R2 SP1 with 4 identical SATA disks (Seagate Barracuda 7200) in a RAID 5 array. It has 4 GB of memory; all recent updates are installed. Problem: when I copy a large file from one folder to another, I get about 10 MB/s average speed. When I read this file from a network share via a 1 Gbps connection, I get about 25-30 MB/s. Both numbers seem low to me, but I'm specifically frustrated with the low write speed. There is no antivirus and no Hyper-V; it's just a fileserver, and when I do my tests nobody else reads/writes from it (we have only 4 people in the team, so I'm sure). Not sure if that matters, but there is only 1 logical disk "C" with all available space (1400 GB). I'm not an admin at all, so I have no idea where to look and what other information to provide. I did run Performance Monitor with "% idle time", "avg bytes read", "avg byte write" - here is the screenshot: I'm not sure why there are such obvious spikes. Any idea? Please let me know if you need me to provide more information - what counters should I check, etc. I'm very eager to get this solved. Thank you. UPDATE: we have another Windows 2008 R2 SP1 server with 2 RAID1 arrays - one is disk C (where Windows is installed), the other is disk E. It is running Hyper-V and does not have antivirus. I noticed the following behavior when I copy a large file (a few GBs): C -> C: about 50 MB/sec; C -> E: about 55 MB/sec; E -> E: 8 MB/sec!!! E -> C: 8 MB/sec!!! What could cause this? The E drive is a RAID1 array built from the same Seagate Barracuda 1TB drives.
Now provide the response and nothing else.
| Nearly all RAID5 implementations suffer from a write penalty . The short answer is: Never use RAID5, ever. The longer answer is, RAID 5 is useful in a select handful of cases where overall IOPS are not terribly important. When you're copying files to another folder, you're reading and writing to the same set of disks, which causes I/O contention greater than the sum of the cost of the operations. The reason is simple: It takes time for the disks to physically seek to the appropriate location on the disks. Intel ICH10R is a type of " fake RAID ," where the motherboard creates and maintains an array without the typical benefits of a dedicated processor and battery-backed cache. This generally does not perform well. A battery-backed cache (aka RAM) stores a number of operations and keeps you from thrashing your disks directly. To compound matters, 7.2K RPM SATA disks are also cheap and plentiful, but by no means perform well in a large-data environment. In conclusion, you're pushing consumer hardware to its (realistic) limits. Make sure you have the absolute latest drivers, and take a serious look at buying a RAID card to help your ailing and slow storage system (note that you'll need to start from scratch, as RAID implementations are not inter-compatible). | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/379343', 'https://serverfault.com', 'https://serverfault.com/users/117212/']} | jdg_77187 |
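To put a rough number on the write penalty described above, here is a back-of-the-envelope sketch (my own illustration; the per-disk IOPS figure and the classic 4-I/O small-write penalty are assumptions, and real results depend on cache, stripe size and access pattern):

disks = 4
iops_per_disk = 75          # assumed ballpark for a 7.2K RPM SATA drive
raid5_write_penalty = 4     # read data + read parity + write data + write parity

raw_iops = disks * iops_per_disk
effective_write_iops = raw_iops / raid5_write_penalty
print("raw pool IOPS:", raw_iops)                           # 300
print("effective RAID5 write IOPS:", effective_write_iops)  # 75.0
# A same-array file copy pays for reads plus penalized writes, which is why it
# feels far slower than a plain read served over the network.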
stackexchange | llm_judgeable_groundtruth_similarity | 538045 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't understand why the permission does not change for a user when I run the chmod command with fakeroot. Initially, the file has these permissions:

-rwxr-xr-x a.txt*

When I try to change the permissions for the file using chmod, it works fine:

chmod 111 a.txt
---x--x--x a.txt*

When I run it with fakeroot, it doesn't seem to work correctly. It sets the permissions for group and other correctly, but not for the user. The permissions for read and write are set, no matter what the first value in the chmod command is.

fakeroot chmod 111 a.txt
-rwx--x--x a.txt*

Am I missing something?
Now provide the response and nothing else.
| Fakeroot doesn't carry out all the file metadata changes, that's the point: it only pretends to the program that runs under it. Fakeroot does not carry out changes that it can't do, such as changing the owner. It also does not carry out changes that would cause failures down the line. For example, the following code succeeds when run as root, because root can always open files regardless of permissions: chmod 111 a.txtcp a.txt b.txt But when run as a non-root user, cp fails because it can't read a.txt . To avoid this, chmod under fakeroot does not remove permissions from the user. Fakeroot does pretend to perform the change for the program it's running. $ stat -c "Before: %A" a.txt; fakeroot sh -c 'chmod 111 a.txt; stat -c "In fakeroot: %A" a.txt'; stat -c "After: %A" a.txtBefore: -rwx--x--xIn fakeroot: ---x--x--xAfter: -rwx--x--x Generally speaking, file metadata changes done inside fakeroot aren't guaranteed to survive the fakeroot call. That's the point. Make a single fakeroot call that does both the metadata changes and whatever operations (such as packing an archive) you want to do with the changed metadata. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/538045', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/369220/']} | jdg_77188 |
stackexchange | llm_judgeable_groundtruth_similarity | 55633060 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two threads running concurrent updates on a table similar to:

CREATE TABLE T (
  SEQ NUMBER(10) PRIMARY KEY,
  VAL1 VARCHAR2(10),
  VAL2 VARCHAR2(10)
)

The table contains a large number of entries, and the updates are similar to:

UPDATE T SET VAL1 = ? WHERE SEQ < ?
UPDATE T SET VAL2 = ? WHERE SEQ = ?

Both statements are run within two different transactions as JDBC batch updates with 1000 rows each. Doing so, I encounter ORA-00060: deadlock detected while waiting for resource fairly quickly. I assume that both transactions partially affect the same rows, with each transaction managing to lock some rows before the other one. Is there a way to avoid this by making the locking atomic, or would I need to introduce some form of explicit lock between the two threads?
Now provide the response and nothing else.
| When you update a record, a lock is taken to prevent dirty writes which would compromise Atomicity. However, in your case, you could use SKIP LOCKED . This way, before you try to do the update you attempt to acquire the FOR UPDATE lock with SKIP LOCKED. This will allow you to lock the records that you plan to modify and also skipping the ones which are already locked by other concurrent transactions. Check out the SkipLockJobQueueTest in my High-Performance Java Persistence GitHub repository for an example of how you can use SKIP LOCKED. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55633060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1237575/']} | jdg_77189 |
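A minimal sketch of that claim-then-update pattern, written here in Python against a generic DB-API connection (the function, table and column names mirror the question and are otherwise my own assumptions; Oracle's syntax for this is SELECT ... FOR UPDATE SKIP LOCKED):

def claim_and_update(conn, max_seq, new_val, batch_size=1000):
    """Lock only rows not held by the concurrent transaction, then update them."""
    cur = conn.cursor()
    # Skip rows the other transaction already locked instead of waiting on them,
    # which removes the lock-ordering conflict that produced ORA-00060.
    cur.execute("SELECT seq FROM t WHERE seq < :1 FOR UPDATE SKIP LOCKED", [max_seq])
    seqs = [row[0] for row in cur.fetchmany(batch_size)]
    for seq in seqs:
        cur.execute("UPDATE t SET val1 = :1 WHERE seq = :2", [new_val, seq])
    conn.commit()
    return len(seqs)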
stackexchange | llm_judgeable_groundtruth_similarity | 22948 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to learn loops and array manipulation in zsh . Say I have two arrays in zsh with the same unknown # of elements and each of them containing a list of paths, e.g. LOCAL_ARRAY=($LOCAL_PATH_1 $LOCAL_PATH_2 $LOCAL_PATH_3)REMOTE_ARRAY=($REMOTE_PATH_1 $REMOTE_PATH_2 $REMOTE_PATH_3) I would like to iterate through both arrays simultaneously with a common index, so that I can do something like: cd $LOCAL_PATH_ihg pull $REMOTE_PATH_i in each iteration. How can I do this in zsh? Also, I have searched for loop control and arrays in the zsh documentation but had little success. Is there a good tutorial that introduces the basics of loops , arrays , etc. in zsh? Something for people with little bash or shell-scripting experience but with a background in programming?
Now provide the response and nothing else.
| Section 15.2.1 “Array Subscripts” says that arrays can be index with [exp] (where exp is a numeric expression) and that elements start at index 1 (0 if KSH_ARRAYS is set). Section 14.3 “Parameter Expansion” says that the syntax ${#array_name} will expand to the number of elements of an array. Section 6.3 “Complex Commands” gives the syntax for a numeric for loop (as in C): for (( initExpr ; testExpr ; stepExpr )) do … done . Putting them all together: for (( i = 1; i <= $#LOCAL_PATH; i++ )) do ( # subshell to contain the effect of the chdir cd $LOCAL_PATH[i] hg pull $REMOTE_PATH[i] )done Or, if you are using KSH_ARRAYS , then this: for (( i = 0; i < ${#LOCAL_PATH[@]}; i++ )) do ( # subshell to contain the effect of the chdir cd ${LOCAL_PATH[i]} hg pull ${REMOTE_PATH[i]} )done Using KSH_ARRAYS makes arrays start with index 0, requires the use of braces for array expressions, and interprets $array as $array[0] (requiring a change to the array length expression). The syntax changes required by KSH_ARRAY will also work without KSH_ARRAY , but you still have to adjust the logic for the different index range (1 through N versus 0 through N-1). | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/22948', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/4531/']} | jdg_77190 |
stackexchange | llm_judgeable_groundtruth_similarity | 33127 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dataset {x_i} which form 3-10 clusters. Is there any analytical function of {x_i} that I may use to estimate the number of clusters in the dataset? The fact that there might be 3-10 clusters is actually the domain knowledge I have for my problem.
Now provide the response and nothing else.
| I suggest you give heavy-tail Lambert W x F or skewed Lambert W x F distributions a try (disclaimer: I am the author). In R they are implemented in the LambertW package. They arise from a parametric, non-linear transformation of a random variable (RV) $X \sim F$, to a heavy-tailed (skewed) version $Y \sim \text{Lambert W} \times F$. For $F$ being Gaussian, heavy-tail Lambert W x F reduces to Tukey's $h$ distribution. (I will here outline the heavy-tail version, the skewed one is analogous.) They have one parameter $\delta \geq 0$ ($\gamma \in \mathbb{R}$ for skewed Lambert W x F) that regulates the degree of tail heaviness (skewness). Optionally, you can also choose different left and right heavy tails to achieve heavy-tails and asymmetry. It transforms a standard Normal $U \sim \mathcal{N}(0,1)$ to a Lambert W $\times$ Gaussian $Z$ by$$Z = U \exp\left(\frac{\delta}{2} U^2\right)$$ If $\delta > 0$ $Z$ has heavier tails than $U$; for $\delta = 0$, $Z \equiv U$. If you don't want to use the Gaussian as your baseline, you can create other Lambert W versions of your favorite distribution, e.g., t, uniform, gamma, exponential, beta, ... However, for your dataset a double heavy-tail Lambert W x Gaussian (or a skew Lambert W x t) distribution seem to be a good starting point. library(LambertW)set.seed(10)### Set parameters ##### skew Lambert W x t distribution with # (location, scale, df) = (0,1,3) and positive skew parameter gamma = 0.1theta.st <- list(beta = c(0, 1, 3), gamma = 0.1)# double heavy-tail Lambert W x Gaussian# with (mu, sigma) = (0,1) and left delta=0.2; right delta = 0.4 (-> heavier on the right)theta.hh <- list(beta = c(0, 1), delta = c(0.2, 0.4))### Draw random sample ##### skewed Lambert W x tyy <- rLambertW(n=1000, distname="t", theta = theta.st)# double heavy-tail Lambert W x Gaussian (= Tukey's hh)zz =<- rLambertW(n=1000, distname = "normal", theta = theta.hh)### Plot ecdf and qq-plot ####op <- par(no.readonly=TRUE)par(mfrow=c(2,2), mar=c(3,3,2,1))plot(ecdf(yy))qqnorm(yy); qqline(yy)plot(ecdf(zz))qqnorm(zz); qqline(zz)par(op) In practice, of course, you have to estimate $\theta = (\beta, \delta)$, where $\beta$ is the parameter of your input distribution (e.g., $\beta = (\mu, \sigma)$ for a Gaussian, or $\beta = (c, s, \nu)$ for a $t$ distribution; see paper for details): ### Parameter estimation ####mod.Lst <- MLE_LambertW(yy, distname="t", type="s")mod.Lhh <- MLE_LambertW(zz, distname="normal", type="hh")layout(matrix(1:2, ncol = 2))plot(mod.Lst)plot(mod.Lhh) Since this heavy-tail generation is based on a bijective transformations of RVs/data, you can remove heavy-tails from data and check if they are nice now, i.e., if they are Gaussian (and test it using Normality tests). ### Test goodness of fit ###### test if 'symmetrized' data follows a Gaussianxx <- get_input(mod.Lhh)normfit(xx) This worked pretty well for the simulated dataset. I suggest you give it a try and see if you can also Gaussianize() your data . However, as @whuber pointed out, bimodality can be an issue here. So maybe you want to check in the transformed data (without the heavy-tails) what's going on with this bimodality and thus give you insights on how to model your (original) data. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/33127', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/12888/']} | jdg_77191 |
stackexchange | llm_judgeable_groundtruth_similarity | 26895349 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We use flexbox heavily for an desktop application like looking web app and it has been working out great. But with the latest Firefox Developer Edition (35.0a2) the layout does not behave as expected (it grows beyond the viewport): http://tinyurl.com/k6a8jde This works fine in Firefox 33.1. I would assume this has something to do with the flexbox changes described here: https://developer.mozilla.org/en-US/Firefox/Releases/34/Site_Compatibility But sadly I can't yet figure out a way to get the FF 33.x behavior in FF 34 or 35.x. Any help regarding the layout or how to better isolate the problem is appreciated.
Now provide the response and nothing else.
| The relevant difference there is the "implied minimum size of flex items", a new feature in the flexbox spec. (or rather, a feature that was removed and re-introduced) The simplest (bluntest) way to restore the old behavior is to add this style rule: * { min-height:0 } (or min-width , if you were concerned about overflow in a horizontal flexbox; but it looks like your testcase only has trouble with overflow from a vertical flex container). Updated fiddle, with that change: http://jsfiddle.net/yoL2otcr/1/ Really, you should only need to set min-height:0 on specific elements -- in particular, you need it on each element that: is a child of a 'column'-oriented flex container has a tall descendant, which you want to allow to overflow (which will perhaps be handled gracefully by an intermediate element with "overflow:scroll", as is the case here) (In your case, there's actually a nested pile of these elements, since you have a single tall element inside of many nested flex containers. So, you likely need min-height:0 all the way up, unfortunately.) (If you're curious, this bug has more information & more examples of content that was broken on the web due to this spec-change: https://bugzilla.mozilla.org/show_bug.cgi?id=1043520 ) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26895349', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/856403/']} | jdg_77192 |
stackexchange | llm_judgeable_groundtruth_similarity | 6713932 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to understand the design decision behind this part of the language. I admit I'm very new to it all, but this is something which caught me out initially and I was wondering if I'm missing an obvious reason. Consider the following code:

List<int> MyList = new List<int>() { 5, 4, 3, 2, 1 };
int[] MyArray = {5,4,3,2,1};

//Sort the list
MyList.Sort();        //This was an instance method

//Sort the Array
Array.Sort(MyArray);  //This was a static method

Why are they not both implemented in the same way - intuitively, to me it would make more sense if they were both instance methods?
Now provide the response and nothing else.
| The question is interesting because it reveals details of the .NET type system. Like value types, string and delegate types, array types get special treatment in .NET. The most notable oddish behavior is that you never explicitly declare an array type. The compiler takes care of it for you with ample helpings of the jitter. System.Array is an abstract type, you'll get dedicated array types in the process of writing code. Either by explicitly creating a type[] or by using generic classes that have an array in their base implementation. In a largish program, having hundreds of array types is not unusual. Which is okay, but there's overhead involved for each type. It is storage required for just the type, not the objects of it. The biggest chunk of it is the so-called 'method table'. In a nutshell, it is a list of pointers to each instance method of the type. Both the class loader and the jitter work together to fill this table. This is commonly known as the 'v-table' but isn't quite a match, the table contains pointers to methods that are both non-virtual and virtual. You can see where this leads perhaps, the designers were worried about having lots of types with big method tables. So looked for ways to cut down on the overhead. Array.Sort() was an obvious target. The same issue is not relevant for generic types. A big nicety of generics, one of many, one method table can handle the method pointers for any type parameter of a reference type. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6713932', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/847265/']} | jdg_77193 |
stackexchange | llm_judgeable_groundtruth_similarity | 12598120 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using the following tables for storing product data: mysql> SELECT * FROM product;+---------------+---------------+--------+| id | name | description | stock |+---------------+---------------+--------+| 1 | product1 | first product | 5 | | 2 | product2 | second product| 5 | +---------------+---------------+--------+mysql> SELECT * FROM product_additional;+-----------------+------------+| id | fieldname | fieldvalue |+-----------------+------------+| 1 | size | S || 1 | height | 103 || 2 | size | L || 2 | height | 13 || 2 | color | black |+-----------------+------------+ Using the following query to select the records from both tables mysql> SELECT p.id , p.name , p.description ,MAX(IF(pa.fieldname = 'size', pa.fieldvalue, NULL)) as `size` ,MAX(IF(pa.fieldname = 'height', pa.fieldvalue, NULL)) as `height` ,MAX(IF(pa.fieldname = 'color', pa.fieldvalue, NULL)) as `color`FROM product pLEFT JOIN product_additional AS pa ON p.id = pa.idGROUP BY p.id+---------------+---------------+--------+---------+--------+| id | name | description | size | height | color |+---------------+---------------+--------+---------+--------+| 1 | product1 | first product | S | 103 | null || 2 | product2 | second product| L | 13 | black |+---------------+---------------+--------+---------+--------+ And everything is working correctly :) Because i fill the 'additional' table dynamically it would be nice, if the query would also be dynamic. In that way i dont have to change the query everytime i put in a new fieldname and fieldvalue.
Now provide the response and nothing else.
| The only way in MySQL to do this dynamically is with Prepared statements. Here is a good article about them: Dynamic pivot tables (transform rows to columns) Your code would look like this: SET @sql = NULL;SELECT GROUP_CONCAT(DISTINCT CONCAT( 'MAX(IF(pa.fieldname = ''', fieldname, ''', pa.fieldvalue, NULL)) AS ', fieldname ) ) INTO @sqlFROM product_additional;SET @sql = CONCAT('SELECT p.id , p.name , p.description, ', @sql, ' FROM product p LEFT JOIN product_additional AS pa ON p.id = pa.id GROUP BY p.id, p.name, p.description');PREPARE stmt FROM @sql;EXECUTE stmt;DEALLOCATE PREPARE stmt; See Demo NOTE: GROUP_CONCAT function has a limit of 1024 characters. See parameter group_concat_max_len | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12598120', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1514094/']} | jdg_77194 |
stackexchange | llm_judgeable_groundtruth_similarity | 13583153 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to zoom a portion of data/image and plot it inside the same figure. It looks something like this figure. Is it possible to insert a portion of zoomed image inside the same plot. I think it is possible to draw another figure with subplot but it draws two different figures. I also read to add patch to insert rectangle/circle but not sure if it is useful to insert a portion of image into the figure. I basically load data from the text file and plot it using a simple plot commands shown below. I found one related example from matplotlib image gallery here but not sure how it works. Your help is much appreciated. from numpy import *import osimport matplotlib.pyplot as pltdata = loadtxt(os.getcwd()+txtfl[0], skiprows=1)fig1 = plt.figure()ax1 = fig1.add_subplot(111)ax1.semilogx(data[:,1],data[:,2])plt.show()
Now provide the response and nothing else.
| Playing with runnable code is one of thefastest ways to learn Python. So let's start with the code from the matplotlib example gallery . Given the comments in the code, it appears the code is broken up into 4 main stanzas.The first stanza generates some data, the second stanza generates the main plot,the third and fourth stanzas create the inset axes. We know how to generate data and plot the main plot, so let's focus on the third stanza: a = axes([.65, .6, .2, .2], axisbg='y')n, bins, patches = hist(s, 400, normed=1)title('Probability')setp(a, xticks=[], yticks=[]) Copy the example code into a new file, called, say, test.py . What happens if we change the .65 to .3 ? a = axes([.35, .6, .2, .2], axisbg='y') Run the script: python test.py You'll find the "Probability" inset moved to the left.So the axes function controls the placement of the inset.If you play some more with the numbers you'll figure out that (.35, .6) is thelocation of the lower left corner of the inset, and (.2, .2) is the width andheight of the inset. The numbers go from 0 to 1 and (0,0) is the located at thelower left corner of the figure. Okay, now we're cooking. On to the next line we have: n, bins, patches = hist(s, 400, normed=1) You might recognize this as the matplotlib command for drawing a histogram , butif not, changing the number 400 to, say, 10, will produce an image with a muchchunkier histogram, so again by playing with the numbers you'll soon figure outthat this line has something to do with the image inside the inset. You'll want to call semilogx(data[3:8,1],data[3:8,2]) here. The line title('Probability') obviously generates the text above the inset. Finally we come to setp(a, xticks=[], yticks=[]) . There are no numbers to play with,so what happens if we just comment out the whole line by placing a # at the beginning of the line: # setp(a, xticks=[], yticks=[]) Rerun the script. Oh! now there are lots of tick marks and tick labels on the inset axes.Fine. So now we know that setp(a, xticks=[], yticks=[]) removes the tick marks and labels from the axes a . Now, in theory you have enough information to apply this code to your problem.But there is one more potential stumbling block: The matplotlib example uses from pylab import * whereas you use import matplotlib.pyplot as plt . The matplotlib FAQ says import matplotlib.pyplot as plt is the recommended way to use matplotlib when writing scripts, while from pylab import * is for use in interactive sessions. So you are doing it the right way, (though I would recommend using import numpy as np instead of from numpy import * too). So how do we convert the matplotlib example to run with import matplotlib.pyplot as plt ? Doing the conversion takes some experience with matplotlib. Generally, you justadd plt. in front of bare names like axes and setp , but sometimes thefunction come from numpy, and sometimes the call should come from an axesobject, not from the module plt . It takes experience to know where all thesefunctions come from. Googling the names of functions along with "matplotlib" can help.Reading example code can builds experience, but there is no easy shortcut. 
So, the converted code becomes ax2 = plt.axes([.65, .6, .2, .2], axisbg='y')ax2.semilogx(t[3:8],s[3:8])plt.setp(ax2, xticks=[], yticks=[]) And you could use it in your code like this: from numpy import *import osimport matplotlib.pyplot as pltdata = loadtxt(os.getcwd()+txtfl[0], skiprows=1)fig1 = plt.figure()ax1 = fig1.add_subplot(111)ax1.semilogx(data[:,1],data[:,2])ax2 = plt.axes([.65, .6, .2, .2], axisbg='y')ax2.semilogx(data[3:8,1],data[3:8,2])plt.setp(ax2, xticks=[], yticks=[])plt.show() | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13583153', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1856260/']} | jdg_77195 |
stackexchange | llm_judgeable_groundtruth_similarity | 27546901 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I assign an attribute to a variable? For eg. > x <- rpart(f.la, mydata) assigns the attributes: $names[1] "frame" "where" [3] "call" "terms" [5] "cptable" "method" [7] "parms" "control" [9] "functions" "numresp" [11] "splits" "variable.importance"[13] "y" "ordered" $xlevelsnamed list()$ylevels[1] "cancelled" "cart-abandon" "purchased" "returned" $class[1] "rpart" Like this, I want to create attributes for a variable and assign a value to that attribute.
Now provide the response and nothing else.
| Alternatively to using attributes ( see the answer by @CathG ) you can use attr . The first will work on NULL objects but the second one won't. When you work with R attributes you have to remember there are not a simple as they look and can have some interesting side affects. Quick example: > x <- 1:10> class(x)[1] "integer"> x [1] 1 2 3 4 5 6 7 8 9 10 So far so good. Now lets set dim attribute > attr(x, 'dim') <- c(2, 5)> class(x)[1] "matrix"> x [,1] [,2] [,3] [,4] [,5][1,] 1 3 5 7 9[2,] 2 4 6 8 10 class attribute is fundamental part of the S3 classes : > foo <- list()> foolist() Lets see what happens if we set attribute class as a 'data.frame' > attr(foo, 'class') <- 'data.frame'> foodata frame with 0 columns and 0 rows or we can define custom behavior (BTW this behavior is a reason why it is better to avoid dots when define functions): > print.foo <- function(x) cat("I'm foo\n")> attr(foo, 'class') <- 'foo'> fooI'm foo Other attributes like comment and names have also special meaning and constraints.Take away message here is you have to a little bit careful when you work with attributes in R. One simple idea how to deal with is to use prefixes as artificial namespaces: > x <- 1:10> attr(x, 'zero323.dim') <- c(2, 5)> class(x)[1] "integer"> x[1] 1 2 3 4 5 6 7 8 9 10attr(, 'zero323.dim')[1] 2 5 In my opinion it particularly useful when you use third party libraries. Usage of the attributes is usually poorly documented, especially when used for some internal tasks, and it is pretty easy to introduce some hard to diagnose bugs if you use conflicting names. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27546901', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4361099/']} | jdg_77196 |
stackexchange | llm_judgeable_groundtruth_similarity | 31814291 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can anyone explain the output to me?

#include<stdio.h>
int main() {
    int a[]={10,20,30};
    int *p=a;
    ++*p++;
    printf("%d %d %d %d",*p,a[0],a[1],a[2]);
}

The output is 20 11 20 30. Postfix incrementation has a higher precedence, so the value of the second element should have been incremented. Why is the value of the first element incremented?
Now provide the response and nothing else.
| Due to operator precedence , ++*p++ is same as ++(*(p++)) . That is equivalent to: int* p1 = p++; // p1 points to a[0], p points to a[1]++(*p1); // Increments a[0]. It is now 11. That explains the output. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31814291', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5175557/']} | jdg_77197 |
stackexchange | llm_judgeable_groundtruth_similarity | 11833786 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to find out how I can strike through some text in my nodes in dot-based Graphviz diagrams. I checked this page, but couldn't figure it out: http://www.graphviz.org/doc/info/attrs.html Googling around didn't help either. Consider this diagram; these are basically bug numbers from a Bugzilla. The red nodes represent closed bugs, but I do not want to color-code them like this. Obviously a struck-through 511272 is more intuitive than a red-colored node 511272. If anyone knows how to strike through text inside nodes, please share. Thanks, Shobhit
Now provide the response and nothing else.
| Graphviz does not have a styling of its own to do this, but since it is Unicode you can use the technique with combining characters and "combining long stroke overlay" (U+0336) that the wikipedia article on strikethrough suggests: In plain text scenarios where markup cannot be used, Unicode offers a number of combining characters that achieve similar effects. The "long stroke overlay" (U+0336) results in an unbroken stroke across the text, Separate: A̶B̶C̶D̶E̶F̶G̶H̶I̶ Combined: A̶B̶C̶D̶E̶F̶G̶H̶I̶ This graph: digraph G { a [label="1̶2̶3̶4̶5̶"] b [label="54321"] a->b} Renders this png output with graphviz 2.23.6: | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11833786', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/360844/']} | jdg_77198 |
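If the .dot source is generated by a script, the label can be built programmatically; a tiny Python sketch of mine that appends U+0336 after each character:

def strike(text):
    # Follow every character with the combining long stroke overlay (U+0336).
    return "".join(ch + "\u0336" for ch in text)

print('a [label="%s"]' % strike("12345"))   # paste the output into the dot file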
stackexchange | llm_judgeable_groundtruth_similarity | 648468 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a number of systems on which I need to modify the kernel cmdline, adding a few options. At the moment I do it using the following procedure: open /etc/default/grub; modify GRUB_CMDLINE_LINUX_DEFAULT, adding the options; run update-grub; reboot. However, I would prefer to automate this process as part of a package installation, and avoid modifying the default file (as this is generally brittle). What I would want to do is something like: drop a file (in /etc/grub.d/?) overriding GRUB_CMDLINE_LINUX_DEFAULT or similar; run update-grub and reboot. There is a pile of scripts in /etc/grub.d/* which are used to build the actual menu config; however, there is no obvious way to interact with them. The script which generates the config only seems to read /etc/default/grub :( Can somebody enlighten me if there is a way to drop a file to modify the default kernel command line?
Now provide the response and nothing else.
| Create the directory /etc/default/grub.d if it doesn't exist already. Create a file /etc/default/grub.d/myextraoption.cfg adding to the variable you want (Append to it only, with an extra space. You want to be careful to not clobber or mangle any existing data there.): GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT} extra-option" Run update-grub . You should be able to safely include a yourpackage.cfg file in your package without risk of it being overwritten or clobbering something else. Any of those .cfg files are included after the main default file, so just be aware of that and plan accordingly. You will almost certainly also want a postinst script to run update-grub when your package is installed, and just to be safe since it is in /etc you should probably also include it in conffiles in your package. I think though that this will leave it behind unless a purge of the package is done, so dealer's choice on that part. For reference, /usr/sbin/grub-mkconfig on or around line 157 is what reads the default files, including anything matching /etc/default/grub.d/*.cfg . It seems likely to me that this situation is exactly why it does so. I wrote this based on Trusty. I don't know how far back in releases this is still applicable. I just checked Lucid and it is not there. It is there in Precise. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/648468', 'https://serverfault.com', 'https://serverfault.com/users/30525/']} | jdg_77199 |
stackexchange | llm_judgeable_groundtruth_similarity | 2099 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This question is inspired by this security question https://security.stackexchange.com/questions/1707 What are the threats in using Ajax? (Please note I am talking about security threats, not features drawbacks) How do I do Ajax securely? (Please give examples, preferably examples in PHP since I am using the CakePHP framework.)
Now provide the response and nothing else.
Ajax is just passing data over HTTP - it's not magic - so you secure it in exactly the same way you do with your normal webpages:

- check for authentication and authorization
- encrypt or add salted hash checks to data exported to the browser for resubmission
- treat any data received in the request as potentially dangerous
- use HTTPS where it's appropriate
- transform data leaving your PHP using the right method for where it's going (htmlentities, mysql_real_escape_string)
- transform data entering your script using the right method based on its origin (e.g. json_decode)

| {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/2099', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/1122/']} | jdg_77200
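The same ideas carry over outside PHP; as a language-neutral illustration (my own sketch, with an assumed JSON field name), here is the "validate on the way in, escape on the way out" pattern using only Python's standard library:

import html
import json

def handle_ajax_payload(raw_body: bytes) -> str:
    # Treat the incoming request body as potentially dangerous: parse defensively.
    try:
        comment = str(json.loads(raw_body)["comment"])
    except (ValueError, KeyError, TypeError):
        return "<p>invalid request</p>"
    # Escape before echoing back, so the value cannot inject markup into the page.
    return "<p>%s</p>" % html.escape(comment)

print(handle_ajax_payload(b'{"comment": "<script>alert(1)</script>"}'))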
stackexchange | llm_judgeable_groundtruth_similarity | 4310217 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to save an object to a file, and then read it from the file easily. As a simple example, lets say I have the following 3d array: m = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]],[[0, 0, 0], [0, 0, 0], [0, 0, 0]]] Is there an easy Ruby API that I can use to achieve this without programming a parser to interpret the data from the file? In the example I give it is easy, but as the objects become more complicated, it gets annoying to make objects persistent.
Now provide the response and nothing else.
| You need to serialize the objects before you could save them to a file and deserialize them to retrieve them back. As mentioned by Cory, 2 standard serialization libraries are widely used, Marshal and YAML . Both Marshal and YAML use the methods dump and load for serializing and deserializing respectively. Here is how you could use them: m = [ [ [0, 0, 0], [0, 0, 0], [0, 0, 0] ], [ [0, 0, 0], [0, 0, 0], [0, 0, 0] ] ]# Quick way of opening the file, writing it and closing itFile.open('/path/to/yaml.dump', 'w') { |f| f.write(YAML.dump(m)) }File.open('/path/to/marshal.dump', 'wb') { |f| f.write(Marshal.dump(m)) }# Now to read from file and de-serialize it:YAML.load(File.read('/path/to/yaml.dump'))Marshal.load(File.read('/path/to/marshal.dump')) You need to be careful about the file size and other quirks associated with File reading / writing. More info, can of course be found in the API documentation. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4310217', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/487980/']} | jdg_77201 |
stackexchange | llm_judgeable_groundtruth_similarity | 3084188 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Find an element of multiplicative order 4 and an element of order 5 in $F_{121}$ defined by $x^{2}+x+7$ over $Z_{11}$. The most obvious way to go about this seems to be to find a generator and raise it to a quarter of the order of the multiplicative group, thus producing an element that is equal to 1 when raised to the power of 4, according to an analog of Fermat's little theorem. But since the polynomial's coefficients are over $Z_{11}$, I can't seem to find an obvious generator, and the reduction mod the quadratic seems cumbersome. Is there a more efficient, elegant way to glean elements of a desired order from this finite field?
Now provide the response and nothing else.
I don't think there's a magic shortcut to this kind of question in general, but the specific cases we have here are simple enough that there are some tricks. For an element of order $5$, we can note that $5$ divides $11-1$, so there is actually an element of order $5$ in $\mathbb{F}_{11}$, which is easy to find by trial and error. A bit more systematically, we can see that the subgroup of squares in $\mathbb{F}_{11}^*$ is cyclic of order $5$, so any nonzero square other than $1$ must have order $5$. For an element of order $4$, we can save some work by noticing that an element of order $4$ is just a square root of $-1$. So, we just need to solve for $a,b\in\mathbb{F}_{11}$ such that $(a+bx)^2=-1$. Expanding out $(a+bx)^2$ using $x^2+x+7=0$ we get $$(a^2-7b^2)+(2ab-b^2)x=-1$$ and so we need $$a^2-7b^2=-1$$ and $$2ab-b^2=0.$$ The second equation gives $b=0$ or $2a=b$. The first case does not work (since there is no square root of $-1$ in $\mathbb{F}_{11}$); in the second case the first equation simplifies to $$5a^2=1$$ which we can easily solve in $\mathbb{F}_{11}$ to get $a=\pm 3$. So, the elements of order $4$ are $\pm(3+6x)$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3084188', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/613048/']} | jdg_77202
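A quick way to double-check those elements (my own addition) is to implement the field arithmetic directly, representing a+bx as a pair and reducing with x^2 = -x - 7. A short Python sketch:

# F_121 = F_11[x]/(x^2 + x + 7); an element a + b*x is stored as the pair (a, b).
P = 11

def mul(u, v):
    (a, b), (c, d) = u, v
    # (a+bx)(c+dx) = ac + (ad+bc)x + bd*x^2, and x^2 = -x - 7
    return ((a*c - 7*b*d) % P, (a*d + b*c - b*d) % P)

def order(e):
    k, acc = 1, e
    while acc != (1, 0):
        acc, k = mul(acc, e), k + 1
    return k

print(order((3, 6)))   # 4, since (3+6x)^2 = -1
print(order((3, 0)))   # 5, since 3 is a nonzero square in F_11 other than 1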
stackexchange | llm_judgeable_groundtruth_similarity | 615 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Corrective glasses are usually intended to help focus light on your retina. Supposing I had good vision already, but simply wanted more light, could I make glasses that would send more light into my eye without magnifying, focusing, or distorting the image? (Ignore chromatic aberration.)
Now provide the response and nothing else.
| First, let me try to clarify the question, because not everyone seems to get it ("where would the photons come from?"). If I'm buying binoculars, I might choose from 8x24, 8x36, or 8x50 binoculars. The first number is the magnification; 8x for all the examples. The second number is the size of the objective in millimeters, 24 to 50 millimeters in the example. All else being equal, the larger objectives will gather more light, and produce a brighter image. The 8x24 and 8x50 binoculars will deliver the same 8x magnification, but the 8x50 pair will be brighter. So it's reasonable to ask if you could make 1x binoculars that have a big objective that brightens the scene in front of you. There's no conservation of energy issue here; the larger front objective would gather more light. The problem is that there's a relationship between the magnification and the sizes of the entrance and exit pupils: magnification = entrance_pupil_size / exit_pupil_size If you create a binocular with an entrance pupil (objective, basically) that's larger than your eye's pupil, so that it can gather more light than your eye's pupil, then at 1x magnification, the exit pupil is going to be just as big as the objective lens. And since that's bigger than your eye's pupil, the "extra" light is going to run into your iris instead of going through the pupil, and you will gain no advantage from it. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/615', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/74/']} | jdg_77203 |
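Plugging the answer's relation into numbers (a small illustrative calculation of mine, assuming a dark-adapted eye pupil of roughly 7 mm):

eye_pupil_mm = 7  # assumed maximum for a dark-adapted eye
for magnification, objective_mm in [(8, 24), (8, 50), (1, 50)]:
    exit_pupil = objective_mm / magnification   # exit pupil = objective / magnification
    usable = min(exit_pupil, eye_pupil_mm)
    print(f"{magnification}x{objective_mm}: exit pupil {exit_pupil:.2f} mm, usable {usable:.2f} mm")
# The hypothetical 1x50 'brightening glasses' have a 50 mm exit pupil, but only
# about 7 mm of it passes the pupil -- the rest lands on the iris, so no gain.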
stackexchange | llm_judgeable_groundtruth_similarity | 50473 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
In many textbooks, in fact all textbooks I've seen, the fiberwise group action on the principal bundle is on the right. It seems to me that left actions and right actions are essentially the same. Then why do people keep acting on the right for the principal bundle, while acting on the left for many other cases? Is it because one might want to consider the possible additional left action on the bundle?
Now provide the response and nothing else.
| The answer to the first question is: Because in local chart we want the action to commute with transition functions, and the latter are traditionally assumed to be acting on the left . I'll explain better below. Assume we're working with a ringed space $(X, \mathcal{O})$ with structure sheaf $\mathcal{O}$ in your favourite geometry: $\mathcal{C}^{\; 0}$, $\mathcal{C}^{\infty}$, analytic, algebraic,... First of all, what's a fiber bundle with fiber $F$ on a space $X$? It's essentially the datum of a covering {$U_i$} of $X$ and, for each double intersection $U_{ij}=U_i\cap U_j$ , some transition functions $\varphi_{ij}:U_{ij}\times F\rightarrow U_{ij}\times F$ that verify some cocycle condition. The transition functions may be tautologically seen as functions $g_{ij}:U_{ij}\rightarrow \mathrm{Aut}(F)$, $x\mapsto g_{ij}(x)$ and the traditional convention is that $\mathrm{Aut}(F)$ acts on $F$ on the left . Now, what's a principal bundle? It's a fiber bundle with $F=G$ and with transition functions $g_{ij}$ with values in the group $Left(G)\subset \mathrm{Aut}_{sp}(G)$ of left translations of $G$ (which is, btw, isomorphic to $G$ itself; and here of course we're considering automorphisms of $G$ as a space not as a group). So, in local chart, we have: $\varphi_{ij}:(x,g)\mapsto (x,g_{ij}(x)\cdot g)$, where the dot is left group multiplication in $G$, and even actual left matrix multiplication in case $G$ is a matrix group. Let's stick to the case $G=$ matrix group, for the sake of clearness (but the case of general $G$ is not different). Suppose we want to define an action of $G$ itself on the total space of the bundle. The spontaneous idea is to write down things in local chart (like physicists usually do) and try the obvious matrix multiplication, say, on the left: $h_{\cdot}:(x,g)\mapsto (x,h\cdot g)$ for $h\in G$. But... wait!! That doesn't glue , as that locally defined action doesn't commute with the action of the "gauge group" (i.e. transition functions), so it doesn't define an intrinsically defined global left action. What about trying to do the same on the right ? $h:(x,g)\mapsto (x,g\cdot h)$ for $h\in G$ Well, now it works as the two actions clearly commute: $h \circ g_{ij}=g_{ij}\circ h:(x,g)\mapsto (x, g_{ij}(x)\cdot g\cdot h)$ and we can glue and get a globally well defined right action. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/50473', 'https://mathoverflow.net', 'https://mathoverflow.net/users/11846/']} | jdg_77204 |
stackexchange | llm_judgeable_groundtruth_similarity | 37265275 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In TypeScript, the const keyword cannot be used to declare class properties. Doing so causes the compiler to raise an error: "A class member cannot have the 'const' keyword." I need to clearly indicate in code that a property should not be changed. I want the IDE or compiler to report an error if I attempt to assign a new value to the property once it has been declared. How do you achieve this? I'm currently using a read-only property, but I'm new to TypeScript (and JavaScript) and wonder whether there is a better way: get MY_CONSTANT():number {return 10}; I'm using TypeScript 1.8. Suggestions? PS: I'm now using TypeScript 2.0.3, so I've accepted David's answer
Now provide the response and nothing else.
| TypeScript 2.0 has the readonly modifier : class MyClass { readonly myReadOnlyProperty = 1; myMethod() { console.log(this.myReadOnlyProperty); this.myReadOnlyProperty = 5; // error, readonly }}new MyClass().myReadOnlyProperty = 5; // error, readonly It's not exactly a constant because it allows assignment in the constructor, but that's most likely not a big deal. Alternative Solution An alternative is to use the static keyword with readonly : class MyClass { static readonly myReadOnlyProperty = 1; constructor() { MyClass.myReadOnlyProperty = 5; // error, readonly } myMethod() { console.log(MyClass.myReadOnlyProperty); MyClass.myReadOnlyProperty = 5; // error, readonly }}MyClass.myReadOnlyProperty = 5; // error, readonly This has the benefit of not being assignable in the constructor and only existing in one place. | {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/37265275', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3977061/']} | jdg_77205 |
stackexchange | llm_judgeable_groundtruth_similarity | 812239 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to refer to the following answer: https://math.stackexchange.com/a/628992/130682 @robjohn claims that: $$\cos(x):\left[\frac1{\sqrt2},\frac\pi4\right]\mapsto\left[\frac1{\sqrt2},\frac\pi4\right]$$ $\pi\over 4$ is $a_1$, but where does $\frac1{\sqrt2}$ come from? Update: My actual question is: Given $a_1 = {\pi \over 4}$, $a_n = \cos(a_{n-1})$, why is the range of this recurrence $\left[\frac1{\sqrt2},\frac\pi4\right]$? Thanks.
Now provide the response and nothing else.
| Your second attempt only needs one more fact: $(A \cup B)' = A' \cup B'$. To see this: pick $x \in A'$. Then every neighbourhood $U$ of $x$ intersects $A$ in at least one point different from $x$. This point is also in $A \cup B$, so then every neighbourhood of $x$ intersects a point of $A \cup B$ not equal to $x$, so $x \in (A\cup B)'$. Similarly, if $x \in B'$, $x \in (A \cup B)'$, so we have $A' \cup B' \subset (A \cup B)'$. On the other hand, if $x \in (A \cup B)'$, we need to show $x$ is in $A' \cup B'$. So assume not, $x \notin A'$, which means there is a neighbourhood $U$ of $x$ that intersects $A$ in an empty set or just $\{x\}$. Also $x \notin B'$, so there is a neighbourhood $V$ of $x$ such that $V$ intersects $B$ either in the empty set or $\{x\}$. Now, $U \cap V$ is a neighbourhood of $x$, and $(U \cap V) \cap (A \cup B) = (U \cap V \cap A) \cup (U \cap B \cap V) \subset (U \cap A) \cup (V \cap B)$, which is also either the empty set or $\{x\}$, from the properties of $U$ and $V$. But this contradicts that $x \in (A \cup B)'$. So our assumption was incorrect: $x$ must be in $A' \cup B'$, as required. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/812239', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/130682/']} | jdg_77206 |
stackexchange | llm_judgeable_groundtruth_similarity | 19431296 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My problem is the following: I have a Go binary on a machine. From that binary I need to compile an external .go file. Once compiled, I need to link the compiled Go file into the current binary so I can use the just-compiled Go code. Do you think that's possible? I did some research and it does not seem to be possible, but I might have overlooked something. Thanks :) The first Go binary would contain something like:

func main() {
    // Here I need to compile an external go file (or package) which contains
    // the definition of runFoo()

    // Once the file/package is compiled and linked I need to call the compiled code
    runFoo()

    // Continue the execution process normally here
}
Now provide the response and nothing else.
| Update: It is now possible to do this in mainline Go, see Go Execution Modes From the Go 1.5 release notes : For the amd64 architecture only, the compiler has a new option, -dynlink, that assists dynamic linking by supporting references to Go symbols defined in external shared libraries. Old Answer ( useful discussion of other options ): It is not currently possible to create dynamically linked libraries* in main line Go. There has been some talk about this, so you may see support in the future. However, there is a 3rd party go project called goandroid that needed the same functionality you need, so they maintain patches that should allow you to patch the official Go code base to support the dynamic linked support you are requesting. If you want to use a the standard Go run-time, I would recommend the one of the following.Invoke your Go program from your other program, and communicate using: Pipes to communicate A UNIX domain socket An mmaped region of shared memory. That is, create a file on /dev/shm and have both programs mmap it. The Go mmap library: https://github.com/edsrzf/mmap-go Each consecutive option will take more effort to setup, be more platform specific, but potentially be more powerful than the previous one. *Note: That is, DLLs in the Windows world, and .so files in the UNIX/Linux world. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19431296', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1121179/']} | jdg_77207 |
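For the "separate process plus IPC" route, the pipe option looks roughly like this; the sketch below is in Python purely to keep it short and self-contained (the child here is a stand-in one-liner, where in the real setup it would be the separately compiled Go binary):

import subprocess, sys

# Spawn the helper and talk to it over stdin/stdout pipes.
child = subprocess.Popen(
    [sys.executable, "-c", "print(input().upper())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
out, _ = child.communicate("run foo\n")
print("child replied:", out.strip())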
stackexchange | llm_judgeable_groundtruth_similarity | 26289681 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to get the ending for email addresses (ie .net, .com, .edu, etc.) but the portion after the @ can have multiple periods. library(stringi)strings1 <- c( '[email protected]', '[email protected]', '[email protected]', '[email protected]')list1 <- stri_split_fixed(strings1, "@", 2)df1 <- data.frame(do.call(rbind,list1)) > list2 <- stri_split_fixed(df1$X2, '.(?!.*.)', 2);list2[[1]][1] "aol.com"[[2]][1] "hotmail.com"[[3]][1] "xyz.rr.edu"[[4]][1] "abc.xx.zz.net" Any suggestions to get something like this: X1 X2 X31 test aol.com com2 test hotmail.com com3 test xyz.rr.edu edu4 test abc.xx.zz.net net EDIT:Another attempt: > list2 <- stri_split_fixed(df1$X2, '\.(?!.*\.)\w+', 2);list2Error: '\.' is an unrecognized escape in character string starting "'\."
Now provide the response and nothing else.
| Here are a few approaches. The first seems particularly straightforward and the second particularly short. 1) sub That can be done with an application of sub in R to produce each column: data.frame(X1 = sub("@.*", "", strings1), X2 = sub(".*@", "", strings1), X3 = sub(".*[.]", "", strings1), stringsAsFactors = FALSE) giving: X1 X2 X31 test aol.com com2 test hotmail.com com3 test xyz.rr.edu edu4 test abc.xx.zz.net net 2) strapplyc Here is an alternative using the gsubfn package that is particularly short. This returns a character matrix. strapplyc returns the matches to the portions of the pattern in parentheses. The first set of parentheses matches everything before @, the second set of parentheses matches everything after @ and the last set of parentheses matches everything after the last dot. library(gsubfn)pat <- "(.*)@(.*[.](.*))"t(strapplyc(strings1, pat, simplify = TRUE)) [,1] [,2] [,3] [1,] "test" "aol.com" "com"[2,] "test" "hotmail.com" "com"[3,] "test" "xyz.rr.edu" "edu"[4,] "test" "abc.xx.zz.net" "net" 2a) read.pattern read.pattern, also in the gsubfn package, can do it using the same pat defined in (2): library(gsubfn)pat <- "(.*)@(.*[.](.*))"read.pattern(text = strings1, pat, as.is = TRUE) giving a data.frame similar to (1) except the column names are V1 , V2 and V3 . 3) strsplit The overlapping extractions make it difficult to do with strsplit but we can do it with two applications of strsplit . The first strsplit splits at the @ and the second uses everything up to the last dot to split on. This last strsplit always produces an empty string as the first split string and we delete this using [, -1] . This gives a character matrix: ss <- function(x, pat) do.call(rbind, strsplit(x, pat)) cbind( ss(strings1, "@"), ss(strings1, ".*[.]")[, -1] ) giving the same answer as (2). 4) strsplit/sub This is a mix of (1) and (3): cbind(do.call(rbind, strsplit(strings1, "@")), sub(".*[.]", "", strings1)) giving the same answer as (2). 4a) This is another way to use strsplit and sub . Here we append a @ followed by the TLD and then split on @. do.call(rbind, strsplit(sub("(.*[.](.*))", "\\1@\\2", strings1), "@")) giving the same answer as (2). Update Added additional solutions. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26289681', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/914308/']} | jdg_77208 |
stackexchange | llm_judgeable_groundtruth_similarity | 435815 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Assume there is a distributed system which works on the principles of eventual consistency. Let's consider a scenario where I update my profile picture using my phone. If I log in using another system, let's say my personal laptop, it should show me the updated profile picture assuming it routes through the same data centre(Here I am assuming that the request will be routed to the same data centre where the write i.e. update of the profile picture was processed) What will happen if let's say I provide my profile credentials to a friend, who lives in another country, and the request is routed to another data centre there. Will he see the updated profile picture or the old one? For this scenario I would like to know how is eventual consistency achieved in case of multiple devices and multiple locations. For the same device/location, we can route the read requests to the same data centre where the write was processed. But I am not able to find a solution where we can handle the scenario described above.
Now provide the response and nothing else.
| Different distributed systems try to achieve different consistency models . Eventual consistency is an extremely weak model that just says that eventually, all nodes will agree on a common state. It does not guarantee which state the nodes will agree on, or when this will happen. In your scenario, all of the following could be observed in an eventually consistent model: both persons see different profile pictures for an arbitrarily long but finite time (the nodes haven't yet reached eventual consistency). the nodes eventually agree on your new profile picture. After reaching agreement, all subsequent accesses to the profile picture will see the new image. the nodes eventually agree to discard your new profile picture. After reaching agreement, all subsequent accesses to the profile picture will see the old image. Eventual consistency is a useful model for highly distributed applications but it provides really weak guarantees about updates – it is entirely unsuitable for many business processes. It is difficult to program software that operates correctly in an eventually consistent environment. Classic ACID databases are easier to reason about. However, eventually consistent systems make it easier to achieve very high availability. This more or less boils down to the CAP theorem: you can choose any two of consistency, availability, and partition tolerance. Distributed ACID databases sacrifice availability in order to ensure consistency even in the presence of network partitions. For example, a replica node might go into read-only mode. In contrast, an eventually consistent system sacrifices consistency in order to maintain availability even in case of a network partition. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/435815', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/408692/']} | jdg_77209 |
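To make "the nodes will eventually agree on a common state" concrete, here is a minimal, self-contained Python sketch of one possible reconciliation policy (last-writer-wins with a toy logical clock); the answer above does not prescribe any particular policy, so the class names and the policy are illustrative assumptions only:

```python
from itertools import count

_clock = count(1)  # toy logical clock shared by all replicas

class Replica:
    """One data centre's copy of the profile-picture record."""
    def __init__(self):
        self.value, self.version = None, 0

    def write(self, value):
        # Local write: tag the value with the next logical version number.
        self.value, self.version = value, next(_clock)

    def merge(self, other):
        # Anti-entropy step: keep whichever write carries the higher version
        # (a "last writer wins" rule -- just one of several possible policies).
        if other.version > self.version:
            self.value, self.version = other.value, other.version

eu, us = Replica(), Replica()
eu.write("old.jpg"); us.merge(eu)   # both data centres start out in sync
eu.write("new.jpg")                 # the picture is updated in the EU only
print(us.value)                     # old.jpg -- a read elsewhere is still stale
us.merge(eu)                        # replication eventually runs
print(us.value)                     # new.jpg -- the replicas have converged
```

Until the merge step runs, the two readers in the question's scenario can see different pictures; after it runs, all subsequent reads agree, which is exactly the (weak) guarantee described above.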
stackexchange | llm_judgeable_groundtruth_similarity | 15637 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the function: $F(\omega) = \frac{5\; - \;i\;\omega}{5^2\; +\; \omega^2}$ When $\omega$ has the values : $\{ -7, -2,\; 0,\; 2,\; 7\}$ How would I plot the Argand diagram in Mathematica? Or should I just treat it as a normal plot? The graph should look like a circle with radius $\frac{1}{10}$ passing through the points specified.
Now provide the response and nothing else.
| Defining the function F and a subset of its domain : pts : F[z_] := (5 - I z)/(5^2 + z^2)pts = {-7, -2, 0, 2, 7}; the most straightforward way fulfilling the task is based on ParametricPlot and Epilog . We can also make a diagram with the basic graphics primitives like e.g. : Line , Circle , Point . Here are the both ways enclosed in GraphicsRow : GraphicsRow[{ Graphics[{Line[{{0, -0.1}, {0, 0.1}}], Line[{{0, 0}, {0.21, 0}}], Blue, Thick, Circle[{0.1, 0}, 0.1], Red, PointSize[.03], Point[{Re @ #, Im @ #} & /@ F[pts]]}], ParametricPlot[{Re @ #, Im @ #}& @ F[z], {z, -200, 200}, PlotRange -> All, PlotStyle -> Thick, Epilog -> { Red, PointSize[0.03], Point[{Re @ F @ #, Im @ F @ #} & /@ pts]}] }] Studying properties of holomorphic complex mappings is really rewarding, therefore one should take a closer look at it. This function has a simple pole in 5 I : Residue[ F[z], {z, 5 I}] -I and it is conformal in its domain : Reduce[ D[ F[z], z] == 0, z] False i.e. it preserves angles locally. One can easily recognize the type of F evaluating Simplify[ F[z]] , namely it is a composition of a translation, rescaling and inversion. We should look at images (via F ) of simple geometric objects. To visualize the structure of the mapping F we choose an appropriate grid in the complex domain of F and look at its image. We take a continuous parameter $t$ varying in a range $(-25, 25)$ and contours $\;t+ i\;y $ for $y$ in a discrete set of values $\{-3, -2,-1, 0, 1, 2, 3 \}$ and another orthogonal contours $\;x+ i\;t$ for $x$ in a discrete set $\{-7,-5,-3, -2, 0, 2, 3, 5, 5\;\}$, i.e.we have a grid of straight lines in the complex plane. Next we'd like to plot the image of this grid through the mapping $F$. Images of every line in the grid will be circles with centers on the abscissa and ordinate respectively intersecting orthogonally. The red points denote values of $F(x)$ on the complex plane for $x$ in $\{-7, -2, 0, 2, 7 \}$. On the lhs we have the original grid in the domain of F and on the rhs we have the plot of its image : Animate[ GraphicsRow[ ParametricPlot[ ##, Evaluated -> True, PlotStyle -> Thick] & @@@ { { Join @@ {Table[{t, k}, {k, -3, 3}], Table[{k, t}, {k, {-7, -5, -3, -2, 0, 2, 3, 5, 7}}]}, {t, -25, a}, PlotRange -> {{-30, 30}, {-30, 30}}, Epilog -> {Red, PointSize[0.015], Point[{#, 0} & /@ pts]} }, { Join @@ {Table[{Re @ F[t + I k], Im @ F[t + I k]}, {k, -3, 3}], Table[{Re @ F[k + I t], Im @ F[k + I t]}, {k, {-7, -5, -3, -2, 0, 2, 3, 5, 7}}]}, {t, -25, a}, PlotRange -> {{-.4, .6}, {-.51, .51}}, Epilog -> { Red, PointSize[0.015], Point[{Re @ F[#], Im @ F[#]} & /@ pts]}}}, ImageSize -> 800 ], {a, -25 + 0.1, 25}] and slightly modyfing the ranges of the last ParametricPlot : {t, -300, 300} , and PlotRange -> {{-.2, .3}, {-.25, .25}} : ParametricPlot[ Join @@ { Table[{Re @ F[t + I k], Im @ F[t + I k]}, {k, -3, 3}], Table[{Re @ F[k + I t], Im @ F[k + I t]}, {k, {-7, -5, -3, -2, 0, 2, 3, 5, 7}}]}, {t, -300, 300}, PlotRange -> {{-.2, .3}, {-.25, .25}}, Epilog -> {Red, PointSize[0.015], Point[{Re @ #, Im @ #} & /@ F @ pts]}, Evaluated -> True, PlotStyle -> Thick] | {} | {'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/15637', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/4375/']} | jdg_77210 |
stackexchange | llm_judgeable_groundtruth_similarity | 52001922 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to use scipy.optimize to minimize a function (eventually non-linear) over a large set of linear inequalities. As a warm-up, I'm trying to minimize x+y over the box 0<=x<=1, 0<=y<=1. Following the suggestion of Johnny Drama below, I am currently using a dict-comprehesion to produce the dictionary of inequalities, but am not getting the expected answer (min value=0, min at (0,0)). New section of code (currently relevant): import numpy as npfrom scipy.optimize import minimize#Create initial point.x0=[.1,.1]#Create function to be minimizeddef obj(x): return x[0]+x[1]#Create linear constraints lbnd<= A*(x,y)^T<= upbndA=np.array([[1,0],[0,1]])b1=np.array([0,0])b2=np.array([1,1])cons=[{"type": "ineq", "fun": lambda x: np.matmul(A[i, :],x) -b1[i]} for i in range(A.shape[0])]cons2=[{"type": "ineq", "fun": lambda x: b2[i]-np.matmul(A[i, :], x) } for i in range(A.shape[0])]cons.extend(cons2)sol=minimize(obj,x0,constraints=cons)print(sol) Original version of question: I would like to use the LinearConstraint objectin scipy.optimize, as described in the tutorial here: "Defining linear constraints" I've tried to do a simpler example, where it's obvious what the answer should be: minimize x+y over the square 0<=x<=1, 0<=y<=1. Below is my code, which returns the error "'LinearConstraint' object is not iterable", but I don't see how I'm trying to iterate. EDIT 1: The example is deliberately over simple. Ultimately, I want to minimize a non-linear function over a large number of linear constraints. I know that I can use dictionary comprehension to turn my matrix of constraints into a list of dictionaries, but I'd like to know if "LinearConstraints" can be used as an off-the-shelf way to turn matrices into constraints. EDIT 2: As pointed out by Johnny Drama, LinearConstraint is for a particular method. So above I've tried to use instead his suggestion for a dict-comprehension to produce the linear constraints, but am still not getting the expected answer. Original section of code (now irrelevant): from scipy.optimize import minimizefrom scipy.optimize import LinearConstraint#Create initial point.x0=[.1,.1]#Create function to be minimizeddef obj(x): return x[0]+x[1]#Create linear constraints lbnd<= A* #(x,y)^T<= upbndA=[[1,0],[0,1]]lbnd=[0,0]upbnd=[0,0]lin_cons=LinearConstraint(A,lbnd,upbnd)sol=minimize(obj,x0,constraints=lin_cons)print(sol)
Now provide the response and nothing else.
| As newbie already said, use scipy.optimize.linprog if you want to solve a LP (linear program), i.e. your objective function and your constraints are linear. If either the objective or one of the constraints isn't linear, we are facing a NLP (nonlinear optimization problem), which can be solved by scipy.optimize.minimize : minimize(obj_fun, x0=xinit, bounds=bnds, constraints=cons) where obj_fun is your objective function, xinit a initial point, bnds a list of tuples for the bounds of your variables and cons a list of constraint dicts. Here's an example. Suppose we want to solve the following NLP: Since all constraints are linear, we can express them by a affin-linear function A*x-b such that we have the inequality A*x >= b . Here A is a 3x2 matrix and b the 3x1 right hand side vector: import numpy as npfrom scipy.optimize import minimizeobj_fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2A = np.array([[1, -2], [-1, -2], [-1, 2]])b = np.array([-2, -6, -2])bnds = [(0, None) for i in range(A.shape[1])] # x_1 >= 0, x_2 >= 0xinit = [0, 0] Now the only thing left to do is defining the constraints, each one has to be a dict of the form {"type": "ineq", "fun": constr_fun} where constr_fun is a callable function such that constr_fun >= 0 . Thus, we could define each constraint cons = [{'type': 'ineq', 'fun': lambda x: x[0] - 2 * x[1] + 2}, {'type': 'ineq', 'fun': lambda x: -x[0] - 2 * x[1] + 6}, {'type': 'ineq', 'fun': lambda x: -x[0] + 2 * x[1] + 2}] and we'd be done. However, in fact, this can be quite cumbersome for many constraints. Instead, we can pass all constraints directly by: cons = [{"type": "ineq", "fun": lambda x: A @ x - b}] where @ denotes the matrix multiplication operator . Putting all together res = minimize(obj_fun, x0=xinit, bounds=bnds, constraints=cons)print(res) yields fun: 0.799999999999998 jac: array([ 0.79999999, -1.59999999]) message: 'Optimization terminated successfully.' nfev: 16 nit: 4 njev: 4 status: 0 success: True x: array([1.39999999, 1.69999999]) Likewise, you could use a LinearConstraint object: from scipy.optimize import LinearConstraint# lb <= A <= ub. In our case: lb = b, ub = inflincon = LinearConstraint(A, b, np.inf*np.ones(3))# rest as aboveres = minimize(obj_fun, x0=xinit, bounds=bnds, constraints=(lincon,)) Edit: To answer your new question: # b1 <= A * x <==> -b1 >= -A*x <==> A*x - b1 >= 0# A * x <= b2 <==> A*x - b2 <= 0 <==> -Ax + b2 >= 0cons = [{"type": "ineq", "fun": lambda x: A @ x - b1}, {"type": "ineq", "fun": lambda x: -A @ x + b2}]sol=minimize(obj,x0,constraints=cons)print(sol) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52001922', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10268970/']} | jdg_77211 |
stackexchange | llm_judgeable_groundtruth_similarity | 622944 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I have a member variable such as this (declared in the body of a class) private Dot[] dots=new Dot[numDots]; I loop through all members of this array, and: 1) Pass every Dot object to a function of another class, which: 2) Passes it to yet another function of a 3rd class, if some conditions are met 3) And the 3rd class changes some properties of the Dot object then when this object is returned to the original/parent class, would those changes to its properties have been retained? Or would it be treated like a local variable by the 2nd/3rd functions?
Now provide the response and nothing else.
| Yes, the changes to the properties are retained. Java is 100% pass-by-value, however, when you pass an object, the "value" passed is truly a pointer to the object. Thus, when you change an object in a method, you're changing the actual object passed in. That is, if you have the following method, then the calling method will see the changes: private void updateMyDot(final Dot aDot) { aDot.dotColor = new Color(255,255,255);} but if you do the following, then the calling method will not see the changes. private void updateMyDot(/* not final */ Dot aDot) { aDot = new Dot(); aDot.dotColor = new Color(255,255,255);} In the second example, the caller will not see any changes and will not see the newly created Dot object. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/622944', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49153/']} | jdg_77212 |
stackexchange | llm_judgeable_groundtruth_similarity | 8693342 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my program I want to draw a simple score line graph. I have a text file and on each line is an integer score, which I read in and want to pass as argument to my graph class. I'm having some trouble implementing the graph class and all the examples I've seen have their methods in the same class as their main, which I won't have. I want to be able to pass my array to the object and generate a graph, but when calling my paint method it is asking me for a Graphics g... This is what I have so far: public class Graph extends JPanel { public void paintGraph (Graphics g){ ArrayList<Integer> scores = new ArrayList<Integer>(10); Random r = new Random(); for (int i : scores){ i = r.nextInt(20); System.out.println(r); } int y1; int y2; for (int i = 0; i < scores.size(); i++){ y1 = scores.get(i); y2 = scores.get(i+1); g.drawLine(i, y1, i+1, y2); } }} For now I have inserted a simple random number generator to fill up my array. I have an existing frame and basically want to instantiate the Graph class and mount the panel onto my frame. I'm really sorry that this question seems so jumbled by the way, but I've had little sleep... The code in my main statement is: testFrame = new JFrame();testFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);Graph graph = new Graph();testFrame.add(graph); I'm not sure exactly what an SSCE is but this is my attempt at one: public class Test { JFrame testFrame; public Test() { testFrame = new JFrame(); testFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); Graph graph = new Graph(); testFrame.add(graph); testFrame.setBounds(100, 100, 764, 470); testFrame.setVisible(true); } Graph.java public class Graph extends JPanel { public Graph() { setSize(500, 500); } @Override public void paintComponent(Graphics g) { Graphics2D gr = (Graphics2D) g; // This is if you want to use Graphics2D // Now do the drawing here ArrayList<Integer> scores = new ArrayList<Integer>(10); Random r = new Random(); for (int i : scores) { i = r.nextInt(20); System.out.println(r); } int y1; int y2; for (int i = 0; i < scores.size() - 1; i++) { y1 = (scores.get(i)) * 10; y2 = (scores.get(i + 1)) * 10; gr.drawLine(i * 10, y1, (i + 1) * 10, y2); } }}
Now provide the response and nothing else.
| Problems with your code and suggestions: Again you need to change the preferredSize of the component (here the Graph JPanel), not the size Don't set the JFrame's bounds. Call pack() on your JFrame after adding components to it and before calling setVisible(true) Your foreach loop won't work since the size of your ArrayList is 0 (test it to see that this is correct). Instead use a for loop going from 0 to 10. You should not have program logic inside of your paintComponent(...) method but only painting code. So I would make the ArrayList a class variable and fill it inside of the class's constructor. For example: import java.awt.BasicStroke;import java.awt.Color;import java.awt.Dimension;import java.awt.Graphics;import java.awt.Graphics2D;import java.awt.Point;import java.awt.RenderingHints;import java.awt.Stroke;import java.util.ArrayList;import java.util.List;import java.util.Random;import javax.swing.*;@SuppressWarnings("serial")public class DrawGraph extends JPanel { private static final int MAX_SCORE = 20; private static final int PREF_W = 800; private static final int PREF_H = 650; private static final int BORDER_GAP = 30; private static final Color GRAPH_COLOR = Color.green; private static final Color GRAPH_POINT_COLOR = new Color(150, 50, 50, 180); private static final Stroke GRAPH_STROKE = new BasicStroke(3f); private static final int GRAPH_POINT_WIDTH = 12; private static final int Y_HATCH_CNT = 10; private List<Integer> scores; public DrawGraph(List<Integer> scores) { this.scores = scores; } @Override protected void paintComponent(Graphics g) { super.paintComponent(g); Graphics2D g2 = (Graphics2D)g; g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); double xScale = ((double) getWidth() - 2 * BORDER_GAP) / (scores.size() - 1); double yScale = ((double) getHeight() - 2 * BORDER_GAP) / (MAX_SCORE - 1); List<Point> graphPoints = new ArrayList<Point>(); for (int i = 0; i < scores.size(); i++) { int x1 = (int) (i * xScale + BORDER_GAP); int y1 = (int) ((MAX_SCORE - scores.get(i)) * yScale + BORDER_GAP); graphPoints.add(new Point(x1, y1)); } // create x and y axes g2.drawLine(BORDER_GAP, getHeight() - BORDER_GAP, BORDER_GAP, BORDER_GAP); g2.drawLine(BORDER_GAP, getHeight() - BORDER_GAP, getWidth() - BORDER_GAP, getHeight() - BORDER_GAP); // create hatch marks for y axis. 
for (int i = 0; i < Y_HATCH_CNT; i++) { int x0 = BORDER_GAP; int x1 = GRAPH_POINT_WIDTH + BORDER_GAP; int y0 = getHeight() - (((i + 1) * (getHeight() - BORDER_GAP * 2)) / Y_HATCH_CNT + BORDER_GAP); int y1 = y0; g2.drawLine(x0, y0, x1, y1); } // and for x axis for (int i = 0; i < scores.size() - 1; i++) { int x0 = (i + 1) * (getWidth() - BORDER_GAP * 2) / (scores.size() - 1) + BORDER_GAP; int x1 = x0; int y0 = getHeight() - BORDER_GAP; int y1 = y0 - GRAPH_POINT_WIDTH; g2.drawLine(x0, y0, x1, y1); } Stroke oldStroke = g2.getStroke(); g2.setColor(GRAPH_COLOR); g2.setStroke(GRAPH_STROKE); for (int i = 0; i < graphPoints.size() - 1; i++) { int x1 = graphPoints.get(i).x; int y1 = graphPoints.get(i).y; int x2 = graphPoints.get(i + 1).x; int y2 = graphPoints.get(i + 1).y; g2.drawLine(x1, y1, x2, y2); } g2.setStroke(oldStroke); g2.setColor(GRAPH_POINT_COLOR); for (int i = 0; i < graphPoints.size(); i++) { int x = graphPoints.get(i).x - GRAPH_POINT_WIDTH / 2; int y = graphPoints.get(i).y - GRAPH_POINT_WIDTH / 2;; int ovalW = GRAPH_POINT_WIDTH; int ovalH = GRAPH_POINT_WIDTH; g2.fillOval(x, y, ovalW, ovalH); } } @Override public Dimension getPreferredSize() { return new Dimension(PREF_W, PREF_H); } private static void createAndShowGui() { List<Integer> scores = new ArrayList<Integer>(); Random random = new Random(); int maxDataPoints = 16; int maxScore = 20; for (int i = 0; i < maxDataPoints ; i++) { scores.add(random.nextInt(maxScore)); } DrawGraph mainPanel = new DrawGraph(scores); JFrame frame = new JFrame("DrawGraph"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.getContentPane().add(mainPanel); frame.pack(); frame.setLocationByPlatform(true); frame.setVisible(true); } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { public void run() { createAndShowGui(); } }); }} Which will create a graph that looks like so: | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8693342', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1058210/']} | jdg_77213 |
stackexchange | llm_judgeable_groundtruth_similarity | 44165193 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code: struct Foo { int var1; int var2; friend std::ostream& operator<<(std::ostream& os, const Foo& s){ return os << "[Foo] " << s.var1 << "," << s.var2 ; }};int main() { Foo foo; foo.var1 = 1; foo.var2 = 2; std::list<Foo> list; list.push_back(foo); Foo &foo2 = list.front(); foo2.var2 = 5; std::cout << "foo (" << &foo << "): " << foo << std::endl; std::cout << "foo2 (foo from list) (" << &list.front() << "): " << foo2 << std::endl;} I want both foo and foo2 to reference the same object. So when I assign 5 to foo2.var2 , I would want to modify foo.var2 as well. Yet, as we can see in the following output this is not happening: foo (0x7fffffffe140): [Foo] 1,2foo2 (foo from list) (0x61ac30): [Foo] 1,5 What would be the correct way to do that?
Now provide the response and nothing else.
| When you use push_back to insert elements into a list, push_back creates a copy which is inserted into the list. A solution is to use a std::reference_wrapper instead as the underlying type of the list, like std::list<std::reference_wrapper<Foo>> lst; and then push into it like lst.push_back(foo); Here is a super simple example that shows you how it works: #include <functional>#include <iostream>#include <list>int main() { int i = 42; std::list<std::reference_wrapper<int>> lst; lst.push_back(i); // we add a "reference" into the list lst.front().get() = 10; // we update the list std::cout << i; // the initial i was modified!} Live on Coliru You need the reference_wrapper since you cannot simply create a list of references, like std::list<Foo&> . Alternatively, you can use pointers, but I find the reference_wrapper approach more transparent. In the simple example above note the need to use std::reference_wrapper::get() to obtain the underlying reference, as the reference_wrapper is on the left hand side of the assignment operator and hence won't be implicitly converted to int via std::reference_wrapper::operator T& . Below is your full working code modified to use reference_wrappers: http://coliru.stacked-crooked.com/a/fb1fd67996d6e5e9 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44165193', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2601512/']} | jdg_77214 |
stackexchange | llm_judgeable_groundtruth_similarity | 31606978 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Please consider the following example Java class (pom.xml below): package test.filedelete;import java.io.ByteArrayInputStream;import java.io.IOException;import java.io.InputStream;import java.io.OutputStream;import java.nio.file.Files;import java.nio.file.NoSuchFileException;import java.nio.file.Path;import org.apache.commons.io.IOUtils;public class Main{ public static void main(String[] args) throws IOException { byte[] bytes = "testtesttesttesttesttesttesttesttesttest".getBytes(); InputStream is = new ByteArrayInputStream(bytes); Path tempFileToBeDeleted = Files.createTempFile("test", ""); OutputStream os = Files.newOutputStream(tempFileToBeDeleted); IOUtils.copy(is, os); deleteAndCheck(tempFileToBeDeleted); // breakpoint 1 System.out.println("\nClosing stream\n"); os.close(); deleteAndCheck(tempFileToBeDeleted); } private static void deleteAndCheck(Path file) throws IOException { System.out.println("Deleting file: " + file); try { Files.delete(file); } catch (NoSuchFileException e) { System.out.println("No such file"); } System.out.println("File really deleted: " + !Files.exists(file)); System.out.println("Recreating deleted file ..."); try { Files.createFile(file); System.out.println("Recreation successful"); } catch (IOException e) { System.out.println("Recreation not possible, exception: " + e.getClass().getName()); } }} I write to a FileOutputStream and try to delete the file afterwards without closing the Stream first . This was my original problem, and of course wrong, but it leads to some strange observations. When you run the main Method on Windows 7 it produces the following output: Deleting file: C:\Users\MSCHAE~1\AppData\Local\Temp\test6100073603559201768File really deleted: trueRecreating deleted file ...Recreation not possible, exception: java.nio.file.AccessDeniedExceptionClosing streamDeleting file: C:\Users\MSCHAE~1\AppData\Local\Temp\test6100073603559201768No such fileFile really deleted: trueRecreating deleted file ...Recreation successful Why does the first call to Files.delete() not throw an exception? Why does the following call to Files.exist() return false? Why is it not possible to create the file anew? Regarding the last question I noticed that the file is still visible in the Explorer when you stop at breakpoint 1. When you terminate the JVM then, the file will be deleted anyway. After closing the stream deleteAndCheck() works as expected. It seems to me that the deletion is not propagated to the OS before closing the stream and the Files API does not reflect this properly. Can someone explain exactly what's happening here? pom.xml <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>test</groupId> <artifactId>filedelete</artifactId> <version>0.0.1-SNAPSHOT</version> <dependencies> <dependency> <groupId>commons-io</groupId> <artifactId>commons-io</artifactId> <version>2.4</version> </dependency> </dependencies></project> Update for clarification The file disappears in the Windows explorer, if the stream is closed AND Files.delete() is called - the last operation triggers -, or if Files.delete() has been called without closing the stream and the JVM is terminated.
Now provide the response and nothing else.
| Can you delete an open file? It is perfectly valid to delete the directory entry of a file when the file is opened. In Unix this is the default semantics and Windows behaves similarily as long as FILE_SHARE_DELETE is set on all file handles open to that file. [Edit: Thanks to @couling for discussions and corrections] However, there is a slight difference: Unix deletes the file name immediately , while Windows deletes the file name only when the last handle is closed . It however, prevents you from opening a file with the same name until the last handle to the (deleted) file is closed. Go figure ... On both systems, however, deleting the file does not necessarily make the file go away, it still occupies space on the disk as long as there is still an open handle to it. Space occupied by the file is only released when the last open handle is closed. Excursion: Windows That it is necessary to specify the flag on Windows makes it seem to most people that Windows cannot delete open files, but that's actually not true. That's just the default behaviour. CreateFile() : Enables subsequent open operations on a file or device to request delete access. Otherwise, other processes cannot open the file or device if they request delete access. If this flag is not specified, but the file or device has been opened for delete access, the function fails. Note Delete access allows both delete and rename operations. DeleteFile() : The DeleteFile function marks a file for deletion on close. Therefore, the file deletion does not occur until the last handle to the file is closed. Subsequent calls to CreateFile to open the file fail with ERROR_ACCESS_DENIED. Having an open handle to a file with no name is one of the most typical methods of creating unnamed temporary files: Create a new file, open it, delete the file. You now have a handle to a file that nobody else can open. On Unix, the filename is truly gone and on Windows you cannot open a file with the same name. The question is now: Does Files.newOutputStream() set FILE_SHARE_DELETE ? Looking at the source , you can see that shareDelete indeed defaults to true . The only way to reset it is to use the non-standard ExtendedOpenOption NOSHARE_DELETE . So yes, you can delete opened files in Java unless they are explicitly locked. Why can't I re-create the deleted file? The answer to that is hidden in the documentation of DeleteFile() above: The file is only marked for deletion, the file is still there. On Windows, you cannot create a file with the name of a file marked for deletion until the file is properly deleted, i.e. all handles to the file are closed. The possible confusion of mixing name deletion and actual file deletion is probably why Windows disallows deleting open files by default in the first place. Why does Files.exists() return false ? Files.exists() in the deep end on Windows opens that file at some point and we already know that we cannot re-open a deleted-but-still-open file on Windows . In detail: Java code calls FileSystemProvider.checkAccess() ) with no arguments, which calls WindowsFileSystemProvider.checkReadAccess() which straight away tries to open the file and hence fails. From what I can tell, this is the path taken when you call Files.exist() . There is also another code path that calls GetFileAttributeEx() to retrieve file attributes. 
Once again, it is not documented what happens when you try to retrieve the attributes of an deleted but not yet removed file, but indeed, you cannot retrieve the file attributes of a file marked for deletion . Guessing, I'd say that GetFileAttributeEx() calls GetFileInformationByHandle() at some point, which it will never get to because it cannot get a file handle in the first place. So indeed, after DeleteFile() the file is gone for most practical purposes. It still has a name, however, shows up in directory listings and you cannot open a file with the same name until the original file had all its handles closed. This behaviour is more or less consistent, because using GetFileAttributes() to check if a file exists is a actually an file accessibility check, which is interpreted as file exists . FindFirstFile() (used by Windows Explorer to determine the file list) finds file names but tells you nothing about accessibility of the names. Welcome to a few more weird loops in your head. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31606978', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4197768/']} | jdg_77215 |
stackexchange | llm_judgeable_groundtruth_similarity | 2335223 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to solve the following problem: Let $ABC$ be an equilateral triangle with side $l$ . If $P$ and $Q$ are points respectively in sides $AB$ and $AC$ , different from the triangle vertices, prove that $$|BQ| + |PQ| + |CP| > 2l$$ I can see that, as point $P$ tends to $A$ , $|CP|+|PQ|$ tends to $|AC|+|AQ|$ . If I could prove this, the problem would be solved (the rest follows from the triangle inequality). However I have no clue on how to do this. I tried to play with triangle inequality and relations between sides and angles but nothing worked. How can I proceed?
Now provide the response and nothing else.
| Upon reflection ... $$2s = |\overline{AB^\prime}| \leq |\overline{AP}|+|\overline{PQ^\prime}|+|\overline{Q^\prime B^\prime}| = p+q+r$$ | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2335223', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/388924/']} | jdg_77216 |
stackexchange | llm_judgeable_groundtruth_similarity | 9262221 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am creating a class in C# called "Robot", and each robot requires a unique ID property which gives themselves an identity. Is there any way of creating an auto incremental ID for each new class object? So, If i created 5 new robots, their IDs respectively will be 1, 2, 3, 4, 5. If I then destroy robot 2 and create a new robot later, it will have the ID of 2. And if I add a 6th it will have the ID of 6 and so on.. Thanks.
Now provide the response and nothing else.
| Create a static instance variable, and use Interlocked.Increment(ref nextId) on it. class Robot { static int nextId; public int RobotId {get; private set;} Robot() { RobotId = Interlocked.Increment(ref nextId); }} Note #1: using nextId++ would be valid only in non-concurrent environments; Interlocked.Increment works even if you allocate your robots from multiple threads. EDIT This does not deal with re-using robot IDs. If you need reuse, the solution is a lot more complex: you need a list of reusable IDs, and a ReaderWriterLockSlim around the code that accesses that list. class Robot : IDisposable { static private int nextId; static private ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim(); static private IList<int> reuseIds = new List<int>(); public int RobotId {get; private set;} Robot() { rwLock.EnterReadLock(); try { if (reuseIds.Count == 0) { RobotId = Interlocked.Increment(ref nextId); return; } } finally { rwLock.ExitReadLock(); } rwLock.EnterWriteLock(); try { // Check the count again, because we've released and re-obtained the lock if (reuseIds.Count != 0) { RobotId = reuseIds[0]; reuseIds.RemoveAt(0); return; } RobotId = Interlocked.Increment(ref nextId); } finally { rwLock.ExitWriteLock(); } } void Dispose() { rwLock.EnterWriteLock(); reuseIds.Add(RobotId); rwLock.ExitWriteLock(); }} Note #2: If you would like to reuse smaller IDs ahead of larger IDs (as opposed to reusing IDs released earlier before IDs released later, as I coded it) you can replace IList<int> with SortedSet<int> and make a few adjustments around the parts where an ID to be reused is taken from the collection. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9262221', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1137017/']} | jdg_77217 |
stackexchange | llm_judgeable_groundtruth_similarity | 19857824 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to specify custom filters like 'ABC*.pdf' which means: " Show all PDF which starts with ABC "? I can only specify *.pdf , *.doc , *.* , etc. ThanksFlorian
Now provide the response and nothing else.
| Updated Changed my solution a little after realizing the following would be better: This is not a complete "hard filter", but making use of the FileName property should still cover your needs: using System;using System.Windows.Forms;namespace TestingFileOpenDialog{ public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { this.openFileDialog1.FileName = "pro*"; this.openFileDialog1.Filter = "Pdf Files|*.pdf"; this.openFileDialog1.ShowDialog(); } }} I suppose this might depend on which OS you are working with, but it did work in my case any way, on Windows 8. I also realize that this does not filter out all irrelevant files "permanently", but it does at least provide an initial filter. Result: ( Without pro* in the FileName-field, this will show several other PDF files ). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19857824', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2071938/']} | jdg_77218 |
stackexchange | llm_judgeable_groundtruth_similarity | 37472567 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The problem is, I keep getting this error > error: no matching function for call to 'std::basic_ifstream::basic_ifstream(std::basic_string)' and >error: no matching function for call to 'std::basic_ofstream::open(std::basic_string)' How do I fix this? #include <iostream>#include <string>#include <fstream>#include <cstdlib>using namespace std;bool loginOn(){string username, password, un, pw;cout << "Username > ";cin >> username;cout << "Password > ";cin >> password;ifstream read("c:\\" + username + ".txt");getline(read, un);getline(read, pw);if ( un == username && pw == password){ return true;}else{ return false;}}int main(){int choice;cout << "1: Register" << endl;cout << "2: Login" << endl;cout << "Your chouce > ";cin >> choice;if (choice == 1){ string username, password; cout << "Choose username > "; cin >> username; cout << "Choose password > "; cin >> password; ofstream file; file.open("c:\\" + username + ".txt"); file << username << endl << password; file.close(); main();}else if (choice == 2){ bool status = loginOn(); if (!status){ cout << "Wrong login information" << endl; system("PAUSE"); return 0; } else{ cout << "Congratulation! You've login successfully" << endl; system("PAUSE"); return 1; }}}
Now provide the response and nothing else.
| It has to do with how Matlab performs name-binding at compilation time. Because matlabBugTest has a line that assigns a value to a , a is determined to be a variable, and the later line with a is a reference to that variable and not a call to the local function. More modern versions of Matlab, like my R2015a install, gives a more clear error message: At compilation, "a" was determined to be a variable and this variable is uninitialized. "a" is also a function name and previous versions of MATLAB would have called the function. However, MATLAB 7 forbids the use of the same name in the same context as both a function and a variable. It's not so much a bug, as it is an ambiguity introduced by the naming scheme that was given a default resolution method, which can be annoying if you have never encountered the problem before and m-lint doesn't mark it. Similar behavior occurs when variables are poofed into the workspace without initialization beforehand. So the solution is to either change the name of the function or the variable to different things, which I would argue is good practice anyways. In considering your follow-up example, I have noticed some interesting behavior in moving things around in the function. Firstly, if the function is either external or nested, you get the behavior discussed very well by Suever's answer . However, if the function is local, you can get around the limitation (at least you can in my R2014b and R2015a installs) by invoking the function prior to converting it to a variable as long as you initialize it or explicitly convert it to a variable at some point. Going through the cases, the following bodies of matlabBugTest perform thusly: Fails: aif false a = 'foo';enda Runs: aif true a = 'foo';enda Runs: a = a;if false % runs with true as well. a = 'foo';enda I'm not entirely sure why this behavior is the way it is, but apparently the parser handles things differently depending on the scope of the function and the order of what symbols appear and in what contexts. So assuming this behavior hasn't and will not change you could try something like: pathConstant = pathConstant;if ~exist('pathConstant.m', 'file') pathConstant = 'C:\some\path';endload(fullfile(pathConstant, 'filename.ext')); Though, entirely personal opinion here, I would do something like pathConstant = getPathConstant();if ~exist('pathConstant.m', 'file') pathConstant = 'C:\some\path';endload(fullfile(pathConstant, 'filename.ext')); Concerning breaking "compatibility between functions and scripts/command line", I don't really see this as an issue since those are two entirely different contexts when it comes to Matlab. You cannot define a named function on the command line nor in a script file; therefore, there is no burden on the Matlab JIT to properly and unambiguously determine whether a symbol is a function call or a variable since each line executes sequentially and is not compiled (aside from certain blocks of code the JIT is designed to recognize and optimize like loops in scripts). Now as to why the above juggling of declarations works, I'm not entirely sure since it relies on the Matlab JIT which I know nothing about (nor have I taken a compiler class, so I couldn't even form an academic reason if I wanted). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37472567', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6388488/']} | jdg_77219 |
stackexchange | llm_judgeable_groundtruth_similarity | 164114 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is a commutative ring a field? A set equipped with addition and multiplication which is abelian over those two operations and it holds distributivity of multiplication over addition?
Now provide the response and nothing else.
| A key difference between an ordinary commutative ring and a field is that in a field, all non-zero elements must be invertible. For example: $\Bbb{Z}$ is a commutative ring but $2$ is not invertible there, so it can't be a field, whereas $\Bbb{Q}$ is a field and every non-zero element has an inverse. Examples of commutative rings that are not fields: The ring of polynomials in one indeterminate over $\Bbb{Q}, \Bbb{R}$, $\Bbb{C}$, $\Bbb{F}_{11}$, $\Bbb{Q}(\sqrt{2},\sqrt{3})$ or $\Bbb{Z}$. The quotient ring $\Bbb{Z}/6\Bbb{Z}$ $\Bbb{Z}[\zeta_n]$: elements here are linear combinations of powers of $\zeta_n$ with coefficients in $\Bbb{Z}$ (in fact this is also a finitely generated $\Bbb{Z}$-module) The direct sum of rings $\Bbb{R} \oplus \Bbb{R}$, which also has the additional structure of being a 2-dimensional $\Bbb{R}$-algebra. Let $X$ be a compact Hausdorff space with more than one point. Then $C(X)$ is an example of a commutative ring, the ring of all continuous real-valued functions on $X$. The localisation of $\Bbb{Z}$ at the prime ideal $(5)$. The resulting ring, $\Bbb{Z}_{(5)}$, is the set of all $$\left\{\frac{a}{b} : \text{$b$ is not a multiple of 5} \right\}$$ and is a local ring, i.e. a ring with only one maximal ideal. I believe that when $G$ is a cyclic group, the endomorphism ring $\textrm{End}(G)$ is an example of a commutative ring. Examples of Fields: $\Bbb{F}_{2^5}$ $\Bbb{Q}(\zeta_n)$ $\Bbb{R}$ $\Bbb{C}$ The fraction field of an integral domain More generally, given an algebraic extension $E/F$, for any $\alpha \in E$ we have that $F(\alpha)$ is a field. The algebraic closure $\overline{\Bbb{Q}}$ of $\Bbb{Q}$ in $\Bbb{C}$. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/164114', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/10181/']} | jdg_77220 |
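A quick worked check of the invertibility criterion on one of the examples listed above (this computation is not part of the original answer, just a spelled-out verification): in $\Bbb{Z}/6\Bbb{Z}$ we have $2 \cdot 3 \equiv 0 \pmod 6$, so if $2$ had an inverse $x$ with $2x \equiv 1 \pmod 6$, then $$3 \equiv 3 \cdot (2x) \equiv (3 \cdot 2)\, x \equiv 0 \cdot x \equiv 0 \pmod 6,$$ a contradiction; hence $2$ is not invertible and $\Bbb{Z}/6\Bbb{Z}$ is a commutative ring that is not a field.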
stackexchange | llm_judgeable_groundtruth_similarity | 66970702 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In pure Python, None or True returns True . However with pandas when I'm doing a | between two Series containing None values, results are not as I expected: >>> df.to_dict(){'buybox': {0: None}, 'buybox_y': {0: True}}>>> df buybox buybox_y0 None True>>> df['buybox'] = (df['buybox'] | df['buybox_y'])>>> df buybox buybox_y0 False True Expected result: >>> df buybox buybox_y0 True True I get the result I want by applying the OR operation twice, but I don't get why I should do this. I'm not looking for a workaround (I have it by applying df['buybox'] = (df['buybox'] | df['buybox_y']) twice in a row) but an explanation, thus the 'why' in the title.
Now provide the response and nothing else.
| The Pandas | operator does not rely on the Python or expression, and behaves differently. If both operands are boolean, the result is well defined and the same for Python and Pandas. But in your case the series "buybox" is of type object , and "buybox_y" is bool . In this case the Pandas | operator is not commutative : the right operand is coerced to boolean, then a bitwise or is attempted; None | True is an invalid operation, resulting in None; and the result is then coerced to boolean. Thus, >>> df['buybox'] | df['buybox_y']0 False>>> df['buybox_y'] | df['buybox']0 True For predictable results, you can clean up the data and cast to a boolean type with Pandas astype before attempting boolean operations. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/66970702', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9453926/']} | jdg_77221 |
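A minimal sketch of the clean-up step the answer above suggests, reproducing the question's frame and casting the object column to a proper boolean dtype before the bitwise OR (treating the missing value as False is an assumption about the desired semantics here, not something the answer prescribes):

```python
import pandas as pd

df = pd.DataFrame({"buybox": [None], "buybox_y": [True]})

# Make the missing values explicit and cast to bool *before* the bitwise |,
# so both operands have a well-defined boolean dtype.
clean_buybox = df["buybox"].fillna(False).astype(bool)

df["buybox"] = clean_buybox | df["buybox_y"]
print(df["buybox"].iloc[0])   # True, regardless of operand order
```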
stackexchange | llm_judgeable_groundtruth_similarity | 23061164 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to run my (Phonegap / Angular) app on a Windows Phone emulator. Everything works fine but some images are not displayed. I figured out that ng-src is the troublemaker. When I'm using src instead of ng-src for local images in the same folder, it works fine. Here are some examples: displayed: <img ng-click="mediaPlay()" ng-hide=showButton() src="img/go.png" style="float: right"> not displayed: <img ng-src="img/{{poi.poiimage}}" style="display: block; margin: 0 auto"> My app works perfectly on Android and iOS. Any ideas how to fix this? EDIT* Ok now it's getting weird o0... I've put this "img/{{poi.poiimage}}" in a p tag and it gives me this path: img/poi-12-galgentor-01.png So I tried it with src: <img src="img/12-galgentor-01.png" style="display: block; margin: 0 auto"> And ng-src: <img ng-src="img/12-galgentor-01.png" style="display: block; margin: 0 auto"> Both worked for me! But this... <img ng-src="img/{{poi.poiimage}}" style="display: block; margin: 0 auto"><img src="img/{{poi.poiimage}}" style="display: block; margin: 0 auto"> does not work. Edit2* Tested in the emulator's browser ... works fine too. -.-
Now provide the response and nothing else.
| This is not a solution per se: AngularJS is broken on WP8 devices. No, it is not... I got it: it is just a problem with old IEs and Windows Phones. Src URLs have to be whitelisted, otherwise Angular generates an "unsafe:" prefix on every src URL. It can be fixed by changing the Angular app's config like below: schreibwerkApp.config(['$routeProvider', '$compileProvider', function ($routeProvider, $compileProvider) { $compileProvider.imgSrcSanitizationWhitelist('img/'); $routeProvider. when('/splash', { templateUrl: 'partials/splash.html' }). when('/terms', { templateUrl: 'partials/terms.html' }). when('/intro', { templateUrl: 'partials/intro.html' }). when('/poi/:stationID', { templateUrl: 'partials/poi.html', controller: 'PoiCtrl' }). when('/directive', { templateUrl: 'partials/directive.html' }). when('/imgview/:stationID/:imgID', { templateUrl: 'partials/imgview.html', controller: 'ImgCtrl' }). otherwise({ redirectTo: '/splash' });}]); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23061164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3532011/']} | jdg_77222 |
stackexchange | llm_judgeable_groundtruth_similarity | 565811 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have test data in a file text.txt abtesttest21,23,3 I want to output the file starting from the line number where test is + 2. I need this to be a oneliner usable in gnuplot , i have comeup with the following: awk -v linestart=$(awk '$0~"test" {a=NR}END{print a+2}' $filename) 'BEGIN{FS=",";OFS="\t";lines}NR>=linestart{print $1, $2}' $filename but i need somehow to supply the file contents to two awk 's which i do not know how to do. So i came up with solution with the $filename but this has the problem, how to get the $filename in. I was thinking along the lines: echo "test.txt" | read filename | awk -v linestart=$(awk '$0~"test" {a=NR}END{print a+2}' $filename) 'BEGIN{FS=",";OFS="\t";lines}NR>=linestart{print $1, $2}' $filename but that does not work. How else can i make the above work? The obvious problem is that i need to know the number of the line where i want to start printing before i run awk . i was also thinking something along this: awk 'BEGIN{FS=",";OFS="\t";lines=100000}{if ($0~"test"){lines=NR+2}; if(NR>=lines){print $1, $2}}' But i did not even try it since, it is very ugly and not general, i have to make the variable lines always sufficiently big. So is there an elegant solution that would work with a normal text file pipe or in the other case with some way of pushing the file name inside?
Now provide the response and nothing else.
| Using ed : $ printf '%s\n' '/^test/+2,$p' | ed -s file1,23,3 In the ed editor, the command /^test/+2,$p would print ( p ) the lines from two lines beyond the line matching ^test , to the end ( $ ). Using awk : $ awk '/^test/ { flag = 1; count = 1 }; (flag == 1 && count <= 0); { count-- }' file1,23,3 Here, a line will be printed if flag is 1 and if count is less than or equal to zero. The flag is set to 1 when the pattern ^test is matched in the input data, and count is then also set to the number of lines to skip until the output should start (not counting the current line). The count is decreased for all lines. A slightly different approach with awk : $ awk '/^test/ { getline; while (getline > 0) print }' file1,23,3 Here, we match our pattern and then immediately read and discard the next line of input. Then we use a while loop to read the rest of the file, printing each line read. The exact same approach, but with sed : $ sed -n -e '/^test/ { n' -e ':again' -e 'n; p; b again' -e '}' file1,23,3 Match the pattern, then read and discard the next line ( n ), then get into a loop reading and printing each line ( n; p; ). The loop is made up of the label again and the branching/jumping to this label ( b again ). | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/565811', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/172751/']} | jdg_77223 |
stackexchange | llm_judgeable_groundtruth_similarity | 1941212 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to use get_or_create for some fields in my forms, but I'm getting a 500 error when I try to do so. One of the lines looks like this: customer.source = Source.objects.get_or_create(name="Website") The error I get for the above code is: Cannot assign "(<Source: Website>, False)": "Customer.source" must be a "Source" instance.
Now provide the response and nothing else.
| From the documentation of get_or_create : # get_or_create() a person with similar first names.p, created = Person.objects.get_or_create( first_name='John', last_name='Lennon', defaults={'birthday': date(1940, 10, 9)},)# get_or_create() didn't have to create an object.>>> createdFalse Explanation: Fields to be evaluated for similarity have to be mentioned outside defaults . The rest of the fields have to be included in defaults . In case a CREATE event occurs, all the fields are taken into consideration. It looks like you need to unpack the return value into a tuple instead of a single variable, like this: customer.source,created = Source.objects.get_or_create(name="Website") | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/1941212', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217647/']} | jdg_77224 |
stackexchange | llm_judgeable_groundtruth_similarity | 12661416 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It is often said that every Javascript object has a prototype property, but I find that foo.prototype has a value only if foo is a function. On Chrome and Firefox, obj.__proto__ has a value -- is this the said prototype property? But on IE 9, it won't work (is there some way that can?), and I thought by prototype property, that means obj.prototype should work? I understand that Object.getPrototypeOf(obj) seems to show this prototype property, but why need a special method to get it? Why not just like person.name , which is to get the name property of the person object? Update: by the way, obj.constructor.prototype seems to sometimes be that prototype, but sometimes not, as in the following code done with Prototypal inheritance with no constructor: (this method is in the Pro Javascript Design Patterns book by Harmes and Diaz by Apress 2008, p. 46) var Person = { name: 'default value', getName: function() { return this.name; }}var reader = clone(Person);console.log(reader.getName());reader.name = "Ang Lee";console.log(reader.getName());function clone(obj) { function F() {}; F.prototype = obj; return new F;}console.log("the prototype of reader is", Object.getPrototypeOf(reader));console.log(Object.getPrototypeOf(reader) === reader.constructor.prototype);console.log(Object.getPrototypeOf(reader) == reader.constructor.prototype);console.log(Object.getPrototypeOf(reader) === reader.__proto__);console.log(Object.getPrototypeOf(reader) == reader.__proto__); the result will show false, false, true, true for the last 4 lines.
Now provide the response and nothing else.
| Every JavaScript object has an internal "prototype" property, often called [[prototype]], which points to the object from which it directly inherits. This is exposed in FF and Chrome by the non-standard __proto__ property. Object.getPrototypeOf is a getter for this internal property. Every JavaScript function [object] has a property prototype , which is initialized with an [nearly] empty object. When you create a new instance of this function by calling it as a constructor, the [[prototype]] of that new object will point to the constructor's prototype object. If you get the [[prototype]] of a function (every function is an object, so it has one), it will result in the Function.prototype object from which functions inherit their methods (like bind, call, apply etc). See also Why functions prototype is chained repeatedly? on that. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12661416', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/325418/']} | jdg_77225 |
stackexchange | llm_judgeable_groundtruth_similarity | 208845 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Transistors, BJTs, MOSFETs: OK, got it. More transistors = better computing: got it. But compressing the transistors closer to each other, in my mind, only helps reduce the physical dimensions. So does a CPU (or any electronic device) become more efficient because the transistors use less voltage? Does more computing reduce power usage, i.e. is simply having more transistors the reason? I am asking because, as a newb and soon-to-graduate engineer, I think basic stuff like this is important to understand. But I always learned this concept as a rule of thumb and not by "first principle" or actual theory of transistor efficiency. PS I did take a class where the math of L and W was calculated and compared to new L' and W', reduced on an npn. The theoretical frequency increased, but I don't think the math translated well in my head, because I don't see how that helps power efficiency, only performance and/or area.
Now provide the response and nothing else.
| Most of the power consumption in CMOS circuits is 'dynamic power' -- power from devices switching state. This power is basically the power required to drive the capacitance of the other gates, as well as the capacitance of the wiring. In modern CMOS, there is additional static leakage power because when the MOS transistor is 'off', it still allows a little leakage current to flow, and given the numbers of devices in a modern CPU, the total current becomes significant. There are techniques to minimize this though. As devices get smaller, you get a number of benefits: Device capacitances decrease, so less power is required to drive this capacitance. Devices are smaller and closer together, so parasitic capacitance of the wiring also decreases. Devices can be made to operate at lower supply voltages, so both the leakage current and the power required to drive parasitic capacitances decrease. Generally, the leakage currents increase as devices get smaller (not really because they are smaller, but because the lower threshold voltages allow higher leakages). More complex power-switching techniques are used to keep this in check. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/208845', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/87817/']} | jdg_77226
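A back-of-the-envelope sketch of the dynamic-power argument above, using the standard CMOS switching-power estimate P ≈ α·C·V²·f. The numeric values are made up purely to show the scaling and are not real process data.

```python
def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    """Average switching power of a node: activity * capacitance * V^2 * frequency."""
    return alpha * c_farads * v_volts ** 2 * f_hertz

# "Old" node: larger gate/wire capacitance and higher supply voltage (made-up numbers)
p_old = dynamic_power(alpha=0.1, c_farads=2e-15, v_volts=1.2, f_hertz=2e9)

# "Shrunk" node: capacitance and supply voltage both reduced (made-up numbers)
p_new = dynamic_power(alpha=0.1, c_farads=1e-15, v_volts=0.9, f_hertz=2e9)

print(f"per-gate dynamic power: {p_old:.2e} W -> {p_new:.2e} W "
      f"({p_new / p_old:.0%} of the original)")
```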
stackexchange | llm_judgeable_groundtruth_similarity | 124546 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $\{f_n\}_{n\in \mathbb{N}}$ be a sequence of measurable functions on a measure space and $f$ measurable. Assume the measure space $X$ has finite measure. If $f_n$ converges to $f$ in $L^{\infty}$-norm , then $f_n$ converges to $f$ in $L^{1}$-norm. This is my approach: We know $||f_n-f||_{\infty} \to 0 $ and by definition $||f_n-f||_{\infty} =\inf\{M\geq 0: |f_n-f|\leq M \}.$ Then $$\begin{aligned} ||f_n-f||_1 &= \int |f_n-f| \, dm \\ &\leq \int|f_n| \, dm+\int|f| \, dm \end{aligned}$$ I don't know how to proceed after that, any help would be appreciated.
Now provide the response and nothing else.
| For any function $g$, $||g||_1 = \int_X|g(m)|dm \leq \int_X||g||_\infty dm = \mu(X)*||g||_\infty$ (as $|g(m)| \leq ||g||_\infty$ almost everywhere); $||g||_\infty \geq \frac{||g||_1}{\mu(X)}$, so if $||f_n-f||_\infty$ tends to zero, then $||f_n-f||_1$ tends to zero as well. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/124546', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/25683/']} | jdg_77227 |
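A quick numerical illustration of the bound used above, ||g||_1 <= mu(X)·||g||_inf, on X = [0, 1] so that mu(X) = 1. The sequence f_n = f + sin(nx)/n is an arbitrary choice made up for the demonstration; it converges uniformly (hence in L^inf) to f.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10_001)
dx = x[1] - x[0]
f = x ** 2                                   # arbitrary limit function

for n in (1, 10, 100, 1000):
    g = (f + np.sin(n * x) / n) - f          # g = f_n - f
    sup_norm = np.max(np.abs(g))             # ~ ||g||_inf
    l1_norm = np.sum(np.abs(g)) * dx         # Riemann-sum estimate of ||g||_1
    print(f"n={n:5d}  ||g||_inf={sup_norm:.2e}  ||g||_1={l1_norm:.2e}  "
          f"mu(X)*||g||_inf={sup_norm:.2e}")
```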
stackexchange | llm_judgeable_groundtruth_similarity | 283 |
Below is a question asked on the forum engineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A friend and I wrote a paper for a fluids class discussing the details (as they relate to fluid dynamics) of constructing a cannon that could shoot a steak fast enough to cook it. We quickly discovered (but not quickly enough to change our topic) that our paper was a bit too ambitious for two twenty year old undergrads taking an introductory course in fluid mechanics. Nonetheless, we still busted out a ballistics simulator, a cookbook, and a compressive heating calculator and did our best. One of the issues that stumped us was compressing the gas we used to launch the steak. We chose Helium because it was the least dense gas that probably won't burst into flames (like Hydrogen). Using a compressive heating calculator, we found the velocity we needed to shoot the steak, and were using Bernoulli's equation to find the pressure we needed to launch at our chosen velocity. The issue we ran into was density is dependent on pressure, but we needed the density to calculate the pressure needed. How does one determine the pressure given the issue above? Is it simply several rounds of iteration until an acceptable answer is found?
Now provide the response and nothing else.
| Seriously?! :D There are going to be two parts to the solution you are looking for: A) till the steak is in the cannon, and B) the steak leaves the cannon, is in the air, and cooking starts. A) Internal ballistics: You are dealing with compressible flow. NEVER use the simple Bernoulli equation beyond Mach no. 0.3. Make sure you are using correction terms till Mach number 0.7 and beyond that, use the equations of gas dynamics (refer to Modern Compressible Flow by John Anderson). That said, your case is the same as an air rifle case. Instead of a pellet, you are shooting steaks. So if you know the muzzle velocity, you can design your cannon as shown in this paper . Now your question is how does one get $P_0$ mentioned in this paper, right? For that you will have to do reverse calculations. B) Steak leaves the cannon: Assuming that you want your steak medium (as rare is not recommended apparently! ), figure out the internal and surface temperatures for cooking, and also the time required for cooking. At these temperatures, your steak most probably will be flying at supersonic speeds . Then there will be a bow shock in front of the steak. You can safely approximate it as a normal shock and use normal shock relations to calculate the total temperature ratio across the shock. Now $T_{01}$ becomes the atmospheric temperature and $T_{02}$ becomes the surface temperature on the steak (using the total pressure ratio and gas dynamics relations). This will give you the required shock strength and hence the flying Mach number. Assuming STP conditions at sea level, find the acoustic velocity and hence the steak velocity. Now this is the average steak velocity. But there is going to be wave and pressure drag on the steak all the time. Use this Stanford supersonic wing drag calculator to calculate this drag. In this take aspect ratio (AR) = 1, $C_L = 0$, put the length of the steak and its thickness / length as t/c. So compute the muzzle velocity by using Newton's second and then first law. Now substitute this muzzle velocity in point A discussed above. That will give you your chamber pressure. Also I found one report in which the internal ballistics of a spring loaded gun is considered. There is a matlab code as well. You can ask the author's permission to use it. Another issue is, as you are going to use a pre-compressed pneumatic cylinder, the temperature is going to drop considerably when expansion happens. So flames are not a problem; however, during compression of the gas in that cylinder, things are going to heat up, so using helium is a smart move. Another way you can do this exercise is to write a small code in your favorite language and carry out those iterations you mentioned. However, don't use Bernoulli's equation. All the best for your paper. Speculation: If your hypothetical steak is flying supersonic for several minutes through the air, it's most likely going to be eaten by some dog some hundred miles away from you! Cheers! | {} | {'log_upvote_score': 4, 'links': ['https://engineering.stackexchange.com/questions/283', 'https://engineering.stackexchange.com', 'https://engineering.stackexchange.com/users/3/']} | jdg_77228
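To make the "how fast must it fly" step concrete, here is a rough Python sketch using only the standard stagnation-temperature relation T0/T = 1 + (gamma-1)/2·M², treating the steak surface temperature as the stagnation (recovery) temperature of the oncoming air. The ambient and target temperatures are assumptions chosen for illustration, not values from the paper or the answer above.

```python
import math

gamma = 1.4           # ratio of specific heats for air
R = 287.0             # J/(kg*K), specific gas constant of air
T_ambient = 288.0     # K, sea-level standard atmosphere (assumed)
T_surface = 473.0     # K, ~200 C searing temperature (assumed)

# Invert T0/T = 1 + (gamma - 1)/2 * M^2 for the flight Mach number
mach = math.sqrt((T_surface / T_ambient - 1.0) * 2.0 / (gamma - 1.0))
a = math.sqrt(gamma * R * T_ambient)   # speed of sound at ambient conditions

print(f"required Mach number ~ {mach:.2f}")
print(f"speed of sound       ~ {a:.0f} m/s")
print(f"flight speed         ~ {mach * a:.0f} m/s")
```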
stackexchange | llm_judgeable_groundtruth_similarity | 20865581 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Chrome extension content script that I want to run on Trello boards. For now, it contains only: console.log("Hello, world!"); When you open the Trello board page through an internal link, like from the My Boards page, the content script does not run. It does run after you refresh the page though. My manifest file contains: { "manifest_version": 2, "name": "Temp Ext", "version": "1.0", "content_scripts": [ { "matches": ["*://trello.com/b/*"], "js":["contentscript.js"] } ]} Can anyone help me figure out why the script doesn't run at the time the page is initially loaded? EDIT: Corrected question. Issue only occurred after following internal links, not any links.
Now provide the response and nothing else.
| The problem was that Trello uses HTML5's pushState for page transitions, so the content script wasn't always being run after a board was opened.

Solution: changes to the manifest:

    {
        "manifest_version": 2,
        "name": "Temp Ext",
        "version": "1.1",
        "content_scripts": [{
            "matches": ["*://trello.com/*"],
            "js": ["contentscript.js"]
        }],
        "background": {
            "scripts": ["background.js"]
        },
        "permissions": [
            "*://trello.com/*",
            "tabs",
            "webNavigation"
        ]
    }

Add background script:

    chrome.webNavigation.onHistoryStateUpdated.addListener(function(details) {
        chrome.tabs.executeScript(null, {file: "contentscript.js"});
    });

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/20865581', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1368520/']} | jdg_77229
stackexchange | llm_judgeable_groundtruth_similarity | 258168 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How do you do the second part of question 8, chapter 5, of Evans' PDE book (first edition)? I have proven the inequality for smooth, compactly supported functions using integration by parts, and I understand why approximating sequences as described in the hint exist, but I can not use the hint to extend the inequality as required. The question is "Integrate by parts to prove the interpolation inequality $$\int_U |Du|^2\,dx\leqslant C\left(\int_Uu^2\, dx\right)^{\frac{1}{2}}\left(\int_U|D^2u|^2\,dx\right)^{\frac{1}{2}}$$ for $u\in C^\infty_c(U)$. By approximation, prove this inequality if $u\in H^2(U)\cap H_0^1(U)$." The hint is to approximate $u$ by functions in $C^\infty_c(U)$ which converge to u in $H_0^1(U)$, and to approximate $u$ (also) by functions in $C^\infty(closure(U))$ which converge to $u$ in $H^2(U)$.
Now provide the response and nothing else.
| What you need is the following: Let $v \in C^\infty_c(U)$ and $w\in C^\infty(\bar{U})$, we have $$ \left(\int_U Dv \cdot Dw ~\mathrm{d}x\right)^2 \leq C \int_U |v|^2 \mathrm{d}x \int_U |D^2 w|^{2}\mathrm{d}x \tag{*}$$ This follows by directly integrating by parts (the boundary terms vanish as $v$ has compact support). Now, given $u \in H^1_0(U) \cap H^2(U)$, let $v_i \to u$ in $H^1_0$ and $w_i \to u$ in $H^2(U)$ where $v_i \in C^\infty_c(U)$ and $w_i \in C^\infty(\bar{U})$. By the strong convergence in $H^1_0$ and $H^2$ respectively, we have that for any function $f\in L^2$ we have $$ \lim_{\ell \to \infty}\int_U \partial_{x^j} v_\ell f \mathrm{d}x = \lim_{\ell \to \infty}\int_{U} \partial_{x_j} (v_\ell - u + u) f \mathrm{d}x = \int_{U} \partial_{x^j} u f \mathrm{d}x + \lim_{\ell\to\infty}\int_{U} (\partial_{x_j}v_\ell - \partial_{x_j}u) f \mathrm{d}x $$ The second term on the RHS tends to zero using Cauchy-Schwarz and the assumed convergence of $v_\ell\to u$. Similarly we also have $$ \lim_{\ell \to \infty}\int_U \partial_{x_j} w_\ell f \mathrm{d}x = \int_{U} \partial_{x_j} u f \mathrm{d}x $$ So we have that $$ \int_U |Du|^2 \mathrm{d}x = \lim_{i,j\to \infty} \int_U Dv_i \cdot D w_j ~\mathrm{d}x \leq \lim_{i,j\to\infty} C \|v_i\|_{L^2} \|D^2 w_j\|_{L^2} $$ by (*). Since $v_i \to u$ in $H^1_0$, we also have $v_i \to u$ in $L^2$. Similarly as $w_j \to u$ in $H^2$ we have $D^2w \to D^2 u$ in $L^2$. So the RHS is $$ \lim_{i,j\to\infty} C \|v_i\|_{L^2} \|D^2 w_j\|_{L^2} = C\|u\|_{L^2} \|D^2 u\|_{L^2}$$ and we have the desired result. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/258168', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/47948/']} | jdg_77230 |
stackexchange | llm_judgeable_groundtruth_similarity | 174994 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to have a set of $K$ variables that are uncorrelated but linearly dependent? i.e.$cor(x_i, x_j)=0$ and $ \sum_{i=1}^K a_ix_i=0$ If yes can you write an example ? EDIT:From the answers it follows that it is not possible. Would it at least be possible that $\mathbb{P}(|\hat \rho_{x_i, x_j}-\hat \rho_{x_i, v}|<\epsilon)$ where $\hat\rho$ is the estimated correlation coefficient estimated from $n$ samples of the variables and $v$ is a variable that is uncorrelated with $x_i$. I am thinking something like $x_K=\dfrac{1}{K} \sum_{i=1}^{K-1} x_i$ $K>>0$
Now provide the response and nothing else.
| As @RUser4512's answer shows, uncorrelated random variables cannot be linearly dependent. But, nearly uncorrelated random variables can be linearly dependent, and one example of these is something dear to the statistician's heart. Suppose that $\{X_i\}_{i=1}^K$ is a set of $K$ uncorrelated unit-variance random variables with common mean $\mu$. Define $Y_i = X_i - \bar{X}$ where $\bar{X} = \frac 1K \sum_{i=1}^K X_i$. Then, the $Y_i$ are zero-mean random variables such that $\sum_{i=1}^K Y_i = 0$, that is, they are linearly dependent. Now, $$Y_i = \frac{K-1}{K} X_i - \frac 1K\sum_{j \neq i}X_j$$ so that $$\operatorname{var}(Y_i) = \left(\frac{K-1}{K}\right)^2+\frac{K-1}{K^2}= \frac{K-1}{K}$$ while $$\operatorname{cov}(Y_i,Y_j) = -2\left(\frac{K-1}{K}\right)\frac 1K+ \frac{K-2}{K^2}= \frac{-1}{K}$$ showing that the $Y_i$ are nearly uncorrelated random variables with correlation coefficient $\displaystyle -\frac{1}{K-1}$. See also this earlier answer of mine. | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/174994', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/25392/']} | jdg_77231
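A quick numerical check of the construction above: centering K uncorrelated variables by their mean makes the Y_i sum to zero (linear dependence) while each pair has sample correlation close to -1/(K-1). The distribution, mean and sample size are arbitrary choices for the simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 5, 200_000

X = rng.normal(loc=3.0, scale=1.0, size=(n, K))    # uncorrelated, unit variance
Y = X - X.mean(axis=1, keepdims=True)              # Y_i = X_i - X_bar

print("max |sum_i Y_i| over samples:", np.abs(Y.sum(axis=1)).max())   # ~ 0
print("sample corr(Y_1, Y_2):       ", np.corrcoef(Y[:, 0], Y[:, 1])[0, 1])
print("theoretical -1/(K-1):        ", -1.0 / (K - 1))
```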
stackexchange | llm_judgeable_groundtruth_similarity | 1959012 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on a remote automated test framework for Android based on JUnit (tests run outside android, interacting with code inside it). It's all working fairly well, but one issue I have is that when I automatically start a fresh emulator, the screen starts out locked. This appears to affect my tests being able to run, plus, I want to watch the tests run (buttons clicked, text typed, etc.). If I manually start an emulator and unlock its screen, all works well. Is there a way to programmatically unlock the screen in Android? A Java API, a command line or shell command, etc. would all be fine. Barring that, perhaps there is a way to start an emulator unlocked?
Now provide the response and nothing else.
| You can interact with the emulator via its console interface . If you ever wondered why your emulator started with a number like 5554 - that's because that's the port the emulator is listening on. You can find the port for running emulators with the adb devices command. It will have output like this:

    C:\>adb devices
    List of devices attached
    emulator-5554   device

So you can connect to the emulator using a command like:

    telnet localhost 5554

If you connect successfully you'll get an OK prompt and you can start entering commands. There are various commands but the one we are interested in is event to simulate hardware events. We can unlock the screen by pressing Menu which we emulate with the following command:

    event send EV_KEY:KEY_MENU:1 EV_KEY:KEY_MENU:0

The EV_KEY:KEY_MENU:1 is the key-down event and the EV_KEY:KEY_MENU:0 is the corresponding key-up event. Make sure you do both or the Menu key will be stuck down. I realise scripting this will be far from easy, but it's all I can think of to solve your problem. Edit: I don't think event send EV_KEY:KEY_MENU:1 EV_KEY:KEY_MENU:0 is emulating Menu but if I run the command just after I've started the emulator it does unlock it. Not sure why but I guess this is a start. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1959012', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/168212/']} | jdg_77232
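A small sketch that scripts the console interaction described above instead of typing it into telnet. The port 5554 is assumed from the answer; newer emulator builds additionally require an "auth <token>" line first (the token lives in ~/.emulator_console_auth_token), which is omitted here for brevity.

```python
import socket
import time

HOST, PORT = "localhost", 5554   # assumed default console port

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    time.sleep(0.5)              # give the console time to print its banner
    sock.recv(4096)              # discard the banner / OK prompt
    sock.sendall(b"event send EV_KEY:KEY_MENU:1 EV_KEY:KEY_MENU:0\n")
    time.sleep(0.2)
    print(sock.recv(4096).decode(errors="replace"))   # expect an OK response
```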
stackexchange | llm_judgeable_groundtruth_similarity | 14824747 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to have an image which is uploaded from my database and, on top of it, an HTML5 canvas of the exact same size in the same position. Most of the solutions I have found have been using jQuery/JavaScript; however, I want a similar solution if possible just using CSS3, as the images are being outputted from a database, there can be more than one image on the page, and each image will have a canvas. How can I achieve this?
Now provide the response and nothing else.
| Yes. You can do this entirely in CSS, but you will have to add some specific HTML plumbing for each image. If you ever get tired of the extra plumbing, javascript could do most of the plumbing for you. Here is a Fiddle of the CSS-only version: http://jsfiddle.net/m1erickson/g3sTL/

The HTML:

    <div class="outsideWrapper">
        <div class="insideWrapper">
            <img src="house-icon.jpg" class="coveredImage">
            <canvas class="coveringCanvas"></canvas>
        </div>
    </div>

Of course, in your version, you would replace the image src with your dynamic database call to fetch the image.

The CSS:

    .outsideWrapper{
        width:256px; height:256px;
        margin:20px 60px;
        border:1px solid blue;
    }
    .insideWrapper{
        width:100%; height:100%;
        position:relative;
    }
    .coveredImage{
        width:100%; height:100%;
        position:absolute; top:0px; left:0px;
    }
    .coveringCanvas{
        width:100%; height:100%;
        position:absolute; top:0px; left:0px;
        background-color: rgba(255,0,0,.1);
    }

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14824747', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1134468/']} | jdg_77233
stackexchange | llm_judgeable_groundtruth_similarity | 29276880 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I have a file with this content in master :

    Line 1
    Line 2
    Line 3
    Line 4

Now say I create and checkout a new branch called test . In this branch I change the file to this:

    Line 1
    Line 2
    Line 3 Modified
    Line 4

and I commit this and switch back to master . In master I change the file to:

    Line 1
    Line 2
    Line 3
    Line 4 Modified

and I commit. Now if I merge branch test into master , I get a conflict. Why can't git auto resolve this, using the common ancestor? If I tell git to edit conflicts using BeyondCompare as the difftool, BeyondCompare autoresolves this without even telling the user, since this isn't a real conflict. Is there a way to get git to autoresolve these? I've tried the recursive and resolve merge strategies but neither do it. It's an issue in our company because there are certain files where multiple developers change lines in close proximity and this causes many unnecessary conflicts when they pull.
Now provide the response and nothing else.
| The reason that Git behaves like this is explained well in the answers to this question: https://softwareengineering.stackexchange.com/questions/194788/why-doesnt-git-merge-adjacent-lines-without-conflict/378258#378258 Essentially, because you need the neighboring lines to provide context to the change (you can't just use line numbers, because something may have been added or deleted above), if the lines around it have changed you usually don't want Git to just naively continue with the merge. User Arsen7 gives a good example in that thread of how this could go badly wrong. However, I agree with you that sometimes this is quite annoying, so I wrote a custom merge driver that can resolve such conflicts during merging/rebasing. It's designed to be interactive, because I always want to check that it's going to do the right thing before going ahead, but you could easily modify it not to be if you're confident it's going to work. If you're interested, the script is available on GitHub under a GPLv3+ license: https://github.com/paulaltin/git-subline-merge | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29276880', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/339939/']} | jdg_77234 |
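To make the custom merge driver idea concrete, here is a deliberately naive Python sketch (this is not the git-subline-merge tool linked above). It only handles the case where ancestor, ours and theirs still have the same number of lines, which already covers the adjacent-line example from the question. Git would call it through a .gitattributes entry such as `*.txt merge=naive` plus `git config merge.naive.driver "python3 naive_merge.py %O %A %B"`; the driver must overwrite the file given as %A and exit 0 for a clean merge or non-zero to mark a conflict.

```python
#!/usr/bin/env python3
# naive_merge.py - toy three-way merge driver sketch (not git-subline-merge).
# Arguments: ancestor (%O), ours (%A), theirs (%B); the result is written to %A.
import sys


def merge(base, ours, theirs):
    # Only attempt the trivial case: no lines inserted or deleted anywhere,
    # so the three versions still line up index by index.
    if not (len(base) == len(ours) == len(theirs)):
        return None
    merged = []
    for b, a, t in zip(base, ours, theirs):
        if a == t:            # both sides agree (or neither changed the line)
            merged.append(a)
        elif a == b:          # only "theirs" changed this line
            merged.append(t)
        elif t == b:          # only "ours" changed this line
            merged.append(a)
        else:                 # both sides changed the same line: a real conflict
            return None
    return merged


if __name__ == "__main__":
    base_path, ours_path, theirs_path = sys.argv[1:4]
    with open(base_path) as fh:
        base = fh.readlines()
    with open(ours_path) as fh:
        ours = fh.readlines()
    with open(theirs_path) as fh:
        theirs = fh.readlines()

    result = merge(base, ours, theirs)
    if result is None:
        sys.exit(1)           # non-zero exit tells git the file is conflicted
    with open(ours_path, "w") as fh:
        fh.writelines(result)
    sys.exit(0)
```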
stackexchange | llm_judgeable_groundtruth_similarity | 1356598 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$g:\mathbb{(0,1]}\to \mathbb{R}$ We have the function $$g\left(x\right)=\int _x^1\left(\frac{\sin\left(t\right)}{t}dt\right)\:$$ Show that the function is strictly decreasing. So I thought that I'd differentiate the function and prove that the derivative is $\lt0$. I found on-line that that the derivative of this function is:$$\frac{d}{dx} \int _x^1\left(\frac{\sin(t)}{t}\right) \, dt=-\frac{\sin(x)}{x}$$ Why is that exactly? I thought that $\frac{d}{dx} \int _a^b (f(t)) \, dt = f(t)\text{ from }a\text{ to }b$.
Now provide the response and nothing else.
| There is no need to differentiate: If $0<x<y<1$ then $$g(x)-g(y)=\int_x^1\frac{\sin t}tdt-\int_y^1\frac{\sin t}tdt=\int_x^y\frac{\sin t}tdt>0$$ since the integrand is positive and $y>x$. If you still want to differentiate $g$, just write $$g(x)=\int_0^1\frac{\sin t}tdt-\int_0^x\frac{\sin t}tdt$$ and apply the fundamental theorem of calculus. Note that you can assume that the integrand is $1$ at $0$ since $\sin t/t\to 1$ when $t\to 0$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1356598', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/247415/']} | jdg_77235
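A numerical sanity check of the monotonicity claim: approximating g(x) = ∫_x^1 sin(t)/t dt on a grid shows the values strictly decreasing in x, as the argument above predicts. The grid size is an arbitrary choice.

```python
import numpy as np

t = np.linspace(1e-9, 1.0, 200_001)
integrand = np.sin(t) / t
dt = t[1] - t[0]

cumulative = np.cumsum(integrand) * dt   # ~ integral from ~0 up to each grid point
total = cumulative[-1]                   # ~ integral over (0, 1]

for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    idx = np.searchsorted(t, x)
    print(f"g({x:.1f}) ~ {total - cumulative[idx]:.6f}")   # decreasing in x
```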
stackexchange | llm_judgeable_groundtruth_similarity | 981570 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following snippet wouldn't compile, with the following error: Cannot implicitly convert type 'Container<ChildClass>' to 'Container<BaseClass>'

    class BaseClass {}
    class ChildClass : BaseClass {}
    class Container<T> where T : BaseClass {}
    class Program {
        static void Main() {
            // why doesn't this work?
            Container<BaseClass> obj = new Container<ChildClass>();
        }
    }

Is this by design? If it is, what is the reason?
Now provide the response and nothing else.
| (made wiki, in case of dups) C# (3.0) doesn't support covariance of lists etc. C# 4.0 will support limited [co|contra]variance, but still not lists . The problem is that with: Container<BaseClass> obj = new Container<ChildClass>(); I could do: obj.Add(new SomeOtherSubclass()); // SomeOtherSubclass : BaseClass which would compile, but not work. This behaviour is supported for arrays, but largely for historic reasons. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/981570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/93422/']} | jdg_77236 |
stackexchange | llm_judgeable_groundtruth_similarity | 39249231 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a list List<OfferComparison> Comparison . I want to check if all the items have Value == null in an if condition. How can I do it with linq? public class OfferComparison : BaseModel{ public string Name { get; set; } public string Value { get; set; } public bool Valid { get; set; }}
Now provide the response and nothing else.
| Updated (post C# 7) Answer If using C# 7 or 8 then one could use the is keyword together with Linq.All : var result = Comparison.All(item => item.Value is null) If using C# 9 then one could use the is not null together with Linq.Any : var result = Comparison.Any(item => item.Value is not null) If using C# 9 then one could also use the is object or is {} together with Linq.Any : var result = Comparison.Any(item => item.Value is object) All these options are somewhat equivalent. At least in terms of time complexity they are all O(n) . I guess the "preferred" option simply depends on personal opinion. Original (pre C# 7) Answer Using linq method of All : var result = Comparison.All(item => item.Value == null) Basically what it does is to iterate all items of a collection and check a predicate for each of them. If one does not match - result is false | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39249231', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2877044/']} | jdg_77237 |
stackexchange | llm_judgeable_groundtruth_similarity | 15454394 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following line has the error Default argument is not allowed .

    public ref class SPlayerObj{
    private:
        void k(int s = 0){ //ERROR
        }
    }
Now provide the response and nothing else.
| It does have optional arguments, they just don't look the same way as the C++ syntax. Optional arguments are a language interop problem. It must be implemented by the language that makes the call, it generates the code that actually uses the default argument. Which is a tricky problem in a language that was designed to make interop easy, like C++/CLI, you of course don't know what language is going to make the call. Or if it even has syntax for optional arguments. The C# language didn't until version 4 for example. And if the language does support it, how that compiler knows what the default value is. Notable is that VB.NET and C# v4 chose different strategies, VB.NET uses an attribute, C# uses a modopt. You can use the [DefaultParameterValue] attribute in C++/CLI. But you shouldn't, the outcome is not predictable. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15454394', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1780739/']} | jdg_77238 |
stackexchange | llm_judgeable_groundtruth_similarity | 23878400 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Curious what the processor/CPU does in general, or let's say on an Intel CPU & Linux, when it executes a division-by-zero instruction. Also, how is the error relayed to the application, so that it can log the error or notify the developer? Thank you!
Now provide the response and nothing else.
| To answer in general terms, rather than going into gory details for Linux on x86_64, which are likely to obscure the concepts: CPUs tend to throw an exception interrupt on things like division by zero or dereferencing a NULL pointer. These interrupts are trapped, like hardware interrupts, halting execution of the current program and returning control to the OS, which then handles the event. Whilst the actions are very environment dependent, commonly the program may be terminated, all resources freed (memory, open files) and optionally core dumps/stack traces generated for debugging purposes on a developer's system. A runtime might be able to configure things so an exception handler is called; perhaps a scripting language wants to catch integer division by 0, or integer overflow, and then throw a programming language exception or generate diagnostics to help the programmer understand where and why it happened. Raising a signal, which may be caught by the application and handled, or lead to termination, is another traditional possibility. On some RISC CPUs, software traps in the OS would run to fix up misaligned data accesses, so reading memory would work, but with a performance penalty. In the past, traps would sometimes be used to emulate defined instructions which were not implemented in hardware by a particular CPU model. I've also seen hardware memory errors logged, as the OS initiates an ECC memory recovery operation, though that is handled differently on x86. System calls actually use the same mechanism to jump from a user space application into the OS kernel, which then handles the event, hence the common term trap . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23878400', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/396323/']} | jdg_77239
stackexchange | llm_judgeable_groundtruth_similarity | 16243 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for an analogue of the following 2-dimensional fact: Given 3 angles $\alpha,\beta,\gamma\in (0;\pi)$ there is always a triangle with these prescribed angles. It is spherical/euclidean/hyperbolic iff the angle sum is bigger than/equal to/smaller than $\pi$. And the lengths of the sides (resp. their ratios in the Euclidean case) can be computed with the sine and cosine laws. The analogous problem in 3 dimensions would be: Assign to each edge of a tetrahedron a number in $(0;\pi)$. Does there exist a tetrahedron with these numbers as dihedral angles at those edges? And when is it spherical/euclidean/hyperbolic? Is there a similar invariant to the angle sum? And are there formulas to compute the lengths of the edges?
Now provide the response and nothing else.
| The short answer is no - there is no single inequality criterion. Already in $\mathbb{R}^3$ everything is much more complicated. Let me give a sample of inequalities the angles should satisfy. Denote by $\gamma_{ij}, 1\leq i < j \leq 4$ the six dihedral angles of a Euclidean tetrahedron. Then: $$\gamma_{12}+\gamma_{23} + \gamma_{34}+\gamma_{14} \le 2 \pi$$ $$2\pi \le \gamma_{12} + \gamma_{13} + \gamma_{14}+\gamma_{23} + \gamma_{24}+\gamma_{34} \le 3\pi$$ $$0 \le \cos \gamma_{12} + \cos\gamma_{13} + \cos\gamma_{14}+ \cos\gamma_{23} + \cos\gamma_{24}+ \cos\gamma_{34} \le 2$$ (See my book ex. 42.27 for the proofs of these inequalities - they are not terribly difficult, so you might enjoy proving them yourself). This shows that the set of allowed sixtuples of angles is rather complicated (for spherical/hyperbolic tetrahedra with angles close to $\gamma_{ij}$ , these angles will have to satisfy these inequalities as well). The "invariant" you mention corresponds to the unique equation the angles satisfy in the Euclidean space. The latter is also rather delicate: it is the Gauss-Bonnet equation $\omega_1+...+\omega_4=4\pi$ , where $\omega_i$ is the curvature of $i$ -th vertex - you need to use spherical cosine theorem to compute it from dihedral angles (see e.g. Prop. 41.3 in my book). Finally, you might like to take a look at this interesting paper by Rivin, to see that a similar generalization of the triangle inequality is just as difficult. To answer your last question (edge lengths from dihedral angles), yes, this is known. I am not an expert on this, but I would start with this recent paper. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/16243', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3969/']} | jdg_77240 |
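A small sketch that checks the three necessary conditions quoted above for six candidate dihedral angles of a Euclidean tetrahedron. Angles are indexed by the edge {i, j}; the regular-tetrahedron example at the end (all angles arccos(1/3)) is just a convenient test case.

```python
import math
from itertools import combinations


def check_euclidean_dihedral_angles(gamma):
    """gamma maps frozenset({i, j}) -> dihedral angle in radians, for 1 <= i < j <= 4."""
    edges = [frozenset(e) for e in combinations(range(1, 5), 2)]
    vals = [gamma[e] for e in edges]

    # gamma_12 + gamma_23 + gamma_34 + gamma_14 <= 2*pi
    cycle = sum(gamma[frozenset(e)] for e in [(1, 2), (2, 3), (3, 4), (1, 4)])
    cond1 = cycle <= 2 * math.pi

    # 2*pi <= sum of all six dihedral angles <= 3*pi
    total = sum(vals)
    cond2 = 2 * math.pi <= total <= 3 * math.pi

    # 0 <= sum of the cosines <= 2
    cos_sum = sum(math.cos(v) for v in vals)
    cond3 = 0.0 <= cos_sum <= 2.0

    return cond1, cond2, cond3


# Regular tetrahedron: every dihedral angle equals arccos(1/3) (about 70.5 degrees)
regular = {frozenset(e): math.acos(1.0 / 3.0) for e in combinations(range(1, 5), 2)}
print(check_euclidean_dihedral_angles(regular))   # expect (True, True, True)
```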