Dataset columns:
source: stringclasses (1 value)
task_type: stringclasses (1 value)
in_source_id: stringlengths (1 to 8)
prompt: stringlengths (209 to 40.4k)
gold_standard_solution: stringlengths (0 to 56.7k)
verification_info: stringclasses (1 value)
metadata: stringlengths (138 to 225)
problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
2840559
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In web development, when session state is enabled, a session id is stored in a cookie (in cookieless mode, the query string is used instead). In ASP.NET, the session id is encrypted automatically. There are plenty of topics on the internet about how you should encrypt your cookie, including the session id. I can understand why you would want to encrypt private info such as DOB, but private info should not be stored in a cookie in the first place. So for other cookie values such as the session id, what is the purpose of encryption? Does it add security at all? No matter how you secure it, it will be sent back to the server for decryption. To be more specific, for authentication purposes:

1. Turn off session state; I don't want to deal with session timeouts any more.
2. Store some sort of id value in the cookie; on the server side, check if the id value exists and matches, and if it does, authenticate the user.
3. Let the cookie value expire when the browser session ends.

versus the ASP.NET forms authentication mechanism (it relies on session or session id, I think): does the latter offer better security? Now provide the response and nothing else.
Attacks on sessions like session hijacking aim for a valid session ID. If you were to encrypt the session ID, attackers would simply aim for the encrypted session ID instead, and you wouldn't gain any advantage. So encrypting the session ID is useless. Remember that the session ID is just a random value used to identify a session. Attackers don't need to know whether that random value has some specific meaning; they just need to know the value itself. If you want to secure your session, use HTTPS so the whole HTTP communication is encrypted via SSL/TLS, and set the cookie with the Secure flag (so the cookie is only sent over HTTPS) and the HttpOnly flag (so it cannot be read locally via JavaScript).
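For example, in ASP.NET both flags can be switched on globally in web.config; this is only a minimal sketch, and requireSSL assumes the site is actually served over HTTPS:

<system.web>
  <!-- Mark cookies HttpOnly and only send them over HTTPS -->
  <httpCookies httpOnlyCookies="true" requireSSL="true" />
</system.web>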
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2840559', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/325486/']}
jdg_81541
stackexchange
llm_judgeable_groundtruth_similarity
172344
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create a bifurcation diagram of a map I iterate, but can't manage to proprely save it as a list within lists and plot it The code I have is list = RecurrenceTable[{x[n] == r (x[n - 1] - x[n - 1]^3) /. r -> 2.1, x[1] == 0.5}, x, {n, 1, 200}] newList = DeleteDuplicates[Take[list, -100]]; The first list is the iterative map, which I run for 200 times, and the last one is the "final values" of the list. In the example above I do it for a single value of r , however I would like to create lists like newList for many different r running between 1 to 5, and plot all newList I have as a function of r Now provide the response and nothing else.
Here's one way, using Replace to wrap your points x with {r, x} and Table to iterate over r.

res = Flatten[
   Table[
     list = RecurrenceTable[{x[n] == r (x[n - 1] - x[n - 1]^3), x[1] == 0.5},
       x, {n, 1, 200}];
     Replace[DeleteDuplicates[Take[list, -100]], x_ -> {r, x}, 1],
     {r, 1.0, 3.0, 0.01}],
   1]

ListPlot[res]

Note that r > 3.0 doesn't seem to converge, so I stopped there.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/172344', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/26549/']}
jdg_81542
stackexchange
llm_judgeable_groundtruth_similarity
44195496
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm installing LightGBM on OSX, following the installation guide. While installing cmake, Homebrew gives this warning message:

Warning: cmake-3.8.1 already installed, it's just not linked.

Then I try:

$ brew link cmake

and it gives:

Linking /usr/local/Cellar/cmake/3.8.1...
Error: Could not symlink share/man/man7/cmake-buildsystem.7
/usr/local/share/man/man7 is not writable.

Now provide the response and nothing else.
The proposition of Mark Setchell to use

sudo chown -R $(whoami):admin /usr/local/share/man

is right. According to the Homebrew troubleshooting page: "If commands fail with permissions errors, check the permissions of /usr/local's subdirectories. If you're unsure what to do, you can run

cd /usr/local && sudo chown -R $(whoami) bin etc include lib sbin share var Frameworks

" But the proposition of Nano to run brew with sudo is not a good one. According to the Homebrew FAQ: "Homebrew is designed to work without using sudo. You can decide to use it but we strongly recommend not to do so. If you have used sudo and run into a bug then this is likely to be the cause. Please don't file a bug report unless you can reproduce it after reinstalling Homebrew from scratch without using sudo."
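Putting that together for this particular error, the fix is just to take ownership of the directory named in the message and then retry the link (adjust the path if your error names a different directory):

sudo chown -R $(whoami):admin /usr/local/share/man
brew link cmake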
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/44195496', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5527280/']}
jdg_81543
stackexchange
llm_judgeable_groundtruth_similarity
335793
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This homework problem is giving me some trouble. My current thought process is this: In order for $ f = u + iv $ to be analytic, it must differentiable. Therefore, it must satisfy the Cauchy-Riemann equation, $ f_{y} = if_{x} $ , and $ f_{x} \text{ and } f_{y} $ must be continuous. The Cauchy-Riemann equation is equivalent to: $$ u_{x} = v_{y},\\ u_{y} = -v_{x} $$ Because the function must be analytic, they must be differentiable on open neighborhoods of $ z = x + iy $ as well as at z. Because of this fact, I tried doing this: $$ u_{x} = v_{y} \implies v_{y} = 2x \implies v = 2xy + h(x)\\ u_{y} = -v_{x} \implies v_{x} = 2y \implies v = 2xy + g(y) $$ These two lines imply that $ h(x) = g(y) = constant $, so $ v = 2xy + c $. Thus, functions of the form $ f = x^{2} - y^{2} + i(2xy + c) $ should be analytic. I have two problems. The first problem is that I'm not confident my derivation of v is sound mathematics. Secondly, even if the equation I derived is analytic, how can be certain only equations of this form are analytic? Thanks for the help. I appreciate it. Now provide the response and nothing else.
All harmonic conjugates are the same up to a constant (see @Julien's comment for why), so we only need to find a single conjugate. Here's one way to do this question without the CR equations: $$u(x,y) = x^2 - y^2 = \operatorname{Re}(z^2)$$ This means that $u$ is the real part of the analytic function $z^2 + ci$. Straight away we can say $v$ is the imaginary part of $z^2 + ci$, and so $$v(x,y) = \operatorname{Im}(z^2 + ci) = 2xy + c$$
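As a quick sanity check, this $v$ does satisfy the Cauchy-Riemann equations for the given $u$: $u_x = 2x = v_y$ and $u_y = -2y = -v_x$, so $f = u + iv = z^2 + ci$ is indeed analytic.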
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/335793', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/62360/']}
jdg_81544
stackexchange
llm_judgeable_groundtruth_similarity
434074
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Which, if any, axioms of ZFC are known to not be derivable from the other axioms? Which, if any, axioms of PA are known to not be derivable from the other axioms? Now provide the response and nothing else.
There are several interesting issues here. The first is that there are different axiomatizations of PA and ZFC. If you look at several set theory books you are likely to find several different sets of axioms called "ZFC". Each of these sets is equivalent to each of the other sets, but they have subtly different axioms. In one set, the axiom scheme of comprehension may follow from the axiom scheme of replacement; in another set of axioms it may not. That makes the issue of independence harder to answer in general for ZFC; you have to really look at the particular set of axioms being used. PA has two different common axiomatizations. For the rest of this answer I will assume the axiomatization from Kaye's book Models of Peano Arithmetic, which is based on the axioms for a discretely ordered semiring.

The second issue is that both PA and ZFC (in any of their forms) have an infinite number of axioms, because they both have infinite axiom schemes. Moreover, neither PA nor ZFC is finitely axiomatizable. That means, in particular, that given any finite number of axioms of one of these theories, there is some other axiom that is not provable from the given finite set.

Third, just to be pedantic, I should point out that, although PA and ZFC are accepted to be consistent, if they were inconsistent, then every axiom would follow from a minimal inconsistent set of axioms. The practical effect of this is that any proof of independence has to either prove the consistency of the theory at hand, or assume it.

Apart from these considerations, there are other things that can be said, depending on how much you know about PA and ZFC.

In PA, the axiom scheme of induction can be broken into infinitely many infinite sets of axioms in a certain way using the arithmetical hierarchy; these sets of axioms are usually called $\text{I-}\Sigma^0_0$, $\text{I-}\Sigma^0_1$, $\text{I-}\Sigma^0_2$, $\ldots$. For each $k$, $\text{I-}\Sigma^0_k \subseteq \text{I-}\Sigma^0_{k+1}$. The remaining non-induction axioms of PA are denoted $\text{PA}^-$. Then the theorem is that, for each $k$, there is an axiom in $\text{I-}\Sigma^0_{k+1}$ that is not provable from $\text{PA}^- + \text{I-}\Sigma^0_k$. This is true for both common axiomatizations of PA.

In ZFC, it is usually more interesting to ask which axioms do follow from the others. The axiom of the empty set (for the authors who include it) follows from an instance of the axiom scheme of separation and the fact that $(\exists x)[x \in x \lor x \not \in x]$ is a formula in the language of ZFC that is logically valid in first order logic, so ZFC trivially proves that at least one set exists. In ZFC, there are some forms of the axiom scheme of separation that follow from the remainder of ZFC when particular forms of the axiom of replacement are used. The axiom of pairing is also redundant given the other axioms in many presentations. There are likely to be other redundancies in ZFC as well, depending on the presentation. One reason that we do not remove the redundant axioms from ZFC is that it is common in set theory to look at fragments of ZFC in which the axiom of powerset, the axiom scheme of replacement, or both, are removed. So axioms that are redundant when these axioms are included may not be redundant once these axioms are removed.
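To make the empty-set remark concrete: once some set $A$ is known to exist, an instance of separation such as $\{x \in A : x \neq x\}$ already defines the empty set, which is why a separate empty-set axiom is redundant in those presentations.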
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/434074', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/84664/']}
jdg_81545
stackexchange
llm_judgeable_groundtruth_similarity
29928236
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to print all permutations of a string in lexicographic order. I wrote this code:

void permute(char *a, int i, int n)
{
    if (i == (n-1))
        printf("\"%s\"\n", a);
    else
    {
        for (int j = i; j < n; j++)
        {
            swap((a+i), (a+j));
            permute(a, i+1, n);
            swap((a+i), (a+j));
        }
    }
}

Let's take for example the string abc. I want to receive all permutations in lexicographic order as in the left column, but I get the result in the right column:

"abc"   "abc"
"acb"   "acb"
"bac"   "bac"
"bca"   "bca"
"cab"   "cba"  <
"cba"   "cab"  >

Can someone help me with this? I saw some algorithms, but they look difficult. I think I could save all generated strings in an array and then sort this array, but I cannot write this (I'm a beginner in C). Now provide the response and nothing else.
In C

There's a pretty straightforward description of an algorithm (plus implementation) at geeksforgeeks:

"Given a string, print all permutations of it in sorted order. For example, if the input string is "ABC", then output should be "ABC, ACB, BAC, BCA, CAB, CBA". We have discussed a program to print all permutations in this post, but here we must print the permutations in increasing order.

Following are the steps to print the permutations lexicographically:

1. Sort the given string in non-decreasing order and print it. The first permutation is always the string sorted in non-decreasing order.
2. Start generating the next higher permutation. Do it until the next higher permutation is not possible. If we reach a permutation where all characters are sorted in non-increasing order, then that permutation is the last permutation.

Steps to generate the next higher permutation:

1. Take the previously printed permutation and find the rightmost character in it which is smaller than its next character. Let us call this character the 'first character'.
2. Now find the ceiling of the 'first character'. The ceiling is the smallest character on the right of the 'first character' which is greater than the 'first character'. Let us call the ceil character the 'second character'.
3. Swap the two characters found in the above 2 steps.
4. Sort the substring (in non-decreasing order) after the original index of the 'first character'."

I've re-implemented it below:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

void swap(char* left, char* right)
{
    char temp = *left;
    *left = *right;
    *right = temp;
}

int compare (const void * a, const void * b)
{
    return ( *(char*)a - *(char*)b );
}

void PrintSortedPermutations(char* inStr)
{
    // Re-implementation of algorithm described here:
    // http://www.geeksforgeeks.org/lexicographic-permutations-of-string/

    int strSize = strlen(inStr);

    // 0. Ensure input container is sorted
    qsort(inStr, strSize, sizeof(char), compare);

    int largerPermFound = 1;
    do{
        // 1. Print next permutation
        printf("%s\n", inStr);

        // 2. Find rightmost char that is smaller than char to its right
        int i;
        for (i = strSize - 2; i >= 0 && inStr[i] >= inStr[i+1]; --i){}

        // if we couldn't find one, we're finished, else we can swap somewhere
        if (i > -1)
        {
            // 3. find character at index j such that
            // inStr[j] = min(inStr[k]) && inStr[k] > inStr[i] for all k > i
            int j = i+1;
            int k;
            for(k=j;k<strSize && inStr[k];++k)
            {
                if (inStr[k] > inStr[i] && inStr[k] < inStr[j])
                    j = k;
            }

            // 3. Swap chars at i and j
            swap(&inStr[i], &inStr[j]);

            // 4. Sort string to the right of i
            qsort(inStr+i+1, strSize-i-1, sizeof(char), compare);
        }
        else
        {
            largerPermFound = 0;
        }
    }while(largerPermFound);
}

int main(void)
{
    char str[] = "abc";
    PrintSortedPermutations(str);
    return 0;
}

Output:

abc
acb
bac
bca
cab
cba

Live Demo

In C++

std::next_permutation from the <algorithm> library will do this for you, just make sure you sort your container first:

"Return value: true if the function could rearrange the object as a lexicographicaly greater permutation. Otherwise, the function returns false to indicate that the arrangement is not greater than the previous, but the lowest possible (sorted in ascending order)."

For example:

std::string myStr = "abc";
std::stable_sort(std::begin(myStr), std::end(myStr));
do {
    for(auto&& element : myStr)
        std::cout << element << " ";
    std::cout << std::endl;
} while (std::next_permutation(std::begin(myStr), std::end(myStr)));

Output:

a b c
a c b
b a c
b c a
c a b
c b a

Live Demo
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29928236', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4356097/']}
jdg_81546
stackexchange
llm_judgeable_groundtruth_similarity
20526757
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have to send data (json object) to another webserver (java). This is my node.js code var express = require('express');var app = express();app.get('/', function (req, res) { var data = querystring.stringify({ username: "myname", password: " pass" }); var options = { host: 'www.javaserver.com', port: 8070, path: '/login', method: 'POST', headers: { 'Content-Type': 'application/x-www-form-urlencoded', 'Content-Length': Buffer.byteLength(data) } }; var req = http.request(options, function (res) { res.setEncoding('utf8'); res.on('data', function (chunk) { console.log("body: " + chunk); }); }); req.write(data); req.end();});app.listen(8090); This is not working. How can I do this? Now provide the response and nothing else.
You are re-declaring the req and res variables for the POST request, which shadows the ones from the Express handler. I have updated your code and tested it working with requestb.in:

var express = require('express');
var querystring = require('querystring');
var http = require('http');

var app = express();

app.get('/', function (req, res) {
    var data = querystring.stringify({
        username: "myname",
        password: " pass"
    });

    var options = {
        host: 'requestb.in',
        port: 80,
        path: '/nfue7rnf',
        method: 'POST',
        headers: {
            'Content-Type': 'application/x-www-form-urlencoded',
            'Content-Length': Buffer.byteLength(data)
        }
    };

    var httpreq = http.request(options, function (response) {
        response.setEncoding('utf8');
        response.on('data', function (chunk) {
            console.log("body: " + chunk);
        });
        response.on('end', function () {
            res.send('ok');
        });
    });
    httpreq.write(data);
    httpreq.end();
});

app.listen(8090);

Please update the request host and path in the code to the values you need. Let me know if it still doesn't work for you.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20526757', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2039264/']}
jdg_81547
stackexchange
llm_judgeable_groundtruth_similarity
7496657
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In a sample C++ code I open a file and print each char in hex. The file has only 16 chars, but why is ffffff printed along with some of the hex values?

char buff[256];
// buff filled with fread
for(i=0;i<16;i++)
    printf("%x",buff[i]);

Output is: 4affffff67ffffffcdffffff

Why is this? Now provide the response and nothing else.
Edit:

printf("%x", (int)(*(unsigned char*)(&buff[i])) );

This should do the trick. My first version was incorrect, sorry. The problem is the sign bit: every value above 127 was treated as negative, and it is then sign-extended when promoted to int, which is where the leading ffffff comes from. Casting to unsigned char solves the problem.
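A slightly shorter cast that is commonly used and should behave the same way (assuming buff just holds raw bytes):

printf("%02x", (unsigned char)buff[i]);

The %02x also zero-pads single-digit bytes so the output stays aligned.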
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7496657', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/518834/']}
jdg_81548
stackexchange
llm_judgeable_groundtruth_similarity
17394166
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Razor C# newbe here, I have the following code: @foreach(var row in db.Query(selectQueryString)){ @: ["@row.Table1", "@row.Table2", "@row.Table3", "@row.Table4", "@row.Table5", "@row.Table6", "@row.Table7",], } That loops out content from my database. However since I'm from Sweden I need to use the non-english characters "å, ä, ö" and these do not show correctly. Looking at what's rendered, here for example "k&#246;p" the letter "ö" is replaced by "&#246;" instead. What i would want is to have the correct letter replacing this, or to know if there is some method I can use to solve this problem. Using UTF-8 <meta charset="utf-8" /> Thanks in advance! Now provide the response and nothing else.
I just realized that this code only does a kind of "set equivalency" check (and now I see that you actually did say that, what a lousy reader I am!). This can be achieved much more simply:

template <class T>
static bool compareVectors(vector<T> a, vector<T> b)
{
    std::sort(a.begin(), a.end());
    std::sort(b.begin(), b.end());
    return (a == b);
}

You'll need to include the header algorithm. If your vectors are always of the same size, you may want to add an assertion at the beginning of the method:

assert(a.size() == b.size());

This will be handy in debugging your program if you ever perform this operation on unequal lengths by mistake. Otherwise, the vectors can't be the same if they have unequal length, so just add

if ( a.size() != b.size() )
{
    return false;
}

before the sort instructions. This will save you lots of time. The complexity of this is technically O(n*log(n)) because it's mainly dependent on the sorting, which (usually) is of that complexity. This is better than your O(n^2) approach, but might be worse due to the needed copies. This is irrelevant if your original vectors may be sorted.

If you want to stick with your approach, but tweak it, here are my thoughts on this: You can use std::find for this:

template <class T>
static bool compareVectors(const vector<T> &a, const vector<T> &b)
{
    const size_t n = a.size(); // make it const and unsigned!
    std::vector<bool> free(n, true);
    for ( size_t i = 0; i < n; ++i )
    {
        bool matchFound = false;
        auto start = b.cbegin();
        while ( true )
        {
            const auto position = std::find(start, b.cend(), a[i]);
            if ( position == b.cend() )
            {
                break; // nothing found
            }
            const auto index = position - b.cbegin();
            if ( free[index] )
            {
                // free pair found
                free[index] = false;
                matchFound = true;
                break;
            }
            else
            {
                start = position + 1; // search in the rest
            }
        }
        if ( !matchFound )
        {
            return false;
        }
    }
    return true;
}

Another possibility is replacing the structure used to store free positions. You may try a std::bitset or just store the used indices in a vector and check whether a match isn't in that index-vector. If the outcome of this function is very often the same (so either mostly true or mostly false) you can optimize your data structures to reflect that. E.g. I'd use the list of used indices if the outcome is usually false, since only a handful of indices might need to be stored. This method has the same complexity as your approach. Using std::find to search for things is sometimes better than a manual search. (E.g. if the data is sorted and the compiler knows about it, this can be a binary search.)
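A tiny usage sketch of the sort-based version above, assuming the compareVectors template shown there is in scope (the names and values are just for illustration):

#include <cassert>
#include <vector>

int main()
{
    std::vector<int> a{1, 2, 2, 3};
    std::vector<int> b{3, 2, 1, 2};
    std::vector<int> c{1, 2, 3, 3};

    assert(compareVectors(a, b));   // same elements, different order
    assert(!compareVectors(a, c));  // different multiset of elements
    return 0;
}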
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17394166', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1590525/']}
jdg_81549
stackexchange
llm_judgeable_groundtruth_similarity
42594
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: One of my beloved theorems in matrix analysis is the fact that the map $H\mapsto (\det H)^{1/n}$, defined over the convex cone $HPD_n$ of Hermitian positive definite matrices, is concave. This is accurate, if we think that this map is homogeneous of degree one, thus linear over rays. it has important applications in many branches of mathematics, it has many elegant proofs. I know at least three complety different ones. I am interested to learn in both aspects. Which is your prefered proof of the concavity ? Is it useful in your own speciality ? In order to avoid influencing the answers, I decide not to give any example. But those who have visited my page may know my taste. Now provide the response and nothing else.
An easy reduction shows that one can suppose that one of the matrices is the identity and the other diagonal: the inequality then reduces to the convexity of $f(x)=\ln(1+e^x)$.
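For completeness, the convexity of this $f$ is immediate: $f'(x) = \frac{e^x}{1+e^x}$ and $f''(x) = \frac{e^x}{(1+e^x)^2} > 0$ for all $x$.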
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/42594', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8799/']}
jdg_81550
stackexchange
llm_judgeable_groundtruth_similarity
438880
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have discrete data that comes from the distribution of a discrete random variable Y. The data appears to follow the normal distribution (i.e. if I make a KDE plot it looks like a normal distribution). I've conducted Bayesian inference and assumed that the mean mu follows a normal prior. Thus, the posterior of mu also follows a normal distribution. Now I'm interested in making predictions for future samples from the pdf of Y given the samples I've already seen, but I'm unsure how to proceed. It doesn't make sense to use a normal distribution for prediction because all samples can only take non-negative integer values. To me it doesn't seem to make sense to use a Poisson because the variance doesn't equal the mean, nor does it make sense to use a Binomial because the value that a sample can take is theoretically unbounded (but in practice it is often bounded by ~25). Now provide the response and nothing else.
It's true that you're not missing information when you use only $k-1$ categories. In linear models, we are all familiar with the dummy variable trap and the relationship between a model with $k-1$ levels and an intercept and a model with $k$ levels and no intercept. However, you're using a tree-based model, so the mechanics of how recursive binary splits work are important! In the case of a factor with 2 levels, e.g. "red" and "blue", it's obvious that using the $k-1$ one-hot method is equivalent to choosing the $k$ one-hot method. This is because NOT blue implies red. In this case, there is no difference. But for $k>2$ categories, you'll need $k-1$ binary splits to isolate the omitted level (the $k$th level). So if you have 3 levels, e.g. "red", "green", "blue", but you only include one-hot features for "red" and "green", it will take 2 successive splits to isolate the "blue" samples. This is because if you split on "red", the children are nodes for red and NOT red = green OR blue. To isolate "blue" when the category "blue" is omitted from the coding scheme, you'll have to split again on "green", because then the children nodes of green OR blue will be blue and green. As $k$ increases, this problem becomes more pronounced, as you'll require more splits. This may interact with your other hyperparameters in strange ways, because specifying a maximum tree depth is a common strategy to avoid overfitting with boosted trees / xgboost. If isolating category $k$ isn't important, then this effect may not matter at all for your problem. But if category $k$ is important, you'll tend to grow very deep trees to try and isolate it, either via the categorical variables or else by identifying latent interactions of other variables.
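A small sketch of the split counts for the 3-level example, under the assumption that "blue" is the omitted level:

With features is_red, is_green only ($k-1$ coding):
  split 1: is_red?   separates {red} from {green, blue}
  split 2: is_green? separates {green} from {blue}   (blue isolated after 2 splits)

With features is_red, is_green, is_blue ($k$ coding):
  split 1: is_blue?  separates {blue} from {red, green}   (blue isolated after 1 split)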
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/438880', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/159515/']}
jdg_81551
stackexchange
llm_judgeable_groundtruth_similarity
15715744
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have configured my WebApiConfig like this: public static void Register(HttpConfiguration config){ config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{action}/{id}", defaults: new { id = RouteParameter.Optional } );} I have one method which accepts one parameter. The accessing URI is http://localhost:8598/api/WebApi/GetLocationCategory/87 . This gives me an error: No HTTP resource was found that matches the request URI 'http://localhost:8598/api/WebApi/GetLocationCategory/87' Controller: public IEnumerable<LocationCategory_CLS> GetLocationCategory(int CatID){ var LocCats = (from lct in entities.tdp_LocationCategories join lc in entities.tdp_LocationMaster on lct.FK_LocationID equals lc.LocationID where lct.IsApproved == 0 && lct.FK_CategoryID == CatID select new { lc.LocationID, lc.LocationName }).ToList(); List<LocationCategory_CLS> loc = new List<LocationCategory_CLS>(); foreach (var element in LocCats) { loc.Add(new LocationCategory_CLS { LocationID = element.LocationID, LocationName = element.LocationName }); } return loc;} Now provide the response and nothing else.
Try changing your Controller method to:

public IEnumerable<LocationCategory_CLS> GetLocationCategory(int id)   // <-- Change
{
    var LocCats = (from lct in entities.tdp_LocationCategories
                   join lc in entities.tdp_LocationMaster
                       on lct.FK_LocationID equals lc.LocationID
                   where lct.IsApproved == 0 && lct.FK_CategoryID == id
                   select new
                   {
                       lc.LocationID,
                       lc.LocationName
                   }).ToList();

    List<LocationCategory_CLS> loc = new List<LocationCategory_CLS>();
    foreach (var element in LocCats)
    {
        loc.Add(new LocationCategory_CLS
        {
            LocationID = element.LocationID,
            LocationName = element.LocationName
        });
    }
    return loc;
}

The only change is renaming the input parameter from CatID to id. It has worked for me many times.

Edit: It's been a long time; looking back, I think I know the reason now. Words Like Jared is correct, it's all to do with the routing we specify. If I have a (default) route such as:

routes.MapRoute(
    "Default",                                             // Route name
    "{controller}/{action}/{id}",                          // URL with parameters
    new { controller = "Home", action = "Index", id = "" } // Parameter defaults
);

and my URL is /MyController/GetLocationCategory/123, it will be equivalent to /MyController/GetLocationCategory?id=123. Similarly, if I want to change my parameter name from id to, say, CatId, then I need to change the query string parameter (the way I am calling my controller action would change), which would now be: /MyController/GetLocationCategory?CatId=123
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15715744', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/815244/']}
jdg_81552
stackexchange
llm_judgeable_groundtruth_similarity
5244830
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been battling with passing data between two view controllers for a couple of days now and getting very confused. I'm new to Objective-C and finding some parts tricky to get my head round. I have a Navigation Controller, FirstView is a form and on this form I have a button which loads SecondView which contains a TableView for the user to select some options. I then want to pass the selection back to the FirstView controller and display the data etc... I have read alot about this (stackoverflow, iphonedevsdk, CS 193P Resources) and the options i've seen are, 1) ivar in app delegate (dirty and not recommended)2) create a singleton3) create a data model class4) Use protocols and delegates (recommended by apple) I want to do things right and want to use option 4 - Delegates in my program Problem is, I don't understand delegates and how to setup and implement them. Could anyone provide a basic example on how to setup and pass an NSArray using the delegate and 2 view controllers. Thanks in advanceMatt Now provide the response and nothing else.
Delegation is the correct pattern to be used in this case, but your description doesn't look much like delegation as it is using a global variable. Perhaps you're storing global variables in your App Delegate, which you should always try to avoid. Here's a rough outline of what the code should look like:

SecondViewController.h:

@protocol SecondViewControllerDelegate;

@interface SecondViewController : UIViewController
{
    id<SecondViewControllerDelegate> delegate;
    NSArray* someArray;
}

@property (nonatomic, assign) id<SecondViewControllerDelegate> delegate;
@property (nonatomic, retain) NSArray* someArray;

@end

@protocol SecondViewControllerDelegate
- (void)secondViewControllerDidFinish:(SecondViewController*)secondViewController;
@end

SecondViewController.m:

@implementation SecondViewController

@synthesize delegate;
@synthesize someArray;

- (void)dealloc
{
    [someArray release];
    [super dealloc];
}

- (void)someMethodCalledWhenUserIsDone
{
    [delegate secondViewControllerDidFinish:self];
}

@end

FirstViewController.h:

#import "SecondViewController.h"

@interface FirstViewController : UIViewController <SecondViewControllerDelegate>
{
    ...
}

@end

FirstViewController.m:

@implementation FirstViewController

- (void)secondViewControllerDidFinish:(SecondViewController*)secondViewController
{
    NSArray* someArray = secondViewController.someArray;
    // Do something with the array
}

@end
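One piece that trips people up: FirstViewController has to assign itself as the delegate before showing the second screen, otherwise the callback never fires. A rough sketch (the alloc/init and push below are only illustrative; adapt them to however you actually create and present SecondViewController):

// Somewhere in FirstViewController, when the user taps the button
SecondViewController *secondViewController = [[SecondViewController alloc] init]; // illustrative creation
secondViewController.delegate = self;
[self.navigationController pushViewController:secondViewController animated:YES];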
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5244830', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/634273/']}
jdg_81553
stackexchange
llm_judgeable_groundtruth_similarity
107195
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: It is not difficult to show (even without Weyl criterion) that the sequence $\sqrt{n}$, $n=1,2,\ldots$ is equidistributed mod 1. However, I need a reference to this result. Can you help me? Thanks. Now provide the response and nothing else.
Fejer's theorem: If $w(t)$ is a function with continuous first and second derivatives whose signs are eventually constant, and if $t \cdot w'(t)$ goes to infinity for $t$ to infinity, and $w(t)/t$ goes to zero, then $(w(n): n=1,2,3,...)$ is uniformly distributed. In particular, for any fixed $b>0$ and $\alpha$ between $0$ and $1$, $b\cdot n^\alpha$ is uniformly distributed. Reference: Hlawka, The theory of uniform distribution, page 23. Certainly also in Kuipers-Niederreiter, which I do not have here at the moment. (EDIT: After a bit of prodding, Google helped me to find Theorem 2.5 on page 13 in Kuipers-Niederreiter: If the sequence of differences $b_n:=a_{n+1}-a_n$ converges monotonically to zero, and $n b_n$ diverges to infinity, then $a_n$ is u.d. mod 1.)
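Applied to the question: take $w(t) = \sqrt{t}$. Then $w'(t) = \tfrac{1}{2\sqrt{t}}$ and $w''(t) = -\tfrac{1}{4}t^{-3/2}$ have eventually constant signs, $t \cdot w'(t) = \tfrac{\sqrt{t}}{2} \to \infty$, and $w(t)/t = \tfrac{1}{\sqrt{t}} \to 0$, so by Fejer's theorem $\sqrt{n}$ is uniformly distributed mod 1.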
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/107195', 'https://mathoverflow.net', 'https://mathoverflow.net/users/21700/']}
jdg_81554
stackexchange
llm_judgeable_groundtruth_similarity
40682748
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the public folder inside my laravel project and I have some js and css files inside it. I'm using the asset function and even though it's referencing to the public folder, my files aren't loaded on the page. I'm using this code to load (it's only one example, there are more files): <link href="{{ asset('css/style.css') }}" rel="stylesheet"> And on the browser's console, I'm geting something like this: Failed to load resource: the server responded with a status of 404 (Not Found) http://localhost:8000/css/style.css Well, I tried to revert the last commit, but no success. Tried to change to URL::asset() function, nothing. Tried everything from the following link: http://laravel.io/forum/09-17-2014-problem-asset-not-point-to-public-folder?page=1 and success. Please, a little help? Thanks! Now provide the response and nothing else.
I was having the same problem. This is due to moving the .htaccess file from public to the root of the project in order to serve localhost/project rather than localhost/project/laravel. In that case you need to include public in the asset path as well:

<link href="{{ asset('public/css/app.css') }}" rel="stylesheet">

Or, modify the asset function in /Illuminate/Foundation/helpers.php:

if (! function_exists('asset')) {
    /**
     * Generate an asset path for the application.
     *
     * @param  string  $path
     * @param  bool    $secure
     * @return string
     */
    function asset($path, $secure = null)
    {
        return app('url')->asset("public/".$path, $secure);
    }
}

The preceding method is not a good way to do it, though. Anyway, this would have been easier if there were a config option for setting the asset path.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40682748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4368555/']}
jdg_81555
stackexchange
llm_judgeable_groundtruth_similarity
18665714
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use an array in a table, and want to insert values into the array only if the array doesnt already contain the value. I am not sure what would be the best approach for performance here. "models.py" class Documents(models.Model):documentID = models.AutoField(primary_key=True)class VisibleDocuments(models.Model): visibleTo = models.ForeignKey(UserProfile, null=True) documents = ListField() When I add documents to visibleDocuments, I do the following: "views.py" visibleDocuments.documents.append(document.documentID)visibleDocuments.save() However, the problem with this approach is that it leads to duplicate documentIDs in the list. I would like to have only unique documentIDs. I wonder what would be the best way to do that in Django? Can I use sets perhaps or should I use some kind of sorted read access, perhaps? Thanks in advance! Solution: Use SetField class VisibleDocuments(models.Model): visibleTo = models.ForeignKey(UserProfile, null=True) documents = SetField() views.py visibleDocuments.documents.add(document.documentID)visibleDocuments.save() Thanks to everybody! Now provide the response and nothing else.
You need to concatenate your string:

document.getElementById(tabName).style.backgroundImage = 'url(buttons/' + imagePrefix + '.png)';

The way you had it, it's just making one long string and not actually interpreting imagePrefix. I would even suggest creating the string separately:

function ChangeBackgroungImageOfTab(tabName, imagePrefix)
{
    var urlString = 'url(buttons/' + imagePrefix + '.png)';
    document.getElementById(tabName).style.backgroundImage = urlString;
}

As mentioned by David Thomas below, you can ditch the double quotes in your string. Here is a little article to get a better idea of how strings and quotes/double quotes are related: http://www.quirksmode.org/js/strings.html
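If you can rely on ES6 support, a template literal builds the same string without manual concatenation; this is just an alternative form of the line above:

var urlString = `url(buttons/${imagePrefix}.png)`;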
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18665714', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1657010/']}
jdg_81556
stackexchange
llm_judgeable_groundtruth_similarity
5266532
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose that I have a .class file, can I get all the methods included in that class ? Now provide the response and nothing else.
Straight from the source: http://java.sun.com/developer/technicalArticles/ALT/Reflection/

Then I modified it to be self contained, not requiring anything from the command line. ;-)

import java.lang.reflect.*;

/**
 Compile with this:
 C:\Documents and Settings\glow\My Documents\j>javac DumpMethods.java

 Run like this, and results follow:
 C:\Documents and Settings\glow\My Documents\j>java DumpMethods
 public void DumpMethods.foo()
 public int DumpMethods.bar()
 public java.lang.String DumpMethods.baz()
 public static void DumpMethods.main(java.lang.String[])
*/
public class DumpMethods {

    public void foo() { }

    public int bar() { return 12; }

    public String baz() { return ""; }

    public static void main(String args[]) {
        try {
            Class thisClass = DumpMethods.class;
            Method[] methods = thisClass.getDeclaredMethods();

            for (int i = 0; i < methods.length; i++) {
                System.out.println(methods[i].toString());
            }
        } catch (Throwable e) {
            System.err.println(e);
        }
    }
}
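If the .class file you have is not already on your classpath, a URLClassLoader can load it from its directory first; this is only a sketch, and the directory path and class name below are placeholders you would replace with your own:

import java.io.File;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;

public class DumpExternal {
    public static void main(String[] args) throws Exception {
        // Root directory containing the compiled .class files (placeholder path)
        URL classesDir = new File("/path/to/classes/").toURI().toURL();
        try (URLClassLoader loader = new URLClassLoader(new URL[] { classesDir })) {
            Class<?> c = loader.loadClass("com.example.SomeClass"); // hypothetical class name
            for (Method m : c.getDeclaredMethods()) {
                System.out.println(m);
            }
        }
    }
}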
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5266532', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/597657/']}
jdg_81557
stackexchange
llm_judgeable_groundtruth_similarity
15554915
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have to write some code for IE8. I have an ng-repeat creating a table filled with: <input production-qty type="text" class="input-mini" maxlength="3" ng-model="day.qtyA" ui-event="{ blur : 'updateProduction(day)' }" ng-disabled="day.type=='H'"> IE8 won't do type=number I want a directive that will ignore key strokes on that input field that are NOT numeric keys....ie....0 - 9 I don't want to let the user type abc and pollute the model and then tell them the value is invalid. I'd rather not let them enter any data that's not valid in the first place. Now provide the response and nothing else.
HTML:

<input production-qty type="text" maxlength="3" ng-model="qty1">

Directive:

app.directive('productionQty', function() {
    return {
        require: 'ngModel',
        link: function (scope, element, attr, ngModelCtrl) {
            function fromUser(text) {
                var transformedInput = text.replace(/[^0-9]/g, '');
                console.log(transformedInput);
                if (transformedInput !== text) {
                    ngModelCtrl.$setViewValue(transformedInput);
                    ngModelCtrl.$render();
                }
                return transformedInput;  // or return Number(transformedInput)
            }
            ngModelCtrl.$parsers.push(fromUser);
        }
    };
});

Plunker

See also filters on ng-model in an input. My answer above is modeled off pkozlowski.opensource's answer. I looked at ng-pattern, but it does not filter what is shown in the textbox. It sets $scope.qty1 to undefined, but the undesired characters are still visible in the textbox.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15554915', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/526664/']}
jdg_81558
stackexchange
llm_judgeable_groundtruth_similarity
50511999
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hi I've followed this instruction try to resize image with Cloudfront and lambda@edge. When I tried to test the resized image, I keep getting the error message below: The Lambda function associated with the CloudFront distribution is invalid or doesn't have the required permissions. So I checked the lambda functions created by cloud formation provided by the article I mentioned in the beginning, and I found there's no trigger in it. I've tried to set it manually but getting the error message below: CloudFront events cannot be associated with $LATEST or Alias. Choose Actions to publish a new version of your function, and then retry association. I followed the instruction in the error message; publish, and add Cloudfront as trigger but it seems there's no way to apply it. It's still running the one without Cloudfront as the trigger. Is there any way to set Cloudfront as trigger and make this work properly? Now provide the response and nothing else.
For people Googling "The Lambda function associated with the CloudFront distribution is invalid or doesn't have the required permissions": I got that error and struggled to debug it. It turned out there were some programmatic errors inside my Lambda that I had to resolve. But how do you debug it if, when hitting CloudFront, you keep getting "The Lambda function associated with the CloudFront distribution is invalid or doesn't have the required permissions" and there's nothing inside the CloudWatch logs? My Lambda was defined in CloudFormation inside an AWS::Lambda::Function's ZipFile attribute. I ended up going to the Lambda service inside AWS and creating a Lambda test payload corresponding to my CloudFront event, as documented here: Lambda@Edge Event Structure. Then I could debug the Lambda inside the Lambda console without having to hit CloudFront or having to navigate to CloudWatch logs.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50511999', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4652534/']}
jdg_81559
stackexchange
llm_judgeable_groundtruth_similarity
30797231
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: See this example: http://jsfiddle.net/aLrfmyqm/

p  { display: inline; }
em { display: block; }

<p>
  Outer inline
  <em>Block
    <p>Inner inline</p>
  </em>
</p>

I expect the <p>Inner inline</p> to be inlined with Block, however, it starts in a new line. Does anyone have ideas about this? Thanks! Now provide the response and nothing else.
Your markup is invalid. You are not supposed to nest a p element inside another p element, and hence the issue. From the W3C: "The P element represents a paragraph. It cannot contain block-level elements (including P itself)." Check the rendered source and you will see why it behaves differently than you expect: your browser actually separates the tags out and closes the p elements for you. So how do we fix it? Use the <span> element instead of <p>. Demo
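For instance, the example from the question could be rewritten along these lines (span is inline by default, so no extra CSS is needed for it):

<p>
  Outer inline
  <em>Block <span>Inner inline</span></em>
</p>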
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30797231', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1272683/']}
jdg_81560
stackexchange
llm_judgeable_groundtruth_similarity
12489317
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say, if I have an MSI installation file, can I extract its contents from a C# or C++ program without installing it? Now provide the response and nothing else.
Typically you can perform an administrative installation to extract the contents of an MSI:

msiexec /a foo.msi TARGETDIR=C:\EXTRACTHERE /qn

If you don't want to go out of process you can interop directly with MSI via the MsiInstallProduct function.

szPackagePath [in]: A null-terminated string that specifies the path to the location of the Windows Installer package. The string value can contain a URL, a network path, a file path (e.g. file://packageLocation/package.msi), or a local path (e.g. D:\packageLocation\package.msi).

szCommandLine [in]: A null-terminated string that specifies the command line property settings. This should be a list of the format Property=Setting Property=Setting. For more information, see About Properties. To perform an administrative installation, include ACTION=ADMIN in szCommandLine. For more information, see the ACTION property.

Note that while you can declare the P/Invoke yourself, there is a really good .NET interop library available with Windows Installer XML called Deployment Tools Foundation (DTF). The Microsoft.Deployment.WindowsInstaller namespace has a class called Installer that exposes a static method called InstallProduct. This is a direct encapsulation of MsiInstallProduct. Using the DTF libraries hides you from the ugliness of the Win32 API and correctly implements IDisposable where needed to ensure that underlying unmanaged handles are released.

Additionally, DTF has the Microsoft.Deployment.WindowsInstaller.Package namespace with the InstallPackage class. This class exposes a method called ExtractFiles() that extracts the files to the working directory. An example of the code looks like:

using Microsoft.Deployment.WindowsInstaller;
using Microsoft.Deployment.WindowsInstaller.Package;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var package = new InstallPackage(@"C:\test.msi", DatabaseOpenMode.ReadOnly))
            {
                package.ExtractFiles();
            }
        }
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12489317', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/843732/']}
jdg_81561
stackexchange
llm_judgeable_groundtruth_similarity
73010
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A common way to characterize the dirac delta function $\delta$ is by the following two properties: $$1)\ \delta(x) = 0\ \ \text{for}\ \ x \neq 0$$ $$2)\ \int_{-\infty}^{\infty}\delta(x)\ dx = 1$$ I have seen a proof of the sifting property for the delta function from these two properties as follows: Starting with $$\int_{-\infty}^{\infty}\delta(x-t)f(x)\ dx$$ for some "sufficiently smooth" function $f$, since $\delta(x - t) = 0$ for $x \neq t$ we can restrict the integral to some epsilon interval around $t$ $$\int_{-\infty}^{\infty}\delta(x-t)f(x)\ dx = \int_{t-\epsilon}^{t+\epsilon}\delta(x-t)f(x)\ dx$$ On this infinitesimal interval, $f$ is "approximately constant" and so we can remove it from the integral $$\int_{t-\epsilon}^{t+\epsilon}\delta(x-t)f(x)\ dx = f(t)\int_{t-\epsilon}^{t+\epsilon}\delta(x-t)\ dx = f(t)$$ This proof seems a little too hand wavy for me. The points I find problematic are in quotations. What is meant by "sufficiently smooth" in this case? Is continuous enough? Also, how exactly is the extraction of the function from the integral done rigorously, without just assuming that it is "approximately constant"? I have seen this proof done with non-standard analysis and I understand that the delta function is by nature a rather "hand wavy" object so that a rigorous proof using these two properties may not even exist. Still I ask if anyone can perhaps make the above proof rigorous or offer a new proof without appealing to non-standard analysis. (I'm not too sure what tags to include for this question. If anyone could retag for me that'd be much appreciated Now provide the response and nothing else.
Well, as you mention, no truly rigorous treatment can be given with such a description of the Dirac delta function - no such function actually satisfies those requirements. Thus, I won't take too much effort to make the below too precise.

Where it says "sufficiently smooth", it doesn't actually need anything there at all! Whatever $f$ is, as long as it is finite almost everywhere, the product with that delta function will be $0$ away from a neighbourhood of $t$, so you can restrict the integral like that.

For the extraction of $f$, being continuous in a closed neighbourhood of $x=t$ is enough. If $f$ is continuous throughout $[t-\epsilon, t+\epsilon]$ then by the Extreme value theorem it attains a maximum and minimum in that interval, call them $M$ and $m$ respectively. Then since $m \leq f(x) \leq M$ in that range, $$ m=\int^{t+\epsilon}_{t-\epsilon} m \delta(x-t) \, dx \leq \int^{t+\epsilon}_{t-\epsilon} f(x) \delta(x-t) \,dx \leq \int^{t+\epsilon}_{t-\epsilon} M \delta(x-t) \, dx = M.$$ Now as $\epsilon \to 0$, both $m$ and $M$ go to $f(t)$ as $f$ is continuous, so by the Squeeze theorem, $$ \int^{t+\epsilon}_{t-\epsilon} f(x) \delta(x-t) \, dx \to f(t) $$ as $\epsilon \to 0$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/73010', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9246/']}
jdg_81562
stackexchange
llm_judgeable_groundtruth_similarity
35088793
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two classes, namely A and B. The class definitions are shown below.

class A {
  constructor(){
    ...
  }
  ...
}

// Implementation 1
class B extends A {
  constructor(){
    this.childProperty = "something"; // this is undefined.
  }
}

// Implementation 2
class B {
  constructor(){
    this.childProperty = "something"; // this is not undefined.
  }
}

Why is this undefined in Implementation 1 and not in Implementation 2? What am I doing wrong here? Now provide the response and nothing else.
You need to call super() first:

class B extends A {
  constructor() {
    super();
    this.childProperty = "cool";
  }
}

In a class that extends another, this cannot be used in the constructor until super() has been called, which is why Implementation 1 fails while Implementation 2 (with no parent class) works. JsFiddle
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35088793', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1166366/']}
jdg_81563
stackexchange
llm_judgeable_groundtruth_similarity
11165432
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to extend a case class without constantly picking up new vals along the way? For example this doesn't work:

case class Edge(a: Strl, b: Strl)
case class EdgeQA(a: Strl, b: Strl, right: Int, asked: Int) extends Edge(a, b)

"a" conflicts with "a", so I'm forced to rename to a1. But I don't want all kinds of extra public copies of "a", so I made it private:

case class Edge(a: Strl, b: Strl)
case class EdgeQA(private val a1: Strl, private val b1: Strl, right: Int, asked: Int) extends Edge(a, b)

This just doesn't seem clean to me... Am I missing something? Now provide the response and nothing else.
As the previous commenter mentioned, case class extension should be avoided, but you could convert your Edge class into a trait. If you want to avoid the private statements, you can also mark the variables as override:

trait Edge {
  def a: Strl
  def b: Strl
}

case class EdgeQA(override val a: Strl, override val b: Strl, right: Int, asked: Int) extends Edge

Don't forget to prefer def over val in traits.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11165432', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1243034/']}
jdg_81564
stackexchange
llm_judgeable_groundtruth_similarity
592535
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am an MBA Student taking courses in statistics. In this last week, I have heard two conflicting opinions from my professors on Statistics. 1) Predictive Models can NOT make predictions on individuals - only on groups: My professor from my "Research Methods" class introduced us to the concept of the "Ecological Fallacy" ( https://en.wikipedia.org/wiki/Ecological_fallacy ). He pointed out that very often, the behavior of individuals within a group will not be similar to the average behavior of the group. He mentioned that statistical models can ONLY be used to describe variations within groups - he mentioned the importance of concepts such as "Propensity Score Matching" and correctly designing experiments while accounting for possible inconsistencies. He mentioned that if a pharmaceutical drug was found to be effective on a cohort of men all having similar age groups, lifestyles, backgrounds and medical histories - the best we can do is say that any man from the general public who fits this cohort profile will likely experience similar effects from this drug : we can not really make any individual prediction apart from this. He mentioned the field of "Survival Analysis" and told us that statistical models are routinely used to estimate the survival odds and hazard of surviving at the group level and not the individual level - he gave us an example: if you have 1 Male Asian Patient and 99 Male Non-Asian patient, you have no choice but to analyze the average survival rates of MALES ... how can you perform "Asian Specific Analysis" and make inferences about the average Asian when you only have 1 observation!? He closed by mentioning the new and emerging field of "Precision Medicine" in which "accurate molecular taxonomy of diseases that enhances diagnosis and treatment and tailorsdisease management to the individual characteristics of each patient " ( https://www.mdpi.com/2227-9717/10/6/1200/pdf ) - i.e. for the first time, medical treatments are being considered for the individual patient's condition, and not for the average profile of this patient (although he mentioned that this is still in its infancy and should be used with great caution). With this, he largely dismissed "Data Science" as a "pseudo science" and said although many "Data Science" models have demonstrated success, the methodology is not mathematically rigorous and can only be considered as an "engineered solution" (e.g. a Random Forest predicting if an individual patient will develop a disease - he said that this should not be done for many reasons, e.g. interpretability, blackbox, lack of odd's ratio and individual prediction). He closed with this final example : Imagine two bridges. The first bridge can only support a load of 100 kilograms, but has been rigorously tested in theory (e.g. physics, material science) and empirically - the reputation and behavior of this bridge has been well documented and observed under various conditions. The second bridge has been demonstrated to support loads of up to 500 kilograms, but we have no idea how this bridge was built or whether it will collapse in the future when subject to even 50 kilograms. He asked us - which bridge are you more confident walking across? 
He said this in a rhetorical tone, and said obviously the first bridge that has demonstrated both theoretical and empirical success should be favored over the second bridge - even if the first bridge is believed to be less stronger than the second bridge. He quipped that the first bridge is Classical Statistics and the second bridge is Data Science. He told us that perhaps this example is an extreme example, but this is precisely why a drug that has initially shown strong potential for curing a disease must be rigorously studied in both theoretical and empirical settings before it can be released - and in the interim, potentially less effective but better understood drugs must be administered of which we have higher confidence in. 2) Predictive Models CAN be used to make predictions on individuals: On the other hand, my Data Science professor thinks otherwise. I told him the views of my "Research Methods" professor and he stated that research publications would suggest otherwise. He showed me publications (e.g. https://www.sciencedirect.com/science/article/pii/S2666827021000694 , https://pubs.rsna.org/doi/full/10.1148/radiol.2018180547 ) in which predictive models have indeed demonstrated success in making individualized predictions - for example, successfully predicting the presence of COVID-19 in individual patients. He basically said that he sees no reason as to why Predictive Models can not be used for individual prediction - and if the opposite was true, he would have been long out of a job. This has left me conflicted - I find myself agreeing with the opinions and views of both professors at the same time. Can someone here provide some insights to bridge these two views together? Now provide the response and nothing else.
There is legitimacy to each stance.

Your data science professor is right because it is routine to use a model to make a prediction about an individual. For instance, I am writing this answer on an iPhone that keeps trying to guess which word I will type next.

Your research methods professor is right because there is an entire conditional distribution to consider. While we might call $\hat y_i$ the predicted value, that is just the expected value (or median or some other quantity, depending on what exactly we do) of an entire distribution of plausible values for the individual. This is a standard argument in pop-culture statistics: the statistic means nothing to the individual, and it is true that an individual can wind up above or below average, not right smack on the average. Cross Validated has an interesting question on this very topic that I believe has some relevance to this question.

Then again, this leads to a defense of your data science professor. Yes, $\hat y_i$ is just the expected value of an entire conditional distribution, but if we believe our loss function to quantify what we value, then that is exactly the prediction we should make to minimize the pain we experience from having results that are not perfectly accurate.
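As a rough numerical illustration of that last point (the linear model, the Gaussian noise, and every number below are invented for illustration; they do not come from either professor or the cited papers), a point prediction for an individual is just one summary of the conditional distribution that individual is drawn from:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: one covariate, a linear signal, plus individual-level noise.
x = rng.uniform(0, 10, size=5000)
y = 2.0 * x + rng.normal(0, 3.0, size=5000)

# Fit a simple least-squares line to play the role of "the predictive model".
slope, intercept = np.polyfit(x, y, 1)

# "Individual prediction" for someone with x = 7.
x_new = 7.0
y_hat = slope * x_new + intercept
print("point prediction for this individual:", y_hat)

# But individuals with x near 7 are spread around that prediction.
near = np.abs(x - x_new) < 0.25
print("5th/50th/95th percentiles of actual outcomes near x = 7:",
      np.percentile(y[near], [5, 50, 95]))

The single printed prediction is what the data science professor uses; the spread shown by the percentiles around it is what the research methods professor is warning about.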
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/592535', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/77179/']}
jdg_81565
stackexchange
llm_judgeable_groundtruth_similarity
24043397
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was getting an SSL Handshake Exception error: PKIX "path does not chain" ( described here ).I fixed it by importing a certificate chain using openssl: openssl s_client -host www.envmgr.com -port 443 -showcerts > cert_chain.crt and installed it into my JDK's keystore: keytool -import -alias envmgrchain -file cert_chain.crt -keystore cacerts -storepass changeit Well this works. Hooray. The problem is we'll be putting our application up on a cloud server like rackspace or AWS and I think there is a good chance that we won't have access to modify the keystore of the JVM to add this chain. I thought, "no problem, I'll just add this certificate chain to the keystore programmatically" so I removed it from my JVM: keytool -delete -alias envmgrchain -keystore cacerts -storepass changeit and added this code: KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType()); //Create an empty keystore that we can load certificate into trustStore.load(null); InputStream fis = new FileInputStream("cert_chain.crt"); BufferedInputStream bis = new BufferedInputStream(fis); CertificateFactory cf = CertificateFactory.getInstance("X.509"); while(bis.available()>0) { Collection<? extends Certificate> certs = cf.generateCertificates(bis); Iterator<? extends Certificate> iter = certs.iterator(); //Add each cert in the chain one at a time for(int i=0; i<certs.size(); i++) { Certificate cert = iter.next(); String alias = "chaincert"+((i>0)?i:""); trustStore.setCertificateEntry(alias, cert); } } bis.close(); fis.close();//Add custom keystore to TrustManager TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm()); tmf.init(trustStore); SSLContext ctx = SSLContext.getInstance("TLSv1"); ctx.init(null, tmf.getTrustManagers(), null); But when I run it, the PKIX error reappears. Is the above code not equivalent to keytool -import? I feel like I'm either adding certificates to the KeyStore incorrectly, or I'm not installing the Keystore into the TrustManager in the right way. FYI: I am also attempting to address this issue by implementing an X509TrustManager. Now provide the response and nothing else.
Here's code you can use for clients to programmatically add your CA at runtime. You don't need to put it in any store - just carry around the PEM encoded file. You can even hard code it into your program so there's no separate file to manage.

static String CA_FILE = "ca-cert.pem";
...
FileInputStream fis = new FileInputStream(CA_FILE);
X509Certificate ca = (X509Certificate) CertificateFactory.getInstance("X.509")
        .generateCertificate(new BufferedInputStream(fis));

KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
ks.load(null, null);
ks.setCertificateEntry(Integer.toString(1), ca);

TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
tmf.init(ks);

SSLContext context = SSLContext.getInstance("TLS");
context.init(null, tmf.getTrustManagers(), null);
...

You will need a trusted distribution channel to ensure your program is not modified while sitting on the server waiting to be picked up, or while traveling down the wire while being installed.

openssl s_client -host www.envmgr.com -port 443 -showcerts > cert_chain.crt

You should only need to trust the root certificate, and not the entire chain. The server is responsible for sending all intermediate certificates required to build the chain. If the server is not sending all intermediate certificates required to build the chain, then the server is misconfigured.

The problem you are experiencing is called the "Which Directory" problem. It's a well known problem in PKI. Essentially, it means a client does not know where to go to fetch a missing intermediate certificate. You solve it by having the server send all required intermediates along with the server's certificate. See OWASP's TLS Cheatsheet and Rule - Always Provide All Needed Certificates.

Just bike shedding, but there's a whole nother can of worms here with Java (especially Java 7 and lower):

SSLContext ctx = SSLContext.getInstance("TLSv1");
ctx.init(null, tmf.getTrustManagers(), null);

You can improve upon it, if desired. See SSLSocketFactoryEx at Which Cipher Suites to enable for SSL Socket?. It closes some gaps in protocol versions, cipher suites, etc provided by default in Java SSLSocketFactory.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24043397', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/253575/']}
jdg_81566
stackexchange
llm_judgeable_groundtruth_similarity
412
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Motivation I work with datasets that contain personally identifiable information (PII) and sometimes need to share part of a dataset with third parties, in a way that doesn't expose PII and subject my employer to liability. Our usual approach here is to withhold data entirely, or in some cases to reduce its resolution; e.g., replacing an exact street address with the corresponding county or census tract. This means that certain types of analysis and processing must be done in-house, even when a third party has resources and expertise more suited to the task. Since the source data is not disclosed, the way we go about this analysis and processing lacks transparency. As a result, any third party's ability to perform QA/QC, adjust parameters or make refinements may be very limited. Anonymizing Confidential Data One task involves identifying individuals by their names, in user-submitted data, while taking into account errors and inconsistencies. A private individual might be recorded in one place as "Dave" and in another as "David," commercial entities can have many different abbreviations, and there are always some typos. I've developed scripts based on a number of criteria that determine when two records with non-identical names represent the same individual, and assign them a common ID. At this point we can make the dataset anonymous by withholding the names and replacing them with this personal ID number. But this means the recipient has almost no information about e.g. the strength of the match. We would prefer to be able to pass along as much information as possible without divulging identity. What Doesn't Work For instance, it would be great to be able to encrypt strings while preserving edit distance. This way, third parties could do some of their own QA/QC, or choose to do further processing on their own, without ever accessing (or being able to potentially reverse-engineer) PII. Perhaps we match strings in-house with edit distance <= 2, and the recipient wants to look at the implications of tightening that tolerance to edit distance <= 1. But the only method I am familiar with that does this is ROT13 (more generally, any shift cipher ), which hardly even counts as encryption; it's like writing the names upside down and saying, "Promise you won't flip the paper over?" Another bad solution would be to abbreviate everything. "Ellen Roberts" becomes "ER" and so forth. This is a poor solution because in some cases the initials, in association with public data, will reveal a person's identity, and in other cases it's too ambiguous; "Benjamin Othello Ames" and "Bank of America" will have the same initials, but their names are otherwise dissimilar. So it doesn't do either of the things we want. 
An inelegant alternative is to introduce additional fields to track certain attributes of the name, e.g.:

+-----+----+-------------------+-----------+--------+
| Row | ID | Name              | WordChars | Origin |
+-----+----+-------------------+-----------+--------+
| 1   | 17 | "AMELIA BEDELIA"  | (6, 7)    | Eng    |
+-----+----+-------------------+-----------+--------+
| 2   | 18 | "CHRISTOPH BAUER" | (9, 5)    | Ger    |
+-----+----+-------------------+-----------+--------+
| 3   | 18 | "C J BAUER"       | (1, 1, 5) | Ger    |
+-----+----+-------------------+-----------+--------+
| 4   | 19 | "FRANZ HELLER"    | (5, 6)    | Ger    |
+-----+----+-------------------+-----------+--------+

I call this "inelegant" because it requires anticipating which qualities might be interesting and it's relatively coarse. If the names are removed, there's not much you can reasonably conclude about the strength of the match between rows 2 & 3, or about the distance between rows 2 & 4 (i.e., how close they are to matching).

Conclusion

The goal is to transform strings in such a way that as many useful qualities of the original string are preserved as possible while obscuring the original string. Decryption should be impossible, or so impractical as to be effectively impossible, no matter the size of the data set. In particular, a method that preserves the edit distance between arbitrary strings would be very useful.

I've found a couple papers that might be relevant, but they're a bit over my head:

Privacy Preserving String Comparisons Based on Levenshtein Distance
An Empirical Comparison of Approaches to Approximate String Matching in Private Record Linkage

Now provide the response and nothing else.
One of the references I mentioned in the OP led me to a potential solution that seems quite powerful, described in "Privacy-preserving record linkage using Bloom filters" ( doi:10.1186/1472-6947-9-41 ):

A new protocol for privacy-preserving record linkage with encrypted identifiers allowing for errors in identifiers has been developed. The protocol is based on Bloom filters on q-grams of identifiers.

The article goes into detail about the method, which I will summarize here to the best of my ability.

A Bloom filter is a fixed-length series of bits storing the results of a fixed set of independent hash functions, each computed on the same input value. The output of each hash function should be an index value from among the possible indexes in the filter; i.e., if you have a 0-indexed series of 10 bits, hash functions should return (or be mapped to) values from 0 to 9.

The filter starts with each bit set to 0. After hashing the input value with each function from the set of hash functions, each bit corresponding to an index value returned by any hash function is set to 1. If the same index is returned by more than one hash function, the bit at that index is only set once. You could consider the Bloom filter to be a superposition of the set of hashes onto the fixed range of bits.

The protocol described in the above-linked article divides strings into n-grams, which are in this case sets of characters. As an example, "hello" might yield the following set of 2-grams:

["_h", "he", "el", "ll", "lo", "o_"]

Padding the front and back with spaces seems to be generally optional when constructing n-grams; the examples given in the paper that proposes this method use such padding.

Each n-gram can be hashed to produce a Bloom filter, and this set of Bloom filters can be superimposed on itself (bitwise OR operation) to produce the Bloom filter for the string. If the filter contains many more bits than there are hash functions or n-grams, arbitrary strings are relatively unlikely to produce exactly the same filter. However, the more n-grams two strings have in common, the more bits their filters will ultimately share. You can then compare any two filters A, B by means of their Dice coefficient:

D_{A,B} = 2h / (a + b)

Where h is the number of bits that are set to 1 in both filters, a is the total number of bits set to 1 in filter A, and b is the total number of bits set to 1 in filter B. If the strings are exactly the same, the Dice coefficient will be 1; the more they differ, the closer the coefficient will be to 0. Because the hash functions are mapping an indeterminate number of unique inputs to a small number of possible bit indexes, different inputs may produce the same filter, so the coefficient indicates only a probability that the strings are the same or similar. The number of different hash functions and the number of bits in the filter are important parameters for determining the likelihood of false positives - pairs of inputs that are much less similar than the Dice coefficient produced by this method predicts.

I found this tutorial to be very helpful for understanding the Bloom filter.

There is some flexibility in the implementation of this method; see also this 2010 paper (also linked at the end of the question) for some indications of how performant it is in relation to other methods, and with various parameters.
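Here is a minimal Python sketch of the scheme described above, just to make the mechanics concrete. The filter size (256 bits), the number of hash functions (4), the SHA-1-based hashing trick, and the space padding are all arbitrary illustrative choices, not the parameters recommended in the cited papers:

import hashlib

FILTER_BITS = 256   # illustrative; real deployments tune this carefully
NUM_HASHES = 4      # illustrative number of "independent" hash functions

def ngrams(s, n=2):
    padded = " " + s.lower() + " "
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def bloom(s):
    bits = 0
    for gram in ngrams(s):
        for k in range(NUM_HASHES):
            digest = hashlib.sha1(f"{k}|{gram}".encode()).hexdigest()
            bits |= 1 << (int(digest, 16) % FILTER_BITS)  # set one bit per hash
    return bits

def dice(a, b):
    shared = bin(a & b).count("1")
    return 2 * shared / (bin(a).count("1") + bin(b).count("1"))

print(dice(bloom("CHRISTOPH BAUER"), bloom("C J BAUER")))    # many shared 2-grams
print(dice(bloom("CHRISTOPH BAUER"), bloom("FRANZ HELLER"))) # few shared 2-grams

The first pair shares many 2-grams, so its coefficient should come out noticeably higher than the second pair's; that is exactly the comparison a third party could perform without ever seeing the underlying names.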
{}
{'log_upvote_score': 5, 'links': ['https://datascience.stackexchange.com/questions/412', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/322/']}
jdg_81567
stackexchange
llm_judgeable_groundtruth_similarity
37465816
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The Getting Started docs for aiohttp give the following client example: import asyncioimport aiohttpasync def fetch_page(session, url): with aiohttp.Timeout(10): async with session.get(url) as response: assert response.status == 200 return await response.read()loop = asyncio.get_event_loop()with aiohttp.ClientSession(loop=loop) as session: content = loop.run_until_complete( fetch_page(session, 'http://python.org')) print(content) And they give the following note for Python 3.4 users: If you are using Python 3.4, please replace await with yield from and async def with a @coroutine decorator. If I follow these instructions I get: import aiohttpimport [email protected] fetch(session, url): with aiohttp.Timeout(10): async with session.get(url) as response: return (yield from response.text())if __name__ == '__main__': loop = asyncio.get_event_loop() with aiohttp.ClientSession(loop=loop) as session: html = loop.run_until_complete( fetch(session, 'http://python.org')) print(html) However, this will not run, because async with is not supported in Python 3.4: $ python3 client.py File "client.py", line 7 async with session.get(url) as response: ^SyntaxError: invalid syntax How can I translate the async with statement to work with Python 3.4? Now provide the response and nothing else.
Just don't use the result of session.get() as a context manager; use it as a coroutine directly instead. The request context manager that session.get() produces would normally release the request on exit, but so does using response.text(), so you could ignore that here:

@asyncio.coroutine
def fetch(session, url):
    with aiohttp.Timeout(10):
        response = yield from session.get(url)
        return (yield from response.text())

The request wrapper returned here doesn't have the required asynchronous methods (__aenter__ and __aexit__); they are omitted entirely when not using Python 3.5 (see the relevant source code).

If you have more statements between the session.get() call and accessing the response.text() awaitable, you probably want to use a try:..finally: anyway to release the connection; the Python 3.5 release context manager also closes the response if an exception occurred. Because a yield from response.release() is needed here, this can't be encapsulated in a context manager before Python 3.4:

import sys

@asyncio.coroutine
def fetch(session, url):
    with aiohttp.Timeout(10):
        response = yield from session.get(url)
        try:
            # other statements
            return (yield from response.text())
        finally:
            if sys.exc_info()[0] is not None:
                # on exceptions, close the connection altogether
                response.close()
            else:
                yield from response.release()
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37465816', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/58866/']}
jdg_81568
stackexchange
llm_judgeable_groundtruth_similarity
20865235
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have written simple function in C, void GetInput(){ char buffer[8]; gets(buffer); puts(buffer);} When I disassemble it in gdb's disassembler, it gives following disassembly. 0x08048464 <+0>: push %ebp 0x08048465 <+1>: mov %esp,%ebp 0x08048467 <+3>: sub $0x10,%esp 0x0804846a <+6>: mov %gs:0x14,%eax 0x08048470 <+12>: mov %eax,-0x4(%ebp) 0x08048473 <+15>: xor %eax,%eax=> 0x08048475 <+17>: lea -0xc(%ebp),%eax 0x08048478 <+20>: mov %eax,(%esp) 0x0804847b <+23>: call 0x8048360 <gets@plt> 0x08048480 <+28>: lea -0xc(%ebp),%eax 0x08048483 <+31>: mov %eax,(%esp) 0x08048486 <+34>: call 0x8048380 <puts@plt> 0x0804848b <+39>: mov -0x4(%ebp),%eax 0x0804848e <+42>: xor %gs:0x14,%eax 0x08048495 <+49>: je 0x804849c <GetInput+56> 0x08048497 <+51>: call 0x8048370 <__stack_chk_fail@plt> 0x0804849c <+56>: leave 0x0804849d <+57>: ret Now please look at line number three, 0x08048467 <+3>: sub $0x10,%esp , I have only 8 bytes allocated as local variable, then why compiler is allocating 16 bytes(0x10). Secondly, what is meaning of xor %gs:0x14,%eax . @Edit: If it is optimization, is there any way to stop it. Thanks. Now provide the response and nothing else.
Two things: The compiler may reserve space for intermediate expressions to which you did not give names in the source code (or conversely not allocate space for local variables that can live entirely in registers). The list of stack slots in the binary does not have to match the list of local variables in the source code. On some platforms, the compiler has to keep the stack pointer aligned. For the particular example in your question, it is likely that the compiler is striving to keep the stack pointer aligned to a boundary of 16 bytes. Regarding your other question that you should have asked separately, xor %gs:0x14,%eax is clearly part of a stack protection mechanism , enabled by default. If you are using GCC, turn it off with -fno-stack-protector .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20865235', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/744616/']}
jdg_81569
stackexchange
llm_judgeable_groundtruth_similarity
21598437
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Dapper (thanks Sam , great project.) a micro ORM with a DAL and by some reason I'm not able to execute stored procedures with input parameters. In a example service I've the following code: public void GetSomething(int somethingId){ IRepository<Something, SomethingEnum> repository = UnitOfWork.GetRepository<Something, SomethingEnum>(); var param = new DynamicParameters(); param.Add("@somethingId", dbType: DbType.Int32, value:somethingId, direction: ParameterDirection.Input); var result = repository.Exec<Something>(SomethingEnum.spMyStoredProcedure, param); ...} When the execution of the stored procedure is triggered a SqlException is thrown stating that I need to provide the 'somethingId' Procedure or function 'spMyStoredProcedure' expects parameter '@somethingId', which was not supplied. My DAL is similar based on this github project of Pencroff. Am I missing something here? Update: I am actually passing the commandType via the SomethingEnum: public class SomethingEnum : EnumBase<SomethingEnum, string> { public static readonly SomethingEnum spMyStoredProcedure = new SomethingEnum("spMyStoredProcedure", "[dbo].[spMyStoredProcedure]", CommandType.StoredProcedure); public SomethingEnum(string Name, string EnumValue, CommandType? cmdType): base(Name, EnumValue, cmdType) { }} Now provide the response and nothing else.
You need to tell it the command type: make sure there's a commandType: CommandType.StoredProcedure in the dapper call. Otherwise, it is simply executing the text command: spMyStoredProcedure (with some unused parameters in the ambient context). This is legal TSQL, and attempts to call spMyStoredProcedure without passing parameters - the same as if you put spMyStoredProcedure into SSMS and press f5 . Also, if your parameters are fixed, I would actually suggest just using: var param = new { somethingId }; or even just inline it completely: var result = repository.Exec<Something>(SomethingEnum.spMyStoredProcedure, new { somethingId }, commandType: CommandType.StoredProcedure); (note: if your Exec<T> method only ever handles stored procedures, you could move the commandType internal to the method - or you could make it an optional parameter that defaults to CommandType.StoredProcedure )
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21598437', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1014893/']}
jdg_81570
stackexchange
llm_judgeable_groundtruth_similarity
3423545
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to get the percent encoded string for these specific letters, how to do that in objective-c? Reserved characters after percent-encoding! * ' ( ) ; : @ & = + $ , / ? # [ ]%21 %2A %27 %28 %29 %3B %3A %40 %26 %3D %2B %24 %2C %2F %3F %23 %5B %5D Percent-encoding wiki Please test with this string and see if it do work: myURL = @"someurl/somecontent" I would like the string to look like: myEncodedURL = @"someurl%2Fsomecontent" I tried with the stringByAddingPercentEscapesUsingEncoding: NSASCIIStringEncoding already but it does not work, the result is still the same as the original string. Please advice. Now provide the response and nothing else.
I've found that both stringByAddingPercentEscapesUsingEncoding: and CFURLCreateStringByAddingPercentEscapes() are inadequate. The NSString method misses quite a few characters, and the CF function only lets you say which (specific) characters you want to escape. The proper specification is to escape all characters except a small set.

To fix this, I created an NSString category method to properly encode a string. It will percent-encode everything EXCEPT [a-zA-Z0-9.-_~] and will also encode spaces as + (according to this specification). It will also properly handle encoding unicode characters.

- (NSString *) URLEncodedString_ch {
    NSMutableString * output = [NSMutableString string];
    const unsigned char * source = (const unsigned char *)[self UTF8String];
    int sourceLen = strlen((const char *)source);
    for (int i = 0; i < sourceLen; ++i) {
        const unsigned char thisChar = source[i];
        if (thisChar == ' '){
            [output appendString:@"+"];
        } else if (thisChar == '.' || thisChar == '-' || thisChar == '_' || thisChar == '~' ||
                   (thisChar >= 'a' && thisChar <= 'z') ||
                   (thisChar >= 'A' && thisChar <= 'Z') ||
                   (thisChar >= '0' && thisChar <= '9')) {
            [output appendFormat:@"%c", thisChar];
        } else {
            [output appendFormat:@"%%%02X", thisChar];
        }
    }
    return output;
}
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3423545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/143542/']}
jdg_81571
stackexchange
llm_judgeable_groundtruth_similarity
31230873
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I am using this code to perform tasks when the user scrolls: function myFunction(){var position = $(window).scrollTop();$(window).scroll(function() { var scroll = $(window).scrollTop(); if(scroll > position) { // scrolling downwards hypeDocument.showSceneNamed('Section 2', hypeDocument.kSceneTransitionCrossfade, 1.1); } position = scroll;});return false;} But I would like to prevent down scrolling on one page. Is there anyway I can do this whilst allowing the scroll on the other pages? I tried using $(window).off("scroll"); but that blocks scrolling in both directions. Now provide the response and nothing else.
Close. You have a couple of issues. Your selector call crashes the app because you're not passing an argument and levelTimerCallback() expects one. averagePowerPerChannel seems to give me a more real-time metering, so I used that instead of peakPowerPerChannel. Also, you need to set up an audio session. I wasn't really sure what all that math was about, so I just got rid of it here. I've pasted the entire view controller below for basic mic detection.

import Foundation
import UIKit
import AVFoundation
import CoreAudio

class ViewController: UIViewController {

    var recorder: AVAudioRecorder!
    var levelTimer = NSTimer()
    var lowPassResults: Double = 0.0

    override func viewDidLoad() {
        super.viewDidLoad()

        //make an AudioSession, set it to PlayAndRecord and make it active
        var audioSession:AVAudioSession = AVAudioSession.sharedInstance()
        audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, error: nil)
        audioSession.setActive(true, error: nil)

        //set up the URL for the audio file
        var documents: AnyObject = NSSearchPathForDirectoriesInDomains( NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)[0]
        var str = documents.stringByAppendingPathComponent("recordTest.caf")
        var url = NSURL.fileURLWithPath(str as String)

        // make a dictionary to hold the recording settings so we can instantiate our AVAudioRecorder
        var recordSettings: [NSObject : AnyObject] = [AVFormatIDKey:kAudioFormatAppleIMA4,
            AVSampleRateKey:44100.0,
            AVNumberOfChannelsKey:2,
            AVEncoderBitRateKey:12800,
            AVLinearPCMBitDepthKey:16,
            AVEncoderAudioQualityKey:AVAudioQuality.Max.rawValue
        ]

        //declare a variable to store the returned error if we have a problem instantiating our AVAudioRecorder
        var error: NSError?

        //Instantiate an AVAudioRecorder
        recorder = AVAudioRecorder(URL:url, settings: recordSettings, error: &error)

        //If there's an error, print that shit - otherwise, run prepareToRecord and meteringEnabled to turn on metering (must be run in that order)
        if let e = error {
            println(e.localizedDescription)
        } else {
            recorder.prepareToRecord()
            recorder.meteringEnabled = true

            //start recording
            recorder.record()

            //instantiate a timer to be called with whatever frequency we want to grab metering values
            self.levelTimer = NSTimer.scheduledTimerWithTimeInterval(0.02, target: self, selector: Selector("levelTimerCallback"), userInfo: nil, repeats: true)
        }
    }

    //This selector/function is called every time our timer (levelTime) fires
    func levelTimerCallback() {
        //we have to update meters before we can get the metering values
        recorder.updateMeters()

        //print to the console if we are beyond a threshold value. Here I've used -7
        if recorder.averagePowerForChannel(0) > -7 {
            print("Dis be da level I'm hearin' you in dat mic ")
            println(recorder.averagePowerForChannel(0))
            println("Do the thing I want, mofo")
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31230873', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4760454/']}
jdg_81572
stackexchange
llm_judgeable_groundtruth_similarity
11195494
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have three models that are coming together to create one view model and I'd like to be able to edit that view model when clicking "edit". I can't find a straight forward example of how this works (anywhere). I'm not sure if I'm going down the right path. I am able to get the view with the data. At this point, I am unable to save it. Any help would be appreciated. Thanks! Models: public class Person{ [Key] public int Id { get; set; } [MaxLength(20)] [Required(ErrorMessage = "First name is required.")] public string FirstName { get; set; } [MaxLength(20)] [Required(ErrorMessage = "Last name is required.")] public string LastName { get; set; } [MaxLength(40)] [Required(ErrorMessage = "Email is required.")] public string Email { get; set; } [MaxLength(20)] [DataType(DataType.PhoneNumber)] public string Phone { get; set; } public bool Active { get; set; }} public class ClientContact{ [Key] [ForeignKey("Person")] public int ClientPersonId { get; set; } public int ClientId { get; set; } [MaxLength(40)] public string Title { get; set; } public Person Person { get; set; } [ForeignKey("ClientId")] public Client Client { get; set; }} public class Client{ [Key] public int ClientId { get; set; } public string Name { get; set; } public bool Active {get;set;}} View Model: public class ClientContactViewModel{ private SimplexDB db = new SimplexDB(); public ClientContactViewModel() { } public ClientContactViewModel(int id) { ClientPersonId = id; InitializeClientContact(); } public int ClientPersonId { get; set; } [Display(Name = "First Name")] public string FirstName { get; set; } [Display(Name = " Last Name")] public string LastName { get; set; } [Display(Name = "Title")] public string Title { get; set; } [Display(Name = "Email Address")] public string Email { get; set; } [Display(Name = "Phone")] public string Phone { get; set; } [Display(Name = "Client Name")] public int ClientId { get; set; } public SelectList Clients { get { return new SelectList(db.Clients, "ClientId", "Name"); } } private void InitializeClientContact() { var contact = db.ClientPersons.Include("Person").Where(x => x.ClientPersonId == ClientPersonId).SingleOrDefault(); if (contact != null) { FirstName = contact.Person.FirstName; LastName = contact.Person.LastName; Title = contact.Title; Email = contact.Person.Email; Phone = contact.Person.Phone; ClientId = contact.ClientId; } }} Controller: public class ClientContactController : Controller { private database db = new database();// // GET: /ClientContact/Edit/5 public ActionResult Edit(int id) { return View(new ClientContactViewModel(id)); } // // POST: /ClientContact/Edit/5 [HttpPost] public ActionResult Edit(ClientContactViewModel model) { if (ModelState.IsValid) { db.Entry(model).State = EntityState.Modified; db.SaveChanges(); return RedirectToAction("Index"); } return View(model); }} I get an error at the db.Entry(model).State... "The entity type ClientContactViewModel is not part of the model for the current context." Now provide the response and nothing else.
Your ViewModel is not an entity. You should map your ViewModel to your entity, then set the entity's state to modified. Basically, this means that you should set your entity values with your view model values. You can use AutoMapper or handle it manually:

[HttpPost]
public ActionResult Edit(ClientContactViewModel model)
{
    if (ModelState.IsValid)
    {
        ClientContact contact = db.ClientPersons.Include("Person")
                                  .Where(x => x.ClientPersonId == model.ClientPersonId)
                                  .SingleOrDefault();

        contact.Person.FirstName = model.FirstName;
        // etc

        db.Entry(contact).State = EntityState.Modified;
        db.SaveChanges();
        return RedirectToAction("Index");
    }
    return View(model);
}

See http://lostechies.com/jimmybogard/2009/06/30/how-we-do-mvc-view-models/ for an excellent approach to using ViewModels in MVC.

Also, I would highly recommend not doing any data access in your ViewModel. Do that in your Controller, or even better, in a Repository that is used by your Controller. Model binding doesn't play well with models that have logic (i.e. they shouldn't contain anything more than simple get/set properties).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11195494', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/957627/']}
jdg_81573
stackexchange
llm_judgeable_groundtruth_similarity
317226
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: On Linux, when you a create folder, it automatically creates two hard links to the corresponding inode.One which is the folder you asked to create, the other being the . special folder this folder. Example: $ mkdir folder$ ls -litotal 0124596048 drwxr-xr-x 2 fantattitude staff 68 18 oct 16:52 folder$ ls -lai foldertotal 0124596048 drwxr-xr-x 2 fantattitude staff 68 18 oct 16:52 .124593716 drwxr-xr-x 3 fantattitude staff 102 18 oct 16:52 .. As you can see, both folder and . 's inside folder have the same inode number (shown with -i option). Is there anyway to delete this special . hardlink? It's only for experimentation and curiosity. Also I guess the answer could apply to .. special file as well. I tried to look into rm man but couldn't find any way to do it. When I try to remove . all I get is: rm: "." and ".." may not be removed I'm really curious about the whole way these things work so don't refrain from being very verbose on the subject. EDIT: Maybe I wasn't clear with my post, but I want to understand the underlying mechanism which is responsible for . files and the reasons why they can't be deleted. I know the POSIX standard disallows a folder with less than 2 hardlinks, but don't really get why. I want to know if it could be possible to do it anyway. Now provide the response and nothing else.
It is technically possible to delete . , at least on EXT4 filesystems. If you create a filesystem image in test.img, mount it and create a test folder, then unmount it again, you can edit it using debugfs:

debugfs -w test.img
cd test
unlink .

debugfs doesn't complain and dutifully deletes the . directory entry in the filesystem. The test directory is still usable, with one surprise:

sudo mount test.img /mnt/temp
cd /mnt/temp/test
ls

shows only .. so . really is gone. Yet cd . , ls . , pwd still behave as usual!

I'd previously done this test using rmdir . , but that deletes the directory's inode ( huge thanks to BowlOfRed for pointing this out ), which leaves test a dangling directory entry and is the real reason for the problems encountered. In this scenario, the test folder then becomes unusable; after mounting the image, running ls produces

ls: cannot access '/mnt/test': Structure needs cleaning

and the kernel log shows

EXT4-fs error (device loop2): ext4_lookup:1606: inode #2: comm ls: deleted inode referenced: 38913

Running e2fsck in this situation on the image deletes the test directory entirely (the directory inode is gone so there's nothing to restore).

All this shows that . exists as a specific entity in the EXT4 filesystem. I got the impression from the filesystem code in the kernel that it expects . and .. to exist, and warns if they don't (see namei.c), but with the unlink . -based test I didn't see that warning.

e2fsck doesn't like the missing . directory entry, and offers to fix it:

$ /sbin/e2fsck -f test.img
e2fsck 1.43.3 (04-Sep-2016)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Missing '.' in directory inode 30721.
Fix<y>?

This re-creates the . directory entry.
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/317226', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/21795/']}
jdg_81574
stackexchange
llm_judgeable_groundtruth_similarity
37448357
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The status of a deployment indicates that you can look at a deployments observedGeneration vs generation and when observedGeneration >= generation then the deployment succeeded. That's fine, but I'm interested in knowing when the new container is actually running in all of my pods, so that if I hit a service I know for sure I'm hitting a server that represents the latest deployed container. Another tip from a K8S Slack member: kubectl get deployments | grep <deployment-name> | sed 's/ /,/g' | cut -d ' ' -f 4 I deployed a bad image, resulting in ErrImagePull , yet the deployment still reported the correct number of 8 up-date-date replicas (available replicas was 7). Now provide the response and nothing else.
Update #2: Kubernetes 1.5 will ship with a much better version of kubectl rollout status and improve even further in 1.6, possibly replacing my custom solution/script laid out below.

Update #1: I have turned my answer into a script hosted on Github which has received a small number of improving PRs by now.

Original answer:

First of all, I believe the kubectl command you got is not correct: It replaces all white spaces by commas but then tries to get the 4th field after separating by white spaces.

In order to validate that a deployment (or upgrade thereof) made it to all pods, I think you should check whether the number of available replicas matches the number of desired replicas. That is, whether the AVAILABLE and DESIRED columns in the kubectl output are equal. While you could get the number of available replicas (the 5th column) through

kubectl get deployment nginx | tail -n +2 | awk '{print $5}'

and the number of desired replicas (2nd column) through

kubectl get deployment nginx | tail -n +2 | awk '{print $2}'

a cleaner way is to use kubectl's jsonpath output, especially if you want to take the generation requirement that the official documentation mentions into account as well.

Here's a quick bash script I wrote that expects to be given the deployment name on the command line, waits for the observed generation to become the specified one, and then waits for the available replicas to reach the number of the specified ones:

#!/bin/bash

set -o errexit
set -o pipefail
set -o nounset

deployment=

get_generation() {
  get_deployment_jsonpath '{.metadata.generation}'
}

get_observed_generation() {
  get_deployment_jsonpath '{.status.observedGeneration}'
}

get_replicas() {
  get_deployment_jsonpath '{.spec.replicas}'
}

get_available_replicas() {
  get_deployment_jsonpath '{.status.availableReplicas}'
}

get_deployment_jsonpath() {
  local readonly _jsonpath="$1"

  kubectl get deployment "${deployment}" -o "jsonpath=${_jsonpath}"
}

if [[ $# != 1 ]]; then
  echo "usage: $(basename $0) <deployment>" >&2
  exit 1
fi

readonly deployment="$1"

readonly generation=$(get_generation)
echo "waiting for specified generation ${generation} to be observed"
while [[ $(get_observed_generation) -lt ${generation} ]]; do
  sleep .5
done
echo "specified generation observed."

readonly replicas="$(get_replicas)"
echo "specified replicas: ${replicas}"

available=-1
while [[ ${available} -ne ${replicas} ]]; do
  sleep .5
  available=$(get_available_replicas)
  echo "available replicas: ${available}"
done

echo "deployment complete."
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37448357', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/65311/']}
jdg_81575
stackexchange
llm_judgeable_groundtruth_similarity
4122815
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given a list of decorator methods, how would one apply those to a callable? For example, since: @foo@bardef baz(): pass ...is the same as: def baz(): passbaz = foo(bar(baz))) ...one would assume that with a list of decorators ( [foo, bar] ) they could be applied to baz dynamically. Now provide the response and nothing else.
With yet another decorator!

def yad(decorators):
    def decorator(f):
        for d in reversed(decorators):
            f = d(f)
        return f
    return decorator

example usage

list_of_decorators = [foo, bar]

@yad(list_of_decorators)
def foo():
    print 'foo'

Without the decorator syntax, it would look like

func = yad(list_of_decorators)(func)

If you wanted to apply the same list to multiple functions, you can do it like:

dec = yad(list_of_decorators)

func1 = dec(func1)

@dec
def func2():
    pass

As recursive points out in the comments, you can define yad (I'm sure there's a better name for this) to accept *decorators instead of decorators. Then you don't have to use brackets if you're creating the list in situ. The way that I've demonstrated is better if the list is created elsewhere.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4122815', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/30478/']}
jdg_81576
stackexchange
llm_judgeable_groundtruth_similarity
31141202
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know we can use sort.Sort(sort.Reverse(sort.IntSlice(example))) to sort a array. But how can I get the indices of the array? e.g. example := []int{1, 25, 3, 5, 4} I want to get the output: 1, 3, 5, 4, 2 Now provide the response and nothing else.
Make a wrapper for sort.IntSlice that remembers the indexes and swaps them when it swaps the values:

type Slice struct {
    sort.IntSlice
    idx []int
}

func (s Slice) Swap(i, j int) {
    s.IntSlice.Swap(i, j)
    s.idx[i], s.idx[j] = s.idx[j], s.idx[i]
}

Playground: http://play.golang.org/p/LnSLfe-fXk.

EDIT: As DaveC mentioned in the comments, you can actually wrap around sort.Interface to create a data structure for any sortable type:

type Slice struct {
    sort.Interface
    idx []int
}

func (s Slice) Swap(i, j int) {
    s.Interface.Swap(i, j)
    s.idx[i], s.idx[j] = s.idx[j], s.idx[i]
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31141202', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4258990/']}
jdg_81577
stackexchange
llm_judgeable_groundtruth_similarity
67265345
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I change the ripple color of an IconButton? I tried doing it this way, but it doesn't change: IconButton( onClick = { onClick() }, modifier = Modifier.clickable( onClick = { onClick() }, indication = rememberRipple(color = MyCustomTheme.colors.primary), interactionSource = remember { MutableInteractionSource() }, ) ) Now provide the response and nothing else.
I don't know if you found a way to make it work for the whole app, but I found a way to do so, so I'm posting this in case someone else has a similar issue.

You can set the custom RippleTheme object as described by Gabriele Mariotti's answer, then pass CompositionLocalProvider() as content in MaterialTheme. The content from the app's theme can then be set as content for CompositionLocalProvider(). Take a look here:

private object JetNewsRippleTheme : RippleTheme {
    // Here you should return the ripple color you want
    // and not use the defaultRippleColor extension on RippleTheme.
    // Using that will override the ripple color set in DarkMode
    // or when you set light parameter to false
    @Composable
    override fun defaultColor(): Color = MaterialTheme.colors.primary

    @Composable
    override fun rippleAlpha(): RippleAlpha = RippleTheme.defaultRippleAlpha(
        Color.Black,
        lightTheme = !isSystemInDarkTheme()
    )
}

Then for your app theme it should be:

@Composable
fun JetNewsTheme(
    darkTheme: Boolean = isSystemInDarkTheme(),
    content: @Composable () -> Unit
) {
    MaterialTheme(
        colors = if (darkTheme) DarkColors else LightColors,
        typography = JetNewsTypography,
        shapes = JetNewsShapes
    ) {
        CompositionLocalProvider(
            LocalRippleTheme provides JetNewsRippleTheme,
            content = content
        )
    }
}

This method should work for the whole app unless you explicitly set another RippleTheme directly on Composables below the AppTheme hierarchy. And it doesn't conflict with other types of CompositionLocalProvider values you may set directly on your other Composables.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/67265345', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13550484/']}
jdg_81578
stackexchange
llm_judgeable_groundtruth_similarity
56631
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm using this command to recursively generate a SHA-512 hash for each file in a directory hierarchy: find . -type f -exec openssl sha512 {} \; I'd like to sort the files in lexicographical order before generating the hashes. I can use sort like this: find . -type f | sort but I'm not sure how to then pipe the sorted file list into openssl . I tried this: find . -type f | sort | openssl sha512 but this generates a single hash of the entire output of sort , whereas I want a hash for each individual file. find in some versions of bash includes an -s option ("Cause find to traverse the file hierarchies in lexicographical order"), but this isn't available in my version of find . Many thanks in advance for your help! Now provide the response and nothing else.
You can use xargs to get what you want.

find . -type f -print0 | sort -z | xargs -0 -n1 openssl sha512

The -n1 option tells xargs to only allow one argument to be given to the openssl command. The -print0, -z and -0 options prevent the pipeline from breaking if there are "problem" characters (like an embedded newline) in the filenames.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/56631', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/27789/']}
jdg_81579
stackexchange
llm_judgeable_groundtruth_similarity
6867892
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm doing some rather long computations, which can easily span a few days. In the course of these computations, sometimes Mathematica will run out of memory. To this end, I've ended up resorting to something along the lines of: ParallelEvaluate[$KernelID]; (* Force the kernels to launch *)kernels = Kernels[];Do[ If[Mod[iteration, n] == 0, CloseKernels[kernels]; LaunchKernels[kernels]; ClearSystemCache[]]; (* Complicated stuff here *) Export[...], (* If a computation ends early I don't want to lose past results *) {iteration, min, max}] This is great and all, but over time the main kernel accumulates memory. Currently, my main kernel is eating up roughly 1.4 GB of RAM. Is there any way I can force Mathematica to clear out the memory it's using? I've tried littering Share and Clear throughout the many Modules I'm using in my code, but the memory still seems to build up over time. I've tried also to make sure I have nothing big and complicated running outside of a Module , so that something doesn't stay in scope too long. But even with this I still have my memory issues. Is there anything I can do about this? I'm always going to have a large amount of memory being used, since most of my calculations involve several large and dense matrices (usually 1200 x 1200, but it can be more), so I'm wary about using MemoryConstrained . Update: The problem was exactly what Alexey Popkov stated in his answer. If you use Module , memory will leak slowly over time. It happened to be exacerbated in this case because I had multiple Module[..] statements. The "main" Module was within a ParallelTable where 8 kernels were running at once. Tack on the (relatively) large number of iterations, and this was a breeding ground for lots of memory leaks due to the bug with Module . Now provide the response and nothing else.
Since you are using Module extensively, I think you may be interested in knowing this bug with non-deleting temporary Module variables.

Example (non-deleting unlinked temporary variables with their definitions):

In[1]:= $HistoryLength=0;
        a[b_]:=Module[{c,d},d:=9;d/;b===1];
        Length@Names[$Context<>"*"]
Out[3]= 6

In[4]:= lst=Table[a[1],{1000}];
        Length@Names[$Context<>"*"]
Out[5]= 1007

In[6]:= lst=.
        Length@Names[$Context<>"*"]
Out[7]= 1007

In[8]:= Definition@d$999
Out[8]= Attributes[d$999]={Temporary}
        d$999:=9

Note that in the above code I set $HistoryLength = 0; to stress this buggy behavior of Module. If you do not do this, temporary variables can still be linked from history variables (In and Out) and will not be removed with their definitions for that reason in a broader set of cases (it is not a bug but a feature, as Leonid mentioned).

UPDATE: Just for the record. There is another old bug with non-deleting unreferenced Module variables after Part assignments to them in v.5.2 which is not completely fixed even in version 7.0.1:

In[1]:= $HistoryLength=0;
        $Version
        Module[{L=Array[0&,10^7]},L[[#]]++&/@Range[100];];
        Names["L$*"]
        ByteCount@Symbol@#&/@Names["L$*"]
Out[1]= 7.0 for Microsoft Windows (32-bit) (February 18, 2009)
Out[3]= {L$111}
Out[4]= {40000084}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6867892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/312124/']}
jdg_81580
stackexchange
llm_judgeable_groundtruth_similarity
466118
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm running this command in a bash shell on Ubuntu 12.04.1 LTS. I'm attempting to remove both the [ and ] characters in one fell swoop, i.e. without having to pipe to sed a second time. I know square brackets have special meaning in a regex so I'm escaping them by prepending with a backslash. The result I was expecting is just the string 123 but the square brackets remain and I'd love to know why! ~$ echo '[123]' | sed 's/[\[\]]//'[123] Now provide the response and nothing else.
This is easy, if you follow the manual carefully: all members inside a character class lose special meaning (with a few exceptions). And ] loses its special meaning if it is placed first in the list. Try:

$ echo '[123]' | sed 's/[][]//g'
123
$

This says:

inside the outer [ brackets ], replace any of the included characters, namely: ] and [
replace any of them by the empty string — hence the empty replacement string //,
replace them everywhere (globally) — hence the final g.

Again, ] must be first in the class whenever it is included.
{}
{'log_upvote_score': 7, 'links': ['https://serverfault.com/questions/466118', 'https://serverfault.com', 'https://serverfault.com/users/57254/']}
jdg_81581
stackexchange
llm_judgeable_groundtruth_similarity
413819
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any RDBMS that implements something like SELECT * EXCEPT ? What I'm after is getting all of the fields except a specific TEXT/BLOB field, and I'd like to just select everything else. Almost daily I complain to my coworkers that someone should implement this... It's terribly annoying that it doesn't exist. Edit: I understand everyone's concern for SELECT * . I know the risks associated with SELECT * . However, this, at least in my situation, would not be used for any Production level code, or even Development level code; strictly for debugging, when I need to see all of the values easily. As I've stated in some of the comments, where I work is strictly a commandline shop, doing everything over ssh. This makes it difficult to use any gui tools (external connections to the database aren't allowed), etc etc. Thanks for the suggestions though. Now provide the response and nothing else.
As others have said, it is not a good idea to do this in a query because it is prone to issues when someone changes the table structure in the future. However, there is a way to do this... and I can't believe I'm actually suggesting this, but in the spirit of answering the ACTUAL question...

Do it with dynamic SQL... this does all the columns except the "description" column. You could easily turn this into a function or stored proc.

declare @sql varchar(8000), @table_id int, @col_id int

set @sql = 'select '

select @table_id = id from sysobjects where name = 'MY_Table'

select @col_id = min(colid) from syscolumns where id = @table_id and name <> 'description'

while (@col_id is not null) begin
    select @sql = @sql + name from syscolumns where id = @table_id and colid = @col_id

    select @col_id = min(colid) from syscolumns where id = @table_id and colid > @col_id and name <> 'description'
    if (@col_id is not null) set @sql = @sql + ','

    print @sql
end

set @sql = @sql + ' from MY_Table'

exec (@sql)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/413819', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/16645/']}
jdg_81582
stackexchange
llm_judgeable_groundtruth_similarity
40483358
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to the F2E world.I just created a web application using create-react-app. ( https://github.com/facebookincubator/create-react-app ) I wanted to import owl.carousel into my projects, so that I followed the guide of NPM ( https://www.npmjs.com/package/owl.carousel ) ,which of the syntax is: import $ from 'jquery';import 'imports?jQuery=jquery!owl.carousel'; but the debugger console indicated the error : Unexpected '!' in 'imports?jQuery=jquery!owl.carousel'. Do not use import syntax to configure webpack loaders import/no-webpack-loader-syntax I tried another syntax: import owlCarousel from 'owl.carousel' and the error would be: Uncaught TypeError: Cannot read property 'fn' of undefined Could somebody help me figure out what happened? thanks. Update: my webpack loader settings: loaders: [ // Process JS with Babel. { test: /\.(js|jsx)$/, include: paths.appSrc, loader: 'babel-loader', query: { cacheDirectory: findCacheDir({ name: 'react-scripts' }) } }, { test: /\.css$/, loader: 'style!css?importLoaders=1!postcss' }, { test: /\.json$/, loader: 'json' }, { test: /\.(ico|jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2)(\?.*)?$/, loader: 'file', query: { name: 'static/media/[name].[hash:8].[ext]' } }, { test: /\.(mp4|webm|wav|mp3|m4a|aac|oga)(\?.*)?$/, loader: 'url', query: { limit: 10000, name: 'static/media/[name].[hash:8].[ext]' } }] my component code: import React, { Component } from 'react';import './App.css';import './css/style.css';import './css/bootstrap.min.css';import './css/owl.carousel.css';import FruitSelector from './containers/fruit_Selector';import FruitDetail from './containers/fruit_Detail';import $ from 'jquery';import 'owl.carousel';class App extends Component {render() {$(document).ready(function(){ $(".content-slider").owlCarousel({ slideSpeed: 350, singleItem: true, autoHeight: true, navigation: true, navigationText: ["<i class='fa fa-angle-left'></i>", "<i class='fa fa-angle-right'></i>"] });});return ( <div className="App"> <div className="row"> <div className="col-sm-4 col-md-3 sidebar"> <FruitSelector/> </div> <div className="col col-md-8"> <FruitDetail/> </div> </div> </div>);}}export default App; my webpack.config.dev.js plugin setting: plugins: [new InterpolateHtmlPlugin({ PUBLIC_URL: publicUrl}),new HtmlWebpackPlugin({ inject: true, template: paths.appHtml,}),new webpack.DefinePlugin(env),new webpack.HotModuleReplacementPlugin(),// Watcher doesn't work well if you mistype casing in a path so we use// a plugin that prints an error when you attempt to do this.// See https://github.com/facebookincubator/create-react-app/issues/240new CaseSensitivePathsPlugin(),// If you require a missing module and then `npm install` it, you still have// to restart the development server for Webpack to discover it. This plugin// makes the discovery automatic so you don't have to restart.// See https://github.com/facebookincubator/create-react-app/issues/186new WatchMissingNodeModulesPlugin(paths.appNodeModules),new webpack.ProvidePlugin({ $: "jquery", jQuery: "jquery", "window.jQuery": "jquery" })] the error pops out: App.js:71 Uncaught TypeError: (0 , _jquery2.default)(...).owlCarousel is not a function(…) Now provide the response and nothing else.
Remove the plugin that blocks webpack loader syntax The problem is that the import string uses webpack's inline loader syntax, which is not standard import syntax. Your project has the ESLint rule https://github.com/benmosher/eslint-plugin-import/blob/master/docs/rules/no-webpack-loader-syntax.md enabled to block it; it ships as part of create-react-app. Remove (or disable) that rule if you want to use this syntax. owl.carousel needs the jQuery library provided inside its module because it uses the $ variable, which is why the loader syntax (or an equivalent shim) is required. If you try to import owl.carousel the standard way, jQuery is not defined in that module (every file in webpack has its own scope), so it throws the error: Uncaught TypeError: Cannot read property 'fn' of undefined (Alternative) Use a shimming module If removing the plugin is a problem, you can instead provide jQuery to every module that uses it, i.e. shim it - https://webpack.github.io/docs/shimming-modules.html . In the webpack config it will look like:
module.exports = {
  plugins: [
    new webpack.ProvidePlugin({
      $: "jquery",
      jQuery: "jquery",
      "window.jQuery": "jquery"
    })
  ]
  // other config vars
};
And then just import it with: import 'owl.carousel'
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40483358', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2813365/']}
jdg_81583
stackexchange
llm_judgeable_groundtruth_similarity
40913788
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: spring-boot provides several logging.* settings that can be applied in application.properties , like:
logging.level.=DEBUG
logging.file=myfile.log
logging.path=d:/logs/
Problem: myfile.log is generated, BUT inside the classpath! Why isn't spring taking my absolute path into account? Now provide the response and nothing else.
The Spring Boot documentation states By default, Spring Boot will only log to the console and will not write log files. If you want to write log files in addition to the console output you need to set a logging.file or logging.path property (for example in your application.properties). and then describes how the logging.file and logging.path properties work. You should only set one. If logging.file is set, it will write to that specific file. The documentation states Names can be an exact location or relative to the current directory. So you're likely writing to your current directory, which happens to be the same as your classpath. If you set logging.path , Spring Boot Writes spring.log to the specified directory. Names can be an exact location or relative to the current directory. Check that your current directory isn't your classpath, if you don't want them to mix, and adapt one of the logging.file and logging.path accordingly.
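To make that concrete, here is a minimal application.properties sketch (the D:/logs location is just a placeholder for whatever absolute path you want, and, as noted above, you should set only one of the two properties):
# write the log to this exact file
logging.file=D:/logs/myfile.log
# OR write spring.log into this directory
# logging.path=D:/logs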
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40913788', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1194415/']}
jdg_81584
stackexchange
llm_judgeable_groundtruth_similarity
8970231
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: BOOL CmsgboxApp::InitInstance() { INITCOMMONCONTROLSEX InitCtrls; InitCtrls.dwSize = sizeof(InitCtrls); InitCtrls.dwICC = ICC_WIN95_CLASSES; InitCommonControlsEx(&InitCtrls); CWinApp::InitInstance(); AfxEnableControlContainer(); SetRegistryKey(_T("Local AppWizard-Generated Applications")); CmsgboxDlg dlg; m_pMainWnd = &dlg; INT_PTR nResponse = dlg.DoModal(); if (nResponse == IDOK) { AfxMessageBox(L"here",0,0);//This MessageBox is not getting displayed } else if (nResponse == IDCANCEL) { } return FALSE; } In above code AfxMessageBox() is not getting displayed.Why this problem is coming? First I created Modal DialogBox and After return from it I tried to provide Message boxBut it is not displayed Now provide the response and nothing else.
Your answer looks good @Ben but I thought I'd give some extra info for posterity: You could use queryForFirst if you know your limit(1) will return a single result.
qBuilder.orderBy("my_field_column_name", false);
qBuilder.limit(1L);
MyEntity maxEntity = myEntityDao.queryForFirst(qBuilder.prepare());
A raw query would also work. It involves more objects and checking but will be cheaper if the table is large, since you don't need to do the ordering.
qBuilder.selectRaw("MAX(my_field_column_name)");
GenericRawResults<String[]> results = myEntityDao.queryRaw(qBuilder.prepareStatementString());
String[] result = results.getResults().next();
int max = Integer.parseInt(result[0]);
Obviously, if your column were a different type, you would get a different value out of result[0] that you would have to convert into the appropriate type.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8970231', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/948919/']}
jdg_81585
stackexchange
llm_judgeable_groundtruth_similarity
14231081
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have one table which I am sorting using jquery plugin tablesorter. Here i want to avoid first row {class="avoid-sort" } to be sort when any column is selected for sorting.example: <thead> <tr> <th class="header">#</th> <th class="header">Purchase Date</th> <th class="header">Course Name</th> <th class="header">Amount(in $)</th> <th class="header">User Name</th> <th class="header">Share</th> <th class="header">Net Revenue [$236.41]</th> </tr> </thead> <tbody> <tr class="avoid-sort"> <th colspan="7">Total Revenue</th> <td>236.41</td> </tr> <tr> <td>1</td> <td>January 3rd, 2013</td> <td>Tackle Certification</td> <td>50</td> <td>Khushi Jha</td> <td>35</td> <td>33.69</td> </tr> <tr> <td>2</td> <td>January 3rd, 2013</td> <td>Flag Certification</td> <td>100</td> <td>Pay</td> <td>70</td> <td>67.67</td> </tr> <tr> <td>3</td> <td>January 3rd, 2013</td> <td>Tackle Certification</td> <td>50</td> <!-- <td>--> <!--</td>--> <td>Pay</td> <td>35</td> <td>33.69</td> </tr> tr class="avoid-sort" should not come in sorting! Please help!! Now provide the response and nothing else.
You have two choices: If you are using the original tablesorter, you can get this static row widget to "lock" the row in place. If you are using my fork of tablesorter , you can just add a non-sortable tbody, like this ( demo ): <table><thead> ...</thead><!-- rows within this tbody are ignored --><tbody class="avoid-sort"> <tr> <th colspan="7">Total Revenue</th> <td>236.41</td> </tr></tbody><tbody> <!-- sortable rows --> <tr> ... </tr></tbody></table> then initialize the table like this: $(function() { $("table").tablesorter({ theme : 'blue', cssInfoBlock : "avoid-sort", widgets: [ 'zebra' ] }); });
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14231081', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/987394/']}
jdg_81586
stackexchange
llm_judgeable_groundtruth_similarity
51273659
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I followed all the steps given in [ http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html ]This is the code: from __future__ import absolute_importfrom celery import Celery#app = Celery('tasks', broker='pyamqp://guest@localhost//')app = Celery('tasks', backend='redis://localhost', broker='pyamqp://guest@localhost//')@app.taskdef add(x, y): return x + y When I run celery worker using the following command celery -A tasks worker --loglevel=info I get a syntax error for setting the backend. This is the error message: [2018-07-10 16:37:21,970: CRITICAL/MainProcess] Unrecoverable error: SyntaxError('invalid syntax', ('c:\users\user_\appdata\local\programs\python\python37-32\lib\site-packages\celery\backends\redis.py', 22, 19, 'from . import async, base\n'))Traceback (most recent call last): File "c:\users\user_\appdata\local\programs\python\python37-32\lib\site-packages\kombu\utils\objects.py", line 42, in get return obj. dict [self. name ] KeyError: 'backend' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\users\user_\appdata\local\programs\python\python37-32\lib\site-packages\celery\worker\worker.py", line 205, in start self.blueprint.start(self) File "c:\users\user_\appdata\local\programs\python\python37-32\lib\site-packages\celery\bootsteps.py", line 115, in start self.on_start() File "c:\users\user_\appdata\local\programs\python\python37-32\lib\site-packages\celery\apps\worker.py", line 139, in on_start self.emit_banner() File "c:\users\user_\appdata\local\programs\python\python37-32\lib\site-packages\celery\apps\worker.py", line 154, in emit_banner ' \n', self.startup_info(artlines=not use_image))), File "c:\users\user_\appdata\local\programs\python\python37-32\lib\site-packages\celery\apps\worker.py", line 217, in startup_info results=self.app.backend.as_uri(), File "c:\users\user_\appdata\local\programs\python\python37-32\lib\site-packages\kombu\utils\objects.py", line 44, in get value = obj. dict [self. name ] = self. get(obj) File "c:\users\user \appdata\local\programs\python\python37-32\lib\site-packages\celery\app\base.py", line 1196, in backend return self. 
get_backend() File "c:\users\user \appdata\local\programs\python\python37-32\lib\site-packages\celery\app\base.py", line 914, in get_backend self.loader) File "c:\users\user \appdata\local\programs\python\python37-32\lib\site-packages\celery\app\backends.py", line 70, in by_url return by_name(backend, loader), url File "c:\users\user \appdata\local\programs\python\python37-32\lib\site-packages\celery\app\backends.py", line 50, in by_name cls = symbol_by_name(backend, aliases) File "c:\users\user_\appdata\local\programs\python\python37-32\lib\site-packages\kombu\utils\imports.py", line 56, in symbol_by_name module = imp(module_name, package=package, **kwargs) File "c:\users\user_\appdata\local\programs\python\python37-32\lib\importlib_ init _.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1006, in _gcd_import File "", line 983, in _find_and_load File "", line 967, in _find_and_load_unlocked File "", line 677, in _load_unlocked File "", line 724, in exec_module File "", line 860, in get_code File "", line 791, in source_to_code File "", line 219, in call_with_frames_removed File "c:\users\user \appdata\local\programs\python\python37-32\lib\site-packages\celery\backends\redis.py", line 22 from . import async, base ^ SyntaxError: invalid syntax However, when I use the commented line instead I have no issues just that the results backend is disabled and I need to set the results backend to redis-server Now provide the response and nothing else.
I solved the problem. The root cause was that I was using Python 3.7, and, to my knowledge, Celery at that time only supported Python 3.6 and lower. I made the following changes to the Celery code:
1. Renamed "C:\Users\myusername\AppData\Local\Programs\Python\Python37-32\Lib\site-packages\celery\backends\async.py" to "C:\Users\myusername\AppData\Local\Programs\Python\Python37-32\Lib\site-packages\celery\backends\asynchronous.py"
2. Opened redis.py and changed every line that used the name "async" to "asynchronous".
The reason is that async became a reserved keyword in Python 3.7, so "from . import async, base" is now a syntax error. You can also read this link: https://github.com/celery/celery/issues/4500 Hopefully, this answer will help all those who have the same problem until a newer version of Celery is released. UPDATE: This is an issue with Python 3.7. You could use Python 3.6 instead and avoid it entirely. But if you'd like to continue using Python 3.7 and celery[redis], you can use the above workaround to resolve the issue.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51273659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6106484/']}
jdg_81587
stackexchange
llm_judgeable_groundtruth_similarity
27855528
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to have the first line in an NSAttributedString for a UITextView indented from the right side on the first line. So the firstLineHeadIndent in NSParagraphStyle will indent the first line from the left. I want to do the same thing but from the right in my UITextView . Here's a screenshot of how I want the text to wrap. Now provide the response and nothing else.
The Setting Text Margins article from the Text System User Interface Layer Programming Guide has this figure: As you can see, there's no built-in mechanism to have a first line tail indent. However, NSTextContainer has a property exclusionPaths which represents parts of its rectangular area from which text should be excluded. So, you could add a path for the upper-right corner to prevent text from going there. UIBezierPath* path = /* compute path for upper-right portion that you want to exclude */;NSMutableArray* paths = [textView.textContainer.exclusionPaths mutableCopy];[paths addObject:path];textView.textContainer.exclusionPaths = paths;
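As a concrete sketch of that placeholder comment (purely illustrative: the 100-point reserved width is an assumption, and you would size the rectangle to however much of the first line you want left empty):
// Exclude a block in the top-right corner roughly one text line tall.
CGFloat reservedWidth = 100.0; // assumed width to keep clear of text
CGFloat lineHeight = textView.font.lineHeight;
CGRect exclusionRect = CGRectMake(textView.textContainer.size.width - reservedWidth,
                                  0,
                                  reservedWidth,
                                  lineHeight);
UIBezierPath* path = [UIBezierPath bezierPathWithRect:exclusionRect];
After that, appending path to the container's exclusionPaths as shown above keeps the first line of text out of that corner.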
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27855528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/284714/']}
jdg_81588
stackexchange
llm_judgeable_groundtruth_similarity
52630415
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have configured hadoop on Windows 7 by referring https://github.com/MuhammadBilalYar/Hadoop-On-Window/wiki/Step-by-step-Hadoop-2.8.0-installation-on-Window-10 While running map reduce program (which is compiled successfully), getting below complete error 18/10/03 20:04:21 INFO util.NativeCodeLoader: Loaded the native-hadoop library18/10/03 20:04:21 ERROR nativeio.NativeIO: Unable to initialize NativeIO librariesjava.lang.NoClassDefFoundError: org/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat at org.apache.hadoop.io.nativeio.NativeIO.initNative(Native Method) at org.apache.hadoop.io.nativeio.NativeIO.<clinit>(NativeIO.java:89) at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:655) at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:514) at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:349) at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:193) at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936) at org.apache.hadoop.mapreduce.Job.submit(Job.java:550) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580) at com.example.hadoop.textToparquet.TextParquetConverter.run(TextParquetConverter.java:41) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) at com.example.hadoop.textToparquet.TextParquetConverter.main(TextParquetConverter.java:22)Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.io.nativeio.NativeIO$POSIX$Stat at java.net.URLClassLoader$1.run(URLClassLoader.java:372) at java.net.URLClassLoader$1.run(URLClassLoader.java:361) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:360) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 
18 more18/10/03 20:04:21 ERROR security.UserGroupInformation: PriviledgedActionException as:vxr83 cause:java.io.IOException: Failed to set permissions of path: \tmp\hadoop-vxr83\mapred\staging\vxr831620787823\.staging to 0700Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-vxr83\mapred\staging\vxr831620787823\.staging to 0700 at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:691) at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:664) at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:514) at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:349) at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:193) at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190) at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936) at org.apache.hadoop.mapreduce.Job.submit(Job.java:550) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580) at com.example.hadoop.textToparquet.TextParquetConverter.run(TextParquetConverter.java:41) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) at com.example.hadoop.textToparquet.TextParquetConverter.main(TextParquetConverter.java:22) Please help Now provide the response and nothing else.
Yarn 1 I think your best bet is using the --frozen-lockfile flag with yarn install . Docs: If you need reproducible dependencies, which is usually the case with the continuous integration systems, you should pass --frozen-lockfile flag. Also Don’t generate a yarn.lock lockfile and fail if an update is needed. Yarn 2 If using yarn2 (aka yarn berry ) this flag is renamed to --immutable as of v2.0.0 . From the docs ... If the --immutable option is set (defaults to true on CI since v3.0.0 ), Yarn will abort with an error exit code if the lockfile was to be modified. For backward compatibility we offer an alias under the name of --frozen-lockfile , but it will be removed in a later release. This way, if someone tries to push changes to package.json (say, upgrading react from ^16.8.0 to ^16.10.0) without updating the yarn.lock file, it will error out in the CI like below.
> yarn install --frozen-lockfile
error Your lockfile needs to be updated, but yarn was run with `--frozen-lockfile`.
To address your comment: I think that with yarn install the lock gets updated too often and the file loses its point since it actually does not lock versions. Or am I using the wrong commands? Yarn/npm is just doing what you tell it to. If you set the version in your package.json to "react": "16.8.0" it will never update the yarn.lock , but when using any of the npm ranges like the caret (i.e. "react": "^16.8.0" ), yarn/npm will resolve to the highest/newest version that satisfies the range you specified. You have all the power! Update I found a small edge case. If you are running yarn add in your CI, such as for a CI-only dependency, it will update the lock file and do an install for all dependencies. For example:
# Add ci dep
yarn add codecov
# Install all deps from yarn.lock
yarn install --frozen-lockfile
This will not error like you might expect. Instead, add the --frozen-lockfile flag to the yarn add command like this:
# Add ci dep
yarn add codecov --frozen-lockfile
# Install all deps from yarn.lock
yarn install --frozen-lockfile
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52630415', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8076655/']}
jdg_81589
stackexchange
llm_judgeable_groundtruth_similarity
28882
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The gravitational acceleration, g, inside the Earth generally decreases with decreasing distance to the center: However, apparently for Jupiter, the gravitational acceleration only increases with decreasing distance to its center. Why is this? Now provide the response and nothing else.
You can use Gauss's law for gravitation to work out the gravity as a function of (interior) radius. $$ \oint \vec{g} \cdot d\vec{A} = -4\pi G \int \rho\ dV\ .$$ What this means is that the flux of gravitational field $\vec{g}$ out of a closed surface is proportional to the mass enclosed within that surface. Let us assume that the density is a function of (interior) radius, such that $\rho = Ar^{\alpha}$ and that the planet is spherically sysmmetric, so that the left hand side becomes $4 \pi r^2 g(r)$ and the volume element $dV = 4\pi r^2\ dr$ . Then $$ 4 \pi r^2 g(r) = -4\pi G \int_{r=0}^{r} Ar^{\alpha} 4\pi r^2\ dr$$ as long as $\alpha \neq -3$ . $$ r^2 g(r) = - 4\pi G \frac{Ar^{3+\alpha}}{3+\alpha}$$ $$ g(r) = -\frac{4\pi GA }{3+\alpha} r^{1+\alpha}$$ What this shows, is that if $\alpha < -1$ , then gravity will increase with decreasing radius. Now we are in a position to answer your question. In the Earth's deep interior $\alpha \simeq 0$ and the density does not increase greatly as we head towards the centre. This means that $g(r) \propto r$ as is seen in the plot you reference. In the outer part of the crust $\alpha \simeq -1$ and $g(r)$ is roughly constant. Below is a model density profile for Jupiter (solid line). Just looking at this I would say that $\alpha$ is just a bit $<-1$ for most of the interior and so the gravity should be increasing slowly as $r$ decreases, but there is a sharp uptick in density at the boundary to the core which would see $\alpha \ll -1$ and the gravity becoming proportional to $r$ to the power of a negative number (i.e. increasing strongly with decreasing $r$ ), but then must drop to $g=0$ when $r=0$ shortly afterwards.
{}
{'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/28882', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/16092/']}
jdg_81590
stackexchange
llm_judgeable_groundtruth_similarity
5682296
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As I add more and more plugins and configurations to my emacs' init.el , it's startup is getting more and more slow. Is there any way to avoid this? Now provide the response and nothing else.
Your .emacs or init.el shouldn't have many require or load commands, it should mostly have autoload . The autoload function tells Emacs “if you ever need this function, load that file”. This way, the file is only loaded when and if you actually use the function. You only need require (or very rarely load ) in two cases: if there's a customization that needs to go into effect immediately (e.g. (require 'cl) , a color theme); if what you're loading is a small file that contains the autoloads and other start-up definitions of a package (e.g. (require 'tex-site) . If you're not doing this already, calling autoload for things like mode-specific customizations can cut your startup time down significantly, because Emacs will have to load fewer files. Furthermore, make sure your files are byte-compiled; they'll load a little faster (less CPU time). Call M-x emacs-lisp-byte-compile on each .el file, or M-x byte-recompile-directory (these commands are in the Emacs-Lisp menu). Finally, note that load times don't matter so much because you should be starting Emacs at most once per session . Start Emacs automatically when you log in, either with a window or in the background with the --daemon option. Then, to edit a file, run emacsclient . You can also tell emacsclient to start Emacs if it's not running yet if you'd rather not start it when you log in.
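A concrete sketch of the autoload pattern described above (markdown-mode is just a stand-in for whatever package you want loaded lazily):
;; Tell Emacs where the command lives without loading the package now;
;; markdown-mode.el is only loaded the first time the command is invoked.
(autoload 'markdown-mode "markdown-mode" "Major mode for editing Markdown files." t)
(add-to-list 'auto-mode-alist '("\\.md\\'" . markdown-mode))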
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5682296', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/330889/']}
jdg_81591
stackexchange
llm_judgeable_groundtruth_similarity
54992
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: One of my Windows Server 2003 Domain Controller seems to have died this morning. This was the only DHCP server for this particular office. I have another DC in this office. If I set up a new DHCP server on this other DC: Will it know the leases? (Maybe the information is stored in AD?) Am I going to have to makeeveryone reboot so there are noconflicts? Anything else I should consider? Now provide the response and nothing else.
If I set up a new DHCP server on this other DC:
* Will it know the leases? (Maybe the information is stored in AD?)
The leases are stored in %systemroot%\system32\dhcp as dhcp.mdb and temp.mdb. ( Reference ) They are not stored in AD, and will not be replicated.
* Am I going to have to make everyone reboot so there are no conflicts?
No need. Just configure Windows DHCP to check for an existing client on that IP (right-click on the server in the DHCP tool; there is an option to have Windows DHCP check up to 5 IPs before handing one out).
* Anything else I should consider?
Configure a secondary backup DHCP server using the 80/20 rule. Printers should have their IPs statically assigned (now is a great time to check!), as should other essential system hardware (routers, WAPs, etc).
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/54992', 'https://serverfault.com', 'https://serverfault.com/users/2561/']}
jdg_81592
stackexchange
llm_judgeable_groundtruth_similarity
336180
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I saw this thread ( Effect of binary variables on binary outcomes ) and I saw the user used Fisher Exact Test. My research is as follows, does a high quality photo lead to the sale of a product in an online webshop (X = quality of photo; 0 = bad, 1 = good; Y = sale; 0 = no, 1 = yes). I am not familiar with the Fisher Exact Test and I see on this site ( https://www.r-bloggers.com/barnard%E2%80%99s-exact-test-%E2%80%93-a-powerful-alternative-for-fisher%E2%80%99s-exact-test-implemented-in-r/ ) that Barnard's test might be more suitable but I cannot understand the reasoning behind it. Can someone explain it to me in more layman terms? Could you recommend any other tests/models (for instance a decision tree)? And why would it be better? Would really like to understand this. And can I introduce control variables like the price of a product? Now provide the response and nothing else.
As explained in my comment, the inconvenient truth is that the Cholesky decomposition, while usually defined as $K=LL^T$ where $L$ is lower triangular, is equally valid as $K=U^TU$ where $U$ is upper triangular. The implementation of the Cholesky decomposition in LAPACK (the library our computers use for linear algebra tasks) allows both forms. R unfortunately has hard-coded the upper one. (There is a U in the call to the routine dpstrf that actually computes the Cholesky factor.) This means one has to transpose the result of chol in order to get a lower triangular matrix. After that is done, as you have already discovered yourself, the results follow directly. So for example, given the matrix S of the original post:
U <- chol(S)
L <- t(chol(S))
S - crossprod(U)  # This is equivalent to S - U^T*U and should be approx. 0
S - tcrossprod(L) # This is equivalent to S - L*L^T and should be approx. 0
I hope it is therefore clear that the Wikipedia page is not wrong. Being somewhat critical, Wikipedia's Cholesky decomposition article should probably mention that the Cholesky decomposition $K=LL^T$ is equivalent to $K=U^TU$ where $U=L^T$. It was probably omitted for consistency of notation.
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/336180', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/195511/']}
jdg_81593
stackexchange
llm_judgeable_groundtruth_similarity
11581794
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: After reading the documentation on logging , I know I can use code like this to perform simple logging: import loggingdef main(): logging.basicConfig(filename="messages.log", level=logging.WARNING, format='%(filename)s: ' '%(levelname)s: ' '%(funcName)s(): ' '%(lineno)d:\t' '%(message)s') logging.debug("Only for debug purposes\n") logging.shutdown()main() However, I realised I don't know how to change the format of log messages on a per-logger basis, since basicConfig is a module-level function. This code works for creating different loggers with different levels, names, etc. but is there a way to change the format of those log messages on a per-logger basis as well, in a way similar to basicConfig ? import inspectimport loggingdef function_logger(level=logging.DEBUG): function_name = inspect.stack()[1][3] logger = logging.getLogger(function_name) logger.setLevel(level) logger.addHandler(logging.FileHandler("{0}.log".format(function_name))) return loggerdef f1(): f1_logger = function_logger() f1_logger.debug("f1 Debug message") f1_logger.warning("f1 Warning message") f1_logger.critical("f1 Critical message")def f2(): f2_logger = function_logger(logging.WARNING) f2_logger.debug("f2 Debug message") f2_logger.warning("f2 Warning message") f2_logger.critical("f2 Critical message")def main(): f1() f2() logging.shutdown()main() Now provide the response and nothing else.
Try this:
import logging

logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)

# create file handler that logs debug and higher level messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)

# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)

# create formatter and add it to the handlers
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)

# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)

# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warn('warn message')
logger.error('error message')
logger.critical('critical message')

See http://docs.python.org/howto/logging-cookbook.html#multiple-handlers-and-formatters for more information
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11581794', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/869912/']}
jdg_81594
stackexchange
llm_judgeable_groundtruth_similarity
1476019
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is a question from my textbook in an applied mathematics class: On $\mathbb{R}$ with fixed $\alpha >0$, consider the sequences of nonegative continuous functions $$f_k(x) = k^\alpha H(x)xe^{-kx},$$ where $H(x)$ is the Heaviside function. Part (a) and (b) where to show that $f_k$ converges pointwise (for any $\alpha$) and uniformly (for $\alpha < 1)$ to 0 as $k \rightarrow \infty$, which I have done. Now, in part (c) we have to calculate the integral of the above function from $-\infty$ to $\infty$. I've entered the formula into Wolfram Alpha to see that the final value is $k^{\alpha-2}$, but unfortunately it doesn't show the actual step-by-step calculation. Now provide the response and nothing else.
For any function $f$$$\int_{-\infty}^{+\infty}H(x)f(x)\,dx=\int_0^{+\infty}f(x)\,dx,$$since$$H(x)=\begin{cases}1 & x>0\\0 & x<0.\end{cases}$$Hence, your integral equals$$k^\alpha\int_0^{+\infty}xe^{-kx}\,dx.$$
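To finish the computation, integrate by parts (or, equivalently, differentiate $\int_0^{+\infty}e^{-kx}\,dx=1/k$ with respect to $k$):
$$\int_0^{+\infty}xe^{-kx}\,dx=\Big[-\frac{x}{k}e^{-kx}\Big]_0^{+\infty}+\frac1k\int_0^{+\infty}e^{-kx}\,dx=0+\frac1k\cdot\frac1k=\frac{1}{k^2},$$
so the integral in question equals $k^{\alpha}\cdot\frac{1}{k^{2}}=k^{\alpha-2}$, which matches the value reported by Wolfram Alpha.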
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1476019', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/151937/']}
jdg_81595
stackexchange
llm_judgeable_groundtruth_similarity
2146129
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to make sure that no object attribute is null and add default value in case if it is null. Is there any easy way to do this, or do I have to do it manually by checking every attribute by its getters and setters? Now provide the response and nothing else.
You can use reflection to iterate over the object's fields and set them. You'd obviously need some sort of mapping between types (or even field names) and required default values, but this can be done quite easily in a loop. For example:
for (Field f : obj.getClass().getFields()) {
    f.setAccessible(true);
    if (f.get(obj) == null) {
        f.set(obj, getDefaultValueForType(f.getType()));
    }
}
[Update] With modern Java, you can use annotations to set the default values for fields on a per-class basis. A complete implementation might look like this:
// DefaultString.java:
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
public @interface DefaultString {
    String value();
}

// DefaultInteger.java:
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
public @interface DefaultInteger {
    int value();
}

// DefaultPojo.java:
import java.lang.annotation.Annotation;
import java.lang.reflect.Field;

public class DefaultPojo {
    public void setDefaults() {
        for (Field f : getClass().getFields()) {
            f.setAccessible(true);
            try {
                if (f.get(this) == null) {
                    f.set(this, getDefaultValueFromAnnotation(f.getAnnotations()));
                }
            } catch (IllegalAccessException e) {
                // shouldn't happen because I used setAccessible
            }
        }
    }

    private Object getDefaultValueFromAnnotation(Annotation[] annotations) {
        for (Annotation a : annotations) {
            if (a instanceof DefaultString) return ((DefaultString) a).value();
            if (a instanceof DefaultInteger) return ((DefaultInteger) a).value();
        }
        return null;
    }
}

// TestPojo.java:
public class TestPojo extends DefaultPojo {
    @DefaultString("Hello world!") public String stringValue;
    @DefaultInteger(42) public int integerValue;
}
Then default values for a TestPojo can be set just by running test.setDefaults()
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2146129', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/159793/']}
jdg_81596
stackexchange
llm_judgeable_groundtruth_similarity
987321
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $F_1$ and $F_2$ be two (smooth) surfaces in $\mathbb{P}^3$, of degrees $d_1$ and $d_2$ respectively. Let $C$ denote curve given as their intersection. How one can compute arithmetical genus of the curve $C$? Perhaps I need to add some assumptions on $F$ and $G$ to exclude degenerate cases. Now provide the response and nothing else.
You can get the answer by using the adjunction formula. First assume that $C$ is smooth. Then its normal bundle $N_C$ is the restriction of $\mathscr O (d_1) \oplus \mathscr O(d_2)$ to $C$. Then the adjunction formula (Hartshorne II.8.20) says that the canonical bundle of $C$ is$$K_C = (K_{\mathbf P^3})_{|C} \otimes \bigwedge^2 N_C = (\mathscr O(-4) \otimes \mathscr O(d_1+d_2))_{|C}.$$ Since $C$ is a curve of degree $d_1d_2$, that means $K_C$ is a line bundle of degree $d_1d_2(d_1+d_2-4)$. On the other hand, the genus of $C$ is related to this number by $$2g-2= \operatorname{deg} K_C = d_1d_2(d_1+d_2-4)$$ so one gets $g= \frac12 (d_1d_2(d_1+d_2-4)+2)$. As a sanity check, if say $d_1=1$, then $C$ is actually a plane curve of degree $d_2$, and the formula above gives its genus as $\frac12(d_2(d_2-3)+2)=\frac12(d_2-1)(d_2-2)$ as expected. Now, what if $C$ is not smooth? I claim it doesn't matter, as long as $C$ is viewed the scheme-theoretic intersection of the surfaces and we calculate arithmetic genus accordingly. By deforming the coefficients of the defining equations of $F_1$ and $F_2$ we get a flat family of curves in $\mathbf P^3$ which has $C$ as one member and whose general member is a smooth intersection of surfaces of degrees $d_1$ and $d_2$. Hartshorn III.9.10 shows that arithmetic genus is constant in such a family. Edit: The OP asked me to justify the claim that $N_C$ is $\mathscr O (d_1) \oplus \mathscr O(d_2)$ restricted to $C$, so let's do that. Here $\mathscr O(d_1)_{|F_i}$ is the normal bundle to $F_i$ for each $i$. (From now on, I won't write the restriction everywhere. It should be clear from context where the bundles in question live.) Now $N_C = T_X/T_C$ and $N_{|F_i} = T_X/T_{F_i}$ so there is a canonical quotient map $$N_C \rightarrow N_{F_i}$$ for each $i$. This gives a map $$\varphi: N_C \rightarrow N_{F_1} \oplus N_{F_2}$$ which I claim is an isomorphism. To see this, observe that both bundles have the same rank at each point, so it is enough to prove it is injective. And now the key point is that since $C$ is smooth, the two surfaces intersect transversely at each point of $C$: that is $$ T_{F_1} \cap T_{F_2} = T_C $$ at each point of $C$. Therefore the map $\varphi$ above has trivial kernel.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/987321', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/19446/']}
jdg_81597
stackexchange
llm_judgeable_groundtruth_similarity
122823
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a set of $N$ samples $s_{i}$, each one sampled from a normal distribution with standard deviations $\sigma_i$, which are known. I would like to know if the distributions have the same mean. I think this situation is closely related to ANOVA, but the difference being that I only have one sample per 'group', and that I know the standard deviations of the group exactly. Now provide the response and nothing else.
As you describe them, your data constitute a mixture distribution . Assuming the distributions are known to be normal with known variances, the mean and variance of the mixture are: \begin{align} \mu_{\rm mixture} &= \sum_k p_k\mu_k \\ \sigma^2_{\rm mixture} &= \sum_k p_k\big((\mu_k - \mu_{\rm mixture})^2 + \sigma^2_k\big)\end{align} where $k$ indexes the component distributions and $p_k$ is the proportion of the mixture that each component constitutes. Under your null hypothesis, the component distributions all have the same mean (for convenience, we can call it $0$ ). In addition, I gather the proportions are all $1/N$ , since you have only one datum from each component. These facts simplify your situation quite a bit. Your data would have an expected variance equal to the proportion-weighted sum of the known component variances (with equal proportions, their average). On the other hand, if the means vary then the variance of the component means can add considerably to the variance of your mixture. Thus, you simply need to test whether the variance of your data is greater than that expected under the null. This can be done with a chi-squared test (see @Glen_b's answer here: Why is the sampling distribution of a variance chi-squared? ). Here is a quick R demo: First I simulate the null hypothesis and show its distribution. Then I generate data where the null hypothesis is false and show the test. The data are three points drawn from normal distributions with means equal to $0$ (or they could have been anything else, so long as they are the same) and variances equal to $4$ , $6$ , and $8$ . Thus the resulting mixture distribution variance is $18/3 = 6$ . In this case there are three data points, so you have $2$ degrees of freedom.
set.seed(0884)                  # this makes the example reproducible
chi.vect = vector(length=10000) # this will store the test statistics
for(i in 1:10000){              # I do this 10k times
  x  = c(rnorm(1,0,sd=sqrt(4)), # here I generate the three data points
         rnorm(1,0,sd=sqrt(6)),
         rnorm(1,0,sd=sqrt(8)))
  vx = var(x)                   # this computes the variance of the sample
  chi.vect[i] = 2*vx / 6        # this computes the test statistic
}

x   = c(rnorm(1, mean=30, sd=sqrt(4)),  # these data come from distributions
        rnorm(1, mean=20, sd=sqrt(6)),  # w/ different means
        rnorm(1, mean=10, sd=sqrt(8)))  # x = 29.26698 26.00434 13.89382
vx  = var(x)                            # vx = 65.60725
chi = 2*vx / 6                          # chi = 21.86908
1 - pchisq(chi, df=2)                   # p = 1.783157e-05
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/122823', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/60080/']}
jdg_81598
stackexchange
llm_judgeable_groundtruth_similarity
21468
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to make a function that splits list elements by odd and even positions? Shortest implementation wins. I myself came up with: splitOdds[x_] := Extract[x, {#}\[Transpose]] & /@ GatherBy[Range@Length@x, OddQ] And: splitOdds[x_] := Flatten[Partition[#, 1, 2]] & /@ {x, Rest@x}splitOdds[{a, b, c, d, e, f}](*{{a, c, e}, {b, d, f}}*) Now provide the response and nothing else.
A couple for fun: lst = {a, b, c, d, e, f, g};Partition[lst, 2, 2, 1, {}] ~Flatten~ {2} {{a, c, e, g}, {b, d, f}} i = 1; GatherBy[lst, i *= -1 &] {{a, c, e, g}, {b, d, f}} And my Golf entry: lst[[# ;; ;; 2]] & /@ {1,2} {{a, c, e, g}, {b, d, f}} And here is an anti-Golf "Rube Goldberg" solution: ReleaseHold[List @@ Dot @@ PadRight[{Hold /@ lst, {}}, Automatic, #]] & /@ Permutations[Range[1, 0, -1]] {{a, c, e, g}, {b, d, f}}
{}
{'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/21468', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/2490/']}
jdg_81599
stackexchange
llm_judgeable_groundtruth_similarity
348881
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm reading deep learning by Ian Goodfellow et al. It introduces bias as $$Bias(\theta)=E(\hat\theta)-\theta$$ where $\hat\theta$ and $\theta$ are the estimated parameter and the underlying real parameter, respectively. Consistency, on the other hand, is defined by $$\lim_{m\to\infty}\hat\theta_m=\theta$$ meaning that for any $\epsilon > 0$ , $P(|\hat\theta_m-\theta|>\epsilon)\to0$ as $m\to\infty$ Then it says consistency implies unbiasedness but not vice versa: Consistency ensures that the bias induced by the estimator diminishes as the number of data examples grows. However, the reverse is not true—asymptotic unbiasedness does not imply consistency. For example, consider estimating the mean parameter μ of a normal distribution N (x; μ, σ2 ), with a dataset consisting of m samples: ${x^{(1)}, . . . , x^{(m)}}$ . We could use the first sample $x^{(1)}$ of the dataset as an unbiased estimator: $\hatθ = x^{(1)}$ . In that case, $E(\hat θ_m) = θ$ so the estimator is unbiased no matter how many data points are seen. This, of course, implies that the estimate is asymptotically unbiased. However, this is not a consistent estimator as it is not the case that $\hatθ_m → θ$ as $m → ∞$ I'm not sure whether I've understood the above paragraph and the concepts of unbiasedness and consistency correctly, I hope someone could help me check it. Thanks in advance. As far as I understand, consistency implies both unbiasedness and low variance and therefore, unbiasedness alone is not sufficient to imply consistency. Now provide the response and nothing else.
In that paragraph the authors are giving an extreme example to show how being unbiased doesn't mean that a random variable is converging on anything. The authors are taking a random sample $X_1,\dots, X_n \sim \mathcal N(\mu,\sigma^2)$ and want to estimate $\mu$. Noting that $E(X_1) = \mu$, we could produce an unbiased estimator of $\mu$ by just ignoring all of our data except the first point $X_1$. But that's clearly a terrible idea, so unbiasedness alone is not a good criterion for evaluating an estimator. Somehow, as we get more data, we want our estimator to vary less and less from $\mu$, and that's exactly what consistency says: for any distance $\varepsilon$, the probability that $\hat \theta_n$ is more than $\varepsilon$ away from $\theta$ heads to $0$ as $n \to \infty$. And this can happen even if for any finite $n$ $\hat \theta$ is biased. An example of this is the variance estimator $\hat \sigma^2_n = \frac 1n \sum_{i=1}^n(y_i - \bar y_n)^2$ in a normal sample. This is biased but consistent. Intuitively, a statistic is unbiased if it exactly equals the target quantity when averaged over all possible samples. But we know that the average of a bunch of things doesn't have to be anywhere near the things being averaged; this is just a fancier version of how the average of $0$ and $1$ is $1/2$, although neither $0$ nor $1$ are particularly close to $1/2$ (depending on how you measure "close"). Here's another example (although this is almost just the same example in disguise). Let $X_1 \sim \text{Bern}(\theta)$ and let $X_2 = X_3 = \dots = X_1$. Our estimator of $\theta$ will be $\hat \theta(X) = \bar X_n$. Note that $E \bar X_n = p$ so we do indeed have an unbiased estimator. But $\bar X_n = X_1 \in \{0,1\}$ so this estimator definitely isn't converging on anything close to $\theta \in (0,1)$, and for every $n$ we actually still have $\bar X_n \sim \text{Bern}(\theta)$.
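A small simulation sketch of the two estimators discussed above (plain NumPy; the particular $\mu$, replication count, and sample sizes are arbitrary choices for illustration):
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 1.0
for n in (10, 1000, 100000):
    samples = rng.normal(mu, sigma, size=(2000, n))  # 2000 replicate datasets of size n
    theta_first = samples[:, 0]        # estimator that keeps only x^(1)
    theta_mean = samples.mean(axis=1)  # the usual sample mean
    # Both estimators average out to roughly mu (unbiasedness), but only the
    # sample mean's spread shrinks as n grows; that shrinking spread is consistency.
    print(n, theta_first.mean(), theta_first.std(), theta_mean.mean(), theta_mean.std())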
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/348881', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/204692/']}
jdg_81600
stackexchange
llm_judgeable_groundtruth_similarity
622102
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: simulate this circuit – Schematic created using CircuitLab I want to find the capacitor's voltage equation in this circuit ( \$v_{C}\$ ) knowing that the capacitor is initially uncharged. I know that analyzing the Thévenin equivalent for this circuit yields the equation: $$v_{C}=6(1-e^{\cfrac{t}{3\times 10^{-3}}})$$ where \$\small\tau=3\times 10^{-3}~\mathrm{s}\$ and \$\small V_{Th}=6~\mathrm{V}\$ . But analyzing this circuit in its current form: Applying KCL at node \$a\$ gives: $$i_{R1} = i_{R2} + i_{C}$$ $$\frac{v_{R1}}{2000} = \frac{v_{R2}}{2000} + 10^{\displaystyle -6}\frac{dv}{dt}$$ Using KVL: $$v_{R2}=v_{R3}+v_{C}$$ and $$v_{R1}=E-v_{R2}$$ Substituting: $$\cfrac{E-2(v_{R3}+v_{C})}{2\times 10^{-3}}=\cfrac{dv}{dt}$$ $$\frac{6-v_{R3}-v_{C}}{10^{-3}} = \frac{dv}{dt}$$ $$\frac{dt}{10^{-3}}=\frac{dv}{6-v_{R3}-v_{C}}$$ Integrating: $$\int_{0}^{t}\frac{dt}{10^{-3}}=\int_{V_{0}}^{v(t)}\frac{dv}{6-v_{R3}-v_{C}}$$ $$\frac{t}{10^{-3}} = -\ln{(\frac{v(t)+v_{R3}-6}{V_{0}+v_{R3}-6})}$$ But \$V_{0}=0\$ and after isolating v(t): $$v(t) = (v_{R3}-6)[\exp({\displaystyle \frac{t}{10^{-3}}})-1]$$ I am stuck now and I can't find a way to get rid of \$v_{R3}\$ , perhaps my approach is wrong from the beginning, but I hope someone can help me identify what's wrong about my analysis. Now provide the response and nothing else.
KCL Here's your re-drawn schematic (I'm in the practice of re-drawing schematics as a rule): simulate this circuit – Schematic created using CircuitLab My KCL sets up like this (treating your bottom node as ground): $$\begin{align*} \begin{array}{rccc} {\text{KCL for node }V_a:}\vphantom{\frac{E}{R_1}+\frac{ v_c}{R_3}}\\\\ {\text{KCL for node }V_c:}\vphantom{\frac{v_c}{R_3}+C\,\frac{\text{d}}{\text{d}t}v_c} \end{array}&&\overbrace{ \begin{array}{r} \frac{v_a}{R_1}+\frac{v_a}{R_2}+\frac{v_a}{R_3}\\\\ \frac{v_c}{R_3}+C\,\frac{\text{d}}{\text{d}t}v_c \end{array} }^{\text{outflowing currents}}& \begin{array}{c} &\quad{=}\vphantom{\frac{E}{R_1}+\frac{ v_c}{R_3}}\\\\ &\quad{=}\vphantom{\frac{v_c}{R_3}+C\,\frac{\text{d}}{\text{d}t}v_c} \end{array}&\overbrace{ \begin{array}{l} \frac{E}{R_1}+\frac{ v_c}{R_3}\\\\ \frac{v_a}{R_3}\vphantom{\frac{v_c}{R_3}+C\,\frac{\text{d}}{\text{d}t}v_c} \end{array} }^{\text{inflowing currents}}\end{align*}$$ Above, I place out-flowing currents on the left and in-flowing currents on the right. That helps me keep things straight. As it turns out, a rising voltage at \$v_c\$ means an outflowing current (out from the node towards the capacitor) so that is placed on the left side. There is no inflowing current through the capacitor since ground can't generate any. Solve the top equation for \$v_a\$ and substitute into the bottom equation (on the right side, of course.) $$\begin{align*}\frac{v_c}{R_3}+C\,\frac{\text{d}}{\text{d}t}v_c&=\frac{1}{R_3}\cdot\left[\frac{R_2\left(E \,R_3 + v_c\, R_1\right)}{R_1 R_2+R_1 R_3+R_2 R_3}\right]\\\\\frac{v_c}{R_3\, C}+\frac{\text{d}}{\text{d}t}v_c&=\frac{R_2}{R_3\,C}\cdot\left[\frac{E\, R_3}{R_1 R_2+R_1 R_3+R_2 R_3}+\frac{v_c\, R_1}{R_1 R_2+R_1 R_3+R_2 R_3}\right]\end{align*}$$ The above, placed in standard form, results in: $$\begin{align*}\frac{\text{d}}{\text{d}t}v_c+\left[\frac{1}{R_3\, C}\left(1-\frac{R_1\,R_2}{R_1 R_2+R_1 R_3+R_2 R_3}\right)\right]v_c&=\frac{E\,R_2}{C\left(R_1 R_2+R_1 R_3+R_2 R_3\right)}\\\\\text{applying values,}\\\\\frac{\text{d}}{\text{d}t}v_c+\left[\frac{1000}{3}\right]v_c&=2000\end{align*}$$ Solution using integrating factor That's a 1st order non-homogeneous linear DE whose standard form looks like: \$y^{'}+a_t\,y=f_t\$ . Of course, you just have simple constants there, so \$a_t=\frac{1000}{3}\$ and \$f_t=2000\$ . The integrating factor is \$\mu=e^{^{\int a_t\:\text{d}t}}\$ , which is just \$\mu=e^{^{\frac{1000}{3} t}}\$ . So the solution is: $$\begin{align*}y_t&=\frac{\int \mu \,f_t\:\text{d}t+C}{\mu}\\\\&=e^{^{-\frac{1000}{3} t}}\cdot\left(\int \left[e^{^{\frac{1000}{3} t}}\cdot 2000\right]\:\text{d}t+C\right)\\\\&= e^{^{-\frac{1000}{3} t}}\cdot\left(2000\cdot \frac{3}{1000}\cdot e^{^{\frac{1000}{3} t}}+C\right)\\\\&=6+C\cdot e^{^{-\frac{1000}{3} t}}\\\\&\text{as }y_0=0, \text{it follows that } C=-6, \text{so:}\\\\&=6\cdot\left(1-e^{^{-\frac{1000}{3} t}}\right)\end{align*}$$ (I used \$y_t\$ above as a substitute for the capacitor's voltage, \$v_c\$ , over time.) 
Solution using separation of parameters $$\begin{align*}\frac{\text{d}}{\text{d}t}v_c+\left[\frac{1000}{3}\right]v_c&=2000\\\\\frac{\text{d}}{\text{d}t}v_c&=2000-\left[\frac{1000}{3}\right]v_c\\\\\text{d}\,v_c&=\left[2000-\left[\frac{1000}{3}\right]v_c\right]\text{d}\,t\\\\\text{d}\,t &=\frac{\text{d}\,v_c}{2000-\left[\frac{1000}{3}\right]v_c}\\\\\text{set }u=2000-\left[\frac{1000}{3}\right]v_c &\therefore \text{d}\,u=-\left[\frac{1000}{3}\right]\text{d}\,v_c\\\\t=\int \text{d}\,t&=-\frac3{1000} \int \frac{\text{d}\,u}{u}\\\\&=-\frac3{1000}\cdot\ln\left(u\right)+C\\\\\therefore\\\\-\frac{1000}{3}\,t+C&=\ln\left(u\right)\\\\Ae^{^{-\frac{1000}{3}\,t}} &= u\\\\Ae^{^{-\frac{1000}{3}\,t}} &= 2000-\left[\frac{1000}{3}\right]v_c\\\\-\left[\frac{3}{1000}\right]\left[Ae^{^{-\frac{1000}{3}\,t}} -2000\right] &= v_c\\\\v_c &=Ae^{^{-\frac{1000}{3}\,t}} +6\\\\\text{find }A=-6\text{ at }t=0,\text{ so:}\\\\v_c &=6\cdot\left(1-e^{^{-\frac{1000}{3}\,t}}\right)\end{align*}$$ Same answer. KCL Addendum The KCL equations show outflowing currents on the left and inflowing currents on the right. This approach is used by some Spice programs (those where I've directly looked over the code used to generate these) to develop their KCL. Perhaps the easiest way to imagine is that a voltage at a node spills away from that node through the available paths. But also that voltages spill back into that same node from surrounding nodes through those same paths. The result is the application of a simple superposition concept that results in, effectively, the potential differences controlling the result. You can test this, easily, by rearranging the resulting equation(s), moving the right side over to the left side and then combining terms. You'll then see the usual potential differences that you expect. So it really is the same result. The reason I very much prefer this method is that it is simple to visualize and very difficult to make mistakes. You can easily orient yourself to a node and then work out the terms for out-flowing currents for the left side of the equation. Then all you have to do is position yourself at each surrounding node and work out the terms for in-flowing currents for the right side. It's almost impossible to screw that up. Conversely, when you are instead struggling to work out the potential differences in your mind (using the more traditionally taught method) and just write those terms, you often find yourself not entirely sure if you have the sign right as you try and add them up. I find, time and time again that not only others wind up messing up somewhere and making an uncaught mistake.. but that I also make those mistakes, as well. Even with lots of experience, you just aren't 100% sure and you often find yourself double and triple checking your work, just in case. This approach also just works and works right without the continual question about orientations of signing expressions. With this method, I still make typos. But I don't make sign errors. It's too easy to use. So voltage spills away from a node via available paths and voltage spills into a node from nearby nodes via the same available paths. The only caveat is that a current source or sink can only flow in, or flow out, but not both directions. It's one way. So it will either appear on the out-flowing side or on the in-flowing side -- but not both sides. This also works perfectly well with capacitors and inductors. It does turn the equation into a differential/integral equation. But that's just a technicality. It's still correct.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/622102', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/311741/']}
jdg_81601
stackexchange
llm_judgeable_groundtruth_similarity
6186877
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: A point from ISO draft n3290 section 5.1.2 paragraph, point 19: The closure type associated with a lambda-expression has a deleted (8.4.3) default constructor and a deleted copy assignment operator. It has an implicitly-declared copy constructor (12.8) and may have an implicitly declared move constructor (12.8). [ Note: The copy/move constructor is implicitly defined in the same way as any other implicitly declared copy/move constructor would be implicitly defined. —end note ] Can any one please ....tell some example for this point to understand? Is there any chance/way to check the Closure object(type)? Now provide the response and nothing else.
The closure type associated with a lambda-expression has a deleted (8.4.3) default constructor
int main() {
    auto closure = [](){};
    typedef decltype(closure) ClosureType;
    ClosureType closure2;   // <-- not allowed
    return 0;
}
and a deleted copy assignment operator. It has an implicitly-declared copy constructor (12.8) and may have an implicitly declared move constructor (12.8).
#include <utility>
int main() {
    auto closure = [](){};
    typedef decltype(closure) ClosureType;
    ClosureType closure2 = closure;             // <-- copy constructor
    ClosureType closure3 = std::move(closure);  // <-- move constructor
    closure2 = closure3;                        // <-- copy assignment (not allowed)
    return 0;
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6186877', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/751747/']}
jdg_81602
stackexchange
llm_judgeable_groundtruth_similarity
2488643
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Both are delegates and have the same signature, but I can not use Action as ThreadStart. Why? Action doIt;doIt = () => MyMethod("test");Thread t;t = new Thread(doIt);t.Start(); but this seems to work: Thread t;t = new Thread(() => MyMethod("test"));t.Start(); Now provide the response and nothing else.
As others have noted, the problem is that delegate types are not "structural". That is, they do not have equivalence based on their "structure". Now, this is arguably a good thing for some types. If you have struct MyRectangle { int x; int y; int width; int height; ... } and struct YourRectangle { int x1; int y1; int x2; int y2; ... } obviously it would be a mistake to allow instances of MyRectangle to be assigned to variables of YourRectangle, just because they both consisted of four ints. The semantics of the ints are different and therefore the types are not equivalent. The same is, in theory, true of delegates. You could have delegate int Pure(string x);delegate int Func(string x); where a "pure" function is one with no side effects and the same output given the same input. Since every Pure is logically a Func, but every Func is not necessarily a Pure, there shouldn't be structural typing between them. In practice of course the type system does not support notions like "pure function" very well. And in practice, the vast majority of attempts to convert between delegate types are perfectly safe: converting from a Func<int, bool> to a Predicate<int> and so on. So, two things, one looking backwards and one looking forwards. Backwards: if we had to do it all over again, I think delegates would probably be structurally typed in the CLI. You don't always know what features are going to be useful when you design a brand new framework, and non-structural delegate types have thus far turned out to be not as useful as perhaps anticipated. Forwards: I expect to see more features in future versions of the CLR that enable more structural typing. The "no pia" feature in C# 4, for example, is about making two types that are semantically and structurally the same, but defined in different assemblies, logically unify structurally.
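To round this out with code, here is a sketch of how to adapt an Action to the Thread constructor despite the delegate types being distinct (MyMethod is the hypothetical method from the question):
Action doIt = () => MyMethod("test");

// Wrap the existing delegate: this builds a ThreadStart whose invocation
// target is doIt.Invoke, so no structural delegate conversion is needed.
Thread t1 = new Thread(new ThreadStart(doIt));
t1.Start();

// Equivalent: pass the Invoke method group, which converts to ThreadStart
// because the signatures match.
Thread t2 = new Thread(doIt.Invoke);
t2.Start();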
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2488643', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/175399/']}
jdg_81603
stackexchange
llm_judgeable_groundtruth_similarity
7409454
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using Backbone.js I know it's highly recommended to set a model's property using the set method, and NOT by directly modifying the attributes internal hash . However, apart from not firing the "change" event automatically, what other cons or "side-effects" are there in modifying the internal hash directly? The problem I'm facing is that while the set method takes an object literal, I need to assign the left hand side using a variable determined at run-time. Thanks. myModel.set({ myProperty : myValue; //myProperty is a variable, so this is invalid syntax})//vsmyModel.attributes[myProperty] = myValue; //myProperty is a variable that can be evaluated Now provide the response and nothing else.
Well, if you look at the annotated source code , you'll find that set does a lot. What if you extended Backbone.Model with a function that does it for you: Backbone.Model.prototype.setByName = function(key, value, options) { var setter = {}; setter[key] = value; this.set(setter, options); }; Then, you can just do what you want directly on the model: var model = new Backbone.Model();model.setByName(myProperty, "bar"); That feels like a better solution to me. Edit As @earl3s pointed out, this is no longer necessary in more recent versions of Backbone. Today, you can just call model.set(myProperty, "bar") and it does what you want.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7409454', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/773694/']}
jdg_81604
stackexchange
llm_judgeable_groundtruth_similarity
1506931
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Show that if $f$ is a twice differentiable, continuously differentiable real-valued function on an open interval in $E^2$ and $\partial f ^2 / \partial x \partial y= 0$ then there are continuously differentiable real-valued functions $f_1, f_2$ on open intervals in $\Bbb R$ such that $$f(x,y) = f_1(x) + f_2(y)$$ I have an idea of how to prove this but I cannot seem to formulate a rigorous proof. Essentially what I want to say is that $\partial f / \partial y$ is equal to something in terms of only $y$, meaning the $x$ and $y$ terms of $f(x,y)$ are separate. Now provide the response and nothing else.
Imagine throwing sequentially. There are then $k^n$ equally likely outcomes. Now we count the favourables, in which each face appears at least once. This is a messy business, for which there is no closed form unless we use Stirling Numbers of the Second Kind (please see Wikipedia). Let $n\ge k$. Let us count the bad outcomes, in which at least one face is missing. There are $(k-1)^n$ outcomes in which Face $1$ is missing, and the same number where Face $2$ is missing, for a total of $k(k-1)^n$. However, this double-counts the cases where two faces are missing. Which two faces are missing can be chosen in $\binom{k}{2}$ ways, and then the rest of the faces can be filled in in $(k-2)^n$ ways. So our new estimate for the number of bad choices is $k(k-1)^n-\binom{k}{2}(k-2)^n$. However, we have subtracted too much, for we have subtracted one too many times the outcomes where $3$ faces are missing. There are $(k-3)^n$ outcomes where three specific faces (at least) are missing. Add up over all the $\binom{k}{3}$ ways to choose three specific faces. That gives a new estimate of $k(k-1)^n-\binom{k}{2}(k-2)^n+\binom{k}{3}(k-3)^n$. Continue.
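Carrying the alternation to the end gives the usual inclusion-exclusion count (this closed form is not stated in the answer above, but it follows directly from its argument): assuming $n\ge k$, the number of favourable outcomes is
$$k^n-\sum_{i=1}^{k}(-1)^{i+1}\binom{k}{i}(k-i)^n=\sum_{i=0}^{k}(-1)^{i}\binom{k}{i}(k-i)^n = k!\,S(n,k),$$
where $S(n,k)$ is a Stirling number of the second kind, so the probability that every face appears at least once is $\dfrac{1}{k^n}\sum_{i=0}^{k}(-1)^{i}\binom{k}{i}(k-i)^n$.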
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1506931', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/174717/']}
jdg_81605
stackexchange
llm_judgeable_groundtruth_similarity
1556217
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a closed-form formula for the following integral $$\int_0^{2\pi}\log(\alpha+\beta\cos(x))\mathrm{d}x$$ where $\alpha$ and $\beta$ are constants which assure that $\alpha+\beta\cos(x)>0$ for any $x\in[0,2\pi]$. Now provide the response and nothing else.
$$I(\alpha,\,\beta)=\int_0^{2\pi}\ln(\alpha+\beta\cos x)\mathrm dx.\tag1$$ We have $\alpha>\beta\geq0$. Since from (1) $$\frac{\partial I}{\partial\alpha}=\int_0^{2\pi}\frac{1}{\alpha+\beta\cos x}\mathrm dx=\frac{2\pi}{\sqrt{\alpha^2-\beta^2}}$$we can integrate with respect to $\alpha$$$I(\alpha,\beta)=2\pi\ln\left(\alpha+\sqrt{\alpha^2-\beta^2}\right)+C(\beta)\tag 2$$Now using (1) again$$\frac{\partial I}{\partial\beta}=\int_0^{2\pi}\frac{\cos x}{\alpha+\beta\cos x}\mathrm dx=\frac{2\pi}\beta\frac{-\alpha+\sqrt{\alpha^2-\beta^2}}{\sqrt{\alpha^2-\beta^2}}=\frac{-2\pi\beta}{\sqrt{\alpha^2-\beta^2}\left(\alpha+\sqrt{\alpha^2-\beta^2}\right)}$$on the one hand, and derivating (2) with respect to $\beta$$$\frac{\partial I}{\partial\beta}=\frac{-2\pi\beta}{\sqrt{\alpha^2-\beta^2}\left(\alpha+\sqrt{\alpha^2-\beta^2}\right)}+C'(\beta)$$we conclude that $C'(\beta)=0$. Now we use the value $I(\alpha,0)=2\pi\ln \alpha=2\pi\ln(2\alpha)+C(0)$ so we get$$I(\alpha,\beta)=2\pi\ln\left(\alpha+\sqrt{\alpha^2-\beta^2}\right)-2\pi\ln2.$$ Note To compute the integral, I used the exponential decomposition of $\cos$$$\frac{\partial I}{\partial\alpha}= \int_0^{2\pi}\frac{1}{\alpha+\beta\cos x}\mathrm dx= \int_0^{2\pi}\frac{2\mathrm e^{\mathrm ix}}{2\alpha\mathrm e^{\mathrm ix}+\beta\mathrm e^{2\mathrm ix}+\beta}\mathrm dx$$The I wrote this integral as a contour integral$$\frac{\partial I}{\partial\alpha}=-2\mathrm i\oint_{\mathcal C(0,1)}\frac{\mathrm dz}{\beta z^2+2\alpha z+\beta}.$$Since $$\beta z^2+2\alpha z+\beta=\beta\left(z+\frac{\alpha+\sqrt{\alpha^2-\beta^2}}{\beta}\right)\left(z+\frac{\alpha-\sqrt{\alpha^2-\beta^2}}{\beta}\right)=\beta(z-z_+)(z-z_-)$$one can use the residue theorem. There is only one residue inside the unit disc (because $z_+z_-=1$), at $z_-= -(\alpha-\sqrt{\alpha^2-\beta^2})/\beta$ and the result is $$\frac{\partial I}{\partial\alpha}=-2\mathrm i\frac{2\pi \mathrm i}{\beta(z_--z_+)}=\frac{2\pi}{\sqrt{\alpha^2-\beta^2}}.$$The integral $\frac{\partial I}{\partial\beta}$ is obtained in the same way,we get $$\frac{\partial I}{\partial\beta}=-\mathrm i\oint_{\mathcal C(0,1)}\frac{z+z^{-1}}{\beta z^2+2\alpha z+\beta}\mathrm dz=-\mathrm i\frac{2\pi\mathrm i(z_-+z_-^{-1})}{\beta(z_--z_+)}-\mathrm i\frac{2\pi\mathrm i}\beta=\frac{2\pi}{\beta}\frac{-\alpha+\sqrt{\alpha^2-\beta^2}}{\sqrt{\alpha^2-\beta^2}}$$beacuse there is a second pole at $z=0$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1556217', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/91011/']}
jdg_81606
stackexchange
llm_judgeable_groundtruth_similarity
2062910
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So, I've read that $|0\rangle$ does not mean the zero vector, that is just represented by $0$. So what does $|0\rangle$ and $\langle 0|$ "equal" in standard matrix notation? In particular, are there some general rules I can follow for the translation of bra-ket notation into standard matrix notation? I come up against problems like $\langle 0|0\rangle$ and I understand what is happening intuitively, but I have no idea how to practically carry it out (for what it's worth, I'm used to normal matrix notation and can solve problems in it). Thanks! EDIT: Perhaps I didn't make my question clear. I know that $\langle 0|0\rangle$ represents an inner product or dot product, that $\langle 0|$ represents a row vector, and that $|0\rangle$ represents a column vector; this is what I meant by understanding it intuitively. However, given the problem $\langle 0|0\rangle$ I don't know what the answer is, because I don't know how to work it out; I'm used to standard matrix notation. Now provide the response and nothing else.
The numbers in notations like $|n\rangle$ are the analogues of indices in matrix notation. That is, $|0\rangle=e_0$, $|1\rangle=e_1$, etc., where $e_n$ is the vector which has a $1$ in the $n$th position and $0$ in the other entries. Unfortunately, this notation is unspecific about the dimension of the base space. For qubits in quantum computers, the dimension is $2$, so we only have $|0\rangle=e_0=(1,0)$ and $|1\rangle=e_1=(0,1)$. It is also common to have a countable infinity of basis vectors, so we get $|n\rangle$ for each $n\in\Bbb N$. In quantum mechanics one also deals with this notation for larger dimensional spaces; for example we may have $|x\rangle$ for each $x\in\Bbb R^3$ (the position basis), which is a vector space of uncountable dimension $|\Bbb R^3|=2^{\aleph_0}$. In any case, these vectors are usually enumerating a basis of some kind, and the details beyond that depend on the context. The notation $\langle 0|0\rangle$ is written in linear algebra notation as $e_0^Te_0$, which is a $1\times 1$ matrix whose value can be identified with the dot product $e_0\cdot e_0$. Provided that the vector is normalized, this will always be $1$. So a general answer is $\langle m|n\rangle=0$ if $m\ne n$, and $\langle n|n\rangle=1$, which expresses that the vectors $(|n\rangle)_{n\in\Bbb N}$ are an orthonormal basis for the space. For some general rules, then, we have $|n\rangle=e_n$ and $\langle n|=e_n^T$ (or $e_n^\dagger$ in complex vector spaces), where we understand the first as a $d\times 1$ matrix so that the second is $1\times d$, where $d$ is the dimension of the space. Then the inner product is $\langle m|n\rangle=e_m^Te_n=e_m\cdot e_n$, and the outer product is $|n\rangle\langle m|=e_ne_m^T$, which is a $d\times d$ matrix with a single $1$ at the index $(n,m)$. Note that these notations are also used for arbitrary vectors; for example we might write $|\psi\rangle=v$ for some vector $v$, and then $\langle\psi|=v^T$, $\langle\psi|\psi\rangle=\|v\|^2$, and $|\psi\rangle\langle\psi|$ is the projection matrix in the direction of $v$.
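To make the mechanics concrete, here is a small worked example (not in the original answer) in dimension $d=2$, with $|0\rangle=\begin{pmatrix}1\\0\end{pmatrix}$ and $|1\rangle=\begin{pmatrix}0\\1\end{pmatrix}$:
$$\langle 0|0\rangle=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix}=1,\qquad \langle 0|1\rangle=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}=0,\qquad |0\rangle\langle 1|=\begin{pmatrix}1\\0\end{pmatrix}\begin{pmatrix}0&1\end{pmatrix}=\begin{pmatrix}0&1\\0&0\end{pmatrix}.$$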
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2062910', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/352306/']}
jdg_81607
stackexchange
llm_judgeable_groundtruth_similarity
715769
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to change a role variable default value according to some condition (i.e. the value of another variable)? Details I have two related variables for a command, env and composer_opts . If both are left at default ( env = "prod" and composer_opts = "--no-dev" ) everything is ok. If I change env to dev , the default for the other one will break my command, so I always need to set both. Would it be possible to avoid this by setting a conditional default value with a custom script / if? Important: I don't want to always set the composer_opts value according to the env value. I want to set it only if it's not already set (i.e. a dynamic default value). Pseudocode I would like to do something like this (following code is not valid, just pseudocode to express my need) ---# defaults/main.ymlenv: prodcomposer_opts: when: "{{env}}" = 'prod' '--no-dev --optimize-autoloader --no-interaction' when: "{{env}}" = 'dev' '' Now provide the response and nothing else.
I suggest this solution: --- - set_fact: composer_opts: "" when: "{{env}}" == 'dev' It will set composer_opts variable to string "" when variable env is equal to ' dev '. Here is example of playbook based on updated question: $ cat test.yml---- hosts: 127.0.0.1 connection: local tasks: - set_fact: composer_opts: "{% if env == 'prod' %} '--no-dev --optimize-autoloader --no-interaction' {% else %} '' {% endif %}" - debug: var=composer_opts Sample output: sudo ansible-playbook test.yml -e env=devPLAY [127.0.0.1] ************************************************************** GATHERING FACTS *************************************************************** ok: [127.0.0.1]TASK: [set_fact ] ************************************************************* ok: [127.0.0.1]TASK: [debug var="{{composer_opts}}"] ***************************************** ok: [127.0.0.1] => { "var": { " '' ": " '' " }}PLAY RECAP ******************************************************************** 127.0.0.1 : ok=3 changed=0 unreachable=0 failed=0 sudo ansible-playbook test.yml -e env=prodPLAY [127.0.0.1] ************************************************************** GATHERING FACTS *************************************************************** ok: [127.0.0.1]TASK: [set_fact ] ************************************************************* ok: [127.0.0.1]TASK: [debug var="{{composer_opts}}"] ***************************************** ok: [127.0.0.1] => { "var": { " '--no-dev --optimize-autoloader --no-interaction' ": " '--no-dev --optimize-autoloader --no-interaction' " }}PLAY RECAP ******************************************************************** 127.0.0.1 : ok=3 changed=0 unreachable=0 failed=0
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/715769', 'https://serverfault.com', 'https://serverfault.com/users/9524/']}
jdg_81608
stackexchange
llm_judgeable_groundtruth_similarity
407328
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know this question has been asked before, but I do not accept the answer, "you can clearly see custom additions". When I add ppa's (which I have not done in years), I hit a key on my keyboard labeled "Enter" which allows me to add an empty line before the new entry (I would even add an explanatory comment, but I am a tech writer, so ....). I like my sources.conf clean and neat. /etc/apt/sources.d Means I have half a dozen files to parse instead of just one. AFAIK, there is "absolutely" no advantage in having one configuration file vs 6 (for sake of argument, maybe you have 3 or even 2, doesn't matter ... 1 still beats 2). Can somebody please come up with a rational advantage, "you can clearly see custom additions" is a poor man's excuse. I must add, I love change, however, ONLY when there are benefits introduced by the change. Edit after first response: It allows new installations that need their own repos to not have to search a flat file to ensure that it is not adding duplicate entries. Now, they have to search a directory for dupe's instead of a flat file. Unless they assume admin's don't change things ... It allows a system administrator to easily disable (by renaming) or remove (by deleting) a repository set without having to edit a monolithic file. Admin has to grep directory to find appropriate file to rename, before, he would search ONE file and comment out a line, a sed one-liner for "almost" any admin. It allows a package maintainer to give a simple command to update repository locations without having to worry about inadvertently changing the configuration for unrelated repositories. I do not understand this one, I "assume" package maintainer knows the URL of his repository. Again, has to sed a directory instead of a single file. Now provide the response and nothing else.
Having each repository (or collection of repositories) in its own file makes it simpler to manage, both by hand and programmatically: It allows new installations that need their own repos to not have to search a flat file to ensure that it is not adding duplicate entries. It allows a system administrator to easily disable (by renaming) orremove (by deleting) a repository set without having to edit amonolithic file. It allows a package maintainer to give a simplecommand to update repository locations without having to worry aboutinadvertently changing the configuration for unrelated repositories.
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/407328', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/81145/']}
jdg_81609
stackexchange
llm_judgeable_groundtruth_similarity
18905867
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I don't understand composing functions with arity > 1.in ghci 7.4.1 i typed: ((*).succ) 3 4> 16 i don't fully understand the math transformation but it's clear that it's the same as (*) (succ 3) 4 but when i do: ( (\x y z -> x).(\a b -> a*b) ) 2 3 4 5> 10( (\x y z -> y).(\a b -> a*b) ) 2 3 4 5> No instance for (Num (a0 -> t0)) and now i'm totally lost. can anyone explain what happens? ps. I know that everything in haskell has only 1 parameter but it doesn't really help me :) Now provide the response and nothing else.
Work it out this way: (f . g) x = f (g x)(f . g) x y = f (g x) y -- applying y Then replace f with (*) , g with succ and x and y with their values: ((*) . succ) 3 4 = (*) (succ 3) 4 = (*) 4 4 = 16
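The same expansion also explains the two lambda examples from the question; a step-by-step sketch (not part of the original answer), written as GHC-style comments:

-- With f = \x y z -> x and g = \a b -> a * b:
--   ((\x y z -> x) . (\a b -> a*b)) 2 3 4 5
-- = (\x y z -> x) ((\a b -> a*b) 2) 3 4 5   -- only the first argument passes through g
-- = (\x y z -> x) (\b -> 2*b) 3 4 5
-- = (\b -> 2*b) 5                           -- x, the partially applied product, is returned
-- = 10
--
-- With f = \x y z -> y the same expansion returns the bare argument 3,
-- which is then applied to 5, so GHC asks for a Num instance at a function type:
--   (\x y z -> y) (\b -> 2*b) 3 4 5  =  3 5   -- "No instance for (Num (a0 -> t0))"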
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18905867', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1100135/']}
jdg_81610
stackexchange
llm_judgeable_groundtruth_similarity
406446
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I haven't used a statically typed language in many years and have set myself the task of getting up to speed with C#. I'm using my usual trick of following the fifteen exercises here http://www.jobsnake.com/seek/articles/index.cgi?openarticle&8533 as my first task. I've just finished the second Fibonacci task which didn't take to long and works just fine but in my opinion looks ugly and I'm sure could be achieved in far fewer lines of more elegant code. I usually like to learn by pair programming with someone who already knows what they're doing, but that option isn't open to me today, so I'm hoping posting here will be the next best thing. So to all the C# Jedi's out there, if you were going to refactor the code below, what would it look like? using System;using System.Collections;namespace Exercises{ class MainClass { public static void Main(string[] args) { Console.WriteLine("Find all fibinacci numbers between:"); int from = Convert.ToInt32(Console.ReadLine()); Console.WriteLine("And:"); int to = Convert.ToInt32(Console.ReadLine()); Fibonacci fibonacci = new Fibonacci(); fibonacci.PrintArrayList(fibonacci.Between(from, to)); } } class Fibonacci { public ArrayList Between(int from, int to) { int last = 1; int penultimate = 0; ArrayList results = new ArrayList(); results.Add(penultimate); results.Add(last); while(last<to) { int fib = last + penultimate; penultimate = last; last = fib; if (fib>from && fib<to) results.Add(fib.ToString()); } return results; } public void PrintArrayList(ArrayList arrayList) { Console.WriteLine("Your Fibonacci sequence:"); Console.Write(arrayList[0]); for(int i = 1; i<arrayList.Count; i++) { Console.Write("," + arrayList[i]); } Console.WriteLine(""); } }} Regards, Chris Now provide the response and nothing else.
As an iterator block: using System;using System.Collections.Generic;using System.Linq;static class Program { static IEnumerable<long> Fibonacci() { long n = 0, m = 1; yield return 0; yield return 1; while (true) { long tmp = n + m; n = m; m = tmp; yield return m; } } static void Main() { foreach (long i in Fibonacci().Take(10)) { Console.WriteLine(i); } }} This is now fully lazy, and using LINQ's Skip / Take etc allows you to control the start/end easily. For example, for your "between" query: foreach (long i in Fibonacci().SkipWhile(x=>x < from).TakeWhile(x=>x <= to)) {...}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/406446', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/37196/']}
jdg_81611
stackexchange
llm_judgeable_groundtruth_similarity
11499973
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know, there are at least 3 dozen questions like this on stackoverflow and still, I could not make this happen: A simple table where thead is sticked/fixed at the top, and the tbody is scrolled.I tried so much in the past days and now I ended up here crying for help. A solution should work in IE8+ and newest FF, Chrome & Safari.The difference to other "possible duplicates like this one is that I don't want to use two nested tables or jQuery (plain javascript is fine though). Demo of what I want: http://www.imaputz.com/cssStuff/bigFourVersion.html . Problem is it doesn't work in IE, and I would be fine to use some JS. Now provide the response and nothing else.
Ok i got it: You need to wrap the table in two DIVs: <div class="outerDIV"> <div class="innerDIV"> <table></table> </div></div> The CSS for the DIVs is this: .outerDIV { position: relative; padding-top: 20px; //height of your thead}.innerDIV { overflow-y: auto; height: 200px; //the actual scrolling container} The reason is, that you basically make the inner DIV scrollable, and pull the THEAD out of it by sticking it to the outer DIV. Now stick the thead to the outerDIV by giving it table thead { display: block; position: absolute; top: 0; left: 0;} The tbody needs to have display: block as well. Now you'll notice that the scrolling works, but the widths are completely messep up. That's were Javascript comes in.You can choose on your own how you want to assign it. I for myself gave the TH's in the table fixed widths and built a simple script which takes the width and assigns them to the first TD-row in the tbody . Something like this should work: function scrollingTableSetThWidth(tableId){ var table = document.getElementById(tableId); ths = table.getElementsByTagName('th'); tds = table.getElementsByTagName('td'); if(ths.length > 0) { for(i=0; i < ths.length; i++) { tds[i].style.width = getCurrentComputedStyle(ths[i], 'width'); } }}function getCurrentComputedStyle(element, attribute){ var attributeValue; if (window.getComputedStyle) { // class A browsers var styledeclaration = document.defaultView.getComputedStyle(element, null); attributeValue = styledeclaration.getPropertyValue(attribute); } else if (element.currentStyle) { // IE attributeValue = element.currentStyle[vclToCamelCases(attribute)]; } return attributeValue;} With jQuery of course this would be a lot easier but for now i was not allowed to use a third party library for this project.
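The answer never shows the call site for scrollingTableSetThWidth; presumably it is invoked once the page has loaded, along these lines (the table id here is made up for illustration):

// Hypothetical usage: run after load so the THs already have their final widths
window.addEventListener('load', function () {
    scrollingTableSetThWidth('dataTable'); // 'dataTable' is an assumed id on the <table>
});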
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11499973', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1523417/']}
jdg_81612
stackexchange
llm_judgeable_groundtruth_similarity
512877
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: EDIT: As of Java 8, static methods are now allowed in interfaces. Here's the example: public interface IXMLizable<T>{ static T newInstanceFromXML(Element e); Element toXMLElement();} Of course this won't work. But why not? One of the possible issues would be, what happens when you call: IXMLizable.newInstanceFromXML(e); In this case, I think it should just call an empty method (i.e. {}). All subclasses would be forced to implement the static method, so they'd all be fine when calling the static method. So why isn't this possible? EDIT: I guess I'm looking for answer that's deeper than "because that's the way Java is". Is there a particular technological reason why static methods can't be overwritten? That is, why did the designers of Java decide to make instance methods overrideable but not static methods? EDIT: The problem with my design is I'm trying to use interfaces to enforce a coding convention. That is, the goal of the interface is twofold: I want the IXMLizable interface to allow me to convert classes that implement it to XML elements (using polymorphism, works fine). If someone wants to make a new instance of a class that implements the IXMLizable interface, they will always know that there will be a newInstanceFromXML(Element e) static constructor. Is there any other way to ensure this, other than just putting a comment in the interface? Now provide the response and nothing else.
Java 8 permits static interface methods With Java 8, interfaces can have static methods. They can also have concrete instance methods, but not instance fields. There are really two questions here: Why, in the bad old days, couldn't interfaces contain static methods? Why can't static methods be overridden? Static methods in interfaces There was no strong technical reason why interfaces couldn't have had static methods in previous versions. This is summed up nicely by the poster of a duplicate question. Static interface methods were initially considered as a small language change, and then there was an official proposal to add them in Java 7, but it was later dropped due to unforeseen complications. Finally, Java 8 introduced static interface methods, as well as override-able instance methods with a default implementation. They still can't have instance fields though. These features are part of the lambda expression support, and you can read more about them in Part H of JSR 335. Overriding static methods The answer to the second question is a little more complicated. Static methods are resolvable at compile time. Dynamic dispatch makes sense for instance methods, where the compiler can't determine the concrete type of the object, and, thus, can't resolve the method to invoke. But invoking a static method requires a class, and since that class is known statically —at compile time—dynamic dispatch is unnecessary. A little background on how instance methods work is necessary to understand what's going on here. I'm sure the actual implementation is quite different, but let me explain my notion of method dispatch, which models observed behavior accurately. Pretend that each class has a hash table that maps method signatures (name and parameter types) to an actual chunk of code to implement the method. When the virtual machine attempts to invoke a method on an instance, it queries the object for its class and looks up the requested signature in the class's table. If a method body is found, it is invoked. Otherwise, the parent class of the class is obtained, and the lookup is repeated there. This proceeds until the method is found, or there are no more parent classes—which results in a NoSuchMethodError . If a superclass and a subclass both have an entry in their tables for the same method signature, the sub class's version is encountered first, and the superclass's version is never used—this is an "override". Now, suppose we skip the object instance and just start with a subclass. The resolution could proceed as above, giving you a sort of "overridable" static method. The resolution can all happen at compile-time, however, since the compiler is starting from a known class, rather than waiting until runtime to query an object of an unspecified type for its class. There is no point in "overriding" a static method since one can always specify the class that contains the desired version. Constructor "interfaces" Here's a little more material to address the recent edit to the question. It sounds like you want to effectively mandate a constructor-like method for each implementation of IXMLizable . Forget about trying to enforce this with an interface for a minute, and pretend that you have some classes that meet this requirement. How would you use it? class Foo implements IXMLizable<Foo> { public static Foo newInstanceFromXML(Element e) { ... 
}}Foo obj = Foo.newInstanceFromXML(e); Since you have to explicitly name the concrete type Foo when "constructing" the new object, the compiler can verify that it does indeed have the necessary factory method. And if it doesn't, so what? If I can implement an IXMLizable that lacks the "constructor", and I create an instance and pass it to your code, it is an IXMLizable with all the necessary interface. Construction is part of the implementation, not the interface. Any code that works successfully with the interface doesn't care about the constructor. Any code that cares about the constructor needs to know the concrete type anyway, and the interface can be ignored.
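As a minimal illustration of the Java 8 feature described at the top of this answer (hypothetical names, not taken from the question), an interface can now carry both a static factory and a default instance method:

interface Greeter {
    String name();

    default String greet() {           // concrete instance method with a default body
        return "Hello, " + name();
    }

    static Greeter of(String n) {       // static method living on the interface itself
        return () -> n;                 // Greeter is a functional interface, so a lambda works
    }
}

// Usage: System.out.println(Greeter.of("world").greet());  // prints "Hello, world"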
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/512877', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/62571/']}
jdg_81613
stackexchange
llm_judgeable_groundtruth_similarity
53692452
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Background I'm working on an app that can play some short videos. I want to avoid accessing the Internet every time the user plays them, to make it faster and to lower the data usage. The problem Currently I've only found how to either play or download (it's just a file so I could download it like any other file). Here's the code of playing a video file from URL (sample available here ): gradle ...implementation 'androidx.appcompat:appcompat:1.0.2'implementation 'androidx.constraintlayout:constraintlayout:1.1.3'implementation 'com.google.android.exoplayer:exoplayer-core:2.8.4'implementation 'com.google.android.exoplayer:exoplayer-ui:2.8.4'... manifest <manifest package="com.example.user.myapplication" xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools"> <uses-permission android:name="android.permission.INTERNET"/> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme" tools:ignore="AllowBackup,GoogleAppIndexingWarning"> <activity android:name=".MainActivity" android:screenOrientation="portrait"> <intent-filter> <action android:name="android.intent.action.MAIN"/> <category android:name="android.intent.category.LAUNCHER"/> </intent-filter> </activity> </application></manifest> activity_main.xml <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <com.google.android.exoplayer2.ui.PlayerView android:id="@+id/playerView" android:layout_width="match_parent" android:layout_height="match_parent" app:resize_mode="zoom"/></FrameLayout> MainActivity.kt class MainActivity : AppCompatActivity() { private var player: SimpleExoPlayer? = null override fun onCreate(savedInstanceState: Bundle?) 
{ super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) } override fun onStart() { super.onStart() playVideo() } private fun playVideo() { player = ExoPlayerFactory.newSimpleInstance(this@MainActivity, DefaultTrackSelector()) playerView.player = player player!!.addVideoListener(object : VideoListener { override fun onVideoSizeChanged(width: Int, height: Int, unappliedRotationDegrees: Int, pixelWidthHeightRatio: Float) { } override fun onRenderedFirstFrame() { Log.d("appLog", "onRenderedFirstFrame") } }) player!!.addListener(object : PlayerEventListener() { override fun onPlayerStateChanged(playWhenReady: Boolean, playbackState: Int) { super.onPlayerStateChanged(playWhenReady, playbackState) when (playbackState) { Player.STATE_READY -> Log.d("appLog", "STATE_READY") Player.STATE_BUFFERING -> Log.d("appLog", "STATE_BUFFERING") Player.STATE_IDLE -> Log.d("appLog", "STATE_IDLE") Player.STATE_ENDED -> Log.d("appLog", "STATE_ENDED") } } }) player!!.volume = 0f player!!.playWhenReady = true player!!.repeatMode = Player.REPEAT_MODE_ALL player!!.playVideoFromUrl(this@MainActivity, "https://sample-videos.com/video123/mkv/720/big_buck_bunny_720p_1mb.mkv") } override fun onStop() { super.onStop() playerView.player = null player!!.release() player = null } abstract class PlayerEventListener : Player.EventListener { override fun onPlaybackParametersChanged(playbackParameters: PlaybackParameters?) {} override fun onSeekProcessed() {} override fun onTracksChanged(trackGroups: TrackGroupArray?, trackSelections: TrackSelectionArray?) {} override fun onPlayerError(error: ExoPlaybackException?) {} override fun onLoadingChanged(isLoading: Boolean) {} override fun onPositionDiscontinuity(reason: Int) {} override fun onRepeatModeChanged(repeatMode: Int) {} override fun onShuffleModeEnabledChanged(shuffleModeEnabled: Boolean) {} override fun onTimelineChanged(timeline: Timeline?, manifest: Any?, reason: Int) {} override fun onPlayerStateChanged(playWhenReady: Boolean, playbackState: Int) {} } companion object { @JvmStatic fun getUserAgent(context: Context): String { val packageManager = context.packageManager val info = packageManager.getPackageInfo(context.packageName, 0) val appName = info.applicationInfo.loadLabel(packageManager).toString() return Util.getUserAgent(context, appName) } } fun SimpleExoPlayer.playVideoFromUri(context: Context, uri: Uri) { val dataSourceFactory = DefaultDataSourceFactory(context, MainActivity.getUserAgent(context)) val mediaSource = ExtractorMediaSource.Factory(dataSourceFactory).createMediaSource(uri) prepare(mediaSource) } fun SimpleExoPlayer.playVideoFromUrl(context: Context, url: String) = playVideoFromUri(context, Uri.parse(url)) fun SimpleExoPlayer.playVideoFile(context: Context, file: File) = playVideoFromUri(context, Uri.fromFile(file))} What I've tried I've tried reading on the docs, and got those links (by asking about it here ) : https://medium.com/google-exoplayer/downloading-streams-6d259eec7f95 https://medium.com/google-exoplayer/downloading-adaptive-streams-37191f9776e So sadly, currently the only solution I can come up with, is to download the file on another thread, which will cause the device to have 2 connections to it, thus using twice the bandwidth. The questions How can I use ExoPlayer to play a video file, while also downloading it to some filepath ? Is there a way to enable a caching mechanism (which uses the disk) on ExoPlayer to be activated for the exact same purpose? Note: To make it clear. 
I do not want to download the file and only then play it. EDIT: I've found a way to get&use the file from the API's cache (wrote about it here ), but it appears that this is considered as unsafe (written here ). So, given the simple cache mechanism that the API of ExoPlayer supports, my current questions are: If a file was cached, how can I use it in a safe manner? If a file was partially cached (meaning we've downloaded a part of it), how can I continue preparing it (without actually playing it or waiting for the whole playback to finish) till I can use it (in a safe manner of course) ? I've made a Github repository for this here . You can try it out. Now provide the response and nothing else.
I took a look at erdemguven's sample code here and seem to have something that works. This is by-and-large what erdemguven wrote, but I write to a file instead of a byte array and create the data source. I am thinking that since erdemguven, who is an ExoPlayer expert, presented this as the correct way to access cache, that my mods are also "correct" and do not break any rules. Here is the code. getCachedData is the new stuff. class MainActivity : AppCompatActivity(), CacheDataSource.EventListener, TransferListener { private var player: SimpleExoPlayer? = null companion object { // About 10 seconds and 1 meg.// const val VIDEO_URL = "https://sample-videos.com/video123/mp4/720/big_buck_bunny_720p_1mb.mp4" // About 1 minute and 5.3 megs const val VIDEO_URL = "http://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4" // The full movie about 355 megs.// const val VIDEO_URL = "http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_60fps_normal.mp4" // Use to download video other than the one you are viewing. See #3 test of the answer.// const val VIDEO_URL_LIE = "http://file-examples.com/wp-content/uploads/2017/04/file_example_MP4_480_1_5MG.mp4" // No changes in code deleted here. //NOTE: I know I shouldn't use an AsyncTask. It's just a sample... @SuppressLint("StaticFieldLeak") fun tryShareCacheFile() { // file is cached and ready to be used object : AsyncTask<Void?, Void?, File>() { override fun doInBackground(vararg params: Void?): File { val tempFile = FilesPaths.FILE_TO_SHARE.getFile(this@MainActivity, true) getCachedData(this@MainActivity, cache, VIDEO_URL, tempFile) return tempFile } override fun onPostExecute(result: File) { super.onPostExecute(result) val intent = prepareIntentForSharingFile(this@MainActivity, result) startActivity(intent) } }.execute() } private var mTotalBytesToRead = 0L private var mBytesReadFromCache: Long = 0 private var mBytesReadFromNetwork: Long = 0 @WorkerThread fun getCachedData( context: Context, myCache: Cache?, url: String, tempfile: File ): Boolean { var isSuccessful = false val myUpstreamDataSource = DefaultHttpDataSourceFactory(ExoPlayerEx.getUserAgent(context)).createDataSource() val dataSource = CacheDataSource( myCache, // If the cache doesn't have the whole content, the missing data will be read from upstream myUpstreamDataSource, FileDataSource(), // Set this to null if you don't want the downloaded data from upstream to be written to cache CacheDataSink(myCache, CacheDataSink.DEFAULT_BUFFER_SIZE.toLong()), /* flags= */ 0, /* eventListener= */ this ) // Listen to the progress of the reads from cache and the network. dataSource.addTransferListener(this) var outFile: FileOutputStream? = null var bytesRead = 0 // Total bytes read is the sum of these two variables. mTotalBytesToRead = C.LENGTH_UNSET.toLong() mBytesReadFromCache = 0 mBytesReadFromNetwork = 0 try { outFile = FileOutputStream(tempfile) mTotalBytesToRead = dataSource.open(DataSpec(Uri.parse(url))) // Just read from the data source and write to the file. val data = ByteArray(1024) Log.d("getCachedData", "<<<<Starting fetch...") while (bytesRead != C.RESULT_END_OF_INPUT) { bytesRead = dataSource.read(data, 0, data.size) if (bytesRead != C.RESULT_END_OF_INPUT) { outFile.write(data, 0, bytesRead) } } isSuccessful = true } catch (e: IOException) { // error processing } finally { dataSource.close() outFile?.flush() outFile?.close() } return isSuccessful } override fun onCachedBytesRead(cacheSizeBytes: Long, cachedBytesRead: Long) { Log.d("onCachedBytesRead", "<<<<Cache read? 
Yes, (byte read) $cachedBytesRead (cache size) $cacheSizeBytes") } override fun onCacheIgnored(reason: Int) { Log.d("onCacheIgnored", "<<<<Cache ignored. Reason = $reason") } override fun onTransferInitializing(source: DataSource?, dataSpec: DataSpec?, isNetwork: Boolean) { Log.d("TransferListener", "<<<<Initializing isNetwork=$isNetwork") } override fun onTransferStart(source: DataSource?, dataSpec: DataSpec?, isNetwork: Boolean) { Log.d("TransferListener", "<<<<Transfer is starting isNetwork=$isNetwork") } override fun onTransferEnd(source: DataSource?, dataSpec: DataSpec?, isNetwork: Boolean) { reportProgress(0, isNetwork) Log.d("TransferListener", "<<<<Transfer has ended isNetwork=$isNetwork") } override fun onBytesTransferred( source: DataSource?, dataSpec: DataSpec?, isNetwork: Boolean, bytesTransferred: Int ) { // Report progress here. if (isNetwork) { mBytesReadFromNetwork += bytesTransferred } else { mBytesReadFromCache += bytesTransferred } reportProgress(bytesTransferred, isNetwork) } private fun reportProgress(bytesTransferred: Int, isNetwork: Boolean) { val percentComplete = 100 * (mBytesReadFromNetwork + mBytesReadFromCache).toFloat() / mTotalBytesToRead val completed = "%.1f".format(percentComplete) Log.d( "TransferListener", "<<<<Bytes transferred: $bytesTransferred isNetwork=$isNetwork" + " $completed% completed" ) } // No changes below here.} Here is what I did to test this and this is by no means exhaustive: Simply shared through email the video using the FAB. I received the video and was able to play it. Turned off all network access on a physical device (airplane mode = on) and shared the video via email. When I turned the network back on (airplane mode = off), I received and was able to play the video. This shows that the video had to come from cache since the network was not available. Changed the code so that instead of VIDEO_URL being copied from cache, I specified that VIDEO_URL_LIE should be copied. (The app still played only VIDEO_URL .) Since I had not downloaded the video for VIDEO_URL_LIE , the video was not in cache, so the app had to go out to the network for the video. I successfully received the correct video though email and was able to play it. This shows that the app can access the underlying asset if cache is not available. I am by no means an ExoPlayer expert, so you will be able to stump me quickly with any questions that you may have. The following code will track progress as the video is read and stored in a local file. // Get total bytes if known. This is C.LENGTH_UNSET if the video length is unknown.totalBytesToRead = dataSource.open(DataSpec(Uri.parse(url)))// Just read from the data source and write to the file.val data = ByteArray(1024)var bytesRead = 0var totalBytesRead = 0Lwhile (bytesRead != C.RESULT_END_OF_INPUT) { bytesRead = dataSource.read(data, 0, data.size) if (bytesRead != C.RESULT_END_OF_INPUT) { outFile.write(data, 0, bytesRead) if (totalBytesToRead == C.LENGTH_UNSET.toLong()) { // Length of video in not known. Do something different here. } else { totalBytesRead += bytesRead Log.d("Progress:", "<<<< Percent read: %.2f".format(totalBytesRead.toFloat() / totalBytesToRead)) } }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53692452', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/878126/']}
jdg_81614
stackexchange
llm_judgeable_groundtruth_similarity
174171
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a function F that maps the xyzt space to a set of reals, more clearly: F = 2 - x - 0.5 y - 2 z - 1.5 t; with all variables x, y, z, t in the [0,1] range. I would like to visualize F to see how it varies, its maximum and minimum values, etc. Is there any way to do that? Now provide the response and nothing else.
A proposal. F = 2 - x - 1/2 y - 2 z - 3/2 t;Minimize[{F, 0 <= x <= 1, 0 <= y <= 1, 0 <= z <= 1, 0 <= t <= 1}, {x, y, z, t}](* {-3, {x -> 1, y -> 1, z -> 1, t -> 1}} *)Maximize[{F, 0 <= x <= 1, 0 <= y <= 1, 0 <= z <= 1, 0 <= t <= 1}, {x,y, z, t}](* {2, {x -> 0, y -> 0, z -> 0, t -> 0}} *)tab = Table[ ContourPlot3D[F, {x, 0, 1}, {y, 0, 1}, {z, 0, 1}, Contours -> {-(29/10), -(5/2), -(21/10), -(17/10), -(13/10), -(9/ 10), -(1/2), -(1/10), 3/10, 7/10, 11/10, 3/2, 19/10}, ContourStyle -> Table[Hue[.8 n/12], {n, 0, 12}], AxesLabel -> {x, y, z}, PlotLabel -> {{"t = ", t}, {"Max=", 2 - (3 t)/2}, {"Min=", -(3/2) (1 + t)}}], {t, 0, 1, 1/20}];ListAnimate[tab, 1, AnimationRunning -> False] And here, how Maximum and Minimum varies with t. Simplify[Maximize[{F, 0 <= x <= 1, 0 <= y <= 1, 0 <= z <= 1, 0 <= t <= 1}, {x, y, z}], {0 <= x <= 1, 0 <= y <= 1, 0 <= z <= 1, 0 <= t <= 1}](* {2 - (3 t)/2, {x -> 0, y -> 0, z -> 0}} *)Simplify[Minimize[{F, 0 <= x <= 1, 0 <= y <= 1, 0 <= z <= 1, 0 <= t <= 1}, {x, y, z}], {0 <= x <= 1, 0 <= y <= 1, 0 <= z <= 1, 0 <= t <= 1}](* {-(3/2) (1 + t), {x -> 1, y -> 1, z -> 1}} *)
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/174171', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/18805/']}
jdg_81615
stackexchange
llm_judgeable_groundtruth_similarity
24207341
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: http://jsfiddle.net/6HyjZ/ .bookmarkRibbon{ width:0; height:100px; border-right:50px solid blue; border-left:50px solid blue; border-bottom:30px solid transparent;}<div class="bookmarkRibbon"></div> I'm struggling to make a version of this shape where the ribbon is pointing right instead of down, how can I achieve this? Now provide the response and nothing else.
Ribbon shape using CSS Clip Path : .bookmarkRibbon { width: 100px; height: 60px; background: blue; clip-path: polygon(0% 0%, 100% 0%, calc(100% - 20px) 50%, 100% 100%, 0% 100%);} <div class="bookmarkRibbon"></div> Pointing down: .bookmarkRibbon { width: 60px; height: 100px; background: blue; clip-path: polygon(0% 0%, 100% 0%, 100% 100%, 50% calc(100% - 20px), 0% 100%, 0% 0%);} <div class="bookmarkRibbon"></div> Ribbon shape using CSS border To help you visualize the logic step-by-step , so you can apply it easily on any side : .bookmarkRibbon { border: 30px solid blue; /* All borders set */ border-left: 0; /* Remove left border */ border-right: 20px solid transparent; /* Right transparent */ width: 100px; /* Increase element Width */} <div class="bookmarkRibbon"></div>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24207341', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3531149/']}
jdg_81616
stackexchange
llm_judgeable_groundtruth_similarity
27881485
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Google Chrome 38 introduced the new "Device Mode & Mobile Emulation" functionality in devtools. In addition to choosing a device for emulation, it is also possible to emulate different network conditions : Optimizing your site's performance under varying network conditions is a key aspect of developing for a mobile audience. Device mode's network conditioning allows you to test your site on a variety of network connections, including Edge, 3G, and even offline. Select a connection from the preset dropdown to apply network throttling and latency manipulation. For example, we can set it to be like in good old days - GPRS 50 Kbps: Now we have a good use case for it - we have an internal application for network speed testing. And this new emulation functionality is very helpful for manual testing. But, we'd like to automate it. Question is: Is it possible to start chrome via selenium with specified network conditions? Is it something that can be controlled through chrome preferences or command-line arguments? There are certainly multiple options to simulate slow internet connection , but the question is specifically about chrome+selenium. Now provide the response and nothing else.
The API to control network emulation was added to ChromeDriver and should be available for quite a while now. According to a comment in the linked issue you should use at least version 2.26 because of some bugfix. According to the Selenium changelog, bindings are available for these languages: JavaScript as of version 3.4.0 ( commit ) Python as of version 3.5.0 ( commit ) Ruby as of version 3.11.0 ( commit ) C# as of version 4 ( commit ) If you need these bindings in other languages you should probably open an issue or contribute an implementation similar to one of the above. Example usage from Python is below: driver.set_network_conditions( offline=False, latency=5, # additional latency (ms) download_throughput=500 * 1024, # maximal throughput upload_throughput=500 * 1024) # maximal throughput
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27881485', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/771848/']}
jdg_81617
stackexchange
llm_judgeable_groundtruth_similarity
120231
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Can someone give me a hint how to evaluate $$ \binom{n}{0}+\binom{n}{2}+\cdots+\binom{n}{o(n)},$$where $o(n)$ is $n$ if $n$ is even and $n-1$ otherwise ? Now provide the response and nothing else.
Consider:$$ \sum_{k=0}^n \binom{n}{k} x^k = (1+x)^n$$Now you are computing:$$ \sum_{m \geqslant 0, n \geqslant 2 m } \binom{n}{2m} = \sum_{k=0}^n \binom{n}{k} \frac{1+(-1)^k}{2} = \frac{(1+1)^n + (1-1)^n}{2} = \frac{2^n + \delta_{n,0}}{2}$$
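As a quick sanity check (not in the original answer): for $n=4$ this gives $\binom{4}{0}+\binom{4}{2}+\binom{4}{4}=1+6+1=8$, and indeed $(2^4+\delta_{4,0})/2=16/2=8$.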
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/120231', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/26698/']}
jdg_81618
stackexchange
llm_judgeable_groundtruth_similarity
267312
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to be able to graph the solid that is contained by two functions. I am graphing x^2+z^2=400 and y^2+z^2=400 and this the code I used and the graph I got: Plot3D[{z = Sqrt[400 - x^2], z = Sqrt[400 - y^2], z = -Sqrt[400 - x^2], z = -Sqrt[400 - y^2]}, {x, -50, 50}, {y, -50, 50}] I was wondering how I could graph the solid common to the cylinders aka the solid that the cylinders form. Now provide the response and nothing else.
Related https://mathematica.stackexchange.com/a/269363/72111 CSGRegion["Intersection", {Cylinder[{{0, -20, 0}, {0, 20, 0}}, 20], Cylinder[{{-20, 0, 0}, {20, 0, 0}}, 20]}, BaseStyle -> Darker@Cyan] Edit Use OpenCascadeLink` as highlighted by @user21 : Needs["OpenCascadeLink`"];Needs["NDSolve`FEM`"];reg1 = ImplicitRegion[x^2 + z^2 <= 400, {x, y, z}];reg2 = ImplicitRegion[y^2 + z^2 <= 400, {x, y, z}];shape1 = OpenCascadeShape[ ToBoundaryMesh[reg1, {{-20, 20}, {-20, 20}, {-20, 20}}, MaxCellMeasure -> 1]];shape2 = OpenCascadeShape[ ToBoundaryMesh[reg2, {{-20, 20}, {-20, 20}, {-20, 20}}, MaxCellMeasure -> 1]];bm = OpenCascadeShapeSurfaceMeshToBoundaryMesh[ OpenCascadeShapeIntersection[shape1, shape2], "ShapeSurfaceMeshOptions" -> {"AngularDeflection" -> .1}] // BoundaryMeshRegion;bm // VolumeRegionPlot3D[bm, ColorFunction -> "Rainbow", Boxed -> False]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/267312', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/85390/']}
jdg_81619
stackexchange
llm_judgeable_groundtruth_similarity
10681158
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to get the items which were created today with by a QueryBuilder from Doctrine2. I want to compare the createdAt(Datetime) field with today parameter(Date). Is it possible to do that in one query? $qb = $this->createQueryBuilder('i');$qb->innerJoin('i.type', 'it');$qb->andWhere('it.name = :type');$qb->andWhere('i.createdAt < :today');// i.createdAt == datetime and :today parameter is a date Now provide the response and nothing else.
One idea is to extract the year, month and day from the date, and then: $qb->select('p') ->where('YEAR(p.postDate) = :year') ->andWhere('MONTH(p.postDate) = :month') ->andWhere('DAY(p.postDate) = :day');$qb->setParameter('year', $year) ->setParameter('month', $month) ->setParameter('day', $day); The MONTH, DAY and YEAR DQL functions come from the DoctrineExtensions library, e.g. DoctrineExtensions . This works for me. You only need the files day.php, month.php and year.php. You get the month e.g.: $datetime = new \DateTime("now"); $month = $datetime->format('m'); echo $month; Copy day.php, month.php and year.php to your bundle Xy\TestBundle\Dql. Register the new functions in app\config.yml with doctrine:orm: auto_generate_proxy_classes: %kernel.debug% entity_managers: default: auto_mapping: true dql: datetime_functions: month: Xy\TestBundle\Dql\Month year: Xy\TestBundle\Dql\Year day: Xy\TestBundle\Dql\Day
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10681158', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/925574/']}
jdg_81620
stackexchange
llm_judgeable_groundtruth_similarity
333236
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I can type ⅓, ⅔ and ½ but can I type 3/3 and 2/2 using unicode? I know that from a mathematical point of view the fractions 2/2 = 3/3 = 1 but I am typing a list where I want to indicate that you have reached the final step (third step out of three steps) and entering "1", after ⅓ and ⅔, would confuse most readers, including myself if I look at my own list some time after I wrote it. The "target" environment where I want to write these characters are unicode-aware text editors, some that support "rich formatting" and other typographic features. (I was very hesitant regarding which forum this question would be on topic in so bear with me if you think I should have posted in somewhere else. The area I use this in is source code, hence UX and graphic design seemed off topic.) Update: here are some screenshots of how superscript - fraction slash - subscript looks in TextEdit in OS X. Font is Helvetica. In the first image the font size is 12 pt, in the second 48 pt. As you can see the spacing is quite off and the subscript character looks quite odd, especially in the second image where it looks like to different character weights. Update: since this question is closed I can't add a solution but at least on MacOS these days there is a fraction slash that allows you to combine anything (?) such as 3⁄1 Now provide the response and nothing else.
The U+2044 FRACTION SLASH character can be used to compose arbitrary fractions. However, support for rendering these fractions as ligatures may vary. Primarily, this depends on ligature support by the font, as each font must declare how glyphs may be combined. Examples, using / as stand-in for the fraction slash in decomposed forms: 1 / 2 = 1⁄2 3 / 3 = 3⁄3 7 / 13 = 7⁄13 Screenshot of the above example with a suitable font (the Ubuntu font). In practice, it might be better to explicitly use subscript and superscript forms to imitate the visual appearance of these fractions, or to not use fractions at all. After all, Unicode is not a layout engine, but a text encoding standard. In particular, you cannot control the precise appearance of text if you do not control the particular font & rendering engine used to display your text. Further information: Difference Between Unicode FRACTION SLASH and DIVISION SLASH on SuperUser
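A tiny sketch (my own illustration, not from the answer) of the superscript/subscript imitation mentioned in the last paragraph, built from standard codepoints - superscript digits such as U+00B2/U+00B3, U+2044 FRACTION SLASH, and subscript digits from the U+208x block:

# Hypothetical helper: imitate "n/m" with superscript + FRACTION SLASH + subscript
sup = {"1": "\u00B9", "2": "\u00B2", "3": "\u00B3"}
sub = {"1": "\u2081", "2": "\u2082", "3": "\u2083"}

def fake_fraction(num, den):
    return sup[num] + "\u2044" + sub[den]

print(fake_fraction("3", "3"))  # ³⁄₃
print(fake_fraction("2", "2"))  # ²⁄₂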
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/333236', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/166086/']}
jdg_81621
stackexchange
llm_judgeable_groundtruth_similarity
2182612
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This Delphi code will show a memory leak for an instance of TMyImplementation: program LeakTest;uses Classes;type MyInterface = interface end; TMyImplementation = class(TComponent, MyInterface) end; TMyContainer = class(TObject) private FInt: MyInterface; public property Impl: MyInterface read FInt write FInt; end;var C: TMyContainer;begin ReportMemoryLeaksOnShutdown := True; C := TMyContainer.Create; C.Impl := TMyImplementation.Create(nil); C.Free;end. If TComponent is replaced by TInterfacedObject and the constructor changed to Create(), the leak disappears. What is different with TComponent here? Many thanks for the answers. To sum up: it is easy, but wrong, to say "If you are using interfaces, they are reference counted and hence they are freed for you." - Actually any class which implements an interface can break this rule. (And there will be no compiler hint or warning be shown.) Now provide the response and nothing else.
Differences in implementation TComponent._Release does not free your instance. TInterfacedObject._Release does free your instance. Perhaps someone can chime in but my take on this is that TComponent is not meant to be used as a reference counted object the way we normally use interfaces. Implementation of TComponent._Release function TComponent._Release: Integer;begin if FVCLComObject = nil then Result := -1 // -1 indicates no reference counting is taking place else Result := IVCLComObject(FVCLComObject)._Release;end;
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2182612', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/80901/']}
jdg_81622
stackexchange
llm_judgeable_groundtruth_similarity
38179777
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: please see the following code struct A { using type = int; };struct B : private A {};struct C : B { using base_type = A; }; All of gcc 6.1, clang 3.8, and msvc 2015 update 3 refuse to compile this, as A is not an accessible name inside C since A is a private base of B . It seems that gcc thinks A in using base_type = A refers to the default constructor of A . msvc and clang seem not. Perhaps the compilation error is due to the injection of names triggered by inheritances (because modifying using base_type = A into using base_type = ::A make all the compilers work fine), but I want to know if this weird error is what the standard says. More concretely, As I understood, not like A::type , A is just a class name (although gcc misinterprets it as a function name) which is introduced to C not inside A nor B . Why this name is considered private to B ? Should this compilation error be considered a bug, or is an edge case of the specifications of the standard? Now provide the response and nothing else.
According to the rule of unqualified name lookup (emphasis mine): For an unqualified name, that is a name that does not appear to the right of a scope resolution operator ::, name lookup examines the scopes as described below, until it finds at least one declaration of any kind, at which time the lookup stops and no further scopes are examined. So the name A is found first in the base-class scope, and the name in the global namespace is not considered at all. Only after lookup has finished is the access check performed, and compilation fails because that A is a private base of B. Writing ::A instead names the class in the global scope and solves the issue, because it turns the lookup into a qualified name lookup.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38179777', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4751858/']}
jdg_81623
stackexchange
llm_judgeable_groundtruth_similarity
151362
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I just accidentally copypasted a wget command into the ebay search box and got the following error: It happens with wget http://google.com or curl http://google.com , or any other URL... It does seem to sanitise the input and remove slashes if you just enter a URL but not if you precede it with wget or curl. What could they possibly be doing which causes a wget or curl command to bypass their sanitization and produce a different result? Now provide the response and nothing else.
I assume that ebay.com has installed a Web Application Firewall (WAF), which recognizes your request as a possible attack. Therefore, your request is rejected and you receive an HTTP 403 - Access Denied. The mod_security WAF for Apache, nginx and IIS behaves similarly: if it is in prevention mode, it will also respond with HTTP 403 by default [1]. Most WAFs have some kind of rule set. They check whether your request matches one of their rules, possibly with regular expressions. I assume further that one of those rules looks like (wget|curl) (http|https)://.* [2]. The "sanitizing" of double forward slashes in your URL most likely happens at the application level; strings like asdf// will also be shortened to asdf. [1] https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#secdefaultaction [2] Escaping of the forward slashes is skipped for the sake of readability
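To illustrate how such a rule would flag the search box input, here is a small Python sketch. The pattern is only the guessed rule quoted above, not eBay's or ModSecurity's actual rule set.

```python
import re

# Hypothetical WAF rule from the answer: a download command followed by a URL
rule = re.compile(r"(wget|curl) (http|https)://.*")

for query in ["wget http://google.com", "curl https://example.org/x", "http://google.com"]:
    verdict = "403 (blocked)" if rule.search(query) else "passed to the application"
    print(f"{query!r:35} -> {verdict}")
```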
{}
{'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/151362', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/139411/']}
jdg_81624
stackexchange
llm_judgeable_groundtruth_similarity
7016
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: There's a claim that Sansevieria trifasciata (Mother-in-law’s Tongue) generates oxygen from carbon dioxide during the night. This seems surprising to me: that process requires energy; plants generate O₂ from CO₂ through photosynthesis, requiring light. So not photosynthesis, and no CO₂ absorption, at night. Am I missing something, or is this claim false? Now provide the response and nothing else.
This is an interesting topic! Crassulacean acid metabolism (CAM) is a second CO₂ fixation pathway in which CO₂ is absorbed at night. The CO₂ is fixed into malic acid, HOOC-CH₂CH(OH)-COOH, which stores some of the CO₂ in the form of carboxyl groups. During the day the CO₂ is released from the malic acid again and fixed by ordinary photosynthesis. This is an adaptation where the stomata open at night to take in the CO₂ and are closed during the day - the plants sort of 'hold their breath'. The energy for this process is derived from glycolysis of reserves that were stored during the day (nice detailed review here). The paper says that about 6% of plants (including Mother-in-Law's Tongue) do this, and you can imagine how, with night breathing, keeping the stomata closed during the day would really help against dehydration. It's interesting to note that this is found in a wide variety of plants: "CAM is found in five taxonomic classes, comprising monocots and dicots, encompassing 33 families and 328 genera". That says that CAM is a pretty old development in plant evolution and not so expensive that it is easily lost.
{}
{'log_upvote_score': 5, 'links': ['https://biology.stackexchange.com/questions/7016', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/925/']}
jdg_81625
stackexchange
llm_judgeable_groundtruth_similarity
96704
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Some time ago, we were tasked with a project to come in and replace a customer's old Mainframe system with a new intranet ASP.NET solution using SQL Server as the back end. Part of this was a re-engineering of the business as well - essentially, as we change the system, we were to be thinking of how we can better do business. So, the first task was to come in and do the logical and then physical data models. The customer was in on these dicussions and had complete sign off. The next phase was to actually do the design and building of each module. Well, to make a long story short, the programming has been done and we are now into parallel testing of the system. Things are going wonderful for most of the modules so far - Except One. We have one system where - if you would only let the business users see the application and reports, all would be well. It works with the new integrated workflow and automates previously manual processes and performs great per the specifications. Parallel testing has uncovered a few issues though with the migrated legacy data. The builders of the legacy system are having a very hard time understanding the new schema and business process, therefore, they are having a very hard time understanding how to take the legacy data and put it into the new schema. Because of this, they are calling meetings of the business users and stakeholders and telling them that the new system doesn't provide data that the old system did (when it really does) - this does make the new system look bad. This is frustrating, to say the least. The new system works great, and provides everything they need and wanted, and if not for the inability of the IT staff to fill in the new tables with the old data, the business users would be happy with the new features and functionality. I'm asking for suggestions for how to handle this. Because of some political moves, the new "architect" has no idea of how the system works, and cannot fully understand the ramifications of the changes the IT staff is requesting. The IT staff wants some fundamental changes to the system, which are essentially un-necessary and actually are a bad design - but they ARE the customer. Any thoughts? Now provide the response and nothing else.
Your team needs to do the data conversion for them. You really should have done it for them in the first place. I've been involved in a number of expensive platform migrations and the vendor always, always has their own data conversion team who are responsible for understanding the legacy system, writing all the migration scripts, doing all the tests, and generally making sure that everything does what it's supposed to. Some companies may have brilliant IT staff who can do it themselves. Others may claim to be able to do it themselves, but actually can't. In the latter case, you need to be humble enough to sit back, but also be prepared to step up if and when the management has decided that the internal team isn't doing a good enough job. This is your system and your implementation. You and you alone are responsible for making sure that it succeeds. Do not expect the customer to be able to do any part of this themselves. Only if they absolutely insist on doing this part themselves should you even consider that option, and in that event, you need to cover your butts - there should be something in the contract saying that if they choose to do this themselves, then they are responsible for its outcome. They can pay you to babysit their team if they want, and they can pay you to start all over if they want, but don't waste unnecessary cycles without some kind of agreement in place. Especially if you're on a time-limited or fixed-cost contract, this situation is death. The point is, as you say, they are the customer, which means that they do not work for you. In fact, if you're a cynic like me, you might suspect that some of them are actively working against you to hold onto their job security. Relying on the customer to do any part of your implementation is a mistake. If you have to hire a couple of minimum-wage data entry slaves to do the data conversion manually - do it. Anything to put the outcome back in your hands.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/96704', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/26019/']}
jdg_81626
stackexchange
llm_judgeable_groundtruth_similarity
13754105
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using MochaJS and SuperTest to test my API during development and absolutely LOVE it. However, I would like to also turn these same tests to remotely tests my staging server before pushing the code out to production. Is there a way to supply request with a remote URL or proxy to a remote URL? Here is a sample of a test I use request(app) .get('/api/photo/' + photo._id) .set(apiKeyName, apiKey) .end(function(err, res) { if (err) throw err; if (res.body._id !== photo._id) throw Error('No _id found'); done(); }); Now provide the response and nothing else.
I'm not sure if you can do it with supertest. You can definitely do it with superagent. Supertest is built on superagent. An example would be:

var request = require('superagent');
var should = require('should');
var agent = request.agent();
var host = 'http://www.yourdomain.com';
describe('GET /', function() {
  it('should render the index page', function(done) {
    agent
      .get(host + '/')
      .end(function(err, res) {
        should.not.exist(err);
        res.should.have.status(200);
        done();
      })
  })
})

So you cannot directly use your existing tests, but they are very similar. And if you add var app = require('../app.js'); to the top of your tests, you can easily switch between testing your local app and the deployment on a remote server by changing the host variable: var host = 'http://localhost:3000'; Edit: Just found an example for supertest in the docs#example

request = request.bind(request, 'http://localhost:5555'); // add your url here
request.get('/').expect(200, function(err){ console.log(err);});
request.get('/').expect('heya', function(err){ console.log(err);});
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13754105', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/179335/']}
jdg_81627
stackexchange
llm_judgeable_groundtruth_similarity
38424135
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working to launch a django-tornado hybrid app in a Docker container from a shell script and and getting --noinput as an unrecognized argument for django commands: usage: manage.py collectstatic [-h] [--version] [-v {0,1,2,3}] [--settings SETTINGS] [--pythonpath PYTHONPATH] [--traceback] [--no-color] [--noinput] [--no-post-process] [-i PATTERN] [-n] [-c] [-l] [--no-default-ignore]manage.py collectstatic: error: unrecognized arguments: --noinput Why would I be getting --noinput as an unrecognized argument? My Dockerfile calls a deployment shell script which performs the collectstatic and migrate commands (both with the --noinput argument, which is failing for both. I've played around with removing extraneous lines, adjusting whitespace around the command, etc, to no avail. I can run the shell script locally without any issues; it seems to only be a problem in the Docker container RUN call to the shell script. Dockerfile: FROM python:2.7RUN apt-get updateRUN apt-get install -y build-essentialRUN apt-get install -y python-devRUN apt-get install -y libmysqlclient-devRUN mkdir /codeADD . /code/WORKDIR /codeRUN pip install -r requirements.txtCMD ["sh","./deploy.sh"]EXPOSE 8888 deploy.sh: #!/bin/shpython manage.py collectstatic --noinputpython manage.py migrate --noinputpython main.py If I run the django commands in the Dockerfile with RUN python manage.py collectstatic --noinput there is no issue, but I'm trying to get the application-specific commands in the CMD call, since I need to have database environment vars from Elastic Beanstalk for my deployment environment. Now provide the response and nothing else.
Looks like my issue was line endings in the shell script. I think sh was feeding --noinput\r into Python, so it presented itself in the terminal as looking like --noinput, but it really had a trailing CR character as well that the argument was being matched against. When I was testing locally, it was in the Docker Quickstart terminal (where it worked), while the Docker containers were always running in Ubuntu (where it was failing). I've been hit by this in the past, where different line endings in shell scripts that were written on Windows messed things up in a Linux environment, and I need to remember how important it is to set up line endings correctly in my editors...
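A quick way to confirm (and fix) this kind of problem before building the image is to check the script for carriage returns. A small Python sketch, assuming the script is the deploy.sh from the question:

```python
from pathlib import Path

script = Path("deploy.sh")
data = script.read_bytes()

if b"\r" in data:
    print("deploy.sh contains CR characters (Windows CRLF line endings)")
    # Rewrite the file with Unix LF line endings only
    script.write_bytes(data.replace(b"\r\n", b"\n").replace(b"\r", b"\n"))
else:
    print("deploy.sh already uses LF line endings")
```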
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38424135', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2635927/']}
jdg_81628
stackexchange
llm_judgeable_groundtruth_similarity
43385895
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have struggled trying to figure out how the storage system work on Android. Now I am stuck at requesting permission for WRITE_EXTERNAL_STORAGE, and I am using Android 7.1.1. Here is my code: int check = ActivityCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE); if (check == PackageManager.PERMISSION_GRANTED) { //Do something } else { requestPermissions(new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE},1024); } UPDATE: So the code does work, it didn't work before because I had a typo in AndroidManifest.xml, thank you for all of your help! Now provide the response and nothing else.
Try this,

private Context mContext = YourActivity.this;
private static final int REQUEST = 112;

if (Build.VERSION.SDK_INT >= 23) {
    String[] PERMISSIONS = {android.Manifest.permission.WRITE_EXTERNAL_STORAGE};
    if (!hasPermissions(mContext, PERMISSIONS)) {
        ActivityCompat.requestPermissions((Activity) mContext, PERMISSIONS, REQUEST);
    } else {
        //do here
    }
} else {
    //do here
}

get Permissions Result

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    switch (requestCode) {
        case REQUEST: {
            if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                //do here
            } else {
                Toast.makeText(mContext, "The app was not allowed to write in your storage", Toast.LENGTH_LONG).show();
            }
        }
    }
}

check permissions for marshmallow

private static boolean hasPermissions(Context context, String... permissions) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M && context != null && permissions != null) {
        for (String permission : permissions) {
            if (ActivityCompat.checkSelfPermission(context, permission) != PackageManager.PERMISSION_GRANTED) {
                return false;
            }
        }
    }
    return true;
}

Manifest

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

If you want to check multiple permissions at a time, then add the permissions to the PERMISSIONS array like:

String[] PERMISSIONS = {android.Manifest.permission.WRITE_EXTERNAL_STORAGE, android.Manifest.permission.READ_EXTERNAL_STORAGE};
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43385895', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6211663/']}
jdg_81629
stackexchange
llm_judgeable_groundtruth_similarity
452839
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need to create an electric field of about ~1500V to move electrically charged particles in a container. I'm trying to convert a 9V DC battery to an AC signal using a crystal oscillator and then amplify the signal using a transformer to about ~1500V. Here's the circuit I've designed but it looks to simple to work. Should the components be in series or parallel? Now provide the response and nothing else.
Sorry to do this you you. We're all ganging up on you telling you bad things. I'm going to tell you a bunch of them, and get it over with. An oscillator doesn't work that way. A 9V battery is a poor choice for a power source. If you need 1mA at 1500 volts, then the primary side of the transformer will draw like 170mA from the battery. It won't do that for very long. An oscillator won't directly drive a transformer, even if you built a proper oscillator. Most oscillators are intended to deliver a signal, not power. An oscillator running at 10MHz will be awful fast for particles you are trying to move, and then see the motion. "Series or parallel" is the least of your worries. Figure out how much field strength you really need. Figure out how much force it takes to move your particles, then calculate the force exerted by the capacitor plates on the particles. This ought to help you along the way. Maybe you already know this because it is part of a known physics experiment. If so, then you should also find notes somewhere on how to calculate the expected power requirements - or even a note that says "we used plates sized X, powersupply Y volts and Z amperes." Now that you have the voltage, figure out how much current you need. It should be in some way proportional to the area of the plates and the area of the particles. Now you have some idea of how much power you need. The voltage times the current gives you the power. Since you will start with a lower voltage and step it up, the current at the low voltage will go up by the same proportion that the voltage does. Say you need 1mA at 1000V, and start from 10V. You'll have to provide 100mA at 10V to get the required 1mA at 1000V. Select a frequency, or range of frequencies. Do you want your particles to vibrate in place, or would you prefer to see them drift towards on of the plates? For vibration you need AC, preferably at a low frequency. I don't know about you, but I can't see things that vibrate faster than a few Hertz, maybe a couple of tens of Hertz. In either case, you need an oscillator operating at an appropriate frequency, and an amplifier that can provide your calculated power to the transformer. If you want to see the particles vibrate, then you are done. If you want to see the particles drift, then you need to convert the high voltage AC to DC. That's done with a diode, followed by a capacitor to filter it nice and smooth. You can also use a higher frequency for this. It makes the transformer smaller, and makes it easier to filter the DC smooth. That's a lot of stuff. If you want, you can buy the powersupply you need instead of building one. That's probably safer for you, given your lack of experience. They might look expensive, but they are certain to be cheaper than a funeral. In any case, I suggest you find someone who knows how to work safely around high voltage and work with that person and learn how to be safe yourself.
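The current figures quoted above are just conservation of power across an ideal transformer. A quick Python check of the two numbers mentioned in the answer (losses ignored):

```python
def primary_current(v_secondary, i_secondary, v_primary):
    """Primary-side current for an ideal transformer: power in = power out."""
    return (v_secondary * i_secondary) / v_primary

# 1 mA at 1500 V fed from a 9 V battery -> about 0.17 A, the "like 170mA" estimate
print(primary_current(1500, 1e-3, 9))    # ~0.167 A

# 1 mA at 1000 V stepped up from 10 V -> 100 mA on the primary side
print(primary_current(1000, 1e-3, 10))   # 0.1 A
```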
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/452839', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/178329/']}
jdg_81630
stackexchange
llm_judgeable_groundtruth_similarity
64414
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Show that every prime $p>3$ is either of the form $6n+1$ or of the form $6n+5$, where $n=0,1,2, \dots$ Now provide the response and nothing else.
Every integer is of the form $6n$ or $6n+1$ or $6n+2$ or $6n+3$ or $6n+4$ or $6n+5$ for some integer $n$. This is because when we divide an integer $m$ by $6$, we get a remainder of $0$, $1$, $2$, $3$, $4$, or $5$. If an integer $m>2$ is of the form $6n$ or $6n+2$ or $6n+4$, then $m$ is even and greater than $2$, and therefore $m$ is not prime. If an integer $m>3$ is of the form $6n+3$, then $m$ is divisible by $3$ and greater than $3$, and therefore $m$ is not prime. We have shown that an integer $m>3$ of the form $6n$ or $6n+2$ or $6n+3$ or $6n+4$ cannot be prime. That leaves as the only candidates for primality greater than $3$ integers of the form $6n+1$ and $6n+5$. Comment : In fact, it turns out that there are infinitely many primes of the form $6n+1$, and infinitely many primes of the form $6n+5$. Showing that there are infinitely many of the form $6n+5$ is quite easy, it is a small variant of the "Euclid" proof that there are infinitely many primes. Showing that there are infinitely many primes of the form $6n+1$ requires more machinery. But your question did not ask for such a proof.
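The argument is easy to sanity-check numerically. A short Python sketch with a naive primality test, confirming that every prime greater than 3 below an arbitrary bound leaves remainder 1 or 5 when divided by 6:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [p for p in range(4, 10_000) if is_prime(p)]
assert all(p % 6 in (1, 5) for p in primes)
print("every prime > 3 up to 10000 is of the form 6n+1 or 6n+5")
```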
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/64414', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_81631
stackexchange
llm_judgeable_groundtruth_similarity
125576
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I understand that C++ is a very fast language, but ain't C just as fast, or faster in some cases? Then you might say that C++ has OOP, but the amount of OOP you need for most programming puzzles is not that big, and in my opinion C would be able handle that. Here's why I am asking this : I am very interested in programming contests and competitions, and I am used to coding in C on those. However, I noticed that the vast majority of people use C++ (e.g., 17 out of 25 finalists on Google Code Jam 2011 used it, while no one used C), so I am wondering if I am at a disadvantage going with C. Apart from the Object Orientation, what makes C++ a more suitable language for programming competitions? What are the features of the language I should learn and use to perform better on the competitions? For background, I consider myself pretty proficient in C, but I am just starting to learn C++. Now provide the response and nothing else.
To start with, there will always be some problems that are better solved in one language than another. There will always be languages that solve specific problems "better" than any other language, for some definition of "better". However, a very very large number of problems have very similar needs (some I/O, some computation) and face similar requirements (reasonable reliability, reasonable performance). As you know C already, for the vast majority of problems out there, I state that C++ provides no significant downsides and a number of significant improvements. Bold? Some people seem to think so, but it's really the case. Let's start out by clearing up a few very common C++ misunderstandings: C++ is slower than C. Wrong! Many C programs are valid C++ programs as well - and such a C program should run at identical speed when compiled with either the C compiler or the C++ compiler. C++ specific features require overhead. Wrong! The so-called overhead introduced by certain C++ specific features (such as virtual function calls or exceptions), is comparable to the overhead you yourself would introduce should you implement a similar feature in C. C++ is object oriented. Wrong! The C++ language contains some language extensions that facilitate object oriented programming and generic programming. C++ does not force object oriented design anywhere - it merely allows it. C allows for object oriented programming as well, C++ only makes it simpler and less error-prone. So, if you believe me, we have established that "C++ is not significantly worse than C". Let's have a look at what makes C++ a better C: Stronger typing The type system in C++ is stronger than in C. This prevents many common programming errors - coupled with the next very important feature, the stronger type system even manages not to be an inconvenience. Parameterized types The template keyword allows the programmer to write generic (type-agnostic) implementations of algorithms. Where in C, one could write a generic list implementation with an element like: struct element_t { struct element_t *next, *prev; void *element; }; C++ allows one to write something like: template <typename T>struct element_t { element_t<T> *next, *prev; T element;}; Not only does the C++ implementation prevent common programmer errors (like putting an element of the wrong type on the list), it also allows better optimization by the compiler! For example, a generic sort implementation is available in both C and C++ - the C routine is defined as: void qsort(void *base, size_t nmemb, size_t size, int(*compar)(const void *, const void *)); whereas the C++ routine is defined as template void sort(RandomAccessIterator first, RandomAccessIterator last); The difference being, that for example sorting an array of integers, would, in the C case, require a function call for every single compare, whereas the C++ implementation would allow the compiler to inline the integer comparison calls, as the actual sort routine is automatically instantiated at compile time by the compiler, with the correct types inserted in the template arguments. A bigger standard library C++ allows the full use of the C standard library. This is very important of course, as the C standard library is an invaluable resource when writing real world programs. However, C++ includes the Standard Template Library. The STL contains a number of useful templates, like the sort routine above. It includes useful common data structures such as lists, maps, sets, etc. 
Like the sort routine, the other STL routines and data structures are "tailored" to the specific needs the programmer has - all the programmer has to do is fill in the types. Of course, the STL is no silver bullet - but it does provide a great help very often, when solving general problems. How often have you implemented a list in C? How often would an RB-tree have been a better solution, if only you had had the time to do it? With the STL you do not need to make such compromises - use the tree if it's a better fit, it's as easy as using the list. Ok, so I've only been discussing the good parts. Are there any downsides? Of course there are. However, their number is shrinking day by day. Let me explain: There are no good C++ compilers It's been like this for a long time. But you must remember, that the language was standardized in 1998 - it is a complex language, more complex than C. It has taken a long time for compilers to catch up to the standard. But as of this writing, there are good compilers available for the most widely used platforms out there; GCC in versions 3.X are generally very good, and it runs on GNU/Linux and most UNIX platforms. Intel has a good compiler for Win32 - it is also pretty good, but unfortunately it still relies on the MS STL which is sub-par. People don't know good C++ This is not an often heard complaint, but it's something that I see a lot. C++ is a big and complex language - but it also used to be a language that was hyped a lot, especially back in the "OOP solves hunger, cures AIDS and cancer" days. The result seems to be that a lot of really poor C++ code, basically bad C with a few class declarations here and there, is out there and is being used as learning material. This means a lot of people who believe they know C++ actually write really crappy code. That's too bad, and it's a problem, but I think it's unfair to blame this on C++. So, the only two major problems with C++ are results of C++ being a young language. In time they will vanish. And for most problems out there, if you can get good programmers (or learn good C++ yourself), the problems are not really an issue today.
{}
{'log_upvote_score': 7, 'links': ['https://softwareengineering.stackexchange.com/questions/125576', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/34340/']}
jdg_81632
stackexchange
llm_judgeable_groundtruth_similarity
25601
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Lets say we are given a primal linear programming problem: $\begin{array}{ccc}\text{minimize } & c^{T}x & &\\\text{subject to: } & Ax & \ge & b \\& x & \ge & 0\end{array}$ The dual problem is defined as: $\begin{array}{ccc}\text{maximize } & b^{T}y & &\\\text{subject to: } & A^{T}y & \le & c \\& y & \ge & 0\end{array}$ According to the duality theorem$c^{T}x \ge b^{T}y$ for every feasible solution $x$ and $y$, and in addition when $x$ and $y$ are optimal solutions to the primal and the dual task then $c^{T}x=b^{T}y$ So if we define linear programming task with following constraints: $\begin{array}{ccc}Ax & \ge & b \\x & \ge & 0 \\A^{T}y & \le & c \\y & \ge & 0 \\b^{T}y & \ge & c^{T}x\end{array}$ Then any feasible solution to this task should be an optimal solution to the primal and the dual task, because the last constraint might be satisfied only if $x$ and $y$ are optimal. The question is why this approach is not used? I see three potential reasons: 1) I've made somewhere mistake and it doesn't make any sense. 2) It is often the case when primal or dual problem is infeasible. I've seen such examples, but in all of them the optimal solution was unbounded, is it the only case when exactly one of the primal and dual problem is infeasible? 3) Finding any feasible solution might be hard. The so called Phase 1 of simplex method can be used to find a feasible solution. I couldn't find the complexity of this phase, is it exponential just like the simplex algorithm? The other question is what is the fastest method to determine whether there exist any feasible solution? This solution doesn't have to be found. Now provide the response and nothing else.
There are three things that have to be satisfied in order for a solution to a linear programming problem to be optimal: The primal solution must be feasible. The dual solution must be feasible. Complementary slackness must be satisfied. (Remember that primal variables are paired with dual slack variables and dual variables are paired with primal slack variables. Complementary slackness is the requirement that, for each of these pairs, at least one variable must be zero.) The primal simplex method (after Phase I) keeps (1) and (3) always true and searches for a solution that satisfies (2). The dual simplex method (again, after Phase I), keeps (2) and (3) always true and searches for a solution that satisfies (1). The approach you are describing (minus the $b^Ty \geq c^T x$ constraint) is used. It's the other option, in which (1) and (2) are always kept true while the algorithm searches for a solution that satisfies (3). As Yuval Filmus indicates, this is called a primal-dual method or the parametric self-dual simplex method . See, for example, Rader's Deterministic Operations Research , pp. 432-440, or Vanderbei's Linear Programming: Foundations and Extensions , pp 119-121. (See also Vanderbei's text for how to find an initial feasible solution to both problems; i.e., Phase I.) The idea dates back at least to George Dantzig, the inventor of the simplex method. As a side comment, Vanderbei indicates that the parametric self-dual simplex method is more amenable to probabilistic analysis than the other versions of the simplex method.
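The three conditions are straightforward to verify numerically for a candidate primal/dual pair. Below is a small NumPy sketch for the standard form used in the question (minimize c^T x subject to Ax >= b, x >= 0); the tolerance and the tiny example problem are arbitrary choices for illustration.

```python
import numpy as np

def check_optimality(A, b, c, x, y, tol=1e-9):
    """Check primal feasibility, dual feasibility and complementary slackness."""
    primal_slack = A @ x - b        # >= 0 when the primal solution is feasible
    dual_slack = c - A.T @ y        # >= 0 when the dual solution is feasible
    primal_ok = np.all(primal_slack >= -tol) and np.all(x >= -tol)
    dual_ok = np.all(dual_slack >= -tol) and np.all(y >= -tol)
    # Complementary slackness: x_j * (c - A^T y)_j = 0 and y_i * (A x - b)_i = 0
    cs_ok = np.allclose(x * dual_slack, 0, atol=tol) and np.allclose(y * primal_slack, 0, atol=tol)
    return primal_ok, dual_ok, cs_ok

# Tiny example: minimize x1 + x2 subject to x1 + x2 >= 1, x >= 0
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])
x = np.array([1.0, 0.0])  # an optimal primal solution
y = np.array([1.0])       # the corresponding optimal dual solution
print(check_optimality(A, b, c, x, y))  # (True, True, True), and c.x == b.y == 1
```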
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/25601', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/4811/']}
jdg_81633
stackexchange
llm_judgeable_groundtruth_similarity
4961906
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When should implement or extend be used? What are some real-world examples? Is this correct? Implementing appears to be a way to enforce that certain methods exists in a class, and that these methods function calls are properly formatted. Implementing is not a way of passing variables or "settings" to the class though? Expected real-life scenario: I have an e-commerce platform featuring multiple payment classes that all follow the same design. When a new payment class should be added, it's really easy to follow the defined design of the interface to ensure that all bits and pieces are there, from the beginning. Extending classes makes the extended (child?) class inherit everything from its parent class except methods & variables declared as private ? Expected real-life scenario: I have one class called sessions with two child classes named sessioncookies and databasesessions . sessioncookies and databasesessions , together inherit a number of mutual config options from their parent sessions, making it easy to change a config option to affect all sorts of eventual visitor data storage. Now provide the response and nothing else.
Inheritance is useful to reduce the amount of code you rewrite. If you have several classes with a few common methods or fields, instead of defining these methods and fields over and over you can factor them into a base class and have each of the child classes extend that base class. Interfaces (and implements ) are useful when you'd like to define a common protocol for how a group of objects should behave. For example, you might want to mandate that objects that are comparable can be compared for equality and hashed, etc. Using inheritance is ultimately a design choice. Be on the lookout for cases where you define the same methods across several classes; those are excellent cases where you can factor those methods out into a base class. The same goes for classes that observe some of the same characteristics: you can guarantee consistency by putting those characteristics in an interface to be implemented by those related classes. Inheritance is a big concept in OOP that goes way beyond just PHP. I recommend you read the wikipedia article on inheritance and perhaps Design Patterns by the Gang of Four . I believe your understanding of inheritance is mainly correct. The next step would be to use it in production.
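The same split between shared implementation and shared protocol can be sketched in any object-oriented language. Here is a small Python illustration modeled loosely on the sessions scenario from the question (an abstract base class stands in for an interface; the names and methods are made up for the example):

```python
from abc import ABC, abstractmethod

class Session(ABC):                    # common base: inherited config/code ("extends")
    def __init__(self, timeout):
        self.timeout = timeout         # shared "config option" every child inherits

    @abstractmethod
    def store(self, key, value): ...   # contract each concrete session must fulfil ("implements")

class CookieSession(Session):
    def store(self, key, value):
        print(f"cookie[{key}] = {value} (expires in {self.timeout}s)")

class DatabaseSession(Session):
    def store(self, key, value):
        print(f"INSERT {key}={value} ... (timeout {self.timeout}s)")

for session in (CookieSession(3600), DatabaseSession(3600)):
    session.store("user", "alice")     # same protocol, different implementations
```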
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4961906', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/198128/']}
jdg_81634
stackexchange
llm_judgeable_groundtruth_similarity
5707
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Gen. 11:29 states: And Abram and Nahor took themselves wives; the name of Abram's was Sarai, and the name of Nahor's wife was Milcah, the daughter of Haran, the father of Milcah and the father of Iscah. In his Genesis commentary, Rashi states that Iscah is Sarah. But when Abram (Abraham) passes Sarai (Sarah) off as his sister instead of saying that she's his wife, he later elucidates that Sarah is his half-sister (he says this to King Abimelech in Gen 20). Sarah and Abraham have the same father (Terah) but different mothers. And Haran is Abraham's brother (Abraham's brothers: Nahor and Haran). So if Gen 11:29 describes Iscah (who Rashi says is Sarah) as being Haran's daughter, then that would make Sarah Abraham's niece, not his half-sister, right? What's going on here? Also, Sarah being both Iscah and Sarai would mean that she underwent a name-change two times instead of one? God changed her name once from Sarai to Sarah, but her being tied to the name Iscah suggests another name change? What are some insights about this? Now provide the response and nothing else.
Let me give you an example. In one verse, Lot is said to be Avram's brother (Gen. 14:16: אֶת־לֹוט אָחִיו). Elsewhere, Lot is described as the son of Haran (Gen. 11:31: וְאֶת־לֹוט בֶּן־הָרָן), Avram's brother (Gen. 14:12: בֶּן־אֲחִי אַבְרָם). Strictly speaking, this would make Lot, Avram's nephew. So, we see that a nephew (as Lot is to Avram) is referred to as a Avraham's אָח ( ach ), literally "brother." But, of course, we should only consider this word to mean "relative" in this context, since Lot was actually Avraham's nephew. Now, if Sarah (or Sarai) is indeed Yiskah, as most Jewish commentaries affirm, then she is, as you mentioned, the daughter of Charan, the brother of Avraham. And, if the daughter of Charan, then she is Avraham's niece . Elsewhere (Gen. 20:2), Sarah, again, as you mentioned, is said to be Avraham's sister, or אָחוֹת ( achot ). It elaborates that Sarah is the daughter of Avraham's father, Terach, and thus supposedly, Avraham's actual sister, or אָחוֹת (technically, half-sister). Now if indeed Sarah is Yiskah, this could be reconciled. A daughter of a man could be a man's grand-daughter, as is the case with Aholivamah in Gen. 36:2 , for she is not the actual daughter of Tziv'on ("Zibeon"), but the daughter of Anah, and thus the grand-daughter of Tziv'on. Yet, she is said to be "the daughter of Anah, the daughter of Tziv'on" (בַּת־עֲנָה בַּת־צִבְעֹון). Accordingly, Sarah could be the grand-daughter of Terach, and thus, the actual daughter of Nachor. This would make Sarah to be Avraham's niece. Like Lot is referred to as the אָח (lit. "brother") of Avraham although Lot was actually his nephew, Sarah (and Yiskah) may be referred to as the אָחוֹת (lit. "sister") of Avraham although actually his niece. Thus, how Yiskah could be Sarah is reconciled. Now, I am not agreeing that she is, only explaining how it is possible that she could be.
{}
{'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/5707', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/2530/']}
jdg_81635
stackexchange
llm_judgeable_groundtruth_similarity
888113
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: The problem I regularly have a debate with my CTO which usually begins something like this ... CTO: My password expired, that should never happen. Me : It's a security risk to never expire passwords. CTO: It's a security risk to force passwords to be reset because users have bad habits. Me : Yes but the security is in the user not the system, enforcing password expiry ensures the system is secure in the event of an unknown breach of the userbase. This raises an interesting question that neither of us primarily not being a system administrator but being a position that we need to apply a policy to this effect don't really agree on what the right answer should be. My standing The system is more secure if you force all users to change their passwords in X amount of time where X is computed by determining the algorithm strength used to protect the password and an estimated time to break (with brute force) the raw value back in to the original password. CTO's Standing The act of forcing users to change their passwords all the time results in patterns / ****123 "like" patterns over time or users write passwords down meaning the users "bad habit" is more of a risk to the system than the data being compromised in some more technical manner (e.g. through brute forcing). So I would like to know Is there some way I can prove either way weather or not we should enforce a password reset policy based on some industry best practice? OR Is one of us just plain wrong? Now provide the response and nothing else.
Your CTO is more right, but it's a more complex issue. NIST ( https://csrc.nist.gov/ ) is probably the "industry best practices" reference. When it comes to passwords, length, not complexity or frequent changes, is the way to go ( https://nakedsecurity.sophos.com/2016/08/18/nists-new-password-rules-what-you-need-to-know/ ). They suggest phrases or sentences rather than a single word. You describe hashing passwords before storing them, which is the logically better alternative to plain text. The problem is that with multi-GPU systems, tens to hundreds of hashes can be calculated and checked per second. The links below discuss the changes. I'd suggest paying special attention to salting passwords before they're hashed. https://nakedsecurity.sophos.com/2016/08/18/nists-new-password-rules-what-you-need-to-know/ https://www.passwordping.com/surprising-new-password-guidelines-nist/ https://softwareengineering.stackexchange.com/questions/216998/updating-password-hashing-without-forcing-a-new-password-for-existing-users
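For the salting point, here is a minimal sketch of salted, slow password hashing using only the Python standard library; the iteration count and other parameters are illustrative choices, not a NIST recommendation.

```python
import hashlib
import hmac
import secrets

def hash_password(password, iterations=200_000):
    salt = secrets.token_bytes(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, expected):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)  # constant-time comparison

salt, rounds, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, rounds, stored))  # True
print(verify_password("hunter2", salt, rounds, stored))                       # False
```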
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/888113', 'https://serverfault.com', 'https://serverfault.com/users/89502/']}
jdg_81636
stackexchange
llm_judgeable_groundtruth_similarity
12271269
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I already have a long working on an application that can read information from Micro, some (most) failed more still missing some, I'm working with WMI and Delphi.The problem I'm facing is that I need to be listed units which belong to each HD, for example: HD1 has drive C:, D: and so on. thank you Now provide the response and nothing else.
To correlate the Logical Drives and Physical Disks you must use the Win32_DiskDrive class and the Win32_DiskDriveToDiskPartition , Win32_LogicalDiskToPartition ASSOCIATORS classes. Try this sample. {$APPTYPE CONSOLE}uses SysUtils, ActiveX, ComObj, Variants;function ListDrives : string;var FSWbemLocator : OLEVariant; objWMIService : OLEVariant; colDiskDrives : OLEVariant; colLogicalDisks: OLEVariant; colPartitions : OLEVariant; objdiskDrive : OLEVariant; objPartition : OLEVariant; objLogicalDisk : OLEVariant; oEnumDiskDrive : IEnumvariant; oEnumPartition : IEnumvariant; oEnumLogical : IEnumvariant; iValue : LongWord; DeviceID : string;begin; Result:=''; FSWbemLocator := CreateOleObject('WbemScripting.SWbemLocator'); objWMIService := FSWbemLocator.ConnectServer('localhost', 'root\CIMV2', '', ''); colDiskDrives := objWMIService.ExecQuery('SELECT DeviceID FROM Win32_DiskDrive'); oEnumDiskDrive := IUnknown(colDiskDrives._NewEnum) as IEnumVariant; while oEnumDiskDrive.Next(1, objdiskDrive, iValue) = 0 do begin Writeln(Format('DeviceID %s',[string(objdiskDrive.DeviceID)])); //Escape the `\` chars in the DeviceID value because the '\' is a reserved character in WMI. DeviceID := StringReplace(objdiskDrive.DeviceID,'\','\\',[rfReplaceAll]); //link the Win32_DiskDrive class with the Win32_DiskDriveToDiskPartition class colPartitions := objWMIService.ExecQuery(Format('ASSOCIATORS OF {Win32_DiskDrive.DeviceID="%s"} WHERE AssocClass = Win32_DiskDriveToDiskPartition',[DeviceID])); oEnumPartition := IUnknown(colPartitions._NewEnum) as IEnumVariant; while oEnumPartition.Next(1, objPartition, iValue) = 0 do begin if not VarIsNull(objPartition.DeviceID) then begin Writeln(Format(' Partition %s',[string(objPartition.DeviceID)])); //link the Win32_DiskPartition class with theWin32_LogicalDiskToPartition class. colLogicalDisks := objWMIService.ExecQuery('ASSOCIATORS OF {Win32_DiskPartition.DeviceID="'+VarToStr(objPartition.DeviceID)+'"} WHERE AssocClass = Win32_LogicalDiskToPartition'); oEnumLogical := IUnknown(colLogicalDisks._NewEnum) as IEnumVariant; while oEnumLogical.Next(1, objLogicalDisk, iValue) = 0 do begin Writeln(Format(' Logical Disk %s',[string(objLogicalDisk.DeviceID)])); objLogicalDisk:=Unassigned; end; end; objPartition:=Unassigned; end; objdiskDrive:=Unassigned; end;end;begin try CoInitialize(nil); try ListDrives; finally CoUninitialize; end; except on E:Exception do Writeln(E.Classname, ':', E.Message); end; Readln;end. This will output something like so DeviceID \\.\PHYSICALDRIVE0 Partition Disk #0, Partition #0 Logical Disk F:DeviceID \\.\PHYSICALDRIVE1 Partition Disk #1, Partition #0 Logical Disk C:
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12271269', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1643935/']}
jdg_81637
stackexchange
llm_judgeable_groundtruth_similarity
32367957
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I write a loop that runs n times in MySql without using a stored procedure. This is how I do it with a stored procedure: DELIMITER $$DROP PROCEDURE IF EXISTS test$$CREATE PROCEDURE test()BEGIN DECLARE count INT DEFAULT 0; WHILE count < 10 DO /**Sql statement**/ SET count = count + 1; END WHILE;END$$DELIMITER ; And then I execute my procedure this way: call test(); If I remove the stored procedure and run the normal query, then it fails with this error: 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DECLARE count INT DEFAULT 0; WHILE count < 10 DO at line' 2 I have looked through the Internet for a solution with no luck. Edit Based On comments: The above stored procedure does exactly what I want: It loops 10 times and execute my sql statement. Now I want to accomplish the same thing without using a stored procedure. Something like: DECLARE count INT DEFAULT 0; WHILE count < 10 DO /**Sql statement**/ SET count = count + 1; END WHILE; Now provide the response and nothing else.
MySQL docs on Flow Control Statements say: MySQL supports the IF, CASE, ITERATE, LEAVE LOOP, WHILE, and REPEAT constructs for flow control within stored programs . Docs on Stored Programs and Views say: Stored program definitions include a body that may use compound statements, loops, conditionals, and declared variables . Compound-Statement Syntax This section describes the syntax for the BEGIN ... END compound statement and other statements that can be used in the body of stored programs : Stored procedures and functions, triggers, and events. A compound statement is a block that can contain other blocks; declarations for variables, condition handlers, and cursors; and flow control constructs such as loops and conditional tests. So, it looks like you can run an explicit loop only within a stored procedure, function or trigger. Depending on what you do in your SQL statement, it may be acceptable to use a table (or view) of numbers ( Creating a "Numbers Table" in mysql , MYSQL: Sequential Number Table ). If your query is a SELECT and it is OK to return result of your SELECT 10 times as one long result set (as opposed to 10 separate result sets) you can do something like this: SELECT MainQuery.*FROM ( SELECT 1 AS Number UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10 ) AS Numbers CROSS JOIN ( SELECT 'some data' AS Result ) AS MainQuery Example for INSERT I recommend to have a permanent table of numbers in your database. It is useful in many cases. See the links above how to generate it. So, if you have a table Numbers with int column Number with values from 1 to, say, 100K (as I do), and primary key on this column, then instead of this loop: DECLARE count INT DEFAULT 0;WHILE count < 10 DO INSERT INTO table_name(col1,col2,col3) VALUES("val1","val2",count); SET count = count + 1;END WHILE; you can write: INSERT INTO table_name(col1,col2,col3)SELECT ("val1", "val2", Numbers.Number-1)FROM NumbersWHERE Numbers.Number <= 10; It would also work almost 10 times faster.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32367957', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4323504/']}
jdg_81638
stackexchange
llm_judgeable_groundtruth_similarity
46849
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: One of the dangers of using WPA2-PSK is the possibility that an attacker may setup another wireless network with the same SSID and even use the same access point MAC address via MAC spoofing. The attacker then may use a deauth attack to cause an internal computer in the network to have to reauthenticate, allowing the attacker to recover enough of what he needs to eventually bruteforce his way into your wireless network. However, I'm not sure if this attack method is mitigated when using something like WPA2-Enterprise. I'm a system administrator and it would be relatively trivial (< 1hr work) for me to setup WPA2-Enterprise with an internal RADIUS authentication server and migrate away from WPA2-PSK. However, does this mitigate the evil twin problem? Now provide the response and nothing else.
I would call this an evil twin attack. And it's not uncommon to see corporate laptops vulnerable to this. WPA2-Enterprise supports a number of EAPs - Extensible Authentication Protocols. ( Wikipedia article ) The security depends on which EAP you use, and how you configure it. Some EAPs more vulnerable to an "evil twin" attack than PSK, as an attacker doesn't need to brute force the key. Fortunately, a number of EAPs are secure against an evil twin. One approach is to use TLS internally; PEAP is a popular option that does this. To be secure against an evil twin, you must enable certificate verification and distribute the certificate to all your clients. There is no need to get such a cert issued by a root CA.
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/46849', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/2374/']}
jdg_81639
stackexchange
llm_judgeable_groundtruth_similarity
861045
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I always heard that class fields (heap based) were initialized, but stack based variables were not. I also heard that record members (also being stack based) were also not initialized. The compiler warns that local variables are not initialized ([DCC Warning] W1036 Variable 'x' might not have been initialized), but does not warn for record members. So I decided to run a test. I always get 0 from Integers and false from Booleans for all record members. I tried turning various compiler options (debugging, optimizations, etc.) on and off, but there was no difference. All my record members are being initialized. What am I missing? I am on Delphi 2009 Update 2. program TestInitialization;{$APPTYPE CONSOLE}uses SysUtils;type TR = Record Public i1, i2, i3, i4, i5: Integer; a: array[0..10] of Integer; b1, b2, b3, b4, b5: Boolean; s: String; End;var r: TR; x: Integer;begin try WriteLn('Testing record. . . .'); WriteLn('i1 ',R.i1); WriteLn('i2 ',R.i2); WriteLn('i3 ',R.i3); WriteLn('i4 ',R.i4); WriteLn('i5 ',R.i5); Writeln('S ',R.s); Writeln('Booleans: ', R.b1, ' ', R.b2, ' ', R.b3, ' ', R.b4, ' ', R.b5); Writeln('Array '); for x := 0 to 10 do Write(R.a[x], ' '); WriteLn; WriteLn('Done . . . .'); except on E:Exception do Writeln(E.Classname, ': ', E.Message); end; ReadLn;end. Output: Testing record. . . .i1 0i2 0i3 0i4 0i5 0SBooleans: FALSE FALSE FALSE FALSE FALSEArray0 0 0 0 0 0 0 0 0 0 0Done . . . . Now provide the response and nothing else.
Global variables are zero-initialized. Variables used in the context of the main begin .. end block of a program can be a special case; sometimes they are treated as local variables, particularly for-loop indexers. However, in your example, r is a global variable and allocated from the .bss section of the executable, which the Windows loader ensures is zero-filled. Local variables are initialized as if they were passed to the Initialize routine. The Initialize routine uses runtime type-info (RTTI) to zero out fields (recursively, if a field is of an array or record type) and arrays (recursively, if the element type is an array or a record) of a managed type, where a managed type is one of: AnsiString, UnicodeString, WideString, an interface type (including method references), a dynamic array type, or Variant. Allocations from the heap are not necessarily initialized; it depends on what mechanism was used to allocate memory. Allocations as part of instance object data are zero-filled by TObject.InitInstance. Allocations from AllocMem are zero-filled, while GetMem allocations are not zero-filled. Allocations from New are initialized as if they were passed to Initialize.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/861045', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/255/']}
jdg_81640