Dataset schema (one record per problem):
- source: stringclasses (1 value)
- task_type: stringclasses (1 value)
- in_source_id: stringlengths (1 to 8)
- prompt: stringlengths (209 to 40.4k)
- gold_standard_solution: stringlengths (0 to 56.7k)
- verification_info: stringclasses (1 value)
- metadata: stringlengths (138 to 225)
- problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
14174056
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Does anyone know how jQuery's .on() method can be implemented in native JS? The addEventListener method does not take a child/selector element as a way to filter, and I don't think I have the proper bubbling/capturing knowledge to completely understand what is happening in there. I did consult the source in event.js, where it appears that eventually addEventListener does get used just as it normally does, but I'm not sure I quite grok the source. If the native method does not provide a mechanism for taking advantage of bubbling and capturing, then does the jQuery .on() function really even have any benefit, or does it just make it look that way? I was under the impression that .on('parent', '.child', fn(){}); is more efficient than attaching an event to all children individually, but from my interpretation of the source, it's difficult to tell if jQuery is somehow managing this in a way to leads to performance improvement, or if it's just for readability. Is there a standard methodology for implementing events on a parent that take advantage of bubbling/capture phases for it's child elements, rather than having to attach an event to each individual child? Now provide the response and nothing else.
To perform event delegation natively:

parent.addEventListener('click', function(e) {
  if (e.target.classList.contains('myclass')) {
    // this code will be executed only when elements with class
    // 'myclass' are clicked on
  }
});

The efficiency you are referring to has to do with how many event handlers you add. Imagine a table with 100 rows. It is much more efficient to attach a single event handler to the table element and 'delegate' to each row than to attach 100 event handlers, one to each row. The reason event delegation works is that a click event actually fires on both the child and the parent (because you're clicking over a region within the parent). The above code snippet fires on the parent's click event, but only executes when the condition returns true for the event target, thus simulating a directly attached event handler. Bubbling/capturing is a related issue, but you only need to worry about it if the order of multiple event handlers firing matters. I recommend reading further on event order if you are interested in understanding bubbling vs. capturing. The most common benefit of event delegation is that it handles new elements that are added to the DOM after the event handler is attached. Take the above example of a table of 100 rows with click handlers. If we use direct event handler attachment (100 event handlers), then new rows that are added will need event handlers added manually. If we use delegated events, then new rows will automatically 'have' the event handler, because it has technically been added to the parent, which will pick up all future events. Read "What is DOM Event Delegation", as Felix Kling suggested, for more information.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14174056', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/802397/']}
jdg_84841
stackexchange
llm_judgeable_groundtruth_similarity
23042699
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to Python and I'm trying to teach myself language processing. NLTK in python has a function called FreqDist that gives the frequency of words in a text, but for some reason it's not working properly. This is what the tutorial has me write: fdist1 = FreqDist(text1)vocabulary1 = fdist1.keys()vocabulary1[:50] So basically it's supposed to give me a list of the 50 most frequent words in the text. When I run the code, though, the result is the 50 least frequent words in order of least frequent to most frequent, as opposed to the other way around. The output I am getting is as follows: [u'succour', u'four', u'woods', u'hanging', u'woody', u'conjure', u'looking', u'eligible', u'scold', u'unsuitableness', u'meadows', u'stipulate', u'leisurely', u'bringing', u'disturb', u'internally', u'hostess', u'mohrs', u'persisted', u'Does', u'succession', u'tired', u'cordially', u'pulse', u'elegant', u'second', u'sooth', u'shrugging', u'abundantly', u'errors', u'forgetting', u'contributed', u'fingers', u'increasing', u'exclamations', u'hero', u'leaning', u'Truth', u'here', u'china', u'hers', u'natured', u'substance', u'unwillingness...] I'm copying the tutorial exactly, but I must be doing something wrong. Here is the link to the tutorial: http://www.nltk.org/book/ch01.html#sec-computing-with-language-texts-and-words The example is right under the heading "Figure 1.3: Counting Words Appearing in a Text (a frequency distribution)" Does anyone know how I might fix this? Now provide the response and nothing else.
From NLTK's GitHub: "FreqDist in NLTK3 is a wrapper for collections.Counter; Counter provides the most_common() method to return items in order. The FreqDist.keys() method is provided by the standard library; it is not overridden. I think it is good we're becoming more compatible with stdlib. The docs at googlecode are very old, they are from 2011. More up-to-date docs can be found on the http://nltk.org website." So for NLTK version 3, instead of fdist1.keys()[:50], use fdist1.most_common(50). The tutorial has also been updated:

fdist1 = FreqDist(text1)
>>> print(fdist1)
<FreqDist with 19317 samples and 260819 outcomes>
>>> fdist1.most_common(50)
[(',', 18713), ('the', 13721), ('.', 6862), ('of', 6536), ('and', 6024),
 ('a', 4569), ('to', 4542), (';', 4072), ('in', 3916), ('that', 2982),
 ("'", 2684), ('-', 2552), ('his', 2459), ('it', 2209), ('I', 2124),
 ('s', 1739), ('is', 1695), ('he', 1661), ('with', 1659), ('was', 1632),
 ('as', 1620), ('"', 1478), ('all', 1462), ('for', 1414), ('this', 1280),
 ('!', 1269), ('at', 1231), ('by', 1137), ('but', 1113), ('not', 1103),
 ('--', 1070), ('him', 1058), ('from', 1052), ('be', 1030), ('on', 1005),
 ('so', 918), ('whale', 906), ('one', 889), ('you', 841), ('had', 767),
 ('have', 760), ('there', 715), ('But', 705), ('or', 697), ('were', 680),
 ('now', 646), ('which', 640), ('?', 637), ('me', 627), ('like', 624)]
>>> fdist1['whale']
906
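For illustration, here is a minimal Python sketch (editor's addition, not from the original answer) of a version-tolerant way to get the top 50: it prefers most_common() and falls back to sorting the items by count on older NLTK releases. It assumes the NLTK book corpora have been downloaded (nltk.download('book')).

from nltk import FreqDist
from nltk.book import text1  # requires the 'book' corpora to be downloaded

fdist1 = FreqDist(text1)

# most_common() exists on NLTK 3 (FreqDist wraps collections.Counter);
# sorting by count is a fallback that also works on older versions.
try:
    top50 = fdist1.most_common(50)
except AttributeError:
    top50 = sorted(fdist1.items(), key=lambda kv: kv[1], reverse=True)[:50]

print(top50)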
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23042699', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3528925/']}
jdg_84842
stackexchange
llm_judgeable_groundtruth_similarity
10431195
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I having trouble with my threads.After i segue a couple of times between 2 screen when the thread is busy. The thread don't perform every line.., The breakpoint just disappear when it has to return to the main thread.Can somebody please help me ? I release the thread when the view is unload. Thanks, - (void)fetchFeedDataIntoDocument{ NSString * labelString = [NSString stringWithFormat:@"Feed Fetcher %@", self.pageTitle]; const char *label = [labelString UTF8String]; self.fetchF = dispatch_queue_create(label, NULL); dispatch_async(self.fetchF, ^{ NSArray *feeds = [FeedFetcher getDataForJson:self.pageTitle downloadBy:@"up"]; NSDictionary *lastfeed; AppDelegate *appDelegate = [[UIApplication sharedApplication] delegate]; NSManagedObjectContext *context = [appDelegate getManagedObjectContext]; if ([feeds count] > 0) { lastfeed = [feeds objectAtIndex:0]; [FeedFetcher setLastUpdateIdToCatgorie:self.pageTitle WithId:[lastfeed objectForKey:@"id"] AndPulishDate:[lastfeed objectForKey:@"publish_up"]]; } for (NSDictionary *feedInfo in feeds) { [Feed FeedWithInfo:feedInfo InManageObject:context]; } NSError *error = nil; [context save:&error]; if (error){ NSLog(@"Error save : %@", error);} dispatch_async(dispatch_get_main_queue(), ^{ [self setupFetchedResultsController]; [self.tableView reloadData]; [self downloadImagesForFeeds:feeds]; }); }); Now provide the response and nothing else.
You are accessing the managedObjectContext from a different thread from the one where it was created. This is Core Data Threading Rule #1. You are getting the MOC from the app delegate. If it's the normal Xcode-generated MOC, then it is created with thread-confinement concurrency. You can't even call performBlock with it. You can only access that MOC from the main thread. Period. Anything else is playing with fire, at best. If you want to do all the work in a separate thread, you need a separate MOC as well. Like this (just typed - not compiled)...

NSManagedObjectContext *moc = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
moc.parentContext = appDelegate.managedObjectContext;
[moc performBlock:^{
    // Go get your remote data and whatever you want to do
    // Calling save on this MOC will push the data up into the "main" MOC
    // (though it is now in the main MOC it has not been saved to the store).
    [moc save:&error];
}];

Which would translate into something like this...

- (void)fetchFeedDataIntoDocument
{
    NSString *labelString = [NSString stringWithFormat:@"Feed Fetcher %@", self.pageTitle];
    const char *label = [labelString UTF8String];
    AppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
    NSManagedObjectContext *mainContext = [appDelegate getManagedObjectContext];
    NSManagedObjectContext *context = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    context.parentContext = mainContext;
    [context performBlock:^{
        NSArray *feeds = [FeedFetcher getDataForJson:self.pageTitle downloadBy:@"up"];
        NSDictionary *lastfeed;
        if ([feeds count] > 0) {
            lastfeed = [feeds objectAtIndex:0];
            [FeedFetcher setLastUpdateIdToCatgorie:self.pageTitle
                                            WithId:[lastfeed objectForKey:@"id"]
                                     AndPulishDate:[lastfeed objectForKey:@"publish_up"]];
        }
        for (NSDictionary *feedInfo in feeds) {
            [Feed FeedWithInfo:feedInfo InManageObject:context];
        }
        NSError *error = nil;
        [context save:&error];
        if (error) { NSLog(@"Error save : %@", error); } // Do you really want to continue on error?
        dispatch_async(dispatch_get_main_queue(), ^{
            // Data has been pushed into the main context from the background,
            // but it still needs to be saved to the store...
            // Do not forget to perform error handling...
            NSError *error = nil;
            [mainContext save:&error];
            [self setupFetchedResultsController];
            [self.tableView reloadData];
            [self downloadImagesForFeeds:feeds];
        });
    }];
}

EDIT The code generated by Xcode for creating the MOC uses init, which uses NSConfinementConcurrencyType. You can replace it with main-queue concurrency without any problems and get several benefits. In your app delegate file, replace...

__managedObjectContext = [[NSManagedObjectContext alloc] init];

with this...

__managedObjectContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];

Now your main MOC can be "parented" and you can call performBlock on it as well.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10431195', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1370164/']}
jdg_84843
stackexchange
llm_judgeable_groundtruth_similarity
11415106
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my webapp, have a JSON data response from a database query that includes the lat/long coordinates of 1 to n locations. I want to calculate the bearing from the data[i] location to the current position. I've been adapting the code here , but the bearing returned is incorrect. //starting lat/long along with converting lat to radsvar endLat = toRad(location.lat()); var endLong = location.lng();//loop over response, calculate new headings for links and add link to arrayfor(var i=0; i<data.length; i++){ //this link's lat/long coordinates, convert lat to rads var startLat = toRad(data[i].lat); var startLong = data[i].lon; //get the delta values between start and end coordinates in rads var dLong = toRad(endLong - startLong); //calculate var y = Math.sin(dLong)*Math.cos(endLong); var x = Math.cos(startLat)*Math.sin(endLat)-Math.sin(startLat)*Math.cos(endLat)*Math.cos(dLong); var bearing = Math.atan(y, x); bearing = (toDeg(bearing) + 360) % 360; panoLinks.push({'heading': bearing, 'description': data[i].description, 'pano': data[i].description});}//radian/degree conversionsfunction toRad(convert){ return convert * Math.PI/180;}function toDeg(convert){ return convert * 180/Math.PI;} Using the functions above and the values startLat= 43.6822, converts to 0.7623982145146669 radiansstartLong= -70.450769endLat= 43.682211, converts to 0.7623984065008848 radiansendLong= -70.45070dLong = startLong - endLong, converts to 0.0000011170107216805305 radians results in a compass degree of bearing= 0.000014910023935499339 which is definitely off. Where have I gone wrong? Now provide the response and nothing else.
Give this a try, I can't for the life of me remember where I got it though...

/**
 * Calculate the bearing between two positions as a value from 0-360
 *
 * @param lat1 - The latitude of the first position
 * @param lng1 - The longitude of the first position
 * @param lat2 - The latitude of the second position
 * @param lng2 - The longitude of the second position
 *
 * @return int - The bearing between 0 and 360
 */
bearing : function (lat1, lng1, lat2, lng2) {
    var dLon = (lng2 - lng1);
    var y = Math.sin(dLon) * Math.cos(lat2);
    var x = Math.cos(lat1) * Math.sin(lat2) - Math.sin(lat1) * Math.cos(lat2) * Math.cos(dLon);
    var brng = this._toDeg(Math.atan2(y, x));
    return 360 - ((brng + 360) % 360);
},

/**
 * Since not all browsers implement this we have our own utility that will
 * convert from degrees into radians
 *
 * @param deg - The degrees to be converted into radians
 * @return radians
 */
_toRad : function(deg) {
    return deg * Math.PI / 180;
},

/**
 * Since not all browsers implement this we have our own utility that will
 * convert from radians into degrees
 *
 * @param rad - The radians to be converted into degrees
 * @return degrees
 */
_toDeg : function(rad) {
    return rad * 180 / Math.PI;
},
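As a side note from the editor (not part of the original answer), the asker's bug was mixing degrees and radians and using Math.atan instead of Math.atan2. A minimal Python sketch of the standard initial-bearing formula, with everything converted to radians first, looks like this; the function name is illustrative, and note the JavaScript snippet above returns 360 minus this value, which is just a different orientation convention.

import math

def initial_bearing(lat1, lng1, lat2, lng2):
    """Initial bearing from point 1 to point 2, in compass degrees 0-360.

    Inputs are decimal degrees; all four are converted to radians before
    any trig call, and atan2 (not atan) combines the two components.
    """
    lat1, lng1, lat2, lng2 = map(math.radians, (lat1, lng1, lat2, lng2))
    d_lng = lng2 - lng1
    y = math.sin(d_lng) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lng)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

# Coordinates from the question (from data[i] to the current position):
print(initial_bearing(43.6822, -70.450769, 43.682211, -70.45070))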
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11415106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/214892/']}
jdg_84844
stackexchange
llm_judgeable_groundtruth_similarity
60641
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I have heard it said more than once—on Wikipedia , for example—that the étale topology on the category of, say, smooth varieties over $\mathbb{C}$, is equivalent to the Euclidean topology. I have not seen a good explanation for this statement, however. If we consider the relatively simple example of $\mathbb{P}^1_\mathbb{C}$, it seems to me that an étale map is just a branched cover by a Riemann surface, together with a Zariski open subset of $\mathbb{P}^1_\mathbb{C}$ that is disjoint from the ramification locus. (If there is a misconception there, small or large, please let me know) The connection to the Euclidean topology on $\mathbb{P}^1_\mathbb{C}$, however, is not obvious to me. What is the correct formulation of the statement that the two topologies are equivalent, or what is a good way to compare them? Now provide the response and nothing else.
Saying that the étale topology is equivalent to the euclidean topology is vastly overstating the case. For example, if you compute the cohomology of a complex algebraic variety with coefficients in $\mathbb Q$ in the étale topology, typically you get 0. On the other hand, it is a deep result that the étale cohomology of such a variety with coefficients in a finite abelian group coincides with its cohomology in the euclidean topology. Similarly, you can't capture the whole fundamental group with the étale topology, but only its finite quotients (and the fact that you can indeed describe the finite quotients of the fundamental group via étale covers is, again, a deep result).
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/60641', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1770/']}
jdg_84845
stackexchange
llm_judgeable_groundtruth_similarity
2639362
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would think that adding that attribute to an interface would be helpful make sure you do not create classes that use the interface and forget to make them serializable. This could be a very fundamental question, but I wanted to ask the experts. Now provide the response and nothing else.
Interfaces define a contract and do not have any state of their own. Serialization is about saving and loading state into and out of an object model. Not much point to serializing something that holds no state. To answer the practical question of forcing an implementation of an interface to be Serializable - this is why the ISerializable interface exists. In .NET you can declare an interface that should implement other interfaces: interface MustBeSerializable : ISerializable {} See some more information here .
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2639362', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/316735/']}
jdg_84846
stackexchange
llm_judgeable_groundtruth_similarity
306526
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've a set of 500+ observation (200+ feature vector dimension) of 7 classes and want improve my classification rate (with SVM or KNN). To reduce the dimension and transform the feature matrix to a lower dimension (due to curse of dimensionality), I'm using LDA. It maps my high dimensional data to lower 6 dimensions. But with applying cross validated LDA it doesn't help and degrade the results dramatically. When I even use leave one out (LOOCV) to calculate LDA projection matrix, it is calculated by holding out just one observation. My question is why even in this case the projection matrix ($W$) is so over-fitted and sensitive to cross validation? Intuitively I've hold out just one sample but it seems the projection matrix can't map the held out observation correctly. I'm interested in two parts: The math behind such experiment. Some consideration or solution for better cross validated feature transform instead of LDA. Update based on @Andrew M, initial response, I've different number of observation per class. For example one class has example 120 observation while the other has only 40. Now provide the response and nothing else.
When I even use leave one out (LOOCV) to calculate LDA projection matrix, it is calculated by holding out just one observation. My question is why even in this case the projection matrix ($W$) is so over-fitted and sensitive to cross validation? Intuitively I've hold out just one sample but it seems the projection matrix can't map the held out observation correctly. Well, the cross validation is probably doing what it is supposed to do: with almost the same training data, performance is measured. What you observe is that the models are unstable (which is one symptom of overfitting). considering your data situation, it seems totally plausible to me that the full model overfits just as badly. Cross validation does not in itself guard against overfitting (or improve the situation) - it just tells you that you are overfitting and it is up to you to do something against that. Keep in mind that the recommended number of training cases where you can be reasonably sure of having a stable fitting for (unregularized) linear classifiers like LDA is n > 3 to 5 p in each class . In your case that would be, say, 200 * 7 * 5 = 7000 cases, so with 500 cases you are more than an order of magnitude below that recommendation. Suggestions: As you look at LDA as a projection method, you can also check out PLS (partial least squares). It is related to LDA ( Barker & Rayens: Partial least squares for discrimination J Chemom, 2003, 17, 166-173 ). In contrast to PCA, PLS takes the dependent variable into account for its projection. But in contrast to LDA (and like PCA) it directly offering regularization. In small sample size situations where n is barely larger than p, many problems can be solved by linear classification. I'd recommend checking whether the nonlinear 2nd stage in your classification is really necessary. Unstable models may be improved by switching to an aggregated (ensemble) model. While bagging is the most famous variety, you can also aggregate cross validation LDA (e.g. Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations Anal Bioanal Chem, 2008, 390, 1261-1271. DOI: 10.1007/s00216-007-1818-6 ) Because of the pooling of the covariance matrix, I'd expect your uneven distribution of cases over the different classes to be less difficult for LDA compared to many other classifiers such as SVM. Of course this comes at the cost that a common covariance matrix may not be a good description of your data. However, if your classes are very unequal (or you even have rather ill-defined negative classes such as "something went wrong with the process") you may want to look into one-class classifiers. They typically need more training cases than discriminative classifiers, but they do have the advantage that recognition of classes where you have sufficient cases will not be compromised by classes with only few training instances, and said ill-defined classes can be described as the case belongs to none of the well-defined classes.
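To make the regularization point concrete, here is a short, hedged Python sketch (editor's addition, not from the answer, and it uses scikit-learn's shrinkage LDA rather than the PLS-DA route suggested above). The data is synthetic noise shaped like the poster's problem (about 500 samples, 200 features, 7 classes), so both scores will hover near chance; the point is only the API pattern for comparing an unregularized and a regularized variant under cross-validation.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))        # stand-in feature matrix
y = rng.integers(0, 7, size=500)       # stand-in labels for 7 classes

plain = LinearDiscriminantAnalysis()                                   # unregularized LDA
shrunk = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")   # Ledoit-Wolf shrinkage

for name, clf in [("plain LDA", plain), ("shrinkage LDA", shrunk)]:
    scores = cross_val_score(clf, X, y, cv=5)   # accuracy by default
    print(name, scores.mean())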
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/306526', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/178637/']}
jdg_84847
stackexchange
llm_judgeable_groundtruth_similarity
2886995
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Consider this javascript: var values = { name: "Joe Smith", location: { city: "Los Angeles", state: "California" }}var string = "{name} is currently in {location.city}, {location.state}";var out = string.replace(/{([\w\.]+)}/g, function(wholematch,firstmatch) { return typeof values[firstmatch] !== 'undefined' ? values[firstmatch] : wholematch;}); This will output the following: Joe Smith is currently in {location.city}, {location.state} But I want to output the following: Joe Smith is currently in Los Angeles, California I'm looking for a good way to convert multiple levels of dot notation found between braces in the string into multiple parameters to be used with bracket notation, like this: values[first][second][third][etc] Essentially, for this example, I'm trying to figure out what regex string and function I would need to end up with the equivalent of: out = values[name] + " is currently in " + values["location"]["city"] + values["location"]["state"]; NOTE: I'd like to do this without using eval() . Now provide the response and nothing else.
Using a helper function to iteratively access the properties:

function getNestedValue(obj, prop) {
    var props = prop.split('.'); // split property names
    for (var i = 0; i < props.length; i++) {
        if (typeof obj != "undefined") {
            obj = obj[props[i]]; // go to the next level
        }
    }
    return obj;
}

var string = "{name} is currently in {location.city}, {location.state}";
var out = string.replace(/{([^}]+)}/g, function(wholematch, firstmatch) {
    var value = getNestedValue(values, firstmatch);
    return typeof value !== 'undefined' ? value : wholematch;
});
// "Joe Smith is currently in Los Angeles, California"

Try the above example here.

Edit: Something slightly more elegant, using the Array.prototype.reduce method, part of the new ECMAScript 5th Edition Standard:

function replace(str, obj) {
    return str.replace(/{([^}]+)}/g, function(wholematch, firstmatch) {
        var value = firstmatch.split('.').reduce(function (a, b) {
            return a[b];
        }, obj);
        return typeof value !== 'undefined' ? value : wholematch;
    });
}

replace("{name} is currently in {location.city}, {location.state}", values);
// "Joe Smith is currently in Los Angeles, California"

Try the new example here.
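For readers doing the same dot-path templating server-side, here is a small Python analogue of the reduce-over-path idea (editor's sketch, not from the answer; function names are illustrative). It leaves unknown placeholders untouched, mirroring the JavaScript behaviour above.

import re
from functools import reduce

values = {
    "name": "Joe Smith",
    "location": {"city": "Los Angeles", "state": "California"},
}

def render(template, data):
    def lookup(match):
        path = match.group(1).split(".")
        try:
            # walk down the nested dicts one key at a time
            return str(reduce(lambda acc, key: acc[key], path, data))
        except (KeyError, TypeError):
            return match.group(0)  # leave the placeholder as-is if the path is missing
    return re.sub(r"\{([^}]+)\}", lookup, template)

print(render("{name} is currently in {location.city}, {location.state}", values))
# -> Joe Smith is currently in Los Angeles, California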
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2886995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/220599/']}
jdg_84848
stackexchange
llm_judgeable_groundtruth_similarity
582686
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Assume that I have 10 classes with 100 samples for each class—same # of samples, perfect balanced dataset. I want to add 3 new classes, and which of the following is the best option for the number of samples for each newly added class? 100, 100, 100 200, 200, 200 1000, 1000, 1000 100, 1000, 1000 I am creating a data analysis product for my clients, and I have to make a threshold for the minimum (or maximum) number of samples they have to add. It depends on the datasets we are using, but balanced data is almost always better than imbalanced. However, I am pretty not sure how I can set the threshold for the newly added class' data sample number if I can choose it. Now provide the response and nothing else.
All else being equal, more data is always better. So #3 is clearly the best option . Imbalanced data is not really a problem, and sacrificing more data for balance is throwing away free information (as Stephan Kolassa notes, the cost of data collection could be a concern - I am ignoring that for now). See the following questions for more detailed discussion about this common misconception: Are unbalanced datasets problematic, and (how) does oversampling (purport to) help? When is unbalanced data really a problem in Machine Learning? Does an unbalanced sample matter when doing logistic regression? What is the root cause of the class imbalance problem? This would be a more difficult choice if instead of [1000, 1000, 1000], option #3 was something like [10, 1000, 1000]. In that case, it is arguable whether you would learn enough about that one class from 10 samples to make the additional benefit of 1000 samples from the other 2 classes worth it - so [200, 200, 200] or [100, 1000, 1000] might be better options.
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/582686', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/291550/']}
jdg_84849
stackexchange
llm_judgeable_groundtruth_similarity
10838
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I wonder if there is a simple way to produce a list of variables using a for loop, and give its value. for(i in 1:3){ noquote(paste("a",i,sep=""))=i} In the above code, I try to create a1 , a2 , a3 , which assign to the values of 1, 2, 3. However, R gives an error message. Thanks for your help. Now provide the response and nothing else.
You are looking for assign().

for(i in 1:3){
    assign(paste("a", i, sep = ""), i)
}

gives

> ls()
[1] "a1" "a2" "a3"

and

> a1
[1] 1
> a2
[1] 2
> a3
[1] 3

Update: I agree that using loops is (very often) bad R coding style (see discussion above). Using list2env() (thanks to @mbq for mentioning it), this is another solution to @Han Lin Shang's question:

x <- as.list(rnorm(10000))
names(x) <- paste("a", 1:length(x), sep = "")
list2env(x, envir = .GlobalEnv)
{}
{'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/10838', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/4625/']}
jdg_84850
stackexchange
llm_judgeable_groundtruth_similarity
38209
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Recently liberals and conservatives alike have floated returning to the high marginal income tax rates of the mid 20th century, as a method to help reduce inequality in the United States, among other goals. Similarly, some have also floated ideas about taxing underlying wealth (net worth) instead of (or in addition to) annual income. Ann Coulter supporting high income taxes proposed by Rep. Alexandria Ocasio-Cortez Slate supporting a wealth tax Tucker Carlson on concerns about inequality My question: How could such a "wealth tax" be implemented to minimize evasion and non-compliance, ideally without also creating a massive compliance headache for the IRS or whoever administers the wealth tax? Obviously rich people would attempt to circumvent such taxes by trying to "hide" their assets, shield them overseas, etc., so I'm wondering how the law's details could be structured to stymie evasion as much as possible. Assume for this discussion that a wealth tax looks something like: Pay 1% on marginal net worth over $10M. For example, a $15M net worth would be taxed annually at ($10M * 0%)+($5M * 1%) = $50,000 / year. I specifically am NOT asking about the merits or downsides of wealth taxes, but rather how they could be implemented most effectively from an enforcement perspective. Edit : To make this more specific and less opinion-based, let's focus on how other countries have structured wealth taxes. What have they done? Is there any reliable reporting on evasion or compliance? Now provide the response and nothing else.
Henry George advocated for a Land Value Tax which assesses the undeveloped value of a property and taxes the person who owns the property an assessed tax. Specifically, it is the unimproved land that is taxed, not any improvement to the land (such as a parking lot, a mall, a house, a farm, ect) or what is the value of the property assuming it was undeveloped from the start. This type of tax is considered to be progressive as the poorest of the citizens never own land and is likely the only economic efficient tax as it encourages improvement of the land in order to pay the tax. If you have an empty lot in the city, you still owe a similar value to your neighbors who have an office tower and a trendy apartment building on either side. Thus, it's to your benefit to develop something that will profit the land. It's also one of the few commodities that cannot be stored overseas... it's a fixed location and any attempt to put it offshore will be noticed pretty quickly, thus people who invest in land will be unable to hide their true worth as far as taxing purposes are concerned.
{}
{'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/38209', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/24708/']}
jdg_84851
stackexchange
llm_judgeable_groundtruth_similarity
312985
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Standard Model of particle physics splits both the leptons and the quarks into three generations, with mass and instability going up from the first to the third generation. These are normally displayed together, on the same rows or columns of the table of fundamental particles: This makes some sense: each charged lepton is tied to its neutrino in most Feynman vertices it appears in, and the quarks are linked to each other by their charges, if nothing else. However, I can't think of any way in which the SM formally links, say, muons, and strange quarks. Is there some explicit link with generation-specific interactions? Or is it just a coincidence that there are three rungs in both ladders with increasing mass on both? Now provide the response and nothing else.
Is there some explicit link with generation-specific interactions? Or is it just a coincidence that there are three rungs in both ladders with increasing mass on both? In the SM, there are no generation-specific interactions, beyond color, which factors out. The generation tabulation system simply arrays states according to increasing mass , a still mysterious pattern; which is why, to date, the neutrino mass eigenstates $\nu_{1,2,3}$ have not been assigned to generations, yet, pending complete experimental confirmation of the normal hierarchy/ordering (the $\nu_{e,\mu,\tau}$ are not mass eigenstates, and were on that chart to simply confuse and abuse; mercifully, they are going away ). The increasing mass pattern is not a coincidence: it is the construction principle. The weak interactions, through weak mixing , hop across all generations , and neutrinos mix a lot, unlike quarks, whose mixing angles are small. So scrambling green columns will do nothing to physics, as long as you take care to rewrite your PMNs matrix to reflect the labelling change. Of course you do need the number of states you see on such tables to be matched, 3 rungs to 3 rungs, whatever their order, so as to prevent gauge anomalies, invalidating gauge invariance, but the assignment of states in generations, the order of the rungs, is basically arbitrary. In speculative GUTs (like the Georgi-Glashow SU(5)) one tries to link fermion masses (cf (22) , since leptons and quarks are put in common representations, 5 and 10 ; but, again, alternate inequivalent models wiggle the leptons around associating them to different quarks, to adapt to negative proton decay results. So, indeed, such alternative speculative models take advantage of the SM freedom to scramble green columns .
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/312985', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/8563/']}
jdg_84852
stackexchange
llm_judgeable_groundtruth_similarity
55331
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Today, we were being given a lesson on Thermodynamics when my brain encountered an error. It is easy to derive that, $\Delta U$, the internal energy change for constant volume process is given by,$$\Delta U = nC_V \Delta T$$and that enthalpy change for a constant pressure process is given by$$\Delta H = nC_P \Delta T$$ However, my teacher later mentioned that internal energy and enthalpy changes are given by the same equations for any thermodynamic process. They need not be constant volume/pressure processes. I can't figure out why should that be true . A hint shall be enough. Now provide the response and nothing else.
A very general discussion, not specific to a particular system: The internal energy, $U$, of a system is a function of state, which means that its value depends only on the thermodynamic variables ($P, V, T$, for example) at a given state (that is, for a given set of values of these variables). Let us make this more concrete: imagine the system is in a thermodynamic state where the thermodynamic variables have the values ($P_i, V_i, T_i$) ($i$ stands for initial). At these values of the thermodynamic variables the internal energy has a value: Internal energy at the initial state $i$: $U(P_i,T_i,V_i)$. You can think of a gas at pressure, volume and temperature condition ($P_i, V_i, T_i$). Now imagine you change the thermodynamic variables to ($P_f, V_f, T_f$) ($f$ stands for final). The internal energy now has a new value: Internal energy at the final state $f$: $U(P_f,T_f,V_f)$. In this process you have changed the internal energy of the system by an amount: Change in $U$: $\Delta U= U(P_f,T_f,V_f)- U(P_i,T_i,V_i)$. I hope it is clear that the system could have passed through an infinitely large set of $(P,V,T)$ points, along an infinitely large number of different paths, in order to go from state $i$ to state $f$. However, these do not in any way influence by how much $U$ will change; you can take whichever path you please to go from state $i$ to state $f$. So the system has no memory of the intermediate states. In mathematical terminology, this means that the differential change, $dU$, is a perfect differential, and this is stated by the simple mathematical expression $\oint_C dU=0$. It is very similar to the gravitational potential of the Earth, for example, which tells us that the amount of energy we need to spend to lift an object by 3 m does not depend on whether we bring it straight vertically up or we follow some other path.
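A short worked completion (editor's sketch, assuming an ideal gas with constant $C_V$, which is the standard argument behind the teacher's claim in the question and is not stated in the answer above):

\[
  dU = \left(\frac{\partial U}{\partial T}\right)_V dT
     + \left(\frac{\partial U}{\partial V}\right)_T dV
     = n C_V\, dT + 0 \cdot dV ,
\]
\[
  \Delta U = \int_{T_i}^{T_f} n C_V\, dT = n C_V\, \Delta T
  \quad\text{along any path, since } \left(\tfrac{\partial U}{\partial V}\right)_T = 0 \text{ for an ideal gas,}
\]
\[
  \Delta H = \Delta U + \Delta(PV) = n C_V\, \Delta T + nR\, \Delta T = n C_P\, \Delta T .
\]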
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/55331', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/21308/']}
jdg_84853
stackexchange
llm_judgeable_groundtruth_similarity
4577651
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: From Gallian's "Contemporary Abstract Algebra", Part 2 Chapter 5 It looks like using Lagrange's theorem would work, since $|S_n| = n!$ and $\langle\alpha\rangle$ is a subgroup of $S_n$ . However, that hasn't been covered in the book at this point, so I'm assuming a different solution is expected $\alpha$ can be broken up into disjoint cycles $\alpha_1\dots\alpha_m$ such that $|\alpha_1| + \dots +|\alpha_m| = n$ , and then $|\alpha| = \operatorname{lcm}(|\alpha_1|, \dots, |\alpha_n|)$ . Don't know how to continue though Now provide the response and nothing else.
You've almost solved it yourself! The numbers $|\alpha_i|$ are all between $1$ and $n$ so, from what you wrote, $|\alpha|$ is the least common multiple of a set of numbers between $1$ and $n$ . On the other hand $n!$ is the product of all the numbers from $1$ to $n$ . One definition of the least common multiple of a set of numbers is that it divides any number which is divisible by all the numbers in the set.
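Since the answer above is deliberately a hint, here is the completed argument written out as a worked LaTeX sketch (editor's addition):

% Write alpha as disjoint cycles of lengths k_1, ..., k_m with k_1 + ... + k_m = n, so 1 <= k_j <= n.
\[
  |\alpha| = \operatorname{lcm}(k_1,\dots,k_m).
\]
% Each k_j is one of the factors 1, 2, ..., n of n!, so n! is a common multiple of k_1, ..., k_m.
% The lcm divides every common multiple, hence
\[
  |\alpha| = \operatorname{lcm}(k_1,\dots,k_m) \ \big|\ n! .
\]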
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4577651', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/788464/']}
jdg_84854
stackexchange
llm_judgeable_groundtruth_similarity
3431064
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: What is the “??” operator for? What does the "??" operator perform in an expression ? public NameValueCollection Metadata{ get { return metadata ?? (metadata = new NameValueCollection()); }} Now provide the response and nothing else.
This is known as the null-coalescing operator, and it acts as follows; assume a is a nullable int and b is a normal int:

b = a ?? 1;

is equal to

b = (a != null ? (int)a : 1);

which is equal to

if (a != null)
    b = (int)a;
else
    b = 1;

Therefore

public NameValueCollection Metadata
{
    get
    {
        return metadata ?? (metadata = new NameValueCollection());
    }
}

expanded should look something like this:

public NameValueCollection Metadata
{
    get
    {
        if (metadata == null)
            return (metadata = new NameValueCollection());
        else
            return metadata;
    }
}

which is a kind of one-liner singleton pattern, because the getter returns metadata (an initialized NameValueCollection object) every time it is requested, except the very first time, when it is null; at that point it initializes it and then returns it. This is off topic, but note that this approach to the singleton pattern is not thread-safe.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3431064', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/411680/']}
jdg_84855
stackexchange
llm_judgeable_groundtruth_similarity
12421171
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using ksoap library for webservice call. Below is a simple function for calling webserive. try { METHOD_NAME = method; SOAP_ACTION = "http://tempuri.org/"+method; URL = "http://172.27.212.2:8080/services/"+serviceName; request = new SoapObject(NAMESPACE, METHOD_NAME); soapEnvelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); soapEnvelope.dotNet = true; soapEnvelope.setOutputSoapObject(request); androidHttpTransport = new HttpTransportSE(URL); androidHttpTransport.call(SOAP_ACTION, soapEnvelope); SoapPrimitive resultString = (SoapPrimitive) soapEnvelope.getResponse(); return resultString.toString(); } catch (Exception e) { throw e; } Webservice method returns a json string. But in eclipse this string getting truncated. I debugged and truncated resultString is as below: OutPut: {"menu":{ "Arrival": [ { "traveldeskdetailid": 1968, "traveldeskid": 4, "AirlineName": "United Airlines", "AirlineLogo": "UA1.gif", "FlightNumber": "1239", "Codeshare": "", "City": "Newark", "AirportName": "Newark Liberty International Airport", "Delayed": "T", "ScheduledTime": "8:29 PM", "Remarks": "126 minutes late", "RemarksWithTime": "Estimated 10:35 PM", "Terminal": "3", "Gate": "85", "path": "http://dem5xqcn61lj8.cloudfront.net/logos/UA1.gif" }, { .. },{ .. }, { "traveldeskdetailid": 1983, "traveldeskid": 4, "AirlineName": "Air Canada", "AirlineLogo": "AC.gif", "FlightNumber": "564", "Codeshare": "", "City": "Vancouver", "AirportName": "Vancouver International Airport", "Delayed": "T", "ScheduledTime": "10:45 PM", "Remarks": "19 minutes late", "RemarksWithTime": "Estimated 11:04 PM", "Terminal": "I", "Gate": "73", "path": "http://dem5xqcn61lj8.cloudfront.net/logos/AC.gif" }, { "traveldeskdetaili... Response string is truncate like { "traveldeskdetaili... How can I get full response? Now provide the response and nothing else.
You are misunderstanding that particular overload of the LastIndexOf method. The docs state the following: The search starts at a specified character position and proceeds backward toward the beginning of the string. Note that it says backward . So, if you start at position 0 , there is no "<" substring at that position or in front of that position and hence the result is -1 . In contrast, if you use the overload that takes only the substring, the search will start at the end of the string and hence correctly find the indicated substring.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12421171', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/696602/']}
jdg_84856
stackexchange
llm_judgeable_groundtruth_similarity
36471723
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We have three web applications (standard Spring MVC-Hibernate) running within a Jboss server 6.1. All three applications share a common authentication method which is compiled as a JAR and included within each WAR file. Our authentication method uses org.springframework.security.crypto.bcrypt.BCrypt to hash user passwords, please see below: hashedPassword.equals(BCrypt.hashpw(plainTextPassword, salt)); JBOSS StartUp Options set "JAVA_OPTS=-Xms2048m -Xmx4096m -XX:PermSize=256m -XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError -verbosegc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:gc.txt -XX:+UseParallelOldGC Problem: It appears that when the server is restarted, the Bcrypt.hashpw takes 100ms to decrypt password. However after some time (there is no pattern) suddenly the Bcrypt.hashpw performance spikes up from 100ms to 10s of seconds. There is no obvious reason for this. More information: Hibernate Version: 4.2.4.Final Spring Version: 4.0.5.RELEASE Spring Security Version: 3.2.4.RELEASE Has anyone else seen this problem before? Now provide the response and nothing else.
Problem: It appears that when the server is restarted, Bcrypt.hashpw takes 100ms to hash a password. However after some time (there is no pattern) the Bcrypt.hashpw performance suddenly spikes from 100ms to tens of seconds. There is no obvious reason for this.

The problem is that /dev/random sometimes blocks, and when it does it will appear to be random :) The more confusing thing is that while trying to test how it works you'll run up against the Observer Effect, i.e. while trying to observe random behavior you're generating entropy, and this can lead to a ton of confusion, i.e. my results won't be the same as yours, etc. This is also why it looks like there's no pattern. I'll demonstrate the problem and show you how to recreate it (within reason) on your own servers so you can test solutions. I'll try to provide a couple of fixes. Note this is on Linux, but the same problem will happen on any system that requires entropy to generate random numbers and runs out.

On Linux /dev/random is a stream of random bytes. As you read from the stream you deplete the available entropy. When it reaches a certain point, reads from /dev/random block. You can see available entropy using this command:

cat /proc/sys/kernel/random/entropy_avail

If you run the following bash script and also monitor entropy_avail, you'll notice that entropy dips dramatically as the script consumes it.

while :
do
  cat /dev/random > /dev/null
done

This should also give you a hint on how to recreate this problem on your servers, i.e. run the above bash script to reduce available entropy and the problem will manifest itself. If you want to see just how many bytes per second your system is creating, you can use pv to measure it:

pv /dev/random

If you leave pv running it has an effect: it's consuming the random stream of bytes, which means other services might start to block. Note that pv is also displaying its output, so it might also be increasing available entropy on the system :). On systems with little or no entropy, pv /dev/random will seem glacially slow. I've also experienced that VMs sometimes have major issues with generating entropy. To recreate the issue use the following class...

import java.security.SecureRandom;
import org.mindrot.jbcrypt.BCrypt;

public class RandTest {
    public static void main(String[] args) {
        SecureRandom sr = new SecureRandom();
        String password = "very-strong-password-1729";
        String hashed;
        for (int i = 0; i < 200000; i++) {
            hashed = BCrypt.hashpw(password, BCrypt.gensalt());
            // If we print, we're generating entropy :)
            System.out.println(hashed);
        }
    }
}

I downloaded bcrypt to a local directory. I compiled and ran it as follows:

javac -cp ./jBCrypt-0.4/src/ RandTest.java
java -cp ./jBCrypt-0.4/src/:. RandTest

If you then run the bash script from earlier while running RandTest, you'll see large pauses where the system is blocking, waiting for more entropy. If you run strace you'll see the following...

11067 [pid 22481] open("/dev/random", O_RDONLY|O_LARGEFILE) = 12
11068 [pid 22481] fstat64(12, {st_mode=S_IFCHR|0666, st_rdev=makedev(1, 8), ...}) = 0
11069 [pid 22481] fcntl64(12, F_GETFD) = 0
11070 [pid 22481] fcntl64(12, F_SETFD, FD_CLOEXEC) = 0
.....
11510 [pid 22481] read(12, "\320\244\317RB\370", 8) = 6

The program is reading from /dev/random. The problem with testing entropy is that you might be generating more of it while trying to test it, i.e. the Observer Effect.

Fixes

The first fix is to change from using /dev/random to /dev/urandom, i.e.

time java -Djava.security.egd=file:///dev/./urandom -cp ./jBCrypt-0.4/src/:. RandTest

An alternative fix is to recreate the /dev/random device as a /dev/urandom device. You can find how to do this from the man page; instead of creating them as...

mknod -m 644 /dev/random c 1 8
mknod -m 644 /dev/urandom c 1 9
chown root:root /dev/random /dev/urandom

we delete one and fake it, i.e.

rm /dev/random
mknod -m 644 /dev/random c 1 9
chown root:root /dev/random

/dev/random is now actually /dev/urandom.

The key thing to remember is that testing random data that requires entropy from the system you're testing on is difficult because of the Observer Effect.
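If it helps to monitor the blocking while reproducing this, here is a small Python sketch (editor's addition, Linux only; the path is the standard kernel interface, the function name and intervals are arbitrary) that watches the kernel's available entropy while something like RandTest runs:

import time

ENTROPY = "/proc/sys/kernel/random/entropy_avail"

def watch(seconds=30, interval=1.0):
    # Print a timestamped reading of the entropy pool once per interval.
    for _ in range(int(seconds / interval)):
        with open(ENTROPY) as f:
            print(time.strftime("%H:%M:%S"), f.read().strip())
        time.sleep(interval)

if __name__ == "__main__":
    watch()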
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36471723', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1164145/']}
jdg_84857
stackexchange
llm_judgeable_groundtruth_similarity
28593502
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this large string: s = '''Vaya ir VMM03S0 0.427083mañanita mañana RG 0.796611, , Fc 1buscando buscar VMG0000 1la lo PP3FSA00 0.0277039encontramos encontrar VMIP1P0 0.65. . Fp 1Pero pero CC 0.999764vamos ir VMIP1P0 0.655914a a SPS00 0.996023lo el DA0NS0 0.457533que que PR0CN000 0.562517interesa interesar VMIP3S0 0.994868LO_QUE_INTERESA_La lo_que_interesa_la NP00000 1lavadora lavador AQ0FS0 0.585262tiene tener VMIP3S0 1una uno DI0FS0 0.951575clasificación clasificación NCFS000 1A+ a+ NP00000 1, , Fc 1de de SPS00 0.999984las el DA0FP0 0.970954que que PR0CN000 0.562517ahorran ahorrar VMIP3P0 1energía energía NCFS000 1, , Fc 1si si CS 0.99954me me PP1CS000 0.89124no no RN 0.998134equivoco equivocar VMIP1S0 1. . Fp 1Lava lavar VMIP3S0 0.397388hasta hasta SPS00 0.9576987 7 Z 1kg kilogramo NCMN000 1, , Fc 1no no RN 0.998134está estar VAIP3S0 0.999201, , Fc 1se se P00CN000 0.465639le le PP3CSD00 1veía ver VMII3S0 0.62272un uno DI0MS0 0.987295gran gran AQ0CS0 1tambor tambor NCMS000 1( ( Fpa 1de de SPS00 0.999984acero acero NCMS000 0.973481inoxidable inoxidable AQ0CS0 1) ) Fpt 1y y CC 0.999962un uno DI0MS0 0.987295error error NCFSD23 0.234930error error VMDFG34 0.98763consumo consumo NCMS000 0.948927máximo máximo AQ0MS0 0.986111de de SPS00 0.99998449 49 Z 1litros litro NCMP000 1error error DI0S3DF 1Mandos mandos NP00000 1intuitivos intuitivo AQ0MP0 1, , Fc 1todo todo PI0MS000 0.43165muy muy RG 1bien bien RG 0.902728explicado explicar VMP00SM 1, , Fc 1jamas jamas RG 0.343443nada nada PI0CS000 0.850279que que PR0CN000 0.562517de de SPS00 0.999984nunca nunca RG 0.903casa casa NCFS000 0.979058de de SPS00 0.999984mis mi DP1CPS 0.995868error error VM9032 0.234323string string VMWEOO 0.03444padres padre NCMP000 1Además además NP00000 1incluye incluir VMIP3S0 0.994868la el DA0FS0 0.972269tecnología tecnología NCFS000 1error errpr RG2303 1Textileprotec textileprotec NP00000 1que que PR0CN000 0.562517protege proteger VMIP3S0 0.994868nuestras nuestro DP1FPP 0.994186ninguna ninguno DI0S3DF 0.345344falla falla NCFSD23 1prendas prenda NCFP000 0.95625más más RG 1preciadas preciar VMP00PF 1jamas jamas RG2303 1string string VM9032 0.234323nunca nunca RG 0.293030string string VM 0.902333no no RNle le PP004DF 0.390230falla fallar VM0FD00 0.99033. . Fp 1''' I would like to extract in a list the second word from left to right and its id that holds this ids pattern: RN_ _ _ _ _ , PP_ _ _ _ _ , VM_ _ _ _ _ . This ids must be together. For example: no no RN 0.90383le le PPSDF23 0.902339falla fallar VM00DKE 0.9045 This is the pattern I would like to match, since they are together and the ids have the RN_ _ _ _ _ , PP_ _ _ _ _ , VM_ _ _ _ _ order this should be the output given the s string: [('no RN', 'le PP004DF', 'fallar VM0FD00')] This is what I tried: together__ = re.findall(r'(?s)(\w+\s+RN)(?:(?!\s(?:RN|PP|VM)).)*?(\w+\s+PP\w+)(?:(?!\s(?:RN|PP|VM)).)*?(\w+\s+VM\w+)', s) but I get this with the above regex: print together__ output: [('no RN', 'le PP3CSD00', 'ver VMII3S0'), ('no RN', 'le PP004DF', 'fallar VM0FD00')] Which is wrong since the ids are not consecutevely in the string s ( RN, PP, VM ). How can I fix this regex?. Thanks in advance guys. Now provide the response and nothing else.
The short answer

Since the element we want to replace has a unique name, what we're looking for is this:

nav {
  ul {
    li {
      a {
        color: red;
        @at-root #{selector-replace(&, 'ul', 'ul.opened')} {
          color: green;
        }
      }
    }
  }
}

The long answer

Manipulating selectors is extremely dirty, and I would advise against it unless you absolutely had to. If you're overqualifying your selectors by specifying things like table tr td or ul li, then start by simplifying: tr and ul are both redundant in these selectors (unless you're trying to avoid styling elements under an ordered list). Adjust your nesting to be simpler, etc.

Starting with Sass version 3.4, there are 2 important features that allow you to modify selectors: selector functions, and the fact that the parent selector can be stored in a variable. Example:

.foo ul > li a, .bar {
  $sel: &;
  @debug $sel;
}

You'll always get a list of lists of strings, because selectors can be chained together with a comma, even when you have only one selector.

.foo ul > li a, .bar { ... }
(1 2 3 4 5), (1)

You'll note that the descendant selector is being counted here (lists in Sass can be either space or comma delimited). This is extremely important to remember.

When selector-replace() doesn't work

The selector-replace() function does not work in the following cases: the selector you want to replace is not unique (eg. ul ul li); you want to insert one or more selectors (eg. ul ul li -> ul ul ul li); you want to remove a selector (eg. ul > li -> ul li). In these cases, you'll need to loop over the selectors and you'll need to know which position you want to modify. The following function will take a function and apply it to a specific position in your selector using the magic of the call() function.

@function selector-nth($sel, $n, $f, $args...) {
  $collector: ();
  @each $s in $sel {
    $modified: call($f, nth($s, $n), $args...);
    $collector: append($collector, set-nth($s, $n, $modified), comma);
  }
  @return $collector;
}

Append a class (when the selector isn't unique or you don't know its name)

The function we need here takes 2 arguments: the original selector and the selector you'd like to append to it. It uses simple interpolation to do the job.

@function append-class($a, $b) {
  @return #{$a}#{$b};
}

.foo, .bar {
  ul > li a {
    color: red;
    @at-root #{selector-nth(&, -2, append-class, '.baz')} {
      color: blue;
    }
  }
}

Output:

.foo ul > li a, .bar ul > li a {
  color: red;
}
.foo ul > li.baz a, .bar ul > li.baz a {
  color: blue;
}

Insert a selector

This function also takes 2 arguments: the original selector and the selector you'd like to insert before it.

@function insert-selector($a, $b) {
  @return $b $a;
}

.foo, .bar {
  ul > li a {
    color: red;
    @at-root #{selector-nth(&, -2, insert-selector, '.baz')} {
      color: blue;
    }
  }
}

Output:

.foo ul > li a, .bar ul > li a {
  color: red;
}
.foo ul > .baz li a, .bar ul > .baz li a {
  color: blue;
}

Remove a selector

Removing a selector is as simple as replacing your selector with an empty string.

@function remove-selector($sel) {
  @return '';
}

.foo, .bar {
  ul > li a {
    color: red;
    @at-root #{selector-nth(&, -2, remove-selector)} {
      color: blue;
    }
  }
}

Output:

.foo ul > li a, .bar ul > li a {
  color: red;
}
.foo ul > a, .bar ul > a {
  color: blue;
}

TL;DR

Selectors are just lists. Any list manipulation functions will work on them, and you can loop over them to modify them as necessary. So yeah, don't do it unless you really really really need to. If you've decided you still need it, I've packaged these functions up into the selector-nth library.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28593502', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3638878/']}
jdg_84858
stackexchange
llm_judgeable_groundtruth_similarity
494843
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Definition of ridge regression $$min_\beta||y-X\beta||_2^2+\lambda||\beta||_2^2, \lambda\ge0$$ you can prove a function is strictly convex if the 2nd derivative is strictly greater than 0 thus But unfortunately I don't know if this is sufficient proof as it's possible for $X^TX$ to be negative and $\lambda$ can be 0. Unless I'm missing something. Now provide the response and nothing else.
"you can prove a function is strictly convex if the 2nd derivative is strictly greater than 0" That's in one dimension. A multivariate twice-differentiable function is convex iff the 2nd derivative matrix is positive semi-definite, because that corresponds to the directional derivative in any direction being non-negative. It's strictly convex iff the second derivative matrix is positive definite. As you showed, the ridge loss function has second derivative $2\lambda I +2X^TX$ , which is positive definite for any $\lambda>0$ because $\lambda I$ is positive definite for any $\lambda>0$ $X^TX$ is positive semi-definite for any $X$ the sum of a positive definite and positive semi-definite matrix is positive definite If you aren't sure about any of these and want to check in more detail it's useful to know that $A$ is positive definite iff $b^TAb>0$ for all (non-zero) column vectors $b$ . Because of this relationship, many matrix proofs of positive definiteness just come from writing the scalar proofs of positiveness in matrix notation (including non-trivial results like the Cramér-Rao lower bound for variances)
{}
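A quick numerical sanity check of the convexity argument above (a sketch assuming NumPy; the random matrix and the rank-deficient example are illustrative choices, not from the original question): the smallest eigenvalue of the Hessian $2\lambda I + 2X^TX$ is strictly positive whenever $\lambda>0$, while at $\lambda=0$ it can drop to zero for a rank-deficient $X$.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 3))

    def ridge_hessian(X, lam):
        # Hessian of ||y - X b||^2 + lam * ||b||^2 with respect to b
        p = X.shape[1]
        return 2.0 * lam * np.eye(p) + 2.0 * X.T @ X

    for lam in (0.0, 0.1, 1.0):
        print(lam, np.linalg.eigvalsh(ridge_hessian(X, lam)).min())

    # A rank-deficient X: X^T X is only positive SEMI-definite (smallest
    # eigenvalue 0), but adding 2*lam*I with lam > 0 still gives a positive
    # definite Hessian, i.e. strict convexity.
    X_def = np.hstack([X[:, :1], X[:, :1], X[:, 1:2]])
    print(np.linalg.eigvalsh(ridge_hessian(X_def, 0.0)).min())  # ~0 up to rounding
    print(np.linalg.eigvalsh(ridge_hessian(X_def, 0.5)).min())  # at least 2 * 0.5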
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/494843', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/268999/']}
jdg_84859
stackexchange
llm_judgeable_groundtruth_similarity
162197
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As I understand it, Digital Signatures involve the 'signage' of a message using the senders Private Key , and its validation by the recipient using the sender's mathematically-related Public Key . Whilst this seems straight-forward enough, the whole point of Asymmetric encryption (as I understand it) is that the private key is hidden to the world and necessary for decryption, making the publicly available key of the sender therefore useless to the attacker. If Digital Signatures use the private key of a sender to encrypt data, surely the private key becomes vulnerable, and the message can be attacked using the same decryption attacks commonly used against symmetrically encrypted data. RSA wouldn't be held in such high regard if it were actually susceptible to the flaw I just outlined, which makes me suspect I have misunderstood something. Now provide the response and nothing else.
There seems to be some confusion here about the difference between digital signing and directed message encryption. If I digitally sign something, I indeed need a private key to do so, but you can take my public key and verify that the holder of the corresponding private key did indeed sign it. Where decryption is concerned: yes, a private key is required to decrypt a message. However, what we do is exchange public keys with one another beforehand, and I encrypt the message with your public key, not mine (or both, depending on the algorithm or purpose). That way only the holder of the private key associated with that public key can decrypt it (i.e. you). To make this a bit clearer, they are different ways of using RSA altogether; at a high level: Encrypt(plaintext, publicKey) = RSA(OAEP(plaintext), publicKey) = ciphertext Decrypt(cipherText, privateKey) = OAEP(RSA(cipherText, privateKey)) = plaintext While signing is slightly different: Sign(plainText, privateKey) = RSA(Hash(plainText), privateKey) = signature [s] Verify(plainText, s, publicKey) = RSA(s, publicKey) = output; if output == Hash(plainText) the signature is verified. This is crude, but it shows the high-level difference between them. Neither party's private key is ever revealed to the other: each side decrypts and signs with its own private key, and encrypts and verifies with the other side's public key.
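As a concrete sketch of the two directions, using the third-party Python cryptography package as an assumed example (OAEP padding for encryption, PKCS#1 v1.5 with SHA-256 for the signature, roughly mirroring the crude pseudocode above):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Alice's key pair: the private half never leaves her machine.
    alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    alice_public = alice_private.public_key()

    message = b"meet at noon"
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Encryption: anyone uses Alice's PUBLIC key; only Alice can decrypt.
    ciphertext = alice_public.encrypt(message, oaep)
    assert alice_private.decrypt(ciphertext, oaep) == message

    # Signing: Alice uses her PRIVATE key; anyone verifies with her public key.
    signature = alice_private.sign(message, padding.PKCS1v15(), hashes.SHA256())
    try:
        alice_public.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        print("signature ok")
    except InvalidSignature:
        print("signature forged or message tampered with")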
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/162197', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/82738/']}
jdg_84860
stackexchange
llm_judgeable_groundtruth_similarity
23132
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Front Page Mag (a blog for the conservative David Horowitz Freedom Center) quotes an unnamed report that UK authorities were complicit in the support of sexually abusive Muslims: The police did their part… for the Muslim rapists. The report heard of two cases where fathers tracked down their daughters and tried to remove them from houses where they were being abused only to be arrested themselves when police were called. Is there evidence that men were arrested in this way - trying to rescue their daughters from sexual abuse? Now provide the response and nothing else.
The text of the report can be downloaded from here . The report includes the following paragraph, 5.9 In two of the cases we read, fathers tracked down their daughters and tried toremove them from houses where they were being abused, only to be arrestedthemselves when police were called to the scene. In a small number of cases (whichhave already received media attention) the victims were arrested for offences suchas breach of the peace or being drunk and disorderly, with no action taken againstthe perpetrators of rape and sexual assault against children. It's possible that no further specific details are forthcoming: for example, arrest records and police reports (unlike evidence presented in court) might not be available to the general pubic. The Preface to the report says, The Inquiry applied the definition of child sexual exploitation which is used in Governmentguidance and is set out in Appendix 4, paragraph 48 of this report. The methodologyincluded reading a wide range of minutes, reports and case files I include the first quote above, as evidence for your "Is it true?" question: The report is higher-quality (i.e. more direct) evidence than the newspaper report you referenced; i.e. it shows that the quote in the newspaper report you cited was not misquoting the official report. The report doesn't shed additional light on the 'editorializing' in the newspaper article, i.e. the "The police did their part… for the Muslim rapists" sentence. The report does not confirm or identify, for those two specific 'cases': Whether the homes were of "Muslim rapists" What the specific type of "being abused" was The names of the victims of the abuse The names of the arrestees (the victims' fathers) What they were arrested for. The report in question is titled "Independent Inquiry into Child Sexual Exploitation in Rotherham". The independent inquiry was commissioned by Rotherham Metropolitan Borough Council in October 2013 (Wikipedia says it was commissioned in November 2013, citing this BBC article ). It is (quoting its title) an inquiry into Child Sexual Exploitation in Rotherham. The scope of its investigation included examining case files of child sexual exploitation cases, to determine whether officials (including police, Council, social workers, etc., in Rotherham) acted appropriately c) Consider managerial and political oversight, leadership and direction, operational management practice including supervision, support and guidance and the roles and responsibilities of other parties including the Police, Crown Prosecution Service, health services, schools, parents, family and the Local Safeguarding Children Board. e) Identify who in the Council knew what information when and determine whether that information was used effectively and in the best interests of protecting young people. g) Ensure that the cases reviewed will include those identified in the national press. It was therefore important that the investigation be independent of the Council, which comissioned the report. The terms of its independence are stated in Appendix A (page 121) of the inquiry, as follows: Appendix 1: Terms of Reference for the Independent Inquiry into Child Sexual Exploitation 1997 - 2013 That it be conducted by an independent person with appropriate skills, experience and abilities who has not previously been employedby or undertaken work, either directly or indirectly, for RotherhamMetropolitan Borough Council, nor is a relation of any member orofficer of the Council past or present. 
Prior to appointment theindependent person will be required to sign a declaration to thateffect. The person should be on a list of reputable personsrecommended to the Council by the Local Government Association. The leader of the inquiry and author of the report is Alexis Jay OBE : Alexis Jay is a Visiting Professor at the University of Strathclyde and the Independent Chair of the Centre for Excellence for Looked After Children in Scotland (CELCIS).[1] Professor Jay is a former senior social worker.[2] She was previously Chief Social Work Adviser to the Scottish Government.[3] and a former president of the Association of Directors of Social Work.[4] In 2005 she took up the post of Chief Social Work Inspector at the Social Work Inspection Agency (SWIA), a government organisation scrutinising all aspects of social services provided by local authorities in Scotland.[5] She served as Chief Executive and Chief Social Work Inspector until the functions of SWIA and the Care Commission were taken over by the Care Inspectorate in 2011. She then remained as Chief Social Work Adviser to the Scottish Government until early 2013. She led the Independent Inquiry into Child Sexual Exploitation in Rotherham, an investigation into child sexual abuse in the Metropolitan Borough of Rotherham in South Yorkshire.[3] She is the author of the investigation's report, published in August 2014.[6] The report was published in August 2014 (about two weeks ago as of today). It was subsequent to (i.e. preceded by) articles published in The Times, for example, On 24th September 2012, The Times reported Andrew Norfolk’s investigation into CSE in Rotherham. i.e. Police files reveal vast child protection scandal
{}
{'log_upvote_score': 6, 'links': ['https://skeptics.stackexchange.com/questions/23132', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/17021/']}
jdg_84861
stackexchange
llm_judgeable_groundtruth_similarity
2986852
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Select makes sense. But can someone explain .detect to me? I don't understand these data. >> [1,2,3,4,5,6,7].detect { |x| x.between?(3,4) }=> 3>> [1,2,3,4,5,6,7].detect { |x| x.between?(3,6) }=> 3>> [1,2,3,4,5,6,7].detect { |x| x.between?(3,7) }=> 3>> [1,2,3,4,5,6,7].detect { |x| x.between?(2,7) }=> 2>> [1,2,3,4,5,6,7].detect { |x| x.between?(1,7) }=> 1>> [1,2,3,4,5,6,7].detect { |x| x.between?(6,7) }=> 6>> [1,2,3,4,5,6,7].select { |x| x.between?(6,7) }=> [6, 7]>> [1,2,3,4,5,6,7].select { |x| x.between?(1,7) }=> [1, 2, 3, 4, 5, 6, 7] Now provide the response and nothing else.
Detect returns the first item in the list for which the block returns TRUE. Your first example: >> [1,2,3,4,5,6,7].detect { |x| x.between?(3,4) }=> 3 Returns 3 because that is the first item in the list that returns TRUE for the expression x.between?(3,4) . detect stops iterating after the condition returns true for the first time. select will iterate until the end of the input list is reached and returns all of the items where the block returned true.
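For comparison, the same first-match-versus-all-matches distinction expressed in Python (an analogy added for illustration, not part of the Ruby API):

    data = [1, 2, 3, 4, 5, 6, 7]

    # like Ruby's detect: stop at the first element that satisfies the block
    first_match = next((x for x in data if 3 <= x <= 6), None)
    print(first_match)      # 3

    # like Ruby's select: keep every element that satisfies the block
    all_matches = [x for x in data if 3 <= x <= 6]
    print(all_matches)      # [3, 4, 5, 6]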
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2986852', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/165448/']}
jdg_84862
stackexchange
llm_judgeable_groundtruth_similarity
51076386
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following df (with many columns). How can I force that all values start at 1, maintaining pattern per ID? Example: input <- data.frame(ID = c(1,1,1,1,1,2,2,2,2,2,3,3,3,3), Obs1 = c(1,1,1,2,2,4,4,5,6,7,2,3,4,4), Obs2 = c(2,2,3,4,5,1,1,1,1,1,1,1,2,2), Obs3 = c(5,6,7,8,9,3,3,3,4,4,1,1,2,2)) Desired result result <- data.frame(ID = c(1,1,1,1,1,2,2,2,2,2,3,3,3,3), Obs1 = c(1,1,1,2,2,1,1,2,3,4,1,2,3,3), Obs2 = c(1,1,2,3,4,1,1,1,1,1,1,1,2,2), Obs3 = c(1,2,3,4,5,1,1,1,2,2,1,1,2,2)) Now provide the response and nothing else.
When the readable event is emitted, both the listeners listen. However, the first to call the read() function would get the data and second one would get an empty buffer because it's the same input stream being read twice and only a single chunk was present. When the data event is emitted the listener receives the chunk of data being read from the stream. So all the listeners receive the same data. [EDIT] In detail how it works: All readable streams begin in 2 modes: flowing and paused . All Readable streams begin in paused mode by default but they can be switched to the flowing mode using any of the three methods: 1. Attaching a 'data' event handler to the stream 2. Calling the stream.resume() method 3. Calling the stream.pipe() method to send the data to a Writable. When you use any of the above method the stream starts to flow. It doesn't care if the data listeners are attached to the stream and there's a possibility of losing the data. Internally, the read() method is called on the stream and whatever data that is being accumulated in the internal buffer is read and emitted to the listeners. The memory usage is quite low. When you attach a readable listener on your stream, it takes priority over the data listener and hence your stream remains in the paused mode. In the paused mode, you've to explicitly read the data from the internal buffer by calling the read() method. When the readable data is available it keeps on getting accumulated in the internal buffer until the read() method is called explicitly or the stream is resumed. You can specify the size in bytes of the chunk to be read from the internal buffer or all the available data is returned. When read() is called data event is also emitted with the chunk of data read. After consuming this data, the internal buffer is emptied. So when you have multiple readable events attached and try to consume from the same internal buffer, you are not able to get the same data multiple times. My suggestion to you would be have just one readable listener and multiple data listeners. Having a readable would give you the flexibility to read when you want without missing any data. And with the data event handlers, you would be able to get that data in all the handlers.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51076386', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9346630/']}
jdg_84863
stackexchange
llm_judgeable_groundtruth_similarity
7928233
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm searching for something like the finished-signal from QDialog , only for QWidget . The reason is, I disable my toolbar once the widget pops up (which isn't a problem at all) and I want the toolbar to be enabled again, once the widget is closed. I also can't override the close-Event of that widget, because then we would have GUI-code in business-classes. Now provide the response and nothing else.
You can set the widget to be deleted on close, and then listen to its destroyed signal: widget->setAttribute( Qt::WA_DeleteOnClose );connect( widget, SIGNAL(destroyed(QObject*)), this, SLOT(widgetDestroyed(QObject*)) ); That only works if you're not interested in the widget contents though. At the point destroyed() is emitted, the widget isn't a QWidget anymore, just a QObject (as destroyed() is emitted from ~QObject ), so you can't cast the argument QObject* to QWidget anymore. A simple alternative might be to wrap your widget with a QDialog .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7928233', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/979926/']}
jdg_84864
stackexchange
llm_judgeable_groundtruth_similarity
1783684
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am preparing for an exam and found this integral in a previous test. Did I do it correctly? My attempt. $$\int\frac{\sqrt{x^2+2x-3}}{x+1}\,dx$$Complete the square of $x^2+2x-3$; I changed the integral to$$\int\frac{\sqrt{(x-1)^2-4}}{x+1}\,dx$$then set $u=x+1$ to get$$\int\frac{\sqrt{u^2-4}}{u}\,dx$$Using the triangle, $2\sec\theta=u$ and $du=2\sec\theta\tan\theta d\theta$$$\int\frac{\sqrt{(2\sec\theta)^2-2^2}}{2\sec\theta}2\sec\theta \tan\theta\, d\theta$$This I simplified to $$2\int\tan^2\theta\, d\theta = 2\int\sec^2\theta-1\,d\theta=2[\tan\theta-\theta]+C$$ Back substitute$$\theta=\tan^{-1}\frac{\sqrt{u^2-4}}{2}$$and$$\tan\theta=\frac{\sqrt{u^2-4}}{2}$$Back substitute $u=x+1$$$\int\frac{\sqrt{x^2+2x-3}}{x+1}\,dx=\sqrt{(x-1)^2-4}-\tan^{-1}{\sqrt{(x-1)^2-4}}+C$$ Now provide the response and nothing else.
I changed the integral to $\int\frac{\sqrt{(x-1)^2-4}}{x+1}dx$ then $u=x+1$ It should be $$\int\frac{\sqrt{(x\color{red}{+}1)^2-4}}{x+1}dx$$ I simplified to $2\int tan^2\theta d\theta$ = $2\int\sec^2\theta-1d\theta$ $=2[tan\theta-\theta]+C$ back substitute $\theta=tan^{-1}\frac{\sqrt{u^2-4}}{2}$ and $tan\theta=\frac{\sqrt{u^2-4}}{2}$ back substitute $u=x+1$ I think you did nothing wrong here. $\int\frac{\sqrt{x^2+2x-3}}{x+1}dx={\sqrt{(x-1)^2-4}}-tan^{-1}{\sqrt{(x-1)^2-4}}+C$ This is not correct : $$\int\frac{\sqrt{x^2+2x-3}}{x+1}dx={\sqrt{(x\color{red}{+}1)^2-4}}-\color{red}{2}\tan^{-1}\frac{\sqrt{(x\color{red}{+}1)^2-4}}{\color{red}{2}}+C$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1783684', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/144413/']}
jdg_84865
stackexchange
llm_judgeable_groundtruth_similarity
63352151
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The record preview feature (JEP 384) introduced in JDK 14 are a great innovation. They make it much easier to create simple immutable classes that are a pure collection of values without the loss of context inherent in the generic tuple classes in various libraries. The description of the JEP, written by Brian Goetz ( https://openjdk.java.net/jeps/384 ) explains the intent very well. However I was expecting a closer association with the eventual introduction of value types. The original aims of value types were quite extensive: essentially allowing potentially significant performance improvements for objects whose value is all that matters by removing all the overhead not required for these types of object (e.g. reference indirection, synchronisation). Plus it could provide syntactical niceties such as myPosition != yourPosition instead of !myPosition.equals(yourPosition) . It seems as though the restrictions of records are very close to the types of restrictions that would be required on a potential value type. Yet the JEP makes no reference to these aims in the motivation. I tried unsuccessfully to find any public records on these deliberations. So my question is: are records intended to be part of a possible move towards value types or are these completely unrelated concepts and future value types may look completely different? My motivation for asking the question: if records become a permanent part of the language it would be an added incentive to adopt them in code if there is the possibility of significant performance benefits in a future release. Now provide the response and nothing else.
Records and primitive classes (the new name for value types) have a lot in common -- they are implicitly final and shallowly immutable. So it is understandable that the two might be seen as the same thing. In reality, they are different, and there is room for both of them to co-exist, but they can also work together. Both of these new kinds of classes involve some sort of restriction, in exchange for certain benefits. (Just like enum , where you give up control over instantiation, and are rewarded with a more streamlined declaration, support in switch , etc.) A record requires you to give up on extension, mutability, and the ability to decouple the representation from the API. In return, you get implementations of constructors, accessors, equals , hashCode , and more. A primitive class requires you to give up on identity, which includes giving up on extension and mutability, as well as some other things (e.g., synchronization). In return, you get a different set of benefits -- flattened representation, optimized calling sequences, and state-based equals and hashCode . If you are willing to make both compromises, you can get both sets of benefits -- this would be a primitive record . There are lots of use cases for primitive records, so classes that are records today could be primitive records tomorrow, and would just get faster. But, we don't want to force all records to be primitive or for all primitives to be records. There are primitive classes that want to use encapsulation, and records that want identity (so they can organize into trees or graphs), and this is fine.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/63352151', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3705127/']}
jdg_84866
stackexchange
llm_judgeable_groundtruth_similarity
2877304
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say I have two type classes defined as follows that are identical in function but different in names: class Monad m where (>>=) :: m a -> (a -> m b) -> m b return :: a -> m aclass PhantomMonad p where pbind :: p a -> (a -> p b) -> p b preturn :: a -> p a Is there a way to tie these two classes together so something that is an instance of PhantomMonad will automatically be an instance of Monad, or will instances for each class have to be explicitly written? Any insight would be most appreciated, thanks! Now provide the response and nothing else.
Good answer: No, what you're hoping to do isn't really viable. You can write an instance that looks like it does what you want, possibly needing some GHC extensions in the process, but it won't work the way you you'd like it to. Unwise answer: You can probably accomplish what you want using scary type-level metaprogramming, but it may get complicated. This really isn't recommended unless you absolutely need this to work for some reason. Officially instances can't really depend on other instances, because GHC only looks at the "instance head" when making decisions, and class constraints are in the "context". To make something like a "type class synonym" here, you'd have to write what looks like an instance of Monad for all possible types , which obviously doesn't make sense. You'll be overlapping with other instances of Monad , which has its own problems. On top of all that, I don't think such an instance will satisfy the termination check requirements for instance resolution, so you'd also need the UndecidableInstances extension, which means the ability to write instances that will send GHC's type checker into an infinite loop. If you really want to go down that rabbit hole, browse around on Oleg Kiselyov's website a bit; he's sort of the patron saint of type-level metaprogramming in Haskell. It's fun stuff, to be sure, but if you just want to write code and have it work, probably not worth the pain. Edit: Okay, in hindsight I've overstated the issue here. Something like PhantomMonad works fine as a one-off and should do what you want, given the Overlapping - and UndecidableInstances GHC extensions. The complicated stuff starts up when you want to do anything much more complicated than what's in the question. My sincere thanks to Norman Ramsey for calling me on it--I really should have known better. I still don't really recommend doing this sort of thing without good reason, but it's not as bad as I made it sound. Mea culpa.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2877304', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/278434/']}
jdg_84867
stackexchange
llm_judgeable_groundtruth_similarity
4570653
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hi I want to use NamedPipeServerStream which is new from .NET 3.5 for namedpipe communication.I want to write multi-threaded pipe server. is it handled by default or I should write code for that. my pipe server should handle multiple request at a time any solution or code ? Now provide the response and nothing else.
Each NamedPipeServerStream instance is a Stream implementation wrapping a handle to an instance of a named pipe. You can (and a multithreaded pipe server will) have multiple instances of NamedPipeServerStream for the same named pipe: each one wraps a handle to a different instance of the named pipe, servicing a different client. Named pipe instances (even for the same pipe) are kept separate by the operating system, so there is no need for any explicit coding to keep each client's communication with the server separate. What you do need to code explicitly is the threading model for the server. The simplest approach to multithreading the server is explained in this SO answer , which includes a pseudo-code template. More scalable implementations, if large numbers of concurrent callers need to be supported, would use thread pooling and the asynchronous methods instead of creating a dedicated thread for each connection.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4570653', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/341763/']}
jdg_84868
stackexchange
llm_judgeable_groundtruth_similarity
1128737
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there an unpivot equivalent function in PostgreSQL? Now provide the response and nothing else.
Create an example table: CREATE TEMP TABLE foo (id int, a text, b text, c text);INSERT INTO foo VALUES (1, 'ant', 'cat', 'chimp'), (2, 'grape', 'mint', 'basil'); You can 'unpivot' or 'uncrosstab' using UNION ALL: SELECT id, 'a' AS colname, a AS thingFROM fooUNION ALLSELECT id, 'b' AS colname, b AS thingFROM fooUNION ALLSELECT id, 'c' AS colname, c AS thingFROM fooORDER BY id; This runs 3 different subqueries on foo , one for each column we want to unpivot, and returns, in one table, every record from each of the subqueries. But that will scan the table N times, where N is the number of columns you want to unpivot. This is inefficient, and a big problem when, for example, you're working with a very large table that takes a long time to scan. Instead, use: SELECT id, unnest(array['a', 'b', 'c']) AS colname, unnest(array[a, b, c]) AS thingFROM fooORDER BY id; This is easier to write, and it will only scan the table once. array[a, b, c] returns an array object, with the values of a, b, and c as it's elements. unnest(array[a, b, c]) breaks the results into one row for each of the array's elements. Hope that helps!
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1128737', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_84869
stackexchange
llm_judgeable_groundtruth_similarity
11822192
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am attempting to copy a local file 'magento.tar.gz' from my local machine to a remote server using SSH through a VPN. This is connecting to the Virtual Machine's Internal IP which I've used as xx.x.x.xx here. I have full 'sudo' access on the SSH account so there shouldn't be any problem copying across. I have tried the following: I have tried the following (the magento.tar.gz file is already in the local root dir) sudo scp magento.tar.gz [email protected]/var/www/ This asks me to type in my local password. Afterwards returns cp: [email protected]/var/www: Not a directory sudo scp /Users/myname/magento.tar.gz [email protected]/var/www/ Returns the same. Do I need to include a SSH in there anywhere? Do I need to connect via SSH to the site first? Side note: I've managed to connect via SSH to the server, browse to the directory and make a folder and delete it using sudo mkdir etc so I definitely have permissions. Now provide the response and nothing else.
First, you need to add : after the IP address to indicate that the path follows: scp magento.tar.gz [email protected]:/var/www I don't think you need to sudo the scp . In this case it doesn't affect the remote machine, only the local command. Then, if your user @xx.x.x.xx doesn't have write access to /var/www , you need to do it in two steps: Copy to your home folder on the remote server ( : represents your remote home folder, use :subfolder/ if needed, or :/home/user/ for full path): scp magento.tar.gz [email protected]: Then SSH in and move the file: ssh [email protected] mv magento.tar.gz /var/www
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/11822192', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1294237/']}
jdg_84870
stackexchange
llm_judgeable_groundtruth_similarity
989083
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In Munkres' book, the composition of covering maps is a covering map when $r^{-1}(z)$ is finite for each $z$ in $Z$, where $q : X\to Y$ , $r:Y\to Z$ are the covering maps. I tried hard to find an example where the composition of covering maps is not a covering map, but I couldn't find one. Is there such an example? Now provide the response and nothing else.
Here is a simple visualization of the example that Pece is talking about. The top map is a 2-fold covering but the bottom map has infinite fibers. The composition isn't locally trivial and is therefore not a covering map. However, it is still a semicovering map in the sense that it is a local homeomorphism which has the unique path lifting property.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/989083', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/187003/']}
jdg_84871
stackexchange
llm_judgeable_groundtruth_similarity
10978070
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using the following code as the basis for a plugin I'm working on for a project - it's from an article on Smashing Magazine located here under the heading 'A Lightweight Start': http://coding.smashingmagazine.com/2011/10/11/essential-jquery-plugin-patterns/ It's been working fine for my purposes so far, but the rest of the article goes off on a tangent and talks about what jQuery UI widgets which I assume I would need the jQuery UI library for, which I don't really want to use. /*! * jQuery lightweight plugin boilerplate * Original author: @ajpiano * Further changes, comments: @addyosmani * Licensed under the MIT license */// the semi-colon before the function invocation is a safety// net against concatenated scripts and/or other plugins// that are not closed properly.;(function ( $, window, document, undefined ) { // undefined is used here as the undefined global // variable in ECMAScript 3 and is mutable (i.e. it can // be changed by someone else). undefined isn't really // being passed in so we can ensure that its value is // truly undefined. In ES5, undefined can no longer be // modified. // window and document are passed through as local // variables rather than as globals, because this (slightly) // quickens the resolution process and can be more // efficiently minified (especially when both are // regularly referenced in your plugin). // Create the defaults once var pluginName = 'defaultPluginName', defaults = { propertyName: "value" }; // The actual plugin constructor function Plugin( element, options ) { this.element = element; // jQuery has an extend method that merges the // contents of two or more objects, storing the // result in the first object. The first object // is generally empty because we don't want to alter // the default options for future instances of the plugin this.options = $.extend( {}, defaults, options) ; this._defaults = defaults; this._name = pluginName; this.init(); } Plugin.prototype.init = function () { // Place initialization logic here // You already have access to the DOM element and // the options via the instance, e.g. this.element // and this.options }; // A really lightweight plugin wrapper around the constructor, // preventing against multiple instantiations $.fn[pluginName] = function ( options ) { return this.each(function () { if (!$.data(this, 'plugin_' + pluginName)) { $.data(this, 'plugin_' + pluginName, new Plugin( this, options )); } }); }})( jQuery, window, document ); I now need to add a method to this but I'm pretty clueless as to how to do so. The method needs to work in such a way that an instance of what the plugin is creating on the page can have a property dynamically changed by calling the method with a value via the console (eventually this will occur via some other process, but console is good for now). How would I go about amending the code above to allow for this? Or am I barking up the wrong tree? Any help would be greatly appreciated, complex JavaScript can leave me a bit lost in the dark at times I'm afraid but I like to try to do things as 'best practices' as possible. Now provide the response and nothing else.
The jQuery documentation strongly recommends calling plugin methods by passing a string to the main plugin method. This is to stop the $.fn namespace becoming cluttered by the methods of your plugin. So you do something like this: $.fn.yourPlugin = function(options) { if (typeof options === "string") { //Call method referred to by 'options' } else { //Setup plugin as usual }}; In your pattern, you already have the perfect place to define your methods: Plugin.prototype . For example, to add a changeColor method: Plugin.prototype.changeColor = function(color) { $(this.element).css("color", color);} Notice the use of $(this.element) . That's because in the Plugin constructor, a property element is defined, and the element on which the plugin is being applied is assigned to it: this.element = element; That is the actual DOM element, not a jQuery object, hence the need to call jQuery on it. So now you have a method, you need to add a mechanism to call it. Following the recommendations from the jQuery docs: $.fn[pluginName] = function ( options ) { return this.each(function () { if (typeof options === "string") { var args = Array.prototype.slice.call(arguments, 1), plugin = $.data(this, 'plugin_' + pluginName); plugin[options].apply(plugin, args); } else if (!$.data(this, 'plugin_' + pluginName)) { $.data(this, 'plugin_' + pluginName, new Plugin( this, options )); } });}; You can then call the changeColor method like this: $("#example").defaultPluginName("changeColor", "red"); Here's a fiddle with a working example . You may want to add a check around the method calling code to make sure the plugin has actually been instantiated on the element(s) you are calling it on.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10978070', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/245591/']}
jdg_84872
stackexchange
llm_judgeable_groundtruth_similarity
7218851
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have kind of a n00b problem, I can't seem to make HTTP GET requests from my Qt Code... Here is the code supposed to work: void MainWindow::requestShowPage(){ QNetworkAccessManager *manager = new QNetworkAccessManager(this); connect(manager,SIGNAL(finished(QNetworkReply*)),this,SLOT(requestReceived(QNetworkReply*)));manager->get(QNetworkRequest(QUrl("http://google.com")));}void MainWindow::requestReceived(QNetworkReply* reply){ QString replyText; replyText.fromAscii(reply->readAll()); ui->txt_debug->appendPlainText(replyText);} But the problem is that this just doesn't work: In requestReceived(QNetworkReply* reply) , replyText seems empty, reply->error() returns 0 and reply->errorString() returns "Unknown Error". I don't really know what to do right now... Any idea? Now provide the response and nothing else.
There is obviously a redirection, which is not considered as an error. You should run a new request with the redirection url provided in the reply attributes until you get the real page: void MainWindow::requestReceived(QNetworkReply *reply){ reply->deleteLater(); if(reply->error() == QNetworkReply::NoError) { // Get the http status code int v = reply->attribute(QNetworkRequest::HttpStatusCodeAttribute).toInt(); if (v >= 200 && v < 300) // Success { // Here we got the final reply QString replyText = reply->readAll(); ui->txt_debug->appendPlainText(replyText); } else if (v >= 300 && v < 400) // Redirection { // Get the redirection url QUrl newUrl = reply->attribute(QNetworkRequest::RedirectionTargetAttribute).toUrl(); // Because the redirection url can be relative, // we have to use the previous one to resolve it newUrl = reply->url().resolved(newUrl); QNetworkAccessManager *manager = reply->manager(); QNetworkRequest redirection(newUrl); QNetworkReply *newReply = manager->get(redirection); return; // to keep the manager for the next request } } else { // Error ui->txt_debug->appendPlainText(reply->errorString()); } reply->manager()->deleteLater();} You should also record where you are redirected or count the number of redirections, to avoid never ending loops.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7218851', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/277999/']}
jdg_84873
stackexchange
llm_judgeable_groundtruth_similarity
6734287
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Code in Program.cs [STAThread]static void Main(){ Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); try { Application.Run(new Form1()); } catch (Exception ex) { MessageBox.Show("Blah..."); }} In Form1 I have a button with the code throw new Exception(""); . If I run the application from Visual Studio, then my messagebox pops up (with message 'Blah...'). But if I run the application from executable file, then the catch block doesn't execute at all. Why the difference? I am using Visual Studio 2010, .NET 4.0, Windows XP. Now provide the response and nothing else.
This is because the standard exception handling for a Windows Forms application behaves differently when the Visual Studio debugger is attached - normally the exception handler built into the Application.Run method catches unhandled exceptions so that it can do things like show the following dialog: If it allowed the exception to be thrown outside of the Application.Run method then it would prevent the application from continuing if the user presses "continue" (as the catch is outside of the message pump). When debugging however this is disabled, presumably so that the debugger will jump straight into debugging mode on an unhandled exception rather than the above dialog being shown. If you wish to handle unhandled exceptions in your Windows Forms application then you should handle the Application.ThreadException Event . Alternatively you can alter this behaviour with the Application.SetUnhandledExceptionMode Method . You are by no means alone in being confused by this: Is try/catch around whole C# program possible? Exception handling problem in release mode (this question is misleadingly named - this isn't related to the Release / Debug build setting) C# Exceptions only caught when debugging?
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6734287', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/113858/']}
jdg_84874
stackexchange
llm_judgeable_groundtruth_similarity
28162509
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I have an object with four possible properties: a, b, c, d. a and b can only appear together (i.e., a appears if and only if b appears). If a and b appear, c cannot appear (that is, a/b and c are mutually exclusive). If a and b do not appear, c may appear (but is not required to). d can appear in any combination with a/b, c, or on its own. No properties other than a, b, c, or d may appear at all. How do I express this as a jsonschema? I suspect I could use some combination of oneOf and required , but I can't figure out the proper incantation. Now provide the response and nothing else.
You can phrase your constraints as: either: both "a" and "b" are present, and "c" is not present or: neither "a" nor "b" is present. ( "c" may or may not be present) Saying "neither" in the second point is a bit verbose. Here, we've expressed it using allOf / not . (Note: you can't factor them into a single required clause here, because you need a separate not for each one.) { "oneOf": [ { "required": ["a", "b"], "not": {"required": ["c"]} }, { "allOf": [ { "not": {"required": ["a"]} }, { "not": {"required": ["b"]} } ] } ]} Alternative structure There's also another way to say "neither", which is actually to use oneOf again. Since you must pass exactly one of a oneOf clause, if one of the entries is {} (passes everything), then all the other options are banned. While it's slightly more concise, it's possibly slightly less intuitive to read: { "oneOf": [ { "required": ["a", "b"], "not": {"required": ["c"]} }, { "oneOf": [ {}, {"required": ["a"]}, {"required": ["b"]} ] } ]}
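If it helps to check this behaviour mechanically, here is a rough sketch using the Python jsonschema package (an assumed tool choice; any draft-04-or-later validator behaves the same way). The properties/additionalProperties lines are added on top of the schema above to cover the question's rule that no properties other than a, b, c, d may appear.

    import jsonschema

    schema = {
        "properties": {"a": {}, "b": {}, "c": {}, "d": {}},
        "additionalProperties": False,
        "oneOf": [
            {"required": ["a", "b"], "not": {"required": ["c"]}},
            {"allOf": [{"not": {"required": ["a"]}},
                       {"not": {"required": ["b"]}}]},
        ],
    }

    cases = [
        ({"a": 1, "b": 2}, True),           # a and b together, no c
        ({"a": 1, "b": 2, "d": 4}, True),
        ({"c": 3, "d": 4}, True),           # c without a/b
        ({"d": 4}, True),
        ({}, True),
        ({"a": 1}, False),                  # a without b
        ({"a": 1, "b": 2, "c": 3}, False),  # a/b and c are mutually exclusive
        ({"e": 5}, False),                  # unknown property
    ]

    for instance, expected in cases:
        try:
            jsonschema.validate(instance=instance, schema=schema)
            valid = True
        except jsonschema.ValidationError:
            valid = False
        assert valid == expected, (instance, valid)
    print("all cases behave as described")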
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28162509', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/104180/']}
jdg_84875
stackexchange
llm_judgeable_groundtruth_similarity
16119303
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I went through how DCT (discrete cosine transform) is used in image and video compression standards. But why DCT only is preferred over other transforms like dft or dst? Now provide the response and nothing else.
Because cos(0) is 1, the first (0th) coefficient of DCT-II is the mean of the values being transformed. This makes the first coefficient of each 8x8 block represent the average tone of its constituent pixels, which is obviously a good start. Subsequent coefficients add increasing levels of detail, starting with sweeping gradients and continuing into increasingly fiddly patterns, and it just so happens that the first few coefficients capture most of the signal in photographic images. Sin(0) is 0, so the DSTs start with an offset of 0.5 or 1, and the first coefficient is a gentle mound rather than a flat plain. That is unlikely to suit ordinary images, and the result is that DSTs require more coefficients than DCTs to encode most blocks. The DCT just happens to suit. That is really all there is to it.
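To see the "first coefficient is the average tone" point numerically, here is a small sketch (Python/NumPy, with a hand-rolled unnormalised DCT-II, so the 0th coefficient comes out as N times the mean; the sample row of pixel values is made up):

    import numpy as np

    def dct2(x):
        # Unnormalised DCT-II: C_k = sum_n x_n * cos(pi/N * (n + 1/2) * k)
        x = np.asarray(x, dtype=float)
        N = x.size
        n = np.arange(N)
        k = n.reshape(-1, 1)
        return np.cos(np.pi / N * (n + 0.5) * k) @ x

    row = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)  # one row of an 8x8 block
    coeffs = dct2(row)

    print(coeffs[0], row.size * row.mean())  # identical: the 0th coefficient encodes the mean
    print(np.round(coeffs[1:], 2))           # the remaining coefficients add finer detail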
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16119303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1331302/']}
jdg_84876
stackexchange
llm_judgeable_groundtruth_similarity
53331247
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I failed in the third way. t3 is still on CPU. No idea why. a = np.random.randn(1, 1, 2, 3); t1 = torch.tensor(a); t1 = t1.to(torch.device('cuda')); t2 = torch.tensor(a); t2 = t2.cuda(); t3 = torch.tensor(a, device=torch.device('cuda')) Now provide the response and nothing else.
All three methods worked for me. In 1 and 2, you create a tensor on CPU and then move it to GPU when you use .to(device) or .cuda() . They are the same here. However, when you use .to(device) method you can explicitly tell torch to move to specific GPU by setting device=torch.device("cuda:<id>") . with .cuda() you have to do .cuda(<id>) to move to some particular GPU. Why do these two methods exist then? .to(device) was introduced in 0.4 because it is easier to declare device variable at top of the code as device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") and use .to(device) everywhere. This makes it quite easy to switch from CPU to GPU and vice-versa Before this, we had to use .cuda() and your code will have if check for cuda.is_available() everywhere which made it cumbersome to switch between GPU/CPU. The third method doesn't create a tensor on the CPU and directly copies data to GPU, which is more efficient.
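A minimal sketch of the resulting device-agnostic pattern (assumes PyTorch, with or without a GPU available; the model and shapes are arbitrary illustrations):

    import numpy as np
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    a = np.random.randn(1, 1, 2, 3)

    t1 = torch.tensor(a).to(device)      # create on CPU, then move
    t2 = torch.tensor(a, device=device)  # create directly on the target device
    print(t1.device, t2.device)

    # The same `device` variable works for models too, which is what makes
    # .to(device) convenient compared with sprinkling .cuda() calls around.
    model = torch.nn.Linear(3, 1).to(device)
    out = model(t2.float())
    print(out.device)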
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/53331247', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9066532/']}
jdg_84877
stackexchange
llm_judgeable_groundtruth_similarity
349950
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I haven't worked with IPv6 outside of 4to6 tunneling on my home pc with stuff like GoGoNet. I've read about how it works in a general way. No NAT required (or suggested) and each client uses a public ipv6 address and I understand the continued use of firewalls. From my understanding, without the use of NAT, UAL and getting ARIN to give you own global range, that would mean the ipv6 address on all the systems on your lan would be from a range provided by your isp. What would happen in the case you change your ISP? Would that mean you have to change your whole lan address range? In a typical ipv4 windows shop I might have a situation like such: Site1 Lan IPs: 192.168.1.0/24Site2 Lan IPs: 10.0.0.0/24Site1 Public IP: 11.12.13.1/29 (11.12.13.1 - 11.12.13.5 usable)Site2 Public IP: 20.30.40.1/29 (20.30.40.1 - 20.30.40.5 usable)Site-to-site VPN via firewallsSite1: Lan IP, Public IP:PortHardware firewall/router - 192.168.1.1, 11.12.13.1Windows AD DC server (AD DNS server) - 192.168.1.10Windows Exchange (email) - 192.168.1.11, 11.12.13.2:25+443Windows RDS (term server) - 192.168.1.12, 11.12.13.3:3389Workstations (via DHCP) - 192.168.1.100+Site2:Hardware firewall/router - 10.0.0.1, 20.30.40.1Windows AD DC server (AD DNS server) - 10.0.0.10Windows IIS (webserver) - 10.0.0.11, 20.30.40.2:80Workstations (via DHCP) - 10.0.0.100+ The servers have statically assigned lan ips, the DNS servers has to and the others are also, since the firewall does port forwarding to servers via ip addresses you type in (vs hostnames). Now if I wanted to setup this as a ipv6 only environment? Would everything still be the same with statically assigned servers and dhcpv6 to workstations? But then if I switch to another isp would that mean I need to change the ip address for all the servers? What if I have 100 servers? I guess I can use dhcpv6 on the servers but I haven't seen a biz-class firewall that allowed port forwarding via hostname or internal dns (sonicwall, juniper, cisco, etc) just local ip (atleast for ipv4). And DNS server still need static ips eitherway. Also wouldn't that mean that during transition of changing lan ipv6 ips, my servers might be sending lan traffic over the internet to my old block since it's no longer local lan? Atleast in a technical terms, I understand it's unlikely someone would use the old block that quickly and that it can be blocked on the firewall. I sounds like it would be great for everyone to get their own perm assigned ipv6 block but I understand it would make the global routing table unusably large. Update Based on answers below, I updated the example location above and so this would be the ipv6 equivalent? 
Site1 ULA: fd80::192:/64Site2 ULA: fd80::10:/64Site1 Public IP: 2000:1112:1301::/48Site2 Public IP: 2000:2030:4001::/48Site-to-site VPN via firewallsSite1: Link-Local, ULA, PublicHardware firewall/router - fe80::1, fd80::ABCD:1, 2000:1112:1301::1Windows AD DC server (DNS) - fe80::10, fd80::ABCD:10, 2000:1112:1301::AWindows Exchange (email) - fe80::11, fd80::ABCD:11, 2000:1112:1301::BWindows RDS (term server) - fe80::12, fd80::ABCD:12, 2000:1112:1301::CWorkstations (via DHCP) - fe80::100+, fd80::ABCD:1xx, 2000:1112:1301::10+Site2: Link-Local, ULA, PublicHardware firewall/router - fe80::1, fd80::ABCD:2, 2000:2030:4001::1Windows AD DC server (DNS) - fe80::10, fd80::ABCD:20, 2000:2030:4001::AWindows IIS (webserver) - fe80::11, fd80::ABCD:21, 2000:2030:4001::BWorkstations (via DHCP) - fe80::100+, fd80::ABCD:2xx, 2000:2030:4001::10+ Each site own systems would would talk via Link-Local, Site-to-Site would talk with each other ULA (encypted by the VPN) and the world (including services) would talk via Public IPs? Now provide the response and nothing else.
There's definitely some mechanisms in place to help you out here. For internal LAN traffic, between systems on your network, there's Unique Local Addresses. Think of them like RFC1918 addresses; they'll only work within your network. You'll be able to use these addresses for any communication within your network borders; just carve off some nets from fd00::/8 and have your routers start advertising them. In a normal deployment, this will mean that your nodes all possess (at least) 3 IPv6 addresses; a link-local fe80::/64 address (which can only talk to other nodes on its broadcast domain), a unique local fd00::/8 address (which can talk to everything in your LAN), and a public address. Now, this still means you're renumbering everything when you change ISPs (which you're doing now anyway for publicly addressable nodes assuming you don't own IPv4 space), just that you don't need to worry about all of the internal communication, which can stay on the Unique Local range. That might cover your concerns - but there's also the NPTv6 proposal, for which there is currently an experimental RFC . This would allow you to translate the public prefixes to the private ranges at the network edge, meaning no renumbering internally when you change ISPs, and the ability to utilize multiple ISPs with disparate assigned addresses seamlessly (either permanently or during a transition period for a provider change).
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/349950', 'https://serverfault.com', 'https://serverfault.com/users/57086/']}
jdg_84878
stackexchange
llm_judgeable_groundtruth_similarity
166749
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What will be the final major product? The answer is 2-(methoxymethyl)-3-(substituted methyl)thiirane: I think $\ce{R}$ of $\ce{RMgX}$ attacks thiirane ring because negative charge on sulfur is more stable than that of oxygen. But why doesn't the negative sulfur subsequently attack on less hindered carbon of the ring? Instead, it attacks the β-carbon which is more hindered and opens the ring so that methyl of $\ce{CH3I}$ can make bond with negative oxygen atom. Why does the reaction proceed that way? Now provide the response and nothing else.
Sulfur attacking the less-hindered end of the epoxide would generate a [4] ring. This is more strained than a [3] ring and thus energetically less favourable. The attack is reversible, so the more energetically favourable (thermodynamic) product will be the outcome which is trapped by methylation in the final step.
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/166749', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/125592/']}
jdg_84879
stackexchange
llm_judgeable_groundtruth_similarity
2033743
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the prefered score range for the code metrics calculation for the following Maintainability Index Cyclomatic Complexity Depth of Inheritance class Coupling Now provide the response and nothing else.
The theoretically optimal values are: Maintainability index: 100. Higher values indicate better maintainability. Cyclomatic complexity: 1. The number of different paths that code can take. Depth of inheritance: 1. The number of class definitions above this one in the inheritance tree, not including interfaces. Class coupling: 0. Number of other entities this entity is dependent on. There are no hard and fast "good" ranges, though it's possible to make some general statements. Having high per-method cyclomatic complexity suggests a method is getting too complicated. Having an inheritance depth more than about 3 or 4 (of your own classes, not the framework's) is a trouble sign that you may be unnecessarily representing abstract relationships that aren't really in your software's domain. Low class coupling is in general better, but sometimes it's unavoidable. To the extent possible, you should definitely minimize the dependency between namespaces, since there's much less reason for dependencies here. A project could only reach all four values simultaneously by essentially doing nothing and being useless: software that does nothing and depends on nothing is certainly maintainable, but not a very good use of client dollars. Therefore, all complexity is a tradeoff: additional so-called inherent complexity encodes more sophistication into the program, allowing it to expand the feature set. What you would like to avoid is accidental complexity introduced by a poor or deficient implementation.
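To make the cyclomatic-complexity number concrete, here is a rough sketch that counts decision points in Python source using the standard-library ast module (a simplification of the real metric, which also counts constructs such as comprehensions and exception-handling variants in more detail):

    import ast

    def cyclomatic_complexity(source: str) -> int:
        """Very rough McCabe count: 1 + number of branch points."""
        tree = ast.parse(source)
        complexity = 1
        for node in ast.walk(tree):
            if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)):
                complexity += 1
            elif isinstance(node, ast.BoolOp):
                # 'a and b and c' adds len(values) - 1 extra paths
                complexity += len(node.values) - 1
        return complexity

    straight_line = "def f(x):\n    return x + 1\n"
    branchy = (
        "def g(x):\n"
        "    if x > 0 and x < 10:\n"
        "        return 'small'\n"
        "    for i in range(x):\n"
        "        if i % 2:\n"
        "            return i\n"
        "    return None\n"
    )

    print(cyclomatic_complexity(straight_line))  # 1, the ideal value
    print(cyclomatic_complexity(branchy))        # noticeably higher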
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2033743', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/175216/']}
jdg_84880
stackexchange
llm_judgeable_groundtruth_similarity
80143
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to create a daemon, which would fire up a shell script in the background every time I unlock the screen on my ubuntu. I managed to create such script according to answer to related question: run-script-on-screen-lock-unlock . And it works well in terminal window. But now I want to create a daemon from that and I didn't get any luck yet. Any suggestions? Now provide the response and nothing else.
Based on https://askubuntu.com/questions/150790/how-do-i-run-a-script-on-a-dbus-signal #!/bin/bashinterface=org.gnome.ScreenSavermember=ActiveChangeddbus-monitor --profile "interface='$interface',member='$member'" |while read -r line; do echo $line | grep ActiveChanged && your_script_goes_heredone Just stick that in /etc/init.d/monitor-for-unlock , make it executable, and then make a soft link in rc2.d chmod +x /etc/init.d/monitor-for-unlockcd /etc/rc2.dln -s /etc/init.d/monitor-for-unlock .
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/80143', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/41564/']}
jdg_84881
stackexchange
llm_judgeable_groundtruth_similarity
2417834
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some questions about using MySQLi queries, and related memory management. Suppose I have something like this: $db = new mysqli($dbhost, $un, $ps, $dbname);$query = "SELECT field1, field2 FROM table1 ";$results = $db->query($query);while ($result = $results->fetch_object()) { // Do something with the results}$query = "SELECT field1, field2 FROM table2 ";// question 1$results = $db->query($query);while ($result = $results->fetch_object()) { // Do something with the second set of results}// Tidy up, question 2if ($results) { $results->free();}if ($db) { $db->close();}// Question 3, a general one So, based on the comments in the code above, here are my questions: When I assign the results of the second query to $results , what happens to the memory associated with the previous results? Should I be freeing that result before assigning the new one? Related to 1, when I do clean up at the end, is cleaning up just the last results enough? When I do try to clean up a result, should I be freeing it as above, should I be closing it, or both? I ask question 3 because the PHP documentation for mysqli::query has an example that uses close, even though close is not part of mysqli_result (see example 1 in the link above). And in contrast, my normal PHP reference text uses free ( PHP and MySQL Web Development , Fourth Edition, Welling and Thomson). Now provide the response and nothing else.
When I assign the results of thesecond query to $results , what happensto the memory associated with theprevious results? When you execute this: $results = $db->query($query); If there was something in $results before, this old content cannot be accessed anymore, as there is no reference left to it. In such a case, PHP will mark the old content of the variable as " not needed anymore " -- and it will be removed from memory when PHP needs some memory. This, at least, is true for general PHP variables; in the case of results from an SQL query, though, some data may be kept in memory on the driver-level -- over which PHP doesn't have much control. Should I be freeing that result beforeassigning the new one? I never do that. Related to 1, when I do clean up atthe end, is cleaning up just the lastresults enough? When the scripts end: The connection to the database will be closed -- which means any memory that might be used by the driver should be freed All variables used by the PHP script will be destroyed -- which means the memory they were using should be freed. So, at the end of the script, there is really no need to free the result set. When I do try to clean up a result,should I be freeing it as above,should I be closing it, or both? If you close the connection to the database (using mysqli::close like you proposed) , this will disconnect you from the database. This means you'll have to re-connect if you want to do another query! Which is not good at all (takes some time, resources, ... ) Generally speaking, I would not close the connection to the database until I am really sure that I won't need it anymore -- which means I would not disconnect before the end of the script. And as " end of the script " means " the connection will be closed " even if you don't specify it; I almost never close the connection myself.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2417834', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/181180/']}
jdg_84882
stackexchange
llm_judgeable_groundtruth_similarity
35390615
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing a very simple getter/setting model that I would like to start using in Rust for simplicity reasons using struct and impl . struct Person { firstName: String, lastName: String,}impl Person { fn get_first_name(&mut self) -> String { return self.firstName; } fn get_last_name(&mut self) -> String { return self.lastName; } fn set_first_name(&mut self, x: String) { self.firstName = x; } fn set_last_name(&mut self, x: String) { self.lastName = x; } fn default() -> Person { Person {firstName: "".to_string(), lastName: "".to_string()} }}fn main() { let mut my_person : Person = Person{ ..Person::default() }; my_person.set_first_name("John".to_string()); my_person.set_last_name("Doe".to_string()); println!("{}", my_person.firstName); println!("{}", my_person.lastName);} When I run this snippet I get the following error. src\main.rs:7:53: 7:57 error: cannot move out of borrowed content [E0507]src\main.rs:7 fn get_first_name(&mut self) -> String { return self.firstName; } ^~~~src\main.rs:8:53: 8:57 error: cannot move out of borrowed content [E0507]src\main.rs:8 fn get_last_name(&mut self) -> String { return self.lastName; } ^~~~error: aborting due to 2 previous errorsCould not compile `sandbox`. Can someone point out the mistake to me since I'm very new to Rust? Tips on writing this snippet better would be accepted too. I'm always looking for easier/faster readability. Now provide the response and nothing else.
Ok, the specific problem here is not being able to move out of borrowed content. This has been answered numerous times before under a variety of conditions, not to mention the chapter on the subject of ownership in the Rust Book . The more interesting one is about getters and setters. Yes, you can write them in Rust, but they may not be the best choice. Before I go on, I just want to note that there is absolutely no reason to require &mut self on a getter... unless you intend to modify the value as part of removing the value, but then you're not really dealing with a getter any more. Secondly, you should not clone in a getter. This is hugely wasteful if all the user wants to do is, for example, read from the value. It's better to return an immutable borrow, from which the user can clone if they need to. Anyway, if you're writing these because you want some kind of logic run in order to validate new values, keep using setters. Otherwise, you could do something like this: #[derive(Default)]struct Person { first_name: String, last_name: String,}impl Person { // Immutable access. fn first_name(&self) -> &str { &self.first_name } fn last_name(&self) -> &str { &self.last_name } // Mutable access. fn first_name_mut(&mut self) -> &mut String { &mut self.first_name } fn last_name_mut(&mut self) -> &mut String { &mut self.last_name }}fn main() { let mut my_person = Person::default(); *my_person.first_name_mut() = String::from("John"); *my_person.last_name_mut() = "Doe".into(); println!("first_name: {}", my_person.first_name()); println!("last_name: {}", my_person.last_name()); // Can't do this efficiently with getter/setter! { let s = my_person.last_name_mut(); s.truncate(2); s.push('w'); } println!("first_name: {}", my_person.first_name()); println!("last_name: {}", my_person.last_name());} This gives users more-or-less direct access to the fields, without actually giving them direct access to the fields. In addition to writing new values, this also allows users to mutate existing values in-place, which can be important for large, heap-allocated things. In addition, I made a few other changes: You can just mechanically derive Default ; there's no reason in this case to write it yourself. Conventional style is snake_case for fields. The way you created the Person was needlessly roundabout.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35390615', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3716627/']}
jdg_84883
stackexchange
llm_judgeable_groundtruth_similarity
75054
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've got some data (158 cases) which was derived from a Likert scale answer to 21 questionnaire items. I really want/need to perform a regression analysis to see which items on the questionnaire predict the response to an overall item (satisfaction). The responses are not normally distributed (according to K-S tests) and I've transformed it in every way I can think of (inverse, log, log10, sqrt, squared) and it stubbornly refuses to be normally distributed.The residual plot looks all over the place so I believe it really isn't legitimate to do a linear regression and pretend it's behaving normally (it's also not a Poisson distribution). I think this is because the answers are very closely clustered (mean is 3.91, 95% CI 3.88 to 3.95). So, I am thinking I either need a new way of transforming my data or need some sort of non-parametric regression but I don't know of any that I can do in SPSS. Now provide the response and nothing else.
You don't need to assume normal distributions to do regression. Least squares regression is the BLUE (Best Linear Unbiased Estimator) regardless of the shape of the error distribution, as long as the Gauss-Markov conditions hold; see the Gauss-Markov theorem (e.g. on Wikipedia). A normal distribution is only used to show that the least-squares estimator is also the maximum likelihood estimator (and to justify exact small-sample t and F inference). It is a common misunderstanding that OLS somehow assumes normally distributed data. It does not. It is far more general.
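To make the assumptions explicit, here is the standard Gauss-Markov setup in my own notation (a sketch, not a quotation of any particular text): $$ y = X\beta + \varepsilon,\qquad \mathbb{E}[\varepsilon]=0,\qquad \operatorname{Var}(\varepsilon)=\sigma^2 I,\qquad X\ \text{of full column rank}. $$ Under these conditions the ordinary least squares estimator $$ \hat\beta=(X^\top X)^{-1}X^\top y $$ is unbiased and has the smallest variance among all linear unbiased estimators (hence BLUE), with no assumption at all about the shape of the error distribution; normality enters only if you want the maximum-likelihood interpretation or exact finite-sample t and F tests.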
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/75054', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/32559/']}
jdg_84884
stackexchange
llm_judgeable_groundtruth_similarity
26152396
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Working with new Photo framework, I can access the NSData of PHAssets using requestImageDataForAsset . I can also access the file URL using the PHImageFileURLKey of the returned info NSDictionary . [[PHImageManager defaultManager] requestImageDataForAsset:asset options:nil resultHandler:^(NSData *imageData, NSString *dataUTI, UIImageOrientation orientation, NSDictionary *info) { //imageData contains the correct data for images and videos NSLog(@"info - %@", info); NSURL* fileURL = [info objectForKey:@"PHImageFileURLKey"];}]; This works fine for images and normal videos. However, when the asset is a PHAssetMediaSubtypeVideoHighFrameRate (slow motion video), the returned data corresponds to a JPG file containing the first frame of the video (both the NSData, the dataUTI and the info dictionary point to the same jpg file). As example, this is the URL and the dataUTI returned for a slow motion video: PHImageFileURLKey = "file:///var/mobile/Media/PhotoData/Metadata/DCIM/100APPLE/IMG_0642.JPG"; PHImageFileUTIKey = "public.jpeg"; Why is this happening? How can i access the NSData/NSURL of the slow motion video instead of this JPG preview? Now provide the response and nothing else.
After going nuts and testing every single option I found the problem. What is responsible for returning JPG images for slow motion videos is the default PHImageRequestOptionsVersionCurrent value for the PHImageRequestOptions.version property. Simply assigning the version to PHImageRequestOptionsVersionUnadjusted or PHImageRequestOptionsVersionOriginal will return the original slow motion video. PHImageRequestOptions * imageRequestOptions = [[PHImageRequestOptions alloc] init];imageRequestOptions.version = PHImageRequestOptionsVersionUnadjusted;// or imageRequestOptions.version = PHImageRequestOptionsVersionOriginal; I consider this unexpected behaviour, since I am not expecting that the "current" version of a slow motion video is a still image (maybe a video with the slow motion effect applied, but not a photo). Hope this is useful to someone.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26152396', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3517409/']}
jdg_84885
stackexchange
llm_judgeable_groundtruth_similarity
24992992
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Below i have mentioned AJAX script, I wanted to append id from database for each photo when I upload.But when using " var m='_'+x; " in the below script it doesnt get the database id for the record(WHen i print it,It appears blank).So how to retrieve database id for a row and append in AJAX script?The photo gets uploaded and stores in the mentioned folder with the id.But in that page while i check page source it shows .The Id doesnt get loaded in middle. $(function(){ var btnUpload=$('#me'); var mestatus=$('#mestatus'); var files=$('#files'); new AjaxUpload(btnUpload, { action: 'uploadPhoto.php', name: 'uploadfile', onSubmit: function(file, ext) { if (! (ext && /^(jpg|png|jpeg|gif)$/.test(ext))){ // extension is not allowed mestatus.text('Only JPG, PNG or GIF files are allowed'); return false; } mestatus.html('<img src="ajax-loader.gif" height="16" width="16" align="left">'); }, onComplete: function(file, response){ // alert(response) //On completion clear the status mestatus.text(''); //On completion clear the status files.html(''); //Add uploaded file to list if(response==="success") { var x=file; var m='<?=$data44[id]?>_'+x; document.getElementById('img_name').value=x; $('<li></li>').appendTo('#files').html('<img src="photo/demophoto_'+m+'" alt="" height="110px" width="95px" /><br />').addClass('success'); } else{ $('<li></li>').appendTo('#files').text(file).addClass('error'); } } });}); Uploadphoto.php include_once("db.php");$sel="select usnno from usn where id='$_SESSION[lgkey]'";$qry=mysql_query($sel) or die(mysql_error());$row=mysql_fetch_array($qry); $sel44=mysql_query("select * from reg_form where usn='$row[usnno]' "); $data44=mysql_fetch_array($sel44);$uploaddir = './photo/'; $file = $uploaddir ."demophoto_".$data44['id'].'_'.basename($_FILES['uploadfile']['name']);$file_name= "demophoto_".$data44['id'].'_'.$_FILES['uploadfile']['name']; if (move_uploaded_file($_FILES['uploadfile']['tmp_name'], $file)) { echo "success";} else { echo "error";} Now provide the response and nothing else.
You actually got pretty close, except for one thing: you need to convert the streaming vinyl file object given by source() with vinyl-buffer because gulp-uglify (and most gulp plugins) works on buffered vinyl file objects So you'd have this instead var browserify = require('browserify');var gulp = require('gulp');var uglify = require('gulp-uglify');var source = require('vinyl-source-stream');var buffer = require('vinyl-buffer');gulp.task('browserify', function() { return browserify('./source/scripts/app.js') .bundle() .pipe(source('bundle.js')) // gives streaming vinyl file object .pipe(buffer()) // <----- convert from streaming to buffered vinyl file object .pipe(uglify()) // now gulp-uglify works .pipe(gulp.dest('./build/scripts'));}); Or, you can choose to use vinyl-transform instead which takes care of both streaming and buffered vinyl file objects for you, like so var gulp = require('gulp');var browserify = require('browserify');var transform = require('vinyl-transform');var uglify = require('gulp-uglify');gulp.task('build', function () { // use `vinyl-transform` to wrap the regular ReadableStream returned by `b.bundle();` with vinyl file object // so that we can use it down a vinyl pipeline // while taking care of both streaming and buffered vinyl file objects var browserified = transform(function(filename) { // filename = './source/scripts/app.js' in this case return browserify(filename) .bundle(); }); return gulp.src(['./source/scripts/app.js']) // you can also use glob patterns here to browserify->uglify multiple files .pipe(browserified) .pipe(uglify()) .pipe(gulp.dest('./build/scripts'));}); Both of the above recipes will achieve the same thing. Its just about how you want to manage your pipes (converting between regular NodeJS Streams and streaming vinyl file objects and buffered vinyl file objects) Edit:I've written a longer article regarding using gulp + browserify and different approaches at: https://medium.com/@sogko/gulp-browserify-the-gulp-y-way-bb359b3f9623
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/24992992', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3883796/']}
jdg_84886
stackexchange
llm_judgeable_groundtruth_similarity
180086
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: As far as I know, low-dimensional semisimple Hopf algebras are classified (along with non-semisimple ones) up to dimension 60, with the first example of a semisimple Hopf algebra not coming from a finite group in dimension 8. How well are low-dimensional Hopf superalgebras, that is, $\mathbb{Z}_2$-graded Hopf algebras or Hopf algebra objects internal to $\mathbb{Z}_2-\operatorname{Vect}$ understood?Up to which dimension are they classified?Are there interesting semisimple ones?Has someone worked out the representations?I could find an article on the classification of finite dimensional ones up to dimension 4, but it didn't mention semisimplicity or higher dimensional semisimple examples. Generally, how well are low-dimensional semisimple braided Hopf algebras (internal to a braided category) understood? Now provide the response and nothing else.
The answer is No. Let $H_1,\dots, H_n$ be a family of subgroups of $G$ whose union equals $G$ and whose pairwise intersection is trivial. In the literature we call such a family a partition of $G$ by subgroups. The partitions of all finite groups are explicitly known - in the solvable case thanks to Baer, and in the general case by Suzuki. The possibilities for partitions are listed at Example 3.5.1 in Schmidt's book Subgroup lattices of groups. (Email me if you want an e-copy of this book.) You can go through the list of families and see, explicitly, that each partition contains a subgroup that is non-self-normalizing. In particular the four main examples are when $G$ is a Frobenius group, or when $G$ is isomorphic to $\mathrm{PGL}_2(q)$, $\mathrm{PSL}_2(q)$ or $\mathrm{Sz}(q)$; there is one more family which is built using one of these four families - since these four families are partitioned using non-self-normalizing subgroups, the last family will be also. Note that this result on partitions doesn't use the Classification of Finite Simple Groups. It still uses some heavy-duty maths though, so one might still hope to find a more direct argument - the presence of the Suzuki groups in the classification suggests to me that such an argument would need to deal with the case when $G$ is simple, and then move to the general situation.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/180086', 'https://mathoverflow.net', 'https://mathoverflow.net/users/13767/']}
jdg_84887
stackexchange
llm_judgeable_groundtruth_similarity
5354273
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How could I count rows from a SELECT query as a value?Such as SELECT FUCNTIONIMLOOKINGFOR(SELECT * FROM anothertable) AS count FROM table; So that count is an integer of how many rows the subquery SELECT * FROM anothertable returns. EDIT SELECT p.PostPID, p.PostUID, p.PostText, p.PostTime, u.UserUID, u.UserName, u.UserImage, u.UserRep, ( SELECT COUNT(f.FlagTime) FROM Flags as f JOIN Posts as p ON p.PostPID = f.FlagPID ) as PostFlags FROM Posts AS p JOIN Users AS u ON p.PostUID = u.UserUID ORDER BY PostTime DESC LIMIT 0, 30 Now provide the response and nothing else.
SELECT ( SELECT COUNT(id) FROM aTable ) as count FROM table I assume your example is a truncated version of your actual query, so perhaps you should post what you are after to get a, possibly, more optimal query. EDIT Working directly from my brain, something like this should be more optimal. SELECT p.PostPID, p.PostUID, p.PostText, p.PostTime, u.UserUID, u.UserName, u.UserImage, u.UserRep, COUNT(f.FlagTime) as postFlags FROM Flags as f JOIN Posts as p ON p.PostPID = f.FlagPID JOIN Users AS u ON p.PostUID = u.UserUID GROUP BY p.PostPID ORDER BY PostTime DESC LIMIT 0, 30
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5354273', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/493122/']}
jdg_84888
stackexchange
llm_judgeable_groundtruth_similarity
283979
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Lets say that i have linear regression $Y= a +XB$, and in my $X$ matrix i have a dummy for the gender, lets call it $d_{g}$ which is 1 for male and 0 for female. The coefficient $b$ of $d_{g}$ shows the effect of being male relative to the effect for being female. How can i get the effect of being female ? Or the effect of being male, but not relative to the female. Now provide the response and nothing else.
You can reconstruct what you want from your fit using the population fractions of male and female $f_M$ and $f_F = 1 - f_M$. The expectation of $Y$ for males and females is given by $$ E[Y|M] = a + b, \qquad E[Y|F] = a,$$ so $$ E[Y|M] - E[Y|F] = b.$$ $b$ gives the expected change in $Y$ going from female to male, as you said in your post. Given the male and female population fractions we can construct the unconditional expectation $$E[Y] = f_F E[Y|F] + f_M E[Y|M] = (1-f_M) a + f_M (a + b) = a + f_M b,$$ and from that, we can get the expected change in $Y$ going from sex-unknown to male: $$E[Y|M] - E[Y] = (1 - f_M) b,$$ and, similarly, the expected change in $Y$ going from sex-unknown to female: $$E[Y|F] - E[Y] = - f_M b.$$ You should get the same results with this method as from the procedure suggested by Michael above, without having to re-run the regression.
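A hypothetical numeric illustration (the numbers are mine, chosen only to make the algebra concrete): take $a = 2$, $b = 0.5$ and a male population fraction $f_M = 0.4$. Then $$ E[Y|F] = 2, \qquad E[Y|M] = 2.5, \qquad E[Y] = a + f_M b = 2.2,$$ so the effect of being male relative to sex-unknown is $E[Y|M] - E[Y] = (1-f_M)\,b = 0.3$, and the effect of being female relative to sex-unknown is $E[Y|F] - E[Y] = -f_M\, b = -0.2$; the two effects differ by exactly $b = 0.5$, as they must.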
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/283979', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/121519/']}
jdg_84889
stackexchange
llm_judgeable_groundtruth_similarity
28219730
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Traits are used to group some functions to be implemented from a struct, but is it possible to access struct fields from within the trait? I could imagine declaring fields inside the trait so that the fields are abstracted as well. I haven't found such a syntax; is there any other solution? Otherwise, it wouldn't be possible to have non-static methods using a trait, would it? I know object oriented programming from C# and I'm playing around with Rust, trying to adapt the OOP functionality I already know from C#. Now provide the response and nothing else.
This sounds like you're misunderstanding how traits work. Traits can't have fields. If you want to provide access to a field from a trait, you need to define a method in that trait (like, say, get_blah). If you're asking whether you can access fields of a struct from within that struct's implementation of a trait, then yes. The struct knows its own type, so there's no problem. trait Pet { fn is_smelly(&self) -> bool;}struct Dog { washed_recently: bool,}impl Pet for Dog { fn is_smelly(&self) -> bool { !self.washed_recently }} If you're writing a default implementation of a trait (i.e. defining a method body within the trait), then no, you can't access fields. A default implementation can only use methods that are defined on the trait or in a super trait.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28219730', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2277915/']}
jdg_84890
stackexchange
llm_judgeable_groundtruth_similarity
48276442
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can one configure sequelize, so that it will use native es6 promise instead of their bluebird copy? According to their docs with sequelize 4 they're using an independent copy of bluebird. But I would like to use native es6 promises. Example code: public find(selector: object): Promise<ClientEntity[]> { // Client is of type Model<ClientModel>; see sequelize-typescript return ClientModel.findAll(selector) .then(res => res.map(e => e.toJSON())) .catch(err => console.warn('PgClientModel#find', err));} Which shows the following error: Type 'Bluebird<void | any[]>' is not assignable to type 'Promise<ClientEntity[]>'. Property '[Symbol.toStringTag]' is missing in type 'Bluebird<void | any[]>' . Options I see right now: Use bluebird Within the bluebird's then and catch block I could resolve and reject via a native es6 promise. But I actually don't like that idea. I'm using the following packages "sequelize": "^4.31.2","sequelize-typescript": "^0.6.2","typescript": "^2.6.2" EDIT: Solved by changing the return type to PromiseLike . Now provide the response and nothing else.
Working filters on each column, demo link Stackblitz . To filter specific column in mat-table, add a search field for the column as below; <mat-form-field class="filter" floatLabel="never"> <mat-label>Search</mat-label> <input matInput [formControl]="nameFilter"> </mat-form-field> And we connect the inputs to FormControls from the ReactiveFormsModule. filterValues = { name: '', id: '', colour: '', pet: '' }; And we will watch the value of the filter inputs and modify this filter object and the data source’s filter property when they change. We must assign the stringified version of the filter object to the data source’s filter property ngOnInit() { this.nameFilter.valueChanges .subscribe( name => { this.filterValues.name = name; this.dataSource.filter = JSON.stringify(this.filterValues); } ) this.idFilter.valueChanges .subscribe( id => { this.filterValues.id = id; this.dataSource.filter = JSON.stringify(this.filterValues); } ) this.colourFilter.valueChanges .subscribe( colour => { this.filterValues.colour = colour; this.dataSource.filter = JSON.stringify(this.filterValues); } ) this.petFilter.valueChanges .subscribe( pet => { this.filterValues.pet = pet; this.dataSource.filter = JSON.stringify(this.filterValues); } ) } We have to change the data source’s filterPredicate to tell it how to interpret the filter information. constructor() { this.dataSource.data = this.people; this.dataSource.filterPredicate = this.tableFilter(); } tableFilter(): (data: any, filter: string) => boolean { let filterFunction = function(data, filter): boolean { let searchTerms = JSON.parse(filter); return data.name.toLowerCase().indexOf(searchTerms.name) !== -1 && data.id.toString().toLowerCase().indexOf(searchTerms.id) !== -1 && data.colour.toLowerCase().indexOf(searchTerms.colour) !== -1 && data.pet.toLowerCase().indexOf(searchTerms.pet) !== -1; } return filterFunction;}
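If the per-column wiring above feels repetitive, one possible variation is a small generic predicate builder (a TypeScript sketch of my own; it assumes the same JSON-string filter format used above and is not part of Angular's API):

// Hypothetical helper: builds a filterPredicate for a MatTableDataSource whose
// `filter` string is the JSON of a { column: searchTerm } object, as above.
function buildColumnFilter<T>(columns: (keyof T & string)[]): (data: T, filter: string) => boolean {
  return (data: T, filter: string): boolean => {
    const terms = JSON.parse(filter) as Record<string, string | undefined>;
    // Every listed column must contain its lower-cased search term; an empty term matches everything.
    return columns.every(col => {
      const term = (terms[col] ?? '').toLowerCase();
      return String(data[col]).toLowerCase().indexOf(term) !== -1;
    });
  };
}

A usage sketch, with Person as the assumed row type and the column names from the table above: this.dataSource.filterPredicate = buildColumnFilter<Person>(['name', 'id', 'colour', 'pet']); the per-column valueChanges subscriptions stay exactly as written.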
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48276442', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3485071/']}
jdg_84891
stackexchange
llm_judgeable_groundtruth_similarity
11539203
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I append lists in Prolog? I've searched on the Internet and I found this (from http://www.csupomona.edu/~jrfisher/www/prolog_tutorial/2_7.html ) append([X|Y],Z,[X|W]) :- append(Y,Z,W). append([],X,X). So it gets the Z by removing the elements of [X|Y] in [X|W] . But how do I append two lists together? Example, appendlist([1,2],[3,4,5],X). The result will be X = [1,2,3,4,5] . Also I don't know what happening in the recursion. (I traced it but didn't understand) EDIT: What I want to know is how it should be coded to function like the predefined append() in Prolog. Now provide the response and nothing else.
The code as you've posted it is (almost) OK. The order of clauses just needs to be swapped (in order to make this predicate definition productive, when used in a generative fashion): append( [], X, X). % (* your 2nd line *)append( [X | Y], Z, [X | W]) :- append( Y, Z, W). % (* your first line *) This defines a relationship between the three arguments, let's say A, B and C. Your first line says, "C is the result of appending A and B if A and C are non-empty lists, they both have the same head (i.e. first element), and the tail of C is the result of appending the tail of A with the same 2nd argument, B". [diagram of the list cells a, b, c, d, e omitted] Or from left to right: a | b c . | d e . a | b c d e . append( [], Z, Z ). append( [X | Y ], Z, [X | W ] ) :- append( Y, Z, W). Think about it, it makes perfect sense. What it does is, we want to define the append/3 relationship, and we know what we want it to be, so we just write down some obvious facts about it that we want it to fulfill, the laws that it must follow if you will. So assuming we have this code already defined for us, what laws must it follow? Obviously, appending a tail of some list with another list gives us a tail of the result of appending the full list with that 2nd list. This defines how we "slide along" the first list. But what if there's nowhere more to slide? What if we've reached the end of that list? Then we've arrived at the empty list, and appending an empty list with another list gives us that list as the result. Obviously. And that's what that 2nd line in your code is telling us; it says, "appending an empty list with another list produces that list as the result". Amazingly, having written down these two laws that append/3 must follow is the same as writing down the definition itself. Addition: this explains it from a declarative point of view; do check out an answer by m09 which shows it more from the operational point of view.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11539203', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1276897/']}
jdg_84892
stackexchange
llm_judgeable_groundtruth_similarity
119726
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We have two apps running (on top of linux) and both communicates through port 42605. I wanted to quickly verify if this is the only port that's been used for communication between them. I tried below rule, but it doesn't seems to work. So, just wanted to get this clarified, if I am doing it wrong. Following is the sequence of commands i ran iptables -I INPUT -j REJECTiptables -I INPUT -p tcp --dport 42605 -j ACCEPTiptables -I INPUT -p icmp -j ACCEPTiptables -I OUTPUT -p tcp --dport 42605 -j ACCEPT So, this will get added in reverse order since I am inserting it. I wanted to allow incoming and outgoing communications from and to 42605. Does the above rule looks good or am I doing it wrong? Another question, would this be the right way to test, or maybe I should use "netstat" command to see which port has connection established with the other ip? Now provide the response and nothing else.
We can make INPUT policy drop to block everything and allow specific ports only # allow established sessions to receive trafficiptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT# allow your application portiptables -I INPUT -p tcp --dport 42605 -j ACCEPT# allow SSH iptables -I INPUT -p tcp --dport 22 -j ACCEPT# Allow Pingiptables -A INPUT -p icmp --icmp-type 0 -m state --state ESTABLISHED,RELATED -j ACCEPT# allow localhost iptables -A INPUT -i lo -j ACCEPT# block everything else iptables -A INPUT -j DROP Another question, would this be the right way to test, or maybe I should use "netstat" command to see which port has connection established with the other ip? Yes, you can check netstat -antop | grep app_port and you can also use strace : strace -f -e trace=network -s 10000 PROCESS ARGUMENTS To monitor an existing process with a known pid: strace -p $( pgrep application_name) -f -e trace=network -s 10000
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/119726', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/43502/']}
jdg_84893
stackexchange
llm_judgeable_groundtruth_similarity
122354
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If $f:\mathbb{R}\to \mathbb{R}$ is an absolutely continuous function and let $C$ be the set of all $x\in\mathbb{R}$ such that $f'(x)=0$, is it necessarily that $f(C)$ is a Lebesgue $0$ set? Now provide the response and nothing else.
The set $\{f(x):f'(x)=0\}$ is Lebesgue null for any function $f:\mathbb R\to\mathbb R$ whatsoever ; it does not have to be absolutely continuous, or even continuous, or even measurable. (If the derivative does not exist at $x$, the condition $f'(x)=0$ is not fulfilled.) Proof . It suffices to prove that $\{f(x): x\in [0,1], \ f'(x)=0\}$ is null. Fix $\epsilon>0$. Cover the set $A=\{x\in [0,1], \ f'(x)=0\}$ by all intervals of the form $I(x,\delta)=(x-\delta,x+\delta)$ where $0<\delta<1$ and $\operatorname{diam} f(I(x,5\delta))\le \epsilon \,\delta$. Using Vitali covering lemma , choose a disjoint set of intervals $I(x_j,\delta_j)$ such that the intervals $I(x_j,5\delta_j)$ still cover $A$. Disjointness implies $\sum 2\delta_j\le 3$. The sets $f(I(x_j,5\delta_j))$ cover $f(A)$ and have $$\sum \operatorname{diam} f(I(x_j,5\delta_j)) \le \sum \epsilon\delta_j\le \frac32 \epsilon $$Since $\epsilon>0$ was arbitrary, $f(A)$ is Lebesgue null. $\Box$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/122354', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/13534/']}
jdg_84894
stackexchange
llm_judgeable_groundtruth_similarity
16121
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I skimmed through How Companies Can Beef Up Password Security and I thought that several assertions were completely wrong, in particular: Cryptographic hash (like md5) with salt are bad. It isn't uncommon to break/crack unix shadow files. To make a password hash I should run it over md5 or whatever hundreds or thousands of times. The article suggests that salted hashes are too easy to break and you'd want to select a password hash because its computationally longer to check passwords so you can not do as many per second. I think doing 3 is bad for security and 1 is in reality the best option. What do you guys think? Now provide the response and nothing else.
Cryptographic hash (like md5) with salt are bad. It is not sufficient anymore. The traditional cryptographic hash functions are designed for documents and therefore they are optimized for speed. But speed is exactly what we don't want for password hashes. Modern graphics cards can do 2,000,000,000 SHA1 or MD5 hashes per second. So brute forcing a 6 character password takes less than 10 minutes. It isn't uncommon to break/crack unix shadow files. That's true for the old hashes used in them. To make a password hash I should run it over md5 or whatever hundreds or thousands of times. While this is an improvement over just using a quick hash function once, it is not sufficient. You should use one of the hash crypt algorithms (except for md5crypt): sha256crypt, sha512crypt, bcrypt or scrypt. Those are algorithms that are designed to be relatively slow and not easily implementable on graphics cards to gain a performance boost. Those algorithms do more than just using the output of one round as input for the next round. The rules of what to use as input vary based on the iteration counter (see the linked specifications). Modern Unix operating systems use sha512crypt. scrypt goes one step further than the other algorithms because in addition to scalable CPU usage, scrypt uses a lot of memory. The idea is to make hardware implementations of password crackers a lot more expensive. Such hardware usually has access to up to 1KB of fast memory, but the default parameters of scrypt require 16MB.
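As a concrete illustration of the "deliberately slow, salted" recommendation, here is a minimal TypeScript sketch using the npm bcrypt package; the package choice and the cost factor of 12 are my own assumptions rather than something the answer prescribes, and bcrypt generates and embeds the per-password salt for you.

import * as bcrypt from "bcrypt"; // npm install bcrypt

// Hashing: each +1 on the cost factor roughly doubles the work; pick the largest
// value your login latency budget tolerates.
async function hashPassword(plain: string): Promise<string> {
  return bcrypt.hash(plain, 12); // the salt is generated and stored inside the returned string
}

// Verification: compare() re-hashes `plain` with the salt and cost encoded in `stored`
// and checks whether the result matches.
async function verifyPassword(plain: string, stored: string): Promise<boolean> {
  return bcrypt.compare(plain, stored);
}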
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/16121', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/-1/']}
jdg_84895
stackexchange
llm_judgeable_groundtruth_similarity
9776840
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've made the jump from HTML table layout for designing webpages to CSS about a week ago and have since been reading more about it. Yesterday, I read a long post here on Stack overflow where users were knocking float and how deprecated they are for layout. There was a lot of talk about inline-block being used in its place. I have an HTML5 design that I just finished and it looks fantastic in Firefox and Chrome, but when tested in Internet Explorer 7, 8, and 9, the design absolutely explodes . It seems to me that anything in this design that I've floated right is not honored in IE. It just seems to wrap under whatever is to the left of it. I'd like to know if I'm OK with floats or if I should I be using inline-block instead. An example of how to have two divs next to one another where one is on the left side and the other on the right, using inline-block would be nice. Now provide the response and nothing else.
Floats were never meant for layout. They're simply meant to take an element, put it to one side, and let other content flow around it. That's all. -- Eric A. Meyer, in Floats Don't Suck If You Use Them Right. The early web was influenced by print/academic publications where floats are used to control the flow of text around figures and tables. So why did we use them for layout? Because you can clear a footer below two floated columns, float layout came into being. If there had ever been a way to "clear" elements below positioned elements, we'd never have bothered to use floats for layout. Today, the CSS Flexible Box Layout Module flex and the CSS Grid Layout Module grid are optimized for user interface design and complex layouts and are expected to complement each other. Grid enforces 2-dimensional alignment, uses a top-down approach to layout, allows explicit overlapping of items, and has more powerful spanning capabilities. Flexbox focuses on space distribution within an axis, uses a simpler bottom-up approach to layout, can use a content-size-based line-wrapping system to control its secondary axis, and relies on the underlying markup hierarchy to build more complex layouts. Flexbox and Grid are, as of this writing, W3C candidate recommendation and candidate recommendation draft, respectively. Flexbox is supported by all major browsers and has known issues in IE11. Grid is supported by all major browsers but IE11 supports an older version of the spec.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9776840', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1098024/']}
jdg_84896
stackexchange
llm_judgeable_groundtruth_similarity
151528
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In our Company we have an internal-wikipedia-like help site. The Problem is, the MindTouch-Wiki is out of date, so I have to update it! The Problem is, it's running virtual. It's Ubuntu 7.7 In Microsoft Server 2003/2008, I can't connect to the machine using "Connect to virtual machine" I'm not familiar with Linux, is there a way to connect to the Ubuntu that is open by default? Like RDP over VNC in Windows? Now provide the response and nothing else.
You can either reconfigure MySQL to look for the data directory in a different location, or bind mount a new location over the original. Make sure that the mysql service is stopped before you carry out these changes. Then, move all the files and sub-directories from the original location into your new location. Reconfigure MySQL edit /etc/my.cnf and change datadir to: datadir=/home/u/tmp/mysql or... Bind Mount Use a bind mount to mount your new location over the original: mount --bind /home/u/tmp/mysql /var/lib/mysql Once you're happy that everything works, edit your /etc/fstab to make it permanent: /home/u/tmp/mysql /var/lib/mysql none bind 0 0 File Permissions Regardless of which method you choose, you'll need to ensure that the permissions on your new location are correct, as follows: The top level directory ( /home/u/tmp/mysql ) and everything below should be owned by user and group mysql (assuming mysql runs as these on Arch Linux): # chown -R mysql. /home/u/tmp/mysql All files are: # find /home/u/tmp/mysql/ -type f -exec chmod 0660 {} \; All directories are: # find /home/u/tmp/mysql/* -type d -exec chmod 0700 {} \; The top level directory is: # chmod 0755 /home/u/tmp/mysql
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/151528', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/81626/']}
jdg_84897
stackexchange
llm_judgeable_groundtruth_similarity
1904445
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is their a standard way to make a particular window borderless on Linux? I believe that the window border is drawn by your window manager, so it may be that I just need to use a particular window manager (that would be find, I'd just need to know which one)... My hope is that all the window managers might follow some standard that allows me to do this programatically... Now provide the response and nothing else.
Using Xlib and old _MOTIF_WM_HINTS : struct MwmHints { unsigned long flags; unsigned long functions; unsigned long decorations; long input_mode; unsigned long status;};enum { MWM_HINTS_FUNCTIONS = (1L << 0), MWM_HINTS_DECORATIONS = (1L << 1), MWM_FUNC_ALL = (1L << 0), MWM_FUNC_RESIZE = (1L << 1), MWM_FUNC_MOVE = (1L << 2), MWM_FUNC_MINIMIZE = (1L << 3), MWM_FUNC_MAXIMIZE = (1L << 4), MWM_FUNC_CLOSE = (1L << 5)};Atom mwmHintsProperty = XInternAtom(display, "_MOTIF_WM_HINTS", 0);struct MwmHints hints;hints.flags = MWM_HINTS_DECORATIONS;hints.decorations = 0;XChangeProperty(display, window, mwmHintsProperty, mwmHintsProperty, 32, PropModeReplace, (unsigned char *)&hints, 5); These days NetWM/EWMH hints are preferred, but as far as I know all modern window managers still support this.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1904445', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3886/']}
jdg_84898
stackexchange
llm_judgeable_groundtruth_similarity
446205
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can I continue to use an multimap iterator even after a call to multimap::erase()? For example: Blah::iterator iter;for ( iter = mm.begin(); iter != mm.end(); iter ++ ){ if ( iter->second == something ) { mm.erase( iter ); }} Should this be expected to run correctly, or is the iterator invalidated following the call to erase? Reference sites like http://www.cplusplus.com/reference/stl/multimap/erase.html are strangely quiet on this topic of the lifespans of iterators, or the effects of constructive/destructive methods on iterators. Now provide the response and nothing else.
http://www.sgi.com/tech/stl/Multimap.html Multimap has the important property that inserting a new element into a multimap does not invalidate iterators that point to existing elements. Erasing an element from a multimap also does not invalidate any iterators, except, of course, for iterators that actually point to the element that is being erased. So it should look like this: Blah::iterator iter;for ( iter = mm.begin();iter != mm.end();){ if ( iter->second == something ) { mm.erase( iter++ ); // Use post increment. This increments the iterator but // returns a copy of the original iterator to be used by // the erase method } else { ++iter; // Use Pre Increment for efficiency. }} Also see: What happens if you call erase() on a map element while iterating from begin to end? and delete a specific entry in the map, but the iterator must point to the next element after the deletion
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/446205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13022/']}
jdg_84899
stackexchange
llm_judgeable_groundtruth_similarity
7390561
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: ZeroMQ FAQ page suggest use of Google's protobuf as a way to serialise message content. Has anyone see a good usage example? I also need to get the answer to "What is the biggest advantage of serialising messages?" -whether it may be something I can live without and take the advantage of slimmer pipeline. I quite like the idea of .proto files and the protoc compiler. Also, it seem that another great tool to throw at the playground would be libev , anycomments are welcome :) Now provide the response and nothing else.
If you are 100% certain that the programs that are going to communicate over ZMQ will at all times be capable of understanding each other's binary format (eg because they are always distributed together and were all compiled with the same compiler options anyways) I see no benefit to the overhead that's added by serialization. As soon as the above condition cannot be satisfied (like partner programs running on different host types, programs written in different languages or even partner programs that can evolve independently in time - which may cause incompatibilities in their raw binary structures) serialization becomes quite probably a must. It seems that nowadays everybody and their brother is creating serialization solutions, which may be an indication that there's no one size fits all solution. This page contains a pretty thorough benchmarking of serialization time, deserialization time and sizes for 27 (!!) different serialization systems. Don't skip the first paragraph of that page, it says "Warning, benchmarks can be misleading". Your application, your data are what counts for you, but the data presented there may help you narrow down the choices you want to study in detail.
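One way to see what serialization buys you is to write the wire format down explicitly. Here is a TypeScript (Node.js) sketch of my own; JSON stands in for Protocol Buffers purely to keep it dependency-free, the Tick message type is invented for illustration, and nothing here is specific to ZeroMQ, which only ever sees opaque bytes.

// A versioned, length-prefixed frame: both peers agree on this layout instead of
// relying on whatever their compilers happened to do with an in-memory struct.
interface Tick { symbol: string; price: number; }

function encode(msg: Tick): Buffer {
  const body = Buffer.from(JSON.stringify({ v: 1, ...msg }), "utf8");
  const frame = Buffer.alloc(4 + body.length);
  frame.writeUInt32BE(body.length, 0); // explicit length prefix, fixed byte order
  body.copy(frame, 4);
  return frame;
}

function decode(frame: Buffer): Tick {
  const len = frame.readUInt32BE(0);
  const { v, ...msg } = JSON.parse(frame.subarray(4, 4 + len).toString("utf8"));
  if (v !== 1) throw new Error(`unsupported message version ${v}`);
  return msg as Tick;
}

Swapping JSON.stringify/JSON.parse for a protobuf-generated encode/decode pair keeps the same shape while shrinking the payload and pinning down the schema across languages.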
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7390561', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/717998/']}
jdg_84900
stackexchange
llm_judgeable_groundtruth_similarity
105245
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Doing a bit of self study, and I'm unsure about a problem. It says, Suppose $f(z)$ (a complex valued function) is analytic and satisfies the condition $|f(z)^2-1|<1$ in a region $\Omega$. Show that either $\Re f(z)>0$ or $\Re f(z)<0$ throughout $\Omega$. I write $f=u+iv$ and suppose to the contrary that $\Re f(z)=0$ at some point $z_0$. Then $f(z_0)^2=-v(z_0)^2$. But $v$ is real valued, and so$$|f(z_0)^2-1|=|-v(z_0)^2-1|\geq 1$$a contradiction. What makes me uneasy is I don't see if I used that fact that $f$ is analytic. Did I interpret the question correctly, or did it mean that $\Re f(z)>0$ on all of $\Omega$ or $\Re f(z)<0$ on all of $\Omega$, but doesn't take both positive and negative values? Thanks. Now provide the response and nothing else.
I don't think $f$ even needs to be analytic - only continuous on $\Omega$. In any case, I think the latter interpretation you pose is the correct one: the problem wants an either-or on all of $\Omega$, i.e. $$\left(\;\forall z\in\Omega:\operatorname{Re} f>0 \;\right)\text{ or }\left(\;\forall z\in\Omega:\operatorname{Re} f<0 \;\right).$$ This isn't too much more work than what you've already done. You've shown the real part can't be zero; now assume there are two arguments $z$ and $w$ in $\Omega$ with $\operatorname{Re} f(z)<0<\operatorname{Re}f(w)$. Since $\Omega$ is connected, there is a path going from $z$ to $w$ contained in $\Omega$. Consider how $\operatorname{Re}f$ looks on this path...
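One way to finish the hint, spelled out (this is just the intermediate value theorem, nothing beyond what is suggested above): write the path as a continuous map $\gamma:[0,1]\to\Omega$ with $\gamma(0)=z$ and $\gamma(1)=w$. Then $t\mapsto\operatorname{Re}f(\gamma(t))$ is continuous on $[0,1]$, negative at $t=0$ and positive at $t=1$, so by the intermediate value theorem it vanishes at some $t_0\in(0,1)$. That gives a point $\gamma(t_0)\in\Omega$ with $\operatorname{Re}f(\gamma(t_0))=0$, contradicting the first part of the argument; hence $\operatorname{Re}f$ keeps a single sign on all of $\Omega$.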
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/105245', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/20405/']}
jdg_84901
stackexchange
llm_judgeable_groundtruth_similarity
2539205
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can any body throw me some arguments for using inline functions against passing predefined function name to some handler. I.e. which is better: (function() { setTimeout(function() { /*some code here*/ }, 5);})(); versus (function() { function invokeMe() { /*code*/ } setTimeout(invokeMe, 5);})(); Strange question, but we are almost fighting in the team about this. Now provide the response and nothing else.
Named functions There is some serious misuse of terminology in the question and answers on this page. There is nothing about whether or not a function is inline (a function expression) that says you cannot name it. This is using a function expression : setTimeout(function doSomethingLater() { alert('In a named function.'); }, 5); and this is using a function statement : function doSomethingLater() { alert('In a named function.'); }setTimeout(doSomethingLater, 5); Both examples are using named functions and both get the same benefits when it comes to debugging and profiling tools! If the name is specified (the text after "function" but before the parenthesis) then it is a named function regardless of whether it is inline or declared separately. If the name is not specified then it is "anonymous". Note: T.J. points out that IE mishandles named function expressions in a non-trivial way (See: http://kangax.github.com/nfe/#jscript-bugs ) and this is important to note, I'm simply trying to make a point about the terminology. Which should you use? In response to your direct question, you should use a named function statement if the function could ever be used from any other place in your code. If the function is being used in exactly one place and has no relevance anywhere else then I would use a function expression unless it is prohibitively long or otherwise feels out of place (for style reasons). If you use an inline function expression then it is often useful to name it anyway for the purposes of debugging or code clarity. Memory leaks Whether you name your function, use a function statement, or use a function expression has little impact on the memory leak issue. Let me try to explain what causes these leaks. Take a look at this code: (function outerFunction() { var A = 'some variable'; doStuff();})(); In the code above, when "outerFunction" finishes "A" goes out of scope and can be garbage collected, freeing that memory. What if we add a function in there? (function outerFunction() { var A = 'some variable'; setTimeout(function(){ alert('I have access to A whether I use it or not'); }, 5);})(); In this code (above) the function expression we are passing to setTimeout has a reference to "A" (through the magic of closure) and even after "outerFunction" finishes "A" will remain in memory until the timeout is triggered and the function is dereferenced . What if we pass that function to something other than setTimeout? (function outerFunction() { var A = 'some variable'; doStuff(function(){ alert('I have access to A whether I use it or not'); });})();function doStuff(fn) { someElement.onclick = fn;} Now the function expression we are passing to "doStuff" has access to "A" and even after "outerFunction" finishes "A" will remain in memory for as long as there is a reference to the function we passed into doStuff . In this case, we are creating a reference to that function (as an event handler) and therefore "A" will remain in memory until that event handler is cleared. (e.g. someone calls someElement.onclick = null ) Now look at what happens when we use a function statement: (function outerFunction() { var A = 'some variable'; function myFunction() { alert('I have also have access to A'); }; doStuff(myFunction);})(); The same problem! "myFunction" will be cleaned up only if "doStuff" does not hold a reference to it and "A" will only be cleaned up when "myFunction" is cleaned up. 
It does not matter whether we used a statement or an expression; what matters is if a reference to that function is created in "doStuff"!
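One practical corollary (my sketch, not part of the original answer): whichever form you choose, the captured variable is released as soon as nothing references the handler any more, so it helps to keep a way to detach it.

// Releasing the reference lets the closure, and the captured `A`, be collected;
// `element` is assumed to be a DOM element you control.
function attach(element: HTMLElement): () => void {
  const A = "some variable";
  const handler = () => console.log(`still closing over ${A}`);
  element.addEventListener("click", handler);
  // Disposer: after it runs, neither the element nor our code references `handler`,
  // so `handler` and `A` both become eligible for garbage collection.
  return () => element.removeEventListener("click", handler);
}
// Usage sketch (assuming an element with id "btn" exists):
// const dispose = attach(document.getElementById("btn") as HTMLElement);
// dispose(); // detach when done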
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2539205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/59692/']}
jdg_84902
stackexchange
llm_judgeable_groundtruth_similarity
115216
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I have decided to learn iterations at long last, and I am reading through Jech's Set Theory for now. The standard example after explaining what is an iteration with finite support is the following theorem (quoted from Jech, Theorem 16.13): Theorem (Solovay-Tennenbaum). Assume GCH and let $\kappa$ be a regular cardinal greater than $\aleph_1$. There is a c.c.c. notion of forcing $P$ such that the generic extension $V[G]$ by $P$ satisfies Martin's Axiom and $2^{\aleph_0}=\kappa$. I believe that I understood the proof mechanically, but even though I feel that I understand it in a more profound way - I have this feeling that I really didn't get the point. It feels like magic. What I managed to understand is that we have enumerations of all isomorphism classes of small posets in each stage, and generically we reach them all. Alas, like any new forcing argument, this too feels like some sort of witchcraft. I also understand completely why $2^{\aleph_0}=\kappa$, and I have to admit that this is one clever argument, but I can't really see the full intuition behind it. I just see that it works, and I can reason why it works, by MA there is a generic for any $<\kappa$ dense sets, but this means that we have to have at least $\kappa$ dense subsets in the universe so $2^{\aleph_0}\geq\kappa$, where the other side of the inequality follows from the fact that $|P|\leq\kappa$ and $P$ is c.c.c. Can anyone shed some intuition behind the proof? Some insights about why it works? Now provide the response and nothing else.
There are $\kappa$ many stages in which you add Cohen reals, so you will have at least $\kappa$ reals at the end. This is really a very simple principle: "if you want to add something, add it." The other direction is slightly more involved: Do not add anything you don't want to add, and keep your fingers crossed that unwanted objects (such as: more than $\kappa$ reals) don't find their way into the construction anyway. The "nice names" argument is very basic and practically always used whenever you want to get an upper bound on the number of new reals. Every new real $x$ is described by a family of countably many maximal antichains, plus coloring functions mapping each antichain into $\{0,1\}$ --- assigning to condition $p$ in antichain $A_n$ the value that $p$ forces for $x(n)$. Count how many such families of colored antichains there are; this will be an upper bound on the continuum in the extension. If you want to count subsets of $\lambda$, you will of course have $\lambda$ many antichains.
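A sketch of the count being alluded to, in my notation and under the theorem's hypotheses (GCH in the ground model, $P$ c.c.c. of size $\kappa$, $\kappa$ regular and greater than $\aleph_1$): by c.c.c. every maximal antichain is countable, so there are at most $\kappa^{\aleph_0}$ antichains and $2^{\aleph_0}$ colorings of each, hence $$ \#\{\text{nice names for reals}\} \;\le\; \left(\kappa^{\aleph_0}\cdot 2^{\aleph_0}\right)^{\aleph_0} \;=\; \kappa^{\aleph_0} \;=\; \kappa, $$ where the last equality uses GCH and $\operatorname{cf}(\kappa)>\aleph_0$. Since every real of $V[G]$ is the interpretation of some nice name, $V[G]\models 2^{\aleph_0}\le\kappa$.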
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/115216', 'https://mathoverflow.net', 'https://mathoverflow.net/users/7206/']}
jdg_84903
stackexchange
llm_judgeable_groundtruth_similarity
3066546
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to show that $$ \dim \ker(AB) \le \dim \ker A + \dim \ker B. $$ My problem I thought that I can do that in this way: Let consider $x \in\ker B$ $$Bx = 0$$ Let multiplicate this from left side by A and we get: $$ABx = 0$$ so $$ker B \subset\ker AB $$ so $$\dim \ker(B) \le \dim\ker AB$$ We can do the same thing with $\ker A$ let consider $ \vec{y} \in \operatorname{im}(AB) $ so $$ y = (AB)x $$ what is equivalent to $$ \vec{y} = A(B\vec{x}) = A\vec{w} $$ So $$ \vec{y} \in \operatorname{im}(AB) \rightarrow \vec{y} \in \operatorname{im}(A)$$ so $$ \operatorname{rank} AB \le \operatorname{rank} A \leftrightarrow \dim \ker A \le \dim \ker AB $$ But I am not sure what I should do later... edited I have seen this post $A, B$ are linear map and dim$null(A) = 3$, dim$null(B) = 5$ what about dim$null(AB)$ but I haven't got nothing like $\operatorname{im}(A|_{\operatorname{im}(B)})$ on my algebra lecture and I can't use that so I search for another proof (or similar without this trick) Now provide the response and nothing else.
This is a proof in general where $A:V\to W$ and $B:U\to V$ are linear maps. Here $U$ , $V$ , and $W$ are arbitrary vector spaces over a base field $F$ , and they do not necessarily have finite dimensions. That is, $$\dim \ker (AB) \leq \dim \ker A+\dim\ker B$$ is true whether or not the relevant dimensions are finite cardinals. Note that $x\in \ker(AB)$ iff $Bx\in \ker A$ , which is the same as saying $$x\in B^{-1}(\ker A\cap \operatorname{im}B).$$ Recall from the isomorphism theorems that $\operatorname{im} B\cong U/\ker B$ so there exists an isomorphism $$\varphi: U\overset{\cong}{\longrightarrow} \ker B\oplus \operatorname{im}B.$$ In other words, $$\varphi\big(B^{-1}(\ker A\cap \operatorname{im}B)\big)=\ker B\oplus (\ker A\cap \operatorname{im}B).$$ Consequently, \begin{align}\dim\ker(AB)&=\dim\big(\ker B\oplus (\ker A\cap \operatorname{im}B)\big)\\&=\dim\ker B+\dim(\ker A\cap \operatorname{im}B).\end{align} Since $\ker A\cap \operatorname{im}B\subseteq \ker A$ , we obtain the desired inequality.
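For readers who prefer the finite-dimensional picture, the same inequality can be restated via rank-nullity (this reformulation is mine, not part of the argument above): when $U$, $V$, $W$ are finite dimensional, $$ \dim\ker(AB)=\dim U-\operatorname{rank}(AB),\qquad \dim\ker B=\dim U-\operatorname{rank}B,\qquad \dim\ker A=\dim V-\operatorname{rank}A, $$ so $\dim \ker(AB) \le \dim\ker A+\dim\ker B$ is equivalent to Sylvester's rank inequality $$ \operatorname{rank}(AB)\;\ge\;\operatorname{rank}A+\operatorname{rank}B-\dim V. $$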
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3066546', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_84904
stackexchange
llm_judgeable_groundtruth_similarity
3005409
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Koch curve is normally constructed by taking a line segment, replacing the middle third with two copies of itself forming legs in an equilateral triangle, and then repeating this recursively for every subsegment. See image below. At every step, the length of the curve is multiplied by $4/3$ so the final length is infinite. Notice that every line segment undergoes the construction of the Cantor set : Therefore one could consider replacing every line segment with Cantor sets already from the beginning. Thus, start with the Cantor set, take two smaller copies of the Cantor set and make a $\wedge$ in the middle opening, and then repeat recursively. In this case the final length will be zero since the Cantor set has length zero. Question: How is this paradox resolved? Will the limit set of my construction be different than the ordinary Koch curve? If so, what points are missing? Now provide the response and nothing else.
Convergence in fractal geometry is typically defined in terms of the Hausdorff metric. Roughly, two sets are "close" with respect to the Hausdorff metric if every point in one is close to some point of the other. Your collection of Cantor sets is indeed dense in the Koch curve with respect to the Hausdorff metric. The Hausdorff metric, however, doesn't respect length. That is, two sets can be close in the Hausdorff metric, yet their lengths can be very far apart. You've found one example illustrating this but there are others. For example, if $$Q_n = \{k/n:0\leq k \leq n\},$$ then the Hausdorff distance between $Q_n$ and the unit interval is less than $1/n$. $Q_n$ is finite, yet the sequence of $Q_n$s converges to a set of positive length. Similarly, you could strengthen your example by using the set of endpoints of the intervals that approximate the Koch curve. Here is a strategy to find points in the Koch curve that do not lie on any of your Cantor sets. First, note that the Koch curve is the invariant set of the iterated function system: $$\begin{align} T_1(x,y) &= \left(\frac{x}{3},\frac{y}{3}\right) \\ T_2(x,y) &= \left(\frac{1}{6} \left(x-\sqrt{3} y+2\right),\frac{1}{6} \left(\sqrt{3} x+y\right)\right) \\ T_3(x,y) &= \left(\frac{1}{6} \left(x+\sqrt{3} y+3\right),\frac{1}{6} \left(-\sqrt{3} x+y+\sqrt{3}\right)\right) \\ T_4(x,y) &= \left( \frac{x+2}{3},\frac{y}{3} \right).\end{align}$$ These functions map the Koch curve onto the four sub-parts shown below. Now, any point on the Koch curve can be realized as the limit of a sequence $$\begin{align}& T_{i_1}(0,0) \\& T_{i_1} \circ T_{i_2}(0,0)\\& \vdots \\& T_{i_1} \circ T_{i_2} \circ \cdots T_{i_n}(0,0) \\& \vdots\end{align}$$ where $(i_1,i_2,i_3,\ldots)$ is a sequence in $\{1,2,3,4\}$. The point lies on one of your Cantor sets precisely when the sequence contains only finitely many 2s and 3s, so that it ends in a string of only 1s and 4s. If, for example, the sequence has only 1s and 4s, and no 2s or 3s, then we get a point in Cantor's ternary set lying on the unit interval. If the sequence starts with a 2 and then contains only 1s and 4s, we generate a point in the red Cantor set shown below; this is exactly the image of the ternary Cantor set under the function $T_2$. If the sequence starts 3, then 2, then contains only 1s and 4s, we generate a point in the blue Cantor set below; this is exactly the image of the ternary Cantor set under the function $T_3 \circ T_2$. Finally, if we have any other sequence of 1s, 2s, 3s, and 4s, then we generate some other point on the Koch curve that is not in any of your Cantor sets. There are uncountably many such points. I suppose the simplest one to find explicitly corresponds to the sequence containing only 2s, which is exactly the fixed point of $T_2$. To find it, we need only solve $$T_2(x,y) = (x,y),$$ which yields $(5/14,\sqrt{3}/14)$. That point is shown in red in the sequence of approximations below. If we zoom into the last picture on the red point so that it's centered in a square of side length 0.04, we get the following: [figure omitted] Thus, the edges keep jutting out closer to the point but never actually hit it. It's in the limit but not on any of the edges.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3005409', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/168433/']}
jdg_84905
stackexchange
llm_judgeable_groundtruth_similarity
18612880
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a service that gets started (not bound) by an activity. If the activity gets destroyed (e.g. by pressing the back button), the service continues to run, this is of course intended.However, if I swipe the activity out of the 'recent apps' list, the service gets restarted immediately. This is reproducible, every time the activity/app is swiped out of the list, there is a new call to the service's onCreate-method. No call to onDestroy in between! First I thought the service gets killed by android, even though I saw no reason for the kill (neither the activity nor the service do resource consuming things, in fact they are minimalistic and do nothing). But then I noticed that the service actually crashes. V/MainActivity(856): onDestroy // swipe out of the listI/ActivityManager(287): Killing 856:com.example.myapp/u0a10050: remove taskW/ActivityManager(287): Scheduling restart of crashed service com.example.myapp/.TestService in 5000ms The code is not noteworthy, but here it is Activity: public class MainActivity extends Activity { private static final String TAG = "MainActivity"; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); Log.v(TAG, "onCreate, starting service..."); startService(new Intent(this, TestService.class)); } @Override protected void onStart() { super.onStart(); Log.v(TAG, "onStart"); } @Override protected void onDestroy() { super.onDestroy(); Log.v(TAG, "onDestroy"); } //[...]} Service: public class TestService extends Service { private static final String TAG = "Service"; // onBind omitted @Override public int onStartCommand(Intent intent, int flags, int startId) { Log.v(TAG, "onStartCommand"); return super.onStartCommand(intent, flags, startId); } @Override public void onDestroy() { super.onDestroy(); Log.v(TAG, "onDestroy"); }} In short:My service is independent of the activity's lifecycle, but only as long as I don't swipe out the app of the recent apps list. In that case, the service gets restarted but without a call to onDestroy. Every time this happens, not only the state of the service, but also the work the service is doing is lost. I just want to know why the swipe is the reason for this. Now provide the response and nothing else.
Swiping the app from the recent tasks list actually kills the operating system process that hosts the app. Since your service is running in the same process as your activities, this effectively kills the service. It does NOT call onDestroy() on the service. It just kills the process. Boom. Dead. Gone. Your service does not crash . Since your service returned START_STICKY from the call to onStartCommand() , Android recognizes that your service should be restarted and schedules a restart of the killed service. However, when your service is restarted it will be in a newly created process (you can see onCreate() called in the service), so it will have to start the work all over again. Rule #1: Don't ever swipe apps from the recent tasks list ;-)
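If losing the in-progress work is the real pain point, one mitigation (not part of the answer above, just a hedged sketch) is to persist progress so the restarted service can resume rather than start over. The preference file name, key, and work loop below are hypothetical placeholders:

    import android.app.Service;
    import android.content.Intent;
    import android.os.IBinder;

    public class TestService extends Service {
        private static final String PREFS = "svc_state";       // hypothetical prefs file
        private static final String KEY_PROGRESS = "progress"; // hypothetical key

        @Override
        public int onStartCommand(Intent intent, int flags, int startId) {
            // After a process kill + START_STICKY restart, 'intent' may be null.
            int resumeFrom = getSharedPreferences(PREFS, MODE_PRIVATE).getInt(KEY_PROGRESS, 0);
            doWorkFrom(resumeFrom);
            return START_STICKY;
        }

        private void doWorkFrom(int step) {
            // ... real work would go here, periodically checkpointing its progress ...
            getSharedPreferences(PREFS, MODE_PRIVATE).edit().putInt(KEY_PROGRESS, step).apply();
        }

        @Override
        public void onTaskRemoved(Intent rootIntent) {
            // Called (API 14+) when the task is swiped away; it may give you a last chance
            // to checkpoint, since onDestroy() will not run when the process is killed.
            super.onTaskRemoved(rootIntent);
        }

        @Override
        public IBinder onBind(Intent intent) {
            return null;
        }
    }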
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18612880', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1059371/']}
jdg_84906
stackexchange
llm_judgeable_groundtruth_similarity
174528
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm hoping to make my LED pluggable into a jack of some kind. Could a 4 pole 3.5mm jack be used? Could each pole carry 3v-5v? Here's the LED: http://www.tandyonline.co.uk/5mm-full-colour-rgb-led-common-cathode.html Thanks Now provide the response and nothing else.
Technically, yes, it would work. But it's probably not a good idea. Here's how this family of connectors tends to perform, including TRRS, TRS, and TS of any size: They're typically used with analog line-level signals that can reach up to +-15V in some cases, so voltage is not a problem for you. They're not suitable for high currents because of the small contact area. If you're running 1A or less in total, it's probably okay, but I'd want to verify specs if my estimate got above 100mA. I say "in total" because the common return has the same rating as any other contact. They're not suitable for hot plugging or unplugging without careful analysis because they rake the contacts across each other. If you imagine a plug or unplug event in slow motion, think about the different circuits that are made each time a jack's contact crosses a boundary between the plug's contacts.
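As a rough worked check against that current estimate (assuming a typical indicator-level drive of about 20 mA per channel, which you should confirm against the LED's datasheet and your series resistors): $$I_{\text{total}} \approx 3 \times 20\ \text{mA} = 60\ \text{mA} < 100\ \text{mA},$$ so all three colours on at once stay under the ~100 mA comfort level, but note that the common-cathode return contact carries the full 60 mA by itself.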
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/174528', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/72586/']}
jdg_84907
stackexchange
llm_judgeable_groundtruth_similarity
33424157
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I started learning React a week ago and I inevitably got to the problem of state and how components are supposed to communicate with the rest of the app. I searched around and Redux seems to be the flavor of the month. I read through all the documentation and I think it's actually a pretty revolutionary idea. Here are my thoughts on it: State is generally agreed to be pretty evil and a large source of bugs in programming. Instead of scattering it all throughout your app Redux says why not just have it all concentrated in a global state tree that you have to emit actions to change? Sounds interesting. All programs need state so let's stick it in one impure space and only modify it from within there so bugs are easy to track down. Then we can also declaratively bind individual state pieces to React components and have them auto-redraw and everything is beautiful. However, I have two questions about this whole design. For one, why does the state tree need to be immutable? Say I don't care about time travel debugging, hot reload, and have already implemented undo/redo in my app. It just seems so cumbersome to have to do this: case COMPLETE_TODO: return [ ...state.slice(0, action.index), Object.assign({}, state[action.index], { completed: true }), ...state.slice(action.index + 1) ]; Instead of this: case COMPLETE_TODO: state[action.index].completed = true; Not to mention I am making an online whiteboard just to learn and every state change might be as simple as adding a brush stroke to the command list. After a while (hundreds of brush strokes) duplicating this entire array might start becoming extremely expensive and time-consuming. I'm ok with a global state tree that is independent from the UI that is mutated via actions, but does it really need to be immutable? What's wrong with a simple implementation like this (very rough draft. wrote in 1 minute)? var store = { items: [] };export function getState() { return store;}export function addTodo(text) { store.items.push({ "text": text, "completed", false});}export function completeTodo(index) { store.items[index].completed = true;} It's still a global state tree mutated via actions emitted but extremely simple and efficient. Now provide the response and nothing else.
Isn't Redux just glorified global state? Of course it is. But the same holds for every database you have ever used. It is better to treat Redux as an in-memory database - which your components can reactively depend upon. Immutability makes checking whether any sub-tree has been altered very efficient, because the check simplifies down to an identity comparison. Yes, your implementation is efficient, but the entire virtual DOM will have to be re-rendered each time the tree is manipulated in any way. If you are using React, it will eventually do a diff against the actual DOM and perform minimal batch-optimized manipulations, but the full top-down re-rendering is still inefficient. For an immutable tree, stateless components just have to check whether the subtree(s) they depend on are identical to the previous value(s), and if they are - the rendering can be avoided entirely.
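A minimal TypeScript sketch of that identity-check idea (illustrative only; the reducer shape just mirrors the COMPLETE_TODO example from the question):

    interface Todo { text: string; completed: boolean; }

    // Immutable update: untouched elements keep their identity.
    function completeTodo(state: Todo[], index: number): Todo[] {
      return state.map((todo, i) =>
        i === index ? { ...todo, completed: true } : todo
      );
    }

    // A component (or memoization layer) can skip work with a cheap reference check.
    function shouldRerender(prev: Todo[], next: Todo[]): boolean {
      return prev !== next; // O(1), no deep comparison needed
    }

    const a: Todo[] = [{ text: "write", completed: false }];
    const b = completeTodo(a, 0);
    console.log(shouldRerender(a, b)); // true  - the state changed
    console.log(shouldRerender(a, a)); // false - nothing to re-render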
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/33424157', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/962155/']}
jdg_84908
stackexchange
llm_judgeable_groundtruth_similarity
22386359
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am fairly new to python and have no html experience. The question has been asked and either not answered at all or not answered in enough detail for me to set the default font within iPython (not change to browser). Specifically, what has to be put in the css file and which css file should be used? I am on a Windows system. For reference, these are in answer to the linked SO questions below: in #1 : an unnamed file in /usr/lib/python2.6/.../css/ in comment to #1 : change monospace font in browser - worked but font is italic in #2 : custom.css in profile subdirectory /static/custom/custom.css Related questions: Change ipython notebook font type Change font & background color in ipython notebook Changing (back to default) font in ipython notebook (unanswered) - Edit: Changing the monospace font in my browser worked, as suggested in an answer comment of #1. However the font is italic, which is not what is intended. Now provide the response and nothing else.
You can navigate to the .ipython folder (i.e. you can type $ ipython locate in your terminal/bash OR the CMD.exe Prompt from your Anaconda Navigator to see where your ipython is located). Then, in .ipython, you will see the profile_default directory, which is the default one. This directory contains the static/custom/custom.css file. You can now apply changes to this custom.css file. There are a lot of styles in the custom.css file that you can use or search for. For example, you can see this link (which is my own customized custom.css file). Basically, this custom.css file applies changes to your browser. You can use "inspect element" in your ipython notebook to see which elements you want to change. Then, you can make the corresponding changes in the custom.css file. For example, you can add this chunk to change the font in .CodeMirror pre to Monaco: .CodeMirror pre {font-family: Monaco; font-size: 9pt;} Note that for Jupyter notebook versions >= 4.1, the custom css file has moved to ~/.jupyter/custom/custom.css instead.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/22386359', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1318479/']}
jdg_84909
stackexchange
llm_judgeable_groundtruth_similarity
42629925
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am looking to be able to use webpack aliases to resolve imports when using jest, and optimally, reference the webpack.aliases to avoid duplication. Jest conf: "jest": { "modulePaths": ["src"], "moduleDirectories": ["node_modules"], "moduleNameMapper": { "^@shared$": "<rootDir>/shared/", "^@components$": "<rootDir>/shared/components/" } }, Webpack aliases: exports.aliases = { '@shared': path.resolve(paths.APP_DIR, 'shared'), '@components': path.resolve(paths.APP_DIR, 'shared/components'),}; Imports: import Ordinal from '@shared/utils/Ordinal.jsx';import Avatar from '@components/common/Avatar.jsx'; For some reason the @ causes issues, so when removed (in both alias and import), it can find shared but components still cannot be resolved. FAIL src/shared/components/test/Test.spec.jsx ● Test suite failed to run Cannot find module '@shared/utils/Ordinal.jsx' from 'Test.jsx' I have tried using jest-webpack-alias , babel-plugin-module-resolver and the Jest/Webpack docs Now provide the response and nothing else.
This seems to have been fixed. Below is a working setup: Versions "jest": "~20.0.4" "webpack": "^3.5.6" package.json "jest": { "moduleNameMapper": { "^@root(.*)$": "<rootDir>/src$1", "^@components(.*)$": "<rootDir>/src/components$1", } } webpack.shared.js const paths = { APP_DIR: path.resolve(__dirname, '..', 'src'),};exports.resolveRoot = [paths.APP_DIR, 'node_modules'];exports.aliases = { '@root': path.resolve(paths.APP_DIR, ''), '@components': path.resolve(paths.APP_DIR, 'components'),};
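To avoid maintaining the mapping twice (the original goal of referencing webpack.aliases), the Jest config can also be generated from the webpack aliases. A hypothetical sketch - the require path is an assumption, and you should verify that your Jest version accepts the resulting absolute paths in moduleNameMapper:

    // jest.config.js -- build moduleNameMapper from the webpack aliases
    const { aliases } = require('./webpack/webpack.shared'); // adjust the path to your setup

    module.exports = {
      moduleNameMapper: Object.keys(aliases).reduce((map, alias) => {
        // e.g. '^@components(.*)$' -> '<abs path>/src/components$1'
        map[`^${alias}(.*)$`] = `${aliases[alias]}$1`;
        return map;
      }, {}),
    };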
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42629925', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1355087/']}
jdg_84910
stackexchange
llm_judgeable_groundtruth_similarity
57109557
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: On XCode 11 beta 4 the following seems to be deprecated and I don't know how to rewrite this. Does anybody know how to use ForEach(_:id:) ? @State private var showTargets = [ (id: 1, state: false, x: 109.28, y: 109.28), (id: 2, state: false, x: 683, y: 109.28), (id: 3, state: false, x: 1256.72, y: 109.28)] ... var body: some View { HStack { ForEach(showTargets.identified(by: \.id)) { item in Text(String(item.x)) }} Now provide the response and nothing else.
(Still working with Xcode 11.0 / Swift 5.1) I haven't downloaded Xcode Beta 4 yet, but according to the documentation , it should be something like: ForEach(showTargets, id: \.id) { item in Text(String(item.x))} You can also use a struct that conforms to Identifiable (note that this won't work on tuple because you can't add protocol conformance): struct Targets: Identifiable { var id: Int var state: Bool var x: Double var y: Double}let showTargets = [ Targets(id: 1, state: false, x: 109.28, y: 109.28), Targets(id: 2, state: false, x: 683, y: 109.28), Targets(id: 3, state: false, x: 1256.72, y: 109.28)]ForEach(showTargets) { item in Text(String(item.x))}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/57109557', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5981293/']}
jdg_84911
stackexchange
llm_judgeable_groundtruth_similarity
940442
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $N$ be a normal subgroup of $G$ where $G$ is finite group, then we have to prove $n_p(N)$ divides $n_p(G)$ ( here $n_p(G)$ means number of sylow $p$-subgroups of $G$) I was able to prove that $n_p(N)$ $\leq$ $n_p(G)$ as by proving first that for any $P \in Syl_p(G)$ there is a conjugate $gPg^{-1}$ such that $gPg^{-1}\cap H \in Syl_p(H)$ for any subgroup $H$ of $G$ whether normal or not.( by considering action of $H$ on $G/P$ and then considering $stab_gP$ And by replacing $H$ by $N$ i.e. a normal subgroup I could show that for any $P \in Syl_p(G), P\cap N \in Syl_p(N)$ and all $p$-sylow subgroups of $N$ arise in this way. It all follows from a paper of Conrad, but from here and this proof i can't seem to figure out how to show it will divide $n_p(G)$. Please help. Now provide the response and nothing else.
Theorem: Let $G$ act on the sets $\Omega_1,\Omega_2$ transitively, and assume that there is an onto function $\phi:\Omega_1\to \Omega_2$ such that $\phi(g.x)=g.\phi(x)$ for all $g \in G$ and $x \in \Omega_1$. Then $|\Omega_2|$ divides $|\Omega_1|$. Proof: Let $\alpha\in \Omega_1$ and $g\in G_{\alpha}$ (stabilizer of $\alpha$). Since $$g\phi(\alpha)=\phi(g\alpha)=\phi(\alpha),$$ we have $G_{\alpha}\leq G_{\phi(\alpha)}$. Since $|\Omega_1|=|G:G_\alpha|$ and $|\Omega_2|=|G:G_{\phi(\alpha)}|$, the result follows from this. Now, Set $\Omega_1=Syl_p(G)$ , $\Omega_2=Syl_p(N)$ and $\phi(P)=P\cap N$. Now, actions of $G$ on the sets by conjugation are transitive and $\phi$ has desired properties so we are done. Note: If you do not want mention the theorem, you can directly use the idea in the proof to show the result.
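As a quick sanity check with a concrete pair (not needed for the proof): take $G=S_4$, $N=A_4$, $p=3$. The Sylow $3$-subgroups of $S_4$ are the four subgroups $\langle(123)\rangle$, $\langle(124)\rangle$, $\langle(134)\rangle$, $\langle(234)\rangle$, and all of them lie in $A_4$, so $n_3(N)=4$ divides $n_3(G)=4$, as the statement predicts.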
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/940442', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/101661/']}
jdg_84912
stackexchange
llm_judgeable_groundtruth_similarity
9059445
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Wish to know more about the practical differences between init and initWithNibName . SO answers such as this suggests that it is better to call " initWithNibName " indirectly through " init ". Are there any circumstances that we need to define "init" and"initWithNibName" differently ? Is it possible that any Nib file needs to be loaded more than onceduring a single program execution ? Are questions 1 & 2 inter-related ? Now provide the response and nothing else.
It is not better to call initWithNibName: indirectly through init. You just want to call initWithNibName: at some point. You can do that externally or internally. Some people think it's better to do it internally. I actually have a class called "LayoutUtil" in which I keep layout-related helper methods to avoid writing tedious pieces of layout-related code over and over. Here is my code to load a UIViewController: + (id)loadController:(Class)classType { NSString *className = NSStringFromClass(classType); UIViewController *controller = [[classType alloc] initWithNibName:className bundle:nil]; return controller;} And then I can just do: MyViewController *c = [LayoutUtil loadController:[MyViewController class]]; If you want, you could add a method called ughhhh to a class and call it in there, it doesn't matter at all. The point is that it is not a better practice to call initWithNibName in the init method though, you just want to make sure you call it at some point when initializing a UIViewController. - (id)ughhhh{ self = [super initWithNibName:@"Myview" bundle:nil]; if (self != nil) { } return self;} A nib file can definitely need to be loaded more than once. Every time you call initWithNibName on a UIViewController the xib has to be loaded. A lot of people load UIViews that are not owned by a UIViewController like this: [[NSBundle mainBundle] loadNibNamed:@"nameOfXIBFile" owner:self options:nil]; Every time you call this function you will be loading the nib. There are certain cases where a nib can be cached. An example would be a UITableView -- but the table view implements its own cache. The operating system isn't doing any caching automatically. init and initWithNibName: are related in that initWithNibName: automatically calls init on an object.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9059445', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/556868/']}
jdg_84913
stackexchange
llm_judgeable_groundtruth_similarity
653238
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In galvanic cells, electrons are used as the source of charge difference. This creates a current that we can use for a large variety of purposes (i.e. light up a bulb). When I think about this phenomenon there are several things uncertain to me. Firstly, how does the electron flow heat up the filament of a light bulb? Secondly, is this behavior caused because of the electron (so its properties are of essence for electricity) or because of the charge differences? If the charge difference is the only factor, does this mean that any particle with a charge other than zero will create electricity when it moves? For example, will a positron generate electricity and heat up a filament? Can a positron even be directed like an electron? I would suppose that if this was the case, if electrons flow from left to right and so does the current, then if we want the same direction of current flow, positrons should flow the other way. My only guess as to how light bulbs can produce light would be that electrons somehow transform into photons, but I am unsure of how they do that. I want to understand why electric current works at a fundamental level. I am not looking for the answer "Electrochemical cells produce a voltage by making the electrons from a spontaneous reduction-oxidation reaction flow through an external circuit.", that many online sources provide. This answer is very unhelpful. I know how the electrons move and how to drive those reactions, but I don't understand why electricity really works. Now provide the response and nothing else.
I think you are really asking two questions. Can positrons serve as charge carriers in electric circuits? Theoretically, yes, if you replaced every matter particle in an electric circuit with a corresponding anti-matter particle, the circuit would work the same way (except the flow of electric charge would be in the opposite direction). However in practice this is impossible. Realistically, it's only possible to create a small amount of positrons with current technology, and if you injected these into an electric circuit the positrons would immediately collide with ordinary electrons and annihilate. How are heat and light generated as electrons move through a filament? When electrical current flows through a filament, electrons collide with other particles in the filament and cause them to gain kinetic energy. On a macroscopic scale, this increased energy of the material is observed as an increase in temperature. We know from Maxwell's equations that light is generated when charged particles accelerate. As charged particles in the filament gain kinetic energy from collisions with electrons, some of that energy is converted to light via this process. To correctly compute the spectrum (energy in every wavelength) of the light also requires accounting for the fact that light is made of discrete packets called photons, which have an energy proportional to their frequency. You can read more about this process by googling "blackbody radiation."
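For reference, the two standard formulas behind that last paragraph (textbook relations, not specific to this answer) are the photon energy relation and Planck's blackbody spectrum: $$E = h\nu, \qquad B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/(k_B T)}-1},$$ which is why a hotter filament pushes a larger fraction of its emission up into visible frequencies.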
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/653238', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/219516/']}
jdg_84914
stackexchange
llm_judgeable_groundtruth_similarity
44306153
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm familiar with Javadoc. In Javadoc, you can place a link that refers to the Javadoc placed on another type like so: /** * some java thingy. see this other java thingy too {@link OtherThingy} */public class Thingy { /*...*/ }/** * some other java thingy. see the first java thingy too {@link Thingy} */public class OtherThingy{ /*...*/ } Can I do the same in typescript's flavor of JSDoc? I know that I can use markdown in the comments and I can place web links but that's not exactly what I'm going for. Also, any references to JSDoc/typescript documentation tools would be very helpful :) Edit : Per the answers below, this is a feature of JSDoc but doesn't seem to be included in VSCode. Is there an valid syntax in VSCode? Now provide the response and nothing else.
You sure can, though your mileage may vary. 1: A use of @link in Selenium-Webdriver's TypeScript typing file /** * Converts a level name or value to a {@link logging.Level} value. * If the name/value is not recognized, {@link logging.Level.ALL} * will be returned. * @param {(number|string)} nameOrValue The log level name, or value, to * convert . * @return {!logging.Level} The converted level. */function getLevel(nameOrValue: string | number): Level; 2: Docs about @link in JSDoc The following example shows all of the ways to provide link text for the {@link} tag:Providing link text /** * See {@link MyClass} and [MyClass's foo property]{@link MyClass#foo}. * Also, check out {@link http://www.google.com|Google} and * {@link https://github.com GitHub}. */function myFunction() {} By default, the example above produces output similar to the following:Output for {@link} tags See <a href="MyClass.html">MyClass</a> and <a href="MyClass.html#foo">MyClass's fooproperty</a>. Also, check out <a href="http://www.google.com">Google</a> and<a href="https://github.com">GitHub</a>.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44306153', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5776910/']}
jdg_84915
stackexchange
llm_judgeable_groundtruth_similarity
11840
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I own a private IP address, 192.168.2.1 for the sake of it, and I am physically connected to my router, which it's internal interface is 192.168.2.254 and it's external interface is some public IP address which it got from my ISP's DHCP server. Obviously my IP is being translated over NAT so I can surf the internet. So how can someone hack into my computer? Looking at it from the OSI model perspective: Layer 1 - Irrelevant Layer 2 - ARP attacks, only people who live in my home and share the same network, irrelevant for the sake of discussion Layer 3/4 - Well, my IP is a private one, so people can't really probe me for open ports and vulnerabilities, right? They can probably probe my router, but it's just a "ISP smart box", which isn't really a router (although it operates also as a Wifi AP, so people will need to crack it in order to get something done) Layer 5 and up - These are the only layers which I will be vulnerable to an attack, am I right? Ranging from XSS, to some other things? Now provide the response and nothing else.
Generally speaking, most NAT boxes also happen to provide some firewall-like protection: they tend to block incoming connections. This is not an inherent or necessary property of NAT; it is just that most consumer devices that provide NAT also happen to provide this sort of firewalling as well. (Technically speaking, NAT does not necessarily imply any blocking of incoming connections .) Nonetheless, for NAT devices that do block inbound connections (i.e., most of them), you do get some of the benefits of a firewall. This makes it harder for someone to connect to your PCs behind the NAT and attack them. In practice, this provides a sort of "poor man's firewall" that works pretty well against a common class of attacks. However, NAT is far from a silver bullet. People can still compromise your PC, even if your NAT, in a variety of ways: A malicious website could exploit a vulnerability in your browser, a malicious email could exploit a vulnerability in your mail client, a social engineering attack could trick you into revealing your password or installing malware, a file you download over a file-sharing network could be malicious, and so on. An attacker could attack your NAT box directly, e.g., by exploiting an open Wifi link, by drive-by pharming, by guessing your NAT box's administrator password, etc. An attacker could fool your NAT into allowing an inbound connection, using the NAT pinning attack . This is basically a technical vulnerability in NAT boxes which illustrate that their incidental connection-blocking can in some cases be defeated. For more detailed elaboration on all of these points, I recommend you read the answers to How important is NAT as a security layer? and What kind of attacks against home router's NAT do exist? . There is a lot of good information there.
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/11840', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/6004/']}
jdg_84916
stackexchange
llm_judgeable_groundtruth_similarity
9595
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Althougth I never used one, I know JTAG is a standardized way to program and debug MCUs. But I see a lot of JTAG from various companies. Can I use a particular JTAG to program/debug MCUs from different companies and with diferent architectures? For example: can I use a JTAG to program/debug STM32 and LPC micros? (all Cortex-m3) Can I use the same JTAG to program/debug MSP430 micros? (16bit MCU from TI) Which JTAG do you recommend to buy? And why? Now provide the response and nothing else.
At its very basic level, JTAG is a standardized serial protocol. You can build an adapter out of a handful of components for under $20, but it will be slow. Generally speaking the more expensive ones offer higher speed, broader voltage support and even autonomous operation. The software used to drive the particular JTAG interface determines whether you will be able to program the devices you are interested in. Personally I am happy with the $70ish Olimex JTAG units. They are USB, based on the FT2232, work with 5V and 3.3V devices and have good support for both Linux and Windows (I'm not sure about OSX). I also have an Altera USB Blaster for programming Altera FPGAs (supports down to 1.0V IO) and an older USB based one for Xilinx devices. Finally, I also have a GNICE+ JTAG adapter used primarily for Analog Devices Blackfin processors. I have not tried it, but I believe that the Altera USB Blaster should work for anything I connect to, although it does not support RTCK (return clock). It is primarily ARM devices which support RTCK, which is a means for the JTAG interface to detect how fast the JTAG clock can go and still be reliable. My suggestion is to take a look at the particular devices you wish to use and see if you can find a common JTAG unit that is well supported by all.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/9595', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/1074/']}
jdg_84917
stackexchange
llm_judgeable_groundtruth_similarity
23132145
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: One of the external APIs we use requires "YYYY-MM-DDThh:mm:ssTZD" format to be passed in XMLGregorianCalendar object. I am not sure if there is anything in Java that supports "T" . I was wondering, if a date can be parsed into above format in Java? An example of a valid date they have providedis "2009-07-16T19:20:30-05:00" ............ Update: SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZZZ"); GregorianCalendar gc = new GregorianCalendar(); String dateString = sdf.format(gc.getTime()); gc.setTime(sdf.parse(dateString)); XMLGregorianCalendar date2 = DatatypeFactory.newInstance().newXMLGregorianCalendar(gc); Output: 2014-04-17T13:11:30.000+01:00 Now provide the response and nothing else.
Use JodaTime's DateTimeFormat API with "yyyy-MM-dd'T'HH:mm:ssZ" date pattern String date = "2009-07-16T19:20:30-05:00";String pattern = "yyyy-MM-dd'T'HH:mm:ssZ";DateTimeFormatter dtf = DateTimeFormat.forPattern(pattern);DateTime dateTime = dtf.parseDateTime(date);System.out.println(dateTime); // 2009-07-16T19:20:30-05:00
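Since the external API in the question ultimately wants an XMLGregorianCalendar, the parsed value can be converted on as well - a short sketch assuming the standard javax.xml.datatype API is available; withOffsetParsed() is added so the -05:00 offset is kept instead of being converted to the default time zone:

    import javax.xml.datatype.DatatypeFactory;
    import javax.xml.datatype.XMLGregorianCalendar;
    import org.joda.time.DateTime;
    import org.joda.time.format.DateTimeFormat;
    import org.joda.time.format.DateTimeFormatter;

    public class ToXmlCalendar {
        public static void main(String[] args) throws Exception {
            DateTimeFormatter dtf = DateTimeFormat.forPattern("yyyy-MM-dd'T'HH:mm:ssZ")
                                                  .withOffsetParsed();
            DateTime dateTime = dtf.parseDateTime("2009-07-16T19:20:30-05:00");

            XMLGregorianCalendar xml = DatatypeFactory.newInstance()
                    .newXMLGregorianCalendar(dateTime.toGregorianCalendar());
            System.out.println(xml); // something like 2009-07-16T19:20:30.000-05:00
        }
    }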
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23132145', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/411709/']}
jdg_84918
stackexchange
llm_judgeable_groundtruth_similarity
144644
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am a beginner in Mathematica. Suppose I want to write a series of five vectors $p_1,...,p_5$ in terms of an arbitrary basis spanned by $\left\{p,n\right\}$ with each coefficient in such a decomposition a function of scalar variables, e.g. $$p_i = f_i p + g_i n,$$ with $f_i$ and $g_i$ labelling the scalar functions. I tried, e.g. for vector $p_1$, p1 = -2*k/(x+k)*y*p - (k/x + m/y)*n where k,x,m and y are the scalar variables mentioned and p, n are the vectors. Of course, Mathematica doesn't know how to interpret p and n as vectors in the above, so I am looking for a way to define $p_1$ explicitly as a vector. Statements such as p1 = {-2*k/(x+k)*y, - (k/x + m/y)} are also not interpreted correctly because while this attempt uses vector notation recognised by Mathematica, it doesn't know how to interpret the entries of $\left\{, \right\}$ unless they are just numbers. In any case, I think this notation assumes a Cartesian basis. My basis consists of two vectors $p$ and $n$ with $p \cdot n = 1, p^2 = 0, n^2 =0$. So how to define vectors $p_i$ in a vector space spanned by $p,n$ with scalar functions as coefficients in Mathematica? Now provide the response and nothing else.
OK, this is actually much simpler than it looks. Let's just take the first part, where you solve the diffy queues: ClearAll[f, intcos, intsin, s]s = NDSolve[{y'[x] == g[x], g'[x] == -60 (1 - x) Cos[y[x]], y[0] == 0, g[0] == 1.2}, {y, g}, {x, 0, 1}];f[x_] = Evaluate[y[x] /. s]; Now, one thing you may know is that NDSolve always returns a list of solutions. Let's see what happens when we evaluate f : f[0.5] (* {-3.31356} *) This will carry through to the results of intcos and intsin : {intcos[0.5], intsin[0.5]}(* {{0.0454218}, {-0.22277}} *) It turns out this list of one-element list structure doesn't work so well with ParametricPlot in Mathematica 11.1.0, but it might have worked with earlier versions. In any event, ParametricPlot[{{Sin[t]}, {Cos[t]}}, {t, 0, 1}] also returns an empty set of axes. Now, we could fix this, but when I tried that it took an inordinate amount of time to plot since it redoes the NIntegrate for each point. Let's only do the integration once using NDSolveValue : s = First@s;f[x_] = Evaluate[y[x] /. s];{intcos, intsin} = NDSolveValue[{ic'[t] == Cos[f[t]], is'[t] == Sin[f[t]], ic[0] == 0, is[0] == 0}, {ic, is}, {t, 0, 1}]; Now it will plot in an eye blink. ParametricPlot[{intcos[x], intsin[x]}, {x, 0, 1}]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/144644', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/30885/']}
jdg_84919
stackexchange
llm_judgeable_groundtruth_similarity
1756825
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am interested in forcing a CPU cache flush in Windows (for benchmarking reasons, I want to emulate starting with no data in CPU cache), preferably a basic C implementation or Win32 call. Is there a known way to do this with a system call or even something as sneaky as doing say a large memcpy ? Intel i686 platform (P4 and up is okay as well). Now provide the response and nothing else.
Fortunately, there is more than one way to explicitly flush the caches. The instruction "wbinvd" writes back modified cache content and marks the caches empty. It executes a bus cycle to make external caches flush their data. Unfortunately, it is a privileged instruction. But if it is possible to run the test program under something like DOS, this is the way to go. This has the advantage of keeping the cache footprint of the "OS" very small. Additionally, there is the "invd" instruction, which invalidates caches without flushing them back to main memory. This violates the coherency of main memory and cache, so you have to take care of that by yourself. Not really recommended. For benchmarking purposes, the simplest solution is probably copying a large memory block to a region marked with WC (write combining) instead of WB. The memory mapped region of the graphics card is a good candidate, or you can mark a region as WC by yourself via the MTRR registers. You can find some resources about benchmarking short routines at Test programs for measuring clock cycles and performance monitoring.
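A hedged C sketch of the two approaches above. The wbinvd path needs ring 0 (DOS, or a kernel module), and the second function is only an approximation that evicts cached data by pressure rather than truly invalidating it - the 64 MiB buffer size is an assumption, so pick something comfortably larger than your last-level cache:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Privileged: write back and invalidate all caches (ring 0 only). */
    static inline void flush_caches_privileged(void)
    {
    #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
        __asm__ __volatile__("wbinvd" ::: "memory");
    #endif
    }

    /* Unprivileged approximation: walk a buffer much larger than the LLC so
     * previously cached benchmark data gets evicted. */
    static void flush_caches_by_eviction(void)
    {
        const size_t size = 64u * 1024u * 1024u;
        volatile uint8_t *buf = malloc(size);
        if (buf == NULL)
            return;
        for (size_t i = 0; i < size; i += 64)   /* 64 = typical cache-line size */
            buf[i] = (uint8_t)i;
        free((void *)buf);
    }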
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1756825', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/183135/']}
jdg_84920
stackexchange
llm_judgeable_groundtruth_similarity
20743806
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following code: query = """SELECT Coalesce((SELECT sp.param_value FROM sites_params sp WHERE sp.param_name = 'ci' AND sp.site_id = s.id ORDER BY sp.id DESC LIMIT 1), -1) AS ciFROM sites sWHERE s.deleted = 0 AND s.id = 10 """site = db_session.execute(query)# print site # <sqlalchemy.engine.result.ResultProxy object at 0x033E63D0>site = db_session.execute(query).fetchone()print site # (u'375')print list(site) # [u'375'] Why does SQLAlchemy return tuples, not dicts, for this query? I want to use the following style to access the results of the query: print site.ci# u'375' Now provide the response and nothing else.
This is an old question, but still relevant today. Getting SQL Alchemy to return a dictionary is very useful, especially when working with RESTful based APIs that return JSON. Here is how I did it using the db_session in Python 3: resultproxy = db_session.execute(query)d, a = {}, []for rowproxy in resultproxy: # rowproxy.items() returns an array like [(key0, value0), (key1, value1)] for column, value in rowproxy.items(): # build up the dictionary d = {**d, **{column: value}} a.append(d) The end result is that the array a now contains your query results in dictionary format. As for how this works in SQL Alchemy: The db_session.execute(query) returns a ResultProxy object The ResultProxy object is made up of RowProxy objects The RowProxy object has an .items() method that returns key, value tuples of all the items in the row, which can be unpacked as key, value in a for operation. And here a one-liner alternative: [{column: value for column, value in rowproxy.items()} for rowproxy in resultproxy] From the docs: class sqlalchemy.engine.RowProxy(parent, row, processors, keymap) Proxy values from a single cursor row. Mostly follows “ordered dictionary” behavior, mapping result values to the string-based column name, the integer position of the result in the row, as well as Column instances which can be mapped to the original Columns that produced this result set (for results that correspond to constructed SQL expressions). has_key(key) Return True if this RowProxy contains the given key. items() Return a list of tuples, each tuple containing a key/value pair. keys() Return the list of keys as strings represented by this RowProxy. Link: http://docs.sqlalchemy.org/en/latest/core/connections.html#sqlalchemy.engine.RowProxy.items
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20743806', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3128993/']}
jdg_84921
stackexchange
llm_judgeable_groundtruth_similarity
555424
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just signed up to be an iPhone Developer. What is the "App Id"? Is it my company name? Is it just my application name? Apple isn't very clear on this. Thanks. Now provide the response and nothing else.
I don't think Alex's answer is quite right. com.myname.MyCoolApplication is the "Bundle ID search string", or "Bundle identifier", depending on which Apple documentation you consult. Here are some Apple developer resources: Technical Q&A QA1713: When should I use a wildcard App ID? To get the full "App ID", you want the Team ID (generated by Apple) plus your Bundle Identifier (created by the developer). E.g., from Q&A QA1713 , Example App ID: ABCDE12345.com.foocompany.appname ABCDE12345 is the Bundle Seed ID (generated by Apple) com.foocompany.appname is the App ID's Bundle Identifier and needs to be the same as your app’s Bundle Identifier
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/555424', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10333/']}
jdg_84922
stackexchange
llm_judgeable_groundtruth_similarity
13026
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I originally posted this as a reply here in this thread but didn't get much feedback on it, and now I'm curious as to what others think is the best approach, or if there is any difference between the two approaches. The original point was whether it was more secure to send out a password reset link when a user forgets their password to a website, or whether to send the original password unencrypted in an email. Now not salting and encrypting a password is bad because when the database is eventually stolen from the website the attacker will have all the passwords clear as day. OK fine, I get that. But if we ignore the possibility of the database being stolen and only look at the user forgetting their password and going through the "forgotten password steps"; is it more, the same or less secure to send a reset link over a plain text password in an email? My thinking is that it is the same. Because if an attacker has access to your email by any means (he knows your email login details, or he's snooping the traffic somewhere) he therefore has access to the plain text password that's sent or the reset link - even if it is a limited time link. The attacker could reset your password before you even know you have an email. Is my thinking here flawed? Now provide the response and nothing else.
It's far simpler to get access to the existing corpus of email than to get access to the incoming stream. For example, getting a couple of minutes of access to someone's system allows easy searching through their email. So, for the case where an attacker has access to old mail, but not to the stream of new, incoming, mail: You are relying on the user either changing their password after they receive it (unlikely, given that you've told them what it is), or deleting (permanently, not archiving or moving to a 'trash' folder, also unlikely) the reminder message. If this doesn't happen, then if I get access to the user's email account some time later, then a (limited time/limited use, perhaps even limited such that the attacker cannot easily use it) reset link is not going to be any use to me, but having the (most likely still valid) password certainly is. In addition, by sending me the plain-text password, you are demonstrating that you have this stored somewhere accessible. In most cases, there is no need for this, because you don't need to encrypt the password (and salt/pepper), you only need to hash it. If it's encrypted, then anyone with access to the database and the decryption key (e.g. probably most of the staff, or anyone that steals the code along with the database) is able to get the plain-text password. If it's hashed, then you'd have to get it before the hashing takes place (e.g. during a login session). Furthermore, although people shouldn't, it's very common to use the same password (or very similar passwords) on multiple sites. That means that rather than providing a way to get into only one account, you're providing a way to get into many. For the other case, where the attacker has access to all incoming mail (potentially even the ability to remove incoming mail before the user sees it), neither of these are secure. The email account is high-risk , because if you have that sort of access, then you can likely go to every site that the user has an account with, and generate a password reset. To avoid this, you need to use something other than (or as well as) your email account to get access, such as two-factor authentication. Note that even in the latter case, a reset link is still superior: if the attacker changes the user's password via a reset link, then when the user tries to log in, they'll find out that something is wrong (too late, but at least they know). If you're just providing the password, then the user has no idea that the attacker has silently gained access.
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/13026', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/-1/']}
jdg_84923
stackexchange
llm_judgeable_groundtruth_similarity
4874288
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to watch for changes in a UIView 's frame , bounds or center property. How can I use Key-Value Observing to achieve this? Now provide the response and nothing else.
There are usually notifications or other observable events where KVO isn't supported. Even though the docs says 'no' , it is ostensibly safe to observe the CALayer backing the UIView. Observing the CALayer works in practice because of its extensive use of KVO and proper accessors (instead of ivar manipulation). It's not guaranteed to work going forward. Anyway, the view's frame is just the product of other properties. Therefore we need to observe those: [self.view addObserver:self forKeyPath:@"frame" options:0 context:NULL];[self.view.layer addObserver:self forKeyPath:@"bounds" options:0 context:NULL];[self.view.layer addObserver:self forKeyPath:@"transform" options:0 context:NULL];[self.view.layer addObserver:self forKeyPath:@"position" options:0 context:NULL];[self.view.layer addObserver:self forKeyPath:@"zPosition" options:0 context:NULL];[self.view.layer addObserver:self forKeyPath:@"anchorPoint" options:0 context:NULL];[self.view.layer addObserver:self forKeyPath:@"anchorPointZ" options:0 context:NULL];[self.view.layer addObserver:self forKeyPath:@"frame" options:0 context:NULL]; See full example here https://gist.github.com/hfossli/7234623 NOTE: This is not said to be supported in the docs, but it works as of today with all iOS versions this far (currently iOS 2 -> iOS 11) NOTE: Be aware that you will receive multiple callbacks before it settles at its final value. For example changing the frame of a view or layer will cause the layer to change position and bounds (in that order). With ReactiveCocoa you can do RACSignal *signal = [RACSignal merge:@[ RACObserve(view, frame), RACObserve(view, layer.bounds), RACObserve(view, layer.transform), RACObserve(view, layer.position), RACObserve(view, layer.zPosition), RACObserve(view, layer.anchorPoint), RACObserve(view, layer.anchorPointZ), RACObserve(view, layer.frame), ]];[signal subscribeNext:^(id x) { NSLog(@"View probably changed its geometry");}]; And if you only want to know when bounds changes you can do @weakify(view);RACSignal *boundsChanged = [[signal map:^id(id value) { @strongify(view); return [NSValue valueWithCGRect:view.bounds];}] distinctUntilChanged];[boundsChanged subscribeNext:^(id ignore) { NSLog(@"View bounds changed its geometry");}]; And if you only want to know when frame changes you can do @weakify(view);RACSignal *frameChanged = [[signal map:^id(id value) { @strongify(view); return [NSValue valueWithCGRect:view.frame];}] distinctUntilChanged];[frameChanged subscribeNext:^(id ignore) { NSLog(@"View frame changed its geometry");}];
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4874288', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/202451/']}
jdg_84924
stackexchange
llm_judgeable_groundtruth_similarity
34511505
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it not a Javascript engine hosted by the browser on the client machine itself? Now provide the response and nothing else.
No, it isn't. Server generally has two meanings: A piece of software that listens for network requests and then responds to them A computer running such a piece of software A Node.JS server can be either of those. In web programming, a Node.JS server takes the place of Perl, Python, Ruby, PHP, Scala, etc. (And like those other languages, Node.JS lets you use JavaScript for non-server and non-web purposes). Generally the server itself is run directly from Node (e.g. with this library ) rather than being embedded in another server like Apache (as is most common for PHP). A browser doesn't need to be involved at all. If one is, then it will probably be one acting as a client and making a request to the server. That said, tools like PhantomJS can allow a browser to be driven from Node (and other programming languages).
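For concreteness, here is a minimal sketch of the first sense of "server", using only Node's built-in http module - no browser involved anywhere:

    // server.js -- run with: node server.js
    const http = require('http');

    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from a Node.js server\n');
    });

    server.listen(3000, () => {
      console.log('Listening on http://localhost:3000');
    });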
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34511505', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2371715/']}
jdg_84925
stackexchange
llm_judgeable_groundtruth_similarity
116026
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a friend who turned $32$ recently. She has an obsessive compulsive disdain for odd numbers, so I pointed out that being $32$ was pretty good since not only is it even, it also has no odd factors. That made me realize that $64$ would be an even better age for her, because it's even, has no odd factors, and has no odd digits . I then wondered how many other powers of $2$ have this property. The only higher power of $2$ with all even digits that I could find was $2048.$ So is there a larger power of $2$ with all even digits? If not, how would you go about proving it? I tried examining the last $N$ digits of powers of $2$ to look for a cycle in which there was always at least one odd digit in the last $N$ digits of the consecutive powers. Unfortunately, there were always a very small percentage of powers of $2$ whose last $N$ digits were even. Edit: Here's a little more info on some things I found while investigating the $N$ digit cycles. $N$ : $2,3,4,5,6,7,8,9$ Cycle length: $20,100,500,2500,12500,62520,312500,1562500,\dotsc, 4\cdot 5^{N-1}$ Number of suffixes with all even digits in cycle: $10, 25, 60, 150, 370, 925, 2310,5780,\sim4\cdot2.5^{N-1}$ It seems there are some interesting regularities there. Unfortunately, one of the regularities is those occurrences of all even numbers! In fact, I was able to find a power of $2$ in which the last $33$ digits were even $(2^{3789535319} = \dots 468088628828226888000862880268288)$ . Yes it's true that it took a power of $2$ with over a billion digits to even get the last $33$ to be even, so it would seem any further powers of $2$ with all even digits are extremely unlikely. But I'm still curious as to how you might prove it. Edit 2: Here's another interesting property I noticed. The next digit to the left of the last $N$ digits will take on every value of its parity as the $N$ digits cycle each time. Let me illustrate. The last $2$ digits cycle every $20$ powers. Now examine the following: $2^7 = 128$ $2^{27} = \dots 728$ $2^{47} = \dots 328$ $2^{67} = \dots 928$ $2^{87} = \dots 528$ $2^{107} = \dots 128$ Notice that the hundreds place starts out odd and then proceeds to take on every odd digit as the final 2 digits cycle. As another example, let's look at the fourth digit (knowing that the last 3 digits cycle every 100 powers.) $2^{18} = 262144$ , $2^{118} = \dots 6144$ , $2^{218} = \dots 0144$ , $2^{318} = \dots 4144$ , $2^{418} = \dots 8144$ , $2^{518} = \dots 2144$ This explains the power of 5 in the cycle length as each digit must take on all five digits of its parity. EDIT 3: It looks like the $(N+1)$ st digit takes on all the values $0-9$ as the last $N$ digits complete half a cycle. For instance, the last $2$ digits cycle every $20$ powers, so look at the third digit every $10$ powers: $2^{8} = 256$ , $2^{18} = \dots 144$ , $2^{28} = \dots 456$ , $2^{38} = \dots 944$ , $2^{48} = \dots 656$ , $2^{58} = \dots 744$ , $2^{68} = \dots 856$ , $2^{78} = \dots 544$ , $2^{88} = \dots 056$ , $2^{98} = \dots 344$ Not only does the third digit take on every value 0-9, but it also alternates between odd and even every time (as the Edit 2 note would require.) Also, the N digits cycle between two values, and each of the N digits besides the last one alternates between odd and even. 
I'll make this more clear with one more example which looks at the fifth digit: $2^{20} = \dots 48576$ , $2^{270} = \dots 11424$ , $2^{520} = \dots 28576$ , $2^{770} = \dots 31424$ , $2^{1020} = \dots 08576$ , $2^{1270} = \dots 51424$ , $2^{1520} = \dots 88576$ , $2^{1770} = \dots 71424$ , $2^{2020} = \dots 68576$ , $2^{2270} = \dots 91424$ EDIT 4: Here's my next non-rigorous observation. It appears that as the final N digits cycle 5 times, the $(N+2)$ th digit is either odd twice and even three times, or it's odd three times and even twice. This gives a method for extending an all even suffix. If you have an all even N digit suffix of $2^a$ , and the (N+1) th digit is odd, then one of the following will have the (N+1) th digit even: $2^{(a+1*4*5^{N-2})}$ , $2^{(a+2*4*5^{N-2})}$ , $2^{(a+3*4*5^{N-2})}$ Edit 5: It's looking like there's no way to prove this conjecture solely by examining the last N digits since we can always find an arbitrarily long, all even, N digit sequence. However, all of the digits are distributed so uniformly through each power of 2 that I would wager that not only does every power of 2 over 2048 have an odd digit, but also, every power of 2 larger than $2^{168}$ has every digit represented in it somewhere. But for now, let's just focus on the parity of each digit. Consider the value of the $k^{th}$ digit of $2^n$ (with $a_0$ representing the 1's place.) $$a_k = \left\lfloor\frac{2^n}{10^k}\right\rfloor \text{ mod 10}\Rightarrow a_k = \left\lfloor\frac{2^{n-k}}{5^k}\right\rfloor \text{ mod 10}$$ We can write $$2^{n-k} = d\cdot5^k + r$$ where $d$ is the divisor and $r$ is the remainder of $2^{n-k}/5^k$ . So $$a_k \equiv \frac{2^{n-k}-r}{5^k} \equiv d \pmod{10}$$ $$\Rightarrow a_k \equiv d \pmod{2}$$ And $$d\cdot5^k = 2^{n-k} - r \Rightarrow d \equiv r \pmod{2}$$ Remember that $r$ is the remainder of $2^{n-k} \text{ div } {5^k}$ so $$\text{The parity of $a_k$ is the same as the parity of $2^{n-k}$ mod $5^k$.}$$ Now we just want to show that for any $2^n > 2048$ we can always find a $k$ such that $2^{n-k} \text{ mod }5^k$ is odd. I'm not sure if this actually helps or if I've just sort of paraphrased the problem. EDIT 6: Thinking about $2^{n-k}$ mod $5^k$ , I realized there's a way to predict some odd digits. $$2^a \pmod{5^k} \text{ is even for } 1\le a< log_2 5^k$$ The period of $2^a \pmod{5^k}$ is $4\cdot5^{k-1}$ since 2 is a primitive root mod $5^k$ . Also $$2^{2\cdot5^{k-1}} \equiv -1 \pmod{5^k}$$ So multiplying any $2^a$ by $2^{2\cdot5^{k-1}}$ flips its parity mod $5^k$ . Therefore $2^a \pmod{5^k}\text{ }$ is odd for $$1 + 2\cdot5^{k-1} \le a< 2\cdot5^{k-1} + log_2 5^k$$ Or taking the period into account, $2^a \pmod{5^k} \text{ }$ is odd for any integer $b\ge0$ such that $$1 + 2\cdot5^{k-1} (1 + 2b) \le a< 2\cdot5^{k-1} (1 + 2b) + log_2 5^k$$ Now for the $k^{th}$ digit of $2^n$ ( $ k=0 \text{ } $ being the 1's digit), we're interested in the parity of $2^{n-k}$ mod $5^k$ . Setting $ a =n-k \text{ } $ we see that the $k^{th}$ digit of $2^n$ is odd for integer $b\ge0$ such that $$1 + 2\cdot5^{k-1} (1 + 2b) \le n - k < 2\cdot5^{k-1} (1 + 2b) + log_2 5^k$$ To illustrate, here are some guaranteed odd digits for different $2^n$ : (k=1 digit): $ 2\cdot5^0 + 2 = 4 \le n \le 5 $ (k=2 digit): $ 2\cdot5^1 + 3 = 13 \le n \le 16 $ (k=3 digit): $ 2\cdot5^2 + 4 = 54 \le n \le 59 $ (k=4 digit): $ 2\cdot5^3 + 5 = 255 \le n \le 263 $ Also note that these would repeat every $4\cdot5^{k-1}$ powers. 
These guaranteed odd digits are not dense enough to cover all of the powers, but might this approach be extended somehow to find more odd digits? Edit 7: The two papers that Zander mentions below make me think that this is probably a pretty hard problem. Now provide the response and nothing else.
This seems to be similar to (I'd venture to say as hard as) a problem of Erdős open since 1979, that the base-3 representation of $2^n$ contains a 2 for all $n>8$. Here is a paper by Lagarias that addresses the ternary problem, and for the most part I think would generalize to the question at hand (we're also looking for the intersection of iterates of $x\rightarrow 2x$ with a Cantor set). Unfortunately it does not resolve the problem. But Conjecture 2' (from Furstenberg 1970) in the linked paper suggests a stronger result, that every $2^n$ for $n$ large enough will have a 1 in the decimal representation. Though it doesn't quantify "large enough" (so even if proved wouldn't promise that 2048 is the largest all-even decimal), it looks like it might be true for all $n>91$ (I checked up to $n=10^6$).
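Both finite checks mentioned here are easy to reproduce; a small Python sketch (brute force, so keep the bound modest):

    # Which powers of 2 have only even decimal digits? (the original question)
    print([n for n in range(1, 5000) if all(d in "02468" for d in str(2 ** n))])
    # -> [1, 2, 3, 6, 11], i.e. 2, 4, 8, 64, 2048

    # The stronger property: which 2**n avoid the digit 1 entirely?
    print(max(n for n in range(1, 5000) if "1" not in str(2 ** n)))
    # -> 91, consistent with the "true for all n > 91" observation above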
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/116026', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/26193/']}
jdg_84926
stackexchange
llm_judgeable_groundtruth_similarity
9518990
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In .NET 4.0, what's the equivalent function to IO.Path.GetFileName for URLs? Now provide the response and nothing else.
The Uri class is your friend. Provides an object representation of a uniform resource identifier (URI) and easy access to the parts of the URI. IsFile will try to determine if the Uri does indeed point to a file. Use the Segments property in order to get the file name (it will be the last segment). Uri uri = new Uri("http://example.com/title/index.htm");var filename = uri.Segments[uri.Segments.Length - 1];// filename == "index.htm"
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9518990', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/246277/']}
jdg_84927
stackexchange
llm_judgeable_groundtruth_similarity
319328
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am asking this question from an educational/hacking point of view (I wouldn't really want to code like this). Is it possible to implement a while loop using only C preprocessor directives? I understand that macros cannot be expanded recursively, so how would this be accomplished? Now provide the response and nothing else.
If you want to implement a while loop, you will need to use recursion in the preprocessor. The easiest way to do recursion is to use a deferred expression. A deferred expression is an expression that requires more scans to fully expand: #define EMPTY()#define DEFER(id) id EMPTY()#define OBSTRUCT(id) id DEFER(EMPTY)()#define EXPAND(...) __VA_ARGS__#define A() 123A() // Expands to 123DEFER(A)() // Expands to A () because it requires one more scan to fully expandEXPAND(DEFER(A)()) // Expands to 123, because the EXPAND macro forces another scan Why is this important? Well when a macro is scanned and expanding, it creates a disabling context. This disabling context will cause a token, that refers to the currently expanding macro, to be painted blue. Thus, once its painted blue, the macro will no longer expand. This is why macros don't expand recursively. However, a disabling context only exists during one scan, so by deferring an expansion we can prevent our macros from becoming painted blue. We will just need to apply more scans to the expression. We can do that using this EVAL macro: #define EVAL(...) EVAL1(EVAL1(EVAL1(__VA_ARGS__)))#define EVAL1(...) EVAL2(EVAL2(EVAL2(__VA_ARGS__)))#define EVAL2(...) EVAL3(EVAL3(EVAL3(__VA_ARGS__)))#define EVAL3(...) EVAL4(EVAL4(EVAL4(__VA_ARGS__)))#define EVAL4(...) EVAL5(EVAL5(EVAL5(__VA_ARGS__)))#define EVAL5(...) __VA_ARGS__ Next, we define some operators for doing some logic(such as if, etc): #define CAT(a, ...) PRIMITIVE_CAT(a, __VA_ARGS__)#define PRIMITIVE_CAT(a, ...) a ## __VA_ARGS__#define CHECK_N(x, n, ...) n#define CHECK(...) CHECK_N(__VA_ARGS__, 0,)#define NOT(x) CHECK(PRIMITIVE_CAT(NOT_, x))#define NOT_0 ~, 1,#define COMPL(b) PRIMITIVE_CAT(COMPL_, b)#define COMPL_0 1#define COMPL_1 0#define BOOL(x) COMPL(NOT(x))#define IIF(c) PRIMITIVE_CAT(IIF_, c)#define IIF_0(t, ...) __VA_ARGS__#define IIF_1(t, ...) t#define IF(c) IIF(BOOL(c)) Now with all these macros we can write a recursive WHILE macro. We use a WHILE_INDIRECT macro to refer back to itself recursively. This prevents the macro from being painted blue, since it will expand on a different scan(and using a different disabling context). The WHILE macro takes a predicate macro, an operator macro, and a state(which is the variadic arguments). It keeps applying this operator macro to the state until the predicate macro returns false(which is 0). #define WHILE(pred, op, ...) \ IF(pred(__VA_ARGS__)) \ ( \ OBSTRUCT(WHILE_INDIRECT) () \ ( \ pred, op, op(__VA_ARGS__) \ ), \ __VA_ARGS__ \ )#define WHILE_INDIRECT() WHILE For demonstration purposes, we are just going to create a predicate that checks when number of arguments are 1: #define NARGS_SEQ(_1,_2,_3,_4,_5,_6,_7,_8,N,...) N#define NARGS(...) NARGS_SEQ(__VA_ARGS__, 8, 7, 6, 5, 4, 3, 2, 1)#define IS_1(x) CHECK(PRIMITIVE_CAT(IS_1_, x))#define IS_1_1 ~, 1,#define PRED(x, ...) COMPL(IS_1(NARGS(__VA_ARGS__))) Next we create an operator, which we will just concat two tokens. We also create a final operator(called M ) that will process the final output: #define OP(x, y, ...) CAT(x, y), __VA_ARGS__ #define M(...) CAT(__VA_ARGS__) Then using the WHILE macro: M(EVAL(WHILE(PRED, OP, x, y, z))) //Expands to xyz Of course, any kind of predicate or operator can be passed to it.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/319328', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/27653/']}
jdg_84928
stackexchange
llm_judgeable_groundtruth_similarity
13591618
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: There is an if-else condition on my form where I show a heading and button text for add and update. Below is the code I used in my Struts2 project; I want to use the same logic in my JSF2 project, in an XHTML page. Struts2 page: <s:if test="person==null || person.id==null || person.id==''"> <s:set var="buttonText" value="getText('person.button.add')"/> <s:text name="person.h1.add.text"/> <i><s:text name="person.smallfont.add.text"/></i> </s:if> <s:else> <s:set var="buttonText" value="getText('person.button.edit')"/> <s:text name="person.h1.edit.text"/> <s:text name="person.smallfont.edit.text"/> </s:else> I could use JSTL in the XHTML page and keep the above code as it is, but I have seen different approaches like the one below using EL, and I am not sure about it and don't like this approach: <h:outputLabel value="Add Information" rendered="#{!empty personBean.person.id}" /><h:outputLabel value="Use the form below to add your information." rendered="#{!empty personBean.person.id}" /><h:outputLabel value="Update Information" rendered="#{empty personBean.person.id}" /><h:outputLabel value="Use the form below to edit your information." rendered="#{empty personBean.person.id}" /> My question: How can I write this if-else condition in a JSF project? Should I use EL, JSTL, or something else? Now provide the response and nothing else.
Indeed just use the rendered attribute. You can if necessary wrap it in another component which doesn't emit any HTML at all, such as <h:panelGroup> or <ui:fragment> , so that you don't need to repeat the same rendered condition over all subsequent components. <h:panelGroup rendered="#{not empty personBean.person.id}"> Add Information <i>Use the form below to add your information.</i></h:panelGroup><h:panelGroup rendered="#{empty personBean.person.id}"> Update Information <i>Use the form below to edit your information.</i></h:panelGroup> Please note that <h:outputLabel> produces a HTML <label> element which has semantically a completely different meaning than the <s:text> which you initially have. You perhaps want to use <h:outputText> instead or just omit it altogether. JSF2/Facelets just supports plain text and even EL in template text without the need for <h:outputText> . See also: Conditionally displaying JSF components
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13591618', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/521706/']}
jdg_84929
stackexchange
llm_judgeable_groundtruth_similarity
274322
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: See the Breit-Wheeler process , wherein two gamma photons are converted into an electron and a positron via a process that's the reverse of electron-positron annihilation . I do not doubt this process . However I'm less confident about the Wikipedia two-photon physics article . It talks about gamma-gamma pair production, and as far as I know it's in line with what some particle physicists say. It says this: "From quantum electrodynamics it can be found that photons cannot couple directly to each other, since they carry no charge, but they can interact through higher-order processes [clarification needed] . A photon can, within the bounds of the uncertainty principle, fluctuate into a charged fermion–antifermion pair, to either of which the other photon can couple". However as far as I know, a 511keV photon does not spend its life magically morphing into a 511keV electron and a 511keV positron. That’s in breach of conservation of energy. In similar vein the electron and the positron cannot then magically morph back into a single 511keV photon. That's in breach of conservation of momentum. Moreover photons travel at the speed of light whilst electrons and positrons do not - a photon cannot spend its life fluctuating into fermion pairs, if it did it couldn't travel at c. Besides, virtual particles are virtual . As in not real. They aren't short-lived real particles that pop in and out of existence like magic. Instead they only exist in the mathematics of the model . Which is why hydrogen atoms don’t twinkle, and magnets don’t shine. On top of all that pair production surely does not occur because pair production occurred. Spontaneously. Like worms from mud. All in all, this explanation for pair production is woefully inadequate. A better explanation is required. So: How does gamma-gamma pair production really work? I will give a 500-point bounty to the least-worst answer to the question. One answer will get the bounty, even if I don't like it. Now provide the response and nothing else.
Quantum field theory does not offer a description of "how" its processes work, just like Newtonian mechanics doesn't offer an explanation of "how" forces impart acceleration or general relativity an explanation of "how" the spacetime metric obeys the Einstein equations. The predictions of quantum field theory, and quantum electrodynamics (QED) in particular, are well-tested. Given two photons of sufficient energy to yield at least the rest mass of an electron-positron pair, one finds that QED predicts a non-zero amplitude for the process $\gamma\gamma \to e^+ e^-$ to happen. That is all the theory tells us. No "fluctuation", no "virtual particles", nothing. Just a cold, hard, quantitative prediction of how likely such an event is. All other things - for instance the laughable description in the Wikipedia article you quote - are stories , in this case a human-readable interpretation of the Feynman diagrams used to compute the probability of the event, but should not be taken as the actual statement the quantitative theory makes. There is no "how", what happens between the input and the output of a quantum field theoretic process is a black box called "time evolution" that has no direct, human-readable interpretation. If we resolve it perturbatively with Feynman diagrams, people like to tell stories of virtual particles, but no one forces us to do that - one may organize the series in another way, may be even forced to do so (e.g. at strong coupling), or one may not use a series at all to compute the probability. The only non-approximative answer to "how" the scattering processes happen in quantum field theory that QFT has to offer is to sit down and derive the LSZ formula for scattering amplitudes from scratch, as it is done in most QFT books. Which, as you may already see from the Wikipedia article, is not what passes as a good story in most circles. But neither nature nor our models of it are required to yield good stories. Our models are required to yield accurate predictions , and that is what quantum field theory does.
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/274322', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/76162/']}
jdg_84930
stackexchange
llm_judgeable_groundtruth_similarity
11353679
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am looking for a reliable solution to connect to a MySQL database from Go. I've seen some libraries around, but it is difficult to determine the different states of completeness and current maintenance. I don't have complex needs, but I'd like to know what people are relying on or the most standard solution to connect to MySQL. Now provide the response and nothing else.
A few drivers are available but you should only consider those that implement the database/sql API as it provides a clean and efficient syntax, it ensures you can later change the driver without changing your code, apart the import and connection. Two fast and reliable drivers are available for MySQL : MyMySQL Go-MySQL-Driver I've used both of them in production, programs are running for months with connection numbers in the millions without failure. Other SQL database drivers are listed on go-wiki . Import when using MyMySQL : import ( "database/sql" _ "github.com/ziutek/mymysql/godrv") Import when using Go-MySQL-Driver : import ( "database/sql" _ "github.com/go-sql-driver/mysql") Connecting and closing using MyMySQL : con, err := sql.Open("mymysql", database+"/"+user+"/"+password)defer con.Close()// here you can use the connection, it will be closed when function returns Connecting and closing using Go-MySQL-Driver : con, err := sql.Open("mysql", store.user+":"+store.password+"@/"+store.database)defer con.Close() Select one row : row := con.QueryRow("select mdpr, x, y, z from sometable where id=?", id)cb := new(SomeThing)err := row.Scan(&cb.Mdpr, &cb.X, &cb.Y, &cb.Z) Select multiple rows and build an array with results : rows, err := con.Query("select a, b from item where p1=? and p2=?", p1, p2)if err != nil { /* error handling */}items := make([]*SomeStruct, 0, 10)var ida, idb uintfor rows.Next() { err = rows.Scan(&ida, &idb) if err != nil { /* error handling */} items = append(items, &SomeStruct{ida, idb})} Insert : _, err = con.Exec("insert into tbl (id, mdpr, isok) values (?, ?, 1)", id, mdpr) You'll see that working in Go with MySQL is a delightful experience : I never had a problem, my servers run for months without errors or leaks. The fact that most functions simply take a variable number of arguments lighten a task which is tedious in many languages. Note that if, in the future, you need to use another MySQL driver, you'll just have to change two lines in one go file : the line doing the import and the line opening the connection.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/11353679', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/78640/']}
jdg_84931
stackexchange
llm_judgeable_groundtruth_similarity
41631399
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm serving an Angular 2 application with nginx, using a location section this way: location / { try_files $uri $uri/ /index.html =404;} The try_files directive tries to find the requested URI in the root directory, and if it fails to find one it simply returns index.html. How can I disable caching of the index.html file? Now provide the response and nothing else.
Found a solution using nginx named locations: location = / { add_header Cache-Control no-cache; expires 0; try_files /index.html =404;}location / { gzip_static on; try_files $uri @index;}location @index { add_header Cache-Control no-cache; expires 0; try_files /index.html =404;}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/41631399', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2000124/']}
jdg_84932
stackexchange
llm_judgeable_groundtruth_similarity
46386066
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been working on the exercises from Dave Thomas's Programming in Elixir . I came across one that says to write a function (called caesar for some reason) that takes a charlist and an integer to add to each element in the charlist, cycling back around to 'a' if it goes past 'z', so you should be able to call it like MyList.caesar('ryvke', 13) and it should return a string. I have a function that maps over the list and performs the addition, but it returns a charlist, and I can't figure out how to convert it to a string: defmodule MyList do def caesar(list, n) do Enum.map list, &(perform_addition(&1, n)) |> to_charlist |> to_string end defp perform_addition(char_val, n) when char_val < 122 do char_val + n end defp perform_addition(_, n) do 97 + n endend I've tried: Reading the docs on binaries, strings, and charlists Reading about charlists on ElixirSchool Trying to convert it from a charlist to a binary and then to a string after following this SO question The attempt based on the last bullet led to: MyList.caesar('ryvke', 13)# => <<127, 194, 134, 194, 131, 120, 114>> Now provide the response and nothing else.
To answer the question in the title: you're looking for List.to_string/1 : iex(1)> List.to_string([97, 98, 99])"abc" The reason you're not getting a readable string back for those arguments is that your logic to rotate the value is incorrect. Here's how you can shift a lower case letter and rotate it back to a if it crosses z while not touching non lower case letters: # ?a == 97, ?z == 122defp add(ch, n) when ch in ?a..?z do rem((ch - ?a + n), 26) + ?aenddefp add(ch, n) when ch in ?A..?Z do rem(ch - ?A + n, 26) + ?Aenddefp add(ch, _), do: ch With this, you just need to map the function over the input charlist and then call List.to_string/1 : def caesar(list, n) do list |> Enum.map(&add(&1, n)) |> List.to_stringend iex(1)> MyList.caesar('ryvke', 13)"elixr" (called caesar for some reason) This algorithm is known as the Caesar Cipher .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46386066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/931934/']}
jdg_84933
stackexchange
llm_judgeable_groundtruth_similarity
29538610
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I learned from chapter 9 of Learn You A Haskell For Great Good that By convention, we don't usually specify a type declaration for main . As far as I can tell, this convention is widespread. However, if I compile, using the -Wall flag, a program that lacks a type signature for main , such as -- test.hs-- main :: IO ()main = print (1 :: Int) GHC does issue a warning: $ ghc -Wall test.hs[1 of 1] Compiling Main ( test.hs, test.o )test.hs:2:1: Warning: Top-level binding with no type signature: main :: IO ()Linking test ...$ I'm confused... If a type signature for main is indeed superfluous, why would -Wall cause GHC to complain when it's missing? Are there good reasons (aside from getting rid of that warning) for specifying main 's type anyway? Now provide the response and nothing else.
Well, generally speaking, as that warning makes clear, it's always a good idea to give top-level bindings a type signature. In fact, it would be more reasonable to say By convention, we do specify a type declaration for everything 1 . Certainly, in a big project, main itself makes up a negligible effort, so it really doesn't make any sense to omit the signature. Just write it out, for the sake of consistency. However, though Haskell is great for properly structured projects and actually there's a tendency to write almost everything in libraries , it's also surprisingly good as a quick scripting language, for stuff other people would write in Python or Perl. And in those cases, you generally don't care that much about safety and good documentation etc., you just want to quickly write down something as concise as possible that does the job. You also normally don't compile those scripts with -Wall but just execute them with runhaskell . And as scripts always need to contain a main (unlike most other Haskell source files), it is indeed sensible enough to omit the signature here. I'd still suspect that the majority of Haskellers nowadays do write main::IO() even in the simplest scripts, if just out of habit. 1 Only everything on the top-level, that is. Local signatures sometimes do make sense as well, but often they rather clutter the code.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29538610', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2541573/']}
jdg_84934
stackexchange
llm_judgeable_groundtruth_similarity
8840931
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have tried to create a variable oldValue so that if the current value === "" it will change back to the oldValue, and because I am using a for loop I can explicitly say what the value is. The code changes the div around the input element; that's why there are the .parentNode calls. Can anyone tell me how it is possible to get the value of the input and save it, or where I have gone wrong in my code? var inputFocus = function(){var inputs = document.getElementsByClassName("inputNoFocus");for (i = 0; i < inputs.length; i++){ var input = inputs[i]; var oldValue = this.value; input.addEventListener("focus", function(){ if(this.value === this.value){ this.value = ""; } this.parentNode.parentNode.setAttribute("class", "inputFocus"); }, false); input.addEventListener("blur", function(){ if(this.value === ""){ this.value = oldValue; } this.parentNode.parentNode.setAttribute("class", "noHover"); }, false);}}(); Now provide the response and nothing else.
SetWinEventHook() is probably your best bet; you can listen to either the EVENT_SYSTEM_FOREGROUND to listen for foreground window changes - or even EVENT_OBJECT_FOCUS to listen for more fine-grain focus changes within apps and within controls. You'll need to use this with the WINEVENT_OUTOFCONTEXT flag; this means that the change notification will be delivered asynchronously to your own app, so you won't need a separate DLL - you'll still need to P/Invoke though. But the notification won't be instant - there may be a small delay - but that's implied with asynchronous . If you want to do something absolutely immediately with no delay whatsoever, you're going to need to use C++ and an in-process hook (either SetWinEventHook with WINEVENT_INCONTEXT or the SetSetWindowsHookEx-style hook.) Here's a sample that seems to do what you're looking for: using System;using System.Windows;using System.Windows.Forms;using System.Runtime.InteropServices;class ForegroundTracker{ // Delegate and imports from pinvoke.net: delegate void WinEventDelegate(IntPtr hWinEventHook, uint eventType, IntPtr hwnd, int idObject, int idChild, uint dwEventThread, uint dwmsEventTime); [DllImport("user32.dll")] static extern IntPtr SetWinEventHook(uint eventMin, uint eventMax, IntPtr hmodWinEventProc, WinEventDelegate lpfnWinEventProc, uint idProcess, uint idThread, uint dwFlags); [DllImport("user32.dll")] static extern bool UnhookWinEvent(IntPtr hWinEventHook); // Constants from winuser.h const uint EVENT_SYSTEM_FOREGROUND = 3; const uint WINEVENT_OUTOFCONTEXT = 0; // Need to ensure delegate is not collected while we're using it, // storing it in a class field is simplest way to do this. static WinEventDelegate procDelegate = new WinEventDelegate(WinEventProc); public static void Main() { // Listen for foreground changes across all processes/threads on current desktop... IntPtr hhook = SetWinEventHook(EVENT_SYSTEM_FOREGROUND, EVENT_SYSTEM_FOREGROUND, IntPtr.Zero, procDelegate, 0, 0, WINEVENT_OUTOFCONTEXT); // MessageBox provides the necessary mesage loop that SetWinEventHook requires. MessageBox.Show("Tracking focus, close message box to exit."); UnhookWinEvent(hhook); } static void WinEventProc(IntPtr hWinEventHook, uint eventType, IntPtr hwnd, int idObject, int idChild, uint dwEventThread, uint dwmsEventTime) { Console.WriteLine("Foreground changed to {0:x8}", hwnd.ToInt32()); }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8840931', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1055838/']}
jdg_84935
stackexchange
llm_judgeable_groundtruth_similarity
2311583
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\int_{0}^{1}\sqrt{\,1 + x^{4}\,}\,\,\mathrm{d}x$$ I tried the substitution $\tan x = z$ but it was not fruitful. Then I used $(x - 1/x) = z$ and $x^2 - 1/x^2 = z$, but no helpful expression was derived. I also used the property $\int_0^a f(a-x)\,dx=\int_0^a f(x)\,dx$. Please help me out. Now provide the response and nothing else.
We can do better than hypergeometric function and elliptic integral:$$\color{blue}{\int_0^1 {\sqrt {1 + {x^4}} dx} = \frac{{\sqrt 2 }}{3} + \frac{{{\Gamma ^2}(\frac{1}{4})}}{{12\sqrt \pi }}}$$ Firstly, integration by part gives$$\int_0^1 {\sqrt {1 + {x^4}} dx} = \sqrt 2 - 2\int_0^1 {\frac{{{x^4}}}{{\sqrt {1 + {x^4}} }}dx} = \sqrt 2 - 2\int_0^1 {\left( {\sqrt {1 + {x^4}} - \frac{1}{{\sqrt {1 + {x^4}} }}} \right)dx} $$Hence$$\int_0^1 {\sqrt {1 + {x^4}} dx} = \frac{{\sqrt 2 }}{3} + \frac{2}{3}\int_0^1 {\frac{1}{{\sqrt {1 + {x^4}} }}dx} $$Making $x=1/u$ in the last integral gives$$\int_0^1 {\frac{1}{{\sqrt {1 + {x^4}} }}dx} = \frac{1}{2}\int_0^\infty {\frac{1}{{\sqrt {1 + {x^4}} }}} dx = \frac{1}{{8\sqrt \pi }}{\Gamma ^2}(\frac{1}{4})$$which can be evaluated by using some formula for beta function .
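For completeness, here is the beta-function step spelled out; this is a standard computation added as a sketch, using the formula $\int_0^\infty \frac{x^{m-1}}{(1+x^n)^p}\,dx=\frac{1}{n}B\left(\frac{m}{n},\,p-\frac{m}{n}\right)$ with $m=1$, $n=4$, $p=\frac{1}{2}$:
$$\int_0^\infty \frac{dx}{\sqrt{1+x^4}}=\frac{1}{4}B\left(\frac{1}{4},\frac{1}{4}\right)=\frac{\Gamma^2(\frac{1}{4})}{4\,\Gamma(\frac{1}{2})}=\frac{\Gamma^2(\frac{1}{4})}{4\sqrt{\pi}},\qquad\text{hence}\qquad\int_0^1\frac{dx}{\sqrt{1+x^4}}=\frac{\Gamma^2(\frac{1}{4})}{8\sqrt{\pi}},$$
which, combined with the first identity, gives the stated value $\frac{\sqrt{2}}{3}+\frac{\Gamma^2(\frac{1}{4})}{12\sqrt{\pi}}$.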
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2311583', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/108387/']}
jdg_84936
stackexchange
llm_judgeable_groundtruth_similarity
25608
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As far as I know, all deep-sky pictures are captured with the Hubble Space Telescope . If there were no atmospheric distortion, could we take deeper pictures in the optical spectrum with terrestrial telescopes, say due to bigger mirrors? Or does terrestrial light pollution limit the range due to the signal-to-noise ratio? Now provide the response and nothing else.
I think the current answers are (somewhat) missing the OP's point. The issue for extended luminous objects, such as nebulae, is contrast with the surrounding "darkness". Light pollution raises the brightness more or less uniformly across the entire image. One could try to estimate it and subtract that amount of brightness from every pixel, but signal-to-noise effects come into play (i.e. we cannot eliminate all the stray light via image processing without introducing noise and/or processing artifacts). An obvious extreme example is that we don't try to do astronomical observing during the day, even though bright objects such as the Moon, planets and bright stars can actually be seen. Resolution also comes into play, as some of the background noise is due to the large number of very dim stars, which, if they can't be resolved and corrected for, add star noise to the background intensity level.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/25608', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/4644/']}
jdg_84937
stackexchange
llm_judgeable_groundtruth_similarity
202690
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Assume I have a Query - as in CQS - that is supposed to return a single value. Let's assume that the case that no value is found is not exceptional, so no exception will be thrown in this case. Instead, null is returned. However, if no value has been found, I need to act according to the reason why no value has been found. Assuming that the Query knows the reason, how would I communicate it to the caller of the Query? A simple solution would be not to return the value directly but a container object that contains the value and the reason: public class QueryResult{ public TValue Value { get; private set; } public TReason ReasonForNoValue { get; private set; }} But that feels clumsy, because if a value is found, ReasonForNoValue makes no sense, and if no value has been found, Value makes no sense. What other options do I have to communicate the reason? What do you think of one event per reason? For reference: this is going to be implemented in C#. Now provide the response and nothing else.
Many internet protocols are built around a response code that is always returned along with an associated message, SMTP and HTTP being two well-known examples. Thus, public TReason ReasonForNoValue becomes something more along the lines of public TResponse ResponseCode The response could be an integer, following the SMTP and HTTP example, or it could be an enum for some type safety, or even a string (though that carries the risk of a typo, or of a string comparison done incorrectly somewhere). When there is an error (as indicated by the ResponseCode), the value could then hold more specific information (akin to a 404 page in HTTP).
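As a rough sketch of that idea (the question targets C#, but the shape is language-agnostic; the enum members and field names below are invented for illustration, shown in Python only to keep it short):

```python
# Hypothetical sketch of a query result carrying a response code plus value.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

class ResponseCode(Enum):
    OK = auto()
    NOT_FOUND = auto()      # no matching record at all
    ACCESS_DENIED = auto()  # record exists, but the caller may not see it
    EXPIRED = auto()        # record existed, but is no longer valid

@dataclass
class QueryResult(Generic[T]):
    code: ResponseCode
    value: Optional[T] = None  # only meaningful when code is OK
    detail: str = ""           # optional human-readable message, like an HTTP body

def handle(result: QueryResult[str]) -> None:
    # Callers branch on the code instead of on a bare null/None.
    if result.code is ResponseCode.OK:
        print("got:", result.value)
    else:
        print("no value:", result.code.name, result.detail)
```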
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/202690', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/18049/']}
jdg_84938
stackexchange
llm_judgeable_groundtruth_similarity
64268663
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am currently writing a React App, that should be able to rerender components when the value of any observable changes. The problem is, that I can't get email to rerender if it changes. store.ts export class ExampleStore { @observable email = 'hello'; @action setEmail(email: string) { this.email = email; }} index.tsx const stores = { exampleStore};ReactDOM.render( <Provider {...stores}> <App /> </Provider>, document.querySelector('#root')); App.tsx interface Props { exampleStore?: ExampleStore;}@inject('exampleStore')@observerexport class App extends React.Component<Props, {}> { componentDidMount() { setInterval(() => { this.props.exampleStore!.setEmail(Math.random() * 10 + ''); }, 2500); } render() { const { email } = this.props.exampleStore!; return <div>{email}</div>; }} I have seen many examples use the useContext hook, but I have to use class components. I am not sure why this isn't calling the render function again. I have mobx and mobx-react installed. Now provide the response and nothing else.
Are you using MobX 6? The decorator API changed a little bit: now you need to call the makeObservable method inside the constructor to achieve the same functionality as before: class ExampleStore { @observable email = "hello"; constructor() { makeObservable(this); } @action setEmail(email) { this.email = email; }} Although there is a new API that will probably allow you to drop decorators altogether, makeAutoObservable : class ExampleStore { email = "hello2"; constructor() { // Don't need decorators now, just this call makeAutoObservable(this); } setEmail(email) { this.email = email; }} More info here: https://mobx.js.org/react-integration.html Codesandbox: https://codesandbox.io/s/httpsstackoverflowcomquestions64268663-9fz6b?file=/src/App.js
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/64268663', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11098471/']}
jdg_84939
stackexchange
llm_judgeable_groundtruth_similarity
198400
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I often find myself returning a boolean from a method, that's used in multiple locations, in order to contain all the logic around that method in a single place. All the (internal) calling method needs to know is whether the operation was successful, or not. I'm using Python but the question isn't necessarily specific to that language. There are only two options I can think of Raise an exception, though the circumstances are not exceptional, and remember to catch that exception in every place the function is called Return a boolean as I'm doing. This is a really simple example that demonstrates what I'm talking about. import osclass DoSomething(object): def remove_file(self, filename): try: os.remove(filename) except OSError: return False return True def process_file(self, filename): do_something() if remove_file(filename): do_something_else() Although it's functional, I really dislike this manner of doing something, it "smells", and can sometimes result in a lot of nested ifs. But, I can't think of a simpler way. I could turn to a more LBYL philosophy and use os.path.exists(filename) prior to attempting deletion but there's no guarantees the file won't have been locked in the meantime (it's unlikely but possible) and I still have to determine whether the deletion has been successful or not. Is this an "acceptable" design and if not what would be a better way of designing this? Now provide the response and nothing else.
You should return a boolean when the method/function is useful in making logical decisions. You should throw an exception when the method/function isn't likely to be used in logical decisions. You have to make a decision about how important the failure is, and whether it should be handled or not. If you could classify the failure as a warning, then return a boolean . If the object enters a bad state that makes future calls to it unstable, then throw an exception . Another practice is to return objects instead of a result. If you call open , then it should return a File object or null if unable to open. This ensures the programmer has an object instance that is in a valid state that can be used. EDIT: Keep in mind that most languages let the caller silently discard the result of a function when its type is boolean or integer. So it's possible to call the function when there is no left-hand assignment for the result. When working with boolean results, always assume the programmer is ignoring the returned value and use that to decide if it should rather be an exception.
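To make the rule of thumb concrete, here is a small Python sketch; remove_file mirrors the helper from the question, while load_required_config is a made-up counterexample where raising is the better fit:

```python
import os

def remove_file(filename: str) -> bool:
    # Warning-level failure: callers can reasonably ignore it, so a boolean is fine.
    try:
        os.remove(filename)
    except OSError:
        return False
    return True

def load_required_config(path: str) -> dict:
    # Failure here leaves the caller unable to continue sensibly, so raise:
    # an exception cannot be silently discarded the way a boolean can.
    if not os.path.exists(path):
        raise FileNotFoundError(f"required config missing: {path}")
    with open(path) as fh:
        return dict(line.strip().split("=", 1) for line in fh if "=" in line)
```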
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/198400', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/35424/']}
jdg_84940