stackexchange
llm_judgeable_groundtruth_similarity
10750299
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a method declared like so:

/*!
\brief Removes the leading and trailing white space from a string.
\param s The string to remove the white space from.
\param white_chars Characters to be considered as whitespace.
*/
std::string Trim(const std::string &s, const std::string &white_chars = " \t\n\r");

The definition of the method is uninteresting, but here it is anyway:

std::string Trim(const std::string &s, const std::string &white_chars)
{
    size_t startidx = s.find_first_not_of(white_chars);
    if (startidx == std::string::npos)
        return "";
    size_t endidx = s.find_last_not_of(white_chars);
    return s.substr(startidx, endidx - startidx + 1);
}

Now in most usages of this method, I supply just the first argument. Valgrind gives me the following warning:

==3338== 68 bytes in 2 blocks are possibly lost in loss record 4,639 of 7,212
==3338==    at 0x4C2B1C7: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==3338==    by 0x728CA88: std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
==3338==    by 0x728E2B4: char* std::string::_S_construct<char*>(char*, char*, std::allocator<char> const&, std::forward_iterator_tag) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
==3338==    by 0x728E414: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&, unsigned long, unsigned long) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
==3338==    by 0x728E441: std::string::substr(unsigned long, unsigned long) const (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
==3338==    by 0x6965F0A: str::Trim(std::string const&, std::string const&) (appbase.cpp:90)
==3338==    by 0x5F481D7: netopt::Element::set_ID(std::string const&) (netopt_elem.cpp:85)

Note "blocks are possibly lost"?
I've seen memory leaks where Valgrind tells me "blocks are definitely lost", but this is a less certain message. So the question becomes, am I causing a memory leak by assigning a default value to an std::string & ? If so, what am I doing wrong? Now provide the response and nothing else.
There is no technical issue, but philosophically creating a temporary std::string at each call is not so nice. Especially with libstdc++ (which you appear to use), since it causes a memory allocation each time (lack of Short String Optimization). Since find_first_not_of has an overload taking a char const*, it would be better to provide two overloads instead:

// First, the one with the default argument, but without a temporary
std::string Trim(std::string const& s, char const* white_chars = " \t\n\r");

// Second, for convenience, one with a `std::string`
std::string Trim(std::string const& s, std::string const& white_chars);

This also means that when calling Trim(xx, "abc") you will avoid the temporary std::string being generated :) Of course, the overkill solution is to reuse already written code: Boost String Algorithm has many algorithms for string manipulation, including trim.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10750299', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/385094/']}
jdg_80441
stackexchange
llm_judgeable_groundtruth_similarity
31616287
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to build a few docker containers and I found the iteration process of editing the Dockerfile, and scripts run within it, clunky. I'm looking for best practices and to find out how others go about it. My initial process was:

docker build -t mycontainer mycontainer
docker run mycontainer
docker exec -i -t <container id> "/bin/bash"   # get into container to debug
docker rm -v <container id>
docker rmi mycontainer
Repeat

This felt expensive for each iteration, especially if it was a typo. This alternate process required a little bit less iteration:

Install vim in the Dockerfile
docker run mycontainer
docker exec -i -t <container id> "/bin/bash"   # get into container to edit scripts
docker cp to copy edited files out when done. If I need to run any command, I carefully remember and update the Dockerfile outside the container.
Rebuild the image without vim

This requires fewer iterations, but is not painless since everything's very manual and I have to remember which files changed and got updated. Now provide the response and nothing else.
I've been working with Docker in production since 0.7 and I've definitely felt your pain.

Dockerfile Development Workflow

Note: I always install vim in the container when I'm in active development. I just take it out of the Dockerfile when I release.

1. Set up tmux/GNU screen/iTerm/your favorite vertical-split console utility.

2. On the right console I run:

   $ vim Dockerfile

3. On the left console I run:

   $ docker build -t username/imagename:latest . && docker run -it --name dev-1 username/imagename:latest

4. Now split the left console horizontally, so that the run STDOUT is above and a shell is below. Here you will run:

   docker exec -it dev-1

   and edit internally, or do tests with:

   docker exec -it dev-1 <my command>

5. Every time you are satisfied with your work with the Dockerfile, save (:wq!) and then in the left console run the command above. Test the behavior. If you are not happy, run:

   docker rm dev-1

   and then edit again and repeat step 3.

Periodically, when I've built up too many images or containers, I do the following:

Remove all containers: docker rm $(docker ps -qa)
Remove all images: docker rmi $(docker images -q)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31616287', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1026572/']}
jdg_80442
stackexchange
llm_judgeable_groundtruth_similarity
88594
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Mathematics is exact. It is a beautiful language that allows us to express quantities that aren't possible to be represented physically. We build theoretical models of physical systems that work out great on paper, yet there are cases where we cannot solve these models exactly. We need these tools, such as perturbation theory, to get almost exact answers. For example, in Quantum Mechanics we use Perturbation Theory to solve the Schrodinger equation for the hydrogen atom. Shouldn't our theoretical models be refined if they cannot be solved exactly? Let us imagine we needed the exact answer to whatever our problem is; an approximation, no matter how good, just won't do. Shouldn't this be a fundamental rule of accepted theories; it needs to work out exactly? Now provide the response and nothing else.
No. There is nothing wrong with perturbation theory, or with theories with known, restricted accuracy. The point of theory is to explain the results of observation from as simple an initial theoretical standpoint as possible. Therefore:

- Since experiment always has a finite uncertainty, one can only ask that theory match the experimental value within its uncertainty range. Exact answers are not very useful in physics, because we know that other, smaller effects will always get in the way before an experiment matches the infinite set of significant figures we can draw out of an 'exact' answer. Ignoring these effects, or trying to minimize them, is very often the best way to hide your head in the sand while some amazing physics walks by.

- The point of theory is to make the initial standpoint as simple and understandable as possible, and to try and get some intuition for how that standpoint implies experimental results. It is a fact that most simple models will in general not be solvable.

More importantly, it is not a given that the universe is describable by solvable models. We're not out to impose structure on nature, we're out to find the structure that's there. If it turns out it's not an exactly solvable model, then that's the way it is.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/88594', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/32287/']}
jdg_80443
stackexchange
llm_judgeable_groundtruth_similarity
528576
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've started learning about DDD and wanted to know how others have organised their projects. I've started off by organising around my AggregateRoots:

MyApp.Domain (namespace for domain model)
MyApp.Domain.Product
  - Product
  - IProductService
  - IProductRepository
  - etc
MyApp.Domain.Comment
  - Comment
  - ICommentService
  - ICommentRepository
  - etc
MyApp.Infrastructure
  - ...
MyApp.Repositories
  - ProductRepository : IProductRepository
  - etc

The problem I've bumped into with this is that I have to reference my domain product as MyApp.Domain.Product.Product or Product.Product. I also get a conflict with my LINQ data model for Product... I have to use ugly lines of code to distinguish between the two, such as MyApp.Domain.Product.Product and MyApp.Repositories.Product. I am really interested to see how others have organised their solutions for DDD... I am using Visual Studio as my IDE. Thanks a lot. Now provide the response and nothing else.
I try to keep things very simple whenever I can, so usually something like this works for me:

Myapp.Domain - All domain specific classes share this namespace
Myapp.Data - Thin layer that abstracts the database from the domain
Myapp.Application - All "support code", logging, shared utility code, service consumers etc
Myapp.Web - The web UI

So classes will be for example:

Myapp.Domain.Sales.Order
Myapp.Domain.Sales.Customer
Myapp.Domain.Pricelist
Myapp.Data.OrderManager
Myapp.Data.CustomerManager
Myapp.Application.Utils
Myapp.Application.CacheTools

Etc. The idea I try to keep in mind as I go along is that the "domain" namespace is what captures the actual logic of the application. So what goes there is what you can talk to the "domain experts" (the dudes who will be using the application) about. If I am coding something because of something that they have mentioned, it should be in the domain namespace, and whenever I code something that they have not mentioned (like logging, tracing errors etc) it should NOT be in the domain namespace. Because of this I am also wary about making too complicated object hierarchies. Ideally a somewhat simplified drawing of the domain model should be intuitively understandable by non-coders. To this end I don't normally start out by thinking about patterns in much detail. I try to model the domain as simply as I can get away with, following just standard object-oriented design guidelines. What needs to be an object? How are they related? DDD in my mind is about handling complex software, but if your software is not itself very complex to begin with you could easily end up in a situation where the DDD way of doing things adds complexity rather than removes it. Once you have a certain level of complexity in your model you will start to see how certain things should be organised, and then you will know which patterns to use, which classes are aggregates etc.
In my example, Myapp.Domain.Sales.Order would be an aggregate root in the sense that when it is instanced it will likely contain other objects, such as a customer object and a collection of order lines, and you would only access the order lines for that particular order through the order object. However, in order to keep things simple, I would not have a "master" object that only contains everything else and has no other purpose, so the order class will itself have values and properties that are useful in the application. So I will reference things like:

Myapp.Domain.Sales.Order.TotalCost
Myapp.Domain.Sales.Order.OrderDate
Myapp.Domain.Sales.Order.Customer.PreferredInvoiceMethod
Myapp.Domain.Sales.Order.Customer.Address.Zip

etc.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/528576', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_80444
stackexchange
llm_judgeable_groundtruth_similarity
6467832
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two sets of temperature data, which have readings at regular (but different) time intervals. I'm trying to get the correlation between these two sets of data. I've been playing with Pandas to try to do this. I've created two timeseries, and am using TimeSeriesA.corr(TimeSeriesB). However, if the times in the 2 timeseries do not match up exactly (they're generally off by seconds), I get Null as an answer. I could get a decent answer if I could:

a) Interpolate/fill missing times in each TimeSeries (I know this is possible in Pandas, I just don't know how to do it)
b) Strip the seconds out of Python datetime objects (set seconds to 00, without changing minutes). I'd lose a degree of accuracy, but not a huge amount
c) Use something else in Pandas to get the correlation between two timeseries
d) Use something in Python to get the correlation between two lists of floats, each float having a corresponding datetime object, taking into account the time.

Anyone have any suggestions? Now provide the response and nothing else.
You have a number of options using pandas, but you have to make a decision about how it makes sense to align the data given that they don't occur at the same instants.

Use the values "as of" the times in one of the time series. Here's an example:

In [15]: ts
Out[15]:
2000-01-03 00:00:00    -0.722808451504
2000-01-04 00:00:00    0.0125041039477
2000-01-05 00:00:00    0.777515530539
2000-01-06 00:00:00    -0.35714026263
2000-01-07 00:00:00    -1.55213541118
2000-01-10 00:00:00    -0.508166334892
2000-01-11 00:00:00    0.58016097981
2000-01-12 00:00:00    1.50766289013
2000-01-13 00:00:00    -1.11114968643
2000-01-14 00:00:00    0.259320239297

In [16]: ts2
Out[16]:
2000-01-03 00:00:30    1.05595278907
2000-01-04 00:00:30    -0.568961755792
2000-01-05 00:00:30    0.660511172645
2000-01-06 00:00:30    -0.0327384421979
2000-01-07 00:00:30    0.158094407533
2000-01-10 00:00:30    -0.321679671377
2000-01-11 00:00:30    0.977286027619
2000-01-12 00:00:30    -0.603541295894
2000-01-13 00:00:30    1.15993249209
2000-01-14 00:00:30    -0.229379534767

You can see these are off by 30 seconds. The reindex function enables you to align data while filling forward values (getting the "as of" value):

In [17]: ts.reindex(ts2.index, method='pad')
Out[17]:
2000-01-03 00:00:30    -0.722808451504
2000-01-04 00:00:30    0.0125041039477
2000-01-05 00:00:30    0.777515530539
2000-01-06 00:00:30    -0.35714026263
2000-01-07 00:00:30    -1.55213541118
2000-01-10 00:00:30    -0.508166334892
2000-01-11 00:00:30    0.58016097981
2000-01-12 00:00:30    1.50766289013
2000-01-13 00:00:30    -1.11114968643
2000-01-14 00:00:30    0.259320239297

In [18]: ts2.corr(ts.reindex(ts2.index, method='pad'))
Out[18]: -0.31004148593302283

Note that 'pad' is also aliased by 'ffill' (but only in the very latest version of pandas on GitHub as of this time!).

Strip seconds out of all your datetimes.
The best way to do this is to use rename:

In [25]: ts2.rename(lambda date: date.replace(second=0))
Out[25]:
2000-01-03 00:00:00    1.05595278907
2000-01-04 00:00:00    -0.568961755792
2000-01-05 00:00:00    0.660511172645
2000-01-06 00:00:00    -0.0327384421979
2000-01-07 00:00:00    0.158094407533
2000-01-10 00:00:00    -0.321679671377
2000-01-11 00:00:00    0.977286027619
2000-01-12 00:00:00    -0.603541295894
2000-01-13 00:00:00    1.15993249209
2000-01-14 00:00:00    -0.229379534767

Note that if rename causes there to be duplicate dates, an Exception will be thrown.

For something a little more advanced, suppose you wanted to correlate the mean value for each minute (where you have multiple observations per second):

In [31]: ts_mean = ts.groupby(lambda date: date.replace(second=0)).mean()
In [32]: ts2_mean = ts2.groupby(lambda date: date.replace(second=0)).mean()
In [33]: ts_mean.corr(ts2_mean)
Out[33]: -0.31004148593302283

These last code snippets may not work if you don't have the latest code from https://github.com/wesm/pandas. If .mean() doesn't work on a GroupBy object per above, try .agg(np.mean). Hope this helps!
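The "as of" alignment that reindex(method='pad') performs can also be sketched without pandas at all. The following pure-Python illustration (all names and data made up) forward-fills one series onto the other's timestamps and then computes a Pearson correlation:

```python
from bisect import bisect_right

def asof_align(times_a, values_a, times_b):
    """For each time in times_b, take the most recent value of series A
    at or before that time (what reindex(method='pad') does)."""
    out = []
    for t in times_b:
        i = bisect_right(times_a, t) - 1  # index of last time_a <= t
        out.append(values_a[i] if i >= 0 else None)
    return out

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Series A sampled on the minute; series B sampled 30 seconds later
# (times expressed in seconds for simplicity).
times_a = [0, 60, 120, 180]
vals_a = [1.0, 2.0, 4.0, 8.0]
times_b = [30, 90, 150, 210]
vals_b = [1.1, 2.2, 3.9, 8.3]

aligned = asof_align(times_a, vals_a, times_b)
print(aligned)  # [1.0, 2.0, 4.0, 8.0]
print(pearson(aligned, vals_b))
```

This is only to show the mechanics; with real data the pandas calls above are both faster and less error-prone.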
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6467832', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/814005/']}
jdg_80445
stackexchange
llm_judgeable_groundtruth_similarity
13517252
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: After several days of happily hacking away on this C# app using Visual Studio 2008, I get struck by a barrage of error dialogs showing: Code generation for property ' valueMember ' failed. Error was: ' Object reference not set to an instance of an object .' This happens now often when I make a tiny change in the designer, e.g. shift a control a few pixels, and then try to save. Several such error dialogs appear each second, keeping me busy cancelling all those by hammering the Enter key while trying to get alt-F4 to get VS to close. Eventually I do get VS to close and to save the changes I made. After restarting VS, I do "clean" on the entire project, then "build" and everything works fine, the app runs fine, no problems. Until I make another slight change in the form designer. I don't know about any property valueMember in my app. This makes me crazy, it is a real showstopper for my project. Any help is appreciated. Now provide the response and nothing else.
As this is happening at design time, it is likely that you have a custom control which requires a parameter or other value which does not have a default. When in design view in Visual Studio; a control instance is created to render it on the visual editor, but if the control requires a property to be set before it can be rendered, it will result in an error. Can you check that all custom controls have default values, and anything referenced in the constructor that cannot have a default is wrapped by DesignMode property - see http://msdn.microsoft.com/en-us/library/system.componentmodel.component.designmode.aspx .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13517252', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1845672/']}
jdg_80446
stackexchange
llm_judgeable_groundtruth_similarity
11512313
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm confused as to what the Put monad offers over using Builder directly, in Data.Binary . I read the Binary Generation section of Dealing with Binary data, and it seems to assume that you should use Put , but it's pretty short doesn't explain why. Data.Binary.Put The Put monad. A monad for efficiently constructing lazy bytestrings. type Put = PutM () Put merely lifts Builder into a Writer monad, applied to (). Data.Binary.Builder Efficient construction of lazy byte strings. What is the point of a Writer monad applied to () ? I can see that Put is (a type synonym to) a monad whereas Builder is not, but I don't really get why Put would be needed. In my case, I'm rendering a 3D scene and writing each pixel as a 3 bytes, and then adding on the PPM format's header to the beginning (will use PNG later). Binary seems like it is meant to be instantiated for types that can be serialized and deserialized to and from binary data. This isn't exactly what I'm doing, but it felt natural to instantiate Binary for my colour type instance (Binary a) => Binary (Colour a) where put (Colour r g b) = put r >> put g >> put b get = Colour <$> get <*> get <*> get This makes it easy to put a Colour Word8 into 24 bits. But then I also have to tack on the header, and I'm not sure how I should do that. Is Builder meant to be hidden behind the scenes, or does it depend? Is the Binary class only for (de)serializing data, or for all binary generation purposes? Now provide the response and nothing else.
First of all note the conceptual difference. Builders are for efficient building of bytestring streams, while the PutM monad is really for serialization. So the first question you should ask yourself is whether you are actually serializing (to answer that ask yourself whether there is a meaningful and exact opposite operation – deserialization). In general I would go with Builder for the convenience it provides. However, not the Builder from the binary package, but in fact from the blaze-builder package. It's a monoid and has many predefined string generators. It is also very composable. Finally it's very fast and can in fact be fine-tuned. Last but not least if you really want speed, convenience and elegant code you will want to combine this with one of the various stream processor libraries around like conduit , enumerator or pipes .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11512313', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/148195/']}
jdg_80447
stackexchange
llm_judgeable_groundtruth_similarity
12058610
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a problem with Backbone Marionette and ItemView rendering. I need to pass a value from the CompositeView to each of its ItemViews. The value is contained correctly in the options array of the ItemView; however, I cannot access it from the templateHelpers method. So I tried to set it as a value of my view, but when I render the array it returns an "undefined" value.

The Composite View:

var TableView = Backbone.Marionette.CompositeView.extend({
  ....
  itemViewOptions: {
    foo: "bar",
  },

The Item View:

var RowView = Backbone.Marionette.ItemView.extend({
  template: RowTemplate,
  tagName: "tr",
  foo: "",
  initialize: function(){
    this.foo = this.options.foo;
  },
  templateHelpers: {
    foo: function(){
      return this.foo;
    }
  },

What am I doing wrong? How can I access the value and fetch it to the template? Thank you. Now provide the response and nothing else.
In the templateHelpers functions, the this variable is the object that was returned from the serializeData method. To get the itemViewOptions into the templateHelpers, then, you need to modify the serializeData method on your item view:

ItemView.extend({
  // ...

  serializeData: function(){
    // call the super method
    var data = Backbone.Marionette.ItemView.prototype.serializeData.apply(this, arguments);

    // augment the data the way you need
    data.foo = this.options.foo;

    // send back your custom data
    return data;
  }
});

This should make your data available in the templateHelpers.
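Stripped of the Backbone/Marionette specifics, the pattern is simply "call the base serializer, then merge in values from the view's options before templating". An illustrative framework-free sketch (all names made up):

```javascript
// Minimal stand-in for an ItemView: serializeData feeds the template.
function makeView(model, options) {
  return {
    model: model,
    options: options,
    // base behavior: copy the model's attributes
    baseSerializeData: function () {
      return Object.assign({}, this.model);
    },
    // the override recommended above: call the "super" method,
    // then augment the result with values from the view's options
    serializeData: function () {
      var data = this.baseSerializeData();
      data.foo = this.options.foo;
      return data;
    }
  };
}

var view = makeView({ id: 7 }, { foo: "bar" });
console.log(view.serializeData()); // { id: 7, foo: 'bar' }
```

The template then sees foo as ordinary data, with no need for templateHelpers at all.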
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12058610', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1437668/']}
jdg_80448
stackexchange
llm_judgeable_groundtruth_similarity
2799512
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to compute this limit using only elementary properties of definite integrals and without using any advanced integration technique $$\lim_{h \to 0} \int_0^h \frac {\sqrt {9 + t^2}}{h}dt $$ Thanks in advance Now provide the response and nothing else.
Take $$F(x) = \int_0^x \sqrt{9+t^2} dt$$ then we have by the definition of the derivative: $$F'(0) = \lim_{h\to 0} \frac{F(h) - F(0)}{h}$$ But $$\frac{F(h) - F(0)}{h} = \int_0^h \frac {\sqrt {9 + t^2}}{h}dt$$ So we have: $$\lim_{h \to 0} \int_0^h \frac {\sqrt {9 + t^2}}{h}dt = F'(0)$$ But we know $$F'(x) = \sqrt{9 + x^2}$$ by the Fundamental theorem of calculus. So $F'(0) = \ldots$
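As a sanity check (not part of the argument above), the limit can be approximated numerically with a midpoint Riemann sum; the average value tends to $F'(0) = \sqrt{9 + 0^2} = 3$:

```python
def avg_integrand(h, n=10000):
    """Midpoint Riemann-sum approximation of (1/h) * integral_0^h sqrt(9 + t^2) dt."""
    dt = h / n
    total = sum((9 + ((i + 0.5) * dt) ** 2) ** 0.5 for i in range(n))
    return total * dt / h

for h in [1.0, 0.1, 0.01]:
    print(h, avg_integrand(h))
# The printed values decrease toward sqrt(9) = 3 as h -> 0.
```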
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2799512', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/325705/']}
jdg_80449
stackexchange
llm_judgeable_groundtruth_similarity
321706
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have this congruence relation: 2013 ≡ 1012 (mod m). I am supposed to find all natural numbers m. Now provide the response and nothing else.
$2013\equiv 1012\pmod m\implies m|(2013-1012)\implies m|1001$. Thus all factors of $1001$ satisfy this.
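A brute-force check of this in Python (illustrative only; it simply enumerates the divisors of $2013 - 1012 = 1001 = 7 \cdot 11 \cdot 13$):

```python
def solutions(a, b):
    """All natural m with a ≡ b (mod m), i.e. all divisors of |a - b|."""
    d = abs(a - b)
    return sorted(m for m in range(1, d + 1) if d % m == 0)

ms = solutions(2013, 1012)
print(ms)  # [1, 7, 11, 13, 77, 91, 143, 1001] -- the divisors of 1001
assert all(2013 % m == 1012 % m for m in ms)
```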
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/321706', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/65182/']}
jdg_80450
stackexchange
llm_judgeable_groundtruth_similarity
10659875
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm sure this question has been asked before, my apologies for not finding it first. The original array:

[0] => Array
    (
        [categoryId] => 1
        [eventId] => 2
        [eventName] => 3
        [vendorName] => 4
    )
[1] => Array
    (
        [categoryId] => 5
        [eventId] => 6
        [eventName] => 7
        [vendorName] => 8
    )
[2] => Array
    (
        [categoryId] => 9
        [eventId] => 10
        [eventName] => 11
        [vendorName] => 12
    )

My hoped-for result out of:

print_r(get_values_from_a_key_in_arrays('categoryId', $array));

[0] => 1
[1] => 5
[2] => 9

I'm just looking for something cleaner than writing my own foreach-based function. If foreach is the answer, I already have that in place. Edit: I don't want to use a hard-coded key, I was just showing an example call to the solution. Thanks! ^_^

Quick Grab Solution for PHP 5.3:

private function pluck($key, $data) {
    return array_reduce($data, function($result, $array) use($key) {
        isset($array[$key]) && $result[] = $array[$key];
        return $result;
    }, array());
}

Now provide the response and nothing else.
So, the cool thing about higher-order collection/iterator functions such as pluck , filter , each , map , and friends is that they can be mixed and matched to compose a more complex set of operations. Most languages provide these types of functions (look for packages like collection, iterator, or enumeration/enumerable)...some provide more functions than others and you will commonly see that the functions are named differently across languages (i.e. collect == map, reduce == fold). If a function doesn't exist in your language, you can create it from the ones that do exist. As for your test case...we can use array_reduce to implement pluck . The first version I posted relied on array_map ; however, I agree with @salathe that array_reduce is more succinct for this task; array_map is an OK option, but you end up having to do more work in the end. array_reduce can look a bit odd at first, but if the callback is neatly organized, all is well. A less naive pluck would also check to see if it can "call" (a function/method) on the iterated value. In the naive implementation below, we assume the structure to be a hash (associative array). 
This will set up the test-case data (fixtures):

<?php
$data[] = array('categoryId' => 1, 'eventId' => 2, 'eventName' => 3, 'vendorName' => 4);
$data[] = array('categoryId' => 5, 'eventId' => 6, 'eventName' => 7, 'vendorName' => 8);
$data[] = array('categoryId' => 9, 'eventId' => 10, 'eventName' => 11, 'vendorName' => 12);
$data[] = array(/* no categoryId */ 'eventId' => 10, 'eventName' => 11, 'vendorName' => 12);
$data[] = array('categoryId' => false, 'eventId' => 10, 'eventName' => 11, 'vendorName' => 12);
$data[] = array('categoryId' => 0.0, 'eventId' => 10, 'eventName' => 11, 'vendorName' => 12);

Choose the version of pluck you'd prefer:

$preferredPluck = 'pluck_array_reduce'; // or pluck_array_map

"pluck" for PHP 5.3+: array_reduce provides a terse implementation, though not as easy to reason about as the array_map version:

function pluck_array_reduce($key, $data) {
    return array_reduce($data, function($result, $array) use($key) {
        isset($array[$key]) && $result[] = $array[$key];
        return $result;
    }, array());
}

"pluck" for PHP 5.3+: array_map isn't perfect for this so we have to do more checking (and it still doesn't account for many potential cases):

function pluck_array_map($key, $data) {
    $map = array_map(function($array) use($key) {
        return isset($array[$key]) ? $array[$key] : null;
    }, $data);

    // is_scalar isn't perfect; to make this right for you, you may have to adjust
    return array_filter($map, 'is_scalar');
}

"pluck" for legacy PHP <5.3: We could have used the legacy create_function; however, it is bad form, not recommended, and also not at all elegant, thus I've decided not to show it.

function pluck_compat($key, $data) {
    $map = array();
    foreach ($data as $array) {
        if (array_key_exists($key, $array)) {
            $map[] = $array[$key];
        }
    }
    unset($array);
    return $map;
}

Here we choose a version of "pluck" to call based on the version of PHP we are running. If you run the entire script, you should get the correct answer no matter what version you are on.
$actual = version_compare(PHP_VERSION, '5.3.0', '>=')
    ? $preferredPluck('categoryId', $data)
    : pluck_compat('categoryId', $data);

$expected = array(1, 5, 9, false, 0.0);
$variance = count(array_diff($expected, $actual));

var_dump($expected, $actual);

echo PHP_EOL;
echo 'variance: ', $variance, PHP_EOL;
print @assert($variance) ? 'Assertion Failed' : 'Assertion Passed';

Notice there is no ending '?>'. That is because it isn't needed. More good can come of leaving it off than from keeping it around. FWIW, it looks like this is being added to PHP 5.5 as array_column.
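The pattern itself is language-agnostic; for comparison, here is an illustrative Python rendering of pluck (names made up), mirroring the array_reduce version's behavior of skipping rows that lack the key rather than emitting null:

```python
def pluck(key, rows):
    """Collect rows[i][key] for every row that actually has the key."""
    return [row[key] for row in rows if key in row]

data = [
    {'categoryId': 1, 'eventId': 2},
    {'categoryId': 5, 'eventId': 6},
    {'eventId': 10},                  # no categoryId: skipped, not None
    {'categoryId': 0, 'eventId': 11}, # falsy values are still kept
]
print(pluck('categoryId', data))  # [1, 5, 0]
```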
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10659875', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/975132/']}
jdg_80451
stackexchange
llm_judgeable_groundtruth_similarity
51353110
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following simple CMake code:

cmake_minimum_required(VERSION 3.1)
project(PrintGenerators)

set(TEST_OR_GENERATOR "$<$<OR:0,0,1>:YAY!>")
message(STATUS ${TEST_OR_GENERATOR}) # Print out the result of the build

I expect this code to print out YAY!, but it does not. I instead get $<$<OR:0,0,1>:YAY!> as the output. How do I print the result of the evaluated generator expression during configuration? Now provide the response and nothing else.
How do I print the result of the evaluated generator expression during configuration? You cannot. Generator expressions are intended for things which are not exactly known at the configuration stage: they depend on the build type, which, in the case of multi-configuration generators, becomes known only at the build stage. You may, however, save the value of a generator expression into a file, but the file will be written only after the configuration stage has finished:

file(GENERATE OUTPUT <filename> CONTENT <string-with-generator-expression>)

A more detailed description of file(GENERATE) can be found in the documentation.
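A minimal sketch of that approach applied to the question's project (the output filename is made up; $<CONFIG> keeps per-configuration results apart):

```cmake
cmake_minimum_required(VERSION 3.1)
project(PrintGenerators)

# Evaluated at generation time, once per build configuration;
# the result lands in a file instead of in a message().
file(GENERATE
     OUTPUT "${CMAKE_BINARY_DIR}/genex_$<CONFIG>.txt"
     CONTENT "$<$<OR:0,0,1>:YAY!>\n")
```

After running cmake, the generated file contains YAY!, since $<OR:0,0,1> evaluates to 1.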
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51353110', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6525260/']}
jdg_80452
stackexchange
llm_judgeable_groundtruth_similarity
311652
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In 1963, this paper was written about the effects of radiation on solar panels. The paper states that: When electrons at energies greater than 145 KeV and protons at energies greater than 98eV bombard a silicon crystal, they can displace an atom from the crystal lattice, producing a lattice vacancy and a recoil atom which comes to rest as an interstitial atom. However, the resting energies of electrons and protons are far greater than this, at roughly 511 KeV and 938 MeV respectively. I concluded that the paper was referring to kinetic energy rather than total energy, and adjusted my calculations based on this conjecture. So: Was I correct to assume that the paper referred to kinetic energy, or was it instead some other measure of the particles' energy? More generally, is there a standard meaning for a particle's "energy" when referring to such particles moving at relativistic speeds? Now provide the response and nothing else.
Special relativity gives you $(mc^2)^2 = E^2 - (pc)^2$, where $m$ is the rest mass of the particle (a rest energy of 511 keV for electrons), $p$ is the momentum and $E$ the total energy. In the natural unit system where $c = 1$, the equation becomes $m^2 = E^2 - p^2$, which may look strange at first. When an electron has an energy of 145 keV, it must be the kinetic energy only: there is just no way to choose $p$ such that $E$ comes out smaller than $mc^2$. For higher energies, it could become ambiguous. If you have $E \gg mc^2$, then the actual difference is small because the particles are ultra-relativistic anyway. In a particle-physics context, where particles can annihilate each other, "energy" usually means the total energy including rest mass, just as the above equation says.
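As a quick numeric check, here is a sketch (plain Python, values from the question) of why 145 keV must be kinetic energy:

```python
import math

REST_ENERGY_KEV = 511.0  # electron rest energy m*c^2, in keV

def total_energy(kinetic_kev, rest_kev=REST_ENERGY_KEV):
    # Total energy is kinetic plus rest energy: E = T + m*c^2
    return kinetic_kev + rest_kev

def momentum(kinetic_kev, rest_kev=REST_ENERGY_KEV):
    # Invert E^2 = (p*c)^2 + (m*c^2)^2 for p*c (returned in keV)
    e = total_energy(kinetic_kev, rest_kev)
    return math.sqrt(e * e - rest_kev * rest_kev)

T = 145.0               # the paper's electron threshold, read as kinetic energy
print(total_energy(T))  # 656.0 keV total energy
print(momentum(T))      # ~411 keV; reading 145 keV as *total* energy instead
                        # would make E < m*c^2 and leave no real solution for p
```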
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/311652', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/145265/']}
jdg_80453
stackexchange
llm_judgeable_groundtruth_similarity
446420
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Say I have a bash function like so: gmx(){ echo "foo";} will this function implicitly return the exit value of the echo command, or is using return necessary? gmx(){ echo "foo"; return $?} I assume that the way bash works, the exit status of the final command of the bash function is the one that gets "returned", but not 100% certain. Now provide the response and nothing else.
return does an explicit return from a shell function or "dot script" (a sourced script). If return is not executed, an implicit return is made at the end of the shell function or dot script. If return is executed without a parameter, it is equivalent to returning the exit status of the most recently executed command. That is how return works in all POSIX shells. For example, gmx () { echo 'foo' return "$?"} is therefore equivalent to gmx () { echo 'foo' return} which is the same as gmx () { echo 'foo'} In general, it is very seldom that you need to use $? at all. It is really only needed if you need to save it for future use, for example if you need to investigate its value multiple times (in which case you would assign its value to a variable and perform a series of tests on that variable).
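A quick demonstration of the implicit return (function names are arbitrary):

```shell
#!/bin/sh
# A function's exit status is simply that of its last command.
ok()   { true;  }   # implicit "return 0"
fail() { false; }   # implicit "return 1"

ok   && echo "ok returned $?"     # prints: ok returned 0
fail || echo "fail returned $?"   # prints: fail returned 1
```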
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/446420', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/113238/']}
jdg_80454
stackexchange
llm_judgeable_groundtruth_similarity
41559660
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When using GitLab CI, as well as the gitlab-ci-multi-runner , I'm unable to get internally-started Docker containers to expose their ports to the "host", which is the Docker image in which the build is running. My .gitlab-ci.yml file: test: image: docker stage: test services: - docker:dind script: - APP_CONTAINER_ID=`docker run -d --privileged -p "9143:9143" appropriate/nc nc -l 9143` - netstat -a - docker exec $APP_CONTAINER_ID netstat -a - nc -v localhost 9143 My command: gitlab-ci-multi-runner exec docker --docker-privileged test The output: $ netstat -aActive Internet connections (servers and established)Proto Recv-Q Send-Q Local Address Foreign Address Statetcp 0 0 runner--project-1-concurrent-0:54664 docker:2375 TIME_WAITtcp 0 0 runner--project-1-concurrent-0:54666 docker:2375 TIME_WAITActive UNIX domain sockets (servers and established)Proto RefCnt Flags Type State I-Node Path$ docker exec $APP_CONTAINER_ID netstat -aActive Internet connections (servers and established)Proto Recv-Q Send-Q Local Address Foreign Address Statetcp 0 0 0.0.0.0:9143 0.0.0.0:* LISTENActive UNIX domain sockets (servers and established)Proto RefCnt Flags Type State I-Node Path$ nc -v localhost 9143ERROR: Build failed: exit code 1FATAL: exit code 1 What am I doing wrong here? Original Question Follows - above is a shorter, easier-to-test example I have an application image that listens on port 9143 . Its startup and config is managed via docker-compose.yml , and works great on my local machine with docker-compose up - I can access localhost:9143 without issue. However, when running on GitLab CI (the gitlab.com version) via a shared runner, the port doesn't seem to be exposed. 
The relevant portion of my .gitlab-ci.yml : test: image: craigotis/buildtools:v1 stage: test script: - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com/craigotis/myapp - docker-compose up -d - sleep 60 # a temporary hack to get the logs - docker-compose logs - docker-machine env - docker-compose port app 9143 - netstat -a - docker-compose ps - /usr/local/bin/wait-for-it.sh -h localhost -p 9143 -t 60 - cd mocha - npm i - npm test - docker-compose down The output is: $ docker-compose logs...app_1 | [Thread-1] INFO spark.webserver.SparkServer - == Spark has ignited ...app_1 | [Thread-1] INFO spark.webserver.SparkServer - >> Listening on 0.0.0.0:9143app_1 | [Thread-1] INFO org.eclipse.jetty.server.Server - jetty-9.0.z-SNAPSHOTapp_1 | [Thread-1] INFO org.eclipse.jetty.server.ServerConnector - Started ServerConnector@6919dc5{HTTP/1.1}{0.0.0.0:9143}...$ docker-compose port app 91430.0.0.0:9143$ netstat -aActive Internet connections (servers and established)Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 runner-e11ae361-project-1925166-concurrent-0:53646 docker:2375 TIME_WAIT tcp 0 0 runner-e11ae361-project-1925166-concurrent-0:53644 docker:2375 TIME_WAIT tcp 0 0 runner-e11ae361-project-1925166-concurrent-0:53642 docker:2375 TIME_WAIT Active UNIX domain sockets (servers and established)Proto RefCnt Flags Type State I-Node Path$ docker-compose psstty: standard input: Not a tty Name Command State Ports ----------------------------------------------------------------------------------------my_app_1 wait-for-it.sh mysql_serve ... Up 8080/tcp, 0.0.0.0:9143->9143/tcp mysql_server docker-entrypoint.sh --cha ... 
Up 3306/tcp $ /usr/local/bin/wait-for-it.sh -h localhost -p 9143 -t 60wait-for-it.sh: waiting 60 seconds for localhost:9143wait-for-it.sh: timeout occurred after waiting 60 seconds for localhost:9143 The contents of my docker-compose.yml : version: '2'networks: app_net: driver: bridgeservices: app: image: registry.gitlab.com/craigotis/myapp:latest depends_on: - "db" networks: - app_net command: wait-for-it.sh mysql_server:3306 -t 60 -- java -jar /opt/app*.jar ports: - "9143:9143" db: image: mysql:latest networks: - app_net container_name: mysql_server environment: - MYSQL_ALLOW_EMPTY_PASSWORD=true It seems like my application container is listening on 9143 , and it's properly exposed to the shared GitLab runner, but it doesn't seem to actually be exposed. It works fine on my local machine - is there some special workaround/tweak I need to make this work inside a Docker container running on GitLab? Now provide the response and nothing else.
The official GitLab CI documentation on gitlab.com refers to the example of PostgreSQL . Its working CI does not try to connect to localhost, but rather to the service name : The services keyword defines just another docker image that is run during your build and is linked to the docker image that the image keyword defines. This allows you to access the service image during build time. The service container for MySQL will be accessible under the hostname mysql . So, in order to access your database service you have to connect to the host named mysql instead of a socket or localhost . You could check if this applies in your case, and try accessing your application service at app:9143 instead of localhost:9143 .
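Applied to the job in the question, that would mean changing only the wait line (a sketch; the rest of the job stays as-is):

```yaml
test:
  image: craigotis/buildtools:v1
  stage: test
  script:
    - docker-compose up -d
    # Reach the compose service by its service name, not localhost:
    - /usr/local/bin/wait-for-it.sh -h app -p 9143 -t 60
```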
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41559660', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/88111/']}
jdg_80455
stackexchange
llm_judgeable_groundtruth_similarity
14118309
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have created a list view with multiple items in row. I have also created a search box above. I want to implement search functionality on the basis of particular fields of the list. How can I achieve this? Any help will be appreciated. Now provide the response and nothing else.
You have to use a model, a ListView, and a custom adapter with filtering for this. I have created a demo for this. Suppose you have a model named Product, and you are displaying its content in a custom listview where name and price are displayed in a textview. I mean in a custom row having two textviews, and you want to filter the list by one of the fields of the custom row. Here I have filtered on "name" Screenshots: Initial Filtered Source code Model public class Product { public String name; public Integer price; public Product(String name, Integer price) { super(); this.name = name; this.price = price; } public String getName() { return name; } public void setName(String name) { this.name = name; } public Integer getPrice() { return price; } public void setPrice(Integer price) { this.price = price; }} Activity with custom adapter and listview public class MainActivity extends Activity { private LinearLayout llContainer; private EditText etSearch; private ListView lvProducts; private ArrayList<Product> mProductArrayList = new ArrayList<Product>(); private MyAdapter adapter1; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); initialize(); // Add Text Change Listener to EditText etSearch.addTextChangedListener(new TextWatcher() { @Override public void onTextChanged(CharSequence s, int start, int before, int count) { // Call back the Adapter with current character to Filter adapter1.getFilter().filter(s.toString()); } @Override public void beforeTextChanged(CharSequence s, int start, int count,int after) { } @Override public void afterTextChanged(Editable s) { } }); } private void initialize() { etSearch = (EditText) findViewById(R.id.etSearch); lvProducts = (ListView) findViewById(R.id.lvProducts); } @Override protected void onResume() { // TODO Auto-generated method stub super.onResume(); mProductArrayList.add(new Product("a", 100)); mProductArrayList.add(new Product("b", 200));
mProductArrayList.add(new Product("c", 300)); mProductArrayList.add(new Product("d", 400)); mProductArrayList.add(new Product("e", 500)); mProductArrayList.add(new Product("f", 600)); mProductArrayList.add(new Product("g", 700)); mProductArrayList.add(new Product("h", 800)); mProductArrayList.add(new Product("i", 900)); mProductArrayList.add(new Product("j", 1000)); mProductArrayList.add(new Product("k", 1100)); mProductArrayList.add(new Product("l", 1200)); mProductArrayList.add(new Product("m", 1000)); mProductArrayList.add(new Product("n", 1300)); mProductArrayList.add(new Product("o", 1400)); mProductArrayList.add(new Product("p", 1500)); adapter1 = new MyAdapter(MainActivity.this, mProductArrayList); lvProducts.setAdapter(adapter1); } // Adapter Class public class MyAdapter extends BaseAdapter implements Filterable { private ArrayList<Product> mOriginalValues; // Original Values private ArrayList<Product> mDisplayedValues; // Values to be displayed LayoutInflater inflater; public MyAdapter(Context context, ArrayList<Product> mProductArrayList) { this.mOriginalValues = mProductArrayList; this.mDisplayedValues = mProductArrayList; inflater = LayoutInflater.from(context); } @Override public int getCount() { return mDisplayedValues.size(); } @Override public Object getItem(int position) { return position; } @Override public long getItemId(int position) { return position; } private class ViewHolder { LinearLayout llContainer; TextView tvName,tvPrice; } @Override public View getView(final int position, View convertView, ViewGroup parent) { ViewHolder holder = null; if (convertView == null) { holder = new ViewHolder(); convertView = inflater.inflate(R.layout.row, null); holder.llContainer = (LinearLayout)convertView.findViewById(R.id.llContainer); holder.tvName = (TextView) convertView.findViewById(R.id.tvName); holder.tvPrice = (TextView) convertView.findViewById(R.id.tvPrice); convertView.setTag(holder); } else { holder = (ViewHolder) convertView.getTag(); } 
holder.tvName.setText(mDisplayedValues.get(position).name); holder.tvPrice.setText(mDisplayedValues.get(position).price+""); holder.llContainer.setOnClickListener(new OnClickListener() { public void onClick(View v) { Toast.makeText(MainActivity.this, mDisplayedValues.get(position).name, Toast.LENGTH_SHORT).show(); } }); return convertView; } @Override public Filter getFilter() { Filter filter = new Filter() { @SuppressWarnings("unchecked") @Override protected void publishResults(CharSequence constraint,FilterResults results) { mDisplayedValues = (ArrayList<Product>) results.values; // has the filtered values notifyDataSetChanged(); // notifies the data with new filtered values } @Override protected FilterResults performFiltering(CharSequence constraint) { FilterResults results = new FilterResults(); // Holds the results of a filtering operation in values ArrayList<Product> FilteredArrList = new ArrayList<Product>(); if (mOriginalValues == null) { mOriginalValues = new ArrayList<Product>(mDisplayedValues); // saves the original data in mOriginalValues } /******** * * If constraint(CharSequence that is received) is null returns the mOriginalValues(Original) values * else does the Filtering and returns FilteredArrList(Filtered) * ********/ if (constraint == null || constraint.length() == 0) { // set the Original result to return results.count = mOriginalValues.size(); results.values = mOriginalValues; } else { constraint = constraint.toString().toLowerCase(); for (int i = 0; i < mOriginalValues.size(); i++) { String data = mOriginalValues.get(i).name; if (data.toLowerCase().startsWith(constraint.toString())) { FilteredArrList.add(new Product(mOriginalValues.get(i).name,mOriginalValues.get(i).price)); } } // set the Filtered result to return results.count = FilteredArrList.size(); results.values = FilteredArrList; } return results; } }; return filter; } }} activity_main.xml <?xml version="1.0" encoding="utf-8"?><LinearLayout 
xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical" > <EditText android:id="@+id/etSearch" android:layout_width="fill_parent" android:layout_height="wrap_content" /> <ListView android:id="@+id/lvProducts" android:layout_width="fill_parent" android:layout_height="wrap_content" ></ListView> </LinearLayout> row.xml <?xml version="1.0" encoding="utf-8"?><LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/llContainer" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="horizontal" > <TextView android:id="@+id/tvName" android:layout_width="0dp" android:layout_height="wrap_content" android:singleLine="true" android:layout_weight="1" /> <TextView android:id="@+id/tvPrice" android:layout_width="0dp" android:layout_height="wrap_content" android:singleLine="true" android:layout_weight="1" /></LinearLayout>
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14118309', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1740820/']}
jdg_80456
stackexchange
llm_judgeable_groundtruth_similarity
689958
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have a D2700 with 25 1Tb disks with a P421 raid card in a dl360p gen8 HP server. It was raided as two raid 5 with 2 spare each. I decided to make one raid 5 out of them, deleted one raid and added the disk to the other. Now weeks later looking at it with hpssacli (new to me) they look fine, all 25 disks show OK, the raid shows OK, but I still only have 11.8 TB, it should show around 23 TB..? This is what "sudo hpssacli ctrl all show config" gives me: Smart Array P421 in Slot 1 (sn: XXX)array A (SAS, Unused Space: 8974754 MB)logicaldrive 1 (11.8 TB, RAID 5, OK)physicaldrive 2E:1:1 (port 2E:box 1:bay 1, SAS, 1 TB, OK)physicaldrive 2E:1:2 (port 2E:box 1:bay 2, SAS, 1 TB, OK)physicaldrive 2E:1:3 (port 2E:box 1:bay 3, SAS, 1 TB, OK)physicaldrive 2E:1:4 (port 2E:box 1:bay 4, SAS, 1 TB, OK)physicaldrive 2E:1:5 (port 2E:box 1:bay 5, SAS, 1 TB, OK)physicaldrive 2E:1:6 (port 2E:box 1:bay 6, SAS, 1 TB, OK)physicaldrive 2E:1:7 (port 2E:box 1:bay 7, SAS, 1 TB, OK)physicaldrive 2E:1:8 (port 2E:box 1:bay 8, SAS, 1 TB, OK)physicaldrive 2E:1:9 (port 2E:box 1:bay 9, SAS, 1 TB, OK)physicaldrive 2E:1:10 (port 2E:box 1:bay 10, SAS, 1 TB, OK)physicaldrive 2E:1:11 (port 2E:box 1:bay 11, SAS, 1 TB, OK)physicaldrive 2E:1:12 (port 2E:box 1:bay 12, SAS, 1 TB, OK)physicaldrive 2E:1:13 (port 2E:box 1:bay 13, SAS, 1 TB, OK)physicaldrive 2E:1:14 (port 2E:box 1:bay 14, SAS, 1 TB, OK)physicaldrive 2E:1:17 (port 2E:box 1:bay 17, SAS, 1 TB, OK)physicaldrive 2E:1:18 (port 2E:box 1:bay 18, SAS, 1 TB, OK)physicaldrive 2E:1:19 (port 2E:box 1:bay 19, SAS, 1 TB, OK)physicaldrive 2E:1:20 (port 2E:box 1:bay 20, SAS, 1 TB, OK)physicaldrive 2E:1:21 (port 2E:box 1:bay 21, SAS, 1 TB, OK)physicaldrive 2E:1:22 (port 2E:box 1:bay 22, SAS, 1 TB, OK)physicaldrive 2E:1:23 (port 2E:box 1:bay 23, SAS, 1 TB, OK)physicaldrive 2E:1:24 (port 2E:box 1:bay 24, SAS, 1 TB, 
OK)physicaldrive 2E:1:25 (port 2E:box 1:bay 25, SAS, 1 TB, OK)physicaldrive 2E:1:15 (port 2E:box 1:bay 15, SAS, 1 TB, OK, spare)physicaldrive 2E:1:16 (port 2E:box 1:bay 16, SAS, 1 TB, OK, spare)Enclosure SEP (Vendor ID HP, Model D2700 SAS AJ941A) 378 (WWID: XXX, Port: 2E, Box: 1)Expander 379 (WWID: XXX, Port: 2E, Box: 1)SEP (Vendor ID PMCSIERA, Model SRCv8x6G) 380 (WWID: XXX) Now provide the response and nothing else.
This is a bit nuts... That's too many disks for a RAID5. It's offset by the fact that you have two hot-spare drives, but damn!! However, you've already expanded the underlying Array, but not the Logical Drive. Remember, the "Array" is the physical grouping of disks. The "Logical Drive" is what you assign RAID level to. You can have multiple Logical Drives within an Array. array A (SAS, Unused Space: 8974754 MB) That's your unused space, so you can create another Logical Drive out of the free space, or expand logicaldrive 1 . ctrl slot=1 ld 1 modify size=? or ctrl slot=1 ld 1 modify size=max <target> modify [size=#|min|max|maxmbr|?] [forced] Extends the logical drive to the specified size. The implied units are MB. max will extend the logical drive to its maximum size. maxmbr will extend to the maximum size the master partition table allows. The target can be any valid logical drive.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/689958', 'https://serverfault.com', 'https://serverfault.com/users/287497/']}
jdg_80457
stackexchange
llm_judgeable_groundtruth_similarity
101467
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How does Linux know that it has to give euid 0 (uid of root) to only certain process like passwd, setuid, etc. If a process can gain root permissions, will it not lead to potential security breaches in the Linux platform? For example, if I write a program that can can gain root user permissions like passwd, I may corrupt important system files like /etc/passwd, /etc/groups. How does Linux manage to still be secure? Now provide the response and nothing else.
The passwd program has the setuid bit set, which you can see with ls -l : -rwsr-xr-x 1 root root 39104 2009-12-06 05:35 /usr/bin/passwd It's the s (the fourth character of the line). All programs that have this permission bit set run as the owner of that program. In this example, the user is root (third word of the line). These setuid programs need to make sure that they don't damage anything, since every user of the system can run them with effective root privileges . That's why you can only change your own password. Linux and other similar operating systems are still secure because the authors of these setuid programs take a lot of care. See for example suexec.c from the Apache Web Server, which is a popular setuid program. There are unusually many comments in that source code.
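The setuid bit is just an ordinary permission bit; you can set it on any file you own and watch the s appear (a sketch for GNU/Linux, where stat -c is available, using a throwaway temp file):

```shell
#!/bin/sh
f=$(mktemp)
chmod 4755 "$f"      # the leading 4 is the setuid bit
ls -l "$f"           # permission string now starts with -rws
stat -c '%A' "$f"    # prints -rwsr-xr-x
rm -f "$f"
```

The bit only has its privilege-granting effect when the file is an executable program, which is why setting it on a plain file like this is harmless.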
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/101467', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/54569/']}
jdg_80458
stackexchange
llm_judgeable_groundtruth_similarity
6696174
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a fairly basic WPF UI whereby user requests cause a new tab to open in my TabControl. The TabControl is bound to an ObservableCollection<ViewModelBase> I add ViewModel instances to this collection, and the corresponding tab's content is displayed based on templates like this: <DataTemplate DataType="{x:Type viewModels:UserUploadsViewModel}"> <userControls:UserUploads /> </DataTemplate> Now let's say that inside of the UserUploads control I'd like to wire up a ViewModel in XAML to help with the designing, like this: <UserControl x:Class=".....UserUploads" ..... DataContext="{Binding Source={StaticResource ViewModelLocater}, Path=UserAdministrationViewModel}"> This property will return a ViewModel with live services at runtime, and a ViewModel with mock data at design time. Question: Will this XAML interfere with what I'm doing in binding a TabItems content to a ViewModel instance, and relying on the dataTemplate above to render the right View? If so, is there a way to get both of these concepts to work together? Now provide the response and nothing else.
There is an easier way to do this. Have a DesignTimeUserAdministrationViewModel, populate it with static data in its constructor, and refer to it in the UserControl as: <UserControl d:DataContext="{d:DesignInstance designTimeVMs:DesignTimeUserAdministrationViewModel, IsDesignTimeCreatable=True}"> This way you have design-time test data bound to d:DataContext and runtime live data bound to the actual DataContext . More details here .
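A sketch of the surrounding declarations (the class and clr-namespace names here are hypothetical; d and mc are the standard designer namespaces):

```xml
<UserControl x:Class="MyApp.Views.UserUploads"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:designTimeVMs="clr-namespace:MyApp.DesignTimeViewModels"
             mc:Ignorable="d"
             d:DataContext="{d:DesignInstance designTimeVMs:DesignTimeUserAdministrationViewModel,
                                              IsDesignTimeCreatable=True}">
    <!-- content bound against the view model's properties -->
</UserControl>
```

Because mc:Ignorable="d" strips the d: attributes at runtime, this does not interfere with the DataContext that the TabControl's DataTemplate supplies.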
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6696174', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/352552/']}
jdg_80459
stackexchange
llm_judgeable_groundtruth_similarity
44982412
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The user can logout himself when he/she clicks on the logout button but if the token is expired he/she cant logout because in my application, the token is used in both server side and front end. When user clicks on the logout button, the token from both server and browser is cleared if token is valid. There is a chance that when user does not log out and his/her token expires but is not being cleared in the browser. For addressing this situation, how do I check for token expiration every time the user visits in my app so if the token is expired, clear the token from the browser? I tried in saga which watches in the background every time the user refreshes in the page or switch to another page. I don't think this is an efficient way. I reckon middleware comes into play. function* loadInitialActions() { var dateNow = new Date(); console.log(jwtDecode(token).exp < dateNow.getTime() - jwtDecode(token).iat); const token = JSON.parse(localStorage.getItem("user")) && JSON.parse(localStorage.getItem("user"))["token"]; if ( token && jwtDecode(token).exp < dateNow.getTime() - jwtDecode(token).iat ) { yield put(LOGOUT_SUCCESS); }}function* initialize() { const watcher = yield fork(loadInitialActions); yield take([INITIALIZE_ERROR, INITIALIZE_SUCCESS]); yield cancel(watcher);}function* rootSaga() { console.log("rootSaga"); yield takeLatest(INITIALIZE, initialize);} So my question is how do I use the token expiration logic and logout user if token is expired from the middleware? Now provide the response and nothing else.
In my view middleware will be the best option. You can do something like this (note the guard against a missing token, and that next(action) is called exactly once per action): const checkTokenExpirationMiddleware = store => next => action => { const token = JSON.parse(localStorage.getItem("user")) && JSON.parse(localStorage.getItem("user"))["token"]; if (token && jwtDecode(token).exp < Date.now() / 1000) { localStorage.clear(); } next(action);}; You then have to wrap it in applyMiddleware
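The expiry test itself needs no library at all: a JWT payload is just base64-encoded JSON, so the jwtDecode call can be sketched by hand (the token built here is a fake, demonstration-only token):

```javascript
// Decode the payload segment of a JWT and check its `exp` claim.
function isExpired(token) {
  const payload = JSON.parse(
    Buffer.from(token.split(".")[1], "base64").toString("utf8")
  );
  return payload.exp < Date.now() / 1000;
}

// Build a fake header.payload.signature token for demonstration only:
const makeToken = exp =>
  ["e30", Buffer.from(JSON.stringify({ exp })).toString("base64"), "sig"].join(".");

console.log(isExpired(makeToken(0)));                        // true  (expired in 1970)
console.log(isExpired(makeToken(Date.now() / 1000 + 3600))); // false (expires in an hour)
```

In the middleware above, jwtDecode does this decoding for you (it does not verify the signature; that stays on the server).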
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/44982412', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6733101/']}
jdg_80460
stackexchange
llm_judgeable_groundtruth_similarity
40593214
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: While getting familiar with kubernetes I do see tons of tools that should helps me to install kubernetes anywhere, but I don't understand exactly what it does inside, and as a result don't understand how to trouble shoot issues. Can someone provide me a link with tutorial how to install kubernetes without any tools. Now provide the response and nothing else.
There are two good guides on setting up Kubernetes manually: Kelsey Hightower's Kubernetes the hard way Kubernetes guide on getting started from scratch Kelsey's guide assumes you are using GCP or AWS as the infrastructure, while the Kubernetes guide is a bit more agnostic. I wouldn't recommend running either of these in production unless you really know what you're doing. However, they are great for learning what is going on under the hood. Even if you just read the guides and don't use them to set up any infrastructure, you should gain a better understanding of the pieces that make up a Kubernetes cluster. You can then use one of the helpful setup tools to create your cluster, but now you will understand what it is actually doing and can debug when things go wrong.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40593214', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/306025/']}
jdg_80461
stackexchange
llm_judgeable_groundtruth_similarity
1783310
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have written a code in C# which is exceeding page width, so i want it to be broken into next line according to my formatting. I tried to search a lot to get that character for line break but was not able to find out. In VB.NET I use '_' for line break, same way what is used in C# ?I am trying to break a string. Thanks in AdvanceShantanu Gupta Now provide the response and nothing else.
In C# there's no line-continuation character like VB.NET's '_', and none is needed: the end of a logical statement is denoted by ';', so a statement can span as many physical lines as you like. If you wish to break a line of code over multiple lines, just hit the carriage return (or, if you want to add a line break programmatically for generated code, insert 'Environment.NewLine' or '\r\n'). Edit: In response to your comment: If you wish to break a string over multiple lines (i.e. programmatically), you should insert the Environment.NewLine character. This will take the environment into account in order to create the line ending. For instance, many environments, including Unix/Linux, only use a newline character (\n), but Windows uses both carriage return and line feed (\r\n). So to break a string you would use: string output = "Hello this is my string\r\nthat I want broken over multiple lines."; Of course, this would only be good for Windows, so before I get flamed for incorrect practice you should actually do this: string output = string.Format("Hello this is my string{0}that I want broken over multiple lines.", Environment.NewLine); Or if you want to break the literal over multiple lines in your IDE (note the trailing spaces, or the pieces run together), you would do: string output = "My string " + "is split over " + "multiple lines";
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1783310', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/203262/']}
jdg_80462
stackexchange
llm_judgeable_groundtruth_similarity
145773
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: On a whim I've recently decided to throw up the first proper website I created onto my local web server I use for development. I thought it'd be a great environment to throw some SQL at for injection as I know there are flaws and the site was only really meant for my own personal development. Anyway, to get to the point, after a few attempts the furthest I could get was to have the page return an error with the statement. I was trying to get into a specific test account I set up, (if the result returns more than one account an error's thrown, so I didn't expect selecting every username where 1=1 to work), but every time I got a response as if I had entered a normal, incorrect password. I took a look at the PHP code and turns out I was hashing the password before the query so the attack was being hashed before it could do any harm. Being new to web security as a whole, and having an interest in web development, I was wondering whether there are any vulnerabilities with this method of SQL injection prevention as I expect to have not thought something through. Just to clarify, this isn't meant to be a "look guys I've found something new" as there are plenty more brighter sparks in information security than myself, who would have likely figured this out already, but I'd like to know why this likely isn't suitable as a security mechanism. Now provide the response and nothing else.
So, hashing the user password before entering it into the query is a coincidental security feature to prevent SQL injection, but you can't necessarily do that with all user input. If I'm looking up a Customer by their Name and I have a query like Select * from Customer Where Name like '%userInput%' and I set userInput to a hashed version of what was typed in, it wouldn't work correctly even if Name were also hashed in the database; it would only work for exact matches. So if I had a Customer with Name "Sue" and I typed in a search for "sue", it wouldn't work. I also wouldn't know the names of the customers unless there was an exact match in my search, which isn't practical. The way you want to prevent SQL injection is to not build your queries like the one above: use parameterized queries (prepared statements), so that user input is always bound as data and never interpreted as part of the SQL text. A good guide for preventing SQL injection can be found here .
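A minimal illustration of the parameterized form (a sketch using Python's built-in sqlite3; placeholder syntax varies by driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (Name TEXT)")
conn.executemany("INSERT INTO Customer VALUES (?)", [("Sue",), ("Bob",)])

def find_customers(user_input):
    # The driver binds user_input as data, never as SQL text, so an
    # injection attempt is just an odd name to search for.
    cur = conn.execute(
        "SELECT Name FROM Customer WHERE Name LIKE ?",
        ("%" + user_input + "%",),
    )
    return [row[0] for row in cur]

print(find_customers("sue"))                      # ['Sue'] (SQLite LIKE is
                                                  #  case-insensitive for ASCII)
print(find_customers("'; DROP TABLE Customer;"))  # [] -- and the table survives
```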
{}
{'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/145773', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/133845/']}
jdg_80463
stackexchange
llm_judgeable_groundtruth_similarity
121356
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible for the kinetic-energy integral, Tij, to be negative? I was messing around with some HF code and found that the integral became negative on some off-diagonal terms. (This could also be a bug with the code.) If the kinetic energy integral is negative, what would the physical meaning be? Now provide the response and nothing else.
Generally, the off-diagonal elements of matrices, especially in physical systems, are interpreted as the coupling between whatever the $i$ and $j$ elements correspond to. So, in this case, a negative element of $T_{ij}$ corresponds to a negative kinetic coupling between atomic orbital basis functions. In plain language, this means that basis functions $i$ and $j$ tend to mutually lower the kinetic energy of an electron placed in one of those orbitals. I am not sure it really makes sense to give much of an interpretation to this, however, because what one is really interested in is diagonalizing the Fock matrix, of which the kinetic energy is only one part. Also, it is always possible to choose a basis where the kinetic energy matrix is diagonal, but again, this would not get you anywhere as in solving the HF problem, you will diagonalize the Fock Matrix which will surely put you in a basis in which the kinetic energy matrix is non-diagonal. What might be more interesting would be to take the actual molecular orbitals which are part of the solutions of the Roothan-Hall equations and re-compute the kinetic energy matrix over these orbitals. Looking at the coupling between these orbitals may be more easily interpretable for e.g. aromatic $\pi$ -systems.
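In symbols (assuming real orbitals, with $C$ the MO-coefficient matrix from the Roothaan-Hall solution), "re-computing the kinetic energy matrix over these orbitals" is just the transform

$$
T^{\mathrm{MO}}_{pq} \;=\; \sum_{ij} C_{ip}\, T_{ij}\, C_{jq},
\qquad
\mathbf{T}^{\mathrm{MO}} \;=\; \mathbf{C}^{\mathsf{T}}\, \mathbf{T}\, \mathbf{C}.
$$

Choosing $C$ to be the eigenvectors of $\mathbf{T}$ instead would diagonalize it, which is the basis-freedom point made above.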
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/121356', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/65943/']}
jdg_80464
stackexchange
llm_judgeable_groundtruth_similarity
29778254
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using the implicit execution context in Scala, will each new future be computed on a single, dedicated thread or will the computation be divided up and distributed to multiple threads in the thread pool? I don't know if this helps, but the background to this question is that I want to perform multiple concurrent operations using the HtmlUnit API. To do this, I would wrap each new WebClient instance in a Future . The only problem is that the WebClient class is not thread-safe , so I'm worried that the work might be broken up and sent to different threads. Now provide the response and nothing else.
One future is executed on a single thread. Several futures might be executed on several threads. So a thread runs at most one future at a time, and a single future never spans multiple threads. How does it work? When you create a Future it means that you've submitted a task to your thread pool - this one task can't be implicitly parallelized, so it's executed on one thread only. Tasks submitted to the pool are put into the pool's queue, and the executor takes tasks from that queue one by one, running each on some available thread - so several Futures may end up on several threads. About shared objects - the only way to execute operations safely on an object shared between futures is to use Executors.newFixedThreadPool(1) , which will use only one thread for the whole pool. Another solution is to clone the object for every future. Using actors (making your shared object an actor's state) should be the best option. If you use one object per future - everything should be fine. Note: The future's handler, like Future{ ... }.map(handler) , may be executed in a different thread than the future itself, but it actually creates another Future to obtain the result. Same for flatMap . More precisely, they use onComplete , which creates a CallbackRunnable to launch the handler (possibly in a different thread) after the old future succeeds - this callback just completes the newly created future, so it's still "no more than one thread per future".
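The answer above is about Scala's ExecutionContext, but the behaviour it describes is generic thread-pool behaviour and can be sketched with Python's analogous concurrent.futures API (an illustrative analogy only, not HtmlUnit or Scala code): each submitted task runs entirely on one worker thread, and a pool of size 1 (the analogue of Executors.newFixedThreadPool(1)) serializes every task, which is what makes it safe for a shared, non-thread-safe object.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def task(i):
    # The whole body of a single task runs on exactly one worker thread.
    return (i, threading.current_thread().name)

# A pool with one worker (analogue of Executors.newFixedThreadPool(1)):
# every task is executed on the same thread, one after another, so a shared
# non-thread-safe object touched only inside tasks is never accessed concurrently.
with ThreadPoolExecutor(max_workers=1) as pool:
    futures = [pool.submit(task, i) for i in range(4)]
    results = [f.result() for f in futures]

threads_used = {name for _, name in results}
print(len(threads_used))  # 1: all four tasks shared the single worker thread
```

With max_workers greater than 1 the tasks could land on several threads, which is exactly the "several futures on several threads" case from the answer.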
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29778254', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4815816/']}
jdg_80465
stackexchange
llm_judgeable_groundtruth_similarity
52664293
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have found a partial answer in this question: Adding Anaconda to Path or not But I still don't fully understand. I have had a lot of installation issues when switching from a normal installation of Python to Anaconda, requiring me to completely re-install Windows... So I want to get this right now. What happens internally when I add Anaconda (or Python for that matter) to the PATH? I plan on working with separate conda environments for different Python versions; what could go wrong if I add Anaconda to PATH in the installation? And what is the difference between doing it in the installation or doing it later through the command prompt? Will it affect my ability to integrate Anaconda with PyCharm? Now provide the response and nothing else.
PATH is an environment variable that holds a list of locations where executable programs lie (see also the Wikipedia page). Whenever you are in your command line and try to execute some program, for example regedit , the cmd does not magically know that you mean C:\Windows\regedit.exe . Instead, it searches all locations in your PATH for an executable named regedit and finds it in C:\Windows , which is one of the standard parts of PATH on Windows. That is also why messing with the PATH can be dangerous if you don't know what you are doing : things may stop working if, for example, you delete parts of the path or add custom directories to it. That being said, you should now have an idea what happens when you "Add Anaconda to PATH". It simply means that Anaconda adds the directory where its executables lie to the PATH, hence making them findable when, for example, you type conda in your cmd. That being said, adding Anaconda to PATH is something that is convenient , because the commands can always be found automatically and they will also be found by other programs scanning your PATH for a python executable. At the same time it is not necessary . When you use e.g. PyCharm, you can specify the path to the interpreter inside of PyCharm; it does not necessarily need to be present in your PATH. Note : I personally have it on my PATH because I am too lazy to open an Anaconda prompt each time I need it in a cmd; I do not see the harm in it if you understand the consequences, and it's my only Python installation anyway. Also helpful : On Windows, you can use the where command to find out from where commands are loaded. For example: where regedit gives C:\Windows\regedit.exe This can be especially helpful when trying to debug PATH issues.
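The search described above is easy to reproduce in code. As an illustrative sketch (the function name which and its path parameter are mine, not from the answer), here is both the standard-library helper shutil.which, which performs the same walk the shell does, and a hand-rolled version of the loop:

```python
import os
import shutil

# The stdlib helper: walks the directories in PATH and returns the first
# matching executable, or None if the command is not found anywhere.
print(shutil.which("python"))  # path to the interpreter, or None

# The same lookup spelled out by hand (POSIX-style; Windows additionally
# tries the extensions listed in PATHEXT, e.g. .exe, .bat):
def which(cmd, path=None):
    for d in (path or os.environ.get("PATH", "")).split(os.pathsep):
        candidate = os.path.join(d, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate  # first hit wins, just like the shell
    return None
```

The "first hit wins" rule is also why the order of PATH entries matters: prepending a directory shadows same-named commands further down the list.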
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/52664293', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7253901/']}
jdg_80466
stackexchange
llm_judgeable_groundtruth_similarity
5489889
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a few apps running Rails 3 on Ruby 1.9.2, deployed on an Ubuntu 10.04 LTS machine using nginx + Passenger. Now, I need to add a new app that runs on Ruby 1.8.7 (REE) and Rails 2. I managed to do that with RVM, Passenger Standalone and a reverse proxy. The problem is that, every time I have to restart the server (to install security updates for example), I have to start Passenger Standalone manually. Is there a way to start it automatically? I was told to use Monit or God, but I wasn't able to write a proper recipe that works with Passenger Standalone. I also had a few problems with God and RVM, so if you have a solution that doesn't use God, or if you know how to configure God/RVM properly, it's even better. Now provide the response and nothing else.
Here is what I got working. Using Upstart (Ubuntu 10.04) to start the passenger daemon My environment uses rvm with ruby 1.9.2 and apache and my rails app is deployed via capistrano # Upstart: /etc/init/service_name.confdescription "start passenger stand-alone"author "Me <[email protected]>"# Stanzas## Stanzas control when and how a process is started and stopped# See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn# When to start the servicestart on started mysql# When to stop the servicestop on runlevel [016]# Automatically restart process if crashedrespawn# Essentially lets upstart know the process will detach itself to the backgroundexpect fork# Run before processpre-start scriptend script# Start the processscript cd /var/path/to/app/staging/current sh HOME=/home/deploy /usr/local/rvm/gems/ruby-1.9.2-p136@appname/gems/passenger-3.0.7/bin/passenger start --user 'deploy' -p '5000' -a '127.0.0.1' -e 'production'end script and the apache config: <VirtualHost *:80> ServerName myapp.com PassengerEnabled off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass / http://127.0.0.1:5000/ ProxyPassReverse / http://127.0.0.1:5000/</VirtualHost> Upstart doesn't set ENV['HOME'] which passenger relies on, so we have to pass that when executing the passenger command. Other than that its pretty straight forward. A note for debugging: https://serverfault.com/questions/114052/logging-a-daemons-output-with-upstart (append something like >> /tmp/upstart.log 2>&1 to the second line in the script block) Hope this helps.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5489889', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/516450/']}
jdg_80467
stackexchange
llm_judgeable_groundtruth_similarity
1399037
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I create a global hot key to show a window by P/Invoking RegisterHotKey() . But to do this I need that window's HWND , which doesn't exist until the window is loaded, that is, shown for the first time. But I don't want to show the window before I can set the hot key. Is there a way to create an HWND for that window that is invisible to the user? Now provide the response and nothing else.
If you are targeting .NET 4.0 you can make use of the new EnsureHandle method available on the WindowInteropHelper : public void InitHwnd(){ var helper = new WindowInteropHelper(this); helper.EnsureHandle();} (thanks to Thomas Levesque for pointing this out. ) If you are targeting an older version of the .NET Framework, the easiest way is to show the window to get to the HWND while setting a few properties to make sure that the window is invisible and doesn't steal focus: var window = new Window() //make sure the window is invisible{ Width = 0, Height = 0, WindowStyle = WindowStyle.None, ShowInTaskbar = false, ShowActivated = false};window.Show(); Once you want to show the actual window you can then set the Content, the size and change the style back to a normal window.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1399037', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41071/']}
jdg_80468
stackexchange
llm_judgeable_groundtruth_similarity
53394835
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As I am new to Vue.js, can anyone help me with how to remove a class from an element, similar to what we do in jQuery? $('.class1').removeClass("class2"); Now provide the response and nothing else.
From what is written in their documentation, I'd say it's something you should not do in your code. Instead, your CSS classes should be bound to properties, and the presence of a class should be determined by a property's value. Example (from the docs): <div v-bind:class="{ active: isActive }"></div> The above syntax means the presence of the active class will be determined by the truthiness of the data property isActive (if isActive IS true - the class will be there). You can have multiple classes toggled by having more fields in the object. In addition, the v-bind:class directive can also co-exist with the plain class attribute. So given the following template: <div class="static" v-bind:class="{ active: isActive, 'text-danger': hasError }"></div> And the following data: data: { isActive: true, hasError: false} It will render: <div class="static active"></div> When isActive or hasError changes, the class list will be updated accordingly. For example, if hasError becomes true, the class list will become static active text-danger . I believe that's the right way to go :) Please check the documentation for more details. If for some reason you need to remove a class you could add jQuery as a dependency to your app and use it (but it's not preferable). Happy hacking :)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/53394835', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6304431/']}
jdg_80469
stackexchange
llm_judgeable_groundtruth_similarity
42515894
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a java error during the installation of Open-Xchange. /opt/open-xchange/sbin/registerserver -n oxserver -A oxadminmaster -P admin_master_password calls /opt/open-xchange/lib/oxfunctions.sh: line 73: [: 9-internal: integer expression expectedserver could not be registered:Error: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused Here is the oxunctions.sh: ### OPEN-XCHANGE legal information## All intellectual property rights in the Software are protected by# international copyright laws.### In some countries OX, OX Open-Xchange, open xchange and OXtender# as well as the corresponding Logos OX Open-Xchange and OX are registered# trademarks of the OX Software GmbH group of companies.# The use of the Logos is not covered by the GNU General Public License.# Instead, you are allowed to use these Logos according to the terms and# conditions of the Creative Commons License, Version 2.5, Attribution,# Non-commercial, ShareAlike, and the interpretation of the term# Non-commercial applicable to the aforementioned license is published# on the web site http://www.open-xchange.com/EN/legal/index.html.## Please make sure that third-party modules and libraries are used# according to their respective licenses.## Any modifications to this package must retain all copyright notices# of the original copyright holder(s) for the original code used.## After any such modifications, the original and derivative code shall remain# under the copyright of the copyright holder(s) and/or original author(s)per# the Attribution and Assignment Agreement that can be located at# http://www.open-xchange.com/EN/developer/. 
The contributing author shall be# given Attribution for the derivative code and a license granting use.## Copyright (C) 2016-2020 OX Software GmbH# Mail: [email protected]### This program is free software; you can redistribute it and/or modify it# under the terms of the GNU General Public License, Version 2 as published# by the Free Software Foundation.## This program is distributed in the hope that it will be useful, but# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License# for more details.## You should have received a copy of the GNU General Public License along# with this program; if not, write to the Free Software Foundation, Inc., 59# Temple Place, Suite 330, Boston, MA 02111-1307 USA# debian postinst is going to fail when not set'ting +eset +e# CentOS moves utils like pidof to /sbin so we have to append it to $PATH if# not already contained[[ "$PATH" =~ (^|:)/sbin:* ]] || PATH=${PATH}:/sbinJAVA_BIN=ox_set_JAVA_BIN() { JAVA_BIN=$(which java) if [ -z "$JAVA_BIN" ]; then local jb=$JAVA_HOME/bin/java if [ -x $jb ]; then JAVA_BIN=$jb fi fi if [ -z "$JAVA_BIN" ]; then local jb=$JRE_HOME/bin/java if [ -x $jb ]; then JAVA_BIN=$jb fi fi test -x $JAVA_BIN || die "$0: unable to get path to java vm" version=$(detect_java_version) if [ $version -lt 7 ]; then JAVA_BIN=/opt/open-xchange/sbin/insufficientjava fi}# Detect the version of the selected JVM## Pre JEP 223:# JVMs output e.g: java version "1.7.0_80" as part of their version# specification. From this line we simply extract the minor version which would# be 7 in this case.## Post JEP 223:# JVMs output e.g: java version "9-ea", "9" or "9.0.1" as part of their version# specification. 
From this line we simply extract the major version which would# be 9 in this case.## Returns the detected version or -1 if it can't be detectedfunction detect_java_version () { version_line_array=( $($JAVA_BIN -version 2>&1 | grep version) ) unquoted_version=${version_line_array[2]//\"/} version=-1 if [[ "$unquoted_version" =~ ^1\..* ]] then version_components=( ${unquoted_version//./ } ) version=${version_components[1]} elif [[ "$unquoted_version" =~ ^[1-9]([0-9])*-ea$ ]] then version_components=( ${unquoted_version//./ } ) version=${unquoted_version//-ea/} elif [[ "$unquoted_version" =~ ^[1-9]([0-9])*(\..*)* ]] then version_components=( ${unquoted_version//./ } ) version=${version_components[0]} fi echo $version}DEBIAN=1REDHAT=2SUSE=4LSB=8UCS=16ox_system_type() { local ret=0 local isucs=$(uname -r|grep ucs) if [ -f /etc/debian_version ] && [ -z "$isucs" ]; then ret=$(( $ret | $DEBIAN )) elif [ -n "$isucs" ]; then ret=$(( $ret | $UCS)) elif [ -f /etc/SuSE-release ]; then ret=$(( $ret | $SUSE )) ret=$(( $ret | $LSB )) elif [ -f /etc/redhat-release ]; then ret=$(( $ret | $REDHAT )) ret=$(( $ret | $LSB )) fi return $ret}# init script stuffox_start_daemon() { local path="$1" local name="$2" local user="$3" local group="$4" test -z "$path" && die "ox_start_daemon: missing path argument (arg 1)" test -x $path || die "ox_start_daemon: $path is not executable" test -z "$name" && die "ox_start_daemon: missing name argument (arg 2)" local runasuser= test -n "$user" && runasuser="--chuid $user" local runasgroup= test -n "$group" && runasgroup="--group $group" ox_system_type local type=$? if [ $type -eq $DEBIAN -o $type -eq $UCS ]; then start-stop-daemon $runasuser $runasgroup --background --start --oknodo --startas $path --make-pidfile --pidfile /var/run/${name}.pid elif [ $(( $type & $LSB )) -eq $LSB ]; then if [ -n "$user" ] && [ "$user" != "root" ]; then su -s /bin/bash $user -c $path > /dev/null 2>&1 & echo $! > /var/run/${name}.pid else $path > /dev/null 2>&1 & echo $! 
> /var/run/${name}.pid fi else die "Unable to handle unknown system type" fi}ox_is_running() { local name="$1" local pattern="$2" local pid="$3" test -z "$name" && die "ox_is_running: missing name argument (arg 1)" test -z "$pattern" && die "ox_is_running: missing pattern argument (arg 2)" if [ -z "$pid" ]; then if [ -e /var/run/${name}.pid ]; then read pid < /var/run/${name}.pid fi fi if [ -n "$pid" ]; then # take care nothing influences line length if ps output COLUMNS=1000 if ps $pid | grep "$pattern" > /dev/null; then return 0 else return 1 fi else return 1 fi}ox_stop_daemon() { local name="$1" local nonox="$2" test -z "$name" && die "ox_stop_daemon: missing name argument (arg 1)" ox_system_type local type=$? if [ ! -f /var/run/${name}.pid ]; then return 0 fi read PID < /var/run/${name}.pid test -z "$PID" && { echo "No process in pidfile '/var/run/${name}.pid' found running; none killed."; return 1; } if [ -z "$nonox" ]; then ps $PID > /dev/null && /opt/open-xchange/sbin/shutdown -w > /dev/null 2>&1 fi ps $PID > /dev/null && kill -QUIT $PID ps $PID > /dev/null && kill -TERM $PID rm -f /var/run/${name}.pid}ox_daemon_status() { local pidfile="$1" test -z "$pidfile" && die "ox_daemon_status: missing pidfile argument (arg 1)" if [ ! -f $pidfile ]; then # not running return 1 fi read PID < $pidfile running=$(ps $PID | grep $PID) if [ -n "$running" ]; then # running return 0 else # not running return 1 fi}# usage:# ox_set_property property value /path/to/file#ox_set_property() { local prop="$1" local val="$2" local propfile="$3" test -z "$prop" && die "ox_set_property: missing prop argument (arg 1)" test -z "$propfile" && die "ox_set_property: missing propfile argument (arg 3)" test -e "$propfile" || die "ox_set_property: $propfile does not exist" local tmp=${propfile}.tmp$$ cp -a --remove-destination $propfile $tmp ox_system_type local type=$? if [ $type -eq $DEBIAN -o $type -eq $UCS ]; then local origfile="${propfile}.dpkg-new" if [ ! 
-e $origfile ]; then local origfile="${propfile}.dpkg-dist" fi else local origfile="${propfile}.rpmnew" fi if [ -n "$origfile" ] && [ -e "$origfile" ]; then export origfile export propfile export prop export val perl -e 'use strict;open(IN,"$ENV{origfile}") || die "unable to open $ENV{origfile}: $!";open(OUT,"$ENV{propfile}") || die "unable to open $ENV{propfile}: $!";my @LINES = <IN>;my @OUTLINES = <OUT>;my $opt = $ENV{prop};my $val = $ENV{val};my $count = 0;my $back = 1;my $out = "";foreach my $line (@LINES) { if ( $line =~ /^$opt\s*[:=]/ ) { $out = $line; $out =~ s/^(.*?[:=]).*$/$1$val/; while ( $LINES[$count-$back] =~ /^#/ ) { $out = $LINES[$count-$back++].$out; } } $count++;}$back = 0;$count = 0;# either the line where the comments above the property start or the line where# the matching property was found (end)my $start = 0;# the line where we found the matching propertymy $end = 0;# > 0 if foundmy $found = 0;foreach my $line (@OUTLINES) { # we can not properly match commented out properties, they might be contained # in comments themselves if ( $line =~ /^$opt\s*[:=]/ ) { # we got a match $found=1; # set end to the line where we found the match $end=$count; # increase back while lines above are comments while ( $OUTLINES[$count-++$back] =~ /^#/ ) { } ; # if we found at least one comment line start at the comments otherwise # start at the property if ( $count > 0 && $back > 1 ) { $start=$count-$back+1; } else { $start=$end; } } $count++;}#if we did not find the property set it to provided valuesif ( length($out) == 0 ) { $out=$opt."=".$val."\n";}if ( $found ) { for (my $i=0; $i<=$#OUTLINES; $i++) { if ( $i < $start || $i > $end ) { print $OUTLINES[$i]; print "\n" if( substr($OUTLINES[$i],-1) ne "\n" ); } if ( $i == $start ) { # add newline unless first line or line above is emtpy if ($i > 0 && $OUTLINES[$i-1] !~ /^\s*$/) { print "\n"; } print $out; print "\n" if( substr($OUTLINES[$i],-1) ne "\n" ); } }} else { print @OUTLINES; print "\n" if( 
substr($OUTLINES[-1],-1) ne "\n" ); # add newline unless line above is emtpy if ($OUTLINES[-1] !~ /^\s*$/) { print "\n"; } print $out; print "\n";}' > $tmp if [ $? -gt 0 ]; then rm -f $tmp die "ox_set_property: FATAL: error setting property $prop to \"$val\" in $propfile" else mv $tmp $propfile fi unset origfile unset propfile unset prop unset val else # quote & in URLs to make sed happy test -n "$val" && val="$(echo $val | sed 's/\&/\\\&/g')" if grep -E "^$prop *[:=]" $propfile >/dev/null; then cat<<EOF | sed -f - $propfile > $tmps;\(^$prop[[:space:]]*[:=]\).*$;\1${val};EOF else # add a newline to the last line if it doesn't exist sed -i -e '$a\' $tmp echo "${prop}=$val" >> $tmp fi if [ $? -gt 0 ]; then rm -f $tmp die "ox_set_property: FATAL: error setting property $prop to \"$val\" in $propfile" else mv $tmp $propfile fi fi}# usage:# ox_exists_property property /path/to/file#ox_exists_property() { local prop="$1" local propfile="$2" test -z "$prop" && die "ox_exists_property: missing prop argument (arg 1)" test -z "$propfile" && die "ox_exists_property: missing propfile argument (arg 2)" test -e "$propfile" || die "ox_exists_property: $propfile does not exist" local escaped=$(sed 's/[]\.|$(){}?+*^[]/\\&/g' <<< "$prop") grep -E "^$escaped *[:=]" $propfile >/dev/null || return 1}# savely find key/val in keys and values containing all kind of ugly chars# delimiter must be either = or :save_read_prop() { export prop="$1" export propfile="$2" perl -e 'use strict;my $file=$ENV{"propfile"};my $search=$ENV{"prop"};open(FILE,$file) || die "unable to open $file: $!";my $val=undef;while(<FILE>) { chomp; my $len=length($search); if( substr($_,0,$len) eq $search ) { if( substr($_,$len,$len+1) !~ /^[\s=:]/ ) { next; } foreach my $dl ( "=", ":" ) { my $idx=index($_,$dl); if( $idx >= $len ) { $val=substr($_,$idx+1); } last if defined($val); } last; }}print "$val\n";close(FILE);'}# usage:# ox_read_property property /path/to/file#ox_read_property() { local prop="$1" local 
propfile="$2" test -z "$prop" && die "ox_read_property: missing prop argument (arg 1)" test -z "$propfile" && die "ox_read_property: missing propfile argument (arg 2)" test -e "$propfile" || die "ox_read_property: $propfile does not exist" # sed -n -e "/^$prop/Is;^$prop *[:=]\(.*\).*$;\1;p" < $propfile # UGLY: we have keys containing / save_read_prop "$prop" "$propfile"}# usage:# ox_remove_property property /path/to/file#ox_remove_property() { local prop="$1" local propfile="$2" test -z "$prop" && die "ox_remove_property: missing prop argument (arg 1)" test -z "$propfile" && die "ox_remove_property: missing propfile argument (arg 2)" test -e "$propfile" || die "ox_remove_property: $propfile does not exist" local tmp=${propfile}.tmp$$ cp -a --remove-destination $propfile $tmp export propfile export prop perl -e 'use strict;open(IN,"$ENV{propfile}") || die "unable to open $ENV{propfile}: $!";my @LINES = <IN>;my $opt = $ENV{prop};my $count = 0;my $back = 1;my $start = 0;my $end = 0;foreach my $line (@LINES) { if ( $line =~ /^$opt\s*[:=]/ ) { $end=$count; while ( $LINES[$count-$back++] =~ /^#/ ) { } $start=$count-$back; } $count++;}if ( $LINES[$end+1] =~ /^\s*$/ ) { $end++;}for (my $i=0; $i<=$#LINES; $i++) { if ( $i <= $start+1 || $i > $end ) { print $LINES[$i]; }}' > $tmp if [ $? 
-gt 0 ]; then rm -f $tmp die "ox_remove_property: FATAL: error removing property $prop from $propfile" else mv $tmp $propfile fi unset propfile unset prop}# adding or removing comment (ONLY # supported)## usage:# ox_comment property action /path/to/file# where action can be add/remove#ox_comment(){ local prop="$1" local action="$2" local propfile="$3" test -z "$prop" && die "ox_comment: missing prop argument (arg 1)" test -z "$action" && die "ox_comment: missing action argument (arg 2)" test -z "$propfile" && die "ox_comment: missing propfile argument (arg 3)" test -e "$propfile" || die "ox_comment: $propfile does not exist" local tmp=${propfile}.tmp$$ local prop_in=$(quote_s_in $prop) local prop_re=$(quote_s_re $prop) cp -a --remove-destination $propfile $tmp if [ "$action" == "add" ]; then sed "s/^$prop_in/# $prop_re/" < $propfile > $tmp; if [ $? -gt 0 ]; then rm -f $tmp die "ox_comment: FATAL: could not add comment in file $propfile to $prop" else mv $tmp $propfile fi elif [ "$action" == "remove" ];then sed "s/^#[ ]*\($prop_in[ ]*=\)/\1/" < $propfile > $tmp; if [ $? 
-gt 0 ]; then rm -f $tmp die "ox_comment: FATAL: could not remove comment in file $propfile for $prop" else mv $tmp $propfile fi else die "ox_handle_hash: action must be add or remove while it is $action" fi}ox_update_permissions(){ local pfile="$1" local owner="$2" local mode="$3" test -z "$pfile" && die "ox_update_permissions: missing pfile argument" test -z "$owner" && die "ox_update_permissions: missing owner argument" test -z "$mode" && die "ox_update_permissions: missing mode argument" test -e "$pfile" || die "ox_update_permissions: $pfile does not exist" chmod $mode "$pfile" chown $owner "$pfile"}die() { test -n "$1" && echo 1>&2 "$1" || echo 1>&2 "ERROR" exit 1}ox_update_config_init() { local cini=$1 local cinitemplate=$2 local bdir=$3 test -z "$cini" && die \ "ox_update_config_init: missing config.ini argument (arg 1)" test -z "$cinitemplate" && die \ "ox_update_config_init: missing config.ini template argument (arg 2)" test -z "$bdir" && die \ "ox_update_config_init: missing bundle.d argument (arg 3)" test -d $bdir || die "$bdir is not a directory" test -f $cinitemplate || die "$cinitemplate does not exist" test "$(echo $bdir/*.ini)" == "$bdir/*.ini" && die "$bdir is empty" # read all installed bundles into an array local dirbundles=() local bpath= for bundle in $bdir/*.ini; do read bpath < $bundle dirbundles=( ${dirbundles[*]} "reference\:file\:${bpath}" ) done if [ -f $cini ]; then # read all bundles listed in config.ini into an array local configbundles=( $(sed -e \ '/^osgi.bundles.*/Is;^osgi.bundles=\(.*\);\1;' \ -n -e 's;,; ;gp' < $cini ) ) fi cp $cinitemplate $cini echo "osgi.bundles=$(echo ${dirbundles[@]} | sed 's; ;,;g')" >> $cini}ox_save_backup() { local name=$1 test -z "$name" && die "ox_save_backup: missing name argument (arg1)" local backup_name="${name}.old" if [ -e $name ]; then mv $name $backup_name fi}# move configuration file from one location/package to another# RPM ONLY!ox_move_config_file() { local srcdir="$1" local dstdir="$2" local 
srcname="$3" local dstname="$4" test -z "$srcdir" && die "ox_move_config_file: missing srcdir argument (arg1)" test -z "$dstdir" && die "ox_move_config_file: missing dstdir argument (arg2)" test -z "$srcname" && die "ox_move_config_file: missing srcname argument (arg3)" test -z "$dstname" && dstname=$srcname if [ -e "${srcdir}/${srcname}" ]; then if [ -e "${dstdir}/${dstname}" ] && \ ! cmp -s "${dstdir}/${dstname}" "${srcdir}/${srcname}" > /dev/null; then mv "${dstdir}/${dstname}" "${dstdir}/${dstname}.rpmnew" fi mv "${srcdir}/${srcname}" "${dstdir}/${dstname}" fi}# kill all leftover readerengine instances from a previous startox_kill_readerengine_instances() { local programname="soffice.bin" for PID in $(pidof ${programname}); do if ! ps ${PID} > /dev/null; then return 0 fi kill -KILL ${PID} done rm -f /tmp/OSL_PIPE_*}# ox_add_property property value /path/to/file# verifies first that the property does not already exist in file and adds it thenox_add_property() { local property="$1" local value="$2" local propfile="$3" test -z "$property" && die "ox_add_property: missing property argument (arg 1)" test -z "$propfile" && die "ox_add_property: missing propfile argument (arg 3)" test -e "$propfile" || die "ox_add_property: $propfile does not exist" if ! 
ox_exists_property "$property" "$propfile" then ox_set_property "$property" "$value" "$propfile" fi}# quote for sed s-command input as in: s/input/replacement/# by prefixing each character of the character set "]\/$*.^|[" with a "\"# and thus escaping themquote_s_in () { sed -e 's/[]\/$*.^|[]/\\&/g' <<< "$1"}# quote for sed s-command replacement as in: s/input/replacement/# by prefixing "\", "/" and "&" with a "\" and thus escaping #the backslash itself, the default s-command separator and the matched stringquote_s_re () { sed -e 's/[\/&]/\\&/g' <<< "$1"} My Guide to install: http://oxpedia.org/wiki/index.php?title=AppSuite:Open-Xchange_Installation_Guide_for_Debian_8.0 i have Ubuntu Server 16.04 with the right repositories. I do not know where the error lies. Have already deleted Java and OX completely and re-installed but without success.Sorry for the bad english and greetings from germany! :) Now provide the response and nothing else.
In addition to the comment by Arjan, I solved the problem by creating multiple apps with different .scss files in .angular-cli.json Step 1: Create several folders, each with a .scss file for one environment; all the .scss files have the same filename |- src ... |- environments |- environment-scss |- globalVars.scss |- environment-prod-scss |- globalVars.scss ... in src/environments/environment-scss/globalVars.scss: $my-base-path: 'YOUR/DEV/PATH' in src/environments/environment-prod-scss/globalVars.scss: $my-base-path: 'YOUR/PROD/PATH' Step 2: Add multiple apps in .angular-cli.json ( Angular-cli Wiki Here ) , and add a stylePreprocessorOptions entry in each app object for each environment ( Angular-cli Wiki Here ) . "apps": [ { "root": "src", ... "name": "dev", "stylePreprocessorOptions": { "includePaths": [ "environments/environment-scss" ] } ... }, { "root": "src", ... "name": "prod", "stylePreprocessorOptions": { "includePaths": [ "environments/environment-prod-scss" ] } ... } ], Step 3: Import the globalVars.scss where the env-specific variables are needed. Do not use a relative path; just write @import "globalVars"; since includePaths resolves it. When using ng build --app=dev , $my-base-path will be 'YOUR/DEV/PATH' ; when using ng build --app=prod , $my-base-path will be 'YOUR/PROD/PATH'
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42515894', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7637113/']}
jdg_80470
stackexchange
llm_judgeable_groundtruth_similarity
15237037
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some problems understanding RelativeSource binding behavior. Below is the code that binds Label content to StackPanel Tag correctly: <Window x:Class="Binding_RelativeSource.MainWindow" Tag="Window Tag"> <Grid Tag="Grid Tag"> <StackPanel Tag="StackPanel Tag" Height="100" HorizontalAlignment="Left" Margin="156,97,0,0" Name="stackPanel1" VerticalAlignment="Top" Width="200"> <Label Content="{Binding Path=Tag,RelativeSource={RelativeSource Mode=FindAncestor,AncestorLevel=1,AncestorType=StackPanel},FallbackValue=BindingFailed}" Height="28" Name="label1" /> </StackPanel> </Grid></Window> The above code does not bind the Grid Tag if I change AncestorType=Grid and AncestorLevel=2 . I have 2 questions: I think I should change AncestorLevel to 2 to bind to the Grid, but it worked for AncestorLevel=1 . Why? I am also not able to bind the label to the Window tag. Please suggest. Now provide the response and nothing else.
The AncestorLevel is used to find the correct ancestor to bind to, because there can be more than one ancestor of that type. Here is a scenario that shows this: <Grid Tag="AncestorLevel 3"> <Grid Tag="AncestorLevel 2"> <Grid Tag="AncestorLevel 1"> <StackPanel Tag="StackPanel Tag" Height="100" HorizontalAlignment="Left" Margin="156,97,0,0" Name="stackPanel1" VerticalAlignment="Top" Width="200"> <Label Content="{Binding Path=Tag,RelativeSource={RelativeSource Mode=FindAncestor,AncestorLevel=1,AncestorType=Grid},FallbackValue=BindingFailed}" Height="28" /> <Label Content="{Binding Path=Tag,RelativeSource={RelativeSource Mode=FindAncestor,AncestorLevel=2,AncestorType=Grid},FallbackValue=BindingFailed}" Height="28" /> <Label Content="{Binding Path=Tag,RelativeSource={RelativeSource Mode=FindAncestor,AncestorLevel=3,AncestorType=Grid},FallbackValue=BindingFailed}" Height="28" /> </StackPanel> </Grid> </Grid></Grid> Result: Alternative Method But you can simplify the code by using ElementName binding; this uses the Name of the element. Example: <Window x:Class="WpfApplication9.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="350" Width="525" Name="MyWindow" Tag="This is the window"> <Grid Name="Grid1" Tag="First grid"> <Grid Name="Grid2" Tag="Second grid"> <Grid Name="Grid3" Tag="ThirdGrid"> <StackPanel Name="stackPanel1" Tag="StackPanel Tag" Height="160" HorizontalAlignment="Left" Margin="156,97,0,0" VerticalAlignment="Top" Width="200"> <Label Content="{Binding ElementName=MyWindow, Path=Tag}" Height="28" /> <Label Content="{Binding ElementName=Grid1, Path=Tag}" Height="28" /> <Label Content="{Binding ElementName=Grid2, Path=Tag}" Height="28" /> <Label Content="{Binding ElementName=Grid3, Path=Tag}" Height="28" /> <Label Content="{Binding ElementName=stackPanel1, Path=Tag}" Height="28" /> </StackPanel> </Grid> </Grid> </Grid></Window> Result: If you want
to bind back to the Window you can still use FindAncestor <Window x:Class="WpfApplication9.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="350" Width="525" Tag="This is the window"> <Grid> <StackPanel Height="100" HorizontalAlignment="Left" Margin="156,97,0,0" Name="stackPanel1" VerticalAlignment="Top" Width="200"> <Label Content="{Binding Path=Tag,RelativeSource={RelativeSource Mode=FindAncestor,AncestorType=Window},FallbackValue=BindingFailed}" Height="28" /> </StackPanel> </Grid> Result:
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15237037', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1762551/']}
jdg_80471
stackexchange
llm_judgeable_groundtruth_similarity
2001755
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I try to declare a Dictionary as such: private Dictionary<String, int> map; The compiler gives me the following error: Syntax error on token "int", Dimensions expected after this token But it works fine with Integer . I'm vaguely aware that Java treats int / Integer differently (I come from a .NET background), but I was hoping someone could give me a full explanation on why I can't use primitives in a Dictionary<> Now provide the response and nothing else.
In Java primitives aren't objects, so you can't use them in place of objects. However Java will automatically box/unbox primitives (aka autoboxing ) into objects so you can do things like: List<Integer> intList = new LinkedList<Integer>();intList.add(1);intList.add(new Integer(2));...Integer first = intList.get(0);int second = intList.get(1); But this is really just the compiler automatically converting types for you.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2001755', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3603/']}
jdg_80472
stackexchange
llm_judgeable_groundtruth_similarity
3117815
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose $G$ is a group. $\{X_n\}_{n = 1}^{\infty}$ is a sequence of i.i.d. random elements of $G$ satisfying the condition that $$\forall H \leq G, \qquad P(X_1 \in H) = \begin{cases} \frac{1}{[G:H]} & \quad \text{if $[G:H]$ is finite}\\ 0 & \quad \text{if $[G:H]$ is infinite} \end{cases}$$ Is it true that $$\lim_{n \to \infty} P(\forall i,j\leq n, \ [X_i, X_j] = e)^{\frac{1}{n}} = P(X_1 \in Z(G)) \ ? $$ What have I tried so far? If we accept an additional supposition, that the events $\{\forall i \leq p, \ X_i \in C_G(X_p) \}$ and $\{\forall i \leq q, \ X_i \in C_G(X_q) \}$ are independent for any natural $p \neq q$ . Then we can see, that $$P(\forall i,j\leq n, \ [X_i, X_j] = e) = \prod_{i = 1}^{n} P(\{\forall j \leq i, \ X_j \in C_G(X_i) \}).$$ Now, let’s see, that on one hand \begin{align*}&P(\{\forall j \leq i, \ X_j \in C_G(X_i) \}) \\&= P(X_i \in Z(G)) + (1 - P(X_i \in Z(G))P(X_1 \in C_G(X_i))^{i - 1} \\&\leq P(X_1 \in Z(G)) + (1 - P(X_1 \in Z(G))\left(\frac{1}{2}\right)^{i - 1} \\&= \frac{1}{2^{i - 1}} + \left(1 - \frac{1}{2^{i - 1}}\right)P(X_1 \in Z(G)) \\&= P(X_1 \in Z(G))\left(1 - \frac{1}{2^{i - 1}} + \frac{1}{2^{i - 1}P(X_1 \in Z(G))}\right)\end{align*} and on the other hand \begin{align*}&P(\{\forall j \leq i, \ X_j \in C_G(X_i) \}) \\&= P(X_i \in Z(G)) + (1 - P(X_i \in Z(G))P(X_1 \in C_G(X_i))^{i - 1} \\&\geq P(X_1 \in Z(G)) + (1 - P(X_1 \in Z(G))P(X_1 \in Z(G))^{i - 1} \\&= P(X_1 \in Z(G))^{i - 1} + \left(1 - P(X_1 \in Z(G))^{i - 1}\right)P(X_1 \in Z(G)) \\&= P(X_1 \in Z(G))\left(1 - P(X_1 \in Z(G))^{i - 1} + P(X_1 \in Z(G))^{i - 2}\right)\end{align*} So, we have \begin{align*}&P(X_1 \in Z(G)) \\&= \lim_{i \to \infty} P(X_1 \in Z(G))\left(1 - P(X_1 \in Z(G))^{i - 1} + P(X_1 \in Z(G))^{i - 2}\right) \\&= \lim_{n \to \infty} \left( \prod_{i = 1}^n P(X_1 \in Z(G)) \left(1 - P(X_1 \in Z(G))^{i - 1} + P(X_1 
\in Z(G))^{i - 2} \right) \right)^{\frac{1}{n}} \\&\leq \lim_{n \to \infty} P(\forall i,j\leq n, \ [X_i, X_j] = e)^{\frac{1}{n}} \\&\leq \lim_{n \to \infty} \left( \prod_{i = 1}^n P(X_1 \in Z(G)) P(X_1 \in Z(G)) \left(1 - \frac{1}{2^{i - 1}} + \frac{1}{2^{i - 1}P(X_1 \in Z(G))} \right) \right)^{\frac{1}{n}} \\&= \lim_{i \to \infty} P(X_1 \in Z(G))\left(1 - \frac{1}{2^{i - 1}} + \frac{1}{2^{i - 1}P(X_1 \in Z(G))} \right) \\&= P(X_1 \in Z(G)).\end{align*} However, I do not know how to prove that the events in our supposition are always independent (or is there a counterexample?). And neither do I know, how to prove the main statement of the question without using the aforementioned supposition. Now provide the response and nothing else.
I believe $S_3$ is a counterexample. Let $H< S_3$ be the unique subgroup of index $2$ . Then $H$ is abelian, and for every $n$ we have $$P(\forall i,j:[X_i,X_j]=e)^{1/n} \geq P(\forall i: X_i\in H)^{1/n}=\frac{1}{2}.$$ This means the limit on the left (if it exists) is at least $1/2$ . On the other hand, $Z(S_3)=\{e\}$ , so $P(X_1\in Z(S_3))=P(X_1=e)=1/6$ . For general $G$ , a similar argument should show that $$\liminf_{n\to\infty}P(\forall i,j\leq n:[X_i,X_j]=e) \geq \max_{\substack{H\leq G\\H\text{ abelian}}}\frac{1}{[G:H]}.$$
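The counterexample above is small enough to verify by brute force; here is a sketch (the helper names are mine, not from the answer) that enumerates $S_3$ as permutation tuples, computes its center, and finds the best abelian-subgroup bound:

```python
from itertools import combinations, permutations

G = list(permutations(range(3)))  # the six elements of S_3

def compose(p, q):
    # (p o q)(i) = p(q(i)): apply q first, then p.
    return tuple(p[q[i]] for i in range(3))

identity = (0, 1, 2)

# Center: elements commuting with everything (for S_3, only the identity).
center = [g for g in G if all(compose(g, h) == compose(h, g) for h in G)]

def is_subgroup(S):
    # For a finite subset, containing the identity and being closed
    # under composition is enough to be a subgroup.
    return identity in S and all(compose(a, b) in S for a in S for b in S)

def is_abelian(S):
    return all(compose(a, b) == compose(b, a) for a in S for b in S)

# Brute-force all subsets (fine for |G| = 6) to list abelian subgroups.
abelian_subgroups = [
    set(S)
    for r in range(1, len(G) + 1)
    for S in combinations(G, r)
    if is_subgroup(set(S)) and is_abelian(set(S))
]

p_center = len(center) / len(G)  # P(X_1 in Z(S_3)) = 1/6
lower_bound = max(len(H) / len(G) for H in abelian_subgroups)  # 1/[G:H] = 1/2
```

This confirms the gap claimed in the answer: the limit is bounded below by 1/2 while $P(X_1 \in Z(S_3)) = 1/6$.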
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3117815', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/407165/']}
jdg_80473
stackexchange
llm_judgeable_groundtruth_similarity
19090928
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am getting the mvn command not recognized as an internal or external command. I have setup the M2_HOME, JAVA_HOME and added %M2_HOME%/bin to the path variable. All are system variables. Still getting the same problem. Echoing the variables showing me the correct paths. Path to Maven: C:\apache-maven-3.1.0\apache-maven-3.1.0 M2_HOME: C:\apache-maven-3.1.0\apache-maven-3.1.0 PATH: Other things,C:\Program Files (x86)\Java\jdk1.7.0_40\bin,C:\apache-maven-3.1.0\apache-maven-3.1.0\bin and I have restarted my computer twice. Now provide the response and nothing else.
Restart your machine after setting up your M2_HOME (pointing to your Maven basedir, NOT the bin dir) and PATH ( PATH=%M2_HOME%\bin;%PATH% ). Then do: dir %M2_HOME%\bin\mvn* If there is a .bat file, it should work under Windows, as it appears to be finding it. If there isn't one, then your paths are not right and you need to make sure your %PATH% variable really points to the correct path to Maven. Make sure you are using the proper slashes for your OS. Under Windows they're \ .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19090928', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/338248/']}
jdg_80474
stackexchange
llm_judgeable_groundtruth_similarity
5126117
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two strings which are used to store time in the format hh:mm .I want to the compare these two to know which time is greater.Which is the easiest way to go about this? Now provide the response and nothing else.
Well, if they're actually hh:mm (including leading zeroes, and in 24-hour format) then you can just compare them lexicographically (i.e. using String.compareTo(String) ). That's the benefit of a sortable format :) Of course, that won't check that both values are valid times. If you need to do that, you should probably parse both times: check the length, check the colon, parse two substrings, and probably multiply the number of hours by 60 and add it to the number of minutes to get a total number of minutes. Then you can compare those two totals. EDIT: As mentioned in the comments, if you do need to parse the values for whatever reason, personally I would recommend using Joda Time (possibly a cut down version, given the mobile nature) rather than SimpleDateFormat and Date . Joda Time is a much nicer date and time API than the built-in one.
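The lexicographic-ordering point is language-independent, so it can be checked quickly outside Java; a small sketch in Python (the helper name is mine):

```python
def to_minutes(hhmm):
    # Parse a zero-padded 24-hour "hh:mm" string into total minutes.
    hours, minutes = hhmm.split(":")
    return int(hours) * 60 + int(minutes)

times = ["00:00", "07:05", "09:59", "10:00", "13:30", "23:59"]

# For valid zero-padded 24-hour strings, string order == chronological order.
for a in times:
    for b in times:
        assert (a < b) == (to_minutes(a) < to_minutes(b))
```

The zero-padding is essential: "9:30" would sort after "10:00" as a string, which is why the answer stresses the leading zeroes.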
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5126117', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/598714/']}
jdg_80475
stackexchange
llm_judgeable_groundtruth_similarity
23202
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: In the following, I use the word "explicit" in the following sense: No choices of bases (of vector spaces or field extensions), non-principal ultrafilters or alike which exist only by Zorn's Lemma (or AC) are needed. Feel free to use similar (perhaps more precise) notions of "explicit", but reasonable ones! To be honest, I'm not so interested in a discussion about mathematical logic. If no example is there, well, then there is no example. ;-) Can you give explicit large linearly independent subsets of $ \mathbb{R}$ over $\mathbb{Q}$? For example, $\{\ln(p) : p \text{ prime}\}$ is such a set, but it's only countable and surely is no basis. You can find more numbers which are linearly independent, but I cannot find uncountably many. AC implies $\dim_\mathbb{Q} \mathbb{R} = |\mathbb{R}|$. Perhaps $ZF$ has a model in which every linearly independant subset of $ \mathbb{R}$ is countable? The same question for algebraically independent subsets of $ \mathbb{R}$ over $\mathbb{Q}$? Perhaps the set above is such a subset? But anyway, it is too small. Closely related problems: Can you give an explicit proper subspace of $ \mathbb{R}$ over $\mathbb{Q}$, which is isomorphic to $ \mathbb{R}$? If so, is the isomorphism explicit? Same question for subfields. That would be great if there were explicit examples. :-) Now provide the response and nothing else.
Here is a linearly independent subset of $\mathbb{R}$ with size $2^{\aleph_0}$. Let $q_0, q_1, \ldots$ be an enumeration of $\mathbb{Q}$. For every real number $r$, let$$T_r = \sum_{q_n < r} \frac{1}{n!}$$The proof that these numbers are linearly independent is similar to the usual proof that $e$ is irrational. (It's a cute problem; there's spoiler below.) I think a similar trick might work for algebraic independence, but I don't recall having seen such a construction. Actually, John von Neumann showed that the numbers$$A_r = \sum_{n=0}^\infty \frac{2^{2^{[nr]}}}{2^{2^{n^2}}}$$are algebraically independent for $r > 0$. [ Ein System algebraisch unabhängiger zahlen , Math. Ann. 99 (1928), no. 1, 134–141.] A more general result due to Jan Mycielski seems to go through in ZF + DC perhaps just ZF in some cases. [ Independent sets in topological algebras , Fund. Math. 55 (1964), 139–147.] As for subspaces and subfields isomorphic to $\mathbb{R}$, the answer is no. (Since I'm not allowed to post any logic here, I'll refer you to this answer and let you figure it out.) Well, I'll bend the rules a little... Consider a $\mathbb{Q}$-linear isomorphism $h:\mathbb{R}\to H$, where $H$ is a $\mathbb{Q}$-linear subspace of $\mathbb{R}$ (i.e. $h$ is an additive group isomorphism onto the divisible subgroup $H$ of $\mathbb{R}$). If $h$ Baire measurable then it must be continuous by an ancient theorem of Banach and Pettis. It follows that $h(x) = xh(1)$ for all $x \in \mathbb{R}$ and therefore $H = \mathbb{R}$. Shelah has produced a model of ZF + DC where all sets of reals have the Baire property , so any such $h$ in this model must be Baire measurable. A similar argument works if Baire measurable is replaced by Lebesgue measurable, but Solovay's model of ZF + DC where all sets of reals are Lebesgue measurable uses the existence of an inaccessible cardinal, and this hypothesis was shown necessary by Shelah. 
Spoiler Suppose for the sake of contradiction that $r_1 > r_2 > \cdots > r_k$ and $a_1,a_2,\ldots,a_k \in \mathbb{Z}$ are such that $a_1T_{r_1} + a_2T_{r_2} + \cdots + a_kT_{r_k} = 0$. Choose a very large $n$ such that $r_1 > q_n > r_2$. If $n$ is large enough that$$(|a_1| + |a_2| + \cdots + |a_k|) \sum_{m=n+1}^\infty \frac{n!}{m!} < 1$$then the tail terms of $n!(a_1T_{r_1}+\cdots+a_kT_{r_k}) = 0$ must cancel out, and we're left with$$a_1 = -\sum_{m=0}^{n-1} \sum_{q_m < r_i} a_i \frac{n!}{m!} \equiv 0 \pmod{n}$$If moreover $n > |a_1|$, this means that $a_1 = 0$. Repeat to conclude that $a_1 = a_2 = \cdots a_k = 0$.
{}
{'log_upvote_score': 8, 'links': ['https://mathoverflow.net/questions/23202', 'https://mathoverflow.net', 'https://mathoverflow.net/users/2841/']}
jdg_80476
stackexchange
llm_judgeable_groundtruth_similarity
79843
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've recently been reading about the Mann-Whitney U test. It turns out that to carry out this test in R you actually need to run a Wilcoxon test! My question: is the W statistic of wilcox.test in R identical to the U statistic? Now provide the response and nothing else.
Wilcoxon is generally credited with being the original inventor of the test*, though Mann and Whitney's approach was a great stride forward, and they extended the cases for which the statistic was tabulated. My preference is to refer to the test as the Wilcoxon-Mann-Whitney, to recognize both contributions (Mann-Whitney-Wilcoxon is also seen; I don't mind that either). * However, the actual picture is a little more cloudy, with several other authors also coming up with the same or similar statistics about this time or earlier, or in some cases making contributions that are closely connected to the test. At least some of the credit should go elsewhere. The Wilcoxon test and the Mann-Whitney U test are equivalent (and the help states that they are) in that they always reject the same cases under the same circumstances; at most their test statistics will only differ by a shift (and in some cases, just possibly a sign change). The Wilcoxon test is defined in more than one way in the literature (and that ambiguity dates back to the original tabulation of the test statistic, more on that in a moment), so one must take care with which Wilcoxon test is being discussed. The two most common forms of definition are discussed in this pair of posts: Wilcoxon rank sum test in R Different ways to calculate the test statistic for the Wilcoxon rank sum test To address what, specifically, happens in R: The statistic used by wilcox.test in R is defined in the help ( ?wilcox.test ), and the question of the relationship to the Mann-Whitney U statistic is explained there: The literature is not unanimous about the definitions of the Wilcoxon rank sum and Mann-Whitney tests The two most common definitions correspond to the sum of the ranks of the first sample with the minimum value subtracted or not: R subtracts and S-PLUS does not, giving a value which is larger by m(m+1)/2 for a first sample of size m.
(It seems Wilcoxon's original paper used the unadjusted sum of the ranks but subsequent tables subtracted the minimum.) R's value can also be computed as the number of all pairs (x[i], y[j]) for which y[j] is not greater than x[i] , the most common definition of the Mann-Whitney test. This last sentence completely answers that aspect of your question - the version of W that R puts out* is also the value of U. * The sum of the ranks in sample 1, minus the smallest value it can take (i.e. minus $\frac{n_1(n_1+1)}{2}$ ).
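The identity between R's W and the pair-counting U is easy to confirm on a tie-free toy example; a minimal sketch (function names are mine, pure Python, no scipy assumed):

```python
def wilcoxon_w(x, y):
    # Rank sum of the first sample in the combined data, minus n1(n1+1)/2,
    # i.e. the statistic R's wilcox.test reports (assumes no ties).
    combined = sorted(x + y)
    rank_sum = sum(combined.index(v) + 1 for v in x)
    n1 = len(x)
    return rank_sum - n1 * (n1 + 1) // 2

def mann_whitney_u(x, y):
    # Number of pairs (x_i, y_j) with y_j less than x_i.
    return sum(yj < xi for xi in x for yj in y)

x = [1.2, 3.4, 5.6]
y = [0.7, 2.2, 4.1, 6.3]
```

With these samples both statistics come out to 6, matching the quoted help text; with ties, the usual convention counts tied pairs as 1/2, which this simplified sketch does not handle.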
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/79843', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/12492/']}
jdg_80477
stackexchange
llm_judgeable_groundtruth_similarity
14434874
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm interested in calling a C# method from C++ code in Windows Phone 8. I have already learned how to pass a callback function to C++ code from C# via delegate declarations in my C++ code, but I am looking to see if I can do any of the following: Call certain methods directly from the C++ code. This would involve somehow inspecting the C# object makeup from C++, and seems unlikely to me, but I thought I'd ask you all anyway Trigger events in the C# code, which can then be handled by C# methods Use a dispatcher to call C# callbacks in the Main UI thread so that the callbacks can modify UI elements Use a dispatcher to trigger events in the C# code, (Essentially a merging of the above two points) In short, I am looking for as many C++ -->C# communication tips as you guys can throw me, I want to learn it all. :) Now provide the response and nothing else.
By getting an object in C# code to implement a Windows RT interface, and passing down a reference to this object, it is possible to do all of the above with a bit of set-up (if I understand correctly - not sure about exactly what you want to do with your Dispatcher examples - you might want to wrap the Dispatcher on the C# side). Create a Windows Runtime component library. Define a public interface class in a C++/CX header for the C# to implement (C++ to call) (e.g. ICallback ). Define a public ref class in a C++/CX header for the C++ to implement (C# to call) (e.g. CppCxClass ). Add a method in CppCxClass that passes and stores an ICallback . (A C++ global variable is shown for conciseness; I recommend you review this to see if you can find a better place to store this in your code-base). ICallback^ globalCallback;...void CppCxClass::SetCallback(ICallback ^callback){ globalCallback = callback;} Reference the WinRT library in your C# code. C# code: create an instance of CppCxClass using var cppObject = new CppCxClass() . C# code: create a class which implements ICallback (e.g. CSharpCallbackObject ). C# code: pass an instance of CSharpCallbackObject down to C++. E.g. cppObject.SetCallback(new CSharpCallbackObject()) . You can now call C# with globalCallback->CallCsharp(L"Hello C#"); . You should be able to extend either ICallback and/or CppCxClass to do the rest of your tasks.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14434874', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/230778/']}
jdg_80478
stackexchange
llm_judgeable_groundtruth_similarity
80307
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am converting a ATX computer power supply into a bench power supply and I was wondering if I would need the -5 volts. I know I might need the -12 volts for op amps but is there any point of using a binding post for the -5 volts. Now provide the response and nothing else.
As well as the +/- 12V you've mentioned for op-amps many will run from +/-5V as well, sometimes that can be handy because you can use something like a TPS60400 charge pump to get -5V from a 5V supply. You might like to include it for that reason so you can test your analog section before you've made the final power supply. It might also be a useful test input for bipolar ADCs and other analog stages that accept a negative input voltage, for example you might want a negative input that is not right on the supply rail with a +/- 12V circuit (although for most of those cases you could also use a voltage divider). I suspect it's not the sort of thing you'd use often but for the sake of a binding post it's probably worth including in case you ever need it.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/80307', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/27238/']}
jdg_80479
stackexchange
llm_judgeable_groundtruth_similarity
169469
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am using source ~/.rvm/scripts/rvmrepos="repo_1_ruby_193 repo_2_ruby_211 repo_3_ruby_191"> rvm_check.txtfor repo in $reposdo cd ~/zipcar/$repo 2>rvm_check.txt cd .. echo $repo if [ -z `cat rvm_check.txt | grep not` ] # line 9 then echo "YES" else echo "NO" exit 1 fi done and it's mostly working but I get: $ ./multi_repo_rubies.sh repo_1_ruby_193YESrepo_2_ruby_211YESrepo_3_ruby_191./multi_repo_rubies.sh: line 9: [: too many argumentsNO$ whether I try -s or -z I am getting the YES/NO that I want but how to avoid the [: error? Now provide the response and nothing else.
Replace: if [ -z `cat rvm_check.txt | grep not` ] With: if ! grep -q not rvm_check.txt The reason to use test in an if statement is because it sets an exit code that the shell uses to decide to go to the then or else clause. grep also sets an exit code. Consequently there is no need for test, [ , here. grep sets the exit code to success (0), if it found the string. You want success to be if the string is not found. Thus, we negate the exit code result by using ! . Explanation The test command, [ , expects a single string to follow -z . If the grep command produces more than one word, then the test will fail with the error that you saw. As an example, consider this sample file: $ cat rvm_check.txtone not two The output of grep looks like: $ cat rvm_check.txt | grep notone not two When test is executed all three words appear inside the [...] causing the command to fail: $ [ -z `cat rvm_check.txt | grep not` ]bash: [: too many arguments This is just the same as if you had entered: $ [ -z one not two ]bash: [: too many arguments One solution for that is to use double-quotes: $ [ -z "`cat rvm_check.txt | grep not`" ] Double-quotes prevent the shell from performing word splitting . As a result, the output from grep here is treated as a single string, not split into separate words. However, since grep sets a sensible exit code, there is, as shown in the recommended line above, no need for test. Additional comments The currently preferred form for command substitution is $(...) . While backticks still work, they are fragile. In particular, backticks cannot be nested. On commands that take filenames on the command, the use of cat is unnecessary. Instead of: cat somefile | grep something Just use: grep something somefile
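The same branch-on-exit-code idea can be exercised from Python, which makes it easy to test; a sketch using subprocess (assuming grep is available on the PATH):

```python
import os
import subprocess
import tempfile

# Write a small file and ask grep -q whether it contains "not".
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("one not two\n")
    path = f.name

try:
    # grep -q prints nothing; it only sets the exit code (0 = found).
    found = subprocess.run(["grep", "-q", "not", path]).returncode == 0
    missing = subprocess.run(["grep", "-q", "absent", path]).returncode == 0
finally:
    os.remove(path)
```

This mirrors the shell recommendation exactly: no output is captured, so there is nothing to quote and no word-splitting hazard; only the exit status is consulted.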
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/169469', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/10043/']}
jdg_80480
stackexchange
llm_judgeable_groundtruth_similarity
46896
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Everywhere I look, nobody seems to use a soldering gun for electronics work, only soldering irons. Is there a good reason for that ? I'm a beginner in electronics and I happend to have a soldering gun at home. I thought I'll spare myself, for now, an investment in a good soldering iron/station. Thank you! Now provide the response and nothing else.
In my experience, a soldering gun is typically a higher wattage device which is useful for soldering wire-to-wire or large, clunky components where you need a fairly large heat reservoir. The soldering gun I have is Radio Shack 100 watt, and is excellent for when I need to solder some 14 AWG stranded wire to something, or even just tinning the end of it. (It's been replaced by a 150/230 watt version .) For any through-hole or surface mount work, you definitely want a temperature-controlled soldering station. David Jones, on his EEVBlog , recommends the Hakko FX-888 for starters (and has a soldering tutorial showing it), though there are many other brands he recommends in a video blog about setting up your shop . I used to use a 15/30 watt selectable Radio Shack soldering iron , and I was able to get a lot done with it for over a decade. However, once I got a Hakko FX-951 temperature-controlled soldering station, I can't believe I ever made do with the old RS iron. Honestly if I knew then what I know now (a common phrase in EE, I find), I'd have picked up a $85 Hakko FX-888 . Don't let the unusual blue-and-yellow case fool you; it's a solid unit. (Hakko indicates that a digital version of the FX-888 is coming in January 2013, so you might want to wait for that.) You can also go with a Pace, Weller, JBC, Xytronic, Ersa... There's a lot to choose from. Temperature control will get you more consistent results, but as others indicate, is not as important on through-hole work. One last point: The temp controlled stations tend to have a lighter, skinnier iron handle than the ones that plug into the wall directly. This to me is a major advantage in being able to work around and between components and have more control over a lightweight iron.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/46896', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/13681/']}
jdg_80481
stackexchange
llm_judgeable_groundtruth_similarity
305262
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been doing some testing in an app that I've been involved with for the past little while, and it appears as though some of the code we've written is causing a race condition. What's happening is something like this: //We make this call which resolves a task in a BPM service//causing a process instance to flow to the next person/taskServiceRemoteAPI.completeTask(taskId);//We then make this call which depends on the complete resolution of the //original call.Tasks userTasks = ServiceRemoteAPI.tasksUserHasAvailable(nextUser);//And we use those tasks to send an e-mailfor(Task task : userTasks){ if(task == rightTask){ sendEmail(); }} Problem is, it looks like in some cases the original remote API call isn't resolving completely before we're making the second one, and that's causing erratic behavior. So I was planning on doing something like this: ServiceRemoteAPI.completeTask(taskId);**Thread.sleep(5000);**Tasks userTasks = ServiceRemoteAPI.tasksUserHasAvailable(nextUser);for(Task task : userTasks){ if(task == rightTask){ sendEmail(); }} The question is, though, that I haven't done a lot of threading before, and based on my research this seems to be appropriate usage of Thread.sleep(), but I wonder if there's anything I'm missing, or if there is a more robust way to accomplish what I'd like to achieve here. Now provide the response and nothing else.
Never sleep a thread for X time to give the API time to process something. This might seem to fix the problem, but it doesn't prevent race conditions, even though in all your testing you never see the race condition again. It's just not guaranteed. In my experience, Thread.sleep() is a terribly useless function. What you should do is pause the main thread (or whatever thread is making the 2nd call) until you get the response from the first thread (your first API call). Google semaphore and read up a bit. Basically you don't give permission to the other thread to run until the current one is done doing its critical section (any processing that needs to occur). You can think of current here to mean your first request to the API, and the other to be the thread that launches/waits for the 2nd API call.
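A minimal sketch of the signalling approach (Python's threading.Event here as a stand-in for a semaphore; the task names are hypothetical, not from the question's API):

```python
import threading

done = threading.Event()
results = {}

def complete_task(task_id):
    # Simulates the first remote call; signals completion when finished.
    results["completed"] = task_id
    done.set()  # wake anyone blocked in done.wait()

def fetch_tasks_after_completion():
    # Blocks until complete_task has signalled, instead of sleeping blindly.
    done.wait(timeout=5)  # timeout only as a safety net, not a delay
    return results.get("completed")

worker = threading.Thread(target=complete_task, args=("task-42",))
worker.start()
value = fetch_tasks_after_completion()
worker.join()
```

Unlike a fixed sleep, the waiting thread resumes the instant the first call signals, and the timeout bounds how long it can hang if the signal never arrives.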
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/305262', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/143320/']}
jdg_80482
stackexchange
llm_judgeable_groundtruth_similarity
42718373
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wrote a simple script in Python. Now I would like travis to check my code. After travis was successful, the version number should get increased. Up to now my script has no version number yet. I can store it anywhere where it makes sense for the auto-increment workflow. How to do this for Python code? Update It works now: run tests bumpversion push tag to master Unfortunately travis does not support "after-all". This means if I want to run the tests for several Python versions, I have no way to bumpversion after the tests of all python versions were successful. In my case I will check on Python2.7 only until travis resolved this issue: https://github.com/travis-ci/travis-ci/issues/929 Here is my simple script: https://github.com/guettli/compare-with-remote Solved :-) It works now: Developer pushes to github Travis-CI runs If all tests are successful bumpversion increases the version The new version in setup.py get's pushed to the github repo A new release of the python package gets uploaded to pypi with the tool twine . I explain the way I do CI with github, travis and pypi here: https://github.com/guettli/github-travis-bumpversion-pypi Now provide the response and nothing else.
If you accept having an extra commit for your versioning, you could add this script in continuous_integration/increment_version.py import os import pkg_resources if __name__ == "__main__": version = pkg_resources.get_distribution("compare_with_remote").version split_version = version.split('.') try: split_version[-1] = str(int(split_version[-1]) + 1) except ValueError: # do something about the letters in the last field of version pass new_version = '.'.join(split_version) os.system("sed -i \"s/version='[0-9.]\+'/version='{}'/\" setup.py" .format(new_version)) os.system("git add -u") os.system("git commit -m '[ci skip] Increase version to {}'" .format(new_version)) os.system("git push") And change your .travis.yml to after_success: - python continuous_integration/increment_version.py I am not sure about how to make the git push part work as it needs some testing with the repo rights, but I assume that you could probably set up something to allow travis to push in your repo. You could look into that post for instance. Also note that I used python to perform the git operations but they can be added as extra lines in the after_success field: after_success: - python continuous_integration/increment_version.py - git add -u - git commit -m "[ci skip] version changed" - git push I just find it convenient to put the version number in the commit msg. Also, it is very important to add [ci skip] in the commit message to avoid infinite increments. Maybe it would be safer to trigger the version change on a specific commit msg tag.
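The version-bumping core of the script above can be pulled out into a pure function and tested in isolation; a minimal sketch mirroring its split/increment logic:

```python
def bump_patch(version):
    # Increment the last dot-separated field if it is numeric,
    # mirroring the split/int(...) + 1 logic in the script above.
    fields = version.split(".")
    try:
        fields[-1] = str(int(fields[-1]) + 1)
    except ValueError:
        pass  # e.g. "1.0.dev": leave non-numeric final fields untouched
    return ".".join(fields)
```

Keeping this logic in a function (rather than inline under `__main__`) also lets a CI job unit-test the bump before it ever touches git.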
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42718373', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/633961/']}
jdg_80483
stackexchange
llm_judgeable_groundtruth_similarity
2546895
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider two real numbers $x , y$ such that $$\left(x^2+1\right)\left(y^2+1\right)+9=6\left(x+y\right)$$ Hence find the value of $x^2+y^2$. At first I tried to factorise the condition, but the $(xy)^2$ term caused problems. I also tried to build a complete square to get a simpler expression, but to no avail. And just out of curiosity, can this question be interpreted geometrically so as to find the required value? Now provide the response and nothing else.
Let $u = x + y$, $v = xy$; then you have $x^2 + y^2 = u^2 - 2v$. Expanding the equation gives $$ v^2 + u^2 - 2v + 10 = 6u $$ or $$ (v-1)^2 + (u-3)^2 = 0, $$ which has a real solution if and only if $v - 1 = u - 3 = 0$. Hence $u = 3$, $v = 1$, and $x^2 + y^2 = u^2 - 2v = 9 - 2 = 7$.
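As a numeric sanity check (my addition, not part of the answer): the constraint forces $xy = 1$ and $x + y = 3$, so $x$ and $y$ are the real roots of $t^2 - 3t + 1 = 0$, and we can confirm both the original equation and $x^2 + y^2 = 7$ directly:

```python
import math

# Roots of t^2 - 3t + 1 = 0: t = (3 +/- sqrt(5)) / 2, so x + y = 3, x*y = 1.
x = (3 + math.sqrt(5)) / 2
y = (3 - math.sqrt(5)) / 2

# The original constraint holds...
assert abs((x**2 + 1) * (y**2 + 1) + 9 - 6 * (x + y)) < 1e-9

# ...and x^2 + y^2 = (x + y)^2 - 2*x*y = 9 - 2 = 7.
assert abs(x**2 + y**2 - 7) < 1e-9
```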
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2546895', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/463895/']}
jdg_80484
stackexchange
llm_judgeable_groundtruth_similarity
47512311
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to have a component, styled with Emotion, that takes props that ultimately control the styling. For example, consider a GridCol component that has various props that change the padding and width (the width can be changed across different viewport widths). I'd like to use an API like this:

```jsx
<GridCol gutter size="2">
// or alternatively, like this:
<GridCol gutter size={{ m: 2, l: 4 }}>
```

There are three things happening here:

- the gutter is a boolean prop that adds some horizontal padding to the column
- the size prop can be a string or an object. If it is a string, we just add a few lines of CSS and we're good
- however, if it is an object, we need to insert some media queries based on a breakpoints object that is set elsewhere

Emotion's docs are not clear on how to handle styling of this nature, at least I have not seen it, so I was hoping that a common solution could be found. For the gutter prop, it is trivial:

```js
const GridCol = props => styled('div')`
  display: block;
  box-sizing: border-box;
  flex: 1 0 0;
  min-width: 0;
  padding: ${props.gutter ? `0 10px` : '0'};
`
```

For the size prop, it becomes more complicated. I'd like the resultant CSS to look something like this:

```js
const GridCol = props => styled('div')`
  display: block;
  box-sizing: border-box;
  flex: 1 0 0;
  min-width: 0;
  padding: ${props.gutter ? `0 10px` : '0'};

  /* styles here if `size` is a string */
  width: 10%;

  /* styles here if `size` is an object */
  @media screen and (min-width: 500px) { width: 20%; }
  @media screen and (min-width: 800px) { width: 30%; }
  @media screen and (min-width: 1100px) { width: 40%; }
`
```

The width values will be determined by the prop's key, which corresponds to a value in a breakpoints object. This part is not trivial, and I don't know how to dynamically generate the CSS needed. I'm sure there's more info that I could add; I have made some attempts but none of them are working at the moment. My feeling is that I should create a stateless functional component that generates the CSS for each condition, then joins the CSS at the end. Now provide the response and nothing else.
This is a great question. First, avoid this pattern.

```js
const GridCol = props => styled('div')`
  display: block;
  box-sizing: border-box;
  flex: 1 0 0;
  min-width: 0;
  padding: ${props.gutter ? `0 10px` : '0'};
`
```

In this example, a new styled component is created on every render, which is terrible for performance. Any expression, or interpolation, can be a function. This function will receive 2 arguments: props and context.

```js
const GridCol = styled('div')`
  display: block;
  box-sizing: border-box;
  flex: 1 0 0;
  min-width: 0;
  padding: ${props => props.gutter ? `0 10px` : '0'};
`
```

As for the size prop in your example, I would use the following pattern.

```js
import { css } from 'emotion'

const sizePartial = (props) =>
  typeof props.size === 'string'
    ? css`width: 10%;`
    : css`
        @media screen and (min-width: 500px) { width: 20%; }
        @media screen and (min-width: 800px) { width: 30%; }
        @media screen and (min-width: 1100px) { width: 40%; }
      `
```

You can then use the partial just like any other function that occurs in an expression.

```js
const GridCol = styled('div')`
  display: block;
  box-sizing: border-box;
  flex: 1 0 0;
  min-width: 0;
  padding: ${props => props.gutter ? `0 10px` : '0'};
  ${sizePartial};
`
```

This is an extremely powerful pattern that can be used to compose reusable dynamic styles across your project. If you are interested in other libraries that leverage this pattern check out https://github.com/emotion-js/facepaint and https://github.com/jxnblk/styled-system
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47512311', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/91359/']}
jdg_80485
stackexchange
llm_judgeable_groundtruth_similarity
12839
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been working on designing an ALU that calculates various functions but I don't really know how to separate each function from one another. The inputs are 4-bit numbers A and B. I have a decoder that selects which function to use, but I don't know how to implement the functions from the decoder. The ALU needs to do arithmetic operations and also logical operations. If anyone knows how to go about structuring an ALU that has 2 4-bit inputs and 1 4-bit output, any guidance would be appreciated. Thanks. Now provide the response and nothing else.
In terms of "separating" the functions, that's not really how digital logic works. Digital logic is "always doing everything". You need a mux (multiplexer) in there. The multiplexer is used to pick the right output from all of those generated. Assume inputs A and B, output Q. Assume the ALU does two different things: Q=A+B, or Q=A&B. The ALU will have an adder. It will also have a big AND gate. A and B both go to the adder, and the AND gate. Always. Every moment of every day, the adder is adding A and B, and the gate is ANDing A and B. The mux is used to select which one of the outputs we want to pass to Q. If the control signals to the ALU say "add", then the mux will select the output of the adder and pass it to Q; the output of the AND gate is unused. If the control says "and", the mux will select the output of the AND gate and pass it to Q instead, while the output of the adder is unused. Imagine A = 0b0001 and B = 0b0010 on the inputs of the ALU. The adder is always producing 0b0011, and the AND gate is always producing 0b0000. If you provide the "add" control signal, the 0b0011 is passed to Q. You can leave A and B alone, and change the control signal to "and", then 0b0000 is passed to Q.
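The "always computing everything, mux selects" idea is easy to model in software. Below is a minimal 4-bit sketch in Python (my own illustration, not from the answer): both results are computed unconditionally, and the control signal only selects which one reaches Q.

```python
def alu_4bit(a, b, op):
    """4-bit ALU model: every function is computed, a 'mux' picks the output.

    a, b: 4-bit inputs (0..15); op: 'add' or 'and' (the control signal).
    """
    mask = 0xF                    # keep results to 4 bits, like real hardware
    add_out = (a + b) & mask      # the adder is always adding
    and_out = a & b               # the AND gate is always ANDing
    # The multiplexer: the control signal selects which result drives Q.
    mux = {'add': add_out, 'and': and_out}
    return mux[op]
```

Note how, just as in the answer, changing `op` while leaving `a` and `b` alone simply routes a different already-computed value to the output.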
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/12839', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/3796/']}
jdg_80486
stackexchange
llm_judgeable_groundtruth_similarity
4112346
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: It's Maven 3.0. I'm creating a new project: mvn archetype:create. Then I'm creating a file site/site.xml:

```xml
<project name="foo">
  <body>
    <menu name="Overview">
      <item name="Introduction" href="index.html" />
    </menu>
    <menu ref="reports" />
  </body>
</project>
```

Then I'm adding a reporting plugin to pom.xml:

```xml
<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-project-info-reports-plugin</artifactId>
      <version>2.1.1</version>
    </plugin>
  </plugins>
</reporting>
```

Then I run mvn site and it says "BUILD SUCCESS". But I don't see any reports in the project site (the reporting menu item is not there). What am I doing wrong? Now provide the response and nothing else.
Maven 3 reporting is different.

```xml
[...]
<build>
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-site-plugin</artifactId>
    <version>3.0-beta-2</version>
    <configuration>
      <reportPlugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-project-info-reports-plugin</artifactId>
          <version>2.2</version>
          <reports>
            <report>cim</report>
            <report>issue-tracking</report>
          </reports>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-javadoc-plugin</artifactId>
          <version>2.2</version>
        </plugin>
      </reportPlugins>
    </configuration>
  </plugin>
</build>
[...]
```
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4112346', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/187141/']}
jdg_80487
stackexchange
llm_judgeable_groundtruth_similarity
14570488
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We need to count the number of rows in a PostgreSQL table. In our case, no conditions need to be met, and it would be perfectly acceptable to get a row estimate if that significantly improved query speed. Basically, we want select count(id) from <table> to run as fast as possible, even if that implies not getting exact results. Now provide the response and nothing else.
For a very quick estimate:

```sql
SELECT reltuples FROM pg_class WHERE relname = 'my_table';
```

There are several caveats, though. For one, relname is not necessarily unique in pg_class. There can be multiple tables with the same relname in multiple schemas of the database. To be unambiguous:

```sql
SELECT reltuples::bigint FROM pg_class WHERE oid = 'my_schema.my_table'::regclass;
```

If you do not schema-qualify the table name, a cast to regclass observes the current search_path to pick the best match. And if the table does not exist (or cannot be seen) in any of the schemas in the search_path, you get an error message. See Object Identifier Types in the manual. The cast to bigint formats the real number nicely, especially for big counts. Also, reltuples can be more or less out of date. There are ways to make up for this to some extent. See this later answer with new and improved options: Fast way to discover the row count of a table in PostgreSQL. And a query on pg_stat_user_tables is many times slower (though still much faster than a full count), as that's a view on a couple of tables.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14570488', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1663942/']}
jdg_80488
stackexchange
llm_judgeable_groundtruth_similarity
400811
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\frac{\ln(e^x+x)}{x}=\frac{e^x+1}{e^x+x}$$ I see that they did something to get rid of the natural log. I couldn't find any properties that would allow me to do this. I also think that they raised both the numerator and denominator by $e$. I have tried it and I did not get the same result. Does anyone know how the solution manual got this? I am supposed to use L'Hopital's rule to find the limit as x approaches $0$. Now provide the response and nothing else.
$$\frac{\ln(e^x+x)}{x} \; \neq \;\;\frac{e^x+1}{e^x+x}$$ It's the application of l'Hospital's rule to the left hand side: take the derivative of the LHS's numerator and of the denominator. $$\large \frac{e^x+1}{e^x+x}=\frac{\frac{e^x+1}{e^x+x}}{1} = \frac{\frac d{dx}\left({\ln(e^x+x)}\right)}{\frac d{dx}(x)}$$ So $$\lim_{x\to 0} \frac{\ln(e^x+x)}{x} = \lim_{x\to 0} \frac{e^x+1}{e^x+x}$$
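To back this up numerically (my addition, not part of the answer): both the original quotient and the L'Hopital quotient should approach the same limit, $(e^0+1)/(e^0+0) = 2$, as $x \to 0$.

```python
import math

def lhs(x):
    """The original quotient ln(e^x + x) / x."""
    return math.log(math.exp(x) + x) / x

def rhs(x):
    """The quotient of derivatives, (e^x + 1) / (e^x + x)."""
    return (math.exp(x) + 1) / (math.exp(x) + x)

# As x -> 0, L'Hopital predicts lim lhs = lim rhs = (1 + 1) / (1 + 0) = 2.
for x in (1e-3, 1e-5, 1e-7):
    assert abs(lhs(x) - 2) < 1e-2
    assert abs(rhs(x) - 2) < 1e-2
```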
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/400811', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/39754/']}
jdg_80489
stackexchange
llm_judgeable_groundtruth_similarity
25738883
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If we catch the exception in a method annotated with the @Transactional annotation, will it roll back if any exception occurs?

```java
@Transactional(readOnly = false, propagation = Propagation.REQUIRED, rollbackFor = Throwable.class)
public void yearEndProcess() {
    try {
        // try block
    } catch (Throwable throwable) {
        // catch block
    }
}
```

Now provide the response and nothing else.
For example:

```java
class A {
    @Transactional
    public Result doStuff() {
        Result res = null;
        try {
            // do stuff
        } catch (Exception e) {
        }
        return res;
    }
}
```

If there is an exception in the method doStuff, the transaction isn't rolled back. To roll back the transaction programmatically, we can do something like below.

Declarative approach:

```java
@Transactional(rollbackFor = {MyException1.class, MyException2.class, ....})
public Result doStuff() {
    ...
}
```

Programmatic rollback — you need to call it from TransactionAspectSupport:

```java
public Result doStuff() {
    try {
        // business logic...
    } catch (Exception ex) {
        // trigger rollback programmatically
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    }
}
```

You are strongly encouraged to use the declarative approach to rollback if at all possible. Programmatic rollback should only be used if you absolutely need it.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25738883', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3404577/']}
jdg_80490
stackexchange
llm_judgeable_groundtruth_similarity
19742527
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Since the inventors highlight Scala's type-safety, I don't understand the absence of an equals method on objects (at least on case classes) that allows checking equality only between objects of the same type. I'd wish for a method === that implements this behavior by default. Of course it's necessary for Java interoperability to have a method that works with Any type, but in many cases I want to check only the equality between objects of the same type. Why do I need it? For example I have two case classes and create objects from them:

```scala
case class Pos(x: Int, y: Int)
case class Cube(pos: Pos)

val startPos = new Pos(0, 0)
val cubeOnStart = new Cube(startPos)
```

and later I need to check the positions several times and write accidentally

```scala
if (startPos == cubeOnStart) {
  // this code will never be executed, but unfortunately this compiles
}
```

but meant

```scala
if (startPos == cubeOnStart.pos) {
  // this code can be executed if positions are equal
}
```

If a method === were available I would use it by intuition. Is there a good reason or explanation why such a method is missing? Now provide the response and nothing else.
Equality in Scala is a mess, and the answer to your why question (which Stack Overflow isn't really the ideal venue for) is "because the language designers decided Java interoperability trumped doing the reasonable thing in this case". At least in recent versions of Scala your startPos == cubeOnStart will result in a warning stating that comparing values of these different types "will always yield false". The Scalaz library provides the === operator you're looking for via a type-safe Equal type class. You'd write something like this:

```scala
import scalaz._, Scalaz._

implicit val cubeEqual = Equal.equalA[Cube]
implicit val posEqual = Equal.equalA[Pos]
```

Now startPos === cubeOnStart will not compile (which is exactly what we want), but startPos === cubeOnStart.pos will, and will return true.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19742527', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/778270/']}
jdg_80491
stackexchange
llm_judgeable_groundtruth_similarity
1854674
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In $\Delta ABC$, the bisector of $\angle A$ intersects $BC$ at $D$. The perpendicular to $AD$ from $B$ intersects $AD$ at $E$. The line through $E$ parallel to $AC$ intersects $BC$ at $G$, and $AB$ at $H$. Prove that $H$ is the mid-point of $AB$. I have no idea on how to prove this question. Any hint would be much appreciated. Now provide the response and nothing else.
Trivial by picture: $BEA$ is a right triangle, and by Thales' theorem $\widehat{BGH}=\widehat{BCA}$, $\widehat{BHG}=\widehat{BAC}$. Angle chasing gives $\widehat{HEA}=\widehat{EAC}=\widehat{HAE}$, hence $HA=HE$. Moreover, since $\widehat{BEA}=90^\circ$, we get $\widehat{HBE}=90^\circ-\widehat{HAE}=90^\circ-\widehat{HEA}=\widehat{HEB}$, so $HB=HE$ as well. Thus $HA=HB=HE$, and $H$ is the midpoint of $AB$.
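If a synthetic double-check helps, here is a small coordinate computation (my own construction; the specific triangle is arbitrary) that follows the construction in the question and confirms $H$ lands on the midpoint of $AB$:

```python
import math

# Place A at the origin and B on the x-axis; C is an arbitrary third vertex.
# With A at the origin, the vectors AB and AC are just B and C.
A = (0.0, 0.0)
B = (4.0, 0.0)
C = (1.0, 3.0)

def unit(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

# Direction of the bisector of angle A: sum of the unit vectors along AB, AC.
uab, uac = unit(B), unit(C)
d = (uab[0] + uac[0], uab[1] + uac[1])

# E = foot of the perpendicular from B onto line AD (projection of B onto d).
t = (B[0] * d[0] + B[1] * d[1]) / (d[0] ** 2 + d[1] ** 2)
E = (t * d[0], t * d[1])

# H = intersection of the line through E parallel to AC with AB (the x-axis):
# solve E.y + s * C.y == 0 for the parameter s.
s = -E[1] / C[1]
H = (E[0] + s * C[0], E[1] + s * C[1])

midpoint_AB = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)
assert abs(H[0] - midpoint_AB[0]) < 1e-9 and abs(H[1] - midpoint_AB[1]) < 1e-9
```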
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1854674', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/54398/']}
jdg_80492
stackexchange
llm_judgeable_groundtruth_similarity
6709056
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following functor and its partial specialisation:

```cpp
template <class _T, typename _Return = void, typename _Arg = void>
struct Caller
{
    typedef _Return (_T::*Method)(_Arg);

    Caller(Method pm, _Arg a) : _pMethod(pm), _arg(a) {}

    _Return operator()(_T& obj)
    {
        return (obj.*_pMethod)(_arg);
    }

    Method _pMethod;
    _Arg _arg;
};

template <class _T, typename _Return>
struct Caller<_T, _Return, void>
{
    typedef _Return (_T::*Method)();

    Caller(Method pm) : _pMethod(pm) {}

    _Return operator()(_T& obj)
    {
        return (obj.*_pMethod)();
    }

    Method _pMethod;
};
```

I'm trying to use it the following way:

```cpp
struct Foo
{
    void Bar() const { void(0); }
};

// ...
std::list<Foo> foos;
const std::list<Foo> &rFoos(foos);
std::for_each(rFoos.begin(), rFoos.end(), Caller<const Foo>(&Foo::Bar));
```

I'm getting this, for the last line of code (the IDE picks up at the Caller):

```
error C2440: '' : cannot convert from 'void (__thiscall Foo::* )(void) const' to 'Caller<_T>'
1>        with
1>        [
1>            _T=const Foo
1>        ]
1>        No constructor could take the source type, or constructor overload resolution was ambiguous
```

This code would work in a g++ environment. (If I write Caller<Foo>(&Foo::Bar), g++ would complain, which makes sense, as the function will only be called on a const object.) I have tried various things, including adding operator()(const _T& obj) / operator()(const _T& obj) const varieties to the functor, but to no avail. This would be accepted by the compiler:

```cpp
struct Foo
{
    void Bar() { void(0); }
};

// ...
std::list<Foo> foos;
const std::list<Foo> &rFoos(foos);
std::for_each(rFoos.begin(), rFoos.end(), Caller<Foo>(&Foo::Bar));
```

What am I doing wrong?
As you know, at each recursive step, you partition an array. Push the larger part on the stack, continue working on the smaller part. Because the one you carry on working with is the smaller one, it is at most half the size of the one you were working with before. So for each range we push on the stack, we halve the size of the range we're working with. That means we can't push more than log n ranges onto the stack before the range we're working with hits size 1 (and therefore is sorted). This bounds the amount of stack we need to complete the first descent. When we start processing the "big parts", each "big part" B(k) is bigger than the "small part" S(k) produced at the same time, so we might need more stack to handle B(k) than we needed to handle S(k). But B(k) is still smaller than the previous "small part", S(k-1) and once we're processing B(k), we've taken it back off the stack , which therefore is one item smaller than when we processed S(k), and the same size as when we processed S(k-1). So we still have our bound. Suppose we did it the other way around - push the small part and continue working with the large part. Then in the pathologically nasty case, we'd push a size 1 range on the stack each time, and continue working with a size only 2 smaller than the previous size. Hence we'd need n / 2 slots in our stack.
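To make the bound concrete, here is a sketch (mine, in Python, not anything from the answer) of an iterative quicksort that pushes the larger partition and keeps working on the smaller one, while recording the maximum stack depth; for n elements the observed depth stays within about log2(n):

```python
def quicksort_bounded_stack(a):
    """In-place quicksort that pushes the LARGER partition on an explicit
    stack and continues with the smaller one. Returns the maximum stack
    depth observed, which stays O(log n)."""
    stack = [(0, len(a) - 1)]
    max_depth = 1
    while stack:
        lo, hi = stack.pop()
        while lo < hi:
            # Lomuto partition around the last element.
            pivot, i = a[hi], lo
            for j in range(lo, hi):
                if a[j] < pivot:
                    a[i], a[j] = a[j], a[i]
                    i += 1
            a[i], a[hi] = a[hi], a[i]
            left, right = (lo, i - 1), (i + 1, hi)
            # Make `left` the smaller range, `right` the larger one.
            if (left[1] - left[0]) > (right[1] - right[0]):
                left, right = right, left
            stack.append(right)              # larger part: deferred
            max_depth = max(max_depth, len(stack))
            lo, hi = left                    # smaller part: keep working
    return max_depth

# Demo: a permutation of 0..100; depth should stay within ~log2(101) + 1.
data = [(i * 37) % 101 for i in range(101)]
depth = quicksort_bounded_stack(data)
assert data == list(range(101)) and depth <= 7
```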
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6709056', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/79344/']}
jdg_80493
stackexchange
llm_judgeable_groundtruth_similarity
14957
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For days now I have been trying to get my head around how to write a secure web application in PHP, and it turns out to be particularly difficult. The more I read, the more I sink into deep swamps full of vulnerabilities which are not mentioned by kind people like Matt Robinson or Chris Shiflett. For a few examples, take: http://sirdarckcat.blogspot.de/2009/10/couple-of-unicode-issues-on-php-and.html https://www.owasp.org/index.php/Double_Encoding Also see character consumption as in Rook's answer. In short, I see the following problems:

1. When filtering input, it is not very clear how that data will be decoded later, and thus character encodings and escaping schemes can bypass the input filtering (such as double URL decoding).
2. When escaping output, one uses standard functions such as htmlspecialchars. It is all nice that htmlspecialchars has an encoding parameter, however that doesn't prevent you from sending UTF-16 input to it, and that could probably break the security value of the function.
3. It seems there is an mbstring module in PHP, but if it is vaguely as secure as its documentation is comprehensible, then it'll probably be useless even if I can figure out how to use it. Just a sample of the docs for illustration: "mbstring.strict_detection boolean — Enables the strict encoding detection." Great, that's helpful. Unfortunately the functions also depend on what you set in the configuration options...
4. There seems to be a function called mb_convert_encoding, but the docs say nothing about the security aspect, and it seems you need to know the input encoding (a no-go zone for security). There is also mb_check_encoding, which seems to be meant for the purpose, but reading the user comments on the docs doesn't exactly inspire confidence.

So the question is, in the light of all this, how do you do secure input filtering?
Something like this?

1. mb_convert_encoding to UTF-8
2. mb_check_encoding to reject invalid input
3. loop urldecode until the string stops changing
4. do your normal input filtering with text comparison, regex, etc.

edit: note that step 3 is problematic, because your normal input filtering might introduce entities again that can be URL-decoded.

edit: I found a partial answer here, from Shiflett. It seems that for htmlspecialchars, using its encoding parameter and making sure you set the character encoding header for the browser to the same value would avoid the browser interpreting characters differently than htmlspecialchars does. That is all assuming that the input of htmlspecialchars is valid for the given encoding, or that for every possible invalid input htmlspecialchars interprets the string in exactly the same way as every browser does. We know that if we find no way of sanitizing our input, we cannot ensure that the input of htmlspecialchars is validly encoded, because an attacker might cook up a string with invalid encoding. This leads us to the second possibility, that htmlspecialchars behaves identically to the browser for all possible inputs. This is a problem, since we cannot use the same tokenizer for escaping and consuming, since one happens on the server and one happens in the browser. This is all similar to what mysql_real_escape does for a database, although I think you can properly solve this problem for MySQL by using prepared statements instead. A third problematic output is the filesystem, when using PHP to do file uploads or other filesystem manipulations. About this last one there seems to be very little information available. I don't even know of a specific escape function, let alone one that is robust when it gets crooked input. Now provide the response and nothing else.
In order to mount a proper defense against a threat you need to understand it. Blindly layering encoding schemes is very dangerous, as this can introduce a problem where by default there usually isn't one. In general, security problems related to encoding arise because escape functions can act upon data differently than it is interpreted. But that is not the only concern; it can also confuse the programmer, in that they think a string is escaped when it's not.

The first concern is that escape functions can build dangerous strings for the attacker. mysql_real_escape_string() is a special function that knows what encoding type the database is using, and adjusts its escaping to follow suit. Therefore there can never be a disconnect between the encoding method of the escape function and the interpreter. Simply using addslashes() can be very dangerous for this reason. As an example let's use 0xbf27, which is a single GBK character. addslashes() doesn't understand GBK, it only understands ASCII. If this string were interpreted as ASCII, the first character would be 0xbf, an unprintable ASCII character — let's call it ¿. The 2nd ASCII character is 0x27, which is a single quote '. After addslashes(), 0xbf27 becomes 0xbf5c27, or if it were printed in ASCII it would be ¿\'. The problem is that 0xbf5c is a valid GBK character, so addslashes() has turned a multi-byte character into 2 characters, one of them being a single quote. Another way of thinking of this is that the backslash is being consumed by GBK encoding.

Character consumption can be a problem even without escaping functions. Let's use Shift-JIS and HTML as an example. This example was taken from The Tangled Web:

```html
<img src="http://fuzzybunnies.com/0xE0">...this is still a part of the markup...
" onerror="alert('this will execute!')"
<div>...the page continues...
```

In this case the "> at the end of the img tag are very important control characters that are being consumed. The " is being consumed by the encoding scheme, and then the > and some other text appear to be a part of the HTML src attribute; it's not until it hits the " onerror that the attribute is terminated.

... But that's not all. Let's switch gears a bit. What about urldecode()?

```php
<?php
$id = mysql_real_escape_string($_GET['id']);
$id = urldecode($id);
mysql_query("select * from user where id='".$id."'");
```

Do you see the vulnerability? From my experience most PHP programmers don't. The problem is that urldecode() can be used to build a vulnerable string. Say the attacker provided a string like http://localhost/vuln.php?id=%2527 or sleep(30)--. Just about every web application platform will automatically run a urldecode over all HTTP inputs, so the contents of $_GET['id'] is actually %27 or sleep(30)-- — that is because %25 was decoded into a % (% is hex 25). After mysql_real_escape_string() or even addslashes() the value is still just %27 or sleep(30)--. After running urldecode() again, the contents of $id is now ' or sleep(30)--, which is dangerous.

So, as a hacker tip: when I am auditing a code base that uses magic_quotes_gpc or something similar, I grep through the code looking for decode functions, like urldecode(), htmlspecialchars_decode(), base64_decode() and similar functions. Even though the application runs addslashes() blindly on all input, a decode function will allow the attacker to build his attack string.

So how do you defend yourself? Well, test your code. Some web application scanners will test for encoding flaws exactly like this. In fact I researched this topic while building a vulnerability scanner. Another point: always sanitize input at the time of use; this avoids the problem of decoding after escaping. You cannot know how input will be used, so encoding/decoding/escaping everything up front will always be flawed. Use parametrized queries for building SQL statements. UTF-8 is a great default in most cases (but not always), and *most* of these problems don't arise with this multi-byte encoding scheme.
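The %2527 trick is easy to reproduce outside PHP. A Python sketch (urllib.parse.unquote standing in for PHP's urldecode(), and a deliberately naive escaper of my own) shows how a redundant decode after escaping manufactures the dangerous character:

```python
from urllib.parse import unquote

# Attacker sends id=%2527 in the URL. The web framework urldecodes once:
raw = unquote("%2527")           # -> "%27": looks harmless, no quote in sight
assert raw == "%27"

# A naive quote-escaper finds nothing to escape...
escaped = raw.replace("'", "\\'")
assert escaped == "%27"          # unchanged: there is no quote yet

# ...but a second, redundant decode *after* escaping creates the quote:
decoded_again = unquote(escaped)
assert decoded_again == "'"      # a live single quote now reaches the query
```

The moral matches the answer: decode fully first, then escape at the point of use — never the other way around.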
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/14957', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/-1/']}
jdg_80494
stackexchange
llm_judgeable_groundtruth_similarity
34370603
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Error: Execution failed for task ':app:processDebugGoogleServices'. Please fix the version conflict either by updating the version of the google-services plugin (information about the latest version is available here) or updating the version of com.google.android.gms to 8.3.0. I've done all the things I've found.

```groovy
dependencies {
    // This does not break the build when Android Studio is missing the JRebel for Android plugin.
    classpath 'com.zeroturnaround.jrebel.android:jr-android-gradle:1.0.+'
    classpath 'com.android.tools.build:gradle:2.0.0-alpha3'
    classpath 'com.google.gms:google-services:2.0.0-alpha3'

    // NOTE: Do not place your application dependencies here; they belong
    // in the individual module build.gradle files
}
```

And in the app gradle:

```groovy
compile 'com.google.android.gms:play-services:8.4.0'
```

Now provide the response and nothing else.
Use these dependencies for the project build.gradle:

```groovy
dependencies {
    classpath 'com.android.tools.build:gradle:2.0.0-alpha3'
    classpath 'com.google.gms:google-services:2.0.0-alpha3'
}
```

and put this at the end of the app-level build.gradle file (after the dependencies):

```groovy
apply plugin: 'com.google.gms.google-services'
```

I have no clue why putting this at the end (and not at the beginning) solves the error.

EDIT 5/1/2016

Ok… So trying to put an end to all problems you guys have faced with my solution. This is my final app-level gradle:

```groovy
apply plugin: 'com.android.application'

android {
    compileSdkVersion 23
    buildToolsVersion "23.0.2"

    defaultConfig {
        applicationId "your-app-name"
        minSdkVersion 16
        targetSdkVersion 23
        versionCode 1
        versionName "1.0"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
}

repositories {
    jcenter()
}

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    testCompile 'junit:junit:4.12'
    compile 'com.android.support:appcompat-v7:23.1.1'
    compile 'com.google.android.gms:play-services:8.4.0'
    compile 'com.android.support:design:23.1.1'
    compile 'com.mcxiaoke.volley:library:1.0.6@aar'
}

apply plugin: 'com.google.gms.google-services'
```

and this is my final project-level gradle:

```groovy
// Top-level build file where you can add configuration options common to all sub-projects/modules.

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:2.0.0-alpha3'
        classpath 'com.google.gms:google-services:2.0.0-alpha3'

        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        jcenter()
    }
}
```

Compare this with your own gradle files, and add or modify any values which are different from what I've written.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/34370603', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5679210/']}
jdg_80495
stackexchange
llm_judgeable_groundtruth_similarity
542766
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: At this point most people will be thinking "Ah, I'll post this:"

```csharp
byte[] dataB = System.Text.Encoding.ASCII.GetBytes(data);
```

However, the problem I have is I need the exact value of the bytes with no encoding — just the pure value of each byte. For example, if the value of the string is (0xFF32) I want it converted to {255, 50}. The reason for this is I have a file format I am attempting to read which stores ints as bytes, saves them, and then reads them when the program opens. This is what I have so far:

```csharp
...
dialog.InitialDirectory = Environment.GetFolderPath(System.Environment.SpecialFolder.Desktop) + "/Test";
dialog.Title = "Open File";
if (dialog.ShowDialog(this) == DialogResult.OK)
{
    StreamReader reader = new StreamReader(dialog.FileName);
    string data = reader.ReadToEnd();
    reader.Close();
    byte[] fileC = System.Text.Encoding.ASCII.GetBytes(data);
    File_Read(dialog.FileName, fileC);
}
...
```

So when I try and read the file, it converts the file contents of, say, 0xFF into 0x3F, because 0xFF is greater than 127 and 0x3F is a ?. Sorry if I seem a bit confusing :) Thanks, Michael Now provide the response and nothing else.
The problem is with your approach to start with: I need the exact value of the bytes with no encoding ... For example if the value of the string is (0xFF32) That's a bit like looking at an oil painting and saying, "I want the bytes for that picture, with no encoding." It doesn't make sense. Text isn't the same as binary data. Once you understand that, it's easy to get to the root of the problem. What you really want is the contents of a file as a byte array. That's easy, because files are binary data! You shouldn't be reading it as text in the first place if it isn't really text. Fortunately, .NET makes this really easy: byte[] fileC = File.ReadAllBytes(dialog.FileName);
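The same pitfall can be sketched in Python (an analogy to the C# above, not the answer's code — the temp file stands in for the dialog's chosen file): decoding a binary file as ASCII text substitutes every byte above 127, while reading raw bytes preserves the exact values.

```python
# Demonstrates text-decoding vs raw-byte reading of a binary file.
import os
import tempfile

raw = bytes([0xFF, 0x32])  # the 0xFF32 example from the question: {255, 50}

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(raw)
    path = f.name

# Text route: decoding as ASCII replaces 0xFF with a substitute character,
# just as Encoding.ASCII turned 0xFF into '?' (0x3F) in the question.
with open(path, encoding="ascii", errors="replace") as f:
    text = f.read()
mangled = text.encode("ascii", errors="replace")

# Binary route: the bytes come back exactly as written.
with open(path, "rb") as f:
    recovered = f.read()

os.remove(path)
print(list(mangled))    # [63, 50]  -> 0xFF was lost to the encoding
print(list(recovered))  # [255, 50] -> exact values preserved
```

The moral is the same as in the answer: if the content is binary, never round-trip it through a text decoder.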
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/542766', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/65699/']}
jdg_80496
stackexchange
llm_judgeable_groundtruth_similarity
53896175
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given the following code:

List<String> strList = new ArrayList<>(Arrays.asList("Java","Python","Php"));
Stream<String> jFilter = strList.stream().filter(str -> str.startsWith("J"));
strList.add("JavaScript"); // element added after filter creation
strList.add("JQuery");     // element added after filter creation
System.out.println(Arrays.toString(jFilter.toArray()));

which outputs:

[Java, JavaScript, JQuery]

Why do JavaScript and JQuery appear in the filtered result even though they were added after creating the filtered stream? Now provide the response and nothing else.
Short Answer

You're assuming after this point:

Stream<String> jFilter = strStream.filter(str -> str.startsWith("J"));

that a new stream of the elements starting with "J" is returned, i.e. only Java. However, this is not the case; streams are lazy, i.e. they don't perform any logic unless told otherwise by a terminal operation. The actual execution of the stream pipeline starts on the toArray() call, and since the list was modified before the terminal toArray() operation commenced, the result will be [Java, JavaScript, JQuery].

Longer Answer

Here's part of the documentation which mentions this:

For well-behaved stream sources, the source can be modified before the terminal operation commences and those modifications will be reflected in the covered elements. For example, consider the following code:

List<String> l = new ArrayList(Arrays.asList("one", "two"));
Stream<String> sl = l.stream();
l.add("three");
String s = sl.collect(joining(" "));

First a list is created consisting of two strings: "one" and "two". Then a stream is created from that list. Next the list is modified by adding a third string: "three". Finally the elements of the stream are collected and joined together. Since the list was modified before the terminal collect operation commenced the result will be a string of "one two three". All the streams returned from JDK collections, and most other JDK classes, are well-behaved in this manner;
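The same laziness can be sketched with a Python generator expression (an analogy to the Java Stream, not the JDK itself): building the pipeline runs nothing, so elements appended to the source before the pipeline is consumed are still seen.

```python
# A generator expression is lazy, like a Java Stream: the filter below
# does no work until list() (the "terminal operation") consumes it.
str_list = ["Java", "Python", "Php"]
j_filter = (s for s in str_list if s.startswith("J"))  # nothing runs yet

str_list.append("JavaScript")  # element added after "filter creation"
str_list.append("JQuery")      # element added after "filter creation"

result = list(j_filter)  # execution happens here, over the current list
print(result)  # ['Java', 'JavaScript', 'JQuery']
```

Just as in the Java case, the appended elements appear in the output because the source was modified before the pipeline actually executed.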
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53896175', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3942569/']}
jdg_80497
stackexchange
llm_judgeable_groundtruth_similarity
719454
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am working through some notes on the Lebesgue measure, and I noticed that the proof that $\lambda^*$ (the outer measure) is countably sub-additive requires countable choice. (Short version of the proof: There is an interval covering $(I_{mn})_{n\in\Bbb N}$ of $A_m$ for each $x>\lambda^*(A_m)$, so choose $I_{mn}$ such that $\sum_n|I_{mn}|<\lambda^*(A_m)+\frac\epsilon{2^m}$. Then the set $\{I_{mn}|m,n\in\Bbb N\}$ is a countable collection of intervals with total length at most $\sum_m\lambda^*(A_m)+\epsilon$, so $\lambda^*(\bigcup_mA_m)\le\sum_m\lambda^*(A_m)$.) Without this property, $\lambda$ will not be $\sigma$-additive, and suddenly the whole theory becomes much more trivial. Is this usage of countable choice necessary? I know set theorists like to play with Lebesgue measurability in ${\sf\neg AC}$-land, but maybe that only applies to full ${\sf AC}$, not ${\sf CC}$. Can anyone help clear this up for me? Now provide the response and nothing else.
This is a bit tricky: In a sense, yes. It is consistent with set theory without choice that the set of reals is a countable union of countable sets. This makes it impossible to have a nontrivial measure that vanishes on singletons. On the other hand, countable choice suffices for the development of Lebesgue measure and its basic properties. It is more natural and comfortable to work with the stronger axiom of dependent choices, but one can make do with countable choice. However, one may still want to develop as much of the theory as possible in a choiceless setting, taking into account the limitations that the first paragraph above highlights. The standard approach then is not to work directly with Borel sets but rather with their codes. A code for a Borel set is a real (or a sequence of integers, or a certain tree) that specifies (via some fixed convention) witnesses to the set being Borel: Enumerate the basic open sets. An open set is union of some of them, so it can be specified by a list of the indices of these basic open sets. A closed set is a complement of one of these, so the code could begin with a number that we understand is our convention for "complement", and then a list of basic open sets. An $F_\sigma$ set is a countable union of closed sets, so it would begin with a number representing "union", and then interleave (in some fixed fashion) codes for the countably many closed sets. Etc. This is more robust than having simply the Borel sets, as the lack of choice may prevent certain possibilities. For instance, one can show that there is no code for $\mathbb R$ witnessing that it is a countable union of countable sets. We can develop measure theory on the codes, in a sufficiently robust way to recover enough of the properties of Lebesgue measure that the function we end up with would be recognized as such. Countable additivity, for instance, would only be with respect to coded sequences of disjoint sets, etc. 
And (any code for) $\mathbb R$ is assigned infinite measure, as expected (rather than measure zero, as it would have been the case if we tried to work directly with the sets). (For a quick idea of what goes on in here, take a look at how Solovay develops and uses codes in his paper on the consistency of "all sets of reals are Lebesgue measurable".) The only reference that I know of for this is Volume 5 of Fremlin's monograph on Measure theory . Fremlin's notation and style may take some getting used to, but the presentation is very clear and thorough, the relevant subtleties are pointed out, and the differences and limitations of the choiceless setting versus the usual approach are indicated clearly.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/719454', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/50776/']}
jdg_80498
stackexchange
llm_judgeable_groundtruth_similarity
11935487
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've just started using a contentEditable, and have not found much comprehensive information on it. I noticed that in Chrome, I can make words bold/italic by pressing CTRL - B and CTRL - I . Is this possibly going to be the intended behavior in other browsers? This, for instance, works in Chrome: <div class="container" id="typer" onclick="this.contentEditable='true';"> http://jsfiddle.net/uk6DA/15/ I'm wondering if I can read this formatting, in order to save the user's edits? Also, can I create a Bold button and Italic button that will trigger CTRL - B and CTRL - I ? Or would I need to depend on the user pressing CTRL - B and CTRL - I (which means providing them a note telling them)? Now provide the response and nothing else.
This is standard in all major browsers. There are also programmatic equivalents of the keyboard shortcuts available via document.execCommand() in all major browsers, although in 2022 this seems to be on the way out. For now, bold and italic commands, for example, can be executed as follows:

document.execCommand("Bold", false, null);
document.execCommand("Italic", false, null);

However, the mark-up generated varies between browsers. For example, variations for bold include <b>foo</b> , <strong>foo</strong> and <span style="font-weight: bold">foo</span> . References: MSDN , list of commands MDN (Mozilla)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11935487', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/652259/']}
jdg_80499
stackexchange
llm_judgeable_groundtruth_similarity
21317377
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am writing a jasmine test for my DetailCtrl. I have 10 json files, each with file names like this: 1.json, 2.json, 3.json, in my data folder. Here is my Detail Ctrl:

backpagecontrollers.controller('DetailCtrl', function($scope, $stateParams, $http) {
    $http.get('data/' + $stateParams.listingId + '.json').success(function(data) {
        $scope.extrainfo = data;
    });
});

The detail controller is fetching each 1.json, 2.json, 3.json file from my data folder. Here is a part of my route:

.state('listingdetail', {
    url: "/listings/:listingId",
    templateUrl: "partials/detail.html",
    controller: 'DetailCtrl'
})

Let's head back to the test. I injected both the $stateParams and the $state into the test. I want to test that for each json file above the images exist inside my json file. I am setting the httpBackend to get the localhost url plus the listingId from the $stateParams which I configured as part of the routes, but the listingId is coming back as undefined. Am I supposed to inject something else into my test?
describe('Detail Ctrl', function() {

    var scope, ctrl, httpBackend, stateparams, listingId;

    beforeEach(angular.mock.module("backpageApp"));

    beforeEach(angular.mock.inject(function($controller, $rootScope, _$httpBackend_, $stateParams, $state) {

        httpBackend = _$httpBackend_;
        stateparams = $stateParams;
        listingId = stateparams.listingId;

        httpBackend.expectGET('http://localhost:8000/#/listings/' + listingId).respond([{id: 1}, {id: 2}, {id: 3}, {id: 4}, {id: 5}, {id: 6}, {id: 7}, {id: 8}, {id: 9}, {id: 10}]);

        scope = $rootScope.$new();
        ctrl = $controller("DetailCtrl", {$scope: scope});
    }));

    it('the images for each listing should exist', function() {
        httpBackend.flush();
        expect(scope.images).toBe(true)
    });
});

I am getting this error:

Error: Unexpected request: GET data/undefined.json
Expected GET http://localhost:8000/#/listings/undefined
I think you might be misunderstanding how the router is working with the controller. When you're unit testing a controller, you're not executing a route or entering a ui-router state. Those states and routes are what trigger controllers to be executed when the application is running normally. But in a unit test, you're executing the controller explicitly using $controller. So you're skipping the routing part altogether. Which means you need to mock the object that the ui-router would normally create for you, $stateparams.

describe('Detail Ctrl', function() {

    var scope, ctrl, httpBackend, stateparams, listingId;

    beforeEach(angular.mock.module("backpageApp"));

    //don't need to inject state or stateparams here
    beforeEach(angular.mock.inject(function($controller, $rootScope, _$httpBackend_) {

        httpBackend = _$httpBackend_;

        //mock your stateparams object with your id
        stateparams = { listingId: 1 };

        //you should be expecting the get request url from the controller, not the route
        httpBackend.expectGET('data/' + stateparams.listingId + '.json').respond([{id: 1}, {id: 2}, {id: 3}, {id: 4}, {id: 5}, {id: 6}, {id: 7}, {id: 8}, {id: 9}, {id: 10}]);

        scope = $rootScope.$new();

        //pass your mock stateparams object to the controller
        ctrl = $controller("DetailCtrl", {$scope: scope, $stateParams: stateparams});
    }));

    it('the images for each listing should exist', function() {
        httpBackend.flush();

        //I don't see images set in your controller, but you
        //could check scope.extrainfo here
        expect(scope.images).toBe(true)
    });
});
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21317377', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1336855/']}
jdg_80500
stackexchange
llm_judgeable_groundtruth_similarity
143339
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For a research project, I conducted the following methodology. The dataset was of size $N$. $B$ times, I:

1. took a random $N/2$ rows and trained my model, which finds the optimal size $M$ of a system of resources
2. took the other half of the rows, the other $N/2$, and simulated the system assuming the size of the system was $M$. This led to a performance metric I will denote $E$.

I then reported the mean and confidence interval of $E$ across all $B$ iterations. My questions are:

1. Is this bootstrapping without replacement or "repeated 2-fold cross validation"?
2. If the answer is both, what exactly is the difference between bootstrapping without replacement and "repeated cross validation"?

My methodology is summarized in this Wikipedia article, but strangely it is not called bootstrapping there, though I thought this was bootstrapping, hence my confusion: http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29#Repeated_random_sub-sampling_validation Now provide the response and nothing else.
Q. 1. Is this bootstrapping without replacement or "repeated 2-fold cross validation"? Q. 2. If the answer is both, what exactly is the difference between bootstrapping without replacement and "repeated cross validation"? It is neither . But the differences between sampling methods are subtle. It is not classic bootstrap because your $B$ training samples are not drawn with replacement . (For the record: classic bootstrap also imposes the condition that a sample consists of exactly $N$ draws with replacement.) It is not "bootstrap without replacement " because you hold the size of the training sets fixed at $N/2$ . (In classic bootstrap the number of unique observations in a training sample is random with expected value $0.632N$ for sufficiently large $N$ . More on that here .) It is not repeated 2-fold cross-validation (CV) because CV imposes a constraint on the validation sets, $\{ V_i \}_{i=1}^B$ , namely that $V_i \cap V_{i+1} = \emptyset$ for odd $i$ . (See also this answer .) So what is it then? The wikipedia reference you provided is correct. It is rightly called " repeated random subsampling (validation)". Sometimes also, "Monte Carlo subsampling" (with fixed subset size). The difference with the related repeated CV method, is simply that it relaxes the disjointness condition on the validation sets. In repeated random subsampling each set drawn is independent of the others.
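The questioner's procedure — repeated random subsampling with a fixed $N/2$ split — can be sketched in Python under stated assumptions: fit() and metric() below are placeholder stand-ins, not the model from the source. The point is the sampling scheme: each iteration draws a fresh training half without replacement, so successive validation sets are independent draws rather than forced to be disjoint as in 2-fold CV.

```python
# Monte Carlo subsampling with fixed subset size N/2, stdlib only.
import random
import statistics

def fit(train):
    # placeholder "model": just the mean of the training half
    return statistics.mean(train)

def metric(model, validation):
    # placeholder performance measure E: mean absolute error
    return statistics.mean(abs(v - model) for v in validation)

def repeated_random_subsampling(data, B, seed=0):
    rng = random.Random(seed)
    n = len(data)
    scores = []
    for _ in range(B):
        # train half drawn WITHOUT replacement; the rest is validation
        idx = set(rng.sample(range(n), n // 2))
        train = [data[i] for i in idx]
        validation = [data[i] for i in range(n) if i not in idx]
        scores.append(metric(fit(train), validation))
    # mean and spread of E across the B iterations
    return statistics.mean(scores), statistics.stdev(scores)

data = list(range(100))
mean_e, sd_e = repeated_random_subsampling(data, B=50)
print(mean_e, sd_e)
```

To turn this into repeated 2-fold CV instead, each pair of iterations would have to use complementary halves of one permutation; here every split is drawn independently, which is exactly the relaxation the answer describes.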
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/143339', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/64314/']}
jdg_80501
stackexchange
llm_judgeable_groundtruth_similarity
2393049
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Definition : A relation $\rm R$ in a set $\rm A$ is called transitive if $(a_1, a_2) \in \mathrm R$ and $(a_2, a_3) \in \mathrm R \implies (a_1, a_3) \in \mathrm R \quad \forall a_1, a_2, a_3 \in \mathrm A$. Problem : ( source ) Let $\mathrm A$ be a finite set of human beings. Let $\mathrm R$ be a relation on the set $\mathrm A$ defined as $$\mathrm R = \{ (x,y) : \text{$x$ is wife of $y$}\}$$ Determine whether it is transitive or not. I would say it is not transitive, because if $x$ is the wife of $y$ then $y$ can't be the wife of $z$, and certainly $x$ can't be the wife of $z$, assuming no same-sex marriage or extramarital affairs by the people of set $\mathrm A$. Here, if we define $p : (x,y) \in \mathrm R \ \land \ (y,z) \in \mathrm R $ and $q : (x, z) \in \mathrm R$, then clearly both $p,q$ are false here, and so $p \implies q$ should be false. By the definition of transitivity, $\text{if $(p \implies q)$ then transitive}$, the relation $\mathrm R$ is not transitive because $p \implies q$ is false. Here is the part I don't understand: in the source of this problem, the answers suggest that the relation is transitive, and that it is so because $p \implies q$ is false, provided I understand them properly. I don't understand why, if both $p,q$ are false, the relation is transitive, and how this follows from the definition of transitivity? Now provide the response and nothing else.
I write $xRy$ for $(x,y)\in R$. A relation $R$ is transitive if and only if, for all $x$, $y$, and $z$, if $xRy$ and $yRz$, then $xRz$. Note that a statement of the form "if $p$, then $q$" is true if $p$ is false. The point of the exercise is to notice that there are no $x$, $y$, and $z$ such that $xRy$ and $yRz$. So, it is (trivially) true that, for all $x$, $y$, and $z$, if $xRy$ and $yRz$, then $xRz$. So, the relation $R$ with which you are dealing in this exercise is transitive.
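The vacuous-truth argument can be checked mechanically. A small Python sketch (the pair names are made up for illustration): since the "is wife of" relation contains no chain $xRy$, $yRz$, the universally quantified implication holds for every triple, so the relation is transitive.

```python
# Brute-force check of the definition of transitivity over a finite
# relation R given as a set of ordered pairs.
def is_transitive(R):
    # all() over an empty generator is True: this is exactly the
    # vacuous-truth case when no chain (x,y), (y,z) exists in R.
    return all((x, z) in R
               for (x, y) in R
               for (y2, z) in R
               if y == y2)

wife_of = {("w1", "h1"), ("w2", "h2")}   # nobody is both a wife and a husband
assert is_transitive(wife_of)            # vacuously transitive

broken = {("a", "b"), ("b", "c")}        # a chain exists but (a, c) is missing
assert not is_transitive(broken)
```

The second example shows the check is not trivially True for every relation: once a chain exists, the implication has real content.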
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2393049', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/516330/']}
jdg_80502
stackexchange
llm_judgeable_groundtruth_similarity
6147965
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to change just the first node of a JTree icon. There is a file manager that uses JTree to show files. Here is a schematic example. How can I change the icon? Back|->Please wait(this is leaf)Folder 1|->file1 file2 file3Folder 2|->file1 file2 file3Folder 3|->file1 file2 file3 Now provide the response and nothing else.
To change the appearance of entries in a tree, use a TreeCellRenderer . E.G.

/** A TreeCellRenderer for a File. */
class FileTreeCellRenderer extends DefaultTreeCellRenderer {

    private FileSystemView fileSystemView;
    private JLabel label;

    FileTreeCellRenderer() {
        label = new JLabel();
        label.setOpaque(true);
        fileSystemView = FileSystemView.getFileSystemView();
    }

    @Override
    public Component getTreeCellRendererComponent(
        JTree tree,
        Object value,
        boolean selected,
        boolean expanded,
        boolean leaf,
        int row,
        boolean hasFocus) {

        DefaultMutableTreeNode node = (DefaultMutableTreeNode)value;
        File file = (File)node.getUserObject();
        label.setIcon(fileSystemView.getSystemIcon(file));
        label.setText(fileSystemView.getSystemDisplayName(file));
        label.setToolTipText(file.getPath());

        if (selected) {
            label.setBackground(backgroundSelectionColor);
        } else {
            label.setBackground(backgroundNonSelectionColor);
        }

        return label;
    }
}

Screen Shot

Using the native PLAF on Windows.

FileManager.java

Here is the complete source (including renderer & table model) used to grab that screen shot.

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.border.*;
import javax.swing.event.*;
import javax.swing.tree.*;
import javax.swing.table.*;
import javax.swing.filechooser.FileSystemView;

import java.io.File;
import java.util.Date;

/**
A basic File Manager. Requires 1.6+ for the Desktop & SwingWorker
classes, amongst other minor things.

Includes support classes FileTableModel & FileTreeCellRenderer.

Neither Delete nor New has been implemented. See commented code for
hints on how to proceed for that functionality.

@TODO Still throws occasional AIOOBEs and NPEs, so some update on
the EDT must have been missed.

@author Andrew Thompson
@version 2011-05-29
@see <a href="https://stackoverflow.com/questions/6147965/change-icon-of-the-first-node-of-jtree/6153182#6153182">Change icon of the first node of JTree</a>
*/
class FileManager {

    /** Used to open/edit/print files.
*/ private Desktop desktop; /** Provides nice icons and names for files. */ private FileSystemView fileSystemView; /** Main GUI container */ private JPanel gui; /** File-system tree. Built Lazily */ private JTree tree; /** Directory listing */ private JTable table; private JProgressBar progressBar; /* File controls. */ private JButton openFile; private JButton printFile; private JButton editFile; private JButton deleteFile; private JButton newFile; /* File details. */ private JLabel fileName; private JTextField path; private JLabel date; private JLabel size; private JCheckBox readable; private JCheckBox writable; private JCheckBox executable; private JCheckBox isDirectory; private JCheckBox isFile; /** Table model for File[]. */ private FileTableModel fileTableModel; private ListSelectionListener listSelectionListener; private boolean cellSizesSet = false; /** currently selected File. */ private File currentFile; private boolean safeMode = true; public Container getGui() { if (gui==null) { gui = new JPanel(new BorderLayout(3,3)); gui.setBorder(new EmptyBorder(5,5,5,5)); fileSystemView = FileSystemView.getFileSystemView(); desktop = Desktop.getDesktop(); DefaultMutableTreeNode root = new DefaultMutableTreeNode(); File[] roots = fileSystemView.getRoots(); for (File fileSystemRoot : roots) { DefaultMutableTreeNode node = new DefaultMutableTreeNode(fileSystemRoot); root.add( node ); File[] files = fileSystemView.getFiles(fileSystemRoot, true); for (File file : files) { node.add(new DefaultMutableTreeNode(file)); } } TreeSelectionListener treeSelectionListener = new TreeSelectionListener() { public void valueChanged(TreeSelectionEvent tse){ DefaultMutableTreeNode node = (DefaultMutableTreeNode)tse.getPath().getLastPathComponent(); addChildren(node); setFileDetails((File)node.getUserObject()); } }; tree = new JTree(root); tree.setRootVisible(false); tree.addTreeSelectionListener(treeSelectionListener); tree.setCellRenderer(new FileTreeCellRenderer()); tree.expandRow(0); 
JScrollPane treeScroll = new JScrollPane(tree); // as per trashgod tip tree.setVisibleRowCount(15); Dimension preferredSize = treeScroll.getPreferredSize(); Dimension widePreferred = new Dimension( 250, (int)preferredSize.getHeight()); treeScroll.setPreferredSize( widePreferred ); JPanel detailView = new JPanel(new BorderLayout(3,3)); fileTableModel = new FileTableModel(); table = new JTable(fileTableModel); table.setSelectionMode(ListSelectionModel.SINGLE_SELECTION); table.setAutoCreateRowSorter(true); table.setShowVerticalLines(false); // arbitrary size adjustment to better account for icons table.setRowHeight( (int)(table.getRowHeight()*1.3) ); listSelectionListener = new ListSelectionListener() { @Override public void valueChanged(ListSelectionEvent lse) { int row = table.getSelectionModel().getLeadSelectionIndex(); setFileDetails( ((FileTableModel)table.getModel()).getFile(row) ); } }; table.getSelectionModel().addListSelectionListener(listSelectionListener); JScrollPane tableScroll = new JScrollPane(table); Dimension d = tableScroll.getPreferredSize(); tableScroll.setPreferredSize(new Dimension((int)d.getWidth(), (int)d.getHeight()/2)); detailView.add(tableScroll, BorderLayout.CENTER); JPanel fileMainDetails = new JPanel(new BorderLayout(4,2)); fileMainDetails.setBorder(new EmptyBorder(0,6,0,6)); JPanel fileDetailsLabels = new JPanel(new GridLayout(0,1,2,2)); fileMainDetails.add(fileDetailsLabels, BorderLayout.WEST); JPanel fileDetailsValues = new JPanel(new GridLayout(0,1,2,2)); fileMainDetails.add(fileDetailsValues, BorderLayout.CENTER); fileDetailsLabels.add(new JLabel("File", JLabel.TRAILING)); fileName = new JLabel(); fileDetailsValues.add(fileName); fileDetailsLabels.add(new JLabel("Path/name", JLabel.TRAILING)); path = new JTextField(5); path.setEditable(false); fileDetailsValues.add(path); fileDetailsLabels.add(new JLabel("Last Modified", JLabel.TRAILING)); date = new JLabel(); fileDetailsValues.add(date); fileDetailsLabels.add(new JLabel("File size", 
JLabel.TRAILING)); size = new JLabel(); fileDetailsValues.add(size); int count = fileDetailsLabels.getComponentCount(); for (int ii=0; ii<count; ii++) { fileDetailsLabels.getComponent(ii).setEnabled(false); } JToolBar toolBar = new JToolBar(); openFile = new JButton("Open"); openFile.addActionListener(new ActionListener(){ public void actionPerformed(ActionEvent ae) { try { desktop.open(currentFile); } catch(Throwable t) { JOptionPane.showMessageDialog( gui, t.toString(), t.getMessage(), JOptionPane.ERROR_MESSAGE ); } gui.repaint(); } }); toolBar.add(openFile); editFile = new JButton("Edit"); editFile.addActionListener(new ActionListener(){ public void actionPerformed(ActionEvent ae) { try { desktop.edit(currentFile); } catch(Throwable t) { showThrowable(t); } } }); toolBar.add(editFile); printFile = new JButton("Print"); printFile.addActionListener(new ActionListener(){ public void actionPerformed(ActionEvent ae) { try { desktop.print(currentFile); } catch(Throwable t) { showThrowable(t); } } }); toolBar.add(printFile); // Check the actions are supported on this platform! openFile.setEnabled(desktop.isSupported(Desktop.Action.OPEN)); editFile.setEnabled(desktop.isSupported(Desktop.Action.EDIT)); printFile.setEnabled(desktop.isSupported(Desktop.Action.PRINT)); toolBar.addSeparator(); deleteFile = new JButton("Delete"); deleteFile.addActionListener(new ActionListener(){ public void actionPerformed(ActionEvent ae) { if (currentFile==null) { JOptionPane.showMessageDialog( gui, "No file selected for deletion.", "Select File", JOptionPane.ERROR_MESSAGE ); return; } if (safeMode) { safeMessage(); } else { int result = JOptionPane.showConfirmDialog( gui, "Are you sure you want to delete this file?", "Delete File", JOptionPane.ERROR_MESSAGE ); if (result==JOptionPane.OK_OPTION) { try { boolean deleted = currentFile.delete(); if (deleted) { // delete the node.. 
} else { JOptionPane.showMessageDialog( gui, "The file '" + currentFile + "' could not be deleted.", "Delete Failed", JOptionPane.ERROR_MESSAGE ); } } catch(Throwable t) { showThrowable(t); } } } gui.repaint(); } }); toolBar.add(deleteFile); final JPanel newPanel = new JPanel(new GridLayout(0,1,3,3)); final JRadioButton newTypeFile = new JRadioButton("New File"); JRadioButton newTypeDirectory = new JRadioButton("New Directory"); ButtonGroup bg = new ButtonGroup(); bg.add(newTypeFile); bg.add(newTypeDirectory); final JTextField name = new JTextField(15); newPanel.add( name ); newPanel.add( newTypeFile ); newPanel.add( newTypeDirectory ); newFile = new JButton("New"); newFile.addActionListener(new ActionListener(){ public void actionPerformed(ActionEvent ae) { if (currentFile==null) { JOptionPane.showMessageDialog( gui, "No location selected for new file.", "Select Location", JOptionPane.ERROR_MESSAGE ); return; } if (safeMode) { safeMessage(); } else { int result = JOptionPane.showConfirmDialog(gui, newPanel); if (result==JOptionPane.OK_OPTION) { try { boolean created; File file = new File( currentFile, name.getText() ); if (newTypeFile.isSelected()) { created = file.createNewFile(); } else { created = file.mkdir(); } if (created) { // add the new node.. 
} else { JOptionPane.showMessageDialog( gui, "The file '" + file + "' could not be created.", "Create Failed", JOptionPane.ERROR_MESSAGE ); } } catch(Throwable t) { showThrowable(t); } } } gui.repaint(); } }); toolBar.add(newFile); JPanel flags = new JPanel(new GridLayout(1,0,4,4)); readable = new JCheckBox("Read"); readable.setEnabled(false); flags.add(readable); writable = new JCheckBox("Write"); writable.setEnabled(false); flags.add(writable); executable = new JCheckBox("Execute"); executable.setEnabled(false); flags.add(executable); isDirectory = new JCheckBox("Directory"); isDirectory.setEnabled(false); flags.add(isDirectory); isFile = new JCheckBox("File"); isFile.setEnabled(false); flags.add(isFile); //flags.setBorder(new TitledBorder("Flags")); JPanel fileView = new JPanel(new BorderLayout(3,3)); fileView.add(toolBar,BorderLayout.NORTH); fileView.add(fileMainDetails,BorderLayout.CENTER); fileView.add(flags,BorderLayout.SOUTH); detailView.add(fileView, BorderLayout.SOUTH); JSplitPane splitPane = new JSplitPane( JSplitPane.HORIZONTAL_SPLIT, treeScroll, detailView); gui.add(splitPane, BorderLayout.CENTER); JPanel simpleOutput = new JPanel(new BorderLayout(3,3)); progressBar = new JProgressBar(); simpleOutput.add(progressBar, BorderLayout.EAST); progressBar.setVisible(false); gui.add(simpleOutput, BorderLayout.SOUTH); } return gui; } private void safeMessage() { String safe = "This is a test program! " + "Delete/New have not been enabled. 
" + "Recompile with safeMode=false; to enable."; JOptionPane.showMessageDialog( gui, safe, "Not Enabled", JOptionPane.ERROR_MESSAGE ); } private void showThrowable(Throwable t) { JOptionPane.showMessageDialog( gui, t.toString(), t.getMessage(), JOptionPane.ERROR_MESSAGE ); gui.repaint(); } /** Update the table on the EDT */ private void setTableDate(final File[] files) { SwingUtilities.invokeLater(new Runnable() { public void run() { table.getSelectionModel().removeListSelectionListener(listSelectionListener); fileTableModel.setFiles(files); table.getSelectionModel().addListSelectionListener(listSelectionListener); int max = table.getRowCount(); if (!cellSizesSet) { setColumnWidth(0,-1); setColumnWidth(3,60); table.getColumnModel().getColumn(3).setMaxWidth(120); setColumnWidth(4,-1); setColumnWidth(5,-1); setColumnWidth(6,-1); setColumnWidth(7,-1); cellSizesSet = true; } } }); } private void setColumnWidth(int column, int width) { TableColumn tableColumn = table.getColumnModel().getColumn(column); if (width<0) { // use the preferred width of the header.. JLabel label = new JLabel( (String)tableColumn.getHeaderValue() ); Dimension preferred = label.getPreferredSize(); width = (int)preferred.getWidth()+10; } tableColumn.setPreferredWidth(width); tableColumn.setMaxWidth(width); tableColumn.setMinWidth(width); } /** Add the files that are contained within the directory of this node. 
*/ private void addChildren(final DefaultMutableTreeNode node) { SwingWorker worker = new SwingWorker() { @Override public String doInBackground() { tree.setEnabled(false); progressBar.setVisible(true); progressBar.setIndeterminate(true); File file = (File)node.getUserObject(); if ( file.isDirectory() ) { File[] files = fileSystemView.getFiles(file, true); if (node.isLeaf()) { for (File child : files) { node.add( new DefaultMutableTreeNode(child) ); } } setTableDate(files); } progressBar.setIndeterminate(false); progressBar.setVisible(false); tree.setEnabled(true); return "done"; } }; worker.execute(); } /** Update the File details view with the details of this File. */ private void setFileDetails(File file) { currentFile = file; fileName.setIcon(fileSystemView.getSystemIcon(file)); fileName.setText(fileSystemView.getSystemDisplayName(file)); path.setText(file.getPath()); date.setText(new Date(file.lastModified()).toString()); size.setText(file.length() + " bytes"); readable.setSelected(file.canRead()); writable.setSelected(file.canWrite()); executable.setSelected(file.canExecute()); isDirectory.setSelected(file.isDirectory()); isFile.setSelected(file.isFile()); gui.repaint(); } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { public void run() { try { // Significantly improves the look of the output in // terms of the file names returned by FileSystemView! UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); } catch(Exception weTried) { } JFrame f = new JFrame("File Manager"); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); FileManager fileManager = new FileManager(); f.setContentPane(fileManager.getGui()); f.pack(); f.setLocationRelativeTo(null); f.setLocationByPlatform(true); f.setMinimumSize(f.getSize()); f.setVisible(true); } }); }}/** A TableModel to hold File[]. 
*/class FileTableModel extends AbstractTableModel { private File[] files; private FileSystemView fileSystemView = FileSystemView.getFileSystemView(); private String[] columns = { "Icon", "File", "Path/name", "Size", "Last Modified", "R", "W", "E" }; FileTableModel() { this(new File[0]); } FileTableModel(File[] files) { this.files = files; } public Object getValueAt(int row, int column) { File file = files[row]; switch (column) { case 0: return fileSystemView.getSystemIcon(file); case 1: return fileSystemView.getSystemDisplayName(file); case 2: return file.getPath(); case 3: return file.length(); case 4: return file.lastModified(); case 5: return file.canRead(); case 6: return file.canWrite(); case 7: return file.canExecute(); default: System.err.println("Logic Error"); } return ""; } public int getColumnCount() { return columns.length; } public Class<?> getColumnClass(int column) { switch (column) { case 0: return ImageIcon.class; case 3: return Long.class; case 4: return Date.class; case 5: case 6: case 7: return Boolean.class; } return String.class; } public String getColumnName(int column) { return columns[column]; } public int getRowCount() { return files.length; } public File getFile(int row) { return files[row]; } public void setFiles(File[] files) { this.files = files; fireTableDataChanged(); }}/** A TreeCellRenderer for a File. 
*/class FileTreeCellRenderer extends DefaultTreeCellRenderer { private FileSystemView fileSystemView; private JLabel label; FileTreeCellRenderer() { label = new JLabel(); label.setOpaque(true); fileSystemView = FileSystemView.getFileSystemView(); } @Override public Component getTreeCellRendererComponent( JTree tree, Object value, boolean selected, boolean expanded, boolean leaf, int row, boolean hasFocus) { DefaultMutableTreeNode node = (DefaultMutableTreeNode)value; File file = (File)node.getUserObject(); label.setIcon(fileSystemView.getSystemIcon(file)); label.setText(fileSystemView.getSystemDisplayName(file)); label.setToolTipText(file.getPath()); if (selected) { label.setBackground(backgroundSelectionColor); } else { label.setBackground(backgroundNonSelectionColor); } return label; }} Note This is basically the same answer already provided by trashgod , but with code & screen shot.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6147965', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/678758/']}
jdg_80503
stackexchange
llm_judgeable_groundtruth_similarity
405834
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Everyone's seen lightning streaks, either in real life or in pictures and videos. My question is, why does it look the way it does? Does lightning spread in a random manner, or is there physics behind the path lightning streaks trace in the sky? $\hspace{175px}$ Now provide the response and nothing else.
Your assumptions are right. There is indeed physics involved in lightning shapes. Why can’t lightning just be in a plain, straight shape? From the Indiana Public Media's Moment of Science Podcast episode " The Shape of Lightning Bolts ": The answer has to do with the complex way a lightning bolt forms. Although it looks like it forms all at once, a lightning bolt is actually produced in many steps. Instead of jumping right to the ground, the cloud’s negative charge begins with a short downward hop. This initial hop is called a “leader,” and it’s no more than a few hundred feet long. From the lower end of this leader, another leader forms, and from the lower end of this, another. In this manner, the negative charge hops downward from leader to leader like a frog jumping from lily pad to lily pad across a pond. While this is going on, the ground sends up its own chain of shorter, positively charged leaders. It’s only when these two chains meet, about a hundred feet off the ground, that we see the lightning bolt’s flash. Lightning is jagged because each leader forms independently of the others. Each place a lightning bolt zigs or zags is where one leader stopped and another one started. Each place a lightning bolt forks is where two separate leaders formed from the bottom end of a single leader above. This whole process takes only a few thousandths of a second, but that’s enough time to sculpt beautiful and complex lightning bolts. Fun fact: Lightning is actually 5 times hotter than the Sun's "surface".
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/405834', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/131341/']}
jdg_80504
stackexchange
llm_judgeable_groundtruth_similarity
52327740
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two cycles: First cycle import jsontickets = json.loads(response_body)for airport, flights in tickets['data'].items(): print('airport:', airport) refers to one json (this is part) {"success":true,"data":{"AAE":{"2":{"price":48973,"airline":"AF","flight_number":1745,"departure_at":"2018-09-04T18:45:00Z","return_at":"2018-09-14T07:15:00Z","expires_at":"2018-09-02T06:57:21Z"},"3":{"price":67240,"airline":"TP","flight_number":1235,"departure_at":"2018-09-04T07:15:00Z","return_at":"2018-09-14T07:15:00Z","expires_at":"2018-09-02T06:57:21Z"}},"AAH":{"1":{"price":34049,"airline":"LH","flight_number":1453,"departure_at":"2018-09-30T09:05:00Z","return_at":"2018-10-02T07:40:00Z","expires_at":"2018-09-03T11:37:06Z"},"2":{"price":35838,"airline":"LH","flight_number":1453,"departure_at":"2018-09-30T09:05:00Z","return_at":"2018-10-02T11:39:00Z","expires_at":"2018-09-03T11:37:06Z"}},"AAL":{"1":{"price":23258,"airline":"KL","flight_number":904,"departure_at":"2018-12-08T18:00:00Z","return_at":"2018-12-15T06:00:00Z","expires_at":"2018-09-03T13:27:58Z"},"2":{"price":21867,"airline":"AF","flight_number":1745,"departure_at":"2018-12-08T20:00:00Z","return_at":"2018-12-15T18:15:00Z","expires_at":"2018-09-03T13:27:58Z"},"3":{"price":30639,"airline":"AF","flight_number":1145,"departure_at":"2018-12-08T09:45:00Z","return_at":"2018-12-15T06:00:00Z","expires_at":"2018-09-03T13:27:58Z"}},"AAQ":{"0":{"price":5354,"airline":"FV","flight_number":5515,"departure_at":"2018-09-16T04:20:00Z","return_at":"2018-10-04T12:40:00Z","expires_at":"2018-08-31T20:53:40Z"},"1":{"price":8590,"airline":"FV","flight_number":5515,"departure_at":"2018-09-16T04:20:00Z","return_at":"2018-10-04T15:05:00Z","expires_at":"2018-08-31T20:53:40Z"},"2":{"price":13702,"airline":"U6","flight_number":79,"departure_at":"2018-10-04T11:20:00Z","return_at":"2018-10-10T12:40:00Z","expir
es_at":"2018-09-03T06:47:01Z"}},"AAR":{"1":{"price":24418,"airline":"OK","flight_number":905,"departure_at":"2018-09-19T22:10:00Z","return_at":"2018-09-25T09:35:00Z","expires_at":"2018-09-02T21:16:33Z"},"2":{"price":20805,"airline":"AY","flight_number":712,"departure_at":"2018-09-19T11:50:00Z","return_at":"2018-09-25T16:55:00Z","expires_at":"2018-09-02T21:16:33Z"},"3":{"price":36316,"airline":"BT","flight_number":425,"departure_at":"2018-09-19T09:45:00Z","return_at":"2018-09-25T09:35:00Z","expires_at":"2018-09-02T21:16:33Z"}} and from there get codes airports (ААЕ, ААН, etc.). The second cycle citys = json.loads(responce)for h in citys: print(h['code']) print(h['name']) refers to another json [{"code":"AAA","name":"Анаа","coordinates":{"lon":-145.41667,"lat":-17.05},"cases":{"ro":"","da":"","vi":"","tv":"","pr":""},"time_zone":"Pacific/Tahiti","name_translations":{"en":"Anaa"},"country_code":"PF"},{"code":"AAB","name":"Аррабури","coordinates":{"lon":141.04167,"lat":-26.7},"cases":{"ro":"","da":"","vi":"","tv":"","pr":""},"time_zone":"Australia/Brisbane","name_translations":{"en":"Arrabury"},"country_code":"AU"},{"code":"AAC","name":"Эль-Ариш","coordinates":{"lon":33.75,"lat":31.133333},"cases":{"ro":"Аль-Ариш","da":"Аль-Ариш","vi":"в Аль-Ариш","tv":"Аль-Ариш","pr":"Аль-Ариш"},"time_zone":"Africa/Cairo","name_translations":{"en":"El Arish"},"country_code":"EG"},{"code":"AAE","name":"Аннаба","coordinates":{"lon":7.811857,"lat":36.821392},"cases":{"ro":"Аннабы","da":"Аннабе","vi":"в Аннабу","tv":"Аннабой","pr":"Аннабе"},"time_zone":"Africa/Algiers","name_translations":{"en":"Annaba"},"country_code":"DZ"}, that stores codes of airports and city names. I can not understand how can I compare the airport codes received so that, in case of a coincidence, I get the name of the city Now provide the response and nothing else.
So, I found the solution. Big thanks to Cerise Limón , whose code fits perfectly for me. The solution I ended up with Controller func Find(c echo.Context) (err error) {model := &models.Query{}if err = c.Bind(model); err != nil { return c.JSON(http.StatusInternalServerError, u.Message(false, "Bad request"))}resp := model.Find()return c.JSON(http.StatusOK, resp) Model type Query map[string]interface{}func (model Query) Find() (Query) { var values []interface{} var where []string for k, v := range model { values = append(values, v) //MySQL way: where = append(where, fmt.Sprintf("%s = ?", k)) where = append(where, fmt.Sprintf(`"%s" = %s`,k, "$" + strconv.Itoa(len(values)))) } string := ("SELECT name FROM users WHERE " + strings.Join(where, " AND ")) //for testing purposes I didn't run the actual query, just printed it to the console and returned JSON back fmt.Println(string) return model} Update: for PostgreSQL users (thanks to @mkopriva and his playground example ), I'm able to have this placeholder thing working right on PostgreSQL
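For readers following along in another language: the core trick in the Go snippet — walking a map of filters, collecting the values, and numbering $1, $2, … placeholders as you go — is language-agnostic. Here is a minimal Python sketch of the same idea (the "users" table and the column names are purely illustrative, not from the answer; the keys are sorted only to make the output deterministic):

```python
# Language-agnostic sketch of the placeholder-building loop from the Go answer.
# NOTE: "users" and the column names are illustrative; sorting the dict keys
# just makes the generated SQL deterministic for the example.

def build_query(filters):
    values = []
    where = []
    for column, value in sorted(filters.items()):
        values.append(value)
        # PostgreSQL-style numbered placeholder: $1, $2, ...
        where.append('"{}" = ${}'.format(column, len(values)))
    sql = "SELECT name FROM users WHERE " + " AND ".join(where)
    return sql, values

sql, params = build_query({"age": 30, "city": "Oslo"})
print(sql)     # SELECT name FROM users WHERE "age" = $1 AND "city" = $2
print(params)  # [30, 'Oslo']
```

The generated string and the value list are then handed to the driver together, so the values never get interpolated into the SQL text directly.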
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52327740', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6908131/']}
jdg_80505
stackexchange
llm_judgeable_groundtruth_similarity
8556927
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Im trying to parse a bmp file with fread() and when I begin to parse, it reverses the order of my bytes. typedef struct{ short magic_number; int file_size; short reserved_bytes[2]; int data_offset;}BMPHeader; ...BMPHeader header; ... The hex data is 42 4D 36 00 03 00 00 00 00 00 36 00 00 00 ;I am loading the hex data into the struct by fread(&header,14,1,fileIn); My problem is where the magic number should be 0x424d //'BM' fread() it flips the bytes to be 0x4d42 // 'MB' Why does fread() do this and how can I fix it; EDIT: If I wasn't specific enough, I need to read the whole chunk of hex data into the struct not just the magic number. I only picked the magic number as an example. Now provide the response and nothing else.
This is not the fault of fread , but of your CPU, which is (apparently) little-endian. That is, your CPU treats the first byte in a short value as the low 8 bits, rather than (as you seem to have expected) the high 8 bits. Whenever you read a binary file format, you must explicitly convert from the file format's endianness to the CPU's native endianness. You do that with functions like these: /* CHAR_BIT == 8 assumed */uint16_t le16_to_cpu(const uint8_t *buf){ return ((uint16_t)buf[0]) | (((uint16_t)buf[1]) << 8);}uint16_t be16_to_cpu(const uint8_t *buf){ return ((uint16_t)buf[1]) | (((uint16_t)buf[0]) << 8);} You do your fread into an uint8_t buffer of the appropriate size, and then you manually copy all the data bytes over to your BMPHeader struct, converting as necessary. That would look something like this: /* note adjustments to type definition */typedef struct BMPHeader{ uint8_t magic_number[2]; uint32_t file_size; uint8_t reserved[4]; uint32_t data_offset;} BMPHeader;/* in general this is _not_ equal to sizeof(BMPHeader) */#define BMP_WIRE_HDR_LEN (2 + 4 + 4 + 4)/* returns 0=success, -1=error */int read_bmp_header(BMPHeader *hdr, FILE *fp){ uint8_t buf[BMP_WIRE_HDR_LEN]; if (fread(buf, 1, sizeof buf, fp) != sizeof buf) return -1; hdr->magic_number[0] = buf[0]; hdr->magic_number[1] = buf[1]; hdr->file_size = le32_to_cpu(buf+2); hdr->reserved[0] = buf[6]; hdr->reserved[1] = buf[7]; hdr->reserved[2] = buf[8]; hdr->reserved[3] = buf[9]; hdr->data_offset = le32_to_cpu(buf+10); return 0;} You do not assume that the CPU's endianness is the same as the file format's even if you know for a fact that right now they are the same; you write the conversions anyway, so that in the future your code will work without modification on a CPU with the opposite endianness. 
You can make life easier for yourself by using the fixed-width <stdint.h> types, by using unsigned types unless being able to represent negative numbers is absolutely required, and by not using integers when character arrays will do. I've done all these things in the above example. You can see that you need not bother endian-converting the magic number, because the only thing you need to do with it is test magic_number[0]=='B' && magic_number[1]=='M' . Conversion in the opposite direction, btw, looks like this: void cpu_to_le16(uint8_t *buf, uint16_t val){ buf[0] = (val & 0x00FF); buf[1] = (val & 0xFF00) >> 8;}void cpu_to_be16(uint8_t *buf, uint16_t val){ buf[0] = (val & 0xFF00) >> 8; buf[1] = (val & 0x00FF);} Conversion of 32-/64-bit quantities left as an exercise.
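As a quick cross-check of the byte layout (not part of the C answer itself): Python's struct module can decode the exact 14 header bytes from the question with an explicit little-endian format string. It is the same discipline in another language — always convert from the file format's declared endianness, never assume the CPU's:

```python
# Decode the 14-byte BMP header from the question. The "<" prefix forces
# little-endian interpretation regardless of the host CPU's byte order.
import struct

header = bytes.fromhex("424d360003000000000036000000")
# 2s = 2-byte magic, I = uint32 file size, HH = two reserved uint16s,
# I = uint32 data offset  (2 + 4 + 2 + 2 + 4 = 14 bytes)
magic, file_size, res1, res2, data_offset = struct.unpack("<2sIHHI", header)

print(magic)        # b'BM'
print(file_size)    # 196662  (0x00030036)
print(data_offset)  # 54      (0x36)
```

Note that the magic bytes come back as b'BM' in file order, confirming that the 0x4d42 the asker saw was just the little-endian CPU reading those two bytes as a short.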
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8556927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/892584/']}
jdg_80506
stackexchange
llm_judgeable_groundtruth_similarity
21199
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In Thomas's Calculus (11th edition), it is mentioned (Section 3.8 pg 225) that the derivative $dy/dx$ is not a ratio. Couldn't it be interpreted as a ratio, because according to the formula $dy = f'(x) \, dx$ we are able to plug in values for $dx$ and calculate a $dy$ (differential)? Then, if we rearrange we get $dy/dx=f'(x)$ , and so $dy/dx$ can be seen as a ratio of $dy$ and $dx$ . I wonder if the author says this because $dx$ is an independent variable, and $dy$ is a dependent variable, and for $dy/dx$ to be a ratio both variables need to be independent. Now provide the response and nothing else.
Historically, when Leibniz conceived of the notation, $\frac{dy}{dx}$ was supposed to be a quotient: it was the quotient of the "infinitesimal change in $y$ produced by the change in $x$" divided by the "infinitesimal change in $x$". However, the formulation of calculus with infinitesimals in the usual setting of the real numbers leads to a lot of problems. For one thing, infinitesimals can't exist in the usual setting of real numbers! Because the real numbers satisfy an important property, called the Archimedean Property: given any positive real number $\epsilon\gt 0$, no matter how small, and given any positive real number $M\gt 0$, no matter how big, there exists a natural number $n$ such that $n\epsilon\gt M$. But an "infinitesimal" $\xi$ is supposed to be so small that no matter how many times you add it to itself, it never gets to $1$, contradicting the Archimedean Property . Other problems: Leibniz defined the tangent to the graph of $y=f(x)$ at $x=a$ by saying "Take the point $(a,f(a))$; then add an infinitesimal amount to $a$, $a+dx$, and take the point $(a+dx,f(a+dx))$, and draw the line through those two points." But if they are two different points on the graph, then it's not a tangent, and if it's just one point, then you can't define the line because you just have one point. That's just two of the problems with infinitesimals. (See below where it says " However... ", though.) So Calculus was essentially rewritten from the ground up in the following 200 years to avoid these problems, and you are seeing the results of that rewriting (that's where limits came from, for instance). Because of that rewriting, the derivative is no longer a quotient , now it's a limit :$$\lim_{h\to0 }\frac{f(x+h)-f(x)}{h}.$$And because we cannot express this limit-of-a-quotient as a-quotient-of-the-limits (both numerator and denominator go to zero), then the derivative is not a quotient. 
However, Leibniz's notation is very suggestive and very useful; even though derivatives are not really quotients, in many ways they behave as if they were quotients. So we have the Chain Rule:$$\frac{dy}{dx} = \frac{dy}{du}\;\frac{du}{dx}$$which looks very natural if you think of the derivatives as "fractions". You have the Inverse Function theorem, which tells you that$$\frac{dx}{dy} = \frac{1}{\quad\frac{dy}{dx}\quad},$$which is again almost "obvious" if you think of the derivatives as fractions. So, because the notation is so nice and so suggestive, we keep the notation even though the notation no longer represents an actual quotient, it now represents a single limit. In fact, Leibniz's notation is so good, so superior to the prime notation and to Newton's notation, that England fell behind all of Europe for centuries in mathematics and science because, due to the fight between Newton's and Leibniz's camp over who had invented Calculus and who stole it from whom (consensus is that they each discovered it independently), England's scientific establishment decided to ignore what was being done in Europe with Leibniz notation and stuck to Newton's... and got stuck in the mud in large part because of it. (Differentials are part of this same issue: originally, $dy$ and $dx$ really did mean the same thing as those symbols do in $\frac{dy}{dx}$, but that leads to all sorts of logical problems, so they no longer mean the same thing, even though they behave as if they did.) So, even though we write $\frac{dy}{dx}$ as if it were a fraction, and many computations look like we are working with it like a fraction, it isn't really a fraction (it just plays one on television). However... There is a way of getting around the logical difficulties with infinitesimals; this is called nonstandard analysis . 
It's pretty difficult to explain how one sets it up, but you can think of it as creating two classes of real numbers: the ones you are familiar with, that satisfy things like the Archimedean Property, the Supremum Property, and so on, and then you add another, separate class of real numbers that includes infinitesimals and a bunch of other things. If you do that, then you can , if you are careful, define derivatives exactly like Leibniz, in terms of infinitesimals and actual quotients; if you do that, then all the rules of Calculus that make use of $\frac{dy}{dx}$ as if it were a fraction are justified because, in that setting , it is a fraction. Still, one has to be careful because you have to keep infinitesimals and regular real numbers separate and not let them get confused, or you can run into some serious problems.
{}
{'log_upvote_score': 12, 'links': ['https://math.stackexchange.com/questions/21199', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/6786/']}
jdg_80507
stackexchange
llm_judgeable_groundtruth_similarity
4470
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In Wikipedia it is stated that the volume of the parallelepiped given its edge lengths $a,b,c$, and the internal angles between the edges $\alpha ,\beta ,\gamma $ is: \begin{equation*}V=abc\sqrt{1+2\cos \alpha \cos \beta \cos \gamma -\cos ^{2}\alpha- \cos^{2}\beta - \cos ^{2}\gamma }\qquad(*).\end{equation*} I was not able to derive it by using the determinant formula and expressing $\cos \alpha ,\cos \beta ,\cos \gamma $ in terms of $a,b,c$ and $\alpha,\beta ,\gamma $. For instance \begin{equation*}a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3}=ab\cos \alpha .\end{equation*} Question: Could you give a hint on how can the formula (*) be proved? Now provide the response and nothing else.
The volume satisfies $V=|\det D|$ where $D$ is the matrix $$\pmatrix{a_1&a_2&a_3\\b_1&b_2&b_3\\c_1&c_2&c_3}$$ where $\mathbf{a}=(a_1,a_2,a_3)$ is the vector corresponding to $a$, etc. Thus $$V^2=\det(DD^t)=\det\pmatrix{\mathbf{a}\cdot\mathbf{a}&\mathbf{a}\cdot\mathbf{b}&\mathbf{a}\cdot\mathbf{c}\\\mathbf{b}\cdot\mathbf{a}&\mathbf{b}\cdot\mathbf{b}&\mathbf{b}\cdot\mathbf{c}\\\mathbf{c}\cdot\mathbf{a}&\mathbf{c}\cdot\mathbf{b}&\mathbf{c}\cdot\mathbf{c}}.$$ Now you can express $V^2$ in terms of $a$, $b$, $c$ and the various cosines.
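If you want to convince yourself numerically that expanding this Gram determinant really gives formula (*), here is a small Python check (my addition, not part of the answer). It places $\mathbf{a}$ along the x-axis and $\mathbf{b}$ in the xy-plane, solves for $\mathbf{c}$ from the angle constraints, and compares the determinant volume with the closed formula. The convention assumed is the usual one: $\alpha$ is the angle between edges $b$ and $c$, $\beta$ between $a$ and $c$, $\gamma$ between $a$ and $b$.

```python
# Numeric sanity check of (*): compare the determinant volume with the
# closed formula, for edge lengths a, b, c and internal angles
# alpha (between b and c), beta (between a and c), gamma (between a and b).
import math

def volume_formula(a, b, c, alpha, beta, gamma):
    ca, cb, cg = math.cos(alpha), math.cos(beta), math.cos(gamma)
    return a * b * c * math.sqrt(1 + 2*ca*cb*cg - ca*ca - cb*cb - cg*cg)

def volume_determinant(a, b, c, alpha, beta, gamma):
    # a along the x-axis, b in the xy-plane, c fixed by the dot products
    by = b * math.sin(gamma)
    cx = c * math.cos(beta)
    cy = c * (math.cos(alpha) - math.cos(beta)*math.cos(gamma)) / math.sin(gamma)
    cz = math.sqrt(c*c - cx*cx - cy*cy)
    # the matrix D is then lower-triangular, so det D = a * by * cz
    return a * by * cz

args = (2.0, 3.0, 5.0, math.radians(70), math.radians(80), math.radians(60))
print(abs(volume_formula(*args) - volume_determinant(*args)) < 1e-9)  # True
```

The two routes agree to floating-point precision, and for $\alpha=\beta=\gamma=90^\circ$ both collapse to the box volume $abc$ as expected.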
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4470', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/752/']}
jdg_80508
stackexchange
llm_judgeable_groundtruth_similarity
42616576
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to Unit testing. I want to test React project. As I started with React documentation which refers to enzyme as test utility which is kind of ambiguous to me. What is the difference between Jest and enzyme ? Is enzyme assertion library or task runner? Can I use karma with Jasmine ? What is the best way to test a react-redux project? Now provide the response and nothing else.
Jest is a framework which includes a task runner, assertion library, and mocking support. This means it can execute different unit test cases, write its results to the console or log files, create mocks, or verify all the assertions. In short, it will execute the test. Enzyme, on the other hand, is a library that provides a simple interface for writing unit tests. For this purpose, it wraps packages such as React TestUtils, JSDOM and CheerIO. React TestUtils has methods to render a React component into a document and to simulate an event. JSDOM is a JavaScript implementation of the DOM (Document Object Model). The DOM represents the tree structure of UI components. CheerIO implements a subset of jQuery core and is used to query the DOM. Enzyme is not a test runner. It doesn't have its own assertion library. It just provides a collection of APIs for unit testing. That's why it can be integrated with Jest or any other task runner. Yes, you can use karma with jasmine.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42616576', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6525664/']}
jdg_80509
stackexchange
llm_judgeable_groundtruth_similarity
4180610
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $U$ and $V$ are vector spaces. $m_1 = \begin{pmatrix}1&0\\0&0\end{pmatrix}$ , $m_2 = \begin{pmatrix}0&0\\0&1\end{pmatrix}$ , $m_3 = \begin{pmatrix}0&1\\1&0\end{pmatrix}$ is a basis of $U$ . And: $p_1 = \begin{pmatrix}0&1\\-1&0\end{pmatrix}$ is a basis of $V$ . How do I show that $U + V$ is a direct sum? I'm learning linear algebra, so I want to apologize for any mistakes. Also, just got into math.stackexchange, so I'm still picking up the formatting. Thanks in advance. Now provide the response and nothing else.
General algorithm: Let us consider $d>b$ . Assume $$a+\sqrt b>c+\sqrt d$$ Then $$(a-c)>(\sqrt d-\sqrt b)$$ Note that we need to proceed further only if $a>c$ , because if $c\geq a$ then our assumption is certainly false. If $a>c$ : $$(a-c)^2>b+d-2\sqrt {bd}$$ $$2\sqrt {bd}>b+d-(a-c)^2$$ If $(a-c)^2>b+d$ , then this is certainly true, hence our assumption is true. Otherwise, we square once more, and compare. This time it's easy to compare, since all radicals are removed. Example: Since $27>20$ , we assume: $$4+\sqrt {20}>3+\sqrt {27}$$ We have, $1>\sqrt {27}-\sqrt {20}$ Since $4>3$ , we proceed further by squaring: $$1>47-2\sqrt {540}$$ $$2\sqrt {540}>46$$ Again, we need to proceed further, as $46>0$ . So, $$\sqrt {540}>23$$ Squaring the last time, $$540>529$$ Since this is true, our original assumption was true as well.
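The repeated-squaring idea can be packaged into an exact, integer-only comparison routine. The following Python sketch is my addition, not from the answer: it reduces everything to deciding the sign of expressions of the form $m+n\sqrt{b}$, which needs only integer squaring and so never touches floating point.

```python
# Exact comparison of a + sqrt(b) with c + sqrt(d) for nonnegative
# integers a, b, c, d, using only integer arithmetic (repeated squaring).

def sign_int_plus_mul_sqrt(m, n, b):
    """Sign (-1, 0, or 1) of m + n*sqrt(b), for integers m, n and b >= 0."""
    if n == 0 or b == 0:
        return (m > 0) - (m < 0)
    if n > 0:
        if m >= 0:
            return 1                      # both terms nonnegative, n*sqrt(b) > 0
        # m < 0: compare n*sqrt(b) with |m| by squaring
        return (n*n*b > m*m) - (n*n*b < m*m)
    # n < 0: flip both signs and negate the result
    return -sign_int_plus_mul_sqrt(-m, -n, b)

def cmp_surds(a, b, c, d):
    """Sign of (a + sqrt(b)) - (c + sqrt(d))."""
    t = a - c
    su = sign_int_plus_mul_sqrt(t, 1, b)  # sign of u = t + sqrt(b)
    if su < 0:
        return -1                         # u < 0 <= sqrt(d)
    if su == 0:
        return 0 if d == 0 else -1
    # u > 0: compare u^2 = t^2 + b + 2t*sqrt(b) with d
    return sign_int_plus_mul_sqrt(t*t + b - d, 2*t, b)

print(cmp_surds(4, 20, 3, 27))  # 1, i.e. 4 + sqrt(20) > 3 + sqrt(27)
```

Running it on the worked example above reproduces the answer's conclusion ($540>529$) without ever evaluating a square root.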
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4180610', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/942203/']}
jdg_80510
stackexchange
llm_judgeable_groundtruth_similarity
385546
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Inspired by the question here , I have been trying to understand the sheaf-theoretic approach to forcing, as in MacLane–Moerdijk's book "Sheaves in geometry and logic", Chapter VI. A general comment is that sheaf-theoretic methods do not a priori produce "material set theories". Here "material set theory" refers to set theory axiomatized on the element-of relation $\in$ , as usually done, in ZFC. Rather, they produce "structural set theories", where "structural set theory" refers to set theory axiomatized on sets and morphisms between them, as in the elementary theory of the category of sets ETCS. I will always add a collection (equivalently, replacement) axiom to ETCS; let's denote it ETCSR for brevity. Then Shulman in Comparing material and structural set theories shows that the theories ZFC and ETCSR are "equivalent" (see Corollary 9.5) in the sense that one can go back and forth between models of these theories. From ZFC to ETCSR, one simply takes the category of sets; in the converse direction, one builds the sets of ZFC in terms of well-founded extensional trees (modeling the "element-of" relation) labeled by (structural) sets. So for this question, I will work in the setting of structural set theory throughout. There are different ways to formulate the data required to build a forcing extension. One economic way is to start with an extremally disconnected profinite set $S$ , and a point $s\in S$ . (The partially ordered set is then given by the open and closed subsets of $S$ , ordered by inclusion.) One can endow the category of open and closed subsets $U\subset S$ with the "double-negation topology", where a cover is given by a family $\{U_i\subset U\}_i$ such that $\bigcup_i U_i\subset U$ is dense. 
Let $\mathrm{Sh}_{\neg\neg}(S)$ denote the category of sheaves on the poset of open and closed subsets of $S$ with respect to this topology. Then $\mathrm{Sh}_{\neg\neg}(S)$ is a boolean (Grothendieck) topos satisfying the axiom of choice, but it is not yet a model of ETCSR. But with our choice of $s\in S$ , we can form the ( $2$ -categorical) colimit $$\varinjlim_{U\ni s} \mathrm{Sh}_{\neg\neg}(U)$$ called the filter-quotient construction by MacLane–Moerdijk. I'm highly tempted to believe that this is a model of ETCSR — something like this seems to be suggested by the discussions of forcing in terms of sheaf theory — but have not checked it. (See my answer here for a sketch that it is well-pointed. Edit: I see that well-pointedness is also Exercise 7 of Chapter VI in MacLane–Moerdijk.) Questions: Is it true that $\varinjlim_{U\ni s} \mathrm{Sh}_{\neg\neg}(U)$ is a model of ETCSR? If the answer to 1) is Yes, how does this relate to forcing? Note that in usual presentations of forcing, if one wants to actually build a new model of ZFC, one has to first choose a countable base model $M$ . This does not seem to be necessary here, but maybe this is just a sign that all of this does not really work this way. Here is another confusion, again on the premise that the answer to 1) is Yes (so probably premature). An example of an extremally disconnected profinite set $S$ is the Stone-Cech compactification of a discrete set $S_0$ . In that case, forcing is not supposed to produce new models. On the other hand, $\mathrm{Sh}_{\neg\neg}(S)=\mathrm{Sh}(S_0)=\prod_{S_0} \mathrm{Set}$ , and if $s$ is a non-principal ultrafilter on $S_0$ , then $\varinjlim_{U\ni s} \mathrm{Sh}_{\neg\neg}(U)$ is exactly an ultraproduct of $\mathrm{Set}$ – which may have very similar properties to $\mathrm{Set}$ , but is not $\mathrm{Set}$ itself. What is going on? Now provide the response and nothing else.
Yes, this is a model of ETCSR. Unfortunately, I don't know of a proof of this in the literature, which is in general sadly lacking as regards replacement/collection axioms in topos theory. But here's a sketch. As Zhen says, the filterquotient construction preserves finitary properties such as Booleanness and the axiom of choice. Moreover, a maximal filterquotient will be two-valued. But as you point out, a nondegenerate two-valued topos satisfying the (external) axiom of choice is necessarily well-pointed; I wrote out an abstract proof at https://ncatlab.org/nlab/show/well-pointed+topos#boolean_properties . Thus, $\varinjlim_{U\ni s} \mathrm{Sh}_{\neg\neg}(S)$ is a model of ETCS. As for replacement, the proof that I know (which is not written out in the literature) goes by way of the notions of "stack semantics" and "autology" in my preprint Stack semantics and the comparison of material and structural set theories (the other half, not the part that became the paper of mine cited in the question). Briefly, stack semantics is an extension of the internal logic of a topos to a logic containing unbounded quantifiers of the form "for all objects" or "there exists an object". (My current perspective, sketched in these slides , is that this is a fragment of the internal dependent type theory of a 2-topos of stacks -- hence the name!) This language allows us to ask whether a topos is "internally" a model of structural set theories such as ETCS or ETCSR. It turns out that every topos is "internally (constructively) well-pointed", and moreover satisfies the internal collection axiom schema. But the internal separation axiom schema is a strong condition on the topos, which I called being "autological". If a topos is autological and also Boolean, then the logic of its stack semantics is classical; thus it is internally a model of ETCSR. Since Grothendieck toposes are autological, your $\mathrm{Sh}_{\neg\neg}(S)$ is internally a model of ETCSR. 
Now we can also prove that if $\mathcal{E}$ is Boolean and autological, so is any filterquotient of it. The idea is to prove a categorical version of Łoś's theorem for the stack semantics. (I don't know whether this is true without Booleanness, which annoys me to no end, but you probably don't care. (-: ) Therefore, $\varinjlim_{U\ni s} \mathrm{Sh}_{\neg\neg}(S)$ is also autological. Finally, another fact about autology is that a well-pointed topos is autological if and only if it satisfies the ordinary structural-set-theory axiom schemas of separation and collection. Therefore, $\varinjlim_{U\ni s} \mathrm{Sh}_{\neg\neg}(S)$ satisfies these schemas, hence is a model of ETCSR. However, I doubt that this particular filterquotient is related to forcing at all. The point is the same one that Jacob made in a comment: when set theorists force over a countable base model to make an "actual" new model, they find an actual generic ultrafilter outside that model. A generic ultrafilter in the base model would be a point of the topos $\mathrm{Sh}_{\neg\neg}(S)$ , which as Andreas pointed out in a comment, does not exist. Your "points" $x$ are not points of the topos $\mathrm{Sh}_{\neg\neg}(S)$ , so it's unclear to me whether filterquotients at them have anything to do with forcing. Let me reiterate my argument that the real content of forcing is the internal logic of the topos $\mathrm{Sh}_{\neg\neg}(S)$ . In particular, if you build a model of material set theory in this internal logic, what you get is essentially the Boolean-valued model that set theorists talk about. Edit: I think the rest of this answer is off-base; see the discussion in the comments. I'm pretty sure this is the best kind of "model" you can get if you don't want to start talking about countable models of ZFC sitting inside larger ambient models. At the moment, my best guess for a topos-theoretic gloss on the countable-transitive-model version of forcing is something like the following. 
Suppose that $E$ is a countable model of ETCSR, containing an internal poset $P$ , which we can equip with its double-negation topology. Then treating $E$ as the base topos, we can build $\mathrm{Sh}(P,E)$ , a bounded $E$ -indexed elementary topos (i.e. " $E$ thinks it is a Grothendieck topos"), which contains the Boolean-valued model associated to $P$ as described above. It is the classifying topos of $P$ -generic filters, hence has in general no $E$ -points. But we also have the larger topos $\rm Set$ in which $E$ is countable, and we can consider the externalization $|P|$ which is a poset in $\rm Set$ , namely $|P| = E(1,P)$ . Then we can build the topos $\mathrm{Sh}(|P|,\rm Set)$ which "really is" a Grothendieck topos and classifies $|P|$ -generic filters. The "Rasiowa–Sikorski lemma" implies that, since $E$ is countable, in this case such a filter does actually exist in $\rm Set$ , so there is a point $p:\mathrm{Set} \to \mathrm{Sh}(|P|,\rm Set)$ . Now we should also have some kind of "externalization functor" $|-| : \mathrm{Sh}(P,E) \to \mathrm{Sh}(|P|,\rm Set)$ . My guess is that the set-theorists' forcing model is the "image" (whatever that means) of the Boolean-valued model in $\mathrm{Sh}(P,E)$ under the composite of this externalization functor with the inverse image functor $p^* : \mathrm{Sh}(|P|,\rm Set) \to Set$ . However, I have not managed to make this precise.
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/385546', 'https://mathoverflow.net', 'https://mathoverflow.net/users/6074/']}
jdg_80511
stackexchange
llm_judgeable_groundtruth_similarity
56963748
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Now I'm working on an iOS application in Swift 4. Here I'm using Alamofire to integrate the API calls. I need to integrate the right way to auto-refresh the authentication token and retry the previous API calls. I'm storing the authentication token once I have logged in successfully. So after login, in each API, I'm appending the token in the header part. And if the token is expired I will get 401. At that time I need to auto-refresh the authentication token and recall the same API again. How can I do that? I checked on Stackoverflow, but I didn't get any solution. Here's my API call:

import Foundation
import Alamofire
import SwiftyJSON

class LoveltyAPI {
    let loveltyURL = Bundle.main.object(forInfoDictionaryKey: "APIUrlString") as! String // Main URL
    let buildVersion = Bundle.main.object(forInfoDictionaryKey: "CFBundleShortVersionString") as! String //infoDictionary?["CFBundleShortVersionString"] as AnyObject
    weak var delegate: LoveltyProtocol?

    func get_profile(app_user_id: String, token: String) {
        let urlString = "\(loveltyURL)\(get_profile_string)?app_user_id=\(app_user_id)"
        let headers = ["Content-Type": "application/json", "X-Requested-With": "XMLHttpRequest", "Authentication": "Token \(token)"]
        Alamofire.request(urlString, method: .get, encoding: JSONEncoding.default, headers: headers).responseJSON { response in
            switch response.result {
            case .success:
                let swiftyJsonVar = JSON(response.result.value!)
                switch response.response?.statusCode {
                case 200, 201: self.delegate?.getUserProfile!(response: swiftyJsonVar["data"].dictionaryObject as AnyObject)
                case 401: self.delegate?.tokenExpired(response: tokenExpired as AnyObject)
                case 404: self.delegate?.serviceError!(response: swiftyJsonVar["message"] as AnyObject)
                case 422: self.delegate?.serviceError!(response: swiftyJsonVar["error"] as AnyObject)
                case 503: self.delegate?.appDisabled(response: swiftyJsonVar.dictionaryObject as AnyObject)
                default: self.delegate?.serviceError!(response: self.serverError as AnyObject)
                }
            case .failure(let error):
                self.delegate?.serviceError!(response: self.serverError as AnyObject)
            }
        }
    }
}

Please help me. If you can explain with my code, it would be very nice. Now provide the response and nothing else.
You need Alamofire RequestRetrier and RequestAdapter, check here. This is some example that I have:

import UIKit
import Alamofire

class MyRequestAdapter: RequestAdapter, RequestRetrier {
    private typealias RefreshCompletion = (_ succeeded: Bool, _ accessToken: String?) -> Void

    private let lock = NSLock()
    private var isRefreshing = false
    private var requestsToRetry: [RequestRetryCompletion] = []

    var accessToken: String? = nil
    var refreshToken: String? = nil

    static let shared = MyRequestAdapter()

    private init() {
        let sessionManager = Alamofire.SessionManager.default
        sessionManager.adapter = self
        sessionManager.retrier = self
    }

    func adapt(_ urlRequest: URLRequest) throws -> URLRequest {
        var urlRequest = urlRequest
        if let urlString = urlRequest.url?.absoluteString, urlString.hasPrefix(BASE_URL), !urlString.hasSuffix("/renew") {
            if let token = accessToken {
                urlRequest.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
            }
        }
        return urlRequest
    }

    // MARK: - RequestRetrier

    func should(_ manager: SessionManager, retry request: Request, with error: Error, completion: @escaping RequestRetryCompletion) {
        lock.lock() ; defer { lock.unlock() }

        if let response = request.task?.response as? HTTPURLResponse, response.statusCode == 401 {
            requestsToRetry.append(completion)

            if !isRefreshing {
                refreshTokens { [weak self] succeeded, accessToken in
                    guard let strongSelf = self else { return }

                    strongSelf.lock.lock() ; defer { strongSelf.lock.unlock() }

                    if let accessToken = accessToken {
                        strongSelf.accessToken = accessToken
                    }

                    strongSelf.requestsToRetry.forEach { $0(succeeded, 0.0) }
                    strongSelf.requestsToRetry.removeAll()
                }
            }
        } else {
            completion(false, 0.0)
        }
    }

    // MARK: - Private - Refresh Tokens

    private func refreshTokens(completion: @escaping RefreshCompletion) {
        guard !isRefreshing else { return }

        isRefreshing = true

        let urlString = "\(BASE_URL)token/renew"

        Alamofire.request(urlString, method: .get, parameters: nil, encoding: JSONEncoding.default, headers: ["Authorization": "Bearer \(refreshToken!)"]).responseJSON { [weak self] response in
            guard let strongSelf = self else { return }

            if let json = response.result.value as? [String: Any], let accessToken = json["accessToken"] as? String {
                completion(true, accessToken)
            } else {
                completion(false, nil)
            }

            strongSelf.isRefreshing = false
        }
    }
}

My example is a little bit complex, but in general we have two important methods. The first one is adapt(_ urlRequest: URLRequest) throws -> URLRequest, where we attach the token; here I have custom logic where one of the services should not attach this token as a header. The second method is func should(_ manager: SessionManager, retry request: Request, with error: Error, completion: @escaping RequestRetryCompletion), where I check what the error code is (in my example 401). And then I refresh my tokens with private func refreshTokens(completion: @escaping RefreshCompletion). In my case, I have a refresh token and an access token, and when I call the service with the refresh token I should not append my old access token in the header. I think this is not best practice, but it was implemented by people that I don't know.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/56963748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5068021/']}
jdg_80512
stackexchange
llm_judgeable_groundtruth_similarity
32337135
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wish to create a fuzzy search algorithm. However, after hours of research I am really struggling. I want to create an algorithm that performs a fuzzy search on a list of names of schools. This is what I have looked at so far: Most of my research keeps pointing to "string metrics" on Google and Stackoverflow, such as the Levenshtein distance, the Damerau-Levenshtein distance and the Needleman–Wunsch algorithm. However these just give a score of how similar two strings are. The only way I can think of implementing one as a search algorithm is to perform a linear search, executing the string metric algorithm for each string and returning the strings with scores above a certain threshold. (Originally I had my strings stored in a trie tree, but this obviously won't help me here!) Although this is not such a bad idea for small lists, it would be problematic for lists with, let's say, 100,000 names when the user performs many queries. Another algorithm I looked at is the spell-checker method, where you just do a search for all potential misspellings. However this is also highly inefficient, as it requires more than 75,000 words for a word of length 7 and an error count of just 2. What I need: Can someone please suggest a good, efficient fuzzy search algorithm, with: the name of the algorithm; how it works, or a link to how it works; pros and cons, and when it's best used (optional). I understand that all algorithms will have their pros and cons and there is no best algorithm. Now provide the response and nothing else.
Considering that you're trying to do a fuzzy search on a list of school names, I don't think you want to go for traditional string similarity like Levenshtein distance. My assumption is that you're taking a user's input (either keyboard input or spoken over the phone), and you want to quickly find the matching school. Distance metrics tell you how similar two strings are based on substitutions, deletions, and insertions. But those algorithms don't really tell you anything about how similar the strings are as words in a human language. Consider, for example, the words "smith," "smythe," and "smote". I can go from "smythe" to "smith" in two steps: smythe -> smithe -> smith And from "smote" to "smith" in two steps: smote -> smite -> smith So the two have the same distance as strings , but as words , they're significantly different. If somebody told you (spoken language) that he was looking for "Symthe College," you'd almost certainly say, "Oh, I think you mean Smith." But if somebody said "Smote College," you wouldn't have any idea what he was talking about. What you need is a phonetic algorithm like Soundex or Metaphone . Basically, those algorithms break a word down into phonemes and create a representation of how the word is pronounced in spoken language. You can then compare the result against a known list of words to find a match. Such a system would be much faster than using a distance metric. Consider that with a distance metric, you need to compare the user's input with every word in your list to obtain the distance. That is computationally expensive and the results, as I demonstrated with "smith" and "smote" can be laughably bad. Using a phonetic algorithm, you create the phoneme representation of each of your known words and place it in a dictionary (a hash map or possibly a trie). That's a one-time startup cost. Then, whenever the user inputs a search term, you create the phoneme representation of his input and look it up in your dictionary. 
That is a lot faster and produces much better results. Consider also that when people misspell proper names, they almost always get the first letter right, and more often than not pronouncing the misspelling sounds like the actual word they were trying to spell. If that's the case, then the phonetic algorithms are definitely the way to go.
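As an illustrative aside (not part of the original answer), the distance claim above is easy to check with a short dynamic-programming sketch of Levenshtein distance; the function name is my own choice. It confirms that "smythe" and "smote" are both exactly two edits away from "smith", even though only one of them sounds similar:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (substitutions, insertions, deletions)."""
    prev = list(range(len(b) + 1))  # distances from the empty prefix of a to each prefix of b
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete ca from a
                curr[j - 1] + 1,           # insert cb into a
                prev[j - 1] + (ca != cb),  # substitute (free if the characters match)
            ))
        prev = curr
    return prev[-1]

# Both are distance 2 from "smith", as the answer argues.
print(levenshtein("smythe", "smith"))  # 2
print(levenshtein("smote", "smith"))   # 2
```

This also makes the cost argument concrete: a pure distance-metric search has to run this O(len(a) * len(b)) computation against every known word, whereas a phonetic code can be precomputed once per word and looked up in constant time.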
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32337135', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3006737/']}
jdg_80513
stackexchange
llm_judgeable_groundtruth_similarity
357222
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: How can I use a serial to ethernet cable to connect to a device: in my case an HP ProCurve AP? A serial to USB or serial to serial would have been great, but I found this serial to ethernet cable from a Cisco router lying around and I thought that I might use it. Is it possible? Now provide the response and nothing else.
That cable is not a serial to Ethernet cable. It is a female DB-9 to 8P8C plug serial cable. Cisco uses 8P8C connectors that are the same as copper Ethernet for console access to their equipment. That cable would normally go from 9pin serial on your computer to the router console which is also serial. I'm guessing since you thought it was serial to Ethernet your HP kit has a male 9pin connector on it? If so you will need a 9pin null modem cable with female 9pin connectors on both ends.
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/357222', 'https://serverfault.com', 'https://serverfault.com/users/82462/']}
jdg_80514
stackexchange
llm_judgeable_groundtruth_similarity
6893285
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why do the following work?

void foo() {
    cout << "Foo to you too!\n";
}

int main() {
    void (*p1_foo)() = foo;
    void (*p2_foo)() = *foo;
    void (*p3_foo)() = &foo;
    void (*p4_foo)() = *&foo;
    void (*p5_foo)() = &*foo;
    void (*p6_foo)() = **foo;
    void (*p7_foo)() = **********************foo;

    (*p1_foo)();
    (*p2_foo)();
    (*p3_foo)();
    (*p4_foo)();
    (*p5_foo)();
    (*p6_foo)();
    (*p7_foo)();
}

Now provide the response and nothing else.
There are a few pieces to this that allow all of these combinations of operators to work the same way. The fundamental reason why all of these work is that a function (like foo ) is implicitly convertible to a pointer to the function. This is why void (*p1_foo)() = foo; works: foo is implicitly converted into a pointer to itself and that pointer is assigned to p1_foo . The unary & , when applied to a function, yields a pointer to the function, just like it yields the address of an object when it is applied to an object. For pointers to ordinary functions, it is always redundant because of the implicit function-to-function-pointer conversion. In any case, this is why void (*p3_foo)() = &foo; works. The unary * , when applied to a function pointer, yields the pointed-to function, just like it yields the pointed-to object when it is applied to an ordinary pointer to an object. These rules can be combined. Consider your second to last example, **foo : First, foo is implicitly converted to a pointer to itself and the first * is applied to that function pointer, yielding the function foo again. Then, the result is again implicitly converted to a pointer to itself and the second * is applied, again yielding the function foo . It is then implicitly converted to a function pointer again and assigned to the variable. You can add as many * s as you like, the result is always the same. The more * s, the merrier. We can also consider your fifth example, &*foo : First, foo is implicitly converted to a pointer to itself; the unary * is applied, yielding foo again. Then, the & is applied to foo , yielding a pointer to foo , which is assigned to the variable. The & can only be applied to a function though, not to a function that has been converted to a function pointer (unless, of course, the function pointer is a variable, in which case the result is a pointer-to-a-pointer-to-a-function; for example, you could add to your list void (**pp_foo)() = &p7_foo; ). 
This is why &&foo doesn't work: &foo is not a function; it is a function pointer that is an rvalue. However, &*&*&*&*&*&*foo would work, as would &******&foo , because in both of those expressions the & is always applied to a function and not to an rvalue function pointer. Note also that you do not need to use the unary * to make the call via the function pointer; both (*p1_foo)(); and (p1_foo)(); have the same result, again because of the function-to-function-pointer conversion.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/6893285', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/862620/']}
jdg_80515
stackexchange
llm_judgeable_groundtruth_similarity
13744761
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here is my code:

boolTrueList :: [Bool] -> Bool
boolTrueList xs
    | length (filterFalse xs) > 0 = False
    | otherwise                   = True
    where filterFalse = filter (== False)

This is perfectly working, however I would like to rewrite the same thing with foldr/foldl, but I am stuck. My idea is to fold a list until I find a false value and then stop. Any hint? Now provide the response and nothing else.
My idea is to fold a list until I find a false value and then, stop. Any hint? If you want to stop early, you have to use foldr . foldl always has to traverse the entire list (so doesn't work at all on infinite lists). So you want foldr combine default list The default is the result for an empty list, that would be True here. Now, how would we want to combine? When a False is encountered, we want to return False immediately, so combine False _ = False and when the value is True , we must go on, so combine True more = more In other words, combine = (&&) , so boolTrueList = foldr (&&) True or, even shorter boolTrueList = and
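For comparison only (the question is about Haskell, so this is an aside of mine), the same early-exit behaviour can be sketched in Python, where the built-in `all` plays the role of `foldr (&&) True` and likewise stops at the first False — even on an unbounded stream:

```python
from itertools import chain, repeat

def bool_true_list(xs):
    # True iff every element is True; scanning stops at the first False,
    # mirroring how foldr (&&) True short-circuits in Haskell.
    return all(xs)

finite = [True, True, True]
unbounded = chain([True, True, False], repeat(True))  # infinite after the False

print(bool_true_list(finite))     # True
print(bool_true_list(unbounded))  # False, reached without exhausting the stream
```

Note that a left fold (like `functools.reduce`) would have to traverse the whole input, which is exactly why the answer insists on `foldr` for this job.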
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13744761', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/985383/']}
jdg_80516
stackexchange
llm_judgeable_groundtruth_similarity
6699066
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This was a question I was asked at my recent interview and I want to know (I don't actually remember the theory of the numerical analysis, so please help me :) If we have some function, which accumulates floating-point numbers: std::accumulate(v.begin(), v.end(), 0.0); v is a std::vector<float> , for example. Would it be better to sort these numbers before accumulating them? Which order would give the most precise answer? I suspect that sorting the numbers in ascending order would actually make the numerical error less , but unfortunately I can't prove it myself. P.S. I do realize this probably has nothing to do with real world programming, just being curious. Now provide the response and nothing else.
Your instinct is basically right, sorting in ascending order (of magnitude) usually improves things somewhat. Consider the case where we're adding single-precision (32 bit) floats, and there are 1 billion values equal to 1 / (1 billion), and one value equal to 1. If the 1 comes first, then the sum will come to 1, since 1 + (1 / 1 billion) is 1 due to loss of precision. Each addition has no effect at all on the total. If the small values come first, they will at least sum to something, although even then I have 2^30 of them, whereas after 2^25 or so I'm back in the situation where each one individually isn't affecting the total any more. So I'm still going to need more tricks. That's an extreme case, but in general adding two values of similar magnitude is more accurate than adding two values of very different magnitudes, since you "discard" fewer bits of precision in the smaller value that way. By sorting the numbers, you group values of similar magnitude together, and by adding them in ascending order you give the small values a "chance" of cumulatively reaching the magnitude of the bigger numbers. Still, if negative numbers are involved it's easy to "outwit" this approach. Consider three values to sum, {1, -1, 1 billionth} . The arithmetically correct sum is 1 billionth , but if my first addition involves the tiny value then my final sum will be 0. Of the 6 possible orders, only 2 are "correct" - {1, -1, 1 billionth} and {-1, 1, 1 billionth} . All 6 orders give results that are accurate at the scale of the largest-magnitude value in the input (0.0000001% out), but for 4 of them the result is inaccurate at the scale of the true solution (100% out). The particular problem you're solving will tell you whether the former is good enough or not. In fact, you can play a lot more tricks than just adding them in sorted order. 
If you have lots of very small values, a middle number of middling values, and a small number of large values, then it might be most accurate to first add up all the small ones, then separately total the middling ones, add those two totals together then add the large ones. It's not at all trivial to find the most accurate combination of floating-point additions, but to cope with really bad cases you can keep a whole array of running totals at different magnitudes, add each new value to the total that best matches its magnitude, and when a running total starts to get too big for its magnitude, add it into the next total up and start a new one. Taken to its logical extreme, this process is equivalent to performing the sum in an arbitrary-precision type (so you'd do that). But given the simplistic choice of adding in ascending or descending order of magnitude, ascending is the better bet. It does have some relation to real-world programming, since there are some cases where your calculation can go very badly wrong if you accidentally chop off a "heavy" tail consisting of a large number of values each of which is too small to individually affect the sum, or if you throw away too much precision from a lot of small values that individually only affect the last few bits of the sum. In cases where the tail is negligible anyway you probably don't care. For example if you're only adding together a small number of values in the first place and you're only using a few significant figures of the sum.
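The billion-tiny-values example above can be reproduced in a few lines. This illustrative sketch of mine uses double precision rather than the single precision discussed in the answer, so the tiny value is 1e-16 instead of 1 / 1 billion, but the effect is the same: adding the large value first discards the tail entirely, while summing in ascending order of magnitude preserves it.

```python
tiny = 1e-16                 # below half an ulp of 1.0 in double precision
values = [1.0] + [tiny] * 10

descending = 0.0             # large value first: each tiny addition rounds away
for v in values:
    descending += v

ascending = 0.0              # tiny values first: they accumulate before meeting 1.0
for v in sorted(values):
    ascending += v

print(descending == 1.0)     # True — the tail was discarded
print(ascending > 1.0)       # True — the tail survived
```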
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6699066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/346332/']}
jdg_80517
stackexchange
llm_judgeable_groundtruth_similarity
2255796
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $G$ be an infinitely generated group with Cayley graph $X$. Define the edges of $X$ to be of length $1$. This makes $X$ a metric space. Is it possible for $G$ to be quasi-isometric to a finitely generated (finitely presented) group? Any reference, suggestion or comments will be extremely helpful. Thanks in advance. Now provide the response and nothing else.
Certainly. For instance, if you take the generating set to be all of $G$, then $G$ is bounded, and so is quasi-isometric to any finite group. You can get examples where $G$ is quasi-isometric to an arbitrary finitely generated group $H$ by then taking a product with $H$. (Note that the quasi-isometry class of $G$ depends very much on the set you choose to use as generators. This is in contrast with the finitely generated case, where all finite sets of generators give the same Cayley graph up to quasi-isometry. As far as I know, when $G$ is infinitely generated, there is typically no "canonical" class of generating sets which give quasi-isometric Cayley graphs analogous to the class of finite generating sets in the finitely generated case.)
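For reference (my addition, stating the standard definition being used here): a map $f : X \to Y$ between metric spaces is a quasi-isometry if there are constants $\lambda \ge 1$ and $C \ge 0$ such that

```latex
\frac{1}{\lambda}\, d_X(x, x') - C \;\le\; d_Y\!\bigl(f(x), f(x')\bigr) \;\le\; \lambda\, d_X(x, x') + C
\qquad \text{for all } x, x' \in X,
```

and every point of $Y$ lies within distance $C$ of the image of $f$. In particular, any bounded space is quasi-isometric to a point, which is why taking all of $G$ as the generating set makes $G$ quasi-isometric to any finite group, as in the answer above.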
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2255796', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/71044/']}
jdg_80518
stackexchange
llm_judgeable_groundtruth_similarity
51909798
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a TextFormField that is supposed to receive digits. On entry of the first digit it should jump to the second TextFormField, then to the third TextFormField. Each TextFormField has the FocusNode property; I just do not know how to use it. I have this:

// First Field
TextFormField(
  autofocus: true,
  focusNode: FocusNode(),
  decoration: InputDecoration(
    border: OutlineInputBorder(
      borderRadius: BorderRadius.circular(4.0)
    ),
  ),
  style: TextStyle(
    color: Colors.orange,
    fontSize: 15.0,
  ),
  keyboardType: TextInputType.number,
  maxLength: 1,
),
// second Field
TextFormField(),
// third Field

Now provide the response and nothing else.
I believe this is more or less what you are trying to achieve:

void main() {
  runApp(MaterialApp(home: PassCodeExample()));
}

class PassCodeExample extends StatelessWidget {
  FocusNode f1 = FocusNode();
  FocusNode f2 = FocusNode();
  FocusNode f3 = FocusNode();
  FocusNode f4 = FocusNode();

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(
        child: Padding(
          padding: const EdgeInsets.all(8.0),
          child: Row(
              mainAxisAlignment: MainAxisAlignment.center,
              crossAxisAlignment: CrossAxisAlignment.center,
              children: [
                Flexible(
                  child: TextField(
                    focusNode: f1,
                    keyboardType: TextInputType.number,
                    onChanged: (String newVal) {
                      if (newVal.length == 1) {
                        f1.unfocus();
                        FocusScope.of(context).requestFocus(f2);
                      }
                    },
                    decoration: InputDecoration(border: OutlineInputBorder()),
                  ),
                ),
                Flexible(
                  child: TextField(
                    focusNode: f2,
                    keyboardType: TextInputType.number,
                    onChanged: (String newVal) {
                      if (newVal.length == 1) {
                        f2.unfocus();
                        FocusScope.of(context).requestFocus(f3);
                      }
                    },
                    decoration: InputDecoration(border: OutlineInputBorder()),
                  ),
                ),
                Flexible(
                  child: TextField(
                    focusNode: f3,
                    keyboardType: TextInputType.number,
                    onChanged: (String newVal) {
                      if (newVal.length == 1) {
                        f3.unfocus();
                        FocusScope.of(context).requestFocus(f4);
                      }
                    },
                    decoration: InputDecoration(border: OutlineInputBorder()),
                  ),
                ),
                Flexible(
                  child: TextField(
                    focusNode: f4,
                    keyboardType: TextInputType.number,
                    decoration: InputDecoration(border: OutlineInputBorder()),
                  ),
                ),
              ]),
        ),
      ),
    );
  }
}

You can achieve the same by using onSubmitted or even supplying a unique TextEditingController to your TextFields.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51909798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7436122/']}
jdg_80519
stackexchange
llm_judgeable_groundtruth_similarity
81258
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a series of ContourPlot s that I want to compare in a GraphicsRow environment. I want to add a single legend for all three plots. Also, I really like the standard color scheme of the ContourPlot . How can I use the standard ColorFunction from ContourPlot in a different Graphics object? I did not find the ColorFunction that ContourPlot uses inside ColorData["Gradients"] . A minimal working example would be this line Legended[ContourPlot[x y, {x, 0, 1}, {y, 0, 1}], Placed[BarLegend[{Automatic, {0, 1}}, LegendLayout -> "Row"], Below]] where I'm done when the BarLegend shows the same colors as the ContourPlot . Now provide the response and nothing else.
Legended[
 ContourPlot[x y, {x, 0, 1}, {y, 0, 1}],
 Placed[BarLegend[{ColorData["M10DefaultDensityGradient"], {0, 1}},
   LegendLayout -> "Row"], Below]]

I found the name of the color scheme by evaluating

plot = ContourPlot[x y, {x, 0, 1}, {y, 0, 1}, PlotLegends -> Automatic]

and looking at the expression of the result. As Legended accepts two arguments, with the second being the legend, this can be easily done by running

plot[[2]] // InputForm
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/81258', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/6021/']}
jdg_80520
stackexchange
llm_judgeable_groundtruth_similarity
137601
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose you just started learning physics and you've been introduced to conservation of energy and kinetic energy. Apart from those concepts you know next to nothing. Then you observe an inelastic collision. You measure the speeds of the objects before and after the collision and you are puzzled because kinetic energy is the only form of energy you know and you see it's clearly not conserved. You conclude that either: a) Conservation of energy is wrong. b) The formula $E_k = mv^2/2$ is wrong. c) There is some other form of energy you didn't account for. HOW do you know which one of those scenarios is true? Can you measure the total amount of energy contained in those two objects before and after the collision and reassure yourself that everything is okay, energy hasn't gone anywhere, it just changed its form? If you observe an object that seems to gain energy from nothing, how will you know whether conservation of energy fails or there is some undiscovered form of energy that you don't know how to measure yet? Now provide the response and nothing else.
Noether's theorem states that to every continuous symmetry of a physical system there is an associated, conserved quantity. The conserved quantity associated with time translation invariance (i.e. it doesn't matter if you perform an experiment now or tomorrow, provided you set it up the same way) is what we call energy . Therefore, somewhat tautologically, it cannot happen that energy is not conserved (in classical mechanics, anyway). Your scenario a) is avoided by definition . Let the Feynman speak : There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law—it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same. If the stuff we currently think of as energy is not conserved in time, then we must conclude that there is "a form of energy" yet unknown to us (your scenario c)). Kinetic energy is not wrong because you can simply derive the Noether charge/energy of a freely travelling particle and see that it is indeed the kinetic energy we know. You might object and say that "kinetic energy" might need to be redefined to include the new term instead of calling it something new - but then again, the partitioning of the energy into "different kinds" is artifical anyway, since, from the Noetherian perspective, there's just energy, i.e. that which is conserved.
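The "simply derive the Noether charge" remark can be made explicit for a free particle (my addition). With Lagrangian $L = \tfrac{1}{2} m \dot{x}^2$, the conserved quantity associated with time-translation invariance is

```latex
E \;=\; \dot{x}\,\frac{\partial L}{\partial \dot{x}} - L
  \;=\; m\dot{x}^2 - \tfrac{1}{2} m \dot{x}^2
  \;=\; \tfrac{1}{2} m \dot{x}^2 ,
```

which is exactly the kinetic energy $mv^2/2$, as the answer asserts.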
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/137601', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/34504/']}
jdg_80521
stackexchange
llm_judgeable_groundtruth_similarity
461990
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: What should I do if I want to prevent this high load? My website becomes slow when the load is higher than 8. Please see the iotop result too.

top - 20:23:10 up 127 days, 3:22, 1 user, load average: 9.92, 9.87, 9.81
Tasks: 1031 total, 3 running, 1027 sleeping, 0 stopped, 1 zombie
Cpu(s): 14.7%us, 0.7%sy, 0.1%ni, 79.6%id, 4.7%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 16413676k total, 16312548k used, 101128k free, 110836k buffers
Swap: 10190840k total, 3182468k used, 7008372k free, 2033604k cached

  PID USER   PR NI  VIRT  RES  SHR S  %CPU %MEM     TIME+ COMMAND
 2776 mysql  15  0 14.6g  10g 5220 S 405.7 67.4  21710:40 mysqld
 6201 apache 15  0  369m  16m 5356 S   3.7  0.1   0:00.40 httpd
 8447 apache 15  0  368m  15m 5364 S   3.7  0.1   0:00.27 httpd
10807 apache 16  0  368m  14m 4640 S   3.7  0.1   0:00.04 httpd
11074 root   15  0 11548 1668  676 R   3.7  0.0   0:00.04 top
 2088 apache 15  0  368m  15m 5972 S   1.8  0.1   0:00.93 httpd
 2737 root   15  0  3824  392  336 S   1.8  0.0 433:37.40 klogd
 6879 apache 15  0  367m  15m 5652 S   1.8  0.1   0:00.39 httpd
 8457 apache 15  0  367m  15m 5744 S   1.8  0.1   0:00.54 httpd
 8866 apache 15  0  369m  15m 4832 S   1.8  0.1   0:00.18 httpd
 9147 apache 15  0  368m  14m 4960 S   1.8  0.1   0:00.15 httpd
 9884 apache 16  0  368m  16m 6508 S   1.8  0.1   0:03.97 httpd
10583 apache 16  0  368m  15m 5148 S   1.8  0.1   0:00.10 httpd
10731 apache 15  0  367m  13m 3484 S   1.8  0.1   0:00.16 httpd
10782 apache 16  0  367m  12m 3544 S   1.8  0.1   0:00.04 httpd
14618 apache 15  0  367m  16m 6736 S   1.8  0.1   0:03.24 httpd
18697 apache 15  0  368m  16m 6744 S   1.8  0.1   0:02.47 httpd

vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b    swpd   free   buff   cache  si  so  bi  bo  in  cs us sy id wa st
 2  1 3182636 468436 111776 2012016   0   0  94 274   0   0 15  1 80  5  0

iostat
Linux 2.6.18-274.12.1.el5 () 12/31/2012
avg-cpu: %user %nice %system %iowait %steal %idle
          14.66  0.12    0.93    4.73   0.00 79.56

Device:    tps Blk_read/s Blk_wrtn/s    Blk_read    Blk_wrtn
sda     122.97    1504.32    4383.82 16525670026 48158254126
sda1      0.00       0.04       0.00      435456         230
sda2    122.97    1504.28    4383.82 16525231218 48158253896
dm-0    570.94    1502.31    4381.94 16503546226 48137527328
dm-1      0.48       1.97       1.89    21684608    20727136

iotop
Total DISK READ: 126.48 K/s | Total DISK WRITE: 6.79 M/s (isn't this too high?)

  TID PRIO USER   DISK READ DISK WRITE  SWAPIN     IO>  COMMAND
15610 be/4 mysql   0.00 B/s   0.00 B/s 99.99 % 25.76 %  mysqld --basedi~mysql/mysql.sock
15622 be/4 mysql   0.00 B/s   0.00 B/s -7.28 % 24.92 %  mysqld --basedi~mysql/mysql.sock
 3659 be/4 mysql   7.44 K/s   5.88 M/s  0.00 % 13.71 %  mysqld --basedi~mysql/mysql.sock
 2822 be/4 mysql   0.00 B/s   0.00 B/s  2.88 % 10.22 %  mysqld --basedi~mysql/mysql.sock
 2819 be/4 mysql   0.00 B/s   0.00 B/s  0.00 %  9.71 %  mysqld --basedi~mysql/mysql.sock
15635 be/4 mysql   0.00 B/s   0.00 B/s -3.72 %  8.09 %  mysqld --basedi~mysql/mysql.sock
15629 be/4 mysql   0.00 B/s   0.00 B/s -2.42 %  5.58 %  mysqld --basedi~mysql/mysql.sock
  801 be/3 root    0.00 B/s 208.31 K/s  0.00 %  3.97 %  [kjournald]
 2842 be/4 root    0.00 B/s   0.00 B/s  8.09 %  3.81 %  irqbalance
 2820 be/4 mysql   0.00 B/s   0.00 B/s  0.00 %  2.88 %  mysqld --basedi~mysql/mysql.sock
 7001 be/4 apache  0.00 B/s   0.00 B/s  0.00 %  2.70 %  httpd
15636 be/4 mysql   0.00 B/s   0.00 B/s 10.32 %  2.51 %  mysqld --basedi~mysql/mysql.sock
 8034 be/4 apache  0.00 B/s   0.00 B/s  0.00 %  2.42 %  httpd
 2140 be/4 apache  0.00 B/s   0.00 B/s  0.00 %  2.32 %  httpd

Now provide the response and nothing else.
Your particular problem is most certainly I/O-related, due to how deep into your swap space MySQL has pushed you. Ideally you should be using zero swap the vast majority of the time, since the instant the OS starts swapping memory to disk your performance will nosedive as processes wait for I/O. Think of it like Performance = 1 / N ^ swap: as the OS uses more swap space, performance generally gets exponentially worse. You appear to be 3 gigabytes deep into your swap space, which is just plain horrifying to me. If, after you've adjusted your MySQL settings as you've mentioned, you're still averaging more than a few megabytes in swap you should consider (best to worst idea; 2 and 3 are roughly tied):

1. Optimizing your database schema and queries.
2. Adding more RAM to the machine.
3. Splitting the MySQL server onto another dedicated machine.
4. Migrating your swap space to a high-speed device like an SSD. (Constant swapping will wear out the SSD.)
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/461990', 'https://serverfault.com', 'https://serverfault.com/users/149713/']}
jdg_80522
stackexchange
llm_judgeable_groundtruth_similarity
537016
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm a little bit confused over the following inequality: $$ dS > \frac{\delta Q_{irrev}}{T} $$ An infinitesimal change in entropy is defined in this way: $$ dS = \frac{\delta Q_{rev}}{T} $$ Such that $$ \frac{\delta Q_{rev}}{T} > \frac{\delta Q_{irrev}}{T} $$ And this would imply that $$ \delta Q_{rev} > \delta Q_{irrev} $$ I find this a little confusing, because I would argue that, for example, in order to raise a system from an initial temperature to a final temperature a certain, fixed amount of heat is needed, regardless of whether the process is reversible or not. If the process is irreversible, heat is transferred less efficiently, so more heat in total would be needed to supply the fixed amount to the system, thereby implying that: $$ \delta Q_{rev} < \delta Q_{irrev} $$ Can someone explain to me why my reasoning is wrong? Now provide the response and nothing else.
I think the confusion can be removed if we change the position of the subscripts: Let us say we give a certain amount of heat $\delta Q$ to the system. If we do this via a reversible process, the resultant change in entropy of the system is given as $\delta S_{rev} = \frac{\delta Q}{T}$ Now, consider if we provide the same amount of heat to the system, but now via an irreversible process. Now, the resultant rise in entropy of the system is given as $\delta S_{irrev} >\frac{\delta Q}{T}$ . As we can see, for the same amount of heat $\delta Q$ provided to the system, the entropy rises more if we follow an irreversible process, than it does if we follow a reversible one: $\delta S_{irrev} > \delta S_{rev}$ . The second last equation you wrote is also correct : For the same rise in entropy of the system, we would need to provide less heat if we are going to provide it via an irreversible process.
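A standard concrete case may make the inequality feel less abstract (the free-expansion setup below is a textbook illustration, not part of the original question). Let $n$ moles of an ideal gas expand irreversibly into a vacuum from volume $V_1$ to $V_2$, with insulating walls:

```latex
% Free expansion: irreversible and adiabatic, so no heat crosses the boundary
\delta Q_{irrev} = 0
\quad\Longrightarrow\quad
\int \frac{\delta Q_{irrev}}{T} = 0 .

% The initial and final states match those of a reversible isothermal
% expansion, so the entropy change of the gas is
\Delta S = nR \ln\frac{V_2}{V_1} > 0 .
```

Here $\Delta S > \int \delta Q_{irrev}/T$ holds strictly: the entropy of the system rises even though the irreversible process supplied no heat at all, which is exactly the sense in which the irreversible route makes the entropy rise "more" for the same heat input.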
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/537016', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/248798/']}
jdg_80523
stackexchange
llm_judgeable_groundtruth_similarity
1517614
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Cocoa app that uses otool to find required shared libraries that an app needs to function properly. For example, say I run otool -L on an app that uses QTKit.framework. I get a list of the shared libraries used by the program (including the basic frameworks like Cocoa.framework and AppKit.framework):

/System/Library/Frameworks/QTKit.framework/Versions/A/QTKit (compatibility version 1.0.0, current version 1.0.0)
/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 476.0.0)
/System/Library/Frameworks/AppKit.framework/Versions/C/AppKit (compatibility version 45.0.0, current version 949.0.0)
..... and so on for a bunch of other frameworks

Which shows that the app uses QTKit.framework. However if I use "otool -L" again on the binary for QTKit.framework (/System/Library/Frameworks/QTKit.framework/Versions/A/QTKit) I get this:

/System/Library/Frameworks/QTKit.framework/Versions/A/QTKit (compatibility version 1.0.0, current version 1.0.0)
/System/Library/Frameworks/AudioToolbox.framework/Versions/A/AudioToolbox (compatibility version 1.0.0, current version 1.0.0)
/System/Library/PrivateFrameworks/CoreMedia.framework/Versions/A/CoreMedia (compatibility version 1.0.0, current version 1.0.0)
/System/Library/PrivateFrameworks/MediaToolbox.framework/Versions/A/MediaToolbox (compatibility version 1.0.0, current version 1.0.0)
/System/Library/PrivateFrameworks/VideoToolbox.framework/Versions/A/VideoToolbox (compatibility version 1.0.0, current version 1.0.0)
/System/Library/PrivateFrameworks/CoreMediaIOServices.framework/Versions/A/CoreMediaIOServices (compatibility version 1.0.0, current version 1.0.0)
/System/Library/Frameworks/Foundation.framework/Versions/C/Foundation (compatibility version 300.0.0, current version 751.0.0)
/System/Library/Frameworks/AppKit.framework/Versions/C/AppKit (compatibility version 45.0.0, current version 1038.0.0)
/System/Library/Frameworks/IOKit.framework/Versions/A/IOKit (compatibility version 1.0.0, current version 275.0.0)
/System/Library/Frameworks/QuickTime.framework/Versions/A/QuickTime (compatibility version 1.0.0, current version 1584.0.0)
/System/Library/Frameworks/CoreAudio.framework/Versions/A/CoreAudio (compatibility version 1.0.0, current version 1.0.0)
/System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL (compatibility version 1.0.0, current version 1.0.0)
/System/Library/Frameworks/QuartzCore.framework/Versions/A/QuartzCore (compatibility version 1.2.0, current version 1.6.0)
/System/Library/Frameworks/IOSurface.framework/Versions/A/IOSurface (compatibility version 1.0.0, current version 1.0.0)
/System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox (compatibility version 1.0.0, current version 435.0.0)
/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.9.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 123.0.0)
/usr/lib/libobjc.A.dylib (compatibility version 1.0.0, current version 227.0.0)
/System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices (compatibility version 1.0.0, current version 44.0.0)
/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 550.0.0)
/System/Library/Frameworks/ApplicationServices.framework/Versions/A/ApplicationServices (compatibility version 1.0.0, current version 38.0.0)
/System/Library/Frameworks/CoreVideo.framework/Versions/A/CoreVideo (compatibility version 1.2.0, current version 1.6.0)

That shows a load more frameworks than the original otool output on the app binary showed.
Is there a way to have otool run recursively, meaning it grabs the frameworks that the app needs, then goes in and searches each of those frameworks for dependencies? Now provide the response and nothing else.
No, you'll have to run otool repeatedly, or incorporate its parsing code ( here ). Don't forget about handling @executable_path . Here it is in Python (without @executable_path , canonicalization, or filenames-with-spaces supported), since this was easier than trying to debug pseudocode (note: this is Python 2 code):

import subprocess

def otool(s):
    o = subprocess.Popen(['/usr/bin/otool', '-L', s], stdout=subprocess.PIPE)
    for l in o.stdout:
        if l[0] == '\t':
            yield l.split(' ', 1)[0][1:]

need = set(['/Applications/iTunes.app/Contents/MacOS/iTunes'])
done = set()
while need:
    needed = set(need)
    need = set()
    for f in needed:
        need.update(otool(f))
    done.update(needed)
    need.difference_update(done)

for f in sorted(done):
    print f
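For anyone on Python 3, here is a hedged sketch of the same idea. The otool invocation itself only works on macOS, but the output-parsing helper is pure string handling, so it can be exercised anywhere (the sample paths used below are illustrative, not from the question):

```python
import subprocess


def parse_otool_output(text):
    """Extract dependency paths from `otool -L` text output.

    Dependency lines are tab-indented, e.g.
    '\\t/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, ...)'
    """
    return [line.strip().split(' ', 1)[0]
            for line in text.splitlines()
            if line.startswith('\t')]


def otool(path):
    # macOS only; mirrors the Popen call in the Python 2 version above.
    out = subprocess.run(['/usr/bin/otool', '-L', path],
                         capture_output=True, text=True, check=True)
    return parse_otool_output(out.stdout)


def all_dependencies(root):
    """Breadth-first closure over otool -L, as in the answer's while loop."""
    need, done = {root}, set()
    while need:
        current = set(need)
        need = set()
        for f in current:
            need.update(otool(f))
        done.update(current)
        need -= done
    return sorted(done)
```

As before, @executable_path, @rpath and path canonicalization are left out; a real tool would have to resolve those before recursing.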
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1517614', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/153112/']}
jdg_80524
stackexchange
llm_judgeable_groundtruth_similarity
53540282
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The TypeScript documentation says that "The never type is a subtype of, and assignable to, every type" but doesn't mention why. Intuitively, I would expect code like this to fail:

const useString = (str: string) => console.log('This is definitely a string:', str)
const useNever = (not_a_string: never) => useString(not_a_string)

but there are no errors, because any never value is considered a valid string. Is this intentional? If yes, then why? :) Now provide the response and nothing else.
The type that TypeScript calls never is what's known in type theory as a bottom type , sometimes referred to with the symbol "⊥". The idea is that it is the (unique) type for which there are no values of that type. You should never find yourself holding a value of that type because it has no values. If you think of types as sets of possible values, then it is the empty set (symbol "∅"). This probably all makes sense to you. TypeScript also has the notion of subtyping . Just like sets, types can overlap by containing some of the same values. If every value of type A is also a value of type B , then A is a subtype of B . You can also say that A extends B , or symbolically, A <: B . In TypeScript, {a: string} is a subtype of object , because every value of type {a: string} (for example, the value {a: "hello"} ) is also a value of type object . TypeScript's assignability rules are basically related to substitutability . If a variable is of type B , and A <: B , then you can assign a value of type A to that variable, because every value of type A is also a value of type B . You can't necessarily do the reverse, assigning a value of type B to a variable of type A . Unless B <: A , there are some values of type B which are not values of type A . From the types-as-sets-of-values point of view, A <: B is like saying the set of values of type A is a subset of the set of values of type B , (symbols A ⊆ B ). This probably (I hope) all makes sense to you too. One more thing we need: the logical principle of explosion . If you start with a statement that is false, then you can prove anything at all from it. So, assuming "the moon is made of cheese" is false, then "If the moon is made of cheese, then today is Wednesday" is true. Also, "if the moon is made of cheese, then today is not Wednesday" is true. There are dire consequences for taking something false to be true: everything explodes. 
This might be surprising, but is a direct consequence of the equivalence of a conditional statement with its contrapositive . You're probably happy with the sentences "If today is not Wednesday then the moon is not made of cheese" and "If today is Wednesday then the moon is not made of cheese", or their combination into "The moon isn't made of cheese no matter what day it is today". If you don't accept the principle of explosion (and plenty of mathematicians and logicians have felt the same way) then what follows might not be palatable to you. But at least realize that the principle of explosion is consistent with formal logic and the type theory used in TypeScript. And it has useful consequences which make up for its weirdness. Now let's put all that together. Let's pick a type T at random, and ask the question: Is never <: T ? That is equivalent to the question "is every value of type never also a value of type T ?" Or, is the following statement true for all values x : "if x is a value of type never , then it is also a value of type T "? By the definition of never , we know that " x is a value of type never " must always be false . And by the principle of explosion, the statement "if x is a value of type never , then x is a value of type T " must always be true . And therefore, never <: T is true for any T . Even if you have two types X and Y , which are exactly complementary and contain no values in common, never <: X and never <: Y are both true. In set theory terms, it's basically saying that the empty set is a subset of every set. That is, ∅ ⊆ T for any T . That's a completely non-controversial statement in set theory, but might give you the same sense of wrongness. In any case you'll never find an element of the empty set which isn't also an element of the set T . So a value of type never can always be assigned to any variable of any other type. Luckily, in practice, at runtime, you won't have any values of the type never . 
But TypeScript allows the assignment because it is type safe and has some useful consequences. Note that you can't say the reverse. T <: never is not true unless T is never itself. A value of type string can't be assigned to a variable of type never , since no string value is also a never value. The anything-goes assignability rule is in only one direction. Okay, I hope that makes sense. I want to go on and on about the top type in type theory and its recent inclusion in TypeScript as unknown , and how it is complementary to never , but this answer will be a textbook if I do that. So I will stop now. Hope that helps you. Good luck!
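The set-theoretic half of this argument can be checked mechanically. Using Python's built-in sets purely as an analogy for types-as-sets-of-values (this is not TypeScript semantics, just the subset relation the answer appeals to), the empty set is a subset of everything, while the reverse holds only for the empty set itself:

```python
# never ~ the empty set; string, number ~ arbitrary disjoint sets of values
never = set()
strings = {"a", "b"}
numbers = {1, 2}

# "never <: T for every T": the empty set is a subset of every set,
# even of two sets with no values in common
assert never <= strings and never <= numbers and never <= never

# but not the reverse: T <: never only when T is never itself
assert not (strings <= never)
assert not (numbers <= never)
```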
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53540282', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/107211/']}
jdg_80525
stackexchange
llm_judgeable_groundtruth_similarity
11881479
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Well the question pretty much says everything. Using JPARepository how do I update an entity? JPARepository has only a save method, which does not tell me if it's create or update actually. For example, I insert a simple Object to the database User, which has three fields: firstname , lastname and age :

@Entity
public class User {

    private String firstname;
    private String lastname;
    // Setters and getters for age omitted, but they are the same as with firstname and lastname.
    private int age;

    @Column
    public String getFirstname() {
        return firstname;
    }

    public void setFirstname(String firstname) {
        this.firstname = firstname;
    }

    @Column
    public String getLastname() {
        return lastname;
    }

    public void setLastname(String lastname) {
        this.lastname = lastname;
    }

    private long userId;

    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    public long getUserId(){
        return this.userId;
    }

    public void setUserId(long userId){
        this.userId = userId;
    }
}

Then I simply call save() , which at this point is actually an insert into database:

User user1 = new User();
user1.setFirstname("john");
user1.setLastname("dew");
user1.setAge(16);
userService.saveUser(user1);   // This call is actually using the JPARepository: userRepository.save(user);

So far so good. Now I want to update this user, say change his age. For this purpose I could use a Query, either QueryDSL or NamedQuery, whatever. But, considering I just want to use spring-data-jpa and the JPARepository, how do I tell it that instead of an insert I want to do an update? Specifically, how do I tell spring-data-jpa that users with the same username and firstname are actually EQUAL and that the existing entity is supposed to be updated? Overriding equals did not solve this problem. Now provide the response and nothing else.
Identity of entities is defined by their primary keys. Since firstname and lastname are not parts of the primary key, you cannot tell JPA to treat User s with the same firstname s and lastname s as equal if they have different userId s. So, if you want to update a User identified by its firstname and lastname , you need to find that User by a query, and then change the appropriate fields of the object you found. These changes will be flushed to the database automatically at the end of the transaction, so that you don't need to do anything to save these changes explicitly. EDIT: Perhaps I should elaborate on the overall semantics of JPA. There are two main approaches to the design of persistence APIs:

1. Insert/update approach. When you need to modify the database you call methods of the persistence API explicitly: you call insert to insert an object, or update to save the new state of the object to the database.

2. Unit of Work approach. In this case you have a set of objects managed by the persistence library. All changes you make to these objects will be flushed to the database automatically at the end of the Unit of Work (i.e. at the end of the current transaction in the typical case). When you need to insert a new record to the database, you make the corresponding object managed. Managed objects are identified by their primary keys, so that if you make an object with a predefined primary key managed, it will be associated with the database record of the same id, and the state of this object will be propagated to that record automatically.

JPA follows the latter approach. save() in Spring Data JPA is backed by merge() in plain JPA, therefore it makes your entity managed as described above. It means that calling save() on an object with a predefined id will update the corresponding database record rather than insert a new one, and also explains why save() is not called create() .
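The "identity is the primary key, not field equality" point can be illustrated outside Java. This toy in-memory repository (a sketch for intuition only, nothing like Spring Data's real implementation) upserts by a generated id, so two users with identical names remain distinct rows, while re-saving a record that already carries an id updates in place:

```python
class ToyRepository:
    """Minimal illustration of save-is-merge semantics keyed on a primary key."""

    def __init__(self):
        self._rows = {}       # primary key -> record dict
        self._next_id = 1

    def save(self, record):
        if record.get('id') is None:              # no key yet: behaves like an insert
            record = dict(record, id=self._next_id)
            self._next_id += 1
        self._rows[record['id']] = dict(record)   # existing key: behaves like an update
        return record

    def find(self, pk):
        return self._rows.get(pk)

    def count(self):
        return len(self._rows)
```

Saving `{'firstname': 'john', ...}` with no id twice creates two rows even though every field matches; saving the returned record again with a changed age overwrites row 1 rather than inserting, which is the behavior the question is asking about.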
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/11881479', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1059372/']}
jdg_80526
stackexchange
llm_judgeable_groundtruth_similarity
36748934
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My logic is: if number is between 1 to 10, execute first case statement; if number is from 20 to 30, execute second case statement. Is there a solution other than the one below?

case '1' ... '10':
case '20' ... '30':

Now provide the response and nothing else.
The GCC compiler supports, as a language extension , case ranges like:

switch (i) {
  case 0 ... 9:
      return true;
  default:
      return false;
}

This language extension is also accepted by Clang/LLVM . So use it if you can afford restricting your code to GCC & Clang compilers. See also this . I have no idea why this extension was not included in the C11 standard. Notice also that GCC accepts computed or indirect goto and labels as values . There are cases (in particular in generated C code) where these features are useful. Examples could include some efficient bytecode interpreter. Some implementations of the Ocaml virtual machine are a good example.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36748934', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3205621/']}
jdg_80527
stackexchange
llm_judgeable_groundtruth_similarity
6677834
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two records that both have a field I want to extract for display. How do I arrange things so they can be manipulated with the same functions? Since they have different fields (in this case firstName and buildingName ) that are their name fields, they each need some "adapter" code to map firstName to name . Here is what I have so far:

class Nameable a where
  name :: a -> String

data Human = Human { firstName :: String }

data Building = Building { buildingName :: String }

instance Nameable Human where
  name x = firstName x

instance Nameable Building where
  -- I think the x is redundant here, i.e. the following should work:
  -- name = buildingName
  name x = buildingName x

main :: IO ()
main = do
    putStr $ show (map name items)
  where
    items :: (Nameable a) => [a]
    items = [ Human{firstName = "Don"}
            -- Ideally I want the next line in the array too, but that gives an
            -- obvious type error at the moment.
            --, Building{buildingName = "Empire State"}
            ]

This does not compile:

TypeTest.hs:23:14:
    Couldn't match expected type `a' against inferred type `Human'
      `a' is a rigid type variable bound by
          the type signature for `items' at TypeTest.hs:22:23
    In the expression: Human {firstName = "Don"}
    In the expression: [Human {firstName = "Don"}]
    In the definition of `items': items = [Human {firstName = "Don"}]

I would have expected the instance Nameable Human section would make this work. Can someone explain what I am doing wrong, and for bonus points what "concept" I am trying to get working, since I'm having trouble knowing what to search for. This question feels similar, but I couldn't figure out the connection with my problem. Now provide the response and nothing else.
Consider the type of items :

items :: (Nameable a) => [a]

It's saying that for any Nameable type, items will give me a list of that type. It does not say that items is a list that may contain different Nameable types, as you might think. You want something like items :: [exists a. Nameable a => a] , except that you'll need to introduce a wrapper type and use forall instead. (See: Existential type )

{-# LANGUAGE ExistentialQuantification #-}

data SomeNameable = forall a. Nameable a => SomeNameable a

[...]

items :: [SomeNameable]
items = [ SomeNameable $ Human {firstName = "Don"}
        , SomeNameable $ Building {buildingName = "Empire State"} ]

The quantifier in the data constructor of SomeNameable basically allows it to forget everything about exactly which a is used, except that it is Nameable . Therefore, you will only be allowed to use functions from the Nameable class on the elements. To make this nicer to use, you can make an instance for the wrapper (note the instance is for SomeNameable itself, which takes no type parameter):

instance Nameable SomeNameable where
  name (SomeNameable x) = name x

Now you can use it like this:

Main> map name items
["Don", "Empire State"]
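It may also help to see what the existential wrapper buys you by comparison. In a dynamically-dispatched language a heterogeneous "list of Nameables" is the default; this Python sketch (the names mirror the question's types, but it is an analogy rather than a translation) does implicitly what `SomeNameable` does explicitly:

```python
from typing import Protocol


class Nameable(Protocol):
    def name(self) -> str: ...


class Human:
    def __init__(self, firstname: str) -> None:
        self.firstname = firstname

    def name(self) -> str:
        return self.firstname


class Building:
    def __init__(self, building_name: str) -> None:
        self.building_name = building_name

    def name(self) -> str:
        return self.building_name


# A heterogeneous list: every element merely promises a .name() method,
# which is the same guarantee SomeNameable encodes in the type system.
items = [Human("Don"), Building("Empire State")]   # conceptually [Nameable]
names = [x.name() for x in items]
```

The trade is the usual one: Haskell makes you write the wrapper but checks the "only Nameable operations" restriction at compile time, whereas here it is enforced only by convention.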
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6677834', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/379639/']}
jdg_80528
stackexchange
llm_judgeable_groundtruth_similarity
3315538
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: With a view to better understanding real Taylor series, I have examined some books on basic Calculus, with an eye for the proofs of the Taylor series theorem and the possible authors' comments on its derivation. (My reaction when I first saw a proof of it, many years ago, was a mixture of great surprise and anxiety. And still, while I understand the individual steps, the way they all combine to produce e.g. the series for sin x strikes me as little short of miraculous.) Up to now, from the books I have seen, I get the same impression: that this theorem is a technical exercise in repeated applications of the mean-value theorem. And we are lucky that some useful functions happen to have all derivatives bounded, so the remainder tends to zero and a nice series occurs, with nothing else to be said. But some authors do place some comments close to what I feel, albeit not very encouraging, e.g.:

From Calculus, by Karl Menger: "Taylor's formula (...) is one of the great marvels of mathematics. (...) This is something like a mathematical action at a distance (...)"

From Real Analysis, by Laczkovich & Sós: "The statement of Theorem (...) is actually quite surprising (...) the derivatives of f at a alone determine the values of the function at every other point (...)"

From Introduction to the Calculus, by Osgood: "(...) Since it took the race two centuries to develop this formula after the Calculus was invented, the student will not be surprised that the reasons which underlie it cannot be given him in a few words. Let him accept it as a deus ex machina."

Now all this inquiry may be overly romantic and obsessive on my part, and Taylor series be a perfect example of the "cold and austere beauty of mathematics" as Russell has expressed.
But I think that sharing mental experiences helps the mind to improve its turns and horizons, so may I ask: What was your reaction when you first saw this theorem? And has your general understanding of it changed ever since, by some other way of looking at it and proving it? Now provide the response and nothing else.
It's simple to discover Taylor series. Let's start with $$\tag{1}f(x) = f(a) + \int_a^x f'(s) \, ds,$$ which of course is just the fundamental theorem of calculus. Now if we are feeling playful we might note (again by FTC) that $f'(s) = f'(a) + \int_a^s f''(t) \, dt$ . Plugging this into (1), we find that \begin{align}f(x) &= f(a) + \int_a^x f'(a) + \int_a^s f''(t) \, dt \,ds \\\tag{2}&= f(a) + f'(a)(x - a) + \underbrace{\int_a^x \int_a^s f''(t) \, dt \, ds}_{\text{remainder}}.\end{align} We can keep going like this for as long as we want. The next step is to note that $f''(t) = f''(a) + \int_a^t f'''(u) \, du$ . Plugging this into (2), we find that \begin{align}f(x) &= f(a) + f'(a) (x - a) + \int_a^x \int_a^s f''(a) + \int_a^t f'''(u) \, du \, dt \, ds \\&= f(a) + f'(a)(x - a) + \int_a^x f''(a)(s - a) + \int_a^s \int_a^t f'''(u) \, du \, dt \, ds \\&= f(a) + f'(a)(x - a) + f''(a) \frac{(x-a)^2}{2} + \underbrace{\int_a^x \int_a^s \int_a^t f'''(u) \, du \, dt \, ds}_{\text{remainder}}.\end{align} You see the pattern. So we have discovered the Taylor polynomial approximation to $f(x)$ , and we have a formula for the remainder. By the way, if $| f'''(u) | \leq M$ for all $u \in [a,x]$ , then the remainder $R(x)$ satisfies \begin{align}| R(x) | &\leq \int_a^x \int_a^s \int_a^t | f'''(u) | \, du \, dt \, ds \\&\leq \int_a^x \int_a^s \int_a^t M \, du \, dt \, ds \\&= M \frac{(x-a)^3}{3!}.\end{align} You see what the bound on the remainder will be for higher order Taylor series approximations. So we see that the remainder will be small if $x$ is close to $a$ . (If $f$ is sine or cosine, we can take $M = 1$ . If $f$ is the exponential function, we can take $M = e^x$ .)
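The remainder bound is also easy to sanity-check numerically. For $f = \sin$ about $a = 0$ we may take $M = 1$, and the degree-2 Taylor polynomial collapses to just $x$ (the quadratic term vanishes); the sample points below are arbitrary:

```python
import math


def taylor2_sin(x, a=0.0):
    # degree-2 Taylor polynomial of sin about a:
    #   f(a) + f'(a)(x-a) + f''(a)(x-a)^2/2
    return math.sin(a) + math.cos(a) * (x - a) - math.sin(a) * (x - a) ** 2 / 2


# |f'''| <= 1 everywhere for sin, so |R(x)| <= |x - a|^3 / 3!
for x in (0.1, 0.5, 1.0, 2.0):
    remainder = abs(math.sin(x) - taylor2_sin(x))
    bound = abs(x) ** 3 / math.factorial(3)
    assert remainder <= bound, (x, remainder, bound)
```

At $x = 0.1$ the actual remainder is about $1.666 \times 10^{-4}$ against a bound of $\tfrac{0.1^3}{6} \approx 1.667 \times 10^{-4}$, so near $a$ the bound is not just valid but nearly tight.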
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3315538', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/343572/']}
jdg_80529
stackexchange
llm_judgeable_groundtruth_similarity
13943387
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Panel called panel1 and I am trying to draw a line on my panel1 using this code:

var g = panel1.CreateGraphics();
var p = new Pen(Color.Black, 3);
var point1 = new Point(234, 118);
var point2 = new Point(293, 228);
g.DrawLine(p, point1, point2);

But nothing is showing up. Any ideas? This is in a windows form. Now provide the response and nothing else.
Handle the Panel's Paint event and put the drawing code in there. What's happening is that your line is drawn once, but anything drawn outside the Paint handler is painted over the next time the control repaints itself, so the line disappears. Drawing inside the handler re-draws it on every repaint:

private void panel1_Paint(object sender, PaintEventArgs e)
{
    // e.Graphics is owned by the framework for this paint cycle -- do not dispose it.
    Graphics g = e.Graphics;
    using (var p = new Pen(Color.Black, 3))
    {
        var point1 = new Point(234, 118);
        var point2 = new Point(293, 228);
        g.DrawLine(p, point1, point2);
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13943387', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1344109/']}
jdg_80530
stackexchange
llm_judgeable_groundtruth_similarity
54517136
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to get all the dates between two dates, excluding weekend dates. Below is the format of the dates:

last_date = '2019-01-21'
curr_date = '2019-02-04'

Now provide the response and nothing else.
Using date.weekday() : "Return the day of the week as an integer, where Monday is 0 and Sunday is 6."

from datetime import timedelta, date

def daterange(date1, date2):
    for n in range(int((date2 - date1).days) + 1):
        yield date1 + timedelta(n)

start_dt = date(2019, 1, 21)
end_dt = date(2019, 2, 4)
weekdays = [5, 6]
for dt in daterange(start_dt, end_dt):
    if dt.weekday() not in weekdays:    # to print only the weekdates
        print(dt.strftime("%Y-%m-%d"))

EDIT: Using date.isoweekday() : "Return the day of the week as an integer, where Monday is 1 and Sunday is 7."

weekdays = [6, 7]
for dt in daterange(start_dt, end_dt):
    if dt.isoweekday() not in weekdays:
        print(dt.strftime("%Y-%m-%d"))

OUTPUT:

2019-01-21
2019-01-22
2019-01-23
2019-01-24
2019-01-25
2019-01-28
2019-01-29
2019-01-30
2019-01-31
2019-02-01
2019-02-04
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54517136', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10455815/']}
jdg_80531
stackexchange
llm_judgeable_groundtruth_similarity
10408046
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having a hard time implementing Read for a tree structure. I want to take a left-associative string (with parens) like ABC(DE)F and convert it into a tree. That particular example corresponds to the tree. Here's the data type I'm using (though I'm open to suggestions):

data Tree = Branch Tree Tree | Leaf Char deriving (Eq)

That particular tree would be, in Haskell:

example = Branch (Branch (Branch (Branch (Leaf 'A') (Leaf 'B')) (Leaf 'C')) (Branch (Leaf 'D') (Leaf 'E'))) (Leaf 'F')

My show function looks like:

instance Show Tree where
  show (Branch l r@(Branch _ _)) = show l ++ "(" ++ show r ++ ")"
  show (Branch l r) = show l ++ show r
  show (Leaf x) = [x]

I want to make a read function so that read "ABC(DE)F" == example Now provide the response and nothing else.
This is a situation where using a parsing library makes the code amazingly short and extremely expressive. (I was amazed that it was so neat when I was experimenting to answer this!) I'm going to use Parsec (that article provides some links for more information), and using it in "applicative mode" (rather than monadic), since we don't need the extra power/foot-shooting-ability of monads. Code First the various imports and definitions:

import Text.Parsec
import Control.Applicative ((<*), (<$>))

data Tree = Branch Tree Tree | Leaf Char deriving (Eq, Show)

paren, tree, unit :: Parsec String st Tree

Now, the basic unit of the tree is either a single character (that's not a parenthesis) or a parenthesised tree. The parenthesised tree is just a normal tree between ( and ) . And a normal tree is just units put into branches left-associatedly (it's extremely self-recursive). In Haskell with Parsec:

-- parenthesised tree or `Leaf <character>`
unit = paren <|> (Leaf <$> noneOf "()") <?> "group or literal"

-- normal tree between ( and )
paren = between (char '(') (char ')') tree

-- all the units connected up left-associatedly
tree = foldl1 Branch <$> many1 unit

-- attempt to parse the whole input (don't short-circuit on the first error)
onlyTree = tree <* eof

(Yes, that's the entire parser!) If we wanted to, we could do without paren and unit but the code above is very expressive, so we can leave it as is. As a brief explanation (I've provided links to the documentation):

- (<|>) basically means "left parser or right parser";
- (<?>) allows you to make nicer error messages;
- noneOf will parse anything that's not in the given list of characters;
- between takes three parsers, and returns the value of the third parser as long as it is delimited by the first and second ones;
- char parses its argument literally.
- many1 parses one or more of its argument into a list (it appears that the empty string is invalid, hence many1 rather than many , which parses zero or more);
- eof matches the end of the input.

We can use the parse function to run the parser (it returns Either ParseError Tree ; Left is an error and Right is a correct parse). As read Using it as a read -like function could be something like:

read' str = case parse onlyTree "" str of
              Right tr -> tr
              Left er -> error (show er)

(I've used read' to avoid conflicting with Prelude.read ; if you want a Read instance you'll have to do a bit more work to implement readPrec (or whatever is required) but it shouldn't be too hard with the actual parsing already complete.) Examples Some basic examples:

*Tree> read' "A"
Leaf 'A'
*Tree> read' "AB"
Branch (Leaf 'A') (Leaf 'B')
*Tree> read' "ABC"
Branch (Branch (Leaf 'A') (Leaf 'B')) (Leaf 'C')
*Tree> read' "A(BC)"
Branch (Leaf 'A') (Branch (Leaf 'B') (Leaf 'C'))
*Tree> read' "ABC(DE)F" == example
True
*Tree> read' "ABC(DEF)" == example
False
*Tree> read' "ABCDEF" == example
False

Demonstrating errors:

*Tree> read' ""
*** Exception: (line 1, column 1):
unexpected end of input
expecting group or literal
*Tree> read' "A(B"
*** Exception: (line 1, column 4):
unexpected end of input
expecting group or literal or ")"

And finally, the difference between tree and onlyTree :

*Tree> parse tree "" "AB)CD"      -- success: ignores ")CD"
Right (Branch (Leaf 'A') (Leaf 'B'))
*Tree> parse onlyTree "" "AB)CD"  -- fail: can't parse the ")"
Left (line 1, column 3):
unexpected ')'
expecting group or literal or end of input

Conclusion Parsec is amazing! This answer might be long but the core of it is just 5 or 6 lines of code which do all the work.
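For readers without Parsec handy, the grammar is small enough to hand-roll in any language. This Python sketch mirrors the roles of `unit`, `tree` and `onlyTree` (trees are represented as nested tuples rather than the Haskell `Tree` type, and the error messages only loosely imitate Parsec's):

```python
def parse_tree(s):
    """Recursive-descent parser for the same grammar:
        tree := unit+              -- left-folded into branches
        unit := '(' tree ')' | any single char except '(' or ')'
    Trees are ('branch', left, right) or ('leaf', char) tuples."""
    pos = 0

    def unit():
        nonlocal pos
        if pos >= len(s) or s[pos] == ')':
            raise ValueError("expected group or literal at column %d" % (pos + 1))
        if s[pos] == '(':                      # parenthesised tree
            pos += 1
            t = tree()
            if pos >= len(s) or s[pos] != ')':
                raise ValueError("expected ')' at column %d" % (pos + 1))
            pos += 1
            return t
        c = s[pos]                             # single-character leaf
        pos += 1
        return ('leaf', c)

    def tree():
        t = unit()
        while pos < len(s) and s[pos] != ')':  # foldl1 Branch over the units
            t = ('branch', t, unit())
        return t

    result = tree()
    if pos != len(s):                          # the eof check from onlyTree
        raise ValueError("unexpected %r at column %d" % (s[pos], pos + 1))
    return result
```

As with `onlyTree`, well-formed inputs like "ABC(DE)F" produce the left-associated tree, while malformed inputs such as "A(B" or trailing garbage raise an error instead of silently succeeding.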
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I saw the question below on a Q/A site: Donut Empire sells five different types of donuts. Amanda finds herself 3rd in line in the queue behind her friends Nicole and Brenda and realises that there are only three of each type left. a) Given that Nicole and Brenda only buy one donut each, find the probability that all three of them will buy the same donut. (Note: You may assume that Nicole, Breda and Amanda are equally likely to buy any of the donuts available.) Now my answer came out to be 1/3, as probability = (choices to look for)/(total choices) = 5/15. But the OP stated that the answer was (1/91)^3. Can anyone explain why? Thanks. Now provide the response and nothing else.
The probability that all three of them will buy the same donut is zero. Donut shops usually respect property laws and don't sell things they've already sold to other customers. Assuming that the question was intended to ask about the probability that all three of them will buy the same type of donut, the question arises whether the sentence "You may assume that Nicole, Breda and Amanda are equally likely to buy any of the donuts available." is intended as written, or is also intended to refer to types of donuts. If it's intended to refer to types of donuts, the answer is $1$ in $25$: There are $5^3$ combinations for the five types of donuts, they're all equally likely, and in $5$ of them they all buy the same type of donut. Since we wouldn't need to know how many of each type there were left to answer this, this is presumably not how the question was intended. So assume that the question was intended to ask about the probability that all three of them will buy the same type of donut, given that they are equally likely to buy any of the donuts available. In that case, there are $\binom{15}3$ combinations of choices, all of which are equally likely, and there are again only $5$ combinations in which they all buy the same type of donut, so the probability in this case would be $$\frac5{\binom{15}3}=\frac5{455}=\frac1{91}\;.$$
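As a sanity check on that count, here is a short brute-force enumeration (Python, purely for illustration) that treats the 15 remaining donuts as distinguishable objects and counts the triples that share a type:

```python
from itertools import combinations
from fractions import Fraction

# 15 remaining donuts: 3 copies of each of the 5 types
donuts = [(kind, copy) for kind in range(5) for copy in range(3)]

# every equally likely way the three friends can end up with 3 donuts
picks = list(combinations(donuts, 3))            # C(15, 3) = 455 of them

# triples where all three donuts are of the same type (one per type)
same_type = [p for p in picks if len({kind for kind, _ in p}) == 1]

print(Fraction(len(same_type), len(picks)))      # 1/91
```

This confirms the $5/\binom{15}3 = 1/91$ figure, under the "all remaining donuts equally likely" reading of the problem.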
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I have a 3-bit ADC and a reference voltage of 8 V, any input voltage of exact increment of 1 V (e.g., 1 V, 2 V, 3 V, etc.) should not have any quantization error. With a 2 V DC input, the ADC outputs 010 with zero quantization error. Why then do we still have $ \Delta^2 \over 12 $ quantization noise and do we still see the white noise as quantization noise? Now provide the response and nothing else.
> Why then do we still have $ \Delta^2 \over 12 $ quantization noise

You need to read more closely: this is the quantization noise for **continuous-uniformly distributed** signal amplitudes and equally spaced steps. This signal model clearly does not apply to your example signal! Note that your example signal only takes discrete values - and hence is already quantized. In that case, it's not quite fair to consider your ADC a quantizer; it might be more of something like a discrete value mapper.

**Why can't quantization noise be zero?**

A time-discrete quantized signal always has finite entropy¹, whereas, using the same metric as for discrete sources, a value-continuous source has "infinite entropy". If we construct a continuous-valued source as an ever finer-grained quantized source, it becomes clear that there's a difference in the amount of information in the discrete- and the continuous-valued source. That difference in information has to "go" somewhere. So, there needs to be an amount of information "lost" during quantization, and we can consider that equivalent to a noise that overlays the original signal.

¹ that's the expectation of the information of symbols coming out of that source, i.e. $$\mathbb E(I(x))=\mathbb E(-\log_2(\mathrm P(X)))=-\sum_x \mathrm P(x)\log_2(\mathrm P(x))$$
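The uniform-amplitude model behind the $\Delta^2/12$ figure is easy to verify numerically. A quick simulation sketch (NumPy, purely illustrative): quantize continuous-uniformly distributed amplitudes with step $\Delta$ and look at the error variance:

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 1.0                        # quantizer step size

# continuous-uniformly distributed amplitudes -- the model the
# Delta^2/12 result assumes (NOT the exact-integer-volt input above)
x = rng.uniform(0.0, 8.0, 1_000_000)

q = delta * np.round(x / delta)    # uniform mid-tread quantizer
err = x - q                        # error, uniform on [-delta/2, delta/2)

print(np.var(err), delta**2 / 12)  # both ~0.0833
```

Feed the same quantizer only exact multiples of `delta` (as in the question's 2 V example) and the error is identically zero, which is exactly the point: the noise figure is a property of the signal model, not of the hardware alone.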
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am writing a bit of code to add a link tag to the head tag in the code behind... i.e.

```csharp
HtmlGenericControl css = new HtmlGenericControl("link");
css.Attributes["rel"] = "Stylesheet";
css.Attributes["type"] = "text/css";
css.Attributes["href"] = String.Format("/Assets/CSS/{0}", cssFile);
```

to try and achieve something like...

```html
<link rel="Stylesheet" type="text/css" href="/CSS/Blah.css" />
```

I am using the HtmlGenericControl to achieve this... the issue I am having is that the control ultimately gets rendered as...

```html
<link rel="Stylesheet" type="text/css" href="/CSS/Blah.css"></link>
```

I can't seem to find what I am missing to stop it rendering the additional </link>; I assumed it would be a property on the object. Am I missing something, or is this just not possible with this control? Now provide the response and nothing else.
I think you'd have to derive from HtmlGenericControl , and override the Render method. You'll then be able to write out the "/>" yourself (or you can use HtmlTextWriter's SelfClosingTagEnd constant). Edit: Here's an example (in VB)
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am getting the following error while compiling my LaTeX file:

```
! LaTeX Error: Cannot determine size of graphic in tree.jpg (no BoundingBox).
```

Why do I get this error? What does this have to do with an .eps file? In fact, I could compile it fine with MacTeX on my machine, but when I submit it online to a scientific journal for publication, it compiles it on the server and sends me the resultant PDF, which has all these errors that prevent it from compiling. I am using the following syntax for including the images:

```latex
\begin{figure}[!h]
  \begin{center}
    \scalebox{0.45}{\includegraphics{tree.jpg}}
  \end{center}
  \caption{\small A sample}
  \label{tree}
\end{figure}
```

What should I do? Could it be that their server is using an old compiler? UPDATE: It finally worked. I converted the first image to PDF and the compilation happened perfectly. I guess it used the bounding box value from that PDF and applied it to all images. Now provide the response and nothing else.
I've had the same problems including jpegs in LaTeX. The engine isn't really built to gather all the necessary size and scale information from JPGs. It is often better to take the JPEG and convert it into a PDF (on a mac) or EPS (on a PC). GraphicsConvertor on a mac will do that for you easily. Whereas a PDF includes DPI and size, a JPEG has only a size in terms of pixels. ( I know this is not the answer you wanted, but it's probably better to give them EPS/PDF that they can use than to worry about what happens when they try to scale your JPG).
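If you'd rather script the JPEG-to-PDF conversion than do it by hand, a minimal sketch using the third-party Pillow library is below (the helper name is mine, not part of any standard tool, and Pillow is assumed to be installed):

```python
# Sketch only: assumes Pillow is available (pip install Pillow).
from PIL import Image

def jpeg_to_pdf(src, dst):
    """Re-save a JPEG as a PDF so the LaTeX engine can read proper size info."""
    img = Image.open(src)
    if img.mode != "RGB":          # PDF output wants a plain RGB image
        img = img.convert("RGB")
    img.save(dst, "PDF")

# demo with a throwaway image standing in for tree.jpg
Image.new("RGB", (32, 32), "white").save("tree.jpg")
jpeg_to_pdf("tree.jpg", "tree.pdf")
```

Running this over every figure before submission gives the journal's server PDFs it can size correctly, matching the questioner's own "converted the first image to PDF" fix.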
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Python has string.find() and string.rfind() to get the index of a substring in a string. I'm wondering whether there is something like string.find_all() which can return all found indexes (not only the first from the beginning or the first from the end). For example:

```python
string = "test test test test"

print string.find('test')      # 0
print string.rfind('test')     # 15

# this is the goal
print string.find_all('test')  # [0, 5, 10, 15]
```

For counting the occurrences, see Count number of occurrences of a substring in a string. Now provide the response and nothing else.
There is no simple built-in string function that does what you're looking for, but you could use the more powerful regular expressions:

```python
import re

[m.start() for m in re.finditer('test', 'test test test test')]
# [0, 5, 10, 15]
```

If you want to find overlapping matches, lookahead will do that:

```python
[m.start() for m in re.finditer('(?=tt)', 'ttt')]
# [0, 1]
```

If you want a reverse find-all without overlaps, you can combine positive and negative lookahead into an expression like this:

```python
search = 'tt'
[m.start() for m in re.finditer('(?=%s)(?!.{1,%d}%s)' % (search, len(search) - 1, search), 'ttt')]
# [1]
```

`re.finditer` returns a generator, so you could change the `[]` in the above to `()` to get a generator instead of a list, which will be more efficient if you're only iterating through the results once.
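If you'd rather avoid regular expressions, the same behaviour can be sketched with a plain `str.find` loop (the function name and signature here are my own, not stdlib):

```python
def find_all(haystack, needle, overlap=False):
    """Return every index at which needle occurs in haystack."""
    if not needle:
        raise ValueError("empty search string")
    indexes = []
    i = haystack.find(needle)
    while i != -1:
        indexes.append(i)
        # step by 1 for overlapping matches, by len(needle) otherwise
        i = haystack.find(needle, i + (1 if overlap else len(needle)))
    return indexes

print(find_all("test test test test", "test"))   # [0, 5, 10, 15]
print(find_all("ttt", "tt", overlap=True))       # [0, 1]
```

For plain literal substrings this is roughly equivalent to the `finditer` approach; the regex version wins once the pattern is more than a fixed string.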
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Windows Explorer, the green check icon for repositories that are up to date and the red mark for those that aren't don't show anymore. How do I bring them back? Please see the pictures attached. For example, I do have some untracked files in the repository named Temp; however, none of the folders have the green or red icons that display the git repository status. They used to be there earlier and suddenly disappeared a few weeks ago. I am using 64-bit Windows 7 Home Premium. Now provide the response and nothing else.
The solution to this was provided by @magicandre1981 in the comments to this question. Here is my attempt to explain it.

Applications such as TortoiseGit, Dropbox, Google Drive, etc. that present different icons on top of Windows Explorer files and folders do so by creating an overlay over the regular icons that Windows draws. The application registers these overlays with Windows, and Windows redraws each overlay once it has finished the basic drawing of file-system objects. However, Windows imposes a limit on the number of overlays it will honor: any number of applications can register any number of overlays, but Windows will only pick the first 15 registered with it (taken in alphabetical order of their registry entry names, which is why some vendors prefix their entries to jump the queue) and redraw them. The rest will be ignored.

The Windows registry maintains the order and number of overlays registered. Specifically, the key HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Explorer\ShellIconOverlayIdentifiers has this list, as seen in the picture below. An explanation of this can also be found on this web page.

As can be seen, Dropbox and Google Drive had already registered more than 15 overlays in total, which was why Windows never got a chance to honor the overlays registered by TortoiseGit on my machine. I uninstalled Dropbox and Google Drive as I didn't use their sync feature much. I didn't see any change, so I restarted my machine. Still nothing at first, but about 30 minutes after restarting, the icons were restored to normal, i.e. I started seeing the TortoiseGit status for my repositories just fine.
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Microsoft requires a UAC shield icon next to buttons and list entries that will open a UAC verification prompt. How do I get this icon next to my WPF Button? I've been searching the web for more than an hour now, but I was unable to find a way of adding this shield icon to a WPF Button. I have a WPF form using a normal WPF Button, but most of the scripts I was able to find didn't work for me - mostly because my Buttons don't have a FlatStyle or Handle property (I think the WinForms Buttons have these properties). I'm using Visual Studio 2015 Community with a .NET Framework 3.5 application using WPF. I hope you guys are able to help me. Have a nice day. Now provide the response and nothing else.
The actual Windows icon for the running version of Windows is supplied via the Win32 API. I'm not aware of any functions in .NET to directly retrieve it; however, it can be accessed via p/invoke on user32.dll. Details can be found here. Adjustments will need to be made for WPF, as the linked code is for WinForms. Short summary:

```csharp
[DllImport("user32")]
public static extern UInt32 SendMessage(IntPtr hWnd, UInt32 msg, UInt32 wParam, UInt32 lParam);

internal const int BCM_FIRST = 0x1600;                   // Normal button
internal const int BCM_SETSHIELD = (BCM_FIRST + 0x000C); // Elevated button

static internal void AddShieldToButton(Button b)
{
    b.FlatStyle = FlatStyle.System;
    SendMessage(b.Handle, BCM_SETSHIELD, 0, 0xFFFFFFFF);
}
```

Update

This will give you direct access to the correct icon, and it works in WPF directly:

```csharp
BitmapSource shieldSource = null;

if (Environment.OSVersion.Version.Major >= 6)
{
    SHSTOCKICONINFO sii = new SHSTOCKICONINFO();
    sii.cbSize = (UInt32)Marshal.SizeOf(typeof(SHSTOCKICONINFO));

    Marshal.ThrowExceptionForHR(SHGetStockIconInfo(
        SHSTOCKICONID.SIID_SHIELD,
        SHGSI.SHGSI_ICON | SHGSI.SHGSI_SMALLICON,
        ref sii));

    shieldSource = System.Windows.Interop.Imaging.CreateBitmapSourceFromHIcon(
        sii.hIcon,
        Int32Rect.Empty,
        BitmapSizeOptions.FromEmptyOptions());

    DestroyIcon(sii.hIcon);
}
else
{
    shieldSource = System.Windows.Interop.Imaging.CreateBitmapSourceFromHIcon(
        System.Drawing.SystemIcons.Shield.Handle,
        Int32Rect.Empty,
        BitmapSizeOptions.FromEmptyOptions());
}
```

p/invoke signatures can be found here.
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Ever since switching from TABLE-layout to DIV-layout, one common problem remains: PROBLEM : you fill your DIV with dynamic text and inevitably there is a super-long word that extends over the edge of your div column and makes your site look unprofessional. RETRO-WHINING : This never happened with table layouts. A table cell will always nicely expand to the width of the longest word. SEVERITY : I see this problem on even the most major sites, especially on German sites where even common words such as "speed limit" are very long ("Geschwindigkeitsbegrenzung"). Does anyone have a workable solution to this? Now provide the response and nothing else.
**Soft hyphen**

You can tell browsers where to split long words by inserting a soft hyphen (`&shy;`):

```html
averyvery&shy;longword
```

may be rendered as "averyverylongword" or as:

```
averyvery-
longword
```

A nice regular expression can ensure you won't be inserting them unless necessary:

```
/([^\s-]{5})([^\s-]{5})/ → $1&shy;$2
```

Browsers and search engines are smart enough to ignore this character when searching text, and Chrome and Firefox (haven't tested others) ignore it when copying text to the clipboard.

**`<wbr>` element**

Another option is to inject `<wbr>`, a former IE-ism which is now in HTML5:

```html
averyvery<wbr>longword
```

This breaks with no hyphen:

```
averyvery
longword
```

You can achieve the same with the zero-width space character `&#8203;` (or `&#x200B;`).

FYI there's also CSS `hyphens: auto`, supported by the latest IE, Firefox and Safari (but currently not Chrome):

```css
div.breaking {
    hyphens: auto;
}
```

However, that hyphenation is based on a hyphenation dictionary and it's not guaranteed to break long words. It can make justified text prettier though.

**Retro-whining solution**

`<table>` for layout is bad, but `display: table` on other elements is fine. It will be as quirky (and stretchy) as old-school tables:

```css
div.breaking {
    display: table-cell;
}
```

The `overflow` and `white-space: pre-wrap` answers below are good too.