stackexchange
llm_judgeable_groundtruth_similarity
32495069
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is essentially the same question as How to short-circuit reduce on Stream? . However, since that question focuses on a Stream of boolean values, and its answer cannot be generalized for other types and reduce operations, I'd like to ask the more general question. How can we make a reduce on a stream so that it short-circuits when it encounters an absorbing element for the reducing operation? The typical mathematical case would be 0 for multiplication. This Stream : int product = IntStream.of(2, 3, 4, 5, 0, 7, 8) .reduce(1, (a, b) -> a * b); will consume the last two elements ( 7 and 8 ) regardless of the fact that once 0 has been encountered the product is known. Now provide the response and nothing else.
Unfortunately the Stream API has limited capabilities to create your own short-circuit operations. A not-so-clean solution would be to throw a RuntimeException and catch it. Here's the implementation for IntStream, but it can be generalized for other stream types as well: public static int reduceWithCancelEx(IntStream stream, int identity, IntBinaryOperator combiner, IntPredicate cancelCondition) { class CancelException extends RuntimeException { private final int val; CancelException(int val) { this.val = val; } } try { return stream.reduce(identity, (a, b) -> { int res = combiner.applyAsInt(a, b); if(cancelCondition.test(res)) throw new CancelException(res); return res; }); } catch (CancelException e) { return e.val; }} Usage example: int product = reduceWithCancelEx( IntStream.of(2, 3, 4, 5, 0, 7, 8).peek(System.out::println), 1, (a, b) -> a * b, val -> val == 0);System.out.println("Result: "+product); Output:
2
3
4
5
0
Result: 0
Note that even though it works with parallel streams, it's not guaranteed that other parallel tasks will be finished as soon as one of them throws an exception. The sub-tasks which are already started will likely run to completion, so you may process more elements than expected. Update: an alternative solution which is much longer, but more parallel-friendly. It's based on a custom spliterator which returns at most one element (the result of accumulating all underlying elements). When you use it in sequential mode, it does all the work in a single tryAdvance call. When you split it, each part generates the corresponding single partial result, and the partial results are reduced by the Stream engine using the combiner function. Here's the generic version, but a primitive specialization is possible as well. final static class CancellableReduceSpliterator<T, A> implements Spliterator<A>, Consumer<T>, Cloneable { private Spliterator<T> source; private final BiFunction<A, ? super T, A> accumulator; private final Predicate<A> cancelPredicate; private final AtomicBoolean cancelled = new AtomicBoolean(); private A acc; CancellableReduceSpliterator(Spliterator<T> source, A identity, BiFunction<A, ? super T, A> accumulator, Predicate<A> cancelPredicate) { this.source = source; this.acc = identity; this.accumulator = accumulator; this.cancelPredicate = cancelPredicate; } @Override public boolean tryAdvance(Consumer<? super A> action) { if (source == null || cancelled.get()) { source = null; return false; } while (!cancelled.get() && source.tryAdvance(this)) { if (cancelPredicate.test(acc)) { cancelled.set(true); break; } } source = null; action.accept(acc); return true; } @Override public void forEachRemaining(Consumer<? super A> action) { tryAdvance(action); } @Override public Spliterator<A> trySplit() { if(source == null || cancelled.get()) { source = null; return null; } Spliterator<T> prefix = source.trySplit(); if (prefix == null) return null; try { @SuppressWarnings("unchecked") CancellableReduceSpliterator<T, A> result = (CancellableReduceSpliterator<T, A>) this.clone(); result.source = prefix; return result; } catch (CloneNotSupportedException e) { throw new InternalError(); } } @Override public long estimateSize() { /* let's pretend we have the same number of elements as the source, so the pipeline engine parallelizes it in the same way */ return source == null ? 0 : source.estimateSize(); } @Override public int characteristics() { return source == null ? SIZED : source.characteristics() & ORDERED; } @Override public void accept(T t) { this.acc = accumulator.apply(this.acc, t); }} Methods which are analogous to Stream.reduce(identity, accumulator, combiner) and Stream.reduce(identity, combiner), but with a cancelPredicate: public static <T, U> U reduceWithCancel(Stream<T> stream, U identity, BiFunction<U, ? super T, U> accumulator, BinaryOperator<U> combiner, Predicate<U> cancelPredicate) { return StreamSupport .stream(new CancellableReduceSpliterator<>(stream.spliterator(), identity, accumulator, cancelPredicate), stream.isParallel()).reduce(combiner) .orElse(identity);}public static <T> T reduceWithCancel(Stream<T> stream, T identity, BinaryOperator<T> combiner, Predicate<T> cancelPredicate) { return reduceWithCancel(stream, identity, combiner, combiner, cancelPredicate);} Let's test both versions and count how many elements are actually processed. Let's put the 0 close to the end. Exception version: AtomicInteger count = new AtomicInteger();int product = reduceWithCancelEx( IntStream.range(-1000000, 100).filter(x -> x == 0 || x % 2 != 0) .parallel().peek(i -> count.incrementAndGet()), 1, (a, b) -> a * b, x -> x == 0);System.out.println("product: " + product + "/count: " + count);Thread.sleep(1000);System.out.println("product: " + product + "/count: " + count); Typical output:
product: 0/count: 281721
product: 0/count: 500001
So while the result is returned when only some elements have been processed, the tasks continue working in the background and the counter keeps increasing. Here's the spliterator version: AtomicInteger count = new AtomicInteger();int product = reduceWithCancel( IntStream.range(-1000000, 100).filter(x -> x == 0 || x % 2 != 0) .parallel().peek(i -> count.incrementAndGet()).boxed(), 1, (a, b) -> a * b, x -> x == 0);System.out.println("product: " + product + "/count: " + count);Thread.sleep(1000);System.out.println("product: " + product + "/count: " + count); Typical output:
product: 0/count: 281353
product: 0/count: 281353
All the tasks are actually finished when the result is returned.
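For readers outside Java, the short-circuiting idea itself is language-neutral. Here is a minimal Python sketch of the same pattern (the function name and signature are my own illustrative choices, not part of any Stream API): fold left-to-right, but stop consuming elements once the accumulator hits the cancel condition.

```python
def reduce_with_cancel(items, identity, combiner, cancel_predicate):
    """Fold items left-to-right, stopping early once cancel_predicate(acc) is true."""
    acc = identity
    for item in items:
        acc = combiner(acc, item)
        if cancel_predicate(acc):
            break  # absorbing element reached; remaining items are never consumed
    return acc

seen = []

def noisy_mul(a, b):
    seen.append(b)  # record which elements were actually combined
    return a * b

product = reduce_with_cancel([2, 3, 4, 5, 0, 7, 8], 1, noisy_mul, lambda v: v == 0)
# product is 0, and 7 and 8 were never touched
```

A plain loop can short-circuit where a generic `reduce` cannot, which is exactly the gap the exception and spliterator tricks above work around in Java.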
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32495069', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1886012/']}
jdg_78641
stackexchange
llm_judgeable_groundtruth_similarity
38205799
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Probably a stupid question but I am new to MVC. So far in my Razor I could say @HTML.TextBoxFor(t => t.EmailAddress) but now I have a for-each: foreach(var q in Model.Questions){ // so here the t => t.EmailAddress syntax is not working anymore.} I asked my question in the code sample above. So when I am inside a for-each loop how can I can use @HTML.TextBox ? because now it doesn't get the lambda syntax anymore. Now provide the response and nothing else.
Do not use foreach , because this will cause problems when you try to bind your inputs back to the model. Instead use a for loop:

for (var i = 0; i < Model.Questions.Count(); i++)
{
    @Html.TextBoxFor(m => m.Questions[i])
}

See also Model Binding to a List MVC 4 .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38205799', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/320724/']}
jdg_78642
stackexchange
llm_judgeable_groundtruth_similarity
1188384
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using SQLBulkCopy to move large amounts of data. I implemented the notification event to notify me every time a certain number of rows have been processed, but the OnSqlRowsCopied event does not fire when the job is completed. How do I get the total number of rows copied when the SQLBulkCopy writetoserver completes? Now provide the response and nothing else.
The following hack (using reflection) is an option:

/// <summary>
/// Helper class to process the SqlBulkCopy class
/// </summary>
static class SqlBulkCopyHelper
{
    static FieldInfo rowsCopiedField = null;

    /// <summary>
    /// Gets the rows copied from the specified SqlBulkCopy object
    /// </summary>
    /// <param name="bulkCopy">The bulk copy.</param>
    /// <returns></returns>
    public static int GetRowsCopied(SqlBulkCopy bulkCopy)
    {
        if (rowsCopiedField == null)
        {
            rowsCopiedField = typeof(SqlBulkCopy).GetField("_rowsCopied",
                BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance);
        }
        return (int)rowsCopiedField.GetValue(bulkCopy);
    }
}

And then use the class as follows:

int rowsCopied = SqlBulkCopyHelper.GetRowsCopied(bulkCopyObjectInYourCode);

Hope this helps.
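The trick above is ordinary private-field reflection, not anything specific to SqlBulkCopy. As a language-neutral illustration (the class and field names here are invented for the sketch), the same pattern in Python means reaching into a name-mangled "private" attribute that the class does not expose:

```python
class BulkCopier:
    """Stand-in for a class that tracks progress in a private field."""

    def __init__(self):
        self.__rows_copied = 0  # name-mangled: stored as _BulkCopier__rows_copied

    def write_rows(self, rows):
        for _ in rows:
            self.__rows_copied += 1


def get_rows_copied(copier):
    # Reach into the "private" counter, mirroring the C# reflection hack.
    return getattr(copier, "_BulkCopier__rows_copied")


bc = BulkCopier()
bc.write_rows(range(42))
```

As with the C# version, this depends on an internal implementation detail and can break whenever the library changes its private field name.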
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1188384', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/94382/']}
jdg_78643
stackexchange
llm_judgeable_groundtruth_similarity
687370
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The #1 rule of sports biomechanics is the conservation of angular momentum. It dictates that whenever an athlete performs an acrobatic jump, the angular momentum that he has created on takeoff is to stay unchanged until he lands. He can control the speed of rotation by expanding or retracting his limbs, but he can't just randomly stop rotating in mid-air and then continue again out of nowhere. For rotations around multiple axes (twists etc), I understand that the conservation of momentum should work for each of individual axes. Now take a look at this jump (starts at 0.51): https://youtu.be/sb82tVOq2dY?t=51 On takeoff, the diver initiates a flip with a twist (a spin around both vertical and horizontal axes at the same time). But then, in the middle of the jump, he somehow kills the vertical component of rotation and converts to a plain frontflip. In another video, you can see the opposite: https://www.youtube.com/watch?v=fwDGrNKiTi8 Here, the athlete initiates a pure frontflip rotation on takeoff. However, before the last flip, he somehow initiates an additional rotation around vertical axes, pulling that 180 in the end seemingly out of nowhere. And I've seen people pulling even 360s like that out of nowhere. So what's going on there? Is it possible for an athlete to initiate or kill angular momentum in mid-air somehow? Or is there some other effect at play? Now provide the response and nothing else.
To explain how orientation can change whilst angular momentum is conserved it is first best to look at a slightly simpler system - a cat in free fall! Here is a series of photographs taken in 1894 which shows a cat turning its body to ensure that it lands on its feet. This gif file illustrates how a cat changes its shape to rotate and yet still conserve angular momentum. Finally here is a video of such an event with the cat suffering no harm. So the key is changing body shape to achieve a rotation whilst conserving angular momentum. This is shown using a selection of stills from the gymnast video. First head on. Arm movements starting in slide $\rm d$ initiate the twisting of the gymnast. From the side. Here is a dive executed in the video referenced by the OP. The diver when on the diving board cannot use it to start a twisting rotation, as that rotation could not be removed towards the end of the dive, and I think that is also against competition rules. The physics of somersaulting and tumbling is explained in an article published in Scientific American. By moving the arms a diver can start and stop a twist. The somersault rotation continues from start to finish, but before entering the water the diver increases the moment of inertia about a horizontal axis by stretching out, thus reducing the speed of rotation. By timing the entry to perfection, and whilst still rotating, the diver enters the water with the smallest horizontal profile. Note the rotation continuing under the water. This slow-motion video of a twisting somersault shows clearly how the arms are used to initiate twisting.
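The "stretch out to slow down, tuck to speed up" control is just conservation of $L = I\omega$. A back-of-the-envelope sketch (the moment-of-inertia numbers below are made up for illustration, not measured values for a diver) shows how the spin rate changes when body shape changes:

```python
# Angular momentum L = I * omega is conserved while the diver is in flight.
I_stretched = 15.0      # kg*m^2, hypothetical stretched (layout) position
omega_stretched = 2.0   # rad/s, spin rate while stretched

L = I_stretched * omega_stretched  # fixed for the whole flight

I_tucked = 5.0                     # kg*m^2, hypothetical tucked position
omega_tucked = L / I_tucked        # spin rate triples when I drops to a third
```

The diver never adds or removes angular momentum in mid-air; only the split between moment of inertia and angular velocity changes.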
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/687370', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/221028/']}
jdg_78644
stackexchange
llm_judgeable_groundtruth_similarity
13263507
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table in OpenERP/PostgreSQL with the following columns: name and description . I added the following validation for unique name: _sql_constraints = [('unique_name', 'unique(name)', 'A record with the same name already exists.')] It works fine but it is case sensitive. Currently, it accepts values such as "Mickey", "MICKEY" and "mickey": Wrong Way:--------------------------| name | description |--------------------------| mickey | not a mouse |--------------------------| MICKEY | not a mouse |--------------------------| Mickey | not a mouse |-------------------------- Is there a way to revise the validation code so that it will not allow users to add several values such as "Mickey", "MICKEY" and "mickey"? How can I make the unique key validation case insensitive? Right Way:--------------------------------| name | description |--------------------------------| mickey | not a mouse |--------------------------------| mickey mouse | is a mouse |--------------------------------| donald | is a duck |-------------------------------- Now provide the response and nothing else.
For case insensitive constraints check out HERE, else you can always use OpenERP constraints instead of SQL. For OpenERP constraints check this example:

def _check_unique_insensitive(self, cr, uid, ids, context=None):
    sr_ids = self.search(cr, 1, [], context=context)
    lst = [
        x.FIELD.lower() for x in self.browse(cr, uid, sr_ids, context=context)
        if x.FIELD and x.id not in ids
    ]
    for self_obj in self.browse(cr, uid, ids, context=context):
        if self_obj.FIELD and self_obj.FIELD.lower() in lst:
            return False
    return True

_constraints = [(_check_unique_insensitive, 'Error: UNIQUE MSG', ['FIELD'])]
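Stripped of the OpenERP ORM calls, the constraint above boils down to a case-folded membership test. A minimal pure-Python sketch of the same check (function name is my own; a real deployment would do this inside the constraint, as above, or with a functional index on lower(name) in PostgreSQL):

```python
def is_name_unique_insensitive(existing_names, new_name):
    """True if new_name does not collide case-insensitively with existing_names."""
    lowered = {n.lower() for n in existing_names if n}
    return new_name.lower() not in lowered


names = ["mickey mouse", "donald"]
```

With this rule in place, "Mickey", "MICKEY" and "mickey" all count as the same name, which matches the "Right Way" table in the question.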
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13263507', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1490928/']}
jdg_78645
stackexchange
llm_judgeable_groundtruth_similarity
9341389
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been reading several tutorials and watching some iTube videos to see how I best prepare my app for Ad Hoc distribution. However the information I got is not really consistent, it goes from the 5 step explanation from the Developer Center of Apple to a almost 3 minutes video, where I would have to set some Code Signing options in XCode, create and zip the Archive etc. My questions: 1.Is there a tutorial available, which explains everything required, but not too much? I somehow feel, for instance, that code signing is not really required. 2.After I tried to install the app using iTunes, the synch claimed that it was not able to install it. Is there some log available in iTunes, so I can check for further error details? 3.How about switching from Ad Hoc distribution mode to normal developer mode? After I followed the steps from the video and prepared my app for Ad Hoc distribution, I was not able to install it on my device anymore, I got the error Build target AbiliatorCheck dependencies[BEROR]CodeSign error: code signing is required for product type 'Application' in SDK 'iOS 5.0' This is the video tutorial on iTube I followed: Demo of iPhone app Ad Hoc distribution Thanks for your feedback and have a great weekend. Now provide the response and nothing else.
OK, I looked up the error "the following job tried to hijack the service..." and saw that a reboot of the iPhone may solve the issue... and guess what, I was chasing a phantom all day long. It works perfectly. So a last hint for anyone having a similar issue after installing an app via Ad Hoc for the first time: reboot the iPhone, otherwise the app may crash. Cheers and now good nite :-) PS: Just to round this thread up, here are the steps I did in the end (as mentioned, I am using XCode 4.2.1):

1. Added my device on the Provisioning Portal (this had been done long ago; however, it is an important step, as the beta testers' device IDs also need to be registered here before you re-create and download the distribution profile)
2. Created and downloaded an iOS Distribution Certificate in the Provisioning Portal
3. Imported the certificate into my Keychain
4. Created an Ad Hoc Provisioning Profile in the Provisioning Portal
5. Downloaded that profile
6. Double-clicked the profile, which added it to the XCode Organizer (Provisioning Profiles section)
7. Added the options Any SDK and Any iOS SDK for the Ad Hoc Distribution Code Signing Identity in the Code Signing section of my project AND my target, and set the values to the Ad Hoc Provisioning Profile I had just created and imported
8. Made sure the option "Application requires iPhone environment" in my app's .plist file was set to true
9. Ran a clean (Product / Clean) and then a build only (Product / Build)
10. Made sure the app was not installed on my iPhone (as I was testing before via XCode)
11. Copied the target (group Products, via Show in Finder) to iTunes via drag and drop (I dropped it into the Media section)
12. Copied the Ad Hoc Provisioning Profile into iTunes via drag and drop (also to the Media section)
13. Made sure the app was checked for the sync, and synced iTunes with the iPhone
14. Lastly, rebooted my iPhone!

However, I am pretty sure that this might occur on the developer's device only. I assume that the reason for it was a process still remaining on the iPhone from my last test via XCode. I did not use an entitlements file. I still have to find out what that file is really for; I have read that it might be required only if using iCloud. And XCode 4.2 developers: there is nothing like File / New / Entitlement Template anymore in XCode. The file is created as soon as you check the Entitlements option and give the file your desired name in the Summary tab of your target. You may also want to download the iPhone Configuration Utility from Apple. I will also recommend this to my beta testers, as it allows you to see the console log of the iPhone, which of course might be very helpful in case of errors. Hope this summary helps others setting up their app for their first Ad Hoc distribution. Cheers, René
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9341389', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/449980/']}
jdg_78646
stackexchange
llm_judgeable_groundtruth_similarity
34347985
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i heard that clojure does not have cons cells as of most lisp languages. does that mean a clojure list does not end with an empty list? could anyone explain what that exactly means? Now provide the response and nothing else.
Lisp provides a primitive cons data structure and a notation for it. See John McCarthy, Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I, 1960, Chapter 3, Recursive Functions of Symbolic Expressions. That chapter introduces:

- symbolic expressions made of atoms and pairs of symbolic expressions, written using a dot notation: ( a . b )
- a list notation to abbreviate certain symbolic expressions: (a b c)
- an atomic symbol nil to terminate lists
- the primitive functions car , cdr , cons , eq and atom
- several other functions: ff , subst , equal , null , cadr , caddr , null , append , among , pair , assoc , sublis , apply , eval , ...

Early on in Lisp, functions to mutate cons cells have been added: rplaca (means replace car ) and rplacd (means replace cdr ). See the LISP 1.5 Programmer's Manual by John McCarthy et al. from 1962. These functions allow us to write destructive functions and also allow us to create cyclic cons-based data structures like cyclic lists.

Common Lisp

Typically Lisp dialects implement most of this. Common Lisp is no exception and for it this functionality is described in the Common Lisp standard: Conses. Examples using the functions mentioned above:

; pair two lists into a list of cons cells.
; the function pair is called pairlis in Common Lisp.
CL-USER 17 > (pairlis '(john mary eva) '(34 29 40))
((EVA . 40) (MARY . 29) (JOHN . 34))

; find a cons cell in a list of cons cells,
; based on the content of the car of those cons cells
CL-USER 18 > (assoc 'eva (pairlis '(john mary eva) '(34 29 40)))
(EVA . 40)

; create a tree out of cons cells and atoms
CL-USER 19 > (cons (cons 10 20) (cons 30 40))
((10 . 20) 30 . 40)

; a cons cell is not an atom
CL-USER 20 > (atom (cons 1 2))
NIL

; a cons cell is not nil
CL-USER 21 > (null (cons 1 2))
NIL

; substitute an item with a new one in a tree
CL-USER 22 > (subst 30                      ; new
                    'bar                    ; old
                    '((10 . 20) . (bar . 40))) ; tree
((10 . 20) 30 . 40) ; also written as ((10 . 20) . (30 . 40))

; substitute several items in a tree, using an assoc list
; to describe the substitutions
CL-USER 23 > (sublis '((a . 10) (d . 40))  ; substitutions
                     '((a . b) . (c . d))) ; tree
((10 . B) C . 40)

Lists are then a special case of symbolic expressions. They are typically written without dots:

CL-USER 24 > '(a . (b . nil))
(A B)

Common Lisp also supports the mutating operations rplaca and rplacd of Lisp 1.5:

CL-USER 25 > (let ((c (cons 0 1)))          ; create a cons
               (print c)                    ; print it
               (print (rplaca c 'foo))      ; replace the car
               (print (rplacd c 'bar))      ; replace the cdr
               (print (eq c (rplaca c 'baz))) ; identical ?
               (values))
(0 . 1)      ; the cons cell
(FOO . 1)    ; car replaced
(FOO . BAR)  ; cdr replaced
T            ; still the same object

Emacs Lisp

Emacs Lisp also implements the above functionality:

ELISP> (sublis '((a . 10) (d . 40)) '((a . b) . (c . d)))
((10 . b) c . 40)

Clojure

Clojure does not support these symbolic expressions as described by John McCarthy. It has no cons cells, no dot notation and does not provide the above interface. For example atom means something completely different in Clojure. cons does not create a cons cell. Lists are not made of cons cells. In Clojure a dot is just another symbol:

user=> (count '(1 . 2))
3

There is a primitive function to construct lists:

user=> (list 1 2 3)
(1 2 3)

The result should be a list:

user=> (list? (list 1 2 3))
true

There is a function called cons:

user=> (cons 0 (list 1 2 3))
(0 1 2 3)

Somehow this is not a list:

user=> (list? (cons 0 (list 1 2 3)))
false

Basically Clojure uses different data structures (-> sequences, logical lists) with its own naming and semantics. Even if names are similar to Lisp names, don't expect that they do the same.

Scheme

The programming language Scheme also provides cons cells similar to the above. It lacks some of the functions, but they can be easily implemented. For example sublis might be implemented like this in Scheme (see initdr.scm):

(define (sublis alist tree)
  (if (pair? tree)
      (cons (sublis alist (car tree))
            (sublis alist (cdr tree)))
      (if (assv tree alist)
          (cdr (assv tree alist))
          tree)))
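For readers without a Lisp handy, McCarthy's cons/car/cdr primitives can be modeled in a few lines of Python tuples. This is a toy sketch of the classic pair representation, not how Clojure (or any production Lisp) implements its sequences:

```python
def cons(a, d):
    """Build a pair - the "cons cell" (a . d)."""
    return (a, d)

def car(cell):
    """First element of the cell."""
    return cell[0]

def cdr(cell):
    """Rest of the cell."""
    return cell[1]

NIL = None  # the list terminator, playing the role of Lisp's nil

# The list (a b) is the nested pair (a . (b . nil)).
lst = cons("a", cons("b", NIL))
```

Seen this way, it is clear why a proper list always "ends with nil" in classic Lisp, and why Clojure's decision to drop cons cells makes its `cons` return a sequence rather than a pair.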
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34347985', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/571828/']}
jdg_78647
stackexchange
llm_judgeable_groundtruth_similarity
742772
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I had a question regarding the BASE version in svn. I know it is the pristine version of an item from the last time update was run and svn diff and status basically run against this version.When does the BASE get updated? Does it get updated to the HEAD when update is run? Is this always true even if there was a conflict during update? What does BASE have incase of a conflict? Does it still have the version from my last update(before the most recent update that caused a conflict) or will it be updated to the HEAD even if there is a conflict? Also when svn commit is run I am assuming the BASE gets updated with my local changes and at this point BASE, HEAD and my local copy all are the same? Provided the commit was successful. Now provide the response and nothing else.
HEAD is the latest revision in the repository. BASE is the last revision you have obtained from the repository. They are the same after a successful commit or update. When you make changes, your files differ from the BASE copies. When you revert, they go back to matching the BASE revision. When you get a conflict, you do not update the repository. Rather, your files are considered to still be "being edited" as if you were making changes. After you resolve conflicts, you have in essence decided what the final files will look like and you then commit them as you would have normally. So, conflicts are like a case of special editing.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/742772', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1984928/']}
jdg_78648
stackexchange
llm_judgeable_groundtruth_similarity
3764072
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Solved: Workable solution: sbi's answer Explanation for what really happens: Hans's answer Explanation for why OpenFile doesn't pass through "DELETE PENDING": Benjamin's answer The Problem: Our software is in large part an interpreter engine for a proprietary scripting language. That scripting language has the ability to create a file, process it, and then delete the file. These are all separate operations, and no file handles are kept open in between these operations. (I.e. during the file creation, a handle is created, used for writing, then closed. During the file processing portion, a separate file handle opens the file, reads from it, and is closed at EOF. And finally , delete uses ::DeleteFile which only has use of a filename, not a file handle at all). Recently we've come to realize that a particular macro (script) fails sometimes to be able to create the file at some random subsequent time (i.e. it succeeds during the first hundred iterations of "create, process, delete", but when it comes back to creating it a hundred and first time, Windows replies "Access Denied"). Looking deeper into the issue, I have written a very simple program that loops over something like this: while (true) { HANDLE hFile = CreateFileA(pszFilename, FILE_ALL_ACCESS, FILE_SHARE_READ, NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL); if (hFile == INVALID_HANDLE_VALUE) return OpenFailed; const DWORD dwWrite = strlen(pszFilename); DWORD dwWritten; if (!WriteFile(hFile, pszFilename, dwWrite, &dwWritten, NULL) || dwWritten != dwWrite) return WriteFailed; if (!CloseHandle(hFile)) return CloseFailed; if (!DeleteFileA(pszFilename)) return DeleteFailed;} As you can see, this is direct to the Win32 API and is pretty darn simple. I create a file, write to it, close the handle, delete it, rinse, repeat... 
But somewhere along the line, I'll get an Access Denied (5) error during the CreateFile() call. Looking at sysinternal's ProcessMonitor, I can see that the underlying issue is that there is a pending delete on the file while I'm trying to create it again. Questions: Is there a way to wait for the delete to complete? Is there a way to detect that a file is pending deletion? We have tried the first option, by simply WaitForSingleObject() on the HFILE. But the HFILE is always closed before the WaitForSingleObject executes, and so WaitForSingleObject always returns WAIT_FAILED. Clearly, trying to wait for the closed handle doesn't work. I could wait on a change notification for the folder that the file exists in. However, that seems like an extremely overhead-intensive kludge to what is a problem only occasionally (to wit: in my tests on my Windows 7 x64 E6600 PC it typically fails on iteration 12000+ -- on other machines, it can happen as soon as iteration 7 or 15 or 56 or never). I have been unable to discern any CreateFile() arguments that would explicitly allow for this ether. No matter what arguments CreateFile has, it really is not okay with opening a file for any access when the file is pending deletion. And since I can see this behavior on both an Windows XP box and on an x64 Windows 7 box, I am quite certain that this is core NTFS behavior "as intended" by Microsoft. So I need a solution that allows the OS to complete the delete before I attempt to proceed, preferably without tying up CPU cycles needlessly, and without the extreme overhead of watching the folder that this file is in (if possible). 1 Yes, this loop returns on a failure to write or a failure to close which leaks, but since this is a simple console test application, the application itself exits, and Windows guarantees that all handles are closed by the OS when a process completes. So no leaks exist here. 
bool DeleteFileNowA(const char * pszFilename)
{
    // Determine the path in which to store the temp filename
    char szPath[MAX_PATH];
    strcpy(szPath, pszFilename);
    PathRemoveFileSpecA(szPath);

    // Generate a guaranteed to be unique temporary filename to house the pending delete
    char szTempName[MAX_PATH];
    if (!GetTempFileNameA(szPath, ".xX", 0, szTempName))
        return false;

    // Move the real file to the dummy filename
    if (!MoveFileExA(pszFilename, szTempName, MOVEFILE_REPLACE_EXISTING))
        return false;

    // Queue the deletion (the OS will delete it when all handles (ours or other processes) close)
    if (!DeleteFileA(szTempName))
        return false;

    return true;
}

Now provide the response and nothing else.
First rename the file to be deleted, and then delete it. Use GetTempFileName() to obtain a unique name, and then use MoveFile() to rename the file. Then delete the renamed file. If the actual deletion is indeed asynchronous and might conflict with the creation of the same file (as your tests seems to indicate), this should solve the problem. Of course, if your analysis is right and file operations happen somewhat asynchronous, this might introduce the problem that you attempt to delete the file before the renaming is done. But then you could always keep trying to delete in a background thread. If Hans is right (and I'm inclined to believe his analysis), then moving might not really help, because you might not be able to actually rename a file that's open by another process. (But then you might, I don't know this.) If that's indeed the case, the only other way I can come up with is "keep trying". You would have to wait for a few milliseconds and retry. Keep a timeout to give up when this doesn't help.
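The rename-then-delete idea translates directly to Python's standard library, which can be handy for prototyping the approach before writing the Win32 version. A hedged sketch (the function name is mine; `os.replace` renames atomically on the same volume, and the delete-pending state described above then attaches to the temp name rather than the name you want to reuse):

```python
import os
import tempfile

def delete_file_now(path):
    """Free up `path` immediately by renaming the file aside, then deleting it."""
    directory = os.path.dirname(path) or "."
    # Reserve a unique sibling name, mirroring GetTempFileName().
    fd, temp_name = tempfile.mkstemp(dir=directory, prefix=".xX")
    os.close(fd)
    os.replace(path, temp_name)  # the original name is free from this point on
    os.remove(temp_name)         # any pending delete now targets the temp name

# demo: create a scratch file and delete it via the rename trick
fd, p = tempfile.mkstemp()
os.close(fd)
delete_file_now(p)
```

The temp file must live in the same directory (same volume) as the original, otherwise the rename degrades into a copy and loses its point.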
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3764072', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/112755/']}
jdg_78649
stackexchange
llm_judgeable_groundtruth_similarity
2749427
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got two entities, one named exercise and the other named workout. I would like to store several different exercises in each workout object. What is the correct logic here? Create an array of exercises in each workout? Now provide the response and nothing else.
You can't create arrays in Core Data. You can simulate them, however, using To-Many Relationships. Add a relationship to your Workout entity called, say, exercises . Then set its Destination to the Exercise entity and check the "To-Many Relationship" checkbox. Note that the exercises will be stored as an NSSet , not an NSArray , so they will not be ordered. If you want them to be in a specific order, I would recommend using the BWOrderedManagedObject subclass.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2749427', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/330286/']}
jdg_78650
stackexchange
llm_judgeable_groundtruth_similarity
29336616
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I came across bizarre eval behavior in Python 3 - local variables aren't picked up when eval is called in a list comprehension. def apply_op(): x, y, z = [0.5, 0.25, 0.75] op = "x,y,z" return [eval(o) for o in op.split(",")]print(apply_op()) It errors in Python 3: ▶ python --versionPython 3.4.3▶ python eval.pyTraceback (most recent call last): File "eval.py", line 7, in <module> print(apply_op()) File "eval.py", line 5, in apply_op return [eval(o) % 1 for o in op.split(",")] File "eval.py", line 5, in <listcomp> return [eval(o) % 1 for o in op.split(",")] File "<string>", line 1, in <module>NameError: name 'x' is not defined And it works fine in Python 2: ▶ python --versionPython 2.7.8▶ python eval.py[0.5, 0.25, 0.75] Moving it outside of the list comprehension removes the problem. def apply_op(): x, y, z = [0.5, 0.25, 0.75] return [eval("x"), eval("y"), eval("z")] Is this intended behavior, or is it a bug? Now provide the response and nothing else.
There is a closed issue in the bug tracker for this: Issue 5242 . The resolution for this bug is won't fix . Some comments from the Issue read: This is expected, and won't easily fix. The reason is that list comprehensions in 3.x use a function namespace "under the hood" (in 2.x, they were implemented like a simple for loop). Because inner functions need to know what names to get from what enclosing namespace, the names referenced in eval() can't come from enclosing functions. They must either be locals or globals. eval() is probably already an hack, there's no need to add another hack to make it work. It's better to just get rid of eval() and find a better way to do what you want to do.
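A minimal sketch of the "find a better way" advice: pass an explicit namespace to eval() so the list comprehension's hidden function scope no longer matters (the variable names mirror the question's example):

```python
def apply_op():
    x, y, z = [0.5, 0.25, 0.75]
    op = "x,y,z"
    # Build the namespace for eval() by hand; the comprehension's
    # hidden function scope is then irrelevant on Python 3.
    scope = {"x": x, "y": y, "z": z}
    return [eval(o, scope) for o in op.split(",")]

print(apply_op())  # [0.5, 0.25, 0.75]
```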
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29336616', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1309324/']}
jdg_78651
stackexchange
llm_judgeable_groundtruth_similarity
930815
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A monkey types each of the 26 letters of the alphabet exactly once (the order is random). What is the probability that the word Hamlet appears somewhere in the string of letters? Progress So far I thought that Hamlet is a 6-letter string and could appear in 21 different spots in the 26-letter string. But what should I do next? That is, what is the number of possible sequences of the other letters and the total probability? Now provide the response and nothing else.
HINT: First count how many possible $26$-letter strings there are. Now, abbreviate Hamlet to H, and use the remaining $20$ letters along with H.
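Carrying the hint through: glue the six letters of HAMLET into one symbol H, leaving 21 items to arrange, and compare with all 26! orderings. A quick sketch of the arithmetic:

```python
from math import factorial
from fractions import Fraction

# 21 items (the HAMLET block plus the other 20 letters) can be ordered
# in 21! ways; all 26 letters can be ordered in 26! ways.
p = Fraction(factorial(21), factorial(26))
print(p)  # 1/7893600, i.e. 1/(26*25*24*23*22)
```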
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/930815', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/175920/']}
jdg_78652
stackexchange
llm_judgeable_groundtruth_similarity
252492
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have an AWS EC2 instance with docker installed, running a default nginx container - docker run -it --rm -d -p 8080:80 --name web nginx . I have an rsyslog setup that successfully captures the auth.log file for the host, so I can capture any login attempts to that machine. However, I'm wondering if there is any way I can capture container login attempts, i.e if someone gains access to the machine and runs docker exec -it web bash . While the container is running, docker logs outputs anything the container is logging to stdout/err. But I haven't found any documentation on container login attempts. Is docker exec the correct way to try "logging in" to the container? Is this something I can feasibly capture? Does it make sense to? When I run docker exec I haven't seen it logged anywhere - host syslog, kernel.log, auth.log, docker logs , nothing at all. So, it doesn't seem like container "logins" are even captured anywhere, and as long as the container is not running with privileged access (as USER root), there is minimal risk. It seems that protecting the host is far more important. More generally, if anyone is in the container poking around, running commands that require root etc., is this logged anywhere on the host? Or do I need to configure rsyslog in the container in order to capture such events. Any insight would be greatly appreciated! Now provide the response and nothing else.
Does this mean my SQL query is safe? Is there any way someone can break my query and view all the records? No, it is not safe. More than being able to view all the records of one table, you can pass in: "'AND(EXISTS(SELECT(1)FROM"SECRET_TABLE"WHERE((username='Admin')AND(password_hash='0123456789ABCDEF'))))AND"RECORD"LIKE'" If you get any output then you know that: There is a table called SECRET_TABLE ; That table has the columns USERNAME and PASSWORD_HASH ; and There is a row where the username is Admin and the password hash is 0123456789ABCDEF . And the passed in expression does not use the -*|%/ characters or any whitespace and results in a valid SQL expression. db<>fiddle here A determined attacker could then use this type of query to pull out data from any table the connected user has access to. Don't use string concatenation to include user input into queries; use prepared statements with parameters. For example: "SELECT * FROM table_name WHERE RECORD LIKE '%' || ? || '%'" And pass your dynamic value in as a bind parameter.
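A minimal sketch of the bind-parameter advice, using sqlite3 purely as a stand-in database; the table name is made up for the example. The point is that the user's string is passed as data, never spliced into the SQL text:

```python
import sqlite3

# In-memory database with an illustrative one-column table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (record TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("alpha",), ("beta",)])

user_input = "lph"  # treated strictly as data, never parsed as SQL
rows = conn.execute(
    "SELECT record FROM t WHERE record LIKE '%' || ? || '%'", (user_input,)
).fetchall()
print(rows)  # [('alpha',)]
```

Even a hostile input such as `"' AND ..."` would simply be matched as a literal substring here.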
{}
{'log_upvote_score': 7, 'links': ['https://security.stackexchange.com/questions/252492', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/256267/']}
jdg_78653
stackexchange
llm_judgeable_groundtruth_similarity
14141007
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to write a program Shellcode.c that accepts as input a text file, which contains bash commands separated by newlines, and executes every command in the text file: for example, the text file will contain: echo Hello Worldmkdir goofy ls I tried this one (just to begin practicing with one of the exec functions): #include <stdio.h>#include <unistd.h>void main() { char *name[3]; name[0] = "echo"; name[1] = "Hello World"; name[2] = NULL; execvp("/bin/sh", name);} I get, in return, echo: Can't open Hello World I'm stuck with the execvp function, where did I go wrong? Now provide the response and nothing else.
You're doing it wrong. The first array index is the name of the program, as explained in the docs : The execv(), execvp(), and execvpe() functions provide an array of pointers to null-terminated strings that represent the argument list available to the new program. The first argument, by convention, should point to the filename associated with the file being executed. The array of pointers must be terminated by a NULL pointer. Also, bash doesn't expect free-form argument like that, you need to tell it you're going to pass commands using the -c option: So, you need: name[0] = "sh";name[1] = "-c";name[2] = "echo hello world";name[3] = NULL;
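The same argv layout can be sanity-checked outside C; here is a sketch in Python using subprocess (assuming a POSIX /bin/sh is available): argv[0] is the program name, then "-c", then the whole command as a single argument.

```python
import subprocess

# Mirrors the corrected argv from the answer:
#   {"sh", "-c", "echo hello world", NULL}
out = subprocess.run(
    ["/bin/sh", "-c", "echo hello world"],
    capture_output=True, text=True, check=True,
).stdout
print(out)  # hello world
```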
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14141007', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1459065/']}
jdg_78654
stackexchange
llm_judgeable_groundtruth_similarity
10324355
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to perform Fast Fourier Transform on a .3gpp audio file. The file contains a small 5-second recording at 44100 Hz from the phone's microphone. Every Java FFT algorithm I can find only takes double[], float[] or Complex[] inputs, for obvious reasons, but I'm reading in the audio file as a byte array, so I'm kind of confused as to where I go from here. The only thing I could find is the answer to a previous question: Android audio FFT to retrieve specific frequency magnitude using audiorecord But I'm unsure as to whether or not this is the correct procedure. Anyone with any insight? Now provide the response and nothing else.
There is no alternative. You have to run a loop and cast each element of the array separately. I do the same thing for shorts that I fft as floats: public static float[] floatMe(short[] pcms) { float[] floaters = new float[pcms.length]; for (int i = 0; i < pcms.length; i++) { floaters[i] = pcms[i]; } return floaters;} EDIT 4/26/2012 based on comments If you really do have 16 bit PCM but have it as a byte[], then you can do this: public static short[] shortMe(byte[] bytes) { short[] out = new short[bytes.length / 2]; // will drop last byte if odd number ByteBuffer bb = ByteBuffer.wrap(bytes); for (int i = 0; i < out.length; i++) { out[i] = bb.getShort(); } return out;} then float[] pcmAsFloats = floatMe(shortMe(bytes)); Unless you are working with a weird and badly designed class that gave you the byte array in the first place, the designers of that class should have packed the bytes to be consistent with the way Java converts bytes (2 at a time) to shorts.
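The byte-pair-to-short step can be illustrated outside Java as well; a small Python sketch of the same unpacking (ByteBuffer defaults to big-endian, which is why the `>` format prefix is used here):

```python
import struct

raw = bytes([0x00, 0x01, 0xFF, 0xFF])  # two big-endian 16-bit PCM samples
count = len(raw) // 2                  # drops a trailing odd byte, as in shortMe
# ">h" is a big-endian signed short, matching ByteBuffer.getShort()
shorts = struct.unpack(">%dh" % count, raw[: count * 2])
floats = [float(s) for s in shorts]
print(floats)  # [1.0, -1.0]
```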
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10324355', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/281434/']}
jdg_78655
stackexchange
llm_judgeable_groundtruth_similarity
10232366
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to check to see if a variable is null or has all empty spaces or is just blank (""). I have the following, but it is not working: var addr;addr = " ";if (!addr) { // pull error } If I do the following, it works: if (addr) {} What I need is something like the C# method String.IsNullOrWhiteSpace(value) . Now provide the response and nothing else.
A non-jQuery solution that more closely mimics IsNullOrWhiteSpace , but to detect null, empty or all-spaces only: function isEmptyOrSpaces(str){ return str === null || str.match(/^ *$/) !== null;} ...then: var addr = ' ';if(isEmptyOrSpaces(addr)){ // error } * EDIT * Please note that op specifically states: I need to check to see if a var is null or has any empty spaces or for that matter just blank . So while yes, "white space" encompasses more than null, spaces or blank my answer is intended to answer op's specific question. This is important because op may NOT want to catch things like tabs, for example.
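For illustration only, the same null-empty-or-spaces test expressed in Python (not part of the original answer); `re.fullmatch(" *", s)` plays the role of the `/^ *$/` regex, and note that, as above, it deliberately does not treat tabs as whitespace:

```python
import re

def is_empty_or_spaces(s):
    # None, empty string, or spaces only -- mirrors the JS version above
    return s is None or re.fullmatch(" *", s) is not None

print(is_empty_or_spaces("   "))  # True
print(is_empty_or_spaces(" a "))  # False
```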
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/10232366', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/996431/']}
jdg_78656
stackexchange
llm_judgeable_groundtruth_similarity
36492
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Output voltage of transformer depends on ratio of winding count on primary and secondary coils, but is there an impact on transformer performance by actual winding count? Say, i want to have 1:2 ratio, i could wind 10:20 or 100:200 windings. In general, more windings - bigger the resistance, inductance and cost. Is there any point in winding more or is winding count kept to absolute minimum? How minimal winding count is determined? Now provide the response and nothing else.
The induced magnetic field is proportional to ampere-turns, that's current times number of turns. Electrical energy is converted to magnetic energy in the core and back to electrical. The core must be big enough to hold that without saturating. For a 100 VA transformer you want to transfer more energy magnetically than for a 10 VA transformer. The 100 VA is larger because it has more turns to build up a stronger field, and also needs a bigger core to avoid it saturating.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/36492', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/9079/']}
jdg_78657
stackexchange
llm_judgeable_groundtruth_similarity
33692296
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I created a folder in order for it to be the main work directory, meaning all the files I create go there, and files I read will be from there. For some reason, after creating the folder, when I try to set it as the working directory I get this message: Error in setwd("~/") : cannot change working directory When I try to create files they are saved somewhere different and I get an error when trying to load them. I used the scan function and got: Error in file(file, "r") : cannot open the connection In addition: Warning message: In file(file, "r") : cannot open file 'ff': No such file or directory Now provide the response and nothing else.
The command setwd("~/") should set your working directory to your home directory. You might be experiencing problems because the OS you are using does not recognise "~/" as your home directory: this might be because of the OS, or it might be because of not having set that as your home directory elsewhere. As you have tagged the post using RStudio: In the bottom right window move the tab over to 'files'. Navigate through there to whichever folder you were planning to use as your working directory. Under 'more' click 'set as working directory' You will now have set the folder as your working directory. Use the command getwd() to get the working directory as it is now set, and save that as a variable string at the top of your script. Then use setwd with that string as the argument, so that each time you run the script you use the same directory. For example at the top of my script I would have: work_dir <- "C:/Users/john.smith/Documents"setwd(work_dir)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33692296', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5556411/']}
jdg_78658
stackexchange
llm_judgeable_groundtruth_similarity
61812597
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working on a Next.js application, and I would like to add a custom HTTP response header to all pages (I don't need it to be configurable per page). Now I was wondering how to do that, without having to set up a custom server. Is this even possible? If so, how? Now provide the response and nothing else.
It's probably not possible without tradeoffs. Next.js has Automatic Static Optimization, so pages that can be statically exported will be exported to plain .html files. And .html files require no code execution on a server so there is no place to add a custom HTTP header. Alternatively, you could add custom HTTP headers on every response with getServerSideProps in _app.js : export async function getServerSideProps(context) { // set HTTP header context.res.setHeader('Content-Type', 'application/json') return { props: {}, // will be passed to the page component as props }} But having getServerSideProps would disable static optimization as all pages will be only server-side rendered. Server-side Rendering
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61812597', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1333873/']}
jdg_78659
stackexchange
llm_judgeable_groundtruth_similarity
15929205
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This doesn't work var name= "#443.selected:first";selectedEntity = $(name).attr('entityId'); This works var name= "li.selected:first"; selectedEntity = $(name).attr('entityId'); selectedEntity is undefined but an element does exist with id="443" class="selected". Why doesn't the first example work? Now provide the response and nothing else.
Assign your date to variable »yourdate«. <f:if condition="{f:format.date(date: '+2 days', format: 'Y-m-d')} < {f:format.date(date: yourdate, format: 'Y-m-d')}"> <f:then> yourdate is smaller than now + 2 days. </f:then> <f:else> yourdate is greater than or equal to now + 2 days. </f:else></f:if>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15929205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/521180/']}
jdg_78660
stackexchange
llm_judgeable_groundtruth_similarity
947618
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Mysql and I was assuming it was better to separate out a users personal information and their login and password into two different tables and then just reference them between the two. Note : To clarify my post, I understand the techniques of securing the password (hash, salt, etc). I just know that if I'm following practices from other parts of my life (investing, data backup, even personal storage) that in the worst case scenario (comprised table or fire) that having information split among tables provides the potential to protect your additional data. Now provide the response and nothing else.
Don't store passwords. If it's ever sitting on a disk, it can be stolen. Instead, store password hashes. Use the right hashing algorithm , like bcrypt (which includes a salt). EDIT : The OP has responded that he understands the above issue. There's no need to store the password in a physically different table from the login. If one database table is compromised, it's not a large leap to access another table in that same database. If you're sufficiently concerned about security and security-in-depth, you might consider storing the user credentials in a completely separate data store from your domain data. One approach, commonly done, is to store credentials in an LDAP directory server. This might also help with any single-sign-on work you do later.
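The answer recommends bcrypt; purely to show the shape of "store a salt plus a slow hash, never the password", here is a standard-library sketch using PBKDF2. The iteration count and parameters are illustrative, not a tuned recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password):
    salt = os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
print(verify("s3cret", salt, digest))  # True
print(verify("wrong", salt, digest))   # False
```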
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/947618', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/105008/']}
jdg_78661
stackexchange
llm_judgeable_groundtruth_similarity
49017173
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have written an example of Go code which is sends query to postgres and send result to the pager: package mainimport ( "fmt" "database/sql" _ "github.com/lib/pq" "log" "os/exec" "strings" "os")func main() { connstr := "user=postgres dbname=postgres sslmode=disable" db, err := sql.Open("postgres", connstr) if err != nil { log.Fatal(err) } rows, err := db.Query("SELECT schemaname, relname, seq_scan FROM pg_stat_all_tables ORDER BY 1 LIMIT 10") if err != nil { log.Fatal(err) } defer rows.Close() var buf string for rows.Next() { var s, r string var ss int if err := rows.Scan(&s, &r, &ss); err != nil { log.Fatal(err) } buf = fmt.Sprintf("%s %s %d\n", buf + s, r, ss) } cmd := exec.Command("less") cmd.Stdin = strings.NewReader(buf) cmd.Stdout = os.Stdout err = cmd.Run() if err != nil { log.Fatal(err) }} But the following line: buf = fmt.Sprintf("%s %s %d\n", buf + s, r, ss) looks rude for me and I'm not sure this is a right way. Is there way to achieve result in more elegant way? May be it's possible with some kind of buffers and io.Readers? UPD. When I asked this question a few year ago, I was a Golang newbie. Now, with current experience I see no problems with this code and the question should be closed. Now provide the response and nothing else.
Change this: <android.support.v7.widget.AppCompatTextView android:autoSizeTextType="uniform" .../> to this: <TextView app:autoSizeTextType="uniform" .../> There's no need to reference AppCompatTextView directly. The LayoutInflater used by the support library will automatically inject AppCompat widgets in place of standard widgets. Additionally, to get auto-sizing working pre-API-26, you need to be using the support library implementation, which means you need to be using the AppCompat attributes (with the app: namespace) instead of the platform attributes (with the android: namespace).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49017173', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1367644/']}
jdg_78662
stackexchange
llm_judgeable_groundtruth_similarity
473187
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Imaging the light racing out from distant sun, as beam of light shoots aways is a circular pattern (spherical actually), remembering that, light comes in photons or packets of energy.so how come is that we do not see "gaps" in the light coming from distant stars as these "rays" should have gaps that are getting farther apart as distance grows Now provide the response and nothing else.
You are right that single photon detection is a discrete event. But you are under the false assumption that these "rays" are discretely distributed. Ideally, a photon would have an equal probability of being emitted through any solid angle out of the star. i.e. it is a uniform probability distribution with respect to the solid angle. There aren't single rays that are evenly distributed around the star that the photons travel along. For a water analogy, it is not like the star is a spherical shower head where photons can only be released from discrete locations. So, even if you might have a different random distribution of photon detection events at different angular locations relative to the star, you will still always see photons (this is neglecting stars that are so far away from us that their light never reaches us due to the expanding universe). Of course, if you are far enough away you will experience fewer and fewer photons. However this is not limited to certain "rays". This will be true at any angle at a large enough distance.
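To put a rough number on "fewer and fewer photons, but never discrete rays", here is a back-of-envelope sketch. All figures are assumptions for illustration: a Sun-like star emitting on the order of 1e45 visible photons per second, viewed from 10 light-years through a 7 mm dark-adapted pupil.

```python
import math

photons_per_second = 1e45                 # assumed visible-photon output
distance = 10 * 9.46e15                   # 10 light-years in metres
pupil_area = math.pi * (3.5e-3) ** 2      # 7 mm diameter pupil

# Photons spread uniformly over a sphere of radius `distance`
flux = photons_per_second / (4 * math.pi * distance ** 2)
rate_into_eye = flux * pupil_area
print(f"{rate_into_eye:.1e} photons per second")  # on the order of 1e5
```

Hundreds of thousands of detection events per second is far too many for any "gaps" to be perceptible, even at interstellar distances.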
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/473187', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/227480/']}
jdg_78663
stackexchange
llm_judgeable_groundtruth_similarity
1632005
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The default lower-alpha list type for ordered list uses a dot '.'. Is there a way to use a right parenthesis instead like a)... b) ..etc? Now provide the response and nothing else.
Here's a neat solution. (Honestly, I surprised myself with this.) CSS has something called counters , where you can set, for example, automatic chapter numbers on each heading. A bit of modification gives you the below; you'll need to sort out padding etc. yourself. ol { counter-reset: list;}ol > li { list-style: none;}ol > li:before { content: counter(list, lower-alpha) ") "; counter-increment: list;} <span>custom list style type (v1):</span><ol> <li>Number 1</li> <li>Number 2</li> <li>Number 3</li> <li>Number 4</li> <li>Number 5</li> <li>Number 6</li></ol> Works in all modern browsers and IE9+ (and possibly IE8, but it may be buggy there). Update: I added a child selector to prevent nested lists picking up the parent style. trejder also brings up a good point in the comments that the list item alignment is also messed up. An article on 456bereastreet has a good solution which involves absolutely positioning the counter. ol { counter-reset: list;}ol > li { list-style: none; position: relative;}ol > li:before { counter-increment: list; content: counter(list, lower-alpha) ") "; position: absolute; left: -1.4em;} <span>custom list style type (v2):</span><ol> <li>Number 1</li> <li>Number 2</li> <li>Number 3</li> <li>Number 4</li> <li>Number 5</li> <li>Number 6</li></ol> Here is a jsFiddle showing the result, including nested lists.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1632005', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/129001/']}
jdg_78664
stackexchange
llm_judgeable_groundtruth_similarity
2929175
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When should I prefer either a static or a normal class? Or: what is the difference between them? using System;using System.Collections.Generic;using System.Linq;using System.Text;namespace staticmethodlar{ class Program { static void Main(string[] args) { SinifA.method1(); } } static class SinifA { public static void method1() { Console.WriteLine("Deneme1"); } } public static class SinifB { public static void method2() { Console.WriteLine("Deneme2"); } } public class sinifC { public void method3() { Console.WriteLine("Deneme3"); } } public class sinifD : sinifC { void method4() { Console.WriteLine("Deneme4"); } sinifC sinifc = new sinifC(); // I need to use it :) }} Now provide the response and nothing else.
Static classes contain static objects that can't be instantiated multiple times. I usually use static classes to house static methods that provide calculations, general processing patterns, string output formats, etc. Static classes are lightweight and don't need instantiation. For instance System.IO.File is a static class with a static method Exists() . You don't create a File object to call it. You invoke it like this System.IO.File.Exists(filePath) Rather than doing this System.IO.File myFile = new System.IO.File(filePath); if(myFile.Exists()) { /* do work */ } If you require several objects in software, then you use dynamic classes. For instance if you have an inventory system you may have several Product objects and in that case you would use a dynamic class such as this public class Product{ public int ProductID { get; private set; } public string ProductName { get; private set; } public int Qty { get; set; } public Product( int productID, string productName, int total ) { this.ProductID = productID; this.ProductName = productName; this.Qty = total; } }
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2929175', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/52420/']}
jdg_78665
stackexchange
llm_judgeable_groundtruth_similarity
32608969
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: TL;DR - How do I check whether one-of and all-of a nested array meet specified criteria? I have a document . Each document has an array of nested outer objects, who themselves have a list of nested inner objects. I need to perform a filter for all documents where at least one of the document's outer nested objects match. When I say match, I mean that all the outer nested objects' inner objects match in some way. Here's an example mapping for reference; { "document" : { "properties" : { "name" : { "type" : "string" }, "outer" : { "type" : "nested", "properties" : { "inner" : { "type" : "nested", "properties" : { "match" : { "type" : "string", "index" : "not_analyzed" }, "type" : { "type" : "string", "index" : "not_analyzed" } }}}}}}} If the document has no outer / inner objects it is considered to match. But to make things worse the inner objects need to be considered to match differently depending on the type in a kind of conditional logic manner (eg CASE in SQL). For example, if the type were the term "Country" then inner object would be considered to match if the match were a specified country code such as ES . A document may have inner objects of varying type and there is not guarantee that specific types will exist. Coming from a imperative (Java) programming background I am having incredible trouble figuring out how to implement this kind of filtering. Nothing I can think of even vaguely matches this behaviour. Thus far all I have is the filtered query; "filtered" : { "query" : { "match_all" : { } }, "filter" : { "bool" : { "should" : { "missing" : { "field" : "outer.inner.type" } }}}}} So, the question is... How can I filter to documents who have at least one outer object which has all inner objects matching based on the type of inner object? 
Further details By Request - Example Document JSON { "name":"First", "outer":[ { "inner":[ {"match":"ES","type":"Country"}, {"match":"Elite","type":"Market"} ] },{ "inner":[ {"match":"GBR","type":"Country"}, {"match":"1st Class","type":"Market"}, {"match":"Admin","type":"Role"} ] } ], "lockVersion":0,"sourceId":"1"} The above example should come through the filter if we were to provide the "1st Class" market and the country "GBR", because the second of the two outer objects would be considered a match, as both inner objects match. If, however, we provided the country "GBR" and the market "Elite", then we would not have this document returned, because neither of the outer objects would have both of their inner objects match in their entirety. If we wanted the second outer object to match, then all three inner objects would need to match. Take note that there is an extra type in the third inner object. This leads to a situation where, if a type exists, then it needs to have a match for it; otherwise it doesn't need to match, because it is absent.
One of Nested Array Having one of a nested array matching some criteria turns out to be very simple. A nested filter evaluates to matching/true if any of the array of nested objects match the specified inner filters. For example, given an array of outer objects where one of those objects has a field match with the value "matching" the following would be considered true. "nested": { "path": "outer", "filter": { "term" : { "match" : "matching" } }} The above will be considered true/matching if one of the nested outer objects has a field called match with the value "matching" . All of Nested Array Having a nested filter only be considered matching if all of the nested objects in an array match is more interesting. In fact, it's impossible. But given that it is considered matching if only one of the nested objects match a filter we can reverse the logic and say "If none of the nested objects don't match" to achieve what we need. For example, given an array of nested outer.inner objects where all of those objects has a field match with the value "matching" the following would be considered true. "not" : { "nested": { "path": "outer.inner", "filter": { "not" : { "term" : { "match" : "matching" } } } }} The above will be considered true/matching because none of the nested outer.inner objects don't (double negative) have a field called match with the value "matching" . This, of course, is the same as all of the nested inner objects having a field match with the value "matching" . Missing Any Nested Objects You can't check whether a field containing nested objects is missing using the traditional missing filter . This is because nested objects aren't actually in the document at all, they are stored somewhere else. As such missing filters will always be considered true. 
What you can do however, is check that a match_all filter returns no results like so; "not": { "nested": { "path": "outer", "filter": { "match_all": {} } } } This is considered true/matching if match_all finds no results.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32608969', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2182928/']}
jdg_78666
stackexchange
llm_judgeable_groundtruth_similarity
541882
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Q) An insect crawls up a hemispherical surface very slowly. The coefficient of friction between the surface and the insect is $\mu$. If the line joining the centre of the hemispherical surface to the insect makes an angle $\alpha$ with the vertical, find the maximum possible value of $\alpha$. With the force method, the solution can be found: at the limiting angle the frictional force balances the tangential component of gravity. Therefore, $$\mu mg\cos\alpha=mg \sin\alpha$$ $$\implies \cot \alpha=1/\mu$$ However, when I tried to do this by energy conservation, equating the total frictional work with the change in potential energy, the answer was different. Let $\theta$ be the angle covered by it and $d\theta$ be a small angle covered by it. $$mgr(1-\cos\alpha)=\int_0^\alpha \mu (mg\cos\theta )\, r\, d\theta$$ $$mgr(1-\cos\alpha)=\mu mgr \sin\alpha$$ $$2\sin^2\frac{\alpha}{2}=\mu 2\sin\frac{\alpha}{2}\cos\frac{\alpha}{2}$$ $$\cot\frac{\alpha}{2}=1/\mu$$ Why is the answer different if I use force or if I use energy conservation? Now provide the response and nothing else.
First, I am assuming that there is no kinetic friction acting on the insect as it moves up the bowl. If kinetic friction were involved, you would have energy dissipation, but I will not consider that here. Your mistake is in assuming that the static friction force is equal to its maximum value during the entire process. $\mu N$ only determines the maximum magnitude the static friction force can have before slipping occurs; it is not, in general, the actual magnitude of the static friction force. Before slipping, the static friction force is just equal to the force needed to prevent slipping, i.e. $mg\sin\theta$. Doing this correctly, you will then see that the integral gives you a true expression, but it won't help you find where the insect slips: the resulting identity holds for every angle $\alpha$ before slipping occurs, so it carries no information about when the static friction force fails. In other words, the energy balance holds at every angle up to slipping, not just at the slipping angle, so energy conservation won't help you solve this problem. Also, technically the static friction force can't do work because the point of contact between the insect and the bowl doesn't move as the force is being applied, but that point isn't important here, as the (correct) integral will still give the work done by the insect's legs on the rest of the insect, even if the physical interpretation isn't correct.
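To make the bookkeeping concrete, here is a sketch of the corrected energy statement under the no-slipping assumption, using the equilibrium value of static friction rather than its ceiling:

```latex
% With no slipping, static friction on the insect is f(\theta) = mg\sin\theta,
% not its maximum \mu mg\cos\theta. The work-energy statement then reads
\int_0^{\alpha} (mg\sin\theta)\, r\, d\theta \;=\; mgr(1-\cos\alpha),
% an identity for every \alpha up to the slipping angle, so it cannot
% determine where slipping starts. That comes only from the inequality
mg\sin\alpha \;\le\; \mu\, mg\cos\alpha \;\Longrightarrow\; \tan\alpha \le \mu ,
% which reproduces the force-method result \cot\alpha = 1/\mu at the maximum.
```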
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/541882', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/236133/']}
jdg_78667
stackexchange
llm_judgeable_groundtruth_similarity
7581654
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing a little program in C++11 and really use exceptions for one of the first time. I've got a question about how to catch the exceptions efficiently, and after some googling I still don't have the answer. Here is the question :What is the more efficient (or recommended) between catching the exception by (const?) lvalue reference, or by (const?) rvalue reference? In code this give : 1) try { throw std::exception{"what"}; }catch (std::exception& ex) {} 2) try { throw std::exception{"what"}; }catch (const std::exception& ex) {} 3) try { throw std::exception{"what"}; }catch (std::exception&& ex) {} 4) try { throw std::exception{"what"}; }catch (const std::exception&& ex) {} Now provide the response and nothing else.
You should catch by const lvalue reference (2): try { throw std::exception{"what"}; }catch (const std::exception& ex) {} Rationale: In C++11 it is possible (via use of shared_future ) that two threads could be unwinding the same exception at the same time. This can happen in your code even if you are not aware of shared_future being used, unless you control the entire application. If two threads are caught unwinding the same exception simultaneously, and one or both of the threads modifies the exception, then you've got a race condition. So as long as you don't have to modify the exception object in the catch clause, let the compiler enforce that policy for you - catch by const& . If you really do need to modify the exception, then make a copy of it, modify the copy and throw the copy. You can do this by catching by value if you are sure this won't slice your exception object (which is not usually the case if you are catching std::exception ).
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7581654', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/610351/']}
jdg_78668
stackexchange
llm_judgeable_groundtruth_similarity
421793
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How might I search two directories for files with the same name, size,type...and remove them from one of those directories? Now provide the response and nothing else.
Using fdupes : fdupes --delete dir1 dir2 fdupes will not test on filename or file type, but will test on file size and contents (which implicitly includes file type). Example: $ mkdir dir1 dir2$ touch dir{1,2}/{a,b,c}$ tree.|-- dir1| |-- a| |-- b| `-- c`-- dir2 |-- a |-- b `-- c2 directories, 6 files$ fdupes --delete dir1 dir2[1] dir1/a[2] dir1/b[3] dir1/c[4] dir2/a[5] dir2/b[6] dir2/cSet 1 of 1, preserve files [1 - 6, all]: 1 [+] dir1/a [-] dir1/b [-] dir1/c [-] dir2/a [-] dir2/b [-] dir2/c$ tree.|-- dir1| `-- a`-- dir22 directories, 1 file
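If fdupes isn't available, the same by-content comparison can be sketched with standard tools. This assumes GNU coreutils (md5sum, xargs -r) and filenames without spaces; it keeps the first copy of each checksum in sort order and deletes the rest:

```shell
# set up a small demo tree (dir1/a and dir2/a have identical content)
mkdir -p dir1 dir2
printf 'same\n'   > dir1/a
printf 'same\n'   > dir2/a
printf 'unique\n' > dir2/b

# checksum every file, then delete later duplicates of any checksum
find dir1 dir2 -type f -exec md5sum {} + | sort |
  awk 'seen[$1]++ { print $2 }' | xargs -r rm --

find dir1 dir2 -type f | sort   # dir1/a and dir2/b survive
```

Unlike fdupes, this is non-interactive, so double-check which directory sorts first before running it on real data.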
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/421793', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/273564/']}
jdg_78669
stackexchange
llm_judgeable_groundtruth_similarity
18351921
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i have the following problem: I started to create a form with HTML an JS and there are two Dropdowns (Country and City). now i want to make these two dynamic with JQuery so that only the cities of the selected countries are visible. I've started with some basic JS which worked fine but makes some trouble in IE. Now i'm trying to convert my JS to JQuery for a better compatibility. My original JS looks like this: function populate(s1, s2) { var s1 = document.getElementById(s1); var s2 = document.getElementById(s2); s2.innerHTML = ""; if (s1.value == "Germany") { var optionArray = ["|", "magdeburg|Magdeburg", "duesseldorf|Duesseldorf", "leinfelden-echterdingen|Leinfelden-Echterdingen", "eschborn|Eschborn"]; } else if (s1.value == "Hungary") { var optionArray = ["|", "pecs|Pecs", "budapest|Budapest", "debrecen|Debrecen"]; } else if (s1.value == "Russia") { var optionArray = ["|", "st. petersburg|St. Petersburg"]; } else if (s1.value == "South Africa") { var optionArray = ["|", "midrand|Midrand"]; } else if (s1.value == "USA") { var optionArray = ["|", "downers grove|Downers Grove"]; } else if (s1.value == "Mexico") { var optionArray = ["|", "puebla|Puebla"]; } else if (s1.value == "China") { var optionArray = ["|", "beijing|Beijing"]; } else if (s1.value == "Spain") { var optionArray = ["|", "barcelona|Barcelona"]; } for (var option in optionArray) { var pair = optionArray[option].split("|"); var newOption = document.createElement("option"); newOption.value = pair[0]; newOption.innerHTML = pair[1]; s2.options.add(newOption); }}; and here my Jquery: http://jsfiddle.net/HvXSz/ i know it is very simple but i can't see the wood for the trees. Now provide the response and nothing else.
It should be as simple as

jQuery(function($) {
    var locations = {
        'Germany': ['Duesseldorf', 'Leinfelden-Echterdingen', 'Eschborn'],
        'Spain': ['Barcelona'],
        'Hungary': ['Pecs'],
        'USA': ['Downers Grove'],
        'Mexico': ['Puebla'],
        'South Africa': ['Midrand'],
        'China': ['Beijing'],
        'Russia': ['St. Petersburg']
    };
    var $locations = $('#location');
    $('#country').change(function () {
        var country = $(this).val(),
            lcns = locations[country] || [];
        var html = $.map(lcns, function (lcn) {
            return '<option value="' + lcn + '">' + lcn + '</option>';
        }).join('');
        $locations.html(html);
    });
});

Demo: Fiddle
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18351921', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2609756/']}
jdg_78670
stackexchange
llm_judgeable_groundtruth_similarity
58774
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: (In the setting of number fields and algebraic integers) If $(a),(b)$ are two principal ideals then $(a)+(b)$ corresponds to $(\gcd(a,b))$, so while the natural definition of addition for ideals has a pleasing meaning, it does not correspond to addition of numbers. My question is whether there is a "natural" operation $\oplus$ on ideals (one that can be generalized to non-principal ideals) such that $(a)\oplus(b)=(a+b)$? From what I've read I've come under the impression that there is no such operation, but I'd like to make sure I'm not missing anything - and even better, I'd like to hear an explanation as to why such an operator is not likely to exist. Now provide the response and nothing else.
It's not well-defined, even for $\mathbb{Z}$: because $(a)=(-a)$ for any $a\in\mathbb{Z}$, we have$$(a)\oplus(a)=(a+a)=(2a)$$and $$(a)\oplus(a)=(a)\oplus(-a)=(a+(-a))=(0)$$which is impossible if $a\neq0$. The same problem occurs for any ring not of characteristic 2. Here's an analogy to consider, maybe it will clarify why we wouldn't expect this to work: Suppose we want to try to make an operation $\star$ on subspaces of a vector space $V$ such that for any $v,w\in V$, we have$$\text{span}(v)\star\text{span}(w)=\text{span}(v+w).$$There's a problem because $\text{span}(w)=\text{span}(tw)$ for any $t\neq 0$, while $\text{span}(v+w)\neq\text{span}(v+tw)$ in general (specifically, when $v$ and $w$ are linearly independent). Note that the ideals of a ring $R$ are precisely the $R$-submodules of $R$ (hence the analogy with subspaces of a vector space).
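The obstruction can be checked mechanically. In $\mathbb{Z}$, the ideal $(n)$ is determined by $|n|$, so any well-defined operation on principal ideals may only depend on $|a|$ and $|b|$; the proposed rule visibly does not (a small sketch, with `ideal` standing for "canonical generator of the ideal"):

```python
def ideal(n):
    # canonical (non-negative) generator of the ideal (n) in Z
    return abs(n)

a = 3
same = ideal(a) == ideal(-a)   # (a) and (-a) are the SAME ideal
r1 = ideal(a + a)              # rule applied to representatives a, a   -> 6
r2 = ideal(a + (-a))           # rule applied to representatives a, -a  -> 0
print(same, r1, r2)            # True 6 0 : the "operation" is ill-defined
```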
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/58774', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1818/']}
jdg_78671
stackexchange
llm_judgeable_groundtruth_similarity
2480713
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a simple way I can exclude nulls from affecting the avg? They appear to count as 0, which is not what I want. I simply don't want to take their average into account, yet here is the catch, I can't drop them from the result set, as that record has data on it that I do need. Update: example: select avg(col1+col2), count(col3) from table1wheregroup by SomeArbitraryColhaving avg(col1+col2) < 500 and count(col3) > 3order by avgcol1+col2) asc; This would be working for me, but the averages aren't accurate as they are counting null values as 0, which is really throwing off the whole average. Now provide the response and nothing else.
Aggregate functions (SUM, AVG, COUNT, etc) in SQL always automatically exclude NULL. So SUM(col) / COUNT(col) = AVG(col) - this is great and consistent. The special case of COUNT(*) counts every row. If you make up an expression with NULLs: A + B where either A or B is NULL, then A + B will be NULL regardless of whether the other column is NULL. When there are NULLs, in general, AVG(A + B) <> AVG(A) + AVG(B), and they will likely have different denominators, too. You would have to wrap the columns: AVG(COALESCE(A, 0) + COALESCE(B, 0)) to solve that, but perhaps also exclude the rows where both A and B are NULL, so an all-NULL row doesn't count as 0. Based on your code, I would suggest:

select avg(coalesce(col1, 0) + coalesce(col2, 0)), count(col3)
from table1
where coalesce(col1, col2) is not null -- double nulls are eliminated
group by SomeArbitraryCol
having avg(coalesce(col1, 0) + coalesce(col2, 0)) < 500 and count(col3) > 3
order by avg(coalesce(col1, 0) + coalesce(col2, 0)) asc;
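The NULL-skipping behavior is easy to verify with SQLite from the Python standard library; the table and values below are made up purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a REAL, b REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1.0, 10.0), (3.0, None), (None, None)])

# AVG(a) skips the NULL row: (1 + 3) / 2 = 2.0
avg_a = conn.execute("SELECT AVG(a) FROM t").fetchone()[0]

# AVG(a + b): a + b is NULL whenever either side is NULL,
# so only the first row counts: 11 / 1 = 11.0
avg_sum = conn.execute("SELECT AVG(a + b) FROM t").fetchone()[0]

# COALESCE treats NULL as 0, but now the all-NULL row drags the average
# down: (11 + 3 + 0) / 3 -- hence the WHERE clause in the suggested query
avg_coalesced = conn.execute(
    "SELECT AVG(COALESCE(a,0) + COALESCE(b,0)) FROM t").fetchone()[0]

print(avg_a, avg_sum, avg_coalesced)  # 2.0 11.0 4.666...
```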
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2480713', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17675/']}
jdg_78672
stackexchange
llm_judgeable_groundtruth_similarity
40183188
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I had to parse the scientific notation that was a result of implementing logarithmic scale to my graph but it printed out each value for each line in the graph. No method of steps seems to be working. RESULTING CHART IMG var packetsElement = $("#packetsGraph"); pckBarChart = new Chart(packetsElement, { type: 'bar', data: { labels: ["Received", "Errors", "Lost"], datasets: [{ label: '# of Packets', data: packetsArr, backgroundColor: [ 'rgba(55,102,245,0.3)', 'rgba(55,102,245,0.2)', 'rgba(55,102,245,0.1)' ], borderColor: [ '#3766F5', '#3766F5', '#3766F5'], borderWidth: 1 }] }, options: { responsive: true, maintainAspectRatio: false, title: { display: true, text: 'Packets', fontSize: 20 }, scales: { yAxes: [{ type: 'logarithmic', ticks: { min: 1, stepSize: 1000000, steps: 1000000, stepValue: 1000000, callback: function(value, index, values) { return parseFloat(value); } } }] } } }); Now provide the response and nothing else.
I figured it out, here's what I did in case any one else needs it: var packetsElement = $("#packetsGraph"); pckBarChart = new Chart(packetsElement, { type: 'bar', data: { labels: ["Received", "Errors", "Lost"], datasets: [{ label: '% of Packets (Logarithmic)', data: packetsArr, backgroundColor: [ 'rgba(55,102,245,0.3)', 'rgba(55,102,245,0.2)', 'rgba(55,102,245,0.1)' ], borderColor: [ '#3766F5', '#3766F5', '#3766F5'], borderWidth: 1 }] }, options: { responsive: true, maintainAspectRatio: false, title: { display: true, text: 'Packets', fontSize: 20 }, scales: { yAxes: [{ type: 'logarithmic', ticks: { min: 0, max: 100, callback: function(value, index, values) {//needed to change the scientific notation results from using logarithmic scale return Number(value.toString());//pass tick values as a string into Number function } }, afterBuildTicks: function(pckBarChart) { pckBarChart.ticks = []; pckBarChart.ticks.push(0); pckBarChart.ticks.push(25); pckBarChart.ticks.push(50); pckBarChart.ticks.push(75); pckBarChart.ticks.push(100); } }] }, tooltips: { enabled: true, mode: 'single', callbacks: { label: function(tooltipItems, data) { return tooltipItems.yLabel + ' %'; } } }, } });
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40183188', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7054570/']}
jdg_78673
stackexchange
llm_judgeable_groundtruth_similarity
64613
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have read some text about the eye pattern (or eye diagram), but I do not understand how should I read it. The wikipedia definition of it is this one: In telecommunication, an eye pattern, also known as an eye diagram, is an oscilloscope display in which a digital signal from a receiver is repetitively sampled and applied to the vertical input, while the data rate is used to trigger the horizontal sweep. It is so called because, for several types of coding, the pattern looks like a series of eyes between a pair of rails. It is a tool for the evaluation of the combined effects of channel noise and intersymbol interference on the performance of a baseband pulse-transmission system. It is the synchronised superposition of all possible realisations of the signal of interest viewed within a particular signaling interval. So my first question is: how does an ideal eye pattern look like? My second question is about how should I read it, and I think that in order to understand this I should know how it is built. For instance this page shows it in this way: According to wikipedia, it is sufficient to make the superposition of all the signals represented in picture. I understand that the resulting signal is that shown in the last graph. But, how should I read it? Which are the advantages of analyzing it instead of the single signals properties (edges etc)? Now provide the response and nothing else.
It is the objective of the receiver to make the best estimate for each symbol as to what was transmitted. This is often done by determining a decision time in each symbol interval (through timing recovery) at which to sample the waveform, after it has been processed by the receiver (equalization and matched filtering), and make a decision as to what symbol was transmitted. The eye diagram superimposes the waveform for each symbol at the point in the receiver just prior to decision. The eye diagram readily shows the ability to make a decision between signal levels; so the ideal eye would show that there is a lot of margin in both the vertical and horizontal axes to allow for a minimum error rate (what we would call a wide eye opening). As the opening narrows, it shows an increased sensitivity to noise, since any added noise would cause a value to cross a decision threshold, resulting in an error (this could be timing jitter, which closes the eye in the horizontal direction, amplitude noise, which closes it in the vertical direction, or intersymbol interference, which affects both directions). I have pasted below a graphic that I created for one of my courses that may be useful in answering your question. This particular eye diagram is over two symbol durations. The waveform directly below the eye-diagram plot is replicated in the eye diagram as the darker blue lines, amidst the rest of the trajectories of the waveform from symbol to symbol over a much longer duration, as given in the lighter blue lines:
Here we clearly see the utility of the eye-diagram in that we see due to the proper filtering we have significantly increased our margin against noise. This would be similar to using the eye diagram to assess the effectiveness of equalization or to determine if certain filtering in our receiver is degrading our ability to obtain a low error rate due to the eye closing.
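The folding operation that builds an eye diagram can be sketched in a few lines. The "pulse shaping" below is a crude linear interpolation, a stand-in for real matched filtering, and all parameters are illustrative; overlaying the resulting traces (e.g. plotting them column-wise) produces the eye:

```python
import random

random.seed(0)
sps = 16     # samples per symbol
nsym = 200
symbols = [random.choice([-1.0, 1.0]) for _ in range(nsym)]

# crude smoothed waveform: linear interpolation between symbol values
# (a placeholder for proper pulse shaping / matched filtering)
waveform = []
for k in range(nsym * sps):
    i = k // sps
    frac = (k % sps) / sps
    nxt = symbols[i + 1] if i + 1 < nsym else symbols[i]
    waveform.append(symbols[i] * (1 - frac) + nxt * frac)

# fold into overlapping 2-symbol-wide traces: their overlay IS the eye diagram
span = 2 * sps
traces = [waveform[i * sps : i * sps + span]
          for i in range((len(waveform) - span) // sps)]

print(len(traces), len(traces[0]))  # 198 32
```

Adding noise to `waveform` before folding would visibly close the eye, which is exactly the degradation the diagram is used to assess.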
{}
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/64613', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/45492/']}
jdg_78674
stackexchange
llm_judgeable_groundtruth_similarity
1520825
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a div that contains an image and a p tag (seen below). I want to align the image in the middle of the div vertically depending on how many lines the paragraph is. Vertical-align doesn't work. I am using JavaScript right now to figure out how much to add to margin-top, but would rather use CSS. Any ideas? <div> <img> <p>Multi-line</p></div> Now provide the response and nothing else.
This is pure CSS, vertically aligns the image, and also resizes the image down if it's taller (or wider) than the containing box. Ergo, the box and image can both be any size without breaking the vertical alignment. Also, you may want to add a left margin to the <p> tags to keep them from being hidden by the image. CSS /* Positioning */.absoluteCenterWrapper { position: relative; /* Contains the image in the div */}.absoluteCenter { /* Aligns image vertically */ margin: auto; position: absolute; top: 0; bottom: 0;}.absoluteCenterWrapper p { /* Pushes <p> to edge of image */ margin-left: 101px; /* Width of image */}.absoluteCenter { /* Specifies width of image to avoid overlap if image changes */ width: 101px; /* Width of image */}/* Sizing - resizes down to avoid cutting off */.absoluteCenter { max-height: 100%; max-width: 100%;}/* Making it "pretty" */.absoluteCenterWrapper { margin: 1em; padding: 0.001em; /* <- Hack to contain margins */ outline: 1px solid red; }.absoluteCenterWrapper p { margin-top: 1em; margin-bottom: 1em; padding-left: 1em;} <div class="absoluteCenterWrapper"> <img class="absoluteCenter" 
src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAGUAAAAXCAMAAAAlSHE3AAAAA3NCSVQICAjb4U/gAAAACXBIWXMAAA2/AAANvwELq/fgAAAAGXRFWHRTb2Z0d2FyZQB3d3cuaW5rc2NhcGUub3Jnm+48GgAAAjFQTFRF/38A/4AB/4AC/4ED/4EE/4IF/4IG/4MH/4MI/4QJ/4QK/4UL/4UM/4YN/4YO/4cP/4cQ/4gR/4gS/4kT/4kU/4oV/4oW/4sX/4sY/4wZ/4wa/40b/40c/44d/44e/48g/5Ai/5Ej/5Ek/5Im/5Mo/5Qp/5Qq/5Ur/5Us/5Yt/5Yu/5cv/5cw/5gy/5kz/5o1/5o2/5s3/5s4/5w6/508/54+/58//59A/6BB/6FD/6FE/6JF/6JG/6NH/6NI/6RJ/6RK/6VL/6VM/6ZN/6ZO/6dP/6dQ/6hR/6lT/6pV/6pW/6xZ/61c/65d/65e/69g/7Bi/7Fj/7Jl/7Jm/7Nn/7No/7Rp/7Rq/7Vr/7Zt/7Zu/7dw/7hy/7p2/7t3/7t4/7x5/717/718/759/75+/79//7+A/8GD/8GE/8KF/8KG/8OH/8OI/8WL/8WM/8aO/8eQ/8iR/8iS/8mT/8mU/8qV/8uX/8uY/8yZ/8ya/82b/82c/86e/9Ch/9Ci/9Gj/9Km/9Oo/9Sp/9Sq/9Ws/9at/9ew/9ix/9mz/9m0/9q1/9u4/9y6/969/96+/9+//9/A/+DB/+LG/+TK/+XM/+bN/+bO/+fP/+jR/+nT/+nU/+rV/+vX/+vY/+za/+3b/+7e/+/f//Dh//Di//Hk//Po//Tp//Tq//Xr//Xs//bu//fv//jx//jy//nz//n0//r1//r2//v3//v4//z5//z6//37//38//79//7+////1DEj+QAAAndJREFUSInt1NlfTHEYx/HPNKYaSkVZRkyqGUq2CUOyDDUqe4U4kX3fyb5FliJOirIMQkWSmpnM9Px1buZMZ/Kai2644Ln6vZ7f9znvi+ecA//rHy+jOw+AdPdkfXuKpwQ2e8aN9nGZnhXasdyTEG4nyDUANohLn54uW6BaEker2KVUO9ZI0t9VJq2tWmqKVGIclSWWcH5hOjDLYYRERzIwo6xivgEwO1LjF+9MCykLty01R1cqfsg3eZmhVyzN0unz7tDyb+6B4avkQ7lkwN6fA93SOBFsssfj719gl1KIvS1fetpPRSgNTqfT6TwmLiiUG/aYlR2qXnnY6yTutiwK5c9/MTBXug9AXQeslotjWOO9BTbxbTUa4+xSCopsx3AgEKFo5YIXPWOBfTJ7WMmSw0BCT0MoXyx57H5fq2Loug7PPpmAC0PTscnd8F4+NgG0Ryh1VqvValXERYy/RVEU5Yy4hxW3LAFo7AjlkwPVPLmyzm/OkfXQdQegSAqxiaIp4+QQwNkoe0mVD6qqqqq6aliplFyAmwPaQEtj3EBZSnC5IpOJCdYC5MtGbLJJU6yyC+BQtO33XdbaYWWVLAdoeqvdnOgrHEqj5eSj18CneoAyWaxXYoPHAC5FUx53poxUpg5eBSz+m9pAgajtcLzt+zmgvjcZuN8/Qa/Q9gowfY6mZHlbCyxZxQ/ide/YhWCN1dHqn6kNmH1yBpaJFAFzgs3zMo7KESKUErmemV3fE/V7yXkuIr4Gk06JPT0o8s4RHuCpuCDeG0gCWPZRxHfQGKlQ5ZPArf06ZWSNn59tHtGKnTMtahxm5Jp+b8bljfrf9L/+bP0CQB36KehprcoAAAAASUVORK5CYII="> <p>Paragraph goes here.</p> <p>Paragraph goes here.</p></div><div class="absoluteCenterWrapper"> <img class="absoluteCenter" 
src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAGUAAAAXCAYAAAASloEFAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAANvwAADb8BC6v34AAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAAWSSURBVGiB7Zl7TNVlGMc/5xwunsNN7kSEIISIhiSFNS+IGrREdGmmmzlb5kpZy7k2c20utfqndP1hTsPL1paVa2Yz84aIpZSoaCp5g4OAiAICQtwPpz8e2Ln+OCBndebOdzv77fe87/O8l+/zPs/z/o7KuB4jbrgU1P/3BNywhZsUF4SbFBeEmxQXhC0pag0kzoOIFPsa/lHS7hs+9NF8IyD3GiTNN8lSlonMy2fo9lwNQfGylvgsx31T3+5bt69Nky0pHlpYtB/SVtk3FjNd2iOfG+KMAY0nhIwB7wCTTBskMtVjcGg1Xn3r83fcVxssfdUam6bHYCceP3g41ZpPGERNgoBoqCuFyt/B0P1otlRqeDINIiZAWz1UFUFLjWO9qEnw8A48rDbJwsaBlx/UFEOvQWTefhA6DuqvQUeTqW9ANES9ACNGQm0J1JwDo9lVzlML4RPgwS3oajXN8cp3jucV8Sw0XIfqPwbs6jxSUldA1mbJDR1Nsqi7JfDDAmgsH5otv0hYsBdGTYPWe2ILIxxZA8VfDaw7dxfcvwr7Fsq7SgXLCkEXDDsnQ9UZkY9fBHN2wJdxJlKmfggZG8SRulrBJxRuHoL9S6GtQfqMjIHlRXB8LaS8CYGx0NsD1X+KjjU0XjB3JyQvEedSaeBhFZQdVVyCMim+T0j+sEboWFtZXCbM2Q4lu+HM5+J9cZmQvU02aY8dOwMhe5sUGnsyoOIkeHhDTh7M3gr3/pITqITyfHhmsZBhNIp36oLhnzoYPdNESuxMaKowOczTr8DMT+HsVjj8vmx0QjYs3AdZW4QYc0z/GA6tgot75F3tAYFxtvN5fqUQ8stKKN4m85r2EaSvV1yCck5JmA3LCmx/U9fZ9p31GbQ3wqFcCVvGXrh1GC7kQUw6hI1X3kRrBCfAmBwo2iKEAPR0wsF3ZYwpawfW1+eDLkRCDMDoWRJqSvcJESAbE5shBPZj8gfQXAVHVgshADcOyhqSl0DAU5bjXD8AF3ZKOOw1yBzt4cXVoD8hhIA4SuFGqLuquATlk3Llezmi1kicBy9vMb2r1LLptZckhJlDGyzPoHi4f0VxKAuEJsmzosBS3tUK1UWSHwZCRaFsUuwMqL0opOhPQPlxmLhcckJQvOQ/vRkpoUlCknUOLDsGabkQMlZI60dVkeO1ePlIjirZZdumL4DwZLtqyqR0tcjxtkZbveW7Nkjipi5ECLNGRSEYFLzIHvrvP+2N9sf2cXA/6miCuxckVBVvhegp4u0VJyXERE+RBA9CFohj6UKgQ2FM83n1o/2B47XoQpXXYm+sPgw/0bfVixeXHYWD7wzbHM2V8vQNh3tWbf5R0HzbsQ19vnj3qHTwGCGnrr0Ras5LCAsbJ8VAa98Axl6p2OwR7h8lz6ZBjGuNlhqxbe+i7RuhqOace0rlaTkl2sDh26q9KCEk6TVLuV+kePmds45tlOfLTXnqOrh/WZI8SAiLzxKyzEMXiN24l/oqPTOMfx262wYffs1h6JLCZEyOpVzjCQlzFNWcQ8qv78GIAHjjmMRwv0hJ2IlzYdFP4q2DRctdOL8dJr4lSX1kjNwbFv8s7YUbHNuoOi2JNybdMpnr86Wq8/azlAOc2ihELj4AkakQOBpmbIKxr0LR5sGFK3s4tUlybk6e5LKQRJi/Fzx1iirOuac03ICv0yB7Oyw9ZpL3dIp3Gg1Ds3dkjehmbJDKDqDhJuxOhwdljvW726X0jc2wPBGVp6GnA9SecLvQUqf2EnyTKSX8inOm+Resh98+Gdr8zVH6o1SlmV+Io/Ua4PK3QvSMjXZVVE7/59HbX77pdLZI/
O9uf3RbGi+5F7XVS8z/rxAQLWGs/u9H/yJhDQ9vqeAay2RvBoDzSXFj2HB/kHRBuElxQbhJcUH8Cxuwzh5I0D/eAAAAAElFTkSuQmCC"> <p>Paragraph goes here.</p> <p>Paragraph goes here.</p> <p>Paragraph goes here.</p></div>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1520825', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/184460/']}
jdg_78675
stackexchange
llm_judgeable_groundtruth_similarity
26597665
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been trying to figure out how to read the contents of a httpclient call, and I can't seem to get it. The response status I get is 200, but I can't figure out how to get to the actual Json being returned, which is all I need! The following is my code: async Task<string> GetResponseString(string text){ var httpClient = new HttpClient(); var parameters = new Dictionary<string, string>(); parameters["text"] = text; Task<HttpResponseMessage> response = httpClient.PostAsync(BaseUri, new FormUrlEncodedContent(parameters)); return await response.Result.Content.ReadAsStringAsync();} And I am getting it just calling it from a method: Task<string> result = GetResponseString(text); And This is what I get response Id = 89, Status = RanToCompletion, Method = "{null}", Result = "StatusCode: 200, ReasonPhrase: 'OK', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:\r\n{\r\n Connection: keep-alive\r\n Date: Mon, 27 Oct 2014 21:56:43 GMT\r\n ETag: \"5a266b16b9dccea99d3e76bf8c1253e0\"\r\n Server: nginx/0.7.65\r\n Content-Length: 125\r\n Content-Type: application/json\r\n}" System.Threading.Tasks.Task<System.Net.Http.HttpResponseMessage> Update: This is my current code per Nathan's response below async Task<string> GetResponseString(string text) { var httpClient = new HttpClient(); var parameters = new Dictionary<string, string>(); parameters["text"] = text; var response = await httpClient.PostAsync(BaseUri, new FormUrlEncodedContent(parameters)); var contents = await response.Content.ReadAsStringAsync(); return contents; } And I call it from this method.... 
string AnalyzeSingle(string text) { try { Task<string> result = GetResponseString(text); var model = JsonConvert.DeserializeObject<SentimentJsonModel>(result.Result); if (Convert.ToInt16(model.pos) == 1) { _numRetries = 0; return "positive"; } if (Convert.ToInt16(model.neg) == 1) { _numRetries = 0; return "negative"; } if (Convert.ToInt16(model.mid) == 1) { _numRetries = 0; return "neutral"; } return ""; } catch (Exception e) { if (_numRetries > 3) { LogThis(string.Format("Exception caught [{0}] .... skipping", e.Message)); _numRetries = 0; return ""; } _numRetries++; return AnalyzeSingle(text); } } And it keeps running forever, It hits the line var model = JsonConvert.DeserializeObject<SentimentJsonModel>(result.Result); Once, and it continues to go without stopping at another breakpoint. When I pause execution, It say Id = Cannot evaluate expression because the code of the current method is optimized., Status = Cannot evaluate expression because the code of the current method is optimized., Method = Cannot evaluate expression because the code of the current method is optimized., Result = Cannot evaluate expression because the code of the current method is optimized. .. I Continue execution, but it just runs forever. Not sure what the problem is Now provide the response and nothing else.
The way you are using await/async is poor at best, and it makes it hard to follow. You are mixing await with Task<T>.Result, which is just confusing. However, it looks like you are looking at a final task result, rather than the contents. I've rewritten your function and function call, which should fix your issue:

async Task<string> GetResponseString(string text)
{
    var httpClient = new HttpClient();
    var parameters = new Dictionary<string, string>();
    parameters["text"] = text;
    var response = await httpClient.PostAsync(BaseUri, new FormUrlEncodedContent(parameters));
    var contents = await response.Content.ReadAsStringAsync();
    return contents;
}

And your final function call:

Task<string> result = GetResponseString(text);
var finalResult = result.Result;

Or even better:

var finalResult = await GetResponseString(text);
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/26597665', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2392664/']}
jdg_78676
stackexchange
llm_judgeable_groundtruth_similarity
10214463
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using RSA for encryption and decryption. I've heard that it's vulnerable to attacks. Is it? How safe is it to use? Now provide the response and nothing else.
The usual attack on RSA involves factoring a large number which is the product of two very large prime numbers. The general idea behind this is that finding prime numbers is fairly easy, and multiplying them together to get a single large number is also quite easy, but if you only start with that large number, finding the factors is quite hard. Back when it was an independent company, RSA Data Security Inc. ran a factoring challenge. IIRC, the last few prizes were won by teams from RSA DSI themselves. These were done using the General Number Field Sieve (GNFS). The typical implementation uses a large number of workstations for a few months or so to build a "factor base". Then (the hard part) you feed that to a single machine with gargantuan amounts of RAM (think terabytes as a starting point) and have it crunch for a few weeks to get the factors. The point of that is that the most common attack on RSA has a very high initial cost (i.e., getting something like a Cray supercomputer) to even get started. In all honesty, however, I don't believe any machine currently exists that can hold enough RAM to even begin an attack on something like a 1024-bit RSA key (not to mention the 2048 or even 4096-bit keys some of the paranoid types insist on using). There are, of course, other factoring methods that don't require that huge machine, but they're sufficiently less efficient that even at best, you'd be looking at years of work to break even a fairly small key by RSA standards -- 512 bits, for example. Realistically, the security of RSA itself is generally the least of your concerns in designing a system that uses it. In fact, that's true of nearly all reasonably modern encryption algorithms. When something gets broken into, it's essentially never by actually breaking the fundamental algorithm involved -- it's by finding weaknesses in how keys are distributed, "social engineering" attacks, etc.
While (as I said) this tends to be true of encryption in general, it's even more consistently true with RSA than most other encryption. This is simply because the normal use-case for RSA is comparatively complex. In particular, you do not normally want to use RSA to encrypt large streams of raw data. RSA is slow enough that you generally do not want to use it to encrypt a lot of data. Instead, you typically use RSA in conjunction with some symmetric-key (aka private key) encryption algorithm such as AES. When using the two together, you start by getting the public key of the person to whom you want to send some data. You then generate a random number of the correct size to be used as the key with the symmetric algorithm you've chosen. You encrypt your random number with RSA and send it to the target. You then use that number as the key with the symmetric algorithm, and use that symmetric algorithm to encrypt the actual data. This means that in real use, RSA has some weaknesses that don't necessarily apply to most symmetric encryption algorithms. Just for an obvious example, if the method use to choose the "random" key for the symmetric algorithm isn't really entirely random, an attacker may be able find the key and decrypt the data, without attacking the RSA encryption itself at all (and yes, this has been the source of a real vulnerability in real, publicly released software). Also note that what I've described above is pretty much the simplest way that RSA is normally put to use. Real systems often get considerably more complex still to provide such things as "perfect forward secrecy" 1 . The RSA in such a system is still perfectly normal RSA, but the rest of the system that uses it is considerably more complex still. Summary: if you're going to use RSA encryption, the security of RSA itself is probably the least of your concerns. 
The route from "RSA" to "secure communication system" is a bit like something out of a twisted combination of Greek mythology and the Christian Bible: a maze with a thousand wrong turns, each of which leads to a place that looks exactly like where you wanted to go--but taking any wrong turn will damn you to an eternity of torture and torment. 1. No, I'm not going to try to describe this right now, but I'm sure searching for it on Crypto.SE will turn up some relevant results. I should probably add, however, that although RSA can be used in a system that provides PFS, it's fairly unusual in practice.
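The hybrid scheme described above (RSA carries a random session key, a symmetric cipher carries the data) can be sketched in a few lines of Python. This is strictly a toy under invented assumptions: the primes are absurdly small, there is no padding (real deployments use OAEP or similar), and a hash-derived XOR keystream stands in for a real symmetric cipher such as AES.

```python
import hashlib
import secrets

# Toy "textbook RSA" with tiny primes -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                             # modulus (3233)
e = 17                                # public exponent
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

def rsa_encrypt(m: int) -> int:
    return pow(m, e, n)

def rsa_decrypt(c: int) -> int:
    return pow(c, d, n)

def stream_encrypt(key: bytes, data: bytes) -> bytes:
    # Stand-in for AES: XOR the data with a hash-derived keystream.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Hybrid flow: RSA wraps the random session key; the symmetric cipher carries the data.
session_key = secrets.randbelow(n - 2) + 2      # "random number of the correct size"
wrapped_key = rsa_encrypt(session_key)          # sent alongside the ciphertext
ciphertext = stream_encrypt(session_key.to_bytes(2, "big"), b"attack at dawn")

# Receiver side: unwrap the key with the private exponent, then decrypt the data.
recovered_key = rsa_decrypt(wrapped_key)
plaintext = stream_encrypt(recovered_key.to_bytes(2, "big"), ciphertext)
print(plaintext)
```

Note how the only thing protecting the data is the randomness of the session key: replace secrets.randbelow with anything predictable and the exchange falls without touching RSA at all, which is exactly the weakness described above.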
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10214463', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1166690/']}
jdg_78677
stackexchange
llm_judgeable_groundtruth_similarity
59846232
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm sure this question has been answered, but unfortunately I don't know what to call this operation, and so my search is failing me. It is almost like a reverse pivot table. Lets say I have the following payroll data: data = [ {'employee': 1, 'date': '2020-01-04', 'reg': 8, 'ot': 0, 'dt': 0}, {'employee': 1, 'date': '2020-01-05', 'reg': 4, 'ot': 4, 'dt': 0}, {'employee': 1, 'date': '2020-01-06', 'reg': 0, 'ot': 0, 'dt': 4}, {'employee': 2, 'date': '2020-01-04', 'reg': 6, 'ot': 2, 'dt': 0}, {'employee': 2, 'date': '2020-01-05', 'reg': 3, 'ot': 5, 'dt': 0}, {'employee': 2, 'date': '2020-01-06', 'reg': 0, 'ot': 4, 'dt': 0},]data_df = pd.DataFrame(data) What I need to do is break each rate ('reg', 'ot', and 'dt') for each employee/date, out into its own row that has a column for the rate label, and a column for the number of hours, keeping the other non-rate-based columns. Additionally, I dont want a row for any rates where the value is zero. For the data above, I am looking to get: result = [ {'employee': 1, 'date': '2020-01-04', 'rate': 'reg', 'hours': 8}, {'employee': 1, 'date': '2020-01-05', 'rate': 'reg', 'hours': 4}, {'employee': 1, 'date': '2020-01-05', 'rate': 'ot', 'hours': 4}, {'employee': 1, 'date': '2020-01-06', 'rate': 'dt', 'hours': 4}, {'employee': 2, 'date': '2020-01-04', 'rate': 'reg', 'hours': 6}, {'employee': 2, 'date': '2020-01-04', 'rate': 'ot', 'hours': 2}, {'employee': 2, 'date': '2020-01-05', 'rate': 'reg', 'hours': 3}, {'employee': 2, 'date': '2020-01-05', 'rate': 'ot', 'hours': 5}, {'employee': 2, 'date': '2020-01-06', 'rate': 'ot', 'hours': 4},]result_df = pd.DataFrame(result) Any thoughts on how to accomplish this would be greatly appreciated! Now provide the response and nothing else.
Try using melt : (data_df.melt(['employee','date'], var_name='rate', value_name='hours') .query('hours != 0')) Output: employee date rate hours0 1 2020-01-04 reg 81 1 2020-01-05 reg 43 2 2020-01-04 reg 64 2 2020-01-05 reg 37 1 2020-01-05 ot 49 2 2020-01-04 ot 210 2 2020-01-05 ot 511 2 2020-01-06 ot 414 1 2020-01-06 dt 4
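For intuition, here is a plain-Python sketch of what that melt-plus-query pipeline computes on the sample data. This is a conceptual equivalent, not how pandas implements it:

```python
# melt() unpivots each rate column into its own row; query('hours != 0')
# then drops the zero-hour rows. The pure-Python equivalent:
data = [
    {'employee': 1, 'date': '2020-01-04', 'reg': 8, 'ot': 0, 'dt': 0},
    {'employee': 1, 'date': '2020-01-05', 'reg': 4, 'ot': 4, 'dt': 0},
    {'employee': 1, 'date': '2020-01-06', 'reg': 0, 'ot': 0, 'dt': 4},
    {'employee': 2, 'date': '2020-01-04', 'reg': 6, 'ot': 2, 'dt': 0},
    {'employee': 2, 'date': '2020-01-05', 'reg': 3, 'ot': 5, 'dt': 0},
    {'employee': 2, 'date': '2020-01-06', 'reg': 0, 'ot': 4, 'dt': 0},
]

id_vars = ('employee', 'date')        # columns kept as identifiers
value_vars = ('reg', 'ot', 'dt')      # columns unpivoted into rate/hours rows

result = [
    {**{k: row[k] for k in id_vars}, 'rate': rate, 'hours': row[rate]}
    for row in data
    for rate in value_vars
    if row[rate] != 0                 # the query('hours != 0') step
]

print(len(result))  # 9 non-zero rate rows
```

If the row ordering shown in the question matters, a final .sort_values(['employee', 'date']).reset_index(drop=True) on the pandas result restores the grouped order.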
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59846232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/376491/']}
jdg_78678
stackexchange
llm_judgeable_groundtruth_similarity
24151166
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Am attempting to use Regex to match only whole words within double quotations. These are the two statements I have and an example set of text. Looking to combine these 2 regex statements. "(?:[^\\"]+|\\.)*"\b\w+\b Trying to get this final output: Ignore the zzz's, those are replacements of single quotes. Bonus to anyone who recognizes this book. Thanks a ton, apologies for potential confusion - first time asking a question here. Edit: Solved - apologies for the confusing question and images, am using Python 2.7. Used the regex linked to from @HamZa and modified for Python 2.7 and to ignore newlines. (?:"?)\w+(?=(?:(?:[^"]"){2}) (?:[^"] ?"[^"]*?(?:$))) goo.gl/VHvS2o Now provide the response and nothing else.
COMPLETE GUIDE TO EXCERPTS I've recently answered a few questions regarding excerpts, so I'm going to give a detailed explanation covering as much as I can. HTML TAGS/FORMATTING the_excerpt() first of all doesn't accept any parameters, so nothing can be passed to it. It is a fact that the_excerpt() trims the content to 55 words, and all html tags are stripped before returning the text. the_excerpt() is located in wp-includes/post-template.php . To allow certain or all html tags in the excerpt, a new excerpt has to be created. First of all, the original function needs to be removed, and then the new function needs to be hooked to get_the_excerpt . Please take note, this new excerpt will still be callable as the_excerpt() in template files, no need to change that. get_the_excerpt() is located in wp-includes/post-template.php . The excerpt uses wp_trim_excerpt to return the trimmed text, so we need to remove wp_trim_excerpt first from the excerpt filter. wp_trim_excerpt() is located in wp-includes/formatting.php , line 2355. This is how: remove_filter('get_the_excerpt', 'wp_trim_excerpt'); You can now add your new excerpt to get_the_excerpt add_filter('get_the_excerpt', 'wpse_custom_wp_trim_excerpt'); To allow html tags/formatting, we will need to specify which tags you need to allow. You can use the following strip_tags statement to achieve that $wpse_excerpt = strip_tags($wpse_excerpt, wpse_allowedtags()); The second argument wpse_allowedtags() is a small function that is used to add the tags the_excerpt() will allow. For a complete list of valid HTML5 tags, go and check it out here . Here is the function; add any html tag to this that you need to allow/keep function wpse_allowedtags() {// Add custom tags to this string return '<script>,<style>,<br>,<em>,<i>,<ul>,<ol>,<li>,<a>,<p>,<img>,<video>,<audio>'; } If you need to allow all HTML tags, that is, no stripping of any tags, the strip_tags() function can be omitted/removed completely. 
A point to note however, when html tags are allowed, these tags are counted as words, so your word count for excerpts with tags and without tags will not be the same. To correct that, you will need to remove these tags from the actual word count first so that only words are counted. I have written an excerpt that will allow all tags, count only words as words, and complete a sentence after the set amount of words (won't trim text mid-sentence) and add a read more text after the last word. Here is the complete code function wpse_allowedtags() { // Add custom tags to this string return '<script>,<style>,<br>,<em>,<i>,<ul>,<ol>,<li>,<a>,<p>,<img>,<video>,<audio>'; }if ( ! function_exists( 'wpse_custom_wp_trim_excerpt' ) ) : function wpse_custom_wp_trim_excerpt($wpse_excerpt) { global $post; $raw_excerpt = $wpse_excerpt; if ( '' == $wpse_excerpt ) { $wpse_excerpt = get_the_content(''); $wpse_excerpt = strip_shortcodes( $wpse_excerpt ); $wpse_excerpt = apply_filters('the_content', $wpse_excerpt); $wpse_excerpt = str_replace(']]>', ']]&gt;', $wpse_excerpt); $wpse_excerpt = strip_tags($wpse_excerpt, wpse_allowedtags()); /*IF you need to allow just certain tags. Delete if all tags are allowed */ //Set the excerpt word count and only break after sentence is complete. $excerpt_word_count = 75; $excerpt_length = apply_filters('excerpt_length', $excerpt_word_count); $tokens = array(); $excerptOutput = ''; $count = 0; // Divide the string into tokens; HTML tags, or words, followed by any whitespace preg_match_all('/(<[^>]+>|[^<>\s]+)\s*/u', $wpse_excerpt, $tokens); foreach ($tokens[0] as $token) { if ($count >= $excerpt_word_count && preg_match('/[\,\;\?\.\!]\s*$/uS', $token)) { // Limit reached, continue until , ; ? . or ! occur at the end $excerptOutput .= trim($token); break; } // Add words to complete sentence $count++; // Append what's left of the token $excerptOutput .= $token; } $wpse_excerpt = trim(force_balance_tags($excerptOutput)); $excerpt_end = ' <a href="'. 
esc_url( get_permalink() ) . '">' . '&nbsp;&raquo;&nbsp;' . sprintf(__( 'Read more about: %s &nbsp;&raquo;', 'wpse' ), get_the_title()) . '</a>'; $excerpt_more = apply_filters('excerpt_more', ' ' . $excerpt_end); //$pos = strrpos($wpse_excerpt, '</'); //if ($pos !== false) // Inside last HTML tag //$wpse_excerpt = substr_replace($wpse_excerpt, $excerpt_end, $pos, 0); /* Add read more next to last word */ //else // After the content $wpse_excerpt .= $excerpt_end; /*Add read more in new paragraph */ return $wpse_excerpt; } return apply_filters('wpse_custom_wp_trim_excerpt', $wpse_excerpt, $raw_excerpt); }endif; remove_filter('get_the_excerpt', 'wp_trim_excerpt');add_filter('get_the_excerpt', 'wpse_custom_wp_trim_excerpt'); You can just remove the '//' from the lines you want to enable. CUSTOM EXCERPT LENGTHS Sometimes you need to display simple excerpts of different lengths and it is not viable to write an excerpt for every post/function/page. Here is a nice little function using wp_trim_words function wpse_custom_excerpts($limit) { return wp_trim_words(get_the_excerpt(), $limit, '<a href="'. esc_url( get_permalink() ) . '">' . '&nbsp;&hellip;' . __( 'Read more &nbsp;&raquo;', 'wpse' ) . '</a>');} What this little function does is take get_the_excerpt, trim it to the $limit set by the user, and return the text with a read more link at the end. You can call this excerpt as follows in your template echo wpse_custom_excerpts($limit); where $limit will be your word count, so an excerpt of 30 words will be echo wpse_custom_excerpts(30); Just one thing to remember here, if you set your limit to more than 55 words, only 55 words will be returned as the excerpt is only 55 words in length. If longer excerpts are needed, use get_the_content instead. 
CUSTOM EXCERPT LENGTH If you just need to alter the length of the_excerpt() , you can use the following function function wpse_excerpt_length( $length ) { return 20;}add_filter( 'excerpt_length', 'wpse_excerpt_length', 999 ); Remember, you will need to set a priority greater than 10 so that your custom function executes after the default. ADD READ MORE LINK All text returned by the excerpt has the hated [...] at the end that is not clickable. To add a read more text in place of the ellipsis, use this function function wpse_excerpt_more( $more ) { return ' <a class="read-more" href="'. get_permalink( get_the_ID() ) . '">' . __('Read More', 'your-text-domain') . '</a>';}add_filter( 'excerpt_more', 'wpse_excerpt_more' ); EDIT Excerpt first paragraph I want to keep this complete, so here is the excerpt that trims after the first paragraph Here is a function that keeps HTML tags intact, adds a "Read More" link at the end of the excerpt and trims the excerpt after the first paragraph. if ( ! function_exists( 'wpse0001_custom_wp_trim_excerpt' ) ) : function wpse0001_custom_wp_trim_excerpt($wpse0001_excerpt) {global $post;$raw_excerpt = $wpse0001_excerpt;if ( '' == $wpse0001_excerpt ) {$wpse0001_excerpt = get_the_content('');$wpse0001_excerpt = strip_shortcodes( $wpse0001_excerpt );$wpse0001_excerpt = apply_filters('the_content', $wpse0001_excerpt);$wpse0001_excerpt = substr( $wpse0001_excerpt, 0, strpos( $wpse0001_excerpt, '</p>' ) + 4 );$wpse0001_excerpt = str_replace(']]>', ']]&gt;', $wpse0001_excerpt);$excerpt_end = ' <a href="'. esc_url( get_permalink() ) . '">' . '&nbsp;&raquo;&nbsp;' . sprintf(__( 'Read more about: %s &nbsp;&raquo;', 'pietergoosen' ), get_the_title()) . '</a>'; $excerpt_more = apply_filters('excerpt_more', ' ' . 
$excerpt_end); //$pos = strrpos($wpse0001_excerpt, '</');//if ($pos !== false)// Inside last HTML tag//$wpse0001_excerpt = substr_replace($wpse0001_excerpt, $excerpt_end, $pos, 0);//else// After the content$wpse0001_excerpt .= $excerpt_end;return $wpse0001_excerpt;}return apply_filters('wpse0001_custom_wp_trim_excerpt', $wpse0001_excerpt, $raw_excerpt);}endif; remove_filter('get_the_excerpt', 'wp_trim_excerpt');add_filter('get_the_excerpt', 'wpse0001_custom_wp_trim_excerpt');
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24151166', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3727713/']}
jdg_78679
stackexchange
llm_judgeable_groundtruth_similarity
21040672
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm not a javascript developer, so bear with me on this one... I need to perform a redirect after a jQuery click event has completed. This is what I have, but I can't apply .done to .click . Wrapping the whole thing in $.when doesn't work ether... $("#printpng").click(function(){ $('#tool_docprops').click(); $('#tool_docprops_save').click(); $('#tool_export').click() }).done(function(){ window.location.href = "<?php echo $base_url ?>/sign/print" }); does anyone have any better solutions? Thanks Now provide the response and nothing else.
Assuming each click is triggering an Ajax call out of your control (else why would you have this problem), you can simply await any/all Ajax requests to complete with: $(document).ajaxStop(function () { window.location.href = "<?php echo $base_url ?>/sign/print" }); You can also add a timeout, if required, before you change the URL in case there is additional processing after the Ajax load. If it is not an Ajax issue please clarify and I will adjust (or remove) this answer. Full version (with a 1 second additional delay after Ajax wait) might look something like: $("#printpng").click(function(){ $('#tool_docprops').click(); $('#tool_docprops_save').click(); $('#tool_export').click(); $(document).ajaxStop(function () { setTimeout(function(){ window.location.href = "<?php echo $base_url ?>/sign/print" }, 1000); });}); As promised [sic] a better solution using promises As the full code was never provided, the solution was guesswork, assuming multiple ajax calls. A generic solution is to not fire click events, but to simply call the related code for each click, in a promise-friendly way. Assuming that each click handler has a dedicated function, just make each function return a promise: e.g. function loadPropsViaAjax(){ // simply return the ajax call as $.ajax returns a promise return $.ajax({parameters here}); } function saveDocPropsViaAjax(){ // simply return the ajax call as $.ajax returns a promise return $.ajax({parameters here}); } function waitForImageToload(){ // create a deferred object var def = $.Deferred(); // When the image eventually loads, resolve the promise $('#someimageselector').on('load', function(){ def.resolve(); }); // Return the promise immediately return def.promise(); } Then to make use of it in your example (running sequentially using .then() ): // On click button event... 
$("#printpng").click(function(){ // Run operations sequentially loadPropsViaAjax().then(saveDocPropsViaAjax).then(waitForImageToload) .done(function(){ window.location.href = "<?php echo $base_url ?>/sign/print" }); }); Or if they can run in parallel, use $.when : // On click button event... $("#printpng").click(function(){ // Run operations in parallel $.when(loadPropsViaAjax, saveDocPropsViaAjax, waitForImageToload) .done(function(){ window.location.href = "<?php echo $base_url ?>/sign/print" }); });
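The sequential .then() chain versus the parallel $.when() gather is a general async pattern, not something jQuery-specific. Here is a rough Python analogue using futures; the three step functions are invented stand-ins for the Ajax calls above:

```python
from concurrent.futures import ThreadPoolExecutor, wait

def load_props():        # stand-in for loadPropsViaAjax
    return "props"

def save_doc_props():    # stand-in for saveDocPropsViaAjax
    return "saved"

def wait_for_image():    # stand-in for waitForImageToload
    return "loaded"

with ThreadPoolExecutor() as pool:
    # Sequential (like .then() chaining): each step only starts after the
    # previous one has resolved.
    results_seq = []
    for step in (load_props, save_doc_props, wait_for_image):
        results_seq.append(pool.submit(step).result())

    # Parallel (like $.when()): all three start at once, then we await them all
    # before running the "done" code.
    futures = [pool.submit(step) for step in (load_props, save_doc_props, wait_for_image)]
    wait(futures)
    results_par = [f.result() for f in futures]

print(results_seq, results_par)
```

The results are identical either way; only the scheduling differs, which is exactly the choice between chaining with .then() and gathering with $.when() in the answer.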
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21040672', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2412837/']}
jdg_78680
stackexchange
llm_judgeable_groundtruth_similarity
229543
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given a random variable $Y = Exp(\lambda)$, what is the mean and variance of $G=\dfrac{1}{Y}$ ? I look at the Inverse Gamma Distribution, but the mean and variance are only defined for $\alpha>1$ and $\alpha>2$ respectively... Now provide the response and nothing else.
Given that the inverse exponential distribution has $\alpha = 1$, you have stumbled upon the fact that the mean of the inverse exponential is $\infty$. And therefore, the variance of the inverse exponential is undefined. If $G$ is inverse exponentially distributed, $E(G^r)$ exists and is finite for $r < 1$, and $= \infty$ for $r = 1$.
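A quick numerical sketch (taking λ = 1 for concreteness) makes the divergence tangible: E(1/Y) is the integral of (1/y)·λe^{−λy} over (0, ∞), the integrand behaves like λ/y near 0, and so truncated versions of the integral grow without bound instead of settling down.

```python
import math

# E[1/Y] for Y ~ Exp(lam) is the integral of (1/y) * lam * exp(-lam * y) dy
# over (0, inf). Near y = 0 the integrand behaves like lam / y, whose integral
# diverges logarithmically. Numerically: the truncated integral over [eps, 1]
# keeps growing as eps shrinks.

lam = 1.0

def truncated_integral(eps: float, steps: int = 200_000) -> float:
    # Midpoint rule on [eps, 1].
    h = (1.0 - eps) / steps
    total = 0.0
    for i in range(steps):
        y = eps + (i + 0.5) * h
        total += (1.0 / y) * lam * math.exp(-lam * y) * h
    return total

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    print(eps, truncated_integral(eps))
# Each tenfold shrink of eps adds roughly lam * ln(10) ~ 2.3 to the value:
# the truncated means march off to infinity rather than converging.
```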
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/229543', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/27069/']}
jdg_78681
stackexchange
llm_judgeable_groundtruth_similarity
83299
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Bash is driving me nuts. I can't figure out why the following (nor any of the dozens of variations I've literally copied and pasted from examples) fails to work: #!/bin/bashecho $#function main { if (( $# < 1 )); then usage fi echo "good"}function usage { echo "Usage: $0 <outputdir>" exit 1}main For some reason the argument check fails every time, even when I pass an argument and can clearly see $# is set as expected. Now provide the response and nothing else.
You're calling the main function with no argument. So $# in the main function is always 0. Each function instance has its own arguments. (“Instance” means that if the function is started multiple times (through recursive calls), each call has its own arguments.) The positional parameters $1 , $2 , etc., as well as the associated parameters $# , $* and $@ refer to the arguments passed in the function call. For example, if you call main foo bar then inside the main function the value of $1 will be foo , the value of $# will be 2 and so on. If you want to pass the arguments of the script to a function, use "$@" . This construct expands to the list of arguments passed to the script (or to the function, if called inside a function). Note that unlike what normally happens with double quotes, the parameters are passed separately, "$@" is a list of strings and not a single string. The double quotes are necessary, otherwise the parameters are not passed as is but treated as a whitespace-separated list of file name patterns. function main { if (( $# < 1 )); then usage fi echo "good"}main "$@"
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/83299', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/28615/']}
jdg_78682
stackexchange
llm_judgeable_groundtruth_similarity
28659660
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to add events to calendar with the following code : public Intent calPopulation(){ Intent calIntent = new Intent(Intent.ACTION_INSERT); calIntent.setType("vnd.android.cursor.item/event"); calIntent.putExtra(CalendarContract.Events.TITLE, this._title); GregorianCalendar calDate = new GregorianCalendar(this._year,this._month, this._day, this._hour, this._minute); calIntent.putExtra(CalendarContract.EXTRA_EVENT_BEGIN_TIME, calDate.getTimeInMillis()); calIntent.putExtra(CalendarContract.EXTRA_EVENT_END_TIME, calDate.getTimeInMillis()+60*60*1000); calIntent.putExtra(CalendarContract.Events.HAS_ALARM, true); calIntent.putExtra(CalendarContract.Reminders.EVENT_ID, CalendarContract.Events._ID); calIntent.putExtra(CalendarContract.Events.ALLOWED_REMINDERS, "METHOD_DEFAULT"); calIntent.putExtra(CalendarContract.Reminders.METHOD, CalendarContract.Reminders.METHOD_ALERT); calIntent.putExtra(CalendarContract.Reminders.MINUTES,5); return calIntent;} And then launch the action with : startActivity(mTask.calPopulation()); I don't have any issue, the calendar app event launched with the correct information I entered into my app, except that it does not fill in the event the reminder I would like to add. Do you have any clues? I tried to search within many threads using this method (I mean the intent.putExtra) but never find anything interesting. Other point, is there a way to directly save the event + reminder into the calendar without opening calendar App and requesting for user action? Thanks in advance.Alex. Now provide the response and nothing else.
I found no answer using the method I pointed out, but here is another way I found to add an event with reminders to the calendar. The methods below are working fine for me: // Add an event to the calendar of the user. public void addEvent(Context context) { GregorianCalendar calDate = new GregorianCalendar(this._year, this._month, this._day, this._hour, this._minute); try { ContentResolver cr = context.getContentResolver(); ContentValues values = new ContentValues(); values.put(CalendarContract.Events.DTSTART, calDate.getTimeInMillis()); values.put(CalendarContract.Events.DTEND, calDate.getTimeInMillis()+60*60*1000); values.put(CalendarContract.Events.TITLE, this._title); values.put(CalendarContract.Events.CALENDAR_ID, 1); values.put(CalendarContract.Events.EVENT_TIMEZONE, Calendar.getInstance() .getTimeZone().getID()); System.out.println(Calendar.getInstance().getTimeZone().getID()); Uri uri = cr.insert(CalendarContract.Events.CONTENT_URI, values); // Save the eventId into the Task object for possible future delete. 
this._eventId = Long.parseLong(uri.getLastPathSegment()); // Add a 5 minute, 1 hour and 1 day reminders (3 reminders) setReminder(cr, this._eventId, 5); setReminder(cr, this._eventId, 60); setReminder(cr, this._eventId, 1440); } catch (Exception e) { e.printStackTrace(); } } // routine to add reminders with the event public void setReminder(ContentResolver cr, long eventID, int timeBefore) { try { ContentValues values = new ContentValues(); values.put(CalendarContract.Reminders.MINUTES, timeBefore); values.put(CalendarContract.Reminders.EVENT_ID, eventID); values.put(CalendarContract.Reminders.METHOD, CalendarContract.Reminders.METHOD_ALERT); Uri uri = cr.insert(CalendarContract.Reminders.CONTENT_URI, values); Cursor c = CalendarContract.Reminders.query(cr, eventID, new String[]{CalendarContract.Reminders.MINUTES}); if (c.moveToFirst()) { System.out.println("calendar" + c.getInt(c.getColumnIndex(CalendarContract.Reminders.MINUTES))); } c.close(); } catch (Exception e) { e.printStackTrace(); } } // function to remove an event from the calendar using the eventId stored within the Task object. 
public void removeEvent(Context context) { ContentResolver cr = context.getContentResolver(); int iNumRowsDeleted = 0; Uri eventsUri = Uri.parse(CALENDAR_URI_BASE+"events"); Uri eventUri = ContentUris.withAppendedId(eventsUri, this._eventId); iNumRowsDeleted = cr.delete(eventUri, null, null); Log.i(DEBUG_TAG, "Deleted " + iNumRowsDeleted + " calendar entry."); } public int updateEvent(Context context) { int iNumRowsUpdated = 0; GregorianCalendar calDate = new GregorianCalendar(this._year, this._month, this._day, this._hour, this._minute); ContentValues event = new ContentValues(); event.put(CalendarContract.Events.TITLE, this._title); event.put("hasAlarm", 1); // 0 for false, 1 for true event.put(CalendarContract.Events.DTSTART, calDate.getTimeInMillis()); event.put(CalendarContract.Events.DTEND, calDate.getTimeInMillis()+60*60*1000); Uri eventsUri = Uri.parse(CALENDAR_URI_BASE+"events"); Uri eventUri = ContentUris.withAppendedId(eventsUri, this._eventId); iNumRowsUpdated = context.getContentResolver().update(eventUri, event, null, null); // TODO put text into strings.xml Log.i(DEBUG_TAG, "Updated " + iNumRowsUpdated + " calendar entry."); return iNumRowsUpdated; } Hope this could help others that encountered the same problem I had :). Alex.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28659660', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4278060/']}
jdg_78683
stackexchange
llm_judgeable_groundtruth_similarity
6129581
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have three profiles in my pom.xml for our application... dev (for use on a developer's) qa (for use on our internal qa server) prod (production). When we run our maven build all three profiles ouput a war file with the same name. I would like to output $profilename-somearbitraryname.war Any ideas? Now provide the response and nothing else.
You've answered yourself correctly: <profiles> <profile> <id>dev</id> <properties> <rp.build.warname>dev</rp.build.warname> </properties> </profile> <profile> <id>qa</id> <properties> <rp.build.warname>qa</rp.build.warname> </properties> </profile> <profile> <id>prod</id> <properties> <rp.build.warname>prod</rp.build.warname> </properties> </profile></profiles> but there is a simpler way to redefine WAR name: <build> <finalName>${rp.build.warname}-somearbitraryname</finalName> <!-- ... --></build> No maven-war-plugin is needed.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6129581', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/97901/']}
jdg_78684
stackexchange
llm_judgeable_groundtruth_similarity
1757963
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f\colon\mathbb{C}\to\mathbb{C}$ . We know that if $f^{\prime}(a)$ exists for some $a\in\mathbb{C}$ then $f$ is continuous at $a$ . This is because, from the definition of the derivative, $$f(z)-f(a)=[f^{\prime}(a)+\varepsilon(z)](z-a)$$ for some $\varepsilon$ such that $\varepsilon(z)\to0$ as $z\to a$ , and hence $f(z)\to f(a)$ . I think we can interpret this geometrically by saying that, for $z$ close to $a$ , if we take the vector from $a$ to $z$ , rotate it counterclockwise about its tail by $\mathrm{Arg}\,{(f^{\prime}(a))}$ and scale its length by $\lvert f^{\prime}(a)\rvert$ , then we get approximately the vector from $f(a)$ to $f(z)$ (it's only approximate, because of the $\varepsilon$ error term, but for $z$ close to $a$ this $\varepsilon$ is small). However, consider the following heuristic argument: let $z_{1}$ and $z_{2}$ both be "close to" $a$ (and therefore close to each other). The above interpretation then suggests that the vector from $f(a)$ to $f(z_{1})$ should be approximately the same as the vector from $f(a)$ to $f(z_{2})$ , and therefore $f(z_{2})-f(z_{1})$ should be small in magnitude. This seems to suggest that $f$ should be continuous not just at $a$ , but also in some small interval around $a$ . This seems too good to be true. Hence my question: Does there exist a function $f\colon\mathbb{C}\to\mathbb{C}$ such that $f^{\prime}(a)$ exists at some point $a\in\mathbb{C}$ but $f$ is continuous nowhere else? Update, added later: A quick thanks to those who responded. I have since thought through this with a friend and we've found a problem with the heuristic argument: Properly interpreted, the heuristic argument is saying that if $z_{1}$ and $z_{2}$ are close to $a$ , then $f(z_{1})$ is close to $f(z_{2})$ . 
However, to have continuity at $z_{1}$ , say, we need to be able to make $f(z_{2})$ arbitrarily close to $f(z_{1})$ by choosing $z_{2}$ to be sufficiently close to $z_{1}$ --- therein lies the problem. Suppose $\lvert z_{3}-z_{1} \rvert < \lvert z_{2}-z_{1} \rvert$ , i.e., suppose $z_{3}$ is even closer to $z_{1}$ than $z_{2}$ is. Then, for $f$ to be continuous, we need to be able to ensure that $f(z_{3})$ is closer still to $f(z_{1})$ (than $f(z_{2})$ is); in general, we can't ensure this. If I get round to it, I might try to work through the details in a specific case and see if there are any other problems. If I get anywhere, I'll post the results as an additional answer. Now provide the response and nothing else.
Let $g \colon \mathbb{C} \rightarrow \mathbb{C}$ be a function that is discontinuous everywhere and bounded. Consider $f(z) := z^2 g(z)$. Then $f$ is continuous only at $z = 0$, and $f$ is differentiable at $z = 0$ as $$ \lim_{z \to 0} \frac{f(z) - f(0)}{z} = \lim_{z \to 0} z g(z) = 0.$$
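A small numeric illustration of this construction. The particular g below is just an easy-to-code bounded, highly discontinuous stand-in (a sign that flips on a fine grid of the plane); only its boundedness matters for the limit of the difference quotient:

```python
# Illustrating f(z) = z^2 * g(z) with g bounded and wildly discontinuous:
# the difference quotient (f(z) - f(0)) / z = z * g(z) still tends to 0,
# because |z * g(z)| <= |z| * sup|g|.

def g(z: complex) -> float:
    # Bounded by 1, jumps between +1 and -1 on a fine grid of the plane.
    return 1.0 if (round(z.real * 1e6) + round(z.imag * 1e6)) % 2 == 0 else -1.0

def f(z: complex) -> complex:
    return z * z * g(z)

# The quotient magnitude is squeezed below |z|, so it vanishes as z -> 0.
for k in range(1, 6):
    z = (10.0 ** -k) * complex(1, 1)
    quotient = f(z) / z
    assert abs(quotient) <= abs(z) + 1e-12
    print(k, abs(quotient))
```

The squeeze argument in the loop is exactly the proof above: differentiability at 0 never looks at the (total) discontinuity of g, only at its bound.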
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1757963', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/254628/']}
jdg_78685
stackexchange
llm_judgeable_groundtruth_similarity
42414531
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Trying to implement a Floating Action Button (F.A.B) that hides on scroll down, and shows on scroll up. I have ScrollAwareFABBehavior.java to manage this, and it's connected to the F.A.B in in the XML activity_main . Problem: The F.A.B hides on scroll down, but doesn't show again when I scroll up. I logged the onNestedScroll method and it calls "calling scroll" and "calling to hide" while scrolling down; but after the F.A.B is hidden there are none of the 3x Log 's Question: Why does the F.A.B not show when I scroll up, after the F.A.B has been hidden. ScrollAwareFABBehavior.java: import android.content.Context;import android.support.design.widget.CoordinatorLayout;import android.support.design.widget.FloatingActionButton;import android.support.v4.view.ViewCompat;import android.util.AttributeSet;import android.util.Log;import android.view.View;public class ScrollAwareFABBehavior extends FloatingActionButton.Behavior { public ScrollAwareFABBehavior(Context context, AttributeSet attrs) { super(); } @Override public boolean onStartNestedScroll(final CoordinatorLayout coordinatorLayout, final FloatingActionButton child, final View directTargetChild, final View target, final int nestedScrollAxes) { // Ensure we react to vertical scrolling return nestedScrollAxes == ViewCompat.SCROLL_AXIS_VERTICAL || super.onStartNestedScroll(coordinatorLayout, child, directTargetChild, target, nestedScrollAxes); } @Override public void onNestedScroll(final CoordinatorLayout coordinatorLayout, final FloatingActionButton child, final View target, final int dxConsumed, final int dyConsumed, final int dxUnconsumed, final int dyUnconsumed) { super.onNestedScroll(coordinatorLayout, child, target, dxConsumed, dyConsumed, dxUnconsumed, dyUnconsumed); Log.d("test", "calling scroll"); if (dyConsumed > 0 && child.getVisibility() == View.VISIBLE) { // User 
scrolled down and the FAB is currently visible -> hide the FAB Log.d("test", "calling to hide"); child.hide(); } else if (dyConsumed < 0 && child.getVisibility() != View.VISIBLE) { // User scrolled up and the FAB is currently not visible -> show the FAB Log.d("test", "calling to show"); child.show(); } }} activity_main.xml: <android.support.design.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/main_content" android:layout_width="match_parent" android:layout_height="match_parent" tools:context="companyname.appname.MainActivity"> <android.support.v7.widget.RecyclerView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/rv_contactlist" android:layout_width="match_parent" android:layout_height="match_parent" android:clipToPadding="false" android:paddingBottom="16dp" android:paddingTop="16dp" android:scrollbars="vertical" /> <android.support.design.widget.FloatingActionButton android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="bottom|end" android:layout_margin="16dp" android:src="@drawable/ic_add_circle_outline_black_24dp" app:layout_anchor="@id/rv_contactlist" app:layout_anchorGravity="bottom|right|end" android:layout_alignParentEnd="true" app:fabSize="normal" android:layout_alignParentBottom="true" android:onClick="addItem" app:layout_behavior="companyname.appname.ScrollAwareFABBehavior"/></android.support.design.widget.CoordinatorLayout> Now provide the response and nothing else.
Which support library version are you using in your project? If you are using the latest one (I mean 25.0.x), this happens because the fab.hide() method sets the visibility to View.GONE. This makes the nested scroll listener stop checking the fab the second time you try to scroll the list. More detail can be found here: https://code.google.com/p/android/issues/detail?id=230298 I searched a bit and found that this similar question already has a nice answer: Floating action button not visible on scrolling after updating Google Support & Design Library So a possible workaround would be to override the fab.hide() method so that it sets the visibility not to GONE but to INVISIBLE instead. I think this may be fixed upstream later, so just keep an eye on it.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42414531', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4825286/']}
jdg_78686
stackexchange
llm_judgeable_groundtruth_similarity
2531542
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So far I have:

Base case: $n = 1$:
$$(2(1))! > (1!)^2$$
$$2! > 1!^2$$
$$2 > 1$$

Induction step: Assume this is true for some $n > 1$. Let $n = p + 1$:
$$(2(p+1))! > ((p+1)!)^2$$
$$(2p)!(2p+1)(2p+2) > ((p+1)!)^2$$
$$(2p)! > \frac{((p+1)!)^2}{(2p+1)(2p+2)}$$
Now I add 1 to the smaller term in the denominator, for cancellation purposes, which makes the RHS smaller so the inequality still holds.
$$(2p)! > \frac{(p!)^2(p+1)^2}{(2p+2)(2p+2)}$$
$$(2p)! > \frac{(p!)^2}{4}$$
Now I am stumped on how to get rid of the extra 4 in the denominator. Now provide the response and nothing else.
Use
$$(2n)! = n!\,(n+1)(n+2)\cdots(2n) > n! \cdot 1\cdot 2 \cdots n = (n!)^2.$$
Alternatively, if you know binomial coefficients,
$$(2n)! = \binom{2n}{n} (n!)^2 > (n!)^2,$$
since $\binom{2n}{n}$ is an integer $> 1.$
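If you would rather salvage the induction route from the question, note that the division (and the stray factor of 4) is never needed; a sketch of the inductive step, filled in here using only the bounds $2p+1 \ge p+1$ and $2p+2 \ge p+1$:

```latex
(2(p+1))! = (2p)!\,(2p+1)(2p+2)
          > (p!)^2\,(2p+1)(2p+2)   % inductive hypothesis
          \ge (p!)^2\,(p+1)(p+1)   % each new factor is at least p+1
          = ((p+1)!)^2.
```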
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2531542', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/240358/']}
jdg_78687
stackexchange
llm_judgeable_groundtruth_similarity
10644443
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to make use of the Magento Category thumbnail, but it is not working. I've followed many tutorials online (eg http://www.douglasradburn.co.uk/getting-category-thumbnail-images-with-magento/ ), and all of them make mention of the function:

$_category->getThumbnail()

which is supposed to be in the Category model. I'm running Magento 1.6 and I can't find this function anywhere. I've also downloaded 1.5 and 1.7, looked in there and it is nowhere to be found. When I run the code it gives me no errors however, just nothing is output. Here is my full code:

<ul id="nav">
    <?php foreach ($this->getStoreCategories() as $_category): ?>
        <?php echo $_category->getThumbnail(); ?>
        <?php echo $this->drawItem($_category) ?>
    <?php endforeach ?>
</ul>

(I am trying to use the thumbnail as a menu item where it is present) Got it working. The secret is you need to re-query for the FULL category data using this code:

Mage::getModel('catalog/category')->load($_category->getId())->getThumbnail()

I followed this tutorial somewhat: http://www.h-o.nl/blog/using_category_images_in_your_magento_navigation/ for having category thumbnails in your menu. thanks T Now provide the response and nothing else.
For what it's worth, your solution works but is quite inefficient. Using:

Mage::getModel('catalog/category')->load($_category->getId())->getThumbnail()

will add a few hundredths, maybe even tenths of a second per category to your page's load time. The reason for this is you've gone to the trouble of getting a model collection and getting the item within it, and then you'll be adding new database calls that fetch the full data for each category. You need to simply ensure you collect the full category data in the first place. The reason what you had before wasn't working is because the category collection wasn't told what attributes it needs to select. It was in effect just returning flat data from the catalog_category_entity table, not joined with any attribute tables. What you need to do is probably more along these lines:

<ul id="nav">
<?php foreach ($this->getStoreCategories()->addAttributeToSelect("*") as $_category): ?>
    <?php echo $_category->getThumbnail(); ?>
    <?php echo $this->drawItem($_category) ?>
<?php endforeach ?>
</ul>

In fact, ideally you want to override the ->getStoreCategories() function to add the wildcard filter. I recommend opening app/code/core/Mage/Eav/Model/Entity/Collection/Abstract.php and learning what sort of very cool collection functions have been written. Mastering EAV Collections is like a rite of passage for Magento developers. Once you do this you'll be unstoppable! Hope this helps.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10644443', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/425646/']}
jdg_78688
stackexchange
llm_judgeable_groundtruth_similarity
5532595
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that I must call one of the following before each call to glVertex:

glTexCoord(0,0);
glTexCoord(0,1);
glTexCoord(1,1);
glTexCoord(1,0);

But I have no idea what they mean. I know, however, that if I multiply (or is that divide?) the right side (or is it all the ones?) by two, my texture expands, and if I do the opposite, my texture repeats twice. I've managed to code a texture atlas by applying operations until it worked. But I have no proper idea about what's going on. Why does dividing these coordinates affect the image and why does reversing them mirror it? How do texture coordinates work? Now provide the response and nothing else.
Texture coordinates specify the point in the texture image that will correspond to the vertex you are specifying them for. Think of a rectangular rubber sheet with your texture image printed on it, where the length of each side is normalized to the range 0-1. Now let's say you wanted to draw a triangle using that texture. You'd take 3 pins and place them in the rubber sheet in the positions of each of your desired texture coordinates. (Say [0, 0], [1, 0] and [1, 1]) then move those pins (without taking them out) to your desired vertex coordinates (Say [0, 0], [0.5, 0] and [1, 1]), so that the rubber sheet is stretched out and the image is distorted. That's basically how texture coordinates work. If you use texture coordinates greater than 1 and your texture is set to repeat, then it's as if the rubber sheet was infinite in size and the texture was tiled across it. Therefore if your texture coordinates for two vertices were 0, 0 and 4, 0, then the image would have to be repeated 4 times between those vertices. @b1nary.atr0phy Image for all you visual thinkers!
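To make the wrap behaviour concrete, here is a small Python sketch (my own illustration, not OpenGL code): with GL_REPEAT, only the fractional part of the coordinate is used, and a coordinate in [0, 1) then scales to a texel column/row of the image.

```python
import math

def repeat_wrap(coord):
    # GL_REPEAT keeps only the fractional part of the coordinate.
    return coord - math.floor(coord)

def texcoord_to_texel(u, v, width, height):
    # Map a wrapped [0, 1) coordinate to a texel index.
    u, v = repeat_wrap(u), repeat_wrap(v)
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return x, y

# Coordinates 0.25 and 4.25 hit the same texel when the texture repeats.
print(texcoord_to_texel(0.25, 0.5, 256, 256))   # (64, 128)
print(texcoord_to_texel(4.25, 0.5, 256, 256))   # (64, 128)
```

Scaling all your texture coordinates by 2 means the same vertex span covers twice the coordinate range, so the image tiles twice; swapping 0 and 1 reverses the direction of that mapping, which is the mirroring you observed.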
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/5532595', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/154066/']}
jdg_78689
stackexchange
llm_judgeable_groundtruth_similarity
313194
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Now this question might sound (/ is) stupid, but I'm just getting into OpenGL.So I installed GLFW source and I'm looking at some easy examples. What I'm noticing is, that the code still contains "typical" OpenGL functions, like glColor or glVertex , but it also contains "glfw" functions like glfwSetErrorCallback . I don't really understand how OpenGL and GLFW (or something like GLUT) relate to each other. Khronos owns OpenGL, but do they program a library for people to use, or do they just specify an API, like for example there are several C++ libraries ( libc++ and libstdc++ ) which implement the same API specified by the C++ Consortium? I'm going to make an assumption and going to ask if it is right: Khronos programs the OpenGL library and specifies an API for it. GLUT, GLFW and freeglut wrap around (or better extend) that OpenGL library and offer additional behavior (I read that OpenGL doesn't offer much for support for sound or dealing with input). Which means that you could use OpenGL on its own, but it would make several things quite difficult. But it also means that you wrote something with GLFW, you could "port" it to using, e.g., freeglut, and you would only have to replace the functionality provided by glfw(...) functions, but the gl(...) woul stay the same. Is that right? Now provide the response and nothing else.
OpenGL is a library that provides services like 3D rendering but doesn't do anything like creating a window that you can render into. OpenGL doesn't know anything about windows, or mouse events, or keyboards, or fonts and so on. It only knows about rendering. Because you probably want to render things visible on the screen, you can't "use OpenGL on its own", because you wouldn't have anywhere to render. In fact, in order to do anything with OpenGL you need what is called an OpenGL context. The OpenGL library itself has no mechanism to create such contexts. Think of an OpenGL context as some internal data that represents the current state of the OpenGL system. Different platforms (such as Windows, Linux/X, OSX, iOS, Android) provide different mechanisms for creating contexts. For example, Windows has its own function calls that allow the creation of OpenGL contexts, and they are very different from how OSX creates a context. Additionally, Windows provides functions to support things like mouse events. This is where GLFW comes in: it provides a cross-platform library to handle the things that OpenGL doesn't handle, like creating contexts and handling mouse events. By and large it's not a wrapper library on top of OpenGL, though it does provide a small amount of functionality along those lines, such as automatically building mip-map levels.
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/313194', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/217255/']}
jdg_78690
stackexchange
llm_judgeable_groundtruth_similarity
6909
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This bit of code sets up a list with the first entry already in it -- {{1,1}}:

Clear[x, y]
x = 1
y = 1
datalist = List[List[x, y]]

Here is a slider by which I can alter x and y by depressing the mouse:

Slider2D[{x, y}, {{0, 0}, {5, 8}, {1, 2}}]

But when I release the mouse button nothing happens of course, except that {x,y} are at some new value inside the slider. Suppose I have moved the slider to location {3,6}. When I release the mouse button, I want {3,6} to be appended to datalist so it now becomes {{1,1},{3,6}}. When I am done adding data points I want to save datalist. How do I create this functionality while making as few changes/additions to the above code as possible? Now provide the response and nothing else.
This is a job for EventHandler:

EventHandler[
  Slider2D[Dynamic[{x, y}], {{0, 0}, {5, 8}, {1, 2}}],
  {"MouseUp" :> AppendTo[datalist, {x, y}]},
  PassEventsDown -> True]

Follow the data acquisition with a Dynamic:

Dynamic[datalist]

{{1,1},{3,4},{2,2},{2,4}}

PassEventsDown -> True is necessary to prevent EventHandler from eating up the MouseDrag events that are necessary to make the Slider2D move. To let Slider2D update x and y, the Dynamic is necessary. This is well documented in the Slider2D documentation.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/6909', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1508/']}
jdg_78691
stackexchange
llm_judgeable_groundtruth_similarity
33282889
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to get the raw http response from my API REST. I have tried with this interface:

@POST("/login")
@FormUrlEncoded
Call<retrofit.Response> login(@Field("username") String login, @Field("password") String pass, @Field("appName") String appName, @Field("appKey") String appKey);

But I get: java.lang.IllegalArgumentException: Unable to create call adapter for retrofit.Call for method Api.login I create Retrofit this way:

Retrofit.Builder retrofitBuilder = new Retrofit.Builder();
retrofitBuilder.addConverterFactory(JacksonConverterFactory.create());
Retrofit retrofitAdapter = retrofitBuilder.baseUrl(baseUrl).build();
return retrofitAdapter.create(apiClass);

Now provide the response and nothing else.
To get access to the raw response, use ResponseBody from okhttp as your call type.

Call<ResponseBody> login(...)

In your callback, you can check the response code with the code method of the response. This applies to any retrofit 2 return type, because your callback always gets a Response parameterized with your actual return type. For asynchronous --

Call<ResponseBody> myCall = myApi.login(...);
myCall.enqueue(new Callback<ResponseBody>() {
    @Override
    public void onResponse(Response<ResponseBody> response, Retrofit retrofit) {
        // access response code with response.code()
        // access string of the response with response.body().string()
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
});

for synchronous calls --

Response<ResponseBody> response = myCall.execute();
System.out.println("response code " + response.code());
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/33282889', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3026283/']}
jdg_78692
stackexchange
llm_judgeable_groundtruth_similarity
9349342
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an inquiry about the "Character set" option in Visual Studio. The Character Set options are: Not Set Use Unicode Character Set Use Multi-Byte Character Set I want to know what the difference between three options in Character Set? Also if I choose something of them, will affect the support for languages other than English (like RTL languages)? Now provide the response and nothing else.
It is a compatibility setting, intended for legacy code that was written for old versions of Windows that were not Unicode enabled (versions in the Windows 9x family; Windows ME was the last, and widely ignored, one). With "Not Set" or "Use Multi-Byte Character Set" selected, all Windows API functions that take a string as an argument are redefined to a little compatibility helper function that translates char* strings to wchar_t* strings, the API's native string type. Such code critically depends on the default system code page setting. The code page maps 8-bit characters to Unicode, which selects the font glyph. Your program will only produce correct text when the machine that runs your code has the correct code page. Characters whose value >= 128 will get rendered wrong if the code page doesn't match. Always select "Use Unicode Character Set" for modern code, especially when you want to support languages with a right-to-left layout and you don't have an Arabic or Hebrew code page selected on your dev machine. Use std::wstring or wchar_t[] in your code. Getting actual RTL layout requires turning on the WS_EX_RTLREADING style flag in the CreateWindowEx() call.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9349342', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/459943/']}
jdg_78693
stackexchange
llm_judgeable_groundtruth_similarity
94977
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In C# and in Java (and possibly other languages as well), variables declared in a "try" block are not in scope in the corresponding "catch" or "finally" blocks. For example, the following code does not compile:

try {
    String s = "test";
    // (more code...)
}
catch {
    Console.Out.WriteLine(s); // Java fans: think "System.out.println" here instead
}

In this code, a compile-time error occurs on the reference to s in the catch block, because s is only in scope in the try block. (In Java, the compile error is "s cannot be resolved"; in C#, it's "The name 's' does not exist in the current context".) The general solution to this issue seems to be to instead declare variables just before the try block, instead of within the try block:

String s;
try {
    s = "test";
    // (more code...)
}
catch {
    Console.Out.WriteLine(s); // Java fans: think "System.out.println" here instead
}

However, at least to me, (1) this feels like a clunky solution, and (2) it results in the variables having a larger scope than the programmer intended (the entire remainder of the method, instead of only in the context of the try-catch-finally). My question is, what were/are the rationale(s) behind this language design decision (in Java, in C#, and/or in any other applicable languages)? Now provide the response and nothing else.
Two things:

1. Scope in these languages is per block, not per function: a variable declared inside any pair of braces is only available within that block. The same holds for the exception variable in a catch clause, which goes out of scope as soon as the catch completes.

2. (and more importantly) You can't know where in the try block the exception was thrown. It may have been before your variable was declared. Therefore it is impossible to say which variables would be available in the catch/finally clause. Consider the following case, where scoping worked as you suggested:

try
{
    throw new ArgumentException("some operation that throws an exception");
    string s = "blah";
}
catch (ArgumentException e)
{
    Console.Out.WriteLine(s);
}

This clearly is a problem - when you reach the exception handler, s will not have been declared. Given that catches are meant to handle exceptional circumstances and finallys must execute, being safe and declaring this a problem at compile time is far better than at runtime.
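To see the usual workaround and the definite-assignment wrinkle in one runnable sketch (my own Java example, with a hypothetical class name, mirroring the pattern from the question): the variable is declared, and initialized to null, before the try, so both the catch block and the code after it can read it even when the throw happens before the assignment.

```java
public class TryScope {
    // s must be declared (and, to satisfy Java's definite-assignment rule,
    // initialized) before the try block if catch/finally needs it afterwards.
    static String attempt(boolean fail) {
        String s = null;
        try {
            if (fail) throw new RuntimeException("boom");
            s = "test";
        } catch (RuntimeException e) {
            // s is in scope here; it is still null if the throw happened
            // before the assignment inside the try.
            return "caught with s=" + s;
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(attempt(false)); // test
        System.out.println(attempt(true));  // caught with s=null
    }
}
```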
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/94977', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12484/']}
jdg_78694
stackexchange
llm_judgeable_groundtruth_similarity
31386592
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can't figure out why my UIMenuController doesn't show up in my view.When I long-press the button, I can see that all instructions are executed, but menu is not shown. public class MySingleArticleView : UIView{ private MyArticleViewController _controller; // MyArticleViewController extends BaseViewController public MySingleArticleView (MyArticleViewController controller, ...) { _controller = controller; // ... init code } void ArticleLongPressed (UILongPressGestureRecognizer gestureRecognizer) { var menuController = UIMenuController.SharedMenuController; var copyMenuItem = new UIMenuItem ("copy", new Selector ("CopyRow")); var pasteMenuItem = new UIMenuItem ("paste", new Selector ("PasteRow")); var location = gestureRecognizer.LocationInView (_controller.View); _controller.BecomeFirstResponder (); menuController.MenuItems = new [] { copyMenuItem, pasteMenuItem }; menuController.SetTargetRect (new CGRect (location.X, location.Y, 0, 0), _controller.View); menuController.SetMenuVisible (true, true); } [Export("CopyRow")] void Row (UIMenuController controller) { // do something } [Export("PasteRow")] void PasteRow (UIMenuController controller) { // do something }} What am I forgetting? Now provide the response and nothing else.
Question: As for your question concerning when shuffling is triggered on Spark?

Answer: Any join, cogroup, or ByKey operation involves holding objects in hashmaps or in-memory buffers to group or sort. join, cogroup, and groupByKey use these data structures in the tasks for the stages that are on the fetching side of the shuffles they trigger. reduceByKey and aggregateByKey use data structures in the tasks for the stages on both sides of the shuffles they trigger.

Explanation: How does the shuffle operation work in Spark?

The shuffle operation is implemented differently in Spark compared to Hadoop. I don't know if you are familiar with how it works with Hadoop, but let's focus on Spark for now.

On the map side, each map task in Spark writes out a shuffle file (OS disk buffer) for every reducer – which corresponds to a logical block in Spark. These files are not intermediary in the sense that Spark does not merge them into larger partitioned ones. Since scheduling overhead in Spark is lower, the number of mappers (M) and reducers (R) is far higher than in Hadoop. Thus, shipping M*R files to the respective reducers could result in significant overheads.

Similar to Hadoop, Spark also provides a parameter, spark.shuffle.compress, to specify compression libraries to compress map outputs. In this case, it could be Snappy (by default) or LZF. Snappy uses only 33KB of buffer for each opened file and significantly reduces the risk of encountering out-of-memory errors.

On the reduce side, Spark requires all shuffled data to fit into the memory of the corresponding reducer task, contrary to Hadoop, which had an option to spill this over to disk. This would of course happen only in cases where the reducer task demands all shuffled data for a GroupByKey or a ReduceByKey operation, for instance. Spark throws an out-of-memory exception in this case, which has proved quite a challenge for developers so far.
Also, Spark has no overlapping copy phase, unlike Hadoop, which has an overlapping copy phase where mappers push data to the reducers even before the map phase is complete. This means that the shuffle is a pull operation in Spark, compared to a push operation in Hadoop. Each reducer also has to maintain a network buffer to fetch map outputs. The size of this buffer is specified through the parameter spark.reducer.maxMbInFlight (by default, it is 48MB). For more information about shuffling in Apache Spark, I suggest the following readings: Optimizing Shuffle Performance in Spark by Aaron Davidson and Andrew Or. SPARK-751 JIRA issue and Consolidating Shuffle files by Jason Dai.
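As a toy model only (plain Python, not Spark's actual implementation), the map side of a hash shuffle can be pictured as each map task writing one bucket per reducer, keyed by hash(key) % R. With M map tasks this yields M*R shuffle "files", and each reducer then pulls its own bucket from every map task:

```python
def map_side_shuffle(records, num_reducers):
    """One 'map task': bucket (key, value) records by target reducer."""
    buckets = [[] for _ in range(num_reducers)]
    for key, value in records:
        # Same key always hashes to the same reducer within a run.
        buckets[hash(key) % num_reducers].append((key, value))
    return buckets

# Two map tasks and three reducers -> 2 * 3 = 6 shuffle "files".
task_outputs = [
    map_side_shuffle([("a", 1), ("b", 2), ("a", 3)], 3),
    map_side_shuffle([("b", 4), ("c", 5)], 3),
]

# Reduce side (the "pull"): reducer i fetches bucket i from every map task.
reducer_inputs = [
    [pair for out in task_outputs for pair in out[i]] for i in range(3)
]
```

This also shows why equal keys always meet at the same reducer, which is what makes a subsequent groupByKey or reduceByKey possible.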
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31386592', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5098136/']}
jdg_78695
stackexchange
llm_judgeable_groundtruth_similarity
12059
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Regarding the concept of device (browser) fingerprinting , would it be possible for the user to spoof fingerprinting data by giving random information such that your fingerprint changes after every request? If so, are there any plugins/browsers that have this capability? Now provide the response and nothing else.
As far as I can see, you can easily spoof all of the fingerprinting data, and potentially change it with every request. All this information is sent by your browser in the HTTP headers of the request (or obtained via browser client-side scripts). I am not aware of a particular plugin that supports this fully, but there are some plugins for changing your user-agent, manipulating headers and managing cookies. The user-agent is one of the main identifying pieces of information in the fingerprinting process, and most of the other pieces live in various other headers. Blocking javascript/flash scripts is also possible, e.g. using the noscript plugin. Changing this fingerprinting info however does come with a price. It might cause the website to display pages in a different way, which will cause the site to look wrong or not to function properly. Lots of website developers rely on (at least some of) this fingerprint information in order to make the pages they display compatible with your browser. Hiding/spoofing this info will therefore affect this process, and your browsing experience may be altered. It's also possible that this fingerprinting is performed for legitimate security reasons. For example, a security-aware website might tie the fingerprint info to the session or logged-in user, and will reject requests where the fingerprinting doesn't match (this would suggest to the server that a request is now coming from a different browser). Here's one more example from another question, where the website owner wants to test the browser plugin versions of your browser (which are part of the fingerprinting data) before allowing access. I don't know of any specific sites that use this technique, but it is definitely feasible. Changing fingerprint data with every request will again cause unwanted behaviour.
I suppose that performing some minor randomized tweaks to specific headers will achieve a better degree of anonymity against fingerprinting without compromising the browsing experience. This requires some experimentation and will probably mean a compromise between security and usability.
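A minimal sketch of that idea in Python (my own illustration; the header names are examples, not a vetted anonymity set): randomize a couple of high-signal headers per request while leaving everything else alone, so pages still render.

```python
import random

def spoofed_headers(base_headers, user_agents, accept_langs):
    """Return a copy of the headers with a couple of high-signal
    fields randomized; all other headers are left untouched."""
    headers = dict(base_headers)
    headers["User-Agent"] = random.choice(user_agents)
    headers["Accept-Language"] = random.choice(accept_langs)
    return headers

base = {"Accept": "text/html", "User-Agent": "real-browser/1.0"}
agents = ["AgentA/1.0", "AgentB/2.0"]
langs = ["en-US,en;q=0.9", "en-GB,en;q=0.8"]
sent = spoofed_headers(base, agents, langs)
```

Note the usability trade-off mentioned above still applies: if the advertised user-agent disagrees with the browser's real behaviour, sites that branch on it may serve the wrong markup.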
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/12059', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/3676/']}
jdg_78696
stackexchange
llm_judgeable_groundtruth_similarity
9398057
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to do a simple control: a container with a view inside. If I touch the container and I move the finger, I want to move the view to follow my finger. What kind of container (layout) should I use? How to do this? I don't need to use a surface, but a simple layout. Now provide the response and nothing else.
Something like this:

public class MyActivity extends Activity implements View.OnTouchListener {

    TextView _view;
    ViewGroup _root;
    private int _xDelta;
    private int _yDelta;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        _root = (ViewGroup) findViewById(R.id.root);
        _view = new TextView(this);
        _view.setText("TextView!!!!!!!!");

        RelativeLayout.LayoutParams layoutParams = new RelativeLayout.LayoutParams(150, 50);
        layoutParams.leftMargin = 50;
        layoutParams.topMargin = 50;
        layoutParams.bottomMargin = -250;
        layoutParams.rightMargin = -250;
        _view.setLayoutParams(layoutParams);
        _view.setOnTouchListener(this);
        _root.addView(_view);
    }

    public boolean onTouch(View view, MotionEvent event) {
        final int X = (int) event.getRawX();
        final int Y = (int) event.getRawY();
        switch (event.getAction() & MotionEvent.ACTION_MASK) {
            case MotionEvent.ACTION_DOWN:
                RelativeLayout.LayoutParams lParams = (RelativeLayout.LayoutParams) view.getLayoutParams();
                _xDelta = X - lParams.leftMargin;
                _yDelta = Y - lParams.topMargin;
                break;
            case MotionEvent.ACTION_UP:
                break;
            case MotionEvent.ACTION_POINTER_DOWN:
                break;
            case MotionEvent.ACTION_POINTER_UP:
                break;
            case MotionEvent.ACTION_MOVE:
                RelativeLayout.LayoutParams layoutParams = (RelativeLayout.LayoutParams) view.getLayoutParams();
                layoutParams.leftMargin = X - _xDelta;
                layoutParams.topMargin = Y - _yDelta;
                layoutParams.rightMargin = -250;
                layoutParams.bottomMargin = -250;
                view.setLayoutParams(layoutParams);
                break;
        }
        _root.invalidate();
        return true;
    }
}

In main.xml just a RelativeLayout with @+id/root
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/9398057', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1105637/']}
jdg_78697
stackexchange
llm_judgeable_groundtruth_similarity
5812493
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a set of data which looks something like this:

anim <- c(25499,25500,25501,25502,25503,25504)
sex <- c(1,2,2,1,2,1)
wt <- c(0.8,1.2,1.0,2.0,1.8,1.4)
data <- data.frame(anim,sex,wt)
data

   anim sex  wt anim2
1 25499   1 0.8     2
2 25500   2 1.2     2
3 25501   2 1.0     2
4 25502   1 2.0     2
5 25503   2 1.8     2
6 25504   1 1.4     2

I would like a zero to be added before each animal id:

data
    anim sex  wt anim2
1 025499   1 0.8     2
2 025500   2 1.2     2
3 025501   2 1.0     2
4 025502   1 2.0     2
5 025503   2 1.8     2
6 025504   1 1.4     2

And for interest sake, what if I need to add two or three zeros before the animal id's? Now provide the response and nothing else.
The short version: use formatC or sprintf.

The longer version: There are several functions available for formatting numbers, including adding leading zeroes. Which one is best depends upon what other formatting you want to do. The example from the question is quite easy since all the values have the same number of digits to begin with, so let's try a harder example of making powers of 10 width 8 too.

anim <- 25499:25504
x <- 10 ^ (0:5)

paste (and its variant paste0) are often the first string manipulation functions that you come across. They aren't really designed for manipulating numbers, but they can be used for that. In the simple case where we always have to prepend a single zero, paste0 is the best solution.

paste0("0", anim)
## [1] "025499" "025500" "025501" "025502" "025503" "025504"

In the case where there are a variable number of digits in the numbers, you have to manually calculate how many zeroes to prepend, which is horrible enough that you should only do it out of morbid curiosity.

str_pad from stringr works similarly to paste, making it more explicit that you want to pad things.

library(stringr)
str_pad(anim, 6, pad = "0")
## [1] "025499" "025500" "025501" "025502" "025503" "025504"

Again, it isn't really designed for use with numbers, so the harder case requires a little thinking about. We ought to just be able to say "pad with zeroes to width 8", but look at this output:

str_pad(x, 8, pad = "0")
## [1] "00000001" "00000010" "00000100" "00001000" "00010000" "0001e+05"

You need to set the scientific penalty option so that numbers are always formatted using fixed notation (rather than scientific notation).

library(withr)
with_options(
  c(scipen = 999),
  str_pad(x, 8, pad = "0")
)
## [1] "00000001" "00000010" "00000100" "00001000" "00010000" "00100000"

stri_pad in stringi works exactly like str_pad from stringr.

formatC is an interface to the C function printf. Using it requires some knowledge of the arcana of that underlying function (see link). In this case, the important points are the width argument, format being "d" for "integer", and a "0" flag for prepending zeroes.

formatC(anim, width = 6, format = "d", flag = "0")
## [1] "025499" "025500" "025501" "025502" "025503" "025504"
formatC(x, width = 8, format = "d", flag = "0")
## [1] "00000001" "00000010" "00000100" "00001000" "00010000" "00100000"

This is my favourite solution, since it is easy to tinker with changing the width, and the function is powerful enough to make other formatting changes.

sprintf is an interface to the C function of the same name; like formatC but with a different syntax.

sprintf("%06d", anim)
## [1] "025499" "025500" "025501" "025502" "025503" "025504"
sprintf("%08d", x)
## [1] "00000001" "00000010" "00000100" "00001000" "00010000" "00100000"

The main advantage of sprintf is that you can embed formatted numbers inside longer bits of text.

sprintf(
  "Animal ID %06d was a %s.",
  anim,
  sample(c("lion", "tiger"), length(anim), replace = TRUE)
)
## [1] "Animal ID 025499 was a tiger." "Animal ID 025500 was a tiger."
## [3] "Animal ID 025501 was a lion."  "Animal ID 025502 was a tiger."
## [5] "Animal ID 025503 was a tiger." "Animal ID 025504 was a lion."

See also goodside's answer. For completeness it is worth mentioning the other formatting functions that are occasionally useful, but have no method of prepending zeroes.

format, a generic function for formatting any kind of object, with a method for numbers. It works a little bit like formatC, but with yet another interface.

prettyNum is yet another formatting function, mostly for creating manual axis tick labels. It works particularly well for wide ranges of numbers.

The scales package has several functions such as percent, date_format and dollar for specialist format types.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/5812493', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/438498/']}
jdg_78698
stackexchange
llm_judgeable_groundtruth_similarity
219028
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How do I make sure my REST API only responds to requests generated by trusted clients, in my case my own mobile applications? I want to prevent unwanted requests coming from other sources. I don't want users to fill in a serial key or whatever, it should happen behind the scenes, upon installation, and without any user interaction required. As far as I know, HTTPS is only to validate the server you are communicating with is who it says it is. I'm ofcourse going to be using HTTPS to encrypt the data. Is there a way to accomplish this? Update: The user can perform read-only actions, which do not require the user to be logged in, but they can also perform write actions, which do require the user to be logged in (Authentication by Access Token). In both cases I want the API to respond to requests coming only from trusted mobile applications. The API will also be used for registering a new account through the mobile application. Update 2: It seems like there are multiple answers to this, but I honestly don't know which one to flag as the answer. Some say it can be done, some say it can't. Now provide the response and nothing else.
You Can't. You can never verify an entity, any entity, be it a person, hardware client or software client. You can only verify that what they are telling you is correct, then assume honesty. For example, how does Google know it is me logging into my Gmail account? They simply ask me for a user name and password, verify that, then assume honesty, because who else would have that info? At some point Google decided that this was not enough and added behavioral verification (looking for odd behavior), but that is still relying on the person to do the behavior, then validating the behavior. This is exactly the same thing with validating the Client. You can only validate the behavior of the Client, but not the Client itself. So with SSL, you can verify whether the Client has a valid cert or not, so one can simply install your App, get the Cert, then run all-new code. So the question is: why is this so critical? If this is a real concern, I would question your choice of a fat client. Perhaps you should go with a web App (so you don't have to expose your API). Also see: Defeating SSL Certificate Validation for Android Applications and: How safe are client SSL certificates in a mobile app?
{}
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/219028', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/106175/']}
jdg_78699
stackexchange
llm_judgeable_groundtruth_similarity
23089964
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In one of our first CS lectures on security we were walked through C's issue with not checking alleged buffer lengths and some examples of the different ways in which this vulnerability could be exploited. In this case, it looks like it was a case of a malicious read operation, where the application just read out however many bytes of memory Am I correct in asserting that the Heartbleed bug is a manifestation of the C buffer length checking issue? Why didn't the malicious use cause a segmentation fault when it tried to read another application's memory? Would simply zero-ing the memory before writing to it (and then subsequently reading from it) have caused a segmentation fault? Or does this vary between operating systems? Or between some other environmental factor? Apparently exploitations of the bug cannot be identified. Is that because the heartbeat function does not log when called? Otherwise surely any request for a ~64k string is likely to be malicious? Now provide the response and nothing else.
Am I correct in asserting that the Heartbleed bug is a manifestation of the C buffer length checking issue? Yes. Is the heartbleed bug a manifestation of the classic buffer overflow exploit in C? No. The "classic" buffer overflow is one where you write more data into a stack-allocated buffer than it can hold, where the data written is provided by the hostile agent. The hostile data overflows the buffer and overwrites the return address of the current method. When the method ends it then returns to an address containing code of the attacker's choice and starts executing it. The heartbleed defect by contrast does not overwrite a buffer and does not execute arbitrary code, it just reads out of bounds in code that is highly likely to have sensitive data nearby in memory. Why didn't the malicious use cause a segmentation fault when it tried to read another application's memory? It did not try to read another application's memory. The exploit reads memory of the current process, not another process. Why didn't the malicious use cause a segmentation fault when it tried to read memory out of bounds of the buffer? This is a duplicate of this question: Why does this not give a segmentation violation fault? A segmentation fault means that you touched a page that the operating system memory manager has not allocated to you. The bug here is that you touched data on a valid page that the heap manager has not allocated to you . As long as the page is valid, you won't get a segfault. Typically the heap manager asks the OS for a big hunk of memory, and then divides that up amongst different allocations. All those allocations are then on valid pages of memory as far as the operating system is concerned. Dereferencing null is a segfault simply because the operating system never makes the page that contains the zero pointer a valid page. 
More generally: the compiler and runtime are not required to ensure that undefined behaviour results in a segfault; UB can result in any behaviour whatsoever, and that includes doing nothing. For more thoughts on this matter see: Can a local variable's memory be accessed outside its scope? For both me complaining that UB should always be the equivalent of a segfault in security-critical code, as well as some pointers to a discussion on static analysis of the vulnerability, see today's blog article: http://ericlippert.com/2014/04/15/heartbleed-and-static-analysis/ Would simply zero-ing the memory before writing to it (and then subsequently reading from it) have caused a segmentation fault? Unlikely. If reading out of bounds doesn't cause a segfault then writing out of bounds is unlikely to. It is possible that a page of memory is read-only, but in this case it seems unlikely. Of course, the later consequences of zeroing out all kinds of memory that you should not are seg faults all over the show. If there's a pointer in that zeroed out memory that you later dereference, that's dereferencing null which will produce a segfault. does this vary between operating systems? The question is vague. Let me rephrase it. Do different operating systems and different C/C++ runtime libraries provide differing strategies for allocating virtual memory, allocating heap memory, and identifying when memory access goes out of bounds? Yes; different things are different . Or between some other environmental factor? Such as? Apparently exploitations of the bug cannot be identified. Is that because the heartbeat function does not log when called? Correct. surely any request for a ~64k string is likely to be malicious? I'm not following your train of thought. What makes the request likely malicious is a mismatch between bytes sent and bytes requested to be echoed, not the size of the data asked to be echoed.
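To make the length mismatch concrete, here is a deliberately simplified toy model in Python (this is not OpenSSL's actual code; the single bytes object standing in for "process memory" and all names are illustrative assumptions). The unpatched handler trusts the client's claimed payload length; the patched one checks it against the real length, which is the essence of the fix:

```python
SECRET = b"-----BEGIN PRIVATE KEY-----"
# Toy "process memory": the 5-byte heartbeat payload sits right next to
# other heap data, just as real allocations share valid pages in a process.
memory = b"HELLO" + SECRET
payload_len = 5  # actual length of the payload the client sent

def heartbeat_unpatched(claimed_len: int) -> bytes:
    # Bug: echo back claimed_len bytes, trusting the client's number.
    # (In C this read past the payload is not a segfault as long as the
    # bytes are on a valid page; Python slicing merely clamps, so the
    # adjacent-data model here is an analogy, not literal semantics.)
    return memory[:claimed_len]

def heartbeat_patched(claimed_len: int) -> bytes:
    # Fix: refuse if the claimed length exceeds the real payload length.
    if claimed_len > payload_len:
        raise ValueError("heartbeat length exceeds actual payload")
    return memory[:claimed_len]

leak = heartbeat_unpatched(20)   # reads past the 5-byte payload...
assert SECRET[:15] in leak       # ...and leaks adjacent "heap" data
```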
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23089964', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/214470/']}
jdg_78700
stackexchange
llm_judgeable_groundtruth_similarity
3921162
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wish to send some components to my customers. The reasons I want to deliver source code are: 1) My class is templatized. Customer might use any template argument, so I can't pre-compile and send .o file. 2) The customer might use different compiler versions for gcc than mine. So I want him to do compilation at his end. Now, I can't reveal my source code for obvious reasons. The max I can do is to reveal the .h file. Any ideas how I may achieve this. I am thinking about some hooks in gcc that supports decryption before compilation, etc. Is this possible? In short, I want him to be able to compile this code without being able to peek inside. Now provide the response and nothing else.
Contract = good, obfuscation = ungood. That said, you can always do a kind of PIMPL idiom to serve your customer with binaries and just templated wrappers in the header(s). The idea is then to use an "untyped" separately compiled implementation, where the templated wrapper just provides type safety for client code. That's how one often did things before compilers started to understand how to optimize templates, that is, to avoid machine-code level code bloat, but it only provides some measure of protection about trivial copy-and-paste theft, not any protection against someone willing to delve into the machine code. But perhaps the effort is then greater than just reinventing your functionality?
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3921162', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/219899/']}
jdg_78701
stackexchange
llm_judgeable_groundtruth_similarity
114014
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let's define a three-dimensional surface Clear["Global`*"];Φcl = (-G*Mcl)/Sqrt[x^2 + y^2 + z^2 + a^2];Φeff = Φcl + 1/2*(κ2 - 4*ω^2)*x^2 + 1/2*v2*z^2;G = 1; Mcl = 2.2; a = 0.182;κ2 = 1.8; ω = 1; v2 = 7.6;E0 = -3.2; and create the corresponding contour plot rm = 1;P0 = ContourPlot3D[Φeff == E0, {x, -rm, rm}, {y, -rm, rm}, {z, -rm, rm}, Mesh -> None] Now I want the following: Define $N$ $(x,y,z)$ points, let's say $N = 10000$, equally placed on this 3D surface. In other words, I want to recreate this surface using $N$ points to cover the surface fully. Of course, the inside of this surface will be hollow again. I am using version 9.0 of Mathematica. Any suggestions? Now provide the response and nothing else.
Well, you can use DiscretizeGraphics and RandomPoint to achieve what you want: P0 = ContourPlot3D[Φeff == E0, {x, -rm, rm}, {y, -rm, rm}, {z, -rm, rm}, Mesh -> None, Lighting -> None]; Note the Lighting -> None option, this is to circumvent a bug in DiscretizeGraphics that the good people at Wolfram refuse to fix. gg = DiscretizeGraphics[P0];pts = RandomPoint[gg, 30000]; (* increase the number of points if you like *)
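For readers without Mathematica, the idea can be approximated in plain Python with rejection sampling (an assumption-laden sketch, not what `RandomPoint` does: `RandomPoint` samples the discretized mesh uniformly, whereas this keeps random box points lying in a thin shell |Φeff − E0| < tol around the implicit surface):

```python
import math
import random

# Same constants as the question.
G, Mcl, a = 1.0, 2.2, 0.182
k2, w, v2 = 1.8, 1.0, 7.6
E0, rm, tol = -3.2, 1.0, 0.05

def phi_eff(x, y, z):
    # Phi_cl plus the rotating-frame and vertical quadratic terms.
    phi_cl = -G * Mcl / math.sqrt(x*x + y*y + z*z + a*a)
    return phi_cl + 0.5 * (k2 - 4.0*w*w) * x*x + 0.5 * v2 * z*z

random.seed(0)
pts = []
for _ in range(50000):
    x, y, z = (random.uniform(-rm, rm) for _ in range(3))
    if abs(phi_eff(x, y, z) - E0) < tol:   # keep points near the surface
        pts.append((x, y, z))
```

Shrinking `tol` tightens the shell around the true surface at the cost of a lower acceptance rate.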
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/114014', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/5052/']}
jdg_78702
stackexchange
llm_judgeable_groundtruth_similarity
6996764
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given a vector of three (or four) floats. What is the fastest way to sum them? Is SSE (movaps, shuffle, add, movd) always faster than x87? Are the horizontal-add instructions in SSE3 worth it? What's the cost to moving to the FPU, then faddp, faddp? What's the fastest specific instruction sequence? "Try to arrange things so you can sum four vectors at a time" will not be accepted as an answer. :-) e.g. for summing an array, you can use multiple vector accumulators for vertical sums (to hide addps latency), and reduce down to one after the loop, but then you need to horizontally sum that last vector. Now provide the response and nothing else.
In general for any kind of vector horizontal reduction, extract / shuffle high half to line up with low, then vertical add (or min/max/or/and/xor/multiply/whatever); repeat until there's just a single element (with high garbage in the rest of the vector). If you start with vectors wider than 128-bit, narrow in half until you get to 128 (then you can use one of the functions in this answer on that vector). But if you need the result broadcast to all elements at the end, then you can consider doing full-width shuffles all the way. Related Q&As for wider vectors, and integers, and FP __m128 and __m128d This answer (see below) __m256d with perf analysis for Ryzen 1 vs. Intel (showing why vextractf128 is vastly better than vperm2f128 ) Get sum of values stored in __m256d with SSE/AVX __m256 How to sum __m256 horizontally? Intel AVX: 256-bits version of dot product for double precision floating point variables of single vectors. Dot product of arrays (not just a single vector of 3 or 4 elements): do vertical mul/add or FMA into multiple accumulators , and hsum at the end. Complete AVX+FMA array dot-product example , including an efficient hsum after the loop. (For the simple sum or other reduction of an array, use that pattern but without the multiply part, e.g. add instead of fma). Do not do the horizontal work separately for each SIMD vector; do it once at the end. How to count character occurrences using SIMD as an integer example of counting _mm256_cmpeq_epi8 matches, again over a whole array, only hsumming at the end. (Worth special mention for doing some 8-bit accumulation then widening 8 -> 64-bit to avoid overflow without doing a full hsum at that point.) Integer __m128i 32-bit elements: this answer (see below). 64-bit elements should be obvious: only one pshufd/paddq step. __m128i 8-bit unsigned uint8_t elements without wrapping/overflow: psadbw against _mm_setzero_si128() , then hsum the two qword halves (or 4 or 8 for wider vectors).
Fastest way to horizontally sum SSE unsigned byte vector shows 128-bit with SSE2. Summing 8-bit integers in __m512i with AVX intrinsics has an AVX512 example. How to count character occurrences using SIMD has an AVX2 __m256i example. (For int8_t signed bytes you can XOR set1_epi8(0x80) to flip to unsigned before SAD, then subtract the bias from the final hsum; see details here , also showing an optimization for doing only 9 bytes from memory instead of 16). 16-bit unsigned: _mm_madd_epi16 with set1_epi16(1) is a single-uop widening horizontal add: SIMD: Accumulate Adjacent Pairs . Then proceed with a 32-bit hsum. __m256i and __m512i with 32-bit elements. Fastest method to calculate sum of all packed 32-bit integers using AVX512 or AVX2 . For AVX512, Intel added a bunch of "reduce" inline functions (not hardware instructions) that do this for you, like _mm512_reduce_add_ps (and pd, epi32, and epi64). Also reduce_min/max/mul/and/or. Doing it manually leads to basically the same asm. horizontal max (instead of add): Getting max value in a __m128i vector with SSE? Main answer to this question: mostly float and __m128 Here are some versions tuned based on Agner Fog's microarch guide and instruction tables. See also the x86 tag wiki. They should be efficient on any CPU, with no major bottlenecks. (e.g. I avoided things that would help one uarch a bit but be slow on another uarch). Code-size is also minimized.
(Except for 8-bit integer, then vpsadbw as a first step if you want to hsum without overflow to wider elements.) See the asm output from all this code on the Godbolt Compiler Explorer . See also my improvements to Agner Fog's C++ Vector Class Library horizontal_add functions. ( message board thread , and code on github ). I used CPP macros to select optimal shuffles for code-size for SSE2, SSE4, and AVX, and for avoiding movdqa when AVX isn't available. There are tradeoffs to consider: code size: smaller is better for L1 I-cache reasons, and for code fetch from disk (smaller binaries). Total binary size mostly matters for compiler decisions made repeatedly all over a program. If you're bothering to hand-code something with intrinsics, it's worth spending a few code bytes if it gives any speedup for the whole program (be careful of microbenchmarks that make unrolling look good). uop-cache size: Often more precious than L1 I$. 4 single-uop instructions can take less space than 2 haddps , so this is highly relevant here. latency: Sometimes relevant throughput (back-end ports): usually irrelevant, horizontal sums shouldn't be the only thing in an innermost loop. Port pressure matters only as part of the whole loop that contains this. throughput (total front-end fused-domain uops): If surrounding code doesn't bottleneck on the same port that the hsum uses, this is a proxy for the impact of the hsum on the throughput of the whole thing. When a horizontal add is infrequent : CPUs with no uop-cache might favour 2x haddps if it's very rarely used: It's slowish when it does run, but that's not often. Being only 2 instructions minimizes the impact on the surrounding code (I$ size). CPUs with a uop-cache will probably favour something that takes fewer uops, even if it's more instructions / more x86 code-size. 
Total uops cache-lines used is what we want to minimize, which isn't as simple as minimizing total uops (taken branches and 32B boundaries always start a new uop cache line). Anyway, with that said, horizontal sums come up a lot , so here's my attempt at carefully crafting some versions that compile nicely. Not benchmarked on any real hardware, or even carefully tested. There might be bugs in the shuffle constants or something. If you're making a fallback / baseline version of your code, remember that only old CPUs will run it ; newer CPUs will run your AVX version, or SSE4.1 or whatever. Old CPUs like K8, and Core2(merom) and earlier only have 64bit shuffle units . Core2 has 128bit execution units for most instructions, but not for shuffles. (Pentium M and K8 handle all 128b vector instructions as two 64bit halves). Shuffles like movhlps that move data in 64-bit chunks (no shuffling within 64-bit halves) are fast, too. Related: shuffles on new CPUs, and tricks for avoiding 1/clock shuffle throughput bottleneck on Haswell and later: Do 128bit cross lane operations in AVX512 give better performance? On old CPUs with slow shuffles : movhlps (Merom: 1uop) is significantly faster than shufps (Merom: 3uops). On Pentium-M, cheaper than movaps . Also, it runs in the FP domain on Core2, avoiding the bypass delays from other shuffles. unpcklpd is faster than unpcklps . pshufd is slow, pshuflw / pshufhw are fast (because they only shuffle a 64bit half) pshufb mm0 (MMX) is fast, pshufb xmm0 is slow. haddps is very slow (6uops on Merom and Pentium M) movshdup (Merom: 1uop) is interesting : It's the only 1uop insn that shuffles within 64b elements. shufps on Core2(including Penryn) brings data into the integer domain, causing a bypass delay to get it back to the FP execution units for addps , but movhlps is entirely in the FP domain. shufpd also runs in the float domain. movshdup runs in the integer domain, but is only one uop. 
AMD K10, Intel Core2(Penryn/Wolfdale), and all later CPUs, run all xmm shuffles as a single uop. (But note the bypass delay with shufps on Penryn, avoided with movhlps ) Without AVX, avoiding wasted movaps / movdqa instructions requires careful choice of shuffles . Only a few shuffles work as a copy-and-shuffle, rather than modifying the destination. Shuffles that combine data from two inputs (like unpck* or movhlps ) can be used with a tmp variable that's no longer needed instead of _mm_movehl_ps(same,same) . Some of these can be made faster (save a MOVAPS) but uglier / less "clean" by taking a dummy arg for use as a destination for an initial shuffle. For example: // Use dummy = a recently-dead variable that vec depends on,// so it doesn't introduce a false dependency,// and the compiler probably still has it in a register__m128d highhalf_pd(__m128d dummy, __m128d vec) {#ifdef __AVX__ // With 3-operand AVX instructions, don't create an extra dependency on something we don't need anymore. 
(void)dummy; return _mm_unpackhi_pd(vec, vec);#else // Without AVX, we can save a MOVAPS with MOVHLPS into a dead register __m128 tmp = _mm_castpd_ps(dummy); __m128d high = _mm_castps_pd(_mm_movehl_ps(tmp, _mm_castpd_ps(vec))); return high;#endif} SSE1 (aka SSE): float hsum_ps_sse1(__m128 v) { // v = [ D C | B A ] __m128 shuf = _mm_shuffle_ps(v, v, _MM_SHUFFLE(2, 3, 0, 1)); // [ C D | A B ] __m128 sums = _mm_add_ps(v, shuf); // sums = [ D+C C+D | B+A A+B ] shuf = _mm_movehl_ps(shuf, sums); // [ C D | D+C C+D ] // let the compiler avoid a mov by reusing shuf sums = _mm_add_ss(sums, shuf); return _mm_cvtss_f32(sums);} # gcc 5.3 -O3: looks optimal movaps xmm1, xmm0 # I think one movaps is unavoidable, unless we have a 2nd register with known-safe floats in the upper 2 elements shufps xmm1, xmm0, 177 addps xmm0, xmm1 movhlps xmm1, xmm0 # note the reuse of shuf, avoiding a movaps addss xmm0, xmm1 # clang 3.7.1 -O3: movaps xmm1, xmm0 shufps xmm1, xmm1, 177 addps xmm1, xmm0 movaps xmm0, xmm1 shufpd xmm0, xmm0, 1 addss xmm0, xmm1 I reported a clang bug about pessimizing the shuffles . It has its own internal representation for shuffling, and turns that back into shuffles. gcc more often uses the instructions that directly match the intrinsic you used. Often clang does better than gcc, in code where the instruction choice isn't hand-tuned, or constant-propagation can simplify things even when the intrinsics are optimal for the non-constant case. Overall it's a good thing that compilers work like a proper compiler for intrinsics, not just an assembler. Compilers can often generate good asm from scalar C that doesn't even try to work the way good asm would. Eventually compilers will treat intrinsics as just another C operator as input for the optimizer. 
SSE3 float hsum_ps_sse3(__m128 v) { __m128 shuf = _mm_movehdup_ps(v); // broadcast elements 3,1 to 2,0 __m128 sums = _mm_add_ps(v, shuf); shuf = _mm_movehl_ps(shuf, sums); // high half -> low half sums = _mm_add_ss(sums, shuf); return _mm_cvtss_f32(sums);} # gcc 5.3 -O3: perfectly optimal code movshdup xmm1, xmm0 addps xmm0, xmm1 movhlps xmm1, xmm0 addss xmm0, xmm1 This has several advantages: doesn't require any movaps copies to work around destructive shuffles (without AVX): movshdup xmm1, xmm2 's destination is write-only, so it creates tmp out of a dead register for us. This is also why I used movehl_ps(tmp, sums) instead of movehl_ps(sums, sums) . small code-size. The shuffling instructions are small: movhlps is 3 bytes, movshdup is 4 bytes (same as shufps ). No immediate byte is required, so with AVX, vshufps is 5 bytes but vmovhlps and vmovshdup are both 4. I could save another byte with addps instead of addss . Since this won't be used inside inner loops, the extra energy to switch the extra transistors is probably negligible. FP exceptions from the upper 3 elements aren't a risk, because all elements hold valid FP data. However, clang/LLVM actually "understands" vector shuffles, and emits better code if it knows that only the low element matters. Like the SSE1 version, adding the odd elements to themselves may cause FP exceptions (like overflow) that wouldn't happen otherwise, but this shouldn't be a problem. Denormals are slow, but IIRC producing a +Inf result isn't on most uarches. SSE3 optimizing for code-size If code-size is your major concern, two haddps ( _mm_hadd_ps ) instructions will do the trick (Paul R's answer). This is also the easiest to type and remember. It is not fast , though. Even Intel Skylake still decodes each haddps to 3 uops, with 6 cycle latency. So even though it saves machine-code bytes (L1 I-cache), it takes up more space in the more-valuable uop-cache. 
Real use-cases for haddps : a transpose-and-sum problem , or doing some scaling at an intermediate step in this SSE atoi() implementation . AVX: This version saves a code byte vs. Marat's answer to the AVX question . #ifdef __AVX__float hsum256_ps_avx(__m256 v) { __m128 vlow = _mm256_castps256_ps128(v); __m128 vhigh = _mm256_extractf128_ps(v, 1); // high 128 vlow = _mm_add_ps(vlow, vhigh); // add the low 128 return hsum_ps_sse3(vlow); // and inline the sse3 version, which is optimal for AVX // (no wasted instructions, and all of them are the 4B minimum)}#endif vmovaps xmm1,xmm0 # huh, what the heck gcc? Just extract to xmm1 vextractf128 xmm0,ymm0,0x1 vaddps xmm0,xmm1,xmm0 vmovshdup xmm1,xmm0 vaddps xmm0,xmm1,xmm0 vmovhlps xmm1,xmm1,xmm0 vaddss xmm0,xmm0,xmm1 vzeroupper ret Double-precision: double hsum_pd_sse2(__m128d vd) { // v = [ B | A ] __m128 undef = _mm_undefined_ps(); // don't worry, we only use addSD, never touching the garbage bits with an FP add __m128 shuftmp= _mm_movehl_ps(undef, _mm_castpd_ps(vd)); // there is no movhlpd __m128d shuf = _mm_castps_pd(shuftmp); return _mm_cvtsd_f64(_mm_add_sd(vd, shuf));}# gcc 5.3.0 -O3 pxor xmm1, xmm1 # hopefully when inlined, gcc could pick a register it knew wouldn't cause a false dep problem, and avoid the zeroing movhlps xmm1, xmm0 addsd xmm0, xmm1# clang 3.7.1 -O3 again doesn't use movhlps: xorpd xmm2, xmm2 # with #define _mm_undefined_ps _mm_setzero_ps movapd xmm1, xmm0 unpckhpd xmm1, xmm2 addsd xmm1, xmm0 movapd xmm0, xmm1 # another clang bug: wrong choice of operand order// This doesn't compile the way it's writtendouble hsum_pd_scalar_sse2(__m128d vd) { double tmp; _mm_storeh_pd(&tmp, vd); // store the high half double lo = _mm_cvtsd_f64(vd); // cast the low half return lo+tmp;} # gcc 5.3 -O3 haddpd xmm0, xmm0 # Lower latency but less throughput than storing to memory # ICC13 movhpd QWORD PTR [-8+rsp], xmm0 # only needs the store port, not the shuffle unit addsd xmm0, QWORD PTR [-8+rsp] Storing to memory and 
back avoids an ALU uop. That's good if shuffle port pressure, or ALU uops in general, are a bottleneck. (Note that it doesn't need to sub rsp, 8 or anything because the x86-64 SysV ABI provides a red-zone that signal handlers won't step on.) Some people store to an array and sum all the elements, but compilers usually don't realize that the low element of the array is still there in a register from before the store. Integer: pshufd is a convenient copy-and-shuffle. Bit and byte shifts are unfortunately in-place, and punpckhqdq puts the high half of the destination in the low half of the result, opposite of the way movhlps can extract the high half into a different register. Using movhlps for the first step might be good on some CPUs, but only if we have a scratch reg. pshufd is a safe choice, and fast on everything after Merom. int hsum_epi32_sse2(__m128i x) {#ifdef __AVX__ __m128i hi64 = _mm_unpackhi_epi64(x, x); // 3-operand non-destructive AVX lets us save a byte without needing a mov#else __m128i hi64 = _mm_shuffle_epi32(x, _MM_SHUFFLE(1, 0, 3, 2));#endif __m128i sum64 = _mm_add_epi32(hi64, x); __m128i hi32 = _mm_shufflelo_epi16(sum64, _MM_SHUFFLE(1, 0, 3, 2)); // Swap the low two elements __m128i sum32 = _mm_add_epi32(sum64, hi32); return _mm_cvtsi128_si32(sum32); // SSE2 movd //return _mm_extract_epi32(hl, 0); // SSE4, even though it compiles to movd instead of a literal pextrd r32,xmm,0} # gcc 5.3 -O3 pshufd xmm1,xmm0,0x4e paddd xmm0,xmm1 pshuflw xmm1,xmm0,0x4e paddd xmm0,xmm1 movd eax,xmm0int hsum_epi32_ssse3_slow_smallcode(__m128i x){ x = _mm_hadd_epi32(x, x); x = _mm_hadd_epi32(x, x); return _mm_cvtsi128_si32(x);} On some CPUs, it's safe to use FP shuffles on integer data. I didn't do this, since on modern CPUs that will at most save 1 or 2 code bytes, with no speed gains (other than code size/alignment effects).
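Stepping back from the intrinsics: every variant above is the same "move the high half down, do a vertical op, halve the width, repeat" pattern, a log2(N)-step pairwise reduction. A scalar Python model of just that control flow (illustrative only; it says nothing about uops, ports, or latency):

```python
from operator import add

def hreduce(v, op=add):
    """Pairwise horizontal reduction: combine element i with i + len/2,
    halving the vector each step, like movhlps/shufps + addps."""
    assert len(v) > 0 and (len(v) & (len(v) - 1)) == 0, "power-of-two length"
    while len(v) > 1:
        half = len(v) // 2
        # "shuffle high half down" + vertical op in one comprehension
        v = [op(v[i], v[i + half]) for i in range(half)]
    return v[0]

print(hreduce([2.0, 3.0, 4.0, 5.0]))   # 14.0
print(hreduce([7, 1, 9, 4], op=max))   # 9 -- same pattern does horizontal max
```

Swapping `op` is the scalar analogue of reusing the shuffle skeleton with addps / maxps / pand, etc.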
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6996764', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/59044/']}
jdg_78703
stackexchange
llm_judgeable_groundtruth_similarity
220185
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: It is my understanding that HMAC is a symmetric signing algorithm (single secret key) whereas RSA is an asymmetric signing algorithm (private/public key pair). I am trying to choose between these 2 methods for signing JSON Web Tokens. However, I am a little bit confused about the use case of HMAC. If both the clients (users) and the server share a key, what is stopping the client from changing the token's payload fields (perhaps the subject to another user's id) and then resigning the token client-side. Clearly this would be bad. Does HMAC only guarantee that the data was signed by someone with the secret key, and not just a single entity as with RSA? What use case does this actually have, perhaps I am misunderstanding something? Or perhaps the secret key isn't even shared with clients at all? Now provide the response and nothing else.
HMAC is used to protect against manipulation by someone who does not have access to the secret. Typically this means protecting against manipulation by the client if the secret is only known to the server, or protecting against manipulation in transit if the secret is known to both client and server. An RSA based signature is used to protect against manipulation and also to allow others to verify the integrity and source of the data without being able to manipulate the data. This can be done because the private key is only known to the provider and signer of the data, while the public key is known to everybody who wants to verify the integrity. What to choose thus depends on your specific use case. If the server just needs to protect the token against manipulation by the client, then using HMAC with a server-side secret is enough. If instead it also needs to be proven to others that the token was created by a specific (trusted) server, then RSA based signatures should be used.
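As a small illustration of the server-side-secret case, a minimal Python sketch using the standard library's hmac module (not a full JWT implementation; the key and payload are hypothetical). The server can detect any client tampering, but anyone who holds the key could re-sign a modified payload, which is exactly why the key must never ship to clients:

```python
import hashlib
import hmac

SERVER_KEY = b"server-side-secret"  # hypothetical key, never sent to clients

def sign(payload: bytes) -> bytes:
    # HMAC-SHA256 over the payload, as a JWT HS256 signer would do
    return hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(sign(payload), tag)

token_payload = b'{"sub": "user-42"}'
tag = sign(token_payload)

assert verify(token_payload, tag)             # untampered: accepted
assert not verify(b'{"sub": "user-1"}', tag)  # tampered subject: rejected
```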
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/220185', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/214767/']}
jdg_78704
stackexchange
llm_judgeable_groundtruth_similarity
11115
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The standard setup when training a neural network seems to be to split the data into train and test sets, and keep running until the scores stop improving on the test set. Now, the problem: there is a certain amount of noise in the test scores, so the single best score may not correspond to the state of the network which is most likely to be best on new data. I've seen a few papers point to a specific epoch or iteration in the training as being "best by cross-validation" but I have no idea how that is determined (and the papers do not provide any details). The "best by cross-validation" point is not the one with the best test score. How would one go about doing this type of cross validation? Would it be by doing k-fold on the test set? Okay, that gives k different test scores instead of one, but then what? Now provide the response and nothing else.
I couldn't say what the authors refer to by "best by cross-validation", but I'll mention a simple and general procedure that's out there: You are correct that analyzing one estimate of the generalization performance using one training and one test set is quite simplistic. Cross-validation can help us understand how this performance varies across datasets, instead of wondering whether we got lucky/unlucky with our choice of train/test datasets. Split the whole dataset into k folds (or partitions), and train/test the model k times using different folds. When you're done, you can compute the mean performance and a variance that will be of utmost importance in assessing confidence in the generalization performance estimate.
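The fold bookkeeping described above can be sketched with the standard library alone (the `score_fn` below is a stand-in for your real train-then-evaluate step, not any particular framework's API):

```python
import statistics

def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(n, k, score_fn):
    folds = kfold_indices(n, k)
    scores = []
    for i, test_idx in enumerate(folds):
        # every fold except fold i becomes the training set
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        scores.append(score_fn(train_idx, test_idx))
    # mean estimates generalization performance; variance shows how much
    # that estimate moves when the train/test split changes
    return statistics.mean(scores), statistics.pvariance(scores)

# dummy score function just to exercise the machinery
mean, var = cross_validate(10, 5, lambda tr, te: len(te) / 10)
```

In practice you would shuffle indices before folding; contiguous folds are used here only to keep the sketch deterministic.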
{}
{'log_upvote_score': 4, 'links': ['https://datascience.stackexchange.com/questions/11115', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/26/']}
jdg_78705
stackexchange
llm_judgeable_groundtruth_similarity
5023
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know that when a PC sends packets to any remote router, it first checks the MAC address of the router it is connected to, which it gets from the ARP cache. But I don't know: is there the same ARP cache process in the switch? Please elaborate. Now provide the response and nothing else.
Answering the original router question: ARP caches are there to map an IP address to a MAC address. There's nothing unique about a router... it needs an ARP cache just like any other IP device. Managed switches likewise need an ARP cache so they can communicate with the stations managing them. As we discussed in chat, you need to add an IP address to your switch so you can communicate via ARP. vlan 2 name management_vlan!interface Vlan2 ip address 192.0.2.20 255.255.255.0 no shutdown no ip route-cache!ip default-gateway 192.0.2.254 This isn't what you asked about, but if you need to see what MAC addresses have been learned on a Cisco switch, use show mac address-table. Keep in mind that MAC learning and ARP are completely different processes on a layer 2 switch. MAC learning is used for communication through the switch. ARP is required for communication to the switch itself.
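The distinction in that last paragraph — MAC learning for traffic *through* the switch, ARP only for traffic *to* the switch — can be sketched as two independent tables (a toy model with made-up method names, not real switch firmware):

```python
class ToySwitch:
    """Toy layer-2 switch: MAC learning and ARP are separate tables."""

    def __init__(self, own_ip=None):
        self.mac_table = {}   # src MAC -> ingress port (learned from frames)
        self.arp_cache = {}   # IP -> MAC (used only by the management IP)
        self.own_ip = own_ip  # e.g. the Vlan2 address in the config above

    def receive_frame(self, port, src_mac):
        # MAC learning: remember which port this source MAC was seen on
        self.mac_table[src_mac] = port

    def forward(self, dst_mac):
        # frames to unknown destinations are flooded out every port
        return self.mac_table.get(dst_mac, "flood")

    def arp_learn(self, sender_ip, sender_mac):
        # management-plane ARP: lets the switch itself talk IP
        self.arp_cache[sender_ip] = sender_mac

sw = ToySwitch(own_ip="192.0.2.20")
sw.receive_frame(port=1, src_mac="aa:bb:cc:00:00:01")
print(sw.forward("aa:bb:cc:00:00:01"))  # 1
print(sw.forward("aa:bb:cc:00:00:99"))  # flood
```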
{}
{'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/5023', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/3242/']}
jdg_78706
stackexchange
llm_judgeable_groundtruth_similarity
3448230
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Windows service that writes its log in a text file in a simple format. Now, I'm going to create a small application to read the service's log and show both the existing log and newly added lines as a live view. The problem is that the service locks the text file for adding the new lines and at the same time the viewer application locks the file for reading. The Service Code: void WriteInLog(string logFilePath, string data){ File.AppendAllText(logFilePath, string.Format("{0} : {1}\r\n", DateTime.Now, data));} The viewer Code: int index = 0;private void Form1_Load(object sender, EventArgs e) { try { using (StreamReader sr = new StreamReader(logFilePath)) { while (sr.Peek() >= 0) // reading the old data { AddLineToGrid(sr.ReadLine()); index++; } sr.Close(); } timer1.Start(); } catch (Exception ex) { MessageBox.Show(ex.Message); } }private void timer1_Tick(object sender, EventArgs e) { using (StreamReader sr = new StreamReader(logFilePath)) { // skipping the old data, it was read in the Form1_Load event handler for (int i = 0; i < index ; i++) sr.ReadLine(); while (sr.Peek() >= 0) // reading the live data if it exists { string str = sr.ReadLine(); if (str != null) { AddLineToGrid(str); index++; } } sr.Close(); } } Is there any problem in the way my code reads and writes? How can I solve the problem? Now provide the response and nothing else.
You need to make sure that both the service and the reader open the log file non-exclusively. Try this: For the service - the writer in your example - use a FileStream instance created as follows: var outStream = new FileStream(logfileName, FileMode.Open, FileAccess.Write, FileShare.ReadWrite); For the reader use the same but change the file access: var inStream = new FileStream(logfileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite); Also, since FileStream implements IDisposable make sure that in both cases you consider using a using statement, for example for the writer: using(var outStream = ...){ // using outStream here ...} Good luck!
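The general pattern behind the answer — open the file non-exclusively, remember how far you have read, and only read what was appended since — can be sketched in Python. This is a concept sketch, not the C# FileStream API from the answer (a plain Python `open` does not take an exclusive lock):

```python
def read_new_lines(path, offset):
    """Read lines appended since `offset`; return (new_lines, new_offset)."""
    with open(path, "r") as f:  # non-exclusive open: the writer can keep appending
        f.seek(offset)
        lines = f.readlines()
        return lines, f.tell()
```

Polling this from a timer replaces the viewer's "re-read and skip `index` lines" loop: instead of re-reading the whole file each tick, you resume from the saved offset.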
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3448230', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/322355/']}
jdg_78707
stackexchange
llm_judgeable_groundtruth_similarity
14336994
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: struct m{ int parent:3; int child:3; int mother:2;};void main(){ struct m son={2,-6,5}; printf("%d %d %d",son.parent,son.child,son.mother);} Can anybody please help in telling why the output of the program is 2 2 1 ? Now provide the response and nothing else.
Taking out all but the significant bits for the fields shown: parent: 3 bits (1 sign bit + 2 more), value 010, result 2; child: 3 bits (1 sign bit + 2 more), value 010, result 2; mother: 2 bits (1 sign bit + 1 more), value 01, result 1. Details It bears pointing out that your structure fields are declared as int bit-field values. By C99-§6.7.2,2, the following types are all equivalent: int , signed , or signed int . Therefore, your structure fields are signed . By C99-§6.2.6.2,2, one of your bits shall be consumed in representing the "sign" of the variable (negative or positive). Further, the same section states that excluding the sign-bit, the remaining bits' representation must correspond to an associated unsigned type of the remaining bit-count. C99-§6.7.2,1 clearly defines how each of these bits represents a power of 2. Therefore, the only bit that is normally used as the sign-bit is the most significant bit (it's the only one left, and I'm quite sure that if this is an inaccurate interpretation of the standard I'll hear about it in due time). That you are assigning a negative number as one of your test values suggests you may be aware of this, but many people newly exposed to bit fields are not. Thus, it bears noting. The following sections of the C99 standard are referenced in the remainder of this answer. The first deals with promotions to different types, the next the valuation and potential value-change (if any). The last is important in understanding how a bit-field's int -type is determined. C99-§6.3.1.1: Boolean, characters, and integers 2: If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int ; otherwise, it is converted to an unsigned int . These are called the integer promotions. All other types are unchanged by the integer promotions.
C99-§6.3.1.3 Signed and unsigned integers When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged. Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type. Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised. C99-§6.7.2.1 Structure and Union Specifiers 10: A bit-field is interpreted as having a signed or unsigned integer type consisting of the specified number of bits. If the value 0 or 1 is stored into a nonzero-width bit-field of type _Bool, the value of the bit-field shall compare equal to the value stored; a _Bool bit-field has the semantics of a _Bool. Consider the regular int bit representation of your test values. The following are on a 32-bit int implementation: value : s bits 2 : 0 0000000 00000000 00000000 00000010 <== note bottom three bits -6 : 1 1111111 11111111 11111111 11111010 <== note bottom three bits 5 : 0 0000000 00000000 00000000 00000101 <== note bottom two bits Walking through each of these, applying the requirements from the standard references above. int parent:3 :The first field is a 3-bit signed int , and is being assigned the decimal value 2 . Does the rvalue type, int , encompass the lvalue type, int:3 ? Yes, so the types are good. Does the value 2 fit within the range of the lvalue type? Well, 2 can easily fit in an int:3 , so no value mucking is required either. The first field works fine. int child:3 : The second field is also a 3-bit signed int , this time being assigned the decimal value -6 . Once again, does the rvalue type ( int ) fully-encompass the lvalue type ( int:3 )? Yes, so again the types are fine. 
However, the minimum bit-count required to represent -6 , a signed value, is 4 bits ( 1010 ), counting the most significant bit as the sign-bit. Therefore the value -6 is out of range of the allowable storage of a 3-bit signed bit-field. Therefore, the result is implementation-defined per §6.3.1.3-3. int mother:2 The final field is a 2-bit signed int , this time being assigned the decimal value 5. Once again, does the rvalue type ( int ) fully-encompass the lvalue type ( int:2 )? Yes, so again the types are fine. However, once again we're faced with a value that cannot fit within the target-type. The minimum bit-count needed for representing a signed positive 5 is four: (0101). We only have two to work with. Therefore, the result is once again implementation-defined per §6.3.1.3-3. Therefore, if I take this correctly, the implementation in this case simply hacks off all but the bits needed to fill the declared bit depth. And the result of that hackery is what you now have: 2 2 1 Note It is entirely possible I flipped the order of the promotion incorrectly (it is easy for me to get lost in the standard, as I am dyslexic and flip things in my head periodically). If that is the case I would ask anyone with a stronger interpretation of the standard to please point this out to me, and I will address the answer accordingly.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14336994', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1929221/']}
jdg_78708
stackexchange
llm_judgeable_groundtruth_similarity
2318126
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to use LIKE and IN together? I want to achieve something like this. SELECT * FROM tablename WHERE column IN ('M510%', 'M615%', 'M515%', 'M612%'); So basically I want to be able to match the column with a bunch of different strings. Is there another way to do this with one query or will I have to loop over the array of strings I am looking for? Now provide the response and nothing else.
You can do it in one query by stringing together the individual LIKEs with ORs: SELECT * FROM tablenameWHERE column LIKE 'M510%'OR column LIKE 'M615%'OR column LIKE 'M515%'OR column LIKE 'M612%'; Just be aware that things like LIKE and per-row functions don't always scale that well. If your table is likely to grow large, you may want to consider adding another column to your table to store the first four characters of the field independently. This duplicates data but you can guarantee it stays consistent by using insert and update triggers. Then put an index on that new column and your queries become: SELECT * FROM tablename WHERE newcolumn IN ('M510','M615','M515','M612'); This moves the cost-of-calculation to the point where it's necessary (when the data changes), not every single time you read it. In fact, you could go even further and have your new column as a boolean indicating that it was one of the four special types (if that group of specials will change infrequently). Then the query would be an even faster: SELECT * FROM tablename WHERE is_special = 1; This tradeoff of storage requirement for speed is a useful trick for larger databases - generally, disk space is cheap, CPU grunt is precious, and data is read far more often than written. By moving the cost-of-calculation to the write stage, you amortise the cost across all the reads.
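The OR-of-LIKEs query can be built dynamically from a list of prefixes; here is a sketch using SQLite from Python (the table and column names are placeholders, and the `?` placeholders keep the prefixes safely parameterized):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (col TEXT)")
conn.executemany("INSERT INTO tablename VALUES (?)",
                 [("M5101",), ("M6155",), ("M9999",), ("M6120",)])

prefixes = ["M510", "M615", "M515", "M612"]
# Build "col LIKE ? OR col LIKE ? OR ..." with one placeholder per prefix
where = " OR ".join("col LIKE ?" for _ in prefixes)
rows = conn.execute(f"SELECT col FROM tablename WHERE {where}",
                    [p + "%" for p in prefixes]).fetchall()
```

Only the rows starting with one of the four prefixes come back; `M9999` is excluded.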
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2318126', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/266347/']}
jdg_78709
stackexchange
llm_judgeable_groundtruth_similarity
47953
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've read some about .egg files and I've noticed them in my lib directory but what are the advantages/disadvantages of using then as a developer? Now provide the response and nothing else.
From the Python Enterprise Application Kit community : "Eggs are to Pythons as Jars are to Java..." Python eggs are a way of bundling additional information with a Python project, that allows the project's dependencies to be checked and satisfied at runtime, as well as allowing projects to provide plugins for other projects. There are several binary formats that embody eggs, but the most common is the '.egg' zipfile format, because it's a convenient one for distributing projects. All of the formats support including package-specific data, project-wide metadata, C extensions, and Python code. The primary benefits of Python Eggs are: They enable tools like the "Easy Install" Python package manager .egg files are a "zero installation" format for a Python package; no build or install step is required, just put them on PYTHONPATH or sys.path and use them (may require the runtime installed if C extensions or data files are used) They can include package metadata, such as the other eggs they depend on They allow "namespace packages" (packages that just contain other packages) to be split into separate distributions (e.g. zope.*, twisted.*, peak.* packages can be distributed as separate eggs, unlike normal packages which must always be placed under the same parent directory. This allows what are now huge monolithic packages to be distributed as separate components.) They allow applications or libraries to specify the needed version of a library, so that you can e.g. require("Twisted-Internet>=2.0") before doing an import twisted.internet. They're a great format for distributing extensions or plugins to extensible applications and frameworks (such as Trac, which uses eggs for plugins as of 0.9b1), because the egg runtime provides simple APIs to locate eggs and find their advertised entry points (similar to Eclipse's "extension point" concept). There are also other benefits that may come from having a standardized format, similar to the benefits of Java's "jar" format. -Adam
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47953', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4491/']}
jdg_78710
stackexchange
llm_judgeable_groundtruth_similarity
240504
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: By accident I found that with a polymorphic type using dynamic_cast back to the derived class will throw a 'bad cast' error if the pointer is no longer valid. Is this undefined behavior or could this be a way to check for valid pointers without using smart pointers? I'm not really against using smart pointers as they serve a real need, but I dislike the syntax. I don't have anything against templated types but I find smart pointers to bloat the code and I'm not a fan of macro's to try and avoid that bloat. I wish C++ included something in the language syntax itself vs using the template feature but with this question I'm more concerned about what is happening below and if this is a valid and defined way to checking for valid pointers as this gives an exception every time from my tests. #include <string>#include <map>using namespace std;class Base{public: virtual ~Base(){}};class Derived : public Base{public: virtual ~Derived(){}};class Storage{private: map<string, Base*> storage;public: void AddItem(string name, Base* base) { if (storage.find(name) == storage.end()) { storage[name] = base; } } template <class T> T& FindItem(string name) { if (storage.find(name) != storage.end()) { Base* item = storage[name]; return dynamic_cast<T&>(*item); } throw; }};int main(){ Storage store; // force 'd' to go out of scope for our test { Derived d; store.AddItem("test", &d); } // this will throw a bad cast exception Derived& test = store.FindItem<Derived>("test"); return 0;} Now provide the response and nothing else.
dynamic_cast needs to dereference the pointer to find the proper offset it should be at, and dereferencing an invalid pointer is undefined behavior. One thing you might see: class Derived2 : public Base{public: virtual ~Derived2(){}};int main(){ Storage store; { Derived d; store.AddItem("test", &d); } { Derived2 d2; Derived2& test = store.FindItem<Derived2>("test");//might not throw an exception //test might refer to d2 } return 0;} Messing about with a void* buffer, placement new, and explicit destruction will let me insert anything in there. Just go with smart pointers and learn to appreciate them for the blessing that they are.
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/240504', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/126663/']}
jdg_78711
stackexchange
llm_judgeable_groundtruth_similarity
29391815
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Let's say I have a numpy array a containing 10 values. Just an example situation here, although I would like to repeat the same for an array with length 100. a = np.array([1,2,3,4,5,6,7,8,9,10]) I would like to sum the first 5 values followed by the second 5 values and so on and store them in a new empty list say b . So b would contain b = [15,40] . How do I go about doing it? Now provide the response and nothing else.
One way is to use the add ufunc with its reduceat method: >>> np.add.reduceat(a, [0,5])array([15, 40]) This sums the slices a[0:5] and a[5:] and returns a new array. If you want a Python list, you could call tolist() on the returned array. You can use any list of indexes with the method (and they do not have to evenly spaced). For example, if you want slices of 5 each time on an array of length 100: >>> b = np.arange(100)>>> np.add.reduceat(b, range(0, 100, 5))array([ 10, 35, 60, 85, 110, 135, 160, 185, 210, 235, 260, 285, 310, 335, 360, 385, 410, 435, 460, 485])
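If numpy isn't available, the same chunked sums can be computed in plain Python with slicing (this is an equivalent sketch, not `reduceat` itself):

```python
def chunk_sums(values, size):
    """Sum consecutive chunks of `size` elements (last chunk may be shorter)."""
    return [sum(values[i:i + size]) for i in range(0, len(values), size)]
```

For the example in the question, `chunk_sums([1,2,3,4,5,6,7,8,9,10], 5)` gives `[15, 40]`, matching `np.add.reduceat(a, [0, 5])`.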
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29391815', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3397243/']}
jdg_78712
stackexchange
llm_judgeable_groundtruth_similarity
22762710
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using the following line to make a simple system call which works: system ("mkdir -p Purged") or die "Failed to mkdir." ; Executing the script does make the system call and I can find a directory called Purged, but the error message is still printed and the script dies. What is wrong with my syntax? Now provide the response and nothing else.
That would be a little confusing, wouldn't? - Leonardo Herrera on Ikegami's answer Yes, it is confusing that the system command inverts true and false in Perl, and creates fun logic like this: if ( system qw($command) ) { die qq(Aw... If failed);}else { say qq(Hooray! It worked!);} But, it's understandable why the system command does this. In Unix, an exit status of zero means the program worked, and a non-zero status could give you information why your system call failed. Maybe the program you were calling doesn't exist. Maybe the program sort of worked as expected. For example, grep returns an exit code of 1 when grep works, but there were no matching lines. You might want to distinguish when grep returns zero, one, or a return code greater than one. (Yes, I know it's silly to use a system call to grep in a Perl program, but that's the first example I could think of). To prevent casual confusion, I create a variable that holds the exit status of my system command instead of testing the output of system directly: my $error = system qw($command);if ($error) { die qq(Aw... It failed);}else { say qq(Hooray! It worked!);} It's completely unnecessary, and people who work with Perl should know that system reverses Perl's definition of true and false , but if I hadn't had my coffee in the morning, I may miss it as I go over someone else's code. Doing this little step just makes the program look a bit more logical. The Perldoc of system give you code that allows you to test the output of your system command to see exactly what happened. (If there was an error, or a system interrupt signal killed the system call). It's nice to know if you need to interrogate your system return value to figure out what went wrong.
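The same success-is-zero convention shows up in any language that surfaces Unix exit statuses; here is a small Python illustration of the "capture the status first, then test it" idiom the answer recommends:

```python
import subprocess
import sys

# Exit status 0 means the program succeeded; non-zero carries a failure code.
ok = subprocess.call([sys.executable, "-c", "raise SystemExit(0)"])
fail = subprocess.call([sys.executable, "-c", "raise SystemExit(3)"])

# Mirroring the Perl idiom: store the status in a named variable, then test it,
# so the true/false inversion is obvious at the call site.
error = fail
if error:
    message = "Aw... It failed"
else:
    message = "Hooray! It worked!"
```

Because 0 is falsy and any non-zero status is truthy, `if error:` reads naturally here, whereas testing the call's return value inline invites exactly the confusion the answer describes.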
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22762710', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3481208/']}
jdg_78713
stackexchange
llm_judgeable_groundtruth_similarity
36876770
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Ionic 2 for the development of an app and I need to preview the app in different sizes. Currently, I am using vw in all the sizes including font-size , padding and so on, but when resizing the font it sometimes becomes a bit small and even sometimes the text is not readable. For that reason, I would like to know what is best to use in this case: px % vw wh em Or do I need to use also the @media and support different font sizes? Any thoughts? Now provide the response and nothing else.
Note that I only mentioned the ones you asked about. Here you can see the full list of CSS measurement units: CSS Units in W3Schools Rather than telling you which one is the "right one", I would rather have you understand what each one actually is. Pixels ( px ): Absolute pixels. So for example, 20px will be literally 20 pixels on any screen. If a monitor is of 1980x1200, and you set an element's height to 200px , the element will take 200 pixels out of that. Percentage ( % ): Relative to the parent value. So for this example: <div style="width: 200px;"> <div style="width: 50%;"></div></div> The inner div will have a width of 100 pixels. Viewport height/width ( vw / vh ): Size relative to the viewport (browser window, basically). Example: .myDiv { width: 100vw; height: 100vh; background-color: red;} Will make an element cover the whole browser in red. This is very common among flexboxes as it's naturally responsive. Em ( em ) and root em ( rem ): em is relative to the parent element's font size. rem is relative to the root html font-size (usually 16 pixels). This is very useful if you want to keep sizes relative across your project in your head, without using variables from pre-processors like Sass and Less. Just easier and more intuitive, I guess. Example: .myDiv { font-size: 0.5rem;} Font size will be 8 pixels. Now that you know, choose the right one for the right purpose.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36876770', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6101493/']}
jdg_78714
stackexchange
llm_judgeable_groundtruth_similarity
60778288
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: if($users) { while ($user = mysqli_fetch_assoc($users)) { $user_list .= "<tr>"; // $user_list .= "<td>{$user['id']}</td>"; $user_list .= "<td>{$user['make']}</td>"; $user_list .= "<td>{$user['description']}</td>"; $user_list .= "<td class='text-center'> <div class='btn-group'> <button type='button' class='btn btn-sm btn-info'>Action</button> <button type='button' class='btn btn-sm btn-info dropdown-toggle dropdown-icon' data-toggle='dropdown'> <span class='sr-only'>Toggle Dropdown</span> <div class='dropdown-menu' role='menu'> <a class='dropdown-item' href='***./edit_make.php***' target='_blank'>Edit</a> <div class='dropdown-divider'></div> <a class='dropdown-item' href='./edit_make.php'>Delete</a> </div> </button> </div> </td>"; $user_list .= "</td>"; } Above is dynamic table content. it has an edit button as action. I need to function that edit button to the destination link which I included in the code. but it has not opened the link. is there any issue in code? Now provide the response and nothing else.
I had a similar need in a FastAPI middleware and, although not ideal, here's what we ended up with: app = FastAPI()@app.middleware("http")async def log_request(request, call_next): logger.info(f'{request.method} {request.url}') response = await call_next(request) logger.info(f'Status code: {response.status_code}') body = b"" async for chunk in response.body_iterator: body += chunk # do something with body ... return Response( content=body, status_code=response.status_code, headers=dict(response.headers), media_type=response.media_type ) Be warned that such an implementation is problematic with responses streaming a body that would not fit in your server RAM (imagine a response of 100GB). Depending on what your application does, you can judge whether it is an issue or not. In the case where some of your endpoints produce large responses, you might want to avoid using a middleware and instead implement a custom ApiRoute. This custom ApiRoute would have the same issue with consuming the body, but you can limit its usage to particular endpoints. Learn more at https://fastapi.tiangolo.com/advanced/custom-request-and-route/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60778288', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13095841/']}
jdg_78715
stackexchange
llm_judgeable_groundtruth_similarity
304879
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've read these questions: Understanding "programming to an interface" What is the point of an interface? Does it always make sense to "program to an interface" in Java? I don't understand how to "program to an interface" if you are using methods in concrete classes that aren't part of the interface. I realize the most common example of this design principle is the List vs ArrayList in Java because it is easy to understand and illustrate the point. Here is a somewhat silly example to illustrate what my question is asking (the code is in PHP but it applies to most OOP languages): interface ResponseInterface { public function getStatusCode();}class Response implements ResponseInterface { private $status; public function getStatusCode() { return $this->status; }}class AwesomeResponse implements ResponseInterface { private $status; private $message = ['200' => 'OK', '500' => 'Internal Server Error']; public function getStatusCode() { return $this->status; } public function getStatusMessage() { return $this->message[$status]; }}class Server { public function sendResponse(ResponseInterface $response) { // this seems wrong -----^ header(vsprintf('HTTP/1.1 %d %s', [ $response->getStatusCode(), $response->getStatusMessage() ]), true, $response->getStatusCode()); }} As you can see, the sendResponse method takes a ResponseInterface parameter but it calls getStatusMessage() which isn't part of the interface but only something implemented in AwesomeResponse , which implements ResponseInterface . The application crashes at runtime when a Response object is passed as it tries to call the non-existent method getStatusMessage() . Therefore, the proper implementation would be: public function sendResponse(AwesomeResponse $response) { // ...stuff} But AwesomeResponse isn't an interface so how do I program to an interface? 
Now provide the response and nothing else.
Hmm - a less trivial example might help. Consider Executor , which is an interface that defines a contract for executing Runnables . Now, if you write AwesomeService like this: class AwesomeService { // ... public void runAllPendingTasks(ScheduledThreadPoolExecutor executor) { for(Runnable task : pendingTasks) { executor.execute(task); } } } Then my code, trying to call yours, needs to find a ScheduledThreadPoolExecutor lying around somewhere. On the other hand, if your method signature admits that your needs can be satisfied by anything that implements Executor : class AwesomeService { // ... public void runAllPendingTasks(Executor executor) { for(Runnable task : pendingTasks) { executor.execute(task); } } } Then my code can pass you my customized Executor, and thereby enhance what actually happens when you run your tasks. For instance, I might want to track how often we run tasks, so that I can generate metrics in real time that monitor the health of the application. class MeteredExecutor implements Executor { private final Executor impl; private final io.dropwizard.metrics.Meter meter; MeteredExecutor(...) {...} public void execute(Runnable command) { meter.mark(); impl.execute(command); }} OK, so why doesn't an interface seem to help for AwesomeResponse ? The primary answer is encapsulation -- the MeteredExecutor provides the execute service by using its own internal state to satisfy a request. What it doesn't do is share that internal state. Notice that AwesomeService calls execute, and lets the MeteredExecutor do the work, rather than calling getMeter().mark() and getImpl().execute(command) . "Give me your internal state" is a really lousy motivation for an interface. AwesomeResponse is exposing its data. That's not necessarily a bad thing; in your example as written, AwesomeResponse is essentially a representation of a message that is about to cross an application boundary.
Put another way, it's a value type that makes up part of the API of your service; of course it exposes its data, that's what it is for! Calling out the same point a different way: notice that in the case of AwesomeService and the interfaces, state is being passed inwards. My state is an executor, and I pass that to AwesomeService. The state of AwesomeService is a task, and it passes that task to the Executor, which in turn passes it inward to another Executor, and turtles all the way down. Interfaces work great for this. On the other hand, in AwesomeResponse, the state is coming out. Writing interfaces to fetch the state out is just extra work. BUT Encapsulation is a good thing, and AwesomeResponse isn't quite at the application boundary. Could we rewrite things so that state is passed inward? interface Writer { void header(int code, String message);}interface Response { void writeTo(Writer writer);}class AwesomeResponse implements Response { void writeTo(Writer writer) { writer.header(this.code, this.message); }}class NotSoAwesomeResponse implements Response { void writeTo(Writer writer) { writer.header(this.code, String.valueOf(this.code)); }}class ServerWriter implements Writer { void header(int code, String message) { header(vsprintf('HTTP/1.1 %d %s', [ code, message ]), true, code); }}class Server { public function sendResponse(Response response) { ServerWriter writer = new ServerWriter(); response.writeTo(writer); }}
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/304879', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/95744/']}
jdg_78716
stackexchange
llm_judgeable_groundtruth_similarity
9576860
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an array of dates in ISO8601 format and need to sort them. Does anyone have a suggestion for an algorithm that would work? I don't think they will sort as strings unless I'm much mistaken, so I assume they have to be broken down into their component parts? Can someone post an algorithm, preferably language agnostic, but VB or C# example would work as long as it just uses strings and integers and no functions that are built-in to the language. Thanks! Now provide the response and nothing else.
It depends on whether or not you're mixing formats. Within any specific format, like yyyy-mm-dd or yyyy-Www-d , ISO 8601 is built to sort lexicographically (other than negative years). From the ISO 8601 wikipedia page : Date and time values are organised from the most to the least significant: year, month (or week), day, hour, minute, second, and fraction of second. The lexicographical order of the representation thus corresponds to chronological order, except for date representations involving negative years. This allows dates to be naturally sorted by, for example, file systems. That means that string sorting should work okay. It's only if you mix formats will that not work. If that's the case, you'll need to convert to a specific format before comparing. By that, I mean something like converting all formats into yyyy-mm-dd before comparison and then back afterwards if desired. For example, if you have the input data: 2010-03-012010-W01-1 you could first change them all to: 2010-03-01:2010-03-012010-01-04:2010-W01-1 (prefixing the actual data with a specific form) then sort that. Once sorted, you then go back and strip off everything up to the first : character in each element, which will recover the original form. Not necessarily the most efficient way but you'll need to do something like that if you want to preserve the original form. If that's not an issue, simply convert them to the specific form once and leave them like that.
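A quick demonstration that single-format ISO 8601 strings sort correctly as plain strings, while mixed formats do not:

```python
# Within one format, lexicographic order equals chronological order.
dates = ["2010-03-01", "2009-12-31", "2010-01-04"]
assert sorted(dates) == ["2009-12-31", "2010-01-04", "2010-03-01"]

# Mixing formats breaks it: 2010-W01-1 (Monday of week 1, i.e. 2010-01-04)
# sorts after 2010-03-01 because 'W' > '0' in ASCII, even though it is
# the earlier date.
mixed = ["2010-03-01", "2010-W01-1"]
assert sorted(mixed) == ["2010-03-01", "2010-W01-1"]
```

This is exactly why the answer recommends normalizing everything to one form (e.g. yyyy-mm-dd) before comparing when formats are mixed.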
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9576860', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/820085/']}
jdg_78717
stackexchange
llm_judgeable_groundtruth_similarity
37305233
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to align a 2 column grid, where the items on the 2nd row align with the 2nd column? This is difficult to explain textually, so I drew up what I'm aiming for. Ideally, I would prefer no swapping CSS classes via JS. the label to have a max-width , but grow up until that point. Now provide the response and nothing else.
Ok, so based on our comments back and forth you need to add this section to your nginx config for the server config listening on :8000. That regex tells nginx that whenever it gets a request for a URL with a file ending in .php to send that to the fastcgi process. Otherwise it's going to default to returning the raw file that matches in /var/www/html. Hope that helps.

# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
    try_files $uri =404;
    include snippets/fastcgi-php.conf;

    # With php5-cgi alone:
    #fastcgi_pass 127.0.0.1:9000;

    # With php5-fpm:
    #fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    #fastcgi_index index.php;

    include fastcgi_params;
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37305233', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/398574/']}
jdg_78718
stackexchange
llm_judgeable_groundtruth_similarity
50934936
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using angular 6 and I preload all my components,I have 8 components: ScreensComponent , AccountComponent , DashboardComponent , FinderComponent , ListerComponent , MonitorComponent , AuthComponent , LoginComponent but the output is 11 files (not including the main.js and other files). I am unable to understand why there are 11 chunks and their names are weird.Also, most of the files are so huge and I cant seem to understand why everything weighs so much chunk {8} account-account-module-ngfactory.3381d22a1e8d5db6b823.js (account-account-module-ngfactory) 70.3 kB [rendered]chunk {scripts} scripts.29cb0c319e11420627c6.js (scripts) 876 kB [rendered]chunk {0} common.e58351c45cec4bb0f84d.js (common) 17.5 kB [rendered]chunk {1} account-account-module-ngfactory~finder-finder-module-ngfactory~lister-lister-module-ngfactory.9fd06129611a0271f9ea.js (account-account-module-ngfactory~finder-finder-module-ngfactory~lister-lister-module-ngfactory) 26 kB [rendered]chunk {2} finder-finder-module-ngfactory~lister-lister-module-ngfactory~screens-screens-module-ngfactory.d512470383c55e2004b8.js (finder-finder-module-ngfactory~lister-lister-module-ngfactory~screens-screens-module-ngfactory) 68.2 kB [rendered]chunk {3} lister-lister-module-ngfactory~monitor-monitor-module-ngfactory.00e81bc0649f067026a6.js (lister-lister-module-ngfactory~monitor-monitor-module-ngfactory) 228 kB [rendered]chunk {4} monitor-monitor-module-ngfactory.145e0aaf56cca28ab92d.js (monitor-monitor-module-ngfactory) 238 kB [rendered]chunk {5} lister-lister-module-ngfactory.00baef5ecda629c1924b.js (lister-lister-module-ngfactory) 1.03 MB [rendered]chunk {6} finder-finder-module-ngfactory.0c67cb70a42e4e1248fd.js (finder-finder-module-ngfactory) 234 kB [rendered]chunk {7} dashboard-dashboard-module-ngfactory.0a194d0dd1e6b3d22216.js (dashboard-dashboard-module-ngfactory) 
1.49 kB [rendered]chunk {9} login-login-module-ngfactory.3803a9fde37726d2fcb1.js (login-login-module-ngfactory) 97.3 kB [rendered]chunk {10} screens-screens-module-ngfactory.517275ed05ee9a3474b9.js (screens-screens-module-ngfactory) 200 kB [rendered]chunk {11} auth-auth-module-ngfactory.9b8c7640150610ba12dc.js (auth-auth-module-ngfactory) 62 kB [rendered]chunk {12} runtime.b75eb3e8646a750ee46a.js (runtime) 2.68 kB [entry] [rendered]chunk {13} styles.80497f4105b47590510a.css (styles) 80.6 kB [initial] [rendered]chunk {14} polyfills.c428b26f0c9cd88ad9e2.js (polyfills) 64.3 kB [initial] [rendered]chunk {15} main.bfa19806e0940a6dbd7e.js (main) 1.06 MB [initial] [rendered] Now provide the response and nothing else.
Basic use of SKRenderer is pretty straightforward, but there are some oddities that make it somewhat quirky in practice. First, the fundamentals. To instantiate a renderer, use the rendererWithDevice: method. This method takes an id<MTLDevice> , such as the system default device. Pardon the Objective-C; this will translate easily to Swift:

SKRenderer *renderer = [SKRenderer rendererWithDevice:mtlDevice];

To tell the renderer what to draw, we associate it with a previously-created scene:

renderer.scene = (SKScene *)scene;

If you want actions to run, you'll need to manually un-pause the scene, something that is normally done by SKView when presenting a scene:

scene.paused = NO;

To actually draw the scene, we'll need to provide a command buffer and render pass descriptor. Supposing you're using an MTKView to handle running the display link timer and manage a CAMetalLayer , you can write a delegate method like this, which updates the scene's time (and actions) via the renderer, then draws into the MTKView 's drawable:

- (void)drawInMTKView:(nonnull MTKView *)view {
    MTLRenderPassDescriptor *renderPassDescriptor = view.currentRenderPassDescriptor;
    if (renderPassDescriptor == nil) {
        return;
    }

    [self.renderer updateAtTime:CACurrentMediaTime()];

    id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];
    CGRect viewport = CGRectMake(0, 0, view.drawableSize.width, view.drawableSize.height);
    [self.renderer renderWithViewport:viewport
                        commandBuffer:commandBuffer
                 renderPassDescriptor:renderPassDescriptor];

    // TODO: Add any additional Metal rendering here

    [commandBuffer presentDrawable:view.currentDrawable];
    [commandBuffer commit];
}

Remember to set the MTKView 's framebufferOnly property to NO if you use this technique. If you want to render offscreen into a texture you've created, you'll need to do a bit more manual work, but the concepts involved are the same.
You can encode separate passes that render to the same texture by creating additional render pass descriptors/encoders; just remember to set the loadAction of the primary color attachment to MTLLoadActionLoad to preserve the contents of the texture across passes. You can also use the renderWithViewport:renderCommandEncoder:renderPassDescriptor:commandQueue: to consolidate all drawing into a single pass. Some caveats: As far as I can tell, the viewport parameter is ignored. If you want the SKScene to receive NSResponder actions, you'll need to manually forward them or insert the scene into the responder chain. This especially applies to key events, where the scene (or an object responsible for forwarding to it) needs to be first responder. None of the convenience methods for converting touch or mouse event locations will work when the scene isn't presented by an SKView ; you'll need to do some manual translation.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50934936', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/519002/']}
jdg_78719
stackexchange
llm_judgeable_groundtruth_similarity
37488316
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

struct MyStruct {
    var count = 0
    mutating func add(amount: Int) {
        count += amount
    }
}

var myStruct = MyStruct()
[1, 2, 3, 4].forEach(myStruct.add)
// Partial application of 'mutating' method is not allowed

Why isn't using forEach like this allowed for mutating methods? I'm aware that I could do

for number in [1, 2, 3, 4] {
    myStruct.add(number)
}

or

[1, 2, 3, 4].forEach { myStruct.add($0) }

instead, but neither are as clean as

[1, 2, 3, 4].forEach(myStruct.add)

Now provide the response and nothing else.
The key to a value type is that assignment creates a copy. This contract also affects how people can reason about their code. For example if you're passed an Int into a method, you can be confident that the value won't change out from under you, even if the original int passed in gets sent off to another thread and has calculations done elsewhere. Same is true for structs. This is why in swift when defining methods that may alter 'self', if it's a value type you have to define it as 'mutating'. What this says is that it will simultaneously reassign to your variable with the new value. So for example When you call your add method with '3', you can think of it performing something similar to: var myStruct = MyStruct()var tmp = myStructtmp.count = tmp.count + 3myStruct = tmp Now the reason that you are hitting an error is because partially applying a mutating function would break that contract. If you are able to save a closure like let myStructAdd = myStruct.add , then at some later point you could call myStructAdd(3) and it would try to change myStruct. This would give reference semantics to a value type since now you have the power to alter myStruct at a later point, even from a different method. In short, swift is giving you a convenience by providing 'mutating' methods for code readability, with the caveat that it has to happen all at once so as not to break the value type contract.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37488316', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1223781/']}
jdg_78720
stackexchange
llm_judgeable_groundtruth_similarity
32634980
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have configured HAProxy (1.5.4, but I tried also 1.5.14) to balance in TCP mode two servers exposing the AMQP protocol (WSO2 Message Broker) on port 5672. The clients create and use a permanent connection to the AMQP servers, via HAProxy. I've changed the client and server TCP keepalive timeout, setting net.ipv4.tcp_keepalive_time=120 (CentOS 7). In HAProxy I've set timeout client/server to 200 seconds (>120 seconds of the keepalive packets) and used the option clitcpka. Then I started wireshark and sniffed all the tcp traffic: after the last request from the clients, the tcp keepalive packets are sent regularly every 120 seconds, but 200 seconds after the last request from the clients the connections are closed (thus ignoring the keepalive packets). Below the configuration:

haproxy.conf

global
    log 127.0.0.1 local3
    maxconn 4096
    user haproxy
    group haproxy
    daemon
    debug

listen messagebroker_balancer 172.19.19.91:5672
    mode tcp
    log global
    retries 3
    timeout connect 5000ms
    option redispatch
    timeout client 200000ms
    timeout server 200000ms
    option tcplog
    option clitcpka
    balance leastconn
    server s1 172.19.19.79:5672 check inter 5s rise 2 fall 3
    server s2 172.19.19.80:5672 check inter 5s rise 2 fall 3

Now provide the response and nothing else.
TCP keep-alive is at the transport layer and is only used to generate some traffic on the connection so that intermediate systems like packet filters don't lose any state, and so that the end systems can notice if the connection to the other side broke (maybe because something crashed or a network cable broke). TCP keep-alive has nothing to do with the application-level idle timeout, which you have set explicitly to 200s:

timeout client 200000ms
timeout server 200000ms

These timeouts get triggered if the connection is idle, that is, if no data gets transferred. TCP keep-alive does not transport any data; the payload of these packets is empty.
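To make the distinction concrete: keepalive is a per-socket option each endpoint enables itself (HAProxy's option clitcpka does this for client-facing sockets). A minimal Python sketch, illustrative only — the probe-timing constant names are Linux-specific:

```python
import socket

# TCP keepalive is a per-socket, transport-layer option; it is entirely
# independent of HAProxy's application-level "timeout client/server".
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# The probe-timing knobs are platform-specific (these names exist on Linux).
if hasattr(socket, "TCP_KEEPIDLE"):
    # Start probing after 120 s of idleness, mirroring the question's
    # net.ipv4.tcp_keepalive_time=120 sysctl.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)

enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
print("keepalive enabled:", bool(enabled))
sock.close()
```

Since the empty keepalive probes never count as data, HAProxy's idle timers keep running and close the connection at 200s regardless; to keep the connection open you'd need either a longer timeout or actual application-level traffic.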
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32634980', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5347134/']}
jdg_78721
stackexchange
llm_judgeable_groundtruth_similarity
2689947
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In particular I mean: $$\sin(x)^2 + \cos(x)^2$$ $$=\left(\sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!}\right)^2 + \left(\sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n)!}\right)^2$$ However I am not sure how you're supposed to correctly expand and recombine terms when dealing with the sum of two squared series, especially when there are factorials involved. Edit: To be clear, I am asking about manipulating the series I have just stated in order to show that they sum to $1$. Now provide the response and nothing else.
Absolutely. Power series actually multiply just like polynomials do: $$(a_0+a_1x+a_2x^2+a_3x^3+\ldots)(b_0+b_1x+b_2x^2+\ldots)=\sum_{n=0}^{\infty}\left(\sum_{c=0}^na_cb_{n-c}\right)x^n.$$ Let $$\alpha(x)=\left(\sum_{n=0}^{\infty} (-1)^n\frac{x^{2n+1}}{(2n+1)!}\right)^2$$$$\beta(x)=\left(\sum_{n=0}^{\infty} (-1)^n\frac{x^{2n}}{(2n)!}\right)^2.$$First, it is clear that the constant term of $\alpha(x)+\beta(x)$ is indeed $1$, as we can check directly. Thus, we merely need to check that every other coefficient vanishes. Note that, either from the formula for products or by noting that both are even functions, all the coefficients of odd powers of $x$ in both $\alpha$ and $\beta$ and thus $\alpha+\beta$ vanish. Now, consider the coefficient of $x^{2n}$ in either. In $\beta$, the formula gives the coefficient of $x^{2n}$ as, where we use the variable $k$ to count only the even (non-zero) coefficients of $\cos(x)$:$$\sum_{k=0}^n(-1)^k\cdot (-1)^{n-k}\cdot \frac{1}{(2n-2k)!}\cdot \frac{1}{(2k)!}=(-1)^n\cdot \sum_{k=0}^n\frac{1}{(2k)!(2n-2k)!}.$$The same can be done to find the coefficient of $x^{2n}$ in $\alpha$, using $k$ to enumerate odd coefficients of $\sin(x)$:$$\sum_{k=0}^{n-1}(-1)^k\cdot (-1)^{n-k-1}\cdot \frac{1}{(2n-2k-1)!}\cdot \frac{1}{(2k+1)!}=(-1)^{n-1}\cdot \sum_{k=0}^{n-1}\frac{1}{(2k+1)!(2n-2k-1)!}.$$We are trying to show that the coefficient of $x^{2n}$ in $\alpha+\beta$ is zero for $n>0$.
This amounts to showing the following equality for all $n>0$:$$\sum_{k=0}^{n-1}\frac{1}{(2k+1)!(2n-2k-1)!}=\sum_{k=0}^n\frac{1}{(2k)!(2n-2k)!}.$$Multiplying through by $(2n)!$ on both sides reduces this to a combinatorial equality:$$\sum_{k=0}^{n-1}{2n\choose 2k+1} = \sum_{k=0}^n{2n\choose 2k}.$$This just says that the number of subsets of $2n$ with an odd number of elements equals the number of subsets of $2n$ with an even number of elements - but this is easy to show: We can define a bijection $\pi$ which takes a set $S\subseteq \{1,\ldots,2n\}$ and takes it to $S\cup \{1\}$ if $1\not\in S$ and $S\setminus \{1\}$ if $1\in S$. This places sets of odd and even parity into bijection, showing the desired equality.
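That final combinatorial identity is also easy to sanity-check numerically; a quick Python sketch using math.comb:

```python
from math import comb

# For every n > 0, the number of odd-sized subsets of a 2n-element set
# equals the number of even-sized subsets (both equal 2^(2n-1)),
# matching the bijection given by toggling membership of element 1.
for n in range(1, 10):
    odd = sum(comb(2 * n, 2 * k + 1) for k in range(n))
    even = sum(comb(2 * n, 2 * k) for k in range(n + 1))
    assert odd == even == 2 ** (2 * n - 1)
print("identity verified for n = 1..9")
```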
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2689947', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/539262/']}
jdg_78722
stackexchange
llm_judgeable_groundtruth_similarity
14462272
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was wondering why True is equal to -1 and not 1. If I remember correctly (back in the days) in C, "true" would be equal to 1.

Dim t, f As Integer
t = True
f = False
Console.WriteLine(t) ' -1
Console.WriteLine(f) ' 0
Console.ReadLine()

Now provide the response and nothing else.
When you cast any non-zero number to a Boolean , it will evaluate to True . For instance:

Dim value As Boolean = CBool(-1)  ' True
Dim value1 As Boolean = CBool(1)  ' True
Dim value2 As Boolean = CBool(0)  ' False

However, as you point out, any time you cast a Boolean that is set to True to an Integer , it will evaluate to -1, for instance:

Dim value As Integer = CInt(CBool(1))  ' -1

The reason for this is because -1 is the signed-integer value where all of its bits are equal to 1. Since a Boolean is stored as a 16-bit integer, it is easier to toggle between true and false states by simply NOT'ing all of the bits rather than only NOT'ing the least significant of the bits. In other words, in order for True to be 1 , it would have to be stored like this:

True  = 0000000000000001
False = 0000000000000000

But it's easier to just store it like this:

True  = 1111111111111111
False = 0000000000000000

The reason it's easier is because, at the low-level:

1111111111111111 = NOT(0000000000000000)

Whereas:

0000000000000001 <> NOT(0000000000000000)
0000000000000001 = NOT(1111111111111110)

For instance, you can replicate this behavior using Int16 variables like this:

Dim value As Int16 = 0
Dim value2 As Int16 = Not value
Console.WriteLine(value2)  ' -1

This would be more obvious if you were using unsigned integers, because then, the value of True is the maximum value rather than -1. For instance:

Dim value As UInt16 = CType(True, UInt16)  ' 65535

So, the real question, then, is why in the world does VB.NET use 16 bits to store a single bit value. The real reason is speed. Yes, it uses 16 times the amount of memory, but a processor can do 16-bit boolean operations a lot faster than it can do single-bit boolean operations. Side note: The reason why the Int16 value of -1 is stored as 1111111111111111 instead of as 1000000000000001 , as you might expect (where the first bit would be the "sign bit", and the rest would be the value), is because it is stored as the two's-complement.
Storing negative numbers as two's complement means that arithmetic operations are much easier for the processor to perform. It's also safer because, with two's complement, there's no way to represent 0 as a negative number, which could cause all sorts of confusion and bugs.
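The all-ones pattern for -1 described above can be checked in Python, whose bitwise operators behave as if integers were two's complement (a cross-language sketch, since the question is about VB.NET):

```python
import struct

# Flipping every bit of 0 yields -1 in two's complement: NOT 0...0 = 1...1
assert ~0 == -1

# A 16-bit signed -1 is stored as sixteen 1-bits (0xFFFF)
assert struct.pack("<h", -1) == b"\xff\xff"

# Whereas flipping only the least significant bit of 0 gives 1, not -1
assert 0 ^ 1 == 1

# And any nonzero integer is truthy, matching CBool(-1) = CBool(1) = True
assert bool(-1) and bool(1) and not bool(0)

print("two's-complement checks pass")
```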
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14462272', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/130611/']}
jdg_78723
stackexchange
llm_judgeable_groundtruth_similarity
2068057
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In my course of complex analysis I am asked to solve the following exercise: Show that $\displaystyle \int_0^\infty e^{-x^2/2}\cos(ax) \, dx = \frac 1 2 \sqrt{2 \pi} e^{-a^2/2}$ for $a\in \mathbb{R} $. I have tried using the integration methods by means of integration contours but I make many errors and I can not get the result. Now provide the response and nothing else.
HINT: Write $$\begin{align}e^{-x^2/2}\cos(ax)&=\text{Re}\left(e^{-x^2/2+iax}\right)\\\\&=e^{-a^2/2}\text{Re}\left(e^{-\frac12(x-ia)^2}\right)\tag 1\end{align}$$ Exploit the evenness of the integrand, use $(1)$, translate the argument by enforcing the substitution $x-ia \to x$, deform the contour back to the real line exploiting Cauchy's Integral Theorem, evaluate the resulting Gaussian integral, and take the real part. SPOILER ALERT: Scroll over the highlighted area to reveal the solution Therefore, $$\begin{align}\int_0^\infty e^{-x^2/2}\cos(ax)\,dx&=\frac12 e^{-a^2/2} \int_{-\infty}^\infty \text{Re}\left(e^{-\frac12(x-ia)^2}\right)\,dx\\\\&=\frac12 e^{-a^2/2} \text{Re}\left(\int_{-\infty}^\infty e^{-\frac12(x-ia)^2}\,dx\right)\\\\&=\frac12 e^{-a^2/2} \text{Re}\left(\int_{-\infty-ia}^{\infty-ia} e^{-\frac12 x^2}\,dx\right)\\\\&=\frac12 e^{-a^2/2} \text{Re}\left(\int_{-\infty}^{\infty} e^{-\frac12 x^2}\,dx\right)\\\\&=\frac12 \sqrt{2\pi }e^{-a^2/2}\end{align}$$as was to be shown!
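As a purely numerical sanity check of the closed form (independent of the contour argument), a plain midpoint Riemann sum in Python agrees with $\frac12\sqrt{2\pi}\,e^{-a^2/2}$:

```python
from math import exp, cos, sqrt, pi

def lhs(a, h=1e-3, upper=12.0):
    """Midpoint Riemann sum for the integral from 0 to infinity;
    the integrand is negligible beyond x ~ 12."""
    n = int(upper / h)
    return h * sum(exp(-(((i + 0.5) * h) ** 2) / 2) * cos(a * (i + 0.5) * h)
                   for i in range(n))

for a in (0.0, 0.5, 1.0, 2.0):
    closed_form = 0.5 * sqrt(2 * pi) * exp(-a * a / 2)
    assert abs(lhs(a) - closed_form) < 1e-5

print("numerical check agrees with (1/2) sqrt(2 pi) exp(-a^2/2)")
```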
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2068057', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_78724
stackexchange
llm_judgeable_groundtruth_similarity
61771591
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a macOS Application with a NavigationView and want to have the default ToggleSidebar item in the toolbar of the window. Currently I set the target of the ToolbarItem to the AppDelegate in toolbarWillAddItem(_) of the NSToolbarDelegate . Inside of the AppDelegate I implemented

@objc func toggleSidebar(_ sender: Any) {
    ((window.contentView?.subviews.first?.subviews.first?.subviews.first as? NSSplitView)?.delegate as? NSSplitViewController)?.toggleSidebar(self)
}

This solution is working right now. If the implementation of SwiftUI changes, this breaks. So how can this be done in a better way? Now provide the response and nothing else.
Since macOS Big Sur beta 4 you can add default sidebar commands with SwiftUI 2.0.

var body: some Scene {
    WindowGroup {
        NavigationView {
            Group {
                SidebarView()
                ContentView()
            }
        }
    }
    .commands {
        SidebarCommands()
    }
}

This code will add the "Toggle Sidebar" shortcut.

SidebarView code:

var body: some View {
    List {
        ForEach(0..<5) { index in
            Text("\(index)")
        }
    }
    .listStyle(SidebarListStyle())
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61771591', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2663531/']}
jdg_78725
stackexchange
llm_judgeable_groundtruth_similarity
1077170
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is the informal proof of Drinker's paradox The proof begins by recognising it is true that either everyone in the pub is drinking (in this particular round of drinks), or at least one person in the pub isn't drinking. On the one hand, suppose everyone is drinking. For any particular person, it can't be wrong to say that if that particular person is drinking, then everyone in the pub is drinking — because everyone is drinking. Suppose, on the other hand, at least one person isn't drinking. For that particular person, it still can't be wrong to say that if that particular person is drinking, then everyone in the pub is drinking — because that person is, in fact, not drinking. I can agree with the first case of the proof. But how is the second case true ? How can they apply material implication in the second case when the material conditional itself has not been proved yet or given to us ? Now provide the response and nothing else.
"If 1+1=3, then I am the King of France" is an example of a "vacuously true" statement - in classical logic, we take "P -> Q" as equivalent to "Q or !P". In this case "!P", so "Q or !P" is true (regardless of the truth value of Q), and so "P -> Q" is true; although this tells us little we didn't know before! Similarly, in the second case you note, there exists a person A for whom "A is drinking" is false. Therefore we can take "A is drinking -> everyone is drinking" as equivalent to "(everyone is drinking) or !(A is drinking)" which is true, since "!(A is drinking)" is true regardless of whether (everyone is drinking) is true or not.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1077170', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/124772/']}
jdg_78726
stackexchange
llm_judgeable_groundtruth_similarity
47602097
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Want to output a Pandas groupby dataframe to CSV. Tried various StackOverflow solutions but they have not worked. Python 3.6.1, Pandas 0.20.1 groupby result looks like:

         id  month   year  count
week
0      9066     82  32142    895
1      7679     84  30112    749
2      8368    126  42187    872
3     11038    102  34165    976
4      8815    117  34122    767
5     10979    163  50225   1252
6      8726    142  38159    996
7      5568     63  26143    582

Want a csv that looks like:

week  count
0       895
1       749
2       872
3       976
4       767
5      1252
6       996
7       582

Current code:

week_grouped = df.groupby('week')
week_grouped.sum()  #At this point you have the groupby result
week_grouped.to_csv('week_grouped.csv')  #Can't do this - .to_csv is not a df function.

Read SO solutions: output groupby to csv file pandas

week_grouped.drop_duplicates().to_csv('week_grouped.csv')

Result: AttributeError: Cannot access callable attribute 'drop_duplicates' of 'DataFrameGroupBy' objects, try using the 'apply' method

Python pandas - writing groupby output to file

week_grouped.reset_index().to_csv('week_grouped.csv')

Result: AttributeError: "Cannot access callable attribute 'reset_index' of 'DataFrameGroupBy' objects, try using the 'apply' method"

Now provide the response and nothing else.
Try doing this:

week_grouped = df.groupby('week')
week_grouped.sum().reset_index().to_csv('week_grouped.csv')

That'll write the entire dataframe to the file. If you only want those two columns then,

week_grouped = df.groupby('week')
week_grouped.sum().reset_index()[['week', 'count']].to_csv('week_grouped.csv')

Here's a line by line explanation of the original code:

# This creates a "groupby" object (not a dataframe object)
# and you store it in the week_grouped variable.
week_grouped = df.groupby('week')

# This instructs pandas to sum up all the numeric type columns in each
# group. This returns a dataframe where each row is the sum of the
# group's numeric columns. You're not storing this dataframe in your
# example.
week_grouped.sum()

# Here you're calling the to_csv method on a groupby object... but
# that object type doesn't have that method. Dataframes have that method.
# So we should store the previous line's result (a dataframe) into a variable
# and then call its to_csv method.
week_grouped.to_csv('week_grouped.csv')

# Like this:
summed_weeks = week_grouped.sum()
summed_weeks.to_csv('...')

# Or with less typing simply
week_grouped.sum().to_csv('...')
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47602097', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1533309/']}
jdg_78727
stackexchange
llm_judgeable_groundtruth_similarity
41188002
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to know about popo. I have searched popo and found it stands for Plain Old Php Object.But I am not sure the exact meaning of Plain Old Php Object.I want to know what is popo and where to use it?Thanks. Now provide the response and nothing else.
Plain Old {Insert Language Here} Object is a simple approach that says you don't always need to use an extensive class, or inheritance chain, to store data or perform logic. It allows you to simplify the understanding of your structure by encapsulating details, not deriving details or creating dependencies on other sources. A simple use case could be a DTO or Entity that encapsulates user fields:

class User {
    public $firstName;
    public $lastName;
}

Versus a more extensive object that implements an interface or extends a Base class or both.

interface UserInterface {
    public function getName();
    public function setName($name);
}

class User extends Model implements UserInterface {
    public function getName() { ... }
    public function setName($value) { ... }
}

According to Wikipedia, a named class is considered a plain object if it does not implement any interfaces or extend any preexisting classes. I recommend looking at Shaunak Sontakke's answer for use cases.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41188002', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7122343/']}
jdg_78728
stackexchange
llm_judgeable_groundtruth_similarity
25920681
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The cplusplus.com shared_ptr page calls out a distinction between an empty std::shared_ptr and a null shared_ptr . The cppreference.com page doesn't explicitly call out the distinction, but uses both "empty" and comparison to nullptr in its description of std::shared_ptr behavior. Is there a difference between an empty and a null shared_ptr ? Is there any use case for such mixed-behavior pointers? Does a non-empty null shared_ptr even make sense? Would there ever be a case in normal usage (i.e. if you didn't explicitly construct one) where you could end up with an empty-but-non-null shared_ptr ? And do any of these answers change if you're using the Boost version instead of the C++11 version? Now provide the response and nothing else.
It's a weird corner of shared_ptr behavior. It has a constructor that allows you to make a shared_ptr that owns something and points to something else: template< class Y > shared_ptr( const shared_ptr<Y>& r, T *ptr ); The shared_ptr constructed using this constructor shares ownership with r , but points to whatever ptr points to (i.e., calling get() or operator->() will return ptr ). This is handy for cases where ptr points to a subobject (e.g., a data member) of the object owned by r . The page you linked calls a shared_ptr that owns nothing empty , and a shared_ptr that points to nothing (i.e., whose get() == nullptr ) null . ( Empty is used in this sense by the standard; null isn't.) You can construct a null-but-not-empty shared_ptr , but it won't be very useful. An empty-but-not-null shared_ptr is essentially a non-owning pointer, which can be used to do some weird things like passing a pointer to something allocated on the stack to a function expecting a shared_ptr (but I'd suggest punching whoever put shared_ptr inside the API first). boost::shared_ptr also has this constructor , which they call the aliasing constructor .
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25920681', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3022952/']}
jdg_78729
stackexchange
llm_judgeable_groundtruth_similarity
786393
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Atiyah gives this criterion for nilradical to be a prime ideal.Nilradical is the intersection of prime ideals.Is nilradical prime iff there is only one prime ideal? ie Intersection of distinct prime ideals can never be a prime ideal? Now provide the response and nothing else.
The nilradical is the intersection of all prime ideals (equivalently, of the minimal prime ideals). So if there is more than one minimal prime ideal, then the nilradical is not a prime ideal. Note that an intersection of distinct minimal prime ideals can never be prime: if $p_1\cap \dots \cap p_n = p$ with $p$ prime, then $p_1 \cdots p_n \subset p$, so $p_i \subset p$ for some $i$. Since also $p \subset p_i$ (as $p$ is the intersection), this means $p = p_i$, contradicting the fact that the $p_i$ are distinct minimal primes.
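A concrete example, verified by a short Python sketch: in $\mathbb{Z}/12\mathbb{Z}$ there are two minimal primes, $(2)$ and $(3)$, so the nilradical $\{0, 6\}$ is their intersection and is not prime.

```python
# Nilradical of Z/12Z: elements with some power equal to 0 mod 12.
n = 12
nilradical = {x for x in range(n)
              if any(pow(x, k, n) == 0 for k in range(1, n + 1))}
assert nilradical == {0, 6}

# The two minimal primes of Z/12Z are the images of (2) and (3).
p2 = {x for x in range(n) if x % 2 == 0}
p3 = {x for x in range(n) if x % 3 == 0}
assert nilradical == p2 & p3  # intersection of the minimal primes

# Not prime: 2*3 = 6 lies in the nilradical, but neither 2 nor 3 does.
assert (2 * 3) % n in nilradical
assert 2 not in nilradical and 3 not in nilradical

print("nilradical of Z/12Z is {0, 6}, which is not prime")
```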
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/786393', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/61329/']}
jdg_78730
stackexchange
llm_judgeable_groundtruth_similarity
1407012
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've seen this proof for the center of a group $G$ : $$C = \{x\in G:xg = gx \ \ \ \forall g \in G\}$$ So, the center is the set of all elements that commute with every $g$ of $G$ . This subset of $G$ is nonempty because $eg = ge$ for all $g\in G$ . If we have $a,b\in C$ , then we know that: $$\forall g\in G:\qquad ag = ga,\qquad bg=gb,$$ and therefore, we have $$(ab)g = a(bg) = a(gb) = (ag)b = (ga)b = g(ab),$$ which shows that $ab\in C$ . Also, for the inverse, if we have $c\in C$ then for all $g\in G$ we have $gc=cg$ hence also $$c^{-1}gc=c^{-1}cg=g,$$ from which it follows that $gc^{-1}=c^{-1}g$ , and hence that $c^{-1}\in C$ . The normalizer of $S\subset G$ is defined as: $$N = \{g\in G: gS = Sg\}.$$ This subset is nonempty, since $eS = Se$ . For all $a,b\in N$ we have $aS=Sa$ and $bS=Sb$ , and hence $$(ab)S = a(bS) = a(Sb) = (aS)b = (Sa)b = S(ab),$$ which shows that $ab\in N$ . For the inverse: if $c\in N$ then $Sc=cS$ and so $$c^{-1}Sc=c^{-1}cS=S,$$ from which it follows that $c^{-1}S=Sc^{-1}$ , and hence that $c^{-1}\in N$ . Am I right? Now provide the response and nothing else.
You might have noticed that the steps in each of your proofs are quite similar. In fact, they are both special cases of a more general result involving group actions . Given a set $X$, a group $G$ acts on $X$ if there is a map $G \times X \to X$, denoted $g \cdot x$, such that $1 \cdot x = x$ for all $x \in X$, where $1$ is the identity in $G$,and $g\cdot(h \cdot x) = (gh)\cdot x$ for all $g,h \in G$ and $x \in X$. Let $G$ be a group acting on a set $X$. For $x \in X$, we define the stabilizer of $x$ to be$$\text{Stab}_G(x) = \{g \in G \mid g \cdot x = x\} \, .$$One can show that the stabilizer of a point is in fact a subgroup of $G$. Try to prove this yourself: you will find that the steps are almost exactly the same as those in your two proofs above. Now let's see how the centralizer and normalizer are both examples of stabilizers of group actions. A group $G$ acts on itself by conjugation, that is, we can define the map\begin{align*}G \times G &\to G\\(g, x) &\mapsto gxg^{-1}\end{align*}and show that it satisfies the two axioms of a group action. For this action, we find that, given $x \in G$, the stabilizer of $x$ is$$\text{Stab}_G(x) = \{g \in G \mid gxg^{-1} = x\} = \{g \in G \mid gx = xg\} = C_G(x)$$the centralizer of $x$ in $G$. Thus the centralizer of any point is a subgroup. The center $C = Z(G) = \bigcap_{x \in G} C_G(x)$ is then the intersection of all the centralizers, hence is also a subgroup. Let $\mathscr{P}(G)$ denote the power set of $G$, i.e., the collection of all subsets of $G$. Then $G$ also acts on $\mathscr{P}(G)$ by conjugation, as one can show that the map\begin{align*}G \times \mathscr{P}(G) &\to \mathscr{P}(G)\\(g, S) &\mapsto gSg^{-1} = \{gsg^{-1} \mid s \in S\}\end{align*}satisfies the two group action axioms. In this case we find that, for $S \in \mathscr{P}(G)$, the stabilizer of $S$ is$$\text{Stab}_G(S) = \{g \in G \mid gSg^{-1} = S\} = \{g \in G \mid gS = Sg\} = N_G(S)$$the normalizer of $S$ in $G$.
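The stabilizer-subgroup fact can be verified by brute force in a small example; here is an illustrative Python sketch in $S_3$ (permutations as tuples), computing the center as the intersection of all conjugation-stabilizers (centralizers):

```python
from itertools import permutations

# Elements of S3 as tuples: p maps i -> p[i]; composition (p*q)(i) = p[q[i]].
G = list(permutations(range(3)))
identity = (0, 1, 2)

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

# Centralizer of x = stabilizer of x under the conjugation action
# g . x = g x g^-1, i.e. all g with gx = xg.
def centralizer(x):
    return {g for g in G if mul(g, x) == mul(x, g)}

center = set(G)
for x in G:
    center &= centralizer(x)

# S3 is non-abelian, so its center is trivial...
assert center == {identity}

# ...and each centralizer really is a subgroup: it contains the identity
# and is closed under products (which suffices in a finite group).
for x in G:
    C = centralizer(x)
    assert identity in C
    assert all(mul(a, b) in C for a in C for b in C)

print("center of S3 is trivial; every centralizer is a subgroup")
```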
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1407012', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/166180/']}
jdg_78731
stackexchange
llm_judgeable_groundtruth_similarity
11343831
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The situation in HTML:

<div class="someDivision">
  <span class="firstSpan">...</span>
  <span class="firstSpan">...</span>
  <span class="firstSpan">
    <span class="secondSpan">...</span>
    <span class="secondSpan">
      "Unique text"
      <a>"Text which I need"</a>

I need access to the text contained in <a> . Now provide the response and nothing else.
Note: So finally I think that I have nailed it down. I am not confirming that it is right, but with some code reading and tests this is what I found out, and I don't have additional time to look into it. If anyone is interested they can check it out and tell whether this answer is right or wrong - I would be glad :) The reference I used was this tarball available at OpenJDK's site: openjdk-6-src-b25-01_may_2012.tar.gz. Java natively translates all strings to the platform's local encoding in this method: jdk/src/share/native/common/jni_util.c - JNU_GetStringPlatformChars(). The system property sun.jnu.encoding is used to determine the platform's encoding. The value of sun.jnu.encoding is set in jdk/src/solaris/native/java/lang/java_props_md.c - GetJavaProperties() using the setlocale() function of libc. The environment variable LC_ALL is used to set the value of sun.jnu.encoding; a value given at the command prompt using the -Dsun.jnu.encoding option to Java is ignored. The call to File.exists() is coded in jdk/src/share/classes/java/io/File.java and returns: return ((fs.getBooleanAttributes(this) & FileSystem.BA_EXISTS) != 0); getBooleanAttributes() is natively coded (and I am skipping steps in code browsing through many files) in jdk/src/share/native/java/io/UnixFileSystem_md.c in the function Java_java_io_UnixFileSystem_getBooleanAttributes0(). Here the macro WITH_FIELD_PLATFORM_STRING(env, file, ids.path, path) converts the path string to the platform's encoding. So conversion to the wrong encoding will actually send a wrong C string (char array) to the subsequent call to stat(), and it will report that the file cannot be found. LESSON: LC_ALL is very important
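The same failure mode — path bytes produced with the wrong codec no longer matching what is on disk, so the stat() call misses the file — can be sketched outside Java too. The following Python snippet is my own analogy (not from the OpenJDK sources) and assumes a UTF-8 filesystem, as on a typical Linux setup:

```python
import os
import tempfile

def exists_with_encoding(dirpath, name, encoding):
    # Mimic the JVM: encode the logical (Unicode) file name with a chosen
    # codec before handing raw bytes to the OS, like JNU_GetStringPlatformChars()
    # does using sun.jnu.encoding.
    raw = os.path.join(dirpath.encode("utf-8"), name.encode(encoding))
    return os.path.exists(raw)

tmp = tempfile.mkdtemp()
name = "caf\u00e9.txt"  # non-ASCII file name
open(os.path.join(tmp, name), "w").close()  # stored with the filesystem's (UTF-8) bytes

print(exists_with_encoding(tmp, name, "utf-8"))    # bytes match the on-disk name
print(exists_with_encoding(tmp, name, "latin-1"))  # wrong bytes, so stat() misses
```

This mirrors why a mismatched LC_ALL makes File.exists() return false for a file that is perfectly real.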
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11343831', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1503884/']}
jdg_78732
stackexchange
llm_judgeable_groundtruth_similarity
15872913
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was learning MVC WebAPI and I was following a tutorial and everything was going fine until I saw the following:

namespace HelloWebAPI.Controllers
{
    using System;

    public class ProductsController : ApiController
    {
    }
}

What we usually do is that we add the resources\scope in the beginning like this:

using System;

namespace HelloWebAPI.Controllers
{
    public class ProductsController : ApiController
    {
    }
}

I want to have a better understanding about it. Thanks. Now provide the response and nothing else.
There is a difference, small but there is. It's all about the sequence of name resolution performed by the compiler. A good answer on the subject can be found here: Should Usings be inside or outside the namespace. In practice, in the first case, when the compiler cannot resolve a type immediately, it searches the namespaces declared in the using directives inside the namespace first. In the second case, it instead searches the enclosing namespace first, and only afterwards the namespaces declared in the using directives outside.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15872913', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/158455/']}
jdg_78733
stackexchange
llm_judgeable_groundtruth_similarity
2236621
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a code block in my MVC view as follows:

<%using (Ajax.BeginForm("MyAction", new { action = "MyAction", controller = "Home", id = ViewData["selected"].ToString() }, new AjaxOptions { UpdateTargetId = "Div1" })) { %>
    <%=Html.DropDownList("ddl", ViewData["MyList"] as SelectList, new { onchange = "this.form.submit()" })%>
<%} %>

I want to set the value of ViewData["selected"] so that I can send it to the desired action. Can anyone please suggest how I can do this? Thanks! Now provide the response and nothing else.
Instead of using a form, why not use a jQuery change event on your drop down?

$(document).ready(function() {
    $("#ddl").change(function() {
        var strSelected = "";
        $("#ddl option:selected").each(function() {
            strSelected += $(this).val();
        });
        var url = "/Home/MyAction/" + strSelected;
        $.post(url, function(data) {
            // do something if necessary
        });
    });
});
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2236621', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/209542/']}
jdg_78734
stackexchange
llm_judgeable_groundtruth_similarity
288550
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let a dodecahedron sit on the plane, with one face's vertices on an origin-centered unit circle. Fix the orientation so that the edge whose indices are $(1,2)$ is horizontal. For any $p \in \mathbb{R}^2$, define the dodecahedral distance $dd(p)$ from $o=(0,0)$ to $p$ to be the fewest number of edge-rolls that will result in a face of the dodecahedron landing on top of $p$. Equivalently, imagine reflecting a regular pentagon over edges, as illustrated below: It takes $4$ rolls/reflections to cover $p=(5,\pi)$: $p=(5,\pi)$, $dd(p)=4$, $s=(3,1,4,2)$. My main question is: Q. Given $p$, how can one calculate $dd(p)$? Greedily choosing, at each step, the roll that is best aligned with the vector $p-o$ does not always succeed. Could one characterize the sequences of roll indices $s$, where rolling over edge $(i,i+1)$ of the pentagon is index $i\,$? These sequences feel analogous to continued-fraction approximations. What do all the points $p$ of $\mathbb{R}^2$ with $dd(p)=k$ look like, i.e., what is the shape of a $dd$-circle $C_k$? $p=(18.3,-1.4)$, $dd(p) \le 12$, $s=(2, 4, 1, 3, 5, 2, 4, 1, 3, 5, 2, 5)$. That the dodecahedral distance is well-defined follows, e.g., from " Thinnest covering of the plane by regular pentagons ." Now provide the response and nothing else.
Here are a few trivial lemmas. I won't use anything about the rolling motion, just that the distance is defined by gluing pentagons edge-to-edge: The $dd$-circle of radius $k$, which I'll call $C_k$, is a closed polygonal curve. Let $D_k$ be the closed $dd$-disk of radius $k$; note that $D_k$ may contain holes and $C_k$ is not in general simple! (This happens even at $k=2$.) The orientations of the line segments forming $C_k$ (measured relative to the $+x$ ray) when $k$ is even (odd) take the form $\pi m/5$ with $m$ an odd (even) integer; this is because pentagons are glued to each other in only two orientations (up-pointing or down-pointing) and furthermore, we can only glue down-pointing pentagons to up-pointing ones and vice versa. (Note also that no line segments of $C_k$ lie on $C_{k+1}$). It's not hard to see that $C_k$ is strictly contained within $D^\circ_{k+2}$, the interior of $D_{k+2}$ (the open $dd$-disk). Suppose that $v$ is a simple vertex of $C_k$ so that the interior angle $\alpha$ is defined and $\alpha> 4\pi/5$. Then $v$ is surrounded by the pentagons added to its adjacent segments and therefore lies inside $D^\circ_{k+1}$. Otherwise, if the interior angle $\alpha\leq 4\pi/5$ then $v$ is also a vertex on $C_{k+1}$. However the interior angle at $v$ on $C_{k+1}$ is now $\alpha+6\pi/5>4\pi/5$, so it gets "eaten up" in the next layer. Here are pictures of $D_k$ for $k=1,\dots,8$ placed side by side with $D_{k}\setminus D_{k-1}$, (made by "hand" from edited versions of this file from wikipedia ): $D_1$: $D_2$ and $D_2\setminus D_1$: $D_3$ and $D_3\setminus D_2$: $D_4$ and $D_4\setminus D_3$: $D_5$ and $D_5\setminus D_4$: $D_6$ and $D_6\setminus D_5$: $D_7$ and $D_7\setminus D_6$: $D_8$ and $D_8\setminus D_7$: From these images, the following patterns are apparent: the "outer frontier" of $C_k$ is determined by a closed chain of $5k$ pentagons joined vertex to vertex. 
When $k$ is odd, these pentagons are all "down-pointing" and when $k$ is even they all are "up-pointing". From now on assume $k\geq2$. Then the pentagon chain is formed from 10 "segments of pentagons", which I'll call pentasegments, joining 10 corner pentagons where the pentasegments change direction. There are two types of pentasegments: those where the bases of the constituent pentagons point outwards relative to the interior of $D_k$ (type I), and those where the vertices of the pentagons point outwards (type II). The top pentasegment of $C_k$ is type I if $k$ is odd and type II if $k$ is even, and the 10 pentasegments alternate between type I and type II so that there are 5 of each type. When $k$ is even, there are $k/2-1$ pentagons on each pentasegment lying strictly between the corners (the number of pentagons on each pentasegment including the 2 corners is $k/2+1$). When $k$ is odd, there are $(k-1)/2$ pentagons on each type I pentasegment between the corners and $(k-3)/2$ pentagons on each type II pentasegment between the corners. There are holes between each of the pentagons lying on the type II pentasegments, thus there are $5\lfloor \frac{k}{2}\rfloor$ holes in total. I think the above gives a full characterization of $C_k$ and also shouldn't be too hard to prove, though I'm finding it awkward to turn the pictures into words. First, the corners propagate along "zig-zag" paths of pentagons; for example, a formula for the coordinates in $\mathbb{C}$ of the centroid of one of the corners of $C_k$ is: $$2r_{in}\sum_{j=1}^ke^{i\pi m_j/5},$$ where $r_{in}=\frac{1}{2}\sqrt{1+\frac{2}{\sqrt{5}}}$ is the inradius of the unit-side regular pentagon (formula from MathWorld), $m_{2l-1}=2$, and $m_{2l}=1$. This simplifies to: $$2r_{in}\left(\left\lceil\frac{k}{2}\right\rceil e^{2\pi i/5}+\left\lfloor\frac{k}{2}\right\rfloor e^{\pi i/5}\right),$$ and there are 9 other similar formulas for the other corners.
Another easy step is seeing that type I pentasegments turn into type II pentasegments (and vice versa) after attaching the next layer of pentagons, and seeing how the number of pentagons along each type of pentasegment changes is also straightforward. Maybe there's a more elegant description which might then lead to a simple formula / algorithm to compute $dd(p)$. Here's some geometry which may help in writing an algorithm; as you can see, the details are straightforward but rather messy. If I haven't screwed up the law of cosines, the centroids of the corner pentagons of $C_k$ lie on the circle of radius $R_k$ centered at the origin, where: $$\left(\frac{R_k}{2r_{in}}\right)^2=\left\lceil\frac{k}{2}\right\rceil^2+\left\lfloor\frac{k}{2}\right\rfloor^2+2\left\lceil\frac{k}{2}\right\rceil\left\lfloor\frac{k}{2}\right\rfloor\cos\frac{\pi}{5}.$$ The 10-gon $P_k$ formed by these centroids is inscribed in this circle and is determined once we calculate the polar angles of two neighboring centroids, $\alpha_k,\beta_k$. This can be done with the law of sines but I won't write out expressions explicitly here. Let $S_k$ be the polygonal annulus lying strictly between $P_k$ and $P_{k+1}$. Given a point $p$, it will lie in some $S_k$ or on some $P_k$, and this tells us that $|dd(p)-k|\leq 1$. I don't know what the best algorithm for determining this $k$ is (something using the polar coordinates of $p$?), but I suspect Joseph O'Rourke will know. Once we get $k$, then it's possible to pin down $dd(p)$ exactly by analyzing in more detail where $p$ sits relative to the edges of $C_k$, though again I'm not sure what the most efficient algorithm would be.
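As a numeric sanity check on the zig-zag/law-of-cosines step (my own addition, with invented helper names), the alternating sum $\sum_{j=1}^k e^{i\pi m_j/5}$ with $m_{2l-1}=2$, $m_{2l}=1$ collapses to $\lceil k/2\rceil e^{2\pi i/5}+\lfloor k/2\rfloor e^{i\pi/5}$, and its squared modulus equals $\lceil k/2\rceil^2+\lfloor k/2\rfloor^2+2\lceil k/2\rceil\lfloor k/2\rfloor\cos(\pi/5)$; the overall scale $2r_{in}$ is a common prefactor and drops out of the check:

```python
import cmath
import math

def zigzag_sum(k):
    # sum_{j=1}^k e^{i*pi*m_j/5} with m_j alternating 2, 1, 2, 1, ...
    m = [2 if j % 2 == 1 else 1 for j in range(1, k + 1)]
    return sum(cmath.exp(1j * math.pi * mj / 5) for mj in m)

def closed_form(k):
    a, b = math.ceil(k / 2), math.floor(k / 2)
    return a * cmath.exp(2j * math.pi / 5) + b * cmath.exp(1j * math.pi / 5)

for k in range(1, 20):
    a, b = math.ceil(k / 2), math.floor(k / 2)
    # The alternating sum collapses to the ceil/floor closed form ...
    assert abs(zigzag_sum(k) - closed_form(k)) < 1e-12
    # ... and its squared modulus matches the law-of-cosines expression.
    lhs = abs(zigzag_sum(k)) ** 2
    rhs = a * a + b * b + 2 * a * b * math.cos(math.pi / 5)
    assert abs(lhs - rhs) < 1e-12
print("checked k = 1..19")
```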
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/288550', 'https://mathoverflow.net', 'https://mathoverflow.net/users/6094/']}
jdg_78735
stackexchange
llm_judgeable_groundtruth_similarity
31668659
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to ST-JS, and I would like to output some debug info to the browser console. I found org.stjs.javascript.Console; but can't make it work. I tried this way:

Console console = new Console() {};
console.log("Hello");

But then I get an error like this:

Uncaught ReferenceError: Console is not defined

Does anybody know how to use it like they do on the ST-JS reference page? Any help would be much appreciated. Now provide the response and nothing else.
After an hour of hair pulling I realised that the httpBackend is very specific about the order that calls are made in - the expectation must be set not just before you call flush, but before the resource request is made, and when you call flush you must have made exactly and only the requests expected. This means if you want to flush between sequential requests, the order of requests and expectations must be exactly:

$httpBackend.expectGET('...');
resource.get();
$httpBackend.flush();
$httpBackend.expectGET('...');
resource.get();
$httpBackend.flush();
// ...etc

So in the case of the code above, it works if I change the ordering to:

describe('poll', function() {
  beforeEach(function() {
    $httpBackend.expectGET('/books/1').respond(200, {id: 1});
    poll(function() {
      successCalled = true;
    });
    $httpBackend.flush();
    $httpBackend.expectGET('/books/1').respond(200, {id: 1, newVal: 1});
    $timeout.flush();
    canStop = true;
    $httpBackend.flush();
  });

  it('should call success when canStop is true', function() {
    expect(successCalled).toBe(true);
  });
});
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31668659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1094823/']}
jdg_78736
stackexchange
llm_judgeable_groundtruth_similarity
4376199
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm working through the asymptotics section of Stein-Shakarchi's Complex Analysis, where the Bessel function is defined, for $\nu > - \frac{1}{2}$ as $$ J_{\nu}(s) = \frac{(s/2)^\nu}{\Gamma(\nu + 1/2)\Gamma(1/2)}\int_{-1}^1e^{isx}(1-x^2)^{\nu-1/2}\text{d}x $$ They then state that if $J_{-1/2}(s)$ is defined as $\lim_{\nu\to -1/2}J_\nu(s)$ then it equals $\sqrt{\frac{2}{\pi s}}\cos(s).$ I can see where the prefactor comes from but I don't understand how one gets $\cos(s)$ from this limit. Any help would be appreciated. Now provide the response and nothing else.
Clearly, we cannot just let $\nu \searrow - \frac{1}{2}$ immediately in your formula, since the Gamma function and the integral are divergent for $\nu = -\frac{1}{2}$ . We first rewrite the expression: \begin{align}\operatorname{J}_\nu (s) &= \frac{(s/2)^\nu}{\sqrt{\pi} \operatorname{\Gamma}\left(\nu + \frac{3}{2}\right)} \left(\nu + \frac{1}{2}\right) \int \limits_{-1}^1 \mathrm{e}^{\mathrm{i} s x} (1-x^2)^{\nu - \frac{1}{2}} \, \mathrm{d}x \\&= \frac{(s/2)^\nu}{\sqrt{\pi} \operatorname{\Gamma}\left(\nu + \frac{3}{2}\right)} \left(\nu + \frac{1}{2}\right) \int \limits_0^1 2 \cos(s x) (1 - x^2)^{\nu - \frac{1}{2}} \, \mathrm{d} x \\&= \frac{(s/2)^\nu}{\sqrt{\pi} \operatorname{\Gamma}\left(\nu + \frac{3}{2}\right)} \int \limits_0^1 \frac{\cos(sx)}{x} \frac{\mathrm{d}}{\mathrm{d}x} \left(1 - (1-x^2)^{\nu + \frac{1}{2}}\right) \mathrm{d} x \, .\end{align} Then we integrate by parts to obtain $$\operatorname{J}_\nu (s) = \frac{(s/2)^\nu}{\sqrt{\pi} \operatorname{\Gamma}\left(\nu + \frac{3}{2}\right)} \left[\cos(s) + \int \limits_0^1 [\cos(s x) + s x\sin(s x)] \frac{1 - (1-x^2)^{\nu + \frac{1}{2}}}{x^2} \mathrm{d} x \right] .$$ Now the prefactor is harmless. The fraction in the remaining integral is bounded by $1$ and goes to zero for $x \in [0,1)$ as $\nu \searrow - \frac{1}{2}$ . Therefore, we can finally take the limit to find \begin{align}\operatorname{J}_{-1/2} (s) &= \lim_{\nu \searrow -1/2} \frac{(s/2)^\nu}{\sqrt{\pi} \operatorname{\Gamma}\left(\nu + \frac{3}{2}\right)} \left[\cos(s) + \int \limits_0^1 [\cos(s x) + s x\sin(s x)] \frac{1 - (1-x^2)^{\nu + \frac{1}{2}}}{x^2} \mathrm{d} x \right] \\&= \frac{(s/2)^{-1/2}}{\sqrt{\pi} \operatorname{\Gamma}(1)} [\cos(s) + 0] = \sqrt{\frac{2}{\pi s}} \cos(s) \, .\end{align}
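As a numerical sanity check (my own addition, using only the standard library and invented helper names), one can evaluate the integrated-by-parts expression for $\nu$ slightly above $-\frac{1}{2}$ with Simpson's rule and compare it against $\sqrt{2/(\pi s)}\cos s$. The integrand factor $(1-(1-x^2)^{\nu+1/2})/x^2$ is regular, with limit $\nu+\tfrac12$ at $x=0$:

```python
import math

def J_via_parts(nu, s, n=4000):
    # Integrated-by-parts representation:
    # J_nu(s) = (s/2)^nu / (sqrt(pi) * Gamma(nu + 3/2))
    #           * ( cos(s) + int_0^1 [cos(sx) + s x sin(sx)] (1 - (1-x^2)^(nu+1/2)) / x^2 dx )
    p = nu + 0.5

    def f(x):
        if x < 1e-8:
            return p  # limit of the integrand as x -> 0 (the bracket tends to 1)
        return (math.cos(s * x) + s * x * math.sin(s * x)) * (1 - (1 - x * x) ** p) / (x * x)

    # composite Simpson's rule on [0, 1]
    h = 1.0 / n
    total = f(0.0) + f(1.0)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(i * h)
    integral = total * h / 3

    prefactor = (s / 2) ** nu / (math.sqrt(math.pi) * math.gamma(nu + 1.5))
    return prefactor * (math.cos(s) + integral)

s = 2.0
approx = J_via_parts(-0.499, s)
target = math.sqrt(2 / (math.pi * s)) * math.cos(s)
print(approx, target)
assert abs(approx - target) < 1e-2
```

With $\nu=-0.499$ the two values already agree to well within $10^{-2}$, consistent with the limit computed above.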
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4376199', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1022977/']}
jdg_78737
stackexchange
llm_judgeable_groundtruth_similarity
5923676
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have applied a template filter, which extracts the domain from an email address. In the template file I have this code:

{% for email in user_list %}
<p>
  {{email.email}} corresponds to this domain: <b> {{email.email|domain}} </b>
</p>
{% endfor %}

It is currently making bold all domain names. What I want to do is to make bold ONLY those email addresses with a 'valid' email extension (for example, only those at the domain '@gmail.com'). How do I apply an if or ifequal statement to do this? For example, this is the logic I want it to have --

{% for email in user_list %}
<p>
  {{email.email}} corresponds to this domain:
  {% if domain = 'specified extension' %}
    <b> {{email.email|domain}} </b>
  {% else %}
    {{ email.email|domain }}
  {% endif %}
</p>
{% endfor %}

Update: OK -- I got this working by creating a custom model in models.py, like so --

class Table(models.Model):
    name = models.CharField(max_length=50)
    email = models.CharField(max_length=50)

    def valid_email(self):
        verified = ['yahoo.com','gmail.com']
        domain = self.email.split('@')[1]
        return domain in verified

And in the template index.html --

{% for email in user_list %}
<p>
  {{email.email}} corresponds to this domain:
  {% if email.valid_email %}
    <b>{{ email.email|domain}}</b>
  {% else %}
    {{ email.email|domain}}
  {% endif %}
</p>
{% endfor %}

This works well now, but my concern is that I will need to update models.py whenever I tamper with the verified email list. Where would be a better place to hold this valid_email() function, such that I can update it easily? And then how would I reference the function in the template (if different than current)? Thank you. Now provide the response and nothing else.
You can use the with template tag to assign email.email|domain to domain.

<p>
  {{ email.email }} corresponds to this domain:
  {% with domain=email.email|domain %}
    {% if domain == 'specified extension' %}
      <b>{{ domain }}</b>
    {% else %}
      {{ domain }}
    {% endif %}
  {% endwith %}
</p>

Note that I've used the Django 1.3 with syntax. See the docs for earlier versions of Django. To follow up on Ben James' comment, if you set a list of specified_extensions in the view, you can use the in operator in your if statement. In the view:

specified_extensions = ['gmail.com', 'hotmail.com',]

In the template:

{% if domain in specified_extensions %}...

Update: I think that you have put the valid_email method in the correct place, it belongs on the model. If you want to be able to update the list of domains without restarting the server, I suggest you store the domains in the database.

class VerifiedDomain(models.Model):
    name = models.CharField(max_length=50, help_text="an allowed domain name for emails, e.g. 'gmail.com'")

class Table(models.Model):
    name = models.CharField(max_length=50)
    email = models.CharField(max_length=50)

    def valid_email(self):
        domain = self.email.split('@')[1]
        return VerifiedDomain.objects.filter(name=domain).exists()
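Since the domain test itself has no Django dependency, it can also be factored out and unit-tested as a plain function. Here is a minimal sketch (my own, with invented names) that additionally guards against strings with no @:

```python
def email_domain(email):
    # Return the part after the last '@' (lowercased), or None for malformed input.
    if "@" not in email:
        return None
    return email.rsplit("@", 1)[1].lower()

def is_verified(email, verified_domains):
    return email_domain(email) in verified_domains

VERIFIED = {"yahoo.com", "gmail.com"}

print(is_verified("alice@gmail.com", VERIFIED))   # True
print(is_verified("bob@example.org", VERIFIED))   # False
print(is_verified("no-at-sign", VERIFIED))        # False
```

A model method like valid_email above could delegate to such a helper, keeping the parsing logic testable without a database.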
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5923676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/651174/']}
jdg_78738
stackexchange
llm_judgeable_groundtruth_similarity
11706533
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: On the callback from Facebook for nodejs passport authentication, how do you get the req object within the callback?

passport.use(new FacebookStrategy({
    clientID: 123456789,
    clientSecret: 'SECRET',
    callbackURL: "http://example.com/login/facebook/callback"
  },
  function(accessToken, refreshToken, profile, done) {
    // Is there any way to get the req object in here?
  }
));

Now provide the response and nothing else.
Setting the passReqToCallback option, as so:

passport.use(new LocalStrategy({ passReqToCallback: true },
  function(req, username, password, done) {
    User.findOne({ username: username }, function (err, user) {
      if (err) { return done(err); }
      if (!user) { return done(null, false); }
      if (!user.verifyPassword(password)) {
        req.flash('error', 'Your password is too long');
        req.flash('error', 'Also, it is too short!!!');
        return done(null, false);
      }
      return done(null, user);
    });
  }
));

req becomes the first argument to the verify callback. As per https://github.com/jaredhanson/passport/issues/39
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11706533', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1059070/']}
jdg_78739
stackexchange
llm_judgeable_groundtruth_similarity
4492473
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Two dice are rolled simultaneously. Calculate the probability that the maximum is greater than or equal to $3$ . Attempt: The answer is $\frac{8}{36}$ . Why can't I say that since there are $4 \cdot 4$ ways to give data respecting the imposed conditions, then we have the probability of $\frac{16}{36}$ . Am I missing something? Now provide the response and nothing else.
You would calculate the probability that the outcomes of both dice are greater than or equal to 3. But then you would miss the outcome $(2,4)$, for instance. So you are looking for the combinations where at least one outcome is greater than or equal to 3. You can make a table and count the favorable outcomes, or you can look at the complementary event: the combinations where both outcomes are less than or equal to 2: $$\max(1,1)=1,\quad \max(1,2)=2,\quad \max(2,1)=2,\quad \max(2,2)=2.$$ The complementary probability is therefore $\frac{4}{36}$, so the asked probability is $P\left(\max(X_1,X_2)\geq 3\right)=1-\frac{4}{36}=\frac{32}{36}=\frac{8}{9}$. This is double your attempted answer of $\frac{16}{36}$ (and four times the quoted $\frac{8}{36}$).
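The sample space is small enough to confirm by brute force; here is a quick enumeration (my own addition) over all 36 equally likely outcomes:

```python
from itertools import product
from fractions import Fraction

# All ordered pairs of two fair dice
outcomes = list(product(range(1, 7), repeat=2))
favorable = [o for o in outcomes if max(o) >= 3]

p = Fraction(len(favorable), len(outcomes))
print(len(favorable), p)  # 32 of 36, i.e. 8/9
```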
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4492473', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/666297/']}
jdg_78740