stackexchange
llm_judgeable_groundtruth_similarity
2881931
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose that the series $\sum_{n=1}^{\infty} a_n$ converges conditionally. Show that the series $\sum_{n=3}^{\infty} n(\log n)(\log {\log n})^2 a_n$ diverges. Any hint on how to proceed? Now provide the response and nothing else.
Suppose, for contradiction, that $\sum_{n=3}^{\infty} n(\log n)(\log {\log n})^2 a_n$ converges. Then its terms tend to zero, $$n(\log n)(\log {\log n})^2 a_n\to 0,$$ so $$|a_n|<\frac{1}{n(\log n)(\log {\log n})^2}$$ for all sufficiently large $n$. But by the Cauchy condensation test, $$\sum_{n=3}^\infty \frac{1}{n(\log n)(\log {\log n})^2}<\infty,$$ so by comparison $\sum a_n$ converges absolutely, contradicting conditional convergence.
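To make the condensation step explicit (a sketch of the omitted computation): with $f(n) = \frac{1}{n(\log n)(\log\log n)^2}$, which is positive and nonincreasing for large $n$, the condensed series is

```latex
\sum_k 2^k f(2^k)
  = \sum_k \frac{2^k}{2^k \,(k\log 2)\,\bigl(\log(k\log 2)\bigr)^2}
  = \sum_k \frac{1}{(k\log 2)\,\bigl(\log k + \log\log 2\bigr)^2},
```

which converges by comparison with $\sum_k \frac{1}{k(\log k)^2}$ (itself convergent, e.g. by a second condensation or the integral test). Hence $\sum_{n\ge 3} f(n)$ converges.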
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2881931', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/575415/']}
jdg_79541
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When photographing a sheet of paper (e.g. with a phone camera), I get the following result (left image) (jpg download here). The desired result (processed manually with an image editing software) is on the right: I would like to process the original image with OpenCV to get a better brightness/contrast automatically (so that the background is more white). Assumption: the image has an A4 portrait format (we don't need to perspective-warp it in this topic here), and the sheet of paper is white with possibly text/images in black or colors. What I've tried so far:

Various adaptive thresholding methods such as Gaussian, OTSU (see OpenCV doc Image Thresholding). It usually works well with OTSU:

ret, gray = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY)

but it only works for grayscale images and not directly for color images. Moreover, the output is binary (white or black), which I don't want: I prefer to keep a color non-binary image as output.

Histogram equalization applied on Y (after RGB => YUV transform) or applied on V (after RGB => HSV transform), as suggested by this answer (Histogram equalization not working on color image - OpenCV) or this one (OpenCV Python equalizeHist colored image):

img3 = cv2.imread(f)
img_transf = cv2.cvtColor(img3, cv2.COLOR_BGR2YUV)
img_transf[:,:,0] = cv2.equalizeHist(img_transf[:,:,0])
img4 = cv2.cvtColor(img_transf, cv2.COLOR_YUV2BGR)
cv2.imwrite('test.jpg', img4)

or with HSV:

img_transf = cv2.cvtColor(img3, cv2.COLOR_BGR2HSV)
img_transf[:,:,2] = cv2.equalizeHist(img_transf[:,:,2])
img4 = cv2.cvtColor(img_transf, cv2.COLOR_HSV2BGR)

Unfortunately, the result is quite bad since it creates awful micro contrasts locally (?): I also tried YCbCr instead, and it was similar.
I also tried CLAHE (Contrast Limited Adaptive Histogram Equalization) with various tileGridSize from 1 to 1000:

img3 = cv2.imread(f)
img_transf = cv2.cvtColor(img3, cv2.COLOR_BGR2HSV)
clahe = cv2.createCLAHE(tileGridSize=(100,100))
img_transf[:,:,2] = clahe.apply(img_transf[:,:,2])
img4 = cv2.cvtColor(img_transf, cv2.COLOR_HSV2BGR)
cv2.imwrite('test.jpg', img4)

but the result was equally awful. Doing this CLAHE method with the LAB color space, as suggested in the question How to apply CLAHE on RGB color images:

import cv2, numpy as np
bgr = cv2.imread('_example.jpg')
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
lab_planes = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(100,100))
lab_planes[0] = clahe.apply(lab_planes[0])
lab = cv2.merge(lab_planes)
bgr = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
cv2.imwrite('_example111.jpg', bgr)

gave a bad result too. Output image:

Doing an adaptive thresholding or histogram equalization separately on each channel (R, G, B) is not an option, since it would mess with the color balance, as explained here.

The "contrast stretching" method from scikit-image's tutorial on Histogram Equalization — the image is rescaled to include all intensities that fall within the 2nd and 98th percentiles — is a little bit better, but still far from the desired result (see image on top of this question).

TL;DR: how to get an automatic brightness/contrast optimization of a color photo of a sheet of paper with OpenCV/Python? What kind of thresholding/histogram equalization/other technique could be used? Now provide the response and nothing else.
Contrast and brightness can be adjusted using alpha (α) and beta (β), respectively. These variables are often called the gain and bias parameters. The expression can be written as

g(i,j) = α * f(i,j) + β

OpenCV already implements this as cv2.convertScaleAbs(), so we can just use this function with user-defined alpha and beta values.

import cv2

image = cv2.imread('1.jpg')
alpha = 1.95 # Contrast control (1.0-3.0)
beta = 0 # Brightness control (0-100)

manual_result = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)

cv2.imshow('original', image)
cv2.imshow('manual_result', manual_result)
cv2.waitKey()

But the question was: how to get an automatic brightness/contrast optimization of a color photo? Essentially, the question is how to automatically calculate alpha and beta. To do this, we can look at the histogram of the image. Automatic brightness and contrast optimization calculates alpha and beta so that the output range is [0...255]. We calculate the cumulative distribution to determine where color frequency is less than some threshold value (say 1%) and cut the right and left sides of the histogram. This gives us our minimum and maximum ranges. Here's a visualization of the histogram before (blue) and after clipping (orange). Notice how the more "interesting" sections of the image are more pronounced after clipping.

To calculate alpha, we take the minimum and maximum grayscale values after clipping and divide our desired output range of 255 by their difference:

α = 255 / (maximum_gray - minimum_gray)

To calculate beta, we plug into the formula where g(i, j)=0 and f(i, j)=minimum_gray:

g(i,j) = α * f(i,j) + β

which after solving gives

β = -minimum_gray * α

For your image we get

Alpha: 3.75
Beta: -311.25

You may have to adjust the clipping threshold value to refine the results.
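As an aside, the cut-point search just described can also be written compactly with NumPy's cumulative-sum and binary-search routines. This is only a sketch of the same idea, not code from this answer, and `clip_range` is a hypothetical helper name:

```python
import numpy as np

def clip_range(gray, clip_hist_percent=1.0):
    # Histogram and cumulative distribution over the 256 gray levels
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = np.cumsum(hist.astype(float))

    # Amount of mass to clip from each tail (half the total percentage)
    clip = cdf[-1] * clip_hist_percent / 100.0 / 2.0

    # First bin whose cumulative count reaches the left clip point
    minimum_gray = int(np.searchsorted(cdf, clip, side='left'))
    # Last bin whose cumulative count stays below the right clip point
    maximum_gray = int(np.searchsorted(cdf, cdf[-1] - clip, side='left')) - 1

    alpha = 255.0 / (maximum_gray - minimum_gray)
    beta = -minimum_gray * alpha
    return alpha, beta
```

The two `searchsorted` calls find the same cut points as the linear while-loop scans, just without the explicit loops.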
Here are some example results using a 1% threshold with other images: Before -> After

Automated brightness and contrast code:

import cv2
import numpy as np
from matplotlib import pyplot as plt

# Automatic brightness and contrast optimization with optional histogram clipping
def automatic_brightness_and_contrast(image, clip_hist_percent=1):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Calculate grayscale histogram
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
    hist_size = len(hist)

    # Calculate cumulative distribution from the histogram
    accumulator = [float(hist[0])]
    for index in range(1, hist_size):
        accumulator.append(accumulator[index - 1] + float(hist[index]))

    # Locate points to clip
    maximum = accumulator[-1]
    clip_hist_percent *= (maximum / 100.0)
    clip_hist_percent /= 2.0

    # Locate left cut
    minimum_gray = 0
    while accumulator[minimum_gray] < clip_hist_percent:
        minimum_gray += 1

    # Locate right cut
    maximum_gray = hist_size - 1
    while accumulator[maximum_gray] >= (maximum - clip_hist_percent):
        maximum_gray -= 1

    # Calculate alpha and beta values
    alpha = 255 / (maximum_gray - minimum_gray)
    beta = -minimum_gray * alpha

    '''
    # Calculate new histogram with desired range and show histogram
    new_hist = cv2.calcHist([gray], [0], None, [256], [minimum_gray, maximum_gray])
    plt.plot(hist)
    plt.plot(new_hist)
    plt.xlim([0, 256])
    plt.show()
    '''

    auto_result = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)
    return (auto_result, alpha, beta)

image = cv2.imread('1.jpg')
auto_result, alpha, beta = automatic_brightness_and_contrast(image)
print('alpha', alpha)
print('beta', beta)
cv2.imshow('auto_result', auto_result)
cv2.waitKey()

Result image with this code:

Results with other images using a 1% threshold:

An alternative version is to add gain and bias to an image using saturation arithmetic instead of using OpenCV's cv2.convertScaleAbs().
Unlike the built-in method, this version does not take an absolute value, which would otherwise lead to nonsensical results (e.g., a pixel at 44 with alpha = 3 and beta = -210 becomes 78 with OpenCV, when in fact it should become 0).

import cv2
import numpy as np
# from matplotlib import pyplot as plt

def convertScale(img, alpha, beta):
    """Add bias and gain to an image with saturation arithmetics. Unlike
    cv2.convertScaleAbs, it does not take an absolute value, which would
    lead to nonsensical results (e.g., a pixel at 44 with alpha = 3 and
    beta = -210 becomes 78 with OpenCV, when in fact it should become 0).
    """
    new_img = img * alpha + beta
    new_img[new_img < 0] = 0
    new_img[new_img > 255] = 255
    return new_img.astype(np.uint8)

# Automatic brightness and contrast optimization with optional histogram clipping
def automatic_brightness_and_contrast(image, clip_hist_percent=25):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Calculate grayscale histogram
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
    hist_size = len(hist)

    # Calculate cumulative distribution from the histogram
    accumulator = [float(hist[0])]
    for index in range(1, hist_size):
        accumulator.append(accumulator[index - 1] + float(hist[index]))

    # Locate points to clip
    maximum = accumulator[-1]
    clip_hist_percent *= (maximum / 100.0)
    clip_hist_percent /= 2.0

    # Locate left cut
    minimum_gray = 0
    while accumulator[minimum_gray] < clip_hist_percent:
        minimum_gray += 1

    # Locate right cut
    maximum_gray = hist_size - 1
    while accumulator[maximum_gray] >= (maximum - clip_hist_percent):
        maximum_gray -= 1

    # Calculate alpha and beta values
    alpha = 255 / (maximum_gray - minimum_gray)
    beta = -minimum_gray * alpha

    '''
    # Calculate new histogram with desired range and show histogram
    new_hist = cv2.calcHist([gray], [0], None, [256], [minimum_gray, maximum_gray])
    plt.plot(hist)
    plt.plot(new_hist)
    plt.xlim([0, 256])
    plt.show()
    '''

    auto_result = convertScale(image, alpha=alpha, beta=beta)
    return (auto_result, alpha, beta)

image = cv2.imread('1.jpg')
auto_result, alpha, beta = automatic_brightness_and_contrast(image)
print('alpha', alpha)
print('beta', beta)
cv2.imshow('auto_result', auto_result)
cv2.imwrite('auto_result.png', auto_result)
cv2.imshow('image', image)
cv2.waitKey()
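If all you need is the saturating scale itself, the elementwise clamping can also be expressed with np.clip. This is a sketch equivalent to the convertScale above (with a float cast added to avoid uint8 overflow during the multiply):

```python
import numpy as np

def convert_scale(img, alpha, beta):
    # Work in float to avoid uint8 wraparound, then clamp to [0, 255]
    # instead of taking |.| the way cv2.convertScaleAbs does
    scaled = img.astype(np.float64) * alpha + beta
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

For the example above, a pixel at 44 with alpha = 3 and beta = -210 now maps to 0, not 78.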
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What logging solutions exist for j2me? I'm specifically interested in easily excluding logging for "release" version, to have a smaller package & memory footprint. Now provide the response and nothing else.
If you are using preprocessing and obfuscation with ProGuard, then you can have a simple logging class.

public class Log {
    public static void debug(final String message) {
        //#if !release.build
        System.out.println(message);
        //#endif
    }
}

Or do logging wherever you need to. Now, if the release.build property is set to true, this code will be commented out, resulting in an empty method. ProGuard will remove all usages of the empty method — in effect, the release build will have all debug messages removed.

Edit: Thinking about it on the library level (I'm working on a mapping J2ME library), I have probably found a better solution.

public class Log {
    private static boolean showDebug;

    public static void debug(final String message) {
        if (showDebug) {
            System.out.println(message);
        }
    }

    public static void setShowDebug(final boolean show) {
        showDebug = show;
    }
}

This way the end developer can enable the log levels inside the library that he/she is interested in. If nothing is enabled, all logging code will be removed in the end-product obfuscation. Sweet :)

/JaanusSiim
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am reading a single data item from a UDP port. It's essential that this read be the lowest latency possible. At present I'm reading via the boost::asio library's async_receive_from method. Does anyone know the kind of latency I will experience between the packet arriving at the network card, and the callback method being invoked in my user code? Boost is a very good library, but quite generic, is there a lower latency alternative? All opinions on writing low-latency UDP network programs are very welcome. EDIT: Another question, is there a relatively feasible way to estimate the latency that I'm experiencing between NIC and user mode? Now provide the response and nothing else.
Your latency will vary, but it will be far from the best you can get. Here are a few things that will stand in your way to better latency:

Boost.ASIO

It constantly allocates/deallocates memory to store "state" in order to invoke a callback function associated with your read operation. It does unnecessary mutex locking/unlocking in order to support a broken mix of async and sync approaches. Worst of all, it constantly adds and removes event descriptors from the underlying notification mechanism.

All in all, asio is a good library for high-level application developers, but it comes with a big price tag and a lot of CPU-cycle-eating gremlins. Another alternative is libevent; it is a lot better, but still aims to support many notification mechanisms and be platform-independent. Nothing can beat native mechanisms, i.e. epoll.

Other things

UDP stack. It doesn't do a very good job for latency-sensitive applications. One of the most popular solutions is OpenOnload. It bypasses the stack and works directly with your NIC.

Scheduler. By default, the scheduler is optimized for throughput, not latency. You will have to tweak and tune your OS in order to make it latency-oriented. Linux, for example, has a lot of "rt" patches for that purpose.

Watch out not to sleep. Once your process is sleeping, you will never get a good wakeup latency compared to constantly burning a CPU while waiting for a packet to arrive.

Interference with other IRQs, processes, etc.

I cannot tell you exact numbers, but assuming that you won't be getting a lot of traffic, using Boost and a regular Linux kernel with regular hardware, your latency will range somewhere between ~50 microseconds and ~100 milliseconds. It will improve a bit as you get more data, after some point start dropping, and will always be varying. I'd say that if you are OK with those numbers, don't bother optimizing.
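To illustrate the "don't sleep" point concretely — in Python rather than C++/asio, purely as a sketch of the idea — a receiver can busy-poll a non-blocking socket instead of blocking in the kernel, trading a burned core for not paying the scheduler's wakeup latency. The setup below is hypothetical:

```python
import socket

def busy_poll_recv(sock, bufsize=2048):
    # Spin on a non-blocking socket: the thread never sleeps, so there is
    # no scheduler wakeup between packet arrival and the application read.
    while True:
        try:
            return sock.recvfrom(bufsize)
        except BlockingIOError:
            continue

# Hypothetical setup: a non-blocking UDP socket bound to loopback
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('127.0.0.1', 0))
sock.setblocking(False)
```

A real low-latency receiver would additionally pin the polling thread to an isolated core and consider kernel-bypass (OpenOnload and the like) as described above.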
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Let's say I want to undo all changes introduced by previous commits. As far as I understand, git reset --hard <specified commit> will remove all commits up until the specified commit, and undo all changes. On the other hand, git checkout <specified commit> will change my directory to reflect the specified commit. So if I git reset after git checkout, will it have the same result as git reset --hard? Alternatively, if I simply git commit after git checkout, will the newly created commit overwrite the existing commits? Now provide the response and nothing else.
Commits are trees, branches are pointers to some commits, and HEAD (alias: @) is a pointer to some branch or to some commit ('detached HEAD').

git checkout <commit> moves the HEAD, index, and working tree to the specified commit, and keeps any local changes in the index and working tree if possible; otherwise the operation is aborted. When you do this, the HEAD becomes detached. You will see this message, which is pretty self-explanatory:

You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example:

git checkout -b new_branch_name

So you will not lose any changes in the working tree and index, and you will not lose any commits with it.* It is a safe operation.

git reset --hard <commit> moves the branch pointed to by HEAD (or the HEAD itself, if it is detached), index, and working tree to the specified commit. So you will lose all the changes in the working tree and index. Additionally, if you're moving to some older commit and the newer commits are not on some other branch, you will lose these newer commits too. It is not a safe operation, so don't do this unless you really understand what you're doing.

See also these great answers for information about how to undo a commit. You might also benefit from using a GUI tool like SourceTree.

* Well, unless you're already in detached HEAD state on some dangling commit, but normally you shouldn't worry.
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use a custom font (Bebas Neue) in my iOS application. The steps I took are:

Copy the .otf files to the project.
Confirm the .otf files have the project set as target.
Added the .otf files in 'Fonts provided by application' in the plist.
In Build Phases, the .otf files are in 'Copy Bundle Resources'.
Install the font on my Mac.
Try to print out all fonts available, but I can't see my custom font.

The code I used:

for name in UIFont.familyNames() {
    println(name)
    if let nameString = name as? String {
        println(UIFont.fontNamesForFamilyName(nameString))
    }
}

Trying to set the font in code worked. The code I used:

textLabel?.font = UIFont(name: "BebasNeueRegular", size: 14)

But I can't set it in Interface Builder. Any idea? Screenshots: Now provide the response and nothing else.
Try the steps below (code tested in Swift 3).

Step 1: Add your custom font to your project (make sure "Add to Target" is ticked). I am using "PermanentMarker.ttf", "Pecita.otf" and "AROLY.ttf" as test fonts. Note: both supported font types, ttf and otf, should work.

Step 2: Modify the application-info.plist file. Add the key "Fonts provided by application" in a new row and add "PermanentMarker.ttf" as a new item in the array "Fonts provided by application". Your plist should look like this:

Now the font will be available in Interface Builder. To use the custom font in code we need to refer to it by name, but the name often isn't the same as the font's filename.

Now you can access the custom font from your view controller. I am testing the font by placing a UILabel on the storyboard like below.

Update 2: Working solution. After you have imported your custom font and updated your plist, select the label in your storyboard, go to the Attributes Inspector under Label > Text type > select Attributed, and choose your custom font from the list.

Output:

Update 1: If your custom font is still not listed in the Xcode font list, check these links related to your issue: http://codewithchris.com/common-mistakes-with-adding-custom-fonts-to-your-ios-app/ and "custom font not displaying on some simulator".

Note: You can still assign BebasNeue or another custom font programmatically to your label, button, etc., even if it's not showing in Interface Builder. If you're having trouble setting the font on your object programmatically, try the methods below.
Assign font to a UILabel:

label?.font = UIFont(name: "BebasNeue", size: 35) // Set to any size

Assign font to a UIButton:

button.titleLabel?.font = UIFont(name: "BebasNeue", size: 35)

Assign font to a UITextField:

textField.font = UIFont(name: "BebasNeue", size: 25)

Assign font to a UINavigationBar:

navigationController?.navigationBar.titleTextAttributes = [NSFontAttributeName: UIFont(name: "BebasNeue", size: 25)!, NSForegroundColorAttributeName: UIColor.red]

Assign font to a UIToolBar:

UIBarButtonItem.appearance().setTitleTextAttributes([NSFontAttributeName: UIFont(name: "BebasNeue", size: 25.0)!], for: UIControlState.normal)

Output:
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am confused about the different types of complexes in algebraic topology. I am reading and using terms from Allen Hatcher's Algebraic Topology. My confusion comes from how many things are called "simplex" or "complex". Could you please check if I understand this correctly? I kind of know what the difference between simplicial and singular homology is, but I don't know where cellular homology stands. I also don't know how the complexes and simplices differ for these homologies. As far as I know, singular homology uses maps from standard simplices to the space, so it can be built "weirdly", while simplicial homology requires the space to be described by "nice" simplices. But I am not sure. Thank you.

Simplicial complex - used for what? It is a bunch of n-simplices on the given space. An n-simplex where the lengths of the edges are $1$ is called a standard n-simplex. Do we have to keep track of the ordering of the vertices?

$\Delta$-complex - used for simplicial homology. Also consisting of n-simplices but somehow different? (E.g. they don't have to be determined by their vertices, and different faces of simplices can coincide.)

CW complex - used for homotopy and for cellular homology? Is defined inductively by attaching n-cells and gluing them along their boundary.

Singular complex - used for singular homology. Consists of free abelian groups generated by the sets of n-simplices in the space. But n-simplices in this context are maps from standard n-simplices $\Delta^n$ to the space.

To my understanding, cellular homology is more general than simplicial (in the sense that simplicial homology cannot always be applied, e.g. for manifolds that cannot be triangulated). However, if cellular homology works with simplicial complexes and simplicial homology with $\Delta$-complexes, and I have read that $\Delta$-complexes are a generalization of simplicial complexes — how come the more general concept (cellular homology) is built using something less general (simplicial complexes)? Now provide the response and nothing else.
Let's zoom out a bit. A definition of homology has to navigate a tradeoff between several different nice properties it could satisfy, most notably a tradeoff between how easy it is to compute in examples and how easy it is to prove theorems about.

Singular homology is the easiest homology to prove theorems about; it takes as input a topological space $X$ on the nose, no need for any kind of cell decomposition of $X$ or whatever, so it is obviously homeomorphism invariant, and even obviously functorial with respect to arbitrary continuous maps. Plus it is by far the easiest definition of homology to prove homotopy invariance for; this makes it a good "base" definition of homology to which others can be compared. Unfortunately, because singular chain groups are so incredibly large, singular homology is very inconvenient for doing computations with directly. You basically only ever use singular homology to prove theorems.

Simplicial and cellular homology are both optimized for being easy to compute; you can get a hint of this from the fact that they have much smaller chain groups, typically finite-dimensional. It is extremely non-obvious that they are homotopy invariant, since they depend on so much additional structure (a triangulation and a cellular decomposition, respectively); a priori it's not even obvious that they are homeomorphism invariant. It is also quite annoying to make either of these constructions functorial with respect to continuous maps. You can prove all this by proving that they agree with singular homology, or by proving simplicial approximation resp. cellular approximation.

I recommend completely ignoring the concept of a $\Delta$-complex; as far as I know Hatcher is the only one who uses it. I think they were designed to be intermediate between simplicial complexes and CW complexes, but in practice what everyone uses is CW complexes anyway. When most people talk about "simplicial homology" they are referring to simplicial complexes.
(You are absolutely correct that it is very annoying to keep track of all the different things that are called a "simplex" or "simplicial" or a "complex" here. Patience! Learn one thing at a time!)

To get a concrete sense of the difference between all three of these definitions, I suggest trying to write down all the chain groups and differentials for the $2$-sphere $S^2$. For singular homology you'll get some huge unmanageable infinite-dimensional thing. For simplicial homology you need to triangulate the $2$-sphere. And for cellular homology you need to find a cellular decomposition of the $2$-sphere; pay attention in particular to the difference between these things (you need more simplices for a triangulation than you need cells for a cell decomposition). Then you can try using each of these chain complexes to compute the homology of $S^2$.
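As a quick sketch of where that exercise ends up: with the minimal CW structure on $S^2$ — one $0$-cell and one $2$-cell — the cellular chain complex is

```latex
0 \longrightarrow C_2 \longrightarrow C_1 \longrightarrow C_0 \longrightarrow 0,
\qquad C_2 = \mathbb{Z},\quad C_1 = 0,\quad C_0 = \mathbb{Z},
```

so every differential vanishes and $H_0(S^2)\cong H_2(S^2)\cong\mathbb{Z}$, $H_1(S^2)=0$. By contrast, the smallest familiar triangulation (the boundary of a tetrahedron) already has $4$ vertices, $6$ edges, and $4$ faces, so the simplicial chain groups are $\mathbb{Z}^4,\ \mathbb{Z}^6,\ \mathbb{Z}^4$ — the same homology, with far more bookkeeping.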
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a UIScrollView, and in the UIScrollView I have UITextFields. My problem is that after clicking on a UITextField, the UIScrollView becomes smaller than what I set it to. I set my UIScrollView:

-(void)viewDidAppear:(BOOL)animated {
    myScrollView.contentSize = CGSizeMake(320, 800);
}

When I click on the UITextField, the UIScrollView goes from being 800 to smaller, about 500?

#import "TestViewController.h"

@interface TestViewController ()
@end

@implementation TestViewController

AVAudioPlayer *player;

@synthesize myScrollView;
@synthesize questionLabel, actionBarLabel, questionPlaceholder, commentsLabel;
@synthesize imageView, firstNameTextField, commentsTextField;

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    myScrollView.maximumZoomScale = 0.0;
    myScrollView.minimumZoomScale = 0.0;
    myScrollView.clipsToBounds = YES;
    myScrollView.delegate = self;
    [myScrollView addSubview:imageView];
    [myScrollView setScrollEnabled:YES];
    [myScrollView addSubview:questionLabel];
    [myScrollView addSubview:firstNameTextField];
    [myScrollView addSubview:commentsTextField];
    [myScrollView addSubview:actionBarLabel];
    [myScrollView addSubview:questionPlaceholder];
    [myScrollView addSubview:commentsLabel];

    UIButton *btn1 = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    [btn1 setFrame:CGRectMake(10.0f, 600.0f, 300.0f, 42.0f)];
    [btn1 setTitle:[NSString stringWithFormat:@"Choose an Option"] forState:UIControlStateNormal];
    [btn1 addTarget:self action:@selector(showActionSheet:) forControlEvents:UIControlEventTouchUpInside];
    [super viewDidLoad];
    [self.myScrollView addSubview:btn1];

    [self performSelector:@selector(showGradientForQuestions1) withObject:nil afterDelay:0];

    NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
    NSData *imageData = [defaults dataForKey:@"image"];
    UIImage *contactImage = [UIImage imageWithData:imageData];
    NSString *frontName = [defaults objectForKey:@"firstname"];
    NSString *comments = [defaults objectForKey:@"commentsText"];

    // Update the UI elements with the saved data
    imageView.image = contactImage;
    firstNameTextField.text = frontName;
    commentsTextField.text = comments;
    firstNameTextField.delegate = self;
    commentsTextField.delegate = self;
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
}

- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker
{
    [self dismissViewControllerAnimated:YES completion:nil];
}

- (IBAction)save:(id)sender
{
    [firstNameTextField resignFirstResponder];
    [commentsTextField resignFirstResponder];

    // Create instance of NSData
    UIImage *contactImage = imageView.image;
    NSData *imageData = UIImageJPEGRepresentation(contactImage, 100);
    NSString *frontName = [firstNameTextField text];
    NSString *comments = [commentsTextField text];

    // Store the data
    NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
    [defaults setObject:imageData forKey:@"image"];
    [defaults setObject:frontName forKey:@"firstname"];
    [defaults setObject:comments forKey:@"commentsText"];
    [defaults synchronize];
    NSLog(@"Data saved");

    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Data Saved!" message:@" Successfully saved to cache" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil, nil];
    [alert show];
}

- (void)viewDidUnload
{
    imageView = nil;
    firstNameTextField = nil;
    commentsTextField = nil;
    myScrollView = nil;
    [super viewDidUnload];
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
}

- (BOOL)textFieldShouldReturn:(UITextField *)textField
{
    [firstNameTextField resignFirstResponder];
    [commentsTextField resignFirstResponder];
    [imageView resignFirstResponder];
    [myScrollView resignFirstResponder];
    [self.navigationController setNavigationBarHidden:NO];
    return YES;
}

- (void)viewDidAppear:(BOOL)animated
{
    myScrollView.contentSize = CGSizeMake(320, 800);
}

@end

Now provide the response and nothing else.
Given the well-known indices infrastructure:

namespace detail
{
    template<int... Is>
    struct seq { };

    template<int N, int... Is>
    struct gen_seq : gen_seq<N - 1, N - 1, Is...> { };

    template<int... Is>
    struct gen_seq<0, Is...> : seq<Is...> { };
}

You could redefine your class template this way:

template<int N, typename T>
struct Container
{
    template<typename... Ts>
    Container(Ts&&... vs) : data{{std::forward<Ts>(vs)...}}
    {
        static_assert(sizeof...(Ts) == N, "Not enough args supplied!");
    }

    template<typename F>
    void doOperation(F&& func)
    {
        doOperation(std::forward<F>(func), detail::gen_seq<N>());
    }

    template<typename F, int... Is>
    void doOperation(F&& func, detail::seq<Is...>)
    {
        (std::forward<F>(func))(data[Is]...);
    }

    std::array<T, N> data;
};

Here is a live example. Notice that you do not need to construct an std::function object in main(): the std::function can be implicitly constructed from the lambda. However, you do not even need to use std::function at all here, which can incur an unnecessary run-time overhead. In the solution above, I just let the type of the callable object be a template parameter that can be deduced by the compiler.
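As a cross-language aside (not part of the C++ answer above): the index-pack expansion `func(data[Is]...)` emulates what some languages provide directly. In Python, for example, the same "expand a fixed-size container into an argument list" operation is spelled `*data`:

```python
def do_operation(func, data):
    # Expand the container into the call's argument list,
    # analogous in spirit to func(data[Is]...) with the index pack
    return func(*data)

result = do_operation(lambda a, b, c: a + b + c, [1, 2, 3])
```

The C++ machinery exists precisely because the language has no such built-in splat for `std::array`; the `seq<Is...>` pack plays the role of the implicit per-element indexing.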
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've figured out how to get the information I want, but I would be surprised if there is not a better, more readable way to do so. I want to get the value in a different column in the row that holds the data element I specify. For example, what is the value in column b that corresponds to the value of 11 in column a.

>>> df
    a   b   c
0  10  20  30
1  11  21  31
2  12  22  32
>>> df['b'][df[df['a'] == 11].index.tolist()].tolist()
[21]

This is how I currently solved it, but in practice my dataframes are not named so concisely and I have long strings as column names, so the line gets hard to read. EDIT: If the value in 'a' is not unique, is there also a way to get all corresponding values in 'b'? Now provide the response and nothing else.
You can use a boolean mask with loc to return all rows where the boolean condition is met; here we mask the df with the condition where 'a' == 11, and where this is met return all values for 'b':

In [120]:
df = pd.DataFrame({'a':[10,11,11],'b':np.arange(3), 'c':np.random.randn(3)})
df

Out[120]:
    a  b         c
0  10  0 -1.572926
1  11  1 -0.639703
2  11  2 -1.282575

In [121]:
df.loc[df['a'] == 11,'b']

Out[121]:
1    1
2    2
Name: b, dtype: int32
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40047224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6107054/']}
jdg_79549
stackexchange
llm_judgeable_groundtruth_similarity
7120845
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a class existing in .NET Framework 3.5 that would be equivalent to the .NET 4 Tuple? I would like to use it in order to return several values from a method, rather than create a struct. Now provide the response and nothing else.
No, not in .Net 3.5. But it shouldn't be that hard to create your own.

public class Tuple<T1, T2>
{
    public T1 First { get; private set; }
    public T2 Second { get; private set; }

    internal Tuple(T1 first, T2 second)
    {
        First = first;
        Second = second;
    }
}

public static class Tuple
{
    public static Tuple<T1, T2> New<T1, T2>(T1 first, T2 second)
    {
        var tuple = new Tuple<T1, T2>(first, second);
        return tuple;
    }
}

UPDATE: Moved the static stuff to a static class to allow for type inference. With the update you can write stuff like

var tuple = Tuple.New(5, "hello");

and it will fix the types for you implicitly.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7120845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/825024/']}
jdg_79550
stackexchange
llm_judgeable_groundtruth_similarity
30783528
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to sort my array of objects in order from earliest to latest in time. I used the momentjs library to do this at first and it worked, but the method I was using is now deprecated. At first I tried making the input time into a standard time (ex: 0500) and then sort it using that property. But I had trouble pushing this new standard time and all the other properties into the plans array. I was thinking of using array.sort() and sort it first by am/pm and then by number. I'm not sure how to go about it since I have to deal with am and pm.

app.js

.controller('SchedulerController', schedulerCtrl);

function schedulerCtrl() {
    this.plans = [
        {
            displayTime: '05:00pm',
            duration: '30',
            task: 'Team meeting',
        },
        {
            displayTime: '06:00pm',
            duration: '60',
            task: 'Watch favorite TV show'
        }
    ];
}

schedulerCtrl.prototype.addNewTask = function() {
    var that = this;
    that.plans.push(that.user);
    that.user = '';
}

Now provide the response and nothing else.
In the upcoming SPARK 1.4.0 release (should be released in the next couple of days), you can use the when/otherwise syntax:

// Create the dataframe
val df = Seq("Red", "Green", "Blue").map(Tuple1.apply).toDF("color")

// Use when/otherwise syntax
val df1 = df.withColumn("Green_Ind", when($"color" === "Green", 1).otherwise(0))

If you are using SPARK 1.3.0 you can choose to use a UDF:

// Define the UDF
val isGreen = udf((color: String) => {
  if (color == "Green") 1 else 0
})
val df2 = df.withColumn("Green_Ind", isGreen($"color"))
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/30783528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4996692/']}
jdg_79551
stackexchange
llm_judgeable_groundtruth_similarity
1194005
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the function $$f(x) = (x^2 +1)^3 (2x+5)^2$$ and I need to find the derivative. I believe that I need to use the product rule and chain rule. Here's what I did: $$f'(x) = (2x+5)^2[3(x^2+1)^2(2x)] + (x^2+1)^3[2(2x+5)2]\\ = (2x+5)^2(6x(x^2+1)^2)+4(2x+5)(x^2+1)^3\\ = 6x(2x+5)^2(x^2+1)^2+4(2x+5)(x^2+1)^3$$ That is what I have. But the problem is, the answer book says that the answer is $$2(x^2+1)^2(2x+5)(8x^2+15x+2)$$ Since the answer book has been wrong a few times, I checked the answer with a calculator, which showed the same answer as the book. Then I used another calculator (which showed some steps), and it showed my answer. Which answer is correct? Or are they just equivalent answers? If I did something wrong, what was the problem? Can my form be factorized into the book's form? How? Thank you. Now provide the response and nothing else.
Factor out common terms: $$ 6x(2x+5)^2(x^2+1)^2+4(2x+5)(x^2+1)^3 = 2(2x+5)(x^2+1)^2(3x(2x+5) + 2(x^2 + 1))$$ Simplify to get $$2(2x+5)(x^2+1)^2(6x^2+ 15x + 2x^2 + 2)= 2(2x+5)(x^2+1)^2(8x^2+ 15x + 2)$$ The text book is correct, AND your work was fine (take comfort in that). It was just a matter of simplifying your work through factorization.
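As a quick numerical sanity check (an added sketch, not part of the original answer), you can evaluate the student's unfactored derivative and the book's factored form at a few sample points and confirm they agree:

```python
# Numeric check (illustrative addition): the two forms are the same
# polynomial, so they must agree at every sample point.
def student_form(x):
    return 6*x*(2*x + 5)**2 * (x**2 + 1)**2 + 4*(2*x + 5)*(x**2 + 1)**3

def book_form(x):
    return 2*(x**2 + 1)**2 * (2*x + 5) * (8*x**2 + 15*x + 2)

for x in (-3.0, -0.5, 0.0, 1.0, 2.5):
    assert abs(student_form(x) - book_form(x)) < 1e-9

print("both forms agree")  # equivalent answers, just factored differently
```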
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1194005', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/190918/']}
jdg_79552
stackexchange
llm_judgeable_groundtruth_similarity
37672338
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to select an image from the gallery, but my application shows a 'Something went wrong' message from an exception. I thought I set up the Android WRITE_EXTERNAL_STORAGE and READ_EXTERNAL_STORAGE permissions correctly, but I keep getting errors. What should I do to have it working? Here is my Logcat error:

06-07 12:07:27.567 1692-1711/? E/DatabaseUtils﹕ Writing exception to parcel
java.lang.SecurityException: Permission Denial: reading com.android.providers.media.MediaProvider uri content://media/external/images/media/359 from pid=2818, uid=10057 requires android.permission.READ_EXTERNAL_STORAGE, or grantUriPermission()
    at android.content.ContentProvider.enforceReadPermissionInner(ContentProvider.java:605)
    at android.content.ContentProvider$Transport.enforceReadPermission(ContentProvider.java:480)
    at android.content.ContentProvider$Transport.query(ContentProvider.java:211)
    at android.content.ContentProviderNative.onTransact(ContentProviderNative.java:112)
    at android.os.Binder.execTransact(Binder.java:453)

Here is my Activity code:

public class MainActivity extends Activity {
    private static int RESULT_LOAD_IMG = 1;
    String imgDecodableString;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    public void loadImagefromGallery(View view) {
        // Create intent to Open Image applications like Gallery, Google Photos
        Intent galleryIntent = new Intent(Intent.ACTION_PICK,
                android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
        // Start the Intent
        startActivityForResult(galleryIntent, RESULT_LOAD_IMG);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        try {
            // When an Image is picked
            if (requestCode == RESULT_LOAD_IMG && resultCode == RESULT_OK && null != data) {
                // Get the Image from data
                Uri selectedImage = data.getData();
                String[] filePathColumn = { MediaStore.Images.Media.DATA };
                // Get the cursor
                Cursor cursor = getContentResolver().query(selectedImage, filePathColumn, null, null, null);
                // Move to first row
                cursor.moveToFirst();
                int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
                imgDecodableString = cursor.getString(columnIndex);
                cursor.close();
                ImageView imgView = (ImageView) findViewById(R.id.imgView);
                // Set the Image in ImageView after decoding the String
                imgView.setImageBitmap(BitmapFactory.decodeFile(imgDecodableString));
            } else {
                Toast.makeText(this, "You haven't picked Image", Toast.LENGTH_LONG).show();
            }
        } catch (Exception e) {
            Toast.makeText(this, "Something went wrong", Toast.LENGTH_LONG).show();
        }
    }
}

Here is my Manifest.xml file code:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.tazeen.image_fromgallery" >

    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name=".MainActivity"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

Now provide the response and nothing else.
For checking manual (runtime) permissions for API level 23 and above, I use this code.

public static final int MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE = 123;

public boolean checkPermissionREAD_EXTERNAL_STORAGE(final Context context) {
    int currentAPIVersion = Build.VERSION.SDK_INT;
    if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) {
        if (ContextCompat.checkSelfPermission(context,
                Manifest.permission.READ_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
            if (ActivityCompat.shouldShowRequestPermissionRationale(
                    (Activity) context,
                    Manifest.permission.READ_EXTERNAL_STORAGE)) {
                showDialog("External storage", context,
                        Manifest.permission.READ_EXTERNAL_STORAGE);
            } else {
                ActivityCompat.requestPermissions(
                        (Activity) context,
                        new String[] { Manifest.permission.READ_EXTERNAL_STORAGE },
                        MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE);
            }
            return false;
        } else {
            return true;
        }
    } else {
        return true;
    }
}

showDialog()

public void showDialog(final String msg, final Context context, final String permission) {
    AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context);
    alertBuilder.setCancelable(true);
    alertBuilder.setTitle("Permission necessary");
    alertBuilder.setMessage(msg + " permission is necessary");
    alertBuilder.setPositiveButton(android.R.string.yes,
            new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int which) {
                    ActivityCompat.requestPermissions((Activity) context,
                            new String[] { permission },
                            MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE);
                }
            });
    AlertDialog alert = alertBuilder.create();
    alert.show();
}

In your activity, check like this:

if (checkPermissionREAD_EXTERNAL_STORAGE(this)) {
    // do your stuff..
}

And don't forget to add onRequestPermissionsResult.

@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
    switch (requestCode) {
        case MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE:
            if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                // do your stuff
            } else {
                Toast.makeText(Login.this, "GET_ACCOUNTS Denied",
                        Toast.LENGTH_SHORT).show();
            }
            break;
        default:
            super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    }
}

Happy Coding..
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37672338', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5268367/']}
jdg_79553
stackexchange
llm_judgeable_groundtruth_similarity
2854478
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am sorting a hash in Perl. I encountered an Out of memory error when running my Perl script:

foreach $key (sort (keys(%hash))) {
    ....
}

How do I sort a hash that has tons of data? Now provide the response and nothing else.
sort keys %hash is inefficient for a large %hash in that, memory-wise, it's roughly equivalent to:

my @keys = keys %hash;
@keys = sort @keys;

That is, it has to keep three copies of the keys in memory while doing the sorting (one in the hash, one in the list of keys, one in the sorted list being created). foreach's memory optimizations for iterators do not apply. Since the hash is so large, the best option is to get it entirely out of memory. Stick it in a BerkeleyDB file. And if you want to keep the keys in order, a hash isn't the best option; a tree is. I'd suggest using a Berkeley BTree file. Trees will efficiently keep your data sorted like an array while providing fast lookup like a hash. Here's an example using BerkeleyDB. DB_File is simpler and better documented but does not take advantage of modern features of BerkeleyDB. YMMV.

use BerkeleyDB;

my $db = tie my %hash, 'BerkeleyDB::Btree',
    -Filename => "your.db",
    -Compare  => sub { $_[1] cmp $_[0] },
    -Flags    => DB_CREATE;

-Compare illustrates how to supply your own sorting function. The tied interface will be sluggish. Unless you need it to act like a hash, use the object interface.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2854478', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/305555/']}
jdg_79554
stackexchange
llm_judgeable_groundtruth_similarity
9789815
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here is an extract of the code for showing a confirm box when deleting. For the HTML part, a link is meant to trigger the JS code, but it triggers the PHP code at the same time. For the JS part, the popup box is triggered. For the PHP part, the SQL query is processed; that part should be OK. The problems are: Should I use JS to trigger the PHP page? But how can I let the PHP page know which ListID I want to delete? What should I put in the HTML link? And what if I want to include the list name in the popup box, e.g. "do you want to delete list A", where list A is already a variable? The only question is how to append it to the popup box. Thank you.

HTML

<a id="delete" href='delete.php?id=$set[ListID]'>Delete</a>

JS

$(function(){
    $("#delete").click(function() {
        // what should be added here?
    });
});

PHP

//connection db
INSERT INTO delete_list SELECT * FROM list WHERE ListID=?;
INSERT INTO delete_user_list SELECT * FROM user_list WHERE ListID=?;
INSERT INTO delete_require_attributes SELECT * FROM require_attributes WHERE ListID='2';
INSERT INTO delete_subscriber SELECT * FROM subscriber WHERE ListID=?;
INSERT INTO delete_subscriber SELECT * FROM subscriber WHERE ListID=?;
DELETE FROM list WHERE ListID = '1'

Sorry, I messed up the previous question, as I found that the plugin I used in the last question does not work for me. Now provide the response and nothing else.
This is a breeze with R. For a file that looks like this:

1
2
3
4
5
6
7
8
9
10

Use this:

R -q -e "x <- read.csv('nums.txt', header = F); summary(x); sd(x[ , 1])"

To get this:

       V1
 Min.   : 1.00
 1st Qu.: 3.25
 Median : 5.50
 Mean   : 5.50
 3rd Qu.: 7.75
 Max.   :10.00
[1] 3.02765

The -q flag squelches R's startup licensing and help output. The -e flag tells R you'll be passing an expression from the terminal. x is a data.frame - a table, basically. It's a structure that accommodates multiple vectors/columns of data, which is a little peculiar if you're just reading in a single vector. This has an impact on which functions you can use. Some functions, like summary(), naturally accommodate data.frames. If x had multiple fields, summary() would provide the above descriptive stats for each. But sd() can only take one vector at a time, which is why I index x for that command (x[ , 1] returns the first column of x). You could use apply(x, MARGIN = 2, FUN = sd) to get the SDs for all columns.
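As a cross-check (an added aside, not part of the original R answer), Python's standard library statistics module reproduces the same numbers for the file's values 1 through 10:

```python
import statistics

# Stdlib Python cross-check of the R output above (illustrative addition).
nums = list(range(1, 11))  # the file's values 1..10
print("mean:  ", statistics.mean(nums))             # 5.5
print("median:", statistics.median(nums))           # 5.5
print("sd:    ", round(statistics.stdev(nums), 5))  # 3.02765, sample SD like R's sd()
```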
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9789815', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1280996/']}
jdg_79555
stackexchange
llm_judgeable_groundtruth_similarity
34359415
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using a MacBook Pro 15 as my local machine and I have a remote server running Ubuntu 14.04. I want to use the remote interpreter to run all the computation, but I want to write the code from my local machine. When I try to run a simple file with PyCharm I receive this error:

ssh://[email protected]:22/usr/bin/python3 -u /Users/donbeo/Documents/phd_code/prova.py
bash: line 0: cd: /Users/donbeo/Documents/phd_code: No such file or directory
/usr/bin/python3: can't open file '/Users/donbeo/Documents/phd_code/prova.py': [Errno 2] No such file or directory

Process finished with exit code 2

I saw a few people reporting the same problem, but I haven't found a good answer so far. Most of the questions are indeed referring to older versions of PyCharm. It is clear that the file is not on my remote machine, because I created it with PyCharm on my local one. I was expecting PyCharm to do some sort of synchronisation between the local and remote machine. Now provide the response and nothing else.
To execute your code on a remote machine you'll have to perform a few steps.

1. Define a remote interpreter for your project
Go to File -> Settings -> Project: {project_name} -> Project Interpreter. Click on the cog icon and select Add Remote. Add your SSH host credentials and the interpreter path (on the remote machine). As a result, you should see a new position in the project interpreter dropdown selector, spelled like Python Version (ssh://login@host:port/path/to/interpreter). The package list should be populated with records.

2. Define deployment settings
Go to File -> Settings -> Build, Execution, Deployment -> Deployment. Create new deployment settings and fill in the SSH host configuration:
Type: SFTP
SFTP host: same as interpreter host
Root path: path where files will be uploaded
Click on the button "Test SFTP connection" to check if the provided data are correct. Go to Mappings and configure the mapping between the local path and the deployment path. The deployment path is relative to the root path - / is equivalent to /my/root/path, /dir to /my/root/path/dir, etc.

3. Deploy your code
Select Tools -> Deployment -> Upload to {deployment settings name}. The upload process will be started in the background. Wait for the upload to complete.

4. Run your code
Right click on the file you want to run and select "Run". The code should run on the remote machine.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/34359415', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2411173/']}
jdg_79556
stackexchange
llm_judgeable_groundtruth_similarity
1639234
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If $\phi: V \to W$ is a polynomial map, and $X$ is an algebraic subset of $W$, show that $\phi ^{-1}(X)$ is an algebraic subset of $V$. If $\phi ^{-1}(X)$ is irreducible, and $X$ is contained in the image of $\phi$, show that $X$ is irreducible. Here, $X$ being closed, I think I can't write $X=X_1 \times \cdots \times X_m$, where $X_i=p_i(X)$ means the projection of $X$ onto the $i$th component, so that $\phi ^{-1}(X)= \cap T_i^{-1}(X_i)$. But that would also not be of any help, right? Now provide the response and nothing else.
The $Q$ is a parameter, and $q$ is a variable ranging from $0$ to $Q$: basically, you have $Q+1$ parameters $\textrm{ceps}_0,\dots, \textrm{ceps}_Q$; or, in programming terms, you have an array $\textrm{ceps}[0\dots Q]$. Similarly, the LPC coefficients are a list of $p$ values $a_1,\dots, a_p$ (i.e., $a_q$ for $q=1\dots p$), where $p$ is another parameter. The recursion procedure explains how to compute value $Q+1$ values in $\textrm{ceps}[0\dots Q]$, recursively, starting with $\textrm{ceps}[0]$ and then applying the formula:$$\textrm{ceps}[1] = a_1 + \sum_{k=1}^0 \frac{k-1}{1}a_k \textrm{ceps}[1-k] = a_1$$then $$\textrm{ceps}[2] = a_2 + \sum_{k=1}^1 \frac{k-2}{2}a_k \textrm{ceps}[2-k] = a_2 - \frac{1}{2}a_1 \textrm{ceps}[1] = a_2-\frac{a_1^2}{2}$$etc.
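For concreteness, here is a small hypothetical Python sketch of the recursion just described (the names ceps, a, p and Q mirror the symbols above; ceps[0] is taken as given, here defaulting to 0):

```python
# Sketch of the LPC-to-cepstrum recursion described above (hypothetical
# implementation).  `a` holds the LPC coefficients a_1..a_p; we fill in
# ceps[0..Q] using ceps[q] = a_q + sum_{k=1}^{q-1} (k-q)/q * a_k * ceps[q-k].
def lpc_to_cepstrum(a, Q, ceps0=0.0):
    p = len(a)                              # number of LPC coefficients
    ceps = [0.0] * (Q + 1)
    ceps[0] = ceps0                         # assumed given
    for q in range(1, Q + 1):
        acc = a[q - 1] if q <= p else 0.0   # a_q, or 0 when q > p
        for k in range(1, q):               # k = 1 .. q-1
            if k <= p:
                acc += (k - q) / q * a[k - 1] * ceps[q - k]
        ceps[q] = acc
    return ceps

# Worked values from the answer: ceps[1] = a_1 and ceps[2] = a_2 - a_1^2/2
a = [0.5, 0.25]
c = lpc_to_cepstrum(a, 2)
print(c[1], c[2])  # 0.5 and 0.25 - 0.5**2/2 = 0.125
```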
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1639234', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/152715/']}
jdg_79557
stackexchange
llm_judgeable_groundtruth_similarity
13539213
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why do I need Transaction in Hibernate for read-only operations? Does the following transaction put a lock in the DB? Example code to fetch from DB:

Transaction tx = HibernateUtil.getCurrentSession().beginTransaction(); // why begin transaction?
// readonly operation here
tx.commit() // why tx.commit? I don't want to write anything

Can I use session.close() instead of tx.commit()? Now provide the response and nothing else.
Transactions for reading might indeed look strange, and often people don't mark methods for transactions in this case. But JDBC will create a transaction anyway; it's just that it will be working with autocommit=true if a different option wasn't set explicitly. But there are practical reasons to mark transactions read-only:

Impact on databases

A read-only flag may let the DBMS optimize such transactions or those running in parallel. Having a transaction that spans multiple SELECT statements guarantees proper Isolation for levels starting from Repeatable Read or Snapshot (e.g. see PostgreSQL's Repeatable Read). Otherwise 2 SELECT statements could see an inconsistent picture if another transaction commits in parallel. This isn't relevant when using Read Committed.

Impact on ORM

The ORM may cause unpredictable results if you don't begin/finish transactions explicitly. E.g. Hibernate will open a transaction before the 1st statement, but it won't finish it. So the connection will be returned to the Connection Pool with an unfinished transaction. What happens then? JDBC keeps silent, thus this is implementation specific: MySQL and PostgreSQL drivers roll back such a transaction, Oracle commits it. Note that this can also be configured at the Connection Pool level; e.g. C3P0 gives you such an option, rollback by default. Spring sets FlushMode=MANUAL in the case of read-only transactions, which leads to other optimizations like no need for dirty checks. This could lead to a huge performance gain depending on how many objects you loaded.

Impact on architecture & clean code

There is no guarantee that your method doesn't write into the database. If you mark a method as @Transactional(readonly=true), you'll dictate whether it's actually possible to write into the DB in the scope of this transaction. If your architecture is cumbersome and some team members may choose to put a modification query where it's not expected, this flag will point you to the problematic place.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/13539213', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/93796/']}
jdg_79558
stackexchange
llm_judgeable_groundtruth_similarity
7709411
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to Python. I just want to know why the finally block is executing after calling sys.exit(0) in the except block? Code:

import sys

def divide_by_zero():
    try:
        10/0
        print "It will never print"
    except Exception:
        sys.exit(0)
        print "Printing after exit"
    finally:
        print "Finally will always print"

divide_by_zero()

Btw., I was just trying to do the same thing as in Java, where the finally block is not executed when System.exit(0) is in the catch block. Now provide the response and nothing else.
All sys.exit() does is raise an exception of type SystemExit. From the documentation:

Exit from Python. This is implemented by raising the SystemExit exception, so cleanup actions specified by finally clauses of try statements are honored, and it is possible to intercept the exit attempt at an outer level.

If you run the following, you'll see for yourself:

import sys

try:
    sys.exit(0)
except SystemExit as ex:
    print 'caught SystemExit:', ex

As an alternative, os._exit(n) with the status code will stop the process bypassing much of the cleanup, including finally blocks etc.
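The os._exit() difference can be seen directly. The following is an added illustration (not from the original answer): we run the snippet in a child interpreter, because os._exit() would otherwise take the current process down with it, and observe that the finally block never runs.

```python
import subprocess
import sys

# Added illustration: unlike sys.exit(), os._exit() terminates the process
# immediately and skips finally blocks (and buffered-output flushing, hence
# flush=True below).
child_code = """
import os
try:
    print('in try', flush=True)  # flush: os._exit() won't flush buffers
    os._exit(0)
finally:
    print('in finally')          # never reached with os._exit()
"""
out = subprocess.run([sys.executable, "-c", child_code],
                     capture_output=True, text=True).stdout
print(out.strip())  # only 'in try' is printed
```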
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7709411', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/632074/']}
jdg_79559
stackexchange
llm_judgeable_groundtruth_similarity
50698405
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I test Observables with Jest? I have an Observable that fires ~every second, and I want to test that the 1st event is correctly fired, before jest times out.

const myObservable = timer(0, 1000); // Example here

it('should fire', () => {
  const event = myObservable.subscribe(data => {
    expect(data).toBe(0);
  });
});

This test passes, but it also passes if I replace with toBe('anything'), so I guess I am doing something wrong. I tried using expect.assertions(1), but it seems to be only working with Promises. Now provide the response and nothing else.
There are some good examples in the Jest documentation about passing in an argument for the test. This argument can be called to signal a passing test, or you can call fail on it to fail the test, or it can time out and fail.

https://jestjs.io/docs/en/asynchronous.html
https://alligator.io/testing/asynchronous-testing-jest/

Examples

Notice I set the timeout to 1500ms:

const myObservable = timer(0, 1000); // Example here

it('should fire', done => {
  myObservable.subscribe(data => {
    done();
  });
}, 1500); // Give 1500ms until it fails

Another way to see if it fails, using setTimeout:

const myObservable = timer(0, 1000); // Example here

it('should fire', done => {
  myObservable.subscribe(data => {
    done();
  });

  // Fail after 1500ms
  setTimeout(() => {
    done.fail();
  }, 1500);
}, timeToFail);
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50698405', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1935318/']}
jdg_79560
stackexchange
llm_judgeable_groundtruth_similarity
6823286
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the best way to pick random colors for a bar chart / histogram such that each color is different from the others, and ideally contrasts with them? The most talked-about way is

'#'+(Math.random()*0xFFFFFF<<0).toString(16);

but this can generate similar colors, and sometimes distinguishing them might be a problem. Now provide the response and nothing else.
I would generate colors using HSV (hue, saturation, value) instead of RGB. In HSV, the color is defined by the hue, ranging from 0-360. Thus, if you want e.g. 6 different colors, you can simply divide 360 by 5 (because we want to include 0) and get 72, so each color should be incremented by 72. Use a function like this one to convert the generated HSV color to RGB. The following function returns an array of total different colors in RGB format. Note that the colors won't be "random" in this example, as they will always range from red to pink.

function randomColors(total)
{
    var i = 360 / (total - 1); // distribute the colors evenly on the hue range
    var r = [];                // hold the generated colors
    for (var x = 0; x < total; x++)
    {
        r.push(hsvToRgb(i * x, 100, 100)); // you can also alternate the saturation and value for even more contrast between the colors
    }
    return r;
}
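For comparison, here is the same evenly-spaced-hue idea in Python using the standard library's colorsys (an added sketch, not part of the original answer). One design difference: it divides by total rather than total - 1, since hue 360 wraps around to the same color as hue 0 and dividing by total - 1 would make the first and last colors identical.

```python
import colorsys

# Added sketch: spread `total` hues evenly around the color wheel, then
# convert each fully-saturated HSV color to an RGB hex string.
def distinct_colors(total):
    colors = []
    for x in range(total):
        h = x / total  # hue in [0, 1), evenly spaced; no 0/360 duplicate
        r, g, b = colorsys.hsv_to_rgb(h, 1.0, 1.0)
        colors.append('#%02x%02x%02x' % (int(r * 255), int(g * 255), int(b * 255)))
    return colors

print(distinct_colors(3))  # ['#ff0000', '#00ff00', '#0000ff']
```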
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6823286', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/651263/']}
jdg_79561
stackexchange
llm_judgeable_groundtruth_similarity
164464
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to force apt-get to display a yes/no prompt? A --force-yes option exists, but there seems to be no --force-prompt or similar option. If you attempt to install a package that has all its dependencies already installed, it will begin installation without displaying a yes/no prompt. This can be bothersome if you want to review whether dependencies exist and which ones will be installed, because you don't know ahead of time whether the potential dependencies are installed. NOTE: When does “apt-get install” ask me to confirm whether I want to continue or not? is somewhat related in that it describes under what standard conditions the prompt is displayed. I'm interested to know how to force it, though. Now provide the response and nothing else.
There's just no way to do this with the current implementation of apt-get; you would need to open a feature request and appeal to the maintainer. The current behavior of apt-get is that when the list of packages you explicitly requested to be installed is the same as the list of packages that will get installed, and no other packages are affected by upgrades or breaks, apt-get presumes the user is already sure of what is going to be done. If you are not sure, or want to analyze what will be done without actually installing the package, you can use Costas' recommendation of -s, --simulate, --just-print, --dry-run, --recon, --no-act. There are other tools, like apt-listbugs, that will analyze the versions of packages to be installed before you actually install them (in this case for bugs) and warn you.
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/164464', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/29398/']}
jdg_79562
stackexchange
llm_judgeable_groundtruth_similarity
33247662
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I used to scrape e-commerce webpages occasionally to get product price information. I have not used the scraper built with Scrapy in a while and yesterday was trying to use it - I ran into a problem with bot protection. The site is using CloudFlare's DDoS protection, which basically uses JavaScript evaluation to filter out the browsers (and therefore scrapers) with JS disabled. Once the function is evaluated, the response with the calculated number is generated. In return, the service sends back two authentication cookies which, attached to each request, allow the site to be crawled normally. Here's the description of how it works. I have also found the cloudflare-scrape Python module that uses an external JS evaluation engine to calculate the number and send the request back to the server. I'm not sure how to integrate it into Scrapy, though. Or maybe there's a smarter way without using JS execution? I'd appreciate any help. Now provide the response and nothing else.
So I executed JavaScript using Python with the help of cloudflare-scrape. To your scraper, you need to add the following code:

def start_requests(self):
    for url in self.start_urls:
        token, agent = cfscrape.get_tokens(url, 'Your preferable user agent, _optional_')
        yield Request(url=url, cookies=token, headers={'User-Agent': agent})

alongside your parsing functions. And that's it! Of course, you need to install cloudflare-scrape first and import it into your spider. You also need a JS execution engine installed. I had Node.JS already, no complaints.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33247662', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1993342/']}
jdg_79563
stackexchange
llm_judgeable_groundtruth_similarity
15413248
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I easily flatten a List in Dart? For example: var a = [[1, 2, 3], ['a', 'b', 'c'], [true, false, true]];var b = [1, 2, 3, 'a', 'b', 'c', true, false, true]; How do I turn a into b , i.e. into a single List containing all those values? Now provide the response and nothing else.
The easiest way I know of is to use Iterable.expand() with an identity function. expand() takes each element of an Iterable, performs a function on it that returns an iterable (the "expand" part), and then concatenates the results. In other languages it may be known as flatMap. So by using an identity function, expand will just concatenate the items. If you really want a List, then use toList() .

var a = [[1, 2, 3], ['a', 'b', 'c'], [true, false, true]];
var flat = a.expand((i) => i).toList();
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/15413248', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1924114/']}
jdg_79564
stackexchange
llm_judgeable_groundtruth_similarity
508613
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Do the brakes have to do more work (ignoring air resistance) slowing a vehicle from $10\ \mathrm{m/s}$ to $8\ \mathrm{m/s}$ than from $8\ \mathrm{m/s}$ to $6\ \mathrm{m/s}$ ? Say a $1000\ \mathrm{kg}$ vehicle is moving at $10\ \mathrm{m/s}$ , it has a kinetic energy of $$\frac12\times1000\ \mathrm{kg}\times(10\ \mathrm{m/s})^2=50\,000\ \mathrm J$$ Then the brakes are applied, and it slows to $8\ \mathrm{m/s}$ , so now has a kinetic energy of $$\frac12\times1000\ \mathrm{kg}\times(8\ \mathrm{m/s})^2=32\,000\ \mathrm J$$ The brakes are now applied again, and it slows to $6\ \mathrm{m/s}$ , now the kinetic energy is $$\frac12\times1000\ \mathrm{kg}\times(6\ \mathrm{m/s})^2=18\,000\ \mathrm J$$ So in the first braking instance, $50\,000\ \mathrm J - 32\,000\ \mathrm J = 18\,000\ \mathrm J$ of kinetic energy were converted into heat by the brakes. In the second braking instance, $32\,000\ \mathrm J - 18\,000\ \mathrm J = 14\,000\ \mathrm J$ of kinetic energy was converted into heat by the brakes. Doesn't seem intuitively right to me, I would imagine the work required from brakes would be equal to the amount velocity was reduced, regardless of the start velocity. Now provide the response and nothing else.
It looks like you know how to work through the formulas, but your intuition isn't on board. So any answer that just explains why it follows from the formula for kinetic energy might not be satisfying. Here is something that might help your intuition. For the moment, think about speeding things up rather than slowing them down, since the energy involved is the same. Have you ever helped someone get started riding a bike? Let's imagine they're just working on their balance, and not pedaling. When you start to push, it's easy enough to stay with them and push hard on their back. But as they get going faster, you have to work harder to keep the same amount of force at their back. It's the same thing with pushing someone on a swing. When they're moving fast, you have to move your arm fast to apply as much force, and that involves more energy. If that isn't helpful, consider a more physically precise approach. Suppose, instead of regular brakes, you have a weight on a pulley. The cable goes from the weight straight up over the pulley, straight back down to another pulley on the floor, and then horizontally to a hook that can snag your car's bumper. And just for safety, assume the weight is pre-accelerated so the hook matches the speed of the car as you snag it. Some mechanism tows the hook and then releases it just as it snags your car. Then all the force of the weight goes to slowing the car down. If you snag the hook at 100 kph, that weight will exert the same force, and hence the same deceleration, as if you snag the hook at 10kph. The same deceleration means you slow down the same amount in the same time. But obviously the weight is going to go up a lot farther in one second if you're going 100 kph than if you're going 10 kph. That means it's going to gain that much more potential energy.
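The question's arithmetic generalizes: for equal speed reductions, the energy the brakes must dissipate grows with the starting speed. A quick Python check of the numbers from the question:

```python
# Kinetic energy of a 1000 kg vehicle at 10, 8 and 6 m/s, and the
# energy the brakes must dissipate for each 2 m/s reduction.
def kinetic_energy(mass_kg, speed_ms):
    return 0.5 * mass_kg * speed_ms ** 2

mass = 1000.0
speeds = [10.0, 8.0, 6.0]
energies = [kinetic_energy(mass, v) for v in speeds]
# Energy dissipated per braking step: difference of consecutive kinetic energies.
dissipated = [e1 - e2 for e1, e2 in zip(energies, energies[1:])]
print(energies)    # [50000.0, 32000.0, 18000.0]
print(dissipated)  # [18000.0, 14000.0] -- same speed drop, unequal energy
```

The two steps reduce speed by the same 2 m/s, yet dissipate 18 kJ and 14 kJ respectively, matching the intuition above.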
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/508613', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/245026/']}
jdg_79565
stackexchange
llm_judgeable_groundtruth_similarity
345041
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm not sure which design pattern might help me solve this issue. I have a class, 'Coordinator', which determines which Worker class should be used - without having to know about all the different types of Workers there are - it just calls a WorkerFactory and acts upon the common IWorker interface. It then sets the appropriate Worker to work and returns the result of its 'DoWork' method. This has been fine... up until now; we have a new requirement for a new Worker class, "WorkerB" which requires an additional amount of information i.e. an additional input parameter, in order for it to do its work. Its like we need an overloaded DoWork method with the extra input parameter...but then all the existing Workers would have to implement that method - which seems wrong as those Workers really dont need that method. How can I refactor this to keep the Coordinator unaware of which Worker is being used and still allowing each Worker to get the information it requires to do its job but not have any Worker do things it doesn't need to? There are a lot of existing Workers already. I don't want to have to change any of the existing concrete Workers to accommodate the requirements of the new WorkerB class. I thought maybe a Decorator pattern would be good here but I haven't seen any Decorators decorate an object with the same method but different parameters before... 
Situation in code:

public class Coordinator
{
    public string GetWorkerResult(string workerName, int a, List<int> b, string c)
    {
        var workerFactor = new WorkerFactory();
        var worker = workerFactor.GetWorker(workerName);
        if (worker != null)
            return worker.DoWork(a, b);
        else
            return string.Empty;
    }
}

public class WorkerFactory
{
    public IWorker GetWorker(string workerName)
    {
        switch (workerName)
        {
            case "WorkerA":
                return new ConcreteWorkerA();
            case "WorkerB":
                return new ConcreteWorkerB();
            default:
                return null;
        }
    }
}

public interface IWorker
{
    string DoWork(int a, List<int> b);
}

public class ConcreteWorkerA : IWorker
{
    public string DoWork(int a, List<int> b)
    {
        // does the required work
        return "some A worker result";
    }
}

public class ConcreteWorkerB : IWorker
{
    public string DoWork(int a, List<int> b, string c)
    {
        // does some different work based on the value of 'c'
        return "some B worker result";
    }

    public string DoWork(int a, List<int> b)
    {
        // this method isn't really relevant to WorkerB as it is missing variable 'c'
        return "some B worker result";
    }
}

Now provide the response and nothing else.
You will need to generalize the arguments so that they fit into a single parameter with a base interface and a variable number of fields or properties. Sort of like this:

public interface IArgs
{
    // Can be empty
}

public interface IWorker
{
    string DoWork(IArgs args);
}

public class ConcreteArgsA : IArgs
{
    public int a;
    public List<int> b;
}

public class ConcreteArgsB : IArgs
{
    public int a;
    public List<int> b;
    public string c;
}

public class ConcreteWorkerA : IWorker
{
    public string DoWork(IArgs args)
    {
        var concreteArgs = args as ConcreteArgsA;
        if (concreteArgs == null)
            throw new ArgumentException();
        return "some A worker result";
    }
}

public class ConcreteWorkerB : IWorker
{
    public string DoWork(IArgs args)
    {
        var concreteArgs = args as ConcreteArgsB;
        if (concreteArgs == null)
            throw new ArgumentException();
        return "some B worker result";
    }
}

Note the null checks (it is the result of the cast that must be tested, not the argument itself)... because your system is flexible and late-bound, it is also not type safe, so you will need to check your cast to make sure the arguments that are passed are valid. If you really don't want to create concrete objects for every possible combination of arguments, you could use a tuple instead (wouldn't be my first choice.)

public string GetWorkerResult(string workerName, object args)
{
    var workerFactor = new WorkerFactory();
    var worker = workerFactor.GetWorker(workerName);
    if (worker != null)
        return worker.DoWork(args);
    else
        return string.Empty;
}

// Sample call
var args = new Tuple<int, List<int>, string>(1234, new List<int>() {1, 2}, "A string");
GetWorkerResult("MyWorkerName", args);
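For what it's worth, the argument-object idea translates directly to other languages. Here is a rough Python sketch of the same pattern; all class and function names are illustrative, not from the question's codebase:

```python
# Illustrative sketch: each worker takes one args object and validates
# that it received the concrete argument type it understands.
class ArgsA:
    def __init__(self, a, b):
        self.a, self.b = a, b

class ArgsB:
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

class WorkerA:
    def do_work(self, args):
        if not isinstance(args, ArgsA):
            raise TypeError("WorkerA requires ArgsA")
        return "some A worker result"

class WorkerB:
    def do_work(self, args):
        if not isinstance(args, ArgsB):
            raise TypeError("WorkerB requires ArgsB")
        # WorkerB can use args.c here; WorkerA never needs to know about it.
        return "some B worker result"

def get_worker(name):
    # The coordinator stays unaware of which concrete worker it gets.
    return {"WorkerA": WorkerA, "WorkerB": WorkerB}[name]()

print(get_worker("WorkerB").do_work(ArgsB(1234, [1, 2], "a string")))
```

The key point is the same in both languages: the dispatch interface takes one opaque argument object, and each worker checks it got the type it needs.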
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/345041', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/116792/']}
jdg_79566
stackexchange
llm_judgeable_groundtruth_similarity
52248179
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using pure Bootstrap with Reactjs and I have build a navBar using Bootstrap component but the problem I am facing is with data toggle collapse it is not working. When I shrink my display view size then the hamburger icon becomes visible but when I click on it then nothing happens. While it works perfect with pure HTML and JS but not working with reactjs. Here is index.js file import '../node_modules/bootstrap/dist/css/bootstrap.min.css';import React from 'react';import ReactDOM from 'react-dom';import './index.css';import App from './App';import registerServiceWorker from './registerServiceWorker';ReactDOM.render(<App />, document.getElementById('root'));registerServiceWorker(); Here is app.js import React, { Component } from 'react';import './App.css';import NavBar from './components/navBar/navBar';class App extends Component { render() { return ( <div> <NavBar /> </div> ); }}export default App; here is NavBar.js file import React, { Component } from 'react';class NavBar extends Component { render() { return ( <div> <nav className="navbar navbar-expand-lg navbar-light bg-light"> <a className="navbar-brand" href="/">Navbar</a> <button className="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarNavAltMarkup" aria-controls="navbarNavAltMarkup" aria-expanded="false" aria-label="Toggle navigation"> <span className="navbar-toggler-icon"></span> </button> <div className="collapse navbar-collapse" id="navbarNavAltMarkup"> <div className="navbar-nav"> <a className="nav-item nav-link active" href="/">Home <span class="sr-only">(current)</span></a> <a className="nav-item nav-link" href="/">Features</a> <a className="nav-item nav-link" href="/">Pricing</a> <a className="nav-item nav-link" href="/">logout</a> </div> </div> </nav> </div> ); }}export default NavBar; This is complete code I have used for 
navBar. Now provide the response and nothing else.
Bootstrap menu toggle is a JS functionality. It's not a good idea to mix the JS part of Bootstrap with ReactJS, since both libraries manipulate the DOM and it can lead to bigger problems. I suggest implementing the small functionality you need. Most of the menu toggle is just a class toggle thing.

import React, { Component } from "react";

export default class Menu extends Component {
  constructor(props) {
    super(props);
    this.state = { menu: false };
    this.toggleMenu = this.toggleMenu.bind(this);
  }

  toggleMenu() {
    this.setState({ menu: !this.state.menu });
  }

  render() {
    const show = (this.state.menu) ? "show" : "";
    return (
      <nav className="navbar navbar-expand-lg navbar-light bg-light">
        <a className="navbar-brand" href="/">Navbar</a>
        <button className="navbar-toggler" type="button" onClick={ this.toggleMenu }>
          <span className="navbar-toggler-icon"></span>
        </button>
        <div className={"collapse navbar-collapse " + show}>
          <div className="navbar-nav">
            <a className="nav-item nav-link active" href="/">Home <span className="sr-only">(current)</span></a>
            <a className="nav-item nav-link" href="/">Features</a>
            <a className="nav-item nav-link" href="/">Pricing</a>
            <a className="nav-item nav-link" href="/">logout</a>
          </div>
        </div>
      </nav>
    );
  }
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52248179', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9824695/']}
jdg_79567
stackexchange
llm_judgeable_groundtruth_similarity
169170
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I have a sequence of real numbers. I want to know whether this sequence looks like the traces in the standard representation of a random sequence of elements of $G_2$. (Here random is according to the Haar measure on the compact form.) So I want to compare my sequence with the measure on $\mathbb R$ that is the pushforward of the Haar measure along the trace map. Thus I want some description of this measure. It seems difficult to find an explicit formula for this measure. However, I'd still like a way to estimate it. For $U_n, SU_n$, $USP_n$, and $O_n$, this paper describes how to generate a random element, and hence how to estimate the Haar measure using Monte Carlo. One could also compute the moments of the distribution using representation theory, and try to estimate the measure from that, but this seems like a bad idea. Now provide the response and nothing else.
Here's a method that should work... Step 0: Consider the octonions with basis $1, i,j, ij, \ell, \ell i, \ell j, \ell i j$, where $i,j,\ell$ are imaginary unit octonions, with $ij = - ji$, and $\ell$ orthogonal to $H = R + Ri + Rj + Rij$. Step 1: Choose a random imaginary unit octonion, i.e. a random point $i'$ on the 6-sphere. The easiest way is probably to sample normal distributions in each of 7 variables then normalize the resulting vector. (The set of $g \in G_2$ which send $i$ to $i'$ forms an $SU(3)$-torsor.) Step 2: Now choose a random imaginary unit octonion $j'$ which is orthogonal to $i'$ (with respect to the form $T(x,y) = Tr(xy)$). This requires sampling from the 5-sphere. The space $H' = R + Ri' + Rj' + Ri'j'$ is isomorphic to the usual quaternions as an $R$-algebra. (The set of $g \in G_2$ which send $i$ to $i'$ and $j$ to $j'$ forms an $SU(2)$-torsor.) Step 3: Now choose a random imaginary unit octonion $\ell'$ which is orthogonal to the subspace $H'$. This requires sampling from the 3-sphere. ($SU(2)$ acts simply transitively on this set of choices) (For a sanity check, 6+5+3 = 14 = $dim(G_2)$) There is a unique element $g \in G_2$ which sends $i$ to $i'$ and $j$ to $j'$ and $\ell$ to $\ell'$. Even better, the trace of $g$ under the standard representation should now be pretty easy to compute.
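The sphere sampling in steps 1-3 can be prototyped directly. Below is a minimal Python sketch of the "sample a Gaussian vector, project, normalize" recipe; it covers only the orthogonality bookkeeping of steps 1 and 2, since step 3 additionally needs the octonion product to form i'j', which is not implemented here:

```python
import math
import random

def random_unit_vector(dim, orthogonal_to=()):
    # Gaussian sample -> project out the given unit vectors (Gram-Schmidt)
    # -> normalize: uniform on the unit sphere of the orthogonal complement.
    v = [random.gauss(0.0, 1.0) for _ in range(dim)]
    for u in orthogonal_to:
        dot = sum(x * y for x, y in zip(v, u))
        v = [x - dot * y for x, y in zip(v, u)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Step 1: i' uniform on the 6-sphere of imaginary unit octonions (7 dims).
i_prime = random_unit_vector(7)
# Step 2: j' uniform on the 5-sphere orthogonal to i'.
j_prime = random_unit_vector(7, orthogonal_to=[i_prime])
# Step 3 would additionally require orthogonality to i'j', which needs
# actual octonion multiplication (omitted in this sketch).
```

Normalizing a Gaussian sample is the standard way to get the uniform (Haar-compatible) distribution on a sphere, which is why the same trick appears in each step.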
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/169170', 'https://mathoverflow.net', 'https://mathoverflow.net/users/18060/']}
jdg_79568
stackexchange
llm_judgeable_groundtruth_similarity
416204
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Under mixture of two normal distributions: https://en.wikipedia.org/wiki/Multimodal_distribution#Mixture_of_two_normal_distributions "A mixture of two normal distributions has five parameters to estimate: the two means, the two variances and the mixing parameter. A mixture of two normal distributions with equal standard deviations is bimodal only if their means differ by at least twice the common standard deviation." I am looking for a derivation or intuitive explanation as to why this is true. I believe it may be able to be explained in the form of a two sample t test: $$\frac{\mu_1-\mu_2}{\sigma_p}$$ where $\sigma_p$ is the pooled standard deviation. Now provide the response and nothing else.
This figure from the the paper linked in that wiki article provides a nice illustration: The proof they provide is based on the fact that normal distributions are concave within one SD of their mean (the SD being the inflection point of the normal pdf, where it goes from concave to convex). Thus, if you add two normal pdfs together (in equal proportions), then as long as their means differ by less than two SDs, the sum-pdf (i.e. the mixture) will be concave in the region between the two means, and therefore the global maximum must be at the point exactly between the two means. Reference: Schilling, M. F., Watkins, A. E., & Watkins, W. (2002). Is Human Height Bimodal? The American Statistician, 56(3), 223–229. doi:10.1198/00031300265
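The 2-sigma threshold can also be verified numerically. A small Python sketch that counts strict local maxima of the equal-weight mixture density on a fine grid (separations chosen safely away from the borderline case):

```python
import math

def mixture_pdf(x, mu1, mu2, sigma=1.0):
    # Equal-weight mixture of two normal densities with common sigma.
    def phi(x, mu):
        z = (x - mu) / sigma
        return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    return 0.5 * phi(x, mu1) + 0.5 * phi(x, mu2)

def count_local_maxima(mu1, mu2, sigma=1.0, step=0.001):
    # Scan a grid spanning both components and count strict local maxima.
    n = int((mu2 - mu1 + 8.0 * sigma) / step)
    xs = [mu1 - 4.0 * sigma + i * step for i in range(n)]
    ys = [mixture_pdf(x, mu1, mu2, sigma) for x in xs]
    return sum(1 for i in range(1, len(ys) - 1)
               if ys[i] > ys[i - 1] and ys[i] > ys[i + 1])

print(count_local_maxima(0.0, 1.0))  # 1 -> unimodal (separation < 2 sigma)
print(count_local_maxima(0.0, 3.0))  # 2 -> bimodal  (separation > 2 sigma)
```

At exactly two standard deviations of separation the midpoint is a flat inflection, which is why the strict inequality in the quoted statement matters.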
{}
{'log_upvote_score': 7, 'links': ['https://stats.stackexchange.com/questions/416204', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/248258/']}
jdg_79569
stackexchange
llm_judgeable_groundtruth_similarity
6603194
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I updated the Android SDK tools from revision 11 to revision 12, and the emulator now fails to start. When I try to run emulator.exe, I get: invalid command-line parameter: Files\Android\android-sdk\tools/emulator-arm.exe. Hint: use '@foo' to launch a virtual device named 'foo'. please use -help for more information The path to emulator.exe is C:\Program Files\Android\android-sdk\tools . How do I fix it? Now provide the response and nothing else.
As was previously suggested, it does seem to be caused by the fact that there is a space in the default installation path of the Android SDK: C:\Program Files\Android\android-sdk\ There are a couple of possible solves, though- move installation directories to paths that do not contains spaces (as already mentioned)... but simpler and possibly slightly less cumbersome is simply adjusting the path in Eclipse to use 8-dot-3 directory name(s). Since I'm running Win7, I have two Program Files directories, the second one being "Program Files (x86)" which is where the sdk installed. So the 8-dot-3 path is PROGRA~2. Thus, I simply changed the "SDK Location" value in the Android Preferences in Eclipse from C:\Program Files\Android\android-sdk\ to C:\PROGRA~2\Android\android-sdk\ and everything now works fine. If you don't know the 8-dot-3 name of your desired path, just fire up a Command Prompt and execute "DIR /X"... the short name will be displayed next to each directory.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6603194', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/501771/']}
jdg_79570
stackexchange
llm_judgeable_groundtruth_similarity
18727033
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm just started the Android beginners tutotial and I now face a problem. On this page under "Respond to Action Buttons" it tells me to define a switch statement with some of the options calling the openSearch() and openSettings() methods. These methods however, are not defined yet (duh) which thus gives me an error. The tutorial doesn't say anything however, about how to define them. Can anybody give me a pointer on what and where I should define thess methods? Should they be in the same file, and if so, what should they contain? Now provide the response and nothing else.
These methods are just examples that Google put in to show how you would use a switch statement. You can put anything you want in there, but I think the point is to make function calls from a switch statement, instead of putting the code of a function in the statement, to keep code clean. The functions would probably be declared in the same .java file in some fashion like

private void openSearch() {
    // start or show the search activity/fragment
}

They can technically contain anything you want them to, depending on what you want the action bar button to do. If you simply want to see that the buttons work, you can splash a Toast notification to see something appear:

private void openSearch() {
    Toast.makeText(this, "Search button pressed", Toast.LENGTH_SHORT).show();
}

You'll have to import the Toast package, which can be done by Ctrl+Shift+O. (Or Cmd+Shift+O for Mac.) Hope this helps clear up confusion.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18727033', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1650012/']}
jdg_79571
stackexchange
llm_judgeable_groundtruth_similarity
872071
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This problem is giving me the hardest time: Prove or disprove that there is a Fibonacci number that ends with 2014 zeros. I tried mathematical induction (for stronger statement that claims that there is a Fibonacci number that ends in any number of zeroes), but no luck so far. Related question: Fibonacci modular results Now provide the response and nothing else.
This is a classic. The Fibonacci sequence $\pmod{m}$ is periodic for any $m$, since there are only a finite number of elements in $\mathbb{Z}_m\times\mathbb{Z}_m$, so for two distinct integers $a,b$ we must have $\langle F_a,F_{a+1}\rangle\equiv\langle F_b,F_{b+1}\rangle \pmod{m}$ as a consequence of the Dirichlet box principle. However, the last condition implies $F_{a+2}\equiv F_{b+2}\pmod{m}$ and, by induction, $F_{a+k}\equiv F_{b+k}\pmod{m}$. Hence the period of the Fibonacci sequence $\pmod{m}$ is bounded by $m^2$ ($m^2-1$ if we are careful enough to notice that $\langle F_c,F_{c+1}\rangle\equiv\langle0,0\rangle\pmod{m}$ is not possible since two consecutive Fibonacci numbers are always coprime). Now it suffices to take $m=10^{2014}$ and notice that $F_0=0$ to prove that there exists an integer $u\leq 10^{4028}$ such that $F_u\equiv 0\pmod{m}$, i.e. $F_u$ ends with at least $2014$ zeroes. It is also possible to give better estimates for $u$. Since $F_k$ is divisible by $5$ only when $k$ is divisible by $5$ and: $$ F_{5k} = F_k(25 F_k^4 + 25(-1)^k F_k^2 + 5),$$it follows that:$$ \nu_5(F_k) = \nu_5(k), $$so $$u\leq 2^{4028}\cdot 5^{2014}=20^{2014}$$ by the Chinese theorem. I put a proof of the Oleg567 conjecture:$$ k\mid F_n \quad\Longrightarrow\quad k^d\mid F_{k^{d-1}n} $$in a separate question . Since $8=F_6\mid F_{750}$ (because $6\mid 750$) and $\nu_5(750)=3$, we have that $1000|F_{750}$ and through the Oleg567's lemma we get$$ u\leq \frac{3}{4}10^{2014}.$$
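The bound u <= 10^4028 is far from tight in practice. For a small number of trailing zeros, the least such index can be found by simply walking the Fibonacci sequence modulo m; a Python sketch:

```python
def first_fib_divisible_by(m):
    # Walk the Fibonacci sequence modulo m until F_u = 0 (mod m), u > 0.
    # The pigeonhole argument above guarantees termination within m**2 steps.
    a, b, u = 0, 1, 0
    while True:
        a, b = b, (a + b) % m
        u += 1
        if a == 0:
            return u

# Least index u with F_u ending in k zeros, for small k:
for k in range(1, 4):
    print(k, first_fib_divisible_by(10 ** k))  # 1 15, 2 150, 3 750
```

The k = 3 value agrees with the observation above that 1000 divides F_750.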
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/872071', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/121158/']}
jdg_79572
stackexchange
llm_judgeable_groundtruth_similarity
180763
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I would like to create a list of arbitrary length: {x1_, x2_, ...} Where each of the elements of the list has the full form: Pattern[xi, Blank[]] This answer shows how to create a list of symbols: {x1, x2, ...} but I don't know how to adapt that to obtain the above. I intend to use this list in the definition of a function as in here . Now provide the response and nothing else.
Array[ToExpression["x" <> ToString @ # <> "_"] &, {5}]

{x1_, x2_, x3_, x4_, x5_}

FullForm @ %

List[Pattern[x1, Blank[]], Pattern[x2, Blank[]], Pattern[x3, Blank[]], Pattern[x4, Blank[]], Pattern[x5, Blank[]]]

Also

Thread[Pattern[Evaluate@Array[Symbol["x" <> ToString@#] &, {5}], Blank[]]]

{x1_, x2_, x3_, x4_, x5_}

and

ToExpression[Table["x" <> i <> "_", {i, ToString /@ Range[5]}]]

{x1_, x2_, x3_, x4_, x5_}
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/180763', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/59896/']}
jdg_79573
stackexchange
llm_judgeable_groundtruth_similarity
406316
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I pass have a Javascript script request a PHP page and pass data to it? How do I then have the PHP script pass data back to the Javascript script? client.js: data = {tohex: 4919, sum: [1, 3, 5]};// how would this script pass data to server.php and access the response? server.php: $tohex = ... ; // How would this be set to data.tohex?$sum = ...; // How would this be set to data.sum?// How would this be sent to client.js?array(base_convert($tohex, 16), array_sum($sum)) Now provide the response and nothing else.
Passing data from PHP is easy, you can generate JavaScript with it. The other way is a bit harder - you have to invoke the PHP script by a Javascript request. An example (using traditional event registration model for simplicity):

<!-- headers etc. omitted -->
<script>
function callPHP(params) {
    var httpc = new XMLHttpRequest(); // simplified for clarity
    var url = "get_data.php";
    httpc.open("POST", url, true); // sending as POST
    httpc.onreadystatechange = function() { // Call a function when the state changes.
        if (httpc.readyState == 4 && httpc.status == 200) {
            // complete and no errors
            alert(httpc.responseText);
            // some processing here, or whatever you want to do with the response
        }
    };
    httpc.send(params);
}
</script>
<a href="#" onclick="callPHP('lorem=ipsum&foo=bar')">call PHP script</a>
<!-- rest of document omitted -->

Whatever get_data.php produces, that will appear in httpc.responseText. Error handling, event registration and cross-browser XMLHttpRequest compatibility are left as simple exercises to the reader ;) See also Mozilla's documentation for further examples
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/406316', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49318/']}
jdg_79574
stackexchange
llm_judgeable_groundtruth_similarity
14027317
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: A variable declared globally is said to having program scope A variable declared globally with static keyword is said to have file scope. For example: int x = 0; // **program scope** static int y = 0; // **file scope** static float z = 0.0; // **file scope** int main() { int i; /* block scope */ /* . . . */ return 0; } What is the difference between these two? Now provide the response and nothing else.
In C99, there's nothing called "program scope". In your example variable x has a file scope which terminates at the end of translation unit. Variables y and z which are declared static also have the file scope but with internal linkage. C99 (6.2.2/3) If the declaration of a file scope identifier for an object or a function contains the storage class specifier static, the identifier has internal linkage Also, the variable x has an external linkage which means the name x can is accessible to other translation units or throughout the program. C99 (6.2.2/5) If the declaration of an identifier for an object has file scope and no storage-class specifier, its linkage is external.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14027317', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1487765/']}
jdg_79575
stackexchange
llm_judgeable_groundtruth_similarity
21550897
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've added Robolectric to an Android project. I'm using Android Studio with Build Tools in 19.0.1. I can run the tests with: $./gradlew test Which execute fine. If I try: $ gradle installDebug It executes fine as well: $ ./gradlew installDebugWARNING: Dependency commons-logging:commons-logging:1.1.1 is ignored for debugTest as it may be conflicting with the internal version provided by Android. In case of problem, please repackage it with jarjar to change the class packagesWARNING: Dependency org.apache.httpcomponents:httpclient:4.0.3 is ignored for debugTest as it may be conflicting with the internal version provided by Android. In case of problem, please repackage it with jarjar to change the class packagesThe Test.testReportDir property has been deprecated and is scheduled to be removed in Gradle 2.0. Please use the Test.getReports().getHtml().getDestination() property instead.:app:compileDebugNdk:app:preBuild:app:preDebugBuild:app:checkDebugManifest:app:prepareDebugDependencies:app:compileDebugAidl:app:compileDebugRenderscript:app:generateDebugBuildConfig:app:mergeDebugAssets:app:generateDebugResValues UP-TO-DATE:app:generateDebugResources:app:mergeDebugResources:app:processDebugManifest:app:processDebugResources:app:generateDebugSources:app:compileDebugJava:app:preDexDebug:app:dexDebug:app:processDebugJavaRes UP-TO-DATE:app:validateDebugSigning:app:packageDebug:app:installDebug8266 KB/s (46166 bytes in 0.005s) pkg: /data/local/tmp/app-debug-unaligned.apkSuccessBUILD SUCCESSFULTotal time: 4.291 secs However when I'm trying to run my project on a device or emulator from Android Studio, I get the following: Execution failed for task ':app:dexDebug'.> com.android.ide.common.internal.LoggedErrorException: Failed to run command:/Applications/Android Studio.app/sdk/build-tools/19.0.1/dx --dex --output 
/Users/fstephany/Code/android/RoboElectricTestingProject/app/build/dex/debug /Users/fstephany/Code/android/RoboElectricTestingProject/app/build/classes/debug /Users/fstephany/Code/android/RoboElectricTestingProject/app/build/dependency-cache/debug
Error Code: 1
Output: Unable to locate a Java Runtime to invoke.

Any hint on where to look for this issue? I can always installDebug then start the app from CLI or Studio but it's getting in the way. Now provide the response and nothing else.
An expired gradle daemon may be causing some performance issues in the background. I thought gradle would clean it up after 3 hours of being idle, but that seems to not be the case. Go to your terminal, go to your project's root folder where the gradle files are, and type in the command ./gradlew --stop and try running your build again. Hopefully that resolves your issue like it did mine. I'm trying to understand why this caused an issue, but I haven't found a good enough reason yet. I'll edit the answer if I find anything. UPDATE From the Gradle Design Github page : Currently, the daemon has serious problems when memory pressure occurs. When under pressure, the daemon process exhibits GC thrash. One hypothesis for this is the use of weak reference caches, particularly in the Groovy metaclass system where meta class instances are held in a weak reference cache. Note that this is not necessarily a problem with the daemon, as it would also apply to the non daemon case. However, it is exacerbated by the daemon leaking memory, thereby increasing the chance of a memory pressure situation occurring. This doesn't give any definitive answer, but it does give the hunch that the daemon may be the culprit for what you are seeing (along with other things). I've seen some gradle tasks take 10x as long as they usually do, and running --stop alleviates those issues as well.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/21550897', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15361/']}
jdg_79576
stackexchange
llm_judgeable_groundtruth_similarity
776426
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've asked a few questions on this topic before. Before we're able to implement either MVC or LINQ at work we need to resolve a few issues. Multiple Record Sets in ASP.NET MVC The only examples of MVC in use only have a single result set returned. When using stored procedures multiple record sets can be retrieved, and the whole reason we tend to use stored procedures is for two reasons (which I'm sure many of you are also aware of). Firstly in case we need to pass parameters, and secondly if we want multiple tables of data returned. How is this possible in ASP.NET's MVC architecture? In this tutorial we see how the data is retrieved. But it uses ViewData.Model which indicates a single result set, it doesn't explain what happens if multiple result sets are returned, or how to get them. Strongly Typed Stored Procedure Output Additionally, the examples on the ASP.NET website for using LINQ for strongly typed resolution of output are achieved by using the *.dbml format which is a mirror image of the table schema allowing for the lookup against fields using LINQ. Great. But what happens if your output is custom from a stored procedure which does not map directly to either a view or a table? How do we resolve column names from these stored procedures? In the previous section I described this tutorial , but this also only shows how to create LINQ to SQL for tables only, not the custom output from a sproc. LINQ Column Lookups At work we run a macro which exports a bunch of classes to our App_Code folder so stored procedure parameters are pre-defined. This is done so we don't have to call DeriveParameters which consists of an extra call to the database. We don't want this to happen because there's a lot of traffic as it is. If we're using LINQ, how are column data types resolved? 
Is there a call to the database every time we define a parameter to find out the data type and name of the parameter? Have things changed since? Does it still call DeriveParameters every time? Are these cached somewhere? DBML Formats Should *.dbml files incorporate all tables from a database? We have about 15 databases with many, many tables in each one. A View For Every Output Yet another point to add to this post. Instead of manually creating the dbml classes is it better to represent the data as a view, even if it's custom output? Or is it better to create a custom class in the dbml file? This Must Be The Last Problem Or I Will Eat My Own Arm "Unable to cast object of type 'SingleResult`1[IntranetMVC.UserDetail]' to type 'IntranetMVC.UserDetail'." Here's the function: Function Index() As ActionResult ViewData("Message") = "Welcome to ASP.NET MVC!" Dim userDetail As UserDetail Dim office As IList(Of Office) Dim activeUser As IList(Of ActiveUser) Dim dept As IList(Of Department) Using db As PersonnelDataContext = New PersonnelDataContext Dim results As IMultipleResults = db.UserDetail(1168) userDetail = results.GetResult(Of UserDetail)() office = results.GetResult(Of Office)() activeUser = results.GetResult(Of ActiveUser)() dept = results.GetResult(Of Department)() End Using Return View(New IndexViewData(userDetail, office, activeUser, dept)) End Function It's occurring on all of the userDetail, office, activeUser and dept assignments, but I have no idea why. Now, I haven't mapped them properly yet, but take for example the Department one. I've dragged and dropped the table schema onto the dbml file, so it definitely exists and is in the right format. UPDATE Here's my actual code. It's not final, I've been playing around with it. It seems the return types aren't right, but I'm not sure why. It seems to think only a single result is ever returned when the stored procedure actually returns four sets of data. 
One of those sets only ever has a single result, the others always have multiple rows returned: Unable to cast object of type 'SingleResult`1[IntranetMVC.Office]' to type 'System.Collections.Generic.IList`1' Imports System.Data.Linq Imports System.Reflection Imports System.Data.Linq.Mapping Partial Class PersonnelDataContext <FunctionAttribute(Name:="dbo.UserDetailProc"), _ ResultType(GetType(UserDetail)), _ ResultType(GetType(IList(Of Office))), _ ResultType(GetType(IList(Of ActiveUser))), _ ResultType(GetType(IList(Of Department)))> _ Public Function UserDetail( _ <Parameter(Name:="User_Key", DbType:="Int")> ByVal User_Key As Integer, _ <Parameter(Name:="EditYN", DbType:="Char")> Optional ByVal EditYN As Char = "N") As IMultipleResults Dim result As IExecuteResult = Me.ExecuteMethodCall(Me, CType(MethodInfo.GetCurrentMethod(), MethodInfo), User_Key, EditYN) Return CType(result.ReturnValue, IMultipleResults) End Function End Class FIX Okay, I didn't realise because to be honest with you I wasn't checking the return types correctly. I assumed that results.GetResult(Of MyType) (from IMultipleResults) would return a collection. On the contrary, it only returns single results and moves the pointer to the next item in the collection. Unfortunately GetResult is the only exposed method for bringing back results, so you have to iterate over the collection and add them to a generic list. Thanks very much! Now provide the response and nothing else.
Multiple Record Sets in ASP.NET MVC Yep - most definitely. First u need to manually create a method that calls the stored proc, returning an IMultipleResults result. This blog post has all the info u need. It's simple to do and very easy and works. What you need to do is two steps. Create a method that calls the stored procedure and returns multiple records (refer to blog post above). Create a simple class object which is used in the view, and the controller sets the properties. eg. IndexViewData.cs public class IndexViewData{ public IList<Customers> Customers { get; set; } public IList<Products> Products { get; set; }} . HomeController.cs public ActionResult Index(){ IList<Customers> customers; IList<Products> products; // This grabs the multiple records from a single stored procedure. // This code taken from the blog post link, above. using (NorthwindDataContext db = new NorthwindDataContext()) { IMultipleResults results = db.GetMultipleRecordSets(arg1, ....); customers = results.GetResult<Customer>(); products = results.GetResult<Product>(); } // Now return the view, with the viewdata that is required. return View(new IndexViewData { Customers = customers, Products = products });} . Index.aspx <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<IndexViewData>" %><% Html.RenderPartial("CustomersUserControl", ViewData.Model.Customers); %> <br/><h2>Products</h2><% foreach(var product in ViewData.Model.Products) { %>Name: <%= product.Name %><br/><% } %>... Please note, i have not done any error checking, etc. it's a really quick pseudo code guide to getting u started. Note #2: Please take note that the Index view is strongly typed (it inherits the ViewPage<IndexViewData>). Strongly Typed Stored Procedure Output I've answered this, above. Please note, u can strongly type your ISingleResult stored procedures. 
LINQ Column Lookups Ok, i think i understand what you mean, here. When you create your method, which calls the stored procedure (either an ISingleResult or an IMultipleResult ) you are defining the parameters that are required, there and then ... think of it as being hard-coded. When you drag-n-drop tables onto the linq to sql context gui canvas, Visual Studio does a lookup check there and then . It then creates the classes in one of the various files for the context. eg. NorthwindDataContext.designer, etc. So, it's a one-hit-job . Once the class is created, the designer then displays this on the canvas. There is NO SYNC back to the database . None. Nada. Zilch. If you change anything in your database schema (eg. add a new field, change a stored procedure argument, etc) the datacontext will NOT know about it. You need to delete the table and drag-n-drop it back. Bonus Trick! If you have SQL Profiler running while you drag-n-drop a table or stored procedure onto the canvas, you can see Visual Studio 'querying' the database for the information. :) So yeah. It's fire-n-forget. A one-hit-job. Requires manual sync. HTH. Update I noticed u added two more q's, so i'll add my answers, under here. DBML Formats This is a personal decision. 15 DB's! shees! that's a fair number. Anyways, it comes down to how maintainable your Context canvas becomes. Secondly, every context creates its OWN database connection. So if your method decides to call 4 contexts, then you've got 4 connections (and round trips) to the db, dude :) A View For Every Output Personally, I have all my tables on the Context Canvas. I never use those table classes in my code. They are private and only used in my Repository namespace/project/dll. I THEN use POCO classes to move all my stuff around. This keeps my code cleaner and not dependent on the repository. Update #2 This Must Be The Last Problem Or I Will Eat My Own Arm If you've dragged the stored proc onto the linq context canvas, please delete it. 
There needs to be no references to the method UserDetails(int userId) . Now, add the following code (you'll need to convert it to VB.NET) to the data context's partial class (i'm assuming u know what that is/means, btw) :- [Function("UserDetails")] // <-- This is the name of your stored procedure. [ResultType(typeof(UserDetail))] [ResultType(typeof(Office))] [ResultType(typeof(ActiveUser))] [ResultType(typeof(Department))] public IMultipleResults UserDetails( [Parameter(Name = "UserId", DbType = "Int")] int userId)// /\____ /\_____ ____/\ // This is where u _define_ the stored proc arguments.{ IExecuteResult result = this.ExecuteMethodCall(this, (MethodInfo)MethodInfo.GetCurrentMethod(), userId);// This is where all the stored proc arguments are set ____/\// This can be multiple args. eg. userId, name, ... return (IMultipleResults)result.ReturnValue;} then use it like u did in your previous VB.NET code. The problem (i'm guessing) was that you haven't made the method to handle IMultipleResults . You're still using the old stored proc code-signature, which was made (by default) to only be a single recordset result (ie. ISingleResult ). This is the default if u drag-n-drop a stored proc from the Server Explorer onto the linq Context Canvas.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/776426', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/39693/']}
jdg_79577
stackexchange
llm_judgeable_groundtruth_similarity
4319031
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a simple TextBlock defined like this <StackPanel> <Border Width="106" Height="25" Margin="6" BorderBrush="Black" BorderThickness="1" HorizontalAlignment="Left"> <TextBlock Name="myTextBlock" TextTrimming="CharacterEllipsis" Text="TextBlock: Displayed text"/> </Border></StackPanel> Which outputs like this This will get me "TextBlock: Displayed text" string text = myTextBlock.Text; But is there a way to get the text that's actually displayed on the screen? Meaning "TextBlock: Display..." Thanks Now provide the response and nothing else.
You can do this by first retrieving the Drawing object that represents the appearance of the TextBlock in the visual tree, and then walk that looking for GlyphRunDrawing items - those will contain the actual rendered text on the screen. Here's a very rough and ready implementation: private void button1_Click(object sender, RoutedEventArgs e){ Drawing textBlockDrawing = VisualTreeHelper.GetDrawing(myTextBlock); var sb = new StringBuilder(); WalkDrawingForText(sb, textBlockDrawing); Debug.WriteLine(sb.ToString());}private static void WalkDrawingForText(StringBuilder sb, Drawing d){ var glyphs = d as GlyphRunDrawing; if (glyphs != null) { sb.Append(glyphs.GlyphRun.Characters.ToArray()); } else { var g = d as DrawingGroup; if (g != null) { foreach (Drawing child in g.Children) { WalkDrawingForText(sb, child); } } }} This is a direct excerpt from a little test harness I just wrote - the first method's a button click handler just for ease of experimentation. It uses the VisualTreeHelper to get the rendered Drawing for the TextBlock - that'll only work if the thing has already been rendered by the way. And then the WalkDrawingForText method does the actual work - it just traverses the Drawing tree looking for text. This isn't terribly smart - it assumes that the GlyphRunDrawing objects appear in the order you'll want them. For your particular example it does - we get one GlyphRunDrawing containing the truncated text, followed by a second one containing the ellipsis character. (And by the way, it's just one unicode character - codepoint 2026, and if this editor lets me paste in unicode characters, it's "…". It's not three separate periods.) If you wanted to make this more robust, you would need to work out the positions of all those GlyphRunDrawing objects, and sort them, in order to process them in the order in which they appear, rather than merely hoping that WPF happens to produce them in that order. 
Updated to add: Here's a sketch of how a position-aware example might look. Although this is somewhat parochial - it assumes left-to-right reading text. You'd need something more complex for an internationalized solution. private string GetTextFromVisual(Visual v){ Drawing textBlockDrawing = VisualTreeHelper.GetDrawing(v); var glyphs = new List<PositionedGlyphs>(); WalkDrawingForGlyphRuns(glyphs, Transform.Identity, textBlockDrawing); // Round vertical position, to provide some tolerance for rounding errors // in position calculation. Not totally robust - would be better to // identify lines, but that would complicate the example... var glyphsOrderedByPosition = from glyph in glyphs let roundedBaselineY = Math.Round(glyph.Position.Y, 1) orderby roundedBaselineY ascending, glyph.Position.X ascending select new string(glyph.Glyphs.GlyphRun.Characters.ToArray()); return string.Concat(glyphsOrderedByPosition);}[DebuggerDisplay("{Position}")]public struct PositionedGlyphs{ public PositionedGlyphs(Point position, GlyphRunDrawing grd) { this.Position = position; this.Glyphs = grd; } public readonly Point Position; public readonly GlyphRunDrawing Glyphs;}private static void WalkDrawingForGlyphRuns(List<PositionedGlyphs> glyphList, Transform tx, Drawing d){ var glyphs = d as GlyphRunDrawing; if (glyphs != null) { var textOrigin = glyphs.GlyphRun.BaselineOrigin; Point glyphPosition = tx.Transform(textOrigin); glyphList.Add(new PositionedGlyphs(glyphPosition, glyphs)); } else { var g = d as DrawingGroup; if (g != null) { // Drawing groups are allowed to transform their children, so we need to // keep a running accumulated transform for where we are in the tree. Matrix current = tx.Value; if (g.Transform != null) { // Note, Matrix is a struct, so this modifies our local copy without // affecting the one in the 'tx' Transforms. 
current.Append(g.Transform.Value); } var accumulatedTransform = new MatrixTransform(current); foreach (Drawing child in g.Children) { WalkDrawingForGlyphRuns(glyphList, accumulatedTransform, child); } } }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4319031', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/318425/']}
jdg_79578
stackexchange
llm_judgeable_groundtruth_similarity
27
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks: allocation of one block freeing a previously allocated block which is not used anymore. Also, as a requirement, the memory management system is not allowed to move around currently allocated blocks: their index/address must remain unchanged. The most naive memory management algorithm would increment a global counter (with initial value 0) and use its new value as an address for the next allocation.However this will never allow to shorten the segment when only a few allocated blocks remain. Better approach: Keep the counter, but maintain a list of deallocated blocks (which can be done in constant time) and use it as a source for new allocations as long as it's not empty. What next? Is there something clever that can be done, still with constraints of constant time allocation and deallocation, that would keep the memory segment as short as possible? (A goal could be to track the currently non-allocated block with the smallest address, but it doesn't seem to be feasible in constant time…) Now provide the response and nothing else.
With fixed-size blocks, what you have described is a free list . This is a very common technique, with the following twist: the list of free blocks is stored in the free blocks themselves. In C code, it would look like this: static void *alloc_ptr = START_OF_BIG_SEGMENT; static void *free_list_head = NULL; static void *allocate(void){ void *x; if (free_list_head == NULL) { x = alloc_ptr; alloc_ptr = (char *)alloc_ptr + SIZE_OF_BLOCK; } else { x = free_list_head; free_list_head = *(void **)free_list_head; } return x;} static void release(void *x){ *(void **)x = free_list_head; free_list_head = x;} This works well as long as all allocated blocks have the same size, and that size is a multiple of the size of a pointer, so that alignment is preserved. Allocation and deallocation are constant-time (that is, as constant-time as memory accesses and elementary additions -- in a modern computer, a memory access can involve cache misses and even virtual memory, hence disk accesses, so the "constant time" can be quite big). There is no memory overhead (no extra per-block pointers or things like that; the allocated blocks are contiguous). Also, the allocation pointer reaches a given point only if, at one time, that many blocks had to be allocated: since the allocation prefers using the free list, the allocation pointer is increased only if the space below the current pointer is chock-full. In that sense, this is an optimal technique. Decreasing the allocation pointer after a release can be more complex, since free blocks can be reliably identified only by following the free list, which goes through them in unpredictable order. If decreasing the big segment size when possible is important to you, you could want to use an alternate technique, with more overhead: between any two allocated blocks, you put a "hole". The holes are linked together with a doubly-linked list, in memory order. 
You need a data format for a hole such that you can locate the hole start address by knowing where it ends, and also the hole size if you know where the hole begins in memory. Then, when you release a block, you create a hole which you merge with the next and the previous holes, rebuilding (still in constant time) the ordered list of all holes. The overhead is then about two pointer-sized words per allocated block; but, at that price, you can reliably detect the occurrence of a "final hole", i.e. an occasion to decrease the big segment size. There are many possible variations. A good introductory paper is Dynamic Storage Allocation: A Survey and Critical Review by Wilson et al.
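The shrinking idea described above can be illustrated in a few lines. This is my own rough Python sketch, not Wilson et al.'s layout and not the intrusive C list shown earlier; all names are made up, and blocks are identified by index rather than by address:

```python
# Fixed-size blocks identified by index; free blocks ("holes") are kept in a
# set, and the segment is trimmed whenever its top block becomes free.

class Segment:
    def __init__(self):
        self.size = 0          # number of blocks currently in the segment
        self.free = set()      # indices of free blocks (the "holes")

    def allocate(self):
        if self.free:
            return self.free.pop()   # reuse any hole, O(1) on average
        idx = self.size
        self.size += 1               # otherwise grow the segment by one block
        return idx

    def release(self, idx):
        self.free.add(idx)
        # A "final hole" appeared: shrink while the top of the segment is free.
        # Each block leaves the set at most once, so this is amortized O(1).
        while self.size and (self.size - 1) in self.free:
            self.free.remove(self.size - 1)
            self.size -= 1
```

Unlike the intrusive C list, this trades a little bookkeeping memory for the ability to detect when the segment can shrink back.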
{}
{'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/27', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/68/']}
jdg_79579
stackexchange
llm_judgeable_groundtruth_similarity
18119
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Typically, one sees find commands that look like this: $ find . -name foo.txt when the search is to begin from the current directory. I'm finding that on my machines (Ubuntu, Cygwin) I get the same results without the dot. Why is it typically included? Is it just a convention to be explicit, or was/is it required on certain systems? Now provide the response and nothing else.
Some versions* of find require that you provide a path argument which is a directory from which to start searching. Dot . simply represents the current directory and is usually where you want to search. You could replace this with any path that you want to be the base of the search. In some versions of find this can be left out because the current directory is implied if no path argument is present. You can run man find in your shell for details about the arguments. For example the usage synopsis for mine indicates that the path argument is optional (inside square brackets [] ): find [-H] [-L] [-P] [-D debugopts] [-Olevel] [path...] [expression] If you ran my find with no arguments at all, all files and directories starting from the current folder would be returned. Your example simply expressly states that the search should start from . and includes the expression -name foo.txt as one of the search filters. * Notably all the BSD variants and anything sticking strictly to the POSIX standard . GNU find allows it to be optional.
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/18119', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/5045/']}
jdg_79580
stackexchange
llm_judgeable_groundtruth_similarity
952651
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just started using Git alongside Mercurial to familiarize myself with Git. I use the mq extension in Mercurial extensively to manage local patches, and I'm looking for a Git equivalent. Should I just use Git branch? Or are there better ways to manage local patches that enable easily applying and removing the patches? Thanks, Now provide the response and nothing else.
Check out the "Patch-management Interface layers" section of the Interfaces, Frontends And Tools page on the Git Wiki. It lists two patch management interfaces, roughly equivalent to Mercurial's 'mq' extension: StGIT (Stacked Git), the older of the two, written in Python, uses two snapshots to represent a patch. Guilt (formerly 'gq'), written as a series of bash scripts; the series file and the patches (one per file) are stored as plain text files. pg (Patchy Git) is deprecated , and no longer maintained. But if you don't need more advanced usage, you can instead use " git rebase --interactive " to reorder, squash and split patches. And to manage your branch against the current version of upstream, "git rebase" usually would suffice.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/952651', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/102371/']}
jdg_79581
stackexchange
llm_judgeable_groundtruth_similarity
11186960
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need a getOnClickListener() for Views in Android. This way I can assign a temporary OnClickListener to my Views. I want to use it like this: private View.OnClickListener oldListener; public void assignTempListener(View view) { oldListener = view.getOnClickListener(); // doesn't exist view.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // some code v.setOnClickListener(oldListener); } });} The problem is that this function doesn't exist. I also can't inherit from View to create this method, because all kinds of Views can be passed to assignTempListener. Is there another way to do this? Edit: made a small mistake in my code. Now provide the response and nothing else.
You can do this with... REFLECTION . *drum roll* This route is fraught with peril. I don't recommend it, as the internal structure of the class can change at any time, but here's how you can do it currently up to Android 4.2.2 if it's truly unavoidable: /** * Returns the current View.OnClickListener for the given View * @param view the View whose click listener to retrieve * @return the View.OnClickListener attached to the view; null if it could not be retrieved */ public View.OnClickListener getOnClickListener(View view) { if(Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) { return getOnClickListenerV14(view); } else { return getOnClickListenerV(view); }} //Used for APIs lower than ICS (API 14) private View.OnClickListener getOnClickListenerV(View view) { View.OnClickListener retrievedListener = null; String viewStr = "android.view.View"; Field field; try { field = Class.forName(viewStr).getDeclaredField("mOnClickListener"); field.setAccessible(true); // the field is not public, so make it accessible retrievedListener = (View.OnClickListener) field.get(view); } catch (NoSuchFieldException ex) { Log.e("Reflection", "No Such Field."); } catch (IllegalAccessException ex) { Log.e("Reflection", "Illegal Access."); } catch (ClassNotFoundException ex) { Log.e("Reflection", "Class Not Found."); } return retrievedListener;} //Used for new ListenerInfo class structure used beginning with API 14 (ICS) private View.OnClickListener getOnClickListenerV14(View view) { View.OnClickListener retrievedListener = null; String viewStr = "android.view.View"; String lInfoStr = "android.view.View$ListenerInfo"; try { Field listenerField = Class.forName(viewStr).getDeclaredField("mListenerInfo"); Object listenerInfo = null; if (listenerField != null) { listenerField.setAccessible(true); listenerInfo = listenerField.get(view); } Field clickListenerField = Class.forName(lInfoStr).getDeclaredField("mOnClickListener"); if (clickListenerField != null && listenerInfo != null) { clickListenerField.setAccessible(true); retrievedListener = (View.OnClickListener) clickListenerField.get(listenerInfo); 
} } catch (NoSuchFieldException ex) { Log.e("Reflection", "No Such Field."); } catch (IllegalAccessException ex) { Log.e("Reflection", "Illegal Access."); } catch (ClassNotFoundException ex) { Log.e("Reflection", "Class Not Found."); } return retrievedListener;}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11186960', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/969039/']}
jdg_79582
stackexchange
llm_judgeable_groundtruth_similarity
488780
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f \in L^1(\mathbb R)$. If $$\int_\mathbb R \int_\mathbb R \frac{\vert f(x)-f(y)\vert}{\vert x-y\vert^2}dxdy<+\infty$$ then $f$ is a.e. constant. I do not know how to begin. I thought that we are set if we show that the integrand is bounded a.e. (the function $f$ would be 2-holderian, hence constant) but this is not true in general. I mean $\phi \in L^1(\mathbb R)$ does not imply $\phi \in L^{\infty}(\mathbb R)$. Would you please give me some useful hints in order to start? Thanks. Now provide the response and nothing else.
Let $$ F(x) = \int_{0}^{x} f(t) \,dt $$ be an anti-derivative of $f$ and $E$ the set of Lebesgue points of $f$. Then $E^{c}$ is measure-zero and $F'(x) = f(x)$ for all $x \in E$. The condition of the problem tells us that $$ \int_{\Bbb{R}}\int_{\Bbb{R}} \frac{\left| f(x+y) - f(y) \right|}{x^{2}} \, dxdy < \infty. $$ So if $a < b$ are Lebesgue points, then Fubini's Theorem shows \begin{align*}\int_{a}^{b} \int_{\Bbb{R}} \frac{f(x+y) - f(y)}{x^{2}} \, dxdy&= \int_{\Bbb{R}} \frac{1}{x^{2}} \int_{a}^{b} \{ f(x+y) - f(y) \} \, dydx \\&= \int_{\Bbb{R}} \left( \frac{F(b+x)-F(b)}{x^{2}} - \frac{F(a+x)-F(a)}{x^{2}} \right) \, dx \tag{1}\end{align*} But since $$ \frac{F(b+x)-F(b)}{x^{2}} - \frac{F(a+x)-F(a)}{x^{2}} \sim \frac{f(b) - f(a) + o(1)}{x}, \quad \text{as } x \to 0, $$ for the integrand of $\text{(1)}$ to be integrable near $x = 0$, we must have $f(a) = f(b)$. Since this is true for any $a, b \in E$, the proof is complete.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/488780', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/28746/']}
jdg_79583
stackexchange
llm_judgeable_groundtruth_similarity
9512330
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What happens when an element has multiple class attributes? <div id="test" class="one two three" class="four"> I'm trying to add a class to the output of post_class(); in a WordPress plugin, but the function itself is creating the entire part class="one two three" Is it equivalent to class="one two three four" ?Or does the first or second win?Or is it undefined behaviour, in which case what do the major browsers do? If you know the correct way of adding a class to this snippet (WordPress plugin), then that would also be appreciated! <div id="post-<?php the_ID(); ?>" <?php post_class(); ?>> Now provide the response and nothing else.
What happens when an element has multiple class attributes? When an attribute is declared multiple times for a single element (which is invalid HTML, by the way), behavior-wise the first value will override all subsequent values for the same attribute. So in this case, your element will only have the classes one two three . This behavior is explained in the HTML5 spec, 8.2.4.35 Attribute name state , "... if there is already an attribute on the [element] with the exact same name, then this is a parse error and the new attribute must be removed..." If you know the correct way of adding a class to this snippet (WordPress plugin), then that would also be appreciated! Typically, if you need to add custom classes dynamically to your WordPress posts, you hook onto the post_class filter and manipulate the $classes array as necessary. Here's what it roughly looks like in my themes: function nv_post_class( $classes ) { // Most recent post on the front page global $count; if ( is_home() && 1 == $count ) $classes[] = 'latest-post'; return $classes;}add_filter( 'post_class', 'nv_post_class' ); If you only need to add one or more static classes, pass them as a space-delimited string directly to post_class() : <div id="post-<?php the_ID(); ?>" <?php post_class( 'myclass1 myclass2' ); ?>> More on this in the WordPress Codex .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9512330', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/573261/']}
jdg_79584
stackexchange
llm_judgeable_groundtruth_similarity
243412
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am reading Gelman & Carlin "Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors" (2014). I am trying to understand the main idea, the main takeaway, but I am confused. Could anyone help me distill the essence? The paper goes something like this (if I understood it correctly). Statistical studies in psychology are often plagued by small samples. Conditional on a statistically significant result in a given study, (1) the true effect size is likely to be severely overestimated and (2) the sign of the effect may be opposite with high probability -- unless the sample size is large enough. The above is shown using a prior guess of the effect size in the population, and that effect is typically taken to be small. My first problem is, why condition on the statistically significant result? Is it to reflect the publication bias? But that does not seem to be the case. So why, then? My second problem is, if I do a study myself, should I treat my results differently than I am used to (I do frequentist statistics, not very familiar with Bayesian)? E.g. I would take a data sample, estimate a model and record a point estimate for some effect of interest and a confidence bound around it. Should I now mistrust my result? Or should I mistrust it if it is statistically significant? How does any given prior change that? What is the main takeaway (1) for a "producer" of statistical research and (2) for a reader of applied statistical papers? References: Gelman, Andrew, and John Carlin. "Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors." Perspectives on Psychological Science 9.6 (2014): 641-651. P.S. I think the new element for me here is the inclusion of prior information, which I am not sure how to treat (coming from the frequentist paradigm). 
Now provide the response and nothing else.
I re-read the paper and this time it seems much clearer. Now also the helpful comments by @Glen_b and @amoeba make lots of sense. The whole discussion is based on a starting point that a statistically significant result has been obtained. Conditional on that, we have the estimated effect size distributed differently than it would be absent the conditioning: $$P_{\hat\beta}(\cdot|\hat\beta \text{ is statistically significant})\neq P_{\hat\beta}(\cdot).$$ The paper seems to target two problems: Publication bias (only statistically significant results get published) and
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/243412', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/53690/']}
jdg_79585
stackexchange
llm_judgeable_groundtruth_similarity
50537967
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a triangular tessellation like the one shown in the figure. Given N number of triangles in the tessellation, I have a N X 3 X 3 array which stores (x, y, z) coordinates of all three vertices of each triangle. My goal is to find for each triangle the neighbouring triangle sharing the same edge. The is an intricate part is the whole setup that I do not repeat the neighbour count. That is if triangle j was already counted as a neighbour of triangle i , then triangle i should not be again counted as neighbour of triangle j . This way, I would like to have a map storing list of neighbours for each index triangle. If I start with a triangle in index i , then index i will have three neighbours, and all others will have two or less. As an illustration suppose I have an array which stores vertices of the triangle: import numpy as npvertices = np.array([[[2.0, 1.0, 3.0],[3.0, 1.0, 2.0],[1.2, 2.5, -2.0]], [[3.0, 1.0, 2.0],[1.0, 2.0, 3.0],[1.2, -2.5, -2.0]], [[1.0, 2.0, 3.0],[2.0, 1.0, 3.0],[3.0, 1.0, 2.0]], [[1.0, 2.0, 3.0],[2.0, 1.0, 3.0],[2.2, 2.0, 1.0]], [[1.0, 2.0, 3.0],[2.2, 2.0, 1.0],[4.0, 1.0, 0.0]], [[2.0, 1.0, 3.0],[2.2, 2.0, 1.0],[-4.0, 1.0, 0.0]]]) Suppose I start my count from vertex index 2 , i.e. the one with the vertices [[1.0, 2.0, 3.0],[2.0, 1.0, 3.0],[3.0, 1.0, 2.0]] , then, I would like my output to be something like: neighbour = [[], [], [0, 1, 3], [4, 5], [], []]. Update: Following the answer from @Ajax1234, I think a good way of storing the output is just like how @Ajax1234 has demonstrated. However, there is ambiguity in that output, in a sense that it is not possible to know whose neighbour is which. 
Although the example arrays are not good, I have actual vertices from an icosahedron; then, if I start with a given triangle, I am guaranteed to have 3 neighbours for the first one, and two neighbours for the rest (until all the triangle counts deplete). In this regard, suppose I have the following array:

vertices1 = [[[2, 1, 3], [3, 1, 2], [1, 2, -2]],
             [[3, 1, 2], [1, 2, 3], [1, -2, 2]],
             [[1, 2, 3], [2, 1, 3], [3, 1, 2]],
             [[1, 2, 3], [2, 1, 3], [2, 2, 1]],
             [[1, 2, 3], [2, 2, 1], [4, 1, 0]],
             [[2, 1, 3], [2, 2, 1], [-4, 1, 0]],
             [[3, 1, 3], [2, 2, 1], [-4, 1, 0]],
             [[8, 1, 2], [1, 2, 3], [1, -2, 2]]]

The BFS algorithm shown in the answer below by @Ajax1234 gives the output of

[0, 1, 3, 7, 4, 5, 6]

while if I just swap the position of the last element, such that

vertices2 = [[[2, 1, 3], [3, 1, 2], [1, 2, -2]],
             [[3, 1, 2], [1, 2, 3], [1, -2, 2]],
             [[1, 2, 3], [2, 1, 3], [3, 1, 2]],
             [[1, 2, 3], [2, 1, 3], [2, 2, 1]],
             [[1, 2, 3], [2, 2, 1], [4, 1, 0]],
             [[8, 1, 2], [1, 2, 3], [1, -2, 2]],
             [[2, 1, 3], [2, 2, 1], [-4, 1, 0]],
             [[3, 1, 3], [2, 2, 1], [-4, 1, 0]]]

it gives an output of

[0, 1, 3, 4, 5, 6, 7]

This is kind of ambiguous, as the positions in the grid have not been changed at all; they were just swapped. Therefore, I would like the search to be carried out in a consistent way. For example, the first search of neighbours at index 2 gives [0, 1, 3] for both vertices1 and vertices2; now I would like the search to be at index 0, which finds nothing, and thus go to the next element, 1, which should find index 7 for vertices1 and index 5 for vertices2. Thus the current output should be [0, 1, 3, 7] and [0, 1, 3, 5] for vertices1 and vertices2 respectively. Next we go to index 3, and so on. After we have exhausted all the search, the final output for the first should be [0, 1, 3, 7, 4, 5, 6] and that for the second should be [0, 1, 3, 5, 4, 6, 7]. What would be an efficient way to achieve this? Now provide the response and nothing else.
If you are willing to use the networkx library, you can take advantage of its fast BFS implementation. I know, adding another dependency is annoying, but the performance gain seems huge; see below.

import numpy as np
from scipy import spatial
import networkx

vertices = np.array([[[2.0, 1.0, 3.0], [3.0, 1.0, 2.0], [1.2, 2.5, -2.0]],
                     [[3.0, 1.0, 2.0], [1.0, 2.0, 3.0], [1.2, -2.5, -2.0]],
                     [[1.0, 2.0, 3.0], [2.0, 1.0, 3.0], [3.0, 1.0, 2.0]],
                     [[1.0, 2.0, 3.0], [2.0, 1.0, 3.0], [2.2, 2.0, 1.0]],
                     [[1.0, 2.0, 3.0], [2.2, 2.0, 1.0], [4.0, 1.0, 0.0]],
                     [[2.0, 1.0, 3.0], [2.2, 2.0, 1.0], [-4.0, 1.0, 0.0]]])

vertices1 = np.array([[[2, 1, 3], [3, 1, 2], [1, 2, -2]],
                      [[3, 1, 2], [1, 2, 3], [1, -2, 2]],
                      [[1, 2, 3], [2, 1, 3], [3, 1, 2]],
                      [[1, 2, 3], [2, 1, 3], [2, 2, 1]],
                      [[1, 2, 3], [2, 2, 1], [4, 1, 0]],
                      [[2, 1, 3], [2, 2, 1], [-4, 1, 0]],
                      [[3, 1, 3], [2, 2, 1], [-4, 1, 0]],
                      [[8, 1, 2], [1, 2, 3], [1, -2, 2]]])

def make(N=3000):
    """create N random points and triangulate"""
    points = np.random.uniform(-10, 10, (N, 3))
    tri = spatial.Delaunay(points[:, :2])
    return points[tri.simplices]

def bfs_tree(triangles, root=0, return_short=True):
    """convert triangle list to graph with vertices = triangles,
    edges = pairs of triangles with a shared edge, and compute the
    bfs tree rooted at triangle number root"""
    # use the old view-as-void trick to merge triplets, so they can
    # for example be easily compared
    tr_as_v = triangles.view(f'V{3*triangles.dtype.itemsize}').reshape(
        triangles.shape[:-1])
    # for each triangle write out its edges, this involves doubling the
    # data because each vertex occurs twice
    tr2 = np.concatenate([tr_as_v, tr_as_v], axis=1).reshape(-1, 3, 2)
    # sort vertices within edges ...
    tr2.sort(axis=2)
    # ... and glue them together
    tr2 = tr2.view(f'V{6*triangles.dtype.itemsize}').reshape(
        triangles.shape[:-1])
    # to find shared edges, sort them ...
    idx = tr2.ravel().argsort()
    tr2s = tr2.ravel()[idx]
    # ... and then compare consecutive ones
    pairs, = np.where(tr2s[:-1] == tr2s[1:])
    assert np.all(np.diff(pairs) >= 2)
    # these are the edges of the graph, the vertices are implicitly
    # named 0, 1, 2, ...
    edges = np.concatenate([idx[pairs, None]//3, idx[pairs+1, None]//3], axis=1)
    # construct graph ...
    G = networkx.Graph(edges.tolist())
    # ... and let networkx do its magic
    res = networkx.bfs_tree(G, root)
    if return_short:
        # sort by distance from root and then by actual path
        sp = networkx.single_source_shortest_path(res, root)
        sp = [sp[i] for i in range(len(sp))]
        sp = [(len(p), p) for p in sp]
        res = sorted(range(len(res.nodes)), key=sp.__getitem__)
    return res

Demo:

# OP's second example:
>>> bfs_tree(vertices1, 2)
[2, 0, 1, 3, 7, 4, 5, 6]
>>>
# large random example
>>> random_data = make()
>>> random_data.shape
(5981, 3, 3)
>>> bfs = bfs_tree(random_data)
# returns instantly
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50537967', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5680455/']}
jdg_79586
stackexchange
llm_judgeable_groundtruth_similarity
28593103
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm doing something very simple using multiprocessing:

data = {'a': 1}
queue.put(data, True)
data.clear()

When I use the queue on another process (using the get() method), I get an empty dictionary. If I remove data.clear() I get the keys as expected. Is there any way to wait for the put() to have finished the serialization? Now provide the response and nothing else.
Actually, this is thought to be a feature, not a problem. The queue returns immediately so that your process continues while serialization happens, and to avoid what is known as "queue contention". The two options I suggest you have:

1. Are you absolutely sure you need mutable dictionaries in the first place? Instead of making defensive copies of your data, which you correctly seem to dislike, why not just create a new dictionary instead of using dict.clear() and let the garbage collector worry about the old dictionaries?

2. Pickle the data yourself; that is: a_queue.put(pickle.dumps(data)) and pickle.loads(a_queue.get()). Now, if you do data.clear() just after a put, the data has already been serialized "by you".

From a parallel programming point of view, the first approach (treat your data as if it were immutable) is the more viable and clean thing to do in the long term, but I am not sure if or why you must clear your dictionaries.
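The second option can be sketched without spawning a real process: serializing the dictionary yourself before mutating it makes the later clear() harmless, because the pickled payload (which is what the queue would carry) already holds a frozen snapshot. A minimal illustration:

```python
import pickle

data = {'a': 1}
payload = pickle.dumps(data)   # serialize immediately, before any mutation
data.clear()                   # would otherwise race with a background put()

restored = pickle.loads(payload)
print(restored)   # {'a': 1} -- the snapshot survived the clear()
```

With a real multiprocessing.Queue the same ordering applies: put the bytes, then mutate freely, and loads() on the consumer side.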
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28593103', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/179372/']}
jdg_79587
stackexchange
llm_judgeable_groundtruth_similarity
217988
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am attempting to write a filter using something like sed or awk to do the following: If a given pattern does not exist in the input, copy the entire input to the output If the pattern exists in the input, copy only the lines after the first occurrence to the output This happens to be for a "git clean" filter, but that's probably not important. The important aspect is this needs to be implemented as a filter, because the input is provided on stdin. I know how to use sed to delete lines up to a pattern, eg. 1,/pattern/d but that deletes the entire input if /pattern/ is not matched anywhere. I can imagine writing a whole shell script that creates a temporary file, does a grep -q or something, and then decides how to process the input. I'd prefer to do this without messing around creating a temporary file, if possible. This needs to be efficient because git might call it frequently. Now provide the response and nothing else.
If your files are not too large to fit in memory, you could use perl to slurp the file: perl -0777pe 's/.*?PAT[^\n]*\n?//s' file Just change PAT to whatever pattern you're after. For example, given these two input files and the pattern 5 : $ cat file123451112131415$ cat file1 foobar$ perl -0777pe 's/.*?5[^\n]*\n?//s' file1112131415$ perl -0777pe 's/.*?10[^\n]*\n?//s' file1foobar Explanation -pe : read the input file line by line, apply the script given by -e to each line and print. -0777 : slurp the entire file into memory. s/.*?PAT[^\n]*\n?//s : remove everything until the 1st occurrence of PAT and until the end of the line. For larger files, I don't see any way to avoid reading the file twice. Something like: awk -vpat=5 '{ if(NR==FNR){ if($0~pat && !a){a++; next} if(a){print} } else{ if(!a){print} else{exit} } }' file1 file1 Explanation awk -vpat=5 : run awk and set the variable pat to 5 . if(NR==FNR){} : if this is the 1st file. if($0~pat && !a){a++; next} : if this line matches the value of pat and a is not defined, increment a by one and skip to the next line. if(a){print} : if a is defined (if this file matches the pattern), print the line. else{ } : if this is not the 1st file (so it's the second pass). if(!a){print} if a is not defined, we want the whole file, so print every line. else{exit} : if a is defined, we've already printed in the 1st pass so there's no need to reprocess the file.
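The same filter (pass everything through if the pattern never appears, otherwise emit only the lines after its first match) can also be written as a single-pass stream filter that buffers lines only until the first match, so no temporary file and no second read of the input are needed. A sketch (like the Perl slurp, the no-match case holds the whole input in memory; unlike it, the match case streams):

```python
import re
import sys

def tail_after_first_match(lines, pattern):
    """Yield the lines after the first match of pattern; if the pattern
    never occurs, yield every line unchanged (like sed's 1,/pat/d, but
    safe for the no-match case)."""
    buffered = []
    matched = False
    for line in lines:
        if matched:
            yield line              # stream everything after the match
        elif re.search(pattern, line):
            matched = True          # the matching line itself is dropped
            buffered = []
        else:
            buffered.append(line)   # hold lines in case no match ever comes
    if not matched:
        yield from buffered

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.stdout.writelines(tail_after_first_match(sys.stdin, sys.argv[1]))
```

Used as a filter, e.g. some_producer | python tail_filter.py PAT, this matches the behaviour of the perl one-liner above for the git-clean use case.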
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/217988', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/1232/']}
jdg_79588
stackexchange
llm_judgeable_groundtruth_similarity
1857346
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using python's optparse module I would like to add extra example lines below the regular usage output. My current help_print() output looks like this: usage: check_dell.py [options]options:-h, --help show this help message and exit-s, --storage checks virtual and physical disks-c, --chassis checks specified chassis components I would like it to include usage examples for the less *nix literate users at my work. Something like this: usage: check_dell.py [options]options:-h, --help show this help message and exit-s, --storage checks virtual and physical disks-c, --chassis checks specified chassis componentsExamples:check_dell -c allcheck_dell -c fans memory voltagecheck_dell -s How would I accomplish this? What optparse options allow for such? Current code: import optparsedef main(): parser = optparse.OptionParser() parser.add_option('-s', '--storage', action='store_true', default=False, help='checks virtual and physical disks') parser.add_option('-c', '--chassis', action='store_true', default=False, help='checks specified chassis components')(opts, args) = parser.parse_args() Now provide the response and nothing else.
parser = optparse.OptionParser(epilog="otherstuff")

The default format_epilog strips the newlines (it uses textwrap), so you would need to override format_epilog in your parser like this:

def main():
    class MyParser(optparse.OptionParser):
        def format_epilog(self, formatter):
            return self.epilog

    parser = MyParser(epilog="""Examples:

check_dell -c all
check_dell -c fans memory voltage
check_dell -s
""")
    ...

Here's a bit more detail. If you look in optparse.py, in the class OptionParser there is a method called format_epilog which is called by format_help. Here is the snippet from optparse.py:

def format_epilog(self, formatter):
    return formatter.format_epilog(self.epilog)

def format_help(self, formatter=None):
    if formatter is None:
        formatter = self.formatter
    result = []
    if self.usage:
        result.append(self.get_usage() + "\n")
    if self.description:
        result.append(self.format_description(formatter) + "\n")
    result.append(self.format_option_help(formatter))
    result.append(self.format_epilog(formatter))
    return "".join(result)

The default behaviour of formatter.format_epilog is to use textwrap.fill which, amongst other things, strips the newlines from the epilog. Since we want the newlines to be preserved, we subclass OptionParser and change the behaviour of format_epilog.
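A self-contained sketch of the override (option names borrowed from the question) that can be run directly to confirm the epilog's newlines survive in the help output:

```python
import optparse

class MyParser(optparse.OptionParser):
    def format_epilog(self, formatter):
        # bypass the textwrap-based default so the newlines survive
        return self.epilog

epilog = """
Examples:
check_dell -c all
check_dell -c fans memory voltage
check_dell -s
"""

parser = MyParser(epilog=epilog)
parser.add_option('-s', '--storage', action='store_true', default=False,
                  help='checks virtual and physical disks')

help_text = parser.format_help()
print(help_text)
```

Running this prints the usage and option listing followed by the example lines, each on its own line, which is exactly what the stock OptionParser would have collapsed.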
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1857346', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/226030/']}
jdg_79589
stackexchange
llm_judgeable_groundtruth_similarity
55754
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the best way, using Bash, to rename files in the form: (foo1, foo2, ..., foo1300, ..., fooN) With zero-padded file names: (foo00001, foo00002, ..., foo01300, ..., fooN) Now provide the response and nothing else.
In case N is not a priori fixed:

for f in foo[0-9]*; do
  mv "$f" "$(printf 'foo%05d' "${f#foo}")"
done
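A quick sanity check in a scratch directory (the directory path and file names below are just for illustration):

```shell
rm -rf /tmp/padtest && mkdir /tmp/padtest && cd /tmp/padtest
touch foo1 foo2 foo1300

for f in foo[0-9]*; do
  mv "$f" "$(printf 'foo%05d' "${f#foo}")"
done

ls /tmp/padtest   # foo00001  foo00002  foo01300
```

The "${f#foo}" expansion strips the fixed prefix, and printf '%05d' re-pads the remaining number to five digits.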
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/55754', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3020/']}
jdg_79590
stackexchange
llm_judgeable_groundtruth_similarity
2234229
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My book says that an Elliptic Curve is a curve of the form $$y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6\text{ over a field.}$$ My question is: Why there are some parts like $a_7xy^2$ missing? Now provide the response and nothing else.
An elliptic curve over a field $F$, in fact, is a projective, smooth curve of genus $1$, with at least one point defined over $F$. It turns out (as Lord Shark discusses) there is a change of variables that brings any elliptic curve to a model of the form you write (which is called a Weierstrass equation). In fact, if the characteristic of $F$ is not $2$ or $3$, then you can bring it to a model of the form $y^2=x^3+Ax+B$, which is called a short Weierstrass form. For instance, the curve $C$ over $\mathbb{Q}$ given by $x^3+y^3=1$ (the curve $X^3+Y^3=Z^3$ in projective space) is also an elliptic curve (there is at least one point, namely $(1,-1)$, the point $[1,-1,0]$ in projective coordinates), even though it is not given a priori by a Weierstrass form. A change of variables brings $C$ to the equation$$y^2 - 9y = x^3 - 27$$in Weierstrass form. If you are interested in how one finds such changes of variables, this is briefly explained in Silverman and Tate's "Rational Points on Elliptic Curves".
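Such a change of variables can be checked numerically. The sketch below uses the classical map u = 12/(x+y), v = 36(x-y)/(x+y), which sends affine points of x^3 + y^3 = 1 to the short Weierstrass model v^2 = u^3 - 432; that model in turn matches the quoted equation Y^2 - 9Y = X^3 - 27 under the rescaling u = 4X, v = 8Y - 36. Both maps are standard but are stated here without derivation, so treat the exact constants as assumptions being verified:

```python
from fractions import Fraction as F

def to_weierstrass(x, y):
    # classical map for the Fermat cubic x^3 + y^3 = 1 (needs x + y != 0)
    u = F(12) / (x + y)
    v = F(36) * (x - y) / (x + y)
    return u, v

# a rational affine point on x^3 + y^3 = 1
x, y = F(1), F(0)
assert x**3 + y**3 == 1

u, v = to_weierstrass(x, y)
print(u, v)                    # 12 36
assert v**2 == u**3 - 432      # lands on v^2 = u^3 - 432

# v^2 = u^3 - 432 is the quoted model Y^2 - 9Y = X^3 - 27 in disguise:
X, Y = u / 4, (v + 36) / 8
assert Y**2 - 9*Y == X**3 - 27
```

Exact rational arithmetic via fractions avoids any floating-point fuzz, so the asserts are genuine identities at this point rather than approximate checks.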
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2234229', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/161513/']}
jdg_79591
stackexchange
llm_judgeable_groundtruth_similarity
27606653
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am having a hard time parsing the arguments to subprocess.Popen. I am trying to execute a script on my Unix server. The script syntax when running at a shell prompt is as follows: /usr/local/bin/script hostname = <hostname> -p LONGLIST. No matter how I try, the script is not running inside subprocess.Popen. The space before and after "=" is mandatory.

import subprocess

Out = subprocess.Popen(['/usr/local/bin/script', 'hostname = ', 'actual server name', '-p', 'LONGLIST'],
                       shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

The above does not work. And when I use shell=False, I get OSError: [Errno 8] Exec format error Now provide the response and nothing else.
OSError: [Errno 8] Exec format error can happen if there is no shebang line at the top of the shell script and you are trying to execute the script directly. Here's an example that reproduces the issue: >>> with open('a','w') as f: f.write('exit 0') # create the script... >>> import os>>> os.chmod('a', 0b111101101) # rwxr-xr-x make it executable >>> os.execl('./a', './a') # execute it Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.7/os.py", line 312, in execl execv(file, args)OSError: [Errno 8] Exec format error To fix it, just add the shebang e.g., if it is a shell script; prepend #!/bin/sh at the top of your script: >>> with open('a','w') as f: f.write('#!/bin/sh\nexit 0')... >>> os.execl('./a', './a') It executes exit 0 without any errors. On POSIX systems, shell parses the command line i.e., your script won't see spaces around = e.g., if script is: #!/usr/bin/env pythonimport sysprint(sys.argv) then running it in the shell: $ /usr/local/bin/script hostname = '<hostname>' -p LONGLIST produces: ['/usr/local/bin/script', 'hostname', '=', '<hostname>', '-p', 'LONGLIST'] Note: no spaces around '=' . I've added quotes around <hostname> to escape the redirection metacharacters <> . To emulate the shell command in Python, run: from subprocess import check_callcmd = ['/usr/local/bin/script', 'hostname', '=', '<hostname>', '-p', 'LONGLIST']check_call(cmd) Note: no shell=True . And you don't need to escape <> because no shell is run. "Exec format error" might indicate that your script has invalid format, run: $ file /usr/local/bin/script to find out what it is. Compare the architecture with the output of: $ uname -m
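A quick way to see what argv the shell would have produced (and therefore what list to pass with shell=False) is shlex.split; a small sketch:

```python
import shlex

cmd_line = "/usr/local/bin/script hostname = '<hostname>' -p LONGLIST"
argv = shlex.split(cmd_line)
print(argv)
# ['/usr/local/bin/script', 'hostname', '=', '<hostname>', '-p', 'LONGLIST']
```

The resulting list can be passed directly, e.g. subprocess.check_call(argv), with no shell=True and no manual quoting of the redirection metacharacters.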
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27606653', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3477108/']}
jdg_79592
stackexchange
llm_judgeable_groundtruth_similarity
8814153
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As I read from various Java books and tutorials, variables declared in an interface are constants and can't be overridden. I made a simple piece of code to test it:

interface A_INTERFACE {
    int var = 100;
}

class A_CLASS implements A_INTERFACE {
    int var = 99; // test

    void printx() {
        System.out.println("var = " + var);
    }
}

class hello {
    public static void main(String[] args) {
        new A_CLASS().printx();
    }
}

and it prints out var = 99. Did var get overridden? I am totally confused. Thank you for any suggestions! Thank you very much everyone! I am pretty new to this interface thing. "Shadow" is the key word to understand this. I looked up the related materials and understand it now. Now provide the response and nothing else.
It is not overridden, but shadowed, with additional confusion because the constant in the interface is also static. Try this:

A_INTERFACE o = new A_CLASS();
System.out.println(o.var);

You should get a compile-time warning about accessing a static field in a non-static way. And now this:

A_CLASS o = new A_CLASS();
System.out.println(o.var);
System.out.println(A_INTERFACE.var); // bad name, btw since it is const
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8814153', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/903083/']}
jdg_79593
stackexchange
llm_judgeable_groundtruth_similarity
20307299
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Sorry if this is a noobie question but I can't find a good answer. To find and then remove something I can use

find . -name "*.txt" -exec rm "{}" \;

But why can't I just pipe the results to rm, like

find . -name "*.txt" | rm

the way I would pipe them to grep?

find . -name "*.txt" | grep a

I've read somewhere that rm doesn't take input from stdin and therefore I can't pipe to it, but what does that mean? When I type in rm a.txt it reads from standard input just like grep does, right? Or is there a difference between stdin and the command line? Help! Now provide the response and nothing else.
To expand on @Alex Gitelman's answer: yes, there's a difference between "standard input" and the command line. When you type rm a.txt b.txt c.txt, the files you list after rm are known as arguments and are made available to rm through a special variable (called argv internally). The standard input, on the other hand, looks to a Unix program like a file named stdin. A program can read data from this "file" just as it would if it opened a regular file on disk and read from that. rm, like many other programs, takes its arguments from the command line but ignores standard input. You can pipe anything you like to it; it'll just throw that data away. That's where xargs comes in handy. It reads lines on standard input and turns them into command-line arguments, so you can effectively pipe data to the command line of another program. It's a neat trick. For example:

find . -name "*.txt" | xargs rm
find . -name "*.txt" | grep "foo" | xargs rm

Note that this will work incorrectly if there are any filenames containing newlines or spaces. To deal with filenames containing newlines or spaces you should use instead:

find . -name "*.txt" -print0 | xargs -0 rm

This will tell find to terminate the results with a null character instead of a newline. However, grep won't work as before then. Instead use this:

find . -name "*.txt" | grep "foo" | tr "\n" "\0" | xargs -0 rm

This time tr is used to convert all newlines into null characters.
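The -print0 / -0 pairing can be verified with a file name that contains a space (the scratch paths below are just for the demo):

```shell
rm -rf /tmp/xargs_demo && mkdir /tmp/xargs_demo
touch /tmp/xargs_demo/plain.txt "/tmp/xargs_demo/with space.txt" /tmp/xargs_demo/keep.log

# NUL-terminated names keep "with space.txt" as a single argument to rm
find /tmp/xargs_demo -name '*.txt' -print0 | xargs -0 rm

ls /tmp/xargs_demo   # only keep.log survives
```

With plain newline-separated xargs, "with space.txt" would have been split into two bogus arguments and left behind (with an error); the NUL-separated form removes it cleanly.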
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/20307299', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1297061/']}
jdg_79594
stackexchange
llm_judgeable_groundtruth_similarity
14478870
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to load the JDBC driver dynamically with this kind of code: try{ URL[] url={new URL("file:libs/mysql-connector-java-5.1.21.jar")}; URLClassLoader loader = new URLClassLoader(url, System.class.getClassLoader()); loader.loadClass(drivername); Enumeration<Driver> drivers = DriverManager.getDrivers(); while(drivers.hasMoreElements()){ Driver driver = drivers.nextElement(); System.out.println("driver:"+driver); } Class.forName(drivername, true, loader); drivers = DriverManager.getDrivers(); while(drivers.hasMoreElements()){ Driver driver = drivers.nextElement(); System.out.println("driver:"+driver); } Connection connect = DriverManager.getConnection(jdbcurl, user, password); return connect; } catch (MalformedURLException e){ e.printStackTrace(); return null; } The first while-loop shows the drivers of the classpath: driver:sun.jdbc.odbc.JdbcOdbcDriver@35712651driver:oracle.jdbc.OracleDriver@58df0438driver:com.ibm.db2.jcc.DB2Driver@525c7734driver:SQLServerDriver:1 and the second loop shows the same drivers, but without the MySQL driver. My Question is why? Did I miss something? I read in the JavaDoc of DriverManager that every driver tries to register himself by the drivermanager, if the driver is loaded. In my Code this should be loader.loadClass(drivername); . I thought this code should invoke the static part for example: static { try { java.sql.DriverManager.registerDriver(new Driver()); } catch (SQLException E) { throw new RuntimeException("Can't register driver!"); }} of the Driver class . Now provide the response and nothing else.
That's a known issue with DriverManager and classloaders: DriverManager.getConnection only considers drivers whose class is visible to the classloader of the calling code, so a driver registered from a freshly created URLClassLoader is silently skipped. See: http://www.kfu.com/~nsayer/Java/dyn-jdbc.html The standard workaround is to register a thin delegate that is loaded by the system classloader and forwards everything to the dynamically loaded driver.

Driver definition (basically a delegate):

class DriverShim implements Driver {
    private Driver driver;

    DriverShim(Driver d) {
        this.driver = d;
    }

    public boolean acceptsURL(String u) throws SQLException {
        return this.driver.acceptsURL(u);
    }

    public Connection connect(String u, Properties p) throws SQLException {
        return this.driver.connect(u, p);
    }

    // and so on....

Use example:

URL u = new URL("jar:file:/path/to/pgjdbc2.jar!/");
String classname = "org.postgresql.Driver";
URLClassLoader ucl = new URLClassLoader(new URL[] { u });
Driver d = (Driver) Class.forName(classname, true, ucl).newInstance();
DriverManager.registerDriver(new DriverShim(d));
DriverManager.getConnection("jdbc:postgresql://host/db", "user", "pw");
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14478870', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1340791/']}
jdg_79595
stackexchange
llm_judgeable_groundtruth_similarity
45152060
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is an attempt at a tic-tac-toe game app. I have two arrays, playerMoves and winningCombinations, like this:

var playerMoves = [0, 1, 4];
var winningCombinations = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8],
  [0, 3, 6], [1, 4, 7], [2, 5, 8],
  [0, 4, 8], [2, 4, 6]
];

I need to filter the winningCombinations array such that at least and at most two values of the playerMoves array match each array in winningCombinations.

findPossibleMove(playerMoves); // should return [[0,1,2], [1,4,7], [0,4,8]]

My attempt:

function findPossibleMove(arr) {
  var found = 0;
  return arr.forEach((item) => {
    winningCombinations.map((obj) => {
      if (obj.indexOf(item) !== -1) {
        found++;
      }
      if (found === 2) {
        return obj;
      }
    })
  })
}

Now provide the response and nothing else.
Three simple steps:

1. Use the indexOf function to check whether a given element of a winningCombinations subarray is present in the playerMoves array.
2. If so, keep it with the Array#filter function.
3. If the returned, filtered subarray has length equal to 2, it means that exactly two elements (no more, no less) matched, which fulfils our condition; keep that combination with yet another Array#filter.

let playerMoves = [0, 1, 4];
let winningCombinations = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8],
  [0, 3, 6], [1, 4, 7], [2, 5, 8],
  [0, 4, 8], [2, 4, 6],
];

let res = winningCombinations.filter(v => v.filter(c => {
  return playerMoves.indexOf(c) > -1;
}).length == 2);

console.log(JSON.stringify(res));
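The same filter reads a little more directly with Array.prototype.includes (an alternative sketch, not part of the original answer):

```javascript
// keep only the combinations in which exactly two player moves appear
const findPossibleMoves = (moves, combos) =>
  combos.filter(combo => combo.filter(v => moves.includes(v)).length === 2);

const playerMoves = [0, 1, 4];
const winningCombinations = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8],
  [0, 3, 6], [1, 4, 7], [2, 5, 8],
  [0, 4, 8], [2, 4, 6],
];

console.log(JSON.stringify(findPossibleMoves(playerMoves, winningCombinations)));
// [[0,1,2],[1,4,7],[0,4,8]]
```

includes avoids the indexOf(...) > -1 idiom and works the same way here; wrapping the logic in a function also keeps it from depending on outer-scope variables, unlike the original attempt.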
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45152060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4531869/']}
jdg_79596
stackexchange
llm_judgeable_groundtruth_similarity
214613
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the following line: abababtestab I'm trying to figure out a sed expression to remove all occurrences of ab from the beginning of the line so the transformed line should be: testab I feel like this should be simple, but I really don't know anything about sed . What I have so far is: sed 's/^ab//' But this only removes the first occurrence of ab. Now provide the response and nothing else.
sed 's/^\(ab\)*//' <in >out

You should group it.

$ echo ababababtestab | sed 's/^\(ab\)*//'
testab

Some older seds may not handle that very well, though. Though sub-expression duplication is a POSIX-specified feature of BRE, some seds don't properly support it. In some of those, though...

$ echo abababtestab | sed 's/^\(ab\)\1*//'

...might work instead.
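Both variants can be sanity-checked in one go (GNU sed is assumed here for the backreference form):

```shell
# the grouped form strips every leading "ab"
printf '%s\n' abababtestab | sed 's/^\(ab\)*//'     # -> testab
# the backreference form, for seds that mishandle \(..\)*
printf '%s\n' abababtestab | sed 's/^\(ab\)\1*//'   # -> testab
# a line with no leading "ab" passes through untouched
printf '%s\n' testab | sed 's/^\(ab\)*//'           # -> testab
```

Note that the no-match case is harmless: \(ab\)* happily matches the empty string at the start of the line, so the substitution removes nothing.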
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/214613', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/122540/']}
jdg_79597
stackexchange
llm_judgeable_groundtruth_similarity
129675
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently developing a Spring-based web-application, in which almost every operation create/update/delete something in the database. The logic mostly about checking condition so as to we should create/update records in the database or not. The reason I want to try unit test is that we often meets regression errors when request changes or refactor. Most the bugs comes from Database changes, when I doesn't fully reflect those changes in the code. I have some experience in web-development now, but it seems that's not enough to stop them appear. My controller/service is actually not too complex. I just take the binding object submited from the HttpRequest, check the condition & record in DB. Sometimes the system must read the Database, take out some records & manipulate them, then update some other records.Much of the coding effort lies on the interface(HTML/CSS/Javascript) too. I'm investigating unit test, and I heard that when it comes to database operation, it's no longer unit test, since the test will be slow. Is that true? So that if my project is heavily database-operation, I shouldn't use unit test? I also heard about DBUnit & some in-memory database which can quicken the test. Should I use them? And how I can write good unit test for database operation? Now provide the response and nothing else.
You probably should have unit tests and integration tests. In the unit tests, you test, for example, your controllers/services in isolation. To achieve that, you may use a mocking framework such as EasyMock or Mockito to cut the dependency to the database. Your integration tests should go the whole way, from your controllers/services right down to the database. You can write your integration tests with the same basic framework as your unit tests (JUnit or TestNG, to name the most popular). I personally like DBUnit to prepopulate a database and verify the database state after the test. Whether you use an in-memory database IMO strongly depends on your persistence layer. If you use, for example, a JPA framework that generates its own queries, I like having an in-memory DB. If you write your own SQL queries, it can be more difficult to use another DB than your production target, as you cannot be sure that the syntax is equivalent, so an integration test against e.g. an in-memory H2 database does not necessarily mean that the syntax works with DB2. This becomes even more problematic if you use stored procedures. A good integration test itself runs in isolation, meaning that you should set up the database before each test to avoid dependencies among tests. You don't want the success of your tests to depend on the order they are executed in. After you perform your operations, you should use some kind of assertions to test whether the database state conforms to your expectations. DBUnit, for example, has its own asserts for comparing Datasets or Tables.
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/129675', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/9202/']}
jdg_79598
stackexchange
llm_judgeable_groundtruth_similarity
18775510
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to write a laravel function that gets lots of records (100,000+) from one database and puts it in another database. Towards that end, I need to query my database and see if the user already exists. I repeatedly call this code:

$users = User::where('id', '=', 2)->first();

And then after that happens a few hundred times, I run out of memory. So, I made a minimalist example of it using up all the available memory, and it looks like this:

<?php

use Illuminate\Console\Command;

class memoryleak extends Command
{
    protected $name = 'command:memoryleak';
    protected $description = 'Demonstrates memory leak.';

    public function fire()
    {
        ini_set("memory_limit", "12M");
        for ($i = 0; $i < 100000; $i++) {
            var_dump(memory_get_usage());
            $this->external_function();
        }
    }

    function external_function()
    {
        // Next line causes memory leak - comment out to compare to normal behavior
        $users = User::where('id', '=', 2)->first();
        unset($users); // User goes out of scope at the end of this function
    }
}

And the output of this script (executed by 'php artisan command:memoryleak') looks something like this:

int(9298696)
int(9299816)
int(9300936)
int(9302048)
int(9303224)
int(9304368)
....
int(10927344)
int(10928432)
int(10929560)
int(10930664)
int(10931752)
int(10932832)
int(10933936)
int(10935072)
int(10936184)
int(10937320)
....
int(12181872)
int(12182992)
int(12184080)
int(12185192)
int(12186312)
int(12187424)
PHP Fatal error: Allowed memory size of 12582912 bytes exhausted (tried to allocate 89 bytes) in /Volumes/Mac OS/www/test/vendor/laravel/framework/src/Illuminate/Database/Connection.php on line 275

If I comment out the line "$users = User::where('id', '=', 2)->first();" then the memory usage stays stable. Does anyone have any insight as to why this line would use memory like this, or know a smarter way to accomplish what I am trying to do? Thank you for your time.
Now provide the response and nothing else.
I recreated your script and stepped through it with a debugger because I couldn't fathom what sort of horrible thing would cause this type of memory issue. As I stepped through, I came across this:

// in Illuminate\Database\Connection
$this->queryLog[] = compact('query', 'bindings', 'time');

It seems every query you run in Laravel is stored in a persistent log, which explains your increasing memory usage after each query. Just above that is the following line:

if ( ! $this->loggingQueries) return;

A little more digging determined that the loggingQueries property is set to true by default, and can be changed via the disableQueryLog method. That means if you call:

DB::connection()->disableQueryLog();

before you're going to execute all your queries, you won't see ever-increasing memory usage; it solved the problem when I ran my test based on your example code. When you're done, if you don't want to affect the rest of the application, you could call DB::connection()->enableQueryLog(); to re-enable logging.
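The failure mode itself is language-independent: any client that appends every executed query to an in-memory log grows without bound until logging is switched off. A toy Python model of the behaviour (the class and method names are invented for illustration — this is not Laravel's actual API):

```python
class Connection:
    """Toy DB connection that mimics Laravel's query-log behaviour."""
    def __init__(self):
        self.query_log = []
        self.logging_queries = True  # logging is on by default

    def disable_query_log(self):
        self.logging_queries = False

    def run(self, query):
        if self.logging_queries:
            self.query_log.append(query)  # unbounded growth happens here
        return "row"

conn = Connection()
for _ in range(1000):
    conn.run("SELECT * FROM users WHERE id = 2")
print(len(conn.query_log))   # 1000 entries retained in memory

conn2 = Connection()
conn2.disable_query_log()
for _ in range(1000):
    conn2.run("SELECT * FROM users WHERE id = 2")
print(len(conn2.query_log))  # 0 -- memory stays flat
```

The `unset($users)` in the question never helped because the retained memory lived in the connection's log, not in the result objects.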
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18775510', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2774488/']}
jdg_79599
stackexchange
llm_judgeable_groundtruth_similarity
3761672
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: By default menu items become disabled when its command cannot be executed (CanExecute = false). What is the easiest way to make the menu item visible/collapsed based on the CanExecute method? Now provide the response and nothing else.
You can simply bind Visibility to IsEnabled (set to false on CanExecute == false). You still would need an IValueConverter to convert the bool to visible/collapsed.

public class BooleanToCollapsedVisibilityConverter : IValueConverter
{
    #region IValueConverter Members

    public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        // reverse conversion (false => Visible, true => Collapsed) on any given parameter
        bool input = (null == parameter) ? (bool)value : !((bool)value);
        return (input) ? Visibility.Visible : Visibility.Collapsed;
    }

    public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }

    #endregion
}
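The converter's branching is easy to sanity-check outside WPF. A small Python model of the same Convert logic — purely illustrative; in the real application this must live in a class implementing IValueConverter:

```python
VISIBLE, COLLAPSED = "Visible", "Collapsed"

def convert(value, parameter=None):
    # any non-None parameter reverses the mapping, as in the C# version above
    flag = value if parameter is None else not value
    return VISIBLE if flag else COLLAPSED

assert convert(True) == VISIBLE
assert convert(False) == COLLAPSED
assert convert(True, parameter="reverse") == COLLAPSED
assert convert(False, parameter="reverse") == VISIBLE
```

So with IsEnabled == False (CanExecute returned false) the item maps to Collapsed, and passing any converter parameter flips the mapping.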
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3761672', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/28029/']}
jdg_79600
stackexchange
llm_judgeable_groundtruth_similarity
80964
Below is a question asked on the forum arduino.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This all started with me wanting to use an Arduino to make a tachometer. This is all now working fine, but it got me wondering what the highest frequency digital signal that I could read? So I built a simple test sketch and have run this on both a Nano and an ESP32. The result was about 6300Hz on both. Anything above this gives chaotic results. This is somewhat lower than I expected. Has anybody done better than this? Now provide the response and nothing else.
As Majenko wrote, when you only rely on software to measure the frequency on an IO pin, it will always be rather slow. And it also depends on the code that you use. Functions like digitalRead() are always slower than directly working with the control registers of the pin ( Reference ). As you didn't show your code, we cannot say more about that.

If you need to measure higher frequencies or you want to do other things in between (like outputting the data somewhere), it is better to use the microcontroller's built-in hardware. There are multiple ways to measure the frequency. Two immediately come to my mind:

You could assign an ExternalInterrupt or a PinChangeInterrupt to the signal pin (limited choice of pins here), which triggers each time you see a rising (or falling) edge on the signal. In the ISR (Interrupt Service Routine) you could count up a variable. That value is the number of rising (or falling) edges, thus periods of the signal. The frequency is this value divided by the duration of the measurement. Every time you calculate the frequency, you reset the counter and read it again after a defined duration. This should be better than just polling, but it's not the best. (Note that a PinChangeInterrupt triggers on every edge, so you have to divide the frequency by 2.)

You could configure an unused Hardware Timer/Counter to use your signal as external clock source. Let it count up until it overflows (or up to a value defined by you) and let it trigger an interrupt. Since the Arduino framework already uses Timer0 for timekeeping, you could divide the max value of the Timer/Counter by the time passed and you will get the frequency. I think this can get you into the MHz range. (As with PinChangeInterrupt, the Counter will increment on every edge, so again a factor of 2 here.)

Both ways will calculate the frequency as a mean over a specific time. If you have a changing frequency, you might want to adjust the measurement interval. In the first case you can change the interval in which you read the counter variable. In the second case you can change the maximum value to which the Timer/Counter will count.

what the highest frequency digital signal that I could read?

(I assume you just want to read the frequency of the signal.) That depends on many factors. If we use the second case from above (configuring a Hardware Timer as Counter connected to the signal) we have a limit on the maximum frequency that we can provide. I found the relevant part in the datasheet of the ATmega328P (which is the microcontroller on the Nano):

Each half period of the external clock applied must be longer than one system clock cycle to ensure correct sampling. The external clock must be guaranteed to have less than half the system clock frequency (f_ExtClk < f_clk_I/O / 2) given a 50/50% duty cycle. Since the edge detector uses sampling, the maximum frequency of an external clock it can detect is half the sampling frequency (Nyquist sampling theorem). However, due to variation of the system clock frequency and duty cycle caused by oscillator source (crystal, resonator, and capacitors) tolerances, it is recommended that the maximum frequency of an external clock source is less than f_clk_I/O / 2.5.

So we should stay below the controller's clock frequency divided by 2.5, which would be 16 MHz / 2.5 = 6.4 MHz for the Nano. But before you reach that, your code might introduce further delays, which make the measurement less accurate. That depends on the implementation.
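The edge-counting idea in the first option is easy to prototype off-device. Here is a rough Python simulation — count rising edges of a sampled square wave over a window and divide by the window duration; the numbers are illustrative, this is not Arduino code:

```python
def estimate_frequency(samples, sample_rate):
    """Count rising edges (0 -> 1 transitions) and divide by window length."""
    edges = sum(1 for a, b in zip(samples, samples[1:]) if a == 0 and b == 1)
    duration = len(samples) / sample_rate  # seconds covered by the window
    return edges / duration

# build 1 second of a 50 Hz square wave sampled at 10 kHz
sample_rate = 10_000
signal_hz = 50
samples = [1 if (i * signal_hz * 2 // sample_rate) % 2 else 0
           for i in range(sample_rate)]

print(estimate_frequency(samples, sample_rate))  # 50.0
```

It also makes the limitation visible: the result is a mean over the whole window, so a shorter window reacts faster to frequency changes but with coarser resolution.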
{}
{'log_upvote_score': 4, 'links': ['https://arduino.stackexchange.com/questions/80964', 'https://arduino.stackexchange.com', 'https://arduino.stackexchange.com/users/70509/']}
jdg_79601
stackexchange
llm_judgeable_groundtruth_similarity
11935175
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is a clean way of taking a random sample, without replacement, from an array in JavaScript? So suppose there is an array

x = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]

and I want to randomly sample 5 unique values; i.e. generate a random subset of length 5. To generate one random sample one could do something like:

x[Math.floor(Math.random()*x.length)];

But if this is done multiple times, there is a risk of grabbing the same entry multiple times. Now provide the response and nothing else.
I suggest shuffling a copy of the array using the Fisher-Yates shuffle and taking a slice:

function getRandomSubarray(arr, size) {
    var shuffled = arr.slice(0), i = arr.length, temp, index;
    while (i--) {
        index = Math.floor((i + 1) * Math.random());
        temp = shuffled[index];
        shuffled[index] = shuffled[i];
        shuffled[i] = temp;
    }
    return shuffled.slice(0, size);
}

var x = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15];
var fiveRandomMembers = getRandomSubarray(x, 5);

Note that this will not be the most efficient method for getting a small random subset of a large array because it shuffles the whole array unnecessarily. For better performance you could do a partial shuffle instead:

function getRandomSubarray(arr, size) {
    var shuffled = arr.slice(0), i = arr.length, min = i - size, temp, index;
    while (i-- > min) {
        index = Math.floor((i + 1) * Math.random());
        temp = shuffled[index];
        shuffled[index] = shuffled[i];
        shuffled[i] = temp;
    }
    return shuffled.slice(min);
}
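The partial-shuffle trick ports directly to other languages. A Python sketch of the same algorithm with quick sanity checks (Python's stdlib already offers random.sample for this; the function below just mirrors the JavaScript version for illustration):

```python
import random

def get_random_subarray(arr, size):
    shuffled = list(arr)                        # copy; leave the input untouched
    n = len(shuffled)
    # only shuffle the tail of `size` elements that we are going to keep
    for i in range(n - 1, n - size - 1, -1):
        j = random.randint(0, i)                # pick from the not-yet-fixed prefix
        shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
    return shuffled[n - size:]

x = list(range(1, 16))
picked = get_random_subarray(x, 5)
assert len(picked) == 5
assert len(set(picked)) == 5      # no repeats: sampling without replacement
assert set(picked) <= set(x)      # every value drawn from the source
assert x == list(range(1, 16))    # original array untouched
```

Each kept slot gets a uniformly chosen element from the remaining prefix, which is exactly the Fisher-Yates invariant, so the resulting subset is uniform over all size-5 subsets.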
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11935175', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/318752/']}
jdg_79602
stackexchange
llm_judgeable_groundtruth_similarity
5163265
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an Excel workbook, which using VBA code that opens another workbook, copies some data into the original, then closes the second workbook. When I close the second workbook (using Application.Close ), I get a prompt for: Do you want to save the clipboard. Is there a command in VBA which will bypass this prompt? Now provide the response and nothing else.
I can offer two options

Direct copy

Based on your description I'm guessing you are doing something like

Set wb2 = Application.Workbooks.Open("YourFile.xls")
wb2.Sheets("YourSheet").[<YourRange>].Copy
ThisWorkbook.Sheets("SomeSheet").Paste
wb2.Close

If this is the case, you don't need to copy via the clipboard. This method copies from source to destination directly. No data in clipboard = no prompt

Set wb2 = Application.Workbooks.Open("YourFile.xls")
wb2.Sheets("YourSheet").[<YourRange>].Copy ThisWorkbook.Sheets("SomeSheet").Cells(<YourCell>)
wb2.Close

Suppress prompt

You can prevent all alert pop-ups by setting

Application.DisplayAlerts = False

[Edit] To copy values only: don't use copy/paste at all

Dim rSrc As Range
Dim rDst As Range
Set rSrc = wb2.Sheets("YourSheet").Range("YourRange")
Set rDst = ThisWorkbook.Sheets("SomeSheet").Cells("YourCell").Resize(rSrc.Rows.Count, rSrc.Columns.Count)
rDst = rSrc.Value
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5163265', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/62765/']}
jdg_79603
stackexchange
llm_judgeable_groundtruth_similarity
7086270
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a simple problem that I'm hoping to resolve without using VBA but if that's the only way it can be solved, so be it. I have a file with multiple rows (all one column). Each row has data that looks something like this:

1 7.82E-13 >gi|297848936|ref|XP_00| 4-hydroxide gi|297338191|gb|23343|randomrandom
2 5.09E-09 >gi|168010496|ref|xp_00| 2-pyruvate

etc... What I want is some way to extract the string of numbers that begin with "gi|" and end with a "|". For some rows this might mean as many as 5 gi numbers, for others it'll just be one. What I would hope the output would look like would be something like:

297848936,297338191
168010496

etc... Now provide the response and nothing else.
Here is a very flexible VBA answer using the regex object. What the function does is extract every single sub-group match it finds (stuff inside the parentheses), separated by whatever string you want (default is ", "). You can find info on regular expressions here: http://www.regular-expressions.info/ You would call it like this, assuming that first string is in A1:

=RegexExtract(A1,"gi[|](\d+)[|]")

Since this looks for all occurrences of "gi|" followed by a series of numbers and then another "|", for the first line in your question, this would give you this result:

297848936, 297338191

Just run this down the column and you're all done!

Function RegexExtract(ByVal text As String, _
                      ByVal extract_what As String, _
                      Optional separator As String = ", ") As String

    Dim allMatches As Object
    Dim RE As Object
    Set RE = CreateObject("vbscript.regexp")
    Dim i As Long, j As Long
    Dim result As String

    RE.pattern = extract_what
    RE.Global = True
    Set allMatches = RE.Execute(text)

    For i = 0 To allMatches.count - 1
        For j = 0 To allMatches.Item(i).submatches.count - 1
            result = result & (separator & allMatches.Item(i).submatches.Item(j))
        Next
    Next

    If Len(result) <> 0 Then
        result = Right$(result, Len(result) - Len(separator))
    End If

    RegexExtract = result

End Function
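If a scripting language is available outside Excel (for batch-processing the file, say), the same extraction is a few lines with regular expressions. For instance, a Python equivalent of the RegexExtract idea:

```python
import re

def regex_extract(text, pattern, separator=", "):
    # join every capture-group match, like the VBA function above
    return separator.join(m.group(1) for m in re.finditer(pattern, text))

row = "1 7.82E-13 >gi|297848936|ref|XP_00| 4-hydroxide gi|297338191|gb|23343|randomrandom"
print(regex_extract(row, r"gi\|(\d+)\|"))   # 297848936, 297338191
```

The pattern is the same idea as the VBA one: a literal "gi|", a captured run of digits, then a literal "|".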
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7086270', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/897675/']}
jdg_79604
stackexchange
llm_judgeable_groundtruth_similarity
2499746
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\lfloor \sqrt x +\sqrt{x+1}+\sqrt{x+2}\rfloor=x$$ I tried to solve this equation. First thing is $\lfloor \sqrt x +\sqrt{x+1}+\sqrt{x+2}\rfloor \in \mathbb{Z} $ so $x \in \mathbb{Z}$ second $$\sqrt x +\sqrt{x+1}+\sqrt{x+2} \geq \sqrt 0 +\sqrt{0+1}+\sqrt{0+2} \\\to x \in \mathbb{N}$$ so we can check $x=1,2,3,4,5,6,7,8,9,\ldots$ by a MATLAB program. I checked the natural numbers to find solution. I found $x=8,9$ worked here. Now my question is about somehow an analytical solving of the equation, or another idea. Can any one help me? Thanks in advance. Now provide the response and nothing else.
You can use inequalities to simplify your problem. Since $\lfloor x \rfloor \le x$. Therefore we've \begin{align}x&= \lfloor \sqrt x+\sqrt {x+1}+\sqrt{x+2} \rfloor \\&\le \sqrt x+\sqrt {x+1}+\sqrt{x+2} \\&\le 3\sqrt {x+2}\\\end{align} $$\implies x^2 \le 9(x+2) \; ; x \in \mathbb Z$$ This gives us the range $x \in [-1,10] \tag1$. Also $\lfloor x \rfloor \ge x-1$. Therefore we've \begin{align}x&= \lfloor \sqrt x+\sqrt {x+1}+\sqrt{x+2} \rfloor \\&\ge \sqrt x+\sqrt {x+1}+\sqrt{x+2} \color{red}{-1}\\&\ge 3\sqrt {x}-1\\\end{align} $$\implies x+1 \ge 3\sqrt x$$ $$\implies x^2-7x+1\ge 0 \, ; x \in \mathbb Z$$ This gives us $x \in (-\infty, 0]\cup [7,\infty)\tag2$ Taking intersection of $(1)$ and $(2)$, and taking care of domain, I.e. $x\ge 0$, we get $$\color{blue}{x \in \{0,7,8,9,10\}}$$ Now you can check for $x=0,7,8,9,10$, which is quite easy now.
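With the candidate set reduced to five integers, a brute-force check finishes the problem — e.g. in Python:

```python
from math import sqrt, floor

def f(x):
    # left-hand side of the equation
    return floor(sqrt(x) + sqrt(x + 1) + sqrt(x + 2))

solutions = [x for x in (0, 7, 8, 9, 10) if f(x) == x]
print(solutions)   # [8, 9]
```

This confirms the two solutions found numerically in the question, now backed by the inequality bounds above rather than an open-ended search.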
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2499746', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/104171/']}
jdg_79605
stackexchange
llm_judgeable_groundtruth_similarity
35113553
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can remove the last character from a string:

listfruit <- c("aapplea","bbananab","oranggeo")
gsub('.{1}$', '', listfruit)

But I am having problems trying to remove the first character from a string. And also the first and last character. I would be grateful for your help. Now provide the response and nothing else.
If we need to remove the first character, use sub , match one character ( . represents a single character), replace it with '' .

sub('.', '', listfruit)
#[1] "applea"  "bananab" "ranggeo"

Or for the first and last character, match the character at the start of the string ( ^. ) or the end of the string ( .$ ) and replace it with '' .

gsub('^.|.$', '', listfruit)
#[1] "apple"  "banana" "rangge"

We can also capture it as a group and replace with the backreference.

sub('^.(.*).$', '\\1', listfruit)
#[1] "apple"  "banana" "rangge"

Another option is with substr

substr(listfruit, 2, nchar(listfruit)-1)
#[1] "apple"  "banana" "rangge"
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35113553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1169210/']}
jdg_79606
stackexchange
llm_judgeable_groundtruth_similarity
289831
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to exclude a single IP address from a DHCP pool? For various reasons I have a machine that I can't reconfigure or move off the network (let's call it a printer). It has been hard coded with address x.x.x.50 and my DHCP pool is x.x.x.10 -> x.x.x.246 .I don't want to have two pool definitions for this one subnet just to cover the ranges on either side of this machine. Is there some form of 'exclude IP' argument/option? I'm using Ubuntu 11 and the isc-dhcp-server package. DISCLAIMER: I have googled and it only came up with IOS and Juniper configuration info. Now provide the response and nothing else.
Why don't you set the IP for that appliance explicitly:

# The standard subnet
subnet 10.0.0.0 netmask 255.255.255.0 {
    option domain-name "ourdomain.internal";
    option routers 10.0.0.1;
    option domain-name-servers 10.0.0.2;
    range 10.0.0.10 10.0.0.49;
    range 10.0.0.51 10.0.0.246;
}

# has hardcoded ip, and dhcp should not use that in pool
host fixedipappliance {
    hardware ethernet 10:4e:ed:c0:ff:ee;
    fixed-address 10.0.0.50;
}

I just successfully tested a configuration with two ranges like that. Multiple range statements are legal; as per the manpage there must be at least one range statement:

The range statement

range [ dynamic-bootp ] low-address [ high-address ];

For any subnet on which addresses will be assigned dynamically, there must be at least one range statement. The range statement gives the lowest and highest IP addresses in a range. All IP addresses in the range should be in the subnet in which the range statement is declared. The dynamic-bootp flag may be specified if addresses in the specified range may be dynamically assigned to BOOTP clients as well as DHCP clients. When specifying a single address, high-address can be omitted.
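Splitting a pool around an excluded address is mechanical, so if you generate dhcpd.conf from a script, a small helper can emit the two range statements for you. A stdlib-only Python sketch (the helper name is made up for this example):

```python
import ipaddress

def split_pool(low, high, excluded):
    """Return (low, high) pairs covering low..high minus the excluded IP."""
    lo = ipaddress.ip_address(low)
    hi = ipaddress.ip_address(high)
    ex = ipaddress.ip_address(excluded)
    if not (lo <= ex <= hi):
        return [(str(lo), str(hi))]        # exclusion outside the pool: keep as-is
    ranges = []
    if ex > lo:
        ranges.append((str(lo), str(ex - 1)))
    if ex < hi:
        ranges.append((str(ex + 1), str(hi)))
    return ranges

for r in split_pool("10.0.0.10", "10.0.0.246", "10.0.0.50"):
    print("range %s %s;" % r)
# range 10.0.0.10 10.0.0.49;
# range 10.0.0.51 10.0.0.246;
```

The output matches the two range lines in the configuration above, and the same helper generalizes if you ever need to carve out a different address.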
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/289831', 'https://serverfault.com', 'https://serverfault.com/users/67678/']}
jdg_79607
stackexchange
llm_judgeable_groundtruth_similarity
46051576
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having a few issues with sending ajax to a email.I need to output the error json as at the moment if i leave a field blank or incorrect it returns "Error: Unprocessable Entity"The bigger issue i'm having is it's not sending emails and returns a 500 error but if I was to configure it around a bit and only send the name it works fine. Mail.blade.phpThis is in a simple format <h1>{{ $name }}</h1><h1>{{ $message }}</h1><h1>{{ $email }}</h1> EmailController.php <?phpnamespace App\Http\Controllers;use Illuminate\Http\Request;use App\Http\Requests;use App\Http\Controllers\Controller;use Mail;class EmailController extends Controller { public function basic_email(Request $request){ $this->validate($request, [ 'name' => 'required', 'email' => 'required|email', 'message' => 'required' ]); ContactUS::create($request->all()); Mail::send('emails.mail', array( 'name' => $request->get('name'), 'email' => $request->get('email'), 'message' => $request->get('message') ), function($message) { $message->from('[email protected]'); $message->to('[email protected]', 'Admin')->subject('Feedback'); }); return response()->json('success', 'Thanks for contacting us!'); }} request.php <?phpclass ContactFormRequest extends Request{ /** * Determine if the user is authorized to make this request. * * @return bool */ public function authorize() { return true; } /** * Get the validation rules that apply to the request. 
* * @return array */ public function rules() { return [ 'name' => 'required', 'email' => 'required|email', 'message' => 'required', ]; }} Contact.Blade.PHP <html> <head> <title>Test</title> <script src = "https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"> </script> <meta name="csrf-token" content="{{ csrf_token() }}" /> <script> function getMessage(){ var CSRF_TOKEN = $('meta[name="csrf-token"]').attr('content'); var name = $('.namebox').val(); var email = $('.namebox2').val(); var message = $('.namebox3').val(); $.ajax({ type:'post', url:'/sendbasicemail', data: {_token: CSRF_TOKEN, name: name, email: email, message: message}, dataType: 'JSON', success:function(data){ alert("DONE"); console.log(data); }, error: function(XMLHttpRequest, textStatus, errorThrown, request) { alert("Status: " + textStatus); alert("Error: " + errorThrown); console.log(name, email, message); } }); } </script> </head><ul>@foreach($errors->all() as $error) <li>{{ $error }}</li>@endforeach</ul>{!! Form::open() !!}<div class="form-group {{ $errors->has('name') ? 'has-error' : '' }}">{!! Form::label('Name:') !!}{!! Form::text('name', '',['class'=>'namebox', 'placeholder'=>'Enter Name']) !!}<span class="text-danger">{{ $errors->first('name') }}</span></div><div class="form-group {{ $errors->has('email') ? 'has-error' : '' }}">{!! Form::label('Email:') !!}{!! Form::text('email', '',['class'=>'namebox2', 'placeholder'=>'Enter Email']) !!}<span class="text-danger">{{ $errors->first('email') }}</span></div><div class="form-group {{ $errors->has('message') ? 'has-error' : '' }}">{!! Form::label('Message:') !!}{!! Form::textarea('message', '',['class'=>'namebox3', 'placeholder'=>'Enter Message']) !!}<span class="text-danger">{{ $errors->first('message') }}</span></div>{!! Form::close() !!} <body> <div id = 'msg'>This message will be replaced using Ajax. 
Click the button to replace the message.</div> <?php echo Form::button('Replace',['onClick'=>'getMessage()']); ?> </body></html> web.php Route::get('/ajax',function(){ return view('pages.contact');}); Route::post('/sendbasicemail','EmailController@basic_email'); On a side note I have tried some outside so you may notice some differences Now provide the response and nothing else.
Why would you use SAM in Kotlin, when it has native support for functions? The SAM convention is used in Java 8 as a workaround for not having native function support. From the Kotlin doc#sam-conversions:

Note that SAM conversions only work for interfaces, not for abstract classes, even if those also have just a single abstract method. Also, note that this feature works only for Java interop; since Kotlin has proper function types, automatic conversion of functions into implementations of Kotlin interfaces is unnecessary and therefore unsupported.

You should then declare a function type directly:

fun setPositiveButton(resId: Int, listener: (DialogInterface, Int) -> Unit) {
    listener.invoke( /* DialogInterface, Int */ )
}

and then it can be used:

setPositiveButton(1, { _, _ -> doStuff() })

In Kotlin 1.4 you can use SAM conversions for Kotlin classes:

fun interface Listener {
    fun listen()
}

fun addListener(listener: Listener) = listener.listen()

fun main() {
    addListener { println("Hello!") }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46051576', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_79608
stackexchange
llm_judgeable_groundtruth_similarity
47561337
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: These are two dictionaries:

monkeydict = {'16:43': 1, '16:44': 1, '16:49': 3}
pigdict = {'16:41': 3, '16:44': 2, '16:51': 3}

This is the desired dataframe:

time,monkeydict,pigdict
16:41,,3
16:43,1,
16:44,1,2
16:49,3,
16:51,,3

Now provide the response and nothing else.
pd.DataFrame({'monkeydict': pd.Series(monkeydict), 'pigdict': pd.Series(pigdict)})

Out[879]:
       monkeydict  pigdict
16:41         NaN      3.0
16:43         1.0      NaN
16:44         1.0      2.0
16:49         3.0      NaN
16:51         NaN      3.0

Or (notice that here you need to rename your columns after creating the data frame)

pd.DataFrame([monkeydict, pigdict]).T

Out[887]:
         0    1
16:41  NaN  3.0
16:43  1.0  NaN
16:44  1.0  2.0
16:49  3.0  NaN
16:51  NaN  3.0
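What the DataFrame constructor is doing here is an outer alignment on the union of the keys. A dependency-free sketch of that alignment (plain Python, no pandas required — useful to see why the missing cells come out as NaN):

```python
monkeydict = {'16:43': 1, '16:44': 1, '16:49': 3}
pigdict = {'16:41': 3, '16:44': 2, '16:51': 3}

def align(*dicts):
    index = sorted(set().union(*dicts))          # union of all keys, sorted
    # None plays the role of NaN for keys missing from a dict
    return {k: tuple(d.get(k) for d in dicts) for k in index}

table = align(monkeydict, pigdict)
for time, row in table.items():
    print(time, row)
# 16:41 (None, 3)
# 16:43 (1, None)
# 16:44 (1, 2)
# 16:49 (3, None)
# 16:51 (None, 3)
```

pandas does the same key-union, fills the gaps with NaN, and (because NaN is a float) upcasts the columns to float, which is why the integer counts appear as 1.0, 2.0, 3.0 in the output above.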
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47561337', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2273626/']}
jdg_79609
stackexchange
llm_judgeable_groundtruth_similarity
25507878
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I float a div element without using jQuery? I need that because I want to float an element inside my website's right sidebar. I'm searching for a pure CSS or JavaScript solution with no external libraries. I want to clarify my question: I have (for example) 4 elements in my sidebar. When the user scrolls down the page and finds the last element, this element must be fixed and always visible. But if the user scrolls up and there are other elements in the sidebar, the square box is fixed no more. Now provide the response and nothing else.
The answer depends on the driver you're using. All MongoDB drivers I know have cursor.forEach() implemented one way or another. Here are some examples:

node-mongodb-native

collection.find(query).forEach(function(doc) {
  // handle
}, function(err) {
  // done or error
});

mongojs

db.collection.find(query).forEach(function(err, doc) {
  // handle
});

monk

collection.find(query, { stream: true })
  .each(function(doc){
    // handle doc
  })
  .error(function(err){
    // handle error
  })
  .success(function(){
    // final callback
  });

mongoose

collection.find(query).stream()
  .on('data', function(doc){
    // handle doc
  })
  .on('error', function(err){
    // handle error
  })
  .on('end', function(){
    // final callback
  });

Updating documents inside of .forEach callback

The only problem with updating documents inside of .forEach callback is that you have no idea when all documents are updated. To solve this problem you should use some asynchronous control flow solution. Here are some options:

async
promises ( when.js , bluebird )

Here is an example of using async , using its queue feature :

var q = async.queue(function (doc, callback) {
  // code for your update
  collection.update({ _id: doc._id }, { $set: {hi: 'there'} }, { w: 1 }, callback);
}, Infinity);

var cursor = collection.find(query);
cursor.each(function(err, doc) {
  if (err) throw err;
  if (doc) q.push(doc); // dispatching doc to async.queue
});

q.drain = function() {
  if (cursor.isClosed()) {
    console.log('all items have been processed');
    db.close();
  }
}
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/25507878', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1315016/']}
jdg_79610
stackexchange
llm_judgeable_groundtruth_similarity
207701
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: A "geometric" rough path is a rough path such that $Sym(\mathbb{X}_{s,t})=\frac{1}{2}X_{s,t}\otimes X_{s,t}$. For example the Ito rough path is not geometric because $Sym(\mathbb{X}_{s,t})=\frac{1}{2}X_{s,t}\otimes X_{s,t}-\frac{1}{2}I(t-s)$ but the Stratonovich rough path is geometric. Why the term "geometric" here? Is there some intuition I'm missing? When I think geometric, I think of geometric Brownian motion, ( http://en.wikipedia.org/wiki/Geometric_Brownian_motion ) or geometric series, something with an exponent. I don't see that here. Now provide the response and nothing else.
Geometric rough paths have the property that if you want to solve an equation with values in a manifold, choose a coordinate chart, and write in local coordinates$$dY^i = V_0^i(Y)\,dt + \sum_j V_j^i(Y)\,dX_j$$for some vector fields $V_i$ (with the obvious abuse of notation that the solution actually depends on the choice of $\mathbb{X}$, not just on $X$), then the solution does not depend on the choice of chart. I believe that this is the reason why this terminology was chosen. In other words, solutions to equations driven by "geometric" rough paths transform according to the usual chain rule, rather than some version of Itô's formula. This is also why people studying SDEs on manifolds tend to write everything in Stratonovich form, rather than in Itô form.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/207701', 'https://mathoverflow.net', 'https://mathoverflow.net/users/-1/']}
jdg_79611
stackexchange
llm_judgeable_groundtruth_similarity
94549
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Imagine you have a printed circuit board. I'd like to put a component at each corner of the PCB that would create air, something like a fan, but powerful enough to be able to make the PCB levitate. Ultimately, the component must be as small as possible, but for the beginning the size does not really matter. I'm not sure that I have to use a fan, because it might not be powerful enough. In fact, I really don't know what to use and whether such a component exists or not. Does somebody have a lead on that?
You can levitate a PCB in two ways: One, like a hovercraft, with fans. Two, like a maglev train, with magnetic repulsion. Neither solution is going to break any significant fall. Moreover, the first two proposed solutions require a vertical guide of some sort (else the unit will move horizontally and fall or tip over). If you are talking about a handheld device, it would typically be at least several inches from the floor, if the unit falls it will gain velocity at 9.8m/s^2, and within a few milliseconds no practical levitation technology will overcome its inertia. Likewise you won't be able to build in airbags. Your best shot at protecting the unit from fall damage is adequate cushioning. Try designing an enclosure that will cushion a raw egg dropped from a tall building and you'll be on the right track. If you need the unit to fall only on one side, you could attach it to a piece of buttered toast, or to the back of a cat.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/94549', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/32476/']}
jdg_79612
stackexchange
llm_judgeable_groundtruth_similarity
19708597
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to call a method using a SOAP request by using SOAPpy on Python 2.7. The method is called GetCursOnDate and returns exchange rates. It takes a date parameter. I'm using the following code: from SOAPpy import SOAPProxyimport datetimedate=datetime.datetime.now()namespace ="http://web.cbr.ru/"url = "http://www.cbr.ru/DailyInfoWebServ/DailyInfo.asmx"server = SOAPProxy(url,namespace)print (date)server.GetCursOnDate(date) But I get back an error: Fault soap:Client: Server did not recognize the value of HTTP Header SOAPAction: GetCursOnDate. Why do I get this error? Now provide the response and nothing else.
By default, SOAPpy uses the method name as the value of the HTTP SOAPAction header. If you run the following code you will see the value in the debug output:

from SOAPpy import SOAPProxy
from datetime import datetime

input = datetime.now()
namespace = "http://web.cbr.ru/"
url = "http://www.cbr.ru/DailyInfoWebServ/DailyInfo.asmx"
proxy = SOAPProxy(url, namespace)
proxy.config.debug = 1
proxy.GetCursOnDate(input)

The debug shows this:

*** Outgoing HTTP headers ***************************
POST /DailyInfoWebServ/DailyInfo.asmx HTTP/1.0
Host: www.cbr.ru
User-agent: SOAPpy 0.12.5 (http://pywebsvcs.sf.net)
Content-type: text/xml; charset=UTF-8
Content-length: 406
SOAPAction: "GetCursOnDate"
*****************************************************

But the service expects another value ( http://web.cbr.ru/GetCursOnDate ) that you can set on the proxy with an additional parameter. The following code clears the error:

from SOAPpy import SOAPProxy
from datetime import datetime

input = datetime.now()
namespace = "http://web.cbr.ru/"
url = "http://www.cbr.ru/DailyInfoWebServ/DailyInfo.asmx"
soapaction = "http://web.cbr.ru/GetCursOnDate"
proxy = SOAPProxy(url, namespace = namespace, soapaction = soapaction)
proxy.config.debug = 1
proxy.GetCursOnDate(input)

The debug will now show this:

*** Outgoing HTTP headers ***************************
POST /DailyInfoWebServ/DailyInfo.asmx HTTP/1.0
Host: www.cbr.ru
User-agent: SOAPpy 0.12.5 (http://pywebsvcs.sf.net)
Content-type: text/xml; charset=UTF-8
Content-length: 406
SOAPAction: "http://web.cbr.ru/GetCursOnDate"
*****************************************************

But although that specific fault is gone, the call won't work. Because you will return with questions I thought I'll spare us some message exchanges and write directly the sequel. I mentioned my disappointment with Python's SOAP support on another occasion . For this post I'm adding all details here as a reference to myself and hopefully as help for other users. So here it goes...
The call won't work because by default SOAPpy uses ordered parameters for the call. They are called v1 , v2 , v3 etc (see the MethodParameterNaming.txt file inside the SOAPpy download for more details). Your SOAP message will look like this:

<SOAP-ENV:Body>
  <ns1:GetCursOnDate xmlns:ns1="http://web.cbr.ru/" SOAP-ENC:root="1">
    <v1> </v1>
  </ns1:GetCursOnDate>
</SOAP-ENV:Body>

This particular web service is expecting a parameter named On_date , not v1 . You can try to fix it by using named parameters:

from SOAPpy import SOAPProxy
from datetime import datetime

input = datetime.now()
namespace = "http://web.cbr.ru/"
url = "http://www.cbr.ru/DailyInfoWebServ/DailyInfo.asmx"
soapaction = "http://web.cbr.ru/GetCursOnDate"
proxy = SOAPProxy(url, namespace = namespace, soapaction = soapaction)
proxy.config.debug = 1
proxy.GetCursOnDate(On_date = input)

Your message now looks like this:

<SOAP-ENV:Body>
  <ns1:GetCursOnDate xmlns:ns1="http://web.cbr.ru/" SOAP-ENC:root="1">
    <On_date> </On_date>
  </ns1:GetCursOnDate>
</SOAP-ENV:Body>

I think the value of the date is missing because the proxy has an issue with datetime objects. I didn't actually check to see what the issue is with that because there is another problem with this message: the web service expects <ns1:On_date> not <On_date> . This is where SOAPpy has some issues with namespaces. Using the original SOAPpy source code you can't change the namespaces. It seems that with most of Python's SOAP libraries you can only get your desired behavior by tweaking the code, which is what I did. I changed the SOAPBuilder.py file in some places where namespaces and tag prefixes were handled. See the original file here and the changed one here .
Those changes allow me to use a SOAPpy Type for a finer control over the message:

from SOAPpy import SOAPProxy
from SOAPpy import Types

namespace = "http://web.cbr.ru/"
url = "http://www.cbr.ru/DailyInfoWebServ/DailyInfo.asmx"
soapaction = "http://web.cbr.ru/GetCursOnDate"
input = Types.dateType(name = (namespace, "On_date"))
proxy = SOAPProxy(url, namespace = namespace, soapaction = soapaction)
proxy.config.debug = 1
proxy.GetCursOnDate(input)

Now I get the result I was looking for:

<SOAP-ENV:Body>
  <ns1:GetCursOnDate xmlns:ns1="http://web.cbr.ru/" SOAP-ENC:root="1">
    <ns1:On_date xsi:type="xsd:date">2013-11-02Z</ns1:On_date>
  </ns1:GetCursOnDate>
</SOAP-ENV:Body>

The server returns the data on the above request. But even the above code can be improved. Notice that I'm setting the SOAPAction on the proxy for one particular operation: GetCursOnDate . If I want to use it with another operation I need another proxy or I need to modify this one. By using a WSDL.Proxy you get this automatically from the WSDL (it provides a SOAPProxy wrapper that parses method names, namespaces and SOAPAction s from the WSDL of the web service). But even if this handles the SOAPAction automatically it doesn't pick up the namespace for the method. So I tweaked the WSDL.py file. Original version is here , changed file is here . The new client code now looks like this:

from SOAPpy import WSDL
from SOAPpy import Types

# you can download this and use it locally for better performance
wsdl = "http://www.cbr.ru/DailyInfoWebServ/DailyInfo.asmx?wsdl"
namespace = "http://web.cbr.ru/"
input = Types.dateType(name = (namespace, "On_date"))
proxy = WSDL.Proxy(wsdl, namespace = namespace)
proxy.soapproxy.config.debug = 1
proxy.GetCursOnDate(input)

For the examples above I used Python 2.6.6, SOAPpy 0.12.5, fpconst 0.7.2 and wstools 0.4.3. For others I think YMMV depending on the versions or the particular web service that you are calling.
In conclusion I also want to mention that if you do a search on Google you'll notice that most people recommend SUDS instead of SOAPpy as a SOAP client so maybe have a look at that too. Good luck!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19708597', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2083119/']}
jdg_79613
stackexchange
llm_judgeable_groundtruth_similarity
2060395
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Assuming you have a string containing the name of a method, an object that supports that method and some arguments, is there some language feature that allows you to call that dynamically? Kind of like Ruby's send parameter. Now provide the response and nothing else.
You can do this with reflection in Java:

class A {
  def cat(s1: String, s2: String) = s1 + " " + s2
}

val a = new A
val hi = "Hello"
val all = "World"
val method = a.getClass.getMethod("cat", hi.getClass, all.getClass)
method.invoke(a, hi, all)

And if you want it to be easy in Scala you can make a class that does this for you, plus an implicit for conversion:

case class Caller[T >: Null <: AnyRef](klass: T) {
  def call(methodName: String, args: AnyRef*): AnyRef = {
    def argtypes = args.map(_.getClass)
    def method = klass.getClass.getMethod(methodName, argtypes: _*)
    method.invoke(klass, args: _*)
  }
}

implicit def anyref2callable[T >: Null <: AnyRef](klass: T): Caller[T] = new Caller(klass)

a call ("cat", "Hi", "there")

Doing this sort of thing converts compile-time errors into runtime errors, however (i.e. it essentially circumvents the type system), so use with caution. (Edit: and see the use of NameTransformer in the link above--adding that will help if you try to use operators.)
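As an aside (my addition, not part of the Scala answer): the question compares to Ruby's send, and the same look-up-by-name dispatch is built into Python via getattr, which makes a useful contrast with the reflection boilerplate above:

```python
class A:
    def cat(self, s1, s2):
        return s1 + " " + s2

a = A()

# Look the method up by its string name at runtime, then call it.
method = getattr(a, "cat")
result = method("Hello", "World")

# A missing name can yield a default instead of raising AttributeError.
fallback = getattr(a, "dog", None)
```

As in the Scala version, this trades compile-time checking for runtime flexibility: a typo in the method name only surfaces when the line executes.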
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2060395', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/31610/']}
jdg_79614
stackexchange
llm_judgeable_groundtruth_similarity
45051379
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am fairly new to postgres and I'm struggling with a query I have to write to get all records between yesterday at 18:00 and today at 18:00. So far I've tried a predicate like this but it doesn't seem to be right because whilst the query runs I don't get the desired result... WHERE recon_vr.paymentDateTime >= CURRENT_DATE -1 + '18:00:00.000000'::timeAND recon_vr.paymentDateTime >= CURRENT_DATE + '18:00:00.000000'::time Could someone please help me with this? Also, I am running the query from an integration tool which I have found sometimes has issues with the :: syntax. So any solutions without that would be greatly appreciated. But if none exists then that's fine too. thanks in advance Now provide the response and nothing else.
Your query is right in general, but has a mistake – a wrong operator: you used >= twice instead of >= and <= . So the correct one is:

WHERE recon_vr.paymentDateTime >= CURRENT_DATE - 1 + '18:00:00.000000'::time
AND recon_vr.paymentDateTime <= CURRENT_DATE + '18:00:00.000000'::time

Also, you could use BETWEEN :

WHERE recon_vr.paymentDateTime BETWEEN CURRENT_DATE - 1 + '18:00:00.000000'::time
                                   AND CURRENT_DATE + '18:00:00.000000'::time

To avoid issues with the :: syntax you can write it like this:

WHERE recon_vr.paymentDateTime BETWEEN CURRENT_DATE - 1 + TIME '18:00'
                                   AND CURRENT_DATE + TIME '18:00'

...or like this:

WHERE recon_vr.paymentDateTime BETWEEN CURRENT_DATE - 1 + INTERVAL '18h'
                                   AND CURRENT_DATE + INTERVAL '18h'

BTW, using CamelStyle object names in Postgres can lead to some unpleasant issues – by default, all names are case-insensitive (so you can easily write recon_vr.paymentDateTime in your case), unless you've used double-quoting at creation time. If so, you'll need to use double quoting forever ( recon_vr."paymentDateTime" gives you a pure camel-styled column name, but you'll need to use double quotes to work with it, not that convenient).
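If it helps to sanity-check the boundaries, the same 24-hour window can be computed client-side. A sketch in Python (using the client machine's local date, which is an assumption — the query itself uses the server's CURRENT_DATE):

```python
from datetime import datetime, time, timedelta

# The window the query describes: yesterday 18:00 up to today 18:00.
end = datetime.combine(datetime.now().date(), time(18, 0))
start = end - timedelta(days=1)

print(start, "->", end)
```

Computing the endpoints in application code like this also makes it easy to pass them as bind parameters instead of building the interval arithmetic into the SQL.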
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45051379', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1600419/']}
jdg_79615
stackexchange
llm_judgeable_groundtruth_similarity
419967
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: It is discussed in this question whether a simply-connected closed Riemannian manifold with non-negative Ricci curvature admits positive Ricci curvature, and the answer appears to be "no, there are counter-examples and known obstructions". My question is this: are there any known examples of closed, simply-connected manifolds of non-negative sectional curvature that do not admit positive Ricci curvature? Are there known obstructions? Now provide the response and nothing else.
No, there are no such examples known. Most known examples of manifolds of nonnegative sectional curvature come from biquotients or cohomogeneity one manifolds. If these are simply connected they are known to admit metrics of positive Ricci curvature by a result of Schwachhoefer and Tuschmann . Also, by a result of Aubin mentioned in the thread you referenced, a manifold of nonnegative Ricci curvature with Ricci positive somewhere admits positive Ricci everywhere. In the simply connected case the only known obstruction to positive Ricci is the scalar curvature one (also mentioned in that thread). If a closed spin manifold has nonvanishing $\hat A$ genus then it does not admit a metric of positive scalar curvature and in particular also doesn't admit a metric of positive Ricci curvature. But simply connected manifolds of nonnegative sectional curvature have scalar curvature positive somewhere (else they'd be flat and not simply connected). Their scalar curvature is also nonnegative. This implies that the $\hat A$ genus is zero in the spin case.
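For context (my sketch, not part of the answer): the scalar-curvature obstruction quoted above comes from Lichnerowicz's argument. On a closed spin manifold the Dirac operator $D$ satisfies the Schrödinger–Lichnerowicz formula

```latex
\[
  D^2 \;=\; \nabla^{*}\nabla \;+\; \tfrac{1}{4}\,\mathrm{scal},
\]
```

so positive scalar curvature forces $\ker D = 0$ (no harmonic spinors), and the Atiyah–Singer index theorem then gives $\hat A(M) = \operatorname{ind}(D) = 0$.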
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/419967', 'https://mathoverflow.net', 'https://mathoverflow.net/users/170859/']}
jdg_79616
stackexchange
llm_judgeable_groundtruth_similarity
4877290
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: >>> class A(object): pass... >>> A.__dict__<dictproxy object at 0x173ef30>>>> A.__dict__.__dict__Traceback (most recent call last): File "<string>", line 1, in <fragment>AttributeError: 'dictproxy' object has no attribute '__dict__'>>> A.__dict__.copy(){'__dict__': <attribute '__dict__' of 'A' objects> ... }>>> A.__dict__['__dict__']<attribute '__dict__' of 'A' objects> # What is this object? If I do A.something = 10 , this goes into A.__dict__ . What is this <attribute '__dict__' of 'A' objects> found in A.__dict__.__dict__ , and when does it contain something? Now provide the response and nothing else.
First of all A.__dict__.__dict__ is different from A.__dict__['__dict__'] . The former doesn't exist and the latter is the __dict__ attribute that the instances of the class would have. It's a data descriptor object that returns the internal dictionary of attributes for the specific instance. In short, the __dict__ attribute of an object can't be stored in object's __dict__ , so it's accessed through a descriptor defined in the class. To understand this, you'd have to read the documentation of the descriptor protocol . The short version: For an instance a of a class A , access to a.__dict__ is provided by A.__dict__['__dict__'] which is the same as vars(A)['__dict__'] . For a class A , access to A.__dict__ is provided by type.__dict__['__dict__'] (in theory) which is the same as vars(type)['__dict__'] . The long version: Both classes and objects provide access to attributes both through the attribute operator (implemented via the class or metaclass's __getattribute__ ), and the __dict__ attribute/protocol which is used by vars(ob) . For normal objects, the __dict__ object creates a separate dict object, which stores the attributes, and __getattribute__ first tries to access it and get the attributes from there (before attempting to look for the attribute in the class by utilizing the descriptor protocol, and before calling __getattr__ ). The __dict__ descriptor on the class implements the access to this dictionary. a.name is equivalent to trying those in order: type(a).__dict__['name'].__get__(a, type(a)) (only if type(a).__dict__['name'] is a data descriptor), a.__dict__['name'] , type(a).__dict__['name'].__get__(a, type(a)) , type(a).__dict__['name'] . a.__dict__ does the same but skips the second step for obvious reasons. As it's impossible for the __dict__ of an instance to be stored in itself, it's accessed through the descriptor protocol directly instead and is stored in a special field in the instance. 
A similar scenario is true for classes, although their __dict__ is a special proxy object that pretends to be a dictionary (but might not be internally), and doesn't allow you to change it or replace it with another one. This proxy allows you, among all else, to access the attributes of a class that are specific to it, and not defined in one of its bases. By default, a vars(cls) of an empty class carries three descriptors: __dict__ for storing the attributes of the instances, __weakref__ which is used internally by weakref , and __doc__ the docstring of the class. The first two might be gone if you define __slots__ . Then you wouldn't have __dict__ and __weakref__ attributes, but instead you'd have a single class attribute for each slot. The attributes of the instance then wouldn't be stored in a dictionary, and access to them will be provided by the respective descriptors in the class. And lastly, the inconsistency that A.__dict__ is different from A.__dict__['__dict__'] is because the attribute __dict__ is, by exception, never looked up in vars(A) , so what is true for it isn't true for practically any other attribute you'd use. For example, A.__weakref__ is the same thing as A.__dict__['__weakref__'] . If this inconsistency didn't exist, using A.__dict__ would not work, and you'd have to always use vars(A) instead.
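The two cases described above can be checked directly in an interpreter. A small sketch (the class name is mine):

```python
class A(object):
    pass

a = A()
a.x = 10

# Instance case: a.__dict__ is reached through the '__dict__' data
# descriptor stored in the class's own dict.
descr = vars(A)['__dict__']          # <attribute '__dict__' of 'A' objects>
via_descr = descr.__get__(a, A)      # the actual per-instance dict

# Class case: the analogous descriptor lives on `type` and hands back
# a read-only mappingproxy over the class's attributes.
cls_proxy = vars(type)['__dict__'].__get__(A, type)
```

Note that the instance case returns the very same dict object each time, while the class case returns a proxy that refuses item assignment.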
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4877290', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/254704/']}
jdg_79617
stackexchange
llm_judgeable_groundtruth_similarity
21675171
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm newbie in PHP and would like to inquire about how can i reuse the data fetched from mysql into a list This is the code that im using : <form method="post" action="senddata.php"> Page Name :<select name="pages"id="list"> <option value="0">Choose</option> <?php $result = mysql_query("select * from menu where id>3",$connection); while($row = mysql_fetch_array($result)) { echo '<option value="'.$row['name'].'">'.$row['name'].'</option>'; } ?> </select> so when i will submit the form the selected element in the drop down list will be submitted too Now provide the response and nothing else.
To change only summary option, you can use: 'summary' => "{begin} - {end} {count} {totalCount} {page} {pageCount}", Then, if you want to empty summary set with empty string value like: 'summary'=> "", and to change layouts you can use: 'layout'=> "{summary}\n{items}\n{pager}" Then, if you want to empty layouts set layout with empty string value like: 'layout'=> "", Ref link Ref link So, for sample, i think bellow sample code can be help know how to change and custome GridView table in Yii 2: <?= \yii\grid\GridView::widget([ 'id' => 'table', 'dataProvider' => $dataProvider, 'layout'=>"{sorter}\n{pager}\n{summary}\n{items}", 'summary' => "Showing {begin} - {end} of {totalCount} items", 'tableOptions' => ['class' => 'table table-bordered table-hover'], 'rowOptions' => function ($model, $key, $index, $grid) { return [ 'style' => "cursor: pointer", 'onclick' => 'location.href="' . Yii::$app->urlManager->createUrl('test/index') . '?id="+(this.id);', ]; }, 'columns' => [ [ 'class' => 'yii\grid\SerialColumn', 'contentOptions' => ['style' => 'width: 20px;', 'class' => 'text-center'], ], [ 'class' => 'yii\grid\DataColumn', 'attribute' => 'date', 'headerOptions' => ['class' => 'text-center'], 'label' => 'Date', 'contentOptions' => ['style' => 'width: 130px;', 'class' => 'text-center'], ], 'template' => '{view}', 'buttons' => [ 'view' => function ($url, $model) { return \yii\helpers\Html::a('<div class="text-center"><em data-toggle="tooltip" data-placement="top" title="more detail" class="fa fa-external-link-square text-warning"></em></div>', (new yii\grid\ActionColumn())->createUrl('test/index', $model, $model['id'], 1), [ 'title' => Yii::t('yii', 'view'), 'data-method' => 'post', 'data-pjax' => '0', ]); }, ] ], ], ]); ?>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21675171', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_79618
stackexchange
llm_judgeable_groundtruth_similarity
1334613
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Directory is something like: home/ file1.html file2.htmlAnother_Dir/ file8.html Sub_Dir/ file19.html I am using the same PHP Zip class used in PHPMyAdmin http://trac.seagullproject.org/browser/branches/0.6-bugfix/lib/other/Zip.php . I'm not sure how to zip a directory rather than just a file. Here's what I have so far: $aFiles = $this->da->getDirTree($target);/* $aFiles is something like, path => filetimeArray( [home] => [home/file1.html] => 1251280379 [home/file2.html] => 1251280377 etc...)*/$zip = & new Zip();foreach( $aFiles as $fileLocation => $time ){ $file = $target . "/" . $fileLocation; if ( is_file($file) ){ $buffer = file_get_contents($file); $zip->addFile($buffer, $fileLocation); }}THEN_SOME_PHP_CLASS::toDownloadData($zip); // this bit works ok but when I try to unzip the corresponding downloaded zip file I get "operation not permitted" This error only happens when I try to unzip on my mac, when I unzip through the command line the file unzips ok. Do I need to send a specific content type on download, currently 'application/zip' Now provide the response and nothing else.
Here is a simple function that can compress any file or directory recursively, only needs the zip extension to be loaded.

function Zip($source, $destination)
{
    if (!extension_loaded('zip') || !file_exists($source)) {
        return false;
    }

    $zip = new ZipArchive();
    if (!$zip->open($destination, ZIPARCHIVE::CREATE)) {
        return false;
    }

    $source = str_replace('\\', '/', realpath($source));

    if (is_dir($source) === true)
    {
        $files = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($source), RecursiveIteratorIterator::SELF_FIRST);

        foreach ($files as $file)
        {
            $file = str_replace('\\', '/', $file);

            // Ignore "." and ".." folders
            if( in_array(substr($file, strrpos($file, '/')+1), array('.', '..')) )
                continue;

            $file = realpath($file);

            if (is_dir($file) === true)
            {
                $zip->addEmptyDir(str_replace($source . '/', '', $file . '/'));
            }
            else if (is_file($file) === true)
            {
                $zip->addFromString(str_replace($source . '/', '', $file), file_get_contents($file));
            }
        }
    }
    else if (is_file($source) === true)
    {
        $zip->addFromString(basename($source), file_get_contents($source));
    }

    return $zip->close();
}

Call it like this:

Zip('/folder/to/compress/', './compressed.zip');
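For comparison (my addition — the question is about PHP, but the recursive walk translates directly), the same idea in Python with the standard-library zipfile module:

```python
import os
import tempfile
import zipfile

def zip_dir(source, destination):
    """Recursively zip `source` (a file or a directory) into `destination`."""
    source = os.path.realpath(source)
    with zipfile.ZipFile(destination, "w", zipfile.ZIP_DEFLATED) as zf:
        if os.path.isdir(source):
            for root, _dirs, files in os.walk(source):
                for name in files:
                    path = os.path.join(root, name)
                    # Store entries relative to the source directory,
                    # mirroring the str_replace($source . '/', ...) step above.
                    zf.write(path, os.path.relpath(path, source))
        else:
            zf.write(source, os.path.basename(source))

# Tiny demo on a throwaway directory tree.
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "sub"))
with open(os.path.join(src, "a.txt"), "w") as f:
    f.write("hello")
with open(os.path.join(src, "sub", "b.txt"), "w") as f:
    f.write("world")

out = os.path.join(tempfile.mkdtemp(), "compressed.zip")
zip_dir(src, out)
names = sorted(zipfile.ZipFile(out).namelist())
```

Unlike the PHP version, os.walk never yields "." or "..", so no filtering step is needed.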
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1334613', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/31399/']}
jdg_79619
stackexchange
llm_judgeable_groundtruth_similarity
327089
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have been installing quite a number of software using yum with my virtual machine. I would like to find out if there is a way to obtain the rpm and dependencies rpm for offline install. This is because the server which I am going to perform the installation does not have internet access due to security reasons. Is there anyway I can specify the rpm to be downloaded as well ? Now provide the response and nothing else.
This comes from a blog post I wrote some time ago. You can have yum fetch all the packages and their dependencies on your VM, then create a repo from them which will then be used by your server. Note that the VM should have just the minimal packages required for it to run; since you don't know what's installed on the server, you want yum to download as many dependencies as possible. On the VM:

# mkdir /root/tmppkg
# yum --downloadonly --downloaddir=/root/tmppkg install <packages>
# chown -R root:root /root/tmppkg
# createrepo /root/tmppkg
# chmod -R 755 /root/tmppkg

Transfer the tmppkg directory on the server, let's say in /opt . Then create a file /etc/yum.repos.d/local.repo on the server as such:

[local]
name=Local repository
baseurl=file:///opt/tmppkg
enabled=1
gpgcheck=0
protect=1

Now run on the server:

# yum install <packages>

and the packages will be installed from the local repo. Once yum has finished, you can delete /etc/yum.repos.d/local.repo and /opt/tmppkg to save space.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/327089', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/121256/']}
jdg_79620
stackexchange
llm_judgeable_groundtruth_similarity
170079
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: How can I forward ports on a server running libvirt/KVM to specified ports on VM's, when using NAT? For example, the host has a public IP of 1.2.3.4. I want to forward port 80 to 10.0.0.1 and port 22 to 10.0.0.2. I assume I need to add iptables rules, but I'm not sure where is appropriate and what exactly should be specified. Output of iptables -L Chain INPUT (policy ACCEPT)target prot opt source destination ACCEPT udp -- anywhere anywhere udp dpt:domain ACCEPT tcp -- anywhere anywhere tcp dpt:domain ACCEPT udp -- anywhere anywhere udp dpt:bootps ACCEPT tcp -- anywhere anywhere tcp dpt:bootps Chain FORWARD (policy ACCEPT)target prot opt source destination ACCEPT all -- anywhere 10.0.0.0/24 state RELATED,ESTABLISHED ACCEPT all -- 10.0.0.0/24 anywhere ACCEPT all -- anywhere anywhere REJECT all -- anywhere anywhere reject-with icmp-port-unreachable REJECT all -- anywhere anywhere reject-with icmp-port-unreachable Chain OUTPUT (policy ACCEPT)target prot opt source destination Output of ifconfig eth0 Link encap:Ethernet HWaddr 00:1b:fc:46:73:b9 inet addr:192.168.1.14 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21b:fcff:fe46:73b9/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:201 errors:0 dropped:0 overruns:0 frame:0 TX packets:85 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:31161 (31.1 KB) TX bytes:12090 (12.0 KB) Interrupt:17 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)virbr1 Link encap:Ethernet HWaddr ca:70:d1:77:b2:48 inet addr:10.0.0.1 Bcast:10.0.0.255 Mask:255.255.255.0 inet6 addr: 
fe80::c870:d1ff:fe77:b248/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:468 (468.0 B) I'm using Ubuntu 10.04. Now provide the response and nothing else.
The latest stable release for libvirt for Ubuntu is version 0.7.5, which doesn't have some newer features (i.e. script hooks and network filters) which make automatic network configuration easier. That said, here's how to enable port forwarding for libvirt 0.7.5 on Ubuntu 10.04 Lucid Lynx. These iptables rules should do the trick:

iptables -t nat -I PREROUTING -p tcp -d 1.2.3.4 --dport 80 -j DNAT --to-destination 10.0.0.1:80
iptables -t nat -I PREROUTING -p tcp -d 1.2.3.4 --dport 22 -j DNAT --to-destination 10.0.0.2:22
iptables -I FORWARD -m state -d 10.0.0.0/24 --state NEW,RELATED,ESTABLISHED -j ACCEPT

The default KVM NAT config provides a rule similar to the 3rd I gave above, but it omits the NEW state, which is essential for accepting incoming connections. If you write a startup script to add these rules and you're not careful, libvirt 0.7.5 overrides them by inserting its own. So, in order to make sure these rules are applied properly on startup, you need to make sure libvirt has initialized before you insert your rules. Add the following lines to /etc/rc.local, before the line exit 0 :

(
# Make sure the libvirt has started and has initialized its network.
while [ `ps -e | grep -c libvirtd` -lt 1 ]; do
    sleep 1
done
sleep 10

# Set up custom iptables rules.
iptables -t nat -I PREROUTING -p tcp -d 1.2.3.4 --dport 80 -j DNAT --to-destination 10.0.0.1:80
iptables -t nat -I PREROUTING -p tcp -d 1.2.3.4 --dport 22 -j DNAT --to-destination 10.0.0.2:22
iptables -I FORWARD -m state -d 10.0.0.0/24 --state NEW,RELATED,ESTABLISHED -j ACCEPT
) &

The sleep 10 above is a hack to make sure the libvirt daemon has had a chance to initialize its iptables rules before we add our own. I can't wait until they release libvirt version 0.8.3 for Ubuntu.
{}
{'log_upvote_score': 7, 'links': ['https://serverfault.com/questions/170079', 'https://serverfault.com', 'https://serverfault.com/users/51072/']}
jdg_79621
stackexchange
llm_judgeable_groundtruth_similarity
55911745
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Assume I have a dictionary: d = {3: 'three', 2: 'two', 1: 'one'} I want to rearrange the order of this dictionary so that the dictionary is: d = {1: 'one', 2: 'two', 3: 'three'} I was thinking something like the reverse() function for lists, but that did not work. Thanks in advance for your answers! Now provide the response and nothing else.
Since Python 3.8 and above, the items view is iterable in reverse, so you can just do:

d = dict(reversed(d.items()))

On 3.7 and 3.6, they hadn't gotten around to implementing __reversed__ on dict and dict views ( issue33462: reversible dict ), so use an intermediate list or tuple , which do support reversed iteration:

d = {3: 'three', 2: 'two', 1: 'one'}
d = dict(reversed(list(d.items())))

Pre-3.6, you'd need collections.OrderedDict (both for the input and the output) to achieve the desired result. Plain dict s did not preserve any order until CPython 3.6 (as an implementation detail) and Python 3.7 (as a language guarantee).
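One caveat worth spelling out (my note): reversing only produces ascending keys here because the sample dict happens to be in descending insertion order. If the real goal is ascending key order regardless of how the dict was built, sort instead of reverse:

```python
d = {3: 'three', 2: 'two', 1: 'one'}

# Literal reversal of insertion order (Python 3.8+).
rev = dict(reversed(d.items()))

# Ascending key order, independent of insertion order.
by_key = dict(sorted(d.items()))
```

Both build a new dict whose insertion order is the desired iteration order.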
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55911745', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10001361/']}
jdg_79622
stackexchange
llm_judgeable_groundtruth_similarity
197285
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've seen that many online schematics for using ultracapacitors as power sources place the ultracaps in series in order to increase the voltage that they can be charged to, lowering the capacitance of the circuit drastically while preserving the total energy. For example, 5 pieces of 2.7v 100F capacitors might be placed in series, bringing the voltage rating up to about 13.5v and the capacitance down to 20F. Now, ultracaps can't be used as direct replacements for batteries since their voltage drops quickly. So they'd need a buck, boost, or buck-boost converter to utilize the full capacity. For this example, let's try to get 12 volts at 750mA. Would it be more efficient / get more energy out using a buck-boost converter with a 13.5v, 20F setup, or using a boost converter with a 2.7v, 600F setup, or a 5.4v, 300F setup, etc? Edit: Sorry, I meant for these setups to include balancing systems. Now provide the response and nothing else.
Well, unless some kind of charge-balancer is used, most types of energy storage devices in series (batteries, capacitors, etc.) will charge unevenly. Since a supercap may be rated +/- 10% in capacity, imagine what happens when strung together and charged in series: the one with the -10% capacitance charges first, and the voltage across it goes higher than the others. Most supercaps are pretty picky about maximum voltage, and violating this will shorten their life. Now charge equalization can be done on a series set, such as on KA7OEI's blog. They discuss a lithium-iron-phosphate pack that wasn't charging correctly and how they "fixed" it by utilizing a crowbar circuit for each cell. Any number of variants could be employed here. So it doesn't matter much how many are placed in series - any in series is taking a risk. As for the best type of converter, in general, one DC-DC converter is always going to be better than two, since there are always more losses with more components. So if possible, buck or boost are generally more efficient than SEPIC or other combinational converters simply because there is one of them. Using the capacitors in parallel with a boost converter would solve the issue of equalization and not require any charge balancing, so seems the simpler route. At first glance. Note that 12V * 0.75A = 9W of output power. Assuming 90% boost efficiency, current draw from the supercaps would be a minimum of (9W + 10% = 9.9W, so say 10W): P = E*I, 10W = 2.7V * I, I = 10W / 2.7V, I = 3.7A. Some supercaps cannot supply much current at all, or do so with little efficiency and much loss. So make sure the caps chosen can handle much more than this current. When the caps discharge, their voltage drops... so the power equation dictates that to get 9W out of the boost regulator at 10% capacitor voltage (0.27V): 10W = 0.27V * I, I = 10W / 0.27V, I = 37A! That is going to require some good boost circuit design. Also, whatever is used to charge the caps must be regulated to never go above 2.7V.
Now 600F sounds like a lot of capacitance and it is... but in terms of bulk energy density the supercaps may leave you disappointed. If constructed, you may eventually decide that a 12v lead-acid battery would last longer and cost far less. The self-discharge rate of such caps is fairly high, so they will not hold energy for years or even months. Of course if this is for a backup battery scenario that is normally powered and charging, then it would be more reasonable.
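The current-draw arithmetic above is easy to replay. This is a minimal sketch; the helper name is made up, and the 90% converter efficiency is the same assumption the answer uses, not a model of any particular converter:

```python
def boost_input_current(p_out_w, v_in, efficiency=0.9):
    """Current a boost converter must pull from its source to deliver p_out_w watts."""
    p_in = p_out_w / efficiency        # converter losses inflate the input power
    return p_in / v_in                 # P = E * I  ->  I = P / E

# 12 V * 0.75 A = 9 W delivered to the load
for v_cap in (2.7, 1.35, 0.27):        # full, half, and 10% of capacitor voltage
    amps = boost_input_current(9.0, v_cap)
    print(f"{v_cap:4.2f} V -> {amps:5.1f} A")
```

At 2.7 V the caps must supply about 3.7 A; at 10% of rated voltage the draw climbs to roughly 37 A, matching the figures in the answer.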
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/197285', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/36467/']}
jdg_79623
stackexchange
llm_judgeable_groundtruth_similarity
13034496
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm a bit confused about how global variables work. I have a large project, with around 50 files, and I need to define global variables for all those files. What I did was define them in my project's main.py file, as follows:

# ../myproject/main.py

# Define global myList
global myList
myList = []

# Imports
import subfile

# Do something
subfile.stuff()
print(myList[0])

I'm trying to use myList in subfile.py, as follows:

# ../myproject/subfile.py

# Save "hey" into myList
def stuff():
    globals()["myList"].append("hey")

Another way I tried, but which didn't work either:

# ../myproject/main.py

# Import globfile
import globfile

# Save myList into globfile
globfile.myList = []

# Import subfile
import subfile

# Do something
subfile.stuff()
print(globfile.myList[0])

And inside subfile.py I had this:

# ../myproject/subfile.py

# Import globfile
import globfile

# Save "hey" into myList
def stuff():
    globfile.myList.append("hey")

But again, it didn't work. How should I implement this? I understand that it cannot work like that, when the two files don't really know each other (well, subfile doesn't know main), but I can't think of how to do it, without using io writing or pickle, which I don't want to do. Now provide the response and nothing else.
The problem is you defined myList in main.py, but subfile.py needs to use it. Here is a clean way to solve this problem: move all globals to a file; I call this file settings.py. This file is responsible for defining globals and initializing them:

# settings.py

def init():
    global myList
    myList = []

Next, your subfile can import globals:

# subfile.py

import settings

def stuff():
    settings.myList.append('hey')

Note that subfile does not call init(); that task belongs to main.py:

# main.py

import settings
import subfile

settings.init()            # Call only once
subfile.stuff()            # Do stuff with global var
print(settings.myList[0])  # Check the result

This way, you achieve your objective while avoiding initializing global variables more than once.
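The three-file pattern can be exercised in one self-contained sketch by using a throwaway module object to stand in for settings.py; this is only an illustration of the same idea, with the module name chosen arbitrarily:

```python
import types

# Stand-in for settings.py: a module object holding the shared globals
settings = types.ModuleType("settings")

def init():
    # What settings.init() does: create the shared global on the module
    settings.myList = []

def stuff():
    # What subfile.stuff() does: mutate the shared global via the module
    settings.myList.append("hey")

init()                       # main.py calls init() exactly once
stuff()
print(settings.myList[0])    # hey
```

The key point is that mutating an attribute on a shared module object is visible to every importer, because they all hold a reference to the same module.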
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/13034496', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_79624
stackexchange
llm_judgeable_groundtruth_similarity
41961
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have an mBeat Hayman USB hub with integrated C-Media audio. Under Raspbian, the hub is recognised as a 4 port hub, but the C-Media audio device (which uses one of the ports) isn't recognised. The C-Media device ( 0d8c:0014 ) wasn't in the latest usb.ids file so I added it to my local copy (it is a CM108AH Audio Controller). There are several CM108 controllers in the usb.ids files with various other product codes. When I plug in the USB hub, the hub part gets recognised, but the C-Media device is not mentioned (I tailed /var/log/messages and used udevadm monitor). If I plug in another C-Media USB device to one of the free ports, I also get nothing. The same device is recognised OK when plugged directly into the Pi. I had thought that the Pi wasn't recognising the vendor:product combination, but seeing that an already-working audio device couldn't be seen through the hub makes me think there is something wrong with the way the hub is being connected/loaded. The hub and audio cards are recognised in both Windows and Debian. Here is a dump of plugging it into the Debian PC.

Feb 1 15:08:15 Taylor kernel: [ 66.027821] usb 1-1.1.3: new high-speed USB device number 13 using ehci-pci
Feb 1 15:08:15 Taylor kernel: [ 66.122962] usb 1-1.1.3: New USB device found, idVendor=2109, idProduct=2812
Feb 1 15:08:15 Taylor kernel: [ 66.122976] usb 1-1.1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Feb 1 15:08:15 Taylor kernel: [ 66.122979] usb 1-1.1.3: Product: USB2.0 Hub
Feb 1 15:08:15 Taylor kernel: [ 66.122981] usb 1-1.1.3: Manufacturer: VIA Labs, Inc.
Feb 1 15:08:15 Taylor kernel: [ 66.123394] hub 1-1.1.3:1.0: USB hub found
Feb 1 15:08:15 Taylor kernel: [ 66.123692] hub 1-1.1.3:1.0: 4 ports detected
Feb 1 15:08:16 Taylor kernel: [ 66.412118] usb 1-1.1.3.3: new full-speed USB device number 14 using ehci-pci
Feb 1 15:08:16 Taylor kernel: [ 66.518038] usb 1-1.1.3.3: New USB device found, idVendor=0d8c, idProduct=0014
Feb 1 15:08:16 Taylor kernel: [ 66.518051] usb 1-1.1.3.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Feb 1 15:08:16 Taylor kernel: [ 66.518054] usb 1-1.1.3.3: Product: USB Audio Device
Feb 1 15:08:16 Taylor kernel: [ 66.518056] usb 1-1.1.3.3: Manufacturer: C-Media Electronics Inc.
Feb 1 15:08:16 Taylor kernel: [ 66.535702] input: C-Media Electronics Inc. USB Audio Device as /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.1/1-1.1.3/1-1.1.3.3/1-1.1.3.3:1.3/0003:0D8C:0014.0005/input/input15
Feb 1 15:08:16 Taylor kernel: [ 66.535861] hid-generic 0003:0D8C:0014.0005: input,hidraw0: USB HID v1.00 Device [C-Media Electronics Inc. USB Audio Device] on usb-0000:00:1a.0-1.1.3.3/input3

This udevadm monitor output shows plugging in the hub/C-Media combo, and then plugging in the other C-Media device and a thumbdrive.
monitor will print the received events for:UDEV - the event which udev sends out after rule processingKERNEL - the kernel ueventKERNEL[7841.295251] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4 (usb)ACTION=addBUSNUM=001DEVNAME=/dev/bus/usb/001/071DEVNUM=071DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4DEVTYPE=usb_deviceMAJOR=189MINOR=70PRODUCT=2109/2812/9090SEQNUM=1210SUBSYSTEM=usbTYPE=9/0/1KERNEL[7841.299058] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4:1.0 (usb)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4:1.0DEVTYPE=usb_interfaceINTERFACE=9/0/0MODALIAS=usb:v2109p2812d9090dc09dsc00dp01ic09isc00ip00in00PRODUCT=2109/2812/9090SEQNUM=1211SUBSYSTEM=usbTYPE=9/0/1UDEV [7841.325060] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4 (usb)ACTION=addBUSNUM=001DEVNAME=/dev/bus/usb/001/071DEVNUM=071DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4DEVTYPE=usb_deviceID_BUS=usbID_FOR_SEAT=usb-platform-3f980000_usb-usb-0_1_4ID_MODEL=USB2.0_HubID_MODEL_ENC=USB2.0\x20Hub\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20ID_MODEL_FROM_DATABASE=SMC9514 HubID_MODEL_ID=2812ID_PATH=platform-3f980000.usb-usb-0:1.4ID_PATH_TAG=platform-3f980000_usb-usb-0_1_4ID_REVISION=9090ID_SERIAL=VIA_Labs__Inc._USB2.0_HubID_USB_INTERFACES=:090000:ID_VENDOR=VIA_Labs__Inc.ID_VENDOR_ENC=VIA\x20Labs\x2c\x20Inc.\x20\x20\x20\x20\x20\x20\x20\x20\x20ID_VENDOR_FROM_DATABASE=Standard Microsystems Corp.ID_VENDOR_ID=2109MAJOR=189MINOR=70PRODUCT=2109/2812/9090SEQNUM=1210SUBSYSTEM=usbTAGS=:seat:TYPE=9/0/1USEC_INITIALIZED=41294454UDEV [7841.331009] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4:1.0 (usb)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4:1.0DEVTYPE=usb_interfaceID_USB_CLASS_FROM_DATABASE=HubID_USB_PROTOCOL_FROM_DATABASE=Single 
TTINTERFACE=9/0/0MODALIAS=usb:v2109p2812d9090dc09dsc00dp01ic09isc00ip00in00PRODUCT=2109/2812/9090SEQNUM=1211SUBSYSTEM=usbTYPE=9/0/1USEC_INITIALIZED=296049KERNEL[7848.522824] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4 (usb)ACTION=addBUSNUM=001DEVNAME=/dev/bus/usb/001/076DEVNUM=076DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4DEVTYPE=usb_deviceMAJOR=189MINOR=75PRODUCT=781/5576/100SEQNUM=1212SUBSYSTEM=usbTYPE=0/0/0KERNEL[7848.523673] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0 (usb)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0DEVTYPE=usb_interfaceINTERFACE=8/6/80MODALIAS=usb:v0781p5576d0100dc00dsc00dp00ic08isc06ip50in00PRODUCT=781/5576/100SEQNUM=1213SUBSYSTEM=usbTYPE=0/0/0KERNEL[7848.525540] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3 (scsi)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3DEVTYPE=scsi_hostSEQNUM=1214SUBSYSTEM=scsiKERNEL[7848.525682] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/scsi_host/host3 (scsi_host)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/scsi_host/host3SEQNUM=1215SUBSYSTEM=scsi_hostUDEV [7848.536910] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4 (usb)ACTION=addBUSNUM=001DEVNAME=/dev/bus/usb/001/076DEVNUM=076DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4DEVTYPE=usb_deviceID_BUS=usbID_DRIVE_THUMB=1ID_MODEL=Cruzer_FacetID_MODEL_ENC=Cruzer\x20FacetID_MODEL_FROM_DATABASE=Cruzer FacetID_MODEL_ID=5576ID_REVISION=0100ID_SERIAL=SanDisk_Cruzer_Facet_4C530001091203122464ID_SERIAL_SHORT=4C530001091203122464ID_USB_INTERFACES=:080650:ID_VENDOR=SanDiskID_VENDOR_ENC=SanDiskID_VENDOR_FROM_DATABASE=SanDisk Corp.ID_VENDOR_ID=0781MAJOR=189MINOR=75PRODUCT=781/5576/100SEQNUM=1212SUBSYSTEM=usbTYPE=0/0/0USEC_INITIALIZED=48522861UDEV [7848.543908] add 
/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0 (usb)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0DEVTYPE=usb_interfaceID_MODEL_FROM_DATABASE=Cruzer FacetID_VENDOR_FROM_DATABASE=SanDisk Corp.INTERFACE=8/6/80MODALIAS=usb:v0781p5576d0100dc00dsc00dp00ic08isc06ip50in00PRODUCT=781/5576/100SEQNUM=1213SUBSYSTEM=usbTYPE=0/0/0USEC_INITIALIZED=524393UDEV [7848.548315] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3 (scsi)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3DEVTYPE=scsi_hostSEQNUM=1214SUBSYSTEM=scsiUSEC_INITIALIZED=524776UDEV [7848.553514] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/scsi_host/host3 (scsi_host)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/scsi_host/host3SEQNUM=1215SUBSYSTEM=scsi_hostUSEC_INITIALIZED=525083KERNEL[7849.522380] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0 (scsi)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0DEVTYPE=scsi_targetSEQNUM=1216SUBSYSTEM=scsiKERNEL[7849.522510] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0 (scsi)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0DEVTYPE=scsi_deviceMODALIAS=scsi:t-0x00SEQNUM=1217SUBSYSTEM=scsiKERNEL[7849.522609] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/scsi_disk/3:0:0:0 (scsi_disk)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/scsi_disk/3:0:0:0SEQNUM=1218SUBSYSTEM=scsi_diskKERNEL[7849.523393] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/scsi_device/3:0:0:0 
(scsi_device)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/scsi_device/3:0:0:0SEQNUM=1219SUBSYSTEM=scsi_deviceKERNEL[7849.524049] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/scsi_generic/sg0 (scsi_generic)ACTION=addDEVNAME=/dev/sg0DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/scsi_generic/sg0MAJOR=21MINOR=0SEQNUM=1220SUBSYSTEM=scsi_genericKERNEL[7849.524624] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/bsg/3:0:0:0 (bsg)ACTION=addDEVNAME=/dev/bsg/3:0:0:0DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/bsg/3:0:0:0MAJOR=252MINOR=0SEQNUM=1221SUBSYSTEM=bsgKERNEL[7849.525789] add /devices/virtual/bdi/8:0 (bdi)ACTION=addDEVPATH=/devices/virtual/bdi/8:0SEQNUM=1222SUBSYSTEM=bdiUDEV [7849.525900] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0 (scsi)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0DEVTYPE=scsi_targetSEQNUM=1216SUBSYSTEM=scsiUSEC_INITIALIZED=521757UDEV [7849.531389] add /devices/virtual/bdi/8:0 (bdi)ACTION=addDEVPATH=/devices/virtual/bdi/8:0SEQNUM=1222SUBSYSTEM=bdiUSEC_INITIALIZED=49526882UDEV [7849.534675] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0 (scsi)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0DEVTYPE=scsi_deviceMODALIAS=scsi:t-0x00SEQNUM=1217SUBSYSTEM=scsiUSEC_INITIALIZED=522047UDEV [7849.540904] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/scsi_device/3:0:0:0 
(scsi_device)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/scsi_device/3:0:0:0SEQNUM=1219SUBSYSTEM=scsi_deviceUSEC_INITIALIZED=526424UDEV [7849.542865] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/scsi_disk/3:0:0:0 (scsi_disk)ACTION=addDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/scsi_disk/3:0:0:0SEQNUM=1218SUBSYSTEM=scsi_diskUSEC_INITIALIZED=526160KERNEL[7849.544836] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/block/sda (block)ACTION=addDEVNAME=/dev/sdaDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/block/sdaDEVTYPE=diskMAJOR=8MINOR=0SEQNUM=1223SUBSYSTEM=blockKERNEL[7849.544983] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/block/sda/sda1 (block)ACTION=addDEVNAME=/dev/sda1DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/block/sda/sda1DEVTYPE=partitionMAJOR=8MINOR=1SEQNUM=1224SUBSYSTEM=blockUDEV [7849.546656] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/scsi_generic/sg0 (scsi_generic)ACTION=addDEVNAME=/dev/sg0DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/scsi_generic/sg0MAJOR=21MINOR=0SEQNUM=1220SUBSYSTEM=scsi_genericUSEC_INITIALIZED=49526596UDEV [7849.552143] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/bsg/3:0:0:0 (bsg)ACTION=addDEVNAME=/dev/bsg/3:0:0:0DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/bsg/3:0:0:0MAJOR=252MINOR=0SEQNUM=1221SUBSYSTEM=bsgUSEC_INITIALIZED=49526757UDEV [7849.655860] add 
/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/block/sda (block)ACTION=addDEVLINKS=/dev/disk/by-id/usb-SanDisk_Cruzer_Facet_4C530001091203122464-0:0 /dev/disk/by-path/platform-3f980000.usb-usb-0:1.4.4:1.0-scsi-0:0:0:0DEVNAME=/dev/sdaDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/block/sdaDEVTYPE=diskID_BUS=usbID_DRIVE_THUMB=1ID_INSTANCE=0:0ID_MODEL=Cruzer_FacetID_MODEL_ENC=Cruzer\x20Facet\x20\x20\x20\x20ID_MODEL_ID=5576ID_PART_TABLE_TYPE=dosID_PATH=platform-3f980000.usb-usb-0:1.4.4:1.0-scsi-0:0:0:0ID_PATH_TAG=platform-3f980000_usb-usb-0_1_4_4_1_0-scsi-0_0_0_0ID_REVISION=1.00ID_SERIAL=SanDisk_Cruzer_Facet_4C530001091203122464-0:0ID_SERIAL_SHORT=4C530001091203122464ID_TYPE=diskID_USB_DRIVER=usb-storageID_USB_INTERFACES=:080650:ID_USB_INTERFACE_NUM=00ID_VENDOR=SanDiskID_VENDOR_ENC=SanDisk\x20ID_VENDOR_ID=0781MAJOR=8MINOR=0SEQNUM=1223SUBSYSTEM=blockTAGS=:systemd:UDISKS_PARTITION_TABLE=1UDISKS_PARTITION_TABLE_COUNT=1UDISKS_PARTITION_TABLE_SCHEME=mbrUDISKS_PRESENTATION_NOPOLICY=0USEC_INITIALIZED=544893UDEV [7849.791752] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/block/sda/sda1 (block)ACTION=addDEVLINKS=/dev/disk/by-id/usb-SanDisk_Cruzer_Facet_4C530001091203122464-0:0-part1 /dev/disk/by-path/platform-3f980000.usb-usb-0:1.4.4:1.0-scsi-0:0:0:0-part1 
/dev/disk/by-uuid/F040-35E6DEVNAME=/dev/sda1DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/block/sda/sda1DEVTYPE=partitionID_BUS=usbID_DRIVE_THUMB=1ID_FS_TYPE=vfatID_FS_USAGE=filesystemID_FS_UUID=F040-35E6ID_FS_UUID_ENC=F040-35E6ID_FS_VERSION=FAT32ID_INSTANCE=0:0ID_MODEL=Cruzer_FacetID_MODEL_ENC=Cruzer\x20Facet\x20\x20\x20\x20ID_MODEL_ID=5576ID_PART_ENTRY_DISK=8:0ID_PART_ENTRY_NUMBER=1ID_PART_ENTRY_OFFSET=32ID_PART_ENTRY_SCHEME=dosID_PART_ENTRY_SIZE=15630304ID_PART_ENTRY_TYPE=0xbID_PART_TABLE_TYPE=dosID_PATH=platform-3f980000.usb-usb-0:1.4.4:1.0-scsi-0:0:0:0ID_PATH_TAG=platform-3f980000_usb-usb-0_1_4_4_1_0-scsi-0_0_0_0ID_REVISION=1.00ID_SERIAL=SanDisk_Cruzer_Facet_4C530001091203122464-0:0ID_SERIAL_SHORT=4C530001091203122464ID_TYPE=diskID_USB_DRIVER=usb-storageID_USB_INTERFACES=:080650:ID_USB_INTERFACE_NUM=00ID_VENDOR=SanDiskID_VENDOR_ENC=SanDisk\x20ID_VENDOR_ID=0781MAJOR=8MINOR=1SEQNUM=1224SUBSYSTEM=blockTAGS=:systemd:UDISKS_PARTITION=1UDISKS_PARTITION_ALIGNMENT_OFFSET=0UDISKS_PARTITION_NUMBER=1UDISKS_PARTITION_OFFSET=16384UDISKS_PARTITION_SCHEME=mbrUDISKS_PARTITION_SIZE=8002715648UDISKS_PARTITION_SLAVE=/sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.4/1-1.4.4/1-1.4.4:1.0/host3/target3:0:0/3:0:0:0/block/sdaUDISKS_PARTITION_TYPE=0x0bUDISKS_PRESENTATION_NOPOLICY=0USEC_INITIALIZED=545181 I didn't get a udev monitor of doing the same on the Debian PC though. Is the hub not being loaded properly? It looks like just audio cards aren't being detected through it (although I have only tried a thumbdrive). The hub's USB ID ( 2109:2812 ) wasn't in my original usb.ids file, so perhaps it is a new model? Does ehci-pci have anything to do with it? The Debian machine reports new full-speed USB device number 14 using ehci-pci Raspbian reports new full-speed USB device number 83 using dwc_otg Any help greatly appreciated. Now provide the response and nothing else.
After a lot of research I could fix my problem with usbmount:

Adding ntfs to usbmount

1. Install usbmount with sudo apt-get install usbmount.
2. Install the NTFS driver package ntfs-3g with sudo apt-get install ntfs-3g.
3. Configure usbmount to mount the specified filesystems by opening its config with sudo nano /etc/usbmount/usbmount.conf. In here there is a line called FILESYSTEMS="". Only filesystems specified in this line are mounted via usbmount, so we change it to:

FILESYSTEMS="vfat ntfs fuseblk ext2 ext3 ext4 hfsplus"

If you want usbmount to mount NTFS filesystems, be sure to add ntfs and fuseblk to the line. NTFS devices are sometimes listed as fuseblk by the ntfs-3g package, so those two are the same. I don't really know why, though. Also, if you don't add fuseblk in the configuration, unplugged NTFS devices will not automatically unmount. They will stay mounted and keep the folder they are mounted to until the device is manually unmounted with umount /dev/sda1 (where sda1 could be another device node). The next important line is FS_MOUNTOPTIONS="". Here you specify which filesystems should be mounted and how they should be mounted. We change it to:

FS_MOUNTOPTIONS="-fstype=ntfs-3g,nls=utf8,umask=007,gid=46 -fstype=fuseblk,nls=utf8,umask=007,gid=46 -fstype=vfat,gid=1000,uid=1000,umask=007"

With this, the filesystems vfat (FAT32), ntfs-3g (NTFS), and fuseblk (NTFS again) are mounted. I think most parameters can stay the same. Again, add ntfs-3g here (ntfs-3g, not ntfs) and fuseblk to be able to mount all NTFS files. For more filesystems add more entries starting with -fstype=.

Keep NTFS mounted until the device is unplugged

Here is a solution for the problem where the mounted NTFS drive is only accessible for a few seconds. This fix comes from Christian Weinberger. Create the file usbmount.rules in /etc/udev/rules.d/ with sudo nano /etc/udev/rules.d/usbmount.rules.
This is the content:

KERNEL=="sd*", DRIVERS=="sbp2", ACTION=="add", PROGRAM="/bin/systemd-escape -p --template=usbmount@.service $env{DEVNAME}", ENV{SYSTEMD_WANTS}+="%c"
KERNEL=="sd*", SUBSYSTEMS=="usb", ACTION=="add", PROGRAM="/bin/systemd-escape -p --template=usbmount@.service $env{DEVNAME}", ENV{SYSTEMD_WANTS}+="%c"
KERNEL=="ub*", SUBSYSTEMS=="usb", ACTION=="add", PROGRAM="/bin/systemd-escape -p --template=usbmount@.service $env{DEVNAME}", ENV{SYSTEMD_WANTS}+="%c"
KERNEL=="sd*", ACTION=="remove", RUN+="/usr/share/usbmount/usbmount remove"
KERNEL=="ub*", ACTION=="remove", RUN+="/usr/share/usbmount/usbmount remove"

Create the file usbmount@.service in /etc/systemd/system/ with sudo nano /etc/systemd/system/usbmount@.service. This is the content:

[Unit]
BindTo=%i.device
After=%i.device

[Service]
Type=oneshot
TimeoutStartSec=0
Environment=DEVNAME=%I
ExecStart=/usr/share/usbmount/usbmount add
RemainAfterExit=yes

Now reboot and check with cat /etc/mtab which folder the USB devices are mounted to. By default they are mounted to /media/usbstick0.

Source: Christian's fix; Why fuseblk?
{}
{'log_upvote_score': 5, 'links': ['https://raspberrypi.stackexchange.com/questions/41961', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/40817/']}
jdg_79625
stackexchange
llm_judgeable_groundtruth_similarity
2629366
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have used JMS in the past to build applications and it works great. Now I work with architects that would love to use the spec: SOAP over Java Message Service 1.0. This spec seems overly complicated. I do not see many implementations (besides the vendors pushing for the spec). Is anyone here using this specification in a production environment? What is your main benefit of using this spec? Link: http://www.w3.org/TR/2009/CR-soapjms-20090604/ Now provide the response and nothing else.
I had the bad luck of using SOAP over JMS. It does make some sense if it is used for fire-and-forget operations (no response message defined in the WSDL). In this case you can use the WSDL to generate client skeletons, and you can store the WSDL in your service registry. Plus you get all the usual benefits of JMS (decoupling sender and receiver, load balancing, prioritising, security, bridging to multiple destinations - e.g. non-intrusive auditing). On the other hand, SOAP is mainly used for request/reply type operations. Implementing the request/reply pattern over JMS introduces the following problems:

- It is impossible to handle timeouts properly. You never know if a request is still waiting for delivery or got stuck in the called component.
- Responses are typically sent on temporary queues. If the client disconnects before receiving the response and there is no explicit time-to-live set on the response message, the temp queue can get stuck in the JMS server until you restart it.
- Having a JMS server in the middle dramatically increases roundtrip times and adds unnecessary complexity.
- JMS provides a reliable transport medium that decouples the sender from the receiver, but in the case of request/reply the client should not be decoupled from the server. The client needs to know if the server is up and available.

The only advantage I could think of is that the server can be moved or load-balanced without the client knowing about it, but using UDDI and an HTTP load balancer is a better solution.
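The timeout ambiguity can be illustrated without a real JMS broker. This is a minimal sketch using in-process queues as stand-ins for the request destination and the temporary reply queue; the names (slow_server, corr_id) are made up for illustration only:

```python
import queue
import threading
import time

requests = queue.Queue()
replies = queue.Queue()

def slow_server():
    corr_id, body = requests.get()       # the request WAS delivered...
    time.sleep(0.2)                      # ...but the server is busy, not dead
    replies.put((corr_id, body.upper()))

threading.Thread(target=slow_server, daemon=True).start()
requests.put(("req-1", "ping"))

try:
    reply = replies.get(timeout=0.05)    # client-side timeout fires first
except queue.Empty:
    reply = None  # timed out: undelivered, or stuck in the server? No way to tell.

late = replies.get(timeout=2.0)          # the "orphaned" reply still shows up later
print(reply, late)
```

When the client gives up, the reply has not vanished; it lands on the reply queue anyway, which is exactly how orphaned messages accumulate on temporary queues in the broker.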
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2629366', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/247224/']}
jdg_79626
stackexchange
llm_judgeable_groundtruth_similarity
32468097
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to get the userId when the user authenticates at login, so that I can store it and use it later to gather more information about the user by its ID. Here is my login bean:

public String login() {
    try {
        Authentication request = new UsernamePasswordAuthenticationToken(this.getUsername(), this.getPassword());
        Authentication result = authenticationManager.authenticate(request);
        SecurityContextHolder.getContext().setAuthentication(result);
        sessionMap.put("UsernameOnLogin", this.getUsername());
    } catch (AuthenticationException e) {
        e.printStackTrace();
        sessionMap.clear();
        return "error.xhtml";
    }
    return "i.xhtml";
}

And the service:

public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
    empsuite.model.UserData domainUser = userloginDAO.getUsername(username);
    boolean enabled = true;
    boolean accountNonExpired = true;
    boolean credentialsNonExpired = true;
    boolean accountNonLocked = true;
    return new User(
            domainUser.getUsername(),
            domainUser.getPassword(),
            enabled,
            accountNonExpired,
            credentialsNonExpired,
            accountNonLocked,
            getAuthorities(1));
}

And finally the DAO function to get the username to perform a login:

public UserData getUsername(String username) {
    List<UserData> userList = new ArrayList<UserData>();
    Query query = openSession().createQuery("from UserData u where u.username = :Username");
    query.setParameter("Username", username);
    userList = query.list();
    if (userList.size() > 0)
        return userList.get(0);
    else
        return null;
}

Edit: User model:

public class UserData implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    int iduser;
    String username;
    String password;
    int accountstatus;

    // Profile OLD
    String nomprofile;
    String prenprofile;
    String mailprofile;
    String adressprofile;
    int phoneprofile;
    Date datenaissanceprofile;
    char sexeuser;
    String imagepath;

    public int getIduser() { return iduser; }
    public void setIduser(int iduser) { this.iduser = iduser; }
    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }
    public int getAccountstatus() { return accountstatus; }
    public void setAccountstatus(int accountstatus) { this.accountstatus = accountstatus; }
    public String getNomprofile() { return nomprofile; }
    public void setNomprofile(String nomprofile) { this.nomprofile = nomprofile; }
    public String getPrenprofile() { return prenprofile; }
    public void setPrenprofile(String prenprofile) { this.prenprofile = prenprofile; }
    public String getMailprofile() { return mailprofile; }
    public void setMailprofile(String mailprofile) { this.mailprofile = mailprofile; }
    public String getAdressprofile() { return adressprofile; }
    public void setAdressprofile(String adressprofile) { this.adressprofile = adressprofile; }
    public int getPhoneprofile() { return phoneprofile; }
    public void setPhoneprofile(int phoneprofile) { this.phoneprofile = phoneprofile; }
    public Date getDatenaissanceprofile() { return datenaissanceprofile; }
    public void setDatenaissanceprofile(Date datenaissanceprofile) { this.datenaissanceprofile = datenaissanceprofile; }
    public char getSexeuser() { return sexeuser; }
    public void setSexeuser(char sexeuser) { this.sexeuser = sexeuser; }
    public String getImagepath() { return imagepath; }
    public void setImagepath(String imagepath) { this.imagepath = imagepath; }
}

Now provide the response and nothing else.
SecurityContextHolder.getContext().setAuthentication(result); will put the authentication object in the SecurityContext, which itself is maintained in the session if the application is a web application. Instead of storing the username in the session you can retrieve the Authentication object using the following code.

SecurityContext securityContext = SecurityContextHolder.getContext();
Object principal;
String username;
if (null != securityContext.getAuthentication()) {
    principal = securityContext.getAuthentication().getPrincipal();
    username = securityContext.getAuthentication().getName();
}

The value of username will be the username used in authentication. The value of principal will be the principal object. Many of the authentication providers will create a UserDetails object as the principal. Update: If you want to store additional information you can extend org.springframework.security.core.userdetails.User and have the additional information as properties of that class.

import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.userdetails.User;

import java.util.Collection;

public class CustomUser extends User {

    private int id;

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public CustomUser(String username, String password, boolean enabled,
            boolean accountNonExpired, boolean credentialsNonExpired,
            boolean accountNonLocked,
            Collection<? extends GrantedAuthority> authorities, int id) {
        super(username, password, enabled, accountNonExpired,
                credentialsNonExpired, accountNonLocked, authorities);
        setId(id);
    }
}

And in loadUserByUsername return a CustomUser instead of a User.
public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
    empsuite.model.UserData domainUser = userloginDAO.getUsername(username);
    boolean enabled = true;
    boolean accountNonExpired = true;
    boolean credentialsNonExpired = true;
    boolean accountNonLocked = true;
    return new CustomUser(
            domainUser.getUsername(),
            domainUser.getPassword(),
            enabled,
            accountNonExpired,
            credentialsNonExpired,
            accountNonLocked,
            getAuthorities(1),
            domainUser.getIduser());
}

Now securityContext.getAuthentication().getPrincipal() will return the CustomUser object, so you can get the ID with ((CustomUser) securityContext.getAuthentication().getPrincipal()).getId():

SecurityContext securityContext = SecurityContextHolder.getContext();
CustomUser user = null;
if (null != securityContext.getAuthentication()) {
    user = (CustomUser) securityContext.getAuthentication().getPrincipal();
}
int id = user.getId();
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32468097', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5001254/']}
jdg_79627
stackexchange
llm_judgeable_groundtruth_similarity
645111
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I have a sine wave signal for a duration of only a few seconds, the Fourier transform will show me that this signal corresponds to a range of frequencies. Why is this the case? I do understand that every signal is composed of sine waves, even this sine wave pulse, but I don't get the intuition behind that for this case. Even if my sine wave is not infinitely long, I should still be able to measure the distance of 2 peaks as precisely as I want. Therefore, I should also be able to calculate the frequency as precisely as I want to. Where is the uncertainty? Why would the frequency of a longer signal be less uncertain? Now provide the response and nothing else.
The issue is that a sine wave on a finite interval is not the same function as a pure sine wave, and so the Fourier transforms will be different. Note that in the rest of this answer, I'm going to assume by "Fourier transform" you mean "Fourier transform over the entire real line," and not "a Fourier series over a finite region of the real line." But I will mention just for completeness that if you have a sine wave with period $T$ and you do a Fourier series over a finite interval of length $kT$ for some positive integer $k$ , then in fact you would find that the only non-zero Fourier coefficient is the one corresponding to the sine wave of period $T$ . Anyway back to your question. The Fourier transform $\tilde{f}(\omega)$ of $f(t)$ is \begin{equation}\tilde{f}(\omega) = \int_{-\infty}^\infty {\rm d} t e^{-i \omega t}f(t)\end{equation} If we take $f(t) = e^{i \Omega t}$ for $t_1 \leq t \leq t_2$ , and $0$ otherwise, then \begin{equation}\tilde{f}(\omega) = \int_{t_1}^{t_2} {\rm d} t e^{i (\Omega-\omega) t} = \frac{e^{i (\Omega-\omega) t_2} - e^{i(\Omega-\omega) t_1}}{i (\Omega-\omega) }\end{equation} There are a few useful special cases to know. One is when $\Omega=0$ -- then $f(t)$ describes one pulse of a square wave. Taking $t_2=-t_1=T/2$ for simplicity, the Fourier transform is \begin{equation}\tilde{f}(\omega) = \frac{2 \sin (\omega T/2)}{\omega}\end{equation} This characteristic behavior $|\tilde{f}(\omega)|\sim 1/\omega$ is common to "sharp edges" in the time domain signal, which excite Fourier modes of arbitrarily large frequencies. The slow falloff $\sim 1/\omega$ can cause many issues in dealing with sharp edges in time domain signals, when transforming into the frequency domain, in practical applications. Another interesting limit is $\omega \rightarrow \Omega$ . In fact the Fourier transform is equal to $T$ , and diverges in the limit of infinite times! This is precisely the frequency of the truncated sine wave.
We can understand the $T\rightarrow \infty$ limit by proceeding carefully. For simplicity let's assume $t_2=T/2>0$ and $t_1=-t_2=-T/2$, and refer to $\tilde{f}_T(\omega)$ as the Fourier transform of the truncated sine wave, with the duration of the window being $T$. Then \begin{eqnarray}\tilde{f}_T(\omega) &=& \int_{-T/2}^{T/2} {\rm d} t e^{ -i (\omega-\Omega) t } \\&=& \int_{-T/2}^0 {\rm d} t e^{ - i (\omega - \Omega ) t } + \int_{0}^{T/2} {\rm d} t e^{- i (\omega - \Omega )t} \\&=& \frac{\exp\left( \frac{i T}{2}\left(\omega - \Omega \right) \right) - 1 }{i(\omega - \Omega )} + \frac{1 - \exp\left( \frac{-i T}{2}\left(\omega - \Omega \right) \right)}{i(\omega - \Omega )} \\&=& \frac{2 \sin \bigl((\omega - \Omega) T/2\bigr)}{\omega - \Omega}\end{eqnarray} Now we can take the limit $T\rightarrow \infty$, using the delta function representation \begin{equation}\lim_{\epsilon\rightarrow 0} \frac{\sin (x/\epsilon)}{\pi x} = \delta(x)\end{equation} where $\delta(x)$ is a Dirac delta function. This yields \begin{equation}\lim_{T \rightarrow \infty} \tilde{f}_T(\omega) = 2\pi \delta(\omega-\Omega)\end{equation} In the limit, the function $2\sin\bigl((\omega-\Omega) T/2\bigr)/(\omega-\Omega)$ becomes $2\pi\,\delta(\omega-\Omega)$. This corresponds to your intuition that the Fourier transform should be dominated by the frequency of the truncated part of the sine wave. Note: Thanks to @nanoman, who pointed out an error in an earlier version of this post.
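As a quick numerical sanity check of the closed form above (the values of $\Omega$ and $T$ here are arbitrary choices, not from the derivation), one can integrate the truncated exponential directly and compare:

```python
import numpy as np

# For a truncated e^{i Omega t} on [-T/2, T/2], the transform should be
# 2*sin((omega - Omega)*T/2)/(omega - Omega), with value exactly T at omega = Omega.
Omega = 5.0  # arbitrary frequency of the truncated wave
T = 8.0      # arbitrary window duration
N = 200001
t = np.linspace(-T / 2, T / 2, N)
dt = t[1] - t[0]

def ft(omega):
    # trapezoidal approximation of the integral of exp(-i*(omega - Omega)*t)
    vals = np.exp(-1j * (omega - Omega) * t)
    return dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

def analytic(omega):
    d = omega - Omega
    return 2.0 * np.sin(d * T / 2.0) / d

# Peak value grows with the window length: f~_T(Omega) = T
assert abs(ft(Omega) - T) < 1e-6

# Away from the peak, numerics match the closed form
for w in (3.0, 4.5, 6.2, 9.0):
    assert abs(ft(w) - analytic(w)) < 1e-6
```

The width of the central peak shrinks like $1/T$, so a longer window really does pin down the frequency more sharply.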
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/645111', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/304220/']}
jdg_79628
stackexchange
llm_judgeable_groundtruth_similarity
25015808
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: for Java Kepler Eclipse and Jtable, I am trying to make it so as when a specific table cell is selected, that cell will work as an editorPane; or have the whole column work as editorPane. When I click a cell on column COMMENTS it enlargens the row but I cant get it to work as an editorPane. My project is actualy very different but I wrote this mini one with the table so you can copy, paste and run it to see exactly what the problem is when you click on a COMMENTS cell. I tried to make the column an editorPane to begin with like I made the column DONE with checkBox, but it doesnt work or I am doing it wrong. I also tried cellRenderer but I couldnt make that work either. Whether the whole column works as an editorPane or just the selected cell it doesnt matter, whatever is easier and as long as it works import javax.swing.*;import javax.swing.table.*; import java.awt.*;public class JavaTestOne { JFrame frmApp; private JTable table; private JCheckBox checkbox; DefaultTableModel model; public static void main(String[] args) { EventQueue.invokeLater(new Runnable() { public void run() { try { JavaTestOne window = new JavaTestOne(); window.frmApp.setVisible(true); } catch (Exception e) { e.printStackTrace(); } } }); } public JavaTestOne() { initialize(); } private void initialize() { frmApp = new JFrame(); frmApp.getContentPane().setFont(new Font("Tahoma", Font.PLAIN, 13)); frmApp.setBounds(50, 10, 1050, 650); frmApp.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frmApp.getContentPane().setLayout(new CardLayout(0, 0)); frmApp.setTitle("App"); { JScrollPane scrollPane = new JScrollPane(); scrollPane.setBounds(0, 42, 984, 484); frmApp.add(scrollPane); { table = new JTable(); table.setFillsViewportHeight(true); Object[][] data = { {"I01", "Tom",new Boolean(false), ""}, {"I02", "Jerry",new Boolean(false), ""}, {"I03", "Ann",new 
Boolean(false), ""}}; String[] cols = {"ID","NAME","DONE","COMMENTS"}; model = new DefaultTableModel(data, cols) { private static final long serialVersionUID = -7158928637468625935L; public Class getColumnClass(int column) { return getValueAt(0, column).getClass(); } }; table.setModel(model); table.setRowHeight(20); table.addMouseListener(new java.awt.event.MouseAdapter() { public void mouseClicked(java.awt.event.MouseEvent evt) { int row = table.rowAtPoint(evt.getPoint()); int col = table.columnAtPoint(evt.getPoint()); table.setRowHeight(20); if(col==3){ table.setRowHeight(row, 100); //this is where I need it to work as an editorPane if it is only for the selected cell } } }); table.putClientProperty("terminateEditOnFocusLost", Boolean.TRUE); scrollPane.setViewportView(table); checkbox = new JCheckBox("OK"); checkbox.setHorizontalAlignment(SwingConstants.CENTER); checkbox.setBounds(360, 63, 97, 23); } } }} Now provide the response and nothing else.
If you want to do ${...} variable substitution, you need to use double quotes (") instead of single quotes ('). Double quotes denote GStrings, which can do that, but strings with single quotes are always interpreted literally. So:

repositories {
    mavenCentral()
    maven {
        url "${HOME}/.m2/repository/"
    }
}

Make sure you put this block in your module's build file, not the buildscript block of the top-level build file; the latter only determines where to find the Android plugin, not your module-specific dependencies. Because of the misquoted string, it's probably not seeing your local repository. Also, the other question about the plugin implicitly adding the Maven repo in your SDK is also correct. This will pick up Android support dependencies. Gradle Single vs Double Quotes has a little more information and a link you can follow.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25015808', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3674661/']}
jdg_79629
stackexchange
llm_judgeable_groundtruth_similarity
13211557
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to exit from inside a try block:

function myfunc
{
    try {
        # Some things
        if(condition) {
            'I want to go to the end of the function'
        }
        # Some other things
    }
    catch {
        'Whoop!'
    }
    # Other statements here
    return $whatever
}

I tested with a break, but this doesn't work. It breaks the upper loop if any calling code is inside a loop. Now provide the response and nothing else.
An extra scriptblock around try/catch and the return in it may do this:

function myfunc($condition){
    # Extra script block, use `return` to exit from it
    .{
        try {
            'some things'
            if($condition) {
                return
            }
            'some other things'
        }
        catch {
            'Whoop!'
        }
    }
    'End of try/catch'
}

# It gets 'some other things' done
myfunc

# It skips 'some other things'
myfunc $true
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13211557', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1364608/']}
jdg_79630
stackexchange
llm_judgeable_groundtruth_similarity
11918290
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a div with width:100%, and I need the height of the same div to adjust according to how many pixels wide the page is. The reason is because an image is stretched across full width of the div, and if I set a solid height, different resolutions will render it differently and the image looks stretched out. Is there a css solution to this? Now provide the response and nothing else.
Bingo! Something was obviously wrong with the way the ssh-agent is run in that .bashrc. I copied the one from here and it works a treat! Now I only have to enter my passphrase once when git bash starts up, and any subsequent push no longer needs it. Here's the actual content of the script now:

SSH_ENV=$HOME/.ssh/environment

function start_agent {
    echo "Initialising new SSH agent..."
    /usr/bin/ssh-agent | sed 's/^echo/#echo/' > ${SSH_ENV}
    echo succeeded
    chmod 600 ${SSH_ENV}
    . ${SSH_ENV} > /dev/null
    /usr/bin/ssh-add;
}

# Source SSH settings, if applicable
if [ -f "${SSH_ENV}" ]; then
    . ${SSH_ENV} > /dev/null
    # ps ${SSH_AGENT_PID} doesn't work under cygwin
    ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
        start_agent;
    }
else
    start_agent;
fi
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11918290', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1572447/']}
jdg_79631
stackexchange
llm_judgeable_groundtruth_similarity
314885
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the following statement: Hadron Epoch, from $10^{-6}$ seconds to $1$ second: The temperature of the universe cools to about a trillion degrees, cool enough to allow quarks to combine to form hadrons (like protons and neutrons). What does it mean to say "from $10^{-6}$ seconds to $1$ second" ? How is time being measured? One particle might feel just $10^{-20}\ \mathrm s$ having passed and another could feel $10^{-10}\ \mathrm s$ having passed. Is saying "1 second after the big bang" a meaningful statement? Now provide the response and nothing else.
We know that time passes differently for different observers, and the question is how can a time be given without telling which frame it is in. The answer is that there's a preferred reference frame in cosmology, the comoving frame , because of the fact that there's matter and radiation in it. Intuitively, the special frame is the one that's "static" with respect to this matter and radiation content. More precisely, it is the one in which all observers that see an isotropic universe are static. Time measured in this system is called comoving time. The time from the beginning of the universe is usually given in this way, as a comoving time. To get some intuition about the comoving frame one might consider the comoving observers, the ones that see isotropy and therefore have constant comoving coordinates. A comoving observer is such that when it looks around and adds the motion of the objects it sees zero net motion. For example, we can look at the cosmic microwave background and detect some variation in the redshift depending on the direction. It's caused by Doppler effect and it means that we have some velocity relative to the comoving frame. On the other hand, a comoving observer sees the same redshift in any direction. Another example: we can choose to measure the distances and velocities of galaxies. By Hubble's law , we expect the velocity to be proportional to the distance. If we find a deviation from this behavior, we know that the galaxy is moving with respect to the comoving frame, and thus has a peculiar velocity (we also have a peculiar velocity). If all galaxies had constant comoving coordinates, we would see perfect agreement with Hubble's law: the relative motions of galaxies would be due only to the expansion of the universe.
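To put a number on that last test (an illustrative sketch only; the figures below are hypothetical, not from the answer), Hubble's law says a comoving galaxy recedes at $v = H_0 d$, so any residual after subtracting the Hubble flow is peculiar velocity relative to the comoving frame:

```python
# Illustrative sketch of the Hubble-law test described above.
# All numeric values are hypothetical round numbers.

H0 = 70.0  # Hubble constant in km/s/Mpc (approximate value)

def peculiar_velocity(distance_mpc, measured_velocity_kms):
    """Residual velocity after subtracting the Hubble flow v = H0 * d."""
    hubble_flow = H0 * distance_mpc
    return measured_velocity_kms - hubble_flow

# A galaxy at 100 Mpc receding at exactly H0*d is comoving:
assert peculiar_velocity(100.0, 7000.0) == 0.0

# One receding 300 km/s faster than the Hubble flow is moving
# with respect to the comoving frame:
assert peculiar_velocity(100.0, 7300.0) == 300.0
```

The same subtraction is what the CMB dipole measurement does in spirit: it reveals our own motion relative to the comoving frame.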
{}
{'log_upvote_score': 7, 'links': ['https://physics.stackexchange.com/questions/314885', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/60891/']}
jdg_79632
stackexchange
llm_judgeable_groundtruth_similarity
42809
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $k \ge 4$ be an even integer, and let $d$ be the dimension of the space $M_k(\operatorname{SL}_2(\mathbb{Z}))$ of modular forms of level 1 and weight $k$. Then the space of Hecke operators acting on $M_k$ also has dimension $d$. Is it spanned by $T_1, \dots, T_d$? Equivalently (more explicitly but also more messily): if $f \in M_k(\operatorname{SL}_2(\mathbb{Z}))$ satisfies $a_i(f) = 0$ for $1 \le i \le d$, where $a_i(f)$ are the $q$-expansion coefficients of $f$, with no assumption on $a_0(f)$, then is it necessarily true that $f = 0$? (Edit: See also this follow-up question which asks a related question for modular forms of higher level.) Now provide the response and nothing else.
The answer is yes when $k$ is a multiple of $4$. There is a unique form of weight $k$ of the form $f_k=1+a_dq^d+\cdots$. When $k$ is a multiple of $4$ this is the theta series for a putative extremal even unimodular lattice of rank $2k$. Theorem 20 in chapter 7 of Conway and Sloane's Sphere Packings, Lattices and Groups asserts that $a_d>0$. They give several references for the proof, including a 1969 paper of Siegel.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/42809', 'https://mathoverflow.net', 'https://mathoverflow.net/users/2481/']}
jdg_79633
stackexchange
llm_judgeable_groundtruth_similarity
436886
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: We have Nagios Core 3.2.1 installed and my current task is to allow non-admin users access to the Nagios UI. Is that even possible? Or would I need to install some add-on? Now provide the response and nothing else.
Is it possible to create "view only" users? Sure, you can do it by using the authorized_for_read_only variable in /usr/local/nagios/etc/cgi.cfg:

# A comma-delimited list of usernames that have read-only rights in
# the CGIs. This will block any service or host commands normally shown
# on the extinfo CGI pages. It will also block comments from being shown
# to read-only users.
authorized_for_read_only=anonymous

And let them view only specific services/servers? Create a contact definition with the same contact_name in /usr/local/nagios/etc/objects/contacts.cfg:

define contact{
    contact_name                    anonymous
    use                             generic-contact
    alias                           No Name
    service_notifications_enabled   1
    host_notifications_enabled      1
    service_notification_period     24x7
    host_notification_period        24x7
    service_notification_options    c,w,r
    host_notification_options       d,u,r
    email                           [email protected]
}

(that's called an authenticated contact) then make it the contact of the hosts/services which you want to allow them to view:

define host{
    use             linux-server
    host_name       localhost
    address         127.0.0.1
    contacts        anonymous
    contact_groups  admin
}
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/436886', 'https://serverfault.com', 'https://serverfault.com/users/109833/']}
jdg_79634
stackexchange
llm_judgeable_groundtruth_similarity
387159
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have a Windows 2003 Server box with Excel 2010 installed, upon which I am unable to create new or execute existing macros in spreadsheets. This machine is in a Windows Domain (and I am not getting much help from the network folks here.) I can and have edited the macro security levels within Excel (File, Options, Trust Center, Macro Settings --> Enable All Macros) but despite this, when I try to: open a spreadsheet with macros, I get an exception "Excel found unreadable content...." which goes on to refer to the VBA macros within the spreadsheet create a new macro, all the toolbar buttons providing access to the VBA editor are disabled . (This is in a brand new spreadsheet created on the machine) Could this be a Group Policy setting that allows me to edit the macro security settings , but overrides these settings and prevents me from actually creating, editing, or executing VBA macros? I've run rsop.msc (Resultant Set of Policy) on the machine and looked through every single thing under both Computer Configuration and User CConfiguration, but I don't see anything related to Excel (or Office) Security/Macros. Now provide the response and nothing else.
Yes, there is. The audit subsystem has some pretty neat accounting features. Running the following command will audit changes to the file:

auditctl -w /my/specificly/modified/file.txt -p w -k "suspect file change"

This will set up a watch on this file; whenever it is modified by a write, the change will be logged, and logged quite extensively. You can check the logs doing:

ausearch -i -k "suspect file change"

This will return output such as:

type=PATH msg=audit(05/08/2012 17:32:32.353:13118) : item=1 name=/tmp/test.txt inode=5767528 dev=fd:00 mode=file,644 ouid=root ogid=root rdev=00:00 obj=staff_u:object_r:user_tmp_t:s0
type=PATH msg=audit(05/08/2012 17:32:32.353:13118) : item=0 name=/tmp/ inode=5767169 dev=fd:00 mode=dir,sticky,777 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:tmp_t:s0
type=CWD msg=audit(05/08/2012 17:32:32.353:13118) : cwd=/home/matthew/Testbed/C/fanotify
type=SYSCALL msg=audit(05/08/2012 17:32:32.353:13118) : arch=x86_64 syscall=unlinkat success=yes exit=0 a0=0xffffffffffffff9c a1=0xb540c0 a2=0x0 a3=0x7fff50cfba20 items=2 ppid=13699 pid=2773 auid=matthew uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts0 ses=2 comm=rm exe=/usr/bin/rm subj=staff_u:sysadm_r:sysadm_t:s0 key=some file

If you want something stronger, you can go for something that, say, watches for any deletions by a user not normally inclined to do that. For performance, the more specific the rule the better:

auditctl -a exit,always -F arch=b64 -S unlink -S rmdir -F auid=78 -F dir=/var/www/vhost

The -F defines the filters and the -S defines the syscalls; the more filters, the less intensive it is on the kernel to track it. So in this case I filter on the user (apache), the vhosts directory and arch. Arch becomes important: b64 being 64 bit, b32 for 32 bit. You can set these up long-term by putting the rules in /etc/audit.rules.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/387159', 'https://serverfault.com', 'https://serverfault.com/users/3327/']}
jdg_79635
stackexchange
llm_judgeable_groundtruth_similarity
32944478
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an android library project and imported the library project in the Unity project. Now, I want to implement a callback in Unity project, which will execute according to the response given by the android library project. I mean to say, Call Script File method from UnityPlayerActivity (Android Project). Currently I am using below line of code but nothing happens: UnityPlayer.UnitySendMessage("Main Camera","showMessage",errorMessage); Main Camera is my Game Object. showMessage is message name in Script File. Message is message which will be displayed in Unity through Android Activity. Please check my below code Unity Script File and Android Activity. Unity Script File: using UnityEngine;using System.Collections;public class scriptfile : MonoBehaviour { // Use this for initialization void Start () { AndroidJavaClass jc = new AndroidJavaClass("com.unity3d.player.UnityPlayer"); AndroidJavaObject jo = jc.GetStatic<AndroidJavaObject>("currentActivity"); jo.Call("shareText","236","236"); } void showMessage(string message){ print ("hello"); Debug.Log ("hello"); } } Android File UnityPlayerActivity: /** * Created by CH-E01073 on 28-09-2015. 
*/public class MainAct extends UnityPlayerActivity implements RegistrationListener,BOffersListener { Context context; SharedPreferences prefs ; String AppIds=""; String PublisherIDs=""; public void shareText(String AppId,String PublisherID) { context=MainAct.this; prefs = PreferenceManager .getDefaultSharedPreferences(context); Log.e("AppID", AppId); Log.e("PublisherID",PublisherID); AppIds=AppId; PublisherIDs=PublisherID; runOnUiThread(new Runnable() { @Override public void run() { UnityPlayer.UnitySendMessage("Main Camera","showMessage","Start UI Thread"); if (prefs.getString(FreeBConstants.ID, null) == null || prefs.getString(FreeBConstants.ID, null).equals("") || !Build.VERSION.RELEASE.equals(prefs.getString( FreeBConstants.VERSION, null)) || !FreeBCommonUtility.getDeviceId(context).equals( (prefs.getString(FreeBConstants.DEVICE_ID, null)))) { BSDKLogger.enableLogging(true); SDKRegistration.initialize(MainAct.this, getApplicationContext(), AppIds,PublisherIDs); }else{ Offers Offers = new Offers(MainAct.this); Offers.setOnFreeBOffersListener(MainAct.this); Offers.setTitle( "Pick Any Offer to unlock your premium features", "#FFFFFF", "#FF6D00"); } } }); } @Override public void onOffersLoaded(String code,String freeBOffers) { CommonUtility.showToast(getApplicationContext(), code); UnityPlayer.UnitySendMessage("Main Camera","showMessage",freeBOffers); } @Override public void onShowOffers() { UnityPlayer.UnitySendMessage("Main Camera","showMessage","Show Offers"); } @Override public void noOfferInstalled(String s, String s2) { } @Override public void onLeaveApplication(String s, String s2) { } @Override public void onDialogDismiss(String s) { } @Override public void onOffersFailed(String code, String errorMessage) { FreeBCommonUtility.showToast(getApplicationContext(), errorMessage); UnityPlayer.UnitySendMessage("Main Camera","showMessage",errorMessage); } @Override public void onOffersInstallSuccess(String code, String errorMessage) { 
FreeBCommonUtility.showToast(getApplicationContext(), errorMessage); } @Override public void onOffersInstallFailure(String code, String errorMessage) { FreeBCommonUtility.showToast(getApplicationContext(), errorMessage); } @Override public void onRegistrationFailed(String code, String errorMessage) { FreeBCommonUtility.showToast(getApplicationContext(), errorMessage); UnityPlayer.UnitySendMessage("Main Camera","showMessage",errorMessage); } @Override public void onRegistrationSuccess(String code, String errorMessage) { // FreeBCommonUtility.showToast(getApplicationContext(), errorMessage); Log.e("SUCCESS", errorMessage); // TODO Auto-generated method stub UnityPlayer.UnitySendMessage("Main Camera","showMessage",errorMessage); Offers Offers = new Offers(MainAct.this); Offers.setOnFreeBOffersListener(MainAct.this); Offers.setTitle( "Pick Any Offer to unlock your premium features", "#FFFFFF", "#FF6D00"); }} Can anyone help me to get rid of this issue? Now provide the response and nothing else.
Another option will be to implement an interface callback using AndroidJavaProxy. Instead of using UnitySendMessage, you can simply have an interface callback in your Java code and then implement this interface in C# using AndroidJavaProxy and pass it to the Java method in order to receive messages back. Create your Java interface:

package com.example.android;

public interface PluginCallback {
    public void onSuccess(String videoPath);
    public void onError(String errorMessage);
}

Call the passed listener/callback to return messages:

public void myPluginMethod(PluginCallback callback) {
    // Do something
    callback.onSuccess("onSuccess");
    // Do something horrible
    callback.onError("onError");
}

Implement the interface in C#:

class AndroidPluginCallback : AndroidJavaProxy
{
    public AndroidPluginCallback() : base("com.example.android.PluginCallback") { }

    public void onSuccess(string videoPath)
    {
        Debug.Log("ENTER callback onSuccess: " + videoPath);
    }

    public void onError(string errorMessage)
    {
        Debug.Log("ENTER callback onError: " + errorMessage);
    }
}

Pass the C# interface to the Java method:

AndroidJavaObject pluginClass = new AndroidJavaObject("com.example.android.MyPlugin");
pluginClass.Call("myPluginMethod", new AndroidPluginCallback());
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32944478', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1986760/']}
jdg_79636
stackexchange
llm_judgeable_groundtruth_similarity
66005066
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am going to build a zoomable image view inside a box, like the first screenshot. But when I zoom in on the image, it will go out of the box. Is there a way to zoom the image, but keep the size? Without view or fragment , only box seems not enough. I am expecting the image to get bigger, but still stay inside the red box, but I got the second screenshot after zooming in. Thanks to nglauber and Amirhosein, I got the final solution with having the zooming and dragging features at the same time inside a "box"(fixed area) with the following code as the new screenshot as shown below. val imageBitmap = imageResource(id = R.drawable.android) Image( modifier = Modifier .preferredSize(400.dp, 300.dp) .clip(RectangleShape) .zoomable(onZoomDelta = { scale.value *= it }) .rawDragGestureFilter( object : DragObserver { override fun onDrag(dragDistance: Offset): Offset { translate.value = translate.value.plus(dragDistance) return super.onDrag(dragDistance) } }) .graphicsLayer( scaleX = scale.value, scaleY = scale.value, translationX = translate.value.x, translationY = translate.value.y ), contentDescription = null, bitmap = imageBitmap ) Now provide the response and nothing else.
Here is my solution... Might be helpful for someone...

@Composable
fun ZoomableImage() {
    val scale = remember { mutableStateOf(1f) }
    val rotationState = remember { mutableStateOf(1f) }
    Box(
        modifier = Modifier
            .clip(RectangleShape) // Clip the box content
            .fillMaxSize() // Give the size you want...
            .background(Color.Gray)
            .pointerInput(Unit) {
                detectTransformGestures { centroid, pan, zoom, rotation ->
                    scale.value *= zoom
                    rotationState.value += rotation
                }
            }
    ) {
        Image(
            modifier = Modifier
                .align(Alignment.Center) // keep the image centralized into the Box
                .graphicsLayer(
                    // adding some zoom limits (min 50%, max 300%)
                    scaleX = maxOf(.5f, minOf(3f, scale.value)),
                    scaleY = maxOf(.5f, minOf(3f, scale.value)),
                    rotationZ = rotationState.value
                ),
            contentDescription = null,
            painter = painterResource(R.drawable.dog)
        )
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/66005066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3247270/']}
jdg_79637
stackexchange
llm_judgeable_groundtruth_similarity
15375528
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an NSArray and each object in the array has a groupId and a name. Each object is unique but there are many with the same groupId. Is there a way i can tear the array apart and rebuild it so that the names are grouped into a single object with the corresponding groubId? Here is what the array currently looks like: 2013-03-12 20:50:05.572 appName[4102:702] the array: ( { groupId = 1; name = "Dan"; }, { groupId = 1; name = "Matt"; }, { groupId = 2; name = "Steve"; }, { groupId = 2; name = "Mike"; }, { groupId = 3; name = "John"; }, { groupId = 4; name = "Kevin"; }) This is what I would like it to look like: 2013-03-12 20:50:05.572 appName[4102:702] the array: ( { groupId = 1; name1 = "Dan"; name2 = "Matt"; }, { groupId = 2; name1 = "Steve"; name2 = "Mike"; }, { groupId = 3; name = "John"; }, { groupId = 4; name = "Kevin"; }) EDIT:I've tried & failed with many attempts, most along the lines of something like this (sloppy recreation, but to give an idea): int idNum = 0;for (NSDictionary *arrObj in tempArr){ NSString *check1 = [NSString stringWithFormat:@"%@",[arrObj valueForKey:@"groupId"]]; NSString *check2 = [NSString stringWithFormat:@"%@",[[newDict valueForKey:@"groupId"]]; if (check1 == check2){ NSString *nameStr = [NSString stringWithFormat:@"name_%d",idNum]; [newDict setValue:[arrObj valueForKey:@"name"] forKey:nameStr]; } else { [newDict setValue:arrObj forKey:@"object"]; } idNum++;} Now provide the response and nothing else.
NSArray *array = @[@{@"groupId" : @"1", @"name" : @"matt"},
                   @{@"groupId" : @"2", @"name" : @"john"},
                   @{@"groupId" : @"3", @"name" : @"steve"},
                   @{@"groupId" : @"4", @"name" : @"alice"},
                   @{@"groupId" : @"1", @"name" : @"bill"},
                   @{@"groupId" : @"2", @"name" : @"bob"},
                   @{@"groupId" : @"3", @"name" : @"jack"},
                   @{@"groupId" : @"4", @"name" : @"dan"},
                   @{@"groupId" : @"1", @"name" : @"kevin"},
                   @{@"groupId" : @"2", @"name" : @"mike"},
                   @{@"groupId" : @"3", @"name" : @"daniel"},
                   ];

NSMutableArray *resultArray = [NSMutableArray new];
NSArray *groups = [array valueForKeyPath:@"@distinctUnionOfObjects.groupId"];
for (NSString *groupId in groups)
{
    NSMutableDictionary *entry = [NSMutableDictionary new];
    [entry setObject:groupId forKey:@"groupId"];

    NSArray *groupNames = [array filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"groupId = %@", groupId]];
    for (int i = 0; i < groupNames.count; i++)
    {
        NSString *name = [[groupNames objectAtIndex:i] objectForKey:@"name"];
        [entry setObject:name forKey:[NSString stringWithFormat:@"name%d", i + 1]];
    }
    [resultArray addObject:entry];
}
NSLog(@"%@", resultArray);

Output:

(
    {
        groupId = 3;
        name1 = steve;
        name2 = jack;
        name3 = daniel;
    },
    {
        groupId = 4;
        name1 = alice;
        name2 = dan;
    },
    {
        groupId = 1;
        name1 = matt;
        name2 = bill;
        name3 = kevin;
    },
    {
        groupId = 2;
        name1 = john;
        name2 = bob;
        name3 = mike;
    }
)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15375528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1671794/']}
jdg_79638
stackexchange
llm_judgeable_groundtruth_similarity
19840
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can you find out the IP address of a single computer inside a college network. Every computer has the same public IP address on the network. How is it possible, after discovering that the IP is from that college, to find the computer from which the attack/request/etc originated? Now provide the response and nothing else.
I suppose that our situation is the following: you have detected some undesirable network traffic and you are looking for the perpetrator, so as to, more or less metaphorically, convey to him the inherent unwisdom of his villainous behaviour. The source address of the offending IP packets points to a college; the college uses Network Address Translation so that outgoing traffic from all their students is seen, from the outside, as coming from a single IP address. (Or, possibly, you are the wannabe evildoer and you want to know what the Long Arm of the Law could do to trace you back. It does not change the technical situation.) Strictly speaking, the college acts as a kind of anonymizer, since it blocks the actual (internal) IP information. So you should ask the college. They may have detailed logs on network activity, which may help pinpoint the uncivilized individual; for instance, the DHCP logs within the college network could give some information on who had their computer up and running at the time of the indelicacy. There are various sources of such information, which depend a lot on the structure of the internal college network. Be sure to bring all the information you have on your side; network traces obtained with a network monitor application (like Wireshark or the aptly named Network Monitor ) are a must. Also, bring all logs, especially Web server logs, because Web browsers tend to produce a lot of information which can be quite specific to the specific computer which sends the requests (see this page for details).
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/19840', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/12730/']}
jdg_79639
stackexchange
llm_judgeable_groundtruth_similarity
34678538
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a base class Base and a child class Child which extends it. Base implements java.lang.AutoCloseable . Let's suppose that the constructor for Child throws a Foo . Now consider try (Base c = new Child()){ /*Some code*/} catch (final Foo e){ /*Some more code*/} Is the Base#close method called if the exception is thrown? It is not on my machine, but is this something that the JLS has standardised? Now provide the response and nothing else.
Yes, close won't be called. This is specified in the JLS section 14.20.3 : Resources are initialized in left-to-right order. If a resource fails to initialize (that is, its initializer expression throws an exception), then all resources initialized so far by the try-with-resources statement are closed . If all resources initialize successfully, the try block executes as normal and then all non-null resources of the try-with-resources statement are closed. Resources are closed in the reverse order from that in which they were initialized. A resource is closed only if it initialized to a non-null value . An exception from the closing of one resource does not prevent the closing of other resources. Such an exception is suppressed if an exception was thrown previously by an initializer, the try block, or the closing of a resource. In this case, an exception is thrown in the constructor so the resource is not initialized to a non-null value. Hence, the close method isn't called.
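For readers who want to see the "failed initializer, hence no close" rule in action without setting up a Java project, the same pattern can be sketched with Python's analogous with statement (this is an analogy, not Java semantics, but it mirrors the JLS rule quoted above: a resource that never finishes initializing is never closed):

```python
closed = []

class Resource:
    # Analogue of a Java AutoCloseable whose constructor may throw.
    def __init__(self, fail):
        if fail:
            raise ValueError("constructor failed")

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        closed.append("closed")  # analogue of close()
        return False  # don't swallow exceptions

# Successful initialization: the resource is closed on exit.
with Resource(fail=False):
    pass
assert closed == ["closed"]

# Failed initialization: the constructor raises before the context is
# entered, so __exit__ (the close() analogue) is never called.
try:
    with Resource(fail=True):
        pass
except ValueError:
    pass
assert closed == ["closed"]  # unchanged: no second close
```

The Java behavior follows the same logic: in `try (Base c = new Child())`, the exception escapes before `c` is ever bound to a non-null value, so there is nothing for the runtime to close.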
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34678538', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3415258/']}
jdg_79640