source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 49153747 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to call a native module in MainActivity's onCreate function by: context.getNativeModule(RNBackgroundToForegroundServiceModule.class) But I am not sure how to get the current ReactContext there. I tried to use (ReactContext) getApplicationContext() but it crashes. How can I get a valid ReactContext?
Now provide the response and nothing else.
| I came up with the right answer myself. We have to wait until the Javascript bundle is loaded. ReactInstanceManager mReactInstanceManager = getReactNativeHost().getReactInstanceManager(); ReactApplicationContext context = (ReactApplicationContext) mReactInstanceManager.getCurrentReactContext(); mReactInstanceManager.addReactInstanceEventListener(new ReactInstanceManager.ReactInstanceEventListener() { public void onReactContextInitialized(ReactContext validContext) { // Use validContext here } }); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49153747', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8943098/']} | jdg_80341 |
stackexchange | llm_judgeable_groundtruth_similarity | 17168623 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I heard that in C we can pass arguments "by value" or "by reference". But one book says that while arguments can be passed both ways, there is no "pass by reference" in C, yet I seem to pass most of my arguments "by reference". So why is it asked "does C even have pass by reference"? A detailed description would be greatly appreciated.
Now provide the response and nothing else.
| C parameters are always passed by value rather than by reference. However, if you think of the address of an object as being a reference to that object then you can pass that reference by value. For example: void foo(int *x){ *x = 666;} You ask in a comment: So why do we need pointers in C when we can pass all the parameters by value? Because in a language that only supports pass-by-value, lack of pointers would be limiting. It would mean that you could not write a function like this: void swap(int *a, int *b){ int temp = *a; *a = *b; *b = temp;} In Java for example, it is not possible to write that function because it only has pass-by-value and has no pointers. In C++ you would write the function using references like this: void swap(int &a, int &b){ int temp = a; a = b; b = temp;} And similarly in C#: void swap(ref int a, ref int b){ int temp = a; a = b; b = temp;} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17168623', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2464707/']} | jdg_80345 |
stackexchange | llm_judgeable_groundtruth_similarity | 34824880 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My question is about a gradient with fading: the gradient runs from top to bottom, and the fading from left to right. Example: The code is: background-image: linear-gradient(0deg, rgba(198,83,165,.95) 0%, rgba(198,86,51,.95) 100%), linear-gradient(90deg, transparent 50%, rgba(0,0,0,.95) 100%);opacity: 0.949; My result is below. As you see, it doesn't fade the gradient; it looks like a separate layer behind the gradient. Is there any other method of implementing this?
Now provide the response and nothing else.
| As I had mentioned in comments, when you add a transparent layer on top of another gradient, it will only show through the colored gradient layer that is below it (and not the image that is present in the container). So, it would be very tough (almost impossible) to achieve this with gradients. You'd have to use a mask image to achieve it. The below is a snippet that uses a SVG mask . div { position: relative; height: 300px; width: 500px;}div svg { position: absolute; height: 100%; width: 100%;}div .grad-fill { fill: url(#grad); mask: url(#masker);} <div> <svg viewBox="0 0 500 300" preserveAspectRatio="none"> <defs> <linearGradient id="grad" gradientUnits="objectBoundingBox" gradientTransform="rotate(270,0.5,0.5)"> <stop offset="0%" stop-color="rgba(198,83,165,.95)" /> <stop offset="100%" stop-color="rgba(198,86,51,.95)" /> </linearGradient> <linearGradient id="mask-grad" gradientUnits="objectBoundingBox"> <stop offset="40%" stop-color="black" /> <stop offset="100%" stop-color="white" /> </linearGradient> <mask id="masker" x="0" y="0" width="500" height="300"> <rect x="0" y="0" width="500" height="300" fill="url(#mask-grad)" /> </mask> </defs> <rect x="0" y="0" width="500" height="300" class="grad-fill" /> </svg> <img src="http://lorempixel.com/500/300/animals/8" /></div> You can find more information about SVG masks in the below links: SVG Clipping and Masking - MDN SVG Masks Tutorials @ JENKOV.COM This can be done with pure CSS also but unfortunately mask-image property is currently supported only by WebKit browsers and so this approach is not recommended. 
div { position: relative; height: 300px; width: 500px; color: white;}div:after,img { position: absolute; height: 100%; width: 100%; top: 0px; left: 0px; z-index: -1;}div:after { content: ''; background-image: linear-gradient(0deg, rgba(198, 83, 165, .95) 0%, rgba(198, 86, 51, .95) 100%); -webkit-mask-image: linear-gradient(90deg, transparent 40%, rgb(0, 0, 0) 100%);} <div>Some text <img src="http://lorempixel.com/500/300/animals/8" /></div> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34824880', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2333158/']} | jdg_80343 |
stackexchange | llm_judgeable_groundtruth_similarity | 60969224 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using the material-table ( https://material-table.com/ ). My issue is that I want to change the table border-radius and table shadow, apparently, this option does not exist using 'option feature' But when I inspect the table I could modify radius and shadow as you can see below : I'm wondering how to override these values from Reactjs : const useStyles = makeStyles(theme => ({ root: { }}));const MainTable = props => { const {className, params, ...rest} = props(...) return ( <MaterialTable className={classes.MuiPaperRounded} columns={[ {title: 'Equipement', field: 'equipement'}, {title: 'TAG', field: 'tag'}, {title: 'Nombre de points de mesures', field: 'nombreDePointsDeMesures'}, {title: 'Mesuré', field: 'mesure', type: 'boolean'} ]} data={rows} icons={(...)} options={{ tableLayout: {backgroundColor: 'green'}, }} title="Routine vibration" /> );};
Now provide the response and nothing else.
| If it's difficult to customize styles inside a third-party component, using a nesting selector with a className from outside works well. For your example: "& .MuiPaper-root" Full code: import React from "react";import "./styles.css";import { makeStyles } from "@material-ui/core";import MaterialTable from "material-table";const useStyles = makeStyles(theme => ({ root: { "& .MuiPaper-root": { borderRadius: "100px", boxShadow: "10px 10px 5px 0px rgba(0,0,0,0.75);" } }}));export default function App() { const classes = useStyles(); return ( <div className={classes.root}> <MaterialTable /> </div> );} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60969224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12564973/']} | jdg_80344 |
stackexchange | llm_judgeable_groundtruth_similarity | 11675632 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
While uploading the project, there was a network failure and I had to stop when the uploading process was only 31% complete. Now when I click to deploy the application to Google App Engine, the following message is produced: com.google.appengine.tools.admin.HttpIoException: Error posting to URL: https://appengine.google.com/api/appversion/create?app_id=programworks&version=1&409 ConflictAnother transaction by user suhailgupta03 is already in progress for app: s~programworks, version: 1. That user can undo the transaction with "appcfg rollback".Unable to update app: Error posting to URL: https://appengine.google.com/api/appversion/create?app_id=programworks&version=1&409 ConflictAnother transaction by user suhailgupta03 is already in progress for app: s~programworks, version: 1. That user can undo the transaction with "appcfg rollback". I open my shell prompt and type appcfg rollback there, but I get a message that the command is not found. I have installed Google App Engine as a plugin for NetBeans. The path where I have kept the jars of Google App Engine: /home/non-admin/appengine-java-sdk-1.6.6/lib/ Please tell me what I should do to roll back the previous process? I badly need to upload the war file!
Now provide the response and nothing else.
| You want appcfg.sh rollback (which is in path_to_your_app_engine_sdk/bin/ ). appcfg comes in different variants for each type of runtime (Python, Java, Go), so the suggested command is a bit generic. If, for whatever reason, you don't have easy access to the entire war directory of your project (if, for example your IDE only gives you a .war file), you can do the following: Make a directory named war . Inside the war folder make another directory and name it as WEB-INF . In the folder WEB-INF drop two xml files, namely web.xml and appengine-web.xml . These should have the contents of your deployed web.xml and appengine-web.xml files respectively. Then run the following shell command (this assumes you execute it from the bin directory of the App Engine Java SDK): [non-admin@user bin]$ ./appcfg.sh rollback /home/non-admin/NetBeansProjects/PersonalSite/web/war If you are using the Go appengine tools, then you'll need to run: $ cd [your go_appengine directory]$ ./appcfg.py rollback [your app directory with app.yaml in it] | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11675632', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/648138/']} | jdg_80345 |
stackexchange | llm_judgeable_groundtruth_similarity | 57485052 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use a point of sale system that utilizes the HERE API to build maps of my delivery areas. Recently, I provided my POS provider with an up-to-date parcels list, parcels shapefile and streets center line file from the County GIS department. Despite providing this pristine data, my POS provider generated delivery maps that STILL do not include the proper streets and addresses. After much prodding, I learned that my POS provider uses the HERE API to generate its maps. Thus, it seems that HERE is the problem, not my POS provider. There is a way to go into the HERE Map Creator and add new addresses. However, I don't have just one address. I want to provide HERE with the ENTIRE County GIS database to ensure that their service has the most up-to-date maps. I would call them about this, but they like to hide behind forms and not provide a telephone number. Any idea how I can reach someone at HERE Technologies to help them update entire counties' worth of addresses? Visited their website. Realized that I could only add single addresses rather than an entire county.
Now provide the response and nothing else.
| So I've spent some more time investigating and I think I've come up with a solution for now. It turns out, the ImageAnalysisConfig is not extendable, so you can't alter the camera configuration when just using one of those, so the default camera settings will be used which on my phone I think resulted in AE being on and hitting 16ish FPS. If you spin up a PreviewConfig to run along side it at the same time, you can then extend this with a Camera2Config.Extender and alter the camera2 properties directly. This can increase the camera preview frame rate, and the Analyser will also start getting frames at the same rate. So for example, I add this to my PreviewConfig... // Create Camera2 extender var camera2Extender = Camera2Config.Extender(previewConfig) .setCaptureRequestOption(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_OFF) .setCaptureRequestOption(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF) .setCaptureRequestOption(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_OFF) .setCaptureRequestOption(CaptureRequest.CONTROL_AWB_MODE, CaptureRequest.CONTROL_AWB_MODE_OFF) .setCaptureRequestOption(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_TORCH) .setCaptureRequestOption(CaptureRequest.SENSOR_SENSITIVITY, 100) .setCaptureRequestOption(CaptureRequest.SENSOR_FRAME_DURATION, 16666666) .setCaptureRequestOption(CaptureRequest.SENSOR_EXPOSURE_TIME, 20400000) So this started hitting 30fps fine in the ImageAnalyser. If I want to hit 60, I can set... .setCaptureRequestOption(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF).setCaptureRequestOption(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, Range(60,60)) Obviously assuming the device support (60,60) target FPS range. So it seems the full Camera2 logic is still available in CameraX, its just a bit clunky that its a little hidden away in a Camera2Config extender, and this only works with Preview use cases. 
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57485052', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11923828/']} | jdg_80346 |
stackexchange | llm_judgeable_groundtruth_similarity | 1808523 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I do not remember precisely what the equations were or who the relevant mathematicians and physicists were, but I recall being told the following story. I apologise in advance if I have misunderstood anything, or just have it plain wrong. The story is as follows. A quantum physicist created some equations to model what we already know about sub-atomic particles. His equations and models are amazingly accurate, but they only seem to hold true if a mysterious particle, currently unknown to humanity, exists. More experiments are run and lo and behold, that 'mysterious particle' does in fact exist! It was found to be a quark/dark-matter/anti-matter, or something of the sort. What similar occurrences have there been in history, where the mathematical model was so accurate/good that it 'accidentally' led to the discovery of something previously unknown? If you have an answer, could you please provide the specific equation(s), or the name of the equation(s), that directly led to this? I can recall one other example. Maxwell's equations predicted the existence of radio waves, which were then found by Hertz.
Now provide the response and nothing else.
| The planet Neptune's discovery was an example of something similar to this. It was known that Newton's equations gave the wrong description of the motion of Uranus and Mercury. Urbain Le Verrier sat down and tried to see what would happen if we assumed that the equations were right and the universe was wrong. He set up a complicated system of equations that incorporated a lot of ways contemporary knowledge of the universe could be wrong, including the number of planets, the location and mass of the planets, and the presence of forces other than gravity. He would eventually find a solution to the equations where the dominating error was the presence of another, as yet undetected, planet. His equations gave the distance from the sun and the mass of the planet correctly, as well as enough detail about the planet's location in the sky that it was found with only an hour of searching. Mercury's orbit's issues would eventually be solved by General Relativity. | {} | {'log_upvote_score': 8, 'links': ['https://math.stackexchange.com/questions/1808523', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/141642/']} | jdg_80347 |
stackexchange | llm_judgeable_groundtruth_similarity | 16240621 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a bit of an OOD question. I have a service: namespace Front\Service\Course;use Front\ORM\EntityManagerAwareInterface;use Zend\Http\Request;use Zend\InputFilter\InputFilter;use Front\InputFilter\Course\CreateFilter;class Create implements EntityManagerAwareInterface{/** * @var \Doctrine\Orm\EntityManager */protected $entityManager = null;public function create(CreateFilter $createFilter){ if (!$createFilter->isValid()) return false; /* @var $courseRepository \Front\Repositories\CourseRepository */ $courseRepository = $this->getEntityManager()->getRepository('Front\Entities\Course'); $course = $courseRepository->findByName($createFilter->getCourse());}/* (non-PHPdoc) * @see \Front\ORM\EntityManagerAwareInterface::getEntityManager() */public function getEntityManager(){ return $this->entityManager;}/* (non-PHPdoc) * @see \Front\ORM\EntityManagerAwareInterface::setEntityManager() */public function setEntityManager(\Doctrine\ORM\EntityManager $entityManager){ $this->entityManager = $entityManager; return $this;}} And the controller: class CreateController extends \Zend\Mvc\Controller\AbstractController{ public function onDispatch(MvcEvent $e) { $jsonModel = new JsonModel(); /* @var $courseCreateService \Front\Service\Course\Create */ $courseCreateService = $this->getServiceLocator()->get('Front\Service\Course\Create'); $courseCreateFilter = new CreateFilter(); $courseCreateFilter->setData($this->params()->fromPost()); if (!$courseCreateFilter->isValid()) { $jsonModel->setVariable('status', 0); $jsonModel->setVariable('message', $courseCreateFilter->getMessages()); return; } $courseCreateService->create($courseCreateFilter); } } By the service method declaration: public function create(CreateFilter $createFilter) I force users of the Service to use the CreateFilter container, which is derived from Zend/InputFilter, every time they want to create a new Course. My question is: might it be better to send the service layer not the typed object but a simple value?
For example, in my case it might look like: public function create($courseName) My CreateFilter looks like: class CreateFilter extends InputFilter{ public function __construct() { $input = new Input('name'); $validatorChain = new ValidatorChain(); $validatorChain->addValidator(new StringLength(array('max'=>60))) ->addValidator(new NotEmpty()); $input->setRequired(true)->setValidatorChain($validatorChain); $this->add($input);}/** * @return string | null */public function getCourse(){ return $this->getValue('name');}}
Now provide the response and nothing else.
| There were two issues related to linking libraries: Firstly, although my subproject had the opencv2.framework correctly added and linked to it, I also had to add the framework to my main project. This way I got rid of the cv::* linking problems; The second issue was related to my cross-compiling Objective-C++ configuration. Since I was using libc++ as my Standard Library, I had to add -lc++ to my Other Linker Flags. In case you are using libstdc++ as your Standard Library, you should add -lstdc++ instead. After those changes, it all worked perfectly! Good luck! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16240621', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/856619/']} | jdg_80348 |
stackexchange | llm_judgeable_groundtruth_similarity | 1049889 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm not sure what I am doing wrong but I try to catch touches on a MKMapView object. I subclassed it by creating the following class : #import <UIKit/UIKit.h>#import <MapKit/MapKit.h>@interface MapViewWithTouches : MKMapView {}- (void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *) event; @end And the implementation : #import "MapViewWithTouches.h"@implementation MapViewWithTouches- (void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *) event { NSLog(@"hello"); //[super touchesBegan:touches withEvent:event];}@end But it looks like when I use this class, I see nothing on the Console : MapViewWithTouches *mapView = [[MapViewWithTouches alloc] initWithFrame:self.view.frame];[self.view insertSubview:mapView atIndex:0]; Any idea what I'm doing wrong?
Now provide the response and nothing else.
| The best way I have found to achieve this is with a Gesture Recognizer. Other ways turn out to involve a lot of hackish programming that imperfectly duplicates Apple's code, especially in the case of multitouch. Here's what I do: Implement a gesture recognizer that cannot be prevented and that cannot prevent other gesture recognizers. Add it to the map view, and then use the gestureRecognizer's touchesBegan, touchesMoved, etc. to your fancy. How to detect any tap inside an MKMapView (sans tricks) WildcardGestureRecognizer * tapInterceptor = [[WildcardGestureRecognizer alloc] init];tapInterceptor.touchesBeganCallback = ^(NSSet * touches, UIEvent * event) { self.lockedOnUserLocation = NO;};[mapView addGestureRecognizer:tapInterceptor]; WildcardGestureRecognizer.h //// WildcardGestureRecognizer.h// Copyright 2010 Floatopian LLC. All rights reserved.//#import <Foundation/Foundation.h>typedef void (^TouchesEventBlock)(NSSet * touches, UIEvent * event);@interface WildcardGestureRecognizer : UIGestureRecognizer { TouchesEventBlock touchesBeganCallback;}@property(copy) TouchesEventBlock touchesBeganCallback;@end WildcardGestureRecognizer.m //// WildcardGestureRecognizer.m// Created by Raymond Daly on 10/31/10.// Copyright 2010 Floatopian LLC. 
All rights reserved.//#import "WildcardGestureRecognizer.h"@implementation WildcardGestureRecognizer@synthesize touchesBeganCallback;-(id) init{ if (self = [super init]) { self.cancelsTouchesInView = NO; } return self;}- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{ if (touchesBeganCallback) touchesBeganCallback(touches, event);}- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event{}- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event{}- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event{}- (void)reset{}- (void)ignoreTouch:(UITouch *)touch forEvent:(UIEvent *)event{}- (BOOL)canBePreventedByGestureRecognizer:(UIGestureRecognizer *)preventingGestureRecognizer{ return NO;}- (BOOL)canPreventGestureRecognizer:(UIGestureRecognizer *)preventedGestureRecognizer{ return NO;}@end SWIFT 3 let tapInterceptor = WildCardGestureRecognizer(target: nil, action: nil)tapInterceptor.touchesBeganCallback = { _, _ in self.lockedOnUserLocation = false}mapView.addGestureRecognizer(tapInterceptor) WildCardGestureRecognizer.swift import UIKit.UIGestureRecognizerSubclassclass WildCardGestureRecognizer: UIGestureRecognizer { var touchesBeganCallback: ((Set<UITouch>, UIEvent) -> Void)? override init(target: Any?, action: Selector?) { super.init(target: target, action: action) self.cancelsTouchesInView = false } override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) { super.touchesBegan(touches, with: event) touchesBeganCallback?(touches, event) } override func canPrevent(_ preventedGestureRecognizer: UIGestureRecognizer) -> Bool { return false } override func canBePrevented(by preventingGestureRecognizer: UIGestureRecognizer) -> Bool { return false }} | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1049889', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/129513/']} | jdg_80349 |
stackexchange | llm_judgeable_groundtruth_similarity | 16831605 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just found a quirk in C that I find really confusing. In C it's possible to use a pointer to a struct before it has been declared. This is a very useful feature that makes sense because the declaration is irrelevant when you're just dealing with a pointer to it. I just found one corner case where this is (surprisingly) not true, though, and I can't really explain why. To me it looks like a mistake in the language design. Take this code: #include <stdio.h>#include <stdlib.h>typedef void (*a)(struct lol* etc);void a2(struct lol* etc) {}int main(void) { return 0;} Gives: foo.c:6:26: warning: ‘struct lol’ declared inside parameter list [enabled by default]foo.c:6:26: warning: its scope is only this definition or declaration, which is probably not what you want [enabled by default]foo.c:8:16: warning: ‘struct lol’ declared inside parameter list [enabled by default] To remove this problem we can simply do this: #include <stdio.h>#include <stdlib.h>struct lol* wut;typedef void (*a)(struct lol* etc);void a2(struct lol* etc) {}int main(void) { return 0;} The unexplainable problem is now gone for an unexplainable reason. Why? Note that this question is about the behavior of language C (or possible the compiler behavior of gcc and clang) and not the specific example I pasted. EDIT: I won't accept "the order of declaration is important" as an answer unless you also explain why C would warn about using a struct pointer for the first time in a function argument list but allow it in any other context. Why would that possibly be a problem?
Now provide the response and nothing else.
| To understand why the compiler complains, you need to know two things about C "struct"s: they are created (as a declared, but not yet defined, type) as soon as you name them, so the very first occurrence of struct lol creates a declaration they obey the same "declaration scope" rules as ordinary variables ( struct lol { declares and then begins defining the structure, it's struct lol; or struct lol * or something else that does not have the open-brace that stops after the "declare" step.) A struct type that is declared but not yet defined is an instance of what C calls an "incomplete type". You are allowed to use pointers to incomplete types, as long as you do not attempt to follow the pointer: struct lol *global_p;void f(void) { use0(global_p); /* this is OK */ use1(*global_p); /* this is an error */ use2(global_p->field); /* and so is this */} You have to complete the type in order to "follow the pointer", in other words. In any case, though, consider function declarations with ordinary int parameters: int imin2(int a, int b); /* returns a or b, whichever is smaller */int isum2(int a, int b); /* returns a + b */ Variables named a and b here are declared inside the parentheses, but those declarations need to get out of the way so that the the next function declaration does not complain about them being re-declared. The same thing happens with struct tag-names: void gronk(struct sttag *p); The struct sttag declares a structure, and then the declaration is swept away, just like the ones for a and b . But that creates a big problem: the tag is gone and now you can't name the structure type ever again! If you write: struct sttag { int field1; char *field2; }; that defines a new and different struct sttag , just like: void somefunc(int x) { int y; ... }int x, y; defines a new and different x and y at the file-level scope, different from the ones in somefunc . 
Fortunately, if you declare (or even define) the struct before you write the function declaration, the prototype-level declaration "refers back" to the outer-scope declaration: struct sttag;void gronk(struct sttag *p); Now both struct sttag s are "the same" struct sttag , so when you complete struct sttag later, you're completing the one inside the prototype for gronk too. Re the question edit: it would certainly have been possible to define the action of struct, union, and enum tags differently, making them "bubble out" of prototypes to their enclosing scopes. That would make the issue go away. But it wasn't defined that way. Since it was the ANSI C89 committee that invented (or stole, really, from then-C++) prototypes, you can blame it on them. :-) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16831605', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/29442/']} | jdg_80350 |
stackexchange | llm_judgeable_groundtruth_similarity | 63749 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The SQL Server Express 2008 setup allows you to assign a different user account for each service. For a development environment, would you use a domain user, local user, NT Authority\NETWORK SERVICE, NT Authority\Local System or some other account, and why?
Now provide the response and nothing else.
| Local System is not recommended, it is an administrator equivalent account and thus can lead to questionable coding that takes advantage of administrator privileges which would not be allowed in a production system since security conscious Admins/DBA's really don't like to run services as admin. Depending on if the server instance will need to access other domain resources or not should determine which type of low privilege account it should run under. If it does not need to access any (non-anonymous) domain resources than I normally create a unique local, low privilege account for it to run under in order to gain the additional security benefit of not having multiple services running in the same identity context. Be aware that the Local Service account is not supported for the SQL Server or SQL Server Agent services. If it does need to access non-anonymous domain resources then you have three options: Run as Network Service which is also a low privilege account but one that retains the computers network credentials. Run under a Local Service Account Run under a custom domain account with low local privileges. One advantage to running under the developers account is that it is easier to attach debuggers to processes in your own identity without compromising security so debugging is easier (since non-Admin accounts do not have the privilege to attach a debugger to another identities process by default). A disadvantage to using another domain account is the overhead of managing those accounts, especially since each service for each developer should ideally have unique credentials so you do not have any leaks if a developer were to leave. Most of what I tend to do does not require the service to access domain resources so I tend to use unique local low privilege accounts that I manage. 
I also run exclusively as a non-admin user (and have done so under XP SP2, Server 2003, Vista and Server 2008 with no major problems) so when I have cases where I need the service to access domain resources then I have no worries about using my own domain credentials (plus that way I don't have to worry the network admins about creating/maintaining a bunch of non-production domain identities). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/63749', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/966/']} | jdg_80351 |
stackexchange | llm_judgeable_groundtruth_similarity | 61291741 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to create a modified dataframe with the specified columns. I tried the following, but it throws the error "Passing list-likes to .loc or [] with any missing labels is no longer supported":

    # columns to keep
    filtered_columns = ['text', 'agreeCount', 'disagreeCount', 'id', 'user.firstName', 'user.lastName', 'user.gender', 'user.id']
    tips_filtered = tips_df.loc[:, filtered_columns]

    # display tips
    tips_filtered

Thank you
Now provide the response and nothing else.
| It looks like Pandas has deprecated this method of indexing. According to their docs:

    This behavior is deprecated and will show a warning message pointing to this section. The recommended alternative is to use .reindex()

Using the new recommended method, you can filter your columns using:

    tips_filtered = tips_df.reindex(columns=filtered_columns)

NB: To reindex rows, you would use reindex(index=...) (more information here). | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/61291741', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10738871/']} | jdg_80352 |
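A short runnable sketch of the behavioral difference (the DataFrame contents are invented for illustration): reindex tolerates labels that are absent from the frame and returns them as all-NaN columns, where .loc now raises a KeyError.

```python
import pandas as pd

df = pd.DataFrame({"text": ["a", "b"], "id": [1, 2]})

# "missing" is not a column of df; df.loc[:, [...]] would raise a KeyError,
# while reindex fills the absent label with NaN instead.
filtered = df.reindex(columns=["text", "id", "missing"])

print(list(filtered.columns))            # ['text', 'id', 'missing']
print(filtered["missing"].isna().all())  # True
```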
stackexchange | llm_judgeable_groundtruth_similarity | 20967045 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following json object:

    $scope.values = [
        { "id": 2, "code": "Code 1", "name": "Sample 1" },
        { "id": 4, "code": "Code 2", "name": "Sample 2" },
        { "id": 7, "code": "Code 3", "name": "Sample 3" }
    ];

In the select tag, I have this:

    <select name="c_id" ng-options="c.id as c.code for c in values"></select>

The generated select options are:

    Code 1
    Code 2
    Code 3

Is there any way to format the text in the options like the following?

    Code 1 -- Sample 1
    Code 2 -- Sample 2
    Code 3 -- Sample 3

Or will I just have to prepare the values before attaching them to the model? Any help is appreciated. Thank you.
Now provide the response and nothing else.
| You can do it by using this syntax:

    ng-options="c.id as (c.code + ' -- ' + c.name) for c in values"

Example: http://jsfiddle.net/cherniv/6EkL7/1/

Or someone may like the next syntax:

    ng-options="c.id as [c.code, c.name].join(' -- ') for c in values"

Example: http://jsfiddle.net/cherniv/6EkL7/2/

But in some cases there is rationality in using a Filter, like:

    app.filter("formatter", function() {
        return function(item) {
            return item.code + " -- " + item.name;
        }
    })

And:

    ng-options="c.id as c|formatter for c in values"

Example: http://jsfiddle.net/cherniv/K8572/ | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/20967045', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2180770/']} | jdg_80353 |
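Stripped of the Angular wiring, the filter body is plain string concatenation; a standalone sketch of the same logic:

```javascript
// The same formatting the "formatter" filter applies, as a plain function.
function formatter(item) {
  return item.code + " -- " + item.name;
}

console.log(formatter({ code: "Code 1", name: "Sample 1" })); // Code 1 -- Sample 1
```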
stackexchange | llm_judgeable_groundtruth_similarity | 1500584 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to preface this with I'm not trying to start a fight. I was wondering if anyone had any good resources that compared C++ and C# for mathematically intensive code? My gut impression is that C# should be significantly slower, but I really have no evidence for this feeling. I was wondering if anyone here has ever run across a study or tested this themselves? I plan on running some tests myself, but would like to know if anyone has done this in a rigorous manner (google shows very little). Thanks. EDIT: For intensive, I mean a lot of sin/cos/exp happening in tight loops
Now provide the response and nothing else.
| I have to periodically compare the performance of core math under runtimes and languages as part of my job. In my most recent test, the performance of C# vs my optimized C++ control-case under the key benchmark — transform of a long array of 4d vectors by a 4d matrix with a final normalize step — C++ was about 30x faster than C#. I can get a peak throughput of one vector every 1.8ns in my C++ code, whereas C# got the job done in about 65ns per vector. This is of course a specialized case and the C++ isn't naive: it uses software pipelining, SIMD, cache prefetch, the whole nine yards of microoptimization. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1500584', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/48552/']} | jdg_80354 |
stackexchange | llm_judgeable_groundtruth_similarity | 15906273 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I came across this line of code today: int c = (int)'c'; I was not aware you could cast a char to an int . So I tested it out, and found that a=97, b=98, c=99, d=100 etc etc... Why is 'a' 97? What do those numbers relate to?
Now provide the response and nothing else.
| Everyone else (so far) has referred to ASCII. That's a very limited view - it works for 'a', but doesn't work for anything with an accent etc - which can very easily be represented by char.

A char is just an unsigned 16-bit integer, which is a UTF-16 code unit. Usually that's equivalent to a Unicode character, but not always - sometimes multiple code units are required for a single full character. See the documentation for System.Char for more details.

The implicit conversion from char to int (you don't need the cast in your code) just converts that 16-bit unsigned integer to a 32-bit signed integer in the natural, non-lossy way - just as if you had a ushort.

Note that every valid character in ASCII has the same value in UTF-16, which is why the two are often confused when the examples are only ones from the ASCII set. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15906273', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2000012/']} | jdg_80355 |
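To make the non-ASCII point concrete, a small sketch (the values shown are the UTF-16 code unit values of those characters):

```csharp
char c = 'é';                 // U+00E9: fits in a char, but is not ASCII
int i = c;                    // implicit widening conversion; no cast required
Console.WriteLine(i);         // prints 233

char euro = '€';              // U+20AC
Console.WriteLine((int)euro); // prints 8364
```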
stackexchange | llm_judgeable_groundtruth_similarity | 37884130 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have trouble with using Electron. As you can see from the title, when I load the remote module, it says it is undefined. This is the code of my entry js:

    const electron = require('electron');
    const { app, BrowserWindow, Tray, remote, ipcMain } = electron;

    function initApp() { ... }

    app.on('ready', () => {
        initApp();
        console.log(electron);        // object, but no remote inside
        console.log(electron.remote); // undefined
        console.log(remote);          // undefined
    });

and I tried to follow the official doc here: http://electron.atom.io/docs/api/remote/ with

    const { remote } = electron;
    const { BrowserWindow } = remote;
    let win = new BrowserWindow({width: 800, height: 600}); // error! BrowserWindow is not a constructor blabla...
    remote.getCurrentWindow().focus();

I don't know what I am missing. Any advice will be very appreciated.
Now provide the response and nothing else.
| Update 2020, since this answer still appears at the top. For the original answer to work in current versions of Electron, you need to set enableRemoteModule when creating the window in your main process:

    const myWindow = new BrowserWindow({
        webPreferences: {
            enableRemoteModule: true
        }
    });

Original answer: remote is needed only to require other modules from inside a render process. In the main process you just get your modules directly from require('electron'). It looks like that is what the example does, just with remote unnecessarily added.

Render process:

    const { remote } = require('electron');
    const { BrowserWindow } = remote;

Main process:

    const { BrowserWindow } = require('electron'); | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/37884130', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3473770/']} | jdg_80356 |
stackexchange | llm_judgeable_groundtruth_similarity | 5079879 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 2 queries that are as follows:

    SELECT COUNT(cvu.[ID]), 'Exp' AS [Exp]
    FROM [dbo].[tblClientVehicleUnit] cvu
    WHERE ExpirationDate < GetDate() AND cvu.Id = '4C1'

And the second one:

    SELECT COUNT(cvu.[ID]), 'NonExp' AS [Exp]
    FROM [dbo].[tblClientVehicleUnit] cvu
    WHERE ExpirationDate > GetDate() AND cvu.Id = '4C1'

How would I divide the counts between these two? It will always only return 2 values; one will be called Exp and one will be called NonExp. Thanks
Now provide the response and nothing else.
| Basically treat those two queries as subqueries, as below:

    select x.number / y.number
    from (
        SELECT COUNT(cvu.[ID]) as number, 'Exp' AS [Exp]
        FROM [dbo].[tblClientVehicleUnit] cvu
        WHERE ExpirationDate < GetDate() AND cvu.Id = '4C1'
    ) x
    join (
        SELECT COUNT(cvu.[ID]) as number, 'NonExp' AS [Exp]
        FROM [dbo].[tblClientVehicleUnit] cvu
        WHERE ExpirationDate > GetDate() AND cvu.Id = '4C1'
    ) y on 1=1

If you wanted to take it further, you could then have cvu.Id as part of the select and modify the join so you could do it across all cvu.Id's:

    select x.id, x.number / y.number
    from (
        SELECT cvu.id, COUNT(cvu.[ID]) as number, 'Exp' AS [Exp]
        FROM [dbo].[tblClientVehicleUnit] cvu
        WHERE ExpirationDate < GetDate()
        group by cvu.Id
    ) x
    join (
        SELECT cvu.id, COUNT(cvu.[ID]) as number, 'NonExp' AS [Exp]
        FROM [dbo].[tblClientVehicleUnit] cvu
        WHERE ExpirationDate > GetDate()
        group by cvu.Id
    ) y on x.id = y.id

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5079879', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/380432/']} | jdg_80357 |
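One caveat worth adding (my note, not part of the answer above): in T-SQL, COUNT returns an int, so x.number / y.number performs integer division, and a zero NonExp count raises a divide-by-zero error. A hedged sketch of a guarded version, reusing the same table and filters:

```sql
-- CAST forces decimal division; NULLIF turns a zero denominator into NULL,
-- so the result becomes NULL instead of raising an error.
SELECT CAST(x.number AS decimal(10, 4)) / NULLIF(y.number, 0) AS ratio
FROM (
    SELECT COUNT(cvu.[ID]) AS number
    FROM [dbo].[tblClientVehicleUnit] cvu
    WHERE ExpirationDate < GETDATE() AND cvu.Id = '4C1'
) x
JOIN (
    SELECT COUNT(cvu.[ID]) AS number
    FROM [dbo].[tblClientVehicleUnit] cvu
    WHERE ExpirationDate > GETDATE() AND cvu.Id = '4C1'
) y ON 1 = 1;
```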
stackexchange | llm_judgeable_groundtruth_similarity | 4299295 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
From the docs: "Layers are lightweight objects (CALayer) that, though similar to views, are actually model objects assigned to views." Lightweight for me excludes any heavy bitmap for content. I believed a CALayer is the "real" thing, while a UIView is just the wrapper around it. Every view has 3 CALayers in different trees (model, presentation, render). So no 3 bitmaps? Only one?
Now provide the response and nothing else.
| The term "lightweight" in reference to a CALayer comes from that piece of documentation originating on the Mac. As Joe points out, an NSView is a fairly complex UI element when compared to the iPhone's UIView. You can animate dozens of UIViews around the screen on even a resource-constrained mobile device, but NSViews put a lot more strain on the system as you start adding many of them to the screen. This is one of the things gained by the fresh start of UIKit over AppKit, because UIKit has had Core Animation from the beginning, and Apple had a chance to learn from what worked and what didn't in AppKit. In comparison, a CALayer adds very little to the underlying GPU-based bitmapped rectangular texture that it is drawing, so they don't add a lot of overhead. On an iPhone, this isn't very different from a UIView, because a UIView is then just a lightweight wrapper around a CALayer. I'm going to disagree with Count Chocula on this, and say that a CALayer does appear to wrap a bitmapped texture on the GPU. Yes, you can specify custom Quartz drawing to make up the layer's content, but that drawing only takes place when necessary. Once the content in a layer is drawn, it does not need to be redrawn for the layer to be moved or otherwise animated around. If you apply a transform to a layer, you'll see it get pixelated as you zoom in, a sign that it is not dealing with vector graphics. Additionally, with the Core Plot framework (and in my own applications), we had to override the normal drawing process of CALayers because the normal -renderInContext: approach did not work well for PDFs. If you use this to render a layer and its sublayers into a PDF, you'll find that the layers are represented by raster bitmaps in the final PDF, not the vector elements they should be. Only by using a different rendering path were we able to get the right output for our PDFs. 
I've yet to play with the new shouldRasterize and rasterizationScale properties in iOS 3.2 to see if they change the way this is handled. In fact, you'll find that CALayers (and UIViews with their backing layers) do consume a lot of memory when you take their bitmapped contents into account. The "lightweight" measure is how much they add on top of the contents, which is very little. You might not see the memory usage from an instrument like Object Allocations, but look at Memory Monitor when you add large layers to your application and you'll see memory spikes in either your application or SpringBoard (which owns the Core Animation server). When it comes to the presentation layer vs. the model one, the bitmap is not duplicated between them. There should only be the one bitmapped texture being displayed to the screen at a given moment. The different layers merely track the properties and animations at any given moment, so very little information is stored in each. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4299295', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/472300/']} | jdg_80358 |
stackexchange | llm_judgeable_groundtruth_similarity | 52394858 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I’m implementing MVVM and data binding, and I’m trying to understand when I should use ObservableField over LiveData. I already ran through different documentation and discovered that LiveData is lifecycle-aware, but in sample code on GitHub these two are being used in a ViewModel at the same time. So, I’m confused: if LiveData is better than ObservableField, why not just use LiveData everywhere?
Now provide the response and nothing else.
| Both have their use-cases, for instance:

- If you want a life-cycle tolerant container for your UI state model, LiveData is the answer.
- If you want to make the UI update itself when a piece of logic is changed in your view model, then use ObservableFields.

I myself prefer using a combination of LiveData and ObservableField/BaseObservable: the LiveData will normally behave as a life-cycle aware data container and also a channel between the VM and the View. On the other hand, the UI state model objects that are emitted through the LiveData are themselves BaseObservable or have their fields as ObservableField. That way I can use the LiveData for total changes of the UI state, and set values on the UI state model's ObservableField fields whenever a small portion of the UI is to be updated.

Edit: Here is a quick illustration on a UserProfile component, for example:

UIStateModel

    data class ProfileUIModel(
        private var _name: String,
        private var _age: Int
    ) : BaseObservable() {
        var name: String
            @Bindable get() = _name
            set(value) {
                _name = value
                notifyPropertyChanged(BR.name)
            }
        var age: Int
            @Bindable get() = _age
            set(value) {
                _age = value
                notifyPropertyChanged(BR.age)
            }
    }

ViewModel

    class UserProfileViewModel : ViewModel() {
        val profileLiveData: MutableLiveData<ProfileUIModel> = MutableLiveData()
        ...
        // When you need to rebind the whole profile UI object.
        profileLiveData.value = profileUIModel
        ...
        // When you need to update a specific part of the UI.
        // This will trigger the notifyPropertyChanged method on the bindable
        // field "age" and hence notify the UI elements that are observing it
        // to update.
        profileLiveData.value?.age = 20
    }

View

You'll observe the profile LiveData changes normally.

XML

You'll use data binding to bind the UI state model.

Edit: Now the mature me prefers immutability instead of having mutable properties, as explained in the answer.
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52394858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/239431/']} | jdg_80359 |
stackexchange | llm_judgeable_groundtruth_similarity | 509232 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
On my Debian GNU/Linux 9 system, when a binary is executed, the stack is uninitialized but the heap is zero-initialized. Why? I assume that zero-initialization promotes security but, if for the heap, then why not also for the stack? Does the stack, too, not need security? My question is not specific to Debian as far as I know.

Sample C code:

    #include <stddef.h>
    #include <stdlib.h>
    #include <stdio.h>

    const size_t n = 8;

    // --------------------------------------------------------------------
    // UNINTERESTING CODE
    // --------------------------------------------------------------------

    static void print_array(
        const int *const p, const size_t size, const char *const name
    )
    {
        printf("%s at %p: ", name, p);
        for (size_t i = 0; i < size; ++i)
            printf("%d ", p[i]);
        printf("\n");
    }

    // --------------------------------------------------------------------
    // INTERESTING CODE
    // --------------------------------------------------------------------

    int main()
    {
        int a[n];
        int *const b = malloc(n*sizeof(int));
        print_array(a, n, "a");
        print_array(b, n, "b");
        free(b);
        return 0;
    }

Output:

    a at 0x7ffe118997e0: 194 0 294230047 32766 294230046 32766 -550453275 32713
    b at 0x561d4bbfe010: 0 0 0 0 0 0 0 0

The C standard does not ask malloc() to clear memory before allocating it, of course, but my C program is merely for illustration. The question is not a question about C or about C's standard library. Rather, the question is a question about why the kernel and/or run-time loader are zeroing the heap but not the stack.

ANOTHER EXPERIMENT

My question regards observable GNU/Linux behavior rather than the requirements of standards documents.
If unsure what I mean, then try this code, which invokes further undefined behavior (undefined, that is, as far as the C standard is concerned) to illustrate the point:

    #include <stddef.h>
    #include <stdlib.h>
    #include <stdio.h>

    const size_t n = 4;

    int main()
    {
        for (size_t i = n; i; --i) {
            int *const p = malloc(sizeof(int));
            printf("%p %d ", p, *p);
            ++*p;
            printf("%d\n", *p);
            free(p);
        }
        return 0;
    }

Output from my machine:

    0x555e86696010 0 1
    0x555e86696010 0 1
    0x555e86696010 0 1
    0x555e86696010 0 1

As far as the C standard is concerned, behavior is undefined, so my question does not regard the C standard. A call to malloc() need not return the same address each time but, since this call to malloc() does indeed happen to return the same address each time, it is interesting to notice that the memory, which is on the heap, is zeroed each time. The stack, by contrast, had not seemed to be zeroed. I do not know what the latter code will do on your machine, since I do not know which layer of the GNU/Linux system is causing the observed behavior. You can but try it.

UPDATE

@Kusalananda has observed in comments:

    For what it's worth, your most recent code returns different addresses and (occasional) uninitialised (non-zero) data when run on OpenBSD. This obviously does not say anything about the behaviour that you are witnessing on Linux.

That my result differs from the result on OpenBSD is indeed interesting. Apparently, my experiments were discovering not a kernel (or linker) security protocol, as I had thought, but a mere implementational artifact. In this light, I believe that, together, the answers below of @mosvy, @StephenKitt and @AndreasGrapentin settle my question.

See also on Stack Overflow: Why does malloc initialize the values to 0 in gcc? (credit: @bta).
Now provide the response and nothing else.
| The storage returned by malloc() is not zero-initialized. Do not ever assume it is.

In your test program, it's just a fluke: I guess the malloc() just got a fresh block off mmap(), but don't rely on that, either. For an example, if I run your program on my machine this way:

    $ echo 'void __attribute__((constructor)) p(void){ void *b = malloc(4444); memset(b, 4, 4444); free(b);}' | cc -include stdlib.h -include string.h -xc - -shared -o pollute.so
    $ LD_PRELOAD=./pollute.so ./your_program
    a at 0x7ffd40d3aa60: 1256994848 21891 1256994464 21891 1087613792 32765 0 0
    b at 0x55834c75d010: 67372036 67372036 67372036 67372036 67372036 67372036 67372036 67372036

Your second example is simply exposing an artifact of the malloc implementation in glibc; if you do that repeated malloc/free with a buffer larger than 8 bytes, you will clearly see that only the first 8 bytes are zeroed, as in the following sample code:

    #include <stddef.h>
    #include <stdlib.h>
    #include <stdio.h>

    const size_t n = 4;
    const size_t m = 0x10;

    int main()
    {
        for (size_t i = n; i; --i) {
            int *const p = malloc(m*sizeof(int));
            printf("%p ", p);
            for (size_t j = 0; j < m; ++j) {
                printf("%d:", p[j]);
                ++p[j];
                printf("%d ", p[j]);
            }
            free(p);
            printf("\n");
        }
        return 0;
    }

Output:

    0x55be12864010 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1 0:1
    0x55be12864010 0:1 0:1 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2 1:2
    0x55be12864010 0:1 0:1 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3 2:3
    0x55be12864010 0:1 0:1 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4 3:4

| {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/509232', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/18202/']} | jdg_80360 |
stackexchange | llm_judgeable_groundtruth_similarity | 58183962 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to display the tick texts on my plotly x-axis in different colors based on the respective string (based on a dictionary). Is there functionality in plotly to do this, maybe via HTML coding?

    ticktext = ['<font color="red">{}</font> '.format(x) for x in ticktexts]

doesn't work; it just renders the raw HTML string in the labels.
Now provide the response and nothing else.
| A little bit of a workaround using LaTeX can help you here (sorry @Iwileczek, I stole your example, hope you don't mind) because plotly has full LaTeX support:

    import plotly.graph_objects as go

    def color(color, text):
        s = '$\\color{' + str(color) + '}{' + str(text) + '}$'
        return s

    animals = ['giraffes', 'orangutans', 'monkeys']
    colors = ['red', 'green', 'yellow', 'blue']
    ticks = [5, 10, 15, 20]
    keys = dict(zip(ticks, colors))

    fig = go.Figure([go.Bar(x=animals, y=[20, 14, 23])])
    ticktext = [color(v, k) for k, v in keys.items()]
    print(ticktext)
    fig.update_layout(yaxis=dict(tickmode='array', ticktext=ticktext, tickvals=ticks))
    fig.show()

Update March 2021: If you don't want to use the LaTeX font rendered by MathJax in your output, use the solution of @Dapcer with HTML styling:

    def color(color, text):
        return f"<span style='color:{str(color)}'> {str(text)} </span>"

Example with fig.update_layout(font=dict(family="Times New Roman") [...]: | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58183962', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5388967/']} | jdg_80361 |
stackexchange | llm_judgeable_groundtruth_similarity | 96950 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Show that if $f$ is Riemann integrable on $[a,b]$ then $|f|$ is also Riemann integrable on $[a,b]$. My idea is: let $f$ be in $[a,b]$ less than $|f|$; since $f$ is integrable, then $|f|$ is also integrable on $[a,b]$.
Now provide the response and nothing else.
| Hints: Note that $f$, and thus $|f|$, is bounded on $[a,b]$.

First show that for any interval $I=[c,d]\subset[a,b]$,
$$\tag{1}\sup_{x\in I} |f(x)| -\inf_{x\in I}|f(x)| \le \sup_{x\in I} f(x) -\inf_{x\in I} f(x).$$

Next show that inequality (1) implies that for any partition $P$ of $[a,b]$, the upper Riemann sums $U(|f|, P)$ and $U(f, P)$ and the lower Riemann sums $L(|f|, P)$ and $L(f, P)$ satisfy
$$U(|f|, P)-L(|f|, P) \le U(f, P)-L(f, P).$$

Then argue that $U(|f|, P)-L(|f|, P)$ can be made as small as desired by taking an appropriate partition $P$ of $[a,b]$. Finally conclude that $|f|$ is integrable.

Per kahen's comment (see his link): In the above, the following characterization of Riemann integrability is used. For a partition $P=\{x_0,x_1,\ldots, x_n\}$ of $[a,b]$, define the upper Riemann sum of the bounded function $f$ by
$$U(f,P)= \sum_{j=1}^n \Bigl( \sup_{x\in[x_{j-1},x_j]} f(x)\Bigr)(x_j-x_{j-1});$$
and the lower Riemann sum by
$$L(f,P)= \sum_{j=1}^n \Bigl( \inf_{x\in[x_{j-1},x_j]} f(x)\Bigr)(x_j-x_{j-1}).$$
Then $f$ is Riemann integrable on $[a,b]$ if and only if $f$ is bounded on $[a,b]$, and for every $\epsilon>0$ there is a partition $P$ such that $U(f,P)-L(f,P)<\epsilon$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/96950', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/21667/']} | jdg_80362 |
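For the first hint, the estimate behind inequality (1) is the reverse triangle inequality; a sketch of the argument (my addition to the hints above):

```latex
% For all x, y in I:
|f(x)| - |f(y)| \;\le\; |f(x) - f(y)| \;\le\; \sup_{t\in I} f(t) - \inf_{t\in I} f(t).
```

Taking the supremum over $x\in I$ and then over $y\in I$ on the left-hand side gives $\sup_{x\in I}|f(x)| - \inf_{x\in I}|f(x)| \le \sup_{x\in I} f(x) - \inf_{x\in I} f(x)$, which is exactly (1).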
stackexchange | llm_judgeable_groundtruth_similarity | 38105543 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What I understand about a pointer method receiver versus a non-pointer method receiver is that the first one can be modified in the method and the second one can't. So, the following worked exactly as I expected:

    type student struct {
        name string
        age  int
    }

    func (s *student) update() {
        s.name = "unknown"
        s.age = 0
    }

    func main() {
        s := student{"hongseok", 13}
        fmt.Println(s)
        s.update()
        fmt.Println(s)
    }

It prints hongseok/13 and unknown/0. But I want to replace the whole s in the update method at once by reassigning it. So, I've just altered the update method as below:

    func (s *student) update() {
        s = &student{"unknown", 0}
    }

And it doesn't change s in the main method, and prints hongseok/13 twice.

    func (s *student) update() {
        *s = student{"unknown", 0}
    }

The above change fixes the problem. I think there's no semantic difference. What am I missing?
Now provide the response and nothing else.
| In the first example:

    func (s *student) update() {
        s = &student{"unknown", 0}
    }

You are assigning an entirely new "pointer value" to s, and the new *s points at a new student value. The variable s is scoped only to the method body, so there are no side effects after this returns.

In the second example:

    func (s *student) update() {
        *s = student{"unknown", 0}
    }

You are dereferencing s and assigning a new student value to *s, or to put it differently, you are putting a new student value at the address where s points. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/38105543', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1145208/']} | jdg_80363 |
stackexchange | llm_judgeable_groundtruth_similarity | 18767829 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Following this question, I tried to copy-paste the example found here into VS2010:

    #include <algorithm>
    #include <vector>
    #include <iostream>

    struct S
    {
        int number;
        char name;

        S(int number, char name) : number(number), name(name) {}

        // only the number is relevant with this comparison
        bool operator<(const S& s) const { return number < s.number; }
    };

    struct Comp
    {
        bool operator()(const S& s, int i) { return s.number < i; }
        bool operator()(int i, const S& s) { return i < s.number; }
    };

    int main()
    {
        std::vector<S> vec = { {1,'A'}, {2,'B'}, {2,'C'}, {2,'D'}, {3,'F'}, {4,'G'} };
        // this syntax won't compile in VS2010, so you can leave an empty vector here

        auto p = std::equal_range(vec.begin(), vec.end(), 2, Comp());

        for (auto i = p.first; i != p.second; ++i)
            std::cout << i->name << ' ';
    }

This will compile fine in release mode, but in debug mode it will fail to compile. The reason is that in debug mode, the implementation will check if the iterator range is already sorted, using the given predicate:

    template<class _FwdIt, class _Pr> inline
    void _Debug_order2(_FwdIt _First, _FwdIt _Last, _Pr _Pred,
        _Dbfile_t _File, _Dbline_t _Line, forward_iterator_tag)
    {   // test if range is ordered by predicate, forward iterators
        for (_FwdIt _Next = _First; _First != _Last && ++_Next != _Last; ++_First)
            if (_DEBUG_LT_PRED(_Pred, *_Next, *_First))
                _DEBUG_ERROR2("sequence not ordered", _File, _Line);
    }

This ends up calling:

    template<class _Pr, class _Ty1, class _Ty2> inline
    bool _Debug_lt_pred(_Pr _Pred, const _Ty1& _Left, const _Ty2& _Right,
        _Dbfile_t _File, _Dbline_t _Line)
    {   // test if _Pred(_Left, _Right) and _Pred is strict weak ordering
        if (!_Pred(_Left, _Right))
            return (false);
        else if (_Pred(_Right, _Left))
            _DEBUG_ERROR2("invalid operator<", _File, _Line);
        return (true);
    }

Except that in my case, no operator() can take both a left and a right "S" argument. So, is there a bug in the Visual implementation?
Or is the original example not supposed to be portable? I think I could make it work by providing a 3rd operator() overload, but it seems like it should work without one. Thanks
Now provide the response and nothing else.
| Nothing in the standard requires the comparator to be callable with two objects from the range. So that is a bug in the standard library used by VS 2010. Here are all the relevant requirements (quoting C++11):

[lower.bound]§1+2:

    1 Requires: The elements e of [first,last) shall be partitioned with respect to the expression ... comp(e, value).
    2 Returns: The furthermost iterator i in the range [first,last] such that for any iterator j in the range [first,i) the following corresponding conditions hold: ... comp(*j, value) != false.

[upper.bound]§1+2:

    1 Requires: The elements e of [first,last) shall be partitioned with respect to the expression ... !comp(value, e).
    2 Returns: The furthermost iterator i in the range [first,last] such that for any iterator j in the range [first,i) the following corresponding conditions hold: ... comp(value, *j) == false.

[equal.range]§1+2:

    1 Requires: The elements e of [first,last) shall be partitioned with respect to the expressions ... comp(e, value) and !comp(value, e). Also, for all elements e of [first, last), ... comp(e, value) shall imply !comp(value, e).
    2 Returns: ... make_pair(lower_bound(first, last, value, comp), upper_bound(first, last, value, comp))

(The ellipses are for the non-comparator version.) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18767829', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/875295/']} | jdg_80364 |
stackexchange | llm_judgeable_groundtruth_similarity | 20609206 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that the auto layout chain consists in basically 3 different process. updating constraints layout views (here is where we get calculation of frames) display What's is not totally clear to me is the inner difference between -setNeedsLayout and -setNeedsUpdateConstraints . From Apple Docs: setNeedsLayout Call this method on your application’s main thread when you want to adjust the layout of a view’s subviews. This method makes a note of the request and returns immediately. Because this method does not force an immediate update, but instead waits for the next update cycle, you can use it to invalidate the layout of multiple views before any of those views are updated. This behavior allows you to consolidate all of your layout updates to one update cycle, which is usually better for performance. setNeedsUpdateConstraints When a property of your custom view changes in a way that would impact constraints, you can call this method to indicate that the constraints need to be updated at some point in the future. The system will then call updateConstraints as part of its normal layout pass. Updating constraints all at once just before they are needed ensures that you don’t needlessly recalculate constraints when multiple changes are made to your view in between layout passes. When I want to animate a view after modifying a constraint and animate the changes I usually call for instance: [UIView animateWithDuration:1.0f delay:0.0f usingSpringWithDamping:0.5f initialSpringVelocity:1 options:UIViewAnimationOptionCurveEaseInOut animations:^{ [self.modifConstrView setNeedsUpdateConstraints]; [self.modifConstrView layoutIfNeeded]; } completion:NULL]; I've found out that if I use -setNeedsLayout instead of -setNeedsUpdateConstraints everything work as expected, but if I change -layoutIfNeeded with -updateConstraintsIfNeeded , the animation won't happen. 
I've tried to draw my own conclusions: -updateConstraintsIfNeeded only updates constraints but doesn't force the layout to come into the process, thus original frames are still preserved; -setNeedsLayout also calls the -updateConstraints method. So when is it ok to use one instead of the other? And about the layout methods: do I need to call them on the view that has a change in a constraint, or on the parent view?
Now provide the response and nothing else.
| Your conclusions are right. The basic scheme is: setNeedsUpdateConstraints makes sure a future call to updateConstraintsIfNeeded calls updateConstraints. setNeedsLayout makes sure a future call to layoutIfNeeded calls layoutSubviews. When layoutSubviews is called, it also calls updateConstraintsIfNeeded, so calling it manually is rarely needed in my experience. In fact, I have never called it except when debugging layouts. Updating constraints using setNeedsUpdateConstraints is pretty rare too; objc.io (a must-read about autolayouts) says: If something changes later on that invalidates one of your constraints, you should remove the constraint immediately and call setNeedsUpdateConstraints. In fact, that’s the only case where you should have to trigger a constraint update pass. In addition, in my experience, I have never had to invalidate constraints without also setting setNeedsLayout in the next line of code, because new constraints pretty much ask for a new layout. The rules of thumb are: If you manipulated constraints directly, call setNeedsLayout. If you changed some conditions (like offsets or smth) which would change constraints in your overridden updateConstraints method (a recommended way to change constraints, btw), call setNeedsUpdateConstraints, and most of the time, setNeedsLayout after that. If you need any of the actions above to have immediate effect, e.g. when you need to learn the new frame height after a layout pass, append it with a layoutIfNeeded. Also, in your animation code, I believe setNeedsUpdateConstraints is unneeded, since constraints are updated before the animation manually, and the animation only re-lays-out the view based on differences between the old and new ones. | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/20609206', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/395897/']} | jdg_80365 |
stackexchange | llm_judgeable_groundtruth_similarity | 12377165 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
my query is sql = "SELECT SUM(TOTAL_BYTES_DELIVERED) / SUM(TOTAL_TIME_TAKEN_IN_DELIVERY) FROM MV_MFT_TRANSFER WHERE TRANSFER_INITIATION_TIME > :startDate AND TRANSFER_INITIATION_TIME < :endDate";
Query query = em.createNativeQuery(sql);
query.setParameter("startDate", startDate, TemporalType.DATE);
query.setParameter("endDate", endDate, TemporalType.DATE);
query.getResultList();
When I run this, I get an error SQLException thrown: <Sep 11, 2012 12:50:46 PM PDT> <Warning> <EclipseLink> <BEA-2005000> <2012-09-11 12:50:46.893--UnitOfWork(1387841584)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.4.1.v20120804-d768c4f): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Missing IN or OUT parameter at index:: 1
Error Code: 17041
Call: SELECT SUM(TOTAL_BYTES_DELIVERED) / SUM(TOTAL_TIME_TAKEN_IN_DELIVERY) FROM MV_MFT_TRANSFER WHERE TRANSFER_INITIATION_TIME > :startDate AND TRANSFER_INITIATION_TIME < :endDate
Query: DataReadQuery(sql="SELECT SUM(TOTAL_BYTES_DELIVERED) / SUM(TOTAL_TIME_TAKEN_IN_DELIVERY) FROM MV_MFT_TRANSFER WHERE TRANSFER_INITIATION_TIME > :startDate AND TRANSFER_INITIATION_TIME < :endDate")>
***SQLException in init() TRANSFER METRICS BEAN****
Local Exception Stack:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.4.1.v20120804-d768c4f): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Missing IN or OUT parameter at index:: 1
Now provide the response and nothing else.
| The following solution should work: sql = "SELECT SUM(TOTAL_BYTES_DELIVERED) / SUM(TOTAL_TIME_TAKEN_IN_DELIVERY) FROM MV_MFT_TRANSFER WHERE TRANSFER_INITIATION_TIME > ? AND TRANSFER_INITIATION_TIME < ?";
Query query = em.createNativeQuery(sql);
query.setParameter(1, startDate, TemporalType.DATE);
query.setParameter(2, endDate, TemporalType.DATE);
query.getResultList();
It seems that if you use positional parameters it will work. You cannot combine named parameters with native queries. Here are some links: http://java.boot.by/scbcd5-guide/ch08s05.html http://www.wisegeek.com/what-are-native-queries.htm And many more, just google for: "Only positional parameter binding may be portably used for native queries". EDIT: More links to questions with similar issues: How to get all the element from JDBC query JPA/Hibernate Native Queries do not recognize Parameters | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12377165', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1663948/']} | jdg_80366 |
stackexchange | llm_judgeable_groundtruth_similarity | 26533 |
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to know if the following method of 2x oversampling is correct:
Interpolate:
1. Take an original signal sampled at 44100Hz as input
2. Upsample by adding a zero after each original sample to get a signal twice as long
3. Filter the new signal using a low pass filter (cutoff of 44100Hz based on the new sample rate of 88200Hz)
Use new signal: Do some additional filtering of the new signal at 88200Hz
Decimate:
1. Take the final signal at 88200Hz as input
2. Filter using a low pass filter (cutoff of 44100Hz based on sample rate of 88200Hz)
3. Downsample by removing every other (odd) sample
4. Now have a signal that is 44100Hz again
What I would like to check is if the method is correct and my cutoff frequencies for my low pass filters are right.
Now provide the response and nothing else.
| Interpolation (sampling frequency 44.1 kHz ➔ 88.2 kHz) Your original 44.1 kHz sampled signal has frequencies up to 22.05 kHz, so you should lowpass filter at 22.05 kHz after dilution with zeros. Your filter should have a gain of 2. Otherwise the signal amplitude drops to half because you set half of the samples to zero. Like Jim Clay says, you can combine these things; there will be plenty of opportunity for optimization. Downsampling (sampling frequency 88.2 kHz ➔ 44.1 kHz) If your 88.2 kHz sampled signal is the one you lowpass filtered at 22.05 kHz, you can simply decimate by throwing away every second sample. Otherwise, first lowpass filter at 22.05 kHz to prevent aliasing, and then decimate. If you combine lowpass filtering and decimation, you can optimize again. | {} | {'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/26533', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/15030/']} | jdg_80367 |
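To make the scheme above concrete, here is a pure-Python sketch of the whole chain (zero-stuff, lowpass with gain 2 at 22.05 kHz, then lowpass with gain 1 and decimate). The 63-tap Hamming-windowed half-band sinc used here is an arbitrary illustrative choice, not the only correct filter; any decent lowpass at a quarter of the new rate will do:

```python
import math

def upsample2(x):
    """Insert a zero after every original sample (44.1 kHz -> 88.2 kHz)."""
    y = []
    for s in x:
        y.extend([s, 0.0])
    return y

def halfband(num_taps=63, gain=1.0):
    """Windowed-sinc lowpass at 22.05 kHz for the 88.2 kHz rate
    (normalized cutoff 0.25). Use gain=2 for the interpolation filter."""
    m = num_taps // 2
    h = []
    for n in range(num_taps):
        k = n - m
        ideal = 0.5 if k == 0 else math.sin(math.pi * 0.5 * k) / (math.pi * k)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))  # Hamming
        h.append(gain * ideal * window)
    return h

def fir_same(x, h):
    """FIR filtering with zero-padded edges, output aligned with the input."""
    m = len(h) // 2
    return [sum(h[j] * x[i + m - j]
                for j in range(len(h)) if 0 <= i + m - j < len(x))
            for i in range(len(x))]

def decimate2(x):
    """Keep every other sample (88.2 kHz -> 44.1 kHz)."""
    return x[::2]

x = [math.sin(0.1 * t) for t in range(256)]        # toy 44.1 kHz signal
y = fir_same(upsample2(x), halfband(gain=2.0))     # now at 88.2 kHz
# ... the extra processing at 88.2 kHz would go here ...
back = decimate2(fir_same(y, halfband(gain=1.0)))  # anti-alias, then downsample
```

Because the filter is half-band, the interpolated signal agrees with the original at the original sample instants, which is a handy sanity check that the gain-of-2 compensation is right.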
stackexchange | llm_judgeable_groundtruth_similarity | 13330596 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm developing a Twitter Feed View for an iOS App. I found TWRequest and it works exactly like what I was looking for. But I get a notice: "TWRequest is deprecated: first deprecated in iOS 6.0". What should I use instead?
Now provide the response and nothing else.
| On iOS 6 you should use the Social.framework. This has a class named SLRequest. You use it almost in the same way as the deprecated TWRequest, but you need to specify that it's a Twitter request as opposed to a Facebook request. The entire Twitter.framework became deprecated as of iOS 6: since Apple added Facebook and Weibo (a Chinese social network) to iOS 6, they grouped all social classes into the new Social.framework. Note you must specify the service type for Twitter/Facebook. Example: SLRequest *aRequest = [SLRequest requestForServiceType:SLServiceTypeTwitter requestMethod:SLRequestMethodPOST URL:myurl parameters:myparams]; Be sure to check out the documentation. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/13330596', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/462071/']} | jdg_80368 |
stackexchange | llm_judgeable_groundtruth_similarity | 22875419 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to generate a 256bit password for my AES encryption. When I check the password after the encryption it is different from my initial password. What am I doing wrong? Or is there some security mechanism I am not aware of? My code:
password = Generate_key();
var encrypted = CryptoJS.AES.encrypt("Message", password);
//Those two should be the same
document.write(password+"<br>");
document.write(encrypted.key);

function Generate_key() {
    var key = "";
    var hex = "0123456789abcdef";
    for (i = 0; i < 64; i++) {
        key += hex.charAt(Math.floor(Math.random() * 16)); //Initially this was charAt(chance.integer({min: 0, max: 15}));
    }
    return key;
}
The output is e.g. 0b05308c9a00f07044416bad7a51bacd282fc5c0c999551a4ff15c302b268b20 4df875993770411044fb35953166ee7833c32ca0741e9fec091dfa10138039e8 Is this normal or am I doing something wrong here? Thanks for the help!
Now provide the response and nothing else.
| Encryption is done with a key, which is a set of binary bits, not a password, which implies a human-readable string. To go from a password to a key, one can use a Password Based Key Derivation Function, such as PBKDF2. Crypto-JS already has a PBKDF2 function built in, e.g.:
<script src="http://crypto-js.googlecode.com/svn/tags/3.1.2/build/rollups/pbkdf2.js"></script>
<script>
    var salt = CryptoJS.lib.WordArray.random(128/8);
    var key128Bits = CryptoJS.PBKDF2("Secret Passphrase", salt, { keySize: 128/32 });
    var key256Bits = CryptoJS.PBKDF2("Secret Passphrase", salt, { keySize: 256/32 });
    var key512Bits = CryptoJS.PBKDF2("Secret Passphrase", salt, { keySize: 512/32 });
    var key512Bits1000Iterations = CryptoJS.PBKDF2("Secret Passphrase", salt, { keySize: 512/32, iterations: 1000 });
</script>
In general, use as high an iteration count as you can get away with. Salt should be a random value, as in the example above; you'll need, of course, to store that value along with the iteration count in order to get the same key given the same passphrase. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22875419', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2999288/']} | jdg_80369 |
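The same derive-don't-invent idea is available in Python's standard library via hashlib.pbkdf2_hmac, shown here as a language-neutral illustration of the answer (SHA-1 is used to mirror what I believe is CryptoJS's default PRF; treat that detail as an assumption and prefer SHA-256 in new code):

```python
import hashlib
import os

def derive_key(passphrase, salt, iterations=1000, key_bytes=32):
    """Derive a key_bytes-long key (32 bytes = 256 bits) from a passphrase."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode("utf-8"),
                               salt, iterations, dklen=key_bytes)

salt = os.urandom(128 // 8)                    # random 128-bit salt, stored with the ciphertext
key = derive_key("Secret Passphrase", salt)    # 256-bit AES key
assert len(key) == 32
```

The same passphrase with the same salt and iteration count always yields the same key, which is exactly what lets you decrypt later.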
stackexchange | llm_judgeable_groundtruth_similarity | 156008 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
What is the maximum number $k$ of unit radius, infinitely long cylinders with mutually disjoint interiors that can touch a unit ball? By a cylinder I mean a set congruent to the Cartesian product of a line and a circular disk. The illustrations, copied from 2 , show various configurations of six cylinders, perhaps all possible - up to isometry. The question is about 25 years old; the answer is conjectured to be 6, but the conjecture is still unconfirmed. Heppes and Szabó 1 proved in 1991 that $k\le 8$ ; nine years later Brass and Wenk 2 improved this result to $k\le 7$ . Also, one would like to know: What does the configuration space of six unit cylinders touching a unit ball look like? In particular: Is the configuration space connected? Even more specifically, referring to the configurations shown below: Within the configuration space, is a continuous transition possible from the configuration in Figures 1 and 2 to the configuration in Figure 3? The last question may be possible to verify by a natural candidate, but the necessary computations seem too tedious to me... 1 Heppes, Aladar and Szabó, Laszló. "On the number of cylinders touching a ball." Geom. Dedicata 40 (1991), no. 1, 111–116; MR1130481. 2 Brass, Peter and Wenk, Carola. "On the number of cylinders touching a ball." Geom. Dedicata 81 (2000), no. 1-3, 281–284; MR1772209.
Now provide the response and nothing else.
| Here is an idea. Consider the following parameterization, which is supposed to cover the configuration space in question. $$\mathcal{C}_7:=\left\{\pmatrix{x_k\\y_k\\z_k},\pmatrix{a_k\\b_k\\c_k}_{1\leq k\leq 7}\in{\mathbb{R}^{3\times 2}}^7\,\middle |\, \text{such that conditions 1.-4. are satisfied} \right\} $$ Conditions:
1. $x_k^2+y_k^2+z_k^2=1$
2. $\left\langle\pmatrix{x_k\\y_k\\z_k},\pmatrix{a_k\\b_k\\c_k} \right\rangle=0$
3. $a_k^2+b_k^2+c_k^2=1$
4. $d(l_i,l_j)\geq 2$ for $1\leq i<j\leq 7,$ where we define the line$$l_k:=\left\{2\pmatrix{x_k\\y_k\\z_k}+\alpha\pmatrix{a_k\\b_k\\c_k}\,\middle|\,\alpha\in\mathbb{R} \right\}$$ and denote with $d(\cdot,\cdot)$ the distance between two lines.
Note that condition 4. can be rewritten as polynomial inequalities. Hence $\mathcal{C}_7$ is a semi-algebraic set in $\mathbb{R}^{42}$. The $(x,y,z)$ are the points where the unit cylinder is tangent to the unit sphere. The corresponding $(a,b,c)$ gives the direction in the tangent space, and the lines $l$ are the cores of the cylinders. (Note that $(-a,-b,-c)$ gives the same cylinder.) The question "Is $\mathcal{C}_7$ empty?" should be decidable. Maybe an algorithmic approach could help from here. For the other questions, the study of an analogously defined $\mathcal{C}_6$, which we know to be non-empty, might be worthwhile. I wrote a little program that tries to find points in the described semi-algebraic sets. Here's what it found for $\mathcal{C}_6$ (click here for an animation). Let's take a slightly different point of view. Fix the radius of the ball to be $1$, but let the radii of the $k$ cylinders vary while making sure all cylinders have the same radius. We can then ask: What is the largest radius $r_k$, so that we can find $k$ non-overlapping cylinders of radius $r_k$ that touch the unit ball? 
Hence the question is: $r_7\geq 1?$ An obvious lower bound on $r_k$ comes from the packing that allows an equatorial section which is a circle packing, as for $k=6$ in figure 1 and figure 2 in the question post. We therefore have:$$r_k\geq \frac{\operatorname{sin}(\frac{\pi}{k})}{1-\operatorname{sin}(\frac{\pi}{k})}$$ Here's a list of decimal approximations for small $k$s:$$\begin{array}{c|cccccc}k&3&4&5&6&7&8\\\hline\frac{\operatorname{sin}(\frac{\pi}{k})}{1-\operatorname{sin}(\frac{\pi}{k})}&6.464101& 2.414213& 1.425919& 1&0.766421& 0.619914\end{array}$$A perhaps surprising result of my calculations is the fact that $r_6>1$, indeed$$r_6> 1.04965$$So in other words there is a configuration of $6$ cylinders where the cylinders have radius larger than $1.04965$. Here is a picture of the configuration (again click here for an animation): I also drew cylinders of radius $1$ with the same tangent points, so one can see the difference. The configuration space can be viewed as a subset of the $6$th power of the unit tangent bundle of the sphere $(T^1(S^2))^6$ (see conditions 1.-4. and Henrik Rüping's comment). The upshot of finding a configuration with larger radius is: the configuration space contains an open subset of $(T^1(S^2))^6$ and hence is $18$-dimensional locally. Edit: Here is a list of lower bounds on $r_k$ for small $k$: For $k=3$ and $k=4$ I conjecture the trivial bound for $r_k$ given above to be sharp. For $k=5$ one can find a configuration that shows: $r_5>1.45289>1.425919$ For $k=6$ we have $r_6>1.04965>1$ as mentioned above. For $k=7$ I found a configuration that shows $r_7>0.846934>0.766421$. Here is a picture of this configuration (again click here for an animation): | {} | {'log_upvote_score': 7, 'links': ['https://mathoverflow.net/questions/156008', 'https://mathoverflow.net', 'https://mathoverflow.net/users/36904/']} | jdg_80370 |
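The trivial bound and its table of decimal approximations are easy to double-check numerically (a quick sanity script, not part of the argument itself):

```python
import math

def trivial_lower_bound(k):
    """r_k >= sin(pi/k) / (1 - sin(pi/k)): the equatorial circle-packing
    configuration of k equal cylinders around the unit ball."""
    s = math.sin(math.pi / k)
    return s / (1.0 - s)

for k in range(3, 9):
    print(k, round(trivial_lower_bound(k), 6))
```

The bound is strictly decreasing in k, equals 1 exactly at k = 6, and drops below 1 at k = 7, which is why the question r_7 >= 1 is about beating the trivial configuration.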
stackexchange | llm_judgeable_groundtruth_similarity | 610611 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having trouble understanding something about the Monty Hall problem. If Monty opened one door before you arrived, then you would have a 50/50 chance, whichever door you picked, because there are only 2 doors to pick from - right? So, what about if you arrive, and Monty said "I assumed you would pick door 1, so I opened door 3"? In that case, picking door 2 would be the same as if you had picked door 1 and then switched, so you should have 2/3 odds on it. But if Monty hadn't said anything and you just picked one of the two doors, then your chances would be 50/50... or would they? Does Monty assuming that you would want to pick door 1 change the probabilities, even if you don't know about his decision? What about if someone else picks a door and tells Monty, who opens another one, and you then have the option to choose - without knowing which door the original person picked? What are the probabilities then? Does one of the doors secretly have a 2/3 probability? That doesn't make any sense to me. Can someone explain this? Because I'm really not getting it.
Now provide the response and nothing else.
| I'm assuming that you understand the argument in the original problem (if not, read the other answers!). This question is actually deeper than it may seem. The big issue here is that we commonly mean two different things by probability: the likelihood, in some abstract sense, of the world being in different ways, or our state of uncertainty about the world. The changing probability in the Monty Hall problem reflects not changes in the way the world is, but changes in the state of our knowledge about the world. This is the source of most confusion about the problem. You ask whether, in the case that a separate person had played the game with Monty earlier (but left two doors closed), somehow one of the doors would secretly have a $\frac{2}{3}$ probability of having the prize, and express that this seems like it would be really weird. It is really weird, and the weirdness comes from a conflation of these two ideas of probability. Let's make this more extreme to make it clearer. At the end of the last game, Monty revealed which door had the prize, but for some reason, the other guy left without taking it. He then closes all the doors again, and now you (who haven't seen anything) come on stage. Shouldn't one of the doors "secretly" have a probability $1$ of having the prize? The door behind which the prize is located is determined, there is nothing random about the state of the world. But of course you can't find it. In fact this is true in the original problem as well! The probabilities reflect the state of your ignorance; you have no information to make you favor one door over another. That's why being told information "changes" the probabilities: because the probabilities were expressions of the state of your knowledge in the first place, not facts about the world. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/610611', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/26613/']} | jdg_80371 |
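The pre-opened-door scenario from the question can also be checked with an exact Bayes computation (helper name is invented for illustration; exact arithmetic via fractions, so there is no simulation noise):

```python
from fractions import Fraction

def posterior(opened=3, assumed_pick=1):
    """Monty assumed you'd pick `assumed_pick` and opened a goat door before
    you arrived; return P(prize behind door d | that door was opened)."""
    doors = {1, 2, 3}
    weight = {d: Fraction(0) for d in doors}
    for prize in doors:                      # the prize is uniform over the doors
        goats = sorted(doors - {assumed_pick, prize})
        for d in goats:                      # Monty picks a goat door uniformly
            if d == opened:
                weight[prize] += Fraction(1, 3) * Fraction(1, len(goats))
    total = sum(weight.values())
    return {d: w / total for d, w in weight.items()}

post = posterior()
# prize is behind door 2 with probability 2/3, door 1 with 1/3, door 3 with 0
```

So the asker's intuition is right: once you learn Monty's assumption, picking door 2 is equivalent to the pick-then-switch strategy, and the probabilities describe your knowledge, not a hidden property of the doors.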
stackexchange | llm_judgeable_groundtruth_similarity | 86922 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $X$ is taken from a discrete uniform distribution on $\{1,2,\ldots,N\}$, why is the MLE of $N$ equal to $X$? I can give an intuitive argument: since you are only viewing one observation, it is the largest value and hence is the maximum. But is there a nice mathematical derivation via argmax or anything like that? Thanks!
Now provide the response and nothing else.
| Looking at the general case of a sample of $m$ i.i.d. realizations, the joint density of the sample is $$f(\mathbf X) = \prod_{i=1}^m \frac 1n \cdot \mathbf I\Big\{X_i\in \{1,...,n\}\Big\} $$ where $\mathbf I\{\}$ is the indicator function. Using the properties of the indicator function, and treating the joint density as a likelihood function of the unknown parameter $n$ given the actual realization of the sample, we have $$ L(n \mid \mathbf x) = \frac 1{n^m} \cdot \min_i\Big(\mathbf I\Big\{x_i \le n\Big\}\Big)$$ Now, if we overlooked the existence of the indicator function, the term $\frac 1{n^m}$ would be decreasing in $n$ - so the value of $n$ that would maximize the likelihood would be the smallest possible $n$, i.e. $n=1$. But the existence of the (minimum of the) $m$ indicator functions tells us that if even one $x$-realization is larger than the chosen value of $n$, then the indicator function for this $x$-realization will equal zero, hence the minimum of the $m$ indicator functions will equal zero, hence the likelihood will equal zero. So we need to choose $\hat n$ so that all realizations of the sample are equal to or smaller than it... so why not choose an arbitrarily large value? Because the further away we move from $\hat n =1$, the smaller the value of the likelihood becomes. So we want to move away as little as possible: this means that we choose $\hat n = \max_i\{x_i\}$, which is the argmax of the likelihood given the constraint, as this constraint is represented by the indicator function, because it reduces the value of the likelihood no more than needed in order to satisfy the constraint. Obviously the above holds for $m=1$ also. 
RESPONSE TO COMMENT 2022-05-28 (user @user1916067): Suppose we have a sample of 3 observations, say $\{3, 4, 5\}$, and so $m=3$. Using the expression for $L(n \mid \mathbf x)$, compute the value of the likelihood for the following candidate MLEs:
1. A value smaller than the realized values, say $\hat n = 2$
2. The values from the observed sample, $\hat n \in \{3, 4, 5\}$
3. A value larger than the realized values, say $\hat n = 6$
You will find that the value of the likelihood for $\hat n \in \{2,3,4\}$ is zero, and the value for $\hat n = 6$ is smaller than its value when $\hat n = 5 = \max_i x_i$. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/86922', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/38348/']} | jdg_80372 |
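That comparison is easy to tabulate with exact fractions, using the same sample {3, 4, 5} (a small sketch; the helper name is invented for illustration):

```python
from fractions import Fraction

def likelihood(n, xs):
    """L(n | xs) = n^(-m) when every observation is <= n, else 0."""
    return Fraction(0) if n < max(xs) else Fraction(1, n ** len(xs))

xs = [3, 4, 5]
table = {n: likelihood(n, xs) for n in range(1, 9)}
mle = max(table, key=table.get)   # the argmax is max(xs) = 5
```

The table is zero for every n below 5 (the indicator kills it) and strictly decreasing from n = 5 onward, so the maximizer is exactly the sample maximum.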
stackexchange | llm_judgeable_groundtruth_similarity | 18006801 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here's sample code:
//Called by VC:
HICircleView *circleView = [[HICircleView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
// init of circle view
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        CAShapeLayer *borderLayer = [CAShapeLayer layer];
        borderLayer.fillColor = [UIColor whiteColor].CGColor;
        borderLayer.path = [UIBezierPath bezierPathWithOvalInRect:self.frame].CGPath;
        borderLayer.strokeColor = [[UIColor redColor] CGColor];
        borderLayer.lineWidth = 5;
        [self.layer addSublayer:borderLayer];
    }
    return self;
}
OK, thanks for the answer. To shift it:
CGRect rect = CGRectMake(3, 3, self.frame.size.width, self.frame.size.height);
borderLayer.path = [UIBezierPath bezierPathWithOvalInRect:rect].CGPath;
And I made the line width 6.
Now provide the response and nothing else.
| Setting the lineWidth draws a line where the actual path is exactly in the middle of the drawn line. If you want the drawn line to line up with something, you will have to shift the path by half the lineWidth. You can shift the path by using - (void)applyTransform:(CGAffineTransform)transform on UIBezierPath and applying a translate transform. If you want a drawn path to be contained in a certain area, shifting the path doesn't help. In that case just create a smaller path. If you want to draw a 100pt x 100pt rect with a line width of 5, you have to draw a path in a 95pt x 95pt rect (2.5pt space on either side). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18006801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1178952/']} | jdg_80373 |
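The arithmetic from that last paragraph, written out as a tiny helper (the function name is invented for illustration; it just computes the half-line-width inset):

```python
def path_rect_for_stroke(width, height, line_width):
    """Rect for the path so that a centered stroke of line_width stays
    entirely inside a width x height view: inset half the stroke per side."""
    half = line_width / 2.0
    return (half, half, width - line_width, height - line_width)

path_rect_for_stroke(100, 100, 5)   # -> (2.5, 2.5, 95.0, 95.0)
```

So for the 100x100 view with lineWidth 5 from the question, the oval's path rect should be 95x95, offset by 2.5 on each side.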
stackexchange | llm_judgeable_groundtruth_similarity | 26061651 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the purpose of the name attribute on input, span, or div tags? Should an id attribute be used instead? Example from Mozilla's documentation: <label for="User">Click me</label><input type="text" id="User" name="Name" />
Now provide the response and nothing else.
| The name attribute identifies the input value if it is sent to a server via a traditional GET or POST of a form. Specific to the example, if you had:
<form action="http://localhost" method="POST">
    <label for="User">Click me</label>
    <input type="text" id="User" name="Name" value="foo" />
    <input type="submit" value="submit" />
</form>
and you submit the form, the server localhost will receive a content body like: Name=foo, which, as another post mentions, is usually parsed by a server-side language like PHP into something easier to work with. The id attribute identifies the input to the DOM. If you specify an input with no name but an id and try to submit it via a GET or POST, it will not be parsed correctly by the server. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26061651', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1125584/']} | jdg_80374 |
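What the server does with that body can be illustrated with Python's standard urlencoded parser standing in for PHP's $_POST (illustration only; any server-side stack does the equivalent):

```python
from urllib.parse import parse_qs

# Body the browser sends when the form above is submitted:
# only inputs that have a `name` contribute a key=value pair.
body = "Name=foo"
fields = parse_qs(body)   # {'Name': ['foo']}
```

An input that has only an id contributes nothing, so a form made of id-only inputs would submit an empty body, and parse_qs("") returns an empty dict.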
stackexchange | llm_judgeable_groundtruth_similarity | 27089196 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
From page 291 of OCP Java SE 6 Programmer Practice Exams, question 25:
public class Stone implements Runnable {
    static int id = 1;
    public void run() {
        id = 1 - id;
        if (id == 0) pick();
        else release();
    }
    private static synchronized void pick() {
        System.out.print("P ");
        System.out.print("Q ");
    }
    private synchronized void release() {
        System.out.print("R ");
        System.out.print("S ");
    }
    public static void main(String[] args) {
        Stone st = new Stone();
        new Thread(st).start();
        new Thread(st).start();
    }
}
One of the answers is: The output could be P Q P Q. I marked this answer as correct. My reasoning: We are starting two threads. The first one enters run(). According to JLS 15.26.1, it first evaluates 1 - id. The result is 0. It is stored on the thread's stack. We are just about to save that 0 to the static id, but... Boom, the scheduler chooses the second thread to run. So, the second thread enters run(). The static id is still 1, so id = 1 - id makes it 0 and the thread executes pick(). P Q is printed. The scheduler chooses the first thread to run. It takes 0 from its stack and saves it to the static id. So, the first thread also executes pick() and prints P Q. However, in the book it's written that this answer is incorrect: It is incorrect because the line id = 1 - id swaps the value of id between 0 and 1. There is no chance for the same method to be executed twice. I don't agree. I think there is some chance for the scenario I presented above. Such a swap is not atomic. Am I wrong?
Now provide the response and nothing else.
| Am I wrong? Nope, you're absolutely right - as is your example timeline. In addition to it not being atomic, it's not guaranteed that the write to id will be picked up by the other thread anyway, given that there's no synchronization and the field isn't volatile. It's somewhat disconcerting for reference material like this to be incorrect :( | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27089196', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1073386/']} | jdg_80375 |
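The disputed schedule can be replayed deterministically by splitting `id = 1 - id` into its read and write halves (a single-threaded Python sketch of the interleaving, not real JVM threading; names are invented for illustration):

```python
state = {"id": 1}   # stands in for the static field
printed = []

def pick():
    printed.extend(["P", "Q"])

def release():
    printed.extend(["R", "S"])

def read_step(s):
    # evaluate `1 - id`; this happens before the write,
    # and the result sits on the thread's stack
    return 1 - s["id"]

def write_and_branch(s, value):
    s["id"] = value
    pick() if s["id"] == 0 else release()

tmp = read_step(state)                     # thread 1 computes 0, then is preempted
write_and_branch(state, read_step(state))  # thread 2 runs run() fully: still sees id == 1, prints P Q
write_and_branch(state, tmp)               # thread 1 resumes with its stale 0: prints P Q again
# printed == ["P", "Q", "P", "Q"]
```

Both threads end up calling pick(), exactly as in the timeline above, because the read-compute-write is not one indivisible step.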
stackexchange | llm_judgeable_groundtruth_similarity | 1334762 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose $a_n\downarrow 0$, $\sum\limits_{n=1}^{\infty}a_n=+\infty$, and $b_n=\min\{a_n,1/n\}$. Prove that $\sum b_n$ diverges. In fact, I know that for two positive divergent series $\sum a_n$, $\sum b_n$ with $c_n=\min\{a_n,b_n\}$, the series $\sum c_n$ is not always divergent. But I do not know why the series above is surely divergent. Sincere thanks.
Now provide the response and nothing else.
| Assume each of the gold bars has the number 1, 2 or 3 stamped respectively on its underside where you cannot see it, and the gold bars stamped 1 and 2 are in the same box. You randomly pick a box and after looking in one compartment you see a gold bar. Knowing that you have picked a box which contains a gold bar, there are three possibilities. 1) The gold bar has a 1 stamped on it, therefore you have chosen the box containing the two gold bars. 2) The gold bar has a 2 stamped on it, therefore you have chosen the box containing the two gold bars. 3) The gold bar has a 3 stamped on it, therefore you have chosen the box containing one gold bar and one silver bar. The above three events are all equally probable, therefore the probability that the other bar in the box is also gold is $\frac{2}{3}$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1334762', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/220813/']} | jdg_80376 |
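The stamping argument can be enumerated directly (exact arithmetic; I also include the classic third silver-silver box as an assumption, since it drops out the moment you condition on having seen gold):

```python
from fractions import Fraction

# boxes as (bar, box-mate) pairs; bars 1 and 2 share the gold-gold box
boxes = [("gold", "gold"), ("gold", "silver"), ("silver", "silver")]

# every individual bar is equally likely to be the one you look at first
cases = [(seen, mate) for a, b in boxes for seen, mate in ((a, b), (b, a))]
gold_seen = [mate for seen, mate in cases if seen == "gold"]

p_other_gold = Fraction(gold_seen.count("gold"), len(gold_seen))   # 2/3
```

The three "gold bar seen" cases are exactly the three stamped bars from the argument above, two of which sit in the gold-gold box, giving 2/3.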
stackexchange | llm_judgeable_groundtruth_similarity | 20799498 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
From a specific order I want to create an invoice for some selected items from that order. I have successfully created an invoice for the whole order programmatically, but I want to create a partial invoice of that order.
Now provide the response and nothing else.
| At last I got it. Had to dig into Magento to get this.
$orderid // order id
$order = Mage::getModel('sales/order')->load($orderid);
or for order increment id
$orderincrmentid // order increment id
$order = Mage::getModel('sales/order')->loadByIncrementId($orderincrmentid);
if($order->canInvoice()) {
    $invoiceId = Mage::getModel('sales/order_invoice_api')
        ->create($order->getIncrementId(), $itemsarray, 'your_comment', 1, 1);
}
echo $invoiceId; // Gives increment Invoice id
@parameters for the above create function:
1st parameter: order increment id
2nd parameter: array // array format. [Main point]
foreach($order->getAllItems() as $item) {
    $item_id = $item->getItemId(); // order_item_id
    $qty = $item->getQtyOrdered(); // qty ordered for that item
}
array('56'=>'3','57'=>'1','58'=>'0');
array([order_item_id] => [qty]); // general array format
So here you will add the order item id as key and its qty as its value. If one does not want to create an invoice for a particular item, then simply pass its quantity as 0 (zero).
3rd parameter: comment
4th parameter: to send mail ----> 1; not send mail ----> 0
5th parameter: include comment in mail ----> 1; not include comment in mail ----> 0
It returns the invoice increment id. Hope it helps someone. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20799498', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1728613/']} | jdg_80377 |
stackexchange | llm_judgeable_groundtruth_similarity | 46107640 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In iOS 11 buttons and text fields are unresponsive when they are subviews of UIToolbar. Comparing the view hierarchy to iOS 10, we see there is a _UIToolbarContentView over all subviews of UIToolbar. For instance, this new layout of the UIToolbar breaks slacktextviewcontroller https://github.com/slackhq/SlackTextViewController/issues/604 Need a solution working in iOS 10/11.
Now provide the response and nothing else.
| To solve the problem for iOS 11 (compatible with lower versions) you only need to force a layout pass right after the UIToolbar was added as a subview to the UI hierarchy. In this case _UIToolbarContentView lowers to the first subview of the UIToolbar, and you can add all your subviews higher as before. For example, in ObjC:
UIToolbar *toolbar = [UIToolbar new];
[self addSubview: toolbar];
[toolbar layoutIfNeeded];
<here one can add all subviews needed>
The same problem happens with slacktextviewcontroller | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46107640', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8577612/']} | jdg_80378 |
stackexchange | llm_judgeable_groundtruth_similarity | 20829332 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to use Bootstrap for the first time and am having an issue with modal dialogs. Using the example code on this page , when the modal is opened a scrollbar appears, which also shifts the content on the page to the left. Code: <!-- Button trigger modal --><button class="btn btn-primary btn-lg" data-toggle="modal" data-target="#myModal"> Launch demo modal</button><!-- Modal --><div class="modal fade" id="myModal" tabindex="-1" role="dialog" aria-labelledby="myModalLabel" aria-hidden="true"> <div class="modal-dialog"> <div class="modal-content"> <div class="modal-header"> <button type="button" class="close" data-dismiss="modal" aria-hidden="true">×</button> <h4 class="modal-title" id="myModalLabel">Modal title</h4> </div> <div class="modal-body"> ... </div> <div class="modal-footer"> <button type="button" class="btn btn-default" data-dismiss="modal">Close</button> <button type="button" class="btn btn-primary">Save changes</button> </div> </div><!-- /.modal-content --> </div><!-- /.modal-dialog --></div><!-- /.modal --> JSFIDDLE: http://jsfiddle.net/WqRBP/ A lot of sites I'm looking at use Bootstrap and they don't have this issue, so is there some sort of easy fix for the problem? EDIT: The scrollbar appears in Chrome and IE, I haven't tested in other browsers. Here's what I'm seeing in JSFIDDLE:
Now provide the response and nothing else.
| The problem occurred because Twitter Bootstrap always shifts the page 15px to the left when a modal is opened. You can solve this by moving the page back to the right - margin-right: -15px . This can be done by using events show.bs.modal and hidden.bs.modal provided by Bootstrap's modal . $('#myModal').bind('hidden.bs.modal', function () { $("html").css("margin-right", "0px");});$('#myModal').bind('show.bs.modal', function () { $("html").css("margin-right", "-15px");}); jsFiddle FYI: show.bs.modal : This event fires immediately when the show instance method is called. If caused by a click, the clicked element is available as the relatedTarget property of the event. hidden.bs.modal : This event is fired when the modal has finished being hidden from the user (will wait for CSS transitions to complete). Reference | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20829332', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1101095/']} | jdg_80379 |
stackexchange | llm_judgeable_groundtruth_similarity | 38732564 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
<div style="overflow-x: scroll"> <table class="ui table"> ... table data ... </table></div> I have a table that has many columns and rows in it. It overflows off the page and I can only scroll horizontally when I reach the very bottom of the page/table. How can I make it so that no matter how "tall" the table is, I am still able to scroll horizontally at any point while scrolling up and down the table? Edit: JSFiddle link to example is here: http://jsfiddle.net/2111sku5/ In the JSFiddle, you can see that you have to scroll all the way to the bottom of the table in order to scroll horizontally to see the rest of the headers.
Now provide the response and nothing else.
| You must give your div that wraps the table a specific width and height and then set overflow to 'auto' or scroll. In your code, it should be like: <div style="width:800px;height:300px;overflow-x: scroll;"> <table> <thead> <tr> <th ng-repeat="datum in data" ng-bind="datum.id"></th> </tr> </thead> <tbody> <tr ng-repeat="datum in data"> <td ng-bind="datum.first_name"></td> <td ng-bind="datum.last_name"></td> <td ng-bind="datum.email"></td> <td ng-bind="datum.gender"></td> <td ng-bind="datum.ip_address"></td> </tr> </tbody> </table> </div> Edit: Here's a JSfiddle. See if this works for you: th, td { border: 1px solid black; padding: 5px;}#outer { display: flex; height: 70vh; flex-direction: row; align-items: stretch; border-style: solid;}#inner1 { flex: 1; border-style: dotted; overflow-x:scroll;}#inner2 { display: flex; flex-flow: column; height: 100%;}#inner2top { flex: 0 1 auto; background-color: cyan; float:left; position:fixed; width:100%;}#inner2bottom { flex: 1 1 auto;} <div id="outer"> <div> Hello This is a div. </div> <div id="inner1"> <div id="inner2"> <div id="inner2top"> Inner 2 top. </div> <div id="inner2bottom"> Inner 2 bottom. 
<table> <thead> <tr> <th>Name</th> <th>Position</th> <th>Office</th> <th>Age</th> <th>Start date</th> <th>Salary</th> <th>aoisdjfo;ajw;eoajw;eofjefi;awoefj</th> <th>oaisjdfoiajoifj</th> <th>foiajoijf</th> </tr> </thead> <tfoot> <tr> <th>Name</th> <th>Position</th> <th>Office</th> <th>Age</th> <th>Start date</th> <th>Salary</th> </tr> </tfoot> <tbody> <tr> <td>Tiger Nixon</td> <td>System Architect</td> <td>Edinburgh</td> <td>61</td> <td>2011/04/25</td> <td>$320,800</td> </tr> <tr> <td>Garrett Winters</td> <td>Accountant</td> <td>Tokyo</td> <td>63</td> <td>2011/07/25</td> <td>$170,750</td> </tr> <tr> <td>Ashton Cox</td> <td>Junior Technical Author</td> <td>San Francisco</td> <td>66</td> <td>2009/01/12</td> <td>$86,000</td> </tr> <tr> <td>Cedric Kelly</td> <td>Senior Javascript Developer</td> <td>Edinburgh</td> <td>22</td> <td>2012/03/29</td> <td>$433,060</td> </tr> <tr> <td>Airi Satou</td> <td>Accountant</td> <td>Tokyo</td> <td>33</td> <td>2008/11/28</td> <td>$162,700</td> </tr> <tr> <td>Brielle Williamson</td> <td>Integration Specialist</td> <td>New York</td> <td>61</td> <td>2012/12/02</td> <td>$372,000</td> </tr> <tr> <td>Herrod Chandler</td> <td>Sales Assistant</td> <td>San Francisco</td> <td>59</td> <td>2012/08/06</td> <td>$137,500</td> </tr> <tr> <td>Rhona Davidson</td> <td>Integration Specialist</td> <td>Tokyo</td> <td>55</td> <td>2010/10/14</td> <td>$327,900</td> </tr> <tr> <td>Colleen Hurst</td> <td>Javascript Developer</td> <td>San Francisco</td> <td>39</td> <td>2009/09/15</td> <td>$205,500</td> </tr> <tr> <td>Sonya Frost</td> <td>Software Engineer</td> <td>Edinburgh</td> <td>23</td> <td>2008/12/13</td> <td>$103,600</td> </tr> <tr> <td>Jena Gaines</td> <td>Office Manager</td> <td>London</td> <td>30</td> <td>2008/12/19</td> <td>$90,560</td> </tr> <tr> <td>Quinn Flynn</td> <td>Support Lead</td> <td>Edinburgh</td> <td>22</td> <td>2013/03/03</td> <td>$342,000</td> </tr> <tr> <td>Charde Marshall</td> <td>Regional Director</td> <td>San Francisco</td> <td>36</td> 
<td>2008/10/16</td> <td>$470,600</td> </tr> <tr> <td>Haley Kennedy</td> <td>Senior Marketing Designer</td> <td>London</td> <td>43</td> <td>2012/12/18</td> <td>$313,500</td> </tr> <tr> <td>Tatyana Fitzpatrick</td> <td>Regional Director</td> <td>London</td> <td>19</td> <td>2010/03/17</td> <td>$385,750</td> </tr> <tr> <td>Michael Silva</td> <td>Marketing Designer</td> <td>London</td> <td>66</td> <td>2012/11/27</td> <td>$198,500</td> </tr> <tr> <td>Paul Byrd</td> <td>Chief Financial Officer (CFO)</td> <td>New York</td> <td>64</td> <td>2010/06/09</td> <td>$725,000</td> </tr> <tr> <td>Gloria Little</td> <td>Systems Administrator</td> <td>New York</td> <td>59</td> <td>2009/04/10</td> <td>$237,500</td> </tr> <tr> <td>Bradley Greer</td> <td>Software Engineer</td> <td>London</td> <td>41</td> <td>2012/10/13</td> <td>$132,000</td> </tr> <tr> <td>Dai Rios</td> <td>Personnel Lead</td> <td>Edinburgh</td> <td>35</td> <td>2012/09/26</td> <td>$217,500</td> </tr> <tr> <td>Jenette Caldwell</td> <td>Development Lead</td> <td>New York</td> <td>30</td> <td>2011/09/03</td> <td>$345,000</td> </tr> <tr> <td>Yuri Berry</td> <td>Chief Marketing Officer (CMO)</td> <td>New York</td> <td>40</td> <td>2009/06/25</td> <td>$675,000</td> </tr> <tr> <td>Caesar Vance</td> <td>Pre-Sales Support</td> <td>New York</td> <td>21</td> <td>2011/12/12</td> <td>$106,450</td> </tr> <tr> <td>Doris Wilder</td> <td>Sales Assistant</td> <td>Sidney</td> <td>23</td> <td>2010/09/20</td> <td>$85,600</td> </tr> <tr> <td>Angelica Ramos</td> <td>Chief Executive Officer (CEO)</td> <td>London</td> <td>47</td> <td>2009/10/09</td> <td>$1,200,000</td> </tr> <tr> <td>Gavin Joyce</td> <td>Developer</td> <td>Edinburgh</td> <td>42</td> <td>2010/12/22</td> <td>$92,575</td> </tr> <tr> <td>Jennifer Chang</td> <td>Regional Director</td> <td>Singapore</td> <td>28</td> <td>2010/11/14</td> <td>$357,650</td> </tr> <tr> <td>Brenden Wagner</td> <td>Software Engineer</td> <td>San Francisco</td> <td>28</td> <td>2011/06/07</td> <td>$206,850</td> 
</tr> <tr> <td>Fiona Green</td> <td>Chief Operating Officer (COO)</td> <td>San Francisco</td> <td>48</td> <td>2010/03/11</td> <td>$850,000</td> </tr> <tr> <td>Shou Itou</td> <td>Regional Marketing</td> <td>Tokyo</td> <td>20</td> <td>2011/08/14</td> <td>$163,000</td> </tr> <tr> <td>Michelle House</td> <td>Integration Specialist</td> <td>Sidney</td> <td>37</td> <td>2011/06/02</td> <td>$95,400</td> </tr> <tr> <td>Suki Burks</td> <td>Developer</td> <td>London</td> <td>53</td> <td>2009/10/22</td> <td>$114,500</td> </tr> <tr> <td>Prescott Bartlett</td> <td>Technical Author</td> <td>London</td> <td>27</td> <td>2011/05/07</td> <td>$145,000</td> </tr> <tr> <td>Gavin Cortez</td> <td>Team Leader</td> <td>San Francisco</td> <td>22</td> <td>2008/10/26</td> <td>$235,500</td> </tr> <tr> <td>Martena Mccray</td> <td>Post-Sales support</td> <td>Edinburgh</td> <td>46</td> <td>2011/03/09</td> <td>$324,050</td> </tr> <tr> <td>Unity Butler</td> <td>Marketing Designer</td> <td>San Francisco</td> <td>47</td> <td>2009/12/09</td> <td>$85,675</td> </tr> <tr> <td>Howard Hatfield</td> <td>Office Manager</td> <td>San Francisco</td> <td>51</td> <td>2008/12/16</td> <td>$164,500</td> </tr> <tr> <td>Hope Fuentes</td> <td>Secretary</td> <td>San Francisco</td> <td>41</td> <td>2010/02/12</td> <td>$109,850</td> </tr> <tr> <td>Vivian Harrell</td> <td>Financial Controller</td> <td>San Francisco</td> <td>62</td> <td>2009/02/14</td> <td>$452,500</td> </tr> <tr> <td>Timothy Mooney</td> <td>Office Manager</td> <td>London</td> <td>37</td> <td>2008/12/11</td> <td>$136,200</td> </tr> <tr> <td>Jackson Bradshaw</td> <td>Director</td> <td>New York</td> <td>65</td> <td>2008/09/26</td> <td>$645,750</td> </tr> <tr> <td>Olivia Liang</td> <td>Support Engineer</td> <td>Singapore</td> <td>64</td> <td>2011/02/03</td> <td>$234,500</td> </tr> <tr> <td>Bruno Nash</td> <td>Software Engineer</td> <td>London</td> <td>38</td> <td>2011/05/03</td> <td>$163,500</td> </tr> <tr> <td>Sakura Yamamoto</td> <td>Support Engineer</td> 
<td>Tokyo</td> <td>37</td> <td>2009/08/19</td> <td>$139,575</td> </tr> <tr> <td>Thor Walton</td> <td>Developer</td> <td>New York</td> <td>61</td> <td>2013/08/11</td> <td>$98,540</td> </tr> <tr> <td>Finn Camacho</td> <td>Support Engineer</td> <td>San Francisco</td> <td>47</td> <td>2009/07/07</td> <td>$87,500</td> </tr> <tr> <td>Serge Baldwin</td> <td>Data Coordinator</td> <td>Singapore</td> <td>64</td> <td>2012/04/09</td> <td>$138,575</td> </tr> <tr> <td>Zenaida Frank</td> <td>Software Engineer</td> <td>New York</td> <td>63</td> <td>2010/01/04</td> <td>$125,250</td> </tr> <tr> <td>Zorita Serrano</td> <td>Software Engineer</td> <td>San Francisco</td> <td>56</td> <td>2012/06/01</td> <td>$115,000</td> </tr> <tr> <td>Jennifer Acosta</td> <td>Junior Javascript Developer</td> <td>Edinburgh</td> <td>43</td> <td>2013/02/01</td> <td>$75,650</td> </tr> <tr> <td>Cara Stevens</td> <td>Sales Assistant</td> <td>New York</td> <td>46</td> <td>2011/12/06</td> <td>$145,600</td> </tr> <tr> <td>Hermione Butler</td> <td>Regional Director</td> <td>London</td> <td>47</td> <td>2011/03/21</td> <td>$356,250</td> </tr> <tr> <td>Lael Greer</td> <td>Systems Administrator</td> <td>London</td> <td>21</td> <td>2009/02/27</td> <td>$103,500</td> </tr> <tr> <td>Jonas Alexander</td> <td>Developer</td> <td>San Francisco</td> <td>30</td> <td>2010/07/14</td> <td>$86,500</td> </tr> <tr> <td>Shad Decker</td> <td>Regional Director</td> <td>Edinburgh</td> <td>51</td> <td>2008/11/13</td> <td>$183,000</td> </tr> <tr> <td>Michael Bruce</td> <td>Javascript Developer</td> <td>Singapore</td> <td>29</td> <td>2011/06/27</td> <td>$183,000</td> </tr> <tr> <td>Donna Snider</td> <td>Customer Support</td> <td>New York</td> <td>27</td> <td>2011/01/25</td> <td>$112,000</td> </tr> </tbody> </table> </div> </div> </div></div> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38732564', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5835009/']} | jdg_80380 |
stackexchange | llm_judgeable_groundtruth_similarity | 844909 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
One of the nicer consequences of compactness is that there is no statement in first-order logic whose content "$\leq$ is a well-order". So we can show that there are countable structure $(M,\leq)$ which are elementarily equivalent to $\Bbb N$, but not isomorphic to it. What about adding $S,+,\cdot$ and $0$? Well. Also not very good. The theory of the structure $(\Bbb N,0,S,+,\cdot,\leq)$ has $2^{\aleph_0}$ non-isomorphic countable models. This shows that none of these additions get us any closer to finding an $\aleph_0$-categorical theory for $\Bbb N$. Question. Is there a finite (or countable if needed) first-order language which includes $\leq$, and an $\aleph_0$-categorical theory $T$ whose countable structure is order isomorphic to $(\Bbb N,\leq)$?
Now provide the response and nothing else.
| Take countably many functions and relations on $\mathbb{N}$. Add a symbol for each, together with all sentences that are true in $\mathbb{N}$ under the natural interpretation. Let the resulting theory be $T$. In the usual way we can use the Compactness Theorem and Downward Loweheim-Skolem to find a countable model of $T$ that is not isomorphic to $\mathbb{N}$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/844909', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/622/']} | jdg_80381 |
stackexchange | llm_judgeable_groundtruth_similarity | 365888 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Now, I realise the initial response to this is likely to be "you can't" or "use analytics", but I'll continue in the hope that someone has more insight than that. Google adwords with "autotagging" appends a "gclid" (presumably "google click id") to link that sends you to the advertised site. It appears in the web log since it's a query parameter, and it's used by analytics to tie that visit to the ad/campaign. What I would like to do is to extract any useful information from the gclid in order to do our own analysis on our traffic. The reasons for this are: Stats are imperfect, but if we are collating them, we know exactly what assumptions we have made, and how they were calculated. We can tie the data to the rest of our data and produce far more accurate stats wrt conversion rate. We don't have to rely on javascript for conversions. Now it is clear that the gclid is base64 encoded (or some close variant), and some parts of it vary more than others. Beyond that, I haven't been able to determine what any of it relates to. Does anybody have any insight into how I might approach decoding this, or has anybody already related gclids back to compaigns or even accounts? I have spoken to a couple of people at google, and despite their "don't be evil" motto, they were completely unwilling to discuss the possibility of divulging this information, even under an NDA. It seems they like the monopoly they have over our web stats.
Now provide the response and nothing else.
| By far the easiest solution is to manually tag your links with Google Analytics campaign tracking parameters (utm_source, utm_campaign, utm_medium, etc.) and then pull out that data. The gclid is dependent on more than just the adwords account/campaign/etc. If you click on the same adwords ad twice, it could give you different gclids, because there's all sorts of session and cost data associated with that particular click as well. Gclid is probably not 100% random, true, but I'd be very surprised and concerned if it were possible to extract all your Adwords data from that number. That would be a HUGE security flaw (i.e. an arbitrary user could view your Adwords data). More likely, a pseudo-random gclid is generated with every impression, and if that ad is clicked on, the gclid is logged in Adwords (otherwise it's thrown out). Analytics then uses that number to reconcile the data with Adwords after the fact. Other than that, there's no intrinsic value in the gclid number itself. In regards to your last point, attempting to crack or reverse-engineer this information is explicitly forbidden in both the Google Analytics and Google Adwords Terms of Service, and is grounds for a permanent ban. Additionally, the TOS that you agreed to when signing up for these services says that it is not your data to use in any way you feel like. Google is providing a free service, so there are strings attached. If you don't like not having complete control over your data, then there are plenty of other solutions out there. However, you will pay a premium for that kind of control. Google makes nearly all their money from selling ads. Adwords is their biggest money-making product. They're not going to give you confidential information about how it works. They don't know who you are, or what you're going to do with that information. 
It doesn't matter if you sign an NDA and they have legal recourse to sue you; if you give away that information to a competitor, your life isn't worth enough to pay back the money you will have lost them. Sorry to break it to you, but "Don't be Evil" or not, Google is a business, not a charity. They didn't become one of the most successful companies in the world by giving away their search algorithm to the first guy who asked for it. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/365888', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26334/']} | jdg_80382 |
stackexchange | llm_judgeable_groundtruth_similarity | 6768475 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So, I'm attempting to use this: http://blogs.msdn.com/b/priozersk/archive/2010/09/17/customizing-picker-box-dialog.aspx However, I just want a normal pickerboxdialog (just text) but I'd like to attach an id to it, so I can easily reference the selection the user picked. However, even after building my own class to pass in, I still cannot get the text to display properly (IE at all) within the pickerbox. Does anyone have an experience? I basically copied his code and still no luck...
Now provide the response and nothing else.
| I think the better question is: Why do you care? You're basically falling into the "browser detection" trap that a lot of people fell into in the late 90's / early 00's. Since then, though, we've learned that it's feature detection that's the more useful approach, not least because the features supported in a given browser were (mostly) a moving target. There's code now, running on IE9 with its dramatically-improved DOM and JavaScript support, that's not using those features because it's doing browser detection and falling back on IE6 techniques. So rather than worrying about V8 vs. JSC, just worry about the features you want. I don't know anything about JSC, but for instance let's assume it doesn't have the forEach method on arrays that V8 has (part of the ECMAScript 5th edition standard). Rather than throwing a big "V8 vs. JSC" lever, you'd do: if (typeof Array.prototype.forEach === "function") { // Code that expects `forEach`}else { // Code that falls back} (Your "code that falls back" might add forEach to the prototype, or maybe this test is in your own iterator function and you want to know whether to defer to a native implementation or supply your own.) And similarly for other features you want to use that may or may not be present. But if you really need to detect V8 vs. JSC (and from your comment it seems you may), this page seems to demonstrate a means of doing so, though it seems awfully fragile. 
Here's my slightly-edited version (not least to replace window.devicePixelRatio with the test for WebkitAppearance — the former gives false positives on at least some other browsers [Firefox, for instance, which uses Gecko, not WebKit]): var v8string = 'function%20javaEnabled%28%29%20%7B%20%5Bnative%20code%5D%20%7D';if ('WebkitAppearance' in document.documentElement.style) { //If (probably) WebKit browser if (escape(navigator.javaEnabled.toString()) === v8string) { console.log('V8 detected'); } else { console.log('JSC detected'); }} else { console.log("Not a WebKit browser");} Works for me detecting the difference between Chrome (which also uses V8) and Safari (which also uses JSC). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6768475', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/224200/']} | jdg_80383 |
stackexchange | llm_judgeable_groundtruth_similarity | 64059 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some website which requires a logon and shows sensitive information. The person goes to the page, is prompted to log in, then gets to see the information. The person logs out of the site, and is redirected back to the login page. The person then can hit "back" and go right back to the page where the sensitive information is contained. Since the browser just thinks of it as rendered HTML, it shows it to them no problem. Is there a way to prevent that information from being displayed when the person hits the "back" button from the logged out screen? I'm not trying to disable the back button itself, I'm just trying to keep the sensitive information from being displayed again because the person is not logged into the site anymore. For the sake of argument, the above site/scenario is in ASP.NET with Forms Authentication (so when the user goes to the first page, which is the page they want, they're redirected to the logon page - in case that makes a difference).
Now provide the response and nothing else.
| The short answer is that it cannot be done securely. There are, however, a lot of tricks that can be implemented to make it difficult for users to hit back and get sensitive data displayed. Response.Cache.SetCacheability(HttpCacheability.NoCache);Response.Cache.SetExpires(Now.AddSeconds(-1));Response.Cache.SetNoStore();Response.AppendHeader("Pragma", "no-cache"); This will disable caching on client side, however this is not supported by all browsers . If you have the option of using AJAX then sensitive data can be retrieved using a updatepanel that is updated from client code and therefore it will not be displayed when hitting back unless client is still logged in. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/64059', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2577/']} | jdg_80384 |
stackexchange | llm_judgeable_groundtruth_similarity | 59266 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The following question is from an physics exercise and I know the answer and the way to solve the problem but just curious why my own way doesn't work. The question is asking "A 2500kg space vehicle, initially at rest, falls vertically from a height of 3500km above the Earth's surface. Determine how much work is done by the force of gravity in bringing the vehicle to the Earth's surface." The way to solve that is to integrate the differential work from the force $F_g = GM_\text{earth}m/r^2$ such that $$W = \int_{6400 \times 10^3}^{(6400+3500)\times 10^3}F_g dr$$. It does solve the problem but my question is why can't I use the $mgh$ to estimate the work done by gravity?
Now provide the response and nothing else.
| What is missing here ist the relation to physical/chemical status of that "copper". What are You talking about? Copper metal? Copper atoms as vapour? Copper ions I or II in aqueous solution? As a matter of fact, copper atoms are paramagnetic (one unpaired electron is enough despite the number of the paired ones!) BTW, the Herren Stern and Gerlach knew that silver atoms are paramagnetic, guess why!.In bulk copper metal the odd electron is sent into the pool of electrons making the metallic bond, thus the metal is diamagnetic, the same is for Cu+ salts, whreas Cu++ salts are paramagnetic. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/59266', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/16191/']} | jdg_80385 |
stackexchange | llm_judgeable_groundtruth_similarity | 15254708 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to create a ReactiveUI derived collection that has more elements in it than the original? I've seen that there is a way of filtering a collection, and selecting single properties, but what I'm looking for is the equivalent of the SelectMany operation for enumerables. To illustrate, imagine trying to get a derived collection representing every passenger stuck in a traffic jam. class Car { ReactiveCollection<Passenger> Passengers;}var TrafficJam=new ReactiveCollection<Car>();EveryPassengerInTheTrafficJam=Cars.CreateDerivedCollection(c=>c.Passengers); The above doesn't work, I think the error was IEnumerable<ReactiveCollection<Passenger>> can't be cast to ReactiveCollection<Passenger> - or something up with the types, in any case. I can't figure out the right approach for this flattening - admittedly I may be barking up completely the wrong tree here, so please let me know if there's a better way of achieving the same thing!
Now provide the response and nothing else.
| At the moment, CreateDerivedCollection doesn't support SelectMany as a transformation, it gets too Tricky to handle removes. If you don't have many items, you can just regenerate the collection every time: cars.Changed .Select(_ => cars.SelectMany(x => x.Passengers).ToList()) .ToProperty(this, x => x.Passengers); Edit: Alright, here we go: var whenCarsOrPassengersInThoseCarsChange = Observable.Merge( cars.Changed .SelectMany(_ => cars.Select(x => x.Passengers.Changed).Merge()) .Select(_ => Unit.Default), cars.Changed.Select(_ => Unit.Default));whenCarsOrPassengersInThoseCarsChange.StartWith(Unit.Default) .Select(_ => cars.SelectMany(x => x.Passengers).ToList()) .ToProperty(this, x => x.Passengers); So, the idea is that we've got two main situations when we want to reevaluate the passengers list: When one of the passengers change in the cars When one of the cars change However, the tricky part is, we only want to watch passengers for cars in the collection (i.e. if a car is removed, we no longer care about its passengers). Properly tracking suicidal passengers So, the idea in that weird SelectMany is, "Every time the car list changes, build a new list of Observables that represent when the passenger collection changes, and merge them all together". However, if we only had that statement, we would have to wait for a car to be added and its passengers change before we got a new passenger list, so we also have to update when the list of cars change too. What's this "Unit" business? In this case, I actually don't care about the values that these Observables put out, just when they happen. "Unit" is the Rx version of void, it only has a single value, "Unit.Default". You use it when you only care when something happens, not what the value of it is. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15254708', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1037491/']} | jdg_80386 |
stackexchange | llm_judgeable_groundtruth_similarity | 26941144 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I run git diff , the header part of each diff comes out in white text. Since I'm using a light background it is hard to read, so I want to change it. I have found that I can change other colors in the diff output like this (in .gitconfig ): [color "diff"] old = green new = red But I can't figure out what to put there for the header color. Even better, is there someplace where all of the default git config settings are documented? By 'diff header' I mean lines like this: diff --git a/README.md b/README.mdindex f102026..c5e3428 100644--- a/README.md+++ b/README.md
Now provide the response and nothing else.
| Try setting color.diff.meta , e.g. git config --global color.diff.meta blue or by manually editing the configuration file [color "diff"] meta = blue You can look through the various color. settings in the git-config reference for more possible settings. The color.diff.meta setting is listed here: color.diff.<slot> Use customized color for diff colorization. <slot> specifies which part of the patch to use the specified color, and is one of plain (context text), meta (metainformation), frag (hunk header), func (function in hunk header), old (removed lines), new (added lines), commit (commit headers), or whitespace (highlighting whitespace errors). The values of these variables may be specified as in color.branch.<slot> . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26941144', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/708333/']} | jdg_80387 |
stackexchange | llm_judgeable_groundtruth_similarity | 777261 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Sometimes a local variable is used for the sole purpose of checking it in an assert(), like so - int Result = Func();assert( Result == 1 ); When compiling code in a Release build, assert()s are usually disabled, so this code may produce a warning about Result being set but never read. A possible workaround is - int Result = Func();if ( Result == 1 ){ assert( 0 );} But it requires too much typing, isn't easy on the eyes and causes the condition to be always checked (yes, the compiler may optimize the check away, but still). I'm looking for an alternative way to express this assert() in a way that wouldn't cause the warning, but still be simple to use and avoid changing the semantics of assert(). (disabling the warning using a #pragma in this region of code isn't an option, and lowering warning levels to make it go away isn't an option either...).
Now provide the response and nothing else.
| We use a macro to specifically indicate when something is unused: #define _unused(x) ((void)(x)) Then in your example, you'd have: int Result = Func();assert( Result == 1 );_unused( Result ); // make production build happy That way (a) the production build succeeds, and (b) it is obvious in the code that the variable is unused by design , not that it's just been forgotten about. This is especially helpful when parameters to a function are not used. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/777261', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/88451/']} | jdg_80388 |
stackexchange | llm_judgeable_groundtruth_similarity | 2402563 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $A$ and $B$ be two $n \times n$ symmetric and positive definite matrices. If $A \prec B$, then is it true that $B^{-1} \prec A^{-1}$? Here, $A \prec B$ means that $B-A$ is positive definite.
Now provide the response and nothing else.
| For ease of notation I'll use $\ge$ instead of $\succeq$ and so on. Lemma 1: $A \le B \Rightarrow C^T A C \le C^T B C$ for any conformable matrix $C.$ Proof. We have $x^TC^T(B-A)C x = (Cx)^T(B-A)C x \ge 0$ for any conformable vector $x$ so that $C^T(B-A)C \ge 0.$ Lemma 2: $I \preceq B \Rightarrow$ $B$ is invertible and $B^{-1} \preceq I.$ Proof. Since $B$ is symmetric we can find an orthogonal matrix $G$ and a diagonal matrix $D$ such that $B=GDG^T.$ Hence $I = G^T I G \le G^T B G = D.$ Thus all eigenvalues of $B$ are $\ge 1 >0,$ hence $B$ is invertible. Now write $B^{-1} = B^{-1/2}IB^{-1/2} \le B^{-1/2}BB^{-1/2} = I,$ where $B^\alpha := GD^{\alpha}G^T.$ Proposition: $0 < A \le B \Rightarrow$ $B$ invertible and $B^{-1} \le A^{-1}.$ Proof. \begin{align*}A\le B &\Rightarrow B-A \ge 0 \\ &\Rightarrow A^{-1/2}(B-A)A^{-1/2} \ge 0\\ &\Rightarrow A^{-1/2}BA^{-1/2} \ge I\\ &\Rightarrow A^{1/2}B^{-1}A^{1/2} \le I,\\\end{align*} hence \begin{align*}B^{-1} &= A^{-1/2}A^{1/2}B^{-1}A^{1/2}A^{-1/2}\\ &= A^{-1/2}(A^{-1/2}B^{-1}A^{-1/2})^{-1}A^{-1/2}\\ &\le A^{-1/2}IA^{-1/2}\\ &= A^{-1}.\end{align*} | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2402563', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/269771/']} | jdg_80389 |
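The proposition in the answer above is easy to spot-check numerically. Below is a small Python sketch (the random 2×2 SPD construction and the trial count are illustrative choices, not part of the original answer); it uses the closed-form 2×2 inverse and Sylvester's criterion for positive definiteness.

```python
import random

def inv2(m):
    # Closed-form inverse of a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def sub2(m, n):
    return [[m[i][j] - n[i][j] for j in range(2)] for i in range(2)]

def is_pd2(m):
    # Sylvester's criterion: a symmetric 2x2 matrix is positive definite
    # iff its top-left entry and its determinant are both positive
    return m[0][0] > 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] > 0

def random_spd2(rng):
    # G^T G + I is always symmetric positive definite
    g = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    gtg = [[sum(g[k][i] * g[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    return [[gtg[0][0] + 1.0, gtg[0][1]], [gtg[1][0], gtg[1][1] + 1.0]]

rng = random.Random(0)
for _ in range(500):
    a = random_spd2(rng)
    d = random_spd2(rng)  # D := B - A, positive definite by construction
    b = [[a[i][j] + d[i][j] for j in range(2)] for i in range(2)]
    assert is_pd2(sub2(b, a))              # A < B holds by construction
    assert is_pd2(sub2(inv2(a), inv2(b)))  # claimed conclusion: B^{-1} < A^{-1}
```

Each trial builds A ≻ 0 and B = A + D with D ≻ 0, then checks that A⁻¹ − B⁻¹ is positive definite, as the proposition asserts.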
stackexchange | llm_judgeable_groundtruth_similarity | 30165223 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Exoplayer and a GL SurfaceTexture (from a TextureView) to display a video. I'm reusing the same surface between video plays: I release the player and instantiate a new one. When the SurfaceTexture is displayed the second time, it displays the old texture from the last video until the player begins to play and fills the Surface with black. I'm looking for a way to draw a black rect to fill the surface with black, but I am unable to achieve this.
Now provide the response and nothing else.
| Using @fadden link on Grafika, I've made my own script to clear the surface. It's compatible from API 16. GIST /** * Clear the given surface Texture by attaching a GL context and clearing the surface. * @param texture a valid SurfaceTexture */private void clearSurface(SurfaceTexture texture) { if(texture == null){ return; } EGL10 egl = (EGL10) EGLContext.getEGL(); EGLDisplay display = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY); egl.eglInitialize(display, null); int[] attribList = { EGL10.EGL_RED_SIZE, 8, EGL10.EGL_GREEN_SIZE, 8, EGL10.EGL_BLUE_SIZE, 8, EGL10.EGL_ALPHA_SIZE, 8, EGL10.EGL_RENDERABLE_TYPE, EGL10.EGL_WINDOW_BIT, EGL10.EGL_NONE, 0, // placeholder for recordable [@-3] EGL10.EGL_NONE }; EGLConfig[] configs = new EGLConfig[1]; int[] numConfigs = new int[1]; egl.eglChooseConfig(display, attribList, configs, configs.length, numConfigs); EGLConfig config = configs[0]; EGLContext context = egl.eglCreateContext(display, config, EGL10.EGL_NO_CONTEXT, new int[]{ 12440, 2, EGL10.EGL_NONE }); EGLSurface eglSurface = egl.eglCreateWindowSurface(display, config, texture, new int[]{ EGL10.EGL_NONE }); egl.eglMakeCurrent(display, eglSurface, eglSurface, context); GLES20.glClearColor(0, 0, 0, 1); GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); egl.eglSwapBuffers(display, eglSurface); egl.eglDestroySurface(display, eglSurface); egl.eglMakeCurrent(display, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_CONTEXT); egl.eglDestroyContext(display, context); egl.eglTerminate(display);} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30165223', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1377145/']} | jdg_80390 |
stackexchange | llm_judgeable_groundtruth_similarity | 12069 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It is very crucial that I ask whether it could and not whether it does. I do not mean to be the least controversial. To my surprise, having read "Physics for Future Presidents" by Richard Muller last year, I've come across a sentence of the sort (I'm paraphrasing): cell phone radiation is way too weak to affect molecular structure, and therefore any claim about cell phone radiation causing cancer can be attributed to people blaming cell phones for their cancer. This seems like a very strong statement! Is there a consensus among physicists that there is absolutely nothing to the claim that cell phone radiation could possibly (in any significant way) cause cancer? Is this really just being thrown around because people don't understand the physics? To be very precise, the question is: is there a physics model that would suggest a mechanism by which cell phone radiation can cause any sort of damage that could lead to cancer?
Now provide the response and nothing else.
| No, cell phone use does not cause cancer. We know it doesn't cause cancer because: There is no plausible mechanism. Animal studies show no effect. Human studies that get non-null results don't show a dose-response. Doing these cell-phone studies with human subjects makes as much sense as doing studies to figure out whether I can cause my neighbors to get cancer by thinking evil thoughts about them. In both cases, there is no remotely plausible physical mechanism for the direct effect as postulated. The only reason to do the cell-phone study and not to do the evil-thoughts study is that the former appeals deeply to people's folk beliefs, which have been built up from decades of movies and comic books where "radiation" causes magical effects. Some studies with human subjects give positive results and some give null results. This should not surprise us. The studies are measuring the relative sizes of their random and systematic errors. In the studies where they succeed in getting their random errors down to a smaller level than their systematic errors, they will measure either a positive or a negative correlation with cancer. In the ones where they succeed in getting their systematic errors down to a smaller level than their random errors, they will get a null result. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/12069', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/4249/']} | jdg_80391 |
stackexchange | llm_judgeable_groundtruth_similarity | 571815 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to update bash on CentOS 7 - I need at least 4.4 for my project and the default shell on it is 4.2. It's a production server, so I literally just want a new version of bash as the default shell and that's it; I don't want to be messing around too much or updating anything else. Anyway, running: yum update bash returns No packages marked for update The command: yum repolist all shows that the CentOS 7 updates repo is enabled (not CentOS 7.* base/updates though). As a result, this command: yum --enablerepo=updates update bash does nothing. I can share my CentOS-Base.repo file, if it helps. What am I doing wrong?
Now provide the response and nothing else.
| The point of distributions like RedHat (and thus CentOS) is that it's stable; it doesn't have the latest version of every software, it has a consistent version. For CentOS7 the current version is bash-4.2.46-33.el7 . RedHat will backport security fixes but may not backport functionality enhancements because they can cause compatibility issues. If you need a different version then you may need to compile it from source and place it in a non-standard location (e.g. $HOME/bin/bash ). Don't overwrite /bin/bash because the OS may replace it at any time through yum update . In comparison, RedHat 8 (CentOS8) has bash 4.4, and Debian 10 has bash 5.0 | {} | {'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/571815', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/391872/']} | jdg_80392 |
stackexchange | llm_judgeable_groundtruth_similarity | 2279000 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a shortcut for Duplicate Line command in Visual Studio 2008? Some similar examples: in Notepad++ , I can duplicate the current line with: Ctrl + D in EditPlus : Ctrl + J in NetBeans : Ctrl + Shift + ↓ / ↑ in Eclipse , Ctrl + Alt + ↓ / ↑ in Vi/Vim , yy p etc.
Now provide the response and nothing else.
 | In Visual Studio 2022 Ctrl + E , V In Visual Studio 2019 Ctrl + D In Visual Studio 2017 (v15.6 and after) Ctrl + D In Visual Studio 2017 (pre v15.6) (edit) This feature is now built-in in VS2017: Ctrl + E , V duplicates a line if nothing is selected, or duplicates the selection. You can assign it to a different key combination, or find it in the menu: See this reference for more information. Pre VS2017, built-in method using clipboard As @cand mentioned, you can just do Ctrl + C ; Ctrl + V . Ctrl + C will copy the line if nothing is selected. Macro solution (pre VS2017) If you'd like to implement a more complete solution, perhaps to create a simpler keyboard shortcut or you don't want to affect the clipboard, see this guide: Visual Basic: Imports System Imports EnvDTE Imports EnvDTE80 Imports System.Diagnostics Public Module DuplicateLastLineModule Sub DuplicateLine() Dim line As String DTE.ActiveDocument.Selection.StartOfLine(0) DTE.ActiveDocument.Selection.EndOfLine(True) line = DTE.ActiveDocument.Selection.Text DTE.ActiveDocument.Selection.EndOfLine() DTE.ActiveDocument.Selection.NewLine() DTE.ActiveDocument.Selection.StartOfLine(0) DTE.ActiveDocument.Selection.Text = line End Sub End Module To create the macro, just go to the macro explorer ("Tools->Macros->Macro Explorer" or Alt + F8 ) and copy-paste the code in a new module. Now just assign a keyboard shortcut to it: go to Tools->Options... under Environment, click Keyboard in the "Show Commands Containing" textbox, enter "duplicate" (this according to the name you gave the module.) you should now see the macro in the list below choose "Text Editor" from the "Use new shortcut in" list set focus in the "Press shortcut keys" textbox and hit the combination on the keyboard you wish to use for it ( Ctrl + Shift + D in my case) hit the "Assign" button you should now see the shortcut in the "Shortcuts for selected command" textbox hit the OK button And that's it. Enjoy!
| {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/2279000', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/112104/']} | jdg_80393 |
stackexchange | llm_judgeable_groundtruth_similarity | 561900 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
https://access.redhat.com/site/security/updates/backporting/?sc_cid=3093 http://froginapan.blogspot.com/2012/07/redhats-backporting-activity.html As the above two links suggest, only security patches and certain selected "new" features and functionality are backported. My question is specifically regarding qemu-kvm and libvirt. If somebody knows for sure how advanced/improved these packages are as compared to their "original" release versions, please share that. rpm -q --changelog qemu-kvm ...shows many backports/patches, but I don't know if new features are being added consistently as well... Maybe someone who is following qemu-kvm development more closely would know...
Now provide the response and nothing else.
 | The Apache ProxyPassReverse does not rewrite the response bodies received from http://test.example.com, only headers (like redirects to a 404 page and such). A number of alternatives: One ) Rewrite the internal app to use relative paths instead of absolute, i.e. ../css/style.css instead of /css/style.css Two ) Redeploy the internal app in the same subdirectory /folder rather than in the root of test.example.com. Three ) One and two are often unlikely to happen... If you're lucky the internal app only uses two or three subdirectories and those are unused on your main site, simply write a bunch of ProxyPass lines: # Expose Internal App to the internet. ProxyPass /externalpath/ http://test.example.com/ ProxyPassReverse /externalpath/ http://test.example.com/ # Internal app uses a bunch of absolute paths. ProxyPass /css/ http://test.example.com/css/ ProxyPassReverse /css/ http://test.example.com/css/ ProxyPass /icons/ http://test.example.com/icons/ ProxyPassReverse /icons/ http://test.example.com/icons/ Four ) Create a separate subdomain for the internal app and simply reverse proxy everything: <VirtualHost *:80> ServerName app.example.com # Expose Internal App to the internet. ProxyPass / http://test.internal.example.com/ ProxyPassReverse / http://test.internal.example.com/ </VirtualHost> Five ) Sometimes developers are completely clueless and have their applications not only generate absolute URLs but even include the hostname part in their URLs, and the resulting HTML code looks like this: <img src=http://test.example.com/icons/logo.png> . A ) You can use a combination of split-horizon DNS and scenario 4. Both internal and external users use test.example.com, but your internal DNS points directly to the ip-address of test.example.com's server. For external users the public record for test.example.com points to the ip-address of your public webserver www.example.com and you can then use solution 4.
B ) You can actually get apache to to not only proxy requests to test.example.com, but also rewrite the response body before it will be transmitted to your users. (Normally a proxy only rewrites HTTP headers/responses). mod_substitute in apache 2.2. I haven't tested if it stacks well with mod_proxy, but maybe the following works: <Location /folder/> ProxyPass http://test.example.com/ ProxyPassReverse http://test.example.com/ AddOutputFilterByType SUBSTITUTE text/html Substitute "s|test.example.com/|www.example.com/folder/|i" </Location> | {} | {'log_upvote_score': 8, 'links': ['https://serverfault.com/questions/561900', 'https://serverfault.com', 'https://serverfault.com/users/182381/']} | jdg_80394 |
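The `Substitute` filter in option 5B rewrites response bodies on the fly. As an offline illustration only (this mimics the substitution; it is not `mod_substitute` itself), the same rewrite can be sketched in Python using the hostnames from the example above:

```python
import re

def rewrite_body(html: str) -> str:
    # Mirrors: Substitute "s|test.example.com/|www.example.com/folder/|i"
    return re.sub(r"test\.example\.com/", "www.example.com/folder/",
                  html, flags=re.IGNORECASE)

body = '<img src=http://test.example.com/icons/logo.png>'
print(rewrite_body(body))
# -> <img src=http://www.example.com/folder/icons/logo.png>
```

Note that, like the Apache directive, this is a plain case-insensitive textual substitution; it knows nothing about HTML structure.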
stackexchange | llm_judgeable_groundtruth_similarity | 2886259 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let A, B, and C be finite sets. Prove or disprove: If $|A \cup B \cup C | = |A| + |B| + |C|$ , then $A, B$ , and $C$ must be pairwise disjoint. I started with inclusion-exclusion formula $$|A \cup B \cup C | = |A| + |B| + |C| - |A \cap B| - |A \cap C| - |B \cap C| + |A \cap B \cap C|$$ Hypothesis says if $|A \cup B \cup C | = |A| + |B| + |C|$ , then expression $|A \cap B \cap C| - |A \cap B| - |A \cap C| - |B \cap C| $ , must be equal to zero. So, we have $$ |A \cap B \cap C| - |A \cap B| - |A \cap C| - |B \cap C| = 0$$ $$ |A \cap B \cap C| = |A \cap B| + |A \cap C| + |B \cap C| \tag{1}$$ Now, I am not sure how finish this proof. I found two ways, but I do not know if are correct. First way If $x \in |A \cap B \cap C|$ , then $x \in |A \cap B| \wedge x\in |A \cap C| \wedge x \in |B \cap C|$ . So if we count the cardinality of $|A \cap B \cap C| = |A \cap B| + |A \cap C| + |B \cap C|$ , then if $x \in |A \cap B \cap C|$ then $x$ is counted once but on RHS $x$ is counted three times. Thus the equation is true if $A = B = C = \emptyset$ . Therefore $A, B,$ and $C$ are pairwise disjoint. Second way From inclusion-exclusion formula, I know $$|A \cap B \cap C| = - |A| - |B| - |C| + |A \cap B| + |A \cap C| + |B \cap C| - |A \cup B \cup C |$$ I substitute in equation (1) $$ -|A| - |B| - |C| + |A \cap B| + |A \cap C| + |B \cap C| - |A \cup B \cup C | = |A \cap B| + |A \cap C| + |B \cap C|$$ The expression $|A \cap B| + |A \cap C| + |B \cap C|$ cancels out. $$ |A| + |B| + |C| + |A \cup B \cup C | = 0$$ And this equation is true if $A = B = C = \emptyset$ . Therefore $A, B,$ and $C$ are pairwise disjoint. My question is which way is correct and better or if there is a different method to prove the proposition. Thank you. (I am self-studying student, who is reading a textbook about discrete mathematics).
Now provide the response and nothing else.
 | Consider the following random process: Start with a natural number $n_0$. Choose a new number $n_1$ from $\{2,3,\ldots,n_0+1\}$ uniformly at random. Choose a new number $n_2$ from $\{2,3,\ldots,n_1+1\}$ uniformly at random. and repeat for however many steps. Your expression is ${\mathbb E}[1/n_k]$ - that is, it is the expected value of $1/{n_k}$. This is a Markov process and we can apply our typical analysis tools to it*. In particular, as $k$ increases, irrespective of our starting position, the distribution of $n_k$ approaches a unique steady state distribution - that is, a distribution invariant under the step. Let $p_m$ be the probability that, for large $k$ (i.e. in the limit), we have $n_k=m$. It must be a steady state in the sense that$$p_m=\sum_{\ell=m-1}^{\infty}\frac{1}{\ell}\cdot p_{\ell}$$which just says that, if we started in this distribution, we would stay in it. Now, we can construct a probability generating function for the steady state; we'll mess up the indexing slightly for convenience. Let:$$f(x)=p_2x+p_3x^2+p_4x^3+\ldots.$$Note that we can use formal integration to get$$\int_0^x f(t)\,dt=\frac{p_2}2x^2+\frac{p_3}3x^3+\ldots$$Then, noting $\frac{1}{1-x}=1+x+x^2+\ldots$ and slipping in an extra $x$ for fun, we can define a new function $g(x)$ as follows:$$g(x)=\frac{x\int_0^x f(t)\,dt}{1-x}=\frac{p_2}2x^3+\left(\frac{p_2}2+\frac{p_3}3\right)x^4+\left(\frac{p_2}2+\frac{p_3}3+\frac{p_4}4\right)x^5+\ldots.$$Why do this? Because if we let $C=\frac{p_2}2+\frac{p_3}3+\ldots$ - that is, $C$ is exactly the quantity we're looking for - we note that$$\frac{Cx}{1-x}-g(x)=\sum_{m=1}^{\infty}\sum_{\ell=m}^{\infty}\frac{1}{\ell}\cdot p_{\ell}\cdot x^m$$Which we recognize as $f(x)$.
Thus, we get an equation:$$(1-x)f(x)=Cx-x\int_0^x f(t)\,dt$$Now, I'll leave out the details, because I'm not good at calculus, but if you differentiate both sides of the equation twice, you get a degree two differential equation, and if you plug it into Mathematica, you get a solution:$$f(x)=c_1\cdot xe^x+c_2\cdot \frac{e+xe^x\operatorname{Ei}(x)}e$$where $c_1,c_2$ are as of yet unknown constants. However, we know that $f(0)=0$ from definition of $f$ and $f(1)=1$ since $p_2+p_3+\ldots=1$ since these are probabilities. This fixes $c_2=0$ and $c_1=\frac{1}e$. Thus $f(x)=\frac{1}exe^x$. This looks promising already! Then, observe that $\int_0^1 f(t)\,dt=C={\mathbb E}[1/n]$ just by looking at the series we derived earlier for the integral! So, then we just integrate that expression and we see $C=1/e$. Hurray! It worked! As a bonus, we know the steady state probabilities are just $p_n=\frac{1}{e(n-2)!}$ for $2\leq n$. (*I'm ignoring convergence issues, which are relevant since this Markov chain has infinitely many states. The generating function stuff can be justified after the fact, despite involving serious convergence problems - uniqueness of a solution is not so hard to establish - the usual argument of uniqueness and existence of a steady state still gives us uniqueness for infinite Markov processes - and the fact that $f(x)=\frac{1}exe^x$ is a solution is also not too hard to show. Convergence of the chain to its steady state brings more serious issues - I think they can be solved by bounding the probability that $n_k$ is large, which shouldn't be too hard, since the probability that $n_{k+1}>n_k$ given $n_k$ is just $\frac{1}{n_k}$) | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2886259', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/585122/']} | jdg_80395 |
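The closed form above is easy to test empirically. Here is a Monte Carlo sketch of the chain described in the answer, where n_{k+1} is uniform on {2, …, n_k + 1} (the step count, trial count, and seed are arbitrary illustrative choices):

```python
import random

def simulate(steps=30, trials=100_000, n0=2, seed=1):
    """Estimate E[1/n_k] after `steps` transitions of the chain."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        n = n0
        for _ in range(steps):
            n = rng.randint(2, n + 1)  # uniform on {2, ..., n+1}
        total += 1.0 / n
    return total / trials

print(simulate())  # should be close to 1/e ~ 0.367879
```

With these settings the estimate typically lands within a few thousandths of 1/e, consistent with the steady state p_n = 1/(e(n−2)!) derived in the answer.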
stackexchange | llm_judgeable_groundtruth_similarity | 24548634 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to use RecyclerView & CardView in an existing Android version. They said it is in the support library, so I should be able to use those by setting "compileSdkVersion" to "Android-L". Right? I am trying to use those widgets without an Android L Preview device or emulator. I have checked other questions on this matter, but it seems they are all trying Android-L widgets with the Android-L version. Here are my dependencies. compile 'com.android.support:support-v4:13.0.+' compile 'com.android.support:recyclerview-v7:+' Here is my target config minSdkVersion 15 targetSdkVersion 20 Thanks in advance for any idea.
Now provide the response and nothing else.
 | I just found the solution. The reason why I can't build the app with RecyclerView & CardView while the targetSdkVersion and minSdkVersion are not "Android-L" is because Google internally designed the tools to treat the preview version of the OS differently compared with regular releases. When I compile an app which contains the components from Android-L, the build tools lock minSdkVersion and targetSdkVersion to the same level. The new support libraries (RecyclerView, CardView, Palette, etc.) are also locked into the L API level. This behaviour only happens on this Android-L preview release. The fix for it is to put the following in AndroidManifest.xml. I didn't need to change anything in my Gradle script. <uses-sdk tools:node="replace" /> Since version 0.11 of the Android Gradle Plugin, a new Manifest Merger is turned on by default. It allows us to do some nifty stuff. This specific configuration tells the manifest processor to replace any attributes from uses-sdk nodes in lower-priority manifests (such as a library manifest in this case) with these attributes. Since Gradle also inserts minSdkVersion and targetSdkVersion from your build.gradle into this uses-sdk node, that's all we really need to add. Check here for more information related to this issue. Check here for the info related to Manifest Merger. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24548634', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1556503/']} | jdg_80396
stackexchange | llm_judgeable_groundtruth_similarity | 19652 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I am currently writing up some notes on the max-plus algebra $\mathbb{R}_{\max}$ (for those that have never seen the term "max-plus algebra", it is just $\mathbb{R}$ with addition replaced by $\max$ and multiplication replaced by addition. For some reason, authors whose main interest is control-theoretic applications never seem to use the term "tropical", and I have been reading from such authors). There is a nice result which says the following: $\textbf{Theorem.}$ Let $G$ be a directed graph on $n$ vertices such that each arc $(i,j)$ in $G$ has a real weight $w(i,j)$. Define the $n \times n$ matrix $A$ by $(A)_{ij} = w(i,j)$ if $(i,j)$ is an arc, and $(A)_{ij} = -\infty$ otherwise. Then for each $k > 0$, the maximum weight of a path of length $k$ from vertex $i$ to vertex $j$ is given by $(A^{\otimes k})_{ij}$ (here, $A^{\otimes k}$ is just the $k$th power of $A$, computed using the $\mathbb{R}_{\max}$ operations). This result is certainly analogous to the standard result that the $ij$-entry of the $k$th power of the adjacency matrix gives the number of walks of length $k$ from vertex $i$ to vertex $j$. When writing up my notes I found myself claiming that the above theorem provides some evidence that $\mathbb{R}_{\max}$ is in fact a natural setting in which to study weighted digraphs, since there is no natural definition of an ``adjacency matrix'' of a weighted digraph (in the usual setting of $\mathbb{R}^{n \times n}$) that gives useful information about the weights. This seemed like too strong of a claim, especially since I am no expert in networks or combinatorial optimization. This leads to the question: $\textbf{Question.}$ Is there a standard matrix (in $\mathbb{R}^{n \times n}$) associated with a weighted digraph that is analogous to the adjacency matrix and captures in a useful way the weights of the arcs?
$\textbf{Clarification:}$ By "analogous to the adjacency matrix" I mean a matrix that is defined simply in terms of the graph (vertices, arcs, and weights). I imagine there are all sorts of matrices associated to weighted digraphs so that computers can be used to analyze networks. But I am not interested in, say, a matrix that requires a complicated algorithm to compute its entries.
Now provide the response and nothing else.
| It looks like in your definition the weight of a path is the sum of the weights of its edges. In many combinatorial applications a natural definition of the weight of a path is the product of the weights of the edges, and there one uses precisely the weighted adjacency matrix $A_{ij} = w(i, j)$ (as an element of the usual $\mathbb{R}$). This is the definition relevant to, for example, the theory of Markov chains, where $w(i, j)$ is a transition probability. One way to get information about sums of weights is to use $B_{ij} = e^{w(i, j)}$, but what you'll get in the end is a sum of exponentials of weights instead of (direct) information about the maximum or minimum weight. I think one can instead consider $B_{ij}(t) = e^{t w(i, j)}$ and in the "low-temperature" limit as $t \to \infty$ this approaches the tropical result; the largest term will dominate. (I think physicists call these things partition functions.) | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/19652', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4977/']} | jdg_80397 |
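The theorem in the question is straightforward to check by machine. A Python sketch of ⊗-powers over ℝ_max follows (the 3-vertex weighted digraph is invented for illustration):

```python
NEG_INF = float("-inf")  # the max-plus "zero": no arc

def maxplus_mul(a, b):
    """(A (x) B)[i][j] = max over k of (A[i][k] + B[k][j])."""
    n = len(a)
    return [[max(a[i][k] + b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def maxplus_power(a, k):
    result = a
    for _ in range(k - 1):
        result = maxplus_mul(result, a)
    return result

# Arc weights of a toy digraph: 0->1 (1), 0->2 (4), 1->2 (2), 2->0 (3)
A = [[NEG_INF, 1.0, 4.0],
     [NEG_INF, NEG_INF, 2.0],
     [3.0, NEG_INF, NEG_INF]]

A2 = maxplus_power(A, 2)
print(A2[0][2])  # 3.0: best length-2 path is 0->1->2
print(A2[0][0])  # 7.0: best length-2 path is 0->2->0
```

Replacing max with a sum and + with × in maxplus_mul recovers the ordinary matrix product, and with 0/1 weights that gives the walk-counting result mentioned in the question.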
stackexchange | llm_judgeable_groundtruth_similarity | 4040761 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to Hibernate; after reading the Hibernate API and tutorial, it seems that the session should be closed when not used. Like this: Session sess = getSession(); Transaction tx = sess.beginTransaction(); /* do something using the session */ sess.save(obj); tx.commit(); sess.close(); I have no question when using it in a standalone application. However, I am not sure when using it in a web app. For example, I have a servlet, TestServlet, to receive the parameters from the client; then I call a Manager to query something according to the parameters, just like this: class TestServlet { doGet(HttpServletRequest request, HttpServletResponse response) { String para1 = request.getParam...(); String para2 = ..... new Manager().query(para1, para2); } } class Manager { public String query(String pa1, String pa2) { Session session = .... /* get the session */ /* do query using pa1 and pa2 */ session.close(); /* Here, I wonder if I should close it. */ } } Should I close the session in the query method? Someone told me that a session in Hibernate is just like a connection in JDBC, so is opening and closing it so frequently the correct way? BTW, is the tx.commit() required each time? Also, what's the thread problem about using a session in a servlet, since I saw that the session is not thread-safe in the API?
Now provide the response and nothing else.
 | I am new to Hibernate; after reading the Hibernate API and tutorial, it seems that the session should be closed when not used. It should be closed when you're done with it (but this can be done automatically for you, as we'll see). I have no question when using it in a standalone application. However, I am not sure when using it in a web app. Well, as explained in the section 11.1.1. Unit of work of the documentation, the most common pattern in a multi-user client/server application is session-per-request. For example, I have a servlet: TestServlet to receive the parameters from the client, then I call a Manager to query something according to the parameters: just like this (...) Should I close the session in the query method? It all depends on how you obtain the session. If you use sessionFactory.getCurrentSession() , you'll obtain a "current session" which is bound to the lifecycle of the transaction and will be automatically flushed and closed when the transaction ends (commit or rollback). If you decide to use sessionFactory.openSession() , you'll have to manage the session yourself and flush and close it "manually". To implement a session-per-request pattern, prefer the first approach (much easier and less verbose). Use the second approach to implement long conversations. The wiki page Sessions and transactions is a good complement to the documentation on this topic. BTW, is the tx.commit() required each time? You might want to read Non-transactional data access and the auto-commit mode to clarify a few things but, to put it simply, your Hibernate code has to be executed within a transaction and I'd suggest using explicit transaction boundaries (i.e. explicit beginTransaction and commit ). Also, what's the thread problem about using a session in a servlet, since I saw that the session is not thread-safe in the API? Just don't make it an instance variable of the Servlet and you won't have any problem. References Hibernate Core 3.3 Reference Guide Chapter 11.
Transactions and Concurrency Hibernate wiki Sessions and transactions Non-transactional data access and the auto-commit mode | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4040761', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/306719/']} | jdg_80398 |
stackexchange | llm_judgeable_groundtruth_similarity | 655719 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a standard SSL certificate I just purchased from Godaddy. Now it won't let me use a wild card for issuing the certificate for my domain. It says that the * wild card is not included in my product. Which means that I can only secure one domain like subdomain.example.com . Now my question is that if I get the certificate issued for subdomain.example.com , will it be valid for subdomain2.subdomain.example.com also?
Now provide the response and nothing else.
 | A certificate will only be valid for the names it was issued for. But, a certificate can contain multiple names, so in your case you could get a certificate which contains both subdomain.example.com and subdomain2.subdomain.example.com, or maybe just have *.subdomain.example.com. The latter would then match your subdomain2. Please note that a wildcard only matches a single label and that only a leftmost wildcard is possible, that is *.*.example.com is not possible and *.example.com will match subdomain.example.com but not subdomain2.subdomain.example.com or example.com. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/655719', 'https://serverfault.com', 'https://serverfault.com/users/261981/']} | jdg_80399
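The single-label rule in the answer can be written out explicitly. A minimal Python sketch of the matching logic (for illustration only; in practice, leave hostname verification to your TLS library):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """'*' is allowed only as the entire leftmost label and covers exactly one label."""
    p = pattern.lower().split(".")
    h = hostname.lower().split(".")
    if "*" in p[1:] or len(p) != len(h):
        return False  # no non-leftmost wildcards; '*' never spans extra labels
    return (p[0] == "*" or p[0] == h[0]) and p[1:] == h[1:]

assert wildcard_matches("*.example.com", "subdomain.example.com")
assert not wildcard_matches("*.example.com", "subdomain2.subdomain.example.com")
assert not wildcard_matches("*.example.com", "example.com")
assert not wildcard_matches("*.*.example.com", "a.b.example.com")
```

The assertions mirror the cases discussed in the answer: the wildcard covers exactly one label, so neither the bare domain nor a deeper subdomain matches.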
stackexchange | llm_judgeable_groundtruth_similarity | 535243 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Something that has always puzzled me is how we know the accuracy of voltmeters. From googling around, it seems that today's digital voltmeters use an ADC. An ADC works by comparing the voltage that is to be measured against a highly precise known voltage reference. However, how is the voltage of that highly precise voltage reference known? It seems to be like a chicken vs. egg kind of problem - knowing the accuracy of the voltmeter ADC reference voltage relies on the accuracy of another voltmeter using another reference voltage which relies on the accuracy of another voltmeter using another reference voltage and so on and it continues infinitely. From looking at the Wikipedia page for a voltmeter, it seems like something such as a Weston cell, which uses a reproducible and stable chemical reaction, is used as the ultimate reference voltage for calibration. But still, that falls prey to the chicken vs. egg dilemma - how do we know the voltage of the Weston cell without again using a voltmeter?
Now provide the response and nothing else.
| These days, you build a primary voltage standard from a bunch of Josephson junctions and a microwave source . That generates a voltage that's dependent only on the defining constants of the International System of Units (SI base units). As a more economical alternative, you send your voltmeter to a lab that compares it against a voltage standard that's traceable back to a primary voltage standard. In the US, that primary voltage standard is probably at NIST . Basically, every physical quantity can be mapped back to a physical constant that is defined, rather than measured. Seven of 'em (read the Wikipedia article ) are base units; the rest are derived. The volt, in particular, defined as the amount of electromotive force necessary to impart exactly one Joule on one Coulomb of charge. In SI base units, \$\mathrm{1V = \frac{kg\cdot m^2}{A \cdot s^3}}\$ . So just build any old dingus that lets you generate a volt as long as you know what those four quantities are, and you're done! As of May 20, 2019, all of these base units can, in theory, be reconstructed from first principles (i.e., the second is defined by a number of oscillations of a cesium maser, a meter is defined from the second and the speed of light, etc.). Ultimately all you need is a one-page reference guide, an astonishingly deep understanding of physics and metrology, and a staggeringly large gift certificate for a whole lot of lab time. | {} | {'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/535243', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/270060/']} | jdg_80400 |
stackexchange | llm_judgeable_groundtruth_similarity | 16797890 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This looks like a bug in lifting to null of operands on generic structs. Consider the following dummy struct, that overrides operator== : struct MyStruct{ private readonly int _value; public MyStruct(int val) { this._value = val; } public override bool Equals(object obj) { return false; } public override int GetHashCode() { return base.GetHashCode(); } public static bool operator ==(MyStruct a, MyStruct b) { return false; } public static bool operator !=(MyStruct a, MyStruct b) { return false; }} Now consider the following expressions: Expression<Func<MyStruct, MyStruct, bool>> exprA = (valueA, valueB) => valueA == valueB;Expression<Func<MyStruct?, MyStruct?, bool>> exprB = (nullableValueA, nullableValueB) => nullableValueA == nullableValueB;Expression<Func<MyStruct?, MyStruct, bool>> exprC = (nullableValueA, valueB) => nullableValueA == valueB; All three compile and run as expected. When they're compiled (using .Compile() ) they produce the following code (paraphrased to English from the IL): The first expression that takes only MyStruct (not nullable) args, simply calls op_Equality (our implementation of operator == ) The second expression, when compiled, produces code that checks each argument to see if it HasValue . If both don't (both equal null ), returns true . If only one has a value, returns false . Otherwise, calls op_Equality on the two values. The third expression checks the nullable argument to see if it has a value - if not, returns false. Otherwise, calls op_Equality . So far so good. Next step: do the exact same thing with a generic type - change MyStruct to MyStruct<T> everywhere in the definition of the type, and change it to MyStruct<int> in the expressions. Now the third expression compiles but throws a runtime exception InvalidOperationException with the following message: The operands for operator 'Equal' do not match the parameters of method 'op_Equality'. 
I would expect generic structs to behave exactly the same as non-generic ones, with all the nullable-lifting described above. So my questions are: Why is there a difference between generic and non-generic structs? What is the meaning of this exception? Is this a bug in C#/.NET? The full code for reproducing this is available on this gist .
Now provide the response and nothing else.
| The short answer is: yes, that's a bug. I've put a minimal repro and a short analysis below. My apologies. I wrote a lot of that code and so it was likely my bad. I have sent a repro off to the Roslyn development, test and program management teams. I doubt this reproduces in Roslyn, but they'll verify that it does not and decide whether this makes the bar for a C# 5 service pack. Feel free to enter an issue on connect.microsoft.com if you want it tracked there as well. Minimal repro:

using System;
using System.Linq.Expressions;

struct S<T>
{
    public static bool operator ==(S<T> a, S<T> b) { return false; }
    public static bool operator !=(S<T> a, S<T> b) { return false; }
}

class Program
{
    static void Main()
    {
        Expression<Func<S<int>?, S<int>, bool>> x = (a, b) => a == b;
    }
}

The code that is generated in the minimal repro is equivalent to

ParameterExpression pa = Expression.Parameter(typeof(S<int>?), "a");
ParameterExpression pb = Expression.Parameter(typeof(S<int>), "b");
Expression.Lambda<Func<S<int>?, S<int>, bool>>(
    Expression.Equal(pa, pb, false, infoof(S<int>.op_Equality)),
    new ParameterExpression[2] { pa, pb });

Where infoof is a fake operator that gets a MethodInfo for the given method. The correct code would be:

ParameterExpression pa = Expression.Parameter(typeof(S<int>?), "a");
ParameterExpression pb = Expression.Parameter(typeof(S<int>), "b");
Expression.Lambda<Func<S<int>?, S<int>, bool>>(
    Expression.Equal(pa, Expression.Convert(pb, typeof(S<int>?)), false, infoof(S<int>.op_Equality)),
    new ParameterExpression[2] { pa, pb });

The Equal method cannot deal with one nullable and one non-nullable operand. It requires that either both are nullable or neither is. (Note that the false is correct. This Boolean controls whether the result of a lifted equality is a lifted Boolean; in C# it is not, in VB it is.) 
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16797890', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/562906/']} | jdg_80401 |
stackexchange | llm_judgeable_groundtruth_similarity | 12614131 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Looking to improve quality of a fairly large Python project. I am happy with the types of warnings PyLint gives me. However, they are just too numerous and hard to enforce across a large organization. Also I believe that some code is more critical/sensitive than others with respect to where the next bug may come. For example I would like to spend more time validating a library method that is used by 100 modules rather than a script that was last touched 2 years ago and may not be used in production. Also it would be interesting to know modules that are frequently updated. Is anyone familiar with tools for Python or otherwise that help with this type of analysis?
Now provide the response and nothing else.
| Your problem is similar to the one I answered over at SQA https://sqa.stackexchange.com/a/3082 . This problem was associated with Java which made the tooling a bit easier, but I have a number of suggestions below. A number of other answers suggest that there are no good runtime tools for Python. I disagree on this in several ways:

Coverage tools work very well. Based on my experience in tooling in Java, static and dynamic analysis tools in Python are weaker than in a strongly typed, less dynamic language, but will work more than well enough to give good heuristics for you here. Unless you use an unusually large, pathological number of dynamic features (including adding and removing methods, intercepting method and property invocations, playing with import, manually modifying the namespace) - in which case any problems you have may well be associated with this dynamism...

Pylint picks up simpler problems, and will not detect problems with dynamic class/instance modifications and decorators - so it doesn't matter that the metric tools don't measure these.

In any case, where you can usefully focus can be determined by much more than a dependency graph.

Heuristics for selecting code

I find that there are a number of different considerations for selecting code for improvement which work both individually and together. Remember that, at first, all you need to do is find a productive seam of work - you don't need to find the absolutely worst code before you start. Use your judgement. After a few cycles through the codebase, you will have a huge amount of information and be much better positioned to continue your work - if indeed more needs to be done. That said, here are my suggestions:

High value to the business : For example, any code that could cost your company a lot of money. Many of these may be obvious or widely known (because they are important), or they may be detected by running the important use cases on a system with the run-time profiler enabled. I use Coverage . 
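As a concrete illustration of that profiler-driven heuristic, here is a minimal sketch using only Python's standard-library trace module as a lightweight stand-in for a real coverage/profiling tool; run_use_case is a hypothetical placeholder for one of your critical business flows, not part of any real project:

```python
# Sketch of the run-time heuristic: trace an important use case and see
# which lines execute most. The trace module here is a stand-in for a
# real profiler/coverage tool; run_use_case() is a hypothetical entry
# point standing in for one of your critical flows.
import trace

def run_use_case():
    # Placeholder for an important business flow.
    total = 0
    for i in range(3):
        total += i
    return total

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(run_use_case)

# counts maps (filename, lineno) -> execution count; the lines that run
# the most during critical flows are good candidates for review effort.
counts = tracer.results().counts
for (filename, lineno), count in sorted(counts.items(), key=lambda kv: -kv[1])[:5]:
    print(filename, lineno, count)
```

A real project would substitute coverage.py or a profiler, but the idea is the same: run the flows that matter and rank code by how hot it is.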
Static code metrics : There are a lot of metrics, but the ones that concern us are:

High afferent couplings . This is code that a lot of other files depend on. While I don't have a tool that directly outputs this, snakefood is a good way to dump the dependencies directly to file, one line per dependency, each being a tuple of afferent and efferent file. I hate to say it, but computing the afferent coupling value from this file is a simple exercise left to the reader.

High McCabe (cyclomatic) complexity : This is more complex code. PyMetrics seems to produce this measure although I have not used the tool.

Size : You can get a surprising amount of information by viewing the size of your project using a visualiser (eg https://superuser.com/questions/8248/how-can-i-visualize-the-file-system-usage-on-windows or https://superuser.com/questions/86194/good-program-to-visualize-file-system-usage-on-mac?lq=1 . Linux has KDirStat and Filelight). Large files are a good place to start as fixing one file fixes many warnings. Note that these tools are file-based. This is probably fine-enough resolution since you mention the project itself has hundreds of modules (files).

Changes frequently : Code that changes frequently is highly suspect. The code may:
- historically have had many defects, and empirically may continue to do so;
- be undergoing changes from feature development (high number of revisions in your VCS).
Find areas of change using a VCS visualisation tool such as those discussed later in this answer.

Uncovered code : Code not covered by tests. If you run (or can run) your unit tests, your other automated tests and typical user tests with coverage, take a look at the packages and files with next to no coverage. There are two logical reasons why there is no coverage:
- The code is needed (and important) but not tested at all (at least automatically). These areas are extremely high risk.
- The code may be unused and is a candidate for removal. 
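That "exercise left to the reader" can be a few lines of standard-library Python. The sketch below assumes each dump line is a ((root, depender), (root, dependee)) tuple, which is roughly snakefood's output shape; adjust the parsing if your dump differs, and note the sample data is made up:

```python
# Count afferent couplings: for each file, how many other files depend
# on it. The input format is an assumption modeled on snakefood's dump
# (one dependency tuple per line); the sample below is invented.
import ast
from collections import Counter

def afferent_couplings(lines):
    counts = Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        (_, depender), (_, dependee) = ast.literal_eval(line)
        if dependee:  # skip unresolved targets (may appear as None)
            counts[dependee] += 1
    return counts

sample = [
    "(('/proj', 'pkg/util.py'), ('/proj', 'pkg/core.py'))",
    "(('/proj', 'pkg/app.py'), ('/proj', 'pkg/core.py'))",
    "(('/proj', 'pkg/app.py'), ('/proj', 'pkg/util.py'))",
]
for path, n in afferent_couplings(sample).most_common():
    print(path, n)
```

Files at the top of the list are depended on the most, so defects there ripple the furthest.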
Ask other developers : You may be surprised at the 'smell' metrics you can gather by having a coffee with the longer-serving developers. I bet they will be very happy if someone cleans up a dirty area of the codebase where only the bravest souls will venture.

Visibility - detecting changes over time

I am assuming that your environment has a DVCS (such as Git or Mercurial) or at least a VCS (eg SVN). I hope that you are also using an issue or bug tracker of some kind. If so, there is a huge amount of information available. It's even better if developers have reliably checked in with comments and issue numbers. But how do you visualise it and use it? While you can tackle the problem on a single desktop, it is probably a good idea to set up a Continuous Integration (CI) environment, perhaps using a tool like Jenkins . To keep the answer short, I will assume Jenkins from now on. Jenkins comes with a large number of plugins that really help with code analysis. I use:

- py.test with JUnit test output picked up by the JUnit test report Jenkins plugin
- Coverage with the Cobertura plugin
- SLOCCount and the SLOCCount plugin
- Pylint and the Violations plugin

Apparently there is a plugin for McCabe (cyclomatic) complexity for Python , although I have not used it. It certainly looks interesting. This gives me visibility of changes over time, and I can drill in from there. For example, suppose PyLint violations start increasing in a module - I have evidence of the increase, and I know the package or file in which this is occurring, so I can find out who's involved and go speak with them. If you need historic data and you have just installed Jenkins, see if you can run a few manual builds that start at the beginning of the project and take a series of jumps forward in time until the present. You can choose milestone release tags (or dates) from the VCS. Another important area, as mentioned above, is detecting the loci of changes in the code base. I have really liked Atlassian Fisheye for this. 
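If you just want a quick approximation of change frequency without installing anything, you can count how often each file appears in the VCS log. This sketch assumes the output of git log --name-only --pretty=format: (one path per line, blank lines between commits) and runs on a canned sample rather than a live repository:

```python
# Rank files by how often they appear in the commit log. The input
# format assumes `git log --name-only --pretty=format:`; sample_log is
# a made-up stand-in for real output.
from collections import Counter

def change_frequency(log_text):
    counts = Counter()
    for line in log_text.splitlines():
        line = line.strip()
        if line:  # blank lines separate commits
            counts[line] += 1
    return counts

sample_log = """\
pkg/core.py
pkg/util.py

pkg/core.py

pkg/core.py
README
"""

for path, n in change_frequency(sample_log).most_common():
    print(path, n)
```

The files that top this list are your change hot spots; cross-reference them with the metrics above to pick where to focus.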
Besides being really good at searching for commit messages (eg bug id) or file contents at any point in time, it allows me to easily see metrics:

- Linecount by directory and subdirectory
- Committers at any point in time or in specific directories and/or files
- Patterns of committal, both by time and also location in the source code
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12614131', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/52490/']} | jdg_80402 |
stackexchange | llm_judgeable_groundtruth_similarity | 2960321 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I expect this is easy, but I'm not finding a simple explanation anywhere of how to do this. I have a standard HTML form like this: <form name="new_post" action="process_form.json" method=POST> <label>Title:</label> <input id="post_title" name="post.title" type="text" /><br/> <label>Name:</label><br/> <input id="post_name" name="post.name" type="text" /><br/> <label>Content:</label><br/> <textarea cols="40" id="post_content" name="post.content" rows="20"></textarea> <input id="new_post_submit" type="submit" value="Create" /></form> I'd like to have javascript (using jQuery) submit the form to the form's action (process_form.json), and receive a JSON response from the server. Then I'll have a javascript function that runs in response to the JSON response, like function form_success(json) { alert('Your form submission worked'); // process json response } How do I wire up the form submit button to call my form_success method when done? Also it should override the browser's own navigation, since I don't want to leave the page. Or should I move the button out of the form to do that?
Now provide the response and nothing else.
| If you want to get the response in a callback, you can't post the form. Posting the form means that the response is loaded as a page. You have to get the form data from the fields in the form and make an AJAX request. Example:

$(function(){
    $('form[name=new_post]').submit(function(){
        $.post($(this).attr('action'), $(this).serialize(), function(json) {
            alert(json);
        }, 'json');
        return false;
    });
});

Notice that you have to return false from the method that handles the submit event, otherwise the form will be posted also. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2960321', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/303056/']} | jdg_80403 |
stackexchange | llm_judgeable_groundtruth_similarity | 3954473 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider $V= \left \{ v= \begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix} : x_1-x_2+2x_3=0\right \}\subset K^3$ and the linear map $f:V \rightarrow K^2$ defined by $f\begin{pmatrix}x_1\\ x_2\\ x_3\end{pmatrix}=\begin{pmatrix}x_1-x_3\\ x_2-3x_3\end{pmatrix}$ Now, what is the matrix $M$ that represents $f:V \rightarrow K^2$ with respect to the basis of $V$ and the standard basis $(e_1,e_2)$ for $K^2$ . So far I found the basis of V of dimension 2 to be the vectors ${(1, 1, 0), (-2, 0, 1)}$ . But then I do not know how to continue. Maybe coordinate shift?
Now provide the response and nothing else.
| In the linear case we get Fibonacci numbers because each of the $n$ frogs either stays put or exchanges position with one of its neighbors. One can think of this as tiling some $1\times n$ space with $1\times 1$ dominos (corresponding to the frogs that stay put) and $1\times 2$ dominos (corresponding to pairs of frogs that exchange positions). Denote $g_n$ to be this number of linear arrangements for $n$ frogs. Then $g_1 = 1$ and $g_2 = 2$ , and for each $n\ge 3$ , we have $g_n = g_{n-1} + g_{n-2}$ , because if the first frog stays put, then we have $g_{n-1}$ many ways to arrange the rest, and if the first frog exchanges with the second frog, then we have $g_{n-2}$ many ways. Hence the recurrence, which is just the Fibonacci numbers (shifted). Here we have $g_{13} = 377$ , $g_{14} = 610$ . For the circular case , there are several interpretations of what you may want. In the simplest interpretation, the pots are all fixed in place (so we will ignore circular symmetry for now), and each frog will either stay in place or exchange with one of its neighbors. Denote $a_n$ to be the number of outcomes for $n$ many frogs. Then looking at frog number 1, it can either (1) stay in place, so the rest of the $n-1$ frogs are in a line, giving $g_{n-1}$ many ways; (2) exchange with frog 2, giving $g_{n-2}$ many ways; (3) exchange with frog $n$ , giving $g_{n-2}$ many ways. So this gives $a_n = g_{n-1} + 2g_{n-2}$ . Now, there is also the situation where all the frogs jump to the left, or all the frogs jump to the right. Then in this case we have $a_n = g_{n-1} + 2g_{n-2} + 2$ . (However, if you take rotational symmetry into consideration, these two extra configurations are rotationally the same as the configuration in which no frog moved.) With $n = 15$, it seems to agree with what you got. However, as the circular dish is spinning, it seems to me that we should take rotational symmetry into consideration, then we will get $a_n = g_{n-1} + 2g_{n-2}$ . 
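A few lines of code make it easy to check the recurrences above (the function names g and a are mine, mirroring the notation):

```python
# g(n): linear arrangements (shifted Fibonacci); a(n): circular
# arrangements, optionally counting the two full-rotation moves.
def g(n):
    a, b = 1, 2  # g_1 = 1, g_2 = 2
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, a + b
    return b

def a(n, with_rotations=True):
    return g(n - 1) + 2 * g(n - 2) + (2 if with_rotations else 0)

print(g(13), g(14))                  # 377 610
print(a(15))                         # 1366 (the two rotations counted)
print(a(15, with_rotations=False))   # 1364 (without the two rotations)
```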
If there are any other ways the frogs can move that are missing here, then we will need to account for those. Otherwise we have our result. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3954473', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_80404 |
stackexchange | llm_judgeable_groundtruth_similarity | 5582564 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
On the maps app in the IPad when you tap a pin you get a normal annotation with an "i" instead of a disclosure indicator. A further tap on the "i" reveals a popover view controller like this. Is there a way to easily achieve this?
Now provide the response and nothing else.
| First add an annotation to the map and in the viewForAnnotation method, set the rightCalloutAccessoryView to a button of type, say, UIButtonTypeDetailDisclosure (I don't think the blue info button is available by default). Pressing the button will call the calloutAccessoryControlTapped delegate method. In this method, deselect the annotation and show your popover. For example:

- (void)mapView:(MKMapView *)mapView annotationView:(MKAnnotationView *)view calloutAccessoryControlTapped:(UIControl *)control
{
    [mapView deselectAnnotation:view.annotation animated:YES];

    YourContentViewController *ycvc = [[YourContentViewController alloc] init...
    UIPopoverController *poc = [[UIPopoverController alloc] initWithContentViewController:ycvc];
    [ycvc release];

    //hold ref to popover in an ivar
    self.annotationPopoverController = poc;

    //size as needed
    poc.popoverContentSize = CGSizeMake(320, 400);

    //show the popover next to the annotation view (pin)
    [poc presentPopoverFromRect:view.bounds inView:view permittedArrowDirections:UIPopoverArrowDirectionAny animated:YES];
    [poc release];
}

YourContentViewController is a subclass of UIViewController which you can code like any other view controller. The Maps app looks like it has UITableView in the content. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5582564', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/296446/']} | jdg_80405 |
stackexchange | llm_judgeable_groundtruth_similarity | 46209766 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am following an Angular 4 tutorial about reactive forms. While applying the FormGroup and FormControl directives I am finding an inconsistency - <form [formGroup]="formObj"><div class="form-group"><input formControlName="username"> It applies the [formGroup] directive with square brackets but the formControlName directive without square brackets. What am I missing here?
Now provide the response and nothing else.
| If you have a directive mydir with the input mydir :

@Directive(...)
export class MyDir {
  @Input() mydir;
}

it can be used with or without brackets:

<span mydir="exp">...</span>
<span [mydir]="exp">...</span>

In the first case the value of the mydir binding will be a string exp :

export class MyDir {
  @Input() mydir;
  ngOnInit() {
    console.log(this.mydir); // "exp"
  }
}

In the second case the expression will be evaluated to whatever exp on the parent component contains:

@Component({ template: '<span [mydir]="exp"></span>' })
class Parent {
  exp = 'hello';
}

export class MyDir {
  @Input() mydir;
  ngOnInit() {
    console.log(this.mydir); // "hello"
  }
}

Now, to your case with form directives. The formGroup expects an instance of the FormGroup class. If you don't use [] the directive gets a string formObj instead of an object. The formControlName expects a string which is the name of the control to look up in the parent form group. That is why it is used without brackets. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46209766', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3405614/']} | jdg_80406 |
stackexchange | llm_judgeable_groundtruth_similarity | 14165885 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I used the "Add-In for Visual Studio" wizard to create a new Addin project and now, I'm trying to add some event handlers: public void OnConnection(object application, ext_ConnectMode connectMode, object addInInst, ref Array custom){ _applicationObject = (DTE2)application; _addInInstance = (AddIn)addInInst; _applicationObject.Events.BuildEvents.OnBuildBegin += BuildEvents_OnBuildBegin; _applicationObject.Events.BuildEvents.OnBuildDone += BuildEvents_OnBuildDone; _applicationObject.Events.SelectionEvents.OnChange += SelectionEvents_OnChange; _applicationObject.Events.DocumentEvents.DocumentOpened += DocumentEvents_DocumentOpened; _applicationObject.Events.DocumentEvents.DocumentSaved += DocumentEvents_DocumentSaved;} But whatever I do, my handlers are never executed! Am I blind? Do I have to do anything else to register these handlers or why doesn't it work?
Now provide the response and nothing else.
| Seems you're a victim of the Garbage Collector. See: http://www.mztools.com/articles/2005/mz2005012.aspx

private BuildEvents _buildEvents;
private SelectionEvents _selectionEvents;
private DocumentEvents _documentEvents;
private Events _events;

public void OnConnection(object application, ext_ConnectMode connectMode, object addInInst, ref Array custom)
{
    _applicationObject = (DTE2)application;
    _addInInstance = (AddIn)addInInst;

    _events = _applicationObject.Events;

    _buildEvents = _events.BuildEvents;
    _buildEvents.OnBuildBegin += BuildEvents_OnBuildBegin;
    _buildEvents.OnBuildDone += BuildEvents_OnBuildDone;

    _selectionEvents = _events.SelectionEvents;
    _selectionEvents.OnChange += SelectionEvents_OnChange;

    _documentEvents = _events.DocumentEvents;
    _documentEvents.DocumentOpened += DocumentEvents_DocumentOpened;
    _documentEvents.DocumentSaved += DocumentEvents_DocumentSaved;
}
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14165885', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1590632/']} | jdg_80407 |
stackexchange | llm_judgeable_groundtruth_similarity | 40445461 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I try to print the array returned from this function, I get a blank screen. My arrays $terms and $definitions are both the same length and they exist before and after I call make_associative_array() .

function make_associative_array() {
    return array_combine($terms, $definitions);
}
$c = make_associative_array();
print_r($c);

$terms: Array ( [0] => Nock (verb) [1] => End [2] => Serving [3] => Nock (noun) ) $definitions: Array ( [0] => To place an arrow against the string prior to shooting. [1] => A group of arrows shot during a tournament. Usually 6. [2] => Thread wound around a bow string to protect the string. [3] => A notch at the rear of an arrow. The bow string is placed in the nock. ) I am using PHP 5.6.27
Now provide the response and nothing else.
| Actually, you don't need forkJoin() nor switch() to do this. In general, you want to update each user in the array of users by another async call. I'd do it like this:

var source = findUser('term')
    .mergeAll()
    .mergeMap(user => getLastLogin(user.user_id)
        .map(last_login => {
            user.last_login = last_login;
            return user;
        })
    )
    .toArray();

source.subscribe(val => console.log(val));

Operator mergeAll() converts a higher-order Observable into single observables. In this case it takes the array of all users and re-emits them one by one. Then mergeMap() emits users updated with the last_login date. At the end I used toArray() to transform single users into one large array that is then emitted as a whole (you can remove this operator if you want to emit single users instead). Note that when you used return users.map(...) you were using Array.map() that returns an array and not map() from RxJS that returns an Observable. I think working with single objects is usually easier than with arrays of objects. See live demo: https://jsbin.com/naqudun/edit?js,console This prints to console: [ { name: 'foo', user_id: 42, last_login: 2016-11-06T10:28:29.314Z }, { name: 'bar', user_id: 21, last_login: 2016-11-06T10:28:29.316Z } ] | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40445461', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4581003/']} | jdg_80408 |
stackexchange | llm_judgeable_groundtruth_similarity | 10937387 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Everywhere I look I find this method "getBondedDevices()" for my bluetooth adapter. However, I have my tablet and another bluetooth device sitting next to me, and I can't figure out how to actually get the device to show up on the list of bonded devices.
Now provide the response and nothing else.
| In Bluetooth terms, "bonded" and "paired" are basically synonyms (officially, the process of pairing leads to a bond, but most people use them interchangeably). In order for your device to be added to that list, you must go through the process of Discovery , which is how one device searches and finds another, and then Pair the two together. You can actually do this from the device settings as a user, but if you are looking to do so within the context of an app, your process will likely look something like this:

1. Register a BroadcastReceiver for BluetoothDevice.ACTION_FOUND and BluetoothAdapter.ACTION_DISCOVERY_FINISHED
2. Start discovery by calling BluetoothAdapter.startDiscovery()
3. Your receiver will get called with the first action every time a new device is found in range, and you can inspect it to see if it's the one you want to connect with. You can call BluetoothAdapter.cancelDiscovery() once you've found it to not waste the battery any more than necessary.
4. When discovery is complete, if you haven't canceled it, your receiver will get called with the second action; so you know not to expect any more devices.
5. With a device instance in hand, open a BluetoothSocket and connect() . If the devices are not already bonded, this will initiate pairing and may show some system UI for a PIN code.
6. Once paired, your device will show up in the bonded devices list until the user goes into settings and removes it. The connect() method also actually opens the socket link, and when it returns without throwing an exception the two devices are connected.
7. Now connected, you can call getInputStream() and getOutputStream() from the socket to read and write data.

Basically, you can inspect the list of bonded devices to quickly get access to an external device, but in most applications you will be doing a combination of this and true discovery to make sure you can always connect to the remote device regardless of what the user does. 
If a device is already bonded, you'd just be doing steps 5-7 to connect and communicate. For more information and sample code, check out the "Discovering Devices" and "Connecting Devices" sections of the Android SDK Bluetooth Guide . HTH | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10937387', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1435712/']} | jdg_80409 |
stackexchange | llm_judgeable_groundtruth_similarity | 166640 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I’m trying to figure out if I can combine linear regression and a time series model to help make predictions about the number of shots in a soccer game. When I perform the linear regression, I have some highly significant independent variables (such as home/on the road, possession) and then I’m left with some residuals that appear to show significant auto-correlation with one another (particularly when I test for PACF). What I can’t get my head around is how, and if, I can combine these two techniques to assist in my prediction. Previously I was thinking I would figure out what lags/ARIMA model I should be using (it’s looking like a (2,0,0)) and then apply the AR2 to the residuals (or even the whole of the dependent variable) to produce a new independent variable that I then use in the linear regression. But this doesn’t seem mathematically sound. So, instead what should I do? If I know, for example, that the next player’s game is at home, his team is predicted to get 60% possession and the residuals from a regression (of the aforementioned significant variables) show a significant AR2 correlation, how should I appropriately leverage this information to produce an optimal prediction of his shots?
Now provide the response and nothing else.
| Previously I was thinking I would figure out what lags/ARIMA model I should be using (it’s looking like a (2,0,0)) and then apply the AR2 to the residuals (or even the whole of the dependent variable) to produce a new independent variable that I then use in the linear regression. But this doesn’t seem mathematically sound. Instead of doing it in two steps, you can do it simultaneously, making it more "mathematically sound". That will be a regression with ARMA errors. Here is some discussion of that and related techniques. R implementation is also discussed. In your case, denote the dependent variable $y$ and the independent variables $x_1, \dotsb, x_k$. Having loaded library "forecast", use auto.arima(y,xreg=cbind(x_1,...,x_k)) to automatically select a sensible order for the ARMA structure in the model errors. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/166640', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/83931/']} | jdg_80410 |
stackexchange | llm_judgeable_groundtruth_similarity | 1921178 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have to print the whole source code of a Java project. The final version should look like Eclipse: File -> Print. But with this function you can only print one file at a time. Is there a way to print (or create a PDF/RTF of) the whole project (all *.java, *.xml, ... files) with one command? I'm using Eclipse Galileo on Windows XP SP3. EDIT: For each class/file the page should (more or less) look like this:

C:\..\..\..\LibraryExtractor.java
 1 package utils.libraries;
 2
 3 import java.io.File;
 9
10 /**
11  * @
12  * @
13  * @
14  */
15 public class LibraryExtractor {
16
17     /**
18      *
19      *
20      *
21      *
22      *
23      *
24      *
25      */
26     public static void extractLibrary(String library, File targetFile) throws IOException, URISyntaxException {
27         targetFile.getParentFile().mkdirs();
28         if (!targetFile.exists())
29             targetFile.createNewFile();
30
31         ClassLoader classLoader = LibraryExtractor.class.getClassLoader();
32         InputStream in = classLoader.getResourceAsStream(library);
33         OutputStream out = new FileOutputStream(targetFile);
34
35         byte[] buf = new byte[1024];
36         int len;
37
38         while ((len = in.read(buf)) > 0)
39             out.write(buf, 0, len);
40
41         in.close();
42         out.close();
43     }
44 }
45

SOLUTION: enscript (with Cygwin); Java2Html; Eclipse plugin (only works with Europa)
Now provide the response and nothing else.
| If you don't mind installing Cygwin , or running on Linux, the following command will do what you want: enscript -r -2 --file-align=2 --highlight --line-numbers -o - `find . -name '*.java'` | ps2pdf - files.pdf enscript is a program for converting text files to a variety of output formats; PostScript is the default, but you can also produce HTML, RTF, and a few others. The -r option says to print in landscape, -2 is two columns per page (save trees), --file-align=2 says that each new file should start on its own physical page, --highlight turns on language-specific syntax highlighting (it will try to figure out the language, or you can specify "java"), --line-numbers should be obvious, and -o - sends the output to standard-out (where it's piped to ps2pdf ). find generates the list of files; here I'm telling it to find all Java files under in the current directory. The output is passed as arguments to enscript ; for "50-100 files" you should be OK, but you might need to read about xargs . You could get rid of the -name argument to generate a list of all files, or add multiple -name arguments to add more file types to the list; I wouldn't go with the "all files" approach, because then you'll get source-control files. ps2pdf takes the PostScript output from enscript and converts it to PDF, which you can print. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1921178', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/210258/']} | jdg_80411 |
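If Cygwin is not an option, the numbered-listing format the question asks for can be approximated with a small Python script. The file names, numbering width, and the form-feed page separator below are illustrative assumptions, not part of enscript itself:

```python
from pathlib import Path
import tempfile

def render_listing(path: Path) -> str:
    """Return an enscript-style listing: file header plus numbered source lines."""
    body = "\n".join(
        f"{i:>4} {line}"
        for i, line in enumerate(path.read_text().splitlines(), start=1)
    )
    return f"{path}\n{body}"

def render_project(root: Path, pattern: str = "*.java") -> str:
    # A form feed between files mimics enscript's per-file page breaks.
    return "\f".join(render_listing(p) for p in sorted(root.rglob(pattern)))

# Demo on a throwaway project tree.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "A.java").write_text("package demo;\n\npublic class A {}\n")
    (root / "B.java").write_text("public class B {}\n")
    listing = render_project(root)
```

The resulting text can then be sent to any printer or converted to PDF with whatever tool is available, which keeps the whole pipeline on Windows without Cygwin.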
stackexchange | llm_judgeable_groundtruth_similarity | 2128838 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I noticed that in some places polymorphism refers just to virtual functions, while in other places it also includes function overloading and templates. Later, I found there are two terms: compile-time polymorphism and run-time polymorphism. Is that true? My question is: when we talk about polymorphism generally, what is the widely accepted meaning?
Now provide the response and nothing else.
| Yes, you're right, in C++ there are two recognized "types" of polymorphism. And they mean pretty much what you think they mean Dynamic polymorphism is what C#/Java/OOP people typically refer to simply as "polymorphism". It is essentially subclassing, either deriving from a base class and overriding one or more virtual functions, or implementing an interface. (which in C++ is done by overriding the virtual functions belonging to the abstract base class) Static polymorphism takes place at compile-time, and could be considered a variation of ducktyping. The idea here is simply that different types can be used in a function to represent the same concept, despite being completely unrelated. For a very simple example, consider this template <typename T>T add(const T& lhs, const T& rhs) { return lhs + rhs; } If this had been dynamic polymorphism, then we would define the add function to take some kind of "IAddable" object as its arguments. Any object that implement that interface (or derive from that base class) can be used despite their different implementations, which gives us the polymorphic behavior. We don't care which type is passed to us, as long as it implements some kind of "can be added together" interface.However, the compiler doesn't actually know which type is passed to the function. The exact type is only known at runtime, hence this is dynamic polymorphism. Here, though, we don't require you to derive from anything, the type T just has to define the + operator. It is then inserted statically . So at compile-time, we can switch between any valid type as long as they behave the same (meaning that they define the members we need) This is another form of polymorphism. In principle, the effect is the same: The function works with any implementation of the concept we're interested in. We don't care if the object we work on is a string, an int, a float or a complex number, as long as it implements the "can be added together" concept . 
Since the type used is known statically (at compile-time), this is known as static polymorphism. And the way static polymorphism is achieved is through templates and function overloading. However, when a C++ programmer just say polymorphism, they generally refer to dynamic/runtime polymorphism. (Note that this isn't necessarily true for all languages. A functional programmer will typically mean something like static polymorphism when he uses the term -- the ability to define generic functions using some kind of parametrized types, similar to templates) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2128838', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/233254/']} | jdg_80412 |
stackexchange | llm_judgeable_groundtruth_similarity | 1945461 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following class:

[DataContract]
public class Pair<TKey, TValue> : INotifyPropertyChanged, IDisposable
{
    public Pair(TKey key, TValue value)
    {
        Key = key;
        Value = value;
    }

    #region Properties
    [DataMember]
    public TKey Key
    {
        get { return m_key; }
        set
        {
            m_key = value;
            OnPropertyChanged("Key");
        }
    }

    [DataMember]
    public TValue Value
    {
        get { return m_value; }
        set
        {
            m_value = value;
            OnPropertyChanged("Value");
        }
    }
    #endregion

    #region Fields
    private TKey m_key;
    private TValue m_value;
    #endregion

    #region INotifyPropertyChanged Members
    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string name)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(name));
        }
    }
    #endregion

    #region IDisposable Members
    public void Dispose()
    {
    }
    #endregion
}

which I've put in an ObservableCollection:

ObservableCollection<Pair<ushort, string>> my_collection = new ObservableCollection<Pair<ushort, string>>();
my_collection.Add(new Pair<ushort, string>(7, "aaa"));
my_collection.Add(new Pair<ushort, string>(3, "xey"));
my_collection.Add(new Pair<ushort, string>(6, "fty"));

Q: How do I sort it by key?
Now provide the response and nothing else.
| This simple extension worked beautifully for me. I just had to make sure that MyObject was IComparable. When the sort method is called on the observable collection of MyObjects, the CompareTo method on MyObject is called, which calls my Logical Sort method. While it doesn't have all the bells and whistles of the rest of the answers posted here, it's exactly what I needed.

static class Extensions
{
    public static void Sort<T>(this ObservableCollection<T> collection) where T : IComparable
    {
        List<T> sorted = collection.OrderBy(x => x).ToList();
        for (int i = 0; i < sorted.Count(); i++)
            collection.Move(collection.IndexOf(sorted[i]), i);
    }
}

public class MyObject : IComparable
{
    public int CompareTo(object o)
    {
        MyObject a = this;
        MyObject b = (MyObject)o;
        return Utils.LogicalStringCompare(a.Title, b.Title);
    }

    public string Title;
}

. . .

myCollection = new ObservableCollection<MyObject>();
// add stuff to collection
myCollection.Sort();
 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1945461', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/142168/']} | jdg_80413 |
stackexchange | llm_judgeable_groundtruth_similarity | 79289 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have completed my data analysis and got "statistically significant results" which is consistent with my hypothesis. However, a student in statistics told me this is a premature conclusion. Why? Is there anything else needed to be included in my report?
Now provide the response and nothing else.
| Hypothesis testing versus parameter estimation

Typically, hypotheses are framed in a binary way. I'll put directional hypotheses to one side, as they don't change the issue much. It is common, at least in psychology, to talk about hypotheses such as: the difference between group means is or is not zero; the correlation is or is not zero; the regression coefficient is or is not zero; the r-square is or is not zero. In all these cases, there is a null hypothesis of no effect, and an alternative hypothesis of an effect.

This binary thinking is generally not what we are most interested in. Once you think about your research question, you will almost always find that you are actually interested in estimating parameters. You are interested in the actual difference between group means, or the size of the correlation, or the size of the regression coefficient, or the amount of variance explained.

Of course, when we get a sample of data, the sample estimate of a parameter is not the same as the population parameter. So we need a way of quantifying our uncertainty about what the value of the parameter might be. From a frequentist perspective, confidence intervals provide a means of doing this, although Bayesian purists might argue that they don't strictly permit the inference you might want to make. From a Bayesian perspective, credible intervals on posterior densities provide a more direct means of quantifying your uncertainty about the value of a population parameter.

Parameters / effect sizes

Moving away from the binary hypothesis-testing approach forces you to think in a continuous way. For example, what size difference in group means would be theoretically interesting? How would you map differences between group means onto subjective language or practical implications? Standardised measures of effect along with contextual norms are one way of building a language for quantifying what different parameter values mean.
Such measures are often labelled "effect sizes" (e.g., Cohen's d, r, $R^2$, etc.). However, it is perfectly reasonable, and often preferable, to talk about the importance of an effect using unstandardised measures (e.g., the difference in group means on meaningful unstandardised variables such as income levels, life expectancy, etc.).

There's a huge literature in psychology (and other fields) critiquing a focus on p-values, null hypothesis significance testing, and so on (see this Google Scholar search). This literature often recommends reporting effect sizes with confidence intervals as a resolution (e.g., APA Task Force by Wilkinson, 1999).

Steps for moving away from binary hypothesis testing

If you are thinking about adopting this thinking, I think there are progressively more sophisticated approaches you can take:

Approach 1a. Report the point estimate of your sample effect (e.g., group mean differences) in both raw and standardised terms. When you report your results discuss what such a magnitude would mean for theory and practice.

Approach 1b. Add to 1a, at least at a very basic level, some sense of the uncertainty around your parameter estimate based on your sample size.

Approach 2. Also report confidence intervals on effect sizes and incorporate this uncertainty into your thinking about the plausible values of the parameter of interest.

Approach 3. Report Bayesian credible intervals, and examine the implications of various assumptions on that credible interval, such as choice of prior, the data generating process implied by your model, and so on.

Among many possible references, you'll see Andrew Gelman talk a lot about these issues on his blog and in his research.

References

Nickerson, R. S. (2000). Null hypothesis significance testing: a review of an old and continuing controversy. Psychological Methods, 5(2), 241.

Wilkinson, L. (1999). Statistical methods in psychology journals: guidelines and explanations. American Psychologist, 54(8), 594.
| {} | {'log_upvote_score': 7, 'links': ['https://stats.stackexchange.com/questions/79289', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/36006/']} | jdg_80414 |
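A concrete illustration of "Approach 2" above, reporting an effect size alongside a confidence interval rather than a bare verdict of significance, can be written with only the standard library. The group data are invented for the demo, and the percentile bootstrap is just one of several interval methods:

```python
import random
import statistics as st

group_a = [5.1, 4.8, 6.0, 5.5, 5.9, 4.7, 5.3, 6.2, 5.0, 5.6]
group_b = [4.2, 4.9, 4.4, 5.1, 4.0, 4.6, 4.8, 4.3, 5.0, 4.1]

def cohens_d(a, b):
    # Standardised mean difference using a pooled standard deviation.
    na, nb = len(a), len(b)
    pooled = (((na - 1) * st.stdev(a) ** 2 + (nb - 1) * st.stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (st.mean(a) - st.mean(b)) / pooled

def bootstrap_ci(a, b, reps=2000, alpha=0.05, seed=0):
    # Percentile bootstrap for the raw (unstandardised) mean difference.
    rng = random.Random(seed)
    diffs = sorted(
        st.mean(rng.choices(a, k=len(a))) - st.mean(rng.choices(b, k=len(b)))
        for _ in range(reps)
    )
    lo = diffs[int(reps * alpha / 2)]
    hi = diffs[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi

observed = st.mean(group_a) - st.mean(group_b)
d = cohens_d(group_a, group_b)
lo, hi = bootstrap_ci(group_a, group_b)
```

Reporting "difference = observed, 95% CI [lo, hi], d = ..." communicates both the magnitude of the effect and the uncertainty around it, which is exactly what a bare "significant" verdict omits.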
stackexchange | llm_judgeable_groundtruth_similarity | 38747672 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a simple fragment with a ViewPager. In the same fragment, I have an EditText to capture the text entered by the user. I have an Adapter which extends PagerAdapter, and I have an OnClick event for a button in the instantiateItem method as shown below. When I try to get the reference for this EditText, both inside the OnClick method and outside, I don't see the value being fetched. Strangely, I see that sometimes it displays the result and sometimes it doesn't. Here is my Adapter class:

public class ProposalSwipeAdapter extends PagerAdapter {

    private ArrayList<String> imageResourcess = new ArrayList<>();
    private String[] imageResources = {"James", "Mick"};
    private Context context;
    private LayoutInflater inflater;
    TextView notesEntered;
    EditText notes = null;
    View itemView;
    String notedEnteredByUser;

    public ProposalSwipeAdapter(Context context, ArrayList<ProposalDetailModel> list) {
        this.context = context;
        //this.imageResourcess = list;
        for (ProposalDetailModel objectProposal : list) {
            //ProposalDetailModel objectProposal1 = new ProposalDetailModel();
            //objectProposal1.setIntroWhatDoWeDo(objectProposal.getIntroWhoAreWe());
            //objectProposal1.setDateOfBusiness(objectProposal.getDateOfBusiness());
            this.imageResourcess.add("\t\t\t\t\t\t\t\t\t\t INTRODUCTION \n\n\n\n\nABOUT THE COMPANY:" + objectProposal.getIntroWhoAreWe()
                    + "\n\nWHAT DO WE DO:" + objectProposal.getIntroWhatDoWeDo()
                    + "\n\nTOTAL EMPLOYEES:" + "\n\nOFFICIAL LOCATION:" + "\n\nFOUNDING YEAR:");
            this.imageResourcess.add("\t\t\t\t\t\t\t\t\t\t PRODUCT/SERVICE \n\n\n\nSERVICE/PRODUCT OFFERED:" + objectProposal.getIntroWhoAreWe()
                    + "\n\nTYPE OF CUSTOMERS:" + objectProposal.getAddres()
                    + "\n\nWHY WILL THEY BUY IT:" + "\n\nPRODUCT PRICE:" + "\n\nCOMPETITORS:" + "\n\nSERVICE DELIVERY:");
        }
    }

    @Override
    public int getItemPosition(Object object) {
        int index = imageResourcess.indexOf(object);
        if (index == -1)
            return POSITION_NONE;
        else
            return index;
    }

    @Override
    public int getCount() {
        return imageResourcess.size();
        //return imageResources.length;
    }

    @Override
    public boolean isViewFromObject(View view, Object object) {
        return view == object;
    }

    @Override
    public Object instantiateItem(ViewGroup container, int position) {
        inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        itemView = inflater.inflate(R.layout.proposal_swipe, null);
        Button proposalNotesBtn;
        TextView tv = (TextView) itemView.findViewById(R.id.proposalText);
        tv.setText(imageResourcess.get(position).toString());
        container.addView(itemView);
        proposalNotesBtn = (Button) itemView.findViewById(R.id.proposalNotesBtn);
        proposalNotesBtn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                notesEntered = (TextView) itemView.findViewById(R.id.enteredNotes);
                notes = (EditText) itemView.findViewById(R.id.notesText);
                notedEnteredByUser = notes.getText().toString();
                notesEntered.setText(notedEnteredByUser);
                Log.i("NOTES ENTERED", notedEnteredByUser);
            }
        });
        if (itemView != null) {
            ((ViewGroup) itemView.getParent()).removeView(itemView);
        }
        container.addView(itemView);
        return itemView;
    }

    @Override
    public void destroyItem(ViewGroup container, int position, Object object) {
        //container.removeView((LinearLayout) object);
        container.removeView((View) object);
    }

and the fragment here:

<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/scrollview"
    android:layout_width="wrap_content"
    android:layout_height="match_parent" >

    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical">

        <TextView
            android:id="@+id/proposalText"
            android:layout_width="match_parent"
            android:layout_height="300dp"
            android:layout_marginTop="80dp"
            android:paddingLeft="10dp"
            android:textStyle="bold" />

        <EditText
            android:id="@+id/notesText"
            android:layout_width="match_parent"
            android:layout_height="150dp"
            android:layout_below="@id/proposalText"
            android:layout_margin="10dp"
            android:background="@drawable/back"
            android:inputType="text" />

        <Button
            android:id="@+id/proposalNotesBtn"
            android:layout_width="130dp"
            android:layout_height="50dp"
            android:text="Save Notes" />

        <TextView
            android:id="@+id/enteredNotes"
            android:layout_width="match_parent"
            android:layout_height="100dp"
            android:textColor="@color/colorPrimary"
            android:layout_below="@id/proposalNotesBtn" />

    </LinearLayout>
</ScrollView>

Please guide me if I am missing anything.
Now provide the response and nothing else.
| You can apply the effects of most attributes to all properties while serializing by registering convention packs. Below the IgnoreIfNullConvention is registered, implicitly applying the [IgnoreIfNull] attribute to all properties while serializing and deserializing.

var anon = new
{
    Foo = "bar",
    Baz = (string)null,
};

ConventionRegistry.Register("Ignore", new ConventionPack { new IgnoreIfNullConvention(true) }, t => true);

var bsonDocument = anon.ToBsonDocument();

This will yield a document only containing the Foo key. When desired, you can also Remove() this convention pack by name after serialization.
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38747672', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5518813/']} | jdg_80415 |
stackexchange | llm_judgeable_groundtruth_similarity | 822907 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As part of my development I'd like to be able to validate an entire folder's worth of XML files against a single XSD file. A PowerShell function seems like a good candidate for this, as I can then just pipe a list of files to it like so:

dir *.xml | Validate-Xml -Schema .\MySchema.xsd

I've considered porting the C# code from the "Validating an Xml against Referenced XSD in C#" question, but I don't know how to add event handlers in PowerShell.
Now provide the response and nothing else.
| I wrote a PowerShell function to do this. Usage:

dir *.xml | Test-Xml -Schema ".\MySchemaFile.xsd" -Namespace "http://tempuri.org"

Code:

function Test-Xml {
param(
    $InputObject = $null,
    $Namespace = $null,
    $SchemaFile = $null
)

BEGIN {
    $failCount = 0
    $failureMessages = ""
    $fileName = ""
}

PROCESS {
    if ($InputObject -and $_) {
        throw 'ParameterBinderStrings\AmbiguousParameterSet'
        break
    } elseif ($InputObject) {
        $InputObject
    } elseif ($_) {
        $fileName = $_.FullName
        $readerSettings = New-Object -TypeName System.Xml.XmlReaderSettings
        $readerSettings.ValidationType = [System.Xml.ValidationType]::Schema
        $readerSettings.ValidationFlags = [System.Xml.Schema.XmlSchemaValidationFlags]::ProcessInlineSchema -bor
            [System.Xml.Schema.XmlSchemaValidationFlags]::ProcessSchemaLocation -bor
            [System.Xml.Schema.XmlSchemaValidationFlags]::ReportValidationWarnings
        $readerSettings.Schemas.Add($Namespace, $SchemaFile) | Out-Null
        $readerSettings.add_ValidationEventHandler(
            {
                $failureMessages = $failureMessages + [System.Environment]::NewLine + $fileName + " - " + $_.Message
                $failCount = $failCount + 1
            });
        $reader = [System.Xml.XmlReader]::Create($_, $readerSettings)
        while ($reader.Read()) { }
        $reader.Close()
    } else {
        throw 'ParameterBinderStrings\InputObjectNotBound'
    }
}

END {
    $failureMessages
    "$failCount validation errors were found"
}
}
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/822907', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/87912/']} | jdg_80416 |
stackexchange | llm_judgeable_groundtruth_similarity | 6755991 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a silly question. I read this article about std::exception: http://www.cplusplus.com/doc/tutorial/exceptions/ About catch (exception& e), it says: "We have placed a handler that catches exception objects by reference (notice the ampersand & after the type), therefore this catches also classes derived from exception, like our myex object of class myexception." Does this mean that by using "&" you can also catch exceptions of classes derived from the parent class? I thought the & was there simply because it's better to pass e (std::exception) by reference than by value.
Now provide the response and nothing else.
| The reason for using & with exceptions is not so much polymorphism as avoiding slicing. If you were to not use &, C++ would attempt to copy the thrown exception into a newly created std::exception, potentially losing information in the process. Example:

#include <stdexcept>
#include <iostream>

class my_exception : public std::exception {
    virtual const char *what() const throw() {
        return "Hello, world!";
    }
};

int main() {
    try {
        throw my_exception();
    } catch (std::exception e) {
        std::cout << e.what() << std::endl;
    }
    return 0;
}

This will print the default message for std::exception (in my case, St9exception) rather than Hello, world!, because the original exception object was lost by slicing. If we change that to an &:

#include <stdexcept>
#include <iostream>

class my_exception : public std::exception {
    virtual const char *what() const throw() {
        return "Hello, world!";
    }
};

int main() {
    try {
        throw my_exception();
    } catch (std::exception &e) {
        std::cout << e.what() << std::endl;
    }
    return 0;
}

Now we do see Hello, world!.
 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6755991', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/853069/']} | jdg_80417 |
stackexchange | llm_judgeable_groundtruth_similarity | 15154164 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a ListView which contains a large set of data. The first time, I load all the data from a web service. Now I want to cache that data so that, if I open that page again, I can fetch the data from the cache instead of querying the web service again. How do I do that?
Now provide the response and nothing else.
| I assume you're storing the data retrieved from the WebService in a serializable object (as you stated in your question before you edited it.) You can store serializable objects into a file and load them later:

Store:

FileOutputStream fileOutputStream = yourContext.openFileOutput(fileName, Context.MODE_PRIVATE);
ObjectOutputStream objectOutputStream = new ObjectOutputStream(fileOutputStream);
objectOutputStream.writeObject(yourObject);
objectOutputStream.close();

Load:

FileInputStream fileInputStream = yourContext.openFileInput(fileName);
ObjectInputStream objectInputStream = new ObjectInputStream(fileInputStream);
Object yourObject = (Object)objectInputStream.readObject();
objectInputStream.close();
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15154164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2118616/']} | jdg_80418 |
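The same pattern, serialize the fetched objects to a private file and reload them on the next launch, can be sketched in Python terms as a language-agnostic illustration (this is not Android API code; pickle plays the role of ObjectOutputStream, and the file names are invented):

```python
import pickle
import tempfile
from pathlib import Path

def save_cache(path: Path, obj) -> None:
    # Mirrors ObjectOutputStream.writeObject: dump the whole object graph.
    path.write_bytes(pickle.dumps(obj))

def load_cache(path: Path):
    # Mirrors ObjectInputStream.readObject; None signals a cache miss.
    if not path.exists():
        return None
    return pickle.loads(path.read_bytes())

with tempfile.TemporaryDirectory() as d:
    cache_file = Path(d) / "listview_cache.bin"
    first_load = load_cache(cache_file)   # None: nothing cached yet
    data = [{"id": 1, "title": "hello"}, {"id": 2, "title": "world"}]
    save_cache(cache_file, data)          # after the web-service fetch
    second_load = load_cache(cache_file)  # served from the cache
```

A real app would also check the cache file's age before trusting it, so stale data eventually triggers a fresh fetch from the service.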
stackexchange | llm_judgeable_groundtruth_similarity | 54028199 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Are for loops really "bad"? If not, in what situation(s) would they be better than using a more conventional "vectorized" approach? 1 I am familiar with the concept of "vectorization", and how pandas employs vectorized techniques to speed up computation. Vectorized functions broadcast operations over the entire series or DataFrame to achieve speedups much greater than conventionally iterating over the data. However, I am quite surprised to see a lot of code (including from answers on Stack Overflow) offering solutions to problems that involve looping through data using for loops and list comprehensions. The documentation and API say that loops are "bad", and that one should "never" iterate over arrays, series, or DataFrames. So, how come I sometimes see users suggesting loop-based solutions? 1 - While it is true that the question sounds somewhat broad, the truth is that there are very specific situations when for loops are usually better than conventionally iterating over data. This post aims to capture this for posterity.
Now provide the response and nothing else.
| TLDR; No, for loops are not blanket "bad", at least, not always. It is probably more accurate to say that some vectorized operations are slower than iterating, versus saying that iteration is faster than some vectorized operations. Knowing when and why is key to getting the most performance out of your code. In a nutshell, these are the situations where it is worth considering an alternative to vectorized pandas functions:

- When your data is small (...depending on what you're doing),
- When dealing with object/mixed dtypes
- When using the str/regex accessor functions

Let's examine these situations individually.

Iteration v/s Vectorization on Small Data

Pandas follows a "Convention Over Configuration" approach in its API design. This means that the same API has been fitted to cater to a broad range of data and use cases. When a pandas function is called, the following things (among others) must internally be handled by the function, to ensure working:

- Index/axis alignment
- Handling mixed datatypes
- Handling missing data

Almost every function will have to deal with these to varying extents, and this presents an overhead. The overhead is less for numeric functions (for example, Series.add), while it is more pronounced for string functions (for example, Series.str.replace).

for loops, on the other hand, are faster than you think. What's even better is that list comprehensions (which create lists through for loops) are even faster, as they are optimized iterative mechanisms for list creation.

List comprehensions follow the pattern

[f(x) for x in seq]

Where seq is a pandas series or DataFrame column. Or, when operating over multiple columns,

[f(x, y) for x, y in zip(seq1, seq2)]

Where seq1 and seq2 are columns.

Numeric Comparison

Consider a simple boolean indexing operation. The list comprehension method has been timed against Series.ne (!=) and query.
Here are the functions:

# Boolean indexing with Numeric value comparison.
df[df.A != df.B]                          # vectorized !=
df.query('A != B')                        # query (numexpr)
df[[x != y for x, y in zip(df.A, df.B)]]  # list comp

For simplicity, I have used the perfplot package to run all the timeit tests in this post. The timings for the operations above are below:

The list comprehension outperforms query for moderately sized N, and even outperforms the vectorized not-equals comparison for tiny N. Unfortunately, the list comprehension scales linearly, so it does not offer much performance gain for larger N.

Note: It is worth mentioning that much of the benefit of list comprehensions comes from not having to worry about index alignment, but this means that if your code is dependent on indexing alignment, this will break. In some cases, vectorised operations over the underlying NumPy arrays can be considered as bringing in the "best of both worlds", allowing for vectorisation without all the unneeded overhead of the pandas functions. This means that you can rewrite the operation above as

df[df.A.values != df.B.values]

which outperforms both the pandas and list comprehension equivalents.

NumPy vectorization is out of the scope of this post, but it is definitely worth considering, if performance matters.

Value Counts

Taking another example - this time, with another vanilla python construct that is faster than a for loop - collections.Counter. A common requirement is to compute the value counts and return the result as a dictionary. This is done with value_counts, np.unique, and Counter:

# Value Counts comparison.
ser.value_counts(sort=False).to_dict()         # value_counts
dict(zip(*np.unique(ser, return_counts=True))) # np.unique
Counter(ser)                                   # Counter

The results are more pronounced; Counter wins out over both vectorized methods for a larger range of small N (~3500).

Note: More trivia (courtesy @user2357112).
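To make the value-counts comparison above concrete, the three approaches really do agree on the result. The data here are tiny and invented, and timings are deliberately omitted since they vary by machine:

```python
from collections import Counter
import numpy as np
import pandas as pd

ser = pd.Series(['a', 'b', 'a', 'c', 'a', 'b'])

via_pandas = ser.value_counts(sort=False).to_dict()         # value_counts
via_numpy = dict(zip(*np.unique(ser, return_counts=True)))  # np.unique
via_counter = dict(Counter(ser))                            # Counter

expected = {'a': 3, 'b': 2, 'c': 1}
```

All three produce the same mapping; the differences the answer discusses are purely about speed at various data sizes, not about correctness.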
The Counter is implemented with a C accelerator, so while it still has to work with python objects instead of the underlying C datatypes, it is still faster than a for loop. Python power!

Of course, the take away from here is that the performance depends on your data and use case. The point of these examples is to convince you not to rule out these solutions as legitimate options. If these still don't give you the performance you need, there is always cython and numba. Let's add this test into the mix.

from numba import njit, prange

@njit(parallel=True)
def get_mask(x, y):
    result = [False] * len(x)
    for i in prange(len(x)):
        result[i] = x[i] != y[i]
    return np.array(result)

df[get_mask(df.A.values, df.B.values)]  # numba

Numba offers JIT compilation of loopy python code to very powerful vectorized code. Understanding how to make numba work involves a learning curve.

Operations with Mixed/object dtypes

String-based Comparison

Revisiting the filtering example from the first section, what if the columns being compared are strings? Consider the same 3 functions above, but with the input DataFrame cast to string.

# Boolean indexing with string value comparison.
df[df.A != df.B]                          # vectorized !=
df.query('A != B')                        # query (numexpr)
df[[x != y for x, y in zip(df.A, df.B)]]  # list comp

So, what changed? The thing to note here is that string operations are inherently difficult to vectorize. Pandas treats strings as objects, and all operations on objects fall back to a slow, loopy implementation. Now, because this loopy implementation is surrounded by all the overhead mentioned above, there is a constant magnitude difference between these solutions, even though they scale the same.

When it comes to operations on mutable/complex objects, there is no comparison. List comprehension outperforms all operations involving dicts and lists.

Accessing Dictionary Value(s) by Key

Here are timings for two operations that extract a value from a column of dictionaries: map and the list comprehension.
The setup is in the Appendix, under the heading "Code Snippets".

# Dictionary value extraction.
ser.map(operator.itemgetter('value'))     # map
pd.Series([x.get('value') for x in ser])  # list comprehension

Positional List Indexing

Timings for 3 operations that extract the 0th element from a list of columns (handling exceptions): map, the str.get accessor method, and the list comprehension:

# List positional indexing.
def get_0th(lst):
    try:
        return lst[0]
    # Handle empty lists and NaNs gracefully.
    except (IndexError, TypeError):
        return np.nan

ser.map(get_0th)                                          # map
ser.str[0]                                                # str accessor
pd.Series([x[0] if len(x) > 0 else np.nan for x in ser])  # list comp
pd.Series([get_0th(x) for x in ser])                      # list comp safe

Note: If the index matters, you would want to do:

pd.Series([...], index=ser.index)

When reconstructing the series.

List Flattening

A final example is flattening lists. This is another common problem, and demonstrates just how powerful pure python is here.

# Nested list flattening.
pd.DataFrame(ser.tolist()).stack().reset_index(drop=True)  # stack
pd.Series(list(chain.from_iterable(ser.tolist())))         # itertools.chain
pd.Series([y for x in ser for y in x])                     # nested list comp

Both itertools.chain.from_iterable and the nested list comprehension are pure python constructs, and scale much better than the stack solution.

These timings are a strong indication of the fact that pandas is not equipped to work with mixed dtypes, and that you should probably refrain from using it to do so. Wherever possible, data should be present as scalar values (ints/floats/strings) in separate columns.

Lastly, the applicability of these solutions depends widely on your data. So, the best thing to do would be to test these operations on your data before deciding what to go with. Notice how I have not timed apply on these solutions, because it would skew the graph (yes, it's that slow).
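The two pure-Python flatteners above are interchangeable, which is easy to verify without pandas at all (toy data for the demo):

```python
from itertools import chain

nested = [[1, 2], [3], [], [4, 5, 6]]

flat_chain = list(chain.from_iterable(nested))  # itertools.chain
flat_comp = [y for x in nested for y in x]      # nested list comp
```

Either result can then be wrapped in pd.Series when a pandas object is needed, which is what the timed snippets in the answer do.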
Regex Operations, and .str Accessor Methods

Pandas can apply regex operations such as str.contains, str.extract, and str.extractall, as well as other "vectorized" string operations (such as str.split, str.find, str.translate, and so on) on string columns. These functions are slower than list comprehensions, and are meant to be more convenience functions than anything else. It is usually much faster to pre-compile a regex pattern and iterate over your data with re.compile (also see Is it worth using Python's re.compile?). The list comp equivalent to str.contains looks something like this:

p = re.compile(...)
ser2 = pd.Series([x for x in ser if p.search(x)])

Or,

ser2 = ser[[bool(p.search(x)) for x in ser]]

If you need to handle NaNs, you can do something like

ser[[bool(p.search(x)) if pd.notnull(x) else False for x in ser]]

The list comp equivalent to str.extract (without groups) will look something like:

df['col2'] = [p.search(x).group(0) for x in df['col']]

If you need to handle no-matches and NaNs, you can use a custom function (still faster!):

def matcher(x):
    m = p.search(str(x))
    if m:
        return m.group(0)
    return np.nan

df['col2'] = [matcher(x) for x in df['col']]

The matcher function is very extensible. It can be fitted to return a list for each capture group, as needed. Just query the group or groups attribute of the match object. For str.extractall, change p.search to p.findall.

String Extraction

Consider a simple filtering operation. The idea is to extract 4 digits if it is preceded by an upper case letter.

# Extracting strings.
p = re.compile(r'(?<=[A-Z])(\d{4})')

def matcher(x):
    m = p.search(x)
    if m:
        return m.group(0)
    return np.nan

ser.str.extract(r'(?<=[A-Z])(\d{4})', expand=False)   # str.extract
pd.Series([matcher(x) for x in ser])                  # list comprehension

More Examples

Full disclosure - I am the author (in part or whole) of these posts listed below. 
Fast punctuation removal with pandas
String concatenation of two pandas columns
Remove unwanted parts from strings in a column
Replace all but the last occurrence of a character in a dataframe

Conclusion

As shown from the examples above, iteration shines when working with small rows of DataFrames, mixed datatypes, and regular expressions. The speedup you get depends on your data and your problem, so your mileage may vary. The best thing to do is to carefully run tests and see if the payout is worth the effort. The "vectorized" functions shine in their simplicity and readability, so if performance is not critical, you should definitely prefer those. Another side note: certain string operations deal with constraints that favour the use of NumPy. Here are two examples where careful NumPy vectorization outperforms python:

Create new column with incremental values in a faster and efficient way - Answer by Divakar
Fast punctuation removal with pandas - Answer by Paul Panzer

Additionally, sometimes just operating on the underlying arrays via .values as opposed to on the Series or DataFrames can offer a healthy enough speedup for most usual scenarios (see the Note in the Numeric Comparison section above). So, for example, df[df.A.values != df.B.values] would show instant performance boosts over df[df.A != df.B]. Using .values may not be appropriate in every situation, but it is a useful hack to know. As mentioned above, it's up to you to decide whether these solutions are worth the trouble of implementing. 
Appendix: Code Snippets

import perfplot
import operator
import pandas as pd
import numpy as np
import re
from collections import Counter
from itertools import chain

# Boolean indexing with Numeric value comparison.
perfplot.show(
    setup=lambda n: pd.DataFrame(np.random.choice(1000, (n, 2)), columns=['A','B']),
    kernels=[
        lambda df: df[df.A != df.B],
        lambda df: df.query('A != B'),
        lambda df: df[[x != y for x, y in zip(df.A, df.B)]],
        lambda df: df[get_mask(df.A.values, df.B.values)]
    ],
    labels=['vectorized !=', 'query (numexpr)', 'list comp', 'numba'],
    n_range=[2**k for k in range(0, 15)],
    xlabel='N')

# Value Counts comparison.
perfplot.show(
    setup=lambda n: pd.Series(np.random.choice(1000, n)),
    kernels=[
        lambda ser: ser.value_counts(sort=False).to_dict(),
        lambda ser: dict(zip(*np.unique(ser, return_counts=True))),
        lambda ser: Counter(ser),
    ],
    labels=['value_counts', 'np.unique', 'Counter'],
    n_range=[2**k for k in range(0, 15)],
    xlabel='N',
    equality_check=lambda x, y: dict(x) == dict(y))

# Boolean indexing with string value comparison.
perfplot.show(
    setup=lambda n: pd.DataFrame(np.random.choice(1000, (n, 2)), columns=['A','B'], dtype=str),
    kernels=[
        lambda df: df[df.A != df.B],
        lambda df: df.query('A != B'),
        lambda df: df[[x != y for x, y in zip(df.A, df.B)]],
    ],
    labels=['vectorized !=', 'query (numexpr)', 'list comp'],
    n_range=[2**k for k in range(0, 15)],
    xlabel='N',
    equality_check=None)

# Dictionary value extraction.
ser1 = pd.Series([{'key': 'abc', 'value': 123}, {'key': 'xyz', 'value': 456}])
perfplot.show(
    setup=lambda n: pd.concat([ser1] * n, ignore_index=True),
    kernels=[
        lambda ser: ser.map(operator.itemgetter('value')),
        lambda ser: pd.Series([x.get('value') for x in ser]),
    ],
    labels=['map', 'list comprehension'],
    n_range=[2**k for k in range(0, 15)],
    xlabel='N',
    equality_check=None)

# List positional indexing.
ser2 = pd.Series([['a', 'b', 'c'], [1, 2], []])
perfplot.show(
    setup=lambda n: pd.concat([ser2] * n, ignore_index=True),
    kernels=[
        lambda ser: ser.map(get_0th),
        lambda ser: ser.str[0],
        lambda ser: pd.Series([x[0] if len(x) > 0 else np.nan for x in ser]),
        lambda ser: pd.Series([get_0th(x) for x in ser]),
    ],
    labels=['map', 'str accessor', 'list comprehension', 'list comp safe'],
    n_range=[2**k for k in range(0, 15)],
    xlabel='N',
    equality_check=None)

# Nested list flattening.
ser3 = pd.Series([['a', 'b', 'c'], ['d', 'e'], ['f', 'g']])
perfplot.show(
    setup=lambda n: pd.concat([ser3] * n, ignore_index=True),
    kernels=[
        lambda ser: pd.DataFrame(ser.tolist()).stack().reset_index(drop=True),
        lambda ser: pd.Series(list(chain.from_iterable(ser.tolist()))),
        lambda ser: pd.Series([y for x in ser for y in x]),
    ],
    labels=['stack', 'itertools.chain', 'nested list comp'],
    n_range=[2**k for k in range(0, 15)],
    xlabel='N',
    equality_check=None)

# Extracting strings.
ser4 = pd.Series(['foo xyz', 'test A1234', 'D3345 xtz'])
perfplot.show(
    setup=lambda n: pd.concat([ser4] * n, ignore_index=True),
    kernels=[
        lambda ser: ser.str.extract(r'(?<=[A-Z])(\d{4})', expand=False),
        lambda ser: pd.Series([matcher(x) for x in ser])
    ],
    labels=['str.extract', 'list comprehension'],
    n_range=[2**k for k in range(0, 15)],
    xlabel='N',
    equality_check=None) | {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/54028199', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4909087/']} | jdg_80419 |
stackexchange | llm_judgeable_groundtruth_similarity | 43934210 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In order to be able to write Selenium test cases for our Webpack built React app we've added data-* attributes to certain parts of the HTML elements, example: <div class="Component__item___2jdnc" data-app-feature="example-id"></div> We can use these to locate elements to interact with and assert on. But, in production environment I would like to have them removed. How could that be achieved?
Now provide the response and nothing else.
| There are a few babel plugins that might fit the bill for this: babel-plugin-react-remove-properties babel-plugin-remove-attribute babel-plugin-remove-object-properties Edit from comments Attributes are automatically ignored if their value is undefined . You can use that to your advantage and use some kind of configuration (possibly process.env.NODE_ENV ?) and a Higher-Order Component to set a prop for the data-app-feature value only if not in production. HOC const appFeature = (Component, appFeature) => (props) => { const isRunningInProduction = configuration.isProduction // or however you want to set this return <Component appFeature={ isRunningInProduction ? appFeature : undefined } {...props} />} Component const ExampleComponent = ({appFeature}) => { return <div class="Component__item___2jdnc" data-app-feature={appFeature}></div>}export default appFeature(ExampleComponent, "example-id") | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43934210', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/274426/']} | jdg_80420 |
stackexchange | llm_judgeable_groundtruth_similarity | 124473 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 4 differential equations. Two 2nd order ODEs eqns3 = θ1''[y] + Q1 == 0, θ1[0] == θh, -θ1'[0] == Qheqns4 = θ2''[y] + Q2 == 0, θ2[1] == 1, -θ2'[1] == Nc (θ2[1] - 1) and two 4th order ODEs. eqns1 = θs''''[y] - Bi (k + 1) θs''[y] - Bi k (wf + ws) == 0eqns2 = θf''''[y] - Bi (k + 1) θf''[y] - Bi k (wf + ws) == 0 The boundary conditions for the 4th order ODEs θ2[y2] == θf[y2], θf[y1] == θ1[y1], θ1'[y1] == ke1 θf'[y1] + k ke1 θs'[y1], θ2'[y2] == ke2 θf'[y2] + k ke2 θs'[y2]θs[y1] == θ1[y1], θs[y2] == θ2[y2], θ1'[y1] == ke1 θf'[y1] + k ke1 θs'[y1], θ2'[y2] == ke2 θf'[y2] + k ke2 θs'[y2] I guess, because the boundary conditions for the fourth order are coupled, Mathematica has been unable to solve for theta f and θs (they are still blue), which does not allow me to plot the graphs. Also, I have the general solutions of these equations. θs[y] = Es y^2 + Fs Cosh[y * Sqrt[Bi (k + 1)]] + K1s y + K2sθf[y] = Ef y^2 + Ff Cosh[y * Sqrt[Bi (k + 1)]] + K1f y + K2fθ1[y] = A1 y^2 + B1 y + C1θ1[y] = A2 y^2 + B2 y + C2 Where Es, Fs, K1s, K2s, Ef, Ff, K1f, K2f, A1, B1, C1, A2, B2, C2 are all unknowns. Any tips on how I can solve this problem. My code eqns1 = θs''''[y] - Bi (k + 1) θs''[y] - Bi k (wf + ws) == 0eqns2 = θf''''[y] - Bi (k + 1) θf''[y] - Bi k (wf + ws) == 0eqns3 = θ1''[y] + Q1 == 0eqns4 = θ2''[y] + Q2 == 0DSolve[ {eqns2, θ2[y2] == θf[y2], θf[y1] == θ1[y1], θ1'[y1] == ke1 θf'[y1] + k ke1 θs'[y1], θ2'[y2] == ke2 θf'[y2] + k ke2 θs'[y2]}, θf[y], y]DSolve[{eqns3, θ1[0] == θh, -θ1'[0] == Qh}, θ1[y], y]DSolve[ {eqns1, θs[y1] == θ1[y1], θs[y2] == θ2[y2], θ1'[y1] == ke1 θf'[y1] + k ke1 θs'[y1], θ2'[y2] == ke2 θf'[y2] + k ke2 θs'[y2]}, θs[y], y]DSolve[{eqns4, θ2[1] == 1, -θ2'[1] == Nc (θ2[1] - 1)}, θ2[y], y] Solving for θ1 and θ2 was okay. But for θs and θf it could not solve it. Mathematica gave me an answers for θf and θs but those answers had θf and θs in them.
Now provide the response and nothing else.
| Outliers are determined by the width of the interquartile range (IQR). This range can differ depending on your school of thought but generally a 95% confidence interval of the data can be found in 1.5 IQR above and below the median. SeedRandom[90807];data = Join[RandomVariate[NormalDistribution[], 50], RandomVariate[ChiSquareDistribution[3], 10]]; We can calculate this range with Quartiles . #[[2]] + {-1.5, 1.5} ( #[[3]] - #[[1]]) &@Quartiles[data](* {-1.97723, 2.30552} *) And can use this to Select the outliers from the data getOutliers[dat_, iqrCoeff_] := Select[! IntervalMemberQ[ Interval[#[[2]] + {-1, 1} iqrCoeff ( #[[3]] - #[[1]]) &@ Quartiles[dat]], #] &]@dat Then getOutliers[data, 1.5](* {-2.01804, 6.76676, 2.38043, 3.4204, 6.19569, 4.85708, 3.58404, 2.99772} *) Since you may want to identify a different level of confidence interval, BoxWhiskerChart gives you the option to alter the IQR coefficient in its ChartElementFunction option. BoxWhiskerChart[data, "Outliers", ChartElementFunction -> ChartElementDataFunction["BoxWhisker", "IQRCoefficient" -> 1]] And getOutliers[data, 1]{-1.77297, 1.96271, -1.46257, -1.29773, -2.01804, -1.49219, 6.76676, 2.38043, 3.4204, 6.19569, 4.85708, 3.58404, 2.99772} You will notice that BoxWhiskerChart takes a little presentation license and does not plot the outliers that would print too close to the whisker. Hope this helps. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/124473', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/41195/']} | jdg_80421 |
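For readers outside Mathematica, the same fence can be sketched in a few lines of Python (my illustration of the rule the answer uses — note it mirrors the answer's median ± 1.5 IQR fence; the more common textbook convention fences from Q1 and Q3 instead):

```python
import statistics

def iqr_outliers(data, coeff=1.5):
    # Quartiles of the data; q2 is the median.
    q1, q2, q3 = statistics.quantiles(data, n=4)
    lo, hi = q2 - coeff * (q3 - q1), q2 + coeff * (q3 - q1)
    return [x for x in data if x < lo or x > hi]

print(iqr_outliers([1, 2, 3, 4, 5, 6, 7, 8, 9, 100]))  # [100]
```

Lowering coeff (as BoxWhiskerChart's "IQRCoefficient" option does) widens the set of flagged points.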
stackexchange | llm_judgeable_groundtruth_similarity | 317695 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I had discovered something funny today. So, I have Kali Linux and I am trying to fully update the system using the repo http://http.kali.org/kali . All is good and well until I get 403 denied for backdoor-factory and mimikatz. At first I thought it was a server configuration error and so ignored it, but then I got curious and decided to pop the URLs into Firefox. Sure enough, my university blocks these specific URLs, but not anything else in the repo. I decided to check out if I could load the URLs in https (yes, I knew it was a long shot as most (afaik) APT servers don't even support https at all) and found out it does work, but only when accepting the certificate for archive-8.kali.org. (yes, I know invalid certs aren't good, but I figured if it is using GPG to check the validity and it uses http with no encryption anyway, then why not). Also, I know I can just use https://archive-8.kali.org/kali in place of the old url and have done so, but the reason I asked about accepting invalid certs is for if this solution of just switching domains is impossible.
Now provide the response and nothing else.
| You can configure certain parameters for the HTTPS transport in /etc/apt/apt.conf.d/ — see man apt.conf (section "THE ACQUIRE GROUP", subsection "https") for details. There is also a helpful example over at the trusted-apt project. For example, you can disable certificate checking completely: // Do not verify peer certificateAcquire::https::Verify-Peer "false";// Do not verify that certificate name matches server nameAcquire::https::Verify-Host "false"; … or just for a specific host: Acquire::https::repo.domain.tld::Verify-Peer "false";Acquire::https::repo.domain.tld::Verify-Host "false"; These options should be placed in a newly created file in /etc/apt/apt.conf.d/ so they won't interfere with options installed by official packages (which will create separate files of their own). The filename determines the order in which the option files are parsed, so you'll probably want to choose a rather high number to have your options parsed after the ones installed by other packages. Try 80ssl-exceptions , for example. | {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/317695', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/181269/']} | jdg_80422 |
stackexchange | llm_judgeable_groundtruth_similarity | 146522 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I’ve got a brand new Django project. I’ve added one minimal view function to views.py , and one URL pattern to urls.py , passing the view by function reference instead of a string: # urls.py# -------# coding=utf-8from django.conf.urls.defaults import *from myapp import viewsurlpatterns = patterns('', url(r'^myview/$', views.myview),)# views.py----------# coding=utf-8from django.http import HttpResponsedef myview(request): return HttpResponse('MYVIEW LOL', content_type="text/plain") I’m trying to use reverse() to get the URL, by passing it a function reference. But I’m not getting a match, despite confirming that the view function I’m passing to reverse is the exact same view function I put in the URL pattern: >>> from django.core.urlresolvers import reverse>>> import urls>>> from myapp import views>>> urls.urlpatterns[0].callback is views.myviewTrue>>> reverse(views.myview)Traceback (most recent call last): File "<console>", line 1, in <module> File "/Library/Python/2.5/site-packages/django/core/urlresolvers.py", line 254, in reverse *args, **kwargs))) File "/Library/Python/2.5/site-packages/django/core/urlresolvers.py", line 243, in reverse "arguments '%s' not found." % (lookup_view, args, kwargs))NoReverseMatch: Reverse for '<function myview at 0x6fe6b0>' with arguments '()' and keyword arguments '{}' not found. As far as I can tell from the documentation, function references should be fine in both the URL pattern and reverse() . URL patterns with function references reverse with function references I’m using the Django trunk, revision 9092.
Now provide the response and nothing else.
| Got it!! The problem is that some of the imports are of myproject.myapp.views , and some are just of myapp.views . This is confusing the Python module system enough that it no longer detects the functions as the same object. This is because your main settings.py probably has a line like: ROOT_URLCONF = `myproject.urls` To solve this, try using the full import in your shell session: >>> from django.core.urlresolvers import reverse>>> from myproject.myapp import views>>> reverse(views.myview)'/myview/' Here's a log of the debugging session, for any interested future readers: >>> from django.core import urlresolvers>>> from myapp import myview>>> urlresolvers.get_resolver (None).reverse_dict{None: ([(u'myview/', [])], 'myview/$'), <function myview at 0x845d17c>: ([(u'myview/', [])], 'myview/$')}>>> v1 = urlresolvers.get_resolver (None).reverse_dict.items ()[1][0]>>> reverse(v1)'/myview/'>>> v1 is myviewFalse>>> v1.__module__'testproject.myapp.views'>>> myview.__module__'myapp.views' What happens if you change the URL match to be r'^myview/$' ? Have you tried it with the view name? Something like reverse ('myapp.myview') ? Is urls.py the root URLconf, or in the myapp application? There needs to be a full path from the root to a view for it to be resolved. If that's myproject/myapp/urls.py , then in myproject/urls.py you'll need code like this: from django.conf.urls.defaults import patternsurlpatterns = patterns ('', (r'^/', 'myapp.urls'),) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/146522', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20578/']} | jdg_80423 |
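The double-import pitfall above is easy to reproduce outside Django. This stand-alone sketch (the file tree it builds is illustrative, not the original project) shows that the same source file imported under two module paths yields two distinct function objects — which is why an identity comparison like the `v1 is myview` check in the debugging session fails:

```python
import os
import sys
import tempfile

# Build a throwaway package tree: root/myproject/myapp/views.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "myproject", "myapp")
os.makedirs(pkg)
for d in (os.path.join(root, "myproject"), pkg):
    open(os.path.join(d, "__init__.py"), "w").close()
with open(os.path.join(pkg, "views.py"), "w") as f:
    f.write("def myview(request):\n    return 'MYVIEW'\n")

# Two sys.path roots make the same file importable under two names.
sys.path.insert(0, root)                             # -> myproject.myapp.views
sys.path.insert(0, os.path.join(root, "myproject"))  # -> myapp.views

import myapp.views
import myproject.myapp.views

# sys.modules is keyed by the full dotted path, so the module body runs
# twice and each copy gets its own function object.
print(myapp.views.myview is myproject.myapp.views.myview)  # False
```

Importing every view through the same path as ROOT_URLCONF avoids the mismatch.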
stackexchange | llm_judgeable_groundtruth_similarity | 14473439 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I select buttons within these UITableViewCells , the button sometimes passes data from the wrong row. In this table view, there are 10 sections, each with a section header and with only 1 row. The row height takes up most of the screen, as there are 6 buttons within it. Each UIButton , when selected, opens a new viewcontroller with an enlarged view of the button's image (removed this code to make it more readable here). Here is the problem: each button successfully pushes a modal viewcontroller ; however, as I scroll down, sometimes the buttons pass the data from the same button from the section below it (e.g. button1 in section 4 when clicked may pass the data for button1 in section 5). Similarly, as I scroll back up, sometimes the buttons pass the data from the same button from the section above it (e.g. button5 in section 7 when clicked may pass the data for button5 in section 6). I think this problem occurs because the iPhone is pulling the most recently loaded cell (when cellForRowAtIndexPath is called), but I would like it to pass the data for the row in which the button is located. Thoughts? Thanks in advance for any help you can provide. 
Edit: Added data code as requested - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath{static NSString *CellIdentifier = @"cell";UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];if (cell == nil) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier];}// dataDict pulled from plist NSDictionary *button1Dict = [dataDict objectForKey:@"button1"];NSDictionary *button2Dict = [dataDict objectForKey:@"button2"];NSDictionary *button3Dict = [dataDict objectForKey:@"button3"];NSDictionary *button4Dict = [dataDict objectForKey:@"button4"];NSDictionary *button5Dict = [dataDict objectForKey:@"button5"];NSDictionary *button6Dict = [dataDict objectForKey:@"button6"];// selectedCatTag and selectedItemTag for each button are alloc and init in viewDidLoadbutton1SelectedCatTag = (NSUInteger *)[[button1Dict objectForKey:@"selectedCatTag"] integerValue];button2SelectedCatTag = (NSUInteger *)[[button2Dict objectForKey:@"selectedCatTag"] integerValue];button3SelectedCatTag = (NSUInteger *)[[button3Dict objectForKey:@"selectedCatTag"] integerValue];button4SelectedCatTag = (NSUInteger *)[[button4Dict objectForKey:@"selectedCatTag"] integerValue];button5SelectedCatTag = (NSUInteger *)[[button5Dict objectForKey:@"selectedCatTag"] integerValue];button6SelectedCatTag = (NSUInteger *)[[button6Dict objectForKey:@"selectedCatTag"] integerValue];button1SelectedItemTag = (NSUInteger *)[[button1Dict objectForKey:@"selectedItemTag"] integerValue];button2SelectedItemTag = (NSUInteger *)[[button2Dict objectForKey:@"selectedItemTag"] integerValue];button3SelectedItemTag = (NSUInteger *)[[button3Dict objectForKey:@"selectedItemTag"] integerValue];button4SelectedItemTag = (NSUInteger *)[[button4Dict objectForKey:@"selectedItemTag"] integerValue];button5SelectedItemTag = (NSUInteger *)[[button5Dict objectForKey:@"selectedItemTag"] integerValue];button6SelectedItemTag = 
(NSUInteger *)[[button6Dict objectForKey:@"selectedItemTag"] integerValue];UIButton *button1 = (UIButton *)[cell viewWithTag:1];UIButton *button2 = (UIButton *)[cell viewWithTag:2];UIButton *button3 = (UIButton *)[cell viewWithTag:3];UIButton *button4 = (UIButton *)[cell viewWithTag:4];UIButton *button5 = (UIButton *)[cell viewWithTag:5];UIButton *button6 = (UIButton *)[cell viewWithTag:6];CGFloat kImageSquareSideLength = 100.0;if (button1Dict) { NSDictionary *selectedCatDict = [dataCategories objectAtIndex:(int)button1SelectedCatTag]; NSString *imageName = [[selectedCatDict objectForKey:@"itemImages"] objectAtIndex:(NSUInteger)button1SelectedItemTag]; [button1 setImage:[[UIImage imageNamed:imageName] imageByScalingAndCroppingForSize:CGSizeMake(kImageSquareSideLength, kImageSquareSideLength)] forState:UIControlStateNormal]; [button1 addTarget:self action:@selector(button1Pressed:) forControlEvents:UIControlEventTouchUpInside];}if (button2Dict) { NSDictionary *selectedCatDict = [dataCategories objectAtIndex:(int)button2SelectedCatTag]; NSString *imageName = [[selectedCatDict objectForKey:@"itemImages"] objectAtIndex:(NSUInteger)button2SelectedItemTag]; [button2 setImage:[[UIImage imageNamed:imageName] imageByScalingAndCroppingForSize:CGSizeMake(kImageSquareSideLength, kImageSquareSideLength)] forState:UIControlStateNormal]; [button2 addTarget:self action:@selector(button2Pressed:) forControlEvents:UIControlEventTouchUpInside];}if (button3Dict) { NSDictionary *selectedCatDict = [dataCategories objectAtIndex:(int)button3SelectedCatTag]; NSString *imageName = [[selectedCatDict objectForKey:@"itemImages"] objectAtIndex:(NSUInteger)button3SelectedItemTag]; [button3 setImage:[[UIImage imageNamed:imageName] imageByScalingAndCroppingForSize:CGSizeMake(kImageSquareSideLength, kImageSquareSideLength)] forState:UIControlStateNormal]; [button3 addTarget:self action:@selector(button3Pressed:) forControlEvents:UIControlEventTouchUpInside];}if (button4Dict) { NSDictionary 
*selectedCatDict = [dataCategories objectAtIndex:(int)button4SelectedCatTag]; NSString *imageName = [[selectedCatDict objectForKey:@"itemImages"] objectAtIndex:(NSUInteger)button4SelectedItemTag]; [button4 setImage:[[UIImage imageNamed:imageName] imageByScalingAndCroppingForSize:CGSizeMake(kImageSquareSideLength, kImageSquareSideLength)] forState:UIControlStateNormal]; [button4 addTarget:self action:@selector(button4Pressed:) forControlEvents:UIControlEventTouchUpInside];}if (button5Dict) { NSDictionary *selectedCatDict = [dataCategories objectAtIndex:(int)button5SelectedCatTag]; NSString *imageName = [[selectedCatDict objectForKey:@"itemImages"] objectAtIndex:(NSUInteger)button5SelectedItemTag]; [button5 setImage:[[UIImage imageNamed:imageName] imageByScalingAndCroppingForSize:CGSizeMake(kImageSquareSideLength, kImageSquareSideLength)] forState:UIControlStateNormal]; [button5 addTarget:self action:@selector(button5Pressed:) forControlEvents:UIControlEventTouchUpInside];} if (button6Dict) { NSDictionary *selectedCatDict = [dataCategories objectAtIndex:(int)button6SelectedCatTag]; NSString *imageName = [[selectedCatDict objectForKey:@"itemImages"] objectAtIndex:(NSUInteger)button6SelectedItemTag]; [button6 setImage:[[UIImage imageNamed:imageName] imageByScalingAndCroppingForSize:CGSizeMake(kImageSquareSideLength, kImageSquareSideLength)] forState:UIControlStateNormal]; [button6 addTarget:self action:@selector(button6Pressed:) forControlEvents:UIControlEventTouchUpInside];} return cell;}//all other button actions are identical to button1Pressed- (IBAction)button1Pressed:(id)sender{UIStoryboard *storyboard = [UIStoryboard storyboardWithName:@"MainStoryboard" bundle:nil];SelectedItemViewController *destVC = [storyboard instantiateViewControllerWithIdentifier:@"SelectedItemViewController"];destVC.selectedOutfitDict = @"DiscoverOutfits";destVC.selectedCatTag = (int *)button1SelectedCatTag;destVC.selectedItemTag = (int *)button1SelectedItemTag;[self 
presentViewController:destVC animated:NO completion:nil];}
Now provide the response and nothing else.
| This is when you have a union or struct without a name, e.g.: typedef struct{ union { int a; int b; }; // no name int c;} MyStruct;MyStruct m;m.a = 4;m.b = 6; //overwrites m.am.c = 8; It allows you to access members of the union as if they were members of the struct. When you give the union a name (which is what the standard requires), you must access a and b through the name of the union instead: typedef struct{ union { int a; int b; } u; int c;}MyStruct m;m.u.a = 4;m.u.b = 6; // overwrites m.u.am.c = 8; It is not an issue as long as you compile your code using compilers that share this extension, it is only a problem when you compile your code using compilers that don't , and because the standard doesn't require this behaviour, a compiler is free to reject this code. Edit: As pointed out by andyn, C11 explicitly allows this behaviour. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14473439', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1978374/']} | jdg_80424 |
stackexchange | llm_judgeable_groundtruth_similarity | 24660096 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How to correctly construct a loop to make sure the following promise call and the chained logger.log(res) runs synchronously through iteration? (bluebird) db.getUser(email).then(function(res) { logger.log(res); }); // this is a promise I tried the following way (method from http://blog.victorquinn.com/javascript-promise-while-loop ) var Promise = require('bluebird');var promiseWhile = function(condition, action) { var resolver = Promise.defer(); var loop = function() { if (!condition()) return resolver.resolve(); return Promise.cast(action()) .then(loop) .catch(resolver.reject); }; process.nextTick(loop); return resolver.promise;});var count = 0;promiseWhile(function() { return count < 10;}, function() { return new Promise(function(resolve, reject) { db.getUser(email) .then(function(res) { logger.log(res); count++; resolve(); }); }); }).then(function() { console.log('all done');}); Although it seems to work, but I don't think it guarantees the order of calling logger.log(res); Any suggestions?
Now provide the response and nothing else.
| I don't think it guarantees the order of calling logger.log(res); Actually, it does. That statement is executed before the resolve call. Any suggestions? Lots. The most important is your use of the create-promise-manually antipattern - just do only promiseWhile(…, function() { return db.getUser(email) .then(function(res) { logger.log(res); count++; });})… Second, that while function could be simplified a lot: var promiseWhile = Promise.method(function(condition, action) { if (!condition()) return; return action().then(promiseWhile.bind(null, condition, action));}); Third, I would not use a while loop (with a closure variable) but a for loop: var promiseFor = Promise.method(function(condition, action, value) { if (!condition(value)) return value; return action(value).then(promiseFor.bind(null, condition, action));});promiseFor(function(count) { return count < 10;}, function(count) { return db.getUser(email) .then(function(res) { logger.log(res); return ++count; });}, 0).then(console.log.bind(console, 'all done')); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24660096', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2127480/']} | jdg_80425 |
stackexchange | llm_judgeable_groundtruth_similarity | 42438544 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to install SBT. I went through few cycles of reinstalling SBT and JAVA, restarting pc and so on. Everything I was able to find in Google about similar issues seems to refer to later stages of using sbt. I'm a complete newbie, so I might have missed something obvious along the way. Here's what I do: I install SBT via msi installer. I run "sbt" in command prompt.This is command prompt window (I manually broke all the links in the same way in order to be able to post this question): Microsoft Windows [Version 10.0.14393](c) 2016 Microsoft Corporation. Wszelkie prawa zastrzeżone.C:\Users\Jakub>sbtJava HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0Getting org.fusesource.jansi jansi 1.11 ...downloading ht tps://repo1.maven.org/maven2/org/fusesource/jansi/jansi/1.11/jansi-1.11.jar ... [SUCCESSFUL ] org.fusesource.jansi#jansi;1.11!jansi.jar (283ms):: retrieving :: org.scala-sbt#boot-jansi confs: [default] 1 artifacts copied, 0 already retrieved (111kB/16ms)Getting org.scala-sbt sbt 0.13.13 ...downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt/0.13.13/jars/sbt.jar ... [SUCCESSFUL ] org.scala-sbt#sbt;0.13.13!sbt.jar (2892ms)downloading ht tps://repo1.maven.org/maven2/org/scala-lang/scala-library/2.10.6/scala-library-2.10.6.jar ... [SUCCESSFUL ] org.scala-lang#scala-library;2.10.6!scala-library.jar (942ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/main/0.13.13/jars/main.jar ... [SUCCESSFUL ] org.scala-sbt#main;0.13.13!main.jar (3227ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/compiler-interface/0.13.13/jars/compiler-interface.jar ... [SUCCESSFUL ] org.scala-sbt#compiler-interface;0.13.13!compiler-interface.jar (2865ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/actions/0.13.13/jars/actions.jar ... 
[SUCCESSFUL ] org.scala-sbt#actions;0.13.13!actions.jar (2861ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/main-settings/0.13.13/jars/main-settings.jar ... [SUCCESSFUL ] org.scala-sbt#main-settings;0.13.13!main-settings.jar (2965ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/interface/0.13.13/jars/interface.jar ... [SUCCESSFUL ] org.scala-sbt#interface;0.13.13!interface.jar (2807ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/io/0.13.13/jars/io.jar ... [SUCCESSFUL ] org.scala-sbt#io;0.13.13!io.jar (2939ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/ivy/0.13.13/jars/ivy.jar ... [SUCCESSFUL ] org.scala-sbt#ivy;0.13.13!ivy.jar (2954ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/logging/0.13.13/jars/logging.jar ... [SUCCESSFUL ] org.scala-sbt#logging;0.13.13!logging.jar (2876ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/logic/0.13.13/jars/logic.jar ... [SUCCESSFUL ] org.scala-sbt#logic;0.13.13!logic.jar (2759ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/process/0.13.13/jars/process.jar ... [SUCCESSFUL ] org.scala-sbt#process;0.13.13!process.jar (2825ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/run/0.13.13/jars/run.jar ... [SUCCESSFUL ] org.scala-sbt#run;0.13.13!run.jar (2820ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/command/0.13.13/jars/command.jar ... [SUCCESSFUL ] org.scala-sbt#command;0.13.13!command.jar (2973ms)downloading ht tps://repo1.maven.org/maven2/org/scala-sbt/launcher-interface/1.0.0-M1/launcher-interface-1.0.0-M1.jar ... [SUCCESSFUL ] org.scala-sbt#launcher-interface;1.0.0-M1!launcher-interface.jar (397ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/classpath/0.13.13/jars/classpath.jar ... 
[SUCCESSFUL ] org.scala-sbt#classpath;0.13.13!classpath.jar (2874ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/completion/0.13.13/jars/completion.jar ... [SUCCESSFUL ] org.scala-sbt#completion;0.13.13!completion.jar (2948ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/api/0.13.13/jars/api.jar ... [SUCCESSFUL ] org.scala-sbt#api;0.13.13!api.jar (2861ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/compiler-integration/0.13.13/jars/compiler-integration.jar ... [SUCCESSFUL ] org.scala-sbt#compiler-integration;0.13.13!compiler-integration.jar (2895ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/compiler-ivy-integration/0.13.13/jars/compiler-ivy-integration.jar ... [SUCCESSFUL ] org.scala-sbt#compiler-ivy-integration;0.13.13!compiler-ivy-integration.jar (2794ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/relation/0.13.13/jars/relation.jar ... [SUCCESSFUL ] org.scala-sbt#relation;0.13.13!relation.jar (2983ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/task-system/0.13.13/jars/task-system.jar ... [SUCCESSFUL ] org.scala-sbt#task-system;0.13.13!task-system.jar (2820ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/tasks/0.13.13/jars/tasks.jar ... [SUCCESSFUL ] org.scala-sbt#tasks;0.13.13!tasks.jar (2847ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/tracking/0.13.13/jars/tracking.jar ... [SUCCESSFUL ] org.scala-sbt#tracking;0.13.13!tracking.jar (2767ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/testing/0.13.13/jars/testing.jar ... [SUCCESSFUL ] org.scala-sbt#testing;0.13.13!testing.jar (2832ms)downloading ht tps://repo1.maven.org/maven2/org/scala-lang/scala-compiler/2.10.6/scala-compiler-2.10.6.jar ... 
[SUCCESSFUL ] org.scala-lang#scala-compiler;2.10.6!scala-compiler.jar (1139ms)downloading ht tps://repo1.maven.org/maven2/org/scala-lang/scala-reflect/2.10.6/scala-reflect-2.10.6.jar ... [SUCCESSFUL ] org.scala-lang#scala-reflect;2.10.6!scala-reflect.jar (305ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/control/0.13.13/jars/control.jar ... [SUCCESSFUL ] org.scala-sbt#control;0.13.13!control.jar (2788ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/collections/0.13.13/jars/collections.jar ... [SUCCESSFUL ] org.scala-sbt#collections;0.13.13!collections.jar (3080ms)downloading ht tps://repo1.maven.org/maven2/jline/jline/2.13/jline-2.13.jar ... [SUCCESSFUL ] jline#jline;2.13!jline.jar (526ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/classfile/0.13.13/jars/classfile.jar ... [SUCCESSFUL ] org.scala-sbt#classfile;0.13.13!classfile.jar (3011ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/incremental-compiler/0.13.13/jars/incremental-compiler.jar ... [SUCCESSFUL ] org.scala-sbt#incremental-compiler;0.13.13!incremental-compiler.jar (3007ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/compile/0.13.13/jars/compile.jar ... [SUCCESSFUL ] org.scala-sbt#compile;0.13.13!compile.jar (2835ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/persist/0.13.13/jars/persist.jar ... [SUCCESSFUL ] org.scala-sbt#persist;0.13.13!persist.jar (3280ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-tools.sbinary/sbinary_2.10/0.4.2/jars/sbinary_2.10.jar ... [SUCCESSFUL ] org.scala-tools.sbinary#sbinary_2.10;0.4.2!sbinary_2.10.jar (3221ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/cross/0.13.13/jars/cross.jar ... [SUCCESSFUL ] org.scala-sbt#cross;0.13.13!cross.jar (2765ms)downloading ht tps://repo1.maven.org/maven2/com/jcraft/jsch/0.1.50/jsch-0.1.50.jar ... 
[SUCCESSFUL ] com.jcraft#jsch;0.1.50!jsch.jar (522ms)downloading ht tps://repo1.maven.org/maven2/org/scala-sbt/serialization_2.10/0.1.2/serialization_2.10-0.1.2.jar ... [SUCCESSFUL ] org.scala-sbt#serialization_2.10;0.1.2!serialization_2.10.jar (330ms)downloading ht tps://repo1.maven.org/maven2/org/scala-lang/modules/scala-pickling_2.10/0.10.1/scala-pickling_2.10-0.10.1.jar ... [SUCCESSFUL ] org.scala-lang.modules#scala-pickling_2.10;0.10.1!scala-pickling_2.10.jar (489ms)downloading ht tps://repo1.maven.org/maven2/org/json4s/json4s-core_2.10/3.2.10/json4s-core_2.10-3.2.10.jar ... [SUCCESSFUL ] org.json4s#json4s-core_2.10;3.2.10!json4s-core_2.10.jar (344ms)downloading ht tps://repo1.maven.org/maven2/org/spire-math/jawn-parser_2.10/0.6.0/jawn-parser_2.10-0.6.0.jar ... [SUCCESSFUL ] org.spire-math#jawn-parser_2.10;0.6.0!jawn-parser_2.10.jar (203ms)downloading ht tps://repo1.maven.org/maven2/org/spire-math/json4s-support_2.10/0.6.0/json4s-support_2.10-0.6.0.jar ... [SUCCESSFUL ] org.spire-math#json4s-support_2.10;0.6.0!json4s-support_2.10.jar (198ms)downloading ht tps://repo1.maven.org/maven2/org/scalamacros/quasiquotes_2.10/2.0.1/quasiquotes_2.10-2.0.1.jar ... [SUCCESSFUL ] org.scalamacros#quasiquotes_2.10;2.0.1!quasiquotes_2.10.jar (382ms)downloading ht tps://repo1.maven.org/maven2/org/json4s/json4s-ast_2.10/3.2.10/json4s-ast_2.10-3.2.10.jar ... [SUCCESSFUL ] org.json4s#json4s-ast_2.10;3.2.10!json4s-ast_2.10.jar (197ms)downloading ht tps://repo1.maven.org/maven2/com/thoughtworks/paranamer/paranamer/2.6/paranamer-2.6.jar ... [SUCCESSFUL ] com.thoughtworks.paranamer#paranamer;2.6!paranamer.jar (256ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/cache/0.13.13/jars/cache.jar ... [SUCCESSFUL ] org.scala-sbt#cache;0.13.13!cache.jar (3150ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/test-agent/0.13.13/jars/test-agent.jar ... 
[SUCCESSFUL ] org.scala-sbt#test-agent;0.13.13!test-agent.jar (2881ms)downloading ht tps://repo1.maven.org/maven2/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar ... [SUCCESSFUL ] org.scala-sbt#test-interface;1.0!test-interface.jar (393ms)downloading ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/apply-macro/0.13.13/jars/apply-macro.jar ... [SUCCESSFUL ] org.scala-sbt#apply-macro;0.13.13!apply-macro.jar (3008ms)downloading ht tps://repo1.maven.org/maven2/org/scala-sbt/template-resolver/0.1/template-resolver-0.1.jar ... [SUCCESSFUL ] org.scala-sbt#template-resolver;0.1!template-resolver.jar (193ms):: problems summary :::::: WARNINGS problem while downloading module descriptor: ht tps://repo1.maven.org/maven2/org/scala-sbt/ivy/ivy/2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6/ivy-2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6.pom: Read timed out (19300ms) module not found: org.scala-sbt.ivy#ivy;2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6 ==== local: tried C:\Users\Jakub\.ivy2\local\org.scala-sbt.ivy\ivy\2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6\ivys\ivy.xml -- artifact org.scala-sbt.ivy#ivy;2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6!ivy.jar: C:\Users\Jakub\.ivy2\local\org.scala-sbt.ivy\ivy\2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6\jars\ivy.jar ==== Maven Central: tried ht tps://repo1.maven.org/maven2/org/scala-sbt/ivy/ivy/2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6/ivy-2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6.pom ==== typesafe-ivy-releases: tried ht tps://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt.ivy/ivy/2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6/ivys/ivy.xml ==== sbt-ivy-snapshots: tried ht tps://repo.scala-sbt.org/scalasbt/ivy-snapshots/org.scala-sbt.ivy/ivy/2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6/ivys/ivy.xml :::::::::::::::::::::::::::::::::::::::::::::: :: UNRESOLVED DEPENDENCIES :: :::::::::::::::::::::::::::::::::::::::::::::: :: 
org.scala-sbt.ivy#ivy;2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6: not found
::::::::::::::::::::::::::::::::::::::::::::::::
USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
unresolved dependency: org.scala-sbt.ivy#ivy;2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6: not found
Error during sbt execution: Error retrieving required libraries
(see C:\Users\Jakub\.sbt\boot\update.log for complete log)
Error: Could not retrieve sbt 0.13.13

If I try to run sbt now, the command prompt contains the same message starting at :: problems summary ::

And this is the part of C:\Users\Jakub\.sbt\boot that seems to contain the description of the problem (the whole file greatly exceeds the character limit):

Module descriptor is processed : org.scala-sbt#test-interface;1.0
Module descriptor is processed : org.scala-sbt#main-settings;0.13.13
Module descriptor is processed : org.scala-sbt#apply-macro;0.13.13
Module descriptor is processed : org.scala-sbt#command;0.13.13
Module descriptor is processed : org.scala-sbt#template-resolver;0.1
Module descriptor is processed : org.scala-sbt#logic;0.13.13
Module descriptor is processed : org.scala-sbt#compiler-interface;0.13.13
  report for org.scala-sbt#boot-app;1.0 default produced in C:\Users\Jakub\.sbt\boot\resolution-cache\org.scala-sbt-boot-app-default.xml
  resolve done (128406ms resolve - 106277ms download)
:: problems summary ::
:::: WARNINGS
  problem while downloading module descriptor: https://repo1.maven.org/maven2/org/scala-sbt/ivy/ivy/2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6/ivy-2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6.pom: Read timed out (19300ms)
  module not found: org.scala-sbt.ivy#ivy;2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6
  ==== local: tried
  C:\Users\Jakub\.ivy2\local\org.scala-sbt.ivy\ivy\2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6\ivys\ivy.xml
  -- artifact org.scala-sbt.ivy#ivy;2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6!ivy.jar:
C:\Users\Jakub\.ivy2\local\org.scala-sbt.ivy\ivy\2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6\jars\ivy.jar ==== Maven Central: tried https://repo1.maven.org/maven2/org/scala-sbt/ivy/ivy/2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6/ivy-2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6.pom ==== typesafe-ivy-releases: tried https://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt.ivy/ivy/2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6/ivys/ivy.xml ==== sbt-ivy-snapshots: tried https://repo.scala-sbt.org/scalasbt/ivy-snapshots/org.scala-sbt.ivy/ivy/2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6/ivys/ivy.xml :::::::::::::::::::::::::::::::::::::::::::::: :: UNRESOLVED DEPENDENCIES :: :::::::::::::::::::::::::::::::::::::::::::::: :: org.scala-sbt.ivy#ivy;2.3.0-sbt-2cf13e211b2cb31f0d3b317289dca70eca3362f6: not found :::::::::::::::::::::::::::::::::::::::::::::::: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILSjava.lang.RuntimeException: not found at org.apache.ivy.core.resolve.IvyNode.loadData(IvyNode.java:238) at org.apache.ivy.core.resolve.VisitNode.loadData(VisitNode.java:292) at org.apache.ivy.core.resolve.ResolveEngine.fetchDependencies(ResolveEngine.java:714) at org.apache.ivy.core.resolve.ResolveEngine.doFetchDependencies(ResolveEngine.java:799) at org.apache.ivy.core.resolve.ResolveEngine.fetchDependencies(ResolveEngine.java:722) at org.apache.ivy.core.resolve.ResolveEngine.doFetchDependencies(ResolveEngine.java:799) at org.apache.ivy.core.resolve.ResolveEngine.fetchDependencies(ResolveEngine.java:722) at org.apache.ivy.core.resolve.ResolveEngine.doFetchDependencies(ResolveEngine.java:799) at org.apache.ivy.core.resolve.ResolveEngine.fetchDependencies(ResolveEngine.java:722) at org.apache.ivy.core.resolve.ResolveEngine.doFetchDependencies(ResolveEngine.java:799) at org.apache.ivy.core.resolve.ResolveEngine.fetchDependencies(ResolveEngine.java:722) at 
org.apache.ivy.core.resolve.ResolveEngine.doFetchDependencies(ResolveEngine.java:799) at org.apache.ivy.core.resolve.ResolveEngine.fetchDependencies(ResolveEngine.java:722) at org.apache.ivy.core.resolve.ResolveEngine.doFetchDependencies(ResolveEngine.java:799) at org.apache.ivy.core.resolve.ResolveEngine.fetchDependencies(ResolveEngine.java:722) at org.apache.ivy.core.resolve.ResolveEngine.getDependencies(ResolveEngine.java:594) at org.apache.ivy.core.resolve.ResolveEngine.resolve(ResolveEngine.java:234) at xsbt.boot.Update.xsbt$boot$Update$$lockedApply(Update.scala:105) at xsbt.boot.Update$$anon$4.call(Update.scala:99) at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:93) at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:78) at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:97) at xsbt.boot.Using$.withResource(Using.scala:10) at xsbt.boot.Using$.apply(Using.scala:9) at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:58) at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:48) at xsbt.boot.Locks$.apply0(Locks.scala:31) at xsbt.boot.Locks$.apply(Locks.scala:28) at xsbt.boot.Update.apply(Update.scala:100) at xsbt.boot.Launch.update(Launch.scala:350) at xsbt.boot.Launch.xsbt$boot$Launch$$retrieve$1(Launch.scala:208) at xsbt.boot.Launch$$anonfun$3.apply(Launch.scala:216) at scala.Option.getOrElse(Option.scala:120) at xsbt.boot.Launch.xsbt$boot$Launch$$getAppProvider0(Launch.scala:216) at xsbt.boot.Launch$$anon$2.call(Launch.scala:196) at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:93) at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:78) at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:97) at xsbt.boot.Using$.withResource(Using.scala:10) at xsbt.boot.Using$.apply(Using.scala:9) at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:58) at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:48) at 
xsbt.boot.Locks$.apply0(Locks.scala:31) at xsbt.boot.Locks$.apply(Locks.scala:28) at xsbt.boot.Launch.locked(Launch.scala:238) at xsbt.boot.Launch.app(Launch.scala:147) at xsbt.boot.Launch.app(Launch.scala:145) at xsbt.boot.Launch$.run(Launch.scala:102) at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35) at xsbt.boot.Launch$.launch(Launch.scala:117) at xsbt.boot.Launch$.apply(Launch.scala:18) at xsbt.boot.Boot$.runImpl(Boot.scala:41) at xsbt.boot.Boot$.main(Boot.scala:17) at xsbt.boot.Boot.main(Boot.scala)Error during sbt execution: Error retrieving required libraries at xsbt.boot.Pre$.error(Pre.scala:26) at xsbt.boot.Update.xsbt$boot$Update$$lockedApply(Update.scala:105) at xsbt.boot.Update$$anon$4.call(Update.scala:99) at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:93) at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:78) at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:97) at xsbt.boot.Using$.withResource(Using.scala:10) at xsbt.boot.Using$.apply(Using.scala:9) at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:58) at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:48) at xsbt.boot.Locks$.apply0(Locks.scala:31) at xsbt.boot.Locks$.apply(Locks.scala:28) at xsbt.boot.Update.apply(Update.scala:100) at xsbt.boot.Launch.update(Launch.scala:350) at xsbt.boot.Launch.xsbt$boot$Launch$$retrieve$1(Launch.scala:208) at xsbt.boot.Launch$$anonfun$3.apply(Launch.scala:216) at scala.Option.getOrElse(Option.scala:120) at xsbt.boot.Launch.xsbt$boot$Launch$$getAppProvider0(Launch.scala:216) at xsbt.boot.Launch$$anon$2.call(Launch.scala:196) at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:93) at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:78) at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:97) at xsbt.boot.Using$.withResource(Using.scala:10) at xsbt.boot.Using$.apply(Using.scala:9) at 
xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:58)
  at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:48)
  at xsbt.boot.Locks$.apply0(Locks.scala:31)
  at xsbt.boot.Locks$.apply(Locks.scala:28)
  at xsbt.boot.Launch.locked(Launch.scala:238)
  at xsbt.boot.Launch.app(Launch.scala:147)
  at xsbt.boot.Launch.app(Launch.scala:145)
  at xsbt.boot.Launch$.run(Launch.scala:102)
  at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35)
  at xsbt.boot.Launch$.launch(Launch.scala:117)
  at xsbt.boot.Launch$.apply(Launch.scala:18)
  at xsbt.boot.Boot$.runImpl(Boot.scala:41)
  at xsbt.boot.Boot$.main(Boot.scala:17)
  at xsbt.boot.Boot.main(Boot.scala)
Error during sbt execution: Error retrieving required libraries

What should I do? What could I have missed?
Now provide the response and nothing else.
| Try to add repositories to ~/.sbt/repositories:

[repositories]
  local
  sbt-releases-repo: https://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
  sbt-plugins-repo: https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
  maven-central: https://repo1.maven.org/maven2/
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42438544', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6718626/']} | jdg_80426
stackexchange | llm_judgeable_groundtruth_similarity | 1786940 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to increment a value every second on the client side with jQuery. This is what I did:

<script type="text/javascript">
  $(document).ready(function increment(){
    $("#counter").text(parseInt($("#counter").text())+1);
    setTimeout(increment(),1000)
  })
</script>

This is not working as expected and I get a "too much recursion" error. Any ideas on this?
Now provide the response and nothing else.
| Try

$(document).ready(function () {
    function increment(){
        $("#counter").text(parseInt($("#counter").text())+1);
        setTimeout(increment,1000);
    };
    increment();
});

By the way, there is setInterval() for calling a function repeatedly at a set interval, rather than recursively calling a function with setTimeout():

$(document).ready(function () {
    var interval = setInterval(increment,1000);
});

function increment(){
    $("#counter").text(parseInt($("#counter").text())+1);
}
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1786940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1645236/']} | jdg_80427
stackexchange | llm_judgeable_groundtruth_similarity | 33143779 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was reading about the Critical Section Problem in Operating System Concepts by Peter B. Galvin. According to it:

1) Progress is: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

2) Bounded waiting is: There exists a bound, or limit, on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

I do not understand what the author is trying to say in either case. Could you please help me understand by giving a proper example related to these definitions? Thank you.
Now provide the response and nothing else.
| First, let me introduce some terminology. A critical section (CS) is a sequence of instructions that can be executed by at most one process at the same time. When using critical sections, the code can be broken down into the following sections:

// Some arbitrary code (such as initialization).

EnterCriticalSection(cs);

// The code that constitutes the CS.
// Only one process can be executing this code at the same time.

LeaveCriticalSection(cs);

// Some arbitrary code. This is called the remainder section.

The first section contains some code, such as initialization code. We don't have a name for that section. The second section is the code that tries to enter the CS. The third section is the CS itself. The fourth section is the code that leaves the critical section. The fifth and last section is called the remainder section, which can contain any code. Note that the CS itself can be different between processes (consider for example a process that receives requests from a client and inserts them in a queue, and another process that processes these requests). To make sure that an implementation of critical sections works properly, there are three conditions that must be satisfied. You mentioned two of them (which I will explain next). The third is mutual exclusion, which is obviously vital. It's worth noting that mutual exclusion applies only to the CS and the leave section. However, the other three sections are not exclusive. The first condition is progress. The purpose of this condition is to make sure that either some process is currently in the CS and doing some work or, if there was at least one process that wants to enter the CS, it will and then do some work. In both cases, some work is getting done and therefore all processes are making progress overall.
Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

Let's understand this definition sentence by sentence.

If no process is executing in its critical section

If there is a process executing in its critical section (even though not stated explicitly, this includes the leave section as well), then this means that some work is getting done. So we are making progress. Otherwise, if this was not the case...

and some processes wish to enter their critical sections

If no process wants to enter their critical sections, then there is no more work to do. Otherwise, if there is at least one process that wishes to enter its critical section...

then only those processes that are not executing in their remainder section

This means we are talking about those processes that are executing in either of the first two sections (remember, no process is executing in its critical section or the leave section)...

can participate in deciding which will enter its critical section next,

Since there is at least one process that wishes to enter its CS, somehow we must choose one of them to enter its CS. But who's going to make this decision? Those processes that have already requested permission to enter their critical sections have the right to participate in making this decision. In addition, those processes that may wish to enter their CSs but have not yet requested the permission to do so (this means that they are executing in the first section) also have the right to participate in making this decision.

and this selection cannot be postponed indefinitely.

This states that it will take a limited amount of time to select a process to enter its CS. In particular, no deadlock or livelock will occur.
So after this limited amount of time, a process will enter its CS and do some work, thereby making progress. Now I will explain the last condition, namely bounded waiting. The purpose of this condition is to make sure that every process gets the chance to actually enter its critical section so that no process starves forever. However, please note that neither this condition nor progress guarantees fairness. An implementation of a CS doesn't have to be fair.

Bounded waiting: There exists a bound, or limit, on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Let's understand this definition sentence by sentence, starting from the last one.

after a process has made a request to enter its critical section and before that request is granted.

In other words, if there is a process that has requested to enter its CS but has not yet entered it. Let's call this process P.

There exists a bound, or limit, on the number of times other processes are allowed to enter their critical sections

While P is waiting to enter its CS, other processes may be waiting as well and some process is executing in its CS. When it leaves its CS, some other process has to be selected to enter the CS, which may or may not be P. Suppose a process other than P was selected. This situation might happen again and again. That is, other processes are getting the chance to enter their CSs but never P. Note that progress is being made, but by other processes, not by P. The problem is that P is not getting the chance to do any work. To prevent starvation, there must be a guarantee that P will eventually enter its CS. For this to happen, the number of times other processes enter their CSs must be limited. In this case, P will definitely get the chance to enter its CS.
I would like to mention that the definition of a CS can be generalized so that at most N processes are executing in their critical sections where N is any positive integer. There are also variants of reader-writer critical sections. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/33143779', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5383097/']} | jdg_80428 |
stackexchange | llm_judgeable_groundtruth_similarity | 54734 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My team currently uses ASP.NET with MS SQL Server for our software. Security has only become important since I started, and there are injection vulnerabilities everywhere. Due to current integration with Oracle and MS SQL, the business decision was never to use parameterized queries. This of course has been an issue. Implementing find-and-replace along with whitelisting of parameters has reduced this issue strongly. My only issue is, I have read a lot about Unicode and other encodings being the cause of SQL injection. I don't quite understand this. Currently we sanitise everything like this:

Const pattern As String = "^[a-zA-Z0-9.=,:\s\\\/\']*$"

term = term.Replace("'", "''")
If Not Tools.ValidString(term, pattern) Then
    term = String.Empty
End If

Public Shared Function ValidString(ByVal source As String, ByVal pattern As String) As Boolean
    If source = String.Empty Then Return True
    Dim params As Text.RegularExpressions.RegexOptions = Text.RegularExpressions.RegexOptions.None
    Dim regex As New Text.RegularExpressions.Regex(pattern, params)
    Dim match As Text.RegularExpressions.Match = regex.Match(source, pattern, params)
    Return match.Success
End Function

Does anyone have an example where Unicode/encoded injection could be used, or just a plain example where this regular expression would fail to prevent SQL injection? Thanks.

UPDATE

Can I please not have answers related to standard SQL injection; I am strongly familiar with this already. ALSO, please stop posting saying don't use string sanitisation. There are zero resources in the company to shift all queries to parameterised queries with ADO.NET while also building in logic for it to use ODP.NET if the client uses Oracle. OWASP mentions the use of whitelisting of characters if parameterising is out of the question, so, as in the regex, only a few characters are allowed. I am not blacklisting characters, as this is stupid. There is no compliance required for the data we hold.
The security is for database integrity, as it would be a nightmare if content was changed. Our software is a very large cloud application, a CMS and DMS in one, where 99% of the software is used internally, and only a minority is external and is only used for public review and commenting on the documents. From my new understanding of Unicode injection, it can only occur if the data is being encoded before being placed into the query, and therefore Unicode injection only really occurs in applications with globalisation of data. I am passing raw string fields straight into the string query after the sanitisation above. Can I please only have an answer from an expert in injection, who can back up my claim that Unicode will not apply in my circumstance?
Now provide the response and nothing else.
| There are cases of SQL Injections leveraging the implicit conversion of Unicode homoglyphs from Unicode character string types (NCHAR, NVARCHAR) to character string types (CHAR, VARCHAR). A character such as ʼ (U+02BC) in NVARCHAR may slip through the escaping routine and get translated to ' (U+0027) in VARCHAR , which may result in an SQL Injection when such a string is used to build an SQL statement dynamically. However, your validation is pretty strict (only characters from the Basic Latin Unicode block and Unicode whitespace characters ) and I can’t think of any case where this would fail. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/54734', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/41909/']} | jdg_80429 |
stackexchange | llm_judgeable_groundtruth_similarity | 3105330 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose that the inner product on $P_2(\mathbb{R})$ is defined by $$\langle f,g \rangle:= f(-1)g(-1)+f(0)g(0)+f(1)g(1).$$ Consider the operator $T \in B(P_2(\mathbb{R}))$ which is defined as $Tf=f'$ , so the derivative of $f$ . Find the adjoint of $T$ . I am trying to find the adjoint in the following way.Since $\langle Tf, g\rangle = \langle f, T^*g\rangle$ then we have that $\langle f, T^*g\rangle = f'(-1)g(-1) + f'(0)g(0) + f'(1)g(1)$ . How do I continue now since I don't know how $T^*$ looks like? I appreciate your help.
Now provide the response and nothing else.
| This is not true for general metric spaces: if you have a function whose domain is a two-point space, then this function is $\alpha$-Hölder for every $\alpha>0$. The $\alpha$-Hölder property for $\alpha>1$ implies that the function is constant only in special spaces, like $\mathbb R^n$. Let me just focus on functions $f:\mathbb R\to\mathbb R$. You mention "[the proofs] all make (at least indirect) use of some differentiability assumption on $f$". This is not quite correct: you don't need to assume differentiability, since the $\alpha$-Hölder condition for $\alpha>1$ implies that the derivative exists. Indeed, we just consider $$\left|\frac{f(x+h)-f(x)}{h}\right|\leq\frac{C|h|^\alpha}{|h|}\to 0,$$ so the limit exists and is zero everywhere. One way or another, we can prove that $f$ is constant directly as well: take any two points $x<y$ and let $x_0=x,\ x_1=x+\frac{y-x}{n},\ x_2=x+2\frac{y-x}{n},\ \dots,\ x_n=x+n\frac{y-x}{n}=y$. Then we have $$|f(x)-f(y)|\leq|f(x_0)-f(x_1)|+|f(x_1)-f(x_2)|+\dots+|f(x_{n-1})-f(x_n)|\\\leq C|x_0-x_1|^\alpha+C|x_1-x_2|^\alpha+\dots+C|x_{n-1}-x_n|^\alpha\\=n\cdot C\left|\frac{x-y}{n}\right|^\alpha=n^{1-\alpha}C|x-y|^\alpha\to 0,$$ so that $f(x)=f(y)$. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/3105330', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/387962/']} | jdg_80430
stackexchange | llm_judgeable_groundtruth_similarity | 10662528 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using the following code, which works when the scroll bar reaches the bottom:

if($(window).scrollTop() == $(document).height() - $(window).height()){

However, I want the Ajax to be fired when I reach 70% of the scroll, not 100%.
Now provide the response and nothing else.
| Provided your current check fires when scrolled to the page's bottom, you can try some basic arithmetic:

if ($(window).scrollTop() >= ($(document).height() - $(window).height())*0.7){
                          //where 0.7 corresponds to 70% --^

Make sure to add a check so you don't fire multiple simultaneous Ajax requests, if you haven't already. This is rather out of the scope of the question, but if you want an example of how to prevent multiple requests from being fired simultaneously: declare a global var, e.g. processing. Then incorporate it in your function:

if (processing) return false;
if ($(window).scrollTop() >= ($(document).height() - $(window).height())*0.7){
    processing = true; //sets a processing AJAX request flag
    $.post("url", '<params>', function(data){ //or $.ajax, $.get, $.load etc.
        //load the content to your div
        processing = false; //resets the ajax flag once the callback concludes
    });
}

That's a simple example of using a var to keep track of whether there is an active Ajax request for your scroll function, and it doesn't interfere with any other concurrent Ajax requests which you may have. Example: JSFiddle.

Edit: Please note that using a % to measure the document height might be a bad idea, considering that the document's height will increase each time you load something, making it trigger the Ajax request relatively farther from the bottom of the page (absolute-size wise). I'd recommend using a fixed-value offset to prevent that (200-700 or so):

if ($(window).scrollTop() >= $(document).height() - $(window).height() - 700){
                   // pixels offset from screen bottom --^

Example: JSFiddle.

Edit: To reproduce the issue in the first code with percentages, load 50 divs into it. When you load the next div, it'll add only 2% to the total document's height, meaning the next request will be triggered as soon as you scroll these 2% back to the 70% of the document's height.
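As a small aside (my own sketch, not part of the original answer), the trigger arithmetic can be pulled into a plain function so the threshold logic can be checked independently of jQuery; the function and parameter names here are illustrative assumptions:

```javascript
// True once the viewport has scrolled past the trigger point.
// offsetPx is the fixed distance (in pixels) from the document bottom
// at which new content should start loading.
function shouldLoadMore(scrollTop, documentHeight, windowHeight, offsetPx) {
  return scrollTop >= documentHeight - windowHeight - offsetPx;
}

// With a 2000px document, a 500px window and a 700px offset,
// loading triggers once scrollTop reaches 800 (2000 - 500 - 700).
console.log(shouldLoadMore(799, 2000, 500, 700)); // false
console.log(shouldLoadMore(800, 2000, 500, 700)); // true
```

In the scroll handler you would then call shouldLoadMore($(window).scrollTop(), $(document).height(), $(window).height(), 700), combined with a processing flag to avoid concurrent requests.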
In my fixed example, the defined bottom offset will load new content only when the user is within a defined absolute pixel range from the bottom of the screen. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/10662528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/571319/']} | jdg_80431 |
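As an aside to the approach above: the threshold arithmetic and the processing flag can be factored into plain functions, which makes the logic unit-testable without a DOM. This is only a sketch; the function names are my own, not jQuery API:

```javascript
// Returns true once the scroll position has passed `ratio` (e.g. 0.7 for 70%)
// of the scrollable range, mirroring the check in the answer above.
function shouldLoadMore(scrollTop, documentHeight, windowHeight, ratio) {
  return scrollTop >= (documentHeight - windowHeight) * ratio;
}

// Wraps a loader callback with the same "processing" guard as the answer:
// while a request is in flight, further scroll events are ignored.
// `load` receives a `done` callback to reset the flag when the request finishes.
function makeScrollLoader(ratio, load) {
  let processing = false;
  return function onScroll(scrollTop, documentHeight, windowHeight) {
    if (processing) return false;
    if (!shouldLoadMore(scrollTop, documentHeight, windowHeight, ratio)) return false;
    processing = true;
    load(function done() { processing = false; });
    return true;
  };
}
```

In a page you would call the returned function from the jQuery scroll handler, passing $(window).scrollTop(), $(document).height() and $(window).height(), and call done() from the Ajax callback.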
stackexchange | llm_judgeable_groundtruth_similarity | 6072389 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to log4j. Can anyone explain how to create my own Appender? That is, which classes and interfaces do I implement, and which methods do I override?
Now provide the response and nothing else.
| Update: the provided solution is valid for Log4J 1.x . If you're looking for 2.x versions, take a look at this article: How to create a custom appender in log4j2 You should extend the AppenderSkeleton class, which (quoting its javadoc) "provides the code for common functionality, such as support for threshold filtering and support for general filters." If you read the code of AppenderSkeleton, you'll see that it handles almost everything, leaving to you just: protected void append(LoggingEvent event) public void close() public boolean requiresLayout() The core method is append. Remember that you don't need to implement the filtering logic in it, because it is already implemented in doAppend, which in turn calls append. Here I made a (quite useless) class that stores the log entries in an ArrayList, just as a demo. public /*static*/ class MyAppender extends AppenderSkeleton { ArrayList<LoggingEvent> eventsList = new ArrayList<LoggingEvent>(); @Override protected void append(LoggingEvent event) { eventsList.add(event); } public void close() { } public boolean requiresLayout() { return false; }} Ok, let's test it: public static void main (String [] args) { Logger l = Logger.getLogger("test"); MyAppender app = new MyAppender(); l.addAppender(app); l.warn("first"); l.warn("second"); l.warn("third"); l.trace("fourth shouldn't be printed"); for (LoggingEvent le: app.eventsList) { System.out.println("***" + le.getMessage()); }} You should have "first", "second", "third" printed; the fourth message shouldn't be printed since the log level of the root logger is debug while the event level is trace. This proves that AppenderSkeleton implements "level management" correctly for us. So that definitely seems the way to go... now the question: why do you need a custom appender while there are many built-in ones that log to almost any destination? 
(btw a good place to start with log4j: http://logging.apache.org/log4j/1.2/manual.html ) | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6072389', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/726902/']} | jdg_80432 |
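The pattern in this answer (subclass a skeleton, implement a single emit-style hook, and let the framework handle level filtering) is not specific to Java. For comparison only, here is the same in-memory appender idea sketched with Python's standard logging module, whose equivalent extension point is logging.Handler.emit:

```python
import logging

class ListHandler(logging.Handler):
    """In-memory handler: the logging-module analogue of MyAppender above."""

    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        # Like append() in AppenderSkeleton, this only runs for records
        # that already passed the level/filter checks.
        self.records.append(record)

log = logging.getLogger("appender-demo")
log.setLevel(logging.WARNING)   # level filtering, as in the Java demo where trace was dropped
handler = ListHandler()
log.addHandler(handler)

log.warning("first")
log.warning("second")
log.warning("third")
log.debug("fourth shouldn't be recorded")  # filtered out by the WARNING level
```

As in the Java version, the handler subclass contains no filtering logic of its own; the framework decides which records reach emit().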
stackexchange | llm_judgeable_groundtruth_similarity | 19054723 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a long data frame that contains meteorological data from a mast. It contains observations ( data$value ) of different parameters (wind speed, direction, air temperature, etc., in data$param ) taken at the same time at different heights ( data$z ). I am trying to efficiently slice this data by $time , and then apply functions to all of the data collected. Usually functions are applied to a single $param at a time (i.e. I apply different functions to wind speed than I do to air temperature). Current approach My current method is to use data.frame and ddply . If I want to get all of the wind speed data, I run this: # find good data ----df <- data[((data$param == "wind speed") & !is.na(data$value)),] I then run my function on df using ddply() : df.tav <- ddply(df, .(time), function(x) { y <- data.frame(V1 = sum(x$value) + sum(x$z), V2 = sum(x$value) / sum(x$z)) return(y) }) Usually V1 and V2 are calls to other functions. These are just examples. I do need to run multiple functions on the same data though. Question My current approach is very slow. I have not benchmarked it, but it's slow enough that I can go get a coffee and come back before a year's worth of data has been processed. I have on the order of a hundred towers to process, each with a year of data and 10-12 heights, and so I am looking for something faster. 
Data sample data <- structure(list(time = structure(c(1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262304600, 1262305200, 1262305200, 1262305200, 1262305200, 1262305200, 1262305200, 1262305200), class = c("POSIXct", "POSIXt"), tzone = ""), z = c(0, 0, 0, 100, 100, 100, 120, 120, 120, 140, 140, 140, 160, 160, 160, 180, 180, 180, 200, 200, 200, 40, 40, 40, 50, 50, 50, 60, 60, 60, 80, 80, 80, 0, 0, 0, 100, 100, 100, 120), param = c("temperature", "humidity", "barometric pressure", "wind direction", "turbulence", "wind speed", "wind direction", "turbulence", "wind speed", "wind direction", "turbulence", "wind speed", "wind direction", "turbulence", "wind speed", "wind direction", "turbulence", "wind speed", "wind direction", "turbulence", "wind speed", "wind direction", "turbulence", "wind speed", "wind direction", "turbulence", "wind speed", "wind direction", "turbulence", "wind speed", "wind direction", "turbulence", "wind speed", "temperature", "barometric pressure", "humidity", "wind direction", "wind speed", "turbulence", "wind direction"), value = c(-2.5, 41, 816.9, 248.4, 0.11, 4.63, 249.8, 0.28, 4.37, 255.5, 0.32, 4.35, 252.4, 0.77, 5.08, 248.4, 0.65, 3.88, 313, 0.94, 6.35, 250.9, 0.1, 4.75, 253.3, 0.11, 4.68, 255.8, 0.1, 4.78, 254.9, 0.11, 4.7, -3.3, 816.9, 42, 253.2, 2.18, 0.27, 229.5)), .Names = c("time", "z", "param", "value"), row.names = c(NA, 40L), class = "data.frame")
Now provide the response and nothing else.
| Use data.table : library(data.table)dt = data.table(data)setkey(dt, param) # sort by param to look it up fastdt[J('wind speed')][!is.na(value), list(sum(value) + sum(z), sum(value)/sum(z)), by = time]# time V1 V2#1: 2009-12-31 18:10:00 1177.57 0.04209735#2: 2009-12-31 18:20:00 102.18 0.02180000 If you want to apply a different function for each param, here's a more uniform approach for that. # make dt smaller because I'm lazydt = dt[param %in% c('wind direction', 'wind speed')]# now let's start - create another data.table# that will have param and corresponding functionfns = data.table(p = c('wind direction', 'wind speed'), fn = c(quote(sum(value) + sum(z)), quote(sum(value) / sum(z))), key = 'p')fns p fn1: wind direction <call> # the fn column contains functions2: wind speed <call> # i.e. this is getting fancy!# now we can evaluate different functions for different params,# sliced by param and timedt[!is.na(value), {param; eval(fns[J(param)]$fn[[1]], .SD)}, by = list(param, time)]# param time V1#1: wind direction 2009-12-31 18:10:00 3.712400e+03#2: wind direction 2009-12-31 18:20:00 7.027000e+02#3: wind speed 2009-12-31 18:10:00 4.209735e-02#4: wind speed 2009-12-31 18:20:00 2.180000e-02 P.S. I think the fact that I have to use param in some way before eval for eval to work is a bug. UPDATE: As of version 1.8.11 this bug has been fixed and the following works: dt[!is.na(value), eval(fns[J(param)]$fn[[1]], .SD), by = list(param, time)] | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19054723', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2514568/']} | jdg_80433 |
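The core trick in this answer is a lookup table that maps each param to its own aggregation expression, applied per (param, time) group. For readers not using data.table, here is a language-neutral sketch of the same dispatch idea in plain Python (standard library only; the sample rows in the test are made up, not the poster's data):

```python
from collections import defaultdict

# Dispatch table: one aggregation function per parameter,
# mirroring the `fns` lookup data.table in the answer above.
FNS = {
    "wind direction": lambda values, zs: sum(values) + sum(zs),
    "wind speed": lambda values, zs: sum(values) / sum(zs),
}

def aggregate(rows):
    """rows: iterable of (time, z, param, value) tuples.
    Returns {(param, time): aggregated value}, skipping missing values
    and parameters without a registered function."""
    groups = defaultdict(lambda: ([], []))
    for time, z, param, value in rows:
        if value is None or param not in FNS:
            continue
        values, zs = groups[(param, time)]
        values.append(value)
        zs.append(z)
    return {(param, time): FNS[param](values, zs)
            for (param, time), (values, zs) in groups.items()}
```

The keyed lookup plus per-group evaluation is exactly what the data.table expression does, just without the indexing performance that makes data.table the right tool for a hundred towers.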
stackexchange | llm_judgeable_groundtruth_similarity | 32798793 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have refined the Navigation Drawer Activity project template of Android Studio, which uses Toolbar , v7.app.ActionBarDrawerToggle and NavigationView instead of the NavigationDrawerFragment (and layout/fragment_navigation_drawer.xml). It works perfectly. Then I went further: I put my Navigation Drawer project in immersive-sticky (full-screen) mode. @Overridepublic void onWindowFocusChanged(boolean hasFocus) { super.onWindowFocusChanged(hasFocus); if (hasFocus) { View decorationView = getWindow().getDecorView(); decorationView.setSystemUiVisibility( View.SYSTEM_UI_FLAG_LAYOUT_STABLE | View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION | View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN | View.SYSTEM_UI_FLAG_HIDE_NAVIGATION | View.SYSTEM_UI_FLAG_FULLSCREEN | View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY); }}@Overrideprotected void onCreate(Bundle savedInstanceState) { ... toolbar = (Toolbar) findViewById(R.id.toolbar); setSupportActionBar(toolbar); ActionBar actionBar = getSupportActionBar(); actionBar.setDisplayHomeAsUpEnabled(true); drawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout); drawerToggle = new ActionBarDrawerToggle( this, drawerLayout, R.string.navigation_drawer_open, /* "open drawer" description for accessibility */ R.string.navigation_drawer_close /* "close drawer" description for accessibility */ ) { @Override public void onDrawerClosed(View drawerView) { super.onDrawerClosed(drawerView); invalidateOptionsMenu(); // calls onPrepareOptionsMenu() } @Override public void onDrawerOpened(View drawerView) { super.onDrawerOpened(drawerView); invalidateOptionsMenu(); // calls onPrepareOptionsMenu() } }; drawerLayout.setDrawerListener(drawerToggle); navigationView = (NavigationView) findViewById(R.id.navigation_view); navigationView.setNavigationItemSelectedListener(this);}@Overrideprotected void onPostCreate(Bundle savedInstanceState) { super.onPostCreate(savedInstanceState); drawerToggle.syncState();}@Overridepublic void onConfigurationChanged(Configuration newConfig) { 
super.onConfigurationChanged(newConfig); drawerToggle.onConfigurationChanged(newConfig);}@Overridepublic boolean onOptionsItemSelected(MenuItem item) { if (drawerToggle.onOptionsItemSelected(item)) { return true; } ...} A problem has arisen. The bands of overlapping shadow effect on the NavigationView, which come from the status bar (on the top side) and the navigation bar (on the bottom side), still remain. How can I get rid of them? I reviewed the sources of v7.app.ActionBarDrawerToggle and NavigationView, but in vain. Updated: Thanks to @lcw_gg's advice, I have gotten rid of the status bar's shadow completely (while the navigation bar's shadow remains). The fix is to set the android:windowFullscreen attribute to true in the layout XML. But I want to do this in Java code. I found a way that is probably equivalent to the XML way: getWindow().addFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN); With this, you no longer need to set these two flags -- View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN and View.SYSTEM_UI_FLAG_FULLSCREEN -- on the decorationView . Still, I can't find a way to get rid of the navigation bar's shadow. I'm waiting for a solution.
Now provide the response and nothing else.
| At last, I made it. The solution is applying FLAG_LAYOUT_NO_LIMITS together with FLAG_FULLSCREEN to the android.view.Window object: getWindow().addFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN | WindowManager.LayoutParams.FLAG_LAYOUT_NO_LIMITS); This got rid of both of the shadows perfectly. lcw_gg's comment was a very useful clue to manipulating android.view.Window . Special thanks to him. Update for Android 11 Unfortunately, the problem returned when I updated my phone to Android 11 (API 30); the above solution isn't effective any more. An alternate solution is @AllanVeloso's answer using app:insetForeground="@null" I also found a new solution: simply getting rid of SYSTEM_UI_FLAG_LAYOUT_STABLE. That flag turned out to be the root cause of the shadow effects. In my research on the Google developer docs and StackOverflow, the flag is always described as being used together with SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN and/or SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION , but there is no exact explanation of SYSTEM_UI_FLAG_LAYOUT_STABLE itself other than: This means that the insets seen there will always represent the worst case that the application can expect as a continuous state. So I had been setting it together with the other two automatically, and that was the root of this problem. Removing SYSTEM_UI_FLAG_LAYOUT_STABLE alone resolved the problem (even on Android 10 or earlier). Deprecation of the immersive flags As you know, the View.SYSTEM_UI_FLAG_* constants are deprecated from Android 11 (API 30). 
You would use something like the following instead: if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) { WindowInsetsController windowInsetsController = decorView.getWindowInsetsController(); windowInsetsController.setSystemBarsBehavior( WindowInsetsController.BEHAVIOR_SHOW_TRANSIENT_BARS_BY_SWIPE ); windowInsetsController.hide( WindowInsets.Type.statusBars() | WindowInsets.Type.navigationBars() ); window.setDecorFitsSystemWindows(false);} else { (...)} As Google engineers (like Chris Banes ) have explained, Window.setDecorFitsSystemWindows(false) is almost equivalent to SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN and SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION combined with SYSTEM_UI_FLAG_LAYOUT_STABLE . With it there is no problem with shadow effects, at least. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32798793', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3501958/']} | jdg_80434 |
stackexchange | llm_judgeable_groundtruth_similarity | 1329867 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Since $\dim_{\mathbb{Z}_2} K = \deg f(x)=5$, $K$ has $2^5=32$ elements. So constructing the field $K$, I get: \begin{array}{|c|c|c|}\hline \text{polynomial} & \text{power of $x$} & \text{logarithm} \\\hline 0 & 0 & -\infty \\\hline 1 & 1 & 0 \\\hline x & x & 1 \\\hline x^2 & x^2 & 2 \\\hline x^3 & x^3 & 3 \\\hline x^4 & x^4 & 4 \\\hline x^4+1 & x^5 & 5 \\\hline x^4+x+1 & x^6 & 6 \\\hline x^4+x^2+x+1 & x^7 & 7 \\\hline x^4+x^3+x^2+x+1 & x^8 & 8 \\\hline x^3+x^2+x+1 & x^9 & 9 \\\hline x^4+x^3+x^2+x & x^{10} & 10 \\\hline x^3+x^2+1 & x^{11} & 11 \\\hline x^4+x^3+x & x^{12} & 12 \\\hline x^2+1 & x^{13} & 13 \\\hline x^3+x & x^{14} & 14 \\\hline x^4+x^2 & x^{15} & 15 \\\hline x^4+x^3+1 & x^{16} & 16 \\\hline x+1 & x^{17} & 17 \\\hline x^2+x & x^{18} & 18 \\\hline x^3+x^2 & x^{19} & 19 \\\hline x^4+x^3 & x^{20} & 20 \\\hline\end{array} Have I constructed the field correctly? I'm sure the claim that $K$ has $32$ elements is true, yet when I actually construct the field, I only get $22$ elements as $x^{21} = x^0 = 1$.
Now provide the response and nothing else.
| The reason you have a problem is that $x^5+x^4+1$ is not irreducible in $\mathbb{Z}_2[x]$ . Thus, if $R=\mathbb{Z}_2[x]/(x^5+x^4+1)$ , then $R^\times$ is not (as you would have expected if the polynomial were irreducible) a cyclic group of order $31$ , so the fact that $x^{21}\equiv 1\bmod (x^5+x^4+1)$ is not surprising at all. In fact, $x^5+x^4+1=(x^2+x+1)(x^3+x+1)$ in $\mathbb{Z}_2[x]$ , so that $$\mathbb{Z}_2[x]/(x^5+x^4+1)\;\cong\;\mathbb{Z}_2[x]/(x^2+x+1)\oplus \mathbb{Z}_2[x]/(x^3+x+1)\;\cong\;\mathbb{F}_4\oplus\mathbb{F}_8$$ so that for unit groups, $$\mathbb{Z}_2[x]/(x^5+x^4+1)^\times\;\cong\;\mathbb{F}_4^\times\oplus\mathbb{F}_8^\times\;\cong\;\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/7\mathbb{Z}\;\cong\;\mathbb{Z}/21\mathbb{Z}$$ Again, the ring $R=\mathbb{Z}_2[x]/(x^5+x^4+1)$ , although having $32$ elements, is not a field , and therefore the unit group $R^\times$ is smaller than $R\setminus\{0\}$ . Even though you (completely coincidentally) chose a polynomial $x^5+x^4+1$ such that $x\in R^\times$ , and such that $R^\times$ was cyclic, and such that $x$ was in fact a generator for $R^\times$ , it could not help the fact that since $R^\times$ has $21$ elements and $R\setminus\{0\}$ has $31$ elements, you cannot get all the elements of $R\setminus\{0\}$ by successive powers of $x$ . Now, as to how to list elements. In general, if $f\in \mathbb{Z}_p[x]$ is any polynomial of degree $n$ , whether it is irreducible or not , then the $p^n$ elements of $\mathbb{Z}_p[x]/(f)$ are easily produced as the cosets $$a_0+a_1x+\cdots+a_{n-1}x^{n-1}+(f),\qquad a_0,a_1,\ldots,a_{n-1}\in\mathbb{Z}_p$$ Of course, $f$ is irreducible $\iff$ $\mathbb{Z}_p[x]/(f)$ is a field, but the elements of $\mathbb{Z}_p[x]/(f)$ are always simple to write down. It's the choice of $f$ that determines the additive and multiplicative structure. 
| {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1329867', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/87371/']} | jdg_80435 |
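These claims are easy to verify by machine. Encoding a polynomial over $\mathbb{Z}_2$ as a bitmask (bit $i$ holds the coefficient of $x^i$), a few lines of Python confirm both the factorization $x^5+x^4+1=(x^2+x+1)(x^3+x+1)$ and that the coset of $x$ has multiplicative order $21$ in $R^\times$ (the helper names below are my own):

```python
def gf2_mul(a, b):
    """Carry-less product of two GF(2)[x] polynomials encoded as bitmasks."""
    result = 0
    while b:
        if b & 1:
            result ^= a   # addition of coefficients mod 2 is XOR
        a <<= 1
        b >>= 1
    return result

def gf2_mod(a, m):
    """Remainder of a modulo m in GF(2)[x] (bitmask encoding)."""
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def order_of_x(m):
    """Multiplicative order of the coset of x in GF(2)[x]/(m)."""
    power, n = 0b10, 1          # start at x^1
    while power != 1:
        power = gf2_mod(power << 1, m)  # multiply by x and reduce
        n += 1
    return n
```

With m = 0b110001 (that is, $x^5+x^4+1$) this reproduces the poster's table, e.g. $x^5 \mapsto x^4+1$, and returns order $21$, while an irreducible (in fact primitive) choice such as $x^5+x^2+1$ gives order $31$.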
stackexchange | llm_judgeable_groundtruth_similarity | 65249158 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In template pipelines you can't place a trigger statement such as trigger: none (which Microsoft's docs specify as the way to disable the CI trigger), so I wonder: how do you prevent these pipelines from being executed every time you update them or any other YAML file in the same branch?
Now provide the response and nothing else.
| So, in the end, in a template pipeline you can't state something like trigger: none (to allow only manual triggering), and you cannot specify stages or jobs (only steps are allowed), so you can't define any condition at the job or stage level to prevent pipeline execution. You do have the option to disable the CI trigger by going to the Triggers section for the template pipeline and selecting the following: I don't much like this option, because it means having pipeline configuration that is not captured in the YAML pipeline definition, but I found no other way to stop the template pipeline from being triggered every time something (including the pipeline itself) is updated in its branch. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/65249158', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5014665/']} | jdg_80436 |
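For concreteness, here is a minimal YAML sketch of the related, supported case: when the template is consumed by a regular pipeline, trigger: none lives in the consuming pipeline, never in the template file, which may contain only steps. File names and paths here are illustrative:

```yaml
# azure-pipelines.yml (the consuming pipeline definition)
trigger: none        # manual runs only; valid here, but not inside a template file

steps:
  - template: templates/build-steps.yml   # illustrative path to the steps template

# templates/build-steps.yml would then contain only steps, e.g.:
# steps:
#   - script: echo "building"
```

This doesn't change the answer above: if the template file itself is registered as a pipeline, the UI override in the Triggers section remains the only way found to disable its CI trigger.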
stackexchange | llm_judgeable_groundtruth_similarity | 10447189 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Which type of ID is better for querying and for use as a foreign key in MySQL: 1) a VARCHAR ID with size 20, or 2) an INT ID?
Now provide the response and nothing else.
| An int is usually better, because it gives better performance. If your item has a natural key which is a varchar, you could use that and it will work. But you will get a performance improvement (and some other benefits) by adding a surrogate key that is an integer. You can add an AUTO_INCREMENT column for this purpose. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10447189', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_80437 |
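A common compromise is to keep the natural varchar value as a UNIQUE column while joining through an integer surrogate key. The sketch below illustrates that schema using Python's built-in sqlite3 so it runs anywhere; note that MySQL would declare the key as INT ... AUTO_INCREMENT rather than SQLite's INTEGER PRIMARY KEY, and the table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE item (
        id   INTEGER PRIMARY KEY,         -- integer surrogate key (AUTO_INCREMENT in MySQL)
        code VARCHAR(20) NOT NULL UNIQUE  -- natural varchar key, kept for lookups
    );
    CREATE TABLE price (
        item_id INTEGER NOT NULL REFERENCES item(id),  -- FK on the int, not the varchar
        amount  REAL NOT NULL
    );
""")

conn.execute("INSERT INTO item (code) VALUES ('WIDGET-0001')")
item_id = conn.execute(
    "SELECT id FROM item WHERE code = 'WIDGET-0001'").fetchone()[0]
conn.execute("INSERT INTO price (item_id, amount) VALUES (?, ?)", (item_id, 9.99))

# The join compares small integers; the 20-byte string is only
# touched for the initial lookup by natural key.
row = conn.execute("""
    SELECT item.code, price.amount
    FROM price JOIN item ON item.id = price.item_id
""").fetchone()
```

This keeps the human-readable identifier queryable while every foreign key and join stays on the compact integer.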
stackexchange | llm_judgeable_groundtruth_similarity | 18789468 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm quite new to Zend and unit testing in general. I have come up with a small application that uses Zend Framework 2 and Doctrine. It has only one model and controller and I want to run some unit tests on them. Here's what I have so far: Base doctrine 'entity' class, containing methods I want to use in all of my entities: <?php/** * Base entity class containing some functionality that will be used by all * entities */namespace Perceptive\Database;use Zend\Validator\ValidatorChain;class Entity{ //An array of validators for various fields in this entity protected $validators; /** * Returns the properties of this object as an array for ease of use. Will * return only properties with the ORM\Column annotation as this way we know * for sure that it is a column with data associated, and won't pick up any * other properties. * @return array */ public function toArray(){ //Create an annotation reader so we can read annotations $reader = new \Doctrine\Common\Annotations\AnnotationReader(); //Create a reflection class and retrieve the properties $reflClass = new \ReflectionClass($this); $properties = $reflClass->getProperties(); //Create an array in which to store the data $array = array(); //Loop through each property. Get the annotations for each property //and add to the array to return, ONLY if it contains an ORM\Column //annotation. foreach($properties as $property){ $annotations = $reader->getPropertyAnnotations($property); foreach($annotations as $annotation){ if($annotation instanceof \Doctrine\ORM\Mapping\Column){ $array[$property->name] = $this->{$property->name}; } } } //Finally, return the data array to the user return $array; } /** * Updates all of the values in this entity from an array. If any property * does not exist a ReflectionException will be thrown. 
* @param array $data * @return \Perceptive\Database\Entity */ public function fromArray($data){ //Create an annotation reader so we can read annotations $reader = new \Doctrine\Common\Annotations\AnnotationReader(); //Create a reflection class and retrieve the properties $reflClass = new \ReflectionClass($this); //Loop through each element in the supplied array foreach($data as $key=>$value){ //Attempt to get at the property - if the property doesn't exist an //exception will be thrown here. $property = $reflClass->getProperty($key); //Access the property's annotations $annotations = $reader->getPropertyAnnotations($property); //Loop through all annotations to see if this is actually a valid column //to update. $isColumn = false; foreach($annotations as $annotation){ if($annotation instanceof \Doctrine\ORM\Mapping\Column){ $isColumn = true; } } //If it is a column then update it using it's setter function. Otherwise, //throw an exception. if($isColumn===true){ $func = 'set'.ucfirst($property->getName()); $this->$func($data[$property->getName()]); }else{ throw new \Exception('You cannot update the value of a non-column using fromArray.'); } } //return this object to facilitate a 'fluent' interface. return $this; } /** * Validates a field against an array of validators. Returns true if the value is * valid or an error string if not. * @param string $fieldName The name of the field to validate. 
This is only used when constructing the error string * @param mixed $value * @param array $validators * @return boolean|string */ protected function setField($fieldName, $value){ //Create a validator chain $validatorChain = new ValidatorChain(); $validators = $this->getValidators(); //Try to retrieve the validators for this field if(array_key_exists($fieldName, $this->validators)){ $validators = $this->validators[$fieldName]; }else{ $validators = array(); } //Add all validators to the chain foreach($validators as $validator){ $validatorChain->attach($validator); } //Check if the value is valid according to the validators. Return true if so, //or an error string if not. if($validatorChain->isValid($value)){ $this->{$fieldName} = $value; return $this; }else{ $err = 'The '.$fieldName.' field was not valid: '.implode(',',$validatorChain->getMessages()); throw new \Exception($err); } }} My 'config' entity, which represents a one-row table containing some configuration options: <?php/** * @todo: add a base entity class which handles validation via annotations * and includes toArray function. Also needs to get/set using __get and __set * magic methods. Potentially add a fromArray method? 
*/namespace Application\Entity;use Doctrine\ORM\Mapping as ORM;use Zend\Validator;use Zend\I18n\Validator as I18nValidator;use Perceptive\Database\Entity;/** * @ORM\Entity * @ORM\HasLifecycleCallbacks */class Config extends Entity{ /** * @ORM\Id * @ORM\Column(type="integer") */ protected $minLengthUserId; /** * @ORM\Id * @ORM\Column(type="integer") */ protected $minLengthUserName; /** * @ORM\Id * @ORM\Column(type="integer") */ protected $minLengthUserPassword; /** * @ORM\Id * @ORM\Column(type="integer") */ protected $daysPasswordReuse; /** * @ORM\Id * @ORM\Column(type="boolean") */ protected $passwordLettersAndNumbers; /** * @ORM\Id * @ORM\Column(type="boolean") */ protected $passwordUpperLower; /** * @ORM\Id * @ORM\Column(type="integer") */ protected $maxFailedLogins; /** * @ORM\Id * @ORM\Column(type="integer") */ protected $passwordValidity; /** * @ORM\Id * @ORM\Column(type="integer") */ protected $passwordExpiryDays; /** * @ORM\Id * @ORM\Column(type="integer") */ protected $timeout; // getters/setters /** * Get the minimum length of the user ID * @return int */ public function getMinLengthUserId(){ return $this->minLengthUserId; } /** * Set the minmum length of the user ID * @param int $minLengthUserId * @return \Application\Entity\Config This object */ public function setMinLengthUserId($minLengthUserId){ //Use the setField function, which checks whether the field is valid, //to set the value. return $this->setField('minLengthUserId', $minLengthUserId); } /** * Get the minimum length of the user name * @return int */ public function getminLengthUserName(){ return $this->minLengthUserName; } /** * Set the minimum length of the user name * @param int $minLengthUserName * @return \Application\Entity\Config */ public function setMinLengthUserName($minLengthUserName){ //Use the setField function, which checks whether the field is valid, //to set the value. 
return $this->setField('minLengthUserName', $minLengthUserName); } /** * Get the minimum length of the user password * @return int */ public function getMinLengthUserPassword(){ return $this->minLengthUserPassword; } /** * Set the minimum length of the user password * @param int $minLengthUserPassword * @return \Application\Entity\Config */ public function setMinLengthUserPassword($minLengthUserPassword){ //Use the setField function, which checks whether the field is valid, //to set the value. return $this->setField('minLengthUserPassword', $minLengthUserPassword); } /** * Get the number of days before passwords can be reused * @return int */ public function getDaysPasswordReuse(){ return $this->daysPasswordReuse; } /** * Set the number of days before passwords can be reused * @param int $daysPasswordReuse * @return \Application\Entity\Config */ public function setDaysPasswordReuse($daysPasswordReuse){ //Use the setField function, which checks whether the field is valid, //to set the value. return $this->setField('daysPasswordReuse', $daysPasswordReuse); } /** * Get whether the passwords must contain letters and numbers * @return boolean */ public function getPasswordLettersAndNumbers(){ return $this->passwordLettersAndNumbers; } /** * Set whether passwords must contain letters and numbers * @param int $passwordLettersAndNumbers * @return \Application\Entity\Config */ public function setPasswordLettersAndNumbers($passwordLettersAndNumbers){ //Use the setField function, which checks whether the field is valid, //to set the value. 
return $this->setField('passwordLettersAndNumbers', $passwordLettersAndNumbers); } /** * Get whether password must contain upper and lower case characters * @return type */ public function getPasswordUpperLower(){ return $this->passwordUpperLower; } /** * Set whether password must contain upper and lower case characters * @param type $passwordUpperLower * @return \Application\Entity\Config */ public function setPasswordUpperLower($passwordUpperLower){ //Use the setField function, which checks whether the field is valid, //to set the value. return $this->setField('passwordUpperLower', $passwordUpperLower); } /** * Get the number of failed logins before user is locked out * @return int */ public function getMaxFailedLogins(){ return $this->maxFailedLogins; } /** * Set the number of failed logins before user is locked out * @param int $maxFailedLogins * @return \Application\Entity\Config */ public function setMaxFailedLogins($maxFailedLogins){ //Use the setField function, which checks whether the field is valid, //to set the value. return $this->setField('maxFailedLogins', $maxFailedLogins); } /** * Get the password validity period in days * @return int */ public function getPasswordValidity(){ return $this->passwordValidity; } /** * Set the password validity in days * @param int $passwordValidity * @return \Application\Entity\Config */ public function setPasswordValidity($passwordValidity){ //Use the setField function, which checks whether the field is valid, //to set the value. 
return $this->setField('passwordValidity', $passwordValidity); } /** * Get the number of days prior to expiry that the user starts getting * warning messages * @return int */ public function getPasswordExpiryDays(){ return $this->passwordExpiryDays; } /** * Get the number of days prior to expiry that the user starts getting * warning messages * @param int $passwordExpiryDays * @return \Application\Entity\Config */ public function setPasswordExpiryDays($passwordExpiryDays){ //Use the setField function, which checks whether the field is valid, //to set the value. return $this->setField('passwordExpiryDays', $passwordExpiryDays); } /** * Get the timeout period of the application * @return int */ public function getTimeout(){ return $this->timeout; } /** * Get the timeout period of the application * @param int $timeout * @return \Application\Entity\Config */ public function setTimeout($timeout){ //Use the setField function, which checks whether the field is valid, //to set the value. return $this->setField('timeout', $timeout); } /** * Returns a list of validators for each column. 
These validators are checked * in the class' setField method, which is inherited from the Perceptive\Database\Entity class * @return array */ public function getValidators(){ //If the validators array hasn't been initialised, initialise it if(!isset($this->validators)){ $validators = array( 'minLengthUserId' => array( new I18nValidator\Int(), new Validator\GreaterThan(1), ), 'minLengthUserName' => array( new I18nValidator\Int(), new Validator\GreaterThan(2), ), 'minLengthUserPassword' => array( new I18nValidator\Int(), new Validator\GreaterThan(3), ), 'daysPasswordReuse' => array( new I18nValidator\Int(), new Validator\GreaterThan(-1), ), 'passwordLettersAndNumbers' => array( new I18nValidator\Int(), new Validator\GreaterThan(-1), new Validator\LessThan(2), ), 'passwordUpperLower' => array( new I18nValidator\Int(), new Validator\GreaterThan(-1), new Validator\LessThan(2), ), 'maxFailedLogins' => array( new I18nValidator\Int(), new Validator\GreaterThan(0), ), 'passwordValidity' => array( new I18nValidator\Int(), new Validator\GreaterThan(1), ), 'passwordExpiryDays' => array( new I18nValidator\Int(), new Validator\GreaterThan(1), ), 'timeout' => array( new I18nValidator\Int(), new Validator\GreaterThan(0), ) ); $this->validators = $validators; } //Return the list of validators return $this->validators; } /** * @todo: add a lifecyle event which validates before persisting the entity. * This way there is no chance of invalid values being saved to the database. * This should probably be implemented in the parent class so all entities know * to validate. 
*/} And my controller, which can read from and write to the entity: <?php/** * A restful controller that retrieves and updates configuration information */namespace Application\Controller;use Zend\Mvc\Controller\AbstractRestfulController;use Zend\View\Model\JsonModel;class ConfigController extends AbstractRestfulController{ /** * The doctrine EntityManager for use with database operations * @var \Doctrine\ORM\EntityManager */ protected $em; /** * Constructor function manages dependencies * @param \Doctrine\ORM\EntityManager $em */ public function __construct(\Doctrine\ORM\EntityManager $em){ $this->em = $em; } /** * Retrieves the configuration from the database */ public function getList(){ //locate the doctrine entity manager $em = $this->em; //there should only ever be one row in the configuration table, so I use findAll $config = $em->getRepository("\Application\Entity\Config")->findAll(); //return a JsonModel to the user. I use my toArray function to convert the doctrine //entity into an array - the JsonModel can't handle a doctrine entity itself. return new JsonModel(array( 'data' => $config[0]->toArray(), )); } /** * Updates the configuration */ public function replaceList($data){ //locate the doctrine entity manager $em = $this->em; //there should only ever be one row in the configuration table, so I use findAll $config = $em->getRepository("\Application\Entity\Config")->findAll(); //use the entity's fromArray function to update the data $config[0]->fromArray($data); //save the entity to the database $em->persist($config[0]); $em->flush(); //return a JsonModel to the user. I use my toArray function to convert the doctrine //entity into an array - the JsonModel can't handle a doctrine entity itself. 
        return new JsonModel(array(
            'data' => $config[0]->toArray(),
        ));
    }
}

Because of character limits I was unable to paste in my unit tests, but here are links to my unit tests so far:

For the entity: https://github.com/hputus/config-app/blob/master/module/Application/test/ApplicationTest/Entity/ConfigTest.php

For the controller: https://github.com/hputus/config-app/blob/master/module/Application/test/ApplicationTest/Controller/ConfigControllerTest.php

Some questions:

- Am I doing anything obviously wrong here?
- In the tests for the entity, I am repeating the same tests for many different fields - is there a way to minimise this? Like have a standard battery of tests to run on integer columns, for instance?
- In the controller I am trying to 'mock up' Doctrine's entity manager so that changes aren't really saved into the database - am I doing this properly?
- Is there anything else in the controller which I should test?

Thanks in advance!
Now provide the response and nothing else.
| While your code appears to be solid enough, it presents a couple of design oversights.

First of all, Doctrine advises treating entities like simple, dumb value objects, and states that the data they hold is always assumed to be valid. This means that any business logic, like hydration, filtering and validation, should be moved outside entities to a separate layer.

Speaking of hydration, rather than implementing fromArray and toArray methods yourself, you could use the supplied DoctrineModule\Stdlib\Hydrator\DoctrineObject hydrator, which can also blend flawlessly with Zend\InputFilter, to handle filtering and validation. This would make entity testing much less verbose, and arguably not so needed, since you would test the filter separately.

Another important suggestion coming from the Doctrine devs is to not inject an ObjectManager directly inside controllers. This is for encapsulation purposes: it is desirable to hide the implementation details of your persistence layer from the Controller and, again, expose only an intermediate layer.

In your case, all this could be done by having a ConfigService class, designed by contract, which will only provide the methods you really need (i.e. findAll(), persist() and other handy proxies), and will hide the dependencies that are not strictly needed by the controller, like the EntityManager, input filters and the like. It will also make mocking easier.

This way, if one day you wanted to make some changes in your persistence layer, you would just have to change how your entity service implements its contract: think about adding a custom cache adapter, or using Doctrine's ODM rather than the ORM, or even not using Doctrine at all.

Other than that, your unit testing approach looks fine.

TL;DR

- You should not embed business logic inside Doctrine entities.
- You should use hydrators together with input filters.
- You should not inject the EntityManager inside controllers.
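To make the ConfigService suggestion concrete, here is a minimal sketch of such an intermediate service layer. This is an illustration only, not code from the question: the namespace, class name and method names are assumptions, and the controller would receive this service instead of the EntityManager.

```php
<?php
namespace Application\Service;

/**
 * Illustrative sketch only: a thin service that hides the persistence
 * layer from the controller. All names here are assumptions.
 */
class ConfigService
{
    protected $em;

    public function __construct(\Doctrine\ORM\EntityManager $em)
    {
        $this->em = $em;
    }

    /**
     * Proxy for the single-row config lookup the controller currently performs.
     */
    public function getConfig()
    {
        $config = $this->em->getRepository('\Application\Entity\Config')->findAll();
        return $config[0];
    }

    /**
     * Proxy for persist + flush, so the controller never sees the EntityManager.
     */
    public function saveConfig($config)
    {
        $this->em->persist($config);
        $this->em->flush();
    }
}
```

The controller would then depend only on this small contract, which also makes it trivial to swap in a mock for unit tests.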
An intermediate layer would help implement these variations while preserving the decoupling of Model and Controller.

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18789468', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1578653/']} | jdg_80438 |
stackexchange | llm_judgeable_groundtruth_similarity | 55344930 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to make a Bootstrap-Vue modal fullscreen. Which CSS classes should I override? I want to do this in a scoped style in the component that will use this modal.
Now provide the response and nothing else.
| You want to edit .modal-dialog and force the dimensions. Example CSS:

    .modal-dialog {
        max-width: 100%;
        margin: 0;
        top: 0;
        bottom: 0;
        left: 0;
        right: 0;
        height: 100vh;
        display: flex;
    }

Additionally, to get this working in Vue, you can try adding a class to b-modal and trying something like this:

    <div>
        <b-button v-b-modal.modal1>Launch demo modal</b-button>
        <!-- Modal Component -->
        <b-modal class="test-modal" id="modal1" title="BootstrapVue">
            <p class="my-4">Hello from modal!</p>
        </b-modal>
    </div>

CSS:

    .test-modal .modal-dialog {
        max-width: 100%;
        margin: 0;
        top: 0;
        bottom: 0;
        left: 0;
        right: 0;
        height: 100vh;
        display: flex;
        position: fixed;
        z-index: 100000;
    }

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55344930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4096982/']} | jdg_80439 |
stackexchange | llm_judgeable_groundtruth_similarity | 26455924 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is probably a simple fix, but I'm new to this.

    NSString *barImageURLString = barDic[@"image_url"];
    NSURL *barImageURL = [NSURL URLWithString:barImageURLString];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        NSData *imageData = [NSData dataWithContentsOfURL:barImageURL];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Update the UI
            cell.barImageView.image = [UIImage imageWithData:imageData];
        });
    });

    NSString *ratingImageURLString = barDic[@"rating_img_url"];
    NSURL *ratingImageURL = [NSURL URLWithString:ratingImageURLString];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        NSData *imageData = [NSData dataWithContentsOfURL:ratingImageURL];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Update the UI
            cell.ratingImage.image = [UIImage imageWithData:imageData];
        });
    });

Not sure how I would go about that with the dispatch_async in there.
Now provide the response and nothing else.
| UPDATED ON Aug 2019

WITH THE MATERIAL COMPONENTS FOR ANDROID LIBRARY:

With the new Material Components for Android library you can use the new com.google.android.material.dialog.MaterialAlertDialogBuilder class, which extends the existing androidx.appcompat.AlertDialog.Builder class and provides support for the latest Material Design specifications.

Just use something like this:

    new MaterialAlertDialogBuilder(context)
            .setTitle("Dialog")
            .setMessage("Lorem ipsum dolor ....")
            .setPositiveButton("Ok", /* listener = */ null)
            .setNegativeButton("Cancel", /* listener = */ null)
            .show();

You can customize the colors by extending the ThemeOverlay.MaterialComponents.MaterialAlertDialog style:

    <style name="CustomMaterialDialog" parent="@style/ThemeOverlay.MaterialComponents.MaterialAlertDialog">
        <!-- Background Color-->
        <item name="android:background">#006db3</item>
        <!-- Text Color for title and message -->
        <item name="colorOnSurface">@color/secondaryColor</item>
        <!-- Text Color for buttons -->
        <item name="colorPrimary">@color/white</item>
        ....
    </style>

To apply your custom style just use the constructor:

    new MaterialAlertDialogBuilder(context, R.style.CustomMaterialDialog)

To customize the buttons, the title and the body text, check this post for more details.

You can also change the style globally in your app theme:

    <!-- Base application theme. -->
    <style name="AppTheme" parent="Theme.MaterialComponents.Light">
        ...
        <item name="materialAlertDialogTheme">@style/CustomMaterialDialog</item>
    </style>

WITH SUPPORT LIBRARY AND APPCOMPAT THEME:

With the new AppCompat v22.1 you can use the new android.support.v7.app.AlertDialog.

Just use code like this:

    import android.support.v7.app.AlertDialog;

    AlertDialog.Builder builder = new AlertDialog.Builder(this, R.style.AppCompatAlertDialogStyle);
    builder.setTitle("Dialog");
    builder.setMessage("Lorem ipsum dolor ....");
    builder.setPositiveButton("OK", null);
    builder.setNegativeButton("Cancel", null);
    builder.show();

And use a style like this:

    <style name="AppCompatAlertDialogStyle" parent="Theme.AppCompat.Light.Dialog.Alert">
        <item name="colorAccent">#FFCC00</item>
        <item name="android:textColorPrimary">#FFFFFF</item>
        <item name="android:background">#5fa3d0</item>
    </style>

Otherwise you can define it in your current theme:

    <style name="AppTheme" parent="Theme.AppCompat.Light">
        <!-- your style -->
        <item name="alertDialogTheme">@style/AppCompatAlertDialogStyle</item>
    </style>

and then in your code:

    import android.support.v7.app.AlertDialog;

    AlertDialog.Builder builder = new AlertDialog.Builder(this);

Here is the AlertDialog on KitKat:

| {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/26455924', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3669730/']} | jdg_80440 |